OpenShift - Resolve Insufficient cpu


Let's say you are attempting to deploy a pod on OpenShift and an event like the following is returned. Notice in this example that 2 nodes have insufficient CPU.

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  28s   default-scheduler  0/13 nodes are available: 2 Insufficient cpu, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 8 node(s) didn't match Pod's node affinity/selector. preemption: 0/13 nodes are available: 11 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
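
If you did not capture these events when the pod was created, they can usually be pulled up again with oc describe pod; the scheduling events appear near the bottom of the output. Here my-pod-kzz2w is just the example pod used later in this article.

~]$ oc describe pod my-pod-kzz2w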

 

Almost always, pods are deployed to worker nodes, so it is probably the worker nodes that have insufficient CPU. The oc get nodes command can be used to list your nodes.

~]$ oc get nodes
NAME                  STATUS   ROLES            AGE    VERSION
my-node-edge-lm6wz     Ready    infra,worker     519d   v1.23.5+012e945
my-node-edge-pmlls     Ready    infra,worker     519d   v1.23.5+012e945
my-node-infra-c4v5h    Ready    infra,worker     519d   v1.23.5+012e945
my-node-infra-mc8rc    Ready    infra,worker     519d   v1.23.5+012e945
my-node-infra-p9cjv    Ready    infra,worker     519d   v1.23.5+012e945
my-node-master-0       Ready    master           522d   v1.23.5+012e945
my-node-master-1       Ready    master           522d   v1.23.5+012e945
my-node-master-2       Ready    master           522d   v1.23.5+012e945
my-node-worker-lk5vm   Ready    compute,worker   61d    v1.23.5+012e945
my-node-worker-pj4r4   Ready    compute,worker   61d    v1.23.5+012e945
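
Since pods typically land on the worker nodes, it can help to list only the nodes that carry the worker role. Assuming the usual node-role.kubernetes.io/worker label is in place, a selector like this should work. Note that in this example cluster the infra nodes also carry the worker role, so they will be listed as well.

~]$ oc get nodes --selector node-role.kubernetes.io/worker=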

 

You can view the node YAML to determine the node's CPU capacity and how much of that CPU is allocatable to pods.

~]$ oc get node my-node-worker-lk5vm --output yaml
status:
  allocatable:
    cpu: 3500m
    ephemeral-storage: "114396791822"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 62447948Ki
    pods: "250"
  capacity:
    cpu: "4"
    ephemeral-storage: 125293548Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 65696076Ki
    pods: "250"
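
Rather than reading the full YAML of each node, something like the following custom-columns query can be used to compare allocatable CPU across all of the nodes at a glance. And oc describe node shows, in the "Allocated resources" section, how much CPU the pods already on the node have requested, which is what the scheduler compares against the allocatable CPU.

~]$ oc get nodes --output custom-columns=NAME:.metadata.name,ALLOCATABLE_CPU:.status.allocatable.cpu,CAPACITY_CPU:.status.capacity.cpu

~]$ oc describe node my-node-worker-lk5vm | grep --after-context=10 "Allocated resources"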


And you can view the pod YAML to see if the containers in the pod have CPU requests/limits. In this example, the pod has one app container and two init containers, each requesting 500m (half a CPU core). That's quite a bit of CPU to be requesting.

~]$ oc get pod my-pod-kzz2w --output yaml
spec:
  containers:
  - name: my-container
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
  initContainers:
  - name: my-first-init-container
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
  - name: my-second-init-container
    resources:
      requests:
        cpu: 500m        
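
As an aside, when scheduling, Kubernetes uses the pod's effective CPU request, which is the larger of the sum of the regular containers' requests and the largest single init container request. Something like this jsonpath query can be used to print just the CPU requests of the containers and init containers in the pod.

~]$ oc get pod my-pod-kzz2w --output jsonpath='{.spec.containers[*].resources.requests.cpu}{"\n"}{.spec.initContainers[*].resources.requests.cpu}{"\n"}'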

 

If the pod is managed by a deployment, you can try reducing the CPU requests in the deployment to see if the pod can then be scheduled onto one of the nodes. Of course, the pod may not function properly if it actually needs 500m of CPU, but you can at least try this change and see what the result is.
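
As a sketch, assuming the pod is managed by a deployment named my-deployment (a made up name for this example) and the container is named my-container, the oc set resources command can be used to lower the CPU request without editing the YAML by hand. This updates the deployment's pod template, so a new pod will be rolled out with the lower request. The init containers' requests may still need to be changed by editing the deployment, for example with oc edit deployment my-deployment.

~]$ oc set resources deployment my-deployment --containers=my-container --requests=cpu=250m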

 

 

 



