
Let's say you are attempting to deploy a pod on OpenShift, the pod is stuck in Pending, and events like this are being returned. Notice in this example that 2 nodes have insufficient CPU.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 28s default-scheduler 0/13 nodes are available: 2 Insufficient cpu, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 8 node(s) didn't match Pod's node affinity/selector. preemption: 0/13 nodes are available: 11 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
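These events are attached to the pending pod, so if you are not already looking at them, describing the pod should display them. Here, my-pod-kzz2w and my-project are hypothetical pod and namespace names used for illustration.
~]$ oc describe pod my-pod-kzz2w --namespace my-project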
Almost always, application pods are deployed to worker nodes, so it's probably the worker nodes that have insufficient CPU. The oc get nodes command can be used to list your nodes.
~]$ oc get nodes
NAME STATUS ROLES AGE VERSION
my-node-edge-lm6wz Ready infra,worker 519d v1.23.5+012e945
my-node-edge-pmlls Ready infra,worker 519d v1.23.5+012e945
my-node-infra-c4v5h Ready infra,worker 519d v1.23.5+012e945
my-node-infra-mc8rc Ready infra,worker 519d v1.23.5+012e945
my-node-infra-p9cjv Ready infra,worker 519d v1.23.5+012e945
my-node-master-0 Ready master 522d v1.23.5+012e945
my-node-master-1 Ready master 522d v1.23.5+012e945
my-node-master-2 Ready master 522d v1.23.5+012e945
my-node-worker-lk5vm Ready compute,worker 61d v1.23.5+012e945
my-node-worker-pj4r4 Ready compute,worker 61d v1.23.5+012e945
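If you only want to see the worker nodes, the standard node role label can be used as a selector.
~]$ oc get nodes --selector node-role.kubernetes.io/worker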
The oc adm top node command can be used to display the current, real-time amount of CPU and memory being used by each node.
~]$ oc adm top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
my-edge-78v55 139m 3% 5634Mi 65%
my-edge-tk6gm 143m 4% 5514Mi 63%
my-infra-7hpsl 751m 13% 12094Mi 24%
my-infra-jld8v 1091m 19% 45008Mi 92%
my-infra-wxjgn 323m 5% 12444Mi 25%
my-master-0 499m 6% 15022Mi 23%
my-master-1 647m 8% 15334Mi 24%
my-master-2 327m 4% 11125Mi 17%
my-worker-4ccvr 195m 5% 6821Mi 11%
my-worker-flbcp 181m 5% 8027Mi 13%
my-worker-jchnk 178m 5% 8204Mi 13%
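Depending on the version of your oc client, the output can also be sorted so that the busiest nodes are listed first.
~]$ oc adm top node --sort-by cpu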
And by pods in the currently selected namespace.
~]$ oc adm top pods
NAME CPU(cores) MEMORY(bytes)
my-pod-5ff96b69cd-v6pv5 1m 36Mi
my-pod-66f68d8884-vbp6g 0m 121Mi
my-pod-84d6fdb8c-5zczs 1m 221Mi
Or in a specific namespace.
~]$ oc adm top pods --namespace my-project
NAME CPU(cores) MEMORY(bytes)
my-pod-5ff96b69cd-v6pv5 1m 36Mi
my-pod-66f68d8884-vbp6g 0m 121Mi
my-pod-84d6fdb8c-5zczs 1m 221Mi
Or in all namespaces.
~]$ oc adm top pods --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
foo-project my-pod-7478b76dcc-2ql98 1m 77Mi
foo-project my-pod-5ffbb5f99-vclqf 1m 9Mi
foo-project my-pod-94d49d56d-n9g26 1m 42Mi
bar-project my-pod-7cf56894f6-5w4qp 1m 167Mi
bar-project my-pod-56d6b9b59c-j5z7b 1m 22Mi
bar-project my-pod-5d47f598cb-cc5nd 1m 53Mi
You can view the node YAML to determine the node's CPU capacity and how much of that CPU is allocatable to pods.
~]$ oc get node my-node-worker-lk5vm --output yaml
status:
  allocatable:
    cpu: 3500m
    ephemeral-storage: "114396791822"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 62447948Ki
    pods: "250"
  capacity:
    cpu: "4"
    ephemeral-storage: 125293548Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 65696076Ki
    pods: "250"
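Keep in mind that the scheduler's "Insufficient cpu" check compares the sum of the CPU requests of the pods already on a node against the node's allocatable CPU, not the live usage reported by oc adm top, so a node can show a low CPU% and still be unable to accept the pod. The "Allocated resources" section of oc describe node shows how much CPU has already been requested on the node.
~]$ oc describe node my-node-worker-lk5vm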
And you can view the pod YAML to see if the containers in the pod have CPU requests/limits. In this example, the pod has one container and two init containers, each requesting 500m of CPU. That's quite a bit of CPU to be requesting.
~]$ oc get pod my-pod-kzz2w --output yaml
spec:
  containers:
  - name: my-container
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
  initContainers:
  - name: my-first-init-container
    resources:
      requests:
        cpu: 500m
        memory: 128Mi
  - name: my-second-init-container
    resources:
      requests:
        cpu: 500m
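If you would rather not view each pod's YAML one by one, custom columns can be used to list the CPU requests of every pod in a namespace at a glance (my-project is a hypothetical namespace name).
~]$ oc get pods --namespace my-project --output custom-columns=NAME:.metadata.name,CPU_REQUESTS:.spec.containers[*].resources.requests.cpu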
If the pod is managed by a deployment, you can try reducing the CPU requests in the deployment to see if the pod can then be scheduled onto a node with less available CPU. Of course, the pod may not function properly if it actually needs 500m of CPU, but you can at least try this change to see what the result is.
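For example, assuming the pod is managed by a deployment named my-deployment (a hypothetical name), something like this could be used to lower the CPU request of the my-container container, which will roll out new pods with the lower request.
~]$ oc set resources deployment my-deployment --containers my-container --requests cpu=250m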