
There are two probes used to check the state of a container: liveness and readiness.
- When a liveness probe fails, the probed container is considered dead and is restarted.
- When a readiness probe fails, the probed container is not ready to receive requests. The container might become ready in the future, but it should not receive requests right now.
When a probe fails, it almost always points to an issue with the code running in the container, and almost never to a problem with the OpenShift cluster configuration. The one situation that might suggest a cluster problem is when every container on a node is failing its liveness or readiness probes. In every other case, the developer who wrote the container's code needs to check it for issues; probes most often start failing right after a code change is deployed as a new version of the container.
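When a probe starts failing, the pod's events usually say why. A quick way to check (the pod name below is hypothetical; substitute the name of your own failing pod):

# The Events section lists Unhealthy warnings such as
# "Liveness probe failed" or "Readiness probe failed"
oc describe pod my-app-7c9f8d6b5-abcde

# Check the application's own logs for errors around the same time
oc logs my-app-7c9f8d6b5-abcde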
The YAML used to deploy an application to OpenShift includes a livenessProbe and a readinessProbe stanza under each container the probes should run against. For example (with the minimal metadata and selector fields a Deployment requires added for completeness):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.openshift.example.com/my-app@sha256:fb68c4b10f4a0ece7ed0488af22e5699021e1b9a8461e9f4f9f39072d71a70da
        ports:
        - containerPort: 8081
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /mgmt/actuator/health
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 50
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 3
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /mgmt/actuator/health
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 40
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 3
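A note on the timing these values imply: the liveness probe first runs 50 seconds after the container starts (initialDelaySeconds), then every 30 seconds (periodSeconds), and it takes 3 consecutive failed checks (failureThreshold) before the container is restarted, so a container survives roughly a minute of sustained failure before OpenShift acts. The /mgmt/actuator/health path suggests a Spring Boot Actuator health endpoint, but any endpoint that returns HTTP 200 when the application is healthy will do. As a quick sanity check, you can hit the endpoint from inside the running container (a sketch; the pod name is hypothetical, and it assumes curl is present in the image):

# A healthy Spring Boot Actuator endpoint returns HTTP 200
# with a JSON body along the lines of {"status":"UP"}
oc exec my-app-7c9f8d6b5-abcde -- curl -s http://localhost:8081/mgmt/actuator/health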