
Logging in OpenShift is separated into distinct systems and services, each serving a specific purpose.
- Collecting log data - typically done with Filebeat, Fluentd, Logstash, or Vector
- Storing log data, for example, in an Amazon Web Services (AWS) S3 bucket - typically done with Loki or Elasticsearch
- Visualizing and querying log data - typically done in the OpenShift console or in Kibana

There are abbreviations used as shorthand for common combinations of the systems and services that collect, store, and visualize log data.
- EFK (Elasticsearch, Fluentd, Kibana)
- ELK (Elasticsearch, Logstash, Kibana)
- EVK (Elasticsearch, Vector, Kibana)
- LFK (Loki, Fluentd, Kibana)
- LLK (Loki, Logstash, Kibana)
- LVK (Loki, Vector, Kibana)
The first step in configuring OpenShift to collect log data from the various resources in your OpenShift cluster, such as nodes, pods, and so on, is to install an Operator that will collect the log data. This is often done by installing the Cluster Logging Operator. Check out my article OpenShift - Getting Started with Cluster Logging.
The storage of the log data, for example, in object storage such as an Amazon Web Services (AWS) S3 bucket, is often done by Loki or Elasticsearch. Loki is often used with Vector, where Vector is the collector that collects the log data and Loki stores the data in some sort of storage, such as an AWS S3 bucket. Loki is meant for short-term storage of log data and is optimized for performance. Prior to Vector, Fluentd was often used as the log collector.

You configure logging by first installing the Loki Operator to manage your log storage, followed by the OpenShift Logging Operator to manage the components of logging.
Create the openshift-operators-redhat namespace
For example, let's say you have a file named openshift-operators-redhat.yml with the following YAML.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
The oc apply command can be used to create the openshift-operators-redhat namespace.
oc apply -f openshift-operators-redhat.yml
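To confirm the namespace was created with the expected label, something like the following can be used (requires a connected oc session).

```shell
# Confirm the namespace exists and is Active, and show its labels
oc get namespace openshift-operators-redhat --show-labels
```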
Create the loki-operator Subscription
For example, let's say you have a file named loki-subscription.yml with the following YAML.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
The oc apply command can be used to create the Subscription.
oc apply -f loki-subscription.yml
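After the Subscription is created, the Operator Lifecycle Manager (OLM) installs the Operator. A quick way to check on the installation is to list the Subscription, InstallPlan, and ClusterServiceVersion (CSV) in the namespace; the CSV should eventually reach the Succeeded phase.

```shell
# List the Subscription, install plan, and CSV for the Loki Operator
oc get subscription --namespace openshift-operators-redhat
oc get installplan --namespace openshift-operators-redhat
oc get clusterserviceversion --namespace openshift-operators-redhat
```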
Create the openshift-logging namespace
For example, let's say you have a file named openshift-logging.yml with the following YAML.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-logging: "true"
    openshift.io/cluster-monitoring: "true"
The oc apply command can be used to create the openshift-logging namespace.
oc apply -f openshift-logging.yml
Create the cluster-logging OperatorGroup
For example, let's say you have a file named cluster-logging.yml with the following YAML.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-logging: "true"
    openshift.io/cluster-monitoring: "true"
spec:
  targetNamespaces:
  - openshift-logging
The oc apply command can be used to create the cluster-logging OperatorGroup.
oc apply -f cluster-logging.yml
Create the cluster-logging Subscription
For example, let's say you have a file named cluster-logging-subscription.yml with the following YAML. Notice that this Subscription, unlike the loki-operator Subscription, is created in the openshift-logging namespace.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
The oc apply command can be used to create the cluster-logging Subscription.
oc apply -f cluster-logging-subscription.yml
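At this point there should be a Subscription in each of the two namespaces. Listing the Subscriptions in both namespaces is a quick sanity check.

```shell
# The Loki Operator Subscription lives in openshift-operators-redhat
oc get subscription --namespace openshift-operators-redhat

# The Cluster Logging Subscription lives in openshift-logging
oc get subscription --namespace openshift-logging
```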
Create the loki storage secret
Create a secret that will be used by the LokiStack to know where to store log data. For example, let's say you have the following YAML in a file named loki-s3.yml.
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3
  namespace: openshift-logging
type: Opaque
stringData:
  access_key_id: AKIA2MABCD6GDQ1234RY
  access_key_secret: 4FGkm30sdf-0m234dfAVMAD2340-dsfaADV324df
  bucketnames: my-bucket-abc123
  endpoint: https://s3.us-east-1.amazonaws.com
  region: us-east-1
The oc apply command can be used to create the secret.
oc apply -f loki-s3.yml
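Secret values are stored base64 encoded. If you need to double check a value after the secret is created, something like the following can be used.

```shell
# Show the keys in the loki-s3 secret (values are not printed)
oc describe secret loki-s3 --namespace openshift-logging

# Decode a single key, for example the bucket name
oc get secret loki-s3 --namespace openshift-logging \
  --output jsonpath='{.data.bucketnames}' | base64 --decode
```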
Create the LokiStack Custom Resource
For example, let's say you have the following YAML in a file named loki-stack.yml. Notice in this example that the LokiStack references the loki-s3 secret so that the LokiStack knows to store log data in the Amazon Web Services (AWS) S3 bucket.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: loki
  namespace: openshift-logging
spec:
  limits:
    global:
      ingestion:
        ingestionBurstSize: 40
        ingestionRate: 20
        maxGlobalStreamsPerTenant: 25000
      queries:
        maxChunksPerQuery: 2000000
        maxEntriesLimitPerQuery: 10000
        maxQuerySeries: 3000
        queryTimeout: 3m
      retention:
        days: 7
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: "2024-07-01"
      version: v13
    secret:
      name: loki-s3
      type: s3
  storageClassName: thin-csi
  template:
    compactor:
      nodeSelector:
        node-type: openshift-logging
    distributor:
      nodeSelector:
        node-type: openshift-logging
    gateway:
      nodeSelector:
        node-type: openshift-logging
    indexGateway:
      nodeSelector:
        node-type: openshift-logging
    ingester:
      nodeSelector:
        node-type: openshift-logging
    querier:
      nodeSelector:
        node-type: openshift-logging
    queryFrontend:
      nodeSelector:
        node-type: openshift-logging
    ruler:
      nodeSelector:
        node-type: openshift-logging
  tenants:
    mode: openshift-network
The oc apply command can be used to create the LokiStack Custom Resource.
oc apply -f loki-stack.yml
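It can take a few minutes for the Loki pods to come up. Assuming your version of the Loki Operator reports a Ready condition on the LokiStack (recent versions do), oc wait can be used to block until the stack is ready.

```shell
# Block until the LokiStack reports the Ready condition (up to 5 minutes)
oc wait lokistack/loki --namespace openshift-logging \
  --for=condition=Ready --timeout=300s
```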
Create the ClusterLogging Custom Resource
For example, let's say you have the following YAML in a file named cluster-logging-instance.yml (a different file name than the cluster-logging.yml used for the OperatorGroup, so the two files don't overwrite one another). The ClusterLogging Custom Resource (CR) is used to define the system that will collect log data (Vector in this example), where the log data will be stored (Loki in this example), and where the log data can be visualized and queried (the OpenShift console in this example). Notice that logStore.lokistack.name matches the name of the LokiStack created above (loki).
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector
  logStore:
    type: lokistack
    lokistack:
      name: loki
  visualization:
    type: ocp-console
    ocpConsole:
      logsLimit: 15
  managementState: Managed
The oc apply command can be used to create the ClusterLogging Custom Resource (CR).
oc apply -f cluster-logging-instance.yml
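Once the ClusterLogging CR is reconciled, the Vector collectors should be running on every node. A quick sketch of checking this (the daemon set name "collector" and the component=collector label are how recent versions of the OpenShift Logging Operator label the collector pods; adjust for your version).

```shell
# The Vector collectors run as a daemon set named collector
oc get daemonset collector --namespace openshift-logging

# List the collector pods
oc get pods --namespace openshift-logging --selector component=collector
```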
List the Operators
The oc get operators command can be used to list the installed Operators. In this example, the Loki and OpenShift Cluster Logging Operators are listed. Awesome!
~]$ oc get operators
NAME                                       AGE
cluster-logging.openshift-logging          181d
loki-operator.openshift-operators-redhat   181d
Assuming the LokiStack was created in the openshift-logging namespace, the oc get all command can be used to list the Loki pods, services, daemon sets, deployments, replica sets, stateful sets, and routes.
~]$ oc get all --namespace openshift-logging
NAME                                       READY   STATUS    RESTARTS   AGE
pod/flowlogs-pipeline-2vzbx                1/1     Running   10         61d
pod/flowlogs-pipeline-4w4mj                1/1     Running   8          75d
pod/flowlogs-pipeline-fcpx6                1/1     Running   10         75d
pod/flowlogs-pipeline-g599l                1/1     Running   9          125d
pod/flowlogs-pipeline-hn2jx                1/1     Running   8          125d
pod/flowlogs-pipeline-kfn77                1/1     Running   10         125d
pod/flowlogs-pipeline-p75r5                1/1     Running   11         61d
pod/flowlogs-pipeline-phbc2                1/1     Running   11         61d
pod/flowlogs-pipeline-qpsb7                1/1     Running   12         125d
pod/flowlogs-pipeline-rw76z                1/1     Running   1          23d
pod/flowlogs-pipeline-xjnn4                1/1     Running   10         75d
pod/flowlogs-pipeline-xtp62                1/1     Running   1          23d
pod/flowlogs-pipeline-z4r99                1/1     Running   12         75d
pod/loki-compactor-0                       1/1     Running   0          18d
pod/loki-distributor-65fdc984f6-7hwc5      1/1     Running   0          18d
pod/loki-distributor-65fdc984f6-dqwcc      1/1     Running   0          18d
pod/loki-gateway-775f67b766-8tx2d          2/2     Running   0          18d
pod/loki-gateway-775f67b766-gzvgs          2/2     Running   0          18d
pod/loki-index-gateway-0                   1/1     Running   0          18d
pod/loki-index-gateway-1                   1/1     Running   0          18d
pod/loki-ingester-0                        1/1     Running   0          18d
pod/loki-ingester-1                        1/1     Running   0          18d
pod/loki-querier-c7f847457-5sp9h           1/1     Running   0          18d
pod/loki-querier-c7f847457-l7kgz           1/1     Running   0          18d
pod/loki-query-frontend-7d6dbbd7d9-d4snw   1/1     Running   0          18d
pod/loki-query-frontend-7d6dbbd7d9-klqv5   1/1     Running   0          18d
pod/foo-plugin-55f86487c4-d4n7b            1/1     Running   0          23d

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/flowlogs-pipeline-prom     ClusterIP   172.30.163.134   <none>        9401/TCP                     167d
service/loki-compactor-grpc        ClusterIP   None             <none>        9095/TCP                     162d
service/loki-compactor-http        ClusterIP   172.30.125.165   <none>        3100/TCP                     162d
service/loki-distributor-grpc      ClusterIP   None             <none>        9095/TCP                     162d
service/loki-distributor-http      ClusterIP   172.30.234.132   <none>        3100/TCP                     162d
service/loki-gateway-http          ClusterIP   172.30.21.134    <none>        8080/TCP,8081/TCP,8083/TCP   162d
service/loki-gossip-ring           ClusterIP   None             <none>        7946/TCP                     162d
service/loki-index-gateway-grpc    ClusterIP   None             <none>        9095/TCP                     162d
service/loki-index-gateway-http    ClusterIP   172.30.88.235    <none>        3100/TCP                     162d
service/loki-ingester-grpc         ClusterIP   None             <none>        9095/TCP                     162d
service/loki-ingester-http         ClusterIP   172.30.226.16    <none>        3100/TCP                     162d
service/loki-querier-grpc          ClusterIP   None             <none>        9095/TCP                     162d
service/loki-querier-http          ClusterIP   172.30.108.85    <none>        3100/TCP                     162d
service/loki-query-frontend-grpc   ClusterIP   None             <none>        9095/TCP                     162d
service/loki-query-frontend-http   ClusterIP   172.30.181.211   <none>        3100/TCP                     162d
service/foo-plugin                 ClusterIP   172.30.90.11     <none>        9001/TCP                     167d
service/foo-plugin-metrics         ClusterIP   172.30.144.186   <none>        9002/TCP                     167d

NAME                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/flowlogs-pipeline    13        13        13      13           13          <none>          167d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/loki-distributor      2/2     2            2           162d
deployment.apps/loki-gateway          2/2     2            2           162d
deployment.apps/loki-querier          2/2     2            2           162d
deployment.apps/loki-query-frontend   2/2     2            2           162d
deployment.apps/foo-plugin            1/1     1            1           167d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/loki-distributor-65fdc984f6      2         2         2       18d
replicaset.apps/loki-gateway-775f67b766          2         2         2       18d
replicaset.apps/loki-querier-c7f847457           2         2         2       18d
replicaset.apps/loki-query-frontend-7d6dbbd7d9   2         2         2       18d
replicaset.apps/foo-plugin-55f86487c4            1         1         1       133d

NAME                                  READY   AGE
statefulset.apps/loki-compactor       1/1     162d
statefulset.apps/loki-index-gateway   2/2     162d
statefulset.apps/loki-ingester        2/2     162d

NAME                            HOST/PORT                                           PATH   SERVICES            PORT     TERMINATION   WILDCARD
route.route.openshift.io/loki   loki-openshift-logging.apps.openshift.example.com          loki-gateway-http   public   reencrypt     None
Viewing Logs
The oc logs command can be used to view the logs of a particular resource, such as a pod, a deployment, a replica set, and so on.
The oc adm node-logs command can be used to view node logs.
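For example, something like the following can be used (the pod and deployment names below are the illustrative names from the output above).

```shell
# View the logs of a single pod
oc logs pod/loki-ingester-0 --namespace openshift-logging

# Follow the most recent hour of logs for a deployment
oc logs deployment/loki-gateway --namespace openshift-logging --since=1h --follow

# View the kubelet journal logs on the control plane nodes
oc adm node-logs --role=master --unit=kubelet
```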
Did you find this article helpful?
If so, consider buying me a coffee over at