
The Event Router watches for Kubernetes events and logs them so that they can be collected by a logging subsystem such as Loki.
Create the Service Account
For example, let's say you have the following in a file named service_account.yml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eventrouter
  namespace: openshift-logging
The oc apply command can be used to create the service account.
oc apply -f service_account.yml
Create the Cluster Role
For example, let's say you have the following in a file named cluster_role.yml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-reader
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "watch", "list"]
The oc apply command can be used to create the cluster role.
oc apply -f cluster_role.yml
Create the Cluster Role Binding
For example, let's say you have the following in a file named cluster_role_binding.yml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-reader-binding
subjects:
- kind: ServiceAccount
  name: eventrouter
  namespace: openshift-logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-reader
The oc apply command can be used to create the cluster role binding.
oc apply -f cluster_role_binding.yml
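Before moving on, the RBAC chain can be sanity-checked from the command line. The oc auth can-i command accepts an --as flag to impersonate a service account; this sketch assumes access to a live cluster and permission to impersonate.

```shell
# Ask the API server whether the eventrouter service account is allowed
# to list events cluster-wide; prints "yes" if the binding took effect.
oc auth can-i list events \
  --as=system:serviceaccount:openshift-logging:eventrouter \
  --all-namespaces
```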
Create the Config Map
For example, let's say you have the following in a file named config_map.yml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: eventrouter
  namespace: openshift-logging
data:
  config.json: |-
    {
      "sink": "stdout"
    }
The oc apply command can be used to create the config map.
oc apply -f config_map.yml
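The "stdout" sink tells the event router to write events to its own container log, where the log collector picks them up. A quick local sanity check of the config.json fragment, using python3 as a stand-in for any JSON tool:

```shell
# Recreate the config fragment locally and confirm it parses as valid JSON.
cat > /tmp/eventrouter-config.json <<'EOF'
{
  "sink": "stdout"
}
EOF

# Print the configured sink; a parse error here means the ConfigMap is malformed.
python3 -c 'import json; print(json.load(open("/tmp/eventrouter-config.json"))["sink"])'
# stdout
```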
Create the Deployment
For example, let's say you have the following in a file named deployment.yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventrouter
  namespace: openshift-logging
  labels:
    component: "eventrouter"
    logging-infra: "eventrouter"
    provider: "openshift"
spec:
  selector:
    matchLabels:
      component: "eventrouter"
      logging-infra: "eventrouter"
      provider: "openshift"
  replicas: 1
  template:
    metadata:
      labels:
        component: "eventrouter"
        logging-infra: "eventrouter"
        provider: "openshift"
      name: eventrouter
    spec:
      serviceAccount: eventrouter
      containers:
      - name: kube-eventrouter
        image: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4"
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
        volumeMounts:
        - name: config-volume
          mountPath: /etc/eventrouter
      volumes:
      - name: config-volume
        configMap:
          name: eventrouter
The oc apply command can be used to create the deployment.
oc apply -f deployment.yml
Viewing Logs
The oc get pods command can be used to confirm that there is now an eventrouter pod in the openshift-logging namespace.
Something like this should be returned.
~]# oc get pods --namespace openshift-logging
NAME                                          READY   STATUS    RESTARTS   AGE
cluster-logging-eventrouter-d649f97c8-qvv8r   1/1     Running   0          8d
And the oc logs command can be used to view the logs in the pod.
oc logs pod/cluster-logging-eventrouter-d649f97c8-qvv8r --namespace openshift-logging
Which should return something like this.
{"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}}
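Each line the event router emits is a self-contained JSON object, so the stream is easy to post-process. As a sketch, here is one abridged line being parsed with python3 to pull out the fields of interest:

```shell
# An abridged Event Router log line, taken from the output above.
line='{"verb":"ADDED","event":{"involvedObject":{"kind":"Job","name":"openshift-service-catalog-controller-manager-remover"},"reason":"Completed","message":"Job completed","type":"Normal"}}'

# Extract the verb, the involved object's kind, and the event reason.
echo "$line" | python3 -c '
import json, sys
e = json.load(sys.stdin)
print(e["verb"], e["event"]["involvedObject"]["kind"], e["event"]["reason"])
'
# ADDED Job Completed
```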