
Let's say something like this is being returned in your JBOSS pod log.
[standalone@embedded /] echo Cannot configure jgroups 'kubernetes.KUBE_PING' protocol under 'tcp' stack. This protocol is already configured. >> ${error_file}
I got this after I made a change to one of the files in the /opt/eap/standalone/configuration directory in a JBOSS pod deployed to OpenShift. For more details on JBOSS on OpenShift, check out my article FreeKB - JBOSS - Getting Started with JBOSS on OpenShift.
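If you need to pull up the full startup output to see this message, the oc logs command can be used. The pod name here is just the pod from my deployment; use oc get pods to find yours.
~]$ oc logs pod/eap74-openjdk8-openshift-rhel7-5c5486c667-lmzdm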
Let's start by creating a Persistent Volume Claim. I had to go with a Persistent Volume Claim instead of a Config Map because a Config Map gets mounted in the container read only, whereas a Persistent Volume Claim can be mounted with write access, and the JBOSS container needs to be able to write to the files in the /opt/eap/standalone/configuration directory. Check out these articles for my notes on how to create a Persistent Volume and Persistent Volume Claim.
For example, I created a Persistent Volume Claim named my-persistent-volume-claim that had Access Mode RWX (Read Write Many).
~]$ oc get pvc --output wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
my-persistent-volume-claim Bound pvc-2db07c57-e282-48e7-bfb1-4cbd7245c25e 1Gi RWX file-storage 3m29s Filesystem
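If you have not created the Persistent Volume Claim yet, here is a minimal sketch of what a claim like this could look like. The 1Gi size and the file-storage storage class simply match the output above; adjust these for your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: file-storage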
You'll want to ensure that the JBOSS deployment/pod is running with a Security Context Constraint that allows Persistent Volumes to be mounted in the container. By default, if a pod is not associated with a specific Service Account that has been bound to a certain Security Context Constraint, the pod should have the restricted Security Context Constraint, which can be seen using the oc describe pod command.
~]$ oc describe pod my-app-kf7hf
Annotations: openshift.io/scc: restricted
The oc get securitycontextconstraints command can be used to confirm that the restricted Security Context Constraint lists persistentVolumeClaim under VOLUMES. In other words, the pod is running with a Security Context Constraint that allows Persistent Volumes to be mounted in the container. If the pod is running with a Security Context Constraint that does not list persistentVolumeClaim under VOLUMES, check out my article FreeKB - OpenShift - Run a deployment with a Service Account and Security Context Constraint.
~]$ oc get securitycontextconstraints restricted
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
Then I updated the JBOSS deployment to mount the Persistent Volume Claim to /var/data in the container. Check out my article FreeKB - OpenShift - Mount a Persistent Volume in a container for more details on how to mount a Persistent Volume Claim in a container. Here is a snippet of the deployment YAML with my-persistent-volume-claim mounted to the /var/data directory in the JBOSS container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eap74-openjdk8-openshift-rhel7
spec:
  template:
    spec:
      containers:
      - name: eap74-openjdk8-openshift-rhel7
        volumeMounts:
        - mountPath: /var/data
          name: my-volume
      volumes:
      - name: my-volume
        persistentVolumeClaim:
          claimName: my-persistent-volume-claim
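One way to make this change is to edit the deployment directly, which will roll out a new pod with the volume mounted. Applying the full deployment YAML with oc apply -f works just as well.
~]$ oc edit deployment eap74-openjdk8-openshift-rhel7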
Then in the container I copied all of the files in the /opt/eap/standalone/configuration directory to the /var/data directory. Notice the trailing /. on the source directory, so that the files land directly in /var/data rather than in a /var/data/configuration subdirectory.
~]$ oc exec pod/eap74-openjdk8-openshift-rhel7-5c5486c667-lmzdm -- cp -R /opt/eap/standalone/configuration/. /var/data
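Just to be safe, you could list the /var/data directory to confirm the files were copied.
~]$ oc exec pod/eap74-openjdk8-openshift-rhel7-5c5486c667-lmzdm -- ls /var/data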
I then used the oc exec command to create an interactive bash shell in the JBOSS pod and removed kubernetes.KUBE_PING from the /var/data/standalone-openshift.xml file.
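Something like this should do it, where the pod name again comes from oc get pods. This assumes vi is available in the image; the prompt inside the container will look different than shown here.
~]$ oc exec -it pod/eap74-openjdk8-openshift-rhel7-5c5486c667-lmzdm -- bash
$ vi /var/data/standalone-openshift.xml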
<subsystem xmlns="urn:jboss:domain:jgroups:8.0">
  <channels default="ee">
    <channel name="ee" stack="tcp"/>
  </channels>
  <stacks>
    <stack name="tcp">
      <transport type="TCP" socket-binding="jgroups-tcp"/>
      <protocol type="kubernetes.KUBE_PING"/> <- removed this line
      <protocol type="MERGE3"/>
      <protocol type="FD_SOCK"/>
      <protocol type="FD_ALL"/>
      <protocol type="VERIFY_SUSPECT"/>
      <protocol type="pbcast.NAKACK2"/>
      <protocol type="UNICAST3"/>
      <protocol type="pbcast.STABLE"/>
      <protocol type="pbcast.GMS"/>
      <protocol type="UFC"/>
      <protocol type="MFC"/>
      <protocol type="FRAG3"/>
    </stack>
    <stack name="udp">
      <transport type="UDP" socket-binding="jgroups-udp"/>
      <protocol type="kubernetes.KUBE_PING"/> <- removed this line
      <protocol type="MERGE3"/>
      <protocol type="FD_SOCK"/>
      <protocol type="FD_ALL"/>
      <protocol type="VERIFY_SUSPECT"/>
      <protocol type="pbcast.NAKACK2"/>
      <protocol type="UNICAST3"/>
      <protocol type="pbcast.STABLE"/>
      <protocol type="pbcast.GMS"/>
      <protocol type="UFC"/>
      <protocol type="MFC"/>
      <protocol type="FRAG3"/>
    </stack>
  </stacks>
</subsystem>
Then I edited the deployment once again, replacing /var/data with /opt/eap/standalone/configuration, so that the files in my-persistent-volume-claim (including the updated standalone-openshift.xml) are mounted at the /opt/eap/standalone/configuration directory in the container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eap74-openjdk8-openshift-rhel7
spec:
  template:
    spec:
      containers:
      - name: eap74-openjdk8-openshift-rhel7
        volumeMounts:
        - mountPath: /opt/eap/standalone/configuration
          name: my-volume
      volumes:
      - name: my-volume
        persistentVolumeClaim:
          claimName: my-persistent-volume-claim
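Changing the volumeMounts will trigger a new rollout. If you want to wait for the rollout to finish before checking the pod, oc rollout status can be used.
~]$ oc rollout status deployment eap74-openjdk8-openshift-rhel7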
At this point, I just want to make sure that the pod is Running and returning the JBOSS welcome page. Problem fixed!
~]$ oc get pods
NAME READY STATUS RESTARTS AGE
eap74-openjdk8-openshift-rhel7-f68b66fc8-jfxww 1/1 Running 0 10m
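And to confirm the welcome page is being returned, the route host can be pulled from the route and hit with curl. I am assuming here that the route has the same name as the deployment; use oc get routes to see what your route is actually named.
~]$ curl http://$(oc get route eap74-openjdk8-openshift-rhel7 --output jsonpath='{.spec.host}')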