OpenShift - OpenShift API for Data Protection (OADP) Proxy


Let's say something like this is being returned when describing your OpenShift API for Data Protection (OADP) backupStorageLocations resource.

~]$ oc describe backupStorageLocations --namespace openshift-adp
  Message:               BackupStorageLocation "my-aws-s3-bucket" is unavailable: rpc error: code = Unknown desc = operation error S3: ListObjectsV2, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Get "https://my-bucket-asdfadkjsfasfljdf.s3.us-east-1.amazonaws.com/?delimiter=%2F&list-type=2&prefix=": proxyconnect tcp: dial tcp 10.1.12.13:80: i/o timeout

 

Notice in this example that the backupStorageLocations resource is getting a proxy connect timeout (proxyconnect tcp: dial tcp 10.1.12.13:80: i/o timeout) when attempting to reach the Amazon Web Services (AWS) S3 bucket at https://my-bucket-asdfadkjsfasfljdf.s3.us-east-1.amazonaws.com.
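
If you just want to pull out the status message rather than the full describe output, a jsonpath query along these lines should work.

~]$ oc get backupStorageLocations --namespace openshift-adp --output jsonpath='{.items[*].status.message}'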

The oc get proxy command can be used to determine whether your OpenShift cluster is configured with a cluster-wide proxy. In this example, the command returns a resource named "cluster", so a cluster proxy is configured.

~]$ oc get proxy
NAME      AGE
cluster   622d

 

The oc describe proxy cluster command can be used to display more information on the cluster proxy server (http://proxy.example.com in this example).

~]$ oc describe proxy cluster
Name:         cluster
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         Proxy
Metadata:
  Creation Timestamp:  2020-09-30T15:40:25Z
  Generation:          1
  Managed Fields:
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:httpProxy:
        f:httpsProxy:
        f:noProxy:
        f:trustedCA:
          .:
          f:name:
      f:status:
        .:
        f:httpProxy:
        f:httpsProxy:
    Manager:      cluster-bootstrap
    Operation:    Update
    Time:         2020-09-30T15:40:25Z
    API Version:  config.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:noProxy:
    Manager:         cluster-network-operator
    Operation:       Update
    Time:            2022-02-02T14:20:38Z
  Resource Version:  747167713
  Self Link:         /apis/config.openshift.io/v1/proxies/cluster
  UID:               69f64d95-6665-4233-8ce5-813b2cf84e06
Spec:
  Http Proxy:   http://proxy.example.com
  Https Proxy:  http://proxy.example.com
  No Proxy:     .example.com
  Trusted CA:
    Name:  user-ca-bundle
Status:
  Http Proxy:   http://proxy.example.com
  Https Proxy:  http://proxy.example.com
  No Proxy:     .cluster.local,.svc,.example.com,10.9.0.0/14,localhost
Events:         <none>
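
If you just want the noProxy list from the cluster proxy, a jsonpath query like this should return it.

~]$ oc get proxy cluster --output jsonpath='{.status.noProxy}'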

 

This probably also means that your velero deployment and pod are configured with the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values from your cluster proxy resource. Notice in this example that NO_PROXY does not include s3.us-east-1.amazonaws.com, which means velero's requests to the S3 bucket are sent through the proxy (and in this case, the proxy connection is timing out).

~]$ oc get deployment velero --namespace openshift-adp --output yaml

spec:
  template:
    spec:
      containers:
      - env:
        - name: HTTP_PROXY
          value: http://proxy.example.com

        - name: HTTPS_PROXY
          value: http://proxy.example.com
        - name: NO_PROXY
          value: .cluster.local
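
As a quicker way to see the proxy environment variables on the velero deployment, oc set env with the --list flag can be used, something like this.

~]$ oc set env deployment/velero --namespace openshift-adp --list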

 

One option is to update the OADP DataProtectionApplication so that the velero podConfig sets NO_PROXY to include .s3.us-east-1.amazonaws.com, which causes velero to connect to the S3 endpoint directly instead of through the proxy.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: my-data-protection-application
  namespace: openshift-adp
spec:
  backupImages: false
  backupLocations:
    - name: my-aws-s3-bucket
      velero:
        default: true
        config:
          region: us-east-1
          profile: default
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: my-bucket-asdfadkjsfasfljdf
        provider: aws
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
      nodeSelector: worker
      resourceTimeout: 10m
      podConfig:
        env:
        - name: NO_PROXY
          value: .cluster.local,.s3.us-east-1.amazonaws.com

 

Let's say the above YAML is in a file named my-data-protection-application.yaml. Let's apply the YAML to make this change to the DataProtectionApplication resource.

~]$ oc apply -f my-data-protection-application.yaml
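
Alternatively, if you do not want to maintain a local YAML file, a merge patch along these lines should produce the same result. Note that a JSON merge patch replaces the entire podConfig env list, so include every environment variable you want velero to keep.

~]$ oc patch dataprotectionapplication my-data-protection-application \
      --namespace openshift-adp \
      --type merge \
      --patch '{"spec":{"configuration":{"velero":{"podConfig":{"env":[{"name":"NO_PROXY","value":".cluster.local,.s3.us-east-1.amazonaws.com"}]}}}}}'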

 

The oc get DataProtectionApplication command can be used to confirm that the DataProtectionApplication now has .s3.us-east-1.amazonaws.com in NO_PROXY.

~]$ oc get DataProtectionApplication my-data-protection-application --namespace openshift-adp --output yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  creationTimestamp: "2025-04-15T00:57:45Z"
  generation: 3
  name: my-data-protection-application
  namespace: openshift-adp
  resourceVersion: "496457618"
  uid: fc4de771-5343-4a39-a875-7230531b330f
spec:
  backupImages: false
  backupLocations:
  - name: my-aws-s3-bucket
    velero:
      config:
        profile: default
        region: us-east-1
      credential:
        key: cloud
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: my-bucket-asdfadkjsfasfljdf
      provider: aws
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      podConfig:
        env:
        - name: NO_PROXY
          value: .cluster.local,.s3.us-east-1.amazonaws.com
        resourceAllocations:
          requests:
            cpu: 50m
      resourceTimeout: 10m
status:
  conditions:
  - lastTransitionTime: "2025-04-15T00:57:45Z"
    message: Reconcile complete
    reason: Complete
    status: "True"
    type: Reconciled
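
Or, to print just the podConfig env from the DataProtectionApplication, a jsonpath query like this should do it.

~]$ oc get DataProtectionApplication my-data-protection-application --namespace openshift-adp --output jsonpath='{.spec.configuration.velero.podConfig.env}'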

 

Likewise, the velero deployment should also contain .s3.us-east-1.amazonaws.com in NO_PROXY.

~]$ oc get deployment velero --namespace openshift-adp --output yaml

spec:
  template:
    spec:
      containers:
      - env:
        - name: HTTP_PROXY
          value: http://proxy.example.com

        - name: HTTPS_PROXY
          value: http://proxy.example.com
        - name: NO_PROXY
          value: .cluster.local,.s3.us-east-1.amazonaws.com

 

And the velero pod too.

~]$ oc get pod velero-77c9955586-rxg6r --namespace openshift-adp --output yaml
spec:
  containers:
  - env:
    - name: NO_PROXY
      value: .cluster.local,.s3.us-east-1.amazonaws.com
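
The velero pod name will differ in your environment. If you do not want to look up the exact pod name, something along these lines (a sketch; the jsonpath assumes NO_PROXY is set on the first container in each pod) can print the NO_PROXY value for every pod in the namespace.

~]$ oc get pods --namespace openshift-adp --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="NO_PROXY")].value}{"\n"}{end}'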

 

And then you can describe the backupStorageLocations resource again to see whether the proxy connect timeout is still occurring. If you see something like this, with Phase Available, the backupStorageLocations resource no longer has the proxy connect timeout. Nice!

~]$ oc describe backupStorageLocations --namespace openshift-adp
Name:         my-aws-s3-bucket
Namespace:    openshift-adp
Labels:       app.kubernetes.io/component=bsl
              app.kubernetes.io/instance=my-aws-s3-bucket
              app.kubernetes.io/managed-by=oadp-operator
              app.kubernetes.io/name=oadp-operator-velero
              openshift.io/oadp=True
              openshift.io/oadp-registry=True
Annotations:  <none>
API Version:  velero.io/v1
Kind:         BackupStorageLocation
Metadata:
  Creation Timestamp:  2025-04-15T00:57:45Z
  Generation:          57
  Owner References:
    API Version:           oadp.openshift.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  DataProtectionApplication
    Name:                  my-data-protection-application
    UID:                   fc4de771-5343-4a39-a875-7230531b330f
  Resource Version:        496466257
  UID:                     60581e13-8aea-4de5-856d-792a2ca055c7
Spec:
  Config:
    Checksum Algorithm:
    Profile:             default
    Region:              us-east-1
  Credential:
    Key:    cloud
    Name:   cloud-credentials
  Default:  true
  Object Storage:
    Bucket:  my-bucket-asdfadkjsfasfljdf
  Provider:  aws
Status:
  Last Synced Time:      2025-04-15T01:56:22Z
  Last Validation Time:  2025-04-15T01:56:32Z
  Phase:                 Available
Events:
  Type    Reason                           Age   From            Message
  ----    ------                           ----  ----            -------
  Normal  BackupStorageLocationReconciled  59m   DPA-controller  performed created on backupstoragelocation openshift-adp/default
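
If you just want the phase, a jsonpath query like this should return Available once the backupStorageLocations resource can reach the S3 bucket.

~]$ oc get backupStorageLocations --namespace openshift-adp --output jsonpath='{.items[*].status.phase}'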

 




