
Install the OpenShift API for Data Protection (OADP) Operator
First and foremost, let's install the OpenShift API for Data Protection (OADP) Operator.
In the OpenShift console, navigate to Operators > OperatorHub. Enter OADP in search and select the OADP Operator.

Select Install.

It is usually fine to keep the defaults and select Install.

Assuming the operator installs successfully, the console should indicate that the operator is ready for use.

If you are not familiar with the oc command, refer to OpenShift - Getting Started with the oc command.
The oc get operators command should return the OADP operator.
~]$ oc get operators
NAME                                 AGE
redhat-oadp-operator.openshift-adp   119s
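You can also confirm the operator's ClusterServiceVersion (CSV) finished installing; the PHASE column should eventually show Succeeded. The exact CSV name will vary with the operator version.
oc get csv --namespace openshift-adp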
Assuming you installed the operator using the default option, there should be resources in the openshift-adp namespace.
~]$ oc get all --namespace openshift-adp
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/openshift-adp-controller-manager-7589f7647-5l47r   1/1     Running   0          3m5s

NAME                                                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/openshift-adp-controller-manager-metrics-service   ClusterIP   172.30.97.95   <none>        8443/TCP   3m10s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/openshift-adp-controller-manager   1/1     1            1           3m5s

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/openshift-adp-controller-manager-7589f7647   1         1         1       3m5s
Amazon Web Services (AWS) S3 Bucket
Let's say we want to store OADP backups in an Amazon Web Services S3 Bucket. Let's use the aws s3api create-bucket command to create an S3 Bucket. Notice this is done using a profile named "admin". Check out my article Amazon Web Services (AWS) - List Profile Config using the AWS CLI for more details on Amazon Web Services (AWS) profiles.
aws s3api create-bucket --bucket my-bucket-asdfadkjsfasfljdf --region us-east-1 --profile admin
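As an aside, us-east-1 is the only region that does not take a location constraint. If you want the bucket in any other region, the create-bucket call also needs the --create-bucket-configuration flag, something like this (us-east-2 here is just an example):
aws s3api create-bucket --bucket my-bucket-asdfadkjsfasfljdf --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2 --profile admin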
Next let's create an Identity and Access Management (IAM) user named velero using the aws iam create-user command.
aws iam create-user --user-name velero --profile admin
Then create an access key and secret key for the velero user using the aws iam create-access-key command. Notice that the output includes both the access key and the secret key. Make note of the value of the secret key! This is your one and only chance to view the secret key. But don't worry, you can always create a new access key if you forgot to make note of the secret key.
~]$ aws iam create-access-key --user-name velero --profile admin
{
    "AccessKey": {
        "UserName": "velero",
        "AccessKeyId": "AKIA2MITL76GFDLORQU6",
        "Status": "Active",
        "SecretAccessKey": "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",
        "CreateDate": "2025-04-09T01:26:08+00:00"
    }
}
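If you do lose the secret key, you can list the velero user's access keys and delete the old one before creating a replacement. The access key ID below is just the one from the example output above.
aws iam list-access-keys --user-name velero --profile admin
aws iam delete-access-key --access-key-id AKIA2MITL76GFDLORQU6 --user-name velero --profile admin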
Let's say you add the access key and secret key to your $HOME/.aws/credentials file (on a Linux system).
~]$ cat ~/.aws/credentials
[velero]
aws_secret_access_key = Nzy7dzWcr4hU6sYUg0PCquMCiCv04ae2aXmFIsGE
aws_access_key_id = AKIA2MITL76GFDLORQU6
And add a velero profile to the $HOME/.aws/config file too.
~]$ cat ~/.aws/config
[profile velero]
region = us-east-1
output = json
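A quick way to confirm the velero profile is wired up correctly is to ask AWS who you are. This should return the account ID and the ARN of the velero user.
aws sts get-caller-identity --profile velero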
You can now try to list the location of your S3 Bucket using the velero profile, but you'll get Access Denied because you've not yet granted velero any permissions.
~]$ aws s3api get-bucket-location --bucket my-bucket-asdfadkjsfasfljdf --profile velero
An error occurred (AccessDenied) when calling the GetBucketLocation operation: User: arn:aws:iam::123456789012:user/velero is not authorized to perform: s3:GetBucketLocation because no identity-based policy allows the s3:GetBucketLocation action
Let's create a file named velero-s3-policy.json that contains the following JSON, replacing my-bucket-asdfadkjsfasfljdf with the name of your S3 Bucket.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-asdfadkjsfasfljdf/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket-asdfadkjsfasfljdf"
            ]
        }
    ]
}
Let's use the aws iam put-user-policy command to attach the policy to the velero user account.
aws iam put-user-policy --user-name velero --policy-name velero-s3 --policy-document file://velero-s3-policy.json --profile admin
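If you want to double check that the policy was attached, the aws iam get-user-policy command should echo the policy document back.
aws iam get-user-policy --user-name velero --policy-name velero-s3 --profile admin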
Now velero should be able to list the location of the S3 Bucket. Don't worry if LocationConstraint is null; that just means the bucket is in us-east-1. We just want to make sure that we get a response instead of Access Denied. So far, so good.
~]$ aws s3api get-bucket-location --bucket my-bucket-asdfadkjsfasfljdf --profile velero
{
    "LocationConstraint": null
}
Let's create a file named credentials-velero that contains your AWS Access Key and Secret Key.
[default]
aws_access_key_id=<your access key>
aws_secret_access_key=<your secret key>
Cloud Credentials Secret
Let's create a secret named cloud-credentials using the credentials-velero file.
oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=credentials-velero
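Just to be safe, you can confirm the secret contains the credentials file under the "cloud" key. The oc extract command decodes the secret for you, and --to=- prints it to stdout.
oc extract secret/cloud-credentials --namespace openshift-adp --to=-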
Data Protection Application
Next let's create the Data Protection Application.
Let's create a YAML file named my-data-protection-application.yaml that contains the following, replacing my-bucket-asdfadkjsfasfljdf with the name of your S3 Bucket. Note that nodeSelector is a map of node labels under podConfig; here node-role.kubernetes.io/worker: "" targets the worker nodes.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: my-data-protection-application
  namespace: openshift-adp
spec:
  backupImages: false
  backupLocations:
    - name: default
      velero:
        default: true
        config:
          region: us-east-1
          profile: default
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: my-bucket-asdfadkjsfasfljdf
        provider: aws
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
      resourceTimeout: 10m
Be aware that by default, the velero deployment will contain three containers, each with 500m of CPU requests. If you do not have that much available CPU on your OpenShift nodes, you can bump the CPU requests down to something like 50m.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: my-data-protection-application
  namespace: openshift-adp
spec:
  backupImages: false
  backupLocations:
    - name: default
      velero:
        default: true
        config:
          region: us-east-1
          profile: default
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: my-bucket-asdfadkjsfasfljdf
        provider: aws
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        resourceAllocations:
          requests:
            cpu: 50m
      resourceTimeout: 10m
Let's use the oc apply command to create my-data-protection-application.
oc apply --filename my-data-protection-application.yaml
Let's ensure the status is Reconciled.
~]$ oc describe DataProtectionApplication --namespace openshift-adp
. . .
Status:
  Conditions:
    Last Transition Time:  2025-04-10T01:25:11Z
    Message:               Reconcile complete
    Reason:                Complete
    Status:                True
    Type:                  Reconciled
The Data Protection Application should provision additional "velero" resources in the openshift-adp namespace.
~]$ oc get all --namespace openshift-adp
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/openshift-adp-controller-manager-55f68b778f-tlr8v   1/1     Running   0          8m52s
pod/velero-6777878978-nvqm4                             1/1     Running   0          3m10s

NAME                                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/openshift-adp-controller-manager-metrics-service   ClusterIP   172.30.220.147   <none>        8443/TCP   9m3s
service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.87.161    <none>        8085/TCP   3m10s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/openshift-adp-controller-manager   1/1     1            1           8m52s
deployment.apps/velero                             1/1     1            1           3m10s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/openshift-adp-controller-manager-55f68b778f   1         1         1       8m52s
replicaset.apps/velero-6777878978                             1         1         1       3m10s
If you included podConfig with 50m of CPU, then the containers in the velero deployment should have 50m of CPU requests.
~]$ oc get deployments --namespace openshift-adp --output yaml | grep cpu
cpu: 50m
cpu: 50m
cpu: 50m
Next let's check to see if the backupStorageLocation is Available.
~]$ oc get backupStorageLocations --namespace openshift-adp
NAME      PHASE       LAST VALIDATED   AGE   DEFAULT
default   Available   60s              64s   true
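If the backup storage location instead shows Unavailable, the velero pod logs are usually the best place to find the reason (bad credentials, wrong bucket name, and so on).
oc logs deployment/velero --namespace openshift-adp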
Backup
Now let's say you want to back up the pods in namespace my-project.
~]$ oc get pods --namespace my-project
NAME        READY   STATUS    RESTARTS   AGE
foo-9mzm2   1/1     Running   0          8d
bar-pflxc   1/1     Running   0          8d
Let's create a backup resource that will back up the resources in namespace my-project and retain the backup for 24 hours (the ttl). Let's say this markup is in a file named my-project-backup.yml.
The reason "default" is used as the name of the storageLocation is that the oc get backupStorageLocations command returns a backup storage location named "default".
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-project
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  includedNamespaces:
    - my-project
  storageLocation: default
  ttl: 24h0m0s
Let's use the oc apply command to create the backup resource.
~]$ oc apply --filename my-project-backup.yml
backup.velero.io/my-project created
There should now be a backup resource named my-project in the openshift-adp namespace.
~]$ oc get backups --namespace openshift-adp
NAME         AGE
my-project   102s
If you then describe the backup resource named my-project, the status should be InProgress.
~]$ oc describe backup my-project --namespace openshift-adp
Name:         my-project
Namespace:    openshift-adp
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout: 10m0s
              velero.io/source-cluster-k8s-gitversion: v1.29.10+67d3387
              velero.io/source-cluster-k8s-major-version: 1
              velero.io/source-cluster-k8s-minor-version: 29
API Version:  velero.io/v1
Kind:         Backup
Metadata:
  Creation Timestamp:  2025-04-16T01:25:13Z
  Generation:          1
  Resource Version:    497329502
  UID:                 8f72d858-77da-40bc-8fa2-b8f1ca0deb14
Spec:
  Csi Snapshot Timeout:          10m0s
  Default Volumes To Fs Backup:  false
  Included Namespaces:
    my-project
  Item Operation Timeout:  24h0m0s
  Snapshot Move Data:      false
  Storage Location:        default
  Ttl:                     24h0m0s
Status:
  Expiration:      2025-04-17T01:25:13Z
  Format Version:  1.1.0
  Hook Status:
  Phase:           InProgress
  Progress:
    Items Backed Up:  18
    Total Items:      18
  Start Timestamp:  2025-04-16T01:25:13Z
  Version:          1
Events:  <none>
And then shortly thereafter, the status should be Completed. Awesome!
~]$ oc describe backup my-project --namespace openshift-adp
Name:         my-project
Namespace:    openshift-adp
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout: 10m0s
              velero.io/source-cluster-k8s-gitversion: v1.29.10+67d3387
              velero.io/source-cluster-k8s-major-version: 1
              velero.io/source-cluster-k8s-minor-version: 29
API Version:  velero.io/v1
Kind:         Backup
Metadata:
  Creation Timestamp:  2025-04-16T01:25:13Z
  Generation:          1
  Resource Version:    497329502
  UID:                 8f72d858-77da-40bc-8fa2-b8f1ca0deb14
Spec:
  Csi Snapshot Timeout:          10m0s
  Default Volumes To Fs Backup:  false
  Included Namespaces:
    my-project
  Item Operation Timeout:  24h0m0s
  Snapshot Move Data:      false
  Storage Location:        default
  Ttl:                     24h0m0s
Status:
  Completion Timestamp:  2025-04-16T01:25:16Z
  Expiration:            2025-04-17T01:25:13Z
  Format Version:        1.1.0
  Hook Status:
  Phase:                 Completed
  Progress:
    Items Backed Up:  18
    Total Items:      18
  Start Timestamp:  2025-04-16T01:25:13Z
  Version:          1
Events:  <none>
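Rather than describing the whole resource each time, you can poll just the phase with jsonpath.
oc get backup my-project --namespace openshift-adp --output jsonpath='{.status.phase}'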
Recall in this example that OADP was configured to store the backups in an Amazon Web Services (AWS) S3 Bucket named my-bucket-asdfadkjsfasfljdf. The aws s3api list-objects command can be used to list the objects in the S3 Bucket. Something like this should be returned, showing the my-project backup objects in the S3 Bucket. Awesome, it works!
~]$ aws s3api list-objects --bucket my-bucket-asdfadkjsfasfljdf --profile admin
{
    "Contents": [
        {
            "Key": "backups/my-project/my-project-csi-volumesnapshotclasses.json.gz",
            "LastModified": "2025-04-16T01:25:17+00:00",
            "ETag": "\"6848cb8d5f3669ef603f87e48ece8567\"",
            "Size": 29,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-csi-volumesnapshotcontents.json.gz",
            "LastModified": "2025-04-16T01:25:17+00:00",
            "ETag": "\"6848cb8d5f3669ef603f87e48ece8567\"",
            "Size": 29,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-csi-volumesnapshots.json.gz",
            "LastModified": "2025-04-16T01:25:17+00:00",
            "ETag": "\"6848cb8d5f3669ef603f87e48ece8567\"",
            "Size": 29,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-itemoperations.json.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"ae811dd04e417ed7b896b4c4fa3d2ac0\"",
            "Size": 27,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-logs.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"673aef92adf289811d5c04b270084eac\"",
            "Size": 11312,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-resource-list.json.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"47145873ba24f87182ee601bc7dd92fc\"",
            "Size": 307,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-results.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"4b8f571a28628df1f222ee56c3673550\"",
            "Size": 49,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-volumeinfo.json.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"05cd97096815e99b306792f280b67b06\"",
            "Size": 292,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project-volumesnapshots.json.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"6848cb8d5f3669ef603f87e48ece8567\"",
            "Size": 29,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/my-project.tar.gz",
            "LastModified": "2025-04-16T01:25:16+00:00",
            "ETag": "\"c28c1d05c60cfb80f21799b5b11faac9\"",
            "Size": 13046,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        },
        {
            "Key": "backups/my-project/velero-backup.json",
            "LastModified": "2025-04-16T01:25:17+00:00",
            "ETag": "\"33c1cecb4d65267049037e13b78759d1\"",
            "Size": 3826,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "john.doe",
                "ID": "ab0e0a41e318d5103a77c82240d5cb3fc41ff11cc325c65b5c777a5f8e743743"
            }
        }
    ]
}
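That is a lot of output. If you just want the object keys, a JMESPath --query trims it down nicely.
aws s3api list-objects --bucket my-bucket-asdfadkjsfasfljdf --query 'Contents[].Key' --profile admin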
Schedule Recurring Backup
Similar to the above example, we can also create a recurring backup using a Schedule resource. Let's say you have the following in a file named my-project-scheduled-backup.yml. This will create a backup of the resources in namespace my-project daily at 7:00 am.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: my-project
  namespace: openshift-adp
spec:
  schedule: 00 07 * * *
  template:
    includedNamespaces:
      - my-project
    storageLocation: default
    ttl: 24h0m0s
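The schedule field is standard cron syntax (minute, hour, day of month, month, day of week). For example, to instead run every six hours, the schedule would be:
schedule: 0 */6 * * *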
Let's use the oc apply command to create the scheduled backup resource.
~]$ oc apply --filename my-project-scheduled-backup.yml
schedule.velero.io/my-project created
There should now be a schedule resource named my-project in the openshift-adp namespace.
~]$ oc get schedules --namespace openshift-adp
NAME         STATUS    SCHEDULE      LASTBACKUP   AGE   PAUSED
my-project   Enabled   00 07 * * *                42s
Restore
Now let's see how to go about restoring from the backup. Let's delete my-deployment in namespace my-project.
oc delete deployment my-deployment --namespace my-project
And ensure there are now no deployments in my-project.
~]$ oc get deployments --namespace my-project
No resources found in my-project namespace
And no pods.
~]$ oc get pods --namespace my-project
No resources found in my-project namespace
Similar to the backup resource, let's create a restore resource. Let's say you have the following in a file named my-project-restore.yml to restore the resources in my-project.
~]$ cat my-project-restore.yml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: my-project
  namespace: openshift-adp
spec:
  backupName: my-project
  namespaceMapping:
    my-project: my-project
  restorePVs: false
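The namespaceMapping here is a no-op since it maps my-project onto itself, but it is also how you would restore into a different namespace. For example, to restore into a hypothetical my-project-copy namespace:
namespaceMapping:
  my-project: my-project-copy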
Let's use the oc apply command to create the restore resource.
~]$ oc apply --filename my-project-restore.yml
restore.velero.io/my-project created
Let's ensure the restore resource exists.
~]$ oc get restore --namespace openshift-adp
NAME         AGE
my-project   31s
If there are no issues, the Phase should be Completed, meaning the resources in my-project should have been restored from the backup in the backup storage location named "default".
~]$ oc describe restore --namespace openshift-adp
Name:         my-project
Namespace:    openshift-adp
Labels:       <none>
Annotations:  <none>
API Version:  velero.io/v1
Kind:         Restore
Metadata:
  Creation Timestamp:  2025-04-22T01:16:54Z
  Finalizers:
    restores.velero.io/external-resources-finalizer
  Generation:        6
  Resource Version:  502671529
  UID:               131cc504-801b-4ff1-a8aa-76e7619041c9
Spec:
  Backup Name:  my-project
  Excluded Resources:
    nodes
    events
    events.events.k8s.io
    backups.velero.io
    restores.velero.io
    resticrepositories.velero.io
    csinodes.storage.k8s.io
    volumeattachments.storage.k8s.io
    backuprepositories.velero.io
  Included Resources:
    deployments
  Item Operation Timeout:  4h0m0s
  Namespace Mapping:
    my-project:  my-project
  Restore P Vs:  false
Status:
  Completion Timestamp:  2025-04-22T01:16:56Z
  Hook Status:
  Phase:     Completed
  Progress:
    Items Restored:  1
    Total Items:     1
  Start Timestamp:  2025-04-22T01:16:54Z
Events:  <none>
And the deployment in my-project should have been restored. Wow! It works.
~]$ oc get deployment --namespace my-project
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   1/1     1            1           3m25s