
Let's say you mistakenly deleted a file on one of your EC2 instances, you have no backup of the file, and you need it back. Oh no!
rm /home/john.doe/important_file.txt
If the file was located on a filesystem on an Elastic Block Store (EBS) volume and you have snapshots of your EBS volumes, you may be able to restore the file from an EBS volume snapshot.
The aws ec2 describe-snapshots command can be used to list your Elastic Block Store (EBS) volume snapshots. Almost always, you'll want to use the --query option to return only the snapshots of the EBS volume that contains the file you need to restore.
aws ec2 describe-snapshots
Something like this should be returned.
]$ aws ec2 describe-snapshots --query 'Snapshots[?VolumeId==`vol-0d989fe3bad4dd2f9`]'
[
    {
        "Description": "my snapshot",
        "Encrypted": false,
        "OwnerId": "123456789012",
        "Progress": "100%",
        "SnapshotId": "snap-0bb8e2a3cfaa63c11",
        "StartTime": "2024-04-13T05:21:54.893000+00:00",
        "State": "completed",
        "VolumeId": "vol-0d989fe3bad4dd2f9",
        "VolumeSize": 8,
        "Tags": [],
        "StorageTier": "standard"
    }
]
A snapshot cannot be mounted directly, so you'll need to create a new volume from the snapshot. The aws ec2 create-volume command can be used to do this. It's important to ensure that the new volume is created in the same Availability Zone as the EC2 instance, since a volume can only be attached to an instance in its own Availability Zone.
aws ec2 create-volume --snapshot-id snap-0bb8e2a3cfaa63c11 --availability-zone us-east-1a
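If you don't know the instance's Availability Zone offhand, it can be looked up with aws ec2 describe-instances and fed straight into the create-volume call. A sketch, using the hypothetical instance and snapshot IDs from this article:

```shell
INSTANCE_ID="i-06f4d813abb0316a9"      # hypothetical instance ID
SNAPSHOT_ID="snap-0bb8e2a3cfaa63c11"   # hypothetical snapshot ID

# Look up the Availability Zone the instance lives in.
AZ=$(aws ec2 describe-instances \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' \
  --output text)

# Create the new volume from the snapshot in that same zone.
aws ec2 create-volume --snapshot-id "$SNAPSHOT_ID" --availability-zone "$AZ"
```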
On your EC2 instance, the fdisk --list command can be used to list the disks and their partitions. In this example, there is a single disk, /dev/xvda.
~]$ sudo fdisk --list
Disk /dev/xvda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 787BCE86-D220-459C-A909-15C4F48657B4
Device         Start      End  Sectors Size Type
/dev/xvda1     24576 16777182 16752607   8G Linux filesystem
/dev/xvda127   22528    24575     2048   1M BIOS boot
/dev/xvda128    2048    22527    20480  10M EFI System
Partition table entries are not in disk order.
Likewise, the /dev (devices) directory only contains the "a" device, with the /dev/sda names being symlinks that point to the actual /dev/xvda device nodes.
~]$ ls -l /dev/ | grep -i sd
lrwxrwxrwx. 1 root root 4 Apr 15 05:22 sda -> xvda
lrwxrwxrwx. 1 root root 5 Apr 15 05:22 sda1 -> xvda1
lrwxrwxrwx. 1 root root 7 Apr 15 05:22 sda127 -> xvda127
lrwxrwxrwx. 1 root root 7 Apr 15 05:22 sda128 -> xvda128
And the mount command shows that only the /dev/xvda1 partition is mounted.
~]$ mount | grep -i /dev
/dev/xvda1 on / type xfs (rw,noatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=1024,noquota)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime,seclabel)
The aws ec2 attach-volume command can be used to attach the volume that was created from the snapshot to the EC2 instance. In this example, we go with the "b" device, /dev/sdb, since "a" is in use and no devices are using "b". (Note that on Nitro-based instances, the volume may surface in the OS as an NVMe device such as /dev/nvme1n1 rather than /dev/xvdb, regardless of the device name you request.)
~]$ aws ec2 attach-volume --volume-id vol-0339078b6ef02c551 --instance-id i-06f4d813abb0316a9 --device /dev/sdb
{
"AttachTime": "2024-04-15T05:29:59.001000+00:00",
"Device": "/dev/sdb",
"InstanceId": "i-06f4d813abb0316a9",
"State": "attaching",
"VolumeId": "vol-0339078b6ef02c551"
}
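Attaching is asynchronous; note the "attaching" state in the response above. Before looking for the new device, you can block until the volume reports "in-use" (the volume ID is the hypothetical one from above):

```shell
# Wait until the volume's state transitions from "attaching" to "in-use".
aws ec2 wait volume-in-use --volume-ids vol-0339078b6ef02c551
```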
Now, we see both the /dev/sda and /dev/sdb devices.
~]$ ll /dev/ | grep -i sd
lrwxrwxrwx. 1 root root 4 Apr 15 05:22 sda -> xvda
lrwxrwxrwx. 1 root root 5 Apr 15 05:22 sda1 -> xvda1
lrwxrwxrwx. 1 root root 7 Apr 15 05:22 sda127 -> xvda127
lrwxrwxrwx. 1 root root 7 Apr 15 05:22 sda128 -> xvda128
lrwxrwxrwx. 1 root root 4 Apr 15 05:30 sdb -> xvdb
lrwxrwxrwx. 1 root root 5 Apr 15 05:30 sdb1 -> xvdb1
lrwxrwxrwx. 1 root root 7 Apr 15 05:30 sdb127 -> xvdb127
lrwxrwxrwx. 1 root root 7 Apr 15 05:30 sdb128 -> xvdb128
And fdisk --list shows both /dev/xvda and /dev/xvdb.
~]$ sudo fdisk --list
Disk /dev/xvda: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 787BCE86-D220-459C-A909-15C4F48657B4
Device         Start      End  Sectors Size Type
/dev/xvda1     24576 16777182 16752607   8G Linux filesystem
/dev/xvda127   22528    24575     2048   1M BIOS boot
/dev/xvda128    2048    22527    20480  10M EFI System
Partition table entries are not in disk order.
Disk /dev/xvdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2A9A4BF8-790D-49FB-9F05-95BC3D86CD47
Device         Start      End  Sectors Size Type
/dev/xvdb1     24576 16777182 16752607   8G Linux filesystem
/dev/xvdb127   22528    24575     2048   1M BIOS boot
/dev/xvdb128    2048    22527    20480  10M EFI System
Partition table entries are not in disk order.
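As an alternative to fdisk, lsblk gives a compact tree of disks and partitions and is a quick way to confirm which disk is the newly attached one (exact output will vary by instance):

```shell
# List block devices with their size, type, and mountpoint.
# The cloned disk (xvdb in this example) should show no mountpoint yet.
lsblk --output NAME,SIZE,TYPE,MOUNTPOINT
```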
Let's create a directory where the volume will be mounted.
sudo mkdir -p /usr/local/temp
And let's mount the volume's root partition to the temporary directory. Because this filesystem is a block-for-block copy of the root filesystem, the two XFS filesystems share the same UUID, and the mount may fail with a "duplicate UUID" error; the nouuid mount option works around this.
sudo mount --types xfs --options nouuid /dev/xvdb1 /usr/local/temp
And validate it's mounted.
~]$ mount | grep xvdb1
/dev/xvdb1 on /usr/local/temp type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=1024,noquota)
The mounted volume will probably contain the entire filesystem!
~]$ ll /usr/local/temp/
total 32
lrwxrwxrwx. 1 root root 7 Jan 30 2023 bin -> usr/bin
dr-xr-xr-x. 5 root root 16384 Jul 25 2023 boot
drwxr-xr-x. 3 root root 136 Jul 25 2023 dev
drwxr-xr-x. 83 root root 16384 Oct 2 2023 etc
drwxr-xr-x. 3 root root 22 Jul 28 2023 home
lrwxrwxrwx. 1 root root 7 Jan 30 2023 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Jan 30 2023 lib64 -> usr/lib64
drwxr-xr-x. 2 root root 6 Jul 25 2023 local
drwxr-xr-x. 2 root root 6 Jan 30 2023 media
drwxr-xr-x. 2 root root 6 Jan 30 2023 mnt
drwxr-xr-x. 4 root root 35 Jul 28 2023 opt
drwxr-xr-x. 2 root root 6 Jul 25 2023 proc
dr-xr-x---. 8 root root 221 Jan 26 02:46 root
drwxr-xr-x. 2 root root 6 Jul 25 2023 run
lrwxrwxrwx. 1 root root 8 Jan 30 2023 sbin -> usr/sbin
drwxr-xr-x. 2 root root 6 Jan 30 2023 srv
drwxr-xr-x. 2 root root 6 Jul 25 2023 sys
drwxrwxrwt. 2 root root 6 Jul 25 2023 tmp
drwxr-xr-x. 12 root root 144 Jul 25 2023 usr
drwxr-xr-x. 19 root root 266 Jul 28 2023 var
And like magic, there is the file that got deleted! Hooray!
~]# ls /usr/local/temp/home/john.doe/
important_file.txt
Let's move the file back into John Doe's directory.
mv /usr/local/temp/home/john.doe/important_file.txt /home/john.doe/
And for cleanup, let's unmount the volume.
sudo umount /usr/local/temp
Remove the directory we created. Since the volume has been unmounted, the directory should now be empty, so rmdir is a safer choice than rm -rf.
sudo rmdir /usr/local/temp
The aws ec2 detach-volume command can be used to detach the volume from the EC2 instance.
~]$ aws ec2 detach-volume --volume-id vol-0339078b6ef02c551 --instance-id i-06f4d813abb0316a9
{
"AttachTime": "2024-04-15T05:29:58+00:00",
"Device": "/dev/sdb",
"InstanceId": "i-06f4d813abb0316a9",
"State": "detaching",
"VolumeId": "vol-0339078b6ef02c551"
}
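The detached volume still incurs storage charges, so once the file has been recovered you'll probably want to delete it. Waiting for the volume to become "available" first avoids racing the detach (the volume ID is the hypothetical one from above):

```shell
# Wait for the detach to finish, then delete the temporary volume.
aws ec2 wait volume-available --volume-ids vol-0339078b6ef02c551
aws ec2 delete-volume --volume-id vol-0339078b6ef02c551
```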