Amazon Web Services (AWS) - Archive and Delete S3 Bucket Objects using Terraform


An S3 Bucket is loosely similar to an NFS share in that it provides shared storage, although an S3 Bucket is object storage accessed over HTTPS rather than a mountable file system.

An S3 Bucket Lifecycle Configuration can be used to:

  • Move objects from one Storage Class to another Storage Class (e.g. from the Standard Storage Class to the Glacier or Glacier Deep Archive Storage Class)
  • Delete objects

There is a cost to store objects in an S3 Bucket, and different Storage Classes have different costs.

Let's say you have the following files on your Terraform server.

├── required_providers.tf
└── s3_buckets (directory)
    ├── provider.tf
    └── buckets.tf

 

required_providers.tf will almost always have this.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
    }
  }
}
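
Optionally, you can also pin the AWS provider to a version range so that terraform init does not pull in a newer major release with breaking changes. Here is a sketch; the ~> 5.0 constraint is just an example, adjust it to the provider version you have tested against.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # example constraint, allows any 5.x release
    }
  }
}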

 

Let's say provider.tf in the s3_buckets directory has the following. In this example, the "default" profile in /home/username/.aws/config and /home/username/.aws/credentials is being used. This assumes you have set up Terraform as described in Amazon Web Services (AWS) - Getting Started with Terraform.

provider "aws" {
  alias   = "default"
  profile = "default"
  region  = "default"
}
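
For reference, the "default" profile typically comes from entries shaped like the following. The values here are placeholders, not real credentials.

/home/username/.aws/config

[default]
region = us-east-1

/home/username/.aws/credentials

[default]
aws_access_key_id     = <your access key id>
aws_secret_access_key = <your secret access key>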

 

And let's say buckets.tf contains something like this to create a bucket named my-bucket-fjfnv9d3d9, to transition objects to the Deep Archive Storage Class once they are 180 days old, and to expire (delete) objects once they are 365 days old. Note that lifecycle days are counted from when an object was created, not from when it was last accessed. storage_class can contain one of these values.

  • DEEP_ARCHIVE
  • GLACIER
  • GLACIER_IR
  • INTELLIGENT_TIERING
  • ONEZONE_IA
  • STANDARD_IA
resource "aws_s3_bucket" "my_bucket_fjfnv9d3d9" {
  bucket = "my-bucket-fjfnv9d3d9"

  tags = {
    Name        = "my-bucket-fjfnv9d3d9"
    Environment = "staging"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "s3_bucket_lifecycle_configuration" {
  bucket = aws_s3_bucket.s3_bucket.id

  rule {
    id = "expiration"

    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    expiration {
      days = 365
    }

    transition {
      days = 180
      storage_class = "DEEP_ARCHIVE"
    }

  }
}
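
A single aws_s3_bucket_lifecycle_configuration resource can contain more than one rule block. For example, if versioning were enabled on the bucket, a rule like the following could be added alongside the rule above to clean up superseded object versions and abandoned multipart uploads. This is just a sketch, and the day counts are example values.

  rule {
    id     = "cleanup"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    # delete noncurrent (superseded) versions 30 days after a newer version replaces them
    noncurrent_version_expiration {
      noncurrent_days = 30
    }

    # discard the stored parts of multipart uploads that were started but never completed
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }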

 

You may need to run the terraform init command.

terraform init

 

The terraform plan command can be used to see what Terraform will try to do.

terraform plan
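
Optionally, you can save the plan to a file and then apply that file, so that terraform apply performs exactly what the plan showed.

terraform plan -out=the.plan
terraform apply the.plan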

 

And the terraform apply command can be used to create the S3 Bucket and the Lifecycle Configuration.

terraform apply
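
Afterwards, assuming the AWS CLI is installed and configured with the same profile, the aws s3api get-bucket-lifecycle-configuration command can be used to confirm the Lifecycle Configuration was attached to the bucket.

aws s3api get-bucket-lifecycle-configuration --bucket my-bucket-fjfnv9d3d9 --profile default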

 



