Let's say you have the following files on your Terraform server.
├── required_providers.tf
├── elastic_container_services (directory)
│   ├── task_definitions (directory)
│   │   ├── ec2 (directory)
│   │   │   ├── provider.tf
│   │   │   ├── task_definition.tf
├── iam (directory)
│ ├── policies.tf
│ ├── profiles.tf
│ ├── provider.tf
│ ├── roles.tf
├── elastic_file_systems (directory)
│ ├── data.tf
│ ├── outputs.tf
│ ├── provider.tf
required_providers.tf will almost always have this.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
Let's say provider.tf has the following. In this example, the "default" profile in /home/username/.aws/config and /home/username/.aws/credentials is being used. This assumes you have set up Terraform as described in Amazon Web Services (AWS) - Getting Started with Terraform.
provider "aws" {
  alias   = "default"
  profile = "default"
  region  = "us-east-1"
}
This assumes you have already:
- Created an Elastic Container Service (ECS) Cluster using Terraform
- Created an EC2 instance using Terraform
- Created the ecsInstanceRole using Terraform
- Created an Elastic File System (EFS) using Terraform
- Created an Elastic File System (EFS) Access Point using Terraform
- Created Elastic File System (EFS) Mount Targets using Terraform
AVOID TROUBLE
The Elastic File System (EFS) Mount Targets and Elastic Container Service (ECS) Services must be in the same Virtual Private Cloud (VPC).
The Security Group associated with the EC2 Instance must allow ingress on the ECS Task Definition containerPort / hostPort and ECS Service container_port.
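For example, a security group that allows the task port (3000 in this article's Task Definition) and NFS traffic to the EFS Mount Targets could look something like this. This is a sketch: the resource name my_security_group, the data.aws_vpc.vpc reference, and the CIDR blocks are placeholders you would replace with your own values.

```hcl
resource "aws_security_group" "my_security_group" {
  name   = "ecs-efs-security-group"
  vpc_id = data.aws_vpc.vpc.id # placeholder VPC reference

  # allow the ECS Task Definition containerPort / hostPort
  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # adjust to your network
  }

  # allow NFS so the EC2 instance can reach the EFS Mount Targets
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # adjust to your network
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```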
In addition to the AmazonEC2ContainerServiceforEC2Role policy, the ecsInstanceRole role should also include "arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess".
resource "aws_iam_role" "ecsInstanceRole" {
  name               = "ecsInstanceRole"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
  managed_policy_arns = [
    data.aws_iam_policy.AmazonEC2ContainerServiceforEC2Role_policy.arn,
    "arn:aws:iam::aws:policy/AmazonElasticFileSystemFullAccess"
  ]
}
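The data sources referenced by the role could be defined like this. This is a sketch; the names assume_role and AmazonEC2ContainerServiceforEC2Role_policy simply match the references in the resource above.

```hcl
# trust policy allowing EC2 instances to assume the role
data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# look up the AWS managed policy for ECS container instances
data "aws_iam_policy" "AmazonEC2ContainerServiceforEC2Role_policy" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
```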
It is also important to recognize that when creating your Elastic Container Service (ECS) services, the service will almost always span two or more subnets.
subnets = [data.aws_subnets.subnets.ids[0],data.aws_subnets.subnets.ids[1]]
You will want to ensure there are Elastic File System (EFS) Mount Targets in the same subnets.
resource "aws_efs_mount_target" "my_aws_efs_mount_target" {
  for_each        = toset([data.aws_subnets.subnets.ids[0], data.aws_subnets.subnets.ids[1]])
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = each.key
  security_groups = [aws_security_group.my_security_group.id]
}
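The data.aws_subnets data source used in the subnets lists could look something like this (a sketch; the data.aws_vpc.vpc reference is a placeholder for however you look up your VPC).

```hcl
# return the ids of the subnets in the target VPC
data "aws_subnets" "subnets" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc.id] # placeholder VPC reference
  }
}
```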
task_definition.tf could have something like this, where requires_compatibilities is EC2 and there is a volume block and mountPoints entry for the Elastic File System (EFS) that will be used with the Task Definition.
resource "aws_ecs_task_definition" "flask-ec2-task-definition" {
  family                   = "flask"
  network_mode             = "awsvpc"
  requires_compatibilities = ["EC2"]
  cpu                      = 1024
  memory                   = 2048
  container_definitions = jsonencode([
    {
      name    = "flask-container"
      cpu     = 10
      memory  = 512
      image   = "tiangolo/uwsgi-nginx-flask:python3.11"
      command = ["cp", "/myapp/main.py", "/app/main.py"]
      mountPoints = [
        {
          containerPath = "/efs"
          sourceVolume  = "efs-storage"
        }
      ]
      portMappings = [
        {
          containerPort = 3000
          hostPort      = 3000
        }
      ]
    }
  ])
  volume {
    name = "efs-storage"
    efs_volume_configuration {
      file_system_id     = "fs-123456789abcdefg"
      root_directory     = "/"
      transit_encryption = "ENABLED"
      authorization_config {
        access_point_id = "fsap-123456789abcdefg"
        iam             = "DISABLED"
      }
    }
  }
}
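An aws_ecs_service tying the Task Definition to your cluster might then look something like this. This is a sketch: the aws_ecs_cluster.my_cluster and aws_lb_target_group.my_target_group references are placeholders, and container_port matches the containerPort in the Task Definition above.

```hcl
resource "aws_ecs_service" "flask-service" {
  name            = "flask-service"
  cluster         = aws_ecs_cluster.my_cluster.id # placeholder cluster reference
  task_definition = aws_ecs_task_definition.flask-ec2-task-definition.arn
  desired_count   = 1
  launch_type     = "EC2"

  # the subnets here should match the EFS Mount Target subnets
  network_configuration {
    subnets         = [data.aws_subnets.subnets.ids[0], data.aws_subnets.subnets.ids[1]]
    security_groups = [aws_security_group.my_security_group.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.my_target_group.arn # placeholder target group
    container_name   = "flask-container"
    container_port   = 3000
  }
}
```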
You may need to reissue the terraform init command.
terraform init
The terraform plan command can be used to see what Terraform will try to do.
terraform plan
The terraform apply command can be used to create, update or delete the resource.
terraform apply -auto-approve
You could then make an SSH connection to the EC2 instance as ec2-user.
~]$ ssh -i ~/.ssh/id_rsa ec2-user@3.123.123.13
Last login: Thu Sep 21 12:28:06 2023 from 10.14.5.15
__| __| __|
_| ( \__ \ Amazon ECS-Optimized Amazon Linux AMI
____|\___|____/
For documentation, visit http://aws.amazon.com/documentation/ecs
[ec2-user@ip-10-20-1-189 ~]$
And list the Docker containers.
~]$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7958ffc85a3 tiangolo/uwsgi-nginx-flask:python3.11 "/entrypoint.sh /sta…" 18 seconds ago Up 17 seconds ecs-flask-ec2-1-flask-container-fce2afae91adf3f25900
3ce4d3790047 amazon/amazon-ecs-pause:0.1.0 "/pause" 23 seconds ago Up 19 seconds ecs-flask-ec2-1-internalecspause-a6e8f0e6a399bdffba01
deff0c19ba34 amazon/amazon-ecs-agent:latest "/agent" 17 hours ago Up 17 hours (healthy) ecs-agent
And the docker exec and mount commands can be used to see that the Elastic File System is mounted in the container.
[ec2-user@ip-10-20-1-189 ~]$ sudo docker exec c7958ffc85a3 mount
127.0.0.1:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,port=20645,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
And the docker exec and ls commands can be used to list the files and directories in the mounted Elastic File System.
[ec2-user@ip-10-20-1-189 ~]$ sudo docker exec c7958ffc85a3 ls -l /efs