
Flow Logs are used to log the traffic coming in and going out of a Network Interface. For example, perhaps you have a Network Load Balancer. A Flow Log can be used to log the requests coming in and going out of the Network Interfaces being used by the Network Load Balancer.
You can create a Flow Log:
- For all of the Network Interfaces in a Virtual Private Cloud (VPC)
- For specific Network Interfaces in a Virtual Private Cloud (VPC)
The Flow Logs can be published to:
- A CloudWatch Log Group
- An S3 Bucket
By default, a flow log record looks something like this.
version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
2 123456789012 eni-07a2b417b8527403c 35.203.211.127 172.31.47.140 54135 53522 6 1 44 1696986432 1696986437 ACCEPT OK
The Linux date command can be used to convert the start and end integers into a human-readable date and time. For example, the start value 1696986432 from the record above converts like this.
~]$ echo $(date -d @1696986432)
Tue Oct 10 20:07:12 CDT 2023
Let's say you have the following files on your Terraform server.
├── required_providers.tf
├── flow_logs (directory)
│ ├── flow_logs.tf
│ ├── network_interface.tf
│ ├── provider.tf
│ ├── s3_bucket.tf
required_providers.tf will almost always have this.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
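Optionally, a version constraint can also be added so that terraform init pulls a predictable provider release. This is just a sketch; the "~> 5.0" constraint is an example, not something the files above require.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # example constraint - adjust to the provider version you have tested
    }
  }
}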
Let's say provider.tf has the following. In this example, the "default" profile in /home/username/.aws/config and /home/username/.aws/credentials is being used. This assumes you have set up Terraform as described in Amazon Web Services (AWS) - Getting Started with Terraform.
provider "aws" {
alias = "default"
profile = "default"
region = "default"
}
And flow_logs.tf could have something like this to create a Flow Log for all of the Network Interfaces in a Virtual Private Cloud (VPC). In this example, the Flow Logs will be delivered to the CloudWatch Log Group my-log-group.
resource "aws_flow_log" "flow_log" {
iam_role_arn = aws_iam_role.example.arn
log_destination = aws_cloudwatch_log_group.example.arn
traffic_type = "ALL"
vpc_id = aws_vpc.example.id
}
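The Flow Log above references an IAM role, a CloudWatch Log Group, and a VPC that are not defined in the files listed earlier. Here is a minimal sketch of what those supporting resources could look like, assuming aws_vpc.example is defined elsewhere in your Terraform files; the role and policy names are just placeholders.
# CloudWatch Log Group the Flow Logs will be delivered to
resource "aws_cloudwatch_log_group" "example" {
  name = "my-log-group"
}

# Trust policy so the VPC Flow Logs service can assume the role
data "aws_iam_policy_document" "assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["vpc-flow-logs.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "example" {
  name               = "flow-log-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

# Permissions the role needs to write to the CloudWatch Log Group
data "aws_iam_policy_document" "flow_log" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:DescribeLogGroups",
      "logs:DescribeLogStreams",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "example" {
  name   = "flow-log-policy"
  role   = aws_iam_role.example.id
  policy = data.aws_iam_policy_document.flow_log.json
}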
If you want to capture the Flow Logs for a Network Interface and store the logs in an S3 Bucket:
- Use the aws_network_interface data source to get the Network Interface JSON. Check out my article List Network Interfaces using Terraform.
- Use the aws_s3_bucket resource to create an S3 Bucket. Check out my article Create S3 Bucket using Terraform.
data "aws_network_interface" "network_interface" {
filter {
name = "interface-type"
values = ["network_load_balancer"]
}
filter {
name = "subnet-id"
values = ["subnet-123456789012abdefg"]
}
}
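Note that the filters must match exactly one Network Interface, otherwise the data source lookup will fail. If you want to confirm which Network Interface the filters resolved to before attaching a Flow Log to it, something like this hypothetical output can be added; the output name is just an example.
output "flow_log_eni_id" {
  description = "The Network Interface the Flow Log will be attached to"
  value       = data.aws_network_interface.network_interface.id
}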
resource "aws_s3_bucket" "s3_bucket" {
bucket = "my-logs-bucket-plmjuhbgfcvr"
tags = {
Name = "my-logs-bucket-plmjuhbgfcvr"
Environment = "production"
}
}
resource "aws_flow_log" "flow_log" {
log_destination = "aws_s3_bucket.s3_bucket.arn"
log_destination_type = "s3"
traffic_type = "ALL"
eni_id = data.aws_network_interface.network_interface.id
}
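A Network Load Balancer typically has one Network Interface per subnet it is enabled in, so you may want a Flow Log on each of them. A sketch of one way to do that, assuming the aws_network_interfaces (plural) data source is available in your version of the AWS provider, could look like this.
# Look up every Network Load Balancer interface in the subnet
data "aws_network_interfaces" "nlb" {
  filter {
    name   = "interface-type"
    values = ["network_load_balancer"]
  }
  filter {
    name   = "subnet-id"
    values = ["subnet-123456789012abdefg"]
  }
}

# Create one Flow Log per matched Network Interface
resource "aws_flow_log" "per_eni" {
  for_each             = toset(data.aws_network_interfaces.nlb.ids)
  log_destination      = aws_s3_bucket.s3_bucket.arn
  log_destination_type = "s3"
  traffic_type         = "ALL"
  eni_id               = each.value
}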
You may need to reissue the terraform init command.
terraform init
The terraform plan command can be used to see what Terraform will try to do.
terraform plan
The terraform apply command can be used to create or update the Flow Log and its related resources.
terraform apply -auto-approve
Did you find this article helpful?
If so, consider buying me a coffee over at