Building a 3-Tier Multi-Region High Availability Architecture with Terraform
Deploying a three-tier design across multiple regions calls for careful attention to best practices, scalability, and accuracy.
In our last blog here, we illustrated how to deploy the core infrastructure. We are now taking things a step further by evolving that design into a three-tier architecture. This entails adding a database layer, providing high availability by replicating infrastructure across regions, and employing Terraform modules and provider aliases to improve modularity and reuse.
Overview
A three-tier architecture separates the system into logical layers, each serving a specific purpose:
Frontend Layer: An Auto Scaling Group (ASG) deployed behind an Application Load Balancer (ALB) to ensure high availability and fault tolerance.
Application Layer: Hosted on instances in private subnets. These instances use NAT Gateways in public subnets for outbound internet access, keeping them off the public internet. Application configuration is maintained through a launch template, and each instance carries an instance profile that grants access to specific AWS resources such as Secrets Manager (for database credentials) and an S3 bucket.
Database Layer: An Amazon RDS instance kept within private subnets. The database is reachable only from the application instances, establishing a strong security boundary, and a read replica in the secondary region provides failover and read performance.
Creating Aliases for Multi-Region Resources
Provider aliases were one of the project's key improvements. An alias lets us define multiple configurations for the same provider, so a single root configuration can deploy to several regions.
This removes the need to duplicate code or maintain separate setups per region.
provider "aws" {
alias = "primary"
region = var.primary_region
}
provider "aws" {
alias = "secondary"
region = var.secondary_region
}
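The region variables themselves are simple string inputs. Their declarations aren't shown in the original post; a minimal sketch (the default values here are placeholders, not taken from the original configuration):

variable "primary_region" {
  description = "Region for the primary deployment"
  type        = string
  default     = "us-east-1" # placeholder default
}

variable "secondary_region" {
  description = "Region for the secondary (failover) deployment"
  type        = string
  default     = "us-west-2" # placeholder default
}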
With these aliases in place, we can instantiate our existing resources, such as the vpc module, in another region.
module "vpc" {
source = "../../modules/vpc"
cidr_block = var.primary_cidr_block
dns_hostnames = true
desired_azs = 2
private_subnets_no = 3
public_subnets_no = 2
providers = {
aws = aws.primary
}
}
module "vpc_secondary" {
source = "../../modules/vpc"
cidr_block = var.secondary_cidr_block
dns_hostnames = true
desired_azs = 2
private_subnets_no = 2
public_subnets_no = 1
providers = {
aws = aws.secondary
}
}
Enabling Module Reuse
To ensure that the VPC module works seamlessly with provider aliases, we updated its terraform block to include the configuration_aliases attribute:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws]
    }
  }
}
This setting enables the VPC module to accept aliased providers without any additional adjustments. The result? One module, two regions, and a much simpler Terraform configuration.
Why This Approach?
Avoids Code Duplication: The same VPC module can be reused for any number of regions by simply passing the appropriate parameters.
Enhances Modularity: The VPC module remains clean, reusable, and easy to maintain.
Improves Scalability: Adding more regions or environments becomes trivial: just add another module block and pass a new provider alias, as sketched below.
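For instance, extending the setup to a hypothetical third region only requires another alias and another module block. The region and CIDR values below are illustrative, not part of the original configuration:

provider "aws" {
  alias  = "tertiary"
  region = "eu-west-1" # illustrative only
}

module "vpc_tertiary" {
  source             = "../../modules/vpc"
  cidr_block         = "10.2.0.0/16" # illustrative only
  dns_hostnames      = true
  desired_azs        = 2
  private_subnets_no = 2
  public_subnets_no  = 1

  providers = {
    aws = aws.tertiary
  }
}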
Attaching an Instance Profile
Instance profiles in AWS allow your EC2 instances to automatically assume an IAM role. Think of it as giving your instances their own "ID card" for safely accessing other AWS services.
To keep things clean, we'll:
Leverage a shared module for policies
Use an IAM role for EC2: The role will attach necessary policies for accessing Secrets Manager, S3, and RDS.
Attach the role to an instance profile
Policy Module
data "aws_secretsmanager_secret" "secret" {
name = var.secret_name
}
data "aws_secretsmanager_secret_version" "secret_version" {
secret_id = data.aws_secretsmanager_secret.secret.id
}
locals {
secret_data = jsondecode(data.aws_secretsmanager_secret_version.secret_version.secret_string)
}
data "aws_caller_identity" "current" {}
resource "aws_iam_policy" "ec2_policy" {
name = "CombinedEC2Policy"
description = "Combined policy for accessing Secrets Manager, RDS, and S3"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Action = ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
Resource = data.aws_secretsmanager_secret.secret.arn
},
{
Effect = "Allow",
Action = "rds:DescribeDBInstances",
Resource = "arn:aws:rds:${var.region}:${data.aws_caller_identity.current.account_id}:db:*"
},
{
Effect = "Allow",
Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
Resource = ["${var.source_bucket.arn}", "${var.source_bucket.arn}/*"]
}
]
})
}
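Later on, the root configuration reads the decoded credentials as module.instance_profile.secret.DB_USERNAME and module.instance_profile.secret.DB_PASSWORD, which suggests the module wrapping these resources exposes local.secret_data as an output. That output isn't shown in the original listing; a minimal sketch, assuming it is named secret:

output "secret" {
  description = "Decoded key/value pairs from the Secrets Manager secret"
  value       = local.secret_data
  sensitive   = true # secret_string is sensitive, so the output must be too
}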
Now that we've defined our shared policies, the next step is to set up the instance profile.
IAM Role Definitions
resource "aws_iam_role" "secret_role" {
name = "ec2-secrets-role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "ec2.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
Attaching Policies to the Role:
Using the shared policies defined earlier, we attach the following:
Secrets Manager Policy: Allows retrieving secrets for sensitive configurations.
S3 Access Policy: Grants access to a specific S3 bucket for your application logs or static assets.
RDS Describe Policy: Enables retrieving metadata about your database.
resource "aws_iam_role_policy_attachment" "attach_secret_policy" { role = aws_iam_role.secret_role.name policy_arn = module.policy.secret_policy.arn } resource "aws_iam_role_policy_attachment" "attach_s3_policy" { role = aws_iam_role.secret_role.name policy_arn = module.policy.ec2_to_s3_policy.arn } resource "aws_iam_role_policy_attachment" "attach_rds_describe_policy" { role = aws_iam_role.secret_role.name policy_arn = module.policy.rds_describe_policy.arn }
With the instance profile ready, you simply pass its ARN to the launch_template module when provisioning your EC2 instances. This ensures that any EC2 instance launched with the profile has the access it needs.
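Inside the launch template module, that looks roughly like the snippet below. The variable names (instance_profile_arn, ami_id, instance_type) are assumptions for illustration and aren't defined in the original post:

resource "aws_launch_template" "web_server" {
  name_prefix   = "web-server-"
  image_id      = var.ami_id        # assumed variable
  instance_type = var.instance_type # assumed variable

  # Attach the instance profile so instances can reach Secrets Manager, S3, and RDS metadata.
  iam_instance_profile {
    arn = var.instance_profile_arn # assumed variable, passed in from the instance profile module
  }
}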
Provisioning S3 Buckets with Replication
Like before, we take advantage of aliasing to manage the creation of S3 buckets across multiple regions. However, what makes this setup stand out is the use of conditional logic. By incorporating a variable like create_replication_rule, we dynamically decide whether to provision the bucket standalone or set it up for replication. This flexibility keeps the configuration reusable across multiple scenarios and projects.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws]
    }
  }
}

resource "random_string" "name" {
  special = false
  upper   = false
  length  = 8
}
resource "aws_iam_role" "replication_role" {
count = var.create_replication_rule ? 1 : 0
name = "s3_replication_role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "ec2.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
resource "aws_s3_bucket" "bucket" {
bucket = "${var.environnment}-backup-${random_string.name.result}"
force_destroy = true
}
resource "aws_s3_bucket_versioning" "versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}
Replication Configuration
The replication configuration is added conditionally, depending on the bucket's role. The following snippet highlights how the replication rules, including delete marker replication, are applied dynamically.
resource "aws_s3_bucket_replication_configuration" "replication_configuration" {
count = var.create_replication_rule ? 1 : 0
bucket = aws_s3_bucket.bucket.id
role = aws_iam_role.replication_role[count.index].arn
rule {
id = "ReplicationRule"
status = "Enabled"
destination {
bucket = var.destination_bucket.arn
storage_class = "GLACIER"
}
filter {
prefix = ""
}
delete_marker_replication {
status = "Enabled"
}
}
depends_on = [aws_s3_bucket_versioning.versioning]
}
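One thing the replication role still needs is a permissions policy: without it, S3 cannot read from the source bucket or write into the destination. The original post doesn't show this policy, so the following is a minimal sketch of what it could look like:

resource "aws_iam_role_policy" "replication_permissions" {
  count = var.create_replication_rule ? 1 : 0
  name  = "s3_replication_permissions"
  role  = aws_iam_role.replication_role[count.index].id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        # Read the replication configuration and list the source bucket.
        Effect   = "Allow",
        Action   = ["s3:GetReplicationConfiguration", "s3:ListBucket"],
        Resource = aws_s3_bucket.bucket.arn
      },
      {
        # Read object versions from the source bucket.
        Effect   = "Allow",
        Action   = ["s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl", "s3:GetObjectVersionTagging"],
        Resource = "${aws_s3_bucket.bucket.arn}/*"
      },
      {
        # Write replicated objects into the destination bucket.
        Effect   = "Allow",
        Action   = ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
        Resource = "${var.destination_bucket.arn}/*"
      }
    ]
  })
}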
By leveraging the shared S3 bucket module, we set up the source and destination buckets with distinct roles. The source bucket includes a replication rule, while the destination bucket does not. Because the module enables versioning unconditionally, both buckets satisfy the versioning requirement for replication.
module "s3_bucket_source" {
source = "../../modules/shared/s3"
create_replication_rule = true
destination_bucket = module.s3_bucket_destination.s3_bucket
providers = {
aws = aws.primary
}
}
module "s3_bucket_destination" {
source = "../../modules/shared/s3"
create_replication_rule = false
providers = {
aws = aws.secondary
}
}
Adding the Database Layer
In this section, we introduce the database layer, which uses a modular and reusable approach for provisioning security groups and managing database instances. This setup demonstrates how existing modules, such as security groups, can seamlessly integrate into an expanded infrastructure.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws]
    }
  }
}

# Reusing a shared Security Group module
module "security_group" {
  count                 = var.create_replica ? 0 : 1
  source                = "../shared/security_groups"
  vpc_id                = var.vpc_id
  allow_internet_access = false
  inbound_ports         = var.inbound_ports
  security_group_ref_id = var.web_server_security_group_id
}

# Defining a Subnet Group for RDS
resource "aws_db_subnet_group" "main" {
  name       = "my-db-subnet-group"
  subnet_ids = var.private_subnet_ids

  tags = {
    Name = "MyDBSubnetGroup"
  }
}

# Creating the RDS Instance
resource "aws_db_instance" "db" {
  allocated_storage       = 20
  engine                  = "mysql"
  engine_version          = "8.0"
  instance_class          = "db.t3.micro"
  identifier              = "${var.rds_identifier}-${var.instance_role}"
  username                = var.create_replica ? null : var.db_username
  password                = var.create_replica ? null : var.db_password
  parameter_group_name    = "default.mysql8.0"
  publicly_accessible     = false
  vpc_security_group_ids  = var.create_replica ? null : [module.security_group[0].security_group_id]
  db_subnet_group_name    = aws_db_subnet_group.main.name
  replicate_source_db     = var.create_replica ? var.source_db : null
  multi_az                = false
  backup_retention_period = 7
  skip_final_snapshot     = true

  tags = {
    Name = "${var.environment}-${var.instance_role}-rds"
  }
}
Reusing the Security Group Module:
The security_group module is leveraged to create a security group tailored to the RDS instance, ensuring restricted access. By using var.web_server_security_group_id, access is limited to traffic originating from EC2 instances deployed via the configured launch template.
Conditional Logic:
The var.create_replica variable lets the setup dynamically determine whether this instance is a replica or a standalone database. Replica-specific settings like replicate_source_db are used only when creating replicas, while primary instances get their own usernames, passwords, and security groups.
Subnet Group:
The aws_db_subnet_group ensures that the RDS instance resides in private subnets, enhancing security and compliance.
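The conditional logic hinges on two module variables whose defaults let the primary deployment omit them entirely. Their exact declarations aren't shown in the post; a plausible sketch:

variable "create_replica" {
  description = "Whether this instance is a read replica of an existing database"
  type        = bool
  default     = false
}

variable "source_db" {
  description = "ARN of the source DB instance to replicate (only used when create_replica is true)"
  type        = string
  default     = null
}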
Calling the RDS Module
To deploy the database layer across regions, we call the same module twice: once for the primary database and once for a replica in the failover region. The replica relies on networking resources provisioned in its own region while referencing the primary instance created earlier.
module "rds" {
source = "../../modules/rds"
db_password = module.instance_profile.secret.DB_PASSWORD
db_username = module.instance_profile.secret.DB_USERNAME
rds_identifier = var.rds_identifier
web_server_security_group_id = module.launch_template.web_server_security_group_id
vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnet_ids
environment = var.environment
instance_role = "primary"
providers = {
aws = aws.primary
}
}
module "rds_secondary" {
source = "../../modules/rds"
rds_identifier = var.rds_identifier
web_server_security_group_id = module.launch_template.web_server_security_group_id
vpc_id = module.vpc_secondary.vpc_id
private_subnet_ids = module.vpc_secondary.private_subnet_ids
environment = var.environment
instance_role = "replica"
create_replica = true
source_db = module.rds.rds_instance.arn
providers = {
aws = aws.secondary
}
}
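Note that the replica call references module.rds.rds_instance.arn, so the RDS module needs to export the instance it creates. That output isn't shown in the post; a minimal sketch, assuming an output named rds_instance:

output "rds_instance" {
  description = "Key attributes of the RDS instance managed by this module"
  value = {
    arn = aws_db_instance.db.arn
    id  = aws_db_instance.db.id
  }
}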
Conclusion
By modularizing our infrastructure, we were able to extend it across regions while retaining simplicity and consistency. The database layer integrates cleanly with previously provisioned resources, ensuring both security and availability. Conditional logic lets us reuse the same modules for both primary and replica databases, highlighting Terraform's modular design. This approach simplifies management while enabling resilient, scalable architectures across regions.