How to Use Terraform Modules
Terraform is a powerful infrastructure-as-code (IaC) tool developed by HashiCorp that enables teams to define, provision, and manage cloud and on-premises infrastructure using declarative configuration files. While Terraform's core syntax is intuitive, managing large-scale infrastructure across multiple environments, such as development, staging, and production, can quickly become complex and error-prone. This is where Terraform modules come into play.
Terraform modules are reusable, self-contained packages of Terraform configurations that encapsulate infrastructure patterns. They allow you to write code once and deploy it repeatedly across different projects, environments, or teams. By abstracting complex infrastructure logic into modular components, modules promote consistency, reduce duplication, enhance maintainability, and accelerate deployment cycles.
Whether you're managing a single AWS account or orchestrating multi-cloud deployments across Azure, Google Cloud, and on-premises data centers, mastering Terraform modules is essential for scaling your IaC practices effectively. This guide provides a comprehensive, step-by-step tutorial on how to use Terraform modules, from creating your first module to publishing and consuming community-built ones, along with best practices, real-world examples, and essential tools to elevate your infrastructure automation game.
Step-by-Step Guide
Understanding Terraform Module Structure
Before diving into implementation, it's critical to understand the basic structure of a Terraform module. A module is simply a directory containing one or more Terraform configuration files (.tf files), typically including:
- main.tf: The primary configuration file defining resources.
- variables.tf: Declares input variables the module accepts.
- outputs.tf: Defines values the module exposes to the calling configuration.
- versions.tf: Specifies required Terraform and provider versions.
- README.md: Documentation for users of the module (highly recommended).
Modules can also include nested modules, local files, data sources, and even external scripts referenced via local-exec or remote-exec provisioners.
Creating Your First Terraform Module
Let's create a simple module that provisions an Amazon S3 bucket. This example will serve as the foundation for understanding how modules work.
- Create a new directory named s3-bucket-module in your project root.
- Inside this directory, create the following files:
variables.tf
variable "bucket_name" {
  description = "The name of the S3 bucket to create"
  type        = string
}

variable "region" {
  description = "The AWS region to deploy the bucket"
  type        = string
  default     = "us-east-1"
}

variable "enable_versioning" {
  description = "Enable versioning on the S3 bucket"
  type        = bool
  default     = true
}
main.tf
provider "aws" {
  region = var.region
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = var.enable_versioning ? "Enabled" : "Disabled"
  }
}
outputs.tf
output "bucket_arn" {
  description = "The ARN of the created S3 bucket"
  value       = aws_s3_bucket.this.arn
}

output "bucket_id" {
  description = "The ID of the created S3 bucket"
  value       = aws_s3_bucket.this.id
}
versions.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
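If you work from a terminal, the empty file skeleton described above can be created in one step (the directory name matches this tutorial; adjust to taste):

```shell
# Create the module directory and its standard files.
mkdir -p s3-bucket-module
touch s3-bucket-module/main.tf s3-bucket-module/variables.tf \
      s3-bucket-module/outputs.tf s3-bucket-module/versions.tf \
      s3-bucket-module/README.md
# List the resulting skeleton.
ls -1 s3-bucket-module
```

You can then paste the file contents shown above into each file.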
This module accepts three inputs: the bucket name, region, and whether to enable versioning. It provisions the S3 bucket, applies versioning if requested, and outputs the bucket's ARN and ID for use elsewhere. (In larger codebases the provider block usually lives in the root configuration rather than inside the module; it is included here to keep the example self-contained.)
Calling the Module from a Root Configuration
Now that the module is defined, you need to invoke it from a root Terraform configuration. Create a new directory called production (or any environment name) in your project root.
In production/main.tf:
provider "aws" {
  region = "us-west-2"
}

module "s3_bucket" {
  source = "../s3-bucket-module"

  bucket_name       = "my-production-bucket-123"
  region            = "us-west-2"
  enable_versioning = true
}

output "s3_bucket_arn" {
  value = module.s3_bucket.bucket_arn
}
Notice the source argument: it tells Terraform where to find the module. Here, we use a relative path, ../s3-bucket-module, pointing to the module directory one level up.
Initializing and Applying the Configuration
From within the production directory, run:
terraform init
terraform plan
terraform apply
Terraform will download the necessary AWS provider, analyze the module's structure, and execute the plan. The output will show the S3 bucket being created, and once applied, you'll see the ARN printed in the terminal thanks to the output block in the root configuration.
Using Modules from Remote Sources
While local modules are useful for internal reuse, Terraform supports sourcing modules from remote locations:
- Git repositories (GitHub, GitLab, Bitbucket)
- Terraform Registry
- HTTP URLs
To use a module from GitHub:
module "s3_bucket" {
  source = "github.com/yourusername/terraform-aws-s3-bucket?ref=v1.0.0"

  bucket_name       = "my-bucket"
  region            = "us-east-1"
  enable_versioning = true
}
The ?ref=v1.0.0 parameter pins the module to a specific Git tag, ensuring reproducibility. This is critical for production environments.
To use a module from the Terraform Registry:
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.16.0"

  bucket = "my-secure-bucket"

  versioning = {
    enabled = true
  }
}
The Terraform Registry hosts thousands of community-maintained modules, each with documentation, versioning, and testing. This is the preferred way to consume well-tested, standardized infrastructure components.
Managing Module Versions
Version control is non-negotiable in production infrastructure. Always pin your module versions to avoid unexpected behavior due to breaking changes.
For local modules, use relative paths as shown earlier. For remote modules, use:
- Git tags: source = "github.com/org/repo?ref=v2.1.0"
- Git branches: source = "github.com/org/repo?ref=main" (not recommended for production)
- Terraform Registry: version = "3.16.0"
Never use unversioned sources like source = "github.com/org/repo" without a ref; this leads to fragile, non-reproducible infrastructure.
Working with Nested Modules
Modules can call other modules, creating a hierarchical structure. This is useful for building complex systems from smaller, focused components.
For example, create a network module that provisions VPC, subnets, and route tables. Then create a web-app module that calls the network module and adds EC2 instances and load balancers.
network/main.tf
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "public" {
  count             = length(var.public_subnets)
  cidr_block        = var.public_subnets[count.index]
  availability_zone = var.availability_zones[count.index]
  vpc_id            = aws_vpc.main.id
}
web-app/main.tf
module "network" {
  source = "../network"

  vpc_cidr           = "10.0.0.0/16"
  public_subnets     = ["10.0.1.0/24", "10.0.2.0/24"]
  availability_zones = ["us-west-2a", "us-west-2b"]
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"

  # Assumes the network module exposes its subnet IDs via an output
  # named public_subnets (not shown in the snippet above).
  subnet_id = module.network.public_subnets[0]
}
This layered approach keeps each module focused and testable. It also enables teams to work independently on different layers (networking, security, application) without stepping on each other's toes.
Best Practices
Use Descriptive and Consistent Naming
Module names should clearly indicate their purpose. Avoid vague names like my-module or aws-setup. Instead, use:
- terraform-aws-s3-bucket
- terraform-azurerm-vnet
- terraform-google-kubernetes-cluster
Follow the Terraform Registry naming convention for community modules: terraform-&lt;PROVIDER&gt;-&lt;NAME&gt;.
Define Clear Input Variables and Outputs
Every module should document its inputs and outputs. Use descriptive description fields in variables.tf and outputs.tf. Avoid exposing internal implementation details through outputs; only expose what the caller needs.
Example:
variable "instance_type" {
  description = "The EC2 instance type (e.g., t3.micro, m5.large). Must be supported in the selected region."
  type        = string
}

output "instance_id" {
  description = "The AWS instance ID of the created EC2 instance."
  value       = aws_instance.web.id
}
Use Default Values Wisely
Provide sensible defaults for variables to reduce boilerplate, but avoid over-defaulting. For example, defaulting region to us-east-1 may be acceptable for internal tools but not for global applications.
Note that variable defaults must be static values, so you cannot compute a default with a function call. To derive a region from the environment, combine a static map variable with a local value:

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
}

variable "environment_regions" {
  description = "Region mapping by environment"
  type        = map(string)
  default = {
    dev     = "us-west-2"
    staging = "us-east-1"
    prod    = "eu-west-1"
  }
}

locals {
  region = lookup(var.environment_regions, var.environment, "us-east-1")
}
Enforce Version Constraints
Always declare required Terraform and provider versions in versions.tf. This prevents compatibility issues when teams use different Terraform versions.
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0, < 6.0"
    }
  }
}
Use Modules for Repeated Patterns
Don't create modules for one-off resources. Use them when you have at least two similar configurations. Common candidates:
- EC2 instances with specific AMIs and security groups
- PostgreSQL RDS instances with backup and monitoring
- API Gateway + Lambda integrations
- Network ACLs and subnets across environments
Document Everything
Every module should include a README.md that explains:
- What the module does
- Required and optional inputs
- Outputs
- Example usage
- Known limitations
- Version compatibility
Good documentation reduces onboarding time and prevents misuse.
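As a sketch, a README for the S3 bucket module from earlier might cover those points like this (the section names are a suggestion, not a standard):

```markdown
# terraform-aws-s3-bucket

Provisions an S3 bucket with optional versioning.

## Example usage

    module "s3_bucket" {
      source      = "../s3-bucket-module"
      bucket_name = "my-bucket"
    }

## Inputs

| Name              | Type   | Default     | Required |
|-------------------|--------|-------------|----------|
| bucket_name       | string | n/a         | yes      |
| region            | string | "us-east-1" | no       |
| enable_versioning | bool   | true        | no       |

## Outputs

| Name       | Description               |
|------------|---------------------------|
| bucket_arn | ARN of the created bucket |
| bucket_id  | ID of the created bucket  |

## Version compatibility

Requires Terraform >= 1.0 and AWS provider >= 5.0.
```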
Test Modules Independently
Use tools like Terratest to write automated tests for your modules. Test for:
- Resource creation
- Correct attribute values
- Failure scenarios (e.g., invalid region)
Example Terratest snippet:
// Assumes the Terratest packages (github.com/gruntwork-io/terratest/modules/random,
// .../modules/terraform) and github.com/stretchr/testify/assert are imported.
func TestS3Bucket(t *testing.T) {
	t.Parallel()

	// Generate a unique, lowercase bucket name to avoid collisions.
	bucketName := "test-bucket-" + strings.ToLower(random.UniqueId())

	options := &terraform.Options{
		TerraformDir: "../s3-bucket-module",
		Vars: map[string]interface{}{
			"bucket_name": bucketName,
		},
	}

	// Destroy the test infrastructure after the test completes.
	defer terraform.Destroy(t, options)
	terraform.InitAndApply(t, options)

	bucketArn := terraform.Output(t, options, "bucket_arn")
	assert.Contains(t, bucketArn, bucketName)
}
Separate Environments Using Workspaces or Folders
Never use the same Terraform state for multiple environments. Use separate directories or Terraform workspaces:
/environments/dev/
/environments/staging/
/environments/prod/
Each directory has its own main.tf that calls the same modules but with different variable values.
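For instance, the dev and prod roots can call the S3 bucket module from earlier with different values (the bucket names here are illustrative):

```
# environments/dev/main.tf
module "s3_bucket" {
  source      = "../../s3-bucket-module"
  bucket_name = "my-app-dev-bucket"
  region      = "us-west-2"
}

# environments/prod/main.tf
module "s3_bucket" {
  source            = "../../s3-bucket-module"
  bucket_name       = "my-app-prod-bucket"
  region            = "us-east-1"
  enable_versioning = true
}
```

Because each directory is initialized and applied independently, each environment gets its own isolated state.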
Use Remote State for Shared Modules
If multiple teams consume the same module, store the state of the root configurations that call it remotely using backends like S3, Azure Blob Storage, or Terraform Cloud. This ensures state consistency and prevents accidental overwrites.
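As a sketch, an S3 backend block for an environment's root configuration might look like this (the bucket and table names are placeholders you must replace with pre-existing resources):

```
terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state"              # placeholder: existing state bucket
    key            = "environments/prod/terraform.tfstate" # one key per environment
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                     # placeholder: optional locking table
    encrypt        = true
  }
}
```

Backend configuration cannot use variables, so each environment directory typically carries its own backend block with a distinct key.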
Tools and Resources
Terraform Registry
The Terraform Registry is the central hub for discovering, sharing, and consuming verified modules. It includes:
- Over 10,000 community and official modules
- Automated testing and versioning
- Documentation, changelogs, and ratings
- Integration with GitHub for pull request validation
Search for modules by provider (e.g., AWS, Azure, GCP) and filter by popularity, maintenance status, and version.
Terraform Cloud and Terraform Enterprise
Terraform Cloud (free tier available) and Terraform Enterprise offer advanced module management features:
- Private module registry for internal teams
- Automated testing and validation on push
- Policy as Code (Sentinel) enforcement
- Run triggers and collaboration features
Use Terraform Cloud to host your private modules and enforce governance across teams.
Atlantis
Atlantis is an open-source automation tool that integrates with GitHub, GitLab, and Bitbucket to run Terraform plans and applies automatically on pull requests. It's ideal for teams using modules because it ensures every module change is reviewed and tested before merging.
Checkov and Terrascan
Security scanning tools like Checkov and Terrascan scan your Terraform code, including modules, for misconfigurations and compliance violations (e.g., public S3 buckets, unencrypted RDS instances).
Integrate them into your CI/CD pipeline to catch issues early.
tfsec
tfsec is a static analysis tool that checks Terraform templates for security issues. It supports custom rules and works well with module-based configurations.
Visual Studio Code Extensions
Install these extensions for better module development:
- Terraform by HashiCorp: syntax highlighting, auto-completion
- Terraform Intellisense: auto-completes resource types and attributes
- Terraform Snippets: quick inserts for common patterns
Module Linter (tflint)
tflint is a Terraform linter that detects syntax errors, deprecated attributes, and style violations. It can be configured to enforce module-specific rules.
Example .tflint.hcl configuration:
plugin "aws" {
  enabled = true
}

rule "aws_s3_bucket_public_acl" {
  enabled = true
}
GitHub Actions for CI/CD
Automate module testing and publishing with GitHub Actions. Example workflow:
name: Test Module

on:
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform Init
        run: terraform init
      - name: Terraform Validate
        run: terraform validate
      - name: Terraform Plan
        run: terraform plan
Real Examples
Example 1: Multi-Tier Web Application Module
Imagine you need to deploy a scalable web application with a load balancer, auto-scaling group, and RDS database. Instead of copying code across projects, create a reusable module.
Module structure:
web-app-module/
├── main.tf
├── variables.tf
├── outputs.tf
├── versions.tf
└── README.md
main.tf
provider "aws" {
  region = var.region
}

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "public" {
  count             = length(var.public_subnets)
  cidr_block        = var.public_subnets[count.index]
  availability_zone = var.availability_zones[count.index]
  vpc_id            = aws_vpc.main.id
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.availability_zones[count.index]
  vpc_id            = aws_vpc.main.id
}

resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow HTTP and HTTPS"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_launch_template" "web" {
  name_prefix   = "web-lt-"
  image_id      = var.ami_id
  instance_type = var.instance_type

  network_interfaces {
    security_groups = [aws_security_group.web.id]
  }
}

resource "aws_autoscaling_group" "web" {
  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  min_size            = var.min_instances
  max_size            = var.max_instances
  desired_capacity    = var.desired_capacity
  vpc_zone_identifier = aws_subnet.private[*].id

  tag {
    key                 = "Name"
    value               = "web-app"
    propagate_at_launch = true
  }
}

resource "aws_lb" "web" {
  name               = "web-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = aws_subnet.public[*].id
}

resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.web.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
variables.tf
variable "region" {
  type = string
}

variable "vpc_cidr" {
  type = string
}

variable "public_subnets" {
  type = list(string)
}

variable "private_subnets" {
  type = list(string)
}

variable "availability_zones" {
  type = list(string)
}

variable "ami_id" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "min_instances" {
  type    = number
  default = 2
}

variable "max_instances" {
  type    = number
  default = 6
}

variable "desired_capacity" {
  type    = number
  default = 2
}
outputs.tf
output "load_balancer_dns" {
  value = aws_lb.web.dns_name
}

output "autoscaling_group_name" {
  value = aws_autoscaling_group.web.name
}
Now, in your environment folder:
module "web_app" {
  source = "../modules/web-app-module"

  region             = "us-east-1"
  vpc_cidr           = "10.0.0.0/16"
  public_subnets     = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets    = ["10.0.3.0/24", "10.0.4.0/24"]
  availability_zones = ["us-east-1a", "us-east-1b"]
  ami_id             = "ami-0abcdef1234567890"
  instance_type      = "t3.medium"
}
Example 2: Kubernetes Cluster with EKS Module
Deploying a managed Kubernetes cluster on AWS is complex. Use the official EKS module:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.18.0"

  cluster_name    = "my-production-cluster"
  cluster_version = "1.27"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  enable_irsa = true

  # v19 of this module uses eks_managed_node_groups
  # (the older node_groups input was removed after v17).
  eks_managed_node_groups = {
    workers = {
      min_size       = 2
      max_size       = 6
      desired_size   = 3
      instance_types = ["t3.medium"]

      tags = {
        Name = "eks-worker-node"
      }
    }
  }
}
This single block provisions a fully functional EKS cluster with worker nodes, IAM roles, and networking, all without writing hundreds of lines of raw Terraform.
FAQs
What is the difference between a Terraform module and a Terraform provider?
A provider is a plugin that Terraform uses to interact with an API (e.g., AWS, Azure, Google Cloud). A module is a reusable collection of Terraform configurations that use one or more providers to define infrastructure. Providers enable communication; modules enable reuse.
Can I use modules from private repositories?
Yes. Use SSH keys or personal access tokens to authenticate when sourcing modules from private Git repositories. Example:
source = "git::ssh://git@github.com/yourorg/terraform-module.git?ref=v1.0.0"
Ensure your CI/CD runner or local machine has the correct SSH key configured.
How do I update a module version safely?
Always test updates in a non-production environment first. Update the version in your source or version field, run terraform init to fetch the new version, then run terraform plan to see what changes will occur. Review the plan carefully before applying.
Can modules contain data sources?
Yes. Modules can include data sources to fetch existing infrastructure (e.g., lookup an existing VPC or AMI). This is common in modules that integrate with existing environments.
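For example, a module might look up the latest Amazon Linux 2 AMI rather than requiring the caller to pass one in (a common pattern; the filter value below is the standard name pattern for that AMI family):

```
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
```

This keeps the module's interface small while still adapting to whatever already exists in the target account.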
Why is my module not found when I run terraform init?
Common causes:
- Incorrect source path (e.g., typo or wrong relative path)
- Missing versions.tf with required provider
- Network restrictions blocking access to GitHub or Terraform Registry
- Authentication failure for private repositories
Run terraform init -upgrade to force re-download modules.
How do I share modules across multiple organizations?
Use Terraform Cloud's private module registry. Upload your modules via the CLI or CI/CD pipeline, and grant access to other teams using workspaces and team permissions.
Do modules require a separate state file?
No. Each module call is recorded as a child module within the parent configuration's state file. You don't manage module state separately; it's handled automatically by Terraform. However, if you use remote backends, the entire state (including module data) is stored remotely.
Can I use modules with Terraform 0.12 and earlier?
Modules have existed since Terraform 0.10, but the syntax changed significantly in 0.12. If you're using older versions, upgrade to 1.x to benefit from improved module handling, better error messages, and HCL2 syntax.
Conclusion
Terraform modules are not just a convenience; they are a necessity for any organization serious about scalable, maintainable, and secure infrastructure automation. By encapsulating infrastructure patterns into reusable, version-controlled components, modules eliminate duplication, enforce consistency, and empower teams to move faster without sacrificing quality.
This guide has walked you through the full lifecycle of Terraform module usage: from creating your first module, to calling it from a root configuration, sourcing it from remote repositories, testing it with Terratest, and integrating it into a CI/CD pipeline with tools like Atlantis and Checkov.
Remember: the most effective Terraform deployments are not those with the most resources defined, but those with the least duplicated code. Modules are the key to achieving that principle.
Start small: refactor a repetitive resource into a module today. Then, gradually expand your library. Over time, your infrastructure will become more modular, more reliable, and more resilient to change. And in the world of cloud infrastructure, that's not just good practice; it's survival.