How to Write Terraform Script


Oct 30, 2025 - 12:18

Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp that enables engineers to safely and predictably create, manage, and destroy infrastructure across multiple cloud providers and on-premises environments. Unlike traditional manual configuration or scripting methods, Terraform uses a declarative language to define the desired state of infrastructure, making it reproducible, version-controlled, and scalable. Writing Terraform scripts — also known as Terraform configurations — is a critical skill for DevOps engineers, cloud architects, and site reliability engineers (SREs) who aim to automate infrastructure provisioning with consistency and reliability.

The importance of learning how to write Terraform scripts cannot be overstated in today’s cloud-native world. Organizations increasingly rely on multi-cloud and hybrid environments, where manual infrastructure management becomes error-prone, time-consuming, and unsustainable. Terraform eliminates these challenges by allowing teams to define infrastructure in code, review changes through version control systems like Git, and apply configurations across development, staging, and production environments with identical results. Moreover, Terraform’s provider ecosystem supports over 3,000 integrations, including AWS, Azure, Google Cloud Platform, Kubernetes, Docker, and even network devices like Cisco and Juniper.

This guide provides a comprehensive, step-by-step tutorial on how to write Terraform scripts from scratch. Whether you’re new to infrastructure automation or looking to refine your Terraform skills, this resource will equip you with the knowledge to write clean, maintainable, and production-grade Terraform configurations. We’ll walk through the core components of Terraform, explore best practices, recommend essential tools, present real-world examples, and answer frequently asked questions to solidify your understanding.

Step-by-Step Guide

1. Install Terraform

Before writing any Terraform script, you must have Terraform installed on your local machine or CI/CD environment. Terraform is distributed as a single binary, making installation straightforward.

On macOS, use Homebrew:

brew install terraform

On Ubuntu or Debian-based Linux systems:

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update && sudo apt install terraform

On Windows, download the appropriate .zip file from the official Terraform downloads page, extract it, and add the directory to your system PATH.

Verify the installation by running:

terraform -version

You should see output similar to: Terraform v1.7.5. Ensure you’re using a recent version to benefit from the latest features and security updates.

2. Set Up Your Working Directory

Create a dedicated directory for your Terraform project. This keeps your configurations organized and isolated from other projects.

mkdir my-terraform-project

cd my-terraform-project

Initialize a Git repository to track changes:

git init

echo ".terraform/" >> .gitignore

echo "terraform.tfstate*" >> .gitignore

git add .

git commit -m "Initial commit with .gitignore"

Never commit sensitive files like terraform.tfstate or terraform.tfstate.backup to version control. These files contain the current state of your infrastructure and may include secrets.

3. Choose a Provider

Terraform interacts with cloud platforms and services through providers. A provider is a plugin that translates Terraform’s declarative language into API calls for a specific service.

For this example, we’ll use AWS as the provider. First, define the provider in a file named provider.tf:

provider "aws" {
  region = "us-east-1"
}

Replace us-east-1 with your preferred AWS region. Terraform supports other providers such as azurerm for Azure, google for GCP, and digitalocean for DigitalOcean. Always pin the provider version to ensure stability. In Terraform 0.13 and later, version constraints belong in a required_providers block rather than in the provider block itself:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

4. Configure AWS Credentials

Terraform needs AWS credentials to authenticate and make API calls. The recommended approach is to use AWS CLI credentials.

Install the AWS CLI if not already installed. Note that pip installs the legacy v1 CLI; for new setups, the official AWS CLI v2 installer is preferred:

pip install awscli

Configure your credentials:

aws configure

Enter your AWS Access Key ID, Secret Access Key, default region, and output format. Alternatively, you can set environment variables:

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

For production environments, avoid hardcoded credentials. Use IAM roles for EC2 instances or temporary credentials via AWS SSO or STS.
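One way to avoid static keys entirely is to have the provider assume an IAM role at plan/apply time. A minimal sketch — the role ARN is a hypothetical placeholder, and your base credentials only need sts:AssumeRole permission on that role:

```hcl
provider "aws" {
  region = "us-east-1"

  # Hypothetical deployment role; scope its policy to just what Terraform manages
  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/terraform-deployer"
    session_name = "terraform"
  }
}
```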

5. Define Infrastructure Resources

Resources are the building blocks of Terraform. Each resource represents a component of your infrastructure — such as a virtual machine, network, storage bucket, or security group.

Create a file named main.tf and define your first resource: an Amazon EC2 instance.

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"  # Amazon Linux 2 AMI (us-east-1)
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}

Here, aws_instance is the resource type, and example is the resource name you assign. The ami parameter specifies the Amazon Machine Image (OS template), and instance_type defines the compute capacity.

Save the file. Terraform uses a convention where files ending in .tf are automatically read during execution.
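Rather than hardcoding an AMI ID — which differs per region and goes stale — you can look it up with a data source. A sketch using the Amazon Linux 2 naming pattern:

```hcl
# Resolve the most recent Amazon Linux 2 AMI at plan time
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then, in the resource:
#   ami = data.aws_ami.amazon_linux.id
```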

6. Initialize the Terraform Working Directory

Before applying your configuration, initialize the working directory to download the required provider plugins:

terraform init

This command downloads the AWS provider plugin and initializes backend configurations. You’ll see output confirming successful initialization.

7. Review the Plan

Always review what Terraform intends to do before applying changes. Run:

terraform plan

This generates an execution plan showing which resources will be created, modified, or destroyed. The output will indicate that one new EC2 instance will be created. This step is critical for preventing unintended changes.

8. Apply the Configuration

Once you’re satisfied with the plan, apply the configuration:

terraform apply

Terraform will prompt for confirmation. Type yes and press Enter. After a few moments, your EC2 instance will be provisioned.

You’ll see output confirming:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

9. Verify the Infrastructure

Log into the AWS Management Console and navigate to the EC2 dashboard. You should see your new instance running with the tag “example-instance”.

You can also verify via the AWS CLI:

aws ec2 describe-instances --filters "Name=tag:Name,Values=example-instance"

10. Destroy Infrastructure (Optional)

To clean up and avoid unnecessary charges, destroy the infrastructure:

terraform destroy

Confirm with yes. Terraform will remove the EC2 instance and any associated resources.

11. Use Variables for Reusability

Hardcoding values like AMI IDs or instance types limits reusability. Use variables to make your scripts dynamic.

Create a file named variables.tf:

variable "instance_type" {
  description = "The type of EC2 instance to launch"
  type        = string
  default     = "t2.micro"
}

variable "ami_id" {
  description = "The AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0c55b159cbfafe1f0"
}

variable "instance_name" {
  description = "The name tag for the EC2 instance"
  type        = string
  default     = "example-instance"
}

Update main.tf to reference these variables:

resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name = var.instance_name
  }
}

Now you can override values at runtime using a terraform.tfvars file:

instance_type = "t3.small"
ami_id        = "ami-0abcdef1234567890"
instance_name = "production-web-server"

Or pass them via command line:

terraform apply -var="instance_type=t3.large" -var="instance_name=staging-server"
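Variables can also enforce constraints at plan time through validation blocks, catching bad inputs before any API call is made. A sketch — the t2/t3 restriction is illustrative:

```hcl
variable "instance_type" {
  description = "The type of EC2 instance to launch"
  type        = string
  default     = "t2.micro"

  # Reject anything outside the t2/t3 families at plan time
  validation {
    condition     = can(regex("^t[23]\\.", var.instance_type))
    error_message = "Only t2 and t3 instance types are allowed."
  }
}
```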

12. Use Outputs to Display Important Information

After provisioning, you may want to display key details like the public IP address. Create a file named outputs.tf:

output "instance_public_ip" {
  description = "The public IP address of the EC2 instance"
  value       = aws_instance.example.public_ip
}

output "instance_id" {
  description = "The ID of the EC2 instance"
  value       = aws_instance.example.id
}

Run terraform apply again. Terraform will now display these values at the end of the execution.
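Outputs can also be marked sensitive so they are redacted from CLI output. A sketch using the hashicorp/random provider to generate a hypothetical database password:

```hcl
resource "random_password" "db" {
  length  = 20
  special = true
}

output "db_password" {
  value     = random_password.db.result
  sensitive = true  # redacted in CLI output, but still stored in plain text in state
}
```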

13. Organize Code with Modules

As your infrastructure grows, reusability becomes essential. Terraform modules allow you to package configurations into reusable components.

Create a directory named modules/web-server. Inside, create main.tf, variables.tf, and outputs.tf as before.

In the root directory, reference the module:

module "web_server" {
  source = "./modules/web-server"

  instance_type = "t2.micro"
  ami_id        = "ami-0c55b159cbfafe1f0"
  instance_name = "web-server-01"
}
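To surface values produced inside a module, re-export them as root-level outputs. A minimal sketch — it assumes the web-server module itself declares an instance_public_ip output:

```hcl
output "web_server_public_ip" {
  description = "Public IP exposed by the web-server module"
  value       = module.web_server.instance_public_ip
}
```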

Modules promote DRY (Don’t Repeat Yourself) principles and make large projects manageable. You can also pull modules from the Terraform Registry:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}

14. Use State Management

Terraform stores the current state of your infrastructure in a file called terraform.tfstate. This file maps real-world resources to your configuration.

By default, the state is stored locally. For team collaboration, use a remote backend like Amazon S3:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

Place this in a file named backend.tf. Then run terraform init -migrate-state to move the existing local state to S3.

Enabling state locking with DynamoDB prevents concurrent modifications and ensures consistency in team environments.
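The S3 bucket and DynamoDB lock table must exist before terraform init; they are often bootstrapped from a separate configuration. A minimal sketch — the bucket name must be globally unique:

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state-bucket"  # must be globally unique
}

# Versioning lets you recover earlier state revisions
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"  # key name the S3 backend expects

  attribute {
    name = "LockID"
    type = "S"
  }
}
```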

Best Practices

1. Use Version Control

Always store your Terraform code in a version control system like Git. This enables code reviews, audit trails, and rollback capabilities. Use branches for feature development and merge via pull requests.

2. Separate Environments

Use separate directories or workspaces for each environment: dev/, staging/, and prod/. Each should have its own state file and variable values. Avoid sharing state between environments.
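If you use workspaces rather than separate directories, the terraform.workspace value lets a single configuration vary per environment. A sketch — the sizing rule is illustrative:

```hcl
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"

  tags = {
    Name        = "app-${terraform.workspace}"
    Environment = terraform.workspace
  }
}
```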

3. Avoid Hardcoding Values

Use variables and modules to parameterize your configurations. Hardcoded values reduce reusability and increase the risk of human error.

4. Validate Before Applying

Always run terraform plan before terraform apply. Review the plan carefully for unintended changes, especially in production.

5. Use Descriptive Resource Names

Name resources clearly and consistently. For example, use aws_instance.web_server instead of aws_instance.server1. This improves readability and maintainability.

6. Implement Security Best Practices

Never store secrets in Terraform code. Use AWS Secrets Manager, HashiCorp Vault, or environment variables for sensitive data. Use IAM roles with least privilege and avoid using root credentials.
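Secrets can be fetched at plan time with a data source instead of being written into code. A sketch using AWS Secrets Manager — the secret name is a hypothetical placeholder:

```hcl
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db-password"  # hypothetical secret name
}

# Reference it where needed, e.g.:
#   password = data.aws_secretsmanager_secret_version.db.secret_string
# Caveat: the resolved value is still written to the state file, so protect the state.
```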

7. Use Terraform Linting and Formatting

Run terraform fmt to automatically format your code according to standard conventions. Use terraform validate to check syntax before applying changes.

8. Pin Provider and Module Versions

Always specify version constraints for providers and modules. This prevents unexpected behavior due to breaking changes in newer versions.
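These constraints live in the terraform block; a minimal sketch (the version floors are illustrative):

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"  # any 5.x release, but not 6.0
    }
  }
}
```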

9. Document Your Code

Add comments and descriptions in your variables.tf and outputs.tf files. Consider creating a README.md in your project root explaining how to deploy and what each module does.

10. Automate Testing

Integrate Terraform into your CI/CD pipeline. Use tools like Terratest or Kitchen-Terraform to write automated tests that verify infrastructure behavior before deployment.

Tools and Resources

Terraform CLI

The official Terraform command-line interface is the primary tool for writing, planning, and applying configurations. It supports commands like plan, apply, destroy, show, and state.

Terraform Registry

The Terraform Registry is a centralized hub for discovering and sharing official and community-maintained modules. It includes pre-built modules for VPCs, EKS clusters, S3 buckets, and more.

Terraform Cloud and Terraform Enterprise

HashiCorp’s hosted and on-premises platforms offer enhanced collaboration features: remote state storage, run triggers, policy enforcement, and visual plan reviews. Ideal for enterprise teams.

Visual Studio Code with Terraform Extension

The official HashiCorp Terraform extension for VS Code provides syntax highlighting, auto-completion, linting, and inline documentation. It significantly improves productivity.

tfsec

tfsec is a static analysis tool that scans Terraform code for security misconfigurations — such as open S3 buckets, unencrypted EBS volumes, or overly permissive IAM policies.

checkov

Checkov is another open-source tool that scans infrastructure-as-code for compliance and security issues. It supports Terraform, CloudFormation, and Kubernetes.

terragrunt

Terragrunt is a thin wrapper around Terraform that helps enforce best practices like DRY code, remote state management, and modular organization. It’s especially useful for large-scale deployments.

Atlantis

Atlantis is an open-source automation tool that integrates with GitHub, GitLab, or Bitbucket to run Terraform plans and applies directly from pull requests. It enables infrastructure changes to go through code review workflows.


Real Examples

Example 1: Provisioning a Secure VPC with Public and Private Subnets

This example uses the official AWS VPC module to create a secure network architecture with public and private subnets, NAT gateways, and internet gateways.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "prod-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = {
    Environment = "production"
    Project     = "web-app"
  }
}

Output the subnet IDs for use in other modules:

output "private_subnet_ids" {
  value = module.vpc.private_subnets
}

output "public_subnet_ids" {
  value = module.vpc.public_subnets
}

Example 2: Deploying a Web Application with Auto Scaling

This example provisions an Auto Scaling Group (ASG) with a launch template, Application Load Balancer (ALB), and target group.

resource "aws_launch_template" "web" {
  name_prefix            = "web-launch-template"
  image_id               = "ami-0c55b159cbfafe1f0"
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = base64encode(<<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "Hello from Terraform!" > /var/www/html/index.html
  EOF
  )
}

resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow HTTP traffic"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_alb" "web" {
  name               = "web-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = module.vpc.public_subnets

  tags = {
    Name = "web-alb"
  }
}

resource "aws_alb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id

  health_check {
    path                = "/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}

resource "aws_alb_listener" "web" {
  load_balancer_arn = aws_alb.web.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.web.arn
  }
}

resource "aws_autoscaling_group" "web" {
  name = "web-asg"

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Default"
  }

  min_size            = 2
  max_size            = 5
  desired_capacity    = 2
  target_group_arns   = [aws_alb_target_group.web.arn]
  vpc_zone_identifier = module.vpc.public_subnets

  tag {
    key                 = "Name"
    value               = "web-server"
    propagate_at_launch = true
  }
}

This configuration creates a scalable, load-balanced web application that automatically recovers from instance failures.

Example 3: Deploying a Private Kubernetes Cluster on EKS

Using the official EKS module:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.12.0"

  cluster_name    = "prod-eks-cluster"
  cluster_version = "1.27"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  enable_irsa = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    workers = {
      min_size       = 2
      max_size       = 5
      desired_size   = 2
      instance_types = ["t3.medium"]
    }
  }

  tags = {
    Environment = "production"
    Project     = "microservices"
  }
}

After applying, you can configure kubectl to connect to the cluster using the output from the module.

FAQs

What is the difference between Terraform and Ansible?

Terraform is an infrastructure as code tool focused on provisioning and managing cloud resources declaratively. Ansible is a configuration management tool that focuses on configuring servers after they are provisioned using an imperative, agentless approach. Many teams use both: Terraform to create infrastructure and Ansible to configure software on those machines.

Can Terraform manage on-premises infrastructure?

Yes. Terraform supports providers for VMware vSphere, OpenStack, Nutanix, Cisco UCS, and even custom APIs. You can use Terraform to manage hybrid environments spanning cloud and on-premises data centers.

How do I handle secrets in Terraform?

Never store secrets like passwords, API keys, or certificates in Terraform files. Use external secret managers such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault. Reference secrets via data sources or environment variables.

What happens if I delete a resource manually in the cloud console?

Terraform maintains a state file that tracks the real-world resources. If you delete a resource manually, Terraform will detect the drift during the next plan or apply and attempt to recreate it. To avoid conflicts, always manage infrastructure through Terraform.

Can I use Terraform with Docker and Kubernetes?

Yes. Terraform has providers for Docker (to manage containers and networks) and Kubernetes (to deploy Helm charts, namespaces, and services). You can use Terraform to provision Kubernetes clusters and then deploy applications on them.
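As a sketch, the Kubernetes provider can point at an existing kubeconfig and manage cluster objects declaratively — the namespace name here is illustrative:

```hcl
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "apps" {
  metadata {
    name = "apps"
  }
}
```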

Is Terraform free to use?

Yes, the Terraform CLI and open-source providers are free. HashiCorp offers Terraform Cloud and Terraform Enterprise as paid services with advanced collaboration and governance features.

How do I roll back a Terraform change?

If you’ve committed your Terraform code to Git, you can revert to a previous commit and run terraform apply. Terraform will compare the new configuration with the current state and make the necessary changes to restore the previous infrastructure.

Why is my Terraform plan showing changes when I didn’t modify anything?

This is often due to state drift — a change made outside of Terraform, or a provider returning updated values (like a timestamp or random string). Run terraform plan -refresh-only to review drift and update the state (the standalone terraform refresh command is deprecated), or check for dynamic values in your configuration.

Conclusion

Writing Terraform scripts is not merely about typing code — it’s about adopting a disciplined, repeatable, and scalable approach to infrastructure management. By following the step-by-step guide in this tutorial, you’ve learned how to install Terraform, define resources, manage variables and outputs, organize code with modules, and implement secure, production-ready configurations.

Best practices such as version control, environment separation, and state management are not optional — they are the foundation of reliable infrastructure automation. Leveraging tools like tfsec, Checkov, and Atlantis further enhances your ability to deliver secure, auditable, and collaborative infrastructure changes.

The real-world examples demonstrated how Terraform can be applied to complex scenarios: from single EC2 instances to multi-tier web applications and managed Kubernetes clusters. These are not theoretical exercises — they are patterns used daily by engineering teams at Fortune 500 companies and high-growth startups alike.

As cloud infrastructure becomes increasingly complex, the ability to write clear, maintainable Terraform scripts will be a defining skill for modern DevOps and SRE professionals. Start small, iterate often, document thoroughly, and never underestimate the power of automation.

Mastering Terraform is not a one-time task — it’s a continuous journey of learning, refinement, and innovation. With the resources and practices outlined here, you now have the foundation to build, scale, and secure infrastructure with confidence.