Terraform Workflow Explained with a Real AWS Example (Beginner Friendly)

Why Terraform

Infrastructure as Code (IaC) tools like Terraform make it possible to define your cloud resources in simple text files and recreate them anytime with a few commands. As a beginner, a great first project is to use Terraform to launch a single EC2 instance on AWS and walk through the full workflow from setup to clean-up.

This post is a hands-on log of that exact journey: one configuration file and five core Terraform commands.

Terraform workflow in plain English

Terraform follows a predictable workflow: init → validate → plan → apply → destroy.

  • terraform init: Prepares the working directory, downloads the AWS provider plugin, and creates a lock file so future runs use the same versions.
  • terraform validate: Checks that your .tf files are syntactically correct and that the configuration is internally consistent.
  • terraform plan: Shows what Terraform is going to do, such as which resources will be created, changed, or destroyed, using symbols like + for create.
  • terraform apply: Runs the plan for real, asks you to confirm with yes, and then creates or updates your infrastructure.
  • terraform destroy: Destroys the resources you created, again asking you to confirm, which helps avoid surprise cloud bills.

In this example, all these commands are run from the same working directory where the Terraform configuration files (.tf) live.
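Put together, the whole loop is just these five commands:

# one-time setup: download providers and write the lock file
terraform init

# static checks only; no AWS calls yet
terraform validate

# dry run: preview the changes
terraform plan

# create or update resources (prompts for "yes")
terraform apply

# tear everything down (prompts for "yes")
terraform destroy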

Walkthrough: EC2 instance with Terraform

Here is the configuration file used for this simple project: a Terraform settings block, an AWS provider block, and a single EC2 instance resource.

# Terraform Settings Block
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      #version = "~> 5.0" # Optional but recommended in production
    }
  }
}

# Provider Block
provider "aws" {
  profile = "default" # AWS Credentials Profile configured on your local desktop terminal  $HOME/.aws/credentials
  region  = "us-east-1"
}

# Resource Block
resource "aws_instance" "ec2demo" {
  ami           = "ami-068c0051b15cdb816" # Amazon Linux AMI in us-east-1; update for your region
  instance_type = "t2.micro"
}

  • The Terraform settings block declares required_providers and tells Terraform to use the official hashicorp/aws provider.
  • The provider block configures which AWS account and region to use via the profile and region arguments.
  • The resource block aws_instance describes a single EC2 instance with a specific AMI and instance type (t2.micro).

Step 1: terraform init

From the directory containing this file, running terraform init:

  • Downloads the AWS provider plugin into a hidden .terraform directory.
  • Creates a .terraform.lock.hcl file that records the exact provider versions used.

This step is only needed when you first set up the project or change providers.
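Abridged, a first run looks something like this (the exact provider version you get will differ):

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v5.x.x...

Terraform has been successfully initialized!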

Step 2: terraform validate

Next, terraform validate checks that the configuration is valid.

If there is a syntax issue (for example, a missing brace or a typo in a block), it will report an error and point to the problem in the .tf file. A successful validation means Terraform can understand the configuration but hasn’t yet contacted AWS.
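A clean run is pleasantly short:

$ terraform validate
Success! The configuration is valid.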

Step 3: terraform plan

Running terraform plan shows what Terraform intends to do before touching AWS.

  • The output includes lines with a + sign, indicating resources that will be created, such as + aws_instance.ec2demo.
  • At the bottom, Terraform summarizes how many resources will be added, changed, or destroyed.

This is the “dry run” step where you verify that the configuration matches your expectations.
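Abridged, the plan for this configuration looks like the following (most attributes show as “known after apply” and are elided here):

$ terraform plan

Terraform will perform the following actions:

  # aws_instance.ec2demo will be created
  + resource "aws_instance" "ec2demo" {
      + ami           = "ami-068c0051b15cdb816"
      + instance_type = "t2.micro"
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.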

Step 4: terraform apply

terraform apply executes the changes defined in the plan.

  • Terraform prints the plan again so you can double-check it.
  • You must type yes to confirm; only then does Terraform call the AWS APIs and create the EC2 instance.

After it completes, you can go to the AWS console, open the EC2 dashboard, and see the instance created from your Terraform configuration.
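The confirmation prompt and the tail of the output look roughly like this (the instance ID below is a placeholder):

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.ec2demo: Creating...
aws_instance.ec2demo: Creation complete after 31s [id=i-0123456789abcdef0]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.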

Step 5: terraform destroy

When you are done experimenting, terraform destroy removes the resources created by Terraform.

  • Terraform shows a destroy plan, listing resources with a - symbol.
  • After you type yes, it terminates the EC2 instance and cleans up, helping you avoid unnecessary charges.
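Abridged output from a successful teardown:

$ terraform destroy
...
Plan: 0 to add, 0 to change, 1 to destroy.
...
Destroy complete! Resources: 1 destroyed.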

HCL basics that clicked

Terraform uses HCL (HashiCorp Configuration Language), which is designed to be both human-readable and machine-friendly.

In the EC2 example:

resource "aws_instance" "ec2demo" {
  ami           = "ami-068c0051b15cdb816"
  instance_type = "t2.micro"
}

  • resource is the block type, and "aws_instance" and "ec2demo" are block labels.
  • Inside the block, ami and instance_type are identifiers (argument names), and their right-hand sides are argument values (expressions).

There are two broad types of blocks you will see often, illustrated in the sketch below:

  • Top-level blocks such as terraform, provider, and resource.
  • Nested blocks, such as provisioner or lifecycle blocks inside a resource. (Note that tags on an aws_instance is written as a map argument, tags = { ... }, not as a nested block.)
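A quick sketch of the difference, extending the demo resource (root_block_device is a real nested block on aws_instance; the values here are illustrative):

resource "aws_instance" "ec2demo" {
  ami           = "ami-068c0051b15cdb816"
  instance_type = "t2.micro"

  # tags is an argument whose value is a map (note the "=")
  tags = {
    Name = "terraform-demo"
  }

  # root_block_device is a nested block: no "=", just its own body
  root_block_device {
    volume_size = 8 # size in GiB
  }
}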

For comments, HCL supports:

  • // or # for single-line comments.
  • /* ... */ for multi-line comments.
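For example:

# a single-line comment
// another single-line comment
/*
  a multi-line
  comment
*/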

Thinking in terms of “blocks, arguments, and values” makes HCL much easier to read and write.

What I’d do next

Once this basic workflow feels comfortable, natural next steps could be:

  • Add tags to the EC2 instance (a map argument) and a nested block such as root_block_device to practice the argument-versus-block distinction.
  • Introduce variables (for the AMI ID, instance type, or region) to avoid hardcoding values and make the configuration reusable; see the sketch after this list.
  • Create additional resources, such as a security group attached to the instance, to see how Terraform manages relationships between resources.
  • Later, explore remote state (for example, storing state in an S3 bucket) when you are ready to collaborate or work with multiple environments.
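As a minimal sketch of the variables idea (the variable name and defaults are illustrative):

variable "instance_type" {
  description = "EC2 instance type to launch"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "ec2demo" {
  ami           = "ami-068c0051b15cdb816" # could become a variable too
  instance_type = var.instance_type
}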

This single EC2 example is already enough to build confidence with the Terraform workflow, and it gives you a clean foundation for more advanced IaC experiments.
