Terraform Interview Questions 2026: The Complete Guide
Terraform · 15 min read · Mar 28, 2026
By InterviewDrill Team


Terraform has become the default IaC tool at most cloud-native companies. Interviews test not just syntax knowledge, but whether you've managed real production infrastructure with it. Here are the questions that come up most — and the answers that impress.


The One Question That Trips Up 80% of Candidates

"How do you manage Terraform state in a team environment?"

The weak answer: "We use remote state on S3."

The answer that gets you hired:

  • S3 backend with server-side encryption for storage
  • DynamoDB table for state locking — prevents two engineers running apply simultaneously
  • Separate state files per environment — never share state between dev and prod
  • State drift handling — terraform plan -refresh-only (the successor to terraform refresh) to detect manual changes, terraform import to bring unmanaged resources under control
  • State file is sensitive — contains resource IDs and sometimes credentials. Lock down S3 bucket ACLs and enable versioning for rollback
terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "prod/eks/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

Core Questions

1. What happens when two engineers run terraform apply at the same time?

Without DynamoDB locking: both applies read the same state, both make changes, and one overwrites the other's. State corruption.

With DynamoDB locking: the first apply acquires a lock. The second gets an error: "Error acquiring the state lock". It waits or fails gracefully.

Follow-up: What do you do if a lock is stuck after a failed apply? → terraform force-unlock <LOCK_ID> (the ID is printed in the lock error). Use with caution — only if you're certain no apply is actually running.
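A sketch of what that flow looks like in practice (the lock ID below is a made-up placeholder — use the one from your own error output):

```shell
# Second engineer's apply fails while the first holds the lock:
#   Error: Error acquiring the state lock
#   Lock Info:
#     ID: 7d3f9c2e-1111-2222-3333-444455556666
#
# Only once you've confirmed no apply is still running, release it:
terraform force-unlock 7d3f9c2e-1111-2222-3333-444455556666
```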


2. What is the difference between terraform workspace and separate directories for environments?

Workspaces:

  • Single backend, multiple state files within it
  • Switch with terraform workspace select prod
  • Same code, different state
  • Problem: same variables file — you must use terraform.workspace conditionals, which gets messy
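A sketch of the kind of branching this forces on you (instance types and counts here are illustrative):

```hcl
# One codebase for all workspaces means conditioning on terraform.workspace
locals {
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  # Scales badly as environments and settings multiply:
  replica_count = lookup(
    { dev = 1, staging = 2, prod = 3 },
    terraform.workspace,
    1,
  )
}
```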

Separate directories (recommended for most teams):

environments/
  dev/
    main.tf
    variables.tf
    terraform.tfvars
  staging/
    main.tf
  prod/
    main.tf
  • Completely isolated state
  • Different variable values per environment
  • Cleaner separation of concerns
  • Can use terragrunt to DRY up repeated config

When workspaces make sense: Simple setups, same infrastructure in multiple identical environments (e.g., one per customer tenant).


3. Explain the Terraform lifecycle: init, plan, apply, destroy

terraform init:

  • Downloads provider plugins
  • Configures the backend
  • Installs modules
  • Run this whenever you add a new provider or module

terraform plan:

  • Compares desired state (your .tf files) with actual state (state file + real infra)
  • Shows what will be created, modified, or destroyed
  • Never modifies anything — read only
  • Always review plan output before applying

terraform apply:

  • Executes the plan
  • Updates the state file after each resource operation
  • If it fails mid-way, state is partially updated — run apply again to converge

terraform destroy:

  • Destroys all resources managed by the configuration
  • Essentially a plan with everything marked for deletion
  • Irreversible — always preview with terraform plan -destroy first
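In CI, these commands are commonly chained with a saved plan file so the apply executes exactly what was reviewed — a sketch, not a full pipeline:

```shell
terraform init -input=false
terraform plan -out=tfplan   # review this output (or require approval in CI)
terraform apply tfplan       # applies exactly the reviewed plan, nothing newer
terraform plan -destroy      # preview a teardown before ever running destroy
```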

4. What is a Terraform module and how do you structure one?

A module is a reusable, self-contained package of Terraform configuration. It takes inputs (variables) and produces outputs.

Standard module structure:

modules/
  vpc/
    main.tf        # resource definitions
    variables.tf   # input variables
    outputs.tf     # output values
    README.md      # documentation
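The module's side of that contract might look like this (variable and output names are illustrative; `aws_vpc.main` is assumed to be defined in the module's main.tf):

```hcl
# modules/vpc/variables.tf
variable "cidr_block" {
  type        = string
  description = "CIDR range for the VPC"
}

variable "environment" {
  type        = string
  description = "Deployment environment (dev/staging/prod)"
}

# modules/vpc/outputs.tf
output "vpc_id" {
  value       = aws_vpc.main.id
  description = "ID of the created VPC"
}
```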

Calling a module:

module "vpc" {
  source = "./modules/vpc"

  cidr_block  = "10.0.0.0/16"
  environment = "prod"
}

output "vpc_id" {
  value = module.vpc.vpc_id
}

Module versioning best practice: Pin shared modules to specific versions in production. The version argument is only valid for registry sources; git sources are pinned with a ?ref= tag; a plain local path like source = "./modules/vpc" always tracks whatever is on disk, so it can't be pinned at all.
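Pinning looks different depending on the source type — the git URL below is illustrative:

```hcl
# Registry module: the version argument is supported
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"
}

# Git module: pin with a ?ref= tag instead (version is not allowed here)
module "vpc_git" {
  source = "git::https://github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"
}
```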


5. How do you handle secrets in Terraform?

What NOT to do: Put secrets in .tfvars files and commit to git.

Correct approaches:

  • AWS Secrets Manager data source — retrieve at runtime so the secret never appears in your code (the fetched value does still land in state — see the caveat below)
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password"
}

resource "aws_db_instance" "main" {
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
  • Environment variables — TF_VAR_db_password is read by Terraform without appearing in any file
  • Vault provider — for HashiCorp Vault
  • Mark sensitive outputs — use sensitive = true on outputs containing secrets so they're redacted in plan output
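The last two points combined, as a sketch (variable and output names are illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true   # populated via TF_VAR_db_password, never written to a file
}

output "db_connection_password" {
  value     = var.db_password
  sensitive = true   # redacted in plan/apply output (still readable in raw state)
}
```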

Important caveat: Secrets passed as resource arguments ARE stored in state. Encrypt your state file and restrict access.


6. What is terraform import and when do you use it?

terraform import brings an existing resource that was created outside Terraform (manually, via CLI, or another tool) under Terraform management.

# Import an existing S3 bucket
terraform import aws_s3_bucket.my_bucket my-existing-bucket-name

Use cases:

  • Migrating existing infrastructure to Terraform
  • A colleague created something manually in the AWS console
  • Recovering from state corruption

Limitation: Import only adds the resource to state. It does NOT generate the .tf configuration. You must write the resource block manually to match the existing resource, then run plan to verify no changes are detected.
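Worth mentioning as a follow-up: Terraform 1.5+ can also do this declaratively with an import block, and can generate a starting configuration for you:

```hcl
# Terraform 1.5+: declarative, plan-reviewed import
import {
  to = aws_s3_bucket.my_bucket
  id = "my-existing-bucket-name"
}
```

Running terraform plan -generate-config-out=generated.tf then writes a first-draft resource block you can review and clean up, instead of hand-writing it from scratch.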


7. How do you prevent accidental deletion of critical resources?

lifecycle meta-argument:

resource "aws_db_instance" "prod" {
  # ...

  lifecycle {
    prevent_destroy = true
    ignore_changes  = [engine_version]  # ignore auto-minor version updates
  }
}

prevent_destroy: Terraform will error if a plan includes destruction of this resource. Protects production databases, S3 buckets with data, etc.

ignore_changes: Useful for resources that change outside Terraform (e.g., auto-scaling group desired count, AMI IDs updated by a pipeline).


8. What is the difference between count and for_each?

count:

resource "aws_instance" "web" {
  count = 3
  # ...
}
# Accessed as: aws_instance.web[0], aws_instance.web[1]

Problem: count identifies resources by position. Remove the middle element from a three-item list and everything after it shifts index, so Terraform plans to destroy and recreate resources that didn't actually change.

for_each:

resource "aws_instance" "web" {
  for_each = toset(["us-east-1a", "us-east-1b", "us-east-1c"])
  # ...
}
# Accessed as: aws_instance.web["us-east-1a"]

Resources are keyed by the map/set value. Removing one key only affects that specific resource — others are untouched.

Rule of thumb: Use for_each for anything in production. Use count only for truly identical resources where order doesn't matter.
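A common follow-up is how to migrate an existing count resource to for_each without destroying anything — terraform state mv re-keys the state addresses (a sketch, assuming the three-AZ example above):

```shell
# Re-key each instance from its count index to its for_each key
terraform state mv 'aws_instance.web[0]' 'aws_instance.web["us-east-1a"]'
terraform state mv 'aws_instance.web[1]' 'aws_instance.web["us-east-1b"]'
terraform state mv 'aws_instance.web[2]' 'aws_instance.web["us-east-1c"]'
```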


Practice These Out Loud

The difference between reading about state locking and explaining it clearly under interview pressure is significant. Practice answering these questions aloud.

InterviewDrill.io has a dedicated Terraform & IaC track. First session is free → interviewdrill.io

Reading helps. Practicing wins interviews.
