Are you about to sit for a DevOps tech interview and want to brush up on your Terraform knowledge?
Well, good news! In this guide, I'm going to share with you 53 of the most common Terraform interview questions, as well as how best to respond to them.
These questions will not only help you to test your knowledge but at the same time will prepare you for the tech interview so that you can ace it on the day and secure your new role.
Sidenote:
In theory, you should be able to answer all of these questions. But if at any point you find yourself getting stuck, I teach and cover all of this in my DevOps Bootcamp for Terraform.
In this course, I cover everything from the fundamentals all the way to provisioning real-world cloud infrastructure on AWS.
The goal is to take you from absolute beginner to being able to get hired as a DevOps Engineer or System Administrator, so it’s a great resource for any skill level.
With that out of the way, let’s dive into these basic to advanced Terraform interview questions.
Infrastructure as code is a DevOps practice that manages an application's underlying infrastructure through programming.
Terraform is a platform-agnostic DevOps tool that allows DevOps Engineers to automate and manage data center infrastructure, the platforms, and the services that run on those platforms.
It lets us define both the cloud and the on-premise resources in human-readable configuration files that we can reuse and share.
With Terraform, network engineers can programmatically provision the physical resources (servers, load balancers, databases, security appliances) that applications require to run.
There are two approaches for writing IaC:
A technology is considered declarative if it describes an intended goal rather than the steps needed to reach that goal. The Terraform language is declarative: we declare the desired end state, and Terraform is responsible for figuring out how to achieve it without us defining all the steps.
Using declarative Terraform code, all we have to do is declare the end state. Terraform is aware of the state it created in the past, and it will do only what is still necessary to reach the final state.
One thing to note is that with Terraform, the ordering of blocks and the files they are organized into is not too important.
The core Terraform workflow has 3 steps:
1. Write - author the infrastructure as code.
2. Plan - preview the changes before applying them.
3. Apply - provision the infrastructure.
So, in a nutshell, the steps are: write, plan, and apply.
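In practice, the three steps map onto Terraform CLI commands. A typical session might look like this (assuming the configuration files have already been written in the current directory):

```shell
# Step 1: write the *.tf configuration files, then initialize the directory
terraform init

# Step 2: preview the changes Terraform would make
terraform plan

# Step 3: execute the proposed changes
terraform apply
```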
In the terminal, run the terraform version command.
These are plugins that Terraform uses to create and manage resources on a specific infrastructure.
A provider usually offers the resources needed to manage a single cloud or infrastructure platform, such as AWS or Azure, or a single technology, such as Docker or a database.
Think of a provider as something similar to a library in a programming language like Python or a Node.js module.
Terraform Registry is the main directory of publicly available Terraform providers for most major infrastructure platforms.
There are 3 types: official providers (owned and maintained by HashiCorp), partner providers (maintained by technology companies in the HashiCorp partner program), and community providers (published by individual maintainers or groups).
To help differentiate between these, the provider listings use badges to indicate who develops and maintains a given provider.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "eu-central-1"
}
Provider configurations belong to the root module of a Terraform project (main.tf in most cases).
A provider configuration is created using a provider block. The name given in the block header ("aws" in this example) is the local name of the provider to configure.
This provider should already be included in a required_providers block, and the required_providers block must be nested inside the top-level terraform block, which can also contain other settings. We have such a configuration for each Terraform project.
The body of the block (between { and }) contains configuration arguments for the provider. Most arguments in this section are defined by the provider itself; in this example, region is an argument specific to the aws provider.
Considering the same Terraform code as before:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "eu-central-1"
}
source is the global and unique source address of the provider we intend to use, such as hashicorp/aws in the above example. It also specifies the primary location from which Terraform can download the provider.
Terraform configurations can be written either in the native Terraform language syntax or in a JSON-compatible format.
Both the low-level JSON syntax and the native syntax are defined in terms of a specification called HCL, which is short for HashiCorp Configuration Language.
The Terraform configuration syntax is built around 2 key constructs: blocks and arguments.
A block is a container for other content, while arguments are simply used to assign an expression to a name.
Each block has a type and zero or more labels. There are block types such as provider that require one label, block types such as resource that require two labels, and block types that require none: terraform and required_providers are such examples.
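As a quick illustration (a hypothetical sketch, not from the examples above), a resource block carries two labels, while its body is made up of arguments:

```hcl
# Block type "resource" with two labels: the resource type and a local name
resource "aws_instance" "web_server" {
  # Arguments assign an expression to a name
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"
}
```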
The Terraform language supports three different syntaxes for comments: # begins a single-line comment, // also begins a single-line comment as an alternative, and multi-line comments begin with /* and end with */.
The hash (#) single-line comment style is the default comment style and should be used in most cases.
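The three styles side by side:

```hcl
# A single-line comment (the default, idiomatic style)
// An alternative single-line comment
/* A multi-line comment
   spanning several lines */
```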
Providers are distributed separately from Terraform itself, and in order to use them, Terraform needs to download and install them.
To do that, we need to initialize the working directory by running terraform init.
The working directory is the one that contains the configuration files and from which terraform will be invoked.
terraform plan is the command used to preview the changes Terraform will make.
The first step is to read the current state of the existing remote objects to make sure that Terraform's state is up-to-date.
Then Terraform compares the current configuration to the prior state. If it notices any differences, Terraform proposes a set of change actions that should make the remote objects match the current configuration.
Resources that will be created are indicated with a plus symbol (+) in green. If we agree with the Terraform plan, we can run terraform apply, which will execute the actions proposed in the plan.
To manage AWS infrastructure, Terraform needs a specific provider and also needs to authenticate to AWS.
The provider is downloaded by running terraform init.
Authentication is done by providing an access key and a secret key, which are created in the AWS Management Console under the Identity and Access Management (IAM) dashboard.
The terraform fmt command rewrites our Terraform configuration files into a canonical format and style, so they are readable and consistent. (Checking the configuration for errors is handled by a separate command, terraform validate.)
We can destroy the entire infrastructure created by Terraform by running terraform destroy -auto-approve.
terraform apply updates the real infrastructure according to the declarations that exist in main.tf.
variable "web" {
}

We can access the variable using the notation var.web.
The default name of the variable definitions file is terraform.tfvars.
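A slightly fuller sketch of declaring and referencing a variable (the type, description, and default values here are illustrative assumptions):

```hcl
variable "web" {
  description = "Name tag for the web server"
  type        = string
  default     = "web-server"
}

# Reference the variable elsewhere in the configuration
resource "aws_instance" "web_server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"

  tags = {
    Name = var.web
  }
}
```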
On the other hand, a security group acts at the instance level and is stateful, while a network ACL acts at the subnet level and is stateless.
So, in a nutshell, a security group is the firewall of EC2 instances and a network ACL is the firewall of the VPC subnets.
ingress {
  from_port   = 80
  to_port     = 80
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}

The source IP addresses are indicated by the cidr_blocks attribute, and 0.0.0.0/0 means any source IP address.
The identifier of an EC2 image is called AMI, which stands for Amazon Machine Image.
A key pair, consisting of a public key and a private key, is a set of security credentials that we use to prove our identity when connecting to an EC2 instance.
The public key is stored on the instance and the private key is on the client. Anyone who possesses our private key can connect to our instances, so it's important that we store the private key in a secure place.
In the Terraform configuration file, in the aws_instance resource, we specify the key pair using an attribute called key_name.
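For instance (the key pair name below is a placeholder assumption — it must match an existing EC2 key pair):

```hcl
resource "aws_instance" "web_server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"
  key_name      = "my-key-pair" # name of an existing EC2 key pair
}
```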
They are a kind of API whose goal is to give us back dynamic data, so they are used to fetch dynamic data from cloud providers.
Data sources are like queries. A list of AMIs that changes frequently or a list of Availability Zones are some examples of data sources.
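A common sketch is querying the most recent Amazon Linux 2 AMI so the ID doesn't have to be hard-coded (the filter values here are assumptions for illustration):

```hcl
data "aws_ami" "latest_amazon_linux" {
  owners      = ["amazon"]
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Use the fetched AMI ID in a resource
resource "aws_instance" "web_server" {
  ami           = data.aws_ami.latest_amazon_linux.id
  instance_type = "t2.micro"
}
```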
Terraform Output Values are like the return values of functions in programming languages.
Output values allow us to export structured data about our resources. Data exported by outputs can be used to configure other parts of the infrastructure or by a child module to share data with the root module.
Also, the root module can use outputs to print values at the terminal after running terraform apply.
Terraform state is used to map real-world resources to our configuration, to keep track of metadata, and to improve performance.
The primary purpose of Terraform state is to store bindings between remote objects in the cloud and resources declared in our configuration.
The state is stored by default in a local file named terraform.tfstate, but it can also be stored remotely, which works better in a team environment.
The required command is terraform state list.
We can run commands on instances in more than one way: by passing a script through the user_data attribute, or by using cloud-init, which is the industry standard for cloud instance initialization.
We can use TF_LOG to enable Terraform logging at the terminal, or we can use TF_LOG_PATH to enable logging to a file.
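A minimal user_data sketch (the bootstrap script installing Apache is an illustrative assumption):

```hcl
resource "aws_instance" "web_server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"

  # Shell script executed by cloud-init on first boot
  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
  EOF
}
```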
The HashiCorp Configuration Language uses the following simple types: number, string, bool, and a special value, which is null.
There are two categories of complex types: collection types and structural types.
In Terraform, there are three kinds of collection types: list, map, and set, and two kinds of structural types: object and tuple.
A list is a sequence of values of the same type, and a map is a collection of key-value pairs, all of the same type.
We can define lists and maps as follows:
# type list (of strings)
variable "azs" {
  description = "AZs in the Region"
  type        = list(string)
  default = [
    "eu-central-1a",
    "eu-central-1b",
    "eu-central-1c"
  ]
}

# type map
variable "amis" {
  type = map(string)
  default = {
    "eu-central-1" = "ami-0dcc0ebde7b2e00db",
    "us-west-1"    = "ami-04a50faf2a2ec1901"
  }
}

# type tuple
variable "my_instance" {
  type    = tuple([string, number, bool])
  default = ["t2.micro", 1, true]
}

# type object
variable "egress_dsg" {
  type = object({
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
  })
  default = {
    from_port   = 0,
    to_port     = 65535,
    protocol    = "tcp",
    cidr_blocks = ["100.0.0.0/16", "200.0.0.0/16", "0.0.0.0/0"]
  }
}
count is a meta-argument used to manage multiple similar resources. Example:

resource "aws_instance" "server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"
  count         = 3
}

In the example above, Terraform will create three instances of the same type.
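Each instance created by count gets a distinct index, available as count.index. A common sketch is using it to give each instance a unique name (the tag naming scheme here is an assumption):

```hcl
resource "aws_instance" "server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"
  count         = 3

  tags = {
    Name = "server-${count.index}" # server-0, server-1, server-2
  }
}
```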
Both count and for_each are meta-arguments used to duplicate resources that are similar.
However, for_each was introduced more recently to overcome the downsides of count: count is sensitive to any changes in the list order, while for_each isn’t.
The for_each meta-argument accepts a map or a set of strings and creates an instance for each item in the map or set.
Sets and maps do not allow duplicates and they are unordered, so creating or destroying individual resources using for_each leaves all the others in their proper place.
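A for_each sketch over a set of strings (the instance names are assumptions for illustration):

```hcl
resource "aws_instance" "server" {
  for_each = toset(["web", "api", "db"])

  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"

  tags = {
    # each.key is the current item of the set
    Name = each.key
  }
}
```

Removing "api" from the set destroys only that instance; "web" and "db" are left untouched, which is exactly the advantage over count.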
Terraform local values, or simply 'locals', are named values that we can refer to in our configuration.
Using meaningful names instead of hard-coded values helps us simplify the configuration by avoiding repetition and writing more readable code.
Compared to input variables, Terraform locals do not change values during or between Terraform runs, and, unlike input variables, locals are not submitted by users but calculated inside the configuration.
Locals are available only in the current module; if we define them in a child module or a module that we import, the local values will not be available to the root module.
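A minimal sketch of declaring locals and referring to them (names and values are illustrative):

```hcl
locals {
  project = "demo"

  # Locals are calculated inside the configuration
  common_tags = {
    Project = local.project
    Owner   = "devops-team"
  }
}

resource "aws_instance" "web_server" {
  ami           = "ami-06ec8443c2a35b0ba"
  instance_type = "t2.micro"
  tags          = local.common_tags
}
```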
Each Terraform configuration has an associated backend that defines how operations are executed and where the Terraform state is stored.
The default backend is local and it stores the state as a plain file in the current working directory.
Problem #1. The local state is good for testing and development, or if we are working alone. But in production environments or when working in a team, the use of a local state file brings many complications.
Problem #2. Concurrency is another problem. It's important that nobody else runs Terraform at the same time; otherwise, the current changes will not be seen and the state file can get corrupted.
The solution to both of these problems is to store the state remotely.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }

  backend "s3" {
  }
}
The Terraform state will be saved remotely on Amazon S3.
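In practice, the backend "s3" block needs a few arguments before it will work; a sketch with placeholder values (the bucket, key, and region below are assumptions):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # pre-existing S3 bucket
    key    = "prod/terraform.tfstate"    # path of the state object in the bucket
    region = "eu-central-1"
  }
}
```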
Terraform modules are a powerful way to reuse code and stick to the DRY principle, which is short for "Don't Repeat Yourself".
Modules help us to organize configuration, encapsulate configuration, re-use configuration, provide consistency and ensure best practices. They also help us to reduce many errors because we'll have the code in a single place and import that code into different parts of our configuration.
A Terraform module is a set of Terraform configuration files in a single directory. Even the simplest configuration, consisting of a single directory with one .tf file, is a module.
There are two types of modules:
Local modules are loaded from the local filesystem and are generally created by ourselves or other members of the team to organize and encapsulate our code.
Remote modules are loaded from a remote source such as Terraform Registry and are created and maintained by Hashicorp and its partners or by third parties.
output "vm_public_ip" {
  value = aws_instance.web_server.public_ip
}

We can use module.server.vm_public_ip.
On the Terraform Registry.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
}

This is a remote module, loaded from the Terraform Registry.
To download it, we run terraform init.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"
  azs  = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
}
These are all inputs declared in the module.
So there you have it: 53 of the most common Terraform questions and answers that you might face in a DevOps tech interview.
How did you do? Did you get all 53 correct? If so, I'd say you should stop studying and start interviewing!
Didn't get them all? Got tripped up on some? Don't worry about it because I'm here to help.
If you want to fast-track your Terraform + DevOps interview prep and get as much hands-on practice as you can, check out my DevOps Bootcamp.
Not only can you follow it from start to finish and work on fundamentals to advanced concepts, but you can also ask questions in the private Discord community.
And whether you join or not, all I have left to say is good luck with your interview. You’ve got this!