Reviewing Terraform’s Basics — Part 3

for_each, data types, expressions, dynamic blocks, and more

Sigrid Jin
17 min read · Jun 29, 2024

for_each loop

We discussed the count loop, which causes errors when a resource is deleted from the middle of the list, because the remaining indices shift. To address this issue, we mentioned the for_each loop as a solution.

  1. for_each is a loop that creates one resource for each key declared in a collection.
  2. When iterating with for_each, each object is accessed individually by its key.
  3. Each object has two properties: key and value.
    a. each.key: the key of the current for_each item.
    b. each.value: the value of the current for_each item.
    c. for_each only accepts map and set types. Other collections, such as lists, must be converted first using functions like toset.
resource "<PROVIDER>_<TYPE>" "<NAME>" {
for_each = <COLLECTION>

...
}

#################################################

resource "local_file" "name" {
for_each = {
a = "test"
b = "abc"
}
filename = "${path.module}/${each.key}.txt"
content = "${each.value}"
}
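
If your collection is a list, convert it with toset() first; note that for a set, each.key and each.value are identical. A minimal sketch (the variable and file names are illustrative):

variable "users" {
  type    = list(string)
  default = ["a", "b"]
}

resource "local_file" "per_user" {
  for_each = toset(var.users) # lists are not accepted directly; convert to a set
  content  = each.value       # for a set, each.key == each.value
  filename = "${path.module}/${each.key}.txt"
}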

Let’s practice creating resources using for_each with map and set.

Example 1: Creating instances with different AMIs

In this example, we create two instances with different AMIs, along with a key pair. We also use a for expression to output the public IP of each instance.

provider "aws" {
region = "ap-northeast-2"
}

resource "tls_private_key" "pub_key" {
algorithm = "RSA"
rsa_bits = 2048
}

resource "aws_key_pair" "sigrid-key" {
key_name = "sigrid-key"
public_key = tls_private_key.pub_key.public_key_openssh
}

# Using map for testing
resource "aws_instance" "map_test" {
for_each = tomap({
amzn = {
instance_type = "t2.micro"
ami = "ami-0ebb3f23647161078"
}

ubuntu = {
instance_type = "t2.micro"
ami = "ami-0bcdae8006538619a"
}
})

instance_type = each.value.instance_type
ami = each.value.ami
key_name = aws_key_pair.sigrid-key.key_name

tags = {
Name = each.key
}
}

output "instance_ip" {
value = { for instance, ip in aws_instance.map_test : instance => ip.public_ip }
}

output "key_pair_name" {
value = aws_key_pair.sigrid-key.key_name
}

output "private_key" {
value = tls_private_key.pub_key.private_key_pem
sensitive = true
}

output "public_key" {
value = tls_private_key.pub_key.public_key_openssh
}

Example 2: Creating user accounts using set

resource "aws_ami_user" "accounts" {
for_each = toset(["aws", "sigrid", "jin", "hi"])
name = each.key
}

Unlike count indices, the keys in for_each are unique. Deleting a value in the middle therefore doesn’t shift the numbering or cause errors as it does with count; only the specified resource is destroyed, and the others are unaffected.

Let’s test this by deleting the “sigrid” account in the middle and redeploying. Running terraform plan shows that only the specified account is destroyed, unlike with count, where the shifted index numbers force replacement of the other accounts.
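
A sketch of the edited configuration — with "sigrid" removed from the set, terraform plan schedules only aws_iam_user.accounts["sigrid"] for destruction:

resource "aws_iam_user" "accounts" {
  for_each = toset(["aws", "jin", "hi"]) # "sigrid" removed; the remaining keys are untouched
  name     = each.key
}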

Example 3: Creating files with different content

Here’s a simpler example that creates two files and populates them with different content.

resource "local_file" "test" {
for_each = {
a = "content a"
b = "content b"
}
content = each.value
filename = "${path.module}/${each.key}.txt"
}

The files are successfully created with the names and content defined in for_each.

Referencing resources created with for_each

To reference the values of resources created with for_each, use the syntax <resource type>.<name>[<key>] for resources and module.<module name>[<key>] for modules.

In this example, we create resources in the abc block using variables and then reference those resources to create resources in the def block.

variable "names" {
default = {
a = "content a"
b = "content b"
c = "content c"
}
}

resource "local_file" "abc" {
for_each = var.names
content = each.value
filename = "${path.module}/abc-${each.key}.txt"
}

resource "local_file" "def" {
for_each = local_file.abc
content = each.value.content
filename = "${path.module}/def-${each.key}.txt"
}
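
Module references follow the same pattern. A hypothetical sketch, assuming a local module at ./modules/file that accepts a content variable and exposes a content output:

module "files" {
  source   = "./modules/file" # hypothetical module path
  for_each = var.names

  content = each.value
}

output "module_contents" {
  # module.<module name>[<key>].<output name>
  value = { for k, m in module.files : k => m.content }
}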

Data Types of Terraform

Basic Types

  1. string: Represents textual data
  2. number: Represents numeric values
  3. bool: Represents boolean values (true or false)
  4. any: Explicitly allows any type

Complex Types

  1. list(<type>): An index-based collection
  2. map(<type>): A key-value based collection, sorted by keys
  3. set(<type>): A value-based collection, sorted by values
  4. object({<argument_name>=<type>, …}): A structured collection of named attributes
  5. tuple([<type>, …]): An ordered collection of elements, potentially of different types

While lists and sets share similar declaration syntax, they differ in how elements are referenced (by index for lists, by value for sets). Maps and sets automatically sort their elements by key and by value, respectively.

# String variables
variable "string_a" {
  default = "myString"
}

variable "string_b" {
  type    = string
  default = "myString"
}

# Number variables
variable "number_a" {
  default = 123
}

variable "number_b" {
  type    = number
  default = 123
}

# Boolean variable
variable "boolean" {
  default = true
}

# List, Set, and Tuple examples
variable "list_example" {
  type    = list(string)
  default = ["bbb", "ccc", "aaa"]
}

variable "set_example" {
  type    = set(string)
  default = ["bbb", "ccc", "aaa"]
}

variable "tuple_example" {
  type    = tuple([string, number, bool])
  default = ["aaa", 1, false]
}

# Map and Object examples
variable "map_example" {
  type    = map(string)
  default = { "b" : "bbb", "c" : "ccc", "a" : "aaa" }
}

variable "object_example" {
  type    = object({ name = string, age = number })
  default = { "name" : "gasida", "age" : 27 }
}

You can inspect these variables in the Terraform console using var.<variable_name> and type(var.<variable_name>) commands.
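
An illustrative terraform console session with the variables above; note how the set comes back sorted while the list keeps its declared order:

> var.list_example
tolist([
  "bbb",
  "ccc",
  "aaa",
])
> var.set_example
toset([
  "aaa",
  "bbb",
  "ccc",
])
> type(var.map_example)
map(string)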

Key Observations

  1. Notice how list_example and set_example have the same declared values, but their output order differs. This is because sets automatically sort their values, while lists maintain the declared order.
  2. Maps and objects provide structured ways to store related data, with maps focusing on key-value pairs and objects allowing for more complex structures with named attributes of specific types.
  3. Tuples allow for ordered collections of elements with potentially different types, offering more flexibility than homogeneous lists.

For Expressions

For expressions can be used with different data types:

  • For lists: Returns values or index-value pairs
  • For maps: Returns keys or key-value pairs
  • For sets: Returns values

Practical Example: Transforming a List

Let’s look at an example where we transform a list of names using a for expression:

variable "names" {
default = ["a", "b", "c"]
}

resource "local_file" "abc" {
content = jsonencode([for s in var.names : upper(s)])
filename = "${path.module}/abc.txt"
}

output "file_content" {
value = local_file.abc.content
}

In this example, we’re using jsonencode() to convert the result to a JSON string and upper() to uppercase each name. The for expression iterates over each element in var.names, applies the upper() function, and returns the result as a new list.

Rules for Using For Expressions

  1. For lists:
  • Single return value: Returns the value
  • Two return values: First is the index, second is the value
  • Conventionally use i for index and v for value

2. For maps:

  • Single return value: Returns the key
  • Two return values: First is the key, second is the value
  • Conventionally use k for key and v for value

3. Result formatting:

  • [] returns a tuple
  • {} returns an object

4. For object results:

  • Use => to separate key-value pairs
  • Use ... after a value to prevent key duplication
  • You can add if conditions for filtering
variable "names" {
type = list(string)
default = ["a", "b"]
}

output "A_upper_value" {
value = [for v in var.names : upper(v)]
}

output "B_index_and_value" {
value = [for i, v in var.names : "${i} is ${v}"]
}

output "C_make_object" {
value = { for v in var.names : v => upper(v) }
}

output "D_with_filter" {
value = [for v in var.names : upper(v) if v != "a"]
}
// output

A_upper_value = ["A", "B"]
B_index_and_value = ["0 is a", "1 is b"]
C_make_object = { "a" = "A", "b" = "B" }
D_with_filter = ["B"]
variable "members" {
type = map(object({
role = string
group = string
}))
default = {
ab = { role = "member", group = "dev" }
cd = { role = "admin", group = "dev" }
ef = { role = "member", group = "ops" }
}
}

output "A_to_tuple" {
value = [for k, v in var.members : "${k} is ${v.role}"]
}

output "B_get_only_role" {
value = {
for name, user in var.members : name => user.role
if user.role == "admin"
}
}

output "C_group" {
value = {
for name, user in var.members : user.role => name...
}
}
// output

A_to_tuple = ["ab is member", "cd is admin", "ef is member"]
B_get_only_role = { "cd" = "admin" }
C_group = { "member" = ["ab", "ef"], "admin" = ["cd"] }
variable "names" {
type = list(string)
default = ["a", "b"]
}

output "A_upper_value" {
value = [for v in var.names : upper(v)]
}

output "B_index_and_value" {
value = [for i, v in var.names : "${i} is ${v}"]
}

output "C_make_object" {
value = { for v in var.names : v => upper(v) }
}

output "D_with_filter" {
value = [for v in var.names : upper(v) if v != "a"]
}

# output

A_upper_value = ["A", "B"]
B_index_and_value = ["0 is a", "1 is b"]
C_make_object = { "a" = "A", "b" = "B" }
D_with_filter = ["B"]

Map Example

variable "members" {
type = map(object({
role = string
group = string
}))
default = {
ab = { role = "member", group = "dev" }
cd = { role = "admin", group = "dev" }
ef = { role = "member", group = "ops" }
}
}

output "A_to_tuple" {
value = [for k, v in var.members : "${k} is ${v.role}"]
}

output "B_get_only_role" {
value = {
for name, user in var.members : name => user.role
if user.role == "admin"
}
}

output "C_group" {
value = {
for name, user in var.members : user.role => name...
}
}

Output:

A_to_tuple = ["ab is member", "cd is admin", "ef is member"]
B_get_only_role = { "cd" = "admin" }
C_group = { "member" = ["ab", "ef"], "admin" = ["cd"] }

Here's an example of using for expressions to rename multiple S3 buckets.

provider "aws" {
region = "ap-northeast-2"
}

variable "s3_name" {
type = set(string)
default = ["sigrid-bucket-01", "sigrid-bucket-02"]
description = "aws s3 bucket names"
}

variable "postfix" {
type = string
default = "test"
description = "postfix"
}

resource "aws_s3_bucket" "mys3bucket" {
for_each = toset([for bucket in var.s3_name : format("%s-%s", bucket, var.postfix)])
bucket = each.key
}

This script creates new buckets named “sigrid-bucket-01-test” and “sigrid-bucket-02-test”. Note that AWS doesn’t provide a direct “rename” operation for S3 buckets.

Instead, we need to create new buckets, copy the contents, and then delete the old buckets. This process requires careful handling to ensure data integrity and proper permission management. To safely rename S3 buckets while preserving their contents, we follow these steps (full configuration below):

  1. Create new buckets
  2. Copy objects from the old buckets to the new ones
  3. Delete the old buckets
provider "aws" {
region = "ap-northeast-2"
}

variable "s3_name" {
type = set(string)
default = ["sigrid-bucket-01", "sigrid-bucket-02"]
description = "aws s3 bucket names"
}

variable "postfix" {
type = string
default = "test"
description = "postfix"
}

locals {
new_buckets = [for bucket in var.s3_name : format("%s-%s", bucket, var.postfix)]
}

resource "aws_s3_bucket" "new_bucket" {
for_each = toset(local.new_buckets)
bucket = each.key
}

data "aws_s3_bucket_policy" "old_bucket" {
for_each = toset(var.s3_name)
bucket = each.key
}

resource "aws_s3_bucket_policy" "new_bucket" {
for_each = toset(local.new_buckets)
bucket = each.key
policy = data.aws_s3_bucket_policy.old_bucket[replace(each.key, "-${var.postfix}", "")].policy
}

resource "null_resource" "copy_objects" {
for_each = toset(var.s3_name)
provisioner "local-exec" {
command = <<EOT
aws s3 sync s3://${each.key} s3://${each.key}-${var.postfix}
EOT
}
}

resource "null_resource" "delete_old_buckets" {
for_each = toset(var.s3_name)
depends_on = [ null_resource.copy_objects ]
provisioner "local-exec" {
command = <<EOT
aws s3 rb s3://${each.key} --force
EOT
}
}

In this code, we:

  • define the original bucket names and the postfix to be appended.
  • create a locals block to generate the new bucket names.
  • create the new buckets with the aws_s3_bucket resource.
  • fetch the policies of the old buckets with the aws_s3_bucket_policy data source.
  • apply the old policies to the new buckets with the aws_s3_bucket_policy resource.
  • use a null_resource with a local-exec provisioner to copy objects from the old buckets to the new ones via the AWS CLI.
  • use another null_resource to delete the old buckets once the copy operation completes.

null_resource and local-exec provisioner

When designing Terraform provisioning workflows, situations often arise where users need to orchestrate specific actions that fall outside the standard lifecycle management provided by resource providers. This is where null_resource and terraform_data come into play.

A null_resource is a special resource in Terraform that allows you to configure provisioners that are not directly associated with a single existing resource. It's essentially a "dummy" resource that doesn't create any real infrastructure but can be used to run arbitrary actions.

The null_resource exports only one attribute, id. Because it manages no real infrastructure, there is nothing for Terraform to detect changes on, so it won’t re-run in new execution plans on its own. To force re-execution of a null_resource, we use the triggers argument.

resource "null_resource" "foo" {
triggers = {
ec2_id = aws_instance.bar.id # Re-execute when the instance ID changes
}
}

resource "null_resource" "bar" {
triggers = {
always_run = timestamp() # Re-execute on every Terraform run
}
}

Additionally, the local-exec provisioner invokes a local executable after a resource is created. This can be used to run scripts on the machine running Terraform, not on the resource being created.

Let’s walk through an example where we provision an EC2 instance and use a provisioner to run a web service. The configuration below fails due to a circular dependency: the instance’s remote-exec provisioner references aws_eip.sigrid-eip, while the Elastic IP references the instance.

provider "aws" {
region = "ap-northeast-2"
}

# ... (AMI data source and security group resource omitted for brevity)

resource "aws_instance" "web-srv" {
ami = data.aws_ami.amzn2.id
instance_type = "t2.micro"
subnet_id = "subnet-0210e58904c6b4a91"
private_ip = "172.31.1.10"
key_name = "my-ec2-keypair.pem"

user_data = <<EOF
#!/bin/bash
echo "hello, t101 study" > index.html
nohup busybox httpd -f -p 80 &
EOF
tags = {
Name = "test-web-srv"
}

provisioner "remote-exec" {
inline = [
"echo ${aws_eip.sigrid-eip.public_ip}"
]
}
}

resource "aws_eip" "sigrid-eip" {
instance = aws_instance.web-srv.id
associate_with_private_ip = aws_instance.web-srv.private_ip
}

output "public_ip" {
value = aws_instance.web-srv.public_ip
description = "public ip of the instance"
}

The improved configuration below resolves the circular dependency by moving the remote-exec provisioner into a null_resource, which runs the remote command only after the EC2 instance and Elastic IP are fully provisioned.

provider "aws" {
region = "ap-northeast-2"
}

# ... (AMI data source and security group resource omitted for brevity)

resource "aws_instance" "web-srv" {
ami = data.aws_ami.amzn2.id
instance_type = "t2.micro"
subnet_id = "subnet-0210e58904c6b4a91"
private_ip = "172.31.1.10"
key_name = "my-ec2-keypair"
vpc_security_group_ids = [aws_security_group.web-sg.id]
user_data_replace_on_change = true

user_data = <<EOF
#!/bin/bash
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "hello, t101 study" > /var/www/html/index.html
EOF
tags = {
Name = "test-web-srv"
}
}

resource "null_resource" "pubip" {
provisioner "remote-exec" {
connection {
type = "ssh"
host = aws_eip.sigrid-eip.public_ip
user = "ec2-user"
private_key = file("/Users/sigrid/gashida-keypair.pem")
}
inline = [
"echo ${aws_eip.sigrid-eip.public_ip}"
]
}
}

resource "aws_eip" "sigrid-eip" {
instance = aws_instance.web-srv.id
associate_with_private_ip = aws_instance.web-srv.private_ip
}

output "public_ip" {
value = aws_instance.web-srv.public_ip
description = "public ip of the instance"
}

output "eip" {
value = aws_eip.sigrid-eip.public_ip
description = "EIP of the instance"
}

The terraform_data resource, introduced in Terraform 1.4, serves a similar purpose to null_resource but with some advantages. It shares use cases with null_resource while offering triggers_replace for forced re-execution, input for state storage, and output for retrieving stored values.

  1. No separate provider configuration needed
  2. Lifecycle management is handled by Terraform core
resource "terraform_data" "foo" {
triggers_replace = [
aws_instance.foo.id,
aws_instance.bar.id
]

input = "world"
}

output "terraform_data_output" {
value = terraform_data.foo.output # This will output "world"
}

Dynamic Blocks in Terraform

In Terraform, while count and for_each are commonly used to create multiple instances of entire resources, there are scenarios where we need to dynamically generate multiple configuration blocks within a single resource. This is where dynamic blocks come into play, offering a powerful way to create flexible and reusable resource configurations.

Dynamic blocks allow you to dynamically generate nested blocks within a resource or module block. They are particularly useful when you need to create multiple similar nested blocks based on a collection of values.

Let’s look at a basic example of a security group resource without dynamic blocks.

resource "aws_security_group" "test-sg" {
name = "example-sg"
description = "test sg"
vpc_id = aws_vpc.main.id

ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

While this works for a simple configuration, it becomes cumbersome when you need to define multiple ingress or egress rules. This is where dynamic blocks shine.

variable "sg_ports" {
type = list(number)
default = [22, 80, 443]
}

resource "aws_security_group" "test-sg" {
name = "example-sg"
description = "test sg with dynamic blocks"
vpc_id = aws_vpc.main.id

dynamic "ingress" {
for_each = var.sg_ports
content {
from_port = ingress.value
to_port = ingress.value
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

In this example, we’ve created a dynamic ingress block that iterates over the sg_ports list, creating an ingress rule for each port.
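
If each rule needs more than a port number, you can iterate over a map of objects and rename the iteration variable with the iterator argument. A minimal sketch — the rule names and CIDR ranges are illustrative assumptions:

variable "sg_rules" {
  type = map(object({
    port = number
    cidr = list(string)
  }))
  default = {
    ssh   = { port = 22, cidr = ["10.0.0.0/8"] }
    http  = { port = 80, cidr = ["0.0.0.0/0"] }
    https = { port = 443, cidr = ["0.0.0.0/0"] }
  }
}

resource "aws_security_group" "rules-sg" {
  name   = "example-rules-sg"
  vpc_id = aws_vpc.main.id

  dynamic "ingress" {
    for_each = var.sg_rules
    iterator = rule # use rule.key / rule.value instead of ingress.key / ingress.value

    content {
      description = rule.key
      from_port   = rule.value.port
      to_port     = rule.value.port
      protocol    = "tcp"
      cidr_blocks = rule.value.cidr
    }
  }
}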

Let’s look at another example using the archive_file data source:

variable "names" {
default = {
a = "hello a"
b = "hello b"
c = "hello c"
}
}

data "archive_file" "dotfiles" {
type = "zip"
output_path = "${path.module}/dotfiles.zip"

dynamic "source" {
for_each = var.names
content {
content = source.value
filename = "${path.module}/${source.key}.txt"
}
}
}

This configuration dynamically creates source blocks for the archive file based on the names variable. It will generate a zip file containing three text files (a.txt, b.txt, c.txt) with their respective contents.

Conditional Expressions

Terraform’s conditional expressions use a ternary operator format, similar to many programming languages. The basic syntax is:

condition ? true_val : false_val

Here’s how it works:

  • The condition can be any expression that evaluates to true or false.
  • If the condition is true, the expression returns true_val.
  • If the condition is false, the expression returns false_val.
var.a != "" ? var.a : "default-a"

This expression checks if var.a is not an empty string. If it's not empty, it returns the value of var.a. Otherwise, it returns the string "default-a".

Let’s look at a practical example that demonstrates these use cases.

variable "enable_file" {
default = true
}

resource "local_file" "foo" {
count = var.enable_file ? 1 : 0
content = "foo!"
filename = "${path.module}/foo.bar"
}

output "content" {
value = var.enable_file ? local_file.foo[0].content : ""
}

In this example:

  • We define a variable enable_file with a default value of true.
  • The local_file resource uses a conditional expression in its count attribute. If enable_file is true, the count is 1 (creating the resource); otherwise it’s 0 (not creating it).
  • The output block uses a conditional expression to return the file content if the resource exists, or an empty string if it doesn’t.
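
The same toggle works with for_each, where an empty collection means zero instances — a minimal sketch (the resource name is illustrative):

resource "local_file" "bar" {
  # A one-element set creates the resource; an empty set skips it.
  for_each = var.enable_file ? toset(["enabled"]) : toset([])
  content  = "bar!"
  filename = "${path.module}/bar.txt"
}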

Terraform allows setting variables through environment variables, which take precedence over default values in your code. Here’s how you can use this feature with conditional expressions:

  1. Set an environment variable: export TF_VAR_enable_file=false
  2. Verify the environment variable: export | grep TF_VAR_enable_file
  3. Run Terraform commands:
terraform init && terraform plan && terraform apply -auto-approve
terraform state list
echo "var.enable_file ? 1 : 0" | terraform console

The console output is 0, and the file isn’t created, because the environment variable sets enable_file to false.

4. Remove the environment variable:

unset TF_VAR_enable_file
export | grep TF_VAR_enable_file

5. Re-run Terraform:

terraform plan && terraform apply -auto-approve
terraform state list
// output

local_file.foo[0]

echo "local_file.foo[0]" | terraform console
echo "local_file.foo[0].content" | terraform console
echo "var.enable_file ? 1 : 0" | terraform console
// output

{
  "content" = "foo!"
  "filename" = "./foo.bar"
  "id" = "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
}
"foo!"
1

Now the file is created because the enable_file variable uses its default value (true) from the Terraform configuration.

Using the moved Block in Terraform

In Terraform, there are scenarios where you might need to change resource names, convert from count to for_each, or move resources into modules, altering their reference addresses. In these situations, you may want to rename resources while maintaining the deployed environment. This is where the moved block comes in handy.

The moved block allows you to inform Terraform about changes in resource addresses without recreating the underlying infrastructure. This is particularly useful when you want to refactor your Terraform code without impacting the actual deployed resources.

resource "local_file" "a" {
content = "foo!"
filename = "${path.module}/foo.bar"
}

output "file_content" {
value = local_file.a.content
}

$ echo "local_file.a.id" | terraform console
"5bf3e335199107182c6f7638efaad377acc7f452"

Now, let’s say we want to rename local_file "a" to local_file "b". The approach below would indeed rename the resource, but it would also destroy and recreate the provisioned file. If our requirement is to change only the name without affecting the underlying resource, this approach doesn’t meet our needs.

resource "local_file" "b" {
content = "foo!"
filename = "${path.module}/foo.bar"
}

output "file_content" {
value = local_file.b.content
}

Let’s use the moved block.

resource "local_file" "b" {
content = "foo!"
filename = "${path.module}/foo.bar"
}

moved {
from = local_file.a
to = local_file.b
}

output "file_content" {
value = local_file.b.content
}

$ echo "local_file.b.id" | terraform console
"5bf3e335199107182c6f7638efaad377acc7f452"

After applying these changes and confirming that everything works as expected, we can remove the moved block to complete the refactoring process.

resource "local_file" "b" {
content = "foo!"
filename = "${path.module}/foo.bar"
}

output "file_content" {
value = local_file.b.content
}

Functions in Terraform

Terraform, with its programming language-like characteristics, allows the use of built-in functions to modify or combine values. These functions enhance Terraform’s capabilities and flexibility.

Key points about Terraform functions:

  • Terraform does not support user-defined functions; only built-in functions are available.
  • Function categories include numeric, string, collection, encoding, filesystem, date/time, hash/crypto, IP network, and type conversion.
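
A few illustrative calls from different categories, as you might try them in terraform console:

> max(5, 12, 9)
12
> format("Hello, %s!", "Terraform")
"Hello, Terraform!"
> cidrhost("10.0.0.0/24", 5)
"10.0.0.5"
> tonumber("42")
42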

Let’s look at an example using the upper built-in function:

resource "local_file" "foo" {
content = upper("foo! bar!")
filename = "${path.module}/foo.bar"
}

// output
The content of the file will be "FOO! BAR!"
The file will be created in the current module's directory with the name "foo.bar"

Provisioners in Terraform

Provisioners are a Last Resort

Terraform includes the concept of provisioners as a measure of pragmatism, knowing that there are always certain behaviors that cannot be directly represented in Terraform’s declarative model.

However, they also add a considerable amount of complexity and uncertainty to Terraform usage. Firstly, Terraform cannot model the actions of provisioners as part of a plan because they can in principle take any action. Secondly, successful use of provisioners requires coordinating many more details than Terraform usage usually requires: direct network access to your servers, issuing Terraform credentials to log in, making sure that all of the necessary external software is installed, etc.

The following sections describe some situations which can be solved with provisioners in principle, but where better solutions are also available. We do not recommend using provisioners for any of the use-cases described in the following sections.

Even if your specific use-case is not described in the following sections, we still recommend attempting to solve it using other techniques first, and use provisioners only if there is no other option. — Terraform Docs

Provisioners in Terraform are used to execute commands or copy files that are not handled by providers. They are considered a last resort when other options are not available.

Important considerations for provisioners:

  • They are used for tasks like installing packages or creating files after resource creation.
  • Provisioner results are not synchronized with Terraform’s state file, so idempotency is not guaranteed.
  • They should be used sparingly, as they can make your Terraform configurations less predictable.
variable "sensitive_content" {
default = "secret"
}

resource "local_file" "foo" {
content = upper(var.sensitive_content)
filename = "${path.module}/foo.bar"

provisioner "local-exec" {
command = "echo the content is ${self.content}"
}

provisioner "local-exec" {
command = "abc"
on_failure = fail
}

provisioner "local-exec" {
when = destroy
command = "echo The deleting filename is ${self.filename}"
}
}
// output

- The file will be created with uppercase content "SECRET"
- The first provisioner will echo: "the content is SECRET"
- The second provisioner will fail because "abc" is not a valid command
- During resource destruction, it will echo: "The deleting filename is /path/to/module/foo.bar"

Types of Provisioners

  1. local-exec Provisioner

This provisioner executes commands on the machine running Terraform.

resource "null_resource" "test" {
provisioner "local-exec" {
command = <<EOF
echo Hello! > file.txt
echo $ENV >> file.txt
EOF
interpreter = ["bash", "-c"]
working_dir = "${path.module}"
environment = {
ENV = "world"
}
}
}

// output

A file named "file.txt" will be created in the current module directory with content:
Hello!
world

2. remote-exec Provisioner

This provisioner is used to run scripts or commands on a remote resource after it’s created.

resource "aws_instance" "test-vm" {
instance_type = "t2.micro"
ami = "ami-0e73d7a01dba794a4"
tags = {
Name = "test-vm"
}

connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}

provisioner "file" {
source = "script.sh"
destination = "/usr/local/src/script.sh"
}

provisioner "remote-exec" {
inline = [
"chmod +x /usr/local/src/script.sh",
"/usr/local/src/script.sh args",
]
}
}

3. file Provisioner

This provisioner is used to copy files or directories from the machine running Terraform to the newly created resource.

resource "null_resource" "test" {
connection {
type = "ssh"
user = "root"
password = var.root_password
host = var.host
}

provisioner "file" {
source = "conf/myapp.conf"
destination = "/etc/myapp.conf"
}

provisioner "file" {
content = "ami used: ${self.ami}"
destination = "/tmp/file.log"
}

provisioner "file" {
source = "conf/configs.d"
destination = "/etc"
}

provisioner "file" {
source = "apps/app1/"
destination = "/data/webapps"
}
}
