Terraform is a deployment technology for anyone who wants to provision and manage infrastructure through Infrastructure as Code (IaC). Infrastructure here mainly refers to cloud-based infrastructure, although technically anything that can be controlled through an Application Programming Interface (API) qualifies as infrastructure. IaC is the process of managing and provisioning infrastructure through machine-readable definition files; we use it to automate processes that would otherwise have to be completed manually.
Provisioning here refers to infrastructure deployment, as opposed to configuration management, which deals mainly with application delivery, especially on virtual machines (VMs). Configuration management (CM) tools such as Ansible, Puppet, SaltStack, and Chef have existed for many years and are very popular. Terraform does not replace these tools, at least not completely, because infrastructure provisioning and configuration management are essentially different problems. Even so, Terraform provides some functionality that was previously the domain of CM tools, and many companies find that after adopting Terraform they no longer need a CM tool at all.
The basic principle of Terraform is that it lets you write human-readable configuration code to define your IaC. With this configuration code, you can deploy repeatable, ephemeral, and consistent environments to public, private, and hybrid clouds (see Figure 1.1).
This chapter first introduces the advantages and disadvantages of Terraform compared with other IaC technologies and explains how it stands out from them. It then walks through Terraform's "Hello World!" example by deploying a server to AWS, and improves on that example by using some of Terraform's dynamic features.
Figure 1.1 Terraform can deploy infrastructure to any cloud or hybrid cloud
1.1 Advantages of Terraform
There has been a lot of hype about Terraform recently, but is it justified? Terraform is not the only IaC technology; many other tools do much the same thing. Software deployment is a lucrative market, so how does Terraform compete with technologies from companies like Amazon, Microsoft, and Google? Six key features differentiate Terraform and give it a competitive advantage.
- Provisioning tool: deploys infrastructure, not just applications.
- Easy to use: suitable for those of us who are not geniuses.
- Free and open source: who doesn't like free stuff?
- Declarative: say what you want, not how to achieve it.
- Cloud-agnostic: deploy to any cloud using the same tools and workflow.
- Expressive and extensible: not limited by the constraints of a language.
Table 1.1 compares Terraform with other IaC tools.
Table 1.1 Comparison of Terraform with other IaC tools

| Name | Provisioning tool | Easy to use | Free and open source | Declarative | Cloud-agnostic | Expressive and extensible |
|---|---|---|---|---|---|---|
| Ansible | √ | × | × | √ | × | × |
| Chef | √ | √ | × | × | × | × |
| Puppet | √ | √ | × | × | × | × |
| SaltStack | √ | × | × | × | × | × |
| Terraform | √ | √ | √ | √ | √ | √ |
| Pulumi | × | √ | × | √ | × | × |
| AWS CloudFormation | × | × | √ | × | √ | √ |
| GCP Deployment Manager | × | × | √ | × | √ | √ |
| Azure Resource Manager | × | √ | √ | √ | √ | √ |
Technical comparison
Technically, Pulumi is the closest to Terraform; the main difference is that Pulumi is not declarative. The Pulumi team sees this as an advantage over Terraform, but Terraform also has a Cloud Development Kit (CDK) that enables the same style of workflow.
Terraform's design was inspired by AWS CloudFormation, and it is very similar to GCP Deployment Manager and Azure Resource Manager. Although those technologies are also good, they are neither cloud-agnostic nor open source: they work only with their specific cloud provider and are generally not as concise and flexible as Terraform.
Ansible, Chef, Puppet, and SaltStack are configuration management tools rather than infrastructure provisioning tools. The class of problems they solve is somewhat different from Terraform's, although there is some overlap.
1.1.1 Provisioning tool
Terraform is an infrastructure provisioning tool, not a configuration management tool. Provisioning tools deploy and manage infrastructure, while configuration management tools (such as Ansible, Puppet, SaltStack, and Chef) deploy software onto existing servers. Some configuration management tools can also do some degree of infrastructure provisioning, but not as well as Terraform, because they were not designed for that task.
The difference between configuration management tools and provisioning tools is mainly conceptual. Configuration management tools are typically used to manage mutable infrastructure, while Terraform and other provisioning tools are typically used to manage immutable infrastructure.
With mutable infrastructure, you perform software updates on existing servers. Immutable infrastructure, by contrast, doesn't care about existing servers: infrastructure is treated as a commodity that can be discarded after use. The difference between the two paradigms boils down to reuse versus throwing away.
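To make the immutable paradigm concrete, here is a hypothetical sketch (the resource name and both AMI IDs are invented for illustration). Because an EC2 instance is treated as immutable, changing its `ami` argument does not patch the running server; Terraform plans to destroy it and create a replacement from the new image:

```hcl
# Hypothetical example: bumping the AMI on an existing instance.
# Terraform will not update the server in place; it will plan to
# destroy the old instance and create a fresh one from the new image.
resource "aws_instance" "web" {
  ami           = "ami-0fb653ca2d3203ac1" # changed from a previous AMI ID
  instance_type = "t2.micro"
}
```

A CM tool managing mutable infrastructure would instead log in to the existing server and upgrade its packages.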
1.1.2 Easy to use
Even non-programmers can quickly and easily learn the basics of Terraform. By the end of chapter 4, you will have the skills of an intermediate Terraform user, which, when you think about it, is incredible. Of course, mastering Terraform is another matter, but that's true of most skills.
The main reason Terraform is so easy to use is that its code is written in a domain-specific configuration language called HashiCorp Configuration Language (HCL). HashiCorp developed this language to replace more verbose configuration languages such as JSON and XML. HCL attempts to strike a balance between human and machine readability, and was influenced by earlier attempts in this field such as libucl and Nginx configuration. HCL is fully compatible with JSON, which means HCL can be converted 1:1 to JSON and vice versa. This makes it easy to interoperate with systems outside Terraform and to generate configuration code dynamically.
1.1.3 Free and open source software
Terraform's engine, called Terraform core, is free and open source software offered under the Mozilla Public License v2.0. The license stipulates that anyone can use, distribute, or modify the software for both personal and commercial purposes. Free is good because it means you don't have to worry about additional costs when using Terraform. In addition, it makes the product and how it works transparent to users.
Terraform does not have a premium version, but HashiCorp offers commercial and enterprise solutions (Terraform Cloud and Terraform Enterprise) for running Terraform at scale. Chapter 6 introduces these solutions, and in chapter 12 we will implement a Terraform Enterprise-style service ourselves.
1.1.4 Declarative programming
Declarative programming means expressing the logic of a computation (the what) without describing its control flow (the how). Instead of writing step-by-step instructions, you describe the result you want. Database query languages (SQL), functional programming languages (Haskell, Clojure), configuration languages (XML, JSON), and most IaC tools (Ansible, Chef, Puppet) are examples of declarative programming.
Declarative programming languages are the opposite of imperative (or procedural) programming. Imperative languages use conditional branches, loops, and expressions to control system flow, save state, and execute commands. Almost all traditional programming languages (such as Python, Java, C, etc.) are imperative programming languages.
Note that declarative programming focuses on results, not processes. Imperative programming focuses on the process, not the result.
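As a sketch of what "say what you want" looks like in Terraform, the following hypothetical HCL declares three identical servers without spelling out any loops or API calls (the AMI is the one used later in this chapter; the resource name is invented):

```hcl
# Declarative: state the result you want -- three identical servers.
# Terraform figures out the create/update/delete steps on its own.
resource "aws_instance" "cluster" {
  count         = 3 # "I want three of these", not "loop and call the API three times"
  ami           = "ami-09dd2e08d601bff67"
  instance_type = "t2.micro"
}
```

An imperative equivalent would have to loop, call the cloud API, poll for completion, and handle partial failures explicitly.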
1.1.5 Cloud-agnostic
Being cloud-agnostic means being able to run seamlessly on any cloud platform with the same set of tools and workflows. Terraform is cloud-agnostic: deploying infrastructure to AWS with Terraform is as easy as deploying to GCP, Azure, or even a private data center (see Figure 1.2). This matters because it means you aren't locked in to a particular cloud provider and don't have to learn a whole new technology every time you switch providers.
Figure 1.2 deploying to multiple clouds simultaneously using Terraform
Terraform integrates with different clouds through providers. Providers are Terraform plugins that interact with external APIs. Each cloud vendor maintains its own Terraform provider, enabling Terraform to manage resources in that cloud. Providers are written in Go and distributed as binaries through the Terraform Registry. They are responsible for authenticating, making API requests, and handling timeouts and errors. The registry hosts hundreds of published providers, which together let you manage thousands of different kinds of resources. You can even write your own Terraform provider, which is covered in chapter 11.
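For example, a workspace can record exactly which provider it depends on with a `required_providers` block; this is a minimal sketch using the AWS provider from this chapter (the version constraint is chosen for illustration):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # the provider's address in the Terraform Registry
      version = "~> 3.28"       # accept any 3.28.x patch release
    }
  }
}
```

When `terraform init` runs, Terraform downloads a binary matching this constraint from the registry, so every teammate gets a compatible provider version.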
1.1.6 Expressive and extensible
Compared with other declarative IaC tools, Terraform is highly expressive and extensible. Conditional expressions, for expressions, directives, template files, dynamic blocks, variables, and many built-in functions make it easy to write code that achieves exactly what you want. Table 1.2 compares Terraform with AWS CloudFormation (the technology that inspired Terraform) from a technical point of view.
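A small, self-contained sketch of a few of these language features (the variable name and values are invented for illustration):

```hcl
variable "environment" {
  type    = string
  default = "dev"
}

locals {
  # Conditional expression: use a bigger instance in production
  instance_type = var.environment == "prod" ? "m5.large" : "t2.micro"

  # for expression combined with the built-in upper() function
  upper_names = [for name in ["web", "api", "db"] : upper(name)]
}
```

With `environment = "dev"`, `local.instance_type` evaluates to `"t2.micro"` and `local.upper_names` to `["WEB", "API", "DB"]`.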
Table 1.2 Technical comparison of Terraform and AWS CloudFormation

| Name | Built-in functions | Conditional statements | For loops | Types | Plugin support | Modularity | Wait conditions |
|---|---|---|---|---|---|---|---|
| Terraform | 115 | Yes | Yes | String, number, list, map, boolean, object, complex types | Yes | Yes | No |
| AWS CloudFormation | 11 | Yes | No | String, number, list | Limited | Yes | Yes |
1.2 "Hello Terraform!"
This section introduces a classic Terraform use case: deploying a virtual machine (an EC2 instance) to AWS. We will use Terraform's AWS provider to make API calls on our behalf and deploy the EC2 instance. After the deployment completes, we will have Terraform destroy the instance so the server doesn't keep running and accruing charges. Figure 1.3 shows the architecture of this deployment.
There are prerequisites for this scenario: you must have Terraform 0.15 installed and have access credentials for AWS. The steps to deploy the project are as follows.
(1) Write the Terraform configuration file.
(2) Configure AWS provider.
(3) Initialize Terraform using terraform init.
(4) Deploy the EC2 instance with terraform apply.
(5) Clean up with terraform destroy.
Figure 1.4 illustrates the "Hello Terraform!" deployment workflow.
Figure 1.3 architecture of deploying an EC2 instance on AWS using Terraform
Figure 1.4 "Hello Terraform!" deployment process
1.2.1 Writing the Terraform configuration
Terraform reads configuration files to deploy infrastructure. To tell Terraform to deploy an EC2 instance, you declare the EC2 instance in code. To do this, create a new file named main.tf and add the contents of listing 1.1. The .tf extension marks this as a Terraform configuration file. When Terraform runs, it reads all files with the .tf extension in the working directory and concatenates them.
Note: all of the code in this book is available in the Manning "Terraform in Action" code repository on GitHub.
Code listing 1.1 Contents of main.tf
```hcl
resource "aws_instance" "helloworld" {   ⇽--- Declares an aws_instance resource named "helloworld"
  ami           = "ami-09dd2e08d601bff67"   ⇽--- Attributes of the EC2 instance
  instance_type = "t2.micro"
  tags = {
    Name = "HelloWorld"
  }
}
```
Note that this Amazon Machine Image (AMI) is only valid for the us-west-2 region.
The code in listing 1.1 declares that we want Terraform to create a t2.micro AWS EC2 instance with an Ubuntu AMI and a Name tag. Compared with the equivalent CloudFormation code below, the Terraform code is much clearer and simpler.
```json
{
  "Resources": {
    "Example": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-09dd2e08d601bff67",
        "InstanceType": "t2.micro",
        "Tags": [
          {
            "Key": "Name",
            "Value": "HelloWorld"
          }
        ]
      }
    }
  }
}
```
This EC2 code block is an example of a Terraform resource. Resources are the most important element in Terraform because they provision infrastructure such as virtual machines, load balancers, and NAT gateways. A resource is declared as an HCL object of type resource with exactly two labels. The first label specifies the type of resource to create, and the second is the resource's name. The name has no special significance; it is only used to refer to the resource within the scope of a given module. Together, the type and name form a resource identifier that is unique for each resource. Figure 1.5 shows the syntax of a Terraform resource block.
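To illustrate how a resource identifier is used, here is a sketch that attaches an Elastic IP to the instance from listing 1.1 by referencing its identifier (aws_eip is a real AWS provider resource, but pairing it with this instance is purely illustrative):

```hcl
# Hypothetical: allocate an Elastic IP and attach it to the instance
# from listing 1.1 via its resource identifier.
resource "aws_eip" "helloworld" {
  vpc      = true                       # allocate the address inside a VPC
  instance = aws_instance.helloworld.id # <type>.<name>.<attribute> reference
}
```

The reference `aws_instance.helloworld.id` is also how Terraform infers that the EIP depends on the instance and must be created after it.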
Figure 1.5 syntax of resource block
Every resource has inputs and outputs. Inputs are called arguments, and outputs are called attributes. Arguments are passed through the resource and are also available as resource attributes. In addition, resources have computed attributes, which become available only after the resource has been created; computed attributes contain calculated information about the managed resource. Figure 1.6 shows examples of arguments, attributes, and computed attributes for the aws_instance resource.
Figure 1.6 Examples of arguments, attributes, and computed attributes of the aws_instance resource
1.2.2 Configuring the AWS provider
Next, we need to configure the AWS provider. The AWS provider is responsible for understanding API interactions, making authenticated requests, and exposing resources to Terraform. Configure it by adding a provider block: update the code in main.tf as shown in listing 1.2.
Code listing 1.2 main.tf
```hcl
provider "aws" {   ⇽--- Declares the AWS provider
  region = "us-west-2"   ⇽--- Configures the deployment region
}

resource "aws_instance" "helloworld" {
  ami           = "ami-09dd2e08d601bff67"
  instance_type = "t2.micro"
  tags = {
    Name = "HelloWorld"
  }
}
```
Note: AWS credentials must be obtained before provisioning infrastructure. Credentials can be stored in a credentials file or in environment variables.
Unlike resources, providers have only one label: Name. This is the official name under which the provider is published in the Terraform Registry (for example, "aws" for AWS, "google" for GCP, "azurerm" for Azure). The syntax of the provider block is shown in Figure 1.7.
Figure 1.7 syntax of the provider block
Note: the Terraform Registry is a global store for sharing versioned provider binaries. When Terraform is initialized, it automatically finds and downloads any required providers from the registry.
Providers have only inputs, no outputs. A provider is configured by passing inputs, or configuration arguments, to the provider block. Configuration arguments include things like the service endpoint URL, the region, the provider version, and any credentials needed for API authentication. Figure 1.8 illustrates this injection process.
Figure 1.8 How a configured provider injects credentials into aws_instance when making an API call
You usually don't want to pass credentials to the provider as plain text, especially if you plan to check the code into version control. For this reason, many providers allow credentials to be read from environment variables or shared credentials files. If you're interested in secrets management, chapter 13 covers the topic in depth.
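As an illustration of one such alternative, the AWS provider can reference a named profile from your shared credentials file instead of embedding keys in code (the profile name here is invented; this sketch assumes such a profile exists in ~/.aws/credentials):

```hcl
provider "aws" {
  region  = "us-west-2"
  profile = "terraform-book" # hypothetical named profile in ~/.aws/credentials
}
```

Alternatively, leaving credentials out of the block entirely lets the provider fall back to the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.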
1.2.3 Initializing Terraform
Before we can have Terraform deploy the EC2 instance, we must initialize the workspace. Although we have declared the AWS provider, Terraform still needs to download and install the provider binary from the Terraform Registry. Initialization must be performed at least once for every workspace.
Initialize Terraform by running the terraform init command. Running it produces output like the following.
```
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.28.0...   ⇽--- Terraform fetches the latest version of the AWS provider
- Installed hashicorp/aws v3.28.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!   ⇽--- The message we really care about

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
Note that if you haven't already installed Terraform, you need to install it before you can run this command.
1.2.4 Deploying the EC2 instance
Now we are ready to deploy the EC2 instance with Terraform. This is done with the terraform apply command.
Warning: EC2 and CloudWatch Logs will be active after this operation, which may result in charges to your AWS account.
```
$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.helloworld will be created
  + resource "aws_instance" "helloworld" {
      + ami                          = "ami-09dd2e08d601bff67"   ⇽--- ami attribute
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"   ⇽--- instance_type attribute
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + network_interface_id         = (known after apply)
      + outpost_arn                  = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tags                         = {   ⇽--- tags attribute
          + "Name" = "HelloWorld"
        }
      + tenancy                      = (known after apply)
      + volume_tags                  = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.   ⇽--- Summary of actions

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:   ⇽--- Manual approval step
```
Tip: if you receive the error "No valid credential sources found", Terraform was unable to authenticate with AWS.
The CLI output, called an execution plan, describes what Terraform intends to do to reach the desired state. As a sanity check, it is a good idea to review the execution plan before proceeding; nothing should look out of place unless, say, there's a typo somewhere. After reviewing the plan, approve it by entering yes at the prompt.
After a minute or two (that's how long it takes to provision an EC2 instance), apply completes successfully. Here is some sample output.
```
aws_instance.helloworld: Creating...
aws_instance.helloworld: Still creating... [10s elapsed]
aws_instance.helloworld: Still creating... [20s elapsed]
aws_instance.helloworld: Creation complete after 25s [id=i-070098fcf77d93c54]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
To verify that the resource has been created, you can find it in the EC2 console of AWS, as shown in Figure 1.9. Note that this instance is located in the us-west-2 region, because that's how we set it in the provider.
Figure 1.9 EC2 example in AWS console
Resource state information is stored in a file named terraform.tfstate. Don't be misled by the .tfstate extension: it's really just a JSON file. The terraform show command renders human-readable output from the state file, which makes it convenient to list information about the resources Terraform manages. Here is the result of running terraform show.
```
$ terraform show
# aws_instance.helloworld:
resource "aws_instance" "helloworld" {
    ami                          = "ami-09dd2e08d601bff67"
    arn                          = "arn:aws:ec2:us-west-2:215974853022:instance/i-070098fcf77d93c54"
    associate_public_ip_address  = true
    availability_zone            = "us-west-2a"
    cpu_core_count               = 1
    cpu_threads_per_core         = 1
    disable_api_termination      = false
    ebs_optimized                = false
    get_password_data            = false
    hibernation                  = false
    id                           = "i-070098fcf77d93c54"   ⇽--- id is an important computed attribute
    instance_state               = "running"
    instance_type                = "t2.micro"
    ipv6_address_count           = 0
    ipv6_addresses               = []
    monitoring                   = false
    primary_network_interface_id = "eni-031d47704eb23eaf0"
    private_dns                  = "ip-172-31-25-172.us-west-2.compute.internal"
    private_ip                   = "172.31.25.172"
    public_dns                   = "ec2-52-24-28-182.us-west-2.compute.amazonaws.com"
    public_ip                    = "52.24.28.182"
    secondary_private_ips        = []
    security_groups              = [
        "default",
    ]
    source_dest_check            = true
    subnet_id                    = "subnet-0d78ac285558cff78"
    tags                         = {
        "Name" = "HelloWorld"
    }
    tenancy                      = "default"
    vpc_security_group_ids       = [
        "sg-0d8222ef7623a02a5",
    ]

    credit_specification {
        cpu_credits = "standard"
    }

    enclave_options {
        enabled = false
    }

    metadata_options {
        http_endpoint               = "enabled"
        http_put_response_hop_limit = 1
        http_tokens                 = "optional"
    }

    root_block_device {
        delete_on_termination = true
        device_name           = "/dev/sda1"
        encrypted             = false
        iops                  = 100
        tags                  = {}
        throughput            = 0
        volume_id             = "vol-06b149cdd5722d6bc"
        volume_size           = 8
        volume_type           = "gp2"
    }
}
```
There are far more attributes here than we originally set in the resource block, because most of aws_instance's attributes are either optional or computed. You can customize aws_instance by setting optional arguments; the AWS provider documentation lists which ones are available.
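As a hedged sketch of what setting optional arguments looks like, here is listing 1.1 extended with two of them (the specific values are chosen purely for illustration; consult the AWS provider docs before relying on them):

```hcl
resource "aws_instance" "helloworld" {
  ami           = "ami-09dd2e08d601bff67"
  instance_type = "t2.micro"

  # Optional argument: enable detailed CloudWatch monitoring
  monitoring = true

  # Optional nested block: override the default root volume
  root_block_device {
    volume_size = 16    # grow the root volume from the 8 GB default
    volume_type = "gp2"
  }

  tags = {
    Name = "HelloWorld"
  }
}
```

Any optional argument left unset simply takes the provider's default, which is why the earlier terraform show output contained so many values we never wrote.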
1.2.5 Destroying the EC2 instance
Now it's time to say goodbye to our EC2 instance. Infrastructure should be destroyed when it is no longer in use, because running infrastructure in the cloud costs money. Terraform has a special command for destroying all managed resources: terraform destroy. Running it prompts you to manually confirm the destroy operation.
```
$ terraform destroy
aws_instance.helloworld: Refreshing state... [id=i-070098fcf77d93c54]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.helloworld will be destroyed
  - resource "aws_instance" "helloworld" {
      - ami                          = "ami-09dd2e08d601bff67" -> null
      - arn                          = "arn:aws:ec2:us-west-2:215974853022:instance/i-070098fcf77d93c54" -> null
      - associate_public_ip_address  = true -> null
      - availability_zone            = "us-west-2a" -> null
      - cpu_core_count               = 1 -> null
      - cpu_threads_per_core         = 1 -> null
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
      - get_password_data            = false -> null
      - hibernation                  = false -> null
      - id                           = "i-070098fcf77d93c54" -> null
      - instance_state               = "running" -> null
      - instance_type                = "t2.micro" -> null
      - ipv6_address_count           = 0 -> null
      - ipv6_addresses               = [] -> null
      - monitoring                   = false -> null
      - primary_network_interface_id = "eni-031d47704eb23eaf0" -> null
      - private_dns                  = "ip-172-31-25-172.us-west-2.compute.internal" -> null
      - private_ip                   = "172.31.25.172" -> null
      - public_dns                   = "ec2-52-24-28-182.us-west-2.compute.amazonaws.com" -> null
      - public_ip                    = "52.24.28.182" -> null
      - secondary_private_ips        = [] -> null
      - security_groups              = [
          - "default",
        ] -> null
      - source_dest_check            = true -> null
      - subnet_id                    = "subnet-0d78ac285558cff78" -> null
      - tags                         = {
          - "Name" = "HelloWorld"
        } -> null
      - tenancy                      = "default" -> null
      - vpc_security_group_ids       = [
          - "sg-0d8222ef7623a02a5",
        ] -> null

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - enclave_options {
          - enabled = false -> null
        }

      - metadata_options {
          - http_endpoint               = "enabled" -> null
          - http_put_response_hop_limit = 1 -> null
          - http_tokens                 = "optional" -> null
        }

      - root_block_device {
          - delete_on_termination = true -> null
          - device_name           = "/dev/sda1" -> null
          - encrypted             = false -> null
          - iops                  = 100 -> null
          - tags                  = {} -> null
          - throughput            = 0 -> null
          - volume_id             = "vol-06b149cdd5722d6bc" -> null
          - volume_size           = 8 -> null
          - volume_type           = "gp2" -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.   ⇽--- Summary of Terraform's planned actions

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
```
Warning: never manually edit or delete the terraform.tfstate file; this file is important, and without it Terraform loses track of the resources it manages.
The destruction plan is similar to the previous execution plan, except that it is used for deletion.
Note: terraform destroy performs the same operation as deleting all of the configuration code and then running terraform apply.
Confirm that you want to apply the destruction plan by entering yes on the command line. Wait a few minutes for Terraform to process, and then you will receive a notification that Terraform has destroyed all resources. The output will be as follows.
aws_instance.helloworld: Destroying... [id=i-070098fcf77d93c54] aws_instance.helloworld: Still destroying... ➥[id=i-070098fcf77d93c54, 10s elapsed] aws_instance.helloworld: Still destroying... ➥[id=i-070098fcf77d93c54, 20s elapsed] aws_instance.helloworld: Still destroying... ➥[id=i-070098fcf77d93c54, 30s elapsed] aws_instance.helloworld: Destruction complete after 31s Destroy complete! Resources: 1 destroyed.
Verify that the resource has indeed been destroyed by refreshing the AWS console, or by running terraform show and confirming that it returns nothing.
1.3 A new "Hello Terraform!"
I like the classic "Hello World!" example and think it's a good introductory project, but it doesn't systematically show off the technology as a whole. Terraform can provision resources not only from static configuration code, but also dynamically, based on the results of external queries and data lookups. Enter data sources, which let you fetch data and perform computations at runtime.
This section improves on the classic "Hello World!" example by adding a data source that dynamically looks up the latest Ubuntu AMI. The output value is passed into aws_instance, so we no longer need to statically set the AMI in the EC2 instance's resource configuration (see Figure 1.10).
Figure 1.10 How the output of the aws_ami data source is connected to the input of the aws_instance resource
Because we have already configured the AWS provider and initialized Terraform with terraform init, we can skip some of the earlier steps. Here we will perform the following steps.
(1) Modify the Terraform configuration to add a data source.
(2) Redeploy with terraform apply.
(3) Clean up with terraform destroy.
Figure 1.11 illustrates the deployment process.
Figure 1.11 deployment process
1.3.1 Modifying the Terraform configuration
We need to add code to read from an external data source so that we can query the latest Ubuntu AMI published to AWS. Edit main.tf so it matches listing 1.3.
Code listing 1.3 main.tf
```hcl
provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {   ⇽--- Declares an aws_ami data source named "ubuntu"
  most_recent = true

  filter {   ⇽--- Filter to select all AMIs whose name matches this pattern
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  owners = ["099720109477"]   ⇽--- Canonical's Ubuntu AWS account ID
}

resource "aws_instance" "helloworld" {
  ami           = data.aws_ami.ubuntu.id   ⇽--- Chains the resources together
  instance_type = "t2.micro"
  tags = {
    Name = "HelloWorld"
  }
}
```
Like resources, data sources are declared by creating an HCL object of type "data" with exactly two labels. The first label specifies the type of data source, and the second is its name. Together, the type and name form the data source's identifier, which must be unique within a module. Figure 1.12 illustrates the syntax of a data source.
Figure 1.12 syntax of data source
The contents of a data source block are called query constraint arguments. They behave exactly like the arguments of a resource, and are used to specify the resource(s) from which to fetch data. Data sources are unmanaged resources: Terraform can read data from them, but does not directly control them.
1.3.2 Applying the changes
Next, we apply the changes so Terraform deploys an EC2 instance whose AMI is set from the output of the Ubuntu data source. This is done by running terraform apply. The CLI output is shown below.
```
$ terraform apply

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.helloworld will be created
  + resource "aws_instance" "helloworld" {
      + ami                          = "ami-0928f4202481dfdf6"   ⇽--- Set from the output of the data source
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      ...   ⇽--- Some logs omitted
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
```
Enter yes at the prompt to apply the changes. After waiting a few minutes, the output will look like this.
```
aws_instance.helloworld: Creating...
aws_instance.helloworld: Still creating... [10s elapsed]
aws_instance.helloworld: Creation complete after 19s [id=i-0c0a6a024bb4ba669]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
As before, you can verify the changes in the AWS console or by running terraform show.
1.3.3 Destroying the infrastructure
Run terraform destroy to tear down the infrastructure created in the previous step. Note that manual confirmation is again required.
```
$ terraform destroy
aws_instance.helloworld: Refreshing state... [id=i-0c0a6a024bb4ba669]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.helloworld will be destroyed
  - resource "aws_instance" "helloworld" {
      - ami                          = "ami-0928f4202481dfdf6" -> null
      - arn                          = "arn:aws:ec2:us-west-2:215974853022:instance/i-0c0a6a024bb4ba669" -> null
      - associate_public_ip_address  = true -> null
      ...   ⇽--- Some logs omitted
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
```
After confirming and waiting a few minutes, the EC2 instance is destroyed.
```
aws_instance.helloworld: Destroying... [id=i-0c0a6a024bb4ba669]
aws_instance.helloworld: Still destroying... [id=i-0c0a6a024bb4ba669, 10s elapsed]
aws_instance.helloworld: Still destroying... [id=i-0c0a6a024bb4ba669, 20s elapsed]
aws_instance.helloworld: Still destroying... [id=i-0c0a6a024bb4ba669, 30s elapsed]
aws_instance.helloworld: Destruction complete after 30s

Destroy complete! Resources: 1 destroyed.
```
1.4 Fireside chat
This chapter discussed what Terraform is and its advantages and disadvantages compared with other IaC tools, and walked through two practical deployments. The first was Terraform's "Hello World!" example; the second is my personal favorite, because it demonstrates Terraform's dynamic capabilities using data sources.
This article is excerpted from Terraform in Action.
Based on real projects, the book shows how to use Terraform to automate the scaling and management of infrastructure. It focuses on the syntax, fundamentals, and advanced design of Terraform 0.12, such as zero-downtime deployments and writing your own Terraform provider. Topics include using Terraform, managing the lifecycle of Terraform resources, programming with Terraform, deploying a multi-tier web application in the AWS cloud, serverless deployment, deploying servers with Terraform, zero-downtime deployments, testing and refactoring, extending Terraform, deployment automation with Terraform, and security management.
The book is suitable for self-study and as a reference for system administrators, DevOps engineers, and developers.