Infrastructure as Code using Terraform on AWS Cloud

Dipaditya Das
12 min read · Jun 26, 2020


What if I told you that an entire infrastructure could be automated by writing a single piece of code? Sounds exciting, doesn’t it? Well, yes, we can do exactly that.

“Necessity is the mother of Invention.”

So, what’s the necessity I am talking about? Well, let’s say you are working on a cloud computing platform like AWS, GCP, or Microsoft Azure and want to build an infrastructure. How much time does it take to build the entire plan? How much time does someone need to evolve the present infrastructure? Can anyone build the plan from nothing to everything in one go? Yes, you can, by using Terraform with a cloud computing service (IaaS).

What is Terraform?

Terraform is the infrastructure-as-code offering from HashiCorp. It is a tool for building, changing, and managing infrastructure in a safe, repeatable way. Operators and infrastructure teams can use Terraform to manage environments with a configuration language called the HashiCorp Configuration Language (HCL), which enables human-readable, automated deployments.

Terraform works with over 160 different providers for a broad set of common infrastructure. The Provider SDK makes it simple to create new and custom providers. Providers leverage infrastructure-specific APIs to preserve the unique capabilities of each platform.

Advantages of Terraform

While many of the current offerings for infrastructure as code may work in your environment, Terraform aims to have a few advantages for operators and organizations of any size.

1. Platform Agnostic

In a modern datacenter, you may have several different clouds and platforms to support your various applications. With Terraform, you can manage a heterogeneous environment with the same workflow by creating a configuration file to fit the needs of your project or organization.

2. State Management

Terraform creates a state file when a project is first initialized. Terraform uses this local state to create plans and make changes to your infrastructure. Prior to any operation, Terraform does a refresh to update the state with the real infrastructure. This means that the Terraform state is the source of truth by which configuration changes are measured. If a change is made or a resource is appended to a configuration, Terraform compares those changes with the state file to determine what changes result in a new resource or resource modifications.

3. Operator Confidence

The workflow built into Terraform aims to instill confidence in users by promoting easily repeatable operations and a planning phase to allow users to ensure the actions taken by Terraform will not cause disruption in their environment. Upon terraform apply, the user will be prompted to review the proposed changes and must affirm the changes, or else Terraform will not apply the proposed plan.

Use Cases of Terraform

1. Infrastructure as Code


2. Multi-Cloud Compliance and Management


3. Self-Service Infrastructure


4. Hybrid Multi-Cloud Compliance and Management


Infrastructure as Code (IaC)

If you are new to infrastructure as code as a concept, it is the process of managing infrastructure in a file or files rather than manually configuring resources through a user interface. A resource in this instance is any piece of infrastructure in a given environment, such as a virtual machine, security group, or network interface.

At a high level, Terraform allows operators to use HCL to author files containing definitions of their desired resources on almost any provider (AWS, GCP, GitHub, Docker, etc) and automates the creation of those resources at the time of apply.

We will cover the basic functions of Terraform to create infrastructure on AWS.

Advantages of Infrastructure as Code (IaC)

  1. Easily readable
  2. Easily repeatable
  3. Operational certainty with “terraform plan”
  4. Quickly provisioned development environments
  5. Standardized environment builds
  6. Disaster recovery

Prerequisites for this Practical

  1. First, we need an AWS account, and then we have to create an IAM user under the AWS root account.
  2. Download and install the AWS CLI from https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
  3. Download and install the Terraform CLI from https://www.terraform.io/
  4. Add the AWS CLI and Terraform executables to the PATH environment variable of the operating system (Windows, macOS, or Linux).
  5. Create a profile for the AWS IAM user using the AWS CLI.

LET’S GET STARTED WITH TERRAFORM AUTOMATION!

In this practical we will perform the following tasks:

  1. Create a VPC (Virtual Private Cloud), an Internet gateway, a route table, a subnet in the VPC, and an association between them.
  2. Create a key pair and a security group that allows traffic on ports 80 and 22.
  3. Launch an EC2 instance.
  4. In this EC2 instance, use the key pair and security group created in step 2.
  5. Launch an EBS volume and mount it on /var/www/html.
  6. Developers have uploaded the code to a GitHub repo; the repo also contains some images.
  7. Copy the GitHub repo code into /var/www/html.
  8. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.
  9. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Step-1: Configure AWS CLI

AWS CLI in Powershell 7(Win10)
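Creating the IAM user profile from step 5 of the prerequisites takes a single command; the profile name `dipaditya` below is just an illustrative placeholder:

```shell
# Creates/updates a named profile in ~/.aws/credentials and ~/.aws/config.
# You will be prompted for the access key ID, secret access key,
# default region, and output format.
aws configure --profile dipaditya
```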

Step-2: Create a terraform file (main.tf)

I am creating a main.tf file and declaring our cloud provider, i.e., AWS. We also specify the Mumbai region (ap-south-1) along with my AWS CLI profile name.

main.tf in Visual Studio Code Editor with Terraform Plugin (recommended)
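Based on the description above, the provider block might look like this minimal sketch (the profile name is an assumption; use whatever you configured in Step-1):

```hcl
provider "aws" {
  region  = "ap-south-1"  # Mumbai region
  profile = "dipaditya"   # hypothetical AWS CLI profile name
}
```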

Step-3: Creating VPC(Virtual Private Cloud)

Amazon Virtual Private Cloud (Amazon VPC) enables us to launch AWS resources into a virtual network that we have defined. This virtual network closely resembles a traditional network that we would operate in our own data center, with the benefits of using the scalable infrastructure of AWS. It is basically the networking layer for EC2 instances. Here I have created an AWS VPC in the same region (Mumbai).
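A VPC resource matching this description could be sketched as follows (the CIDR range is an illustrative assumption; the Name tag comes from the screenshots later in the article):

```hcl
resource "aws_vpc" "tf_vpc" {
  cidr_block           = "192.168.0.0/16"  # illustrative address range
  enable_dns_hostnames = true              # give instances public DNS names

  tags = {
    Name = "Tf-vpc"
  }
}
```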

Step-4: Creating Internet gateway

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between our VPC and the internet. An internet gateway serves two purposes: to provide a target in our VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. An internet gateway supports IPv4 and IPv6 traffic. It does not cause availability risks or bandwidth constraints on our network traffic. I have created an Internet gateway for my AWS VPC.
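Attaching an internet gateway to the VPC above is a short resource block (the Name tag is a hypothetical choice):

```hcl
resource "aws_internet_gateway" "tf_igw" {
  vpc_id = aws_vpc.tf_vpc.id  # attach the gateway to our VPC

  tags = {
    Name = "Tf-igw"  # hypothetical name
  }
}
```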

Step-5: Create a Routing Table

A route table contains a set of rules, called routes, that are used to determine where network traffic from our subnet or gateway is directed. This route table directs traffic between the VPC and the internet through the internet gateway.
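A sketch of such a route table, sending all non-local traffic to the internet gateway defined above:

```hcl
resource "aws_route_table" "tf_rt" {
  vpc_id = aws_vpc.tf_vpc.id

  # Route all non-local traffic to the internet gateway.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.tf_igw.id
  }

  tags = {
    Name = "Tf-rt"  # hypothetical name
  }
}
```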

Step-6: Create a Subnet in VPC

A VPC spans all of the Availability Zones in the Region. After creating a VPC, we can add one or more subnets in each Availability Zone. We can optionally add subnets in a Local Zone, which is an AWS infrastructure deployment that places compute, storage, database, and other select services closer to our end users. A Local Zone enables our end users to run applications that require single-digit millisecond latencies.

When we create a subnet, we specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, we can protect our applications from the failure of a single location. AWS assigns a unique ID to each subnet.
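A subnet in the Mumbai region might be declared like this (the subnet CIDR and AZ are illustrative assumptions consistent with the VPC sketch above):

```hcl
resource "aws_subnet" "tf_subnet" {
  vpc_id                  = aws_vpc.tf_vpc.id
  cidr_block              = "192.168.1.0/24"  # subset of the VPC CIDR (illustrative)
  availability_zone       = "ap-south-1a"     # one AZ in the Mumbai region
  map_public_ip_on_launch = true              # instances get a public IP at launch

  tags = {
    Name = "Tf-subnet"  # hypothetical name
  }
}
```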

Step-7: Creating an association between route table and subnet

Each subnet in your VPC must be associated with a route table. A subnet can be explicitly associated with a custom route table, or implicitly or explicitly associated with the main route table.
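The explicit association between the subnet and the custom route table is a two-line resource:

```hcl
resource "aws_route_table_association" "tf_rta" {
  subnet_id      = aws_subnet.tf_subnet.id
  route_table_id = aws_route_table.tf_rt.id
}
```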

Step-8: Creating a Security Group

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When we launch an instance in a VPC, we can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in our VPC can be assigned to a different set of security groups.
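A security group allowing SSH (22) and HTTP (80) from anywhere, as the task list requires, could be sketched like this (the group name Tf-firewall comes from the screenshots later in the article):

```hcl
resource "aws_security_group" "tf_firewall" {
  name   = "Tf-firewall"
  vpc_id = aws_vpc.tf_vpc.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"  # allow all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```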

Step-9: Create a Key-Pair

Here we have created a key pair using Terraform’s tls_private_key resource, which generates a secure private key and encodes it as PEM. This resource is primarily intended for easily bootstrapping throwaway development environments. It is a logical resource, so it contributes only to the current Terraform state and does not create any external managed resources.
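The generated public key is then registered with AWS as a key pair (the key name is a hypothetical choice):

```hcl
resource "tls_private_key" "tf_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "tf_key_pair" {
  key_name   = "tf-key"  # hypothetical key name
  public_key = tls_private_key.tf_key.public_key_openssh
}
```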

Step-10: Create an EC2 Instance

Here we have used the Amazon Linux 2 AMI (x64) on a t2.micro instance. After the instance launches, a connection to it is made via SSH, and the “remote-exec” provisioner installs the Apache web server, Git, and the PHP interpreter. After the installation, the httpd service is started and enabled so that it doesn’t stop after a reboot.
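The instance resource might be sketched as follows; the AMI ID is a placeholder (look up the current Amazon Linux 2 AMI for ap-south-1), and the Name tag is hypothetical:

```hcl
resource "aws_instance" "tf_web" {
  ami                    = "ami-xxxxxxxxxxxxxxxxx"  # Amazon Linux 2 (x64) AMI for ap-south-1
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.tf_key_pair.key_name
  subnet_id              = aws_subnet.tf_subnet.id
  vpc_security_group_ids = [aws_security_group.tf_firewall.id]

  # SSH connection used by the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.tf_key.private_key_pem
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git php -y",  # web server, Git, PHP interpreter
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",        # survive reboots
    ]
  }

  tags = {
    Name = "Tf-web"  # hypothetical name
  }
}
```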

Step-11: Create an EBS Volume

This new EBS volume will act as an external hard drive that can be mounted on a particular folder/directory/drive. In the later part of the code, we will mount it on “/var/www/html” because that is where we store our webpage (HTML/PHP code). This ensures that data isn’t lost even if the instance is terminated.
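The volume must be created in the same Availability Zone as the instance (the size here is illustrative; the Name tag Tf-ebs comes from the screenshots later in the article):

```hcl
resource "aws_ebs_volume" "tf_ebs" {
  availability_zone = aws_instance.tf_web.availability_zone  # must match the instance's AZ
  size              = 1                                      # size in GiB (illustrative)

  tags = {
    Name = "Tf-ebs"
  }
}
```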

Step-12: Attach the EBS volume to EC2 Instance

Here we have attached the EBS volume to the EC2 instance, then formatted it and mounted it on the /var/www/html folder. After mounting, we used the git clone command to clone my GitHub repository containing my PHP code.
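A sketch of the attachment plus the format/mount/clone steps; the device name is a common convention, and the repository URL is a placeholder for the author’s actual repo:

```hcl
resource "aws_volume_attachment" "tf_attach" {
  device_name  = "/dev/sdf"
  volume_id    = aws_ebs_volume.tf_ebs.id
  instance_id  = aws_instance.tf_web.id
  force_detach = true
}

resource "null_resource" "mount_and_clone" {
  depends_on = [aws_volume_attachment.tf_attach]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.tf_key.private_key_pem
    host        = aws_instance.tf_web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",            # format the attached volume
      "sudo mount /dev/xvdf /var/www/html",  # mount it on the web root
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/<user>/<repo>.git /var/www/html",  # placeholder repo URL
    ]
  }
}
```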

Step-13: Creating S3 bucket

We have used an S3 bucket to store the static content of the webpage. Here we have set the bucket and object ACL to “public-read” so that everyone can view it.
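The bucket itself is a short resource (the bucket name matches the screenshots later in the article; the `acl` argument shown here is from the provider versions current when this article was written):

```hcl
resource "aws_s3_bucket" "tf_bucket" {
  bucket = "tfwebproductionbucket-v1"
  acl    = "public-read"
}
```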

Step-14: Uploading the image to S3-bucket
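The upload can be done with an aws_s3_bucket_object resource; the object key and local source path below are hypothetical placeholders for the cloned image:

```hcl
resource "aws_s3_bucket_object" "tf_image" {
  bucket = aws_s3_bucket.tf_bucket.bucket
  key    = "image.png"   # hypothetical object key
  source = "image.png"   # hypothetical local path to the image
  acl    = "public-read"
}
```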

Step-15: Creating an AWS CloudFront Distribution of the content (image)

We have created a CloudFront distribution, which is a Content Delivery Network (CDN) as a service, for fast delivery of content used in any website, web app, or mobile application.
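A minimal distribution serving the bucket’s contents might look like this sketch (the origin ID is an arbitrary label; caching and restriction settings are kept at permissive defaults for illustration):

```hcl
resource "aws_cloudfront_distribution" "tf_cdn" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.tf_bucket.bucket_regional_domain_name
    origin_id   = "s3-tf-origin"  # arbitrary label referenced below
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-tf-origin"
    viewer_protocol_policy = "allow-all"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"  # serve content worldwide
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```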

Step-16: Modify the PHP code for Content delivery

I have modified the PHP code with the new CloudFront distribution URL of the content for faster delivery.

Step-17: Opening the webpage in Microsoft Edge Browser

To open the webpage in Chromium-based Edge, I have used the local-exec provisioner.
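Since local-exec runs on the machine where Terraform itself runs, a Windows-specific sketch could look like this (swap the command for `open`/`xdg-open` on macOS/Linux):

```hcl
resource "null_resource" "open_browser" {
  depends_on = [null_resource.mount_and_clone]  # wait until the site is deployed

  provisioner "local-exec" {
    # Launch Edge on Windows with the instance's public IP.
    command = "start msedge http://${aws_instance.tf_web.public_ip}"
  }
}
```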

Step-18: Initiate the Terraform Plugins

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

Step-19: Validate the terraform file(main.tf)

The terraform validate command validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state, provider APIs, etc. Validate runs checks that verify whether a configuration is syntactically valid and internally consistent, regardless of any provided variables or existing state. It is thus primarily useful for general verification of reusable modules, including correctness of attribute names and value types. It is safe to run this command automatically, for example as a post-save check in a text editor or as a test step for a re-usable module in a CI system.

Step-20: Create an execution plan

The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.

Step-21: Apply the Execution Plan

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
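Steps 18 through 21 boil down to the standard Terraform workflow, run from the directory containing main.tf:

```shell
terraform init      # download the required provider plugins
terraform validate  # check syntax and internal consistency
terraform plan      # preview the execution plan without changing anything
terraform apply     # apply the plan (prompts for confirmation)
```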

  - New EBS volume named Tf-ebs
  - New Amazon Linux instance up and running in the AWS cloud
  - New security group named Tf-firewall
  - New VPC named Tf-vpc
  - New S3 bucket named tfwebproductionbucket-v1
  - New CloudFront distribution for the S3 bucket contents

After all the services and resources are created properly, a new browser tab opens automatically with the instance’s public IP address.

Final Output of our Terraform automation

Step-22: Destroy the Entire Infrastructure

The terraform destroy command is used to destroy the Terraform-managed infrastructure. The --auto-approve option lets us skip the approval step where Terraform prompts us whether to continue or cancel the process.
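A single command tears down everything the configuration created:

```shell
# Destroys all resources in the state file without an interactive prompt.
terraform destroy --auto-approve
```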

Conclusion

I hope the examples help you learn and appreciate Terraform 0.12. You can read more about the Terraform 0.12 language in the official documentation. Additionally, the Terraform CLI includes an upgrade command for migrating configurations to the new version. So, we can now use Terraform for Infrastructure as Code, multi-cloud compliance and management, self-service infrastructure, or hybrid cloud infrastructure.

GitHub Repository

Connect with me on LinkedIn.



Dipaditya Das

IN ● MLOps Engineer ● Linux Administrator ● DevOps and Cloud Architect ● Kubernetes Administrator ● AWS Community Builder ● Google Cloud Facilitator ● Author