
The Magic Of AWS CLI v2

Dipaditya Das


We have all seen Hollywood movies and web series where a person hacks a system from a CLI. But what is a CLI? It is the Command-Line Interface. Don't worry, we are not going to learn how to crack a system, but we are definitely going to use the CLI to work some magic in AWS.


Amazon Web Services (AWS)

To understand what AWS is, we first need to understand cloud computing.

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).


Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully-featured services from data centers globally. Millions of customers — including the fastest-growing startups, largest enterprises, and leading government agencies — are using AWS to lower costs, become more agile, and innovate faster.

To learn more about cloud computing and AWS, head over to my previous articles, where I explain them through use-cases, such as how Netflix handles more than 100 million customers like a charm.

AWS Development Tools

Amazon has empowered the developers and architects to develop applications on AWS in the programming language of their choice with familiar tools.

  • Web Console: Simple web interface for Amazon Web Services.
  • Command-Line Tools: Control your AWS services from the command line and automate service management with scripts.
  • Integrated Development Environment (IDE): Write, run, debug, and deploy applications on AWS using familiar IDEs like AWS Cloud9, etc.
  • Software Development Kit (SDK): Simplify coding with language-specific abstracted APIs for AWS services.
  • Infrastructure as Code (IaC): Define and provision AWS infrastructure through code and templates with tools like Terraform, AWS CloudFormation, etc.

If you want to know how to automate development on AWS using Terraform, have a look at the article I published previously to get a good idea of the magic of Infrastructure as Code.

AWS Command Line Interface

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The latest version of the AWS Command Line Interface is Version 2 (v2).

Problem Statement

We will use the AWS command-line interface to

  • Create a Key-Pair.
  • Create a Security Group.
  • Launch an Elastic Compute Cloud (EC2) instance using the Amazon Linux 2 AMI and the above-created Key-Pair and Security Group.
  • Create an Elastic Block Store (EBS) volume of type gp2 and size 1 GiB.
  • Attach the volume to the EC2 instance that we have created above.

Pre-requisites of the Above Problem

For this practical, we are going to use

  • AWS CLI v2, which we can download from https://github.com/aws/aws-cli.git and install.
  • A shell program such as Bash, Zsh, PowerShell 7, etc.
  • An AWS IAM user meant only for AWS CLI use, with programmatic access and power-user permissions.

Step-0: Check the installation

We can verify whether the installation was successful:

$ aws --version
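If the installation succeeded, the command prints a version string similar to the following; the exact versions and platform are just an illustration and will differ on your machine:

aws-cli/2.0.30 Python/3.7.7 Windows/10 botocore/2.0.0dev34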

Step-1: Configure the AWS CLI

For general use, the aws configure command is the fastest way to set up your AWS CLI installation. When you enter this command, the AWS CLI prompts you for four pieces of information:

  • Access key ID: The first part of the access key. It is unique within AWS and is used to sign the programmatic requests that you make to AWS.
  • Secret access key: The second part of the access key. It acts as a password for the access key ID and is used to sign your requests, so it must be kept secret.
  • AWS Region: The default region name identifies the AWS Region whose servers you want to send your requests to by default. This is typically the Region closest to you, but it can be any Region. For example, you can type us-east-1 to use US East (N. Virginia).
  • Output format: The Default output format specifies how the results are formatted. The value can be any of the values in the following list. If you don't specify an output format, json is used as the default. There are five types of output format:
    • json: The output is formatted as a JSON string.
    • yaml: The output is formatted as a YAML string. (Available in AWS CLI version 2 only.)
    • yaml-stream: The output is streamed and formatted as a YAML string. Streaming allows for faster handling of large data types. (Available in AWS CLI version 2 only.)
    • text: The output is formatted as multiple lines of tab-separated string values. This can be useful for passing the output to a text processor like grep, sed, or awk.
    • table: The output is formatted as a table using the characters +|- to form the cell borders. It typically presents the information in a “human-friendly” format that is much easier to read than the others, but not as programmatically useful.

The following example shows sample values. Replace them with your own values.

# To create a new configuration
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]:

# To update just the region name:
$ aws configure
AWS Access Key ID [****]:
AWS Secret Access Key [****]:
Default region name [us-east-1]: ap-south-1
Default output format [None]:

✨Now our AWS CLI is configured successfully.

⚡ The first thing that comes to mind about command-line interfaces is having to remember a lot of commands, but that is not true for the AWS CLI. It has some of the most beautiful documentation on the web and a great built-in manual. There is a help command for every single service and subcommand that the CLI supports. After running help, just keep pressing the space bar to scroll and press “q” to quit. My requirement here is to check something on the EC2 service, and if you read the help a little you will see there is a subcommand called ec2. ⚡
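For example, each of the following is a standard help invocation, shown here just to illustrate the pattern:

# Top-level help: lists all available services
$ aws help
# Help for the ec2 service: lists all of its subcommands
$ aws ec2 help
# Help for one specific subcommand and its options
$ aws ec2 run-instances help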

Step-3: Create a Key-Pair for the EC2 Instance

  • A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance.
  • Amazon EC2 stores the public key, and you store the private key.
  • You use the private key, instead of a password, to securely access your instances.
  • Anyone who possesses your private keys can connect to your instances, so it’s important that you store your private keys in a secure place.
  • Because Amazon EC2 doesn’t keep a copy of your private key, there is no way to recover a private key if you lose it. However, there can still be a way to connect to instances for which you’ve lost the private key.
  • The keys that Amazon EC2 uses are 2048-bit SSH-2 RSA keys.

To create and verify the Key-Pair, we need to run the following commands

$ aws ec2 create-key-pair \
--key-name Arth-Key-Pair \
--query 'KeyMaterial' \
--output text > Arth-Key-Pair.pem
$ aws ec2 describe-key-pairs
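One extra step worth noting, which is not part of the original walkthrough: if you later SSH from a Linux or macOS shell, OpenSSH will refuse a world-readable private key, so it is a good idea to lock down the .pem file right away:

$ chmod 400 Arth-Key-Pair.pem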

✨ Arth-Key-Pair has been created successfully.

Step-4: Create a Security Group for the EC2 Instance

  • A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
  • When you launch an instance in a VPC, you can assign up to five security groups to the instance.
  • Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.
  • If you launch an instance using the Amazon EC2 API or a command-line tool and you don’t specify a security group, the instance is automatically assigned to the default security group for the VPC.
  • For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.

To create a Security Group, we need to use the create-security-group sub-command of the ec2 command:

$ aws ec2 create-security-group \
--group-name "AWS_TASK_SG" \
--description "Security group allowing SSH"

This will provide an output in JSON format providing the GroupId of the Security Group which will be unique.

Now, we add an SSH inbound/ingress rule to our Security Group by using the authorize-security-group-ingress sub-command of the ec2 command:

$ aws ec2 authorize-security-group-ingress \
--group-id <Your_group_id> \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0

⚡Bonus ⚡

In the AWS CLI, there is an inbuilt JSON query engine (the --query option), which works great and is super intuitive. So let me explain what I have done here.

  • The command aws ec2 describe-security-groups provides a detailed output in JSON format, since we left the Default Output Format option blank while configuring the AWS CLI and JSON is the default.
  • To parse the JSON, we use the --query option to filter out exactly the information we want.
  • The topmost key is SecurityGroups, which is an array [], and to access all the security groups I used “*”. We then display the GroupId, GroupName, and VpcId of every security group.
  • As our security group is in the first row and the array is zero-indexed, we can fetch only the first row with the --query parameter and display it in table format with the --output parameter, as sketched below.
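The exact command appeared in a screenshot, so here is only a sketch of what such a query looks like; the labels ID, Name, and VPC are illustrative and can be anything you like:

# All security groups, selected fields only
$ aws ec2 describe-security-groups \
--query "SecurityGroups[*].{ID:GroupId,Name:GroupName,VPC:VpcId}" \
--output table

# Only the first security group (index 0)
$ aws ec2 describe-security-groups \
--query "SecurityGroups[0].{ID:GroupId,Name:GroupName,VPC:VpcId}" \
--output table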

As you can see our security group “AWS_TASK_SG” with SSH inbound/ingress rule has been successfully created.

Step-5: Launch an Elastic Compute Cloud (EC2) Instance using the Amazon Linux 2 AMI and the above-created Key-Pair and Security Group.

  • Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud.
  • Using Amazon EC2 eliminates your need to invest in hardware upfront, so you can develop and deploy applications faster.
  • You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
  • Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

Before launching the EC2 instance, we should gather information first.

  • --image-id : To launch any instance we need an operating system image, which in AWS is called an AMI (Amazon Machine Image). Every AMI has a unique ID called the image ID. We can look it up in the EC2 Launch Instance Wizard: https://ap-south-1.console.aws.amazon.com/ec2/v2/home?region=ap-south-1#LaunchInstanceWizard:
  • --count : The number of EC2 instances to launch at once.
  • --instance-type : There are more than 100 flavors or varieties of systems with different resources (RAM/CPUs), and each one has its own unique identifier.
  • --subnet-id : The unique ID of the subnet (and hence the data center / Availability Zone) in which we are going to launch our instance.
  • --security-group-ids : The unique ID of the Security Group that we created above.
  • --key-name : It's good practice to launch an instance with a Key-Pair, which acts as a token for authentication.

These are the minimum required parameters that we have to find before creating the EC2 instance. There are many more parameters; see the documentation of the run-instances command to learn more.

To create an EC2 instance, we are going to use the run-instances subcommand of the ec2 command:

$ aws ec2 run-instances \
--image-id ami-0e306788ff2473ccb \
--instance-type t2.micro \
--count 1 \
--subnet-id subnet-fc1b70b0 \
--security-group-ids sg-0c2aaee5acc66f6c8 \
--key-name Arth-Key-Pair
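run-instances prints a JSON document describing the new instance, and the instance ID we need for tagging in the next step is inside it. As an optional convenience (my own shorthand, not shown in the original screenshots), the same --query trick can pull it straight out at launch time:

$ aws ec2 run-instances \
--image-id ami-0e306788ff2473ccb \
--instance-type t2.micro \
--count 1 \
--subnet-id subnet-fc1b70b0 \
--security-group-ids sg-0c2aaee5acc66f6c8 \
--key-name Arth-Key-Pair \
--query "Instances[0].InstanceId" \
--output text
# prints only the new instance ID, e.g. i-000a4ad70600492fb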

🌟 One of the best practices is to give the instance a tag. I am going to give the EC2 instance a Name tag.

To provide a Name tag to the EC2 instance, we will use the create-tags subcommand of the ec2 command:

$ aws ec2 create-tags \
--resources i-000a4ad70600492fb \
--tags Key=Name,Value=ArthOS

Now, to verify whether our EC2 instance has launched or not, we will list the running instances in AWS.

$ aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query "Reservations[*].Instances[*].{InstanceId:InstanceId,AZ:Placement.AvailabilityZone,Name:Tags[?Key=='Name']|[0].Value,ImageID:ImageId,Key:KeyName,PublicIpAddress:PublicIpAddress,InstanceType:InstanceType,SubnetId:SubnetId,VpcId:VpcId,PrivateIpAddress:PrivateIpAddress}" \
--output table

⚡ We have filtered the running instances and displayed all the necessary details. ⚡

✨ We have successfully launched the EC2 instance in AWS.

Step-6: Create an Elastic Block Store (EBS) volume of type gp2 and size 1 GiB.

  • Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.
  • A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

Before creating the EBS volume, we should gather information first.

  • --size : The size of the volume, in GiBs. You must specify either a snapshot ID or a volume size. Constraints: 1–16,384 for gp2 , 4–16,384 for io1 and io2 , 500–16,384 for st1 , 500–16,384 for sc1 , and 1–1,024 for standard . If you specify a snapshot, the volume size must be equal to or larger than the snapshot size.
  • --volume-type : The volume type of EBS. This can be gp2 for General Purpose SSD, io1 or io2 for Provisioned IOPS SSD, st1 for Throughput Optimized HDD, sc1 for Cold HDD, or standard for Magnetic volumes. Default — gp2.
  • --availability-zone : The Availability Zone in which to create the volume.

These are the minimum required parameters that we have to find before creating the EBS volume. There are many more parameters; see the documentation of the create-volume command to learn more.

To create an EBS volume, we are going to use the create-volume subcommand of the ec2 command:

$ aws ec2 create-volume \
--availability-zone ap-south-1b \
--size 1 \
--volume-type gp2
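As with run-instances, the volume ID needed for the tagging and attach steps appears in the JSON response of create-volume; adding a query (again an optional convenience of mine) returns just that value:

$ aws ec2 create-volume \
--availability-zone ap-south-1b \
--size 1 \
--volume-type gp2 \
--query "VolumeId" \
--output text
# prints only the new volume ID, e.g. vol-0f5bf81a9955130de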

We will also provide a Name Tag to the EBS Volume.

$ aws ec2 create-tags \
--resources vol-0f5bf81a9955130de \
--tags Key=Name,Value=Pendrive

Now, to verify whether our EBS volume has been created or not, we are going to use the describe-volumes sub-command of the ec2 command:

$ aws ec2 describe-volumes \
--filters Name=status,Values=available \
--output table

✨ We have successfully created an EBS Volume in the same availability zone where our EC2 instance is present.

Step-7: Attach the volume to the EC2 instance that we have created above.

Now for the final step. We need to attach the EBS volume (Pendrive) to the EC2 instance in order to use it.

Before attaching the EBS volume, we should gather information first: the volume ID, the instance ID, and the device name (for example /dev/sdf) under which the volume will be exposed to the instance.

To attach an EBS volume to an EC2 instance, we are going to use the attach-volume subcommand of the ec2 command:

$ aws ec2 attach-volume \
--volume-id vol-0f5bf81a9955130de \
--instance-id i-000a4ad70600492fb \
--device /dev/sdf

Now, to verify the attachment of the volume to the instance, we will use the describe-volumes sub-command of the ec2 command:

$ aws ec2 describe-volumes \
--filters Name=tag:Name,Values=Pendrive \
--output table
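If you want the table to show the attachment explicitly, the Attachments array in the describe-volumes output carries the instance ID, device name, and attachment state; a query along these lines (the field labels are my own) narrows the table down to just that:

$ aws ec2 describe-volumes \
--filters Name=tag:Name,Values=Pendrive \
--query "Volumes[*].Attachments[*].{InstanceId:InstanceId,Device:Device,State:State}" \
--output table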

✨ We have successfully attached the EBS volume to the EC2 instance.

Even though we have completed all the steps successfully, we cannot use the volume yet because it has not been partitioned and formatted.

  • First, we need to SSH into the EC2 instance using OpenSSH from PowerShell 7.
$ ssh -i Arth-Key-Pair.pem -l ec2-user <PublicIpAddress>
  • Secondly, we have to switch to the root user by typing the following command:
$ sudo su - root
  • Then we will list all the drives in the instance including the one just attached as /dev/xvdf
$ fdisk -l

fdisk is a menu-driven command-line utility that allows you to create and manipulate partition tables on a hard disk. The -l option lists the partition tables for the specified devices and then exits; if no devices are given, those mentioned in /proc/partitions (if that file exists) are used.
  • Now we will partition the /dev/xvdf device using the fdisk command.
$ fdisk /dev/xvdf
  • Then we will press n to add a new partition. fdisk asks whether we want to create a primary or extended partition; since we want a primary partition, we press p. We then press Enter three times to accept the default settings, and finally we close fdisk by pressing w to write the partition table.
  • Now we have to format the new partition with a Linux file system using the mkfs.ext4 command:
$ mkfs.ext4 /dev/xvdf1

The mkfs (make filesystem) command creates a filesystem (a system for organizing a hierarchy of directories, subdirectories, and files) on a storage device or media, usually a partition on a hard disk drive (HDD).
  • Then we will create a folder that will be our mount point for the device.
  • We will mount the device onto that folder using the mount command, and then verify it using the df -h command:
$ mkdir -v /Pendrive
$ mount /dev/xvdf1 /Pendrive
$ df -h

The df command shows the amount of disk space that is free on mounted file systems; the -h option makes the sizes human-readable.

✨ Congratulations! We have now successfully completed everything.

⚡PRO-TIPS⚡

Sometimes finding the correct sub-command can be overwhelming. We can either search the official AWS Command Line Interface documentation, or search by keyword using the pipe operator with grep (or Select-String in PowerShell), as shown below.
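For example, something along these lines narrows the manual down to the lines that mention a keyword (Select-String is PowerShell's grep-like cmdlet):

# Bash/Zsh: list every mention of "volume" in the ec2 help text
$ aws ec2 help | grep "volume"

# PowerShell 7 equivalent
PS> aws ec2 help | Select-String "volume"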

💻 I keep writing blogs about DevOps, Cloud Computing, Machine Learning, etc., so feel free to follow me on Medium.

Dipaditya Das

IN ● MLOps Engineer ● Linux Administrator ● DevOps and Cloud Architect ● Kubernetes Administrator ● AWS Community Builder ● Google Cloud Facilitator ● Author