Posts by pankaj
Protected: Kubernetes
Create a Highly Available VPC
Using Terraform Dynamic Blocks and Built-in Functions to Deploy to AWS
Introduction
Terraform offers a strong set of features to help optimize your Terraform code. Two really useful features are dynamic blocks, which allow you to generate static repeated blocks within resources in Terraform; and built-in functions, which help you manipulate variables and data to suit your needs and help make your Terraform deployments better automated and more fault resilient.
Solution
- Check the Terraform status using the `version` command:
terraform version
Since the Terraform version is returned, you have validated that the Terraform binary is installed and functioning properly.
Clone Terraform Code and Switch to Proper Directory
- The Terraform code required for this lab is below. Copy it to your working directory.
Examine the Code in the Files
- View the contents of the `main.tf` file using the `less` command:
less main.tf
The `main.tf` file spins up AWS networking components such as a virtual private cloud (VPC), a security group, an internet gateway, route tables, and an EC2 instance bootstrapped with an Apache webserver that is publicly accessible.
- Closely examine the code and note the following:
  - We have selected AWS as our provider, and our resources will be deployed in the `us-east-1` region.
  - We are using the `ssm_parameter` public endpoint resource to get the AMI ID of the Amazon Linux 2 image that will spin up the EC2 webserver.
  - We are using the `vpc` module (provided by the Terraform Public Registry) to create our network components, such as subnets, an internet gateway, and route tables.
  - For the `security_group` resource, we are using a dynamic block on the `ingress` attribute to dynamically generate as many ingress blocks as we need. The dynamic block iterates over the `var.rules` complex variable configured in the `variables.tf` file.
  - We are also using a couple of built-in functions and some logical expressions, including the `join` function for the `name` attribute in the security group resource, and the `fileexists` and `file` functions for the `user_data` parameter in the EC2 instance resource.
- Enter `q` to exit the less program.
- View the contents of the `variables.tf` file:
less variables.tf
The `variables.tf` file contains the complex variable type that we will iterate over with the dynamic block in the `main.tf` file.
- Enter `q` to exit the less program.
- View the contents of the `script.sh` file using the `cat` command:
cat script.sh
The `script.sh` file is passed into the EC2 instance via its `user_data` attribute using the `fileexists` and `file` functions (as you saw in the `main.tf` file); it installs the Apache webserver and starts the service.
- View the contents of the `outputs.tf` file:
cat outputs.tf
The `outputs.tf` file returns the values we have requested upon deployment of our Terraform code:
  - The `Web-Server-URL` output is the publicly accessible URL for our webserver. Notice that we are using the `join` function for the `value` parameter to generate the URL.
  - The `Time-Date` output is the timestamp when we executed our Terraform code.
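As a sketch of the pattern described above (the variable shape, resource names, and rule values here are illustrative assumptions, not the lab's exact code), a dynamic block iterating over `var.rules`, together with the `join`, `fileexists`, and `file` functions, might look like this:

```hcl
# Illustrative sketch only - names and values are assumptions, not the lab's code.
variable "rules" {
  type = list(object({
    port        = number
    proto       = string
    cidr_blocks = list(string)
  }))
  default = [
    { port = 80,  proto = "tcp", cidr_blocks = ["0.0.0.0/0"] },
    { port = 443, proto = "tcp", cidr_blocks = ["0.0.0.0/0"] },
    { port = 22,  proto = "tcp", cidr_blocks = ["0.0.0.0/0"] },
  ]
}

data "aws_ssm_parameter" "ami" {
  # Public SSM parameter for the latest Amazon Linux 2 AMI
  name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

resource "aws_security_group" "main" {
  # join() assembles the security group name from a list of strings
  name = join("-", ["Terraform", "Dynamic", "SG"])

  # One ingress block is generated per element of var.rules
  dynamic "ingress" {
    for_each = var.rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.proto
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ssm_parameter.ami.value
  instance_type = "t3.micro"
  # Bootstrap only if the script actually exists on disk
  user_data = fileexists("script.sh") ? file("script.sh") : null
}
```

With three elements in `var.rules`, Terraform renders three ingress blocks from the single dynamic block, which is what you will see later as three inbound rules on the security group.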
Review and Deploy the Terraform Code
- As a best practice, format the code in preparation for deployment:
terraform fmt
- Validate the code to look for any errors in syntax, parameters, or attributes within Terraform resources that may prevent it from deploying correctly:
terraform validate
You should receive a notification that the configuration is valid.
- Review the actions that will be performed when you deploy the Terraform code:
terraform plan
Note the `Changes to Outputs` section, where you can see the `Time-Date` and `Web-Server-URL` outputs that were configured in the `outputs.tf` file earlier.
Test Out the Deployment and Clean Up
- Deploy the code:
terraform apply
- Once the code has executed successfully, view the outputs at the end of the completion message:
  - The `Time-Date` output displays the timestamp when the code was executed.
  - The `Web-Server-URL` output displays the web address for the Apache webserver we created during deployment.
You can also use the `terraform output` command at any time in the CLI to view these outputs on demand.
- Verify that the resources were created correctly in the AWS Management Console:
- Navigate to the AWS Management Console in your browser.
- Type VPC in the search bar and select VPC from the contextual menu.
- On the Resources by Region page, click VPCs.
- Verify that the my-vpc resource appears in the list.
- Type EC2 in the search bar and select EC2 from the contextual menu.
- On the Resources page, click Instances (running).
- Verify that the instance, which has no name, appears in the list (and is likely still initializing).
- In the menu on the left, click Security Groups.
- Verify that the Terraform-Dynamic-SG security group appears in the list.
- Select the security group to see further details.
- Click on the Inbound rules tab, and note that three separate rules were created from the single dynamic block used on the `ingress` parameter in the code.
- In the CLI, copy the URL displayed as the `Web-Server-URL` output value.
- In a new browser window or tab, paste the URL and press Enter.
- Verify that the Apache Test Page loads, validating that the code executed correctly and that the logic in the Terraform configuration worked: it located the `script.sh` file in the folder and bootstrapped the EC2 instance accordingly.
- In the CLI, tear down the infrastructure you just created before moving on:
terraform destroy --auto-approve
Scalability
The term “scalability” is often used as a catch-all criticism: a way to suggest that something is poorly designed or flawed, and to end an argument by implying that a system’s architecture limits its potential for growth. Used positively, however, scalability names a desired property, as when we say that a platform needs good scalability.
In essence, scalability means that when resources are added to a system, performance increases proportionally. This can involve serving more units of work or handling larger units of work, such as when datasets grow. In distributed systems, adding resources can also be done to improve service reliability, such as introducing redundancy to prevent failures. A scalable always-on service can add redundancy without sacrificing performance.
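One way to make “performance increases proportionally” concrete is Amdahl’s law, which bounds the speedup a system can gain from added resources when only part of the work actually benefits from them (the fractions below are illustrative, not from the text):

```python
def speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: ideal speedup when only `parallel_fraction` of the
    work benefits from adding `workers` resources."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# A fully parallel workload scales linearly with resources...
print(round(speedup(1.0, 8), 2))   # 8.0
# ...but even a 10% serial portion caps 8 workers well below 8x.
print(round(speedup(0.9, 8), 2))   # 4.71
```

This is exactly why systems must be architected for scalability up front: any serial bottleneck left in the design quietly puts a ceiling on what added resources can buy.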
Achieving scalability is not easy, as it requires systems to be designed with scalability in mind. Systems must be architected to ensure that adding resources results in improved performance or that introducing redundancy does not adversely affect performance. Many algorithms that perform well under low load and small datasets can become prohibitively expensive when dealing with higher request rates or larger datasets.
Additionally, as systems grow through scale-out, they often become more heterogeneous. This means that different nodes in the system will have varying processing speeds and storage capabilities. Algorithms that rely on uniformity may break down or underutilize newer resources.
Despite the challenges, achieving good scalability is possible if systems are architected and engineered with scalability in mind. Architects and engineers must carefully consider how systems will grow, where redundancy is required, and how heterogeneity will be handled. They must also be aware of the tools and potential pitfalls associated with achieving scalability.
Manually Migrate Data Between Redshift Clusters
You have been presented with a few pain points to solve around your company’s Redshift solution. The original Redshift cluster launched for the company’s analytics stack has become underpowered over time. Some groups want incremental backups of certain tables to S3 in a format that can be plugged into data lake solutions, while others want select pieces of the main Redshift schema split off into new department-specific clusters.
You’ve come up with a plan to utilize the UNLOAD and COPY commands to facilitate all of the above and need to test a proof of concept to ensure that all pain points above can be addressed in this manner.
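A minimal sketch of that proof of concept might pair the two commands like this (the bucket name, table, and IAM role ARN below are placeholders, not values from the scenario):

```sql
-- Export a table to S3 in Parquet, suitable for data lake tooling
UNLOAD ('SELECT * FROM public.sales')
TO 's3://example-analytics-backups/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Access'
FORMAT AS PARQUET;

-- Load the same data into a matching table on a new department-specific cluster
COPY public.sales
FROM 's3://example-analytics-backups/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Access'
FORMAT AS PARQUET;
```

Because UNLOAD writes open formats like Parquet to S3, the same export serves both goals at once: it is the incremental backup the data lake groups want, and the staging area the new clusters COPY from.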
A short story on AI
Not everyone was comfortable with the idea of AI systems developing self-awareness. Many people feared that they would eventually become more intelligent than humans, and would eventually turn on their creators. These concerns led to protests and demonstrations against the development of advanced AI systems.
Using Secrets Manager to Authenticate with an RDS Database Using Lambda
Introduction
AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. In this lab, we connect to a MySQL RDS database from an AWS Lambda function using a username and password, and then we hand over credential management to the AWS Secrets Manager service. We then use the Secrets Manager API to connect to the database instead of hard-coding credentials in our Lambda function. By the end of this lab, you will understand how to store a secret in AWS Secrets Manager and access it from a Lambda function.
Solution
Log in to the live AWS environment using the credentials provided. Use an incognito or private browser window to ensure you’re using the lab account rather than your own.
Make sure you’re in the N. Virginia (`us-east-1`) region throughout the lab.
Download the MySQL Library ZIP file you’ll need for the first lab objective.
Create Lambda Function
- Navigate to Lambda > Functions.
- Click Create function.
- Make sure the Author from scratch option at the top is selected, and then use the following settings:
- Function name: Enter testRDS.
- Runtime: Select Node.js 14.x.
- Expand Advanced settings, and set the following values:
- Enable VPC: Check the box.
- VPC: Select the lab-provided VPC.
- Subnets: Enter Public and select the two subnets that have `Public` in their name/ID.
- Security groups: Select the lab-provided Database-Security-Group security group (not the default security group).
- Click Create function.
- It may take 5–10 minutes to finish creating.
- Click the Configuration tab.
- Click Edit.
- Under Timeout, change it to 6 seconds.
- Click Save.
- In the left-hand menu, click Layers.
- Click Create layer.
- Set the following values:
- Name: Enter mysql.
- Upload a .zip file: Click Upload and upload the MySQL Library ZIP file you downloaded earlier.
- Compatible runtimes: Node.js 14.x
- Click Create.
- Click Functions in the left-hand menu.
- Click your testRDS function.
- In the Function overview section, click Layers under testRDS.
- In the Layers section, click Add a layer.
- Select Custom layers, and set the following values:
- Custom layers: Select mysql.
- Version: Select 1.
- Click Add.
Copy Code into Lambda Function
- In the Code source section, expand testRDS > index.js.
- Select the existing code in the index.js tab and replace it with the following:
var mysql = require('mysql');

exports.handler = (event, context, callback) => {
  var connection = mysql.createConnection({
    host: "<RDS Endpoint>",
    user: "username",
    password: "password",
    database: "example",
  });

  connection.query('show tables', function (error, results, fields) {
    if (error) {
      connection.destroy();
      throw error;
    } else {
      // connected!
      console.log("Query result:");
      console.log(results);
      callback(error, results);
      connection.end(function (err) { callback(err, results); });
    }
  });
};
- In a new browser tab, navigate to RDS > DB Instances.
- Click the listed database.
- Copy the endpoint (in the Connectivity & security section) and paste it into a plaintext file (you’ll need it a couple times during the lab).
- Back in the Lambda function code, replace the `<RDS Endpoint>` placeholder in the `host` field with the endpoint you just copied.
- Click Deploy.
- Click Test.
- In the Configure test event dialog, enter an Event name of test.
- Click Save.
- Click Test again.
- The Response should only be two square brackets, which is correct since we don’t have any tables defined in this database.
- Click the index.js tab.
- Replace the `connection.query('show tables', ...)` line with the following:
connection.query('CREATE TABLE pet (name VARCHAR(20), species VARCHAR(20))',function (error, results, fields) {
- Click Deploy.
- Click Test.
- This time, the Response should have information within curly brackets.
- Click the index.js tab.
- Undo the code change (Ctrl+Z or Cmd+Z) to get it back to the original code we pasted in.
- Click Deploy.
- Click Test.
- This time, we should see the `pet` table listed in the Response.
Create a Secret in Secrets Manager
- In a new browser tab, navigate to Secrets Manager.
- Click Store a new secret.
- With Credentials for Amazon RDS database selected, set the following values:
- User name: Enter username.
- Password: Enter password.
- Encryption key: Leave as the default.
- Database: Select the listed DB instance.
- Click Next.
- On the next page, give it a Secret name of RDScredentials.
- Leave the rest of the defaults, and click Next.
- On the next page, set the following values:
- Automatic rotation: Toggle to enable it.
- Schedule expression builder: Select.
- Time unit: Change it to Days, 1.
- Create a rotation function: Select.
- SecretsManager: Enter rotateRDS.
- Use separate credentials to rotate this secret: Select No.
- Click Next.
- In the Sample code section, ensure the `region` is set to `us-east-1`.
- Click Store.
- It may take 5–10 minutes to finish the configuration.
- Once it’s done, click RDScredentials.
- In the Secret value section, click Retrieve secret value.
- You should see the password is now a long string rather than password.
- If yours still says password, give it a few minutes and refresh the page. Your Lambda function may still be in the process of getting set up.
- Back in the Lambda function, click Test.
- You will see errors saying access is denied because the password has changed.
- Click the index.js tab.
- Select all the code and replace it with the following:
var mysql = require('mysql');
var AWS = require('aws-sdk'),
    region = "us-east-1",
    secretName = "RDScredentials",
    secret,
    decodedBinarySecret;

var client = new AWS.SecretsManager({
  region: "us-east-1"
});

exports.handler = (event, context, callback) => {
  client.getSecretValue({ SecretId: secretName }, function(err, data) {
    if (err) {
      console.log(err);
    } else {
      // Decrypts secret using the associated KMS CMK.
      // Depending on whether the secret is a string or binary, one of these fields will be populated.
      if ('SecretString' in data) {
        secret = data.SecretString;
      } else {
        let buff = new Buffer(data.SecretBinary, 'base64');
        decodedBinarySecret = buff.toString('ascii');
      }
    }
    var parse = JSON.parse(secret);
    var password = parse.password;

    var connection = mysql.createConnection({
      host: "<RDS Endpoint>",
      user: "username",
      password: password,
      database: "example",
    });

    connection.query('show tables', function (error, results, fields) {
      if (error) {
        connection.destroy();
        throw error;
      } else {
        // connected!
        console.log("Query result:");
        console.log(results);
        callback(error, results);
        connection.end(function (err) { callback(err, results); });
      }
    });
  });
};
- Replace `<RDS Endpoint>` with the value you copied earlier.
- Click Deploy.
Work with AWS VPC Flow Logs for Network Monitoring
Monitoring network traffic is a critical component of security best practices to meet compliance requirements, investigate security incidents, track key metrics, and configure automated notifications. AWS VPC Flow Logs captures information about the IP traffic going to and from network interfaces in your VPC. In this hands-on lab, we will set up and use VPC Flow Logs published to Amazon CloudWatch, create custom metrics and alerts based on the CloudWatch logs to understand trends and receive notifications for potential security issues, and use Amazon Athena to query and analyze VPC Flow Logs stored in S3.
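As a taste of the Athena piece, a query along these lines (assuming a `vpc_flow_logs` table has been created over the S3 logs with the standard flow log fields) surfaces the most frequently rejected flows:

```sql
-- Hypothetical table and column mapping; the standard VPC Flow Logs
-- fields srcaddr, dstaddr, dstport, and action are assumed as columns.
SELECT srcaddr, dstaddr, dstport, COUNT(*) AS reject_count
FROM vpc_flow_logs
WHERE action = 'REJECT'
GROUP BY srcaddr, dstaddr, dstport
ORDER BY reject_count DESC
LIMIT 20;
```

Spikes in rejected traffic to a single port are often the first visible sign of a misconfigured security group or a port scan.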
Reduce Storage Costs with EFS
Introduction
Amazon Elastic File System (Amazon EFS) provides a simple, serverless elastic file system that lets you share file data without provisioning or managing storage. In this lab, we modify 3 existing EC2 instances to use a shared EFS storage volume instead of duplicated Elastic Block Store volumes. This reduces costs significantly, as we only need to store data in 1 location instead of 3. By the end of this lab, you will understand how to create EFS volumes and attach them to an EC2 instance.
Create EFS File System
Create an EFS Volume
- Navigate to EC2 > Instances (running).
- Click the checkbox next to webserver-01.
- Click the Storage tab and note the 10 GiB volume attached.
- In a new browser tab, navigate to EFS.
- Click Create file system, and set the following values:
- Name: SharedWeb
- Availability and durability: One Zone
- Click Create.
- Once it’s created, click View file system in the top right corner.
- Click the Network tab and wait for the created network to become available.
- Once it’s created, click Manage.
- Under Security groups, remove the currently attached default security group, and open the dropdown menu to select the provided EC2 security group (not the default).
- Click Save.
- Return to the EC2 browser tab.
- Click Security Groups in the left-hand menu.
- Click the checkbox next to that same security group (the one that is not default).
- Click the Inbound rules tab.
- Click Edit inbound rules.
- Click Add rule, and set the following values:
- Type: NFS
- Source: Custom, 0.0.0.0/0
- Click Save rules.
- Click EC2 Dashboard in the left-hand menu.
- Click Instances (running).
- With `webserver-01` selected, click Connect in the top right corner.
- Click Connect. This should take you to a new terminal showing your EC2 instance in a new browser tab or window.
Mount the EFS File System and Test It
- In the terminal, list your block devices:
lsblk
- View the data inside the 10 GiB disk mounted to `/data`:
ls /data
- Create a mount point or directory to attach our EFS volume:
sudo mkdir /efs
- Return to the AWS EFS console showing the `SharedWeb` file system.
- Click Attach.
- Select Mount via IP.
- Copy the command under Using the NFS client: to your clipboard.
- Return to the terminal, and paste in the command.
- Add a slash right before `efs` and press Enter.
- View the newly mounted EFS volume:
ls /efs
Nothing will be returned, but that shows it’s mounted.
- List the block devices again:
lsblk
- View the mounts:
mount
- View file system mounts:
df -h
- Move all files from `/data` to the `/efs` file system:
sudo rsync -rav /data/* /efs
- View the files now in the `/efs` file system:
ls /efs
This time, a list should be returned.
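For reference, the NFS client command the console generates follows this general pattern (the mount target IP and options below are illustrative; always copy the exact command shown for your own file system):

```
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <MOUNT TARGET IP>:/ /efs
```

Seeing the pieces laid out here also helps in the next section, where the same IP and options are reused to build a persistent `/etc/fstab` entry.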
Remove Old Data
Remove Data from webserver-01
- Unmount the partition:
sudo umount /data
- Edit the `/etc/fstab` file:
sudo nano /etc/fstab
- Remove the line starting with `UUID=` by placing the cursor at the beginning of the line and pressing Ctrl+K.
- In the AWS console, navigate to the EFS tab.
- In the Using the NFS client: section, copy the IP in the command.
- Back in the terminal, paste in the IP you just copied:
<NFS MOUNT IP>:/
- Press the Tab key twice.
- Add the mount point and file system type (`nfs4`), so that the line now looks like this (with a tab after `/data`):
<NFS MOUNT IP>:/ /data nfs4
- Back on the EFS page of the AWS EFS console, copy the options (the part of the command starting with `nfsvers` and ending with `noresvport`).
- In the terminal, press Tab after `nfs4` and add the copied options to the line, with two zeroes at the end, so that it now looks like this:
<NFS MOUNT IP>:/ /data nfs4 <OPTIONS> 0 0
- Save and exit by pressing Ctrl+X, followed by `Y` and Enter.
- Unmount `/efs` to test whether this worked:
sudo umount /efs
- View the file systems:
df -h
- Try to mount everything that is not already mounted:
sudo mount -a
- View the file systems again and check if `10.0.0.180:/` is mounted:
df -h
You should see the NFS share is now mounted on `/data`.
- View the contents of `/data`:
ls /data
- Navigate back to the AWS console with the Connect to instance EC2 page open.
- Click EC2 in the top left corner.
- Click Volumes.
- Scroll to the right and expand the Attached Instances column to find out which 10 GiB volume is attached to `webserver-01`.
- Click the checkbox next to the 10 GiB volume attached to `webserver-01`.
- Click Actions > Detach volume.
- Click Detach.
- Once it’s detached, click the checkbox next to the same volume again.
- Click Actions > Delete volume.
- Click Delete.
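Putting the steps above together, the finished `/etc/fstab` entry follows this pattern (the options shown are the usual defaults the EFS console suggests, but use the exact values you copied from your own console):

```
10.0.0.180:/   /data   nfs4   nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport   0   0
```

The two trailing zeroes tell the system not to dump or fsck-order this file system; the rest is simply the console's mount command recast in fstab form, which is why `sudo mount -a` can recreate the mount on every boot.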
Remove Data from webserver-02 and webserver-03
- Click Instances in the left-hand menu.
- Click the checkbox next to webserver-02.
- Click Connect.
- Click Connect. This should launch a terminal in a new browser window or tab.
- In the tab with the terminal for `webserver-01`, view the contents of `/etc/fstab`:
cat /etc/fstab
- Copy the second line (starting with an IP) to your clipboard.
- Return to the terminal you launched for `webserver-02`.
- Unmount the `/data` partition:
sudo umount /data
- Edit the `/etc/fstab` file:
sudo nano /etc/fstab
- Delete the second line using Ctrl+K.
- Paste in the line from your clipboard.
- Align the pasted line with the line above, as seen in `webserver-01`.
- Save and exit by pressing Ctrl+X, followed by `Y` and Enter.
- Mount it:
sudo mount -a
- Check the disk status:
df -h
- Check the contents of `/data`:
ls /data
- Return to the window with the Connect to instance EC2 page open.
- Click Instances in the top left.
- Click the checkbox next to webserver-03.
- Click Connect.
- Click Connect. This should launch a terminal in a new browser window or tab.
- Unmount the `/data` partition:
sudo umount /data
- Edit the `/etc/fstab` file:
sudo nano /etc/fstab
- Delete the second line using Ctrl+K.
- Paste in the line from your clipboard.
- Align the pasted line with the line above, as seen in `webserver-01`.
- Save and exit by pressing Ctrl+X, followed by `Y` and Enter.
- Mount everything that is not already mounted:
sudo mount -a
- Check the disk status:
df -h
- Check the contents of `/data`:
ls /data
- Return to the window with the Connect to instance EC2 page open.
- Navigate to EC2 > Volumes.
- Check the boxes for both of the 10 GiB volumes.
- Click Actions > Detach volume.
- Type detach into the box, and then click Detach.
- Once they’re detached, select them again and click Actions > Delete volume.
- Type delete into the box, and then click Delete.