Posts tagged How to

Setup Cloud Monitoring on GCP

Overview

Cloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Cloud Monitoring collects metrics, events, and metadata from Google Cloud, Amazon Web Services, hosted uptime probes, application instrumentation, and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others. Cloud Monitoring ingests that data and generates insights via dashboards, charts, and alerts. Cloud Monitoring alerting helps you collaborate by integrating with Slack, PagerDuty, HipChat, Campfire, and more.

This lab shows you how to monitor a Compute Engine virtual machine (VM) instance with Cloud Monitoring. You’ll also install monitoring and logging agents on your VM, which collect more information from your instance, including metrics and logs from third-party apps.

Set your region and zone

Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones. Learn more about regions and zones and see a complete list in the Regions & Zones documentation.

Run the following gcloud commands in Cloud Console to set the default region and zone for your lab:

gcloud config set compute/zone "ZONE"
export ZONE=$(gcloud config get compute/zone)
gcloud config set compute/region "REGION"
export REGION=$(gcloud config get compute/region)
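To confirm the defaults were stored, you can echo the values you just exported; they should match the region and zone assigned for your lab:

echo "Region: $REGION, Zone: $ZONE"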

Task 1. Create a Compute Engine instance

  1. In the Cloud Console dashboard, go to Navigation menu > Compute Engine > VM instances, then click Create instance.
  2. Fill in the fields as follows, leaving all other fields at the default value:
    • Name: lamp-1-vm
    • Region: REGION
    • Zone: ZONE
    • Series: E2
    • Machine type: e2-medium
    • Boot disk: Debian GNU/Linux 11 (bullseye)
    • Firewall: Check Allow HTTP traffic
  3. Click Create. Wait a couple of minutes; you’ll see a green check when the instance has launched.
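If you prefer working from Cloud Shell, a roughly equivalent instance can be created with gcloud. This is a sketch rather than the lab’s required path, and note that the Allow HTTP traffic checkbox in the console also creates a firewall rule, which the http-server tag below relies on already existing:

gcloud compute instances create lamp-1-vm \
  --zone=$ZONE \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --tags=http-server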

Task 2. Add Apache2 HTTP Server to your instance

  1. In the Console, click SSH in line with lamp-1-vm to open a terminal to your instance.
  2. Run the following commands in the SSH window to set up Apache2 HTTP Server:

sudo apt-get update

sudo apt-get install apache2 php7.0

  1. When asked if you want to continue, enter Y.

Note: If you cannot install php7.0, use php5.

sudo service apache2 restart

  1. Return to the Cloud Console, on the VM instances page. Click the External IP for lamp-1-vm instance to see the Apache2 default page for this instance.

Note: If you are unable to find the External IP column, click the Column Display Options icon in the top right corner of the table, select the External IP checkbox, and click OK.
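As an alternative to clicking the External IP in the console, you can look the address up and test the Apache page from Cloud Shell (the format expression below is a standard gcloud field path):

EXTERNAL_IP=$(gcloud compute instances describe lamp-1-vm --zone=$ZONE --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
curl -I http://$EXTERNAL_IP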

Create a Monitoring Metrics Scope

Set up a Monitoring Metrics Scope that’s tied to your Google Cloud Project. The following steps create a new account that has a free trial of Monitoring.

  • In the Cloud Console, click Navigation menu > Monitoring.

When the Monitoring Overview page opens, your metrics scope project is ready.

Install the Monitoring and Logging agents

Agents collect data and then send or stream info to Cloud Monitoring in the Cloud Console.

The Cloud Monitoring agent is a collectd-based daemon that gathers system and application metrics from virtual machine instances and sends them to Monitoring. By default, the Monitoring agent collects disk, CPU, network, and process metrics. Configuring the Monitoring agent allows third-party applications to get the full list of agent metrics. See the Cloud Monitoring documentation on the Google Cloud Operations website for more information.

In this section, you install the Cloud Logging agent to stream logs from your VM instances to Cloud Logging. Later in this lab, you see what logs are generated when you stop and start your VM. Note: It is a best practice to run the Cloud Logging agent on all your VM instances.

  1. Run the Monitoring agent install script command in the SSH terminal of your VM instance to install the Cloud Monitoring agent:

curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh

sudo bash add-google-cloud-ops-agent-repo.sh --also-install

  1. If asked if you want to continue, press Y.
  2. Run the following command in the SSH terminal of your VM instance to verify that the agent is running:

sudo systemctl status "google-cloud-ops-agent*"

Press q to exit the status.

sudo apt-get update
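Because this VM runs Apache, you can optionally extend the Ops Agent to collect Apache metrics and logs. The snippet below is a sketch of the third-party integration configuration described in the Ops Agent documentation; verify the receiver names against the current docs before relying on it:

sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null <<EOF
metrics:
  receivers:
    apache:
      type: apache
  service:
    pipelines:
      apache:
        receivers: [apache]
logging:
  receivers:
    apache_access:
      type: apache_access
    apache_error:
      type: apache_error
  service:
    pipelines:
      apache:
        receivers: [apache_access, apache_error]
EOF
sudo systemctl restart google-cloud-ops-agent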

Task 3. Create an uptime check

Uptime checks verify that a resource is always accessible. For practice, create an uptime check to verify your VM is up.

  1. In the Cloud Console, in the left menu, click Uptime checks, and then click Create Uptime Check.
  2. For Protocol, select HTTP.
  3. For Resource Type, select Instance.
  4. For Instance, select lamp-1-vm.
  5. For Check Frequency, select 1 minute.
  6. Click Continue.
  7. In Response Validation, accept the defaults and then click Continue.
  8. In Alert & Notification, accept the defaults, and then click Continue.
  9. For Title, type Lamp Uptime Check.
  10. Click Test to verify that your uptime check can connect to the resource. When you see a green check mark, everything can connect.
  11. Click Create. The uptime check you configured takes a while to become active. Continue with the lab; you’ll check for results later. While you wait, create an alerting policy for a different resource.

Task 4. Create an alerting policy

Use Cloud Monitoring to create one or more alerting policies.

  1. In the left menu, click Alerting, and then click +Create Policy.
  2. Click on Select a metric dropdown. Disable the Show only active resources & metrics.
  3. Type Network traffic in filter by resource and metric name and click on VM instance > Interface. Select Network traffic (agent.googleapis.com/interface/traffic) and click Apply. Leave all other fields at the default value.
  4. Click Next.
  5. Set the Threshold position to Above threshold, the Threshold value to 500, and Advanced Options > Retest window to 1 min. Click Next.
  6. Click on the drop down arrow next to Notification Channels, then click on Manage Notification Channels.

The Notification channels page will open in a new tab.

  1. Scroll down the page and click on ADD NEW for Email.
  2. In the Create Email Channel dialog box, enter your personal email address in the Email Address field and a Display name.
  3. Click on Save.
  4. Go back to the previous Create alerting policy tab.
  5. Click on Notification Channels again, then click on the Refresh icon to load the display name you created in the previous step.
  6. Click on Notification Channels again if necessary, select your Display name and click OK.
  7. Add a message in the Documentation field, which will be included in the emailed alert.
  8. Name the alert Inbound Traffic Alert.
  9. Click Next.
  10. Review the alert and click Create Policy.

You’ve created an alert! While you wait for the system to trigger an alert, create a dashboard and chart, and then check out Cloud Logging.

Task 5. Create a dashboard and chart

You can display the metrics collected by Cloud Monitoring in your own charts and dashboards. In this section you create the charts for the lab metrics and a custom dashboard.

  1. In the left menu select Dashboards, and then +Create Dashboard.
  2. Name the dashboard Cloud Monitoring LAMP Start Dashboard.

Add the first chart

  1. Click the Line option in the Chart library.
  2. Name the chart title CPU Load.
  3. Click on Resource & Metric dropdown. Disable the Show only active resources & metrics.
  4. Type CPU load (1m) in filter by resource and metric name and click on VM instance > Cpu. Select CPU load (1m) and click Apply. Leave all other fields at the default value. Refresh the tab to view the graph.

Add the second chart

  1. Click + Add Chart and select Line option in the Chart library.
  2. Name this chart Received Packets.
  3. Click on Resource & Metric dropdown. Disable the Show only active resources & metrics.
  4. Type Received packets in filter by resource and metric name and click on VM instance > Instance. Select Received packets and click Apply. Refresh the tab to view the graph.
  5. Leave the other fields at their default values. You see the chart data.

Task 6. View your logs

Cloud Monitoring and Cloud Logging are closely integrated. Check out the logs for your lab.

  1. Select Navigation menu > Logging > Logs Explorer.
  2. Select the logs you want to see. In this case, select the logs for the lamp-1-vm instance you created at the start of this lab:
    • Click on Resource.
    • Select VM Instance > lamp-1-vm in the Resource drop-down menu.
    • Click Apply.
    • Leave the other fields with their default values.
    • Click Stream logs.

You see the logs for your VM instance.
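You can also read the same logs from Cloud Shell; the filter below simply selects entries from Compute Engine instances:

gcloud logging read 'resource.type="gce_instance"' --limit 10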

Check out what happens when you start and stop the VM instance.

To best see how Cloud Monitoring and Cloud Logging reflect VM instance changes, make changes to your instance in one browser window and then see what happens in the Cloud Monitoring, and then Cloud Logging windows.

  1. Open the Compute Engine window in a new browser window. Select Navigation menu > Compute Engine, right-click VM instances > Open link in new window.
  2. Move the Logs Viewer browser window next to the Compute Engine window. This makes it easier to view how changes to the VM are reflected in the logs.
  3. In the Compute Engine window, select the lamp-1-vm instance, click the three vertical dots at the right of the screen, click Stop, and then confirm to stop the instance. It takes a few minutes for the instance to stop.
  4. Watch the Logs Viewer tab for messages indicating that the VM has stopped.
  5. In the VM instance details window, click the three vertical dots at the right of the screen and then click Start/resume, and then confirm. It will take a few minutes for the instance to re-start. Watch the log messages to monitor the start up.

Task 7. Check the uptime check results and triggered alerts

  1. In the Cloud Console, select Navigation menu > Monitoring > Uptime checks. This view provides a list of all active uptime checks and the status of each in different locations. You will see Lamp Uptime Check listed. Since you have just restarted your instance, the regions are in a failed status. It may take up to 5 minutes for the regions to become active. Reload your browser window as necessary until the regions are active.
  2. Click the name of the uptime check, Lamp Uptime Check. Since you have just restarted your instance, it may take some minutes for the regions to become active. Reload your browser window as necessary.

Check if alerts have been triggered

  1. In the left menu, click Alerting.
  2. You see incidents and events listed in the Alerting window.
  3. Check your email account. You should see Cloud Monitoring Alerts.

Note: Remove the email notification from your alerting policy. The resources for the lab may be active for a while after you finish, and this may result in a few more email notifications getting sent out.

Congratulations! You have successfully set up and monitored a VM with Cloud Monitoring on GCP.

Setting Up Cost Control with Quota

In this lab you will complete the following tasks:

  • Query a public dataset and explore the associated costs.
  • Modify the BigQuery API quota.
  • Rerun the query after the quota has been modified.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session: Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. (Optional) You can list the active account name with this command:

gcloud auth list

  1. Click Authorize.

Open the BigQuery console

  1. In the Google Cloud Console, select Navigation menu > BigQuery.

The Welcome to BigQuery in the Cloud Console message box opens. This message box links to the quickstart guide and the release notes.

  1. Click Done.

The BigQuery console opens.

Task 1. Query a public dataset in BigQuery

In this lab, you query the bigquery-public-data:wise_all_sky_data_release public dataset. Learn more about this dataset from the blog post Querying the Stars with BigQuery GIS.

  1. In the Query editor paste the following query: SELECT w1mpro_ep, mjd, load_id, frame_id FROM `bigquery-public-data.wise_all_sky_data_release.mep_wise` ORDER BY mjd ASC LIMIT 500
  2. Do not run the query. Instead, please answer the following question:

Use the query validator to determine how many bytes of data the query will process when you run it.

  1. Now run the query and see how quickly BigQuery processes that size of data.
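If you want the same estimate from the command line, bq supports a dry run that reports how many bytes the query would process without actually running it:

bq query --use_legacy_sql=false --dry_run 'SELECT w1mpro_ep, mjd, load_id, frame_id FROM `bigquery-public-data.wise_all_sky_data_release.mep_wise` ORDER BY mjd ASC LIMIT 500'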

Task 2. Explore query cost

The first 1 TB of query data processed per month is free.

Task 3. Update BigQuery quota

In this task, you update the BigQuery API quota to restrict the data processed in queries in your project.

  1. In your Cloud Shell, run this command to view your current usage quotas with the BigQuery API:

gcloud alpha services quota list --service=bigquery.googleapis.com --consumer=projects/${DEVSHELL_PROJECT_ID} --filter="usage"

The consumerQuotaLimits display your current query per day limits. There is a separate quota for usage per project and usage per user.

  1. Run this command in Cloud Shell to update your per-user quota to 0.25 TiB (262,144 MiB) per day:

gcloud alpha services quota update --consumer=projects/${DEVSHELL_PROJECT_ID} --service bigquery.googleapis.com --metric bigquery.googleapis.com/quota/query/usage --value 262144 --unit 1/d/{project}/{user} --force

  1. After the quota is updated, examine your consumerQuotaLimits again:

gcloud alpha services quota list --service=bigquery.googleapis.com --consumer=projects/${DEVSHELL_PROJECT_ID} --filter="usage"

You should see the same limits from before but also a consumerOverride with the value used in the previous step:

---
consumerQuotaLimits:
- metric: bigquery.googleapis.com/quota/query/usage
  quotaBuckets:
  - defaultLimit: '9223372036854775807'
    effectiveLimit: '9223372036854775807'
    unit: 1/d/{project}
- metric: bigquery.googleapis.com/quota/query/usage
  quotaBuckets:
  - consumerOverride:
      name: projects/33699896259/services/bigquery.googleapis.com/consumerQuotaMetrics/bigquery.googleapis.com%2Fquota%2Fquery%2Fusage/limits/%2Fd%2Fproject%2Fuser/consumerOverrides/Cg1RdW90YU92ZXJyaWRl
      overrideValue: '262144'
    defaultLimit: '9223372036854775807'
    effectiveLimit: '262144'
    unit: 1/d/{project}/{user}
displayName: Query usage
metric: bigquery.googleapis.com/quota/query/usage
unit: MiBy

Next, you will re-run your query with the updated quota.

Task 4. Rerun your query

  1. In the Cloud Console, click BigQuery.
  2. The query you previously ran should still be in the query editor, but if it isn’t, paste the following query in the Query editor and click Run: SELECT w1mpro_ep, mjd, load_id, frame_id FROM `bigquery-public-data.wise_all_sky_data_release.mep_wise` ORDER BY mjd ASC LIMIT 500
  3. Note that the validator still says "This query will process 1.36 TB when run." However, the query has run successfully and hasn’t processed any data. Why do you think that is?

Running the same query again may not process any data because of BigQuery’s automatic query caching feature.

Note: If your query is already blocked by your custom quota, don’t worry. It’s likely that you set the custom quota and re-run the query before the first query had time to cache the results.

Queries that use cached query results are at no additional charge and do not count against your quota. For more information on using cached query results, see Using cached query results.

To test the newly set quota, you must disable the query cache so that the previous query processes data again.
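If you prefer the command line, the bq tool can bypass the cache for a single run via its boolean use_cache flag:

bq query --use_legacy_sql=false --nouse_cache 'SELECT w1mpro_ep, mjd, load_id, frame_id FROM `bigquery-public-data.wise_all_sky_data_release.mep_wise` ORDER BY mjd ASC LIMIT 500'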

  1. To test that the quota has changed, disable the cached query results. In the Query results pane, click More > Query settings:
Query settings option highlighted in the More dropdown menu
  1. Uncheck Use cached results and click Save.
  2. Run the query again so that it counts against your daily quota.
  3. Once the query has run successfully and processed the 1.36 TB, run the query once more. What happened? Were you able to run the query? You should have received an error like the following:

Custom quota exceeded: Your usage exceeded the custom quota for QueryUsagePerUserPerDay, which is set by your administrator. For more information, see https://cloud.google.com/bigquery/cost-controls

Task 5. Explore BigQuery best practices

Quotas can be used for cost controls but it’s up to your business to determine which quotas make sense for your team. This is one example of how to set quotas to protect from unexpected costs. One way to reduce the amount of data queried is to optimize your queries.

Learn more about optimizing BigQuery queries from the Control costs in BigQuery guide.

And just like that, you completed all the tasks! Congrats!

Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine

Overview

This post shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it. When you modify the application code, the changes trigger the continuous delivery pipeline to automatically rebuild, retest, and redeploy the new version.

Objectives

  • Set up your environment by launching Google Cloud Shell, creating a Kubernetes Engine cluster, and configuring your identity and user management scheme.
  • Download a sample application, create a Git repository then upload it to a Google Cloud Source Repository.
  • Deploy Spinnaker to Kubernetes Engine using Helm.
  • Build your Docker image.
  • Create triggers to create Docker images when your application changes.
  • Configure a Spinnaker pipeline to reliably and continuously deploy your application to Kubernetes Engine.
  • Deploy a code change, triggering the pipeline, and watch it roll out to production.

Pipeline architecture

To continuously deliver application updates to your users, you need an automated process that reliably builds, tests, and updates your software. Code changes should automatically flow through a pipeline that includes artifact creation, unit testing, functional testing, and production rollout. In some cases, you want a code update to apply to only a subset of your users, so that it is exercised realistically before you push it to your entire user base. If one of these canary releases proves unsatisfactory, your automated procedure must be able to quickly roll back the software changes.

Process diagram

With Kubernetes Engine and Spinnaker you can create a robust continuous delivery flow that helps to ensure your software is shipped as quickly as it is developed and validated. Although rapid iteration is your end goal, you must first ensure that each application revision passes through a gamut of automated validations before becoming a candidate for production rollout. When a given change has been vetted through automation, you can also validate the application manually and conduct further pre-release testing.

After your team decides the application is ready for production, one of your team members can approve it for production deployment.

Application delivery pipeline

In this section you will build the continuous delivery pipeline shown in the following diagram.

Continuous delivery pipeline flow diagram

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session: Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. (Optional) You can list the active account name with this command:

gcloud auth list

  1. Click Authorize.

4. (Optional) You can list the project ID with this command:

gcloud config list project

Task 1. Set up your environment

Configure the infrastructure and identities required for this lab. First you’ll create a Kubernetes Engine cluster to deploy Spinnaker and the sample application.

  1. Set the default compute zone:

gcloud config set compute/zone us-east1-d

  1. Create a Kubernetes Engine cluster to use for the Spinnaker tutorial and sample application:

gcloud container clusters create spinnaker-tutorial \
  --machine-type=n1-standard-2

Cluster creation takes 5 to 10 minutes to complete. Wait for your cluster to finish provisioning before proceeding.

When it completes, you see a report detailing the name, location, version, IP address, machine type, node version, number of nodes, and a status indicating that the cluster is running.

Configure identity and access management

Create a Cloud Identity Access Management (Cloud IAM) service account to delegate permissions to Spinnaker, allowing it to store data in Cloud Storage. Spinnaker stores its pipeline data in Cloud Storage to ensure reliability and resiliency. If your Spinnaker deployment unexpectedly fails, you can create an identical deployment in minutes with access to the same pipeline data as the original.

Create the service account and grant it the permissions it needs by following these steps:

  1. Create the service account:

gcloud iam service-accounts create spinnaker-account \
  --display-name spinnaker-account

  1. Store the service account email address and your current project ID in environment variables for use in later commands:

export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" \
  --format='value(email)')

export PROJECT=$(gcloud info --format='value(config.project)')

  1. Bind the storage.admin role to your service account:

gcloud projects add-iam-policy-binding $PROJECT \
  --role roles/storage.admin \
  --member serviceAccount:$SA_EMAIL

  1. Download the service account key. In a later step, you will install Spinnaker and upload this key to Kubernetes Engine:

gcloud iam service-accounts keys create spinnaker-sa.json \
  --iam-account $SA_EMAIL

Task 2. Set up Cloud Pub/Sub to trigger Spinnaker pipelines

  1. Create the Cloud Pub/Sub topic for notifications from Container Registry:

gcloud pubsub topics create projects/$PROJECT/topics/gcr

  1. Create a subscription that Spinnaker can read from to receive notifications of images being pushed:

gcloud pubsub subscriptions create gcr-triggers \
  --topic projects/${PROJECT}/topics/gcr

  1. Give Spinnaker’s service account permissions to read from the gcr-triggers subscription:

export SA_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:spinnaker-account" \
  --format='value(email)')

gcloud beta pubsub subscriptions add-iam-policy-binding gcr-triggers \
  --role roles/pubsub.subscriber --member serviceAccount:$SA_EMAIL

Task 3. Deploying Spinnaker using Helm

In this section you use Helm to deploy Spinnaker from the Charts repository. Helm is a package manager you can use to configure and deploy Kubernetes applications.

Helm is already installed in your Cloud Shell.

Configure Helm

  1. Grant Helm the cluster-admin role in your cluster:

kubectl create clusterrolebinding user-admin-binding \
  --clusterrole=cluster-admin --user=$(gcloud config get-value account)

  1. Grant Spinnaker the cluster-admin role so it can deploy resources across all namespaces:

kubectl create clusterrolebinding --clusterrole=cluster-admin \
  --serviceaccount=default:default spinnaker-admin

  1. Add the stable charts deployments to Helm’s usable repositories (includes Spinnaker):

helm repo add stable https://charts.helm.sh/stable

helm repo update

Configure Spinnaker

  1. Still in Cloud Shell, create a bucket for Spinnaker to store its pipeline configuration:

export PROJECT=$(gcloud info \
  --format='value(config.project)')

export BUCKET=$PROJECT-spinnaker-config

gsutil mb -c regional -l us-east1 gs://$BUCKET

  1. Run the following command to create a spinnaker-config.yaml file, which describes how Helm should install Spinnaker:

export SA_JSON=$(cat spinnaker-sa.json)
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'
dockerRegistries:
- name: gcr
  address: https://gcr.io
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com
# Disable minio as the default storage backend
minio:
  enabled: false
# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.19.4
  image:
    repository: us-docker.pkg.dev/spinnaker-community/docker/halyard
    tag: 1.32.0
    pullSecrets: []
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project $PROJECT \
          --message-format GCR
EOF

Deploy the Spinnaker chart

  1. Use the Helm command-line interface to deploy the chart with your configuration set:

helm install -n default cd stable/spinnaker -f spinnaker-config.yaml \
  --version 2.0.0-rc9 --timeout 10m0s --wait

Note: The installation typically takes 5-8 minutes to complete.

  1. After the command completes, run the following command to set up port forwarding to Spinnaker from Cloud Shell:

export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" \
  -o jsonpath="{.items[0].metadata.name}")

kubectl port-forward --namespace default $DECK_POD 8080:9000 >> /dev/null &

  1. To open the Spinnaker user interface, click the Web Preview icon at the top of the Cloud Shell window and select Preview on port 8080.
Web Preview icon at the top of the Cloud Shell window

The welcome screen opens, followed by the Spinnaker user interface.

Leave this tab open; this is where you’ll access the Spinnaker UI.
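Optionally, you can confirm that the port forward is serving before opening the preview; once Deck is up, this should print an HTTP status code such as 200:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080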

Task 4. Building the Docker image

In this section, you configure Cloud Build to detect changes to your app source code, build a Docker image, and then push it to Container Registry.

Create your source code repository

  1. In Cloud Shell tab, download the sample application source code:

gsutil -m cp -r gs://spls/gsp114/sample-app.tar .

  1. Unpack the source code:

mkdir sample-app

tar xvf sample-app.tar -C ./sample-app

  1. Change directories to the source code:

cd sample-app

  1. Set the username and email address for your Git commits in this repository. Replace [USERNAME] with a username you create:

git config --global user.email "$(gcloud config get-value core/account)"

git config --global user.name "[USERNAME]"

  1. Make the initial commit to your source code repository:

git init

git add .

git commit -m “Initial commit”

  1. Create a repository to host your code:

gcloud source repos create sample-app

Note: Disregard the "you may be billed for this repository" message.

git config credential.helper gcloud.sh

  1. Add your newly created repository as remote:

export PROJECT=$(gcloud info --format='value(config.project)')

git remote add origin https://source.developers.google.com/p/$PROJECT/r/sample-app

  1. Push your code to the new repository’s master branch:

git push origin master

  1. Check that you can see your source code in the Console by clicking Navigation Menu > Source Repositories.
  2. Click sample-app.

Configure your build triggers

Configure Container Builder to build and push your Docker images every time you push Git tags to your source repository. Container Builder automatically checks out your source code, builds the Docker image from the Dockerfile in your repository, and pushes that image to Google Cloud Container Registry.

Container Builder flow diagram
  1. In the Cloud Platform Console, click Navigation menu > Cloud Build > Triggers.
  2. Click Create trigger.
  3. Set the following trigger settings:
  • Name: sample-app-tags
  • Event: Push new tag
  • Repository: Select your newly created sample-app repository.
  • Tag: .* (any tag)
  • Configuration: Cloud Build configuration file (yaml or json)
  • Cloud Build configuration file location: /cloudbuild.yaml
  1. Click CREATE.

From now on, whenever you push a Git tag (.*) to your source code repository, Container Builder automatically builds and pushes your application as a Docker image to Container Registry.
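The build itself is driven by the cloudbuild.yaml file already included in the sample app. For your own repositories, a minimal configuration would look roughly like the sketch below (the contents are illustrative, not the sample app's actual build file, which also runs tests and uploads manifests):

cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-app:$TAG_NAME', '.']
images:
- 'gcr.io/$PROJECT_ID/sample-app:$TAG_NAME'
EOF

Here $PROJECT_ID and $TAG_NAME are Cloud Build substitutions that are filled in automatically for tag-triggered builds.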

Prepare your Kubernetes Manifests for use in Spinnaker

Spinnaker needs access to your Kubernetes manifests in order to deploy them to your clusters. This section creates a Cloud Storage bucket that will be populated with your manifests during the CI process in Cloud Build. After your manifests are in Cloud Storage, Spinnaker can download and apply them during your pipeline’s execution.

  1. Create the bucket:

export PROJECT=$(gcloud info --format='value(config.project)')

gsutil mb -l us-east1 gs://$PROJECT-kubernetes-manifests

  1. Enable versioning on the bucket so that you have a history of your manifests:

gsutil versioning set on gs://$PROJECT-kubernetes-manifests

  1. Set the correct project ID in your kubernetes deployment manifests:

sed -i s/PROJECT/$PROJECT/g k8s/deployments/*

  1. Commit the changes to the repository:

git commit -a -m “Set project ID”

Build your image

Push your first image using the following steps:

  1. In Cloud Shell, still in the sample-app directory, create a Git tag:

git tag v1.0.0

  1. Push the tag:

git push –tags

  1. Go to the Cloud Console. Still in Cloud Build, click History in the left pane to check that the build has been triggered. If not, verify that the trigger was configured properly in the previous section.

Stay on this page and wait for the build to complete before going on to the next section. Note: If the build fails, click on the Build ID to open the Build details page and then click RETRY.

Task 5. Configuring your deployment pipelines

Now that your images are building automatically, you need to deploy them to the Kubernetes cluster.

You deploy to a scaled-down environment for integration testing. After the integration tests pass, you must manually approve the changes to deploy the code to production services.

Install the spin CLI for managing Spinnaker

spin is a command-line utility for managing Spinnaker’s applications and pipelines.

  1. Download the 1.14.0 version of spin:

curl -LO https://storage.googleapis.com/spinnaker-artifacts/spin/1.14.0/linux/amd64/spin

  1. Make spin executable:

chmod +x spin

Create the deployment pipeline

  1. Use spin to create an app called sample in Spinnaker. Set the owner email address for the app in Spinnaker:

./spin application save --application-name sample \
  --owner-email "$(gcloud config get-value core/account)" \
  --cloud-providers kubernetes \
  --gate-endpoint http://localhost:8080/gate

Next, you create the continuous delivery pipeline. In this tutorial, the pipeline is configured to detect when a Docker image with a tag prefixed with “v” has arrived in your Container Registry.

  1. From your sample-app source code directory, run the following command to upload an example pipeline to your Spinnaker instance:

export PROJECT=$(gcloud info --format='value(config.project)')

sed s/PROJECT/$PROJECT/g spinnaker/pipeline-deploy.json > pipeline.json

./spin pipeline save --gate-endpoint http://localhost:8080/gate -f pipeline.json

Manually trigger and view your pipeline execution

The configuration you just created uses notifications of newly tagged images being pushed to trigger a Spinnaker pipeline. In a previous step, you pushed a tag to the Cloud Source Repositories which triggered Cloud Build to build and push your image to Container Registry. To verify the pipeline, manually trigger it.

  1. Switch to your browser tab displaying your Spinnaker UI.

If you are unable to find it, you can get to this tab again by selecting Web Preview > Preview on Port 8080 in your Cloud Shell window.

  1. In the Spinnaker UI, click Applications at the top of the screen to see your list of managed applications.

sample is your application. If you don’t see sample, try refreshing the Spinnaker Applications tab.

  1. Click sample to view your application deployment.
  2. Click Pipelines at the top to view your application's pipeline status.
  3. Click Start Manual Execution, select Deploy in Select Pipeline, and then click Run to trigger the pipeline this first time.
  4. Click Execution Details to see more information about the pipeline’s progress.

The progress bar shows the status of the deployment pipeline and its steps.

Progress bar

Steps in blue are currently running, green ones have completed successfully, and red ones have failed.

  1. Click a stage to see details about it.

After 3 to 5 minutes the integration test phase completes and the pipeline requires manual approval to continue the deployment.

  1. Hover over the yellow “person” icon and click Continue.

Your rollout continues to the production frontend and backend deployments. It completes after a few minutes.

  1. To view the app, at the top of the Spinnaker UI, select Infrastructure > Load Balancers.
  2. Scroll down the list of load balancers and click Default, under service sample-frontend-production. You will see details for your load balancer appear on the right side of the page. If you do not, you may need to refresh your browser.
  3. Scroll down the details pane on the right and copy your app’s IP address by clicking the clipboard button on the Ingress IP. The ingress IP link from the Spinnaker UI may use HTTPS by default, while the application is configured to use HTTP.
Details pane
  1. Paste the address into a new browser tab to view the application. You might see the canary version displayed, but if you refresh you will also see the production version.
Production version of the application

You have now manually triggered the pipeline to build, test, and deploy your application.

Task 6. Triggering your pipeline from code changes

Now test the pipeline end to end by making a code change, pushing a Git tag, and watching the pipeline run in response. By pushing a Git tag that starts with “v”, you trigger Container Builder to build a new Docker image and push it to Container Registry. Spinnaker detects that the new image tag begins with “v” and triggers a pipeline to deploy the image to canaries, run tests, and roll out the same image to all pods in the deployment.
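Before and after pushing the new tag, you can check which image tags have reached Container Registry; the image path below assumes the sample app's image is pushed as sample-app under your project, as configured in the previous tasks:

gcloud container images list-tags gcr.io/$PROJECT/sample-app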

  1. From your sample-app directory, change the color of the app from orange to blue:

sed -i 's/orange/blue/g' cmd/gke-info/common-service.go

  1. Tag your change and push it to the source code repository:

git commit -a -m “Change color to blue”

git tag v1.0.1

git push –tags

  1. In the Console, in Cloud Build > History, wait a couple of minutes for the new build to appear. You may need to refresh your page. Wait for the new build to complete before going to the next step.

Note: If the Build fails, please click on Build ID and then click RETRY.

  1. Return to the Spinnaker UI and click Pipelines to watch the pipeline start to deploy the image. The automatically triggered pipeline will take a few minutes to appear. You may need to refresh your page.
Pipelines tab in Spinnaker UI

Task 7. Observe the canary deployments

  1. When the deployment is paused, waiting to roll out to production, return to the web page displaying your running application and start refreshing the tab that contains your app. Four of your backends are running the previous version of your app, while only one backend is running the canary. You should see the new, blue version of your app appear about every fifth time you refresh.
  2. When the pipeline completes, your app looks like the following screenshot. Note that the color has changed to blue because of your code change, and that the Version field now reads canary.
Blue canary version

You have now successfully rolled out your app to your entire production environment!

  1. Optionally, you can roll back this change by reverting your previous commit. Rolling back adds a new tag (v1.0.2), and pushes the tag back through the same pipeline you used to deploy v1.0.1:

git revert v1.0.1

Press CTRL+O, ENTER, then CTRL+X to save and exit the editor, then tag the revert:

git tag v1.0.2

git push –tags

  1. When the build and then the pipeline completes, verify the roll back by clicking Infrastructure > Load Balancers, then click the service sample-frontend-production Default and copy the Ingress IP address into a new tab.

Now your app is back to orange and you can see the production version number.

Orange production version of the UI

Congratulations!

You have now successfully completed the Continuous Delivery Pipelines with Spinnaker and Kubernetes Engine lab.

Setting up Jenkins on Kubernetes Engine on GCP

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session: Your Cloud Platform project in this session is set to YOUR_PROJECT_ID

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. (Optional) You can list the active account name with this command:
gcloud auth list
  1. Click Authorize.
  2. (Optional) You can list the project ID with this command:
gcloud config list project
Output:

[core]
project = <project_ID>

Task 1. Prepare the environment

First, you’ll prepare your deployment environment and download a sample application.

  1. Set the default Compute Engine zone to <filled in at lab start>:
gcloud config set compute/zone
  1. Clone the sample code:
git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git
  1. Navigate to the sample code directory:
cd continuous-deployment-on-kubernetes

Creating a Kubernetes cluster

Now you’ll use the Kubernetes Engine to create and manage your Kubernetes cluster.

  1. Next, provision a Kubernetes cluster using Kubernetes Engine. This step can take several minutes to complete:
gcloud container clusters create jenkins-cd \ --num-nodes 2 \ --scopes "https://www.googleapis.com/auth/projecthosting,cloud-platform"

The extra scopes enable Jenkins to access Cloud Source Repositories and Google Container Registry.

  1. Confirm that your cluster is running:
gcloud container clusters list

Example Output:

Look for RUNNING in the STATUS column:

NAME        LOCATION    MASTER_VERSION   MASTER_IP       MACHINE_TYPE   NODE_VERSION   NUM_NODES   STATUS
jenkins-cd              1.9.7-gke.3      35.237.126.84   e2-medium      1.9.7-gke.3    2           RUNNING

  1. Get the credentials for your cluster. Kubernetes Engine uses these credentials to access your newly provisioned cluster.
gcloud container clusters get-credentials jenkins-cd
  1. Confirm that you can connect to your cluster:
kubectl cluster-info

Example output: If the cluster is running, the URLs where your Kubernetes components are accessible are displayed:

Kubernetes master is running at https://130.211.178.38
GLBCDefaultBackend is running at https://130.211.178.38/api/v1/proxy/namespaces/kube-system/services/default-http-backend
Heapster is running at https://130.211.178.38/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://130.211.178.38/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://130.211.178.38/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Task 2. Configure Helm

In this lab, you will use Helm to install Jenkins from the Charts repository. Helm is a package manager that makes it easy to configure and deploy Kubernetes applications. Your Cloud Shell will already have a recent, stable version of Helm pre-installed.

If curious, you can run helm version in Cloud Shell to check which version you are using and also ensure that Helm is installed.

  1. Add Helm’s jenkins chart repository:
helm repo add jenkins https://charts.jenkins.io
  1. Update the repo to ensure you get the latest list of charts:
helm repo update

Task 3. Configure and install Jenkins

You will use a custom values file to add the Google Cloud-specific plugin necessary to use service account credentials to reach your Cloud Source Repository.

  1. Use the Helm CLI to deploy the chart with your configuration set:
helm upgrade --install -f jenkins/values.yaml myjenkins jenkins/jenkins
  1. Once that command completes ensure the Jenkins pod goes to the Running state and the container is in the READY state. This may take about 2 minutes:
kubectl get pods

Example output:

NAME          READY   STATUS    RESTARTS   AGE
myjenkins-0   2/2     Running   0          1m

  1. Run the following command to setup port forwarding to the Jenkins UI from the Cloud Shell:
echo http://127.0.0.1:8080

kubectl --namespace default port-forward svc/myjenkins 8080:8080 >> /dev/null &
  1. Now, check that the Jenkins Service was created properly:
kubectl get svc

Example output:

NAME              CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
myjenkins         10.35.249.67                 8080/TCP    3h
myjenkins-agent   10.35.248.1                  50000/TCP   3h
kubernetes        10.35.240.1                  443/TCP     9h

We are using the Kubernetes Plugin so that our builder nodes will be automatically launched as necessary when the Jenkins master requests them. Upon completion of their work, they will automatically be turned down and their resources added back to the cluster’s resource pool.

Notice that this service exposes ports 8080 and 50000 for any pods that match the selector. This will expose the Jenkins web UI and builder/agent registration ports within the Kubernetes cluster.

Additionally, the jenkins-ui service is exposed using a ClusterIP so that it is not accessible from outside the cluster.
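You can verify the selectors and the exposed ports yourself by describing the two services the chart created:

kubectl describe svc myjenkins myjenkins-agent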

Task 4. Connect to Jenkins

  1. The Jenkins chart will automatically create an admin password for you. To retrieve it, run:
kubectl exec --namespace default -it svc/myjenkins -c jenkins -- /bin/cat /run/secrets/additional/chart-admin-password && echo
  1. To get to the Jenkins user interface, click on the Web Preview button in cloud shell, then click Preview on port 8080:
Expanded Web preview dropdown menu with Preview on port 8080 option highlighted
  1. You should now be able to log in with the username admin and your auto-generated password.

You may also be logged in automatically.

You now have Jenkins set up in your Kubernetes cluster!

Using Terraform Dynamic Blocks and Built-in Functions to Deploy to AWS

Introduction

Terraform offers a strong set of features to help optimize your Terraform code. Two really useful features are dynamic blocks, which allow you to generate static repeated blocks within resources in Terraform; and built-in functions, which help you manipulate variables and data to suit your needs and help make your Terraform deployments better automated and more fault resilient.

Solution

  1. Check Terraform Status using the version command: terraform version
     Since the Terraform version is returned, you have validated that the Terraform binary is installed and functioning properly.

Clone Terraform Code and Switch to Proper Directory

  1. The Terraform code required for this lab is provided below; copy it into your working directory.

Examine the Code in the Files

  1. View the contents of the main.tf file using the less command: less main.tf
     The main.tf file spins up AWS networking components such as a virtual private cloud (VPC), security group, internet gateway, route tables, and an EC2 instance bootstrapped with an Apache webserver which is publicly accessible.
  2. Closely examine the code and note the following:
    • We have selected AWS as our provider and our resources will be deployed in the us-east-1 region.
    • We are using the ssm_parameter public endpoint resource to get the AMI ID of the Amazon Linux 2 image that will spin up the EC2 webserver.
    • We are using the vpc module (provided by the Terraform Public Registry) to create our network components like subnets, internet gateway, and route tables.
    • For the security_group resource, we are using a dynamic block on the ingress attribute to dynamically generate as many ingress blocks as we need. The dynamic block includes the var.rules complex variable configured in the variables.tf file.
    • We are also using a couple of built-in functions and some logical expressions in the code to get it to work the way we want, including the join function for the name attribute in the security group resource, and the fileexists and file functions for the user_data parameter in the EC2 instance resource (a quick way to try these functions out is shown after this list).
  3. Enter q to exit the less program.
  4. View the contents of the variables.tf file: less variables.tf
     The variables.tf file contains the complex variable type which we will be iterating over with the dynamic block in the main.tf file.
  5. Enter q to exit the less program.
  6. View the contents of the script.sh file using the cat command: cat script.sh
     The script.sh file is passed into the EC2 instance using its user_data attribute and the fileexists and file functions (as you saw in the main.tf file), which then installs the Apache webserver and starts up the service.
  7. View the contents of the outputs.tf file: cat outputs.tf
     The outputs.tf file returns the values we have requested upon deployment of our Terraform code.
    • The Web-Server-URL output is the publicly accessible URL for our webserver. Notice here that we are using the join function for the value parameter to generate the URL for the webserver.
    • The Time-Date output is the timestamp when we executed our Terraform code.
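As a quick, non-destructive way to see what these built-in functions do before deploying, you can evaluate them in terraform console; the inputs below are illustrative rather than taken from the lab's files:

terraform console
> join("-", ["Terraform", "Dynamic", "SG"])
"Terraform-Dynamic-SG"
> fileexists("script.sh")
true

Press Ctrl+D to leave the console.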

Review and Deploy the Terraform Code

  1. As a best practice, format the code in preparation for deployment: terraform fmt
  2. Validate the code to look for any errors in syntax, parameters, or attributes within Terraform resources that may prevent it from deploying correctly: terraform validate
     You should receive a notification that the configuration is valid.
  3. Review the actions that will be performed when you deploy the Terraform code: terraform plan
     Note the Change to Outputs, where you can see the Time-Date and Web-Server-URL outputs that were configured in the outputs.tf file earlier.
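  4. Deploy the code, approving the run when prompted (or pass --auto-approve to skip the prompt): terraform apply
     Once the apply completes, Terraform prints the outputs used in the next section.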

Test Out the Deployment and Clean Up

  1. Once the code has executed successfully, view the outputs at the end of the completion message:
    • The Time-Date output displays the timestamp when the code was executed.
    • The Web-Server-URL output displays the web address for the Apache webserver we created during deployment.
    Note: You could also use the terraform output command at any time in the CLI to view these outputs on demand.
  2. Verify that the resources were created correctly in the AWS Management Console:
    • Navigate to the AWS Management Console in your browser.
    • Type VPC in the search bar and select VPC from the contextual menu.
    • On the Resources by Region page, click VPCs.
    • Verify that the my-vpc resource appears in the list.
    • Type EC2 in the search bar and select EC2 from the contextual menu.
    • On the Resources page, click Instances (running).
    • Verify that the instance, which has no name, appears in the list (and is likely still initializing).
    • In the menu on the left, click Security Groups.
    • Verify that the Terraform-Dynamic-SG security group appears in the list.
    • Select the security group to see further details.
    • Click on the Inbound rules tab, and note that three separate rules were created from the single dynamic block used on the ingress parameter in the code.
  3. In the CLI, copy the URL displayed as the Web-Server-URL output value.
  4. In a new browser window or tab, paste the URL and press Enter.
  5. Verify that the Apache Test Page loads, validating that the code executed correctly and the logic within the AWS instance in Terraform worked correctly, as it was able to locate the script.sh file in the folder and bootstrap the EC2 instance accordingly.
  6. In the CLI, tear down the infrastructure you just created before moving on: terraform destroy --auto-approve

Using Secrets Manager to Authenticate with an RDS Database Using Lambda

Introduction

AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. In this lab, we connect to a MySQL RDS database from an AWS Lambda function using a username and password, and then we hand over credential management to the AWS Secrets Manager service. We then use the Secrets Manager API to connect to the database instead of hard-coding credentials in our Lambda function. By the end of this lab, you will understand how to store a secret in AWS Secrets Manager and access it from a Lambda function.

Solution

Log in to the live AWS environment using the credentials provided. Use an incognito or private browser window to ensure you’re using the lab account rather than your own.

Make sure you’re in the N. Virginia (us-east-1) region throughout the lab.

Download the MySQL Library ZIP file you’ll need for the first lab objective.

Create Lambda Function

  1. Navigate to Lambda > Functions.
  2. Click Create function.
  3. Make sure the Author from scratch option at the top is selected, and then use the following settings:
    • Function name: Enter testRDS.
    • Runtime: Select Node.js 14.x.
  4. Expand Advanced settings, and set the following values:
    • Enable VPC: Check the box.
    • VPC: Select the lab-provided VPC.
    • Subnets: Enter Public and select the two subnets that have Public in their name/ID.
    • Security groups: Select the lab-provided Database-Security-Group security group (not the default security group).
  5. Click Create function.
    • It may take 5–10 minutes to finish creating.
  6. Click the Configuration tab.
  7. Click Edit.
  8. Under Timeout, change it to 6 seconds.
  9. Click Save.
  10. In the left-hand menu, click Layers.
  11. Click Create layer.
  12. Set the following values:
    • Name: Enter mysql.
    • Upload a .zip file: Click Upload and upload the MySQL Library ZIP file you downloaded earlier.
    • Compatible runtimes: Node.js 14.x
  13. Click Create.
  14. Click Functions in the left-hand menu.
  15. Click your testRDS function.
  16. In the Function overview section, click Layers under testRDS.
  17. In the Layers section, click Add a layer.
  18. Select Custom layers, and set the following values:
    • Custom layers: Select mysql.
    • Version: Select 1.
  19. Click Add.

Copy Code into Lambda Function

  1. In the Code source section, expand testRDS > index.js.
  2. Select the existing code in the index.js tab and replace it with the following:

var mysql = require('mysql');

exports.handler = (event, context, callback) => {

  var connection = mysql.createConnection({
    host: "<RDS Endpoint>",
    user: "username",
    password: "password",
    database: "example",
  });

  connection.query('show tables', function (error, results, fields) {
    if (error) {
      connection.destroy();
      throw error;
    } else {
      // connected!
      console.log("Query result:");
      console.log(results);
      callback(error, results);
      connection.end(function (err) { callback(err, results); });
    }
  });
};
  3. In a new browser tab, navigate to RDS > DB Instances.
  4. Click the listed database.
  5. Copy the endpoint (in the Connectivity & security section) and paste it into a plaintext file (you’ll need it a couple times during the lab).
  6. Back in the Lambda function code, replace <RDS Endpoint> on line 6 with the endpoint you just copied.
  7. Click Deploy.
  8. Click Test.
  9. In the Configure test event dialog, enter an Event name of test.
  10. Click Save.
  11. Click Test again.
    • The Response should only be two square brackets, which is correct since we don’t have any tables defined in this database.
  12. Click the index.js tab.
  13. Replace line 12 with the following:

connection.query('CREATE TABLE pet (name VARCHAR(20), species VARCHAR(20))', function (error, results, fields) {
  14. Click Deploy.
  15. Click Test.
    • This time, the Response should have information within curly brackets.
  16. Click the index.js tab.
  17. Undo the code change (Ctrl+Z or Cmd+Z) to get it back to the original code we pasted in.
  18. Click Deploy.
  19. Click Test.
    • This time, we should see the pet table listed in the Response.

Create a Secret in Secrets Manager

  1. In a new browser tab, navigate to Secrets Manager.
  2. Click Store a new secret.
  3. With Credentials for Amazon RDS database selected, set the following values:
    • User name: Enter username.
    • Password: Enter password.
    • Encryption key: Leave as the default.
    • Database: Select the listed DB instance.
  4. Click Next.
  5. On the next page, give it a Secret name of RDScredentials.
  6. Leave the rest of the defaults, and click Next.
  7. On the next page, set the following values:
    • Automatic rotation: Toggle to enable it.
    • Schedule expression builder: Select.
    • Time unit: Select Days and set the value to 1.
    • Create a rotation function: Select.
    • SecretsManager: Enter rotateRDS.
    • Use separate credentials to rotate this secret: Select No.
  8. Click Next.
  9. In the Sample code section, ensure the region is set to us-east-1.
  10. Click Store.
    • It may take 5–10 minutes to finish the configuration.
  11. Once it’s done, click RDScredentials.
  12. In the Secret value section, click Retrieve secret value.
    • You should see the password is now a long string rather than password.
    • If yours still says password, give it a few minutes and refresh the page. Your Lambda function may still be in the process of getting set up.
  13. Back in the Lambda function, click Test.
    • You will see errors saying access is denied because the password has changed.
  14. Click the index.js tab.
  15. Select all the code and replace it with the following:

var mysql = require('mysql');
var AWS = require('aws-sdk'),
  region = "us-east-1",
  secretName = "RDScredentials",
  secret,
  decodedBinarySecret;

var client = new AWS.SecretsManager({
  region: "us-east-1"
});

exports.handler = (event, context, callback) => {
  client.getSecretValue({SecretId: secretName}, function(err, data) {
    if (err) {
      console.log(err);
    } else {
      // Decrypts secret using the associated KMS CMK.
      // Depending on whether the secret is a string or binary, one of these fields will be populated.
      if ('SecretString' in data) {
        secret = data.SecretString;
      } else {
        let buff = new Buffer(data.SecretBinary, 'base64');
        decodedBinarySecret = buff.toString('ascii');
      }
    }
    var parse = JSON.parse(secret);
    var password = parse.password;

    var connection = mysql.createConnection({
      host: "<RDS Endpoint>",
      user: "username",
      password: password,
      database: "example",
    });

    connection.query('show tables', function (error, results, fields) {
      if (error) {
        connection.destroy();
        throw error;
      } else {
        // connected!
        console.log("Query result:");
        console.log(results);
        callback(error, results);
        connection.end(function (err) { callback(err, results); });
      }
    });
  });
};
  16. Replace <RDS Endpoint> with the value you copied earlier.
  17. Click Deploy.
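If you want to confirm what the function now retrieves, the same secret can be read with the AWS CLI (this assumes the CLI is available and authenticated for the lab account):

aws secretsmanager get-secret-value --secret-id RDScredentials --region us-east-1 --query SecretString --output text

The SecretString is a JSON document, which is why the Lambda code parses it with JSON.parse before reading the password field.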

Work with AWS VPC Flow Logs for Network Monitoring

Monitoring network traffic is a critical component of security best practices to meet compliance requirements, investigate security incidents, track key metrics, and configure automated notifications. AWS VPC Flow Logs captures information about the IP traffic going to and from network interfaces in your VPC. In this hands-on lab, we will set up and use VPC Flow Logs published to Amazon CloudWatch, create custom metrics and alerts based on the CloudWatch logs to understand trends and receive notifications for potential security issues, and use Amazon Athena to query and analyze VPC Flow Logs stored in S3.


Reduce Storage Costs with EFS

Introduction

Amazon Elastic File System (Amazon EFS) provides a simple, serverless elastic file system that lets you share file data without provisioning or managing storage. In this lab, we modify 3 existing EC2 instances to use a shared EFS storage volume instead of duplicated Elastic Block Store volumes. This reduces costs significantly, as we only need to store data in 1 location instead of 3. By the end of this lab, you will understand how to create EFS volumes and attach them to an EC2 instance.

Create EFS File System

Create an EFS Volume

  1. Navigate to EC2 > Instances (running).
  2. Click the checkbox next to webserver-01.
  3. Click the Storage tab and note the 10 GiB volume attached.
  4. In a new browser tab, navigate to EFS.
  5. Click Create file system, and set the following values:
    • Name: SharedWeb
    • Availability and durability: One Zone
  6. Click Create.
  7. Once it’s created, click View file system in the top right corner.
  8. Click the Network tab and wait for the created network to become available.
  9. Once it’s created, click Manage.
  10. Under Security groups, remove the currently attached default security group, and open the dropdown menu to select the provided EC2 security group (not the default).
  11. Click Save.
  12. Return to the EC2 browser tab.
  13. Click Security Groups in the left-hand menu.
  14. Click the checkbox next to that same security group (the one that is not default).
  15. Click the Inbound rules tab.
  16. Click Edit inbound rules.
  17. Click Add rule, and set the following values:
    • Type: NFS
    • Source: Custom 0.0.0.0/0
  18. Click Save rules.
  19. Click EC2 Dashboard in the left-hand menu.
  20. Click Instances (running).
  21. With webserver-01 selected, click Connect in the top right corner.
  22. Click Connect. This should take you to a new terminal showing your EC2 instance in a new browser tab or window.

Mount the EFS File System and Test It

  1. In the terminal, list your block devices: lsblk
  2. View the data inside the 10 GiB disk mounted to /data: ls /data
  3. Create a mount point or directory to attach our EFS volume: sudo mkdir /efs
  4. Return to the AWS EFS console showing the SharedWeb file system.
  5. Click Attach.
  6. Select Mount via IP.
  7. Copy the command under Using the NFS client: to your clipboard.
  8. Return to the terminal, and paste in the command.
  9. Add a slash right before efs and press Enter.
  10. View the newly mounted EFS volume: ls /efs
      Nothing will be returned, but that shows us it's mounted.
  11. List the block devices again: lsblk
  12. View the mounts: mount
  13. View file system mounts: df -h
  14. Move all files from /data to the /efs file system: sudo rsync -rav /data/* /efs
  15. View the files now in the /efs file system: ls /efs
      This time, a list should be returned.

Remove Old Data

Remove Data from webserver-01

  1. Unmount the partition: sudo umount /data
  2. Edit the /etc/fstab file: sudo nano /etc/fstab
  3. Remove the line starting with "UUID=" by placing the cursor at the beginning of the line and pressing Ctrl+K.
  4. In the AWS console, navigate to the EFS tab.
  5. In the Using the NFS client: section, copy the IP in the command.
  6. Back in the terminal, paste in the IP you just copied: <NFS MOUNT IP>:/
  7. Press the Tab key twice.
  8. Add the mount point and file system type (nfs4), so that the line now looks like this (with a tab after /data): <NFS MOUNT IP>:/ /data nfs4
  9. Back on the EFS page of the AWS EFS console, copy the options (the part of the command starting with nfsvers and ending with noresvport).
  10. In the terminal, press Tab after nfs4 and add the copied options to the line with two zeroes at the end, so that it now looks like this (a complete example entry appears after this list): <NFS MOUNT IP>:/ /data nfs4 <OPTIONS> 0 0
  11. Save and exit by pressing Ctrl+X, followed by Y and Enter.
  12. Unmount the /efs to test if this worked: sudo umount /efs
  13. View the file systems: df -h
  14. Try and mount everything that is not already mounted: sudo mount -a
  15. View the file systems again and check if 10.0.0.180:/ is mounted: df -h
      You should see the NFS share is now mounted on /data.
  16. View the contents of /data: ls /data
  17. Navigate back to the AWS console with the Connect to instance EC2 page open.
  18. Click EC2 in the top left corner.
  19. Click Volumes.
  20. Scroll to the right and expand the Attached Instances column to find out which 10 GiB volume is attached to webserver-01.
  21. Click the checkbox next to the 10 GiB volume attached to webserver-01.
  22. Click Actions > Detach volume.
  23. Click Detach.
  24. Once it’s detached, click the checkbox next to the same volume again.
  25. Click Actions > Delete volume.
  26. Click Delete.
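For reference, after steps 5 through 10 the finished /etc/fstab entry generally ends up looking like the line below; the IP address is the one from your own EFS mount command, and the options shown are the standard NFS options the EFS console provides:

10.0.0.180:/    /data   nfs4    nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0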

Remove Data from webserver-02 and webserver-03

  1. Click Instances in the left-hand menu.
  2. Click the checkbox next to webserver-02.
  3. Click Connect.
  4. Click Connect. This should launch a terminal in a new browser window or tab.
  5. In the tab with the terminal for webserver-01, view the contents of /etc/fstab: cat /etc/fstab
  6. Copy the second line (starting with an IP) to your clipboard.
  7. Return to the terminal you launched for webserver-02.
  8. Unmount the /data partition: sudo umount /data
  9. Edit the /etc/fstab file: sudo nano /etc/fstab
  10. Delete the second line using Ctrl+K.
  11. Paste in the line from your clipboard.
  12. Align the pasted line with the line above as seen in webserver-01.
  13. Save and exit by pressing Ctrl+X, followed by Y and Enter.
  14. Mount it: sudo mount -a
  15. Check the disk status: df -h
  16. Check the contents of /data: ls /data
  17. Return to the window with the Connect to instance EC2 page open.
  18. Click Instances in the top left.
  19. Click the checkbox next to webserver-03.
  20. Click Connect.
  21. Click Connect. This should launch a terminal in a new browser window or tab.
  22. Unmount the /data partition: sudo umount /data
  23. Edit the /etc/fstab file: sudo nano /etc/fstab
  24. Delete the second line using Ctrl+K.
  25. Paste in the line from your clipboard.
  26. Align the pasted line with the line above as seen in webserver-01.
  27. Save and exit by pressing Ctrl+X, followed by Y and Enter.
  28. Mount everything that is not already mounted: sudo mount -a
  29. Check the disk status: df -h
  30. Check the contents of /data: ls /data
  31. Return to the window with the Connect to instance EC2 page open.
  32. Navigate to EC2 > Volumes.
  33. Check the boxes for both of the 10 GiB volumes.
  34. Click Actions > Detach volume.
  35. Type detach into the box, and then click Detach.
  36. Once they’re detached, select them again and click Actions > Delete volume.
  37. Type delete into the box, and then click Delete.