
GCP Series-Infrastructure Preview

Overview

In this lab, you build a sophisticated deployment in minutes using Marketplace. This lab shows several of the GCP infrastructure services in action and illustrates the power of the platform.

Objectives

  • Use Marketplace to build a Jenkins Continuous Integration environment.
  • Verify that you can manage the service from the Jenkins UI.
  • Administer the service from the Virtual Machine host through SSH.

Task 1: Use Marketplace to build a deployment

Navigate to Marketplace

  1. In the GCP Console, on the Navigation menu ( ), click Marketplace.
  2. Locate the Jenkins deployment by searching for Jenkins Certified by Bitnami.
  3. Click on the deployment and read about the service provided by the software.

Jenkins is an open-source continuous integration environment. You can define jobs in Jenkins that perform tasks such as running a scheduled build of software and backing up data. Notice the software that is installed as part of Jenkins, shown on the left side of the description.

The service you are using, Marketplace, is part of Google Cloud Platform. The Jenkins template is developed and maintained by an ecosystem partner named Bitnami. Notice on the left side a field that says “Last updated.” How recently was this template updated?

The template system is part of another GCP service called Deployment Manager. Later in this class you learn how templates such as this one can be built. That service is available to you. You can create templates like the one you are about to use.

In a class that was previously offered, students would set up a Jenkins environment similar to the one you are about to launch. It took about two days of labs to build the infrastructure that you will achieve in the next few minutes.

Launch Jenkins

  1. Click Launch on Compute Engine.
  2. Verify the deployment, accept the terms of service, and click Deploy.
  3. Click Close on the Welcome to Deployment Manager window.

It will take a minute or two for Deployment Manager to set up the deployment. You can watch the status as tasks are being performed. Deployment Manager is acquiring a virtual machine instance and installing and configuring software for you. You will see jenkins-1 has been deployed when the process is complete.

Deployment Manager is a GCP service that uses templates written in a combination of YAML, Python, and Jinja2 to automate the allocation of GCP resources and perform setup tasks. Behind the scenes, a virtual machine has been created, a startup script was used to install and configure software, and network firewall rules were created to allow traffic to the service.
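As a purely hypothetical sketch (the resource name, zone, image, and startup script are illustrative inventions, not the actual Bitnami template), a minimal Deployment Manager configuration in YAML might look like this:

```yaml
# Hypothetical Deployment Manager config -- names and values are illustrative
resources:
- name: jenkins-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
    metadata:
      items:
      - key: startup-script
        value: |
          #!/bin/bash
          # install and configure the application here
```

A template like this, combined with firewall-rule resources, is the kind of definition Deployment Manager executes when you click Deploy.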

Task 2: Examine the deployment

In this section, you examine what was built in GCP.

View installed software and login to Jenkins

  1. In the right pane, click More about the software to view additional software details. Look at all the software that was installed.
  2. Copy the Admin user and Admin password values to a text editor.
  3. Click Visit the site to view the site in another browser tab. If you get an error, you might have to reload the page a couple of times.
  4. Log in with the Admin user and Admin password values.
  5. After logging in, you will be asked to Customize Jenkins. Click Install suggested plugins, and then click Restart after the installation is complete. The restart will take a couple of minutes.

Note: If you are getting an installation error, retry the installation and if it fails again, continue past the error and save and finish before restarting. The code of this solution is managed and supported by Bitnami.

Explore Jenkins

  1. In the Jenkins interface, in the left pane, click Manage Jenkins. Look at all of the actions available. You are now prepared to manage Jenkins. The focus of this lab is GCP infrastructure, not Jenkins management, so seeing that this menu is available is the purpose of this step.
  2. Leave the browser window open to the Jenkins service. You will use it in the next task.

Now you have seen that the software is installed and working properly. In the next task you will open an SSH terminal session to the VM where the service is hosted, and verify that you have administrative control over the service.

Task 3: Administer the service

View the deployment and SSH to the VM

  1. In the GCP Console, on the Navigation menu ( ), click Deployment Manager.
  2. Click jenkins-1.
  3. Click SSH to connect to the Jenkins server.

The Console interface is performing several tasks for you transparently. For example, it has transferred keys to the virtual machine that is hosting the Jenkins software so that you can connect securely to the machine using SSH.

Shut down and restart the services

  1. In the SSH window, enter the following command to shut down all the running services:
    sudo /opt/bitnami/ctlscript.sh stop
  2. Refresh the browser window for the Jenkins UI. You will no longer see the Jenkins interface because the service was shut down.
  3. In the SSH window, enter the following command to restart the services:
    sudo /opt/bitnami/ctlscript.sh restart
  4. Return to the browser window for the Jenkins UI and refresh it. You may have to do it a couple of times before the service is reachable.
  5. In the SSH window, type exit to close the SSH terminal session.

Congratulations!

In a few minutes you were able to launch a complete Continuous Integration solution. You demonstrated that you had user access through the Jenkins UI, and you demonstrated that you had administrative control over Jenkins by using SSH to connect to the VM where the service is hosted and by stopping and then restarting the services.

10 best practices to optimize costs in AWS

Amazon Web Services (AWS) forever changed the world of IT when it entered the market in 2006, offering services for pennies on the dollar. While AWS has significantly reduced its pricing over the years, many companies learned the hard way that moving to the public cloud didn’t always achieve the cost savings they expected. In fact, organizations have frequently found that public cloud bills could be up to three times higher than expected. This doesn’t mean that moving to the public cloud is a mistake; the public cloud provides huge benefits in agility, responsiveness, simplified operations, and improved innovation. The mistake is to assume that migrating to the public cloud without proper management, governance, and automation will lead to cost savings. To combat rising cloud infrastructure costs, use these proven best practices for cost reduction and optimization to make sure you are getting the most out of your environment.

1. DELETE UNATTACHED EBS VOLUMES

It’s common to see thousands of dollars in unattached Elastic Block Storage (EBS) volumes within AWS accounts. These are volumes that are costing money but aren’t being used for anything. When an instance is launched, an EBS volume is usually attached to act as the local block storage for the application. When an instance is launched via the AWS Console, there is a setting that ensures the associated EBS volume is deleted upon the termination of the instance. However, if that setting is not checked, the volume remains when an instance is terminated. Amazon will continue to charge for the full list price of the volume, even though the volume is not in use. By continuously checking for unattached EBS volumes in your infrastructure, you can cut thousands of dollars from your monthly AWS bill. One large online gaming company reduced its EBS usage by one third by eliminating unused EBS volumes and proactively monitoring for unattached volumes.

TIP: Best practices are to delete a volume when it has been unattached for two weeks, as it is unlikely the same volume will be utilized again.
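As a sketch of how such a check might work, the snippet below filters volumes whose status is "available" (i.e., unattached) from a describe-volumes-style response; the volume IDs and sizes are made up. In practice you could list these directly with `aws ec2 describe-volumes --filters Name=status,Values=available`.

```python
def find_unattached_volumes(volumes):
    """Return volumes whose state is 'available' (not attached to any instance)."""
    return [v for v in volumes if v["State"] == "available"]

# Sample data shaped like an EC2 describe-volumes response (IDs invented)
sample_volumes = [
    {"VolumeId": "vol-aaa111", "State": "in-use", "Size": 100},
    {"VolumeId": "vol-bbb222", "State": "available", "Size": 500},
    {"VolumeId": "vol-ccc333", "State": "available", "Size": 50},
]

orphans = find_unattached_volumes(sample_volumes)
print([v["VolumeId"] for v in orphans])  # → ['vol-bbb222', 'vol-ccc333']
```

Running a report like this on a schedule (and alerting when the list is non-empty) is how teams keep unattached-volume spend near zero.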

2. DELETE AGED SNAPSHOTS

Many organizations use EBS snapshots to create point-in-time recovery points to use in case of data loss or disaster. However, EBS snapshot costs can quickly get out of control if not closely monitored. Individual snapshots are not costly, but the cost can grow quickly when several are provisioned. A compounding factor is that users can configure settings to automatically create subsequent snapshots daily, without scheduling older snapshots for deletion. Organizations can get EBS snapshots back under control by monitoring snapshot cost and usage per instance to make sure they do not spike. Set a standard in your organization for how many snapshots should be retained per instance, and remember that most of the time, recovery will occur from the most recent snapshot. One B2B SaaS company found that among its millions of EBS snapshots, a large percentage were more than two years old, making them good candidates for deletion.

TIP: One way of finding snapshots that are good candidates for deletion is to identify the snapshots that have no associated volumes. When a volume is deleted, it’s common for the snapshot to remain in your environment. Be careful not to delete snapshots that are being utilized as a volume for an instance.
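A minimal sketch of that candidate search, assuming snapshot records shaped like an EC2 describe-snapshots response (the snapshot and volume IDs are invented): flag snapshots past an age cutoff whose source volume no longer exists.

```python
from datetime import datetime, timedelta, timezone

def aged_snapshot_candidates(snapshots, existing_volume_ids, max_age_days=730):
    """Return IDs of snapshots older than max_age_days whose volume is gone."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        s["SnapshotId"] for s in snapshots
        if s["StartTime"] < cutoff and s["VolumeId"] not in existing_volume_ids
    ]

sample_snapshots = [
    {"SnapshotId": "snap-old1", "VolumeId": "vol-gone",
     "StartTime": datetime(2017, 3, 1, tzinfo=timezone.utc)},
    {"SnapshotId": "snap-new1", "VolumeId": "vol-live",
     "StartTime": datetime.now(timezone.utc) - timedelta(days=1)},
]

print(aged_snapshot_candidates(sample_snapshots, {"vol-live"}))  # → ['snap-old1']
```

The age threshold (two years here, per the article's example) is a policy choice each organization should set for itself.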

3. DELETE DISASSOCIATED ELASTIC IP ADDRESSES

An Elastic IP address is a public IP address that can be associated with an instance, allowing the instance to be reachable via the Internet. The pricing structure for an Elastic IP address is unique in that when an instance is running, the Elastic IP is free of charge. However, if an instance is stopped or terminated and the Elastic IP address is not associated with another instance, you will be charged for the disassociated Elastic IP. Unfortunately, it is difficult to identify and manage disassociated Elastic IPs within the AWS Console. This may or may not amount to a significant cost driver in your AWS environment, but it’s key to stay on top of wasted resources and be proactive rather than reactive in managing costs before they spike out of control. From a best practice standpoint, monthly Elastic IP charges should be as close to zero as possible. If disassociated Elastic IPs exist within your AWS accounts, they should either be re-associated with an instance or released outright to avoid the wasted cost. One large telecommunications company learned the hard way that small changes in its environment can lead to significant Elastic IP charges. To reduce its overall monthly spend, the company terminated hundreds of idle instances in one of its accounts. Company leaders forgot, however, to release the attached Elastic IP addresses. The finance department did not learn about this exorbitantly costly mistake until the following month, when the AWS invoices arrived with Elastic IP charges of almost $40,000.
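Since a disassociated address is simply one with no association recorded, a sketch of the check is short. The entries below mimic the shape of `aws ec2 describe-addresses` output; the IPs and allocation IDs are made up.

```python
def disassociated_eips(addresses):
    """Return public IPs with no AssociationId -- these are the ones billed."""
    return [a["PublicIp"] for a in addresses if "AssociationId" not in a]

sample_addresses = [
    {"PublicIp": "203.0.113.10", "AllocationId": "eipalloc-1",
     "AssociationId": "eipassoc-1"},   # attached to an instance: free
    {"PublicIp": "203.0.113.11", "AllocationId": "eipalloc-2"},  # idle: billed
]

print(disassociated_eips(sample_addresses))  # → ['203.0.113.11']
```

Anything this check returns should be re-associated or released.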

4. TERMINATE ZOMBIE ASSETS

Zombie assets are infrastructure components that are running in your cloud environment but not being used for any purpose. Zombie assets come in many forms. For example, they could be EC2 instances that were once used for a particular purpose but are no longer in use and have not been turned off. Zombie EC2 instances can also occur when instances fail during the launch process, or because of errors in scripts that fail to de-provision instances. Zombie assets can also come in the form of idle Elastic Load Balancers (ELBs) that aren’t being used effectively, or an idle Relational Database Service (RDS) instance. No matter the cause, AWS will charge for these assets as long as they are in a running state. They must be isolated, evaluated, and immediately terminated if deemed nonessential. Take a snapshot, or point-in-time copy, of the asset before terminating or stopping it to ensure you can recover it if it is needed again. One customer had a nightly process to help its engineering velocity: loading an anonymized production database into RDS to use for testing and verification in a safe environment. The process worked well and saved lots of time for engineers. However, while the automation was good at spinning up new environments, the customer never planned for cleanup. Each night a new RDS instance was spun up with its attached resources and then abandoned, eventually leaving hundreds of zombie resources.

TIP: Start your zombie hunt by identifying instances that have a Max CPU < 5% over the past 30 days. This doesn’t automatically mean an instance is a zombie, but it’s worth investigating further.
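The tip's filter is easy to sketch. The instance IDs and CPU figures below are invented; in practice the maximum-CPU values would come from CloudWatch metrics over a 30-day window.

```python
def zombie_candidates(max_cpu_by_instance, threshold=5.0):
    """Return instance IDs whose 30-day max CPU% is below the threshold."""
    return sorted(i for i, max_cpu in max_cpu_by_instance.items()
                  if max_cpu < threshold)

sample_metrics = {"i-web1": 64.2, "i-batch1": 3.1, "i-test9": 0.4}
print(zombie_candidates(sample_metrics))  # → ['i-batch1', 'i-test9']
```

Each candidate still needs a human review before termination, as the tip notes.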

5. UPGRADE INSTANCES TO THE LATEST GENERATION

Every few years, AWS releases the next generation of instances with improved price-per-compute performance and additional functionality like clustering, enhanced networking, and the ability to attach new types of EBS volumes. For example, upgrading a c1.xlarge to a c3.xlarge will cut costs by up to 60% while offering significantly faster processing. The migration of a cluster of instance types from the first generation to the second generation will likely be a gradual process for most companies. The first step is to decide which accounts have instances that are candidates for conversion. If you are heavily invested in Reserved Instances (RIs), only instances with expiring reservations or those running strictly on-demand should be converted. One large B2B SaaS company found that almost 60% of the instance hours they ran in the past 12 months were using older-generation instance types. Analysis revealed that upgrading those instances to the latest generation would save them millions of dollars per year.

6. RIGHTSIZE EC2 INSTANCES & EBS VOLUMES

Rightsizing Elastic Compute Cloud (EC2) instances is the cost reduction initiative with the potential for the biggest impact. It’s common for developers to spin up new instances that are substantially larger than necessary. This may be intentional, to give themselves extra headroom, or accidental, since they don’t yet know the performance requirements of the new workload. Over-provisioning an EC2 instance can lead to exponentially higher costs. Without performance monitoring or cloud management tools, it’s hard to tell when assets are over- or under-provisioned. Some information can be gathered from CloudWatch: it’s important to consider CPU utilization, memory utilization, disk utilization, and network in/out utilization. By reviewing these trended metrics over time, you can make decisions about reducing the size of an instance without hurting the performance of the applications on it. Because it’s common for instances to be underutilized, you can reduce costs by ensuring that all instances are the right size. Similarly, EBS volumes can also be rightsized. Instead of looking at the dimensions of CPU, disk, memory, and network, the critical factors to consider with EBS are capacity, IOPS, and throughput. As discussed earlier, removing unattached volumes is one way to reduce the cost associated with EBS volumes. Another approach is to evaluate which volumes are over-provisioned and can be modified for potential cost savings. AWS offers several types of EBS volumes, from Cold HDDs to Provisioned IOPS SSDs, each with its own pricing and performance characteristics. By analyzing the read/writes on all volumes, you can find opportunities for cost savings. If a volume is attached to an instance and barely has any read/writes, the instance is either inactive or the volume is unnecessary. These are good candidates to flag for rightsizing evaluation.
It’s typical to see General Purpose SSD or Provisioned IOPS SSD volumes that have barely any read/writes over a long period. They can be downgraded to Throughput Optimized HDD or even Cold HDD volumes to reduce cost.

TIP: A good starting place for rightsizing is to look for instances with an Avg CPU < 5% and Max CPU < 20% over 30 days. Instances that fit these criteria are viable candidates for rightsizing or termination.
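Applying the tip's two-part filter (low average and low maximum CPU) might look like the sketch below; the instance stats are invented for illustration.

```python
def rightsizing_candidates(stats, avg_limit=5.0, max_limit=20.0):
    """Flag instances where Avg CPU < 5% AND Max CPU < 20% over 30 days."""
    return [s["id"] for s in stats
            if s["avg_cpu"] < avg_limit and s["max_cpu"] < max_limit]

sample_stats = [
    {"id": "i-api", "avg_cpu": 35.0, "max_cpu": 80.0},
    {"id": "i-idle", "avg_cpu": 1.2, "max_cpu": 9.5},
    {"id": "i-bursty", "avg_cpu": 2.0, "max_cpu": 55.0},  # bursts disqualify it
]
print(rightsizing_candidates(sample_stats))  # → ['i-idle']
```

Checking the maximum as well as the average matters: a bursty workload with a low average may still need its full instance size at peak.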

7. STOP AND START INSTANCES ON A SCHEDULE

As previously highlighted, AWS will bill for an instance as long as it is running; inversely, if an instance is in a stopped state, there is no charge for it. For instances that run 24/7, Amazon will bill for 672 to 744 hours per instance, depending on the month. If an instance is turned off between 5 pm and 9 am on weekdays and stopped on weekends and holidays, total billable hours per month would range from 152 to 184 hours per instance, saving you 488 to 592 instance hours per month. This is an extreme example; flexible workweeks and global teams mean that you can’t simply power down instances outside normal working hours. However, outside of production, you’ll likely find many instances that do not truly need to run 24/7/365. The most cost-efficient environments dynamically stop and start instances on a set schedule, and each cluster of instances can be treated in a different way. These types of lights-on/lights-off policies can often be even more cost-effective than reservation purchases, so it’s crucial to analyze where such a policy can be implemented.

TIP: Set a target for weekly hours that non-production systems should run. One large publishing company set that target at less than 80 hours per week, which is saving them thousands of dollars a month.
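The arithmetic behind such a schedule is simple to check. The sketch below compares an always-on instance with one that runs 8 hours on each weekday; the $0.10/hour rate is an assumed, illustrative price, not a real AWS quote.

```python
ASSUMED_RATE = 0.10  # illustrative on-demand $/hour -- not an actual AWS price

def monthly_cost(hours, rate=ASSUMED_RATE):
    """Cost of an instance billed for the given number of hours."""
    return hours * rate

# A 31-day month with 23 weekdays:
#   always-on  -> 31 * 24 = 744 billable hours
#   scheduled  -> 23 weekdays * 8 hours (9 am - 5 pm) = 184 billable hours
always_on = monthly_cost(744)
scheduled = monthly_cost(184)
print(round(always_on - scheduled, 2))  # → 56.0
```

At this assumed rate a single scheduled instance saves $56/month; across dozens of non-production instances the savings compound quickly.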

8. BUY RESERVED INSTANCES ON EC2, RDS AND AUTOMATE OPTIMIZATION

Purchasing Reserved Instances (RIs) is an extremely effective cost-saving technique, yet many organizations are overwhelmed by the number of options. AWS Reserved Instances allow you to make a commitment to AWS to utilize specific instance types in return for a discount on your compute costs and a capacity reservation that guarantees your ability to run an instance of that type in the future. Reserved Instances are like coupons, purchased either all upfront, partially upfront, or with no upfront payment, which customers can apply to running instances. RIs can save you up to 75% compared to on-demand pricing, so they’re a no-brainer for any company with sustained EC2 or RDS usage. One common misconception is that RIs cannot be modified. This is not true! Once purchased, RIs can be modified in several ways at no additional cost:

  1. Switching Availability Zones within the same region
  2. Switching between EC2 classic and Virtual Private Cloud
  3. Altering the instance type within the same family (this includes both splitting & merging instance types)
  4. Changing the account that benefits from the RI purchase

The most mature AWS customers are running more than 80% of their EC2 infrastructure covered by RI purchases. A best practice is to not let this number dip below 60% for maximum efficiency. One consumer travel website is now running more than 90% of its EC2 instances covered by RIs, saving the company millions of dollars a year. It’s critical to not only purchase RIs but also continuously modify them to get the most value. If a reservation is idle or underutilized, modification means the RI can cover on-demand usage to a greater degree. This ensures that the RIs are operating as efficiently as possible and that savings opportunities are being maximized.

TIP: A 1-year term reservation will almost always break even after six months; this is when you can shut down an instance and still benefit from the reservation’s pricing discount. For a 3-year reservation, the break-even point usually occurs around nine months.
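A break-even sketch under made-up prices: given an upfront payment and an effective RI hourly rate, the break-even point is where cumulative savings versus on-demand cover the upfront cost. All numbers below are illustrative, not real AWS prices.

```python
def break_even_months(upfront, ri_hourly, on_demand_hourly, hours_per_month=730):
    """Months until cumulative RI savings repay the upfront payment."""
    saving_per_month = (on_demand_hourly - ri_hourly) * hours_per_month
    return upfront / saving_per_month

# e.g. $350 upfront, $0.028/h effective RI rate vs. $0.10/h on-demand
months = break_even_months(350, 0.028, 0.10)
print(round(months, 1))  # → 6.7
```

With these assumed prices the reservation pays for itself in under seven months, consistent with the tip's rule of thumb for 1-year terms.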

9. BUY RESERVED NODES ON REDSHIFT AND ELASTICACHE

EC2 and RDS aren’t the only assets in AWS that use reservations. Redshift and ElastiCache are two additional services for which you can buy reservations to reduce cost. Redshift Reserved Nodes function similarly to EC2 and RDS instances, in that they can be purchased all upfront, partially upfront, or no-upfront in 1- or 3-year terms. ElastiCache Reserved Cache Nodes give you the option to make a low, one-time payment for each cache node you want to reserve and in turn receive a significant discount on the hourly charge for that cache node. Amazon ElastiCache provides three Reserved Cache Node types (Light, Medium, and Heavy Utilization Reserved Cache Nodes) that enable you to balance the amount you pay upfront with your effective hourly price. Taking advantage of Reserved Nodes can have a significant impact on your AWS bill.

TIP: Reserved Nodes can save you up to 75% over on-demand rates when used in the steady state. One online gaming company reduced Redshift compute cost by nearly 75% by using Redshift Reserved Nodes.

 

10. MOVE OBJECT DATA TO LOWER COST TIERS

AWS offers several tiers of object storage at different price points and performance levels. Many AWS users tend to favor standard S3 storage, but you can save more than 75% by migrating older data to lower-cost tiers. The best practice is to move data between tiers depending on how it is used. For example, Infrequent Access storage is ideal for long-term storage, backups, and disaster recovery content, while Glacier is best suited for archival. In addition, the Infrequent Access storage class is set at the object level and can exist in the same bucket as standard. The conversion is as simple as editing the properties of the content within the bucket or creating a lifecycle policy to automatically transition S3 objects between storage classes. Here is a quick overview of the current object storage offerings from AWS.

TIP: Best practice is that any objects residing in S3 that are older than 30 days should be converted to S3 Infrequent Access. While standard storage class pricing is tiered based on the amount of content within the bucket, with a minimum price of $0.0275 per GB per month, Infrequent Access storage remains consistent at $0.0125 per GB per month. Keep in mind that access fees for Cold storage are two times greater than the access costs associated with Hot storage, so be careful not to migrate data that is frequently accessed.
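A lifecycle rule expressing the tip's 30-day policy might be sketched as the dict below, which mirrors the rule shape accepted by `aws s3api put-bucket-lifecycle-configuration`. The rule ID, prefix, and second transition are illustrative assumptions.

```python
# Hypothetical lifecycle configuration: tier objects down as they age
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "tier-aging-objects",          # made-up rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},       # made-up prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # per the tip
                {"Days": 365, "StorageClass": "GLACIER"},     # assumed archival step
            ],
        }
    ]
}

print(lifecycle_configuration["Rules"][0]["Transitions"][0]["StorageClass"])
```

Once applied to a bucket, S3 performs the transitions automatically; no per-object edits are needed.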

CONCLUSION

It’s important to remember that these best practices are not meant to be one-time activities but ongoing processes. Because of the dynamic and ever-changing nature of the cloud, cost optimization activities should ideally take place continuously.

 

Configuring Terraform and provisioning an AWS EC2 Instance

Terraform provides an elegant user experience for operators to safely and predictably make changes to infrastructure. Terraform is distributed as a binary package for many supported platforms and architectures. Installing Terraform: to install Terraform, after downloading the appropriate version, unzip the package. Terraform runs as a single binary named terraform. The final step is to make sure that the terraform binary is available on the environment PATH. See this for instructions on setting the PATH on Linux and Mac. Verifying the Installation: after installing Terraform, verify the installation worked by opening a new terminal… Read More


How to Install and Configure Ansible on Ubuntu 18.04

Introduction Configuration management systems are designed to make controlling large numbers of servers easy for administrators and operations teams. They allow you to control many different systems in an automated way from one central location. While there are many popular configuration management systems available for Linux systems, such as Chef and Puppet, these are often more complex than many people want or need. Ansible is a great alternative to these options because it requires a much smaller overhead to get started. In this guide, we will discuss how to install Ansible on… Read More


How to change root password in Ubuntu Linux

By default, the root user account password is locked in Ubuntu Linux for security reasons. As a result, you cannot log in as the root user or use a command such as ‘su -’ to become the superuser.

You need to use the passwd command to change the password for user accounts on Ubuntu Linux. A typical user can change the password only for his or her own account, while the superuser (root) can change the password for any user account. User account information is stored in /etc/passwd, and encrypted passwords are stored in the /etc/shadow file.

How to change root password in Ubuntu

The procedure to change the root user password on Ubuntu Linux:

  1. Type the following command to become the root user and issue passwd:
    sudo -i
    passwd
  2. Or set a password for the root user in a single go:
    sudo passwd root
  3. Test your root password by typing the following command:
    su -

A note about root password on an Ubuntu server/desktop

Enabling the root account by setting a password is not needed. Almost everything you need to do as the superuser (root) of an Ubuntu server can be done using the sudo command. For example, to restart the Apache server:
$ sudo systemctl restart apache2
You can add an additional user to sudo by typing the following command:
$ sudo adduser {userNameHere} sudo
For example, add a user named pankaj to sudo:
$ sudo adduser pankaj sudo

Configuring NTP using chrony

Chrony provides another implementation of NTP and is designed for systems that are often powered down or disconnected from the network. The main configuration file is /etc/chrony.conf, and its parameters are similar to those in the /etc/ntp.conf file. chronyd is the daemon that runs in user space; chronyc is a command-line program that provides a command prompt and a number of commands, for example tracking (displays system time information) and sources (displays information about current sources). Installing Chrony: install the chrony package by using the following command: # yum install chrony Use the following commands to start chronyd and to… Read More


Create a new swap partition on RHEL system

For the purpose of this post, let’s assume that you do not have any swap configured on your system and that /dev/sdc is the referenced drive, with no partitions. Since we are going to make a single partition filling the disk, note that any data currently on that disk will be lost. Follow the steps given below to add the /dev/sdc1 partition as the new swap partition on the system. 1. Use the fdisk command as root to create a swap partition. # fdisk /dev/sdc A new prompt will appear; type ‘p’ to… Read More


Passwordless Login Using SSH Keygen in 5 Easy Steps

SSH (Secure Shell) is an open-source and widely trusted network protocol that is used to log in to remote servers to execute commands and programs. It is also used to transfer files from one computer to another over the network using the secure copy (SCP) protocol. In this article we will show you how to set up password-less login on RHEL/CentOS 7.x/6.x/5.x and Fedora, using SSH keys to connect to remote Linux servers without entering a password. Using password-less login with SSH keys will increase the trust between two Linux servers for easy file synchronization or transfer. My Setup Environment SSH Client : 192.168.0.12… Read More
