Posts in Technology

A short story on AI

Not everyone was comfortable with the idea of AI systems developing self-awareness. Many people feared that such systems would eventually become more intelligent than humans and turn on their creators. These concerns led to protests and demonstrations against the development of advanced AI systems.

Read More

The Cloud Architect

15 Infrastructure as Code tools you can use to automate your deployments

There are MANY tools that can help you automate your infrastructure. This post highlights a few of the more popular tools out there and some of their differentiating features.

Configuration Orchestration vs. Configuration Management

The first thing that should be clarified is the difference between “configuration orchestration” and “configuration management” tools, both of which are considered IaC tools and are included on this list.

Configuration orchestration tools, which include Terraform and AWS CloudFormation, are designed to automate the deployment of servers and other infrastructure.

Configuration management tools like Chef, Puppet, and the others on this list help configure the software and systems on this infrastructure that has already been provisioned.

Configuration orchestration tools do some level of configuration management, and configuration management tools do some level of orchestration. Companies can, and often do, use both types of tools together.

All right, on to the tools!

Terraform


Terraform is an infrastructure provisioning tool created by Hashicorp. It allows you to describe your infrastructure as code, creates “execution plans” that outline exactly what will happen when you run your code, builds a graph of your resources, and automates changes with minimal human interaction.

Terraform uses its own domain-specific language (DSL) called HashiCorp Configuration Language (HCL). HCL is JSON-compatible and is used to create the configuration files that describe the infrastructure resources to be deployed.

Terraform is cloud-agnostic and allows you to automate infrastructure stacks from multiple cloud service providers simultaneously and integrate other third-party services.

You can even write Terraform plugins to add new advanced functionality to the platform.

AWS CloudFormation

Similar to Terraform, AWS CloudFormation is a configuration orchestration tool that allows you to code your infrastructure to automate your deployments.

The primary differences are that CloudFormation is deeply integrated into, and can only be used with, AWS, and that CloudFormation templates can be written in YAML in addition to JSON.

CloudFormation allows you to preview proposed changes to your AWS infrastructure stack and see how they might impact your resources, and manages dependencies between these resources.

To ensure that deployment and updating of infrastructure is done in a controlled manner, CloudFormation uses Rollback Triggers to revert infrastructure stacks to a previously deployed state if errors are detected.

You can even deploy infrastructure stacks across multiple AWS accounts and regions with a single CloudFormation template. And much more.

We’ve written a ton of CloudFormation templates, so we’ll dig much deeper into this in future posts.

Azure Resource Manager and Google Cloud Deployment Manager


If you’re using Microsoft Azure or Google Cloud Platform, these cloud service providers offer their own IaC tools similar to AWS CloudFormation.

Azure Resource Manager allows you to define the infrastructure and dependencies for your app in templates, organize dependent resources into groups that can be deployed or deleted in a single action, control access to resources through user permissions, and more.

GCP Deployment Manager

Google Cloud Deployment Manager offers many similar features to automate your GCP infrastructure stack. You can create templates using YAML or Python, preview what changes will be made before deploying, view your deployments in a console user interface, and much more.

Chef


Chef is one of the most popular configuration management tools that organizations use in their continuous integration and delivery processes.

Chef allows you to create “recipes” and “cookbooks” using its Ruby-based DSL. These recipes and cookbooks specify the exact steps needed to achieve the desired configuration of your applications and utilities on existing servers. This is called a “procedural” approach to configuration management, as you describe the procedure necessary to get your desired state.

Chef is cloud-agnostic and works with many cloud service providers such as AWS, Microsoft Azure, Google Cloud Platform, OpenStack, and more.

Puppet


Similar to Chef, Puppet is another popular configuration management tool that helps engineers continuously deliver software.

Using Puppet’s Ruby-based DSL, you can define the desired end state of your infrastructure and exactly what you want it to do. Then Puppet automatically enforces the desired state and fixes any incorrect changes.

This “declarative” approach – where you declare what you want your configuration to look like, and then Puppet figures out how to get there – is the primary difference between Puppet and Chef. Also, Puppet is mainly directed toward system administrators, while Chef primarily targets developers.

Puppet integrates with the leading cloud providers like AWS, Azure, Google Cloud, and VMware, allowing you to automate across multiple clouds.

Saltstack


Saltstack differentiates itself from tools like Chef and Puppet by taking an “infrastructure as data” approach, instead of “infrastructure as code.”

What this means is that Saltstack’s declarative configuration patterns, while written in Python, are language-agnostic (i.e. you don’t need to learn a specific DSL to create them) and thus are more easily read and understood.

Another differentiator is that Saltstack supports remote execution of commands, whereas Chef and Puppet’s configuration code needs to be pulled from their servers.

Ansible


Ansible is an infrastructure automation tool created by Red Hat, the huge enterprise open source technology provider.

Ansible models your infrastructure by describing how your components and system relate to one another, as opposed to managing systems independently.

Ansible doesn’t use agents, and its code is written in YAML in the form of Ansible Playbooks, so configurations are very easy to understand and deploy.

You can also extend Ansible’s functionality by writing your own Ansible modules and plugins.

Juju

Juju is an IaC tool brought to you by Canonical, the company behind Ubuntu.

You can create Juju charms, which are sets of scripts that deploy and operate software, and bundles, which are collections of charms linked together to deploy entire app infrastructures all at once.

You can then use Juju to manage and apply changes to your infrastructure with simple commands.

Juju works with bare metal, private clouds, and multiple public cloud providers, as well as with other orchestration tools like Puppet and Chef.

Docker


Docker helps you easily create containers that package your code and dependencies together so your applications can run in any environment, from your local workstation to any cloud service provider’s servers.

Configuration files called Dockerfiles, written in Docker’s own simple instruction syntax, are the blueprints used to build container images. These images include everything needed to run a piece of software: code, runtime, system tools and libraries, and settings.

Because it increases the portability of applications, Docker has been especially valuable to organizations that use hybrid or multi-cloud environments.

The use of Docker containers has grown exponentially over the past few years and many consider it to be the future of virtualization.

Vagrant

Vagrant is another IaC tool built by HashiCorp, the makers of Terraform.

The difference is that Vagrant focuses on quickly and easily creating development environments that use a small number of virtual machines, rather than large cloud infrastructure environments that can span hundreds or thousands of servers across multiple cloud providers.

Vagrant runs on top of virtual machine providers such as VirtualBox, VMware, and AWS, and also works well with tools like Chef and Puppet.

Pallet


Pallet is an IaC tool used to automate infrastructure in the cloud, on server racks, or on virtual machines, and it provides a high level of environment customization.

You can run Pallet from anywhere, and you don’t have to set up and maintain a central server.

Pallet is written in Clojure, runs in a Java Virtual Machine, and works with AWS, OpenStack, VirtualBox, and others, but not Azure or GCP.

You can use Pallet to start, stop, and configure nodes, deploy projects, and even run administrative tasks.

(R)?ex

(R)?ex is an open-source, weirdly-spelled infrastructure automation tool. “(R)?ex” is too hard to type over and over again, so I’m going to spell it “Rex” from now on.

Rex has its own DSL for you to describe your infrastructure configuration in what are called Rexfiles, but you can use Perl to harness Rex’s full power.

Like Ansible, Rex is agent-less and uses SSH to execute commands and manage remote hosts. This makes Rex easy to use right away.

CFEngine

CFEngine is one of the oldest IaC tools out there, with its initial release in 1993.

CFEngine allows you to define the desired states of your infrastructure using its DSL. Its agents then monitor your environments to ensure that their states are converging toward the desired states, and report the outcomes.

It’s written in C and claims to be the fastest infrastructure automation tool, with execution times under 1 second.

NixOS

NixOS is a configuration management tool that aims to make upgrading infrastructure systems as easy, reliable, and safe as possible.

The platform does this by making configuration management “transactional” and “atomic.” What this means is that if an upgrade to a new configuration is interrupted for some reason, the system will still boot into either the new or the old configuration, staying stable and consistent.


NixOS also makes it very easy to roll back to a prior configuration, since new configuration files don’t overwrite old ones.

These configuration files are written in the Nix expression language, NixOS’s own functional language.

Conclusion

So there you have it. Check out these configuration orchestration and management tools that you can use to implement Infrastructure as Code and automate your infrastructure.

This list is by no means exhaustive but it should give you a starting point for tools that you can use during your IaC journey.

10 best practices to optimize costs in AWS.

Amazon Web Services (AWS) forever changed the world of IT when it entered the market in 2006, offering services for pennies on the dollar. While AWS has significantly reduced its pricing over the years, many companies learned the hard way that moving to the public cloud didn’t always achieve the cost savings they expected. In fact, organizations have frequently noticed that public cloud bills could be up to three times higher than expected. This doesn’t mean that moving to the public cloud is a mistake, as the public cloud provides huge benefits in agility, responsiveness, simplified operations, and improved innovation. The mistake is to assume that migrating to the public cloud without proper management, governance, and automation will lead to cost savings. To combat rising cloud infrastructure costs, use these proven best practices for cost reduction and optimization to make sure you are getting the most out of your environment.

1. DELETE UNATTACHED EBS VOLUMES

It’s common to see thousands of dollars in unattached Elastic Block Storage (EBS) volumes within AWS accounts. These are volumes that are costing money but aren’t being used for anything. When an instance is launched, an EBS volume is usually attached to act as the local block storage for the application. When an instance is launched via the AWS Console, there is a setting that ensures the associated EBS volume is deleted upon termination of the instance. However, if that setting is not checked, the volume remains when the instance is terminated, and Amazon will continue to charge the full list price of the volume even though it is not in use. By continuously checking for unattached EBS volumes in your infrastructure, you can cut thousands of dollars from your monthly AWS bill. One large online gaming company reduced its EBS usage by one third by eliminating unused EBS volumes and proactively monitoring for unattached volumes.

TIP: Best practice is to delete a volume once it has been unattached for two weeks, as it is unlikely the same volume will be utilized again.
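As a rough illustration (not an official AWS script), a few lines of Python with boto3 can surface unattached volumes for review. The region is an assumption, and the delete call is left commented out so nothing is removed automatically.

```python
# A minimal sketch: list unattached ("available") EBS volumes so they can be
# reviewed and, once confirmed unused, deleted to stop the charges.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

for page in pages:
    for vol in page["Volumes"]:
        print(f'{vol["VolumeId"]}: {vol["Size"]} GiB, created {vol["CreateTime"]:%Y-%m-%d}')
        # After review (e.g. the volume has been unattached for two weeks):
        # ec2.delete_volume(VolumeId=vol["VolumeId"])
```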

2. DELETE AGED SNAPSHOTS

Many organizations use EBS snapshots to create point-in-time recovery points to use in case of data loss or disaster. However, EBS snapshot costs can quickly get out of control if not closely monitored. Individual snapshots are not costly, but the cost can grow quickly when many are provisioned. A compounding factor is that users can configure settings to automatically create subsequent snapshots daily, without scheduling older snapshots for deletion. Organizations can get EBS snapshots back under control by monitoring snapshot cost and usage per instance to make sure they do not spike out of control. Set a standard in your organization for how many snapshots should be retained per instance, and remember that most of the time, recovery will occur from the most recent snapshot. One B2B SaaS company found that among its millions of EBS snapshots, a large percentage were more than two years old, making them good candidates for deletion.

TIP: One way to find snapshots that are good candidates for deletion is to identify snapshots that have no associated volumes; when a volume is deleted, it’s common for its snapshots to remain in your environment. Be careful not to delete snapshots that still back volumes in use by an instance.
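A similar hedged sketch can flag snapshot candidates: it marks snapshots whose source volume no longer exists or that are older than a cutoff. The two-year threshold follows the example above and is an assumption, and the delete call is commented out pending review.

```python
# A rough sketch: flag EBS snapshots that are aged or whose source volume is
# gone. Snapshots that back AMIs or volumes still in use should be kept.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=730)  # assumed 2-year threshold

# Build the set of volume IDs that still exist in the account.
existing_volumes = set()
for page in ec2.get_paginator("describe_volumes").paginate():
    existing_volumes.update(v["VolumeId"] for v in page["Volumes"])

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        orphaned = snap["VolumeId"] not in existing_volumes
        aged = snap["StartTime"] < cutoff
        if orphaned or aged:
            print(snap["SnapshotId"], "orphaned" if orphaned else "", "aged" if aged else "")
            # ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])  # after review
```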

3. DELETE DISASSOCIATED ELASTIC IP ADDRESSES

An Elastic IP address is a public IP address that can be associated with an instance and allows the instance to be reachable via the Internet. The pricing structure for an Elastic IP address is unique in that when an instance is running, the Elastic IP is free of charge. However, if an instance is stopped or terminated and the Elastic IP address is not associated with another instance, you will be charged for the disassociated Elastic IPs. Unfortunately, it is difficult to identify and manage disassociated Elastic IPs within the AWS console. This may or may not amount to a significant cost driver in your AWS environment, but it’s key to stay on top of wasted resources and be proactive rather than reactive in managing costs before they spike out of control. From a best-practice standpoint, monthly Elastic IP charges should be as close to zero as possible. If disassociated Elastic IPs exist within your AWS accounts, they should either be re-associated to an instance or released outright to avoid the wasted cost. One large telecommunications company learned the hard way that small changes in its environment can lead to significant charges for Elastic IPs. To reduce its overall monthly spend, the company terminated hundreds of idle instances in one of its accounts. Company leaders forgot, however, to release the attached Elastic IP addresses. The finance department did not learn about this exorbitantly costly mistake until the following month, when the AWS invoices arrived with Elastic IP charges of almost $40,000.
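One possible way to spot these with boto3, as a sketch with the release call commented out so nothing is removed automatically:

```python
# A minimal sketch: find Elastic IPs that are not associated with anything,
# so they can be re-associated or released after review.
import boto3

ec2 = boto3.client("ec2")

for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr and "InstanceId" not in addr:
        print("Disassociated Elastic IP:", addr["PublicIp"])
        # ec2.release_address(AllocationId=addr["AllocationId"])  # after review
```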

4. TERMINATE ZOMBIE ASSETS

Zombie assets are infrastructure components that are running in your cloud environment but not being used for any purpose. Zombie assets come in many forms. For example, they could be EC2 instances that were once used for a particular purpose but are no longer in use and have not been turned off. Zombie EC2 instances can also occur when instances fail during the launch process, or because of errors in scripts that fail to de-provision instances. Zombie assets can also come in the form of idle Elastic Load Balancers (ELBs) that aren’t being used effectively or an idle Relational Database Service (RDS) instance. No matter the cause, AWS will charge for these assets as long as they are in a running state. They must be isolated, evaluated, and immediately terminated if deemed nonessential. Take a snapshot, or point-in-time copy, of the asset before terminating or stopping it to ensure you can recover it if the asset is needed again. One customer had a nightly process to help its engineering velocity, loading an anonymized production database into RDS to use for testing and verification in a safe environment. The process worked well and saved lots of time for engineers. However, while the automation was good at spinning up new environments, the customer never planned for cleanup. Each night a new RDS instance was spun up, with the attached resources, and then was abandoned, eventually leaving hundreds of zombie resources.

TIP: Start your zombie hunt by identifying instances that have a Max CPU% <5% over the past 30 days. This doesn’t automatically mean this instance is a zombie, but it’s worth investigating further.
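A hedged sketch of that zombie hunt using boto3 and CloudWatch, applying the Max CPU < 5% over 30 days rule from the tip. The threshold and period come from the tip; anything flagged still needs human review.

```python
# List running instances whose maximum CPU utilization stayed below 5%
# for the past 30 days, as candidate zombies for further investigation.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            stats = cw.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=86400,           # one datapoint per day
                Statistics=["Maximum"],
            )
            points = stats["Datapoints"]
            if points and max(p["Maximum"] for p in points) < 5.0:
                print("Possible zombie:", inst["InstanceId"], inst["InstanceType"])
```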

5. UPGRADE INSTANCES TO THE LATEST GENERATION

Every few years, AWS releases the next generation of instances with improved price-to-performance and additional functionality like clustering, enhanced networking, and the ability to attach new types of EBS volumes. For example, upgrading a c1.xlarge to a c3.xlarge will cut costs by up to 60% while offering significantly faster processing. The migration of a fleet of instances from one generation to the next will likely be a gradual process for most companies. The first step is to decide which accounts have instances that are candidates for conversion. If you are heavily invested in Reserved Instances (RIs), only instances with expiring reservations or those running strictly on-demand should be converted. One large B2B SaaS company found that almost 60% of the instance hours it ran in the past 12 months were on older-generation instance types. Analysis revealed that upgrading those instances to the latest generation would save millions of dollars per year.
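As a simplified sketch of the mechanics only, an instance’s type can be changed with boto3 once it is stopped. The instance ID and target type below are placeholders, and reservation coverage and workload compatibility should be checked before any real conversion.

```python
# Change an instance to a newer-generation type (placeholder IDs/types).
# The instance must be stopped before its type can be modified.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder
new_type = "c5.xlarge"                # placeholder target generation

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": new_type})

ec2.start_instances(InstanceIds=[instance_id])
```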

6. RIGHTSIZE EC2 INSTANCES & EBS VOLUMES

Rightsizing Elastic Compute Cloud (EC2) instances is the cost-reduction initiative with the potential for the biggest impact. It’s common for developers to spin up new instances that are substantially larger than necessary. This may be intentional, to give themselves extra headroom, or accidental, since they don’t yet know the performance requirements of the new workload. Over-provisioning an EC2 instance can lead to exponentially higher costs. Without performance monitoring or cloud management tools, it’s hard to tell when assets are over- or under-provisioned. Some information can be gathered from CloudWatch; it’s important to consider CPU utilization, memory utilization, disk utilization, and network in/out utilization. By reviewing these trended metrics over time, you can make decisions about reducing the size of an instance without hurting the performance of the applications running on it. Because it’s common for instances to be underutilized, you can reduce costs by ensuring that all instances are the right size.

Similarly, EBS volumes can also be rightsized. Instead of looking at the dimensions of CPU, disk, memory, and network, the critical factors to consider with EBS are capacity, IOPS, and throughput. As discussed earlier, removing unattached volumes is one way to reduce the cost associated with EBS volumes. Another approach is to evaluate which volumes are over-provisioned and can be modified for potential cost savings. AWS offers several types of EBS volumes, from Cold HDDs to Provisioned IOPS SSDs, each with its own pricing and performance characteristics. By analyzing the read/writes on all volumes, you can find opportunities for cost savings. If a volume is attached to an instance and barely has any read/writes, the instance is either inactive or the volume is unnecessary; these are good candidates to flag for rightsizing evaluation. It’s typical to see General Purpose SSD or Provisioned IOPS SSD volumes that barely have any read/write activity for a long period. They can be downgraded to Throughput Optimized HDD or even Cold HDD volumes to reduce cost.
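To illustrate that volume-activity analysis, here is a rough boto3 sketch that sums read and write operations per in-use volume over 30 days. The 1,000-operation threshold is an arbitrary assumption for “barely any read/writes.”

```python
# Flag low-activity EBS volumes as rightsizing/downgrade candidates.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

def total_ops(volume_id, metric):
    # Sum the daily totals of an EBS CloudWatch metric over the window.
    stats = cw.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=start, EndTime=end, Period=86400, Statistics=["Sum"],
    )
    return sum(p["Sum"] for p in stats["Datapoints"])

for page in ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "status", "Values": ["in-use"]}]
):
    for vol in page["Volumes"]:
        ops = total_ops(vol["VolumeId"], "VolumeReadOps") + total_ops(vol["VolumeId"], "VolumeWriteOps")
        if ops < 1000:  # assumed "barely any read/writes" threshold
            print("Low-activity volume:", vol["VolumeId"], vol["VolumeType"], f"{ops:.0f} ops")
```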

TIP: A good starting place for rightsizing is to look for instances that have an Avg CPU < 5% and a Max CPU < 20% over 30 days. Instances that meet both criteria are viable candidates for rightsizing or termination.

7. STOP AND START INSTANCES ON A SCHEDULE

As previously highlighted, AWS bills for an instance as long as it is running. Conversely, if an instance is in a stopped state, there is no charge for that instance. For instances that run 24/7, Amazon will bill for 672 to 744 hours per instance, depending on the month. If an instance is turned off between 5 pm and 9 am on weekdays and stopped on weekends and holidays, then total billable hours per month would range from 152 to 184 hours per instance, saving you 488 to 592 instance hours per month. This is an extreme example; flexible workweeks and global teams mean that you can’t simply power down instances outside normal working hours. However, outside of production, you’ll likely find many instances that do not truly need to run 24/7/365. The most cost-efficient environments dynamically stop and start instances based on a set schedule, and each cluster of instances can be treated in a different way. These types of lights-on/lights-off policies can often be even more cost-effective than reservation purchases, so it’s crucial to analyze where this type of policy can be implemented.
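A minimal sketch of such a lights-on/lights-off script is below. It assumes a hypothetical Schedule=office-hours tag and would be invoked from a scheduler (cron, or a scheduled event triggering a Lambda) at the start and end of the working day.

```python
# Stop or start all instances carrying a (hypothetical) Schedule=office-hours
# tag. Run with "stop" in the evening and "start" in the morning.
import sys
import boto3

ec2 = boto3.client("ec2")

def tagged_instance_ids(state):
    ids = []
    for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    ):
        for res in page["Reservations"]:
            ids += [i["InstanceId"] for i in res["Instances"]]
    return ids

action = sys.argv[1] if len(sys.argv) > 1 else "stop"
if action == "stop":
    ids = tagged_instance_ids("running")
    if ids:
        ec2.stop_instances(InstanceIds=ids)
else:
    ids = tagged_instance_ids("stopped")
    if ids:
        ec2.start_instances(InstanceIds=ids)
print(f"{action}: {ids}")
```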

TIP: Set a target for weekly hours that non-production systems should run. One large publishing company set that target at less than 80 hours per week, which is saving them thousands of dollars a month.

8. BUY RESERVED INSTANCES ON EC2, RDS AND AUTOMATE OPTIMIZATION

Purchasing Reserved Instances (RIs) is an extremely effective cost-saving technique, yet many organizations are overwhelmed by the number of options. AWS Reserved Instances allow you to make a commitment to AWS to utilize specific instance types in return for a discount on your compute costs and a capacity reservation that guarantees your ability to run an instance of that type in the future. Reserved Instances are like coupons, purchased either all upfront, partially upfront, or with no upfront payment, which customers can apply to running instances. RIs can save you up to 75% compared to on-demand pricing, so they’re a no-brainer for any company with sustained EC2 or RDS usage. One common misconception around RIs is that they cannot be modified. This is not true! Once purchased, RIs can be modified in several ways at no additional cost:

  1. Switching Availability Zones within the same region
  2. Switching between EC2 classic and Virtual Private Cloud
  3. Altering the instance type within the same family (this includes both splitting & merging instance types)
  4. Changing the account that benefits from the RI purchase

The most mature AWS customers are running more than 80% of their EC2 infrastructure covered by RI purchases. A best practice is to not let this number dip below 60% for maximum efficiency. One consumer travel website is now running more than 90% of its EC2 instances covered by RIs, saving the company millions of dollars a year. It’s critical to not only purchase RIs but also continuously modify them to get the most value. If a reservation is idle or underutilized, modification means the RI can cover on-demand usage to a greater degree. This ensures that the RIs are operating as efficiently as possible and that savings opportunities are being maximized.
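As a hedged example of monitoring that utilization, the Cost Explorer API can report RI utilization per period; the dates below are placeholders for the period you care about.

```python
# A sketch: pull Reserved Instance utilization from Cost Explorer to see how
# efficiently existing reservations are being used.
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_reservation_utilization(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},  # example period
    Granularity="MONTHLY",
)
for period in resp["UtilizationsByTime"]:
    pct = period["Total"]["UtilizationPercentage"]
    print(f'{period["TimePeriod"]["Start"]}: {pct}% of reserved hours used')
```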

TIP: A 1-year term reservation will almost always break-even after six months. This is when you can shut down an instance and still benefit from the reservation’s pricing discount. For a 3-year reservation, the break-even point usually occurs around nine months.

9. BUY RESERVED NODES ON REDSHIFT AND ELASTICACHE

EC2 and RDS aren’t the only assets in AWS that use reservations. Redshift and ElastiCache are two additional services for which you can buy reservations to reduce cost. Redshift Reserved Nodes function similarly to EC2 and RDS instances, in that they can be purchased all upfront, partially upfront, or with no upfront payment in 1- or 3-year terms. ElastiCache Reserved Cache Nodes give you the option to make a low, one-time payment for each cache node you want to reserve and in turn receive a significant discount on the hourly charge for that cache node. Amazon ElastiCache provides three Reserved Cache Node types (Light, Medium, and Heavy Utilization Reserved cache nodes) that enable you to balance the amount you pay upfront with your effective hourly price. Taking advantage of Reserved Nodes can have a significant impact on your AWS bill.

TIP: Reserved Nodes can save you up to 75% over on-demand rates when used in the steady state. One online gaming company reduced Redshift compute cost by nearly 75% by using Redshift Reserved Nodes.

 

10. MOVE OBJECT DATA TO LOWER COST TIERS

AWS offers several tiers of object storage at different price points and performance levels. Many AWS users tend to favor standard S3 storage, but you can save more than 75% by migrating older data to lower tiers of storage. The best practice is to move data between the tiers of storage depending on its usage. For example, Infrequent Access storage is ideal for long-term storage, backups, and disaster recovery content, while Glacier is best suited for archival. In addition, the Infrequent Access storage class is set at the object level and can exist in the same bucket as Standard, so the conversion is as simple as editing the properties of the content within the bucket or creating a lifecycle policy to automatically transition S3 objects between storage classes.

TIP: Best practice is that any objects residing in S3 that are older than 30 days should be converted to S3 Infrequent Access. While standard storage class pricing is tiered based on the amount of content within the bucket, with a minimum price of $0.0275 per GB per month, Infrequent Access storage remains consistent at $0.0125 per GB per month. Keep in mind that access fees for Cold storage are two times greater than the access costs associated with Hot storage, so be careful not to migrate data that is frequently accessed.
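For illustration, a lifecycle rule implementing that 30-day transition can be created with boto3. The bucket name is a placeholder, and the empty prefix applies the rule to the whole bucket.

```python
# Add a lifecycle rule that transitions objects to the Infrequent Access
# storage class after 30 days.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```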

CONCLUSION

It’s important to remember that these best practices are not meant to be one-time activities, but ongoing processes. Because of the dynamic and ever-changing nature of the cloud, cost optimization activities should ideally take place continuously.

 

How Indian schools are starting to use Cloud Computing, AI and VR

Technology has seeped into every aspect of our lives and Indian schools are catching up by integrating Cloud Computing, AI and VR into the teaching process.

Indian schools are finally starting to use AI, VR and cloud computing as a part of the teaching and education process.

In the 21st century, it is technology that rules the world. With the increasing adoption of tech in almost every sphere of life, there are very few sectors that remain untouched. India, as a global IT leader, has been at the forefront of this revolution, with the integration of technology in areas ranging from farming to governance to banking, and even everyday activities like food delivery.

Of course, tech adoption hasn’t been equal across all sectors. India’s education sector has long lagged behind its global counterparts in implementing the latest developments. But the rise of EdTech startups and focused government policies in recent years seems to be reversing this trend.

In fact, there are currently more than 40 startups focusing on EdTech, most of which came into existence in the last five years, demonstrating the massive potential for this trend in India.

Additionally, with the government looking to implement a massive overhaul of education in India through schemes such as RISE (Revitalising Infrastructure and Systems in Education), with a budget of more than Rs 1 lakh crore, the focus is on new-age technologies such as cloud computing, AI and VR to enable these changes.

Cloud Computing

While cloud computing is primarily helping schools reduce the costs usually incurred in purchasing legacy software and setting up data centres, an additional benefit has been the enablement of MOOCs (massive open online courses), which allow teachers and students in far-flung areas to learn and equip themselves with the latest knowledge.

In fact, government schemes such as SWAYAM (Study Webs of Active Learning for Young Aspiring Minds), which aim at making learning material available to all citizens, especially teachers and students, have become possible only because of cloud computing.

Artificial Intelligence

The earlier ‘one size fits all’ model of education is slowly losing ground to adaptive, personalised learning pedagogies.

In fact, AI is not just helping to create educational tools that automate the teaching of more nuanced topics (like pronunciation improvement and grammar correction) but is also improving areas such as administration (for example, automating admissions), learning, tutoring and assessment.

In fact, the National Testing Agency (NTA) has even proposed the use of adaptive assessment for conducting entrance exams such as JEE Main, NEET UG and NET, to help avoid paper leaks (a serious threat) while ensuring that the competitiveness and fairness of these exams are upheld.

Virtual Reality

Indian education has always struggled with poor teaching quality. Standardized tests have often revealed that students struggle to perform at the level they should.

VR can not only improve the quality of teaching by offering experiential, immersive learning but also enable the gamification of hard-to-understand topics.

Additionally, VR adoptions can lead to virtual labs where students can conduct and simulate experiments that may not be possible or may be too dangerous in the real world.

Taking a cue from footballer Gerard Pique’s line, “Evolution is all about looking forward,” the tech evolution of the Indian education sector is certainly focused on the future.

The rapid improvements in technology, as well as the higher rates of adoption, are bound to have an impact on the students. With a two-pronged thrust from the private sector as well as the government, exciting developments await the student of tomorrow!

THERE’S NOW A GARGOYLE TALKING TRASH TO GUESTS AT DENVER’S AIRPORT

He’s 243 years old and is here to clear the air on all the conspiracies at DEN. Turns out this gargoyle is wiser than we thought and is sharing more than we planned. Along with giving travelers his two cents, he’s surprising them with smiles and plenty of laughs. We believe interactions at DEN should not only be helpful but fun too. That’s the art of airporting.

Containers or serverless? Consider this…

Every day in our technology world, we see a never-ending battle for the technology platform of choice. One of those battles, serverless vs. containers, looks to be gaining steam. Both approaches have their advantages, disadvantages, and proponents. Google, Cisco, and IBM are pushing the container approach, whereas serverless is being pushed heavily by the major public cloud providers like AWS, Azure and GCP (all of these cloud operators also offer services for both native containers and Kubernetes).

As we saw in the growth of cloud, and now containers, this migration isn’t always seamless. In fact, we’ve seen both Microsoft (Azure Stack) and Amazon (Outposts) introduce products to work within our existing data centers. Migrations haven’t returned the value that was promised because cost, security, regulatory needs, and complexity haven’t been as seamless as we all once imagined. The move to containers and serverless architectures will face some of the same dilemmas as cloud, but this is setting itself up to be 2019’s enterprise platform battle.

Unlike before, today’s industries want to move to the latest and greatest technology quickly, usually because of significant advantages in price, time, and simplicity once deployed. And it’s simply more fun as an engineer to play and learn as the tech itself grows.

Now let’s examine some of the differences between the two latest and greatest:

New Apps:

Containers are useful when an application requires multiple instances running in parallel. Serverless platforms impose hard limits on execution time, which prevents long-running and complicated processes from finishing and makes large-scale data crunching particularly challenging, whereas with containers, spawning new containers to split the workload is simpler and handled automatically.

Serverless, on the other hand, lends itself to exciting possibilities for new applications, especially in IoT, physical security, and the world of chatbots. The ability to use triggers to kick off various actions (without having any underlying infrastructure to manage) and grow at scale allows for both cost-effective and simplified management. Applications that listen for triggers can execute code in an IoT environment as it changes, reducing costs and simplifying software management for a company that may not have a giant DevOps staff.
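To make the trigger model concrete, here is a minimal, hypothetical AWS Lambda handler in Python: it only runs when an event arrives and holds no infrastructure in between invocations. The event fields are illustrative, not a fixed schema.

```python
# A sketch of a trigger-driven Lambda handler, e.g. invoked by an IoT rule.
import json

def handler(event, context):
    # 'event' carries whatever the trigger delivered (sensor reading, object key, ...)
    print("Received event:", json.dumps(event))
    reading = event.get("temperature")
    if reading is not None and reading > 30:
        # React to the condition; a real system might notify or store data here.
        return {"alert": True, "temperature": reading}
    return {"alert": False}
```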

Native Cloud Apps:

Migrating existing native cloud applications is basically a question about the nature of the application itself. If you’ve purpose-built your applications with a single public cloud vendor, then integrating and even migrating applications to serverless becomes very easy. All the major serverless platforms have built-in hooks to other native cloud services and allow for quick and seamless architecture changes. In particular, applications that relied on orchestration tools for quick scale-up and scale-down may find the simplicity that serverless provides very advantageous.

The argument for containers is similar to that for serverless: gain functionality from existing cloud services and improve orchestrated scaling. The difference is in the maturity of containers versus serverless. Container testing is much more advanced, and the number of people with the skills required to efficiently architect a solution is far greater than with serverless. But most applications today are hybrid or multi-cloud/multi-architecture, and here the differences vary even more.

Legacy Hybrid/On-Prem Apps:

The last ten years have shown the world the complexity of lift and shift, and the high costs associated with such a cloud strategy. Instead, we’ve seen the rise of hybrid applications that run on multiple clouds and may work with an existing on-premises solution providing some functionality. Serverless can provide a faster path to the cloud, but only if the time to architect the solution correctly has been invested. Legacy applications that perform quick on-demand checks and tasks can easily be ported to a serverless solution, reducing the complexity of managing a larger, more complicated code base and physical footprint; several of these legacy mainframe tasks have already been ported successfully into serverless apps.

Containers offer a more straightforward path for migrating your legacy technologies, with far less architectural or vendor dependence. The single biggest long-term hurdle for serverless growth is the vendor lock-in to one of the big three (Amazon, Microsoft, and Google), as the technologies are not yet truly portable. Contrast that with Kubernetes, which runs on top of various clouds to reduce the need to use a particular cloud vendor and has been adopted by companies like Cisco and IBM as part of their overall cloud strategies. By contrast, the serverless approach is largely platform-dependent, and is thus being pushed heavily by the incumbent IaaS providers to keep you on their platforms.

Summary:

In the short term, it seems that serverless is still in its early stages and is best suited to purpose-built applications in specific domains. But it has a fabulous upside and value as an alternative approach to handling large-scale complexity and heavy architecture costs. Amazon’s recent announcements hint that they see a similar set of needs to complement Lambda and other serverless technologies in the future, and the keen interest in serverless will only spur solutions to these problems over time.

Microsoft’s new tool to import data from a spreadsheet picture into Excel.

Microsoft launched a new tool inside the Excel app for Android that lets you take a picture of a spreadsheet and import the data right into Excel. This means you don’t need to manually re-enter data, which is huge if you have a lot of printed data and can’t copy and paste the spreadsheet you’re looking at. We will test it out soon, and share how… Read More

Read More

Will The Rise Of Automation & AI Threaten Job Security?

There’s a fear that AI is going to take over our jobs – and with the advent of everything from self-driving cars to artificial customer service agents, it’s a valid concern. It’s especially fair when McKinsey, one of the most trusted global management consulting firms, predicts that as many as 800 million full-time employees could have their work displaced by 2030 due to automation. Yes, that statistic is alarming. However, that data point alone doesn’t tell the whole story. In fact, with the following statistics next to it, we can paint… Read More

Read More

AWS SnowMobile

It’s amazing to see how companies like AWS are changing the way data migration from the traditional data center to the cloud happens. Have a look at AWS Snowmobile: an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration…. Read More

Read More