
Know Your Risks In The Cloud

2011-10-25 by Sean Hull

The economics are unbeatable; despite its flaws, cloud computing is becoming far too attractive for CEOs and CTOs to ignore.  While scalability experts and performance nuts are scratching their heads, managers are busy moving ahead with migration projects and spearheading web applications that run on cloud servers.

 

Yet, performance is one area where cloud computing infrastructure does not always shine. Here are the three biggest risk areas and what you can do to manage them.

 

1. Variable disk performance

The virtualization of storage through facilities like Amazon's EBS is both a boon and a bane. Scripted control gives plenty of flexibility with configurations, but a frequent gripe is performance variability: the disk I/O throughput of these volumes can vary dramatically as other tenants share the same underlying resource.  Luckily there are some proven ways to mitigate this problem.

 

o Stripe across multiple EBS volumes with software RAID

This can be done with Linux's md software RAID subsystem.  Note that although striping will reduce the problem, it will not eliminate it.  Furthermore, it may complicate using EBS snapshots as your backup solution, since the member volumes must be snapshotted as a consistent set.
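To see why striping averages out per-volume variability, here is a minimal sketch of how RAID-0 places data — illustrative only, not the actual Linux md implementation. Consecutive chunks land on different volumes in round-robin order, so a sequential burst of I/O is spread across every member rather than hammering one.

```python
# Illustrative sketch of RAID-0 (striping) chunk placement.
# Not the real md code -- just the round-robin mapping it performs.

CHUNK_SIZE = 256 * 1024  # 256 KiB chunks (chunk size is configurable in md)

def raid0_target(offset: int, num_volumes: int) -> tuple:
    """Map a byte offset on the array to (volume index, offset on that volume)."""
    chunk = offset // CHUNK_SIZE       # which chunk of the array this byte is in
    volume = chunk % num_volumes       # chunks rotate round-robin across members
    stripe = chunk // num_volumes      # which "row" of chunks on each member
    return volume, stripe * CHUNK_SIZE + offset % CHUNK_SIZE

# A 1 MiB sequential read on a 4-volume array touches every member once:
volumes = {raid0_target(off, 4)[0] for off in range(0, 1024 * 1024, CHUNK_SIZE)}
print(sorted(volumes))  # -> [0, 1, 2, 3]
```

Because each member serves only a fraction of any large request, one slow EBS volume drags throughput down proportionally less than it would on its own.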

 

o Use page caches and object caches

Using Varnish as a page cache and Memcached as an object cache adds layers that absorb momentary disk I/O blips, since both serve requests straight from memory.
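The pattern both tools support is cache-aside: check memory first, and touch disk-backed storage only on a miss. A minimal sketch of the idea, using a plain dict as a stand-in for Memcached (the function and key names here are hypothetical):

```python
import time

cache = {}  # stand-in for Memcached; entries are (value, expiry time)

def slow_disk_read(key: str) -> str:
    """Placeholder for a query that would hit EBS-backed storage."""
    return f"value-for-{key}"

def cached_get(key: str, ttl: float = 60.0) -> str:
    entry = cache.get(key)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]                        # hit: served from memory
    value = slow_disk_read(key)                # miss: pay the disk I/O cost
    cache[key] = (value, time.monotonic() + ttl)
    return value

print(cached_get("user:42"))  # first call pays the "disk" cost
print(cached_get("user:42"))  # second call is served from memory
```

During a disk I/O slowdown, only cache misses feel the pain; the bulk of traffic keeps being answered from RAM.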

 

o Horizontally scale your database and webserver tiers

By distributing incoming sessions across a pool of servers, you reduce the load, and thus the disk I/O demands, on any one machine, since each is handling fewer users.
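In practice the distribution is done by a load balancer in front of the tier; round-robin is the simplest policy. A toy sketch of that policy (the server names are hypothetical):

```python
import itertools

# Hypothetical webserver pool behind a load balancer.
webservers = ["web-1", "web-2", "web-3"]
rotation = itertools.cycle(webservers)

def next_server() -> str:
    """Hand the next incoming session to the next server in rotation."""
    return next(rotation)

assignments = [next_server() for _ in range(6)]
print(assignments)  # -> ['web-1', 'web-2', 'web-3', 'web-1', 'web-2', 'web-3']

# Each server sees a third of the sessions -- and a third of the disk I/O.
```

Real balancers add health checks and sometimes session stickiness, but the load-spreading effect on disk I/O is the same.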

 

You may notice that these are the same methods you would employ to scale any web application.  And you're right. You would build your web architecture to be prepared for a swarm of Internet users suddenly hitting your site. That same flexible, reactive architecture is also more resilient to variability in disk performance.

 

2. Server failures are the rule

Many of the clients we work with use Amazon EC2 to host their web applications. In a surprising number of cases, operations teams simply spin up a server using Elasticfox or the AWS dashboard and begin deploying their application onto it manually, as they would with a server in a traditional hosting center.  Unfortunately, Amazon is not a traditional hosting center.

 

EC2 instances not only carry a lower SLA than typical physical servers, they also fail on a fairly regular basis.  What does this mean?  A server can disappear out from under you at a moment's notice.  That may sound like an unacceptable risk until you look at the solution.

 

Managing this risk involves scripting your entire infrastructure.  You may choose to do so with traditional shell scripts or with a configuration and infrastructure management framework like Chef.  Either way, you must automate the process of bringing your environment back up from bare metal.
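Whatever tool you choose, the property that matters is that the script is repeatable: running it against a bare instance, or re-running it against a half-configured one, converges to the same state. A minimal sketch of that idempotent "ensure" style in Python (the paths and config contents are hypothetical; Chef and similar frameworks provide this behavior for you):

```python
import os
import tempfile

def ensure_file(path: str, contents: str) -> bool:
    """Write the file only if missing or different; return True if changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == contents:
                return False          # already converged -- do nothing
    with open(path, "w") as f:
        f.write(contents)
    return True

root = tempfile.mkdtemp()             # stand-in for a fresh server's filesystem
config = os.path.join(root, "app.conf")

print(ensure_file(config, "port=8080\n"))  # first run: True (created)
print(ensure_file(config, "port=8080\n"))  # re-run: False (no change needed)
```

Because every step checks before it acts, the same script both builds a server from scratch and repairs one that drifted, which is exactly what you need when instances can vanish.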

 

But once you've done this you'll realize how much you've gained.  By running on unreliable servers, you have been forced to take disaster recovery seriously.  You have also been forced to put all the little pieces together and make rebuilding from the ground up possible through code, process and documentation.  This is the best practice we have paid lip service to for years, finally put into play in our deployments.

 

3. Compliance requirements and data integrity

 

Some clients and firms ask us where their data is physically stored; a question no doubt arising from legal requirements that may prohibit certain data from being located outside a country's borders. While this may be the big question on most minds, a bigger one might be: what happens if a lawsuit subpoenas servers in Amazon's environment and inadvertently sweeps up everything, including sensitive company information that isn't relevant to the case?

 

Either way, you really have only two options:

 

o Employ Encryption

By encrypting the filesystems or volumes that hold sensitive data, you may not know where that data physically resides, but you do know that you alone hold the keys to those files and to any copies of them.
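In practice this means block-device or filesystem encryption (dm-crypt/LUKS on Linux, for example) with the keys kept out of the cloud. Purely as a toy illustration of the principle — not something to ever use as real encryption — here is a SHA-256 counter-mode keystream cipher: whoever copies the ciphertext off a subpoenaed volume learns nothing without the key you hold.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher -- illustration only.
    Use dm-crypt/LUKS or a vetted crypto library for real data."""
    out = bytearray()
    for off in range(0, len(data), 32):
        # Derive a fresh 32-byte keystream block from the key and position.
        block = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        chunk = data[off:off + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))  # XOR with keystream
    return bytes(out)

key = b"kept-on-premises-never-in-the-cloud"   # hypothetical key material
secret = b"customer records and salary data"    # hypothetical plaintext

ciphertext = keystream_xor(key, secret)
print(ciphertext != secret)                      # True: opaque without the key
print(keystream_xor(key, ciphertext) == secret)  # True: you hold the key
```

The same XOR operation encrypts and decrypts, which keeps the sketch short; real disk encryption layers add authenticated modes, key derivation, and sector-level IVs.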

 

o Store sensitive data outside the Cloud

Your secrets should obviously be kept close to the chest. Don't store everything in the Cloud. Servers hosted in a traditional datacenter, or your own in-house servers, still play a big role in keeping your data safe. You will likely always need to balance deploying in the cloud against meeting your compliance requirements.

 

The Cloud continues to be adopted en masse while many of the big challenges of deploying onto it remain poorly understood. Being aware of the biggest risks when migrating applications to the Cloud is the first step to conquering them.

Author

Sean Hull

Heavyweight Internet Group

Sean Hull is founder and senior consultant of New York City-based Heavyweight Internet Group. He has over 20 years of experience as a technology consultant, advisor, author and speaker serving clients such as Nielsen, The Hollywood Reporter, Adweek, NBC/iVillage, Zagats, Rent The Runway, ideeli and Kaplan Test Prep. With expertise in MySQL, Linux, and Cloud deployments, his interests lie in high internet traffic management, website performance, scalability challenges, business continuity, security, and data architecture; topics which he frequently writes about at www.iheavy.com/blog

