Management Strategies For The Cloud Revolution
How to Build an Elastic Cloud Center
In the first chapter, I tried to move the debate away from how large an Internet data center needs to be in order to count as part of the cloud, a topic that engineers can argue over, and put the focus on the end users. Now let's move the spotlight in the opposite direction and show what the newly empowered end user can do with a data center that's available in the cloud.

As this is being written, Amazon.com itself is 15 years old, but Amazon Web Services' EC2 has been in operation for just three years. In October 2009 it passed the first anniversary of its operation as a generally available resource, following two years as a beta, or experimental, facility.

Amazon now offers three sizes of server to choose from: small, large, and extra large, something like your choices at a Starbucks coffee shop. In addition, Amazon throws in two compute-cycle-intensive variations, which carry out many arithmetic calculations for each step in a program, for applications that require above-average use of CPUs. Once you've chosen a size, it's still possible to add or subtract capacity by activating or deactivating servers if the pace at which you want your job to run suddenly demands it.

Amazon Web Services includes CloudWatch, which, for an hourly fee, collects operational statistics on the servers you designate to be monitored. If you subscribe to CloudWatch, you can also use Auto Scaling, which takes the response-time information from CloudWatch and automatically scales up or cuts back the number of servers you are using. If you don't want visitors' maximum wait to exceed 1.5 seconds, Auto Scaling will keep enough server capacity on hand to maintain that response time. An additional service that Amazon offers is Elastic Load Balancing, which distributes incoming application traffic across the servers that a customer is operating.
This service both spreads the load and detects unhealthy server performance, redistributing the workload around such a server until it can be restored to full operation. Elastic Load Balancing incurs a charge of 2.5 cents per hour, plus 0.8 cent for each gigabyte of data transferred, or 8 cents for every 10 gigabytes through the load balancer. Ten gigabytes is a lot of data; it's equivalent to 100 yards of books on a shelf.
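To make that arithmetic concrete, the prices quoted above can be turned into a simple cost estimate. This is only a sketch based on the figures in the text (2.5 cents per hour plus 0.8 cent per gigabyte); the function name and the sample workload are illustrative, and actual Amazon pricing has changed over time.

```python
def elb_cost(hours, gigabytes_transferred):
    """Estimate Elastic Load Balancing cost from the prices quoted above:
    2.5 cents per hour the balancer runs, plus 0.8 cent per gigabyte
    of data passing through it. Illustrative only."""
    HOURLY_RATE = 0.025   # dollars per hour
    PER_GB_RATE = 0.008   # dollars per gigabyte
    return hours * HOURLY_RATE + gigabytes_transferred * PER_GB_RATE

# A balancer running around the clock for a 30-day month (720 hours)
# that passes 100 GB of traffic:
print(f"${elb_cost(720, 100):.2f}")  # 18.00 + 0.80 = $18.80
```

At these rates, the load balancer itself costs well under a dollar a day; the hourly charge, not the per-gigabyte charge, dominates for all but the heaviest sites.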
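The scale-up, scale-back behavior described above can be pictured as a simple control loop: watch the measured response time and add or remove servers to stay under the target. The sketch below is a toy illustration of that idea; the threshold, step sizes, and function name are my own assumptions, not Amazon's actual CloudWatch or Auto Scaling interface.

```python
def autoscale_step(current_servers, avg_response_secs,
                   target_secs=1.5, min_servers=1, max_servers=20):
    """One pass of a toy auto-scaling policy: add a server when the
    measured response time exceeds the target, remove one when there
    is comfortable headroom. Illustrative only -- not the AWS API."""
    if avg_response_secs > target_secs and current_servers < max_servers:
        return current_servers + 1          # scale out
    if avg_response_secs < 0.5 * target_secs and current_servers > min_servers:
        return current_servers - 1          # scale in
    return current_servers                  # hold steady

# Response time climbs past the 1.5-second target, so capacity grows...
print(autoscale_step(4, 2.1))   # 5
# ...then traffic subsides and a server can be released.
print(autoscale_step(5, 0.4))   # 4
```

A real policy would also smooth the metric over several minutes to avoid thrashing, which is part of what a monitoring service like CloudWatch provides.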
A common case where CloudWatch, Auto Scaling, and Elastic Load Balancing might be useful is when a business is hosting a Web site and doesn't know how much traffic to expect. If the site goes from a few hundred hits an hour to tens or hundreds of thousands, the site owner can balance the load by calling up additional EC2 servers itself, or employ an EC2 management service, such as RightScale, to monitor the situation and perform the task for it. This elasticity comes at a reasonable price: CloudWatch with Auto Scaling results in a charge of 1.5 cents per hour for each EC2 server used.

Other cloud providers offer a similar elasticity of service. Chad Parker, the CEO of Cybernautic, a Web site design firm in Normal, Ill., was given the task of building a Web site for the Sunday evening hit TV show Extreme Makeover: Home Edition. The show was coming to nearby Philo, Ill., and he knew he would need more server resources than usual behind the project.
Extreme Makeover travels to a new location each week and shows a home that has been refurbished for a local family with the help of a local builder, friends and neighbors, and hundreds of volunteers, onlookers, and other casual participants. By the time a show airs on a given Sunday evening, several thousand people have had some part in its production, or know someone who has. Parker knew that he had a lot of unpredictable traffic headed his way.

He had a week to get a site up and running that could show text, pictures, and video and be updated frequently. In addition, the multimedia-heavy site would experience severe spikes in traffic whenever some small event, such as the pouring of the foundation, triggered a response among the event's expanding audience. Asked what traffic to expect, Conrad Ricketts, Extreme Makeover's executive producer, advised Parker that each site built for the show so far that year had crashed as successive waves of traffic washed over it. "I was told, 'You will need to make sure you have an unlimited supply of beer and pizza for your network administrator,'" recalled Parker when he was interviewed by InformationWeek four days after the Philo episode aired. It would be the network administrator's job to reboot the site after each crash, and the prediction was that he would need to be at his post for long stretches at a time.

Parker concluded that if he attempted to host the show on his existing servers, the traffic would crash the 200 Web sites of his other clients, a prospect he did not relish. He opted instead to place the Extreme Makeover site in the Rackspace Cloud, a service that guaranteed as much hardware, networking, and storage as the customer needed, no matter how drastically the workload changed. Contrary to what Parker expected, big waves of traffic hit the site before the show's October 25, 2009, airing. Parker says he knows he would not have been ready for those spikes on his own.
The newspapers in Bloomington and Champaign wrote stories about the home repair project and the family that would benefit, setting off waves of inquisitive visitors. A follower of the event put up a fan page on Facebook that overnight gained 12,600 fans, most of whom seemed to want to visit the Extreme Makeover site several times a day. Parker and his staff were kept busy posting updates, pictures, and stories on the project, refreshing the site as many as 50 to 60 times per day. Shortly thereafter, entertainment bloggers in Hollywood wrote about tidbits they had picked up on the upcoming Philo show, generating even more traffic. Visitors to the site spread news of updates over Twitter. Long before the show aired, traffic spiked to heights Parker had not conceived possible.