Businesses must ensure that cloud-enabled applications are tuned for optimal performance by paying close attention to network latency and other details of cloud architecture.
We know that time is money. A large retailer found that 100 milliseconds of extra delay could have a “substantial revenue” impact, while a search provider discovered that delaying search results by 500 milliseconds (from 400 ms to 900 ms) reduced page views, and therefore revenue, by 20%.
Often in this world, you get what you pay for, but in the world of applications and the cloud, things are more complex. Sometimes you get what you pay for, sometimes incremental improvements are extremely costly, and sometimes there is such a thing as a free lunch. For example, an investment in performance tuning can reduce the resources needed to reach higher performance levels. By not wasting cycles in poorly written code, not only does the application run faster, but it takes fewer resources—servers, storage, networks—to deliver that faster response time.
The cloud also offers compelling economics for parallelization. Suppose that you just got married, and wanted to get 100 couples—your guests—from the wedding to a reception elsewhere. You could use one vehicle that made 100 round trips, substantially delaying the champagne toast, or 100 vehicles that each made just 1 trip, or something in between. Obviously, using 100 vehicles in parallel will reduce the time between “I do” and noshing on hors d’oeuvres. If you had to buy the vehicles, there would be a serious difference between laying out the cash for one car versus 100. But with taxicabs—a pay-per-use model—the cost for 100 cab rides is exactly the same regardless of which strategy we use. I called this my 7th Law of Cloudonomics: Space-Time is a Continuum. This is critically important. With the cloud, we can get whatever degree of speed-up we want—subject to the inherent parallelism of the application—without paying a penny more.
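The wedding-taxi arithmetic can be sketched in a few lines. This is an illustrative toy, not anything from the column itself: the task count and per-ride price are hypothetical, and "trips" stands in for elapsed time.

```python
# Sketch of the 7th Law: under pay-per-use, total cost is invariant to the
# degree of parallelism, while elapsed time shrinks with more workers.
# Numbers are hypothetical: 100 unit tasks (couples), $1 per ride.

TASKS = 100   # e.g., 100 couples to transport
RATE = 1.0    # pay-per-use price per task (hypothetical)

def cost_and_time(workers: int) -> tuple[float, int]:
    """Total cost and elapsed round trips when TASKS are split across workers."""
    cost = TASKS * RATE              # same total work, so the same total bill
    trips = -(-TASKS // workers)     # ceiling division: trips each worker makes
    return cost, trips

for n in (1, 10, 100):
    cost, trips = cost_and_time(n)
    print(f"{n:3d} workers -> cost ${cost:.0f}, {trips} trip(s)")
```

Whether one cab makes 100 trips or 100 cabs each make one, the bill is $100; only the wait for the champagne toast changes.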
Finally (per my 8th Law of Cloudonomics), since the area that a circle covers is proportional to the square of its radius, reducing latency by half requires not twice as many service nodes, but four times as many. As applications become more interactive, and given that user experience continues to be an important factor in the success of any business, the need for a distributed footprint is apparent. However, building ever more locations of your own to keep enhancing performance is prohibitively costly, so leveraging the footprint that hosting, content delivery, and cloud providers already offer maximizes cost effectiveness.
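The quadratic relationship behind the 8th Law can be checked with a quick back-of-the-envelope calculation. The numbers below are arbitrary placeholders; the point is only the ratio, which is independent of the area chosen.

```python
# Nodes needed to cover a fixed service area scale with 1/r^2, where r is
# each node's coverage radius (a proxy for the latency it can deliver).
import math

AREA = 1_000_000.0  # total service area to cover (arbitrary units)

def nodes_needed(radius: float) -> float:
    """Nodes required if each node covers a circle of the given radius."""
    return AREA / (math.pi * radius ** 2)

baseline = nodes_needed(100.0)  # baseline coverage radius
halved = nodes_needed(50.0)     # halve latency -> halve the radius
print(halved / baseline)        # -> 4.0
```

Halving the radius quadruples the node count, regardless of the total area: the 4× cost of a 2× latency improvement falls straight out of the geometry.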
There are three rules to keep in mind when it comes to enhancing users’ experience via the cloud: 1) tune your application; 2) maximize parallelism for performance enhancement at zero marginal cost via the pay-per-use model; and 3) use the cloud’s geographically-dispersed footprint to reduce network latency between the endpoint and the non-device-resident front-end of your application.
Joe Weinman leads Communications, Media and Entertainment Industry Solutions for Hewlett-Packard. The views expressed are his own.