Businesses must ensure that cloud-enabled applications are tuned for optimal performance by paying close attention to network latency and other details of cloud architecture.

Joe Weinman, Contributor

March 24, 2011

We know that time is money. A large retailer found that 100 milliseconds of extra delay could have a “substantial revenue” impact, while a search provider discovered that delaying search results by 500 milliseconds (from 400 ms to 900 ms) reduced page views, and therefore revenue, by 20%.

Often in this world you get what you pay for, but in the world of applications and the cloud, things are more complex. Sometimes you get what you pay for, sometimes incremental improvements are extremely costly, and sometimes there is such a thing as a free lunch. For example, an investment in performance tuning can deliver higher performance with fewer resources: an application that stops wasting cycles in poorly written code not only runs faster, it also needs fewer servers, less storage, and less network capacity to deliver that faster response time.
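To make that free lunch concrete, here is a minimal Python sketch: two routines that produce the identical result, one wasting cycles and one not. The functions, data sizes, and timings are illustrative assumptions, not drawn from any particular application.

```python
import timeit

# Hypothetical illustration only: the same task written two ways.

def dedupe_slow(items):
    """Poorly tuned: list membership is a linear scan, so the loop is O(n^2)."""
    seen = []
    for item in items:
        if item not in seen:  # rescans the whole list on every iteration
            seen.append(item)
    return seen

def dedupe_fast(items):
    """Tuned: set membership is O(1) on average, so the loop is O(n)."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

data = list(range(5000)) * 2  # 10,000 items with duplicates (assumed workload)
print("slow:", timeit.timeit(lambda: dedupe_slow(data), number=3))
print("fast:", timeit.timeit(lambda: dedupe_fast(data), number=3))
```

Both versions return the same answer; the tuned one simply burns far fewer cycles, which in a pay-per-use cloud translates directly into fewer, cheaper instances for the same response time.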

The cloud also offers compelling economics for parallelization. Suppose you just got married and want to get 100 couples—your guests—from the wedding to a reception elsewhere. You could use one vehicle that makes 100 round trips, substantially delaying the champagne toast; 100 vehicles that each make just one trip; or something in between. Obviously, using 100 vehicles in parallel will reduce the time between “I do” and noshing on hors d’oeuvres. If you had to buy the vehicles, there would be a serious difference between laying out the cash for one car and for 100. But with taxicabs—a pay-per-use model—the cost of 100 cab rides is exactly the same regardless of which strategy you choose. I called this my 7th Law of Cloudonomics: Space-Time is a Continuum. This is critically important: with the cloud, we can get whatever degree of speed-up we want, subject to the inherent parallelism of the application, without paying a penny more.
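A few lines of arithmetic make the 7th Law concrete. The trip time and fare below are assumed numbers chosen purely for illustration; only the shape of the calculation matters.

```python
# Hypothetical illustration of the 7th Law: under pay-per-use,
# total cost stays flat while elapsed time shrinks with parallelism.

TRIPS = 100       # 100 couples to move
TRIP_TIME = 0.5   # hours per trip (assumed)
FARE = 20.00      # dollars per trip (assumed)

for vehicles in (1, 10, 100):
    elapsed = (TRIPS / vehicles) * TRIP_TIME  # trips run in parallel batches
    cost = TRIPS * FARE                       # cost depends only on total trips
    print(f"{vehicles:>3} vehicles: {elapsed:5.1f} hours, ${cost:,.2f}")
```

One vehicle takes 50 hours, 100 vehicles take half an hour, and the bill is $2,000 either way: speed-up at zero marginal cost.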

Finally, per my 8th Law of Cloudonomics, since the area that a circle covers is proportional to the square of its radius, halving latency requires not twice as many service nodes but four times as many. As applications grow more interactive, and given that user experience remains an important factor in the success of any business, the need for a distributed footprint is apparent. However, it is too costly to keep building your own locations to keep enhancing performance, so leveraging the footprint that hosting, content delivery, and cloud providers already offer maximizes cost-effectiveness.
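The geometry behind the 8th Law is the standard coverage-area argument; the derivation as written below is a sketch of that argument, not a formula from the original column.

```latex
% Each of N service nodes covers roughly 1/N of a region of area A:
\pi r^2 \approx \frac{A}{N}
\;\Longrightarrow\;
r \propto \frac{1}{\sqrt{N}}
% Latency scales with distance, i.e., with r. Halving it means:
\qquad
\frac{1}{\sqrt{kN}} = \frac{1}{2}\cdot\frac{1}{\sqrt{N}}
\;\Longrightarrow\;
k = 4
```

In other words, latency falls only with the square root of the node count, which is why buying the last few milliseconds with your own data centers gets expensive fast.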

There are three rules to keep in mind when it comes to enhancing users’ experience via the cloud: 1) tune your application; 2) maximize parallelism, which under the pay-per-use model enhances performance at zero marginal cost; and 3) use the cloud’s geographically dispersed footprint to reduce network latency between the endpoint and the non-device-resident front end of your application.

Joe Weinman leads Communications, Media and Entertainment Industry Solutions for Hewlett-Packard. The views expressed are his own.
