Commentary
3/24/2011 09:36 AM
Joe Weinman

Cloud Economics And The Customer Experience

Businesses must ensure that cloud-enabled applications are tuned for optimal performance by paying close attention to network latency and other details of cloud architecture.

I recently participated in a lively debate that revolved around three primary economic justifications for cloud computing: cost, revenue, and risk. But there's a fourth that Web 2.0 firms know all about: the total customer experience.

The topic came up at the Cloud Connect conference in Santa Clara, Calif., where I had the privilege of chairing the cloud economics track and engaging experts in cloudonomics, including Will Forrest, principal at McKinsey & Co.; Microsoft’s Rolf Harm; James Staten with Forrester Research; Randy Bias, CEO and Founder of Cloudscaling; and Ravi Rajagopal, VP at CA Technologies and adjunct professor at NYU.

Cost savings come from reducing total cost, typically through a mix of enterprise data centers and cloud services. Revenue growth comes from IT resource elasticity, which enhances the ability to serve all (revenue-producing) customer demand. And risk can be reduced by the ability to scale up and down in response to random variation, cyclical factors, or macroeconomic conditions.
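To make the cost argument concrete, here is a minimal back-of-the-envelope sketch in Python. The demand pattern and the per-server-hour prices are invented for illustration only; what matters is the structure of the comparison between owning for the peak, renting everything, and owning the baseline while bursting to the cloud.

```python
# Toy comparison of three capacity strategies for a spiky daily workload.
# All prices and the demand pattern are made-up assumptions for illustration.

OWNED_PER_SERVER_HOUR = 0.06   # assumed amortized in-house cost per server-hour
CLOUD_PER_SERVER_HOUR = 0.10   # assumed on-demand cloud premium per server-hour

# 40 servers of steady load, spiking to 200 servers for four hours a day.
demand = [200 if 10 <= hour < 14 else 40 for hour in range(24)]
peak, baseline = max(demand), min(demand)

own_for_peak = peak * 24 * OWNED_PER_SERVER_HOUR
pure_cloud = sum(demand) * CLOUD_PER_SERVER_HOUR
hybrid = (baseline * 24 * OWNED_PER_SERVER_HOUR
          + sum(d - baseline for d in demand) * CLOUD_PER_SERVER_HOUR)

for name, cost in [("own enough for the peak", own_for_peak),
                   ("rent everything from the cloud", pure_cloud),
                   ("own the baseline, burst to the cloud", hybrid)]:
    print(f"{name:38s} ${cost:7.2f}/day")
```

With these assumed numbers the hybrid mix is cheapest, which is the "mix of enterprise data centers and cloud services" point above; with a flatter demand curve or a smaller cloud premium, the ranking can change.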

What about the customer experience? Today’s devices, whether mobile or wired, take advantage of an unprecedented rate of technology enhancement: longer battery life, stunning displays, faster processors, and components that add features and functions, such as cameras, accelerometers, and touch screens. This is driving an accelerating wave of innovation in apps for businesses and consumers.

Increasingly, the endpoint application interacts with cloud-based servers. Online games tie game consoles to game servers. CRM data presented or collected at the device is maintained in the cloud. Social network interactions are mediated by the cloud. And search results are determined in the cloud and delivered to the device.

The dilemma is that users want results more quickly, but behind-the-scenes network and processing tasks are becoming increasingly complex. Consider search results. It used to be that you would submit a query and a few seconds later a results page would load. Next, services evolved that gave you search term suggestions with every letter typed. Then, the search results themselves became instantly available. The latest technology will return full pages as you type in each letter of a URL.

If you type sixty words per minute, that works out to one word per second, or roughly 100 to 200 milliseconds per letter at five or six characters per word. For such services to be effective, each keystroke's round trip must fit inside that window: the trip from the keyboard across the network to the remote servers, the processing at those servers, and the return of results to the display, including tasks such as page rendering, context switching, and traversing the network stack.
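A quick sketch of that latency budget helps. The typing speed comes from the paragraph above; every other figure below is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope keystroke latency budget.

WORDS_PER_MINUTE = 60
CHARS_PER_WORD = 5.5            # rough average, including the trailing space

chars_per_second = WORDS_PER_MINUTE * CHARS_PER_WORD / 60.0
budget_ms = 1000.0 / chars_per_second
print(f"Budget per keystroke: ~{budget_ms:.0f} ms")

# Hypothetical places that time gets spent between keystroke and rendered result.
spent_ms = {
    "client rendering and context switching": 20,
    "network stack traversal, both ends": 10,
    "network round trip": 80,
    "server-side processing": 40,
}
for task, ms in spent_ms.items():
    print(f"  {task:40s} {ms:3d} ms")

total = sum(spent_ms.values())
print(f"Total: {total} ms of a ~{budget_ms:.0f} ms budget "
      f"-> {'fits' if total <= budget_ms else 'over budget'}")
```

With these assumptions the service squeaks in under the roughly 180 millisecond budget; a slower network path or heavier server-side work would blow it.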

Next-generation apps are likely to have an even higher ratio of task complexity to available time. For example, emerging augmented reality functionality on mobile devices may require sending not just a keystroke but a high-definition image from a panning mobile device to servers that process the live image data synchronously using advanced algorithms, then return text or graphics for compositing with the image.

Moreover, today's devices drive high engagement through their ability to simulate physical phenomena, making the experience more immersive and tying directly into humans' ability to compensate for slow reflex arcs in the somatic nervous system by projecting trajectories: in effect, skating not to where the puck is, but to where it will be. What will happen when you play air hockey on your table with someone in another city? Tens of milliseconds of reaction time must be realized over network connections that can take up to 160 milliseconds (New York to Hong Kong), driving the need for dispersed processing.
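As a rough sanity check on that 160 millisecond figure, here is a sketch of the minimum round trip imposed by physics alone. The distance, route factor, and fiber speed are approximations for illustration, not measurements.

```python
# Why geographic dispersion matters: even at the speed of light in fiber,
# a long-haul round trip can exceed a real-time interaction budget.

GREAT_CIRCLE_KM = 12_900        # approximate New York to Hong Kong distance
FIBER_SPEED_KM_PER_MS = 200     # roughly 2/3 of c, a common rule of thumb
ROUTE_FACTOR = 1.2              # cables rarely follow the great circle

one_way_ms = GREAT_CIRCLE_KM * ROUTE_FACTOR / FIBER_SPEED_KM_PER_MS
round_trip_ms = 2 * one_way_ms
reaction_budget_ms = 50         # "tens of milliseconds" of usable reaction time

print(f"Minimum round trip NY <-> HK: ~{round_trip_ms:.0f} ms")
print(f"Interaction budget: ~{reaction_budget_ms} ms")
print("Conclusion: processing must move closer to the user"
      if round_trip_ms > reaction_budget_ms
      else "Conclusion: the budget fits")
```

The propagation delay alone lands near 155 milliseconds round trip, before any queuing or processing, which is why the front end of such an application has to sit near the user.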

In the real world, we take this for granted. A gas station on every corner means that we don’t need to drive to Alaska or the Middle East to fill up. In other words, to enhance the customer experience, a distributed architecture for at least the front-end customer interface is necessary. Sure, the gas may be refined in Louisiana, but at the moment of truth in the service interaction, the vendor must have a point of presence near the customer.
