InformationWeek Stories by Charles Babcock
http://www.informationweek.com
Copyright 2012, UBM LLC.

2013-01-03T10:20:00Z
Zynga Regroups In Quest For Profit
Online game maker confirms it eliminated 11 underperforming titles, aims to shore up stock price in 2013.
http://www.informationweek.com/cloud-computing/software/zynga-regroups-in-quest-for-profit/240145391?cid=RSSfeed_IWK_authors

After a bad third quarter, the loss of three-quarters of its stock price and the departure of its CFO, Zynga has regrouped, eliminated 11 underperforming game titles, and freed up infrastructure on its Zynga Cloud for launches in 2013, it confirmed Dec. 31. <P> At the same time, it left some customers in the lurch. One of the casualties is PetVille, which, judging by the comments posted on PetVille forums, generated some venerated, if virtual, family pets. Death by cloud, a phrase that once described a process in the cinder-block-and-mortar dog pound, has taken on new meaning. <P> One of PetVille's 1 million players had played the game with her autistic son for two years. "It was something we could do together, and made us very happy. I wish you 'people' could have seen the streams of tears running down both our faces as we played our last session ... Is the almighty dollar THAT important over the happiness of some very loyal fans?" she wrote, as <a href="http://www.sfgate.com/technology/article/Zynga-puts-PetVille-out-of-its-misery-4160242.php#ixzz2GrwAW4LI">the <em>San Francisco Chronicle</em></a> reported Jan. 1. <P> There may be more eliminations. On Oct. 23, CEO Mark Pincus in a <a href="http://blog.zynga.com/2012/10/23/ceo-update-2/">message to employees</a> said 13 "older games" would be discontinued and 5% of the company's staff would be laid off. <P> This regrouping means Zynga has sought to free up space on its own Zynga Cloud infrastructure so that it may rely less on Amazon Web Services, a cost center it can do without under money-losing circumstances. In that sense, Zynga's decision to rely on its own, Amazon-like infrastructure, with AWS's EC2 serving as an easily tapped supplement, may prove a survival tactic that many Web startups seek to emulate. In 2011, Zynga relied on AWS for 80% of its compute capacity, with 20% from its own data centers. By mid-2012, it had reversed the ratio: 80% reliance on its own Zynga Cloud and 20% on Amazon. <P> <strong>[ Learn more about Zynga's hybrid cloud. See <a href="http://www.informationweek.com/cloud-computing/platform/zynga-cto-how-to-win-hybrid-cloud-game/232901673?itc=edit_in_body_cross">Zynga CTO: How To Win Hybrid Cloud Game</a>. ]</strong> <P> The Zynga Cloud relies on multiple data centers, closely linked to Facebook's and to those of communications specialist Equinix, to make its games as responsive as possible to users. Zynga built out its cloud with Dell servers in configurations that in many ways mimic those on Amazon, but are grouped in sets of servers optimized for game operations.
<P> The company previously listed the games being either eliminated or frozen at their present membership levels. The eliminated games are PetVille, Mafia Wars 2, Treasure Isle, FishVille, Indiana Jones Adventure, Montopia, ForestVille, Mojitomo, Word Scramble Challenge and Mafia Wars Shakedown. World Mafia Wars 2 was frozen at its present player level. <P> Zynga officials declined to comment on where exactly its cost-control measures were taking effect. But the game eliminations will free up server infrastructure for potentially more successful launches in 2013. <P> Part of its third-quarter shakedown was to close its Boston office and cut its "Ville" and Bingo staff in Austin, Texas, by 100 game developers. But the product line is far from dead. In fact, the eliminated games make space for a "Ville" newcomer, CoasterVille, launched Dec. 5, in which players use a cast of varied, quirky characters to build a theme park and run it. <P> In addition, it's launching more mobile games that draw from the large base of iOS and Android users instead of collecting users off Facebook. Clay Jam, launched Nov. 29, runs on the iPhone, iPad, iPod Touch and Android smartphones and tablets. <P> It's uncertain whether that strategy will produce success. The company reported a $53 million loss for the third quarter on Oct. 24. Former CFO Dave Wehner moved to a senior finance position at Facebook Nov. 13 after two years at Zynga; he had been with the company through its IPO in December 2011. Zynga's $10 initial share price fell to a little over $2 over the course of 2012, a decline of 78% in value. It rose to $2.36 Monday, after the game eliminations were announced, and closed at $2.39 Wednesday. <P> Zynga has also written off half the purchase price of OMGPop, as participation in its Draw Something game failed to meet expectations in 2012. <em>Forbes</em> estimated the acquisition cost Zynga $200 million. <P> "These changes come at an important time," Pincus said in his October message to employees. "We are positioning ourselves for long-term growth, and I'm confident that we have the breadth and depth of management talent to deliver on our mission of connecting the world through games." <P> Whether its cloud infrastructure and management team have unlocked the path to success remains in doubt. But the former gives it a platform on which it can mount new initiatives, while the latter has learned some important lessons from the school of hard knocks.

2013-01-02T09:06:00Z
5 Data Center Trends For 2013
Energy efficiency will continue to be a major focus of data center operations over the coming year, but that's not all we'll see.
http://www.informationweek.com/hardware/data-centers/5-data-center-trends-for-2013/240145349?cid=RSSfeed_IWK_authors

As 2013 opens with new prospects for data center operations, we'll see new looks at some old themes, especially around energy efficiency. Increased power costs and pressure from environmental groups will lead data center designers to look to new technologies to cut their traditional energy needs. But that's not all we'll see; here are five important trends you can expect to see gain strength in 2013. <P> <strong>1. Location Drives Energy Efficiency</strong> <P> There is one data center concern that overwhelms all others: the need for energy efficiency.
At one time, energy costs were viewed as a given, minor compared to the expense of hardware purchases and the labor of operations. But as hardware became more efficient and automated procedures more prevalent, the cost of energy steadily rose to about 25% of total operating costs, and it now sits close to the top of the list. <P> In addition, a clash is building that pits environmentalists against smartphone and tablet users and data center operators. As the evidence for global warming builds, the unbridled growth of computing in its many forms is coming under attack as a wasteful contributor to the problem. Indeed, such an attack was the theme of a <a href="http://www.informationweek.com/cloud-computing/infrastructure/ny-times-data-center-indictment-misses-b/240007880">landmark <em>New York Times</em> story</a> published Sept. 22, "The Cloud Factories: Power, Pollution and the Internet." <P> <strong>[ Our analysis of the <em>New York Times</em> story was one of <em>InformationWeek's</em> top 12 stories of 2012. Catch up on the other 11 at <a href="http://www.informationweek.com/global-cio/interviews/best-of-informationweek-2012-12-must-rea/240145150?itc=edit_in_body_cross">Best Of InformationWeek 2012: 12 Must-Reads</a>. ]</strong> <P> This clash will take place even though data center builders are showing a remarkable ability to reduce the amount of power consumed per unit of computing executed. The traditional enterprise data center uses just under twice as much electricity as it needs to do the actual computing. The extra amount goes to run the cooling, lighting and other systems that sustain the data center. <P> A measure of this ratio is <a href="http://www.informationweek.com/hardware/data-centers/data-centers-may-not-gobble-earth-after/231300144">PUE, or power usage effectiveness</a>. An ideal PUE would be 1.0, meaning all the power brought to the data center is used for computing -- probably not an achievable goal. But instead of 2.0, Google showed it could build multiple data centers that operated with a PUE of 1.16 in 2010, reduced to 1.14 in 2011. <P> Each hundredth of a point cut from the PUE represents a huge commitment of effort. As Jim Trout, CEO of Vantage Data Centers, a wholesale data center space builder in Santa Clara, Calif., explained, <a href="http://www.informationweek.com/hardware/data-centers/data-centers-may-not-gobble-earth-after/231300144?pgno=2">only difficult gains remain</a>. "The low-hanging fruit has already been picked," he said in an interview. <P> Nevertheless, Facebook illustrated with its construction of a new data center in Prineville, Ore., that the right location can drive energy consumption lower. The second-biggest energy hog, just below the electricity used in computing, is the power consumed for cooling. Facebook built an energy-efficient data center east of the Cascades and close to cheap hydropower. By using a misting technique with ambient air, it can cool the facility without an air conditioning system. <P> It drove the PUE at Prineville down to 1.09, but a Facebook mechanical engineer conceded few enterprise data centers can locate in the high, dry-air plains of eastern Oregon, where summer nights are cool and winters cold. "These are ideal conditions for using evaporative cooling and humidification systems, instead of the mechanical chillers used in more-conventional data center designs," <a href="http://opencompute.org/2012/11/14/cooling-an-ocp-data-center-in-a-hot-and-humid-climate/">said Daniel Lee</a>, a mechanical engineer at Facebook, in a Nov. 14 blog.
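To make the PUE arithmetic above concrete, here is a minimal sketch of the calculation. The kilowatt figures are illustrative placeholders, not numbers reported by Google or Facebook.

```python
# Minimal PUE (power usage effectiveness) arithmetic, as described above.
# The kilowatt figures below are illustrative placeholders, not real data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / power delivered to the IT equipment."""
    return total_facility_kw / it_equipment_kw

def overhead_share(pue_value: float) -> float:
    """Fraction of total power spent on cooling, lighting, distribution losses, etc."""
    return 1.0 - 1.0 / pue_value

if __name__ == "__main__":
    # A traditional enterprise facility: roughly 2 watts in for every watt of computing.
    print(pue(2000.0, 1000.0), round(overhead_share(2.0), 2))    # 2.0 0.5
    # At the 1.14 level cited above, only about 12% of the power is overhead.
    print(round(overhead_share(1.14), 2))                        # 0.12
```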
<P> Most enterprise data centers remain closer to expensive power and must operate year-round in less-than-ideal conditions. Facebook also built a new data center in Forest City, N.C. (60 miles west of Charlotte), where summers are warm and humid, attempting to use the same ambient-air technique. To Lee's surprise, during one of the three hottest summers on record, the misting method worked there as well, although at higher temperatures and humidity. Instead of needing 65-degree air, the facility can operate with air as warm as 85 degrees. And instead of requiring a maximum of 65% relative humidity, it can function at 90%. That most likely meant increasing the flow of fan-driven air. Nevertheless, a conventional air-conditioning system with its power-hungry condensers would have driven the Forest City PUE far above the Prineville level. <P> The equipment and design used to achieve that PUE are available for all to see. In 2011, Facebook initiated the Open Compute Project, making the designs and equipment specifications of its data centers public. Both Prineville and Forest City follow the OCP specs. <P> Thus, Facebook has set a standard that is likely to be emulated by more and more data center builders. In short, 2013 will be the year when the Open Compute Project's original goal is likely to be put into practice: "What if we could mobilize a community of passionate people dedicated to making data centers and hardware more efficient, shrinking their environmental footprint?" <a href="http://opencompute.org/2012/04/09/open-compute-project-one-year-in/">wrote Frank Frankovsky</a>, Facebook's director of hardware design and supply chain, in a blog April 9. <P> Google is another practitioner of efficient data center operation, using its own designs. For a broad mix of older and newer data centers, it achieved an overall PUE of 1.14 in 2011, with a typical modern facility coming in at 1.12, <a href="http://www.datacenterdynamics.com/ja/keywords/joe-kava">according to Joe Kava</a>, VP of Google data centers, in a March 26 blog. In 2010, the overall figure was 1.16. <P> <strong>2. Natural Gas Gains Steam</strong> <P> Beyond reducing the amount consumed, there's another energy issue looming in data center operations. There's usually little debate over what type of energy to use: electricity purchased off the grid is nearly everyone's first choice. <P> But the U.S. is currently experiencing a glut of natural gas, drilled from underground shale formations in North Dakota, Pennsylvania and the Appalachian states. Gas is at its lowest price in years, due to the oversupply. And a few companies are poised to take advantage of it through onsite generators burning natural gas to supply all their power needs. <P> <a href="http://www.datagryd.com/">Datagryd</a> is one of them, in its 240,000-square-foot data center at 60 Hudson Street in Manhattan. CEO Peter Feldman said in an interview that not only can he generate electricity with the cleanest-burning fossil fuel, but his firm has designed a cogeneration facility in which the hot exhaust gases from the generators drive a cooling system for the data center. When Datagryd has surplus power, it can be sold to New York City's Con Edison utility. <P> As California and other states take up the possibility of allowing drilling for natural gas, Datagryd's example may become a pattern in future large data center operations.
The ability to generate the electricity needed from fuel delivered by underground pipeline had its <a href="http://www.informationweek.com/hardware/data-centers/hurricane-sandy-surge-challenges-nyc-dat/240012583">advantages when Hurricane Sandy hit</a> New York and New Jersey. While generators at other nearby data centers ran out of fuel and sputtered to a stop, Datagryd continued delivering compute services to its customers; it didn't need to get diesel trucks over closed bridges or through blocked tunnels. It continued functioning throughout the crisis, Feldman said. <P> <strong>3. Rise Of The Port-A-Data-Center</strong> <P> Speaking of Hurricane Sandy, another alternative type of data center, located in Dulles, Va., bore the brunt of Sandy's impact without going down. It was AOL's outdoor micro data center, which stands in a module roughly the size of a Port-a-Potty. The modules are managed remotely, so if storm winds had knocked the structure down, there would have been no one onsite to set things right. <P> The unit sits on a concrete slab and contains a weather-tight rack of servers, storage and switching. Power is plugged into the module, the network is connected and water service is installed, since hot air off the servers is used to warm water in a heat exchanger. The water is then cooled outside the module by ambient air. The water runs in a closed-loop piping system, and its temperature can rise as high as 85 degrees without the cooling system failing to do its job. <P> The design of the system brings the power source close to the servers that are going to use it. The water- and fan-driven cooling system requires little energy compared to air conditioning units. And there's no need for lights or electrical locking mechanisms, as a glass-house data center typically has. The combination gives the micro data center a potential PUE rating of 1.1, according to spokesmen for <a href="http://www.ellipticalmedia.com/">Elliptical Mobile</a>, which produces the units. <P> "We experienced no hardware issues or alerts from our network operations center, nor did we find any issues with the unit leaking," said Scot Killian, senior technology director of data center services at AOL, in <a href="http://www.datacenterknowledge.com/archives/2012/11/28/aols-outdoor-micro-modules-weather-sandy/">a report by Data Center Knowledge</a> on Nov. 28. <P> AST Modular is another producer of micro data centers. These modules may soon start to serve as distributed units of an enterprise's data center, placed in branch offices, distributed manufacturing sites or locations serving clusters of small businesses. AOL is in an experimental phase with its module and hasn't stated how it plans to make long-term use of the units in its business. <P> <strong>4. DTrace Makes Data Centers More Resilient</strong> <P> Data centers of the future will have many more built-in self-monitoring and self-healing features. In 2013, that means DTrace, an instrumentation and process-triggered probe into how well a particular operating system and application work together. <P> DTrace is a feature that first came out in Sun Microsystems' Solaris in 2005, then gradually became available in FreeBSD Unix, Apple Mac OS X and Linux. The Joyent public cloud makes extensive use of it to guarantee performance and uptime through its SmartOS operating system, based on open source Illumos (a variant of Solaris).
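As a concrete illustration of the kind of probe described in the next paragraph, here is a minimal sketch that shells out to the dtrace command to count the system calls one process makes over 10 seconds. It assumes a host where DTrace is available (SmartOS/illumos, FreeBSD or Mac OS X) and sufficient privileges to run it; the wrapper is illustrative, not Joyent's tooling.

```python
# Minimal sketch: count the system calls one process makes over 10 seconds,
# using a standard DTrace aggregation. Assumes a host with dtrace installed
# (SmartOS/illumos, FreeBSD, Mac OS X) and the privileges to run it.
import subprocess
import sys
import tempfile

D_SCRIPT = """\
syscall:::entry
/pid == $target/
{
    @calls[probefunc] = count();
}

tick-10s
{
    exit(0);
}
"""

def trace_syscalls(pid: int) -> str:
    """Run the D script against one process; dtrace prints the aggregation on exit."""
    with tempfile.NamedTemporaryFile("w", suffix=".d") as script:
        script.write(D_SCRIPT)
        script.flush()
        result = subprocess.run(
            ["dtrace", "-q", "-s", script.name, "-p", str(pid)],
            capture_output=True, text=True, check=True,
        )
    return result.stdout

if __name__ == "__main__":
    print(trace_syscalls(int(sys.argv[1])))   # usage: python trace.py <pid>
```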
<P> Developers and skilled operators can isolate any running process that they wish and order a snapshot of the CPU, memory and I/O it uses, along with other characteristics, through a DTrace script. The probe is triggered by the execution of the targeted process. <P> Twitter has used DTrace to identify and eliminate a Ruby on Rails process that was slowing operations by generating back traces hundreds of frames long, tying up large amounts of compute power without producing beneficial results. <P> Jason Hoffman, CTO of Joyent, said in a recent interview that effective use of DTrace yields large amounts of data that can be analyzed to determine what goes wrong, when it goes wrong and how to counteract it. The Joyent staff is building tools to work with DTrace in this fashion and provide a more resilient cloud data center, he said. <P> <strong>5. New Renewable Energy Forms</strong> <P> The previously mentioned <em>New York Times</em> story panning the rapid build-out of data centers didn't consider a new possibility. The data centers of the future will not only consume less power per unit of computing done; in some cases they will also be built next to a self-renewing source of local energy -- yielding a net zero of carbon fuel consumption. There are many prime candidates for renewable power generation around the world from wind, solar, hydroelectric or geothermal sources, but most are too remote to become cost-effective suppliers to power grids. It's simply too expensive to build a transmission line to carry the amount of current they can generate to the grid. <P> But data centers built near such sources could consume the power by bringing the data they're working with to the site, instead of bringing power to the data. Such a site would require only a few underground fiber optic cables to carry the I/O of the computer operations to customers. Facebook found Prineville, Ore., a suitable site for large data center operations; Google and cloud service providers are believed to be building early models of data centers relying on self-renewing energy sources. Microsoft is experimenting with a data center fueled by <a href="http://www.informationweek.com/microsofts-new-data-center-the-straight/240142559">biogas from a wastewater treatment facility</a>. Some enterprises may experiment with micro data centers placed near a self-renewing energy source, such as a fast-flowing stream, sun-baked field or wind site. <P> Swift-flowing streams from glacier melt in Greenland and melting snows in Scandinavia have been chosen as sites for building prototypes of data centers at such self-renewing energy locations.

2012-12-21T09:19:00Z
Red Hat Buys ManageIQ, Gains Hybrid Cloud Tools
Red Hat will pay $104 million to fill out self-provisioning and performance management for its virtualization environment, and to gain multi-hypervisor capability.
http://www.informationweek.com/cloud-computing/infrastructure/red-hat-buys-manageiq-gains-hybrid-cloud/240145188?cid=RSSfeed_IWK_authors

Red Hat will acquire virtualization environment manager ManageIQ for $104 million to beef up the capabilities of Red Hat Enterprise Virtualization 3.1, its own virtualization management console. <P> Red Hat Enterprise Virtualization generates and manages virtual machines run by the KVM hypervisor found in the Linux kernel. ManageIQ was an early fosterer of management in the multi-hypervisor virtual environment.
In this <a href="http://www.informationweek.com/software/infrastructure/manageiq-says-virtual-machines-present-u/208700531">early 2009 look</a> at the company, ManageIQ CEO and co-founder Joe Fitzgerald talked about management problems in the more dynamic world of virtual machines. <P> The acquisition is aimed at making Red Hat a stronger player in the creation of multi-hypervisor, on-premises clouds capable of working with various public cloud services. ManageIQ workloads can be configured to run via Amazon Web Services EC2 or Microsoft Azure. Red Hat plans to support Rackspace Cloud, a user of the Xen hypervisor, in the near future, said Bryan Che, general manager of Red Hat's cloud business unit. <P> ManageIQ is already a Red Hat partner and has supported Red Hat's virtualization environment for 18 months. When Red Hat announced the 3.1 release of Enterprise Virtualization, ManageIQ announced support for 3.1 in its EVM Suite products at the same time. <P> "ManageIQ provides a strong set of operational capabilities, monitoring and introspection into virtual machines," Che said in an interview on the acquisition. <P> One of the things ManageIQ has specialized in, given its roots in virtual machine configuration, is inspection of virtual machines to discover their components and check whether the software being used is up to date -- the process Che referred to as introspection. <P> <strong>[ Want to learn more about Red Hat's enterprise virtualization strategy? See <a href="http://www.informationweek.com/hardware/virtualization/red-hat-speeds-up-open-source-virtualiza/240144200?itc=edit_in_body_cross">Red Hat Speeds Up Open Source Virtualization Race</a>. ]</strong> <P> ManageIQ's EVM Suite consists of components that provide end-user self-provisioning, chargeback and management features that round out the existing features of Red Hat Enterprise Virtualization. The suite can orchestrate different types of virtual machines, set and enforce policies for VMs and establish triggers for when additional VMs may be needed to deal with a particular workload. <P> The components of the EVM Suite (Control, Automate and Integrate) will be incorporated into the Red Hat product, said Che. <P> "ManageIQ provides deep monitoring capability, performance analytics and predictive analysis of what's happening across the infrastructure," said Che. "And it keeps a record of that information."
2012-12-20T09:30:00Z
CloudVelocity Seeks To Champion Enterprise Hybrid Cloud
Startup maps out applications, creates clones and moves them to a cloud data center.
http://www.informationweek.com/cloud-computing/platform/cloudvelocity-seeks-to-champion-enterpri/240145109?cid=RSSfeed_IWK_authors

CloudVelocity has launched the beta version of its system, which discovers an application, creates a "blueprint" of it from which a duplicate can be created, and then sends the clone to another location to run in a similar manner. <P> To do so, the CloudVelocity system must be able to identify the application's dependencies, such as the networks, storage and database systems used, then generate equivalents from the blueprint at the new location. <P> One of the most likely places a clone will be run, conceded CEO Rajeev Chawla, is Amazon's Elastic Compute Cloud. Thus, CloudVelocity is looking to become the agent that enables hybrid cloud computing: run your system in the enterprise, and have a clone standing by in the cloud -- ready to be fired up when traffic demands it. CloudVelocity will handle some of the details of the migration under the covers. <P> For example, many enterprise applications now run in a virtual machine under VMware's ESX Server. ESX Server uses a different file format than the Amazon Machine Images that run in EC2, but CloudVelocity will handle the details of the conversion with its One Hybrid Cloud platform. Likewise, it could convert the clone from VMware to the Xen hypervisor prevalent in the Rackspace cloud and move it there, although that capability won't appear until an undisclosed time in 2013. <P> <strong>[ Want to learn more about cloud developments over the past year? See <a href="http://www.informationweek.com/cloud-computing/infrastructure/cloud-computing-best-and-worst-news-of-2/240144531">Cloud Computing: Best And Worst News Of 2012</a>. ]</strong> <P> The goal is to be able to size up a variety of running applications and move them to the cloud of the owner's choice, though the Santa Clara, Calif., firm isn't yet near that goal. <P> Chawla, in an interview, termed it "a multi-tier application service," meaning the middleware, such as an application server, Web server and/or a database server, can all be packaged up together in the clone. "What you have locally runs virtually in the cloud," he said. <P> What about security? Chawla said that part of the task of One Hybrid Cloud is to grasp the security needs of the application in order to set the policies of a firewall and to select the right type of networking for the app's new location. <P> CloudVelocity launched the automation platform Dec. 12, the same day that it closed its first round of venture capital funding. Mayfield Fund advanced $5 million for the firm. <P> The beta software comes in both a developer edition and an enterprise edition. The former allows developers to quickly clone multi-tier applications and work on development with them; the latter includes the ability to migrate multi-tier apps to the cloud and create failover services via the application generated there.
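To make the "blueprint" idea described above more concrete, here is a minimal sketch of what such an application map might capture. It is a hypothetical model for illustration only, not CloudVelocity's data format or API; the class and field names are assumptions.

```python
# Hypothetical sketch of an application "blueprint": an inventory of the pieces
# a cloning tool would need to recreate a multi-tier app in another location.
# Illustrative only; not CloudVelocity's actual data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    name: str                               # e.g. "web", "app", "db"
    image: str                              # machine image or template to clone from
    depends_on: List[str] = field(default_factory=list)

@dataclass
class Blueprint:
    application: str
    tiers: List[Tier]
    networks: List[str]                     # subnets / firewall zones the app expects
    storage: List[str]                      # volumes or shares to replicate

    def boot_order(self) -> List[str]:
        """Start tiers only after the tiers they depend on."""
        started, order, pending = set(), [], list(self.tiers)
        while pending:
            ready = [t for t in pending if all(d in started for d in t.depends_on)]
            if not ready:
                raise ValueError("circular dependency among tiers")
            for tier in ready:
                order.append(tier.name)
                started.add(tier.name)
                pending.remove(tier)
        return order

if __name__ == "__main__":
    bp = Blueprint(
        application="storefront",
        tiers=[Tier("db", "db-image"),
               Tier("app", "app-image", depends_on=["db"]),
               Tier("web", "web-image", depends_on=["app"])],
        networks=["frontend-subnet", "backend-subnet"],
        storage=["db-volume"],
    )
    print(bp.boot_order())                  # ['db', 'app', 'web']
```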
<P> "We believe that CloudVelocity will have the same impact on public cloud adoption as VMware did on the adoption of server virtualization," said Navin Chaddha, managing director of the Mayfield Fund, in one of the announcement's higher-flying statements. CloudVelocity "will make public clouds look like internal data centers," he claimed. <P> That's hard to do when the application has dozens of dependencies that include a widening web of other custom enterprise applications. Most companies want to move to the cloud a discrete workload that doesn't need to call other applications frequently for data. <P> CTO Anand Iyengar said in an interview that One Hybrid Cloud had been adopted by <a href="http://www.lealtamedia.com">Lealta Media</a> to create failover production systems in the cloud, ready to take over in the event of a disaster in the data center. <P> "This will help ensure that our online business stays available within the AWS cloud," said Lealta VP of engineering Nitin Shingate in the announcement.

2012-12-19T09:06:00Z
Cloud Computing: Best And Worst News Of 2012
What were the key developments for enterprise cloud computing this year? Let's look at four big wins -- and three setbacks.
http://www.informationweek.com/cloud-computing/infrastructure/cloud-computing-best-and-worst-news-of-2/240144531?cid=RSSfeed_IWK_authors

With KPMG <a href="http://www.cloudpro.co.uk/saas/5106/kpmg-cloud-services-revenues-set-soar?utm_campaign=cloudpro_newsletter&utm_medium=email&utm_source=newsletter">predicting</a> a doubling of cloud services revenue over the next two years, it's a good time to point out where cloud computing gained strength over the past year in capabilities and services. At the same time, we should look at the cloud's weaknesses, as a cautionary tale for the IT teams that KPMG says are about to migrate production applications to the cloud. <P> Here are the top seven developments we saw in public cloud computing in 2012: the three biggest setbacks and the four biggest wins. We'll start with the setbacks. <P> <strong>Setback #1: Outages Plague Amazon And Others.</strong> <P> When you're trying to convince big companies to bet their business on cloud operations, the worst thing that can happen is for the cloud infrastructure you're thinking of using to suffer an unplanned outage. Amazon Web Services didn't have an outage in 2012 that rivaled the hit it took over the Easter weekend in April 2011, when multiple availability zones in one of its data centers went down.
However, Amazon was buffeted at its big East Coast complex by service outages on <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-defended-after-june-14-cloud-outa/240002207">June 14</a> and <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-outage-hits-netflix-heroku-pinter/240003096?queryText=%22Amazon%20outage%22">June 29</a>, due to power failures. <P> <strong>[ Check out the <em>InformationWeek</em> buyers' guide to IaaS: <a href="http://www.informationweek.com/cloud-computing/infrastructure/infrastructure-as-a-service-options/240009686?itc=edit_in_body_cross">Infrastructure-As-A-Service Options</a>. ]</strong> <P> The June 29 outage came after the region suffered a series of violent electrical storms, and the disruption was contained to one availability zone inside the data center. Amazon didn't say why the battery and generator backup systems of a supposedly highly available cloud didn't keep services running. The outage disrupted some customers for only a few minutes, but they included some of Amazon's most prominent: Salesforce.com's developer cloud Heroku, Netflix, and social networking firms Instagram and Pinterest. On Oct. 22, Amazon suffered an outage of its Elastic Block Storage service for a few hours, making it impossible for some companies to update their websites or retrieve data, even though the sites remained on display. <P> None of these incidents was a crippling event for Amazon customers or for Amazon's cloud infrastructure business. A few customers likely <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-cloud-outage-causes-customer-to-l/240003249">dropped the service</a>, as <a href="http://www.whatsyourprice.com">WhatsYourPrice</a> did after the online dating service got a flood of complaints during the two-hour June 29 outage. However, the ongoing problem of outages gives cloud skeptics ammunition against moving essential applications there, even as the likes of Amazon, Google, Rackspace and Microsoft win the debate around other performance factors, such as load balancing and database services. <P> The periodic outages suggest just what a complex beast infrastructure-as-a-service (IaaS) is. Its supposedly redundant, resilient architecture keeps coming up with unanticipated ways of crapping out during predictable events, or from the occasional human error. Cloud providers do have a better track record for uptime than the average enterprise data center, but cloud architects still have their work cut out for them to reduce the odds of high-profile outages that hurt customers and damage the public cloud's reputation. <P> <strong>Setback #2: Virtual Machine Snooping Threat Gets More Real.</strong> <P> It's still only a theory, but researchers this year published a disturbing example of <a href="http://www.darkreading.com/cloud-security/167901092/security/attacks-breaches/240012743/researchers-develop-cross-vm-side-channel-attack.html">one virtual machine spying on another</a> on the same physical server. The possibility of such a risk torments virtual server users, who thus far had been coached that the perimeters of virtual machines are hard boundaries that can't be breached. Cloud computing relies heavily on the multi-tenant virtual server host, where one physical server is used by multiple companies and customers. <P> It's important to note that no known breaches using this technique have occurred in the wild.
The researchers, at the University of North Carolina, the University of Wisconsin and the RSA unit of EMC, said it was difficult to execute such an attack even in a lab setting. <P> The researchers used a technique called a side-channel attack to spy on the use of the server's shared instruction cache over three to four hours. Doing so let them recover enough of a 457-bit private key to reduce the number of guesses needed to crack it to 10,000. That number is relatively small in the world of private key spying, because the task of trying all 10,000 remaining possibilities can be automated via systematic testing. The researchers used virtual machines generated under the Xen hypervisor, the same hypervisor generally used in the Amazon Web Services EC2 cloud and Rackspace Cloud. <P> The researchers, including Yinqian Zhang at the University of North Carolina at Chapel Hill and Ari Juels at the RSA Laboratories unit of EMC, said it was unlikely anyone had used their technique to infiltrate workloads on cloud servers. <P> Nevertheless, Juels told <em>Dark Reading</em>, "The upshot is that isolation in public clouds is imperfect and can potentially be breached. So highly sensitive workloads should not be placed in a public cloud. Our attack is the first solid confirmation of a long hypothesized attack vector." It remains true that it's extremely difficult to create a locked-down computer system that's still engaged with the outside world. The bad guys are sure to find new ways to test the limits as they confront this architecture of virtualized servers in a cloud. Given the widespread use of virtualization in the cloud, this research provides an unsettling demonstration. <P> <strong>Setback #3: Cloud Pricing Is Still A Mess.</strong> <P> Today, it's hard to tell much from most cloud computing bills. You might see that the bill is higher. You can even see from a billing code how much of the increase came from a certain business unit, such as the marketing department. But you can't correlate the marketing department's workloads to the charges in the bill, to know what activities are driving usage. <P> What you'd really like to do is compare your cloud bill at Rackspace to what your charges would be on Google Compute Engine, SoftLayer Technologies, Bluelock or Microsoft Azure. But the comparison requires <a href="http://www.informationweek.com/hardware/data-centers/clouds-thorniest-question-does-it-pay-of/240001236?pgno=3">hours of work at a spreadsheet</a>, isolating information and then trying to translate it into corresponding terms. <P> One of the trickiest parts is figuring out how each vendor defines its standard virtual CPU and what it charges for one. Amazon calls a virtual CPU the equivalent of a 2007 Xeon or Opteron core. Google also has a physical equivalent but uses a different chip (one of two threads in a Sandy Bridge Xeon core), leaving it to you to look up the differences and derive your own definition of where virtual CPU value lies. The documentation is confusing, referring to a logical core as 2.75 GCEUs (also referred to as GQs), or Google Compute Engine Units, which is also equal to about half of a Sandy Bridge core. So a logical core isn't a core at all. In most cases it has only a vendor-defined relationship to some physical computing unit, with vendors all over the map. <P> Vendors also offer different configurations of standard servers. The major vendors offer small, medium and large configurations, with several extra-large types thrown in the mix. But they each define the combination of memory, CPU and storage a little differently, making direct comparison difficult. You'd almost think they want it to be difficult.
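As an illustration of the spreadsheet arithmetic described above, here is a minimal sketch that normalizes each provider's hourly price by its own declared compute-unit rating. Every vendor name, rating and price below is a placeholder, not a real 2012 rate-card figure; the hard part the article describes -- deciding how the vendors' units map onto one another -- is left to the reader.

```python
# Hypothetical comparison of per-hour instance prices normalized to the
# vendor's own declared compute unit. All numbers are placeholders.

# label                   (declared compute units, price per hour in dollars)
offers = {
    "Vendor A small":      (1.0,  0.065),
    "Vendor B standard-1": (2.75, 0.145),   # a rating expressed in GCEU-style units
    "Vendor C medium":     (2.0,  0.120),
}

for label, (units, price) in offers.items():
    # Price per declared unit is only a first cut; it assumes the vendors'
    # units are comparable, which is exactly what the documentation obscures.
    print(f"{label}: ${price / units:.4f} per unit-hour")
```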
<P> What's also tricky is proving whether your use of a public cloud costs less than doing the same work in-house. Doing so takes good system accounting procedures and knowledge of what particular functions within the data center actually cost. <em>InformationWeek</em> columnist Art Wittmann has thrown some healthy skepticism on this, noting that CPU performance and drive storage capacity keep climbing at exponential rates. "The Moore's Law advantage is immense and isn't something you should give up lightly, but some cloud providers are asking you to do exactly that," Wittmann <a href="http://www.informationweek.com/news/cloud-computing/infrastructure/232601509">writes</a>. <P> There is help for deciphering pricing structures and figuring out how cloud computing fits into the IT budget. Some services will take the usage reporting stats provided by AWS and other vendors and use them to fill out a fuller picture. A warning: In some cases you have to give them your account identification information for them to be able to do that, which turns a lot of critical compute data about your business over to an outsider. Candidates to help include Apptio, Cloudability and Newvem, with migration and management partners such as CloudVelocity, Skytap and RightScale able to provide fuller usage pictures with billing information. <P> There's reason for hope in 2013. As I wrote this, <a href="http://aws.typepad.com/aws/2012/12/aws-detailed-billing-reports.html">Amazon Web Services evangelist Jeff Barr</a> posted a blog entry saying Amazon will now make available an hourly report itemizing each server instance. That's a big step forward in visibility into the monthly cloud bill. <P> <strong>Win #1: Customers Have More Providers From Which To Choose.</strong> <P> Amazon, Rackspace and others have new rivals. Providers of co-location data centers and hosted or managed services have found that the distance between what they have traditionally done and cloud computing, where customers do much of the management themselves and pay per use, is short. <P> SoftLayer Technologies added 30,000 servers to the 70,000 already in its 13 data centers to increase its IaaS offering. Smaller providers with strong ties to their customers have emerged, including Hosting.com, Bluelock and Peak 10, to name a few. Bluelock is one of many regional providers of IaaS whose appearance was spurred by the prevalence of VMware virtualization in the data center and the availability of VMware-compatible cloud management software for service providers. VMware officials in February said they had 94 <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-rapidly-expands-cloud-partner-net/232600354">such partners</a>. Joyent, with its SmartOS open source Solaris-derived IaaS running in Joyent data centers or installed in customers' data centers, is another alternative, with powerful funding backers. <P> There's also growing competition in the software used to manage infrastructure clouds. The OpenStack open source code project has assembled seven cloud computing modules governing servers, storage, image management, networking and operational reporting. Rackspace, Dell, and the SUSE Linux unit of Attachmate now offer OpenStack-based cloud services, with software vendors Nimbula, Piston and Nebula offering wares to build a private OpenStack cloud.
<P> <strong>Win #2: Price Wars Bring Down Cloud Storage Costs.</strong> <P> This was the best news of 2012, and it came toward the end of the year with genuine competition in cloud storage. In theory, cloud prices should trend downward because the power of the devices used in storage and computing has been increasing quickly, but it had been hard to see the theory in action -- until Dec. 1. That's when Amazon <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741">cut Simple Storage Service (S3) prices</a> by 24% to 27%. The next day Google <a href="http://www.informationweek.com/cloud-computing/infrastructure/google-counters-amazons-storage-price-cu/240142946">cut storage prices by 20%</a> to remain in step. It was the biggest drop in cloud storage prices since the services were created. <P> Savvis is among the rival cloud vendors wrestling with how much to compete on price for IaaS. Before the latest plunge, Savvis had been planning to match Google and Amazon on storage prices while offering more value-added features, such as checkbox offsite data replication, said director of storage product management P. J. Farmer. Such an option would provide protection against loss of data or systems in a natural disaster, such as Hurricane Sandy. "Now I'll have to think it over," said Farmer about matching those now-lower rates. Savvis' special features might let it hold prices a bit above Amazon's and Google's. But Farmer's dilemma shows the trend is both hard to escape and far from over. <P> <strong>Win #3: Telecom Provides Reliable Backup.</strong> <P> Why doesn't one cloud data center serve as a backup facility for another? That way, a natural disaster that robs one site of its ability to operate won't necessarily harm the businesses of its customers, who fail over to the backup center. <P> The concept is simple, but the execution is hard. Duplicate systems must be stored in each location. And at the moment of crisis, it's impossible to transfer all the data needed by the system in a few minutes. So preparations need to be made. Data must be replicated to the backup site on an ongoing basis, so that in the event of a disaster, only a few days' or a few hours' worth of data must follow the migration to the backup site. <P> With the launch of cloud services from data centers owned by telecom providers, however, the process got a little easier. Savvis, owned by CenturyLink, the third-largest U.S. telecom supplier, has tied its cloud data centers together with <a href="http://www.informationweek.com/cloud-computing/infrastructure/savvis-cloud-storage-takes-on-amazon-goo/240144085">CenturyLink private lines</a>. <P> That creates a potentially secure and reliable failover chain, if you should ever find you're on the East Coast with a hurricane bearing down, or on the West Coast after an earthquake. <P> It's a check-off option to replicate data created in one Savvis cloud center in another, provided you're already a CenturyLink telecom customer. In that case, the service is free over existing lines. Even if you're not, the possibilities of the new service are obvious: disaster recovery mechanisms that, if necessary, could be adapted on the fly by those with foresight and preparation. Savvis operates five data centers around the world where the service is an option.
<P> Such options are likely to become more prevalent as the telecom providers wade into cloud computing. Terremark, owned by Verizon, is a candidate to combine the strength of extensive private networking with cloud computing. <P> <em>InformationWeek</em> <a href="http://www.informationweek.com/cloud-computing/infrastructure/data-center-chains-in-cloud-promise-easi/231602424">predicted such chains would emerge</a> back in September 2011. <P> <strong>Win #4: More Cloud Server Types Available.</strong> <P> Amazon has led the way in setting the pace for types of virtual servers, offering micro, small, medium and large. It added extra-large and double extra-large to the list, then high-memory extra-large, double extra-large and quadruple extra-large. Concentrated CPUs arrived in the form of high-CPU medium and high-CPU extra-large. Cluster instances, graphics processing unit instances and high-I/O instances round out the list. <P> Each vendor is doing some variation of this service catalog. But there is a further development on the horizon. Instead of the cloud telling you how much memory and storage you'll get with your chosen server size, you will soon be able to tell the cloud. Being able to configure exactly the server you want, with appropriate I/O and networking as well as other components, will be the final phase of giving customers choice in self-provisioning their virtual servers.

2012-12-17T10:55:00Z
Bromium Secures Older PCs, Terminals Via 'Microvisor'
CTO Simon Crosby says the goal is to isolate untrusted tasks on Windows XP machines and thin clients as users bring outside code and content inside the enterprise.
http://www.informationweek.com/security/client/bromium-secures-older-pcs-terminals-via/240144489?cid=RSSfeed_IWK_authors

Bromium, the startup that isolates potentially intrusive end-user tasks in micro virtual machines, says it has extended the first version of its vSentry software to protect legacy Windows XP and terminal server desktops -- those frequently running on older versions of the Intel and AMD chip families. <P> vSentry was launched Sept. 19, and the vSentry 1.1 update, announced Dec. 11, begins to make it applicable to Windows XP, thin clients and terminal services devices. <P> The older chips are virtualization-unaware, so they lack the ability to realize they're dealing with a virtual machine. They thus can't use Bromium 1.0's capabilities to assert micro-hypervisor, or "microvisor," control over end-user tasks. Virtualization hooks built into modern Intel and AMD chips allow vSentry to "hardware-isolate each untrustworthy task." With the 1.1 release, vSentry has been extended to terminal services and Windows XP systems, even though the devices running them don't necessarily contain the most modern chips. <P> Sometimes these legacy desktops are under consideration for an upgrade to virtual desktop infrastructure -- being managed through central servers, with only displays running locally. That move allows users to stick with a familiar system but puts it on server hardware and under more automated management. <P> Author and researcher Shawn Bass wrote recently on the brianmadden.com virtualization website that virtual desktops and virtual desktop infrastructure are <a href="http://www.brianmadden.com/blogs/shawnbass/archive/2012/08/03/vdi-and-ts-are-not-more-secure-than-physical-desktops-part-2-of-5-centralization-helps-in-other-ways.aspx">no more secure than non-virtualized systems</a>.
There has been a presumption that they were somewhat safer because they run on central servers under IT control, with all data stored in the data center. But Bass says end users make use of too many public resources to avoid exposure to malware, and the virtual desktop is just as much at risk as its bare-metal counterpart. <P> <strong>[ Want to learn more about how Bromium takes a different approach to security? See <a href="http://www.informationweek.com/security/application-security/bromium-strengthens-desktop-security-usi/240008950?itc=edit_in_body_cross">Bromium Strengthens Desktop Security Using Virtualization</a>. ]</strong> <P> Bromium CTO Simon Crosby picked up on the theme in a blog written to announce the <a href="http://blogs.bromium.com/2012/12/11/vsentry-for-xp-rds-and-vdi-is-here/">release of vSentry 1.1</a> Dec. 11. <P> "Virtual desktops are vulnerable to exactly the same attacks as native PCs ... A compromised virtual desktop puts the attacker in an ideal location -- the data center -- from which he can further penetrate the infrastructure," said Crosby, echoing Bass' blog post. <P> The exposure may be greater than with standard desktops, Crosby continued, because once an intruder gains access to a virtual desktop, he's inside the data center and attached to many other networked desktops. "Since VDI desktops typically all appear on the same LAN segment (or VLAN), it is possible for attackers to spread laterally from one virtual desktop to another," he wrote. <P> What Bromium does about the risk is impose a new form of security, one that isolates untrusted activity in a micro virtual machine, then discards the VM when its stated purpose is completed. Tasks that might be isolated under a microvisor include rendering an email attachment, or rendering a consumer website with misleading download invitations embedded in its pages. <P> Bromium's vSentry detects the nature of the activity and spins up a micro virtual machine where the task must execute. If the task attempts to access files, the network, devices or the Windows clipboard, the hardware interrupts the execution and turns the task over to the microvisor, which then enforces policies specific to the task. <P> If what the code is attempting to do is outside the nature of the task, the attempt is written to a cache in that part of the virtual machine, making it appear to the attacker that everything is proceeding as planned. Meanwhile, the microvisor has isolated the attack and created an event log record of what was being attempted. <P> When the task is done, the virtual machine is flushed from the system, eliminating the malware involved as well. The microvisor has been given enough intelligence to take action when common forms of intrusion appear -- e.g., a request for a file that is not part of the task or an attempt to gain access to a network not involved in the task. "It's a step beyond sandboxing," said Crosby in an interview. <P> "If the task in a micro VM does something bad, we know there's only one task inside the VM. We'll be able to look inside and see an attack as it happened, see what was the intent. We'll be able to see where the attacker is from, what registry entries were modified, what networks were activated. Every task is a honeypot" in which to catch an attacker, Crosby added. <P> The idea of isolating untrusted tasks in a micro VM is a different approach to end-user security than trying to keep all malware out with firewalls and intruder detection. It assumes some malware will get through and seeks to isolate it from other systems where it might inflict its damage.
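The task-specific policy idea described above can be modeled in a few lines. The sketch below is a conceptual illustration only, not Bromium's implementation or API; the task types and allowed operations are invented for the example.

```python
# Conceptual model of per-task policy enforcement in the spirit described above:
# an isolated task gets only the operations its declared purpose needs, and
# anything else is denied and logged. Not Bromium's actual design.

POLICIES = {
    # task type               -> operations the task legitimately needs
    "render_email_attachment": {"read_attachment", "draw_to_screen"},
    "render_website":          {"fetch_from_origin", "draw_to_screen"},
}

event_log = []

def attempt(task_type: str, operation: str) -> bool:
    """Allow the operation only if the task's policy permits it; log the rest."""
    allowed = operation in POLICIES.get(task_type, set())
    if not allowed:
        event_log.append((task_type, operation))   # forensic record of the attempt
    return allowed

if __name__ == "__main__":
    print(attempt("render_email_attachment", "draw_to_screen"))   # True
    print(attempt("render_email_attachment", "open_network"))     # False, logged
    print(event_log)
```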
<P> Bromium is a young company with 75 people seeking to rapidly expand its capabilities beyond Windows Server, Windows 7 and 8, Windows XP and terminal services. A Macintosh version is in the works, along with vSentry versions for Android, BlackBerry and iPhone. With more computing being done on personal devices, end-user security is taking on increasing importance. Crosby said the microvisor approach puts handcuffs on an intruder and allows forensic experts to study him in a "cell" at their leisure. <P> But exactly how much work IT must do, compared to vSentry itself, to invoke policies governing tasks is not yet supported by user testimony in the press or on the Bromium website. Enterprise deals are priced at $100 to $150 per end user for a perpetual license, depending on volume, Crosby said. <P> vSentry 1.1 works with the virtual desktop infrastructure environments provided by VMware, Citrix Systems and Microsoft.

2012-12-14T09:05:00Z
CA's New CEO Faces Big Balancing Act
Michael Gregoire is expected to set a more entrepreneurial style as CA continues its transition from mainframe products to cloud and virtual systems.
http://www.informationweek.com/cloud-computing/infrastructure/cas-new-ceo-faces-big-balancing-act/240144421?cid=RSSfeed_IWK_authors

When the youthful Michael Gregoire takes over as CEO of CA Technologies on April 1, he will find he's inherited a large company in a state of transition. <P> CA is caught between breaking away from dependence on mainframe products, which still provide 60% of its revenue, and initiating new cloud and virtual infrastructure management products, which have not yet taken off. Riding these two horses in tandem may prove a tricky balancing act for any CEO. <P> <a href="http://www.ca.com/us/news/Press-Releases/na/2012/CA-Technologies-Names-Michael-P-Gregoire-Chief-Executive-Officer.aspx">Gregoire, 46, was selected by CA's board Thursday</a> to succeed William McCracken, who initiated the company's transition when he took office in January 2010. McCracken is stepping down at the age of 70, before the transition can be brought to full bloom. Gregoire is the former chairman, president and CEO of Taleo, the talent management software firm acquired by Oracle in February for $1.9 billion. Gregoire took Taleo from $78 million to $324 million over a seven-year period. He will join CA on Jan. 7. <P> He will have his work cut out for him. He need only turn to the latest quarterly earnings report to spot the trouble. <P> <strong>[ Want to learn more about how CA took an acquisition path to move into new product areas?
See <a href="http://www.informationweek.com/cloud-computing/infrastructure/fruit-of-ca-cloud-acquisitions-adds-up/231002897?itc=edit_in_body_cross">Fruit of CA Cloud Acquisitions Adds Up</a>. ]</strong> <P> On Oct. 25, McCracken told investors that second quarter fiscal 2013 "was short of our objectives in several areas. We are disappointed in our performance." Mainframe revenues were off 5% at $619 million, enterprise solutions were down 2% and services were down 1%. <P> License renewals, always a source of strength for CA, were running "in the high 80% range," when they usually come in close to 90%. In another part of the recorded remarks, McCracken said "the renewal portfolio (was) down year-over-year," without saying how much. North American revenues were down for the second quarter in a row. The one bright spot: sales of mainframe products, previously on a steady decline, were up 10%. New product sales, in contrast, were down 25%. <P> New products include the infrastructure management and cloud management products that CA has been adding to its catalog over the last three years through acquisition. For example, at CA World in 2010 it launched the Cloud Connected Management Suite. <P> More recently, in its second quarter, CA launched Nimsoft Monitor 6 for visibility into on-premises and in-the-cloud workloads, along with the LISA service-building product from its ITKO acquisition. LISA can provide a "mock-up" of the variety of systems a service needs to connect to, saving developers from needing physical resources, such as a mainframe, with which to test a new service. <P> CA has charged into virtualized server management with its CA Server Automation and CA Automation Suite. It has its own strong heritage in systems management based on CA Unicenter, and it boasts a cross-hypervisor approach that covers Microsoft's Hyper-V, Citrix XenServer and VMware ESX Server. <P> But it appears to be fighting stiff headwinds as VMware, Microsoft and Citrix Systems continue to roll up the virtualized part of the data center with their own product lines. Virtualization customers so far show a greater willingness to get their management tools from their virtualization vendor than from a traditional systems management supplier such as BMC, HP or CA. <P> CA moved into more robust virtual machine capacity management about the same time VMware did, through its late-2010 purchase of Hyperformix. VMware introduced vCenter Operations Management, which combines configuration, capacity and performance management for virtual machines and their hosts, in March 2011, with the second version released in January of this year. <P> Gregoire is coming out of a small, fast-growing company atmosphere to take the helm at CA, unlike McCracken, who was a 36-year veteran of IBM before taking over the reins. The contrast was also drawn by Art Weinbach, CA's chairman of the board, in a memo sent to employees Thursday upon the announcement of Gregoire's appointment. "Mike is a successful entrepreneur" and "has demonstrated an impressive ability to drive growth in both small and large companies." <P> "He knows how technologies such as cloud and SaaS are revolutionizing the value we deliver to customers," Weinbach continued. "Most of all, Mike has been a winner at everything he's done in his business and personal life. He is the ultimate competitor and has a fierce desire to succeed."
<P> To succeed, Gregoire will have to bridge the gap between an existing and largely mainframe customer base and an emerging set of companies that are going to be heavily invested in virtualization and cloud computing. He will need as many new customers as he can gather up to keep revenues flowing, while the mainframe users play catch-up with their cloud initiatives in the changing world around them. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a> <P> <em>Note: Story updated to clarify CA's license renewal rates.</em>2012-12-12T09:36:00ZIntel Launches Low Power Atom To Counter ARMIntel offers 6-watt chip for data centers to beat back Calxeda, other ARM designers using mobile chips to build servers.http://www.informationweek.com/hardware/processors/intel-launches-low-power-atom-to-counter/240144257?cid=RSSfeed_IWK_authorsIntel introduced a new low-power, 6-watt processor Tuesday as a possible replacement for the common 40- and 95-watt server processors that fill data centers worldwide and, in some cases, draw large amounts of electricity at low utilization. <P> The new lightweight, micro-module servers, as opposed to tower, rack-mount or even blade servers, run cooler and are more compact. Greater numbers can be packed in a rack and several servers can share a cooling fan, instead of each unit needing its own direct airflow. <P> Intel's Atom S1200 processor is a two-core system on a chip, with cores running at speeds between 1.56 GHz and 2.0 GHz. In other words, they lag the latest full-power Xeon chips; the current Sandy Bridge Xeon, for example, might run from 3.2- to 3.6-GHz clock speeds. But like Xeon, each Atom core is able to run two threads simultaneously, giving it greater instruction-processing capabilities than single-thread chips. <P> The Atom S1240 runs at 6.1 watts; the Atom S1220, 8.1 watts; Atom S1260, 8.5 watts. <P> <strong>[ Want to learn more about low-power servers? See <a href="http://www.informationweek.com/hardware/processors/calxeda-gets-55-million-to-fight-intel-s/240008791?itc=edit_in_body_cross">Calxeda Gets $55 Million To Fight Intel Servers</a>. ]</strong> <P> Intel is carefully positioning the Atom as a specialized processor, good for a high number of small tasks running in parallel workloads, not a general-purpose chip like other members of the x86 family. Many cloud applications make use of distributed processors, such as the big data handler Hadoop, and it's conceivable Atom will find its way into Hadoop clusters and similar work. Intel mainly wishes to avoid having Atom cannibalize sales of its high-power, high-end processors. But some server manufacturers, such as <a href="http://www.datacenterknowledge.com/archives/2012/05/31/dell-unveils-arm-based-server-ecosystem/">Dell</a> and <a href="http://www.informationweek.com/hardware/blades/hp-plans-low-power-servers-using-calxeda/231902072">HP</a>, have begun to produce microservers based on the competing ARM architecture. ARM designs produce low-power chips used in smartphones and mobile devices. <P> "The data center continues to evolve into unique segments and Intel continues to be a leader in these transitions," said Diane Bryant, VP and general manager of the data center and connected systems group at Intel, in <a href="http://intelstudios.edgesuite.net/121211_event/index.htm">a webcast</a> from Intel's Santa Clara, Calif., headquarters.
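<P> The density pitch behind those micro-module servers comes down to simple power arithmetic. A rough sketch of it follows; the 10-kW rack budget and the assumption that the CPU accounts for roughly a third of a server's total draw are illustrative figures, not Intel's. <pre>
# Back-of-the-envelope comparison of how many CPUs fit in a fixed rack power budget.
# All inputs are illustrative assumptions, not figures from Intel or the article.

RACK_POWER_BUDGET_W = 10_000   # assumed power available to one rack
CPU_SHARE_OF_SERVER = 0.33     # assume the CPU is roughly a third of a server's total draw

XEON_TDP_W = 95                # mainstream Xeon thermal design power cited in the article
ATOM_TDP_W = 6                 # Atom S1200-class system on a chip

def cpus_per_rack(cpu_tdp_w):
    """How many CPUs of a given TDP fit in the rack budget, assuming each CPU
    implies a server drawing cpu_tdp_w / CPU_SHARE_OF_SERVER watts overall."""
    server_draw_w = cpu_tdp_w / CPU_SHARE_OF_SERVER
    return int(RACK_POWER_BUDGET_W // server_draw_w)

print(cpus_per_rack(XEON_TDP_W))   # ~34 Xeon-class servers in the assumed rack budget
print(cpus_per_rack(ATOM_TDP_W))   # ~550 Atom-class nodes in the same budget
</pre> Under those assumptions, an Atom-class node is roughly an order of magnitude denser per rack, which is the trade Intel is offering in exchange for lower per-core clock speeds.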
<P> "We recognized several years ago the need for a new breed of high-density, energy-efficient servers ... We are delivering the industry's only 6-watt system on a chip that has key data center features," said Bryant. Two of those data center features are multi-threading and error-correcting code, where data taken from RAM is compared to a master copy to ensure the data about to be used is intact. <P> Atom is also able to run existing Linux and Windows and x86 applications without modification, a big plus, while ARM does not. <P> Advocates of green data centers say the low-power chips should supplant those with bigger energy appetites. But Intel took pains on its website to outline many areas where microservers would not necessarily be the right choice. "The microserver approach is not suited to workloads in many segments, such as high-performance computing, financial services, virtualized infrastructure, mission-critical computing and databases," said an Atom specification sheet. <P> Cloud computing suppliers, on the other hand, are looking to build the most efficient data centers possible for running discrete workloads, and Atom may play a role in future cloud construction. <P> Intel is shipping the Atom processor starting at $54 for 1,000 or more. It will seek to build a server ecosystem around it, and in addition to HP and Dell, cited Accusys, Huawei, Quanta, Supermicro, CETC, Inspur, Microsan and Qsan as producers of server designs incorporating the chip.2012-12-11T09:30:00ZRed Hat Speeds Up Open Source Virtualization RaceKVM-based Enterprise Virtualization 3.1 enables extra-large virtual machines and better live migration across more storage systems than before.http://www.informationweek.com/hardware/virtualization/red-hat-speeds-up-open-source-virtualiza/240144200?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947"><img src="http://twimgs.com/informationweek/galleries/automated/905/01_Cloud_tn.jpg" alt="7 Cheap Cloud Storage Options" title="7 Cheap Cloud Storage Options" class="img175" /></a><br /><div class="storyImageTitle">7 Cheap Cloud Storage Options</div><span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Red Hat last week enhanced its open source alternative to Microsoft and VMware, Enterprise Virtualization 3.1, with the ability to mount larger virtual machines and achieve live migration across more storage systems than before. <P> It also cited the continued high performance of its kernel virtual machine, or open source KVM, in its <a href="http://www.redhat.com/about/news/press-archive/2012/12/red-hat-expands-virtualization-collaboration-with-sap-via-new-certification">Dec. 5 announcement</a>. <P> Enterprise Virtualization 3.1 allows the creation of a virtual machine with up to 160 virtual CPUs and 2 TB of memory, said Red Hat's Chuck Dubuque, senior manager of product marketing. That's larger than the maximum 64 virtual CPUs and 1 TB of memory supported by Microsoft System Center's Virtual Machine Manager or VMware's vSphere 5.1. <P> The extra-large virtual machine that can run under Red Hat's 3.1 system means that "Enterprise Virtualization 3.1 can take the largest x86 boxes and virtualize them, those with eight CPU sockets with 10 cores each," said Dubuque. 
<P> <strong>[ Want to learn more about how KVM helps Red Hat compete on the server virtualization front? See <a href="http://www.informationweek.com/hardware/virtualization/vmware-should-worry-more-about-red-hat/231600699?itc=edit_in_body_cross">VMware Should Worry More About Red Hat</a>. ]</strong> <P> Furthermore, these extra-large virtual machines are running the efficient KVM hypervisor, which uses the memory manager and scheduler inside the Linux kernel. That's part of the reason why KVM holds 19 of 27 published <a href="http://www.spec.org/benchmarks.html#virtual">SPECvirt_sc2010 benchmarks</a> in hypervisor performance. SPECvirt is a general benchmark derived from VMware's earlier VMmark benchmark. <P> Results from a second independent benchmark, <a href="http://www.tpc.org/tpcvms/default.asp">TPC-VMS</a>, should be available soon and will show whether KVM performs similarly there. That benchmark comes from the Transaction Processing Performance Council. <P> Another new key capability is the 3.1 system's ability to live migrate virtual machines from one storage area network to another, using storage live migration. The feature matches what VMware and Microsoft can already do with their live migration systems, and it wasn't available in January's 3.0 release of Enterprise Virtualization. Red Hat's storage live migration "is in technical preview with 3.1 and will work," said Dubuque, but it is not a supported feature. It will be supported as it becomes generally available in a future release of the virtualization system, he said. <P> Red Hat added the storage live migration as a result of its October 2011 <a href="http://www.informationweek.com/storage/systems/red-hat-expands-storage-portfolio-with-g/231700317">acquisition of Gluster</a>, which created the open source GlusterFS storage file system. It was added to Red Hat Storage Server 2.0 in June this year, and allows Red Hat customers to scale out their storage systems across multiple storage domains. Enterprise Virtualization's ability to use Storage Server 2.0 capabilities reflects the first step in integrating the two, Dubuque said. That is, Red Hat plans to let customers take one x86 server and allow it to work as both a virtualization compute node and a storage system. <P> Storage live migration is used when a large amount of data held on virtual disks must follow a migrating virtual machine; the equivalent feature is called Storage vMotion in the VMware world. By including storage live migration, the virtual machine becomes independent of its underlying hardware and can migrate from rack to rack, across the data center or even between data centers. <P> Dubuque said 3.1 allows live storage migration of a virtual machine's virtual disks from rack to rack and across the data center, but its current level of technical management doesn't allow transfers from one data center to another. In this release, storage live migration is "within a data center or where there is a significantly high bandwidth/low latency network between data centers," such as a campus-area network, he said. <P> Despite its limits, the beginning of storage live migration in Red Hat Enterprise Virtualization shows how the open source alternative is attempting to keep pace with its commercial counterparts. <P> Dubuque also noted that Enterprise Virtualization could previously access virtual images and data stored over iSCSI, Fibre Channel, NFS or local storage. In the 3.1 version, it can access any storage managed by Storage Server 2.0, which means a wider set of options.
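<P> Red Hat did not detail the mechanics, but the KVM/libvirt layer underneath Enterprise Virtualization already exposes live migration, including a mode that streams non-shared disk images along with the running guest. A minimal sketch using the libvirt Python bindings is below; the host and guest names are placeholders, and this illustrates the hypervisor-level mechanism rather than the RHEV management workflow described in the article. <pre>
# Sketch: live-migrate a KVM guest and copy its local disk images to the target host.
# Assumes libvirt-python is installed and SSH access between hosts; names are placeholders.
import libvirt

src = libvirt.open("qemu:///system")                       # source hypervisor
dst = libvirt.open("qemu+ssh://dest-host.example/system")  # destination hypervisor

dom = src.lookupByName("guest-vm")

# VIR_MIGRATE_LIVE keeps the guest running during the move;
# VIR_MIGRATE_NON_SHARED_DISK copies the disk images as well (no shared SAN required).
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_NON_SHARED_DISK
dom.migrate(dst, flags, None, None, 0)  # dname, uri and bandwidth left at defaults
</pre>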
<P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-12-11T09:30:00ZAmazon Details Cloud Bills By The HourAWS uses operational data to deposit hourly usage information in a customer's S3 storage account. Customers can request alerts when thresholds are crossed.http://www.informationweek.com/cloud-computing/infrastructure/amazon-details-cloud-bills-by-the-hour/240144463?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-myths-about-cloud-computing/240124922"><img src="http://twimgs.com/informationweek/graphics_library/175x175/money_cloud.jpg" alt="7 Dumb Cloud Computing Myths" title="7 Dumb Cloud Computing Myths" class="img175" /></a><br /> <div class="storyImageTitle">7 Dumb Cloud Computing Myths</div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Amazon Web Services has added a rich level of billing detail for customers who want to know more about what's in their monthly bill for the Elastic Compute Cloud (EC2) and related services. <P> In the past, customers have complained that an Amazon bill was little better than the utility bill they received at home -- one lump sum without any way of figuring out who within the organization was responsible for increases or the lion's share of the charges. <P> Now Amazon is making it possible to receive an hourly report on each server, giving customers an unprecedented amount of detail in the billing information delivered directly to them. <P> Jeff Barr, Amazon Web Services evangelist, in a blog directed to customers Dec. 13, said Amazon had received many customer complaints about the lack of detail in bills and was responding with "a <a href="http://aws.typepad.com/aws/2012/12/aws-detailed-billing-reports.html">new hourly grain view</a> of your AWS usage and charges." The hourly view was detailed down to individual servers and the services of a particular availability zone in which they were running. <P> <strong>[ Want to learn how one user ran up a bill of $23,000 when he expected to use $2,300 worth of service? See <a href="http://www.informationweek.com/cloud-computing/infrastructure/clouds-big-caveat-runaway-costs/240001508?itc=edit_in_body_cross">Cloud's Big Caveat: Runaway Costs</a>. ]</strong> <P> Customers receiving consolidated bills covering many accounts, for example, an IT manager supervising employee use of cloud services, will get hourly details with "unblended rates," separating out on-demand instances from reserved instances. That gives a customer a view of how much of their needs were being met by Amazon's lowest-cost option -- reserved instances -- versus the more expensive on-demand instances. <P> Earlier this year, Amazon made the detailed information available to customers in wholesale form, leaving it to customers to select data out of an undifferentiated stream that contained 25 fields of information on each server. Customers could programmatically select out the data they wanted with which to build reports. <P> With the new detailed billing reports, AWS gives customers a view of how much time services were used per server. The reports are sent to a bucket in their S3 storage account, adding eventually to the monthly storage bill if they're not periodically downloaded or cleaned out. 
<P> "We've had a number of requests for better access to more detailed billing information," Barr acknowledged. "We do our best to listen and to learn and to make sure that our plans line up with what we are hearing," Barr wrote. <P> Amazon also estimates what a bill is expected to be and stores Amazon CloudWatch monitoring statistics on actual usage. Customers can set up thresholds for email alerts to be sent to them when usage goes above averages or the estimates.2012-12-10T09:32:00ZSavvis Cloud Storage Takes On Amazon, GoogleSavvis Symphony Cloud Storage marries data center management expertise with CenturyLink's networking savvy to simplify data replication and increase disaster recovery reliability.http://www.informationweek.com/cloud-computing/infrastructure/savvis-cloud-storage-takes-on-amazon-goo/240144085?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947"><img src="http://twimgs.com/informationweek/galleries/automated/905/01_Cloud_tn.jpg" alt="7 Cheap Cloud Storage Options" title="7 Cheap Cloud Storage Options" class="img175" /></a><br /><div class="storyImageTitle">7 Cheap Cloud Storage Options</div><span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE -->Savvis is moving more convincingly into standard cloud services. The managed hosting provider launched Savvis Direct as a beta compute service Monday, following on its Dec. 3 launch of Symphony Cloud Storage, its long-term storage equivalent to Amazon's S3. <P> Savvis comes to the party with some of its own particular bells and whistles. To provide cloud services, it's marrying its experience in managing large data centers with CenturyLink's networking savvy. CenturyLink is the third-largest carrier in the United States, and that means if you're already a CenturyLink business customer, you can move your data between CenturyLink cloud centers at no additional charge on the private network. <P> There could be advantages to that. With security the number one concern about public cloud data centers, Savvis can offer its telecom services users a chance to access cloud servers over private lines. That could easily prove an attractive option versus relying on the public Internet for archiving sensitive business data, as say, Microsoft Azure or Amazon Web Services customers do. <P> Likewise, with a strong telecommunications network undergirding them, Savvis' five cloud data centers around the world are interlinked by lines that allow easier data replication services. Customers can set up workloads and store the data they produce in Symphony Storage at their compute site, then replicate it to another cloud location. The second location might be across town -- Savvis operates redundant data centers at its cloud locations -- or it might be across the country. It has cloud centers in Santa Clara, Calif.; Sterling, Va.; Toronto, Canada; Slough, U.K.; and Singapore. <P> <strong>[ Want to learn more about how Savvis is plunging into cloud services? See <a href="http://www.informationweek.com/global-cio/interviews/silicon-valley-needs-to-get-out-more/240143972?itc=edit_in_body_cross">Savvis Challenges Amazon With On-Demand IaaS</a>. 
]</strong> <P> When setting up your Symphony storage, "you can pick one to replicate to via a checkbox procedure, or you can go to all five" for those who demand the highest guarantee of data survivability, said P. J. Farmer, Savvis' director of cloud storage product management. <P> With Amazon Web Services, you can also choose a second availability zone and achieve greater data survivability and availability, also. Amazon doesn't specify availability zones by location, but a second zone for a workload in its popular northern Virginia data center complex, U.S. East-1, is still in northern Virginia. So are zones 3, 4 and 5. That's a drawback when a storm with an 800-mile wide front comes ashore on the East Coast. <P> A user in Toronto can replicate to Slough, or a user in Sterling can replicate to Santa Clara, noted Farmer, which would tend to get the data out of a natural disaster area. <P> Savvis uses this networking capability to add more services to its cloud service, still in a limited customer offering. For example, Savvis has "geo-intelligent routing" that can respond to a customer data replication choice without the customer needing to set network parameters or make destination adjustments. At the same time, it has a global namespace system that captures metadata on stored objects. The metadata is shared, in what functions as a federated directory, across the five cloud data centers. That means any employee with access rights can call up the employer's data by seeking the correct file name. The system resolves the location of the requested file or files and delivers them -- from multiple locations, if necessary. <P> The global namespace works through one URL, where the requestor's location determines the first stop for his or her request, i.e., the nearest Savvis cloud center. "A request for a file in Germany will be routed to Slough, regardless of where the file is located," said Farmer. At its first stop, the global namespace system at that data center resolves the actual location of the data, retrieves it and sends it back to the user. <P> "From a mobile application development perspective, this has a lot of benefit," noted Farmer. "You could distribute an application around the world using one URL. You would not need to regionalize it," she said. <P> In addition to international mobile apps, there could be many other uses. "You can do things you couldn't do before. You have to think what your use case is going to be," she said. An international hotel chain, for example, might wish to distribute its room directory to cloud centers on a regional basis, serving those closest and most likely to need a room the fastest. <P> Symphony Storage is built on an EMC Atmos storage platform and is compatible with the Atmos API framework, she said. <P> So what does such a storage system cost? Farmer said she had planned to set pricing at a competitive level with Amazon and Google storage services, until a price war broke out Nov. 29 when Amazon Web Services announced it would be cutting S3 storage prices <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741">from 24% to 27%</a>. Google responded the next day, announcing similar cuts. <P> "I planned to be quite competitive on pricing until [Nov. 29]. Now I'll have to think it over," she said. Savvis is bringing added value to storage with its features and it may not follow Amazon and Google as far as they are willing to go down the price reduction path, she said. 
<P> Symphony Storage is in limited availability to select customers until early in the first quarter of 2013, when it will become generally available. Savvis says 30 of the top 100 companies in the Fortune 500 are already its cloud customers. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-12-07T13:30:00ZWhy Amazon's Cloud Business Is Like Kindle SalesAmazon Web Services is cheap upfront, and Amazon only gets paid if people keep using it, CEO Bezos says.http://www.informationweek.com/cloud-computing/infrastructure/why-amazons-cloud-business-is-like-kindl/240144013?cid=RSSfeed_IWK_authors <!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397 "><img src="http://twimgs.com/informationweek/galleries/automated/909/01_cloud_gurus_tn.jpg" alt="10 Cloud Computing Pioneers" title="10 Cloud Computing Pioneers" class="img175" /></a><br /> <div class="storyImageTitle">10 Cloud Computing Pioneers </div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Amazon Web Services (AWS) had a coming out party of sorts when it staged its first partner and customer event, dubbed Re: Invent, and held in Las Vegas Nov. 27-29. The most revealing moments came when Amazon CTO Werner Vogels took the stage Nov. 29 to pitch questions at CEO Jeff Bezos. <P> In creating its infrastructure-as-a-service (IaaS) unit in 2002, Amazon did something both risky and bold. It concluded it had a new type of infrastructure behind its retail operation and further services could be created on it. Better yet, it could create a new service on an infrastructure optimized to work with end users. There were no cloud users back then. There were hosted service providers and managed service providers and colocation facilities. But letting people commission and control infrastructure on an hourly basis, paying as they go with a credit card, was unique. <P> Vogels started his conversation with Bezos by pointing out that the last time they shared a stage was at the announcement of Amazon's second-generation tablet, the Kindle Fire. This was a soft pitch, of course. Bezos picked up the cue and drew an analogy between the $159 Kindle and IaaS. <P> "One of the unusual things we do in the Kindle device business is we sell our hardware at near breakeven. We make money when people use the device, not when they buy the device ... If I buy the device and put it in my desk drawer and never use it, then Amazon doesn't deserve to make any money," said Bezos. <P> <strong>[ Want to know how Amazon drives down computing service prices? See <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741?itc=edit_in_body_cross">Amazon Web Services Slashes Storage Prices</a>. ]</strong> <P> Amazon Web Services uses the same principles, Bezos continued, closing the loop. "It's a pay-as-you-go service. We are not incented to get people to overbuy hardware and operate at low utilization rates," he said, taking a shot at traditional hardware businesses. <P> Furthermore, such an approach is a good discipline for any business because it will stay tuned to what customers actually need, not what they can produce. 
"Our point of view is if we can arrange things in such a way that our interests are aligned with our customers', then in the long term that will work out really well for customers and it will work out for Amazon," Bezos added. <P> In the same vein, he threw in a plug for how <a href="http://www.informationweek.com/hardware/handheld/amazon-revamp-puts-apple-on-notice/240006920">Amazon's tablet business</a> is different from Apple's: "Likewise, it causes us to have the right kinds of behavior. If we're not making money when people buy the device, then we don't need people to be on the upgrade treadmill. We have people using 5-year old Kindle Ones and we're perfectly happy with that. <P> I find this analogy strained but still revealing of the core of AWS. Selling a hardware device is different from providing a cloud service, although the analogy works somewhat when the hardware device becomes a platform for the same vendor to sell services. That part's OK. The main point of these comments is that Amazon is not making money until its customers decide what has value to them in the form of an Amazon service. That's a major differentiator between Amazon and HP, Dell or IBM. And the fact that it sells hardware at breakeven is a major differentiator between Amazon and Apple. Point established. <P> Next, Vogels solicited Bezos to address the notion of staying focused on essential business services. "I'm always amazed that you talk about the notion of flywheels," he prompted. <P> To Bezos, a business must be built around building out long-term services, not chasing short-term profits with short-lived devices. "I frequently get the question, 'What's going to change in the next 10 years?' I almost never get the question, 'What's <em>not</em> going to change in the next 10 years?' I submit that second question is the more important of the two -- because you can build a business strategy around the things that are stable over time. We know the customers want low prices. We know that's going to be true 10 years from now. I can't imagine a customer coming up to me and saying, "I just love Amazon. I just wish the prices were a little higher. I just wish delivery would take a little longer." Invest in those things today (that will) pay dividends 20 years from now." <P> Bezos again closed the loop to show how this core Amazon ideal is part of AWS. "At AWS, the big ideas are also pretty straightforward. It's impossible for me to imagine 10 years from now someone would tell me, 'I love AWS but I just wish you were a little less reliable. I love AWS but I wish you would raise prices ... I wish you would innovate and improve APIs at a slower rate.' <P> "The big ideas in business are often very obvious but it's very hard to maintain a firm grasp of the obvious. But if you can do that, if you can continue to spin up those flywheels, and put energy into those things, like we do with AWS, over time, you build a better and better service for your customers," he said. <P> Concentrating on core competencies is not a new idea, but what Bezos said next sets up what must be the tension between Amazon.com and its AWS business unit. If you are setting up a new business, you can't be sure where you'll find the next "flywheel." 
Having gotten a new style of retail going online, Amazon.com had a chance to generalize upon it and move beyond retail into specific services based on the lessons learned. It wasn't part of the Vogels-Bezos discussion, but to do so, Amazon followed the path recommended in <em><a href="http://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244">The Innovator's Dilemma</a></em> by Clayton Christensen: it created a separate unit, isolated it from the main organization and rewarded it for its ability to pursue an opportunity other than that of the main business. <P> "First of all, innovation is a point of view. You have to actually select people who want to innovate and explore. Being a pioneer, an explorer, isn't for everybody. Some people wake up in the morning and they get their energy, the thing they think about in the shower is who are three companies we're going to kill this year. That's the conqueror mentality, it's a competitor-focused mentality instead of a customer-focused mentality. <P> "When you attract people who have the DNA of pioneers, you build a company of like-minded people who want to invent, and that's what they think about when they get up in the morning ... If you're the right kind of person, you like to invent, you like change, that's just fun ... over the last 18+ years, we've attracted a bunch of people who like to do it. <P> "There are a couple other things that are not as fun. One of them is, you have to have a willingness to fail. You have to have a willingness to be misunderstood for long periods of time. If you do something in a new way, people are initially going to misunderstand it relative to the traditional way. There'll be well-meaning critics who genuinely want the best outcome, but they're worried ... And there'll also be self-interested critics who have a vested interest ... If you never want to be criticized, then don't do anything new," Bezos said. <P> In this segment, he was directly answering Wall Street critics who have put pressure on Amazon by asking when the company will stop investing so much for the future and realize more profit. Bezos doesn't like this criticism, but he can live with it, given the nature of his company. He responded further by saying innovation is the only way to continue to discover what's useful to customers. At the end, he said, don't chase a wave, hoping to gain profits. Instead, do what you're passionate about, and wait for the wave to come to you. <P> "Successful invention is invention that customers care about. It's actually relatively easy to invent new things that customers don't care about. For successful invention, you have to increase your rate of experimentation. You have to think about, how do you go about organizing your systems, your people, all of your assets ... to increase your experimentation. If you double the experiments per year, you're going to double your inventions," Bezos said. <P> What he didn't say was the AWS experiment has its own expenses, and it's possible in doubling experiments to also double expenses without gaining offsetting revenue. AWS is doubtlessly a revenue producer. I've heard estimates that it will produce $1-$1.5 billion in 2012, and there's a chance revenue will grow rapidly from this takeoff point. But Amazon's reports don't give us a glimpse of the relationship between revenue and expenses. <P> Nevertheless, Bezos said repeatedly that he runs a high-volume, low-margin business.
And to do so, you must practice certain disciplines that some of Amazon's higher margin competitors -- Oracle, Microsoft and Google spring to mind -- don't necessarily have to practice. <P> "Let's put it this way, in old days, you might be best advised to put 30% of effort and energy into building a product or service and 70% into shouting about that service. That has flipped around. The balance of power is shifting from the providers of offerings to the consumers of offerings. I believe that's a great thing for society. I believe it's even a great thing for the providers of services, provided the companies acknowledge it and embrace it. If you have a business model that relies upon your customers being misinformed, or let's just say incompletely informed, you better start working on changing your business model," Bezos said. <P> Brave words from a founder who is proud of the company he's created. <P> It remains to be seen how far Bezos and Vogels can drive the AWS experiment, but it's clear the battle lines have been drawn with the old, high-margin software companies, such as Oracle and Microsoft, and the new high-margin Web services companies, such as Google. And throw in Apple as a high-margin device maker as well. <P> If Amazon really is discovering the long-lasting flywheels of the coming era, its revenues will more than match its commitments. And competitors will find it tough to move in on a low-margin firm that is constantly realigning its goals with its customers. This was Amazon's first big, public statement of how its new cloud computing business is a natural outgrowth of its overall culture and approach to business. <P> Bezos seemed to be saying that AWS, far from being a stepchild, is a long-term extension of what the parent company is all about. And despite the critics, Amazon will find ways to make it work. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-12-05T09:45:00ZEMC, VMware Team To Woo Cloud DevelopersCan Pivotal Initiative partnership successfully merge scattered open source code with proprietary code for next-gen cloud apps?http://www.informationweek.com/cloud-computing/platform/emc-vmware-team-to-woo-cloud-developers/240143834?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397 "><img src="http://twimgs.com/informationweek/galleries/automated/909/01_cloud_gurus_tn.jpg" alt="10 Cloud Computing Pioneers" title="10 Cloud Computing Pioneers" class="img175" /></a><br /> <div class="storyImageTitle">10 Cloud Computing Pioneers </div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> EMC and VMware are forming a new joint business unit, the Pivotal Initiative, with EMC chief strategy officer Paul Maritz as its head. It will include 1,400 employees, 600 from VMware and 800 from EMC. Maritz is the former CEO of VMware. <P> The joint entity will take a combination of open source code and proprietary code owned by the two companies and combine it into a software stack designed to engage developers building next-generation cloud applications, a goal that's in the long-term interests of both companies, according to a blog posted on the initiative. 
<P> Included will be a key piece of VMware's vCloud Suite called vFabric, which provides application data caching and deployment capabilities; also, the Spring lightweight Java development framework and VMware's open source Cloud Foundry development hosting service. They can be combined with the analytics capabilities of EMC's Greenplum open source data warehouse. At the same time, other parts of the VMware's vCloud Suite, such as vCloud Director for orchestrating pooled resources, remained off the agenda. <P> The initiative also gets its name from <a href="http://www.informationweek.com/development/database/emc-marries-social-networking-and-big-da/232602911">Pivotal Labs</a>, acquired by EMC last May. The San Francisco firm provides software to manage the agile development process. If EMC can help developers build applications with modern techniques, it's in a better position to move out of its data storage role into one more oriented toward enterprise use of data. <P> <strong>[ Want to learn more about VMware's acquisition of an OpenFlow company? See <a href="http://www.informationweek.com/storage/virtualization/nicira-acquisition-is-vmwares-smartest-m/240004378?itc=edit_in_body_cross">Nicira Acquisition Is VMware's Smartest Move Yet</a>. ]</strong> <P> But it's not clear from a single blog posting what EMC and VMware have in mind. They issued only a vague statement about its ultimate goal: "We are experiencing a major change in the wide-scale move to cloud computing, which includes both infrastructural transformation and transformation of how applications will be built and used, based on cloud, mobility and big data," wrote Terry Anderson, VP of corporate global communications in comments posted to <a href="http://blogs.vmware.com/console/2012/12/the-pivotal-initiative.html">The Console Blog</a>, an outlet for the VMware executive team. <P> It's not news that enterprise IT is experiencing "a major change" from cloud. The announcement left unsaid how the new unit will navigate the change differently from the ways EMC and VMware have so far. It was also hard to explain why such a seemingly far-reaching announcement came from a little-known source. Instead of a posting by Maritz, VMware CEO Pat Gelsinger or CTO Steve Herrod, it was made by Anderson, who had put her name to few previous announcements. <P> EMC isn't putting its top guns on the record behind Pivotal yet. Asked for more detail, a spokesman said only, "The company has no additional comment at this time." It's possible Maritz's more direct, authoritative voice will weigh in during the second quarter of 2013, when "a specific operational structure [still] to be determined" will be established, Anderson's blog stated. <P> "The Pivotal Initiative signals an entirely new level of focused investment and organization to maximize the impact that these assets can have for customers and EMC's path forward," Anderson's statement continued. <P> One question is whether the new entity is being formed to produce products and become a profit center or satisfy some other need -- for instance, lead in open source development crucial to the company's future, OpenStack, or conduct R&D. Given the headcount, it appears it will have to become a profit center. <P> Also, a new entity that separates VMware from potentially competing open source projects, such as OpenStack, would give everybody a little more breathing room. There's little doubt vSphere 5, vCenter Operations Management and vCloud Suite remain proprietary products. 
If you have a jump on others in building the <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-launches-software-defined-data-ce/240143004">software-defined data center</a> why give it up? <P> The Pivotal Initiative, on the other hand, could provide a more open, outward-facing developer platform that includes, as part of its credo, contributing to projects. It might have noted open source spokesmen, such as Spring leader Rod Johnson, former Hyperic CEO Javier Soltero or Martin Casado, composer of the OpenFlow spec and CTO of VMware's Nicira networking unit. No reason why the initiative's software couldn't maintain well-defined links and integration ties to the proprietary products. <P> Still, part of the initiative is clearly product-oriented. Some open source projects in VMware have yet to be productized or monetized, such as <a href="http://www.informationweek.com/big-data/commentary/cloud-computing/platform/vmware-cloud-foundry-plays-disruptive-ro/232900186">Cloud Foundry</a>. If services can allow analytics to be built into a next-generation application, then project leaders are more likely to consider an enterprise version of Cloud Foundry. The new unit may assemble a free stack of next-generation, cloud-application-building code, add for-fee services, then bring out an enterprise version that is supported and does more -- for a price. <P> Most likely, the Pivotal Initiative will be designed to assemble tools and software for the rapid development of applications that work either inside or outside the enterprise data center. The new breed of cloud applications is expected to operate in a hybrid-manner, VMware-virtualized environment interacting with Amazon Web Services or an OpenStack public cloud. <P> In September, VMware joined the OpenStack Foundation, even though two of the foundation's board members voted no, considering VMware's interests to be contrary to the open source code project's. VMware's Casado says the two are not at odds. VMware will want the virtualized data center to interoperate <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">with sources outside it</a>, he said. <P> EMC employees moving into the unit will come out of its Greenplum and Pivotal labs organizations. Employees from VMware will come from its vFabric, Spring, Gemfire, Cloud Foundry and Cetas units. <P> EMC and VMware are two companies with rapidly expanding horizons. They are potentially positioned to take advantage of the changes underfoot, but they need to harness both proprietary and open source energies to get to where they want to go. Few big companies have executed this high-wire act. We'll soon see how well they can do it.2012-12-04T10:45:00ZSalesforce: Every Developer A SaaS VendorSalesforce.com's Heroku unit is making it easier for Salesforce app developers to become SaaS vendors on their own.http://www.informationweek.com/cloud-computing/software/salesforce-every-developer-a-saas-vendor/240143732?cid=RSSfeed_IWK_authorsHeroku, the <a href="http://www.informationweek.com/cloud-computing/software/salesforcecom-seeks-foothold-throughout/240007747">Salesforce</a> unit that sits atop the Amazon Web Services cloud, is taking a big step toward becoming a platform for software-as-a-service suppliers. <P> But most of the budding SaaS suppliers that use Heroku will not be competing with Salesforce. 
On the contrary, they'll provide custom applications and add-on services to the Salesforce product line, with Salesforce opening the door to authorized developers and their work. <P> <a href="http://www.informationweek.com/cloud-computing/platform/heroku-adds-database-service-that-scales/240009994">Heroku</a> now offers 85 such services from its hosting environment, a total that's been built up over three years. But it's got another 80 in the pipeline that will become available in 2013 as well, giving Salesforce.com customers a wide range of potential add-on applications and services, said Oren Teich, COO of Heroku, in an interview. <P> Heroku already offers software developers selected services, such as Amazon's Relational Database Service and Bonsai full-text search, produced by One More Cloud. As of Tuesday, companies that produce add-on services that work on Heroku will find offering their wares greatly simplified. Heroku is adding a billing system that tracks add-on usage, bills the user and sends the add-on producer a check at the end of the month. <P> <strong>[ For more on how Salesforce.com is tapping into the energies of independent developers, see <a href="http://www.informationweek.com/cloud-computing/platform/salesforcecom-wants-developers-to-think/240142676?itc=edit_in_body_cross"> Salesforce.com Wants Developers To Think Big.</a> ]</strong> <P> Heroku will ask add-on service producers to submit information about their code so the company can present it to potential customers. From long experience with developers using its platform, Heroku knows how to seek specific information about a new service's distinguishing features. In addition to listing different fee plans, Heroku also plans to avoid what Teich calls a "common stumbling block in offering SaaS" by describing each plan thoroughly enough for potential customers to understand the differences between them. <P> Add-on service suppliers "are good technical people who don't necessarily understand marketing," Teich noted. Heroku's goal is to help them get their wares to market. "We have a whole new self-service interface [the Heroku Add-On Provider Portal]," he said. "It gives add-on providers the ability to manage full lifecycle billing and other aspects of making their code available." <P> Heroku already serves as a development platform for thousands of developers using Ruby, Java and other Java Virtual Machine-based languages, as well as Node.js, the server-side version of JavaScript. Some of Heroku's most frequently invoked services are written in Node.js. Heroku started out as a Ruby developer's platform but <a href="http://www.informationweek.com/cloud-computing/platform/salesforcecoms-heroku-expands-to-host-j/240007577">expanded its charter</a> in September to include Java and languages running in the JVM. <P> Over 2.5 million applications have been produced on the Heroku platform so far. While Heroku is still in its early stage, Salesforce.com is seeking to capitalize on Heroku's activity by supplying its Canvas API to authorized developers. This allows their applications to share data with Salesforce.com CRM, HR and Chatter applications. The applications can exchange requests for information and execute joint transactions, with the results displaying in the Salesforce.com user interface as if the application were running inside the Salesforce data center. In fact, it's on Amazon EC2 but integrated with the existing Salesforce.com app.
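<P> Back on the add-on side, what the new portal asks of a provider is, at bottom, a small web service that Heroku can call to provision, change and deprovision the add-on for each customer application, plus the plan metadata used for billing. A minimal sketch of such a provisioning endpoint, written in Flask, is shown below; the URL paths, payload fields and response shape are assumptions for illustration, not Heroku's published provider API. <pre>
# Sketch of an add-on provider's provisioning endpoint, as the portal/billing model implies.
# Paths, payload fields and the response shape are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)
RESOURCES = {}  # provisioned add-on instances, keyed by a provider-side id

@app.route("/heroku/resources", methods=["POST"])
def provision():
    payload = request.get_json(force=True)  # e.g. {"plan": "basic", "heroku_id": "app123"}
    resource_id = str(len(RESOURCES) + 1)
    RESOURCES[resource_id] = {"plan": payload.get("plan", "basic")}
    # Config vars returned here would be merged into the customer app's environment.
    return jsonify(id=resource_id,
                   config={"MYADDON_URL": "https://api.myaddon.example/" + resource_id})

@app.route("/heroku/resources/<resource_id>", methods=["PUT"])
def change_plan(resource_id):
    RESOURCES[resource_id]["plan"] = request.get_json(force=True)["plan"]
    return jsonify(ok=True)

@app.route("/heroku/resources/<resource_id>", methods=["DELETE"])
def deprovision(resource_id):
    RESOURCES.pop(resource_id, None)
    return "", 200

if __name__ == "__main__":
    app.run(port=5000)
</pre> The billing system Teich describes would sit on Heroku's side of such calls, metering usage per provisioned resource and paying the provider at month's end.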
<P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a> <P> <i>Join Cloud Connect for a free webcast with "Cloudnomics" author Joe Weinman. Cloudnomics is a new way to discuss the benefits of private clouds. Many have focused on the cost reduction possibilities while others have focused on business agility. However, private clouds can play a strategic role, as well. The <a href="http://event.on24.com/r.htm?e=543922&s=1&k=03050B993D09D35972131EDAF5030AD5&partnerref=jdpl">Cloudnomics</a> webcast happens Dec. 12. (Free registration required.)</i>2012-12-03T09:30:00ZVMware Launches Software-Defined Data CenterVCloud Suite 5.1 combines three key VMware products to create an early version of a software-defined data center.http://www.informationweek.com/cloud-computing/infrastructure/vmware-launches-software-defined-data-ce/240143004?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-myths-about-cloud-computing/240124922"><img src="http://twimgs.com/informationweek/graphics_library/175x175/money_cloud.jpg" alt="7 Dumb Cloud Computing Myths" title="7 Dumb Cloud Computing Myths" class="img175" /></a><br /> <div class="storyImageTitle">7 Dumb Cloud Computing Myths</div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> VMware announced Monday that it has integrated its virtualized environment software with its cloud management products, putting into one box the software that generates virtual machines, applies cloud management, and introduces performance and capacity management to the resulting environment. <P> Dubbed VMware vCloud Suite 5.1, it combines the latest version of three previously separate products: vCenter Operations Management Suite 5.6 for virtual systems management; vFabric Application Director 5.0 for deploying applications in a virtual environment; and vCloud Suite 5.1 for managing pooled resources as a cloud operation. <P> The combination is available at a price of $4,995 per processor. VCenter Operations and vFabric Application Director also remain available as separate products with their separate price points. <P> The combination might also be dubbed the "software-defined data center in a box." VMware has adopted the notion of the software-defined data center as the best explanation for what its combined software aims to achieve. A software-defined data center is one where major changes and adjustments may be accomplished in live operations through rules and processes captured in software. <P> <strong>[ Want to learn more about VMware's notion of a software-defined data center? See <a href="http://www.informationweek.com/hardware/virtualization/vmware-cto-software-defined-data-centers/240000054?itc=edit_in_body_cross">VMware CTO: Software-Defined Data Centers Are Future</a>. ]</strong> <P> One of the main thrusts of the new combination is simplify and speed the deployment and availability of a new application in an on-premises, virtualized environment, said Shahar Erez, director of products for application management, in an interview. <P> Launching an application "remains a very fragmented process. Every time you want to deliver software-as-a-service, you have to start from scratch," noted Erez. 
If a user requests an application, IT staffers have to discover the right operating system and middleware with which to package it, with additional networking, servers and storage needed to implement the package. The result is often "a month-long deployment," he said. <P> One of VMware's lesser-known products, vFabric Application Director, is meant to overcome those complexities and delays. It includes blueprints of recommended combinations of applications, operating systems and middleware. More than 100 blueprints are available to vFabric customers in a new exchange launched to support use of the vFabric, the VMware Cloud Applications Marketplace. <P> Combinations representing best-practice configurations are available there, supplied by 30 independent software vendors and systems integrators, as well as VMware. For example, such decisions as determining the right amount of cache memory and setting load balancing will have been determined by the experts who built the blueprint. Riverbed, Zend and Oracle's MySQL unit combined to produce an auto-scaling and load-balanced database blueprint, Erez said. <P> Recommended configurations for Hadoop deployments are also included in the marketplace. <P> The other major piece of the suite, vCenter Operations 5.6, supplies systems management to virtual machines. It combines configuration management, performance management and capacity management in one system. <P> With the vCloud Suite 5.1 bundle, VMware is also taking its first steps toward embracing cloud environments outside its own, including offering to manage those external workloads. An application, once chosen from a blueprint, can be deployed into a VMware-virtualized environment on premises, a VMware-based cloud, such as Savvis or Bluelock, or into the Amazon Web Services public cloud. Even when the workload becomes an external one, it can be managed from the vCloud Suite's management console, he said. The capabilities are derived from <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-buys-dynamicops-accelerates-softw/240003110">VMware's DynamicOps acquisition</a> in July. <P> Both VMware and Amazon have taken steps to make their environments more interchangeable. AWS announced several months ago that it would accept and run workloads configured as VMware ESX Server virtual machines, even though it has its own Amazon Machine Images, a virtual file format based on the open source Xen hypervisor. <P> The base level of vCenter Operations, known as the Foundation version, is also available as a free, separate download to existing VMware vSphere customers. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-11-30T11:10:00ZGoogle Counters Amazon's Storage Price CutsGoogle cut storage prices by 20% at start of week, and then slashed another 10% in response to Amazon's price drops. 
Is there a storage price war going on in the cloud?http://www.informationweek.com/cloud-computing/infrastructure/google-counters-amazons-storage-price-cu/240142946?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947"><img src="http://twimgs.com/informationweek/galleries/automated/905/01_Cloud_tn.jpg" alt="7 Cheap Cloud Storage Options" title="7 Cheap Cloud Storage Options" class="img175" /></a><br /><div class="storyImageTitle">7 Cheap Cloud Storage Options</div><span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE -->On Nov. 26, Google cut its standard storage prices by 20% and announced a new form of storage, Durable Reduced Availability, available for $.07 a month per gigabyte. <P> Google's Durable Reduced Availability service offers the same level of latency as Google's regular storage, but availability might be delayed as the service undergoes a peak of demand. DRA might be suitable for data backup that can wait briefly or for batch processes that could be rescheduled, said Google's director of new products Shailesh Rao in an interview. <P> On Nov. 29, Google announced an additional 10% price cut for the new service (and for its standard storage service as well). At that rate, the price for a GB of Google DRA should plunge below 2.5 cents by Feb. 1. <P> Of course, that isn't going to happen. But cloud users are now enjoying a spate of price cuts in the most competitive services of cloud computing. Storage customers bring providers a step closer to gathering more workloads for a built-out cloud infrastructure, which both Amazon Web Services and Google are interested in doing. Google's 10% price drop for a service introduced just three days earlier came on the heels of Amazon's Nov. 28 S3 storage price reductions, which averaged 25%. <P> <strong>[ Amazon announced storage price changes at its Re:Invent show in Las Vegas. Read more at <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741?itc=edit_in_body_cross"> Amazon Web Services Slashes Storage Prices</a>. ]</strong> <P> "We are committed to delivering the best value in the marketplace to businesses and developers looking to operate in the cloud. That's why today we are reducing the price of Google Cloud Storage by an additional 10%, resulting in a total price reduction of over 30%," said Google product manager Dave Barth in a blog post on Thursday. <P> Effective Dec. 1, Amazon's first GB of standard S3 storage per month drops from $.125 to $.095. In comparison, Google's first GB of standard storage drops from $.095 to $.085 per month on Dec. 1, maintaining what Google sees as its price advantage. <P> Amazon also offers Reduced Redundancy storage for non-critical data. In this plan, data that is lost can be easily reproduced from a copy maintained by the owner on premises or at some other location. Most forms of cloud storage create at least three copies of a data set. (It's not clear how many copies of data are used in Amazon's Reduced Redundancy storage plan, but it could be fewer than three.) Now priced at $.093 per GB, Reduced Redundancy storage will drop to $.076 on Dec. 1. <P> Google's Durable Reduced Availability storage is not directly comparable to Amazon's Reduced Redundancy, since Google's service operates with a different set of attributes. 
But it's the closest service in price, dropping from $.085 to $.063, again maintaining an upfront price advantage. <P> Both Google and Amazon made similar changes throughout their pricing structure, but the changes are stepped up at different GB intervals, making a direct comparison difficult. <P> "This seems to be a bit of a pricing war: Google already cut prices by 20 percent earlier this week and then followed up with an additional 10 percent cut today to stay under Amazon's new pricing," observed Jon Brodkin, writing for <a href="http://arstechnica.com/information-technology/2012/11/amazon-google-slash-cloud-storage-prices-more-than-25/">Ars Technica</a>. <P> <i><a href="http://www.cloudconnectevent.com/santaclara/?_mc=DIWEEK">Cloud Connect</a> returns to Silicon Valley, April 2-5, 2013, for four days of lectures, panels, tutorials and roundtable discussions on a comprehensive selection of cloud topics taught by leading industry experts. Use priority code DIWEEK by Jan. 1 to save up to $700 with Super Early Bird Savings. Join us in Silicon Valley to see new products, keep up-to-date on industry trends and create and strengthen professional relationships. Register for <a href="http://www.cloudconnectevent.com/santaclara/?_mc=DIWEEK">Cloud Connect</a> now. </i>2012-11-30T09:59:00ZAmazon's Vogels Challenges IT: Rethink App DevAmazon Web Services CTO says promised land of cloud computing requires a new generation of applications that follow different principles.http://www.informationweek.com/cloud-computing/infrastructure/amazons-vogels-challenges-it-rethink-app/240142928?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397"><img src="http://twimgs.com/informationweek/galleries/automated/909/01_cloud_gurus_tn.jpg" alt="10 Cloud Computing Pioneers" title="10 Cloud Computing Pioneers" class="img175" /></a><br /> <div class="storyImageTitle">10 Cloud Computing Pioneers </div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> In an unusually direct challenge to enterprise IT departments, Amazon Web Services CTO Werner Vogels advocated a new set of rules for enterprise application development. "I'm not going to go too much Old Testament on you," said the would-be Moses of cloud computing during a keynote at the <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741">Amazon event Re:Invent</a>, "but here are some tablets that have been given to me." <P> Tablets, as in stone etched with commandments. Instead of ten, he had four tablets -- Controllable, Resilient, Adaptive and Data Driven -- each with some command lines written on them. As he expounded upon each, he made it clear that Amazon itself followed these rules in building out its infrastructure as a service on which the Amazon.com retail operation now runs. <P> "Thou shalt use new concepts to build new applications," he decreed as an opening command line for "controllable." <P> "You have to leave behind the old world of resource-based, data centric thinking," he said. Until recently, IT has conceived first of the physical resources that it would need to build a system, then let those resources constrain what it did next. <P> <strong>[ Did Amazon learn some software lessons the hard way? 
See <a href="http://www.informationweek.com/storage/disaster-recovery/amazon-cloud-outage-cleanup-hits-softwar/231300538?itc=edit_in_body_cross "> Amazon Cloud Outage Cleanup Hits Software Error</a>. ]</strong> <P> If an application is developed to run on AWS' Elastic Compute Cloud, then it will run in virtual servers and there's no limit to how fast it can scale up, regardless of the initial resources allocated to it. Instead of conceiving of physical servers, think of "fungible software components" that can be put to work on jobs of different scale and traffic intensities. <P> Such software needs to be "decomposed into small, loosely coupled, stateless building blocks" that can be modified independently without disrupting other building blocks, Vogels said. The idea has been around since the advent of Web services and service-oriented architecture in the enterprise, but Vogels dusted it off and gave it renewed urgency. <P> Automate your application deployment and operational characteristics: that was roughly a second commandment, which at one point he expressed as, "Let business levers control your system." <P> By that, he was saying an application should run itself, following parameters set to meet changing levels of business need. "As an engineer, you do not want to be involved in scaling," he said. Furthermore, if you want scalability to always work, "it is best if there are no humans in the process." Business rules can determine appropriate response times from an application for customers, and automated processes, such as fresh server spin-up and load balancing, can see that they are followed. <P> "Architect with cost in mind" was another rule. Vogels said he's good at choosing the algorithm that's most fault tolerant, but he "has no clue" how to determine which algorithm is going to be the lowest-cost one to run over a long period. For effective, long-term operations, some projection of the cost of running the code must be included in the decision on which code is used. <P> "Protecting your customers is your first priority," he asserted, saying too few applications build in protections that are now cost-effective with today's processing power. "If you have sensitive customer data, you should encrypt it," whether it's in transit or standing still. "Integrate security into your application from the ground up. If firewalls were the way to go, we'd still have moats around cities," he warned. <P> The adaptive, resilient and data-driven tablets also got some air time. "Build, test, integrate and deploy continuously," Vogels urged. That's a widely shared but hard-to-implement DevOps idea that's also been around a long time, but Amazon practices it with a vengeance. It deploys new code every 11 seconds, and has made a maximum of 1,079 deployments in an hour, he said. <P> In its early days, AWS phased in a new deployment, initiating it on one server at a time until it had been spread through a rack, then the next, and so on. "We were good at it but it was a very complex and very error-prone process," he noted. Amazon has automated its new code deployments with open source Chef and other tools managing configuration, version management and rollback. What used to be a one-machine-at-a-time process is now done in blocks of 10,000 machines at a time. <P> If something goes wrong, "rollback is one single API call" to return a block of machines to their former state, he said. <P> Applications need to be instrumented so that they report constant metrics as they run (a minimal sketch of that loop appears below).
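<P> Vogels didn't show code, but the two ideas in play here (business levers driving scale, and applications reporting metrics as they run) can be sketched together. In the illustrative Python below, the application records request latencies and a policy object, parameterized by a business-level response-time target rather than a server count, decides whether to add or shed capacity. The class names, the 300-millisecond target and the simulated traffic are assumptions for illustration, not Amazon's implementation.
<pre>
# Illustrative sketch only: the application reports latency metrics as it runs,
# and a business-level rule (a response-time target), not a human, decides when
# to add or shed capacity. Names and numbers here are assumptions, not AWS code.
import random
from collections import deque

class LatencyMonitor:
    """Keeps a rolling window of observed request latencies, in milliseconds."""
    def __init__(self, window=1000):
        self.samples = deque(maxlen=window)

    def record_latency(self, ms):
        self.samples.append(ms)

    def p95(self):
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))] if ordered else 0.0

class ScalingPolicy:
    """'Let business levers control your system': scale on a response-time target."""
    def __init__(self, target_p95_ms=300, min_servers=2):
        self.target = target_p95_ms
        self.min_servers = min_servers
        self.servers = min_servers

    def adjust(self, observed_p95_ms):
        if observed_p95_ms > self.target:
            self.servers += 1      # spin up another instance, no human involved
        elif observed_p95_ms < 0.5 * self.target and self.servers > self.min_servers:
            self.servers -= 1      # shed idle capacity to keep cost in mind
        return self.servers

if __name__ == "__main__":
    monitor, policy = LatencyMonitor(), ScalingPolicy(target_p95_ms=300)
    for _ in range(500):                     # simulated requests
        monitor.record_latency(random.gauss(280, 60))
    print("p95 latency:", round(monitor.p95(), 1), "ms")
    print("servers after adjustment:", policy.adjust(monitor.p95()))
</pre>
<P> In production the metric would flow to a monitoring service and the scaling action would be an API call to the provider, but the shape of the loop, and the absence of a human inside it, is the point Vogels was making.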
That information can be relayed back into a business management system that monitors whether users, including customers, are being served. The well-placed reporting mechanism "is the canary in the coal mine" and warns of imminent system slowdown, as some service or component in an application starts to show signs of stress. <P> "Don't treat failure as an exception. There are many events happening in your system," some of which "will make you need to reboot," he warned. <P> AWS had to rebuild the S3 storage service to allow it to better cope with disk drive failures, which happen "many, many times a day" in the AWS storage service, said Alyssa Henry, VP of storage services, after being called to the stage by Vogels. <P> S3 was launched in 2006, an early cloud service, built to hold 20 billion objects "with the wrong design assumptions about object size," Henry said. Rebuilt around a simpler, more resilient design, the new architecture is one of the reasons Amazon was able to announce a 24% to 27% reduction in storage prices Wednesday, she said. <P> "Instrument everything, all the time," added Vogels. "Put everything in [server] logs. You need to control the worst experience your customers are getting," and analytics can be periodically applied to server logs to see what events lead to slowdowns and how they may be prevented in the future. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-11-29T09:11:00ZAmazon Web Services Slashes Storage PricesAt its first developers conference, Re:Invent, Amazon features customers like Netflix and NASDAQ and disses its software firm rivals.http://www.informationweek.com/cloud-computing/infrastructure/amazon-web-services-slashes-storage-pric/240142741?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397"><img src="http://twimgs.com/informationweek/galleries/automated/909/01_cloud_gurus_tn.jpg" alt="10 Cloud Computing Pioneers" title="10 Cloud Computing Pioneers" class="img175" /></a><br /> <div class="storyImageTitle">10 Cloud Computing Pioneers </div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Amazon, at its first developer conference Re:Invent, announced plans for significant price reductions. Amazon Web Services Senior VP Andy Jassy said the cloud supplier will cut storage prices 24% to 27% on Dec. 1. <P> Soon after making that announcement, Jassy welcomed Netflix CEO Reed Hastings, who had a big smile, to the auditorium stage in Las Vegas. Netflix is a heavy user of Amazon servers to stream video to customers. Hastings said his firm in 2008 used a million hours of streaming time a month. It now uses a billion hours, he said, along with heavy use of Amazon's Simple Storage Service. The S3 price cut was welcome news for his balance sheet, he told Jassy. <P> Jassy said the price cuts would be implemented across all nine of Amazon's regional data centers. He said Amazon has frequently cut prices because it intends to be a high-volume, low-margin business. That's unlike software suppliers used to 60% to 80% gross margins that are now "inserting the word 'cloud' into their old product lines," he said.
A traditional software supplier can't convert to a cloud supplier overnight because it doesn't understand the high-volume, low-margin business model, he claimed. <P> "You have to be careful who is telling you what. If you look at what they're actually offering, it doesn't have any of the characteristics that should be associated with cloud" computing, he said. Those characteristics include low upfront or capital expense, elasticity and an ability to flexibly provision servers as needed, without increasing your own manpower. <P> AWS announced a major round of price cuts on core services last March that ranged from 5% to 20%. Some EC2 Reserved Instance usage, where customers make a down payment on long-term use of a virtual server, was cut 37%. At the time, VP Adam Selipsky said price cutting <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-brings-price-cutter-mentality-to/232601304"> is part of the company's DNA.</a> Since then, it's announced several other individual cuts to relational database services (14%) and Elasticache (16%). <P> AWS will also have a new, <a href="http://www.informationweek.com/software/information-management/amazon-debuts-low-cost-big-data-warehous/240142712">low-cost, petabyte-scale data warehouse service, Redshift</a>, available sometime in 2013, he noted. The service is now available to select customers in preview stage. <P> <strong>[ Want to learn more about Amazon's upcoming Redshift Data Warehouse service? See <a href="http://www.informationweek.com/software/information-management/amazon-debuts-low-cost-big-data-warehous/240142712?itc=edit_in_body_cross"> Amazon Debuts Low Cost, Big Data Warehousing</a>. ]</strong> <P> Hastings provided insight into something that Amazon is working on. VMware has been able to move a running virtual machine around the data center for several years, he noted. "That's extremely demanding to do at scale. But once you can move a running, live instance, you can search for the best compute server offering. There would be tremendous efficiency gains. Good luck on that and hopefully you can come up with it in the next couple years," he told Jassy as he shook hands to depart. The Amazon exec made no effort to deny such a service is part of Amazon's long-term roadmap. <P> Ted Myerson, senior VP of global access services for NASDAQ, was another Amazon customer pulled into the spotlight. He said NASDAQ is building a regulatory-compliant environment that runs on Amazon's EC2 and will be offered as FinCloud to financial services firms and exchanges. <P> Financial services companies constantly "retool and rework their legacy systems to keep them in compliance," he said. Amazon is offering an alternative: building financial services business logic atop an already compliant environment, where data is captured and stored and where transactions are subject to audit. <P> "Why did we do this on Amazon?" Myerson asked rhetorically. "Very simply, Amazon Web Services has the experience." <P> In 2003, Amazon.com, AWS' parent company, was a $5.2 billion Web business. Each day Amazon adds enough server capacity to power such a business, Jassy said. Amazon Simple Storage Service, or S3, has 1.3 trillion objects stored on it and routinely processes 800,000 concurrent requests, Jassy said. <P> AWS operates a data center complex in northern Virginia, a U.S. government-specialized service, and two data centers on the West Coast, with one in Washington and one in northern California.
Outside the U.S., it operates data centers in Tokyo, Singapore, Sydney, Australia, and Dublin, Ireland. It also operates one in Brazil. Most data centers have more than one availability zone, or independent sub-data center with its own power supply and communications. It offers 25 availability zones in all. <P> The multiple and widespread AWS data centers constitute an advantage to a global business that decides to adopt its infrastructure as a service. It can place applications in locations around the world close to concentrations of its customers, Jassy noted. <P> CA Technologies and BMC were two partners named as adding virtual machine management capabilities to their systems management tools to encompass both internal and external IaaS workloads. Amazon is working with such partners to make it easier for customers to implement hybrid cloud computing, where related workloads may be running both on-premises and in the external, public cloud, as demand dictates.2012-11-28T10:15:00ZSalesforce.com Wants Developers To Think BigWith Identity and Canvas plans, Salesforce can begin to provide services to enterprise applications that were thought to have been totally separate from CRM.http://www.informationweek.com/cloud-computing/platform/salesforcecom-wants-developers-to-think/240142676?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397 "><img src="http://twimgs.com/informationweek/galleries/automated/909/01_cloud_gurus_tn.jpg" alt="10 Cloud Computing Pioneers" title="10 Cloud Computing Pioneers" class="img175" /></a><br /> <div class="storyImageTitle">10 Cloud Computing Pioneers </div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Cloud vendors understand how important developers are to their future, and they compete for developer loyalties by integrating open source tools with their environments. Red Hat's <a href=" http://www.informationweek.com/cloud-computing/platform/red-hat-unwraps-openshift-enterprise-at/240142602">OpenShift Enterprise</a> is a leading example, as well as VMware's Cloud Foundry and Spring Framework. <P> To expand the horizons of software-as-a-service (SaaS), Salesforce.com needs to attack the same problem in its own way. Salesforce.com is providing the programming means to reach out from a focused set of CRM and human resources applications to connect with other applications in the enterprise and bring useful information from them into Salesforce apps. <P> Its primary instrument for doing so is Salesforce Identity, which can build a federated directory of enterprise users, then manage user access across many applications from the existing Salesforce application environment. The service is still in preview status and scheduled to become available as a supported service in winter 2013. Without this building block, Salesforce would be hard-pressed to know how to connect to any non-Salesforce application in its customers' environments. <P> Further in the background, but probably more important in the long run, is the Force.com Canvas service. It provides a software development kit to enterprise developers that lets them tie a new application, built either inside the data center or on the external Heroku cloud, into the Salesforce user interface.
Canvas provides JavaScript libraries that can be invoked by a developer to tie application services produced in different languages into the Salesforce user experience. Until now, customization was done largely in Salesforce's proprietary Apex on the Force.com platform. <P> <strong>[ Want to learn more about Canvas, introduced at Dreamforce in September? See <a href="http://www.informationweek.com/cloud-computing/software/salesforcecom-seeks-foothold-throughout/240007747?itc=edit_in_body_cross">Salesforce.com Seeks Foothold Throughout The Enterprise</a>. ]</strong> <P> In the early stages, it's hard to see exactly where the new capabilities will lead, but active customers have ways of thinking up uses for them. Salesforce is increasingly committed to expanding the capabilities of SaaS so that it is not stuck on a limited set of branded applications. From its position of strength in CRM and social networking, Salesforce is beginning to provide services to enterprise applications that were originally thought to have been totally separate from CRM. <P> Or at least, it's possible to conclude that as you come away from a conversation with Quinton Wall, director of developer evangelism, and Chuck Mortimore, VP of product management for Salesforce Identity. Both argue that service additions and custom enhancements can now take place on a much larger scale than before with SaaS. <P> Legacy applications still tend to be isolated systems inside the enterprise. Salesforce is trying to be a centralizing force for them as it provides an Identity service, easily activated by anyone with a Salesforce account, and combines personal information from Active Directories and LDAP directories around the enterprise. "We can drive more of a streamlined identity management system, a single-sign-on system," Mortimore said, if enterprise IT moves to using identity management as a service in Salesforce.com data centers, instead of exclusively relying on its own Active Directories. <P> Canvas has much broader implications. It could be used to connect a new custom service, built on the Force.com platform, to an existing Salesforce CRM application. But it could also be used to connect a custom service built in a non-Salesforce-generated language, such as Java or Ruby, to a Salesforce application. <P> Canvas allows the Salesforce-savvy developer to take an existing isolated application -- Wall likes to refer to them as "ghost town" applications: no one goes there anymore -- and think about "how can we take it and put it in a social network context ... make it an active part of the conversation." <P> The whole application wouldn't suddenly become part of the Salesforce application suite but some action of the application could be imported into the Salesforce context. One example is connecting the procurement system so that each purchase indicates how many fellow employees have purchased the same product. <P> That could previously be done inside the enterprise through a heavily engineered, point-to-point connection between a procurement app and the Salesforce.com app. Why not make it simply a Web service integration to a customer's Salesforce.com database, with an ability to call up the information the next time it's relevant? <P> Canvas provides JavaScript libraries that know how to connect to existing Salesforce SOAP or REST APIs, as in the sketch below.
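<P> To make the procurement example concrete, here is a minimal, hypothetical sketch of the outside application's half of that exchange: once Salesforce hands the external app an OAuth 2.0 access token, the app calls back into the standard Salesforce REST API and folds the answer into what it renders inside the Salesforce page. The instance URL, the token placeholder, the API version and the Purchase__c custom object are illustrative assumptions, not details Salesforce provided for this story.
<pre>
# Hypothetical sketch: an external app, surfaced in the Salesforce UI via Canvas,
# calls back into the Salesforce REST API with an OAuth 2.0 bearer token to ask
# how many colleagues bought the same product. Instance URL, token, API version
# and the Purchase__c object are illustrative assumptions.
import requests

INSTANCE_URL = "https://na1.salesforce.com"   # assumed org instance
ACCESS_TOKEN = "REPLACE_WITH_OAUTH_TOKEN"     # handed to the app by Salesforce

def fellow_purchases(product_id):
    """Run a SOQL COUNT() query through the Salesforce REST query endpoint."""
    response = requests.get(
        INSTANCE_URL + "/services/data/v26.0/query",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        params={"q": "SELECT COUNT() FROM Purchase__c "
                     "WHERE Product__c = '%s'" % product_id},
    )
    response.raise_for_status()
    return response.json()["totalSize"]

if __name__ == "__main__":
    print("Colleagues who bought this item:", fellow_purchases("01t000000000001"))
</pre>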
To the end user inside the familiar Salesforce.com user interface, the information seems to come from inside the Salesforce environment, along with all the other application services, even though an outside system is now the source, said Mortimore. <P> "We can take the legacy applications and expose them inside the Salesforce user interface," he said. One of the standards relied upon by Canvas to do this is OAuth 2.0, which uses the developer's private key to authorize an outside developer's application to work with a Salesforce data center application. <P> The example is simple enough, but many more possibilities will emerge as Salesforce users contemplate what they might build on the Heroku cloud, now owned by Salesforce. There they can use many languages to construct a new application or service, then link it into their Salesforce suite. They can likewise add social networking features to the legacy app, extracting specific information and exposing it in the enterprise social graph. <P> From its name, it's possible to say Canvas is meant to provide Salesforce.com users with a broader palette with which they can paint a widening set of application linkages, pushing the horizon back from its current tight focus on CRM, HR and Chatter. <P> "With Force.com Canvas, developers can retool their legacy Web apps, or write new apps that instantly tap into Salesforce regardless of what language they are written in, or where they reside," said Wall. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-11-27T10:36:00ZRed Hat Unwraps OpenShift Enterprise At Amazon EventRed Hat broadens its open source cloud development platform, which increasingly competes with VMware's Spring and Cloud Foundry.http://www.informationweek.com/cloud-computing/platform/red-hat-unwraps-openshift-enterprise-at/240142602?cid=RSSfeed_IWK_authors<!-- KINDLE EXCLUDE --> <div class="inlineStoryImage inlineStoryImageRight"><a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-myths-about-cloud-computing/240124922"><img src="http://twimgs.com/informationweek/graphics_library/175x175/money_cloud.jpg" alt="7 Dumb Cloud Computing Myths" title="7 Dumb Cloud Computing Myths" class="img175" /></a><br /> <div class="storyImageTitle">7 Dumb Cloud Computing Myths</div> <span class="inlinelargerView">(click image for larger view and for slideshow)</span></div> <!-- /KINDLE EXCLUDE --> Red Hat announced Tuesday it is broadening its cloud development platform to include better managed developer workflows and automated provisioning of an application for a cloud deployment. <P> The announcement was made at the Amazon Web Services (AWS) Re:Invent conference in Las Vegas, which gets underway Tuesday. <a href="https://reinvent.awsevents.com/">Re:Invent</a> is a user event to discuss best practices in making use of Amazon EC2 infrastructure-as-a-service (IaaS). <P> Red Hat said at Re:Invent that its open source developer platform, OpenShift Enterprise, is now generally available. The platform, first revealed in May 2011, can be used either online in the cloud or installed in an enterprise data center. <P> Red Hat is a participant at AWS Re:Invent because the online option is staged as a hosted service by Red Hat in the Amazon EC2 cloud. OpenShift Enterprise is a multi-language development environment composed of Red Hat-integrated components.
It's attractive to enterprise developers because it supports the use of Java Enterprise Edition, something that VMware's Spring Framework stops short of doing. It also supports Ruby, Python, Perl, PHP and Node.js, or JavaScript for servers. <P> OpenShift Enterprise automates the provisioning and systems management of a new application and its related parts in a way that makes it easier to deploy to a cloud environment. It also allows development and operations staff to work more closely together on new applications. <P> With the deployment additions, OpenShift Enterprise "can transform a Linux administrator into a cloud administrator," claimed Ashesh Badani, general manager of Red Hat's cloud business unit, in an interview. <P> <strong>[ Want to learn more about Red Hat's cloud development platform, based on open source code? See <a href="http://www.informationweek.com/cloud-computing/platform/red-hat-shifts-into-gear-with-openshift/240002786?itc=edit_in_body_cross">Red Hat Shifts Into Gear With OpenShift</a>. ]</strong> <P> After developers work with such core Red Hat products as Red Hat Enterprise Linux and its JBoss Application Server and related middleware, it became crucial that they be able to deploy resulting applications into compatible environments as smoothly as possible, said Badani. <P> In fact, enterprises that rely on open source code are faced with the task of juggling updates and additions to components as they try to keep different pieces compatible with each other. "It is actually incredibly difficult to manage and sustain" all the components of a development platform, which is why Red Hat decided to produce one, said Badani. <P> Red Hat charges based on the number of server cores used with OpenShift Enterprise. Pricing begins at an annual subscription of $5,550 for two cores, a Red Hat spokeswoman said. <P> Developers may use any Eclipse-based tools on the platform, as well as Red Hat's JBoss Developer Studio tools. New applications or new services for existing applications can be developed and speedily deployed in the cloud, using OpenShift. It uses a hardened version of RHEL, one that meets the National Security Agency's SE Linux standard, or security-enhanced Linux, which imposes mandatory access controls. <P> "Leverage what you've got; go into the cloud-based world," summed up Badani. <P> Red Hat competes with VMware to produce a platform for Java developers. VMware acquired the Spring Framework and offers it as open source code on Cloud Foundry. Spring is known as a lightweight Java development environment, one that sidesteps some of the complexities of Java Enterprise Edition. <P> <a rel="author" href="https://plus.google.com/115152004403021879577/about"> <img src="https://ssl.gstatic.com/images/icons/gplus-16.png" width="16" height="16"align="right"> </a>2012-11-26T11:06:00Z10 Cloud Computing PioneersCloud computing has rewritten decades of technology rules. Take a closer look at 10 innovators who helped make it possible.http://www.informationweek.com/cloud-computing/infrastructure/10-cloud-computing-pioneers/240142397?cid=RSSfeed_IWK_authorsIt's hard to write history when you're still in the thick of recording it. However, in cloud computing we've amassed just enough background to name some of the early pioneers who've helped establish the relatively new computing paradigm. <P> The list is neither exhaustive nor all inclusive. 
And, undoubtedly, there will be other lists, highlighting other quiet innovators whose names we're just beginning to hear, and whose accomplishments will be well-known in the coming years. <P> But for IT managers in the midst of considering or adopting cloud computing, this list offers a commentary on where we have so recently come from, and where we may be going in the near future. <P> This list necessarily ignores how even these pioneers are standing on the shoulders of giants themselves. Consider, for example, the key work accomplished on distributed systems at Sun Microsystems and the early cluster builders, who preceded Google, Facebook, Microsoft and Rackspace on the cloud front. <P> Still, cloud development has moved at an accelerated pace compared to how long it took personal computing or client-server computing to emerge. Amazon Web Services' Simple Storage Service (S3) launched just six years ago, followed by the Elastic Compute Cloud (EC2). Google App Engine launched in 2008. Microsoft's beta version of Azure cloud services came in 2009. <P> The cloud paradigm is less than a decade old, but from the start, there seemed to be an understanding among its diverse pioneers that a new era was dawning and it would share a set of common characteristics. Any list of cloud computing pioneers would have Amazon's Werner Vogels near the top. But the architects and hands-on implementers who made his evangelism real, like Chris Pinkham, also deserve a nod. <P> Even the individuals named are in the habit of saying progress in the cloud is seldom an individual effort. Usually cloud advances are established by a large group of collaborators, and more often than not they are working in full public view with an open source code project like OpenStack (or Eucalyptus or CloudStack) or the Open Compute hardware project. <P> But some individuals were standing there before the pattern of cloud computing emerged. They acted at a time when the notion was still under attack. In believing, they risked being branded as charlatans and producers of mere vaporware, when in fact they were forging ahead to help define a new era. <P> Delve into our look at 10 pioneers of the cloud computing era. The order in which they should appear will remain under heavy debate as long as the cloud history is still being written. <P>Werner Vogels, CTO and VP of Amazon Web Services, joined Amazon in 2004 as director of systems research, coming from a computer science research post at Cornell University. In Holland, he had been a student of some of the leading minds in computing. The late <a href="http://www.informationweek.com/development/database/sailing-mystery-unsolved-court-declares/240000689">Jim Gray</a>, a Turing Award winner "for seminal contributions to database and transaction processing research and technical leadership in system implementation," was a proctor for Vogels' defense of his PhD thesis at the Vrije Universiteit in Amsterdam. At Vrije, Vogels' advisers included Andrew Tanenbaum, who wrote standard textbooks on operating systems as well as the code for the Minix operating system, and Henri Bal, a specialist in large, parallel systems. <P> He became Amazon CTO early in 2005 and later that year was named VP. He's had a vision of a new type of distributed system, one that relied on inexpensive parts but could scale out infinitely, making the Amazon cloud elastic and able to keep running if a piece of hardware failed underneath it.
He was an advocate of Amazon getting into the business of distributing virtual server computing cycles over the Internet and charging on the basis of time, and got the chance to advocate that enterprises adopt it as Amazon's first "outward facing" CTO. He has been a tireless evangelizer for greater use of the Amazon public cloud. His expertise, commitment and credibility were essential to establishing the broad acceptance that Amazon Web Services enjoyed from an early stage. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a> <P>Before Werner Vogels got a cloud infrastructure to evangelize at Amazon, there was Chris Pinkham, designer of the Amazon Elastic Compute Cloud (EC2). Actually, designing the Amazon infrastructure was one of those collaborative ventures, like Sergey Brin and Larry Page at Google, where two heads are better than one. Pinkham was the project's managing director; Amazon software architect Christopher Brown was lead developer. Together they produced Amazon's first public cloud infrastructure. <P> I once thought Amazon Web Services must have sprung out of Amazon.com's spare capacity. Not so. Initially they were two separate things, with the cloud merely the tail of the online-merchandising dog. <P> Amazon.com IT operations manager Jesse Robbins has told the story of how he jealously guarded the retail operation's data centers and didn't let experimenters near them. Pinkham, who gained expertise by running the first Internet service provider in South Africa, had joined Amazon in 2000 as director of its network engineering group, then became VP responsible for IT infrastructure worldwide. <P> Amazon had been discussing internally the possibility of creating a public-facing, virtualized infrastructure that could be sold as a service. Pinkham was the most likely candidate to pull it off. But "Chris really, really wanted to be back in South Africa," Robbins once told <a href="http://itknowledgeexchange.techtarget.com/cloud-computing/amazons-early-efforts-at-cloud-computing-partly-accidental/">blogger Carl Brooks</a>, who wrote: "Rather than lose the formidable talent ... Amazon brass cleared the project and off [Pinkham and Brown] went [to work in South Africa] with a freedom to innovate that many might be jealous of." <P> Pinkham had the knowledge of how things needed to scale in a Web service environment.
Both he and Brown set about exploiting the possibilities of a fully virtualized data center. EC2 was developed with different goals than the retail operation: The customer would have to be able to self-provision a virtual server, receive separate chargeback and have enough control to allow for virtual server launch, load balancing, storage activation and adding services such as database. <P> The two pulled it off, and Amazon EC2 was born. In 2006 Pinkham left Amazon to start a new company, Nimbula. He now proselytizes its software, Vogels-style, saying it generalizes the Amazon environment for companies to use as a private cloud. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>Randy Bias, cofounder and CTO of CloudScaling, has been a specialist in IT infrastructure since 1990, which positioned him to think through and lead some of the leading cloud computing innovations. He was a pioneer implementer of infrastructure-as-a-service as VP of technology strategy at GoGrid, a division of hosting provider ServePath. GoGrid launched a public beta of its Grid infrastructure in March 2008. <P> He pioneered one of the first multi-platform, multi-cloud management systems at CloudScale Networks and went on to found CloudScaling, where he was a successful implementer of large-scale clouds based on a young and unproven open source code software stack, OpenStack. Those large-scale clouds included KT, the largest cloud service in Korea (formerly known as Korea Telecom), and big data center services provider Internap. <P> Part of the support OpenStack receives is based on these implementations, and Bias was elected as one of <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">eight gold-sponsor board members</a> of the OpenStack Foundation. He keeps an unvarnished point of view on cloud claims and cloud pretensions, and is known for his uncompromising point of view. In 2009, he advocated the efficiencies of cloud computing as a way to counter climate change. <P> The O'Reilly Radar blog says Bias "led the open licensing of GoGrid's API, which inspired Sun Microsystems, Rackspace Cloud, VMware and others to open license their cloud APIs." 
<P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>Jonathan Bryce liked working with computers as a youth and had an older brother who was one of Rackspace's first 12 employees. He urged Jonathan to work at Rackspace, and Bryce became familiar with many phases of the operation, from racking servers to customer service and technical support. He partnered with website designer and friend Todd Morey to host sites on their own rented servers in Rackspace. They left Rackspace in 2005 to branch out into their own website building and hosting business, Mosso Cloud, named for an Italian musical notation phrase that means "to play faster and with more passion." <P> But Mosso still ran on servers in the Rackspace data center. Rackspace executives saw the relationship between its hosting-services business and emerging uses of cloud computing, so they asked Bryce to keep building out the Mosso Cloud. He had a system that could launch applications on a website and was thinking about a virtual machine launching system. Then Rackspace bought Slicehost, which already had such a system. Its virtual machine management became part of Mosso, and Bryce rejoined the company as the head of Rackspace Cloud. <P> Rackspace attempted to expand its cloud computing business by distinguishing itself from the market leader, Amazon Web Services. It offered smaller, get-started virtual servers, at $0.015 an hour. And it opened up its cloud API, prompting NASA to propose that they combine their cloud efforts in a joint project, OpenStack. By 2009, Rackspace saw OpenStack as both the means of spreading a common cloud computing base in private companies that could interoperate with Rackspace, and a means of changing the terms of competition with Amazon. <P> Rackspace led OpenStack as a sponsor, but realized it would have greater appeal as a more broadly sponsored project. It turned over management to the newly formed <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Foundation</a> in September. Both Cisco's CTO of cloud computing, Lew Tucker, and Red Hat's Brian Stevens, both members of the foundation's board, said Bryce was their top candidate to become its executive director, a post he accepted. 
At age 31, he's an innovative spirit with implementation experience who asserted himself when it still wasn't clear which direction cloud computing would follow. <P> <em>Photo of Jonathan Bryce, OpenStack from openstack.org website</em> <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a> <P> Lew Tucker already had 20 years of software development and engineering under his belt when the cloud era rolled around. He was quick to recognize that his previous projects were pointing in the cloud's direction. <P> He had been CTO and VP of engineering at Radar networks, producer of the Twine social network and VP of the AppExchange at Salesforce.com. His big-company experience brought a different voice to the debate over cloud, one of experienced and toughened engineering that said cloud not only could be, but also should be the next wave of computing. <P> Tucker was CTO of cloud computing at Sun Microsystems in 2008-2010, a crucial period when Oracle acquired Sun, and where his depth of knowledge countered Oracle's fatuous putdowns of cloud computing. After the acquisition, Oracle CEO Larry Ellison interviewed Tucker; Tucker said it took only three minutes before both men had made up their minds. In that short time, Oracle lost one of the few spokesmen capable of rolling back the skepticism that Oracle would ever be serious about cloud computing, something that it's still reaching for as it reverses course and wades more deeply into the field. <P> Tucker is now CTO of cloud computing at Cisco Systems, a tireless advocate (and board member) for OpenStack and an ignorer of boundaries -- as long as the other party can talk about cloud computing. At the recent Cloud Expo, he ducked into a meeting room to pay his regards to Rich Wolski, head of the Eucalyptus open source project at the University of California at Santa Barbara. Eucalyptus might be painted as an OpenStack competitor, but in Tucker's eyes Wolski's simply another passionate cloud enthusiast. He does the same on the OpenStack board of directors, where he's part of the social cohesion that holds competing members together. 
<P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>Rich Wolski is the co-founder and CTO of Eucalyptus Systems who decided that Amazon's public cloud APIs were so important that they should have open source code counterparts -- even if Amazon Web Services was against it. <P> He has been criticized on several fronts. One, his approach to cloud computing was too narrow -- it was based only on Amazon's example and initiative. Another: if Amazon wished to make its APIs open source, it could do so; if it didn't, it could make life difficult for an open source project that was doing so. <P> Wolski ignored the critics and pushed ahead both with his open source code leadership and Eucalyptus Systems, which makes a stack of software for building private clouds with Amazon EC2 compatibility. Amazon executives, for years unresponsive to Eucalyptus' entreaties to join the open source project, announced in late May that Amazon would <a href="http://www.informationweek.com/cloud-computing/infrastructure/amazon-makes-clever-private-cloud-play/232700120">partner with Eucalyptus Systems</a> as a provider of private cloud APIs. It was, finally, a blessing on Wolski's initiative. <P> Also a computer science professor, Wolski is a person of strong convictions who believes the world will convert to a new style of computing -- and that <a href="http://www.informationweek.com/software/infrastructure/eucalyptus-wont-be-left-behind-in-networ/240062620">Eucalyptus is destined</a> to play a role in the conversion. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? 
Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a> <P>In the early days of cloud computing, NASA CTO Chris Kemp took several leading concepts of how to assemble a low cost, horizontally scalable data center and put them to work at the NASA Ames Research Center in Mountain View, Calif. <P> One concept was placing banks of standard <a href="http://www.informationweek.com/government/cloud-saas/nasa-launches-portable-cloud-effort/222002580 ">x86 server racks in a shipping container</a> with one power supply and network hookup. The container was dropped off by supplier Verari, and hooked up and ready to start accepting workloads in a few days, compared to the long time it takes to construct a new, permanent data center. He also ensured a close tie-in to MAE-West, a major Internet access point, which NASA already had at Ames. <P> Kemp initially created the Nebula cloud project to collect big data from NASA research projects, such as the Mars mapping project. But Kemp also conceived of a mobile cloud data center that could be transported to different locations to provide onsite compute power, no matter where a spacecraft was launched or an interplanetary mission was managed. <P> Kemp also advocated sharing NASA data, and both Google and Microsoft have used telescopic images and mapping from the Mars Reconnaissance Orbiter to create public image libraries online. He also initiated the OpenStack open source code project when NASA sought to team up with Rackspace to combine cloud computing software assets. <P> In March 2011, Chris Kemp resigned his post with NASA, an agency with which he had dreamed of working since he was a child, to <a href="http://www.informationweek.com/government/leadership/nasa-cto-resigns-to-launch-startup/229301105">become founder and CEO of Nebula</a>. He was leaving, he said, "to find a garage in Palo Alto to do the work I love," a turn of phrase that showed he would be equally at home walking the halls of Congress or working the venture capital hallways of Menlo Park, Calif. <P> Not an imposing figure in stature, he is nevertheless an indomitable one. <a href="http://www.informationweek.com/cloud-computing/infrastructure/great-open-source-cloud-debate-rages/240002525">In a debate</a> among Eucalyptus Systems, Citrix CloudStack and OpenStack at GigaOm's Structure 2012, Kemp, speaking for OpenStack, was hemmed in by CloudStack's Sameer Dholakia and Eucalyptus' Marten Mickos, who seemed to have jointly aimed their sharpest comments at OpenStack. In answer, Kemp declared that he would be on the stage the following year without either of them as OpenStack grew larger. It was a brash, if not rash, comment, but one that nevertheless brought a moment of breathing room in which to talk about OpenStack capabilities and momentum. 
<P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a> <P>Marc Benioff, CEO of Salesforce.com, stands out as the pioneer and guerrilla marketer of software-as-a-service. He drew attention to the concept at a time when it was widely disregarded as an aberration of limited use by brazenly advancing the concept of cloud services as the "death of software." He meant that on-premises software, the systems that have been making enterprise data centers run since 1964, were going away, replaced by software running in a remote data center accessible over the Internet. <P> Much has already been written about the successful establishment of Salesforce.com, which doesn't need repetition here. But for his role in winning respect for the concept of SaaS, no one matches the standing of Benioff. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>The phrase, "the data center as the computer," comes so close to capturing what a cloud data center is about that a tip of the hat has to go to Urs Holzle. The senior VP for technical infrastructure at Google led the design and build-out of the search engine's supporting infrastructure and supplied a pattern for Amazon, Microsoft, GoGrid and others to follow. 
<P> As one of Google's first 10 employees, Holzle refused to be caught in the limits of what was then available from technology providers. Servers hadn't been designed for the cloud data center, so Google manufactured its own, according to the tenets that Holzle laid down. A Google data center is designed to use about <a href="http://www.informationweek.com/hardware/data-centers/google-reveals-data-center-secrets/240009254">half the power</a> of a conventional enterprise data center. <P> In 2009, Holzle and fellow Google architect Luiz Andre Barroso captured in a Google whitepaper the concepts essential to building a worldwide string of search engine data centers. It was called "<a href="http://www.morganclaypool.com/doi/pdf/10.2200/S00193ED1V01Y200905CAC006">The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines</a>." <P> Holzle is a former associate professor of computer science at the University of California at Santa Cruz. He received a PhD from Stanford in the efficient use of programming languages. He is co-sponsor, with VMware CEO Pat Gelsinger, of the Climate Savers Computing Initiative, and he co-authored a second paper with Barroso, "The Case For Energy Proportional Computing," which outlines ways for servers to use only the energy required to execute the current workload. The paper is credited with pushing Intel and other manufacturers to find ways to adjust the current consumed by their chips. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>Frank Frankovsky worked as Dell's director of Data Center Solutions during the crucial period of 2006-2009, building up the hardware maker's ability to sell rack-mount servers to search engine and Web service companies seeking to build new, more efficient data centers. <P> The unit's been a key, behind-the-scenes business that has kept Dell a leading player in server hardware. If Data Center Solutions had been broken out as a separate business, it would have been the <a href="http://www.informationweek.com/infrastructure/management/dells-data-center-unit-racking-up-cloud/222300841">number-three seller of servers</a> in the U.S. in early 2010, Dell executives told <em>InformationWeek</em> during a visit to the Dell campus. <P> In October 2009, Frankovsky become director of hardware design and supply chain at Facebook during a crucial period in its expansion. 
While there, he advocated that cloud server design be based on publicly pooled intelligence, despite Google's insistence that its server and data center designs were a competitive advantage. In April 2011, Mark Zuckerberg and other Facebook officials announced the launch of the <a href="http://www.informationweek.com/hardware/data-centers/facebook-challenges-with-green-open-sour/229401209">Open Compute project</a> to set standards for efficient cloud servers. <P> "The benefits of sharing so far outweigh the benefits of keeping it all closed," Frankovsky told <a href="http://venturebeat.com/2012/07/17/google-open-compute/#ut7QTjxXqp8LEKus.99">Venture Beat</a> in July 2012. <P> As an organizer of the OpenCompute.org project, Frankovsky helped pull in innovative and potentially competing projects behind the Open Compute standard. Financial services companies had watched the Google example and sought cloud computing servers of their own. Intel and AMD had been asked by their Wall Street customers to produce their version of a cloud server, examples that were donated to the new organization. <P> "What began a few short months ago as an audacious idea -- what if hardware were open? -- is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum," Frankovsky wrote in an Oct. 27, 2011 blog. <P> <strong>RECOMMENDED READING:</strong> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922">7 Dumb Cloud Computing Myths</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-cheap-cloud-storage-options/240134947">7 Cheap Cloud Storage Options</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/openstack-fights-cloud-lock-in-worries/240047880">OpenStack Fights Cloud Lock-In Worries</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/6-ways-amazon-cloud-helped-obama-win/240142268">6 Ways Amazon Cloud Helped Obama Win</a> <P> <a href="http://www.informationweek.com/government/cloud-saas/expect-to-save-millions-in-the-cloud-pro/240008984">Expect To Save Millions In The Cloud? Prove It</a> <P> <a href="http://www.informationweek.com/cloud-computing/infrastructure/vmware-does-complicated-dance-with-open/240067325">VMware Does Complicated Dance With Open Source</a> <P> <a href="http://www.informationweek.com/global-cio/interviews/5-ways-to-survive-the-coming-it-apocalyp/240044401">5 Ways To Survive The Coming IT Apocalypse</a>2012-11-26T10:16:00ZGoogle Adds Cloud Infrastructure Muscle Vs. 
2012-11-26T10:16:00Z
Google Adds Cloud Infrastructure Muscle Vs. Amazon
Google Compute Engine's first major upgrade adds 36 server instances to its cloud catalog, cuts prices to become more competitive.
http://www.informationweek.com/cloud-computing/infrastructure/google-adds-cloud-infrastructure-muscle/240142524
<P> In a bid to capture more cloud customers, Google has reduced prices on its existing virtual servers by 5% and added 36 selections to the four previously available in its Compute Engine server catalog.
<P> Google has sought to put more muscle behind its infrastructure-as-a-service offerings since <a href="http://www.informationweek.com/cloud-computing/infrastructure/google-compute-engine-challenges-amazon/240002733">Compute Engine</a> was first announced as "a limited preview offering" during the Google I/O 2012 show in June. It's still in limited preview -- in effect, a beta test through customers -- with no date in sight for when it will become a generally available product, said Shailesh Rao, director of new products and solutions in the Google Enterprise unit.
<P> But Rao added that Google is seeing heavy signup by Silicon Valley startups, which are often next door to the Mountain View, Calif., Googleplex. Compute Engine has gotten good reviews for the ease and speed with which it launches a virtual server.
<P> "We have a long ways to go, but we're excited about what we've been able to do so far," said Rao in an interview. Another source of customers is existing companies looking to build a new application and run it in a cloud data center, he added.
<P> <strong>[ Think you know everything you need to know about the cloud? See <a href="http://www.informationweek.com/cloud-computing/infrastructure/7-dumb-cloud-computing-myths/240124922?itc=edit_in_body_cross">7 Dumb Cloud Computing Myths</a>. ]</strong>
<P> Rao said Google seldom has to sell customers on the reliability of its infrastructure, since it's the same data centers and servers that run Google search. "We've built a beautiful house over the last 14 years. Now we want people to come and stay as long as they want," he said.
<P> In the past, customers seeking to make application services available to people in Europe had to be Google premier support customers. Now standard Compute Engine users may deploy workloads to either North America or Europe.
<P> The new server configurations come much closer to matching the wide variety of options found on Amazon Web Services, including virtual machines with more CPU power and larger amounts of random access memory. Google's previous entry level -- a "standard" virtual server with one "core" (equal to half a 2011 Intel Sandy Bridge CPU core), plus 3.75 GB of RAM and 420 GB of disk space -- was priced at $0.145 an hour. With the price reduction, it's now $0.138 an hour.
<P> Rao said Google is trying to be competitive in its pricing, which appears to position a slightly heftier virtual server next to a similar Amazon offering at a slightly lower price.
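<P> To put the reduction in concrete terms, here is a small back-of-the-envelope calculation in Python using the hourly figures quoted in this article; the 730-hour month and the comparison format are illustrative assumptions, not Google's own billing math.
<pre>
# Rough monthly cost of the entry-level "standard" instance, before and
# after the 5% cut, using the per-hour prices quoted in the article.
# The 730-hour month (about 24 * 365 / 12) is an assumption for illustration.

HOURS_PER_MONTH = 730

old_rate = 0.145   # $/hour before the reduction
new_rate = 0.138   # $/hour after the reduction

def monthly(rate_per_hour, hours=HOURS_PER_MONTH):
    """Cost of running one instance around the clock for a month."""
    return rate_per_hour * hours

before, after = monthly(old_rate), monthly(new_rate)
print(f"Before the cut: ${before:,.2f} per month")
print(f"After the cut:  ${after:,.2f} per month")
print(f"Savings: ${before - after:,.2f} per month ({1 - new_rate / old_rate:.1%})")
</pre>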
<P> Added to the standard offering, however, are some wide-ranging variations. For example, Google lists its largest HighCPU server with disk as having eight cores, 7.2 GB of RAM and 3,540 GB of disk space, at $0.68 per hour. Its largest HighMem server has the same dimensions except for 52 GB of RAM and is priced at $1.272 per hour. Prices are slightly higher in European deployments.
<P> Google has been competitive in cloud storage as well, with pricing formerly at $0.12 per GB per month for the first terabyte stored, now reduced roughly 20% to $0.095 per GB. It's also introducing a "durable reduced availability" storage option that is 30% less expensive than its previous prices. The "reduced availability" means it may function much like standard storage, but in periods of high traffic it will be given a lower priority and respond with longer latencies, Rao noted.
<P> Rao said Google's offerings also benefit from the strong networking of Google's data centers, the same networks that deliver sub-second search results.
<P> Google is also adding a persistent disk snapshotting service, with snapshots sent to a customer-designated backup location, if desired.
<P> Rao acknowledged that Google was unlikely to win existing Amazon customers with "limited preview" services. But he said Google was going after the startup market. "We are a very good option for them, and for new people moving to the cloud," he said.
2012-11-21T09:12:00Z
Joyent Gets New CEO, Preps Cloud Tools
Henry Wasik joins Joyent from Dell, where he led the networking unit; cloud software gets an upgrade in early 2013.
http://www.informationweek.com/cloud-computing/infrastructure/joyent-gets-new-ceo-preps-cloud-tools/240142476
<P> Joyent, whose "resilient" cloud runs on Joyent's own version of the former Sun Microsystems Solaris operating system, recently announced a new version of its cloud software, Joyent7. It also said it has appointed a new president and CEO, Henry Wasik, former CEO of Force10 Networks.
<P> Wasik was named CEO of the cloud infrastructure-as-a-service provider on Nov. 7. His former company, Force10, was acquired by Dell in August 2011, with Wasik serving as head of the Dell Force10 unit. Force10 was founded to build 10-Gb and 40-Gb Ethernet switches. Wasik served as its CEO for eight years, until he left to join Joyent.
<P> Previously he was senior VP of voice networks and applications software at Alcatel, heading a unit with annual revenue of $700 million and a staff of 1,000. He holds a bachelor's degree in electrical engineering and a master's degree in industrial management. Earlier he worked for the startups Mostek and InteCom.
<P> Joyent, a San Francisco-based specialist in resilient cloud infrastructure, builds heavy instrumentation into its Joyent7 system software using a Solaris utility, DTrace.
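<P> As a generic illustration of the kind of visibility DTrace provides -- a standard system-call count by process, not Joyent's own instrumentation -- the following sketch runs a common DTrace one-liner from Python on a DTrace-capable system such as SmartOS or Solaris (root privileges assumed):
<pre>
# Minimal sketch: sample system-call counts per process for five seconds
# with DTrace, then print the aggregation. A process with an unusually
# high count can be a first hint that a component is misbehaving.
import subprocess

# Standard DTrace idiom: count syscalls by process name, exit after 5 seconds.
D_SCRIPT = "syscall:::entry { @calls[execname] = count(); } tick-5s { exit(0); }"

def sample_syscalls() -> str:
    """Run DTrace and return its aggregated output as text."""
    result = subprocess.run(
        ["dtrace", "-n", D_SCRIPT],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(sample_syscalls())
</pre>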
DTrace collects data on different parts of a system and can detect when a part is performing abnormally. SmartOS also uses the ZFS file system, which uses end-to-end checksums to confirm that data written to storage arrives intact.
<P> <strong>[ Want to learn more about why some observers call Joyent the "resilient" cloud? See <a href="http://www.informationweek.com/cloud-computing/infrastructure/joyents-cloud-competes-with-google-amazo/240003345?itc=edit_in_body_cross">Joyent's Cloud Competes With Google, Amazon</a>. ]</strong>
<P> DTrace and ZFS became part of Solaris, and later of Illumos, an open source derivative of Solaris on which Joyent's own SmartOS and Joyent7 are based. Joyent runs its own cloud infrastructure on Joyent7 at several locations and will install it on premises for customers planning to offer cloud services, such as Spanish telecommunications company Telefonica and Bharti Airtel, the largest telecom carrier in India.
<P> Jason Hoffman, founder and CTO of Joyent, described in an interview new features in Joyent7, which he said is slated for release in early 2013. These include:
<P> -- New debugging and performance tools for diagnosing Node.js code. Joyent employs key contributors to Node.js, the server-side JavaScript runtime.
<P> -- A new unified directory service, written in Node.js, for high-performance user management, replication and synchronization of user identification.
<P> -- Added billing and financial management in Joyent7's reporting capabilities.
<P> -- Fuller representation of cloud operations on a reporting dashboard. A toolkit provides more command-line tools for system admins and operators, enabling automated operations through scripting.
<P> -- The ability for any Joyent7-based cloud to provision NFS-based storage, a standard feature of Solaris environments, automatically through an API. The NFS storage is roughly analogous to Amazon Web Services' S3 object storage.
<P> -- The capability for Joyent7 clouds to invoke built-in APIs for image management, security groups and workflow.
<P> The updated cloud system "is for people who want to be a service provider, like us," said Hoffman during a visit to <em>InformationWeek's</em> office. Joyent operates a combination of its software and x86 hardware for partners in Europe, Russia, Asia and South America that are providing cloud-based services.
<P> The Joyent staff includes Bryan Cantrill, a former distinguished engineer at Sun who co-designed and implemented DTrace.
2012-11-16T13:07:00Z
Hurricane Sandy Lesson: VM Migration Can Stop Outages
When a hurricane or other disaster threatens, why not just move critical systems out of the way? It can be done -- but not at the last minute.
http://www.informationweek.com/news/240142242
<P> When it comes to disaster recovery, there's nothing like having learned your lesson before the disaster arrives. One of the most useful lessons is that virtualized systems now enable IT managers to move entire systems out of harm's way, even if that means halfway across the country -- although it's necessary to have an alternative site set up well in advance.
<P> Another lesson is that moderately priced backup telephony services -- based on IP networks, including the Internet, and open-source code -- can serve as hosted substitutes for your local carrier and office PBX service if those get knocked offline.
<P> <a href="http://www.informationweek.com/telecom/unified-communications/open-source-asterisk-telephony-goes-high/232200752">Asterisk</a> and <a href="http://www.informationweek.com/telecom/unified-communications/broadsoft-to-deliver-uc-platform-for-ser/232500690">BroadSoft</a> are two of the platforms behind what's come to be known as SIP trunking services. Another provider, Evolve IP, played a role in ensuring that patients needing oxygen in the aftermath of Hurricane Sandy could still place their calls to Apria Health Care and get their deliveries.
<P> Apria, a national provider of home health care services, learned in 2005, when Katrina hit New Orleans, that it could lose telephone service at the moment it was most needed. Apria supplies hospital beds, drug infusion equipment, and other medical goods and nursing services to the homes of elderly, incapacitated and convalescing patients. But one of the most frequently needed services, especially in a disaster, is simply the personal oxygen tank.
<P> <strong>[ Read how data centers reacted to Hurricane Sandy, including bucket brigades. See <a href="http://www.informationweek.com/services/disaster-recovery/hurricane-sandy-disaster-recovery-improv/240012673?itc=edit_in_body_cross">Hurricane Sandy Disaster Recovery Improv Tales</a>. ]</strong>
<P> During Katrina, Apria employees were hamstrung by the failure of office phone systems as the storm and its flood waters swept through the New Orleans region. Oxygen users who had left home to move in with relatives or find a room on higher ground had only what they had been able to carry with them, often the result of a hurried departure at the last minute. And when they picked up the phone -- their primary way of placing orders -- they often found the line to the Apria Health Care office was dead.
<P> "Patients couldn't get through to us after Katrina," said David Slack, VP of IT network engineering for Apria. There was an alternative -- calling a national toll-free number -- but few patients had it when they needed it. Something had to be done.
<P> Apria looked at backups to its phone system from the major carriers, which expected the firm to spend $2,000 to $4,000 per office to upgrade to SIP trunk services. Apria has 550 offices nationwide, which would have brought the total bill to over $2 million. It found a lower-cost substitute among the firms that carry enterprise voice and data over IP networks, including the Internet; such services are built on PBX software such as the open-source Asterisk project and BroadSoft's commercial platform. Apria settled on Evolve IP, another provider, which offers what amounts to an enterprise version of the consumer VoIP service Vonage popularized.
<P> The Evolve IP system, a virtual PBX for Apria hosted in an Evolve IP data center, was installed a year ago, based on the lessons learned from Katrina.
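<P> The failover routing described in the next paragraphs can be sketched in a few lines of Python. This is a hypothetical illustration of the routing-table idea, not Evolve IP's actual system; only the Middletown-to-Cromwell pairing comes from the article.
<pre>
# Hypothetical sketch of hosted-PBX failover routing: if a branch's lines
# are unreachable, forward its calls to a designated backup office.

FAILOVER_ROUTES = {
    "Middletown, NY": "Cromwell, CT",   # nearest well-staffed hub (from the article)
    "Anytown, USA": "Regional hub",     # placeholder entry for illustration
}

def route_call(branch, reachable):
    """Return the office that should receive a call placed to `branch`."""
    if branch in reachable:
        return branch
    return FAILOVER_ROUTES.get(branch, "national call center")

# Example: Middletown loses phone and data connectivity during the storm.
print(route_call("Middletown, NY", reachable={"Cromwell, CT"}))  # -> Cromwell, CT
</pre>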
The hosted PBX needed to be accessible from any branch and have a call-forwarding capability in case a Katrina-like storm should knock out a branch. As Sandy gathered strength and was projected to veer into the mid-Atlantic states, Apria's executives knew the system was going to get its first test. Twenty of its offices were directly in Sandy's path. The system needed to be told where to forward calls if the primary destination wasn't available, and Apria made those adjustments. If the Middletown, N.Y., branch lost telephone service, the system was to shift calls to the nearest well-staffed hub, in Cromwell, Conn.
<P> "We lost both phone and data connectivity in Middletown. Immediately the phones started ringing in Cromwell, the backup site. All the intelligence on (emergency) routing was in the cloud. It understood instantly if a branch office was down. It worked fantastically," said Slack in an interview. In this case, the "cloud" is three private Apria data centers in Philadelphia; Wayne, Pa.; and Las Vegas, connected through SIP gateways to the branches.
<P> Apria offices in Brooklyn, N.Y., and Elmsford, N.J., also lost voice and data service, and the backup system took over their calls as well.
<P> In all, 1,006 messages dealing with customers' needs were handled by Apria's remote, virtual PBX system. The system could be reprogrammed on the fly to select a new backup location if a designated one was knocked out. "Our product is not a nicety that people come and pick up. To maintain themselves, clients need our services," Slack said. Those 1,006 rerouted calls were proof of that after Sandy.
<P> Not everyone was fortunate enough to have had a prior hurricane as an instructor. One hard lesson taught by Sandy was that the magnitude of a disaster can change the terms on which you thought your recovery plan was operating. The top priority of a plan is to keep a data center running; the top priority of government authorities might lie somewhere else.
<P> Datapipe prepared its two data centers in Somerset, N.J., for the storm, making sure to top off the fuel tanks of the diesel backup generators and going so far as to call in a diesel fuel tank truck from its private contractor and park it on premises. Daniel Newton, senior VP of operations, said other preparations for staffing and communications had been made, and the data centers rode out the storm, experiencing only a couple of minor leaks from wind-driven rain.
<P> The site kept a bank of emergency generators running as other sites lost their power supplies and the Somerset utility power showed fluctuations. Newton thought nothing of using a little diesel fuel; he had plenty to spare. Then "an unforeseen circumstance occurred" as his backup supply tanker fired up its engine and drove off. Its owner had been ordered to deliver fuel to hospitals, nursing homes and convalescent centers instead of standing by at Datapipe. The sheer scale of the storm had undermined the plan.
<P> Newton said the site never suffered a power outage, so the issue became moot. But he had discovered a hole in the plan, and Datapipe immediately plugged it "by procuring our own fuel truck."
<P> In New York, the solution wasn't as simple. The unexpected happened: a storm surge washed over three blocks of lower Manhattan from Battery Park. Caught in that surge was 75 Broad Street, with Peer 1 Hosting and Internap data centers in the building.
Steve Orchard, senior VP of development and operations at Internap, knew the building's fuel supply system was stocked up and his backup generators were on the second floor, well above any conceivable flooding. But he hadn't accounted for the reserve fuel tank's vent pipe, which lets air into the tank as fuel is pumped out; it sat two feet above the ground, outside the building.
<P> When the storm surge hit the neighborhood, it <a href="http://www.informationweek.com/hardware/data-centers/hurricane-sandy-surge-challenges-nyc-dat/240012583?queryText=Hurricane%20Sandy">flooded the basement</a>, disrupting the redundant pumping system's electrical supply and shutting down the pumps. That alone would have been a relatively simple problem to fix: bring in a new pump and move fuel from ground level to the second floor. But salt water had also flowed down the vent pipe into the building's reserve diesel supply, and 10,000 gallons of precious fuel were contaminated. That left Orchard and the building's engineers with two major problems to overcome: dead pumps and a contaminated reserve. They did so, rigging a new pump, a fresh fuel supply and "creative fabrication of piping and hoses" to start moving fuel to the second floor.
<P> Orchard said many different workmen were involved in figuring out how to disengage the fuel line from its existing fittings and attach new ones so it could connect to a fuel truck. They also had to locate a small generator to power the work. Internap had to shut down late in the morning Oct. 30. It was up and running again before midnight, having been out of commission for less than 12 hours, thanks to "the creativity and resiliency" of the Internap staff and 75 Broad building engineers.
<P> One thing that might have gone wrong didn't: the redundant basement pumps stopped working at the same time the salt water entered the vent pipe, so the contamination never spread through the fuel line. Instead of needing to flush the line and perhaps repair generators, Orchard only needed to connect the new source.
<P> At many data centers, some aspect of even the best-laid disaster recovery plan went off the rails. The collected experiences amount to an argument for disaster preparedness to shift from trying to guarantee the physical integrity and continuous operation of a given data center toward system transfer -- migrating mission-critical virtual machines to another data center, out of harm's way. That approach would be useful not only for hurricanes but also for fires, floods and earthquakes.
<P> The ability to move virtual machines, which started with VMware's VMotion capability, used to be limited to a new location in the same data center rack. It gained the capability to move across racks in one data center, then between data centers.
<P> Internap, Datapipe and many other data center service providers now offer high-level disaster recovery services that allow the movement of critical systems from one location to another. But SunGard, another provider of such services, warns that you can't wait until the last minute to invoke them.
<P> "If you don't have a subscription (for disaster recovery) with SunGard, we don't allow you to sign up on the fly. The owner can't call the insurance company to write a policy when the building's on fire," said Walter Dearing, VP of recovery services at the SunGard Availability Services unit.
<P> In fact, such recovery systems still take forethought and planning to implement.
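<P> The mechanics of moving a running virtual machine are, by themselves, the simple part. As a minimal sketch -- using the open-source libvirt/KVM stack as a stand-in, since the article's examples involve VMware vMotion and provider-managed services, and with hypothetical host and VM names -- a live migration can be a few lines of Python:
<pre>
# Minimal sketch of a live VM migration with the libvirt Python bindings.
# Assumes libvirt/KVM on both hosts, SSH access between them, and shared
# or pre-synchronized storage; the URIs and VM name are hypothetical.
import libvirt

SRC_URI = "qemu:///system"                          # local hypervisor
DST_URI = "qemu+ssh://dr-site.example.com/system"   # hypervisor at the recovery site

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

domain = src.lookupByName("critical-app-vm")        # the VM to protect

# Move the running VM to the recovery site without shutting it down.
domain.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
</pre>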
The key issue is not whether you can place virtual machine duplicates in some other location. That's a cinch. The main problem is getting a synchronized and up-to-date data flow into those systems to allow them to keep running.
<P> Few companies keep a hot mirrored system running in a remote location, receiving a real-time stream of data and ready to pick up where another leaves off within a few milliseconds. There are many ways to recover systems that are less expensive than a complete live duplicate, and each customer decides what level of recovery it must have.
<P> Do they want to recover by digging week-old tapes out of a vault somewhere? Do they have snapshot backups that are only a day old? Do they have all the server logs they need to reconstruct transactions up to a point that is only a few minutes or a few seconds short of the point of failure?
<P> "Tape is the cheapest backup. It's also the most error prone in terms of physically accessing the tapes and the data on the tapes," Dearing said, especially during weather like a hurricane. But even a site that is taking frequent snapshots of its data and replicating them to two or more remote locations will need data recovery systems in place to maintain data integrity and restart systems.
<P> "Recovery facilities do not have a cookie-cutter similarity," he noted. SunGard's facilities in Carlstadt, N.J., and Philadelphia are prepared to handle recovery of much larger, more complex systems than its facility in Arizona, he said. Even a virtual machine recovery system must be tested frequently and rigorously, something many customers find hard to fit into busy schedules. Seamless failover between sites based on virtual machines is possible today, Internap, Datapipe and SunGard executives all agree. "But you can't do it without the proper due diligence," Dearing said.
2012-11-15T14:06:00Z
Microsoft's Ballmer Touts Tablets, Phone, Cloud
Microsoft CEO discusses strategy to compete on multiple fronts: enterprise IT, consumer devices and application software.
http://www.informationweek.com/software/enterprise-applications/microsofts-ballmer-touts-tablets-phone-c/240142145
<P> Microsoft CEO Steve Ballmer described the departure of Steven Sinofsky as the head of Windows 8 development as "amicable" and said that Microsoft is well positioned in tablets, phones and cloud computing to start expanding its business again.
<P> "He's made one of the most amazing contributions anyone could make to a company. We certainly wish him well," Ballmer said of Sinofsky, who until Monday had led the development of Windows 8. He added that Julie Larson-Green, who will take over the Windows product, "has worked with us for 20 years and will provide the leadership needed to continue to advance Windows 8." <P> Ballmer made the remarks at the Churchill Club of Santa Clara as he paid one of his infrequent visits to Silicon Valley. Surrounded by Intel chip production facilities, major league data centers and technology companies, he was addressing one of his key constituencies two days after the sudden departure of Sinofsky. About 500-600 people attended the sold-out dinner event at the Santa Clara Marriott. <P> Sinofsky was a key "can-do" executive, and his abrupt exit had sparked speculation that the Windows 8 organization inside Microsoft was in trouble. Ballmer ignored the speculation but tried to quell any doubts about the longevity of the Windows 8 tiled interface in a talk that was part diplomatic overture to partners and venture capitalists and part Barnum and Bailey. <P> <strong>[ For more about Microsoft's latest bid to enter the consumer device market, see <a href="http://www.informationweek.com/hardware/handheld/microsoft-windows-8-surface-tablets-big/240002115?queryText=Surface"> Microsoft Windows 8 Surface Tablets: Big Hardware Play.</a> ]</strong> <P> Waving a Windows 8 RT tablet, Ballmer said it was essential that Microsoft demonstrate the capabilities of its software on devices that are positioned on the leading edge of the consumer market. <P> Microsoft doesn't wish to compete with Dell, HP, Acer or other Windows device makers but, as with the Xbox, it must innovate where a particular <a href="http://www.informationweek.com/windows/microsoft-news/microsoft-well-sell-a-few-million-surfac/240003436"> hardware form factor</a> is the best example of the capabilities of its new software. He expects Microsoft partners to be the main producers of Windows 8 tablets and wants to see a healthy ecosystem arise around the form factor. Ballmer also held up an Asus ultra-slim laptop running Windows 8 as a sample of how partners will compete based on the use of the Microsoft operating system. <P> Addressing the struggling Windows Phone 8, Ballmer made a reference to Apple's iPhone: "Our challenge is not to get 60% of the smartphone market. It's to get 10% of that market, then 15%, then 20%." If Microsoft can establish a credible Windows phone, then its Nokia unit -- or other manufacturers, such as HTC -- will incrementally improve upon that base until they are competing more effectively for market share. <P> Asked by Reid Hoffman, co-founder of LinkedIn and a venture capitalist at Greylock Partners, what had been his biggest surprise as CEO, Ballmer brought the discussion back to Windows 8. <P> "Touch, touch, touch, touch," he said emphatically. The fact that the most popular user interface could shift to a touch-based interface such as Apple's on the iPad and iPhone, in which finger gestures and soft keyboards control the device, admittedly caught Microsoft by surprise, Ballmer said. It has scrambled to produce a competitive response. Windows 8 features touch-based scrolling similar to that of Apple devices, with the addition of a larger tiled presentation screen with each tile image &#8211; several times the size of Apple icons -- representing an application. 
<P> Ballmer conceded that the Windows 8 user interface will initially present a learning curve to traditional Windows users, but he said it is easy to "pin" application icons onto a desktop screen and re-establish a familiar setting for users who choose to ignore the properties of the new interface. But, he said, he "likes having a stock price feed and other real-time information going past me on the screen." He also said he had "no trouble" selling the new interface to the head of a major bank last week. "[That bank executive] plans to put Windows 8 machines in all his branches, and they've got a lot of branches."
<P> The strength of the Windows applications library, Ballmer said, will continue to see Microsoft through competitive challenges. Tablets and phones that can easily share data among Windows applications enable all of a user's devices to relate to one another.
<P> Microsoft, he added, is the world's leading specialist in producing "extensible" applications. In June, Microsoft spent $1.2 billion acquiring Yammer, a social networking firm that Microsoft will use to add social networking services to its applications. "Yammer has an incredible extensibility model. There are all kinds of opportunities for extending this application."
<P> Ballmer said Microsoft is moving ahead in cloud computing as well, with its Windows Azure services and a growing set of cloud-based applications. Asked whether Office 365, software as a service in Azure, "was self-cannibalizing" the regular Office suite, Ballmer responded that Microsoft couldn't worry about that -- what's important is to get into cloud computing. "Let's move, let's move," he said, citing a favorite line from the film "Annie Hall" in which Woody Allen explains that relationships are like sharks: they have to keep moving forward.
<P> Microsoft has brought its own advantages to cloud computing, such as allowing users to sign into an enterprise Active Directory server and carry that identification and privilege level with them as they go out into cloud applications.
<P> The Internet, with its worldwide reach, and the cloud, with its low-cost, by-the-hour servers, have reduced the barriers to entry into the software business and increased the pace of change, posing new competition to Microsoft.
<P> "The technology business," said Ballmer in a moment of self-revelation, "is not like the brownie mix product that I worked on [at Procter & Gamble]. The pace is just not the same. In technology, you have to re-invent yourself more frequently."