Company executives plan to discuss exploratory work in grid computing, a method of using the Internet to link computers' processing power and data storage, at a presentation of the company's three-year business plan to creditors next month. WorldCom is talking to Hewlett-Packard, IBM, Sun Microsystems, and researchers at the open-source grid computing Globus Project about applying the technology to help customers build hosted data centers capable of deploying more computers when demand is high -- say, around Christmas for a retailer -- then taking those machines offline when usage ebbs, lowering costs.
Using dedicated servers for Web hosting will eventually seem dated, but it's not clear how soon grid computing will take over, WorldCom's Shaw says.
Key developments this year could speed acceptance of grid computing, which is getting a lot of attention from technology vendors trying to boost flat sales. Next month, the Globus Project, a group of researchers from Argonne National Laboratory and the University of Southern California, plans to release the first public test version of Globus Toolkit 3, the newest release of the software that lets users submit jobs to other Internet-connected machines running Globus.
The research community has embraced Globus -- and now some large companies are trying it. Charles Schwab & Co. is running tests with IBM Research of Globus-based software that can draw processing power from its Web servers when they're running below peak capacity, then deliver those cycles to applications that analyze clients' portfolios. As the discount brokerage expands its investment-advice business, grid technology could boost customer satisfaction by letting Schwab's advisers deliver portfolio advice on the spot, says executive VP David Dibble, instead of hours or days later. "We have jobs that used to run in batch in tens of minutes -- if not hours -- running in seconds," he says.
The computer industry's bet is that grid computing, based on the Globus Toolkit and commercial server-side software that can communicate with networks of linked computers, can be used to solve the No. 1 problem technology companies face today: How to respond to business customers who overbought hardware and network capacity during the '90s and want to direct excess power to match business priorities, instead of having it sit idle. "It's difficult today to take advantage of server capacity without putting its purpose at risk," says Victor Nilson, VP of enterprise architecture at Cingular Wireless, a joint venture between BellSouth and SBC Communications that's experimenting with grid computing in its research labs. "One box is on its knees, and the box beside it is bored to tears."
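The imbalance Nilson describes -- one box on its knees while its neighbor is bored to tears -- is the basic problem a grid scheduler tries to solve: send new work wherever spare capacity sits. A minimal sketch of that idea follows; the server names and load figures are hypothetical, and this is an illustration of the concept, not any vendor's product.

```python
# Illustrative sketch: route each incoming job to the least-loaded
# server, so no box is overwhelmed while its neighbor sits idle.
# Server names and load figures are hypothetical.

def least_loaded(servers):
    """Return the name of the server with the lowest current load."""
    return min(servers, key=servers.get)

def dispatch(servers, job_cost):
    """Assign a job to the least-loaded server and update its load."""
    target = least_loaded(servers)
    servers[target] += job_cost
    return target

servers = {"web01": 0.90, "web02": 0.10}   # fraction of capacity in use
assigned = dispatch(servers, 0.30)
print(assigned, servers)   # the idle box absorbs the new work
```

Even this toy version shows the payoff: capacity that would otherwise sit idle gets directed to where demand is, which is exactly the "excess power" problem the industry is betting grids can fix.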
Computers are becoming collections of disaggregated components rather than self-contained boxes, as hardware and software designers take advantage of technical advances such as networked disk storage, the high-speed InfiniBand input/output technology, and Web services.
As applications become more distributed across machines on a network, the world's largest IT companies are designing products to manage workloads across various machines. IBM's "dynamic surge protection" technology uses mathematical models to forecast when data centers can expect a spike in transaction demand and hot-swap resources to meet the load. Hewlett-Packard's Utility Data Center architecture and Sun's N1 resource controller software address similar problems. Even Microsoft is getting into the computing-on-demand business: Windows 2003 Server, due April 24, will include tools for managing CPU and memory utilization and an automated deployment service for faster server setup.
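The common thread in these products is forecast-then-provision: predict a demand spike, then bring resources online before it hits. As a rough illustration only -- IBM's actual models are far more sophisticated, and every number here is hypothetical -- a moving-average forecast can drive a simple provisioning decision:

```python
import math

# Toy sketch of forecast-driven provisioning, loosely in the spirit of
# the surge-protection idea described above. The forecasting model is
# a plain moving average; real products use much richer math.

def forecast(history, window=3):
    """Predict next period's demand as a moving average of recent periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(history, per_server_capacity=100):
    """Round the forecast demand up to a whole number of servers."""
    return math.ceil(forecast(history) / per_server_capacity)

transactions = [80, 120, 400]          # transactions per period; a spike is building
print(servers_needed(transactions))    # how many machines to have online
```

The interesting part in production systems is everything this sketch omits: how quickly a "hot-swapped" machine can actually join the pool, and what the forecast does when the spike is unprecedented.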
The emphasis on efficiency is a product of the lousy economy, says Jonathan Eunice, president of technology research company Illuminata. Which brings us back to Globus. If it's been so effective at linking the computing cycles of supercomputers at universities and national labs, couldn't it do the same for business systems?
Well, not always, or, at least, not yet. The current version of Globus is oriented toward batch jobs that line up for computing resources. That's fine in science, but unacceptable in business, where users interact with constantly changing data.
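The batch model can be pictured as a first-come, first-served queue: every job waits for everything ahead of it. A minimal sketch (job names and durations are hypothetical) shows why that's tolerable for an overnight science run and ruinous for a user waiting at a screen:

```python
from collections import deque

# Minimal sketch of first-come, first-served batch scheduling.
# Durations are hypothetical, in minutes.

def completion_times(jobs):
    """Run queued jobs in order; return each job's finish time."""
    queue, clock, finished = deque(jobs), 0, {}
    while queue:
        name, duration = queue.popleft()
        clock += duration
        finished[name] = clock
    return finished

jobs = [("simulation", 240), ("portfolio-report", 5)]
print(completion_times(jobs))
# The five-minute job finishes at minute 245 -- fine in a batch lab,
# unacceptable for a user who expects an answer on the spot.
```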
Also, unless you're comfortable typing commands at a command prompt, Globus probably isn't for you. On the portion of the Globus Web site that argues for its ease of use, the authors note that "astrophysicists at the Max Planck Institute in Potsdam recently performed a simulation of two colliding black holes on Cray T3E supercomputers in Germany and California by running a standard Message Passing Interface code." Well, it's not Windows.
An industry group called the Global Grid Forum has drafted a specification called the Open Grid Services Architecture, a set of extensions to Web services. The new version of the Globus Toolkit supports these extensions, and IBM is building support for them into its software, as are HP and Sun. The idea is that applications written as Web services will be able to discover machines available on the grid, and use Globus' services, without much additional programming. If every computer has that software installed by default, just as the TCP/IP Internet stack is today, momentum for grids could speed up.
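The discovery idea at the heart of that vision -- machines advertise what they offer, applications ask for what they need -- can be sketched in a few lines. To be clear, the class and method names below are invented for illustration; they are not the Open Grid Services Architecture's actual interfaces, and a real grid registry would be a networked service, not an in-memory object:

```python
# Toy in-memory registry illustrating the discovery idea behind
# grid-enabled Web services: machines advertise capabilities, and
# applications query for a match. The API here is hypothetical.

class GridRegistry:
    def __init__(self):
        self._machines = {}

    def register(self, host, capabilities):
        """A machine advertises the services it can offer."""
        self._machines[host] = set(capabilities)

    def discover(self, capability):
        """Return hosts offering the requested capability."""
        return sorted(h for h, caps in self._machines.items()
                      if capability in caps)

registry = GridRegistry()
registry.register("lab-node1", {"compute", "storage"})
registry.register("lab-node7", {"compute"})
print(registry.discover("storage"))
```

The point of baking such interfaces into standard server software, as the article notes, is that applications could find and use grid resources "without much additional programming" -- the discovery step stops being custom code.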
So far, though, several important companies, including Microsoft, appear lukewarm on the prospect. An effort to port Globus to Windows XP, disclosed last year, hasn't yielded any released code.
Enterprise applications vendors are holding out as well. For example, Oracle posts a Globus development kit on its Web site, but Benny Souder, a VP of distributed database development, says he isn't sure who's using it. "It's not too easy for people to use Globus and Oracle together," he says.
Ease of use isn't everything, of course. But when it comes to getting CIO buy-in, it counts for a lot.
Photo of Shaw by Stan Kaady