Federal agencies and U.S. national labs will be responsible for much of the wide-ranging agenda at SC11, an annual supercomputing conference being held this week in Seattle, with topics running the gamut from supercomputing in the cloud and exascale computing to power management and networking.
"It used to be, 'Oh, this is the new technology,' but what's impressive to me today is about the breadth," Jim Costa, SC11's technical program co-chair and a senior manager at Sandia National Laboratories, said in an interview. "There is no one way to build high performance computing, and you've got big integration problems. We're still trying to figure out how to put it all together."
Overall, the government will play a role in dozens of panels and paper presentations at the conference, showing that it is still one of the key players in cutting-edge supercomputing. Researchers from Pacific Northwest National Laboratory alone will present more than a dozen sessions on topics such as supercomputing for the future smart-power grid, optimizing power management in supercomputers, supercomputing with semantic databases, and new algorithms for understanding complex sub-atomic processes in solar energy systems.
Networking is an important topic at this year's show. Oak Ridge National Laboratory, home of Jaguar, the United States' fastest supercomputer, will be showing off "wide area" 100 Gigabit Ethernet. Also, Brookhaven National Lab will be detailing a method to better accommodate multiple network reservation requests. The National Nuclear Security Administration and Lawrence Livermore National Lab, meanwhile, will demo a new implementation of InfiniBand (a network link often used in supercomputing) connecting a Lawrence Livermore supercomputer with NASA's fastest supercomputer, Pleiades.
Other papers and panels with government participation include research on the significance of memory errors in future exascale computers; image compositing, a software framework for processing massive data sets, and hardware and software optimizations for fusion research from Lawrence Berkeley National Lab; a framework for GPU performance projections and techniques for improved I/O performance from Argonne National Lab; and techniques to facilitate debugging of large-scale parallel apps from Lawrence Livermore National Lab and Purdue University.
Although the U.S. government continues to play a key role in the supercomputing world, high-performance computing is increasingly a global game, Costa noted. Researchers and governments from around the world will be at SC11 this year. A student competition at the conference will include participants from Japan, China, Taiwan, and Russia.
At the outset of the conference, Cray and the National Center for Supercomputing Applications at the University of Illinois announced that Cray would be taking over development of the $188 million National Science Foundation-funded Blue Waters supercomputer, on hiatus since IBM backed out of the effort in August due to high costs.
SC11 also will be the source of numerous other tech announcements, including the latest Top500 ranking of the world's 500 fastest supercomputers, and an array of new products from supercomputing and other tech vendors.