News | 12/31/2013 09:06 AM

Supercomputers: New Software Needed

Next hurdle for high-performance computing is figuring out how to handle unstructured data.

(Image: Top 10 Government IT Innovators Of 2013)

Supercomputing, in the broadest sense, is about finding the perfect combination of speed and power, even as the definition of perfection changes as technology advances. But the single biggest challenge in high-performance computing (HPC) now lies on the software side: creating code that can keep up with the processors.

"As you go back and try to adapt legacy codes to modern architecture, there's a lot of baggage that comes along," said Mike Papka, director of the Argonne Leadership Computing Facility and deputy associate laboratory director for computing, environment and life sciences at Argonne National Laboratory. "It's not clear to me what the path forward is … [the Department of Energy] is very interested in a modern approach to programming, what applications look like."

[From the bombing in Boston to the evolution of more sophisticated robots, here are some of our top government IT stories from 2013. Top 15 Government Technology Stories Of 2013.]

Much attention has been given to rating the speed of supercomputers. Twice a year, the top 500 supercomputers are evaluated and ranked based on their processing speed, most recently in November, when China's National University of Defense Technology's Tianhe-2 (Milky Way-2) supercomputer achieved a benchmark speed of 33.86 petaflops (Pflop/s). Titan, a Cray supercomputer operated by the Oak Ridge National Laboratory and No. 1 on the previous list in November 2012, came in second at 17.59 Pflop/s.

The next level is exascale computing: machines capable of a million trillion calculations per second (an exaflop). HPC may reach that level by 2020, Papka said, but before then -- perhaps in the 2017-2018 timeframe -- the next generation of supercomputers may get to 400 Pflop/s.
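For a sense of scale, here is an illustrative back-of-the-envelope comparison (not from the article): a petaflop is 10^15 floating-point operations per second and an exaflop is 10^18, so an exascale machine would be roughly 30 times faster than Tianhe-2's 33.86-Pflop/s benchmark. The Python sketch below assumes a purely hypothetical workload size for the comparison.

    # Illustrative scale comparison only. The Tianhe-2 and Titan figures are the
    # TOP500 Linpack results cited above; the workload size is hypothetical.
    PFLOP = 1e15  # floating-point operations per second in one petaflop
    EFLOP = 1e18  # floating-point operations per second in one exaflop

    systems = {
        "Tianhe-2 (33.86 Pflop/s)": 33.86 * PFLOP,
        "Titan (17.59 Pflop/s)": 17.59 * PFLOP,
        "Proposed 400-Pflop/s machine": 400 * PFLOP,
        "Exascale machine (1 Eflop/s)": 1 * EFLOP,
    }

    workload = 1e21  # a hypothetical job requiring 10^21 floating-point operations

    for name, flops in systems.items():
        hours = workload / flops / 3600
        print(f"{name}: about {hours:.1f} hours at benchmark speed")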

"If all the stars aligned, the money's there, and developers had the resources [by] combining Oak Ridge and Argonne, we have made the case that the scientific community needs a 400-petaflop machine," Papka said. "Vendors have work to do, labs have infrastructure to put in place -- heating, cooling, floor space. It's not just buying machines any more, you've got to have the software [and] applications in place."

One of the challenges to building faster supercomputers is designing an operating system capable of handling that many calculations per second. Argonne, in collaboration with two other national laboratories, is working on that problem through a project called Argo.

Tony Celeste, director of federal sales at Brocade, said another emerging trend in HPC is a growing awareness of its applicability to other IT developments, such as big data and analytics. "There are a number of emerging applications in those areas," he said. "Software now, networks in particular, have to move vast amounts of data around. The traffic pattern has changed; there's a lot of communication going on between servers, and between servers and supercomputers ... It's changing what supercomputing was 10, 15 years ago."

Other important trends Celeste identified include an emphasis on open, rather than proprietary, systems and a growing awareness of energy efficiency as a requirement.

Patrick Dreher, chief scientist in the HPC technologies group at DRC, said the growing interest in HPC outside the circles of fundamental scientific research is driven by "demand for better, more accurate, more detailed computational simulations across the spectrum of science and engineering. It's a very cost-effective way to design products, research things, and much cheaper and faster than building prototypes."

Dreher's colleague, Rajiv Bendale, director of DRC's science and technology division, said the HPC community's emphasis is shifting a little away from the speed/power paradigm and toward addressing software challenges. "What matters is not acquiring the iron, but being able to run code that matters," Bendale said. "Rather than increasing the push to parallelize codes, the effort is on efficient use of codes."
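One way to see Bendale's point -- an illustrative sketch, not anything DRC described -- is Amdahl's law, the standard formula for the speedup of a partly parallel code: no number of processors can overcome an inefficient serial portion, so making the code itself run efficiently often pays off more than pushing parallelism further.

    # Illustrative only: Amdahl's law, a standard model not taken from this article.
    # speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction of the work
    # and n is the number of processors.
    def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
        """Theoretical speedup when parallel_fraction of the work scales over n_processors."""
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_processors)

    if __name__ == "__main__":
        # A code that is 95% parallel tops out near 20x speedup no matter how many
        # cores are added; shrinking the serial 5% ("efficient use of codes")
        # helps more than buying additional iron.
        for p in (0.95, 0.99, 0.999):
            for n in (1_000, 100_000):
                print(f"parallel fraction {p:.3f}, {n:>7,} cores -> "
                      f"speedup {amdahl_speedup(p, n):.1f}x")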

