Supercomputers Unleash Big Data's Power

Here are six reasons why established companies, and even startups, are using supercomputing resources, and why your IT organization may want to consider such options to meet your big data and business analytics needs.
1. The Dataset Fits Into Memory
2. The Computational Capability Is Powerful
3. The Interconnect Is Critical
4. It Enables Advanced Modeling
5. It Expands What's Possible
6. It Speeds Discovery

Manufacturers, logistics companies, pharmaceutical companies, and energy companies have something in common: They're using supercomputers to push the limits of research and discovery, and to answer questions that would be impractical, or impossible, to tackle by other means.

Organizations are using the cloud and PCs to solve problems that yesterday's supercomputers solved. Cloud computing has also grown to encompass high-performance computing (HPC) in the cloud, and the providers of those products, services, and solutions are targeting research and scientific communities that have traditionally used supercomputers. As cloud solutions and supercomputers continue to advance, their use is not necessarily mutually exclusive. Some companies, including those that own their own supercomputers, are working with universities and national labs to access the most powerful resources available.

"We've got to get [companies] from where they are today to where we are in the massively parallel, multi tens of petaflop capabilities," said Jeff Nichols, associate laboratory director for Oakridge National Lab's (ORNL) computing and computational sciences, in an interview. "Companies in the automotive industry, the airline industry, the energy space, and science space want to work with us to solve their big science questions."

ORNL and the Joint Institute for Computational Sciences (JICS) -- a partnership between ORNL and the University of Tennessee -- each have a number of national-class and leadership-class computing resources with different architectures that allow them to solve different kinds and scales of problems. Titan, ORNL's big gun, is the second most powerful supercomputer on the planet today: a 27-petaflop Cray XK7 with 18,688 16-core AMD Opteron CPUs (299,008 cores in all), 18,688 NVIDIA Tesla K20 GPU accelerators, and 710 terabytes of total system memory.
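As a rough sanity check, Titan's quoted 27-petaflop peak is dominated by its GPU accelerators. The per-device throughput figures below are illustrative assumptions (roughly the double-precision peaks of K20-class accelerators and 16-core Opterons), not numbers from the article:

```python
# Back-of-the-envelope estimate of Titan's peak performance.
GPU_COUNT = 18_688    # Tesla K20-class accelerators (from the article)
CPU_COUNT = 18_688    # 16-core AMD Opteron CPUs (from the article)
GPU_TFLOPS = 1.31     # assumed double-precision peak per accelerator
CPU_TFLOPS = 0.14     # assumed double-precision peak per CPU

peak_pflops = (GPU_COUNT * GPU_TFLOPS + CPU_COUNT * CPU_TFLOPS) / 1000
print(f"Estimated peak: {peak_pflops:.1f} petaflops")  # lands near 27
```

Under these assumptions, the accelerators contribute about 90% of the machine's peak, which is why getting application code onto the GPUs matters so much on this architecture.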

In addition to having a wide range of hardware at its disposal, JICS optimizes packaged and proprietary software so it can run efficiently on supercomputers or in the cloud. JICS also has a staff of 20 Ph.D.s in physics, chemistry, computer science, math, and other fields who speak machine language. They help organizations understand what is possible to achieve computationally. And, because they're scientists, they also help companies advance the state of the art.

"As you begin to use computing, you start asking harder questions and need more computing. Novices hit a wall right away," said Tony Mezzacappa, JICS director and chair of theoretical and computational astrophysics and astronomy at the University of Tennessee, in an interview. "Knowledgeable folks [who] know what they're doing know how to scale across the full set of nodes on their machine, but eventually they hit a wall too. They want to solve a problem that requires more memory or more compute power to execute in a reasonable amount of time."

