November 29, 2021
It once seemed inevitable -- a surefire thing -- that supercomputers would help businesses tackle the demands imposed by massive databases, complex engineering tools, and other processor-draining challenges. Then, suddenly, both technology and businesses took a different course.
Chris Monroe, co-founder and chief scientist at quantum computing company IonQ, offers a simple explanation for the abrupt change in interest. “Supercomputers failed to catch on because, although they bring the promise of speed and ability to process large computational problems, they come with a significant physical footprint [and] energy/cooling requirements,” he notes. “When it comes to mainstream adoption, supercomputers never hit the right balance of affordability, size, access, and value-add enterprise use cases.”
Supercomputers have traditionally been defined by the fact that they bring together a collection of parallel hardware providing a very high computational throughput and rapid interconnections. “This is in contrast to traditional parallel processing where [there are] a lot of networked servers working on a problem,” explains Scott Buchholz, government and public services CTO and national emerging technology research director for Deloitte Consulting. “Most business problems can be solved either by the latest generation of standalone processors or else by parallel servers.”
The arrival of cloud computing and easily accessible APIs, as well as the development of private clouds and SaaS software, put high-performance computing (HPC) and supercomputers in the rearview mirror, observes Chris Mattmann, chief technology and innovation officer (CTIO) at NASA's Jet Propulsion Laboratory (JPL). “Relegated to science and other boutique use, HPC/supercomputers ... never caught up to modern-day [business] standards.”
Today, while most businesses have shied away from supercomputers, scientific and engineering teams often turn to the technology to help them address a variety of highly complex tasks in areas such as weather prediction, molecular simulation, and fluid dynamics. “The sets of scientific and simulation problems that supercomputers are uniquely well suited to solving will not go away,” Buchholz states.
Supercomputers are primarily used in areas in which sizeable models are developed to make predictions involving a vast number of measurements, notes Francisco Webber, CEO at Cortical.io, a firm that specializes in extracting value from unstructured documents.
“The same algorithm is applied over and over on many observational instances that can be computed in parallel,” says Webber, “hence the acceleration potential when run on large numbers of CPUs.” Supercomputer applications, he explains, can range from experiments in the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down to the behavior of myriads of particles.
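The pattern Webber describes, one algorithm applied independently to many instances, is what makes such workloads parallelize well. A minimal sketch in Python (the workload and function names here are illustrative, not drawn from any system mentioned in the article):

```python
# Sketch of the "same algorithm over many instances" pattern:
# each unit of work is independent, so a map distributes cleanly
# across CPU cores, and wall time shrinks roughly with core count.
from multiprocessing import Pool

def simulate_instance(seed: int) -> float:
    """Stand-in for one independent unit of work (e.g., one particle's behavior)."""
    x = seed
    for _ in range(1000):
        # Cheap deterministic arithmetic standing in for real computation.
        x = (x * 1103515245 + 12345) % (2 ** 31)
    return x / 2 ** 31

def run_parallel(seeds, workers: int = 4):
    """Apply the same function to every instance, spread across processes."""
    with Pool(workers) as pool:
        return pool.map(simulate_instance, seeds)

if __name__ == "__main__":
    results = run_parallel(range(8))
    print(len(results))  # one result per instance
```

Because no instance depends on another's output, the same map can be scaled from a laptop's cores to the thousands of nodes of a supercomputer without changing the algorithm itself.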
There's also a growing interest in graphics processing unit (GPU)- and tensor processing unit (TPU)-based supercomputers. “These machines may be well suited to certain artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” Buchholz says. “As those use cases grow, there may be more opportunities to 'rent' time via the cloud or other service providers for those who need periodic access, but don’t have a sufficient volume of use cases to warrant the outright purchase of a supercomputer.”
While mostly relegated to large academic and government laboratories, supercomputers have managed to find a foothold in a few specific industry sectors, such as petroleum, automotive, aerospace, chemical, and pharmaceutical enterprises. “While the adoption isn’t necessarily widespread in scale, it does demonstrate these organizations' capacity for investments and experimentation,” Monroe says.
The focus moving forward will be on new types of supercomputer architectures, such as neuromorphic and quantum computing, Mattmann predicts. “This is where supercomputing companies will be investing to disrupt the traditional model powering clouds.”
Classical computing will simply reach a limit, Monroe observes. “Moore's law no longer applies, and organizations need to think beyond silicon,” he advises. “Even the best-made supercomputers … are dated the moment they are designed.” Monroe adds that he's also beginning to see calls for merging supercomputers with quantum computers, creating a hybrid computer architecture.
Eventually, however, Monroe anticipates the widespread adoption of powerful and stable quantum computers. “Their unique computational power is better suited to solve complex and wide-scale problems, like financial risk management, drug discovery, macroeconomic modeling, climate change, and more—beyond the capabilities of even the largest supercomputers,” he notes. “While supercomputers still have a large presence … the top business minds are already looking toward quantum.”
Buchholz doesn't expect mainstream enterprises to reverse their view of supercomputers at any point in the foreseeable future. “If the question is whether or not most organizations need a special-purpose, multi-million-dollar piece of hardware, the answer is generally, ‘no’, because most applications and systems are targeted at what can be done with commodity hardware today,” he explains.
On the other hand, Buchholz notes that technological momentum may eventually sweep many enterprises into the supercomputer market, whether they realize it or not. “It’s important to remember that today’s supercomputer is the next decade’s commodity hardware,” he states.