Nvidia has taken the wraps off its DGX-1 deep learning supercomputer, which contains the company's next-generation GPU accelerators and, Nvidia claims, delivers the equivalent throughput of 250 x86 servers.
The DGX-1 is designed to help researchers and scientists analyze and draw insight from vast amounts of data, as well as unlock the potential of artificial intelligence (AI) technology.
The system ships with a complete suite of optimized deep learning software, including tuned versions of several popular frameworks such as Caffe, Theano, and Torch.
The DGX-1 additionally provides access to cloud management tools, software updates, and a repository for containerized applications.
For Nvidia, the release of the DGX-1 comes at a time when enterprises are starting to invest much more money in deep learning software as the markets for artificial intelligence (AI), the Internet of Things (IoT), and big data continue to grow. In November, research firm Tractica predicted that annual revenue from enterprise deep learning software would grow from $109 million in 2015 to $10.4 billion in 2024.
"Artificial intelligence is the most far-reaching technological advancement in our lifetime," Jen-Hsun Huang, CEO and cofounder of NVIDIA, wrote in an April 5 statement. "It changes every industry, every company, everything. It will open up markets to benefit everyone."
Nvidia plans to release the DGX-1 in the US in June. The company then plans to release it to other markets starting in the third quarter.
The DGX-1 system is the first to be built with the company's Pascal-powered Tesla P100 accelerators, interconnected with Nvidia NVLink.
Pascal delivers more than 5 teraflops of double-precision performance for high-performance computing (HPC) workloads, while NVLink is a high-bandwidth, energy-efficient interconnect that enables faster communication between CPUs and GPUs, as well as between the GPUs themselves.
Nvidia claims its NVLink technology allows data-sharing at rates five to twelve times faster than the traditional PCIe Gen3 interconnect, and touts the technology as a fundamental ingredient of the US Department of Energy's next-generation supercomputers.
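As a rough sanity check on that claim, the commonly cited bandwidth figures for the two interconnects (assumed here, not stated in the article) can be compared directly:

```python
# Back-of-the-envelope comparison of interconnect bandwidth.
# The specific figures below are assumptions based on commonly
# cited specs, not numbers from this article.

PCIE_GEN3_X16_GBPS = 16.0  # ~16 GB/s per direction for a PCIe Gen3 x16 slot
NVLINK_LINK_GBPS = 20.0    # ~20 GB/s per direction per first-gen NVLink link
LINKS_PER_GPU = 4          # a Tesla P100 exposes four NVLink links

aggregate_nvlink = NVLINK_LINK_GBPS * LINKS_PER_GPU  # 80 GB/s per direction
speedup = aggregate_nvlink / PCIE_GEN3_X16_GBPS

print(f"Aggregate NVLink bandwidth: {aggregate_nvlink:.0f} GB/s per direction")
print(f"Speedup over PCIe Gen3 x16: {speedup:.1f}x")
```

Under these assumptions, aggregating all four links gives roughly a factor of five, the low end of the five-to-twelve-times range Nvidia quotes; higher multiples come from counting bidirectional traffic or multi-GPU topologies.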
Other technologies employed in the DGX-1 include 16nm FinFET fabrication, for improved energy efficiency; Chip-on-Wafer-on-Substrate packaging with HBM2 memory, for higher memory bandwidth on big data workloads; and new half-precision instructions that deliver more than 21 teraflops of peak performance for deep learning.
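Half precision (FP16) halves storage per value and, on Pascal, roughly doubles peak arithmetic throughput relative to single precision, which is how the peak figure climbs past 21 teraflops. NumPy's `float16` type can illustrate the format itself (the GPU instructions are not exposed here; this is just a sketch of the trade-off):

```python
import numpy as np

# FP16 uses 2 bytes per value instead of FP32's 4.
x32 = np.ones(1024, dtype=np.float32)
x16 = x32.astype(np.float16)
print(x32.nbytes, x16.nbytes)  # 4096 vs 2048 bytes: half the memory

# The trade-off is precision: FP16 carries about 3 decimal digits,
# so small increments to large values are simply lost.
a = np.float16(1024.0)
b = np.float16(0.25)
print(a + b == a)  # True: 0.25 is below FP16's resolution near 1024
```

This precision loss is why half precision suits deep learning, which tolerates low-precision arithmetic, better than traditional HPC workloads that demand double precision.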
In addition, the software includes the Nvidia Deep Learning GPU Training System (DIGITS), an interactive system for designing deep neural networks (DNNs). It also includes the company's recently released CUDA Deep Neural Network library (cuDNN) version 5, a GPU-accelerated library of primitives for designing DNNs.
cuDNN provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers. It is part of the Nvidia Deep Learning SDK.
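To make concrete what one of those primitives does, here is a naive NumPy reference sketch of a forward convolution, the kind of routine cuDNN replaces with a heavily optimized GPU kernel (this is an illustration, not cuDNN's actual API):

```python
import numpy as np

def conv2d_forward(x, w):
    """Naive 2-D forward convolution (strictly, cross-correlation, as in
    most deep learning frameworks): x is (H, W), w is (kH, kW),
    stride 1, no padding."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output element is the sum of an input patch
            # weighted elementwise by the kernel.
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

# A 3x3 averaging kernel over a 5x5 input yields a 3x3 output.
x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0
y = conv2d_forward(x, w)
print(y.shape)  # (3, 3)
```

The nested loops here are exactly what a tuned library avoids: cuDNN's value is running this same mathematical operation in parallel across thousands of GPU cores.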
While Nvidia may be first to market with a deep learning supercomputer, the company may not have much time to rest on its laurels, with rivals ranging from Apple to Salesforce beefing up their investment in AI and deep learning technology.
Salesforce has acquired several deep learning companies, including image recognition specialist MetaMind, machine learning startup PredictionIO, enterprise data science company MinHash, and Tempo AI, maker of a "smart" iPhone calendar app that automatically added context such as contacts and documents to calendar items.
In October, Apple reportedly acquired startup Perceptio, which focuses on image-based recognition and deep learning. Both concepts are based on sets of algorithms that attempt to model high-level abstractions in data.

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.