Nvidia and VMware CEOs Explore AI Infrastructure Potential
Discussion at VMworld points to possibilities that AI might open up for cloud, software development, and automation in business.
At this week’s VMworld virtual conference, Nvidia CEO Jensen Huang joined VMware CEO Patrick Gelsinger to talk about how AI and machine learning could further business transformation and the evolution of compute. They also discussed partnerships between the two companies, including their collaboration on Project Monterey, a reimagining of hybrid cloud architecture to support future apps. That project also includes Intel, Lenovo, Dell Technologies, Pensando Systems, and Hewlett Packard Enterprise.
During the talk, Gelsinger spoke about how AI could unlock software, helping businesses accelerate and apps deliver insights. VMware is a provider of cloud computing and virtualization software. “Apps are becoming central to every business, to their growth, resilience, and future,” he said. The world has reached an inflection point, Gelsinger said, in how apps are designed and delivered. “Data is becoming the jet fuel for the next generation of applications.”
He described AI as key to taking advantage of such data. Gelsinger also laid out how his company shifted part of its strategy by working with Nvidia to make the GPU a “first-class compute citizen” after years of treating compute as CPU-centric in its virtualization and automation layers. “This is critical to making [AI] enterprise-available,” he said. “It’s not some specialized infrastructure in the corner of the data center. It’s a resource that’s broadly available to all apps, all infrastructure.”
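In practice, treating the GPU as a first-class, schedulable resource often looks like the device-plugin model popularized by Kubernetes, where a container requests GPUs the same way it requests CPU and memory. The sketch below, written with the official kubernetes Python client, is illustrative only; the pod name, container image tag, and training command are hypothetical placeholders, not anything VMware or Nvidia demonstrated.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes a cluster with
# Nvidia's device plugin installed, which exposes "nvidia.com/gpu").
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-worker"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # hypothetical tag
                command=["python", "train.py"],            # hypothetical script
                # The GPU is requested like any other resource, which is
                # what "first-class compute citizen" amounts to in practice.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The scheduler then places the workload only on nodes with a free GPU, the same way it bin-packs CPU and memory.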
Broad availability can mean using GPU infrastructure to solve computer science problems at the deepest layers of the stack, Gelsinger said. That includes applying it to medical and biomedical research, handling confidential patient information, and addressing security concerns. “We expect to see all of these accelerations in healthcare being AI-powered as we go forward,” he said.
Gelsinger said other business sectors will likely be fueled by data while leveraging the power of AI, though some issues must be resolved to nurture such a trend. One challenge is making it easier for developers to work in this space and build applications for AI, data analysis, machine learning, and high-performance computing. That work spans the cloud, the data center, and the edge, he said.
Data sets and data gravity
Data gravity becomes another issue as data sets grow huge, Gelsinger said. Enterprises may have to decide whether data sets need to move to the cloud to get the most out of AI, or whether to prioritize a push to the edge to improve performance. For some regulated organizations, he said, governance might prevent moving all data out of their premises-based data centers.
Huang talked about the possibilities that may be opened up by bringing the Nvidia AI computing platform and AI application frameworks to VMware and its Cloud Foundation. The collaboration took a fair bit of computer science and engineering, he said, given the scope of meshing a robust AI stack with virtualization. “AI is really a supercomputing type of application,” Huang said. “It’s a scaled out, distributed, and accelerated computing application.” The combined resources are expected to let companies do data analytics, train AI models, and scale out inference operations, he said, which should help automate businesses and products.
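Huang’s framing of AI as a scaled-out, distributed, accelerated application maps directly onto how training jobs are commonly written today. As a hedged illustration, and not the specific stack the two companies demonstrated, here is a minimal multi-GPU training loop using PyTorch’s DistributedDataParallel; the linear model and random data are toy placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process;
    # NCCL handles GPU-to-GPU communication within and across nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)  # toy model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(32, 1024, device=local_rank)        # toy batch
        y = torch.randint(0, 10, (32,), device=local_rank)  # toy labels
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across every GPU
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=2 --nproc_per_node=8 train.py`, each process drives one GPU while gradients synchronize across nodes, which is the “scaled out, distributed, and accelerated” shape Huang described.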
Huang called AI a new way of developing software, one that could even outpace the capabilities of human developers. “Data scientists are steering these powerful computers to learn from data to generate code,” he said. For example, Huang said the University of California, San Francisco (UCSF) Health is using Nvidia’s AI algorithms and platform for research in the hospital’s intelligent imaging center in radiology, part of the center’s focus on the development of clinical AI technology for medical imaging applications.
Achieving the potential that AI can offer UCSF Health and other organizations will take data processing, machine learning to train AI models, and inference deployment, Huang said. “This computing infrastructure is super complicated,” he said. “Today it’s GPU accelerated. It’s connected by high-speed networks; it’s multi-node, scaled out for data processing and AI training. It’s orchestrating containers for the deployment of inference models.”
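To make the container-orchestrated inference piece concrete: one common pattern, offered here as an illustration rather than the deployment Huang described, is to serve models from a container running Nvidia’s Triton Inference Server and query it over HTTP. In this sketch using the tritonclient Python package, the model name resnet50, the port, and the tensor names INPUT__0/OUTPUT__0 are assumptions that depend entirely on how the model repository is configured.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton container exposing its default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor; name, shape, and dtype must match the
# deployed model's configuration (hypothetical values shown here).
inp = httpclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

out = httpclient.InferRequestedOutput("OUTPUT__0")

# The orchestrator (e.g., Kubernetes) scales these server containers;
# clients just send inference requests to a service endpoint.
result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT__0").shape)
```

Scaling inference out then becomes a matter of running more server containers behind a load balancer, which is the orchestration work Huang alluded to.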
For more on AI and cloud infrastructure, follow up with these stories:
Deloitte's State of AI in the Enterprise
Cloud Strategies Aren't Just About Digital Transformation Anymore