Google Cloud, Nvidia, and Expanding Collaborations in AI

Partnerships between cloud providers and tech vendors may be the shape of things to come as more organizations chase opportunities in AI.

Joao-Pierre S. Ruth, Senior Editor

August 31, 2023


If this week’s Google Cloud Next conference is a barometer for strategies at play, the market seems to be on a path where cloud providers and technology vendors will continue to team up to elevate AI.

During Tuesday’s keynote streamed from the conference, Thomas Kurian, CEO of Google Cloud, was joined by Nvidia CEO Jensen Huang to discuss some of the ways their organizations are approaching AI development. Those efforts include Google Cloud expanding its partnership with Nvidia to integrate Nvidia’s accelerated libraries and GPUs with Google’s Serverless Spark offering for data science workloads.
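Neither company detailed the implementation during the keynote, but the pattern described resembles a standard PySpark job with Nvidia’s open-source RAPIDS Accelerator plugin switched on. The sketch below is an illustration under that assumption: the plugin settings are stock Spark and RAPIDS options, the application name and storage paths are placeholders, and none of it is the specific Serverless Spark configuration Google announced.

```python
# Illustrative sketch only: a PySpark session configured with Nvidia's
# RAPIDS Accelerator, which offloads supported SQL/DataFrame operators to GPUs.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")  # placeholder name
    # Load the RAPIDS Accelerator through Spark's standard plugin mechanism.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Ask Spark's resource scheduler for one GPU per executor.
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# A routine aggregation; with the plugin active, supported operators run on
# the GPU with no changes to the DataFrame code. Paths are placeholders.
events = spark.read.parquet("gs://example-bucket/events/")
events.groupBy("user_id").count().write.parquet("gs://example-bucket/user_counts/")
```

The appeal of that kind of integration is that data teams keep writing ordinary Spark code while the accelerated libraries decide which operators move to the GPU.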

“Generative AI is revolutionizing every layer of the computing stack,” Huang said. He added that the Nvidia and Google Cloud collaboration would, by his assessment, reinvent cloud infrastructure for generative AI. “This is a reengineering of the entire stack, from processors to the systems, to the networks, and all of the software.” This is intended, Huang said, to accelerate Google Cloud’s Vertex AI and to create software and infrastructure for AI researchers and developers.

Nvidia is also bringing its DGX Cloud to Google Cloud, he said. “DGX Cloud is Nvidia’s AI supercomputer. It’s where we do our AI research.” Meanwhile, Kurian talked up the hardware and other resources from Nvidia that Google Cloud uses, including the GPUs it taps to build next-generation AI.

Huang said generative AI is transforming computing and reinventing software. “The work that we’ve done to create frameworks that allow us to push the frontiers of large language models, distributed across giant infrastructures,” he said, “so that we could save time for the AI researchers, scale up to gigantic next generation models, save money, save energy -- all of that requires cutting-edge computer science.”

He followed up with the announcement of PaxML, a framework for building large language models, and plans to collaborate with Google on a next-generation supercomputer, the DGX GH200.
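PaxML is built on JAX, Google’s accelerator-oriented numerical computing library. The keynote did not include code, but a minimal, generic JAX sketch of data-parallel training hints at what “distributed across giant infrastructures” means in practice: replicate the parameters, shard the batch across devices, and average gradients between them. This is not PaxML’s actual API, and the model, learning rate, and data below are toy placeholders.

```python
# Generic data-parallel training step in JAX (not PaxML's API): each device
# computes gradients on its shard of the batch, then gradients are averaged.
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model with a mean-squared-error loss.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across all devices -- the core of data parallelism.
    grads = jax.lax.pmean(grads, axis_name="devices")
    # Plain SGD update with a placeholder learning rate.
    return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)

n = jax.local_device_count()
# Replicate the parameters and shard a toy batch across the available devices.
params = {"w": jnp.ones((4, 1)), "b": jnp.zeros((1,))}
replicated = jax.tree_util.tree_map(lambda p: jnp.stack([p] * n), params)
x, y = jnp.ones((n, 8, 4)), jnp.ones((n, 8, 1))
replicated = train_step(replicated, x, y)
```

Frameworks such as PaxML layer model sharding, pipelining, and checkpointing on top of primitives like these, which is the kind of cross-stack engineering Huang was describing.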

After the keynote, Chirag Dekate, vice president and analyst at Gartner, spoke with InformationWeek about what the continued partnership between Nvidia and Google might mean for AI, and whether it signals a growing trend.

Diverse Acceleration Technologies

“What both Google and Nvidia are trying to focus on is enabling access to their innovative technologies through their channels,” he says. “From a Google vantage point, what they’re trying to do is enable access to diverse acceleration technologies. So, customers who want to build generative AI applications, either implicitly or explicitly, can take advantage of either TPUs (tensor processing units) or GPUs depending on what they want.”

Dekate says Huang’s plans for a next-gen supercomputer in collaboration with Google show how the tech landscape is changing. “This is a sign of things to come, because what you’re now seeing is cloud providers as well as technology vendors gearing up for a future that every layer in the stack is now going to be purpose designed for AI acceleration at scale,” Dekate says, which would include the infrastructure level, the middleware level, the application level, and beyond.

Changes driven by AI, he says, will likely be nuanced as different players step into the ring. “It is not one technology that rules the gen-AI opportunity,” Dekate says. “What you see is vendors like Google enabling access to diverse technology streams.”

The continued collaboration between Google and Nvidia makes some sense, he says, given their histories and current trajectories. “Google has always been synonymous with AI ever since its inception,” Dekate says. “Google has always been known for its leadership in AI, and Nvidia has always been known for its leadership-class GPUs, enabling access to innovative compute power that is often needed for leadership-class AI.”

He described the partnership as a symbiotic relationship, with each company’s work supporting the other’s. “Google benefits from Nvidia developer ecosystems at the same time Nvidia benefits from the kind of platforms that Google is building,” Dekate says.

A shared destiny in AI seems to be in the offing as demand and opportunities to use the technology continue to escalate. “Every layer in the stack is being reinvented,” he says. “Every layer in the stack is being re-engineered to deliver an AI capability infused for enterprise ecosystems. In some sense the last decade was a cloud decade, and this decade is now the value-creation-from-AI decade, if you will. We’re already starting to see that take shape.”

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

