Google Cloud’s Anton Chuvakin Talks GenAI in the Enterprise

Popular, consumer-grade generative AI might serve as a fun toy, but enterprises should rethink allowing its use within their operations given the alternatives.

Joao-Pierre S. Ruth, Senior Editor

February 8, 2024

7 Min Read
[Image: High-tech AI computer chip with futuristic circuit board design, 3D illustration. Credit: Adam Flaherty via Alamy Stock Photo]

With the proliferation of generative AI, much of it consumer-oriented, it may be inevitable that such platforms and tools find their way into the workplace -- even if they are not designed to meet the oversight businesses require in their technology.

From proprietary code to sensitive data, the stakes can be high for organizations. Generative AI (genAI) that is not built specifically for the rigors businesses face could be a liability when it comes to regulatory compliance on information security and access. Moreover, what consumer-focused generative AI produces might be fraught with AI hallucinations and errors, and may simply not measure up to the standards businesses require.

Amusing tools for spawning digital content might leave a few windows, doors, and ventilation ducts open -- potentially compromising digital security.

Anton Chuvakin, security advisor at the Office of the CISO with Google Cloud, spoke with InformationWeek about how consumer-oriented generative AI might bring more headaches than efficiency to businesses.

Consumer technologies, such as smartphones, worked their way into the workplace, sometimes before folks really thought about how device management or security would be handled. Organizations might be caught up in the shiny new object of generative AI now and not asking how they can actually police it. Can they shift to something else, or potentially stop the use of consumer-oriented generative AI?


I think that the cases we’ve encountered are kind of fun and occasionally quite irrational. Wasn’t there the classic case in the media about the lawyer who was using ChatGPT, and it was giving him made-up data? To me, this is just the tip of the iceberg of consumer-grade AI being used for business. In many cases, what I have to deal with as part of the Office of the CISO is a security leader calling me and saying, “We need security guidance.” And then they ask about these controls and those controls. And then they say, “Oh, by the way, we use ChatGPT-3.”

I’m like, “But it’s a toy.”

A very fun toy. The point is that these are toys for fun and personal education. But you’re describing levels of control, granular access between teams at your organization, and then you say you use ChatGPT-3. That makes absolutely no sense, but to him it did, because his business was pushing him to use this tool.

In essence, these stories are quite surreal at times because what we encounter from clients is just a lack of understanding of what’s a consumer toy -- probably inaccurate, but still good.


I was writing a letter to my former dance teacher using Bard [Google’s chat-based AI tool; Google announced today it was changing Bard's name to "Gemini"]. It’s like a belated Christmas message. Bard [Gemini] is doing great with that. Would I use it to give security advice to a customer, given my realities? No. General advice would probably be good, but you need to have precision; you need to have certainty; you need to have provenance of data. But this intermixing is kind of endemic, and sometimes it pops up not only as a mismatch of use cases, but also as people demanding from consumer-grade technology the controls they expect in enterprise technology.

The result is that even more hilarity is generated, because it’s not going to fit.

When our field teams talk to customers about Vertex AI [Google’s tool for testing and prototyping generative AI models], there are many, many layers of controls -- technology controls and procedural controls explaining how we do things.

What I want to invent, ultimately what we want to invent, is more than just education. We don’t just want to tell people, “Hey, you’re really doing it wrong.” That only goes so far. I feel like building an enterprise stack that is as easy to adopt as consumer-grade tech but has all the controls is going to be the direction, probably, for the future.


Another common enterprise theme is that people ask, “Would it learn from my prompts?” And the answer is, “Yes, of course, for consumer-grade; no, of course not, for enterprise.”

It’s like complete polar opposites. Yes, it would. No, it would not. Absolutely, yes. Absolutely, no. You see these forks in the road, and if you really want an enterprise AI for enterprise use cases, you push vendors to build things and require things: require controls, require privacy controls, require governance controls -- a long list of things, versus just going and signing up.
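
To make that split concrete, here is a minimal sketch in Python, assuming a hypothetical ProviderPolicy record and a hypothetical send_prompt wrapper (neither is a real vendor API): before any business prompt leaves the organization, the code checks the provider's declared data-handling terms and blocks consumer-grade services that train on prompts or lack access controls.

    # Hypothetical sketch: gate prompt submission on a vendor's declared data-handling terms.
    # All names here (ProviderPolicy, send_prompt) are illustrative, not a real product API.

    from dataclasses import dataclass


    @dataclass
    class ProviderPolicy:
        """Data-handling terms an enterprise would demand in writing from a vendor."""
        trains_on_prompts: bool      # consumer-grade tools often may; enterprise tiers should not
        retains_prompts_days: int    # retention window for submitted prompts
        access_controls: bool        # per-team / per-user isolation of prompts and outputs


    def send_prompt(prompt: str, policy: ProviderPolicy) -> str:
        """Refuse to submit a prompt unless the provider's policy meets enterprise requirements."""
        if policy.trains_on_prompts or not policy.access_controls:
            raise PermissionError(
                "Provider policy does not meet enterprise requirements; "
                "do not submit business data to this endpoint."
            )
        # ... call the approved enterprise endpoint here ...
        return f"[submitted {len(prompt)} chars under enterprise terms]"


    consumer = ProviderPolicy(trains_on_prompts=True, retains_prompts_days=365, access_controls=False)
    enterprise = ProviderPolicy(trains_on_prompts=False, retains_prompts_days=30, access_controls=True)

    try:
        send_prompt("Review this proprietary source file...", consumer)
    except PermissionError as err:
        print("Blocked:", err)

    print(send_prompt("Summarize this incident report...", enterprise))

The point of the sketch is simply that the answer to "would it learn from my prompts?" has to be established, in writing, before the tool is wired into a workflow, rather than discovered afterward.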

There are mindsets about being aware of security, visibility, access, and what is going on within an IT infrastructure or cloud infrastructure. For whatever reason, did that just kind of get forgotten once generative AI came onto the scene?

If you look at some of the online reports about people who are trying to create an enterprise AI out of consumer AI, you will see some hilarity in the access permissions. For example, your function at an enterprise shouldn’t see what my function in the enterprise does with the AI. It may be compliance; it may be just practical. It may be that mine is less sensitive than yours. But this type of cross-pollination, cross-learning is sort of assumed in consumer tools, because you want the model to learn from everything. In enterprise, it’s assumed not to be there.

For example, if I am a security incident responder and you are an IT guy, I don’t want you looking at my tickets (a very 1990s example), because it is possible that I’m investigating you for leaking corporate data. There are many other reasons why security data is more sensitive. Imagine the same thing with genAI where you’re training AI on tickets.

Some companies would say, “AI -- tickets. Push the button.” Did they think, “Whoa, wait a second. The permissions, the level of sensitivity here, it’s not just ‘a ticket database’”? I’ve been telling a story -- it didn’t happen to a client, but it’s something I’ve heard from industry contacts, where something vaguely similar happened. If they didn’t have genAI, if they were just deploying ordinary enterprise technology, they would think, “OK, what are the access rules? Who would access what?”

But with this particular AI, not only did they not think about it, the actual tech stack they used did not have a way to do it, because it was ultimately derived from consumer genAI. To me, this is the kind of permissioning that gets skipped, and I’m not talking about fine-grained permissioning, but more like, “Just give it all the data.”
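
As a rough illustration of the permissioning step Chuvakin says gets skipped, the short Python sketch below filters tickets by owning team and sensitivity before any of them reach a genAI training or indexing job. The Ticket fields and the sensitivity cutoff are illustrative assumptions, not a description of any specific product.

    # Hypothetical sketch: filter a ticket corpus before handing it to a genAI pipeline,
    # instead of "AI -- tickets, push the button." Field names and thresholds are assumed.

    from dataclasses import dataclass

    SENSITIVITY_THRESHOLD = 2  # assumed cutoff: 0 = routine IT, 3 = active security investigation


    @dataclass
    class Ticket:
        ticket_id: str
        team: str          # which function owns the ticket (IT, security, HR, ...)
        sensitivity: int   # 0 (routine) .. 3 (active security investigation)
        body: str


    def tickets_for_ai(tickets: list[Ticket], allowed_teams: set[str]) -> list[Ticket]:
        """Return only tickets the AI corpus for these teams is allowed to include."""
        return [
            t for t in tickets
            if t.team in allowed_teams and t.sensitivity < SENSITIVITY_THRESHOLD
        ]


    corpus = [
        Ticket("T-101", "IT", 0, "Password reset for new hire"),
        Ticket("T-202", "security", 3, "Investigating possible data exfiltration by an employee"),
    ]

    # Only the routine IT ticket survives; the incident responder's ticket never enters the corpus.
    print([t.ticket_id for t in tickets_for_ai(corpus, allowed_teams={"IT"})])  # ['T-101']

The design point is that the filter has to exist somewhere in the stack; as Chuvakin notes, a stack derived from consumer genAI may simply have no place to put it.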

What are the consequences for enterprises? What’s at stake here if organizations don’t make it clear within their operations how they’re going to use genAI, and whether or not they’re going to allow use of the consumer-facing options? Have we learned lessons from examples in the earlier days of ChatGPT, when proprietary code from Samsung got into the wild?

In essence, they went to ChatGPT and submitted pieces of Samsung code, wanting to improve it or whatever the use case was -- I vaguely recall that. It wasn’t really an accident from their point of view. They really did want to do exactly that. It was just the wrong tool.

The problem is that at the time, there were no right tools. I think that the excitement to use new technology is obviously a feature of many IT technologists. Maybe less so in security. Frankly, just the other day, I was polling security leaders about what they care more about: securing AI or using AI for security?

I expected them to go full-on paranoia and say, “Hey, we’re all about securing AI.” But in reality, it split half and half. It was a very informal poll, not Google-sponsored. The point is that the balance wasn’t all, “I’m a CISO; I care about secure use of AI by my company.” One CISO said yes to that, while another said, “I care about using AI for security now.” The motivation to move quickly is very strong, and I sense that the fear of missing out here is stronger. This is my guess, based on my experience.

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

