Google Cloud’s Anton Chuvakin Talks GenAI in the Enterprise

With the proliferation of generative AI, much of it consumer-oriented, it may be inevitable that such platforms and tools find their way into the workplace -- even if they are not designed to meet the oversight businesses require of their technology.

From proprietary code to sensitive data, the stakes can be high for organizations. Generative AI (genAI) that is not built specifically for the rigors businesses face could be a liability when it comes to regulatory compliance on information security and access. Moreover, what consumer-focused generative AI produces might be fraught with AI hallucinations and errors, and might simply not measure up to the standards businesses require.

Amusing tools for spawning digital content might leave a few windows, doors, and ventilation ducts open -- potentially compromising digital security.

Anton Chuvakin, security advisor at the Office of the CISO with Google Cloud, spoke with InformationWeek about how consumer-oriented generative AI might bring more headaches than efficiency to businesses.

Consumer technology, such as smartphones, worked its way into the workplace, sometimes before anyone really thought about how device management or security would be handled. Organizations might be caught up in the shiny new object of generative AI now, without asking how they can actually police it. Can they shift to something else, and potentially stop the use of consumer-oriented generative AI?

I think that the cases we've encountered are kind of fun and occasionally quite irrational. Wasn't there the classic case in the media about the lawyer who was using ChatGPT, and it was giving him made-up data? To me, this is the tip of the iceberg of consumer-grade AI being used for business. In many cases, what I have to deal with as part of the Office of the CISO is the security leader calling me and saying, "We need security guidance." And then they ask about these controls and those controls. And then they say, "Oh, by the way, we use ChatGPT-3."

I'm like, "But it's a toy."

A very fun toy. The point is that these are toys for fun and personal education. But you're describing levels of control, granular access between teams at your organization, and then you say you use ChatGPT-3. That makes absolutely no sense, but to him it did, because his business was pushing him -- they wanted to use this tool.

In essence, these stories are quite surreal at times, because what we encounter from clients is just a lack of understanding of what's a consumer toy -- probably inaccurate, but still good.

I was writing a letter to my former dance teacher using Bard [Google's chat-based AI tool. Today, Google announced it was changing Bard's name to "Gemini."]. It's like a belated Christmas message. Bard [Gemini] is doing great with that. Would I do it for security advice to a customer, given my realities? No. General advice would probably be good, but you need to have precision; you need to have certainty; you need to have provenance of data. But this intermixing is kind of endemic, and sometimes it pops up not only as a mismatch of use cases, but also as people demanding controls they expect in enterprise technology from consumer-grade technology.

The result is that even more hilarity is generated, because it's not going to fit.

When our field teams talk to customers about Vertex AI [Google's tool for testing and prototyping generative AI models], there are many, many layers of controls -- technology, procedural controls explaining how we do things.

What I want to invent -- ultimately, what we want to invent -- is more than just education. We don't just want to tell people, "Hey, you're really doing it wrong." That only goes so far. I feel like building an enterprise stack that is as easy to adopt as consumer-grade tech but has all the controls is going to be the direction, probably, for the future.

Another common enterprise theme is people asking, "Would it learn from my prompts?" And the answer is, "Yes, of course" for consumer-grade; "No, of course not" for enterprise.

It's like complete polar opposites. Yes, it would. No, it would not. Absolutely yes. Absolutely no. You see these forks in the road, and if you really want an enterprise AI for enterprise use cases, you push vendors to build things, require things -- require controls, require privacy controls, require governance controls -- a long list of things, versus just going and signing up.
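To make that fork concrete, here is a minimal Python sketch of the data-usage split he describes. The GenAIClient class and its allow_training_on_prompts flag are hypothetical illustrations, not any vendor's real API:

```python
from dataclasses import dataclass

@dataclass
class GenAIClient:
    """Hypothetical client illustrating the consumer/enterprise fork."""
    tier: str                                # "consumer" or "enterprise"
    allow_training_on_prompts: bool = False  # the control enterprises demand

    def send_prompt(self, prompt: str) -> str:
        if self.tier == "consumer":
            # Consumer-grade: prompts may be retained and used to improve
            # the model, with little or no say for the user.
            return "prompt may be used for model training"
        if self.allow_training_on_prompts:
            # Enterprise-grade with an explicit, auditable opt-in.
            return "prompt used for training by explicit opt-in"
        # Enterprise-grade default: prompts are processed but never
        # fed back into model training.
        return "prompt processed; excluded from training"

# The same question -- "Would it learn from my prompts?" -- gets
# polar-opposite answers depending on the tier:
print(GenAIClient(tier="consumer").send_prompt("draft a memo"))
print(GenAIClient(tier="enterprise").send_prompt("draft a memo"))
```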

There are established mindsets around security, visibility, and access -- knowing what is going on within an IT or cloud infrastructure. For whatever reason, did that just kind of get forgotten once generative AI came onto the scene?

If you look at some of the online reports about people who are trying to create an enterprise AI out of consumer AI, you see some hilarity in the access permissions. For example, your function at an enterprise shouldn't see what my function in the enterprise does with the AI. It may be compliance; it may be just practical. It may be that mine is less sensitive than yours. But this type of cross-pollination, cross-learning, is sort of assumed in consumer, because you want it to learn from everything -- and it's assumed to not be there in enterprise.

For example, if I am a security incident responder and you are an IT guy, I don't want you looking at my tickets (a very 1990s example), because it is possible that I'm investigating you for leaking corporate data. There are many other reasons why security data is more sensitive. Imagine the same thing with genAI, where you're training AI on tickets.

Some companies would say, "AI -- tickets. Push the button." Did they think, "Whoa, wait a second"? The permissions, the level of sensitivity here -- it's not just a ticket database. I've been telling a story -- it didn't happen to a client, but it's something I've heard from industry contacts, where something vaguely similar happened. If they didn't have genAI, if they were just playing in enterprise, they would think, "OK, what are the access rules? Who would access what?"

But with this particular AI, not only did they not think about it, the actual tech stack they used did not have a way to do it, because it was ultimately derived from consumer genAI. To me, this type of permissioning -- and I'm not talking about fine-grained permissioning, but more like, "Just give it all the data" -- is where things break down.
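To sketch what the missing step looks like in practice, the Python below filters tickets by role before any of them reach a model. The Ticket type, roles, and data are hypothetical illustrations, not a real ticketing or AI API -- a minimal sketch of permission-aware data selection, assuming access rules are already attached to each record:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    """Hypothetical ticket record with access rules attached."""
    ticket_id: int
    text: str
    allowed_roles: set[str]  # roles cleared to see this ticket

def tickets_visible_to(tickets: list[Ticket], role: str) -> list[Ticket]:
    """Filter by access control *before* the data reaches a model,
    so an IT assistant never trains on, or answers from, the
    incident responder's sensitive tickets."""
    return [t for t in tickets if role in t.allowed_roles]

tickets = [
    Ticket(1, "Printer jam on floor 3", {"it", "security"}),
    Ticket(2, "Investigating employee X for data exfiltration", {"security"}),
]

# "AI -- tickets. Push the button" skips this step and hands the model
# everything; the filtered call is what permissioning means here.
for t in tickets_visible_to(tickets, role="it"):
    print(t.ticket_id, t.text)  # prints ticket 1 only
```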

What are the consequences for enterprises? What's at stake here if organizations don't make it clear within their operations how they're going to use genAI, and whether or not they're going to allow use of the consumer-facing options? Have we learned lessons from the earlier days of ChatGPT, when proprietary code from Samsung got into the wild?

In essence, they went to ChatGPT, and they submitted pieces of Samsung code and wanted to improve it, or whatever the use case was -- I vaguely recall that. It wasn't really an accident from their point of view. They really did want to do exactly that. It was just the wrong tool.

The problem is that, at the time, there were no right tools. I think that the excitement to use new technology is obviously a feature of many IT technologists -- maybe less so in security. Frankly, just the other day, I was polling security leaders about what they care more about: securing AI, or using AI for security?

I expected them to go full-on paranoia and say, "Hey, we're all securing AI." But in reality, it split half and half. It was a very informal poll, not Google-sponsored. The point is that the balance wasn't, "I'm a CISO; I care about secure use of AI by my company." One CISO said, "yes"; another CISO said, "I care about using AI for security now." The motivation to move quickly is very strong, and I sense that the fear of missing out here is stronger. This is my guess, based on my experience.
