Panel examines the role humans must play in a GenAI world

A panel last week at VMware Explore 2023 probed the many twists and turns organizations must be prepared to take when adopting generative artificial intelligence (GenAI) initiatives.

It was a frank and timely conversation, devoid of any marketing hype, that probed the ethical principles that need to be in place for the development of AI systems, as well as the role humans should play in shaping what organizers described as the “next wave of business innovation.”

The panel, moderated by Richard Munro, director of strategy and communications for the VMware Office of the Chief Technology Officer (OCTO), featured data journalist and author Meredith Broussard, an associate professor at New York University; Chris Wolf, vice president of VMware AI Labs; and Karen Silverman, chief executive officer (CEO) and founder of The Cantellus Group and an expert in governance strategies for AI.

Munro launched the discussion by asking Broussard what AI is and what it is not. She defined it as “just math, very complicated and beautiful math.

“It is really important to emphasize what is real about AI, as opposed to what is imaginary. And another way that I like to describe AI is it is pattern reproduction. What we do when we make an AI system is we take a whole bunch of data, as much data as we can get from anywhere, usually scraped from the internet, and we feed it into the computer and say computer, make a model.

“Computer makes a model and (it) shows the mathematical patterns in the data. And then we can use that model for a number of amazing things. We can make decisions, we can make predictions, we can generate new text, we can generate new images, we can generate new audio. But all of the biases are reflected in the data we are using to train the model. And therefore, the model is also going to reproduce certain biases that are pre-existing.”

Munro said it really comes down to two specific questions: what role should humans play in an organization that wants to embrace the capabilities AI offers, and what role should they play in making that happen?

He then asked Wolf to describe how he approached AI and what kind of “things did you implement at the organizational level?” The key, Wolf said, was to develop guidelines early on, when the ChatGPT explosion really took hold.

Internally, he said, he received a number of requests to integrate a version of ChatGPT into VMware’s product set. “It was basically a case of, ‘Hey Chris, what do you think about this?’ And I am like, ‘What do you think about no.’

“The challenge sometimes with AI is that if you look at it as a solution in isolation, you do not have the breadth to say, ‘well, what are the implications?’ The reason we were saying no to these ChatGPT integrations was because they could, in turn, violate the privacy and compliance mandates that our customers have. Therefore, we cannot do these things and products in the way that some folks would desire.”

While VMware has since established an internal AI Council that provides a set of implementation guidelines, Wolf said the company is “being very mindful internally and externally in terms of how we are approaching AI.”

Silverman replied that while setting up an advisory council is a good idea, with seemingly “everybody on the planet” expressing an interest in ChatGPT, the “first thing we all need to do is take a deep breath. Sort out where you really think these technologies are going to help you in your daily life, whether that is your personal daily life or your corporate daily life, and are there repetitive processes that you would like to do away with, do faster, do better? Are there creative projects that you would like support with and help with?

“This is also a team sport. As humans, we have to get over the orthodoxy of how some of our information and organizational flows have worked. And we have to break down silos around security, privacy, certainly compliance, and ethics, but also strategy, revenue, education, training … I mean, just go down the list. We have to start thinking creatively and accept that this is going to take more, not less, work from us.”

Wolf said his advice to any organization is this: avoid the temptation of a quick win just to satisfy some mandate and score an early success for AI; be mindful, and consider an AI platform that has choice built in as a capability.

“That’s really important, because this space is moving so fast. Every week there are new models emerging, and the churn of models and innovation is happening so quickly that you really cannot just say, ‘I’m going to just use this one model to solve all of my business problems.’

“A platform that gives you the ability to quickly pivot as better technologies come around, or models that are more aligned to your business goals and your business ethics, should be your foundation. Work to lay the AI foundation today, and then have optionality in terms of how you can take advantage of it going forward.”
