‘A lot of the value of AI is related to the fact that they hallucinate’, says OpenAI’s CEO at Dreamforce

From left to right: Sam Altman and Marc Benioff at Dreamforce 2023

OpenAI’s chief executive officer (CEO) Sam Altman delved into the promise of, challenges with, and trust in AI yesterday, in a conversation with Salesforce chair and CEO Marc Benioff at Dreamforce 2023.

Altman stressed OpenAI’s efforts to make the GPT series “more reliable, more robust, more multimodal, better at reasoning”, while also making the systems more secure and highly trusted, as the company launches the enterprise version of its popular chatbot, ChatGPT.

Trusting AI also comes, in part, from the assurance that the system is less likely to hallucinate, or make up ‘facts’, the executives indicated.

Altman acknowledged that there are many technical challenges involved in dealing with a model’s propensity to hallucinate, but said those hallucinations are also heavily related to the value of these systems.

He explained, “If you just sort of do the naïve thing and say, ‘never say anything that you’re not 100 per cent sure about,’ you can get a model to do that. But it won’t have the magic that people like so much, if you do it the naïve way.”

Intelligence, he added, is an “emergent property of matter to a degree that we do not contemplate enough.” But it’s also “the ability to recognize patterns in data, the ability to hallucinate, to come up with novel ideas and have a feedback loop to test those.” Studying these systems, Altman explained, is much easier than studying the human brain, where collateral damage is inevitable, as “there’s no way we’re going to figure out what every neuron in your brain is doing.”

But the goal, he noted, is to get the system to be “factual when you want and creative when you want, and that’s what [OpenAI] is working on.”
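Altman did not spell out mechanisms, but one well-known knob for trading factual determinism against creative variety in language models is the sampling temperature. The minimal sketch below (NumPy only, with hypothetical toy logits) illustrates the idea: a low temperature makes a model stick to its most confident token, while a high one flattens the distribution and invites more surprising output.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits.

    Lower temperatures concentrate probability on the most likely
    tokens (more deterministic, 'factual-leaning' output); higher
    temperatures flatten the distribution (more varied, 'creative').
    """
    rng = rng or np.random.default_rng()
    if temperature <= 0:
        return int(np.argmax(logits))  # greedy decoding
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()             # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.2, -1.0]               # toy 4-token vocabulary
print(sample_token(logits, temperature=0.1))  # near-deterministic
print(sample_token(logits, temperature=1.5))  # more exploratory
```

In practice this dial is exposed per request, which is one simple way a system can be “factual when you want and creative when you want”; real products layer many other techniques on top.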

Intelligence integrated into every system, Altman said, “will be just an expected, obvious thing”, one that amplifies an individual’s capabilities, letting them focus more on the big-picture problem and operate at a higher level of abstraction.

Further, the idea that AI’s exponential growth is going to level off is wrong, he argued, and a very difficult bias to overcome, whether for governments or individuals, adding that “when you accept it, it means that you have to confront such radical change in all parts of life.”

Altman also highlighted the need for enterprises to trust AI systems, and for vendors to be clear and transparent about their policies, an area in which OpenAI has received much criticism, notably amid recent accusations of data and intellectual property (IP) leaks, as well as web scraping.

Benioff said during his keynote yesterday that “it is a well-known secret at this point that [companies, in general] are using your data to make money,” adding, “That is not what we do at Salesforce.”

He added, “When we first released Einstein, that was the big idea. We’re not looking at your data. Your data is not our product. We are here to do one thing – to make you better. We’re here to make you more productive, to make you more successful. We’re not here to take your data. We’re not here to look at your data.”

Benioff also stressed the need for alignment in fostering trust in AI, a key discussion point in his conversation with Altman.

Altman explained that slowing down capabilities to work more on alignment is “nonsensical”, arguing that “the thing helping us make these very capable systems is how we’re going to align them to human values and intent”, and that “capabilities gain is an alignment gain”. The process of reinforcement learning from human feedback (RLHF), for instance, he added, is an alignment technique that essentially makes “a model go from not usable at all to extremely usable.”
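RLHF in production involves a full reinforcement-learning loop on top of a learned reward model, which is beyond a short sketch. The toy example below illustrates only the reward-modelling step at its core: human preference pairs (a chosen and a rejected response) train a scalar reward via the Bradley-Terry loss. The linear scorer, feature vectors, and synthetic preference data are all hypothetical stand-ins, not OpenAI's implementation.

```python
import numpy as np

# Hypothetical toy setup: each response is pre-encoded as a feature
# vector; the reward model is a linear scorer r(x) = w . x trained on
# human preference pairs (chosen, rejected).
rng = np.random.default_rng(0)
dim = 8
w = np.zeros(dim)

def reward(x):
    return w @ x

def train_on_pair(chosen, rejected, lr=0.1):
    """One gradient step on the Bradley-Terry preference loss:
    loss = -log sigmoid(r(chosen) - r(rejected))."""
    global w
    margin = reward(chosen) - reward(rejected)
    p = 1.0 / (1.0 + np.exp(-margin))      # P(chosen is preferred)
    grad = -(1.0 - p) * (chosen - rejected)
    w -= lr * grad

# Fabricated preference data: 'chosen' vectors lean toward a hidden
# direction standing in for human-preferred behaviour.
hidden = rng.normal(size=dim)
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    chosen, rejected = (a, b) if hidden @ a > hidden @ b else (b, a)
    train_on_pair(chosen, rejected)

# After training, the learned reward should usually rank a
# human-preferred sample higher than a rejected one.
a, b = rng.normal(size=dim), rng.normal(size=dim)
print(reward(a) > reward(b), hidden @ a > hidden @ b)
```

In the full RLHF pipeline, a reward model like this (at far larger scale) then supplies the training signal for fine-tuning the language model itself, typically with a policy-gradient method such as PPO.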

“There’s more one-dimensionality to the progress than people think,” he said. “And we think about it like a whole system; we have to make a system that is capable and aligned. It’s not that we have to make a capable system, and then separately, go figure out how to align it.”

The government, he concluded, also needs to put a framework in place, even if it is imperfect, to help AI companies deal with both short-term and long-term challenges. Establishing a new agency makes more sense, he added, to help the government build up the muscle to combat the potential ills of AI.
