IBM Corp. is no newbie when it comes to artificial intelligence (AI). Its Watson system, which caught the public’s eye in 2011 when it won on the television game show Jeopardy!, has been used for everything from chatbots to assisting in medical research to (I kid you not) concocting recipes for poutine.
However, recent developments around generative AI applications such as ChatGPT, and the associated horror stories of the technology making up “facts” – a phenomenon known as hallucinations – have raised the technology’s profile to unprecedented heights, and often not in a good way. That notoriety can make it difficult for businesses to accept generative AI as a tool.
Yet at IBM Think on Tour in Toronto, Sebastian Krause, IBM senior vice president and chief revenue officer, noted during his keynote, “The rapid advancements in AI have really brought its potential into focus, creating one of those, I would say, very rare moments when a technology is of such a profound benefit to both businesses and societies alike – or it can be of such a profound benefit if it is used correctly. So with all this potential, we believe that it is very important to differentiate between AI for business and AI for the consumer space.”
After Krause’s keynote, IT World Canada sat down with Dave McCann, IBM Canada president and managing partner of IBM Consulting Canada, and Frank Attaie, IBM Canada’s general manager of technology, for a discussion about what factors can make AI enterprise-grade.
“Trust, transparency, precision and accuracy, removal of any and all biases in a very auditable fashion down to the base code, and then raising the right risk, regulatory, and ethical standards,” Attaie said. “Those are really the foundational elements of what enterprise-grade AI needs to be. That’s really our view of what you need in order to push enterprise-grade AI out.”
There have, McCann observed, been several driving forces that created what he called a “uniqueness of the opportunity that’s in front of us.” First, easy access to large foundation models like ChatGPT let people try the technology out, see quick results, and then realize that it could disrupt or change the way they do things. That pulled AI into the spotlight.
The second force put the brakes on. “From a business perspective, we can’t really think about ‘touch, put our data in, include it within,’ without understanding how trustworthy it is, how transparent it is, how we can be able to say where every single thing comes from,” he said. “And it was almost like this driving force where businesses rapidly tried it – and stopped.”
Most of IBM’s customers across Canada have strong policies in place that do not allow these early models to be used for business, he explained. “I think that created this moment where we think there’s a lot of opportunity, which is really around the foundation of how we built watsonx. And the watsonx.governance element of our solution, I think, has been at the core of driving differentiation and, let’s call it the excitement, business has around taking AI adoption with foundation models and generative AI to the next level.”
The watsonx platform is IBM’s recently announced AI and data platform for business. It consists of three components: the watsonx.ai studio for new foundation models, generative AI and machine learning; watsonx.data, which the company says is a “fit-for-purpose store for the flexibility of a data lake and the performance of a data warehouse”; and the watsonx.governance toolkit “to enable AI workflows that are built with responsibility, transparency and explainability.”
“It is an evolution of the tools that we’ve built … we’ve talked about large language models, foundation models, it’s been five, six years; we’ve been doing AI within the mainframe since the 1960s,” Attaie said. “The tools that you see today, very much foundational, are from the investments we’ve made in prior years.”
“As we think about what generative AI brings to the table on the productivity discussion, we’re finally at that place where instead of building ten individual fairly significant models to solve ten different use cases, we can build one foundation model, or we can benefit from an open source, secure governed foundation model that’s trusted and transparent,” McCann added.
“And you can load it into watsonx, and then build 10 use cases off that with almost minimal effort. And I think that’s the key differentiation to where we were 12 months ago to where we are today, with where we can leverage foundation models and generative AI, because the productivity discussion becomes real.”
That’s the big shift now, he noted, moving toward foundation models rather than building individual models for each use case. “I think the shift in our discussion that we’re having with our clients you’re seeing prominently everywhere, and it gets me excited, is the conversation starting with AI plus something else. And that’s normally starting in a way where you can benefit from generative and foundation models. So to me, I think that that’s the big thing.”