
Will ChatGPT eliminate jobs? Two Gartner analysts predict it will not


Numbers frequently tell a story, which is why the following is of such importance: it took two years for Twitter to reach one million users, roughly 10 months for Facebook, upwards of two-and-a-half months for Spotify, a month for Instagram, and, for OpenAI’s ChatGPT, all of five days following its launch in late November 2022.

Those stats were cited earlier this week at a press-only presentation broadcast live from the Gartner Data & Analytics Summit in Orlando, Fla., which discussed whether AI-based content generation (Generative AI, aka Gen AI) tools such as ChatGPT and Google’s Bard will help or hurt organizations.

Primary themes included whether these tools will replace jobs, and how they will be integrated into the technologies that organizations use every day.

These offerings, Gartner noted, have generated significant hype with their potential to enable innovative and useful functionality, but there have also been many discussions around the problems such technology can create.

Speakers in the Q&A session were Frances Karamouzis, Gartner distinguished VP analyst, and Svetlana Sicular, Gartner VP analyst, both of whom focus on issues related to AI and the enterprise.

According to the research firm, the ChatGPT service will “change rapidly during 2023, and will be complemented by other offerings. Gartner clients have asked a flurry of questions regarding ChatGPT. Their most frequently asked questions traverse areas as diverse as business value, workforce impact, ethical and legal concerns, technology, vendor landscapes, security and experiences.”

Karamouzis pointed out that the trajectory of making ChatGPT available to the masses is one reason it is in the spotlight; another is its ease of use; and, last but not least, its accessibility, in that “anybody can just go on and play around with it.”

Asked by the panel moderator, Meghan Rimol DeLisi, senior manager of public relations at Gartner, what impact ChatGPT will have on the human workforce as a whole, Karamouzis replied that Gartner is predicting that by 2026, upwards of 100 million people “will have what we call a robo-colleague – a synthetic virtual colleague – and will use them.

“But to answer your direct question, will it replace jobs? We don’t think there’s going to be this huge replacement of jobs.”

She likened it to the days when students were allowed to bring a calculator with them when sitting an exam. Tools like ChatGPT “will allow for automation, but will they replace some jobs? No, but I think there will be incredible tools that will help recalibrate and redefine how we do work.”

A Gartner research document outlining frequently asked questions on the Gen AI tool states that “there will be new jobs created, while others will be redefined. The net change in the workforce will vary dramatically depending on factors such as industry, location, and the size and offerings (products or services) of the enterprise.

“However, it is clear that the use of tools such as ChatGPT (or competitors), hyperautomation and AI innovations will focus on tasks that are repetitive and high-volume, with an emphasis on efficiency, such as reducing cycle time, increasing productivity and improving quality control (reducing error rates), among others.”

Sicular, who pioneered responsible AI, AI governance, augmented intelligence and big data research at Gartner, was asked to define some of the most important ethical considerations when it comes to exploring the use of AI in the enterprise.

She replied that it is not only about an organization being ethical, but also about being aware of a multitude of existing or pending legal regulations.

“One type is existing regulations that you have to follow. And the other type of regulations are pending AI regulations that are mostly in draft; they are not enforced but upcoming, and we know directionally where they will be going. And the regulators are all over new AI capabilities.”

In describing the ethical side of the equation, she referred to her 13-year-old daughter’s experience: “She is in a group that writes fanfiction, and they have a very popular person whom they all follow. And one day she asked me, ‘Mom, did you hear about AI ethics?’ I replied, yes, I kind of did. And she said in their group, somebody intercepted the writer and used ChatGPT to finish the story.

“What do you do about it? It is about ethics. It’s about how do you really decide what’s right and what’s wrong in this whole new world? These are the ethical questions to be aware of; there is no black and white. It’s about asking the right questions, and answering those questions. For example, we see the debate – ChatGPT is not an author. Should it be?

“There are many gray areas that we need to address, and we need to think about them.”

During a rapid-fire response session, the two panelists were asked to name the one thing that is most misunderstood by the C-suite when it comes to Generative AI.

Karamouzis responded by saying it’s the concept that there is no such thing as a no-AI enterprise. AI, she said, is everywhere – embedded in offerings from all the big suite vendors, and contained in everything from platforms that carry out specific administrative tasks to fraud detection systems.

The biggest risk for an organization, she said, is simply “standing still” and doing nothing when it comes to implementing an AI strategy.
