
AI expert says report findings prove onset of ‘Terminal AI’ has begun

Don Delvy: As AI continues to evolve at an unprecedented pace, it is crucial for industry leaders to embrace transparency, accountability and a commitment to responsible innovation.

An escalating rise in phishing incidents since the launch of ChatGPT, highlighted in a new report, is the start of a cascade of events that a Silicon Valley cloud and artificial intelligence (AI) expert predicts could result in a catastrophic “threat to mankind.”

The study by cybersecurity vendor SlashNext, which provides security offerings for cloud email, mobile and web messaging apps, revealed an alarming 1,265 per cent increase in malicious phishing emails since the launch of OpenAI’s generative artificial intelligence (GenAI) platform a year ago.

“The one thing that is certain is the future of generative AI is still largely unknown,” authors of the document state. “The rapid growth of these tools on cybercrime forums and markets highlights how cybercriminals have embraced the technology and that the potential threat is real.”

While noting that “fortunately, there are cybersecurity vendors who have introduced generative AI technologies, which are used to detect and stop malicious generative AI attack attempts,” they add, “the results in the report highlight how much the threat landscape has changed since 2022.”

Other findings revealed that 68 per cent of all phishing emails are text-based business email compromise (BEC) attacks; that mobile phishing is on the rise, with 39 per cent of mobile threats consisting of smishing (SMS phishing); and that credential phishing “continues a stratospheric rise, with a 967 per cent increase.”

For Don Delvy, the CEO and founder of D1OL: The Digital Athlete Engine, a cloud-based smart sports platform, the findings should be a wake-up call about the downside of a technology that he says should never have been put into the public domain in the first place.

“The recent advancements in AI have ignited a global conversation about its potential impact on society,” he said. “While AI holds immense promise for transforming industries and enhancing human capabilities, it also raises concerns about ethical implications and responsible use.

“We are facing unprecedented ignorance, incompetence and corruption in the global technology industrial complex, at the worst possible time, the precipice of Terminal AI.”

Terminal AI, he said, “refers to artificial intelligence that becomes a catastrophic threat, potentially leading to a nuclear holocaust, the destabilization of governments, economies, and societies. This concept underscores the urgent need for strategies that future-proof AI to save the world. A proactive approach that focuses on the enduring sustainability and ethical foundations of AI would prevent such a dire outcome by ensuring AI develops in a safe, controlled, and beneficial manner for humanity.”

To address these concerns “and foster a constructive dialogue,” Delvy, a graduate of Purdue University who has been involved in software development for close to 30 years, says the following five steps need to be taken:

In an interview with IT World Canada, Delvy described GenAI technologies as “hands down the most explosive technology the world has ever seen, right behind nuclear.

“I would never have put a large language model (LLM) on a public cloud, that is first and foremost.”

As for the SlashNext report, he said the “mammoth rise in phishing emails created by ChatGPT is absolutely 100 per cent the beginning, and you are going to see actual damage.

“I have a seven-year-old son I am trying to protect here.”

Asked about the recent firing and re-hiring of Sam Altman at OpenAI, Delvy pointed out that the entire incident “highlights the importance of ethical leadership in the AI sector. As AI continues to evolve at an unprecedented pace, it is crucial for industry leaders to embrace transparency, accountability and a commitment to responsible innovation.”
