ITBusiness.ca

AI systems with WMD power will be here soon, expert tells Canadian MPs

Expert testified before the Industry committee during 2023 hearings on Bill C-27

Advanced artificial intelligence systems with the equivalent power of weapons of mass destruction (WMD) could be only two years away, an expert has told MPs, which is why Canada’s proposed legislation to regulate AI should ban systems that introduce “extreme risks.”

The proposed legislation, called the Artificial Intelligence and Data Act (AIDA), should forbid the development of AI systems that need more than a set level of computing power, Jérémie Harris, Ottawa-based co-founder of Gladstone AI, an artificial intelligence safety consulting firm, told the House of Commons Industry committee on Tuesday.

Jeremie Harris, Gladstone AI

Only once AI developers can prove that a model exceeding that threshold won’t have certain dangerous capabilities should it be allowed to go forward, he said.

The idea isn’t new, he said: it’s included in U.S. President Joe Biden’s recent Executive Order to government departments wanting to buy or develop AI systems. That order requires developers of AI systems that use more than a set amount of computational power, or that have bio-weapon design, chemical synthesis, or self-awareness capabilities, to report the results of their safety audits.

Second, Harris said, AIDA must address open source development of dangerous AI models.

“In its current form, AIDA would allow me to train an AI model that can automatically design and execute crippling malware attacks, and publish it for anyone to freely download. If it’s illegal to publish instructions on how to make bio-weapons or nuclear bombs, it should be illegal to publish AI models that can be downloaded and used by anyone to generate those same instructions for a few hundred bucks.”

And third, AIDA should explicitly address the research and development phase of the AI lifecycle. “From the moment the development process begins, powerful AI models become tempting targets for theft by nation-states and other actors,” Harris explained. As models gain more capabilities and context-awareness during the development process, loss of control and accidents become a greater risk as well. “Developers should bear responsibility for ensuring the safe and secure development of powerful models.”

The current version of AIDA “is significantly better than nothing,” Harris said, but it needs amendments.

RELATED CONTENT: AIDA is ‘fundamentally flawed’

Another witness who urged Parliament not to delay passing AIDA was Jennifer Quaid, associate law professor and vice dean of civil law research at the University of Ottawa.

Some issues need to be clarified, she said, like making the proposed AI commissioner for enforcing AIDA responsible to Parliament and not the Innovation Minister, and ensuring the legislation forces the identification of different roles in AI development like application developers and operators so responsible people can be held accountable. But, she added, “time is of the essence … Delay is not an option.”

On the other hand, reporter and podcaster Erica Ifill complained AIDA won’t prevent the development of AI systems that discriminate against minorities.

AIDA is part of Bill C-27, a package of legislation including the proposed Consumer Privacy Protection Act (CPPA), a replacement for the Personal Information Protection and Electronic Documents Act (PIPEDA) that regulates many businesses.

Harris pulled no punches in warning of the coming capabilities of advanced AI systems. His testimony left Conservative MP Bernard Généreux wondering whether a third world war is coming, and Liberal MP Turnbull describing Harris’ opening statement as “quite scary.”

“We work with researchers at the world’s top AI labs on problems in advanced AI safety,” Harris said, “and it’s no exaggeration to say the water-cooler conversations in the frontier AI safety community frame near-future AI as a weapon of mass destruction, or a WMD-like enabling technology. Publicly and privately, frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years. Our own research suggests this is a reasonable assessment.

“Beyond weaponization, evidence suggests that as advanced AI approaches superhuman general capabilities, it may become uncontrollable, and display what are known as ‘power-seeking’ behaviours. These include AIs preventing themselves from being shut off, establishing control over their environment, and self-improving. Today’s most advanced AI systems may already be displaying early signs of this behaviour.”

Most of the safety researchers Harris deals with at frontier labs, he said, consider power-seeking by advanced AI “to be a significant source of global catastrophic risk.”

“If we anchor legislation on the risk profile of current AI systems, we will very likely fail what may be the single greatest test of technology governance we have ever faced. The challenge AIDA must take on is to mitigate risk in a world where, if current trends simply continue, the average Canadian will have access to WMD-like tools – and in which the very development of AI systems may introduce catastrophic risks.”

At one point, he said governments should encourage the private sector to make fundamental advances in the science of AI, to create a scientific theory for predicting the emergence of dangerous AI capabilities.
