OpenAI CEO testifies before U.S. Senate Judiciary Subcommittee, calls for regulation of AI

Artificial intelligence experts, including the developer of ChatGPT and a privacy leader at IBM, called for the U.S. government to regulate the industry, including possibly creating an international oversight agency, to make sure the fledgling technology is controlled.

“We believe the benefits of the tools we have deployed so far vastly outweigh the risks,” Sam Altman, CEO of ChatGPT developer OpenAI, told a U.S. Senate committee hearing this morning. But, he added, “if this technology goes wrong, it can go quite wrong.”

The U.S. “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” he said. The most powerful AI models should adhere to a set of safety requirements, and that may require global co-operation, he added.

“We can and must work together to identify and manage the potential downsides so we can all enjoy the tremendous upsides.”

Gary Marcus, psychology professor and founder of several AI companies, who worried that criminals “are going to create counterfeit people,” said it isn’t yet clear whether the benefits of AI will outweigh the risks. “Fundamentally, these new systems will be destabilizing,” Marcus said. “They can and will create destabilizing lies on a scale that humanity has never seen before.

“We probably need a cabinet-level organization within the United States to address this. My reasoning for that is the number of risks is large, the amount of information to keep up on is so much, I think we need a lot of technical expertise, we need a lot of co-ordination of these efforts.”

On top of that, he suggested the creation of an international AI safety organization similar to CERN, the European organization for nuclear research. Independent researchers should oversee clinical trials of AI systems before they are publicly released, he said.

Christina Montgomery, chief privacy and trust officer at IBM, said the company urges Congress to adopt “a precision regulation approach to AI. That means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself.” It would mean creating the strongest regulations for use cases with the greatest risks to people; clearly defining high risks; ensuring transparency so consumers know when they are interacting with an AI system; and making companies conduct impact assessments for higher-risk systems to show how they perform against tests for bias.

The three were testifying before the Senate Judiciary Subcommittee on Privacy, Technology and the Law. The hearing is the first of several the subcommittee will hold on possibly regulating AI systems in the U.S. Several senators said Congress shouldn’t make the mistake it made years ago in giving legal immunity to U.S.-based online computer services like social media for content created by users.

In his opening remarks today, subcommittee chair Senator Richard Blumenthal said that Congress failed to seize control over social media “and the result is predators on the internet, toxic content, exploiting children … Now we have the obligation to do it on AI before the threats and risks become real.” He called for “sensible safeguards,” which, he added, are not a burden to innovation.

Blumenthal had his own list of possible federal regulations, including transparency (AI companies ought to be required to test their systems, disclose known risks and allow independent researchers access to their systems); the creation of bias scorecards similar to the nutrition labels on food; limits or bans on use of AI where it might affect privacy and people’s livelihoods; and accountability, meaning companies can be legally liable for damages. This last, he added, “could be the most powerful tool of all.”

“The AI industry doesn’t have to wait for Congress” to be proactive, he added.

Marcus was the most alarmist, citing news reports of an AI system that advised a 13-year-old how to lie to their parents to go on a trip with a stranger. “Current systems are not transparent,” he complained, “they do not adequately protect our privacy and they continue to perpetuate bias. Even their makers don’t entirely understand how they work. Most of all we cannot remotely guarantee they are safe. Hope here is not enough. The big tech companies’ preferred plan is to say, ‘Trust us.’ But why should we? The sums of money at stake are mind-boggling.”

The hearing also came 12 days after the Biden administration met with the CEOs of OpenAI, Alphabet, Anthropic and Microsoft to discuss AI issues. The White House also issued five principles for responsible AI use.

Separately, a Canadian committee of Parliament is expected to soon start hearings on proposed AI legislation and the European Parliament has moved closer to creating an AI law.

The hearing comes amid a divide among AI and IT researchers following the public launch last November of ChatGPT. [The underlying model has since been upgraded to GPT-4.] Some were dazzled by its ability to respond to search queries with paragraphs of text or flow charts, as opposed to the lists of links issued by other search engines. Others, however, were aghast at ChatGPT’s mistakes and seemingly wild imagination, and its potential to be used to help students cheat, cybercrooks write better malware, and nation-states create misinformation.

In March, AI experts from around the world signed an open letter calling for a six-month halt in the development of advanced AI systems. The signatories worry that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

Meanwhile a group of Canadian experts urged Parliament to pass this country’s proposed AI legislation fast, arguing that regardless of possible imperfections “the pace at which AI is developing now requires timely action.”

Some problems can be solved without legislation, such as forbidding employees from loading sensitive customer or corporate data into ChatGPT or similar generative AI systems. Samsung was forced to issue such a ban.
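Such a ban can be backed up with a technical control. The Python sketch below is purely illustrative, not a description of what Samsung or any other company actually did: a simple pre-submission filter scans prompts for patterns that look like sensitive data before they leave the corporate network. The pattern list and the send_to_generative_ai placeholder are hypothetical.

import re

# Hypothetical, illustrative patterns only; a real deployment would rely on the
# organization's own data-classification rules or a dedicated DLP tool.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),          # long digit string that could be a payment-card number
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security-style identifier
    re.compile(r"(?i)\bconfidential\b"),   # text copied from documents labelled confidential
]

def is_safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive data."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def send_to_generative_ai(prompt: str) -> str:
    # Placeholder for whatever external generative AI API an organization uses.
    return "(response from the external service)"

def submit_prompt(prompt: str) -> str:
    # Block the request entirely if the filter flags it.
    if not is_safe_to_submit(prompt):
        raise ValueError("Prompt blocked: it appears to contain sensitive data.")
    return send_to_generative_ai(prompt)

Real deployments typically lean on established data loss prevention tools rather than hand-written patterns, but the principle is the same: check the prompt before it ever reaches the external service.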

Aside from the problem of defining an AI system — the proposed Canadian Artificial Intelligence and Data Act says “artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions” — there are questions about what AI systems can be allowed to do or be forbidden from doing.

Although a final draft has yet to be written, the European Parliament has been directed to create a law with a number of bans, including a ban on the use of “real-time” remote biometric identification systems in publicly accessible spaces.

The proposed Canadian legislation says a person responsible for a high-impact AI system — which has yet to be defined — must, in accordance with as yet unpublished regulations, “establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.”

Johannes Ullrich, dean of research at the SANS Technology Institute, told IT World Canada that there are a lot of questions around the transparency of AI models. These include where they get the training data from, and how any bias in training data affects the accuracy of results.

“We had issues with training data bias in facial recognition and other machine learning models before,” he wrote in an email. “They have not been well addressed in the past. The opaque nature of a large AI/ML model makes it difficult to assess any bias introduced in these models.

“More traditional search engines will lead users to the source of the data, while currently, large ML models are a bit hit and miss trying to obtain original sources.

“The other big question right now is with respect to intellectual property rights,” he said. AI models are typically not creating new original content, but they are representing data derived from existing work. “Without offering citations, the authors of the original work are not credited, and it is possible that the models used training data that was proprietary or that they were not authorized to use without referencing the source (for example, a lot of Creative Commons licenses require these references).”
