Should artificially intelligent robots have the same rights as people?
Yes, says one of Canada’s top artificial intelligence (AI) entrepreneurs.
Suzanne Gildert is co-founder and chief scientific officer of Kindred AI, a Vancouver startup whose backers include Google’s venture capital arm. At the SingularityU conference in Toronto on Wednesday, Gildert made the case for extending human rights to artificially intelligent robots.
“A subset of the artificial intelligence developed in the next few decades will be very human-like. I believe these entities should have the same rights as humans,” said Gildert, a British scientist who relocated to Canada.
AI-based robots and software programs are increasingly performing tasks – from beating chess champions to driving cars – that previously could only be done by humans. From Hollywood to the halls of the European Parliament, questions are being raised about whether robots with human abilities should also be treated like humans.
The dilemma is echoed in TV shows like Westworld, HBO’s hit series set in a Wild West-themed amusement park populated by human-like androids. In the blockbuster sequel Blade Runner 2049, Canadian actor Ryan Gosling plays a replicant; in the franchise, the human-looking replicants are physically superior to people yet engineered with built-in limits – including, in the original film, a four-year lifespan.
In January, the European Parliament’s legal affairs committee approved a report calling for the development of “electronic personhood” rules for the most sophisticated robots and AI systems.
Even Microsoft Corp. co-founder Bill Gates has weighed in. In a February interview with Quartz, he argued that robots should pay taxes. If millions of human jobs disappear to automation, he warned, so will the income and payroll taxes those workers pay, depleting the government revenue that funds social programs.
Piles of hardware become “beings”
At Wednesday’s event, Gildert didn’t get specific about what robot rights would look like, or how they could be extended and protected. She did, however, provide a rare glimpse into the relationships developing between the new breed of AI robots and the scientists who create them.
“Something has felt different recently. I’ve started to have these ‘holy crap!’ moments that this thing I’m building might actually work, it might actually lead to artificial intelligence,” Gildert explained.
“There’s actually a more serious side to building AI,” she continued, “when this thing that you’re building stops being a pile of hardware and it starts to become a being.”
For example, Gildert pointed out that robots can be programmed to form memories. Scientists can quickly erase every memory a robot has ever recorded, but Gildert acknowledged that “every time I push that delete button, it gives me an uneasy feeling that I’m wiping a being out of existence.”
Researchers also use AI robots to test pleasure and pain systems. Gildert asserted this “effectively amounts to torturing the robot in our lab” in ways that evoke animal testing.
Gildert said most AI robots will eventually be designed to resemble human bodies because “we can more easily merge with AI if it inhabits a body similar to ours.” Moreover, robots made with parts that move like human bodies will be better able to take over physical tasks performed by humans.
“Our world is designed for humans. Everything we see around us is designed for adult-sized bodies to interact with,” she said.
Gildert predicted this will produce “a third category of AI/human hybrids” imbued with both the physical features and intellectual abilities of people. As these AI/human hybrids proliferate, so will the ethical and legal questions surrounding them, she said. Should hybrids be allowed to make mistakes like people or should they be programmed with the goal of always being perfect?
“Robots fundamentally have to make mistakes in order to learn,” Gildert said. “We must expect and accept this. We have to allow them to have this childlike phase as they learn and grow.”
Recent research suggests AI algorithms can mimic human failings simply by absorbing the biases embedded in the data they are trained on. A study published in the April issue of Science found that a machine-learning tool trained on online text reproduced human prejudices: in some cases, it associated the words “woman” and “female” with the arts, humanities and the home, while linking “man” and “male” with math and engineering professions.
“When I’m programming this robot, I have to decide what it’s going to value, what it’s going to consider as ‘good’,” Gildert told the Toronto event audience. “Where does that training data come from? Me. So that AI is also learning my values, my mannerisms, my behaviour.”
Gildert concluded her presentation by urging AI scientists to recognize the impact of their own human prejudices and imperfections on the robots they create.
“If we really are bringing AI (robot) ‘children’ into the world, at some point they’re going to grow up and we have to give them rights and responsibilities. I think we’re going to have to grow up alongside them.”