Machine learning systems that discriminate violate human rights, Toronto-based declaration says

Machine learning software has been touted as the next wave of innovation, promising to help governments and businesses make faster and more accurate decisions.

But human rights activists and technology groups warned Wednesday that creating systems that discriminate should be treated as a violation of human rights.

The warning came with the release at the RightsCon conference of the so-called Toronto Declaration, a document aimed at preventing machine learning from being used to support discrimination.

Machine learning systems, sometimes called artificial intelligence, are more than pattern recognition software, say adherents of the declaration. Used wrongly, whether deliberately or inadvertently, by data scientists and software developers, they can violate privacy, data protection, freedom of expression, participation in cultural life and equality before the law.

“Systems that make decisions and process data can also implicate economic, social, and cultural rights,” the document says, citing decisions that range from who gets a job to who receives healthcare.

As a result, countries and the private sector have obligations to promote, protect and respect human rights by ensuring machine learning systems aren’t used to support discrimination on grounds that include race, colour, sex, language, religion, political or other opinion, national or social origin, property or birth.

The declaration also says victims of human rights violations or abuses must have access to prompt and effective remedies, although these aren’t spelled out.

“There are many, many examples today of discriminatory machine learning systems,” Anna Bacciarelli, Amnesty International’s advisor on technology and human rights and a member of the drafting team, said in an interview. “They are proliferating at present.”

She cited a 2016 report by ProPublica which raised questions about the fairness of risk assessment software used in some U.S. courts to help judges decide which accused are likely to commit another crime.
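The ProPublica analysis turned on comparing error rates across groups: it reported that the tool’s false positive rate, the share of people flagged as high risk who did not go on to reoffend, was substantially higher for Black defendants than for white defendants. A minimal sketch of that kind of audit, using made-up records and hypothetical field names rather than any real system’s data, might look like this:

```python
# Minimal sketch of a group-wise error-rate audit, in the spirit of the
# ProPublica analysis. All records, groups, and field names are hypothetical.

def false_positive_rate(records, group):
    """FPR for one group: flagged high-risk, but did not actually reoffend."""
    in_group = [r for r in records if r["group"] == group]
    negatives = [r for r in in_group if not r["reoffended"]]
    if not negatives:
        return 0.0
    false_positives = [r for r in negatives if r["predicted_high_risk"]]
    return len(false_positives) / len(negatives)

# Toy data: each record holds the group, the model's prediction,
# and whether the person actually reoffended.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(f"group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
```

A large gap between groups on a metric like this, rather than any intent on the part of the developers, was the core of the fairness critique.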

“We’re basically just showing states, private actors, tech companies how they need to apply their products to international human rights law,” she said of the declaration.

The document was drafted by a group of 11 people before the conference, including members of Amnesty International; Access Now, the digital rights group that organizes the annual conference; and Privacy International. It was then honed before release by a larger group that included people from technology companies.

It will be discussed at the conference, and a final version will be released at the end of the month. Guidelines for developers and data scientists are also expected to follow.

The non-binding declaration could be adopted by the United Nations, say supporters.

Immediate reaction was mixed. Human rights advocates were enthusiastic; the declaration has already been endorsed by Human Rights Watch and Amnesty International. Others were cautious.

Steve Crown, Microsoft’s vice president and deputy general counsel, called the declaration a “hugely important step.”

“Artificial intelligence has so much opportunity to improve the human condition,” he said in an interview, “and because it’s so powerful, [there are also] opportunities for misuse … so we have to be thoughtful about the misuse or unintended consequences as well as all the great things.”

But he also said that “we don’t want to see pre-emptive regulation before we understand how to use the [legal] tools we already have.”

Key to the consensus on the document, said Dinah PoKempner, Human Rights Watch’s general counsel, is that much of the language on discrimination and equality was taken from the International Covenant on Civil and Political Rights adopted by many countries.

“It’s not as though we just invented something,” she said at a session. “This has been there, it’s there in your law. It’s there to be used and elaborated by national, local and municipal governments, and it’s there to be incorporated by corporations, ethicists and [software] designers as well. It’s not a recipe for every situation and every application. We will face many challenges in the future, but we don’t go into the future blind.”

Howard Solomon
Currently a freelance writer. Former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, Howard has written for several of ITWC's sister publications, including ITBusiness.ca. Before arriving at ITWC he served as a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times.
