AI can be as fallible as the humans that make it, Microsoft panel concludes

TORONTO – Artificial intelligence (AI) might be a boon for businesses, capable of collecting, analyzing, and making recommendations based on more data than humans could ever hope to sift through, but it remains subject to the same biases as the humans who build it, a Microsoft-moderated panel at a Microsoft-sponsored AI conference agreed.

Among the experts invited to Microsoft’s Future Now conference on March 22, 2018, was Integrate.ai vice president of product and strategy Kathryn Hume, who shared two examples of AI prediction going wrong and one in which her company, an enterprise AI platform developer, was trying to help a customer avoid similar mistakes.

“Predictions are predicated upon the data that we’ve collected in the past,” Hume said. “So there’s incredible potential for us to get ahead of things that could cause ill in society, but we have to be… thinking about whether we’re getting ahead in a way that’s achieving the goals and outcomes that we’d like.”

Her first example of AI gone wrong was from a Microsoft-led research project in 2016 that fed a natural language processing platform analogy statements, which the platform was invited to complete based on word associations it had learned from Google News articles.

“As man is to woman, so king is to…” returned “queen,” for example.

However, “A man is to a computer programmer as a woman is to…” returned “homemaker,” and “as white man is to ‘entitled to,’ so black man is to…” returned “assaulted,” to cite two examples the researchers themselves called egregious.

“And this is accurate,” Hume said. “This lies within the data.”
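For readers who want to try the effect themselves, the following is a minimal sketch, not the researchers’ own code, assuming Python with the gensim library and the publicly available word2vec vectors trained on Google News; it reproduces the kind of analogy arithmetic Hume described.

import gensim.downloader as api

# Assumption: the public "word2vec-google-news-300" vectors (a large download on first use)
model = api.load("word2vec-google-news-300")

# "man is to king as woman is to ...?" is answered by vector arithmetic: king - man + woman
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Typically prints [('queen', ...)]; the same arithmetic applied to occupation words
# surfaces the gendered associations the researchers reported.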

Her second example was a case study from author Virginia Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, in which a predictive AI platform designed to identify potential child abuse began disproportionately targeting poor families for a logical but regrettable reason.

“I’m an American, so I’m used to having there be an extreme split between the public offering of healthcare services and private,” Hume said. “And… it happened that the government had collected a lot more information about poor families, because they were the users of public services. So rich people may have had instances of internal domestic abuse, but [the platform developers] didn’t know about them, because they went to receive treatment from private services.”

Asked by moderator Andrée Gagnon how companies could prevent this sort of unwanted outcome, Hume said she believed the answer needed to be context-dependent.

“We need to move from ‘oh my god, bias is bad!’ to something where there’s a risk-oriented, outcomes-oriented discussion that technical teams can have with business stakeholders so that they can make a tough decision that might mean a short-term tradeoff on strict quarterly revenues in the service of leading to greater equality in society,” she said. “And for me, it’s incredibly heartening that CEOs of banks are actually thinking about this today.”

Hume also noted that in their scientific paper, the researchers behind Microsoft’s language processing study based their definition of bias on surveys conducted through Amazon’s Mechanical Turk crowdsourcing platform, and said that in her opinion this approach was a viable solution as well.

“It reminds me of the medical and legal theory around ‘reasonableness,’ where it’s the most opaque and slippery term ever,” she said. “But we might say, ‘well, maybe reasonableness is social norms in a given moment,’ in which case crowdsourcing… so we don’t have bias of viewpoints… might be an answer.”

Potential solutions include philosophers, regular compliance testing

Hume’s anecdotes didn’t surprise Thomson Reuters vice president of research and development Khalid Al-Kofahi, who shared his own example of flawed data once analyzed by the global news organization.

“A few years ago we wanted to see what people complain about on social media… and we focused on people harming themselves,” he said. “So what is the number one injury that people report on Twitter? Cutting your finger on the broken glass of your iPhone or Android… and the number two injury was, especially for ladies, when they burn their skin with curling irons.”

Microsoft Research Montreal (formerly Maluuba) head of product Mohamed Musbah – who, as moderator Gagnon noted, works in language processing – called Al-Kofahi’s anecdote a perfect example of AI’s current limitations.

“It reflects on how tied data is to the models of the systems we’re developing,” he said. “One of the things we’re learning as researchers from computer science is that we can’t solve this problem ourselves. We need to start relying on other groups with years of research who have been working on this for far longer than we have, and bringing them in to deal with the social and cultural perspective.”

In fact, Gagnon, who serves in Microsoft Canada’s corporate, external and legal affairs department, noted that some of the most important researchers hired by CEO Satya Nadella to support AI development have been philosophers, who can often identify a wider range of use cases for an AI platform, and anticipate more of the challenges its implementation might create, than developers can.

For all its AI evangelism, she said, the company has long recognized the importance of not simply building a product and releasing it for commercial use with no thought for how its data might be biased.

“Not because it’s not accurate,” Gagnon said. “The data could actually be a reflection of what exists in society today, and the algorithm that’s learning from that data could potentially make that bias more entrenched.”

Thomson Reuters’ Al-Kofahi agreed with Gagnon’s assessment, observing that in his experience, machine learning doesn’t just reflect the biases in its data; it amplifies them, and opens them to misuse by parties who benefit from the divisions that result.

“More and more of our lives are going digital,” he said. “Anything that we say, we utter, we comment on, opinions, how we live, images, pictures – everything is forever preserved. And machine learning at scale allows individuals, governments, institutions, companies, to process and analyze that data at scale. The combination of these two, if left unchecked, I think can be very dangerous.”

Al-Kofahi said that in his opinion, companies can best remain ahead of these risks by codifying a set of ethical standards, integrating compliance into the development process, and regularly testing their products to ensure they remain compliant.

“These applications learn from user data… which then serves to change the rules over time,” he said. “Who’s to say that the application is still technically compliant in six months?”

For what it’s worth, the panel also agreed that AI could be a force for good, and was unlikely to decimate jobs to the extent its detractors fear.
