
Deep learning and AI can create different ethical issues

Washington, D.C. – In its most basic form, artificial intelligence is an algorithm that is trained to learn from the data that is fed to it. But what happens when that data is full of bias?

“In traditional model building, even with good data we can introduce biases by not constructing the right variables or picking up nuances. A model is a representative of the mechanism that generated the data. So if we don’t represent that mechanism correctly, then we are not forecasting correctly, but forecasting something else,” explains Oliver Schabenberger, chief technology officer and executive vice president of SAS Institute Inc., to Canadian media at Analytics Experience 2017.

Oliver Schabenberger, SAS CTO

Deep learning algorithms, for instance, learn only from the data they are given; they cannot learn outside of that data. If the data is correct, they can make correct decisions. Hence the importance of providing these programs with clean data.

“We control and determine the logic of the program. Everything deep learning learns comes from the training data. It is important to understand any biases in the data if they are there, because it will leak into the end result. It comes down to how good our data is,” said Schabenberger.
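To make Schabenberger’s point concrete, consider a minimal sketch (not SAS software; the loan data and the “neighbourhood” proxy variable are invented purely for illustration) of how a model that learns only from historical decisions reproduces whatever bias those decisions contain:

```python
from collections import Counter, defaultdict

# Hypothetical historical loan decisions. "neighbourhood" is a proxy
# variable that happens to correlate with past (biased) outcomes.
training_data = [
    {"neighbourhood": "A", "income": "high", "approved": True},
    {"neighbourhood": "A", "income": "low",  "approved": True},
    {"neighbourhood": "B", "income": "high", "approved": False},
    {"neighbourhood": "B", "income": "low",  "approved": False},
]

# A deliberately simple "model": predict whatever outcome was most
# common for that neighbourhood in the training data. It has no notion
# of fairness; it can only reproduce the pattern the data contains.
votes = defaultdict(Counter)
for row in training_data:
    votes[row["neighbourhood"]][row["approved"]] += 1

def predict(neighbourhood):
    return votes[neighbourhood].most_common(1)[0][0]

# A high-income applicant from neighbourhood B is still rejected:
# the historical bias has leaked straight into the predictions.
print(predict("B"))  # False
```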

Jim Goodnight, CEO and founder of SAS, looks at it quite simply. “AI is just making a program that you trained with models to make these decisions,” he told Canadian media.

But if the data fed into those models is incorrect, it isn’t so simple to fix.

Schabenberger explains that the ethical use of analytics applies to all of its forms, but deep learning and AI shine a different spotlight on it. When a simple statistical model goes wrong, he can see how: he can look at the math and find what happened. It’s not so simple with deep learning.

“A neural network does not give up its secrets. I can’t tell you where something goes wrong if something it predicts is not correct. My ability to correct this is limited. If it’s a model I can examine the math and see how it got there. I can’t step through a neural network and see what happened, or put it through a debugger. The only way to change it is new data,” Schabenberger said.
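The contrast Schabenberger draws can be sketched in a few lines (a hypothetical illustration using NumPy, not SAS tooling): a fitted linear model exposes coefficients that map directly back to the mechanism that generated the data, while a neural network’s parameters offer no such reading.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # two input features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]      # a known linear mechanism

# A classical statistical model: the fitted coefficients can be read
# off and checked against the mechanism that generated the data.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # ~[ 3. -2.] -- each number has a direct interpretation

# A small neural network's parameters, by contrast, are spread across
# layers (random weights here, for illustration only); no single
# weight explains a prediction.
W1 = rng.normal(size=(2, 16))          # input -> hidden weights
W2 = rng.normal(size=(16, 1))          # hidden -> output weights
hidden = np.maximum(X @ W1, 0)         # ReLU hidden layer
prediction = hidden @ W2
# Inspecting W1 or W2 tells you little about why prediction[i] came
# out the way it did -- the network "does not give up its secrets".
```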

The fact that it is not so simple to fix can be disconcerting, and in certain industries, like banking, it means that for now a human element will remain tacked onto any AI technology that is introduced.

“It needs the right ethics. Biases from the data are where the human aspect comes in. I would love to have machines, but there should be a hybrid structure, so a computer could make the decision with human involvement. The human could override a decision if need be,” said Qaisar Bomboat, IT risk architect at Canadian Western Bank.
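A hybrid structure of the kind Bomboat describes could look something like the following sketch (the confidence thresholds and function names are hypothetical, not drawn from any real banking system):

```python
def model_score(application):
    # Stand-in for a trained model's approval confidence (0.0 to 1.0).
    return 0.62

def decide(application, review_queue):
    score = model_score(application)
    if score >= 0.9:
        return "approved"        # high confidence: machine decides alone
    if score <= 0.1:
        return "declined"
    # Everything in between is routed to a person, who can confirm or
    # override the model's leaning -- the hybrid Bomboat describes.
    review_queue.append((application, score))
    return "pending human review"

queue = []
print(decide({"applicant": "example"}, queue))  # pending human review
```

The design choice is simply that the machine decides alone only at the extremes; ambiguous cases go to a person who can override it.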

Edmonton-based Canadian Western Bank operates primarily in Western Canada, where it serves personal and commercial clients. Bomboat is part of the bank’s enterprise risk management team.

Taking the human out of a decision, such as whether the bank should give someone a loan, would be the ideal scenario, but that requires trust in the data being fed to the program making those decisions. That’s easier said than done.

As Goodnight said, AI is just a program that you trained with models to make that decision. And while both Schabenberger and Goodnight are confident in the data behind SAS’ deep learning efforts, that trust may take some time to reach anyone who remains skeptical.
