ITBusiness.ca

Canadian ‘godfathers of AI’ win 2018 Turing Award

2018 Turing Award winners: Hinton, Bengio, LeCun

Two of Canada’s artificial intelligence (AI) pioneers have won the Association for Computing Machinery’s (ACM) 2018 A.M. Turing Award alongside a third researcher, and the three will share the $1 million prize.

The University of Toronto’s Geoffrey Hinton, the University of Montreal’s Yoshua Bengio, and New York University’s Yann LeCun have claimed the world’s most prestigious prize in computer science. The trio made “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing,” says a statement from the ACM.

The award underlines what many in the Canadian innovation ecosystem have known for years – that Hinton and Bengio have played foundational roles in the development of deep neural networks as a breakthrough approach in machine learning. Their research on these methods has sparked a surge in commercial efforts to create AI products and services for countless uses. Thanks to their efforts, computers can now mimic some narrow features of human capability – including the ability to make sense of what they see, what they hear, and even what they feel.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM President Cherri M. Pancake in a statement. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science.”

While the researchers each began their work on neural networks independently, they also collaborated, united by a shared conviction that computers could learn in a way similar to the human brain, even as most in the field disagreed. In a University of Toronto blog post, Hinton reflects on that time.

“We stuck to that when almost everyone else in artificial intelligence and computer science didn’t,” Hinton said. “It wasn’t quite the same in psychology, where a lot of people believed in neural networks. But if you looked at the people doing computer science and machine learning, they became thoroughly disenchanted with them.”

The Turing Award, named after Alan M. Turing, the British mathematician who laid the groundwork for modern computing, comes with a $1 million prize that is financially supported by Google Inc.

Geoffrey Hinton

The vice-president and engineering fellow at Google Inc., chief scientific advisor of the Vector Institute, and professor emeritus at the University of Toronto, Hinton is also a Companion of the Order of Canada. In 1986, he co-authored a paper demonstrating that neural networks could discover their own internal representations of data using the backpropagation algorithm, which is now a standard feature of most neural networks. In 2012, he collaborated with his students to improve convolutional neural networks. The result was a famous breakthrough in the ImageNet competition, where Hinton and his team nearly halved the error rate for object recognition and redefined the approach to computer vision.

Yoshua Bengio

A professor at the University of Montreal, the scientific director of Quebec’s AI institute Mila and of the Institute for Data Valorization, and the co-director of CIFAR’s Learning in Machines and Brains program, Yoshua Bengio combined neural networks with probabilistic models of sequences in the 1990s. That work was used by AT&T/NCR for reading handwritten cheques and remains a foundation of modern speech recognition systems. A paper he authored in 2000 had a profound impact on natural language processing tasks such as translation and question answering. Since 2010, his papers on generative deep learning, in particular Generative Adversarial Networks, have spurred a revolution in computer vision and graphics, allowing computers to create original images.

Yann LeCun

The Silver Professor of the Courant Institute of Mathematical Sciences at New York University, vice-president and chief AI scientist at Facebook, and a member of the U.S. National Academy of Engineering, Yann LeCun developed convolutional neural networks in the 1980s. That work was essential to making neural networks more efficient and has become a foundational standard in many types of machine learning. He also described two simple methods that accelerate the backpropagation algorithm’s learning time. His work on hierarchical feature representation in neural networks is now used in many recognition tasks.
