Transcriber: Natasha Savic
Reviewer: Claire Ghyselen

Our world is changing in many ways and one of the things which is going
to have a huge impact on our future is artificial intelligence, or AI, which is bringing another industrial revolution. Previous industrial revolutions
expanded humans' mechanical power. This new revolution,
this second machine age is going to expand
our cognitive abilities, our mental power. Computers are not just going
to replace manual labor, but also mental labor. So, where do we stand today? You may have heard
about what happened last March when a machine learning system
called AlphaGo used deep learning to beat
the world champion at the game of Go. Go is an ancient Chinese game which had been much more difficult
for computers to master than the game of chess. How did we succeed,
now, after decades of AI research? AlphaGo was trained to play Go: first, by watching, over and over, tens of millions of moves made
by very strong human players. Then, by playing against itself,
millions of games. Machine learning allows computers
to learn from examples, to learn from data. Machine learning
has turned out to be a key to cram knowledge into computers. And this is important because knowledge
is what enables intelligence. Putting knowledge into computers had been
a challenge for previous approaches to AI. Why? There are many things
which we know only intuitively, so we cannot communicate them verbally. We do not have conscious access
to that intuitive knowledge. So how can we program that knowledge into computers?
What's the solution? The solution is for machines to learn
that knowledge by themselves, just as we do. My mission has been
to contribute to discovering and understanding the principles
of intelligence through learning, whether animal, human, or machine learning. I and others believe that there are
a few key principles, just like the laws of physics: simple principles which could explain
our own intelligence and help us build intelligent machines. For example, think about the laws
of aerodynamics, which are general enough to explain
the flight of both birds and planes. Wouldn't it be amazing to discover
such simple but powerful principles that would explain intelligence itself? Well, we've made some progress. My collaborators and I have contributed
in recent years to a revolution in AI with our research on neural networks
and deep learning, an approach to machine learning
which is inspired by the brain. It started with speech recognition on your phones,
powered by neural networks since 2012. Shortly after came a breakthrough
in computer vision. Computers can now do a pretty good job
of recognizing the content of images. In fact, they have approached human performance
on some benchmarks over the last five years. A computer can now get
an intuitive understanding of the visual appearance of a Go board that is comparable to that
of the best human players. More recently, following some discoveries made in my lab, deep learning has been used to translate
from one language to another and you are going to start seeing
this in Google Translate. This is expanding the computer's ability to understand and generate
natural language. But don't be fooled. We are still very, very far from a machine that would be as able as humans to learn to master
many aspects of our world. So, let's take an example. Even a two-year-old child
is able to learn things in a way that computers
are not able to do right now. A two-year-old child actually
masters intuitive physics. She knows when she drops a ball
that it is going to fall down. When she spills some liquids
she expects the resulting mess. Her parents do not need to teach her about Newton's laws
or differential equations. She discovers all these things by herself
in an unsupervised way. Unsupervised learning actually remains
one of the key challenges for AI. And it may take several more decades
of fundamental research to crack that nut. Unsupervised learning is actually trying
to discover representations of the data. Let me show you an example. Consider a page on the screen
that you're seeing with your eyes or that the computer is seeing
as an image, a bunch of pixels. In order to answer a question
about the content of the image you need to understand
its high-level meaning. This high-level meaning corresponds
to the highest level of representation in your brain. Lower down, you have
the individual meaning of words and even lower down, you have characters
which make up the words. Those characters could be
rendered in different ways with different strokes
that make up the characters. And those strokes are made up of edges and those edges are made up of pixels. So these are different
levels of representation. But the pixels are not
sufficient by themselves to make sense of the image, to answer a high-level question
about the content of the page. Your brain actually has
these different levels of representation starting with neurons
in the first visual area of the cortex, V1, which recognize edges. And then, neurons in the second
visual area of the cortex, V2, which recognize strokes and small shapes. Higher up, you have neurons
which detect parts of objects and then objects and full scenes. Neural networks,
when they're trained with images, can actually discover these types
of levels of representation that match pretty well
what we observe in the brain. Both biological neural networks,
which are what you have in your brain, and the deep neural networks
that we train on our machines can learn to transform from one level
of representation to the next, with the high levels corresponding
to more abstract notions. For example, the abstract notion
of the character A can be rendered in many different ways
at the lowest levels as many different configurations of pixels depending on the position,
rotation, font and so on. So, how do we learn
these high-level representations? One thing that has been
very successful up to now in the applications of deep learning, is what we call supervised learning. With supervised learning, the computer
needs to be taken by the hand and humans have to tell the computer
the answer to many questions. For example, on millions and millions
of images, humans have to tell the machine: well, for this image, it is a cat. For this image, it is a dog. For this image, it is a laptop. For this image, it is a keyboard.
And so on, and so on, millions of times. This is very painful, and we use
crowdsourcing to manage to do that. Although this is very powerful and we are able to solve
many interesting problems, humans are much stronger learners: they can learn about many more
different aspects of the world in a much more autonomous way, just as we've seen with the child
learning about intuitive physics. Unsupervised learning could also help us
deal with self-driving cars. Let me explain what I mean: Unsupervised learning allows computers
to project themselves into the future to generate plausible futures
conditioned on the current situation. And that allows computers to reason
and to plan ahead. Even for circumstances
they have not been trained on. This is important because, if we used supervised learning,
we would have to tell the computers about all the circumstances
where the car could be and how humans
would react in that situation. How did I learn to avoid
dangerous driving behavior? Did I have to die
a thousand times in an accident? (Laughter) Well, that's the way we train
machines right now. So, it's not going to fly
or at least not to drive. (Laughter) So, what we need is to train our models to be able to generate plausible images
or plausible futures, to be creative. And we are making progress with that. So, we're training
these deep neural networks to go from high-level meaning to pixels rather than from pixels
to high-level meaning, going in the other direction
through the levels of representation. And this way, the computer
can generate new images that are different
from what the computer has seen while it was trained, but that are plausible and look like natural images. We can also use these models
to dream up strange, sometimes scary images, just like our dreams and nightmares. Here are some images
that were synthesized by the computer using these deep generative models. They look like natural images, but if you look closely,
you will see they are different and they're still missing
some of the important details that we would recognize as natural. About 10 years ago, unsupervised learning was
a key to the breakthrough that we obtained
in discovering deep learning. This was happening in just a few labs,
including mine, at a time when neural networks
were not popular. They were almost abandoned
by the scientific community. Now, things have changed a lot. It has become a very hot field. There are now hundreds of students
every year applying for graduate studies at my lab and with my collaborators. Montreal has become
the largest academic concentration of deep learning researchers in the world. We just received a huge
research grant of 94 million dollars to push the boundaries
of AI and data science, and also to transfer deep
learning and data science technology to industry. Business people, stimulated by all this,
are creating start-ups and industrial labs, many of which are near the universities. For example, just a few weeks ago, we announced
the launch of a start-up factory called 'Element AI' which is going to focus
on deep learning applications. There are just not enough
deep learning experts. So, they are getting paid crazy salaries, and many of my former academic colleagues
have accepted generous deals from companies to work in industrial labs. I, for my part, have chosen
to stay in academia, to work for the public good, to work with students, to remain independent, and to guide the next generation
of deep learning experts. One thing that we are doing
beyond commercial value is thinking about the social
implications of AI. Many of us are now starting
to turn our eyes towards applications with social
value, like health. We think that we can use deep learning
with personalized medicine. I believe that in the future, as we collect more data from millions
and billions of people around the earth, we will be able to provide medical advice to billions of people
who don't have access to it right now. And we can imagine many other
applications for social value of AI. For example, something
that will come out of our research on natural language understanding is providing all kinds of services like legal services,
to those who can't afford them. We are now turning our eyes also towards the social implications
of AI in my community. But it's not just for experts
to think about this. I believe that beyond the math
and the jargon, ordinary people can get the sense of what goes on under the hood enough to participate
in the important decisions that will take place in the next
few years and decades about AI. So please, set aside your fears and give yourself
some space to learn about it. My collaborators and I have written
several introductory papers and a book entitled "Deep Learning" to help students and engineers
jump into this exciting field. There are also many online resources:
software, tutorials, videos... and many undergraduate students
are learning a lot about research in deep learning
by themselves, to later join the ranks of labs like mine. AI is going to have a profound
impact on our society. So, it's important to ask:
How are we going to use it? Immense positives may come
along with negatives such as military use or rapid disruptive changes
in the job market. To make sure the collective choices
that will be made about AI in the next few years will be for the benefit of all, every citizen should take an active role in defining how AI will shape our future. Thank you. (Applause)