The rise of the machines! Who here is scared of killer robots? (Laughter) I am! I used to work in UAVs -
Unmanned Aerial Vehicles - and all I could think seeing
these things is that someday, somebody is going to strap
a machine-gun to these things, and they're going
to hunt me down in swarms. I work in robotics at Brown University
and I'm scared of robots. Actually, I'm kind of terrified, but can you blame me? Ever since I was a kid,
all I've seen are movies that portrayed the ascendance
of Artificial Intelligence and our inevitable conflict with it - 2001: A Space Odyssey,
The Terminator, The Matrix - and the stories they tell are pretty scary: rogue bands of humans running away
from super intelligent machines. That scares me. From the show of hands,
it seems like it scares you as well. I know it is scary to Elon Musk. But, you know, we have a little bit
of time before the robots rise up. Robots like the PR2
that I have at my institution, they can't even open the door yet. So in my mind, this discussion
of super intelligent robots is a little bit of a distraction
from something far more insidious that is going on with AI systems
across the country. You see, right now,
there are people - doctors, judges, accountants - who are getting information
from an AI system and treating it as if it is information
from a trusted colleague. It's this trust that bothers me, not because of how often
AI gets it wrong. AI researchers pride themselves
on the accuracy of their results. It's how badly it gets it wrong
when it makes a mistake that has me worried. These systems do not fail gracefully. So, let's take a look
at what this looks like. This is a dog that has been misidentified
as a wolf by an AI algorithm. The researchers wanted to know: why did this particular husky
get misidentified as a wolf? So they rewrote the algorithm
to explain to them the parts of the picture
it was paying attention to when the AI algorithm made its decision. In this picture, what do you
think it paid attention to? What would you pay attention to? Maybe the eyes,
maybe the ears, the snout ... This is what it paid attention to: mostly the snow
and the background of the picture. You see, there was bias in the data set
that was fed to this algorithm. Most of the pictures of wolves were in snow, so the AI algorithm conflated
the presence or absence of snow with the presence or absence of a wolf. The scary thing about this is that the researchers had no idea this was happening until they rewrote the algorithm to explain itself.
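As an aside, that kind of explanation is close in spirit to what open-source tools do today. Here is a minimal sketch assuming a LIME-style explainer from the `lime` package; the classifier function and the image path are placeholders, not the model or data from the actual study:

```python
# A minimal LIME-style explanation sketch (assuming the open-source `lime` package).
# The classifier below and the image path are hypothetical placeholders.
import numpy as np
from skimage import io
from skimage.segmentation import mark_boundaries
from lime import lime_image

def classifier_fn(images):
    # Placeholder for the real dog/wolf model: takes a batch of images
    # shaped (N, H, W, 3) and returns class probabilities shaped (N, 2).
    rng = np.random.default_rng(0)
    p = rng.random((len(images), 2))
    return p / p.sum(axis=1, keepdims=True)

image = io.imread("husky.jpg")  # hypothetical input photo

explainer = lime_image.LimeImageExplainer()
# LIME perturbs superpixels of the image and fits a simple local model
# to see which regions pushed the prediction toward "wolf".
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, hide_color=0, num_samples=1000
)
# Overlay the most influential superpixels. In the husky study, these
# turned out to be the snowy background, not the animal itself.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)
```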
And that's the thing with AI algorithms, deep learning, machine learning. Even the developers who work on this stuff have no idea what it's doing. So, that might be a great example for research, but what does this mean in the real world? The COMPAS Criminal Sentencing
algorithm is used in 13 states to determine criminal recidivism or the risk of committing
a crime again after you're released. ProPublica found
that if you're African-American, Compas was 77% more likely to qualify
you as a potentially violent offender than if you're a Caucasian. This is a real system being used
in the real world by real judges to make decisions about real people's lives. Why would the judges trust it
if it seems to exhibit bias? Well, the reason they use COMPAS is that it is a model of efficiency. COMPAS lets them go
through caseloads much faster in a backlogged criminal justice system. Why would they question
their own software? It's been requisitioned by the State,
approved by their IT Department. Why would they question it? Well, the people sentenced
by COMPAS have questioned it, and their lawsuits should chill us all. The Wisconsin Supreme Court ruled that COMPAS did not deny
a defendant due process provided it was used "properly." In the same set of rulings, they ruled that the defendant could not inspect
the source code of COMPAS. It has to be used properly
but you can't inspect the source code? This is a disturbing set of rulings
when taken together for anyone facing criminal sentencing. You may not care about this because
you're not facing criminal sentencing, but what if I told you
that black box AI algorithms like this are being used to decide whether or not
you can get a loan for your house, whether you get a job interview, whether you get Medicaid, and are even driving cars
and trucks down the highway? Would you want the public to be able to inspect the algorithm that's trying to make a decision
between a shopping cart and a baby carriage
in a self-driving truck, in the same way the dog/wolf
algorithm was trying to decide between a dog and a wolf? Are you potentially a metaphorical dog
who's been misidentified as a wolf by somebody's AI algorithm? Considering the complexity
of people, it's possible. Is there anything
you can do about it now? Probably not, and that's
what we need to focus on. We need to demand
standards of accountability, transparency and recourse in AI systems. ISO, the International Organization for Standardization, just formed a committee to make decisions about
what to do for AI standards. They're about five years out
from coming up with a standard. These systems are being used now, not just in loans, but they're being
used in vehicles like I was saying. They're being used in things like
Cooperative Adaptive Cruise Control. It's funny that they call that "cruise control" because the type of controller used
in cruise control, a PID controller, was used for 30 years in chemical plants before it ever made it into a car.
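For contrast, the entire logic of a PID controller fits in a few lines. Here is a minimal sketch in Python; the gains and setpoint are purely illustrative, not values tuned for any real vehicle:

```python
# A minimal PID controller sketch. The gains and setpoint below are
# illustrative placeholders, not values tuned for any real vehicle.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        # Error between where we want to be and where we are.
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Output is a weighted sum of present, accumulated, and trending error.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical cruise-control loop: hold 100 km/h by adjusting the throttle.
controller = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=100.0)
throttle = controller.update(measurement=92.0, dt=0.1)
```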
The type of controller that's used to drive a self-driving car, a machine-learning controller, has only been used in research since 2007. These are new technologies. We need to demand the standards
and we need to demand regulation so that we don't get snake oil
in the marketplace. And we also have to have
a little bit of skepticism. The experiments on authority done by Stanley Milgram after World War II showed that your average person would follow an authority figure's orders even if it meant harming their fellow citizens. In this experiment, everyday Americans would shock an actor past the point of him complaining about his heart trouble, past the point of him screaming in pain, past the point of him
going silent in simulated death, all because somebody with no credentials, in a lab coat, was saying some variation of the phrase "The experiment must continue." In AI, we have Milgram's
ultimate authority figure. We have a dispassionate
system that can't reflect, that can't make another decision, that there is no recourse to, that will always say, "The system must continue" or "The process must continue." Now, I'm going to tell you a little story. It's about a car trip I took
driving across country. I was coming into Salt Lake City
and it started raining. As I climbed into the mountains,
that rain turned into snow, and pretty soon that snow was whiteout. I couldn't see the taillights
of the car in front of me. I started skidding. I went 360 one way,
I went 360 the other way. I went off the highway. Mud coated my windows,
I couldn't see a thing. I was terrified some car was going
to come crashing into me. Now, I'm telling you this story
to get you thinking about how something small
and seemingly mundane, like a little bit of precipitation, can easily grow
into something very dangerous. We are driving in the rain
with AI right now, and that rain will turn to snow, and that snow could become a blizzard. We need to pause, check the conditions, put in place safety standards, and ask ourselves
how far we want to go, because the economic incentives
for AI and automation to replace human labor will be beyond anything we have seen
since the Industrial Revolution. Human salary demands can't compete with the base cost of electricity. AIs and robots will replace
fry cooks at fast-food joints and radiologists in hospitals. Someday, the AI will diagnose your cancer, and a robot will perform the surgery. Only a healthy skepticism of these systems is going to help keep people in the loop. And I'm confident that if we can
keep people in the loop, if we can build transparent
AI systems like the dog/wolf example where the AI explained
what it was doing to people, and people were able to spot-check it, we can create new jobs
for people partnering with AI. If we work together with AI, we will probably be able to solve
some of our greatest challenges. But to do that, we need
to lead and not follow. We need to choose to be less like robots, and we need to build the robots
to be more like people, because ultimately, the only thing we need to fear
is not killer robots, it's our own intellectual laziness. The only thing we need
to fear is ourselves. Thank you. (Applause)