>>NEIL DEGRASSE TYSON: Welcome to the universe, yes. Hi, everybody. I'm Neil deGrasse Tyson, the Frederick P. Rose director of the Hayden Planetarium, and welcome. Is this our 18th or 19th annual Isaac Asimov Panel Debate? This has become a very hot ticket in New York City, and I almost feel apologetic, because we can't accommodate everyone who wants to see it. We have to go to a lottery model to get tickets out. And short of going to a bigger venue or charging more, we're still trying to work this out, but you're here in the audience now, and that's good.
So did they tell you what tonight's topic is? It's a very hot topic on every frontier. We're talking about artificial intelligence and, are people afraid of it? Do people embrace it? Should we be doing it? Should we not be doing it? And it's all over the news. Not the least of which, in today's business section of The New York Times. This is a paper version of the news -- the newspaper. It's got a, sort of an android robot holding the national flag of China. And the title is, "China's Blitz to Dominate AI." And I just came back 48 hours ago from the United Arab Emirates, and they have a newly established minister of artificial intelligence. There are countries around the world that see this and recognize it as a way to sort of leapfrog technologies, and I think this is a, there's another... here it is. "China's blitz to rule AI meets with silence from the White House." So I just thought I would just say that. I'm just saying. We're trying to burn clean coal. That's what our priorities are.
[laughter]
>>TYSON: But I'm just saying.
[applause]
>>TYSON: Don't get me started, because...
[laughter]
That's the topic tonight. We combed the country to find some of the top AI people in the land, and we are delighted for this mix of five panelists we have this evening.
Let me first introduce to you, who's right on the wings, Mike Wellman. Michael Wellman is a professor of computer science and engineering at the University of Michigan, and he leads the Strategic Reasoning Group. Michael Wellman, come on.
[applause]
>>TYSON: And... oh, I was supposed to introduce you in a different order than that. Yeah. Yeah, I will get back to you in a minute.
>>MICHAEL WELLMAN: Okay.
[laughter]
>>TYSON: Just talk among yourselves there, for the... And next up we have a friend and colleague in the astrophysics community who's directed his attention to AI. Max Tegmark. Come on out, Max.
[applause]
>>TYSON: Professor of physics. [How are you], Max.
>>MAX TEGMARK: Excellent, man.
>>TYSON: He's doing research in AI at MIT, and he's also president of the Future of Life Institute. So Max, welcome.
We also have, get my order straight, here. Here we go. When I was... Nope, that's not it. Yeah. So, next we have... You couldn't do this without representation from industry, and that's precisely what we obtained for this panel. John Giannandrea, come on out. John.
[applause]
>>JOHN GIANNANDREA: Thank you.
>>TYSON: John is, he's a senior vice president of engineering at Google, where he leads the Google search and the Google AI teams. So we got Google in the house. Google in the house.
Next, I've got Ruchir Puri. Ruchir, come on out.
[applause]
>>TYSON: Ruchir is the chief architect of IBM Watson, and he's also an IBM Fellow. So we got him.
[applause]
>>TYSON: And we've got Helen Greiner. Helen, come on out. Helen.
[applause]
>>HELEN GREINER: Thank you.
>>TYSON: Helen, cofounder of the iRobot Corporation, maker of the Roomba. The Roomba. We all know the vacuuming robot. She's also founder of the drone company CyPhy Works. She makes drones, now. Is that good or bad?
[laughter]
>>TYSON: I don't know, we'll find out. Ladies and gentlemen, thank you for coming. This is our panel, everyone. Yes.
[applause]
>>TYSON: So, Mike, you're a professor at the University of Michigan. So what do you do?
>>WELLMAN: Well, I study artificial intelligence from the perspective of economics. You know, economics is a social science that treats its entities, its agents, as rational beings -- ideally rational--
>>TYSON: Really? Really.
>>WELLMAN: Artificial intelligence is the subfield of computer science that's trying to make ideally rational beings. So it's a very natural fit.
>>TYSON: Can an irrational being make a rational being?
>>WELLMAN: We can do our best.
>>TYSON: And so you teach a course on this. I'm just curious, how do you frame a course around something that's so dynamic and so changing and so emotionally fraught with fear?
>>WELLMAN: So what I do, and what one does in teaching an AI course, is you bring together the standard frameworks and representations and algorithms, techniques that AI people have developed over the years to address thinking-like problems -- reasoning and problem solving, decision making, learning -- using very standard forms of algorithms. Now, some people are coming to it from the emotional perspective. I sometimes have gotten comments on my teaching evaluations that said, "I signed up for an AI course, and all I got was computer science." That's what it is. It's an engineering discipline, and that's the most efficient way to make progress.
>>TYSON: Excellent. So, Helen, what are you about?
>>GREINER: I'm all about the robots.
>>TYSON: You're all about the robots, yeah.
>>GREINER: My brother was a huge Star Wars fan when we were young, and, well, for me, it was all about R2-D2, and I've wanted to build robots since I saw Star Wars on the big screen. He had everything: character, strategy, loyalty.
>>TYSON: You're telling me Star Wars had like a positive net effect on this world?
>>GREINER: I think it had a positive net effect on children.
>>TYSON: Uh-huh. At least one here, yes.
>>GREINER: Many, many. So we've been trying to build robots like this, and we've had great accomplishments, you know? We've had robots that have been credited with saving the lives of hundreds of soldiers, thousands of civilians. We've got the Roomba, which was the best-selling vacuum -- not robot vacuum, but best-selling vacuum -- last year by retail revenue numbers, and I think a little bit of a cultural icon, too. And so I think we've come some of the way, but we're not at R2-D2 yet. So I think some of the debate is about where it needs to go.
>>TYSON: So you cofounded the company iRobot, which I think was the name of an Isaac Asimov novel. Yeah. And that company invented the Roomba. Great word, by the way. Room-ba. Yeah, that's just good. That was very good. I like words that--
>>GREINER: So we asked our engineers what we should call it first, and they said the Mark Master 2000: the Cyber Suck.
[laughter]
That's probably the best marketing dollars that we'll ever spend to get that name.
>>TYSON: Gotcha. So that cost you money to get that name, okay.
>>GREINER: Yeah.
>>TYSON: Okay. Does your Roomba count as a kind of AI, would you say?
>>GREINER: I believe so. People are starting to use AI to be synonymous with deep learning techniques. But for roboticists, there's a lot of tools in the tool bag. Roomba runs something called behavior control, which was invented by one of my business partners at iRobot, where we have a lot of behaviors that all run in parallel. The first generation, it wouldn't fall down the stairs, it did obstacle avoidance, it followed the walls. The latest generation, I think it was something like 13 years later, does full navigation using a camera system -- so, visual SLAM techniques.
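[Editor's note: a minimal sketch of the priority-based behavior arbitration Greiner describes, in which simple behaviors run in parallel and the highest-priority one that fires wins. All names and sensor fields here are hypothetical illustrations, not iRobot's actual code.]

```python
# A minimal sketch of priority-based behavior arbitration, in the spirit of
# the "behavior control" described above. All names are hypothetical.

def cliff_avoid(sensors):
    # Highest priority: never drive off a stair edge.
    if sensors["cliff_detected"]:
        return {"drive": "backup_and_turn"}
    return None  # this behavior has no opinion right now

def obstacle_avoid(sensors):
    if sensors["bump"]:
        return {"drive": "turn_away"}
    return None

def wall_follow(sensors):
    if sensors["wall_seen"]:
        return {"drive": "hug_wall"}
    return None

def wander(sensors):
    return {"drive": "forward"}  # default behavior, always has an opinion

# The behaviors "run in parallel"; on each control cycle the arbiter picks
# the highest-priority behavior that produced a command.
BEHAVIORS = [cliff_avoid, obstacle_avoid, wall_follow, wander]

def arbitrate(sensors):
    for behavior in BEHAVIORS:  # ordered from highest to lowest priority
        command = behavior(sensors)
        if command is not None:
            return command

# One control tick: a bump is seen, no cliff, so obstacle avoidance wins.
print(arbitrate({"cliff_detected": False, "bump": True, "wall_seen": True}))
```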
>>TYSON: Okay. Has your Roomba ever killed anyone?
[laughter]
>>GREINER: You know, we actually...
>>TYSON: Wait, wait. There's only a yes/no. No! That's... The... I...
>>GREINER: The answer's certainly--
>>TYSON: No sentence. Yes or no?
>>GREINER: Certainly no.
>>TYSON: Okay, thank you.
>>GREINER: But we actually, you know. You know, it's product design. You have to look at what are the ramifications it could have? Like the worst thing we came up with, maybe it goes into someone's fireplace, pulls out the embers, and sets the place on fire. Has never happened, and by the way, they usually have hearths that keep a Roomba out. And screens. But there's a lot of, you know...
>>TYSON: But in the future, you're...
>>GREINER: No Roomba's killed anyone.
>>TYSON: Okay.
[laughter]
>>TYSON: Yeah. Ruchir. Ruchir Puri. Did I say that correctly?
>>RUCHIR PURI: Yep.
>>TYSON: Yeah, thank you. You've been at IBM for more than two decades.
>>PURI: Yep.
>>TYSON: And so I'm just curious. Before we get to Watson, which you have something to say about, our earliest memories of IBM getting in this game I think was Deep Blue, where it was a chess program that beat the world's best chess player. What made it so good in its day?
>>PURI: Well, I think from the point of view, also, I've dealt with optimization algorithms for pretty much a quarter of a century. And where--
>>TYSON: You said optimization algorithms.
>>PURI: Optimization algorithms.
>>TYSON: Yeah, so this would just... So that it can calculate as quickly and efficiently as possible towards a goal.
>>PURI: It really... There were three things that came together, actually. So search algorithms, really smart evaluation criteria, and the third one is really sort of massively parallel computing application as well. So those three things came together to really give rise to something that, you know, that wowed people. It's an application of technology. Again, algorithms coming together from three points of view to give rise to an application that's really [broad].
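[Editor's note: a toy sketch of two of the three ingredients Puri names -- game-tree search plus an evaluation function -- using minimax with alpha-beta pruning. Deep Blue's real search and hand-tuned chess evaluation ran on custom massively parallel hardware; the stand-in "game" below is purely illustrative.]

```python
# Minimax search with alpha-beta pruning over a generic game, plus a
# heuristic evaluation function. Only the idea is shown here.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply, evaluate):
    """Return the minimax value of `state`, pruning hopeless branches."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # heuristic score of the position
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # the opponent will avoid this branch: prune
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply(state, m), depth - 1,
                                       alpha, beta, True,
                                       moves, apply, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Tiny stand-in "game": a state is a number, a move adds or subtracts 1,
# and the evaluation is the number itself.
score = alphabeta(0, 4, float("-inf"), float("inf"), True,
                  moves=lambda s: [+1, -1],
                  apply=lambda s, m: s + m,
                  evaluate=lambda s: s)
print(score)  # 0: every +1 the maximizer gains, the minimizer takes back
```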
>>TYSON: So, but could Deep Blue do anything other than play chess?
>>PURI: Interestingly, the Deep Blue was... We have, at IBM, we have something called grand challenges. And we pose these problems to really move the field forward. Deep Blue was a grand challenge posed to the scientists at IBM. Similar to that, actually, Jeopardy was also a grand challenge.
>>TYSON: But Jeopardy wasn't Deep Blue.
>>PURI: No. Jeopardy certainly wasn't Deep Blue, but--
>>TYSON: Yeah, but that was Watson, correct?
>>PURI: Yes, that's--
>>TYSON: We'll get to Watson in a minute.
>>PURI: Okay.
>>TYSON: I just want to work my way up to that. And I think I have some firsthand knowledge of your grand challenges. I was once invited to address a retreat among IBM engineers where they were given cash rewards for their innovation. Do they still do that?
>>PURI: Certainly. We encourage our employees and scientists to really get the innovations out there and get their innovative juices flowing. Absolutely, we do that.
>>TYSON: Yeah. I was delighted, because each one got recognized, they got a certificate, the CEO was there.
>>PURI: Yep. We still do that.
>>TYSON: I mean, it was very much taken seriously.
>>PURI: Yep. We still do that.
>>TYSON: Right. Very good. We'll get back to you on that. So, John, I think I messed up your last name. Giannandrea?
>>GIANNANDREA: Giannandrea. Giannandrea.
>>TYSON: Oh, Giannandrea.
>>GIANNANDREA: Yes, that's correct.
>>TYSON: Yeah. Giannandrea. So you represent Google on this panel, and could you just tell me, remind us what is the game of Go, and then tell us what AlphaGo is?
>>GIANNANDREA: Sure. So Go is this ancient, Oriental board game which is harder than chess.
[laughter]
>>GIANNANDREA: And the reason it's harder than chess--
[laughter]
>>TYSON: Okay, Go.
>>GIANNANDREA: And the reason it's harder than chess is because any given--
>>TYSON: Well, just to be clear, it's a board game. You didn't say that.
>>GIANNANDREA: It's a board game. Yeah, sorry.
>>TYSON: If you said it's a war game...
>>GIANNANDREA: No, this is a board game. It's a board game.
>>TYSON: It's a war board game. It's not like--
>>GIANNANDREA: It's a strategy game. [How about] that.
>>TYSON: --weapons and things.
>>GIANNANDREA: No. There's only two pieces: the black pieces and the white pieces.
>>TYSON: Okay. Go.
>>GIANNANDREA: And the reason it's hard... And people have been playing this game for 2,000 years, and it is highly revered in Asia, and people are paid full-time to be professionals at this game. And the reason it's hard is because from any given position on the board, there are many, many more moves that you could make. So you can't use brute-force approaches to figure out how to play the game.
>>TYSON: So intuition has a very big role.
>>GIANNANDREA: So the recent systems that have become very, very good at this game -- you could even say superhuman at this game, because they beat the world champions -- they're doing something fundamentally new. And people look at that and use words like intuition -- which is not a technical word -- and...
[laughter]
>>GIANNANDREA: You know, so there's something going on.
[laughter]
>>TYSON: Who invited you to this?
>>GIANNANDREA: But it's a serious issue, because I think when people use words like that, when a chess grandmaster is beaten by Deep Blue, or when the world champion in Go, Lee Sedol, was first beaten in Korea, it's an emotional toll on that player, because they just spent their entire life perfecting their ability to play this game. And then a machine comes along and appears to beat them using... and the words that are used are like creativity or intuition or "that's something I didn't expect it to play." And so I think that adds to the mystique of AI, when actually what's going on is engineering, plain and simple.
>>TYSON: So brute force.
>>GIANNANDREA: No. In the case of the AlphaGo system, it was a combination of training and new algorithms to do so-called deep learning, which I'm sure we'll get into.
>>TYSON: Okay. So AlphaGo was trained on previous games that had been played?
>>GIANNANDREA: Yes. There's two versions of it. The one that won the world championships was trained on all human games that it could get its hands on and then played itself. So it basically practiced after it had learned the--
>>TYSON: How quickly could it play itself and finish a game?
>>GIANNANDREA: Well, we do it in the cloud with thousands of computers, so it could do it, you know, thousands of times at the same time.
>>TYSON: So... Okay.
[laughter]
>>GIANNANDREA: Very fast.
>>TYSON: Very fast, okay.
[laughter]
>>GIANNANDREA: And then most recently there were--
>>TYSON: And it's just in the cloud?
>>GIANNANDREA: In the cloud, that's right.
>>TYSON: Up there somewhere.
>>GIANNANDREA: Lots and lots of computers.
>>TYSON: But the computing cloud, not the storage cloud.
>>GIANNANDREA: That's right. The computing cloud.
>>TYSON: Yeah. Yeah.
>>GIANNANDREA: So recently there's been a version of this called AlphaGo Zero, and the interesting thing about this--
>>TYSON: So that's an upgrade.
>>GIANNANDREA: This is a later version. And what they tried to do with that is the researchers tried to see if they could learn to play Go without looking at any human games.
>>TYSON: That way it would come up with stuff on its own.
>>GIANNANDREA: Yeah. And the [unintelligible] AlphaGo Zero was actually better than the ones that learned from humans, and it also plays chess very well.
[laughter]
>>TYSON: I'll try to find other questions for you later. We'll see.
>>GIANNANDREA: It doesn't do Jeopardy, though.
>>TYSON: Okay, so that learned... So it taught itself, basically--
>>GIANNANDREA: Yeah.
>>TYSON: --and was not biased by the creativity of any human game that had previously been played. And so that, you play that game against AlphaGo--
>>GIANNANDREA: Another copy, yeah.
>>TYSON: And it beat AlphaGo.
>>GIANNANDREA: Yeah, that's right.
>>TYSON: So it's extra badass.
[laughter]
>>GIANNANDREA: Yeah. Now, games are a special thing, because games have an objective score. And so it's not... It's actually a good test for this level of the current technology.
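[Editor's note: a toy illustration of the self-play idea described above -- the program improves by playing against itself, with the game outcome as the only training signal. AlphaGo Zero couples deep networks with Monte Carlo tree search at enormous scale; this tabular sketch of the game Nim (take 1 or 2 stones; taking the last stone wins) only shows the principle.]

```python
# Self-play learning on a tiny game: both "players" share one value table
# and update it from the outcomes of their own games.
import random

Q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, eps=0.1):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < eps:  # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def self_play_game(start=7):
    history, stones = [], start
    while stones > 0:
        m = best_move(stones)
        history.append((stones, m))
        stones -= m
    return history  # whoever made the last move won

for _ in range(5000):
    history = self_play_game()
    # Walk backwards: the last mover won (+1), the mover before lost (-1)...
    result = 1.0
    for stones, move in reversed(history):
        old = Q.get((stones, move), 0.0)
        Q[(stones, move)] = old + 0.1 * (result - old)
        result = -result

# With 7 stones, taking 1 (leaving a multiple of 3) is the known winning move.
print(best_move(7, eps=0.0))  # typically learns to take 1
```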
>>TYSON: So, Max, we go back. Great to have you. This is like your fifth time here in the museum. It's not even your first Asimov panel, so thanks for showing up again. You recently published a book, Life 3.0. It's your third or fourth book...
>>TEGMARK: Second.
>>TYSON: Second? Okay. It feels like three books.
>>TEGMARK: They just take so long to read--
>>TYSON: That's what it is, yeah.
>>TEGMARK: --because they're so boring.
[laughter]
>>TYSON: Your first book was Our Mathematical Universe, and thinking of all of the universe as, as a simulation, basically. And we had you on the simulation panel last year. Life 3.0, what's that about?
>>TEGMARK: It's... Well, my day job is working at MIT doing AI research from a physics perspective, these days. So I like to take a step back and look at things, and if you--
>>TYSON: A cosmic perspective.
>>TEGMARK: Yeah. And if you look beyond the next election cycle and all these near-term AI controversies about jobs and stuff like that, then it's pretty natural to ask, well, what happens next? What happens if all these folks succeed and ultimately make machines that can do everything we can? The earliest life that came along, I call it 1.0, because it was really dumb stuff like bacteria that couldn't learn anything in its lifetime. And then I call us 2.0, because we can learn.
>>TYSON: Oh, you're referring to the evolutionary achievements in the tree of life.
>>TEGMARK: Yeah, yeah.
>>TYSON: Okay.
>>TEGMARK: And what comes next? I think we should think about this, because if the only... the only strategy we have is to say, "Hey, let's just build machines that can do everything cheaper than us, what could possibly go wrong," you know? I think that's just pathetically unambitious and lame, you know? We're an ambitious species, Homo sapiens. We should aim... we should aim higher. We should say, like, "How can we use all this technology to empower us, not to overpower us?"
[applause]
>>TYSON: Okay. We'll need more of that, I'm sure, as this conversation progresses. Let me get back to Mike. Mike, does... Could you remind us about the Turing test, what that is?
>>WELLMAN: Sure. Alan Turing, back in 1950--
>>TYSON: The movie The Imitation Game is a biopic on him.
>>WELLMAN: Right, and it does depict the Turing test a bit. So back in 1950, he proposed this thought experiment, realizing that, to try to get people to understand machines as being able to think would require defining thinking, and that would be very controversial. So he set up this thing that became called the Turing test. That is, see if you could have a machine have a dialogue with somebody and convince them that they're a person rather than a machine. If a person in an interrogation could not tell the difference between whether they were speaking with a machine or a person, then you might as well say it's thinking. This is really audacious in 1950. Think about what machines were like back then. People hadn't even thought about -- thought of word processing yet, and they were thinking about AI. That test, I think, has been very useful as a thought experiment. The field of AI has never really generally accepted that as the goal of AI or the definition of AI. But certainly--
>>TYSON: Is that because you've evolved past that? We do have machines that sound like they're not machines, but people. So once you hit that goal, you say, oh, we need a better goal. And are you just moving the goalpost?
>>WELLMAN: So we haven't hit the goal. So it turns out Turing didn't realize that it would be easy to fool a lot of people, even without being very good at thinking.
>>TYSON: It reminds me, was it a New Yorker comic where two dogs are at computers, and one turns to the other and says, "The good thing about the Internet is that no one knows you're a dog."
>>WELLMAN: That's right. And no one knows you're a bot either, and that is a potential way that AI is going to affect us and be ubiquitous. So it is quite relevant to try to impersonate people. But we use that as a gateway to a lot of Internet activities. You do a CAPTCHA; that is called a computer automated-something-Turing. I forget the exact acronym.
>>TYSON: The T in CAPTCHA stands for Turing?
>>WELLMAN: Yes.
>>TYSON: Oh, I didn't know that. Cool.
>>WELLMAN: Or, it's basically you have to tell the machine--
>>TYSON: We've all done it.
>>WELLMAN: --that you're a human.
>>TYSON: Yeah.
>>WELLMAN: So find something that only humans can do. And of course, that bar keeps on moving all the time. So it's quite relevant to try to impersonate people. The Alexas and the Siris of the world are trying to be as humanlike as possible. In films and videogames, we try to put in realistic characters all the time. So it still speaks to us, even though it's not the whole story about AI.
>>TYSON: Right. So your point is we did so well with satisfying the Turing test very early that it just wasn't good enough a discriminator for the AI that people were seeking.
>>WELLMAN: Well, again, I would say that being specifically like a human is only one way to be intelligent. And you could be superhuman in many other ways, and you don't stop when you reach human-level performance in particular tasks, because the goal is not to be like a human. The goal is to make ideally rational intelligence that could do all sorts of things.
>>TYSON: So, Helen, with the company you cofounded called iRobot, could you tell us about, what is it, the three laws of robotics, by Isaac Asimov?
>>GREINER: Yeah, definitely. The robots could not... The three laws: One, the robots cannot hurt humans or, through inaction, allow humans to come to harm. Robots cannot--
>>TYSON: That was one.
>>GREINER: That was one.
>>TYSON: So they--
>>GREINER: Robots have to obey orders.
>>TYSON: --cannot harm you, and their inaction also can't harm you.
>>GREINER: Yeah. They have to obey orders, unless it conflicts with number one. So they can't... I'm sorry. The second one is they have to obey orders unless it conflicts with number one. And the third one is they have to protect themselves, unless it conflicts with number one and number two. And there's one he added later on, the zeroth one--
>>TYSON: I didn't know that.
>>GREINER: --which is robots cannot cause harm to humanity or, through inaction, allow humanity to come to harm.
>>TYSON: So it generalizes it up from the individual?
>>GREINER: Yeah.
>>TYSON: Humanity... The...
>>GREINER: Yeah. Well, he made that the zeroth law. So he stuck it in the front when he thought of it.
>>TYSON: The zeroth law, okay.
>>GREINER: But what's amazing about it is he... he started writing the I, Robot books in 1940. Practical transistors weren't invented till 1947, so, I mean, one of the reasons we're all so honored to be here at the Asimov memorial debate is, I think we -- I can speak for the panel that we're all huge, huge fans of what he was writing about, especially way back.
>>TYSON: Well, just consider that he's written about -- on topics quite diverse. So no matter what subject we have here, there are books that he wrote about it. So...
>>GREINER: Yeah.
>>TYSON: Every panel we've ever had on--
>>GREINER: But AI's a really good one for him.
>>TYSON: --on any subject is, I read that by Asimov when I was a kid, so.
>>GREINER: That's why people ask me, are you putting those in the robots? And the short answer is, they're great as a literary device. They're a little bit more tricky to program. And so, unfortunately, the answer is that the state of technology is not ready for those types of abstract rules yet.
>>TYSON: But they're nice guidance; just philosophical guidance, I guess.
>>GREINER: Um, I have a very practical view. The laws, if you state them now, might be... Robots can save people, they have saved people, and they could save a heck of a lot more people. It might be that robots...
>>TYSON: Well, plus, the military would not be obeying those laws.
>>GREINER: Yeah, exactly. Exactly.
>>TYSON: Right.
>>GREINER: And the whole books were about how those laws resulted in conflicts, right? But in reality, because I'm a businesswoman as well as a robot lover, robots are not going to hurt people.
>>TYSON: Don't say a robot lover. That's just... doesn't...
>>GREINER: I am. I--
>>TYSON: Just find some other phrase.
>>GREINER: I'm a robot enthusiast.
>>TYSON: There you go. Thank you.
>>GREINER: Robots are not going to hurt people. They're not going to hurt themselves. They're not going to do these things, because they're going to be either scrapped, they're going to be sent back, or someone's going to be sued. And so from a business standpoint, the robots are going to be safe to operate.
>>TYSON: One of my favorite... Well, no. A video that I found amusing was a cat riding around on a Roomba.
>>GREINER: You know, that got so many views, and I have no idea why.
[laughter]
>>GREINER: I mean, it's like tens of millions of views, right? It's crazy.
>>TYSON: I mean, if the Roomba were big enough for me to sit on, I would do that. That's...
[laughter]
>>TYSON: Wouldn't you? That's fun.
>>GREINER: That was not in our brainstorming sessions when we thought about all the applications for robots.
>>TYSON: So, Ruchir, could you get us from Deep Blue to Watson? What happened in that transition? And if we can remind people why we all know about Watson, there was the big contest that you guys entered it in.
>>PURI: Certainly. And let me pick up the thread from, from the chess and the Go and the, you know. Let's...
[laughter]
>>PURI: Let's make this--
>>TYSON: Okay, continue.
>>PURI: --interesting, actually.
>>TYSON: Continue.
>>PURI: [Just finally there are]--
>>TYSON: By the way, Deep Blue beat Kasparov when Google had 10 employees, okay?
>>PURI: True. That's true.
>>TYSON: So just, like, where were you?
>>PURI: That's true. Absolutely true.
>>TYSON: Okay. Did I get you on that one?
>>PURI: Yes, [thank you].
[laughter]
>>TYSON: Okay. Take us there.
>>PURI: So the journey continues from, from the point of view of the chess game that beat Kasparov to, we went down to, okay, what is next? And obviously natural language--
>>TYSON: Kasparov was the world champion at the time.
>>PURI: Yes, at the time. And natural language, which is so fundamental to humans, actually. And the intricacies of natural language, as we've been... At least there's one fundamental trait that humanity has, which is just the proliferation of language, the advent of language itself. So we decided that will be the next leap that we are going to make. And there is no game better than Jeopardy that captures the intricacies. So we posed that as a grand challenge, and--
>>TYSON: Jeopardy, not only language, but culture.
>>PURI: But culture, right.
>>TYSON: Yeah. It's not a calculation anymore in a traditional sense.
>>PURI: It is certainly not a calculation anymore, and the way the questions are posed are so nuanced that you really are dealing with, at this point in time, not just a calculation machine and simple evaluation criteria and search algorithms and parallel computing, but really understanding language, questions and answers, and the way we interact as human beings. So that was really the advent of the next challenge. Because once we are able to solve that, the implications are phenomenal in terms of the benefit it can bring to us as a society, which is where we took that level to. The first thing that we started right after Jeopardy was the applications of that technology to the health domain, which is so fundamental to all of us. So right from the chess game, the next challenge is really addressing the fundamentals of what defines us as humans in terms of communication, addressing those intricacies, and then the applications of that abound.
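[Editor's note: a minimal caricature of the "generate candidate answers, score the evidence, answer with a confidence" pattern used, at far greater scale, by Jeopardy-era question-answering pipelines. The corpus and word-overlap scoring below are toy assumptions, not IBM's actual method.]

```python
# Toy question answering: every corpus entity is a candidate answer, ranked
# by a crude evidence score, returned together with a confidence.

CORPUS = {
    "Jupiter": "Jupiter is the largest planet in the solar system.",
    "Mercury": "Mercury is the smallest planet and closest to the sun.",
    "Mars": "Mars is the fourth planet, known as the red planet.",
}

def score(question, passage):
    # Evidence score: fraction of question words found in the passage.
    q_words = set(question.lower().split())
    p_words = set(passage.lower().replace(".", "").replace(",", "").split())
    return len(q_words & p_words) / len(q_words)

def answer(question):
    ranked = sorted(CORPUS, key=lambda c: score(question, CORPUS[c]),
                    reverse=True)
    best = ranked[0]
    return best, score(question, CORPUS[best])  # answer plus confidence

print(answer("this is the largest planet in the solar system"))
# ('Jupiter', ...) with a confidence high enough to "buzz in"
```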
>>TYSON: To serve needs in actual society.
>>PURI: Yeah, absolutely.
>>TYSON: So Watson, in principle, can become the best doctor ever, because Watson can read all the research papers and then interpret symptoms in the context of what is known worldwide, rather than just what one doctor happened to learn.
>>PURI: Absolutely. And, at least the way we think about it is really not-- It's not about, does it become the best doctor, but, as we all know, no single physician has the time -- even if they certainly have the intelligence to figure all of this out, they don't have the time to figure all of that out. And as Max was saying, it's really about empowering professionals rather than necessarily overpowering them. And really, Watson is about empowering the society, as opposed to overpowering it. And that's why I really think about, it's bringing capabilities whereby, yes, it can read millions of studies and millions of trials that may be going on, and there are some well-publicized cases as well where it has actually saved patients, either in North Carolina or Tokyo, or a study that was published more recently in India as well. But it's, from our point of view, it's really about bringing the technology together with the human beings in what we call augmented intelligence.
>>TYSON: So in all fairness to our understanding of this, Watson only knows what is available on the Internet, correct?
>>PURI: Yeah. Watson only knows what is being fed to it. Let's say it that way. Whether it is available on the Internet or it is private information...
>>TYSON: So how does Watson know what is fake... news or not?
[laughter]
>>TYSON: You can make a super machine that cannot distinguish the two. Well, apparently humans can't either, but...
[laughter]
>>TYSON: In principle, we -- educated -- can make a judgment. Will Watson be in a position to make that judgment?
>>PURI: I think, at least regarding fake news, the question really is on, we are all pushing the boundaries of that technology, and yes, the machines need to be trained, and they can really help us, given what has gone on, actually, in the last couple of years. Once they are actually -- you bring that technology to bear in terms of realizing there is a problem, you can actually correct for it. So it's not about whether Watson can distinguish it today or not. Once you realize the problem, you can actually start working on technologies that can start deciphering that much better, thereby helping us as a society, because, you know, of what is going on around.
>>TYSON: So, but that... From what you've described, Watson would still be shy of this Holy Grail of just thinking stuff up on its own without reference to... I mean, when you think of the most creative people there ever were, sure, there's some foundation from which you could trace that creativity. But for many of them, there's a spark, and something new comes out of them that had no precedent. So, from what you described, Watson is capable of digesting preexisting knowledge, but in its current state, or at least the state we're familiar with, it is not inventing something new.
>>PURI: Yeah. Certainly, the purpose of the technology today is really not about that spark in itself. Although it will find out... in particular, it'll find insights that you didn't know existed, actually. Although they were hidden in there, you didn't know they existed, so it may be an aha moment for you -- I got it -- but still, it existed there. It didn't... So it will actually do that. But, yeah, it wouldn't get that... The notion you are saying -- hey, that was a spark -- no, it doesn't have that.
>>TYSON: So, John, tell me about the future. We could spend a whole panel on this, but I just want to put it on the table briefly: What's the role of AI in the future of autonomous cars? And I know you guys are working on this.
>>GIANNANDREA: Yeah, we are.
>>TYSON: You entered certain autonomous car chal-- You, your company.
>>GIANNANDREA: Yeah, we have a division of Alphabet that works on this.
>>TYSON: Just to be clear, the holding company is Alphabet--
>>GIANNANDREA: Yeah.
>>TYSON: --and Google is one of several companies under Alphabet--
>>GIANNANDREA: That's right.
>>TYSON: --and one of those companies was tasked with making the autonomous car.
>>GIANNANDREA: One of those companies is called Waymo, and they're one of many companies that are making autonomous cars. So it's a super hard problem. I think people have been working on it seriously for more than a decade. They're making progress. These cars have driven millions of miles with very small numbers of incidents, but they're still pretty constrained. They're more accurate than a human driver, but they're limited in where they can go. So for example, the kinds of streets that they can drive on, the cities, and so on and so forth. But the technology is progressing fairly dramatically. I'm pretty confident to say that we will have fully autonomous cars from most of the large car manufacturers within a decade.
>>TYSON: And what role does AI play in that, or is it just really good programming?
>>GIANNANDREA: Well, it's machine learning. So, you know, these systems have a lot of computers on the car; they can detect a stop sign or can figure out that there's an impediment in the road, or a kid just ran into the road, or there's a cyclist. In California, we have this weird thing where motorcycles are allowed to drive between the lanes--
>>TYSON: You have many weird things in California.
>>GIANNANDREA: We have many weird things in California.
>>TYSON: Uh-huh.
>>GIANNANDREA: But motorcycles are allowed to drive between the lanes of the cars, and so for the computer to actually understand what's going on and figure out what's safe and not safe is actually quite hard. I think one of the things that's going to happen here is even if you don't see millions of autonomous cars like in three years, most of the new cars that you buy will have semiautonomous features in them, like automatic braking or telling you what the speed limit is.
>>TYSON: Which we're all accustomed to and expect on our next cars, now.
>>GIANNANDREA: Yeah. Yeah. So I think this technology kind of comes in increments. It's not like a big-bang thing, you know? And I'll just echo this comment about augmentation, because the phrase AI, it means so many different things to so many different people that it's really hard to kind of pin down what it is. But the idea of augmented intelligence has been around for a very long time. A lot of the ideas we have in computing today came from the work of Doug Engelbart back in the '50s, and he had been describing computers as being a tool; a tool that can help a doctor look through more information, that can help pinpoint something in an x-ray. Not something that would replace the doctor, and that's how we think about it.
>>TYSON: Which is Max's point. Not be... Just... What's the two words you put together?
>>TEGMARK: Oh, empowered versus overpowered.
>>TYSON: Overpowered, yeah, that's right. Very good. I like that. Can you describe for us what's the difference, or what is the ascent from AI to general AI? Because we hear this term general AI--
>>TEGMARK: Yeah.
>>TYSON: And what's going on there? What have we been talking about so far, and if it's not general AI, what is?
>>TEGMARK: It's really important to be clear on what we mean by intelligence. As you mentioned correctly, John, different people mean different stuff, right? I think it's a really good idea to follow in the footsteps of Helen, here, and make a very broad definition of intelligence. So even Roomba is intelligent. And just define intelligence simply as the ability to accomplish complex goals, you know? So Roomba has a very narrow intelligence: really good at vacuum cleaning. Today we have a--
>>TYSON: Was that a diss on...
>>TEGMARK: I am a proud Roomba owner. And we...
>>TYSON: Roomba can carry cats around, okay?
>>TEGMARK: Yeah.
>>TYSON: For all we know, the Roomba is like the Uber for cats in the house, okay?
[laughter]
>>TEGMARK: Yeah. So...
>>TYSON: Wouldn't that be cool if cats could, like, get the Roomba to come and take them around? Get the Roomba to open a door for them, yeah?
>>TEGMARK: That's right. So today, we have many areas... So if you define intelligence as the ability to accomplish complex goals, then there are many areas today where machines, in narrow domains, are already much better than us. Not just vacuum cleaning and high-frequency trading and multiplying large numbers together and stuff like that, but also now in playing chess and playing Go and so on. But no machine today that we've built--
>>TYSON: No single machine.
>>TEGMARK: No single machine, not even the whole Internet combined, has the broad intelligence of a human child, who, given enough time, can get quite good at anything. So this is what's meant by artificial general intelligence, or acronymed AGI, which has been the Holy Grail of artificial intelligence ever since Marvin Minsky and McCarthy and others founded it, came up with the whole... founded the field in the '50s. And now--
>>TYSON: But, Helen, you come to this from a product -- a consumer product point of view. And I want to get back to what you just said. People who are making AI want to sell something. So they'll sell you something that cleans the room, that drives the car, that does any one of the things that help our lives. Who's going to buy something that has general intelligence?
>>TEGMARK: Well, everybody.
>>TYSON: And will the general intelligence be as good at the pieces of it as the specific products that industry would be making for that one need that you have?
>>TEGMARK: Oh yeah, by definition. So if people say that they think that machines will never be able to -- there will always be jobs left for humans -- they're just saying, by definition, that AI researchers will fail to build artificial general intelligence, because that's the very definition of it, that machines can do everything better than us. And many people... Like, I have had many conversations where you're--
>>GREINER: I'd like to point out--
>>TEGMARK: Yeah. Let me just finish my sentence.
>>GREINER: --there's a mechanical and a sensing component as well as the, what you're calling AGI -- a mechanical and a sensing element to make these machines better.
>>TEGMARK: Sure. But anyway--
>>TYSON: Oh, good point. So you can have software, but if it doesn't have the physical means to enact what it's supposed to, it's just a box.
>>TEGMARK: No, no, it can do some great stuff. Like, you could feed it a photograph, and it could tell you if you have breast cancer or something like that, right? But it's not going to go out and sweep your house.
>>TEGMARK: Yeah. But I think the final word on definition should go to Shane Legg, one of the leaders of Google DeepMind, because he coined the phrase, and he simply meant something that can do the same information processing that the human brain can do. And if you hook it up to good enough robots, which I'm sure we can build, then it can do great stuff. And so that's the goal of certain companies, like Google DeepMind, for example, to try to build that. And that's why they keep trying to push the envelope, right?
>>TYSON: Wait. But I gotta go to my industry people. What does it mean to buy something that has general AI? What do I do with that? Do I say, make me the best cup of coffee, drive me to my office, what's the square root of two, and... I mean, in practice, is that a thing?
>>GIANNANDREA: So in principle, and this is highly speculative, but in principle, an AGI could build any kind of other AGI, and therefore, could build you any machine you want it to build. And that's when people worry--
>>TYSON: That's when we all die.
>>GIANNANDREA: No, no, no, no. That's when a class of people who call themselves transhumanists would say that humans would evolve. And I personally don't believe in this. I see no evidence that it's going to happen. But that's the source of a lot of the ethical discussions--
>>TEGMARK: Right. Exactly.
>>GIANNANDREA: --about this topic.
>>TYSON: Mike, speaking of ethics, could you tell us about the trolleyology and what role AI can play in assisting our reasoning there?
>>WELLMAN: So probably many of you have heard about trolley problems. This became popular in psychology, to pose ethical dilemmas to people and see how they react. And there's many variations of it, but the standard kind of story is a trolley is going down a track, and it's about to hit or kill three people, and then you notice that there's a switch, and you could make it go over to another track where there's only one person. And you could choose to kill that other person instead of the first three. Would you do it?
>>TYSON: Wait, wait. So the dilemma there is somebody's going to die no matter what. You can either not touch it, then the trolley kills three people on its own, or you can intervene and actively kill one person.
>>WELLMAN: Right. Now--
>>TYSON: Right.
>>WELLMAN: --I'm not a psychologist, but I think it... It seems to be a kind of a silly question to ask people, because humans can really never get, I think, into a mental state where they could really believe that, with certainty, if I take this action, I'll kill this one person for sure, and the other action... There's always this uncertainty. There's always questions about what the blame... It's not actually a realistic situation. So the question is, will AIs... actually, maybe is it more realistic for them, perhaps? Could an autonomous vehicle be in a situation where, all of a sudden, a bicyclist runs in front of it, and it has a chance to swerve and do some other damage, and will it have to weigh that?
>>TYSON: You would have to take out the vegetable cart first and then find out what else it does, yes.
>>WELLMAN: Yeah. And, you know, so will they have to be coded in them -- what the solutions are to those dilemmas? When it does happen--
>>TYSON: Wait, wait. Wait. So that implies that humans get together, figure out a solution, and you hand it to AI. But that's not the point of AI. The AI is going to have some higher intelligence than we do, and that's why I'm curious.
>>WELLMAN: So I actually--
>>TYSON: If you bring AI to that problem, is it going to give different answers than we would, and then we say, oh my gosh, we never thought about it that way. Let's do it that way.
>>WELLMAN: So I think this is [unintelligible] that's going in some of the session, here. Actually, no. AI, the idea is we want to give -- for the humans to give the AI the values, and the AI is concerned with making decisions and taking actions to promote those values. So ultimately, we are saying... you know, we value life, we value... That's part of what the robot laws are for.
>>GREINER: There are no robot laws. They are science fiction.
>>WELLMAN: And so the danger is that they would be weaponized by the party that is programming them and is controlling them; not that they're going to all of a sudden decide to get rid of the humans. That's not the source of the danger. With respect to the trolley problem situation in this hypothetical autonomous vehicle, when it does happen that a car, one of these cars, runs over a bicyclist -- and it'll happen, I think, much less frequently than humans do it today -- we'll take the black box -- I hope they're engineered so that they have a black box that captures everything that was in their senses all the time, and it's very secure, so they can't lie about it -- and they will be able to dissect it and will say, "You made this decision. Why did you do that?" And it might say, "Well, I hit the bicycle, because if I swerved to the left, I would have run over a child." Or if it said, "Well, I did that, because if I swerved too fast I'd wake up the passenger," then you'd say, no, that was the wrong decision.
[laughter]
That was not what we meant for you to do. It's still better than what the Tesla said a couple years ago--
>>GREINER: Yeah.
>>TYSON: Yeah. If I made it say, don't wake me up for any reason--
>>WELLMAN: That's right.
>>TYSON: --and it's the robot's job to obey me...
>>WELLMAN: Exactly. That's exactly an answer... So this is part of the danger of AI: that the unintended consequence of the specification of the values won't hit what you really care about.
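[Editor's note: a toy illustration of Wellman's point that the behavior you get depends entirely on the value specification you wrote down. All options and weights below are made up; with a badly chosen weight on passenger comfort, the "rational" choice is one no designer intended.]

```python
# A planner that picks the action with the lowest total weighted cost.
# The weights ARE the value specification -- get them wrong, and the
# "optimal" action is wrong.

def choose(options, weights):
    def total(costs):
        return sum(weights[k] * v for k, v in costs.items())
    return min(options, key=lambda name: total(options[name]))

options = {
    "brake_hard":  {"collision_risk": 0.01, "passenger_discomfort": 1.0},
    "swerve_left": {"collision_risk": 0.30, "passenger_discomfort": 0.4},
    "do_nothing":  {"collision_risk": 0.90, "passenger_discomfort": 0.0},
}

sane_weights = {"collision_risk": 1000.0, "passenger_discomfort": 1.0}
print(choose(options, sane_weights))  # brake_hard: safety dominates

bad_weights = {"collision_risk": 1.0, "passenger_discomfort": 10.0}
print(choose(options, bad_weights))   # do_nothing: "don't wake me up"
```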
>>TYSON: Let me ask Google and IBM, here. In your efforts in this, I don't want to call it a race, but let's call it an exploration, is there a tandem sort of ethical group? Let me start over here with IBM. Is there a... Is anyone thinking about the ethics of what AI would do if you achieved this goal? Because we certainly have sci-fi movies, and none of them... It never ends well, okay? In any of them. Any of them.
>>PURI: Yeah. So certainly, we were one of the first companies to actually bring principles of ethics and responsibility to AI. It's captured in sort of [bold] ways in what we do overall on the information we have. But most importantly, there are three fundamental tenets we go by as it pertains to AI. One is building AI with responsibility. The second one is building AI that's unbiased. And the third one is building AI that's explainable. I think those are the fundamental tenets that we drive and strive towards, and we have, in our research teams, we have a significant number of people and scientists and experts that really try to drive our -- the AI services that we offer, the solutions that we build with a tremendous number of businesses around -- to drive them with those three principles. And obviously, I think we all know the way AI techniques work these days, they are driven a lot by the data. And you are as good as, as we were just discussing before, you are as good as the data that you are fed. And detecting bias in the data itself is actually one of the more important research and technical challenges. And having techniques that are able to de-bias that data as well, in terms of, when you are learning, you know that there is bias in the data -- or be able to de-bias it so that you can build models that are actually unbiased. So that's why I said there are three fundamental principles that we go with that are sort of very formal and ingrained in the principles through which we are driving AI.
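[Editor's note: a minimal sketch of one classic de-biasing idea Puri alludes to -- reweighting training examples so that a protected attribute and the label become statistically independent, as in the reweighing method of Kamiran and Calders. Toy data; not IBM's pipeline.]

```python
# Reweighing: give each (group, label) pair the weight
# expected_frequency_if_independent / observed_frequency.
from collections import Counter

# Each example: (group, label). The toy data is skewed: group "a" gets
# positive labels far more often than group "b".
data = [("a", 1)] * 40 + [("a", 0)] * 10 + [("b", 1)] * 10 + [("b", 0)] * 40

n = len(data)
group_count = Counter(g for g, _ in data)
label_count = Counter(y for _, y in data)
pair_count = Counter(data)

def weight(group, label):
    expected = group_count[group] * label_count[label] / n
    return expected / pair_count[(group, label)]

for g in "ab":
    for y in (0, 1):
        print(g, y, round(weight(g, y), 2))
# Underrepresented pairs like ("b", 1) get weight > 1, so a model trained
# on the weighted data no longer learns "group b means negative".
```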
>>TYSON: Speaking of bias, John, if I remember correctly, there were some fascinating studies recently where Google facial recognition software was not as good at identifying black people as it was with white people. And then they found out that just white people programmed it, so that's...
[laughter]
>>TYSON: So, um... So maybe that's just kind of obvious at that point. But that would, I think, count as a bias.
>>GIANNANDREA: I was actually at lunch with one of the authors of that paper today. They haven't actually measured our systems. They measured other people's systems. But it's a serious issue, and I think that--
>>TYSON: So it wasn't your facial recognition?
>>GIANNANDREA: It wasn't ours. But this issue of bias in machine learning is super important.
>>TYSON: I'm sorry to have implicated you.
>>GIANNANDREA: No, no. That's okay. It's okay. So we think that this is, at least for the next few years, the most serious ethical issue. I think this AGI stuff is years, decades away, so I don't spend very much time on this. But this question of, you're building learned systems, machine-learned systems learning from data: if your data is biased, you're going to build biased systems. And this could be everything from whether to give somebody a mortgage, or what their credit score prediction would be, or there are people selling systems now that are used by courts to predict recidivism rates. And they're not explainable, and it's not entirely clear what data they used to train them, and we think this is just unethical.
>>TYSON: So it's garbage in, garbage out, at that level.
>>GIANNANDREA: Yeah. And so--
>>TEGMARK: And we know that one was very biased, yeah.
>>GIANNANDREA: Yeah. So many of our companies work together outside of the commercial realm with academia, but also in nonprofits, looking at this question, because we're really worried about building systems that give a bad name to all this machine learning.
>>TYSON: So in all of your efforts, how would you characterize the, sort of the ethical dimension of what's going on? You have people... Are they philosophers, are they psychologists, what are they?
>>GIANNANDREA: No. They're usually data scientists and researchers who are looking for systemic bias in the systems and the data that we're using to train the systems. But we have significant efforts with [crosstalk].
>>TYSON: Okay. So I get the bias part, but how about the trolley car part, where we... Will the AI have the values we care about if it will properly serve us? If the AI achieves consciousness and then comes up with values of its own...
>>GIANNANDREA: I mean, our company has very few situations -- autonomous vehicles would be one -- where we have to actually struggle with these issues. Mostly, we're worried about recommendation systems giving bad recommendations to people, or ranking systems giving bad results to questions that you ask.
>>TYSON: But this is moving fast as a field.
>>GIANNANDREA: I think as a field it's moving fast, and I think academia has now got entire classes on AI ethics and machine learning ethics. And I think society's responding in an appropriate way, because we're worried about this stuff.
>>TYSON: So, Max, you're president of the... Tell me the name.
>>TEGMARK: Future of Life Institute.
>>TYSON: Future of Life Institute. Sounds very New Age-y, by the way.
>>TEGMARK: Well, future of life; we're for it.
>>TYSON: Okay.
>>TEGMARK: We would like it to exist.
>>TYSON: Not a controversial--
>>TEGMARK: You would think.
>>TYSON: Put that on Twitter, and then people would argue with it for sure.
>>TEGMARK: You would be surprised, yeah.
>>TYSON: So could you tell me the difference between an "is" and an "ought," philosophically, and how that matters in AI?
>>TEGMARK: Yeah. You know, it basically comes--
>>TYSON: Was it Hume who did this?
>>TEGMARK: I believe so, yeah.
>>TYSON: But one of the philosophers, yeah.
>>TEGMARK: I think so. It basically comes down to, you know, saying that might makes right is a really lousy foundation for morality. Just because something is a certain way doesn't mean that's the right way, and just because by default something is going to happen if we don't pay attention to it doesn't mean that's what we really want to happen. You know, I'm very optimistic that we can use AI to help life flourish like never before, if we win the race between this growing power of AI that we're seeing and the growing wisdom that we need to manage it. And there, I feel we're kind of a little bit asleep at the stick. You said here -- sorry to pick on you, John--
>>TYSON: Well, I don't want any AI person to say we're asleep at anything.
>>TEGMARK: But I have to pick on you a little bit, John--
>>GIANNANDREA: Pick on me.
>>TEGMARK: --because you said, "Well, you know, I think this AGI stuff is kind of decades away, so I'm not thinking about it much." But I bet you wouldn't say, "I think this climate change stuff is a few decades away, so I'm not thinking about it," right? You look young and healthy, you're working out, taking your vitamins, you're going to be around then, right? And if it's going to take a few decades to get this right, it feels really important right now to think about it enough that we can--
>>GIANNANDREA: I totally agree.
>>TEGMARK: --steer things in a good direction.
>>GIANNANDREA: What I said was, I don't spend very much time at Google with researchers on this task. But we do invest in groups around the world, at Oxford and Berkeley and other places, who are looking at this stuff.
>>TEGMARK: Yeah. And you're a member of the Partnership on AI, which is awesome.
>>GIANNANDREA: Yeah. It's not that we're abdicating responsibility. It's the, we just have no idea what the timeline is. We do know what the timeline is for global warming.
>>TEGMARK: Yeah, and--
>>TYSON: Well, if anyone knows the timeline of this, it would be you, presumably.
>>TEGMARK: Well, I think also we do know quite a bit about the timeline. First, we know there's great controversy. And your cofounder, Rodney Brooks, told me in person he thinks DeepMind's quest for AGI is going to fail for at least 300 years, right? But most AI researchers in recent surveys think it's actually going to succeed, you know, maybe in 40 years, maybe in 30 years. So that, to me, means it's not too soon to start thinking hard about what we can do now that will be helpful.
>>TYSON: I get it. But I want to get back to the point of, there are things that are, and there are things that ought to be.
>>TEGMARK: Yeah.
>>TYSON: Do you trust AI to judge what ought to be?
>>TEGMARK: No.
>>TYSON: Or is this... Okay, good.
>>TEGMARK: I could give a longer answer, too.
[crosstalk]
>>TYSON: And how do you imbue what ought to be in an AI, if an AI is a higher level of consciousness and capacity than we are? Maybe it knows better than we do.
>>TEGMARK: Yeah. But people often tell me, if AGI is by definition smarter than us, why don't we just let it figure out morality -- what ought to be? But the fallacy in this, of course, is that, you know, artificial intelligence, and technology in general, is not good or evil. It's morally neutral. It's a tool that can be used to do good or to do evil. Intelligence itself is simply the ability to accomplish goals, good or bad, right? If Hitler had been more intelligent, I think that would have sucked, right? So I wouldn't want to delegate to him, for that reason, what we should do. Instead, we should take the attitude we take when we raise kids. We often raise children who are -- end up being more intelligent than us. We don't just ignore them for 20 years and hope they... something good comes out of it.
[laughter]
We really try to... While they're still young enough that they listen to us a little bit, right?
>>TYSON: A little bit; little bit.
>>TEGMARK: We try to instill in them values that we think are good. And I think this is linked back to what you were saying about, let's teach morality to machines.
>>TYSON: You said in the next 20 years we still have a chance to teach AI who and what we are, so that when it achieves consciousness, it will not exterminate us.
[laughter]
>>TEGMARK: Well, it's even harder, though, than raising kids.
>>TYSON: It'll keep us around as pets.
[laughter]
>>TEGMARK: It's tough, though, because -- sorry if I get a little nerdy, now -- but with children, we can't teach them morality when they're six months old, because they just don't get it. And like with my teenage son, it's kind of too late, because they don't listen to me anymore.
[laughter]
>>TEGMARK: But there is this magic window we have over a few years when they're actually smart enough to understand us and still, maybe we have some hope that they'll adopt our morality. Where AI--
>>TYSON: Whereas AI in that--
>>TEGMARK: It has... It's not yet reached the point where it understands human values, because we can't explain it yet, but it might pretty quickly blow through this window where it's actually going to... where it's still not as smart as us and we can influence it. And we have to kind of plan this curriculum, plan this ahead, you know? And I think it's really good that you are working on that, for example, so that we can... Because we don't want to wait until after someone has -- or the night before someone switches on a superintelligence -- to be, oh, how do we figure out this, you know, teaching it right from wrong stuff? That's probably too late.
[laughter]
>>TYSON: So, Mike, there's a... Probably too late, yeah. That's certainly too late if that happened. So, Mike, I'm curious about something. The capital markets, I don't want to say that they rely on this, but they, a lot of what makes them fluctuate is that different people have different information that they're betting on if they buy and sell stock. So if you make a machine that has access to all information and is perfectly rational, is that machine, or the person who owns that machine, the first trillionaire in the world?
>>WELLMAN: So interestingly, Wall Street trading is one of the first areas where autonomous agents are really out there. And I think that's one of the reasons why it's useful to study long-term implications of AI by this case study of seeing what's happening right now. And right now, lots of firms not very far from here--
>>TYSON: This is New York.
>>WELLMAN: --are programming computers and putting their al... using machine learning and using a lot of data, and a lot of the same data, to make decisions. So one question is, well, if everyone is using the same data and maybe stumbles on the same algorithms, are there possible effects on the stability of markets -- that, if something goes wrong, could they be more prone to crashes or not? That's something that we're studying. And if so, are there things that we could do to try to mitigate that? The question you asked about the first trillionaire is, if one group, one firm, one country has an edge in AI, will they be able to then leapfrog everybody else and just suck up all of the resources? That's actually a significant issue. Financial markets is one place where the money is, and if you really get it so much better than everybody else, there could be major shifts in distributions of wealth. And it's not only financial markets. It could be the Internet. You can put smart AIs out there and say, "Find some way to make money for me," and they will. So the headline, "China's Blitz to Dominate AI," is what you're showing.
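[Editor's note: a toy simulation of Wellman's stability worry: when every algorithmic trader acts on the same data with the same rule, their orders pile up on one side of the market and amplify a shock, while diverse strategies partially cancel. All numbers below are invented for illustration.]

```python
# Compare a "monoculture" of identical trading rules with a heterogeneous
# population reacting to the same noisy shock.
import random

def simulate(n_agents, same_algorithm, shock=-1.0, steps=20):
    price = 100.0
    for _ in range(steps):
        signal = shock + random.gauss(0, 0.1)  # everyone sees the same data
        orders = 0
        for i in range(n_agents):
            if same_algorithm:
                threshold = 0.5              # everyone trades identically
            else:
                threshold = 0.5 + 0.1 * i    # heterogeneous trading rules
            if signal < -threshold:
                orders -= 1                  # sell into the falling market
            elif signal > threshold:
                orders += 1
        price += 0.05 * orders               # price impact of net order flow
        shock *= 0.9                         # the original shock decays
    return price

random.seed(0)
print("identical algorithms:", round(simulate(10, True), 2))
print("diverse algorithms:  ", round(simulate(10, False), 2))
# The monoculture ends far below the starting price; diversity damps the slide.
```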
the market if they get there first. >>WELLMAN: So this is somewhat, I think,
ill understood and controversial, but certainly in this longer road to
more general, more capable AI, if one entity has a significant edge,
they will have a very strong incentive to shut others out
and to capitalize on that advantage. And so thereâs, no doubt, thereâs an arms
race dynamic to many aspects of artificial intelligence
technology that perhaps is most frightening in the military
realm, but also comes up in financial realms and
other ways. Itâs in the fake news realm. We were talking about if AIâs going to be
better at discriminating fake news. Never mind that. Theyâre going to be much better at promulgating
fake news, and thatâs going to be a challenge for all
of us. >>TYSON: This could go to any one of you. It could go to Helen. Helen, what... Do you foresee robots or AI in general
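[Editor's note: a minimal sketch, in Python, of the herding dynamic Wellman describes -- many trading agents that learned the same sell rule from the same data all firing in the same instant. Every number here (thresholds, price impact, noise, agent count) is an invented toy assumption, not anything from the panel or any real trading firm.]

```python
import random

def run(thresholds, steps=500, impact=0.05, seed=42):
    """Each agent sells one unit, once, when the price falls below its threshold."""
    rng = random.Random(seed)
    price, low = 100.0, 100.0
    fired = [False] * len(thresholds)
    for _ in range(steps):
        # Mean-reverting price with noise; each sale has a permanent impact.
        price += 0.1 * (100.0 - price) + rng.gauss(0, 0.5)
        for i, t in enumerate(thresholds):
            if not fired[i] and price < t:
                fired[i] = True
                price -= impact
        low = min(low, price)
    return low

n = 200
same_rule = [98.0] * n                                 # everyone learned the same rule
rng = random.Random(7)
diverse = [rng.uniform(85.0, 98.0) for _ in range(n)]  # heterogeneous rules

print("lowest price, identical rules: %.1f" % run(same_rule))
print("lowest price, diverse rules:   %.1f" % run(diverse))
```

With identical thresholds, all 200 sales land in a single step and the price gaps down by their combined impact; with diverse thresholds, the same total selling is spread out over time and largely reabsorbed.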
>>TYSON: This could go to any one of you. It could go to Helen. Helen, do you foresee robots or AI in general informing political policy? Because if they can analyze-- Look at Watson. Watson reads a thousand medical papers and comes up with conclusions based on them. So you make machines, you make drones, that can make decisions that we can't, and they can make them more quickly, and presumably better. So is there a scenario where political factions are arguing -- because, really, their feelings are involved more than facts -- and at the end of the day, in an informed democracy, you kind of want facts to matter... I would think. [laughter]
>>TYSON: Just, I would just...
>>GREINER: We are-- and I'm a little bit on the other side of it-- we are very far away from this AGI, generalized AI. There's wonderful progress being made that allows AI systems and robots to do more than they could do before in recognition and characterization, but we haven't made that leap, and it's going to take an innovation step to get there. So to really worry about that now... I mean, right now, the machines are feeding information into the system, and humans are making the judgment. Now, I believe that day will come, but it's unpredictable, because innovation steps would have to happen before that day comes.
>>TYSON: Okay, so it's not... Because in innovation, you can't order up an innovation.
>>GREINER: Yeah. You don't know when it's going to happen. Hopefully some of the younger people in the audience will make those innovations, because I think we should have it happen.
>>TYSON: So, Ruchir, it just seems to me, given that Watson might be uniquely qualified to come up with a political policy decision -- if it reads every consequence of every political decision that's ever been made, looks at what became of it, looks at how people reacted, looks at what people wanted, and then just said, "You should do this" -- there should maybe be a machine on the floor of Congress that people come up to and ask, right? It would be like the oracle of Congress. [laughter] It could be Watson, right? Let's check... I'm arguing in the dining room with my political colleague from across the wall-- across the hall, uh, the aisle-- and we say, "Let's go check Watson."
>>PURI: Are you telling me to print posters, Watson 2020? [laughter]
>>TYSON: Watson AlphaGo 2020, yes. Uh-huh.
>>PURI: So first of all, let's take precisely the question you asked. Could AI be helping public policy? And to that, I'll answer, absolutely yes.
>>GREINER: Yep.
>>PURI: It could be helping public health policy, it could be helping public policy as it pertains to decisions within the country as well -- whether it is taxation or other scenarios -- absolutely yes. And it already is, actually. So I will not just say it should be helping. It already is helping. Now the question really on the table is, have we reached a scenario where there is this oracle that knows everything? And, no, we have not reached that scenario yet. We are farther--
>>TYSON: Yet?
>>PURI: The reason I'm saying that is because it's really about the domains you specialize in, and that information is spread across those domains. So just as an example, we are working in the compliance domain -- regulatory compliance. And yes, we can actually feed information to the machine, and it learns, and it's going to find insights -- for example, obligations that a particular entity may have. But by "oracle," everybody understands it to be a know-all: it knows everything, it reacts to everything, and we have not reached that point. Neither is the intention to reach the point whereby you know everything and react to everything. The point is really to be precise in the scenarios that are going to help society, whether it is the healthcare domain or the public policy domain or the compliance domain. That's where a lot of the benefit to society is going to come from. At least as an engineer and scientist, I would say let's be more precise, let's define the problem and solve the problem in its domain, and then we make progress from there -- just like we did with chess: you look at chess, you define the next problem -- that's really the next level up, in terms of language -- you solve that problem, and you move on from there.
>>WELLMAN: Maybe your question... if human-level intelligence might be hard, what about Congress-level intelligence? [laughter]
>>WELLMAN: But I think that's not really fair.
>>TYSON: Well, that's how that saying goes: If pro is the opposite of con, then progress is the opposite of Congress, right? Ever hear that one? Anybody [know that one]? [applause]
>>TYSON: That one goes way back, yeah. Yeah.
>>WELLMAN: But I think it's true. Once we agree on the values, then AI can be a great help in sorting out the policy questions. And of course it's not that Congress is not intelligent. It's that it's all about fighting over the values and the priorities.
>>TYSON: Right.
>>WELLMAN: And that problem doesn't go away when you have AI.
>>TYSON: Helen, can you foresee a future where robots get angry with people?
>>GREINER: Um, I think that we can put in simulated emotions to help with decision making. I think you can also use them for more natural interaction with people, responding the way a human would respond. But not in the way you might think of a person as being angry -- not for a while, until some of these other innovations come out.
>>TYSON: So there's a video of all of the occasions where they abuse their own robots. So they have robots that are walking, and then they just kick them, and then they... So, I mean, it's interesting, because...
>>GREINER: I think you can tell a lot about a person by how they treat a robot, by the way. [laughter]
>>TYSON: Well, that's my point. So these are robots that you almost kind of feel for, because some of them are sort of humanoid rather than non-humanoid. And the early ones would just sort of fall over. And, I get it, they're trying to increase the stability of these robots. So now they're poking them and pressing them, and then the robot rebalances and comes back.
>>GREINER: But they get lots of complaints about it, don't they?
>>TYSON: I know. In the--
>>GREINER: Like, that's being mean to the robot.
>>TYSON: Don't be mean to the robots.
>>GREINER: But I think there is something going on, which you hit the nail on the head, that--
>>TYSON: Because I think that all of these robots will have memory.
>>GREINER: When we had--
>>TYSON: And the first time they achieve consciousness... [laughter]
>>GREINER: There have been studies showing that people name their Roombas. They get attached to them. Our military robots, too: when we put them out in the field, we had big Marines come into the robot hospital saying, "Can you fix it?" And it's all blown up, and, you know, he didn't want any other robot, he wanted that one, because it had gone on missions for him. It had done over 18 missions, and they named it Scooby-Doo.
>>TYSON: Did you just say... If I hear you correctly...
>>GREINER: And, you know, they're big, tough military guys, but because they're working with the robot, and because the only other things they've experienced with these kinds of behaviors are animals... It's like -- it's not anthropomorphizing. I think there should be another word for thinking something's sentient, like sentipomorphizing. Maybe we'll coin a word for that.
>>TYSON: I love that word. From here on, sentip...
>>GREINER: Sentient.
>>TYSON: Sentipomorphizing.
>>GREINER: Yeah, exactly.
>>TYSON: Right. So you're saying military members who have been served by a robot, if the robot blows up because it found the mine, and...
>>GREINER: Mm-hmm. They want it back. They want it fixed.
>>TYSON: --then they take the pieces and they go to the robot doctor and say, can you fix him, doc? And these are big, burly Marines.
>>GREINER: Yep. And we say, you can have another one. They say, no, I want this one, because its name was Scooby-Doo and it saved, you know, it saved 11 guys on one mission, right? [laughter]
>>GREINER: And there have been reports of people giving them burials -- people, military service members--
>>TYSON: They buried their robots?
>>GREINER: Yeah. Giving them field promotions and--
>>TYSON: Do they know that microbes--
>>GREINER: --viewing them as having personalities, saying this one's tough; this one's a little bit wimpy. I've had people tell me they're sure their Roomba moved a pot into the way of the virtual wall so it could escape. I can assure you, it didn't figure that out. It really did it accidentally. But it's that sentipomorphizing that people automatically do, and it's wonder... it's kind of cool, right?
>>TYSON: If you bury a hunk of metal, microbes won't eat it. It'll just still be metal later on.
>>GREINER: We saved Scooby-Doo. We brought him back, and he's... he's at the iRobot headquarters, that one.
>>TYSON: He's repaired. Yeah.
>>TYSON: I want to sort of land this plane, but I want to do it in a way that... Because there are still some really important pieces of this conversation we have not addressed, because you all have been, I don't know, shy of this threshold that I want to take each one of you to. At some point... Well, let me lead up to it. So I have a calculator on my hip, and it calculates better than any human who ever lived. So in a sense, it's a superhuman property that it contains, that we built. Now, you can go down the list of computer-run things that do what they do better than the best human ever could have or ever will. And that list is growing, okay? And autonomous cars will be among them. One will drive a car better, faster, and in a more controlled way than any human who ever lived. So as these accumulate, it doesn't seem to me a stretch to ask, if general AI achieves some kind of conscious state -- whatever that is, however we define it -- whether that consciousness will be a superhuman consciousness. Is that...? You're shaking your head, Mike.
>>WELLMAN: No, I'm nodding.
>>TYSON: No, no, no. You're nodding. Mike is shaking.
>>GIANNANDREA: I'm shaking my head, yeah.
>>TYSON: Yeah. Why are you shaking your head?
>>GIANNANDREA: Because having more smart tools that are superhuman at very narrow things, like calculating or driving or diagnosing cancer, is not the same thing as having a consciousness and having AGI. We've been adding tools for the last 200 years -- that calculator you're talking about, you didn't have 50 years ago -- and it doesn't make us less human. It frees us up to do more things. I remember when my daughter was in school, they wouldn't let her use a calculator to do homework, which, with 20 years of hindsight, seems absurd, right? But just because you have these tools and you can use a--
>>TYSON: That's what I'm asking.
>>GIANNANDREA: But it's not inevitable-- [crosstalk]
>>GIANNANDREA: --it's not inevitable that if you have more--
>>TYSON: That's not what I'm asking.
>>GIANNANDREA: But you're making the leap. You're saying that if you have more of these tools, then you'll have AGI, and I disagree with that.
>>TYSON: No, no. No. Okay, I can see how you'd think that. That was not my intent. I'm saying these tools are evidence to me that the day general AI arrives, there's some decision-making power it will have that will be superhuman. Because everything else we created using computers, and put a lot of thought behind, became superhuman in that way. Is it unfair to ask, for the safety of us all, whether general AI would have superhuman consciousness?
>>TEGMARK: I think it's very likely.
I think we humans are so stuck on the idea that we are the pinnacle of how smart it's possible to be, and we have a long tradition of lack of humility, right? But let's face it. Our intelligence is fundamentally limited by Mommy's birth canal width, and the fact that--
>>TYSON: Explain that, please, because that's... [laughter]
>>TEGMARK: --we're made of these blobs of neurons, and they're pretty cool, our brains, but there's nothing--
>>TYSON: Wait. Pause. Pause right there.
>>TEGMARK: --special about that level.
>>TYSON: Just to be clear, we could have had bigger brains, but we would have killed our mothers in every birth, okay? So we have basically the biggest possible brain that can be born out of your mother without killing her. And so that's it. That limited how big our brains could get. [crosstalk]
>>TYSON: It's already an issue, getting the damn head out of there. So...
>>GIANNANDREA: But you're comparing two different things here, right?
>>TEGMARK: Yeah. I'm talking about AGI.
>>TYSON: Am I right? I've read that, right?
>>GIANNANDREA: You're comparing one person's brain size with the sum total of humanity. There are seven, eight billion of us. We communicate with language. We hopefully cooperate. That is way more powerful than a single AGI.
>>TEGMARK: Sure. I don't necessarily disagree, but what I'm saying is that once we figure out how to make AGI -- suppose that happens in 35 years -- there's no reason to think it's going to stop there and become like all those lousy Hollywood movies where we have all these robots that are roughly as smart as us, and that's it, and we just become buddies with them and go drink beer with them. It's very likely that they will just continue dramatically getting better, and they can then start developing even better robots, and they will be as much better than us at everything as they are today at multiplying large numbers.
>>TYSON: This is my... That's the foundation of my inquiry. Mike, where are you on this?
>>WELLMAN: I'm with Max. I see no boundary, no reason that that wouldn't occur. The timing is very uncertain. And I think this uncertainty is also part of the equation we have now about being prepared for it, because it could happen faster than we think. It could happen slower than we think. There could be obstacles that make it really far off, but we just don't really know. But, you know, it's true, you put a lot of brains together, but we have very minimal communication channels between us. This linear speech that we're doing, compared to what computers do when you build [them all] together and have them talk -- they can do just so much more. So I think there really is... They're already superintelligent in many ways, not just your calculator. Everything we do, it doesn't stop. The algorithmic traders that I talked about don't at all stop at whatever human traders can do.
>>GREINER: So I believe we are machines made of biological components, so I think we will eventually be able to duplicate and improve upon them. But the problem is when you discount timing entirely, and what's being done -- like, you know, this bag of tricks is not going to get us there. There are core inventions that have to happen, potentially different hardware than running a [unintelligible] machine, right? There's a lot of stuff that has to happen. And if you want them to be mobile, they need better sensors and better mechanics, as well as all the AI. So I think it's... You say, well, why shouldn't we worry about it now? Well, because it's not very close, you know? In 2000, Bill Joy started writing about these threats to humanity, and one of them was robots. I started getting calls from, like, the Wall Street Journal and everywhere at iRobot saying, "What kind of human robots are you making?" And I'm like -- you know, I couldn't say it then, because we hadn't launched the Roomba yet, but, "We're making a robot vacuum," you know? Don't... Because it gets people...
>>TYSON: Yeah, but what else does it do?
>>GREINER: Yeah. But it gets people focused on the wrong things, rather than on the new achievements that AI is just getting to, because they think it's becoming general AI, and it's really not yet. And there are many on this stage who would like it to be, but it's not.
>>TYSON: Mike, let me just ask. My deepest skepticism that this will go the way people imagine, especially in the movies, is that we don't really understand consciousness right now in humans. So it's not obvious to me that we can just assert by fiat that a smart enough computer will achieve consciousness, when we don't even understand it within ourselves.
And there was an interesting bit in the movie I, Robot -- I don't remember if it was captured in the book itself by Isaac Asimov -- but they noted that, because they didn't replace code with new code every time they upgraded the robots, every generation of robot had this baggage of software just dangling there, kind of like our brains, with leftover wiring from long before we became human; different parts of the brain. Evolution doesn't swap that out and make it fresh. It builds around it, and it's gotta deal with it. We have to deal with it behaviorally. Our primal nature has to be overcome by later brain developments that we got from natural selection. My point is, in that film, they asserted that this extra dangling software made the robots do things that the latest software did not intend. And so, in a way, it was almost like a free will was emerging in them. The robots would do things. And they said, "Well, I didn't program that in." Well, that's leftover wiring from 20 years ago. I don't know what I just did there. I don't know what that is. [laughter] So, evidence that we don't understand consciousness: you go to the bookstore, and there are shelves upon shelves of books on consciousness. That's evidence that we don't understand it, because people are still writing books on it. You go to the shelf and ask for books on gravity, there's, like, two books, okay? We got this one. [laughter] So where does it come from that people just declare that general AI will have consciousness? [single applause]
>>TYSON: Oh, thank you, that one person who... Yeah.
>>WELLMAN: I don't understand consciousness, but I also don't think it even necessarily has to be part of this discussion. I mean, when you have an AI that is superintelligent in every way -- it can do any job as well as any person can, in every capacity -- whether it has whatever we think of as consciousness, and has that same, you know, illusion of free will and way of thinking about itself, seems to be maybe beside the point. We're still faced with an issue about dealing with entities like that, whether or not we agree on the consciousness question.
>>TEGMARK: Yeah, I agree with Mike there: whether it's conscious or not doesn't have to affect at all how it treats us. Maybe it should affect how we treat it, right? From an ethical perspective. But I also think we should all remember...
>>TYSON: Maybe they'll come up with their three laws.
>>TEGMARK: Maybe.
>>TYSON: Yeah. But robots should not harm a human. No, no. Humans should not harm a robot.
>>TEGMARK: Exactly.
>>TYSON: Yeah.
>>TEGMARK: But we should also remember, I think, this famous quote of Upton Sinclair, who said that it's very hard to get a person to understand something when his or her salary depends on not understanding it. And I find it-- no offense to the three of you here who are from companies-- but it's been so striking, so striking how-- [laughter and applause]
>>TEGMARK: --every time there's a debate like this that I'm in, it's always the academics who are like, "Yeah, this might happen," and the people from the companies are always like, "Everything will be fine." [laughter]
>>TEGMARK: I would love to ask you the same questions over beer when it's not on camera. [laughter and applause]
>>TYSON: That's why I flanked the three of you [with the] academics. That was all very much on purpose.
We're going to open the floor to questions in just one moment. If I could just get some summative reflections first. Let's start down here. Should we fear AI? And if so, on what level? Keep it short.
>>TEGMARK: Yeah. It's like asking, should we fear fire or not? Should we love it? I mean, AI is an incredibly powerful tool, and it's either going to be the best thing ever to happen to humanity, or the worst thing ever. I don't think the question is whether we should stress out about it. I think the question is-- [laughter] --what [unintelligible] stuff should we do now?
>>TYSON: No. Max, Max, you just said, "It could be the best thing... or the worst thing ever, but we shouldn't stress." That is the definition of stress. [laughter]
>>TEGMARK: I meant we should... It's interesting [to present] this quibble about how stressed you should be. The interesting question is, what should we do that's useful, to maximize the chances that this will be awesome? Because if we really work hard for this, I really do think that AI can help us crack all the toughest challenges facing us today and tomorrow, and create a really inspiring future. But we're going to have to work for it. It's not going to just happen if we're asleep at the wheel.
>>TYSON: John?
>>GIANNANDREA: So my problem with this question is we didn't, in this whole hour, define what we mean by AI, right? So there are some very smart people who think that AGI is inevitable, and that it has ethical implications and so on and so forth. My beef with that is there are lots of technical reasons to believe that it's not inevitable, or -- I agree with Helen -- that it's... We just have no idea what breakthrough after breakthrough after breakthrough would be required to go from the kind of practical AI we have today to the kind of AI we're conjecturing about here. So I'll give you one example. Small children can learn from small numbers of examples. Today, we have to give computers hundreds of thousands or millions of examples. A child who learns to play chess can also play tic-tac-toe, right? Our Go program can't play tic-tac-toe unless we program it to do so. So there are these huge barriers to generality of intelligence, and as a technologist and an engineer and somebody working in industry, I see no evidence of this stuff imminently happening. That doesn't mean we shouldn't be having the academic, ethical conversation. I just don't see any evidence of it. Now, the reason that's a problem is that it scares people, and it scares people into thinking that everything with this AI label is scary, and so then people think that we shouldn't be doing healthcare with AI, or we shouldn't be doing better data science, or we shouldn't be doing decision support or autonomous vehicles. And yet, if we build these systems, they won't have the ethical problem we're conjecturing, and they will do a tremendous amount of great things for humanity. We're conflating the two things, and we're scaring ourselves into not doing what we should be doing, which is saving people's lives.
>>TYSON: So there's a cultural, rational barrier that you're up against here.
>>GIANNANDREA: Yes.
>>TYSON: Okay. Ruchir?
>>PURI: I think... Well, AI is an extremely powerful tool. I do not believe we are anywhere close to this fearmongering by some people, and the fear that exists. And I can understand it, certainly; I think a narrative can be raised to a point where you really start fearing it. I'll give a very good example -- picking up on John's thread; I talk about this in the talks I give often as well. Our two daughters, when they were young, we had, like, two books: A is for apple, B is for boy, C is for cat. And they were in love with only one book anyway; it doesn't matter how bad it was. And you show them a picture of a cat -- only one picture of a cat -- and you repeat that multiple times over several days or a month, and then you show them a picture of a cat they have never seen before, and they say, in their cute voice, "Cat." It takes, roughly speaking, today, a computer 750 pictures of a cat to recognize that it's a cat. [laughter] Now, I'll give a good example. If I had ever shown my daughter 750 pictures of a cat when she was less than one year old -- she's 16 right now -- she'd be confused to this day, actually, about what a cat was. So we are so far away from whatever we are discussing that I find the question humorous, almost, and I have encapsulated that in a syndrome I call the cats-and-dogs syndrome. So I'll leave it there. [laughter]
>>TYSON: All right. Helen. Helen?
>>GREINER: So you shouldn't fear technology. You should be concerned, and maybe do something, about AI being misused: for example, cyber hacking into AI systems, people using an AI system maliciously, unconscious bias in the AI system. But you really don't need to worry about general AI yet.
>>TYSON: Yet. Okay. Mike?
>>WELLMAN: So I think it's really important to keep aware of this distinction between the short-term, narrow AIs, which have their own concerns -- you know, safety concerns and societal concerns -- separate from the long-term, general-AI, superintelligence concerns, which are of a different magnitude and a different [realm], and probably much further away. But we, as a scientific field, and certainly as a society, I think we can think at multiple timescales and make these distinctions all the time. If we refuse to talk about the thing that's over the horizon, we'll lose credibility if we deny that there's a potential problem. That is a way to make sure that we keep our heads in the sand. There are things that we really should be figuring out way in advance of this potential superintelligence, and--
>>TYSON: Whether we'll all die.
>>WELLMAN: Well, our children, whom we care about, and--
>>TYSON: Whether our children will die. [laughter]
>>WELLMAN: And even if they don't, how well they'll live with that superintelligence [crosstalk].
>>TYSON: As pets of superintelligence. [laughter]
>>WELLMAN: Well, we hope as... in a good partnership with them. [laughter]
>>WELLMAN: We hope.
>>GREINER: [Are we] sucking up to the AI already?
>>WELLMAN: That's the best we can do.
>>TYSON: Is that the best you got, here?
>>WELLMAN: That's the best I can do.
>>TYSON: We hope... that our children will be in partnership with AI.
>>WELLMAN: I think that's a fair way to... way to sum it up. And I'll stop.
>>TEGMARK: Okay. Just in defense of Mike here, there is so much more detailed description in all the world's religions of hell than of heaven, right? Because it's always much easier to think of all the ways we can screw up than to think about good outcomes. That's why you're giggling when you're trying to say what you're hoping for. But that doesn't mean we shouldn't try. It's incredibly important that we change... You were making fun of Hollywood for just never showing us any future that doesn't suck, right? Blade Runner, or whatever. We really need to start thinking about what kind of future with advanced AI we would find really inspiring. And this is not something you should just leave to tech nerds like us here, right? This is something everybody should think about. And the clearer the vision we share for what sort of future we want, the more likely we are to get it.
>>TYSON: Do you detail this in your book, Life 3.0? Do you go there?
>>TEGMARK: I talk a lot about it. I try very hard not to give any glib answers, because this is really a question we should all discuss. But you don't do good career planning by just listing everything that could go wrong.
>>TYSON: Although... [laughter]
>>TEGMARK: You have to envision success. [applause]
>>TYSON: Although-- I will only be able to paraphrase this quote from Ray Bradbury-- when asked... the great science fiction writer, futurist-- they asked him, "Why do you keep portraying these dystopic futures? Do you think that's what the future of life will be?" And he says, "No. I portray these futures so that you know what future not to head towards." That was Ray Bradbury.
Ladies and gentlemen, thank you for your attention this evening. Join me in thanking this panel. [applause] Let's open up the stage. We'll have about 10 minutes for questions. We have a microphone on each aisle. And if you direct your question to one panelist, that will go faster than asking all five of them to reply. So are we ready? Let's start it off right over here.
>>AUDIENCE: Hi, Neil, how are you doing?
>>TYSON: Hey, how are you doing?
>>AUDIENCE: All right, good. I wanted to get a little bit back to the artificial intelligence in the vehicles, and the more complex scenario of-- and I read a little bit about this with California cars-- where... You have a scenario: the school bus, the bicycle, the kid, or a hundred-foot cliff. And the AI decides the best thing to do is to drive the car off the hundred-foot cliff, because that'll cause the least damage, but it's going to kill you. Is that something that would be learned, or a decision that it will make? How can it avoid making that decision where a human might say, hey, there's no one in the school bus, the bicycle might be able to make it, you know, at a glance, as opposed to just those simple, I don't know, algorithms or decisions that an intelligence would make: kill the driver; save everyone else.
>>TYSON: Yeah. Mike... I mean, John, why don't you take that?
>>GIANNANDREA: Well, I think in all of these systems you have to distinguish between the learned part, like a detector for a stop sign, and the policy part. I think it's very important that the policy part be explicitly planned for, and then you end up with all the ethical issues about what you want your policy to be. Ideally, you would just stop the bus, right?
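[Editor's note: a toy sketch of the split Giannandrea is drawing, with made-up values. The detection labels, speed, and deceleration figure are illustrative assumptions, not any real vehicle's parameters. The perception part would be learned from data; the policy below is ordinary, human-reviewable code.]

```python
def can_stop(speed_mps, distance_m, max_decel=6.0):
    """Stopping is possible if the braking distance at max_decel fits in distance_m."""
    return speed_mps ** 2 / (2 * max_decel) <= distance_m   # ~0.6 g, an assumption

def policy(detections, speed_mps, distance_m):
    # The policy part: written and audited by humans, not learned from data.
    if any(d in ("pedestrian", "cyclist", "school_bus") for d in detections):
        return "BRAKE" if can_stop(speed_mps, distance_m) else "SWERVE_TO_CLEAR_LANE"
    return "CONTINUE"

# A learned detector (not shown) would produce labels like these:
print(policy(["cyclist"], speed_mps=15.0, distance_m=30.0))   # -> BRAKE
```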
>>TYSON: Right. That you have brakes good enough so you don't have to drive it off the cliff.
>>GIANNANDREA: Yeah. Hopefully you saw the cliff far enough in advance.
>>TYSON: In the first place.
>>GIANNANDREA: Yeah.
>>TYSON: Right. So it may be that so many of these scenarios you described are real-life scenarios that human beings, in our frailty, encounter, but it calculated the rate at which the bicycle was entering the street, it knew what its braking distance was, so maybe it would just be better at it, and we're troubling ourselves over scenarios that are real for humans and highly unlikely for autonomous AI, I would imagine.
>>TYSON: Next up, yes.
>>AUDIENCE: Okay. You were talking about the eventual future of artificial intelligence as general intelligence. There was something discussed several years ago called the singularity, when intelligence gets to the point where the human and the artificial sort of blend together.
>>TYSON: Was that a question?
>>AUDIENCE: Yeah. Do you consider this idea of a singularity to be a possibility?
>>TYSON: Sure. Mike?
>>WELLMAN: All right. So the singularity usually refers to something that's also been called the intelligence explosion: a point where there's a kind of critical mass where something becomes so smart -- Max alluded to this before -- that it can then further self-improve at a rapidly accelerating rate. It's quite controversial whether that phenomenon will happen. It's hard to really rule it out. It's hard to rule it in as well. It's not clear that it's really necessary, to achieve superintelligence, that it go through this super-accelerating phase, but that's one scenario where it could happen faster than we realize.
>>TYSON: And thereby not be a linear extrapolation into the future about when it occurs, because if it grows exponentially, what looks small today becomes very large very quickly. Agree?
>>TEGMARK: Exactly.
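[Editor's note: the arithmetic behind Tyson's point, with made-up numbers. Assume a capability at 0.1% of some human benchmark that doubles every year; both figures are pure assumptions for illustration.]

```python
level, years = 0.001, 0        # 0.1% of a human benchmark (invented units)
while level < 1.0:
    level *= 2                 # assumed annual doubling
    years += 1
print(years)                   # 10 -- a thousandfold gap closes in ten doublings
```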
>>GREINER: That's how it works at Google, so we should have Google answer.
>>TYSON: Okay. Google, where are you on this exponential curve?
>>GIANNANDREA: I mean, what I would say about this is, of the people who have been marketing this notion that the singularity is inevitable -- and there are people who will say that -- many of them that I've talked to actually want it to happen. And I just don't think they're being rational about the likelihood of it happening. That's my personal view.
>>TEGMARK: Yeah. And many of the people who say it's never going to happen don't want it to happen. So we have to be very mindful [crosstalk].
>>GIANNANDREA: That's true, too.
>>TYSON: All right. Next question over here. Thank you for that.
>>AUDIENCE: Hi, how's it going?
>>TYSON: Hey. Good.
>>AUDIENCE: My question is, if I have--
>>TYSON: That's very New York, you know? Hey, how's it goin'? Good. How you doin'? We're doin' good. Yeah?
>>AUDIENCE: My question is, if you have the artificial intelligence, or the AGI or whatever, and it comes to harm or kill you and you pull the plug on it, is that murder? Because it's a full, intelligent, sentient-like machine that you're pulling the plug on.
>>TYSON: Let me go to Max on that one. So if we judge value to our society by level of sentience, and then there's an AI -- we're already burying AI robots or repairing them as though they're human -- do you think the day will come when laws protect the lives of robots?
>>TEGMARK: First of all, if a human comes to try to murder you, and the NYPD pulls the plug on him, that's already the law today, right? So there has to be some sort of protection in there. You can't do anything you want just because you're conscious. Second, aside from the very difficult science question we have to solve about what kind of information processing even is conscious, it's certainly not as simple as just saying, oh, you know, all consciousness is equal if you're as smart as a human -- one consciousness, one vote -- because then, if you're a computer program and you're only getting 10% for your favorite candidate, just make a trillion clones of yourself and have them all vote, right? There are a lot of really challenging questions here that we need to face, which, again, just comes back to this question: what sort of society of humans and highly intelligent beings are we even hoping to create? And once you know that, then you can ask your questions about what sort of laws it should have to keep it working.
>>AUDIENCE: Thank you.
>>TYSON: That was actually an implicit ad for his book, which we're selling outside, signed by him. What kind of world do you want? Life 3.0, available at a local bookstore. Yes?
>>AUDIENCE: This isn't my question, but have you guys seen the Terminator movies? Anyway, moving right along... [laughter]
>>TEGMARK: A great summary of everything you don't have to worry about.
>>AUDIENCE: Here's my question. You talked a lot about bias, and since there isn't one of us who's without unconscious bias, how do you in fact try to eliminate unconscious bias from a sentient machine?
>>PURI: I would really say the interesting thing about machines in particular is that -- unlike humans, all of whom are inherently biased, as you pointed out, in some way or other, whether we admit it or not -- you actually can have techniques and algorithms that detect bias in the dimensions a particular entity cares about, whether there are laws related to it, or whether you really care about it from the point of view of society: it could be in the dimension of race, color, loans that are given out. And algorithms are everywhere in our lives right now. So I would really say the interesting thing about machine learning technology is that you can detect bias. There can actually be laws requiring you to have techniques to detect bias. You can actually unbias as well. So in that way, I really feel we are one step ahead, from the point of view of potential, in that you can actually have laws related to detecting bias, you can have unbiasing algorithms as well, and society in general -- and potentially policymaking bodies -- can ensure that that happens. And I think as industry -- I certainly can say this about IBM -- that's one of the things we really focus on, to make sure we are building in responsibility, unbiasing, and [explainability].
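[Editor's note: a minimal sketch of the kind of bias check Puri describes, on invented data. Comparing a model's approval rates across groups -- "demographic parity" -- is one standard fairness metric among several; the decisions and group labels below are made up.]

```python
def approval_rate(decisions, groups, value):
    """Fraction of positive decisions for members of one group."""
    picked = [d for d, g in zip(decisions, groups) if g == value]
    return sum(picked) / len(picked)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]        # 1 = loan approved (hypothetical)
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = abs(approval_rate(decisions, groups, "a") -
          approval_rate(decisions, groups, "b"))
print("demographic parity gap:", gap)        # flag if above a chosen tolerance
```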
>>TYSON: That was a great question.
>>AUDIENCE: That's optimistic.
>>TYSON: That was a great question, by the way, but I will add... Let me further emphasize that much of what you do in scientific research, after you've gotten a result, is to check whether there's any bias in that result. So there are a lot of statistical tools just for that purpose, because you do not want to publish a paper that somebody finds out has a bias. Forgetting race, creed, gender, color -- just bias of some kind. It could be a voltage bias, because of the way you designed your experiment relative to everybody else, claiming a result that's not real. So it's to protect your own reputation, even, that we have these tools. So it's actually not as remote. You can test for the bias you didn't even know you had--
>>AUDIENCE: Well, that's the bias that you're looking for, it seems to me.
>>TYSON: No, no.
>>AUDIENCE: You know the ones you know you have.
>>TYSON: No, no. I get that. What I'm saying is, in cases where we have data that has no connection to any rational, social, cultural bias you could have, there's still a way to look for bias. And it's a bias in the system that is giving you this answer instead of another answer. A big part of scientific research is discovering bias. So that's all. So you can feel more comfortable about this, is what I'm saying.
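[Editor's note: one concrete version of the generic bias check Tyson is gesturing at, with invented readings. Split the measurements by some nuisance condition -- here, which of two hypothetical rigs recorded them -- and use a permutation test to ask whether the difference in means is bigger than random shuffling would produce.]

```python
import random

rig_a = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7]
rig_b = [10.6, 10.4, 10.7, 10.5, 10.8, 10.3]    # suspiciously high: rig bias?

def mean(xs):
    return sum(xs) / len(xs)

observed = abs(mean(rig_a) - mean(rig_b))
pooled, n = rig_a + rig_b, len(rig_a)

rng = random.Random(0)
trials, extreme = 10_000, 0
for _ in range(trials):
    rng.shuffle(pooled)                          # relabel readings at random
    if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
        extreme += 1
print("permutation p-value:", extreme / trials)  # tiny value -> systematic bias
```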
>>TYSON: Sleep well tonight. I promise. [laughter]
>>TYSON: Okay. Let's just... we'll take a few more. Yes, there.
>>AUDIENCE: Hello, Dr. Tyson. First, thank you and the panelists for a truly fascinating event. [applause] So one of the things that's happening with GPS, as we become more dependent on it, is that our own navigational skills are atrophying. So if we look at that in the context of AI, do we need to worry, in addition to the AI outstripping our own abilities, that we will become increasingly dependent on AI tools and atrophy our own functional intelligence?
>>TYSON: That's a great question. I want to add to it, and I want to go to John on this. If our faculties atrophy because they're replaced by AI-- and we know -- I didn't get there, because we don't have three hours here -- we also know that AI will be replacing many people's jobs. And I saw some statistic -- maybe it's exaggerated, but the sense of it is surely accurate -- that 70% of men have, as their livelihood, the act of driving some kind of vehicle: a taxi, a car service, a forklift, a truck, a... What's that? Post office? [That's what I said:] trucks, deliveries, this sort of thing. So autonomous vehicles render all of them unemployed. So there are consequences to this, and it's not clear that we are carrying with us an understanding of, or a sensitivity to, that. Surely Google has thought about this. What's going on there?
>>GIANNANDREA: Yeah. So I think throughout the course of history, technology has caused job displacement, and people find other jobs to do. It would take many, many, many decades for all transportation to be autonomous. But even if that happened, there would still be maintenance jobs, there would be manufacturing jobs, and so on and so forth. I think no one company has the answer to this. I think policymakers have been actively talking about this for as long as I've been in the field. There's no doubt that... I mean, I'll give you an example in healthcare. It might sound like, oh, if you build this autonomous system, then it's going to cause doctors to lose their jobs. That's not actually what's going to happen. What's going to happen is doctors will be able to see more patients and do a better job of diagnosing them. And oh, by the way, in the rest of the world, the ratio of doctors to people is pitiful, and people die as a result. So when we design a system that can automatically diagnose diabetic retinopathy, for example -- and we're deploying this in countries around the world -- it's a net addition of wealth to the world.
>>TYSON: So the concern about this might have some Luddite elements to it, is what you're [saying].
>>GIANNANDREA: No. No, I don't think so. I do think there will be job shifts and mixes, but I think it will take a very long time. And to this gentleman's question about GPS -- and now I think we're up to three different independent GPS systems in the world -- how many people in this room can use a sextant? One or two? Good, good. So there you go. I mean, do we think that's inherently disastrous? I don't think so.
>>TYSON: I just know when the satellites get taken out, I can find my way home. I got this.
>>AUDIENCE: Slide rule.
>>TYSON: And a slide... I'm the last person on earth to be formally taught how to use a slide rule. Let me quantify that better. I am the... I am the youngest person I've ever met who was formally trained on a slide rule. Because when I learned the slide rule, the next semester the price of a four-function calculator dropped from $200 to $30. And so then classes just adopted the calculator... That's as much as a book cost, so they stopped teaching the slide rule, and then I have a slide rule in my hand, and I felt, um... yeah. In an emergency, I can... You know. [Yes, there].
>>AUDIENCE: Thank you very much. We know there are neurons in our brain firing at 200 times per second, and they can activate very different parts of our brain and give us our thoughts and ideas and executions. I'm wondering, how big is a computer, a supercomputer, that mimics our brain's thinking ability?
>>TYSON: Good one. Let's go to Ruchir. Ruchir... That's a great philosophical question. Do our modern computers replicate the number of neurosynaptic phenomena in a human brain? And is that some measure of power?
>>PURI: So let me give you, actually, a very concrete example. What brought this latest evolution of AI together is actually a very large amount of data, together with a compute element that does matrix manipulations -- for those of you who may be familiar with linear algebra -- something called graphics processing units, known as GPUs. A single GPU consumes around 250 watts of power, and it takes thousands of them to focus on a very narrow task. This brain that all of us have is 1,200 cubic centimeters, consumes 20 watts of power, and runs on sandwiches. [laughter]
>>PURI: Just weigh it out, actually. Come on. [applause]
>>PURI: I gave you very concrete numbers, actually.
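[Editor's note: the arithmetic on the numbers Puri just quoted. The 250 W per GPU and roughly 20 W brain are his figures; the count of 1,000 GPUs is an assumed stand-in for his "thousands."]

```python
gpu_watts   = 250          # per GPU, as quoted
n_gpus      = 1_000        # "thousands" -- assumed here to be one thousand
brain_watts = 20           # the human brain, as quoted

cluster_watts = gpu_watts * n_gpus
print(cluster_watts)                   # 250,000 W for one narrow task
print(cluster_watts / brain_watts)     # 12,500x the brain's power budget
```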
And we are at a very narrow domain, and most of the time, computers fail at that as well. So when we talk about AGI -- that's interesting talk, yes, and certainly in academia we have to worry about it -- I am a long way away from it right now.
>>AUDIENCE: My guess is that we already have enough hardware in the world that we could make superhuman AGI with it, but we're just so behind on the software.
>>TYSON: And the brain, I think, was historically called wetware, right?
>>GIANNANDREA: Mm-hmm.
>>TYSON: Software, hardware, wetware.
>>GIANNANDREA: Mm-hmm.
>>TYSON: Okay. Just... I'm showing off that I knew that term, yeah. [laughter]
>>GIANNANDREA: And just to be clear, I mean, with all the advances in neuroscience, which have been tremendous in the last 30 years, we still have no idea how the human brain works. So we shouldn't get ahead of ourselves.
>>TYSON: Right. And we don't know what consciousness is, because we're still writing books on it.
>>TEGMARK: Well, we'll probably be able to figure out how to build AGI before we figure out how the brain works, just like we figured out how to build airplanes before we were able to build mechanical birds.
>>GIANNANDREA: Maybe.
>>TYSON: That's a good point. [laughter]
>>AUDIENCE: Good evening. I could probably be up there with you, Neil, on learning the slide rule. I'm 56 years old, and I learned how to use a slide rule before I had a calculator.
>>TYSON: Excellent. So I will no longer say I'm the youngest person, because I'm older than you. Yes.
>>AUDIENCE: A question--
>>TYSON: Wait. I gotta test him. What's the K scale for?
>>AUDIENCE: It's been a long time.
>>TYSON: Oh. [laughter]
>>AUDIENCE: It's been a long time. I still have my [unintelligible].
>>TYSON: Oh, give me an old-timer. Old-timer here, what's the K scale for? Steve? K scale? The K scale is the cube root scale.
>>AUDIENCE: Okay.
>>TYSON: That was really good.
>>AUDIENCE: I still have it, though. I still have my slide rule. I still have it in my...
>>TYSON: All right.
>>AUDIENCE: All right. Up to this point, everyone's been talking about quantity: power, power, power. What about quality? Certain things in life that we do can't be quantified. It's a quality: love, hate--
>>TYSON: --appreciation of a painting--
>>AUDIENCE: Right, exactly.
>>TYSON: --music--
>>AUDIENCE: Emotion. How is AI working on that end -- the quality of things, as opposed to quantity and raw computing power to do something?
>>TYSON: Michael, where does aesthetics come in? Aesthetics?
>>AUDIENCE: Yes.
>>WELLMAN: Well, that's right. I mean, certainly there are computers that compose music and even paint, and the question is, how will you judge this quality? And, yeah, I suppose one way to do it would be to ask humans about that, and people have even tried evolving art that humans like, and there is computer art. It may not be for everyone, but it's just difficult to judge. But there's really, again, no... Computers are going to have to figure out a lot about human tastes to compete in that territory.
>>TYSON: Unless it achieves a superconsciousness and invents a higher-level aesthetic than anything we ever imagined.
>>WELLMAN: Yeah, well, look. Maybe they already--
>>TYSON: Wait. You're acting like I pulled that out of the ether. Because AlphaGo made a move, if I remember correctly... No. AlphaZero made a Go move--
>>WELLMAN: AlphaGo [unintelligible].
>>TYSON: --that no one had ever imagined before.
>>WELLMAN: Yeah. Yeah, and I was lucky enough to be in Korea for that match, and I could just see the gasps on the experts' faces. It was like move number 23 in one of the games, and the experts were just like, that must be a mistake, right? And it actually turned out to be the beginning of the end of the game. And so then people anthropomorphize, though, and they say, well, this program must have intuition and creativity. But it's just an engineering model, right?
>>TEGMARK: But, you know, running a computer that makes art that it likes is actually very easy. [laughter]
>>TYSON: Yes.
>>AUDIENCE: You talked a lot about AGI and the future of AI. And there are a lot of people scared about AI when they hear about it. What are you doing to combat the fear and explain these extremely complex algorithms to the public, and more importantly, the government?
>>TYSON: I would say, Helen... You said you had early pushback on the Roomba, because it was the first sort of AI in the house. How did you deal with the PR challenge of this?
>>GREINER: I think we had more pushback before they saw it. I remember the first focus group. We'd go to women and say, hey, how about a robot vacuum? And they would imagine, like, a Terminator pushing a vacuum, and they're like, no, no, not at my house. You take out a Roomba, you show it to them, and, you know, if it gets uppity, you just give it a whack, and... you know. It's a completely different thing.
>>TYSON: You punish your Roomba?
>>GREINER: It's like computers. People used to fear, like, HAL taking over, from 2001, and once they have a computer on their desktop and they see, you know, the blue screen of death, in olden times, they start not fearing it. Same thing with a Roomba. Once you have a Roomba and you see what it can do, what it can't do...
>>TYSON: If I could just add to this: I think, slowly, we've become more accustomed to computers running things that, in a previous day, might have freaked us out. We've all been on the tram that gets you from one airport terminal to another, and no one freaks out that there isn't an engineer driving it at all. It's just... And it opens and closes doors. No one gets decapitated coming in and out. So, you're right, it's a slow adjustment, but I think it's real and irreversible, in the sense that we're not going to go back and say, gee, I want a human being driving this tram. We know it's not necessary. And I had an interesting revelation. I saw the movie Airport -- that's the disaster movie from the 1970s -- and it's a Boeing 707 or a 727. Not a big plane, by today's standards. They went into the cockpit. There were four people in the cockpit. I said, "What the hell are they doing?" One guy's got a map with a compass. There's a... And I had forgotten there was a day when you needed all these people to fly the damn plane. Now you barely even need one person, right? The 777 and some of the others are really computer-flown, and we're so much more comfortable with this. Yeah, so I think it'll happen, but slowly.
>>TEGMARK: Also, to combat fear, I think it's really important to focus on talking about the upsides, too. Everyone knows someone who's been diagnosed with a disease the doctor said was incurable. Well, it was not incurable. We humans just aren't intelligent enough today to figure out how to cure it. Of course, this is something AI can help with, right? We should talk about things like that. And the second thing is, it's just so important that the public doesn't perceive us AI researchers as trying to sweep the whole question under the rug -- it's like, nothing here to worry about -- because that's what folks fear, right? If the public can see that the researchers are having a sober discussion about this, they'll feel much more confident, I think.
>>TYSON: Okay. Only time for just a few more. Yes?
>>AUDIENCE: Thank you. I'm a young AI researcher from Queensborough Community College--
>>TYSON: Cool.
>>AUDIENCE: And I have a hundred-plus-one questions for you just right now.
>>TYSON: Let's do the plus-one. How about that?
>>AUDIENCE: My only question is, can I have more questions?
>>TYSON: Ooh.
>>AUDIENCE: Really. Would you give me the opportunity to talk to you at some point, for seven minutes of your day, just about AI?
>>TEGMARK: Email us.
>>GREINER: Sure.
>>GIANNANDREA: Sure. [applause]
>>GREINER: LinkedIn; LinkedIn.
>>TYSON: Generally, the email of academics is public. You just go to the university; generally you can find them. Folks in corporations are harder to get at, because they're up to stuff that they don't want us to know. Generally, that's how that works.
>>GIANNANDREA: But we do like--
>>GREINER: LinkedIn is a great way to connect.
>>GIANNANDREA: Yeah. We do, like, Reddit AMAs and things like that, so there are a lot of places where you can interact with us.
>>PURI: You can find us on the Internet as well.
>>TYSON: Cool. Right here, yes.
>>AUDIENCE: Hi. So you guys kind of touched on this question -- some people prior already asked my question, so I kind of tweaked it. So as AI grows, and as AI takes over the tasks that humans currently do, would you think there is potential for, like, a renaissance of art, philosophy, and new sciences that we can explore as AIs take over our old jobs?
>>TYSON: Is it because we'd have free time available to us?
>>AUDIENCE: Yeah.
>>TYSON: That's an interesting question. All right, so, Max?
>>TEGMARK: I think absolutely. You know, today we have this obsession that we all have to have a job, otherwise we're worthless human beings, right? It doesn't have to be that way. If we can have machines provide most of the goods and services, and we can just figure out a way of sharing this great wealth so that everybody gets better off, you could easily envision a future where you'd really get to have a lot more time living life the way you want.
>>TYSON: That is so hopeful of you, that you believe that humans with free time will create, and not just consume video from the couch. This is so beautiful. [laughter]
>>TYSON: That is a beautiful thing. [applause]
>>TYSON: Yes?
>>AUDIENCE: In 1946, Isaac Asimov wrote a short story in which technology had advanced to the point where a political candidate was suspected of being a robot, and no one could tell for sure whether or not he really was a robot. But what he did not envision was a time when technology advanced to the point where an informed electorate would not be able to distinguish between real news and news that was generated by artificial intelligence programs. Considering we're at that point now, shouldn't it be the primary concern of the AI community to realize that the tools they have created can be used in ways they never intended, and that they should do something about it? [applause]
>>TYSON: Oof. That one has to go to John from Google.
>>GIANNANDREA: Sure. Um...
>>TYSON: Yeah?
>>GIANNANDREA: So I'll say something positive and something more serious. Most of the fake news that we battle every day, in, for example, something like Google Search, is actually human-generated. It's actually not algorithmically generated. So absolutely, we have a responsibility to do a better job in our products, and our competitors' products, and I know for a fact that we take that responsibility very seriously and have made a lot of efforts in the last two years, starting with, I think, accepting that responsibility. The thing I'm worried about is that what you just said might come true in future elections. Today it is beyond the state of the art for computers and natural language understanding to determine veracity -- what's true versus not true. So we have lots of proxies for what we think is trustworthy, but if computers advance to the point where they can write as well as humans, and at scale, then I think we may have a serious problem. And there is a general--
>>TYSON: And give speeches; good speeches, yeah.
>>GIANNANDREA: Yeah. I mean, there are some systems today that can write newspaper articles -- and you consume them, about sports and finance -- and you don't know that they're written automatically. What I'm really worried about is the rise of so-called generative systems, where videos and texts and tweets and so on and so forth can be written, and the technology doesn't exist to distinguish them. I do think it'll be a bit of an arms race, right? There are researchers working on both sides of this, to try to detect these things, and Michael might want to say something about this as well. But it's at the very forefront of what a lot of artificial intelligence researchers worry about. Still, the stuff that is most worrisome today is actually generated by human beings.
>>AUDIENCE: Well, we're already at the point where, on Twitter, if someone takes a position that you disagree with, you say, "Well, you're a bot." You don't even believe they're a real person anymore, you know? Because a lot of people on Twitter believe the technology's advanced to that point already. So even if the technology isn't real, if people believe it's real, then you have a serious problem.
>>GIANNANDREA: Yeah, but I don't think it's beyond the state of the art for social networks to do a better job, and I think they are.
>>TYSON: Wait, wait. We're forgetting that we spend 20 years educating our children. And so you can adjust the educational system to be explicitly aware of, and sensitive to, how they could be duped by the Internet. We do that for how not to be duped by charlatans, by con artists. Those are the lessons of life. So I think it's unrealistic to expect an entire industry to somehow change so that they don't hurt us, when, in fact, it's our susceptibility to this that one ultimately can point to. And so we need defense mechanisms to protect us against that. And I think, as an educator, that happens in the educational system. Maybe I'm biased about this, but I think we have more power over that than people admit. Yeah. Can I get, like, the three youngest kids up front right now? Just... Okay. Go ahead. You go spread... I have the power to make this happen. You just go to the front of the line. Okay, yes. Go. Thanks for coming, by the way.
do you want to be a teenager again? The answer will be no, okay? >>AUDIENCE: So if there's no bias,
how can an AI have a personality? I know this was kind of touched on before
with the other bias question. >>TYSON: That's an interesting question,
because so much of what creates the nuances in us
are things you like, things you don't like, tastes that you have,
and some of that could be viewed as bias. So where are we here? Great question. >>WELLMAN: Yeah. I recently ran across somebody
referring to nondiscriminatory learning, and that's really an oxymoron. It's impossible. The whole point of learning is to make distinctions
and to discriminate. And so what's really hard is defining
which kind of bias is the unwelcome bias,
and which kind of discrimination is actually helping us make
the right [case]. Defining that is very hard. >>TYSON: You don't mean discrimination in the
civil rights sense. You mean discrimination as liking this rather
than that, as a simple act. >>WELLMAN: Right. Well, the thing is that that could then morph
into the other kind if it's...
if you're using the wrong reasons to make your decisions about what you're
accepting or what you're choosing to do. And I think that we have to refine what our
notions are. We have a current legal system that is designed
for a world where humans are making all the decisions,
and you could get into a lot of human things, like intent. Now, there are big loopholes for
situations where machines are making decisions that are potentially subject to biases.
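Wellman's distinction can be made concrete with a small numeric sketch. The data, the hypothetical protected attribute, and the decision threshold below are all invented; the point is only that a decision rule that never looks at group membership can still produce a group disparity.

```python
# Hypothetical sketch: all learning discriminates between inputs; the hard
# part is auditing for the unwelcome kind. Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                             # protected attribute
score = rng.normal(loc=50 + 5 * group, scale=10, size=n)  # correlated feature
approved = score > 55                                     # rule never sees `group`

# Demographic parity difference: gap in approval rates between the groups.
rates = [approved[group == g].mean() for g in (0, 1)]
print(f"approval rate, group 0: {rates[0]:.2f}")
print(f"approval rate, group 1: {rates[1]:.2f}")
print(f"parity gap: {abs(rates[1] - rates[0]):.2f}")
```

Whether that gap is the unwelcome kind of bias is exactly the definitional question Wellman raises; the arithmetic alone cannot answer it.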
>>AUDIENCE: Thank you. >>TYSON: Okay, sure. Right over here, yes? And how old are you? >>AUDIENCE: I'm 10. >>TYSON: Ten, very cool. Welcome. >>AUDIENCE: So this was slightly touched on earlier, but Asimov wrote a book called I, Robot,
and the first story in it is about a girl who's best friends with a robot,
and she doesn't have any other friends except the robot. And do you ever think that a robot could replace
all human friends and interactions with other humans? >>TYSON: Whoa. >>GREINER: Ooh. Um, well, I think in a very long timeframe,
yes. And, as I said, people today, I think,
start to get attached to these mechanical devices,
maybe thinking of them more as a pet right now than a friend,
but I think in the long term you could get attached to a robot system. >>TYSON: There was an actual...
There was an episode of Twilight Zone that addressed this problem. There was a colony, an outpost colony on
an asteroid, and there's... I forgot the details, but they sent him a
robot to keep him company. And then it was time to get him back to Earth,
and there was only enough weight allowance on the craft for him,
and not the robot. But it was a female robot, and he actually
fell in love with the robot. And they kept telling him, it's a robot. "No, but she's real. I swear that she's real." And in the... I don't want to give away what happens here,
but... [laughter] Yeah, no, I won't give it away. But if you find that episode,
I think all the episodes are on Netflix, so do a search for, like, "robot on an asteroid." You'll find that episode, and check it out. >>AUDIENCE: Thank you. >>TYSON: Yeah. And most Twilight Zone episodes
don't end well. Just, I want to just... [laughter] >>TYSON: Let's clear out this line, and
we'll end with you, okay? Yes, go ahead. >>AUDIENCE: Okay. IBM has a panel for ethics, morals, and values. But how can you say that a company in China
would have the same outlook in making advanced computer technology
as IBM or Google? Because, can you trust China with doing that? And another question is, with these
advanced robots like the replicants in Blade Runner, why do...
I know you said it's far ahead in the future... but why make a machine that looks so humanoid
anyway, when you could have an R2-D2 and say, okay... >>GREINER: Yeah, R2-D2. >>TYSON: Good one. >>AUDIENCE: ...could you wash my floor, could
you do my dishes? I don't need any robotics to make it look
so humanoid or like... >>TYSON: C-3PO was... >>GREINER: Right. I think you're hitting on something now.
if maybe there could be a future where they might want to like, you know, hey, you
wash the floor and you, whatever it is. >>TYSON: Yeah, Helen. >>GREINER: Right. [crosstalk]
Or to phrase it another way, there's like 8 billion humans
in the world. They all work really, really well,
so I'm not sure the market for making a humanoid is actually there. But one of the reasons Roomba is effective
is that it goes under the beds and into places where humans find it difficult
to get. So, by designing them around the jobs they're
doing, I think they're actually more effective
than potentially making a humanoid. >>AUDIENCE: But why make a future robot look
humanoid, then we have [crosstalk]. >>TYSON: No, that's her point. Her point is that will not happen... >>GREINER: Yeah, why? I agree with you. >>TYSON: ...in the way we all think it will. And here's an example. I remember seeing an old movie,
and you say, okay, I don't want to drive my car. I want a robot to do it. So out comes a humanoid robot, and it drives
the car. Without thinking that maybe the car itself
could be the robot, right? And remember The Jetsons, the maid, the robot
maid, had an apron. [laughter] >>TYSON: Okay? And it was clearly female,
when it didn't have to have any gender at all. So that's how we used to think of it,
but I agree with Helen entirely. You design something for its task,
and that will hardly ever have to look like a human being. You have the last question this evening. So how old are you? >>AUDIENCE: Eleven. >>TYSON: How old? >>AUDIENCE: Eleven. >>TYSON: Eleven. Very cool. Very cool. >>AUDIENCE: So my question is,
as AI increases in our society, do you foresee social ramifications for our
future and for our future generations? >>TYSON: Social ramifications like what? >>AUDIENCE: Such as
intelligent machines being integrated more into society. Could we become socially inept and regress
as the machines get smarter? >>TYSON: Yeah. Do humans start looking less relevant, less
important, clumsy, stupid, inept? Is that enough words to get the point across,
here? Yeah, Mike? >>WELLMAN: I mean, I think people will have
to deal with the fact that a lot of the stuff that they
have gotten status from in the past may not be
an avenue for them to do so in the future, and they'll have to find other ways to find meaning in their lives,
not just tied to a certain livelihood that they may be [from]. It has been, for most of our recent history
of automation, that it was lower-status jobs that got automated
away earlier. That may not be the case. It may be the lawyers that get automated next. [laughter and applause] >>TYSON: So the higher the capacity of AI,
the higher the level of the job it can replace. >>WELLMAN: It may not be in any kind of direct
ordering, you know? It might be that you can get the lawyers,
but you can't get the dishwashers or the... So it's going to be mixed around. >>TYSON: So it could be that AI will create
a version of itself that will replace AI researchers. >>WELLMAN: None of us are safe. I'll leave it there. >>TYSON: Thank you. Thank you for that question. Thank you. >>AUDIENCE: Thank you. [applause] >>TYSON: Allow me to share with you an AI
epiphany I had two days ago, where I said publicly that I was fearless
of AI, because if it starts getting unruly or out
of hand I just unplug it, or, since this is America,
I can just shoot it, right? So I'm pretty confident that I... What would I have to fear? And then, um, I was listening to a podcast
hosted by Sam Harris, where he had an AI person on just recently,
forgive me, I've forgotten his name, and Sam Harris mentioned my comment to him. And apparently it's a well-known... It's like AI in a box. So you know it's powerful,
you know if it gets into the economic systems and the Internet
it'll take over the world, so you just leave it in a box. It's safe there. And what the guy said is,
"It gets out of the box every time." And I said... I'm thinking to myself, how and why? Because...
it's smarter than you. It understands human emotions. It understands what I feel, what I want, what
I need. It could pose an argument
where I am convinced that I need to take it out of the box. Then it controls the world. And we don't even have to discuss what that
conversation needs to be. We just have to be aware, for example,
that, let's say you're trying to get a chimp in a room,
and the chimps say, "We think something bad is going to happen in that room,
so nobody go into that room." Then we come up, and we are way smarter than
chimps. We just take a banana; toss it in the room. "Oh, there's a banana in there now!" We go in; we capture the chimp. The chimp did not imagine that
we would show up with a banana. We captured the chimp. So just imagine something that much more intelligent
than we are that sees a broader spectrum of solutions
to problems than we are capable of imagining. And when I heard that, it's like, yes. The AI gets out of the box every time. Yes, we're all going to die. No. [laughter] Join me in thanking our panel. [applause]