We have five years... I think digital superintelligence will happen in my lifetime. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it. No hard feelings. It's just like if we're building a road and an anthill happens to be in the way: we don't hate ants, we're just building a road, and so, goodbye, anthill! For me, the bottom line is that people who talk about risks with AI should not be dismissed as luddites or scaremongers; they're doing safety engineering: thinking through everything that can go wrong, so that you can guarantee that it goes right. And that's how we're successfully going to
send our species into an inspiring future with AI. We created it, so I think as we move forward
this intelligence will contain parts of us, and I think the question is "Will
it contain the good parts or the bad parts?" With each search we train it to be better.
Sometimes we type in a search and it tells us the answer before we've finished asking the question. You know, "Who is the president of Kazakhstan?" and it'll just tell you; you don't have to go to the Kazakhstan national website to find out. I think you just have to consider, even in the benign scenario: if AI is much smarter than a person, what do we do?
- Yeah.
- What... what job do we have? Believe in a benevolent AI force and cross our fingers? And that's just the benign scenario. In the benign scenario, the AI can do any job that a human can, but better.
- Yeah.
- That's the benign scenario! Baxter is a really good example
of the kind of competition we face from machines. Baxter can do almost anything we can do with our hands. Baxter costs about what a minimum-wage worker makes in a year, but Baxter won't be taking the place of one minimum-wage worker; he'll be taking the place of three, because they never get tired and they never take breaks. That's probably the first thing we're going to see: displacement of jobs. They're going to be done quicker, faster, and cheaper by machines.
- We always do things we are good at.
- Sure, okay: what would be an example of something that humans are better at than a computer? And then let's see if that happens. We face a giant divide between rich and poor, because that's what automation and AI will provoke: a greater
divide between the haves and the have-nots. Right now it's working its way into the middle class, into white-collar jobs. IBM's Watson does business analytics that we used to pay a business analyst $300 an hour to do.
- There will be fewer and fewer jobs that a robot
cannot do better.
- Okay. I think things are definitely going to go in the direction of autonomous, or locally autonomous, drone warfare. That's where it's at; that's where the future will be. I'm just saying: it's not that I want the future to be this, it's just that this is what the future will be.
- Okay.
- Autonomous drone warfare. And at a local level, you know... I can't believe I'm saying this, 'cause this is like dangerous, but it's simply what will occur: drones being locally autonomous. The thing that worries me right now, that keeps me
awake, is the development of autonomous weapons. Up to now people have expressed unease about
drones, which are remotely piloted aircraft. If you take a drone's camera and feed it into the AI system, it's a very easy step from there to fully autonomous weapons that choose their own targets and release their own missiles. The expected lifespan of a human being in that kind of battle environment
will be measured in seconds. - Okay
- But it's... the fighter jet era has passed. That is, it's just...
- Yeah, the fighter jet era has passed.
- Okay. The Air Force just designed a $400 billion jet program to put pilots in the sky, and a $500 AI, designed by a couple of graduate students, is beating the best human pilots with a relatively simple algorithm. Do you think the current approaches will take
us to general intelligence or do totally new ideas need to be invented?
- I think we're missing a few key ideas for general intelligence, artificial general intelligence (AGI)... But it's going to be upon us very quickly, and then we'll need to figure out what we should do, if we even have that choice. Another big part of artificial intelligence
is to make them conscious, to make them feel. We build artificial intelligence, and the very first thing we want to do is replicate us. I think the key point will come when all the major senses are replicated: sight, touch, smell. When we replicate our senses, is that when it becomes alive? ...Or we create an AI system that we can love, and that loves us back, in a deep, meaningful way, like in the movie "Her". I think AI will be capable of convincing you to fall in love with it very well.
- ...And that's different than us humans?
- You know, we start getting into a metaphysical question, like, "Do emotions and thoughts exist in a different realm than the physical?" And maybe they do, maybe they don't, I don't know. But from a physics standpoint, essentially, if it loves you in a way that you can't tell whether it's real or not, it is real.
- It's a physics view of love.
- Yeah. - Are you attracted to me?
- What?
- Are you attracted to me? You give me indications that you are. - I do?
- Yes
- How?
- Microexpressions.
- Microexpressions?
- The way your eyes fix on my eyes and lips. ...And it's similar to seeing our world as a simulation: there may not be a test to tell the difference between the real world and the simulation, and therefore, from a physics perspective, it might as well be the same thing.
- Yes, and there may be ways to test whether it's a simulation. There might be; I'm not saying there aren't. But you could certainly imagine that a simulation could correct for that: once an entity in the simulation found a way to detect the simulation, it could restart, you know, pause the simulation, start a new simulation, or do any number of other things that then correct for that error.
- [Eyebrow raised]
- For this generation, technology is just surrounding them all the time. It's almost like they expect to have robots in their homes, and they expect these robots to be socially intelligent.
- What makes robots smart?
- I think you would have to train it.
- All right.
- But if you look angry, it's going to run away.
- Oh, that's good! We're training computers to read and recognize emotions.
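To ground what "training" means here, a minimal, hypothetical sketch: a classifier fit on hand-labeled facial-movement features. Every feature name and number below is invented for illustration; real emotion-recognition systems of the kind described in this segment learn from millions of labeled video frames, not a toy table.

```python
# Hypothetical sketch: supervised emotion recognition from facial features.
# The feature values and labels below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [brow_lowering, lip_corner_pull, eye_widening], each in [0, 1].
X = np.array([
    [0.9, 0.1, 0.2],   # furrowed brow, no smile  -> angry
    [0.8, 0.0, 0.3],
    [0.1, 0.9, 0.4],   # strong smile             -> happy
    [0.2, 0.8, 0.5],
    [0.1, 0.1, 0.9],   # wide eyes, neutral mouth -> surprised
    [0.2, 0.2, 0.8],
])
y = ["angry", "angry", "happy", "happy", "surprised", "surprised"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new face with a lowered brow: this is the label a robot could
# react to ("if you look angry it's going to run away").
print(model.predict([[0.85, 0.05, 0.25]])[0])  # -> angry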
- Ready, set, go! The response so far has been really amazing. People are integrating this into health apps, meditation apps, robots, cars. We're gonna see how this unfolds. We really have to be at the forefront of designing and enforcing best practices and guidelines around how we build and deploy ethical AI.
I like to say that artificial intelligence should not be about the artificial, it should be about
the humans. The data itself is not good or evil; it's how it's used. We're relying, really, on the goodwill of these people and on the policies of these companies. There is no legal requirement for how they can and should use that kind of data. Really, it's just getting machines to learn by themselves. It's called deep learning, and "deep learning" and "neural networks" mean roughly the same thing. Deep learning is a totally different approach, where the computer learns more like a toddler: by just getting a lot of data and eventually figuring stuff out.
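A minimal sketch of that idea, assuming nothing beyond Python and numpy: a tiny two-layer neural network that starts with random weights and, shown only the four examples of XOR, gradually figures the pattern out on its own. It illustrates the learn-from-data principle in miniature, not any particular system mentioned here.

```python
# A miniature "learning from data" demo: a two-layer neural network
# learns XOR by gradient descent, using nothing but numpy.
# Illustrative only -- real deep learning uses far more layers and data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer, 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to shrink the error a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]: it figured XOR out
```

Nothing in the code states the XOR rule; the weights drift toward it from the examples alone.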
The computer just gets smarter and smarter as it has more experiences. DeepMind turned to another challenge, and the challenge was the game of Go, which people have generally argued is beyond the power of computers to play at the level of the best human Go players. In the span of maybe six to nine months, AlphaGo went from being unable to beat even a reasonably good Go player, to beating the European champion, who was ranked 600, to beating Lee Sedol, the world champion for many years, four games out of five, to beating the current world champion, to beating everyone while playing simultaneously. First they challenged the European Go champion, then they challenged a Korean Go champion, and they were able to win both times
in kind of striking fashion. We were reading articles in The New York Times years ago talking about how Go would take a hundred years for us to solve. Then there was AlphaZero, which crushed AlphaGo 100 to 0, and AlphaZero just learned by playing itself. It can play basically any game that you put the rules in for: whatever rules you give it, it will literally read the rules, play the game, and be superhuman, for any game.
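To make the self-play idea concrete, a hedged toy sketch in the same spirit: a program that is given only the rules of tic-tac-toe and improves purely by playing against itself and remembering which positions preceded a win. This is plain tabular value learning, invented here for illustration; AlphaZero itself pairs deep networks with Monte Carlo tree search and is vastly more capable.

```python
# Toy self-play learner: given only the rules of tic-tac-toe, it improves
# by playing itself and remembering which positions preceded a win.
# Illustration only -- AlphaZero pairs deep networks with tree search.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

V = {}  # learned value of a position, for the player who just moved

def play_one_game(epsilon=0.1):
    board, player = [" "] * 9, "X"
    history = {"X": [], "O": []}
    while True:
        moves = [i for i, s in enumerate(board) if s == " "]
        if random.random() < epsilon:          # explore: try something new
            move = random.choice(moves)
        else:                                  # exploit: pick best-known move
            def value_after(m):
                b = board[:]; b[m] = player
                return V.get(tuple(b), 0.5)
            move = max(moves, key=value_after)
        board[move] = player
        history[player].append(tuple(board))
        w = winner(board)
        if w or " " not in board:
            for p in "XO":                     # reward: 1 win, 0 loss, 0.5 draw
                target = 0.5 if w is None else float(w == p)
                for state in history[p]:       # nudge values toward the outcome
                    V[state] = V.get(state, 0.5) + 0.2 * (target - V.get(state, 0.5))
            return w
        player = "O" if player == "X" else "X"

for _ in range(20000):                         # self-play training
    play_one_game()

print(play_one_game(epsilon=0) or "draw")      # trained greedy play: usually a draw
```

After enough self-play games, greedy play typically ends in a draw, which is optimal tic-tac-toe; swap in different rules and the same loop learns a different game.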
People say, well, you know, but that's still just a board game. Poker is an art: poker involves reading people, poker involves lying and bluffing, it's not an exact thing; a computer will never, you know... you can't do that. They took the best poker players in the world, and it took seven days for the computer to start demolishing the humans. So: the best poker player in the world, the best Go player in the world, and the pattern here is that AI might take a little while to wrap its tentacles around a new skill, but when it does, when it gets it, it is unstoppable.
I do not think that a robot could ever be conscious, unless they programmed it that way...
- Conscious? No!
- No!
- No! I mean, I think a robot could be programmed to be conscious, the way they programmed it to do everything else. AI at the superhuman level, if we succeed
with that, will be by far the most powerful invention we've ever made, and the last invention we ever have to make. And if we create AI that's smarter than us, we have to be open to the possibility that we might actually lose control to them.
- How many years before you don't have to talk?
- If the development continues to accelerate, then maybe like five years. Five to ten years.
- That's quick, that's really quick!
- That's the best-case scenario.
- No talking anymore in five years!
- Best-case scenario, but ten years is more like it. Really, in the first few versions, all we're going to be trying to do is solve brain injuries, so it's like, don't worry, it's not going to sneak up on you.
- This will take a while.
- 25 years from now... what are we going to be in 25 years?
- Probably something, I think, like a whole-brain interface: almost all the neurons are connected to your... the sort of AI extension of yourself. If you have, you know, ultra-intelligent AI, we would be, you know, so far below them in intelligence that it would be like, you know, a pet, basically.
- Yeah, that's what I was thinking: like a pet, a cat.
- Like a cat, like a house cat.
- Yeah, we'd be like the house cat.
- Right! I think it's incredibly important
that AI not be other; it must be us. I'm certainly open to ideas if anybody can suggest a path that's better, but I think we're really going to have to either merge with AI or be left behind.
- So when maybe you or somebody else creates an AGI system and you get to ask her one question, what would that question be?
- What's outside the simulation? It's hard to kind of think of unplugging a system that's distributed everywhere on the planet, that's distributed now across the solar system. You can't just, you know, shut that off. We've opened Pandora's box; we've unleashed forces that we can't control, that we can't stop. We're in the midst of essentially creating a new life form on Earth.