Henry: “Will Artificial Intelligence ever
replace humans?” is a hotly debated question these days. Some people claim computers will eventually
gain superintelligence, be able to outperform humans on any task, and destroy humanity. Other people say “don’t worry, AI will
just be another tool we can use and control, like our current computers.” So we’ve got physicist and AI-researcher
Max Tegmark back again to share with us the collective takeaways from the recent Asilomar
conference on the future of AI that he helped organize – and he’s going to help separate
AI myths from AI facts.
Max: Hello!
Henry: First of all, Max, machines (including
computers) have long been better than us at many tasks, like arithmetic, or weaving, but
those are often repetitive and mechanical operations. So why shouldn’t I believe that there are
some things that are simply impossible for machines to do as well as people? Say making minutephysics videos, or consoling
a friend?
Max: Well, we’ve traditionally thought of
intelligence as something mysterious that can only exist in biological organisms, especially
humans. But from the perspective of modern physical
science, intelligence is simply a particular kind of information processing and reacting
performed by particular arrangements of elementary particles moving around, and there’s no
law of physics that says it’s impossible to do that kind of information processing
better than humans already do. It’s not a stretch to say that earthworms
process information better than rocks, and humans better than earthworms, and in many
areas, machines are already better than humans. This suggests we’ve likely only seen the
tip of the intelligence iceberg, and that we’re on track to unlock the full intelligence
that’s latent in nature and use it to help humanity flourish - or flounder.
Henry: So how do we keep ourselves on the
right side of the “flourish or flounder” balance? What, if anything, should we really be concerned
about with superintelligent AI?
Max: Here’s what has many top AI researchers
concerned: not machines or computers turning evil, but something more subtle: superintelligence
that simply doesn’t share our goals.  If a heat-seeking missile is homing in on
you, you probably wouldn’t think: “No need to worry, it’s not evil, it’s just
following its programming.” No, what matters to you is what the heat-seeking
missile does and how well it does it, not what it’s feeling, or whether it has feelings
at all. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very
good at attaining its goals, so the most important thing for us to do is to ensure that its goals
are aligned with ours. As an analogy, humans are more intelligent
and competent than ants, and if we want to build a hydroelectric dam where there happens
to be an anthill, there may be no malevolence involved, but, well... too bad for the ants. Cats and dogs, on the other hand, have done
a great job of aligning their goals with the goals of humans – I mean, even though I’m
a physicist, I can’t help thinking kittens are the cutest particle arrangements in our
universe... If we build superintelligence, we’d be better
off in the position of cats and dogs than ants. Or better yet, we’ll figure out how to ensure
that AI adopts our goals rather than the other way around.
Henry: And when exactly is superintelligence
going to arrive? When do we need to start panicking?
Max: First of all, Henry, superintelligence
doesn’t have to be something negative. In fact, if we get it right, AI might become
the best thing ever to happen to humanity. Everything I love about civilization is the
product of intelligence, so if AI amplifies our collective intelligence enough to solve
today’s and tomorrow’s greatest problems, humanity might flourish like never before. Second, most AI researchers think superintelligence
is at least decades away... Buuuut the research needed to ensure that
it remains beneficial to humanity (rather than harmful) might also take decades, so
we need to start right away. For example, we’ll need to figure out how
to ensure machines learn the collective goals of humanity, adopt these goals for themselves,
and retain the goals as they keep getting smarter. And what about when our goals disagree? Should we vote on what the machine’s goals
should be? Just do whatever the president wants? Whatever the creator of the superintelligence
wants? Let the AI decide? In a very real way, the question of how
to live with superintelligence is a question of what sort of future we want to create for
humanity. Which obviously shouldn’t just be left to
AI researchers, as caring and socially skilled as we are. ;)
Henry: Thanks, Max! So, uh, how do I get involved to make sure
we don’t end up living in a superintelligence-powered dictatorship?
Max: At the Future of Life Institute (Henry
interjects: which is sponsoring this video), we’ve built a site where you can go to answer
questions, ask questions, and otherwise contribute your thoughts to help shape the future of
AI policy and research. Link’s in the video description.
Henry: Awesome.