Chris Anderson: Elon Musk,
great to see you. How are you? Elon Musk: Good. How are you? CA: We're here at the Texas Gigafactory
the day before this thing opens. It's been pretty crazy out there. Thank you so much
for making time on a busy day. I would love you to help us,
kind of, cast our minds, I don't know, 10, 20,
30 years into the future. And help us try to picture
what it would take to build a future that's worth
getting excited about. The last time you spoke at TED, you said that that was really
just a big driver. You know, you can talk about lots of other
reasons to do the work you're doing, but fundamentally, you want
to think about the future and not think that it sucks. EM: Yeah, absolutely. I think in general, you know, there's a lot of discussion of like,
this problem or that problem. And a lot of people are sad
about the future and they're ... Pessimistic. And I think ... this is ... This is not great. I mean, we really want
to wake up in the morning and look forward to the future. We want to be excited
about what's going to happen. And life cannot simply be about sort of, solving one miserable
problem after another. CA: So if you look forward 30 years,
you know, the year 2050 has been labeled by scientists as this, kind of, almost like this
doomsday deadline on climate. There's a consensus of scientists,
a large consensus of scientists, who believe that if we haven't
completely eliminated greenhouse gases or offset them completely by 2050, effectively we're inviting
climate catastrophe. Do you believe there is a pathway
to avoid that catastrophe? And what would it look like? EM: Yeah, so I am not one
of the doomsday people, which may surprise you. I actually think we're on a good path. But at the same time, I want to caution against complacency. So, so long as we are not complacent, as long as we have a high sense of urgency about moving towards
a sustainable energy economy, then I think things will be fine. So I can't emphasize that enough, as long as we push hard
and are not complacent, the future is going to be great. Don't worry about it. I mean, worry about it, but if you worry about it, ironically,
it will be a self-unfulfilling prophecy. So, like, there are three elements
to a sustainable energy future. One is sustainable energy generation,
which is primarily wind and solar. There's also hydro, geothermal, I'm actually pro-nuclear. I think nuclear is fine. But it's going to be primarily
solar and wind, as the primary generators of energy. The second part is you need batteries
to store the solar and wind energy because the sun
doesn't shine all the time, the wind doesn't blow all the time. So it's a lot of stationary battery packs. And then you need electric transport. So electric cars, electric planes, boats. And then ultimately, it’s not really possible
to make electric rockets, but you can make
the propellant used in rockets using sustainable energy. So ultimately, we can have a fully
sustainable energy economy. And it's those three things: solar/wind, stationary
battery pack, electric vehicles. So then what are the limiting
factors on progress? The limiting factor really will be
battery cell production. So that's going to really be
the fundamental rate driver. And then whatever the slowest element of the whole lithium-ion
battery cell supply chain, from mining and the many steps of refining to ultimately creating a battery cell and putting it into a pack, that will be the limiting factor
on progress towards sustainability. CA: All right, so we need to talk
more about batteries, because the key thing
that I want to understand, like, there seems to be
a scaling issue here that is kind of amazing and alarming. You have said that you have calculated that the amount of battery production
that the world needs for sustainability is 300 terawatt hours of batteries. That's the end goal? EM: Very rough numbers, and I certainly would invite others
to check our calculations because they may arrive
at different conclusions. But in order to transition, not just
current electricity production, but also heating and transport, which roughly triples the amount
of electricity that you need, it amounts to approximately 300 terawatt hours of installed capacity.
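As a rough sanity check on that figure -- with every input being an assumed round number rather than anything stated in the interview -- tripling today's electricity output and holding roughly a day and a half of it in batteries lands in the same ballpark:

```python
# Back-of-the-envelope check of the ~300 TWh figure.
# Every number here is an assumed round figure for illustration, not Tesla's model.
current_generation_twh_per_year = 25_000   # assumed: rough current world electricity generation
electrification_multiplier = 3             # per the interview: heating + transport roughly triples demand
future_generation_twh_per_year = current_generation_twh_per_year * electrification_multiplier

daily_demand_twh = future_generation_twh_per_year / 365
buffer_days = 1.5                          # assumed storage buffer to ride through nights and calm weather
storage_needed_twh = daily_demand_twh * buffer_days

print(f"Daily demand: ~{daily_demand_twh:.0f} TWh")
print(f"Storage for {buffer_days} days of buffer: ~{storage_needed_twh:.0f} TWh")
# -> roughly 200 TWh of daily demand and ~300 TWh of storage, the same order as quoted above.
```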
CA: So we need to give people
a sense of how big a task that is. I mean, here we are at the Gigafactory. You know, this is one of the biggest
buildings in the world. What I've read, and tell me
if this is still right, is that the goal here is to eventually
produce 100 gigawatt hours of batteries here a year. EM: We will probably do more than that, but yes, hopefully we get there
within a couple of years. CA: Right. But I mean, that is one -- EM: 0.1 terawatt hours. CA: But that's still 1/100
of what's needed. How much of the rest of that 100
is Tesla planning to take on let's say, between now and 2030, 2040, when we really need to see
the scale up happen? EM: I mean, these are just guesses. So please, people shouldn't
hold me to these things. It's not like this is like some -- What tends to happen
is I'll make some, like, you know, best guess, and then, in five years, there’ll be some jerk
that writes an article: "Elon said this would happen,
and it didn't happen. He's a liar and a fool." It's very annoying when that happens. So these are just guesses,
this is a conversation. CA: Right. EM: I think Tesla probably ends up
doing 10 percent of that. Roughly. CA: Let's say 2050 we have this amazing, you know,
100 percent sustainable electric grid made up of, you know, some mixture
of the sustainable energy sources you talked about. That same grid probably
is offering the world really low-cost energy, isn't it, compared with now. And I'm curious about like, are people entitled to get
a little bit excited about the possibilities of that world? EM: People should be optimistic
about the future. Humanity will solve sustainable energy. It will happen if we, you know,
continue to push hard, the future is bright and good
from an energy standpoint. And then it will be possible to also use
that energy to do carbon sequestration. It takes a lot of energy to pull
carbon out of the atmosphere because in putting it in the atmosphere
it releases energy. So now, you know, obviously
in order to pull it out, you need to use a lot of energy. But if you've got a lot of sustainable
energy from wind and solar, you can actually sequester carbon. So you can reverse the CO2 parts
per million of the atmosphere and oceans. And also you can really have
as much fresh water as you want. Earth is mostly water. We should call Earth “Water.” It's 70 percent water by surface area. Now most of that’s seawater, but it's like we just happen to be
on the bit that's land. CA: And with energy,
you can turn seawater into -- EM: Yes. CA: Irrigating water
or whatever water you need. EM: At very low cost. Things will be good. CA: Things will be good. And also, there's other benefits
to this non-fossil fuel world where the air is cleaner -- EM: Yes, exactly. Because, like, when you burn fossil fuels, there's all these side reactions and toxic gases of various kinds. And sort of little particulates
that are bad for your lungs. Like, there's all sorts
of bad things that are happening that will go away. And the sky will be cleaner and quieter. The future's going to be good. CA: I want us to switch now to think
a bit about artificial intelligence. But the segue there, you mentioned how annoying it is
when people call you up for bad predictions in the past. So I'm possibly going to be annoying now, but I’m curious about your timelines
and how you predict and how come some things are so amazingly
on the money and some aren't. So when it comes to predicting sales
of Tesla vehicles, for example, you've kind of been amazing, I think in 2014 when Tesla
had sold that year 60,000 cars, you said, "2020, I think we will do
half a million a year." EM: Yeah, we did
almost exactly a half million. CA: You did almost exactly half a million. You were scoffed in 2014
because no one since Henry Ford, with the Model T, had come close
to that kind of growth rate for cars. You were scoffed, and you actually
hit 500,000 cars and then 510,000 or whatever produced. But five years ago,
last time you came to TED, I asked you about full self-driving, and you said, “Yeah, this very year, I'm confident that we will have a car
going from LA to New York without any intervention." EM: Yeah, I don't want to blow your mind,
but I'm not always right. CA: (Laughs) What's the difference between those two? Why has full self-driving in particular
been so hard to predict? EM: I mean, the thing that really got me, and I think it's going to get
a lot of other people, is that there are just so many
false dawns with self-driving, where you think you've got the problem, have a handle on the problem, and then it, no, turns out
you just hit a ceiling. Because if you were to plot the progress, the progress looks like a log curve. So it's like a series of log curves. So most people don't know
what a log curve is, I suppose. CA: Show the shape with your hands. EM: It goes up you know,
sort of a fairly straight way, and then it starts tailing off and you start getting diminishing returns. And you're like, uh oh, it was trending up and now
it's sort of, curving over and you start getting to these,
what I call local maxima, where you don't realize
basically how dumb you were. And then it happens again.
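To make that shape concrete, here is a toy numeric illustration (invented numbers, not Tesla data) of progress that behaves like a series of log curves -- each new approach climbs quickly, then flattens into the kind of local maximum being described:

```python
# Toy model: progress as a series of saturating "log curves."
# The breakthrough times and scales are invented purely for illustration.
import math

def log_curve(t, start, scale):
    """One approach: fast early gains after `start`, then diminishing returns."""
    return 0.0 if t < start else scale * math.log1p(t - start)

breakthroughs = [(0, 1.0), (5, 0.8), (10, 0.6)]   # assumed (start_year, scale) pairs

for t in range(15):
    progress = sum(log_curve(t, s, k) for s, k in breakthroughs)
    print(f"year {t:2d}: progress = {progress:.2f}")
# Early on each curve the slope looks like it will keep going; a few years in,
# the increments shrink -- the repeated "false dawn" pattern described above.
```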
And ultimately ... These things, you know, in retrospect, they seem obvious, but in order to solve
full self-driving properly, you actually have to solve real-world AI. Because what are the road networks
designed to work with? They're designed to work
with a biological neural net, our brains, and with vision, our eyes. And so in order to make it
work with computers, you basically need to solve
real-world AI and vision. Because we need cameras and silicon neural nets in order to have self-driving work for a system that was designed
for eyes and biological neural nets. You know, I guess
when you put it that way, it's sort of, like, quite obvious that the only way
to solve full self-driving is to solve real world AI
and sophisticated vision. CA: What do you feel
about the current architecture? Do you think you have an architecture now where there is a chance for the logarithmic curve
not to tail off anytime soon? EM: Well, I mean, admittedly
these may be infamous last words, but I actually am confident
that we will solve it this year. That we will exceed -- The probability of an accident, at what point do you exceed
that of the average person? I think we will exceed that this year. CA: What are you seeing behind the scenes
that gives you that confidence? EM: We’re almost at the point
where we have a high-quality unified vector space. In the beginning, we were trying
to do this with image recognition on individual images. But if you get one image out of a video, it's actually quite hard to see
what's going on without ambiguity. But if you look at a video segment
of a few seconds of video, that ambiguity resolves. So the first thing we had to do
is tie all eight cameras together so they're synchronized, so that all the frames
are looked at simultaneously and labeled simultaneously by one person, because we still need human labeling. So at least they’re not labeled
at different times by different people in different ways. So it's sort of a surround picture. Then a very important part
is to add the time dimension. So that you’re looking at surround video, and you're labeling surround video. And this is actually quite difficult to do
from a software standpoint. We had to write our own labeling tools and then create auto-labeling software to amplify
the efficiency of human labelers because it’s quite hard to label. In the beginning,
it was taking several hours to label a 10-second video clip. This is not scalable. So basically what you have to have
is you have to have surround video, and that surround video has to be
primarily automatically labeled with humans just being editors and making slight corrections
to the labeling of the video and then feeding back those corrections
into the future auto labeler, so you get this flywheel eventually where the auto labeler
is able to take in vast amounts of video and with high accuracy, automatically label the video
for cars, lane lines, drive space.
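A schematic sketch of the kind of loop being described may help; the structure and numbers below are invented for illustration and this is not Tesla's actual pipeline:

```python
# Schematic "flywheel": auto-label clips, let humans correct only the doubtful
# ones, and fold the corrections back in so the next pass needs less human work.
import random

def auto_label(clip, skill):
    """Pretend auto-labeler: returns (labels, confidence); quality grows with `skill`."""
    confidence = min(0.99, skill + random.uniform(-0.1, 0.1))
    return ("labels_for_" + clip, confidence)

def run_flywheel(clips, rounds=4):
    skill = 0.5                                    # assumed starting quality of the auto-labeler
    for r in range(rounds):
        results = [(c, *auto_label(c, skill)) for c in clips]
        needs_human = [c for c, _, conf in results if conf < 0.8]   # humans act as editors only
        skill = min(0.95, skill + 0.15)            # corrections fed back improve the next round
        print(f"round {r}: {len(needs_human)}/{len(clips)} clips needed human correction")

run_flywheel([f"clip_{i}" for i in range(20)])
```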
CA: What you’re saying is ... the result of this is that you're effectively giving the car a 3D model of the actual objects
that are all around it. It knows what they are, and it knows how fast they are moving. And the remaining task is to predict what the quirky behaviors are
that, you know, that when a pedestrian is walking
down the road with a smaller pedestrian, that maybe that smaller pedestrian
might do something unpredictable or things like that. You have to build into it
before you can really call it safe. EM: You basically need to have
memory across time and space. So what I mean by that is ... Memory can’t be infinite, because it's using up a lot
of the computer's RAM basically. So you have to say how much
are you going to try to remember? It's very common
for things to be occluded. So if you talk about say,
a pedestrian walking past a truck where you saw the pedestrian start
on one side of the truck, then they're occluded by the truck. You would know intuitively, OK, that pedestrian is going to pop out the other side, most likely.
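A toy version of that idea -- bounded memory plus a coasting prediction for occluded objects -- might look like the sketch below; the structure is invented for illustration, not Tesla's tracker:

```python
# Toy "memory across time": keep a track alive for a bounded number of unseen
# frames and coast it forward so it predicts where the object should reappear.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float                 # last estimated position along the road (meters)
    vx: float                # estimated velocity (m/s)
    missed_frames: int = 0

MAX_MISSED = 30              # assumed memory budget: drop a track after 30 unseen frames

def update(track: Track, detection: Optional[float], dt: float = 0.1) -> Optional[Track]:
    if detection is not None:                    # object visible: refresh position and velocity
        track.vx = (detection - track.x) / dt
        track.x, track.missed_frames = detection, 0
    else:                                        # occluded: coast on the last velocity
        track.x += track.vx * dt
        track.missed_frames += 1
        if track.missed_frames > MAX_MISSED:     # bounded memory, as discussed above
            return None
    return track

# A pedestrian walks behind a truck: detections stop for a few frames, but the
# coasted track still predicts roughly where they should pop out on the other side.
t = Track(x=0.0, vx=1.5)
for frame, det in enumerate([0.15, 0.30, None, None, None, 0.75]):
    t = update(t, det)
    print(f"frame {frame}: estimated x = {t.x:.2f} m")
```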
CA: A computer doesn't know it. EM: You need to slow down. CA: A skeptic is going to say
that every year for the last five years, you've kind of said, well, no this is the year, we're confident that it will be there
in a year or two or, you know, like it's always been about that far away. But we've got a new architecture now, you're seeing enough improvement
behind the scenes to make you not certain,
but pretty confident that, by the end of this year, in most -- not in every city and every circumstance, but in many cities and circumstances -- basically the car will be able
to drive without interventions safer than a human. EM: Yes. I mean, the car currently
drives me around Austin most of the time with no interventions. So it's not like ... And we have over 100,000 people in our full self-driving beta program. So you can look at the videos
that they post online. CA: I do. And some of them are great,
and some of them are a little terrifying. I mean, occasionally
the car seems to veer off and scare the hell out of people. EM: It’s still a beta. CA: But you’re behind the scenes,
looking at the data, you're seeing enough improvement to believe that a this-year
timeline is real. EM: Yes, that's what it seems like. I mean, we could be here
talking again in a year, like, well, another year went by,
and it didn’t happen. But I think this is the year. CA: And so in general,
when people talk about Elon time, I mean it sounds like
you can't just have a general rule that if you predict that something
will be done in six months, actually what we should imagine
is it’s going to be a year or it’s like two-x or three-x,
it depends on the type of prediction. Some things, I guess,
things involving software, AI, whatever, are fundamentally harder
to predict than others. Is there an element that you actually deliberately make
aggressive prediction timelines to drive people to be ambitious? Without that, nothing gets done? EM: Well, I generally believe,
in terms of internal timelines, that we want to set the most aggressive
timeline that we can. Because there’s sort of like
a law of gaseous expansion where, for schedules, where
whatever time you set, it's not going to be less than that. It's very rare
that it'll be less than that. But as far as our predictions
are concerned, what tends to happen in the media is that they will report
all the wrong ones and ignore all the right ones. Or, you know, when writing
an article about me -- I've had a long career
in multiple industries. If you list my sins, I sound
like the worst person on Earth. But if you put those
against the things I've done right, it makes much more sense, you know? So essentially like,
the longer you do anything, the more mistakes
that you will make cumulatively. Which, if you sum up those mistakes, will sound like I'm the worst
predictor ever. But for example, for Tesla vehicle growth, I said I think we’d do 50 percent,
and we’ve done 80 percent. CA: Yes. EM: But they don't mention that one. So, I mean, I'm not sure what my exact
track record is on predictions. They're more optimistic than pessimistic,
but they're not all optimistic. Some of them are exceeded, probably more are late, but they do come true. It's very rare that they do not come true. It's sort of like, you know, if there's some radical
technology prediction, the point is not
that it was a few years late, but that it happened at all. That's the more important part. CA: So it feels like
at some point in the last year, seeing the progress on understanding, the Tesla AI understanding
the world around it, led to a kind of, an aha moment at Tesla. Because you really surprised people
recently when you said probably the most important
product development going on at Tesla this year
is this robot, Optimus. EM: Yes. CA: Many companies out there
have tried to put out these robots, they've been working on them for years. And so far no one has really cracked it. There's no mass adoption
robot in people's homes. There are some in manufacturing,
but I would say, no one's kind of, really cracked it. Is it something that happened in the development of full self-driving
that gave you the confidence to say, "You know what, we could do
something special here." EM: Yeah, exactly. So, you know, it took me a while
to sort of realize that in order to solve self-driving, you really needed to solve real-world AI. And at the point at which you solve
real-world AI for a car, which is really a robot on four wheels, you can then generalize that
to a robot on legs as well. The two hard parts I think -- like obviously companies
like Boston Dynamics have shown that it's possible
to make quite compelling, sometimes alarming robots. CA: Right. EM: You know, so from a sensors
and actuators standpoint, it's certainly been demonstrated by many that it's possible to make
a humanoid robot. The things that are currently missing
are enough intelligence for the robot to navigate the real world
and do useful things without being explicitly instructed. So the missing things are basically
real-world intelligence and scaling up manufacturing. Those are two things
that Tesla is very good at. And so then we basically just need
to design the specialized actuators and sensors that are needed
for a humanoid robot. People have no idea,
this is going to be bigger than the car. CA: So let's dig into exactly that. I mean, in one way, it's actually
an easier problem than full self-driving because instead of an object
going along at 60 miles an hour, which if it gets it wrong,
someone will die. This is an object that's engineered
to only go at what, three or four or five miles an hour. And so a mistake,
there aren't lives at stake. There might be embarrassment at stake. EM: So long as the AI doesn't take it over
and murder us in our sleep or something. CA: Right. (Laughter) So talk about -- I think the first applications
you've mentioned are probably going to be manufacturing, but eventually the vision is to have
these available for people at home. If you had a robot that really understood
the 3D architecture of your house and knew where every object
in that house was or was supposed to be, and could recognize all those objects, I mean, that’s kind of amazing, isn’t it? Like the kind of thing
that you could ask a robot to do would be what? Like, tidy up? EM: Yeah, absolutely. Make dinner, I guess, mow the lawn. CA: Take a cup of tea to grandma
and show her family pictures. EM: Exactly. Take care
of my grandmother and make sure -- CA: It could obviously recognize
everyone in the home. It could play catch with your kids. EM: Yes. I mean, obviously,
we need to be careful this doesn't become a dystopian situation. I think one of the things
that's going to be important is to have a localized
ROM chip on the robot that cannot be updated over the air. Where if you, for example, were to say,
“Stop, stop, stop,” if anyone said that, then the robot would stop,
you know, type of thing. And that's not updatable remotely. I think it's going to be important
to have safety features like that. CA: Yeah, that sounds wise. EM: And I do think there should be
a regulatory agency for AI. I've said that for many years. I don't love being regulated, but I think this is an important
thing for public safety. CA: Let's come back to that. But I don't think many people
have really sort of taken seriously the notion of, you know, a robot at home. I mean, at the start
of the computing revolution, Bill Gates said there's going to be
a computer in every home. And people at the time said, yeah,
whatever, who would even want that. Do you think there will be basically
like in, say, 2050 or whatever, like a robot in most homes,
is what there will be, and people will love them
and count on them? You’ll have your own butler basically. EM: Yeah, you'll have your sort of
buddy robot probably, yeah. CA: I mean, how much of a buddy? How many applications have you thought, you know, can you have
a romantic partner, a sex partner? EM: It's probably inevitable. I mean, I did promise the internet
that I’d make catgirls. We could make a robot catgirl. CA: Be careful what
you promise the internet. (Laughter) EM: So, yeah, I guess it'll be
whatever people want really, you know. CA: What sort of timeline
should we be thinking about of the first models
that are actually made and sold? EM: Well, you know, the first units
that we intend to make are for jobs that are dangerous,
boring, repetitive, and things that people don't want to do. And, you know, I think we’ll have like
an interesting prototype sometime this year. We might have something useful next year, but I think quite likely
within at least two years. And then we'll see
rapid growth year over year of the usefulness
of the humanoid robots and decrease in cost
and scaling up production. CA: Initially just selling to businesses, or when do you picture
you'll start selling them where you can buy your parents one
for Christmas or something? EM: I'd say in less than ten years. CA: Help me on the economics of this. So what do you picture the cost
of one of these being? EM: Well, I think the cost is actually
not going to be crazy high. Like less than a car. Initially, things will be expensive
because it'll be a new technology at low production volume. The complexity and cost of a car
is greater than that of a humanoid robot. So I would expect that it's going
to be less than a car, or at least equivalent to a cheap car. CA: So even if it starts at 50k,
within a few years, it’s down to 20k or lower or whatever. And maybe for home
they'll get much cheaper still. But think about the economics of this. If you can replace a $30,000, $40,000-a-year worker, which you have to pay every year, with a one-time payment of $25,000 for a robot that can work longer hours, that's a pretty rapid replacement of certain types of jobs.
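The payback arithmetic behind that comparison is short; the salary and price figures are the ones floated in the conversation, and the hours multiplier is an additional assumption of mine:

```python
# Rough payback period for the hypothetical robot-for-worker swap above.
worker_cost_per_year = 35_000   # midpoint of the $30,000-$40,000 salary mentioned
robot_price = 25_000            # one-time purchase price mentioned
hours_multiplier = 2            # assumed: a robot might work roughly twice a human's hours

payback_years = robot_price / (worker_cost_per_year * hours_multiplier)
print(f"Rough payback period: {payback_years:.2f} years")   # well under one year
```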
How worried should
the world be about that? EM: I wouldn't worry about the sort of,
putting people out of a job thing. I think we're actually going to have,
and already do have, a massive shortage of labor. So I think we will have ... Not people out of work, but actually still a shortage
of labor even in the future. But this really will be
a world of abundance. Any goods and services will be available
to anyone who wants them. It'll be so cheap to have goods
and services, it will be ridiculous. CA: I'm presuming it should be possible
to imagine a bunch of goods and services that can't profitably be made now
but could be made in that world, courtesy of legions of robots. EM: Yeah. It will be a world of abundance. The only scarcity
that will exist in the future is that which we decide to create
ourselves as humans. CA: OK. So AI is allowing us to imagine
a differently powered economy that will create this abundance. What are you most worried
about going wrong? EM: Well, like I said,
AI and robotics will bring out what might be termed the age of abundance. Other people have used this word, and this is my prediction: it will be an age of abundance
for everyone. But I guess there’s ... The dangers would be
the artificial general intelligence or digital superintelligence decouples
from a collective human will and goes in the direction
that for some reason we don't like. Whatever direction it might go. You know, that’s sort of
the idea behind Neuralink, is to try to more tightly couple
collective human will to digital superintelligence. And also along the way solve a lot
of brain injuries and spinal injuries and that kind of thing. So even if it doesn't succeed
in the greater goal, I think it will succeed in the goal
of alleviating brain and spine damage. CA: So the spirit there is
that if we're going to make these AIs that are so vastly intelligent,
we ought to be wired directly to them so that we ourselves can have
those superpowers more directly. But that doesn't seem to avoid
the risk that those superpowers might ... turn ugly in unintended ways. EM: I think it's a risk, I agree. I'm not saying that I have
some certain answer to that risk. I’m just saying like maybe one of the things
that would be good for ensuring that the future
is one that we want is to more tightly couple the collective human world
to digital intelligence. The issue that we face here
is that we are already a cyborg, if you think about it. The computers are
an extension of ourselves. And when we die, we have,
like, a digital ghost. You know, all of our text messages
and social media, emails. And it's quite eerie actually, when someone dies but everything
online is still there. But you say like, what's the limitation? What is it that inhibits
a human-machine symbiosis? It's the data rate. When you communicate,
especially with a phone, you're moving your thumbs very slowly. So you're like moving
your two little meat sticks at a rate that’s maybe 10 bits per second,
optimistically, 100 bits per second. And computers are communicating
at the gigabyte level and beyond. CA: Have you seen evidence
that the technology is actually working, that you've got a richer, sort of,
higher bandwidth connection, if you like, between like external
electronics and a brain than has been possible before? EM: Yeah. I mean, the fundamental principles
of reading neurons, sort of doing read-write on neurons
with tiny electrodes, have been demonstrated for decades. So it's not like the concept is new. The problem is that there is
no product that works well that you can go and buy. So it's all sort of, in research labs. And it's like some cords
sticking out of your head. And it's quite gruesome,
and it's really ... There's no good product
that actually does a good job and is high-bandwidth and safe and something actually that you could buy
and would want to buy. But the way to think
of the Neuralink device is kind of like a Fitbit
or an Apple Watch. That's where we take out
sort of a small section of skull about the size of a quarter, replace that with what, in many ways really is very much like
a Fitbit, Apple Watch or some kind of smart watch thing. But with tiny, tiny wires, very, very tiny wires. Wires so tiny, it’s hard to even see them. And it's very important
to have very tiny wires so that when they’re implanted,
they don’t damage the brain. CA: How far are you from putting
these into humans? EM: Well, we have put in
our FDA application to aspirationally do the first
human implant this year. CA: The first uses will be
for neurological injuries of different kinds. But rolling the clock forward and imagining when people
are actually using these for their own enhancement, let's say, and for the enhancement of the world, how clear are you in your mind as to what it will feel like
to have one of these inside your head? EM: Well, I do want to emphasize
we're at an early stage. And so it really will be
many years before we have anything approximating
a high-bandwidth neural interface that allows for AI-human symbiosis. For many years, we will just be solving
brain injuries and spinal injuries. For probably a decade. This is not something
where suddenly one day we will have this incredible
sort of whole brain interface. It's going to be, like I said, at least a decade of really
just solving brain injuries and spinal injuries. And really, I think you can solve
a very wide range of brain injuries, including severe depression,
morbid obesity, sleep, potentially schizophrenia, like, a lot of things that cause
great stress to people. Restoring memory in older people. CA: If you can pull that off,
that's the app I will sign up for. EM: Absolutely. CA: Please hurry. (Laughs) EM: I mean, the emails that we get
at Neuralink are heartbreaking. I mean, they'll send us
just tragic stories, you know,
in the prime of life and they had an accident on a motorcycle and someone who's 25, you know,
can't even feed themselves. And this is something we could fix. CA: But you have said that AI is one
of the things you're most worried about and that Neuralink may be one of the ways where we can keep abreast of it. EM: Yeah, there's the short-term thing, which I think is helpful on an individual
human level with injuries. And then the long-term thing is an attempt to address the civilizational risk of AI by bringing digital intelligence and biological intelligence
closer together. I mean, if you think of how
the brain works today, there are really two layers to the brain. There's the limbic system and the cortex. You've got the kind of,
animal brain where -- it’s kind of like the fun part, really. CA: It's where most of Twitter
operates, by the way. EM: I think Tim Urban said, we’re like somebody, you know,
stuck a computer on a monkey. You know, so we're like,
if you gave a monkey a computer, that's our cortex. But we still have a lot
of monkey instincts. Which we then try to rationalize
as, no, it's not a monkey instinct. It’s something more important than that. But it's often just really
a monkey instinct. We're just monkeys with a computer
stuck in our brain. But even though the cortex
is sort of the smart, or the intelligent part of the brain, the thinking part of the brain, I've not yet met anyone
who wants to delete their limbic system or their cortex. They're quite happy having both. Everyone wants both parts of their brain. And people really want their
phones and their computers, which are really the tertiary,
the third part of your intelligence. It's just that it's ... Like the bandwidth, the rate of communication
with that tertiary layer is slow. And it's just a very tiny straw
to this tertiary layer. And we want to make that tiny
straw a big highway. And I’m definitely not saying
that this is going to solve everything. Or this is you know,
it’s the only thing -- it’s something that might be helpful. And worst-case scenario, I think we solve
some important brain injury, spinal injury issues,
and that's still a great outcome. CA: Best-case scenario, we may discover new
human possibility, telepathy, you've spoken of, in a way,
a connection with a loved one, you know, full memory and much faster
thought processing maybe. All these things. It's very cool. If AI were to take down Earth,
we need a plan B. Let's shift our attention to space. We spoke last time at TED
about reusability, and you had just demonstrated that
spectacularly for the first time. Since then, you've gone on to build
this monster rocket, Starship, which kind of changes the rules
of the game in spectacular ways. Tell us about Starship. EM: Starship is extremely fundamental. So the holy grail of rocketry
or space transport is full and rapid reusability. This has never been achieved. The closest that anything has come
is our Falcon 9 rocket, where we are able to recover
the first stage, the boost stage, which is probably about 60 percent
of the cost of the vehicle, of the whole launch, maybe 70 percent. And we've now done that
over a hundred times. So with Starship, we will be
recovering the entire thing. Or at least that's the goal. CA: Right. EM: And moreover,
recovering it in such a way that it can be immediately re-flown. Whereas with Falcon 9, we still need
to do some amount of refurbishment to the booster and
to the fairing nose cone. But with Starship, the design goal
is immediate re-flight. So you just refill
propellants and go again. And this is gigantic. Just as it would be
in any other mode of transport. CA: And the main design is to basically take
100 plus people at a time, plus a bunch of things
that they need, to Mars. So, first of all, talk about that piece. What is your latest timeline? One, for the first time,
a Starship goes to Mars, presumably without people,
but just equipment. Two, with people. Three, there’s sort of, OK, 100 people at a time, let's go. EM: Sure. And just to put the cost
thing into perspective, the expected cost of Starship, putting 100 tons into orbit, is significantly less
than what it would have cost or what it did cost to put our tiny
Falcon 1 rocket into orbit. Just as the cost of flying
a 747 around the world is less than the cost of a small airplane. You know, a small airplane
that was thrown away. So it's really pretty mind-boggling
that the giant thing costs less, way less than the small thing. So it doesn't use exotic propellants or things that are difficult
to obtain on Mars. It uses methane as fuel, and it's primarily oxygen,
roughly 77-78 percent oxygen by weight. And Mars has a CO2 atmosphere
and has water ice, so from CO2 plus H2O you can make CH4, methane,
so you can make CH4, methane, and O2, oxygen, on Mars. CA: Presumably, one of the first tasks
CA: Presumably, one of the first tasks on Mars will be to create a fuel plant
for the return trips of many Starships. EM: Yes. And actually, it's mostly
going to be oxygen plants, because it's 78 percent oxygen,
22 percent fuel. But the fuel is a simple fuel
that is easy to create on Mars. And in many other parts
of the solar system. So basically ... And it's all propulsive landing,
no parachutes, nothing thrown away. It has a heat shield that’s capable
of entering on Earth or Mars. We can even potentially go to Venus, but you don't want to go there. (Laughs) Venus is hell, almost literally. But you could ... It's a generalized method of transport
to anywhere in the solar system, because the point at which
you have a propellant depot on Mars, you can then travel to the asteroid belt
in the solar system. CA: But your main focus and SpaceX's main focus is still Mars. That is the mission. That is where most of the effort will go? Or are you actually imagining
a much broader array of uses even in the coming, you know, the first decade or so of uses of this. Where we could go,
for example, to other places in the solar system to explore, perhaps NASA wants to use
the rocket for that reason. EM: Yeah, NASA is planning to use
a Starship to return to the moon, to return people to the moon. And so we're very honored that NASA
has chosen us to do this. But I'm saying it is a generalized -- it’s a general solution to getting anywhere
in the greater solar system. It's not suitable for going
to another star system, but it is a general solution for transport
anywhere in the solar system. CA: Before it can do any of that, it's got to demonstrate it can get into
orbit, you know, around Earth. What’s your latest advice
on the timeline for that? EM: It's looking promising for us
to have an orbital launch attempt in a few months. So we're actually integrating -- will be integrating the engines
into the booster for the first orbital flight
starting in about a week or two. And the launch complex
itself is ready to go. So assuming we get regulatory approval, I think we could have an orbital
launch attempt within a few months. CA: And a radical new technology like this presumably there is real risk
on those early attempts. EM: Oh, 100 percent, yeah. The joke I make all the time
is that excitement is guaranteed. Success is not guaranteed,
but excitement certainly is. CA: But the last I saw on your timeline, you've slightly put back the expected date to put the first human on Mars
till 2029, I want to say? EM: Yeah, I mean, so let's see. I mean, we have built a production
system for Starship, so we're making a lot
of ships and boosters. CA: How many are you planning
to make actually? EM: Well, we're currently expecting
to make a booster and a ship roughly every, well, initially,
roughly every couple of months, and then hopefully by the end
of this year, one every month. So it's giant rockets, and a lot of them. Just talking in terms
of rough orders of magnitude, in order to create
a self-sustaining city on Mars, I think you will need something
on the order of a thousand ships. And we just need a Helen of Sparta,
I guess, on Mars. CA: This is not in most
people's heads, Elon. EM: The planet that launched 1,000 ships. CA: That's nice. But this is not in most people's heads, this picture that you have in your mind. There's basically a two-year window, you can only really fly to Mars
conveniently every two years. You were picturing that during the 2030s, every couple of years, something like 1,000 Starships take off, each containing 100 or more people. That picture is just completely
mind-blowing to me. That sense of this armada
of humans going to -- EM: It'll be like "Battlestar
Galactica," the fleet departs. CA: And you think that it can
basically be funded by people spending maybe a couple hundred grand
on a ticket to Mars? Is that price about where it has been? EM: Well, I think if you say like, what's required in order to get
enough people and enough cargo to Mars to build a self-sustaining city. And it's where you have an intersection of sets of people who want to go, because I think only a small percentage
of humanity will want to go, and can afford to go
or get sponsorship in some manner. That intersection of sets, I think, needs to be a million people
or something like that. And so it’s what can a million people
afford, or get sponsorship for, because I think governments
will also pay for it, and people can take out loans. But I think at the point
at which you say, OK, like, if moving to Mars costs are,
for argument’s sake, $100,000, then I think you know,
almost anyone can work and save up and eventually have $100,000
and be able to go to Mars if they want. We want to make it available
to anyone who wants to go. It's very important to emphasize
that Mars, especially in the beginning, will not be luxurious. It will be dangerous, cramped,
difficult, hard work. It's kind of like that Shackleton ad
for going to the Antarctic, which I think is actually not real,
but it sounds real and it's cool. It's sort of like, the sales pitch
for going to Mars is, "It's dangerous, it's cramped. You might not make it back. It's difficult, it's hard work." That's the sales pitch. CA: Right. But you will make history. EM: But it'll be glorious. CA: So on that kind of launch rate
you're talking about over two decades, you could get your million people to Mars, essentially.
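The arithmetic behind that, using assumed round numbers (the window spacing is the well-known Earth-Mars synodic period; the other figures are the ones floated in the conversation):

```python
# Sanity check of the launch-cadence arithmetic discussed above.
synodic_period_years = 2.14      # Earth-Mars transfer windows recur roughly every ~26 months
ships_per_window = 1_000         # fleet size floated in the conversation
people_per_ship = 100            # passenger count floated in the conversation
timespan_years = 20              # "over two decades"

windows = int(timespan_years / synodic_period_years)
people = windows * ships_per_window * people_per_ship
print(f"{windows} windows x {ships_per_window:,} ships x {people_per_ship} people = {people:,} people")
# -> on the order of a million people, which is where the figure above comes from.
```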
Whose city is it? Is it NASA's city, is it SpaceX's city? EM: It’s the people of Mars’ city.
I feel like why do this thing? I think this is important for maximizing the probable lifespan of humanity
or consciousness. Human civilization could come
to an end for external reasons, like a giant meteor or super volcanoes
or extreme climate change. Or World War III, or you know,
any one of a number of reasons. But the probable life span
of civilizational consciousness as we know it, which we should really view
as this very delicate thing, like a small candle in a vast darkness. That is what appears to be the case. We're in this vast darkness of space, and there's this little
candle of consciousness that’s only really come about
after 4.5 billion years, and it could just go out. CA: I think that's powerful, and I think a lot of people
will be inspired by that vision. And the reason you need the million people is because there has to be
enough people there to do everything that you need to survive. EM: Really, like the critical threshold
is if the ships from Earth stop coming for any reason, does the Mars City die out or not? And so we have to -- You know, people talk about like,
the sort of, the great filters, the things that perhaps, you know, we talk about the Fermi paradox,
and where are the aliens? Well maybe there are these
various great filters that the aliens didn’t pass, and so they eventually
just ceased to exist. And one of the great filters
is becoming a multi-planet species. So we want to pass that filter. And I'll be long-dead before
this is, you know, a real thing, before it happens. But I’d like to at least see us make
great progress in this direction. CA: Given how tortured
the Earth is right now, how much we're beating each other up, shouldn't there be discussions going on with everyone who is dreaming
about Mars to try to say, we've got a once
in a civilization's chance to make some new rules here? Should someone be trying
to lead those discussions to figure out what it means for this
to be the people of Mars' City? EM: Well, I think ultimately this will be up to the people
of Mars to decide how they want to rethink society. Yeah there’s certainly risk there. And hopefully the people of Mars
will be more enlightened and will not fight
amongst each other too much. I mean, I have some recommendations, which people of Mars
may choose to listen to or not. I would advocate for more
of a direct democracy, not a representative democracy, and laws that are short enough
for people to understand. Where it is harder to create laws
than to get rid of them. CA: Coming back a bit nearer term, I'd love you to just talk a bit
about some of the other possibility space that Starship seems to have created. So given -- Suddenly we've got this ability
to move 100 tons-plus into orbit. So we've just launched
the James Webb telescope, which is an incredible thing. It's unbelievable. EM: Exquisite piece of technology. CA: Exquisite piece of technology. But people spent two years trying
to figure out how to fold up this thing. It's a three-ton telescope. EM: We can make it a lot easier
if you’ve got more volume and mass. CA: But let's ask a different question. Which is, how much more powerful
a telescope could someone design based on using Starship, for example? EM: I mean, roughly, I'd say it's probably
an order of magnitude more resolution. If you've got 100 tons
and a thousand cubic meters volume, which is roughly what we have. CA: And what about other exploration
through the solar system? I mean, I'm you know -- EM: Europa is a big question mark. CA: Right, so there's an ocean there. And what you really want to do
is to drop a submarine into that ocean. EM: Maybe there's like,
some squid civilization, cephalopod civilization
under the ice of Europa. That would be pretty interesting. CA: I mean, Elon, if you could take
a submarine to Europa and we see pictures of this thing
being devoured by a squid, that would honestly be
the happiest moment of my life. EM: Pretty wild, yeah. CA: What other possibilities
are out there? Like, it feels like if you're going to
create a thousand of these things, they can only fly to Mars every two years. What are they doing the rest of the time? It feels like there's this
explosion of possibility that I don't think people
are really thinking about. EM: I don't know, we've certainly
got a long way to go. As you alluded to earlier,
we still have to get to orbit. And then after we get to orbit, we have to really prove out and refine
full and rapid reusability. That'll take a moment. But I do think we will solve this. I'm highly confident
we will solve this at this point. CA: Do you ever wake up with the fear that there's going to be this
Hindenburg moment for SpaceX where ... EM: We've had many Hindenburg moments. Well, we've never had Hindenburg moments
with people, which is very important. Big difference. We've blown up quite a few rockets. So there's a whole compilation online
that we put together, and others put together, showing that rockets are hard. I mean, the sheer amount of energy
going through a rocket boggles the mind. So, you know, getting out
of Earth's gravity well is difficult. We have a strong gravity
and a thick atmosphere. And Mars, which is less than 40 percent, it's like, 37 percent of Earth's gravity and has a thin atmosphere. The ship alone can go all the way from the surface of Mars
to the surface of Earth. Whereas getting to Mars requires
a giant booster and orbital refilling. CA: So, Elon, as I think more
about this incredible array of things that you're involved with, I keep seeing these synergies, to use a horrible word, between them. You know, for example, the robots you're building from Tesla
could possibly be pretty handy on Mars, doing some of the dangerous
work and so forth. I mean, maybe there's a scenario
where your city on Mars doesn't need a million people, it needs half a million people
and half a million robots. And that's a possibility. Maybe The Boring Company could play a role helping create some of the subterranean
dwelling spaces that you might need. EM: Yeah. CA: Back on planet Earth, it seems like a partnership
between Boring Company and Tesla could offer an unbelievable deal to a city to say, we will create for you
a 3D network of tunnels populated by robo-taxis that will offer fast, low-cost
transport to anyone. You know, full self-driving may
or may not be done this year. And in some cities,
like, somewhere like Mumbai, I suspect won't be done for a decade. EM: Some places are more
challenging than others. CA: But today, today,
with what you've got, you could put a 3D network
of tunnels under there. EM: Oh, if it’s just in a tunnel,
that’s a solved problem. CA: Exactly, full self-driving
is a solved problem. To me, there’s amazing synergy there. With Starship, you know, Gwynne Shotwell talked
about, by 2028, having city-to-city transport, you know, on planet Earth. EM: This is a real possibility. The fastest way to get
from one place to another, if it's a long distance, is a rocket. It's basically an ICBM. CA: But it has to land -- Because it's an ICBM,
it has to land probably offshore, because it's loud. So why not have a tunnel
that then connects to the city with Tesla? And Neuralink. I mean, if you're going to go to Mars, having a telepathic connection
with loved ones back home, even if there's a time delay... EM: These are not intended
to be connected, by the way. But there certainly could be
some synergies, yeah. CA: Surely there is a growing argument that you should actually put
all these things together into one company and just have a company
devoted to creating a future that’s exciting, and let a thousand flowers bloom. Have you been thinking about that? EM: I mean, it is tricky because Tesla
is a publicly-traded company, and the investor base of Tesla and SpaceX and certainly Boring Company
and Neuralink are quite different. Boring Company and Neuralink
are tiny companies. CA: By comparison. EM: Yeah, Tesla's got 110,000 people. SpaceX I think is around 12,000 people. Boring Company and Neuralink
are both under 200 people. So they're little, tiny companies, but they will probably
get bigger in the future. They will get bigger in the future. It's not that easy to sort
of combine these things. CA: Traditionally, you have said
that for SpaceX especially, you wouldn't want it public, because public investors wouldn't support
the craziness of the idea of going to Mars or whatever. EM: Yeah, making life multi-planetary is outside of the normal time horizon
of Wall Street analysts. (Laughs) To say the least. CA: I think something's changed, though. What's changed is that Tesla is now
so powerful and so big and throws off so much cash that you actually could
connect the dots here. Just tell the public that x-billion
dollars a year, whatever your number is, will be diverted to the Mars mission. I suspect you'd have massive
interest in that company. And it might unlock a lot
more possibility for you, no? EM: I would like to give the public access
to ownership of SpaceX, but I mean the thing that like, the overhead associated
with a public company is high. I mean, as a public company,
you're just constantly sued. It does occupy like, a fair bit of ... You know, time and effort
to deal with these things. CA: But you would still only have one
public company, it would be bigger, and have more things going on. But instead of being
on four boards, you'd be on one. EM: I'm actually not even on the Neuralink
or Boring Company boards. And I don't really attend
the SpaceX board meetings. We only have two a year, and I just stop by and chat for an hour. The board overhead for a public
company is much higher. CA: I think some investors probably worry
about how your time is being split, and they might be excited
by you know, that. Anyway, I just woke up the other day thinking, just, there are so many ways
in which these things connect. And you know,
just the simplicity of that mission, of building a future that is worth
getting excited about, might appeal to an awful lot of people. Elon, you are reported by Forbes
and everyone else as now, you know, the world's richest person. EM: That’s not a sovereign. CA: (Laughs) EM: You know, I think it’s fair to say that if somebody is like, the king
or de facto king of a country, they're wealthier than I am. CA: But it’s just harder to measure -- So $300 billion. I mean, your net worth on any given day is rising or falling
by several billion dollars. How insane is that? EM: It's bonkers, yeah. CA: I mean, how do you handle
that psychologically? There aren't many people in the world
who have to even think about that. EM: I actually don't think
about that too much. But the thing that is
actually more difficult and that does make sleeping difficult is that, you know, every good hour or even minute of thinking about Tesla and SpaceX has such a big effect on the company that I really try to work
as much as possible, you know, to the edge
of sanity, basically. Because you know,
Tesla’s getting to the point where probably will get
to the point later this year, where every high-quality
minute of thinking is a million dollars impact on Tesla. Which is insane. I mean, the basic, you know,
if Tesla is doing, you know, sort of $2 billion a week,
let’s say, in revenue, it’s sort of $300 million a day,
seven days a week. You know, it's ... CA: If you can change that by five percent
in an hour’s brainstorm, that's a pretty valuable hour. EM: I mean, there are many instances
where a half-hour meeting, I was able to improve
the financial outcome of the company by $100 million
in a half-hour meeting. CA: There are many other people out there who can't stand
this world of billionaires. Like, they are hugely
offended by the notion that an individual can have
the same wealth as, say, a billion or more
of the world's poorest people. EM: If they examine sort of -- I think there's some axiomatic flaws
that are leading them to that conclusion. For sure, it would be very
problematic if I was consuming, you know, billions of dollars a year
in personal consumption. But that is not the case. In fact, I don't even own
a home right now. I'm literally staying at friends' places. If I travel to the Bay Area, which is where most
of Tesla engineering is, I basically rotate through
friends' spare bedrooms. I don't have a yacht,
I really don't take vacations. It’s not as though my personal
consumption is high. I mean, the one exception is a plane. But if I don't use the plane,
then I have less hours to work. CA: I mean, I personally think
you have shown that you are mostly driven by really quite a deep
sense of moral purpose. Like, your attempts to solve
the climate problem have been as powerful as anyone else
on the planet that I'm aware of. And I actually can't understand, personally, I can't understand the fact that you get all this criticism
from the Left about, "Oh, my God, he's so rich,
that's disgusting." When climate is their issue. Philanthropy is a topic
that some people go to. Philanthropy is a hard topic. How do you think about that? EM: I think if you care
about the reality of goodness instead of the perception of it,
philanthropy is extremely difficult. SpaceX, Tesla, Neuralink
and The Boring Company are philanthropy. If you say philanthropy
is love of humanity, they are philanthropy. Tesla is accelerating sustainable energy. This is a love -- philanthropy. SpaceX is trying to ensure
the long-term survival of humanity with a multiple-planet species. That is love of humanity. You know, Neuralink is trying to help
solve brain injuries and existential risk with AI. Love of humanity. Boring Company is trying to solve traffic,
which is hell for most people, and that also is love of humanity. CA: How upsetting is it to you to hear this constant drumbeat of, "Billionaires, my God,
Elon Musk, oh, my God?" Like, do you just shrug that off or does it actually hurt? EM: I mean, at this point,
it's water off a duck's back. CA: Elon, I’d like to,
as we wrap up now, just pull the camera back
and just think ... You’re a father now
of seven surviving kids. EM: Well, I mean, I'm trying
to set a good example because the birthrate on Earth is so low that we're facing civilizational collapse unless the birth rate returns
to a sustainable level. CA: Yeah, you've talked about this a lot, that depopulation is a big problem, and people don't understand
how big a problem it is. EM: Population collapse
is one of the biggest threats to the future of human civilization. And that is what is going on right now. CA: What drives you on a day-to-day
basis to do what you do? EM: I guess, like,
I really want to make sure that there is a good future for humanity and that we're on a path to understanding
the nature of the universe, the meaning of life. Why are we here, how did we get here? And in order to understand
the nature of the universe and all these fundamental questions, we must expand the scope
and scale of consciousness. Certainly it must not diminish or go out. Or we certainly won’t understand this. I would say I’ve been motivated
by curiosity more than anything, and just desire to think about the future and not be sad, you know? CA: And are you? Are you not sad? EM: I'm sometimes sad, but mostly I'm feeling I guess relatively optimistic
about the future these days. There are certainly some big
risks that humanity faces. I think the population collapse
is a really big deal, that I wish more people would think about because the birth rate is far below
what's needed to sustain civilization at its current level. And there's obviously ... We need to take action
on climate sustainability, which is being done. And we need to secure
the future of consciousness by being a multi-planet species. We need to address -- Essentially, it's important to take
whatever actions we can think of to address the existential risks
that affect the future of consciousness. CA: There's a whole
generation coming through who seem really sad about the future. What would you say to them? EM: Well, I think if you want the future
to be good, you must make it so. Take action to make it good. And it will be. CA: Elon, thank you for all this time. That is a beautiful place to end. Thanks for all you're doing. EM: You're welcome.