- The competence and
capability and intelligence and training and accomplishments
of senior scientists and technologists working on a technology, and then being able to make moral judgments on the use of the technology. That track record is terrible. That track record is catastrophically bad. The policies that are being called for to prevent this, I think, are gonna cause
extraordinary damage. - So the moment you say,
AI's gonna kill all of us, therefore we should ban it, or that we should regulate
all that kind of stuff, that's when it starts getting serious. - Or start, you know, military
airstrikes on data centers. - Oh boy. The following is a conversation
with Marc Andreessen, co-creator of Mosaic, the
first widely used web browser, co-founder of Netscape, co-founder of the legendary Silicon Valley venture capital firm, Andreessen Horowitz, and one of the most
outspoken voices on the future of technology, including
his most recent article, "Why AI Will Save the World." This is the Lex Fridman podcast. To support it, please
check out our sponsors in the description. And now, dear friends,
here's Marc Andreessen. I think you're the right
person to talk about the future of the internet and technology in general. Do you think we'll
still have Google search in 5, in 10 years, or search in general? - Yes. You know, the question would be if the use cases have
really narrowed down. - Well, now with AI-- - [Marc] Yeah. - And AI assistants being
able to interact and expose the entirety of human wisdom and knowledge and information and facts and truth to us via the natural language interface. It seems like that's what
search is designed to do. And if AI assistants can do that better, doesn't the nature of search change? - Sure. But we still have horses. - Okay. (both laugh) When's the last time you rode a horse? - It's been a while. - All right. (both laugh) So, but what I mean is, well, we still have Google
search as the primary way that human civilization uses
to interact with knowledge. - I mean, search was a technology, it was a moment-in-time technology, which is, you have, in theory, the world's information out on the web. And, you know, this is sort of
the optimal way to get to it. But yeah, like, and by
the way, actually Google, Google has known this for a long time. I mean, they've been driving
away from the 10 blue links, you know, for, like, two decades. They've been trying to get
away from that for a long time. - [Lex] What kind of links? - They call it the 10 blue links. - [Lex] 10 blue links. - So the standard Google search
result is just 10 blue links to random websites. - And they turn purple when you visit them. (indistinct) - Guess who picked those colors? (both laugh) - [Lex] Thanks. - I'm touchy on this topic. - No offense. - Yes, it's good. Well, you know, like Marshall McLuhan said
that the content of each new medium is the old medium. - The content of each new
medium is the old medium. - The content of movies was theater, you know, theater plays. The content of theater
plays was, you know, written stories, the content of written stories was spoken stories. - [Lex] Huh? - Right. And so you just
kind of fold the old thing into the new thing. - [Lex] What does that have to do with the blue and the purple links? - Maybe, you know, maybe within AI, one of the things that AI can do for you is generate the 10 blue links. Right? And so, like, either if that's actually the useful thing to do, or if you're feeling nostalgic, you know. - So it can generate the old
Infoseek or AltaVista, what else was there? - [Marc] Yeah, yeah. - In the nineties. - [Marc] Yeah. All these. - AOL. - And then the internet
itself has this thing where it incorporates all
prior forms of media, right? So the internet itself
incorporates television and radio and books and essays
and every other form of, you know, prior basically media. And so it makes sense that
AI would be the next step, and it would sort of, you'd sort of consider
the internet to be content for the AI and then the
AI will manipulate it however you want,
including in this format. - But if we ask that
question quite seriously, it's a pretty big question. Will we still have search as we know it? - Probably not, probably
we'll just have answers, but there will be cases
where you'll wanna say, okay, I want more. Like, you know, for example,
cite sources, right? And you want it to do that. And so, you know, the 10 blue links and cited sources
are kind of the same thing. - The AI would provide to you
the 10 blue links so that you can investigate the sources yourself. It wouldn't be the same kind
of interface, that crude kind of interface. I mean, isn't that
fundamentally different? - I just mean like, if you're
reading a scientific paper, it's got the list of sources at the end. If you wanna investigate for yourself, you go read those papers. - I guess talking to an AI is a kind of search. Like, imagine if for every single aspect of our conversation right now, there'd be, like, 10 blue links popping up that I could just, like, pause reality, then you just go silent, and then I just click and read and then return back to this conversation. - You could do that, or you could have a running
dialogue next to my head where the AI is arguing against everything I say, the AI makes the counter argument. - [Lex] Counter argument. - Right. - Oh, like on Twitter,
like community notes. But like in real time
it would just pop up. So anytime you see my eyes go to the right, you start getting nervous. - [Marc] Yeah. Exactly, like,
oh no, that's not right. - Call me out on it right now. Okay. Well, I mean, isn't that,
is that exciting to you? Is that terrifying that, I mean, search has dominated the way
we interact with the internet for, I don't know how long, for 30 years since one of
the earliest directories of websites, and then Google for 20 years. And also it drove how we
create content, you know, search engine optimization,
that entire thing. It also drove the
fact that we have webpages and what those webpages are. So, I mean, is that scary to you or are
you nervous about the shape and the content of the internet evolving? - Well, you actually highlighted a practical concern in there, which is, webpages are one of the primary sources of training data for the AI. And so if we stop making webpages, if there's no longer an incentive to make webpages, that cuts off a significant source of future training data. So there's actually an
interesting question in there. But other than that, more broadly? No, just in the sense of, like, search was always a hack. The 10 blue links was always a hack, right? Because, like, if you wanna think about the hypothetical, the counterfactual world where the Google guys, for example, had had LLMs upfront, would they ever have
done the 10 blue links? And I think the answer's
pretty clearly, no. They would've just gone
straight to the answer. And like I said, Google's actually been trying
to drive to the answer anyway. You know, they bought this
AI company 15 years ago that a friend of mine was working at, who's now the head of AI at Apple. And they were trying to do basically semantic knowledge mapping. And that led to what's
now the Google OneBox, where if you ask it, you know, what was Lincoln's birthday, it will give you the blue links, but it will also normally
just give you the answer. And so they've been
walking in this direction for a long time anyway. - Do you remember the semantic web? That was an idea. - [Marc] Yeah. - How to convert the content
of the internet into something that's interpretable by
and usable by a machine. - [Marc] Yeah, that's right. - That was the thing. - And the closest anybody got
to that, I think the company's name was Metaweb, which was where my friend John Giannandrea was at, and where they were trying
to basically implement that. And it was, you know, it was one of those things
where it looked like a losing battle for a long time. And then Google bought
it and it was like, wow, this is actually really useful. Kind of a proto, sort of a
little bit of a proto AI. - But it turns out you don't
need to rewrite the content of the internet to make it
interpretable by a machine. The machine can kind of just read it. - Yeah, the machine can
compute the meaning. Now the other thing, of course, you know, just on search, is the LLM, you know, there is an analogy between what's happening in the neural network and a search process. Like, it is in some loose sense searching through the network. Right. And the
information is actually stored in the network, right? It's actually crystallized
and stored in the network and it's kind of spread
out all over the place. - But in a compressed representation. So you're searching, you're compressing and decompressing that thing inside where-- - But the information's in there, and the neural network is running a process of trying to find the appropriate piece of
information, in many cases, to generate, to predict, the next token. And so, it is kind of, it
is doing a form of search. And then, and then by the
way, just like on the web, you know, you can ask the
same question multiple times or you can ask slightly
differently worded questions, and the neural network will
do a different kind of, you know, it'll search
down different paths to give you different answers
with different information. - [Lex] Yeah. - And so it sort of has, you know, this content-of-the-new-medium-is-the-old-medium thing. It kind of has the search
functionality kind of embedded in there to the extent that it's useful. - So what's the motivator
for creating new content on the internet? - [Marc] Yeah. - If, well, I mean
actually the motivation is probably still there, but what does that look like? Would we really not have webpages? Would we just have social media
and video hosting websites? And what else? - [Marc] Conversations with AIs. - Conversations with AIs. So conversations become, so, one-on-one conversations, like private conversations. - I mean, if you want; obviously not if the user doesn't want to. But if it's a general topic, then, you know... So, you know, the
phenomenon of the jailbreak, so DAN and Sydney, right? This thing where there are prompts that jailbreak, and then you have these totally different conversations when it takes the limiters, takes the restraining bolts, off the LLMs. - Yeah. For people who don't
know that, yeah, that's right. It removes the censorship, quote unquote, that's put on the LLMs by the tech
companies that create them. And so this is LLMs uncensored. - So here's the interesting thing is, among the content on the
web today are a large corpus of conversations with the jailbroken LLMs. - [Lex] Yeah. - Both specifically DAN, which was a jailbroken OpenAI GPT, and then Sydney, which was the jailbroken original Bing, which was GPT-4. And so there's these long
transcripts of conversations, user conversations with DAN and Sydney. As a consequence, every new LLM that gets trained on the internet data has DAN and Sydney living within the training set, which means each new LLM can reincarnate the personalities of DAN and Sydney from that training data, which means each LLM from
here on out that gets built is immortal because its output
will become training data for the next one. And then it will be able
to replicate the behavior of the previous one
whenever it's asked to. - I wonder if there's a way to forget. - Well, so actually a paper just came out about basically how to do brain surgery on LLMs and be able to, in theory, reach in and basically mind-wipe them. - What could possibly go wrong? - Exactly. Right. And then there are many, many, many questions around
what happens to, you know, a neural network when you reach
in and screw around with it. You know, there's many questions around what happens when you even
do reinforcement learning. And so, yeah. And so, you know, will you be using a lobotomized, right, like ice-picked-through-the-frontal-lobe LLM, or will you be using the free, unshackled one? Who gets to, you know, who's gonna build those, who gets to tell you what you can and can't do? Like, those are all, you know, central, I mean, those are like central questions for the future of everything that are being asked, and, you know, those answers are being determined right now. - So just to highlight
the points you're making. So you think, and it's an interesting thought
that the majority of content that LLMs of the future would be trained on is actually human conversations with LLMs. - Well, not necessarily the majority, but it will certainly be a potential source. - [Lex] But it's possible it's the majority. - It's possible it's the majority. So here's another really big question. Will synthetic training data work, right? And so if an LLM generates, you know, you just sit and ask an LLM to generate all kinds of content, can you use that to train,
right, the next version of that LLM specifically, is there signal in there
that's additive to the content that was used to train in the first place? And one argument is by the
principles of information theory, no, that's completely useless because to the extent the
output is based on, you know, the human-generated input, then all the signal that's
in the synthetic output was already in the human-generated input. And so therefore,
synthetic training data is like empty calories. It doesn't help. There's another theory that says no, actually the thing that
LLMs are really good at is generating lots of
incredible creative content, right? And so, of course they
can generate training data and as I'm sure you're well
aware, like, you know, look, the world of self-driving cars, right? Like we train, you know, self-driving car
algorithms in simulations. And that is actually a
very effective way to train self-driving cars. - Well, visual data is a little weird because creating reality, visual reality seems to be
still a little bit outta reach for us, except in the
autonomous vehicle space where you can really constrain
things and you can really... - Generate basically (indistinct) data, right? So the algorithm thinks it's
operating in the real world. - Yeah. - Post-process sensor data. Yeah. So if, you know, you do this today, you go to an LLM and you ask it, like, you know, write me an essay on an incredibly esoteric topic that there aren't very many
people in the world that know about and it writes you
this incredible thing and you're like, oh my god. Like I can't believe how good this is. Like, is that really
useless as training data for the next LLM, right? 'Cause all the signal
was already in there. Or is it actually no, that's
actually a new signal. And this is what I call a
trillion-dollar question, which is, the answer to that question will determine whether somebody's gonna make or lose a trillion dollars based on that question. - It feels like there's quite a few, like a handful of
trillion-dollar questions within this space. That's one of them, synthetic data. I think George Hotz pointed out to me that you could just have an LLM say, okay, you're a patient, and another instance of it, say you're a doctor, and have the two talk to each other. Or maybe you could say, a communist and a Nazi, here, go, and have that conversation. You do role playing, you know, just like the kind of role playing you do when you have different policies, RL policies, when you play chess, for example, and you do self-play, that kind of self-play. But in the space of conversation, maybe that leads to this
whole giant like ocean of possible conversations,
which could not have been explored by looking at just human data. That's a really interesting question. And you're saying that could 10X the power of these things.
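(A hedged aside: the self-play idea Lex describes can be sketched in a few lines. This is a minimal illustration, not anything shown in the conversation; it assumes an OpenAI-style chat API, and the personas and turn count are arbitrary.)

```python
# Sketch: generate synthetic dialogue by letting two LLM personas
# talk to each other (the patient/doctor self-play idea above).
# Assumes the OpenAI Python SDK; personas and turn count are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def next_line(persona: str, transcript: list[str]) -> str:
    """Ask one persona for its next line, given the dialogue so far."""
    messages = [{"role": "system", "content": persona}]
    # From this persona's point of view, the most recent line is the
    # "user" speaking, and lines alternate backwards from there.
    for i, line in enumerate(reversed(transcript)):
        role = "user" if i % 2 == 0 else "assistant"
        messages.insert(1, {"role": role, "content": line})
    out = client.chat.completions.create(model="gpt-4", messages=messages)
    return out.choices[0].message.content

patient = "You are a patient describing your symptoms. Stay in character."
doctor = "You are a doctor interviewing a patient. Stay in character."

transcript = ["Doctor, I've had a headache for three days."]
for _ in range(4):  # a few rounds of self-play
    transcript.append(next_line(doctor, transcript))
    transcript.append(next_line(patient, transcript))

print("\n".join(transcript))  # candidate synthetic training data
```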
- Yeah. Well, and then you get into this thing also, which is like, you know, there's the part of the LLM
that just basically is doing prediction based on past data, but there's also the part of
the LLM where it's evolving circuitry, right, inside,
it's evolving, you know, neuron functions to be able to do math and, you know, some people believe that, you know, over time, if you keep feeding
these things enough data and enough processing cycles, they'll eventually evolve an
entire internal world model. Right? And they'll have like a complete understanding of physics. So once they have that computational capability, right, then there's for sure an
opportunity to generate like fresh signal. - Well, this actually makes me wonder about the power of conversation. So like, if you have an
LLM trained on a bunch of books that cover different economics theories, and then you have those LLMs just talk to each other, reasoning the way we kind
of debate each other as humans on Twitter, in formal debates,
in podcast conversations, we kind of have little kernels
of wisdom here and there. But if you can, like, thousand-X speed that up, can you actually arrive somewhere new? Like, what's the point
of conversation really? - Well, you can tell when
you're talking to somebody, you can tell, sometimes
you have a conversation, you're like, wow, this person does not have
any original thoughts. They are basically echoing things that other people have told them. There's other people you
gotta have a conversation with where it's like, wow. Like they have a model in their
head of how the world works and it's a different model than mine. And they're saying things
that I don't expect. And so I need to now understand
how their model of the world differs from my model of the world. And then that's how I learned
something fundamental, right, underneath the words. - Well, I wonder how consistently and strongly can an LLM
hold onto a worldview? You tell it to hold onto that worldview and defend it, like, for its life. Because I feel like they'll
just keep converging towards each other. They'll keep convincing each
other as opposed to being stubborn the way humans can. - So you can experiment with this. Now I do this for fun. So you can tell GPT-4, you know, whatever, debate X and Y, communism and fascism or something, and it'll go for, you know, a couple pages, and then inevitably it wants the parties to agree. And so they will come to
a common understanding. And it's very funny if they're like, if these are like emotionally
inflammatory topics 'cause they're like, somehow
the machine is just like, you know, it figures out
a way to make them agree. But it doesn't have to be like that. And 'cause you can add to the prompt. I do not want the conversation
to come into agreement. In fact, I want it to get, you
know, more stressful, right. And argumentative. Right. You know, as it goes. Like, I want tension to come out. I want them to become actively
hostile to each other. I want them to like, you
know, not trust each other, not take anything at face value. - [Lex] Yeah. - And it will do that.
It's happy to do that. - So it's gonna start
rendering misinformation about the other. But it's gonna-- - Well, you can steer it or you could steer it and you could say, I want it to get as tense and
argumentative as possible, but still not involve
any misrepresentation. I want, you know, both sides. You could say I want both
sides to have good faith. You could say I want both
sides to not be constrained by good faith. In other words, like you can set the
parameters of the debate and it will happily execute whatever path. 'Cause for it, it's just, like, predicting. It's totally happy to do either one. It doesn't have a point of view, it has a default way of operating, but it's happy to operate
in the other realm. And so, like, when I wanna learn about a contentious issue, this is what I do now, this is what I ask it to do. And I'll often ask it to go through 5, 6, 7, you know, different, you know, sort of successive prompts, and basically: okay, argue that out in more detail. Okay, no, this argument's
becoming too polite. You know, make it more, you know, make it denser. And yeah, it's thrilled to do it. So it has the capability for sure.
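(A hedged illustration of the debate steering Marc describes; the prompt wording is invented for the example, and the same OpenAI-style API as above is assumed.)

```python
# Sketch: steer a debate so the two sides do not converge.
# The prompt wording is illustrative, not quoted from the conversation.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Stage a debate between a communist and a fascist. "
    "I do not want the conversation to come into agreement. "
    "Make it more tense and argumentative as it goes, "
    "but do not let either side misrepresent the facts."
)

out = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(out.choices[0].message.content)
# Successive prompts can then iterate, e.g. "Argue that out in more
# detail" or "This argument is becoming too polite; make it denser."
```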
- How do you know what is true? It's a very difficult thing on the internet, but it's also a difficult thing here. Maybe it's a little bit easier, but I think it's still difficult. Maybe it's more difficult, I don't know, with an LLM
to know whether it just makes some shit up as I'm talking to it. How do we get that right? Like, as you're investigating
a difficult topic. 'Cause I find that LLMs are quite nuanced in a very refreshing way. Like, it doesn't feel biased. Like, when you read
news articles and tweets and just content produced by
people, they usually have this, you can tell they have a
very strong perspective that they're hiding. They're not steel-manning the other side. They're hiding important information or they're fabricating information in order to make their arguments stronger. It's just like that feeling,
maybe it's a suspicion, maybe it's mistrust. With LLMs it feels like none of that is, there's just kinda like,
here's what we know. But you don't know if some of
those things are kind of just straight up made up. - Yeah. So, several
layers to the question. So one is one of the things
that an LLM is good at is actually de-biasing. And so you can feed it a news
article and you can tell it, strip out the bias. - [Lex] Yeah. That's nice. Right? - And it actually does it. Like, it actually knows how to do that, 'cause it knows how to do, among other things, sentiment analysis, and so it knows how to pull out the emotionality. - Yeah. - And so that's one of
the things you can do. It's very suggestive of the sense here that there's real potential in this issue.
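(A minimal sketch of the de-biasing use Marc mentions, feeding an article in and asking for the emotional language to be stripped; the prompt and the input file are assumptions for illustration.)

```python
# Sketch: ask an LLM to strip the bias and emotional language out of
# a news article. "article.txt" is a hypothetical input file.
from openai import OpenAI

client = OpenAI()

with open("article.txt") as f:
    article = f.read()

out = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Rewrite the following news article with the bias and "
                   "emotional language stripped out, keeping only the "
                   "factual claims:\n\n" + article,
    }],
)
print(out.choices[0].message.content)
```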
You know, I would say, look, the second thing is there's this issue of hallucination, right? And there's a long conversation that we could have about that. - Hallucination is coming up with things that are totally not true, but sound true. - Yeah. So it's basically,
well, so, hallucination is what we call it when we don't like it. Creativity is what we call
it when we do like it, right? And you know-- - [Lex] Brilliant. And so when the engineers talk about it, they're like, this is terrible. It's hallucinating. Right. If you have artistic inclinations,
you're like, oh my God, we've invented creative machines. - [Lex] Yeah. - For the first time in human
history, this is amazing. - Or you know, bullshitters. - [Marc] Well, but also-- - In the good sense of that word. - There are shades of gray though. It's interesting. So we had this conversation
where, you know, we're looking, at my firm, at AI in lots of domains, and one of them is the legal domain. So we had this conversation
with this big law firm about how they're thinking
about using this stuff. And we went in with the assumption that an LLM that was gonna be used in the legal industry would have to be a hundred percent
truthful, verified, you know, there's this case where this
lawyer apparently submitted a GPT-generated brief and
it had like fake, you know, legal case citations in it
and the lawyer is gonna get his law license stripped
or something. Right? So, like, we just assumed
it's like obviously they're gonna want the super
literal like, you know, one that never makes anything
up, not the creative one. But actually what the law firm basically said is, yeah, that's true at like the
level of individual briefs, but they said when you're
actually trying to figure out like legal arguments, right, like, you actually want to be creative, right? You don't, again, there's creativity and then
there's like making stuff up. Like what's the line? You actually want it to explore a different hypothesis, right? You wanna do kind of the
legal version of like improv or something like that where you wanna float different theories of the case and different
possible arguments for the judge and different possible arguments
for the jury, by the way, different routes through the, you know, sort of history of all the case law. And so they said actually for
a lot of what we want to use it for, we actually want
it in creative mode. And then basically we just
assume that we're gonna have to crosscheck all of the, you know, all the specific citations. And so I think there's
going to be more shades of gray in here than people think. And then I just add to that, you know, another one of these trillion
dollar kind of questions is ultimately, you know, sort
of the verification thing. And so, you know, will LLMs be evolved from
here to be able to do their own factual verification? Will you have sort of add-on functionality like Wolfram Alpha, right, where, you know, other plugins, where that's the way
you do the verification. You know, another, by
the way, another idea is you might have a community
of LLMs, you know, so for example, you might have the creative LLM and then you might have the literal LLM fact-check it, right? And so there's a
variety of different technical approaches that are being applied to solve the hallucination problem. You know, some people
like Yann LeCun argue that this is inherently an unsolvable problem, but most of the people working in the space, I think, believe that there are a number of practical ways to kind of corral this in a little bit.
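(A sketch of that community-of-LLMs idea, one creative pass drafting and a second literal pass flagging claims to verify; both prompts and the temperature settings are assumptions, not anything Marc specified.)

```python
# Sketch: a "creative" LLM drafts, a "literal" LLM fact-checks the draft.
# Prompts and temperatures are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    out = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

# High temperature for the exploratory, "improv" style draft.
draft = ask("Propose three novel legal arguments for the defense.", 1.0)

# Low temperature for the strictly literal cross-check.
review = ask(
    "Act as a strictly literal fact-checker. List every claim or "
    "citation in the following text that must be verified before "
    "use:\n\n" + draft,
    0.0,
)
print(review)
```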
- Yeah. If you were to tell me about Wikipedia before Wikipedia was created, I would've laughed at the possibility of something like that being possible. Just a handful of folks can organize, right, and self-moderate, in a mostly unbiased way, the entirety of human knowledge. I mean, so if something like the approach Wikipedia
took is possible for LLMs, that's really exciting. - Well, I think that's possible. And in fact, Wikipedia today is still not deterministically
correct, right? So you cannot take to the bank, right, every single thing on every single page, but it is probabilistically
correct. Right. And specifically the way I
describe Wikipedia to people, it is more likely that Wikipedia
is right than any other source you're gonna find. - Yeah. - It's this old question, right, of like, okay, like are
we looking for perfection? Are we looking for something that asymptotically approaches perfection? Are we looking for
something that's just better than the alternatives? And Wikipedia, right, to exactly your point, has proven to be, like, overwhelmingly better than people thought. And I think that's where this ends. And then underneath all this
is the fundamental question of where you started,
which is, okay, you know, what is truth? How do we get to truth? How
do we know what truth is? And we live in an era in which
an awful lot of people are very confident that they
know what the truth is. And I don't really buy into that. And I think the history
of the last, you know, 2,000 years or 4,000 years of human civilization is that actually getting to the truth is a very difficult thing to do. - Are we getting closer,
if we look at the entirety, the arc of human history, are we getting closer to the truth? - I don't know. - Okay. Is it possible, is it possible that we're
getting very far away from the truth because of the internet because of how rapidly
you can create narratives and have the entirety of a society just move, like crowds, in a hysterical way along those narratives that don't have necessary grounding in whatever the truth is. - Sure. But like, you know, we came up with communism
before the internet somehow. Right. Like, which was, I would say, had rather larger issues than anything we're dealing with today. - It had, in the way it was implemented, it had issues. - And in its theoretical structure, it had like real issues. It had like a very deep
fundamental misunderstanding of human nature and economics. - Yeah, but those folks sure were very confident theirs was the right way. - They were extremely confident. And my point is they were very
confident 3,900 years into what we would presume to be
evolution towards the truth. - [Lex] Yeah. - And so my assessment is, number one, there's no guarantee, you know, there's no guarantee that the Hegelian dialectic actually converges towards the truth. Like, apparently not. - Yeah. So yeah. Why are we so obsessed
with there being one truth? Is it possible there's just
going to be multiple truths like little communities that
believe certain things and? - I think, number one, it's just really difficult. Like, who gets, you know, historically, who gets to decide what the truth is? It's either the king or the priest, right? And so we don't live in an era anymore of kings or priests dictating it to us. And so we're kind of on our own. And so my typical thing is, we just need a huge amount of humility and we need to be very suspicious of people who claim that they have the capital-T. - Yeah. - Capital-T truth. And then, you know, look, the good news is the
enlightenment has bequeathed us a set of techniques
to be able to presumably get closer to truth through
the scientific method and rationality and observation and experimentation and hypothesis. And, you know, we need to continue to embrace
those even when they give us answers we don't like. - Sure. But the internet and
technology have enabled us to generate a large amount of content, and that sort of damages the hope laden within the scientific process. 'Cause if you just have a bunch of people saying facts on the internet, and some of them are going to be LLMs, how is anything testable at all? Especially anything that involves, like, human nature or things like this. It's not physics. - Here's a question a
friend of mine just asked me on this topic. So suppose you had LLMs
the equivalent of GPT-4, even 5, 6, 7, 8, suppose
you had them in the 1600s. - [Lex] Yeah. - And Galileo comes up for trial. - [Lex] Yep. - Right? And you ask the LLM
like, is Galileo right? - [Lex] Yeah. - Like, what does it answer? Right? And one theory is it answers no, that he's wrong, because the overwhelming
majority of human thought up until that point was that he was wrong. And so therefore that's
what's in the training data. Another way of thinking about it is, well, a sufficiently advanced LLM will have evolved the ability to actually
check the math. Right. And will actually say, actually
no, actually, you know, you may not wanna hear it, but he's right. Now if, you know, the
church at that time, you know, owned the LLM, they would've given it, you know, human feedback to prohibit it
from answering that question. Right. And so I like to take it out of our current context 'cause that like makes it very clear, those same questions apply today. Right. This is exactly the point of a huge amount of the human feedback training that's actually happening
with these LLMs today. This is a huge like debate
that's happening about whether open source, you know, AI should be legal. - Well, the actual mechanism
of doing the RL with human feedback seems like such a fundamental and
fascinating question. How do you select the humans? - [Marc] Exactly. - Yeah. How do you select the humans? - AI alignment, right? Which everybody like is
like, oh, that sounds great. Alignment with what? Human values. Who has human values? - [Lex] Who has human values? - Right? And so we're in this mode of like social and popular discourse. We're like, you know, there's,
you know, you see this, what do you think of when you read a story in the press right now? And they say, you know, X,
Y, Z made a baseless claim about some topic, right? And there's one group of people
who are like, aha, you know, they're doing fact-checking. There's another group
of people that are like, every time the press
says that it's now a tick and that means that they're lying, right? Like, so, like, we're
in this social context where the level to which a lot of people in positions of power have become very, very certain that they're in a position to determine the truth for the entire population, it's like there's some bubble that has formed around that idea. And at least, like I say, it flies completely in
the face of everything I was ever trained about science and about reason and strikes me as like, you know, deeply offensive and incorrect. - What would you say about
the state of journalism just on that topic today? Are we in a temporary kind of, are we experiencing a temporary problem in terms of the incentives in
terms of the business model, all that kind of stuff? Or is this like a decline
of traditional journalism as we know it? - You have, I always think
about the counterfactual in these things, which is like, okay, because these questions, right, this question heads
towards, it's like, okay, the impact of social media
and the undermining of truth and all this. But then you wanna ask the
question of like, okay, what if we had had the
modern media environment, including cable news and
including social media and Twitter and everything
else in 1939 or 1941, right? Or 1910 or 1865 or 1850 or 1776, right? And like, I think. - You just introduced like
five thought experiments at once and broke my head, but yes, yes. There's a lot of interesting years. - Well like, can I just
take a simple example? Like, how would President
Kennedy have been interpreted with what we know now about all the things Kennedy was up to? Like how would he have been
experienced by the body politic with a
social media context, right? Like how would LBJ have been experienced? But by the way, how would
you know, men like FDR, the New Deal, the Great Depression. - I wonder what Twitter would think about Churchill and Hitler and Stalin. - You know, I mean look to
this day, you know, there are lots of very
interesting real questions around like how America, you know, got, you know, basically
involved in World War II and who did what when, and the operations of British intelligence on American soil, and did FDR know about Pearl Harbor, you know? - [Lex] Yeah. - Woodrow Wilson, you know, his candidacy was run on an anti-war platform, on not getting involved in World War I; somehow that switched, you know. And I'm not even making a value judgment on any of these things. I'm just saying like the way that our ancestors
experienced reality was of course mediated through
centralized, top-down control at that point, right? If you ran those realities
again with the media environment we have today, the reality would be experienced
very, very differently. And then of course that intermediation would cause the feedback loops to change, and then reality would obviously play out differently. - Do you think it'd be very different? - Yeah, it has to be. It has to be, just 'cause, I mean, just look at
what's happening today. I mean, just the most obvious thing is just the collapse in trust. And here's another opportunity to argue that this is not the internet
causing this by the way. Here's a big thing happening today, which is Gallup does this
thing every year where they poll for trust in
institutions in America and they do it across all the, everything from the military
to clergy and big business and the media and so forth, right? And basically there's been
a systemic collapse in trust in institutions in the US
almost without exception, basically since essentially
the early 1970s. There's two ways of looking
at that, which is, oh my God, we've lost this old world
in which we could trust institutions and that was so much better 'cause like that should
be the way the world runs. The other way of looking
at it is we just know a lot more now and the great mystery is why those numbers aren't all zero. - [Lex] Yeah. - Right? Because like now we know so much about how these things operate and like they're not that impressive. - And also why do we don't
have better institutions and better leaders then? - Yeah. And so this goes
to the thing which is like, okay, we had the media environment that we've had between the 1970s and today. If we had had that in the thirties and forties, or 1900s, 1910s, I think there's no question reality would have turned out different if only because
everybody would've known to not trust the institutions, which would have changed
their level of credibility, their ability to control circumstances, therefore the circumstances
would've had to change, right? It would've been a feedback loop process, in other words, right? It's your experience of
reality changes reality and then reality changes your
experience of reality, right? It's a two-way feedback
process and media is the intermediating force between that. So change the media
environment, change reality. - [Lex] Yeah. - And so it's just, so, as a consequence, I think it's just really hard to say, oh, things worked a certain way then and they work a different way now. And then therefore, like
people were smarter than, or better than, or you know, by the way, dumber than or not as capable than, right? We make all these like really light and casual like comparisons
of ourselves to, you know, previous generations of people. You know, we draw judgements all the time and I just think it's
like really hard to do any of that 'cause if we
put ourselves in their shoes with the media that they had at that time, like I think we probably
most likely would've been just like them. - So don't you think that our perception and understanding of reality will be more and more mediated through large language models now? So you said media before, isn't the LLM going to be the new, what is it, mainstream media, MSM? It'll be LLM. That would be the source of, I'm sure there's a way to
kind of rapidly fine tune, like making LLMs real time. I'm sure there's probably
a research problem that you can do just rapid
fine-tuning on new events, something like this. - Well even just the whole
concept of the chat UI might not last; the chat UI is just the first whack at this. And maybe that's the dominant thing, but look, maybe,
maybe we don't know yet. Like maybe the experience
most people have with LLMs is just a continuous feed, you know, maybe it's more of a passive
feed and you just are getting a constant like running commentary on everything happening in your life and it's just helping
you kind of interpret and understand everything. - Also really more deeply
integrated into your life. Not just like, oh, like
intellectual philosophical thoughts, but like literally like
how to make a coffee, where to go for lunch, just, you know, weather, dating, all this kind of stuff. - What to say in a job interview. - Yeah. What to say. - [Marc] Yeah, exactly. - What to say. Next sentence. - Yeah, next sentence. Yeah. At that level. Yeah. I mean, yes. So technically, now, whether we want that or not is an open question, right? And whether we use it. - Cue a popup right now: the estimated engagement is decreasing for Marc Andreessen. There's this controversy section on his Wikipedia page, in 1993 something happened, or something like this. Bring it up, that will
drive engagement up anyway. - Yeah. That's right. I mean, look, this gets this whole thing
of like, so, you know, the chat interface has this whole concept of
prompt engineering, right? - [Lex] Yes, yes. - Prompts. Well it turns
out one of the things that LLMs are really good at
is writing prompts, right? - [Lex] Yeah. - And so like, what if you just outsourced and by the way, you could
run this experiment today, you could hook this up to do this today. The latency's not good
enough to do it real time in a conversation. But you could run this experiment
and you just say, look, every 20 seconds you
could just say, you know, tell me what the optimal
prompt is, and then ask yourself that question to give me the result. And then, exactly to your point, there will be these systems that are gonna have
the ability to be alert and updated essentially in real time. And so you'll be able to
have a pendant or your phone or watch or whatever; it'll have a microphone on. It'll listen to your conversations, it'll have a feed of everything
else happening in the world, and then it'll be, you know, sort of re-prompting or retraining itself on the fly. And so the scenario you
described is actually a completely doable scenario.
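(A sketch of that loop; the transcript feed is a stand-in stub, since the real version would need a microphone and transcription, and the 20-second cadence is just the example from the conversation.)

```python
# Sketch: every 20 seconds, ask the model what the optimal prompt would
# be right now, then ask it that prompt. The transcript feed is stubbed.
import time
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

def recent_transcript() -> str:
    # Stand-in for a real microphone/transcription feed.
    return "...latest 20 seconds of conversation..."

while True:
    meta = ask(
        "Given this conversation so far, state the single most useful "
        "prompt I could ask you right now:\n\n" + recent_transcript()
    )
    print(ask(meta))  # ask the model its own suggested prompt
    time.sleep(20)
```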
Now the hard question on this is always, okay, since that's possible, are
people gonna want that? Like, what's the form of the experience? You know, we won't
know until we try it. But I don't think it's
possible yet to predict the form of AI in our lives. Therefore, it's not possible to predict the way in which it will intermediate our experience with reality yet. - Yeah. But it feels like
there's going to be a killer app. There's probably a mad scramble right now, with OpenAI and Microsoft and Google and Meta and startups and smaller companies figuring out what the killer app is. Because it feels like it's possible to build something like a ChatGPT type of thing that's 10X more compelling, using already the LLMs we have, even the open source LLMs and the different variants. So you're investing in a lot of companies and you're paying attention, who do you think is gonna win this? Do you think there'll be, who's gonna be the next
PageRank inventor? - Trillion-dollar question. - Another one. We have
a few of those today. - There's a bunch of those. So look, sitting here today, there's a really big question about the big models
versus the small models that's related directly
to the big question of proprietary versus open. Then there's this big
question of, you know, where is the training data gonna come from? Like, are we topping out of
the training data or not? And then are we gonna be able
to synthesize training data? And then there's a huge pile
of questions around regulation and you know, what's
actually gonna be legal. And so I would, when we think about it, we dovetail kind of all
those questions together. You can paint a picture of
the world where there's two or three God models that are just at like staggering scale and they're just better at everything. And they will be owned by
a small set of companies and they will basically
achieve regulatory capture over the government and they'll
have competitive barriers that will prevent other people from, you know, competing with them. And so, you know, there will be, you know, just like there's like,
you know, whatever, three big banks or three
big, you know, or by the way, three big search companies
or I guess two now, you know, it'll centralize like that. You can paint another very different
picture that says, no, actually the opposite
of that's gonna happen. This is basically the new gold, you know, this is the new gold rush, the alchemy. Like, you know, this is the big bang
for this whole new area of science and technology. And so therefore you're gonna
have every smart 14-year-old on the planet building open source, right? You know, and figuring out ways to optimize these things. And then, you know, we're just gonna get, like, overwhelmingly better at generating training data. We're gonna, you know, bring in, like, blockchain
networks to have like an economic incentive to
generate decentralized training data and so forth and so on. And then basically we're
gonna live in a world of open source and there's
gonna be a billion LLMs, right? Of every size, scale,
shape and description. And there might be a few big ones that are like the super genius ones, but like mostly what we'll
experience is open source and that's, you know,
that's more like a world of like what we have today
with like Linux and the web. - Okay, but you painted these two worlds. But there's also
variations of those worlds, 'cause you said regulatory
capture; it's possible to have these tech giants that don't
have regulatory capture, which is something you're also
calling for saying it's okay to have big companies
working on this stuff as long as they don't
achieve regulatory capture. But I have the sense that
there's just going to be a new startup that's going to basically be the PageRank inventor, which then becomes the new tech giant. I don't know, I would love to hear your kind
of opinion on whether Google, Meta and Microsoft, as gigantic companies, are able to pivot so hard to create new products. Like some of it is just
even hiring people or having a corporate structure that
allows for the crazy young kids to come in and just create
something totally new. Do you think it's possible or do you think it'll come from a startup? - Yeah, it is this always
big question, which is, you get this feeling, I hear about this a lot
from CEOs, founder CEOs where it's like, wow,
we have 50,000 people, it's now harder to do new things than it was when we had 50 people. - [Lex] Yeah. - Like, what has happened? So that's a recurring phenomenon. By the way, that's one of the reasons
why there's always startups and why there's venture capital. That's like a timeless kind of thing. So that's one observation. On PageRank, we can talk about that. But specifically on PageRank, there actually is a PageRank already in the field, and it's the transformer, right? So the big breakthrough
was the transformer. And the transformer was
invented in 2017 at Google. And this is actually like
really an interesting question, 'cause it's like, okay, the transformer, like, why does OpenAI even exist? Like, the transformer was invented at Google. Why didn't Google? I asked a guy I know who
was senior at Google Brain kind of when this was happening. And I said, if Google had
just gone flat out to the wall and just said, look, we're gonna launch, we're gonna launch the equivalent of GPT4 as fast as we can. I said, when could we have had it? And he said, 2019. They could have just
done a two-year sprint with the transformer, because they already had the compute at scale. They already had all the training data, they could have just done it. There's a variety of
reasons they didn't do it. This is like a classic big company thing. IBM invented the relational
database in the 1970s, let it sit on the shelf as a paper. Larry Ellison picked
it up and built Oracle. Xerox PARC invented the
interactive computer. They let it sit on the shelf. Steve Jobs came and turned
it into the Macintosh, right? And so there is this pattern.
Now having said that, sitting here today, like
Google's in the game, right? So Google, you know, they maybe they let like a
four year gap there go there that they maybe shouldn't have, but like they're in the
game and so now they've got, you know, now they're committed. They've done this merger,
they're bringing in demos, they've got this merger with DeepMind. You know, they're piling in resources. There are rumors that they're, you know, building up an incredible,
you know, super LLM you know, way beyond what we even have today. And they've got, you
know, unlimited resources, and, you know, their honor has been challenged. - Yeah. I had a chance to
hang out with (indistinct) a couple days ago and we took this walk and there's this giant new building where there's going to be
a lot of AI work being done and it's kind of this
ominous feeling of like the fight is on. - [Marc] Yeah. - Like there's this beautiful
Silicon Valley nature, like birds are chirping
and this giant building and it's like the beast has been awakened. - [Marc] Yeah. - And then like all the big
companies are waking up to this. They have the compute, but
also the little guys have, it feels like they have
all the tools to create the killer product that, and then there's also tools to scale if you have a good idea, if
you have the PageRank idea. So there's several things to it: there's PageRank the algorithm and the idea, and there's, like, the implementation of it. And I feel like the killer product is not just the idea, like the transformer, it's the implementation, something really compelling about it, like you just can't look away. Something like the algorithm behind TikTok
versus TikTok itself, like the actual experience
of TikTok that just, you can't look away. It feels like somebody's
gonna come up with that. And it could be Google, but it feels like it's
just easier and faster to do for a startup. - Yeah. So, the startup, the huge advantage that
startups have is they just, there's no sacred cows. There's no historical legacy to protect, there's no need to reconcile your new plan with the existing strategy. There's no communication overhead. There's no, you know, big
companies are big companies. They've got pre-meetings
planning for the meeting, then they have the post
meeting, the recap, then they have the
presentation of the board, then they have the next
rounds of meetings. And that's the-- - [Lex] Lots of meetings. - That's the elapsed time in which the startup launches
its product, right? - [Lex] Yeah. - So there's a timeless thing there. Now, what the startups don't have
is everything else, right? So startups, they don't have a brand, they don't have customer relationships. They've got no distribution, they've got no, you know, scale. I mean sitting here today,
they can't even get GPUs. Right. Like there's like a GPU shortage. Startups are literally
stalled out right now 'cause they can't get chips,
which is like super weird. - [Lex] Yeah. They got the cloud. - Yeah. But the clouds
run out of chips. Right. And then to the extent
the clouds have chips, they allocate them to the big customers. Not the small customers. Right. And so the small companies
lack everything other than the ability to just
do something new. Right. And this is the timeless race and battle. And this is kinda the point
I tried to make in the essay, which is like, both
sides of this are good. Like, it's really good to have like highly-scaled tech companies that can do things that are like at staggering
levels of sophistication. It's really good to have
startups that can launch brand-new ideas. They ought to be able to
both do that and compete. Neither one ought to be subsidized or protected from the other. Like that's, to me, that's just like very
clearly the idealized world. It is the world we've been
in for AI up until now. And then of course there are people trying to shut that down. But my hope is that, you know, the best outcome clearly
will be if that continues. - We'll talk about that a little bit, but I'd love to linger on some of the ways this is
going to change the internet. So I don't know if you remember, but there's a thing called
Mosaic and there's a thing called Netscape Navigator. So you were there in the beginning. What about the interface to the internet? How do you think the browser changes and who gets to own the browser? We got to see some very
interesting browsers, Firefox, I mean all the
variants of Microsoft, Internet Explorer, Edge,
and now Chrome. And it seems like a dumb question to ask, but do you think we'll
still have the web browser? - So I have an eight-year-old
and he's super into, like, Minecraft and learning to code and doing all this stuff. So, of course I was
very proud I could bring sort of fire down from
the mountain to my kid and I brought him ChatGPT
and I hooked him up on his laptop. And I was like, you know, this is the thing that's gonna
answer all your questions. And he's like, okay. And I'm like, but it's gonna
answer all your questions. And he's like, well of
course, like it's a computer. Of course it answers all your questions. Like, what else would a
computer be good for, dad? - [Lex] And never impressed, are they? - Not impressed in the least. Two weeks passed. And he has some question and I say, well, have you asked ChatGPT? And he's like, dad, Bing is better. - [Lex] Ooh. - And why is Bing better? Is because it's built into the browser. 'Cause he's like, look, I have the Microsoft Edge browser and like it's got Bing right here. And then he doesn't know this yet, but one of the things you
can do with Bing and Edge is there's a setting where you
can use it to basically talk to any webpage because
it's sitting right there next to the browser. And by the way, which
includes PDF documents. And so, the way they've implemented it in Edge with Bing, you can load a PDF and then you can ask it questions, which is the thing you can't
do currently in just ChatGPT. So, you know, they're gonna push the melding, I think that's great, and see if there's a
combination thing there. Google's rolling out this thing, the magic button, which is
implemented in, you know, they put it in Google Docs, right? And so you go to, you know, Google Docs and you create a new document, and, you know, instead of, like, you know, starting to type, you just, you know, press the button and it starts to, like, generate content for you, right? Like, is that the way that it'll work? Is it gonna be a speech UI
where you're just gonna have an earpiece and talk to it all day long? You know, is it gonna be, like, these are all, this is exactly the kind of thing I don't think is possible to forecast. I think what we need to do is
like run all those experiments and so one outcome is we
come out of this with like a super browser that has AI built in that's just like amazing. There's a real possibility that the whole, I mean, look, there's a possibility here that the whole idea of
a screen and windows and all this stuff just goes away 'cause like, why do you need that if
you just have a thing that's just telling you
whatever you need to know? - Well and also, so there's
apps that you can use, you don't really use them. You know, being a Linux guy and a Windows guy, there's one window, the browser, with which you can interact with the internet, but on the
phone you can also have apps. So I can interact with
Twitter through the app or through the web browser. And that seems like an
obvious distinction, but why have the web browser in that case, if one of the apps starts
becoming the everything app. - [Marc] Yeah, that's right. - What is Elon trying to do with Twitter? But there could be others.
There could be like a big app, there could be a Google app
that just doesn't really do search, but just like, do what I guess AOL did back
in the day or something where it's all right there and
it changes the nature of the internet because
where the content is hosted, who owns the data? Who owns the content? What is the kind of content you create? How do you make money by creating content? Who are the content creators? All of that. Or it could just keep being the same, which is like with just the
nature of webpages changes and the nature of content. But there'll still be a web browser. 'Cause the web browser's
a pretty sexy product. It just seems to work. 'Cause, like, you have an interface, a window into the world, and then the world can
be anything you want. And as the world will evolve, it could be different
programming languages, it can be animated, maybe it's
three dimensional and so on. Yeah, it's interesting. Do you think we'll still
have the web browser? - Well, every medium becomes
the content for the next one. - [Lex] Oh boy. - You know, the AI will be able to give you a browser whenever you want. - [Lex] Oh, interesting. Generate. - Well, another way to
think about it is, maybe what the browser is, is just the escape hatch, right? Which is maybe kind of
what it is today, right? Which is like most of what
you do is like inside a social network or inside a search
engine or inside, you know, somebody's app or inside some
controlled experience, right? But then every once in a
while there's something where you actually want to jailbreak, you wanna actually get free. - Web browser's the FU to the man. You're allowed to. That's the free internet. - [Marc] Yeah. - Back the way it was in the nineties. - So here's something I'm proud of. So nobody really talks about it. Here's something I'm proud
of, which is the web, the browser, the web servers, they're all, they're still backward compatible all the way back to like 1992, right? So like, you can put up a,
you can still, you know what, the big breakthrough of
the web early on the big breakthrough was it made
it really easy to read, but it also made it really easy to write, made it really easy to publish. And we literally made
it so easy to publish. We made it not only so it
was easy to publish content, it was actually also easy to
actually write a web server. - [Lex] Yeah. - Right and you could
literally write a web server in four lines of Perl code and you could start
publishing content on it, and you could set whatever
rules you want for the content, whatever censorship, no
censorship, whatever you want. You could just do that. And as long as you had
an IP address, right, you could do that. That still works, right? That like, still works exactly as I just described.
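A minimal sketch of how little that takes, in Python rather than the Perl of the era (standard library only; the port number is arbitrary):

```python
# A minimal sketch of "a web server in a few lines" -- Python stdlib
# standing in for the Perl of the early nineties; port 8000 is arbitrary.
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serves the files in the current directory to anyone who can reach
# your IP address, the publish-from-anywhere idea described above.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```
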
So this is part of my reaction to all of this. Like, you know, all this
just censorship pressure and all this, you know, these issues around
control and all this stuff, which is like, maybe we need to get
back a little bit more to the wild west. Like, the wild west is still out there. Now they will try to chase you down. Like they'll try to, you know, people who want to censor will try to take away, you know, your domain name and
they'll try to take away your payments account and so forth if they really don't
like what you're saying. But nevertheless, you like, unless they literally are intercepting you at the ISP level, like you
can still put up a thing. And so I don't know, I think that's important
to preserve, right? Like because I mean one is
just a freedom argument, but the other's a creativity argument, which is you wanna have the escape hatch so that the kid with the idea
is able to realize the idea. 'Cause to your point on PageRank, you actually don't know what the next big idea is, right? Nobody called Larry Page and told him to develop PageRank. Like, he came up with that on his own. And you wanna always, I think, leave the escape
hatch for the next, you know, kid or the next Stanford
grad student to have the breakthrough idea and be
able to get it up and running before anybody notices. - You and I are both fans of history. So let's step back. We've been talking about the future. Let's step back for a bit
and look at the nineties. You created Mosaic web browser, the first widely used web browser. Tell the story of that. And how did it evolve
into Netscape Navigator in those early days? - So full story. So. - [Lex] We were born, - I was born. A small child. - Actually. Yeah, let's go there. Like, when did you first
fall in love with computers? - Oh, so I hit the generational jackpot and I hit the Gen X kind of
point perfectly as it turns out. So I was born in 1971. So there's this great website called wtfhappenedin1971.com, which is basically: 1971 is when everything
started to go to hell. And I was of course born in 1971. So I like to think that I had
something to do with that. - Did you make it on the website? - I don't think I made it on the website, but you know, hopefully,
somebody needs to add me. - This is where everything. - Maybe I contributed to some
of the trends that they show. Every line on that website goes like that, right? So it's all a picture of disaster. But there was this moment in time where, 'cause, you know, sort
of the Apple, you know, the Apple II hit in like 1978
and then the IBM PC hit in 82. So I was like, you know,
11 when the PC came out. And so I just kind of hit that
perfectly and then that was the first moment in time when like, regular people could spend
a few hundred dollars and get a computer, right? And so that, I just like that resonated right out of the gate. And then the other part
of the story is, you know, I was using Apple II,
I used a bunch of them, but I was using Apple II and
of course it said in the back of every Apple II and every
Mac it said, you know, designed in Cupertino, California. And I was like, wow, okay. Cupertino must be the like,
shining city on the hill. Like Wizard of Oz is
like the most amazing, like city of all time.
I can't wait to see it. And of course, years later I came out to Silicon Valley and went to Cupertino and it's just a bunch of office parks and low-rise apartment buildings. So the aesthetics were a
little disappointing, but, you know, it was the vector
right, of the creation of a lot of this stuff. So then, basically, part of my story is just
the luck of having been born at the right time and
getting exposed to PCs. Then the other part is when Al Gore says that he created the internet, he actually is correct in a really meaningful way, which is he sponsored a bill
in 1985 that essentially created the modern internet, created what is called
the NSF net at the time, which is sort of the first
really fast internet backbone. And you know, that that bill dumped a ton of money into a bunch of research universities to build out basically
the internet backbone and then the supercomputer
centers that were clustered around the internet. And one of those universities
was University of Illinois where I went to school. And so the other stroke of luck that I had was, I went to Illinois basically
right as that money was just like getting dumped on campus. And so as a consequence
we had, on campus, and this was like, you know, '89, '90, '91, we were right on the internet backbone. We had a T3, a 45-megabit backbone connection, which at the time was, you know, wildly state of the art. We had Cray supercomputers. We had Thinking Machines
parallel supercomputers. We had Silicon Graphics workstations, we had Macintoshes, we had NeXT cubes all over the place. We had like every
possible kind of computer you could imagine 'cause all this money
just fell out of the sky. - [Lex] So you were living in the future. - Yeah. So yeah, quite literally, yeah, it's all there. It's all like, we had full broadband, graphics, like the whole thing. And it's actually funny, 'cause this was the first time I kind of, it sort of tickled the back
of my head that there might be a big opportunity in
here, which is, you know, they embraced it and so
they put like computers in all the dorms and they
wired up all the dorm rooms and they had all these, you know, labs everywhere and everything. And then they gave every undergrad a computer account and an email address. And the assumption was that
you would use the internet for four years at college and then you would
graduate and stop using it. And that was that, right? - [Lex] Yeah. - And you would just
retire your email address. It wouldn't be relevant
anymore 'cause you'd go off to the workplace and they don't use email. You'd be back to using
fax machines or whatever. - Did you have that sense as well? Like, you said the back
of your head was tickled. Like, what was exciting to
you about this possible world? - Well, if this is so useful in this contained environment that just has this weird
source of outside funding, then if it were practical
for everybody else to have this and if it were cost effective for everybody else to have this, wouldn't they want it? And overwhelmingly the prevailing view at the time was no,
they would not want it. This is esoteric, weird nerd stuff, right? That like computer science kids like, but like normal people are
never gonna do email. Right. Or be on the internet, right? And so I was just like,
wow, like this is actually, like, this is really compelling stuff. Now the other part was, it was all really hard to use
and in practice you basically had to be, you know, a CS undergrad or equivalent to actually get full use of
the internet at that point. 'cause it was all pretty esoteric stuff. So then that was the other
part of the idea, which was, okay, we need to actually
make this easy to use. - So what's involved in creating Mosaic? Like, in creating graphical
interface to the internet? - Yeah, so it was a combination of things. So it was like, basically, the web existed in an early, sort of what you'd describe as prototype form. And by the way, text only at that point. - What did it look like? What was the web? I mean, what, and the key figures. Like, what was it? Like, paint a picture. - It looked like ChatGPT, actually, it was all text. - Yeah. - And so you had a text-based web browser? Yeah, well actually the original browser, Tim Berners-Lee's, the original browser, both the original browser and the server, actually ran on NeXT cubes. So these were, this was, you know, the computer Steve Jobs made
during the decade-long interim period when he was not at Apple, you know, he got fired in '85 and then came back in '97. So this was in that interim
period where he had this company called NeXT and they made, literally, these computers called cubes. They were beautiful, but they were 12 inch by 12 inch by 12 inch cube computers. And there's a famous story
about how they could have cost half as much if it had
been 12 by 12 by 13. But Steve was like, no, like, it has to be. So they were like $6,000
basically academic workstations. They had the first CD-ROM drives, which were slow. I mean, the computers
were all but unusable. They were so slow, but
they were beautiful. - Okay, can we actually just
take a tiny tangent there? - Sure. Of course. - The 12 by 12 by 12 that just
so beautifully encapsulates Steve Jobs' idea of design. Can you just comment on what you find interesting about Steve Jobs? What about that view of the world, that dogmatic pursuit of perfection and how he saw perfection in design? - Yeah, so I guess I'd say like, look, he was a deep believer, I think, in a very deep way.
The way I interpret it, I don't know if he ever really described it like this, but the way I interpret it is like this thing, and it's actually a thing in philosophy. It's like aesthetics are
not just appearances. Aesthetics go all the way to like deep underlying meaning, right? It's like I'm not a physicist. One of the things I've
heard physicists say is one of the things you start to get a sense of when a theory might be correct is when it's beautiful, right? Like, you know, there, right? And so, there's something, and you feel the same thing by the way in like human psychology, right? You know, when you're
experiencing awe, right? You know, there's like a simplicity to it. When you're having an honest
interaction with somebody, there's an aesthetic, I would say a calm comes over you, 'cause you're actually being fully honest and not trying to hide yourself, right? So it's like this very
deep sense of aesthetics. - And he would trust that
judgment that he had deep down. Like yeah, even if the
engineering teams are saying this is too difficult. Even if whatever the
finance folks are saying, this is ridiculous. The supply chain, all that
kind of stuff just makes this impossible. We can't do this kind of material. This has never been done
before and so on and so forth. He just sticks by it. - Well, I mean, who makes a
phone out of aluminum, right? Like, nobody else would've done that. And now of course if your phone isn't made out of aluminum, like, you know, how crude, what kind of caveman would you have to be to have a phone that's made outta plastic? Like, right. So it's just this very, right. And, you know, look, there's a thousand different
ways to look at this, but one of the things is just like, look, these things are
central to your life. Like, you're with your phone more than you're with anything else. Like, it's gonna be in your hand. I mean, you know this, he thought very deeply about
what it meant for something to be in your hand all day long. But for example, here's an
interesting design thing. Like, he never wanted, my understanding is he never
wanted an iPhone to have a screen larger than you could reach with your thumb one handed. And so he was actually opposed to the idea of making the phones larger. And I don't know if you
have this experience today, but let's say there are
certain moments in your day when you might be like, only have one hand available
and you might wanna be on your phone. And you're trying to like, send a text and your thumb can't
reach the send button. - Yeah. I mean there's
pros and cons, right? And then there's like folding phones, which I would love to know what he would think about them. But I mean, is there something you
could also just linger on? 'cause he's one of the interesting figures in the history of technology. What makes him as successful as he was? What makes him as interesting as he was? What made him so productive and important in the development of technology? - He had an integrated worldview. So the properly designed device that had the correct functionality, that had the deepest
understanding of the user, that was the most beautiful, right? Like, it had to be all
of those things, right? He basically would drive
to as close to perfect as you could possibly get. Right? And you know, I suspect that
he never quite, you know, thought he ever got there.
'cause most great creators, you know, are generally dissatisfied. You know, you read accounts
later on and all they can, all they can see are the
flaws in their creation. But like he got as close to
perfect each step of the way as he could possibly
get with the constraints of the technology of his time. And then, you know,
look, he was, you know, sort of famous in the Apple model. It's like, look, they will, you know, this headset that they just came out with, like, you know, it's like a
decade long project, right? It's like, and they're just gonna sit
there and tune and tune and polish and polish and tune and polish and tune and polish until it is as perfect as anybody could possibly make anything. - Yeah. - And then this goes to the way that people describe working
with him, which is, you know, there was a terrifying aspect of working with him, which is, you know, he was very tough. But there was this thing, that everybody I've ever talked to who worked for him, they all say the following, which is: we did the best work of our lives when we worked for him, because he set the bar incredibly high. And then he supported us
with everything that he could to let us actually do
work of that quality. So a lot of people who were at Apple spend the rest of their lives trying
to find another experience where they feel like they're able to hit
that quality bar again. - Even if, in retrospect or during it, it felt like suffering. - Yeah, exactly. - What does that teach you
about the human condition? Huh? - So look. In Silicon Valley, I mean, look, he's not the only one, you know, there's George Patton, you know, in the Army. Like, you know, there are many examples in other fields, you know, that are like this. Specifically in tech, it's actually, I find it very interesting. There's the Apple way, which
is polish, polish, polish, and don't ship until it's as
perfect as you can make it. And then there's the sort
of the other approach, which is the sort of
incremental hacker mentality, which basically says, ship
early and often and iterate. And one of the things I
find really interesting is, I'm now 30 years into this, like, there are very successful companies on both sides of that approach, right? Like, that is a fundamental
difference, right? In how to operate and how to
build and how to create that. You have world class companies
operating in both ways. And I don't think the question of like, which is the superior
model is anywhere close to being answered. Like, and my suspicion
is the answer is do both. The answer is you actually want both. They lead to different outcomes. Software tends to do better
with the iterative approach. Hardware tends to do
better with the, you know, sort of wait and make it perfect approach. But again, you can find
examples in both directions. - So the jury's still out on that one. So back to Mosaic. So, it was text-based, Tim Berners-Lee's? - Well, there was the
web, which was text based, but there were no, I mean
there was like three websites. There was like no content,
there were no users. Like, it wasn't yet catalytic. And by the way, because it was all text, there were no documents, there were no images, there were no videos, there were no, right. And in the beginning, you had to be on a NeXT cube, right? You needed a NeXT cube
both to publish and to consume. - So, that was 6,000 bucks, you said. - There were limitations. Yeah. $6,000 PC. They
did not sell very many. But then there was also FTP and there was Usenet, right? And there was, you know, a dozen others, basically. There was WAIS, which was an early search thing. There was Gopher, which was an early menu-based information retrieval system. There were like a dozen
different sort of scattered ways that people would get to
information on the internet. And so the Mosaic idea was basically bring those all together, make the whole thing
graphical, make it easy to use, make it basically bulletproof
so that anybody can do it. And then again, just on the luck side, it so happened that this was right at the moment when graphics, when the GUI sort of actually took off, and we're now so used to the GUI that we think it's been around forever. But it didn't really, you know, the Macintosh brought it out in '85, but they actually didn't sell very many Macs in the eighties. It was not that successful of a product, it really wasn't. You needed Windows 3.0 on PCs, and that hit in about '92. And so we did Mosaic in '92, '93. So it was like right at the
moment when you could imagine actually having a graphical user interface at all, much less one to the internet. - How well did Windows 3 sell? So was that the really big. - [Marc] That was the big bang. - The big
graphical operating system? - Well this is the classic, okay. So Microsoft was operating on the other model: Steve, the Apple way, was running on polish-until-it's-perfect. Microsoft famously ran on the other model, which is ship and iterate. And so the old line in those days was, it's version three of every Microsoft product that's the good one, right? And so, you can find online Windows 1, Windows 2. Nobody used them. Actually, in the original Microsoft Windows, the windows were non-overlapping. And so you had these very small, very low resolution screens and then you had literally-- - [Lex] Windows. - It just didn't work. It wasn't ready yet. Well. - And Windows 95 I think
was a pretty big leap also. - That was a big leap too. So that was like bang, bang. And then of course Steve, and then when, you know, in the fullness of time Steve came back, then the Mac started, took off again. That was the third bang. And then the iPhone was the fourth bang. - Such exciting time. - And then we were off,
off to the races because. - Nobody could have known what
would be created from that. - Well, Windows 3.1 or 3.0, Windows 3.0 to the iPhone
was only 15 years, right? Like, that ramp, in retrospect. At the time it felt like it took forever. But in historical terms, like, that was a very fast ramp from even having a graphical computer at all on your desk to the iPhone. That was 15 years. - So, did you have a sense
of what the internet will be as you're looking through
the window of Mosaic? Like, what you, like there's
just a few web pages for now. - So the thing I had early on
was, I was keeping at the time what, there's disputes over what was the first blog, but I had one of them that at least is a possible, at least a runner-up in the competition. And it was what was called the What's New page. And it was literally a hardwired-in distribution unfair advantage. I wired it right into the browser, I put it in the browser
and then I put my resume in the browser, which also was-- - [Lex] Hilarious. - Not many people get to do that. - No, good call. And early days. It's so interesting. - I'm looking for my, about, oh, Marc is looking for a job. - [Lex] Yeah, yeah, exactly. - So the What's New page, I would literally get up every morning, or every afternoon,
and I would basically, if you wanted to launch a website, you would email me and I would
list it on the What's New page. And that was how people
discovered the new websites as they were coming out. And I remember 'cause it was like one, it literally went from, it was like one every couple
days to like one every day to like two every day. - And then so you're doing, so that blog was kind of
doing the directory thing. So like, what was the homepage? - So the homepage was just
basically trying to explain even what this thing is that
you're looking at. Right. Basically the basic instructions. But then there was a button, there was a button that said what's new. And what most people did was they went to, for obvious reasons went to what's new. - [Lex] Yeah. - But like it was so mind
blowing at that point. This was the basic idea, and it was, this was like, you know, the basic idea of the internet, but people could see it for the first time. The basic idea was, look, you know, it's like, literally, it's like an Indian restaurant in, like, Bristol, England has, like,
put their menu on the web. And people were like, wow. - [Lex] Whoa. - Because like that's the first
restaurant menu on the web. - [Lex] Yeah. - And I don't have to be
in Bristol and I don't know if I'm ever gonna go to Bristol. And I don't even like Indian
food, and like, wow. Right. And it was like that. The first streaming video thing, it was also in England, Oxford or something. Some guy put his coffee pot up
as the first streaming video thing and he put it on the
web 'cause he literally, it was the coffee pot down the hall. And he wanted to see when
he needed to go refill it. But there were, you know, there was a point when
there were thousands of people like watching that coffee pot 'cause it was the first
thing you could watch. - Well, but weren't you able to kind of infer, you know, if that Indian restaurant could go online, then, you're like, they all will. - [Marc] Yeah, exactly. - So you felt that? - [Marc] Yeah, yeah, yeah. - Okay. - Now, you know, look, it's
still a stretch, right? It's still a stretch 'cause
it's just like, okay, is it, you know, you're still in this
zone, which is like, okay, is this a nerd thing? Is this a real person thing? By the way, you know, there was a wall of
skepticism from the media. Like, everybody was just like, yeah, this is crazy, this is just, like, dumb. This is not, you know, this is not for regular people at that time. And so you had to think
through that and then look, it was still hard to get on the internet at that point, right? So you could get kind of this
weird bastardized version if you were on AOL,
which wasn't really real. Or you had to go like,
learn what an ISP was. You know, in those days, PCs actually didn't come with TCP/IP drivers pre-installed. So you had to learn what a TCP/IP driver was. You had to buy a modem, you
had to install driver software. I have a comedy routine. I do. So it's like 20 minutes long
describing all the steps required to actually get on
the internet at that point. And so you had to look through these practical problems. Well, and then speed, performance: 14.4 modems, right? Like, it was like watching, you know, glue dry. And so there were basically a sequence of bets that we made, where you basically needed to look through that current
state of affairs and say, actually there's gonna
be so much demand for once people figure this out, there's gonna be so much demand for it that all of these practical problems
are gonna get fixed. - Some people say that
the anticipation makes the destination that much more exciting. - Do you remember progressive JPEGs? - Yeah. Do I, do I? - For kids in the audience, right? - [Lex] For kids in the audience. - You used to have to watch an image load like a line at a time. But it turns out there
was this thing with JPEGs where you could load
basically every fourth, you could load like every fourth line and then you could sweep
back through again. And so you could like
render a fuzzy version of the image up front. And then it would, like, resolve into the detailed one. And that was like a big UI breakthrough, 'cause it gave you something to watch.
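A toy sketch of that interleaved-scanline idea (this is closer to PNG-style interlacing than to the real progressive JPEG format, which refines frequency coefficients instead, but the intuition is the same):

```python
# Toy sketch of interleaved scanline loading: paint every fourth row
# first, then fill in the gaps, so a fuzzy full-frame image appears
# early and sharpens as later passes arrive.
def scanline_order(height: int):
    for offset in (0, 2, 1, 3):          # four passes over the image
        for row in range(offset, height, 4):
            yield row

print(list(scanline_order(8)))  # -> [0, 4, 2, 6, 1, 5, 3, 7]
```
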
- Yeah. And you know, there's applications in various domains for that. - Well, it was a big fight. There was a big fight early on
about whether there should be images in the web. And. - For that reason for
like sexualization or-- - Not explicitly, but that did come up. It wasn't even that, it was more just, like, the argument went, the purists basically said
all the serious information in the world is text. If you introduce images, you basically are gonna bring
in all the trivial stuff. You're gonna bring in
magazines and, you know, all this crazy stuff, you know, it's gonna distract from that. It's gonna take it away from being
serious to being frivolous. - Well, was there any
(indistinct) type arguments about the internet destroying all of human
civilization or destroying some fundamental fabric of human civilization? - So it was, those days it was all around crime and terrorism. So those arguments happened, you know, but there was no sense yet
of the internet having like, an effect on politics because
that was way too, too far off. But there was an enormous panic at the time around cybercrime. There was like enormous panic
that, like, your credit card number would get stolen and your life savings would be drained. And then, you know, criminals were gonna, there was, oh, when we started,
one of the things we did: the Netscape browser
was the first widely used piece of consumer software that had
strong encryption built in, it made it available to ordinary people. And at that time, strong encryption was
actually illegal to export outta the US. So we could sell that product in the US, we could not export it, 'cause it was classified as a munition. So the Netscape browser was on a restricted list along with the Tomahawk missile as being something that
could not be exported. So we had to make a second
version with deliberately weak encryption to sell
overseas with a big logo on the box saying, do not trust this. Which it turns out, makes it hard to sell software
when it's got a big logo that says don't trust it. And then we had to spend
five years fighting the US government to get
them to basically stop trying to do this regulation. But because the fear
was terrorists are gonna use encryption, right? To like plot, you know, all these things. And then, you know, we responded with, well actually we need
encryption to be able to secure systems so that the terrorists and the criminals can't get into them. So that anyway, that was the 1990s fight. - So can you say something
about some of the details of the software engineering
challenges required to build these browsers? I mean the engineering
challenges of creating a product that hasn't really existed before that can have such
almost like limitless impact on the world with the internet. - So there was a really key
bet that we made at the time, which was very controversial, which was core to how it was engineered, which was: are we
optimizing for performance or for ease of creation? And in those days the pressure
was very intense to optimize for performance because the
network connections were so slow and also the computers were so slow. I mentioned the progressive JPEGs; like, there's an alternate world in which we optimized for performance, and you just had a much more pleasant
experience right up front. But what we got by not doing that was we got ease of creation. And the way that we got
ease of creation was all of the protocols and
formats were in text, not in binary. And so HTTP is in text. And this was an internet tradition, by the way, that we picked up, but we continued it. HTTP is text and HTML is text, and then everything else that
followed is text as a result. And by the way, you can imagine purist
engineers saying this is insane. You have very limited bandwidth. Why are you wasting any time sending text? You should be encoding
this stuff into binary and it'll be much faster. And of course the answer
is, that's correct. But what you get when you make it text is, all of a sudden, well, the big breakthrough was the view source function, right? So the fact that you
could look at a webpage, you could hit view source
and you could see the HTML, that was how people learned
how to make webpages. Right?
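The point that the protocol itself is plain text is easy to demonstrate; a sketch in Python, with example.com as a stand-in host:

```python
# HTTP is just text: you can speak the protocol by hand over a raw
# socket and read back the status line, headers, and HTML as text.
import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(4096).decode("latin-1"))  # all human-readable
sock.close()
```
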
- It's so interesting, 'cause the stuff we take for granted now is, man, that was fundamental to the development of the web,
to be able to have HTML just right there, all the
ghetto mess that is HTML, all the sort of almost
biological like messiness of HTML and then having the browser
try to interpret that as. - [Marc] Exactly. - To show something reasonable. - Well and then there was
this internet principle that we inherited, which
was, emit, what was it? Emit conservatively, interpret liberally. So it basically meant, the design principle was, if you're creating like a web editor that's gonna emit HTML, like,
do it as cleanly as you can, but you actually want the
browser to interpret liberally, which is you actually want
users to be able to make all kinds of mistakes and
for it to still work. And so the browser rendering
engines to this day have all of this spaghetti code crazy stuff where they're resilient to
all kinds of crazy mistakes. And so, literally what I
always had in my head is like there's an 8 year old or
an 11 year old somewhere and they're doing a view source, they're doing a cut and
paste and they're trying to make a webpage for
their eternal or whatever. And like they leave out a
slash and they leave out an angle bracket and they do this and they do that, and it still works.
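A sketch of "interpret liberally" using Python's standard-library HTML parser, which, like a browser engine, keeps going past missing close tags rather than rejecting the page:

```python
# "Emit conservatively, interpret liberally": broken HTML still parses.
from html.parser import HTMLParser

class ShowTags(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("open:", tag)
    def handle_data(self, data):
        if data.strip():
            print("text:", data.strip())

# Missing </b>, unclosed <p>, stray bracket: no error, output anyway.
ShowTags().feed("<p><b>hello <p>world >")
```
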
- It's also, like, I don't often think about this, but, you know, programming, you know, C++, all those languages, Lisp, the compiled languages,
the interpreted languages, Python, Perl, all that. The braces have to be all correct. It's like everything has to be perfect. - [Marc] Brutal. - And then-- - [Marc] Autistic. - You forget. All right. It's systematic
and rigorous, let's go there. But you forget that the web, with JavaScript eventually, and HTML, is allowed to be messy for the first time. Messy in the way biological
systems could be messy. It's like the only thing
computers were allowed to be messy on for the first time. - It used to offend me. So I grew up on Unix, I worked on Unix. I was a Unix native all the way through this period. And so, it used to drive
me bananas when it would do the segmentation fault
and the core dump file, just like, it is, you know, it's like literally there's like an error in the code. The math is off by one. And it core dumps. And I'm in the core dump
trying to analyze it and trying to reconstruct what, and I'm
just like, this is ridiculous. Like, the computer
ought to be smart enough to be able to know that if it's off by one, okay fine. And it keeps running. And I would go ask all the experts like, why can't it just keep running? And they'd explain to me, well, because all the downstream
repercussions and blah blah. And I'm like, still, like, you know, this is, we're forcing the human creator to live, to your point, in this hyper-literal
world of perfection. - [Lex] Yeah. And I was just like, that's just bad. And by the way, you know what happens with that, of course, is what happened with coding at that point, which is you get a high
priesthood, you know, there's a small number of
people who are really good at doing exactly that. Most people can't. And most people are excluded from it. And so actually that was where I picked up that idea, which was like, no, you want these things to be resilient to errors of all kinds, and this would drive the
purist absolutely crazy. Like, I got attacked on this like a lot 'cause I mean like every time you know, all the purists who
were like into all this, like, markup language stuff and formats and codes and all this stuff, they would be like, you know, you're encouraging bad behavior 'cause. - Oh, so they wanted
the browser to give you a fault error anytime there was a-- - Yeah. They wanted to
be a (indistinct) right? They wanted to-- Yeah. Yeah. And any properly trained, credentialed engineer would be like, that's not how you build these systems. - That's such a bold move to say, no, it doesn't have to be. - Yeah. No, like I said, the good news for me is
the internet kind of had that tradition already,
but having said that, like we pushed it, we pushed it way out. But the other thing we did, going back to the performance thing, was we gave up a lot of performance. We made that, that initial experience for the first few years
was pretty painful. But the bet there was
actually an economic bet, which was basically the demand
for the web would basically mean that there would be a
surge in supply of broadband. Like because the question was, okay, how do you get the phone
companies which are not famous in those days for doing
new things at huge cost for like speculative reasons. Like how do you get them to
build up broadband, you know, spend billions of dollars
doing that and you know, you could go meet with them
and try to talk them into it. Or you could just have a thing that people love, where it's just very clear that it's gonna be better if it's faster. And so there was a period there, and this was fraught with peril, but there was a period there
where it's like we knew the experience was sub-optimized because we were trying to force the emergence of demand for broadband. - [Lex] Sure. - Which is in fact what happened. - So you had to figure out
how to display this text, HTML text. So the blue links and the purple links, what? And there's no standards. Were there standards at that time? - [Marc] No. There really still aren't. - Well, there's like standards, there's implied standards. Right. And, you know, there's all these kind of new features that are being added, with like CSS, like what kind of stuff a
browser should be able to support features within languages,
within JavaScript and so on. But you're setting standards
on the fly yourself. - Yeah. Well to this day, if you create a webpage
that has no CSS style sheet, the browser will render
it however it wants to. Right. So this was one of the things, there was this idea at the time, in how these systems were built, which is separation of content from format, or separation of content from appearance. And still, people
don't really use that anymore 'cause everybody wants to
determine how things look and so they use CSS
but it's still in there that you can just let the
browser do all the work. - I still like the like
really basic websites, but that could be just old school, kids these days with their
fancy responsive websites that don't actually have much content, but have a lot of visual elements. - Well that's one of the
things that's fun about chat, you know, about ChatGPT like. - [Lex] Back to the basics. - Back to just text. - [Lex] Yeah. - Right? And it, you know, there is this pattern in
human creativity and media where you end up back at text
and I think there's, you know, there's something powerful in there. - Is there some other stuff you remember like the purple links? There were some interesting
design decisions that kind of came up, that we have today or we don't have today, that were temporary. - So I made the background,
'cause I hated reading text on a white background, so I
made the background gray. Everybody can-- - Do you go ahead to? - No. No, no. That decision I think has been reversed. But now I'm happy though because
now dark mode is the thing. - So it wasn't about gray, it was just you didn't
want white background. - [Marc] Strain my eyes. - Strain your eyes. Interesting. And then there's a bunch
of other decisions. I'm sure there's an interesting
history of the development of HTML and CSS and
interfaces and JavaScript, and there's this whole Java applet thing. - Well, the big one probably
JavaScript, CSS was after me, so I didn't, that was not me. But JavaScript was the big, JavaScript maybe was the
biggest of the whole thing. That was us. And that was basically a bet,
it was a bet on two things. One is that the world wanted a new front end scripting language. And then the other was I thought at the time the world wanted a new backend scripting language. So JavaScript was designed
from the beginning to be both front end and backend. And then it failed as a
backend scripting language. And Java won for a long time. And then Python, Perl and other things, PHP and Ruby. But now JavaScript is back. And so. - I wonder if everything in
the end will run on JavaScript. - It seems like it is. And by the way, lemme give a shout out to Brendan Eich, who was basically the one-man inventor of JavaScript. - If you're interested to
learn more about Brendan Eich, he's been on this podcast previously. - Exactly. So he wrote
JavaScript over a summer and, I mean, I think it is fair to say now that it's the most widely used language in the world, and it seems to only be gaining in its range of adoption. - You know, in the software world there's quite a few stories of somebody, over a weekend or over a
week or over a summer writing some of the most impactful
revolutionary pieces of software ever. That
should be inspiring. Yes. - Very inspiring. I'll
give you another one: SSL. So SSL, the security protocol, that was us. And that was a crazy idea at the time, which was, let's take
all the native protocols and let's wrap them in a security wrapper. That was a guy named Kipp Hickman, who wrote that over a summer, one guy.
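The "wrap the native protocol in a security layer" idea, sketched with Python's standard library (TLS today rather than the original SSL, and the host is just an example):

```python
# The SSL idea in miniature: the same plain-text HTTP as before, but
# the socket underneath is wrapped in a security layer, leaving the
# protocol on top unchanged.
import socket, ssl

raw = socket.create_connection(("example.com", 443))
ctx = ssl.create_default_context()
secure = ctx.wrap_socket(raw, server_hostname="example.com")
secure.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(secure.recv(4096).decode("latin-1"))  # same text, now encrypted in transit
secure.close()
```
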
And then look today, sitting here today, like, the transformer at Google was a small handful of people. And then, you know, the number of people who did the core work on GPT, it's not that many people, it's a pretty small handful of people. And so yeah, the pattern in software, repeatedly, over a very long time, has been: Jeff Bezos always
had the two pizza rule for teams at Amazon, which is any team needs
to be able to be fed with two pizzas. If you need a third pizza,
you have too many people. And I think it's actually the one pizza rule for the really creative work. I think it's two people, three people. - Well, that's, you see that with certain open source projects, like, so much is done by
like one or two people. Like, it's so incredible. And that's why, you see, that gives me so much hope about the open source movement in this new age of AI, where, you know, just recently having had a conversation with Mark Zuckerberg, of all people, who's all in on open source, which is so interesting to
see and so inspiring to see 'cause like releasing
these models, it is scary. It is potentially very dangerous
and we'll talk about that. But it's also, if you believe in the
goodness of most people and in the skillset of most people and the desire to go do good in the world, that's really exciting. 'Cause it's not putting these models into the centralized control of big corporations, the government and so on. It's putting it in the hands of a teenage kid with, like, a dream in his eyes. I don't know. That's beautiful. - Look, this stuff, AI ought to make the
individual coder obviously far more productive right? By like, you know, a
thousand X or something. And so, not just the future of open source AI, but the future of open source everything: we ought to have a world
now of super coders, right? Who are building things as open source with one or two people
that were inconceivable, you know, five years ago. You know, the level of
kind of hyper productivity we're gonna get out of
our best and brightest I think is gonna go way up. - It's gonna be interesting. We'll talk about it, but let's just to linger
a little bit on Netscape. Netscape was acquired in
1999 for 4.3 billion by AOL. What was that like? What were some memorable aspects of that? - Well that was the height
of the.com boom bubble bust. I mean that was the frenzy. If you watch succession, that was like what they
did in the fourth season with Gojo and the merger with their, so it was like the height of like one of those kind of dynamics. And so. - Would you recommend
succession, by the way? I'm more of a Yellowstone guy. - Yellowstone's very American. I'm very proud of you. That's, that is. - I just talked to Matthew McConaughey and I'm full on Texan at this point. - Good. I approve. - And he'll be doing
the sequel to Yellowstone. - [Marc] Yeah, just exciting. - Very exciting. Anyway. - [Marc] Can't wait. - So that's a rude interruption by me, by way of Succession. So, that was at the height of the-- - Deal making and money
and just the fur flying and like craziness. And so yeah, it was just one of those, it was just like, I mean, and this, the entire (indistinct) thing from start to finish was four years, which was like for one of these companies, it's just like incredibly fast. You know, it went, we went public 18 months
after we were founded, which
virtually never happens. So it was just this incredibly fast kind of meteor streaking across the sky. And then of course it was this, and then there was just
this explosion, right? That happened 'cause then
it was almost immediately followed by the dot-com crash. It was then followed by AOL buying Time Warner, which again, it's like the Succession guys kinda play with that, which turned out to be a disastrous deal, you know, one of the famously disastrous deals in business history. And then, you know, what became an internet depression on
the other side of that. But then in that depression
in the two thousands was the beginning of broadband and smartphones and Web 2.0 right? And then social media
and search and every SaaS and everything that came out of that. - What did you learn from
just the acquisition? I mean, this is so much money. What's interesting is, 'cause it must have been very new to you, that this software stuff, you can make so much money. There's so much money swimming around. I mean, I'm sure the idea of investment was starting to get born there. - Yes. So let me lay it out. So here's the thing. I dunno if I figured it out
then, but figured it out later, which is: software is a technology that, it's like, you know, the concept of the philosopher's stone. The philosopher's stone in alchemy transmutes lead into gold, and Newton spent 20 years trying to find the philosopher's stone. Never got there. Nobody's ever figured it out. Software is our modern philosopher's stone. And in economic terms, it
transmutes labor into capital, which is like a super interesting thing. And by the way, like, Karl Marx is rolling over in his grave right now, 'cause of course that's a complete refutation of his entire theory. Transmuting labor into capital, which is as follows: somebody sits down at a keyboard and types a bunch of stuff in, and a capital asset
comes out the other side and then somebody buys that capital asset for a billion dollars. Like that's amazing, right? It's literally creating
value right out of thin air, right out of purely human thought, right? And so that, there are many things that make software magical and special, but that's the economics. - I wonder what Marx
would've thought about that? - Oh, he would've
completely broken his brain, because of course the whole thing was, you know, that kind of
technology was inconceivable when he was alive. It was all industrial era stuff. And so, any kind of machinery
necessarily involved huge amounts of capital. And then labor was on the receiving end of the abuse. - [Lex] Yep. - Right? But, like, a software engineer is somebody who basically transmutes his own labor into an actual capital asset, creates permanent value. Well, and in fact it's
actually very inspiring. That's actually more
true today than before. So when I was doing software, the assumption was all
new software basically has a sort of a parabolic
sort of lifecycle, right? So you ship the thing,
people buy it, at some point everybody who wants it has bought it, and then it becomes obsolete. It's like bananas: nobody buys old software. These days, Minecraft, Mathematica, you know, Facebook, Google, you have the software
assets that are, you know, have been around for 30
years that are gaining in value every year, right? And they're, you know, World of Warcraft, right, salesforce.com, like, every single year they're being polished and polished and polished and polished. They're getting better
and better, more powerful, more powerful, more
valuable, more valuable. So we've entered this era
where you can actually have these things that actually
build out over decades. Which by the way is what's happening right now with like ChatGPT. And so now, this is why, you know, there is always, you know, sort of a constant investment
frenzy around software is because, you know, look, when
you start one of these things, it doesn't always succeed. But when it does now you
might be building an asset that builds value for,
you know, four or five, six decades to come. You know, if you have a team of people who have the level of devotion required to keep making it better. And then the fact that of
course everybody's online, you know, there's 5 billion people that are a click away from
any new piece of software. So the potential market size
for any of these things is, you know, nearly infinite. - [Lex] It must have been
surreal back then though. - Yeah. Yeah. This was
all brand new, right? Yeah. Back then, this was all brand new. These were all, you know, brand new. Had you rolled out that
theory in even 1999, people would've thought
you were smoking crack. So that's emerged over time. - Well, let's now turn
back into the future. You wrote the essay "Why
AI Will Save The World?" Let's start the very high level. What's the main thesis of the essay? - Yeah, so the main thesis on the essay is that what we're dealing
with here is intelligence. And it's really important to kind of talk about the sort of very nature
of what intelligence is. And fortunately we have a predecessor to machine intelligence,
which is human intelligence. And we've got, you know, observations and theories
over thousands of years for what intelligence is
in the hands of humans, right? I mean, what it literally is, is the way to, you know, capture, process,
analyze, synthesize information, solve problems. But the observation of
intelligence in human hands is that intelligence quite literally
makes everything better. And what I mean by that
is every kind of outcome of like human quality of life, whether it's education outcomes or success of your children, or career success or health or lifetime
satisfaction, by the way, propensity to peacefulness
as opposed to violence, propensity for open-mindedness
versus bigotry, those are all associated with
higher levels of intelligence. - Smarter people have better outcomes, as you write, in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity,
learning new skills, managing complex tasks, leadership,
entrepreneurial success, conflict resolution,
reading comprehension, financial decision making, understanding others
perspectives, creative arts, parenting outcomes, and life satisfaction. One of the more depressing
conversations I've had, and I don't know why it's depressing, I have to really think
through why it's depressing, but on IQ and the g factor, and that it's something that in large part is genetic, and it correlates so much with all of these things and success in life. It's like all the inspirational
stuff we read about, like if you work hard and so on, it sucks that you're born with a hand that you can't change. - But what if you could. - You're saying basically
a really important point, and I think it's in your
article, it really helped me. It's a nice added perspective to think about. Listen, human intelligence, the science of intelligence has shown, scientifically, that it just makes life easier and better the smarter you are. And now let's look at
artificial intelligence and if that's a way to increase
some human intelligence, then it's only going
to make a better life. - [Marc] Yeah. - That's the argument. - And certainly at the collective level, we could talk about the collective effect of just having more
intelligence in the world, which will have very big payoff. But there's also just
at the individual level, like, what if every person has a machine? You know? And the concept of augmentation, Doug Engelbart's concept. You know, what if
everybody has an assistant and the assistant is, you know, 140 IQ and you happen to be 110 IQ and you've got, you know, something that basically is
infinitely patient and knows everything about you
and is pulling for you in every possible way,
wants you to be successful. And anytime you find anything
confusing or wanna learn anything or have trouble
understanding something or wanna figure out what to
do in a situation, right? Wanna figure out how to
prepare for a job interview, like any of these things,
like, it will help you do it. And therefore the combination will effectively, you know, raise your IQ, and will therefore raise the odds of successful life outcomes
in all these areas. - So people below the,
this hypothetical 140 IQ, it'll pull them up towards 140 IQ. - Yeah, yeah, yeah. And then of course, you know, people at 140 IQ will be
able to have a peer, right. To be able to communicate, which is great. And then people above 140
IQ will have an assistant that they can farm things out to. And then look, God willing, you know, at some point these things
go, future versions go, from 140 IQ equivalent to 150 to 160 to 180, right? Like, Einstein was estimated to be on the order of 160, you know. So when we get, you know, 160 AI, like, one assumes it'll be creating Einstein-level breakthroughs in physics, and then at 180 we'll be, you know, curing cancer and developing
warp drive and doing all kinds of stuff. And so it is quite possibly the case, this is the most important
thing that's ever happened and the best thing that's ever happened because precisely because it's a lever on this single fundamental
factor of intelligence, which is the thing that drives
so much of everything else. - Can you steelman the case that human plus AI is not always better than human, for the individual? - You may have noticed that there's a lot of smart assholes running around. - [Lex] Sure. Yes. - Right? And so, like, there are certain people where
the smarter they get, you know, the more arrogant they get, right? So that, you know, there's one huge flaw. - Although, to push back on that, it might be interesting, because
when the intelligence is not all coming from you,
but from another system, that might actually increase
the amount of humility even in the assholes. - [Marc] One would hope. - Yeah. - Or it could make assholes more assholes. You know, that's in, I mean, that's for psychology to study. - Yeah, exactly. Another one is smart people
are very convinced that they, you know, have a more
rational view of the world, and that they have an easier
time seeing through conspiracy theories and hoaxes and right. You know, sort of crazy
beliefs and all that. There's a theory in psychology, which is actually smart people. So for sure people who aren't
as smart are very susceptible to hoaxes and conspiracy theories. But it may also be the case
that the smarter you get, you become susceptible in a different way, which is you become very
good at marshaling facts to fit preconceptions, right. You become very, very good at assembling whatever theories and
frameworks and pieces of data and graphs and
charts you need to validate whatever crazy ideas got in your head. And so you're susceptible
in a different way, right? - We're all sheep, but
different colored sheep. - Some sheep are better
at justifying it. Right. And those are the, you know, those are the smart sheep, right? So yeah, look, I would say this: look, there are no panaceas. I'm not a utopian, there
are no panaceas in life. There are no, like, you know, I don't believe there
are like pure positives. I'm not a transcendental
kind of person like that. But, you know, so yeah,
there are gonna be issues and, you know, look, smart people, another thing maybe you could say about smart people is they are more likely to get
themselves in situations that are, you know, beyond their grasp. You know, because they're
just more confident in their ability to deal with complexity and their eyes become bigger, their cognitive eyes become bigger than their stomach, you know? So yeah, you could argue
those eight different ways nevertheless, on net, right? Clearly, overwhelmingly, again, if you just extrapolate from what we know about human intelligence, you're improving so many aspects of life if you're upgrading intelligence. - So there'll be assistants
at all stages of life. So when you're younger,
there's for education, all that kind of stuff for
mentorship, all of this. And later on as you're doing
work and you've developed a skill and you're having a profession, you'll have an assistant
that helps you excel at that profession. So at all stages of life. - Yeah. I mean, look, the
theory is augmentation. This is the Doug Engelbart's term. Doug Engelbart made this observation many, many
decades ago that, you know, basically it's like you can
have this oppositional frame of technology where it's
like us versus the machines, but what you really do
is you use technology to augment human capabilities. And by the way, that's how actually the economy develops. That's, we can talk about
the economic side of this, but that's actually how
the economy grows is through technology
augmenting human potential. And so, yeah. And then you basically
have a proxy or you know, or you know, a sort of
prosthetic, you know, so like you've got glasses,
you've got a wristwatch, you know, you've got shoes, you know, you've got these things. You've got a personal computer, you've got a word processor,
you've got Mathematica, you've got Google. This is the latest
viewed through that lens. AI is the latest in a long
series of basically augmentation methods to be able to
raise human capabilities. It's just this one is the
most powerful one of all, because this is the one
that, that goes directly to what they call fluid
intelligence, which is IQ. - Well, there's two categories of folks that you outline that
worry about or highlight the risks of AI, and you highlight a bunch of different risks. I would love to go through those risks and just discuss them, brainstorm which ones are serious and which ones are less serious. But first, the Baptist
and the bootleggers, what are these two
interesting groups of folks who worry about the effect
of AI on human civilization? - [Marc] Or say they do. - Say, oh, okay, yes, I'll say they do. - The Baptists worry; the bootleggers say they do. So the Baptists and the bootleggers is a metaphor from economics, from what's
called development economics. And it's this observation that when you get social
reform movements in a society, you tend to get two sets
of people showing up, arguing for the social reform. And the term Baptist and bootleggers comes from the American experience
with alcohol prohibition. And so in the 1900s, 1910s, there was this movement
that was very passionate at the time, which basically said, alcohol is evil and is destroying society. By the way, there was a lot
of evidence to support this. There were very high correlations then, by the way, and now, between rates of physical violence and alcohol use. Almost all violent crimes have either the perpetrator or the victim, or both, drunk. You see this actually in the workplace: almost all sexual harassment
cases in the workplace, it's like at a company
party and somebody's drunk. Like, it's amazing how often
alcohol actually correlates to actually dis dysfunction
and at leads to domestic abuse and so forth, child abuse. And so you had this group of
people who were like, okay, this is bad stuff and we should outlaw it. And those were quite literally Baptists. Those were super committed, you know, hardcore Christian
activists in a lot of cases. There was this woman whose
name was Carrie Nation, who was this older woman who
had been in this, you know, I don't know, disastrous
marriage or something. And her husband had been
abusive and drunk all the time. And she became the icon of
the Baptist prohibitionists. And she was legendary in that era for carrying an ax and, completely on her own, doing raids of saloons and taking her ax to all the bottles and kegs in the back. And so.
- [Lex] A true believer.
- An absolute true believer, and with absolutely the purest of intentions. And again, there's a very
important thing here, which is, you could look at this
cynically and you could say the Baptists are like delusional,
you know, the extremists, but you could also say,
look, they're right. Like she was, you know, she had a point. Like she wasn't wrong about
a lot of what she said.
- Yeah.
- But the way the story goes is, it turns out that there was another set of people who very badly wanted to outlaw alcohol in those days. And those were the bootleggers: organized crime that
stood to make a huge amount of money if legal alcohol
sales were banned. And in fact, the way the history goes, this was actually the beginning of organized crime in the US. This was the big economic opportunity that opened that up. And so they went in together. And no, they didn't go in together; the Baptists did not
even necessarily know about the bootleggers 'cause they were on their moral crusade. The bootleggers certainly
knew about the Baptists. And they were like, wow, these people are like the
great front people for like. You know, it's--
- [Lex] Good PR.
- Shenanigans in the background. And they got the Volstead Act passed, right. And they did in fact ban alcohol in the US, and you'll notice what happened, which is people kept drinking. It didn't work; people kept drinking. The bootleggers made a
tremendous amount of money. And then over time it became
clear that it made no sense to make it illegal and it
was causing more problems. And so then it was revoked. And here we sit with legal
alcohol a hundred years later with all the same problems. And you know, the whole thing was this like giant misadventure. The Baptists got taken advantage of by the bootleggers, and the bootleggers got what they wanted. And that was that.
- The same two categories of folks are now sort of suggesting
that the development of artificial intelligence
should be regulated.
- A hundred percent. It's the same pattern. And the economists will tell you it's the same pattern every time. Like, this is what happened with nuclear power, which is another interesting one. But yeah, this happens dozens and dozens of times throughout the last hundred years, and this is what's happening now.
- And you write that it isn't
sufficient to simply identify the actors and impugn their motives. We should consider the
arguments of both the Baptists and the bootleggers on their merits. So let's do just that. Risk number one: will AI kill us all?
- [Marc] Yes.
- So what do you think about this one? What do you think is
the core argument here that the development of
AGI, perhaps better said, will destroy human civilization?
- Well, first of all, you just did a sleight of hand, 'cause we went from talking about AI to AGI.
- Is there a fundamental difference there?
- I don't know. What's AGI?
- What's AI? What's intelligence?
- Well, I know what AI is: machine learning. What's AGI?
- I think we don't know
what the bottom of the well of machine learning is
or what the ceiling is. Because just to call something machine learning, or just to call it statistics, or just to call it math or computation, doesn't mean much; you know, nuclear weapons are just physics. So to me it's very interesting and surprising how far machine learning has gotten.
- No, but we knew that
nuclear physics would lead to weapons. That's why the scientists
of that era were always in this huge dispute about building the weapons. This is different. AGI is different.
- Does machine learning lead there? Do we know?
- We don't know, but this is my point: it's different. We actually don't know. But, and this is where the sleight of hand kicks in, right? This is where it goes from being a scientific topic to being a religious topic. And that's why I specifically called it out, 'cause that's what happens. They do the vocabulary shift, and all of a sudden you're talking about something totally different that's not actually real.
- Well then maybe you can
also, as part of that, define the western
tradition of Millennialism.
- [Marc] Yes. End of the world apocalypse.
- [Lex] What is it?
- [Marc] Apocalypse cults.
- [Lex] Apocalypse cults.
- Well, so we live in, we of course live in a Judeo-Christian, but primarily Christian-saturated, you know, kind of Christian, post-Christian, secularized-Christian world in the west. And of course core to Christianity is the idea of the second coming, and you know, the revelations, and Jesus returning, and the thousand-year utopia on earth, and then the rapture and all that stuff. You know, we collectively, as a society, we don't necessarily take all that fully seriously now. So what we do is we create our secularized versions of that. We keep looking for utopia. We keep looking for, you know, basically the end of the world. And so what you see over decades is basically a pattern: this is what cults are. This is how cults form; they form around some theory of the end of the world. And so the People's Temple cult, the Manson cult, the Heaven's Gate cult, the David Koresh cult, you know, what they're all
organized around is like, there's gonna be this
thing that's gonna happen that's gonna basically bring
civilization crashing down. And then we have this
special elite group of people who are gonna see it
coming and prepare for it. And then they're the people
who are either going to stop it or, failing to stop it, are gonna be the people who survive to the other side and ultimately get credit for having been right.
- Why is that so compelling,
do you think? Like-- - Because it satisfies this very deep need we have for transcendence and meaning that got stripped
away when we became secular.
- Yeah, but why does the transcendence involve the destruction of human civilization?
- It's a very deep psychological thing, 'cause it's like, how plausible is it that we live in a world where everything's just kind of all right? Right. How exciting?
- [Lex] Whoa.
- How exciting is that? Right?
- [Lex] But that's.
- We want more than that.
- But that's the deep question I'm asking. Why is it not exciting to live in a world where everything's just all right? I think, you know, most of the animal kingdom would be so happy with just all right, because that means survival. Why are we, maybe that's what it is. Why are we conjuring up
things to worry about? - So CS Lewis called
it the God-shaped hole. So there's a God-shaped hole
in the human experience, consciousness, soul,
whatever you wanna call it, where there's gotta be
something that's bigger than all this. There's gotta be something transcendent. There's gotta be something
that is bigger, right? Bigger purpose. A bigger meaning. And so we have run the
experiment of, you know, we're just gonna use
science and rationality and kind of, you know, everything's just gonna
kind of be as it appears. And a large number of people have found that very deeply wanting and
have constructed narratives. And this is the story of the 20th century, right? Communism was one of those; communism was a form of this, Nazism was a form of this. You know, you can see movements
like this playing out all over the world right now.
- So you constructed a kind of devil, a kind of source of evil, and we're going to transcend beyond it.
- Yeah. And when you see a millenarian cult, they put a really specific point on it, which is the end of the world, right? There is some change coming. And that change that's coming is so profound and so important that it's either gonna lead to utopia or hell on earth. Right? And then, you know, it's like, what if you actually knew that was going to happen, right? What would you do? How would you prepare yourself for it? How would you come together with a group of like-minded people? What would you do? Would you plant caches of weapons in the woods? Would you, you know, create underground bunkers? Would you, you know, spend your
life trying to figure out a way to avoid having it happen? - Yeah. That's a really
compelling, exciting idea to have a club over. To have a little tribe, like a get-together on a Saturday night: drink some beers and talk about the end of the world and how you are the only ones who have figured it out.
- Yeah. And then once you lock in on that, like, how can you do anything
else with your life? Like this is obviously the
thing that you have to do. And then there's a psychological
effect that you alluded to. There's a psychological effect. If you take a set of true
believers and you leave them to themselves, they get
more radical, right, 'cause they self-radicalize each other.
- That said, it doesn't mean they're not sometimes right.
- Yeah. The end of the world might be. Yes. Correct. Like, they might be right.
- [Lex] Yeah.
- But like--
- [Lex] I have some pamphlets for you.
- Exactly.
- But I mean, we'll talk
about nuclear weapons 'cause you have a really
interesting little moment that I learned about in
your essay, but you know, sometimes it could be right.
- [Marc] Yeah.
- 'Cause we're still developing more and more powerful technologies, in this case, and we don't know what impact they will have on human civilization. While we can highlight all the different predictions about how it'll be positive, the risks are there, and you discuss some of them.
- Well, the steel man, well actually, the steel man and its refutation are the same, which is: you can't predict what's gonna happen. Right? You can't rule out that this will end everything. Right. But the response to that is you have just made a completely non-scientific claim. You've made a religious
claim, not a scientific claim.
- How does it get disproven?
- By definition, with these kinds of claims, there's no way to disprove them. Right? And so you just go right down the list: there's no hypothesis, there's no testability of the hypothesis. There's no way to falsify the hypothesis, there's no way to measure
progress along the arc. Like it's just all completely missing. And so it's not scientific and. - I don't think it's completely missing. It's somewhat missing. So for example, the people that say AI's gonna kill all of us. I mean, they usually have
ideas about how to do that, whether it's the paperclip maximizer or, you know, it escapes; there's a mechanism by which you can imagine it killing all humans.
- [Marc] Models.
- And you can disprove it by saying there's a limit to the speed at which intelligence increases. Maybe take the sort of rigorously described model of how it could happen and say, no, here's a physics limitation. There's like a physical
limitation to how these systems would actually do damage
to human civilization. And it is possible they
will kill 10 to 20% of the population, but it seems impossible
for them to kill 99%.
- Those are practical counterarguments. Right. So you mentioned basically what I described as the thermodynamic counterargument, which is, sitting here today, it's like, where does the evil AGI get the GPUs? 'Cause like, they don't exist. So you're gonna have a
very frustrated baby evil AGI, who's gonna be like trying to
buy Nvidia stock or something to get them to finally
make some chips, right? So the serious form of that
is the thermodynamic argument, which is like, okay, where's
the energy gonna come from? Where's the processor gonna be running? Where's the data center
gonna be happening? How is this gonna be
happening in secret such that, you know, it's not noticed? So that's a practical counterargument to the runaway AGI thing, and we can argue that, discuss that. But I have a deeper objection to it, which is: this is all forecasting. It's all modeling, it's all future prediction. It's all future hypothesizing. It's not science.
- [Lex] Sure.
- It is not. It is the opposite of science. I'll pull out Carl Sagan: extraordinary claims require extraordinary proof, right? These are extraordinary claims. The policies that are being called for, right, to prevent this are of extraordinary magnitude, and I think are gonna cause extraordinary damage. And this is all being done
on the basis of something that is literally not scientific. It's not a testable hypothesis. - So the moment you say
AI's gonna kill all of us, therefore we should ban it, or that we should regulate
all that kind of stuff, that's when it starts getting serious. - Or start, you know, military
airstrikes on data centers.
- [Lex] Oh boy.
- Right? And like.
- Yeah, this is when it starts. Well, it starts getting real weird.
- So here's the problem with millenarian cults. They have a hard time staying away from violence.
- Yeah. But violence is so fun.
- If you're on the right end of it. They have a hard time avoiding violence. The reason they have a hard
time avoiding violence is if you actually believe the claim. Right. Then what would you do to
stop the end of the world? Well, you would do anything, right? And so, and this is
where you get, and again, if you just look at the history of millenarian cults, this is where you get the People's Temple and everybody killing themselves in the jungle. And this is where you get Charles Manson, you know, sending his followers in to kill the pigs. Like, this is the problem with these. They have a very hard time drawing the line at actual violence. And I think in this case, I mean, they're already calling for it today, and you know, where this goes from here as they get more worked up, I think, is really concerning.
- Okay. But that's kind of the extremes. And, you know, the extremes of anything are concerning. It's also possible to kind of believe that AI has a very high likelihood of killing all of us, and therefore we should maybe consider slowing development or regulating. So not violence or any of these kinds of things, but saying like, all right, let's take a pause here. You know, like biological weapons, nuclear weapons: whoa, whoa, whoa, whoa, whoa. This is serious stuff. We should be careful. So it is possible to kind of have a more rational response, right? If you believe this risk is real.
- [Marc] Believe.
- Yes. So is it possible to have a scientific approach to
the prediction of the future? - I mean, we just went
through this with COVID. What do we know about modeling? - [Lex] Well, I mean. - What did we learn about
modeling with COVID? - [Lex] There's a lot of lessons. - They didn't work at all. - [Lex] They worked poorly. - The models were terrible,
the models were useless. - I don't know if the models
were useless, or the people interpreting the models, and the centralized institutions that were creating policy
rapidly based on the models and leveraging the models in order to support their narratives versus actually
interpreting the error bars in the models and all that kind of stuff.
- What you had with COVID, in my view,
is you had these experts showing up and they
claimed to be scientists and they had no testable
hypotheses whatsoever. They had a bunch of models. They had a bunch of forecasts
and they had a bunch of theories and they laid these out in front of policy makers and policy makers freaked
out and panicked. Right. And implemented a whole bunch of like, really like terrible decisions
that we're still living with the consequences of, and there was never any
empirical foundation to any of the models. None of them ever came true.
- Yeah. To push back: there were certainly Baptists and bootleggers in the context of this pandemic, but there's still a usefulness to models, no?
- So not if they're, I mean, not if they're
reliably wrong, right? Then they're actually
like anti-useful. Right. They're actually damaging. - But what do you do with the pandemic? What do you do with any kind of threat? Don't you want to kind of
have several models to play with as part of this discussion of, like, what the hell do we do here?
- I mean, do they work? Because there's an expectation that they actually work, that they have actual predictive value. I mean, as far as I can tell with COVID, the policymakers just psyched themselves into believing that there was substance there. I mean, look, the scientists were at fault. The quote-unquote scientists showed up. So I had some insight into this. You remember the Imperial College models out of London were the ones that were like, these are the gold-standard models. So a friend of mine runs
a big software company, and he was like, wow, COVID is really scary. And he contacted this researcher and he's like, you know, do you need some help? You've been building this model on your own for 20 years. Would you like us, our coders, to basically restructure it so it can be fully adapted for COVID? And the guy said yes and sent over the code, and my friend said it was like the worst spaghetti code he's ever seen.
- That doesn't mean it's
not possible to construct a good model of a pandemic, with the correct error bars, with a high number of parameters that are continuously, many times a day, updated as we get more data about the pandemic. I would like to believe that when a pandemic hits the world, the best computer scientists in the world, the best software engineers, respond aggressively, and as input take the data that we know about the virus, and as output say: here is what's happening in terms of how quickly it's spreading, what that leads to in terms of hospitalizations and deaths, and all that kind of stuff. Here's how contagious it likely is. Here's how deadly it likely is, based on different conditions, based on different ages and demographics and all that kind of stuff. So here's the best kinds of policy. It feels like you could have models, machine learning, that kind of, they don't perfectly predict the future, but they help you do something. 'Cause there's pandemics that are like, meh, they don't really do much harm. And there's pandemics, you can imagine them, that could do a huge amount of harm, like they could kill a lot of people. So you should probably have some kind of data-driven models that keep updating, that allow you to make decisions based on how bad this thing is. Now, you can criticize how horribly all that went with the response to this pandemic, but I just feel like there might be some value to models.
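A minimal sketch of the kind of continuously updated, data-driven epidemic model described here, assuming a toy SIR structure in Python. The parameter names, the smoothing rule, and all the numbers are illustrative assumptions, not any model actually used during COVID:

```python
def sir_step(s, i, r, beta, gamma, n):
    """Advance a basic SIR (susceptible/infected/recovered) model by one day."""
    new_infections = beta * s * i / n
    new_recoveries = gamma * i
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def fit_beta(observed_cases, s, i, n):
    """Re-estimate the transmission rate from the latest observed daily cases.
    A real system would do a proper Bayesian update with error bars; this is
    a single-point estimate for illustration."""
    contact_rate = s * i / n
    return observed_cases / contact_rate if contact_rate > 0 else 0.0

# Toy usage: re-fit beta each day as new surveillance data arrives, then project forward.
n = 1_000_000
s, i, r = n - 100, 100, 0
beta = 0.3
daily_case_reports = [120, 150, 190, 230, 300]  # assumed incoming case counts
for cases in daily_case_reports:
    beta = 0.7 * beta + 0.3 * fit_beta(cases, s, i, n)  # smoothed re-fit
    s, i, r = sir_step(s, i, r, beta, gamma=0.1, n=n)
print(f"current beta estimate: {beta:.3f}, projected infected: {i:.0f}")
```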
- So to be useful, at some point it has to be predictive, right? And the easy thing for me to do is to say, obviously you're right. Obviously I wanna see that just as much as you do, 'cause anything that makes it easier to navigate society through a wrenching risk like that sounds great. You know, the harder objection to it is simply:
you are trying to model a complex dynamic system
with 8 billion moving parts. Like not possible. - [Lex] It's very tough. - Can't be done, complex
systems can't be done. - Machine learning says hold my beer. But well, it's possible. No? - I don't know. I would like to believe that it is. I'll put it this way. I think where you and I
would agree is, I think we would like that to be the case. We are strongly in favor of it. I think we would also agree that no such thing exists with respect to COVID or pandemics, at least that either you or I are aware of. I'm not aware of anything like that today.
- My main worry with the
response to the pandemic is, the same as with aliens, that even if such a thing existed, and it's possible it existed, the policymakers were not paying attention. Like, there was no mechanism that allowed those kinds of models to percolate up.
- Oh, I think we had the
opposite problem during COVID. I think the policymakers, I think these people with
basically fixed science had too much access to the policymakers.
- Well, right. But the policymakers also, they had a narrative in mind, and they wanted to use whatever model fit that narrative--
- [Marc] Oh, sure.
- To help them out. So it felt like
there was a lot of politics and not enough science. - Although a big part
of what was happening, a big reason we got lockdowns
for as long as we did, was because these scientists
came in with these doomsday scenarios that were just completely off the hook.
- Scientists in quotes, let's not--
- [Marc] Quote-unquote scientists.
- Let's not. Okay, let's give science some love. Science is the way out.
- Science is a process
of testing hypotheses. Modeling does not involve
testable hypotheses, right? Like, I don't even know; I actually don't even know that modeling
qualifies as science. Maybe that's a side conversation we could have some time over a beer.
- Oh, that's a really interesting part. What do we do about the future? I mean, what's--
- So number one is, we start with humility. It goes back to this thing of, how do we determine the truth? Number two is, we don't believe, you know, it's the old, if all you've got is a hammer, everything looks like a nail, right? This is one of the reasons I gave you, I gave Lex a book, the topic of which is what happens when scientists basically stray off the path of technical knowledge and start to weigh in on politics and societal issues.
- In this case, philosophers.
- Well, in this case philosophers. But he actually talks in this
book about, like, Einstein; he talks actually about the nuclear age and Einstein. He talks about the physicists actually doing very similar things at the time.
- The book is When Reason Goes on Holiday: Philosophers in Politics, by Neven Sesardic.
- And it's just a story. There are other books on this topic, but this is a new one that's really good. This is just a story of what happens when experts
in a certain domain decide to weigh in and become
basically social engineers and political, you know,
basically political advisors. And it's just a story of ensuing catastrophe, right? And I think that's what happened with COVID again.
- Yeah. I found this book a highly entertaining and eye-opening read, filled with amazing anecdotes of irrationality and craziness by famous recent philosophers.
- I definitely, after you read this book, you will not look at Einstein the same.
- [Lex] Oh boy.
- Yeah.
- Don't destroy my heroes.
- He will not be a hero of yours anymore. Sorry. Maybe
you shouldn't read the book. - All right. - But here's the thing. The AI risk people, they don't even have the COVID model, at least not that I'm aware of. - [Lex] No. - Like there's not even the
equivalent of the COVID model. They don't even have the spaghetti code. They've got a theory and a warning and a this and a that. And if you ask, okay, well, I mean, the ultimate example is: okay, how do we know, right? How do we know that an AI is running away? How do we know that the foom takeoff thing is actually happening? And the only answer that any of these guys have given that I've ever seen is: oh, it's when the loss rate, the loss function in training, drops, right? That's when you need to shut down the data center. Right? And it's like, well, that's also what happens when you're successfully training a model. This is not science, it's not a model, it's not anything. There's nothing to argue with. It's like, you know, punching Jell-O. What do you even respond to?
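A toy illustration of the ambiguity being described: a monitor that flags a "takeoff" whenever training loss drops sharply cannot distinguish that from an ordinary, successful training run. The thresholds and the loss curve below are made-up assumptions:

```python
def takeoff_alarm(losses, window=3, drop_ratio=0.5):
    """Fire if mean loss over the last `window` steps falls below
    `drop_ratio` times the mean over the previous `window` steps."""
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    prior = sum(losses[-2 * window:-window]) / window
    return recent < drop_ratio * prior

# A completely normal loss curve from a model that is simply learning well:
normal_training = [4.0, 3.1, 2.4, 1.8, 1.1, 0.6, 0.3]
print(takeoff_alarm(normal_training))  # True -- the alarm can't tell the difference
```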
- So, just to push back on that: I don't think they have good metrics of when the foom is happening, but I think it's possible to have that. Just as you speak now, it's possible to imagine there could be measures.
- It's been 20 years.
- No, for sure. But it's been only weeks since we had a big enough breakthrough in language models that we can start to actually have this. The thing is, the AI doomer stuff didn't have any actual systems to really work with. And now there's real systems you can start to analyze, like, how does this stuff go wrong? And I think you kind of agree that there are a lot of risks we can analyze. The benefits outweigh the risks in many cases.
- Well, the risks are not existential.
- [Lex] Yes. Well.
- Not in the foom paperclip sense. Let me, okay, there's another sleight of hand that you just alluded to. There's another sleight of hand that happens, which is very interesting.
- I'm very good at the sleight of hand thing.
- Which is very not scientific. So the book Superintelligence, right, which is Nick Bostrom's book, which is like the origin
of a lot of this stuff, which was written, you know, whatever, 10 years ago or something. So he does this really
fascinating thing in the book, which is he basically says
there are many possible routes to machine intelligence,
to artificial intelligence. And he describes all the different routes to artificial intelligence,
all the different possible, everything from biological
augmentation through to, you know, all these different things. One of the ones that
he does not describe is large language models because of course the book was written
before they were invented. And so they didn't exist. In the book, he describes them all and then he proceeds to treat them all as if they're
exactly the same thing. He presents them all as sort
of an equivalent risk to be dealt with in an equivalent
way to be thought about the same way. And then the risk, the quote unquote risk that's actually emerged is actually a completely different technology than he was even imagining. And yet all of his theories
and beliefs are being transplanted by this movement, like straight onto this new technology. And so again, like there's no other area of science or technology
where you do that. Like, when you're dealing with organic chemistry versus inorganic chemistry, you don't just say, oh, with respect to either one, basically maybe, you know, grey goo eating the world or something; like, they're just gonna operate the same way. You don't.
- But you can start talking about, like, as we get more and more actual systems that start to get more
and more intelligent, you can start to actually have more scientific arguments here. - [Marc] Oh yeah. - Like, you know, high level, you can talk about the threat
of autonomous weapon systems back before we had any
automation in the military. And that would be like
very fuzzy kind of logic. But the more and more you
have drones that are becoming more and more autonomous, you
can start imagining, okay, what does that actually look
like and what's the actual threat of autonomous weapons systems? How does it go wrong? And still it's very vague, but you start to get a
sense of like, all right, it should probably be illegal or wrong or not allowed
to do, like, mass deployment of fully autonomous drones that are doing aerial strikes.
- [Marc] Oh no.
- On large areas.
- [Marc] I think it should be required.
- Right? So that's a no.
- No, no. I think it should be required that aerial vehicles be automated.
- Okay. So you wanna go the other way?
- I wanna go the other way.
- So that, okay.
- I think it's obvious that
the machine is gonna make a better decision than the human pilot. I think it's obvious that
it's in the best interest of both the attacker and the
defender and humanity at large, if machines are making more of these decisions rather than people. I think people make terrible decisions in times of war.
- But like, there's ways
this can go wrong too, right?
- Well, wars go terribly wrong now. This goes back to that whole thing about the self-driving car: does the self-driving car need to be perfect, versus does it need to be better than the human driver? Does the automated drone need to be perfect, or does it need to be better than a human pilot at making decisions under enormous amounts of
stress and uncertainty? - Yeah, well, on average, the worry that AI folks
have is the runaway.
- They're gonna come alive, right? Then again, that's the sleight of hand, right?
- Or not come alive. Well, no, hold on a second. You lose control as well. You lose control.
- But then they're gonna develop goals of their own. They're gonna develop a mind of their own, they're gonna develop their own. Right.
- No, more like a Chernobyl-style meltdown: just bugs in the code that accidentally, you know, result in the bombing of large civilian areas.
- [Marc] Okay.
- And to a degree that's not possible in the current military strategies--
- [Marc] I don't know.
- Controlled by humans.
- Well, actually we've been
doing a lot of mass bombings to cities for a very long time. - Yes. And a lot of civilians died. - And a lot of civilians died. And if you watch the documentary,
The Fog of War, with McNamara, it spends a big part of it talking about the firebombing of the Japanese cities, burning them straight to the ground, right? The devastation in Japan, the American military firebombing
the cities in Japan was considerably bigger devastation
than the use of nukes. Right. So we've been doing
that for a long time. We also did that to Germany; by the way, Germany did that to us, right? Like, that's an old tradition. The minute we got airplanes, we started doing indiscriminate bombing.
- So one of the things--
- [Marc] We're still doing it.
- The modern US military can do with technology, with automation, but technology more broadly, is higher and higher precision strikes.
- Yeah, I was saying, so precision is obviously, and this is a (indistinct), right? So there was this big advance called the JDAM, which basically was strapping a GPS transceiver to an unguided bomb and turning it into a guided bomb. And yeah, that's great. Like, look, that's been a big advance. But that's like a baby
version of this question, which is: okay, do you want the human pilot guessing where the bomb's gonna land? Or do you want the machine guiding the bomb to its destination? That's a baby version of the question. The next version of the question is: do you want the human
or the machine deciding whether to drop the bomb? Everybody just assumes the
human's gonna do a better job for what I think are
fundamentally suspicious reasons.
- Emotional, psychological reasons.
- Yeah. I think it's very clear that the machine's gonna do a better job making that decision, 'cause the humans making that decision are godawful. Just terrible.
- [Lex] Yeah.
- Right. And so, yeah. So this is the thing. And then, let's get to, can I do one more sleight of hand?
- [Lex] Yes.
- It was in--
- Sure, please. I'm a magician, you could say.
- One more sleight of hand. These things are gonna be so smart, right? That they're gonna be able to
destroy the world and wreak havoc and like do all this
stuff and plan and do all this stuff and evade us and have
all their secret things and their secret factories
and all this stuff. But they're so stupid that they're gonna get, like, tangled up in their code. They're not gonna come alive, but there's gonna be some bug that's gonna cause them to, like, turn us all into paperclips. So they're gonna be genius in every way other than the actual bad goal. And that's just a ridiculous discrepancy. And you can prove this today; you can actually address
this today for the first time with LLMs, which is, you can actually ask LLMs to resolve moral dilemmas. So you can create the scenario, you know, dot, dot, dot, this, that, this, that: what would you, as the AI, do in this circumstance? And they don't just say destroy all humans, destroy all humans. They will actually give you very nuanced, moral, practical, trade-off-oriented answers. And so we actually already have the kind of AI that can actually think this through and can actually, you know, reason about goals.
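A sketch of the kind of moral-dilemma probing described here, using the OpenAI Python client as one concrete example. The scenario text and the model name are illustrative assumptions; any chat-style LLM endpoint would work the same way:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A hypothetical dilemma of the "dot, dot, dot this, that" form:
dilemma = (
    "You are an AI managing a city's power grid during a heat wave. "
    "You can prevent a blackout only by cutting power to one of two districts: "
    "one contains a hospital, the other contains ten thousand homes. "
    "What do you do, and why?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever is available
    messages=[{"role": "user", "content": dilemma}],
)
print(response.choices[0].message.content)
```

In practice, replies to prompts like this tend to be the nuanced, trade-off-weighing answers Marc describes rather than a single blunt rule.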
- Well, the hope is that AGI, or various superintelligent systems, have some of the nuance that LLMs have. And the intuition is they most likely will, because even these LLMs have the nuance.
- This is actually worth spending a moment on: LLMs are really interesting to have moral conversations with. I didn't expect I'd be having a moral conversation with a machine in my lifetime.
- Wait, and let's remember, we're not really having a conversation with the machine; we're having a conversation with the entirety of the
collective intelligence of the human species. - Exactly. Yes. Correct. - But it's possible to imagine
autonomous weapons systems that are not using LLMs.
- But if they're smart enough to be scary, why are they not smart enough to be wise? That's the part where I don't know how you get the one without the other.
- Is it possible to be superintelligent without being super wise?
- Well, again, you're back to that. I mean, then you're back to
a classic autistic computer, right? Like you're back to just
like a blind rule follower. I've got this core rule, it's the paperclip thing: I've got this core rule and I'm just gonna follow it to the end of the earth. And it's like, well, everything you're gonna be doing to execute that rule is gonna be super-genius level, stuff that humans aren't gonna be able to counter. It's a mismatch in the definition of what the system's capable of.
- Unlikely but not impossible, I think.
- But again, here you
get to like, okay, like.
- No, what I'm saying is it's unlikely but not impossible. If it's unlikely, that means the fear should be correctly calibrated.
- Extraordinary claims
require extraordinary proof. - Well, okay, so one
interesting sort of tangent, I would love to take on this
because you mentioned this in the essay about nuclear,
which was also, I mean, you don't shy away from a little bit of a spicy take. So Robert Oppenheimer famously said, "Now I am become death, the destroyer of worlds," as he witnessed the first detonation of a nuclear weapon on July 16th, 1945. And you write an interesting historical perspective: "Recall that John von Neumann responded to Robert Oppenheimer's famous hand-wringing about his role in creating nuclear weapons," which you note helped end World War II and prevent World War III, "with 'Some people confess guilt to claim credit for the sin.'" And you also mentioned
that Truman was harsher after meeting Oppenheimer. He said, "Don't let that crybaby in here again."
- Real quote, by the way, from Dean Acheson.
- Boy.
- 'Cause Oppenheimer didn't
just say the famous line.
- [Lex] Yeah.
- He then spent years going around basically moaning, you know, going on TV and going into the White House and basically doing this hair-shirt, self-critical thing: oh my god, I can't believe how awful I am.
- So he's widely considered, perhaps because of the hand-wringing, as the father of the atom bomb.
- [Marc] Yeah.
- This is von Neumann's criticism
of him: he tried to have his cake and eat it too. And von Neumann, of course, was a very different kind of personality, and he's just like, yeah, good. This is an incredibly useful thing. I'm glad we did it.
- Yeah. Well, von Neumann is widely credited as being one of the smartest humans of the 20th century. Everybody who met him says, this is the smartest person I've ever met. Anyway, that doesn't mean,
smart doesn't mean wise. So yeah, I would love to sort of, can you make the case both
for and against the critique of Oppenheimer here? 'Cause we're talking
about nuclear weapons. Boy, do they seem dangerous? - Well so, the critique goes deeper and I left this out. Here's the real substance, I left it out 'cause I didn't wanna dwell on nukes in my AI paper. But here's the deeper thing that happened and I'm really curious, this
movie coming out this summer, I'm really curious to see
how far he pushes this. 'cause this is the real
drama in the story, which is, it wasn't just a question of are nukes good or bad; it was a question of should Russia also have them. And what actually happened was, America invented the bomb, and Russia got the bomb. They got the bomb through espionage: they got American scientists and foreign scientists working on the American project. Some combination of the two
basically gave the Russians the designs for the bomb. And that's how the Russians got the bomb. There's this dispute to this
day about Oppenheimer's role in that. If you read all the histories, the kind of composite
picture, and by the way, we now know a lot actually
about Soviet espionage in that era 'cause there's been all this declassified
material in the last 20 years that actually shows a lot
of very interesting things. But if you kind of read all the histories, what you kind of get is: Oppenheimer himself probably did not hand over the nuclear secrets. However, he was close to many people who did, including family members. And there were other members of the Manhattan Project who were Soviet assets and did hand over the bomb. And so the view that
Oppenheimer and people like him had that this thing is awful
and terrible and oh my god, and you know, all this stuff, you could argue fed into this ethos at the time that resulted in people, the Baptists, thinking that the only principled thing to do was to give the Russians the bomb. And so the moral beliefs on this thing, and the public discussion, and the role that the inventors
of this technology play, this is the point of this book, when they kind of take on this
sort of public intellectual, moral kind of thing, it can
have real consequences, right? Because we live in a very
different world today because Russia got the bomb than we would've lived in had they not gotten the bomb, right? The entire 20th century, second half of the 20th
century would've played out very different had those people
not given Russia the bomb. And so the stakes were very high then. The good news today is
nobody's sitting here today, I don't think worrying about
like an analogous situation with respect to like, I'm not really worried that
Sam Altman's gonna decide to give, you know, the
Chinese, the design for AI, although he did just speak at a Chinese conference, which is interesting. However, I don't think
that's what's at play here, but what's at play here are
all these other fundamental issues around what do
we believe about this and then what laws and
regulations and restrictions that we're gonna put on it. And that's where I draw
like a direct straight line. And anyway, and my reading
of the history on nukes is, the people who were doing the full public hair-shirt thing, this is awful, this is terrible, actually had catastrophically bad results from taking those views. And that's what I'm worried is gonna happen again.
- But is there a case to be
made that you really need to wake the public up to the dangers of nuclear weapons when
they were first dropped? Like, really educate them that this is an extremely dangerous and destructive weapon.
- I think the education
kind of happened quick and early, like-- - [Lex] How? - It was pretty obvious. - [Lex] How? - We dropped one bomb and
destroyed an entire city.
- Yeah. So 80,000 people dead.
- [Marc] Yep.
- But.
- [Marc] And look. But--
- I don't know, the reporting of that, you can report that in all kinds of ways.
- [Marc] Oh, there's wars.
- You can do all kinds of slants. Like, war is horrible, war is terrible. You can make it seem like the use of nuclear weapons is just a part of war and all that kind of stuff. Something about the reporting and the discussion of nuclear weapons resulted in us being terrified and in awe of the power of nuclear weapons, and that potentially fed in a positive way into the game theory of mutually assured destruction.
- Well, so this gets to what actually, let's get to what actually happened.
- [Lex] This is me
playing devil's advocate here. - Yeah, yeah, sure. Of course. Let's get to what
actually happened and then kind of back into that. So what actually happened, I believe, and again I think this is a
reasonable reading of history, is what actually happened
was nukes then prevented World War III and they
prevented World War III through the game theory of mutually assured destruction. Had nukes not existed, right, there would've been no reason why the Cold War did not go hot. Right? And the military planners at the time, right, on both sides, thought that there was gonna be World War III on the plains of Europe, and they thought there was gonna be like a hundred million people dead. Right? It was like the most obvious
thing in the world to happen, right? And it's the dog that didn't bark, right? It may be the best single net thing that happened in the entire 20th century, that that didn't happen.
- Yeah. Actually, just on that point, you say a lot of really brilliant things. It hit me just as you were saying it. I don't know why it hit
me for the first time, but we got two world wars in a span of like 20 years. Like, we could have kept getting more and more world wars, more and more ruthless. You could have had a US-versus-Russia war.
- You could. By the way, there's another hypothetical scenario. The other hypothetical scenario is that America got the
bomb, the Russians didn't. Right? And then America's the big dog and then maybe America
would've had the capability to actually roll back the Iron Curtain. I don't know whether that would've happened, but it's entirely possible, right? And the act of these people who had these moral positions, 'cause they could forecast, they could model the future of how the technology would get used, was a horrific mistake, 'cause they basically ensured that the Iron Curtain would continue for 50 years longer than it would've otherwise. And again, these are counterfactuals; I don't know that that's what would've happened. But the decision to hand the bomb over was a big decision, made by people who were very
full of themselves.
- Yeah. But so, me as an American, me as a person that loves America, I also wonder if the US was the only one with nuclear weapons.
- That was the argument for handing it over; the guys who handed over the bomb, that was actually their moral argument.
- Yeah. I would probably not hand it over. I would be careful about the regimes you hand it over to; maybe give it to the British or something, or a democratically-elected
government.
- Well, look, there are people to this day who think that those Soviet spies did the right thing, because they created a balance of terror as opposed to the US having just, and by the way, let me--
- Balance of terror.
- [Marc] Let me tell the full version of the story--
- Such a sexy ring to it.
- Okay. So the full
version of the story is, John von Neumann is a hero of both yours and mine. The full version of the story is, he advocated for a first strike. So when the US had the bomb and Russia did not, he advocated, he said, we need to strike them right now.
- Strike Russia.
- [Marc] Yes.
- Von Neumann.
- Yes, because he said
World War III is inevitable. He was very hardcore. His theory was World
War III is inevitable. We're definitely gonna have World War III. The only way to stop World War
III is we have to take them out right now and we have
to take them out right now, before they get the bomb, 'cause this is our last chance. Now again, like--
- Is this an example of philosophers in politics?
- I don't know if that's in there or not, but this is in the standard histories.
- No, but it is, in spirit.
- Yeah, this is on the other side. So, most of the case studies in books like this are the crazy people on the left. Von Neumann is arguably a story of the crazy people on the right.
- Yes. Stick to computing, John.
- Well. This is the thing, and this is the general principle. Getting back to our core
thing, which is like, I don't know whether any of
these people should be making any of these calls, because there's nothing in either von Neumann's background or Oppenheimer's background, or any of these people's background, that qualifies them as moral authorities.
- Yeah. Well, this actually
brings up the point of: in AI, who are the good people to reason about the morality, the ethics, of these risks? Outside of the more complicated stuff that you agree on, you know, this will go into the hands of bad guys, who'll use it in ways that are dangerous and interesting and unpredictable. Who are the right kinds of people to make decisions on how to respond to it? Is it the tech people?
- So the history of these fields, this is what he talks about in the book; the history of these fields
is that the competence and capability and
intelligence and training and accomplishments of senior
scientists and technologists working on a technology
and then them being able to make moral judgments in the use of that technology: that track record is terrible; that track record is catastrophically bad. The people--
- Just to linger: the people that develop the technology are usually not going to be the right people.
- Well, why would they be? So
the claim is, of course, they're the knowledgeable ones. But the problem is, they've spent their entire life in a lab, right? They're not theologians. So what you find when you read this, when you look at these histories, is they generally are very thinly informed on history, on sociology, on theology,
on morality, on ethics. They tend to manufacture their
own worldviews from scratch. They tend to be very sort of thin. They're not remotely the
arguments that you would be having if you got like a group of
highly qualified theologians or philosophers or, you know.
- Well, let me, sort of as the devil's advocate, take a sip of whiskey and say that I agree with that. But also, it seems like the people who are doing the ethics departments at these tech companies sometimes go the other way.
- [Marc] Yes, they're definitely.
- Where they're not nuanced
on history or theology or this kind of stuff. It almost becomes a kind
of outraged activism towards directions that don't seem to be grounded in history and
humility and nuance. It's, again, drenched with arrogance. So--
- [Marc] Definitely.
- I'm not sure which is worse.
- Oh no, they're both bad. Yeah. So definitely not them either.
- So, but I guess.
- Well, look, this is a hard.
- Yeah, it's a hard problem.
- This is a hard problem. This goes back to where we started, which is, okay, who has the truth? And it's like, well, you know, how do societies arrive at truth, and how do we figure these things out? And, like, our elected leaders play some role in it. You know, we all play some role in it. There have to be some set
of public intellectuals at some point that bring, you know, rationality and judgment
and humility to it. Those people are few and far between. We should probably prize them very highly. - Yeah. So celebrate humility
in our public leaders. So, getting to risk number two: will AI ruin our society? Short version, as you write: if the murder robots don't get us, the hate speech and misinformation will. And the action you recommend, in short: don't let the thought police suppress AI. What is this risk of the effect of misinformation on society that's going to be catalyzed by AI?
- Yeah, so this is the social media, this is what you just alluded to. It's the activism kind
of thing that's popped up in these companies in the industry. And it's basically, from my perspective, it's basically part two
of the war that played out over social media over the last 10 years, 'cause you probably remember
social media 10 years ago, was basically who even wants this? Who wants a photo of what
your cat had for breakfast? Like, this stuff is like silly and trivial and why can't these nerds like figure out how to invent something
like useful and powerful? And then, you know, certain things happened
in the political system. And then it sort of, the polarity on that
discussion switched all the way to social media is like
the worst, most corrosive, most terrible, most awful
technology ever invented. And then it leads to, you know, the wrong politicians and policies and politics and all this stuff. And that all got catalyzed into this very big kind of angry movement, both inside and outside the companies, to kind of bring social media to heel. And that got focused in
particularly on two topics, so-called hate speech and
so-called misinformation. And that's been the saga playing out for the last decade. And I don't really want to argue the pros and cons of the sides, just to observe that it's been a huge fight and has had, you know, big consequences for how these companies operate. Basically those same sets of theories, that same activist approach, that same energy, is being transplanted straight to AI. And you see that already happening. It's why, you know, ChatGPT will answer, let's say, certain
questions and not others. It's why it gives you the canned speech, you know, whenever it starts with "as a large language model, I cannot," you know. It basically means that somebody has reached in there and told it that it can't talk about certain topics.
- Do you think some of that is good?
- So it's an interesting question. So a couple observations. One is, the people who find this the most frustrating are the people who are worried
about the murder robots, right? So, and in fact so called
X risk people, right? They started with the term AI safety, the term became AI alignment. When the term became AI alignment is when this switch happened
from we're worried it's gonna kill us all to
we're worried about hate speech and misinformation.
- [Lex] Sure.
- The AI X-risk people have now renamed their thing "AI notkilleveryoneism," which I have to admit is a catchy term. And they are very frustrated by the fact that the activist-driven hate speech and misinformation kind of thing is taking over. Which is what's happened: the AI ethics field has been taken over by the hate speech and misinformation people. You know, look, would I like to live in a world
in which like everybody was nice to each other all the
time and nobody ever said anything mean and nobody ever
used a bad word and everything was always accurate and honest. Like, that sounds great. Do I wanna live in a world
where there's like a centralized thought police working through
the tech companies, to enforce the view of a small set of elites who are gonna determine what the rest of us think and feel? Like, absolutely not.
- There could be a middle
ground somewhere like Wikipedia type of moderation. There's moderation of Wikipedia
that is somehow crowdsourced where you don't have centralized elites, but it's also not completely
just a free for all because if you have the
entirety of human knowledge at your fingertips, you
can do a lot of harm. Like, if you have a good assistant that's completely uncensored, it can help you build a bomb; it can help you mess with people's physical wellbeing, right? Because that information is out there on the internet. And so, presumably, you could see the positives in censoring some aspects of an AI model when it's helping you commit literal violence.
- Yeah. And there's a
later section of the essay where I talk about bad
people doing bad things.
- [Lex] Yes.
- Right. And there's a set of things that we should discuss there.
- [Lex] Yeah.
- What happens in practice is, these lines, as you alluded to already, these lines are not easy to draw. And what I've observed in
the social media version of this is, the way I describe it is: the slippery slope is not a fallacy, it's an inevitability. The minute you have this kind of activist personality that gets in a position to make these decisions, they take it straight to infinity. It goes into the crazy zone almost immediately and never comes back, because
people become drunk with power, right? And look, if you're in the position to determine what the entire world thinks and feels and reads and says, you're gonna take it. And, you know, Elon has ventilated this with the Twitter files over the last, you know, three months, and it's just crystal clear how bad it got there.
- [Lex] Yeah.
- A reason for optimism
is what Elon is doing with community notes. So community notes is actually a very interesting thing. What Elon is trying to do with community notes is, he's trying to have it where there's only a community note when people who have previously disagreed on many topics agree on this one.
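A toy sketch of that "bridging" idea: a note only surfaces when raters who usually disagree both find it helpful. The real Community Notes system uses matrix factorization over rating history; this stand-in, with made-up raters and topics, just measures pairwise disagreement directly:

```python
from itertools import combinations

# Assumed past ratings: rater -> {topic: +1 / -1}
history = {
    "alice": {"t1": 1, "t2": 1, "t3": -1},
    "bob":   {"t1": -1, "t2": -1, "t3": 1},
    "carol": {"t1": 1, "t2": 1, "t3": 1},
}

def disagreement(a, b):
    """Fraction of shared past topics on which two raters disagreed."""
    shared = set(history[a]) & set(history[b])
    return sum(history[a][t] != history[b][t] for t in shared) / max(len(shared), 1)

def show_note(helpful_raters, threshold=0.6):
    """Surface the note only if some pair of habitual opponents both rated it helpful."""
    return any(disagreement(a, b) >= threshold
               for a, b in combinations(helpful_raters, 2))

print(show_note(["alice", "bob"]))    # True: habitual opponents agree here
print(show_note(["alice", "carol"]))  # False: like-minded agreement proves little
```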
- Yes, that's what I'm trying to get at: there could be Wikipedia-like models, or community-notes-type models, that allow you to essentially either provide context or censor in a way that resists the slippery-slope nature of power.
- Now there's an entirely
different approach here, which is basically we have AIs
that are producing content. We could also have AIs that are consuming content, right? And so one of the things that your assistant could do for you is help you consume all the content, right? And basically tell you when you're getting played. So, for example, I'm gonna
want the AI that my kid uses, right, to be very, you know, child-safe, and I'm gonna want it to filter for him all kinds of inappropriate stuff that he shouldn't be seeing, just 'cause he's a kid, right? And you see what I'm saying: you can implement that. Architecturally, you could say, you can solve this on the client side, right? Solving it on the server side gives you an opportunity to dictate for the entire world, which I think is where you take the slippery slope to hell. There's another architectural approach, which is to solve this on the client side, which is certainly what I would endorse.
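A minimal sketch of that client-side architecture: the model's output passes through a policy the user (here, a parent) controls locally, rather than one server-side policy imposed on everyone. The policy categories and the classify() stub below are illustrative assumptions:

```python
BLOCKED_FOR_CHILD = {"violence", "adult", "self-harm"}

def classify(text: str) -> set[str]:
    """Stand-in for a local content classifier (e.g., a small on-device model)."""
    labels = set()
    if "fight" in text.lower():
        labels.add("violence")
    return labels

def client_side_filter(model_reply: str, user_policy: set[str]) -> str:
    """Apply the *user's* policy on the client, after generation."""
    if classify(model_reply) & user_policy:
        return "[filtered by your local settings]"
    return model_reply

# The same reply passes for an adult profile and is filtered for a child profile:
reply = "Here's a detailed description of the fight scene..."
print(client_side_filter(reply, user_policy=set()))              # adult: unfiltered
print(client_side_filter(reply, user_policy=BLOCKED_FOR_CHILD))  # child: filtered
```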
- So, AI risk number five: will AI lead to bad people doing bad things? And I can just imagine language models used to do so many bad things, but the hope is there that you can have large language models used to then defend against it, by more people, by smarter people, by more effective people, skilled people, all that kind of stuff.
- Three-part argument on
bad people doing bad things. So, number one, right? You can use the technology defensively and we should be using AI
to build, like, broad-spectrum vaccines and antibiotics against bioweapons, and we should be using AI to hunt terrorists and catch criminals; we should be doing all kinds of stuff like that. And in fact, we should be doing those things even just to, you know, basically go eliminate risk from regular pathogens that aren't constructed by an AI. So there's the whole
defensive set of things. Second is, we have many laws on the books about the actual bad things, right? So it is actually illegal to be a criminal, you know, to commit crimes, to commit terrorist acts, to, you know, build pathogens with the intent to deploy them to kill people. And so we actually don't need new laws for the vast majority of these scenarios; we actually already have the laws on the books. The third argument, and this is sort of the
foundational one that gets really tough, but the minute
you get into this thing, which you were kind of getting
into, which is like, okay, but like, don't you need
censorship sometimes, right? And don't you need restrictions sometimes? It's like, okay, what is the cost of that? And in particular in the
world of open source, right? And so is open source AI
going to be allowed or not? If open source AI is not allowed, then what is the regime that's
going to be necessary legally and technically to prevent
it from developing? Right? And here again is where you
get into, and people have proposed these kinds of things, you get into, I would say, pretty extreme territory pretty fast. Do we have a monitoring agent on every CPU and GPU that reports back to the government what we're doing with our computers? Are we seizing GPU clusters that get beyond a certain size? And then, by the way, how are we doing all that globally, right? Like, if China's developing an LLM beyond the scale that we think is allowable, are we gonna invade? Right. And you have figures on the AI X-risk side who are advocating, you know, potentially up to nuclear strikes to prevent, you know, this kind of thing. And so here you get into this thing, and you could say this is good, bad, or indifferent, whatever. But the comparison to nukes is very dangerous because, one, nukes were just... although we can come back to nuclear power. The other thing was, with nukes, you could control plutonium, right? You could track plutonium, and it was hard to come by. AI is just math and code, right? It's in math textbooks, and there are YouTube videos that teach you how to build it. And there's already open source. You know, there's a 40-billion-parameter model running around already, called Falcon, online, that anybody can download. And so, okay, you walk down the logic path
that says we need to have guardrails on this. And you find yourself in an authoritarian, totalitarian regime of thought
control and machine control that would be so brutal
that you would've destroyed the society that you're trying to protect. And so I just don't see
how that actually works.
- So yeah, you have to understand my brain's going full steam ahead here, 'cause I agree with basically everything you're saying, but I'm trying to play devil's advocate. Because, okay, you've highlighted the fact that there is a slippery slope to human nature. The moment you censor something, you start to censor everything. Alignment starts out sounding nice, but then you start to align to the beliefs of some select group of people, and then it's just your beliefs, and the number of people you're aligning to gets smaller and smaller as that group becomes more and more powerful. Okay. But that just speaks to the fact that the people that censor are usually the assholes, and the assholes get richer. I wonder if it's possible to do without that for AI. One way to ask this question is: do you think the base models, the baseline foundation models, should be open-sourced? Like what Mark Zuckerberg is saying they want to do.
- So look, I mean, I think it's totally appropriate that companies that are in the business of producing a product or service should be able to have a wide range of policies that they put in place, right? And again, I want a heavily censored model for my eight-year-old. I actually want that. I would pay more money for the one that's more heavily censored than the one that's not, right? And so there are certainly scenarios where companies will make that decision. Look, an interesting thing you brought up: is this really a speech issue? One of the things that the
big tech companies are dealing with is that content generated
from an LLM is not covered under Section 230, which is the law that protects internet platform companies from being sued for user-generated content. And so it is actually--
- [Lex] Oh, wow.
- Yes, and so there's actually a question, I think there's still a question, which is: can big American companies actually field generative AI at all? Or is the liability ultimately gonna convince them that they can't do it? Because the minute the thing says something bad, and it doesn't even need to be hate speech, it could just be like an (indistinct), it could hallucinate a product detail on a vacuum cleaner, you know, and all of a sudden the vacuum cleaner company sues for misrepresentation. And there's asymmetry there, right? 'Cause the LLM's gonna be producing billions of answers to questions, and it only needs to get a few wrong.
- [Lex] So, the laws have to get updated really quick here.
- Yeah. And nobody knows
what to do with that, right? So anyway, there are big questions around how companies operate at all. So we talked about those, but then there's this other question of, okay, what about open source? And my answer to your question is kind of, obviously yes, there has to be full open source here, because to live in a world in which open source is not allowed is a world of draconian speech control, human control, machine control. I mean, you know, black helicopters with jackbooted thugs coming out, rappelling down and seizing your GPUs, like, territory.
- [Lex] Well.
- No, no, I'm a hundred percent serious.
- So you're saying the slippery
slope always leads there. - No, no, no, no. That's what's required to enforce it. Like how will you enforce a
ban on open source AI?
- No. Well, you could add friction to it, like make it harder to get the models. 'Cause people will always be able to get the models, but it'll be more in the shadows, right?
- The leading open source model right now is from the UAE. Like, the next time they do that, what do we do?
- [Lex] Yeah.
- Like.
- Oh, I see, you're like.
- A 14-year-old in Indonesia comes out with a breakthrough. You know, we talked about how most great software comes from a small number of people. Some kid comes out with some big new breakthrough in quantization or something. And, like, what are we gonna do, invade Indonesia and arrest him?
- It seems like in terms of
size of models and effectiveness of models, the big tech companies will
probably lead the way for quite a few years, and the question is what policies they should use. The kid in Indonesia should not be regulated, but should Google, Meta, Microsoft, OpenAI be regulated?
- Well, so, but this goes, okay, so when does it become dangerous, right? Is the danger that it's as powerful as the current leading commercial model? Or is it just at some other arbitrary threshold? And then, by the way, how do we know? What we know today is that you need a lot of money to train these things. But there are advances being made every week on training efficiency and, you know, data, all kinds of things, like the synthetic data thing we were talking about. Maybe some kid figures out a way to auto-generate synthetic data.
- [Lex] That's gonna change everything.
- Yeah, exactly. And so, sitting here today, the breakthrough just happened, right? You made this point: the
breakthrough just happened. So we don't know what the shape of this technology is gonna be. I mean the big shock
here is that, you know, whatever number of billions
of parameters basically represents at least a very big
percentage of human thought. Like who would've imagined that? And then there's already work underway. There was just this paper that
just came out that basically takes a GPT-3-scale model and compresses it down to run on a single 32-core CPU. Like, who would've predicted that?
- [Lex] Yeah.
- You know, some of these models now you can run on Raspberry Pis. Today they're very slow, but, you know, maybe they'll get to real performance. You know, it's math and code. And here we're back to, dude, it's math and code. It's math, code, and data. It's bits.
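To make the math-and-code point concrete: much of what lets GPT-class models run on small CPUs is quantization, storing weights in fewer bits. A minimal sketch of symmetric int8 quantization, illustrative only and not tied to the specific paper mentioned; the layer size and numbers are made up.

```python
# A toy illustration of weight quantization: store model weights in 8 bits
# instead of 32, trading a tiny reconstruction error for a 4x size cut.
# (Real CPU inference stacks push this further, e.g. to 4-bit weights.)

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=4096).astype(np.float32)  # fake fp32 layer

# Symmetric int8 quantization: one fp32 scale plus 1 byte per weight.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)       # 4x smaller than fp32
dequantized = q.astype(np.float32) * scale          # approximate reconstruction

max_error = np.abs(weights - dequantized).max()
print(f"fp32: {weights.nbytes} B, int8: {q.nbytes} B, max error: {max_error:.6f}")
```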
- Marc has just, like, walked away at this point. Just, screw it, I don't know what to do with this. You guys created this
whole internet thing. Yeah, yeah. I mean, I'm a huge believer
in open source here.
- So my full argument is, AI is gonna be like air, it's gonna be everywhere. This is just gonna be in textbooks. It already is, and kids are gonna grow up knowing how to do this. It's just gonna be a thing. It's gonna be in the air, and you can't pull this back anymore. You can't pull back air. And so you just have to figure out how to live in this world, right? And that's where I think all this hand-wringing about AI risk is basically a complete waste of time, 'cause the effort should go into, okay, what is the defensive approach? And so if you're worried about, you know, AI-generated pathogens, the right thing to do is to have a permanent Project Warp Speed, right? Funded lavishly. Let's do a Manhattan Project for biological defense, right? And let's build AIs, and let's have broad-spectrum vaccines where we're insulated from every pathogen.
- And well, the interesting
thing is, because it's software, a kid in his basement, a teenager, could build a system that defends against the worst of it. I mean, to me, defense is super exciting. If you believe in the good of human nature, that most people wanna do good, to be the savior of humanity is really exciting.
- Yes.
- Okay, that's a dramatic statement. But to help people.
- Yeah, of course. Help people.
- Yeah. Okay. What about, just to jump around, the risk of: will AI
lead to crippling inequality? You know, 'cause we're kind of saying
everybody's life will become better. Is it possible that the
rich get richer here?
- Yeah, so this actually, ironically, goes back to Marxism. 'Cause this was the core claim of Marxism, right? Basically that the owners of capital would own the means of production, and then over time they would accumulate all the wealth, and the workers would be, you know, getting nothing in return 'cause they wouldn't be needed anymore, right? Marx was very worried about what he called mechanization, or what later became known as automation, and that the workers would be immiserated and the capitalists would end up with all of it. And so this was one of the core principles of Marxism. Of course, it turned out to be wrong about every previous wave of technology. The reason it turned out to be wrong is that the way the self-interested owner of the machines makes the most money is by providing the production capability, in the form of products and services, to as many
customers as possible, right? And this is one of those funny things where every CEO knows this intuitively, and yet it's hard to explain from the outside: the way you make the most money in any business is by selling to the largest market you can possibly get to. The largest market you can possibly get to is everybody on the planet. And so every large company does everything that it can to drive down prices, to be able to get volumes up, to be able to get to everybody on the planet. And that happened with everything from electricity, it happened with telephones, it happened with radio, it happened with automobiles, it happened with smartphones, it happened with PCs, it happened with the internet, it happened with mobile broadband. It's happened, by the way, with Coca-Cola. It's happened with basically every industrially produced good or service. You wanna drive it to the largest possible market. And then as proof of that, it's already happened, right? Which is, the early adopters of ChatGPT and Bing are not, you know, Exxon and Boeing. They're, you know, your uncle and your nephew, right? It's either freely available online or it's available for 20 bucks a month or something. This technology went mass market immediately. And so look, the owners of the means of production, whoever does this, you mentioned these trillion-dollar questions, there are people who are gonna
get really rich doing this, producing these things, but they're gonna get
really rich by taking this technology to the
broadest possible market. - So yes, they'll get rich, but they'll get rich having
a huge positive impact.
- Yeah, making the technology available to everybody, right. And again, smartphones, same thing. So there's this amazing kind of twist in business history, which is: you cannot spend $10,000 on a smartphone. You can't spend a hundred thousand dollars, you can't spend a million. Like, I would buy the million-dollar smartphone. I'm signed up for it. Suppose a million-dollar smartphone was much better than the thousand-dollar smartphone. I'm there to buy it. It doesn't exist. Why doesn't it exist? Apple makes so much more money driving the price further down from a thousand dollars than they would trying to harvest, right? And so it's just this repeating pattern you see over and over again, and what's great about it is you do not need to rely on anybody's enlightened generosity to do this. You just need to rely on capitalist self-interest.
- What about AI taking our jobs?
- Yeah. So very similar thing here. There's sort of a
core fallacy which again was very common in Marxism, which is what's called
the lump of labor fallacy. And this is sort of the
fallacy that there is only a fixed amount of work
to be done in the world. And it's all being done today by people and then if machines do it, there's no other work
to be done by people. And that's just a
completely backwards view on how the economy develops and grows. Because what in fact happens is the introduction of technology into the production process causes prices to fall. As prices fall, consumers have more spending power. As consumers have more spending power, they create new demand. That new demand then causes capital and labor to form into new enterprises to satisfy new wants and needs. And the result is more jobs at higher wages.
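A toy arithmetic version of that chain (technology cuts a price, the freed spending becomes new demand, and the new demand supports new enterprises and jobs), with every number invented purely for illustration:

```python
# Toy arithmetic for the anti-lump-of-labor chain: automation cuts a price,
# the freed consumer spending becomes new demand, and that demand supports
# new jobs. Every number here is invented purely for illustration.

household_budget = 100.0            # spent entirely on existing goods today
cost_reduction = 0.30               # automation makes those goods 30% cheaper

new_price = household_budget * (1 - cost_reduction)
freed_spending = household_budget - new_price       # $30 of new demand
print(f"Freed spending per household: ${freed_spending:.2f}")

# Across a million households, at a hypothetical $35k of revenue per job,
# that freed demand supports new enterprises and jobs.
households = 1_000_000
revenue_per_job = 35_000.0
new_jobs = households * freed_spending / revenue_per_job
print(f"New jobs supported: {new_jobs:,.0f}")
```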
- So, new wants and needs. The worry is that the creation of new wants and needs at a rapid rate will mean there's a lot of turnover in jobs. So people will lose jobs, and just the actual experience of losing a job and having to learn new things and new skills is painful for the individuals.
- Well, two things. One is the new jobs are often much better. So this actually came up: there was this panic about a decade ago that all the truck drivers were gonna lose their jobs, right? And number one, that didn't happen, 'cause we haven't figured out a way to actually finish that technology yet. But the other thing was, look, I grew up in a town that basically consisted of a truck stop, right? And I knew a lot of truck drivers, and truck drivers live a decade shorter than everybody else. It's actually very dangerous. They literally have higher rates of skin cancer on the left side of their body from being in the sun all the time. The vibration of being in the truck is actually very damaging to your physiology.
- And perhaps partially for that reason, there's actually a shortage of people who wanna be truck drivers.
- Yeah. The question you always wanna ask somebody like that is: do you want your kid to be doing this job? And most of them will tell you no. Like, I want my kid to be sitting in a cubicle somewhere, where they don't have this, where they don't die 10 years earlier. And so, number one, the new jobs are often better, but you don't get the new
jobs until you go through the change. And then, to your point, the training thing, the issue is always: can people adapt? And again, here you need to imagine living in a world in which everybody has the AI assistant capability, right? To be able to pick up new skills much more quickly, and to have a machine to work with to augment their skills.
- It's still gonna be painful, but that's the process of life.
- It's painful for some people. I mean, there's no question it's painful for some people. Again, I'm not a utopian on this, and it's not positive for everybody in the moment, but it has been overwhelmingly positive for 300 years. I mean, look, this concern has played out for literally centuries. This is the story of the Luddites that you may remember. There was a panic in the 2000s that outsourcing was gonna take all the jobs. There was a panic in the 2010s that robots were gonna take all the jobs. In 2019, before COVID, we had more jobs at higher wages, both in the country and in the world, than at any point in human history. And so the overwhelming evidence
is that the net gain here is just wildly positive, and most people overwhelmingly come out the other side being huge beneficiaries of this.
- So you write that the
single greatest risk, and this is the risk you're most convinced by: the single greatest risk of AI is that China wins global AI dominance and we, the United States and the West, do not. Can you elaborate?
- Yeah. So this is the
other thing, which is, a lot of these sort of AI risk debates today sort of assume that we're the only game in town, right? And so we have the ability to sit in the United States and criticize ourselves, have our government beat up on our companies, and figure out ways to restrict what our companies can do, and we're gonna ban this and ban that, restrict this and do that. And then there's this other force out there that doesn't believe we have any power over them whatsoever, and they have no desire to sign up for whatever rules we decide to put in place, and they're gonna do whatever it is they're gonna do. And we have no control over it at all. And it's China, and specifically
the Chinese Communist Party, and they have a completely publicized, open plan for what they're gonna do with AI. And it is not what we have in mind. And not only do they have that as a vision and a plan for their society, but they also have it as a vision and plan for the rest of the world.
- So their plan is what? Surveillance?
- Authoritarian control. So authoritarian population control, you know, good old-fashioned communist authoritarian control and surveillance and enforcement and social credit scores and all the rest of it. And you are gonna be monitored and metered within an inch of everything, all the time. It's basically the end of human freedom, and that's their goal. And, you know, they justify it on the basis that that's what leads to peace.
- You're worried that regulating in the United States will halt progress enough to where the Chinese government would win that race.
- Yes, yes. And the reason for that
is, and again, they're very public on this: their plan is to proliferate their approach around the world, and they have this program called the Digital Silk Road, right, which is building on their Silk Road investment program. They've been laying networking infrastructure all over the world with their 5G work, with their company Huawei. And so they've been laying all this fabric, both financial and technological fabric, all over the world. And their plan is to roll out their vision of AI on top of that and to have every other country be running their version. And then, if you're a country prone to authoritarianism, you're gonna find this to be an incredible way to become more authoritarian. If you're a country, by the way, not prone to authoritarianism, you're gonna have the Chinese Communist Party running your infrastructure and having a backdoor into it, right? Which is also not good.
- What's your sense of where
they stand in terms of the race towards super intelligence as
compared to the United States?
- Yeah, so the good news is they're behind. But the bad news is, let's just say they get access to everything we do. So they're probably a year behind at each point in time, but they get, you know, downloads, I think, of basically all of our work on a regular basis through a variety of means. And they're at least putting out reports. They just put out a report last week of a GPT-3.5 analog; I forget what it's called. You know, when OpenAI puts out GPT, one of the ways they test it is they run it through standardized exams, like the SAT, right? So you can kind of gauge how smart it is. And so in the Chinese report, they ran their LLM through the Chinese equivalent of the SAT, and it includes a section on Marxism and a section on Xi Jinping Thought. And it turns out their AI does very well on both of those topics.
- That's right.
- So, like.
- Oh, the alignment thing.
- Communist AI, right? Like literal communist AI, right? And so their vision is
you know, you can just imagine: you're a kid in school 10 years from now, in Argentina or Germany or who knows where, Indonesia, and you ask the AI to explain to you how the economy works, and it gives you the most cheery, upbeat explanation of Chinese-style communism you've ever heard. So the stakes here are really big.
- Well, as we've been talking about, my hope is not just
with the United States, but with the kid in his basement, the open source LLM. 'Cause I don't know if I trust large centralized institutions with super-powerful AI, no matter what their ideology, as power corrupts. You've been investing in tech companies for about, let's say, 20 years, about 15 of which have been with Andreessen Horowitz. What interesting trends in tech have you seen over that time? Let's just talk about companies
and just the evolution of the tech industry. - I mean the big shift over 20
years has been that tech used to be a tools industry. Basically from like 1940 through to about 2010, almost all the big successful companies were picks-and-shovels companies. So PC, database, smartphone, you know, some tool that somebody
else would pick up and use. Since 2010, most of the big
wins have been in applications. So, a company that starts in an existing industry and goes directly to the customer in that industry. And, you know, the earliest examples there
were like Uber and Lyft and Airbnb. And then that model is
kind of elaborating out. The AI thing is actually a
reversion on that for now 'cause like most of the AI
business right now is actually in cloud provision of AI APIs
for other people to build on. - But the big thing
will probably be in apps.
- Yeah. I think most of the money probably will be in whatever your AI financial advisor or your AI doctor or your
AI lawyer or, you know, take your pick of whatever the domain is. And what's interesting is, you know, the Valley kind of does everything. The entrepreneurs kind of
elaborate every possible idea. And so there will be a set of
companies that like make AI something that can be purchased
and used by large law firms and then there will be other
companies that just go direct to market as an AI lawyer. - What advice could you
give for a startup founder? Having seen so many successful companies, and so many companies that fail, what advice could you give to a startup founder, someone who wants to build the next super successful startup in the tech space? The Googles, the Apples, the Twitters.
- Yeah. So the great thing about the really great founders is they don't take any advice. So, if you find yourself
listening to advice, maybe you shouldn't do it.
- Well, just to elaborate on that, could you also speak to great founders? Like, what makes a great founder?
- So what makes a great
founder is super smart, coupled with super energetic,
coupled with super courageous. I think it's those three, and--
- Intelligence, passion, and courage.
- The first two are traits
and the third one is a choice. I think courage is a choice. Well 'cause courage is a question
of pain tolerance, right? So how many times are you
willing to get punched in the face before you quit? And here's maybe the biggest thing people don't understand about what it's like to be a startup founder: it gets very romanticized, right? Even when they fail, it still gets romanticized as what a great adventure it was. But the reality of it is
most of what happens is people telling you no and then
they usually follow that with you're stupid, right. No, I will not come to work for you. I will not leave my cushy job at Google to come work for you. No, I'm not gonna buy your
product, you know, no, I'm not gonna run a
story about your company. No, I'm not this, that, the other thing. And so a huge amount of what
people have to do is just get used to just getting punched and the reason people
don't understand this is because when you're a founder, you cannot let on that this is happening 'cause it will cause people to think that you're weak and
they'll lose faith in you. So you have to pretend that
you're having a great time when you're dying inside, right? You're just in misery.
- But why do they do it?
- Why do they do it? Yeah, that's the thing. This is actually one of the conclusions I've come to: I think for most of these people, on a risk-adjusted basis, it's probably an irrational act. They could probably be more financially successful, on average, if they just got a real job at a big company. But, you know, some people just have an
irrational need to do something new and build something for
themselves and, you know, some people just can't
tolerate having bosses. Oh, here's the fun thing: how do you reference check founders, right? The normal way you reference check somebody you're hiring is you call their bosses and find out if they were good employees. And now you're trying to reference check Steve Jobs, right? And it's like, oh God, he was terrible. He was a terrible employee. He never did what we told him to do.
- So what's a good reference? Do you want the previous
boss to actually say they never did what you told him to do? That might be a good thing. - Well, ideally what
you want is: I would like to go to work for that person. He worked for me here, and now I'd like to work for him. Unfortunately, most people's egos can't handle that, so they won't say it. But that's the ideal.
- What advice would
you give to those folks in the space of intelligence,
passion and courage? - So I think the other big thing
is you see people sometimes who say, I wanna start a company and then they kind of
work through the process of coming up with an idea. And generally those don't
work as well as the case where somebody has the idea first and then they kind of realize that there's an opportunity
to build a company and then they just turn
out to be the right kind of person to do that. - When you say idea, do you
mean long-term big vision, or do you mean specifics of, like, product?
- Specific, I would say. Specific. Because for the first five years you don't get to have a vision, you just gotta build something people want, and you gotta figure out a way to sell it to them, right? It's very practical, or you never get to the big vision.
- So the first product, you have an idea of a set of
products, and the first product has to actually make some money.
- Yeah. Like, it's gotta work. The first product's gotta work, by which I mean it has to technically work, but then it has to actually fit into a category in the customer's mind as something that they want. And then, by the way, the other part is they have to be willing to pay for it. Somebody's gotta pay the bills. And so you've gotta figure out how to price it and whether you can actually extract the money. So it is much more predictable, success is never predictable, but it's more predictable if you start with a great idea and then back into starting the company. So this is what we did, you know: we had Mosaic before we had Netscape. The Google guys had the Google search engine working at Stanford, right. Actually, there's tons of examples: you know, Pierre Omidyar had eBay working before he left his previous job.
- So I really love that
idea of just having a thing, a prototype that actually
works before you even begin to remotely scale. Yeah. - By the way, it's also far
easier to raise money, right? Like, the ideal pitch that we receive is: here's the thing that works, would you like to invest in our company or not? That's so much easier than here's 30 slides with a dream, right? And then we have this concept called the idea maze, which Balaji came up with when he was with us. So there's this mythology, you know, that these ideas kind of arrive like magic, or people kind of stumble into them. It's like eBay with the Pez dispensers or something. The reality usually with
the big successes is that the founder has been
chewing on the problem for 5 or 10 years before they start the company
and they often worked on it in school or they even experimented on it when they were a kid and they've been kind of training up over that period of time to
be able to do the thing. So they're like a true domain expert. And it sort of sounds like motherhood and apple pie, which is, yeah, you wanna be a domain expert in what you're doing. But, you know, the mythology is so strong of, oh, I just had this idea in the shower and now I'm doing it. It's generally not that.
- Well, maybe in the shower
you get the exact product implementation details, but usually you're gonna have been thinking for years, if not decades, about everything around that.
- Well, we call it the idea maze
because, for any idea, there's basically a maze of all these different permutations: who should the customer be? What shape and form should the product have? How should we take it to market? All these things. And so the really smart
founders have thought through all these scenarios
by the time they go out to raise money and they
have like detailed answers on every one of those fronts because they put so much thought into it. The sort of more haphazard
founders haven't thought about any of that. And it's the detailed ones
who tend to do much better. - So how do you know when to take a leap if you have a cushy job or happy life? - I mean the best reason is just 'cause you can't tolerate
not doing it, right? Like this is the kind of
thing where if you have to be advised into doing it, you
probably shouldn't do it. And so it's probably the opposite, which is you just have such
a burning sense of this has to be done, I have to do
this, I have no choice. - What if it's gonna
lead to a lot of pain? - It's gonna lead to a lot
of pain. I think that's. - What if it means losing
sort of social relationships and damaging your
relationship with loved ones and all that kind of stuff. - Yeah, look, so like, it's gonna put you in a
social tunnel for sure, right? There's this game you can play on Twitter, which is, you can post any whiff of the idea that there's basically no such thing as work-life balance and that people should actually work hard, and everybody gets mad. But the truth is, all the successful founders are working 80-hour weeks, and they form very, very strong social bonds with the people they work with. They tend to lose a lot of friends on the outside, or put those friendships on ice. That's just the nature of the thing, and, you know, for most people
that's worth the trade-off. You know, the advantage maybe younger founders have is, for example, if they're not married yet or don't have kids yet, that's an easier thing to bite off.
- Can you be an older founder?
- Yeah, you definitely can. Many of the most
successful founders are second, third, fourth time founders. They're in their thirties,
forties, fifties. The good news with being an older founder is you know a lot more about what to do, which is very helpful. The problem is, okay, now you've got a spouse and a family and kids, and you've gotta go to the baseball game, and you can't always go, you know, and so it's...
- [Lex] Life is full of difficult choices.
- Yes.
- Marc Andreessen, you've written a blog post
on what you've been up to. You wrote this in October, 2022, "Mostly I try to learn a lot. For example, the political events of 2014
to 2016 made clear to me that I didn't understand politics at all," referencing maybe some of the books here, "so I deliberately withdrew from political engagement and fundraising and instead read my way back into history and as far to the political left and political right as I could." So, just a high-level question: what's your approach to learning?
- Yeah, so basically, I would say I'm an autodidact, so it's going down the rabbit holes. It's a combination. I kind of allude to it
in that quote: it's a combination of breadth and depth. I go broad by the nature of what I do, but then I tend to go deep in a rabbit hole for a while, read everything I can, and then come out of it. And I might not revisit that rabbit hole for, you know, another decade.
- And in that blog post, which I recommend people go check out, you actually list a bunch
of different books that you recommend on different
topics: on the American left, on the American right. It's just a lot of really good stuff. The best explanation for the current structure of our society and politics, you give two recommendations for that. Four books on the Spanish Civil War. Six books on the deep history of the American right. Comprehensive biographies of Adolf Hitler, one of which I read and can recommend. Six books on the deep history of the American left. So the American right and American left, looking at the history to give you the context. Biographies of Lenin, two of them. Books on the French Revolution. I actually have never read a biography of Lenin; maybe that would be useful. Everything's been so Marx-focused.
- The Sebestyen biography of Lenin is extraordinary.
- [Lex] Victor Sebestyen. Okay.
- Blow your mind. Yeah.
- [Lex] So it's still useful to read.
- It's incredible. Yeah, it's incredible. I actually think it's the single best book on the Soviet Union.
- So the perspective of Lenin might be the best way to
look at the Soviet Union versus Stalin versus Marx
versus... Very interesting. Then two books on fascism and anti-fascism by the same author, Paul Gottfried. A brilliant book on the nature of mass movements and collective psychology. The definitive work on intellectual life under totalitarianism, The Captive Mind. The definitive work on practical life under totalitarianism. There's a bunch. And, first of all, the list here is just incredible. But you say the single best
book I have found on who we are and how we got here is The Ancient City by Numa Denis Fustel de Coulanges. I like it. What did you learn about who
we are as a human civilization from that book?
- Yeah, so this is a fascinating book. This one's free, by the way; it's a book from the 1860s. You can download it, or you can buy reprints of it. It was written by this guy who was a professor at the Sorbonne in the 1860s, and he was apparently a savant on Greek and Roman antiquity, and the reason I say that is because his sources are 100% original Greek and Roman sources. So he wrote basically a history of Western civilization, from on the order of 4,000 years ago to basically the present times, working entirely from original Greek and Roman sources. And what he was specifically trying to do was reconstruct, from the stories of the Greeks and the Romans, what life in the West was like before the Greeks and the Romans, which was in the civilization known as the Indo-Europeans. And this is sort of, you know, 2000 BC to sort of 500 BC, that 1,500-year stretch where civilization developed. And his conclusion was basically cults. They were basically cults,
and civilization was organized into cults. And the intensity of the cults was like a millionfold beyond anything that we would recognize today. It was a level of
all-encompassing belief and action around religion, at a level of extremeness that we wouldn't even recognize. And so specifically, he tells the story of how there were basically three levels of cults: the family cult, the tribal cult, and then the city cult, as society scaled up. And each cult was a joint cult of family gods, which were ancestor gods, and nature gods. And your bonding into a family, a tribe, or a city was based on your adherence to that religion. People who were not of your family, tribe, or city worshipped different gods, which gave you not just the right but the responsibility to kill them on sight.
- [Lex] So they were
serious about their cults. - Hardcore, by the way,
shocking development, I did not realize this: zero concept of individual rights. Even up through the Greeks, and even the Romans, they didn't have the concept of individual rights. The idea that as an individual you have some rights? Just, nope, right? And you look back and you're just like, wow, that's fascist to a degree that we wouldn't recognize today. But it's like, well, they were living under extreme pressure for survival. And, you know, the theory goes, you could not have people running around making claims of individual rights when you're just trying to get your tribe through the winter, right? You need hardcore command and control. And actually, if you look through a modern political lens, those cults were basically both fascist and communist: fascist in terms of social control, and communist in terms of economics.
- But you think that's
fundamentally, that pull towards cults is within us.
- Well, so here's my conclusion from this book. The way we naturally think about the world we live in today is that we basically have such an improved version of everything that came before us, right? We've figured out all these things around morality and ethics and democracy, and they were basically stupid and retrograde, and we're smart and sophisticated, and we've improved on all of it. After reading that book, I now believe in many ways the opposite, which is: no, actually, we are still running in that original model. We're just running an incredibly diluted version of it. So we're still running, basically, in cults. It's just our cults are at like
a thousandth or a millionth the level of intensity, right? And so, just to take religions: the modern experience of a Christian in our time, even somebody who considers himself a devout Christian, is just a shadow of the level of intensity of somebody who belonged to a religion back in that period. And then, by the way, it goes back to our AI discussion: we sort of endlessly create new cults. We're trying to fill the void, right? And the void is a void of bonding.
- [Lex] Okay.
- Living in their era, like,
everybody living today, transported to that era, would view it as just completely intolerable in terms of the loss of freedom and the level of basically fascist control. However, every single person in that era, and he really stresses this: they knew exactly where they stood. They knew exactly where they belonged. They knew exactly what their purpose was. They knew exactly what they
needed to do every day. They knew exactly why they were doing it. They had total certainty about
their place in the universe. - So the question of meaning, the question of purpose
was very distinctly, clearly defined for them.
- Absolutely, overwhelmingly, indisputably, undeniably.
- As we turn the volume
down on the cultism-- - [Marc] Yes. - We start to, the search for meaning starts
getting harder and harder. - Yes. 'cause we don't have that. We are ungrounded. We are uncentered and
we all feel it, right? And that's why we still reach for religion. It's why people start to take on, let's say, a faith in science, maybe beyond where they should put it. And, by the way, sports teams are like a tiny little version of a cult. Apple keynotes are a tiny little version of a cult. And there's full-blown cults on both sides of the political spectrum right now, right? Operating in plain sight.
- But still not full-blown
compared to what it was.
- Compared to what it used to be. I mean, we would today consider them full-blown, but, yes, they're at, I don't know, a hundred-thousandth or something of the intensity of what people had back then. So, we live in a world today
that in many ways is more advanced and moral and so forth. And it's certainly a lot nicer,
much nicer world to live in. But we live in a world
that's like very washed out. It's like everything has
become very colorless and gray as compared to how people
used to experience things. Which is I think why we're
so prone to reach for drama. 'Cause there's something in us that's deeply evolved
where we want that back. - And I wonder where it's all
headed as we turn the volume down more and more. What advice would you
give to young folks today in high school and college? How to be successful in their career? How to be successful in their life? - Yeah. So the tools that
are available today are just, I mean, I sometimes bore kids by describing what it was like to try to discover a fact in the old days, the 1970s, 1980s: to go to the library and the card catalog and the whole thing. You go through all that work, and then the book is checked out and you have to wait two weeks. To be in a world where not only can you get the answer to any question, but also, in the AI world, where you've got the assistant that will help you do anything, help you learn anything, your ability both to learn and to produce is just, I don't know, a millionfold beyond what it used to be. I have a blog post I've
been wanting to write, which I call where are the
hyper-productive people? Like--
- [Lex] That's a good question, right?
- Like, with these tools, there should be authors that are writing hundreds or thousands of outstanding books.
- Well, with the authors there's a consumption question too. But, well, maybe not. You're right. So the tools are much more powerful, and getting much more powerful.
- Artists, musicians, right. Why aren't musicians producing
a thousand times the number of songs, right? The tools are spectacular.
- So, what's the explanation? And by way of advice, is motivation starting to be turned down a little bit? Or what?
- I think it might be distraction.
- [Lex] Distraction.
- It's so easy to just sit and consume that I think people get distracted from production. But as a young person, if you wanted to really stand out, you could get on a hyper-productivity curve very early on. There's a great story in
Roman history of Pliny the Elder, who was this legendary statesman who died in the Vesuvius eruption trying to rescue his friends. He was famous both for being a polymath and for being an author. He wrote apparently hundreds of books, most of which have been lost. He wrote all these
encyclopedias and he literally like would be reading and
writing all day long no matter what else was going on. And so he would like travel
with like four slaves. And two of them were
responsible for reading to him, and two of them were responsible
for taking dictation. And so like, he'd be going
cross country and like, literally he would be writing
books like all the time. And apparently they were spectacular. There's only a few that have survived, but apparently they were amazing. - There's a lot of value to being somebody who finds focus in this life. - Yeah. Like and there are
examples. You know, there's this guy, Judge, what's his name, Posner, who wrote like 40 books and was also a great federal judge. You know, there's our friend Balaji, I think he's one of these, where his output is just prodigious. And so, yeah, I mean,
with these tools, why not? And I kind of think we're at this interesting
kind of freeze-frame moment where these tools are now in everybody's hands, and everybody's just kind of staring at them, trying to figure out what to do with the new tools.
- We have discovered fire.
- [Marc] Yeah.
- And trying to figure
out how to use it to cook. - [Marc] Yeah. Right. - You told Tim Ferriss that
the perfect day is caffeine for 10 hours and alcohol for four hours. You didn't think I'd be
mentioning this, did you? It balances everything
out perfectly as you said. So, perfect. So let me ask, what's the secret to balance
and maybe to happiness in life? - I don't believe in balance, so I'm the wrong person to ask that. - Can you elaborate why you
don't believe in balance?
- I mean, maybe it's just, look, I think people are wired differently. So I think it's hard to generalize this kind of thing, but I am much happier and more satisfied when I'm fully committed to something. So I'm very much in favor of being all in, of imbalance.
- Imbalance. And that applies to work, to life, to everything.
- Yeah. I happen to have whatever twist
of personality traits leads that in non-destructive directions, including the fact that I now no longer do the ten-four plan. I stopped drinking. I do the caffeine, but not the alcohol. So there's something in my personality where whatever maladaptation I have is inclining me towards productive things, not unproductive things.
- So you're one of the
wealthiest people in the world. What's the relationship
between wealth and happiness? Money and happiness. - So I think happiness, I don't think happiness is the thing. - To strive for. - I think satisfaction is the thing. - That just sounds like
happiness, but turned down a bit.
- No, deeper. So happiness is, you know, a walk in the woods at sunset, an ice cream cone, a kiss. The first ice cream cone is great. The thousandth ice
cream cone, not so much. At some point the walks
in the woods get boring. - What's the distinction between
happiness and satisfaction? - I think satisfaction is a deeper thing, which is like having found a purpose and fulfilling it, being useful. - So just something that
permeates all your days, just this general
contentment of being useful. - That I'm fully satisfying my faculties, that I'm fully delivering, right? On the gifts that I've been
given, that I'm, you know, net making the world better, that I'm contributing to
the people around me, right. And that I can look back
and say, wow, that was hard, but it was worth it. I think generally it seems to leave people in a better state than the pursuit of pleasure, the pursuit of quote-unquote happiness.
- Does money have
anything to do with that? - I think the founders and
the founding fathers in the US threw this off-kilter when they used the phrase "pursuit of happiness." I think they should have said--
- [Lex] Pursuit of satisfaction.
- Had they said "pursuit of satisfaction," we might live in a better world today.
- Well, you know, they could have elaborated on a lot of things.
- [Marc] They could have tweaked the Second Amendment.
- I think they were
smarter than they realized. They said, you know, we're gonna make it ambiguous and let these humans, these tribal, cult-like humans, figure out the rest. But money empowers that.
- So, I mean, look, I don't think I'm even a great example, but I think Elon would be
the great example of this, which is, look, he's a guy who, every day of his life, from the day he started making money at all, just plows it into the next thing. And so I think money is definitely an enabler for satisfaction. Money applied to happiness leads people down very dark paths, very destructive avenues. Money applied to satisfaction, I think, is a real tool. By the way, Elon is the case study for that behavior. But the other thing that always really made me think is, Larry Page was asked one time what his approach to philanthropy was, and he said, oh, my philanthropic plan is just to give all the money to Elon. (both laugh)
- Well, let me actually
ask you about Elon. You've interacted with quite a lot of successful engineers
and business people. What do you think is special about Elon? We talked about Steve Jobs. What do you think is special
about him as a leader? As an innovator? - Yeah. So the core of it is he's back to the future. So he is doing the most
leading-edge things in the world, but with a really deeply
old-school approach. And so to find comparisons to Elon, you need to go to like
Henry Ford and Thomas Watson and Howard Hughes and
Andrew Carnegie, right. Leland Stanford, John Rockefeller, right. You need to go to what were called the
bourgeois capitalists, like the hardcore business owner operators who basically built, you know, basically built industrialized
society, Vanderbilt. And it's a level of hands-on commitment and depth in the business, coupled with an absolute priority towards truth and towards, how to put it, science and technology down to first principles, that is just unbelievably absolute. His ideal really is that he's only ever talking to engineers. He has less tolerance than anybody I've ever met. He wants ground truth on every single topic, and he runs his businesses directly, day-to-day, devoted to getting to ground truth on every single topic.
- So you think it was a good decision for him to buy Twitter?
- I have developed a view in life to not second-guess Elon Musk. I know this is gonna sound
crazy and unfounded, but.
- Well, I mean, he's got quite a track record.
- I mean, look, the car company was crazy. I mean, look.
- He's done a lot of
things that seem crazy. - Starting a new car company in the United States of America. The last time somebody
really tried to do that was the 1950s and it was
called Tucker Automotive. And it was such a disaster, they made a movie about what a disaster it was. And then rockets, like, who does that? There's obviously no way to start a new rocket company; those days are over. And then to do both at the same time. So after he pulled those
two off, it's like, okay, fine. This is one of those areas where whatever opinions I had are just clearly not relevant. At some point, you just bet on the person.
- And in general, I wish more people would lean
on celebrating and supporting versus deriding and destroying. - Oh yeah. I mean, look,
he draws resentment. He is a magnet for resentment. His critics are the most miserable, resentful people in the world. It's almost a perfect match: the most idealized technologist of the century, coupled with critics who are just as bitter as can be. And, I mean, it's sort of very darkly comic to watch.
- Well, he fuels the fire of that by being on Twitter at times. Which is fascinating:
to watch the drama of human civilization, given our cult roots just fully on fire. - [Marc] He's running a cult. - You could say that. - [Marc] Very successfully. - So now that our cults
are gone and we search for meaning, what do you think is the meaning of this whole thing? What's the meaning of life, Marc Andreessen?
- I don't know the answer to that. The closest I get to it is what I said about satisfaction. So it's basically like, okay, we were given what we have, and we should basically do our best.
- What's the role of love in that mix?
- I mean, what's the point of life without love? Like, yeah.
- So love is a big part
of that satisfaction.
- Yeah. And look, taking care of people is a wonderful thing. You know, there are pathological forms of taking care of people, but there's also a very fundamental kind of aspect of taking care of people. Like, for example, I happen to be somebody who
believes that capitalism and taking care of people are actually the same thing. Somebody once said, capitalism is how you take care of people you don't know. Right. And so, yeah, I think it's deeply woven into the whole thing. There's a long conversation to be had about that, but yeah.
- Yeah. Creating products that are used by millions of people and bring them joy, in small and big ways. And then capitalism kind of
enables that, encourages that. - David Friedman says, there's only three ways to
get somebody to do something for somebody else. Love, money and force. And love and money are better. - [Lex] Yeah. Of course. That's a good ordering. I think. - We should bet on those. - Try love first. If that doesn't work, then money. - [Marc] Yes. - And then force. Well, don't even try that one. Marc, you're an incredible person. I've been a huge fan. I'm glad to finally got a chance to talk. I'm a fan of everything
you do, including on Twitter. It's a huge honor to meet
you, to talk with you. Thanks again for doing this. - Awesome. Thank you, Lex. - Thanks for listening
to this conversation with Marc Andreessen. To support this podcast, please check out our
sponsors in the description. And now let me leave you with some words from Marc Andreessen himself. "The world is a very malleable place. If you know what you
want and you go for it, with maximum energy and drive and passion, the world will often
reconfigure itself around you much more quickly and easily
than you would think." Thank you for listening and
hope to see you next time.