- Okay, good evening everybody.
So I'm Stephan Meier. I'm the James P. Gorman
Professor of Business here at the Business School, and I'm very excited to welcome you all to this fireside chat today on Shaping the Future of
AI and the Digital Economy. This is part of a new speaker series, this distinguished speaker series that should be a platform
for business visionaries to share their insights on how to set and implement very ambitious goals and how to inspire new innovation, and who would be better to kick us off than Reid Hoffman as a speaker. So it's my honor to welcome and to introduce our two
guests, Reid Hoffman, the co-founder of LinkedIn and Inflection AI and
the partner at Greylock and Costis Maglaras, the Dean
of Columbia Business School. So it's amazing to have you here, Reid. He said I should introduce
him as "the guy." I'm expanding a little bit. So he has been named "the most connected
man in Silicon Valley," and I'm sure he is probably
also one of the busier ones. So thank you for being here.
He's very accomplished. Obviously an entrepreneur and executive, he played important, critical roles in building leading consumer brands like LinkedIn and PayPal. As an investor, he was critically involved in many, many companies such as Facebook and Airbnb. He also co-authored not just one but five bestselling books, although his last one was with a very powerful coauthor, GPT-4. And I'm sure they talk about whether that's cheating or smart. He's also involved not just in business activities but in many philanthropic endeavors, and he has received many, many awards for those. The most remarkable ones to me were the Honorary Commander of the British Empire from the Queen of England. I think that's pretty cool. And the Salute to Greatness Award from the Martin Luther King Center. So welcome Reid to
Columbia Business School. He's going to have a discussion
with Costis Maglaras. So he's the 16th Dean of
Columbia Business School and the David and Lyn Silfen
Professor of Business. He's an electrical engineer
turned business professor turned Dean of the Business School. And under his leadership, I think the school has transformed quite dramatically, in particular in embracing technology in education. And given his vision, he was instrumental in the STEM certification of our programs, in launching the MBA/MS Engineering dual degree, and in many initiatives, including the Digital Future Initiative, which is, first of all, close to my heart because I'm one of the faculty co-directors and it's a co-sponsor here. This is our new think tank at the business school. The goal is to prepare you, the students, for the next century
of digital transformation and help organizations, governments, and communities to understand
and leverage and prosper from the current and future
waves of digital disruption. So I very much look forward to
the discussion we're having. And without much further ado, let's welcome Reid Hoffman to
the Columbia Business School and the two of you to the front and pass it over to Dean Maglaras. - Okay. Thank you, Stephan. So that was a short enough introduction? - Could be shorter, but yeah. - So thank you so much for coming.
- But thank you. I was thinking about this conversation and I realized it could have taken many, many different paths. We could talk about early
internet and the PayPal period. We could talk about building one of the most successful social networks, LinkedIn, and that period of hyperscaling the business. We could talk about your incredible career as a venture capitalist in the Bay Area, but I think maybe we should
talk about AI right now and in particular because you
have actually been involved in investing, starting companies, advising governments — you know, sort of all-in in that space. And I think it would be
great to sort of talk about that and hear your thoughts on that. Now, we were having a little conversation before and you used the word
"cognitive superpowers." - Yes. - And so one thing that
I wanted to ask you just to kick us off is
to contextualize for us the incredible growth of the
capabilities of these AIs and how do you see that from
your seat and, you know, where do you see us going, and then we'll take it from there. - So at its baseline, I
think what we're doing is we're creating kind of the cognitive
industrial revolution, a steam engine of the mind. And as part of that, it greatly amplifies what
we're capable of, right? So with the steam engine, you know, that made physical things much stronger, kicked off the industrial revolution, allowed transport and logistics, allowed manufacturing. This is now that, but in the kind of cognitive and language characteristics. And what kicked it off — the algorithms, I mean look, there have been innovations in them, of course, but they had already been known for decades — it's a scale compute thing. And so it's the fact that
you can apply thousands, tens of thousands of compute units, GPUs, and it shifts the paradigm
for how they're built, from programming them to them learning. That's part of the reason why you have data and all the rest. And we're still at the
very beginning of this because while these, you know, AIs, these agents, these models have learned really interesting things, we're just beginning to understand what the relationship with data, training paradigms, scale of compute, all of this plays into. But for example, you know, GPT-4 already gives us superpowers. So if you said, "Well, I'd like to have an expert tell me the interrelationship between the mixture of experts model of training AI and game theory," I'm sure we could probably find 20 or 30 people who might be able to do that. And then you say, "I'd like that, plus its parallels with oceanography." Zero human beings. GPT-4 can do that. Now, the idea for doing
that, the question — like, to some degree, what you
get out when you're playing with these devices is what you bring. So for example, in an educational
context, you say, "Write me a smart essay on business strategy." And it is unlikely to be that good. I mean it'll be coherent,
well-written and all this. But you say, well, what I'd like to do is understand how the intersection of
data and manufacturing will make a difference between general purpose
and specific robots within certain kinds of supply chains within the global world. Presuming that these materials
are getting more expensive and those materials are getting cheaper, you might get something much
better, for example, and it's still useful in
terms of how these operate. And that's what I mean
by cognitive superpowers. Like one of the things that I've realized since I wrote the book is changing in patterns of thought: as we learn to be thoughtful and deep, more of our pattern of thought will look like a video game. Because as opposed to taking a long walk trying to get that one big idea, you sit down and you start banging away and kind of saying, "Okay, here's a prompt. Ah, that wasn't very interesting. Oh, here's another one. Oh wait, maybe I should pursue this." And I think we will have this branching ability to think and reason in that iterative process in a much better way. And obviously, there's going to be a ton of
different kinds of superpowers 'cause, for example, I have very limited artistic capabilities, but if I have an idea
and I can describe it, I can go to DALL-E or Midjourney
and begin to get stuff. And that also broadens out things. So if I wanted to create a
card for my friend's birthday and wanted to do something
specific and I had a visual idea, that's another form of superpower. And these are just
gestures at all the things. I think anything that we do with language is the minimum of where the
amplification will begin.
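A minimal sketch of the prompt-specificity point above, assuming the OpenAI Python client (openai>=1.0); the model name and prompt wording are illustrative stand-ins, not anything specified in the talk:

```python
# Contrast a vague prompt with a specific, context-rich one.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Vague: the output will be coherent and well written, but generic.
vague = ask("Write me a smart essay on business strategy.")

# Specific: bringing context and constraints is what gets something better,
# and iterating on prompts like this is the "video game" pattern of thought.
specific = ask(
    "How will the intersection of data and manufacturing change the "
    "trade-off between general-purpose and task-specific robots in "
    "global supply chains, assuming some materials get more expensive "
    "while others get cheaper?"
)
print(vague[:200], "\n---\n", specific[:200])
```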
- And when you think about the speed of change, going back to that — I mean, a lot of these things that you mentioned have been known for quite some time. Compute is what we started basically applying at
scale in the last 10 years. Data is what we started applying at scale in the last 10 years. The transformer was invented about seven years ago. But in some sense, we have seen sort of a dramatic increase in capabilities. Do you see that continuing? - All exponential curves, all J curves, eventually turn into S curves, but I think it certainly will keep continuing for the next couple of years and maybe longer. Anyone who claims they know for certain it'll continue past the next couple of years — maybe that's ideological or wishful thinking or whatever else. The turn to an S curve definitely does happen, though; all of these things eventually do. And on the S curve, one of the mistakes I think
some of the people who are, "Oh my god, it's going
to be super intelligent in three years," is they go, "Okay, you've got this S curve of capabilities driven by larger scale compute." And they say, "Well, that's an IQ curve." And it's like, "Well, it's not exactly an IQ curve; you're making an inference, a judgment, where it's not actually in fact the same thing." Now, it is true that some capabilities are getting exponentially better, but that's not the same thing as IQ.
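A toy numeric illustration of the J-curve-versus-S-curve point, with invented numbers: exponential and logistic growth look the same early on, and only later does the logistic curve saturate:

```python
# Made-up numbers only: compare exponential (J curve) growth with
# logistic (S curve) growth that flattens toward a carrying capacity.
import math

def exponential(t: float, rate: float = 1.0) -> float:
    return math.exp(rate * t)

def logistic(t: float, rate: float = 1.0, capacity: float = 1000.0) -> float:
    # Starts at 1, grows exponentially at first, then saturates at `capacity`.
    return capacity / (1.0 + (capacity - 1.0) * math.exp(-rate * t))

for t in range(0, 12, 2):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
# Early rows track each other closely; by the later rows the logistic
# column has leveled off near 1000 while the exponential keeps compounding.
# The catch: from the early rows alone, you can't tell which curve you're on.
```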
- Yeah, yeah. When you think about it — you know, you sit at a vantage point where you're investing across the gamut of sort
of the whole plumbing of the AI ecosystem. Do you have views on sort of open sourcing versus sort of proprietary
models or things like that? And how do we sort of accelerate growth, make it potentially more
sort of pervasive or, you know, what are your
thoughts about that? - So I was on the Board of
Mozilla for nearly 11 years. At LinkedIn, we open sourced
a number of different things. I generally think there's a lot of value in open sourcing various
forms of software, similar to public and open science. These models have a lot of capabilities and one of the problems with
open sourcing the models, having them generally available, is that they put the capabilities in the hands of everybody. Now if you said, hey, we could open source them and it would only be, you know, academic institutions, it'd only be entrepreneurs and only governments — well, some governments; the Russian Internet Research Agency, not so much — then it'd be great. But the problem with open sourcing is once the model gets out of the barn, as it were, it's out there infinitely. And like one of the things
we certainly will be seeing in this year's elections is the use of these
various open source models to generate content to try to disrupt our information system
within the election. And that's something we're
going to have to work against. Now I think we can also
use AI to help with that, but the reason I've been more cautionary on open sourcing these models has been it will also amplify bad actors, whether it's rogue
states, cyber criminals, terrorists, et cetera. I find if those people have
an open source web browser, there's nothing in particular
more they can do with that. An open source database, again, nothing in particular
they can do with that. These models give them superpowers that might be more harmful. - Have we already opened
source efficiently by sharing, I mean some of these models are. - So some of the use cases like political misinformation at scale, yes, the open source models
currently can do that. Some of them, we're going to see... we have seen an increasing and we'll continue to see
increasing cyber attacks because phishing and so forth,
similar to misinformation. And then some other areas, I think so far we haven't extended a line, bio-terrorism, et cetera. But if you just keep
open sourcing everything that gets there, you cannot control those
kinds of negative cases, and some of them are serious. So, so far, some problems, but nothing,
you know, 5-alarm fire-ish. But we get to 5-alarm fire fairly rapidly if you're not careful. - Okay. Let me switch to something else. We have gone down the path that
you were mentioning before, large scale compute, large scale models, and sort of building
essentially foundation models that have generic properties that then we deploy in
different applications. A different school of thought would be, rather than building large scale, general purpose models, to sort of build smaller, targeted, application-specific models. And you know, as you know in engineering, this has been happening
for a very, very long time. What do you think? And there is a lot of effort
on both sides right now. What do you see there? - So there's going to be great
results out of both efforts. What you see so far, when you look at GPT-2, 3, 4: like, for example, you fine-tune 3 for some cases, and then 4 is just better at most of those cases that 3 was fine-tuned for — even 3.5 is. And so there's a virtue so far
in the increase in abilities that as you get to the
larger scale models, they just become more
robust, more capable, more like, you know, if you look at this as a
kind of a research assistant that's immediate and quick and on demand. It has some hallucination issues, although they're trying to, you know, fix that with search and
other kinds of things. You'll never get the
hallucination issue to zero, but you might get it to
substantially below the human norm, in which case, for our purposes, that's pretty close to zero or definitely good enough. And so large scale models have an amazing increase in capabilities. Now, that being said, there may be reasons why you'd want a smaller model. It runs on a phone, it's cheaper to run, it only really needs to do something specific, or it has a different training area where you want its generativity to be much better in that area and its errors to be much lower in that area, and you don't care about everything else. And so that's the reason why I think, you know, one of the things that I think is an inevitable part of the future is that it won't just be like one model when you're creating an agent or you're creating, you know, kind of applications. You'll be deploying multiple models that are coordinating.
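A hypothetical sketch of that multi-model coordination in Python; the model names, costs, and the `generate` placeholder are invented for illustration, not a real API:

```python
# Hypothetical sketch: an application coordinating two models — a small,
# cheap, domain-specific one where it is trusted, and a large general one
# as the fallback. Names, costs, and `generate` are placeholders.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    relative_cost: float          # illustrative cost per call
    domains: tuple[str, ...]      # domains where this model is trusted

    def generate(self, prompt: str) -> str:
        # Stand-in for a real inference call to this model.
        return f"[{self.name}] response to: {prompt!r}"

SMALL_SPECIALIST = Model("contracts-small", relative_cost=0.01, domains=("legal",))
LARGE_GENERALIST = Model("general-large", relative_cost=1.00, domains=())

def route(prompt: str, domain: str) -> str:
    """Prefer the specialist inside its domain; otherwise use the generalist."""
    if domain in SMALL_SPECIALIST.domains:
        return SMALL_SPECIALIST.generate(prompt)
    return LARGE_GENERALIST.generate(prompt)

print(route("Flag the risky clauses in this NDA.", domain="legal"))
print(route("Summarize this oceanography paper.", domain="science"))
```

The design choice here mirrors the point in the talk: the specialist is cheaper and lower-error inside its training area, and the generalist covers everything else.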
- Okay. Good. Switching a little bit to
hardware, you mentioned compute. Hardware — NVIDIA coming to mind, GPUs, large, you know, clusters. Speaking of NVIDIA, I had a conversation with Jensen here five months ago, when his company had just crossed $1 trillion, and now they're at, like, $2.3 trillion or something.
- Yeah. - What do you think about
sort of, in some sense, the fact that this is not being commoditized right now? It doesn't look like TSMC is extracting a significant rent out of its ability to produce these sort of super cutting-edge chips. And do you see that persisting? Do you see hardware as being an inevitable enabler of this revolution or something that over time will sort of get commoditized or saturate? - Well, so I think NVIDIA's
done a lot of great work. They didn't create GPUs
specifically to do AI or specifically crypto.
- Yeah. - It just happens to be a very
good mathematics processor, and that ties-
- Yeah. - into those cases. And I think, you know, this is
one of the good things about, you know, kind of capitalism,
invention, et cetera, is that I think it's inevitable that there will be good
competition for NVIDIA. There's nothing that
structurally locks it out, despite NVIDIA having done great work, having a great team, having a great building and design culture. And so I think NVIDIA chips will continue to be in very high demand
for the coming years. But, you know, I'm aware
of lots of efforts- - Yeah. - to create alternative
chips, alternative programs. And that's part of what happens when you have a market demand. And so I think, you know, as soon as maybe one
to two years from now, you'll begin to see some chips that, while they may not be as useful for training yet, will be helping on
what we call inference in the industry, which is serving the models
and serving the results. - Yep, yep. - And you know, but I
think there's a, you know, I see lots of startups pitching and I see lots of large
companies also figuring out how they can do this in interesting ways. - Yeah. So you mentioned training. You may need cutting-edge
technology. Inference, which is when I have trained the model and now I'm doing a query to get a response — there I may need special-purpose hardware, but simpler, different hardware. - Yes, yes. Good.
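A minimal PyTorch sketch of the training-versus-inference split just described; the tiny linear model is a stand-in, and the point is only that training runs backward passes and keeps optimizer state while inference is a forward pass:

```python
# Minimal sketch: why training and inference have different hardware needs.
# Training runs forward + backward passes and keeps optimizer state;
# inference is forward-only, so simpler hardware can serve it.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)  # toy stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- one training step: forward, backward, weight update ---
inputs = torch.randn(8, 16)
targets = torch.randint(0, 2, (8,))
loss = loss_fn(model(inputs), targets)
optimizer.zero_grad()
loss.backward()   # gradient computation: the expensive, training-only part
optimizer.step()

# --- inference: forward pass only, no gradients tracked ---
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=-1)
```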
- Data, I want to switch a little bit to data. You know, these systems are ingesting essentially incredible amounts of data — our data as well. And there are two issues: one is, are we running out of new data to feed it? I don't believe that that's the case. But the other is issues of ownership of data, and what are your thoughts on that? Because I'm sure you're thinking about it from all angles, including, you know, everything that is going on with OpenAI and so on. - So it is a complicated issue that most people don't
think very well about. You know, so there tends to be a question of, like, for example, your camera there. A picture is being taken of me in this room — is that your picture, is that my picture, is that our picture? You know, what is all that, that kind of thing, in terms of data. I probably signed some release, so it's probably your picture, but, you know, it's this kind of complicated thing. And then you say, what's the value of data? What's the value of data versus what's made of it? And so my first cut on
all of this stuff is that like when you're training, it's like these models are reading, right? So the same rules that govern data should be the same rules
that govern reading, which is do you have legitimate access to what you're reading? Did you buy the book? You
know, et cetera, et cetera. And then, you know, that's
fine 'cause that's reading. 'Cause the reason is copyright law does not prevent me from buying a book, or giving or selling it to you and then you reading it — that's all part of it. What it does prevent me from doing is buying the book and going, "Ooh, I'm going to redo this book and start selling it myself," you know, et cetera. And so I think that's
the kind of the nuance where you want to be on the data side. And obviously, there's
various places where you say, well, there's things that
are in my private data that I don't want to be revealed anywhere. And these generative models
are not particularly good at knowing when that is. So let's not feed that data into the generative model itself. Now part of it is that
the generative model is more like an inference engine. People think of it
frequently as a database. But it gets to be an inference engine because you have a bunch of data going in; still, it's an inference engine. So, you know, one of the things that I found amusing about "The New York Times" legal case was the claim that, see, it reproduces these articles. And you're like, well, you cut and pasted in the first half of the article and said "complete it," and it had kind of learned this stuff. Now, someone who has access to
the first half of the article probably has access to the whole article. And that's probably
still a legitimate case. I'm not clear that there's a harm here. It's if you said, "Give me the article with this title," and it generated it — then you'd say, well, okay, you're giving away something "The New York Times" is selling and this person hasn't bought it yet; that would be an issue. I don't think these models do that 'cause they're trained not to do it. So there's a bunch of different complexity around the data side. And I think that the question
of like, for example, you're training on what's broadly
available on the internet. You're like, "I'm
publishing on the internet," I'm saying, "Everyone please read this." An AI model reading it, getting trained on that strikes me as a fair use of the technology. And I think it's really important that we want these models to exist. You know, it's one of the
things we were talking about very quickly before
we got on the stage here is, you know, with these AI models, we know how to create a medical assistant that could work on every smartphone that would be available to everybody, whether or not you have
access to a doctor. And obviously, they could
even train them in a way that says, hey, do you have access to a doctor? If you do, great — you should go see your doctor right away, and that's what it tells 'em; or, you know, you're probably okay, but these are the things you might want to check in with your doctor about. And if you don't have access to a doctor, it says, look, I'm not a doctor, but here's what you might consider. And that could be amazing. A tutor as well, and a bunch of other things. Or say, for example, you can't afford a lawyer and you're looking at something
that's like a contract. Well, actually having something that could help you with
that is a good thing. And so I'm generally thinking we should want these models trained, and our primary question is not "should the model be trained or not." Like, let's make sure we get them into as many people's hands as possible to help humanity overall, not just wealthy people or
not just wealthy countries, but you know, that kind
of thing as an angle. Anyway, that's a first blush
on some of the data things, but it's obviously, very complicated.
- Yeah, yeah. It's an evolving topic,
let's just say that. You mentioned OpenAI. You were one of the first early investors in OpenAI. - I led the first round, yeah. - And you have then started Inflection AI; you've done all sorts
of investments in that. What compelled you to help start OpenAI and then keep following on that? You know, how do you
evaluate AI investments? I mean, you've been at it now for eight years or something in that specific... When was the OpenAI investment — was it '16 or... '15, yeah. - '15, '16 or something, yeah. I don't have a historian's brain, so I have to go look at the
documents to get that right. So, you know, originally, I was thinking I was
going to be an academic. I was getting a philosophy
degree at Oxford and I decided I'd have
more impact on the world by helping create software. And I didn't actually
ever think that like, I kind of backed into being an investor. The investor thing was not a goal, it was the how do I help the right kinds of projects get built. And so it was more entrepreneurial or more product creation initially. And so the thing I look
at when I'm looking at these technologies and I'm generally, as an expert, most focused on software. I do some non-software investing, but it's almost all philanthropic. It's like, "Okay, I think key for climate change is nuclear fission and fusion," so I'll do some investing in that. Sometimes you'll see a product that you really think
should exist in the world. So you invest in it. I write those investments
in my book to zero when I make them 'cause I, you know, I have no idea how to
predict a range of outcomes and I hope, of course,
they're economically valuable, but it is what it is. So on the software side, you know, starting with internet
stuff, and then Web 2.0, and then you know, Web 3, and then AI, I look for what are the things that could make a huge
difference in elevating anything from an individual human to groups to societies and industries to humanity? And what is the way the world should be? And if this is the way the world should be that could create a really
valuable transformation of an industry, and there's an entrepreneur — she or he — who has a really good plan and has the resources,
the timing is right, that's the thing where I get into kind of doing an investment. Now, for OpenAI, it started with, you know, participating in some conversations with Sam Altman and Elon
Musk and it was like, "Look, we're going to have this AI
revolution that's coming. We should make sure that
beneficial AI isn't just the province of super large tech companies but has a for-humanity bent." And I'm obviously not opposed to large tech companies. I think they do a lot of stuff for humanity. But I think it was a good thing. It's the same reason I was on the board of Mozilla for over 11 years. I'm still on the board of Kiva. It's like, public interest technologies really matter, and it's a problem we don't solve very well. Like, yes, let's help this get started. And that's getting into it. And now, at that stage, it was like, maybe there's something here, right? Part of venture investing
at, like, seed or series A is: "Well, it's an idea. Might work, right? Let's try it." And then as you go through — and this is one of the things I've really learned; one of the things I think makes Silicon Valley such an interesting place is that it's an intense learning network. We all trade in what works and doesn't work at a very fast rate. And so the whole ecosystem is learning. And part of this is mirrored in how financing works, which is seed, series A, B, C, et cetera. As you prove out some gates and show something that shows that it has a higher likelihood of working, then you do the next larger round, higher price, et cetera. And you're bringing a
network to bear of people who are looking at it, investing
in it, choosing to do it, employees, investors,
customers, partners, et cetera. And so at the beginning of OpenAI it was, "This scale AI thing is likely to bear some interesting fruit. We don't know — maybe. Let's try. And let's make sure that its governance is oriented towards humanity considerations first," which is the reason it's a 501(c)(3), which now has a subsidiary that's a for-profit that's
governed by the nonprofit, and you know, helped that get kicked off. - And how do you evaluate AI investments in the last few years then? - Well, I'm not sure any
software investment today pitches itself as anything
other than an AI investment, which is kind of entertaining. And you know, it's very much
like the early internet: there are going to be some things that are really amazing. There are going to be a bunch of things that are going to be kind of nutty and aren't going to work out 'cause they're not really thinking about what the landscape looks like from a strategy standpoint. I think overall, the results — the kind of the ones that survive
and thrive will broadly be very positive and
connective and, you know, may have to look at some of the things and see what they're doing, but you know, I could imagine some
startups that have, you know, potentially negative outcomes. But you know, remember, investors don't like to be associated, employees don't like to be associated, customers don't like to do it. There's a lot of network governance 'cause when we think about like, how are you responsive to humanity, it isn't just like voters
going to the polls, you know, polling booths, voting booths. It's also like customers,
it's also employees, it's also investors, it's
also press, it's also... So all of these things create
networks of governance. And so I think usually
when you get through this, you usually have a
broadly positive result, not always, but usually. And so I think that, you know, we're going to see transformation of anything that involves cognitive tasks, anything that involves language. I think we're going to see
new kinds of drug discovery. One of the things that I was telling Stanford's
long-term planning commission, might have been seven
years ago, was to say, "I see a line of sight to how an AI can be an amplifier in every academic discipline other than maybe theoretical physics, and even theoretical
physics, maybe, right? And so, you know, what would the AI... And if you wanted to do
this exercise yourself, just think about what if, like, it's 1,000x better than a
specialized search engine and each of these disciplines could use a specialized search engine. Well, imagine that 1,000x better. That would be a useful AI tool in that. Doesn't mean it'll write the papers, could, obviously, in some ways, but the papers will be much better when it's a combination of them and kind of human
conceptualization around it. - I want to pivot a little bit. We have a group of primarily
MBA students in here and I want to pivot to leadership and in particular
managing explosive growth and scaling of these companies. You've done it and you started one of the most successful
social network companies in LinkedIn, and you managed it and led it through a period of explosive growth. What are the things that come to mind in thinking about that? - Well, as you certainly know, I did write a book called "Blitzscaling." And partially, the books so far have been kind of the world as I've discovered it. Whether it's the start-up view — each of you should be
thinking about your work lives and careers as kind of the
entrepreneur of yourself. Doesn't mean you should
necessarily start a company. Maybe you should, maybe you shouldn't, but you should be thinking about your career path
in an entrepreneurial way. That was the first one; "The Alliance" was how that interfaces with companies and company organization, right? You know, et cetera, et cetera. And "Blitzscaling" was: what is the thing that Silicon Valley, and to some degree actually China, understands that most
of the rest of the world doesn't really understand: the pace, in a globally connected world, of how you go from an idea to a transformation of an industry, and what are the things that are atypical about that? And so there's like a bunch of principles, like embrace chaos, you know, have a
disposition of moving fast, and then fixing what breaks as you go. There is a chapter on responsible blitzscaling, which is: make sure you
don't break something that's really bad, right? But that's fine. So those are all part of it. And part of the thing is
that when you, for example, when you're doing internet software, you know, which includes obviously mobile, you're basically kind of
competing with the whole world. It's not just competing
with the person, you know, sitting next to you, the
person down the street. And so that's part of
the reason why being part of an ecosystem which
understands what the speed and the tempo and what is the
way you solve key problems like go to market or what
is the modern technology to build on for doing something is actually in fact really, really key. - How do you manage people
through that process when you have that
vision and you're trying to lead a company through
that explosive growth phase? - Well, so there's a whole bunch of principles in the book too, but like for example, one of
them is when you think about... So part of the tempo of scale was scaling the number of employees by roughly orders of magnitude — like 10, 100, 1,000, et cetera — and then how the organization changes. Because, by the way, some of what happens in these companies is that they change by an order of magnitude. Like I've seen companies that have gone from 20 people
at the beginning of the year to 800 at the end of the year. Like, "Okay, how do you do that?" And so part of the thing
that you realize is that you're not aiming for perfect, you're not aiming for one
stable org chart, et cetera — and that some of the people that you had as key leaders at the earlier stages aren't the right leaders for the later thing. So for example, like a
totally micro piece of advice, but very important for doing these kinds of blitzscaling things: say, okay, you're the head of product in my 30-person organization. You don't say, "Well, as long as you're doing a good job, you will continue to be the head of product when we're becoming a 1,000-person organization." Maybe they will, maybe they won't. What you do say is, "As
long as you're doing well, your job will continue to get bigger." 'Cause by the way, when you've moved from a 30
to 1,000-person organization, you have a much bigger footprint of what you're doing as a company. It's like, your job got bigger; it wasn't necessarily that you
stayed as head of product. And frequently as you
jump each level of scale, when you're doing this at speed, usually 50% plus of what you would think of as the executive management
of the company will change. And you have to be ready
for that kind of dynamism, and you have to be ready for making errors in judgment in that. And even the person who did a really great job before may now no longer be the right fit. And you have to have made your early promises, built your relationships of trust, done that, in order to change that. And the book's full of stuff like this, 'cause these are things I learned, you know. Probably the first place I was learning blitzscaling was PayPal. - What are the soft skills that you think are most
important in that space? What should we be teaching and what should we be aspiring to have? - Well, so you know, classically, like for example, Fred Kofman has this book
called "Conscious Business," which I really like, which is kind of this notion of thinking about
management as compassion — but not just compassion for the individual you're dealing with, also compassion for the entire set of people around you. So for example, you say, oh, this doctor is giving
really bad diagnoses. Oh, but it'd be so painful
for them if you fire them. Well, remember all their customers, right, all the people they're treating. Like be compassionate to them too. So you have to have this
broad sense of compassion and be compassionate
across the whole front. I think the soft skill that is maybe the most central one — which is good for a learning institution — is always be learning, right? The soft skill is recognizing that when you're moving fast, you're going to be making errors. I mean, like for example, one of the ways that I, in kind of startup and blitzscaling environments, will frequently say is, "Look, here's my working decision and judgment. I may not be right, but we have to make the judgment and we have to go." You know, so everyone has to get on board. But I'm not saying that if you disagree with me, you're necessarily wrong. It's that, to operate well, we have to make that decision. So like this is actually
I think one of the things, I forget which chapter it is
in the "Blitzscaling" book, but that was a couple books ago. The OODA loop is one of the terms that is used in Silicon Valley and it's from fighter pilot terminology, Observe, Orient, Decide, Act. And in fighter pilots, the reason they teach
this is because basically, in a dog fight in fighter
pilots, the the pilot, the fighter pilot with a
faster OODA loop survives and the other one dies. So you really try to get
your OODA loop right. Silicon Valley is one of those places that talks about OODA
loops for individuals and OODA loops for companies and goes, "That has to be functioning right," because the speed of competition
is very, very intense. And this is one of the things
that people don't understand is like for every mega startup that comes out of Silicon Valley, there are between dozens
and hundreds, probably, rarely thousands of competitors. And by the way, in China, it's thousands and tens of thousands. And so the ones that
emerge have fast OODA loops and are really aggressive. And so you have to have the capability of doing that yourself, of instilling that in the culture, of navigating the complexities of you're making all of
these decisions very fast. And so for example, one of the reasons why, in terms of the soft skills of leadership, embrace chaos is the first lesson in the counterintuitive rules of blitzscaling is because everyone has to understand: look, I'm not going to be perfectly informed, we are going to be making some inefficient decisions, but 'cause we have to move fast, have to make decisions, have to learn from 'em, we do that collectively. And so always be learning
is a key part of that. - Yep. Okay. I want to pivot a little
bit before we turn to Q&A and talk about AI,
future of work, society. This is a big topic with interesting ideas
from all over the spectrum. What are your thoughts, first of all, about AI over the next three, five years thinking about society? And then we can talk a little bit about more specific things. And I want to pivot. I'm not going to talk
specifically about the book, but Reid just wrote a book in two months with the aid of GPT-4. So yeah.
- 10 weeks, it's a little longer than eight weeks.
- Two and a half months. - Yes, yeah. - So that tells you something about what we're going to
be capable of presently, not in some distant future. What are your thoughts? - So obviously what people like to do is beat the drum on job replacement, and you know, look, not to
be too simplistic about it, there will be job change over time — though human organizations adapt much more slowly than technologies usually do: the availability of the technology comes first, and the change in the jobs happens later. But if you have a job
that's basically trying to have a human mimic a robot, a robot can generally
do those jobs better. But really, what's going to
happen is a lot of transformation. So for example, if you look at a company and you say, okay, we're going to, let's say the tools three years from now can create 2x better performance or 4x better performance per job. Sales. Are you going to fire salespeople? No, you like the 2x, 4x better
performance. That's great. So it isn't that human beings
are going to be replaced, it's that human beings that
are using AI are going to be the people who are going
to be getting the jobs. Marketing, it's a competitive
thing between companies. The composition of some
of the jobs may change. So like for example, if what your job is is
to do digital form entry into the advertising
system, act like a robot, well, that will be greatly accelerated. But the job of how do
we position ourselves, get an emotional
connection, create a brand, how do we explore this in different ways, how do we bring new kinds of marketing like content marketing,
et cetera, et cetera, so you go through all these, most of the departments
do not end up going, we're trading human jobs down. We are preferring humans
that can use AI, right? So they- - This is a point I want to interject. - Yes. - We need to be teaching people to be intelligent
consumers and users of AI. - Yes, exactly. And look, even customer
service, which tends to be the "Here's your script, follow the script," behave-like-a-robot thing — look, those jobs will go down. But maybe customer service now becomes: how do you build a relationship, right? So yes, you have an AI
that solves a problem of "My thing arrived and it's broken," or "I don't know how to
use it" or da, da, da, and a robot, you know, an AI helps that. But then it goes, hey,
and would you like to, would you, you know, are you interested in engaging
more with our company? And then that goes to a human-assisted AI as a way of doing it. So anyway, so maybe, right?
Speculation, but jobs change. So some tasks get really accelerated. Others become newly possible. And so like, you know, it happens even in the
educational institutions, so. - It does. What do you think about
the speed of change though? I mean, sometimes, society is-
- Yes. - good at adapting when the speed of change
is intergenerational, but intragenerational change tends to be difficult. Do you have thoughts on that? - Look, this part of society
continues to accelerate. Like, you know, back in futurism and postmodernism, they thought we were at the maximum speed, and we're much faster now. You deploy a new product on the internet and it can be in the hands of billions of people in days, actually. It doesn't usually play out quite that way, but it can do that.
- Yeah, yeah. - And that kind of speed
is new and challenging and it's one of the reasons
why I'm glad you kind of brought it back there
'cause I'm not trying to be pollyannish about the
transition being totally easy. It is good that AI can help
us with the transition. Like you say, hey, we're now
building autonomous trucks. Even though we have a shortage
of truck drivers today, you know, if every manufacturer started building autonomous trucks — only autonomous trucks — starting now, it would take over 10 years for over half the trucks on the road to be autonomous. But you say, okay, what happens when the truck driver goes, "Well, wait a minute. This is the job I like, and the job's, you know, decreasing and going away"? And you say, well, okay, look, it happens. It, by the way, makes the roads safer, makes it greener in
terms of grid management and a bunch of other stuff. But here's an AI that
can help you figure out what other kinds of jobs you might like, help you learn to do those
jobs, help you do those jobs. And so that kind of transformation
is I think very possible, but it is difficult when the speed of transformation is, specifically, no longer intergenerational. Like, you know, a little bit of the educational system is built on the industrial model, which is: you train people, you had your training, and now you go work. And it's like, well, you're going to have
to always be learning, right? It's like the training that
you get today, in five years, if we're making progress,
that will have been modified. And then not just by
working through experience, but you'll have to be constantly learning. - Okay, good. I want to pivot briefly on policy, which is an area that you're also involved and in particular, you know, we're thinking about domestic policy, thinking a little bit
about geopolitics and AI and in addition to that, thinking about what should be the role of these technology companies
in educating us also in thinking about good policies
versus restrictive policies that may actually stifle innovation, which I think is an area that you're really against that latter part.
- Yes. - But what's the state of play right now and what do you think
we're going to be seeing over the next few years? - Well, our tool set to
do really great things and also mitigate bad
things is only increasing. So, for example, one parallel is as you get these AI models
larger and larger, actually, we find empirically that it's easier to align them with human interests and to have them go — like for example, if someone comes to the AI and says, "I'm really depressed, I'm thinking about, you know, doing self-harm," as opposed to saying, "Oh, here's a good website about how to do self-harm," it goes, "Oh, that's really difficult. I mean, you know, are you talking to people, right? Have you thought about talking to somebody? And, you know, I think you might be able to manage this" — so, like, you know, to respond in a more aligned and helpful way.
- Yeah. - And that's part of the reason why I'm so, like, "get to the future." - Yeah. - You know, you start imposing, "Let's slow down, let's stop right now." It's like, well, actually, that's harm. Like, you know, for example, if we can have a medical assistant on every phone — no, let's not slow down; let's get that available to everybody who has a phone, and let's try to give everyone access to a phone in some way.
point B in a journey, like going five miles an
hour doesn't help you, but that doesn't mean you
don't like navigate well. You're getting to the curve, you slow down for the
curve as you're going 'cause it's like, you know,
don't go over the cliff. And so, you know, as you have to be smart about how you're navigating. Now, you know, one of
the things I think about, you know, on these policy questions is, well, take for example, you know, this notion of there is transformation. So for example, I
remember about a year ago, it's one of the reasons the first chapter in "Impromptu" was about education, that there was a large
amount of consternation from the university
establishment about, "Oh my god, this is changing our college
application process." And you're like, "Well, by the
way, that's called progress." And yes, the intermediate is difficult and you have to figure out the new, but the fact that it was like, "Well, we've been doing this, you know, for X decades in this way and we're very comfortable
doing it this way." By the way, that was the complaint that weavers had with the loom, right? Like, "We're very comfortable
with our weaving." You're like, yes, yes,
but if we go to the loom, we can get much more
clothing for everybody. That's a good thing. And we just have to help human beings make that transition in various ways. And so I think that the question when you think about policy, like, too often, the
very natural thing is, how do we slow down, how do we stop? And the question is how do we
drive to the right locations? What are the things? So like, for example, you know, sometimes I sit with politicians in this country and I say, "Well, would you like a return to vigor in the manufacturing industry?" And they're like, "Yes, those are great middle class jobs," et cetera. "Okay, what's your industrial policy for getting there?" Like, "Well, okay. Yeah, protectionism doesn't really work; it may work for a decade, and after that, you're handing a much worse
future to your children. AI and robotics is the best
way for rejuvenating it." And they said, "Well, but doesn't it just all
become robot factories?" "Like, look, if it's all robot factories, we have other opportunities. But actually, in fact when
you look at Amazon centers, as they get more automated, they do ship more packages
per person who works there, again productivity,
you know, and progress — but they also hire more people, right?" And so that's what capitalist progress is, and that's what I think
is the kind of thing that we should be looking at here. - Are there specific things that you think our policy
conversations need to be focusing on over the next few years? - Well, so for example, let's
take the medical assistant. Right now, most of the builders of these
models try to steer them away from giving any medical
advice in any particular way because they don't want
to have to take on liability — unless they're in a medical circumstance; there's at least one person doing that in the audience, 'cause I met him recently, just before. But, like, for example, the general models like GPT-4 and so forth are kind of steering more away from that. I actually think that if I
was a proactive policy person, I would say, look, here's the lines that
you have to color in. You have to say, I'm not a doctor. You have to say, you know,
can you see a doctor? You have to say, I'm not sure
of my advice to you, right? And you really should be trying to seek medical advice
as much as possible. But then within that, you can give some answers
and we should be following up and seeing how that works. Then you could start provisioning a medical assistant on every phone, right?
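A hypothetical sketch of those "lines you have to color in," expressed as a system prompt; the guardrail wording, model name, and OpenAI client usage are illustrative assumptions, not an actual policy or product:

```python
# Hypothetical sketch only: hard-coding the required disclosures into a
# system prompt. The guardrail wording and model name are illustrative.
from openai import OpenAI

GUARDRAILS = (
    "You are a health information assistant, not a doctor, and you must "
    "say so in every answer. Always ask whether the user can see a doctor; "
    "if they can, urge them to go, especially for urgent symptoms. Make "
    "clear that your suggestions are uncertain and are not a diagnosis, "
    "and that seeking real medical advice is strongly preferred."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def medical_assistant(question: str) -> str:
    """Answer within the colored-in lines: disclaimers first, then help."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder for whichever model gets provisioned
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```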
And you know, I personally think that, as we progress as societies, medical care should be offered by the society. I don't think it necessarily needs to go through an employer, but we have a lot of people
who are uninsured here, which means they don't have access. Well, this could be a way
to begin to help there. And that would be something
that would, for example, be proactive — where you're shifting toward trying to get somewhere, to make a positive result happen — something that could be done on a policy basis. - I'm getting a signal to go to Q&A. But I wanted to ask you about your quick thoughts on AI, social networks and the election cycle- - (chuckles) Quick thoughts? - in 60 seconds, because we need to go to Q&A.
- 42. - Look, part of what I've
been an advocate for, LinkedIn obviously
shows this too — and this is what I think technology companies need to realize — is that we're not just offering products to individuals; as you get to a certain level, you also have to have society as a customer. And it's like, how do you manage with society and the group of folks? And democracies work when
we collectively learn. You want these networks to
engage in collective learning, and collective learning means learning towards what
is actually true, right? So if you have, like for example, if you have information ecosystems that are claiming that the
2020 election was stolen and when you see through such
things as the Dominion lawsuit that the Fox, you know, opinion commentators who
were perpetrating this were texting each other knowing
that it was wrong, right? That's a serious problem. That's something that
society should learn about. You should have learning
ecosystems in order to get there. And so it doesn't mean you
have a provider of truth, like I at LinkedIn will tell you what is true and what is not true. That's a challenge. What you want is to have a
learning ecosystem by which... That's one of the reasons
why like when we think about almost any system, any institution, we say judgment of truth is important. We do panels of human beings. We do it in like science, you know, and like academic journals and reviewers. We do it in juries, we do
it in studies, in science, all of these things, like okay, how do we deploy those
as learning systems? That's what we should be working towards. - So we have what, five months to go? - A few more than that, right?
- Yes, seven. Okay, let's open it up for questions. Do we have mics going around or... - There seems to be a mic there. - Okay, all right. So we'll start here. - Ah, and there. - [Student] Thank you for coming today. I'm a first-year student here. My question is: if you're 30 years old today, and you're willing to take risk, take a venture, willing to learn, what would be an opportunity that you would capitalize on today? - So one of the things that I said in "The Start-up of You" is that people kind of underrate
certain kinds of decisions. So one decision is to invest in getting soft assets versus hard assets — that's the network around you, knowledge, et cetera. Now, a lot of people here: you are investing in knowledge, which is good. Join networks and industries more than specific companies. Companies can be good, but, like, which networks and which industries are the ones that are amplifying and growing, right? And then do anything to get into them. So like for example, look, I'm, you know, not unhappy with where
my career has ended up and what I've ended up doing. But for example, if I were to go back and think about what things I maybe would've done that would have been smarter decisions: when I left Apple to go to Fujitsu, I was oriented on "I must be a product manager. I must have product manager as my experience." Actually, in fact, maybe the decision should've been to go to Netscape, because it's an online revolution, and being part of the online revolution is what matters more than the Fujitsu role. So choose kind of the networks and industries that fit you and fit the kind of things
you would want to do. You know, so for example, you know, I guess I won't call out industries and get certain people grumpy with me, but you know, there are industries
that are on the decline. Be serious. Like if you don't want... Like be thoughtful. If you go, "I must go do
that industry," great, but realize you're choosing an industry where the tide is in front of
you, not behind you, right? So that kind of set of choices. Now obviously, software, technology, artificial intelligence can be part of that cognitive industrial revolution. But then the way you begin thinking, where am I contrarian and right? Where do I have an
interesting potential thesis that can break out and do
something extraordinary? Like maybe people aren't
that focused right now, like only small groups are
focused on drug discovery and AI. I'll go do that. I have
a biology background. I don't know anything
about your background. So I'm just, you know, kind
of throwing out things. And part of what I charted in "The Start-up of You" is to say, look, you've
got your set of assets, you've got your aspirations,
you have market reality. You're doing a product across that to have the greatest
competitive differentiation. That's the kind of thing to be looking at. - Questions? Over there. - [Chelsea] Hi, I am Chelsea.
Thank you so much for sharing. I'm just curious, how's your investment thesis and what kind of company you like the most and what drives you to make the decision to invest the Airbnb? Thank you. - Well, I'll do the Airbnb
one 'cause that's easy, 'cause it has a fun story. So the first person who pitched Airbnb to me pitched it as couch-surfing, and it caused me to not meet the founders for a year. 'Cause I went, "Oh, couch-surfing's
not a very good idea and it just won't work
as a general strategy." And so, you know, these other people were telling
me these founders are great. First lesson I learned: do not allow someone who's not the founders themselves to pitch to you and overly condition you to the negative, because that person got it wrong. It was within three minutes of meeting the three founders that I was like, "Okay, I'm going to make you an offer to invest. Come in to pitch the venture with us on a Sunday. Come in and pitch the partnership tomorrow," et cetera. So when we pitched the partnership — and David Sze was the most valuable reason I'm at Greylock, my most valuable board member at LinkedIn — we went through the pitch, the founders went out, and David Sze looked at me and said, "Every VC has to have a deal to fail on. Airbnb can be yours," right? We talk very bluntly to each other around a partnership table. And I was like, "Huh, that's interesting. I think that increases my
interest in investing." Contrarian and right, remember? Six months later, data hadn't changed. David came to me and said,
"Okay, you were totally..." David's learning machine is great. Came to me and said, "You were
totally right. I was wrong. What did you see that I didn't see?" And I said, "Look, everything
you said was right. The local unions will hate
it, especially in the hotels. Cities won't like the
kind of rezoning of stuff. Neighbors will be uncomfortable. Maybe something bad's going to happen — you know, a murder or something — that would be bad in these things. All of that could kill this investment. They have good plans, and this is the way the world should be, which is: suddenly, travelers have a chance
to get much more unique experiences, connections, local community kind of where they are. Hosts can become small entrepreneurs offering their room, their apartment. They can innovate in ways that hotels actually don't innovate. It can be cheaper, it can be more expensive
and more delightful. It can be the whole range. And that's actually in fact
the way the world should be. So I think if we navigate those risks, we can create something
that's really amazing. And I think, you know, these three founders have, you know, potentially what it takes." There's always a risk coefficient. That's kind of, canonically, about how I look at investments. Can't comment on the
most recent investments 'cause they're still in stealth, but that was the reason I answered Airbnb in a little bit more depth. - There was a question over there. Yeah. - [Lei] Cool. Thanks for being here. Lei, MBA '25. Two-part question. First, do you think
there are any arguments for open sourcing AI that have merit? And then second, do you feel as though policymakers are getting ahead of the tech so they can regulate it more properly than sort of the Web 2 ecosystem? Thinking of 2016, you know, the bad actors already existed, and we were letting some of that influence happen without anybody really paying attention. Do you feel policymakers
are putting their ear more to what's being built and truly
understand what's happening? - Let's see. So on the first question, let's see. I do think that, like generally speaking, open sourcing software is something I'm broadly sympathetic to. So I think there's a bunch of different kind of like small open source models that are totally good to do. I think enabling entrepreneurship,
enabling academic work, having openness in
examination is all good. And so if you can look
in a model and know — 'cause, by the way, these models can be post-trained once they're out in the world. So it's like, I did safety
training and then I released it. It's like, well, the safety
training can be undone. So you say, well, I trained it
not to tell you how to make, you know, anthrax and yeah, I can untrain it and it can do that. And then suddenly you
have a lot more people who can follow a recipe
for how to create anthrax, not good relative to,
you know, public health. So that's kind of roughly
on the open source thing. And I'm trying to figure out ways to say, how do you get some percentage
of open source without 100%? Like how do you have it, say you have broader
access, which is good, but not like terrorists or crazy people or criminals or rogue states? So that's one. Now two, the problem is you build technology, it does a bunch of good things
and you get some challenges. And at that point, in
hindsight, everyone goes, "Well, it's really obvious how you should have regulated back then. You should have done it." Well, yeah, but by the way, if you try to regulate beforehand, your picture of what the real issues are and what the ways to navigate them are is almost certain, even amongst experts, to be inaccurate. And so you prevent potentially
a lot of good things and maybe you prevent bad things, but you also just make
a lot less progress. And so my general view is, when regulators talk to me about regulating anything, including, for example, social networks, I'd say, "Look, try to define your outcomes
in a crisp way and say, here, I want more of these
outcomes and less of these, how would you make that work?" Example, social networks. We had that New Zealand
instance where, you know, basically, a crazy terrorist
was filming murders. You say, okay, I want fewer murders shown. But what you do is you say, okay, here, you have to audit it, you have to run it through your auditors, and we'll have a fine infrastructure. So you showed one murder by accident? Okay, that's $10,000. You know, you showed 100 murders of the same incident? Well, that's $1 million, and, you know, dah, dah, dah. And by the way, the tech ecosystem will figure out a way to keep it very small. And so that's the kind of thing that is the right way to be thinking about regulation on these issues, but of course then you
have to do the hard work of thinking about what are the outcomes that you're trying to steer away from, which is the real work
as opposed to saying, "Just stop until you know you're perfect," because if you did that, with today's lens for evaluating technology, aspirin wouldn't be approved, cars wouldn't be approved, you know, if they were starting at the beginning. So you have to say, no, no, no. How do we learn and iterate as we go and add seat belts, you know, as we go? - There was a question right behind, and then we're going to go there. - [Student] Hi. Thanks for coming. And my question is about Inflection AI. I think Inflection AI is maybe
the best large language model at understanding and expressing emotions. And I'm curious about what's
the secret to achieving that, and why can't GPT-4 and Gemini do the same thing? Thank you. - Ah, the secrets of our
training development. - I think it's best- - We're out of time. - I think it's best if I'm just honest and say that's part of the trade secrets, right? I think it's reproducible. I think other people... One of the things in technology is people see it, they realize that they can produce it as well, but there were years of work by very smart people to make that happen. - There was a question over there. - [Student] Hi. - A microphone is coming down to you. - [Student] I have a mic right here. Can I ask a question now? - Sure. We have time for two? - Sure, yeah. - Okay. Yeah. - Yeah, go ahead, and
then we'll come here. - Yeah. - [Philippe] My name is Philippe. Thank you very much for coming. My question is regarding your background. You studied symbolic
systems and philosophy. I'd love to understand why
you chose to study that, and to take a deeper dive into why you chose to switch to entrepreneurship. How does your background make you a better investor, entrepreneur, or person? - So one of the things
that I try to get people to pay attention to is that people think investing is like, oh, you do an analysis, you know, discounted cash flow and market growth and CAC and LTV and all these things, and those are important, but it's actually, when you're imagining the world as it can be, it's a kind of lens of possibilities. It's a question of what you might be able to construct with the technology,
it's how teams operate, how scaling works and a
bunch of other things. And so, you know, when I was teaching one of the Y Combinator classes with Sam Altman, Sam asked me, "Well, what do you believe that most of the people in this room don't believe?" And I was like, well, in order to be a good entrepreneur, you need to have an explicit theory of human nature, right? And then when you're building your product and thinking about it, you say, here's what I think human nature is and here is how I think people will respond well to my product and I will help them elevate
and become better through it. And that's one of the areas where philosophy is useful. And in the case of symbolic systems, although I got into philosophy from thinking about what things symbolic systems would need to learn, it was kind of questions around, how do you take precision in thinking about thinking, or thinking about how languages work, and how do you use that in the artifacts you're creating? And I think that's much more important than many other things when you're doing technology creation. Now, obviously you have
to understand something about technology too,
surprise, but anyway. - [Mike] I'm Mike Weinberg. The
question is on VC valuation. There was a bubble the
last couple of years. That bubble moved (unintelligible)... - Well, one of the things... So the question, I think the microphone wasn't fully on, so I'll repeat a little bit of it. The question is about valuations and bubbles in venture capital, and now AI. Part of the thing that happens here is that everyone goes, "Oh my god, there's going to be amazing technology just like the internet," and people start investing, and you know, valuations get bid up to levels that, especially as an investor, don't quite make sense on a discounted cash flow analysis and everything else. Well, part of the
question is what timeframe and what does the compounding look like? So I think there are a
lot of nutty deals done and part of the nutty deals
is nutty valuations too. But it's also where people know that you're possibly going to create multi-billion dollar companies
in relatively short order, and you're taking a risk bet on that. Now, you know, as an investor, I would prefer the valuations be lower; the market causes the valuations to be higher. You know, that's good for entrepreneurs, which is ultimately what I really like because that's how things get created. Investors kind of come along for the ride, try to help out if they're good. And so anyway, it's a simplistic answer, but it's, you know, one of the reasons why classic investors almost always kind of go, "Ah, this technology stuff is all nutty," because it's all bid up to very high prices. Take a canonical case like Tesla. Like why is Tesla, you know, valued more than all the rest of the car companies, you know, combined? I'm not saying it should be, I'm just saying, you know, that's a question, and you say, well, it seems irrational. And it's like, well, if you believed, as maybe the investors
in Tesla broadly believe, that cars, that transport, is shifting from a mechanical engineering paradigm to a software paradigm, and that none of the current companies are going to survive that shift, and that Tesla's going to be the one great, you know, automotive megacorp, then that valuation is not as nutty. Now I think the valuation maybe is making that seem like a certainty rather than a possibility, right? But you know, that kind of thing is
part of what's happening in these market valuations around tech and that's one of the reasons
why people are certain that there are things coming in tech that are the future, which they are broadly right about. - [Mike] Thank you. - Okay, we'll take one
last question back there, and then we'll end. - [Angela] Thank you for
sharing. Angela, MBA 25. I have a question for you. Do you think AI will
fundamentally challenge the importance of human
relationships and interaction, especially in those industries where human relationships are very much the center, for example, K-12 schools. You know, we say that, "Oh, the teacher-student relationship is one of the most important things in determining a student's, you know, growth and performance." Do you think that AI will eventually challenge or modify that in any sense? - Well, I think it'll transform things, because it'll add in, for example, like an infinitely patient tutor, say, in the educational system. So as opposed to, you know, currently, where you
have a teacher who says, "Look, I'm responsible for x students. I only have a limited amount of time, and if a student's not getting it right, I have limited time to debug it and spend time with them," you have something that will actually help in that circumstance. I don't think it'll replace, because, as a gestural comparison, you know, human beings do not play chess as well as AIs anymore, full stop. But we have more people watching human beings play chess with each other. We are human-oriented,
we are people-oriented. That's kind of like we're
tribal pack animals. And so I think broadly, even though there will be some people who go, "Oh, no one understands me, this AI is my only friend," and you know, we'll have some of that weird dysfunction, I think building AIs like Pi to say, "Hey, let me help you connect with your friends," is much healthier, and I think, broadly, people will naturally head in that direction anyway because we like human connection in various ways. And so I think it can be transformative, but in an amplifying way, whether it's education, medical, all kinds of things. I think it will be a helpful thing, but it will transform. - Okay. Reid, thank
you so much for coming. Thank you for engaging with us. Thank you all for coming.