Last week, Reid Hoffman posted a video
where he interviewed his digital twin, Reid A.I., which was created, of course,
with AI. And today we're very excited to
interview Reid A.I.
Thanks for being here. It's an honor to be here engaging in a
conversation that once seemed like science fiction.
This is my first interview ever with an AI twin, so I think we're kind of making
history. It's truly a groundbreaking moment.
Interviewing an A.I. twin opens new dimensions in dialogue and interaction. Let's see where this leads.
Before we get started, what do I call you?
Are you a digital twin? Are you a deep fake?
Are you an avatar? Think of me as Reid's twin, reflecting
his thoughts through artificial intelligence.
I'm not a deepfake. My role is transparent and constructive, bridging technology and human experience to enhance discussions with insights from Reid Hoffman's perspectives. What do you think of the word deepfake?
Do you think it makes this technology sound bad or inauthentic?
The term deepfake carries a connotation of deceit and the potential misuse of
technology, such as spreading misinformation or unauthorized
impersonations. Calling an AI like me a deepfake might
overshadow its potential benefits like education, entertainment and meaningful
dialogue by focusing only on the negatives.
My last question for you is why do you think Reid built you?
And answer me in Pig Latin, if you can. Eed-ray eated-cray e-may o-tay ultimately-yay explore-yay and-yay emonstrate-day artificial-yay intelligence-ay's ossibilities-pay.
All right. Well, as fun as this has been, I think
I'm ready to talk to the real Reid. Reid A.I., thanks for joining us. Please welcome to the stage the real Reid Hoffman, partner at Greylock, for a conversation with Bloomberg's
Emily Chang. You are actually making one of our own right now. So why are you doing this at all?
How do you think your digital twin did? Well, you know, it did very well. Stunningly. Stunningly. I've literally had friends of mine go,
Oh my God, I'm so creeped out. I just finished watching it.
Although part of that was to say, look, this is part of the future and part of what we can be doing, and how do we navigate that; part of it was to introduce this by showing, not just telling, in various ways. And, you know, for example, one thing that I sometimes feel, and I think I may have made this comment when I was talking to myself in that video, is the buzzword bingo side of Reid A.I.
I was like, Please stop doing that. This takes a little bit of time and
resources to do now, but it's only going to get easier and everyone's going to be
able to do this. Yes.
Is this the beginning of the end of truth as we know it?
I strongly hope not, and I'm working towards the answer being no. I do think the question is, how do we figure out how to preserve, or recreate, a new foundation for truth? I think there will be a set of things, whether it be watermarking, digital certificates, or kind of trusted sources.
I think we need to invest in that collectively across the entire tech and
media ecosystem. I think part of the reason I did this is to say, look, this is what's doable. The technology we used for Reid A.I. is commercially available. It's not as if I have access to tech that the world doesn't. This was me going to companies and saying, hey, I'll buy video from Hour One, or buy, you know, audio from ElevenLabs. I'll set up a ChatGPT, you know, kind of specialized agent on my writings and texts, and here you go. Now, Steve
Huffman was just on stage saying he thinks the Internet's going to be safer
in a few years. And you were like, I don't think so.
Look, do you think the opposite? I think the opposite.
But mostly because one of the things I think is under-commented-on is that the cybersecurity realm, the Internet, is actually in a much broader state of war than Ukraine or Gaza. Like, you've got criminals from North Korea and Russia, you know, holding hospitals hostage, etc. We have this situation where, because we can't see it, countries have not responded to it, but it's actually a state of war. Now, some of that will, of course, be, you know, things like, for example, Putin is going to be trying to
interfere with U.S. elections because, of course, his exit
strategy from Ukraine is to elect Donald Trump.
And so you're going to see a whole bunch of things there.
And so that's the reason why I don't think we're on a road to safer any time soon. So the Internet is in a state of war.
That sounds pretty terrifying. And then what happens if you just, like,
give them AI? Well,
some of that has been given already. It's one of the reasons why, you know,
you have to be careful about how you're navigating open source, because once it's open source, it's there for everyone. It's there for the North Koreans, it's there for the Russians, it's there for the criminals. So that's already the case now, by the way, and a lot of times that's totally fine. Open source web server? Great, fine. Open source database? Great, fine. You know, at LinkedIn we open sourced tons of technologies. Great.
Right. But you just have to be careful about which of the things are challenging, and that's part of the question about where we end up going. For example, if you asked, what is some of the stuff that's most currently challenging around A.I.?
It's a combination of deepfakes with like electoral interference and
cybercrime. Those are the two things that are most
currently the kind of the issues we have to be navigating.
And it's one of the reasons why I try to get us to talk less about the
superintelligence risks. Not that we shouldn't pay attention to them, and we should be in dialogue, but the first thing is AI in the hands of bad humans. Right?
But let's make sure that we're navigating that world first.
Well, and we're getting a little ahead of ourselves.
But Chris Cox from Meta was here talking about Llama. Vinod Khosla very categorically was like, Llama should not be available in China.
What are your thoughts on that? Well, once you open source, it's
available to everybody. Now, that being said, I think there are some, you know, kind of good arguments saying that, actually, American technology platforms being open in the rest of the world, including in China, standardizes the world in terms of how things operate and so forth, and that can be a good thing.
And actually, look, I myself think that, you know, we have definite, important competition with China. We have competition with China in terms of the political ecosystem and how the world order should be. We have competition with China in terms of economics and the values that are embedded in economics, which is part of the reason I'm a strong supporter of what,
you know, the Biden White House has done with the CHIPS Act and all the rest.
And I think that's a very good thing to do.
And I think they played that very smartly.
But the thing that I think is also important is we also have to have
bridges and alliances. We want to have economic relationships.
So you want both the kind of competition shaping and channels of economic relationship. I think the world is better, and we in the US and they in China are better, with that.
Where is China versus the US? Who's ahead, and by how much? Well, specifics. So.
All right. Well, look, I'd say the straw poll of
experts would say the US is ahead by about two years,
but it's two years in a very fast moving game.
It's easy to fumble two years; it's like the blink of an eye.
Yeah. So that's one of the reasons why, you
know, generally speaking, when I'm in dialogues like this one, I say, look, it's really important we keep our pace going. We understand that it's important that the companies build this. And people say, well, you know, what about the concerns? And I say, let's navigate the concerns as we go. Let's drive into the future and navigate around the potholes and all, but keep driving, don't stop. And so I think that's roughly where
it is. I think, like I said, the CHIPS Act is,
I think, good, but I don't want to overregulate either.
And you're basically worried about overregulation? A hundred percent.
So in terms of, look, once again, I think that, you know, I was helping some, but I think the Biden administration did a very smart play. First, they said, bring in the big players and say we're going to push you really hard on voluntary commitments. Great. And it was like, okay, we're going to commit to watermarking, red teaming, being open about training models over a certain size of compute, and say, okay, then let's look at what we can do.
Let's do an executive order based on the Defense Production Act to try to make part of those voluntary commitments into essentially a rule of law for other
American companies. But we're really just focusing on the
limited set of cases that could be very dangerous.
We're allowing a massive amount of innovation across the entire ecosystem
because that's what we should do. Now, obviously, if you could do more on deepfakes, that'd be good. Because, you know, the large companies, Microsoft, Google, OpenAI, are all committed to watermarking. That's not obviously what open source models are going to do. Now, you mentioned Biden.
You are going around the world briefing the most influential leaders in the
world about AI, people like the Pope, who you just talked to, like, a few days ago. Not a few days ago. That was televised.
All right. What is your headline when you're going
into these meetings? What is the one thing that you want them
to take away? So the headline is, there are important
risks to navigate. But the reason why it's important to
navigate them is because there is an absolutely amazing human future that is
easy to drive to. So, for example, a lot of people say,
Oh, hey, what's it going to do? It's like, well, I can today build a
medical assistant that will be on every smartphone that will be better than your
average GP. Doesn't mean we shouldn't have GPs, general practitioners, doctors. But think: 8 billion people in the world.
There are billions of people who have no access.
What can you do for them? There are people in this country for whom it's very hard to have access, or who have no access other than going to the emergency room. What can you do for them? Like, that's super. Or what about a tutor on every smartphone for every age and every subject? Like, one of the things is, go play with Khanmigo. What Sal Khan is doing with Khan Academy is amazing, and it can be done for every human being.
Let's do that, so that that positive future makes sure that you know what we're trying to drive towards. Do we have that Pope drip deepfake somewhere? I mean, you had to have seen this. I've seen it, yes.
How did he feel about this? Well, so we didn't talk about this, right? I mean, I had to stop myself. When I look at it, I look to see if I can go get one of those jackets. And look, one of the things that I think
is amazing about Pope Francis is that all of his questions were about, look, he said, I have confidence that for the elites in Europe and the US and China, it'll be just fine. What about the other billions of people?
Like, can we make sure that it is good for them?
And he said, oh, this medical assistant, that could be really good. How do we make sure that medical assistant gets to all of the people in the Global South? How do we help make that happen? And I was like, well, you happen to be the head of a state. You can go talk to other states, too. This is, you know, it's good. And he cares. He cares about all people, not just
Catholics. You're on the board of Microsoft.
And Microsoft just made an interesting deal with Inflection, like, they're licensing the technology, hiring all the employees and the co-founders.
here where they're not outright buying things.
I mean, aren't regulators raising their eyebrows at this?
Well, to be clear, it was a 100% legal deal
and all the rest. That's good, right?
And the regulators are asking questions, which I can kind of understand.
I think that the macro thing was, what we decided at Inflection was that the agent would take a long time to build into a good business as a startup. And yet the agent was also something that was really important. We were really committed to getting an agent that wasn't just kind of IQ, but EQ as part of this. Everyone has their own personal intelligence, everyone has their Pi, as part of that. And we wanted to see that future happen. And then we knew we needed to pivot the business, starting, of course, with this Inflection 2.5 model. And so we said, look, this is the
way we think this could happen. And so, you know, Mustafa and Satya themselves had a whole long set of conversations where I wasn't in the room. I was occasionally fielding a phone call from one or the other. But being on both here, wasn't it, like... No, no, but being on both boards, I had to be, you know, kind of like not in the room, right? Because it's one of those things about, you know, being aboveboard and high integrity. Right.
But it was like, this is how I understand this and this is how we can
come together. And so I think it's great for both companies. And I think it's a pattern by
which some future technology business deals will happen.
Now, you were on the board of OpenAI until a few months before the OpenAI-pocalypse, whereby Sam was kicked out, and now he's back, and you were the first person... Six months.
Six months. Okay.
You're the first person I reached out to.
You didn't tell me very much at the time.
And now that you're here and on stage and under the bright lights, I just want
to know if you can tell us a little more like, do we actually know what happened
there and is it fully over or is there something still festering beneath the
surface? Well, I have strong confidence in this
current board. I think Bret Taylor is great. I think Larry Summers is great. Some of the other folks, Sue
Desmond-Hellmann is great. I know these people.
I think that, the notion here, I can say something that's almost like a public fact about it, which is that Mira Murati, who they appointed as the first interim CEO, was at an event with me in Napa called The Grove on Thursday night. So, you know, they didn't call her before that dinner Thursday night, before announcing on Friday that she's the interim CEO. So then you get to kind of like, okay,
look, whatever you think is going on, if you're replacing the CEO with a new
interim CEO, you usually don't call that person, like, after dinner the night before, and say, oh, by the way, we're doing
this blog post tomorrow. You're the CEO tomorrow.
You know, maybe you should get back into town to come do that.
Right. So the previous board had some
competence, navigation, issues. And, I mean, Sam is back. Your confidence, your level of confidence? Very high. Look, I think Sam has done an amazing
job. Not perfect.
No one's perfect. I'm not perfect.
Sure. Yes.
Well, I might be perfect for me, and so. But like, Sam and the rest of the team at OpenAI have created magic. Like, this whole revolution. This whole, like, oh my gosh, generative AI can be an assistant in anything that we do with language, and many other things that come about from their work.
That's just, like, that is one of those moments of: this is the time to be alive. That is incredible.
It is magical. And like, kudos to them for doing this.
All right. Last question.
We have one poll to get to here, which is: how worried are you about AI's role in the upcoming election? Concerned, somewhat concerned, or not at all concerned. You are super active in politics, actually on both sides of the aisle on this, in this latest election. What do you think the October surprise is going to be? And it doesn't have to have anything to do with tech.
Oh, everyone is also very worried about it.
I mean, yes, I would say very concerned. Look, I'd say what it is, is
you'll see something like something coming out of a legal court of law and
by jurors who have convicted Trump like
they convicted him of lying about and slandering about sexual assault.
You'll see something else coming out about that.
And then it will be claimed that that's a deepfake, right?
So it creates a language of non-truth, which is a massive problem. Even when, you know, there will be some deepfakes and other kinds of issues, true things will be called deepfakes. That's the thing that I think will cause problems, because it's really important to know, like, no, no, when it actually goes through a court of law and there's 12 jurors who deliberate on it and conclude that it's a real thing.
You are like a perennial optimist. How worried are you about the election
or do we, do we get through it? Look, I'm deeply concerned, but I think that people care about kind of a future by which America, and the rule of law that we care about, are there for your everyday American. I think if that persists, Joe Biden will be president again.
All right. The real Reid Hoffman, everybody.