Please welcome to the stage Dr. Fei-Fei Li, Sequoia Professor in Computer Science and co-director of the Human-Centered AI Institute at Stanford University, for a conversation with Bloomberg's Emily Chang. A.K.A. the godmother of artificial intelligence.
How do you feel about that title? That's the first question.
You know, Emily, I would never call myself the godmother of anything. But when I was given that title, I actually had to pause and think about it. And then I was like, well, if men can be called the godfathers of things, so can women. And so I accept it 100%.
I mean, you are one of the most influential computer scientists of our time. You have written hundreds and hundreds of papers. You were the creator of ImageNet, which laid the foundation for modern A.I. and which was basically this database of images and their descriptions. Did you have any idea how influential it would be?
ImageNet was conceived back in 2007 as probably the inflection idea for big data's role in AI algorithms.
So from a scientific point of view, I had the conviction that big data would fundamentally change the way we do AI. But I don't think I could have dreamed that the convergence of big data, neural networks and GPUs would essentially give birth to modern A.I. And I could not have dreamed of the progress, the speed of progress, since then.
You are in rooms with the people who are making decisions about the future of this technology: President Biden, Sam Altman, Sundar Pichai, Satya Nadella. You are testifying before Congress. You are on task forces.
What is your main message to the people who have the power about how they should
use that power?
Great question, Emily. Honestly, the message is the same whether I'm in the room at K-12 summer camps or at Stanford, in our introduction to AI courses: recognize this technology for what it is, and learn how to use it in the most responsible and thoughtful way.
Embrace it, because it's a horizontal technology that is changing our civilization. It is bringing good; it is going to, you know, accelerate scientific discovery, help us find cures for cancer, map out our biodiversity, and discover new materials with us. But also recognize all the consequences, and potentially unintended consequences, and how to develop and deploy it responsibly. I just think that more of this balance, this thoughtfulness, is so important in today's conversations, whether it's in the White House or on campus right now.
And I don't know if you would call this a crisis or an inflection point, but A.I. models are running out of data to train on. And then you've got companies turning to AI-generated data and synthetic data to train their models. How big a problem is this? What are the risks? What's the next step here?
So first of all, I think "our models are running out of data" is a very narrow view. I know that you're implicitly referring to these large language models that are ingesting Internet data, especially data from, you know, websites, Reddit, Wikipedia, and whatever you can get a handle on. But even looking at language models, let's just stay in this narrow lane, there is so much more differentiated data we are seeing that can be used to really build customized models, whether it's, you know, journalism as a business, or very different enterprise verticals, for example health care. We're not running out of data. In fact, there are many, many industries that still have not even entered the digital age yet. We have not taken advantage of the data, whether it's health care or environment or education and all that. So even in the lane of language models, I don't think we're running out of data, from that point of view.
Do you think using AI-generated data to train models now is a good thing, or does that take us further and further from the original source in a dangerous way?
That's so much more nuanced than a yes or no. It's a good question. So there are many ways to generate data. For example, in my Stanford lab, we do a lot of robotics research, right, robotic learning. There, simulated data is so important, because we simply don't have enough resources or enough opportunities to collect human-generated movements and all that. So simulation is really, really important. Would that take us onto a dangerous path? I think even with human-generated data we can go down a dangerous path, and simulation data likewise: if we don't do it in a responsible way, if we don't do it in a thoughtful way, of course it might take us down a dangerous path. I mean, I don't even need to call it out; you know what happens with human-generated data, there are, like, entire dark webs. So the problem is not simulation itself. The problem is the data.
You're getting into the hot and crowded A.I. startup game. You're starting something, reportedly.
Can you tell us anything about it? No.
Okay, well, stay tuned. We also conducted a poll about trust in the age of AI. Can we bring up the results of that poll? The question was, how much do you trust tech companies to develop AI safely and securely? "I fully trust them": 0%. "I'm skeptical": almost everyone. "Not at all": a significant portion. And who were the people polled? The people in this room.
The people in this room.
Okay. If you had to rank the big A.I. players, who do you trust the most? And who do you trust the least?
My trust is not placed in a single player. My trust is placed in the collective system we create together, and in the collective institutions we create together. So maybe that's a trap question. I'm not going to be able to call out anybody that I feel is... you know.
I mean, think about the founding fathers of the United States.
They did not place trust in a single person.
They created a system that all of us can trust.
Are we doing that?
We're trying. At least the Stanford Institute for Human-Centered AI is trying.
I think many people are trying. You know, I get asked this question a lot, Emily: do you still have hope? First of all, it's such a sad question. But I do say my hope is in people. And I'm not a delusional optimist. You know, people are complex. I'm complex. You're complex. But the hope is in people, in our collective will, our collective responsibility. And many things are happening; many of us are working to make this a trustworthy civilizational technology that can lift all of us.
So there are so many risks that get talked about: human extinction, bad actors, bias, like racial bias, all kinds of bias getting exaggerated. What is the thing that you worry about the most?
I worry about catastrophic social
risks that are much more immediate. I also worry about the overhyping of
human extinction risk. I think that is blown out of proportion. It belongs to the world of sci-fi, and there's nothing wrong with pondering all this. But compared to the actual social risks, whether it's the disruption of disinformation and misinformation to our democratic process, or, you know, the kind of labor market shifts, or the bias and privacy issues, these are true social risks that we have to face, because they impact real people's real lives.
Meta is leading an open-source A.I. campaign. What do you think should be open and what should not be open?
That is a nuanced question. I do believe in an open ecosystem, especially in our democratic world. I think the beauty of the past couple of hundred years, especially in our country, is the kind of innovation and entrepreneurship and exchange of information. And so it's important that we advocate for that kind of open ecosystem.
What is the biggest thing in A.I.
that nobody's talking about? What should we be talking about?
I think we should talk more about... oh God, that's a long list, actually. We should talk about how we can really imagine using this technology. You know, I talk to doctors. I talk to biochemists. I talk to teachers, I talk to artists, I talk to, you know, farmers. There are so many ways we can imagine using this, so many ways we can use this to make people's lives better, their work better. I don't think we talk about that enough. We're talking about gloom and doom, and it's also just a few people talking about gloom and doom, and then, you know, the media is amplifying that.
I don't know what you're talking about.
Yeah, my hand was waving nicely. I don't think we give enough voice to the people who are actually out there, in the most imaginative, creative way, trying to bring good to the world using this.
Is there anyone, anything you want to, like, call B.S. on? Like, anyone or any company that just kind of ticks you off?
I know where you're going with this.
I won't call them out. I would just say the B.S. ... I just think there is an over-indexing on the existential crisis. I'm sorry, extinction... existential crisis.
An existential extinction crisis.
Yeah, exactly. That is an over-indexing. And I am concerned about some of the bills in different parts of our country, in California state, that are over-indexing on that. It might come from a good place, but it's putting limits on A.I. models, and might even inadvertently criminalize open source, and they're not really being thoughtful about how to evaluate and assess these models. I am concerned about that.
So you think we might overregulate?
We may overregulate in ways that we didn't intend, and hurt our ecosystem. But in the meantime, there are places where the rubber meets the road, like health care, transportation and finance, where we should look at the proper guardrails.
Did you talk to President Biden about that?
Because I know you have like a line to him.
I can tell you what I actually talked about with President Biden. One of the things we talked about is a moonshot mentality to invest in public sector AI. Because, you know, we're here in the heart of Silicon Valley; it's not a secret that all the resources, both in terms of talent, data and compute, are concentrated in the tech industry, actually in the big tech industry, and America's public sector and academia are falling off a cliff pretty fast in terms of resources.
The Stanford Natural Language Processing Lab has 64 GPUs. Sixty-four. Think about the contrast. And so we talked about resourcing our public sector, because that is the innovation engine of our country. It produces public goods, it produces scientific discoveries, and it produces trustworthy, you know, responsible evaluation and explanation of this technology for the public.
So, last question, and this is something I know you're really focused on in your lab and in general. There are not enough women and people of color in this field who have their hands on the dial. How serious is this risk, and what could it lead to?
Yeah, well, Emily, I know you have been
advocating on this issue. Look, there aren't enough. In fact, I think the culture is not necessarily getting better. We're seeing more and more women and people of diverse backgrounds entering the field of tech and AI, but we're also seeing the voices of the men being lifted much more, and all that.
And people do say, well, okay, you're here talking. But there are so many people who are better than me, so many young women and people in tech from different and diverse backgrounds whose voices should be lifted, who should be given a bigger platform. If we don't hear from them, we're really wasting human capital, right? These are brilliant minds, innovators and technologists and educators, inventors, scientists. Not giving them a voice, not hearing their ideas, not lifting them up wastes our collective human capital.
I think godmother is a pretty good term.
Do we all agree?
Thank you.
All right. Thank you, the godmother of A.I.