Today I have the pleasure of
interviewing Ilya Sutskever, who is the Co-founder and Chief Scientist of
OpenAI. Ilya, welcome to The Lunar Society.
Thank you, happy to be here.
First question and no humility allowed. There are not that many scientists who
will make a big breakthrough in their field, there are far fewer scientists who will make
multiple independent breakthroughs that define their field throughout their career, what
is the difference? What distinguishes you from other researchers? Why have you been able
to make multiple breakthroughs in your field?
Thank you for the kind words. It's hard to
answer that question. I try really hard, I give it everything I've got and that has worked
so far. I think that's all there is to it.
Got it. What's the explanation for why
there aren't more illicit uses of GPT? Why aren't more foreign governments using it
to spread propaganda or scam grandmothers?
Maybe they haven't really gotten to do it a lot.
But it also wouldn't surprise me if some of it was going on right now. I can certainly imagine
they would be taking some of the open source models and trying to use them for that purpose.
For sure I would expect this to be something they'd be interested in, in the future.
It's technically possible they just haven't thought about it enough?
Or haven't done it at scale using their technology. Or maybe it is
happening, which is annoying.
Would you be able to track
it if it was happening?
I think large-scale tracking is possible, yes. It
requires special operations but it's possible.
Now there's some window in which AI is
very economically valuable, let’s say on the scale of airplanes, but we haven't
reached AGI yet. How big is that window?
It's hard to give a precise answer
and it’s definitely going to be a good multi-year window. It's also a question of
definition. Because AI, before it becomes AGI, is going to be increasingly more valuable
year after year in an exponential way. In hindsight, it may feel like there was only
one year or two years because those two years were larger than the previous years. But I would
say that already, last year, there has been a fair amount of economic value produced by AI. Next year
is going to be larger and larger after that. So I think it's going to be a good multi-year
chunk of time where that’s going to be true, from now till AGI pretty much.
Okay. Because I'm curious if there's a startup that's using your model, at some point
if you have AGI there's only one business in the world, it's OpenAI. How much window does
any business have where they're actually producing something that AGI can’t produce?
It's the same question as asking how long until AGI. It's a hard question to answer. I hesitate
to give you a number. Also because there is this effect where optimistic people who are working
on the technology tend to underestimate the time it takes to get there. But the way I ground
myself is by thinking about the self-driving car. In particular, there is an analogy
where if you look at a Tesla, and if you look at its self-driving behavior, it
looks like it does everything. But it's also clear that there is still a long way to go in terms of
reliability. And we might be in a similar place with respect to our models where it also looks
like we can do everything, and at the same time, we will need to do some more work until we really
iron out all the issues and make it really good and really reliable and robust and well behaved.
By 2030, what percent of GDP is AI?
Oh gosh, very hard to answer that question.
Give me an over-under.
The problem is that my error bars are in log
scale. I could imagine a huge percentage, I could imagine a really disappointing
small percentage at the same time.
Okay, so let's take the counterfactual where it
is a small percentage. Let's say it's 2030 and not that much economic value has been created by these
LLMs. As unlikely as you think this might be, what would be your best explanation right
now of why something like this might happen?
I really don't think that's a likely possibility,
that's the preface to the comment. But if I were to take the premise of your question,
why were things disappointing in terms of real-world impact? My answer would be reliability.
If it somehow ends up being the case that you really want them to be reliable and they
ended up not being reliable, or if reliability turned out to be harder than we expect.
I really don't think that will be the case. But if I had to pick one and you were telling
me — hey, why didn't things work out? It would be reliability. That you still have to look
over the answers and double-check everything. That just really puts a damper on the economic
value that can be produced by those systems.
Got it. They will be technologically
mature, it’s just the question of whether they'll be reliable enough.
Well, in some sense, not reliable means not technologically mature.
Yeah, fair enough. What's after generative models? Before, you
were working on reinforcement learning. Is this basically it? Is this the paradigm that gets
us to AGI? Or is there something after this?
I think this paradigm is gonna go really, really
far and I would not underestimate it. It's quite likely that this exact paradigm is not quite
going to be the AGI form factor. I hesitate to say precisely what the next paradigm will
be but it will probably involve integration of all the different ideas that came in the past.
Is there some specific one you're referring to?
It's hard to be specific.
So you could argue that next-token prediction can only help us match
human performance and maybe not surpass it? What would it take to surpass human performance?
I challenge the claim that next-token prediction cannot surpass human performance. On the surface,
it looks like it cannot. It looks like if you just learn to imitate, to predict what people
do, it means that you can only copy people. But here is a counter argument for why it might
not be quite so. If your base neural net is smart enough, you just ask it — What would a person
with great insight, wisdom, and capability do? Maybe such a person doesn't exist, but there's
a pretty good chance that the neural net will be able to extrapolate how such a person
would behave. Do you see what I mean?
Yes, although where would
it get that sort of insight about what that person would do? If not from…
From the data of regular people. Because if you think about it, what does it mean to predict
the next token well enough? It's actually a much deeper question than it seems. Predicting
the next token well means that you understand the underlying reality that led
to the creation of that token. It's not statistics. Like it is
statistics but what is statistics? In order to understand those statistics to
compress them, you need to understand what is it about the world that creates this set of
statistics? And so then you say — Well, I have all those people. What is it about people that creates
their behaviors? Well they have thoughts and their feelings, and they have ideas, and they do things
in certain ways. All of those could be deduced from next-token prediction. And I'd argue that
this should make it possible, not indefinitely but to a pretty decent degree to say — Well, can you
guess what you'd do if you took a person with this characteristic and that characteristic? Like such
a person doesn't exist but because you're so good at predicting the next token, you should still
be able to guess what that person would do. This hypothetical, imaginary person with far
greater mental ability than the rest of us.
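A toy illustration of that extrapolation argument, with invented prompt text and no real model call: next-token prediction conditioned on a description of a hypothetical, unusually capable person.

```python
# Toy illustration of the extrapolation argument above. The persona text and
# question are invented; nothing here calls a real model.

persona = (
    "The following is written by a person of exceptional insight, wisdom, "
    "and capability, who reasons carefully and never bluffs."
)
question = "How should a small research team decide which experiments to run first?"

prompt = persona + "\n\n" + question + "\n\nAnswer:"
print(prompt)
# A strong enough next-token predictor completing this prompt has to
# extrapolate how such a person would write, even if no single such person
# appears in the training data.
```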
When we're doing reinforcement learning on
these models, how long before most of the data for the reinforcement learning
is coming from AI and not humans?
Already most of the data for reinforcement
learning is coming from AIs. The humans are being used to train the
reward function. But then the reward function and its interaction with the model is automatic
and all the data that's generated during the process of reinforcement learning is created by
AI. Look at the current technique/paradigm, which is getting significant attention
because of ChatGPT: Reinforcement Learning from Human Feedback (RLHF). The human feedback
has been used to train the reward function and then the reward function is being used
to create the data which trains the model.
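A rough sketch of the loop being described, with hypothetical function names and stubbed logic rather than OpenAI's actual pipeline: humans label preferences once, a reward function is fit to those labels, and everything generated afterwards during reinforcement learning is produced by the model and scored automatically.

```python
# Rough sketch of the loop described above, with hypothetical function names
# and stubbed logic (not OpenAI's actual pipeline). Humans only label
# preferences; once the reward function is trained, all data produced during
# reinforcement learning is generated by the model and scored automatically.

import random

def collect_human_preferences(prompts):
    """Step 1 (humans in the loop): a human picks the better of two model
    outputs for each prompt. Outputs and judgments are stubbed here."""
    data = []
    for p in prompts:
        a, b = f"{p} :: answer A", f"{p} :: answer B"
        preferred = random.choice([a, b])      # stand-in for a human judgment
        data.append((p, a, b, preferred))
    return data

def train_reward_function(preference_data):
    """Step 2: fit a reward function to the human labels. Stubbed as a lookup
    that scores the preferred answers higher."""
    preferred = {d[3] for d in preference_data}
    return lambda prompt, answer: 1.0 if answer in preferred else 0.0

def reinforcement_learning(sample_from_policy, reward_fn, prompts, steps=100):
    """Step 3 (no humans): the model generates rollouts, the reward function
    scores them, and the scores drive the policy update."""
    for _ in range(steps):
        p = random.choice(prompts)
        answer = sample_from_policy(p)         # AI-generated data
        r = reward_fn(p, answer)               # automatic reward, no human
        # ...a PPO-style policy update on (p, answer, r) would go here...
    return sample_from_policy

prompts = ["Explain photosynthesis simply", "Write a polite refusal"]
prefs = collect_human_preferences(prompts)
reward_fn = train_reward_function(prefs)
policy = reinforcement_learning(lambda p: f"{p} :: answer A", reward_fn, prompts)
```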
Got it. And is there any hope of just
removing a human from the loop and have it improve itself in some sort of AlphaGo way?
Yeah, definitely. The thing you really want is for the human teachers that teach the AI to
collaborate with an AI. You might want to think of it as being in a world where the human
teachers do 1% of the work and the AI does 99% of the work. You don't want it to be 100% AI. But you
do want it to be a human-machine collaboration, which teaches the next machine.
I've had a chance to play around with these models and they seem bad at multi-step
reasoning. While they have been getting better, what does it take to really surpass that barrier?
I think dedicated training will get us there. More and more improvements to the
base models will get us there. But fundamentally I also don't feel like they're that
bad at multi-step reasoning. I actually think that they are bad at mental multistep reasoning
when they are not allowed to think out loud. But when they are allowed to think out
loud, they're quite good. And I expect this to improve significantly, both with
better models and with special training.
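A minimal illustration of the "think out loud" distinction: the only difference between the two prompts below is whether the model is asked to write out its intermediate steps. The wording is invented and no model is actually called.

```python
# Two ways of asking the same multi-step question. Letting the model write out
# intermediate steps ("think out loud") tends to make multi-step answers more
# reliable than forcing a one-shot answer. Wording is illustrative only; no
# real model API is called here.

question = (
    "A train leaves at 3:40pm and the trip takes 2 hours and 35 minutes. "
    "When does it arrive?"
)

# "Mental" reasoning: the model must carry every intermediate step internally.
direct_prompt = question + "\nAnswer with the arrival time only."

# "Out loud" reasoning: the model is allowed to show its work first.
step_by_step_prompt = (
    question + "\nWork through this step by step, then state the arrival time."
)

print(direct_prompt)
print()
print(step_by_step_prompt)
```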
Are you running out of reasoning tokens on
the internet? Are there enough of them?
So for context on this question, there are claims
that at some point we will run out of tokens, in general, to train those models. And yeah, I
think this will happen one day and by the time that happens, we need to have other ways of
training models, other ways of productively improving their capabilities and sharpening their
behavior, making sure they're doing exactly, precisely what you want, without more data.
You haven't run out of data yet? There's more?
Yeah, I would say the data situation is
still quite good. There's still lots to go. But at some point the data will run out.
What is the most valuable source of data? Is it Reddit, Twitter, books? Where would you
go to find more tokens of other varieties?
Generally speaking, you'd like tokens
which are speaking about smarter things, tokens which are more interesting. All the sources which you mentioned are valuable.
So maybe not Twitter. But do we need to go multimodal to get more tokens? Or do
we still have enough text tokens left?
I think that you can still go very
far in text only but going multimodal seems like a very fruitful direction.
If you're comfortable talking about this, where is the place where we
haven't scraped the tokens yet?
Obviously I can't answer that question
for us but I'm sure that for everyone there is a different answer to that question.
How many orders of magnitude improvement can we get, not from scale or not from data,
but just from algorithmic improvements?
Hard to answer but I'm sure there is some.
Is some a lot or some a little?
There’s only one way to find out.
Okay. Let me get your quickfire opinions about these different research directions.
Retrieval transformers. So it’s just somehow storing the data outside of the model
itself and retrieving it somehow.
Seems promising.
But do you see that as a path forward?
It seems promising.
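For readers unfamiliar with the idea, a toy sketch of retrieval: the knowledge lives in a store outside the model, and the nearest entries are fetched and prepended to the prompt. The bag-of-words embedding below is a stand-in so the snippet runs; a real system would use a learned embedding model and an approximate nearest-neighbor index.

```python
# Toy sketch of retrieval: knowledge lives outside the model in a store, and
# the nearest entries are fetched and prepended to the prompt. The embedding
# here is a toy bag-of-words stand-in, not what a real retrieval system uses.

from collections import Counter
import math

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "The Transformer architecture was introduced in 2017.",
    "GPUs accelerate the matrix multiplications inside neural networks.",
    "Reinforcement learning optimizes behavior against a reward signal.",
]
index = [(doc, embed(doc)) for doc in documents]   # the external store

def retrieve(query: str, k: int = 1):
    q = embed(query)
    ranked = sorted(index, key=lambda entry: -cosine(q, entry[1]))
    return [doc for doc, _ in ranked[:k]]

query = "What hardware speeds up neural network training?"
context = retrieve(query)
prompt = "\n".join(context) + "\n\nQuestion: " + query   # retrieved text feeds the model
print(prompt)
```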
Robotics. Was it the right step for OpenAI to leave that behind?
Yeah, it was. Back then it really wasn't possible to continue working in robotics
because there was so little data. Back then if you wanted to work on robotics, you
needed to become a robotics company. You needed to have a really giant group of people working
on building robots and maintaining them. And even then, if you’re gonna have 100
robots, it's a giant operation already, but you're not going to get that much data. So in
a world where most of the progress comes from the combination of compute and data, there was no
path to data on robotics. So back in the day, when we made a decision to stop working
in robotics, there was no path forward.
Is there one now?
I'd say that now it is possible to create a path forward. But one needs to really
commit to the task of robotics. You really need to say — I'm going to build many thousands, tens
of thousands, hundreds of thousands of robots, and somehow collect data from them and find a
gradual path where the robots are doing something slightly more useful. And then the data that is
obtained is used to train the models, and they do something that's slightly more useful. You could
imagine it's this gradual path of improvement, where you build more robots, they do more
things, you collect more data, and so on. But you really need to be committed to this path.
If you say, I want to make robotics happen, that's what you need to do. I believe that
there are companies who are doing exactly that. But you need to really love robots
and need to be really willing to solve all the physical and logistical problems of dealing
with them. It's not the same as software at all. I think one could make progress in
robotics today, with enough motivation.
What ideas are you excited to try but you can't
because they don't work well on current hardware?
I don't think current hardware is a
limitation. It's just not the case.
Got it. But anything you want to
try you can just spin it up?
Of course. You might wish that current
hardware was cheaper or maybe it would be better if it had higher
memory processing bandwidth let’s say. But by and large hardware is just not an issue.
Let's talk about alignment. Do you think we'll ever have a mathematical definition of alignment?
A mathematical definition is unlikely. Rather than achieving one mathematical definition, I think
we will achieve multiple definitions that look at alignment from different aspects. And that this
is how we will get the assurance that we want. By which I mean you can look at the behavior in
various tests, congruence, in various adversarial stress situations, you can look at how the neural
net operates from the inside. You have to look at several of these factors at the same time.
And how sure do you have to be before you release a model in the wild? 100%? 95%?
Depends on how capable the model is. The more capable the model, the
more confident we need to be.
Alright, so let's say it's something
that's almost AGI. Where is AGI?
Depends on what your AGI can do. Keep
in mind that AGI is an ambiguous term. Your average college undergrad is an AGI, right?
There's significant ambiguity in terms of what is meant by AGI. Depending on where you put this
mark you need to be more or less confident.
You mentioned a few of the paths toward
alignment earlier, what is the one you think is most promising at this point?
I think that it will be a combination. I really think that you will not want to
have just one approach. People want to have a combination of approaches. Where you spend
a lot of compute adversarially to find any mismatch between the behavior you want to
teach it and the behavior that it exhibits. Where you look into the neural net using another neural net
to understand how it operates on the inside. All of them will be necessary. Every approach like
this reduces the probability of misalignment. And you also want to be in a world where
your degree of alignment keeps increasing faster than the capability of the models.
Do you think that the approaches we’ve taken to understand the model today will be applicable
to the actual super-powerful models? Or how applicable will they be? Is it the same kind
of thing that will work on them as well, or?
It's not guaranteed. I would say that right now, our understanding of our models is
still quite rudimentary. We’ve made some progress but much more progress is possible. And so I would
expect that ultimately, the thing that will really succeed is when we will have a small neural net
that is well understood that’s been given the task to study the behavior of a large neural
net that is not understood, to verify.
By what point is most of the
AI research being done by AI?
Today when you use Copilot, how do you divide
it up? So I expect at some point you ask your descendant of ChatGPT, you say — Hey,
I'm thinking about this and this. Can you suggest fruitful ideas I should try? And
you would actually get fruitful ideas. I don't think that's gonna make it possible for you
to solve problems you couldn't solve before.
Got it. But it's somehow just telling the humans
giving them ideas faster or something. It's not itself interacting with the research?
That was one example. You could slice it in a variety of ways. But the bottleneck there is
good ideas, good insights and that's something that the neural nets could help us with.
If you're designing a billion-dollar prize for some sort of alignment research result or
product, what is the concrete criterion you would set for that billion-dollar prize? Is there
something that makes sense for such a prize?
It's funny that you asked, I was actually
thinking about this exact question. I haven't come up with the exact criterion yet. Maybe a
prize where we could say that two years later, or three years or five years later, we look
back and say like that was the main result. So rather than say that there is a prize
committee that decides right away, you wait for five years and then award it retroactively.
But there's no concrete thing we can identify as you solve this particular problem
and you’ve made a lot of progress?
A lot of progress, yes. I wouldn't say
that this would be the full thing.
Do you think end-to-end training is
the right architecture for bigger and bigger models? Or do we need better
ways of just connecting things together?
End-to-end training is very promising.
Connecting things together is very promising.
Everything is promising.
So OpenAI is projecting revenues of a billion dollars in 2024. That might very
well be correct but I'm just curious, when you're talking about a new general-purpose technology,
how do you estimate how big a windfall it'll be? Why that particular number?
We've had a product for quite a while now, back from the GPT-3 days,
from two years ago through the API and we've seen how it grew. We've seen how the response to
DALL-E has grown as well and you see how the response to ChatGPT is, and all of this gives
us information that allows us to make relatively sensible extrapolations of anything. Maybe that
would be one answer. You need to have data, you can’t come up with those things out of
thin air because otherwise, your error bars are going to be like 100x in each direction.
But most exponentials don't stay exponential especially when they get into bigger
and bigger quantities, right? So how do you determine in this case?
Would you bet against AI?
Not after talking with you. Let's talk about
what a post-AGI future looks like. I'm guessing you're working 80-hour weeks towards this grand
goal that you're really obsessed with. Are you going to be satisfied in a world where you're
basically living in an AI retirement home? What are you personally doing after AGI comes?
The question of what I'll be doing or what people will be doing after AGI comes is a very tricky
question. Where will people find meaning? But I think that that's something that AI could
help us with. One thing I imagine is that we will be able to become more enlightened
because we interact with an AGI which will help us see the world more correctly, and become better
on the inside as a result of interacting. Imagine talking to the best meditation teacher in
history, that will be a helpful thing. But I also think that because the world will change a
lot, it will be very hard for people to understand what is happening precisely and how to
really contribute. One thing that I think some people will choose to do is to become part
AI. In order to really expand their minds and understanding and to really be able to solve the
hardest problems that society will face then.
Are you going to become part AI?
It is very tempting.
Do you think there'll be physically
embodied humans in the year 3000?
3000? How do I know what’s gonna happen in 3000?
Like what does it look like? Are there still humans walking around on Earth? Or have
you guys thought concretely about what you actually want this world to look like?
Let me describe to you what I think is not quite right about the question. It implies we get
to decide how we want the world to look like. I don't think that picture is correct. Change
is the only constant. And so of course, even after AGI is built, it doesn't mean that the world
will be static. The world will continue to change, the world will continue to evolve. And it will
go through all kinds of transformations. I don't think anyone has any idea of how
the world will look like in 3000. But I do hope that there will be a lot of descendants
of human beings who will live happy, fulfilled lives where they're free to do as they see fit.
Where they are the ones who are solving their own problems. One world which I would find very
unexciting is one where we build this powerful tool, and then the government said — Okay, so
the AGI said that society should be run in such a way and now we should run society in such a
way. I'd much rather have a world where people are still free to make their own mistakes and
suffer their consequences and gradually evolve morally and progress forward on their own, with
the AGI providing more like a base safety net.
How much time do you spend thinking about these
kinds of things versus just doing the research?
I do think about those things a fair bit.
They are very interesting questions.
The capabilities we have today, in what ways
have they surpassed where we expected them to be in 2015? And in what ways are they still not
where you'd expected them to be by this point?
In fairness, it's sort of what I expected in 2015.
In 2015, my thinking was a lot more — I just don't want to bet against deep learning. I want to make
the biggest possible bet on deep learning. I don't know how, but it will figure it out.
But is there any specific way in which it's been more than you expected or less than
you expected? Like some concrete prediction out of 2015 that's been bounced?
Unfortunately, I don't remember concrete predictions I made in 2015.
But I definitely think that overall, in 2015, I just wanted to move to make the
biggest bet possible on deep learning, but I didn't know exactly. I didn't have a specific
idea of how far things would go in seven years. Well, no, in 2015 I did have all these bets with
people, in 2016, maybe 2017, that things would go really far. But the specifics. So it's like, it's
both, it's both the case that it surprised me and I was making these aggressive predictions. But
maybe I believed them only 50% on the inside.
What do you believe now that even most
people at OpenAI would find far fetched?
Because we communicate a lot at OpenAI people
have a pretty good sense of what I think and we've really reached the point at OpenAI where
we see eye to eye on all these questions.
Google has its custom TPU hardware, it has
all this data from all its users, Gmail, and so on. Does it give them an
advantage in terms of training bigger models and better models than you?
At first, when the TPU came out I was
really impressed and I thought — wow, this is amazing. But that's because I
didn't quite understand hardware back then. What really turned out to be the case is
that TPUs and GPUs are almost the same thing. They are very, very similar. The
GPU chip is a little bit bigger, the TPU chip is a little bit smaller, maybe a
little bit cheaper. But then they make more GPUs than TPUs so the GPUs might be cheaper after all.
But fundamentally, you have a big processor, and you have a lot of memory and there is a
bottleneck between those two. And the problem that both the TPU and the GPU are trying to
solve is that the amount of time it takes you to move one floating point from the memory to the
processor, you can do several hundred floating point operations on the processor, which means
that you have to do some kind of batch processing. And in this sense, both of these architectures
are the same. So I really feel like in some sense, the only thing that matters about hardware
is cost per flop and overall systems cost.
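The arithmetic behind that statement, with purely illustrative numbers rather than the specs of any particular chip:

```python
# Back-of-envelope version of the bottleneck described above. The numbers are
# illustrative assumptions, not the specs of any particular GPU or TPU.

peak_flops = 300e12        # assumed peak throughput: 300 TFLOP/s
mem_bandwidth = 1.5e12     # assumed memory bandwidth: 1.5 TB/s
bytes_per_value = 2        # fp16 values

# FLOPs the processor can perform in the time it takes to fetch one value:
values_per_second = mem_bandwidth / bytes_per_value
flops_per_value_moved = peak_flops / values_per_second
print(f"~{flops_per_value_moved:.0f} FLOPs per value moved from memory")

# To keep the processor busy, each fetched value must be reused roughly that
# many times, which is what batching (and large matrix multiplies) provides.
```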
There isn't that much difference?
Actually, I don't know. I don't know what the TPU costs are but I would suspect
that if anything, TPUs are probably more expensive because there are less of them.
When you are doing your work, how much of the time is spent configuring the right initializations?
Making sure the training run goes well and getting the right hyperparameters, and how much is
it just coming up with whole new ideas?
I would say it's a combination. Coming
up with whole new ideas is a modest part of the work. Certainly coming up with new
ideas is important but even more important is to understand the results, to understand the
existing ideas, to understand what's going on. A neural net is a very complicated system,
right? And you ran it, and you get some behavior, which is hard to understand. What's going
on? Understanding the results, figuring out what next experiment to run, a lot of the time is
spent on that. Understanding what could be wrong, what could have caused the neural net to
produce a result which was not expected. I'd say a lot of time is spent coming up
with new ideas as well. I don't like this framing as much. It's not that it's false but
the main activity is actually understanding.
What do you see as the
difference between the two?
At least in my mind, when you say come up
with new ideas, I'm like — Oh, what happens if it did such and such? Whereas understanding
it's more like — What is this whole thing? What are the real underlying phenomena that are
going on? What are the underlying effects? Why are we doing things this way
and not another way? And of course, this is very adjacent to what can be described
as coming up with ideas. But the understanding part is where the real action takes place.
Does that describe your entire career? If you think back on something like ImageNet, was that
more new idea or was that more understanding?
Well, that was definitely understanding. It
was a new understanding of very old things.
What has the experience of
training on Azure been like?
Fantastic. Microsoft has been a very,
very good partner for us. They've really helped take Azure and bring it to a
point where it's really good for ML and we’re super happy with it.
How vulnerable is the whole AI ecosystem to something that might happen in
Taiwan? So let's say there's a tsunami in Taiwan or something, what happens to AI in general?
It's definitely going to be a significant setback. No one will be able to get more compute for a few
years. But I expect compute will spring up. For example, I believe that Intel has fabs that are just
a few generations behind. So that means that if Intel wanted to, they could produce something GPU-like
from four years ago. But yeah, it's not the best, I'm actually not sure if my statement about Intel
is correct, but I do know that there are fabs outside of Taiwan, they're just not as good. But
you can still use them and still go very far with them. It's just cost, it’s just a setback.
Would inference get cost prohibitive as these models get bigger and bigger?
I have a different way of looking at this question. It's not that inference will
become cost prohibitive. Inference of better models will indeed become more expensive. But
is it prohibitive? That depends on how useful it is. If it is more useful than it is
expensive then it is not prohibitive. To give you an analogy, suppose you want
to talk to a lawyer. You have some case or need some advice or something, you're
perfectly happy to spend $400 an hour. Right? So if your neural net could
give you really reliable legal advice, you'd say — I'm happy to spend $400 for that
advice. And suddenly inference becomes very much non-prohibitive. The question is, can a neural
net produce an answer good enough at this cost?
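The lawyer analogy as back-of-envelope arithmetic; every number below is made up purely to illustrate that "prohibitive" depends on value rather than on cost alone.

```python
# The lawyer analogy as arithmetic. Every number here is made up purely to
# illustrate the point that "prohibitive" depends on value, not cost alone.

cost_per_1k_tokens = 0.06          # assumed inference price per 1,000 tokens
tokens_for_legal_answer = 20_000   # assumed long, carefully reasoned answer
inference_cost = cost_per_1k_tokens * tokens_for_legal_answer / 1_000

value_of_advice = 400.0            # what you'd happily pay a lawyer per hour

print(f"inference cost: ${inference_cost:.2f}, value: ${value_of_advice:.2f}")
# As long as the value far exceeds the cost, even expensive inference is not
# prohibitive; the open question is whether the answer is good enough.
```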
Yes. And you will just have price
discrimination in different models?
It's already the case today. On our product, the
API serves multiple neural nets of different sizes and different customers use different neural nets
of different sizes depending on their use case. If someone can take a small model and fine-tune
it and get something that's satisfactory for them, they'll use that. But if someone wants to do
something more complicated and more interesting, they’ll use the biggest model.
How do you prevent these models from just becoming commodities where these different
companies just bid each other's prices down until it's basically the cost of the GPU run?
Yeah, there's without question a force that's trying to create that. And the answer is you
got to keep on making progress. You got to keep improving the models, you gotta keep on coming
up with new ideas and making our models better and more reliable, more trustworthy, so you
can trust their answers. All those things.
Yeah. But let's say it's 2025 and somebody
is offering the model from 2024 at cost. And it's still pretty good. Why would
people use a new one from 2025 if the one from just a year older is even better?
There are several answers there. For some use cases that may be true. There will be a new
model for 2025, which will be driving the more interesting use cases. There is also going to
be a question of inference cost. If you can do research to serve the same model at less cost. The
same model will cost different amounts to serve for different companies. I can also imagine some
degree of specialization where some companies may try to specialize in some area and be stronger
compared to other companies. And to me that may be a response to commoditization to some degree.
Over time do the research directions of these different companies converge or diverge? Are they
doing similar and similar things over time? Or are they branching off into different areas?
I’d say in the near term, it looks like there is convergence. I expect there's
going to be a convergence-divergence-convergence behavior, where there is a lot of convergence
on the near term work, there's going to be some divergence on the longer term work. But then
once the longer term work starts to bear fruit, there will be convergence again.
Got it. When one of them finds the most promising area, everybody just…
That's right. There is obviously less publishing now so it will take longer before
this promising direction gets rediscovered. But that's how I would imagine the thing is going
to be. Convergence, divergence, convergence.
Yeah. We talked about this a little bit at
the beginning. But as foreign governments learn about how capable these models are,
are you worried about spies or some sort of attack to get your weights or somehow
abuse these models and learn about them?
Yeah, you absolutely can't discount that.
It's something that we try to guard against to the best of our ability, but it's going to be a
problem for everyone who's building this.
How do you prevent your weights from leaking?
You have really good security people.
How many people have the ability to
SSH into the machine with the weights?
The security people have done a
really good job so I'm really not worried about the weights being leaked.
What kinds of emergent properties are you expecting from these models at this scale? Is
there something that just comes about de novo?
I'm sure really new surprising properties will
come up, I would not be surprised. The thing which I'm really excited about, the things which I’d
like to see is — reliability and controllability. I think that this will be a very, very important
class of emergent properties. If you have reliability and controllability that helps you
solve a lot of problems. Reliability means you can trust the model's output, controllability means
you can control it. And we'll see but it will be very cool if those emergent properties did exist.
Is there some way you can predict that in advance? What will happen in this parameter count,
what will happen in that parameter count?
I think it's possible to make some predictions
about specific capabilities though it's definitely not simple and you can’t do it in a super
fine-grained way, at least today. But getting better at that is really important. And anyone who
is interested and who has research ideas on how to do that, that can be a valuable contribution.
How seriously do you take these scaling laws? There's a paper that says — You need this
many orders of magnitude more to get all the reasoning out? Do you take that seriously
or do you think it breaks down at some point?
The thing is that the scaling law tells you what
happens to your log of your next word prediction accuracy, right? There is a whole separate
challenge of linking next-word prediction accuracy to reasoning capability. I do believe that
there is a link but this link is complicated. And we may find that there are other things
that can give us more reasoning per unit effort. You mentioned reasoning tokens,
I think they can be helpful. There can probably be some things that help.
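For context, the scaling laws being referenced are usually written as power laws in parameters, data, or compute. The form below is the commonly cited one, with placeholder constants standing in for empirical fits; the gap between this loss and reasoning capability is exactly the link described as complicated above.

```latex
% Commonly cited power-law form of neural scaling laws. L is the next-token
% prediction loss; N = parameters, D = training tokens, C = compute. The
% constants are empirical fits, shown only to illustrate what "the scaling
% law" refers to, not anyone's internal numbers.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```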
Are you considering just hiring humans to generate tokens for you? Or is it all going to
come from stuff that already exists out there?
I think that relying on people to teach our models
to do things, especially to make sure that they are well-behaved and they don't produce false
things is an extremely sensible thing to do.
Isn't it odd that we have the data we
needed exactly at the same time as we have the transformer at the exact same
time that we have these GPUs? Like is it odd to you that all these things happened at
the same time or do you not see it that way?
It is definitely an interesting situation
that is the case. I will say that it is odd and it is less odd on some level.
Here's why it's less odd — what is the driving force behind the fact that the data exists, that
the GPUs exist, and that the transformers exist? The data exists because computers became
better and cheaper, we've got smaller and smaller transistors. And suddenly, at
some point, it became economical for every person to have a personal computer.
Once everyone has a personal computer, you really want to connect them to the network,
you get the internet. Once you have the internet, you suddenly have data appearing in great
quantities. The GPUs were improving concurrently because you have smaller and smaller transistors
and you're looking for things to do with them. Gaming turned out to be a thing that you could
do. And then at some point, Nvidia said — the gaming GPU, I might turn it into a general
purpose GPU computer, maybe someone will find it useful. It turns out it's good for neural
nets. It could have been the case that maybe the GPU would have arrived five years later,
ten years later. Let's suppose gaming wasn't the thing. It's kind of hard to imagine,
what does it mean if gaming isn't a thing? But maybe there was a counterfactual world
where GPUs arrived five years after the data or five years before the data, in which
case maybe things wouldn’t have been as ready to go as they are now. But that's the
picture which I imagine. All this progress in all these dimensions is very intertwined. It's
not a coincidence. You don't get to pick and choose in which dimensions things improve.
How inevitable is this kind of progress? Let's say you and Geoffrey Hinton and a
few other pioneers were never born. Does the deep learning revolution happen around
the same time? How much is it delayed?
Maybe there would have been some
delay. Maybe like a year delayed?
Really? That’s it?
It's really hard to tell. I hesitate to give a longer answer
because — GPUs will keep on improving. I cannot see how someone would not have discovered
it. Because here's the other thing. Let's suppose no one has done it, computers keep getting faster
and better. It becomes easier and easier to train these neural nets because you have bigger GPUs,
so it takes less engineering effort to train one. You don't need to optimize your code as
much. When the ImageNet data set came out, it was huge and it was very, very difficult
to use. Now imagine you wait for a few years, and it becomes very easy to download
and people can just tinker. A modest number of years maximum would be my guess. I
hesitate to give a much longer answer though. You can't re-run the world; you don't know.
Let's go back to alignment for a second. As somebody who deeply understands these models, what
is your intuition of how hard alignment will be?
At the current level of capabilities, we have a
pretty good set of ideas for how to align them. But I would not underestimate the difficulty
of alignment of models that are actually smarter than us, of models that are capable of
misrepresenting their intentions. It's something to think about a lot and do research. Oftentimes
academic researchers ask me what’s the best place where they can contribute. And alignment research
is one place where academic researchers can make very meaningful contributions.
Other than that, do you think academia will come up with important insights
about actual capabilities or is that going to be just the companies at this point?
The companies will realize the capabilities. It's very possible for academic research to
come up with those insights. It doesn't seem to happen that much for some reason
but I don't think there's anything fundamental about academia. It's not like
academia can't. Maybe they're just not thinking about the right problems or something
because maybe it's just easier to see what needs to be done inside these companies.
I see. But there's a possibility that somebody could just realize…
I totally think so. Why would I possibly rule this out?
What are the concrete steps by which these language models start actually impacting the
world of atoms and not just the world of bits?
I don't think that there is a clean distinction
between the world of bits and the world of atoms. Suppose the neural net tells you — hey here's
something that you should do, and it's going to improve your life. But you need to rearrange
your apartment in a certain way. And then you go and rearrange your apartment as a result.
The neural net impacted the world of atoms.
Fair enough. Do you think it'll take a couple
of additional breakthroughs as important as the Transformer to get to superhuman AI? Or
do you think we basically got the insights in the books somewhere, and we just need
to implement them and connect them?
I don't really see such a big distinction between
those two cases and let me explain why. One of the ways in which progress is taking place in the
past is that we've understood that something had a desirable property all along but we didn't
realize. Is that a breakthrough? You can say yes, it is. Is that an implementation of
something in the books? Also, yes. My feeling is that a few of those are
quite likely to happen. But in hindsight, it will not feel like a breakthrough. Everybody's
gonna say — Oh, well, of course. It's totally obvious that such and such a thing can work.
The reason the Transformer has been brought up as a specific advance is because it's the
kind of thing that was not obvious for almost anyone. So people can say it's not something
which they knew about. Let's consider the most fundamental advance of deep learning, that a big
neural network trained in backpropagation can do a lot of things. Where's the novelty? Not in the
neural network. It's not in the backpropagation. But it was most definitely a giant conceptual
breakthrough because for the longest time, people just didn't see that. But then now that
everyone sees, everyone’s gonna say — Well, of course, it's totally obvious. Big neural
network. Everyone knows that they can do it.
What is your opinion of your former
advisor's new forward-forward algorithm?
I think that it's an attempt to train a
neural network without backpropagation. And that this is especially interesting if
you are motivated to try to understand how the brain might be learning its connections.
The reason for that is that, as far as I know, neuroscientists are really convinced
that the brain cannot implement backpropagation because the signals in
the synapses only move in one direction. And so if you have a neuroscience
motivation, and you want to say — okay, how can I come up with something that tries to
approximate the good properties of backpropagation without doing backpropagation? That's what the
forward-forward algorithm is trying to do. But if you are just trying to engineer a good system
there is no reason to not use backpropagation. It's the only algorithm.
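A toy, single-layer sketch of that layer-local idea (an illustrative reading, not Hinton's reference implementation): each layer is trained on its own "goodness" objective, high for positive data and low for negative data, so no error signal ever travels backwards through the network.

```python
# Toy, single-layer sketch of the layer-local idea in the forward-forward
# algorithm (an illustrative reading, not Hinton's reference implementation):
# each layer is trained on its own objective -- "goodness" (sum of squared
# activations) should be high for positive data and low for negative data --
# so no error signal has to travel backwards through the network.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(16, 8))    # weights of one layer
lr = 0.02

def goodness(h):
    return float(np.sum(h ** 2))

def layer_update(x, positive):
    """One local update: forward pass only, then push goodness up or down."""
    global W
    h = np.maximum(0.0, x @ W)             # ReLU forward pass
    # Local gradient of goodness w.r.t. W; units the ReLU zeroed contribute 0.
    grad = 2.0 * np.outer(x, h)
    W += lr * grad if positive else -lr * grad
    return goodness(h)

pos = rng.normal(0.0, 1.0, size=16)        # stand-in for a real example
neg = rng.normal(0.0, 1.0, size=16)        # stand-in for a corrupted example
for _ in range(20):
    layer_update(pos, positive=True)
    layer_update(neg, positive=False)

print("goodness on positive data:", goodness(np.maximum(0.0, pos @ W)))
print("goodness on negative data:", goodness(np.maximum(0.0, neg @ W)))
```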
I guess I've heard you in different contexts talk about using
humans as an existence case showing that AGI is possible. At what point do you take the metaphor
less seriously and don't feel the need to pursue it in terms of the research? Because it is
important to you as a sort of existence case.
At what point do I stop caring about humans
as an existence case of intelligence?
Or as an example you want to follow in
terms of pursuing intelligence in models.
I think it's good to be inspired by humans,
it's good to be inspired by the brain. There is an art to being inspired by humans and the
brain correctly, because it's very easy to latch on to a non-essential quality of humans or of the
brain. And many people whose research is trying to be inspired by humans and by the brain often
get a little bit too specific. People get a little bit too — Okay, what cognitive science model
should be followed? At the same time, consider the idea of the neural network itself, the idea
of the artificial neuron. This too is inspired by the brain but it turned out to be extremely
fruitful. So how do they do this? What behaviors of human beings are essential that you say this
is something that proves to us that it's possible? What is inessential? No, this is actually some
emergent phenomenon of something more basic, and we just need to focus on
getting our own basics right. One can and should be inspired
by human intelligence with care.
Final question. Why is there, in your case,
such a strong correlation between being first to the deep learning revolution and still
being one of the top researchers? You would think that these two things wouldn't be that
correlated. But why is there that correlation?
I don't think those things are super correlated.
Honestly, it's hard to answer the question. I just kept trying really hard and it turned
out to have sufficed thus far.
So it's perseverance.
It's a necessary but not a sufficient condition. Many things
need to come together in order to really figure something out. You need to really
go for it and also need to have the right way of looking at things. It's hard to give a
really meaningful answer to this question.
Ilya, it has been a true pleasure. Thank you so
much for coming to The Lunar Society. I appreciate you bringing us to the offices. Thank you.
Yeah, I really enjoyed it. Thank you very much.