Good morning, everyone. Hi, I'm Wallace Mills,
the Senior Strategist for Executive Thought Leadership here at NVIDIA. Welcome. Before we
get started, I just want you to know that the session recording will be available online
within 72 hours, and within a month, it'll be available on NVIDIA on demand. Also, I
want to remind you to download the NVIDIA GTC app if you haven't already. That's where you'll
find the latest updates, session catalog, and surveys for your sessions. We also invite you
to explore the exhibit hall here on level two, which will be open today at 12. So, you don't want
to miss that. That being said, I am particularly thrilled to introduce this session. I have had
the pleasure of putting together the Business Insights track in partnership with many leaders
across NVIDIA. So, we get to kick it off with this much-anticipated conversation. And while
many people are just beginning to ride that wave of generative AI, in true NVIDIA fashion, we
are already here to talk about what's next. So, if you would please welcome our Vice President of
Enterprise Computing, Manuvir Das, here at NVIDIA. He leads the teams working to democratize AI by
bringing full-stack accelerated computing to every enterprise customer. Manuvir has over 30 years of
experience in the technology industry, and prior to joining us here at NVIDIA in 2019, he held
a range of senior roles at Dell and Microsoft, where he helped create the Azure cloud computing
platform and was an affiliate professor at the University of Washington. Welcome, Manuvir. Thank
you. That is so embarrassing. This session is not about me at all. Thank you all for being
here this morning. It's a real pleasure to see you. I hope you enjoyed the keynote yesterday
and all the announcements from Jensen. You know, one of the things he mentioned in the
keynote was our first DGX system, the DGX-1, and he talked about delivering it personally to
this startup called OpenAI. And what this group of people has done over the last few years is
just absolutely incredible, right? Isn't it? So, we're very fortunate today to kick this off in a
session where I'm joined by Brad Lightcap. He's the Chief Operating Officer of OpenAI. He's
also a Duke Blue Devil, and I was just giving him a hard time because I'm a Badger. And it turns
out the bracket just came out for the tournament, and the Badgers and the Blue Devils will probably
meet up in the second round. So, I'm not going to be a friend of his then, but I am today,
so, you know, the interesting thing about Brad is he's obviously got a great role at OpenAI. He's also been called Sam Altman's secret weapon, you know, the person he really relies upon. So,
I'm sure he'll have a lot of interesting things to share with us. So, Brad, why don't you come
on out? Everybody gets to see us walk up here. Yeah, thanks for making it, Brad.
Okay, let's have a seat. Look at the fancy mugs and all of that. Okay, so Brad, why don't you
tell us a little bit about your role at Open AI, what you do sort of on a daily basis, and maybe
a little bit about what keeps you up at night? Sure, well, thank you for having me. It's great
to be here. This is my first GTC, so we'll see if we're back next year and what that will bring.
But, yeah, so I'm the COO at OpenAI. I spend a lot of time thinking about how do we bring what we
build in our research lab to life for customers, users, and partners. And usually, people say,
"Well, what does that entail?" And I kind of say, it entails everything other than actually
doing the research. They don't let me touch the computers. I just pay for them. And I spend
most of my time with our customers and trying to figure out how this technology is going to get
integrated into the world. And what keeps me up at night? You know, there's not much, I would
say, right now that keeps me up at night other than Slack. But I think that the next few years
are really going to be quite interesting. I think we are still on the flat part of the curve. And
the way we see it is this is like the edge of the first inning. And so as this technology gets built
and developed, and as we scale these systems up, we think the capabilities are going to be really
amazing. Yeah, you know what's interesting is I think a lot of people think of OpenAI as ChatGPT
and think of it as the average consumer going in and experiencing the technology. But of course,
you've worked a lot now with enterprise companies, right? And most customers that we talk to at
NVIDIA, they've built RAGs of one kind or another in their company by now. And how does everybody do that? They call into OpenAI to do that. So, I think both myself and the audience are very
curious to hear from you a little bit about what has that experience been like. And I think
you're actually quite involved yourself in working with enterprise customers, right? So, tell us
a little bit about how that's all been going. Yeah, well, it's funny when we launched ChatGPT,
users obviously took off, and it was a product that wasn't launched for the enterprise. We spent
about six months just trying to figure out what the hell was going on and trying to make sure
we had enough GPUs to accommodate our growth. But we spent the last six months of last year
actually starting to realize that there was a legitimate and growing set of applications in the
enterprise that people were bringing ChatGPT in for. And that's why we ended up launching ChatGPT
Enterprise and ultimately our Team product, a version for smaller teams. But there
was a real pull that we felt from not just SMBs and mid-market, but even the Fortune 500. We
currently have over 90% of the Fortune 500 using ChatGPT in some form. We're trying to bring them
all along on the capital E Enterprise version of the product. But it has real pull and real fit
there. And the amazing thing about it is it's very horizontal. So, every function at the company
has, to our knowledge, found some way to make the technology useful for them. And what's amazing is
we didn't have to build a lot of really vertical use cases or applications. It just kind of
works. So, if you're on the finance team and you're analyzing a lot of data and you're trying
to do reconciliations and tax analysis, you can drop big spreadsheets into ChatGPT and just ask
it questions and ask it to do the reconciliation, and it'll just do it. It'll turn your HR people
into data scientists if they need to be. And so, you've got applications like this that people just
found fit with. And we're trying to go and build even better versions of the tools for them. It
is amazing, you know, because, and you're right, it's been surprising to people just how good
the technology is right now. What we see, Brad, when we talk to enterprise customers, is the
use case that we see the most traction with is just assistants. You know, like you've got your free
intern, right? And no matter what job function you're in, you build a chatbot that does the
work that you do, and you get your 80% answer to get going with, and then you finish it up.
Is that sort of what you guys are seeing too? Yeah, so in some use cases, there's a little bit
of that last mile engineering, and we have a team that can help customers with that. And so we try
and do that work in a very hands-on way. I think some of that will start to fade as the models just
get better. And so there are kind of two things we see. Partly, it's solving for where the model
still has deficiencies in its capability. And then partly, it's just trying to rig up all the
context that the model needs to be able to do a job. I don't know that the second part will go
away. The world is very large and messy. But I think the first part will. People will really
feel accelerated as the models get better. Yeah, okay. So obviously, Brad, you guys have
these great models, the various flavors of GPT that power ChatGPT. There's a whole tools
ecosystem that has sprung up around OpenAI, helping people use this stuff. I'm curious,
for your company, do you see part of your mission and your role to be a full platform
for application developers who use this kind of technology now? Or do you just want to
be the provider of the core model service? I think both, if that makes sense. The way that we
look at it is everything is just an abstraction on the intelligence. And I think it's just how many
layers of abstraction do we want to go build? But we'll build anything that accelerates the world's
ability to start to pull the technology and pull intelligence into all the nooks and crannies
where we think it should be. One, I would say, fairly humbling part of my role is you start to
realize how big the world is and how many places there are that we could apply this technology.
And for every ounce of energy I would spend thinking about should we go build something
specific first as a first-party application, I kind of remind myself that there's someone
out there who cares a lot more about a specific problem than we ever will. And that's true 99%
of the time. So, how do you build a toolset that allows them to go build the technologies,
tools, and applications that they want to build? Uh, and then what are the things that we
focus on as the kind of primitive pieces, the foundational layers that will enable them and
also create great user experiences? You know, it's interesting because, uh, in a way, you're going
through the same journey that Nvidia went through, um, in its history and in the last few years,
which is, uh, you know, we have a model at Nvidia that we like to spend our time doing the things
that nobody else can do, and the things that the others can do, we let them do, right? Because you
feel the sort of responsibility that you have an instrument in your hand, and your job is to make
that instrument as good as possible and as, uh, with as much reach as possible, and let other
people build around it, right? And you've got this amazing instrument now, right? And I, I'm
sure you feel the sense of responsibility that, like you said, uh, you can impact the whole world,
right, with this, uh, with this instrument. So, uh, so I think it's, it's a very powerful thing.
And the other thing I was thinking about was, you know, Jensen said it in his keynote yesterday too, when he was talking about the world's industries, you know, a trillion dollars' worth of
industries. Just obviously, with your background, I'm sure you think about that because in the tech
world, you know, for a long time, the tech world has been about cost, right? Every company has to
have an IT department. There's a budget for that, and it's all about how do I reduce cost, you know?
Every new technology is disruptive because it's like, "I'll make something cheaper to do," uh,
but I think in the domain that you're in, and, and we believe that we're in, it's really about
new opportunities, new value for companies, right? I mean, nobody ever said GDP has to remain
flat, right? It's allowed for things to grow. So, uh, do you guys see it the same way?
Yeah, we do. Um, I think, you know, if you kind of look fundamentally at, like, what the technology
really is, um, it's kind of just this vast scale-up of the ability to offload certain tasks
to models that can learn, uh, that have a general learning capability and can get better, uh, you
know, predictably better both with scale but also with more information, more context, and more
capability. And I, I think that's the exciting part for us. From an enterprise perspective,
you think about how complex large businesses really are, um, and how much low-hanging fruit
there is to be able to say, you know what, for this specific thing... We actually can offload
parts of this workflow to an AI that can not only do it at a baseline level but actually can start
to do it better over time and increasingly kind of own parts of that entire value chain. Yeah, and
it just allows people to focus on other things. On other things, exactly. And that's what we
see in practice. Instead of spending two hours sitting there tearing your hair out trying
to get the revenue reconciliation to work, an AI can kind of explore it and figure it out
for you, and you just kind of throw compute at the problem and all of a sudden it's solved. And
that same person that would have otherwise spent that time can just go spend their time thinking
about something more important. Yeah. I say this because I also manage finance and have been there. But yeah, I notice how all his examples go back to finance in some way. It's on the brain. I'm sure your team is using ChatGPT all the time, as you must be. I think we all have it now on our phones, right? We've got ChatGPT. I mean, that's where I go. There's a lot of people here,
Brad, who come from an enterprise background at this conference and in this room, you know? And
the question that's on a lot of people's minds is, there's all the knowledge of the world in the
internet, etc., that obviously your models have done a great job absorbing. And then every company
has its own sort of repository of knowledge in lots of different places, and various people
have different angles on how to approach that. Obviously, there's RAG. With Nvidia, we do a lot of fine-tuning. I'm curious, for OpenAI, what is your vision of how enterprise
companies should really incorporate all of the data that they have into the AI process?
Yeah, this is one of the questions we get the most. And this is probably the thing right now
that I think is kind of the least solved problem, which is to be expected. I think we're really
early in this phase shift, and you've got this core technology that people are able to poke at
and use. But the pipelining and the rigging of all the infrastructure and systems will take some
time. But I think what we're starting to see right now is people are able to marry really interesting
repositories of data with identification of clear use cases and an understanding of how the model can be applied to both of those things. You kind of tie those three things together, and you can get some really good outcomes. So, as an example of this, recently we worked with Klarna on a customer support use case. Klarna is a very forward-thinking company on AI, so they've been kind of doing this for a while. But they took, I think, the right approach,
which was they really started with a very specific implementation of the technology where they
constrained the problem. So, it was a small part of the workflow with a very specific data set
and a very specific implementation of the model. They kind of got that piece to work, and then they
just built from there. And now, it's handling a large swath of the work and saving them many, many
hours. And I think that's the approach we guide toward. Don't try and overshoot, so don't try and
swallow the ocean from day one. Don't undershoot, meaning don't lack ambition. But start with
something that you can constrain the problem on, get it to work, and then scale it up.
Yeah, you know, that point you just made, I've seen a few interviews you've done, Brad,
where you've talked about this more than once, that you have these meetings with companies where
they think that somehow GPT is going to magically make them a better company and change their
position in the market. Whereas it's better to just start with specific use cases, get some
value out of those, and then go from there, right? Yeah, so as a piece of advice to people
in companies who are just getting going, for example, if I look at Nvidia, Brad, we
have now, you know, depending on how you count, a couple hundred of these RAGs or chatbots
that we're running inside Nvidia for different purposes. And we kind of got there organically.
For somebody who's starting off now, would you recommend that they spend some time first thinking
through how it's all done and picking one way to go? Or do you think it's better for them to
just sprout organically and see what happens? Yeah, well, to your earlier point, yeah, we spent most of 2023 dealing with that. Um, I used to tell our team, "We don't really do sales, we do therapy." We would
have companies come in, and it would be like, usually a C-level person that was sitting in
our conference room. And about 5 minutes into the meeting, they would be kind of professing
all of their problems and things that they were worried about, and they're like, "Could AI fix
all these things for me? And my board wants me to ship something next quarter." And usually, we'd
have to, like, talk them off the ledge a little bit and, like, get them some water and have them
calm down. Um, but once we get to a real part of the conversation, yeah, you know, our perspective
on this is to kind of do what I just mentioned. Really think about where are there places in
your business where you feel operationally like there's an opportunity to improve how you run.
For a lot of people, that happens to be customer support. That's the thing that we hear probably
the most frequently. No one likes the quality of their customer support experience. They spend
a lot of money on it. It never quite works. It's a thing they hear the most customer complaints
about. And so, it happens to be a place that, and that's pretty horizontal, right? Because that
applies to lots of industries. Yeah, yep. Um, but we tend to recommend a multi-prong approach.
So, identify two or three areas where you have a real gnarly problem, but where, again, you can
kind of constrain the problem. So, support is this workflow that is kind of multiple tasks strung together with varying levels of human involvement and human engagement and a lot
of data, and more context helps, right? So, you can look for these core ingredients of having,
again, going back to the pyramid of data, process, and model capability, and you can figure out
what's that first implementation look like and then how do you scale it up from there. And so,
picking a few of those types of projects, these more bespoke platform-based projects. And then the
other thing we really recommend, going back to ChatGPT itself as a product, is just starting to give your teams access to the technology. Um, this was not something we were really actively thoughtful about in the middle of last year. But as we've deployed ChatGPT and as we've had
a chance to talk to companies that are using it, democratizing access to the tool and just giving
people an opportunity to use it, it does not have to be in a particularly complex or developed
form, but just giving people a chance to say, "I know what work I have to do. I can poke around
with this thing enough to be able to explore what it's capable of doing, and I'll figure out how
to find value in its capability, helping me do my work better." Yeah, and that happens very
organically and it happens all the time. And, you know, I think companies kind of forget that.
They want to have this very manicured AI strategy, and they want this big company rollout, and they
want these proprietary chatbots. And I think 90% of the value right now, at least, is coming from just giving people access to the tools and not thinking too much about it.
Yeah, I think that's a fair point because the value, when you try it the first
time, is so obvious that, you know, you're willing to work through it. I think
that's a big thing. So, Brad, on that front, working with these enterprise companies and
different use cases, you've also rolled out custom models and the GPTs now, right? They're very easy for people to build. So, can you tell the audience a little bit about what those are and
why you went down that road and how that's going? Yeah, I'll try and contextualize it maybe
in our broader picture of our strategy. So, we have this very core intelligence in GPT-4 and whatever comes next. And a lot of where we're spending our time is starting to think about
how can people make that technology or those models feel more personal to what they're doing,
more task-specific, improve their performance on any given thing. And so, a lot of the work we've
done in the last few months, GPTs and custom models, has been in that direction. So, you can think of
GPTs and custom models as opposite ends of the customization spectrum. GPTs are like the dead
simple, easy way to take ChatGPT and basically create a slice of ChatGPT that is specific to
a given task. So, if you want to have the model, ChatGPT, kind of remember certain information,
be able to call on certain outside data, be able to access a PDF or a spreadsheet, have
a certain personality, be able to use certain tools in a predictable, repeatable way, you just
kind of ask for it. You actually can configure a GPT without even having to build it. You just
describe what you want, and it'll go off and do it. And we see a huge pull for that in the
enterprise, actually. And it's not surprising because people start to figure out that these are
the workflows for which I can use the technology, so I'm just going to encode each of these in a GPT
and just call it. The custom model side is like the full monty, the other end of the spectrum, which is basically us taking GPT-4 or any other model and fully trying to figure out how to customize it for
a specific use case and maximize its performance in that use case. We do that on a more limited
basis. Obviously, it's time and resource-intensive for us. But we've had tremendous success early
on. We're still kind of experimenting with this, but success early on in improving the
model's capability in any number of domains. Yeah, you know, it is fascinating because
obviously, you guys really started this whole journey with the very large, capable model that
is just so surprisingly good at so many things, and it just gets better. And then, at the
same time, if I look at the last year, there's this ecosystem of models that have sprung
up. And, you know, they're not as capable as the models that you have inside your services at OpenAI, but, you know, in different ways, they're getting better, right? And they're specialized
at certain things. And so, how do you see, whether it's the larger models getting larger or
the smaller models, do you see a role for both within an enterprise? Or do you think just the
one large model used in lots of different ways? Yeah, we do see roles for all. I think, you know, my mental model, by the way, on how to think about enterprise AI deployment
is to try and map it as closely as I can to how a modern enterprise is constructed from a human
capital perspective. So, the same way that you wouldn't want to hire 25,000 PhDs to run your
company because it would be overkill for what it is that you do. You may only need five or ten
or whatever you may need. You wouldn't want to take GPT-X or whatever the latest model is to
every single problem. You may want a diversity of models that have specializations in different
things, that are kind of fine-tuned for different use cases. I suspect over time, the models will
just get generally better, so the need to iterate on them and fine-tune them and try to make them
really good at any specific thing will dissipate a little bit. But you definitely don't need a
flagship model to solve every problem. And so, one of the things we're actually working on
is trying to figure out ways to allow people to more dynamically pull models in for any
given use case so that they can distribute the intelligence a little more. But yeah, no, I
think you kind of have your intern-level model, you have your mid-level manager model, you have
your senior executive model, you have your subject matter expert model, and there's a place for
each, and it'll be a diversity of things. But it actually raises an interesting question.
I'm sure the audience is thinking about asking you this question too, given who you are and
what you do. If I were to put the capabilities of the models on a spectrum of 1 to 10, where do you think we are today? Are we like one out of 10, or are we seven out of 10? What do you think?
Yeah, you know, I was going to make one more point on the last comment I just made, which
is the interesting thing about what we do, and the challenge from where I sit in how we deploy the technology in the enterprise or anywhere, is that you're basically trying to map human capital to model capability, but the thing that's changing constantly is the model capability. That window is moving every six months. Yeah, and so,
all the models that were your intern models six months later are starting to look a little bit
like your mid-level VP models, and that mid-level VP model is starting to look a little bit like
your senior director model. You just dissed a whole bunch of VPs. No offense to any VP. These
are crude analogies. But that's an interesting phenomenon and something that companies have to
manage dynamically. And I think it's a net good thing. It's a surplus, and so we spend a lot of time
with companies thinking about what we're bringing to bear on any given problem and should we be
rethinking that combination of things as our model capabilities improve. And you'd imagine that
it's sort of the new norm, right? Because in every company, some humans have to figure all this stuff
out and think through what's being used where. You know, early on, Brad, when you were talking about
the initial adoption and dealing with some things, it just reminds me of when the iPhone came
out and there was this general belief that, "Oh, it's cool for consumers, but companies are
going to have a hard time adopting the iPhone because it doesn't have this control and
that control, and how will it understand it?" And doesn't that sound silly now, right?
So, I think let's transition a little bit to what's next. One of the things that I see, Brad,
in talking to the more advanced customers or the customers who are further along in their
journey is that they're kind of starting to move this transition from a lot of this has
been about some form of information retrieval. At the end of the day, what I'm doing is I've
got some information and I'm trying to search through it in some fashion. And now the question
is, can I use this technology more as an agent where I try to do things in my company? I try
to run processes, I try to call into things, make actions happen. Do you see that in your
interactions with people? And how do you, where do you think the technology is? Because if I've
got an assistant and I'm looking at the output, there's a human in the loop. But if I'm making the
thing take actions for me, I have to trust it a little further, right? So, how do you see that?
Yeah, you know, this is what I'm excited about. This is, in many ways, kind of how we think at OpenAI about what this technology is useful for and how it should be used. And in some
ways, we laugh a little bit at the way that AI implementations work today in many cases.
It's like these information retrieval-based things, and they're like the world's worst
databases in some ways. They're really slow, they're really expensive, they're not 100%
accurate. They're getting better, but why would you use them as a database or why would you
use them for some sort of high-precision action? Yeah, it feels like a strange way to
use these things. You know, no judgment, but the way that we're really excited to see
these systems evolve is as reasoning agents. So, how do you actually take the model's core
capability to extract information from something, think about that information, and then basically
take some sort of insight and take action based on that insight? There are two things that have
to happen there. One is the model's reasoning capability has to improve, and two is you have
to give it some ability to have actuators, to basically take action out in the world. And
I think those are the next two waves that we're going to start to see emerge. We suspect that
reasoning is the next area that we will start to see the model's axis of improvement really
accelerate on. And also, being able to give models the ability to work through multi-step problems.
Let me give you an example in healthcare. If you point a model at a medical record, okay, it can extract the information from this medical record. Today,
it could do very basic operations. It could maybe summarize that information, it could update that
information based on some sort of input. But can you get it to think about that information?
And if it can think about that information, can it actually draw some insights about that
information in a way that might inform some second step or some third step after that? It could
help with follow-up with the patient, it could help with the diagnosis of a disease, it could
help with placing an order for a prescription, it could then complete the loop and actually
talk to the patient about the prescription, both when to pick it up from where and when they
should take it, and then also remind them later on that they should be taking it two weeks later.
So, that's the way that we think about these systems on a multi-year basis. And I think, do
you think that's going to happen because the core model is going to become better at that? Or
do you see an approach where there's a separate model or a separate system that is more built for
reasoning that complements the existing models? I think today's systems are already pretty good.
If you go to GPT-4 and ask it to reason about a hypothetical situation and explain its thinking
step by step, it will explain it to you that way. So, the action path is known to the model. Now,
it's just a question of whether it can take each step in that action path and identify the specific
thing it should go off and do, and whether it has access to what it needs to do that.
That's great to hear you say that because we definitely see that happening, and obviously,
the more you all are working on that at OpenAI, the better it is for everyone. So, Brad,
from your point of view now, obviously, we talked about agents a little bit, but
if you had to step back as your company, what do you think, in the next one year, three
years, and five years, are the big shifts that you're working on that can really change the
landscape for how people use this technology? I can't tell you everything, but what I can say
is that we don't think that we're anywhere near a ceiling on the core capability improvement in the
models. We think there's a lot of room for future scalability, and we're very excited about that.
We're also trying to understand how to move the models along axes that are not just raw IQ. And I
think we feel really good about where that work is going. From where I sit, there's also the question
of what are going to be the standards, frameworks, and tools through which the world starts to rig up
the information required for these systems to be useful and to deploy them in a production setting. So, part of it is building the thing, and part of it is making sure that we have a
place and a way to deploy the technology that can actually make it useful in production. It
was definitely an unfair question, and I think you did a very good job handling it. I won't make predictions anymore. So, let me ask you the question in a different way,
right? So, obviously, as a company, you can have a focus on just improving the technology
as a whole, as you're doing. You can have a focus on enterprise customers, the industries of
the world, the commerce of the world. There's a lot of opportunity. So, what is your mindset
and focus? Do you feel it's your mission to democratize this across all the enterprise
companies in the world, to help them all get to a better point? Or are you more focused on
individual consumer use cases because obviously, that's a big benefit to the world too?
Yeah, our mission is literally to ensure that the benefits of the technology are broadly
distributed. So, how do we think about enacting that? Well, one is making sure that people are
able to build on top of it. And for the reasons I mentioned earlier about how big and messy the
world is, we're going to need to do that anyway. I think surfaces will change, the abstraction layers
will change, but the core of it is we will just try and build ways for people to use the tools
effectively and successfully wherever they want to use them. Greg Brockman, our co-founder, has
a nice phrase. He says, "How do you think about a world where you have AI like baked into the
economy?" The "baked in" part is when you kind of decompose what that means. And probably reading a
little too much into his analogy, it's like you've got all these ingredients, and you've kind of got
to mix them up and then let it sit, and it starts to work. We think about it a lot that way too.
How do we actually put the technology in place and bring these other ingredients to bear in
a way that, once they're mixed up, things just start to work differently? That's how we spend
a lot of our time, trying to enact our mission. And obviously, from a consumer perspective,
we look at it similarly. ChatGPT is just an abstraction on our own API. So, we took a model
and made it better at talking to people. We served it, and it worked. But it's just a way for people
to access it that is not through an API. So, Brad, I think it was around November
30th, 2022, when you released ChatGPT. Um, it's got to have surprised even you, what's
happened, right? Just the level of interest, the uptake. I mean, it's a new thing, and it just,
uh, people just got it right away, right? Because it was so easy to see what its impact was,
right? So, um, so I'm just wondering if, uh, if you could do a little bit of a retrospective
for us. You know, it's been a little more than a year. What's your take? Has it surprised you
in every way? Are there any, uh, you know, if you could look back, are there any things that
could have gone differently? Any, any, you know, choices you would have made differently?
Yeah, what about more GPUs? Probably not. I think it did surprise us. I'll speak kind of
on my own behalf here a little more than on the company's behalf, but, um, yeah, you know, we
actually did not think that GPT-3.5 as a model class was the model that had kind of crossed the chasm in terms of its usefulness for consumer applications or enterprise applications. We thought GPT-4 would actually be the first model that had crossed that chasm. So, a lot of our planning processes had aligned around the launch of GPT-4, which was in March of 2023, a year ago. But we had finished training GPT-4 months before that; we started training it in the middle of 2022, so we're now kind of two years from that date. And, uh, so, yeah, we kind of thought GPT-4 would be the moment. We had to scramble a little
bit to accommodate what everyone wanted a little earlier than planned. But it's been amazing to
see. And I think it speaks to something that I think will be true in any capacity, regardless
of whether you're an enterprise, a developer, an individual, which is the technology has
this very innately human characteristic to it. You can kind of hand it to someone who's
like 5 years old or 95 years old, and they can both find a way to use the technology. It's very
natural, and I think that's really important. So, how do we push the systems to continue to improve on that axis too? Yeah. And then two is being able to lower the barrier to access. And so,
making sure that it is accessible to people around the world. That was kind of a thing that
we thought we really got right with ChatGPT, making it free. Um, and the stories we hear from
people in far-flung parts of the world who use it for things that we could only dream up here. You cannot imagine, right? Yeah, exactly. You know, the point you made about it being so human,
that's something that's quite close to Nvidia too, Brad. Because, you know, we do AI, but we're
also sort of the graphics company, a little bit, right? And so, we really see a lot of opportunity
where, firstly, the text interface is so much more human than writing code. But the audio interface,
the visual interface, having these avatars where you basically feel like you're talking to another
entity. And of course, at the end of the day, there are other AIs that are converting that into
text that is then going into a regular chatbot or what have you. Do you think that is an opportunity
for this technology to really expand its reach on a planetary scale? Because it makes it much easier
for humans to interact with. Do you think that should be a good area of research and progress?
Yeah, I think that someone born today, the relationship they have with computers will
be kind of unrecognizable to anyone sitting in this room. They won't know a world where you have
to navigate through graphical user interfaces, hamburger menus, click-down things, and fill in
text fields and hit submit, and then check your inbox for a confirmation email. These miserable
situations that we find ourselves in. I appreciate that we make do with the tools we have, but I just
think it will be completely foreign to someone born today in 10 or 20 years. It reminds me of
my kids, who span the advent of the iPad era. My elder boys, I remember this moment when they were
young, sitting on my lap, trying to press the keys on the keyboard to participate with Dad. But when
my daughter reached that stage at 2 years old, what she was trying to do was move her hand
across my laptop screen because that's the interface she knew. Right, she didn't know
what the keys were all about. So, I think, uh, those interfaces are going to be quite
different as we go forward. And yeah, in 10 years, hand your kid a, uh, like a laptop from, you know,
2020 or whatever, and watch them talk at it and wait for a response and not get one. Right, not get one exactly. It'll be something like that. And the amazing thing is that your company, and hopefully
our company, will have been able to feel like we had something to do with that, right? So, uh,
yeah, it's quite amazing, Brad. I think, you know, I think I speak for everyone in the audience when
we say, you know, very, uh, very appreciative of everything that OpenAI has done. Can't wait
to see what you all do next for the world, and we'll be here watching. So, all the best to
you and the company. And of course, at Nvidia, we're here to help you in any way we can. And I'll text my
boss and see if he can find you some more GPUs. Okay, great. Thank you for the time, Brad. Thank
you, appreciate it. Yeah, thanks, appreciate it.