SPEAKER 1:
It was stunning, it was mind-blowing. After the biology questions, I had them type in "What do you say to a father
with a sick child?" It gave this very
careful, excellent answer that was perhaps
better than any of us in the room might have given. I was like, wow, what is
the scope of this thing? Because this is way better.
KEVIN SCOTT:
Hi everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology
Officer for Microsoft. In this podcast, we're going
to get behind the tech, we'll talk with some of the people who have made
our modern tech world possible and understand what motivated them to
create what they did. Join me to maybe learn a little bit about the
history of computing, and get a few behind
the scenes insights into what's happening
today. Stick around. Hi, welcome to Behind the Tech. We have a great episode for you today with a really
special guest, Bill Gates, who needs
no introduction given the unbelievable impact that he's had on the
world of technology and the world at large over
the past several decades. He has been working very closely with the teams at
Microsoft and OpenAI over the past handful of months,
helping us think through what the amazing
revolution that we're experiencing right now
in AI means for OpenAI, Microsoft, all of
our stakeholders, and for the world at large. I've learned so much from the conversations that
I've had with Bill over these past months
that I thought it might be a great thing to share just a tiny little glimpse of those conversations with
all of you listeners today. With that, let's introduce Bill and get a great
conversation started. Thank you so much for
doing the show today. I just wanted to jump right in with maybe one of the more
interesting things that's happened in the
past few years in technology, which is GPT-4 and ChatGPT and the
work that we've been doing together at
Microsoft with OpenAI. By the time this podcast airs, OpenAI will have made their announcement to
the world about GPT-4, but I want to set the stage: the unveiling of
the first instance of GPT-4 outside of OpenAI
was actually for you last August at a dinner that
you hosted with Reid and Sam Altman and Greg Brockman and Satya and a whole
bunch of other folks. The OpenAI folks had
been very anxious about showing you this
because your bar for AI had been really high. I think it had been
really helpful actually, the push that you had
made on all of us for what acceptable, high-ambition
AI would look like. I wanted to ask you, what was that dinner
like for you? What were your impressions,
what had you been thinking before, and what, if anything, changed
in your mind after you had seen GPT-4?
BILL GATES:
AI has always been the holy grail of
computer science. When I was young,
the Stanford Research Institute had Shakey the Robot that
was trying to pick things up and there were various logic systems that
people were working on. Also the dream was always
some reasoning capability. Overall progress in AI until machine learning came
along was pretty modest. Even speech recognition
was just barely reasonable and so we had that gigantic acceleration
with machine learning, particularly in sensory things, recognizing speech,
recognizing pictures. It was phenomenal, and it just kept getting better
and scale was part of that. But we were still missing
anything that had to do with complex logic, with
being able to, say, read the text and do
what a human does, which is, quote, understand
what's in that text. As Microsoft was
doing more with OpenAI, I had a chance to go see
them myself independently a number of times and they were doing a lot of
text generation. They had a little robot arm. The early text generation
still didn't seem to have a broad understanding. It could generate a sentence saying Joe's in Chicago and then two sentences later
say Joe's in Seattle, which in its local
probabilistic sense was a good sentence, but a human has a broad enough
understanding of the world, from both experience and reading, to know that can't be. They were enthusing about GPT-3 and even the early
versions of GPT-4. I said to them, "Hey, if you can pass an advanced placement
biology exam where you take a
question that's not part of the training
set or a bunch of them, and give fully
reasoned answers, knowing that a biology
textbook is one of many things that's in
the training corpus, then you will really
get my attention because that would be
a heck of a milestone. Please work on that." I thought that they'd go away
for two or three years, because my intuition has
always been that we needed to understand knowledge
representation and symbolic reasoning in
a more explicit way. That we were one
or two inventions short of something
where it was very good at reading and writing and therefore
being an assistant. It was amazing
that you and Greg and Sam, over the summer, were saying, yeah, it
might not be long before we're going to come
demo this thing to you because it's actually
doing pretty well on scientific learning. In August they said, yeah, let's get to this thing. In early September, we had
a pretty large group over to my house for dinner,
maybe 30 people in total, including a lot of the
amazing OpenAI people, but a good size group from Microsoft. Satya was there and they gave it AP
biology questions and let me give it AP
biology questions. With one exception, it did a super good job and the exception
had to do with math, which we can get to later. But it was stunning,
it was mind-blowing. After the biology questions, I had them type in, "What do you say to a father
with a sick child?" It gave this very careful,
excellent answer that was perhaps
better than any of us in the room might have given. I was like, wow, what is
the scope of this thing? Because this is way better. Then the rest of the night we asked historical questions, like whether there are criticisms of Churchill, and different things, and it was just fascinating. Then over the next few months,
I was given an account, and Sal Khan got one of
those early accounts. The idea that you
could have it write college applications, or rewrite, say, the Declaration
of Independence the way a famous person like Donald Trump
might have written it. It was so capable
of writing poems: give it a tune like "Hey Jude" and tell it
to write about that. Tell it to take a Ted Lasso episode and include
the following things. Ever since that day in September, I've said, wow, this is a fundamental change and not without some things that still need
to be worked out. But it is a fundamental advance. It's confusing people
in terms of, well, it can't yet do this, it can't do that, it's not
perfect at this or that. Natural language is now the primary interface
that we're going to use to describe things
even to computers. It's a huge advance. KEVIN SCOTT:
Yes. There's so many different
things to talk about here, but maybe the first one
is to talk a little bit about what it's not good at. Because the last thing that
I think we want to do is give people the impression
that it is an AGI, that it is perfect, that there isn't a lot of additional work that has to happen to improve it
and make it better. You mentioned math is one of the things so I thought
maybe let's talk a little bit about what you think
needs to be better about these systems over time and where we need to
focus our energy.
BILL GATES:
Yeah. It appears to be a general issue, its knowledge of context.
When it's been asked, okay, tell me something, and
it generates something, humans understand "I'm making
up fantasy stuff here," or "I'm giving you advice
that if it's wrong, you're going to buy
the wrong stock or take the wrong drug." Humans have a very deep
context of what's going on. Even the AI's ability to know that you've
switched context like if you're asking it
to tell jokes and then you ask it a serious
question where a human would see from your face or the nature of that change that okay, we're not in
that joking thing, it wants to keep telling jokes. You almost have to do the reset sometimes
to get it out of the, hey, whatever I bring up
just make jokes about it. I do think on that sense of
context, there's work to do. Also in terms of how hard it should work on a problem. When you and I see a math
problem, we know, well, I may have to apply
simplification five or six times to get
this into the right form. We're looping through how
we do these reductions, whereas today the reasoning is a linear chain of descent
through the levels. And if simplification needs to run
10 times, it probably won't. Math is a very abstract
type of reasoning. Right now, I'd say that's
the greatest weakness. Weirdly, it can solve
lots of math problems. There are some math problems
where if you ask it to explain it in
an abstract form, make essentially an equation or a program that matches
the math problem, it does that perfectly
and you could pass that off to
a normal solver, whereas if you tell it to
do the numeric work itself, it often makes mistakes. It's very funny
because sometimes it's very confident that, hey - or it'll say, I mistyped. Well, in fact, there's not a typewriter anywhere
in this scene. The notion of mistyping
is really very weird. Whether for these current
areas of weakness it's six months, a year, or two years before
they largely get fixed, we'll then have a serious
mode where it's not just making up URLs. Then we have the
more fanciful mode. There's some of that
already being done largely through prompts and
eventually through training. Training it for
math, there may be some special training
that needs to be done. But these problems I don't
think are fundamental. There are people that think it's statistical, therefore
it can never do X. That is nonsense. For every example they give of a specific thing it doesn't do, wait a few months and it's very good at it. Characterizing
how good it is, the people who say it's crummy are really wrong
and the people who think this is AGI,
they're wrong. Those of us in the
middle are just trying to make sure it gets
applied in the right way. There's a lot of activities like helping somebody with
their college application. What's my next step? What haven't I done? I have the following symptoms. Those
are in fact far within the boundary of things that it can do
quite effectively.
KEVIN SCOTT:
Yeah. Well, I want to talk a little
bit about this notion of it being able to use tools
to assist it in reasoning. I'll give you an example from this weekend
with my daughter. My daughter had this
assignment where she had this list of 115
vocabulary words. She had written a 1,000-word essay, and her objective was to use as many of these
vocabulary words as she reasonably could
in this 1,000-word essay, which is a ridiculous
assignment on the surface. But she had written
this essay and she was going through
this list trying to manually figure out
what her tally was on this vocabulary list and it
was boring and she was like, "I want a shortcut here. Dad, can you get me
a ChatGPT account and can I just put this in there and it will do it for me?" We did it and ChatGPT, which is not running
the GPT-4 model, but I don't think GPT-4 would have gotten
this right either, didn't quite get it right. It was not precise. But the thing that I then got her to do with me
was, well, let ChatGPT write a
little Python program that can do it very precisely; this is a very simple
Intro CS problem here. The fact that the Python code for solving that problem was perfect and I got my solution immediately is just amazing. My 14-year-old
daughter who doesn't program understood everything
that was going on. I don't know if you've
reflected on this much over these past months, but essentially, when we're trying to solve a complicated
math problem, we've got a head full of cognitive tools that we pick
up like these abstractions that you're talking about
to help us break down very complicated problems into smaller or less complicated problems that we can solve. I think it's a very interesting
idea to think about how these systems will be able
to do that with code.
BILL GATES:
Yeah. It's so good at writing. That's just a
mind-blowing thing. But when you can use
natural language, say for a drawing
program where you want various objects and you want to change them in
certain ways, sure, you still want the menus
there to touch up the colors, but the primary interface
for creating a from-scratch drawing will be language. If you want a
document summarized, that's something that it
can do extremely well. When you have large
bodies of text, when you have text
creation problems, there was a letter written with ChatGPT-3 where a doctor who has to
write to insurance companies to explain why he
thinks something should be covered, which is very complicated, and it
was super helpful. He was reading that letter over to make sure it was right. With the GPT-4 version of ChatGPT, we took complex medical bills and we said please
explain this bill to me. What is this and how does it relate to my insurance policy? It was so incredibly helpful
at being able to do that. Explaining concepts in
a simpler form, it's very helpful at that. There's going to
be a lot of tasks where there's just huge
increased productivity, including a lot of
documents, payables, accounts receivable. Just take the health system alone: there are a lot of
documents that software will now be able to characterize
very effectively.
KEVIN SCOTT:
Yeah. One of the other things that I wanted to
chat with you about, you have this really
unique perspective in your involvement in several of the big inflection
points in technology. For two of these inflections, you were either one of the
primary architects of the inflection itself or
one of the big leaders. We wouldn't have the PC, the personal computing
ecosystem, without you. You played a really
substantial role in getting the Internet available
to everybody and making it a ubiquitous technology that everyone
can benefit from. To me, this feels
like another one of those moments where a lot of
things are going to change. I wonder what your
advice might be to people who are
thinking about like, I have this new
technology that's amazing that I can now use. How should they be thinking
about how to use it? How should they
be thinking about the urgency with which they are pursuing these new ideas? How does that relate
to how you thought about things in the PC era
and in the Internet era?
BILL GATES:
Yes. The industry starts really small where computers
aren't personal. Then through the microprocessor
and a bunch of companies, we get the personal
computer, IBM, Apple, and Microsoft got to be very
involved in the software. Even the BASIC interpreter on
the Apple II, a very obscure fact, was something
that I did for Apple. That idea that, wow, this is a tool,
at least for editing documents, where you have
to do all the writing yourself, that was pretty amazing. Then connecting those up over
the Internet was amazing. Then moving the computation into the mobile phone was
absolutely amazing. Once you get the
PC, the Internet, the software industry,
and the mobile phone, the digital world is changing huge parts
of our activities. I was just in India,
seeing how they do payments digitally even
for government programs. It's an amazing application
of that world to help people who never would have had bank
accounts because the fees are just too high and it's
too complicated. We continue to benefit
from that foundation. I do view this, the beginning of computers
that read and write, as every bit as profound as
any one of those steps and a little bit surprising because robotics has gone a little slower than I would
have expected. I don't mean autonomous driving, I think that's a special case, that's particularly
hard because of the open-ended environment and the difficulty of safety and what safety bar people
will bring to that. But even factories
where you actually have huge control
over the environment of what's going on and you
can make sure that no kids are running around anywhere
near that factory. A little bit, people
have been saying, these guys over-predict, which is certainly correct. But here's a case where
we underpredicted that natural language and the computer's ability
to deal with that and how that affects
white-collar jobs, including sales, service,
helping a doctor think through what to put
in your health record, that I thought was
many years off. All the AI books, even when they talk
about things that might get a lot more productive, will turn out to be wrong. Because we're just at
the start of this, you could almost
call it a mania, like the Internet mania. But the Internet mania, although it had its insanity and things like sock puppets or things where you
look back and say, what were we thinking? It was a very profound tool
that now we take for granted. Even just for scientific
discovery during the pandemic, the utility of the
immediate sharing that took place there
was just phenomenal. This is as big a
breakthrough, a milestone, as I've seen in the whole digital computer realm which really starts
when I'm quite young.
KEVIN SCOTT:
I'm going to say this to you, and I'm just interested
in your reaction because you will always tell
me when an idea is dumb. But one of the things
that I've been thinking for the last handful of years is that one of the
big changes that's happening because of
this technology is that for 180 years from the point
that Ada Lovelace wrote the first program to
harness the power of a digital machine
up until today, the way that you get a
digital machine to do work for you is you either have to be a skilled programmer, which is like a
barrier to entry, that's not easy or you have to have a skilled
programmer anticipate the needs of the user and to
build a piece of software that you can then use to get the machine to do
something for you. This may be the point where we get that paradigm to
change a little bit, where, because you have this natural language
interface and these AIs can write code, they will be able to actuate a whole bunch
of services and systems that give ordinary people
the ability to get very complicated things done
with machines without having to have all of this expertise that you and
I spent many years building?
BILL GATES:
No, absolutely. Every advance hopefully lowers the bar in terms of who can
easily take advantage of it. The spreadsheet
was an example of that because even
though you still have to understand these formulas, you really didn't
have to understand logic or symbols much at all. It had the input and the
output so closely connected in this grid structure that you didn't think about the
separation of those two. That's limiting in a way
to a super-abstract thinker. But it was so powerful in
terms of the directness. That didn't come out
right, let me change it. Here, there's a whole class
of programs for taking corporate data and presenting it or doing complex
queries against it: have there been any sales
offices where we've had 20 percent of the headcount missing and are our sales results
affected by that? Now you don't have to go
to the IT department and wait in line and
have them tell you, oh, that's too
hard or something. Most of these corporate
learning things, whether it's a
query or report or even a simple workflow
where if something happens, you want to target an activity, the description in English
will be the program. When you want it to
do something extra, you'll just pull up that
English, or whatever your language is,
and type that in. There's a whole layer
of query assistance and programming that will be
accessible to any employee. And the same thing is true if I'm somewhere in the college application process
and I want to know what's my next step and what's the threshold
for these things; it's so opaque today. So empowering people to
go directly and interact, that is the theme that
this is trying to enable.
KEVIN SCOTT:
I wonder what some of the things are that
you're most excited about just in terms of application of the
technology to the things that you care about deeply from the Foundation or your
personal perspective? You care a lot about education, public health, climate,
and sustainable energy. With all of
these things that you're working on,
have you been thinking about how this technology impacts
any of those things?
BILL GATES:
It's been fantastic that even going
back to the fall, OpenAI and Microsoft have
engaged with people at the Gates Foundation
thinking about the health stuff and
the education stuff. In fact, Peter Lee
is going to be publishing some of his thinking which is somewhat focused
on rich-world health, but it's pretty
obvious how that work in a sense is even more amazing in health
systems where you have so few doctors and getting advice of any kind is so
incredibly difficult. It is incredible
to look at it and say, can we have a personal
tutor that helps you out? Can you, when you
write something, if you're going to some
amazing school, yes, the teacher may go line by
line and give you feedback, but a lot of kids just don't get that much feedback on
their writing and it looks like, configured properly, this is a great tool to give
you feedback in writing. It's also ironic in a way
that people are saying, what does it mean, can
people cheat and turn in computer writing?
Kind of like when calculators came along
and we said, oh my goodness, what
are we going to do about adding and subtracting? Of course, they did create contexts where you
couldn't use a calculator and we got through that without
it being a huge problem. I think education is the most
interesting application. I think health is the
second most interesting. Obviously there's commercial
opportunity in sales and service type things
and that'll happen, you don't need any foundation
type engagement on that. We brainstormed a
lot with Sal Khan, and it looks very promising, because with a
class size of 30 or 20, you can't give a student
individual attention, you can't understand
their motivation or what is keeping them engaged. They might be ahead
of the class, they might be behind. It looks like in
many subject areas, by having this and having
dialogues and giving feedback, for the first time, we'll
succeed in helping education. Now, we have to
admit, except for this prosaic thing of looking up Wikipedia articles or helping you type things and
print them out nicely, the notion that computers
were going to revolutionize education is largely still more in front of
us than behind us. Yes, some games draw kids in, but the average math
score in the US hasn't gone up much over
the last 30 years. The people who do
computers kept saying, hey, we want credit for that. Credit for what? It's not
a lot better than it was. Obviously, the computers didn't perform some miracle there. I think over the
next 5-10 years, we will think of learning
and how you can be helped in your learning in a
very different way than just looking at material.
KEVIN SCOTT:
I know you think about this as a global problem. My wife and I, with our
family foundation, think about it as a local
problem for disadvantaged kids. One of the common
things that we see is that parent engagement makes a big difference in the educational
outcomes for kids. If you look at the children of immigrants in East San Jose or East Palo Alto here
in Silicon Valley, often the parents are
working two, three jobs, they're so busy that they have a hard time being
engaged with their kids and sometimes they don't speak
English and so they don't even have the
linguistic ability. You can just imagine what a technology like
this could do where it really doesn't care
what language you speak. It can bridge that gap between the parents
and the teacher, and it can be there to help the parent understand
where the roadblocks are for the child and to even
potentially get very personalized to the
child's needs and help them on the things that
they're struggling with. That, I think, is really very exciting.
BILL GATES:
Just the language barriers, we often forget about that, and that comes up in
the developing world. India has a lot of
languages and I was at the Bangalore Research
Lab as part of that trip. They're taking these
advanced technologies from trying to deal with the
tail of languages, so that's not a huge barrier. KEVIN SCOTT:
One of the things that you said at the GPT-4
dinner at your house is that you had this experience early in Microsoft's history
where you saw a demo that changed your way of thinking about how the personal
computing industry was going to unfold
and that caused you to pivot the direction
of the company. I wonder if you might be willing to share
that with everyone.
BILL GATES:
Xerox had made lots of money on
copying machines. They got out ahead, their patents were there, the Japanese competition
hadn't come in, and so they created a research
center out in Palo Alto, which was forever
known by its acronym, Palo Alto Research Center, PARC. At PARC, they assembled an
incredible set of talent. Bob Taylor and others were
very good judges of talent. So you end up with Alan Kay, Charles Simonyi, Butler Lampson, I don't want to
leave anybody out, but there's a bunch of
other people and they create a graphical user
interface machine. They weren't the only ones. There were people over in Europe doing some
of these things, but they combined it
with a lot of things. They put it on the network, they got a laser printer. Charles Simonyi was
there programming this and did a word
processor that used that very graphical bitmap
screen and let you do things like fonts and stuff
we take for granted now. I went and visited
Charles at PARC at night, and he demoed what he had done with this
Bravo word processor, and then he printed on the laser printer a document
of all the things that should be done if there were cheap
pervasive computers. He and I brainstormed that, and he updated the document
and printed again. It just blew my mind. The agenda
for Microsoft came out of that. This is 1979 that I'm with
him; computers are still
completely character mode. That's when the commitment
to do software for the Mac emerges from
Steve Jobs having a similar experience with Bob
Belville at Xerox PARC. Xerox built a very
expensive machine called the Star
that they only sold a few thousand of because people didn't think
of word processing as something you would pay for. You had to come in
really at the low end, so PCs, first with character mode, but later graphics
word processing. I hired Charles. Charles helped do Word and Excel and many
of our great things. Eventually, 15
years after Charles had shown me his thinking
and we brainstormed, through Windows and Office on
both Windows and Mac, we largely achieved
that piece of paper. I told the group that that was the other demo that had blown
my mind and made me think, what can be achieved in
the next 5-10 years? We should be more
ambitious in taking advantage of this breakthrough, even with the imperfections that we're going to
reduce over time.
KEVIN SCOTT:
It was a really powerful and motivating anecdote
that you shared. Maybe one last thing here before we go, or maybe
two more things. What do you think are the big grand challenges
that we ought to be thinking about over
the next 5-10 year period? In a sense like this one: I actually have this piece
of paper that Charles wrote; it's here by my desk, framed,
because I think it was one of the more unbelievable
predictions of a technology cycle that
anybody's ever written. I don't know why
everybody doesn't know about the existence
of this thing. It's just unbelievable. As you think about what lies ahead of us over
the next 5-10 years, what's your challenge? Not just to Microsoft, but to everybody in the world who's going to be
thinking about this? What do you think we ought to go push on really, really hard?
BILL GATES:
Well, there'll be a set of innovations in how you execute these
algorithms, and lots of chips. Some movement from silicon to optics to reduce the
energy and the cost. Immense innovation where
Nvidia is the leader today, but others will try
and challenge them as they keep getting
better and using even some radical approaches because we want to get the cost of execution on these things and even the training dramatically
less than it is today. Ideally, we'd like to move them so that often
you can do them on a self-contained device, not have to go up to the
Cloud to get these things. Lots to be done on the
platform that this uses. Then we have an
immense challenge in the software side of
figuring out, okay, do you just have many
specialized versions of this thing or do you have one that just keeps
getting better? There'll be immense competition between those two approaches. Even Microsoft will pursue both in parallel
with each other. Ideally, within a
contained domain we'll get something whose
accuracy is provably extremely high, by limiting the areas that it
works in and by having the training data, and even perhaps some pre-checking and post-checking type logic
that applies to that. I definitely think areas
like sales and service, there is a lot that can be done there and that's super valuable. The notion that there is this emergent
capability means that the push to try and scale up even higher, that'll be there. Now, what corpuses exist once you get past every
piece of text and video? Are you synthetically
generating things? Do you still see that
improvement as you scale up? Obviously, that'll get pursued and the fact that it costs
billions of dollars to do that won't stand in the way of that going ahead in a
very high-speed way. Then there's a lot of
societal things of, okay, where can we push
it in education? It's not that it'll
just immediately understand student motivation
or student cognition. There'll have to be a lot of training and embedding
it in an environment where the adults are seeing the engagement of the student and
seeing the motivation. Even though you free up the
teacher from a lot of things, that personal
relationship piece, you're still going to want
all the context coming out from those tutoring
sessions to be visible and help out the dialogue that's there with the
teacher or with the parent. Microsoft talks about making
humans more productive. Some things will be automated, but many things will
just be facilitated where the final engagement
is very much a human, but a human who's able to get a lot more done
than ever before. The number of challenges
and opportunities created by this is
pretty incredible. I can see how engaged the
OpenAI team is by this, and I'm sure there's lots of teams that I don't get to see that are pushing on this. And as for the size of the industry: when the microprocessor was
invented, the software industry was
a tiny industry. We could put most of us on a panel and they could complain that I work too hard and it shouldn't
be allowed. We can all laugh about that. Now, this is a
global industry, so it's a little harder
to get your hands around. I get a weekly digest of all the different articles about AI that are being written. Can we use it for
moral questions? That seemed silly to me to even ask, but fine, people can ask
what they want to ask. This thing has the ability to move faster because
the amount of people and resources and companies is way beyond those
other breakthroughs that I brought up and was
privileged to live through.
KEVIN SCOTT:
One of the things for me that has been really fascinating, and
I think I'm going to say this just as a reminder
to folks who are thinking about pursuing careers in Computer Science and
becoming programmers. I spent most of my training as a computer scientist, and the early part of my career,
as a systems person. I wrote compilers,
wrote tons of assembly language, and designed
programming languages, which I know you did as well. There are a lot of things
that I studied in terms of parallel
optimization and high-performance computer
architectures in grad school. I left grad school and
went to Google and thought I would never use any
of this stuff ever again. Then all of a sudden now, we're building
supercomputers to train models and these things
are relevant again. I think it's interesting. I wonder what Bill Gates,
the young programmer, would be working on if you
were in the mix right now, like writing code for these
things because there's so many interesting
things to go work on. But what do you think you
as a 20-something-year-old, young programmer
would get really excited about in this
stack of technology? BILL GATES:
Well, there is an element of this that's
fairly mathematical. I feel lucky that I
did a lot of math. That was a gateway to
programming for me, including all the crazy stuff with numerical matrices
and their properties. There are people who came to programming without
that math background, who need to go and get
a little bit of the math. I'm not saying it's super hard, but they should go and do
it so that when you see those funny equations,
you're like, I'm comfortable with
that because a lot of the computation will be that thing instead of
classic programming. The paradox is that when I started out, writing tiny programs
was super necessary. The original Macintosh
was a 128K machine, 128K bytes, 22K of which was the bitmap screen. Almost nobody could write
programs to fit in there. Microsoft, our
approach, our tools, let us write code
for that machine and really only we and
Apple succeeded. Then when it became 512K, a few people succeeded, but even that, people
found very difficult. I remember thinking,
as memory got to be 4 gigabytes, that all
these programmers don't understand
discipline and optimization; they're just allowed to waste resources. But now that you're operating with
billions of parameters, the questions come back: okay, can I skip some of
those parameters? Can I simplify some
of those parameters? Can I precompute various things? If I have many models, can I keep deltas between
models instead of full copies? All the optimizations that made sense on those very
resource-limited machines, well, some of them come
back in this world where when you're going to do billions of operations
or literally hundreds of billions
of operations, we are pushing the
absolute limit of the cost and performance
of these computers. One thing that is
very impressive is that the speed-ups, even in the last six months, on some of these things have been
better than expected. That's fantastic because you
get the hardware speedup, the software speedup
multiplied together. That raises the question: how resource-bottlenecked
will we be over the next couple of years? That's less clear to me now that these improvements
are taking place, although I still
worry about it, and about how we make sure that
companies broadly, and Microsoft in particular, allocate that capacity in a smart way. Understanding algorithms, understanding why certain things are fast and slow, that is fun. The systems work that
in my early career meant just one computer, and later a
network of computers, now means datacenters with
millions of CPUs; it's incredible, the optimization
that can be done there.
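The optimizations Bill lists earlier, simplifying parameters (quantization), skipping parameters (pruning), and keeping deltas between models instead of full copies, can be illustrated in a few lines. This is a minimal NumPy sketch over a made-up array of weights, not how any production inference stack actually implements these techniques:

```python
import numpy as np

# A toy "model": a flat array of float32 parameters standing in for
# billions of real weights.
rng = np.random.default_rng(0)
base = rng.standard_normal(1_000_000).astype(np.float32)

# "Can I simplify some of those parameters?" -- quantize to int8,
# a quarter of the storage of float32.
scale = float(np.abs(base).max()) / 127.0
quantized = np.round(base / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

# "Can I skip some of those parameters?" -- prune the smallest half
# of the weights to zero (magnitude pruning).
threshold = np.quantile(np.abs(base), 0.5)
pruned = np.where(np.abs(base) >= threshold, base, 0.0).astype(np.float32)

# "Can I keep deltas between models?" -- store a fine-tuned variant as a
# sparse delta against the base model instead of a full copy.
finetuned = base.copy()
touched = rng.choice(base.size, size=1_000, replace=False)  # 0.1% of weights
finetuned[touched] += 0.01
delta_idx = np.nonzero(finetuned != base)[0]
delta_val = (finetuned - base)[delta_idx]

# Reconstruct the fine-tuned model from the base plus the sparse delta.
reconstructed = base.copy()
reconstructed[delta_idx] += delta_val
assert np.allclose(reconstructed, finetuned)

print(quantized.nbytes, base.nbytes)  # 1000000 4000000
```

Real systems are far more careful (per-channel scales, structured sparsity, low-rank deltas), but the storage arithmetic is the same: int8 weights take a quarter of the space of float32, and a sparse delta stores only the entries that changed.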
supplies work, or how the network
connections work. Anyway, in almost every
area of computer science, including database-type
techniques and programming techniques,
this forces us to think in
a really new way. KEVIN SCOTT:
I could not agree more. Last, last question. I know that you are
incredibly busy and you have the ability to choose to work on whatever it is that
you want to work on. But I want to ask you anyway, what do you do outside
of work for fun? I ask everybody who
comes on the show that. BILL GATES:
Oh, that's great. I get to read a lot. I get to play tennis a lot. During the pandemic I
was down in California in the fall and winter and
I'm still enjoying that. Although with the
Foundation meeting in person, and some of these
Microsoft OpenAI meetings, it's been great that we've been able to do
those in person, but some we can
just do virtually. Anyway, I play pickleball, which I've been playing
for over 50 years, and tennis, and I like
to read a lot. I goofed off and went to
the Australian Open for the first time because it's nice warm weather
down there and that was
kind of spectacular. KEVIN SCOTT:
I actually want to push on this idea that
you read a lot. You say you read a lot, which is not the same as what most people say when
they say they read a lot. You're famous for
carrying around like a giant tote bag
of books with you everywhere you go and you read
an insane amount of stuff, everything from really difficult science books all the way to fiction. How much do you actually read? What's a typical pace of
reading for Bill Gates? BILL GATES:
If I don't read a book in a week, then I really look at what
I was doing that week. If I'm on vacation, then I'll hope to read
more like five, six, or seven because books are
quite variable in size. Over the course of the year
I should be able to read close to 80-plus books. I have younger children who
read even more than I do. It's like, oh geez, which Sowell
books am I going to read? I still read all the
Smil and Pinker; some authors
are just so profound and have shaped my thinking.
But reading's relaxing. I should read more fiction.
When I fall behind, my non-fiction list tends to dominate and yet people have suggested such
good fiction stuff to me. That's why I share my
reading list on Gates Notes. KEVIN SCOTT:
You're famous for saying
that you want to read David Foster Wallace's
Infinite Jest. What's the over-under
on that happening in '23? BILL GATES:
Well, if there hadn't been this darn AI advance. I'm kidding you, but
it's really true. I have allocated and
with super excitement, a lot more time to sitting with Microsoft
product groups and saying, what does this
mean for security? What does it mean for Office? What does it mean for
our database-type things? I love that type of brainstorming because new
vistas are opened up. So no, it's all your fault. No Infinite Jest this year. KEVIN SCOTT:
Excellent. Well, I appreciate you making that trade
because it's been really fantastic over the
past six months having you help us think
through all of this stuff. Thank you for that and thank you for doing the podcast today. Really, really appreciate it. BILL GATES:
No, it was fun, Kevin. Thanks so much. KEVIN SCOTT:
Thank you so much to Bill for being with us here today. I hope you all enjoyed the
conversation as much as I did. There are so many
great things about this conversation that
reflected the conversations
we've been having with Bill over the
past handful of months as we think through
this amazing revolution. One of the things that I've
learned most from Bill over the past handful of months as we think about what AI means for the future is how
he thought about what personal computing and the microprocessor and PC
operating system revolution meant for the world when he was building Microsoft
from the ground up. Even what it felt
like for him as one of the leaders helping bring the Internet revolution
to the world. There were parts of today's conversation
where he recounted some of
those experiences, like the first meeting he had with Charles
Simonyi at Xerox PARC, where he saw one of the world's first
graphical word processors, and how seeing that and talking with Charles influenced an enormous amount of the history of
not just Microsoft, but the world in the
subsequent years. Just hearing Bill talk
about his passion for the things that the Gates
Foundation is doing and what these AI technologies
mean for those things, like how maybe we can
use these technologies to accelerate some
of the benefits to the people in the world
who are most in need of technologies like
this to help them live better and more
successful lives. Again, it was a tremendous treat to be able to talk with
Bill on the podcast today. If you have anything you'd
like to share with us, please e-mail anytime at
behindthetech@microsoft.com. You can follow
Behind the Tech on your favorite podcast platform, or check out full video
episodes on YouTube. Thanks for tuning in
and see you next time.
TL;DR
Applications for AI in health:
- Diagnose illnesses
- Explain illnesses to patients or insurance companies
Applications for AI in education:
- Summarize articles or books
- Explain problems that come up in learning
- Accelerate learning and change the way people learn
- Solve problems of communication and language
Applications for AI in customer service:
- Answer questions on the phone, in sales or after-sales