[MUSIC] Hi everyone, thank you guys so
much for coming today, my name is Michelle Lee, I am a second
year PhD student at the Stanford AI Lab. And I am one of the organizers of
the AI Salon, and today's Salon moderator. To just give a little bit
of background on AI Salon, AI Salon was started four years ago by Dr.
Fei-Fei Li. It is one of the most beloved and iconic student faculty events
at the Stanford AI Lab. At AI Salon, we have had many speakers,
such as Elon Musk, and Jensen Huang. And we talk about big picture
topics in AI that go beyond just talking about code and algorithms. Today, we have three exceptional speakers
for the Salon, we have Dr. Kai-Fu Lee, we have Professor Susan Athey, and
we have Professor Erik Brynjolfsson. We're here to discuss
an incredibly important and pertinent topic, AI and
the future of work. But before we start the Salon,
we want to welcome Dr. Kai-Fu Lee who will be introducing
his book, AI Superpowers, which is a New York Times, Wall Street
Journal, and USA Today bestseller. Dr. Kai-Fu Lee is a venture capitalist,
technology executive, writer, and
artificial intelligence expert. He is the founder of Sinovation Ventures, a leading VC firm focusing on
developing Chinese high-tech companies. Prior to founding Sinovation Ventures, Lee
was a Vice President at Google, founding President of Google China, and Founding
Director of Microsoft Research Asia. He was named one of the 100
most influential people in the world by Time Magazine. Lee earned his PhD at CMU in 1988, where he developed one of the first
continuous speech recognition systems. He has authored 10 US patents and
over 100 journal and conference papers and is the author of AI Superpowers. Let's welcome Dr. Kai-Fu Lee to the stage. >> [APPLAUSE] >> Thank you, it's great to be here with such
distinguished people in the audience. But I thought I would bring even someone
more distinguished to say something about AI. >> It's a great thing to build a better
world with Artificial Intelligence. >> [LAUGH]
>> [FOREIGN] >> [LAUGH] [APPLAUSE] >> I'm not sure if he came himself he would get that much applause, but. >> [LAUGH]
>> But that wasn't President Trump talking. That was a speech synthesis system built
using deep learning to train on his voice, by a Chinese company called iFlytek. So I think in one example,
we see the power of machine learning and also the progress that China has made. So most of you probably know AI very well,
so I won't go into any
description of the technology. But I would just say, if there
are a few people in the audience who are not familiar,
machine learning is the core part of AI. And deep learning is the most
advanced technology that is working. And think of it as in a single domain,
when you have a huge amount of training data, it can
do things much, much better than people. But only in a single domain
with objective functions. So that's all I'm gonna
say about technology, but I want to talk a bit about applications. Because we are a venture capitalist firm, we've been thinking about what
are the areas to invest in and from our perspective,
there are really four waves of AI. Now, given that AI requires a huge
amount of training data. The first natural place is, of course,
Internet applications where we are all generators of data and
free guinea pigs labeling for the likes of Amazon, Facebook,
and Google every day. Every time you make a click,
buy something, you are creating data that
makes the system smarter and able to guess what you might want to buy
next time and send the right ads to you. And what's more important is that
AI gives very powerful knobs to the CEOs of these companies so
that they can optimize minutes per user. And that's how Facebook got into trouble,
actually, by optimizing towards one goal. Or you can maximize revenue per day, or
per month, or you can maximize profit. So each of these different objective
functions will cause the company to display different ads, and products, and choices
to you in order to maximize that function. And that I think is the big power of AI. And that's why pretty much all
the [COUGH] giant AI companies are first Internet companies, which include
the giants here, Amazon, Facebook, Google, Microsoft, but also the Chinese
giants Alibaba, Tencent and Baidu. They have a huge amount of data and they
use it to extract value and make money. So that's kind of the first wave. The second wave you might ask is,
who else has a lot of data? And those are going to
be companies that used to consider the storage of data and
the data center as a cost center. And the requirement to
store the data was something you had to do for
archival purposes. Let's say a bank used to store all
of the customer transactions because you don't know when a customer
might want to see it. But now with AI, all of those transactions
and data become mountains of gold that can be used to do things like
customer loan determination, credit card fraud detection,
asset allocation. And also deciding what product
to try to sell to each customer, and estimate each customer's net worth and
so on. And that's what one of our investments,
Fourth Paradigm, does. And that, of course, is not just for
banks, but also insurance companies, financial investors, pretty much any
company that has a large amount of data. To give you an idea why AI is so powerful, I'll pick one example from a company that
we invested in called Smart Finance. What they do is loans, so
basically micro loans. Imagine a loan of $500 for
something like six months, and at, let's say credit card level rates,
but it's done through an app. So all you have to do is download the app,
fill out the usual things you fill out with loan applications, your name,
address, where you work, how much you make, whether you rent or
own your place, and things like that. But also it asks for your consent to
send information from your phone, at the same level other
apps send information to the likes of Facebook and
Amazon, and Snapchat. And it takes all that information
into actually a deep learning function that determines whether
to lend the money to you. So now could you imagine going
outside Stanford with $500,000 for those of you who have it. And for the first 3,000 people that comes
to you You picked 1,000 of the 3,000 and hand each person $500, so
your $500,000 is gone. What do you think the default
rate will be, 80%, 90%? What's the likelihood a stranger off
the street will pay you the money back? Very low, right? But the default rate for this app is 3%. So how do we manage that? We manage that because of all the
information that comes in that no human loan officer could possibly consider. So it would have things like your name, your address,
does it match on the Internet? How long did it take for
you to type out your address? If it takes too long,
maybe you were making it up. Or maybe you're copying it off something,
right? It would also have your contact list, which is submitted just like you
do to Facebook and Snapchat. And the contact list can be verified. Who's the person you call Mom? Is that person in fact your mom? And things like that
can be double checked. And also what apps you have installed. Do you have a lot of gaming apps,
gambling apps, or serious apps? And all of that plays a role. And also what kind of a phone you have. What's the model of the phone? Is it the newest iPhone or some old phone? And what day of the week is it? What day of the month is it? Why is that important? Well, is it before payday or
is it after payday? When you borrow the money before payday,
very reasonable. Borrow the money after you've just been
paid, that may be a negative signal. And then just for fun, we want to see
the 3,000 features that were extracted. And the least important feature
turns out to be the battery level. >> [LAUGH]
>> Why does that matter? Well, if you think about it, if you have
OCD and charge your phone all the time, it's probably a little bit correlated
with someone who tends to repay, right? And if you're kind of irresponsible
letting your battery run out, well, maybe that's a little bit
correlated with someone who defaults. Of course this is a very
unimportant feature, but it has some contribution nevertheless. So no human could possibly scan and
combine these 3,000 features. So you're probably wondering, well,
how did they train the system? Well, of course based on actual outcome, whether the person returned the loan or
not. So now you're starting to figure out,
wait a minute, [COUGH] so when you had no data,
they had no data to begin with. Well, that's what venture
capitalists are for. >> [LAUGH]
>> We give them the money, they lose it at a 20% default rate. They come back for more money. Our default rate is now down to 14%,
can we have another $30 million? Okay, and that kept going. It kept going until they got to about 7%, at which point they could just
borrow money from banks and be assured that they are gonna make more
money from that without using VC money. So you see how terrible this is. So that's an interesting example. The third wave is what we call perception,
and that is essentially digitizing
the physical world with cameras, sensors, microphones, and so on. Examples of that include Amazon Echo, includes the autonomous stores,
and of course it includes the very controversial application, face
recognition, just in the headlines today. I'll use that as an example, not to
endorse the use of face recognition, but just as an example of
how powerful it can be. Recently, there is a very famous
Chinese singer, Jacky Cheung. Any of you know? Okay, very famous. I see some older people here so
you would know him. Very famous singer. So he was giving concerts in China. He gave I think four concerts, and then after that he got a nickname,
Policeman Cheung. Because for
the safety of those stadiums for which he gave the concerts, as people
went in, there was face recognition. And the face recognition was connected
to the most wanted criminal list, and
then people were apprehended when suspicious recognitions were made. And then in maybe 70% of cases,
it was a false alarm, so, sorry, here's your ID, you're not a criminal. But then in 30% of the cases, people were
actually from the most wanted list. So imagine just on the technical level how
this could possibly be done without AI. Would any policeman be able
to remember 100,000 faces? Of course not. So now we can see these applications
are not just at human level or not, but they can be
dramatically better than people. And then the fourth wave is
what I call autonomous AI, and
autonomous vehicles. We have made number of
investments in this area, including robots that pick fruits,
robots that wash dishes, if you want one. If you want, it's right outside. Actually, we can take orders later. It's only $300,000 each. With these robots, you actually put
everything from the table into the machine, and it separates the items
into piles and cleans them. So it's not a dishwasher. So how can they sell many at $300,000? Well, everything comes down, right? With volume, cost will come down,
so eventually, you can get one. And there is also
robotics in use in autonomous stores. So we have an investment in China
called F5 Future Store that is an autonomous fast food restaurant. You can have a bowl of beef noodles, very authentic Cantonese style,
for about $1.50. So that's going to give
McDonald's a run for its money because it's much cheaper and
it's completely autonomous. No humans at all. And autonomous convenience stores and
so on. And finally,
there is the autonomous vehicles, which we all know a lot about so
I won't go into that. So these are the waves I think
will really revolutionize almost every imaginable industry. Will there only be four waves? Surely not. If you asked me 20 years ago,
what are the waves of the Internet? I might have told you there were
waves related to websites and browsers and search engines or
something like that, but we would not be able to predict all
the other things that happened later. For example, sharing economy,
e-commerce, social, mobile. So those are many more
waves of the Internet, and AI I think will be similar to that. So again, AI is really about a large
amount of data in a single domain with labels, and then a fair
amount of compute and some experts. These are the magic
ingredients that make AI work. And all the famous
scientists are Americans and Canadians, so you would think the US is,
by far, the leader in AI. Well, in research that is the case, but in implementation, I would argue not,
and I will show you why that is. So this slide shows
the h-index of the top 1,000 researchers. US has 68%, and then China is only 6%. So again, demonstrating US is well ahead. But these are the three issues that one
has to consider about AI implementation, application, and monetization. First is, well,
how many breakthroughs have there been? People ask me what the y-axis is. I said I made it up. [LAUGH] This basically shows
the magnitude of the various- >> [LAUGH] >> The magnitude and importance of various types of innovations. And people can draw their own chart,
but the idea here is that there has only been one single big
breakthrough, and that was nine years ago. So it is probably a big
question whether there will be a lot more breakthroughs
in the next decade. And if there are no big breakthroughs,
it's hard for the US to retain its leadership. Because the AI technologies are reasonably
well understood because of all the open source and the sharing, and
people publish online immediately. So there's not even a latency
in the journal papers. So in terms of implementation, all countries are more or less
on an even playing field. And given that, I would argue that we
are now no longer in the early phase of AI entrepreneurship,
where expertise is king. But today we're more in implementation, so it's a question of who can build faster,
and run faster, and who has more data. So on the last point, I would argue for many applications you really don't
need these super AI experts. Young AI engineers will suffice,
especially in waves one and two. So given this, [COUGH] as an example, we run a training camp for
AI every summer. This year, we had 300 students. Next year, we'll do 1,000, and these are largely undergraduate students
who have had maybe one course in AI. And then, after one week of courses that we teach,
we basically have industry project leaders, people from autonomous vehicle companies,
speech companies, and vision companies,
lead teams of eight to build projects. And actually, just with one week of lecture,
four weeks of implementation, they're able to build things. In this case it's an autonomous vehicle. It's a toy car, but it has a real camera. It maps the campus of
Peking University and it's able to drive by itself from
any building to another building. And this was done by eight students. So it goes to show you that the barrier
to entry is really not that high anymore. So a little bit more detail
about China's position. First, there are a lot of
young Chinese AI engineers. People are rushing into it. The upper right shows you the picture. The bottom is a lecture that I give, and
a few more people than this room. And then on the left side, we see the
number of AI articles written overall. Actually, Chinese authors are above 40%, so much larger than even
China's share of the world's population. So it's just that they
haven't become famous yet. They're entering at the bottom of
the pyramid, and they are growing so I think that to the extent that AI
implementation just requires young hardworking engineers, China has them. Secondly, Chinese companies are innovative. So this slide shows China
began as a copycat. This was only ten years
ago on the left-most side. Basically every Chinese company
was the Google of China, the Amazon of China,
the Apple of China, and so on. But very quickly China began to
learn product-market fit. See, in Silicon Valley,
I think people really frown upon copying. Not talking about IP theft, I'm talking about what Facebook did
to Snapchat, for example, okay? And that happens everywhere in China. And actually,
we learned a lot of things by copying. Did we not learn music and
art by first copying? And then a small percentage
become innovative. And of course, if you're forever
a copycat, you'll get nowhere. So in the second phase you see that the
Chinese companies actually are as good, in some cases, better than American
companies, the ones that inspired them. For example,
WeChat is better than WhatsApp. Many of you probably use both of them. And you would know. And
Weibo is better than Twitter, at least as a product,
maybe not in diversity of content [LAUGH]. >> [LAUGH]
>> That's a different story. And then in the third phase, actually, these are really innovative
applications all invented in China. I can't even begin to
explain each of them. Maybe I'll try one. Baidu and
TikTok are the top apps in China. Together they have 220
million daily active users. This is a phenomenal number. And what they are is, basically,
video-based social networks. Something that doesn't
exist at all in the US. In fact, there was a review of TikTok, [COUGH] I think it was in
the Wall Street Journal from a few days ago. So feel free to check it out. And there are seven, I think there
are seven or eight apps here. And the total valuation of these brand
new Chinese apps is about $300 billion. And it goes to show you that
China is becoming innovative. And today we really have a parallel
universe with the US basically running on these apps and
China running on those apps. And these are parallel universes. So a lot of people keep asking me,
can Google go back to China and succeed? Well, it's very hard to
traverse the parallel universe. It's just like a Chinese company
probably cannot come here and succeed. So the Chinese apps
are really every bit as good. And China also has
great entrepreneurs, and many of them are going to go into AI. I won't go into details here, but
I'll just say that the Chinese approach
to building a company is very different from the Silicon Valley approach. Here I think it's about vision,
changing the world, being technology-centric, making it light,
non-capital intensive. China is almost the opposite. It's about building it fast, working tenaciously,
iterating quickly, and executing. And one very unique aspect of Chinese
entrepreneurship is that the Chinese entrepreneurs, because there are so
many copycats around, the only way you can
win is winner take all. Not only do we have to win,
everyone else has to lose. But also on top of that, you have to build
a business model that is uncopiable. Otherwise someone will take it. So how do you do that? Well, you make something really
incredibly complex, very expensive. That's how you do it. So Meituan is an example, in the middle, the GrubHub of China,
or the Yelp of China. But what they do is they
deliver food to every person in a city within 30 minutes for
the cost of $0.70 per delivery. And that has changed the way
the Chinese people eat. But how do you do that? They have 600,000 people. Essentially, imagine they're
on an Uber-like network, and there's reverse surge pricing
inviting them to deliver. And also they're on very cheap electric mopeds that have to have
batteries replaced. So they have to build a giant algorithm,
and bring in and train 600,000 people, pay them minimally, and
of course many will turn over and leave, so you'll have to bring more in. That
is the complexity of making the 600,000-person,
Uber-like delivery network work, and that makes their business model impenetrable. So the Chinese entrepreneurs
are really good at this. And of course, in China, a lot of the money is flowing into AI,
even more than in the US. Last year, 2017, 48% of the world's
AI venture capital went into China, and 38% into the US. The equivalent companies are actually rising faster in market valuation
in China compared to here. This example is iFlytek and Nuance. Just Sinovation Ventures alone, our investments,
we have already made five unicorns in AI. And the youngest of these companies
is only less than two years old, so it's very, very fast, and
they really execute and deliver. And of course the amount of
data is very important, and the right is a generic graph that shows
that the more data you have, no matter what algorithm, so long as it is reasonable,
the better your results. So, in the era of AI, if data is the new
oil, then China is the new OPEC. I used to say Saudi Arabia, but
that's not a good analogy [LAUGH] anymore. And why does China have more data? A lot of people say,
it's because there's no privacy there, everybody exchanges data. That is not true. The Chinese companies get data the same
way as the American companies, think of them as getting data
like Facebook and Google. China has more data for two reasons,
one is breadth, just more people, and one is depth,
because the usage is much deeper. Chinese users,
to use the example I gave earlier with Meituan, order takeout
ten times more than in the US. Shared bicycles,
300 times more than in the US, and every usage is a data point
that's used to train AI. And the most important is,
of course, the mobile payment. In China, mobile payment is
used 50 times more than in the US. And some might say, we've got
Apple Pay, it just takes some time. But that is not true, because these are mobile payments
unattached to credit cards. Apple Pay and PayPal still largely
connect to credit cards. And as long as you
connect to credit cards, it's basically taxing the economy at 2%,
or something like that. So in China,
the use of mobile payment has become so convenient that you see here in
a farmer's market, and my wife last month saw a beggar in Beijing and he was
holding up a sign, I'm hungry, scan me. So, I would never joke about that, this is
serious, because nobody has change. No cash, no credit cards. And all of that becomes
data used to train the AI. It also will contribute to
make China go from a savings economy to a spending economy. Also makes entrepreneurship a lot easier, because you can monetize
users from day one. You don't have to wait until
you have a million users. So this is another huge benefit. And of course lastly,
Chinese government strongly supports AI. But unlike what most people read,
the government came in a little bit late. All the unicorns were made
without any government support. They became unicorns on their own
capabilities, with private capital. However, once the government saw
the importance of AI, I think there are a couple of really important
things the Chinese government does. One is the techno-utilitarian policy. What that means is, let the technology
be implemented and see how it goes. If there are problems, then regulate it. So that is how the mobile
payment became pervasive. Imagine in the US, if Facebook announced
we're going to have a new payment method. I think immediately the likes of MasterCard would
complain and say, software companies, they're not reliable, there could
be hacks and data could leak and fraud and all that stuff, right? And then, there might be regulation or
checks to slow them down. But in China, Tencent and Alibaba were proven competent,
they were allowed to go forward. Of course techno-utilitarian doesn't mean
everything is allowed; cryptocurrency, for example, is not legal in China. And of course the state document really
sets the tone about the importance of AI, but it actually has no
budget associated with it. The budget is determined by
each reader of the document. For example,
in our investments in AI for banking, after this document came out, the banks
were much more open to buying AI software. So that was helpful to us. In the city of Nanjing,
the city government said, well, we have great universities,
so let's do an AI park. And China's building [COUGH] a new
highway in Zheijiang province with sensors to help the safety
of autonomous vehicles. And the New city called Xi'an, which is the size of Chicago is
being built, with two layers. Not the whole city, the downtown. The top layer is for pedestrians and
pets and bicycles, and the bottom layer is for cars. And that very expensively but nicely avoids the kind of problem that we
saw in Phoenix with the Uber autonomous vehicle. Because the largest problem of an autonomous
vehicle is when you hit a human, and that's the highest likelihood of casualty. By separating the flow you
eliminate that possibility. Also the cars driving underneath
will have controlled lighting, so also avoiding problems that
we saw with Tesla Autopilot, because you have fixed lighting
in the B1 level of roads. So, that kind of huge infrastructural
spend I think will accelerate China's development of AI. So, where does China stand in
terms of AI implementation? Again, this is not research,
this is implementation. I think China came from way behind,
to roughly caught up, to probably a little bit ahead going
forward, and this is my projection. However, I do want to say that
this is not a zero-sum game, because Chinese companies right now
only sell to Chinese customers. So their success does not come at
the expense of an American company. But people want to know where things
stand, and these are my estimates. So with two engines now driving
AI forward, not one, and also lots of money pouring into it. And giants training the experts,
and lots of open source and cloud technologies,
AI will create a huge amount of value. PwC estimates about $16
trillion of net increment to GDP in the next 11 years. McKinsey estimates about $13 trillion. So this is the size of the GDP of
China plus India, it looks huge. And this will do a lot of things. Make a lot of money for people. It could be used to reduce hunger and
poverty, but there are also a lot of problems in AI. I know we're gonna have
a discussion on this, so I'll just talk more
quickly over this part. A lot of the top issues
are discussed a lot in the US, but the future of work is something that
is maybe not discussed as much, so I want to spend just
a few minutes on that. Because AI is single domain,
lots of data, and it does superhuman capabilities. So what that means is the types
of jobs that it can displace will Increase over time, because many
jobs are routine and are repetitive. And those jobs,
with improvements of AI algorithms, will be better done by machine. >> [LAUGH]
>> Where some of us are still safe. >> [LAUGH]
>> And our professors, researchers, creatives, and
CEOs deal with complex problems. AI is single-domain and non-creative. So there's some safety there, this is
happening at white collar and blue collar. I personally believe white
collar routine jobs will be hurt first because
that's just software. Robotics don't yet have the dexterity
of putting an iPhone together and probably won't for a long. So here, you see examples in
the white collar: Citi has announced half of their operational back office
will be replaced by automation. Here you see the example of the fast food I
was telling you about. Basically, they're not gonna displace
any jobs one-for-one. But if they have 50% market
share, then McDonald's and KFC will have a reduction in force. And then the last one shows you a basic autonomous cashier that
is in use in Beijing. If you visit this pastry shop, you pay by yourself just with
computer vision scanning. And it's only like $2,000
for one cashier displacement. So this is really happening very quickly. We should be worried about large
numbers of jobs being displaced. There are many people who would argue,
hey, every technology revolution creates more jobs than it disrupts,
which may be the case with AI as well. But the problem is,
we don't know where those jobs are, nor do we know when they will be created. For example, when the Internet started,
no one had any idea so many Uber jobs would become available. And it was impossible to predict,
so, same with AI. But I would say, Uber jobs didn't get created until 20
years after the invention of the Internet. We can't wait that long. Also, any jobs AI creates will
tend to be non-routine jobs. Because if it were a routine job,
then AI would just do it itself. So even if the total number of jobs
remains constant or even increases, the people who are displaced, let's say 40% of the workforce
over the next 20 years. They will not be able to
easily transition from a routine job into a non-routine job. So, what jobs can be offered to them is
going to be a significant challenge. One thing I believe gives us hope is that
if we think about what AI cannot do. One is that it cannot
create, as we mentioned; it's not creative, not strategic. And the other is that it has no empathy or
love, and has no compassion. And there are many jobs that
we don't want a robot to do, that we want to interact with a human. So if we draw these two dimensions, changing it from a one-dimensional
to a two-dimensional picture, the x-axis showing creativity and
the y-axis showing compassion or empathy, then we can move these jobs around,
and we'll see actually there are many jobs in the upper
left that will become possible for people who may lose the jobs on
the lower left to shift into. As an example, in the area of healthcare, the US will need 2.3 million
additional people in healthcare. And that includes nursing and home care, elderly care, and so
on in the next six years. And those jobs aren't
easily getting filled. Because the job of an elderly
caretaker pays about $19 an hour, which is less than half that of
a truck driver or heavy machine operator. Yet the heavy machine
operator's job may disappear, while elderly caretakers will
actually be needed more and more. As we live longer and our longevity increases, people over 80 need five times as much
care as people between 60 and 80. And that will create an opportunity for
more jobs. Now we can even imagine,
many jobs that are not being paid today. For example, someone who
home-schools his or her children, or a hotline volunteer, or an elderly companion. These are jobs that may be unpaid or
voluntary; one could imagine that if people from the lower left were to lose
their jobs, these could become paid jobs. I think this approach would be much better
than universal basic income which gives money to everyone whether they need it or
not, but this would give targeted training for people to
move into more compassionate jobs. A very good example is Amazon,
which announced this April that it will offer $12,000 of reimbursement,
over up to four years, for
taking classes toward new professions that are in big need or
less likely to be replaced by AI. You've got to think, Jeff Bezos is
thinking about his Whole Foods cashiers, he's thinking about his warehouse pickers. You know how Amazon has Kiva robots
move the large things and then the people actually do the picking. Well, we all know from
our robotics professors,
a lot of that will become automated. So this is actually a very interesting and
generous move by Amazon. And actually many of you
probably don't know but the average salary at Facebook and
Google is about $200,000. And at Amazon, it's $28,000,
sorry, the median is $28,000. So offering $12,000 of reimbursable
training on a $28,000 salary, that's a pretty generous step. And Jeff Bezos has said that he would
like to feel that his responsibility isn't just to shareholders, but
also to employees, by ensuring that
they are still employable even if they're displaced, even if the job isn't available at Amazon. The types of classes he
offers include aeronautic repair, something that a warehouse picker
could understandably be trained in, or nursing, something that a cashier
could probably also be trained in. So these are really steps
that we need to take. So in terms of which
jobs AI displaces: the lower left side,
they will be displaced largely by AI. But on the lower right side, AI can be a
tool to make the scientist more creative, to help invent more drugs in the same amount of time, and
to alleviate pain, and cure diseases. And then on the upper left is a different
type of human machine symbiosis where AI might perform most of the analytical part
of a task, where human has the warmth. For example, a doctor might use more and
more AI tools and diagnostics to determine what's wrong
with the patient and what to do. But it is the doctor who would communicate with the patient, tease out all the issues, problems, and family history, input them into the computer, and then explain the result to the patient in a way that comforts the patient and gives the patient confidence, maybe with home visits and so on. And we know that through the placebo
effect, if the patient has higher confidence, there's also higher
likelihood of recuperation or survival. And the upper right is really where humans [COUGH] shine, with both our compassion and creativity. So, this is the blueprint of
coexistence of human and AI. So, in my talk,
I've talked about the opportunities and challenges in the next 15 to 20 years. But I firmly believe, if we look a little bit beyond that, for the students in the room, when your children enter the workforce, 30 or 40 years from now, and they look back and think about what AI meant for humanity, I think they will think two things. First, AI is serendipity, because it
liberates us from routine jobs, so that we can spend more time with our loved ones, we can do things we're passionate about, and we can have time to think about what it means to be human. And second, for people who are worried about AI:
AI is just a tool; we are the only ones with free will, and we set the goals for AI. So we humans are going to write
the ending of the story of AI, thank you. >> [APPLAUSE] >> Thank you so much, Dr. Lee. I wanna welcome to the stage Professor Susan Athey and Professor Erik Brynjolfsson to start our discussion. >> Yeah, thanks. >> So we have a tradition at AI Salon
where we always use our hourglass to keep the time, because we use no technology during AI Salon. This is an Enlightenment-era salon where technology doesn't exist yet. So, in light of that, we'll also be using our hourglass to track the time, and so this will be an hour for
our discussion. I just wanna do a quick introduction
of our two other guests here. Professor Susan Athey is a professor at
the Stanford Graduate School of Business. She received her bachelor's degree from
Duke University and her PhD from Stanford. Her current research focuses on
the economics of digitization, marketplace design, and the intersection
of econometrics and machine learning. She's one of the first tech economists and
served as a consulting Chief Economist for Microsoft Corporation for six years. And in 2007, Professor Athey became the first female
winner of the John Bates Clark Medal, one of the most prestigious
awards in the field of economics. Professor Erik Brynjolfsson is
the Director of the MIT Initiative on Digital Economy and
a professor at the MIT Sloan School. His research examines the effects of information technologies on business strategy, productivity and performance, digital commerce, and intangible assets. He was one of the first researchers to measure the productivity contributions of IT. His research has appeared in leading economics, management, and science journals, and has been recognized with ten best paper awards and five patents. So, thank you guys so
much for being here today. I wanted to start out the Salon by thinking about the applications of AI. So Kai-Fu here talked a lot about many different uses, such as loans, or we can think of things in the news like AlphaGo for Go, or autonomous driving. But how generally applicable
is this technology? I was wondering if Erik, you can talk more about this. >> Well, I think Kai-Fu was exactly right, that these are some amazing technologies,
but they're quite narrow in many ways. And I think this is one of the biggest misunderstandings in the popular press, especially with Hollywood, where there's a sort of impression that we're close to what a lot of people call Artificial General Intelligence, AGI. And we're really far from it; there are more breakthroughs needed, and we don't know how many. But deep learning by itself, quite remarkable as it is, can't, I think most people would agree, get us to AGI. That said, we shouldn't make
the opposite mistake and underplay how remarkable these breakthroughs are. So in certain specific areas, like image recognition in particular, where Fei-Fei with ImageNet set off an explosion of work, we've seen how rapidly, using deep learning techniques, you could get to
recognize images, voice recognition, credit decisions, and
Kai-Fu gave lots of other examples. And each of those is, in its own narrow way, not just human-level but superhuman. The issue is that when a human is able to do something extremely well, say speak Chinese, you assume they also know something about which Chinese restaurants are good, or something about Chinese culture. With AI, it would be a mistake to take extreme competence in one domain and generalize it to other areas. >> I see, so
kind of going off of that, Susan, you just gave a tutorial about causal inference, and you kinda talked about a lot of unanswered
questions in the field of AI and a lot of work that we still have to do. Can you talk a little bit more about
the research agenda that your group and other researchers are working on to
make AI more generally applicable? >> Yes, so first, I would just say I really agree with the perspective in the book, and also the way that Erik has nuanced it: we have this incredible revolution, and the big breakthrough of neural nets allowed us to solve problems that we couldn't solve before. But those problems still fall into fairly narrow classes. So, trying to understand AlphaGo: Go is a very hard game
because it has a very big state space. But it's still a game where
if I have two strategies I can play them against each other and
see which one is better. And so the computer can generate
massive amounts of training data. The actual algorithms used in AlphaGo are very similar to what we've been using for decades in economics to try to learn about payoffs from human behavior, firm optimization decisions, or firm equilibria, and to try to simulate what would happen in a different world. That is a form of counterfactual inference, trying to understand what would happen if something changed a bit. But even though the sort of
conceptual approaches are similar, the neural nets don't necessarily make
a big breakthrough for those problems. And I'm going back to revisit those
problems using all I've learned about machine learning, because the problem
there was actually just lack of data. If we want to study where Walmart and Target should put their stores, or how much firms should invest, or even how a human should make dynamic decisions about training or unemployment, for those decisions we still have relatively small data sets to study them with, and the real problem is kind of data sparsity. And also, lack of enough sort of
experimental variation in the data to really learn about cause and effect. So, even though these breakthroughs
are huge, it's not to say that, well, we tackled Go, so the next step is to replace the economist. Kai-Fu noted that economists wouldn't be replaced. >> Thank you [LAUGH]
>> I was very happy. >> [LAUGH]
>> He also noted that we don't have a lot of compassion, which sadly- >> [LAUGH] >> It's true of my colleagues, simply not of me. [LAUGH] But so I think, just as Erik said, it's not like it's just one short leap from the problems that we've solved, even though they sound hard, to other types of business problems. And so, then in terms of
the research agenda though, I think there is actually a really
exciting research agenda, and I think for the next generation of students, now that we've sort of ingested the AI that Kai-Fu talked about and it's become more incremental, I think we can go back to some of the techniques that we used in the more small-data world. Because inside every AI is an agent
that's trying to make decisions. They're doing counterfactual reasoning. How should I climb the wall? What kind of recommendation should I make? These are decisions, and so you have to do counterfactual reasoning. You have to think, what would happen if I made this decision? What would happen if I made that decision? And inside the agent is sort of a statistician trying to use a small amount of data to figure out what decision to make. That is a hard problem. And it's an especially hard problem
in a data poor environment. Well, maybe you're in a situation
you haven't been in before, and you need to still reason
about what comes next. That's really more human-like reasoning, and one way to make it better is to put some structure on the problem. Don't just learn from the data in a completely unstructured way; actually use what you know about the environment, use the domain knowledge about the system that you're in. That can make you much more efficient with the limited data that you have. But that's not been the focus of
the last ten years of machine learning. And so I think it's actually really exciting, going forward, to think about marrying those, and that's what I'm doing in my lab, but
just on a small number of problems. There's really a much
broader set of questions. And then the last thing that I
focused on in my tutorial was how thinking about things this way, putting a little bit more of a structured way of thinking on things, can actually solve a lot of the problems that AI has had in the implementation phase. So when you go out to implement, if you wanna get humans to listen to you, you have to be interpretable. People have to believe you. They have to trust you. You have to actually be able to tell the human that you're making a recommendation to: do I know the answer to this? Maybe I'm uncertain over here. Maybe I'm more certain over here. Maybe my algorithm might
be biased over here, but over here I've got plenty of data and
I think you should trust it. We need to deal with these issues of stability and trustworthiness, and that also really requires a clear conceptual framework, layering that conceptual framework on top of all the algorithmic innovation that we have. So I'm really excited about the next ten years of basic AI research. And at Stanford we're really putting a lot of emphasis on human-centered AI, something that Fei-Fei is helping lead us on, and I'm really excited about what we're gonna be able to do in that area to make AI applicable in more domains. >> Yeah, so
I feel like in the past ten years, speaking of these great innovations and breakthroughs in AI, there have kind of been two extreme responses to the breakthroughs. On one hand, you have optimists like Ray who say AI will create a paradise for humans. On the other hand, you have doomsayers, you know, I'm not naming names, who say that AI will create killer robots that will kill us all. And so, if we see that as a spectrum,
which, of course, it may not be, but if it is a spectrum, I'm just curious to hear where all of you are on that spectrum. Do you believe that AI is paradise,
or is AI going to kill us all? [LAUGH] You are shaking your head there. >> It's going to kill us all. >> [LAUGH] who? Okay.
>> I am very frustrated that that is so much of the discussion out there. I mean, first, getting back to what we were saying earlier, we are still quite far from that. I know Ray has a different view on that, but most AI researchers that I've spoken to would disagree with him in terms of how close we are. But there's a more fundamental point, and it's really a key point that Kai-Fu makes in his book, which you guys should look at, and he made it in his talk. And I want to hit on it again, which is that technology is a tool. So the right question isn't
what is AI going to do to us? Is it going to give us nirvana? Is it going to solve all
of our problems for us? Is it going to kill us all? No. Both of those questions, though they sound like opposite ends of the spectrum, are really doing the same thing, which is treating AI as if it's the one that makes the decisions. The reality is that technology is a tool; a hammer is a tool, AI is a tool. And those tools can be used in lots of different ways. They can be used to do constructive things, they can be used to do destructive things. And the really important reason that we
have to understand that is that too many people, I think, are being passive about what's going on. And we have to recognize that we are the agents here. We can make these decisions, and we have to decide what kind of society we want to live in. Do we want to have one with the kind of compassion that Kai-Fu was calling for? What does that mean in terms of the policies we put in place? What does it mean in terms of us
as workers, as CEOs, as citizens? We have to take agency and use these tools in different ways, and then the question isn't which of those is going to happen, but which one do we want to have happen, and what steps are we going to take to achieve that? >> Well, I just want to then bring that to our topic at hand today, which is, you know, AI and the future of work. It is true that AI is definitely a tool, but there are projections out there. In fact, Kai-Fu writes in his book that perhaps 40 to 50% of jobs will be replaced in the next 20 to 30 years, and that is a huge number of people who will be out of jobs and out of work, and that's going to have a great impact on society. I'm just wondering if you guys can
talk a little bit about what kind of impact this will have on the workforce, and how we can prepare for that. >> Kai-Fu, can you elaborate a little? >> Sure, yeah, there have been different types of
estimates from a lot of different studies. My numbers are more aggressive; McKinsey, the OECD, and others have come out with different numbers. Generally, in terms of the numbers, people believe that we should look at how many tasks can be done by AI, not the jobs. However, when you have a job where half of the tasks can be done, there are going to be 50% of the people who will probably not be working, right? So on the one hand, I think it's a scarily large number, but also if we look at the agriculture-to-manufacturing transition, those numbers were even larger than that. So the issue really isn't
how many jobs are displaced. Many of the jobs you'll take when you graduate from here are jobs that didn't exist five or ten years ago. So having new jobs appear and having jobs go away has been the de facto state, the status quo; it always happens. I think the issue really is that when we went from agriculture to manufacturing, the people from the farms were able to go to the factories because of the relatively unskilled, low training required to be on the assembly line, for example. The big problem now is, I think, AI is
displacing mostly the routine work. And the people who are displaced really
need to find their place, a new place. And it's not just a loss-of-income issue but a loss of meaning, because people attach their meaning to the work that they do. >> Yeah. >> And yet, when most or all routine jobs or tasks are gone, people really have to be trained to do the more complex and non-routine tasks. So I think the big issue
is one about training. And Amazon has shown one way of training. I shared the positive example. But that's because they're almost
a trillion-dollar company. They can afford it. Walgreens has the same kind of workforce that will be displaced; I don't think they can afford the training program. So I think we really have to think about how corporations and governmental programs can be applied. For example, in the US Congress there are a number of bills being discussed, such as giving human-resource training credits back to companies. So I think it's going to be moves like that, not universal basic income, that start to move the dial forward. And the last comment I'll make is that,
China and the US are quite different. I think China has a very decisive, execution-oriented form of government. And in the last phase of the transition
from agriculture to manufacturing, the government played a very
strong role of saying, okay, we're shutting these down,
you guys move over there. You're gonna become that in your new job. So it's a very, very top-down organization, which in the case of a crisis may be more effective. So I think actually the US probably should be a little bit more concerned, because the government is not as able to move people from job to job. >> Yeah. >> And also, with unemployment numbers currently at a historic low, I worry whether the US government is going to do anything in this area. >> So Susan, do you want to elaborate, or
do you have a different opinion on this? >> Well, so, personally I agree with everything that Kai-Fu said, and I think, actually, I'm also concerned about US policy in this instance. It's not like our government has gotten more functional in the last few years, and yet we may need to be preparing for something really important. And at the moment, we don't actually know how: if you wrote me a big check, I wouldn't know how to spend it in terms of making workers better off. We actually just don't have that muscle, that capability. We don't know how to retrain people. We don't know what to retrain them for. So I'm concerned that the time will come when we will need that and we're not gonna be ready for it. And actually that's a big emphasis that
I'm moving into in my own research: to try to work with other scholars and students here at Stanford to prototype digital and AI-driven worker training, and also worker recommendations, to try to help people understand and make better decisions. So I completely agree it's an important problem, and the displacement is coming relatively soon. I advise a lot of companies; banks all over the world have call
centers in poor parts of their countries. And those call centers really will be gone; they're really not economical today. They're gonna be shutting down over the next few years, and there are gonna be these concentrated hits to regions that already lost many factories. They already lost a bunch of stuff, they are on their last legs, and then the call centers will go too, and I think that's gonna be problematic. And we've had some interesting research with MIT coauthors kind of showing how, when you get these concentrated hits, from robots in Detroit for example, those areas really can spiral down. On the other hand, now,
I'm an economist, I have two hands. On the other hand, there's also
some reasons to be less concerned. So Hal Varian is an eminent economist, and he's been giving a really nice talk about this recently. His talk is called Bots versus Tots. And so he takes the most aggressive
numbers about worker displacement. But then he looks at demographic trends. So it's a little bit hard to predict the future in a lot of ways, but demographics are actually pretty easy to predict; like, we kind of know how many people will be 50 years old 30 years from now, okay? So that's something that's easier to predict. And in fact, developed countries with falling birth rates, and also China with its very harmful one-child policy, end up with this aging workforce. What I really liked from Kai-Fu's
discussion is how old people actually need a lot of care. And we're gonna have all these people consuming but not working. And so, without big changes in immigration policy, we're actually gonna have worker shortages over the next 30 years. And Hal argues, not my research, but Hal argues, that those effects are bigger than the most aggressive job-loss effects from AI. So if you put those together, though,
what's right and what's wrong, it's hard to know. But I think that pushes you in a couple of directions: not so much universal basic income, but instead, why don't we be thinking about how we're gonna take care of the elderly, which could be augmented by AI, with both physical robots as well as monitoring, decision assistance, and so on. You could probably have one human per elderly person. And if anybody here has cared for an aging relative, it's actually pretty labor intensive. And so, you actually could employ
a lot of people in these service jobs. And so I think we should be looking at
sort of labor augmenting technology. And if the government's gonna do
something, they can train people to work on those things, and
also they can subsidize the services. But in the end, there's plenty of work. We could have one worker per preschool child, and one worker per older sick person, and that kind of employs everybody. So I'm not worried about not enough jobs
but I am worried about how we got there. >> You look like you're
ready to say something. >> Yeah, I just want to very much underscore what
Susan just said that Kai-Fu is right. Many jobs are going to be eliminated,
maybe 30, 40, 50%. But I don't think that there's a shortage
of work that needs to be done in our society that only humans can
do because machines can't do the whole spectrum of things
even within particular task. There are a lot of things that require
humans that I think we value a lot: taking care of the elderly, taking care of kids, cleaning the environment, a lot of creative work, arts, entertainment. These inherently require humans, at least with existing technologies, and no doubt that will be true for some time. So our challenge, I don't think, is so much the end of work; our challenge is this transition that Kai-Fu alluded to, and excuse me for underscoring it: how do we get people to shift? I'm a little bit more optimistic than Susan in that we do know some of the things that could be done. I think that if you take Kai-Fu's
points about creativity and compassion as being important things,
I think we can do more to have education that supports creativity,
that supports interpersonal skills. And in fact, if you think about it, right now a lot of schools, 19th and 20th century schools, were designed to stamp those out. Don't be doodling, don't play. But if you put a pile of blocks in
front of a kid, the first thing they're gonna do is they're going to
want to start building things. So inherently,
I think we like being creative and if our schools didn't stop us,
we'd probably be even more creative. And we like interacting with other people,
we like playing, we like teamwork, so we could do more in our
education to support that. And those are the kinds of skills
that are gonna be in more demand. The other side of it is entrepreneurship. On one hand, we wanna have the skills
in the workforce, in the other hand, we need people who can figure out how can
we combine those skills with technology to solve existing problems. We have a whole class of people that
that's their job is to make those new combinations. We call them entrepreneurs. They're not usually professors or
policymakers. And the surprising statistics that I saw
were that, not in Silicon Valley, but in the United States as a whole, entrepreneurship is down: we have less creative destruction, less invention, less new business formation, fewer businesses that are less than five years old, and less turmoil in the market as companies reconfigure than we did 10, 20, or 30 years ago. So we need to do more also to make our economy in the United States more dynamic. I think that I'm also more optimistic
than Kai-Fu that an entrepreneurial decentralized economy can respond to
these kinds of transitions if we make it a little bit easier to do it. But ultimately, the challenge again is not that lots of people simply won't have jobs; it's that those people won't be transitioning to the new kinds of jobs that are needed. And this is some work I did with Tom Mitchell: when a technology automates part of a job, like a radiologist's, there are 26 different tasks that radiologists do. It could automate some of those, but there are other ones that actually become more important. So the re-engineering will be the big challenge for us going forward, both at the task and occupation level, at the industry and firm level, and at the societal level. >> One comment on another dimension
is countries other than the US and China. >> I think there are going to be potentially more major problems there, especially for countries that have been hoping to use the China model or the India model to climb out of poverty, right? Because China did use its lower-cost labor workforce to take on outsourced manufacturing, and India uses its English-speaking population to take care of call centers and IT outsourcing, etc. But these jobs are the ones
that are going to be displaced. So China and the US can absorb that because there are all these value-creation engines, entrepreneurs, and big companies. What about the poorer countries that have been hoping to use the China or India model? On top of that, they don't have the revenue drivers, the big AI companies, tech companies. And on top of that, the workforce is relatively less trained. So I think that presents a bigger
issue for a lot of other countries. >> While you are talking about the very poor countries, I also have concerns about Europe, for different reasons. And that really gets back to one of your boxes that you didn't have time to talk about in your talk, which is sort of market power. And it's a completely good thing for China, in the sense that Google isn't the only search engine in the whole world, and likewise for Bing, where I used to work. And I think it's really important for the world that there's more than one tech company doing each particular thing. But a lot of these things do end up being very concentrated. And so- >> Especially for AI, where- >> Exactly [CROSSTALK] >> It eventually becomes monopolies. How do we deal with that? >> Exactly, and I spent years trying
to help Microsoft's search engine compete with Google's search engine, so I spent a lot of time thinking about
how important data is, and how hard it is to be a number two company to
a number one company that has more data. And that the answer is it's hard. And so, that is something that
we need to really consider. And we haven't seen as many European tech
companies, which means that the whole community and capability hasn't built up
as much there to participate in this. And so, we already have trouble
redistributing within a country, we're really bad at
redistributing across countries. And so, these countries are gonna
have challenges sort of keeping up. And especially given that at
least maybe in the Western world, we're gonna have a lot of concentration. And so, I think that gets back
to the inequality question. And another concern that I have is
that we have to think about how the benefits of productivity
get passed along to consumers. So just roughly,
if you talk to a macroeconomist, the old-fashioned, simplistic 101 models would say: how can AI be bad? If you can make more outputs with less
inputs, you have to be better off. And if we're all identical, and we all own an equal share in the one
factory that produces our outputs, then making it more productive just has
to be good, sort of tautologically. How could we be worried about being able
to make more stuff with less inputs? But of course, when you have
a real economy where we're not all owning shares equally in everything,
then this distribution comes in. And then,
there's a question of market power. So we could replace all of
our workers with robots. That in principle could lower
the marginal cost of output, which lowers the marginal cost of living,
right? Cuz if the cost goes down
the prices can go down, and then it's actually cheaper to live. So you might have lower wages, but
also lower cost of products, and everybody is fine. But if there's a lot of market power, then the company might actually
keep a lot of that as profits. They can lower their costs, and of course some people share in that,
but maybe the workers don't. And so, I think we're gonna need to be
thinking a lot about making sure that all of these productivity benefits
actually get passed on to consumers. So that's one component of cost of living. Health is another one. Transportation and
housing is another really big one. And so, actually,
getting back to what can we do, we need to think about how our policy
towards things like transportation in the advent of autonomous vehicles or
changes in transportation actually affect the ability of people
to get service jobs in cities. Here in the Bay Area, you can't
hire a service worker very easily. People might have to commute
two hours to live cheaply. That's a totally solvable problem, and especially with improvements in
technology we should be able to have a much wider set of land where
people could live and work in the cities. But making that actually work for
people, that is, making these changes lead to great public transportation and
low cost transportation for the poor rather than just a bunch of rich
people riding around in their autonomous vehicles and clogging up the roads all
the time watching videos in their cars and sitting on their exercise bikes. There are different visions of the future,
right? I mean, if you make it more
pleasant to be in a car, people will be in cars more
which can clog up the roads. On the other hand,
if you put in congestion pricing and make it more of a public transportation
thing, we could actually allow the poor to live much more cheaply and
work in service jobs in cities. So we're really gonna have to
rethink our entire public policy around urban economics and
the economics of cities. And I think the good news is that autonomous vehicles gives
us a chance to rethink it. And actually, even the advent of Uber. People are getting grumpy about all
the traffic caused by Uber in New York. And now,
they're talking about congestion pricing. Great. Congestion pricing is great. It makes people carpool, and it can actually make everybody better off,
even the poor. And so,
we should take this opportunity for the policy debate to take it
in a constructive direction. Whereas with zoning, there's nothing I can do to make my friends in Palo Alto wanna build high-rises. That's a really intractable
political problem. But maybe improving transportation
is something we can all get around. >> So it sounds like, Susan, you're calling for more regulation and policy, whereas Kai-Fu, you're saying that there are things that private companies can do, such as what Amazon's doing. Erik, you're kinda in the middle there, with this idea that entrepreneurs and
start-ups can actually create more jobs. >> I think it's important to recognize that this is not something that any one part of society is going to solve. It's across the board. And I really think it would be nice if you had a government that really understood these issues well and was able to take on its role, and CEOs who saw it as their responsibility, and workers and citizens who did too. But I think we need to work
on all different fronts. We've introduced something called the Inclusive Innovation Challenge, recognizing companies and organizations that are using technology to create more shared prosperity. I have talked to people in the administration about the policies there. I'm happy that a lot of CEOs see that as their responsibility, not just shareholder wealth maximization. But there are so many changes in so many different dimensions of society that we can't delegate this to any one part. >> Absolutely, I thought it was
really interesting in the book wrote how Larry Fink,
the BlackRock calendar, actually wrote a letter this year right
before the World Economic Forum saying that he believes society is increasingly
turning to a private sector. And asking companies to respond to broader
societal challenges which is something we've been talking about. And society is demanding that
companies, both public and private, serve a social purpose. So as a VC, as someone who actually
plays a big role in funding companies, and perhaps even creating
incentive structures for startups, how do you think about
incentivizing more companies or asking more companies to play
a bigger role in society? >> Mm-hm,
I think there are roles that we can play. The main role we can play is to find
those great entrepreneurs who have a bigger heart and
see purpose beyond just making money. That's probably the tangible thing
that we can do, which is why we don't invest in many areas and
we didn't invest in certain entrepreneurs. In terms of what each of us can do,
a VC can choose to. Law firms have pro bono work,
VCs could have pro bono too, right? The pro bono could be going
into social investing. Investing in a company that may
not make a lot of money but might be providing the one
caretaker per elderly. Or some kind of home schooling or
any one of these issues. So I think each of us can
really do something, and that letter made a big impact at the World Economic Forum because people
were surprised and it made sense. So hopefully that will happen naturally
with each person doing whatever he or she can. >> Yeah, Eric, do you wanna add to that? >> One thing I think economists
do not appreciate enough is that people are not
motivated just by money, even the founders and CEOs of some of these big
companies. I talked to Reid Hoffman, with his billions of dollars,
why do they all work so hard? And he said, he thought, from a lot of
introspection and talking to his colleagues, that what they really wanted wasn't
necessarily the next billion dollars, it was the recognition, and status, and
relative position that they got from that. And that means that there's leverage
in doing what Kai-Fu just said, to say, look, we recognize people for
doing the right thing for society. We wanna give you status for
doing that, not just a Forbes 400 list of the richest people, but
recognition for people who've done good things. And let's face it, all through history,
people have often been recognized for those things. There was this diversion
where some economists really highlighted just money as the goal. And I think a lot of CEOs went with that,
greed is good, the extreme version of it. I think that got us a little off track,
and got too many people focused on
that narrow goal, but corporations are ways of getting a bunch of people
to work together for a common goal. And that common goal can be
to help disabled people or other things, as well as
dollar maximization. And the more all of us in this panel and
all of you out there recognize that and talk it up, we're actually gonna
make a difference in what kinds of motivations people have. >> Absolutely. >> Yeah, and I think, actually, universities can play a really
central role in that. And I certainly find that students really
want to work on impactful projects. And so young AI students, they wanna learn
AI but if I give them a project where they can use AI for a socially impactful goal,
they're all the more excited about it. And so one of the things that I've been
trying to get started here at Stanford, I have this little initiative
on shared prosperity and innovation that's funded by Schmidt Futures. And there, what we're trying to do is to
combine technological innovations for social good with the design
of market-based incentives, where we could actually go
to a big philanthropist and say, hey, we've prototyped this product, we think that this is something that
would be good for social impact. We also think that we might be able to
have a pot of money out there to subsidize its adoption or to reward the social
entrepreneurs for bringing that to market. And so we would be thinking both
about designing the products and doing the research for
the products at an early stage. But also trying to make sure that
the financial incentives are there to get the products into use and so that
the entrepreneurs are really rewarded for the social impact. Some of my colleagues did that for the
pneumococcal vaccine, in something called the Advanced Market Commitment, where
they raised billions of dollars in a pot that was dedicated towards
subsidizing pneumococcal vaccines. And so
we could also use ideas like that to subsidize technological
solutions to social problems. And AI is a great opportunity for
that because it's a lot of fixed costs, but not a lot of marginal
costs to deliver. And so it's really a great place for
a philanthropic impact to come in. So I'm actually pretty
optimistic about our ability to channel the philanthropists as well
as the leading universities and research communities to try to
tackle some of these problems. >> Absolutely, so I can see my
hourglass is kinda running out, so we're actually gonna open for
audience questions. You see that there are microphones there,
and so please line up. And while people are lining up, I just have kinda one last
question I wanted to ask Kai-Fu. Which is that I found it
incredibly poignant in your book that you talk about your
battle with cancer. And how that battle led you to realize
there are more things in life than just optimizing for fame, for wealth, or
even making impact in this world. Since we have a lot of people here who
are young, who are starting their careers, I was wondering if you can
give some advice to us on what should we think about as we
try to build actually a meaningful life rather than just a life where
we optimize for certain outcomes. >> Yeah, so my big awakening moment
was actually with a Buddhist monk. When I was ill, I went to the mountains
and met this very wise monk. His name is Master Hsing Yun, in Taiwan. And basically he asked me,
what's the purpose of your life? And I said, well,
I'm here to make the biggest difference. And then I want to maximize my impact, and I measure everything that way,
almost like an AI algorithm. [LAUGH] I didn't say that to him. >> We've got economists for
that, my goodness. [LAUGH]
>> No, I would actually calculate everything. And there are deep details in my book
of all the crazy things I used to do. But then he said, you know a very
big weakness of humans is that they cannot resist the temptation of fame and
vanity. And when you say you want to make
the biggest impact, are you sure? You want to make the world better,
is that your top priority? Or is it that you just want
to make yourself more famous? And the two I found
actually were inseparable. I was facing cancer and
I had to give an honest answer. And I said, no,
I think the two are inseparable. Sometimes I used making a big impact
to mask my own vanity and desire. And he said, well,
here's what you should do. Measure everything you do by this: if
everyone in the world did it, would the world be a better place? And don't think about yourself,
separate yourself from the process. In the things that you do,
is this good for the world? Disregard yourself, and think more
about the people who have loved you and helped you, and have you given back
at least what they have done for you? And that's the first step.
unconditionally love and help other people, that is the day
that you have really grown up. >> Wow.
>> So that was a big awakening moment. Also I think facing cancer made
me realize that all the work and accomplishments that I had achieved
really didn't mean anything. When I found out I had
fourth stage lymphoma, I didn't want to work another day. It was the last thing I wanted to do. I wanted to spend time with my family. I wanted to do things that I like. Of course, I wanted to get better. And I read during my
illness this book by an end-of-life care nurse, her name was Bronnie Ware.
her name was Bronnie Ware. And she cared for
2,000 people before they passed away, and she said there are five
regrets of the dying. And number one is that they didn't spend
enough time with the people that they love, loving them back. Number two was they didn't
follow their passion and really do what in their heart
they knew they wanted to do. And number three was that they listened
too much to the environment, to what society, or their friends and
family, thought they should be, rather than what they themselves
knew they wanted to be. And then the fourth one
was they worked too hard. So actually,
that should be guidance to all of us, that among people facing death,
no one regretted not working harder. I think we should all
rethink the priorities. And now that I'm better,
I'm in remission, I still work hard. But I also prioritize
things that matter higher. So it's not that I don't
work 60 hours a week. I used to probably work 80, now I work 60. I still work pretty hard.
But I really, when my family needs something,
I would put that ahead of myself. And it's not for
them to ask me what they want. It is for me to know what they
want before they need to ask me. And when my daughters have vacation, I change my schedule to match their
vacation, not the other way around. And I think that's given me that much
better outlook and made me a lot happier. >> Absolutely, thank you. Yeah, do you wanna start
over there with Andrew? >> So one of the things that was discussed
was the big data gap between China and the US, and how these private companies
are using data to create better services and technologies. And as we talked about,
how big an impact good policy can make to solve these issues of job displacement,
do you think there's also a data gap to close in terms of
what the government has access to, in order to more quickly
diagnose local conditions and figure out better targeted
policies in the US? >> Probably more your area? >> I think it's for you, no? Okay.
>> It's for me? [LAUGH] Well, the governments
obviously have a lot of data. But I haven't seen a lot
of wisdom in governments on how they can use or share the data. And a lot of the data that's
used by private companies to deliver good products
are data in a closed loop. So just having the census
data itself doesn't really necessarily help you
that much in building AI. So I guess-
>> Yeah, that's actually a theme that I push
a lot in my discussions as well. And I thought a lot about it in search. It's like just having Google's data, or giving Google's data to someone
else isn't what you need. It's the closed loop, that Google is
interacting and experimenting actively. >> But I've heard you,
when you were talking to Hal. You were disappointed that even if they
had better engineers, wouldn't be able to get to the same outcomes,
because they didn't have as much data. >> As much, well, but
I just wanna be careful. So the data is important, but it's actually the data that's
in an operational system. So just historical data isn't as useful as
the current live data, and also being able to run randomized experiments-
>> Right. >> To learn what works and what doesn't. Governments, I mean cities actually,
are being pretty innovative in the US, and probably, I think all over the world. Singapore, places like that, are using data to be more efficient
with government services. And I actually think in terms of using
AI to just make people's lives better on a daily basis, that may end up
popping up from the cities. >> Okay. >> Can I ask a follow-up question for
Petra? >> Yeah.
>> I mean, you talked about China, these two Internets,
I've heard Eric Schmidt was quoted or perhaps misquoted as saying he thought
there were going to be two Internets. He said actually he didn't
quite say it that way. But there is the great
firewall of China and how difficult it is to
move data between the two. So first question is,
do you see that evolving that way? And secondly, a related one,
there's 1.3 billion people in China. There's a lot more
people outside of China. So arguably, the companies that serve the part of the world outside of
China have more data, not less data. >> Well actually, until they have
a closed loop, they don't have more data. But yes,
they have the potential to have more data. So first,
the China internationalization question, actually I think China has over
the last ten years tried to go global, and not very successfully. And the reason is the US has done such a good
job with demographics like the US's. So for a search engine or social network, it's very difficult because American
companies have already globalized. But very interestingly, almost all
the companies I list on that new column are not targeting mainstream US. [INAUDIBLE] The video social app
is targeting the millennials, and probably more in developing countries. So China actually has
multiple demographics. So there are many people like us,
who use the Internet the way we use it, even though it's a parallel universe. But there are also new users emerging
in small towns and villages. And the young people and the older
people have very different habits. And very interestingly, those are where
the new innovations are happening. And when those are successful, they're being actually
internationalized fairly successfully. So TikTok actually is a global phenomenon. So my prediction is that the top American giants will continue
to dominate developed countries. And the Chinese software will make very
good progress in developing countries, in particular southeast Asia,
Africa, Middle East. And possibly South America and
India, I'm not sure. But American multinationals
don't really focus on these regions because there
the output is very low. They don't make too much money. So I think it's giving
China an opportunity. >> The revenue per user, yeah. >> It's actually matching
the Belt and Road Initiative,
but because that's where the Chinese software
companies seem to be having some success. >> So in the interest of time,
I'm gonna ask the next two people to ask their
questions, and then we'll answer. So please go ahead. >> Great, good evening. So you talked about the parallel
universe of AI technologies, and then also this idea around trust. And how society can trust
the AI applications. I'm curious, for
verticals such as autonomous weapons that probably implicitly
require these parallel universes to communicate to some degree, how do you
think institutions and governments can work together in the landscape of
understanding and trust? >> Okay.
So let's have you ask the question and then we'll answer. >> Okay, so I'm an AI professor
who sort of pioneered things like Google Translate, and
certainly agree with Eric that we're far, far away from, like, AGI,
strong AI type stuff, and that we need to focus more on the small data conditions
that Susan was talking about. That said, given our work in the areas of computational creativity, applying it to
music as well as things like language, I'm not sure whether the conclusions
that we are looking at reflect a technological and
scientific reality or rather a comforting mythology, that somehow creativity and
compassion are the realm of the human and that we can't model them just as well
as we can model the other stuff. And perhaps just to suggest an
alternate framing for the things in your upper-left quadrant, maybe framing it in terms of skills
is not the right way to look at it. Maybe the-
>> Do you have a question for the panel? >> Yeah, the question is this: maybe
the right framing is more about things like companionship,
which is the human need for companionship. Have you considered that? So companionship and weapons. >> [LAUGH]
>> A match made in heaven. >> Sure, yeah,
well I'll take the easier one first. >> [LAUGH]
>> Well, the axes were really multidimensional in each label.
As Susan said earlier, I didn't intend to say
that economists have no compassion. >> [LAUGH]
>> That axis really meant human touch: companionship, communication skills,
empathy, compassion, things that we feel as human-to-human are
required, where a robot is not acceptable. So I think that's-
>> Yeah, but that's what I'm asking, maybe the robot is able to do that and
it's not a question of whether AI can acquire the skills to do that,
but rather just a need for humans to do that with other humans,
rather than the machine. >> I see, I see. No, I would agree. I think clearly AI can recognize emotion. I think some recent research
shows it can do so as well as people, right? And AI can certainly fake emotion,
currently not very well but over time it will get better. But it is people who ultimately reject
that kind of companionship, yeah, I agree. On the weapons issue, you might
be interested in reading an article by Dr. Kissinger
in The Atlantic. While I don't think he
fully understands AI, I think he sees the dangers,
because I think many countries are looking
at how AI can be a part of, let's say, the nuclear weapon
triggering system, detecting enemy action, and
even creating a response. And he's quite concerned
that this will add more challenges, because AI may
detect certain issues and recommend certain actions,
yet explainable AI doesn't do it in a way that humans can quite
get, so that's the point of his paper. I can see that as a challenge. I
guess my view is when AI gets that good, in most tasks, AI would get better than people even
if it doesn't explain it that well. So hopefully, it will still prevent
disaster scenarios on an average basis. In terms of things like autonomous
weapons, I haven't studied it that much, but we obviously don't invest
in that in any country. There are agreements being
discussed among various countries about banning autonomous weapons. And I would like to think most
countries are going to discuss and negotiate reasonably on such
big existential issues. So just like we managed to avoid nuclear
war, hopefully that can be done as well. What really worries me though
is the non-state actors, because the barrier to
AI is not that high. And actually, CRISPR is another technology where
the barrier is not that high. So I would worry a lot more about
non-state actors, terrorists, and so on who use these technologies for harm. Hopefully countries through
diplomacy can work things out. >> Eric did you have
something to add there? >> Just to build on what was just said. There are some folks who
are looking at this very carefully. I point you to Max Tegmark and the Future
of Life Institute, and there were, I think, 6 or 7,000 AI researchers who signed an
open letter about autonomous weapons. There's a debate going on about the non-
state actors. I mean, there's a video. How many people have seen the video
Slaughterbots with the drones? So if you haven't, it's like a six minute
video, and it's very frightening because it shows how some technologies that are pretty
close to what we have today can be combined to make drones with face
recognition and simple weapons. And you can make them in
huge numbers very cheaply. That starts becoming something
one can easily be concerned about. So it's something that I'm glad there's
some people who are thinking about it and looking at it like I'm not sure I know
what the right answer is, how to do it, except that we need to
think really hard about it. >> Right, so
let's have another question there. This is a question for you. And I'm gonna try and pose this in a way
that will keep you out of hot water. So here's Google
withdrawing from China because they didn't want to be a part
of the surveillance state, and then there's Beijing with the social
credit score, and so on and so forth. And here's my out for
you to answer this in a safe way that will ensure that you will be able to come back
to Stanford and give us another lecture. Let's say you've got a contract offer from Kim Jong-un in Pyongyang to develop a surveillance system for
a billion dollars. What goes through your head? Would you take it? And you understand what I'm driving at. >> [LAUGH] All right. >> That's a pretty easy one [LAUGH]
I would of course not take it, we don't do contract work. And we actually, for- >> [LAUGH]
>> All right, great. [LAUGH]
>> And in our investment, we stay away from doing anything related to weapons and also from intelligence
agencies of any country. We just want to build technologies
that people want to use. >> All right. Let's go with probably the last question,
according to our hourglass, maybe something more uplifting. [LAUGH]
>> Just a quick one. >> Hopefully so. Hi, my name is Foster, I'm here actually
with a question for Susan, related to kind of the economic side of things here. It's been really interesting recently
seeing this developed understanding of causal inference, about these cause-and-effect relationships that perhaps
go beyond our notions of good guessing,
things that we know exist in our universe and that we could very deterministically
put trust in. And I guess I'm really interested in your opinion, going back to the
graph we've been talking about, empathy and compassion versus the idea of creativity,
strategy, and other forms of work. Where do you see our developed
understanding and modeling of cause-and-effect relationships impacting work that
perhaps depends on these very strong, perhaps very wide understandings
of cause and effect? You know, things like science, or
other fields that perhaps demand people to have a very, very intuitive understanding
of cause-and-effect relationships, and yet could perhaps be coded
into future algorithms and approaches. >> Yes, we think that where
the technology can be good, and this is really consistent with the themes
of Kai-Fu's book, is that when you're making lots of incremental decisions in
a world that's fairly stable, increasing, decreasing prices, different messages,
different types of training, in a very controlled environment, then actually we
can get rid of a lot of the guesswork. We don't have to have creative people
thinking about the best headlines, we can sort of test them and so on. But I think what's hard is to
try to think about worlds that are very different from the world that we're in. Now in economic empirical work,
we actually try to do that sometimes. Like a big application is like
what happens if two firms merge? And so we do those counterfactual
predictions, and they're admitted as evidence in court, and decisions are made
about mergers as a result of that. But those are pretty hard and
they rely on a lot of assumptions. And so I guess what I would say is that
what I've seen in practice is still that the human is still very important
in the loop, in terms of defining what the model is and
what the unintended consequences are. Just as another example, when I
worked at the search engine, we did a lot of incremental testing, but
the systems that we had to evaluate changes in the search engine were very bad
at predicting what would happen if there was something that would have
an equilibrium effect, or that would have sort of
a longer-run effect. You know, if you did
something that made advertisers spend more, it might look good
in a one-week test, but then they might all exhaust their budgets,
and then the prices would fall, and the auctions would thin out, and,
you know, other stuff would happen. And so we actually had many, many examples where we lost a lot of money. Often I was warning about these things,
but it was hard to kind of get the systems to really be able to respond to
kind of the bigger picture effects, the equilibrium effects,
the second-order changes. And so
I guess I would say that while you can imagine an AI that could learn
the whole model of the universe, in fact humans are just a lot better at
figuring out what the model should be and then letting the AI operate within
a much more constrained environment. And so actually, the role of the human
in managing the software, in figuring out what
the constraints should be, in figuring out how short-run
measures leave out long-run effects: all of these types of things
are great human skills to have as we go forward, because it's hard
to get the human out of the loop in building a system that
tries to make decisions. >> Well, unfortunately it looks
like our time has run out, I just wanna take this moment to
thank everyone on this panel today. It's been a very exciting and interesting AI Salon and
I also want to thank everyone here. AI Salon is really a place where
we come together to discuss big questions, and so thank you guys all
for contributing your questions and for contributing to this discussion. I just want to, again, give a big round
of applause to our panelists here today. >> [APPLAUSE]
>> Thank you so much, everyone.