- Good afternoon, and welcome to the SPR Futurist Series on "21 Lessons for the 21st Century" in partnership with the IMF Library. I am Martin Mühleisen. I'm the director of the Strategy Policy and Review department here at the fund and our department and the library are organizing this event today. As you know, the Futurist
Series brings experts from other walks of
scientific and academic life to the fund, to spark new ideas and foster diversity of thought. We may hear about certain
things for the first time. We may encounter views that
clash with our economic minds, but I think we all agree
that there is benefit in considering other perspectives. This is also a mission of the IMF Library which fosters discovery,
creation and knowledge sharing, to advance the work of the fund. With this we are really delighted to have Professor Yuval
Noah Harari with us today. If you visited a book
store in recent years, you are sure to have
encountered one of his two previous books on the history and future of human civilization. Published in 2014, "Sapiens:
A Brief History of Humankind" has become an international bestseller and is published in nearly
40 languages worldwide. In 2016, Professor Harari published "Homo Deus: A Brief History of Tomorrow" which met with a similar reception from an enthralled global audience and his latest book which
carries the same title as our seminar today has
just been launched yesterday, "21 Lessons for the 21st Century". Professor Harari has also
written for global newspapers such as The FT, The Times,
The Wall Street Journal, The Guardian and is
lecturing around the world on topics explored in
his books and articles and for the sake of academic completeness, he received his PhD from
the University of Oxford in 2002 and is currently a
lecturer in the Department of History at the Hebrew
University of Jerusalem. Some of the themes that have
emerged in past Futurist Series are the need
for life-long learning, skills training and the
need to focus on what is uniquely human, including
by making growth inclusive, and providing people with
equal opportunities. These themes also appear
in Professor Harari's work with a further examination
of what it means to be Homo Sapiens in today's world. We are looking forward
to a very interesting discussion with the managing director and with that, I turn it over
to you, managing director. - Thank you so much Martin
and we are superbly lucky today, Professor, because
this morning we had the luxury of the Camdessus lecture, which
was presented this morning by Elvira Nabiullina, who is
the Central Bank governor of Russia and this afternoon, we have you! So, it's really a fantastic day. We talked about monetary
policy this morning and we talk about all sorts
of things this afternoon, and that's also the beauty
of those lectures sponsored by the Strategy and Policy
Review department of the IMF. It gives everybody a chance to be inspired by broader thinkers than just us. So thank you very much
for coming all the way from Israel, via wherever
you launched yesterday, but - In New York. - New York, so thank you so much for choosing us as the day after moment. So, you know, when
Martin and his colleagues thought of inviting you, it wasn't without creating a bit of a buzz, because in various writings of yours, we seem to understand that the future is not necessarily for economists. The future is more likely to be for poets, for philosophers, for ethical experts, but economists, maybe not, but maybe yes. So, you'll have to tell us about that. - I'll just say that the
distance between economists and poets is much smaller
than people tend to assume. (cheering and laughing) I often say that the best
storytellers in the world are the economists. Because they are the ones
who tell the only stories that everybody believes. (laughing) Almost everybody. - Weather forecasters among them, right? That's interesting. We'll
come back to the story issue, because you make the point about stories and how useful, not useful and what kind of stories we need around. But, let me just mention,
I don't think Martin has indicated that, but your book, "21 Lessons for the 21st Century", is available to all of
those who participate in the lecture. Is that so? Okay, so please...
- Supply and demand. - Those who leave the room
before the lecture is over, or the conversation is over, will not get a book. (laughter) And then when it's over, you have to rush! And a few will probably have a signed dedication from you, which would be a real treat. I've tried, and we continue to try, on a regular basis, to
sort of, assess what the future will be. And we call that the
long-term uncertainty, sort of, scenario planning. Now you took that exercise very far and some of your critics
and your reviewers would call you an
alarmist, or a pessimist. You say things like "No
human job is safe from automation, current
political systems may become completely obsolete,
personal decision making may be taken over by
algorithms, wealth and power will increasingly become concentrated in the hands of the few." This is probably similar
to some of our long-term uncertainty scenarios, but
certainly not the ones that we hope will actually develop. In your books, and
particularly in the last one, you seem to be much more
assertive about what this future will be, and what is the point of explaining all that? - Well, I see my job not as
trying to forecast the future. I think it's impossible. Nobody really knows how
the world will look in 2050 or 2100. I see it more as mapping
different possibilities, in the hope that we will
avoid the worst outcomes, and yes, it is true that
I tend, in my writing, and also in my talks, to
emphasize more the dangerous scenarios because there
are so many people whose job it is to emphasize
the positive scenarios. Certainly when it comes to technology, it's quite natural and
obvious, that the people who develop the new technologies,
like AI and bioengineering and so forth, both the scientists and also the entrepreneurs, they naturally emphasize
all the wonderful things that AI can do for us. And there are many wonderful
things that it can do for us but then it becomes the job of historians and philosophers and social critics to balance the picture and say, wait a minute, there are also a few dangerous scenarios. Now, I do my best to tell people, look, I'm not a prophet. I don't know what's going to happen. And there is no point,
really, in prophecy. If you know for sure that
this is going to happen, and nothing we do can change that, what's the point of telling people? - Yeah. - It will happen, so... I try to focus on the dangerous scenarios in the hope that we can
still do something about it. - So you are sending,
and with those books, and particularly with
this one, you're sending a bit of an alarm signal,
saying this is on the horizon, - Mm-hmm. - and we can do something about it. - Yes, we can certainly
do something about it, but again, I try to emphasize
it as much as possible, also by looking back, that no technology is ever deterministic. You can use the same
technologies to create completely different kinds of societies. We have seen it in the 20th century, that you can use trains and electricity to create dictatorships
or liberal democracies. The trains don't care. They work for both. It's up to us to decide what
kind of use to make of them. It's the same with AI. It can create a wonderful
world and it can be the basis for the rise
of digital dictatorships and all kinds of extremely
unequal societies. - You make a point in your book, as well, that we talk a lot about
infotech and people are concerned about that
and focused on exploring the consequences of
infotech, ICT and so on and so forth. Your point is that it's
not, in and of itself, the end of it. It's combined with
others, such as biotech, which is for the moment a little
bit under the radar screen, - Yeah. - that there will be
massive transformation. Can you explain that a little bit? - Yes, I think what we are
seeing and will see more and more is not an infotech revolution,
but rather the merger of two revolutions, infotech and biotech. And without the biotech
part, AI by itself, is not going to transform
the world so much. What is going to transform
the world, is the combination of AI with an increasing
understanding of the human body and of the human brain. You could say that the most important fact about living in the 21st
century is that we are becoming hackable animals. And you have these two
components, the animals come from biology and from biotech, and the hacking comes from
infotech and from computers. But, for almost any change
that people talk about today, you need to take biology into account. Even something like a self-driving car, which you say, what's the
connection with biology? Why is biotech relevant
to self-driving cars? Well, without biotech you
can't have self-driving cars because a self-driving
car, in order to function in a city full of pedestrians
and maybe human drivers and certainly human passengers, it needs to be able to
understand human behavior, and human emotions. A car which doesn't understand
how pedestrians behave, you won't want such a car on the road. - But can't you have so much data, and so
much analytical work done on multiple human behavior,
that you actually don't really need the biotech
dimension once you've accumulated all that and can anticipate? - But that data you're accumulating
about all these humans, it's really about the human
animal, how it behaves, and it's not enough just to
collect this amorphous data. You need to have models
for how humans think. Take a very simple
example, which is obvious, but I think it makes the point. You need to take into account
the difference between how children behave and how adults behave, and the different ways that brains work at age 10 and age 20. For a self-driving car on the road, it's important to first, be
able to tell the difference between a child and an
adult, to gauge the age of that human being and
to know something about what's the difference in behavior between an 8-year-old, an 18-year-old and an 80-year-old person. And that's critical,
even for something like a self-driving car. When you reach the point where you say, okay, I want an AI doctor,
I want an AI teacher to replace or augment human
doctors and human teachers, there it's obvious that without biotech you are not going to get very far. - Mm-hmm. Do you think there could be
such a thing as an AI doctor? - Certainly. I think it will be,
it's coming quite soon. It will have immense benefits for humans, and it will also have a lot of dangers. - But why would you say
it would be beneficial? - Because, for example,
an AI doctor could, say monitors, let's take
the classical scenarios that people talk about, that
you go around 24 hours a day with biometric sensors
on or inside your body and they constantly send information to... - So you're monitored. From, you are monitored all the time. - You are monitored all the time. The information goes to an AI doctor, maybe on your smart phone or whatever, which analyzes all this
stream of information and monitors your health in a way which no human doctor can even approach. And it can do things like diagnose cancer when it's still just beginning, and it's still very, very easy and cheap and painless to cure it. Instead of waiting for a couple of years until it spreads and you one day wake up and you feel a bit of pain,
and you go, ah, it's nothing, and it becomes worse and worse. You go to the doctor. They send you to all kinds of tests and by the time they find out, maybe it's not too late, but
it's certainly going to be a long painful expensive
process to deal with it. So the promises are enormous,
also the perils are enormous. And not just to the job market. In other things
- So what happens to all the regular doctors? They lose their job. - It depends. They
can reinvent themselves and change what they do. Certainly, if most of what
you do is just: information comes in, you recognize a pattern, and you make a diagnosis,
this is something AI will be much, much better at than humans. Nurses, for example, are much safer than these kinds of doctors. - Okay, explain to me. - Because, if everything you do is just analyze information, you gather information on a patient, and then again you recognize the pattern, oh, this is the pattern of lung cancer, or this is the pattern of flu, and this is the best treatment, then this is the easiest
thing to automate. But if you actually need
to give an injection to that person, if you
actually need to change a bandage or to give a shower,
that's much more difficult. - Yeah. - I mean, we are going to
have AI doctors long before we have AI nurses. - Yeah, I mean, this is something that, particularly in Japan,
they are experimenting with: the difficulty of inventing that robot which is going to lift
patients out of the bed, which is just almost
impossible at the moment. - Yeah, and... - Unless you have the human
beings to actually do it. - Exactly, and people often think that, no we are not going to have AI doctors, because a good doctor
needs to not just diagnose my disease and recommend treatment, the doctor must understand
my emotional state. Must take into account my fears, my anger, my depression, this is
part of treating me. And people say, oh, an AI
will never be able to do that. But, this doesn't really make much sense, because at least as far
as biology tells us today, anger and fear and depression,
they are also biochemical processes, biochemical patterns,
just like flu or cancer. If the AI can diagnose flu,
it can also diagnose anger and the fact that it doesn't
have any emotions of its own, actually makes it, in many
situations, much better, because it has no
distractions, and you know, your human doctor, maybe
her husband had a fight with her this morning,
and she's treating you, but she's still kind of
reconstructing the fight from this morning. An AI doctor has no
husband. (audience laughs) - For the moment. - For the moment. So the AI can focus 100% on you. And you know, people go through life wanting somebody to understand me, somebody to understand how I feel. People are obsessed with it. I want my mother to understand me. I want my husband to understand me. I want my president to understand me. (audience laughs)
You know when two humans meet it becomes a contest of I
want you to understand me and you want me to understand you. And very often we miss each other. - So it clouds the judgment. - But with an AI, the AI doesn't want you to understand it. It is 100% focused on you. And it reacts, if it
reaches a sufficient level of sophistication, it can
react in the perfect way to your personality type,
to your current mood. Actually the danger, I think,
is that people will become so used to computers that are so empathic, that really understand me so deeply, that really care about my
tiniest units of emotion, that humans will not be able to compete. We will become intolerant
of all these humans who don't understand us
the way that the computers understand us. - There are a few books that
have been, novels at least, that have been written on that theme. You suggest, in your
book, that there should be universal basic support, not universal basic income,
which is a topic that we've studied, particularly last year, in the Fiscal Affairs Department. Do you think that's
the future for doctors? - I'm not sure. I don't know. I don't...
- Would it really be... More seriously,
- No, no, I... - Because jobs would be affected, because the combination
of infotech and biotech is going to uproot
many of the skill sets that are currently
displayed by human beings. Those human beings are going to have to reinvent themselves,
- Yes. - as you said. That has a cost, and they will need some
kind of income or support. First of all, what difference do you make between income and support? Is support more than income? - It's a different approach. - Yeah. - Whether we just go and give people, give a person a pile of money, do with it whatever you want, or you decide, this is the
kind of basic necessities, whether it's healthcare,
education or food, and I'm going to give you
these services for free, but you don't get to
choose which services. So, it's a different approach. I don't suggest it as the solution, I just discuss it because many experts are exploring these avenues. I think the main point
that I'm trying to make, like in the new book, in 21 Lessons, about, call it, universal basic income or universal basic services, the real problem, I think,
is in the first word 'universal'. Most people, when they
think universal basic income or universal basic support,
they actually mean national. They actually think in terms
of, okay you have all these unemployed taxi drivers in New York or coal miners in Pennsylvania, and at the same time, you
have a booming AI industry in Silicon Valley, let's tax, the big Silicon Valley giants and use that money to
give support or income to the unemployed coal
miners in Pennsylvania. But this is national. The really big question
is what will happen to people in other countries. Because the automation
revolution will probably have a completely different
impact on different countries, on different economies. Some economies will boom,
because they will be the center of the automation revolution, and actually a lot of
industries that went out will come back, and then other countries might lose everything. Their economies might completely collapse. So the question is not whether you tax companies in California to pay basic support for
people in Pennsylvania, but whether you tax
companies in California, to give basic support
to people in Bangladesh or Honduras. That's the real challenge,
and we need to think, if we are going to think in terms of universal basic income,
we need to seriously understand what the word universal means. It doesn't mean national. - But that requires a
complete change of governance, of paradigm or whatever
you want to call it. Because that universal
support service or income, whatever you call it, will
require a universal taxation mechanism and reallocation
based on empirical evidence of who is
suffering, who is benefiting and how you want to,
sort of, reallocate value in order to avoid chaos. - Yeah, I mean, you
know, I think it's going to be extremely difficult. I'm just making this comment
because you do hear a lot today about universal basic income
or universal basic support as the solution, if we
don't manage the automation revolution very well. And my concern is that most of
the people who are discussing it think in national terms,
but the worst imbalance will not be between different
parts of one nation, it will be between different nations. It will be like, maybe,
the industrial revolution of the 19th century, only much worse. That in the industrial revolution... - Can you elaborate on that? - Yeah, in the 19th century,
you had a very small number of countries leading the
industrial revolution and getting, at least for
a couple of generations, most of the benefits and really dominating and exploiting much of
the rest of the world, and most countries
- Hence colonialism and all of that, that was used. - Yeah and most countries
remained far behind. And the same thing may happen
again with the AI revolution. And you have today some
countries which are at the forefront of the
automation revolution and many countries say basically,
well, we have much more urgent business than worrying about this. But this is like, you
know, you live in 1840 somewhere in South America or South Asia and you hear that in Britain,
they have these things like steamships and railroads, and you say, oh, I have so many more urgent
things to worry about than these steamships and railroads. Thirty years later,
you're a British colony. And this is the situation we are in now. The main difference is
that in the 19th century, and in the 20th century, the big threat, if you are left behind, the
biggest threat you faced was exploitation. They will conquer you and exploit you. In the 21st century, there
is actually a worse fate, potentially, a worse fate awaiting you. You will not be exploited. You will simply become irrelevant. And, being irrelevant in
many ways is much worse than being exploited. When you're exploited,
at least they need you. (audience laughs) When you're irrelevant,
that's much more difficult to struggle against. - That takes us to the data, doesn't it? - Ah, yeah, that takes us in the direction of discussing what's
happening to the data. - What's happening to the,
and you, I think you say in your book, that who owns
the data, owns the future. And, those happy few, who own
the future, the data owners, and the data processors
and the data miners and all the rest of it,
will have a very happy life whereas the rest of the
others will be this precariat out there, eventually used for a little bit but irrelevant most of the time. Unless they're given that
universal basic support so that they're kept happy and quiet. - That's one scenario again. We are not sure, but what we can say with far greater certainty
is that data is becoming maybe the most important
resource of the 21st century. The same way that land was
the most important resource in ancient times and
machines and factories were the most important resource in the 19th and 20th centuries. And at present, we see
that almost all the data of humankind, both on the
level of the human body, and the human brain,
it's being concentrated in a few places. And, most of the world, you can say, is already in the process
of being, kind of, data colonized. That the data is flowing
from all over the world to a few hot spots. - Which ones would those be? California? - The usual suspects. - So California is one.
- California, East Asia, a couple of places in between. Bangalore and a couple of
places maybe in Europe. And in much of the world, they
are just giving up their data for free or maybe in exchange
for funny cat videos, (audience laughs) I'm not sure
if this is completely free. - Google would argue with
you vehemently that it's not for free and you're getting
a lot of services for free, therefore it's consideration
for access to your data. - Yeah, the question is,
whether you get a fair, you know, what is a fair
price is one of the biggest questions in economics
ever, and we are not going to settle it here. But, there is a huge suspicion
that what you're giving up it may be worth far, far
more in the long run, than what you're getting in exchange. So, it's done voluntarily most
places, in most countries. But, this is also largely
because people don't realize the importance of what they are giving. And, one of the problems
is, I don't have like, this is the solution,
okay let's have government regulation of data, let's nationalize it. I don't, nobody really has...
- But the Europeans are trying to do that with GDPR. Not dealing with the property of data, but at least dealing with the
more philosophical question about who owns it,
- Yes. - who has privacy, who
has the right to either give it away or retain it. - There are many plans and models
today for how to do it; what we lack is experience. What history teaches us
is that trying to regulate something like that is going
to be quite complicated and mistakes will be made. And we are not sure what
is the best way to do it. And, you know when you try to
regulate the ownership of land you have thousands of years
of experience going back to ancient Mesopotamia and the
ancient Chinese civilization and you can learn. And you have a couple
of centuries experience in regulating the ownership
of machines and factories and all that. But, we have very little
experience in regulating the ownership of data. So, yes, the debate is starting, all kinds of experiments,
like what is being now done in Europe, this is beginning, but we are working with a
much, much shorter time frame than before. We can't go back to the past,
there is almost no record of something like that. And we need to do it very
quickly, because the development of this technology is going
at an accelerating rate. And if we, or some of us, get it wrong, we may not get a second
chance to try again, a different tack. If you're left behind,
in the AI arms race, then maybe you never get a second chance. - That's not very reassuring. (all laugh) Now, I'm just turning to
my excellent organizers. There's a time when I have
to turn over the floor to the audience. It's not just now, but I
simply want to warn you, so that if you have
questions, if you are curious about a particular area,
you should prepare, because I'm going to turn
the floor over to you in about 10 minutes. But, for the moment, I
still have a few questions on my mind. You talked about the
economists being poets. I would challenge you
on that, but (all laugh) some of us have to read a lot of material and I'm not sure that
it sounds like poetry most of the time. But you say that economists
write great stories that many people believe. I would challenge that too, by the way, because...
- If many people believe them or that...
- Well, I remember days when we
were trying to tell a story in the UK before the Brexit vote was out, and...
- Somebody told a better story. (laughter)
- I think you're right. I think you're right. I think you're right. (applause and laughter) Which raises the issue of the truth. - Mm-hmm. - Can you write a good story
because it's a good story irrespective of the facts, or do... - Yes. - Yes, you do. - Oh, definitely. If there is one thing...
- Even though it's a lie? - Yes. - Okay. - Some of the best stories
in the world are not true. (laughter) - Starting with Harry
Potter and (audience laughs) going all the way to books
which maybe I won't mention in order not to offend
anybody's emotions. (laughter) - But you take the view that
religion, or religious stories are just stories, and they're narratives that help people along. That they're just stories. - Yeah, I mean, I think that
you need to differentiate between saying that a story is not true and saying that a story is
harmful or not effective. To get a lot of people to
cooperate, you need to convince them to believe in a shared story. Now sometimes the story
can be completely fictional and it still works. And, you know, it starts with
things like playing games, like playing football. If you want to get 22
people to play football, you need for all of them
to agree on a common story for what football is, what are the rules, what are the goals, what is
allowed, what is not allowed. And, there is nothing wrong with that. Everybody knows that we invented the rules. It's completely fictional. It's not the laws of physics,
or the laws of biology that mandate these rules. And as long as it goes like that,
it's fine, until somebody forgets that this is just
a story that we invented, and you have football hooligans starting to beat or kill somebody
because of a football game. And then you need to remind this
person, look, it's just a story we invented. Don't take it so seriously. Now, this goes all the
way to things like money, which doesn't have any objective value; it's not based on the
laws of physics or biology. It's no coincidence that
we are the only animals that have money. Chimpanzees don't have money. I mean, chimpanzees can
trade, they can barter, like, I give you a coconut,
you give me a banana. This works with chimpanzees. - Barter works, yeah. - But they will be unwilling
to part with a banana in exchange for a green
- Baubles. - piece of paper, or in
exchange for electronic data on computers. (audience laughs) Most of the money today in the world, as you know, probably much better than me, is just electronic data in computers. It's not the dollar bills and so forth. And, as I said in the beginning, this is the best story
or most convincing story ever told, because it's
almost the only story, that everybody believes. Not everybody believes
in God or the same God, not everybody believes in human rights, not everybody believes in nationalism, but almost everybody believes in money and in the same kind of money, even though it has no objective value, even if you think about
somebody like the Islamic State, when they captured Mosul and Raqqa, they destroyed museums and
they toppled statues, and then they killed people, but they didn't touch the
money. (quiet laughter) When they captured the bank,
the central bank in Mosul, they found all these piles
of green pieces of paper with pictures of American presidents, and instead of burning it, they took it, and carefully guarded it. Even they believe that story. - But isn't that based on trust? - Yes, money is really made of trust. If you look at the long history of money from ancient Mesopotamia until today, what money is really
made of, is simply trust. In the beginning, there
was very little trust, so there was very little
money and the money you had, had to be made from something
which was also useful, like the first type of money
we know about for sure, was simply made of barley. You could eat it. Now, then you switch, eventually, to gold, which is worthless. You can't do anything useful with gold. It's just a status symbol. And then to paper, and now to electronic money, and it's because there
is more and more trust between people, so money
can become more and more insubstantial. - But isn't there a
contradiction between the fact that there are more and more people, including crazy people, who trust, who put their trust in money, and the fact that, the world over, particularly with
respect to institutions, trust is actually declining
and eroding in many respects. How do you reconcile the two?
- All but the last... - All but the last few years, yes, but if you compare the amount of trust that people today have in,
even global institutions, let's put aside national institutions, to the amount of trust people
have in global institutions a century ago, or a thousand years ago, it's incomparable. There is so much trust
still in the world today. We spoke earlier about football. So, just the recent World Cup in Russia, just think, a thousand years
ago, trying to get people from Argentina, France
and Japan to play games together in Russia. Absolutely impossible. Not just because nobody in
America knows about Japan and vice versa, but also
because there is no single game that all the people in the world play and agree on the rules. So, the World Football Cup or the Olympics is an amazing display of global trust. And it's the same with many
of, still of our financial institutions, they have
definitely taken a hit over the last decade or
so, but if you think about what happened after the
global financial crisis, the ability of central banks, for example, to create trillions of
dollars out of nothing, they didn't really
create it out of nothing. They created it out of trust. There was enough trust in the system, that people were willing
to go on using the dollars and the euros and yens
even when they heard about this ex nihilo creation of
trillions of new dollars. They were willing to use
them because they still had enough trust in the financial institutions and in the governments. This trust is eroding, but
we still need to remember like somebody just told
me, we've taken a thousand steps forward, in terms of global trust, over the last millennium
and now we've taken five steps back. But we're still 995 steps
ahead of where we were in 1018.
- So we're not at an inflection point. - Not yet. We could reach that point. I mean, things can go
downhill very, very quickly with humans. But we are still quite near
the top of the mountain, at least compared to any
previous time in history. - All right, turning
over to your questions, and those who are puzzled, curious. Please take the floor. There are mics all over
the place, I think. You should just raise your hands. Okay, you two. The two of you, go ahead. - All right, yeah, so
- You know what, you stand up and you say your name and... - Will do. My name is Sai Janaswamy. I'm with the library, so welcome - Thank you
- to the IMF. - The question I had was, in your book you talk about mindfulness and
I haven't read the book yet, but I looked at a Bill Gates book review which came out the day before yesterday. I'm sure you looked at it. - Yes. - Right? (all laugh) So, he says, it all boils
down to being mindful and meditation, and I was
like, intrigued by that. And, what is the correlation? As humans, how do we
tackle these challenges in the 21st century and
what does being mindful and having a stream of
meditation have to do with this? - Well, there is one
chapter in the book about meditation. I was very apprehensive about
including it in the book because I was afraid, just of that. That people will say, oh
well, he suggested the answer to all the world's
problems is just meditate. (audience laughs)
And this is definitely not the answer. I don't think there is any
chance that eight billion people are going to start
meditating any time soon. And even if they do it,
the results might not be as positive as we hope. When you really observe
what's happening in your mind, when you're really there
just with your mind, without any distractions, what you often find is so frightening, is so shocking that people
can take it in all kinds of dangerous directions. So, because, I myself,
I practice vipassana meditation, and because I practice it myself, I don't have such big trust
that this is the solution to all the world's problems,
because it's very hard. But I do think that it is very important for people to make the
effort to get to know themselves better. And I know this is the
oldest advice in the book. Socrates said it and Jesus
said it and Buddha said it thousands of years ago. Know thyself. But, the difference now, is
that in the age of Socrates, you did not have competition
and now you have competition. 2000 years ago, if you
did not make the effort to get to know yourself better,
you were still a black box to the rest of humanity. But now, you have all these
corporations and governments that, as we speak, they
are trying to hack you. And if they reach a point,
and they are very close to the point, when they
know you, better than you know yourself, they
will be able to basically sell you, anything they
want, whether it's a product or a politician. And that's very dangerous. And we can't stop the
progress of AI and biotech, but we can make the effort
to get to know ourselves better and especially to
get to know our weaknesses so they don't become a
weapon used against us. And I think there are hundreds
of types of meditation practices out there. I practice vipassana,
but for different people different things work. Maybe sports would work for you. Maybe art would work for you. But whatever works for you,
it's important to do it now because of the competition. - Just behind you, yes, please. - Hello, my name is Annette Schmitz. What about nature? What happens to nature in the future? Does AI solve ocean acidification? Do we have enough data
from GPS monitoring, hurricanes, et cetera? Does an AI-dominated world solve
it, or do the rich get wealthier by, sort of,
capturing food security and air and water,
essentials that we need? - It's really up to us. AI will not solve it for us. It can help us solve the
problem of climate change but it will not solve it unless we give it the instructions to go in that direction. Now I think what we need to
realize is that climate change is not just a problem,
it's also an opportunity. I don't think that we can
prevent climate change just by telling people to stop progress, stop economic growth,
because economic growth, again as you probably know better than me, is now the number one value in the world. Countries can call themselves
communist or capitalist, Jewish or Hindu or Muslim or secular, democratic or authoritarian,
doesn't matter, their number one value is
actually economic growth. And if you say to people,
to stop climate change you need to stop economic
growth, it is unlikely to succeed. But what we need to realize
is that there are a lot of opportunities for economic
growth and for improvement in the condition of humanity in developing new eco-friendly technologies. So I will just give one
example, and not from the field of AI, but from the field of biotech. One of the main causes of
climate change and of pollution in general is the meat and dairy industry, which in addition inflicts
terrible suffering on billions of sentient beings. And there are opportunities
for example to develop what is known as clean meat,
which is, you want a steak? Don't raise a cow and slaughter
the cow to get a steak. Just grow a steak from cells. It may sound like science
fiction to some people, but it's already happening. Five years ago, the first
clean meat hamburger was produced and it cost about $300,000. Last year it was down to $11.00. And if we continue to invest
in this kind of technology in a couple of years we can
have meat which is much more ecological, much more economic
and also much more ethical. So this is... - Is it as good? - I haven't tasted it, but
it should be even better because you don't need so
many antibiotics and you can decide exactly the level
of fat and the level of the different chemicals
and components in a way that you cannot do when you
raise cows and chickens. So, this is just one
example for how we need to think about the war
against climate change, not just as a net cost, but
also as offering many opportunities. - Yes, please. - My name is Alberto Pejar
and I work as part of the team on long-term trends and uncertainties. And your work resonates with us because part of what we try to do
is challenge assumptions or surface them and especially the notion that they will persist into the future. And of course what your
books do that I think is very powerful is take
these supposedly irrevocable truths and demonstrate
them as stories or fictions or perhaps as we economists
might call them, institutions. So you've been quite nice to us so far. I'm inviting you to be a
bit harder on us and choose a fiction, perhaps the
economic growth one, or something else that
you think might be challenged, or might change in the coming decades. An economic fiction preferably. - Oh! - We are, we like to self-inflict a few. - That's a big challenge. Well I would say the biggest
story of all out there, as I just said, is economic growth. And, I do think there is a lot
of reason to doubt this idea that the number one value of
countries and of societies should be economic growth. But, I think it's unlikely
that we will be able to change the story quickly enough. Now we need to work with
that story, not against it. Maybe I'm wrong. Maybe it is possible
to make people realize that you can make people's lives better, not necessarily by providing more stuff. But it's going to be a very hard sell. So, we can definitely make that effort. Similarly, another
important story concerns, as we mentioned earlier,
money and the nature of money, and our many attempts to create
new kinds of money today. Again, whether we succeed
or not, I don't know, but what I can say is that
any understanding of money should start with realizing
that money is made of trust. So, I don't know what will
be the money in 50 years, but, it will be made of
trust, not of distrust. If people are thinking
of new kinds of money which are based on
distrusting institutions or distrusting governments,
it's going against thousands of years of history. What is important to think
about, is what happens to things like taxation
systems if more and more of the transactions, of
the economic transactions in the world don't involve money. What happens if more and more transactions are based on exchanging
information and data rather than just dollars
and euros and yens? So, what happens to the taxation system? Should we start taxing
information and not taxing money? I don't know. But, I do think we need to
take seriously the possibility that the nature of
money is going to change quite dramatically in the
coming 50 years or so. - Interesting. You have a question. You have to raise your hand really - I, can I... Okay, you over there. Okay, you have the mic, so very quickly and then we'll move over there. - Very quickly. - I promised you the floor. - All right, so, my question
is about, you mentioned trust and you mentioned ownership of data and that that will change how things move. Do you think that DLT or
blockchain technology has any role to play in how that
works, because at its core, what it offers is,
actually, a certain potential that
I can own my own data and I could sell or
exchange or do anything with it as I wish, because
the technology allows for it. The same with the redefinition
of trust, or the provider of trust, right? It
could be, it currently is, the public, the government, but this new technology gives a different
shape that the trust can take. - The new technology is DLT,
distributed ledger technology also known as blockchain.
- Yeah, DLT, yeah. - Yeah, I'm definitely not an
expert on blockchain and all, but a lot of people are saying
that this can be the basis for creating a new system
of trust and a new system for ownership of data. Maybe. I just don't know. But, it is certainly the
case that the old systems of trust and of ownership
will have to adapt radically to the new realities of the 21st century. It's a very different
thing, ownership of data and ownership of land. We know how, what is
- Ownership of what? - of land. We have thousands of years
of experience of what it means to own a field. I build a fence around it. There is a gate. I stand at the gate. I tell who can come in and who can go out. That's very clear to us. But what does it mean that
I own my medical data? If there can be so many copies out there, that I don't even know,
and so much research done on that basis. Now we are developing the
technology to do research on medical data, while
the data remains anonymous and while the data
remains under my ownership and even in a way that can give me a stake in the result of the research. So, if the results of the
research are monetized, I could get a small share
of it, because I was one of the million people whose data was used in order to develop this new drug. So, this is a very promising direction, but we have very little
experience with that and one of the other difficulties is that it's completely
counter-intuitive to most people. Humans have this very
clear understanding today of what it means to own real estate, and even chimpanzees have
a very clear understanding of what it means to own a banana. There are rules of ownership,
even in chimpanzee societies. Not always respected, but that's true of human societies also. We don't have, certainly the
average person doesn't have, a clear concept of what
ownership of data means. And, one of the great
difficulties, and here I come back to this connection between
economists and poets, for this to work, it should
be clearly understood by the average person. One of the worrisome
developments I think today in the world, is that if
you think out of almost eight billion people in the world, how many really understand how
the financial system works? - Oh, very few. - Very few. And this is a very worrying
- Including in this room. (all laugh) - And this is very worrying,
especially if you think that maybe in 20 years
the number will be zero. Maybe the financial system
will become so complicated as AI takes over more
and more of the action, that it will just be too
complicated for the human mind. So how does the system
function when nobody, no human, really understands it? What it means is that
more and more authority is shifting to the algorithms. So, you still have a human on top, and you still make choices, but the options in the menu
are written by algorithms on the basis of how algorithms
understand the world, and they understand the
world in a way that nobody, no human, can really understand. So you get like, okay,
we have five options, but why these five options? We don't know. This is what the computer said. - Yeah, and that's how you have accidents. You know, high-frequency
trading, too much algorithmic trading, and you can end
up with nasty surprises. - Ah, yes, and again,
as the complexity grows, and then the AI becomes
more and more sophisticated these kind of accidents are likely to be not just more common,
but far more important. - So even more than the ownership of data, the ownership of algorithms
should be something that we worry about. - Yeah, and not just the ownership. Maybe the algorithm
belongs to a government, but nobody in the
government really understands the algorithm. The algorithm is constantly changing, teaches itself things,
it's based on deep machine learning. It recognizes patterns,
that no human being is able to recognize or understand, so the algorithm comes
to the Prime Minister and says, okay we are now facing a crisis, but I can't explain to you why, you won't understand it. And the only two ways to avert the crises is this or that, I can't
explain to you why, because you're a human,
you won't understand it. And this can be down the
line in 30 or 40 years. - Reminds me of the days
when you signed this software agreement with software companies and said, I want the source code. Suddenly someone says,
what are you going to do with the source code? You
have no idea how to use it. Same. Okay, over to you. - Yes, does this work, okay. My name is Yuana Lukai. I work in the legal department. My question may be quite simplistic, but I'm kind of perplexed
by your statement that the biggest threat in
the future for countries would be to be left behind,
because I think of countries maybe in a cynical way, as markets. And if AI is about producing
products like AI doctors or monitoring devices for your body, then surely countries
that are doing well in AI would want markets
outside of those countries to thrive in order to be
able to sell those products. So, there must be some sort
of motivation in the future also for international
cooperation and redistribution and international tax and
maybe institutions like the IMF to prop up and support
economies that are doing badly, if for nothing else,
just to serve as markets for AI producers. - Yes, there is a common and
quite reasonable argument that maybe the ultimate
destiny of homo sapiens is just to be consumers. You don't need humans for
anything except as consumers. - And vegetators. - And meditators, but
they don't consume a lot so it's a problem. And, yes there is a lot to be said for it, but we need to take into
account alternative scenarios. It certainly didn't work
like that throughout history. There are many occasions in
history when people didn't value, very much, other
people as consumers. If you think about slave economies, you did not think,
oh, we should liberate all the slaves and
improve their conditions so that they will buy our products. And in the future, slavery
is unlikely to return. If you need fewer humans, you
definitely don't need slaves. But you could, for example,
imagine a contrary situation in which machines are also the
consumers. You need consumers, but then, even in
consumption, somebody is doing a better job than humans. And, you know, we see
the beginning of this, even today, in some respects. I'm in the book industry. I want to sell books. And in a way, my number one customer today is an algorithm. I need to sell my book to the Amazon and Google search engines. They are my number one customers. I know that, okay, I need to tailor, like, what I write or said, not to the taste of homo sapiens, but to the
taste of an algorithm. If I can get the algorithm,
everything else will follow. In a more extreme situation, you can even, it's a very, very simplistic version, but just to give the gist of the argument, you could have an entire
economy flourishing without humans in the loop. You can have one
corporation that mines ore and produces steel and sells
it to a second corporation that produces robots. And the robots are sold back
to the mining corporation, which mines more iron,
produces more steel, sells it to the robot corporation,
and these two corporations can form an entire economic ecosystem which can spread from planet
Earth to other planets and throughout the galaxy,
just colonizing new planets and asteroids to mine things
and you just don't need humans in the loop. So this is of course a very
simplistic science fiction scenario, but it alerts us to the fact that it's not a law of
nature that you always need humans, even as consumers, or all the humans as consumers. There could be, potentially,
other consumers out there. Now the question arises,
but what's the point? What's the point of having
this closed circle economy that benefits nobody outside? But, you can ask the same
thing about the human economy. We are also in a closed system that, okay, it benefits the actors in
the system, but who benefits besides the humans today? Nobody. So it's the same with, it's
definitely not a prophecy of an inevitability, it's
just a thought experiment to think about the
possibility that you could have sophisticated economies in the future in which humans, or at least many humans are not needed even as consumers. - I hope you have a
really upbeat question. (all laugh) Because this is the last
one and we want to end up on a positive note. - Thank you professor. My name is Harry Joppa. I'm from SPR here. - SPR stands for Strategy, Policy, and Review. It's one of the departments. - Thank you. So, we are in an age
whereby globalization is facing protectionism, and I remember in your first book you
mentioned that nations, like religions, are stories,
so I just want to ask you what's your view, let's
say 20 years from now. Do you think we are
going to see more or less nations in the world,
if you can (fades out) - More or less what? - Nations. - Nations. - Well, I don't really know. Certainly despite the recent
upsurge of nationalism, nationalism today is far
weaker than a century or two ago. We just forgot how strong it was. Go back 100 years to 1918,
Europeans were killing each other by the millions
over questions of nationality. Now today with all the talk
about the rise of nationalism in Europe, just count the bodies. A good way, it's not the
only way, but a good way to assess the power of an
idea is to do a body count. And one of the amazing things
for me, as a historian watching events, and I'm not
speaking specifically about Europe, is how
few people are willing to kill or be killed for nationalism. Which is a wonderful
development, don't get me wrong. A century or two ago,
to decide the question, like whether Britain should be part of the European Union or
an allegedly independent country, you would need a big war with millions of people
being killed and injured and so forth. As far as I know, in
Brexit, only one person lost their life, a British MP who was murdered by some fanatic. And the rest of the people
just followed whatever the referendum said. And it's the same with
the Scottish referendum. In past centuries, if Scotland
wanted to be independent of London and they
wanted to be independent a couple of times, they
needed to raise an army and to confront the armies
that London would send from the south to burn Edinburgh down. Now they just held a referendum
and almost everybody just accepted the results. Very few people are willing
to actually be killed or to kill for this, and this
is a very good development. Even when you look at most of the world, nationalism is much weaker than before. And I think, I don't know
what will happen next, but what is to me, quite
clear, is that all the major problems of the world
today are global problems. The three biggest problems
we face are nuclear war, climate change and
technological disruption. And none of these problems
can be solved on the level of a single nation. You just can't solve climate change or regulate AI on the
level of a single nation. So, the only solution to
these global problems, is greater global cooperation. Whether we actually see
greater global cooperation, I don't know. It's the wise thing to
do, but we should never underestimate human stupidity. (laughter) It's one of the most
powerful forces in history. (laughter increases) Is this upbeat enough? (laughter and applause) - You were terrific, thank you. Thank you ever so much. I think we have to just,
we'll try to improve the last statement by being witty,
intelligent and focused. And meditating. Thank you. (applause)