(glass clinking) - Thank you, great, yes. Welcome, everybody. As chairman of RUSI it's a great pleasure to
welcome you here tonight to the Royal United Services Institute and to the Duke of
Wellington Hall specifically. Please turn your phones to silent if you haven't done so already and in the event of an emergency join the RUSI staff over there who will direct you to
the entrance you came in by or to the emergency doors on the right. The Royal United Services
Institute was founded in 1831. The first Duke of Wellington
was our first chairman and we claim to be the world's
oldest think tank usually. And we have here tonight
one of the world's newest and deepest thinkers
to think with this think tank, Yuval Noah Harari, and
it's a great pleasure to have him here tonight. We will range more widely
than our normal subjects of conflict and defense and
security in which we specialize although these will no doubt feature in our discussion this evening because a discussion with Yuval is a discussion about everything
that human beings have done or are doing or might do in the future. And he has gone from
specializing in world history, medieval history, and military history to writing three acclaimed
books about most of the things that we as human beings get up to. In Sapiens, his first book,
he described our history, how we drove other species
of humans to extinction and much else besides. And having passed through
a cognitive revolution and agricultural revolution
and a scientific revolution, we're now on the brink of the
end of Homo sapiens, he wrote, as we move from natural
selection to intelligent design. And then in Homo Deus he described what our future might look like, how we might become unaccountable gods wreaking havoc on our own ecosystem or have already become that in many ways and we might divide into a society where some become super human beings. And now in his latest
book, which is on sale with the others outside from Waterstones, 21 Lessons for the 21st Century, he focuses on today's most urgent issues. So Yuval, welcome to RUSI and welcome to our invited audience and to people watching us live streamed wherever you may be in the world. And I want to start with a point on a very familiar subject to everybody here but which
you illuminate particularly clearly and mercilessly at the
beginning of your latest book which is what's happening
to liberal democracy. You write that liberal elites are in a state of shock and disorientation. And here we are dealing with
Brexit, with President Trump, with the Italian government. We are definitely in a
state of disorientation. And you write that in 1938 humans had three
stories to choose from. In 1968 they had two. In 1998 there seemed to
be only one, liberalism. From 2008 we're down to zero. (audience laughing)
So let's start there; explain and expand upon that, and, for those of us still
attached to liberal democracy, what hope is there for us? - No, there's still a lot of hope. That should be said from the beginning. But, well we talk about stories and maybe the first thing to say is simply that humans, Homo sapiens, is a storytelling animal. We think about the world, about our lives in terms of stories. Very few people think about the world in terms of numbers or
equations or statistics. And the basis for almost all human cooperation is belief in shared stories, which is why stories are extremely important, at least in my opinion. And when we look at the broad history of the last hundred years
as you just described, we see a shift from a
situation in the 20th century when politics was a battleground between huge visions about
the future of humankind, huge stories that explained
or claimed to explain everything that happened in the past, what is happening in the present, and what will surely happen in the future. We had three such big
stories, the fascist story, the communist story,
and the liberal story, and they got knocked
out one after the other until at the end of the century you had just one story standing which is the liberal story which caused a lot of
people all over the world to think that that's it. That's the end of history not in the sense that nothing else will happen
anymore, that time will stop, but in the sense that we know it all. We now perfectly understand the past, we understand where we are, and we know where we are heading to. And we are heading towards a world of more and more globalization
and liberalization, that democracy will be the
dominant political system, that the free market will be
the dominant economic system, borders and walls will get
lower and even erased completely until all of humankind will become a single global community. And lots of people believed that, not only in the core Western countries but all over the world. And then quite rapidly over
the last 10 years or so people lost faith in this story not only in the developing world but above all in the core countries of Western Europe and North America. If 10 years ago some
governments were still trying to force this vision of
history on other people around the globe even at the price of war, now they're not even sure of this story in their own countries. And it's the most, I think,
shocking and terrifying thing to switch from having one
story to having zero stories. When you have two or three stories competing with one another then you still don't
know what will happen, who is going to win, who is right. You still have doubt. But when you have just one story, you have certainty. There is just one thing
that explains everything. There is just one thing that can happen. And this is the level
of maximum certainty. And then to switch to zero stories, this is much more frightening than having two or three or four stories, because you don't have any explanation of what is happening in the world and you don't have any
vision for the future. And I think that what
we see now in the world above all else is a vacuum
of stories and visions. This vacuum is partially
filled by old stories that seem to be making a comeback like nationalism and like religion but the main thing to say about them and we can go more deeply into that later, they don't really offer
a vision for the future. What characterized fascism
and communism and liberalism is that they really had a
vision for the whole world, for the whole of humankind,
maybe not a good vision, but still they had some vision. If you look at the rise of nationalism today or in the last few years what strikes me above all is
that it simply has no idea how to run the world as a whole. Nationalists can have very good ideas sometimes about how to run a particular country. But how should the world
as a whole be managed? There is still a huge vacuum there. The most I can understand, at least, from what some nationalists are thinking is that they think in terms of a network of friendly fortresses, with each
country building walls and digging moats around itself to defend its unique identity, its unique culture, its unique ethnicity. But the different countries
will still be able to cooperate peacefully to some extent, to trade; it will not be a return to the jungle of everybody against everybody else. But the problem with
this vision of the world as a network of friendly fortresses, actually there are two problems. The first problem is that fortresses don't tend to be friendly,
certainly not in the long run. Each fortress naturally
wants a bit more security and sometimes territory and
certainly prosperity for itself even at the price of what
the other fortresses want and without some global values
and global organizations and so forth you can't reach an agreement. And the friendly fortresses very quickly turn into warlike fortresses. The other problem, which is a new problem, is that the world now has
three major challenges which are global in nature
and simply cannot be solved or dealt with on the
level of a single fortress within the walls of a single fortress. These are nuclear war, climate change, and technological disruption,
especially the rise of artificial intelligence
and bioengineering. And it should be obvious,
I think, to everybody that you can't stop nuclear war just within the walls of one fortress, you can't prevent climate change just within the walls of one fortress, and you cannot regulate
AI and bioengineering just in a single nation because you don't have control over scientists and
engineers in other countries and nobody would like to stay behind and just restrict their own development if the other fortresses are
not doing the same thing. So this is why I think that
there is no real vision there. It's just something that fills the vacuum but no vision, yeah. - And your point, Yuval,
is that liberalism might have coped with those things but it doesn't really have an answer to the ecological challenge,
the climate change, or the technological disruption. And so even this cherished liberalism that has been a mainstream
of politics for so long, faced with this, is in a bad state. - Yes, I mean again, you can
make some slight corrections to the liberal recipe like okay, so people don't like too many immigrants so okay, we can have stronger
regulations on immigration and go back to the good old
days of the 1990s or early 2000s with a bit more regulation on immigration. But in the long term to
really, I mean the deep meaning of the ecological crisis and even more so of the new disruptive technologies is that we need rethink the foundations of the liberal order. In terms of the ecology
the main problem is that liberalism is premised on the promise of economic growth that you can basically satisfy everybody because the pie will just keep growing. And it's not impossible
but it is difficult to continue maintaining economic growth if you want simultaneously to fight climate change and ecological disruption. Again, it's not impossible but
it demands a lot of effort. And when it comes to the
disruptive technologies, here liberal democracy is in an even worse position because I think, and again we
can discuss it more deeply, that the implications
of the new technologies, especially again artificial
intelligence and bioengineering, undermine the most basic assumptions of the liberal order
about human free will, about individualism,
about these basic slogans that the customer is always
right, the voter knows best. The new technologies really
undermine these assumptions. I still think that from the
options on the menu in 2018 liberal democracy is the best option because it enables us
to rethink and question our foundational values and norms and try to construct a new system in the most open and accountable way. I don't think we can
maintain the order as it is but this is still the best order to try something new out of. - And you gave us really
good advice in this book which is be aware that you
are bewildered but don't panic (audience laughing)
which I often used to feel like saying to colleagues
in government actually. And then you go on, though,
to almost make us panic with the point that, in your argument, the merger of information
technology and biotechnology is quite likely fatal to democracy. And I think that this
is quite a hard point for people to grasp in a way. We can see lots of things that
are damaging to democracy now and we can see, to a politician's, to an ex-politician's eyes, the way social media has developed, which has really fractured how people deliberate; they no longer deliberate together, really, in a democracy, and that seems to be very damaging. But you're introducing a
much bigger argument really, a more worrying argument which is this technological change which few people are going to want to stop could be fatal to democracy. So would you like to explain
why you think that's the case? - Well it will be fatal to
democracy as we have known it. I think that the big
advantage of liberal democracy over all other systems
is that it is adaptable. It can reinvent itself. But for the way things are at present or for the way things we thought about liberal democracy for maybe two centuries, the merger of infotech
and biotech is fatal because, to make a long story short, the crucial point is what happens when an outside system, an outside algorithm, knows you better than you know yourself, knows how you feel, can
predict your emotions, can manipulate your emotions, can predict your decisions, your choices, can make choices on your behalf? And this is true of the marketplace where a corporation knows
your choices better than you and can predict and also
manipulate your choices. More and more crucial
decisions in people's lives, what to study, where
to work, who to marry, whom to vote for, there
is an algorithm out there that can tell you better than what you can tell yourself what to do. Certainly when it comes to politics to think about a situation
when the government knows you better than your mother. I mean the idea of having somebody that knows me better than I know myself, we've all experienced
that or almost all of us experienced that when
we were young children and our mother knew us better
than we knew ourselves. You're not angry, you're hungry. Eat something. And this is
very good when you are three but what happens when this
happens when you're 30 and it's not your mother,
it's the government, and it could be all kinds of governments, not necessarily a liberal democracy. And then people think it can never happen. Humans are too complicated. We have souls, we have spirits. No algorithm can ever figure
out these mysterious things like the human soul or free will. But I think that this is
18th century mythology which held on for 200 years because there was no technology to do it but now or very soon we will
have the technology to do it and it will force us to rethink the fundamentals of things
like the free market or democratic politics. - And it's quite bleak, isn't
it, some of what you are saying to us, (audience laughing) to put it mildly, because one of the
metaphors I really enjoy, one of the great things
about your writing, if I may say so, is that
you have very memorable phrases and metaphors
that stick in the mind. And the one that stuck
in my mind particularly from your latest book is the data cows. You say that the cows we now see in a farmer's field are the descendants of much more adventurous animals and we have reduced them to this state of standing in that field and
just processing food for us. And now we're doing this
with humans, you say, that we are turning ourselves, we're going to turn ourselves, into biochemical algorithms that process data: we take in emails and tweets and texts all day long and we put some out as well. And we are just part of the data flow, no longer autonomous individuals. We will be the data cows. So what do we do about that? Do we say well let's stop
technological development, which we can't really do, which is unlikely? How do people looking for policies and regulations try to find solutions to these things? - Well there are two different questions. I mean one is how do you
form an alternative vision? And then there is the political question of how do you get people
convinced of your vision and enough people in enough countries to actually make it work? So in terms of the first question of formulating a new vision, I don't think that's impossible. The first step is to
acknowledge the realities, the biological realities of human beings, how humans make decisions and where human desires and
choices really come from and the enormous potential
for both good and bad of the new technologies to
really hack human beings. And it can go in all kinds of directions. I mean the same technology can be used to give people the best
health care in history, with some system that monitors your body 24 hours a day and can identify all kinds of illnesses from cancer to flu when they are just beginning, when it's very easy and cheap to take care of them. It's a system that can help people make better decisions in their lives. Again, if we talk about
decisions like what to study or whom to marry or whom to vote for, people often make terrible
decisions in (laughs), (audience laughing)
in these situations. And the system can work for you and help people make these decisions. Say you need to decide what to study in college. Very often people make bad decisions and waste years and a lot of money and a lot of energy because they have distorted
views of themselves, because they don't understand the field into which they are entering whether it's ballet dancing or
whether it's being a lawyer. They don't really
understand what it will take later on during their career. And if there is a system out there that can help you make better decisions, that can be very good. Of course it can be used
for terrible things. The same system that monitors your body to recognize the
beginning of cancer or flu can also mean that citizens
in some countries like, I don't know, North Korea
might have to walk around with a biometric bracelet that constantly monitors how they feel. And if they listen to, I don't know, Kim Jong-un giving a speech and the biometric bracelet
recognizes the signs of anger, that's the end of you. And that's feasible technologically. Maybe not today but in
five years or 10 years that's definitely feasible and there are all kinds of
governments around the world experimenting with the aim of creating what we can call a total
surveillance regime which is something far more extreme than even what you find
in George Orwell's 1984 because it's a system that
can look inside your body. In 1984, as far as I can recall, the system monitors only the outside, where you go, what you say, who you meet, but what is actually happening to your heart or to your brain, that's still off limits. But very soon we'll have the technology to have a really total surveillance regime in which you can survey
the entire population down to the level of what's
happening to your blood pressure and to your brain activity
every minute of the day. And the big question is how do we use this for good and not for evil? Again, the system, you can use the system to monitor individuals in
the service of government. You can flip that and use
it to monitor the government in the service of citizens, for
example to fight corruption. So the technology is
neutral in this sense. It can go either way. The question is how do we
formulate the good vision and then convince enough people and enough governments
to cooperate on that? - So the overall argument is
our lives will be governed by algorithms, external
algorithms which could be under the control of an
authoritarian government, a big corporation, or
indeed no one might know whose control they are under. They could be under the control
of artificial intelligence and no one is quite sure where control lies anymore. And maybe this would be a good point to go on to the points you make about artificial intelligence. You tell the story very
well about what happened on the 7th of December last year which most people haven't noticed yet. They were going about their business or they were watching
the news about Brexit or whatever it may be, but just remind everybody
what happened on the 7th when the AlphaZero program started to play chess. - Yes, so, almost every month or so we hear about a new achievement
of artificial intelligence. So one of the latest headlines was when a new program for playing chess defeated the previous computer program for playing chess. And it maybe doesn't sound like news because it has been 20 years since a computer program defeated the human world chess champion, Garry Kasparov, but what was
amazing about the new software is that it taught itself
how to play chess. It started basically from zero. They just programmed the basic rules of the game into it and didn't teach it anything
about strategy, about moves. It didn't have any
access to previous games, to the centuries-old bank of
knowledge about playing chess. It just played the games itself and taught itself chess
and strategies and so forth and went on to defeat the
previous computer world champion. But the most amazing thing
is how long it took it to reach from zero knowledge
to complete mastery, to be the top of the world. Four hours, that's it.
(audience laughing) So it took humans centuries, thousands of years, of playing chess and passing on their knowledge, and for the computer, that's it, four hours. So it's still a long
way from playing chess to taking over far more
complicated tasks in the real world but the writing is really on the wall. And as you say, I mean I don't
think we'll reach a point when we have computer overlords. The scarier, more realistic scenario is that we might soon reach a point when all the people in power, all the powerful positions, are still occupied by human
beings, not by computers. You still have a prime minister. You still have a CEO. But the prime minister chooses
from a menu written by AI. For instance, the financial
system is likely to become so complicated within
the next 20 or 30 years and so fast-moving that no human being will understand it. We are already now in the position where the percentage of
humans who can honestly say they understand the global
financial market is minuscule. Of almost 8 billion people on the planet how many people really understand how the financial system works? Very, very few. - Well the governor of
the Bank of England's here so hopefully he's one of them. (everyone laughing) - Yes, I said-- - But there aren't many others. (laughs) - There are a few but not many. Now envision a situation in 20 or 30 years when the system is so
complicated and so fast-moving that no human being is
really able to understand it. So you have there the prime minister and an algorithm counsel
the prime minister and says sir, madam, we have a problem. We are facing a financial meltdown and I can't really explain to you why because you're a human. You won't understand it. And you have three options what to do but I can't really
explain to you any of them because it's too complicated and you need to decide now
because it works in milliseconds. So you still have a human figurehead, but in more and more fields, and it can be finance, it can be terrorism, algorithms are being used
to identify terrorists on the basis of more and
more complicated patterns. And the patterns can become so complicated that again, we reach a point when the algorithm tells
the defense minister this is a very dangerous terrorist but I can't really explain to you how I reached this
conclusion but trust me. And this is not some bizarre
science fiction scenario. We are in some fields
very close to that point. - And actually you tell
a story in your book about a Palestinian who was arrested, who was, I think, erroneously identified in this sort of way. - Yeah, in Israel we
have one of the biggest laboratories in the world for surveillance and it's called The Occupied Territories and it's really a huge laboratory. And this is one of the reasons that Israel is one of the
leading countries in the world in the development of
surveillance technology. And there are glitches like this case of somebody posting a Facebook post and the algorithm mistranslated good morning as let's kill them. And there are two words which are very, very close to one another. And (laughs), and of course
they went and arrested the guy. So this is a tiny, tiny example from what is actually happening right now but you can really run
with your imagination of where this can lead
us in 10, 20, 30 years. - This applies, thinking of our normal work here at RUSI, to military technology as well, doesn't it? And normally now in a
democratic government it's still a human decision
to use lethal force but soon we will find that the next generation of war planes doesn't have to have a pilot or that our ships could be attacked by a swarm of a thousand networked objects with there is no way a
human reaction can deal with and you have to have your artificial intelligence
defense against that and indeed artificial
intelligence decision to launch your defense against that. And I think you say in your book that autonomous weapons are
a disaster waiting to happen. Is there anything we can do about that? Because we would have to stop almost the whole development of military technology to
prevent that, wouldn't we? - No, not necessarily but
certainly we are entering an AI arms race which
is leading us very fast towards a world of
autonomous weapons systems. I think it can be a disaster for humanity, especially if these weapons
are not in the hands of responsible liberal
democratic governments but in the hands of either dictators or all kinds of terrorist organizations or criminal groups and so forth. And the only real way to stop it is through strong global cooperation. This is one of the best
examples I mentioned earlier, (clears throat) that we
need global cooperation to deal with technological disruption. So the easiest example
to give is killer robots, or autonomous weapons systems. It's very, very clear
that you cannot regulate these kinds of technologies on
the level of a single country or even on the level of a few countries. If you have a treaty
banning killer robots, autonomous weapons systems, only between say European
countries and that's it then very soon the European
countries themselves will break their own ban because nobody would like to stay behind. If the Chinese are doing it
and the Russians are doing it, we'd be crazy not to do it ourselves. And it's not enough to
sign a treaty banning autonomous weapons systems. It's much more difficult than
even with nuclear weapons because with nuclear weapons
it's still quite difficult to develop, to have a
serious nuclear program which is completely secret. You can't really develop nuclear weapons without the world knowing about it. But with artificial intelligence and autonomous weapons
systems it's much, much easier to have a completely secret program. And also you can of course
develop it in civilian context. Oh, it's not a killer robot,
it's a self-driving vehicle and just a few tweaks turn it into an autonomous weapons system. And for that we need not only
a treaty, we need real trust. It's not impossible. You can have such trust
between even former enemies. If you look at France and Germany today a hundred years after the
end of the First World War, today I think there is enough trust between France and Germany. But if the Germans tell
the French trust us, we don't have a secret
laboratory under the Alps developing killer robots
to conquer France, I think the French will trust them and the French have a
good reason to trust them. It's very difficult to see,
say, China and the U.S. reaching the same level of trust. They are certainly not
heading in that direction at the present moment but unless we have this kind of trust it will be almost impossible
to prevent an arms race in autonomous weapons systems which could be a disaster for humankind. Again, one of the problems
with autonomous weapons systems compared with nuclear
weapons is that with nuclear weapons, unless it's an all-out war, there is not much you can do with them. But with autonomous weapons systems, once you develop them they are
not just waiting to be used in a doomsday scenario. They can be used in a
lot of other scenarios. So the kind of mutual assured destruction that prevented the use of
nuclear weapons since 1945 is unlikely to be relevant for
autonomous weapons systems. Once we develop them they are
much more likely to be used in a lot of contexts
unlike nuclear weapons. - And this is another alarming diagnosis that you give there, but overall you also tell us to get war and terrorism in perspective, don't you, really, that the number of people, the proportion of people in the world who die from violence is smaller than it's ever been. Far more people, I think
you say, are killed by sugar than by gunpowder these days. And you ask us to have
terrorism in perspective. Another of the metaphors I
really, that sticks in my mind from your, I think that this
is, that I've heard you, seen you write more than
once, is the terrorist is like a fly in a china shop, you say. It cannot on its own push over
a single piece of crockery but it can get into the ear of the bull that is sitting in the china shop and make that bull so
angry that it stands up and wrecks the china shop. And this is quite good
advice to governments and media organizations to
have things in perspective. Do you want to expound on that? - Yeah, I think there's a big difference between terrorism and warfare. With terrorism, at least in the shape that we've seen it so far (again, autonomous weapons systems can be a game-changer also in the field of terrorism), at least until today the ability of terrorists to actually kill people, take over states, wage war has been extremely limited. If you look at the number
of people actually killed it's minuscule compared
to the number of people killed by air pollution or
car accidents or obesity. As far as I can remember, at least in a place like Western Europe, more people die from nut allergy than from terrorism
over the last 10 years. You should fight the nuts
before you fight the terrorists. (audience laughing) And part of it is because
of the immense effort invested in fighting terrorism. If we invested no effort
in fighting terrorism then yes, there would be more casualties. But also there is a reverse logic there, that the power of
terrorism really comes from the overreaction of
governments and of armies to the terrorist threat which is like the bull in the china shop. If you look at the
history of the Middle East in the last 20 years then
it wasn't the terrorists who wrecked the china shop. It was the bulls who became
enraged by the terrorists. And now war is a very different situation. We are living still in the
most peaceful era in history also in terms of war. And despite the deterioration
and the rising tensions in the international system
over the last five years or so we are still living in the
most peaceful era in history. And as a military historian and as a specialist in the Middle Ages, what amazes me about things like Brexit, like the Scottish referendum, is how peaceful everything is so far. In previous centuries, a question like whether Scotland should be independent of the government in London and whether Britain should be part of a huge European political system or not could only be decided
through a major war with thousands, maybe millions of people being killed and wounded and displaced. And now so far it is decided by very peaceful referendums and elections in which a few people
may be killed by fanatics but not millions. We don't need to wage battles. Also if you look at the rise
of right-wing populist parties like in Hungary, like in
Poland, like in Italy, so far they are still
far, far more peaceful than their predecessors a century ago. Like I watch what is happening in Hungary. You become very worried if you start hearing Viktor
Orban saying things like the wicked Romanians are
holding sacred Hungarian lands in the Banat in Transylvania and we need to go to war
to reclaim these lands and if you see hundreds
of thousands of Hungarians volunteering to die in the trenches in order to recapture the
Hungarian lands of Transylvania. And so far it's not happening. I don't know what, I
mean maybe in five years we'll see something like that. And the same when you look at Italy. I wrote quite extensively
about the Italian involvement in the First World War. Italy entered the First World War in 1915 in order mainly to conquer/regain the lost Italian territories
in Trentino and Trieste, quite a small piece of land today partly in Italy, partly in Slovenia. The Italians lost about
half a million people, soldiers killed and more than
a million soldiers wounded in a struggle over that. Now do you think that Italians
today would be willing to, I don't know, have 50,000 soldiers killed to capture a few dozen kilometers from Slovenia? So far it doesn't seem like it. So with all the really
frightening and disturbing rise of the nationalist demons they are still nothing like
what they were a century ago. They can become like this very quickly but so far we are still
in a far better situation. - So the danger then
is not so much that the world will break down in violence as that, in living longer, often more secure lives, we will become imprisoned in this kind of matrix,
part of the data flow and possibly divided in a different way, I think you argue in Homo Deus, that we might see the upgrading through intelligent design in the future, the upgrading of humans
through bioengineering, of some humans who have less
need of all the other humans in the age of artificial intelligence and that that new divide
opens up in society between those healthy and wealthy enough to be upgraded and those
who are no longer relevant. - Mm-hmm (affirmative), yeah, and this is one of the long-term dangers. I mean the technology will on the one hand make it possible to start
enhancing and upgrading humans and on the other hand
especially the rise of AI will make more and more humans
economically unnecessary, useless, and therefore
also politically powerless. And different parts of humanity might have different futures, and we might see really a process of some kind of speciation in Homo sapiens. This is a long-term danger. The immediate danger is
that we will get embroiled again in these nationalist battles and we could see a return to the extreme nationalism
of the 19th century and the early 20th century and that involves two big problems. First, we'll have all the old problems of things like the First World War but in addition we will
have the big problem of not being able to do much
about the new technologies. I mean the next 20, 30 years are crucial for our ability to regulate
the new technologies and prevent the worst outcomes whether it's autonomous weapons systems, whether it's massive unemployment and the rise of the useless class, whether it's the use of bioengineering to start enhancing humans in all kinds of maybe frightening ways. And we won't succeed in preventing
these dangerous scenarios if we spend the next 20 years fighting about the borders
between different nations and levels of immigration
and things like that. - So we should be focused on things like the ownership of data and the, which I think you say is a big
political question coming up, the regulation of al, the
transparency of algorithms. Understanding these
problems might allow us to get a grip on them
before it's too late. - Yeah, I mean one of
the biggest questions is as you said the ownership of data. Data is now becoming the most
important asset in the world. In ancient times it was land, so a lot of political conflicts were about the ownership of land. And then in the last 200 years machines replaced land as the most important asset. So a lot of politics was the struggle about who controls the machines, who controls the factories, the mines, the train system, the electricity system. Now data is replacing machines
as the most important asset. And I think that one of the
biggest political questions now is who owns the data. And most people and most
parties and most governments are hardly even aware
that this is the question. So the data is being
harvested and monopolized by a few corporations
and a few governments whereas most governments
and political parties and voters and citizens, they are not, they are hardly aware
that a) this is happening, b) this may be the most important thing that is really happening right now. - And let me, in a few
minutes we will open up for some questions from our guests here. And I just want to ask you about a couple of other things. I want to ask you about animals. You write a lot about the animal world. You've written about how there were at least five other species of humans and we managed to get rid of them all. Then we got rid of a lot of other things. And then you write about
how we mistreat animals today, particularly other mammals,
particularly farmed animals. And you argue very convincingly
that in other mammals, as in humans, there has to be a great bond, for instance between mother and infant, and that is something we rupture, through modern farming methods, tens of billions of times over. Do you think people will look back in
a hundred years on our age like we look back on the Middle Ages as a barbarous, ignorant time when we didn't really understand what atrocities we were committing? - It could be. Again, I don't know who
will dominate the world in a century but certainly
if you think in terms of what kind of atrocities are
committed today in the world. You look back at things like slavery and you think how could people actually do it and not see it? Like the people who wrote the American Declaration of Independence, as everybody knows, many of them had slaves. How can you just not see the gap there? And you have very intelligent
people who didn't see the gap. And what's happening
today which is like that? One of the best examples is the way we treat other animals. So there is a chance that in a century people or entities will look back on today and ask: how could intelligent,
compassionate, wise people just not see what they were doing on a daily basis to billions? It's not like isolated cases. It's an entire system of
billions and billions of animals. And this is especially so now that basically the scientific discussion is over. You can still hear some
people who are not scientists say things like well cows don't mind. Cows don't have a mind. They don't have consciousness. They don't feel pain. Or even if they feel pain they
certainly don't have emotions like a bond between a cow and a calf. This is all, you're humanizing cows. But now there is a very
wide scientific consensus, at least in the relevant
fields in the life sciences, that certainly emotions,
the basic emotions like the bond between a mother and child is common to all mammals and
probably to many other species. So the scientific consensus on this is widespread. The scientific discussion is really over in this respect, but as in many other cases, what's obvious to scientists is still far from obvious to the general population,
to the politicians, to the market. - And if we agree with your diagnosis on all of these issues, when we think about how we should live differently or what we should think about doing in the future, really as I see it you're calling for two things. One is that we have to
outrun those algorithms in our own minds. We have to invest in human
consciousness in whatever way, in meditation and whatever way we can and that is an individual task. And you're saying, as you've
mentioned several times tonight, we have to globalize politics or sort of find the global solution, which is why nationalism is not the answer. But on that point one has to admit that international politics
is not currently going in a constructive direction.
- No. (laughs) We're probably going in
the opposite direction at the moment. - What if political leaders refuse to globalize themselves? Does technology at least give us all the chance to do something anyway? What if five billion people who are connected together
now technologically all decide that they were
going to do something about ecological collapse,
about the treatment of animals, about whatever it may be? Is there any, should we
take any hope in that and do you think this redefined liberalism might somehow call on that? - Well first I want to emphasize
there is no contradiction between this global
thinking and nationalism. It depends on how you understand
being a good nationalist, being a patriot. If you think that being a good patriot is taking care of your compatriots, of their security, their welfare, then I think that good patriots
today should be globalists because the only way to really protect the security and the prosperity
of our fellow citizens is through global cooperation. So I think it's very dangerous to think in these kinds of either/or terms, that either I'm a nationalist or I'm a globalist. No, I think that today
really good nationalists should be globalists. - But we're not getting more. We're getting a bit less global in our unity and outlook, aren't we, even with whether we're
nationalist or whatever we are? - Yeah, so as for, if the political system is going in that way,
can people just use, say, some technological platform to
unite and do something else? I don't have a strong belief in that. And I tend to be skeptical
of technical utopias, that oh, we can just invent
some algorithm and platform and circumvent the entire messy political process and system. It doesn't work like that. We have enough, I think,
experience from the last 20 years that many of the utopian
visions of the 1990s about what the Internet would do in terms of connecting people turned out to be completely wrong, even where there were partial successes. Like you look at the Arab Spring. So yes, you had something like the Twitter and Facebook revolutions. You could use Twitter and
Facebook to get 100,000 people into Tahrir Square and topple Mubarak but the problem then is that you needed old-fashioned
politics to fill the vacuum. The Facebook and Twitter
platforms were not up to it so you got two very, very
old-fashioned organizations, the Muslim Brotherhood and
the Army filling the vacuum and it became a struggle between them. So if you want to have a
big one-time demonstration or even a revolution,
yes, maybe you can have a technological shortcut like that. But for the really difficult part, building a political mechanism, you don't really have these shortcuts. - I think I agree with you but I'm looking for the
hope, you know, that... (audience laughing) - But I can say something-- - The politician's mind
looks for the, you know... there's got to be an answer, and there's a great prize for the person who finds the answer. - Well I mean again, to finish
this part on a hopeful note I would say two things. I mean we just kind of
marked the 100th anniversary of the Armistice of the First World War. And if people back then could see the situation of Europe
today despite Brexit, despite the rise of populism, they would say this is
an absolute miracle. - Yes. - So looking at the last hundred
years of European history I think can give us a lot of hope that people can rise to the challenges. And similarly if you
think about the last time that we faced this kind of major technological disruption, in the shape of nuclear weapons in the 1950s and 60s, maybe in the same think tank right here, you would have had somebody sitting here talking about the nuclear age and a lot of gloomy people in the room being convinced that we
all know how it ends. It ends with a nuclear Armageddon which destroys human civilization. And it didn't end like that. The Cold War ended peacefully. So I think it's not
inevitable but it is possible that we will manage to rise
up to the new challenges also. - Well let's turn to our audience here and see if we have some, quite I'm sure we will
have some questions. And if I pick you out please,
a microphone will come to you and do say your name or
organization if relevant. So Jonathan, one of our
senior figures here at RUSI. - [Jonathan] Hi there,
Jonathan Eyal, from here at the Institute. I want to ask you about how
linear these projections are. I'm just sort of reminded, you mentioned the First
World War, of Norman Angell and his lovely book The Great Illusion, which four years before the First World War predicted that there couldn't be any war because economies are so interrelated. He got a Nobel Prize but not much else. (William and Yuval laughing) So, I mean there are certain
linear projections. Admittedly you did define
certain ways around it in terms of common action, but how robust are your projections, and did you take into
account the possibility that the linear projection
is absolutely the wrong one? - How linear and how
robust, then, are the-- - No, I don't believe in linear projection so it certainly doesn't
move in a linear fashion. What I try to do at least in my writing is not to predict the future. I think it's both
impossible and also futile because if you predict
the future accurately you can do nothing about it. What's the point of prophesying something which is inevitable and you
can do nothing about it? So what I really try to do is
map different possibilities in the hope of influencing even a little the choices people make in order to prevent the worst outcomes. So when I say something
like artificial intelligence might drive millions of
people out of the job market, create a useless class, create an immense and very unequal society,
this is not a prophecy like this is definitely going to happen. It's just a very dangerous possibility which we can do something about today by adopting the right policies. And it's the same when we
talk about the threat of war. I don't think that a new
global war is inevitable. I don't think it's impossible either. I think that again,
Norman Angell was correct in saying that it would
be a terrible disaster for humankind to enter into a global war. He was absolutely correct about that. It's just very unfortunate
that humans tend often to make very bad decisions, both as
individuals and as collectives. And then what strikes me
about many of these decisions is really how unnecessary they are, that if you look, say,
at the Second World War, then you think: what went through the minds of the
Germans and the Japanese? I mean for me really one
of the most amazing things about the Second World War is
that 20 years after the war the losers were more
prosperous than ever before. So why did they need the war? I mean if you can be so prosperous after suffering the greatest defeat in history, it's obvious you didn't need a war in order to be prosperous. But they just made the wrong decision. They thought that without a war, without conquering China,
without conquering Russia, they would never be able to prosper economically, and they were 100% wrong about that prediction, but unfortunately that is how they felt, and... - Let's take another question. Yes, I think I can see Baroness Neville-Jones back there. - Morning, Neville-Jones. You don't talk much about
the distribution of power. I mean one of the things that seems to me to be the case is that the digital age has democratized power to some extent inside societies, and so what people do
collectively counts more. What assumptions do you
make about education? I mean wouldn't it be possible to imagine a more optimistic scenario
where decent education means that actually people
take more responsibility and make better decisions? - Mm-hmm (affirmative). - I mean that seems to me to be a possible line of development which doesn't lead you to a doomed scenario. It leads you actually to a better and a more interesting society. And I would have thought that the fourth industrial revolution is going to increase the tendency of economic as well as political power being distributed at many levels of society
instead of just at the top. - Right, education and
responsibility throughout society. - These are two different points. I mean if we're talking about jobs, then yes, definitely
there will be new jobs as the result of the AI revolution. The big question is whether
all the people who lose jobs will have the ability to get an education and retrain themselves to fill the new jobs. (Neville-Jones speaking faintly) Hmm, what? - [William] We better not have heckles because of the microphones. - So this is one big question. Some people might but not all people and you could have a division
according to countries in which some countries
have both the economic and social capital necessary
to retrain their population to take advantage of all
the enormous opportunities of this revolution but then other countries might
be left completely behind with no way to progress because the main thing that
enabled countries to progress in the 20th century was cheap manual labor and this won't work in the 21st century. So you could have an even more
extreme division of the world than what we saw in the 19th century with the Industrial Revolution. The other point is about
individuals taking responsibility for decisions in their lives. Now there is a very optimistic scenario that AI and these monitoring systems that constantly track
what we do and how we feel actually enable us, empower
us to make better decisions. Like to take again the example about deciding what to study in college. So when you need to make that decision maybe a system that has been
following you from childhood presents you with a
far more realistic view of your abilities, of your potential, of the opportunities in the job market, of the difficulties of the
different training courses, and then you can make
a much better decision, much more knowledgeable
decision about your future. That depends first on whether we develop the right kind of AI systems and secondly, whether
people have the education and really the motivation because again, the danger is
that people might just be lazy and just learn to follow whatever recommendation
the system gives them. Like to take a real example, one of the most important
abilities of humans is to look for information. Today a lot of people
have completely renounced their personal abilities
to search for information. They just slavishly follow the algorithm. The truth is the first three results of the Google search engine, that's it, and the only way they know how to look for information is just to ask Google. And it doesn't have to be like that. I mean the Internet gives
us immense abilities to look for information
but a lot of people don't utilize these abilities, and in this sense they are actually worse off: their skills of looking for information are far lower than 20 or 50 years ago. - Malcolm Thomas? Microphone at the front. It's coming there behind you. - Thank you so much and
I particularly enjoyed the way you started by saying
so much of recent history has been explained
through competing stories and the declining number of stories. But one of the stories in our field which is still very powerful
is nuclear deterrence and it's one that's shared
by communists and liberals. - Luckily. - And during the Cold War. It's one that's shared
by India and Pakistan and Israel and North Korea today and maybe that shared story
helps the peace, I don't know. But I was reflecting on what the impact of AI decision support
might be in that area. I mean I would be deeply pessimistic about the chances of international cooperation on killer robots of the sort you suggested, and also I think in relation to nuclear weapons, but it reminded me of
the 1983 film WarGames where a teenage hacker gets
into the Pentagon computer, thinks he's playing a game
of Global Thermonuclear War but has actually started one and that's the pessimistic
side (laughs) of the film. But then the film ends, spoiler alert, with the computer running the algorithm, running all the options multiple times, this machine intelligence many years ago and coming to the conclusion nobody can win (laughs)
global thermonuclear war, so the only thing left to do is not to start one. So I guess the question is
maybe in a world in which leaders rely more and more on machines (laughs) in future Cuban missile crises or future September 1939 (laughs) scenarios, maybe major war will be less likely because they'll be relying more on machines. - There are many different scenarios here. One thing is that hacking and
AI are likely to destabilize nuclear deterrence, because once you don't know for sure if you really have control of your own weapons, you don't know for sure if the other side has control of theirs. Like a missile is launched
from Russia to Britain. Maybe it's the Russians, maybe it's North Korean
hackers, who knows. Maybe it's (speaking foreign language) as some famous person said. So who knows? And what do you do then? Nuclear deterrence collapses if you don't really know who is controlling the different weapons. And now the danger is that nuclear deterrence is based on this idea that nobody can win. But what happens if
technology develops so rapidly that you gain some new wonder weapon and you think that now you can win? And it can go both ways. Either you think you can win
or the other side is afraid that they are losing their last chance, that in 10 years or 20
years they will be so behind that they can't do anything
anymore so they have to act now. And so these are two
very dangerous scenarios in which AI can destabilize
nuclear deterrence. And then the third scenario
which I earlier referred to is that with AI you can have a lot of extremely dangerous and violent attacks which are not an all-out war. You can take down entire
systems in a country and nobody is sure who
is doing that and why. So it's a much, much hazier situation. If nuclear deterrence is a kind
of super rational chess game between just two players in
which everything is clear and you know who is making every move, then a future scenario is like this completely hazy situation in which a lot of things are happening and nobody's sure about anything, and that's a very, very
pessimistic situation to be in. - There's just time for
a couple more questions. There's a lady at the back there and then we'll come down to
the front row again here. - Hi, my name's Claire Hajerj. Thank you for a fascinating talk. The politicians of
today are grappling with what we all believe are
the big issues of our time, should we be in Europe or not in Europe, should we have immigration or not, hire Cockneys, all of these issues, and AI is a side issue
to be dealt with later. So if you were speaking
directly to governments what would you be telling
them to do now to prepare? That's one question. And a second related question
is you discussed the scenario where the whole democratic
process is called into question, because we have algorithms that can vote for us, and therefore why have elections? Democracy as the dominant
political system in the West has its strengths but it
also has its weaknesses. Can you imagine a scenario
where the advent of AI and that kind of
technology could prompt a maybe much-needed rethink on how we approach the political process? - Well about the first
question of what to do, there are many things that can be done but the first essential
step is to build trust. And the obvious thing is
you cannot regulate it on the level of a single country. You need strong international cooperation. You need strong international trust in order to do anything
meaningful about AI regulation. So the first step is to
build that kind of trust. Without this we can have
a lot of very nice ideas about what kind of AI to develop and what kind of AI not to develop but we won't be able to enforce it because we'll be in a
kind of race to the bottom, where everybody says yes,
we don't want to do it but we can't trust the
other guys not to do it so we must do it first. So this is the most important first step. What was the other question again about? - [Claire] What political systems in AI-- - What political systems
might turn into with AI? Could it be something better? - It could be something better. Again, like any technology, it can be used in many different ways. If the AI systems are used to empower individuals and to surveil the government, for example to fight corruption, then this can really strengthen democracy in a new form. But there we need to realize
the immensity of the challenge and the fact that we have very
little time to confront it. - And Nicolai Townsend in the front row here, right at the front. - [Nicolai] Thank you. How do you think the relationship between the U.S. and China will pan out? (everyone laughing) - I don't know. - He was trying not to make predictions. - I don't know. I mean again it's one of the
most dangerous developments now in the world, is the
beginning of an arms race between China and the U.S.,
especially in the field of AI. Three, four years ago hardly
anybody talked about it. Now we are really in an arms race. Certainly you see it from the side of the Chinese government. The American government is far
less aware of this arms race. I mean at present, in a curious way, it's an arms race much of which is actually between a government on one side and private corporations on the other side. But the U.S. government is also catching on and starting to realize what is happening, and this is extremely dangerous because this almost
guarantees the worst outcome. And I think that, again, it's not inevitable. We can reach a situation of
real trust and cooperation but for that you need to build trust and at present we are going
in the opposite direction. And this is again one
of the most frightening developments now in the world and the really sad thing is
that it's not inevitable. It's not really necessary. It's not like a law of nature
that there must be a clash and an arms race between
China and the U.S., but we are on that course, and at present the situation only gets worse and worse. - And Yuval, finally, is there anything we've not asked you about that you were burning to
tell us about this evening? - No, I think we've covered
quite a lot of ground. - We've covered the-- - [Man In Audience] Excuse me, can you talk about
meditation at this point? - About meditation? - Okay, we'll take that. In that case we will allow that one other question then. So, I referred to this earlier, and you say to outrun the algorithms we have to know ourselves. - Yeah. - I think you say actually
that people wonder whether you're escaping from reality in your two hours of meditating and you say actually that is reality, before you go back to
all the tweets and emails and puppy dog videos and things like that. (audience laughing) Expand on this point and then we'll close. - Yes, so I mean the
oldest advice in the book is to know yourself. For thousands of years you have all these philosophers and sages and prophets, whether it's Socrates or
Jesus or Buddha or Confucius going around telling people
know yourself better. Really understand who you are,
what's going on in your mind. Why do you make the choices that you make? What's really in control of you? And this was always very good advice, which most people never followed, but it was good advice. But throughout history you
did not have real competition. If in the days of Socrates
you did not make the effort to really get to know yourself, you were still a black box
to the rest of humanity. Maybe not to your mother,
maybe not to your spouse, but to the rest of humanity
you were a black box. But this is no longer the case. As we speak now, through your mobile phones or whatever, there are all kinds of
corporations and organizations and governments that
are very busy right now trying to hack you. And to hack you means to get to know you better than you know yourself. And once you reach that
point they can predict you, they can manipulate you, and you won't even
realize this is happening. The easiest people to manipulate are the people who believe in free will, that I'm making all my
decisions out of my free will so nobody can really manipulate me. And the belief in free
will was all well and good for centuries but now it's
becoming really dangerous because it makes us
incurious about ourselves. If you really believe that
every decision you make, every choice you make in
life reflects the free choice of your soul or spirit or whatever and you think you know yourself perfectly, what else do I need to know about myself? And it's kind of a barrier, a curtain to really starting
to explore your inner reality. If you realize no, I
know actually very little about what's going on in my mind, like the next thought
that pops up in your mind or the next desire that
pops up in your mind, where did it come from? Why do I think about these things? Why do I want this and not that? And you start realizing this is the result of all kinds of biological mechanisms, genetical mechanisms, outside influences, then you also become much
more curious about yourself. And one way to understand meditation is meditation is simply
a systematic observation of what's happening inside you to start having a much more
realistic understanding of who am I, where do all
these thoughts and emotions and desires come from, is it really me, is it the influence of this or that? And again, for me it was one of the most shocking experiences. I started meditating when I was doing my Ph.D. in history in Oxford 18 years ago and it was shocking. The first great shock was that I know almost nothing about myself. And to give maybe a simple example, it often happens that, I don't know, you have a big test tomorrow or you have a big presentation tomorrow. You have to speak on
The Today Show tomorrow and you go to sleep at night and you say I must get a good night's rest. I have an important test tomorrow. And suddenly all these thoughts and worries come up in your mind and you say shut up, I need to get some rest, and you can't. And in a way this is a
little like meditation. You don't want to do it. You want to fall asleep but you have this stream
of thoughts and emotions and worries coming up and you realize I have no control over it. I can't tell it just stop. I can't direct where it is going and meditation is like that but much, much more
systematically and deeply. Maybe you go for a course, a retreat of 10 days, and for 10 days, not just for half an hour before you eventually fall asleep, you just observe all the sensations and
emotions and thoughts that are coming up in you. And for me, and for almost anybody I know who went to do such a meditation course, it's really a shocking experience to, maybe for the first time, take a real direct look at what is actually happening inside you. And this is far more interesting, and far more being in touch with reality, than all the tweets and all the emails and all the funny puppy
videos on YouTube. (laughs) - That was a good final
question, thank you for that. And thank you for that answer. Thank you to everybody for coming tonight. I'm sorry we couldn't
get in all the questions. If you could just stay in your places as we finish, just for a moment while I take Yuval out for refreshments, I'd be very grateful. And thank you so much
for joining us tonight because there are many
people here including me who have read hundreds of
books and pamphlets and blogs, probably thousands of them, but despite the fact that we've read all of those things, your writings do make us
think about new things and new ideas and maybe
re-engineer our own brains anew and maybe look deeply into ourselves as you've just advocated. And so to do that and to
do it for millions of people is a great achievement and we
pay tribute to you for that and thank you for being
here at RUSI tonight. - Thank you. (audience clapping)