- I wanted to start by
asking you a question I think everyone is
thinking about at this time. In 2050, are we all going to be useless? (laughs) - No, not everyone. As I said there will be new jobs, there will be things to do. The really big problem is the problem of retraining and reinventing, and adapting to the new conditions. And this is not a problem
we can postpone to 2050, because it's an urgent
issue today in education. What do you teach kids today in school that will still be relevant in 2050? There is a serious fear that much of what we
are teaching kids today will be irrelevant, and we don't know what
to teach them instead. - You've spoken earlier about
school having reality maps, reality maps for the future. Can you elaborate a little? For instance, a school today in 2018 is looking ahead. There is a child in class one who is looking ahead to the next 12 years, the next 16 years of studies. What is it, in terms of a reality map, that they should look at? - Well, nobody knows what the job market will actually look like in 30 years, which is an unprecedented situation in history. We were never before in a situation when parents and children, sorry, parents and teachers, look such a short time ahead and say, "We just don't know what you kids will need to do well in order to have a job and be fruitful members of society." So our best bet is to focus
on emotional intelligence and on mental resilience. Because the one thing we know for sure that people will need is
the ability to keep learning and keep changing themselves
throughout their lives. The old model was that you... life is divided into two parts. In the first part of
life, you mainly learn, and in the second part
of life, you mainly work and make use of what you learned. But this is becoming obsolete. And we need to... the most important thing we
need to teach young people is how to keep learning and keep changing throughout their lives. - What is very interesting is that you spoke about
how artificial intelligence is pretty much the way to go. Not that it will take over, but that's going to be
a very dominating force in the coming future. And when we speak of
artificial intelligence, software engineers come into play. Now software engineers are the people who are going to be most looked at, because these are the people who are going to be
developing these technologies. But at the same time, these are men and women trained to design. Not trained to philosophically guide our future. So for instance, if there was a software engineer today, and you could have five things to tell him in his profession, what would those five things be? - Well we don't have much time, so well let's start with one. The first thing is the absolute need to incorporate an ethical training into the career or the teaching of software engineers. I mean, of all the people
today in the world, the ones who most need ethical training are software engineers, even more than lawyers or judges. Because they shape the world, and they need to do it responsibly. I'll give an example. More and more, even today, the question of discrimination turns out to be a question of software design. If we want to fight against
discrimination of women, discrimination of ethnic minorities, discrimination of gays and lesbians, then we need to think in terms of how do we design software? When people today apply for a job, or apply to a bank to get a loan, more and more often, their application is
processed by an algorithm, not by a human being. So we need to ensure that the algorithm doesn't discriminate
against particular people. For instance, there were cases in the United States. Of course, you know that you shouldn't code an algorithm with racial discrimination, to discriminate against black people. But what happened is that the algorithm started to discriminate against people from particular post codes,
particular neighborhoods. And all the people from that
neighborhood were black. And the people didn't realize that this was actually a racial
discrimination in disguise. So if we want to fight
against discrimination, the front line now is
with these algorithms. I mean you apply to
the bank to get a loan, and the bank tells you, "No, we don't give you a loan." "Why not?" you ask. And the bank replies, "We don't know. The algorithm said, 'Don't give this person a loan.'" And we don't understand why
the algorithm decided that. We trust our algorithm. The algorithm goes over
enormous amounts of data, especially personal data, and finds patterns of people, reliable people and unreliable people, and based on that, the algorithm decides
whom to give a loan to. And we can't understand how the algorithm is doing it; if we could, we wouldn't have needed the algorithm. So these questions of discrimination are now actually decided by the people who design the algorithms. So they need good training in ethics to beware of, for example, discriminating against particular groups.
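A minimal, hypothetical sketch of the proxy effect described above: a lending rule that never sees a protected attribute can still reproduce discrimination when it relies on a correlated feature such as postcode. The data, group labels, and approval rule below are invented for illustration only and are not taken from the conversation.

```python
# Hypothetical sketch: a "group-blind" approval rule can still discriminate
# through a proxy feature (postcode). All data and rules here are synthetic.
import random

random.seed(0)

# Synthetic applicants: postcode "A" is mostly group X, postcode "B" mostly group Y.
applicants = []
for _ in range(10_000):
    postcode = random.choice(["A", "B"])
    weights = [0.9, 0.1] if postcode == "A" else [0.1, 0.9]
    group = random.choices(["X", "Y"], weights=weights)[0]
    applicants.append({"postcode": postcode, "group": group})

def approve(applicant):
    # A rule learned from biased historical data: it penalizes postcode "A",
    # even though it never looks at the applicant's group.
    return applicant["postcode"] != "A"

# Approval rates end up very different per group, despite the rule being "blind".
for g in ("X", "Y"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"group {g}: approval rate {rate:.2f}")
```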
- But interestingly, organisms are algorithms, and algorithms can be hacked. So at the end of the day, is it a catch-22? Even if you train these software engineers, if we are living in such a predictable manner that we can be hacked, is it game over? - No, I mean, it can be used. I've focused on the dangers of these technologies, but actually there are huge advantages also. If you think about something
like traffic accidents, then today in the world every year, more than a million people are killed in traffic accidents. That's more than the people killed in all the wars together. And the vast majority of traffic accidents are caused by human errors. Somebody drinking alcohol and driving, somebody falling asleep, things like that. If we replace human drivers with self-driving vehicles, we could save a million people every year. This is a wonderful thing. Similarly even think about discrimination, then there are things
you can do with computers which are far easier than with people. I mean, you can tell managers
and bankers and officials, that it is wrong to
discriminate against women, against gays and lesbians, against black people. And they can even agree with you. But at the end of the day, there is something stronger than their intellectual understanding, which is the subconscious, the deep feelings and biases. So even if somebody says, "Yes, it's wrong to discriminate against women, against black people," when he actually comes to make a decision about whom to hire for the job, he will discriminate because of his subconscious biases. Most of what's happening in our minds, we are not aware of. With computers, the great thing is they don't have a subconscious. Whatever you tell the computer to do, this is what it will do. So if you code it in the right way, it's much easier to fight
against computer discrimination than against human discrimination. So it's not all bad. And of course, on the human front, what we need to do is to get to know our own biases and our
own weaknesses better. Both in order to avoid
harming other people, but also in order to
avoid being manipulated by the new technologies. Because the way to manipulate people in almost all cases is by using their own weaknesses against them. And the easiest people to manipulate are the people who are not
aware of their own weaknesses. And they can't even imagine that somebody might be manipulating them. - So in an age of technology, what we really need are philosophers. We need spiritual guides, we need a lot more debate about what we know and feel inwards. It is about controlling the inside and less of the outside? - It's both. I mean, we always needed philosophers and the spiritual guides, but I think, yes, we need them today more than ever before because we are more
powerful than ever before, and because technology is
turning philosophical questions into practical questions of engineering. It's really quite an amazing
time today to be a philosopher, because we have had discussions of things like free will or ethics for thousands and thousands of years, with very little impact on what is actually happening in the world. And philosophers are very patient people, so they can have this discussion for thousands of years without changing a lot. But now these questions are becoming practical
questions of engineering. And engineers are impatient people, they need answers now or next month. So, to give a concrete example with all the talk about
self-driving vehicles. So one of the most famous examples is what happens if a self-driving vehicle is driving and the owner of the car is
sleeping in the backseat, and suddenly two kids
jump in front of the car running after a ball. And the car needs to decide; it has like half a second to decide whether to drive over
the kids and kill them, or swerve to the side, and there is a truck
coming from the other side; it will hit the truck, and probably the owner of the car, who is asleep in the backseat, will be killed. And you need to decide. Now these are the kinds of questions that philosophers have been debating for thousands and thousands of years. And it had very little impact on actual human behavior, because even if a person says, "Yes, the right thing is to
sacrifice my life for the kids," when the crisis comes, he doesn't do it. Because he doesn't act from his intellect or his philosophical views; he acts from his gut instinct, to save himself. But now, you can have the best philosophers in the world. You can put them in a room, you can give them one year to come up with a decision, and whatever they decide, you program into the self-driving car, and you will have a 100% guarantee that this is what the car will do. And so you need to decide. You don't have 1,000 more years for the philosophical discussion. The car company won't wait. And some people say, "Okay, we can't decide,
let's just leave it to the customer." The car company, let's say Tesla, will just develop two models of the Tesla: the Tesla Altruist and the Tesla Egoist. (audience laughs) The Tesla Egoist drives over the two kids. The Tesla Altruist sacrifices its owner. And the customer is always right. We put these two cars on the market; whichever car the customers buy, what do you want from us? So if we want to avoid leaving these kinds of questions to the market, we need philosophers
more than ever before. - Which car do you think will sell more? But that's a question for later. But it's interesting because you said, let's put all these philosophers in a room for a year and they come out and whatever they decide that's what we implement. And that's obviously got to be done at a global level. - Yes. - And like you said, these are all global problems. No one country can solve them,
we need global solutions. But in the current scenario, whether you look at president Trump or you look at Brexit,
it's more me, myself. - Yeah - Everyone is closing borders. Now at a time like this where more and more nations are just isolating themselves, how do we reach these global solutions? - I don't know. Things are looking bad in
terms of global cooperation over the last five years. They're becoming worse and worse. It's a chain reaction. Whenever one country becomes more nationalist, more isolationist, the other countries around it react in the same way
because they don't want to be the only ones who carry the burden of all the global responsibility. And this has been the trend in the last five years and there is no easy solution. I think we need to raise the awareness of people, ordinary people and leaders that unless we make common cause to deal with these global problems, then everybody will
suffer the consequences. And this is why I think
it's very important not to fall into the trap of thinking that there is a contradiction between nationalism and globalism. You hear president Trump and other leaders like him telling people that you need to choose. That you need to choose,
whether you're a nationalist or whether you're a globalist, then of course you need to
choose to be nationalist to be for your nation. And I think this is the wrong framework. It's a trap. There is no contradiction between the two. As I just said, I think to be a good... if you think nationalism is about hating foreigners, then yes. You can't be both a
nationalist and a globalist. But nationalism is not
about hating foreigners. Nationalism is about caring
about the people in your nation, their safety, their prosperity. Now, in the past you could ensure, you could care about the safety
of the people in your nation with little cooperation
with other nations. But in the 21st century it simply can't be done. And so I think that there's
no real contradiction here and if people realize it, this might help turn back this tide of isolationism. - In the same thought scenario, and I refer to your book, 21 Lessons for the 21st Century, you do write that the liberal
story is losing the fight. That the liberal story, which had sort of gained in the past, is now seemingly losing out. And very interestingly, you said that countries like India and Brazil, which actually did sort of adopt liberal economic policies, really grew, and a lot of nations which didn't, didn't follow. But what I did want to sort of point out is that in terms of liberalism, when you look at economic policies, for sure, countries are growing, but what about liberal social policies? A lot of people in India today would say that our current government isn't all that liberal. So is it just about being economically liberal, or have you got to go the whole hog? - No, no. Liberalism, the liberal story, is much, much
broader than just economics. There's a lot of confusion about the term liberal today in the world. People don't really
understand what it means. And very often people
think that they are... again that they can contrast
liberal with conservative and think that liberalism
is just a small part of the political spectrum. But actually, at least for the last few decades, even most of the conservatives increasingly adopted the
key ideas of liberalism. To test yourself or somebody
else whether they are liberal, you should ask three questions. Do you think people
should have the liberty? Liberalism comes from liberty. Do you think people
should have the liberty to choose their own government? Or should they obediently
follow some king or dictator? Secondly, do you think that people
should have the liberty to choose their own profession and what they do in life? Or should they be born into a caste that decides what they do? Thirdly, do you think that people should
have the liberty to choose their spouse and whom to marry and how to live their lives? Or should they follow the
dictates of parents and elders? Now, if you answer yes to all three questions, you're liberal. And most conservatives, even most of the voters of Trump, also answer yes to all three questions. So a lot of the people who today present themselves as anti-liberal, if you took them and
transported them a century back, they would be in the
extreme radical liberal wing of the political spectrum. And we should realize what has been achieved over that century. Now, having said that, yes, now the liberal story is in crisis, and the biggest crisis of all is that there is nothing to replace it. Humans are animals; we are storytelling animals. We think in stories to understand the world; we need a story. In the 20th century, we had three big stories about the world: the communist story, the fascist story, and the liberal story. First fascism collapsed, and then communism collapsed, and only liberalism remained as the single story explaining everything. And in the last few years, it is also collapsing, and people have lost faith in it. And there is nothing to fill the vacuum. What you see with the populist regimes that are arising in different countries around the world is that they don't have any vision for the future. What they sell people is nostalgic
fantasies about the past, about some imaginary past,
that we can somehow go back to, which is impossible. And I think that the most severe problem of the political system today, all over the world, is that nobody can come up with a meaningful vision for the future, for where humankind will be in 30 or 40 years. A vision that addresses issues like global warming on the one hand, and AI and biotechnology and genetic engineering on the other hand. You just don't have anybody who provides guidance on that. And the populist regimes, with their nostalgic fantasies, can sustain themselves for a few years. But in the long run, unless we can find some new vision, a meaningful vision for the future, then we are heading towards chaos and extremism. - So in the current scenario, every person who is a voter is now looking at his or her government, whether it's the local government or the national government. When they now need policies to somewhat get a
framework for the future, what advice would you give to voters around the world? - Yes - That when a politician comes
to you asking for your vote, this is the key question, this is what you must demand. What should that be? - Then of course, for every country there are different local issues which are very important. But over and above that, the three or four questions to ask every politician all over the world are: if I elect you, if I vote for you, what will you do to lessen
the danger of nuclear war? What will you do to lessen
the danger of climate change? And what will you do to regulate the explosive potential of artificial intelligence and bio-engineering? And finally, what is your
vision for humanity in 2050? What is your worst case scenario that you're most afraid of? And what is your best case scenario? If we learn how to make use of all these enormous new powers, what will be the best case scenario? And if the politician has no good answer or if the politician
just keeps talking about the past and has nothing
meaningful to say about the future, don't vote for that particular politician. - I have a feeling come May 2019, we're not going to vote for anyone. (audience applauding) There is also interestingly in the book, you lay down these
hypotheses and you know, sort of lay down all
these scenarios for us. But there was one very interesting bit, a very interesting chapter, if I recall, and the title was Turn Down the Panic. You know, so it was, okay, this can happen, this can happen, this can happen, but it's all right, all is not lost. Can you just elaborate on that? Because we've talked about everything that could go wrong, but where does turning down the panic come from? - Well, it comes from what I
talked about in the beginning of my lecture here, that when you look back, you
see the amazing achievements of humankind, which for most of history were thought to be impossible. For most of history, people thought that it was impossible for humans to solve the problem of famine by themselves. Again, maybe God will do something, but we can't. And over a very short period, just the last 50 or 60 years, we've largely managed to do it. You know, 50 or 60 years ago, there was this huge concern about whether India and China could even feed all the people who live there. And now, more and more, you have the opposite concern, that people are obese, that they eat too much. And not just the rich; you also see an epidemic of obesity and diabetes
and diseases related to eating too much or too
much of the wrong food, even among the poor population. So of course we have to deal with that, but we also have to stop for a minute and realize the amazing achievement. And it's the same with the decline of violence. It's important to realize how much violence has declined, because I think it will make us more responsible and more hopeful for the future. If you think that the level of violence is always the same, then you're hopeless. You know, everything that people did in the past wasn't enough to reduce human violence, so how can we hope to do anything? It's lost. But if you realize, no, over the last 50 or 60 years there has been a dramatic decrease in violence, this gives us hope for the future and also should make us far more responsible. Because if violence re-emerges, then it is our fault. It's not the laws of nature, it's not God, it's our fault. We should try harder to prevent it. And so I think that, again, looking at the achievements of the past, there is a lot of reason to be hopeful for the future. - You know, when you speak about violence in your lecture, you say that it just takes one fool, one idiot to start that war. - Sometimes. - Sometimes. But in the current world scenario, do you see more than one idiot? A couple of them? - I'm not in a position to
comment on the intelligence and mental balance of
leaders whom I haven't met. But you certainly see that leaders are becoming far more reckless and far less responsible
in their behavior. And, as we talked about earlier, becoming far more isolationist and nationalist, basically saying, we care only about what happens to our nation. We relinquish responsibility for the rest of the world, and for the impact of our nation on the rest of the world. And this is extremely irresponsible, because in the reality of the 21st century, the idea of independent nations is simply a fantasy. There are no independent nations anymore; all nations depend on one another. When it comes to nuclear war, as long as an intercontinental ballistic missile can travel between the USA and Russia within minutes, there is no independent nation in such a scenario. Similarly with climate change, similarly with the rise of AI and with bio-engineering, no nation is independent. - I think it is a really good time to open the floor for questions from the audience. I do wish we could put you in a room with world leaders and talk to them, (laughs) but I don't think they're that lucky, but this audience is, so here... The gentleman here in the
black in the fourth row. Can we get a mic to him, please? - Hi, good evening. My name is Sameer Shetty,
and my question for you is: in your books, you've talked about the propensity of human beings to indulge in myth-making of different kinds. And I wanted to know, what do you think are the most dangerous myths of our generation? - What are the two most dangerous myths... No, the question was what is the most dangerous myth of our generation. And I think there are two of them, opposite ones. One dangerous myth is the myth of nostalgic fantasy, which I just talked about: the idea that there was some golden age in the past, and that we can somehow go back there. And this is a very dangerous myth. I'm a historian. What I can tell you about the past is that the past wasn't fun. You wouldn't really want to go back there, and even if you wanted to, it's impossible. So when you direct the attention of people towards this fantasy, instead of working to
create a better future, that is a very dangerous thing to do. The opposite dangerous myth is technological utopia. The idea that we just need
to develop better technology, and this will solve everything. And it never solves everything. Again, each technology can be
used in many different ways, and if you invest only
in technological progress without investing at least as much in the education of humans and in the cultivation
of human compassion, and human empathy, then people will do terrible things with the new technology, it will just make them more powerful. So these are the two
dangerous myths of our time. - There, at the back. The gentleman in the white T-shirt with the interesting logo. (audience laughs) - Hi, what I inferred from reading Sapiens at the start was that sapiens as a species has destroyed other species around us; our planet, what it is today, is because of us. This is something I inferred, and the second thing I inferred was that we are prisoners to our own constructs. What I wanted to ask you was, there was a big ethical uproar on gene editing and what happened in China with CRISPR. What are your thoughts on this topic? And what I'm getting at is that since we are a species that has messed up our planet in a way, what is so bad in making a better species? - Well, in theory it would be a good idea to create better humans. I just don't think that
genetic engineering is the way to do it, at least not for now, because we know so little about the human body, the human brain, the human mind. This is just what I was referring to in the lecture: that we know how to manipulate things long before we have a deep and rounded understanding of the system we are manipulating. So it happens again and again in history: we try to improve something, but because we don't really
understand the consequences of what we are doing, it leads to unintended
terrible consequences. And if we start doing it to our bodies, to our brains, it's very likely that we will try to improve
certain human qualities, which I think are very important, but inadvertently we
will change other things. To give an example from a recent study, there is a biochemical called Oxytocin which became famous as the biochemical of love because a lot of research showed that oxytocin plays a key role in forming the bond between
mothers and children, between family members, between romantic lovers in relationships. So people had the idea: wonderful, we have now found the key to love
and harmony in the world. All we need to do is
spray oxytocin in the air and put oxytocin in the drinking water, and we'll have world
peace and global harmony. But then further research showed that actually the same biochemical that causes greater love towards people you know also simultaneously causes hatred and animosity towards strangers, which from an evolutionary
perspective makes sense. So if you spread oxytocin
in the drinking water, yes, you will create more love in small circles, but you will probably also increase violence and hatred in the world. So this is just a tiny example of how complicated the human being is, and with the best intentions, if we don't deeply understand
the body and the mind, if we start meddling with things, the chances that it will go wrong are very, very high. So I would say we first
need a deep understanding of the body and especially of the mind. And then we can think of
using the new technologies to start changing and manipulating things. - It's a nationalist chemical. It teaches you to love your own and hate the ones outside.
- Exactly. There the gentleman in
the white shirt in front, yes, with specs, please. - Hello, professor. It's a great honor to see you here. My name is Gogand. So my question is regarding a point which you had mentioned in both Sapiens as well as Homo Deus. You said that religion as we
know it is in its end days, its relevance in the current society is over, and we're slowly moving towards dataism. And I was very happy to hear that, but what I observe is that in the usual, typical dataist areas like social media and e-commerce, religion is making a big comeback. There are online churches where you can make donations. So is religion really going to go away, or is it just going to change its face, in your opinion? - It's more likely to change its face. Religion has been around
for a very long time and in contrast to what many
religious people believe, it's changing all the time. The Hinduism of today is very different from the Hinduism of 2000
years or 3000 years ago. The Judaism of today is almost a completely different religion than the Judaism of 2000 years ago. And religions keep adapting all the time, and then they keep saying, "We didn't change anything." They keep looking back. Even if they admit they made a change, they say, "We just go back to the original purity. It was pure in the origin. Then something went wrong. We are not changing anything. We just go back to the original purity." And this is usually a fantasy. It's not true. But this is how they repackage change. And this is likely to happen
in the 21st century too. Some religions, which will not be able
to adapt might disappear. But other religions might survive, and completely new religions can emerge, because there is a very close connection between religion and technology. And we might see the rise of new techno-religions, religions based on technology. We already saw one such important religion in the last two centuries, which was communism. We don't tend to think about
communism as a religion, but actually communism makes all the promises that traditional
religions used to make. It promises happiness and prosperity and justice and so forth, but not after you die, in some heaven. It promises it here on earth: we can build paradise on earth with the help of technology. And this failed. But we might see another round of techno-religions in the 21st century, when religions will promise again not just prosperity and health, but even eternal life here on earth with the help of technology, and not after you die with the help of some god. So I don't necessarily think that we'll see the disappearance of religion. It can combine with new technologies to create very different
social and political systems. - Thank you. - Okay, there are so many hands. I'm going to go right at the back. The gentleman there in the black T-shirt-- - Maybe we can give it to some women also. - I don't see any women. (applause) Okay, the lady in the front there. Yes, there. - Good evening, professor. Very nice listening to you, and very happy that you endorsed philosophers, because I am a philosopher. Thank you for that. (sighs) I just want to ask you: you spoke about hacking humans, and you also said in the beginning of your talk how we are more at risk of killing ourselves than being killed in a war. Taking that further, today we all know that depression and faulty or erroneous thinking is a big epidemic that the world is fighting against. So how do you think technology can help in the psychological manipulation of human beings for betterment, not necessarily as a damaging factor? - Well, it can help in many things. Again, there are all these dystopian visions involved with this technology, but to speak on the bright side, technology can help us
diagnose mental disorders and mental diseases on a massive scale. One of the big problems
with mental health care is that it's very expensive. It's often much more expensive than typical health care. And there are billions of people on earth who don't have any mental health care system to support them. And computers, and even mobile phones, can be used at least to diagnose things like depression far more cheaply and efficiently than anything we have today. Your smartphone can be
monitoring your behavior, what is happening inside your body, your blood pressure, your brain activity, and also what you do or what you search online. Today even Google has new applications that try to diagnose things
like depression or stress simply by monitoring the
words you're searching online. And the one question that... Google says, "Any question you ask us, we answer." There is one question that, when you type it into Google, at least in Israel, I don't know about India, you don't get the answer. The question is how to kill myself, how to commit suicide. - True. - Instead of telling you how to do it, it gives you the number of a kind of hotline for mental health care. Again, it can go in terrible ways also; I don't want to praise it too much. But think of a situation when your mobile phone... say you're a parent and you
have a teenage daughter, and she's depressed and
you do not even know it, but the mobile phone alerts you, that your daughter is
in severe depression. So this is especially
important for people, for classes, for countries, that cannot provide good mental healthcare of the kind that we see in
the rich countries today. - Thank you, thank you so much. - Okay. I'm particularly
looking for women now. The lady at the back in red. - Hello. Hi, this is regarding what you've mentioned about violence and wars being a thing of the past. - No, not a thing of the past; they're still there, just less of them. - Less of them. So what I feel is that there was imperialism in the physical sense, the acquisition of land, people and power, and that's still there. The face of it has changed; it's neo-imperialism. It's more about global corporations and conglomerates doing it. And it's now about data acquisition. So it may not be outright violence, but I think we're still looking at some bloodless coups all over. So where do you think this takes us, what has it brought for us? - This is a very important question. I mean, even though war is declining, imperialism and colonialism
could take new forms. In the 19th century you had
the industrial revolution, a few countries industrialized first and they then conquered and
dominated everybody else. Like Britain industrialized first, and this gave it the power to conquer and dominate and exploit India. And this might happen again with AI. A few countries, maybe different countries this time, maybe this time it's China and the US and not Britain and France, will lead the AI revolution, and this will give them so much power that they can then, even without military conquest, dominate the rest of the world. And we can also see something like data colonialism, which is already beginning now. You know, in the
old days of colonialism, the imperial power would take
raw materials from the colony, transport it to the imperial center, say to Britain, there manufacture the finished goods and send it back to sell in the colonies. And this is now beginning
to happen with data. The data is being harvested and mined all over the world by a few corporations and countries especially
in China and in the US. The data is then transferred to the hub, it's being analyzed and processed, used to make new technologies and products which are then sold back
to the data colonies. And to give a very important example, we now see there is a huge interest in self driving vehicles. Now what is the biggest obstacle for developing self driving vehicles? The biggest obstacle is that
they are unsafe at present, and developed countries like the US or Germany don't want to allow self-driving vehicles on the road before they're made safe. But if you don't allow them on the road, you cannot really fix the problems, because no matter how many experiments you do in a laboratory, it doesn't really give
you the necessary data about real life situations. So the danger is that big corporations will start using or selling these self driving vehicles in developing countries which have much laxer
regulations and rules. And there will be accidents
in these countries, and people will get killed, but who cares? And the data harvested from those accidents and problems will then be used to upgrade and perfect the self-driving vehicles, and then they can sell them everywhere. And the highest price will be paid by the people in the data colony, whereas all the revenues and benefits will go to the rich country, to the AI leader, which dominates the self-driving vehicle industry. So this is a new form of imperialism and colonialism we should be aware of. And it's quite amazing that if today a corporation wants to come and mine iron, it has to pay something to the country where it mines the iron. But if it wants to come and mine data, it doesn't pay anything. And data is today far more valuable than iron. This is the basis for
the future industries of the 21st century. So these are the kinds of issues. Again, coming back to global cooperation, for a single country to resist this is very difficult. But we could maybe have a union of data-producing countries, like you have a union of oil-producing countries, OPEC, which really increased the power of the oil-producing countries. So you can maybe have the same model for data-producing countries, having a union and being able to negotiate a better deal, to get something in return for all the data they are providing. - So those who control data will control the world? - Yes, at present it seems very likely that those who control the data will control the world. And most of the data at present is going to only two places, China and the US. - Yes, the lady in front in the gray suit. - Hi, my name is Jekabalegi. And I'd like to bring attention to three things you spoke about. One of course was the
disruptive technology, we've all spoken about it. The other was about humans
going within ourselves, so going within rather than going without. And then you spoke about how that life hack, the biohack, is about better biology, better data, and a better understanding of computing technology. My question to you is that while
we are focusing intensively on creating better technology, and you actually spoke
about how we've been better human beings, truer to ourselves when we've created technology and you've spoken about how technology can enable
us to think better. But is there a hack for
becoming better human beings, not creating better human beings, but reaching within to
become better human beings? And is that the hope for the future? - Well, there are many ways to try and become better human beings. You have, you know, all the spiritual traditions of history out there, and also the new traditions like psychotherapy. So I personally try to become a better human being by practicing meditation, and tomorrow I'm in Mumbai, on my way to a 60-day meditation retreat. And this is how I try to make myself a better human being. There are other methods, hundreds of different meditation techniques. Some people don't find meditation useful; they can use art or therapy or even sport to get to know themselves better, to get to know their weaknesses, to develop their compassion and their human qualities. Different methods may work
better for different people. Whichever method works for you, I would say do it quickly because we don't have much time. And with regard to technology, again technology is not just the enemy. It can also be very helpful. Technology can be used also to protect us. I mean at present, most of the development of AI focuses on creating AI tools that monitor people, individuals in the service of
a corporation or a government. But we can create the
opposite kind of tools. We can create AI that monitors the government in our service. For example to fight
government corruption. If they like monitoring
the citizens so much, why not monitor the officials a little? So we can also create the
opposite kind of monitoring. And similarly, you know, you have all the... When you surf the internet, there are many AI systems that are trying to hack your brain, to get to know you and sell you something or manipulate your political views. And it's very difficult
to protect yourself. We can develop an AI sidekick that serves you and not these
corporations or politicians. We now have an antivirus for the computer that protects the
computer against viruses. We can develop antiviruses for the mind, so that when you surf the internet and somebody is trying to manipulate you, the antivirus comes into action and maybe blocks this fake news story or blocks this video, whatever. So technology can be used again for good as well as for bad. It really goes back to the question about the software engineers, that the software engineers should have a very clear ethical view of what they're doing
and why they're doing it. Over the last 20 years, we had some of the smartest people in the world working on making people click on advertisements and on funny cat videos. And they were extremely
successful in doing that. The same smart people, if we give them a different task, a better task, or if they give
themselves a better task, I think they can do that also. - So humans need to start very soon. All of us need to start
very soon to become better because we don't have much time and sadly tonight we
don't have any more time. One last question. - Oh, it's a huge responsibility.
- Yes that lady - So we're going to finish, yes the young girl here. - Hello doctor Harari. My name is Supriya. My question to you is
about your writing process. We're all, and a lot of us here
are in awe of your writings and your books. A book like Sapiens and a book like 21 Lessons, which deal with a multidimensional approach to concepts and complex matters. How do you go about writing those? Could you tell us a bit about your process as an author? - Give your secrets away tonight? - Well. (applause) Much of it is actually written in conversations like
the one we're having now. All my three books were
written in conversation. The first book Sapiens was written in conversation with my students at the Hebrew University in Jerusalem. I was teaching a course about the history of the world for something like 10 years and this gave me the ability to experiment with ideas, to hear what the students
are most interested in. If something was boring, or if they didn't understand something, then I realized I had to learn more and to explain it maybe in a different way. And the other books, Homo Deus and 21 Lessons: after Sapiens came out, I had many conversations like this. So the kinds of questions
that people asked me, these eventually became
the chapters in the book. So really 21 Lessons, it's 21 chapters each
about some big question. And most of these questions they actually came from journalists or from readers or from
other people that I met. And also I would say that in my writings I try to focus on the questions and not on the answers. If there is some big
question that interests me, then I will follow the
question wherever it leads me and even if I don't find
an answer that's fine. I think ultimately the
most important thing I can offer people is to focus on the important questions. They don't have to accept
the answers that I have. And for some I don't have any answers, but I try to focus the public conversation on the most important questions that now face humankind. And as long as we agree on the questions, then this is a very large step towards finding answers and solutions. - I think we'll never have enough time with Professor Harari. (audience claps) - Thank you. - Thank you so much.