(audience applause) - I'm here to talk about our future, both the future of our
species, homo sapiens, and about your own personal future. And nobody really knows
what the world will look like in 2050. The only thing we know for
sure is that it will be a very, very different world than today. And perhaps, the most
important thing to know about the future is that humans will soon be hackable animals,
animals that can be hacked. There is a lot of talk these days about hacking computers and email accounts and smart phones and bank accounts, but we're actually entering the era in which it will be possible
to hack human beings. Now what does it mean
to hack a human being? It means to create an algorithm that can understand you better
than you understand yourself, and can therefore predict your choices, manipulate your desires, and
make decisions on your behalf. In order to control and manipulate you, the algorithms will not
need to know you perfectly. This is impossible. Nobody can know anything perfectly. They will just need to
know you a little better than you know yourself. Which is not impossible,
because most people don't know themselves very well. Often, people don't know
the most important things about themselves. I know this from my own
personal experience. It was only when I was 21
that I finally realized that I was gay, after living
for several years in denial. And today, I keep thinking back to the time when I was 15 or 16 or 17, and I try to understand,
how did I miss it? It should have been so obvious, but the fact is that I didn't know. And that's hardly exceptional. Lots of gay men spend
their entire teenage years not knowing something very
important about themselves. But imagine the situation in a few years when an algorithm can tell any teenager exactly where he or she is
on the gay-straight spectrum just by collecting and
analyzing data about them. One way to do it, there are many ways, but one way is perhaps just to track eye movements. The computer can track my eye movements when I surf the Internet or watch YouTube and analyze what my eyes
do when I see an image, say, of a sexy guy and a sexy girl walking together hand
in hand on the beach. Where do my eyes focus
and where do they linger? Now even if you wouldn't like
to use such an algorithm, to hear this news
about yourself from an algorithm, what happens, let's say,
if you find yourself in some birthday party
of a kid from your class and somebody has the brilliant idea that, hey, I just heard
about this cool new algorithm that tells you your sexual orientation, and wouldn't it be so much
fun if everybody took turns testing themselves on this algorithm with everybody else watching
and making comments? What would you do in such a situation? Would you just walk away? And even if you do walk away,
even if you do keep hiding from yourself, from your classmates, you will not be able to hide from Amazon, or from the Secret
Police, or from Coca-Cola. As you surf the web, or watch YouTube, or just walk down the street, the algorithms will be
discreetly monitoring you and hacking you in the
service of the government or a corporation or some organization. Maybe you still don't
know that you are gay, but Coca-Cola already knows it, so the next time they show
you an advertisement, they choose to use the
version with the shirtless guy and not the version with
the girl in the bikini. And the next day when you go to the shop, you choose to buy Coke and not Pepsi, and you don't even know why. You think you did it of your own free will. They know why you did it, and such information
will be worth billions. Now, I know, of course,
not everybody's gay, but everybody has some
secrets worth knowing. Now, what do you really need in order to hack a human being? You need two things, just two things. You need a good understanding of biology, and especially brain science. And you need a lot of computing power. Now, in the past, for thousands
and thousands of years of human history, nobody
knew enough biology, and nobody had enough computing power, in order to hack human beings. So even if the Secret
Police followed you around 24 hours a day, watching
everything you do, they still couldn't know
what was really happening inside your brain. They still couldn't really
understand your feelings or predict your choices or
manipulate your desires. But soon, corporations
and governments will have enough understanding of biology and enough computing power to hack humans. And when this happens, and it is already beginning to happen, then authority will gradually shift from humans to algorithms. This is happening in more and more fields,
even in democratic societies, even without any government coercion, people are willingly entrusting more and more authority to the algorithms. We trust Facebook to tell us what is new. We trust Google search
to tell us what is true. We trust Google maps
to tell us where to go. Netflix tells us what to watch. And Amazon tells us what to buy. Eventually, within 10 or 20 or 30 years, such algorithms could also tell you what to study at college,
and where to work, and whom to marry, and
even whom to vote for. And as algorithms become better, they can not only guide and control humans, they might also replace
humans in more and more jobs. And this is even true, or especially true, of jobs that demand a good
understanding of human feelings. For example, there is a
lot of talk these days
replace human drivers, self-driving vehicles, or the computers that drive these vehicles, they need not just to know
how to navigate the road, they need to understand humans. They need to understand and anticipate the behavior, both of human customers, and also of human pedestrians. They need, for example, to know, to recognize the difference
between an eight-year-old and an 18-year-old and a 40-year-old approaching the road, and they need to understand
something about the difference in behavior between small
children and teenagers and adults. Similarly, in order to
replace human doctors, computers will need to understand not just our diseases, but
also our emotional moods. The computer will have
to know whether a patient is angry or fearful or depressed. But it's very likely that
computers will be able to do that better than most human doctors, because after all, anger
and fear and depression are biochemical phenomena just like flu and cancer and diabetes. If computers can diagnose flu, computers can also diagnose fear. Now of course, as all
jobs in driving vehicles and in diagnosing diseases
gradually disappear, all kinds of new jobs,
which we cannot even imagine at present, will emerge. But the new jobs, too,
will continue to change and to disappear. Few jobs will remain the same for decades. Some people imagine that the
coming automation revolution will be a one-time event. Let's say in 2025, you have
the big automation revolution, lots of jobs disappear,
lots of new jobs appear, we have a couple of rough years, and then everything settles
down to a new equilibrium in the job market and in the economy. But it will not be like that. It will be a cascade of
ever-bigger disruptions. You have a big revolution in 2025. You have an even bigger
revolution in 2035, because by then AI is so much better, and an even bigger revolution in 2045, which means that to stay relevant, you will have to reinvent yourself, not just once, but
repeatedly, every 10 or 15 years. And the main obstacle to doing that might well be psychological more than economic or technological. It's just very, very hard
to reinvent yourself, especially after a certain age. When you're 15 or when you're 18, you're basically creating
yourself, inventing yourself, and it's very, very difficult. But it's much more difficult
to do it when you're 40 or 50. You probably know by now that
adults don't like to change. They tell you to change all the time, but they don't like to change themselves 'cause it's very difficult. So the most important goals of
education in the 21st century are probably to develop
your emotional intelligence and your mental balance,
because you will need a lot of mental balance
and mental resilience, to deal with a very hectic world, to keep learning throughout your lives, and to repeatedly reinvent yourself and stay ahead of the algorithms. Now I hope all this doesn't
depress you too much. That's not the point. I mean, they told me repeatedly, don't scare the kids. (laughs) But I really think that you can handle it, that you can really rise to the challenge. Humans are extremely adaptable beings. If we know what we are
facing, we can adapt to it and we can find solutions. And I'm really most curious to hear what you have to say about all this. So, thank you for listening, and in a few minutes, we'll have a chance to hear what you think. Thank you.
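The eye-tracking idea sketched in the talk, inferring interest from where a viewer's gaze lingers, comes down at its simplest to totalling fixation time per labeled region of an image. The following is a minimal illustrative sketch only, with entirely invented region names and durations, not a description of any real product:

```python
from collections import defaultdict

# Toy illustration of the dwell-time idea from the talk: given a log of
# gaze fixations (which labeled region of an image the eyes rested on,
# and for how long), total the time per region and report where the
# viewer's attention lingered. Region names and durations are invented.

def dwell_totals(fixations):
    """Sum fixation durations (in milliseconds) per labeled region."""
    totals = defaultdict(int)
    for region, duration_ms in fixations:
        totals[region] += duration_ms
    return dict(totals)

def dominant_region(fixations):
    """Return the region with the longest total dwell time."""
    totals = dwell_totals(fixations)
    return max(totals, key=totals.get)

# Hypothetical fixation log for one image.
log = [
    ("figure_a", 420),
    ("figure_b", 180),
    ("background", 90),
    ("figure_a", 310),
]

print(dwell_totals(log))     # {'figure_a': 730, 'figure_b': 180, 'background': 90}
print(dominant_region(log))  # figure_a
```

The point of the passage is not the mechanics but the scale: even a crude signal like this, aggregated across millions of images and viewers, could reveal things a person has not yet admitted to themselves.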
(audience applause) - We're gonna chat for a bit. I liked listening to
that, Yuval, but hmmmm... People are gonna be able
to hack our consciousness, 'cause they'll know so much about us. I suppose me, what I was thinking is, if I'd a been a kid in the future, in this sort of dystopian future where that game's invented
that's watching my eyes, and that game suggested, and
I'm worried about my sexuality, I'd think, well, I'll
just look at the appropriate body part of the appropriate person for what's acceptable
in the cultural context I find myself in. For example, if it was the man
and the woman on the beach, simply look at the beach. (laughs) (audience laughter) Maybe there's a dolphin in the background. People would perhaps query
what you were thinking about his blowhole, but I would say, so like, what we are suggesting is that in this future that
could very soon be upon us that our thoughts will be
accessible to corporations that will most likely want
to exploit our intentions, but in a sense, we already
live in this dystopia, but they don't yet have the
facilities that they will have. So you talked a little, Yuval, about developing emotional balance and emotional intelligence. That's interesting to me. How would you suggest that young people begin to undertake that? - Well, that's the big question. It's something which
is much more difficult to teach than history,
or physics, or mathematics. It's really about getting
to know yourself better. I know, it's the oldest advice
in the book, know yourself. You have all these philosophers and saints and gurus for thousands of years telling people know yourself, so there is nothing new about the advice. What is new is that now
you have competition. Like, if you lived in the age of Socrates or Jesus, and you didn't make
the effort to know yourself, well, nobody else could
really look into you. But now you have this competition. So different methods work
for different people. I use meditation. I meditate for two hours a day. Other people, they go to
therapy, they use art, they use sports, you go on a
two-week hiking expedition,
all of the suggestions you make are activities that exist outside of the sphere of work. It's interesting also, isn't
it, that it's presumptive in the field of education,
that the education that you receive relates
to some future experience where you become a worker, where the skills that you have acquired as a result of your educational experience mean that you are of use to society, that you are now a functioning component. But all of the activities
that you suggested to build self knowledge
and self awareness, they existed outside of work. It's curious, isn't it. Like, so you're saying that
what we need to cultivate are the aspects of our nature that are not determined by
our monetary value to society. - Yeah, again, the key insight is we don't know what
the nature of work will be in 20 or 30 years. So if we knew, okay, there'll
be a lot of work in X, we can prepare children
for the skills they need for that particular line of work. But we just don't know. So maybe take an example from the context of school. If you go to do an exam,
maybe the most important thing you can learn from the exam is how to deal with failure. If you pass the exam, that's wonderful, okay, great, but if you fail, that's
an even more important thing to learn. How do you pick yourself up? And how do you go forward from failure? If you manage to do that,
if you fail an exam, and you
know how to deal with that, that will be far, far more important for your future than getting a straight A. - Yeah, so developing a
kind of mental robustness, that is the main thing I
learned from my exams actually, is how to deal with failing exams. (audience laughter) Ah, that's the feeling, is
it, of failing some exams? But isn't it possible that,
given as none of us know how this future will unfold,
that perhaps our role as workers will no longer be
our primary role as citizens? Isn't that one of the assumptions that we should be
challenging, that human beings are not just, isn't there an assumption, and you would know this
because of your great anthropological work,
isn't there an assumption that a human being has to work because we resource our understanding from out of the times
where we needed to survive, and surviving seems like work. But now that survival is
somewhat taken off the agenda, and I'm talking from the perspective of a privileged person in a
relatively privileged nation, that possibly, we needn't define
ourselves by work anymore? Why ought we? - Yes, we certainly might find ourselves in a situation where the meaning of work changes, where things that today
are not considered work,
important to society, will be recognized as work, whether it's raising children, or whether it's building a community. These will be recognized
as work deserving respect and deserving monetary compensation. We could face a situation
in which there is not enough work for everybody, and the big struggle of a lot of people is not against exploitation,
but against irrelevance. In the 20th century, the big struggle was against exploitation. That you have the small
elite exploiting the masses. And in the future, maybe the main struggle is the elite doesn't need you. There is nothing that you
can do which is beneficial to the economic or political system. They don't need you, and this is a much, much harder struggle. Also, again, psychologically,
to feel that you are not needed, that you're irrelevant, that the world has kind of passed you by, that's much, much more difficult than feeling that you're exploited, but when you're exploited, they need you. You're doing something important. You have potential power. - Isn't it, then, that
dynamic that we most need to challenge, where there are elites upon whose largesse we
are somewhat dependent, whether it is for
exploitation or irrelevance. And, in a sense, this is
already an observable phenomenon. I'm sure everybody here knows people that are socially irrelevant in the sense that they are, possibly, homeless,
or addicted to drugs, or have mental health issues that mean they're no
longer of monetary value and therefore are regarded
as irrelevant by the system. Regardless of oncoming
technological advancement, already, today, not in the future, there are huge
percentages of the population the world over that are
regarded as irrelevant because they can't contribute monetarily. So it's happening already. - Yeah, yeah, but it could
become much, much worse. And, again, one of the
greatest fears is simply the amount of stress. As a historian, it's curious to see that humans today are
so much more powerful than a thousand years ago,
or 10 thousand years ago. If we could meet our
great-great-great-grandmother, and tell her what kind
of powers we have today, she would've thought that we
must be living in paradise with no worries at all. But actually, in many
ways, we live much more stressful lives than a thousand years ago. And the level of stress
might only increase in the coming decades because, again, the pace of change is increasing, and you're constantly under this fear that I will be left behind,
I will not be able to cope with the next big change, and as you grow older, change is becoming more and more stressful. - Yes, do you think, Yuval,
this is perhaps because the incredible technological advancement that the last couple
of centuries have seen has not been accompanied
by comparable spiritual, if you were to use that
word, or intellectual, emotional, or mental advancement? And at the beginning of our conversation, you said, you know, balance, and the ability to endure. These previously would've belonged in the realm of the spiritual. So, isn't one of the
challenges we all face that as human beings we have to evolve other aspects of our nature to keep up with the capacities of technology if we are to have any chance of surviving, and indeed, challenging the dynamics that mean that there
is still, to some degree, an elite that determines who is expendable, who is irrelevant, who is exploitable. Do we need to bring some
focus to these structures and ways that we can radically alter them? - Yeah, I mean, I think spirituality now is more important than ever before, because a lot of spiritual
and philosophical questions are becoming very, very
practical questions. You know, things that
philosophers have been debating for thousands of years, with
very little practical impact, are becoming questions
that engineers face. For example, you think
about a question like what is humanity? So you can debate for thousands of years, but now, as abilities, especially
in biotechnology, develop,
to start re-engineering the human body, the human
brain, the human mind. And then the question arises: which kinds of qualities
do we want to enhance, and which qualities or
abilities are less important? If you ask the army, or if
you ask some big corporation, they will tell you, oh, we want
to enhance human efficiency and intelligence and discipline, because we want more
disciplined and intelligent and efficient soldiers
and workers and so forth. But things like compassion or spirituality or artistic sensitivity, eh,
we don't care about that. So if we leave it to the free market, or to the army, to decide what to do with the new technology, we may get, not upgraded humans, but actually downgraded humans. Something which is far less than we are. And thus questions like what
is the essence of humanity suddenly become practical
questions of engineering. Or to take an even simpler question. In order to put a
self-driving car on the road, you need to answer some very
difficult ethical questions. Like the textbook example is
you have the self-driving car moving on the road, and suddenly two kids who are chasing a ball
jump in front of the car. And the car has to choose
whether to drive forward and kill the two kids,
or swerve to the side, hit an oncoming lorry, and
risk killing its owner, who is asleep in the backseat. What should we do? Now, for thousands of years, philosophers have argued about it, but it had no real implications, and they said, okay, we just leave it to the free will and the
conscience of individuals. But with algorithms,
the engineer needs an answer. How do I program the algorithm? What should it do? And with an algorithm, you
can be a hundred percent sure it will do what you tell it to do. So, there are all kinds of answers. One option, if you just
leave it to the free market, you know, the customer is always right, then Tesla will produce the Tesla Egoist and the Tesla Altruist. If people bought the Tesla
Egoist, the one that kills the two kids to save its owner, well, what do you want? The customer is always right. You might have the
government regulating it. But whichever way you
go, we need an answer to this philosophical question. You know, philosophers are
very, very patient people. They can argue for thousands of years about some ethical dilemma
without resolution. But engineers, and even
more than engineers, the investors, the owners of the company, they are not patient. They want the self-driving
car on the road tomorrow or in two years, so they need
to answer these questions. - Yes, and we already
understand the primary ethics of commerce and capitalism. That probably the car would
run over the children, then reverse up onto the pavement, run over a few more people, eject the owner out of the sun roof. If the ethos is to maximize profit, what I'm saying is that we've already had like a situation where General Motors had to calculate whether it was cheaper to recall cars with a faulty ignition, or to face the lawsuits
that would be incurred when the cars caught fire because
of those faulty ignitions. And I'll leave it to you to research what General Motors did in that situation. So in a way, in a sense, if human ethics are always couched within
the parameters of capitalism or the desire for profit, the results will always
be negative for humanity. - No, not always.
- Just sometimes. - The free market and
capitalism have also done some very good things for humanity, but my point was not
specifically about this issue, but more generally. - Yes, of course, that
would only be a few people once in a while anyway. We can afford a few of them. - What I meant to emphasize,
it was just an example, is that now, not just
the crucial importance but the immediate relevance of
these big spiritual questions that you know, if you're looking for, if you think, okay, I want a career, I want to make a good career move, what kind of jobs will
there be in 10 or 20 years? Philosophy is actually a much safer bet than it ever was before,
because corporations like Tesla, like Google, they suddenly confront these deep philosophical questions. They actually need philosophers
for the first time. - I understand, Yuval, but
it's still being couched within how do I monetize my resources, when an alternative route might be, let's not monetize ourselves. Let's look at alternative systems that exist outside of
the pre-existing model, which has already proven not to be necessarily beneficial
to us, 100% of the time. I'm not saying, you
know, I think capitalism has done lovely things. Look at these trousers. These are not the trousers of
a communist, let me tell ya. (audience laughter) So I think now is the
time to get questions from the young people of Lilian Baylis and perhaps surrounding schools. I don't know exactly the method by which you've been selected. If you have a question,
stick your hand up. If you're feeling a bit shy,
don't worry about feeling shy. You'll just say your name, and you'll say the question, and then Yuval will answer you. So, who has a question, put your hand up and one of these humans
in orange tee shirts will come to you with a microphone. Put your hand up and don't be worried. There's a couple of people at the back. We know why people sit at the back, 'cause they're naughty. (audience laughter) So it's the back of the bus people there. Can you see 'em? Sat by that emerald. - Good afternoon, my name is Hani Parrot. My question to you is that in 2017, you argued that by 2050,
there would be a generation
of people you deemed as useless and unemployable. Now my question is, how much different is that from the current society, where most of us here are
from not very privileged places and stuff like that. How much different is it, basically, that we are deemed as useless? We are deemed as unemployable, and that's my question to you. - Well, first of all, it's not a prophecy. We don't know for sure. It's a danger that we
need to take into account. And as you say, it's not
just a future danger, it's already beginning to happen now, but it might get worse and worse. And of course, when we
talk about useless people, they are useless only from the viewpoint of the economic and political system. Nobody's useless from the viewpoint of their mother, or siblings, or friends. But the danger is, that yes,
people will be left behind and will suffer, as I said before, not from exploitation, but
really from irrelevance, that there is nothing that they can do which is valued by the
political and economic system. But if we are aware of this danger, we can try and prevent it from happening. One thing that we need to be doing is think very hard about how to retrain and educate people throughout their lives. Because in a situation when the job market is very volatile and it
changes every 10 years, it's not enough to provide
people with good education in their childhood, we
actually need a system of lifelong education, and
the responsibility for that is on the government. Just as governments built
huge systems of education for the young in the
19th and 20th century, in the 21st century,
they will need, probably, to build another education
system for adults. Whether they will do it
or not, I don't know. But it should be on the political agenda. What worries me, and this
is why I say these things and use very provocative terms like the useless people or useless class, is to draw people's attention
to this potential danger, because this should be
one of the top items on the political agenda today. Not in 2020, or it will be too late. We need to think about
what we are doing today, so the next time there is an election, and politicians come and
want you to vote for them, you should ask these politicians: what are your thoughts about the coming automation revolution? What will you do to prevent the emergence of this huge useless class? And if the politicians don't understand what you are talking about, or if they have no meaningful
plan for how to deal with it, just don't vote for that politician. - Ooo, that was quite good. That was quite good. So we can look to people
in political authority and make sure that they
seem to be across these ideas and they don't seem to
exist blindly in the service of existing capitalist interests or not have a vision at all. So that was good. Bloody good question, that,
and all, mate, well done. Well done. Who's that, there's a person
there with their hand up just central, I mean you,
blue shirt, long hair, sort of fair hair, whitish skin. (audience laughter)
It's a tightrope. - Hello, my name is Danielle, and as a young adult in today's society, I feel like young people
don't have enough say in regards to politics. So what do you think we
can change in society in order to allow us to have more of a say in what we believe in? - Hmmmm, oh, good question. I'm not sure I know the answer. - I've got this one, as well, though. (audience laughter) - One answer is if you
want to make an impact, join an organization or
establish an organization. It's very difficult to make
an impact as an individual. Humans' main strength is in their ability to cooperate effectively. 50 people who cooperate in an organization are far, far more powerful
than 500 individual activists, each one of which is just
making their own thing. Whatever you care about,
whether it's the future of the job market, whether it's racism, whether it's climate change, whatever you care about,
the best thing you can do is join an organization or
establish an organization so that you can cooperate
with other people about it. - That's pretty good, I
think, Danielle, isn't it? I want to stick this in there, as well. Practice democracy when you're younger. If you find yourself as a participant in a group or a social
system, e.g., a school, think of how you can
democratize that school. Think about things that,
requirements that you have that are not being met, see if other people
share these requirements, and, as Yuval suggested,
brilliantly illustrated in his first-class, and some
would say revolutionary, book Sapiens, that through cooperation... I'm quoting his book at him, while he's sat there, the nerve. No, Yuval's excellent book, Sapiens, as his answer just illustrated
through cooperation you have incredible power. Furthermore, as evidenced by
the society we live in now, a few people cooperating can dominate and control huge numbers of people. All of us are in that huge
number, being controlled, to a lesser or greater degree. And ah, what about this
human at the front? Yes, go on, mate, with the
brilliant asymmetric hairdo. - Thank you, my name's Lucy. I go to this rebellious new
college called Minerva Schools. They take us all over the world, and that's why I'm in London. - Welcome.
- Thank you, third week. I don't have much faith in
the established school system. What suggestions do you have
for us individual learners to be better self-guided learners? - Ooo, um...... (audience laughter) It's a good question. It's, the most important thing today is to be able to focus. Especially if you have no guidance from an established school
or an established program, the greatest danger you
face is being flooded by enormous amounts of information and being completely distracted and unable to form a clear vision, a map of reality. In the past, the main problem
was lack of information. Information was scarce. Censorship worked by blocking what little flow of information there was, and especially if you
wanted to learn by yourself, there was just nowhere to go. Imagine you live in a small town somewhere,
and there is no library, there are no books, there
is certainly no radio, no television, no Internet, so how do you get information? And schools were initially established as these conduits,
these reservoirs of this rare resource of information. Now information is everywhere. We are flooded by it. Our problem's just the opposite. Censorship actually works
now by flooding people with enormous amounts of information, whether true or not, it doesn't matter, just flood people with information to the degree that they can't
make sense of reality anymore. They can't tell the difference, what is important, what is not important. They can't build their map of reality. And for schools, I would say one of their chief missions now is
not to provide pupils with more information, it's
really the last thing they need, but to provide them with
either a map of reality or the tools to construct such a map. If you are a self-learner,
and you don't have this kind of a structure, then this is your greatest challenge. How to find my way around this enormous ocean of information
without drowning in it? And I don't really have
a magic bullet for how to do it. I would just draw your attention to the fact that this is your greatest task. - How did you do it? How did you, did you have
mentors, instructors, examples that you followed? You come through academia,
because you're a professor, so is that how you did it? You had mentors, people that you followed, when you were, I know you
didn't have the challenge of this abundance of information, not to the degree that the people
we're addressing now have to contend with. But you must have still
selected the disciplines that you did, the methods that you did, the viewpoint-- - My method was really to focus on the most important questions, and then allow the
questions to just lead me wherever they go. Like you take a big question like, for example, why have men dominated women in almost all large-scale societies for the last ten thousand years? And you want to understand why. And it's important to take a question which is not only big, but
also very relevant to my life, to make it interesting. Something that really
impacts me every day. Why is reality like this? And when you start reading
and researching about it, the first thing you'll discover is that you have to cross all kinds of disciplinary borders. This is not a question
in biology or psychology or economics or philosophy
or history, it's everything. You can't understand gender relations if you don't know something
about human biology. But if you think oh,
biology has all the answers, you also won't understand much. You also need to take history into account and economics into account and so forth. So what gives you the
structure is the question. I have this big question
and I'm on a quest, following it wherever it leads me. - Well, that was pretty good, wasn't it? And I pushed him for that
for you, Lucy, didn't I? You saw that I nudged him a little bit, and we got Yuval to say find the question that you care about
most and then pursue it and see where it takes you and don't be confined by disciplines. Do be confined by discipline,
but not by disciplines. Now, there's a few nutters up the back. Why don't the Penguin people get ready, but there's a young man here at the front, hand up, fingerless glove,
that's the kind of person who deserves to be heard. What about that? - Hello, my name is Moise. I come from Kingsman School, and you were talking about engineers and how they would be
the ones trying to answer the philosophical questions. And I was wondering,
isn't it kind of putting all the onus on engineers? Like they would be the
ones deciding the future. Isn't it putting too much stress on them, or are you saying that we
shouldn't have so many engineers, because we are, in a sense, putting
our lives in their hands? - Well, this is what is happening. At present, engineers,
especially software engineers, are making more and more of
the most important decisions in the world, and it is
indeed a great danger that so much power is
concentrated in so few hands. And it is even more dangerous
because they may have a very good background in
computer science and engineering and mathematics, but they
usually have no background in ethics and law and
sociology and so forth. And again, maybe to give an example. When you apply for a job
later on in your life, chances are that your application will be processed by an algorithm
and not by a human being. And this algorithm was
written by some engineer or a couple of engineers. And one of the biggest
dangers is what happens if the engineers kind of
program their own biases into the algorithm? For example, there are real
cases today in the U.S. We know
that when a job application comes in, it's
wrong to discriminate against people on the basis of race, so the algorithm
needs to be race-blind. And in a way, algorithms are
better at being race-blind than humans, because they
don't have a subconscious, they don't have feelings and emotions. A human being, you can
tell the human being it's wrong to be racist, and the human being will agree, yes, it's wrong to be racist, but then when the application comes, his or her subconscious
feelings might bias them. And we have a lot of research indicating that this is happening. Now people say, with algorithms, we are in much safer hands, because the algorithms
don't have a subconscious. But it turns out, for example, that we have
racist algorithms today, algorithms that don't discriminate-- - Some of my best mates are algorithms. - They don't discriminate
on the basis of race, but the algorithms
discover, for example, that people who come
from certain post codes tend to be less reliable workers, and they start
discriminating against people from these post codes.
And surprise, surprise, the people in these post codes usually come from a
certain ethnic background. Now, the engineer who
programmed this algorithm may not even have realized
what he or she was doing, but they should have realized. That's why I think that in every
course for computer engineers, we need today to include
a program in ethics: ethics for coders. - So you think that engineering,
as it currently is, is separate from creativity? Like those two do not exist side by side. And also you were talking about how algorithms could be biased. Isn't it sort of saying that it's the engineer's fault that
a sort of bias could exist? Is it like you're blaming the engineers for what ultimately isn't in their hands in the first place,
and they're just doing what higher-ups tell them to do, anyway? - It's a case-by-case situation, but I think that if the
engineers have more awareness of the enormous political
and economic influence of their work, and if they
have greater awareness of the ethics of what they are doing, then even if some big corporate boss tells them to do something,
they can push back, or they can do their job
in a more responsible way. Now, of course, it's not
just their responsibility. Governments need to
intervene with regulations. Customers need to be more
aware of what is happening. But ultimately, I would say
that if there is one profession
today in which we must include courses in ethics, it's computer coding. It's much more important
for computer engineers to take some courses in
ethics and philosophy than it is for literary
critics or historians or artists or whatever. - That's excellent, thank you. Thank you for that series
of brilliant questions. And the other thing I didn't know is that you did a little
question one-two there, mate. Doubled up on the questions, very crafty. Hey, Penguin, what about some
of the people up the back? I'm trying to work out the system. People maybe sat randomly, for all I know. We don't know if the people at the front got there through sheer moxie and cunning. We don't know, do we? What about some of them
people who are up the back? Say your name and then
say the question, mate. - I'm Charlie, and as our
population is increasing, and at the same time our
technology is also becoming more proficient and
rendering humans useless, what are we gonna do about the
excess humans that are left? - Hmmmm. (laughs) So that's really going
back to the question about the useless class, and again, I want to emphasize, it's not a prophecy. It's just a possibility. If we make the right decisions
and adopt the right policies today, then we can prevent this
kind of dystopian scenario from materializing. And this is the whole point of
having discussions like this. If the future is inevitable,
then what's the point of talking about it? We can't do anything. But the future is not inevitable. Every technological
development in history, and this was always the case in the past and will also be true in the future, can be
used in several different ways. If you look at the 20th century, so you look at inventions
like electricity and radio and trains and cars, you
could use these inventions to build fascist regimes
or communist dictatorships or liberal democracies. Electricity didn't tell
you what to do with it. And it's the same with AI. The development in artificial intelligence and machine learning and biotechnology could lead to a dystopian scenario in which a tiny elite of superhumans controls all the resources and power, and most people are economically useless and politically powerless. It could happen. But it's not inevitable. We can use the same technology to create a much, much better world
than ever existed before. For example, yes,
people will need to work less, but many jobs are not worth saving. What we need to protect is
not the jobs, it's the humans. If we can take care of human needs, so that humans have more leisure time and more opportunity
to explore themselves, to develop themselves, to engage in art or community activities
or meditation or sports, instead of working so
much, this is wonderful. We don't need to, I mean,
I've been talking a lot about the dangers of AI and algorithms, but this is simply
because we are now flooded by all these promises that technology will make everything better. And we need to kind of balance it. But we should still remember that, yes, there are wonderful opportunities,
also, in technology. To give one example, again, returning to the self-driving cars. Today, all over the world, every year about 1.25 million people are
killed in traffic accidents. That's twice the number of people who die from crime and terrorism
and war put together. And most of these traffic accidents are caused by human error: somebody drinking alcohol and driving, somebody falling asleep or
texting a message while driving, things like that. If we replace human drivers
with self-driving vehicles, it is likely to save maybe
a million lives every year. So there are wonderful developments there. The key thing is not to
think in a dystopian way, but to ask, what should we do now? What kind of policies should the
government adopt in order to prevent the
worst-case scenarios and make sure that the technology
is used in the best way? And we have, again, the
examples from the past that we can make technology work for us. In the 1950s and
60s, you had all these doomsday prophecies that nuclear weapons would inevitably lead
to a nuclear world war in which human civilization
would be completely destroyed. And in fact, what happened,
is that the Cold War ended peacefully, and
it may not look like it from the news, but in your lifetime, the last 20 or 30 years have
been the most peaceful era in the whole of human history. There are still wars in
some parts of the world. I know this perfectly well because I live in the Middle East. So I have no illusions about it. But still we are living in
the most peaceful era ever. More people today die from eating too much than from violence. Sugar is far more dangerous
to your life than gunpowder. And this is a wonderful development. So there is a lot of hope. - There's a lot there to think about. What I took from what you were saying is that we all have an
obligation to educate ourselves regarding ethics, not just engineers, but all of us need to have a
better understanding of ethics. And more and more, I think, that instead of looking
at society and systems and thinking how can we make
ourselves fit in with it, we have to look at ourselves, and ask how can we make
society fit in with us? There, what about, I think we've got time for two more questions. So, there's some of the people there. There's a hand frantically waving. I would take that as enthusiasm. - My name is Kadiz. I have a few questions. - Oh, a few, he's doing all
of the last ones now. - Do you feel, given the possibility of more people becoming redundant as the number of jobs
available for people shrinks, that the willingness of elites and governments to help these people through schemes such as welfare benefits and job seekers' allowance will
increase? And the follow-up question to that was why do you feel that
elites such as Google, Amazon, and Microsoft still
indirectly help out these people that are redundant by paying
all that corporation tax, when they can easily afford
to buy nations for themselves to run, where
they would pay no corporation tax and help out the redundant
in no way at all? - Yeah, well, there is
a lot of responsibility and a lot of things
that governments can do, whether it's in social services, whether it's maybe most
importantly in education and education for adults. But as I think you're indicating, we need to think about it
also on a global level, and not just on a national level. The impact of the rise
of AI and automation will be different on different nations and different parts of the world. In some parts of the world, enormous new wealth will be created, and a lot of new jobs will be created. Whereas in other parts of the world, the economy might collapse completely. In high-tech hubs like Silicon Valley and like the eastern coast of China, we might see enormous
development and wealth, whereas other countries which at present rely mainly
on cheap manual labor, like people producing textiles and shoes and so forth, their economies
might completely collapse. So beyond the question of
what the UK government does in order to protect
the most vulnerable people in the UK, an even bigger and more important question is what do we do on the global level? Because, yes, there will be problems in the UK, but the worst problems
will not be in the UK. They will not be in Western Europe or in North America or
the east coast of China. They are likely to be in
countries like Guatemala, like Bangladesh, like Indonesia. They will be hit the most by
the automation revolution. And what we see now, with
the rise of nationalism and isolationism, is a
cause for very great worry: that yes, maybe governments
will do what it takes to protect their own citizens, but the poorer nations of the world will be completely left
behind and we need to think very carefully about a global safety net, a global solution to this problem, and not just stay limited to
nationalist thinking about it. - That's an excellent answer, and that's a really good
question, that, mate. I like the way you dragged
the powerful Silicon giants, great technological
Goliaths that stride about the world governing and controlling us. You pulled them into it, too, and made Yuval talk for a while about the culpability and responsibility of our governments and
the way that we look at the nation-state and the globe. A lot of education, a lot of
data flying around out here. Lot of things for us to learn. We've gotta pull ourselves together, start looking at fraternities
across the globe, looking at new alliances,
new idealism, new ideologies. Who's this dude with the
glasses and the bonnet? My man there. And this will be our last question. We gotta wrap it up so start collating the good information in your minds and allowing it to land. - Hi, I'm Raphael, so I
have two kind of questions. - Everyone gets in with two questions. - Sorry about that. So the first is, are echo chambers, like politically speaking,
an example of this human hacking that you speak of? And if so, how do we deconstruct that? And a second one, what
do you feel about UBI or universal basic income
as a way to combat this sociopolitical redundancy? - Okay, so I'll start with UBI, because it really goes back
to the previous question. UBI, universal basic income, the idea that the government
taxes the big corporations who make all the profits from
the automation revolution and then uses the money to support people who might be losing their jobs and need either social welfare or retraining to fill new jobs. The big problem with
universal basic income is that most people who talk about it actually mean national basic income. What they have in mind is something like the U.S. government
taxing Google and Facebook in California in order to help unemployed taxi drivers in New York and unemployed coal
miners in Pennsylvania, which is fine as far as it goes. But the big question
is who's going to help the unemployed people in
Mexico or in Bangladesh or in Indonesia? And I don't see the U.S. government, certainly not the present one, but also not a future one,
using U.S. tax dollars to support foreigners in other countries. So if UBI means universal basic income for the entire planet, yes, it's a good idea. But if it means national basic income, it doesn't solve the worst
problems we will be facing. About the issue of echo
chambers, then, yes, this is really part of this new
world that we are living in: even though we
have all these abilities to communicate across the world, partly because of the
dominance of algorithms we find ourselves being locked inside these small echo chambers.
The algorithm finds out what we like and what we think, and
constantly shows us news stories that cater to our own tastes, because we don't like to
be contested too much. One of the basic facts we need to realize about the human mind is that
the human mind is very lazy. It doesn't like to work too hard. And an echo chamber is
really just the human mind trying to create a kind of
very safe and cozy environment in which I'm not challenged. I don't need to think very hard. I don't need to defend
my opinions or to engage with other opinions. And in the end, it's up
to us to make the effort to break out of the echo chamber. I mean, here it's partly the
fault of the engineers, maybe, but ultimately it's our responsibility to make the effort to break
out of the echo chamber and it's easier than ever before. If you lived a thousand years ago in some small medieval village, this was also an echo chamber. But if you wanted to get different views, different opinions, it
was very, very difficult when you lived in this
small medieval village without a library, without
radio, without Internet. Now, it's much easier. The technology here helps us. But there is still a gap there
which depends on our effort. You need to walk the last mile yourself. The Internet and Google and Facebook, they have done some very
good things for us, too, and they make it very easy, if we want, to be exposed to other
ideas, to other opinions. But the last mile we still
have to cover ourselves; we still need to have this resolution: I want to break out of my echo chamber, and I think it's a very
important responsibility of each one of us. - Thank you, Yuval. Now, we have to wrap up this session now due to the restrictions
of what we call time and our understanding
of how time operates. Although that could all change. So, let's remember, I'd
like to sort of recap before we conclude some
of the important points: that you have personal
authority and autonomy, that your future isn't written yet, that you have a great
deal of personal power, that you can control, to a degree, the governments that rule you, the behavior of corporations
that dominate you, that you have personal
authority in your own lives, that the future isn't
prescribed and given to you, the future is constructed by you. Remember when Yuval said, the
point of writing this book, the point of having these conversations, is that the future can be
constructed by all of us. Every algorithm or
code that's being created passes through human consciousness, and you are humans, and you
can write your own codes and algorithms and you're
brilliantly powerful. Particularly those of you sat at the back, slouching, the Lilian Baylis. Professor, Yuval Noah Harari, thank you for your excellent talk. (audience applause)
Well done, all of you, for your brilliant questions. - Thank you.
- Thanks once again, professor. Cheers, you lot, fight the power. (audience applause)