Transcriber: Claudia Sander
Reviewer: Denise RQ

I'd like to ask us a fundamental question this afternoon. Perhaps the most fundamental question, which the human race confronts
in the 21st century. The most fundamental question:
will this be the century which is best known, which is marked, which is overshadowed
by the triumph of technology? Or will it be the century which is marked by the triumph of humanity? Will we be able to frame these extraordinary developments
in technology in such a way that they enhance, engage
a flourishing of the human race? That seems to me the question
of the 21st century. In fact, one could argue
that it is the question of human history. Because throughout our history, we have been striving, toiling, working, laboring to create tools. Homo sapiens, the wise man;
the name of our species is also Homo faber,
the man who makes things, humanity who creates tools to enable us to do our work. And we can see
the history of the human race as a history in which we are able to accomplish more and more and more through the leveraging
of our human efforts by hand, by brain, using tools. And this 21st century, in the extraordinary experience
in which we find ourselves, our generation, is the century
in which there is no question, that these tools will become very rapidly, faster and faster, more pervasive, more enabling, more powerful, more open to abuse, more open to enabling
the flourishing of our human species. So, that seems to me
to be the question we face, this year and in each year
of the 21st century. Now, the most evident way in which these technologies
are beginning to affect us is a way which is very troubling, which could also be very helpful. And I want to take you on
to a journey into a future, which I believe may not be
beyond the lives of many in this room, in which we have a world without work. In which our tools have been
developed so well that they do all of our work for us, in which artificial intelligence and robotics have taken the load of human toil off the shoulders of the human species. And, essentially, work, labor for money, which we have been schooled
into believing is what we must do, since we were hunter-gatherers
and worked in this random way, we've moved toward an industrial society, for which we are educated,
for which we are prepared, in which we labor to pay our way. We face the future. I do not know how far ahead,
I do not know how likely, but I think it's
a significant possibility, and it's a significant possibility
in our lifetimes, in which work is no more. How do we frame this future? Is this a future which takes us
to heaven on earth, in which we need not work? Or is it a future
which takes us to hell on earth, in which we cannot work? I believe this to be a question which we should be taking
far more seriously than we have been, whether we as individuals,
in our families, in our professional groups,
in our societies, or whether we as those
who influence governments, and labor unions, and investment policies, and flows of capital, and the global order. Because I think this is a question
which is certainly as serious as things like the climate question, certainly as serious as things
like the possible end of antibiotics. These are huge risk questions
confronting the human community, we have to address them. To reassure some of you
and to worry others, these are not just questions
coming from the radical left, who maybe are
most critical of the technology, not questions coming from Luddites - some of you will know
the name of Ned Ludd. Britain, my former country - I now live in America - had the first industrial revolution. And gangs of men went around
smashing equipment, because they believed
it would destroy their jobs. And of course, they were right, and they were led by a man
called Ned Ludd. So, Luddism is the name we give to smashing technology because it would destroy jobs. I'm not coming here
to play the part of Ned Ludd. What is interesting, and this is really very new,
in the last two or three years, we now see mainstream,
centrist, responsible, boringly respectable people,
beginning to raise this question. So, for example, two MIT economists have published two books,
in the last three or four years, raising these questions
and seeing major losses of work because of robotics. The Financial Times, London's boring financial newspaper, had an article on this very question. A few months ago, perhaps the most
comprehensive discussion, Oxford University has a center
called The Martin Institute, it is a futurist center, and it includes transhumanists and also enthusiasts for the future. These are not, in any way, Luddites. They produced a fascinating report, which argues that 47% of jobs in America, in 20 years' time, are likely to go. Of course, interesting number, 47%. Brazil has 47% of the land mass
of Latin America. You all know the map of your country. Imagine that land mass removed. It's almost half the jobs
in the US economy. They analyzed 702 different jobs. A detailed examination. That's what they think. Now... For example, look at the companies. Just examples of what's going on
- look at the news of the last two or three weeks: WhatsApp bought by Facebook
for 19 billion dollars. Do you know how many employees
WhatsApp has? It has 55. Nineteen billion dollars in capital, 55 people with jobs. Huge value. Or to take perhaps
an even more striking example, which connects us directly
to the old economy, Instagram versus Kodak. Kodak, for two generations, dominated global photography. At its height, it had 145,000 employees, in the company itself, plus all
the distributors and photo shops. Instagram, though there is some dispute as to exactly how many members of staff it has, has around 11. I could not make this up! You can Google it while we are speaking. New kinds of value delivered in ways that involve hardly any human labor. So, what I'm arguing
is that we have a prospect here in which you are essentially removing
the human factor, the labor factor from the value equation. Capital, technology, deliver value. Humans are increasingly unnecessary. And the technologies
which we're discussing - I have three quick examples
to stick this in your head. Google cars, the G cars: California has now allowed these cars; 500,000 miles driven, not one accident caused by the car. Two accidents caused because the human driver got worried and took over. Again, I'm not making that up! Far safer. Wired magazine recently argued,
this technology may begin with trucks, because trucks' cost base is far higher
and they're much more dangerous, this is a far safer way to drive. The Google cars
have this thing spinning on the top. It spins ten times a second, every second it absorbs
more than a million data points. This is very sophisticated. Far safer. I don't know how many
truck drivers there are in Brazil, in the United States there are
5.3 million truck driving jobs. In ten years' time, fifteen years' time, I suspect there'll be zero truck driving jobs. I have tried to interest the US labor unions, the AFL-CIO, in having this conversation. I drew them into a conference which we had recently. They don't seem particularly worried. I think the Teamsters may end up the big truck drivers' union
with no members. They did not seem like particularly worried people. People think that the future
will be so like the present. That's why the future is so risky. Second example, of course,
we have the IBM Watson. You know, IBM's supercomputer
which won Jeopardy, which made it the most famous
computer in the world. They're now using IBM Watson
for medical diagnosis. Diagnosis is what family doctors do,
it's the main thing they do. They don't do it very well. Will there be family doctors
seeing patients in 15 years' time, when you can have a supercomputer and you can have a nurse or you can have a terminal
taking your symptoms? Third example, of course, MOOCs. MOOCs, M-O-O-C,
Massive Open Online Courses, these huge AI-based courses which Stanford and MIT are offering,
experimenting with these courses in which the key thing
is it's zero marginal cost. So, basically you can have
ten students in a class or you can have
a million students in a class, it doesn't cost any more. And one of our top experts in innovation has said he reckons in 10 years' time 90% of American colleges may have closed. I think maybe he's pushing it a bit, but think of the dimensions: incredible opportunities
for global education in parts of the world where you'll never
get the kind of capital to establish major universities
we have in the West. But not all opportunities for people with PhDs. The standard answer, of course, which some of you are already giving
as you get cross with me, the standard answer to this argument is, "But people like you
have always been wrong in the past. The first industrial revolution, every industrial revolution since: every wave of innovation destroys large numbers of jobs, and creates even more, and creates far more wealth. It'll happen again." Well, I have to say I regard that as a kind of industrial revolution fundamentalism. The notion that it worked then, so it works now, and it'll always work, is a very dangerous way to think. Two reasons. First of all,
the pace of change, Moore's law, exponential curves, it's getting pretty steep. Ray Kurzweil has been quoted earlier. Genius of a man, who believes the singularity
will come in 2045 or something, when the robot intelligence
becomes smarter than us and takes over. I think he's wrong. But this guy is now a director of engineering at Google. This is moving very fast. Secondly, we had this discussion
recently at my think tank, and we had it led by a man called Marshall Brain, who is a computer scientist,
who began a famous website, HowStuffWorks. One of the early very successful websites
which he sold years ago. Marshall Brain said,
the problem with this argument is that of course
this new revolution in technology will create a whole lot of new jobs. But who will do these jobs? Machines will do these jobs. That's the whole point. Isaac Asimov had this famous
Three Laws of Robotics, or four, because he set one
on the beginning and called it Zeroth, which are basically all about
robots being nice to people. They should not harm people. It is a wonderfully, marvelous,
liberal sort of "use it right now because we have drone robots going around
and killing people somehow", or other Asimov "if you have not
been built into our technologies." But the point is
he did not add another law which said a robot will never take a job from a man or a woman who needs it. That is the question. So, how do we put these pieces together? I began by saying this is the greatest question of our century: how do we frame this question? Is technology essentially tools to serve the human race, our humanity? Or, ultimately, is technology the point? Are we meant to conform to technology? Is technology going to win? Is a technological model of what it means to be alive, to be thinking, to be active really where we want to go? On that view, we're basically just rather poor robots, rather poor computers, and we want to get out of the wetware and into the software and the hardware of this new world. Well, it seems to me: the wheel, that's a tool; the axe, that's a tool. Of course, in this country, where you have
primitive tribes still using very basic technologies, side by side
with a highly advanced economy, you see this very clearly. These are just tools. We must frame them as tools. Tools to be used in the interests
of the human community. That's how we frame these things. The set, the context we place around them. The way we understand
what it is that they're for. Now, as we begin to conclude, I have three questions to leave you with. One seems to me
to be really very evident: that is to say, with all that has been said here about highly advanced technology, how will we operate when we have robots doing almost all of our work? I mean, they will not be novelists and poets and so on; well, they will try to be, but that's not my point. We will not all get jobs as novelists and poets. Almost all of our work is doable by smart computing. The first question, though, is about the existing technology we have. We have these little smartphones, which are really highly powerful computers. We accidentally call them phones, because occasionally
we even use them as phones. We have these things, how are they integrated
into our social lives? To what extent do they enhance our humanity? What they do is turn us into machines, walking around bumping into people while typing; and in the middle of the day people pull them out under the table in the bar, you know how it happens,
I've seen it in the corridors. These are wonderful things,
and I use mine all the time, but how do we integrate them
into a social purpose so they help the human race, so they are tools for us to use in order that we might flourish
as human beings. It's a question for now. The question we should be
much more worried about now. I think these tools are extraordinary; I wouldn't be here if I hadn't been chatting with Ana on Twitter three or four years ago; we've been chatting over the years. These tools are extraordinary. We must integrate them
into our social and personal lives. Second question. Plainly this prospect of Heaven or Hell
of a world without work is a future prospect, but of course the future is shaped by the decisions we make today and tomorrow; that's why this is so important. What are the impacts
of this way of thinking, even if it's just a 30 percent possibility
in perhaps 40 years' time, how will you do the risk calculation,
as you do with climate and antibiotics? Whatever the calculus is, how do we use it as a tool for decision-making now, in the policy community,
in the corporate community, in the NGOs? Among other things, how should these reflections influence decisions taken today at the policy level? I think there are profound implications, not least for education. We have an education system designed
to turn us into industrial creatures. Creatures for Fordism, for the factory system. We've loosened it up a bit,
we talk about STEM all the time now. It's the same idea, training workers. If when you get to be 22
when you leave college, you retire, what would you have been doing
for the previous 17 years. Questions of course about income. Socialism says we redistribute income,
this isn't socialism, this is the good of the community
from a completely different perspective. Income may have no direct relation
to the work you do. Third question. This may be the most important question
for you of the 21st century. We began with the question
for the human race, now the question for you. If you never had to work again. If tomorrow morning, and the next morning, and the next morning you got up and there was no labor involved, and you could do essentially
what you chose to do, what would you do? What would you do? (Portuguese) Thank you. (Applause)
Honestly, the idea that automation, advanced materials, and AI will be dramatically changing the socio-economic structure of society is the only thing that really keeps me from being suicidally despondent about my future.
The key will be convincing the people with all the money to share...
(Flying pigs coming soon!)
Childhood's End describes a world with a living style similar to this
This is why I don't feel bad about leaving my economics education and starting a game dev studio. It just feels like technology is far enough ahead, and software is far enough ahead, that we don't need as many people there.
At least in first world countries, this is a possibility. Doesn't mean that it will be like that for everyone. Even first world doesn't yet have everything connected, second and third world will need a lot more work.
An automation revolution will be meaningless if we do not solve the far more looming environmental crises that we currently face.
Not having to work while being able to spend the rest of my life well nourished, well educated, and well accommodated sounds like a dream come true. I would be free to pursue my education with the intensity I currently have to spend keeping myself fed and housed. For a lot of people, I imagine this would bring about the ability to help advance the cause of humanity, whether through social or educational means, to amazing levels faster than ever before. BUT, and there is a but, I fear to think what depravity the same opportunity given to racists, bigots and zealots of all flavors would bring in the way of damage to the same humanity many would seek to uplift. Giving someone that hates another for whatever reason the ability to become mentally and physically strong would likely advance humanity toward destruction just as quickly, and right beside, advancing it to perfection. I think that a future such as the one stated here would force us to come up with a unified, species-wide paradigm of what is right and wrong very quickly if we intended to survive and flourish at all.
At least two movies related to the singularity or something similar have come out; now I would like to see a movie about a future with no work!
Workers of the World - Relax! An abridged reading of 'The Abolition of Work' by Bob Black.
Ok, since no one else... it's Possibility!