- [Host] Welcome to Qatar. Welcome to the New Doha Debates. The second in a series of six debates on challenges the world faces today. - The impact that artificial
intelligence or AI is having on our lives today
is as far reaching as any of us could've imagined
and it's moving fast. Do the benefits outweigh the risks, or do the risks pose a fundamental threat to the future of humankind? Should the AI revolution be contained, or should it be unleashed? Could we even pull the brakes if we tried? Or is it simply too late? That is the subject of our
debate here tonight in Doha. - [Host] Please welcome
your Doha Debates moderator, Ghida Fakhry. (applauding) - Hello, and welcome to you all to the second in our series of New Doha Debates, where we take on AI, one of the most consequential
issues of our time and look at both the challenges and the opportunities it presents. First we will listen to
some of the most, well, some of the divergent views on this topic, and certainly there are many. Then, in our Majlis, we will explore some of the possible solutions
to this contentious issue, and ask each one of you to
make personal commitments that will help us address
the challenges we face in AI. We are at Northwestern
University here in Doha, with students from top
academic institutions, in education city. We're also live on Twitter and Facebook. We want to hear from all of you
following this debate online, so before we begin, let's go straight to our correspondent, Nelufar Hedayat, who will give us an idea of exactly how you can all join this discussion; Nelufar? - Ghida, I can tell you that
already we've got thousands and thousands of people
who are watching, avidly, to see what this debate holds. As you mentioned, we would
love to hear from you, your thoughts, your
opinions, your comments on what you are hearing. The hashtag, in order for you to have your comments possibly read out live within the debate, is DearWorld. Hashtag DearWorld. Tell us what you think, and hopefully it can be part of the debate today. Ghida. - Thanks very much, Nelufar; we
do hope that you all join us for this discussion. So when we talk about AI, what do we mean? What do we mean when we talk about the shift from narrow, or weak, AI to artificial general
intelligence, or strong AI? How have we gotten to where we are today, and where are we heading? What are some of the potential dangers and opportunities along the way? Let's take a look. - [Narrator] When most
people hear the words artificial intelligence,
they think of this. - I'll be back. - Instead of the many ways people interact with AI every
day, from Google Translate, to personal assistants
like Siri and Alexa. Dartmouth College computer
scientist John McCarthy coined the term artificial
intelligence way back in 1955. But by then, others had
already started to think about what AI would look like. Most notably, Alan Turing. His Turing Test sets an
early and still relevant bar to see if artificial
intelligence measures up to human intelligence. The test is simple: if you
can have a conversation with an AI and not notice
that you're talking to an algorithm, well, then it's passed. We're still a long way away from creating artificial general intelligence,
a self-aware entity, but AI technology has
boomed in recent years. Computer processing power
continues to grow rapidly, and the last several
years have seen a boom in the generation of big
data that we can feed to machine learning algorithms. Futurist Ray Kurzweil used a set of models and historical data to predict
that machine intelligence will surpass human
intelligence as early as 2045. This moment is what
people in the AI biz call: the singularity. The birth of an artificial
general intelligence. No matter how you see our future, from artificial intelligence
exterminating humanity, to it spurring us on to the
next stage of our evolution, artificial general
intelligence could be the last major invention humans make alone. - So then, is AI the new frontier? In this debate, we're
looking at the immediate and long-term effects and risks, benefits as well, of AI
as they affect us all in our daily lives. It's a complex but fascinating discussion. To help us better
understand what is at stake, we have four top experts in the field with different perspectives
on both risk and reward. Each has five minutes for their argument. We've got with us Muthoni Wanyoike, Dex Torricke-Barton, and Joy Buolamwini. They're joining us all
here on stage in Doha. We're also joined by
Professor Nick Bostrom. He was unable to get here in person. He is with us though through
the wonders of technology, live from a studio in London. Great to have you with
us, Professor Bostrom. And so these are our speakers. We'll be hearing from them shortly. We also have with us someone
whose task will be to listen to all four positions and share wisdom from his experience in
finding common cause. He is our connector. Govinda Clayton is a senior researcher in peace processes at the
Center for Security Studies at ETH Zurich. Govinda is a world expert in
negotiation and mediation. He's sitting in the front row seat there. Thanks very much for
being with us, Govinda. We'll be coming to you
a little later, as well. Now, let's begin by
hearing from our speakers. Our first speaker is Muthoni Wanyoike. Muthoni is a data scientist
from Nairobi, Kenya. She is the co-founder of Nairobi's Women in Machine Learning and Data Science. Please welcome Muthoni,
the floor is yours. (applauding) - Good evening. Is it conceivable that
Africa has the potential to be the powerhouse
continent of the future? Or is our view of the continent
still mired in images of poverty and disease? My goal here today is to do two things. The first is to trigger mindful optimism towards the AI conversation, and the second is to highlight applications of AI that are
improving the quality of life within Africa. Mindful optimism
disentangles us from the fear and fantasy of an AI apocalypse. It helps us identify the future we want, and not just the future we fear. Everything we love about
civilization has been a product of human intelligence;
imagine with me the endless possibilities when we use AI to amplify our own intelligence. In Africa, an elephant is
killed roughly every 15 minutes. At this rate, the surviving
population of just 100,000 elephants, which is down
from a peak of two million, will be destroyed over the next few years. With artificial
intelligence, Resolve's experts are working to provide a
new pair of digital eyes to park rangers; they use vision chips with battery life of up to 18 months to track the elephants, to
monitor suspicious activity, and predict where
poachers will strike next, based on their database of past hunts.
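The prediction step she describes can be pictured as simple hotspot ranking over a database of past hunts. Below is a minimal sketch in Python; the grid cells, hours, and incidents are invented for illustration, and real ranger-support systems layer imagery, terrain, and animal-movement data on top of a prior like this.

```python
# Minimal sketch: ranking patrol areas by past poaching incidents.
# All cells, hours, and incidents here are hypothetical.
from collections import Counter

# Past poaching incidents recorded as (grid_cell, hour_of_day).
past_hunts = [("B2", 4), ("B2", 5), ("C1", 22), ("B2", 3), ("A3", 23), ("C1", 21)]

# Rank cells by historical incident count: a crude prior for where
# poachers are most likely to strike next.
by_cell = Counter(cell for cell, _ in past_hunts)
for cell, count in by_cell.most_common():
    hours = sorted(hour for c, hour in past_hunts if c == cell)
    print(f"cell {cell}: {count} past incidents, around hours {hours}")
```

The fertile land in Kenya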
has the ability to lift millions out of poverty;
up to 60 percent of African smallholder farmers are
women who, until the advent of mobile technology,
had little to no access to financial products such as loans. Farm Drive, a Kenyan
startup, is using alternative credit scoring models to ensure
their financial inclusion. When smallholder farmers
have access to credit, they can sustain and contribute
to economic development and improve their own livelihoods, as well as the livelihoods
of people around them. This credit scoring algorithm
does not just do that. It provides an opportunity
for lenders to access insurance in the event of
unforeseen weather patterns.
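The alternative credit scoring she mentions can be pictured as a model trained on non-traditional signals. The sketch below assumes scikit-learn and invents feature names and data; Farm Drive's actual inputs and model are not public.

```python
# Minimal sketch: estimating repayment probability from non-traditional
# signals where no formal credit history exists. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical signals: mobile-money transactions per month,
# farm size in acres, last harvest yield in tons.
X = np.column_stack([
    rng.poisson(20, n),
    rng.uniform(0.5, 5.0, n),
    rng.normal(2.0, 0.5, n),
])
# Synthetic repayment outcomes loosely tied to those signals.
y = 0.02 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, n) > 1.9

model = LogisticRegression().fit(X, y)

# An applicant's estimated repayment probability can stand in for a
# credit score when deciding whether to extend a loan.
applicant = np.array([[25, 2.0, 2.3]])
print(f"repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```

At InstaDeep, where I work,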
we are using data from food retail stores in Tunis to
predict customer demand, customer behavior, and also
to improve customer service, and these are just a few
ways in which AI is making our lives better; it's
helping us save wildlife, it's lifting farmers out of poverty, and helping us improve customer service. And we have barely scratched the surface. AI promises to bring unparalleled
benefits to the continent. It is accelerating
development of industries that are eco-friendly and making the most of our limited resources. So yes, the promises of
artificial intelligence extend far beyond the
problems of Silicon Valley. And I'll share with you
the African privilege: more than 60 percent of the
population in sub-Saharan Africa is under the age of
25, yet only 0.5 percent of the global population knows how to code. This, ladies and gentlemen,
is where the risk and the opportunity lie. Can we teach young people
how to make robust technology that is beneficial to
them and their communities before ceding power to it? Can we learn how to clearly
define research questions and build towards the
future that we deserve? Can we create AI systems
that are compatible with our ideals of human dignity,
our rights, our freedoms, and our cultural diversity? It is my hope that this
Majlis will help us identify common ground on these issues. AI provides, and will continue to provide, us the opportunity to rewrite our future,
and from where I stand, this is a future that is filled
with a strong African voice: with strong African youth representation, representation of African
women, African scientists, and African innovators,
and not just Africans, but representation of the whole world. I have a stake in my
future, and refuse to settle for a divided world; thank you. (applauding) - Muthoni, thanks very much for this. Quite an optimistic view. You call it: mindful optimism. But what is clear is you
don't see AI as a threat. Quite the opposite, you
see it as an opportunity, not just for Africa, but for
the rest of humanity, as well. What do you say to those
who have serious concerns about AI and about the way
AI might shape the future? Concerns not just about
who will control AI, but how AI may eventually control us. And let me give you a
quick sense of what I mean. - [Narrator] AI is already
being used to monitor and manipulate populations. Surveillance systems
exist all over the world. And the Chinese government, for example, has taken it to a whole new level. Some workers are forced to
wear brainwave-reading hats. Facial recognition
technology is used to track and monitor the emotions
of school children. And China has set up
a social credit system that rewards and punishes
citizens by monitoring their social behavior. The system ranks citizens
against one another. Low credit scores result in punishment. This creates good
citizens and bad citizens, in a society controlled
by what is effectively a police-tech state. China and other global powers
are already weaponizing AI as countries compete for
military and economic power. Russia has begun building its robot army. South Korea has long
range AI machine guns, and the US has funded AI weapons
through it's own program: DARPA. If we don't regulate the
use of AI, other countries might follow their lead;
human rights are at risk. Poverty could rise globally,
and the division of power and wealth could be exacerbated. - So what happens when
governments totally control AI and use it to their advantage, perhaps sometimes against
their own citizens? And when wars in the future
are fought without any human oversight, shouldn't we
all be concerned about this? - I think we should not let machines make life-taking decisions. And AI has not caused the
wars that we have right now. We, as humans, have the
ability to ruin the universe, to destroy it completely,
or to ensure our continued existence for the next billion years. - But that means that we
have to embed some kind of social responsibility, an
ethical dimension, into AI. Is that feasible at all? How do you make sure
that robots are hardwired in a way that would avert catastrophe? That would actually, you
know, take care of this ethical dilemma that AI raises? - I think one of the ways we
can do that is to be clear about our goals, about our
personal goals for creating the machines, and this way
we are able to identify when the machines divert from our goals. And we should also be
able to change goals, depending on how we evolve
and how our needs change. - You say that, but can we
endow robots with our own human moral values? - I think we can, if you
think about the problems that we have created and the
solutions that we have created to date, then it's very
clear that we are able to impart our goals within
the machines that we create. Do we want technology to control us, or can we control technology? - That is the big question
that hopefully we'll be attempting to answer today. Muthoni Wanyoike, thank
you very much indeed. - Thank you. (applauding) - Now, our next guest, our next speaker is Dex Torricke-Barton. Dex is the director of the
Brunswick Group in London. He's also a former communications
director at Facebook, Google, and SpaceX. Dex, your five minutes start now. (applauding) - There's a famous joke
in the AI community. A machine learning
algorithm walks into a bar, and it says: I'll have
what all of you are having. There are many enormous
challenges created by AI. And some of those challenges
have pretty scary outcomes for humanity if left unchecked. But let's put things in
perspective for a moment. AI is still a technology that,
for all the hype around it, is very much still in its infancy. This is a technology that
most of the time, in spite of all the
enormous value it's creating for us already, is quite
parrot-like in its ability to interpret the world. How many people in this audience right now have a smartphone? Put up your hand. That's pretty much all of you. If you have a smartphone,
you are already using AI. AI is built into pretty much
all of the apps and services that you're using today, and
the world hasn't ended yet. Good news. Now, even though AI is great now, yes, it could absolutely create
problems in the future, no question about that. But we have to stay calm
and put that in perspective. Here's the thing though:
lots of people are refusing to stay calm, they are not
putting things in perspective. I used to work at the United Nations, it's where I started my career. One of the most curious
things at the UN was that every single year, a whole bunch of people turned up to campaign for a ban on killer robots. Think of everything wrong
with the world today: the threat of climate change, the fear of nuclear proliferation, the fact that we're living
through the greatest humanitarian catastrophe in history, the global refugee crisis, really, killer robots is the thing
that gets you out of bed in the morning and gets you really angry? Okay, that sounds like you've
lost perspective to me, and I think part of the
reason that people have lost perspective is because
they have chosen to focus on the problems with technology
and not the underlying social and economic and
political challenges that in many cases have
led to those limitations of technology; yes, there
is no question that, with AI, there is the potential
and the reality, sometimes, of having biases built
in, limitations in the way we have built that technology,
but that is a reflection not of an inherent problem
with AI, the technology, that is a reflection
of a diversity problem built into our tech industry, which is, in and of itself,
a reflection of our deeply divided society, riddled
with social inequalities. Yes, there is the potential
for killer robots, and that Russian robot army,
to be a real problem one day. But that's not a problem,
again, necessarily because of AI, that's a problem in
our international system. Why is it that nation states
are seeking weapons of war, to dominate the world? Why is it that we have
that kind of instability built into the way the world is ordered? These are questions that
are much harder to solve, and because of that, many
policymakers would much rather blame it on tech and
focus the conversation just on the technology. So if you're looking for the tech industry or AI practitioners to solve
all of our problems with AI, I'm sorry to say you're
probably deluding yourself. The problems that we face are societal, and in order to solve
them, we need a much larger conversation in our society
about how to go about doing that and that means we need
people in the tech industry to work together with leaders,
people who are working in governments and policymakers. Now, here's something that
actually does concern me a lot more than those
potential Russian robots, and that's the fact that most
politicians are generally quite bad at making
decisions about technology. Just a couple of years
ago, you had the spectacle of Jean-Claude Juncker, the head of the European Commission, joking that he carries around an aged Nokia phone, a dumb-phone; he doesn't carry a smartphone, doesn't have emails, and gets
his advisors to print off his emails so he can read them, literally this is one of the
most powerful politicians in the world, overseeing
a vast apparatus dedicated to regulating tools exactly like AI. This is a guy who doesn't even know how to check his Hotmail. So is that the kind of person
you want making decisions about the technology and
the services that are gonna power the future of our
world, and that are powering all those apps that you
have on your phones? I think we need to do better as a society in demanding that our
politicians actually understand the technology they are talking about. A revolutionary concept. Now, ultimately I'll say
this: AI, like all technology, like all tools, can be
used for good or for evil. Technology, in and of itself, is not evil. It is simply a tool; it is neutral. A hammer can be used to
bash someone's skull in, or it can be used to build a house. AI is the most powerful
hammer of them all. In this century, AI may
well crush our society, if the worst fears are left unchecked. But it may also allow
us to build absolutely extraordinary things, as
a society, if we choose to realize the potential
and the opportunities there. If we choose to take a best-case scenario and not the gloomy doomsayer view. As a society, I think we can do that. I think the tech industry
and politicians and leaders, like you, should be working
together to make that happen, and I'm looking forward
to the debate tonight to see how we can do that; thank you. (applauding) - Dex, thanks very much;
an interesting perspective. You believe that AI has great potential, but only if the world's
political leaders get smarter and start working, start
strategizing and working along with the tech
companies of this world, and AI innovators. Now, if politicians are
inept, and in your own words, bad at making decisions on tech, and as we know, tech
companies are profit-driven, why should we trust them more
than we trust politicians? - Well, I don't think it's
a question of trusting one more than the other, but
another question might be: why would we trust them less? The fact is, the vast majority
of the world's innovation is driven by the private sector. And the private sector, by
the way, works very closely, hand in hand, with the public sector, but certainly all of
those apps and services that you are using on your phones, that you rely on every single day, they were probably driven
by the private sector, and they've done an okay job up to now, absolutely there are tons of
problems with the tech industry and there are limitations
to those technologies, but we should start
with an open mind about what they can offer. - An open mind, but in the process, we're giving away a
host of privacy rights, not to mention the data
that's being collected by these tech companies
without our knowledge, without our consent. - Well, I would disagree whether
it's without our knowledge or our consent, yes, probably
most of us don't read the terms and conditions
of all of our apps, but certainly when I post
my location on Facebook and along with pictures
of my food on Instagram, I actually know what I'm doing. And I think, I suspect that
actually the vast majority of people actually do roughly
know what they're doing, and politicians love to
assume that all of us are dumber than we really are. Actually, I think people
get a lot of value out of the internet and
out of social media. - But it does sound like
you're letting the Facebooks and Googles of this world a little too easily off the hook. Don't you think that the Mark
Zuckerbergs and other leaders within the major tech companies
should be taken to task and put on the spot, the
way Zuckerberg, the founder of Facebook was,
recently, in a US Senate hearing; let's take a quick look. - Thank you very much, Mr. Chairman. Mr. Zuckerberg, would you be
comfortable sharing with us the name of the hotel
you stayed in last night? - Ah, no. (laughing) - If you messaged anybody this week, would you share with us the names of the people you've messaged? - Senator, no, I would probably not choose to do that publicly, here. - I think that maybe
what this is all about, your right to privacy, the
limits of your right to privacy, and how much you give
away, in modern America, in the name of, quote: connecting
people around the world. The question, basically,
of what information Facebook's collecting,
who they're sending it to, and whether they ever ask me, in advance, my permission to do that. Is that a fair thing for a
user of Facebook to expect? - Yes, Senator, I think
everyone should have control over how their information is used. - Well, not quite, a little bit of a double standard at play there-- - How so? - Should we be trusting
people like Zuckerberg and other leaders within
the tech community, should we have full confidence in them, knowing full well that
the big tech companies, let's face it, are taking
away much of our privacy, and many people would argue
are part of the problem, not the solution. - All that video did was
reaffirm just how out of touch and clueless those politicians
are about technology. There's nothing about
social media saying you have to post your hotel location. There is nothing about
social media that says you have to share your private messages, but if you post a picture of
your breakfast on Instagram and send it to all your friends, yes, your friends will probably see it. Revolutionary concept. So, that was actually a perfect example of how politicians love to
deflect from the real question: why do we have a society
that is so deeply divided? Why do we have a world where war dominates the international system? Why do we have a world
where there is such racial injustice in our society,
that is affecting tech, but it's affecting every
other part of our society? Those are questions we
should ask as well as asking: what can the tech industry do
to better serve our society? - And hopefully we'll get
to these questions, too. Dex, thanks very much indeed. (applauding) Now, on to our next speaker. He is Nick Bostrom. Nick is director of The
Future of Humanity Institute at Oxford University;
he's also the author of the bestselling book, Superintelligence, Paths, Dangers, Strategies. Nick joins us now live from London. Nick, over to you. - Thank you. I think AI has tremendous potential. To think more constructively
about it, however, it's useful to have a clear
view about what precisely we are talking about when we are talking about artificial intelligence. Since 1956, the start
of AI as a field, the objective all along has been not just to automate specific
tasks, but to replicate the same full general form of intelligence that makes us humans smart
and unique on this planet. However, for decades,
that original ambition was radically out of reach. So we settled instead for
building special-purpose systems and so-called expert systems. Expert systems would be big databases; they would be constructed
by having some human domain expert and a software
engineer sit down together, and the software engineer
would painstakingly try to extract from the
human the principles that they used to achieve
their performance. But these expert systems
were very brittle. They didn't scale well. And ultimately, you really got
out only what you had put in. Well, over the last eight or nine years, the focus has shifted. The action is no longer in
creating expert systems. The action now is in
crafting learning algorithms: figuring out ways of making
AI systems able to learn from experience, in much the
same way as we humans do. And this has led to
revolutionary new capacities. Things that were
completely impossible to do with good, old-fashioned
AI are now routinely done. You have these deep
neural networks that can, for example, look at a
picture and see what is in it, or that can hear somebody
talking and transcribe speech. And with deep neural networks and deep reinforcement learning, a wide frontier of new
capabilities are opening up that seem to have much more of the intuitive pattern recognition capability that we humans take for granted but that had proved elusive until now.
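The shift Bostrom describes, from hand-coded rules to rules induced from data, can be illustrated with a toy contrast. This is a minimal sketch with invented data, not any real expert system or production model.

```python
# Minimal sketch: a hand-coded "expert system" rule versus a rule
# learned from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Expert-system style: a human painstakingly encodes the knowledge.
def expert_rule_is_spam(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Learning style: the rule is induced from experience instead.
messages = ["free money now", "team meeting at noon", "you are a winner",
            "lunch tomorrow?", "claim your free prize", "quarterly report"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam

# Tiny bag-of-words features: which known keywords appear in each message.
vocab = ["free", "winner", "meeting", "prize", "report", "lunch"]
X = [[int(word in msg.lower()) for word in vocab] for msg in messages]
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)

test = "free prize inside"
features = [[int(word in test.lower()) for word in vocab]]
print(expert_rule_is_spam(test))  # False: the brittle hand-coded rule misses it
print(clf.predict(features)[0])   # 1: the learned rule generalizes from examples
```

And progress now seems to be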
really rapid in this field. And there are just a ton of
exciting research avenues on the horizon, with a
lot of talent and money rushing in to explore all of these. Now, if, as I think will be the
case, progress will continue to be rapid, we then have to
ask: what will this lead to? And here I think there
are two different contexts that we need to recognize and distinguish. There's a near-term context,
and then a long-term context. And each of these forms
the basis for what I think should be a serious conversation. They both should be taken
seriously, but they are different. And if you mix them together, then you get a lot of confusion. I think you simultaneously
get an over-hyping of what is possible now, but
also I think an under-hyping of what will ultimately be possible. But if we wanna have some
view of how to feel about AI, I think we need to recognize
that there are these contexts, and that the near-term
context, at some point, will become the long-term context. So in the near term, I think
AI is really a general-purpose technology that will
have vast benefits across all sectors of society, and
it will make many processes more efficient, if you're
running a logistics center, let us say, and you are
better able to forecast future demand, then you
can adjust the stocking so that you need less inventory
and you can save money. If you are a big social media company, and you can find better ways of serving up in the news feed things that
people actually want to read, you can increase user engagement. We will have self-driving cars, we will have medical diagnosis systems that can look at, say, an
X-ray and help the doctor decide whether it is a cancer or not, and you can go through almost
every sector of society, and you can see there's some way in which these deep learning systems
can help us get better results. But then there's the long-term context, and there, ah, I think AI
is not just one more cool, technological advance, not just
one more interesting gadget. If AI one day
succeeds at its original goal, which has been all along, as I said, to replicate human
intelligence in all domains, then really what we have
is the last invention that humans will ever need to make. If you think about what it
means to mechanize intelligence, to full human-level performance,
and then shortly after, presumably, superhuman
levels of intelligence, you soon realize that that's not just a technological advance like any other. If you have machines that can do research, that can do science better
than we humans can do, then from there on,
really further progress is driven by the machines,
and so this transition to the machine super-intelligence era, will be, I think, the most important thing that has ever happened in human history. And it will unlock an enormous potential for wonderful things to occur. I think a whole post-human
condition becomes possible. Space colonization, cures for aging, all these things that we tend to think of as just science fiction,
I think that actually becomes a real possibility
once you have super-intelligent scientists and technologists. But along with this
enormous potential for good, I think there are also going
to be very significant risks, including existential
risks, threats to the very survival of our species, and
so what should we do? I don't think we
should try to stop this. I don't think it's possible
to stop progress in AI, and even if we could, I don't
think it would be a good idea to do so, but we should try
to get our act together, both doing technical research
into scalable methods for AI control, so that
even as the systems become arbitrarily intelligent,
and at some point, super-intelligent, we still
know how to align them with human values, and also progress, so that we will have
the governance structures that make it possible for us to handle these enormous
powers responsibly and for the benefit of all. - Professor Nick Bostrom,
thank you very much. (applauding) So Nick, you mentioned
the vast benefits of AI, but you also take quite
an alarmist view of AI, certainly in the long-term context. You say it could spell the end
of humanity, as we know it. But whether we like it or not, AI is here. So what do we do about it? To whom do we entrust the
challenge, in other words, of finding the right solution? - Yeah, I think in general,
with AI in particular but technology more
generally, our capabilities are improving more
rapidly than our wisdom. And I think that that's
just a condition we're in, and the technology will
keep moving forward, so I think we'll need to
try to get our act together as best we can; with respect to AI, I think that involves, as I suggested, a certain amount of technical
work in computer science and mathematics, to develop
scalable methods for AI control, and then an ongoing
conversation in society as to how we use these new
capabilities as they come online. So I don't think, for
example, that it is ridiculous to have a conversation about
lethal autonomous weapons, as Dex suggested, even though
there are other problems, maybe more urgent problems
in the world today, if you want to prevent human
society from going down this avenue of leveraging
the latest AI to make more lethal weapons, I think
it's easier to do so before there are large arsenals deployed. So sometimes I think that thinking ahead about what might happen if we
continue on the current course is also worth doing, even as, of course, we should also grapple with
a lot of the other challenges of society that we see
around us in the world today. - Nick Bostrom, for now, thanks very much. We'll come back to you a
little later; thank you. (applauding) Let me now introduce to
you all our final speaker, Joy Buolamwini is a computer scientist and digital activist based
at the MIT Media Lab. Joy is founder of the
Algorithmic Justice League. Welcome to you, Joy, the stage is yours. (applauding) - We've arrived in the age of automation overconfident and under-prepared. AI systems are increasingly
impacting our lives, deciding what kind of job you can get, which colleges admit your children, and even the medical
treatment that you receive. And while the builders of
AI systems aim to overcome our human limitations,
research studies and headlines like the ones you see behind
me continue to remind us that these systems come
with risk of abuse and bias. For example, after years of development, Amazon scrapped a sexist AI hiring tool. If you submitted a resume
that just contained the word "women's," it would be ranked lower than other resumes that didn't. Supposedly easy-to-use voice recognition systems continually struggle with English accents
like Google have protested the use of their AI skills
for creating applications for drone surveillance and even lethal autonomous weapons
systems, it's happening. Unchecked, unregulated,
and at times, unwanted, AI can compound the
very social inequalities its champions hope to overcome. The potential of AI must be
tempered with this reality. AI magnifies the flaws of its makers, us. I was confronted with
the flawed nature of AI when I encountered something
I called the coded gaze, my term for algorithmic bias that can lead to discriminatory practices, while I was working on an art installation. So as you can see in this
video, my lighter-skinned friend was easily detected. When it came to my face, I
needed a little assistance. To have my face detected,
I put a white mask on over my black skin. Now some might argue not
being detected by technology used for mass surveillance
isn't the worst thing. But a recent data breach
showed the Chinese government is using the technology to
track over 2.5 million residents in Muslim areas of the country. Even when accurate,
these systems can be used for harmful discrimination. Still, inaccuracies in
facial analysis technology also pose problems; in the United Kingdom, where police have reported
the performance metrics, they don't look so good. You have false positive match
rates of over 90 percent. That's more than 2,400 innocent
people being mismatched with criminal suspects. Not being detected by AI systems can have negative impacts as well. Researchers recently showed
that the kind of technology used in self-driving cars to detect pedestrians was less accurate at detecting,
you probably guessed it, darker-skinned individuals as compared to lighter-skinned individuals. So on one hand, mass
surveillance can be the cost of inclusion, and on the
other hand, not being detected by autonomous vehicles can
be the cost of exclusion in an AI-driven world. Even without super-intelligence, AI can have harmful outcomes. We need less attention on
hypothetical future scenarios and more resources devoted to mitigating the current shortcomings of AI. My MIT research uncovered
large racial and gender bias in AI systems sold by IBM,
Microsoft, and, we see you, Amazon. When guessing the gender of a face, all companies performed substantially better on men's faces than women's faces. They had error rates of no more than one percent for lighter males, and when it came to darker females, the error rates soared up to 35 percent.
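The disaggregated audit behind numbers like these is simple to express: instead of one overall accuracy figure, error rates are computed per subgroup. A minimal sketch with made-up records that mirror the reported gap; this is not the Gender Shades data or full methodology.

```python
# Minimal sketch: disaggregated evaluation. A single overall accuracy
# can hide large subgroup gaps until errors are broken out per group.
from collections import defaultdict

# (subgroup, predicted_gender, true_gender); counts are illustrative.
results = (
    [("lighter_male", "male", "male")] * 99
    + [("lighter_male", "female", "male")] * 1
    + [("darker_female", "female", "female")] * 65
    + [("darker_female", "male", "female")] * 35
)

tally = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, predicted, actual in results:
    tally[group][0] += predicted != actual
    tally[group][1] += 1

for group, (errors, total) in tally.items():
    print(f"{group}: {100 * errors / total:.0f}% error rate")
# lighter_male: 1% error rate; darker_female: 35% error rate.
```

My investigations even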
showed these companies failing on the iconic faces
of people like Oprah Winfrey, Michelle Obama, and Serena Williams. Yet some of these systems are
being sold to intelligence and government agencies. The Financial Times reported
that a Chinese company provided free surveillance technology to the government of Zimbabwe in exchange for something very precious: the dark-skinned faces of its citizens. Like the exploitation of the
profitable natural resources from Africa and here in the Middle East, to build Western economies,
we are witnessing the exploitation of the data
wealth of the global south. Now the digital bodies of the global south are being extracted under
the banner of innovation. AI and data colonialism is here. We must fight these trends
before it's too late. Given the rapid adoption
of these types of systems, and the potential for abuse,
I co-launched something called the Safe Face Pledge. And this pledge prohibits
the lethal application of this type of technology,
and also outlines steps to mitigate abuse. Over 60 organizations and thought leaders have supported the pledge,
and initiatives like these show us it is possible
to actually have a say in how AI is used. I'm optimistic that
there's still time to shift towards building ethical AI systems, and ultimately we must bend our AI future towards justice and inclusion; thank you. (applauding) - Joy, thanks very much. Obviously, you believe that
AI can be very beneficial to society but only if
some of these biases that you mentioned are
actively designed out. I wanna pick up on some of
the last words you used. You said that we must now act
to bend AI towards justice and inclusion. As you know, AI's increasingly being used in the criminal justice system. We're seeing it used more
and more across courtrooms. Some people will argue it's
sending the wrong people to jail, others will say, you know, it's actually having a positive effect. If the problem within the
justice system in general has long been human biases
towards minority communities or others, aren't we better
off trusting machines? - We already tried this experiment. So Julia Angwin and her
team at ProPublica did a study where they showed that AI
systems used to predict how risky somebody was
and how likely they were to re-offend actually had bias
against black individuals. So it would predict that black
individuals who were low-risk were actually high risk,
and it had the opposite kind of issue when it came
to white individuals. It would predict that white
individuals who were high-risk were actually a lower risk than they were. So we've tried this experiment, and the problem is, given
how structural inequalities manifest themselves in criminal justice, if you're using that data
to try to improve criminal justice, you're actually
going to make it worse.
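The ProPublica-style check Joy cites compares false positive rates across groups: the share of people who did not re-offend but were still flagged high-risk. A minimal sketch with illustrative counts echoing the reported disparity, not the actual COMPAS data.

```python
# Minimal sketch: comparing false positive rates across groups, i.e. the
# share of people who did NOT re-offend but were flagged high-risk.
def false_positive_rate(flagged_high: int, did_not_reoffend: int) -> float:
    return flagged_high / did_not_reoffend

# Hypothetical counts for two groups of non-re-offenders.
groups = {
    "black_defendants": {"flagged_high": 45, "did_not_reoffend": 100},
    "white_defendants": {"flagged_high": 23, "did_not_reoffend": 100},
}

for name, g in groups.items():
    rate = false_positive_rate(g["flagged_high"], g["did_not_reoffend"])
    print(f"{name}: false positive rate {rate:.0%}")
# The same risk score errs in opposite directions for different groups.
```

- Well, are we gonna make it worse? We actually caught up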
with a lawyer at Harvard, who you might know; she
happens to be blind, and she has dedicated her
time to actually exploring and studying the effect of algorithms on the criminal justice system. This is what she thinks. - As of now, there are
many risk assessment tools used in many different stages
of the criminal justice system. It will make the work of
judges more efficient, and hopefully it will produce
results that are more accurate and less biased; the
idea is AI is much better than the human brain in
processing information, analyzing data, and identifying
patterns in the data. Besides the great potential,
there are also great risks that we should be aware of. AI could be a mirror of our biases. - AI could be a mirror of our biases, but isn't AI also simply
better than humans at analyzing data, and perhaps less prone to making these emotional decisions? - I used to think that
until I ended up coding in a white mask, and then
I had to really question how these systems that
I'd been led to believe were going to be better
than we are as humans, were actually perceiving the world. And again, if you start
thinking about vision systems, which are highly complex and something we're
trying to get AI to do, sometimes we underestimate how
much sophistication we have, as humans, so I'm not ready
to cede decision-making to AI. I'm still coding in a white mask. - But just briefly, when
you say we need to strive towards justice, to make it
more robust, more ethical, more responsible, whose justice? - When I talk about algorithmic justice, it's always about humans, right? And so making sure that we don't have AI that's propagating harms. - Alright, Joy, thank you very much. - Thank you. (applauding) - So thank you all very much, indeed, for your perspectives. Before we step into our Majlis segment and open up this discussion
to our four panelists, and then to our audiences here
in Doha and around the world, let's quickly recap; let's
summarize the key positions that we just heard. So first: AI will create
more equality among nations. That was Muthoni Wanyoike's perspective. Second: Politicians need to understand AI. Dex Torricke-Barton. Third, we heard that AI could
destroy humanity as we know it from Nick Bostrom. And finally, without oversight,
AI will simply amplify inequality, and that was
Joy Buolamwini's argument. So it is your chance now to weigh in on all of these statements. Here, and at home as well. You can just go to DohaDebates.com/AIVote. So onto your phones,
DohaDebates.com/AIVote. I hope I got it right the second time. Now you each have 100 points to assign to whichever of those four positions best captures how you feel about AI, taking into account
both its pros and cons. You can give all 100 points to one single speaker, or
you can decide to divide those points among two, three,
or even all four speakers. We simply don't live in a
world that is black or white, so you don't have to choose between the two opposing ends of the spectrum; you can divide your points up accordingly. The voting begins now, and while we get the
results, let's head over to Nelufar one more time, Nelufar, what are you seeing online? - Well, Ghida, I'll tell you one thing, I'm completely torn between all of those. I'm learning so much as we go along, and it seems that our
viewers who are tuning in on Twitter and streaming this
on Facebook are, as well. Thank you for your comments,
keep them coming in. I'm here, I'm reading them. I want to mention just a few
as you guys cast your votes. We've got one from
Damascus, Mohammed Khadid, who says: even if human feelings
and emotions were modeled and used by artificial
intelligence projects, I don't think it would be an equivalent to human intelligence. So there, Mohammed is
prizing human intelligence above anything artificial
intelligence can reach. And then we've got another
one here from Ali Sekandar who says: dear world, the
debate about AI should be more about its relatability,
rather than just its positives or negatives. So he's actually trying to
make this a more nuanced, more difficult decision to be reached. So our audiences here, and I
hope you guys streaming this, are engaging, are viewing,
and are casting your votes. Just enough time, though,
for me to shout out the middle school, Billas Charter, thanks, you guys are all tweeting
and sending amazing pictures. You're all watching this,
I think in an auditorium by what it looks like, and I
wanna make a special mention of Christina, who's in the
eighth grade at Billas Charter, who says: how much should
we trust AI to take care of the human race? Should they be allowed to be politicians? I wonder what Dex makes
of that, in a second. And just very finally,
I've got to read this one, because this is absolutely brilliant, we've got one here from
Gino Raiti, who says: I for one can't wait for the singularity. That's the moment when
artificial intelligence surpasses human intelligence. Gino says: how amazing
is that for our species to create something self-aware and be able to surpass us in many ways. Now, just to remind you one more time, hashtag DearWorld on
Twitter, and you can comment on our Facebook page
as well, Doha Debates. And I will try and engage
as many of your comments into the live debate as possible. Ghida, back to you. - Well, that's great; lots of engagement, excitement and anticipation. Thank you very much, indeed, and I'm told we have
the results of our vote. So let's have a look at them on-screen. Alright, so the number two position: politicians need to understand AI, is the one that, well, comes in second. Ah, without oversight, AI
will amplify inequality. Joy, that was your argument. That has gotten the most votes so far: just over 35 percent. Followed by Dex's position: politicians need to understand AI. Then number three is: AI could destroy humanity as we know it. That was the position that was articulated by Nick Bostrom in London, and finally: AI will create
more equality among nations. Muthoni's argument got 12.83 percent. Alright, now onto our connector,
who has been listening intently from his front row seat. Although AI is not his area
of expertise, mediation and conflict resolution are. Govinda, please join me
now onstage if you would, and share with us your initial thoughts. (applauding) - Thanks a lot, Ghida. Wow, I thought all the
presentations were just fantastic. I really enjoyed hearing
such varied responses to the topic; I feel each
speaker offered really important but also quite different insights. But now, as we move into the Majlis, we want to build some consensus. So I spend most of my time
trying to better understand how to resolve conflict
and build consensus. And let me tell you, whatever the context, the first step to finding common ground is to get some clarity on the
issues we want to focus on. Complex topics, like
artificial intelligence, raise lots of contentious issues, so to increase the chance
of finding some consensus, we first need to identify
some common issues, and then ask the speakers
to kind of focus in on these a little. Essentially, we need to
get all of our speakers onto the same wavelength,
to give us the best chance for some points of consensus to emerge. So, what I'm gonna try and do for you now is identify three common
themes that I think came up across the presentations,
and then you can explore this a little further, in the Majlis. So, firstly, it seemed to
me that all of our speakers implicitly argued that AI is already here in a really meaningful way,
and that its continued growth is totally assured; therefore
we didn't really hear much discussion on, kind
of, is AI worth the risk? But instead we heard much
more discussion focused on the new risks and the
new benefits associated with the continued growth of AI. So I'd really love to
hear the speakers say a little more on this. So for example, like, is
the exponential growth of AI a certainty, and if so, is
the more relevant question maybe more: how can we
best manage these risks, and maximize the benefits? Secondly, and actually
perhaps more meaningfully, it seemed to me that all of
our speakers spoke directly or indirectly about the need for, and the challenges associated
with regulating AI. Dex and Nick both spoke to this directly. While Joy instead highlighted
some of the more kind of worrying applications and
downstream implications of AI. She also touched on the possibly sinister uses for which, many people will argue, we really require some kind of regulation. And finally, in her closing
remarks and discussion with Ghida, Muthoni highlighted
that regulation is really a challenge that requires
more consideration. And so it seems to me that a second point of discussion, in which we might identify some form of consensus but also probably highlight some important differences, relates to the need for new regulations to mitigate some of the risks associated with AI without limiting its various sources of potential. Thirdly, and finally, we
heard each of the speakers talk about time horizons,
with the focus of discussion really ranging from the present day, now, to some relatively distant future. I would therefore be really
interested to hear the issue of time discussed more in the Majlis. For example, what do the speakers consider to be the short and the
long-term with regards to AI? And to what extent do the
costs and benefits that they've been talking about relate
to a specific period? I think perhaps, and probably
quite understandably, really, we have a little more consensus
with regards to the risks and benefits of AI now,
and more of the differences of opinion really emerged as we moved further into the future. (musical theme) (applauding) - Govinda, if I could just keep you there for just another 30 seconds,
you've simplified our task by identifying these three
areas where there could be some commonality, as you mention. Everyone agrees that
AI is here and growing, there's the need to regulate AI, and there's that business
of the time frame, or as you've called it, the time horizon. What specific tools,
though, in your toolbox, could we actually use
to try to amplify some of the common ground
that you've identified, and build on it? - Sure, so in
conflict resolution practice, we tend to talk about
the process of moving from positions to interests. So each of our speakers has
really set out clear positions relating to artificial intelligence. So this is essentially what they think, and really what they'd like to see happen. Now these positions are
probably really based on years of experience,
and so it's unlikely these are gonna change today,
in relation to anything they hear from me, or from
you, or from anyone else here. In fact, actually, if they
did change their positions, we'd probably think this
was a little bit strange, and maybe even artificial. So, what we want to try and
do instead is not to challenge them on these positions,
but instead ask them to kind of look behind these to the
kind of common interests, needs, or understandings that underpin the positions they've set out today. So, what I'd really like
to hear from the audience, as we kind of move more into the Majlis, is questions that explore
the underlying motivations for the positions that the
speakers have each taken. So why, for example, are they
worried about unregulated AI, or perhaps what do they
think are the most important values or ethical
principles that we should be considering when we're
thinking about the risks of AI? - Alright, Govinda, thanks very much. And as you say, let's
get straight onto it. And let's get on to our Majlis discussion. Thank you, Govinda. So, now is where we can actually
move, or see whether we can in fact move the needle in either direction of this discussion. What can we agree on? How can we actually move forward? What are the immediate and
the longer-term factors that we should consider? We will start the discussion here on stage for about 20 minutes or so. We will then take your
questions, both here in Doha and online; so let me begin
with Professor Nick Bostrom, if I can. Nick Bostrom, are you with us? - [Bostrom] I can hear you. - I can hear you, too. Do we have him up on screen? So, Nick Bostrom, a quick question to you, as we get started in our discussion: what do you say to the optimists among us, like Muthoni, who argue that, you know, even though we should be
mindful about our optimism, we should still strive ahead
and try to push the boundaries to reap the benefits of
AI, that it is not the time for worrying, but actually
the time for working along those lines; what
do you say to them? - Well, I agree, mostly with that. I think we should push ahead. I don't think it means that
nobody should worry at all, or think ahead about
possible future pitfalls. I think we can do both. We can reap the benefits
of what we have now, even as there should be
some community of people who are trying to look ahead
and scan for possible hazards down the road; in fact, I
think a lot of the different views that were expressed here are, to a significant extent, compatible, so I don't think
maybe there are such deep, irreconcilable disagreements
between all the different panelists as one might
think at the surface. In terms of the timeline,
I can say something. I think there is a great
uncertainty about this. We did a survey among some
of the world's leading machine learning experts,
a couple of years ago, and one question we asked was: by which year do you think there
is a 50 percent probability that we will have obtained
human-level general AI? And the answers really
were all over the map. The median answer was
somewhere around 2040 or 2045, depending on precisely
which group was asked, but really with a lot of
uncertainty on both sides of that. So I think we should think
in terms of probability distributions spread out over a wide range of possible arrival dates, and not be too confident one way or the other. I do think that there are
very strong driving forces pushing development: both
commercial, scientific, and increasingly security-related
interests, as well. So it does seem like
there is a lot of momentum behind further progress in AI. - Alright, Nick Bostrom, thank you. Muthoni, you heard what
Professor Bostrom has just said. You, on the other hand, have
talked about all the positive effects of AI; do you agree
with some of his concerns, some of the issues that he thinks we should focus on as well? Do you agree with anything
that you just heard? - I agree with what Nick is saying, and the reason I agree with him is because for the developing world,
we do not have the luxury to vacation on the moon or to play around with self-driving cars
or autonomous weapons. For us, it's a question of
basic access to human rights, an end to poverty, an end to disease, access to quality education, and using AI to create these systems
that we've been unable to create for the past so
many years, is important. - Do you agree, Joy,
with Muthoni, even looking at the bias within AI,
is part of the problem, do you think, that AI is
essentially being built by, let's face it, white men
who live for the most part in the West, and who are
basing much of the way they're developing the
system on their own views of the world; their own biases? - So when I think about AI
that is currently being used, we're oftentimes talking
about machine learning. So this means the machine
is learning from data. So for AI as it's practiced, data is destiny, and right now the data
reflects the coded gaze, or the priorities and
preferences of those who have the power to shape AI,
so to your question: who is shaping AI? And it's oftentimes white
men, it's oftentimes values that are coming from the West that don't necessarily
reflect most of society. - Dex, what do you say
to Joy's earlier argument that there is what she called
AI and data colonialism at work, that there is a certain level of exploitation going on? - I mean, I think the examples
you gave are very compelling. Clearly, there are probably
folks who are doing things that are not quite ethical,
and I think there's a reason why there is so much
emphasis in the AI community right now on strengthening
the ethical frameworks in which we're developing this stuff. Frankly, yes, there is
a problem in the field. And you know, look, I'm
not somebody who's white. I lived in Silicon Valley,
worked there for nearly a decade, you know, I
experienced many of the issues you talked about, I mean,
they're very close to the bone. We were talking earlier,
before the debate, about how there was an app
from a big tech company which used AI to see which
characters from oil paintings you were, and you know, it
would do facial recognition to see if you looked like
somebody from an ancient piece of art, and the only
result that came up for me was a very withered old Chinese woman from several centuries
ago, so all my friends were showing these lovely paintings
from the European masters. I did not show that one on my Facebook. And of course, I'm sure, entirely
this wasn't due to malice. I'm pretty sure it was
because whichever white, western European and American
engineers chose that art selection to build their database,
they couldn't be bothered to find any really
representative art from Asia, the Middle East, or anywhere else, which I and other people who are not white would have probably fit
into more effectively. So I totally agree, and I
think it's a major issue that we need to invest a
lot more in dealing with. - What should the priority
be, Nick Bostrom, for you, and what is the problem with
people thinking about AI in the context of a better
future for themselves, thinking of the immediate rewards of AI? What's wrong with that
idea, and isn't much of the reality of AI
still largely unknown? Is this why people like you
are largely skeptical about it? - No, I think there's
nothing wrong about thinking about how we can take advantage
of the AI capabilities we have now; my only point
is that we should also, or at least some people,
there should be some people as their full time job also
thinking about what will happen when we get to the next level AI. I think some of the problems
with current AI systems will be reduced just from
more technically capable AI. Some of the examples
of current day systems malfunctioning or
misclassifying faces and failing to recognize faces from
minority groups, for example, are just a technical
limitation of current systems. If you have either better
learning algorithms or larger datasets that
are more representative, I think the performance of
those systems will improve. And so, to some extent some
of these ethical problems might get easier as we move
along, but then I think other new ethical problems
will come into view. - [Ghida] Are you concerned about these ethical problems, Joy? - I am, and I'd like to
push back on this term, minority, when we're talking about women and people of color, these
are the under-sampled majorities of the world. (applauding) Also, the assumption of
technological determinism, that over time, with enough data, these systems are going to get better, isn't what we've been seeing in the audits we've done. So last year, I audited
various large tech companies. IBM, Microsoft, and I
tested where they were when it came to gender classification. They weren't doing so great. A year later, they closed the gaps, but we decided to then test Amazon, right? These are huge tech
companies, and they still had error rates around
33 percent when it came to under-sampled majorities. So I don't think it's
inevitable that if we're not being inclusive about how
we develop these systems, they will automatically
somehow be representative of the rest of the world,
even when you look at the gold-standard
benchmarks that were used for facial analysis
technology, up until 2015, and even now, when you
look at the standard that's considered to be
the benchmark in the field, it was 75 percent male, 80 percent white, reflecting the patriarchy,
they used public figures, and if you look at the
representation of parliament members, that's about 77 percent
male, so if we're not being intentional about being inclusive, we will replicate structural inequality. So the so-called
minorities who are actually the under-sampled majorities of the world, will not necessarily have better systems, if we're not making the
correct decisions, now.
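An aside for readers: the audits Joy describes rest on a simple idea, reporting error rates separately for each demographic subgroup instead of one aggregate number. A toy sketch follows; the records below are invented for illustration and are not the actual audit data.

```python
def disaggregated_error_rates(records):
    """Error rate per subgroup. Each record needs 'group',
    'label' (ground truth) and 'prediction' (model output)."""
    totals, errors = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        if r["prediction"] != r["label"]:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical toy records, not actual audit results:
records = [
    {"group": "darker-skinned women", "label": "F", "prediction": "M"},
    {"group": "darker-skinned women", "label": "F", "prediction": "F"},
    {"group": "lighter-skinned men", "label": "M", "prediction": "M"},
    {"group": "lighter-skinned men", "label": "M", "prediction": "M"},
]
print(disaggregated_error_rates(records))
# {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0}
```

An overall accuracy of 75 percent on this toy data would hide the fact that one group is misclassified half the time, which is exactly the pattern such audits surface.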
- Interesting, and I think one of the concerns that obviously we heard
earlier from Nick Bostrom and from others here on stage, Dex, is the whole issue of the
way automated weapons systems are developing. How appropriate do you think
it is for tech companies like Google, Microsoft, Amazon,
to assist the US Department of Defense in developing
these AI technologies, and we know that just last April
some 4,000 Google employees objected to the use of their
work in lethal projects. - Well, I think it's not
an easy question, at all. You know, it would be
very nice if we could say: yes, investing in any systems
which benefit militaries would be a terrible thing, you know? And I think obviously no sane
person enjoys war or conflict, and yet at the same time, military systems can be used for a host
of non-military purposes. GPS of course was developed
for military applications and has been the backbone of, you know, hundreds of billions of
dollars of civilian industries for a very long time, it can also be used for humanitarian purposes
and things like peacekeeping, which absolutely are
things which, I think, you know, folks who believe in a more just international order absolutely support. So-- - Isn't this just another
example of the way tech companies are simply profit-driven? As you say, there are
billions of dollars funneled into these contracts. - Well, I don't think
it's an example of how they're just profit-driven,
because if they were purely profit-driven, this
would not even be a debate. They would fire all
those employees and say: yes, let's completely
invest in weapons systems. Google should be building tanks. And of course, they haven't done that. They have a mission in the
world, which is to organize the world's information, and it involves, in many cases, grappling with those very thorny ethical issues. So I don't think the
debate, at all, is simple. And I certainly don't think
that the average Google, you know, leader, is
thinking: how can I make a few more bucks by
investing in weapons systems? They're actually thinking:
these are technologies that may be useful for
a whole host of things. - Muthoni, how do you feel about this? How do we make AI safer,
even though you are the only one on stage, I believe, who doesn't have too many
concerns about the way AI is developing, but how do
we actually make it safer? Should policymakers and
the public leave the future of AI in the hands of the tech industry? Should the issues AI raises be
left to commercial interests? Or should it be a wider conversation, one that goes out of the tech sphere, or the scientific sphere,
and into maybe the more civil society, political
sort of discussion that brings everyone in? - I have concerns about
security and about the risks that AI brings, but I'd like
to bring us to the moment in time when AlphaGo
beat Ke Jie from China. And what the Chinese government
did about two months later was to create an AI blueprint
that aims to make China the global innovation center by 2030. And what we see with China is
a combination of government policy, support for entrepreneurs,
support for researchers, a massive collection of data. And I think for this conversation, none of the countries
that we keep mentioning have the moral high ground
when it comes to the risks that come with AI, so
there are genuine concerns and we should begin to
address these questions. But we all have to sit at this table, as governments, as
companies, as individuals that will be affected by this technology. - Over to you, Nick Bostrom,
if I can for a second or two, can we all sit at this
table, or does this process have to be more, shall
I say less, inclusive? How do we make sure that
AI develops in a safe way, in a responsible way? Jeff Bezos, the founder of
Amazon, expressed concerns about automated weapons systems
and the way they are used. He's proposed a big treaty,
something that would help regulate those weapons. Do you agree with him, and what do you think needs to be done? - Well, I think, broadly speaking, there is no simple answer to that. I think it will require ongoing
attention by the public, and yes, politicians
understanding it more, but also people caring
a lot about these issues as they come up, and then
it needs to be debated on a case by case basis. With respect to autonomous weapons, I actually think it'd be fairly difficult. A lot of my friends are
supporting the attempt to get a ban on this, and I've
more stood on the sidelines. I think it's somewhat
unclear to me exactly where you would draw the boundary between the things you would want to ban and the things you would wanna allow. So I'm kind of undecided on that issue. I think, I mean, just to clarify earlier, when I spoke about
minorities, I think I meant minorities within the dataset
on which these algorithms are trained, and that I
think, is likely to improve if you had larger datasets,
more representative of different populations. And I was a little, like
actually in the talk, when you spoke about data colonialism, I wasn't sure whether you meant to object to collecting more data
from currently under-sampled populations, or whether
that is something you think we should do, or maybe
it's just we should do it in a different way. But I think to get really
equivalent levels of performance across different, say, ethnic groups, I think you might need
data collected more evenly from different ethnic groups,
and from different genders and so forth. - Alright, Nick Bostrom, thanks very much. We wanna move the discussion
now a little bit away from the theory, the probabilities
that we've been talking about and more into the
realm of the actions that we can take,
specifically commitments, actions our experts want
to encourage all of us here to consider; here are
now the four commitments that have been presented
by our four experts. Let's see how you all feel about them. I'm going to put them up on the screen. First, learn how AlphaGo works. Second, learn how to code. Third, demand a say in how AI
is used to shape your life. And finally, demand government
leaders who understand tech. So let's do another vote now. This time, though, please
vote for just one commitment that you can and will attempt to do. So if you're watching us
online, on Facebook, on Twitter, you can now also vote
on Facebook or Twitter, in addition to the Doha Debates app. Remember, just one vote per person. And I think it's a good time to check in with you again, Nelufar. - Oh, gosh, so much to get through. I'll let you guys vote in
here, and those of you who are streaming, but your
comments are amazing. Please do keep coming
in; I can tell people are already forming opinions and sides, depending on which speaker
they like the most. We've got a comment here
from Cameroon, Mbingo says: I'm gonna paraphrase
it, because I'm not sure I quite understand, but he
says that there should be a total survey done of all
the artificial intelligence and robots and programs to
see what the cost and benefit will be for the environment. So he's more concerned
about the global environment than what these robots will do to us. To the people of this world. We've got some more
comments coming in here. Fahad in Qatar says: dear
world, artificial intelligence is a supreme power of
what technology can do, in order to control its usage,
the government should have a better understanding of it. Aye, so agreeing with that opinion then that the government should
be involved in regulating, and processing; we've
got another one here, this is Natalie from Honduras. That's our first comment
now from the Americas. Natalie says: if humanity
looked to each other and the planet with more
reverence and looked for things to unite us rather than divide us, then AI could be utilized to
do more truly extraordinary things globally. So there's a really hopeful one there. Ghida, back to you. - Very strong opinions there,
thank you very much Nelufar. We can now take a look at what
the voting results look like. We have the results from our app, where the voting has closed. The results though, on
Twitter and Facebook, may be different as the
voting there is still open. So, let's take a look. Now, number three, demand a say in how AI is used to shape your life, is the one with the most votes: 48.63 percent of the vote goes to that; demand government
leaders who understand tech, that's gotten just over
31 percent of the vote, then learn how to code, and finally, learn how AlphaGo works, with just
3.83 percent of the vote. So these were the general
commitments that you've all made, but now let's open up
the discussion to your specific thoughts and
questions here in the room. And we'll also be joined by
viewers from Gaza, Palestine, Oakland, California, and Nairobi, Kenya. They'll come to us through
our on-screen portals, which have been provided by
our partner, Shared Studios. We'll start the conversation here, though. Can I just ask you to please
raise your hand and wait until I call on you? I'm going to call on two people at a time. If I call on you, then
please go to a microphone, and please do keep your questions short, so we can get to as many
of them as possible. So let's now move into the
second part of our Majlis. (musical theme) Alright, I see a hand raised there, the gentleman to my right, and
the lady here in the middle. Would you please both make
your way to the microphones. And over to you, first. - Thank you, so I have a question for Dex, and I also have a quick comment as well, that you used the clip of Mark Zuckerberg to show that the problem is politicians and not privacy, and I think
it was very clear there that the issue at hand
was how corporations use our private information or, you know, they have access to our
private information, and they use, they break
our trust by handing it over to quite tech-savvy government agencies, surveillance agencies;
my question to you is that you made a joke out
of thousands of activists that push for banning
of AI-powered weapons and weapons operations,
how could you say that, when AI-powered weaponry
like unmanned drones and GPS-guided missiles
are used to bomb people in Afghanistan, Yemen,
and Gaza, into oblivion. Bear in mind that AI has allowed
states to conduct warfare at an extremely low political cost. (applauding) - [Ghida] Thank you very
much for that question. Dex? - The joke is not that
there are autonomous weapons and things which may have
the potential to one day be fully autonomous, the joke
is that activists think that the tech community is going to solve any of those things on its own. Hey, I used to work for the UN. I absolutely believe in the
mission of that organization and I absolutely believe in
delivering peace and security for the world. My dad was a refugee from
Burma in World War Two. You know, so in many ways,
I'm a child of imperialism, and the age of empires. Most of the world's population lives under terrible suffering every day. They live at the mercy
of all sorts of weapons systems, you know, both autonomous and traditional. But what is truly terrible
is that we have confined these kinds of discussions
to ones which focus on very narrow, technological
solutions in that way. You're not going to solve
the problem of there being autonomous weapons or
drone strikes on civilian populations by arguing over
the technology on its own. We need to focus on the
fact that we have a terrible international system,
with rogue governments and irresponsible political
leaders from all over the world who have yet to be
restrained by a sane form of international order. That is what we should demand. - Thank you, and now we have
a question here, please. - Can I follow up on that? - Okay, a quick follow up
before we go to the question. - So they're not mutually exclusive. Right, back in 1980,
there was the Convention on Conventional Weapons, where we said injurious weapons systems do not need to exist. So I support what people
are doing with the campaign to stop killer robots,
because it's saying: let's put in some constraints
before it's too late. It doesn't mean we don't also grapple with the nature of war. - It's totally fair, one thing quickly, Nick talked about the
difference between near-term and long-term threats; absolutely we need a society
absolutely we need society which can focus on what's coming soon and what's coming in the future. It would be terrible if we
waited until things arrived before we'd dealt with them, but what I think I was
objecting to, particularly, is that we have a world which
has all sorts of problems, and that is a set of
issues which certainly, I don't think, features at
the very top of the agenda, right now, given everything
that is going on, including the refugee crisis. - Thank you very much. Let's go to our second question there. - Thank you for the interesting
angle on the debate. My question is for Mr. Dex and Miss Joy. Given what Miss Joy said
about bias-based inequities in AI and Mr. Dex said
that politicians are not informed enough to make
decisions related to tech, who do you think then should
regulate the development and exploitation of AI, that
deals with public policy, warfare and security planning, given that states do not
ratify some of the treaties and conventions that the UN puts forward? - [Ghida] Was that a question for Joy? - And Mr. Dex, both of them can answer. - [Ghida] Okay, Joy, briefly. - So I don't think we
should say policymakers can never understand
AI, such that we don't even have that conversation; earlier,
what we were discussing was the need for AI
literacy for everybody. So that as we're moving
into an age of automation, we have the basic
fundamentals to have informed and nuanced conversations. Is it true that not every
country and company is going to
necessarily follow the laws? Absolutely, but it doesn't
mean we don't start somewhere. - [Ghida] Dex? - I think that's totally fair. I do think there are serious
limits to our ability to navigate the existing
set of political leaders, and institutions. We have a world that is
terribly, terribly run, and that is, I think,
the overarching problem that we face, which bleeds
into AI and a whole bunch of other areas; yes, the
UN has plenty of treaties, and those treaties are
ignored on a daily basis. If we actually followed international law, there wouldn't be any wars right now. So, we can totally put our faith in things like a ban on killer robots. Whether governments would
actually abide by that ban, or whether they would
call their killer robots something else entirely, I
think that's a real question. - Thank you, Dex, and as we both know, the United Nations often gets bogged down in these discussions and negotiations. Let's now check in with
one of our viewers, who's joining us from Gaza. In fact, I believe, two of our viewers. Do we have them up on screen? Karma and Mostafa. And let's hear from them;
what questions do you have? - So hi, everyone. We are very happy to join you today, but we were listening
to what you were saying; of course, we agree that everything in this life has a positive side and a negative side. As my friend and I were discussing while you were talking, unfortunately here in Gaza, we face the negative side. It was against us here in Gaza,
a lot, but we still believe that there is something
we can do-- (muttering) And, sorry? (muttering) Oh, the bright sides, yes, as Palestinians who are living here in Gaza, we are trying to stick with the bright
side, in order to go out where there are new things. A lot of doctors here are
trying to create new technology, or using the new technology
to help patients. A lot of IT people, they are
trying to do a lot of programs to make our lives easier. I think that my friend, he
knows more about what's going on in this field, and he attended one, ah, or created something, can you? - So hi. We have this question: is AI biased? Yes. Is AI dangerous? Yes. But can we control it? Yes. So, at the end of the day,
all the people in attendance now, I think they will start questioning: should we unplug
from the technology world? Or stay connected? So, this isn't (muttering)
regarding to an obscurity. Are we secure or not? I don't believe you are
secure from AI apps. - Can I bring in Nick? Okay, well, thank you very
much indeed for these ideas. I wonder if Nick Bostrom
has been listening in, and whether we can actually get his take on what we just heard. Nick Bostrom, if you're
with us, please feel free to give us your take
on what you just heard. - Yeah, I mean, I think
with respect to Gaza, I don't think it's either caused
by AI or that AI is likely to be the way to solve it. I think it's a political problem. It is interesting to
think about how social and political dynamics
might change by some of the applications that AI will enable, particularly new ways of surveillance, new ways of bringing transparency
to social interactions. And we note historically that
other technological advances have had profound effects
on political systems, from the invention of
writing that enabled states, because you could keep track
of who'd paid their taxes, you could administer
larger political units, gunpowder that helped end the feudal era, the invention of the printing press, which set wars raging across Europe for 100 years and then helped bring in the modern era; and so the internet and AI may have similarly profound
effects on human society, but we don't have the
kind of political science that is really able to
predict what happens to our political systems
when you start to tweak some of these underlying parameters that regulate interactions between humans. But I think the potential
exists for things to become either a lot
better or a lot worse, depending on how those changes play out. - Thank you very much, Nick Bostrom. I'll take two more
questions from the audience. And I suppose what you've
just said also brings to mind what the historian Yuval Noah Harari has said, as well, that it depends on how we use AI. We can create paradise or
hell; it's just up to us. Let me go to a question at the very end. And one right here in the middle. If you could please both make
your ways to the microphones, and ask your very short,
concise questions. - It's been mentioned that AI is driven by the research funded
by the private sector, predominantly; however,
can the private sector, in its AI products, even with the support of a state, ensure absolute inclusivity for the people? For instance, in some countries
of the developing world, if my main agenda for
today is to farm some food, and get something on my dinner
plate, if I'm lucky enough, would I need Alexa to play
some music in the background? - Alright, thanks so much. So the role of the
private sector, Muthoni? - I think the role of the
private sector is very important. The way we create our
future is by changing how we think about tomorrow
and the next 10 years and the next 100 years,
so the private sector has a very important role to play. And not just, I would to
want us to just put the onus of this on the private sector. We as individuals decide by way of voting, by how we buy products,
the products that we consume, the media sources that we watch; we decide what ultimately are the profit margins
for the private sector. So if there are companies whose policies, whose way of doing things
you do not agree with, then delete their app. And eventually that shows up in their profit margins somewhere. So I think we cannot just
leave this responsibility, as we have, to private industry alone. We also have a part to play in this. - So talking about actions, an
example of the small actions we can each take on an individual basis. Another question, please, a quick one. - Thank you. I think we all agree
that AI is concentrated, and its implications
are essentially driven by large corporations like
Google, Amazon and Facebook, that are worth trillions
of dollars, combined, but my question is how
can we democratize AI and take it out of Silicon
Valley to assist inclusion? - Joy, to you, how do we democratize AI? - One, we have to democratize
the creators of AI, in the first place, which is
why I admire a lot of the work that you're doing in Kenya,
because if we only have the large tech companies
shaping the vision, we don't have diversity
in various options. I think it also starts with
educational institutions, as well, I admire some of the professors who have decided to stay in academia so that they can actually
teach the next generation, even though there are
very lucrative incentives to go work at the large tech companies. So we have to make sure that those are not the only enclaves for innovation, and that we continue to
provide opportunities so that a diverse pool of people
can shape the future of AI. - Alright, let's go quickly
to Oakland, California, and speak with Hassan and Jennifer, who are both listening
in and joining us now. With a question; go ahead. - Okay, so I wanted
to, without putting you on the hot-seat, Dex, I
heard you saying, ah, that it's the responsibility of politicians and governments to ensure that we're using AI ethically, and I wanted to know what companies, okay, maybe I misunderstood what
you were saying there, but I wanted to hear what
can companies like Google, Amazon, Facebook, some
of these tech players do to ensure that AI is used ethically, or what are you doing now? - Yeah, so, definitely, I think that's a slight misunderstanding. I don't think it's the
responsibility of governments or policymakers to make
those decisions alone. They absolutely need to be
working with the tech industry, on those things, and it is
something that will involve a lot of different
leaders, working together. But until now, the debate around AI ethics and building responsible
AI has really focused on what is the tech industry doing wrong, and how are they fixing all
of these things on their own? And we've given an easy
pass to those politicians, and the truth is very
unglamorous, we will need to work together, and
we will need politicians as well as the tech
industry to up their game. - [Ghida] Responsible politicians,
responsible tech leaders. - Absolutely. - Thank you very much,
Jennifer and Hassan, thank you for joining us. Hassan, I'm not sure
if you had a question, but make it a quick one if you do. - Yeah, sure, so I actually am working in machine learning right now,
and there's a common phrase: garbage in, garbage out, and that kind of refers to data as destiny. So I was wondering for
Joy, when you refer to data colonialization, how do the
people who generate the data, you know, get the value
of sending that off to another company, or
how do we try to mitigate exploitation between unequal parties, and who gets and delivers on data? - Great question, and when I
talk about data colonialism, I'm talking about power;
so Nick was asking earlier, does this mean that you don't collect data or you do collect data? Here's a question: have I consented for my data to be collected
in the first place? So I think we really need to
be thinking about this notion of affirmative consent, and
so reading terms of service that tell me Facebook has
been collecting my images and now has a biometric
print, my face-print, 10 years later, is not
affirmative consent. And to your point, I think
you're asking the right question, as machine learning practitioners, we need to think more
about data provenance. Where is the data coming
from, and also ideas like those proposed by the AI Now Institute: does data have an expiration date? So being more mindful about
the data and understanding data is a representation of
people at the end of the day.
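An aside for readers: the provenance, consent, and expiration ideas mentioned here can be pictured as metadata carried with every record. A minimal, hypothetical sketch follows; the field names are invented for illustration, and the expiration date is a policy proposal, not an existing API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataRecord:
    subject_id: str
    source: str      # provenance: where the data came from
    consented: bool  # affirmative consent, not consent buried in terms of service
    expires: date    # a data expiration date, as proposed above

def usable_records(records, today=None):
    """Keep only records with affirmative consent that have not expired."""
    today = today or date.today()
    return [r for r in records if r.consented and r.expires >= today]
```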
- Where the data's coming from and where it's actually ending up, I suppose. - And if there's consent with that data. - Thank you very much,
and thank you again, both Hassan and Jennifer, for joining us. I'd like to take two more
questions from the audience. Who have we got? I don't see any women in the audience. Yes, I do see one there,
a question from you. A little gender bias. A question from you and
one from you as well. - Hello, good evening. I think it's a very interesting debate, and what I have to say
is less of a question, more of a statement, and
it is that we should be educating all people about AI
and about how we can use it, because politicians,
at the end of the day, are a representation of
the population, right? So we should be saying that
schools should educate us about it, that universities
educate us about it, that news outlets report more about it, because it is very important,
and at the end of the day, we should ask ourselves: should people have the powerful tool of AI in their hands? Can we trust humanity with the powerful tool of AI, with all their political problems, economic problems? Can we trust them? - Thank you, the idea of
civil society, as well, playing a role in every single sector. A question please? - Yes, I want to go for a simple question. How can we protect the environment for the future generations
through AI technology? We are talking about the politicians and everything, we are
talking about everything. How about the environment
for future generations with this technology, AI technology? This question, I want to go to Dex. - [Ghida] Okay, let's hear
from Muthoni first, alright? - I think AI gives us the
opportunity to make the most of the resources that we
have while taking care of the environment, so AI
allows us to ask questions like: is the
energy that we're using, is what we are using
detrimental to the environment? And it helps us experiment
with other forms of energy that don't cause harm to the environment. - [Ghida] Dex, your thoughts. - Yeah, I think there are
all sorts of amazing AI tools which already exist now,
which you will probably never have heard of, and sound
very boring on the surface, but do phenomenally valuable
things in the aggregate. There's a company I know based
out of Cambridge in the UK, which uses AI to work out
how many delivery trucks need to be put on the
road every day to deliver the number of packages
that they have to get out. And that means you're sending
fewer vehicles on the road and using less fuel; there's
another one based out of Chicago, which uses AI
to optimize the performance of wind turbines, getting an
extra two or three percent of power out of them, so of
course, all of those things, together, end up creating a
big impact. When you look at the
impact across industries and all sorts of other
fields which we haven't even thought about yet, there
will be huge implications for the way that the world uses resources. And that gives me a lot of hope.
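An aside for readers: the delivery-truck example is, at its core, a packing and routing optimization. A toy sketch of the simplest version follows, using the classic first-fit decreasing heuristic with invented numbers, not any company's actual system.

```python
def trucks_needed(package_volumes, truck_capacity):
    """Estimate how many trucks a day's packages require,
    using the first-fit decreasing bin-packing heuristic."""
    remaining = []  # spare capacity of each truck dispatched so far
    for volume in sorted(package_volumes, reverse=True):
        for i, spare in enumerate(remaining):
            if volume <= spare:
                remaining[i] -= volume
                break
        else:
            remaining.append(truck_capacity - volume)  # dispatch another truck
    return len(remaining)

print(trucks_needed([4, 8, 1, 4, 2, 1], truck_capacity=10))  # -> 2
```

Real systems add routes, time windows, and demand forecasts, but the payoff Dex describes is the same: fewer vehicles for the same packages.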
- Thank you, Dex; we've got just a few minutes left in this discussion, just
enough time to actually go over to Nairobi and check
in with David and Katherine, who are both watching us. And because of our time constraints, well, we've got time
for just one question. So, whose question will it be? - Thank you so much. My name is David Mbuya, and
with me is Katherine, and we have combined all
our questions together. I work with an organization
called I Am Africa. I'm part of the Shared
Studios project in Nairobi, and what we do particularly,
we work with young people in schools, and as our friend (muttering) has just spoken, we also do a
lot in terms of teaching kids how to code, creating coding
clubs in the communities and schools, and supporting
teachers to start preparing for the
industry, for writing code. Because we are so much worried, right now, we are having a lot of
problems with the young people, especially in terms of depression. People are having a lot of depression, and many are dying
because of lack of jobs. We realize the reason why
the jobs are not out there is because most of the
jobs are now automated. If AI is taking over,
we are getting into the (audio crackling) and the
young people are not prepared for the jobs of tomorrow,
and it is a pity that Africa is still left behind, and will be colonized again, digitally, because our young people end up becoming consumers, rather than creators, of AI. So in this regard, that's
why organizations like I Am and (muttering) are really
stepping up to help young people to get out; so I think our
biggest area of concern is about this data, and as the lady there talked about colonization of data, this is where we are really
having it rough in Africa, because we realize that we've been given a lot of cheap things, like cheap phones. Everybody now can have a phone. But somehow those phones,
they come with small rootkits that are taking a lot of data back to the people who are giving
us those cheap phones. They're taking a lot of data away from us, without us really talking about it. That is our concern. - David, we'll leave it
there, but I think we did get the gist of what you were saying. So to you both, David and Katherine, thank you very much for joining us. If I can just go back to Nick Bostrom. If I can ask you, Nick,
to take on this issue, again, the recurrent theme
of data and data privacy, and how data is being used, and the idea, as you wrap up as well,
in the final minute or two that you've got, the idea
that, you know, in the West, we feel that we've reaped the benefits and therefore the
excited young populations of Africa should just
stop, because you know, we've seen it all and
it only spells danger. So, to both those points,
your final thoughts? - Yeah, I mean, I think as
the questioner suggested, having kids learn some coding
in schools is a good thing, even if they are not actually
becoming programmers, it still helps put people
in a position to understand the infrastructure of the modern world. I think also the ability
to relate to humans, like the more you can
mechanize various kinds of STEM subjects, the more important it will be in the future economy to
have humans who can relate to other humans, and that
might be another career path for young people, I mean,
how to manage data flows and stuff, that's a big topic. I'm not sure how I can
say one sentence about it that will really be very helpful here. I would say that tech
companies, in AI, today, have actually been quite
open with their research. They publish their research
findings immediately, make code available for
anybody, platforms like Google's TensorFlow and so forth,
to try to help researchers around the world be able to tap into this, and have made significant
efforts to try to democratize AI. Of course, ultimately a
lot more has to be done, and with respect to data and so forth, there might have to be
regulatory frameworks that make this possible; so
some kind of combined effort by governments and companies
and civil society, I think, will be needed in that area. - Thank you very much
indeed, Nick Bostrom, for sharing with us your insights
how we can move forward. Thank you very much,
and I've just got, well, just a minute and a half. I'll try to extend it if I
can, but just in a nutshell or two, can you each give
me your final thoughts on this topic; over to you first, Muthoni? - We all have a part to play
in the future that we create. Yeah. - [Ghida] We all have a role to play. - Yes. - [Ghida] What do we need to do? - We need to be more
conscious about the choices that we make and about
the goals that we have. - Muthoni, thank you;
Dex, what should we do? - The problems that are
created by new technologies cannot be solved by the
tech industry alone, and we've got to get out of this mindset of thinking that everything
that is going wrong in tech or has the potential to go wrong can be solved in Silicon Valley. We need our leaders and our
society to step up as well, and we need all of you
to be part of helping us fix these problems, as
well, so we all enjoy the opportunities. (applauding) - Alright, more collective
action, Dex, as you say. Joy, what about you? - I think we also need more storytelling, and we need people to share
when they have experiences dealing with technologies
when they're not sure what's going on, and
asking more questions. What is happening with my data as well? So I don't think we, to your point, can only rely on technologists,
and we can't only rely on policymakers, either; we
need to think more through: what would participatory
artificial intelligence look like? (applauding) - Thank you very much, indeed. I want to thank all of
my guests here in Doha. Joy, Dex and Muthoni, and
over in London, Nick Bostrom. Thank you all very much indeed
for a fascinating discussion, and thank you all here at
Northwestern University in Doha, and of course, thank you to
the Qatar Foundation, as well. Let's all continue this
conversation online on Twitter and Facebook,
where I do hope that you will follow us; also follow us
on Instagram and YouTube and take a look at the exciting videos that we are producing
for you on this topic, as well as the other
exciting season's topics. Let's all make a commitment now, using the hashtag DearWorld, @DohaDebates, and do share with us
what you've committed to on the Dear World Board just outside. We want to hear your
thoughts and your ideas. Now I'd like to close
with a quote from Gibran Khalil Gibran: yesterday
is but today's memory, and tomorrow is today's dream. Until our next debate,
which will take place on July 24th for the Global
TED Summit in Edinburgh, Scotland, where we will
address the complex issue of global citizenship. For me, Ghida Fakhry, and
the entire Doha Debates team, thank you very much for being here. (applauding and music)