Transcriber: Maximus Garcia
Reviewer: Emma Gon

So many of you might have heard some of the hype
around artificial intelligence. But what is
artificial intelligence anyway? Is it particular techniques like machine learning? Is it anything a computer does that would otherwise have required human intelligence? While we’re still working out what exactly artificial intelligence is, I’m going to use a broader term, algorithms, to describe the automation of processes yielding an output. But in using that term, I’m including the kinds of technologies associated
with artificial intelligence as well. Now, because of developments
in artificial intelligence and also because of greater confidence
around different kinds of automation, we’re increasingly using algorithms
not only to process data, but in our decision making,
really important decisions. So governments are increasingly
relying on algorithms to decide how to allocate
resources or to provide services. So here’s my question: in Australia, should algorithms
make official decisions? Is there anything here
that we should worry about? And no, I’m not talking about
evil killer robots taking over the world, I’m talking about
far more mundane things, things like Robodebt. So this is an algorithm
that drew on tax and welfare data to determine who had been
overpaid benefits and by how much. Now, when this processing
was previously done by humans, they drew on a wide variety
of different kinds of data. But the algorithm
relied on a simple formula, and that formula assumed the stability
of income over the course of a year. Now, what that meant was that people like Ken, whose income was uneven, received a debt letter for money that he didn’t owe.
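To make the arithmetic concrete, here is a minimal sketch in Python of how that averaging shortcut can manufacture a debt. The payment rate, income-free area and taper below are made-up illustrative figures, not the real Centrelink rules or the actual Robodebt code.

```python
# Illustrative sketch only: the payment rate, free area and taper here are
# invented stand-ins, not the actual Centrelink rules or the Robodebt code.

FORTNIGHTS = 26          # fortnights in a year
BASE_PAYMENT = 550.0     # hypothetical full fortnightly benefit
FREE_AREA = 150.0        # hypothetical income allowed before the benefit reduces
TAPER = 0.5              # hypothetical reduction per dollar earned above the free area

def entitlement(fortnightly_income):
    """Benefit payable for one fortnight, given the income actually earned in it."""
    reduction = max(0.0, fortnightly_income - FREE_AREA) * TAPER
    return max(0.0, BASE_PAYMENT - reduction)

# Someone like Ken: no work for half the year, well-paid work for the other half.
actual_income = [0.0] * 13 + [2000.0] * 13
correctly_paid = sum(entitlement(x) for x in actual_income)

# The shortcut: smear the annual total evenly across every fortnight.
averaged_income = [sum(actual_income) / FORTNIGHTS] * FORTNIGHTS
assumed_entitlement = sum(entitlement(x) for x in averaged_income)

phantom_debt = correctly_paid - assumed_entitlement
print(f"Debt implied by averaging: ${phantom_debt:.2f}")  # positive, yet nothing is owed
```

Under the actual fortnight-by-fortnight rules nothing is owed; it is only the averaging that creates the apparent overpayment.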
But that’s not the end of the story. The process was so complicated (Ken described it as Orwellian) that he was unable to resolve the issue with Centrelink in two years. Is it okay that Ken gets “computer says no”? What exactly is the problem with Robodebt? Turning to a different example, the COMPAS tool takes in
a wide variety of data and is able to predict whether
someone who’s committed an offense is likely to commit another one. Amazing technology, right?
Predicting the future! And this is being used, particularly but not exclusively in the United States, by judges, parole boards and prison authorities in their decision making. Now, it sounds amazing,
but of course, there’s a flaw. And ProPublica pointed out
a really important one. There's a higher false positive
rate for African-Americans. Now what does that mean? That means that
if you’re African-American, you’re more likely
to get a high risk score, even though you would not, in fact, go on to re-offend. To make this real, the man on the left
received a high risk score of ten, despite only having
one prior non-violent offense, whereas the man on the right received
a much lower score of three, despite a prior offense
of attempted burglary. Only the man on the right,
in fact, went on to re-offend. Now, one example is not statistics, but ProPublica was able to prove that this was happening at scale.
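In statistical terms, what ProPublica measured was the false positive rate for each group: among the people who did not go on to re-offend, what share had been labelled high risk? Here is a minimal sketch of that calculation, using a handful of invented records rather than the real COMPAS data.

```python
# Invented example records, not the real COMPAS data:
# (group, labelled_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    """Among people in this group who did NOT re-offend, the share labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))
# A gap between the two numbers is the disparity described above: more of one
# group's non-re-offenders are being wrongly flagged as high risk.
```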
So is this okay? Is it all right to use an algorithm that is drawing on historic data to make decisions
about an individual now, particularly how long that person's
going to spend in jail? Moving from
the United States to China, we can look at the social credit system. Now, this algorithm again
draws on a wide variety of variables to give you a social credit score,
which is then used to decide: can you travel? Can you access the Internet? Can you send your kid to a particular school? Now, artificial intelligence is involved
here in some situations. So, for example, facial recognition
is used to detect people who are jaywalking across the street. That will then lead
to a bad credit score. So a social credit score, perhaps generated because you were automatically detected crossing a road against the lights, determines your access to social services, your ability to travel, and many other aspects of your life. Is this kind of thing something we would be willing
to put up with in Australia? Right? To answer that question, we need to think about ethics, about politics, and about the accuracy of the calculation. Now, here’s the thing. In Australia, we can choose more or less
which companies we deal with. If you don't like Amazon's
recommendation algorithm, you can go to your local bookstore
and buy your books there. But we don’t get to choose about
our interactions with government. So in light of that, what are we, as citizens of Australia,
entitled to expect? Now, I would argue
that Ken was entitled to expect that if he receives a debt notice
from the government, that represents a debt he actually owes. That in fact the government has done
enough testing and evaluation before it sends out letters
to confirm that that’s the case. And that if he disagrees
and wants to contest that, there’s a fair, speedy process
he can use to do so. I think all Australians
are entitled to be treated fairly. Now that’s a complex thing,
what does fairness mean? If we look at the COMPAS tool, the company argued that the tool was fair, and was able to show that it met the company’s own fairness metric. In fact, it’s mathematically impossible to be fair in every sense of the term: when groups re-offend at different rates, no scoring tool can satisfy every fairness definition at once. But nevertheless,
we want our government to try to think deeply about
what fairness means in the context in which a particular system
is being deployed and to do testing to confirm
that a system will be fair. In particular, we want
government to be aware of the risks of relying on
data collected in a historic context with all the racism and sexism
that exists in the world and using that to draw
inferences about us today, and then making decisions
about us based on those inferences. If we think about
the social credit system, I would like to think that Australians
would expect a government not to exercise that level of surveillance
and control over the citizenry. Now, this is a democracy and
everyone is entitled to their own view on what is an appropriate
level of surveillance. And some of you might think that
some surveillance might be appropriate, for example, to help
law enforcement solve crimes. But even in a democracy,
there need to be red lines, and I would like China’s social credit system to be on the far side of that red line. So, returning to the initial question, should government rely on algorithms
in its decision making? What exactly is the problem here,
is it really about the technology? Well, despite the nature
of the examples, not really. Robodebt demonstrates
that government should not automate a flawed process affecting
a vulnerable population. But it doesn't really tell us that
automation generally is a bad idea. COMPAS is an example
of what can go wrong, when relying on historic datasets
to draw inferences about people without really thinking
about issues around fairness. But it doesn’t really matter
whether we use machine learning
and artificial intelligence or whether we use
good old-fashioned statistics. China’s social credit system is an example of government surveillance and control. But would we be any less concerned if, instead of facial recognition at traffic lights, we had humans watching us all the time and inputting things into systems, so that ultimately, when we try to enroll our kid in a school, another human doesn’t let that happen? In other words,
what is the problem here? My argument is it’s not actually how
technologically sophisticated the tool is, it’s about centering
human needs and human rights in the systems that governments build. So if that's the problem,
what's the solution? Many of you may have heard of AI ethics, And indeed, governments,
organizations and academics are creating lists of
AI ethical principles. These include things like
fairness, accountability, transparency, beneficence,
motherhood and apple pie. And there’s a measure
of ethics washing here. So the Australian government
introduced its AI ethical principles, while it was still actively
pursuing Robodebt. AI Ethics is not going to solve
the problem for three reasons. First, it’s not about the sophistication
of the technology or whether we classify it as AI. Second, the principles are too vague to be useful. COMPAS is fair according
to its own metrics. And third, it doesn't give you a remedy. So Ken, who received the erroneous
debt letter, couldn’t do anything with the AI ethical principles
the government had released. It doesn’t help. AI ethics might be fantastic if we’re
trying to prevent the robo apocalypse. But we’re really worried about, what we should be worried
about in Australia at the moment, Isn’t that, not yet, it’s things like Kafkaesque navigation through
complex government systems without human empathy or support, it's relying on on biased historic
datasets to draw inferences about us and then use that in decision making. It’s government surveillance and control that we can see in things
like China’s social credit system. So if AI ethics isn’t
the solution, what is? I argue we need four things. We need to build constraints
into the legislation that authorizes the use
of these kinds of systems to require things like proper testing
and evaluation and full transparency. We need protection
for privacy and autonomy, as the foundation of human dignity. We need standards with enough detail so that organizations
can create practical policies for designing, using
and purchasing AI systems. And we need citizens to understand the kinds of problems
I’ve been talking about and the values that we need to protect, so that they stop
government from putting out either flawed or inappropriate systems. So how would that play out
in the context of our examples? Well, if we had had this for Robodebt, first of all, the legislation would have required that the system follow the rules for when debts are actually owing and not use shortcuts like annualized income. We would have had testing and evaluation
according to clear standards through which government could confirm
that that was indeed the case. The code would have
been released transparently, allowing any bugs to be picked up
in advance of deployment. And after all that, if Ken
had received an erroneous debt letter, he would have had access to an efficient process for contesting the debt, because resources would have been put in to enable it. What about COMPAS? Well, one could very well argue that data-driven decision making
is inappropriate in certain contexts, and sentencing is probably
one of those contexts. But there are going to be contexts in which government
does want to rely on data for making decisions about
how it allocates resources, for example. So what do we need to think about there? First of all, we need
to deeply interrogate what fairness means in the context in which the system will be deployed. We need to work out
what is the right fairness metric. And that might change
how we do things. So New Zealand has
a risk assessment tool, but it relies on a very narrow range of
variables associated with prior offending. Maybe that, for example,
is less problematic in that context. Once we determine what is fair, we need to test and evaluate not only the accuracy of the algorithm, the figures you see reported, but also how it performs against our fairness metric, again using standards to do that well.
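As a rough sketch of what that kind of acceptance testing might look like, here is a Python example that evaluates a scoring system against both overall accuracy and one possible fairness metric, the gap in false positive rates between groups. The thresholds and test records are invented for illustration; a real standard would specify the metric, the data and the cut-offs.

```python
# Acceptance-test sketch: the thresholds and the held-out test records below are
# invented for illustration; a real standard would specify metric, data and cut-offs.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
test_records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, True),
    ("B", False, False), ("B", False, False), ("B", True, True), ("B", True, True),
]

def evaluate(records):
    correct = 0
    false_positives = defaultdict(int)   # wrongly flagged, per group
    actual_negatives = defaultdict(int)  # people who did not re-offend, per group
    for group, predicted, actual in records:
        correct += int(predicted == actual)
        if not actual:
            actual_negatives[group] += 1
            false_positives[group] += int(predicted)
    accuracy = correct / len(records)
    fpr = {g: false_positives[g] / actual_negatives[g] for g in actual_negatives}
    return accuracy, fpr

accuracy, fpr = evaluate(test_records)
fpr_gap = max(fpr.values()) - min(fpr.values())
print(f"accuracy={accuracy:.2f}, false positive rates={fpr}, gap={fpr_gap:.2f}")

# The headline accuracy (the figures you see reported) is not enough on its own:
# the system should also be rejected if its errors fall unevenly across groups.
if accuracy < 0.70 or fpr_gap > 0.05:
    print("Fails the evaluation criteria: do not deploy.")
```

The point of the sketch is that a system can clear the headline accuracy figure and still fail, because its errors fall unevenly across groups.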
We need transparency around what systems are being used, so citizens know about them. We need the ability to contest this kind of decision making when it’s inappropriate. Now, if we did all of that, hopefully people will no longer be spending longer in jail because of the colour of their skin. But we’ll also all be able
to be more confident about how government is using our data
to make decisions about us. Now, I’m hoping we stay far away
from China’s social credit system, but this requires a level
of constant vigilance, right? If we’re concerned about surveillance, if we’re concerned about state control, including through cyber-physical systems that stop you entering a train station, right? Then we need to remain educated
and we need to pay attention when government tries
to push the boundaries and deplete our values. So algorithms can be used appropriately
for official decision making if that is done thoughtfully,
not with AI ethics, but instead with legal protections mandating things like proper evaluations, with standards telling agencies how to do that properly, and with an educated public willing to challenge government when it introduces problematic systems, so we can get the benefits of AI. We can use automation
in government decision making. But the next time a system is coming out (and hopefully you’ll even be told about it), ask whether systems
have been put in place so that you can be confident
about its appropriate use. Thank you. (Applause)