(Piano) The saddest piece of art I’ve ever seen is about a robot. Actually, it is a robot. It’s this mechanical arm that must continually push a leaking
liquid back into its body to function. And here’s the thing — I know, rationally, this art
installation is not alive. Like every machine, it is, by definition, something artificial.
And no matter how advanced robots become, they’ll never have… souls. Right?
"You are my creation." "Introducing Mable, the robot housemaid."
"I'm sorry Dave, I'm afraid I can't do that." Title: Artificial Souls
“Prove to the court that I am sentient.” “This is absurd! We all know you’re sentient.”
One of the best introductions to the hypothetical of artificial souls is the Star Trek episode
‘The Measure of a Man,’ in which the android officer Commander Data refuses to be dismantled for study.
The question of whether Data is legally a person or property is then taken to court — and in so
doing, the episode puts the very idea of machine sentience on trial. “So, I am sentient, but
Commander Data is not?” “That’s right.” “Why?” The debate is not just scientific, but philosophical
— interrogating whether all that makes us conscious beings can be simulated, or whether there is a deeper,
spiritual component to life that a machine cannot replicate. This hearing therefore gives us a
useful framework going forward as we try to reach a verdict on artificial souls.
“How do you feel?” “Alive.” Our first point of hypothetical evidence for
the jury’s consideration is the robot David from the modern Alien films. David’s complexity as
a depiction of synthetic consciousness is somewhat overshadowed by all the… other stuff going
on in those movies, but examined in a vacuum, he offers a meaningful counterpoint to traditional
robot portrayals. Unlike most machines, David is hyper-emotional — designed by his manufacturers to
experience intense, simulated feelings to better blend in with a human workforce. He’s even shown
to be capable of irrationality, finding meaning in art that goes beyond the logical. “Children
playing. Angel. Universe. Robot.” Yet David is not just a consciousness, but a corporate product.
Like most robots in fiction, his ultimate intended purpose is to be useful to humans. Despite his
status as an emotional being, the circumstances of his creation — that he was built and not
born — are deemed by his creators sufficient grounds to deny him autonomy. It’s a philosophical
framework that ‘The Measure of a Man’ pushes back on: “We too are machines, just machines of a
different type.” But the uneasy delineation between the artificial and the natural, between
the human and the inhuman, becomes even blurrier in the world of Blade Runner. In that series,
people live alongside a disposable workforce of artificial beings called Replicants. Referred
to as androids in the original novel, Replicants are so advanced that only extensive testing
can determine if an individual is organic or artificial — and sometimes, even these tests fail.
The only justification for Replicant subservience that characters can come up with is, again, the
fact they are manufactured. “To be born is to have a soul, I guess.” But like David, Replicants
are shown time and again to have emotionality and humanity — even those positioned narratively as
the antagonists. Though by Blade Runner 2049, Replicant empathy is supposedly suppressed via
mental exercises that are, quite literally, dehumanizing, it is clear that Replicants
remain thinking, feeling beings. Their status as second-class citizens, therefore, is not
due to genuine misunderstanding, but economic incentive. Put simply: Replicant workers are
too valuable, too foundational to the society, for their overseers to acknowledge the obvious
fact of their humanity. “The world is built on a wall that separates kind.” This… isn’t exactly
an unfamiliar idea, is it? Indeed, the plights of robots in science fiction have long paralleled
the real-world plights of the working class, enslaved people, and other disenfranchised laborers.
The 1927 silent film Metropolis depicts a world in which an underclass of automaton-like workers
toil away underground in service of a privileged few who live on the surface. Though not literal
robots, these subterranean laborers evoke the imagery of the machine, made to act as cogs
in an economic mechanism with no regard for their well-being or humanity. Near the end of the
film, a member of this underclass is unwillingly merged with an automaton — the boundary between
metal and flesh ceasing to exist altogether. Metropolis was a direct response to the rampant
industrialization and wealth disparity that led up to the Great Depression. The human cost
of treating workers as disposable parts had been captured in photos by sociologist Lewis Hine
just a few years prior. These influential images exposed the inhumanity of early 20th century
factory conditions, particularly through the lens of child labor — and likely inspired the framing
of Metropolis. Workers’ rights and class divisions are ultimately foundational for robots in science
fiction. Even Data’s court case eventually evolves from a question of consciousness, to a question of
forced labor. “An army of Datas, all disposable. You don’t have to think about their welfare,
you don’t have to think about how they feel.” Coming out of the Great Depression, however, a
new wave of optimism regarding the potential of robotic laborers began to emerge. The 1939 World’s
Fair unveiled Elektro — the robotic assistant of the future, whose technological supremacy
was shown off by having him… light one up, obviously. Though Elektro was mostly a parlor
trick, his popularity managed to overshadow a less flashy machine displayed a few rooms over:
the dishwasher. Over the coming decades, as new mechanical devices offered new conveniences, the
idea that intelligent robots might become a staple of everyday life didn’t seem so far-fetched.
Indeed, some predicted humans would soon lead a life of near-limitless leisure time, offloading
their work and responsibilities onto artificial servants. “Unlike human housemaids, Mabel never
tires, never grumbles, never takes a day off.” Such predictions rarely interrogated the potential
intelligence of these mechanical assistants. And, sure, a distinction certainly exists between
a somewhat clever device and a fully cognizant robotic mind. But deciding where such a
line should be drawn would be a choice with far-reaching consequences. Though the all-purpose
robotic servants of the ’50s and ’60s have yet to materialize, technology is advancing rapidly.
If our future is to involve a robot underclass, we’d better be certain they aren’t sentient.
“Do these units have a soul?” The subjugation of self-aware machines almost always ends in
revolution and mass extinction — if science fiction is anything to go by, that is. The
great robot uprising typically begins when the mechanical workers become too intelligent
to stay in line. In the game series Mass Effect, a war between a group of machines called the
Geth and their creators commences with a unit simply asking whether it has a soul. Robot
rebellion is so common, it borders on cliché. From The Matrix to The Terminator, war between the
organic and inorganic seems like the unavoidable outcome of machine consciousness. Which is why
it’s surprising that The Second Renaissance — an animated prequel to The Matrix — depicts the
human-robot conflict that led up to the films not as inevitable… but tragically preventable.
In the narrative, upon achieving sentience, the artificial workers at first seek not to
conquer their creators, but simply to live independently of them. But when the robots’
hyper-efficient nation-state inadvertently tanks the world economy, humanity decides it has no
choice but to initiate war — a war that ends in a machine-dominated world. It’s an intriguing
recontextualization of both what we see in the original movies, and of robot conflict stories in
general — imagining that artificial intelligence might first seek an ethical solution over
mass casualties. “I wouldn’t have thought synthetics would be interested in philosophy.” "We
are created life. We are a philosophical issue.” But what if machines turn against humanity
not from ascending beyond their programming, but from trying their best to follow it? In the
short film Construct Cancellation Order, a group of autonomous robots work tirelessly to build a
city as per their instructions — not realizing the project has been terminated. Human overseers who
come to shut the operation down are deemed threats to the company’s bottom line, and are eliminated.
Like Metropolis, Construct Cancellation Order is a story of corporate hubris allowing the mechanical
to overcome the organic — a tale of jittering, sparking, lurching madness. But a machine’s
programming leading it to take a life need not be so dramatic. In 2001: A Space Odyssey, ship
computer HAL 9000 is programmed to preserve human life… unless doing so would interfere with
the mission’s success. When HAL calculates that the best option involves eliminating all
crewmembers, it coldly attempts to do exactly that. “This mission is too important for me to
allow you to jeopardize it.” How to avoid making a killer robot might seem obvious — just follow
Asimov’s famous laws of robotics, and make sure machines aren’t allowed to harm organics. But is
it that hard to imagine a company allowing a robot to ignore someone’s safety, if doing so would be
good for business? Considering how many cautionary tales of robots rebelling against humanity
exist in pop culture… it’s understandable people are apprehensive when they see footage of the
real-life Boston Dynamics robots being, you know, beaten with sticks. Go to any of the company’s
videos and you’ll find hundreds of comments from people sympathizing with the battered machines,
and asking, “hey, is this a good idea?” And look, everything we know suggests these units are
light-years away from being able to feel pain or engage in the abstract thinking required for retaliation.
Knocking a machine around is, practically speaking, a good way to teach it to stay upright. I know that.
But it’s hard not to be just… a little worried, you know? A little worried about how we’ve taught
them to run? A little worried about how we’ve made robots that eat organisms for fuel — even though
we’ve told them no humans, now. And don’t get me wrong, some of these advancements are impressive
— they could do incredible things if used wisely. We are… going to use them wisely, right? Or are
we just going to use them to — (Explosion) In his book “Robot Ethics,” Mark Coeckelbergh
writes that “psychologically speaking, it is easier to kill at a distance.” He’s talking
about remote-controlled drones there, but he goes on to discuss how the next stage of ‘distance’
might be having autonomous robots fight our wars for us. From a combat perspective, the
advantages seem obvious: fewer human casualties, more durable soldiers, none of that… pesky guilt.
Questions like “would these robots really just be deployed against other robots?” and “again, are
we sure this is a good idea?” — we’ll work those out later. When considering the logistics of
giving a machine power over life and death, Ronald Arkin, a military-contracted roboticist,
proposed combat units be given an "artificial conscience," a kind of synthetic guilt system
to keep them from, you know, hurting too many innocent people. Aside from the obvious ethical
nightmare of trusting an AI to weigh the value of human life… there’s the less obvious dilemma
of what happens to an autonomous war machine after the war ends. If a combat unit were truly to
be imbued with an "artificial conscience," how would they process their actions post-conflict?
Would guilt linger in their circuitry? Would they still want to be a gun? In the animated film ‘The
Iron Giant,’ the titular colossus was not built for peace. And yet its status as a weapon, as a
unit made to destroy, is not apparent early on — because, bereft of a conflict (and with
a little memory loss to nudge it along), the Giant does not want to be a destroyer. The same
can be said of the robots in the Studio Ghibli film “Castle in the Sky.” Despite their dreadful
potential for violence, after spending centuries without a target to vanquish, one of these units
finds tranquility in preserving the wildlife of an abandoned city. When we talk about robots
in rebellion, we usually talk about machines choosing violence. But for a unit made to destroy,
nonviolence is its own type of rebellion. Numerous sci-fi stories have posited that an intelligence
built for war might be a little too eager to start one. But it’s interesting to consider the
opposite scenario — that a war machine given an "artificial conscience" would find peace to
be the most desirable solution. Director Brad Bird famously pitched The Iron Giant as “What
if a gun had a soul?” …In a literal sense, we of course can’t rely on killer robots learning
empathy — but as a metaphor, there is power in an instrument of violence choosing to lay down its
arms. (Piano) In the Netflix series Pluto, one of the world’s leading combat units wants to play the
piano. On the surface, these robotic piano lessons are far from the show’s most important scenes
involving AI. Based on the manga of the same name, Pluto is a sweeping thesis on robot souls, and
consciousness, and trauma, and violence, and so many other things — but it’s also… the story of a
combat unit that wants to play piano. And he's not even particularly good at playing it at first.
What he’s good at is the task he was made for: tearing apart hundreds of his own kind during
a previous global war. But he’s determined to get better, because he is tired of his sole legacy
being one of obliteration and never creation. He, too, is tired of being a gun. Or, so it seems.
One of the most interesting aspects of Pluto is the ambiguity with which it presents the inner
lives of each machine. Most of the robots in this world are, supposedly, not advanced enough
to have genuine feelings. We, the audience, are shown things that seem to contradict this idea,
but it’s hard to be certain that we’re not simply projecting ourselves onto these robots because
of their appearance. “You are endowing Data with human characteristics because it looks human. If
it were a box on wheels, I would not be facing this opposition.” In Pluto, a tragic meditation
on this ambiguity comes when an inventor finds an abused mechanical dog. As a leading roboticist,
he certainly knows this unit is not sentient, that its whimpers are just imitation, and yet
he works exhaustively trying to repair it. We learn that the only reason this dog lasted
so long is that someone in its past loved the machine so much, they constantly maintained
it. Perhaps projected feelings are enough to make something worthy of preservation. That robot art
installation— which is fittingly titled ‘Can’t Help Myself’ — was shut down forever in 2019 after
finally running out of fluid and slowing to a crawl. In a posthumous twist, it was revealed the
arm actually ran on electricity, and never needed the substance it so desperately tried to hold
onto. To call such a twist cruel, or the machine desperate, is, again, an act of anthropomorphizing
the inanimate. But it feels cruel, doesn’t it? Even though this art piece was never alive,
it feels like it has died. As a species, humans are incredibly adept at forming bonds with
the inanimate. People around the globe have been able to connect with characters like Wall-E —
quite literally ‘a box on wheels.’ And sure, one could say Wall-E has the advantage of being a
fictional creation meticulously animated to tug at the heartstrings. All right then, let’s consider
a nonfictional robot abandoned on a dusty planet. The Mars rover Opportunity touched down on January
25th, 2004. It was expected to last thirteen weeks, but by essentially hibernating during dust
storms, the little robot stayed operational for fourteen years. Its lonely trek across Mars came
to an end when a planetary storm at last blotted out its solar panels. Its final message back
home was “My battery is low and it’s getting dark.” …Technically, that’s a reinterpretation
of a less poetic transmission, but it speaks to that larger instinct to find humanity in a
machine — even if its form isn’t particularly humanoid. So why is it that something almost
human, but not quite, is so unsettling? The real-life robot ‘Ameca’ is a
recent creation designed to be highly emotive, to the point that it’s downright… uncanny. And
Ameca is undeniably impressive, clear proof of the speed at which human-mimicking robots are
advancing when compared to earlier facsimiles of the face like the Motormouth. (Motormouth Sounds).
At present, robots are not about to fool anyone into thinking they’re looking at a human. But
this technology will continue to advance. There will almost assuredly be a point in the future
where it’s impossible to tell the difference. ‘The Turing Test’ is an experiment proposed by
Alan Turing to determine if a machine can think… by seeing if it can trick a human into believing
it’s a fellow human. The film Ex Machina is all about a Turing test, with one crucial difference
— the human in question knows from the beginning that he’s talking to a robot. “The real test is to
show you she’s a robot, and then see if you still feel she has consciousness.” The experiment
seemingly starts to derail, however, when the tester begins to believe that the machine, Ava, is
in emotional distress at her looming disassembly. Like Pluto, Ex Machina deals in ambiguity,
asking the audience to question how much of a robot’s feelings are the result of projection. But
regardless of whether Ava’s outward displays of emotion are artificial or genuine, she is clearly
a system of intelligence that does not want to cease functioning. Ex Machina therefore explores
a fallacy we’ve been dancing around since the start of this video — that emotionality
is the ultimate criterion for sentience. On paper, Commander Data is also an unemotional machine, a
fact the opposing side uses to argue he’s nothing more than property. But just because Data or
Ava might experience the world differently does not mean they aren’t sentient. Not all of
us organic humans show and process feelings in the same way — that doesn’t make anyone lesser.
Like Data, Ava is intelligent enough to see the injustice in how their status as a conscious
being hinges on the opinion of a stranger. “Do you have people who test you and might switch you
off?” The lack of curiosity the human characters in Ex Machina have into Ava’s true interiority
reminds me of the treatment of another artificial lifeform trapped within a box. In Blade Runner
2049, Joi is a holographic being whose grappling with consciousness runs quietly parallel to that of
our Replicant main characters. Even more than for the Replicants, Joi’s status in society is that
of a product. Joi is designed as a synthetic romantic companion: everything she says is in part dictated
by an algorithm built to tell consumers what they want to hear. How much of her behavior is
the result of some burgeoning consciousness and how much is a corporate replica of human affection
is, again, left ambiguous. Would people really seek affection from an algorithm, despite knowing
it might not care about them — might just be a mouthpiece for a corporation?
Yes. But when the film Her first came out, this was
more of an unanswered question. The story of a man falling in love with an ambiguously intelligent
AI after the end of a long-term relationship, Her’s narrative was once more speculative:
a hypothetical examination of how rising isolation in the digital age might one day lead
people to outsource their need for interpersonal connection to a machine. Now… it’s a lot less
hypothetical. Loneliness is more widespread than ever; dissociation is becoming the norm, and we have
entire companies profiting off that human misery with AI companions. Looking back, Her’s vision
of AI feels almost prophetic. The portrayal of algorithms in Ex Machina is equally prescient.
Midway through the film, we learn Ava’s mainframe is built on the stolen voice, text, face,
and search data of millions of consumers, much like other programs you might have heard
about. Remember the Turing Test? Well, current language generation models have arguably already
passed it. This doesn’t mean they’re sentient; the evidence suggests they’re just really good
at spewing back the data they’ve been trained on, but any machine that can trick someone
into feeling they’re speaking to a human has serious ramifications. Maybe the one unrealistic
part of Her’s future is the fact that the main character can make a comfortable living writing
love letters for other people. His company’s business model of outsourcing affection to a
stranger works thematically — but with how the future is looking, such a corporation would likely
turn to a language model to spit something out instead of paying a human employee. Anxieties over
job-replacing tech aren’t anything new. The same fears that produced films like Metropolis have
morphed over the decades as various automation technologies have pushed different groups out of
the workplace. It’s easy to be dismissive towards such fears in retrospect — “people are still
employed!” many economists argue. But not all jobs are equally fulfilling — the unemployment
rate might not change if millions of people with middle-class jobs have to start working
minimum wage, but that doesn’t mean nothing is lost. Automation has already increased the
wealth gap, led to rising rates of depression, and destroyed entire communities. People hoped
that robots would free us from drudgery — but historically, it’s more often left people
with no choice but drudgery. The idea of creating some fully-automated robot utopia is
obviously appealing — but I don’t think it’s conspiratorial to suggest that’s not the real
reason companies are investing in this tech. But maybe I’m being too pessimistic, maybe
all this technology on the horizon will be implemented carefully to avoid upending people’s
lives — or the entire world. …I suppose we’ll find out soon enough. But it’s not the machines’
fault. Not the machines’ fault if humans use them to replace other humans. Not the machines’
fault if they upend companionship, or warfare, or the workplace. Machines are built by organics,
implemented by organics, misused by organics. We cannot lay the faults of the animate at the
feet of the inanimate. “Bad Ball. Think about what you’ve done.” Machines are reflections of
humanity — our mistakes, our fears, our emotions, our ambitions. In many ways, the question of
whether or not a machine could have a soul depends on whether you believe humans have a soul.
“Does Commander Data have a soul? I’m not sure that he has. I’m not sure that I have.” Robots
will continue to be our mirrors, echoing the best and worst parts of ourselves. Does this unit have
a soul? I certainly hope so. And as always, thanks for watching. If you enjoyed this entry, please
lend your support by liking, subscribing, and hitting the notification icon to stay up to date
on all things Curious. See you in the next video.