This video is sponsored by CuriosityStream. Get access to my streaming video service,
Nebula, when you sign up for CuriosityStream using the link in the description. Science fiction often kicks around terms like,
“the rights of sentient beings”. If that’s going to be the standard for how
we treat new species that we meet - or make - in the future, we’d better give some serious
thought now to what we truly mean by sentient. So today we will be exploring the concept of sentience, what it is and whether we can replicate it, along with the parallel concept of sapience. Neither is a very easy concept to define, but
that’s probably the best place to begin. Sentience is considered to be the capacity
to feel, perceive, or experience subjectively, and is often considered separate from the
faculty of reasoning or thinking. It is the ability to experience sensations. Which tells us both a lot and very little. Sentience is one of those fundamental philosophical
concepts that is hard not to talk about self-referentially, but in many ways it isn’t just part of what
you are, but the bit of you that lets you even be aware there is a you. We have a classic expression by René Descartes,
Cogito, ergo sum, or “I think, therefore I am” and that’s often interpreted as
a proof or assertion that the capacity for thought and the capacity for existence are
the same thing. And indeed many folks think it is, but the
concept is not saying that because a rock does not think it does not exist, or even
that you do exist, but rather that there is no point in doubting your own existence. Don't let the "therefore" confuse you; this isn't a logical syllogism or conclusion, it's an articulation of the self-evidence of your
existence. You can roll back and question every first
principle or assumption until you eventually end up questioning if you even exist, but
doubting your own existence isn’t a coherent concept because if you didn’t exist, who
would be doing the doubting? So we can’t prove we exist or are sentient,
certainly not to anyone else, but we can say that doubting either is pointless, and I want
to say that from the outset because we will not waste any more time today challenging whether there is such a thing as you or I and whether we are thinking, feeling entities. There's a long-standing tendency of some
folks to get ultra-reductionist about concepts like personal existence, sentience, and free
will, and question them from a scientific standpoint. Which is all well and good, but the entire
basis of science is that an individual can make observations about the universe around
them and draw conclusions from it; any conclusion you draw from your observations that calls into question your ability to make observations or your own existence is logically incoherent.
This is what we mean when we say things like “science can’t prove that science is the
only way to learn the truth”. The core of science is an unprovable assumption that a sentient mind can apply reason to its observations to determine things; anything it proves which calls that notion into question can be dismissed. Which is also a good reminder that philosophy is valuable; sadly it tends to get dismissed and undertaught a lot these days. “Philosophical question” is often a euphemism
for “mere academic discussion with no practical application”. But if we don’t want to repeat some other
mistakes of history, these are questions we really need to have an answer for BEFORE we
meet any evolved alien life or create any artificial intelligence. Folks often don’t realize that the entire
field of science is a subset of philosophy under what’s called “Natural Philosophy”. Since science is specifically focused on understanding
the natural, material Universe it unsurprisingly tends to offer us the best return on investment
for cranking out valuable technologies, medicines, and so on, which can make it seem like the
most valuable area of knowledge, but also to some as the only area of knowledge with
value. So much so that they skip understanding the
assumptions underlying it. We are going to be talking some today about
the neurology and chemistry of thought, but we are more focused on the philosophy angle
today, as without it folks can tilt to a reductionist take on concepts like thought. However, that's the same as asking what Shakespeare is, and saying that at its core “A Midsummer Night’s Dream” is composed of various chemicals we classify as ‘ink’ on the surface and upper layers of complex fibers we refer to as paper. We can analyze how the brain works, but when
we ask what sentience is, or what thought is, or what love or trust are, if you are
coming back with an answer of ‘certain configurations of neurons’ or ‘these hormones or chemicals’,
you are coming back with the same sort of answer that says a classic piece of literature
or science is just ink and parchment, or 1s and 0s in a database, or that math is simply
circuits firing inside a computer. These are but the substrates on which these
objects or concepts operate. Those substrates should not be ignored and
can impact the concept or abstract object, but they are not the totality of it. A book is not simply ink and paper, a mind
is not simply neurons. I also mentioned a concept called Sapience,
and we want to sketch that concept out in contrast to Sentience. First, we often discuss animal life or hypothetical
alien life and ask if it is intelligent, or sentient, or sometimes sapient, and I’ll
even coin and throw in the term pithient later today. I often use all three casually as if they were interchangeable, but they are not. If Sentience is considered to be the capacity
to feel, perceive, or experience subjectively then it is something we can say many animals
possess. We are prone to anthropomorphizing our pets,
and granting them levels of understanding they almost certainly lack or trying to humanize
their worldview, but regardless of that habit, they very clearly feel happy or sad, they
can perceive the world around them, and they experience events and remember them. So they are sentient. A rock is not. In between the higher mammals and inanimate
objects there is presumably a line where sentience ends, but it's hard to pin down and might be rather hazy; we will return to that later. Sometimes operational definitions are more
useful for discussion than rigorous ones. In discussions of cognition it’s often useful
to define some term to mean, “Whatever the heck the core thing is that human minds do
that the best animal minds clearly don’t.” We can call it sentience or consciousness
or whatever, but it’s a very useful term that shortcuts discussions floundering over
terminology. On the other end of things we have Sapience,
which derives etymologically from the Latin for “being wise”, and it has a fair few
variations on definition but usually as having or showing great wisdom or sound judgement,
or having or showing self-awareness. We often use it loosely as the separation
point between the smart animal and the human intellect, as indicated by the species name
“Homo sapiens”, meant to distinguish us from various other human predecessors among the
Hominids or more broadly primates. As a species name, it is meant to separate
us from the rest of the Homo genus like Homo habilis, Homo erectus, or Homo neanderthalensis,
commonly known as Neanderthals. In this regard it is an awful species name, since it is very likely that a Neanderthal, or any of the human ancestors who casually wielded tools and possessed fire, would qualify as being on the human side of that human-animal divide. I don't know if those other Homo genus members
qualify as wise, but then I’m not sure if we do either. It is tempting to point to their apparent
brain size as similar to our own and assume they were as sapient as you and I, but thought
is also about a lot more than brain size and architecture, or brain to body weight ratio. It’s quite possible archaic humans, if raised
in the here and now, would be mentally indistinguishable from you or me, but they weren't, and we
want to resist the urge of thinking of this concept as just a matter of hardware, all
the more so since your brain’s configuration is highly malleable and its capacity is influenced
by how you personally have trained and used it in your lifetime. In most uses of sapience as some other level beyond sentience, what is basically being said is that this is the trait an animal needs to be doing technology and civilization. That is certainly debatable, but it tends to be how I use it on the show, if for no better reason than that it tends to be used that way in conversation about intelligent alien life. It is probably a bad idea to think of sapience
as a subset of sentience or its pinnacle, but we would expect to only encounter abstract
conceptualizing, technology, and complex artificial systems with sapience and sentience, not
one or the other. What do I mean by complex artificial system? Well a hive or a herd is a system of interaction
between lifeforms, even sentient ones for some herds, but is natural, whereas congress
or parliament or a board of directors and the charters or constitutions running them
are artificial systems, and that is something we would expect to be the exclusive domain
of sapient entities. I want to be a little careful here about implying the ability to handle abstract concepts and sapience are the same thing, as that is debatable too, but we would expect them to overlap. Moving back to sentience overall, we described
it as a capacity to feel, perceive, and experience subjectively. The last bit, ‘experience subjectively’,
means a cognitive and emotional experience as opposed to the actual events of the experience. A tree can experience being cut down, a rock
can experience being crushed and thrown into a smelter and emerging as some cutlery, but
there’s no feeling of anxiety experienced by that rock for instance. And the tree or rock’s “reaction” to
the event was pretty much limited to its material structure obeying the basic laws of physics. We usually hold that only animals are sentient,
and specifically only vertebrates, though there's argument over whether there are exceptions. This is why we don't like using sentience
to discuss alien life or rights, or animal rights for that matter, because a frog qualifies
as sentient but isn’t usually considered to have many if any rights. Few of us contemplating interaction with alien
worlds and native life are expecting to encounter something of that intellect and try to establish
a treaty with them or argue over whether they own that planet. Even when we say things like a given region
‘belongs’ to the native flora and fauna, not us, we are not implying they have the
right to administer or sell it, any more than an orphaned toddler whose parents died and
left them some money can make business decisions about those funds and properties. Nor could we just say planets with sentient
life should be quarantined while those without should not be, like some planet occupied by
a simple and mindless lichen. Of course we could, but the implication is
that sentience holds some special status for a right to property or to exist and I don’t
think most view a frog as vastly more valuable than a redwood tree. Whether or not we preserve an alien ecosystem
might hinge on many things, but probably not specifically on whether it has life that meets the standard of sentience. Similarly we wouldn't draw the line on whether you could own property based on whether you are sentient, assuming we are acknowledging the
concept of property rights in the first place which many people do not. It’s catchy when Optimus Prime says “Freedom
is the right of all sentient beings” or in the classic episode of Star Trek the Next
Generation, “The Measure of a Man”, where they are arguing if Commander Data is property
of Starfleet or an individual sentient being with rights and freedoms within the Federation,
but whether or not that should be true, that obviously is not how we define it. You can own a cat or dog and clearly they
are sentient. Or it seems clear anyway, testing that is
tricky, and we will get to Self-Awareness tests for animals and concepts like the Turing
Test for artificial intelligence momentarily, as potential ways to determine if something
is smart enough to have rights. Of course where rights come from is pretty
important to discussing who should have them, and while science can offer us data on which
to base our answer, it's not a question science is capable of answering. Science can help you identify who has the
quantifiable traits you consider important, say an IQ of such and such or capacity to
recognize one’s self in a mirror, but it can’t tell you who should have rights because
it isn’t a concept science can explore, except in the context of what the biological
influences are on how we determine rights. Or, in other words, you can discuss the ethics
of science but not the science of ethics. Maybe all sentient things should have rights,
maybe only sapient ones should, maybe natural rights are a delusion, but science can’t
answer that one. What it can answer, or help us answer, is
who has traits we think indicate sentience like Self-Awareness, and we often use mirrors
for this. Does something recognize itself in a mirror? Not doing so doesn't mean something isn't self-aware; obviously something with bad vision or no vision can't pass the mirror test, and others might be pretty smart compared to another animal but still try to attack or befriend the mirror image. But tests like this with animals helped us identify some different types and flavors of self-awareness. Self-awareness, like sapience, is not identical
to sentience but is closely associated and often overlapping. For instance we might divide it into three
types: Bodily Self-Awareness, Social Self-Awareness, and Introspective Self-Awareness. The first type, bodily self-awareness, is mostly what the mirror test is for: can the animal recognize itself in a mirror, and would it notice something out of place, like a red mark on its chin or neck, which it normally cannot see except when looking at a reflection? This is the awareness of that basic self as
separate from the environment around them, and while often thought of as the simplest
type of self-awareness, it might be quite peculiar in something like a sentient alien
algae on top of water, which only knows its existence in terms of day and night, cold and warm, a layer of air above, water below, and the sandy coast to the side. That might get very blurry if it came to think
of itself as a world brain, encompassing everything, even if only that thin surface layer was involved
in thought. Where you begin and end as a person, physically,
seems rather obvious but one might imagine folks thinking of their clothes as part of
them, or their hair or upper layer of skin, which are both technically dead, or those lost hairs and skin as still part of themselves, or their bones as not part of them but instead as some kind of rock. Here, for self-awareness, we mean that they
recognize themselves as physically distinct from their environment, and we don't care much about the boundary condition, whether or not they should include their shell for instance, which they may have acquired from another lifeform, though it might matter a lot for some aliens. This requires a sense and detection capacity,
which we generally call proprioception, the sense of self-movement and body position. That line might be even blurrier on the
social self-awareness side with something like a hive mind. I mentioned earlier that we consider vertebrates
sentient but are more dubious about invertebrates, which include insects like ants we discuss
when contemplating hive minds. Social Self-Awareness covers a being’s awareness
that they have a social role for survival. We do not think ants or other hive mind critters
have this, incidentally; it's generally assumed to be limited to smarter and highly social animals,
but they’d be an example of where it might be very blurry, as a hive mind might not consider
its components smart, sapient, or self-aware even if it possessed all of the above. We don’t know if ants are sentient, but
it would not be hard to imagine a hive composed of sentient animals, or a hive mind composed
of individual bits which were sentient too. Now the third form of self-awareness, introspective
awareness, is the self-awareness of thoughts, feelings, and so on. Here is where we get close to that region
where we begin contemplating not just sentience but the conscious mind, and even the concept
of having a conscience of right and wrong. We would generally assume apes and monkeys
do not spend much time introspecting, though it is hard to test, and this is where we get
into concepts like the Turing Test. The Turing Test was originally envisioned
as a way of assessing if a machine could pass for human to other humans, but ironically
its main use at the moment is in things like CAPTCHA Codes, where a human proves it is
not a robot to a robot. It is much easier to trick a stupid machine
into thinking you are not a stupid machine than to trick a human into thinking so. Now we are using a Turing Test to mean that
something which passed it genuinely cannot be distinguished from possessing that murky
quality of personhood, but in practice a Turing Test is about determining if something is
a machine based on specific circumstances, like if you could tell it was one only through
exchanging text messages or talking on the phone, and while an alien or a Neanderthal
might be people, they are probably not going to pass that test even ignoring the language
issue. An artificial intelligence for instance might
screw up a conversation by going overboard or saying something alien from its lack of
understanding, like you saying your boss or neighbor irritates you so much you felt like
punching them, and it replies by saying it loves punching people too, or that its boss irritates it so much it wants to kill him. Artificial Intelligences designed to pass Turing tests nowadays tend to focus on that sort of mimicry and be detectable by saying something over the top like that.
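To make that failure mode concrete, here is a minimal Python sketch of the imitation game; this is a toy of my own devising, not a real test protocol, and the bot's canned replies are invented purely to echo the over-the-top mimicry just described.

```python
import random

# Toy sketch of Turing's "imitation game": a judge exchanges text with two
# hidden channels and must guess which one is the machine. The bot below is
# a deliberately crude mimic, the kind that gives itself away by going over
# the top. You play both the human respondent and the judge when running it.

CANNED_REPLIES = [
    "Ha, yes! I also love punching people. Punching is great.",
    "My boss irritates me so much I want to kill him.",
    "I am definitely a normal human who enjoys normal human things.",
]

def bot_reply(prompt: str) -> str:
    """A crude mimic: ignores the prompt and exaggerates wildly."""
    return random.choice(CANNED_REPLIES)

def run_session(questions):
    bot_channel = random.choice(["A", "B"])  # hide which channel is the bot
    for q in questions:
        print(f"\nJudge asks: {q}")
        for channel in ("A", "B"):
            if channel == bot_channel:
                answer = bot_reply(q)
            else:
                answer = input(f"(you answer as channel {channel}) > ")
            print(f"  {channel}: {answer}")
    guess = input("\nJudge, which channel is the machine (A/B)? ").strip().upper()
    print("Correct!" if guess == bot_channel else f"Wrong - it was {bot_channel}.")

if __name__ == "__main__":
    run_session(["How do you feel about your boss?",
                 "What did you do last weekend?"])
```

Running it, you play both the human respondent and the judge; the point is just the structure of the test: hidden channels, free-form questions, and a judge forced to commit to a guess.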
Alternatively, an alien might make that same mistake or honestly feel that way, coming from a species where irritation is rare but
severe, turning homicidal easily, and while probably inaccurate, portrayals of Neanderthal
cavemen usually imply them to be much more casually violent. It’s obviously very important we get this
process ironed out, and not just for dealing with aliens. Indeed we need to be worried about it likely
a lot sooner for handling Artificial Intelligence, not just for fear of enslaving a machine who
is a real person who should have rights, but in case one intentionally takes a dive on
such a test, pretending to be dumb and safe while it's conspiring to go all Skynet on us and wipe us out. This may be one of those fundamentally unanswerable
questions and concepts but we do at least need an ‘operational definition’ in the
absence of a rigorous one. Such a defintion would not be perfect but
sometimes the perfect is the enemy of the good. Something that lets us know if a machine or
an uplifted animal intelligence, like a genetically enhanced super-smart cat or dog, is getting
to that area where we need to be contemplating their rights. What those rights should be is a harder question
of course, and for instance while we might say a toddler has human rights but not adult
rights, in that they might have property left to them but can't administer or sell it,
a smart dog or AI at toddler level might not have those same rights. Regardless of what they are, we need some methods and benchmarks for evaluating where the cutoffs are, and why, and probably several of them so nobody can loophole into one. Though it's interesting to contemplate whether anyone we'd be referring to as 'nobody' or 'anybody' can be loopholed into being considered sentient or sapient, as we generally use those words for describing people, and it is also iffy if something capable of intentional deception or trickery that might fool a human can be significantly less than human itself.
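As a toy sketch of that multiple-benchmark idea, consider something like the following Python snippet; the test names, the pass/fail scoring, and the threshold of three are all invented assumptions standing in for real experimental protocols.

```python
from typing import Callable

# A toy version of the "several independent benchmarks" idea: an entity only
# crosses the rights-consideration threshold if it passes a minimum number of
# unrelated tests, so gaming any single test isn't enough. The test names and
# the threshold here are illustrative assumptions, not any real standard.

def evaluate(entity: dict, tests: dict[str, Callable], required: int = 3) -> bool:
    passed = [name for name, test in tests.items() if test(entity)]
    print(f"Passed {len(passed)}/{len(tests)} benchmarks: {passed}")
    return len(passed) >= required

# Hypothetical battery; each lambda stands in for a real experimental protocol.
TESTS = {
    "mirror_self_recognition": lambda e: e.get("mirror", False),
    "novel_tool_use":          lambda e: e.get("tools", False),
    "conversational_turing":   lambda e: e.get("turing", False),
    "deferred_gratification":  lambda e: e.get("patience", False),
}

uplifted_dog = {"mirror": True, "tools": True, "patience": True}
print(evaluate(uplifted_dog, TESTS))  # passes 3 of 4, so it clears the bar
```

The design point is that the benchmarks are independent, so gaming any single test, the way a chatbot games a text-only Turing test, doesn't clear the bar by itself.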
That level right below us is hard to discuss, but important. We will dub that Pithecine for now, ‘pithekos’
meaning ape and giving us terms like Australopithecus, as opposed to Anthro, or human, and perhaps
we might even add to the terms sapient and sentient with “Pithient”, meaning a subhuman but near-human intellect, as its similarity to the word ‘pithy’ seems strangely appropriate. A Pithience test, or super-pithience test, would seem one of those things we need to be developing rapidly for use with things we might create in the next century or so. We'll coin that term mostly for discussion in our social media forums or the comment section of the video, as that level below human is where you really need to start worrying whether you're dealing with a person. I tend to feel that outright sapience is not
one of those things we would miss or mistake though. Now we do not want to assume sentience is some sort of ladder with single levels; in between basic sentience of awareness up to things like Pithience or Sapience, there are likely to be parallel paths. Indeed there seem to be some creatures that could be said to be 'locally' as smart as us, doing some particular mental task as well as or better than we do, such as better logic, better short-term memory, and so on. Some alien or engineered critter might be on average as smart as a dolphin or chimp, but have a razor-sharp memory or better social skills, while some machine intelligence might be magnificent at math but atrocious at art and language.
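One way to picture that is to score minds as profiles across several capabilities rather than rungs on a single ladder; a minimal sketch follows, with the dimensions and numbers invented purely for illustration.

```python
# Intelligence as a profile rather than a ladder: compare minds as vectors of
# capabilities instead of a single score. Dimensions and values are made up.

PROFILES = {
    "baseline_human": {"logic": 5,  "memory": 5,  "social": 5, "language": 5, "art": 5},
    "engineered_ape": {"logic": 3,  "memory": 9,  "social": 6, "language": 2, "art": 1},
    "math_machine":   {"logic": 10, "memory": 10, "social": 1, "language": 2, "art": 0},
}

def locally_smarter(a: dict, b: dict) -> list[str]:
    """Dimensions where mind a beats mind b, even if a is 'dumber' on average."""
    return [dim for dim in a if a[dim] > b[dim]]

for name, profile in PROFILES.items():
    average = sum(profile.values()) / len(profile)
    wins = locally_smarter(profile, PROFILES["baseline_human"])
    print(f"{name}: average {average:.1f}, beats baseline human at {wins}")
```

On a single averaged score the engineered ape looks 'dumber' than the baseline human, yet it still wins outright on memory and social skill, which is exactly the parallel-paths point.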
We might find some very strange niches with alien or AI, things that make a hive mind or idiot-savant intelligence look downright
normal. And again we might create them too; there's a lot of motivation to make something as smart as a human in some ways but much dumber in others,
though many of those motivations are pretty repugnant. I would be curious what levels or alternatives
folks can think of for classifying intelligence, sentience, sapience, and so on, and if I haven’t
plugged our various discussion forums like Facebook and Reddit recently, those are all linked in the episode descriptions and are a great place to discuss concepts of intelligence
with other folks who are intelligent. It is an important discussion though, because
if we’re as successful at developing AI as we think or hope or fear we will be, we’ll
need some agreed-upon criteria for when it becomes inhumane to switch your computer off -- and for when it becomes murder. So we'll be continuing our discussion of
Artificial Intelligence next week and we’ll get to unveiling our April Schedule in a moment,
but while working on this episode to get it ready for airing, and polishing up our episode
on Post-humanism for next month, I had a few thoughts on a common point we get raised in
dealing with hypothetical aliens, post-humans, or machine intelligences that are just so
far ahead of us in intelligence that it's at least as big a gap as between human and primate. This often leads to analogies about them viewing
us as ants, and we’re going to spend a few minutes talking about whether or not that’s
a valid perspective in an extended edition of this episode over on Nebula. I've taken to doing short episode follow-ups, some on Nebula and some during our Livestream mid-episode breaks of late, like the one we'll
have this weekend, for topics where it doesn’t feel like a whole new episode is warranted
as a sequel but I have a bit more to say, and if you’d like to catch the ones on Nebula,
they do replace our sponsor reads. Our episodes come out on Nebula early and
ad free, and we do have some exclusive episodes, like our Coexistence with Aliens series as
well as these new Nebula Plus Extended editions. Now you can subscribe to Nebula all by itself
but we have partnered up with CuriosityStream, the home of thousands of great educational
videos, to offer Nebula for free as a bonus if you sign up for CuriosityStream using the
link in our episode description. This means you can watch all the amazing content
on CuriosityStream, like the “Brain Factory” which documents science’s efforts to transfer
the human mind to a digital avatar, but also all the great content over on Nebula from
myself and many others. And you can get all that for less than $15
by using the link in the episode’s description. So as mentioned we’re having our monthly
Livestream Q&A on Sunday, March 28th, at 4pm Eastern Time, and we spend about an hour going
through your questions that get submitted in the Livestream Chat. We always take a break halfway through to
let me catch my breath and run a pre-recorded spot talking about show projects, some of
the SFIA crew who help on the show, and now we’ll also add into that mix some episode
followups too, when they come to my mind, but it's also your chance to ask follow-up
questions on any episode, or any other topic. Then we’ll head into April with a look at
AI Run Government, both how AI might help us run governments and how they might run
them if in charge, and then we’ll return to the Fermi Paradox series for a long requested
topic, a detailed look at Drake’s Equation. Then we’ll shift to look at advanced human
civilizations in terms of Longer Lifespans, Post-Humans, Post-Scarcity, and Purpose, before
switching back to the Fermi Paradox again to look at how Multiverses alter the equation. And all that's coming up in April. If you want alerts when those and other episodes
come out, make sure to subscribe to the channel, and if you’d like to help support future
episodes, you can donate to us on Patreon, or our website, IsaacArthur.net, which are
linked in the episode description below, along with all of our various social media forums
where you can get updates and chat with others about the concepts in the episodes and many
other futuristic ideas. You can also follow us on iTunes, SoundCloud,
or Spotify to get our audio-only versions of the show. Until next time, thanks for watching, and
have a great week!