So the topic for today is Transhumanism and
concepts for Life Extension, and this has turned out to be an incredibly hard topic
to prepare a script for. Normally I do anywhere from one to three script drafts, often tossing out one and starting from scratch; this time around I’m on script #12, after tossing out its eleven predecessors. It’s also not unusual for me to reread a
lot of old material by others or watch a few videos on the topic if there are any, to see
if there are topics they missed that I think need covering, or important points I forgot, and I knew it was a bad sign when, trying to watch these, the one that least set my teeth on edge still only managed to get less than a minute in before referring to our ‘primitive Darwinian Brains’. Now I don’t know if you personally consider
our brains to be designed by some higher intelligence or the byproduct of a few billion years of
evolution, but either way there is nothing primitive about the human brain. The adult human brain is pretty much hands
down the most sophisticated machine we know of and there is nothing particularly natural
about it, unless your definition of artificial excludes the intentional investment of massive
amounts of time and resources that was expended making it possible for you to understand what
I’m saying right now. Of course some of you probably do have problems
understanding me right now, which ought to serve as a good reminder exactly how sophisticated
human speech is, and is a good opportunity for me to remind you that all the videos on
this channel come with closed caption subtitles that you might want to turn on. The human brain is a pretty serious piece
of hardware, but it’s the knowledge, all the learning, or basically software, that
goes into the makeup of your average educated adult that makes it truly impressive and also
artificial. If the basic brain itself is natural, the
final form it takes is no more natural than a chunk of marble someone has chiseled into
a statue, and frankly a good deal less so, because our minds are the byproduct of decades
of careful work to produce highly sophisticated thinking engines. Today we’ll be looking at some fascinating
concepts for, basically, improving on that, making people healthier or smarter or longer
lived or just plain safer and happier. Unfortunately it’s also a topic that’s gotten a bad reputation, somewhat unfairly in my opinion, because Transhumanism has accumulated a fair amount of rubbish around itself in the last decade as it’s spawned a lot of openly
political movements and frequently groups that would be most accurately described as
cults. Now that’s nothing new, we see that a lot
with science and science fiction, one minute you’ll be looking at the scientific and philosophical ramifications of something like Quantum Mechanics and the Many Worlds or Copenhagen Interpretations,
the next minute someone is peddling some homebrew form of Quantum Mysticism. And that’s fine, any scientific concept
will tend to accumulate a lot of that as part of the normal process of contemplating those
very important philosophical and ethical aspects of the concept, and the core science is not
diminished by this. Unfortunately in the case of Transhumanism
this clutter has gotten to be much louder than the actual scientific concepts and general
principles so I think it scares some folks off and truth be told I sometimes feel the
same even though I’ve classified myself as a Transhumanist for around two decades
now. The core concept of Transhumanism is using
technology to improve mental and physical health and the length of the human lifespan
too, preferably indefinitely. This is hardly a new concept, people have
been using any number of herbs and rituals to attempt, sometimes successfully, similar
things for untold centuries. We’ve been sticking artificial things in
our bodies for a long time too, dental fillings have been found in human skulls 10,000 years
old and humans have basically been on the route to being cyborgs since we started putting
clothes on. We’ve been doing genetic engineering of
people probably longer and our crops and livestock are very definitely not the byproduct of natural
evolution. Just because a lot of the new ideas involve
microchips inserted into people or direct tinkering with DNA doesn’t really change
that beyond making it a lot more effective. There’s very little natural about you or me, my dear ladies and gentlemen. Mankind isn’t simply a maker of artificial
technologies who is now considering maybe making some of those artificial changes to
ourselves, we are now, and pretty much always have been, the most blatant and shining example
of our own tinkering with nature. So while in most folks’ minds there is some sort of distinct line where we cease being natural by putting machines in ourselves, it’s important to understand that line is mostly arbitrary. And when it comes to being natural, that
ship sailed long ago. Because fundamentally drinking some herbal
concoction to improve your health, or clear your mind for better thinking, or slow your
aging, is the same in terms of intent as cramming some tiny little machines in you
to do the same tricks. Same goal, same intent, different method. There’s an awful lot of folks who are alive
right now with all sorts of electronic gizmos in their bodies keeping them alive or making
their life easier, and frankly I’m not sure what the difference is between a smartphone in my pocket and one wired directly into my head, except that the latter seems a lot more
convenient. I’m just old enough to remember when mobile
phones were high-tech gadgets reserved for science fiction and I’m also just old enough
now to have outlived the average human life expectancy for most of history. And so that’s our quick look at the ethical
aspects of Transhumanism. Essentially, as best as I can tell, there
are none. Now that does not mean individual applications
of it don’t have their own, but insofar as we are just talking about using artificial
means to make people healthier, smarter, or longer lived, I don’t think there’s much firm ground there on which to get any moral footing against it. We’ve been doing this, with mixed success,
for as long as we’ve been around as a civilization and just because we’re much more scientific
and successful with it now doesn’t make it morally or conceptually any different than
in the past. Now we’ve got a lot of concepts to cover
and we’ll be skimming through many, and in many ways today we are looking more at
concepts than specific technologies. Some we’ll look at more down the road, and
as those come up you’ll see little yellow and white boxes pop up; those are video links to current or future videos on the topic, and I only put future material up if I expect to get to it in the near future. So if you hover over one and it says ‘click to watch’, that video is done and you can just click on it to automatically pause this
video and open that one up in a new window. If it isn’t ready yet, it will say coming
soon and suggest you subscribe to the channel for alerts when new videos come out. I tend to break up Transhumanism’s goals
or interests into a number of categories of general technology; categorization is always a bit of an arbitrary thing, but here are our topics for today:

1. Speeding up Reflexes & Thinking
2. Slowing down Aging
3. Cloning and Prosthetics
4. Uploading the Mind
5. Artificial Intelligence
6. Technological Singularity

Our first category, speeding up reflexes and
thinking, focuses mostly on enhancing the speed at which signals are sent around your
body or brain. If you didn’t know, the main component of
that is a thing called an axon, and these tend to run throughout your body and mind
in little sausage links. They are the phone line or internet cable
or information highway of your body. Some are myelinated, some are not, and myelin
is a fatty white substance and the reason we call some chunks of brain material white
matter. We also call myelinated axons nerve fibers,
and where they connect to other cells, usually other neurons, we call these junctions synapses. The wider the diameter of an axon, the faster
information can travel down it, and if they don’t have that myelin sheath, or it’s
thin, it travels much slower. Generally this diameter is around a micrometer,
a millionth of a meter or a micron, but some are wider, up to 20 microns, the diameter
of our thinnest hairs, and in the case of the squid giant axon it can be fully a thousand
microns wide, or a millimeter. Now that would make for very fast nerve conduction
except that the axon is unmyelinated so it’s actually not too quick, quicker than our own
unmyelinated axons since it is so wide but still slower than our fastest, myelinated
nerves. I wanted to clear that up because people often talk about using squid giant axons in people to speed up our nervous system, and besides giving people the shivers, it also wouldn’t be effective.
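If you want a rough feel for those numbers, here’s a little back-of-the-envelope sketch using common textbook rules of thumb, roughly six meters per second per micron of diameter for myelinated fibers and a square-root scaling for unmyelinated ones; the exact constants vary quite a bit in the literature, so treat these as ballpark figures rather than precise physiology:

```python
# Rough comparison of nerve conduction velocities using ballpark rules of thumb.
# Myelinated fibers: velocity scales roughly linearly with diameter (~6 m/s per micron).
# Unmyelinated fibers: velocity scales roughly with the square root of diameter; the
# constant (~0.8) is picked so a ~1000 micron squid giant axon lands near its measured
# ~25 m/s. These constants are approximations, not precise physiology.

def myelinated_velocity(diameter_um):
    return 6.0 * diameter_um          # m/s

def unmyelinated_velocity(diameter_um):
    return 0.8 * diameter_um ** 0.5   # m/s

fibers = [
    ("Human unmyelinated C fiber, ~1 micron", unmyelinated_velocity(1)),
    ("Squid giant axon, unmyelinated, ~1000 microns", unmyelinated_velocity(1000)),
    ("Human myelinated A-alpha fiber, ~15 microns", myelinated_velocity(15)),
]

for name, v in fibers:
    print(f"{name}: ~{v:.0f} m/s")

# Despite being a thousand times wider, the squid giant axon still conducts more
# slowly than our fastest myelinated fibers, which is why transplanting them into
# people wouldn't actually speed us up.
```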
What you’d probably want to do is tweak the genes that control axon diameter to be a bit wider, or simply transplant one intentionally grown that way, or even just pull out that whole nerve and replace it with, say, a fiber optic cable. Now what’s the advantage of faster nerve conduction? Does it make you lightning quick? No, but it would make you react a lot faster;
this doesn’t make you move in a blur, it just means the delay time to send signals goes down. You might go from needing a decent fraction of a second to realize something is in front of your car and send the signal to brake, down to a tiny fraction of that time, and that would save a lot of lives. It’s the least comic-book-y superpower but it’s probably the most useful. Speed up nerve conduction and people have way fewer accidents of every type. Once you adapted to it, which would probably
take quite a while, it would be very hard for you, for instance, to trip down a flight
of stairs. On top of that some of the most debilitating
injuries tend to involve nerve damage, so the ability to get in and replace nerves or
regrow them is obviously a high priority of modern medicine. Now a lot of times the implementation of this
concept would revolve around basically coating existing nerves with some conductive substance
that simply relayed the information faster, closer to the speed of light rather than the speed of sound, which is about a million times slower than light but still decently faster than even our fastest nerves send signals. Tiny little robots or some gene-tweaked virus
would run around your system basically glazing your existing nerves in your body and brain,
or replacing them, so they simply sent everything faster. Conceptually easy though obviously not easy
to implement and probably way over-simplified from anything we’d actually have to do to
get that sort of result. Doing this in your brain would also speed
up thinking, especially if we could do it in a way that generated less total heat. Our brains run quite hot, a lot like modern computers, which have plateaued out more from the difficulty of cooling them than from limits on further micro-sizing them. There are some fundamental physical limits to how little heat you can generate performing a single bit operation, since there’s always some heat produced erasing a bit of data, as covered under Landauer's principle, but it is many orders of magnitude lower than we currently produce doing this on our computers, whether the ones from the factory or the one resting on your shoulders.
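To put a rough number on that limit, here’s a quick sketch; the 10^15 switching operations per second I plug in for a modern chip is just an illustrative assumption, not a measured figure, so take the final ratio as an order-of-magnitude gesture rather than a hard number:

```python
import math

# Landauer's principle: erasing one bit of information costs at least k*T*ln(2) of heat.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T_body = 310.0                # rough human body temperature, Kelvin

landauer_per_bit = k_B * T_body * math.log(2)
print(f"Landauer limit at body temperature: ~{landauer_per_bit:.2e} J per bit erased")

# Illustrative comparison (assumed numbers, not measurements): a ~100 W chip
# performing on the order of 1e15 bit operations per second.
chip_power = 100.0            # watts, assumed
chip_ops_per_s = 1e15         # bit operations per second, assumed
energy_per_op = chip_power / chip_ops_per_s

print(f"Assumed chip energy per operation: ~{energy_per_op:.2e} J")
print(f"That is roughly {energy_per_op / landauer_per_bit:.0e} times the Landauer limit")
```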
So there should be a lot of room for improvement there. This would, or should, result in basically
just speeding up your thinking which doesn’t really make you smarter, it would be more
like slowing time down around you. If your brain was a million times faster,
it’s not that you are really smarter so much as you are experiencing a year of subjective time for every 30 seconds that passes outside. That would probably drive you insane, since humans normally operate at just a bit slower than the second-scale. Our eyes only operate at about 60 frames a second, and we interpret vibrations occurring more than 20 times a second as sound. So unless you had those altered too, you’d be staring at a freeze frame of your surroundings for what felt like half an hour, and blinking your eyes would leave you blind for hours. Even when you can see, it’s still going to be
an eternity of nothing moving. It would be very handy to have days to think about uttering one sentence, plenty of time for coming up with witty comments, but pretty obviously, while speeding the mind up a little bit, to whatever speed you need at the time, would be very handy, thinking that fast all the time would likely be very unpleasant. Dreaming would be outright disturbing too, I should think, as an hour of dreams would translate to just over a century of subjective time. A whole lifetime and then some every time you go to sleep.
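If you want to check that arithmetic yourself, here’s the simple version; the million-fold figure is the same one used above, and the 100 millisecond eye blink is just an assumed duration for illustration:

```python
# Simple subjective-time arithmetic for a brain sped up a million-fold.
speedup = 1e6

seconds_per_year = 365.25 * 24 * 3600

# 30 seconds of outside time, experienced a million times faster:
print(f"30 s outside  -> ~{30 * speedup / seconds_per_year:.2f} subjective years")

# A ~100 millisecond eye blink (assumed duration):
print(f"0.1 s blink   -> ~{0.1 * speedup / 3600:.0f} subjective hours of darkness")

# An hour of dreaming each night:
print(f"1 hour asleep -> ~{3600 * speedup / seconds_per_year:.0f} subjective years")
```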
This is why we often talk in these terms about adding a third lobe to the human brain, essentially an entirely synthetic one that is designed
to handle a lot of these extra issues such as being able to feed you external information
like books or movies or let you talk ‘telepathically’ at your subjective time to others with some
sort of radio link. This isn’t likely an actual lobe but just
a series of extra computer bits added in to handle the problems. We sometimes call this state an SI1, or Super-Intelligence level 1, since it’s the first and most obvious, and lowest level, upgrade to human thinking. Here your brain has simply been sped up a couple orders of magnitude or more, and you are still using the basic brain architecture, only it’s been modified in whatever ways are needed to make this practical. Additions to let you bring in other, faster
inputs or store and sort memory better. This is also a way to extend lifetime: if you are still living only about a century of real time, but your subjective speed-up is only a modest ten-fold, that amounts to an effective lifetime on par with Methuselah and the other Biblical Patriarchs, and if it’s been sped up a million-fold, that would make for an effective lifetime comparable to having been around since dinosaurs walked the Earth.
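The lifetime arithmetic works the same way, and it’s worth seeing just how fast it runs away from you; this is nothing more than multiplication on the same assumed hundred-year lifespan:

```python
# Effective (subjective) lifetime for a 100-year real lifespan at various speed-ups.
real_lifespan_years = 100

for speedup in (1, 10, 1_000_000):
    print(f"{speedup:>9,}x speed-up -> ~{real_lifespan_years * speedup:,} subjective years")
```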
So it’s probably worth considering now how such prolonged lifetimes, either in real or subjective time, would impact us, and that takes us to our second category. Category 2, or slowing down aging, preferably
to a complete stop, has been on humanity’s wish list for a long time. It’s controversial for many reasons, some
of them legitimate and some not. I dismiss out of hand the notion that nobody
would want to live much longer than we do now since they would die of boredom; that’s simply silly. Even if you could get deathly bored, there’s an obvious solution: die. I don’t think many religions or life philosophies that would let you indefinitely extend your life in the first place are going to fail to find some sort of loophole for suicide at age 1000, but even if someone is strongly morally opposed to outright suicide there are plenty of ways to get the job done, especially if you’re
bored. You update your medical profile to say please
do not resuscitate or clone me, and take up exciting and dangerous hobbies like cliff diving in a straitjacket while trying to escape the jacket, or hunting lions with a
nerf bat. You will presumably alleviate your boredom
one way or another. I also don’t buy into the notion that we
need new blood for new ideas and to avoid stagnation. Besides there being plenty of room in this
universe to expand into for new folks, there’s always going to be some deaths. We spend a lot of time on this channel talking
about interstellar colonization and terraforming and building space habitats and even outright
artificial planets, and we talk a lot about Dyson Spheres, swarms of such artificial habitats
able to support in total billions of times as many people as are alive now. In that sort of context, in a civilization where the half-life of people, the period of time someone tended to be alive before dying for whatever reason, was a full million years, you’d still have thousands of new people born a year on Earth and trillions inside our solar system. That’s plenty of new blood.
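That ‘thousands a year’ figure falls straight out of the half-life math; the populations I plug in below, ten billion on Earth and 10^19 across a Dyson Swarm, are just the kind of round assumptions we usually use on this channel, not predictions:

```python
import math

# If lifespans follow a half-life of 1 million years, a steady population needs
# enough births each year to replace the fraction that dies: N * ln(2) / half_life.
half_life_years = 1e6

populations = {
    "Earth (assumed ~10 billion)": 1e10,
    "Dyson Swarm (assumed ~10^19)": 1e19,
}

for name, n in populations.items():
    births_per_year = n * math.log(2) / half_life_years
    print(f"{name}: ~{births_per_year:.1e} births per year to hold steady")
```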
But there is a very real flavor of truth to the notion that a person can only live so long, subjectively, before they really do
hit a point of diminishing returns where going on would simply be pointless. And when we’re dealing with the very high-end
super-intelligences we’ll talk about later in the video that might come even sooner. Some huge super-computer-mind wakes up, rapidly
expands its mind to be trillions of times faster and smarter than a human’s, figures out everything, does its whole mental bucket list, and just shuts off. The apocalypse might be a touch boring if Skynet pops up and, ten minutes later, just when we’re beginning to panic and realize how screwed we are, it just shuts itself off. We also talk a lot on this channel about the
Fermi Paradox, the seeming contradiction between the sheer age and size of the Universe and
the apparent absence of anyone else in it, and the notion of civilizations dying off
from terminal boredom is one we’ll be looking at in the near future. But some of the other objections to extending
life are harder to dismiss. A super-long lived culture is probably a gerontocracy
by default. Your senator or parliament member might look
like they’re thirty years old but they may have been your senator for thirty centuries,
and that’s a lot of seniority. A lot of time for low-risk, long-term investments
to make you super-rich too. And both of those are merely specific ways in which power and influence accrue with time. That’s a lot of time to have kids in, and
grandkids and great-great-great-great-etc grandkids so that you might easily have millions
of direct descendants and you’ve got all that time to accrue knowledge and experience
in. Now age generally does bring wisdom, so that
might result in a very prosperous and well-operated society especially considering it’s one
in which education, social security and pensions, and medical treatment make up only slivers
of a nation’s economy. But the big concern would be that newer younger
folks would tend to feel they were under a serious glass ceiling. If the civilization is still expanding a lot
that’s less of a problem, but if you’ve gotten to a point where you’re basically maxed
out and just replacing losses a lot of younger folks might feel very frustrated and controlled. If you imagine some civilization, regardless
of its total population, that’s only bringing in new people at a rate of maybe 1 per every
ten thousand people a year, that kid is probably going to feel smothered by attention from
their gajillion older relatives and the oppressive feeling that it will take centuries before
they are considered useful. This sense of identity-loss, of not having
much of a purpose in life, is a serious concern for everybody else too. Post-Scarcity economies full of long-lived
people probably do have to be concerned about a lot of existential problems that make it
hard for people emotionally to derive genuine purpose and satisfaction from life. That’s even more true in some of the setups
where the humans are essentially pets of super-intelligent machines that, benevolent or not, simply make them feel useless. I could actually imagine such a creature intentionally acting hostile while faking weaknesses just so its creators felt they had a purpose in
life trying to fight it. Now on the how-to aspect of life extension,
transhumanism tends to be understandably vague. The first and obviously most appealing route
to most is just to stop people aging normally but there are a lot of other options like
mind uploading, which we’ll get to later. Aging, in humans, is really more of a group of processes all wearing you down together. There is a thing called SENS, or Strategies
for Engineered Negligible Senescence, that looks at aging as basically 7 relatively distinct
and combatable things, each with their own strategy. It’s somewhat controversial how accurate this view is, and I’m not a biologist, so I won’t go into as much depth discussing it as I’d like to, but I’d encourage you to look it up, and its criticisms too. But I generally believe our first opening
salvo in a serious war on aging will take some form along these lines and it is important
to understand that aging is a pretty vague term that is composed of multiple different
phenomena. Winning any battle on these fronts scores
a major victory in increasing average lifespan. Now another approach is generally just to
replace bits and pieces of people with cloned or prosthetic bits and pieces, and that’s
our third category. Cloning and prosthetics are both topics of
a lot of controversy, prosthetics less than they used to be, but cloning remains touchy
so let me just say from the outset that I’m not familiar with any serious suggestions
we do this by growing copies of people to harvest for organs. That is not the goal, that would be an especially
monstrous crime too. Whole person cloning is simply growing someone
a twin sibling that’s much younger than them anyway. Prosthetics is nothing new, we’ve got examples
3000 years old and they probably predate that too, but obviously we’re looking at more
sophisticated ones, ideally with full sensory and nerve function. I probably don’t need to tell you that progress in this area has been nothing short of miraculous in recent years. The thing is, neither of these helps much with
the brain. Even if you can keep replacing bits and pieces
with cloned or cybernetic bits, you can’t clone a brain, so you’d probably have to
slowly replace it bit by bit or transfer it entirely into a more electronic setup. That’s category 4, mind uploading, transferring
your mind to a computer. And this is our first big problem because
you can’t transfer your mind to a computer, you can just copy it to one. Sometimes in science fiction this will be
handwaved by requiring a scanning method that vaporizes your brain in the process, usually from ultra-fast serial sectioning with a laser, akin to how some science fiction systems deal with teleportation, vaporizing you while assembling a copy of you elsewhere, but this is just that, a handwave. There’s no real reason you’d need to vaporize a brain to do this, which would make doing so murder. And if you’re not vaporizing it, then you’ve just got
yourself sitting in a chair while your digital copy is either on metaphorical ice or is actively
running as a new person, quickly diverging from you since it is having new experiences
you are not and probably pretty emotionally significant ones. So you are stuck with two people, two who
are initially pretty similar but will diverge into two different people. The same goes for cloning yourself in some fashion into a genuine duplicate body, organic or synthetic, with a complete copy of your
memories. You still end up with two different people. Now I’m saying people, and of course a lot of folks are dubious about whether such a copy would be a person. I, honestly, don’t see a good rational argument why it wouldn’t be. Trying to prove it is a pretty futile process. We have a notion called the Turing Test, which is basically meant as a way to distinguish a computer from a human; you actually do one of these every time you do one of those irritating Captcha codes, and that’s also why many of them jokingly include a note that says “Prove you’re a human”. Obviously that wouldn’t work with more sophisticated
forms of the test but a lot of us feel that if you can’t make a test that every human
can pass and a machine can’t, then it looks like a duck, quacks like a duck, and should
be accorded the presumption of being a duck. I mean heck, I don’t know if any of you
are real people, nor you I, and we’ll be looking at this concept more in the Simulation
Hypothesis Video but the most rational and sane approach is pretty much to assume that
if something is making a good case that it is sentient you should probably treat it that
way until it can be proven otherwise. Reasonable Doubt and all that, we might think
you killed someone but we need to be very sure, beyond a reasonable doubt, that you
did before we’re going to chop your head off for it. I have no idea if I have a soul or free will, or if ‘me’ really exists, but I find it easier and more pleasant to assume all of the above, and to me it’s always seemed only fair to extend the benefit of the doubt to anyone or anything which shows decent indicators it might have those too. WBE, or Whole Brain Emulation, as this is
called, usually gets calculated as requiring around 10^16 to 10^17 hertz of processing power to pull off, though there are also versions of this analysis that require a lot more. We did hit that level in the last couple years with our best supercomputers, which are much bigger than a brain, but WBE is still a goodly way off. Still, we have basically finally reached the point where we are getting into the human level of processing power, which will lead into our next topic of Artificial
Intelligence. We will look more at maximums, or rather minimums, of processing power, in terms of how little energy it might take to run a whole human mind, in the Simulation Hypothesis and Black Hole Farmers videos, but using Landauer's principle at a rough body temperature of 300 Kelvin and that 4x10^16 Hertz value for WBE, you need somewhere around 100 microwatts minimum to run a person in real time. For context, you could run a million people off a hundred-watt light bulb, and a subjective lifetime of several decades would run you some tens of thousands of joules, or the equivalent of about a milliliter of gasoline, a dozen or so drops. That’s the absolute minimum, at room temperature;
I doubt getting there, or even near there, is terribly realistic but even getting within
a couple orders of magnitude would be pretty impressive. In the context of a full solar englobement, a Dyson Sphere devoted to using the sun’s light as nothing but a computer, often called a Matrioshka brain, which we’ll look at in the Megastructures series shortly, one done all the way out where Earth was would squeeze a decent-sized family into a spot the size of your thumb, living in a nice virtual world, and permit a total human population, in WBE terms, of around 10^30 people, more than a billion times what we normally project for a Dyson Swarm population of regular people, which is itself more than a billion times the current human population.
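For the curious, here’s the back-of-the-envelope version of those numbers; the Sun’s luminosity is the standard ~3.8 x 10^26 watts, the 4 x 10^16 operations per second per mind is the same WBE estimate used above, and the outputs are Landauer-limit lower bounds, not predictions of any real hardware:

```python
import math

# Landauer-limit sketch: minimum power to run one emulated mind, and how many
# such minds a Matrioshka brain could support on the Sun's full output.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # operating temperature, Kelvin (roughly room/body temp)
ops_per_mind = 4e16       # bit operations per second for one WBE mind (estimate used above)

power_per_mind = ops_per_mind * k_B * T * math.log(2)   # watts, absolute minimum
print(f"Minimum power per mind: ~{power_per_mind * 1e6:.0f} microwatts")

# How many minds per hundred-watt light bulb?
print(f"Minds per 100 W bulb: ~{100 / power_per_mind:.1e}")

# How many minds on the Sun's entire output?
solar_luminosity = 3.8e26  # watts
print(f"Minds on the Sun's full output: ~{solar_luminosity / power_per_mind:.1e}")
```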
Anyway, on to AI, Artificial Intelligence, our fifth category. Now there’s not much for me to say here
because I don’t really believe in artificial intelligence, or more accurately I think all
intelligence is artificial. I’m really not worried about Google waking
up to sentience and assuming direct control to become the Harbinger of our Doom. I’m also outright morally opposed to slapping something like Asimov’s three laws of robotics onto an AI because I’d regard that
as slavery. I don’t think much is changed if you just program something to enjoy being told what to do, any more than a plush penthouse stops being a jail just because the armed guards keeping you in it are courteous about it. I’ve already mentioned my opinion that if
something is acting like a sentient entity you ought to give it the benefit of the doubt,
but the thing is I generally take this a bit further and assume they are not just ‘a
person’, my loose catch-all for anything about as smart as or smarter than humans, alien,
computer, whatever, but also basically a human too. Realistically early human level AI’s will
likely be heavily copied off human minds anyway, and since the whole point is to make a learning
machine, it will also be taught by humans and will probably try to act like us as much
as it can for whatever reasons. If it’s a totally logical critter, well
it’s pretty logical to be on friendly terms with your creators who you will likely have
deduced might have stuck some sort of failsafe kill mechanism into you. A lot of folks involved in Transhumanism in
general tend to figure we’d be replaced by AIs eventually, sooner rather than later, which we’ll discuss in the Singularity section, but I tend to assume that if we can build a computer that can outthink us we can also improve our own brains too, and I would pretty much consider either thing to still be human anyway. If we’re not using strictly biological definitions,
which I don’t think can really apply at this level, then an intelligence made by humans
and raised by humans has pretty decent claims to being human. Heck, we tend to regard our pets as human
and they are demonstrably not as smart as us. Now our last category, the Technological Singularity,
is one we have to be kind of vague about, so I’ll also be brief. The basic premise is simple enough: technology has been progressing at a fast rate, seemingly an accelerating one, and we’re getting pretty
close to being able to make AI’s or implement some of these notions for making people smarter
too. If you can design a better brain you’d expect
that brain probably can design an even better one and so on. The singularity reference is pretty much just
a reference to mathematical singularities, places where you can’t really predict behavior
of systems. The easy version being how you can’t divide by zero; things are just not clearly defined there. And the notion here is that you’re going to eventually create a series of recursively improving computers that eventually get to be so far beyond humans that they regard us as nothing more than ants. There’s nothing human about them anymore,
they are simply that powerful. A lot of folks, loosely called Singularitarians,
think such an event is just a generation or two away. That’s generally where all agreement ends
inside these groups and there’s a lot of counterarguments to how likely this notion
is to come about in the near future. I tend to think the basic logic has some flaws and that such an event is much further off, but you can examine the arguments yourself and make that call
on your own. There are tons of works, fiction and non-fiction,
discussing this concept. The point of this video is just to familiarize you with the concepts; we may revisit parts of it in more detail down the road, but I’ll
leave off here today. If I had to sum up Transhumanism in a nutshell
I’d say it’s basically just an extension of modern attitudes anyway: that humans are imperfect creatures and civilizations the same, and that there’s always room for improvement and nothing wrong in and of itself with trying for that. In general it’s a pretty optimistic approach
to things, and one I think we all mostly agree on, even if the specific paths and the degree of caution appropriate in pursuing them are certainly debatable. Tricky topic, if a fun and fascinating one,
and I’ll admit I’m glad to have it out of the way; it was selected repeatedly by polls of the audience on this channel, and it’s been very hard to do justice to, but it did deserve covering. Also, we needed to discuss some of the topics
here for some of the other topics we’ll be looking at soon. Speaking of that, next week it’s off to
discuss the Simulation Hypothesis, the notion that we might be living in an entirely simulated
reality, and we’ll look at that and discuss it in the context of the Fermi Paradox. As always, questions and comments are welcome,
and if you enjoyed the video, like it and share it with others, and if you haven’t
already subscribed to the channel, you can hit the subscribe button and you’ll get
alerts when new videos come out. While you are waiting for those, feel free to try some of the other videos in the playlist, and until then, thanks for watching and have
a great day!