This episode is sponsored by Audible. The galaxy appears to be rich in worlds and resources ripe for the taking, but it doesn't
appear that anyone has done so. Could it be that some tragedy eventually befalls
all those who try? So today we return to the Fermi Paradox Great
Filters series for a look at Late Filters, challenges that might prevent civilizations
such as ours from ever moving out into the galaxy, and thus prevent us from detecting
them. The Fermi Paradox is the big question of how
space can be so huge and ancient and yet apparently not populated by any civilization we can detect. In this series we've looked at one popular
solution to this paradox, that the pathway from potentially habitable world to an advanced
civilization like ours is far more difficult than we often think, and that many hidden
perils, what we call Filters, might lower the odds at each step of progress. Any potential Fermi Paradox solution has to
explain why we see a Universe empty of advanced civilizations that we can detect using the
methods we have now. It could be that life rarely emerges or that
life emerges but rarely gets more advanced than algae. It could be that disaster befalls worlds,
setting them back or sterilizing them, and we've just been very lucky. It could be that evolution just doesn't
often produce complex brains capable of abstract thought, let alone spacecraft design. It could be that many planets have intelligent
animal life that hasn't built any radios, which means we couldn't currently detect
them. But we certainly could detect civilizations
that had expanded to the stars in a big and bold way. They'd be immense and long-lasting and hard
to wipe out even if entire planets or solar systems were destroyed. But of course, we ourselves haven't expanded into the galaxy yet either. So as much as we might hope that we've cleared the big hurdles, that they're behind us now, and that it will be smooth sailing from here on out, it's still entirely possible that more major, possibly insurmountable hurdles,
the Late Filters, lie ahead of us. Late Filters are the hurdles between a crudely
spacefaring civilization like ourselves and achieving a future among the stars. The two big ones we'll discuss are the many
opportunities for a civilization to destroy itself before reaching the stars, and the
possibility that they'll simply lack the means and desire to colonize the galaxy. We'll also look at some of the stranger,
less often discussed reasons why either of these Late Filters might occur. We should also note from the outset that we
don't necessarily have to establish that any of these Late Filters is the sort of thing that gets every single civilization, if we're assuming all those prior filters have already winnowed down the pool so much that there aren't a lot of civilizations who have reached the point where they will confront these Late Filters. On the other hand, since we have no data to
work with on those earlier filters, we do need to consider the possibility that life
could be incredibly common, popping up in virtually any warm chemical soup on any vaguely
supportive planet or moon, but then get swept aside by these Late Filters. Such a Universe might be one in which our
galaxy is host to a billion worlds covered in the wreckage of fallen civilizations, a
nightmarish thought that seems like something H.P. Lovecraft would write about, which is an appropriate
thought since that will be next week's episode. Now for that first Late Filter, self-destruction: we've spent a lot of time on the show talking about potential apocalypses and found that most standard ones don't hold up under inspection. You can reference those episodes, particularly
Fermi Paradox Apocalypse How, 5 Ways the World Could End, Cyclic Apocalypses, and Machine
Rebellion for most of the usual suspects like Nuclear War, Climate Change, Supernovae, Gamma
Ray Bursts, and Artificial Intelligence. In general though, we've found they don't
make good candidates for successfully eliminating the dominant species - or at least doing so
in a way that matters to the Fermi Paradox. A race of intelligent machines wiping us out,
for instance, is certainly a concern for humanity, but mostly doesn't matter to the Fermi Paradox
because it just replaces one civilization with another. The extinction of the Neanderthals or some
other early hominid is not a Fermi Paradox solution, because it just resulted in the
rise of another species of intelligent critter, or a hybrid thereof. We've also looked a lot in previous episodes at the second Late Filter: our ability to get out to the stars and why we should. The primary episodes on this would be our
Generation Ships series, which details some of the actual nuts and bolts of making the
journey and getting the job done when you arrive. You can also refer to the Life in a Space
Colony series or the Outward Bound series, with the latter mostly focused on the colonization
and terraforming aspects. If the filter is about being willing and able
to go out and colonize the galaxy, then we have to establish that it's something we
can do and would wish to do, and that many other civilizations would too. All of these episodes and series combine to
give us our default view on the show: that the Late Filters might exist, but do not seem likely to be strong barriers. Many, perhaps even most, advanced civilizations might be stopped by them, though I suspect in truth few would be; more importantly, the odds just don't seem bad enough. It's hard to imagine that they could explain
the Fermi Paradox without first assuming the early filters have already done most of the
weeding, leaving only a handful of worlds out of the billions in this galaxy to ever
have developed advanced civilizations. Now of course there are some assumptions about
space travel getting much easier in that reasoning, and that's why we've done whole series
of episodes explaining how we might go about interstellar colonization, even without needing
any new physics. Indeed, while it would be incredibly expensive,
we could do it even now by using nuclear bombs as the propellant. This is also why the Fermi Paradox didn't even use to be a paradox. Back in Fermi's day, when he was helping
design nuclear weapons and we had yet to draw up a serious plan for getting men on the moon,
the notion that a species might nuke themselves to smithereens at some point seemed a lot
more realistic than the idea that they might settle planets around alien suns, the nearest
of which are millions of times further away than the Moon. If you go back only a couple generations before
that, there was no paradox because our civilization really hadn't absorbed the idea that there
were countless other worlds out there that might have been kicking around and evolving
life for longer than our world. The paradox started emerging as we began seeing
realistic pathways to the stars, while simultaneously living under the threat of doomsday for decades
without it happening. The more those paths into the void emerged
and took clearer form, the more time that went by without us obliterating ourselves,
the less likely the Late Filters seemed. While we can still see ourselves failing in
these regards, it no longer seems like some inevitable thing that would get every civilization
that arose. Alternatively, we expect that a desire to explore and expand is not some peculiar and unique characteristic of humanity, but a characteristic most civilizations would have. Indeed, it's hard to imagine an evolutionary pathway that wouldn't favor at least some expansionist tendencies, nor a technological civilization arising that didn't value curiosity. But without any data to back that up, we only
have educated guesses and intuition. These might be such high hurdles to leap that
they cause the galaxy to be crowded with ruined worlds, or with isolated planets that never managed to find the will or the ability to venture forth. That doesn't mean that aliens would all
be habitual invaders of course, just that in order to build a civilization you probably
at least had a tendency to settle new available lands when you found them. So let's look at the less obvious pathways
those two filters might take. We can of course come up with any number of
hypothetical cataclysms that might ruin us; we never know what wonders or horrors might
lurk behind the next door when it comes to technology. But ultimately a strong Filter would require
something a civilization is almost guaranteed to discover before they've expanded out far enough and set up enough thriving colonies that the loss of their homeworld does not end them: some kind of Suicide Pact Technology which guarantees their own destruction. Generally this would need to be something whose threat could not be known until it was too late, though folks have suggested both nuclear weapons and artificial intelligence as Suicide Pact Technologies, arguing that in such cases it might not matter whether you knew of the threat in advance or not. Generally I don't feel these make good examples
though, even ignoring those specific cases, because if folks are aware of the threat,
then it stands to reason that at least some civilizations would avoid their use. Though excessive caution could lead to scientific
inhibition too. Recall the recent concerns about the Large
Hadron Collider experiments and black holes eating our planet. A willingness to embrace risk in science and
technology is likely necessary for achieving advanced technology, so any species that gets
to our point might be naturally more inclined to take risks, or to pretend they aren't big
risks, and thus be more vulnerable to Suicide Pact Technologies. However, examples of unknown threats might
come from something like trying to develop a faster than light engine, which almost invariably
involves messing around with spacetime in a way that would violate causality, where
effects would end up preceding their cause. Personally I don't think such things can work, precisely because I don't think causality can be violated, or that you can travel backward in time, and I point to the absence of time travelers as good evidence of this. Indeed, if you could travel back in time without
erasing the travelers by doing it, any fanatical group who disliked their civilization could
opt to colonize Earth a billion years ago, rewriting the future on a young fresh world,
instead of some other planet light years away. But other schools of thought suggest that
you'd have natural effects that prevented causality violations. An example might be a faster-than-light engine
that triggered those forces and resulted in any civilization that tried to use it being
deleted backwards in time, erasing all threads of history that led to that FTL drive being
developed and erasing everyone who used it, no matter how far they fled. That's a particularly horrifying Fermi Paradox
Late Filter solution, since it might wipe out any trace of those previous failures,
meaning no warnings could be spread to others to avoid making the same mistake, or it might create worlds stuck in an effective technological Groundhog Day, with any progress toward space travel deleted from their worldline without them ever knowing it. We see something like that in Isaac Asimov's classic novel "The End of Eternity". One way or another, every potential timeline
that could lead to discovering the FTL engine gets deleted, and every time they start pursuing
a non-FTL approach, someone notices that FTL option and ends up getting the non-FTL option
deleted from history too. But of course, if successful FTL research
always leads to homicidal Time Lords deleting their own timeline, the only thing that we,
on the surviving timelines, would experience would be a universe in which causality and
the lightspeed limit just appeared to be absolute and unbreakable according to all of our known
physics. Which, now that I think about it, is exactly like our current situation. Hmmm... Speaking of multiple timelines, we can't
ignore that those might exist and that folks might figure out a way to reach them. The implicit assumption is that you colonize
a galaxy because it's free land, but it takes a lot of work, and you always go for
the lowest hanging fruit first. You settle the fertile river delta before
the ice cold tundra, so to speak. However, while an exponentially growing population
could fill up even our Observable Universe quite quickly, travel time aside, alternate
timelines and Universes are a different story. A population can double in a single generation
easily enough, but if there's a Universe for every single minor event that could go two ways, I double my Universes much faster than I double my people.
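A quick back-of-the-envelope sketch makes the mismatch obvious; both rates below are invented purely for illustration, and the branching rate is absurdly conservative:

```python
# Hypothetical rates, chosen only to illustrate the scaling argument above.
GENERATION_YEARS = 25       # assumed population doubling time
BRANCHES_PER_YEAR = 365     # assumed rate of two-way branch points (one per day)

for years in (25, 100, 1000):
    pop_doublings = years / GENERATION_YEARS
    universe_doublings = years * BRANCHES_PER_YEAR
    print(f"{years:>5} yr: people x 2^{pop_doublings:.0f}, "
          f"universes x 2^{universe_doublings:,.0f}")
# Even with branching this slow, the universe count outruns any conceivable
# birth rate, so "filling up" the branches never becomes a motive for expansion.
```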
Now that's actually not a good Fermi Paradox solution, for reasons we've discussed before, most recently in Aloof Aliens. If you can go to these alternate universes
and come back, then your homeworld will have near-infinite resources to work with and very
little fear of extraterrestrial invasion, particularly as you can offer those same endless
worlds to a potential invader looking for growing space with one hand while pointing
to the massive armada you could construct with those resources with the other hand. You also wouldn't bother invading a multiverse for genocidal reasons, since the ability to hop between universes would show that to be futile; you can't exterminate cultures across the countless other universes, as there's always going to be a timeline where you chose not to or failed in the effort. So you're going to want to either explore the Universe you originated from, or at least build super-powerful transmitters to say "Hi, nice to meet you, we'd love to hear from you and can offer you endless worlds in a cultural exchange. Oh, and also, if you're unfriendly, we can crush you like a gnat." The flipside, as we mentioned in Aloof Aliens, is that, should hopping universes be a one-way trip, you'd always have folks in your civilization
who stayed behind and decided to expand within their original universe. One-way presumably means one-way, meaning
you're not even going to be getting reports back from the folks who made the journey about whether they were successful or ended up getting vaporized, so even if the theory works 100% on paper, some folks will decide they don't want to make the trip and will
opt for classic colonization. Now I mentioned a moment ago how even though
a population can grow exponentially, the number of possible multiverses presumably branches
off and grows even faster, so that you can't really fill them all up, at least in a meaningful
sense, and you can see the Infinite Improbability Issues episode for further discussion on that
matter. But it raises a good point with apocalypse
scenarios. It's common to point out that any finite probability, given a long enough time, can and will happen. Meaning that even if your civilization only has a 1 in 1000 chance in a given year of sterilizing their planet using either nukes or some more terrible weapon, it is going to happen eventually; such a cataclysm has a 50/50 chance of happening within about 691 years, and only around 1 in 22,000 civilizations would survive 10,000 years. However, this reasoning does have a flaw: it assumes a static probability, and that's rarely the case when intelligent agents are involved. Similar to the way the number of potential multiverses grows faster than populations do, other related probabilities can shift over time too. Your odds of being hit by a planet-killing asteroid drop as time goes by, because asteroids of that size will mostly be leftovers from the formation of your solar system. Each time one collides with a Jupiter-equivalent or the Sun, fewer are around to hit your world. Many will be ejected from your solar system too, though to be fair, you could also be hit by one ejected from another solar system. The probability decays over time, and thus such a cataclysmic collision isn't inevitable.
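To put rough numbers on that, here is a minimal sketch; the 1-in-1000 annual risk is the figure from the example above, while the rate at which that risk shrinks is invented purely for comparison:

```python
import math

ANNUAL_RISK = 1 / 1000   # hypothetical chance of self-sterilization per year
RISK_DECAY = 0.001       # assumed fractional drop in that risk each year

def survival_static(years: int) -> float:
    """Odds of surviving if the annual risk never changes."""
    return (1 - ANNUAL_RISK) ** years

def survival_decaying(years: int) -> float:
    """Odds of surviving if the annual risk shrinks a little every year."""
    survival, risk = 1.0, ANNUAL_RISK
    for _ in range(years):
        survival *= 1 - risk
        risk *= 1 - RISK_DECAY
    return survival

print(math.log(0.5) / math.log(1 - ANNUAL_RISK))  # ~690-odd years to even odds
print(survival_static(10_000))    # ~4.5e-5, roughly 1 in 22,000 survive
print(survival_decaying(10_000))  # ~0.37 once the risk itself keeps shrinking
```

The point is not the particular decay rate, just that any sustained effort to shave the risk down breaks the "inevitable doom" arithmetic.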
Natural disasters also don't make good Fermi Paradox Late Filter solutions, because a sterilizing natural cataclysm is statistically improbable on these timescales. Such major disasters aren't going to happen with the necessary severity and frequency in a place where life has managed to create civilizations like ours, and so would be unlikely to occur in the period of maybe a few centuries between this point and interplanetary or even interstellar colonies. We're also more robust to such disasters, at least when it comes to surviving with some remnant able to rebuild, than non-technological civilizations or animals. But more to the point, there's a fairly limited window of time between now and when we could be proofed against such threats, and it's vastly smaller than the time that has already passed without them happening. Assuming an asteroid did kill off the dinosaurs 65 million years ago, for instance, a similar strike would not kill us off now, and I'd not be surprised if we were able to detect and prevent such strikes even as early as 65 years from now, a window of time a million times shorter, and thus one in which such an impact is a million times less likely than over the span since that last one. Also, people like to live, so any threat we can be aware of will tend to get an effort to decrease it, and to keep decreasing it faster than we are, in effect, rolling the dice. Those could take some interesting turns too. As an example, technologies such as 3D printers
could become so advanced that one lone lunatic could fabricate a doomsday device in their
basement. Were this true, based on the supposition that
on a long enough timeline, anything that can happen will happen, one could argue that any
such civilization and its colonies are doomed. Whether or not this is true, it ignores other technological improvements. However, this also assumes an advanced civilization
even contains such lone, homicidal lunatics. Schools of thought on the subject vary, but
today we generally assume that psychological conditions are analogous to physical conditions:
detectable before turning into homicidal tendencies and either treatable or preventable. Science fiction often contemplates many strange
and often unbelievable civilizations, but rarely seems to consider one where insanity
is as easily detected and treated as a tooth cavity. There are any number of illnesses that used
to wipe us out in droves and we often came to view them as an inevitable part of life
until we came to understand them better and were able to cure or prevent them; mental illnesses may turn out to be the same. Few would argue against the idea that if we were more intelligent, things would probably be better, although I do say 'few', as many feel 'ignorance is bliss' has some truth to it. Another Fermi Paradox Late Filter option is that really advanced civilizations tend to suffer from a lot of nihilism, seeing through the illusion of free will and purpose and meaning, and just give up. Or a civilization might turn into hedonistic, lazy layabouts, tended hand and foot by robots; we explored that more in our Post-Scarcity
Civilizations series and our Virtual Worlds episode. Generally such things do not work as Fermi
Paradox solutions though, as discussed in those episodes, but in short form, even if
we assumed the whole civilization, every single member, went down such a path, which seems
unlikely, those same technologies generally permit space expansion before they are sophisticated
enough to allow those other paths. As an example, if I can build a robot for
farming and manufacturing, I can build one that makes spaceships and orbital habitats. If I can make simulated people in virtual
realities sophisticated enough to feel like a decent facsimile of a normal person, to
tempt people to dwell in virtual utopias, then I can also make an AI able to run complex
manufacturing and space navigation; indeed, the latter is easier than the former. Of course many people or civilizations might
not be willing to go the virtual route or turn their lives over to robots, even if they
weren't worried about getting killed in some machine rebellion, and they're likely
to pass that preference on to their kids, especially if the groups embracing those technologies
all seem to turn into lazy slugs. While killer robots are obviously a potential
threat, they don't make a good Fermi Paradox solution unless they are not too intelligent themselves. Grey goo is an example: a bunch of self-replicating machines that reduce a planet to nothing but more of themselves. But that's happened before on this planet; the self-replicating machines just happened to be simple biological life, green goo. Indeed, that's arguably happened several times, and the grey goo might evolve intelligence eventually. However, a civilization which employs smart machines but is afraid of them might opt for something subhuman, with built-in replication controls that prevent mutation. Digital schematics aren't the same as DNA, and it's not that hard to ensure a replication method whose odds of mutation are vastly lower than in terrestrial organisms.
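As a minimal sketch of that kind of copy control, assuming nothing more exotic than an ordinary cryptographic hash, a replicator could simply refuse to build from any schematic that is not bit-for-bit identical to its factory original:

```python
import hashlib

# Hypothetical canonical blueprint and its factory-recorded digest.
CANONICAL_SCHEMATIC = b"...blueprint bytes for the mining drone..."
CANONICAL_DIGEST = hashlib.sha256(CANONICAL_SCHEMATIC).hexdigest()

def replicate(schematic: bytes) -> bool:
    """Fabricate a copy only if the schematic matches the original exactly."""
    if hashlib.sha256(schematic).hexdigest() != CANONICAL_DIGEST:
        return False  # refuse to build from a corrupted or "mutated" copy
    # ...fabrication would happen here...
    return True

# A single flipped bit is enough to be rejected, unlike biological mutation,
# which usually still yields a viable and slightly different organism.
corrupted = bytearray(CANONICAL_SCHEMATIC)
corrupted[0] ^= 0x01
assert replicate(CANONICAL_SCHEMATIC) is True
assert replicate(bytes(corrupted)) is False
```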
So if they somehow ran amok and murdered us all off, they might never evolve in any significant way even on billion-year timelines. The galaxy might be swarming with worlds full of machine intelligences of roughly mammal-level minds; indeed, many whole solar systems might be. One of the most obvious AI uses would be asteroid mining and space industry, where you need some brains on-site to handle the light-lag issue of remote control and where self-replicating machines are very handy; see our episode Void Ecology for some less apocalyptic visions of that future.
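A rough light-lag estimate shows why the brains need to be on site; the 2 AU Earth-to-belt distance below is just a typical round number, not a fixed value:

```python
C_KM_PER_S = 299_792        # speed of light
AU_KM = 149_597_870         # one astronomical unit in kilometers

distance_km = 2 * AU_KM     # assumed typical Earth-to-main-belt distance
one_way_min = distance_km / C_KM_PER_S / 60
round_trip_min = 2 * one_way_min

print(f"One-way signal delay: {one_way_min:.1f} minutes")
print(f"See-and-respond round trip: {round_trip_min:.1f} minutes")
# Roughly 17 minutes one way and over half an hour before any joystick
# correction arrives, which is hopeless for close-in work; hence local autonomy.
```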
Controlling an AI though, especially a human-smart one or smarter, often involves suggesting some prime directives, Asimov's Three Laws of Robotics being the most famous example. They're okay as far as they go, though they've got some big weaknesses too, and we might do an episode someday playing around with all the horrible loopholes in them; Rob Miles over on Computerphile did a nice breakdown of them some years back that I'd highly recommend. Let me give an example of one scenario that is
a good Late Filter though. Imagine for the moment that many of the technologies
we often contemplate on this show come to fruition sooner rather than later, like radical life
extension. Let us also now assume we program the machine
to protect humans above all else. Now we could imagine some truly awful scenario
where the machine turns every human into some blob-brain in a tank unable to hurt themselves,
or perhaps into a Matrix-style tank, living in a simulated world, utopian or not, but unable
to be harmed. Let us instead assume the programmers were
a little bit more careful. This AI is told it must protect all humans,
and its charges are effectively immortal or close to it, and it views its charge as protecting
every human currently alive. That's a logical caveat, by the way: if you
leave potential humans in the mix, then it can justify killing or hurting some people
now if it saves more lives down the road. Utilitarianism, the greatest good for the
greatest number, can get pretty dark even without including people who donât even
exist yet. Such being the case, that machine only cares
about those people alive right now. Now in theory, it would feel the same about
anyone else who was born, but it has no motivation to let anyone else be born, because every new
person represents a non-zero threat to the other people. This is the same reason you don't stick an armed Asimovian robot on guard duty over your child: it can't harm any other humans of course, that's Asimov's First Law, but it will turn its machine guns on your pets and any local wildlife capable of causing harm to your kid, traumatizing that kid of course, though it needs to be pretty smart to know that. This machine doesn't escape such behavior
by being smarter. There is no bigger threat to humans than other
humans, and for that matter every other human created further limits the time, attention,
and resources it can devote to protecting its current charges.
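As a toy model of that incentive, with every number invented purely for illustration, suppose each person poses some tiny independent annual risk to every other person under the guardian's care:

```python
# Toy model: pairwise risk among the guardian's charges grows roughly with
# the square of the population. All numbers are invented for illustration.
RISK_PER_PAIR = 1e-12   # assumed annual chance one person fatally harms another

def expected_losses_per_year(population: int) -> float:
    pairs = population * (population - 1)
    return pairs * RISK_PER_PAIR

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} people -> {expected_losses_per_year(n):.3g} expected losses/yr")
# ~1e-6 per year at a thousand people, ~1 per year at a million, ~1e6 per year
# at a billion: a guardian judged only on harm to its current charges minimizes
# this by never letting that number grow.
```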
To make that worse though, even if you tell it more people is good, it will not like interstellar
of light lag and living under alien suns are rapidly going to turn into aliens themselves,
and thus be an alien threat to humans, but at least as much because the colony will be a threat to us back home. It has to send out a copy of itself to watch over those colonists; even if it can somehow be convinced they shouldn't be under its protective oversight, it knows it needs to watch them to make sure they don't become a threat to the homeworld as millennia of mutation and cultural divergence take effect, and as those colonists are free to multiply until they potentially have all the numbers and resources of a galactic empire to pose such a threat. Worse, it has to be afraid of that very copy it sent. Just as the biggest threat to a human is another human, the biggest threat to an AI is another AI with a conflicting agenda. If you've got a machine whose primary objective is to keep its person or persons safe, it will strip-mine the galaxy to provide all
the weapons and defenses needed to do that if able to do so, and two of them with different
people to keep safe, if they perceive any probability those goals will be in conflict,
might tear a galaxy apart trying to kill each other. That AI back at the homeworld knows that,
so it has precious little motivation to go around seeding the galaxy with anything smart
enough to ever become a potential threat to its charges back home. Now it could, as we've discussed elsewhere, go about strip-mining the galaxy with dumber machines that bring resources home, but
this is where we get to the notion that some civilizations might choose to stay home and
not expand much or at all. Any civilization that draws similar conclusions, namely that adding to their number adds to their risk, or that the worlds they settle might be friends down the road but also might be enemies, and that zero chance of enemies is better than a chance at friends, doesn't have much reason to colonize. You don't need to cannibalize a whole galaxy
to come up with enough resources to ensure your current population can be maintained
indefinitely at a high living standard, especially if an alternative cosmology like the Big Rip
proves true, as opposed to the currently preferred model of the Heat Death of the Universe. For the latter, there's no such thing as
too much resource harvesting because time goes on for such periods that they make a
trillion years look like an eyeblink, and we looked at survival methods for keeping
a civilization going long after their Sun would have died in our Civilizations at the
End of Time series. In a Big Rip scenario, your existence is
on a timer, a very long timer to be sure, but not one so long that you need to pillage
a whole galaxy to keep your planet living in vast wealth until the End comes. As Late Filters go, since our normal Dyson
Dilemma concept revolves around a desire for expansion or at least resource acquisition,
anything that puts a finite and immovable end point on civilization is problematic for that argument. This is even more true for any alteration
that removes their biological compulsion to have kids. That's a trait we can expect to be very common of course; species that don't wish to reproduce even at risk to themselves aren't going to last long, and civilization-building species aren't likely to arise from any critters that aren't willing to cooperate with their own members or sacrifice much to protect their offspring. Strictly logically speaking though, if your own primary goal is personal survival above all else, kids are bad: they cost effort and create competition. Humans are mortal, we die, our kids replace us. We're not Greek Gods; we don't eat our kids to keep them from overthrowing us one day. But one could imagine a species with advanced technology that allowed radical life extension feeling otherwise. Indeed, a common argument against Fermi Paradox solutions that assume species will in general be aggressive and expansionist to at least some degree is that they may get enlightened out of such behavior, or alter themselves to remove it. That is a good example of how good intentions can have very bad outcomes: a species that altered itself to not want to be expansionist, or not to reproduce beyond a chosen 'ideal' population number, might get pretty zealous about that and start embracing the attitude that any additional people represent a clear and present danger to them, which of course they do, especially in a near-immortal civilization. And in a less direct sense, every new person is taking up resources, threatening to push you down the social hierarchy, potentially stealing your friends or job or primacy as an expert in your field, or, again, simply might kill you. On a positive note, beyond my not thinking
this would make a very good Late Filter, such a civilization would still want to expand
to some degree, even if just with an automated extermination fleet, to ensure no actual aliens
arose as threats, as opposed to daughter colonies turned alien by millennia of separation. Fundamentally, like most isolationist policies we look at for the Fermi Paradox, it's not that some might not choose such a path, it's that many would not, and also that such paths become far less effective if others choose not to follow them. Your world might be nice and safe from your colonies by not having them, but it's in a lot of trouble if some other world did have them and starts looking at you as a threat. Which is quite likely, considering your most obvious characteristic is that you're very xenophobic. As we mentioned in Hidden Aliens, you can't realistically hide a civilization, so if you want to be left alone you put up big "No Trespassing" signs, and you definitely don't blow up unwitting intruders if you didn't put those signs up, or other civilizations will send more such intruders, only much less unwitting and much more heavily armed and angry. In the end it's generally better to expand
where you can, so you have more resources to defend yourself with, and not to take any
actions which will make your neighbors think you're basically looking for an excuse to
murder them, which is rather heavily implied to be your desired goal if you clearly regard
even your own colonists as something you shouldn't have because they might hurt you. Paranoia is not a desirable trait in your
neighbors. So it seems the best defense against the plausible
Late Filters is just pragmatism. Anyone who can build mighty high-tech civilizations
is generally going to put real effort into foreseeing future problems and planning contingencies. For facing unpredictable threats, the most
pragmatic plan is to simply have tons of resources, make lots of helpful friends, and spread out far and wide to minimize overall damage. And if your civilization doesn't have that basic attitude from the beginning, you're probably not going to advance far enough to
worry about the Late Filters anyway. But of course, I am a notorious optimist. And while optimists have more fun along the
way, the pessimists sometimes turn out to have been right. As we explore the vast universe, both by traveling
to new galaxies and experimenting with new science, we will always face unknowns. We can speculate based on what science we've figured out, and we can prepare for disasters that we're capable of imagining, but until
we actually cross that dark, tranquil-looking ocean, until we actually initiate that bold
new scientific experiment, we can never truly know what balance we've disturbed, what veil we've pierced, whose attention we've attracted, or what existential threats we've unleashed. We'll ponder that darker view of the Universe next week in Gods & Monsters: Space as Lovecraft Envisioned It, our topic poll winner from
a couple months back. Lovecraft tends to be viewed as a horror writer
more often than a science fiction writer, but the two genres are often mixed together
and another author who frequently combined them is Richard Matheson, perhaps best known
for his 1954 novel I Am Legend, our Audible Book of the Month. It's been adapted into film a few times and is often considered the biggest influence on the zombie apocalypse genre, though amusingly it has no zombies in it. In a period where most apocalyptic literature
in science fiction revolved around atomic weapons, robots, and other fictional high-tech
devices of physical destruction, Matheson paints us an end of the world that is far
more biological, evolutionary, and psychological. The protagonist, Robert Neville, is a very
human character who, unlike those in many horror stories, actually responds with pragmatism and common sense to what seems like the end of humanity, and while the story also features no robots in it, it's what made me realize that machine rebellions were generally not good Fermi Paradox solutions. You can get a free copy of I Am Legend at Audible.com/Isaac or text Isaac to 500-500. Audible offers a 30-day free trial, and each month you're a member you now get a free audiobook and two Audible Originals, and those credits roll over to the next month or year and stay yours, along with any books you got,
even if you later discontinue your membership. And with their convenient app, you can listen
on any of your devices and seamlessly pick up where you left off, whether you're listening
at home, commuting, running errands or off jogging or at the gym. Audible makes it cheap and easy to access
a vast collection of amazing stories on any device. As mentioned, next week we'll be dipping into the sci-fi horror genre ourselves, as we explore H.P. Lovecraft's view of the Universe, our most recent poll-winning topic, and thanks again to the thousands of viewers who voted in that poll. The week after that we'll get a bit lighter and head back to the Alien Civilizations series for "Welcome to the Galactic Community",
to examine some common first contact scenarios where humanity finds itself suddenly aware
it's surrounded by many vast interstellar empires. For alerts when those and other episodes come out, make sure to subscribe to the channel, and if you'd like to support the channel,
you can visit our website to donate, or just share the video with others. Until next time, thanks for watching, and
have a great week!