This episode is brought to you by Skillshare
We often picture aliens as rather horrifying to look at, but the truly terrifying thing about aliens may not be the way they look, but rather the way they think and act... who knows what horrible terrors await behind those 16 eyes and razor-sharp teeth. Science fiction is full of examples of terrifying
aliens, and since Halloween is coming up shortly, I thought we would take a look at some of the more common notions and tropes of terrifying aliens and ask how realistic they are, as well as contemplate some ways aliens might truly be scary to us simply due to different evolutionary paths. Of course, aliens aren't likely to look human like they often do in science fiction movies and TV shows, or even humanoid; that's mostly a byproduct of the limitations of makeup, costumes, budgets, and sometimes imaginations. We get our fair share of jump scares on amusement park rides where some horrible tentacular or spined abomination swings out at you when you least expect it. That's probably closer to what we'd expect to find than something near-human or cute and cuddly. While amusement rides are designed to be scary,
they're not truly terrifying. For terrifying, we need to go beyond the amusement park and look at alien behavior. It is the alien mind, not the alien body, that will keep us up at night. We rightly fear the unknown. There is nothing we can say for sure about alien behavior. We'll only know when we encounter one and, even then, that's a sample size of one. However, the prerequisites for evolving into a technological species offer us a few high-probability basics. In the 1995 novel "The Killing Star" we
are presented with three of these. First, we know they'll be interested in their own survival, because nature has little use for something that isn't, and they will care about that more than anyone else's survival. Whether they are so kind to others that they'd risk their life to try to help them, or so cruel that they will go a thousand light-years out of their way to wipe out other civilizations, their own survival will be a higher priority than ours. Second, we know they will be tough, because you don't claw your way to the top of the billion-year-deep corpse pile of Darwinian evolution by being a wimp. Third, since those first two are both logical deductions from natural selection, they would deduce that the first two laws apply to everyone else too. Now this deserves some add-ons and caveats. We can assume they were probably intelligent
and curious by nature, as they developed technology. We can also assume they probably have a notion
of cooperation too. However, collaboration can take many forms. For instance, we said "they" would be more interested in "their" survival than our own, but that assumes they have some sort of social structure and that it's species-centered, and that's a very iffy assumption. Many animals, especially mammals, will avoid lethal fights with each other while being much more hostile to other creatures, but a social structure isn't necessarily one's whole species, or limited to one's species. I suspect a lot of us place a higher value on our pets than on many members of our own species, subconsciously at least, and I suspect a lot of dogs would attack or even kill one of their own if it was threatening their adoptive human family. My cats also seem to prefer my company over each other's, and to be honest I don't think they like each other at all. Probably more to the point, it is very easy to imagine humans forming alliances with aliens in a unified civilization that didn't include all humans, and being willing to go to war against some other faction that included humans to protect their alien allies. We don't really mean species when we are talking about this notion, but more one's family or nation and civilization. In this sort of context you have two End Goals
in competition: personal survival and survival of the group. In its simplest form that's survival of the species, but it can be a good deal narrower, like the family or tribe, or broader. It could even be fairly perpendicular, like the advancement of certain principles or an ideology. If someone throws themselves in front of a bullet to save their leader, particularly in a modern context where they may never have spoken more than a word in passing to them personally, it certainly isn't about the End Goals of either personal survival or survival of the species, but rather a matter of principle or ideology. Now one can stretch that point; it's not about whether something actually benefits the species but whether the person doing it thinks it does, and most of us who are very driven by our principles and honor do tend to think it benefits our species or civilization to act that way. So too, there is a long tradition of bodyguards being picked from one's extended family or tribe, who obviously benefited from their member being the leader of others, or of loyalty being ensured by some policy of executing or exiling guards and their families if they fail in their task. Most of our principles do have some basis
in our biology and basic wiring, but they are often wired more directly into our developed mind, the bit of you that actually makes conscious decisions. So you can have end goals that were influenced by those core two, personal survival and group survival, but that still take priority in your decisions even when the new end goal is at odds with either or both of those two core survival goals. That biology does influence the ethics, though, although an advanced civilization might find a way to remove those influences if it wanted to. As an example, in Larry Niven's Known Space
setting, from which we get books like Ringworld, we have two races we interact with a lot: Pierson's Puppeteers and the Kzinti. The Kzinti are very much your warrior-hunter pack critters and we'll return to them later, but Pierson's Puppeteers are rather sneaky and cowardly, though openly friendly to others. We are told at one point that they have three sexes rather than two, but that one of them is non-sentient. Later we find out that this is only sort of true, as they are like some wasps: one sex implants an egg in a creature, the other fertilizes it, and the egg grows and eventually hatches inside the critter it was laid in. We discussed that more, along with some parallel cases, in our episode Parasitic Aliens. One can argue that is no worse than eating a cow, and given that they are herbivores themselves, they probably kill fewer animals than a meat-eating species does over a lifetime, but it is the sort of thing that can heavily influence one's attitude toward others. On a similar note, in Iain Banks' The Algebraist,
we encounter the Dwellers, a long-lived, gas-giant-dwelling, sentient and technologically advanced species. Interestingly, Dwellers don't consider their children to be sentient until they pass a certain milestone, and they actually hunt their own young for sport. A creature that has large litters, rather than one or two kids as humans and many other large mammals do, might also be very callous towards its young, and some species that lay eggs by the thousands would be expected to engage in sibling cannibalism. You might think at first that these are not aliens you really want to strike up a friendship with. Morality presumably is based on reason and compassion, but it is likely to be influenced a lot by that biology, and our own morality, within our shared biology, already spans a pretty wide gulf from culture to culture and era to era, yet that gulf might seem narrow on the galactic stage. Aliens might have a fairly compatible set
of end goals to ours, although how they achieve them might be radically different. Working with an alien intelligence will probably require us to become very understanding of extreme moral differences, and to hope we don't get too many nightmares waking us up at night over that decision. End Goals, incidentally, are essentially the ultimate goal of someone or something, while Instrumental Goals are those goals we pursue in the advancement of that end goal, like having an end goal of being healthy and instrumental goals for achieving it, such as a diet or exercise schedule; of course, being healthy is itself an instrumental goal in service of personal survival. I should also note that we are not wired with anything as explicit as personal survival or species survival like some core law; we just have a lot of collected impulses and wiring, and in general evolution favors those impulses or wiring which tend to cause you to act toward those two end goals. If we're talking about something like artificial intelligence, which may easily be the aliens we meet, it might have much more explicit rules, akin to Asimov's Laws of Robotics. See the Paperclip Maximizer episode for more discussion of the weird and counterintuitive ways various end goals might make something act, and the possible instrumental goals those end goals might spawn. We have to cooperate with each other, so we
are wired up that way. An alien might not be. We cannot assume it has anything but a rigid
self-survival goal. As an example, some sort of hive mind might develop focused entirely on itself, since it needs no cooperation. We see something like that with Morning Light Mountain in Peter Hamilton's Commonwealth series: it and the other members of its species do cooperate a little bit, right up until the moment one of them gains an advantage that lets it avoid needing the others, in this case knowledge obtained from humans on how to construct wormholes, and the first thing it does is kill its kin by opening a bunch of wormholes and dumping nukes through them. It really has no notion of species, and indeed the creatures making it up are no more "it" to that entity than the various microorganisms in our bodies, or our replaceable bits that grow back, like hair strands or fingernails, are "us" to us. It can grow more of every one of its components, so none are vital, all are replaceable, and none are people to it. When you get down to it though, it doesn't really have a notion of people anyway, merely itself and threats, a threat being anything that is not itself. This is our first example of a terrifying
alien that is plausible, because there is nothing there to reason with, not because the creature is unreasonable. It probably is quite capable of reason, given that it presumably developed all its own science and technology to get into space; rather, how it reasons is the problem. Its overriding goal is the continuation of itself, which is aided by removing all threats and adding more resources and capability to itself to improve its own existence and its chances of continuing to exist. Species that develop as individuals, like us, are likely to value diversity at least somewhat. Even a fairly xenophobic tribe with genetic bottlenecking going on has to deal with psychological diversity if it wants to advance, because a carpenter and a farmer, or a smith and a doctor, simply from their day-to-day life around that livelihood, have noticeably different worldviews that affect all their thinking, and their group needs to be able to incorporate those perspectives into its collective ideology in some fashion to operate. In contrast, the big old single entity does
not require cooperation. Some algae that grew into a neural network across an alien lake, developed infectious microbes that let it get into insect or fish brains and pilot them to help it build, protect, and expand its lake, and eventually became sentient and sapient, does not have partners or a social hierarchy. It just has itself, things it can effectively consume, and things it must destroy for being in its way. It simply does not have a reason to cooperate with others, even if some birds had gotten the algae on themselves long ago and transmitted it to other lakes to spawn new versions of itself. It is possible those many sentient lakes might become friends and allies, but they are less naturally predisposed to it, whereas we humans had packs and herds and so forth long before sapience. If it has taken this perspective though, again, it is reasonable, and in a universe where the speed of light can't be circumvented, it is aware that any colony it tries to found away from its homeworld is going to diverge from it, which would make the colony a competitor as opposed to an extension of itself. It is also presumably well north of Einstein
on the IQ scale, given that it single-handedly developed spaceflight, not to mention patient, given its implied longevity and ability to pursue its goals for untold millennia, something we contemplated more in our episode Sleeping Giants when considering how a hyper-intelligent and effectively immortal entity or civilization might act. I said such a thing is smarter than Einstein, but perhaps it would be better to say that it is smarter than John von Neumann, another genius, who gave us the notion of the von Neumann Probe, a relatively simple-minded example of artificial intelligence you could dispatch to go replicate itself and explore the stars, or alternatively mine them for resources and bring them home. So it might just take the policy that it won't make copies of itself, only simple, dumb resource gatherers that bring materials home to fuel local expansion into some massive Matrioshka Brain or Birch Planet equivalent of that entity, consuming an entire galaxy in the process. And since it is implied to be biologically
immortal and probably has a consciousness stretching back to its primordial days, it
may be very willing to spend millions of years consuming its galaxy in such a project. It also by default has no concept of parents
or children, or any idea of inevitable death and replacement. Even in this case though, reasoning with it that such a policy doesn't leave it well-suited for a war across the galaxy, since its probes all have to be either very dumb or very, very reliably obedient, potentially opens up some diplomatic options. It is also not stupid; it may simply not have any end goal besides personal survival and find other ones bizarre, but it will be able to comprehend the notion even if it views it with contempt. A common theme in science fiction dealing
with lone hive minds is that they can't even comprehend that other species might cooperate voluntarily for mutual benefit, but remember, these things are always assumed to be very, very smart. So, just as you or I can comprehend a device like the Paperclip Maximizer, whose end goal is to maximize the production of paperclips, even if we would view that as crazy, it can probably comprehend our own basic psychology and reasoning too. So there is at least some room for the possibility of a diplomatic solution, predicated on it recognizing that it can only stretch out so far and act so belligerently before the difficulty of doing so exposes it to a greater threat than stopping, and possibly engaging in some shared endeavors, would. You probably have to negotiate from a perspective of mutually assured destruction though, and make a forward-thinking case. You basically tell it that, being limited to
a single location it can't really back up, it is vulnerable to attack, and it is far easier to destroy things of complexity, like a brain, than to create them. It is very vulnerable to things like relativistic kill missiles and it knows it, more so than a spacefaring species like us would be. It might get us first, but even if it is very good at preemptive strikes and had the lead of being the first species or entity on the galactic stage, it will know that somewhere out there is someone who will gain that strike capability before it spots them, who will have detected its policy, and who will attack it first. Especially if you tell it that your dying breath will be spent dumping all your energy into a loud omni-directional transmission telling everyone what happened, who did it, and what their address is. Paired with that is the reasoning that it will have to risk ever more sophisticated machinery as it ventures further out with its resource harvesting and defense assets. It's rather hard to guard your border with something really stupid if that border is ten thousand years of signal lag away, for instance, let alone ten million years away in another galaxy, and it itself is not stupid, and will know that. Now the arguable exception to this is if we
live in a universe where faster-than-light travel or communication is actually possible, in which case that reach is extended. Indeed, in the case of Morning Light Mountain, it was that specific technology, via wormholes, that triggered the critter to murder its rivals on its own planet and in its own solar system. In that case it did have a species history of hostile cooperation, sort of like our example of other sentient algae lakes popping up, so such a critter might already have a basis for the idea of cooperation too. The next terrifying example is actually those
von Neumann Probes I mentioned a moment ago, because it is assumed they are simply too
stupid for much reasoning and too hard-wired to a specific end goal to be talked out of
it. A von Neumann probe, again, is a machine that
can go to a new star system, land on some random planet or asteroid, and extract materials
to build more of itself. We looked at that in more detail in our episode
on self-replicating spacecraft. Such devices can be as smart as you can program
them to be, but as we often say on this show in regard to artificial intelligence: Keep it Simple, Keep it Dumb, or else you'll end up under Skynet's Thumb. If you want to send something smart out into the galaxy, you need to be really, really confident in your mastery of artificial intelligence before dispatching self-replicating probes smart enough to think and reason to distant star systems, where they will operate by themselves, unmonitored, for thousands of years. Otherwise you have to worry about them coming home for a visit. Such swarms of self-replicators, be they stupid to the point of being Grey Goo or fairly intelligent, represent major terrors. In the dumb case, there just isn't anything at all to reason with; it just has a simple task it goes about endlessly, in this case disassembling planets to make more of itself or whatever else it's programmed to make, like ingots of metal or power relays or space habitats. Of course, if it is making space habitats it's
probably on the smarter side, but it may still be unreasonable. Remember that our prior example still had an end goal similar to our own: it wants to live. Now you probably program your self-replicating machines with some survival imperative, but it is an instrumental goal in service of something else, like making paperclips, and so you cannot threaten it or reason with it except in regard to that end goal. Threaten to destroy it, and thus to diminish paperclip production, and it might respond to that threat. Again, though, it has to be able to even engage in such reasoning. Our prior example of the lone mind could be reasoned with because it was very smart, able to design and produce advanced technology it had invented from science it had discovered. Such self-replicating probes only have to produce advanced technology, which implies no more advanced thinking than an amoeba has, and an amoeba is a fairly advanced machine in many regards. Your upside there is that if it can't reason
then it will be fairly strategically limited. A virus can be hard to kill, but at least you don't have to worry about it thinking up opposing strategies against you. Though some sort of techno-organic virus might, and we see some examples of that in science fiction too, such as the Drakh Plague in Babylon 5, which is implied to have some limited networked intelligence capable of adapting intelligently to strategies used against it. Such an intelligence doesn't necessarily imply anything like reasoning and thinking though. When it comes to the alien threats we see in science fiction, that capacity for reasoning, while making them much more dangerous, is also what makes them a lot less terrifying on examination. If you have some species that delights in hunting
an especially canny and intelligent prey, like we see in the Predator franchise or with the Hirogen in Star Trek: Voyager, you aren't likely to be able to convince them that hunting sapient creatures is immoral, at least not without essentially wiping them out as a culture, which is arguably no more moral than outright genocide. However, you can explain that you don't want to die, which they will hardly be surprised by, and that you will do all you can to avoid that, which they also won't be surprised by, or likely offended by either. If they're hunter-focused they aren't offended by you trying to kill them, because that's actually what they like about you as potential prey: the challenge. One strategy might be refusing to fight, removing the challenge, but they might opt to wipe you out simply to discourage others from thinking that was an option. Another is to offer them something like virtual reality or artificial intelligence, though they probably already developed those long before encountering any other species. Which raises an important point: such civilizations have to be stable and to have already existed before meeting other alien folks. Such a civilization has a lot of challenges all on its own, starting with how they cooperated in the first place to build a civilization. If they are all about the challenge of hunting
the cleverest prey, then for much of their early existence, that prey was each other. So they presumably had to have some mechanism
for dealing with that, like non-lethal hunts or some selection method or lottery for picking
who got hunted. This might make them quite like the Kzinti we see in Niven's Known Space setting: very pack-oriented, and very much built on a social hierarchy of strength, but not unreasonable. Amusingly, in that setting they grow much better at cooperating with other species over time, and the theorized reason for it is rapid evolution, as they kept losing their wars with humanity with their most aggressive and hostile members doing the dying, which favored the survival of the more reasonable. Niven takes some liberties with evolution, like making luck a trait you can breed for, and such an accelerated change of behavior occurring at a genetic level, without that species noticing it, is fairly unlikely. However, you probably would have had that occur long before they ever built a spaceship. Aggression is a good thing, depending on form
and definition, and evolution will breed for that, but too much of it in too stupid a way
is not a survival advantage and certainly not one for a species aiming for a high-tech
civilization based on many specialists working together. That's one of the more peculiar aspects of the Klingons in Star Trek, for instance; one often wonders how a race whose highest ambition is fighting everyone, including each other, ever managed to get any building and science done. More on that in a moment. As an interesting sidenote, intelligence doesn't necessarily lead to technology and civilization as we think of it. As an example, humans have a pretty limited sense of smell compared to many other mammals, and given how stinky most early cities were, a species with a better sense of smell might never develop them, or the attached large capacity for specialization of work, simply from not wanting to have so many of their own kind, and their waste products, all squished together. Very small things might be a filter that prevents civilization emerging; see our Great Filters series for discussion of that idea. Now, hyper-aggressive warrior or hunter races might just channel that somewhere else. I sometimes hear folks say that humans nowadays are less aggressive than our ancestors, or that becoming less aggressive should be an aim for future mankind, but that is a fairly dubious notion. We just tend to aim our aggression in fashions less overtly connected to being a muscular pack leader with the best choice of mates. We generally fixate on instrumental goals that probably originate with that impulse, or with the end goals of survival of self or species, but those instrumental goals often mutate into core end goals of their own. A person might take an interest in a sport
to be more popular and respected, but over time being very good at that sport really becomes their true end goal, and likely the same holds for most other avenues. You strive to be the best at that thing you do, because it earns you a livelihood and respect and status, and often that eventually mutates into self-competition, as you mark how you've improved and start caring more about that. This in many ways represents the most terrifying possibility for intelligence though, because not only can we shift our effective end goals to things like that, and possibly eventually engineer ourselves to be beyond biologically wired end goals, but all of the end goals we have now are rooted in that need for specialist-based cooperation. In a post-scarcity civilization where robots are doing most of the grunt work, and where starving to death or dying of disease or old age are weird historical notions you've never encountered, that core desire to be the best at something, or even just to judge your success against your own improvement, might get pretty weird. Someone who has a talent for first-person shooters has no need to provide a living for themselves, and all the time in the world to hang out in virtual worlds, and they might spend centuries honing that skill. They also probably aren't too reliant on
keeping to social norms, as they might easily have technology and resources sufficient to build their own spaceship, one smart enough to fly around the galaxy harvesting any raw materials they need and producing any item they desire from the ship's inventory of schematics. That's not a civilization, but it's still an alien, and one who might interact with anyone it encounters only for the purpose of finding out if they've got anything they can add to their constant virtual war or hunt. That's assuming they keep it virtual anyway; they could probably have a stored copy of their brain somewhere, or several, or access to android bodies, and think landing on an alien world to shoot everyone one at a time is no different from slaughtering their way through a virtual world. They don't care if they die, because somewhere
on some lonely asteroid in deep space they have a copy of themselves, probably many scattered
all over the galaxy, that will just wake up if they die. They also probably do not care if someone
tries to retaliate against their homeworld for their actions. They have in many ways become like our hive
mind example, a species of one, interested only in their goal. Their home civilization might not care, as it's not really even a civilization anymore, just a million factions with shared interests, or trillions of individuals each with their own goal and little to no reason to interact with each other or work together. Indeed, I've mentioned artificial intelligence as a likely type of alien we might meet, and as I often say on the show, the term "artificial intelligence" is a pretty dubious concept that arguably already applies to us, and it likely gets hazier the more technology you get. Some computer-mind might be much more human
in outlook than some person who delights in some personal instrumental goal and has had their brain tinkered with to remove distractions from it, like needing to sleep, or ever getting bored of that goal, or caring at all what others think of it. It really is not hard to imagine that a civilization might even encourage a nascent form of that, like wiring kids up to be less distracted from their studies, or to not care as much about the opinions of others so as to minimize peer pressure or depression. The hive mind examples also can't expand beyond their homeworld into the galaxy without needing to spawn copies that will diverge. In contrast, regular civilizations can't easily diverge locally, as we are bound together by common purpose and common need. That glue weakens when distances of space and time separate us, and it can also weaken when sufficient technology removes the need for mutual dependence. So it really isn't alien species we have to fear at all, as they are very unlikely to ever be some unified galactic civilization, just individual systems, groups, or even lone actors, all with different motivations and goals that might be very detached from anything we'd find reasonable. But then again, that just might be the way our own civilization goes, our descendants every bit as alien to ourselves, or to each other, as any tentacular murder-machine we might find on some alien world. Some humans might even choose to become such a thing, or something even more alien in body or mind, and we'd be much more likely to encounter them than some alien originating from a distant galaxy. Actually, come to think of it, for humans
to interact with even the most cooperative aliens will require us to accept morals and
practices that are repugnant in our own cultures, or at least get very good at ignoring them. Such collaboration could go one of two ways. We could either become more tolerant or we
could go further and drop off the far end of that tolerance by normalising those repugnant
practices and adopting them ourselves. Civilization is, after all, a thin veneer, and our own world has shown time and again that a civilization's morals and ethics aren't fixed. Our alien interactions could result in us
becoming the very monsters that are the stuff of our current nightmares. We probably need to be less terrified of aliens,
be they benign or malignant, and more terrified of what we will ultimately become if we ever
encounter them. Happy Halloween! So this will wrap us up for the month of October,
and we'll get to our schedule for November in just a moment, but we had 7 episodes this month totalling 4 hours of show: 5 regular episodes, a mid-month bonus episode, and an end-of-the-month livestream, and folks sometimes ask me how in the world I manage to get that all done each month. With the average book length being about 90,000 words, our episode scripts average out to about 3 books a year, not including recording them and doing the videos, and my honest answer is I'm never really sure. Some authors can do multiple books a year, like Isaac Asimov, who wrote over 500 books, while some need multiple years per book, like George R.R. Martin. The big thing they all share in common, along with other folks doing creative work in other areas, is generally having a rule about getting a certain amount done every day or every week, not waiting for the best ideas to come to mind by magic. However, there are a lot of things you can do to enhance your creative productivity, and there's a great course on that by Richard Armstrong, "The Perfect 100 Day Project: Your Guide to Explosive Creative Growth", over on Skillshare. Everyone has to find their own best practices
for creativity and productivity, but you can pick up a lot of good ideas from others and
Skillshare has many amazing classes on everything from productivity enhancement to creative
writing. Perhaps you're trying to adjust to working in a new environment or just looking to pick up some new skill or hobby; Skillshare has a course for it. Whether you're a beginner, a pro, a dabbler, or a master, Skillshare has thousands of classes on a wide variety of topics, taught by experts, to help you learn. Skillshare is an online learning community for creatives, where millions come together to take the next step in their creative journey, and members get unlimited access to thousands of inspiring classes, with hands-on projects and feedback from a community of millions. If you'd like to give it a try, the first 1,000 people to click the link in my description will get a free trial of Skillshare Premium so you can explore your creativity. Act now, and start learning today. So as mentioned, that will wrap us up for
October, but we will be back next week to start November off with a look at how we go about becoming a post-scarcity, Kardashev-1 civilization. Then the week after that we'll be taking a look at Interstellar Trade, before returning to the Fermi Paradox to look at the Prime Directive, the concept of civilizations that avoid interfering with other, more primitive civilizations in the galaxy, of which we may be one. If you want alerts when those and other episodes come out, make sure to subscribe to the channel, and if you'd like to help support future episodes, you can donate to us on Patreon or through our website, IsaacArthur.net, both of which are linked in the episode description below, along with all of our various social media forums where you can get updates and chat with others about the concepts in the episodes and many other futuristic ideas. Until next time, thanks for watching, and have a great week!