They say two heads are better than one, so
what about two billion? Today’s topic is Hive Minds and Networked
Intelligence, and I should start by saying they’re not entirely the same thing. Indeed, while the Hive Mind is mostly a subcategory of Networked Intelligence, it has a lot of subcategories of its own too. Probably the best known example of a Hive Mind from fiction is the Borg from Star Trek, and those fellows are sufficiently horrifying that they give Hive Minds a pretty bad reputation. To be fair though, Hive Minds are rarely presented
in a positive light in fiction. We’ll try to look at some positive aspects
and examples today, but to be honest I think I’d rather jump off a cliff than be part
of most versions of them so I’m probably not a neutral spokesman. Networked Intelligence, as a broad category, is another story, and it is one of the three types of intelligence and possible paths to Super-Intelligence we’ve discussed. One of the others is Speed Intelligence, where the mind is the same but simply runs faster. That is simple enough conceptually and we’ve talked about it extensively in other episodes. Networked Intelligence and Quality Intelligence, the latter being hard to define beyond pointing to a lone genius who solves a problem a room full of other experts could not, are both types we’ve spent less time on, and today we’ll fix that for the former. The first key thing to note about networked intelligence, though, is that it’s already something we have. And I don’t mean in the way humans themselves
are arguably a colony organism composed of many different types of cells and organs. Unlike the most integrated forms of Hive Mind,
you’re not a bigger intelligence composed of smaller intelligences so I think we have
to exclude individual humans as an example of a Hive Mind. Defining a networked intelligence is a bit tricky if we want to avoid trivial examples like a herd of animals with limited communication. Normally with futuristic concepts I always encourage folks to avoid definitions that would include ourselves, since that tends to indicate the definition is bad. You and I are not cyborgs; it’s nice to point out how things like our glasses or tooth fillings can be argued to be mechanical augmentation of a human, but that’s clearly not what we mean by cyborg, so you probably have a bad definition if it includes modern humans. Similarly, humans are not what we mean when
we say Networked Intelligence or Hive Mind, but human civilization was built by us becoming
a simple networked intelligence. Moreover we recognize such things exist, and it’s implicit whenever we refer to a group or organization that goes beyond classic family and genetic groupings. This company, that sports team, that church or village or city or country: we do regard each as an entity in its own right. A network being a bunch of ropes tied together
to create a net, usually with nodes, or knots, it’s probably not too surprising the definition
of network is pretty hazy too, but for today’s context we’ll say the simplest thinking
network would be several individual nodes, human minds in this case, connected together
exchanging information. Clearly we’ve been doing this since before
there were humans, since even the most basic of body language and noises inside some pack
of animals is an intentional exchange of information. If you want to stretch the point, you can
argue that even two simple organisms exchanging DNA to make a new one is pretty sophisticated
communication and you can really stretch the point and include even a simple unicellular
organism that reproduces by mitosis, by dividing itself, arguing it is a giant factory of many interdependent machines. It’s easy to forget just how complex such
bacteria are, but it is better to think of them as a giant metropolis, with molecules as its people and buildings, than as some tiny simple organism barely bigger than an atom; after all, each cell usually contains trillions of atoms. Like I said though, we have to beware definitions of new concepts so broad as to be meaningless, not because such a definition is inaccurate but because it is inconvenient and not helpful. I will just place the simplest of networked
intelligences at the invention of language, as that seriously jumped up both the bandwidth
and integrity of those signals, and as a byproduct allowed far more short term and focused specialization. We see limited specialization in almost any
group of cooperating animals, and we see intense specialization in things like insect hives,
but the sheer amount of fast and accurate data that can be exchanged through human language
allows us to train people with identical DNA to perform very specific tasks not strongly
shaped by their biology. And we see that strongly with the emergence
of cities, from which we get the word civilization in the first place. Very many people, each specialized in a very different task which could not possibly allow their survival in isolation just doing that specialized task, all grouped together for the fast exchange of information and supplies. Any definition being a bit arbitrary, I will
set this as the simplest example of networked intelligence for humans. It also represents a huge paradigm shift and
increase in resources and abilities. Just as basic communication and tool use made humans jump up over other animals, the rise of cities and civilizations gave us a huge edge. We have tons of technological improvements that make individual people more effective, and we’ve had many more that just let us increase how many people we can have alive and healthy, our carrying capacity. However, many of those inventions were such boons because they improved the network. Roads and bridges to connect rural areas to
cities and cities to other cities, carrying not just food and supplies but allowing the
movement of people, ideas, and information. Ships, railroads, highways, postal systems,
radios, telephones, and these days the internet. All can be viewed as an amplification and
augmentation of basic human speech, which allowed two people fairly near each other
to exchange complex concepts quickly and accurately. Even the invention of writing, which allowed
communication not only over distance but over time itself, improved this basic human network
so that it could include dead people. Long after they were gone, even from the memories
of the next generation or two who met them and spoke to them, writing allowed us to incorporate
non-living humans into the human network and the modern internet has allowed us to include
computers and databases into that network too. We don’t really think of ourselves as a networked intelligence, some actual entity called humanity, but even just those of us old enough to remember when the internet did not exist, and even though we were exposed to it gradually, so it lacked an explosive moment of transition, can see a clear difference in the civilization we
have now as opposed to then. Technological changes happen so fast and frequently
these days that we are a bit immune to seeing how they’ve changed us, but it has still
happened. I wanted to note that, though, because a bit like cybernetics, networked intelligence is one of those things that happens gradually enough that we might just keep moving the bar. Folks a few centuries from now might be shot through with tons of devices, cloud-storing memories outside their heads, and routinely talking to people just by thinking in their direction with the technological equivalent of telepathy, and still be talking about how in the future people might be cyborgs or network their minds together, not like us of course. But no matter how integrated human minds might
get to be, the human itself is not the networked intelligence, it’s just a node of it, whether
it’s an individual or not. Bob is not the networked intelligence of New
York City, he’s just a component of it. On this subject of Networks and Hive Minds, we are obviously very interested in what happens to the individual, whether they still continue to exist, whether they are free, whether they have privacy, but the individual is not the network, even if it is a key or irreplaceable component. Who is the network? The network is the network. We have a thought experiment that’s fairly
interesting for developing this notion. We’ve talked a lot about copying a human
mind onto a computer substrate where the processors emulate neurons. That’s an intuitive enough concept for folks,
but you can just as easily – well not easily – sit millions of people down with pencil
and paper and have them perform all those same calculations that the computer is doing,
storing each result on paper and walking over to hand each new bit to another calculator. We can envision your neurons doing this to
make you. We can envision computer chips doing this
to make you. But there is something passing strange about
the idea of a ton of people cranking all the calculations out manually, including folks
looking at the scenery and jotting down data to be sent to a giant skyscraper full of cubicles
manned by the team that makes up your eyes and optic nerve. Yet by the same logic as the neurons or computer
emulation, that would be you, and in this case you would be a networked intelligence, with all those people as your nodes. You don’t, as with the traditional hive
mind, have access to their thoughts, you can’t control them, they are not indeed wired into
your mind at all. We could also replace them with a giant ant colony, a literal hive, that wasn’t calculating anything itself but performing those operations far more stupidly and simply, pushing colored or scented grains around to serve as your bits and signals.
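As a rough illustration of what each of those clerks or ants would actually be doing, here is a minimal toy sketch in Python of a few “neurons” being run purely by simple arithmetic passed between independent nodes; the wiring, weights, and threshold are made-up placeholders rather than anything from a real brain emulation, and the only point is that each node needs nothing but slips of paper with numbers on them.

```python
# Toy message-passing "brain": each node only ever sees numbers handed to it,
# adds them up, and passes a weighted 0-or-1 slip onward. Any substrate that
# can do this (clerks with pencils, ants pushing grains, silicon) runs the
# same computation. All weights, thresholds, and wiring are arbitrary here.

connections = {              # who hands slips to whom, and with what weight
    "eye_cell": [("thinker", 0.8)],
    "ear_cell": [("thinker", 0.4)],
    "thinker":  [("mouth", 1.0)],
    "mouth":    [],
}
THRESHOLD = 0.5              # a node "fires" if its summed inputs exceed this

def step(in_tray):
    """One tick: every node tallies the slips in its in-tray and
    hands new slips to the nodes it connects to."""
    out_tray = {node: 0.0 for node in connections}
    for node, total in in_tray.items():
        fired = 1 if total > THRESHOLD else 0      # the only "thinking" a node does
        for target, weight in connections[node]:
            out_tray[target] += fired * weight     # pass a weighted slip onward
    return out_tray

# Shine a light in the "eye" and watch the signal propagate over two ticks.
tray = {"eye_cell": 1.0, "ear_cell": 0.0, "thinker": 0.0, "mouth": 0.0}
for tick in range(2):
    tray = step(tray)
    print(tick, tray)
```

None of those nodes knows or cares that it is part of a mind; the “thinker” is just a clerk comparing a sum to a threshold, which is exactly the unsettling part of the thought experiment. I like this example, where people or ants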
– our usual example of a hive mind – are cranking out calculations to run your mind
because it both shakes up the notion of thinking of an uploaded mind on a computer as essentially
just a substitute brain – a black box doing the work – and highlights that such a thing
doesn’t actually have to be composed of the actual minds of its nodes in an intrusive
manner. I think it’s also easier to picture some place
like New York City or Tokyo as a potential real separate entity with thoughts when you’ve
just tried to wrap your head around a million people with pencil and paper running your
mind. I don’t think I’d ever categorize one
of these as a true networked intelligence unless the mind being generated was actually
smarter than the individual components each were, at least at some tasks. And while some group of people working on a problem together might come up with ideas faster and better than an individual could, and maybe even ones no individual would have thought of, the group itself is not really exhibiting much intelligence of its own. Of course in fiction it often is, because its
individual members have usually become drooling morons. At best one can justify this with the assumption that the collective mind is using every spare bit of processing power, up to and including the bits that process stuff like optical signals, so that maybe people can walk right by drones without even being seen, but this is mostly just bad writing. Or very good writing; in the case of Star
Trek’s Borg the writers are presumably more focused on making a dreadful inhuman enemy
that dehumanizes people, and nothing better shows that than folks stumbling around without
apparent self-awareness, like a zombie. So I won’t knock the writers from a story standpoint, just a logical and scientific standpoint. The Borg are idiots, individually and collectively, and I doubt they properly demonstrate an actual hive mind, even if I love them as villains. I think I preferred Unity, a parody of the
Borg from the cartoon Rick & Morty, where the titular character Rick gets the hive mind
Unity drunk and it comments how it probably shouldn’t be trying to run 200,000 Pediatric
hospitals and 12 million deep fryers in that state. And it’s a key point about such hive minds: if you’re composed of lots of individual components designed to do such things on their own, you probably shouldn’t be consciously running them all yourself. Humans not only have components of ourselves
we control subconsciously, but plenty of bits that operate with no control whatsoever, I
don’t need to tell my DNA to unzip and replicate, though it might be handy to be able to tell
it when to do that and when not to. Indeed we do have some regulation methods
inside the body and cancer can result when that breaks down. I’d imagine a Hive Mind could develop the
equivalent of cancer, and if it layers up a lot of minds, sub-minds that supervise this or that, it would have to worry not just about individual members leaving, or attacking it if they did, but also about sub-minds, smaller hive minds, rebelling or breaking away. Obviously the more autonomy you have at the
lower levels the more of an issue that would be, my kidneys have never tried to declare
independence or stage a revolt. Hive minds though could easily end up undergoing
such breakaways or mitosis as a form of reproduction. In the absence of instantaneous communication
it might need to as well. I mentioned a few episodes back in Digital Death that a human brain spread out to the size of a planet, but with its signals switched over to light-speed ones, would process at the same rate as a normal human mind; spread out beyond that and you either need some form of FTL communication or you will start suffering time lag issues. So spreading a hive mind over multiple planets, let alone solar systems, would seem a serious limitation.
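To get a feel for those numbers, here is a quick back-of-the-envelope sketch; the figures for brain size and average neural signal speed are rough assumed values chosen just to show the scaling, not precise neuroscience.

```python
# Rough scaling: if you swap slow neural signals for light-speed ones, how big
# can a brain get before it thinks slower than a human mind? The input figures
# below are loose assumptions for illustration only.

C = 3.0e8                 # speed of light, m/s
NEURAL_SPEED = 5.0        # assumed average signal speed inside a brain, m/s
BRAIN_SIZE = 0.15         # rough diameter of a human brain, m
EARTH_DIAMETER = 1.27e7   # m
AU = 1.5e11               # Earth-Sun distance, m

# Keeping the signal crossing time the same means size scales with signal speed.
max_size = BRAIN_SIZE * (C / NEURAL_SPEED)
print(f"Light-speed 'brain' thinking at human rate: ~{max_size / 1000:,.0f} km across")
print(f"Earth's diameter, for comparison:           ~{EARTH_DIAMETER / 1000:,.0f} km")

# Lag for a mind smeared across interplanetary distances, one-way at light speed.
print(f"One-way lag across 1 AU: ~{AU / C / 60:.1f} minutes per exchange")
```

Under those assumptions you get something in the ballpark of a planet’s diameter, and the follow-on point stands regardless of the exact inputs: once the nodes are light-minutes apart, a single unified mind starts thinking in slow motion. I tend to be skeptical about us ever inventing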
any form of FTL, but even if we grant it for the moment, a lot of fictional and theoretical
FTL methods are simply faster than light, not instantaneous, or have serious bandwidth
issues. You can probably run an interstellar empire
on dial-up modem speeds, the old Battletech & Mechwarrior franchise did that, but I can’t
see running a hive mind that way. This is one possible Fermi Paradox Solution
that gets kicked around too, that aliens don’t spread out from their homeworld much because
they converge to being hive minds or get replaced by singular entities like a super-intelligent, planet-sprawling computer. I tend not to bring it up in Fermi Paradox
discussions because it’s not a good one, but it is of interest today. It’s not a good one because you can’t assume every civilization does this, and you can’t assume none would be willing to subdivide to found a small new hive mind in another system. It also still suffers from the Dyson Dilemma, in that you can build Dyson Swarms around your own home star and, as we saw in the Mega Earths episode, just keep building those up with resources brought in from elsewhere until the whole thing is of galactic mass and hovering just above the critical density to turn into a black hole, either as a Birch Planet or a truly huge Dyson Swarm of Dyson Swarms.
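For a sense of scale on that last point, here is a small hedged calculation; the galactic mass is an assumed round figure of a trillion suns, and the only formula used is the standard Schwarzschild radius.

```python
import math

# How large is a structure of galactic mass sitting just outside its own
# Schwarzschild radius? The mass below is an assumed round figure.

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
C = 3.0e8                 # speed of light, m/s
SOLAR_MASS = 2.0e30       # kg
LIGHT_YEAR = 9.46e15      # m

galactic_mass = 1.0e12 * SOLAR_MASS
r_s = 2 * G * galactic_mass / C**2          # Schwarzschild radius in metres

print(f"Schwarzschild radius: ~{r_s / LIGHT_YEAR:.2f} light-years")
print(f"Shell area at that radius: ~{4 * math.pi * r_s**2:.2e} square metres")
```

That works out to a shell roughly a third of a light-year in radius, which is a staggering amount of room to fill before a civilization ever strictly needs a second star system. I think most hive minds would be willing to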
reproduce by making a new one elsewhere, but they might not like the idea of essentially
making a rival, and for that matter they might need a certain minimum number of people just
to make existence bearable for those colony splinters. In Kevin J. Anderson’s Saga of the Seven
Suns you’ve got a limited hive mind, the Ildirans, who tend to need to do everything
big, including their defense fleets, just to have enough of them in one place to stay
effective and sane. The Geth from the Mass Effect franchise had
that issue too. If that was hard to overcome you’d probably
need to bring resources back rather than expand. Now there’s the question of who would go
out and retrieve those resources from around the galaxy, but any hive mind that can’t
design an automated mining vessel obviously came out a loser on the deal when it became
one. It’s the same issue with folks living in
virtual realities, they might not want to abandon their paradise to go harvest resources
far away, but they shouldn’t have to. If you can make simulated paradises it implies
you can make something smart enough to decently mimic people to talk with in that paradise,
so programming a ship to gather stuff and bring it back ought to be child’s play,
and one would tend to think a hive mind could do it too, especially since they presumably
had to be pretty good with intelligence and computers to make their hive mind in the first
place. That skips those that naturally evolved, who
are often shown as being awful with computers because they never developed them. We get that with Morning Light Mountain in
Peter Hamilton’s Commonwealth Saga and with the Buggers or Formics in Ender’s Game,
and lots of other insect hive examples. The former, Morning Light Mountain, cannot
naturally speak faster than light, as most of the fictional examples can, so it does
have a decent head for electronics and cybernetics but still never developed computers much. In Ender’s Game the Buggers do have instant
communication and telepathy they naturally evolved, indeed humans back-engineered their
own interstellar communications off this. Sort of: they knew it could be done, since they could see Bugger ships reacting to events faster than the speed of light should have permitted, and knowing it could be done, humanity then figured it out. It is implied they need a queen reasonably
nearby for this to work though, as a sort of central node. Though in later books it is stressed that
the Queen’s body is just one more drone to her, albeit a critical one, and that she
isn’t really the queen or maybe even the hive but more like our example earlier where
your intelligence was run by ants. We also get a retcon about the individual
buggers actually having intelligence of their own. Orson Scott Card is pretty good about consistent
canon by and large but some explanations changed over the series, tweaked for consistency I
assume. I remember when I did the Stupid Aliens episode
and mentioned the book I irritated some folks discussing the Buggers, “That’s not what
it was in the book!” and kept having to remind folks that it isn’t a book, it’s
a series of around 20 novels and short stories written over 30 years, and that they weren’t
saying I was wrong, they were saying the author was. Always a problem in science fiction when stuff
needs retconning, or maybe didn’t but gets it anyway, a lot of folks were irritated when
Alice Krige showed up in Star Trek First contact, the film not the episode, as the Borg Queen,
but it didn’t bother me too much personally. The Borg originally spoke with one voice out
of a thousand mouths, cold and alien and unified like a sociopathic chorus, so having a single queen doing the talking seemed wrong. But to be fair, the Borg were originally going to be an insectoid hive race and the show didn’t have the budget, so we got the black leather body-horror look that seems like something out of Clive Barker’s Hellraiser franchise; changes happen, and it gave the audience a central focus for their villain. There’s also no reason a Hive can’t have
a mouthpiece; it’s actually inefficient to have a thousand people saying the exact same thing, and a hive doesn’t have to be homogeneous, with every member having the same function and status, since insect hives aren’t like that after all. The mouthpiece could be a meat puppet used as a collective voice or even a semi-independent system. Now the Borg are horrifying in their own right
but the usual thing that bugs people about them is that membership tends not to be optional. Even in Isaac Asimov’s Foundation series,
the rather benevolent Hive Mind of Gaia, which still allows modest individuality for its members, is plotting galactic takeover, and by conscription rather than volunteers. It was not a popular move with most fans either,
myself included, and is often guessed as the reason why all future books in the series
were set before the incident. Nonetheless there are tons of examples of
good hive minds in science fiction, especially from the era when telepathy seemed to be an omnipresent feature even in hard science fiction novels, a trend I’m glad finally died off in the last couple of decades; but common or not, rarely do I hear folks speaking of them with enthusiasm. Pretty much the only member of a Hive Mind
in fiction I like is Nevil Clavain from the Revelation Space series; it probably helps
that he’s a viewpoint character who joined semi-voluntarily, never upgrades his implants
from the earlier versions that were less connected, and is often on bad terms with his own faction,
the Conjoiners, so he doesn’t exactly cheerlead for them. They also don’t indiscriminately spread
and assimilate folks involuntarily either. Interestingly Clavain’s faction in the books
is often in conflict with the other faction of humanity that is closest to being a hive
mind too, the Demarchists. They are more of a bunch of intelligences who are networked, so to speak: like most people in that setting they have a ton of mental implants, but one of theirs, the key one for their specific civilization, tries to go straight democracy, no representatives, by having everyone vote on almost everything. Sort of like if every bill in Congress got
text messaged to you for a vote, only as best I can tell the implants allow them to do it
mostly subconsciously and even asleep. I’m assuming the implants in everyone’s
head know them individually well enough to guess how they’d respond. Forgetting the specific mechanics, that is
kind of the ideal of most versions of democracy and its parallels, everyone gets a say in
what happens because everyone has an investment in the outcome and a right to self-determination,
so networking folks to make news and details easier to get and absorb to make more informed
decisions seems ideal. Obviously, taken too far, you get a hive mind where nobody gets any say in anything because there’s no individuality left over. The other big issue is the privacy one, and
that’s a serious issue of the future even when you’re not telepathically linked to
other people. However in the networked intelligence case,
short of a hive mind, I do think that’s just an artifact of telepathy in science fiction. We associate telepathy with reading people’s minds, not just with the equivalent of a phone or internet connection, so a method using it will understandably make you figure all those minds can read each other and freely
look around or even merge. Again, part of why I don’t like telepathy
in fiction; made-up non-science makes for bad extrapolations of the future. Folks end up picturing some mind-eating hive
or a bunch of folks joining hands around a drum circle to meditate and combine their
souls. My computer, and thus me, is connected to
the internet and to you, obviously, or you couldn’t hear me now. I’ve never noticed my smartphone trying
to merge into my brain even when I’m holding it next to my head or my computer trying to
eat my neighbors. I can read my files from other computers in
my house or on a network, but only the ones I’ve shared. When you’re doing this stuff with mystical telepathy, that presumably can’t be done, or it takes special effort and training. But when you do it with technology, you have to understand how it all works, how brains and memory function, to do it in the first place, so segmenting things off or only sharing specific chosen bits is possible from the outset. If I want people to know what my schedule is, I can make that open just like my Google calendar, or if I want only my doctor to be able to look at my medical status, I can do that too.
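As a toy sketch of what that kind of selective sharing looks like in any ordinary networked system, here is a minimal permission check in Python; the data categories and names are invented purely for illustration, not a claim about how mental implants would actually store anything.

```python
# Minimal access-control sketch: every piece of shared "mind data" carries a
# list of who may read it, and anything not explicitly shared stays private.
# The entries and requesters here are made up for illustration.

shared_data = {
    "schedule":        {"value": "Free after 3 pm",     "readers": {"anyone"}},
    "medical_status":  {"value": "Blood pressure fine", "readers": {"dr_smith"}},
    "private_musings": {"value": "...",                 "readers": set()},  # nobody
}

def read(requester, key):
    """Return the datum only if the requester is on its reader list."""
    entry = shared_data.get(key)
    if entry is None:
        return "no such datum"
    if "anyone" in entry["readers"] or requester in entry["readers"]:
        return entry["value"]
    return "access denied"

print(read("neighbor", "schedule"))        # open to anyone
print(read("neighbor", "medical_status"))  # denied
print(read("dr_smith", "medical_status"))  # allowed
```

The point is just that who sees what is a design decision made when the system is built, not something the network itself forces on you. So I think, when we’re talking about a technological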
route to a more networked intelligence we don’t necessarily have to discard privacy
and individuality to gain the many obvious benefits. You also probably don’t have to go with an all-in-or-all-out, one-size-fits-all option. Living in a city traditionally cuts down privacy a bit, hence many of us prefer the peace and solitude of the country, but that doesn’t mean we’re divorced from civilization. So too, one can presumably set up such a network
to allow people variable involvement to fit their tastes, in general and at the moment. Also just like civilization, you could enjoy
sub-networks: I’m part of humanity and the US and my state of Ohio and my little village and dozens of various related and unrelated social or professional groups, and my level of involvement in each varies and I can adjust my commitment. You could have a human overmind with, say,
the sub-mind of Ohio, which was both a separate entity and part of the Earthmind at the same
time, and how much so might fluctuate, as might its membership, with some joining or
leaving and involving themselves to varying degrees. Vernor Vinge explores this notion with a race called the Tines in his classic novel, “A Fire Upon the Deep”, where we see small group minds, often of just a few critters, which switch members fairly often, and those members are not really individuals on their own. There’s a lot of options for this in fiction
but I would tend to guess people who predict this as an eventuality for humans are semi-correct. Just my guess, but the error being made is assuming that folks who don’t want to give up their privacy and individuality must eventually mature to be okay with doing so, and I personally don’t see it as more mature, or necessary, to sacrifice privacy or individuality to enjoy the benefits of a greater degree of networking and group cooperation. Though I could easily see a lot of folks choosing
that route, and so long as admission is voluntary more power to them. Obviously if you’re too interdependent it
makes it hard to get out there and settle the galaxy, and we’ll be looking at a first
step to that next week in Colonizing the Oort Cloud, which contains tons of potential places
for us to colonize but usually so far apart even compared to planets that no unified Hive
Mind would be viable. The week after that we’ll leave the solar
system and continue to explore the problems with unification, especially with light speed
limitations, in Interstellar Empires, and then we’ll finish out 2017 by heading out
of the galaxy and asking if it is even possible to settle other galaxies in a Universe without
faster than light travel. For alerts when those and other episodes come
out, make sure to subscribe to the channel and hit the like button. And if you enjoyed this episode, you can help
support future ones by becoming a channel patron on Patreon. Until next time, thanks for watching and have a great week!