From oracles to fortune tellers, humanity
has been trying to predict the future for as long as we've been around. But stock
market and election predictions show that even modern techniques often aren’t much
better than just looking at tea leaves or crystal balls. So today’s topic is
predicting the future, with a special focus on the concept called Psychohistory which
originated in Isaac Asimov's classic Foundation series. There is a reason why Foundation
tops just about every list of best sci-fi novels and series; it's a great story with
a fascinating concept, but odds are a lot of you haven’t read it yet, so I will try
to avoid spoilers where possible. We’ll get to psychohistory,
and why it differs from other predictive options, in just a moment. First, let me emphasize
that forecasting the future with 100% accuracy is not possible, even if we ignore random
factors percolating up from the quantum scale. We proved this just a couple weeks back in
the Infinite Improbability Issues Episode. Even in a truly mechanistic,
predetermined Universe, you could not keep track of every particle’s interaction with
a computer smaller than the Universe itself. Even if you could, both your observations
of those particles as well as the energies and interactions of that super-computer would
alter that system, throwing off your calculations. To accurately model
a system, that system needs to obey constraints that permit it to be modeled by something
smaller and simpler than itself. Solar systems are a great example because planets are huge
objects that are very hard to impact in a meaningful fashion. We can predict where planets
will be centuries in advance with very high accuracy.
Still, minor perturbations in the system do begin to stack up, making
accurate prediction increasingly difficult the further ahead in time you try to predict.
Every time you add another variable to the system, the number of possible perturbations
increases as well, creating a snowball effect over time.
We can make reliable predictions about planets, even though they
are huge, and contain complex, unpredictable systems like weather and civilizations, because
these systems don't spill over to the solar scale. A hurricane on Earth doesn't matter
much in terms of Earth’s gravitational pull on Jupiter, nor does it matter who wins Best
Pie at this year's county fair in Jefferson, Ohio… "much" being a key word, of course.
A hurricane, for instance, represents a non-homogeneous distribution of
Earth's mass, and while the effect on Earth's gravity is small, the force exerted by
Earth on Mars or Jupiter changes slightly depending on which side of the planet the
storm happens to be on. Now you’ve
probably heard of the Butterfly Effect, the general notion that even something as small
as the beating of a butterfly’s wings could alter the course of a hurricane when given
enough time for that effect to ripple and avalanche. If that is true, then it is pretty
easy to demonstrate that the Universe, even in the absence of intelligence, cannot be
predicted indefinitely far into the future. Which is to say, even if you could build a
computer capable of handling all the calculations necessary to track every particle, and do
so without altering the system, quantum events will boil up to the macroscopic scale and
via the Butterfly Effect, alter everything, even the disposition of galaxies, given enough
time. Adding intelligence
and free will into the equation simultaneously eases and exacerbates the problem. As an intelligent
agent, I can be quite unpredictable, but I can also force events to happen as I desire
them to. If a stone rolls downhill following an unpredictable trajectory into a river, causing
the river to change its course, I can walk over and remove it.
I can also add some more stones to shift the river back on course. As we’ve discussed
elsewhere on the channel, it is possible for us to move rivers, mountains, and even whole
planets, solar systems, and entire galaxies. Now, regardless of whether
or not one believes in the concept of free will, we know from an observational standpoint
that individuals are often highly unpredictable in their specific actions, but perhaps a little
less so in a statistical sense. You cannot usefully predict when a person
will decide to stand up to go to the fridge and get a drink or what drink they will get...
however, one might be able to proactively influence that decision. If I am talking to
someone and mention being thirsty or make noises indicating that my throat is dry, they
are very likely to ask if I would like a drink, and also likely to get one for themselves too.
I could also influence which beverage they might select: if I ask for coffee, they are
more likely to get themselves a cup of coffee too.
I doubt I need to demonstrate to anyone today that human action is at the
very least pretty hard to predict, under many circumstances, or that we can easily have
our actions and decisions influenced. I probably also don’t need to convince you that as
unpredictable as human actions might be individually, moment by moment, they do tend to average
out a bit. I have no idea what I will have for lunch tomorrow, or what you will have,
but we could look at restaurant sales and grocery store sales as a whole, and make some
fairly accurate predictions as to how many people will eat cheeseburgers tomorrow.
Also generally speaking, the bigger our sample, the more accurate we
will become. I might say you have a 10% chance of eating a cheeseburger tomorrow, but the
level of uncertainty on that is quite large; if, on the other hand, I pick a small city
of 100,000 people and estimate 10,000 of them will eat a cheeseburger, and 10,310 do, I
will have been pretty accurate. In a nation of 100 million, where I predicted 10 million
would eat a cheeseburger tomorrow, I might expect a typical deviation of only about 3,000, just
three hundredths of a percent of the prediction, which is pretty darn accurate.
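If you want to see where numbers like that come from, here's a minimal sketch, assuming, purely for illustration, that each person is an independent coin flip with a 10% chance of eating a cheeseburger tomorrow, so the total count follows a binomial distribution and the "typical miss" is one standard deviation.

```python
# A back-of-the-envelope sketch (not from the episode): treat each person as an
# independent coin flip with p = 0.1 of eating a cheeseburger tomorrow, so the total
# count is binomial and the typical miss is one standard deviation, sqrt(n * p * (1 - p)).
import math

p = 0.1  # assumed per-person chance, purely illustrative

for n in (1_000, 100_000, 100_000_000):
    predicted = n * p                    # expected number of cheeseburger eaters
    sigma = math.sqrt(n * p * (1 - p))   # typical deviation from that prediction
    relative = sigma / predicted         # deviation as a fraction of the prediction
    print(f"population {n:,}: predict {predicted:,.0f}, typical miss ~{sigma:,.0f} "
          f"({relative:.4%} of the prediction)")

# population 1,000: predict 100, typical miss ~9 (9.4868% of the prediction)
# population 100,000: predict 10,000, typical miss ~95 (0.9487% of the prediction)
# population 100,000,000: predict 10,000,000, typical miss ~3,000 (0.0300% of the prediction)
```

The relative error shrinks roughly as one over the square root of the population, which is the whole trick behind statistics getting better as samples get bigger.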
As the sample size expands, my accuracy increases, to the point that for
things like the behavior of gas samples composed of trillions of trillions of individual and
unpredictable molecules, I can predict some aspects of that sample’s behavior with incredible
accuracy. I will actually be more accurate the bigger the sample, in direct contradiction
to the notion that larger things containing more particles should be harder to predict
than smaller ones. The difficulty of modeling a system, as we see with planets in
the solar system, does not correlate much with its actual size.
Many systems, as they scale up in complexity, can begin to exhibit emergent properties that
make them easier to model too, in much the same way that any collection of atoms can in principle be modeled
under quantum mechanics but is far easier to model under Newtonian mechanics, or under chemical,
biological, or even psychological principles and laws.
So we can draw a few conclusions at this point. First, that while
we may debate whether or not free will actually exists, and also whether or not we could predict
someone's behavior with a sufficiently large computer, from a practical standpoint individual
behavior is not predictable with any degree of precision.
Second, that while individuals are unpredictable, they do generally tend
to have many of the same motivations and actions, and those actions can be externally influenced.
If I offer someone a cheeseburger, they are more likely to eat a cheeseburger in the next
few minutes than otherwise. Third, that although
individual behavior is largely unpredictable, people in groups do behave somewhat more predictably,
therefore we can apply statistics to large numbers of them and get useful results. The
larger the sample, the more accurate those results will be.
This is the basis for Asimov’s Psychohistory. Here is the reasonably
spoiler-free explanation: In the far future there exists a galaxy-wide empire that’s
been around for thousands of years. It contains millions of planets populated by humans, and
no intelligent aliens have been encountered anywhere in the galaxy in all of that time.
For the most part technology and innovation have plateaued and come to a halt under the
assumption that everything worth discovering has been discovered already. Indeed they are
beginning to lose some technology. Entering upon this scene is a mathematician
named Hari Seldon who invents a new science called Psychohistory that can predict the
future course of human history. The success of this new science, its failures, and the
efforts made to keep those predictions on track or to derail them, are what most of the
series is about. I will avoid discussing that further in deference to your future reading
enjoyment. However, since Seldon's first big prediction is revealed in the first short
story and is also displayed on the dust jackets of many editions, I won't feel guilty about
exposing it… Seldon predicts that the Galactic Empire is going to fall, and indeed
it does so with a lot of parallels to the old Roman Empire.
He sets up a Foundation for a new and better empire to build itself
around in the aftermath of the Empire’s collapse. The Foundation is essentially a
bunch of scholars sent off to a distant corner of the galaxy to build and maintain a huge
Encyclopedia of all human knowledge, the Encyclopedia Galactica. Seldon can predict future events
at the galactic scale with high accuracy, and can throw in a few things which if done
just right and given enough time to build up momentum, will nudge the course of future
events into a different and more desirable direction.
He also gives us two axioms on which Psychohistory rests.
First, that the population whose behavior was modeled should be sufficiently
large; the smaller the sample, the less accurate the results.
Second, that the population should remain in ignorance of the results
of the application of psychohistorical analyses. Which is to say, people can know Psychohistory
exists and works, but they cannot be allowed to see a specific prediction or it’s very
likely to change the outcome. As an example, if I told you it was predicted
that you would be killed at work tomorrow, you’d probably call in sick.
Later in the series we are given a third underlying axiom of Psychohistory,
one considered too obvious for explicit mention, which is that human beings are the only sentient
intelligence in the Galaxy, and from events in the book we can conclude that any intelligence,
even a human, that deviates enough from normal human behavior can cause disruptions.
A fourth implied axiom is mentioned, but is steered away from since
it is a bit of a plot hole. That is the notion of new technology altering the dynamics a
lot. We talked at the end of last year about Black Swan Events, things which seriously
disrupt society that are unpredictable in foresight but often seem obvious in hindsight.
At the time I said that technology often tends to be a Black Swan.
I’m sure Asimov agreed with this view, since he does bring it up, but dwelling on it massacres
the basic concept of psychohistory. You can't predict the impact of any new technology before
it's been invented, since if you knew its characteristics well enough to predict its
effect on civilization that accurately, you would probably already know how
to make the device. As Richard Feynman once said, "What I cannot create, I do not
understand." That's a serious problem in the story, since
the Foundation is constantly inventing new technology throughout the series.
Asimov also indicates that the further down the road you try to
look, the hazier it is and the more likely things are to diverge, and may need to be
nudged back on course. In the series, Seldon shows up occasionally as pre-recorded holograms
to do just that, and takes some other steps to keep the plan on track, but I won’t go
into too much detail here, as that would spoil the story.
So that’s the basic concept, and it is a decently solid one. As
discussed, we can’t predict individual behavior, but we can see trends in human behavior which
are probably subject to much more accurate modeling and prediction than we can currently
manage. So while the option
to develop psychohistory might exist, we have to toss in some caveats on how well it might
be expected to function. First, we have no particular
reason to believe history and human events have any inertia to them that tends to put
us back on course. Indeed, most examples of inevitability we have tend to revolve around
technological advances, such as our discovering how to farm cereal crops, and kingdoms arising
in floodplains as a result. Such things are Black Swans, obvious in hindsight
but unpredictable in advance. So we have the Butterfly Effect to contend with. While minor
perturbations to the system will often smooth out over time, they also can amplify over
time to cause major course changes. So no predictive method should allow us to predict
events arbitrarily far off in the future. And in the short term, technology is constantly
improving, and improving technology produces changes that you can't predict in advance,
since you don't understand a technology before you invent it.
Though to add a caveat to that caveat, the longer humanity is around,
the more likely it is that civilization will have grown in size, giving the system more inertia and
making it harder to perturb, and the more likely it is that we will have slowed or ceased our technological
progress. We have a tendency to say there's always
something new to be discovered, but odds are this is not so and that major scientific and
technological improvements will eventually slow down or halt altogether at some point
in the future. The notion that the Universe and its mysteries are infinite and unending,
while possibly true, is honestly more jingoism and mysticism than science. I doubt we
have to worry about running out of new scientific discoveries any time soon, but we probably
will eventually, and a society that has done so ought to be more predictable without new
technology constantly altering the dynamics. So that is caveat one:
any psychohistorical model is going to be less accurate the further ahead it tries to
predict, especially in an environment where major black swans like new technology can
occur. Caveat two is that since
it will rely on human behavior, the introduction of inhuman behavior will tend to disrupt things.
This could be aliens, artificial intelligence, or even transhumans like cyborgs or genetically
modified people. Though such models could probably be expanded to handle them.
Also, the introduction of a new group, like aliens, might not merely require
the model to be twice as big, to account for their behavior, but potentially exponentially
larger, to handle all the strange interactions between the two groups. However, the unspoken
psychohistorical axiom about aliens disrupting the plan has to be taken with a grain of salt,
since we can assume any civilization or species is going to have behaviors and trends which
can be predicted, given sufficient information to tweak the model to their behavior.
Caveat three is that people will want to change the outcomes. This
is why the second axiom in the story is that folks can be aware of the existence of psychohistory
but not its predictions. In the story, nobody can reproduce the science afterwards. A more
realistic perspective is that if someone created such a science tomorrow, by year’s end everyone
would have access to the concept and be working to improve it, either to make better predictions
or foresee bad outcomes and warn folks. If everyone has access
to those models, their predictive power will be limited, since people can see undesired
outcomes coming and try to avoid them. Though we can also say that being able to predict
events and outcomes, and everyone having access to those predictions, can have some good results
too. Accurate weather forecasting, which sometimes seems like an oxymoron, is generally beneficial.
So is letting folks on both sides of an argument see that most pathways in a conflict will
turn out badly, and that the other side can also see which paths are good for you but bad for
them, and will move to cut those options off. For good or ill, we would expect these predictions
to not remain secret, which would tend to alter the likelihood of them occurring.
On the other hand there are such things as self-fulfilling prophecies.
If everyone thinks an outcome is inevitable, they may think resistance is futile. So if
someone tells you that Country X has some massive plan that makes them the inevitable
founders of a great new empire, you are probably going to avoid going against them as an enemy
and they will generally enter into any conflict with a lot more confidence. This can help
tremendously, provided they avoid overconfidence. It is also possible to thwart such predictions
by using random means to make decisions, much as we did a couple weeks back with a Quantum
Random Number Generator. So if enough folks don’t like such predictive methods being
used, they do have a way to counteract them. I wanted to take some
time to discuss Chaos Theory today, but I haven’t been able to think of any way to
explain that in a clear and simplified way beyond the basics. Chaotic systems are ones
that are very sensitive to initial conditions and whose behavior can change wildly based on minor
shifts to those initial conditions. A ball rolling down a smooth incline is not particularly
chaotic: starting it a little higher or lower on that slope won't change the results too
much; it will just be a little faster or slower at the bottom.
A chaotic system on the other hand varies wildly in its behavior with even minor changes
to those starting conditions. Chaotic systems are a lot more sensitive to minor perturbations,
and thus we'd say they are way more sensitive to the Butterfly Effect. Now a chaotic system
is not a random one; it just tends to look random on initial inspection. The field is actually
called Deterministic Chaos, but that tends to get shortened to Chaos Theory.
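To make that concrete, here's a tiny sketch of deterministic chaos. It uses the logistic map rather than anything from the episode, simply because it fits in a few lines: the rule is completely fixed, with no randomness anywhere, yet two starting points differing by one part in a billion end up nowhere near each other within a few dozen steps.

```python
# Deterministic chaos in miniature: the logistic map x -> r * x * (1 - x) at r = 4.
# The rule has no randomness at all, but a starting difference of one part in a billion
# roughly doubles every step, so the two runs soon disagree completely.
r = 4.0
x_a = 0.200000000   # first starting condition
x_b = 0.200000001   # second starting condition, shifted by one billionth

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f} (difference {abs(x_a - x_b):.6f})")
```

By around step 30 the gap has grown as large as the values themselves, which is the Butterfly Effect in its purest mathematical form.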
A coin flip or dice roll is not truly random, but the final results,
heads or tails or one to six, are incredibly sensitive to the initial starting conditions.
Chaotic systems don’t have to be very complicated either, nor are complex systems necessarily
chaotic: a set of hundreds of gears working together is pretty predictable though not
simple, whereas a double pendulum, a pendulum with a second pendulum hanging from its end,
is quite simple but chaotic. What’s relevant here
is not just that human civilization is probably a chaotic system, or that such systems are
easy to perturb and hence sensitive to the butterfly effect, but rather that when you
poke a chaotic system, even a minor deviation in that poke can get wildly different results.
Push a switch and one of three things happens: you either push it hard enough to trigger
the switch, not hard enough so nothing happens, or so hard that you smash the
switch and break it. By and large there's a specific range of force that will flip that
switch: too low, no effect; too high, broken switch. This is not a chaotic system. You
can’t turn your lights on with a feather or a sledgehammer, but minor variations in
the force used have no major impact on the system or outcome.
On the other hand, push a person or a group of people and minor variations
in the force could have a huge impact on the outcome. Getting 100,000 people to take an
action that will alter the future in a desired way might work, and maybe 95,000 is just good
enough and 105,000 is actually better. But it might be that even a deviation of ten or
twelve folks more or less than 100,000 will fail to produce the desired result. That could
be an example of a chaotic system. Just a few more people and they overestimate their
strength and pursue more aggressive policies that fail; a few fewer people and they try
more violent tactics that also fail; but go even a bit lower than that and perhaps the
preferred outcome does happen, because they spend a bit more effort on tweaking their
message to attract new recruits and supporters. We all tend to know on an intuitive
level that trying to convince someone or sway their opinion can be a very delicate process:
there’s no single exact right phrasing and tone to get the job done, but rather a range
of them. Any suggestion can fail if just a little misaimed,
too subtle, or not subtle enough. You can even cause a negative backlash, reinforcing
a behavior or attitude you meant to discourage. Hence we tend to calibrate and nudge as we
go along. We also know that there can be a lot of unintended consequences; push someone
to exhibit one behavior and you might encourage another or suppress one.
Also, it can affect other people. The obvious case would be encouraging someone
to change their career. It can shift or disrupt the lives of their family or coworkers, but the
effects can also transmit a long way further out.
If you've ever played the game Six Degrees of Separation, or sometimes Six Degrees of
Kevin Bacon, you know you can usually connect just about anyone to anyone else with six or fewer
people in between. I wouldn't be surprised if at some point someone made a Facebook app
that could trace the shortest chain of mutual friends between you and someone else. But
those connections are exactly the paths that Butterfly Effects and unintended consequences can radiate
down, something like the little sketch below.
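As a toy illustration of that kind of lookup, here's a breadth-first search over a friendship graph, finding the shortest chain of acquaintances between two people. The names, the connections, and the helper function are all made up purely for the example.

```python
# A toy "six degrees" sketch: breadth-first search over a hypothetical friendship
# graph to find the shortest chain of mutual acquaintances between two people.
from collections import deque

friends = {
    "you":      ["alice", "bob"],
    "alice":    ["you", "carol"],
    "bob":      ["you", "dave"],
    "carol":    ["alice", "erin"],
    "dave":     ["bob", "erin"],
    "erin":     ["carol", "dave", "stranger"],
    "stranger": ["erin"],
}

def shortest_chain(start, goal):
    """Return the shortest list of people connecting start to goal, inclusive."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for friend in friends.get(path[-1], []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None  # no connection at all

print(shortest_chain("you", "stranger"))
# ['you', 'alice', 'carol', 'erin', 'stranger'] -- three people in between
```

Every hop in a chain like that is a place where a nudge to one person can quietly propagate to the next.
The key point today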
about a chaotic system, which humanity probably qualifies as, is that it is especially sensitive
to minor perturbations as with the Butterfly Effect.
A result of that is that each additional effect that is sensitive
to minor perturbation is increasingly likely to derail predictions as time and perturbations
accumulate. Putting it in mathematical terms, let's assume I have a thousand-year-long
plan that has a crucial moment or major decision about once a century that can destroy the
plan if the wrong action is taken. Each decision is pretty likely to be the right
one, say a 90% chance per decision to keep the math easy, and we have to successfully
throw the dice each time. Not bad odds the first time, 90%, but the second time is also
90%, and there's only a .9 times .9 chance, or 81%, to get through step two. The
next step is .9 times .9 times .9, or .729, a 72.9% chance of success, still not bad, but
by step 10, or .9^10, we're down to a 34.9% chance of still being on track. Almost 2 to 1 against.
You can also have a range of successes or partial failure that will increase or decrease
the odds of hitting the next goal. Sequential probabilities are not kind, especially when
the previous one can alter the odds of achieving the next goal, like if just barely succeeding
or missing step 1 changed the odds of succeeding on step 2 from 90% to 80%.
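Here's a quick sketch of that arithmetic: the first loop just compounds the 90%-per-decision odds from the example above, and the second adds a purely hypothetical twist, shaving a couple of percentage points off each subsequent decision, to show how unkind coupled sequential probabilities can be.

```python
# Compounding the thousand-year plan: ten crucial decisions, each 90% likely to go right.
p_success = 0.9
on_track = 1.0
for decision in range(1, 11):
    on_track *= p_success
    print(f"after decision {decision:2d}: {on_track:.1%} chance the plan is still intact")
# after decision 10: 34.9% chance the plan is still intact -- almost 2 to 1 against

# Hypothetical coupled version: every decision erodes the odds of the next one by 2 points.
on_track, p = 1.0, 0.9
for decision in range(1, 11):
    on_track *= p
    p -= 0.02
print(f"with eroding odds, after decision 10: {on_track:.1%}")   # roughly 12%
```

The second number obviously depends on whatever erosion you assume; the point is just that when each step degrades the next, the decay is far faster than simple compounding suggests.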
That's the nature of chaotic systems though: the first part
of the definition, about them being very sensitive to initial conditions, sometimes makes people
overlook that unless the system is totally closed off from outside influences,
that sensitivity is ongoing, and the system is vulnerable to every new poke or nudge you give it.
So we’ve discussed the difficulties of implementing a system
like this but it is probably worth considering whether you should use such a system even
if it does turn out to be possible. Would such ongoing manipulations of the future be
ethical given that so many people can be so profoundly affected for good or ill?
I don’t think there is a definitive yes or no because it’s not
particularly clear cut. It clearly offers us a lot of benefits and while it is a bit
controlling, you aren’t coercing or brainwashing anyone into specific actions, at least not
necessarily. If you need to make nudges and corrections, you might have to resort to some
fairly direct methods to push people who are individually threatening to derail the plan
but that’s a different moral dilemma and also kind of implies your method is not too
accurate and reliable either. If you’ve got to go around coercing, assassinating,
brainwashing, or tricking key people into the right course of action on a regular basis
to keep the plan intact, it probably means the plan isn’t too sturdy and resilient.
You have to wonder how solid the system is if you essentially have to continually tinker
with it, akin to issuing patches for poorly written software.
You also have the moral issue of knowing in advance of bad outcomes for some people and
doing nothing to prevent them, or even taking actions to make them happen, but that is essentially
the Trolley Problem expanded to the planetary or galactic scale.
But barring that, and the occasional nudge to individuals if necessary,
which is a different animal, it is hard to argue that such a system differs, ethically speaking,
from normal efforts to sway or change human civilization, except in its degree of effectiveness.
As I mentioned when we were discussing things like weather control or gene-tailoring or
cybernetics, people have been trying to do such things for a long time, and just because
science and technology make those efforts effective doesn’t mean they are now less
ethical than all the failed attempts of the past.
Taking a potion the local alchemist or witch promised you would give you superhuman vitality
and taking a scientifically engineered medicine that actually does confer those benefits differ
only in that the former might be full of mercury and nightshade and kill you without giving
you the desired result, but if you thought it was going to work and took it, then you
have still made the same ethical decision. Our history is full
of folks consulting oracles for glimpses into and advice about the future, and very rarely does
anyone stop the storyteller mid-tale to ask if it was ethical for the protagonist to get
a glimpse of a bad future foretelling the fall of their civilization, especially if
they then seek to change that future for the better. We constantly seek to change our future
circumstances after all, and regularly create customs or policies designed to encourage
folks toward behaviors we think will be for the better.
So there is the potential for some pretty cold-blooded actions being
available with the possession of such predictive technology, especially if it were some self-selected
group that was picking out which future was best for everyone else, and that is giving
them the benefit of the doubt that their intentions are benevolent. There’s no guarantee their
intent began benevolent, or stayed that way over time, or that their intentions,
even if honorable, will be correct. Even wise men make serious judgment errors, and as they
say, power corrupts, and absolute power corrupts absolutely. And being able to predict the
future is a lot of power. However, I suspect it's
not ever going to be a concern, because while I do think we will see predictive math applied
more and more accurately to human civilization as time goes on (indeed, there is already a young
field attempting just that, called Cliodynamics), I suspect that this will
always tend to be either very vague and error-prone, or quite specific and narrow in time
and topic, like predicting how folks would react to a given policy enacted this year.
We already do a lot of that with focus groups, polls, and marketing anyway, so seeing such
predictive modeling increase in accuracy and become more widespread in use seems quite
likely. That said, we can almost certainly rule out anyone ever developing methods to
exactly predict events or individual actions, especially very far ahead in time.
So whether or not we will ever develop a means of forecasting human
behavior on the civilization scale, it is a pretty fascinating concept with a lot of
possible benefits and problems. You certainly don’t need psychohistory to have predicted
that such a concept would be endlessly thought-provoking, and would keep the series that introduced it at
the top of every best sci-fi list 75 years after the first short story was published.
Next week we return to the Upward Bound Series to look at Launch
Loops, the first of three looks at Active Support structures and how they let us get
around physical limitations on material strength. For alerts when that and other episodes come out, make sure to
subscribe to the channel, and if you enjoyed this episode, hit the like button and share
it with others. Until Next Time, Thanks
for Watching, and Have a Great Week!