Take 1+2+4+8 and continue on and on adding the next power of 2 up to infinity. This might seem crazy, but there’s a sense in which
this infinite sum equals -1. If you’re like me, this feels strange or
obviously false when you first see it, but I promise you, by the end of this video you and I will make it make sense. To do this, we need to back up. You and I will walk through what it might feel like to discover convergent infinite sums, the ones that at least seem to make sense, in order to define what they really mean, then to discover this crazy equation and to stumble upon new forms of math where it makes sense. Imagine that you are an early mathematician
in the process of discovering that 1/2 + 1/4 + 1/8 + 1/16, on and on up to infinity, whatever
that means, equals 1, and imagine you needed to define what it means to add infinitely
many things for your friends to take you seriously. What would that feel like? Frankly I have
no idea, and I imagine that more than anything it feels like being wrong or stuck most of the time, but I’ll give my best guess at one way the successful parts of it might go. One day you are pondering the nature of distances between objects, and how no matter how close two things are, it seems that they can always be brought a little bit closer together without touching. Fond of math as you are, you want to capture this paradoxical feeling with numbers, so you imagine placing the two objects on the
number line, the first at 0, the second at 1. Then you march the first object towards
the second such that with each step the distance between them is cut in half. You keep track of the numbers this object touches during its march writing down 1/2, 1/2 + 1/4, 1/2 + 1/4 + 1/8, and so on. That is, each number is naturally written as a slightly longer sum with one more power of 2 in it. As such, you are tempted to say that if these numbers approach anything, we should be able to write that thing as a sum that contains the reciprocals of every power of
2. On the other hand, we can see geometrically that these numbers approach 1, so what you want to say is that 1 and some kind of infinite sum are the same thing. If your education was too formal, you’d write off this statement as ridiculous. Clearly you can’t add infinitely many things. No human, computer, or physical thing ever could perform such a task. If, however, you approach math with a healthy
irreverence, you’ll stand brave in the face of ridiculousness and try to make sense out
of this nonsense that you wrote down, since it kind of feels like nature gave it to you. So how exactly do you, dear mathematician,
go about defining infinite sums? Well-practiced in math as you are, you know
that finding the right definitions is less about generating new thoughts than it is about
dissecting old thoughts, so you go back to how you came across this fuzzy discovery. At no point did you actually perform infinitely
many operations. You had a list of numbers, a list that could keep going forever if you had the time, and each number came from a perfectly reasonable
finite sum. You noticed that the numbers in this list
approach 1, but what do you mean by “approach”? It’s not just that the distance between
each number and 1 gets smaller, because for that matter the distance between each number
and 2 also gets smaller. After thinking about it, you realize what
makes 1 special is that your numbers can get arbitrarily close to 1. Which is to say, no matter how small your
desired distance, 1/100th, 1/1,000,000th, or one over the largest number you can write
down, if you go down the list long enough, the numbers will eventually fall within that tiny, tiny distance of 1. Retrospectively this might seem like the clear way to solidify what you mean by “approach”, but as a first-time endeavor it’s actually
incredibly clever. Now you pull out your pen and scribble down
the definition of what it means for an infinite sum to equal some number, say X. It means that when you generate a list of
numbers by cutting off your sum at finite points, the numbers in this list approach
X, in the sense that no matter how small a distance you choose, at some point down the
list all the numbers start falling within that distance of X.
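To make that definition concrete, here is a minimal sketch in Python (my own illustration, not something from the video) that generates the list of cut-off sums of 1/2 + 1/4 + 1/8 + ... and checks that they eventually fall within any chosen distance of 1:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... and the "approach" test from the
# definition above: for any chosen distance, the sums eventually land
# within that distance of 1.

def partial_sums(num_terms):
    """Return the first num_terms cut-off sums of 1/2 + 1/4 + 1/8 + ..."""
    total = 0.0
    sums = []
    for k in range(1, num_terms + 1):
        total += 1 / 2**k
        sums.append(total)
    return sums

for tolerance in (1e-2, 1e-6, 1e-12):
    for index, s in enumerate(partial_sums(60), start=1):
        if abs(1 - s) < tolerance:
            print(f"within {tolerance} of 1 after {index} terms (sum = {s})")
            break
```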
In doing this, you just invented some math. But it never felt like you were pulling things out of thin air. You were just trying to justify
what it was the universe gave you in the first place. You might wonder if you can find other, more
general truths about these infinite sums that you just invented. To do so, you look for where you made any
arbitrary decisions. For instance, when you were shrinking the
distance between your objects, cutting the interval into pieces of size ½, ¼, etc.,
you could have chosen a proportion other than ½. You could have instead cut your interval into pieces of size 9/10 and 1/10, and then cut that rightmost piece into the same proportions to get even smaller pieces of size 9/100 and 1/100, then cut that tiny piece of size 1/100 similarly. Continuing on and on, you’d see that 9/10 + 9/100 + 9/1000, on and on up to infinity, equals 1, a fact more popularly written as 0.9 repeating = 1. To all of your friends who insist that this does not equal 1, that it merely approaches it, you can now just smile, because you know that with infinite sums, to approach and to equal mean the same thing. To be general about it, let’s say that you cut the interval into pieces of size p and (1-p), where p represents any number between 0 and 1. Cutting the piece of size p in similar proportions, we now get pieces of size p(1-p) and p^2. Continuing in this fashion, always cutting up the rightmost piece into those same proportions, you will find that (1-p) + p(1-p) + p^2(1-p), on and on, always adding the next power of p times (1-p), equals 1. Dividing both sides by (1-p), we get the nice formula 1 + p + p^2 + p^3 + ... = 1/(1-p). In this formula, the universe has offered a weird form of nonsense. Even though the way you discovered it only makes sense for values of p between 0 and 1, the right hand side still makes sense when you replace p with any number, except maybe for 1.
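As a quick numerical aside (my own, not part of the video), here is a sketch comparing the cut-off sums against the right hand side of that formula, both inside and outside the range where the discovery made sense:

```python
def cutoff_sum(p, num_terms):
    """The finite sum 1 + p + p^2 + ... + p^(num_terms - 1)."""
    return sum(p**k for k in range(num_terms))

# Inside the comfort zone: the cut-off sums approach 1 / (1 - p).
for p in (0.5, 0.9):
    print(p, cutoff_sum(p, 200), 1 / (1 - p))

# Outside it: the right hand side still produces an answer, but the
# cut-off sums oscillate (p = -1) or blow up (p = 2).
print(-1, cutoff_sum(-1, 10), cutoff_sum(-1, 11), 1 / (1 - (-1)))   # 0, 1, 0.5
print(2, cutoff_sum(2, 10), 1 / (1 - 2))                            # 1023, -1.0
```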
For instance, plugging in -1, the equation reads 1 - 1 + 1 - 1, on and on forever alternating between the two, equals 1/2, which feels both pretty silly and kind of like the only thing it could be. Plugging in 2, the equation reads 1 + 2 + 4 + 8 + ... = -1, something which doesn’t even seem reasonable. On the one hand, rigor would dictate that
you ignore these, since your definition of infinite sums doesn’t apply in these cases.
The list of numbers that you generate by cutting off the sum at finite points doesn’t approach
anything. But you are a mathematician, not a robot,
so you don’t let the fact that something is nonsensical stop you. I will leave that alternating sum for another day, so
that we can jump directly into this monster. First, to clean things up, notice what you
get when you cut off the sum at finite points: 1, 3, 7, 15, 31. They are all one less than
a power of 2. In general, when you add up the first n powers
of 2, you get 2^{n+1} - 1, which this animation hopefully makes clear. You decide to humor the universe and pretend
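If you'd rather check that pattern with arithmetic than with an animation, a one-line test (again just my own illustration) confirms it:

```python
# Adding the powers of 2 from 2^0 up through 2^n always gives 2^(n+1) - 1.
print(all(sum(2**k for k in range(n + 1)) == 2**(n + 1) - 1 for n in range(20)))  # True
```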
You decide to humor the universe and pretend that these numbers, all one less than a power of two, actually do approach -1. It will prove to be cleaner if we add one
to everything and say that the powers of two approach zero. Is there any way that this can make sense? In effect, what you are trying to do is make
this formula more general, by saying that it applies to all numbers, not just those
between 0 and 1. Again, to make things more general you look for any place where you made an arbitrary choice. Here, that place turns out to be very sneaky. So sneaky, in fact, that it took mathematicians
until the 20th century to find it. It’s the way that we define distance between two
rational numbers. That is to say, organizing them on a line
might not be the only reasonable way to organize them. The notion of distance is essentially a function
that takes in two numbers and outputs a number indicating how far apart they are. You could come up with a completely random
notion of distance, where 2 is 7 away from 3, and ½ is 4/5ths away from 100, and all
sorts of things, but if you want to actually use a new distance function the way that you
use the familiar distance function, it should share some of the same properties. For example, the distance between 2 numbers
shouldn’t change if you shift them both by the same amount. For instance, 0 and 4 should be the same distance apart as 1 and 5, or 2 and 6, even if that distance is something other than four, as we're used to. Keeping things general, the distance between
two numbers shouldn't change if you add the same amount to both of them. Let's call this property “shift invariance”. There are other properties that you want your
notion of distance to have as well, like the triangle inequality, but before we start worrying
about those, let's start imagining what notion of distance could possibly make powers of
2 approach 0, and which is shift invariant. At first you might toil for a while to find
a frame of mind where this doesn’t feel like utter nonsense, but with enough time
and a bit of luck you might think to organize your numbers into rooms, sub-rooms, sub-sub-rooms,
and so on. You think of 0 as being in the same room as
all the powers of 2 greater than 1, as being in the same sub-room as all powers of 2 greater
than 2, as being in the same sub-sub-room as powers of 2 greater than 4, and so on,
with infinitely many smaller and smaller rooms. It's pretty hard to draw infinitely many things,
so I'm only going to draw four different sizes of rooms, but keep in the back of your mind that this
process should be able to go on forever. If we think of every number as lying in a
hierarchy of rooms, not just 0, shift-invariance will tell us where all of the numbers must fall.
For instance, 1 should be as far away from 3 as 2 is from 0. Likewise the distance between 0 and 4 should
be the same as that between 1 and 5, 2 and 6, and 3 and 7. Continuing like this, you will see which rooms,
sub-rooms, sub-sub-rooms and so on each successive number must fall into. You can also deduce where negative integers
must fall, where for example -1 has to be in the same room as 1, in the same sub-room
as 3, the same sub-sub-room as 7, and so on, always in smaller and smaller rooms with numbers
one less than a power of 2, because 0 is in smaller and smaller rooms with the powers
of 2. So how do you turn this general idea of closeness
based on rooms and sub-rooms into an actual distance function? You can’t take this drawing too literally,
since it makes 1 look very close to 14, and 0 very far from 13, even though shift-invariance should imply that those two pairs are the same distance apart. Again, in the actual process of discovery
you might toil away, scribbling through many sheets of paper, but if you have the idea
that the only thing which should matter in determining the distance between two objects
is the size of the smallest room they share, you might come up with the following: any
numbers lying in different large yellow rooms are a distance 1 from each other; those which
are in the same large room, but not the same orange sub-room, are a distance ½ from each
other; those that are in the same orange sub-room but not in the same sub-sub-room are a distance
¼ from each other. And you continue like this, using the reciprocal of larger and larger powers of two to indicate closeness.
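Here is one way that rooms-and-sub-rooms rule might look in code, restricted to integers (this is my own sketch, the function names are invented, and handling fractions like 1/3 would require extending the valuation below):

```python
from fractions import Fraction

def two_adic_valuation(x):
    """Largest k such that 2^k divides the nonzero integer x."""
    k = 0
    while x % 2 == 0:
        x //= 2
        k += 1
    return k

def two_adic_distance(a, b):
    """Distance set by the smallest shared room: 1 / 2^k, where 2^k is the
    largest power of 2 dividing a - b (and 0 if a equals b)."""
    if a == b:
        return Fraction(0)
    return Fraction(1, 2 ** two_adic_valuation(a - b))

# The powers of 2 get arbitrarily close to 0 under this distance ...
print([str(two_adic_distance(2**n, 0)) for n in range(1, 6)])   # 1/2, 1/4, 1/8, 1/16, 1/32

# ... and the distance is shift invariant, as required.
print(two_adic_distance(0, 4) == two_adic_distance(1, 5) == two_adic_distance(2, 6))  # True
```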
We won’t do it in this video, but see if you can reason about which rooms other rational numbers like 1/3 and ½ should fall into,
and see if you can prove why this notion of distance satisfies many of the nice properties
we expect from a distance function, like the triangle inequality. Here I’ll just say that this notion of distance
is a perfectly legitimate one. We call it the “2-adic metric”, and it falls into a general family of distance functions called the “p-adic metrics”, where p stands for
any prime number. These metrics each give rise to a completely
new type of number, neither real nor complex, and have become a central notion in modern
number theory. Using the 2-adic metric, the fact that the
sum of all powers of 2 equals -1 actually makes sense, because the numbers 1, 3, 7,
15, 31 and so on genuinely approach -1.
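As a last small check (mine, not the video's), the gap between each cut-off sum 2^n - 1 and -1 is exactly 2^n, so under the 2-adic metric the distance to -1 is 1/2^n, which shrinks toward 0:

```python
# The gap between each cut-off sum and -1 is a pure power of 2,
# so its 2-adic distance from -1 is the reciprocal of that power.
for n in range(1, 7):
    cutoff = 2**n - 1            # 1, 3, 7, 15, 31, 63
    gap = cutoff - (-1)          # equals 2^n exactly
    print(f"{cutoff} is 2-adically 1/{gap} away from -1")
```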
This parable does not actually portray the historical trajectory of discoveries, but, nevertheless, I still think it’s a good
illustration of a recurring pattern in the discovery of math. First, nature hands you something that is ill-defined,
or even nonsensical. Then you define new concepts that make this fuzzy discovery make sense,
and these new concepts tend to yield genuinely useful math and broaden your mind about traditional
notions. So, in answer to the age-old question of whether
math is invention or discovery, my personal belief is that discovery of non-rigorous truths
is what leads us to the construction of rigorous terms that are useful, opening the door for more fuzzy discoveries, continuing the cycle.