[Subtitles submitted by: Zacháry Dorris] Today I want to let you in on a really neat trick that I learned from a very famous mathematician: how to subtract ∞ from ∞ to get exactly π. So for that I'll return to this blackboard here I found in the Simpsons, and discussed in this video up there, and particularly that infinite series down there at the bottom. Okay, so it's 1 - 1/2 + 1/3 - 1/4 + 1/5, all the way to infinity - an infinite series with sum log 2 [ln(2)]. So that's about... 0.69. What I want to do here is something that somebody discovered about 150 years ago. So just, just see what I do, it's a bit of a paradox. So what I do is I take a bit more of this, and I really need the whole board, so we'll take out every fourth term, and then what's left over has odd-denominator and even-denominator terms, so let's separate those out, too. Now we're going to work our way in from the left, ah, pick those two guys and put them down there, take those two guys and put them down in the gap, take those two guys and put them down in the next gap, and you keep on going like this, and obviously when you do this, you use up all the terms that were there in the beginning, and just rearrange them here and rebracket them a little bit. Now we're going to work our way through the brackets, so first one here, (1 - 1/2) is 1/2. (1/3 - 1/6) is 1/6. And so on, right, so we can see how this works, this nice pattern here, um, that's what happens, so we've just rearranged the terms, and we rebracketed them a little bit, and that shouldn't change anything about the sum of this thing, right? So, if you compare what we've got now to what we started with, well, the bit that we started with is what? ln(2). But now we can compare term by term, so one half of 1 is 1/2. One half of 1/2 is 1/4. And half of 1/3 is 1/6, and so on, and so what you see is that, term by term, the bit at the bottom here is always one half of the bit at the top, and so what it means is it should really be equal to ...
1/2*ln(2). Now if you put all this stuff together, you get ln(2) is equal to 1/2*ln(2), which really amounts to the same thing as saying that 1 is equal to 2. *groans tiredly* *laughing* And whenever you come across something like this, you start worrying. So there's really, you know, two different options here: the first one is maths is broken. Or, we made a mistake. What do you think is more likely here? Well, yes, we made a mistake. So where's the mistake? So let's have a close look, so there were three different identities, so here, here, and there. So two of these are correct, and one of these is not correct. So, the first bit, well, I already said this, and just believe me, this is equal to ln(2). Then the bottom bit is also correct. In fact, that bracketing here, and I'll just say this now, you can put in brackets anywhere you want, it's not going to change anything about this sum here. It's always going to add up to 1/2*ln(2) here, that's correct. So what's really messing things up is that we are rearranging the terms, so we're changing the order of the terms. And that really creates an infinite series with a different sum than the one before. So basically when we're dealing with these infinite series, unlike with the finite ones, order matters, and rearranging may change the sum. Okay, well, some people are getting really worried: oh no, maths is broken. Anyway, no, it's not broken - we just have to get used to this. *laughs* We've got to be careful about this sort of stuff. So we just saw that we can rearrange this series into another series with a different sum, and actually there's a famous mathematician, Riemann, Bernhard Riemann, who came up with a theorem, the Riemann rearrangement theorem, which says that, well, in the case of this particular series, you can rearrange this thing into series that add up to anything you want. For example, π, and I'm going to show you how that is done, how he... how he makes up these different sums.
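The comparison described above can be checked numerically. Here is a minimal Python sketch (the function names are my own, not from the video) that computes partial sums of the original alternating harmonic series and of the standard "one positive term, two negative terms" rearrangement, which is the same reshuffling the video performs:

```python
import math

def original_partial_sum(n):
    # 1 - 1/2 + 1/3 - 1/4 + ...  (alternating harmonic series)
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged_partial_sum(n_groups):
    # One positive term followed by two negative terms:
    # 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
    # Each group (1/(2k-1) - 1/(4k-2)) - 1/(4k) equals half of the
    # corresponding pair 1/(2k-1) - 1/(2k) of the original series.
    total = 0.0
    for k in range(1, n_groups + 1):
        total += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
    return total

print(original_partial_sum(1_000_000))    # ≈ ln(2)   ≈ 0.6931
print(rearranged_partial_sum(1_000_000))  # ≈ ln(2)/2 ≈ 0.3466
```

Same terms, different order, and the numerics land on ln(2)/2 rather than ln(2), just as the blackboard argument suggests.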
Now you've probably heard this name; the Riemann hypothesis is at the moment the most sought-after thing that you can prove in mathematics. So eternal fame awaits if you can do this, and that's all about the Riemann zeta function, and there's also the Riemann integral, which you use all the time, and lots and lots of Riemann-named bits in mathematics, so this guy is a mathematical superhero. Usually, when it comes to these things that are named after him, they are quite... difficult to explain properly, but that Riemann rearrangement theorem I can do like this with my animation magic. So let's just do it, but before we do it, I really have to remind you of what it actually means for one of those series to add up to a certain number. Okay, so what we do is we translate this series into a sequence of numbers, the 'partial sums', the sequence of partial sums. So the first partial sum is just the first term, which is 1; the second partial sum is just those two guys added up. That's... 0.5 in this case, and then, third partial sum, fourth partial sum, and so you get this sequence of numbers here, and in the case of this series, the sequence of numbers 'converges' to a certain number, which is ln(2). And then we say that this series has as its sum that number, ln(2) in this case. That's the definition of the sum of an infinite series. This is an example of a 'convergent' series. A series that actually has a sum which is a finite number. Ah, now there's also lots of series that don't converge. And they are called 'divergent'. So for example, if we replace all the minuses in here by pluses, I already showed in that other video up there that this adds up to +∞. But there's other types of divergence. For example, if we take this guy there, that's 1 - 1 + 1 - 1 + ...; the partial sums here are 1, 0, 1, 0, 1, 0. They don't settle down to anything; not finite, not infinite, that's also divergent. So there are different sorts of divergence.
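The partial-sums definition above is easy to play with in code. A small sketch (the helper name is my own) printing the first few partial sums of the convergent alternating harmonic series and of the divergent 1 - 1 + 1 - 1 + ... series:

```python
def partial_sums(terms):
    """Running totals: the n-th entry is the n-th partial sum."""
    total = 0.0
    sums = []
    for t in terms:
        total += t
        sums.append(total)
    return sums

# Alternating harmonic series 1 - 1/2 + 1/3 - ...: partial sums creep toward ln(2)
alt = partial_sums([(-1) ** (k + 1) / k for k in range(1, 9)])
print(alt)

# Grandi's series 1 - 1 + 1 - 1 + ...: partial sums oscillate and never settle
grandi = partial_sums([(-1) ** k for k in range(8)])
print(grandi)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

The first list homes in on 0.693..., while the second bounces between 1 and 0 forever, which is exactly the "different sort of divergence" described in the transcript.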
To explain what's going on here, let's have a look at the positive terms and the negative terms separately. So if you just add up all the positive terms, you'll actually find you get ∞. And if you actually add up the negative terms, you get -∞. And, actually, if you've watched the other video you should be able to prove that yourself, and maybe do it in the comments. So what we're really doing here is we're subtracting ∞ from ∞ in a controlled manner. What Riemann's rearrangement theorem says is that if you've got a convergent series, whose positive terms add up to ∞, and whose negative terms add up to -∞, then you can rearrange the series into series that have any sum you want. So let me demonstrate this now for π, with this particular series. Okay, so we want pi. [CAPTIONER'S NOTE: Don't we all, Mathologer :)] That's positive, so we're going to start with some positive terms. So the first term of my new series is going to be 1. So the first partial sum is also going to be 1. The partial sums, I put up there. Now, that is not enough, so we add a few more of those positive terms, let's get close to π. Now we actually have to add quite a few of them to get close to π; we are still just under here, so let's add one more, and that gets us just over. This string of terms we choose as the beginning bit of our new series. So we're just over; now we're going to use some negative terms to get just under. Okay, so we use the, actually, the -1/2 is good enough, so, that gets us under. *thoughtful pause* 2.6 and so on. *dramatic pause* Alright, now let's go over again, so for that, we'll just use some positive terms.
Um, we actually need quite a few again, but we can be absolutely sure that we can get over because, at any stage of this process, the positive terms that are left over will add up to ∞, and the negative terms that are left over will add up to -∞, so I can always be sure that no matter how far I'm under, or how far I'm over, I can always take enough terms to get under or over. Alright, now we can just keep on going like this, so then a negative term to get us under again, then positive terms again to get us over, and we go, flip back and forth, and eventually we get to π. And we can also be sure that we get to π because the terms themselves get smaller and smaller in magnitude, so that means that as I overshoot and undershoot, I get closer and closer to π. And actually, uh, I can rearrange this initial series in infinitely many different ways to get me π; I can also rearrange it to get me an infinite sum, all sorts of other things, so maybe you'll also want to do this in the comments, figure out the details there. Now, for other series, you can actually have the situation where nothing changes no matter what you do. So there's lots of convergent series where the positive terms add up to a positive number and the negative terms add up to a negative number, so for example, it could be, you know, 2, the positive terms, and minus 7, the negative terms. And then, no matter how you reshuffle this series, the sum will always be 2 - 7, is -5. Okay, now there's something else that we're actually doing here, we're actually bracketing, alright, so we're bracketing. So we can bracket like this, we can bracket like that, or maybe we don't do any brackets whatsoever. And actually, as long as we're dealing with infinite series that converge, either of this type or of that type, doesn't matter what, you can put in brackets wherever you want, and the sum won't change.
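The back-and-forth procedure just described can be sketched as a greedy algorithm: while the running total is at or under the target, take the next unused positive term; while it is over, take the next unused negative term. A minimal Python sketch (the function name and the number of steps are my own choices, not from the video):

```python
import math

def rearrange_to_target(target, n_terms=1_000_000):
    """Greedy Riemann rearrangement of 1 - 1/2 + 1/3 - ... toward `target`.

    The positive terms 1, 1/3, 1/5, ... and the negative terms
    -1/2, -1/4, ... each diverge, so we can always overshoot or
    undershoot the target, and the shrinking term sizes squeeze
    the partial sums onto it."""
    pos = 1  # denominator of the next unused positive term (odd)
    neg = 2  # denominator of the next unused negative term (even)
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

approx = rearrange_to_target(math.pi)
print(approx)  # close to 3.14159...
```

Every term of the original series is eventually used exactly once, so this really is a rearrangement, and swapping in any other target value works the same way.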
So I mean, that's a bit reassuring *chuckles anxiously* so, basically commutativity is broken, but, but, uh, associativity, at least for convergent series, doesn't get mucked up, so it's quite nice. *excited* BUT, there's at least one more surprise happening here; that's just for convergent series. So if you actually look at some divergent series, strange things can happen even when you bracket them. So for example, this guy here, the 1 - 1 + 1 - 1 divergent series, if you just bracket it right, you'll actually get a convergent series with a certain sum, and actually you can bracket this in many, many different ways, and get different sums out. And maybe that's also a puzzle for you to sort out: how can you rebracket this thing to get different, different sums out? Well, there are really amazing things to be discovered yet, um, the first one, and you can actually, with what I showed you today, figure out the details yourself, *deep breath* is that if I give you an arbitrary infinite series, there are three different cases; so you can ask, into how many different sums can I rearrange this thing? So the first case is, no matter how I rearrange this thing, I'll never get a sum. The second case is it rearranges into all possible... sums. And the third case is, it just has one sum, no matter how you rearrange it. Now this gets even freakier when you are considering infinite series of complex numbers, or infinite series of vectors, and you can do this in Banach spaces, and you can do this in all kinds of other things, but maybe, maybe that's, um, *exhales* maybe that's enough for today.
I really like his videos. Way better than numberphile in my opinion. But that's just me.
For the last example, where bracketing 1-1+1-1+1-1... becomes convergent, is a solution simply to bracket all pairs?
(1-1)+(1-1)+(1-1)+... = 0 + 0 + 0 +... = 0?
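That pairing, and the other standard bracketing 1 + (-1+1) + (-1+1) + ..., can be checked with finite truncations (a trivial sketch, just summing the bracketed groups):

```python
# Two bracketings of Grandi's series 1 - 1 + 1 - 1 + ...,
# truncated to finitely many brackets. Every bracket collapses to 0,
# so the bracketed partial sums are constant and the series converges.
pairs_from_start = [1 - 1 for _ in range(10)]     # (1-1)+(1-1)+...  -> 0
print(sum(pairs_from_start))  # 0

leading_one = [1] + [-1 + 1 for _ in range(10)]   # 1+(-1+1)+(-1+1)+... -> 1
print(sum(leading_one))  # 1
```

Two different bracketings of the same divergent series, two different sums, which is exactly the puzzle posed in the video.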
Shouldn't infinity minus infinity be zero?
/s
1-1/2+1/3-1/4+1/5+... = ln(2), I understand that comes from Taylor expansions and all that, but Mathologer made it pretty clear that you can change the convergence arbitrarily by rearranging terms, so naturally I'm curious: What makes ln(2) the "correct" answer? Is it because of convention, or because the terms are in increasing order, or is there some objective reason that ln(2) is the real result to that sum?
I will check them out. And my feelings about numberphile is not from one but many videos. I really like the idea but I just don't care for the presenter and the format. But I'll have to watch the ones with Tadashi because he is awesome.
I really like his manner. Good old dry German humour!
Am I the only one who feels kind of underwhelmed, here?
Infinity, by definition, is not a definite value. It's not anywhere on the real number line. Being indefinite, it is thus not all that surprising to me that you can play fast and loose with infinite series in order to make the difference of two indefinite infinities come out to be anything you want.
I'll give Riemann some props for the rearrangement theorem. That's a nice result. But at its core, I have trouble seeing how this theorem is more than a formalized way of saying "since infinity is indeterminate, of course arithmetic stops behaving like you'd expect when you toss infinity into the mix."
This always bugs me: a bracket (parenthesis) is an operation. Why can you just add them to a series and say it's the same series?
It obviously is not.
1-1+1-1....
Is not the same as
(1-1)+(1-1)+...
They mean different things, and in the second example you are taking 2 parts of the series and making them one part, as in
(1-1)+(1-1), as if the series didn't go on to infinity but had 2 parts (n=2).
But
1-1+1-1 is the series if it goes on to 4 parts (n=4)
Doesn't that substantially change the series? Why would I ever call them the same series?
Why is it any different with series?
And if they are not the same series why would you use one to explain the other? Why would you do this at all if it demonstrably doesn't work? (As in you get contradictions sometimes)
3+4-5+6=8
(3+4)-(5+6)=-4
So it doesn't work for series that end why would it work for series that don't end?
And the part where he is arbitrarily making the positive numbers add up to what he wants and then placing the next subtracting number isn't the same series at all; it's just stupid to do. I'm like, you're missing hundreds of parts in between; at some point you have to reconcile that, or call it what it is: different from the original series. You can't just skip parts you don't like, then ignore it like it's cool and say the answer means something profound.
How do you know you're over pi, unless you know pi in the first place? I understand you can get a number if you know the number, but that does not help us calculate more digits of pi.