Winding numbers and domain coloring

Reddit Comments

How could you rigorously compute the winding number along a path? What properties would the function need? Being Lipschitz continuous, maybe?

👍 10 · 👤 u/leoleo1994 · 📅 Mar 24 2018 · 🗫 replies

do you really need all colors? couldn't you just color each quadrant with a single color, and assign +/- 0.25 for windings based on a discrete function that maps colors to colors (0.25 for adjacents with sign depending...). maybe it doesn't matter

edit: maybe you could even do 3 colors, 120 degree sections emanating out of the origin?

👍 4 · 👤 u/the_cat_kittles · 📅 Mar 24 2018 · 🗫 replies

I have a feeling that if the winding number is negative it means that there is no zero inside, but instead there actually is an infinity: a point that gets sent far away from the origin...

Am I wrong? Was this addressed in the video and I missed it?

👍 2 · 👤 u/TheRealWarrior0 · 📅 Mar 24 2018 · 🗫 replies

And you can even do this in 3D. You'd have to sample everywhere along a surface, though. (And you'd need homology for the right generalization of "winding number"…)

Incidentally, if I understand this right, I think nearly any polynomial of z and \bar{z} (z's conjugate) would be guaranteed to have a root. A sufficient condition would be that there's a unique term of highest degree, with an unequal number of z's and \bar{z}'s in it. (Otherwise we have counterexamples such as z\bar{z}+i and z+\bar{z}+i.)

👍 1 · 👤 u/columbus8myhw · 📅 Mar 25 2018 · 🗫 replies

Great vid!! I really enjoyed watching your content :) There is a pesky counterexample I can think of off the top of my head:

f(x + iy) = x^2 + y^2 

Yes, yes, this function is not analytic, but no loop you ever draw will have a non-zero winding number :^(

Edit: I realize the exact same can be said in the 1d case of f(x) = x^2 -- because no value of x returns a negative value, the 1d algorithm wouldn't work either

👍 1 · 👤 u/quegarbaje · 📅 Mar 30 2018 · 🗫 replies

I think I found the mistake he mentions - it's about the video on topology and necklaces. He almost says so when referring to the rest of the content he will present in that upcoming video.

I won't specify yet what I think can go wrong (at least when there are more than two thieves involved), to give others a chance to also find out themselves.

If I'm correct, this caveat will be explained in an upcoming video anyways, and most likely in a more aesthetic and understandable way than I could ever explain myself.

👍 1 · 👤 u/[deleted] · 📅 May 01 2018 · 🗫 replies

Hello man! Amazing video. Just as a comment, it would be nice to know the references you used to make your content. Cheers!

👍 1 · 👤 u/josephinedo · 📅 Jun 24 2018 · 🗫 replies
Captions
There are two things here: the main topic, and the meta topic. The main topic is a neat algorithm for solving many two-dimensional equations. That is, equations with two unknown real numbers, or perhaps those involving a single unknown which is a complex number. So for example, if you want to find the complex roots of a polynomial, or some of those million dollar zeros of the Riemann zeta function, this algorithm will do it for you. This method can be super pretty, since there is a lot of color involved, and the core underlying idea applies to all sorts of math beyond this algorithm for solving equations, including a bit of topology, which I’ll talk about afterwards. But what really makes this worth the 20 minutes or so of your time is that it illustrates a lesson much more generally useful throughout math, which is to try to define constructs which compose nicely with each other. You’ll see what I mean by that as the story progresses. To motivate the case with functions that have 2d inputs and 2d outputs, let’s start off simpler, with functions that just take in a real number and spit out a real number. If you want to know when one function, f(x), equals another function, g(x), you might think of this as searching for when the graphs of these functions intersect. Right? That tells you the input where both functions have the same output. To take a very simple example, say f(x) is x², and g(x) is the constant function 2. In other words, you want to find the square root of 2. Even if you know almost nothing about finding square roots, you probably see quickly that 1² is less than 2 and 2² is bigger than 2, and so you realize “Ah, there’s a solution somewhere between 1 and 2”. And then if you wanted to narrow it down further, you might try squaring the halfway point, 1.5. This comes out to 2.25, a bit too high, so now you focus on the region between 1 and 1.5. And so on. You can keep evaluating the midpoint and chopping your search space in half. Another way to think about this, which will make it easier once we get up to higher dimensions, is to instead focus on the equivalent question of when the difference between the two functions is 0. We found a region of inputs where this was negative at one end, and positive at the other. And then we split this region in two, narrowing our attention to a half whose outermost points again produced varying signs. We were able to keep going forever in this way, taking each region with varying signs on its border and finding a smaller such region among its halves, knowing that ultimately, we had to be narrowing in on a point where we would hit exactly zero. In short, solving equations can always be framed as finding when a certain function is 0. And to do that, we use the heuristic “If f is positive at one point, and f is negative at another point, then we can find some place in between where it’s zero (…at least, if everything changes smoothly, with no sudden jumps)”. The amazing thing I want to show you is how to extend this kind of thinking to two-dimensional equations; to equations between functions whose inputs and outputs are both two-dimensional. For example, complex numbers are two-dimensional, and the tool we develop here is perfect for finding solutions to complex equations. Since we’re going to be discussing 2d functions so much, let’s take a brief side-step to consider how we illustrate them.
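As a rough illustration of the bisection idea just described, here is a minimal Python sketch (not from the video; the function and parameter names are illustrative), assuming f is continuous and has opposite signs at the two endpoints:

```python
def bisect_zero(f, lo, hi, tol=1e-12):
    """Find a zero of a continuous function f on [lo, hi],
    given that f(lo) and f(hi) have opposite signs."""
    f_lo, f_hi = f(lo), f(hi)
    assert f_lo * f_hi <= 0, "need a sign change on the boundary"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        # Keep whichever half still has a sign change on its border.
        if f_lo * f_mid <= 0:
            hi, f_hi = mid, f_mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# The example from the transcript: solve x^2 = 2 by finding a zero of x^2 - 2.
print(bisect_zero(lambda x: x * x - 2, 1.0, 2.0))  # ~1.41421356...
```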
Graphing functions with 2d inputs and 2d outputs would require 4 dimensions, which won’t work so well in our 3d world on our 2d screens, but we still have a few good options. One is just to look at both the input and output space side-by-side. Each point in the input space moves to a particular point in the output space, and I can show how moving the input point corresponds to certain movements in the output space. All the functions we consider will be continuous, in the sense that small changes in input cause small changes in output, without any sudden jumps. Another option is to think of the arrow from the origin of the output space to the output point, and attach a miniature version of that arrow to the input point. This can give us a sense at a glance of where a given input point goes, or where many different input points go by drawing a full vector field. (Unfortunately, this can also be a bit cluttered; here, let me make all these arrows the same size, so we can more easily get a sense of just the direction of the output at each point.) But perhaps the prettiest way to illustrate 2d functions, and the one we’ll use most in this video, is to associate each point in the output space with a color. Here, we’ve used different “hues” (that is, where the color falls along a rainbow or color wheel) to correspond to the direction away from the origin, while we’ve used darkness or brightness to correspond to distance from the origin. For example, focusing just on this ray of outputs, all these points are red, but the ones closer to the origin are a little darker and the ones further away are a little lighter. And focusing just on this ray of outputs, all the points are green, and again, closer to the origin we get darker and further away we get lighter. And so on, we’ve just assigned a color to each direction, all changing continuously. (The darkness and brightness differences here can be quite subtle, but for this video all we will care about is the directions of outputs, and not their magnitudes; the hue, not the brightness. The only important thing about brightness is that near the origin, which has no particular direction, our colors will fade to black.) Now that we’ve decided on colors for each output, we can visualize 2d functions by coloring each point in the input space based on the color of the point where it lands in the output space. I like to imagine many points from the input space hopping over to the output space, which is basically one big color wheel now, getting painted, and then hopping back to where they came from. This gives a way, just by looking at the input space, to understand roughly where the function takes each point. For example, this stripe of pink points on the left tells us that all those points get mapped to the pink direction, the lower left of the output space. Those three points which are black with lots of colors around them are the ones that go to zero. Alright, so just like in the 1d case, solving a two-dimensional equation can always be reframed as asking when a certain function equals 0. So that’s our challenge right now: create an algorithm that finds which input points a given 2d function takes to 0. Now, you might point out that if you’re looking at a color map like this, by seeing these black dots you know where the zeros of the function are...so, does that count? Well, keep in mind that to create this diagram we’ve had the computer compute the function at all these pixels on the plane.
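The domain-coloring scheme described above is straightforward to sketch in Python (this is not the video's code; the helper names and the exact brightness formula are illustrative choices): hue encodes the direction of the output, and brightness encodes its distance from the origin, fading to black at zero.

```python
import colorsys
import math

def domain_color(w):
    """Map a complex output w to an (r, g, b) triple: hue encodes the
    direction (argument) of w, brightness its distance from the origin."""
    hue = (math.atan2(w.imag, w.real) / (2 * math.pi)) % 1.0
    value = 1.0 - 1.0 / (1.0 + abs(w))  # 0 (black) at the origin, near 1 far away
    return colorsys.hsv_to_rgb(hue, 1.0, value)

def color_image(f, xmin, xmax, ymin, ymax, width=300, height=300):
    """Color each input pixel by the color of its output under f."""
    rows = []
    for j in range(height):
        y = ymax - (ymax - ymin) * j / (height - 1)
        rows.append([domain_color(f(complex(xmin + (xmax - xmin) * i / (width - 1), y)))
                     for i in range(width)])
    return rows  # e.g. hand this grid to matplotlib's imshow, or write out a PNG

# Example: z^3 - 1, whose three roots show up as black points ringed by every hue.
pixels = color_image(lambda z: z**3 - 1, -2, 2, -2, 2)
```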
But part of our goal here is to find a more efficient algorithm that only requires computing the function on as few points as possible; only having a limited view of the colors of the plane, so to speak. Also, from a more theoretical standpoint, it’d be nice to have a general construct which gives us conditions for whether or not a zero even exists inside a given region. Now, in one dimension, the main insight was that if a continuous function is positive at one point, and negative at another, then somewhere in between it must be 0. How do you extend this to two dimensions? You need some analog of talking about signs. Well, one way to think about what signs are is as directions; positive means pointing right along a number line, and negative means pointing left along that number line. Two-dimensional quantities also have directions, although for them, the options are wider: they can point anywhere along a whole circle of possibilities. So in the same way that for 1d functions, we were asking whether a given function was positive or negative on the boundary of a range, which is just two points; for 2d functions we will want to look at the boundary of a region, which will be some loop, and ask about the direction of the function’s output along that boundary. For example, we see that on this loop around this zero, the output goes through every possible direction, all the colors of the rainbow; red, yellow, green, blue, back to red, and everything in between along the way. But on this loop over here, with no zeros inside, the output doesn’t go through every color; it only goes through some orangish colors, but never, say, green or blue. This is promising; it looks a lot like how things worked in 1d. Maybe, in the same way that if a 1d function takes both possible signs on the boundary of a 1d region, there’s a zero somewhere between, we might hypothesize that if a 2d function hits outputs of all possible directions, all possible colors, on the boundary of a 2d region, somewhere inside that region it must go to zero. Here, take a moment to really think about whether this should be true, and if so, why. If we start by thinking about a tiny loop around some input point, we know, since everything is continuous, that our function takes it to some tiny loop near the corresponding output. But look: most tiny loops of outputs barely vary in color. If you pick any output point other than zero, and draw a sufficiently tight loop near it, the loop’s colors will all be about the same as the color of that point. A tight loop here will be all blue-ish; a tight loop here will be all yellow-ish, etc. You certainly won’t get every color of the rainbow. The only point you can tighten loops around while still going through all the colors of the rainbow is the colorless origin: zero itself. So it is indeed the case that if you have loops going through every color of the rainbow, tightening and tightening and narrowing in on a point, then that point must in fact be a zero. …And so we can set up our 2d equation solver just like our 1d equation solver: when we find a large region whose border goes through every color, split it in two, and look at the colors on the boundary of each half. In the example shown here, the border of the left half doesn’t go through all colors; there are no points that map to the orange and yellow directions, so we grey this area out as a way of saying we don’t want to search it further.
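For concreteness, the “does the boundary hit every color?” test proposed here could be coded roughly as follows (a sketch, not from the video: it buckets output directions into an arbitrary number of sectors of the color wheel and checks that each sector is hit). As the next part of the transcript shows, this test by itself turns out not to be enough.

```python
import cmath

def covers_all_directions(f, loop_points, sectors=12):
    """Return True if the outputs of f along the sampled loop point in every
    direction, judged by splitting the circle of directions into `sectors`
    equal slices and checking that each slice is hit at least once."""
    hit = [False] * sectors
    for p in loop_points:
        w = f(p)
        if w == 0:
            return True  # landed exactly on a zero, which is even better
        k = int(sectors * (cmath.phase(w) % (2 * cmath.pi)) / (2 * cmath.pi))
        hit[k % sectors] = True
    return all(hit)
```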
The right half does go through all colors, spending a lot of time in the green direction, then passing yellow-orange-red, as well as blue-violet. Remember, that means points of this boundary get mapped to outputs of all possible directions, so we’ll explore it further, subdividing again and checking the boundary colors of each subregion. The boundary of that top right subregion is all green, so we’ll stop searching there, but the bottom is colorful enough to deserve a subdivision. And just continue like this! Check which subregion has a boundary covering all colors, meaning points of that boundary get mapped to all possible directions, and keep chopping those subregions in half like we did in the 1d case. ...Except...wait a minute...what happened here? Neither of those last subdivisions on the bottom right passes through all colors...so our algorithm stopped without having found a zero… So being wrong is a regular part of doing math. We had this hypothesis that led us to a proposed algorithm, and clearly we were mistaken somewhere. Being good at math is not about being right the first time, it’s about having the resilience to carefully look back and understand our mistakes, and how to fix them. The problem here was that we had a region whose border went through every color, but when we split it down the middle, neither subregion’s border went through every color. We had no options for where to keep searching next, breaking our zero-finder. In 1d, this sort of thing never happened: any time you had an interval whose endpoints had different signs, when you split it, you knew you were guaranteed to get some sub-interval whose endpoints still had different signs. Put another way, any time you have two intervals whose endpoints don’t change sign, when you combine them, you get a bigger interval whose endpoints don’t change sign. But in 2d, you can take two regions whose borders don’t contain every color, and combine them into a region whose border DOES contain every color. And in just this way, our proposed zero-finder can break down. In fact, you can have a big loop that goes through every color without there being any zeros inside. We weren’t wrong in our claim about tiny loops: forever-narrowing loops that go through every color must be narrowing in on a zero. But what made a mess for us is that the “Does my border go through every color or not?” property doesn’t combine in a nice, predictable way when you combine regions. But don’t worry! It turns out, we can modify this slightly, to a more sophisticated property that DOES combine nicely, giving us what we want. Instead of simply asking whether we find every color at some point along a loop, let’s keep track more carefully of how those colors change as we walk along the loop. Let me show you what I mean with some examples. I’ll keep a little color wheel here in the corner to help us keep track. When the colors along a path of inputs move through the rainbow from red to yellow, yellow to green, green to blue, or blue to red, the output is swinging clockwise. On the other hand, when the colors move the other way through the rainbow, from blue to green, green to yellow, yellow to red, or red to blue, the output is swinging counterclockwise. So walking along this short path here, the colors wind a fifth of the way clockwise through the color wheel. And walking along this path here, the colors wind another fifth of the way clockwise through the color wheel.
Of course, this means that if we go through both paths, one after another, the colors wind a total of two-fifths of a full turn clockwise. The total amount of winding just adds up; this is the kind of straightforward combining that will be useful to us. When I say “total amount of winding”, I want you to imagine an old-fashioned odometer that ticks forward as the arrow spins clockwise, but ticks backward as the arrow spins counterclockwise. So counterclockwise winding counts as negative clockwise winding. The output may turn and turn a lot, but if some of that turning is in opposite directions, it cancels out. For example, if you move forward along a path, and then move backward along that same path, the total winding number will end up being zero: the backwards movement will literally “re-wind” through all the previously seen colors, reversing all the previous winding and returning the odometer to where it started. We can look at winding along loops, too. For example, if we walk around this entire loop clockwise, the outputs we come across wind around a total of three full turns clockwise: the colors swung through the rainbow, in ROYGBIV order, from red to red again... and then again... and again. In the jargon mathematicians use, we say that along this loop, the total “winding number” is three. For other loops it could be any other whole number. Large ones, if the output swings around many times as the input walks along a loop. Smaller ones, if the output only swings around once or twice. Or the winding number could even be a negative integer, if the output loops around counterclockwise as we walk clockwise around the loop. But along any loop, it will be a whole number, because by the time we return to where we started, we have to get back to the same output we started with. (Along paths that don’t end back where they start, we can get fractions of turns, as we saw earlier, but whenever you combine these into a loop, the total will be a whole number.) Incidentally, if a path actually contains a point where the output is precisely zero, then we can’t define the winding number along it, since the output has no particular direction anymore. This won’t be a problem for us, though. Our whole goal is to find zeros, so if this ever comes up, we’ve just lucked out early. Alright, so winding numbers add up nicely when you combine paths into bigger paths, but what we really want is for winding numbers around the borders of regions to add up nicely when you combine regions into bigger regions. Do we have this property? Well, take a look! The winding number as we go clockwise around this region is the sum of the winding numbers from these paths. And the winding as we go clockwise around this region is the sum of the winding numbers from these paths. When we combine those two regions into this bigger region, most of those paths become part of the clockwise border of the bigger region. And as for the parts that don’t? They cancel out; one is just the other in reverse, the “re-winding” of the other, as we saw before. So winding numbers along borders of regions add up just the way we want them to! (As a side note, this reasoning about borders adding up this way comes up a lot in mathematics, and is often called “Stokes’ Theorem”; those of you who have studied multivariable calculus may recognize it from that context…) And now, finally, with winding numbers in hand, we can get back to our equation solving goals.
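One plausible way to compute the winding number along a sampled loop, as a Python sketch (not the video's code): accumulate the small change in output direction between consecutive samples, assuming the samples are dense enough that no single step swings more than half a turn. Counterclockwise is counted as positive here, the usual math convention, whereas the odometer in the video counts clockwise.

```python
import cmath

def winding_number(f, loop_points):
    """Net number of times the output of f winds around 0 as the input walks
    once around loop_points (a closed loop: the last point connects back to
    the first)."""
    outputs = [f(p) for p in loop_points]
    total = 0.0
    for w0, w1 in zip(outputs, outputs[1:] + outputs[:1]):
        # Signed change in direction from w0 to w1, as a fraction of a full
        # turn, always measured the short way around (in (-1/2, 1/2]).
        total += cmath.phase(w1 / w0) / (2 * cmath.pi)
    return round(total)

# Example: z^3 along a circle of radius 2 around the origin winds three times.
circle = [2 * cmath.exp(2j * cmath.pi * k / 1000) for k in range(1000)]
print(winding_number(lambda z: z**3, circle))  # 3
```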
The problem with the region we saw earlier is that even though its border passed through all colors, the winding number was 0. The outputs wound around halfway, through yellow to red, then started going counterclockwise back through blue, hitting red from the other direction, then wound clockwise again in such a way that the total winding netted out to be zero. But if you find any loop with a nonzero winding number and split it in half, at least one of the halves is guaranteed to have a nonzero winding number, since things add up nicely. And in this way, we can keep going, narrowing in further and further on a point… As we narrow in on that point, we’ll be doing so using tiny loops with nonzero winding numbers, which must be tiny loops that go through every color, and therefore, as we discussed before, that point must be a zero. And that’s it! We have now created our 2d equation solver, and this time, I promise, there are no bugs. “Winding numbers” are precisely the tool we needed to make this work. We can now solve equations of the form “Where does f(x) = g(x)?” in 2d just by considering how the difference between f and g winds around. Whenever we have a loop whose winding number isn’t zero, we can run this algorithm on it, and we are guaranteed to find a solution somewhere within it. And what’s more, just as in 1d, our equation solver is remarkably efficient: we cut our region in half each round, quickly narrowing in on the zeros, and all the while, we only have to check the value of the function along points of these loops, rather than checking it on all the many, many points inside. So in some sense the overall “work” we do is proportional only to our search space’s perimeter, not its area. Which is amazing! It is weirdly mesmerizing to watch this in action; just giving it some function and letting it search for zeros. Like I said before, complex numbers are 2d, so we can apply this algorithm to equations between functions from complex numbers to complex numbers. For example, here’s our algorithm finding all the zeros of the function x⁵ - x - 1 over the complex numbers, starting by considering a very large region around the origin. Each time you find a loop with nonzero winding number, you split it in half, and figure out the winding numbers of the two smaller loops. Either one or both of the smaller loops will have nonzero winding number; when you see this, you know there is a zero to be found in that loop, and so you keep going in the same way in that smaller search space. As for loops whose winding number is zero, you don’t explore their insides further. (We also stop exploring a region when we stumble directly across a zero point inside it, right on the nose, as happened once on the right half here; these rare occurrences interfere with our ability to compute winding numbers, but hey, we get a zero!) And letting our equation solver continue in this same way, it eventually converges on lots of zeros of this polynomial. Incidentally, it’s no coincidence that the overall winding number came out to 5 in this example. With complex numbers, the operation xⁿ directly corresponds to winding around n times as we loop around the origin, and for large enough inputs, every term in a polynomial other than the leading term becomes insignificant in comparison. So any complex polynomial of degree n has a winding number of n around a large enough loop.
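Putting the pieces together, here is a minimal Python sketch of the subdivision solver described above (not the video's implementation): split any rectangle whose boundary has a nonzero winding number, and recurse until the rectangles are tiny. The sampling density, tolerance, and the slightly off-center starting box are arbitrary illustrative choices, and the sketch assumes no zero lands exactly on a sampled boundary point.

```python
import cmath

def winding_number(f, loop_points):
    """Net counterclockwise turns of f's output around 0 along a closed loop
    (the same idea as the sketch above, repeated so this block stands alone)."""
    ws = [f(p) for p in loop_points]
    steps = (cmath.phase(b / a) for a, b in zip(ws, ws[1:] + ws[:1]))
    return round(sum(steps) / (2 * cmath.pi))

def rect_loop(x0, x1, y0, y1, n=100):
    """Sample the rectangle boundary counterclockwise, n points per side."""
    t = [k / n for k in range(n)]
    return ([complex(x0 + (x1 - x0) * s, y0) for s in t] +  # bottom edge
            [complex(x1, y0 + (y1 - y0) * s) for s in t] +  # right edge
            [complex(x1 - (x1 - x0) * s, y1) for s in t] +  # top edge
            [complex(x0, y1 - (y1 - y0) * s) for s in t])   # left edge

def find_zeros(f, x0, x1, y0, y1, tol=1e-6, zeros=None):
    """Recursively split any rectangle whose boundary has a nonzero winding
    number, narrowing in on zeros of f."""
    zeros = [] if zeros is None else zeros
    if winding_number(f, rect_loop(x0, x1, y0, y1)) == 0:
        return zeros                         # no zero guaranteed in here
    if max(x1 - x0, y1 - y0) < tol:          # tiny box: call its center a zero
        zeros.append(complex((x0 + x1) / 2, (y0 + y1) / 2))
        return zeros
    if x1 - x0 >= y1 - y0:                   # split the longer side in half
        xm = (x0 + x1) / 2
        find_zeros(f, x0, xm, y0, y1, tol, zeros)
        find_zeros(f, xm, x1, y0, y1, tol, zeros)
    else:
        ym = (y0 + y1) / 2
        find_zeros(f, x0, x1, y0, ym, tol, zeros)
        find_zeros(f, x0, x1, ym, y1, tol, zeros)
    return zeros

# The transcript's example: zeros of z^5 - z - 1, searched in a large box around
# the origin (offset slightly so no root is likely to sit on a split line).
print(find_zeros(lambda z: z**5 - z - 1, -2.1, 1.9, -2.05, 1.95))
```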
In this way, our winding number technology actually guarantees us that every complex polynomial has a zero; mathematicians call this the Fundamental Theorem of Algebra. Having an algorithm for finding numerical solutions to equations like this is extremely practical, but I should say that we’ve left out a few details on how you’d implement it. For example, to know how frequently you should sample points, you’d want to know how quickly the direction of the output can change; we’ve left some more details in the description. But the fundamental theorem of algebra is a good example of how these winding numbers are also quite useful on a theoretical level, guaranteeing the existence of a zero for a broad class of functions under suitable conditions. We’ll see a few more amazing applications of this in a follow-up video, including correcting a mistake from an old 3blue1brown video! (Which one? Rewatch all of our videos, everything we’ve ever made, and see if you can spot the error first!) The primary author of this video is one of the newest 3blue1brown team members, Sridhar Ramesh, and with the last video on the Basel problem you’ve already seen the work of the other new addition, Ben Hambrecht. I did a little Q&A session with both of them, which you can find on the Patreon page, where you can have fun laughing at how bad we are with live action filming and lighting. Sridhar and Ben are both incredibly smart and talented, and additions like these will be crucial for covering all the topics I’d like to with this channel. The fact is, a lot of time and care goes into each of these videos. There’s taking the time to explore which topics are most likely to deepen people’s relationship with math; and even once you have that, any of you who write know how many iterations can go into putting together a clear and engaging storyline for a given topic. Obviously creating the visuals to best clarify an idea in math takes serious time, and quite often involves writing more general code for fundamentally different types of visuals. What’s neat is that unlike similar products, such as movies or college courses, we’re able to offer these stories and lessons for free. What makes that possible is that a little less than half a percent of subscribers to this channel have decided this is content worth paying for, in the form of Patreon pledges, which is amazing! So thank you. And if you’re not in a position to support, just don’t worry about it; that’s exactly why the content is free. Just sit back and enjoy, and if you really want to help out, share it with others.
Info
Channel: 3Blue1Brown
Views: 853,588
Keywords: blue, brown, winding numbers, Mathematics, 3 brown 1 blue, 3b1b, three, 3brown1blue, three blue one brown, 3 blue 1 brown, three brown one blue, one
Id: b7FxPsqfkOY
Length: 23min 10sec (1390 seconds)
Published: Sat Mar 24 2018