Approximating Irrational Numbers (Duffin-Schaeffer Conjecture) - Numberphile

Video Statistics and Information

Reddit Comments

James Maynard recently co-authored a proof of the Duffin-Schaeffer Conjecture.

On the Duffin-Schaeffer conjecture - by Dimitris Koukoulopoulos and James Maynard - https://arxiv.org/abs/1907.04593

👍︎ 50 👤︎ u/CultistHeadpiece 📅︎ Dec 08 2019 🗫︎ replies

It sort of blows my mind that we haven't been able to prove that e+π is irrational.

👍︎ 51 👤︎ u/fermat12 📅︎ Dec 08 2019 🗫︎ replies

A round of applause for Koukoulopoulos and Maynard.

👍︎ 23 👤︎ u/vilimus- 📅︎ Dec 08 2019 🗫︎ replies

If he's going to remember e up to 2.71828, it might be worth remembering that the next 4 digits are also 1828, taking it to 2.718281828.

👍︎ 21 👤︎ u/jacobolus 📅︎ Dec 09 2019 🗫︎ replies

James Maynard is such a pleasure to watch

👍︎ 41 👤︎ u/_selfishPersonReborn 📅︎ Dec 08 2019 🗫︎ replies

I like how he moves his head.

👍︎ 7 👤︎ u/24-cell 📅︎ Dec 08 2019 🗫︎ replies

If I understood him correctly, given you're asking for a series of numbers, you're technically gauging the error sensitivity of different series.

Wouldn't that mean there is a simple cut-off point similar to the convergence cut-off point for a series?

👍︎ 9 👤︎ u/Untinted 📅︎ Dec 08 2019 🗫︎ replies

vv interesting, yet another maths win from using graphs

👍︎ 2 👤︎ u/jmdugan 📅︎ Dec 08 2019 🗫︎ replies

What proportion of mathematicians are left-handed?

In my experience it seems much more common than in the population at large.

Could this tell us something about the nature of mathematical reasoning?

👍︎ 3 👤︎ u/bsmdphdjd 📅︎ Dec 08 2019 🗫︎ replies
Captions
I get this initial joy that I'm very happy about having proved it, then I also get this fear that I'm about to completely embarrass myself in front of the mathematical community by claiming to have proved some well-known result when actually there'll be some obvious mistake because I've put a plus instead of a minus somewhere. So you get this euphoric joy and adrenaline rush, but then it's followed pretty quickly by complete fear that I'm about to trash my reputation and do something completely idiotic.

Today I'm going to talk about a conjecture of Duffin and Schaeffer that was recently proven by myself and my collaborator Dimitris Koukoulopoulos. (Brady: So this is like, math news, is it?) - Yeah exactly, hot off the press.

So this is a problem about Diophantine approximation. Let's just take a step back and think about what Diophantine approximation is in general. Maybe the simplest numbers we know are just the whole numbers: 1, 2, 3, 4, and so on. Slightly more complicated, there are the rational numbers, which are just ratios or fractions of whole numbers: one half, one third, two thirds. But as well as rational numbers there are irrational numbers, which can't be expressed as a ratio of two whole numbers. Euler's number e starts off as 2.71828 and then goes on. - (Brady: How far do you know that one?) OK, I know it up to the 2 8, and then I stop. So it's no coincidence where I draw the dots. Similarly, pi is another famous irrational number: 3.14159... These are numbers where we've proven that you can't write them as a ratio of two whole numbers, and in fact there's a whole zoo of numbers out there; in many ways most numbers can't be written as a ratio of two whole numbers. So most numbers are these complicated numbers that seem to have some crazy decimal expansion that probably has no patterns in it whatsoever.

And so then there's a question as to what we even really mean by these irrational numbers. I put these dot dot dots here; really what this expression is saying is: I have lots of different approximations to the number pi, which get more and more accurate. So maybe a first approximation is that pi is roughly equal to three, then a second approximation is that pi is approximately equal to three point one, which is 31 over 10, and then a better approximation is that pi is approximately equal to 314 over 100, which is 3.14, and so on. This is really what we mean by an irrational number: typically we can't define it uniquely in a very easy way, but we can define it by a sequence of simple approximations that closes in on one number. Pi, for example, was historically very important for engineers doing certain bits of architecture, and they needed to know it to a certain level of precision; and to a given level of precision you can just approximate it by some rational number. - (Brady: Good enough to build a bridge or something?) - Yeah, exactly.

Every irrational number has a decimal expansion like this, and that's one way of understanding them, but sometimes there are rather more efficient ways of writing them as a ratio than just fractions with denominators 10 or 100 or 1000 and so on. One famous one is that pi is approximately equal to 22/7. And even though 22 over 7 is a much simpler-looking fraction than 314 over 100, because the numbers are quite a lot smaller, pi is actually closer to 22 over 7 than it is to 314 over 100.
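As a quick numerical check of that last claim, here is a minimal sketch in Python (my own illustration, not from the video): it just compares the two approximations mentioned above.

```python
import math

# |pi - 22/7| vs |pi - 314/100|: the simpler-looking fraction is the closer one.
err_22_7 = abs(math.pi - 22 / 7)        # ~1.26e-3
err_314_100 = abs(math.pi - 314 / 100)  # ~1.59e-3
print(err_22_7, err_314_100, err_22_7 < err_314_100)  # ... True
```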
And so this is somehow a very efficient way of approximating pi, and so if you're an engineer or something you wouldn't have to care about 314 over 100; you could just use 22 over 7 and that would work much better. It's a very famous theorem of Dirichlet that every irrational number has a fairly efficient sequence of approximations, like 22 over 7, that approximate the number better than you might expect.

(Brady: So is there like a mathematical name for them, these - these silver bullets?) - For this sort of setup they're known as the convergents to the irrational number. There's an algorithm for producing them - if you give me your favourite irrational number, I can do some calculations and it will spit out these convergents, which are these very efficient approximations to the number.

Dirichlet's theorem says that regardless of what irrational number you choose - let's say your favourite irrational number is some number alpha - you can find lots of different approximations by rationals, let's call them a over q, and these are going to be efficient approximations in the sense that the error term in the approximation is at most 1 over q squared. So regardless of what irrational number you choose, there are infinitely many different pairs (a, q) where you have this efficient approximation with a pretty small error. By comparison, when we're just doing a decimal expansion we would have an error term which looks like 1 over the denominator, and so it's not anywhere near as efficient.

The whole subject of Diophantine approximation is variations on this question: if you give me an irrational number, how well can I approximate it by rational numbers, and how efficiently can I approximate it? But unfortunately this whole area is a zoo of different complicated, impossible questions. We know from Dirichlet's theorem that we can approximate pi with an error of 1 over q squared. So one natural question is: if I want even more efficient approximations, can I get maybe an approximation whose error looks like 1 over q cubed? That would be even better, and this is a well-known open problem. We don't know if you can approximate pi to that level of efficiency. (Brady: Has until now 1 over q squared been the best you could do?) - Yeah, for pi I don't think we really know anything better than 1 over q squared.

For some special irrational numbers you can do better than 1 over q squared, and to really show you how pathetic we are and how little we know about these problems, you could consider the number e plus pi: we don't even know whether e plus pi is rational or not. So it could be that e plus pi is equal to just a over q for some whole numbers a and q. It's clearly not going to be the case, but as mathematicians we have no way of disproving this. - (Brady: That'd be amazing if someone found that!) - Yeah, it would be completely crazy, and you can check small numbers - you can prove that a and q would have to have loads and loads of digits if this were possible - and it's kind of clear that this can't be the case. But we have no way of proving it at all. And this is really typical for these Diophantine approximation problems: they're really complicated and subtle problems that we really don't understand very well at all as mathematicians. There are some numbers that can be approximated very well, but for almost all numbers you can't really do much better than 1 over q squared. You can do a tiny bit better than Dirichlet's theorem.
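The algorithm he alludes to is the continued-fraction expansion. Here is a minimal sketch (my own illustration, not code from the video or the paper) that generates convergents of pi and checks Dirichlet's 1-over-q-squared bound for each one.

```python
import math

def convergents(alpha, n_terms=8):
    """First few continued-fraction convergents a/q of alpha, using the standard
    recurrences h_n = c_n*h_{n-1} + h_{n-2} and k_n = c_n*k_{n-1} + k_{n-2}."""
    h, h_prev = 1, 0  # numerator seeds   h_{-1} = 1, h_{-2} = 0
    k, k_prev = 0, 1  # denominator seeds k_{-1} = 0, k_{-2} = 1
    x = alpha
    result = []
    for _ in range(n_terms):
        c = math.floor(x)                      # next partial quotient
        h, h_prev = c * h + h_prev, h
        k, k_prev = c * k + k_prev, k
        result.append((h, k))
        if x == c:                             # expansion terminated (rational input)
            break
        x = 1 / (x - c)
    return result

# Dirichlet's theorem in action: each convergent a/q satisfies |alpha - a/q| < 1/q^2.
for a, q in convergents(math.pi):
    print(f"{a}/{q}", abs(math.pi - a / q) < 1 / q**2)  # 3/1, 22/7, 333/106, ... all True
```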
But Dirichlet's theorem is almost as good as you can do. Duffin and Schaeffer asked a huge generalisation of this question, where you are only allowed to use specific denominators q. So maybe you choose the primes, or the squares, or whatever crazy subset of the integers that you like. And so the question that Duffin and Schaeffer asked was: if you choose your favourite sequence of denominators, how well can you approximate numbers with them? What's the efficiency of the approximations? It was very much part of their philosophy that they wanted a super general theorem that would work for absolutely every possible choice of denominators.

At first sight this seems crazily ambitious: I've already said there are these basic problems, like e plus pi, where we don't even know whether the number is rational or not, and suddenly we're asking for arbitrary approximations, of almost arbitrary quality, for arbitrary sets of integers. So it seems like this crazy big generalisation, and we know already that you're going to have to allow for some small number of exceptions. But then there's an amazing theorem of Cassels, and then also Gallagher, who showed that if you choose your set of denominators and you choose what threshold you have for efficiency, then regardless of how you set it up, either almost all numbers can be approximated using those denominators at that level of efficiency, or almost no numbers can be. So you suddenly have this crazy simplicity in this whole world of complex problems: either almost everything can be approximated in the way that you want, or almost nothing can be, regardless of how you choose your denominators or how you choose your efficiency of approximation.

So we know that you get either virtually everything or virtually nothing, but then the question is: which of the two cases are we in? The Duffin-Schaeffer conjecture says there should be a simple criterion for whether you get almost everything or almost nothing. We can write this down. You choose your denominators - let's call them q_1, q_2, and so on - and then for each denominator you choose the tolerable error, so how efficiently you want to do it - let's call them E_1, E_2, and so on. And then you want to know: can you approximate your irrational number alpha by fractions with denominators from your favourite set, to within your chosen error terms? This is very difficult for an individual alpha, but we know that either you can approximate almost all numbers or almost none of them, and the Duffin-Schaeffer conjecture, now a theorem, is that there's a simple test for which of the two it is.

(Brady: Is there an example - is this something you can't apply an example to?) So this is the slightly frustrating thing about the theorem: because it's so general and you always allow an exceptional set, you can't say for any individual irrational number whether you can approximate it. But one possible example is that you could choose your denominators to be the squares: 1, 4, 9, 16, and so on. And I now have to think about what the error terms might be. One choice is to take 1 over the denominator, so the errors are gradually getting smaller: 1 over 1, 1 over 4, 1 over 9, 1 over 16, and so on. And it's not too difficult to see that with these acceptable errors you can approximate basically every irrational number. So this is not very ambitious.
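For reference, the "simple test" he keeps referring to is never written out in the video. In the literature the Duffin-Schaeffer criterion is usually stated as below, where the allowed error at denominator q is written as ψ(q)/q (so ψ(q)/q plays the role of the E's above, with ψ(q) = 0 for denominators you are not allowed to use) and φ is Euler's totient function; this is the standard formulation, not a quote from the video.

```latex
% Duffin-Schaeffer criterion (standard statement from the literature;
% the video only alludes to "a simple test").
\[
\Bigl|\alpha - \tfrac{a}{q}\Bigr| < \frac{\psi(q)}{q}
\ \text{ with } \gcd(a,q)=1
\ \text{ has infinitely many solutions for almost every } \alpha
\quad\Longleftrightarrow\quad
\sum_{q=1}^{\infty} \frac{\varphi(q)\,\psi(q)}{q} = \infty .
\]
```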
Maybe a more ambitious thing would be to try and do what we did here and take 1 over the denominator squared - we're looking at smaller error terms, and so at more accurate approximations. So I could be more ambitious and try to do the same as Dirichlet's theorem, but where the denominators are just the squares, so the error terms would be 1 over 1, 1 over 4 squared, 1 over 9 squared, and so on. And then again there's a simple test for whether you can approximate almost all numbers or almost no numbers, and in this case the Duffin-Schaeffer conjecture, now a theorem, says you were being too ambitious here: actually almost no numbers can be approximated at this level of accuracy with this choice of denominators. So you plug in your denominators, you plug in your error terms, and then there's a relatively simple thing that will say either: yes, you can approximate virtually everything, or: no, you get basically nothing at all.

(Brady: Could I give you just any arbitrary collection of denominators? Could I just say like, the Brady numbers - here are the Brady numbers - will it work for pi and e and that? And you'll be able to plug it into your machine and say sorry - or, yep, it's gonna work?) - Yeah, exactly. So again, there's always got to be a small number of exceptions, because there are these crazy examples. But yeah, you choose your favourite Brady numbers and you say how well you want to approximate numbers by Brady numbers, and then there's a very simple test: either yeah, you can basically approximate everything, or woah, woah, woah, you've been too ambitious, you can basically not approximate anything at all. And there's no in-between ground - it's not like you can approximate 50% of numbers or 30% of numbers; it's either 100% of numbers or 0% of numbers, which is kind of the beauty in this: normally these problems are all so complicated, but then there's this one moment where either you're getting virtually everything or virtually nothing, and there's a very simple test now to tell which of the two cases you're in.

That was what was really conjectured by Duffin and Schaeffer: they proposed the test. If you're in the field and you understand these things, it's the most natural test you could possibly have, but the question was always: is this the true test? We could prove that the test worked for lots of very special sequences of denominators - so if you chose the primes, then we knew that the Duffin-Schaeffer test would work - but we didn't know for completely arbitrary choices of numbers. You know, the Brady numbers - you could have chosen them in such a way that we wouldn't know whether this was really the right test, or whether there was some subtle other thing that meant the Duffin-Schaeffer test wasn't really the right criterion for whether you get almost all numbers or almost none.

(Brady: Are there certain types of denominators that work better and are more likely to give me that positive result than other denominators? Like you said for primes usually it's going to be pretty good. Are there certain numbers that are more conducive to this than others?) - Yeah, so there can be problems when your denominators have lots of small prime factors. And this was really the big point behind Duffin and Schaeffer's work: they realised there was this subtlety that if you have lots of prime factors in your denominators, you can write the same rational number in lots of different ways.
We have that the rational number 1/3 is equal to 2/6 and is equal to 3/9, and you can keep on writing different-looking rationals that are really the same thing, because you can cancel out a common factor. The only way that things wouldn't behave in the most obvious way would be if you kept on hitting the same numbers again and again and again, for precisely this reason: the numbers which are well approximated by 1/3 are also well approximated by numbers with denominator 6, and are also well approximated by numbers with denominator 9, because they're all basically the same numbers.

(Brady: James, for this theorem to hold, do the denominators have to have been produced in some kind of systematic, algorithmic way? Could I have just pulled 50 numbers out of, you know, a bucket and said these are the denominators? Or do they have to be like, you know, the Fibonacci numbers or the prime numbers? Do they have to be selected in some legitimate way?) - No, you can really choose whatever crazy numbers you like. You do have to have infinitely many of them to possibly hope to get infinitely many approximations. (Brady: Ah, so you can't have a finite number of numbers in that denominator category?) No, you need to have an infinite number to really stand a chance, because it's all asking: can you approximate numbers to as good an accuracy as you like? If in our approximations of pi you just stopped with denominators less than 100, then you wouldn't know what the next decimal place was, and you couldn't tell the difference between pi and a whole load of other numbers. So you need to have infinitely many numbers, but the whole point of the theorem is that it's a completely arbitrary collection of numbers and you can choose whatever you like. You don't have to choose them in some nice, mathematically sensible way. You can just keep on picking numbers out of a hat in some crazy fashion, and regardless of how you choose them, there's a simple way of telling whether almost all numbers are well approximated or almost no numbers are.

(Brady: Was this a burning conjecture in mathematics?) I think it was viewed as one of the more important problems in metric Diophantine approximation. Metric Diophantine approximation is when you're asking for these "almost all" results, and this was somehow the canonical example of a conjecture: if you can't do this, you can't hope to say more subtle things about how you can approximate numbers.

(Brady: Give me a feel for what the proof was like. Was it, sort of, one of these simple elegant ones or was it a bit of a monster?) I think the basic idea is quite simple, but then there's a whole load of annoying technicalities that meant it turned into a bit more of a monster. But one of the fun things is that we used some ideas that weren't standard in the field; in particular, one way of trying to understand this situation is that we ended up drawing lots of graphs. We would just draw lots of dots, where dots represent denominators, and then we would draw lines between dots when there were mathematical complications to do with them. And then we would stare at these graphs, and by understanding these graphs very well, we were able to understand the original problem. So one of the main things that we did was to try and understand the problem using these very simple ideas of dots and lines, which is a way to somehow visualise some of the complexities of what's going on.
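Putting rough numbers to the last two points - the two square-denominator examples from earlier, and the repetition issue that is exactly why Euler's totient φ(q) (the count of fractions a/q already in lowest terms) appears in the test - here is a small sketch. It is my own illustration, assuming the standard formulation of the criterion written out above; it is not code from the video or the paper.

```python
from math import gcd

def totient(q):
    """Euler's phi(q): how many fractions a/q (1 <= a <= q) are already in lowest
    terms, i.e. genuinely new rather than repeats like 2/6 or 3/9 of 1/3."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

# A denominator with many small prime factors mostly repeats old fractions;
# a prime denominator gives almost entirely new ones.
print(totient(210), "of 210 fractions a/210 are new")   # 48  (210 = 2*3*5*7)
print(totient(211), "of 211 fractions a/211 are new")   # 210 (211 is prime)

# Duffin-Schaeffer test for the square denominators 1, 4, 9, 16, ...:
# the series sum over q of phi(q) * E_q diverges  -> almost every number approximable,
#                                    converges    -> almost no number approximable.
squares = [n * n for n in range(1, 201)]

def partial_sum(error):
    """Partial sum of phi(q) * E_q over the chosen denominators."""
    return sum(totient(q) * error(q) for q in squares)

# Errors E_q = 1/q: the partial sums keep growing (roughly 0.6 per extra term),
# so this is the unambitious "almost everything" case.
print(partial_sum(lambda q: 1 / q))        # ~120 after 200 terms, still climbing

# Errors E_q = 1/q^2: the partial sums level off (the series converges, at ~1.37),
# so this is the "too ambitious", almost-nothing case.
print(partial_sum(lambda q: 1 / q ** 2))
```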
(Brady: Do you remember the moment that you and your collaborator had cracked it? Like was there a minute of the day where it was like - hey, I think we've done it! - or was it more gradual than that?) Okay, so there was one moment when I thought, okay, I think we've done it. There was a lot of work after that to make sure that all the fine details actually worked out in the way that I hoped, but yeah, there was definitely one moment where I was like, hang on a moment, I think we can actually do this. There was one issue that we'd been stuck on for quite a little while, and then suddenly we came up with a way of getting around it, and yeah, that was a pretty fun moment. I was actually just sitting in this room thinking about the problem - it was a bit strange, because I'd expected this to be a really tough battle that I was gonna have to grapple with for quite a long time, and I really wasn't sure if I was gonna win the fight, but then suddenly there was a trick that actually managed to get around the problem. So rather than having to take on the battle that I thought I was going to have to fight head-on, I could just sidestep the whole thing, and so in some ways it felt a bit like a cheat: I had proven the theorem that I wanted to prove, but the key difficulty in the proof I didn't actually have to fight head-on at all.

We were both pretty happy as soon as we got to this stage where we were pretty sure that everything would work out. Although actually, when I prove these results I get this initial joy that I'm very happy about having proved it, but then I also get this fear that I'm about to completely embarrass myself in front of the mathematical community by claiming to have proved some well-known result when actually there'll be some obvious mistake because I've put a plus instead of a minus somewhere. So you get this euphoric joy and adrenaline rush, but then it's followed pretty quickly by complete fear that I'm about to trash my reputation and do something completely idiotic.

(Brady: How often when you have those breakthrough moments like that do you end up finding you have made a mistake? And you just sort of keep quiet, and how often do you convert them into "Wow, I was right"?) Okay, there are sort of two different levels of breakthrough moments: there are the ones where I don't really believe it myself, and there are the ones where I really do believe it myself. So it's very common during the day for me to be like, hang on, this seems to be proving this amazing result, but I'm always a bit skeptical as to whether I've really done it. And those ones, 99% of the time, turn out to be complete rubbish - there'll be some really stupid thing that I've messed up, and I get up the next day, look at what I've done and think: how did I miss this? I've been a complete idiot here. But then there are the other ones where you somehow intuitively feel that, yeah, this feels right - I feel like I've got to grips with the problem and I've really solved it - and, okay, touch wood, fortunately so far all of those have turned out to actually work out.

(Brady: And just lastly, when you have a breakthrough moment like that, like you have that idea, the way to circumvent the problem and that - is it such a big thing that you couldn't forget it, or do you have to write it down quickly in case you forget the idea? Like - ) Okay, I think I couldn't forget these things at all.
Normally when I've been working on a problem I'm pretty obsessed by it, so it's sort of coursing through my body the whole time and in the back of my mind, even when I'm having lunch or having a shower. And so I really have all the bits of the problem in my head, and as soon as I have the idea for the problem I'm not going to forget it that easily. I often want to write it down to check that it really does work out the way that I think it works out. But yeah, it'll be over my dead body that I'm forgetting these ideas, I think.

[Preview of more videos] The most recent time we've seen someone really write down the conjecture officially was just over a hundred years ago, by de Polignac, but he actually conjectured a stronger, more general form of the twin prime conjecture that's a bit more complicated.
Info
Channel: Numberphile
Views: 404,501
Rating: 4.929471 out of 5
Keywords: numberphile, conjecture, mathematics
Id: 1LoSV1sjZFI
Length: 21min 19sec (1279 seconds)
Published: Sun Dec 08 2019