Gaps between primes – James Maynard – ICM2018

Captions
Okay, thank you very much for the introduction. It's a great personal pleasure for me to be here speaking in Brazil for the first time. I'd like to talk mainly about work from 2014 and 2015 on gaps between primes. Those of you who had the pleasure to go to the previous ICM will maybe remember that there were several talks at that ICM about gaps between primes, and this can be seen as a natural extension of some of the results that were talked about then. It's a very exciting time for work in prime number theory, but for those of you who weren't at the previous ICM, don't worry: this is intended for a general mathematical audience and will be completely self-contained.

Maybe the most basic question in prime number theory is to try and understand the distribution of prime numbers. In my mind there are two sets of questions: there are large-scale questions about the distribution of prime numbers, which often lead very quickly to questions related to the Riemann hypothesis, and then there are small-scale questions about prime numbers, which look at correlations of the primes and questions about the gaps between primes. I'd like to talk about gaps between primes today and try to understand something about the distribution of these gaps.

Maybe the most basic and celebrated result in prime number theory is the prime number theorem: if you want to count the number of primes up to some large number X, then the number of these primes is roughly X divided by the logarithm of X. Another way of viewing this is that if we look at primes of size roughly some large number X, then the average gap between prime numbers is of size the logarithm of X. Since the logarithm of X gets larger as X gets larger, the primes get sparser and sparser, and on average the gaps between primes get bigger and bigger: when you look at primes of size X, you get a gap typically of size the logarithm of X.

But the question I want you to think about today is whether it's the case that you always get gaps that look roughly of size the logarithm of X, or whether in fact you sometimes get gaps which are rather smaller than the logarithm of X and sometimes gaps which are rather larger than the logarithm of X. So, the extremes of the gaps between primes: how small a gap can you get, and how large a gap can you get? This is the basic question I'd like to think about today, and I want to start off with the question of small gaps between primes: how small can gaps between primes be?

Ideally you'd have some complete classification of all small gaps between prime numbers, so let's try and start off by doing this. The smallest possible gap between primes is of size one, and it's quite easy to see that the only pair of primes that differ by exactly one are the primes 2 and 3, for the simple reason that of any two consecutive integers, one of them has to be a multiple of two. So we've got a complete classification of all prime gaps of size one. Now let's try and get a complete classification of all prime gaps of size two. We can start doing some calculations: we find 3 and 5, 5 and 7, 11 and 13, and there seem to be lots and lots of these pairs of primes. The first pair bigger than a thousand is 1,019 and 1,021, the first pair bigger than a million is 1,000,037 and 1,000,039, the first pair bigger than a billion is 1,000,000,007 and 1,000,000,009, and these seem to keep on coming. So maybe the natural guess, just from this quick computation, is that there should be infinitely many such pairs of primes which differ by exactly two. As I'm sure virtually everyone in the audience knows, this is the famous twin prime conjecture, which is one of the oldest and most famous problems in mathematics, but unfortunately it is very much open, and we really don't know how to solve it, even though it's perhaps one of the most basic questions you could possibly ask about prime numbers.
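The little computation described above is easy to reproduce. Here is a minimal sketch (the function names are my own, not from the talk) that finds the first twin prime pair above a given threshold:

```python
def is_prime(n: int) -> bool:
    """Trial division; perfectly adequate for these small searches."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def first_twin_pair_above(x: int) -> tuple[int, int]:
    """Smallest pair of primes (p, p + 2) with p > x."""
    p = x + 1
    while not (is_prime(p) and is_prime(p + 2)):
        p += 1
    return p, p + 2

print(first_twin_pair_above(1000))    # -> (1019, 1021)
print(first_twin_pair_above(10**6))   # -> (1000037, 1000039)
print(first_twin_pair_above(10**9))   # -> (1000000007, 1000000009)
```

The pairs above a million and a billion match the ones quoted in the talk; the pairs really do "keep on coming" as far as anyone has ever searched.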
But even though the twin prime conjecture is very much open, it turns out that for lots of the recent progress on small gaps between primes it helps to try and look beyond the twin prime conjecture and at least guess at what might be true beyond it. So maybe you'll humour me and let me think about more general gaps between primes. We can look at gaps between primes not just of size one or two but of size h, for any given integer h, and we get a very similar phenomenon: if we're looking at an odd number h, then one of the two integers that we're looking at must be a multiple of two, and there's only one prime which is a multiple of two, so clearly there can't be infinitely many pairs of primes that differ by any given fixed odd number. However, when we look at prime gaps of size 4, or prime gaps of size 6, or prime gaps of size 8, for each of these we find lots and lots of pairs of primes that differ by exactly that number. And so again there is a generalization of the twin prime conjecture, known as Polignac's conjecture, now well over a hundred years old: not only should there be infinitely many pairs of primes that differ by exactly two, but in fact there should be infinitely many pairs of primes that differ by exactly h, for any given positive even number h.

But we don't need to stop there. Rather than just looking at pairs of primes, we could think about clusters of many primes: three primes or four primes all clustered very close together. So we could think about how short an interval you can fit three primes into. If you're looking for triples of primes all contained in an interval of length five, then there are only four possibilities — 2, 3, 5; 2, 3, 7; 2, 5, 7; or 3, 5, 7 — for the simple reason that in any interval of length five you have to have one integer that's a multiple of two or one integer that's a multiple of three, and there's only one prime which is a multiple of two and only one prime which is a multiple of three. However, if you're looking instead for primes in intervals of length 6, then we find there are lots and lots and lots of triples of primes all contained in an interval of length 6, and so naturally you would guess that there are infinitely many.

I just want to take this sequence of generalizations one step further, where instead we consider k different linear functions. These are just linear functions a·n + b, as functions of n, for two given integers a and b. You choose your favourite k of these — let's call them L1(n), L2(n), up to Lk(n) — and the question is: is it the case that there are infinitely many integers n such that all of these functions are simultaneously prime, or is it the case that there are only finitely many integers n where these linear functions are simultaneously prime? This is known as the prime k-tuples conjecture. In exactly the same way as we saw before, there can be some obvious obstructions to the linear functions being simultaneously prime, because one of the functions always has to be a multiple of two, or one of the functions always has to be a multiple of three; the natural generalization of this, which classifies any silly reason why they can't all be simultaneously prime, is that one of them always has to be a multiple of some fixed prime p. If there's no such obvious congruence obstruction, then the conjecture — the prime k-tuples conjecture — is that there should be infinitely many integers n such that all the linear functions are simultaneously prime, and pretty often. Another way of saying this is: you give me your linear functions, and it's a very quick and easy check to see whether there's some obstruction to them being simultaneously prime; if there's not one of these simple obstructions, then we believe it should be the case that they are all simultaneously prime infinitely often. The prime k-tuples conjecture is really a very powerful conjecture that tells us almost everything we want to know, at least in a qualitative sense, about the small-scale distribution of prime numbers. It immediately implies the twin prime conjecture, but it implies a lot more besides.
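The "quick and easy check" for a congruence obstruction can be made concrete. A set of shifts h_1, ..., h_k (giving the linear functions n + h_i) is called admissible if, for every prime p, the shifts miss at least one residue class mod p; only primes p ≤ k can possibly be covered, so the check is finite. A minimal sketch (my own code, not from the talk):

```python
def is_admissible(shifts: list[int]) -> bool:
    """True if the functions n + h (h in shifts) have no congruence
    obstruction: for every prime p <= len(shifts), some residue class
    mod p is missed, so no fixed prime p is forced to divide a value."""
    k = len(shifts)
    for p in range(2, k + 1):
        # only prime moduli p matter
        if any(p % q == 0 for q in range(2, p)):
            continue
        residues = {h % p for h in shifts}
        if len(residues) == p:   # every class mod p is hit: obstruction
            return False
    return True

print(is_admissible([0, 2]))     # -> True  (twin primes)
print(is_admissible([0, 2, 4]))  # -> False (covers 0, 1, 2 mod 3)
print(is_admissible([0, 2, 6]))  # -> True  (e.g. 5, 7, 11 are all prime)
```

Note that {0, 2, 4} fails even though 3, 5, 7 are all prime: one of n, n+2, n+4 is always a multiple of 3, so that single example is the only one.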
Not only do you get two primes very close together, you get m primes very close together, and in fact you can have m primes all contained in an interval of length roughly m log m, so really packed very tightly together. It implies straight away that you have infinitely many Sophie Germain primes; two immediate consequences of this are infinitely many cases of Fermat's Last Theorem, and the open problem of Artin's conjecture on primitive roots would follow from the infinitude of Sophie Germain primes. Maybe everything I've mentioned so far sounds very much like pure number theory and analytic number theory, but the twin prime conjecture also implies the existence of prime configurations that are very relevant for the functioning of computer algorithms: there's the concept of safe primes, for example, and it implies that there are infinitely many safe primes, which are a certain type of prime number used in certain computational cryptographic algorithms. A few more immediate consequences: you get arbitrarily long arithmetic progressions of primes, so another proof of the Green–Tao theorem, but moreover you can have this with bounded gaps between primes at the same time. You also find that Goldbach's conjecture is a fairly immediate consequence of a slight generalization of the prime k-tuples conjecture, where you have to be slightly careful about uniformity, and you can also find a residue class mod q that contains many small primes, which is a problem that will crop up towards the end of my talk.

So hopefully I've convinced you that the prime k-tuples conjecture is a really big conjecture with fantastic consequences for the prime numbers, but, being a huge generalization of the twin prime conjecture, it is unfortunately well beyond anything that we are able to prove at the moment. However, the main result that I'd like to talk about is a weak form of the prime k-tuples conjecture. Recall the prime k-tuples conjecture: you choose your favourite k linear functions; there's a simple check as to whether there's an obvious congruence obstruction, but if they're admissible then we believe it should be the case that infinitely often they are all simultaneously prime. The theorem I'd like to talk about says that, rather than demanding that all of the linear functions are simultaneously prime, we can guarantee unconditionally that at least many of them are simultaneously prime: instead of having all k of L1(n), ..., Lk(n) simultaneously prime, I can have roughly log k of the linear functions simultaneously prime.

A few quick comments I'd like to make about this. Famously, Zhang proved his result about bounded gaps between primes, and that roughly corresponds to a version of this weak prime k-tuples conjecture where, instead of having log k of the linear functions prime, he had two of the linear functions prime provided k was large enough, and he proved this using different methods. However, even though his method was different, both the ideas behind this proof and the ideas behind Zhang's proof were rooted in earlier work of Goldston, Pintz and Yıldırım. The result was due to me, but also independently to Terence Tao, in 2013–2014. From a technical point of view it's also very useful, because the method is very flexible and can apply to lots of other situations, so we can prove weak forms of the prime k-tuples conjecture in lots of different contexts.

I'd like to give a few ideas about how one might go about proving such a weak form of the prime k-tuples conjecture, but before that I'd like to mention the immediate consequences for gaps between primes. Recall that Zhang's theorem said you have two of your linear functions simultaneously prime provided k is large enough. If you choose all your linear functions to be of the form n + b_i, for constants b_i chosen so that they're admissible, then when two of them are simultaneously prime infinitely often, you automatically get infinitely many pairs of primes that differ by some bounded amount, because the linear functions themselves are fixed. When you go through Zhang's argument numerically, he got that there were infinitely many pairs of primes that differed by at most about 70 million. The weak form of the prime k-tuples conjecture that I put up before not only gives bounded gaps between primes; it turns out that the method gives an improved numerical result for bounded gaps between primes, and I was able to show that there are infinitely many pairs of primes that differ by at most 600. Moreover, because I could get many of the linear functions simultaneously prime, I was able to show that there are bounded-length intervals that contain many primes infinitely often, not just bounded-length intervals containing two primes. Then, in subsequent work as part of the Polymath collaborative online project, we optimized the numerics involved in the arguments, and the current world record is that there are infinitely many pairs of primes that differ by no more than 246.

I'd now like to move on to a couple of the ideas in the proof. On a very high level, I like to think of the proof as an application of the probabilistic method. You take some large number X, and I want to choose an integer n between X and 2X at random according to some probability measure, and I want this probability measure to be biased so that it's highly concentrated on n where many of the linear functions that we fixed are simultaneously prime. Then I want to perform just one computation: given this random integer n that I've chosen, I want to calculate the expected number of the linear functions which are simultaneously prime. If I find this expectation is large — say I do the computation and find that the expectation is 10.5 — that means that for at least one integer n between X and 2X there must be 11 of the linear functions simultaneously prime. So provided I can compute this expectation, and the expectation is large enough, I can show that many of the linear functions must be simultaneously prime for at least one value of n between X and 2X, and if I can repeat this argument for all large values of X, then I must have found infinitely many integers n such that many of the linear functions are simultaneously prime. That's a very high-level overview of how the argument works. It's worth noting that I don't know which of the linear functions are simultaneously prime at all — I'm just guaranteeing that some of them are simultaneously prime — and I don't know which integer n this occurs for; I'm just guaranteeing the existence of some integer n.

If you accept this strategy, then the question becomes: how on earth do I choose this probability measure? There are two requirements competing in different directions. On the one hand, the probability measure has to be simple enough that I can perform the computation of this expectation; on the other hand, it has to be complicated enough that it really captures something to do with the prime numbers. If I just choose a simple smooth function for the probability measure, then because the primes have zero density, this expectation would be decreasing as a function of X and would be less than one for large values of X, and the method wouldn't prove anything at all. If, however, I chose a function that's very tied up with the twin prime conjecture, then it's unlikely that I could calculate the expectation. Fortunately, there's an area of analytic number theory known as sieve methods which is very well suited to precisely these sorts of questions. To give a very high-level overview, I view sieve methods as the study of certain weighted integers, which you can think of as almost primes, or approximate prime numbers: the idea is that you give every integer n a non-negative weight according to some approximate degree of how prime it is.
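The pigeonhole step in the probabilistic strategy above can be written compactly (this is my notation, not the speaker's): drawing n from [X, 2X) with probability proportional to weights w(n) ≥ 0, one computes

```latex
% Expectation of the number of prime values, with respect to the
% probability measure proportional to the sieve weights w(n) >= 0:
\[
  \mathbb{E}\Big[\#\{i \le k : L_i(n)\ \text{prime}\}\Big]
  \;=\;
  \frac{\sum_{X \le n < 2X} w(n)\,\#\{i \le k : L_i(n)\ \text{prime}\}}
       {\sum_{X \le n < 2X} w(n)} .
\]
% If this ratio exceeds m, then at least one n in [X, 2X) must have
% more than m of the linear functions simultaneously prime.
```

The whole game is then to choose w(n) so that both sums can actually be evaluated while the ratio stays large.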
The hope is that these non-negative weights are much easier to calculate with than the prime numbers themselves, but nonetheless capture much of the arithmetic interest that we have in the prime numbers. There are two key properties of these sieve weights that come up. First, provided you understand the underlying integers in arithmetic progressions, you can compute things using these sieve weights; and for our question about simultaneous prime values of linear functions, because we understand the primes themselves fairly well in arithmetic progressions, and we can understand the integers in arithmetic progressions, we can compute using these sieve weights. The other key property is that the primes have positive density when you consider integers weighted by these non-negative weights. So although the primes have density 0 in the integers, they have positive density inside these weighted integers — they carry a positive proportion of the whole weight — and this means that the almost primes are genuinely capturing some of the arithmetic complexity of the primes. They can be thought of as a natural way of interpolating between the integers, which we understand well but which are arithmetically too naive, and the primes, which are what we want to understand but which are very difficult to compute with.

So, given that we understand the problem of linear functions well in arithmetic progressions, and we know that the primes have positive density in the almost primes, we expect that we can perform the computation if we choose a probability measure defined naturally in terms of these sieve weights, and moreover this final expectation should tend to some positive value. However, there's still the question of what that positive value is, because if I find that the expected number of my linear functions which are simultaneously prime is 1/10, then this doesn't prove anything at all: it just proves that there are infinitely many primes; it doesn't show that there are simultaneous prime values. I only get a result about bounded gaps between primes provided I can say that this expectation is bigger than 1, and all I know from this slide is that the expectation should be positive — it could be a very small positive number. Therefore the precise expectation you get depends very much on the precise definition of "almost prime", and it turns out that there's still a little bit more of an art than a science in precisely how you define these sieve weights for different applications and precisely what sort of expectations you get out. So the question of choosing a probability measure, if we decide to define it in terms of these sieve weights, comes down to a precise choice of the sieve that we want to use. It turns out that there's a fairly standard choice — the Selberg sieve — that performs exceptionally well in these sorts of circumstances and is often optimal, and if you take this off-the-shelf result and perform the computation, you find that the expectation you get is roughly 1/2 when X is very large. So this is somewhat disappointing: we've completely failed to prove bounded gaps using what is often the optimal choice of sieve weights.

However, it was a remarkable result of Goldston, Pintz and Yıldırım — and this was the key part of their method — that you can form a modified version of the Selberg sieve that actually performs much better. They produced a different type of sieve, and when you perform the computation, this gives a number which can be made arbitrarily close to one, but which is always slightly smaller than one. This is really frustrating, because it means we only just fail to prove bounded gaps between primes using their ideas. However, as I mentioned, the underlying arithmetic input for these computations relies on results about integers and primes in arithmetic progressions, and the key breakthrough of Yitang Zhang was to prove a slightly stronger result about primes in arithmetic progressions. Because he produced slightly stronger basic arithmetic information, he could correspondingly put this into Goldston, Pintz and Yıldırım's argument, and this improved the expectation by only a very small amount; but because we were just below the threshold beforehand, this was enough to push us just over the threshold, and this is how Zhang proved bounded gaps between primes. The key idea in my work, by contrast, was a multi-dimensional generalization of the Selberg sieve which performs much better at transferring this arithmetic information — results about arithmetic progressions — into results about small gaps between primes. By coming up with a new type of sieve, I was able to show that the expected number of the linear functions which are prime is roughly log k divided by 4, and so not only do you get two of the functions simultaneously prime, because you can get the expectation slightly larger than one when k is large enough: you can have ten or a hundred or thousands of your linear functions simultaneously prime.

To give a schematic overview of the idea of the method: we'd like to prove an arithmetic result about small gaps between primes, and the key arithmetic input on which the whole method fundamentally relies is results about primes in arithmetic progressions. To turn the sort of arithmetic information that we know about primes into the sort of arithmetic information that we would like to know about primes, we use the sieve method and the random probability argument based on the sieve, and this transfers results about primes in arithmetic progressions, in principle, to results about small gaps between primes. Zhang's key result was to improve the arithmetic information feeding into the whole method, which correspondingly flowed through the whole diagram and produced stronger results about small gaps between primes, and fortunately, because we were right on the cusp of a proof of bounded gaps between primes thanks to the work of Goldston, Pintz and Yıldırım, this was enough to give bounded gaps between primes. Rather than working on primes in arithmetic progressions, my work was about the sieve method at the heart of this argument — the part that transfers the information about primes in arithmetic progressions to small gaps between primes — and my multi-dimensional generalization was more efficient, so even using just weaker results about primes in arithmetic progressions it was able to prove much stronger results about small gaps between primes, and gaps between many primes.

This doesn't completely tell the full story, because to get good results about small gaps between primes you need to optimize many auxiliary choices of parameters in the sieve method, and so you need to solve a few different sorts of problems: one a pure combinatorial problem, and one a pure analysis problem, which is an optimization problem based on the choice of smooth weights. It turns out that these can be solved using ideas from elementary number theory and combinatorics, or ideas from either numerical analysis or functional analysis with a bit of probability theory thrown in, to get good results about small gaps between primes and to optimize all the parts of the method. If you care very much about the numerical results — for example the Polymath result, which is the current world record, that there are infinitely many pairs of primes that differ by no more than 246 — this 246 is an unnatural number that's just an artifact of the method. The method with the new sieve makes it clear that you'll get some numerical bound; the precise value you get depends very much on how well you can solve the optimization problem and how well you can solve the combinatorial problem, both of which we now have essentially optimal answers for. But there's a natural loss in the argument from this move to approximate prime numbers, and this means that we don't get twin primes using this method: if you choose 50 linear functions, you can ensure that two of them are simultaneously prime, and these 50 linear functions, once you choose them to be as tightly packed as possible while being admissible, correspond to the result that there are infinitely many pairs of primes that differ by no more than 246.
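Schematically, the two sieves being compared above have the following shapes — this is the standard notation from the literature rather than anything displayed verbatim in the talk. The Selberg/GPY weights use one divisor variable for the product of all the linear functions, while the multi-dimensional generalization uses one divisor variable for each function separately:

```latex
% GPY-style (one-dimensional) Selberg sieve weight:
\[
  w_{\mathrm{GPY}}(n)
  \;=\;
  \Bigg( \sum_{\substack{d \,\mid\, \prod_{i=1}^{k} L_i(n) \\ d < R}}
         \lambda_d \Bigg)^{2},
\]
% Multi-dimensional (Maynard--Tao) weight, one divisor per function:
\[
  w_{\mathrm{MT}}(n)
  \;=\;
  \Bigg( \sum_{\substack{d_i \,\mid\, L_i(n)\ \text{for each } i \\
                         d_1 \cdots d_k < R}}
         \lambda_{d_1,\ldots,d_k} \Bigg)^{2}.
\]
```

The extra freedom in choosing the coefficients λ_{d_1,...,d_k}, rather than a single sequence λ_d, is what lets the expectation grow like log k instead of being stuck just below 1.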
So 246 is just an artifact of the method. But also, if you could prove stronger results about primes in arithmetic progressions, correspondingly this would feed through to give stronger results about small gaps between primes, and that would be one way to improve the 246.

I'd now like to pass to the opposite extreme. I've talked a bit about the prime k-tuples conjecture and its consequences for very small gaps between primes — in particular, gaps between primes that are much smaller than the average gap of size the logarithm of X. I'd now like to think about large gaps between primes. Again, if we look at primes of size about X, then the prime number theorem says that the average gap is of size log X, so we certainly have gaps of size at least log X in their order of magnitude, and the question I want to consider is how much larger they can be than the logarithm of X — or maybe it's the case that even the largest prime gaps are only a little bit larger than the logarithm of X. Well, it turns out that there is a result of Westzynthius from the late nineteen-twenties that there exist prime gaps which are arbitrarily large compared to the average gap, and there then followed a flurry of papers through the 1930s optimizing these results, which stabilized in Rankin's 1938 result that you can get gaps which are quantitatively bigger than the average gap. So rather than having gaps of size log X, you can have gaps which are slightly larger than log X, given by this rather messy expression: log X times double log X, divided by triple log X squared, times quadruple log X — but really you should think of this as something which is slightly smaller than log X times double log X. Unfortunately, progress then stopped after this flurry of papers in the 1930s, and beyond very minor improvements in the implied constant, there were no real improvements on Rankin's result.

For those of you who remember Paul Erdős, or have heard about Paul Erdős: he used to encourage and stimulate work in the combinatorics and number theory communities by offering challenge problems to different researchers, and as an incentive he would put monetary rewards on solving the different problems that he set to the community. It turned out that Erdős, who had also worked on this problem and was amongst the authors behind the sequence of papers leading up to Rankin's result, was really taken by this problem, but acknowledged that to make any significant progress beyond Rankin's result you needed fundamentally new arithmetic ideas. So he actually put the largest reward that he ever put on a problem on this particular problem, and he challenged people to improve this result by the smallest possible amount: provided you could show that the largest gap grows faster than a constant times log X times double log X times quadruple log X divided by triple log X squared, for any constant, he put up a $10,000 reward — and he did this because it clearly required new arithmetic ideas. In 2014 this was solved independently by myself and by a team of four authors: Kevin Ford, Ben Green, Sergei Konyagin and Terence Tao. Since then the five of us have combined our efforts to get the best possible quantitative results from our new approaches, and we get a small but genuine improvement over Rankin's result: we improve this triple log X squared in the denominator to triple log X. Maybe the quantitative improvement is not so exciting in and of itself, but the key point is that we were able to input new arithmetic information into the whole method.
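Writing G(X) for the largest gap between consecutive primes up to X, the bounds discussed above can be written out in symbols (my notation; the talk states them in words):

```latex
% Rankin (1938): for some constant c > 0 and all large X,
\[
  G(X) \;\ge\; c \,
  \frac{\log X \,\log\log X \,\log\log\log\log X}{(\log\log\log X)^{2}} .
\]
% Ford--Green--Konyagin--Maynard--Tao: one factor of logloglog X removed,
\[
  G(X) \;\gg\;
  \frac{\log X \,\log\log X \,\log\log\log\log X}{\log\log\log X} .
\]
```

Erdős's $10,000 problem was exactly to replace the constant c in the first bound by any function tending to infinity; both 2014 papers did this, and the combined five-author result is the second displayed bound.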
Now I'd like to go through a couple of slides on some of the ideas behind the proof of producing large gaps between primes. I like to think of the fundamental problem of producing large gaps between primes as a game. I give you some large integer Y, and we line up the integers 1 up to Y. The way the game is played is that first of all you choose a residue class mod 2, and we cross out all integers which are congruent to your residue class — so if you choose a_2 mod 2, then we cross out all integers which are congruent to a_2 mod 2. Then we go to the next prime, which is three, and you choose a_3 mod 3, and we cross out all integers which are congruent to a_3 mod 3, and then we keep on doing this prime by prime: you choose a residue class a_5 mod 5, then a_7 mod 7, and each time we cross out all the integers in that residue class, and we keep on going until you've crossed out all the integers between 1 and Y. The fundamental problem is: what's the minimal number of steps? If you choose your residue classes very carefully, how quickly can you end the game, having crossed out all integers between 1 and Y? If you can cross out all the integers very quickly, then there's a trick based on the prime number theorem that shows you can create a string of consecutive integers each of which has a very small prime factor, and so you've created a long string of consecutive integers which aren't prime, and therefore a large gap between primes. This is a very natural generalization of the high-school argument that you may remember, that there are arbitrarily large gaps between primes because n! + 2 is a multiple of 2, n! + 3 is a multiple of 3, n! + 4 is a multiple of 4, and so on up to n! + n, which is a multiple of n; this gives you n − 1 consecutive composite values, and so a gap between primes of size at least n. The game is just a very natural generalization of that, formulated and thought of as a combinatorial–arithmetic game.

The aim, then, is to try and come up with a choice of residue classes which very efficiently crosses out all the numbers from 1 up to Y, and so you want, as much as possible, to avoid repeatedly crossing out the same number with different residue classes. The argument behind the Erdős–Rankin approach is very simple: for medium-sized primes you just choose the residue class 0, and then for small primes and large primes you just greedily make the choice of the residue class which crosses out the largest number of remaining integers at that stage. By analysing this algorithm for how you play the game, they showed roughly how many primes you require — about Y double log Y over log Y, up to further iterated-logarithm factors — and correspondingly, once you've optimized what you mean by medium, small and large, and once you plug this result back into what I had on the first slide, this gives Rankin's result on large gaps between primes. It's this argument that we'd like to improve upon, by being more careful with our choices of residue classes and by inputting some extra arithmetic information about the primes. The argument behind my work with Ford, Green, Konyagin and Tao improves the choices of the residue classes for the large primes. I don't want to go through all the details, but it turns out that the fundamental obstacle is a fairly simple question about prime numbers. In the Erdős–Rankin method, the residue class chosen for each large prime only needs to guarantee removing one element at each stage, whereas we'd like to remove many of the remaining survivors that haven't been crossed out yet. Once you work through the argument, this is more or less the same as the following problem: I give you some prime p of size about X, and you need to find some residue class a_p modulo p which contains many primes only very slightly larger than X. Since a_p mod p is just one residue class modulo something of size about X, there's only about one element of this class amongst the integers of size X, but if we go slightly larger than X then we might hope to find many primes all in the same residue class — although, because the primes have density about 1 over log X, on average there are no primes in any one of these residue classes. So we want to find some rare residue class which is unusually rich in primes.
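The crossing-out game described above is easy to experiment with. Here is a minimal sketch — my own illustrative code, not the talk's algorithm, and purely greedy at every prime, whereas the actual Erdős–Rankin strategy treats small, medium and large primes differently — that plays the game and counts how many primes are needed:

```python
def greedy_cover_steps(y: int) -> int:
    """Play the crossing-out game on {1, ..., y}: for each prime
    p = 2, 3, 5, ... pick the residue class mod p that removes the
    most survivors, and count how many primes are used before the
    set is empty. Greedy at every prime, for illustration only."""
    survivors = set(range(1, y + 1))
    steps = 0
    p = 2
    while survivors:
        # advance to the next prime (trial division is fine here)
        while any(p % q == 0 for q in range(2, p)):
            p += 1
        best = max(range(p),
                   key=lambda a: sum(1 for n in survivors if n % p == a))
        survivors = {n for n in survivors if n % p != best}
        steps += 1
        p += 1
    return steps

print(greedy_cover_steps(10))  # -> 5
```

Even this naive strategy empties {1, ..., 10} with five primes; the point of the Erdős–Rankin argument, and of its improvement, is to quantify how few primes suffice as Y grows.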
unusually rich in the number of times but this is precisely the sort of problem that I mentioned was a consequence of the funky tuples conjecture at least if you knew the prime K 2 plus conjecture with good degrees of uniformity and lots of appearances so you could choose your linear functions to be of the form a liar van gogh's n plus some small multiple of P where you choose these small multiples to make the linear functions and admissible set of linear functions to avoid the obvious converse restrictions and provided you can find an integer n which is not much larger than P a satisfying this you'd then find a rescue class and integer n where many of these linear functions are simultaneously prime corresponds to a residue class containing many small primes so we don't know the pamphlet do books conjecture but we do know our weak form of find K poopers conjecture and I mentioned that one pleasing feature of the method is that it's very uniform and flexible and so in particular using our weak form of the pamphlet equals conjecture with this sort of choice of the linear functions we can solve this model problem and show you that for any prime P of size about X there has to be a versa you class AP MRP which contains many poems which a little bit larger than acts now using having self that mental problem we then use techniques from populistic common attacks to show that one can make a consistent trace of the residue classes a people the large Prime's P to more efficiently go through the other Franken argument and so to play this combinatorial game where we're costing out integers more efficiently and this requires rather fewer climbs in the overall argument and correspondingly gives us rather larger gap stream files oops so I just want to finish with a few concluding remarks so one question that I left open was the fact that the whole method of proving these weak forms of the plank a tubeless conjecture relied on our results about fines and athletic aggressions and so 
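In symbols, the construction just described for the model problem can be summarized as follows; the shift notation h_i is mine, chosen for illustration:

```latex
% p is a prime of size about x; h_1 < ... < h_k are small shifts chosen
% so that {n + h_1 p, ..., n + h_k p} is an admissible set.
\[
  L_i(n) = n + h_i\, p \qquad (1 \le i \le k)
\]
% If many of the L_i(n) are simultaneously prime for some n slightly
% larger than x, then all of those primes lie in the single residue
% class n mod p, which is exactly the rare rich class we wanted.
```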
So, if one were very optimistic, we might say: well, what if we assume, as optimistically as possible, the strongest believed results about primes in arithmetic progressions; how far can we possibly push this method? It is a result, as part of the work of the Polymath group, that if we assume the most optimistic results about primes in arithmetic progressions, a conjecture known as the generalized Elliott–Halberstam conjecture, then we can push this method much further: it corresponds to much stronger arithmetic information feeding into the sieve, and so we can get much stronger results about, for example, small gaps between primes. In particular, we can show that there are infinitely many pairs of primes that differ by no more than 6. Unfortunately, however, this is a hard limit for our methods, and there is a very substantial obstruction to pushing 6 down to 2 and concluding the twin prime conjecture. So it is definitely the case that a fundamentally new arithmetic idea is needed, even if we were to assume exceptionally strong results about primes in arithmetic progressions, if one wants to prove the twin prime conjecture itself: it is a fundamental part of the method that we don't know which functions are simultaneously prime, and therefore we can't prove the twin prime conjecture. However, this does have a couple of amusing consequences, one of which I'd like to mention here. If you assume this very plausible and widely believed, but definitely very optimistic, result about primes in arithmetic progressions, then at least one of the following is true: either the twin prime conjecture holds, or the Goldbach conjecture is "almost true", in the sense that every large even number is within at most 2 of a number which can be written as the sum of two primes. Now, of course, we believe that both of these things should hold, and I can't tell you individually that either one of them holds, but I do know that at least one of the two holds as a consequence of these results. So, just to finish the talk: there have been a lot of very exciting developments on gaps between primes and in prime number theory, and it's a great time to be involved in analytic number theory and in sieve methods for young people; it's a very energetic field at the moment. There have been a number of great results, and Maksym Radziwiłł will be talking later in this Congress about results related to these conjectures from a slightly different angle. I hope you enjoyed the talk; thank you very much for listening. I think I'll wrap up here and pass over to questions.

Question: Thanks very much, James. Can you explain a little bit more about the optimization problems in the small-gaps proof?

Answer: Sure, there are two different optimization problems. For the first one, I said that you take your linear functions to be of the form n + b_i, so just linear shifts, and you want to choose the b_i to be as close together in size as possible, as small as possible, while still maintaining the property that you don't have one of these obvious congruence obstructions to all the functions being simultaneously prime, in order to get the best possible result about small gaps between primes. So this is the combinatorial optimization problem of how small I can choose these shifts b_i while keeping my set admissible, with the shifts as closely spaced as possible. It turns out that we have numerical computations for small values of k, and we have asymptotically optimal results for large values of k, for that optimization problem. The second optimization problem is slightly more subtle. I mentioned that the key idea in my work is a multi-dimensional generalization of the Selberg sieve, and naively this is defined in terms of some auxiliary smooth function, which fundamentally corresponds to a multi-dimensional version of a probability density function.
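Returning briefly to the first optimization problem: admissibility of a shift set {b_i} is a finite check, since a prime p larger than k can never cover all p residue classes with only k shifts. A minimal sketch (the function name is mine, not from the talk):

```python
# Admissibility check for a set of shifts B = {b_1, ..., b_k}:
# B is admissible if for every prime p there is a residue class mod p
# avoided by all b_i (otherwise n + b_i is divisible by p for some i,
# for every n). Only primes p <= k can possibly cover all p classes.

def is_admissible(shifts):
    k = len(shifts)
    for p in range(2, k + 1):
        # Only primes matter; skip composite moduli.
        if any(p % q == 0 for q in range(2, p)):
            continue
        residues = {b % p for b in shifts}
        if len(residues) == p:  # all residue classes mod p are hit
            return False
    return True

# {0, 2, 4} is NOT admissible: mod 3 it occupies residues {0, 2, 1}.
print(is_admissible([0, 2, 4]))   # False
# {0, 2, 6} IS admissible: mod 3 it only occupies residues {0, 2}.
print(is_admissible([0, 2, 6]))   # True
```

This is why {n, n+2, n+4} can contain at most one triple of primes ({3, 5, 7}), while {n, n+2, n+6} is a legitimate candidate pattern for infinitely many prime triples.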
There is then a question as to how to choose this smooth function optimally, in order to get the best possible result and make the sieve as efficient as possible, and this turns out to correspond to calculating the largest eigenvalue of a certain linear operator on a Hilbert space. You can define a fairly explicit linear operator, very closely related to the expectation of the number of simultaneous prime values, and to get the optimal result you want to calculate the largest eigenvalue of this linear operator; so you need to bound the spectrum of the linear operator. This is a pure functional-analysis problem that has stripped out all the number theory, but it is precisely the problem you want to solve. Morally, what is going on when you choose this optimal function is that you are trading off between trying to increase the probability that all your linear functions are simultaneously prime, versus decreasing that probability while increasing the probability that at least one of them is prime; there is a trade-off there, and it is captured by this linear operator. But the precise details are maybe too technical for me to go into now.

Question: I have a question which has nothing to do with analysis. In your Erdős–Rankin method, where you play this sieving game and you want efficient sieving techniques, has anybody tried to refine that using properties of primorial numbers, so dealing with products of primes instead of factorials, to make the sieving process easier?

Answer: Right. The high-school method was n! + 2, n! + 3, and so on; if you chose the residue class 1 for each prime, that would correspond to taking the primorial P plus 2, P plus 3, P plus 4, and so on. So that is already incorporated in this framework, and it corresponds to the most basic choice of always using the residue class equal to 1 at every stage.
But it turns out that these more complicated choices of residue classes work more efficiently, so yes, that is very much incorporated.

Question: So what is the expected order of G(X), the largest gap between primes?

Answer: That's a very good question. It is strongly believed that if you are looking at primes of size about X, the largest gap between primes should be of size about (log X)^2, so much larger than anything we can prove so far. Our methods are somewhat limited; in fact, all our methods go through this combinatorial game, and it is strongly believed that if you go through this combinatorial game you cannot get close to proving (log X)^2. Somehow the large gaps that occur in nature occur for very different reasons than the ones that we construct here, and we would need a radically new idea to be able to touch that. But amongst primes of size about X, we expect the largest gap to be of size (log X)^2.

Question: May I ask another foolish question? Bombieri–Vinogradov is a result with level of distribution 1/2, meaning it tells you that the primes are equidistributed on average for moduli up to size about X^{1/2}. Now, you have already told us what happens when you assume Elliott–Halberstam, that is, a level of distribution 1 − ε. What about all the levels of distribution in between; I assume that gradually helps. And what happens if you assume a level of distribution of 1/2 + ε: does it make a difference?

Answer: I mentioned that you get about (log k)/4 of your linear functions being simultaneously prime. If you assume that, instead of having level of distribution 1/2, you have a level of distribution θ, then you get about (θ/2) log k of your linear functions being simultaneously prime; so that is the numerical consequence.
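The numerology in this answer can be captured in a one-line heuristic; the function name is mine, and this is only the talk's asymptotic rule of thumb, not a theorem statement with explicit constants:

```python
import math

def simultaneous_primes_heuristic(k, theta):
    """Rule of thumb from the talk: a level of distribution theta yields
    about (theta/2) * log k of the k linear functions simultaneously
    prime. theta = 1/2 is Bombieri-Vinogradov; theta = 1 - eps is the
    Elliott-Halberstam conjecture."""
    return (theta / 2) * math.log(k)

k = 10_000
print(simultaneous_primes_heuristic(k, 0.5))  # (log k)/4, the unconditional count
print(simultaneous_primes_heuristic(k, 1.0))  # twice as many under Elliott-Halberstam
```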
Correspondingly, as you get a slightly better level of distribution, you can get slightly more linear functions being prime, or you require slightly fewer of your linear functions to ensure that two of them are simultaneously prime.

Question: So it is as close to continuous as a function from the reals to the integers can be?

Answer: Yes, exactly.

Question: And what about results in the style of Zhang, namely raising the level of distribution at the cost of restricting things to smooth numbers, say: can that sort of thing ever help?

Answer: Yes, it definitely can help. For the problem of intervals containing a very large number of primes, I showed that you can have m primes in an interval of length about e^{4m}, and this 4 corresponds to the 1/4 in the (log k)/4. So again, if you have a higher level of distribution, you can improve that constant 4; for the problem of many primes, the restriction to smooth numbers is somewhat irrelevant, it doesn't matter too much at all, and you very quickly get an improvement of that constant 4 to a constant of 3.8 or 3.9 or something like that, by using Zhang's results and the refinements of the Polymath 8 project. For the question of just pairs of primes, the values of k that we are currently looking at numerically are sufficiently small that there appears to be only a very limited numerical benefit from incorporating ideas along Zhang's lines: they seem to be numerically so small that they wouldn't necessarily give any improvement whatsoever in the bound 246, but they would bring a large amount of theoretical and computational difficulty. So maybe they could improve 246 to 244 or 242, but even that is not totally clear, because we are in a numerical regime where the increase in the level of distribution isn't so great compared to the cost of restricting to smooth numbers, or numbers that look smooth-ish. And that is true not so much for Zhang's particular result as for hypothetical stronger versions of it: if you could prove a strong enough version, where you are flexible enough in terms of what you mean by "smooth" and strong enough in terms of your level of distribution, then you would expect to get correspondingly bigger and bigger improvements. So again it is very continuous, but with the best-known numerics for Zhang-type results we are in a regime where the benefit is almost negligible.

Excellent. Any other questions? Well, if not, let's thank James again.

[Applause]
Info
Channel: Rio ICM2018
Views: 3,586
Rating: 5 out of 5
Keywords: Mathematics, ICM2018
Id: KbWcd34QHI8
Length: 51min 12sec (3072 seconds)
Published: Thu Sep 27 2018