MIA: Martin Nowak, Evolutionary Dynamics

Captions
Thank you. The title of my talk is Evolutionary Dynamics. What fascinates me about evolution is the ability to describe it mathematically. In my opinion, evolution actually is, or has very much become, a mathematical theory, and it can really be best formulated with mathematical equations. There are many verbal presentations of evolution out there, and some of them are good, but they are typically not as accurate as the mathematical one. A quantitative understanding of evolution requires a mathematical formulation, and this is what I wish the field of evolutionary dynamics to be: the mathematical exploration of the evolutionary process.

The talk today will be more on the abstract side. We also work on specific problems, for example the spread of CRISPR-based gene drives, or the evolution of cancer and of viruses. Cancer itself is an evolutionary process: cells acquire mutations, and selection then leads to lesions, and we study the evolutionary dynamics of the response to treatment such as immunotherapy or targeted therapy. Those are our applied research projects, but today I will talk mostly about the underlying mathematical formulation of some of the questions that we are very excited about, so please excuse me that it is somewhat more on the abstract side.

We begin with the following question: what is evolution? Ernst Mayr, an evolutionary biologist who lived to be more than 100 years of age, wrote a book, What Evolution Is, and in that book he also asks that question: what is evolution, and what is it that evolves? He says that figuratively we talk about, say, the evolution of species (that was his field), the evolution of genes, the evolution of genomes, the evolution of the brain, and so on, but the only thing that actually evolves is the population. The carrier of the evolutionary process is a population of reproducing individuals. So evolution occurs in populations, and populations consist of individuals that reproduce. We can think of this reproduction as genetic, but it is not only genetic: in the realm of humans, which we often wish to explore, reproduction is also cultural. A person has an idea, and that idea spreads; people learn from each other, and that learning process I also analyze with evolutionary dynamics. So I constantly try to expand what I would even call evolution, and I am not absolutely sure where the limits of what we would call evolution are once we go into learning and these sorts of things. So we can think of reproduction as genetic or cultural.

The two fundamental ingredients of the evolutionary process, going back to Darwin, are really mutation and selection: mutation means there is a new type, and selection means that something grows faster than something else. These are the two fundamental principles, and over the last 20 years I have tried to add a third principle, and that would be cooperation. I want to make the point that cooperation, aspects of which we will encounter in this talk, is a kind of master architect of the evolutionary process. Evolution is not only competition, it is not only fighting; there is always fighting too, but there is something else as well, and this is helping: I help you and you help me, your cells help one another, or genes in protocells that begin to form help each other reproduce. If you really look for it, you see that cooperation is that which lifts biological organization to a higher level.
It allows the evolution of genes, of cells, of multicellularity, of cells with organelles, of animal societies, and of humans. And if you ask what is really specific about humans from an evolutionary point of view, it is human language. Human language is something that emerges, and we also have other projects about human language and cooperation, in groups of people that cooperate and want to communicate with each other.

[Q] But isn't cooperation subsumed by competition? — I would build it hierarchically: fundamentally you have variation; once you have variation you have competition; and once you have competition there can also be cooperation. So I would build it hierarchically; it is not something instead of selection. But I would argue that if we only have competition in mind, we don't get these amazing discoveries of evolution, these levels of organization. [Q] To me it makes sense: in the first example of selection you just select for the fittest organism, and that is true if there are no interactions; if there are interactions, cooperation is a type of interaction between different genomes or different species, but you can also have negative interactions. — Yes, exactly right. So when we have evolution of cooperation, we always want to find a mechanism that allows natural selection to favor cooperation; that is what we mean by evolution of cooperation.

In this world of mathematical descriptions of evolution, a large part is deterministic: we write down differential equations, ordinary and partial differential equations, and a lot of this is the foundation of our field; very often, when we have a new problem, we ask ourselves whether we can do it like that. But the more accurate description is always the stochastic one, and once you have a stochastic description, you sometimes ask how, in certain limits, you can recover the deterministic description. What I focus on in this talk is the stochastic, probabilistic description of the evolutionary process.

My talk has two parts: first constant selection, then non-constant selection. Constant selection is when there is a constant fitness landscape, for example. But once we play a game, once we have social interaction, selection is not constant, because the fitness, the reproductive value, of strategies changes over time depending on what the population is doing; we call this frequency-dependent selection. So the first part of my talk is constant selection, and the second part is frequency-dependent selection, where the selection coefficients depend on the relative abundance of the strategies. And we want the stochastic description.

Here is a very simple stochastic description that I like to use, and it has been around for a long time: the population geneticist Moran, an Australian population geneticist, proposed it in the 1950s, and it is the so-called Moran process. We have a population of size N, and we choose an individual for reproduction, proportional to some value, some number, which we like to call fitness; then we choose another one for death (it could be the same one); then the first one makes an offspring that replaces the second one, and you have made one step. That is a simple stochastic process. In probability theory it is something like an urn: you have two colors in there — you could have more colors, but in the beginning we do calculations with two colors.
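As a quick illustration of this verbal description (this snippet is my addition, not part of the talk), a minimal simulation of the two-color Moran process might look like the sketch below; the fitness value r for the blue type and the Monte Carlo estimator are illustrative choices.

```python
import random

def moran_fixation(N=20, r=1.1, runs=20000, seed=0):
    """Estimate the fixation probability of a single 'blue' mutant with
    relative fitness r in a well-mixed Moran process of constant size N.
    The fitness difference is put into the birth step; death is uniform."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        i = 1                                   # number of blue individuals
        while 0 < i < N:
            # choose a reproducer proportional to fitness (blue: r, red: 1)
            blue_birth = rng.random() < (r * i) / (r * i + (N - i))
            # choose an individual to die uniformly at random (can be the same one)
            blue_death = rng.random() < i / N
            i += int(blue_birth) - int(blue_death)
        fixed += (i == N)
    return fixed / runs

print(moran_fixation())   # compare with (1 - 1/r) / (1 - r**(-N)) ~ 0.107
```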
So you take a ball, you reproduce it, you remove one; the total population size is constant, and the different colors can have different propensities for being chosen, either for birth or for death. We can put the fitness difference into births, into deaths, or into both; it is typically enough to put it into one of them, so we put it into birth for the moment.

The Moran process happens to be a birth-death process in the theory of stochastic processes: it has a state space, which here happens to be one-dimensional, and in one step you can move at most one step forward or one step backward, or stay in the same place. That is the definition of a birth-death process, a specific kind of Markov process. So I can have one blue individual here, and if I run my Moran process I could end up with two blue individuals; from two blue I can go back to one or forward to three, but not to four, because you can only make one step — that is the definition of a birth-death process. For those of you who know, for example, the Wright-Fisher process, where the whole generation is replaced: that is another model, and it would not be a birth-death process, but it would still be a Markov process. Here I stay with the Moran process, but the results are often similar.

Without mutation there are two absorbing states: everything red or everything blue; once I am there, nothing changes anymore. With mutation these states are transient — if I wait long enough I will definitely leave them, no matter how large my population is — but without mutation, once I am in those states nothing will change anymore; they are absorbing. A typical thing we want to know is: we introduce a new mutant and we want to know the probability that this mutant reaches fixation. If we wait long enough, one of two things must happen: the new mutant either dies out — that is extinction — or it takes over the whole population — that is fixation — and we are interested in calculating these probabilities. This is the event of fixation of the new mutant, and that is the probability of extinction of the new mutant; nothing else can happen if we wait long enough.

If we have constant selection, we can say the relative fitness of the mutant is r, which can be greater than one or less than one, compared to the fitness of the red type, which is one; it is a reproductive value. The total population size is N, and the fixation probability can be calculated by the formula here, which depends on r and on the population size. In the limit where r goes to one we have neutral evolution, and in neutral evolution the fixation probability of a new mutant is 1/N. That is completely clear; you don't have to do any calculation, because if I ask what the chance is that this individual takes over the population, and everything is neutral, it must be the same chance as for anybody else, so it must be 1/N. So for neutral evolution the fixation probability is 1 over the population size, but if we go away from neutral evolution, we have this formula.
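For reference, the formula referred to on the slide is the standard fixation probability of a single mutant of relative fitness $r$ in a Moran process of size $N$ (with the fitness difference in the birth step):

$$\rho = \frac{1 - 1/r}{1 - 1/r^{N}},$$

which indeed tends to $1/N$ as $r \to 1$.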
Now, a question that I like to ask, and that is kind of fundamental in our field, is: how does population structure affect natural selection? More precisely, are there suppressors and amplifiers of natural selection? How did I first think about that question? I made mathematical models for the origin and evolution of cancer, for somatic evolution, and I thought: normally we think of evolution as something desirable — it gives rise to beautiful structures, the flowers, the insects, the brain — so we like evolution. But here is an evolutionary process we don't like; the organism does not like the evolution that leads to cancer. How can you get rid of evolution if you don't want it? One way would be not to allow mutation, but you could say: I have already paid a huge price to make sure that our cells replicate with a very low mutation rate; I may have reached a cost limit there, paying so many ATPs per base that I cannot reduce it further — mutation is already as low as possible. Can I get rid of selection? If I don't want the evolution of cancer, can I come up with a population structure that cancels selection? Then the dangerous mutations would behave like neutral ones. These would be the so-called suppressors of selection. But first we simply ask: is there something that makes selection weaker, and consequently, is there something that makes selection stronger, or is it all like in a well-mixed population?

In order to do this we formulated evolutionary graph theory, and the beginning of this was actually the PhD thesis of Erez Lieberman, who also worked here at the Broad Institute. The idea is that the individuals occupy the vertices of a graph, and the edges determine where offspring are placed. So we have the same Moran process, but now on a general graph; the Moran process is a special case — it is the complete graph with identical weights and self-loops. For the Moran process we have that fixation probability, and we want to see whether changing the population structure changes the fixation probability.

Let us try some examples. A simple example: instead of the complete graph — we also call it the well-mixed population, which is a strange name, because well mixed is not the same as well stirred; there are some recent papers about that — here is the directed cycle. Suppose my cells reproduce such that the offspring of this one goes here, the offspring of this one goes here, and so on. The fixation probability of a randomly placed mutant can be easily calculated in this example too, and we get the same formula. So here we have changed the population structure very much, but the fixation probability is the same. With respect to the probability of fixation, which measures somehow how selection operates, nothing has happened here; the time could have changed — the time until fixation is more complicated and could have changed — but the probability of fixation has not. Here is the undirected cycle, where you can now go backward and forward, and again a very easy calculation shows that nothing has changed.

[Q] How can you have fixation if you can move back to the initial position at any time — in any of these, well, except this one, because you could fall into the red one? — So here, basically: if that one is chosen for reproduction, then I have two blue; if I then choose this one, I have three blue; and in the end, if I choose that one, I have all blue and the game is over. The same can happen here, or here — the offspring goes there. That is the description; so, same fixation probability.

Next we want a characterization of all population structures that behave like the complete graph, and in order to do this we define the temperature of a vertex: the temperature of a vertex is the sum of the weights of all edges leading into it.
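In symbols, writing $w_{ij}$ for the weight of the edge from vertex $i$ to vertex $j$ (the notation is mine), the temperature of vertex $j$ is

$$T_j = \sum_{i} w_{ij}.$$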
The graph itself can be weighted, and then you sum all the weights that lead into a vertex; if the weights are just 0 and 1 you are summing binary numbers, but the weights don't have to be binary — they can be anything. The temperature of vertex j is the sum of all the weights on the graph that lead into j. Now the theorem is the so-called isothermal theorem: if all vertices have the same temperature, then the fixation probability is the same as in the Moran process. This is a characterization that allows us to construct many population structures with the same fixation probability as the Moran process. A consequence is, for example, that all symmetric graphs, like regular grids, have the same fixation probability as the Moran process. Before our work, many papers in population genetics observed that the fixation probability did not change, but they were all actually looking at examples that were isothermal.

[Q] So what is changing is just the amount of time it takes? — Yes, exactly. The time is interesting to calculate and sometimes difficult to calculate; time is a big problem. [Q] Isn't it related to the eigenvalue gap, as in spectral methods? — The time has something to do with the gap, and that itself is complicated to calculate, but there are lots of things that could be done there. [Q] Is the fixation probability itself then just a consequence of the stationary distribution of the random walk — because all of these examples have the uniform stationary distribution — is that a useful way of thinking, or is that not really relevant? — I am not sure.

So then, nevertheless, we want to find something that is not isothermal. [Q] If all the temperatures are the same, the fixation probability is that of the Moran process — does it also go the other way? — There is an extension, proven by a very good graduate student, Ben Adlam. The isothermal theorem asks for something that is a very severe restriction, because all the temperatures have to be exactly the same; but if you think of something like an Erdős–Rényi graph, the temperatures may be approximately the same but not exactly the same. So the proof by Ben is the so-called robust isothermal theorem: if all vertices have approximately the same temperature, then the fixation probability is close to that of the Moran process. This is a robust extension of the isothermal theorem; otherwise the isothermal theorem would apply only to a non-generic case. For some random graph models, like Erdős–Rényi, we can actually prove things; for other random graphs we observe things numerically that we don't know how to prove — but this is a theorem that is proven.
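As a side note (my addition, not from the talk), the isothermal condition itself is trivial to check numerically for any weighted graph; a minimal sketch, assuming W[i][j] holds the weight of the edge from vertex i to vertex j:

```python
import numpy as np

def temperatures(W):
    """Temperature of each vertex: the sum of the weights of its incoming edges."""
    return np.asarray(W, dtype=float).sum(axis=0)

def is_isothermal(W, tol=1e-12):
    """True if all vertices have (numerically) the same temperature."""
    T = temperatures(W)
    return bool(np.allclose(T, T[0], atol=tol))

# The complete graph with identical weights is isothermal; a star is not.
K4 = np.ones((4, 4)) - np.eye(4)
star = np.zeros((4, 4))
star[0, 1:] = star[1:, 0] = 1.0
print(is_isothermal(K4), is_isothermal(star))   # True False
```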
But here are now suppressors of selection, and suppressors of selection are actually easy to construct. We have this directed line here, or this structure here. If you ask, for that graph, what the fixation probability of a randomly placed mutant is: a mutant can only reach fixation if it happens to be placed here; if it is placed here, or here, it will definitely not reach fixation, because eventually it will be replaced. So a randomly placed mutant, independent of its relative reproductive advantage r, reaches fixation with probability 1/N, because it has to be placed at the root, and then the fitness doesn't matter.

The amazing thing is that this is very much the architecture of a colonic crypt, where you have a compartment with one or a few stem cells at the bottom, then cell division and differentiation as you move up the crypt, and then you die at the top of the crypt. So the colonic crypt, as an epithelial tissue, is in my opinion organized as a suppressor of selection. And we can ask why, and say: maybe because that is a protection against cancer, because advantageous mutations there behave almost like neutral mutations. Human epithelial tissues and the hematopoietic system, where many, many cell divisions occur, are organized as suppressors of selection, thereby reducing the onset of cancer.

[Q] But in those tissues, like in a crypt, once the stem cell divides, the cells up here on the villi are not dividing — so the blue cell (assuming blue is bad) is probably more likely than 1/N to be at the base, right? — The interesting thing is that here the crypt cell divides and moves up, this cell divides and moves up, so there is cell division up here too, hierarchically. The same holds for the hematopoietic system, where most cell division actually occurs downstream: there are approximately 10^11 cell divisions every day, but very few of them occur in cells that are here to stay for a significant amount of time; the stem cells of the hematopoietic system divide maybe only once a week or once a month, and there could be as few as a thousand of them. So you greatly reduce the risk of cancer-causing mutations in situations where you need many cell divisions. I think this is a strong argument that this is a suppressor of selection.

The next thing we want to ask is: what about amplifiers — can we find amplifiers? Erez Lieberman, in his PhD thesis, found the first amplifier, and the amplifier is a star. The star has a center, and that center is actually hot, because many edges lead into it, and the periphery is cold. If you calculate the fixation probability of a randomly placed mutant on the star, it is the formula here, and if you look at it, it is the same as the old formula but with r replaced by r². That is precisely an amplifier: if my fitness advantage was 2, now it is 4; if my fitness disadvantage was one half, now it is one quarter. It augments advantageous mutants and it reduces disadvantageous mutants, so the star is an amplifier of selection. The question is whether we can find something even better: here we go from r to r², but how far can we get — what kind of structure can we actually find?
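The star result quoted above is usually written, for large N, as the Moran expression with $r$ replaced by $r^{2}$ (the explicit formula is my addition; it is the standard large-population form):

$$\rho_{\text{star}} \approx \frac{1 - 1/r^{2}}{1 - 1/r^{2N}}.$$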
In fact, Lieberman, in his PhD thesis, found a structure that is an arbitrarily strong amplifier of natural selection: it guarantees the fixation of any advantageous mutant, however small the advantage. What was that structure? It was the superstar, and I have absolutely no clue how he actually came up with it. It has a center, it has these leaves here, and it has a minimal loop; the minimal loop drawn here has length three. The length of that minimal loop turns out to be the exponent k. If we want to make the loop longer, we just add more nodes here: add one more and it is four, two more and it is five, and so on; then we also have to add more nodes here, and more leaves — the number of leaves should be the same as the number of those nodes, and that should become very, very large. Then we can make k larger and larger, arbitrarily large, and that is an arbitrarily strong amplifier.

Our Nature paper, published in 2005 I believe, contained the claim that the superstar is an arbitrarily strong amplifier, and it had a proof — and the proof was wrong. The correct proof was only published one or two years ago, by computer scientists, in a paper that runs to a hundred pages. So the statement that the superstar is a strong amplifier is correct, but it is they who gave a correct proof.

The interesting thing is that we had this very weird structure out there and we wanted to find others, and other structures somehow remained elusive for the last 10 years, until very recently. First, my friend Krishnendu Chatterjee, a computer scientist at the Institute of Science and Technology Austria, made the following observation: amplifiers are sensitive to how the mutants arise. A mutant could arise uniformly at random — the cell gets hit by a cosmic ray and is now a mutant — or the mutant could arise during a reproductive event, which is more likely. If the mutant arises during a reproductive event, then the star and the superstar do not amplify; they amplify only for the first kind of seeding, not the second. So we only had amplifiers for one way of seeding mutations and not for the other. The star can be rescued by making a looping star — you add self-loops in the periphery — and that gives an amplifier for mutants that arise during reproductive events, or for any linear combination of the two. But what about arbitrarily strong amplifiers for this kind of seeding?

Only recently we made the very surprising finding that almost any topology can be turned into an arbitrarily strong amplifier by adding weights and self-loops. We can start with almost any population topology, then tune the magnitudes of the links, and thereby get a structure that guarantees the fixation of advantageous mutants. The idea is roughly that you choose the weights such that you have a mainland and a bunch of islands: the mutants arise on the islands, they drift toward the mainland, they take over the mainland, and from the mainland they spread back to the islands. If you do the weights properly, you can really prove that these are arbitrarily strong amplifiers of natural selection.

I like to think these could be useful structures for a laboratory experiment where you definitely want to hold on to advantageous mutants: instead of a well-mixed population you could use something like this and thereby greatly increase the chance — or almost get certainty — that advantageous mutants reach fixation. The drawback is the time: so far, the better the structures are at guaranteeing selection, the worse they are in the time that selection takes. So right now we are running computer simulations looking for structures that are good in a time–fixation trade-off: they need not be arbitrarily strong amplifiers, but the time should not be too large. The idea is that you could actually design population structures that increase the probability that advantageous mutations take over, and that could hopefully be useful.
So now I come to the second part of my talk, which is non-constant selection, frequency-dependent selection — that is, games. We play games; the field is called evolutionary game theory, and mathematically similar structures can be found in ecology and in infection dynamics. Here the success of something depends on what others are doing. Typically we play a game: if we have just two strategies, the game is given by a payoff matrix. Strategy A gets a payoff of little a when meeting another A player and little b when meeting a B player, and these entries are the payoffs for a B player interacting with an A player and with another B player.

I start with something like a Moran process: I have i A-players and N − i B-players. Then the payoff for A is: I get payoff a when I meet another A player — there are i − 1 other A players, because I am an A player myself, and there are N − 1 other players, so I divide by N − 1 — plus b times N − i, the number of B players. The idea is that my payoff arises from pairwise interactions with all others in the population, so fitness is a linear function of the relative abundance of the strategies. These could now be cells, or they could be people in an experiment; this is what we do in evolutionary game theory.

Now I translate the payoff from the game into something like a reproductive value, and I do this with a function that says the reproductive value is 1 plus w times the payoff, where w is a parameter, the intensity of selection. I don't have to use a linear function — I could also use an exponential function or many other functions — but the linear function works, and we use it often, though not always. The interesting observation is that this parameter w scales the intensity of selection. In the traditional, deterministic approach to evolutionary game dynamics that parameter was not noticed, because it cancels out: if you write down the so-called replicator equation of evolutionary game theory and introduce such a parameter, it cancels out immediately — it only changes the speed of the differential equation, not the outcome. So the intensity of selection did not matter in the classical approach to games, but here it plays an important role, and in particular the limit of weak selection is one where many calculations become possible that are not possible otherwise. Weak selection means that the reproductive values are all approximately the same: the game we are playing right now makes only a small contribution. You can also think of it as: we are playing many games, and that game is only one of many. Or you can think of it like this: I am very confused about payoffs, so I more or less pick the people from whom I learn at random — that is another way to get weak selection.

Again we want to calculate the fixation probability of a new mutant. Strategy A is now being generated by mutation, and we want to know whether strategy A takes over or becomes extinct. This is a formula that can be written down for the birth-death process, and we ask: if the fixation probability is greater than 1/N, then selection favors strategy A replacing B.
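In symbols, with $i$ A-players in a population of size $N$, the payoffs described above are (the names $f_i$ and $g_i$ are my shorthand for what is on the slide)

$$f_i = \frac{a(i-1) + b(N-i)}{N-1}, \qquad g_i = \frac{c\,i + d(N-i-1)}{N-1},$$

and the corresponding reproductive values are $1 + w f_i$ and $1 + w g_i$, where $w$ is the intensity of selection.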
This is because a fixation probability equal to 1/N corresponds to neutral drift; if the fixation probability is greater than neutral drift, selection apparently favors that process, and the process being favored is the fixation of the newly introduced strategy. So we want to know when the fixation probability exceeds 1/N. We have defined those f and g payoff values, so in principle we can write down this sum and ask when it exceeds the neutral value, but we do not really get a nice condition unless we are in the limit of weak selection. In the limit of weak selection we can write something down for any N, but for weak selection and large N the condition becomes very interesting, because it is a + 2b > c + 2d, where a, b, c, d are the entries of the payoff matrix. If this holds, then selection favors A replacing B.

The interpretation of this condition is the so-called one-third rule: selection favors A replacing B if the fitness of A is greater than the fitness of B at frequency one-third. This is something very interesting: the fitness of A is a linear function, the fitness of B is a linear function; you look at the frequency of A between zero and one, take the point where the frequency is one-third, and ask which strategy has the higher fitness there. If A has the higher fitness at that point, then the fixation of A is favored. That was very surprising to us, and also to other people, and other people proved that this one-third rule holds for almost any process you can imagine, not just the Moran process — it is very hard to construct an evolutionary process for which it fails under weak selection. Intuitively, we also found the reason for what is happening here: I average over stochastic trajectories that all start with the invasion of A; these trajectories go up, they can come back, and they eventually hit one boundary or the other. Averaging over those stochastic trajectories, I can ask how often my opponent is A and how often my opponent is B, and the answer is: one third of the time my opponent is A and two thirds of the time my opponent is B. That is exactly the condition: if I am an A individual, that is my fitness when I meet others one-third and two-thirds of the time, and likewise if I am a B individual. That is the intuition behind the one-third rule.

That is not the only question we ask. We want to know not only when fixation is favored compared to neutral; we can also ask, in a mutation–selection equilibrium, which strategy is favored: does strategy A or strategy B win in the mutation–selection equilibrium? The statement is that A is more abundant than B if the condition here holds; this was proven by Tibor Antal, who is now a professor in Edinburgh. There is a simple coefficient that depends only on the population size; if the population size is large, that coefficient is just one, and if it is one the condition reads a + b > c + d. That is a famous condition in game theory called risk dominance: if we play a coordination game, where the diagonal values are greater than the off-diagonal values, it says that the basin of attraction of the A equilibrium is larger than that of the other one. In other words, in a coordination game I maximize my expected payoff by playing strategy A if I assume that you play half-half. And this is how the condition scales with population size.
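Written out (these explicit forms are my reconstruction of what is on the slides, not read verbatim from the talk): the weak-selection, large-N condition and its one-third-rule reading are

$$a + 2b > c + 2d \quad\Longleftrightarrow\quad f_A\!\left(\tfrac{1}{3}\right) > f_B\!\left(\tfrac{1}{3}\right),$$

and the finite-N abundance condition for the well-mixed population is, as I recall it,

$$(N-2)\,a + N\,b \;>\; N\,c + (N-2)\,d,$$

which reduces to risk dominance, $a + b > c + d$, as $N \to \infty$.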
The amazing thing is that this result holds for any mutation rate in the mutation–selection equilibrium, for any intensity of selection, and for many processes — it is a result where we do not need weak selection. For many processes, but not for all: it is very difficult to determine for which processes the result holds and for which it does not. But since you need neither low mutation nor weak selection, it is a very general result.

Then we wanted to know how to generalize this result from two strategies to any number of strategies. So we play games with more than two strategies: these can be different cell types, each embodying a different strategy, or something like the rock-paper-scissors game. In general we have n strategies, and for n strategies we can again derive a result; the result is surprisingly simple, but here we do need weak selection, and the result becomes very nice if we also have a large population size. The statement is: for low mutation, strategy k is more abundant than average if this condition holds. These are the entries of my payoff matrix, which is now an n-by-n matrix, and these are precisely the pairwise risk-dominance comparisons that we had before: where before we had a + b − c − d, now I do this pairwise, strategy k against every other strategy i, and I average over those pairwise comparisons; if on average I am risk dominant, I am favored by selection. It is intuitively clear why this is the result, because mutation is low: with n strategies we are on the simplex S_n — the frequencies x_i of all strategies sum to one — but if mutation is low, at any one time at most two strategies are present, so I only need to consider pairwise competition between all possible pairs, and by summing over those pairwise competitions I get the answer. Still, it is amazing that the answer can be read off directly from the payoff matrix: you give me the payoff matrix, and we can immediately say which strategies are favored by selection and which are not, for low mutation.

What about high mutation? High mutation should be easy, and this is the answer. What happens at high mutation: mutation is so high that all strategies are present at the same time, and moreover all strategies sit at the middle point of the simplex, at equal abundance — otherwise mutation is not high enough. So if I am there, I can just ask what my payoff is at that point, averaging over all the strategies, and I compare it to the average payoff of all the strategies: my payoff against all other strategies, equally abundant, minus the average payoff of all strategies against each other. That is the high-mutation condition. The amazing thing about this result, which was also found by Tibor Antal, is that for any mutation rate the condition is a linear combination of the two, and that is really beautiful: the mutation rate enters as population size times mutation rate, as this factor here. We do not intuitively understand why it is linear — there is no a priori reason why it should be — so it is an open question in the field to understand the reason for the linearity in the mutation rate. For weak selection we understand the linearity in the payoff values, but the linearity in the mutation rate is surprising; it is proven, but the intuition behind it is still an open point.
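For concreteness (these formulas are my recollection of the published forms; treat the exact normalizations as an assumption): for an $n \times n$ payoff matrix $[a_{ij}]$, the low- and high-mutation conditions for strategy $k$ being favored, and their combination for intermediate mutation rate $u$ in a population of size $N$, can be written

$$L_k = \frac{1}{n}\sum_{i}\bigl(a_{kk} + a_{ki} - a_{ik} - a_{ii}\bigr) > 0 \quad (\text{low mutation}),$$

$$H_k = \frac{1}{n^{2}}\sum_{i}\sum_{j}\bigl(a_{ki} - a_{ij}\bigr) > 0 \quad (\text{high mutation}),$$

$$L_k + N u\, H_k > 0 \quad (\text{any mutation rate, weak selection, large } N).$$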
So again, for any mutation rate, we can just read off from the payoff matrix whether a strategy is favored or not. But now we want to do the same thing as before and ask: how does population structure affect games? Before, we asked how population structure affects constant selection; now we ask about population structure with frequency-dependent selection — games. "Games" is shorthand for frequency-dependent selection where the fitness functions are linear; if I wanted more than that, nonlinear fitness functions, there would be many, many unanswered questions, but linear is what games typically mean in biology.

The idea is that we are again on a graph, we play the game with our neighbors, we have a payoff matrix, and we have an update rule. The update rule that I want to discuss in the remainder of the talk is so-called death-birth updating; there are others, and the outcome depends on the update rule, but here I focus on this one. A random individual is chosen to update its strategy; it looks at its neighbors, and the neighbors compete for that spot proportional to their payoff. It could be a social setting: say I want to rethink what I am doing, I look at my friends, and I pick something from a friend who is doing well.

The game we play first is a particular one, the so-called donation game, a special prisoner's dilemma. There are two strategies, cooperators and defectors. If two cooperators meet, the payoff is b minus c: I pay a cost and you get a benefit, you pay a cost and I get a benefit, so my payoff is b − c. If I am a cooperator and you are a defector, I only pay the cost and get no benefit; if you are a cooperator and I am a defector, I only get the benefit and pay no cost; and two defectors get zero. That is a game of the prisoner's dilemma type — the special case called the donation game; the prisoner's dilemma can be defined more generally, and we will in any case derive results that hold for any game. But the evolution of cooperation is always an interesting game to study, because in a well-mixed population defectors always win. So by finding population structures where cooperators win, I have found a mechanism for the evolution of cooperation, and that mechanism is: population structure can favor cooperation. You could have cells that pay a cost to help other cells, and if these cells live in certain structures, that behavior can actually be favored by natural selection.

At first we did computer simulations of this setup — on the cycle, on the lattice, on random regular graphs, random graphs, and scale-free networks — and we found that the fixation probability, as a function of the benefit-to-cost ratio, is essentially always increasing, and there is a point where it crosses 1 over the population size. That is the point we wanted to calculate: the critical benefit-to-cost ratio needed for selection to favor cooperation. A postdoc of mine, Hisashi Ohtsuki, was able to prove a very beautiful formula: on a regular graph of degree k, cooperators win over defectors if the simplest possible thing you could imagine holds — benefit over cost greater than k — and this holds under weak selection. At that time spatial games had been explored for more than 20 years; there were observations, but no analytical result in the area.
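In the notation above, the donation game and the graph rule read (the matrix layout is the standard one, with rows giving the focal player's payoff; the finite-degree caveat is the usual one)

$$\begin{pmatrix} b-c & -c \\ b & 0 \end{pmatrix}, \qquad \frac{b}{c} > k \quad (\text{death-birth updating, weak selection, } N \gg k).$$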
This is the first analytical result, and it is the simplest you could possibly imagine. It is not easy to prove, and the reason it holds at all is weak selection: had we not used weak selection, we would not have found this result; this is where the idea of weak selection became powerful, in 2004 and later. So, on a regular graph of degree k — meaning every individual has the same number of friends, k friends — if the benefit-to-cost ratio of my game is greater than k, then natural selection overall favors cooperation. This is interesting because in a well-mixed population natural selection always opposes cooperation: unconditional cooperators do not win; but unconditional cooperators do win here, in the spatial setting.

Weak selection — many people ask us whether we can prove something without assuming it. So we want to study games on graphs without weak selection, and this is a question in the territory of computer scientists; it is also work with Krishnendu Chatterjee. The question is: I give you any graph and any game on it, and your task is to write an algorithm that, for example, tells me whether a strategy has a positive fixation probability, or gives me the fixation probability. We call the first the qualitative question: I give you a graph and a game, and I ask what the complexity is of an algorithm that tells me whether the fixation probability is greater than zero; or I ask what the complexity is of an algorithm that approximates the fixation probability to within a certain bound.

This, as you know, is the universe of computer science — the classes into which problems are categorized. There are problems that can be solved in polynomial time; then we have nondeterministic polynomial time; then #P-complete, roughly problems of counting solutions that can be checked in polynomial time; and then PSPACE-complete, problems that can be solved in polynomial space. The fundamental question of computer science, and one of the biggest problems in mathematics, is precisely whether the class P is the same as the class NP — in other words, for every problem where an algorithm can verify a solution in polynomial time, is there also an algorithm that can generate a solution in polynomial time? If P were equal to NP, the question would be answered in the affirmative, but that is not expected; the expectation is that P is not equal to NP. Therefore many proofs in computer science take this form: they do not say something is impossible, they say that if you had a polynomial-time algorithm for this problem, you would have proven P equal to NP, which is not expected. So many papers in computer science are of that kind: the statement is not "you cannot do that" but "if you could do this in polynomial time, it would imply P equals NP", and we don't expect that.

For games on graphs, if we are really not in the limit of weak selection, you very quickly land in a complexity class like PSPACE-complete, so you do not expect simple mathematical formulas for these quantities; you do not expect formulas that can be evaluated in polynomial time. But there is a question left, and that is the computational complexity of the weak-selection case.
So the computer scientists also asked themselves, of course: what if I give you any graph, but I stay in weak selection — what is the computational complexity of that? And they could not make progress one way or the other: they could not prove it is easy, and they could not prove it is hard. At the same time, some postdocs of mine worked with postdocs of Shing-Tung Yau, and with Yau himself; over two years they had meetings, and they came up with a method to calculate the weak-selection answer for any graph in polynomial time — so they settled it by construction: weak selection, any graph. This is the paper that was published last year in Nature, and Yau is a very eminent mathematician.

This is, approximately, how the algorithm works. You introduce a single mutant in a random place on the graph, with no further mutation, and you consider the time until the process results in either fixation or extinction: the expected time until, starting from a single mutant, either everything goes back to the original wild type or the mutant takes over. That is the time, capital T. Interestingly, that is not a time we know how to calculate, but luckily it cancels out of the formula we need in the end; we only have to formulate the concept. Next we have the times τ_ij, the coalescence times: the meeting time of random walks starting at i and j on the graph. You start at two positions i and j, perform random walks on the graph with whatever weights the graph has, and ask how long it takes until these random walks meet; that is also called the coalescence time. Why, intuitively, do we need this in biology? Because it is a measure of how likely it is that those two individuals carry the same strategy: you go back in time and ask how long until they derive from the same common ancestor. Therefore T − τ_ij is the expected time during which i and j carry the same strategy. Then we have the quantities t_k, which are related to the τ_ij: t_k is the meeting time of two random walks that start k steps away from each other, so we average over the pairs i, j that are k steps apart. All of these meeting times of random walks on graphs can be calculated by solving a linear system of equations, so that is polynomial time.
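The linear system referred to above is the standard one for coalescing random walks; I am writing it out here, since it is not spelled out in the talk. With step probabilities $p_{ij} = w_{ij} / \sum_k w_{ik}$, the pairwise coalescence times satisfy

$$\tau_{ii} = 0, \qquad \tau_{ij} = 1 + \tfrac{1}{2}\sum_{k}\bigl(p_{ik}\,\tau_{kj} + p_{jk}\,\tau_{ik}\bigr) \quad (i \neq j),$$

one linear equation per pair of vertices, solvable in polynomial time.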
Intuitively, the condition is the following. We have an individual that updates its strategy, and we want to know the chance that a cooperator here wins against an opponent who is also bidding for that individual. What is the payoff of that cooperator? It pays −c whenever it is a cooperator, and it is a cooperator for the duration of the process minus the time during which an individual does not carry the same strategy as itself, which is zero; so, being a cooperator, it pays the cost for the whole duration of the process. Then we ask what its benefit is: the benefit arises from its one-step neighbors also being cooperators, so it is the expected time during which one-step neighbors of a cooperator are themselves cooperators. That is the payoff of this cooperator. In order to win the site, it has to out-compete something that is also bidding for that site. What is the payoff of that competitor? If it is a cooperator, it pays the cost c for the duration T minus τ₂, where τ₂ refers to the time during which a two-step neighbor of a cooperator is a cooperator; and its benefit comes from the three-step neighbors, because its own one-step neighbors are the three-step neighbors of the focal individual, weighted by the probability that they are cooperators. So you have that formula, and if you look at it, the capital T cancels out and you get a simple condition: the critical benefit-to-cost ratio is given by the meeting times of random walks that start one, two, and three steps away from each other.

You can now use this to calculate many examples of games on graphs that we could not calculate before, and interesting things can be observed. Here is a star, and a star does not support the evolution of cooperation: its critical benefit-to-cost ratio is infinite. But link two stars via their hubs and you get a critical benefit-to-cost ratio of 5/2 (all of this is for large population size); link them via leaves and you get a ratio of 3. So by introducing very simple bridges you can make very cooperative societies. Ben Allen, who is the lead author here, was very proud to be able to calculate this ceiling fan, which has a critical benefit-to-cost ratio of 8, and he also calculated the wheel of dharma, whose critical ratio is the root of a quadratic equation. By then we had already studied millions of graphs and had never found one whose critical benefit-to-cost ratio was less than the average degree, so that was a conjecture: the critical benefit-to-cost ratio is always at least the average degree. But after checking a million, or several million, graphs we found this counterexample: its critical benefit-to-cost ratio is 3/2 = 1.5, but the average degree is 2.
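To make the recipe concrete, here is a small computational sketch of my reading of the published algorithm: build random-walk step probabilities from a symmetric weight matrix, solve the coalescence-time system given earlier, and evaluate the critical ratio as t₂/(t₃ − t₁), where tₙ averages the coalescence time between a vertex drawn from the stationary distribution and the endpoint of an n-step walk from it. The function name and the example graph are mine, and the exact conventions should be treated as assumptions rather than the paper's verbatim procedure.

```python
import numpy as np

def critical_bc(W):
    """Critical benefit-to-cost ratio for death-birth updating on a graph with
    symmetric weight matrix W, via coalescence times of random walks
    (a sketch of the t2 / (t3 - t1) recipe described above)."""
    W = np.asarray(W, dtype=float)
    N = W.shape[0]
    w = W.sum(axis=1)                 # weighted degree of each vertex
    P = W / w[:, None]                # one-step random-walk probabilities
    pi = w / w.sum()                  # stationary distribution of the walk

    # Coalescence times: tau_ii = 0 and, for i != j,
    # tau_ij = 1 + (1/2) * sum_k (p_ik * tau_kj + p_jk * tau_ik).
    A = np.zeros((N * N, N * N))
    rhs = np.zeros(N * N)
    idx = lambda i, j: i * N + j
    for i in range(N):
        for j in range(N):
            if i == j:
                A[idx(i, j), idx(i, j)] = 1.0      # forces tau_ii = 0
                continue
            A[idx(i, j), idx(i, j)] += 1.0
            for k in range(N):
                A[idx(i, j), idx(k, j)] -= 0.5 * P[i, k]
                A[idx(i, j), idx(i, k)] -= 0.5 * P[j, k]
            rhs[idx(i, j)] = 1.0
    tau = np.linalg.solve(A, rhs).reshape(N, N)

    # Remeeting times t_n: coalescence time between a pi-random vertex and
    # the endpoint of an n-step random walk started from it.
    def t(n):
        return float(np.einsum('i,ij,ij->', pi, np.linalg.matrix_power(P, n), tau))

    return t(2) / (t(3) - t(1))       # a negative value signals the spite regime

# Example: an unweighted cycle of length 10 (degree k = 2); for large cycles
# the critical ratio approaches the degree k.
N = 10
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
print(critical_bc(W))
```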
Then we asked ourselves — because we want to be good citizens — what is the best society we can build, now that we know how the structure of society can promote cooperation? What is the ideal society, the one that maximally boosts cooperation in this game? It is actually a society based on strong pairwise relationships: you start with any graph, you make very strong pairwise partnerships, and that boosts cooperation arbitrarily strongly. If we calculate other things, we can handle many families of random-graph models. And then Babak, a postdoc of mine, together with a colleague — they have an artistic side to them — wanted to do all graphs of small size, so they made this painting of all graphs of size four. This is the line here, which has a critical benefit-to-cost ratio of four; that is achievable. These have a critical benefit-to-cost ratio that is infinite, so you never favor cooperation on those structures. And down here it is even worse: not only do you not favor cooperation, you favor spite — natural selection would favor a behavior where you pay a cost to harm somebody else. So this is kind of a hell situation here: this region favors spite, and this one is a promoter of cooperation. Those are all graphs of size four. What about all graphs of size five? Down here you have the hill of spite, and here you have this array ordered by the critical benefit-to-cost ratio: the best is the line, then this branched graph, then the cycle, and so on, until the critical benefit-to-cost ratio becomes quite large and eventually infinite. Then all graphs of size six — and then he ran out of patience — but for size six the best is again the line, followed by other structures. One thing we have already explored: the line is the best of all graphs up to size 26, but beyond that it is no longer the best; the best structure starts to branch. So there are amusing things that can be done here.

Of course, we did not only solve the evolution of cooperation; we solve any game — once you have the evolution of cooperation, you have solved any game, as I will show in a moment. For any two-by-two game the payoff matrix looks like this, and the relevant condition can be written like this. If you look at those coefficients, the whole thing must make sense: strategy A, for example, must be favored by making little a bigger, so based on an understanding of the game this coefficient must be positive, and that one must be positive; but to prove that on a graph took quite some work. It is now also proven that these coefficients are really positive, but based on the intuition of the game it has to be like that. You can then divide by this quantity here, and you get a factor that we call the sigma factor. This relates to a theorem: for two strategies and any population structure, the condition for A being more abundant than B under weak selection and any mutation rate can always be written in that form, where sigma is a single number that needs to be calculated. That theorem is based on the symmetry of the problem; the answer always has to have that shape. A spatial process under weak selection does the following: it augments the importance of the diagonal entries of the payoff matrix — it becomes more important how you play against your own type than how you play against the other type — and the more important that is, the more strongly the structure promotes cooperation.
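The structure-coefficient form referred to above is, written out (the inequality is the standard statement of the theorem; the payoff labels follow the matrix used earlier): strategy A is more abundant than B under weak selection if

$$\sigma a + b > c + \sigma d,$$

where the single number $\sigma$ depends on the population structure, the update rule, the population size, and the mutation rate, but not on the payoff entries; $\sigma > 1$ means the diagonal (own-type) payoffs carry extra weight.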
So the whole problem in solving a spatial model is calculating sigma, but we always know that there is a sigma, and the critical benefit-to-cost ratio can immediately be rewritten in terms of sigma; there is always that relationship. This holds for two strategies, any population structure, any mutation rate, and weak selection. Then a student in my class asked about n strategies and any population structure — that is also a paper with Corina Tarnita — and here the surprising answer is that with n strategies there are still only two sigma values, for any number of strategies: for two strategies you have one sigma value, and for n strategies you have two sigma values, but not more. This is closely related to what I showed you earlier, Tibor's result for the well-mixed population with n strategies and any mutation rate: in some sense one term plays the role of the low-mutation term and the other of the high-mutation term, but now it holds in a structured population. The really cool thing is that we know this is the shape of the answer, but there are very few examples where we can actually calculate those sigma values.

I had the privilege to work with many brilliant people over the years. Thank you very much. [Applause]
Info
Channel: Broad Institute
Views: 6,439
Keywords: Broad Institute, Broad, Science, Institute, of, MIT, and, Harvard
Id: PuECCmT1Ioo
Length: 54min 21sec (3261 seconds)
Published: Wed May 02 2018