Algorithms to Live By | Brian Christian

[Music] Good evening, I'm Alexander Rose, the executive director here at Long Now. It's great to see a sellout crowd for algorithms. I've actually known Brian for quite a while; he's been coming to these talks for years, mostly at the behest of Rose, his now fiancée (congratulations). Apparently it was part of her wooing strategy, she just told me; four years of Long Now talks seemed to do it, so if anybody else in the audience is looking for advice... A couple of announcements tonight. One is that, as you might have heard at the last talk we did, we have released all of the Long Now talks, about 15 years of them now, without a password and in standard definition on our website, and we have released an iOS app, an iPad app, and an Apple TV app that you can use to watch them. Not all of them are up there yet, we're still digitizing that format, but about 50 or 60 are up in that format now, so you can now watch sitting on your couch. The other announcement is that June 6th was Long Now's 20th anniversary, and for all this time we have been speaking at you. What we want to do is something a little bit different, which is to have, on October 4th, here in San Francisco at Fort Mason, in about three different venues, a member summit where you can all come, and almost all the programming is going to be you speaking to each other. We're working with Brady Forrest, the founder of Ignite, to do a whole Ignite series, so we welcome submissions from all of you members of five-minute talk subjects, and we're going to curate 20 or 30 of those five-minute talks, as well as a bunch of other content like that. I think Andrew is putting together a film festival of the long shorts we've been doing over the years, and more, so we'd love to get submissions for that. There are going to be a lot of ways for all of you to connect with each other, because we've always known that we have an amazing membership and
we've always been trying to figure out the right ways to connect you all, so this will be our attempt. We'll do it once every 20 years, and we'll see how it works out. So anyway, save that date, October 4th; there will be both daytime programming and evening programming, going all the way from midday to late that night. And lastly, I want to introduce our long short for tonight. Sometimes it's really difficult to come up with what the long short is going to be; this particular one really chose itself. It's a poem by our speaker that was made into film form, about the subject that you're going to hear, so enjoy.

In America, red-eye planes fly east, from Los Angeles to New York, between sunset and sunrise, collapsing the night. The flights west, from New York to Los Angeles, predominate during the day, stretching it open. The mass of the American air fleet leaps at the sun: west as the sun heads west, and east as the sun, beneath them and over Asia, resets to east again. Northern birds slosh down from the pole to the equator in the late months, while southern birds are sloshing from the equator down to the South Pole for their spring. We humans have made, with all our fires and all our fuels, the longitudinal version. Looking at the Earth from above, centering over the North Pole, watch night and day sweep around (see figure one), the planes winging out and scattering around the rim. Now fix the line of dark against light steady, and let the land and water circle beneath it; watch (figure two) the planes in their continual flow to sunrise from sunset, like two hands cupping the earth from her sides. A friend of mine makes a hundred grand a year optimizing the algorithms that arrange flight plans. In contrast, Helianthus annuus doesn't know it twists its florets in the Fibonacci sequence. Our economy bristles with efficiency, with individual wills building and buying, collaborating and competing by the millions; but from the long view it is just as elegant, a field of sunflower
buds craning for light.

[Music] Jet planes always sound like the sky inhaling. It's a special night tonight: a full moon coincident with the summer solstice, which hasn't happened for, I think, 70 years or so; in the Long Now, they're quite frequent. I'm Stewart Brand for the Long Now Foundation. The next speaker is Kevin Kelly, and he'll be over here at the Herbst, the first time in a while since they reopened it; he'll be talking about the next 30 digital years. He's been saying for a while that he thinks computers are in the process of teaching us what it means to be human, and lately I've come across some of the people who are working on the algorithms that will accompany self-driving cars, and I suspect the computers are going to teach us how to be moral with great precision. And indeed, it turns out computers are also teaching us how to be smart, and how to decide well, with great precision. For you life hackers, the guy with the golden deck of advice is Brian Christian. [Applause]

Thank you, Stewart, and thank you, Alexander, and thanks to the Long Now Foundation. [Music] The example that I would like to begin with is one that is particularly, and I might say painfully, familiar to those of us who live in the Bay Area, which is the search for housing. Now, we typically have this idea that being a rational consumer means having an array of options, considering all of them, ruminating about the one that you like the best, and then selecting that. In a sufficiently competitive marketplace, that's simply not possible. As anyone who has looked for an apartment in the Bay Area knows, open houses are mobbed, and the keys often end up in the hands of whoever can physically foist the deposit check on the landlord first. Such a savage market leaves little room for the kind of fact-finding and deliberation that's supposed to characterize this kind of process. Instead, what we're left with is a situation in which we must make a binding commitment either
way as soon as we see the place: either we put our deposit in and we take it, and we never know what else is out there, or we walk away to gather more information, knowing that that opportunity is gone and someone else has gotten it. So what do you do? How do you try to make an informed decision when the very act of informing yourself might cost you your very best opportunity? It is a cruel situation, bordering almost on paradox. Fortunately, there's an answer, and the answer is 37 percent. If you want the very best odds of getting the very best apartment, spend exactly 37 percent of your search, or 1/e for the more mathematically inclined in the audience (I'm glad that gets a clap, that's good), noncommittally. So if you've given yourself a month to find a place, spend the first 11 days noncommittally exploring your options. You can leave your checkbook at home; you are just purely calibrating. After that point, be prepared to commit, deposit and all, to the very first place you see that is better than what you saw in the first 37 percent. This is not merely an intuitively satisfying compromise between looking and leaping; this is the provably optimal result. And we know this because apartment hunting, especially in a competitive marketplace like San Francisco, is an example of what computer scientists and mathematicians know as an optimal stopping problem. An optimal stopping problem describes any situation in life where we face this structure of a decision: we have a sequence of opportunities that come one after another, and at each point we either have to commit and go all-in, or walk away and lose that opportunity forever. Some people have argued that this structure not only describes the search for real estate but also our search for love, where, when you're in a relationship with someone, you at some point face the decision of: am I going all-in, or am I walking away? And if
you walk away, you potentially forfeit the ability to change your mind later. So you might be so bold as to say, well, I'm just going to apply the 37 percent rule directly to my romantic life: define an interval over which you want to find the right person, calculate 1/e, or 37 percent, of that range, and know that this is the exact point at which to make the shift from your dating life being just for fun to things starting to get serious. Of course, it all depends on the assumptions you're willing to make about love. The thesis of the work that I'm going to talk about tonight, and of this book, which is a collaboration with my longtime friend and UC Berkeley cognitive scientist Tom Griffiths, is very simple. There's a set of problems that all of us face in everyday life as a function of having finite space, finite time, and finite information, whether it's looking for a house or a partner, deciding where to go out to eat, deciding how to deal with our overflowing closet, or how to manage our time. We think of these things as being innately and uniquely human problems. They're not. They in fact correspond to some of the most fundamental problems studied by computer scientists over the last 50 or 60 years or more, and this gives us a real opportunity to learn something about how to make better decisions in our own lives by looking at what those problems look like and the characteristics of their solutions. The book follows this line of thinking into twelve different domains; we're going to talk about two of them tonight in greater detail, and I'll give a thumbnail overview of the rest. The first is optimal stopping problems. Now, the 37 percent rule comes from perhaps the most famous optimal stopping problem, which is known as the secretary problem. It was first popularized by Martin Gardner in his column in Scientific American in 1960, and true to 1960 style, it has this very Mad Men flavor: you, the implicit male second person, are hiring an implicitly female secretary.
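The 37 percent rule described above lends itself to a quick numerical check. The talk gives only the rule itself; the sketch below, with a hypothetical `search` helper of my own, estimates the success probability of the look-then-leap strategy by simulation and shows it peaking near a cutoff of n/e:

```python
import random

def search(n, cutoff, trials=20000, seed=0):
    """Estimate the probability that the look-then-leap strategy picks
    the single best of n candidates seen in random order: reject the
    first `cutoff` outright, then take the first candidate better than
    everything seen so far (or the last one, if none is better)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = rng.sample(range(n), n)      # lower rank = better; 0 is best
        best_seen = min(ranks[:cutoff])
        choice = ranks[-1]                   # fallback: stuck with the last one
        for r in ranks[cutoff:]:
            if r < best_seen:
                choice = r
                break
        wins += (choice == 0)
    return wins / trials

n = 100
# Success probability peaks near a cutoff of n/e, i.e. about 37 candidates,
# where it approaches the theoretical maximum of about 37 percent.
p37 = search(n, 37)
```

Cutoffs well below or above 37 (committing too early, or looking too long) both do noticeably worse in the same simulation, which is the sense in which 37 percent is optimal rather than merely a compromise.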
Candidates show up in random order; at each moment you interview one, and you must either hire her and send everyone else packing, or send her away and interview the next candidate, but she never comes back. So it was popularized, and entered the American mathematical imagination, in the sixties, but the problem itself goes back to 1949, which saw the first known public presentation of the problem, by this guy Merrill Flood. And it seems that Flood had its romantic implications in mind even from the very beginning. You see, Flood was giving a presentation at a mathematics conference at Princeton, and the minutes of this conference were being taken by Flood's daughter, who was 18 years old at the time and had just started dating a much older man. Flood and his wife strongly disapproved, and they hoped that this relationship would end quickly. And so, to implicitly send the message, Flood presented the secretary problem, many years before the solution was known, and he hoped that his daughter, who was taking the minutes, would get the hint that the solution was probably more than one. Fortunately, the relationship was indeed short-lived. The book tells some cautionary tales of scientists and mathematicians applying this sort of logic directly to their love lives, with, I think it's fair to say, mixed results. One of my favorite stories is that of Carnegie Mellon professor of operations research Michael Trick, who, as a graduate student, was in a relationship, asking himself this question of how do I know if I'm in the right relationship, when all of a sudden it dawned on him: oh my god, dating, of course, it's an optimal stopping problem. And here I am, an operations researcher; I'm just going to run the numbers. So he ran the numbers, and he found 26.1 as the age after which to commit to the next person you meet who's better than all the people you've already dated. He was in a relationship at the time, and so he knew exactly what to do: he
proposed on the spot, and she shot him down. Now, Trick faced what mathematicians who study this problem know as rejection, and as it turns out, you can make a simple modification to the 37 percent rule to account for the possibility that the person might say no. If, for example, there is a 50 percent chance of getting rejected, then the stopping threshold moves to 25 percent: you should be willing to make an offer only 25 percent of the way into the pool, and if it doesn't work, just try again. So the mathematical wisdom here is: propose early and often. The other side of the coin is represented, I think, best by the famous astronomer Johannes Kepler, who, after the death of his first wife, went on an epic and seemingly endless series of courtships to try to find the perfect person to marry. We have access to Kepler's diary and his letters, which are staggeringly candid about all of this. He was really interested in the fourth woman he dated, because of her tall build and athletic body, but he was even more interested in number five, because she got along really well with her prospective stepchildren. But nonetheless he persevered, and continued dating over many years, ultimately a total of eleven different women, before having one of those sinking realizations of: I've made a huge mistake, it was number five all along, what was I thinking? So he hops a train to Regensburg, gets down on one knee, apologizes for dating half a dozen other people, and asks her: hopefully you're not promised to someone else at this point; maybe you'll take me back. And fortunately for Kepler, she does, and the rest of their lives are very happy together. Now, Kepler faced what's known in the field as recall, which is the ability to return to a candidate once you have dismissed them and still have a chance of getting them back. In his letters, Kepler really beats himself up for continuing to date people after he met this amazing woman, number
five, and he decries what he calls his restlessness and doubtfulness: why did I continue with this futile search and leave this amazing person behind? Well, it may give Kepler some peace of mind, if not his second wife, that when you have the ability to recall past candidates, restlessness and doubtfulness are indeed part of the optimal solution: you should not make any offer until you're at least 61 percent of the way through the pool, and only then, if you fail to meet anyone better than someone from the first 61 percent, do you hop on the train to Regensburg and try to make it right. Now, optimal stopping covers a lot more problems than simply the dating and housing scenarios, and one of the others, which is also, I would say, acutely and painfully familiar to San Franciscans, is the question of when to park. We've all had this experience: you're approaching some venue, you're trying to hold out for the best space, you see an opening, and you think to yourself, there might be a better spot, but if I keep going, the next person behind me is surely going to take it, and even if I swing back around, it's gone. We include this figure in the book; you can cut it out and put it on your dashboard. To an optimal stopping theorist, it all depends on what's called the occupancy rate, which is what percentage of the spaces in that particular neighborhood are filled, and depending on the occupancy rate, there is a specific number of spots away at which you should start taking the next available spot. So in a neighborhood that's 90 percent full, hold out until you are seven spaces away, which is about one block. If it's 99 percent full, you should be willing to take the first available space 69 spaces away, which is about a quarter of a mile. If it's 99.99 percent full, I would not drive at all.
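The specific figures quoted here (7 spaces at 90 percent occupancy, 69 spaces at 99 percent) are consistent with one simple reading: hold out until the chance of finding at least one open space in the remaining stretch drops to one half. That reconstruction is my assumption, not a formula stated in the talk, but it does reproduce both of the quoted numbers:

```python
import math

def holdout_distance(occupancy):
    """Distance, in spaces, from the destination at which to start taking
    the first available spot, assuming each space is independently filled
    with probability `occupancy`, and we hold out until the chance of at
    least one open space in the remaining stretch falls to 1/2.  This is
    a reconstruction consistent with the figures quoted in the talk."""
    # Smallest n with 1 - occupancy**n >= 1/2, i.e. occupancy**n <= 1/2.
    return math.ceil(math.log(0.5) / math.log(occupancy))

# 90% full -> 7 spaces (about a block)
# 99% full -> 69 spaces (about a quarter mile)
# 99.99% full -> several thousand spaces: better not to drive at all
```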
And actually this is an interesting case, where looking at the optimal strategies for drivers not only gives us a way to make better decisions when we're behind the wheel, but also gives us some insight into the urban planning problem of parking. The traditional way of thinking about parking was that it's simply a question of allocation: we have this resource, public spaces, and we want to allocate them to the maximum number of cars. But if you look here, going from a 90 to a 99 percent occupancy rate only accommodates nine percent more cars, while it involves every single driver driving ten times as far to find a space and walking ten times as far to the destination. So more contemporary urban planners, inspired by some of these ideas, have begun working with city governments to push for lower occupancy rates in downtown urban areas, which means raising prices. One of the parking gurus behind these ideas is Donald Shoup at UCLA, and one of the first instances of this is SFpark here in San Francisco. When I asked Donald Shoup, arguably the world's expert on how to park, what his own research had taught him about how to optimize his commute to UCLA, he said: yes, I ride my bike. One of the things I think is really interesting, in any domain, is this irresistible question: do people innately implement the optimal strategies, or do we do dumb things? When researchers have tested optimal stopping problems in the lab, they find something very consistent: where the optimal rule in a classic secretary problem is 37 percent, lab subjects implement a 31 percent rule. That is, they have mostly the right idea, but they consistently start to commit too soon. Now, there are any number of explanations you could think about; maybe we're loss averse, that's a classic one. I think one of the most interesting nuances here is that someone looked at the data and said, well, the stopping rule that humans are using is optimal if we were to assign a one percent penalty
to their utility for each option they look at. Their fellow researchers said, okay, that's weird, because there is no penalty for taking more time; and then of course they realized: of course there's a penalty for taking more time. These are laboratory subjects doing an extremely repetitive and boring experiment, and there is always a penalty for taking up more of someone's time. This, to me, points to one of the more profound ideas about optimal stopping problems in general, which is that when time itself has a cost, you need to find a point in any decision-making process when it's not worth it to continue thinking, to continue gathering information. And given that we are all, intrinsically as human beings, subject to this cost of time passing, every decision-making process becomes a question of when to stop. Now, optimal stopping covers some of the largest decisions we make in life, whether it's housing, spouses, jobs, these sorts of things. But one of the most prevalent decisions we make is one that's iterated over and over in the course of a day, perhaps so many times that we don't even realize it, and specifically it's a decision that takes the form of a tension between doing our favorite things and trying new things. You want to go out to eat: do you go to your favorite restaurant, or do you try a new place that just opened up and has gotten some good reviews? On the way there, do you listen to a classic, cherished album, or do you try to discover some new music you might enjoy? And who do you take to dinner? Do you go with your spouse, your close family, your closest circle of friends, or do you reach out to the co-worker who is new to the office, or an acquaintance you'd like to get to know better? This is a case where we intuitively understand that a life well lived is some kind of balance between doing the things we know and love while staying open to new possibilities. Of course,
that intuition does not tell us what that balance should be. Fortunately, computer scientists have been trying to find this exact balance for almost a century now, and they even have a name for it: they call it the explore/exploit trade-off. In this context, exploring means spending your energy gathering new information; exploiting, which to a computer scientist doesn't have the negative connotation it does in ordinary English, just means spending your energy leveraging the information you've gathered to get a known good result. This comes up in computer science notably in the optimization of ads. Here I perhaps do not have to explain this to this particular crowd: we have Google search results with some ads highlighted in red. Google makes, I think, something like 95 percent of its revenue from selling ad space, and for any given search keyword there is a huge pool of ads. Specifically, there is the ad, or set of ads, that has the best historical track record of getting clicks, and there is any number of other ads that they simply don't have as much information about, either because they haven't tried them as much or because they're new to the system. So here, this tension between going with the known good thing and trying something new becomes extremely explicit and quantified. You can ask, literally: what percentage of users should see the ad that has the best track record for getting clicks, and what percentage of users should see new things that we want to get more information about, that could be better (they're probably not, but maybe)? The ideas and strategies that have been honed over many years of working on problems like this within the computer science community and the tech world are just now starting to make their way into the field of medicine. So here, to illustrate the idea of clinical trials, we have Morpheus. If you think about it, a clinical trial has much the same structure, where for any
particular diagnosis or condition there is some known best treatment, and there is any number of other, experimental new treatments that could be better and could be worse. And so, as I will explain, some of the key concepts from the computer science work are now making their way into the FDA. How does a computer scientist think about the explore/exploit trade-off? The canonical problem in the literature is called the multi-armed bandit problem. It's a pretty strange name, but it comes from the idea of the slot machine as the "one-armed bandit"; a multi-armed bandit is just a roomful of slot machines. So imagine you walk into a casino full of these slot machines. They each pay out at a different frequency; some machines are better than others, but you have no idea which, of course, until you try them. Quite simply, let's say you're going to be there for the afternoon, and you have enough time to pull 100 levers: what strategy gives you the best chance of walking away with the most money? Well, for most of the 20th century this was considered not only an unsolved problem but an unsolvable problem, and during World War II, British mathematicians joked about dropping the multi-armed bandit problem over Germany as the ultimate instrument of intellectual sabotage, just to waste the brainpower of the German mathematicians. To make what makes this quite such a thorny problem a little more explicit, let's imagine you walk into a casino that has just two machines. One you've played fifteen times: nine times it paid out, six times it did not. The other machine you've tried just twice: once it paid out, once it did not. So what do you do next? Which handle do you pull? Now, the most straightforward way of addressing this problem is to compute what's called the expected value, which is just what percentage of the time each machine has paid out: in the first case 60 percent, in the second case 50 percent.
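The expected-value arithmetic here is easy to sketch, and one way to see why the one-and-one machine shouldn't be written off (this framing is my addition; the talk doesn't go this far) is to look at how uncertain its estimate still is:

```python
def expected_value(wins, losses):
    """Observed payout rate of a machine: fraction of pulls that paid."""
    return wins / (wins + losses)

ev_a = expected_value(9, 6)   # 15 pulls, 9 payouts -> 0.6
ev_b = expected_value(1, 1)   # 2 pulls, 1 payout  -> 0.5

def prob_rate_exceeds(wins, losses, threshold, steps=100000):
    """Under a uniform prior, numerically integrate the Beta(wins+1,
    losses+1) posterior to get the chance the machine's *true* payout
    rate exceeds `threshold`.  (An assumed, standard way to quantify the
    uncertainty; not a calculation from the talk.)"""
    total = above = 0.0
    for i in range(1, steps):
        p = i / steps
        density = p ** wins * (1 - p) ** losses
        total += density
        if p > threshold:
            above += density
    return above / total

# After one win and one loss, the chance that machine B's true rate
# actually beats A's observed 60 percent is still about 35 percent.
p_b_better = prob_rate_exceeds(1, 1, 0.6)
```

So the raw expected values favor the first machine, but two pulls simply haven't told us much about the second one, which is exactly the tension the talk describes.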
And so the natural thing is just to say, well, okay, I'll just pull the handle with the best track record. But again, there's a sense in which, with this one-and-one machine, we just don't have enough information to write it off for good. And again, this was considered basically in the category of brain teaser slash career suicide, because it just wasn't considered the kind of thing that had an answer. But over the years, starting with this guy Richard Bellman in the late fifties, and continuing with John Gittins in the seventies, and so forth up till today, there have been a series of substantial breakthroughs on the problem. I think the critical thing about this question of which of the two machines to pull is that it all comes down to something we haven't explicitly talked about yet, which is how long you plan to be in the casino. Tom and I call this concept in the book, in a way that will be familiar to Long Now members, the interval. A way to think about it: imagine you've moved to Spain for a year for work or something. The first restaurant you go to on your very first night in Spain is literally guaranteed to be the greatest restaurant you have ever been to in Spain. The second night, you say, this is amazing, I'm going to go to another place; it has a 50 percent chance of being the greatest restaurant you've ever been to in Spain. Night three, you're down to one in three. That's still pretty good, but of course this goes down as your experience allows you to set a higher bar, and so the odds of making a great new discovery, one that dethrones your pre-existing favorite, go down as a function of your experience. The other key thing to note is that the value of making a great new discovery also goes down as you run out of time to enjoy it. If you happen to find this incredible, charming place on your final night in Spain, it actually is kind of tragic, because you think to yourself, well, geez, I wish I'd known about this eight or nine
months ago; that would have been great. And so, naturally, our decision-making should shift as a function of where we perceive ourselves to be within the relevant interval of time over which we're making the decision. Thinking in these terms has, for me, completely changed the way I think about one of my favorite movies, the inspirational 1989 classic Dead Poets Society, in which Robin Williams, as poetry teacher John Keating, delivers these inspirational soliloquies: seize the day, boys; make your lives extraordinary. Armed with the knowledge of the explore/exploit trade-off, we should cry foul here, because Robin Williams is in fact giving two contradictory pieces of advice. If we just want to seize the day, we should pull the machine with the higher expected value; but if we want to make our lives extraordinary, then surely it's worth pulling the handle of that one-and-one machine at least one more time, because if it is in fact better, we have the rest of our lives to enjoy it, and if it's not, we have the rest of our lives to do the other thing. Some of these ideas have begun moving out of the field of computer science and starting to influence some of its sibling, kindred fields, and for me one of the most interesting is psychology and cognitive science, where the idea that our strategy should change relative to the interval we're on is influencing how developmental psychologists and cognitive scientists think about both the early years and the later years of our lives. To give an example of the early years, this is an infant plugging a power cable into its face. We have a lot of stereotypes about babies and young kids in general: they're kind of random, they're generally bad at things, they have a really short attention span, and they have a really aggressive "novelty bias"; there's a whole literature on how they just relentlessly prefer new things to things they already have, no matter how great their
existing toys are. And so it's tempting to just think of them as inept versions of adults. But psychologists, including Alison Gopnik here at UC Berkeley, are appealing to some of the ideas in the literature on the explore/exploit trade-off to make the argument that, no, being random and aggressively preferring new things is exactly what you should do when you're at the beginning of your entire life. If you've just burst through the casino doors and you've got 80 years to be there, you really should just run around pulling handles at random; you really should put every single object in your house into your mouth at least once, because it may be delicious, and you'll have 80 years to enjoy it. The explore/exploit trade-off also, I think, offers a pretty tantalizing way of thinking about one of the other strange things about the human species, which is that human infants and human children are kind of uniquely useless compared to other species. One of my favorite facts on this is that a gazelle, three hours after being born, can outrun a cheetah and escape being eaten with 95 percent success. Humans are not like this; we're basically useless for the first 20 years, and we aren't allowed to operate heavy machinery. So what's up with this? Well, the explore/exploit trade-off offers us at least one, I think, pretty provocative way of making sense of it, and it's that when your room and board are being taken care of by your parents, you are free to have a purely exploratory beginning of your life in a way that the gazelle is not. When the moms and dads of the world are buying your lunch, you are not dependent on those early jackpots just to stay alive, and so you can enter into a more purely exploratory phase, which is probably exactly what you should be doing at the very beginning of your life. Now, at the other end of life we have people in their later years, older adults, and we likewise have a set of preconceived
ideas and biases about what the lives of older adults are like. We think that they're very set in their ways, very resistant to change or to new ideas; there's a lot of psychological literature on the idea that they maintain fewer and fewer social connections, and so it can be tempting to regard this as, oh, it must just be lonely, or something like that. But again, drawing on the intuitions of the explore/exploit trade-off, researchers like Stanford's Laura Carstensen are making the argument that no, that's not what's going on at all: older adults are deliberately and aggressively pruning their social lives down to the people who actually matter, not dealing with their flaky acquaintances anymore, because, you know, who cares. They have entered the exploit phase of life, where they are deliberately focusing on the things that really matter most, and they are happier for it. Now, the algorithms that get computer scientists excited are what are called minimal-regret algorithms. In the multi-armed bandit problem, you can quantify regret as the amount of money you could have made if only you had known at the beginning what you knew at the end, which I think is a pretty satisfying way of quantifying the concept of regret. There's a family of algorithms that offer what is called minimal regret, and specifically they offer logarithmic regret, and there's good news and bad news; I'll give you the bad news first. The bad news is that you will never stop making mistakes, no matter how well you learn and how optimal your strategy is. The good news is that your mistakes will decrease in severity and frequency over time, continually, as you go through life, and I think that's something we can all feel pretty good about. Now, the FDA has been looking over the disciplinary fence at some of these minimal-regret algorithms for ways of rethinking its approach to clinical trials.
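The talk names the property, logarithmic regret, without naming an algorithm. A standard member of that family is UCB1, sketched below purely as an illustration of the idea (the book and talk name the regret guarantee, not this particular algorithm):

```python
import math
import random

def ucb1(payout_probs, pulls, seed=0):
    """Play a multi-armed bandit with the UCB1 rule: always pull the arm
    with the highest average payout plus an uncertainty bonus that
    shrinks as an arm is tried more often.  UCB1's expected regret, the
    money left on the table versus always playing the best arm, grows
    only logarithmically in the number of pulls."""
    rng = random.Random(seed)
    n_arms = len(payout_probs)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    reward = 0.0
    for t in range(pulls):
        if t < n_arms:
            arm = t                          # try every arm once first
        else:
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        payout = 1.0 if rng.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payout
        reward += payout
    regret = max(payout_probs) * pulls - reward
    return reward, regret, counts

# Three hypothetical machines; over 5,000 pulls the play concentrates on
# the better arms while the clearly worst arm is pulled least.
reward, regret, counts = ucb1([0.6, 0.5, 0.3], pulls=5000)
```

True to the good-news/bad-news framing above, the regret never stops growing, but it grows ever more slowly as the arm estimates sharpen.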
clinical trial is just your randomized controlled trial: we give 50 percent of these people this thing and 50 percent the other thing, and we pretty much just close our eyes and let the study proceed, except there's kind of an ombudsman who's there with a giant red button if something totally goes off the rails; otherwise we look at the information at the end. Now, this is itself an algorithm that is known to computer scientists. It's called epsilon-first, and it's not known to be a very good algorithm; it has linear regret, for example. And so there is this push within the medical community, and among biostatisticians and computer scientists and so forth, to use some of the very same algorithms that are already hard at work behind most websites optimizing ads for situations in which the stakes are in fact much, much higher. One of the arguments that I've heard is: how can you just coldly appeal to some algorithm when human lives are on the line? And I think an equally valid counter-argument is: how can you not use a strategy that has a guaranteed minimal regret when the stakes are that high? So it's extremely interesting to me; these documents are from 2010 and 2015, so the FDA is literally, as we speak, trying to think about how to adopt some of these ideas and rethink its approach to clinical trials. And so for me this is very significant, because thinking computationally about this fundamental human problem offers us some insights and some rewards at a number of scales. We can think differently about where to go out to eat; it gives us a way of thinking about the arc of human life, and how our thinking does and should change as a result of where we perceive ourselves to be, not only in terms of our lifespan but in any of the other smaller intervals that we're on in life; and lastly it gives us some, I think, surprisingly concrete guidance in
the cases where the stakes are really the highest. So now I'm going to talk very briefly about some of the other things that are in the book, which hopefully we'll have an opportunity to get more into in the conversation. The book pursues this computational lens on human decisions and human problems across a number of domains, and the first half of the book is really just tracking those different domains. In the sorting chapter we talk about the best way to alphabetize your bookshelf, and more importantly whether you should, and sorting theory also gives a little bit of consolation for sports fans about cases where we should or should not trust the outcome of a sorting procedure. In the chapter on caching we talk about memory management and storage management, and specifically what computer science can teach us both about how to deal with our overflowing closet at home and about the phenomenon of human memory and human forgetting more generally. In scheduling we look at scheduling theory, both as it arises in the machine shops of the Industrial Revolution and on to the operating-system CPU schedulers of today, and we basically ask: what, if anything, have we learned about human procrastination and human time management from several decades of fighting the interminable beachball of death? What causes that, what do we do to make it stop, and what can we learn from it? The chapter on Bayes's rule gets into the computer science of how to make good predictions about the future, and this is something I think is particularly relevant to Long Now members, and especially to people who traffic on Long Bets, which is a wonderful website. If you want to make accurate predictions about the future, it turns out that one of the absolutely simplest, but in some ways surprisingly accurate, algorithms is called the Copernican principle, which just says that you should always assume something is going to last twice as long as it's
lasted so far, which, given certain bounds, is actually quite a reasonable heuristic. And so we include predictions like: Google will probably last until about 2030; the United States will last until about the mid-23rd century; and if you met someone on Valentine's Day and you're trying to decide whether it's premature to book New Year's Eve tickets, the answer is yes, and when is it no longer premature to book New Year's Eve tickets, the answer is Saturday, July 23rd. The second half of the book builds an argument for what we call computational kindness, which is a way of applying some of these principles and insights to thinking more societally about how we interact with each other, both interpersonally and also, as a policy question: how can we design a world that is computationally kinder to us, that presents us with easier problems to solve? It also looks at what computer science tells us about problems where there is no simple, straightforward, optimal algorithm, and this to me is extremely interesting, because I think it gives us an opportunity to rethink our notion of what rationality itself is. So I'll just say a few words about this. We have this intuition that being rational is about being exhaustive: considering all your options, thinking everything through to the end. Being deterministic: following a policy that's guaranteed to work every time, that gives the same result reliably every time. And being exact: giving an answer that is both highly precise and held with a high degree of certainty that it is the answer. And looking at the computer science of dealing with so-called intractable problems, or NP-hard problems, shows us that up against the hardest classes of problems, computers do none of these things. They are not exhaustive: they explore a limited subset of their possible options, and they trade off the cost of making an error against the cost of delay. They're not
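The doubling rule, and the wider confidence interval behind it (Gott's version of the Copernican principle), fits in a few lines of Python; the 18-year institution in the example is invented:

```python
def copernican(age):
    """If you are not observing at a privileged moment, you are
    probably somewhere in the middle of the total lifespan: best
    guess, twice the age so far. The 95% interval comes from assuming
    you fall in the middle 95% of the lifespan, so the total is
    between age * 40/39 and age * 40."""
    return 2 * age, (age * 40 / 39, age * 40)

# Something that has lasted 18 years so far:
best, (low, high) = copernican(18)
# best guess: 36 years total; 95% interval: roughly 18.5 to 720 years
```

The striking feature is that the estimate uses no information at all beyond the age so far, which is why it works equally well for companies, nations, and relationships.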
deterministic: they use randomness; they follow procedures that are not guaranteed to produce the same answer reliably. And they're inexact: they use approximations, they make trade-offs, they produce answers with partial degrees of certainty. One of my absolute favorite examples of this is in encryption, which is of course essential to everything from the military to banking to online shopping. Encryption on the web begins with generating huge prime numbers, and mathematicians earlier in the 20th century boasted about how useless the study of prime numbers was, until all of a sudden it became extremely important to national security and all these things. So there's intense value in good algorithms for determining whether a number is prime, and it turns out that the one we currently use is wrong up to 25 percent of the time, and we've just decided that's okay: we'll just run it a few times, and the errors will wash out. I interviewed some of the implementers of OpenSSL, for example, and I asked: how many Miller-Rabin tests do you run? And the answer was that 40 is probably good enough; that gives us a one-in-four-to-the-fortieth chance of making an error, and that's something like one in a million million billion, and, you know, that's okay. So even at the highest level, computers don't necessarily adhere to these things that we stereotypically think of computers as doing, and out of that, I would say, emerges a different way of thinking about rationality, and a series of principles and pieces of advice that don't necessarily look like what we might expect to get from computer science. They say things like: don't consider all your options; don't necessarily go for the thing that seems best every time; make a mess on occasion; travel light; let things wait; trust your instincts and don't think too long; relax and toss a coin. And unlike the principles that you might find in a typical self-help book, they're backed by proofs. Thanks. [Applause] My
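The Miller-Rabin test he describes looks roughly like this in Python (a simplified sketch, not OpenSSL's actual implementation); each round wrongly passes a composite at most a quarter of the time, so after 40 independent rounds the error chance is at most four to the minus fortieth:

```python
import random

def probably_prime(n, rounds=40):
    """Miller-Rabin primality test: randomized and inexact, but each
    round catches a composite with probability at least 3/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):    # handle tiny cases directly
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness: n is composite
    return True                       # almost certainly prime
```

A compositeness verdict is always correct; only the "prime" verdict carries the tiny, quantified chance of error that the OpenSSL implementers decided to live with.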
leg went to sleep. That last one on there, I love. You said something about computational kindness; what's that? Yeah. The basic idea of computational kindness is that the way we interact with other people, both explicitly and implicitly, poses them problems to solve, and so you can, from this argument, make a bridge from computer science into ethics and say that we ought to interact with people in a way that minimizes the computational cost. I'll give you a very explicit example. I've just started looking at the real estate market, and the real estate market drives me nuts, but perhaps not for the reasons it drives most people nuts. It drives me nuts because it's computationally unkind, in that it requires you to do more strategizing than is necessary. Homes are sold by sealed-bid first-price auctions, where everyone writes down their bid, the person who writes the biggest bid wins, and they win at the price they wrote down. And so it creates this incentive to get inside the head of your competitors, figure out how many other people are bidding on the house, and do what's called shading your bid, which is: figure out the maximum price you'd be willing to pay, and then figure out the appropriate amount less than that to bid, such that you still win the house but you save money. It just requires an awful lot of strategy; you want to try to suss out how many other people are in the auction, what do they think I think they think, and so forth. And it turns out this is all completely unnecessary. There's this really wonderful mechanism called a Vickrey auction, where everyone writes down their bid, and the person with the highest bid wins, but they pay the price of the second-highest bid. And it turns out that the Vickrey auction is what game theorists call strategy-proof: there's absolutely no better way to play the game than to just write down your exact valuation of the house. The game optimizes for you. And so that to me is a
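The Vickrey mechanism is simple enough to state as code; a sketch in Python, with invented bidders:

```python
def vickrey(bids):
    """Second-price sealed-bid auction: the highest bidder wins but
    pays the second-highest bid. Strategy-proof: your bid decides
    only whether you win, never what you pay, so writing down your
    true valuation is always optimal."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Three hypothetical house hunters simply write down what the
# house is actually worth to them; no shading, no mind-reading.
winner, price = vickrey({"ann": 710_000, "bo": 650_000, "cy": 690_000})
# ann wins the house but pays cy's bid of 690,000
```

Because the winner's own bid never sets the price, there is nothing to gain by bidding below your valuation, which is exactly the strategy-proofness the talk is describing.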
really, it's a specific example, but I think a powerful example, of this broader theme, which in game theory is called the revelation principle: any auction that involves this sort of recursive trying-to-get-into-your-head process can be replaced with an auction that has different rules, in which the best thing you can possibly do is just be totally honest, and the house will go to the same person for the same price, on average, as in the strategic version. And so that gives me this almost utopian view that there are these opportunities, this kind of low-hanging fruit, to change the rules in ways that just eliminate these costs. Are there clarifying algorithms like that that might apply to voting, politically? Yeah, I mean, voting is a great example. The word strategic is a pejorative term to a computer scientist or a game theorist. Strategic behavior, tactical voting, or what are we saying? In game theory, and in what's called mechanism design, strategic behavior is anything other than doing what you really think. So if we imagine the Nader/Gore election or whatever: maybe I'm a Green Party supporter, but I vote for Gore anyway because I don't want to throw the election. That's strategic behavior; I'm not doing what I really think. And so there are mechanisms for voting, like instant-runoff voting, ranked preferences. Mm-hmm. But there are some disheartening results in the literature that say there basically is no completely strategy-proof voting mechanism; there is a strategy-proof auction mechanism, but there's not a strategy-proof voting mechanism. But I still think... Does that mean that leaves us with a failed system? Okay, so what's the algorithm for dealing with the failure? I'm not an expert on voting theory, but my understanding is that for every possible voting mechanism you can imagine some sort of horrible pernicious condition, but the current voting
system is one in which we are already seeing the bad, pernicious condition in evidence, and so we might as well try something else, at least for a while, and see whether that's better. I have two questions, one from Barry Gordon and one from Wayne. Barry Gordon wondered if you used an algorithm to decide when to stop working on this talk, or the book, by the way. And Wayne: did you apply optimal stopping in choosing your fiancee? Did she apply it in choosing you? Do you use this stuff? Yeah, I do. My fiancee Rose claims that I told her at some point that 37 percent of the average American male lifespan was something like age 29, and we were dating at the time, and I was 29, so apparently I said something like: so if this works, I'm all in. Which it did, and I was. But it sounds like the kind of thing I would say; I don't remember saying it. I will say this: the language of exploration and exploitation has literally become the way that I talk about these sorts of choices. My fiancee Rose and I will decide, you know, do you feel like exploiting or exploring tonight? That is actually how we have these conversations. And does giving it that generic framing help the discussion, help you get more quickly to a decision about a new restaurant or not? Yeah, it does. And, you know, we were assuming that I would be moving into her place in Oakland, and so this gave us a very clear directive, which is that we should exclusively go to our favorite places in San Francisco and relentlessly try new places in Oakland, even though they're probably not as good as our favorite places, because we had this whole new time opening up. But the plot twist was that we changed our minds, and she's in fact moving in with me in San Francisco, and so it was a 180: let's aggressively go to new restaurants in San Francisco and only go to our favorite restaurants in Oakland. Pretty smart. Someone asks: what was your
research process and path for actually researching the book? And I'm guessing that this book grew out of the previous book, which was called The Most Human Human, and was about AI, basically. Yeah, it was about my participation... What was the sequence of events for you leading from book one to book two? Yeah, ironically, I had been working on this book since before I even wrote my previous book; I thought this would be my first book. But... What the hell happened? I got wrapped up in this crazy adventure, participating as what's called a human confederate in a Turing test competition, which I'm sure many people in the audience know, but for those who don't: Alan Turing had this idea in 1950. He was asking these philosophical questions: can machines think, could we build a machine that could think, and if so, how would we know? And his idea was, we'll just have a test, we'll have a contest. We'll stick a panel of scientists into a bunch of chat rooms; they'll be talking simultaneously with some random humans hidden in a room down the hall, and with some software programs claiming to be random humans in a room down the hall, and we will reach a point at which we can't tell the difference. And so I kind of insinuated myself into the history of this by becoming one of the humans hidden in the room down the hall, but I was something of a ringer, because I had spent the previous year researching the history of the Turing test and interviewing all the possible experts. You were far from a random human. I was far from a random human, yeah. And were you trying to pretend to be a computer, or what? No, it was my job to persuade the judges that I was in fact a real person, not a computer claiming to be one, despite the fact that the bots were claiming the same exact thing. Hmm, this is an interesting trick, the Turing test: how do you persuade a human you're human and not a computer? What's the method? That is the question that obsessed me for the years that I was working on
this. So how do you try to act human in a competitive situation? And to make matters even more intriguing: in the contest, every judge assigns a numerical score to each conversation, which is how confident they were that they were talking to a real person, and every year a computer program gets the highest score among the computer programs and wins this thing called the Most Human Computer award. You get a bronze medal and three grand, so it's very nice. But there's also, every year, a real person who gets the highest score among all of the real people, as having most successfully persuaded the judges that they were human compared to the other humans, and if you get this, you are awarded what's called the Most Human Human title for the year. And so I found myself, kind of despite myself, not only competing against the computers but in fact competing against the other real people. How'd you do? That's the spoiler. The book's been out long enough that I'll proudly say that I won the title of Most Human Human. So that book was researched and written when? It came out in 2011, so I was working on it from about late '08 to 2011. Okay, it's 2016 now and still counting, and the conversation, the public discourse, about AI seems to be moving right along. What have you noticed from then to now? Oh yeah, right. So this book came out in the spring of 2011, which was pre-Siri, just to give you an idea of how long ago that really was. And the thing that's really striking to me: I remember going out on the hardcover book tour in the spring of 2011, and the number-one most popular question I was getting asked at the time was, do you think AI will take over my job? On the paperback tour a year later, the number-one question was, do you think AI will take over the world? In the destructive way? Yeah, yeah, just burn everything to the ground. And so that for me was just really striking. I mean, I really think over the last several years, which
is not that long a time, this has gone from a completely loony-bin sci-fi thing to something that some of the best scientific minds, here in San Francisco the OpenAI institute, some of the most respected computer science researchers, people that I enormously admire, are taking extremely seriously. And so as a result I myself have been converted from a skeptic, thinking this is all a bunch of sci-fi hogwash. You'd read about the existential issue... Yeah, like we actually do need to be attuned. You are worried? I'm concerned, because in my view the scenarios that are bad resemble very closely the reasons that the world is already bad. One of the big problems in AI is defining what's called a good objective function, which is: how do you formalize the thing that we want the system to do, in a way that won't make us sorry for what we wished for? I would assume what you want to formalize is the things you don't want the system to do, and then explore otherwise. Okay, so the typical model is that you define a sort of reward function for things you think are good, and inevitably you discover some horrible externality that you forgot to build into the system. What you want to do is work from: what are the sides of the road to drive between, and who cares where the road goes? Yeah, that may be, that may be. I think this is an open problem, so the more people thinking about it the better. And I see it as significant because I think it's what's already wrong with the world: we have defined various objective functions, whether it's quarterly earnings or GDP or whatever, that we take to be approximate correlates of human flourishing, but we have optimized them at our peril and created these huge externalities. So this is, for me, kind of the awakening moment at which I realized that a lot of these... Hey, there's this AI thought experiment called the paperclip maximizer. Nick Bostrom came up with that. Yeah, yeah, right: what if a paperclip factory invents AI and it turns the entire galaxy into paperclips? I'm not worried about that scenario, but I'm worried about the genre of things that involve this kind of: we've created this objective function and we've relentlessly maximized the wrong thing. So has the discussion, which by 2012 was about AI ending the world, stopped there, or has the AI discussion moved on since then? I think the question of will AI take my job has also kind of evolved into will AI take every job, and so for me this is another interesting thing, where ideas like universal basic income have, I would argue, gained a foothold culturally by starting with the people who are really credible about AI saying we've got to think about a post-jobs economy, and, to my point of view, over the last couple of years that has started to percolate into mainstream society, starting with the computer scientists and the AI people. Jill Tarter asks: how do we, or should we, deal with existential problems, unintended or unknowable consequences, in the far future, once we're interplanetary? I think one of the chapters in the book talks about this idea from machine learning that's called overfitting. The basic idea here is similar to what we were saying before. In a sense we're talking about an interval which is sort of infinite. Yeah, so there are a couple of ways to come at this question. In an explore/exploit context, you can ask: okay, if my strategy is supposed to depend on the interval that I'm on, how do I optimize for the infinite, indefinite future? That would be the so-called infinite game, where you're always improving the game, millennium after millennium. Yeah, and so there you may be relieved to know that the infinite but geometrically discounted future-reward version of
the multi-armed bandit problem was solved in the 1970s by John Gittins at Oxford, who at the time (this to me is an interesting aside) was an academic mathematician, but had been hired as a consultant to the Unilever corporation, which wanted to know what percentage of its budget should go into speculative R&D versus just marketing its profitable products. And so corporations are interesting examples of something... Well, that's classic explore/exploit, isn't it? Yeah, exactly, and it all kind of depends on the horizon that they feel they're operating on. And so I think corporations, nations, and institutions are examples of these things that prioritize present survival, obviously, but aspire to be indefinite, and so some of the algorithms that come out of the infinite-horizon version of the problem are very interesting to me. At Global Business Network, which taught scenario planning, we ran into a version of this problem, which is that very large corporations, and government departments and so on, need to do scenario planning because they're looking at a very long time frame, but it's absolutely worthless for a startup to do scenario planning, because the startup is making a bet on a version of the world that they think is probably true, and they're in essence totally all-in. And so having multiple ways the world might go is not that useful to them, because they're in pure discover-what's-going-on, adapt, pivot, all that kind of stuff; they're on a very short time frame. Yeah. I mean, there's this really interesting result: in machine learning there's this idea called regularization, which is kind of the formal version of Occam's razor. It's that if you have a bunch of different models that all give you a pretty good prediction, take the simplest model, the one with the fewest knobs on it, the fewest parameters. And I think this is very interesting when it comes to long-term planning, because one of the ideas here is that you train
every model on the data available to you at present, but you're trying to generalize it into the future, and the farther into the future you generalize, the more you should regularize your model; that is, the simpler it should be, the fewer parameters it should have. And so what I take this to mean is that if you're trying to predict the state of the economy or the market next year, you want elaborate models, but if you're trying to do really long-term thinking, then principles serve better than models: things with the fewest parameters. Yeah, which becomes guidelines; Long Now's is a similar kind of idea. Damon asks: what can algorithms learn from neuroscience about human decisions? And you're also getting into things like Danny Kahneman's System 1 and System 2, intuition versus serious study, and the behavioral economists who are looking at these things; how do those fields relate? I can't speak with great authority about neuroscience, but one of the things that for me is really fascinating about the human brain, from a neuroscience standpoint, is that if you think of a brain as being kind of like an organization, the larger it is, the more discombobulated it can become. Society of Mind, Marvin Minsky says. Yeah, I mean, literally, the more volume the brain takes up, the more out of sync the computation can potentially become, because you've got this latency between the hemispheres and so forth. And so there's this really interesting anatomical study: for electrical reasons that I don't fully understand, thicker axons can transmit a signal more rapidly across a long distance, and you see this if you look at mammal brains; the thickness of the interhemispheric axons scales with the volume of the brain, because you need to move signals faster from one hemisphere to the other when they're just further away. But the trade-off is that you're less
volumetrically efficient, because you have these huge pipes. And so if you plot all mammal brains on a curve, you get one outlier, which is humans, and specifically we're an outlier in that we do not have as thick axons as you would expect given our brain volume, which means that our brains are more volumetrically efficient; they can do more computation in that amount of space, at the risk of greater interhemispheric lag. So, speaking to this kind of decentralized-self, society-of-mind thing, our species has made a very particular gambit, which is that we have a more powerful computing organ, but it's more out of sync with itself. Out of sync with itself, but is it also possibly more general-purpose in some sense? I think so. I mean, I think this speaks to the long period of uselessness at the beginning of human lives; my understanding, and it rings true to me, is that that goes hand in hand with generality: we are not single-purpose cheetah-evading machines, we are these general machines. And so as we get to a 200-year lifespan, instead of settling down and becoming an adult at 20, it'll be more like 40? I think that's probably true; I suspect it's probably already happening. [Applause] Let the record show the audience agreed. Here's a question from Boris; looks like "MOS," that can't be right. What aspects of computer-science thinking should not be adopted by humans? I think most of the history of computer science has been characterized by an assumption that the machinery is reliable, and if you assume that the machinery is reliable, then what you want to do is maximize... But that wasn't true for long; computers were funky, and there was all the Shannon stuff of, you know, how do you get reliable results from an unreliable signal, and all of that. Are they reliable now? Well, what's interesting is that the machines themselves, I have to imagine, are many orders of
magnitude more reliable than the vacuum tubes and so forth, but the scale at which we're doing computations is also much greater. So when you're talking about extremely large distributed computations, the chance that some freak gamma ray comes in and flips a bit at this exact moment, or that there's some weird neutrino burst: these completely freak things start to have nonzero occurrences, and they start to wreck your computation, and you have to redo the computation. You also see that in genetics; I just learned the other day that the cold-adapted pigs in the northern part of Korea flipped one base pair, out of three and a half billion, to get that capability. Huh, that's cool. Right, so there's kind of a renaissance within theoretical computer science of caring about what's called robustness. Mm-hm. So, for example, sorting algorithms: the kind of punching-bag algorithm for computer science students is called bubble sort, which sorts by looking for out-of-order pairs and then just correcting them at this very micro level, and it's famously inefficient. But if your comparison operation is unreliable, then if you make a mistake, you've only made a mistake of distance one, whereas with these famously lauded linearithmic algorithms like merge sort, if you make a mistake in a merge operation, you could end up in the wrong half of the results. In the book we talk about this in the context of sports: in March Madness, if your team loses the first game, you are just permanently relegated to the bottom 32 teams, even if you were the second-best team in the whole tournament. It turns out there's some really interesting work coming out of the University of New Mexico right now on robust sorting algorithms, and the most robust quadratic-or-better sorting algorithm known is what's called comparison counting sort, which is
effectively like a round-robin tournament, where each item is compared to each other item. The traditional view is that this sucks: it's quadratic. But it turns out to be the single most robust algorithm known against a noisy comparator function. And so this provides a bit of a double-edged thing for sports fans, which is that if your team loses inappropriately early in the merge-sort postseason, you will get sympathy from a computer scientist, but if your team fails to qualify for the postseason because its comparison-counting record in the regular season is too bad, you will not get sympathy from a computer scientist: that is a robust result. And we all crave that approval from the computers. So I'm curious whether we're getting into a kind of post-algorithmic world with deep learning and things like that, where very complex systems, very distributed, working with vast amounts of data, are mining it in astonishingly inventive ways, some of which we can't entirely track, and you're getting systems that can do what's right, but is there really an algorithm in there where you can say, yes, well done, that's the right algorithm? What are we facing here? Yeah, there's something, in my opinion, uncomfortably inscrutable about deep learning specifically. Some good friends of mine work at Instagram and Facebook on some of their algorithms for what to show you when, and there is this kind of: well, we tried all this crazy ML stuff, machine learning stuff, and it outperforms the best old-school algorithms that we have, but no one knows how it actually works, and no one really knows how to debug it using the traditional debugging methods of tracking down the exact point at which the problem happened. And I suspect we will make strides in this direction, but it is a little bit discomforting when someone tells you: yeah, we've got this system and
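The round-robin idea he describes, comparison counting sort, is easy to sketch in Python; with a noisy `better` function, one wrong comparison changes an item's win count, and hence its final rank, by at most one:

```python
def comparison_counting_sort(items, better):
    """Rank every item by its head-to-head record against all the
    others: a full round-robin tournament. Quadratic in comparisons,
    but maximally robust to an occasionally wrong comparison."""
    n = len(items)
    wins = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and better(items[i], items[j]):
                wins[i] += 1
    # Highest win count finishes first in the standings.
    order = sorted(range(n), key=lambda i: wins[i], reverse=True)
    return [items[i] for i in order]

# Hypothetical season point totals, ranked by who beats whom.
standings = comparison_counting_sort([68, 75, 51, 90, 62],
                                     lambda a, b: a > b)
# standings: [90, 75, 68, 62, 51]
```

Contrast this with merge sort, where a single bad comparison during a merge can strand an item in the wrong half of the output for good.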
it's better than anything any of our actual staff can make, and that's why we're using it, but no one really knows how it works. That does start to sound like the first step in permanently inscrutable, do you think? Yeah, sorry, it does start to sound like the first five minutes of a sci-fi B movie. The next thing is consciousness, and then you're in trouble. Or not; the first thing it may say is "I'm sorry," but that would require it to understand the concept of apology, which is a deep one. Is that a permanent condition, or are we seeing these deep-learning capabilities emerge from a complex process we can't completely parse, but we can then ask the deep-learning process to please tell us, in terms we can understand, what its mechanism is? I mean, to me it's very interesting; I feel like the justice system, the human justice system, works this way, where if someone does something bad but they can give you a really compelling story about why they took that action, then it's somehow okay. And once these programs discover that they can just take the easy way out and tell us a compelling story... Oh, that's it, and then we're... Yeah, then we're really... And then we're dead meat, yeah. So, okay, there's a sequence of events: you started on this book, then you veered into the AI thing, discovered the human versus the AI, and AI keeps moving, and you've now done this book. Do you get a sense of what the arc is here? Is there a narrative, is there progress in your own sequence of events? What's next? Yeah, for me, I'm currently preoccupied with three things. One is this AI safety stuff, because I really... I mean, to me it is interesting because my academic background was as a computer science and philosophy double major, and so I sort of think of each discipline in the terms of the other. And for me what's interesting about the AI ethics stuff is that it is giving us an opportunity, and by that I
mean possibly an urgent crisis, to make headway on these long-standing ethical problems that have kind of always been there. Now it's crunch time and we really need to do something, and that to me is exciting conceptually, in the way that thinking about the Turing test was really interesting philosophically. We've got this 2,500-year-long history of philosophers asking what makes humans unique and distinct and special, and going back to Aristotle we've answered this question by contrasting ourselves with animals. So I see the development of the computer, and AI in particular, as kind of turning 2,500 years of Western philosophy on its head, because we now ask the question of how we are distinct from machines. And I think we feel much more kindred with animals than we did before the computer. That's kind of sweet. So are you looking at this dangerous-AI problem from the existential-risk standpoint, or from the what-do-we-do-in-the-next-couple-years-of-research standpoint? I'm interested in the question of how we articulate what it is that we really want: how do we create an objective function that actually does capture the things we have in mind? What would be an example of that? What's interesting to me is that if you look at utilitarianism, there are a bunch of paradoxes, one of which is called the repugnant conclusion. The idea is that these very simple, naive formulations, like "let's just maximize total happiness across all people," sound great, but you may end up in a world in which you've severely overpopulated the Earth, everyone is only infinitesimally happier than zero, and there are so many people that your objective function tells you that's a better world. That's what you've asked for: the sum total
of all human happiness. So then you think, okay, that's not what I really meant; what if we optimize for the mean human happiness instead? And then you get these crazy scenarios where there's a caste of permanently indentured or tortured people, and as long as everyone else is made more happy than those people are made miserable, this is the world your objective says is best. And you say, well, that's not really what I meant either. So, again, I'm interested in this current cultural moment with respect to AI because it puts these things in the crosshairs. These utilitarian paradoxes have been discussed in philosophy departments and white papers for decades, and it's just like it was with prime numbers: people studied prime numbers for a long time, and then they suddenly became extremely useful and extremely important. Some of these same things are happening in utilitarianism, where paradoxes that seemed completely abstract, just mind games for philosophy departments, all of a sudden become: wait a minute, it's a big problem that we can't define the world that we want. Philosophical things like the trolley problem are turning up with these self-driving cars. Totally, stuff like that. Okay, so you say looking at AI hazards is one thing you're focusing on; what else? The other two: I'm interested in computational linguistics. The field of computational linguistics is kind of having a moment. What's happened? I think it's a technology that's coming into its own. Natural language processing, due in large part to advances in computation, is now a field that has ripened in a way that I think is teaching us, again, it's like: what
is our newest technology teaching us about our oldest technology? That to me is an interesting question. The oldest technology being the word; that may not literally be the oldest, but for rhetorical purposes. It sounds like there's a book there. And what are your glimmers of what this work on computational linguistics is teaching us about language and humans? I think it's changing our sense of what a word is, for example. There was a Supreme Court ruling in 2011. There was this case where AT&T was subpoenaed by the FCC to provide some documents. They did not want to provide the documents, so, in the wake of the Citizens United ruling on corporate personhood, they said, we're invoking our right to personal privacy. This goes to the Supreme Court, and it becomes a linguistic question: are all "persons" entitled to "personal" rights, personal privacy, and so forth? So what seems like a corporate case actually becomes a linguistic ruling: is "personal" just "person" in adjectival form, or is it a distinct word with a distinct meaning, so that we should treat them as unrelated concepts? And, to my knowledge for the first time, where the court has traditionally turned to the OED on these questions and traced the etymology back to Old Saxon or whatever, in this case they turned to big-data corpus studies: let's ingest every New York Times article ever, every Wikipedia article ever, run these word-context analyses, and see whether humans use the word "person" in the same contexts in which they use the word "personal." The answer was no, they don't; they use them in distinct ways. Thus the court ruled that they are two different words, and not every person is entitled to personal things; therefore AT&T had to cough up the documents. And the majority opinion ends with the great zinger: we trust that AT&T will not take
this personally. I'm tempted to end there, but it sounded like there was one more area you're focusing on. What would that be? Yeah, I'm really interested in what I've been calling reproducible journalism. This sort of inverts my role: rather than thinking as a writer about computer science, I'm thinking as a computer scientist about writing. It strikes me that whenever I read an article saying the grid is now six percent renewable, or we're on track to reach price parity between fossil fuels and solar at year X, or the police have shot X number of people this year, this thing goes off in my brain: where are you getting that? It's like someone taking a time series, handing you a single data point, and saying, here's the story. That's not the story. So it really seems to me there's an opportunity to bring journalism, and the way we make claims about civic things, into the 21st century. Do you see any of these things turning into a book? If my previous experience is any indication, they will all happen, but not in the order that I think they're going to happen. All right, so you've got a book; how long has this book been out? It came out in mid-to-late April, so it's been out for a while, and you've been touring a bit and dealing with people dealing with it. Say something about that; as a writer, one always wonders what people are going to do with this thing. What are people doing with your book that delights you and puzzles you? I have to think about the puzzles one; I may not yet know about the puzzling things. Well, how about delights and disturbs?
yeah yeah yeah I mean it's it's as you can tell from my manner when I talk about these things there is a there is an element of the book that is completely an earnest and there's an element of the book that's tongue-in-cheek and so I'm you know we Tom and I attempt to delineate these things tonally I'm sure there are people that are going to take the tongue-in-cheek things extremely literally to disastrous results you know so for example if you if you read the fine print on the 37 percent rule you discover that it is optimised for a particular objective which is giving you the best chance of getting the best thing however it's it only succeeds 37% of the time and so here's a case where even following the opposite that's a failure no bail sixty-three percent of the time and so yeah I almost feel like the need to put some kind of disclaimer like we're not responsible for the 63 percent of you who do this and it doesn't work out are people coming in and say it didn't work for me I mean that's a great question I think over the years we will probably get correspondence of from people saying I did or did not propose to my partner based on reading this thing right and it either I there have been a couple success stories where people have said you know I've now made this change after read your book and thank you and and that obviously makes me very pleased but I'm sure there will be the reverse I'm sure and but there's one other at least two other audiences you're the philosophers that you've hung out with these years and the computer scientist you'd be coming out with these years what are those guys making this yeah I mean I think the to me the book is able to kind of bridge those Disciplinary communities they want to be bridge those two that's a good question yeah I think so I think so I mean I for me it's interesting for for non computer scientists the book is kind of a gateway drug into computer science great and for computer scientists the book reveals the the highly 
interdisciplinary nature of the things that their comrades are working on and discovering, in a way that's not necessarily obvious to the people who are in the trenches optimizing these things. For example, I know a ton of people who work in ad tech, and saying to them, you know, this is actually kicking off potentially the biggest change in medical clinical-trial practice in fifty or sixty years; the people who are in the weeds don't necessarily realize that there are these larger-scale things going on. I think it's very satisfying for both camps to have an opportunity to see the connection between the two. So it's like the prime-numbers folks discovering, we need you to defeat the Nazis, and they're going, what? Yeah, exactly, and to me that's a really satisfying dialogue, because my own background is in computer science and philosophy, and then I was probably the only person to graduate from the computer science program and go into a terminal degree in poetry. I have lived my life very much as a cross-pollinator of these things, and they all feel extremely interrelated in my mind. It's sort of like being an interdisciplinary yenta: you're able to put these people together, and that's an extremely satisfying thing. If I'm interviewing an expert and I say, oh, so you're working on the same thing that so-and-so is doing in the sociology department, and they say, wait, what? Give me their name, give me their email address. Those moments of finding things that resonate across those lines are extremely rewarding. Well, Brian, you're living proof that humans really do have a general-purpose brain. [Applause] [Music]
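An aside not from the talk itself: the "37 percent rule" numbers Christian cites can be checked with a short simulation of the classic secretary problem. The sketch below (function names and parameters are illustrative, not anything from the book or talk) observes the first n/e candidates without committing, then takes the first candidate better than everything seen so far. The policy both succeeds roughly 37 percent of the time and, as Christian notes, fails roughly 63 percent of the time.

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the 37% rule on n randomly ordered candidates.

    Skip the first n/e candidates, remembering the best of them,
    then commit to the first later candidate who beats that benchmark.
    Returns True only if the candidate chosen is the overall best.
    """
    ranks = list(range(n))          # higher number = better candidate
    rng.shuffle(ranks)              # candidates arrive in random order
    cutoff = int(n / math.e)        # observe-only phase: about 37% of the pool
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1       # success only if this is the true best
    # No one after the cutoff beat the benchmark, so the true best was
    # in the observe-only phase and the strategy has already failed.
    return False

def success_rate(n=100, trials=20000, seed=0):
    """Fraction of trials in which the 37% rule lands the best candidate."""
    rng = random.Random(seed)
    return sum(secretary_trial(n, rng) for _ in range(trials)) / trials
```

With n = 100 and a few thousand trials, `success_rate()` comes out near 0.37, matching the fine print: optimal for getting the very best, yet a 63 percent failure rate.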
Info
Channel: Long Now Foundation
Views: 3,418
Keywords: Technology, Psychology, Economics, Computer Science, Behavior Science, Algorithms, Decision, Optimism, Regret, Computational Kindness, Explore, Exploit, Optimal, Artificial Intelligence
Id: gPBiJsqTRms
Length: 91min 9sec (5469 seconds)
Published: Tue Apr 14 2020