Artificial Stupidity: The New AI and the Future of Fintech

I want to start by thanking Shafi for that very generous introduction and for inviting me here today to speak to all of you. It's a real pleasure and an honor, particularly for an economist, to be speaking at the Simons Institute, and particularly about computing, especially in front of a Turing Award winner. It's actually quite intimidating, so I'm going to try to keep my comments relatively broad rather than focus on any kind of detailed mathematical derivations. I know that, being from MIT, we have a rule that says every PowerPoint deck has to have at least one equation. I'm going to break that rule today: there will be no equations. This is a public lecture, after all, so I'm assuming that not everybody here is well-versed in the theory of computing. What I'm going to talk about instead is how artificial intelligence has really changed the way we think about financial technology and what that implies for the future of the field. There are some really interesting implications, some troubling concerns that I want to bring to your attention, and hopefully a way out that I'm betting people in this audience will be able to implement over the course of the next few years. So that's a quick outline of what I'm going to try to touch on, but feel free to interrupt with questions or comments. I know there's a Q&A period at the end, but I'm happy to take questions earlier if you like; generally people are a little too polite to do that, but I would encourage you nonetheless.

I want to start with... actually, I'm using the lapel mic as opposed to this one, so I'm hoping this is better. Okay, sorry about that. I hope I don't have to repeat what I just said. No? Okay, all right.

So I want to start with a little bit of motivation, and the motivation is this graph. Can anybody tell me what this graph is? Climate change? Hopefully not, but I understand where you're coming from. Exactly: this hockey stick is world population from 10,000 BC to the present, and it is the prototypical hockey stick because population growth happens to be exponential. But there are very few species on this planet that actually have a growth curve that looks like this. I grew up in New York City, and during long hot summers, occasionally the population of cockroaches in New York looked like this; then you call the exterminator and, boom, it comes back down. Homo sapiens have been reproducing virtually unchecked for eons. How do we do this? The answer, of course, happens to be technology. Technology is what enables us to dominate virtually every ecosystem on this planet: agricultural technology, medical technology, manufacturing technology, and so on. It's actually easy to see this if you look at it on a logarithmic scale, because on a log scale the slopes tell you the rate of growth, and from this log-scale depiction you can tell that there are maybe four or five distinct periods in human evolution. The flat part of the curve to the left is the Stone Age; then the slightly upward-curving part is the Bronze Age; then after that the Iron Age, and then the Industrial Age; and over the course of the last hundred years we have been in what I think of as the digital age. Between 1900 and 2019 we have nearly quintupled the number of people walking this planet. That is an extraordinary amount of growth, from an evolutionary time scale, in the blink of an eye, and it is because of all these different technologies, most important of all for the current discussion being digital technology. And so by now you all know Moore's law better than I do, the fact that we are doubling the capacity of chips every three to five years. My colleagues in the physics and engineering departments at MIT tell me that we're actually running up against a limit to Moore's law, and that may be the case, but then there's quantum computing, and who knows what that will do in terms of being able to keep this
pushing forward. Now, you might think that progress on the tech side must have implications for financial services, and the answer is: there is a financial Moore's law. Let's take, I don't know, trading volume as a case in point. The red line here is the amount of trading in options and futures on various exchanges, and what you can see, if you do a log scale (the blue line; the green line is the linear curve that you fit), is that we're doubling average daily trading volume of financial securities about once every seven or eight years. It's not quite the same as Moore's law, but it is just as impressive. So it's pretty clear that financial technology is going through an incredible growth spurt, and many of you, I suspect, know this firsthand because you're probably in the FinTech space. How many people here are actually working on FinTech startups, or have an idea to do something along those lines? Show of hands. All right, a fair number. How many of you are actually very concerned about FinTech and what it might do to society? Yeah, good, so you're in the right place. We're going to talk about both the positives and the potential negatives of this set of amazing innovations.

But I want to give you one example of what an innovation in financial technology might mean for each one of you in the audience, and that technology is what I'm going to call precision indexes. Now, you've all heard about precision medicine, the idea that we can actually target therapies specific to your particular genetic makeup. In cancer, it is absolutely critical to get your DNA sequenced before a doctor will even talk with you about treatment options, because treatment has become so oriented towards customizing it for particular individuals. What if we did that for financial products and services? Instead of the Dow Jones 30, the FTSE 100, or the S&P 500, imagine being able to have the Shafi Goldwasser 30, or the Peter Bartlett 100, or the Jim Simons 500: an index specifically for you, focused on your income, expenses, age, health, taxes, behavior, goals, constraints, everything about you, so that we can design exactly the right portfolio for you every day. And imagine that this is really smart beta, in that it is totally automated. It turns out that we now have the means to do this. We've got the hardware, we've got the software, we've got the telecommunications platforms; this is within our grasp.

Now, this is not a new idea by any means. A number of years ago there was a paper published titled "Personal Indexes," and the concluding paragraph had this to say about technology: artificial intelligence and active management are not at odds with indexation, but instead imply a more sophisticated set of indexes and portfolio management policies for the typical investor, something each of us can look forward to perhaps within the next decade. Who was this incredibly prescient, insightful sage who wrote these words? Well, it was written by yours truly, but I wrote it in 2001, and so one could argue that, given that we don't have precision indexes yet, I was off, because it's been a lot more than a decade now. I wasn't totally wrong, in the sense that we do have robo-advisors and ETFs and mutual funds with all sorts of automated algorithms that provide investors exposure to different styles of investing, but we're not there yet. The question I want to take up today is: why not? What's missing? It turns out that it is not artificial intelligence that's missing. What we don't have is what one of my graduate students called artificial stupidity. We don't have an algorithmic understanding of how people actually behave. We have all sorts of rules and heuristics about how people should behave, but we don't actually know, algorithmically, how people do behave, and that's what we need before we can actually design truly useful, highly
customized precision indexes.

Now, I think "artificial stupidity" is a bit strong and obnoxious, so I would change it to "artificial humanity," because all of us make mistakes, and that's not necessarily even a bad thing, because some of those mistakes can actually save us in other contexts. So what we need are algorithms that actually describe human behavior, so that we can counterbalance the least productive actions with various kinds of remedies. That's the missing link: human behavior. Not understanding human behavior can overwhelm all of the most sophisticated technological advances. To cite that other famous philosopher and great humanitarian, Darth Vader... sorry for that; I teach MBA students, so I have to incorporate sound effects. When you look at all of these sophisticated technologies, it turns out that they don't have any ability to deal with overwhelming human behavior, particularly during crises.

So what I want to do now is talk about how we model human behavior, because it turns out that the way we model it is going to give us some insight into the kind of AI, the new AI, that's going to be necessary to deal with these opportunities and challenges. I want to begin with my own field of economics and talk about the theory of economic behavior: how do economists think about the way people, particularly investors, behave? To do that, I'm going to take you back to 1947, which is the year that Paul Samuelson published his PhD thesis, titled, very modestly, Foundations of Economic Analysis. Remember, he was a PhD student. It turned out the title was modest, because his thesis actually became the foundations of modern economic analysis. Around the same time, von Neumann and Morgenstern, at the Institute for Advanced Study in Princeton, published a book, Theory of Games and Economic Behavior, and these two works described a particular paradigm known as expected utility theory, built on the idea that each of us has a utility function, a measure of our happiness, and that the way to predict how humans behave is to maximize that expected utility subject to budget constraints and other practical production and consumption constraints.

This theory became so successful in academic economics that economists, emboldened by the beauty of the mathematical precision with which you can calculate these kinds of solutions, started applying it to all sorts of other domains that had nothing to do with economics: a theory of divorce, of suicide, of extramarital affairs. It got to the point where the theory of economic behavior started becoming the economic theory of all behavior, and the term Homo economicus arose, this idea of economic humans, that people actually behave in ways much more akin to what economists think than what other disciplines think. So you might not be surprised to learn that after these theories were espoused, people tested them, and the tests had mixed results. Initially it looked like the theory worked pretty well; it certainly can predict certain kinds of behavior. But the more you applied these tests to different domains, the more it became clear that they don't fit the data very well. When I was a grad student and an assistant professor, I worked with a colleague, Craig MacKinlay, to test the random walk hypothesis, which is one aspect of this theory of expected utility maximization, and what we found was that, in the data, the random walk does not hold for stock prices: you could actually predict stock prices to some degree. Little did we know that at the time we were writing this paper, a certain mathematician was applying these ideas to try to predict the stock market. That mathematician is now a multi-billionaire, and he happens to be the founder of the Simons Institute for the Theory of Computing. So we know from Jim Simons's work that the random walk doesn't hold. But a number of economists and
psychologists came up with all sorts of other behavioral biases that humans seem to suffer from. I won't take you through all of them, but I'll give you a very simple example that I think we all face in terms of how we decide on investing. My first-year MBA students are confronted with this problem in the class that I teach. To motivate how to think about investing, I show them the returns of four different financial assets. I don't tell them what they are, or even what time period they span; I simply show them the rates of return for $1 invested in these four assets over an unspecified multi-year horizon, and this is what it looks like. The green line turns $1 into $2 over this multi-year investment period: not very rewarding, but not particularly risky; it's got a pretty smooth upward slope. The red line turns a dollar into about five dollars: way more rewarding, but way more volatile, lots more risk. The blue line turns a dollar into eight dollars: even more rewarding, but way more volatile. And the black line is somewhere in between. The question I ask them is this: if you could only pick one of these four assets (you can't mix and match them; you can only pick one) to put your entire retirement wealth, or your children's college fund, or your parents' or grandparents' life savings into, which would you pick? It's a matter of risk versus reward, right?

So how many of you here would pick the green line? Show of hands. Anybody? Wow, nobody. Maybe one person, but okay. How about the red line? Any takers for the red line? Really, only one person? I want you to remember this moment, because when I tell you what the red line is, some of you are going to need to call your brokers. The blue line? Any takers for the blue line? Okay, so these are the entrepreneurs and hedge fund managers. What's that? You see the graph? Yeah, absolutely, you see this whole thing right now, and I'm asking you right now: you see the graph, do you like it? Okay. The black line: how many people want black? Yeah, virtually all the audiences I've talked to prefer the black line. Why? Because it's got the best trade-off between risk and reward; it's not the most rewarding, but it seems to have pretty low risk.

Well, let me tell you what you all picked. First of all, the time period goes from 1990 to 2008. The green line is US Treasury bills, the safest asset in the world, at least for the next few weeks; we'll see what happens with the budget discussions. But not very rewarding: if you put your money in T-bills, you would have earned pretty much next to nothing over the course of the last decade. The red line, which only one of you picked, is the S&P 500, the US stock market. Most of you already have that in your 401(k), so if you didn't pick it, you have some rebalancing to do; but if you did pick it, congratulations, you did just fine through 2008, you did quite well. The blue line is a single company, Pfizer, the pharmaceutical company: way, way more risky, not for everybody, but for the few entrepreneurs and hedge fund protégés it makes sense, and if you put your money in Pfizer, well, congratulations, you did even better. Phenomenal. Now, what about the black line, which most of you picked? The black line is the return to a private fund known as the Fairfield Sentry Fund. This was the feeder fund for the Bernie Madoff Ponzi scheme, which is why I had to stop the graph in 2008: that's when the Ponzi scheme blew up. Now you know how the Ponzi scheme got as big as it did; look at the number of hands that went up. It is human nature that we are all drawn, like a moth to a flame, to high-yielding, low-risk assets, and in finance we have a term to capture that phenomenon: it's called the Sharpe ratio. The Sharpe ratio is defined as the expected return above and beyond T-bills in the numerator, and risk, as measured by volatility, in the denominator. So you can think of it as the amount you're earning per unit of risk, and we all want more; we want
more Sharpe ratio, higher Sharpe ratios. If you look at Pfizer and the S&P 500, the Sharpe ratios are about one-third. Compared to that, the Bernie Madoff Ponzi scheme, before it blew up, had a Sharpe ratio on paper that was an order of magnitude higher. Sometimes when things are too good to be true, they aren't true. Now, there's nothing patently irrational about seeking high-Sharpe-ratio assets, but the problem is that our perception of risk is highly subjective and context-dependent, and can easily be manipulated. As a result, human behavior does not always give us the best financial outcomes, but unless we understand why we do the things that we will later regret, we can't develop the algorithms to get around it and do a better job. That is, after all, what AI is supposed to be about: doing a better job than what we can do ourselves, or doing it faster, cheaper, and on a more broadly distributed scale.

So how do we design a theory of actual behavior, as opposed to a theory of what we think people ought to be doing? The answer was first proposed by a computer scientist, of a sort, named Herbert Simon. Before Simon became a computer scientist (and I'm not really sure that computer scientists feel that he's a computer scientist) he was an economist, and he came up with a theory of behavior that he called bounded rationality. In 1956, Simon proposed something that he called satisficing behavior. The word "satisficing" didn't exist in the English language; he made it up as a combination of satisfactory and optimizing behavior. It was a compromise, and his notion was that we humans don't optimize; we don't have a utility function that we're trying to use the calculus of variations on to compute optimal investment trajectories. We come up with rules of thumb, heuristics. We have a mental model, the mental model has a particular prediction, and we follow that prediction, and when we're wrong, well, so be it. So Simon proposed satisficing as an alternative to expected utility maximization, and as a result he became totally alienated from the economics profession.

Now, let me explain why. To do that, I have to explain what the theory of satisficing is, so I'm going to give you an example of coming up with a heuristic. The example has to do with a particular problem that all of us face every day: getting dressed in the morning. What should you wear? I'm going to tell you how I deal with my particular challenge by telling you first what my wardrobe looks like. Here's my wardrobe: I've got five jackets, ten pairs of pants, twenty ties, ten shirts, ten pairs of socks, four pairs of shoes, and five belts. That's my entire wardrobe. Now, you might think that's a rather limited collection of clothing, but I'll have you know that if you calculate the combinatorics, you'll see that I have two million unique outfits in my closet. Two million unique outfits. Now, it's true, not all of them are equally compelling from a fashion perspective, so I have a problem: I have to choose the best outfit. How do I do that? Well, suppose that I optimized: I try to calculate the expected utility across all of these different outfits, and let's assume that it takes me one second to evaluate the fashion content of an outfit. How long would it take me to go through all possible outfits to see whether I've got just the right one? It turns out it would take 23.1 days. Now, you could say, well, you'll do it once and then you're done and never do it again. I promise you, I have never spent 23.1 days thinking about what to wear in the morning. So how do we do it? Yeah, thank you. Busted. That tells me more about you than it does about the problem: you are a very creative individual who does not care about socks matching. Thank you, very good. So it turns out that, in fact, we don't optimize. According to Simon, we come up with rules of thumb
that are good enough. And by the way, you sort of know what these rules of thumb are, right? Like, you're not supposed to wear white slacks after September. Who said that? Well, my wife told me that; that's a rule. Or, you shouldn't wear shorts to a wedding. We sort of know these things; we come up with different rules of thumb, and that's how we get by.

Simon proposed this, and he was roundly criticized by his economics colleagues in the following way. The idea behind satisficing is that you come up with an algorithm that's good enough: it's satisfactory, it's not perfect, it's good enough. Well, what does that mean? Presumably it means that the cost of optimizing is balanced against the benefit of finding a better solution, right? But let's think about that for a minute. "Good enough" means close enough to optimal, which means that the cost of doing another optimization is not commensurate with the reward. But how do you know that unless you know what the optimum is? When I got dressed this morning for this event, I didn't spend 23.1 days, so for all I know I might have been able to come up with just the right outfit, one that would have made all of my remarks so much more compelling that you would go off totally convinced that what I was saying was the God's honest truth. How do I know, if I didn't spend the time doing it, that the solution I did come up with was good enough? And if you do know what the optimal solution is, then you don't need to satisfice; just do the optimal thing, right? So satisficing was dismissed, and eventually Simon, who was at Carnegie at the time, left the economics department and moved over into psychology and computer science. By the way, his work in computer science was absolutely revolutionary. He was among the first to study algorithmic methods for playing chess, and he and Allen Newell developed some really advanced ideas about how to build a chess-playing program. It wasn't as good as the programs we have today, but based on what they were working with, it was really path-breaking. In recognition of his work in AI he was awarded the Turing Award, the highest honor in computer science, and Simon also won the Nobel Prize in Economics. He's the only person to have won both of those awards, and it's astonishing that his work has not had much impact in economics, given how important I, and many others, now recognize it to be.

It wasn't until I started thinking about it from an evolutionary perspective that I finally came up with a response to Simon's critics, and the response that I've come up with and written about is this: we don't know whether something is good enough. What we do is the best we can right now, and we use that heuristic until and unless we have a reason to come up with a better one. A quick example. When I was a five-year-old growing up in New York, some marketing genius figured out that Superman was the hero of the day, so you could sell a lot of Superman jackets to five-year-olds if you put Superman emblems on jeans jackets, and so they offered that. I saw it on TV, and I told my mom I wanted this: could I have a Superman jacket? And she said nope. You know, we were a single-parent household; we couldn't afford it; it was a luxury. And so that was it. I asked her the next day, and the next day, and the next day, and for about two months I asked her every day: can I have it now? How about now? Can I combine it with my birthday and Christmas presents? I nagged her endlessly, as most five-year-olds will do, until finally she relented. I remember the day that she agreed to buy it: it was a Friday after work. We went to Alexander's on Queens Boulevard in Queens, New York, got the jacket, put it on, wore it, slept in it, did not take it off for the entire weekend except when I had to take a bath, and only because she forced me; she would not let me wear it in the bathtub. Come Monday morning, I was so excited to put the jacket on and wear it to school. I posed in front of the
mirror, got up early and looked at myself, and I spent so much time in front of the mirror that I was late for school. That was the first time, as a first grader, that I was late for school, and it was pretty traumatic, because in those days you'd go to the principal's office, get a note from the principal, and bring it back to the teacher. I remember walking to the back of the room, where everybody already working on their lesson was snickering that I was late, and it was so traumatizing that over five decades later I still remember that day as if it were yesterday. From that day forward, it has never taken me longer than three minutes to get dressed for school. And that heuristic, the heuristic that I came up with, came about because I had a bad experience: the negative emotional reaction that I got forced me to come up with a new heuristic. That heuristic worked quite well until I got to college and my roommate asked me to be the best man at his wedding, and at the wedding rehearsal I showed up in shorts, because I figured it was a rehearsal, not realizing that you're actually supposed to wear the whole thing, the tux and all that. So heuristics are developed in response to the environment; we adapt.

This idea of bounded rationality requires a new theory of human behavior: the adaptive human. This is where the adaptive markets hypothesis comes into play. The great evolutionary biologist Theodosius Dobzhansky said that nothing in biology makes sense except in the light of evolution. I would paraphrase that to say that nothing in the financial industry makes sense except in the light of adaptive markets: the idea being that we make mistakes, but we adapt and we change, and market dynamics are really driven by that evolutionary process.

So now let me bring this back to artificial intelligence and contrast it with natural intelligence in the context of adaptation, and I think you'll see where we're going with this and what it has to do with FinTech. To talk about the relationship between AI, natural intelligence, and bounded rationality, I want to use an example of a piece of AI that is incredibly effective: recommender systems. Recently I got interested in biotech, and I decided that I needed to learn more about the industry, so I wanted to order a book on one of the most successful biotech companies in history, Genentech. So I went to Amazon, like many of you would do, I searched for Genentech, I got this book, and I clicked "add to my shopping basket." As soon as I did that, Amazon did this really nasty, obnoxious thing that I just hate: it showed me five other books that other people who bought this book also bought, and sure enough, I had to buy two more. Really nasty habit. This is new AI.

Now, I know that many of you are young enough that you don't even understand the distinction between new AI and old AI, but there was such a thing as old AI. Let me tell you what that was. Old AI was expert systems, something that was pioneered by a Stanford faculty member, Edward Feigenbaum, and one of my colleagues at MIT, Randy Davis. It was a system of rules that were meant to capture optimal behavior in various scenarios: an expert system was meant to be an exhaustive enumeration of all possible scenarios that you might find yourself in, plus the selection of the optimal behavior in exactly each particular scenario. In those days storage was very expensive, so the programming had to be incredibly efficient; the coding was critical, the algorithm was incredibly sophisticated, and there was not a lot of data, not a lot of storage. That's exactly the opposite of new AI. It turns out that in what I call new AI, and recommender systems are one example, the algorithms are actually pretty simple; what's complex is the data. And it turns out that new AI is a lot closer to natural intelligence, as far as we understand it, as far as neuroscientists tell us. Old AI is actually much
closer to Homo economicus: let's figure out the optimal rules for various scenarios, and let's follow them. That's not how people behave; people behave with bounded rationality. In case you're interested, a couple of years ago there was a really neat review paper published on the Amazon recommender system, and the authors acknowledged that the early version was actually pretty simple. They've developed more sophisticated versions since then, but the early version worked pretty well and obviously made Amazon what it is today. It is remarkably effective, but it really started to work well when they got enough data. So data is key.

I want to talk a bit about that, and I'm going to talk about it in the human context. I want to talk about something that all of us are already adapted to doing, and that is threat identification. I'm going to show you a picture, and I'm not going to tell you what it is, but I want you to tell me as quickly as possible, just shout it out, whether this picture is friend or foe. Does it represent any kind of a threat, or is it benign? Okay, are you ready? All right, here we go: friend or foe? You know, I'm getting both, because basically, who knows what it is? It's just a bunch of random pixels; you can't really tell. Not enough data. By the way, the correct biological answer is "foe": what you don't know can kill you, so from the evolutionary perspective you should be afraid of this. But we are much more rational today, so we don't know. All right, let me give you the same picture but with a few more pixels: friend or foe? Well, it's looking pretty threatening, right? Okay, one more picture: friend or foe? This is a selfie of yours truly at the Washington, DC spy museum, getting stalked by a ninja. But in fact it's not a real ninja; it's pretty clear this is obviously not threatening at all. So threat identification is something that we can do instinctively, but only if we have enough data. Data is critical. That was a very simple example.
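As an aside, the "customers who bought this also bought" idea mentioned a moment ago can be sketched in a few lines of code, which makes the "simple algorithm, lots of data" point concrete. This is a hypothetical illustration of item-to-item co-occurrence counting, not Amazon's actual system; all book names and purchase data here are made up.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: each set is one customer's basket.
# The algorithm is trivial; its effectiveness comes from having lots of data.
baskets = [
    {"genentech_bio", "billion_dollar_molecule", "gene_machine"},
    {"genentech_bio", "billion_dollar_molecule"},
    {"genentech_bio", "gene_machine"},
    {"random_walk", "adaptive_markets"},
]

# Item-to-item co-occurrence counts: how often two books share a basket.
cooccur = defaultdict(lambda: defaultdict(int))
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        cooccur[a][b] += 1
        cooccur[b][a] += 1

def also_bought(item, k=3):
    """Return up to k books most often bought together with `item`."""
    ranked = sorted(cooccur[item].items(), key=lambda kv: -kv[1])
    return [book for book, _ in ranked[:k]]

print(also_bought("genentech_bio"))
```

With more baskets, the same counting scheme starts producing the uncannily apt suggestions described above: the code never changes, only the data grows.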
I'm gonna give me a more complex example the next example has to do also with friend or foe but this has to do with something that all of us have done we've gone to a cocktail parties where we've met lots of different people and during the course of the evening you'll find out different things about different people you run into and you may or may not have a goal but let's suppose that you're trying to figure out who you might want to work with or who you might like to be friends with so you want to learn whether or not you're compatible with these various different individuals in particular you might learn over the course of an evening conversation about their gender sexual orientation marital status race ethnicity age group so on and so forth okay so I'm gonna tell you about two particular individuals that you will run into in this course of a hypothetical evening and I'm going to ask you to make three decisions about these two individuals so the two individuals are Jose and Susan and let me first tell you about Jose Jose is a gay Latino male single young professional from California was a no religious affiliation Adam Kratts middle-class with an MBA now let me tell you about Susan Susan is a heterosexual married female white middle-aged from Texas Christian Republican affluent with a bachelor's degree okay so now I'm going to ask you to make three decisions about Jose and Susan decision number one you're about to launch a tech startup and you need to hire somebody to help you with the business you need a partner for that startup who would you rather hire Jose or Susan how many people would hire Jose for the startup okay how many people hire Susan alright most of you would hire Jose okay okay second second you are in the process of organizing a fundraiser to raise money to help breast cancer patients and so you need to get somebody to help you organize that fundraiser you know make phone calls to plan the event and so on who are you gonna call who are you gonna 
ask to help you with that breast cancer fundraiser Jose or Susan how many of you would ask Jose how about Susan okay third decision you're working at the Internal Revenue Service as an auditor and you're looking for tax evaders people that have submitted faulty fraudulent tax returns you can't audit everybody and in particular you can only audit either Jose or Susan for tax fraud who would you audit how many of you would audit Jose how many people would audit Susan wow that's amazing I can't believe how judgmental you people are you've never met these people you're making decisions about hiring and auditing and all these how is that possible now it's true I asked you but you didn't hesitate you were able to make decisions like that and there was consensus there was consensus yes I know this is Berkeley and that's why I ask these questions it turns out that human nature and evolution has given us cognitive abilities to make snap judgments and from an evolutionary perspective believe it or not this is a feature not a bug but the key is do we have enough data now let's look at the facts here first of all I've listed next to each of these characteristics the number of broad categories that you might think of as being possible over the course of an evening conversation in bucketing these individuals so two major genders two major sexual orientations so there are four possibilities I know there are more if you're a more complex individual marital status you know single married divorced race ethnicity there are four major races age group you know four major age groups and so on if you calculate the number of unique combinations of these features it turns out that you've got over a million different categories that's more resolution than a 600 by 800 photograph but the problem is this how many people here have met more than a million people in their lives nobody I actually gave a presentation to a group of marketing people and three people raised their hands I don't know if
I believe them so if you have not met more than 1,036,800 people that means that some of the cells are empty in your data storage of all people because the way that we make decisions the way that you made your snap judgments goes something like this of all the people that I can think of that raised money for breast cancer how many of them look like Jose versus how many of them look like Susan well I think I'm gonna go with Susan of all of the people that I know that have been involved in tech startups how many of those people look like Jose versus how many people look like Susan I'm gonna go with Jose so we are doing exactly what Amazon does the problem is that our data is incredibly sparse most of our entries are empty we don't have a fully populated database yeah yes I'm gonna get to the baseless stereotyping the reason that we have baseless stereotyping is because it doesn't take much when you've got no data to flip the bit and small things will change your impressions of entire classes of people things actions and various phenomena that means that all the various different biases that can emerge can emerge really easily when you've got a sparse data set and you're allowed to manipulate bits at will this is one of the reasons why fake news is so dangerous it doesn't take a lot to change the way you act based upon the sparse data that you currently have and people who know that can do enormous amounts of damage I can do that with financial services because I know that there are certain things that you care about like Sharpe ratio so if I show you a few graphs that give you the sense that gee it's not a lot of risk and huge upside I can manipulate your data set to the point where you will easily flip from no I don't want to invest to yeah give me a hundred thousand dollars of that security yeah so the question is you sort of tell this story about how we as people make inferences but the majority of people in the room didn't raise
their hands for any of your three questions yes the majority of us have an awareness and that's insufficient data to make your case so today you're sort of telling a story about how people act while apparently most people in the room don't reflect the story well so I'm not sure I agree with you that most people didn't raise their hands I saw a lot of hands go up so it was not like four or five people raised hands so let's say maybe half the people raised their hands okay that's fair right yeah that's a lot of people I mean that's enough people to carry the vote on a particular issue and I could argue the other point which is that there are many people that would have made the snap judgment but don't want to say and are embarrassed to say but in fact you don't have to reveal it to me ask yourself how many times have you made a snap judgment about somebody I don't want to be his friend because you know he doesn't like soccer and I love soccer and so I don't want to be friends with anybody who doesn't like soccer what a ridiculous thing to say the person could be you know an enormously beneficial friend to you and yet they just don't happen to like soccer because they were hit in the head with a soccer ball when they were in fifth grade yeah yes agree yeah that's right that's right it's much more complex than this of course but you cannot come up exactly thank you that's the point the point is that all of us even in the ideal situation where none of your issues arise I argue that you cannot come up with good decisions all the time but if you now add your considerations it is far worse and to get back to the point about bias one thing that I didn't talk about was path dependence meaning that the weights on these different feature vectors are going to be different depending on your experiences so if you were riding on the New York City subways and you happened to be mugged by somebody that was wearing green face paint from that day forward when you see
somebody with green face paint you're gonna think twice before being in a room with them by themselves so it is highly path dependent TMI I don't want to go there yeah agreed fine the point is that in order for us to deal with all of the foibles of human behavior we need to talk about these issues we need to think about how it is that artificial humanity can actually be modeled so I want to talk a bit about what the implications are for the financial system this is an area that I think about more often than not and I'm gonna give you one example of some research that I've been doing that will give you a sense of where I think the future of financial AI might be it has to do with a very specific financial behavior that I call the freak out factor and it is nothing more complex than this when the markets go down investors can freak out meaning they get scared and decide I can't take this anymore I've lost ten percent in the market I've got to get out I want to put everything in cash that's what I call freaking out well the interest rates yeah there are many reasons it could be there are many triggers that could be causing the freakout factor so I'm focusing on the state of freaking out it turns out that freaking out cashing out your risky securities to put them into cash generally is not a good thing and in a minute I'll tell you how not a good thing it is so in a paper that I just finished with some of my students and former colleagues Daniel Elkind Kathryn Kaminski Kien Wei Siah and Chi Heem Wong we asked the question can you predict who's gonna freak out because if you can you can then step in and intervene with precision indices and say no no you're about to freak out because we just lost 10% in the stock market interest rates are down and you're worried about the future so I want to target you as being somebody that's likely to freak out can you do this well it turns out you have to define what it means to freak out so let me give you a very simple definition of
freaking out there are other definitions you could come up with but this is a simple one imagine that you have a portfolio at a brokerage firm and within a month your portfolio value declines by 90 percent now that could be for one of two reasons either the market goes down by 90 percent or you've liquidated part of the portfolio so let's suppose that the value goes down by 90 percent and on top of that you sold at least 50 percent of your portfolio during the month that's what I call freaking out do we agree on that there are less extreme versions of freaking out but do we agree that if your portfolio balance goes down by 90 percent and somewhere during the month you liquidated half of your risky holdings that constitutes freaking out okay it turns out that we were able to get a large brokerage firm to give us data on their retail investors about 650 thousand individual accounts spread across about 300,000 households over a 13 year period from 2003 to 2015 so this includes the financial crisis and they gave us monthly snapshots of individual portfolios their trading activity and their demographics all of this anonymized so we don't know who they are but we have the data and the question that we want to ask with the data is can we predict who freaks out now first let me show you what freak-outs look like so this graph represents the percentage of individual accounts that freaked out in a given month so this goes month by month over the course of the last 13 years and you can see that there are periodic spikes and if you overlay a graph of financial market dislocations you can see that those spikes occur at the times when the financial crisis hit Bear Stearns going under Lehman going under and so on so clearly people freaked out 9% of all households in our database freaked out at least once and so the question is does freaking out help or does it hurt and let me show you these are the returns of the median household that freaked out so of that 9% of the households you take
that sample and you look at the median and you look at their returns one month two months three months ten months twelve months after they freaked out how do they do well the returns you can see are mostly zero because they're out of the market so they're not making any money and so by and large it seems that freaking out is actually not subtracting a lot of returns there is one case where during the pre-crisis period if you freak out for more than a year that's actually not good news you actually lose value but by and large it doesn't seem like you're losing anything because you're going to cash but that's not the correct question the correct question is what if you didn't freak out what would you have earned in that case and that's this graph this graph shows you the hypothetical portfolio at the time you freaked out it freezes that snapshot and it calculates what would have happened to those securities if you had left them alone for one month two months three months and so on and you can see that the green line which is the post-crisis period and the red line which is the pre-crisis period both show that had you left the money in instead of pulling it out you would have actually done better so that's what you lost the opportunity cost but look at the blue line the blue line is what you would have earned had you not freaked out during the crisis and what it shows is that over an 18 to 24 month period freaking out was actually good for you it's actually good to freak out when of course you've got the right thing to freak out about under extreme circumstances it doesn't cost you to get out of the market but what does cost you is waiting too long to get back in and therein lies the challenge there's an opportunity there to help investors navigate around the freakout factor by helping them to get out of the market when it's extreme enough and get back in when the coast is clear so to answer the question can we identify who's going to freak out
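As an aside, the freak-out rule described above is simple enough to state as code. This is a minimal sketch of that screening rule for a single account-month; the field names and the exact accounting here are my own assumptions for illustration, not the paper's actual variables:

```python
def freaked_out(value_start, value_end, sold_value):
    """
    Flag the 'freak out' rule for one account over one month:
      - portfolio value fell by at least 90% during the month, AND
      - sales accounted for at least 50% of the starting portfolio
        (so the drop is driven by liquidation, not just the market).
    All three inputs are dollar amounts for the same account-month.
    """
    if value_start <= 0:
        return False  # empty account: nothing to liquidate
    decline = (value_start - value_end) / value_start
    sold_fraction = sold_value / value_start
    return decline >= 0.90 and sold_fraction >= 0.50

# A $100k portfolio that ends the month at $8k after $60k of sales: freak out
print(freaked_out(100_000, 8_000, 60_000))   # True
# Same 92% drop but only $10k sold: a market loss, not a freak out
print(freaked_out(100_000, 8_000, 10_000))   # False
```

The second case is the important one: a crash alone does not count, only a crash accompanied by heavy selling.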
because we know it's a big issue the answer not surprisingly is yes you can it's a little bit complicated and take a look at the paper if you're interested particularly because as we said most people don't freak out so you can have a very good prediction by just assuming you're not going to freak out so you've got to pick a balanced data set do the usual machine learning calculations to try to balance it and be able to perform the appropriate modeling but when you do it seems like you can actually tell who's going to freak out so let me give you a little bit of a quiz and show you what it is that we found I'm gonna ask you to tell me who is more likely to freak out all right of these various different characteristics investors who are between the ages of 45 to 85 do you think they're more likely to freak out or less likely to freak out than typical how many people think more how about less it seems about even turns out more not surprisingly the older you are the more you have to lose the more you worry about retirement so you tend to freak out more what about females is there any gender difference between males and females how many people think females are more likely to freak out how about less likely to freak out you're right less likely this is why females actually make better portfolio managers at least in the context of retail investments how about married investors are married investors likely to freak out more or less how many people think more how many people think less more once you're married one could argue the stakes are higher you're thinking about a family you've got two people that you need to think about how about investors with self-declared excellent investment experience and knowledge when you sign up for a brokerage account you have to actually list your own investment experience so these are people that are excellent investors in their own eyes more likely to freak out or less likely you're right more likely to freak
out households with a larger number of dependents more kids more likely to freak out or less likely more likely less you're right more social workers paralegals and government related workers more or less likely to freak out more likely less likely yes you're right less likely and finally self-employed real estate moguls what do you think more likely to freak out less likely yeah more likely you see where we're going with this it turns out that with enough data we actually have a pretty good handle on which of you is likely to overreact to certain market moves not only can we know who it is but we think that we can actually predict when you're likely to do that at what point are you likely to freak out here's a little teaser it turns out that somebody who has done more than one trade in their account over the last thirty days is five times more likely to freak out than somebody who has not done a trade in the last 30 days so imagine if we actually had an algorithm and the data to predict who and when will freak out and we could then intervene to prevent them from doing the thing that we know based upon historical data is going to give them a disadvantage in building wealth and suppose that we allow the algorithm to manage this process entirely and program the algorithm with a goal to maximize your long-term wealth talk about a truly greedy algorithm this would be it that's what we're looking for precision indices right but is greed good have we stopped to ask the question should we really be maximizing everybody's long-term wealth of course that's right right greed is good where is it well let me give you one perspective the point is ladies and gentlemen that greed for lack of a better word is good greed is right greed works greed clarifies cuts through and captures the essence of the evolutionary spirit greed in all of its forms greed for life for money for love for knowledge has marked the upward surge of mankind and greed you mark my words will save not only Teldar Paper but that other
dysfunctional corporation called the USA thank you very much now does anybody know where that's from yeah it turns out that in 1987 Oliver Stone released the film Wall Street featuring Gordon Gekko the fictitious corporate raider and interestingly enough this character was based upon a number of people but in particular that greed-is-good speech was based upon a speech that was given right here at Berkeley a few years before by Ivan Boesky at the business school Oliver Stone made this movie to illustrate to the public how disgusting and dangerous financial innovation is and how this culture of corporate greed has to be checked and so you could imagine his consternation when years later people would talk to him see him in a restaurant or write him a letter saying I want to thank you for making the movie because I became a stockbroker because of it I became an investment banker because of it seriously this movie did more for MBA programs and business schools than anything else that I could have imagined and this is the challenge with corporate culture it's that somehow we have lost sight of the role that ethics plays in these contexts now that's not completely true we haven't totally lost sight of it because in the financial industry we've actually thought about this to a great extent the role of ethics in financial transactions has actually played a very significant role in the regulations that are imposed the way that we deal with all of these various different conflicts is the notion of fiduciary duty a fiduciary is somebody that is required legally to put your interests in front of their own and so it turns out that in situations where you're worried about conflicts you have to ask the question is the counterparty that you're dealing with a fiduciary it turns out that many brokers are not fiduciaries so when you buy a stock from a broker at your favorite brokerage firm they are not
under legal obligation to represent your interests solely a financial adviser on the other hand is and they are held to a higher standard so it turns out that in dealing with human interactions we realize that we've got to deal with this in a very specific way and so we've come up with a mechanism for doing so do we have that mechanism for AI not yet this is part of artificial humanity as well so it turns out that there actually have been discussions about this in AI a very old one that all of you I'm sure know from fiction and that is the Three Laws of Robotics that Isaac Asimov proposed in 1942 for those handful of you that aren't nerdy enough to know what they are let me tell you the first law that Isaac Asimov proposed in his I Robot stories is a robot may not injure a human being or through inaction allow a human being to come to harm that's rule number one rule number two a robot must obey the orders given it by human beings except where such orders would conflict with the first law and then the first law takes precedence the third law says a robot must protect its own existence as long as such protection does not conflict with the first or second laws so like a typical computer scientist would do he's constructed a recursive structure beautiful theory but it wasn't until Isaac Asimov came to the Foundation series that he developed the zeroth law he realized that he forgot something and the zeroth law does anybody remember what that is yeah exactly so Asimov realized that when you start thinking not just about individual human interactions with robots but about humanity as a whole you need a law to cover that and so here's what the zeroth law is a robot may not injure humanity or by inaction allow humanity to come to harm but when he proposed the zeroth law he didn't describe whether or not you should revise the first law to make that recursive let's suppose we did that if we made the first law recursive which means the first law you can't
injure a human except if it conflicts with the zeroth law well now that raises a really interesting conundrum it raises a conundrum that we have already dealt with as humans which is does it make sense to commit murder if in doing so you save the lives of many and what would happen if you had an AI that actually had these four laws recursively and it started working at ExxonMobil and started thinking about what it means for climate change what they're doing and for humanity that gets really complicated now I'm not smart enough to know how to think about that but I know people in this room who are and so I'm hoping that Shafi and her colleagues take this on to think about how ethics and culture can actually be quantified and embodied in what I call artificial humanity so let me wrap up by saying that financial technology is not just about Homo economicus that used to be what it was about but in my view that's not the future of FinTech artificial humanity figuring out how people actually make decisions and developing the tech to prevent us from making the worst of those decisions and helping us to come up with the best that's truly advanced AI and you know ethics culture policy these are things that traditionally are not part of quantitative analysis but I think they ought to be because these issues are far too important to be left just to philosophers politicians and lawyers that's the traditional domain of their expertise but it doesn't mean that we can't take these ideas and try to start quantifying them otherwise the lawyers will have the final say and you know I was reminded just a few weeks ago about how worrisome that can be when I was told the story about a lawyer who took on a client an elderly woman who wanted some help with a will and the lawyer said the fee is $100 and she said fine she paid him in cash a hundred-dollar bill and then she got the advice that she needed and left and after she left the lawyer looked at the hundred dollars and realized it was
actually two hundred-dollar bills that hadn't been separated and so he immediately was confronted with an ethical dilemma should he share it with his partners if you didn't laugh at that joke you need some artificial humanity thank you yes thank you oh wait wait wait for the microphone you mentioned that we're using artificial intelligence we're trying to understand how humans behave and use artificial intelligence to guide us or to make the world a better place but what if certain companies or maybe governments used this understanding of humanity and how we behave for their agendas to shape our opinions yeah that's the right word for it shape our opinions so that in the future the opinions that we think we're adopting are not our opinions but the opinions that are driven by these governments or organizations yep I think that that's a real danger and that's one of the reasons why in my view the technology field has a much broader responsibility than simply producing great products it's because these products have system-wide societal impact that we don't fully understand and any technology no matter what it is can either be used or abused and the kind of technology that's being developed right here in the Bay Area is no different but the big difference is that the individuals in the policy world may not have the expertise to understand the implications of these technologies and the people in the technology field may not feel that they should be involved in policy and so what I'm suggesting is we need to have both come together and start thinking about these issues and it's already happening it's happening here there are a number of lectures that I've seen online from the Simons Institute focusing on algorithms and the law and various kinds of challenges that technology is posing on how we govern but I think that we need smarter people to come into the field people in this audience
need to start taking these kinds of issues seriously thank you yes hi now in the financial world your system kind of ignores the fact that all the actors are not independent that they are interrelated yes and you know in order for there to be winners there have to be losers so if you developed algorithms that made everyone a winner then no one would win well so let me try to challenge that perspective in certain contexts you're absolutely right that it's a zero-sum game but there's actually a pretty large part of finance that's not a zero-sum game and let me give you an example a very clear example to make this explicit so when you buy a stock and it goes up then you won and the person who sold it to you was the knucklehead that lost right because that person didn't participate but that's assuming that the person was trading as opposed to trying to get liquidity so for example the person that sold you the stock might not be trying to trade but trying to cash out in order to pay for his kid's college education in which case he doesn't mind that he's not getting the benefits of the growth in the company he doesn't want to take any more risk he wants to use the money to transform it from financial to educational capital and so that's an example where typically when we think of a zero-sum game even in the context of stock market trading it's not always a zero-sum game but there are cases where it is and as long as you know that it is that's fine two mutually consenting adults that want to engage in a particular game of chance where one loses and the other wins that's fine as long as they understand and can withstand those kinds of gains and losses I would argue that for the vast majority of financial transactions they're actually not zero-sum games there are actually very positive gains from trade that we engage in because contrary to popular belief much of the transaction volume that goes on in financial markets is not day traders it's people that are
shifting assets from risky to riskless or riskless to risky looking for long-term growth and that's really how markets ultimately will provide better value for society yes so I'm curious I've got a question about the impact of artificial intelligence on financial theory in particular the subfield of artificial intelligence I'm interested in is natural language processing that is computers reading textual data maybe newspapers or other you know not just numbers but text data and you know I've seen in the news some recent advances in this sort of technology so I'm curious how much of an influence this sort of natural language processing currently has on you know financial theory and whether you feel like it's going to be more involved in the near future yeah so that's a really interesting point I would say that it hasn't really been used in financial theory but it's been used in financial data analysis so for example some of my colleagues and I wrote a paper looking at Twitter feeds and seeing whether or not certain patterns of text can actually lead to changes in stock market performance or the minutes of Fed meetings if you read those minutes for positive or negative commentary whether or not that has an impact on the stock market now those kinds of methods of using natural language processing to do financial research are becoming more and more commonplace but that's not really affecting the theory for now yeah definitely and that's actually one of the really interesting areas in the financial industry where you traditionally had fundamental analysis with people reading research reports and making decisions based upon textual qualitative information and then you've got on the other hand quants that are using all of these various analytics to make decisions algorithmically those two are actually merging thanks to natural language processing because if you can read it and quantify it you can actually develop an
algorithm to manage it do you remember the name of the paper that you mentioned it's on my website oh yeah it's coauthored with Pablo Azar and it's on there I forget the exact title something on looking at Twitter and the wisdom of crowds yes up there so I wish Isaac Asimov was still around I didn't know about the zeroth law and the enormous conflict between the zeroth law and the first law yes and it seems like this is a little bit like Stalinism how many people do you have to murder for the sake of humanity and this is what Stalin did yeah and so I'm kind of interested the first three laws were generated in 1942 and at that time we were allied with Stalin okay and so maybe it's understandable that he had that mindset but what happened with that it seems like that would be a very controversial idea well it certainly is a controversial idea and obviously I've never met Dr. Asimov so I can't speak for him but if I had to conjecture how the zeroth law came to be it actually came out of one of the Foundation books that he wrote and the Foundation series which is one of the reasons I went into economics is a fictional account of a mathematician named Hari Seldon who is a psychohistorian a field that he created using mathematical methods to predict human behavior but those mathematical methods would only work if the population of the planet grew to a certain size where the law of large numbers and the central limit theorem would hold and so as part of that story Asimov had Seldon put together a set of plans for humanity that would guide it towards a very positive trajectory but Seldon also predicted that if certain other things occurred it could actually lead the planet to a very bad outcome and so he had to keep all of these ideas secret and also think about how to implement them in ways that would actually allow the higher path to be followed and my sense is that Asimov when he was thinking about those ideas realized that if you want
to maximize the greater good right the greatest good for the greatest number that's the typical utilitarian philosophy that John Stuart Mill and other utilitarians espoused the greater good for the greater number leads in some cases to necessary deaths as difficult as it is for us to acknowledge the classic example is the trolley problem I don't want to bore you with a lecture on ethics but you know the old trolley's going down a path six people are gonna get killed there's one person on the other side of the track if you flip the switch one person dies not six do you want to do that those kinds of calculations it turns out we do this all the time we do this now for example when we set the speed limit in the United States to 55 miles an hour that means a certain number of people will die every year if instead you set the speed limit to 45 miles an hour you will save lives so why not do that well because we want to get to work on time and we don't want to drive 45 miles an hour on the freeway and so we as a society have decided the trade-off between a life and getting to work and if you think I'm joking about this take a look at the Department of Transportation on their website there is a memo on the statistical value of a life and that number nine point one million dollars is the value that they use to assign to your life my life in calculating speed limits in order to balance cost and benefit so we do this right now and if we're going to develop truly powerful AI the AI is gonna have to do this on our behalf and the question is how so that's the troubling thing with the four laws of robotics yes the robot committed suicide because it couldn't resolve that dilemma between the zeroth and the first law so not to spoil it but it was just not a great outcome yeah you know that actually could be the optimal outcome in the grand scheme of things yes maybe back there and then we'll go back down here yes my question is about the tension between wants and needs in financial decisions like say if I wanted to liquidate my
portfolio to buy a boat that seems like a pretty bad idea and so an ethical AI might prevent me from doing that but me as a human being I might really enjoy being out on the ocean and so it seems like there's a line between that and a really bad ethical decision that someone should prevent you from making right so this is exactly where the issues are gonna arise and where we have to think about how much paternalism we want to instill in our AI and how much we want to allow individuals to override these recommendations my guess is that initially there's gonna be a lot of overriding of these kinds of suggestions but eventually if the AI gets good enough and ends up knowing you better than you know yourself if that ever happens I know it seems kind of crazy but let me give you one example of how good AI is so I enjoy playing chess on chess.com I'm not very good at it but I've done these exercises where not only do I not get the right answers for the exercise I can't even understand what the answers are when I'm told what they are because the chess program is going five six seven steps ahead I can barely look two steps ahead and so I now have confidence that the AI is better than I am even though I can't understand it I believe that the AI is better than I am what if we get to a point where financial AI knows a lot more about what's gonna happen in the stock market next year what's gonna happen with macroeconomic conditions what's going to happen with your own personal conditions your health what if it factors all of that in and like a chess engine it just knows so much more than you do that you know buying that surfboard or whatever you want is gonna cause you tremendous grief in three years' time and you don't even know it might we ever get there exactly that's right so if I were to parrot a computer scientist I would say it's just a matter of degree it's not a matter of a different characteristic you
know, checkers is now a completely solved problem; we actually know what the optimal strategy is from beginning to end. That was solved about ten years ago. Checkers, not chess, checkers, that's right. And so there may be a residual amount of uncertainty that we will never be able to get past, but never is a long time, and I don't know how much technology is needed to be able to deal with that situation. And remember, you don't need an AI that's perfect; you just need an AI that's better than you, and the question is how long it takes for us to get an AI that's better than you. With chess, we're already there. Look at Herbert Simon. I wish Simon were alive today, because he would just be blown away that Garry Kasparov cannot even understand some of these chess moves. When Garry Kasparov says, "Oh, that's a computer move, I don't know what that is," that's amazing to me. So might we get there with financial AI? I think we can, but it may take another twenty or thirty years. With the people in this room, I think we can get there. Yes, down here, question.

In light of what you just said, how should financial regulation evolve for this incoming and, I'm quoting one recent issue of The Economist, "unavoidable" financial-AI takeover of financial services?

Thank you. Well, for one thing, I believe that financial regulators need to learn more computer science; they need to hire more computer scientists, and they are. Believe it or not, the SEC has actually been hiring data scientists, people who understand how these AI systems work, and I believe that by using these techniques regulators can do a better job of regulating; they can regulate much more efficiently by using technology. They're not there yet, but they're getting there. More importantly, I think eventually we're going to have to think seriously about how we digitize our legal interactions, and I know that Frank Partnoy and Shafi are teaching a course on algorithms and the
law, so it's happening now. But most lawyers are not computer scientists; they're not even trained in computing. So I think that's really where we need to make the biggest change: to start thinking about the law, and our interactions with the law, as something that can actually be understood algorithmically. It shouldn't be hard, and there are some really interesting conundrums there. There's one story that I have to tell, because Shafi and other computer scientists are here. It turns out that in the 1940s, at the Institute for Advanced Study, I think it was the 1940s, Kurt Gödel, the famous logician, happened to be there, and he was about to take his test for citizenship. As part of the citizenship test you're going to get asked questions about the Constitution, and so he had to read the Constitution, and he spent a lot of time reading it. The day came for him to go to the judge; in those days you had to see a judge, and the judge would ask you questions. Gödel didn't know how to drive, so he needed to be driven, and he got a ride from, believe it or not, Albert Einstein, and I think it was von Neumann, the two of them. This is an absolutely true story; you can read it in Who Got Einstein's Office?, a book that was published decades ago. So they gave him a ride, and along the way Gödel said something interesting. He said, you know, I read the Constitution and I realized something: it is possible, in a completely legal way, to turn this country into a dictatorship. The Constitution allows it, and I have an algorithm that will do it. Now, this was right in the middle of the Cold War, and so Einstein and von Neumann said, no, no, no, no, you do not want to mention this. When the judge asks you what you think about Stalin and dictatorship, you do not want to say, oh, that can happen here too. So, you know, I wonder: if we had Shafi and Silvio Micali and some of the other Turing Award winners read our Constitution, or read corporate governance documents, what
kinds of things would they find, and what would be possible? What could computer scientists tell us about legal interactions and governance that we don't know? I think quite a lot. What's that? Terms of service, yeah. Well, here, oh, and then back here.

Thank you for the motivating speech. I just want to ask, this is not about ethics, it's more about emotional reaction to different situations, about human nature. What if we get to a day where, whether by AI or by other means, by psychology or neuroscience, we can fully exploit this human nature and we can fully understand the patterns of human emotional reaction to everything around us? Will there be such a day, or will human nature, our pattern of emotional reaction to the outer world, also evolve with our knowledge of this world?

Yeah, there's no doubt that human behavior will evolve to deal with these technologies. Whether or not you can predict how it evolves, I think you can. And so I believe that eventually we will have the ability to make certain kinds of forecasts that will allow us to compute equilibria between the various different species that are co-evolving. Maybe there are chaotic components of the system that you'll never be able to forecast, but there is a significant enough element, and a significant enough regularity in human behavior, that you can go pretty far along the way. There's an enormous amount of potential right now, and I think it's a field that's just beginning to get off the ground, with people in neuroscience collaborating with people in computer science and engineering. There are just so many things that can be done that don't fit into any one single field; that's kind of the fun of an organization like the Simons Institute. It's not just for computer scientists; there are all sorts of interesting people here, and many schools are starting up various different computer science colleges, if you will. So I think that there's a lot more that can be done. Yeah, or is it
Shafi, did you? Oh, okay, okay, over here.

Thank you for that talk. So I agree with you, AI is probably going to be the best investor out there; there are probably going to be multiple AIs, right, because of the amount of data, because of Moore's law, because they're able to process things much better than us humans. So the question for you is: is the end state AI trading with AI, where we don't have any human investors around, and in the end we're just the customers of AI investors that are doing all the work for us?

You know, I think that might be possible, although I think it's going to take a very long time, for the simple reason that the complexity of the investment process suggests that there will always be a group of individuals who are able to provide unique services to individual investors seeking those services. I don't mean things like index funds; index funds are already pretty much run by AI. The portfolio optimizers that can construct an S&P index fund can do that in the blink of an eye, and you don't need to worry about a lot of oversight on the management of those portfolios. But I'm thinking about hedge fund strategies, where somebody says, I want to look for unique opportunities in the energy field; when we have a discovery of a particular kind of energy technology, I want to invest in that. I think those are still going to require human oversight. But more and more now, I think it's going to be a partnership between technology and human judgment that will be able to make those kinds of decisions. And in the end, I really think that's what technology does: it's not necessarily going to replace humans altogether, but it's going to change the domain over which we spend our time and add value to an investment process. Thank you. Thank you.

[Applause]
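As an aside for readers: the "looking five, six, seven steps ahead" that makes engine moves inscrutable in the chess discussion above is, at its core, depth-limited minimax search. The sketch below is a minimal, hypothetical illustration on a toy take-away game (players alternately remove 1-3 stones; whoever takes the last stone wins), not a model of any real chess engine.

```python
# Minimal sketch of depth-limited minimax: the idea behind an engine
# "looking N steps ahead". Toy game (for illustration only): players
# alternately take 1-3 stones from a pile; taking the last stone wins.

def minimax(stones, depth, maximizing):
    """Score a position: +1 = maximizer wins, -1 = minimizer wins, 0 = unknown."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # search horizon reached: deeper outcomes are invisible
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth):
    """Pick the number of stones to take that maximizes the lookahead score."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, depth - 1, False))

print(best_move(5, 5))  # with enough depth: take 1, leaving a losing 4
```

With a deep enough search, the program proves that taking 1 from a pile of 5 forces a win (it leaves the opponent a multiple of 4); with depth 1, every move scores 0 because the horizon hides the outcome. That gap between what a deep search "sees" and what a shallow one can justify is exactly the experience of not understanding a tactics-trainer answer even after being told it.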
Info
Channel: Simons Institute
Views: 11,508
Rating: 4.970696 out of 5
Keywords: Simons Institute, theoretical computer science, UC Berkeley, Andrew Lo, Theoretically Speaking, Artificial Intelligence
Id: zqw1nmJ7XZM
Length: 87min 55sec (5275 seconds)
Published: Wed Dec 11 2019