Yuval Noah Harari and Tristan Harris interviewed by Wired

Captions
Hello, I'm Nicholas Thompson, editor in chief of Wired magazine, and I'm here with two of my favorite thinkers. Yuval Noah Harari is the author of three number-one best-selling books, including 21 Lessons for the 21st Century, which came out this week and which I just finished this morning; it is extraordinary. Tristan Harris is the man who got the whole world talking about time well spent, and he has just founded the Center for Humane Technology. I like to think of Yuval as the man who explains the past and the future, and Tristan as the man who explains the present. We're going to be here for a little while talking, and it's a conversation inspired by one of the things that matters most to me: Wired magazine is about to celebrate its 25th anniversary, and when the magazine was founded, the whole idea was that it's a magazine about optimism and change, and technology was good and change is good. Twenty-five years later, you look at the world today and you can't really hold the entirety of that philosophy. So I'm here to talk with Yuval and Tristan. Hello.

Thank you, good to be here.

Tristan, why don't you tell me a little bit about what you do, and then Yuval, you tell me too.

I am a director of the Center for Humane Technology, where we focus on realigning technology with a clear-eyed model of human nature. Before that I was a design ethicist at Google, where I studied the ethics of human persuasion.

I'm a historian, and I try to understand where humanity is coming from and where we are heading.

Let's start by hearing how you two know each other, because I know it goes back a while. When did the two of you first meet?

Funnily enough, on an expedition to Antarctica. Not with Scott and Amundsen; we were invited by the Chilean government to the Congress of the Future, to talk about the future of humankind, and one part of the Congress was an expedition to the Chilean base in Antarctica, to see global warming with our own eyes. It was still very cold,
and it was interesting, and there were so many interesting people on this expedition, a lot of philosophers, Nobel laureates. I think we particularly connected with Michael Sandel, who is a really amazing moral philosopher.

It's almost like a reality show; I would love to have been able to see the whole thing. Well, let's get started with what I think is one of the most interesting continuities between both of your work. You write about different things, you talk about different things, but there are a lot of similarities, and one of the key themes is the notion that our minds don't work the way we sometimes think they do: we don't have as much agency over our minds as perhaps we believed until now. Tristan, why don't you start talking about that, then Yuval jump in, and we'll go from there.

Yeah, well, I actually learned a lot of this from one of Yuval's early talks, where he talks about democracy as the question of where we should put authority in a society, and that we put it in the opinions and feelings of people. My whole background: I actually spent the last ten years studying persuasion, starting when I was a magician as a kid, where you learn that there are things that work on all human minds. It doesn't matter whether they have a PhD, what education level they have, whether they do nuclear physics, what age they are. It's not like, oh, if you speak Japanese, I can't do this trick on you, it's not gonna work, or if you have a PhD. It works on everybody. So somehow there is this discipline which is about universal exploits on all human minds. Then I was at a lab called the Persuasive Technology Lab at Stanford, which teaches engineering students how to apply the principles of persuasion to technology: could technology be hacking human feelings, attitudes, beliefs, behaviors to keep people engaged with products? I think that's the thing we both share: the human mind is not the totally secure enclave, the root of authority, that we think it is, and if we want to treat it that way, we're going to have to understand what needs to be protected first. That's my perspective.

Yeah, I think we are now facing really not just a technological crisis but a philosophical crisis, because we have built our society, certainly liberal democracy with elections and the free market and so forth, on philosophical ideas from the 18th century which are simply incompatible not just with the scientific findings of the 21st century, but above all with the technology we now have at our disposal. Our society is built on the ideas that the voter knows best, that the customer is always right, that ultimate authority, as Tristan said, is with the feelings of human beings, and this assumes that human feelings and human choices are this sacred arena which cannot be hacked, which cannot be manipulated; that ultimately my choices, my desires reflect my free will, and nobody can access that or touch that. This was never true, but we didn't pay a very high cost for believing in this myth in the 19th and 20th centuries, because nobody had the technology to actually do it. Now some people, corporations, governments, are gaining the technology to hack human beings. Maybe the most important fact about living in the 21st century is that we are now hackable animals.

Well, explain what it means to hack a human being, and why what can be done now is different from what could be done a hundred years ago with religion, or with a book, or with anything else that influences what we see and changes the way we think about the world.

To hack a human being is to understand what's happening inside you, on the level of the body, of the brain, of the mind, so that you can predict what people will do and you can understand how they feel; and of course, once you understand and predict, you can usually also manipulate and control, and even replace. Of course it can't be done perfectly, and it was possible to do it to some extent also a century ago, but the difference
in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us will never be perfect; there is no such thing as understanding everything perfectly or predicting everything. You don't need perfect; you just need to be better than the average human being.

And are we there now, or are you worried that we're about to get there?

I think Tristan may be able to answer where we are right now better than me, but I guess that if we are not there now, we are approaching very, very fast.

Yeah, I think a good example of this is YouTube, a relatable example. You open up that YouTube video your friend sends you after your lunch break. You come back to your computer and you think, OK, I know those other times I ended up watching two or three videos and got sucked in, but this time it's gonna be really different: I'm just gonna watch this one video. And then somehow that's not what happens. You wake up from a trance three hours later and say, what the hell just happened? It's because you didn't realize you had a supercomputer pointed at your brain. When you open up that video, you're activating Google/Alphabet's billions of dollars of computing power, and it has looked at what has ever gotten two billion human animals to click on another video. It knows way more about what's gonna be the perfect chess move to play against your mind. If you think of your mind as a chessboard, and you think you know the perfect move to play, 'I'll just watch this one video,' you can only see so many moves ahead on the chessboard. But the computer sees your mind, and it says, no, no, I've played a billion simulations of this chess game before, on these other human animals watching YouTube, and it's gonna win. Think about when Garry Kasparov loses against Deep Blue. Garry Kasparov can see so many moves ahead on the chessboard, but he can't see beyond a certain point, the way a mouse can see so many moves ahead in a maze, but a human can see way more moves ahead, and then Garry can see even more moves ahead. When Garry loses against IBM's Deep Blue, that's checkmate against humanity for all time, because he was the best human chess player. So it's not that we're completely losing human agency, that you walk into YouTube and it controls you for the rest of your life and you never leave the screen; but everywhere you turn on the internet, there's basically a supercomputer pointed at your brain, playing chess against your mind, and it's gonna win.

That casts the technology fully as an opponent. But YouTube is also gonna, I hope (please, gods at YouTube), recommend this particular video to people, which I hope will be elucidating and illuminating. So is chess really the right metaphor, a game with a winner and a loser?

Well, the question is, what is the game being played? If the game being played were, 'hey Nick, go meditate in a room for two hours and then come back and tell me what you really want right now in your life,' and if YouTube were using two billion human animals to calculate, based on everybody who's ever wanted to learn how to play ukulele, 'here's the perfect video to teach you how to play ukulele,' that could be great. The problem is it doesn't actually care about what you want; it just cares about what will keep you next on the screen. And we've actually found, we have an ex-YouTube engineer who works with us who has shown this, that there's a systemic bias in YouTube. If you airdrop a human animal, and she lands as, let's say, a teenage girl, and she watches a dieting video, the thing that works best at keeping that girl watching the longest is to say, here's an anorexia video. Because between, you know, the calmer, truer stuff and the more
insane, divisive, outrageous, intense conspiracy stuff, YouTube, if it wants to keep your time, will always steer you down that road. So if you airdrop a person on a 9/11 video, just a fact-based news video about the 9/11 news event, the autoplaying video is the Alex Jones InfoWars video.

So what happens after this conversation?

Who knows what will follow it, right? And the problem is that you can also hack these things. There are governments who can actually manipulate the way the recommendation system works, by throwing thousands of headless browsers, you know, versions of Firefox, at one video and then getting them to search for another video, simulating thousands of people going from a Yuval Harari video to watching 'the power of Putin' or something like that, and then that will be the top recommended video. So, as Yuval says, these systems are kind of out of control, and algorithms are kind of running where two billion people spend their time. Seventy percent of what people watch on YouTube is driven by recommendations from the algorithm. People think that what you're watching on YouTube is a choice, that people sit there, think, and then choose. But that's not true: 70 percent of what people are watching is the recommended videos on the right-hand side. Which means 70 percent of where 1.9 billion users, that's more than the number of followers of Islam, about the number of followers of Christianity, are looking on YouTube for 60 minutes a day, the average time people spend on YouTube; so we've got 60 minutes, and 70 percent of it is populated by a computer. Now the machine is out of control, because if you thought 9/11 conspiracy theories were bad in English, try what 9/11 conspiracies are like in Burmese, in Sri Lanka, in Arabic. No one's looking at that. So it's kind of a digital Frankenstein, pulling on all these levers and steering people in all these different directions.

And Yuval, we got to this point by you saying that this scares you for democracy, that it makes you worried whether democracy can survive; or, I believe the phrase you use in your book is, democracy will become a puppet show.

Yeah. If it doesn't adapt to these new realities, it will become just an emotional puppet show. If you go on with this illusion that human choice cannot be hacked, cannot be manipulated, and that we can just trust it completely as the source of all authority, then very soon you end up with an emotional puppet show, and this is one of the greatest dangers that we are facing. It really is the result of a kind of philosophical impoverishment, of just taking for granted philosophical ideas from the 18th century and not updating them with the findings of science. And it's very difficult, because people don't want to hear this message, that they are hackable animals, that their choices, their desires, their understanding of who am I, what are my most authentic aspirations, can actually be hacked and manipulated. To put it briefly: my amygdala may be working for Putin. I don't want to know this. I don't want to believe it. No, I'm a free agent. If I'm afraid of something, this is because of me, not because somebody implanted this fear in my mind. If I choose something, this is my free will, and who are you to tell me anything else?

I'm hoping that Putin will soon be working for my amygdala, but that's a side project I have going. It seems inevitable, from what you wrote in your first book, that we would reach this point, where human minds would be hackable and where computers and machines and AI would have a better understanding of us. But it's certainly not inevitable that it would lead us to negative outcomes, to 9/11 conspiracy theories and a broken democracy. So have we reached the point of no return? How do we avoid the point of no return, if we haven't reached it? And what are the
key decision points along the way?

Well, nothing is inevitable in that. I mean, the technology itself is going to develop; you can't just stop all research in AI, and you can't stop all research in biotech, and the two go together. I think AI gets too much attention now, and we should put equal emphasis on what's happening on the biotech front, because in order to hack human beings, you need biology, and some of the most important tools and insights are not coming from computer science, they are coming from brain science. Many of the people who design all these amazing algorithms have a background in psychology and brain science, because this is what you're trying to hack. But we should realize that we can use the technology in many different ways. For example, we're now using AI mainly in order to surveil individuals in the service of corporations and governments, but it can be flipped in the opposite direction: we can use the same surveillance systems to monitor the government in the service of individuals, to monitor, for example, that government officials are not corrupt. The technology can do that. The question is whether we are willing to develop the necessary tools to do it.

The thing is that biotech lets you understand, by hooking up a sensor to someone, features about that person that they won't know about themselves, and we're increasingly reverse-engineering the human animal. One of the interesting things I've been following is the ways you can ascertain those signals without an invasive sensor. We were talking about this a second ago. There's something called Eulerian video magnification, where you point a computer camera at a person's face, and where a human being can't, I can't look at your face and see your heart rate; my intelligence doesn't let me see that.

You can see my eyes dilating, right?

But I can see your eyes dilating, so I'm terrified. If I put a supercomputer behind the camera, I can actually run a mathematical equation and find the micro-pulses of blood in your face that I as a human can't see but the computer can, so I can pick up your heart rate. What does that let me do? I can pick up your stress level, because heart rate variability gives you your stress level. There's a woman named Poppy Crum who gave a TED talk this year about the end of the poker face: we have this idea that there can be a poker face, that we can actually hide our emotions from other people, but that talk is about the erosion of that, that we can point a camera at your eyes and see when your eyes dilate, which actually detects cognitive strain, whether you're having a hard time understanding something or an easy time. We can continually adjust this based on your heart rate and your eye dilation. You know, one of the things with Cambridge Analytica, which is all about the hacking of Brexit and Russia and the other U.S.
elections, that was based on the idea that if I know your Big Five personality traits, if I know Nick Thompson's personality through OCEAN, openness, conscientiousness, extraversion, agreeableness, and neuroticism, that gives me your personality, and based on your personality I can tune a political message to be perfect for you. Now, the whole scandal there was that Facebook let this data go, to be stolen by a researcher who had people fill in questionnaires to figure out what Nick's Big Five personality traits are. But now there's a woman named Gloria Mark at UC Irvine who has done research showing you can actually get people's Big Five personality traits just from their click patterns alone, with 80 percent accuracy. So again, the end of the poker face, the end of the hidden parts of your personality: we're going to be able to point AIs at human animals and figure out more and more signals from them, including their micro-expressions, when you smirk, all these things. We've got face-ID cameras on all of these phones. So now, if you have a tight loop where I can adjust the political messages in real time to your heart rate and your eye dilation and your political personality, that's not a world you want to live in. It's a kind of dystopia.

There are many contexts where you can use that. It can be used in class, to figure out that a student is not getting the message, that the student is bored, which could be a very good thing. It could be used by lawyers: you negotiate a deal, and if I can read what's behind your poker face and you can't, that's a tremendous advantage for me. It can be done in a diplomatic setting, like two prime ministers meeting to, I don't know, resolve the Israeli-Palestinian conflict, and one of them has an earpiece, and a computer is whispering in his ear the true emotional state, what's happening in the brain, in the mind, of the person on the other side of the table. And what happens when the two sides have this and you
have a kind of arms race? We just have absolutely no idea how to handle these things. I'll give a personal example, which I talked about in Davos. My entire approach to these issues is shaped by my experience of coming out. I realized that I was gay when I was 21, and ever since then I have been haunted by this thought: what was I doing for the previous five or six years? How is it possible? I'm not talking about something small that you don't know about yourself; everybody has something they don't know about themselves. But how can you possibly not know this about yourself? And then the next thought is: a computer and an algorithm could have told me that when I was 14, so easily, just by something as simple as following the focus of my eyes. Like, I don't know, I walk on the beach, or I even watch television, and there is, this was the 1980s, Baywatch or something, and there is a guy in a swimsuit and a girl in a swimsuit, and which way are my eyes going? It's as simple as that. And then I think: what would my life have been like, first, if I had known when I was 14, and secondly, if I had gotten this information from an algorithm? There is something incredibly deflating for the ego, that this is the source of this deep wisdom about myself: an algorithm that followed my eye movements.

And there's an even creepier element, which you write about in your book: what if Coca-Cola had figured it out first, and was selling you Coke with shirtless men, when you didn't even know you were gay?

Exactly. Coca-Cola versus Pepsi: Coca-Cola knows this about me and shows me a commercial with a shirtless man. Pepsi doesn't know this about me, because they are not using these sophisticated algorithms, so they go with the normal commercials, with the girl in the bikini. And naturally enough, I buy Coca-Cola, and I don't even know why. Next morning, when I go to the supermarket, I buy Coca-Cola and I think this is my free choice, I chose Coke. But no, I was hacked.

Right. And so this is the whole issue, this is everything we're talking about: how do you trust something that can pull these signals off of you? If a system is asymmetric, if you know more about me than I know about myself, we usually have a name for that in law. For example, when you deal with a lawyer, you hand over your very personal details to the lawyer so they can help you; but then they have this knowledge of the law, and they know your vulnerable information, so they could exploit you with it. Imagine a lawyer who took all that personal information and sold it to somebody else. Lawyers are governed by a different relationship, though, the fiduciary relationship: they can lose their license if they don't actually serve your interest. It's similar with a doctor or a psychotherapist; they also have it. So there's this big question: how do we hand over information about ourselves and say, I want you to use that to help me, and on whose authority can I guarantee that you will?

The moment when we are handing over the information, with the lawyer there is this formal setting: OK, I hire you to be my lawyer, this is my information, and we know this. But I'm just walking down the street, there is a camera looking at me, I don't even know that, and they are hacking me through it. I don't even know it's happening.

That's the most duplicitous part of it. We often say, if you want to know what Facebook is, imagine a priest in a confession booth, and they listen to two billion people's confessions, but they also watch you around your whole day, what you click on, which ads of Coca-Cola or Pepsi, of the shirtless men and the shirtless women, and all the conversations you have with everybody else in your life, because they have Facebook Messenger; they have that data too. But imagine that this priest in a confession booth's entire business model is to sell access to the confession booth to another party, so someone else can manipulate you, because that's the only way the priest makes money. They don't make money any other way.

There are two giant entities that will have, well, there are a million entities that will have this data, but there are large corporations, you mentioned Facebook, and there are governments. Which do you worry about more?

It's the same. Once you go beyond a certain point, it doesn't matter what you call it; this is the entity that actually rules, whoever has this data. Even if you are in a setting where you still have a formal government, but this data is in the hands of some corporation, then the corporation, if it wants, can decide who wins the next elections, so it's not really much of a choice. But there is a choice: we can design a different political and economic system in order to prevent this immense concentration of data and power in the hands of either governments or corporations that use it without being accountable and without being transparent about what they are doing. The message is not, OK, it's over, humankind is in the dustbin of history.

That's not the message.

No, that's not the message.

My eyes have stopped dilating; keep this going.

The real question is, we need to get people to understand this is real. This is happening. There are things we can do. And, you know, you have midterm elections in a couple of months, so in every debate, every time a candidate goes to meet potential voters, in person or on television, ask them this question: what is your plan, what is your take on this issue, what are you going to do if we elect you? If they say, 'I don't know what you're talking about,' that's a big problem.

I think the problem is most of them have no idea what we're talking about, and it's like one of the
issues is, I think, that policymakers, as we've seen, are not very educated on these issues.

They're doing so much better this year than last year, though. Watching the Senate hearings, the last hearings with Jack Dorsey and Sheryl Sandberg, versus watching the Zuckerberg hearings or the Colin Stretch hearings, there has been improvement.

It's true. There's much more, though. I mean, these issues open up a whole space of possibility. We don't even know yet the kinds of things we're going to be able to predict. We've mentioned a few examples that we know about, but if you have a secret way of knowing something about a person by pointing a camera at them, with AI, why would you publish it? So there are lots of things that can be known about us to manipulate us right now that we don't even know about. How do we start to regulate that? I think the relationship we want to govern is: when a supercomputer is pointed at you, that relationship needs to be protected and governed.

OK, so there are three elements in that relationship. There's the supercomputer: what does it do, what does it not do? There's the dynamic of how it's pointed: what are the rules over what it can collect, what are the rules over what it can't collect and store? And there's you: how do you train yourself to act, how do you train yourself to have self-awareness? So let's talk about all three of those areas, maybe starting with the person. What should the person do in the future to survive better in this dynamic?

One thing I would say about that is, I think self-awareness is important. It's important that people know the thing we're talking about and realize that we can be hacked. But it's not a solution. You have millions of years of evolution that guide your mind to make certain judgments and conclusions. A good example of this is, if I put on a VR helmet, and now suddenly I'm in a space where there's a ledge, I'm at the edge of a cliff, I consciously know I'm sitting here in a
room with Yuval and Nick. I know that consciously, so I've got the self-awareness, I know I'm being manipulated, but if you push me, I'm not gonna want to fall, right? Because I have millions of years of evolution telling me you are pushing me off a ledge. In the same way, Dan Ariely, the behavioral economist, makes this joke that flattery works on us even if I tell you I'm making it up. It's like: Nick, I love your jacket right now. It's a great jacket.

You know, I actually picked it out because I knew, from studying your carbon dioxide exhalation yesterday...

Exactly. The point is that even if you know I'm just making it up, it still actually feels good. The flattery feels good. So I think this is like a new era, kind of a new enlightenment, where we have to see ourselves in a very different way. And that doesn't mean that's the whole answer; it's just the first step we all have to walk. The first step is recognizing that we're all vulnerable, hackable.

Right, vulnerable.

But there are differences. Yuval is way less hackable than I am, because he meditates two hours a day and doesn't use a smartphone.

I'm so lucky, yeah.

So what are the other things a human can do to be less hackable?

You need to get to know yourself as best as you can. It's not a perfect solution, but if somebody is running after you, you run as fast as you can. I mean, it's a competition: who knows you best in the world? When you are two years old, it's your mother. Eventually you hope to reach a stage in life when you know yourself even better than your mother does. And then suddenly you have this corporation or government running after you, and they are way past your mother, and they are at your back. This is the critical moment: they are about to know you better than you know yourself. So run a little faster. And there are many ways you can run faster, meaning getting to know yourself a bit better. Meditation is one way, and there are hundreds of techniques of meditation; different ones work for different people. You can go to therapy, you can use art, you can do sport, whatever works for you. But it's now becoming much more important than ever before. You know, it's the oldest advice in the book: know yourself. But in the past you did not have competition. If you lived in ancient Athens and Socrates came along and said, 'know yourself, it's good for you,' and you said, 'no, I'm too busy, I have this olive grove, I have these deals to do, I don't have time,' OK, you didn't get to know yourself better, but there was nobody else who was competing with you. Now you have serious competition, so you need to get to know yourself better. That's the first maxim. Secondly, as an individual, if we talk about what's happening to society, you should realize you can't do much by yourself. Join an organization. If you're really concerned about this, this week, join some organization. Fifty people who work together are a far more powerful force than fifty individuals, each of whom is an activist. It's good to be an activist; it's much better to be a member of an organization. And then there are the other tested and tried methods of politics: we need to go back to this messy thing of making political regulations and choices. It's maybe the most important thing: politics is about power, and this is where power is right now.

I think there's a temptation to say, OK, how can we protect ourselves? And when this conversation shifts into 'my smartphone not hacking me,' you get things like, oh, I'll set my phone to grayscale, oh, I'll turn off notifications. But what that misses is that you live inside of a social fabric. When we walk outside, my life depends on the quality of other people's thoughts, beliefs, and lives. So if everyone around me believes a conspiracy theory because
YouTube is taking 1.9 billion human animals and tilting the playing field so everyone watches Infowars by the way YouTube has driven 15 billion recommendations of Alex Jones Infowars and that's that's recommendations and then 2 billion views if only one in the people believe those two billion views that's still - was it two million two million like thematics is noticeable and so if that's two million people that's still two million new conspiracy theorists so if everyone else is walking around in the world you don't get to do that if you say hey I'm a kid I'm a teenager and I don't want to care about the number of likes I get so I'm gonna stop using snapchat or Instagram I don't want to be hacked for myself worth in terms of likes so if I'm using if I'm a teenager and I'm using snapchat or Instagram and I don't want to be hacked for my self-worth in terms of the number of likes I get I can say I don't want to use those things but I still live in a social fabric where all my other sexual opportunities social opportunities homework transmission where people talk about that stuff if they only use Instagram I have to participate in that social fabric so I think we have to elevate the conversation from how do I make sure I'm not hacked it's not just an individual conversation and we want society to not be hacked which goes to the political point and sort of how do we politically mobilize - as a group to change the whole industry I mean for me I think about the tech mystery but all right so that's sort of step one in this three step question what can individuals do know yourself make society more resilient make society let's say it'll be hacked what about the transmission between the supercomputer and the human what are the rules and how should we think about how to limit the ability of the supercomputer - that's a big one that's why we're here in essence I think that we need to come to terms with the fact that we can't prevent it completely and it's not because of the 
AI; it's because of the biology. It's just the type of animals that we are, and the type of knowledge that we now have about the human body and the human brain. We have reached a point where this is really inevitable, and you don't even need a biometric sensor: you can just use a camera to tell what my blood pressure is, what's happening now, and through that what's happening to me emotionally. So I would say we need to completely reconceptualize our world, and this is why I began by saying that we suffer from philosophical impoverishment: we are still running on the ideas of basically the 18th century, which were good for two or three centuries, which were very good, but which are simply not adequate to understanding what's happening right now. Which is why I also think that, with all the talk about the job market and what you should study today that will be relevant to the job market in 20 or 30 years, I think philosophy is one of the best bets. Yeah, I sometimes joke that my wife studied philosophy and dance in college, which at the time seemed like the two worst professions, because you can't really get a job in either, but now they're like the last two things that will get replaced by robots. I think Yuval is right. This conversation usually makes people conclude that there's nothing about human choice or the human mind's feelings that's worth respecting, and I don't think that's the point. I think the point is that we need a new kind of philosophy that acknowledges a certain kind of thinking or cognitive process or conceptual process or social process that we want. For example, James Fishkin is a professor at Stanford who's done work on deliberative democracy and shown that if you get a random sample of people in a hotel room for two days, and you have experts come in and brief them about a bunch of things, they change their minds about issues; they go from being polarized to less
polarized; they can come to more agreement. There's a process there that you can point to and say: that's a social, cognitive sense-making process that we might want to be sampling from, as opposed to an alienated, lonely individual who's been shown photos of their friends having fun without them all day, and then we hit them with Russian ads. We probably don't want to be sampling a signal from that person; we don't want that process to be the basis of how we make collective decisions. So I think we're still stuck, we're stuck with this mind and body and we're not getting out of it, so we'd better learn how to use it in a way that brings out the higher angels of our nature and the more reflective parts of ourselves. So I think what technology designers need to do is ask that question. A good example, just to make it practical: let's take YouTube again. Say you watch a ukulele video; it's a very common thing on YouTube, there are lots of ukulele videos, how to play ukulele. What's going on in the moment when it recommends other ukulele videos? There's actually a value there: someone wants to learn how to play the ukulele. But the computer doesn't know that; it's just recommending more ukulele videos. If it really knew that about you, instead of just saying "here are infinitely more ukulele videos to watch," it might say, "here are ten of your friends who know how to play ukulele, who you didn't know play ukulele, and you can go hang out with them." It could basically put those choices at the top of life's menu. The problem is that when a teenager watches that dieting video, the computer doesn't know that the thing you're really after in that moment isn't that you want to be anorexic. It
just knows that people who watch those tend to fall for these anorexia videos. If you can't get at this underlying value, this thing that people actually want... I mean, the system in itself can do amazing things for us; we just need to turn it around so that it serves our interests, whatever they are, and not the interests of the corporation or the government. Okay, now that we realize our brains can be hacked, we need an antivirus for the brain, just as we have one for the computer, and it can work on the basis of the same technology. Let's say you have an AI sidekick who monitors you all the time, 24 hours a day: what you write, what you see, everything. But this AI is serving you; it has this fiduciary responsibility. And it gets to know your weaknesses, and by knowing your weaknesses it can protect you against other agents trying to hack you and exploit them. So if you have a weakness for funny cat videos, and you spend an enormous amount of time just watching them, and you know it's not very good for you but you just can't stop yourself clicking, then the AI will intervene, and whenever a funny cat video tries to pop up it will show you a message that somebody just tried to hack you, the way you get messages that somebody just tried to infect your computer with a virus. And, I mean, the hardest thing for us is to admit our own weaknesses and biases, and it can go all ways: if you have a bias against Trump or against Trump supporters, you would very easily believe any story, however far-fetched and ridiculous. I don't know: Trump thinks the world is flat, Trump is in favor of killing all the Muslims. You would click on that. This is your bias, and the AI will know that. It's completely neutral; it doesn't serve any entity out there. It just gets to know your weaknesses and biases and tries to protect you
against them. But how does it learn that something is a weakness or a bias, and not something you like? How does it know that when you click the ukulele video that's good, and when you click the other one it's not? This is where you already need a richer philosophical framework, because if you have that, then you can make that judgment. If a teenager is sitting there, and in that moment they're watching the dieting video, and then they're shown the anorexia video, imagine that instead of a 22-year-old male engineer who went to Stanford, a computer scientist thinking "what's the perfect thing I can show them next," you had an 80-year-old child-development psychologist, who studied under the best child-development psychologists and thought about the fact that in those kinds of moments, the thing that's usually going on for a teenager at age 13 is a feeling of insecurity, identity development, experimentation, and asked what would actually be best for them. So the whole framework of humane technology is that we have to hold up a mirror to ourselves to understand our vulnerabilities first, and you design starting from a view of what we're vulnerable to. From a practical perspective, I totally agree with this idea of the AI sidekick, but if we're imagining the scarier reality that we're talking about right now, and this is not some sci-fi future, this is the actual state of things, then if we're thinking about how to navigate to a state of affairs that we actually want, we probably don't want an AI sidekick to be an optional thing that rich people can afford and other people can't. We probably want it to be baked into the way technology works in the first place, so that it does have a fiduciary responsibility to our best, subtle, compassionate, vulnerable interests. So will we have government-sponsored AI sidekicks? Will we have corporations that sell us AI sidekicks but subsidize them, so it's not just
the affluent who have really good ones? One thing is to change the way things work so that if you go to university or college and learn computer science, an integral part of the course is to learn about ethics, about the ethics of coding. It's extremely irresponsible that you can finish a degree in computer science, in coding, and design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you're doing; you're thinking only in terms of pure technicality or in economic terms. So this is one thing that bakes it into the cake in the first place. Let me ask you something that has come up a couple of times that I've been wondering about. When you were giving the ukulele example, you talked about maybe you should go see friends who play ukulele, you should go visit them offline. And in your book you say that one of the crucial moments for Facebook will come when an engineer realizes that the thing that is better for the person and for the community is for them to leave their computer, and then what will Facebook do with that? So it does seem, from a moral perspective, that if a platform realizes it would be better for you to go offline and see somebody, it should encourage you to do that, but then it will lose money and it will be outcompeted. So how do you actually get to the point where the algorithm, the platform, pushes somebody in that direction? So this is where the business model conversation comes in and is so important, and also why Apple and Google's role is so important, because they sit before the business model of all these apps that want to steal your time and maximize attention; Apple doesn't need that. Well, anyway, and in the Android case too. So, Android and iOS, not to make this too technical or too industry-focused a conversation, but they should
theoretically be that layer. You have just the device: whom should it be serving? Whose best interests is it serving? Does it want to make the apps as successful as possible, maximize the time spent, the addictiveness, the loneliness and alienation and social comparison, all that stuff? Or should that layer be a fiduciary, like the AI sidekick, to our deepest interests, to our physical, embodied lives, to our physical, embodied communities? We can't escape this instrument, and it turns out that being inside a community and having face-to-face contact matters; there's a reason why solitary confinement is the worst punishment we give human beings. And we have technology that's basically maximizing isolation, because it needs to maximize the time we spend on the screen. So I think one question is how Apple and Google can move their entire businesses to be about embodied, local, fiduciary responsibility to society. This is what we think of as humane technology; that's the direction they can go. Facebook could also change its business model to be more about payments and people transacting based on exchanging things, which is something they're looking into with the blockchain stuff they're theoretically working on, and also Messenger payments. If they moved from an advertising-based business model to micropayments, they could actually shift the design of some of those things, and there could be whole teams of engineers at News Feed just thinking about what's best for society. And then people would still ask, who is Facebook to say what's good for society? But you can't get out of that situation, because they do shape what two billion human animals will think. This gets me to one of the things I most want to hear your thoughts on. Apple and Google have both done this to some degree in the last year, and Facebook has too, I believe; every executive at every tech company has said "time well spent" at some point in
the last year. We've had a huge conversation about it, and people have bought 26 trillion of these books. Do you actually think that we are heading in the right direction at this moment, because change is happening and people are thinking? Or do you feel like we're still going in the wrong direction? I think that in the tech world we are going in the right direction, in the sense that people are realizing the stakes, realizing the immense power that they have in their hands. I'm talking about the people in the tech world: they are realizing the influence they have on politics, on society. And most of them react, I think, maybe not in the best way possible, but certainly in a responsible way, understanding: yes, we have this huge impact on the world; we didn't plan it, maybe, but it's happening, and we need to think very carefully about what we do with it. They still don't know what to do; nobody really knows. But at least the first step has been accomplished: realizing what is happening and taking some responsibility. The place where we see a very negative development is on the global level, because all this talk so far has really been internal Silicon Valley, California, USA talk. But things are happening in other countries. All the talk we've had so far assumed what's happening in liberal democracies and free markets; in some countries you may have no choice whatsoever, you just have to share all your information and do what the government-sponsored algorithm tells you to do. So it's a completely different conversation. And then another complication is the AI arms race. Five years ago, even two years ago, there was no such thing; now it's maybe the number one priority in many places around the world that there is an arms race going on in AI, and our country needs to win it. And when you enter an arms race situation, it very quickly becomes a race to the bottom, because you can
very often hear this: okay, it's a bad idea to develop that, but they are doing it, and it gives them some advantage, and we can't stay behind. We are the good guys, we don't want to do it, but we can't allow the bad guys to be ahead of us, so we must do it first. And if you ask the other side, they will say exactly the same thing. This is an extremely dangerous development. It's the prisoner's dilemma we've seen over the last two years; yeah, it's a multipolar trap. Every actor... no one wants to build slaughterbot drones, but if I think you might be doing it, even though I don't want to, I have to build them, and you build them, and we both hold them. And at an even deeper level, if you want to build some ethics into your slaughterbot drones, that will slow you down. One thing I think we talked about when we first met was the ethics of speed, of clock rate: we're in essence competing on who can go faster to make this stuff, but faster means more likely to be dangerous, less likely to be safe. So we're basically racing as fast as possible to create the things we should probably be creating as slowly as possible. It's much like high-frequency trading in the financial markets: if you have this open-ended contest of who can beat whom by trading a microsecond faster, what that turns into, and this has been well documented, is people blowing up whole mountains so they can lay copper cables and trade a microsecond faster. You're not even competing on an Adam Smith version of what we value or anything like that; you're competing on who can blow up mountains and make transactions faster. When you add the high-frequency-trading dynamic to who can hack and program human beings faster, and who's more effective at manipulating culture wars across the world, that just becomes this race to the
bottom of the brain stem, of total chaos. So I think we have to ask how we slow this down and create a sensible pace, and I think this is also a humane-technology question: ask child-development psychologists, what are the clock rates of human decision-making at which we actually tend to make good, thoughtful choices? You probably don't want a whole society revved up to making a hundred choices per hour about something that really matters. So what is the right clock rate? I think we have to actually have technology steer us toward those kinds of decision-making processes. So, back to the original question: you're somewhat optimistic about some of the small things that are happening in this very small place, but deeply pessimistic about the complete obliteration of humanity? I think Yuval's point is right, that there's a question about the tech companies, which are bigger than many governments. Facebook controls 2.2 billion people's thoughts; Mark Zuckerberg is editor-in-chief of 2.2 billion people's thoughts. But then there are also world governments, sorry, national governments, that are governed by a different set of rules. I think the tech companies are very, very slowly waking up to this. So far, with the time-well-spent stuff, for example, it's "let's help people, because they're vulnerable to how much time they spend; set a limit on how much time they spend." But that doesn't tackle any of the bigger issues: how you can program the thoughts of a democracy, how mental health and alienation can be rampant among teenagers, leading to a doubling of the rate of teen suicide for girls in the last eight years. We are going to have to have a much more comprehensive view and a restructuring of the tech industry to think about what's good for people, and there's going to be an uncomfortable transition. I've used this
metaphor: it's like with climate change. There are certain moments in history when an economy is propped up by something we don't want. The biggest example of this is slavery in the 1800s. There was a point at which slavery was propping up the entire world economy; you couldn't just say, "we don't want to do this anymore, let's just suck it out of the economy." The whole economy would have collapsed if you did that. But the British Empire, when they decided to abolish slavery, had to give up two percent of their GDP every year for 60 years, and they were able to make that change over a transition period. Now, I'm not equating advertising, or programming human beings, with slavery; I'm not. But there's a similar structure. If you look at the stock market, a huge chunk of its value is driven by these advertising-based, human-animal-programming systems. If we wanted to suck out that advertising model, we could afford the transition, but there would be awkward years where you're basically on that long transition path, and I think in this moment we have to do it much faster than we've done it in other situations, because the threats are more urgent. Yuval, do you agree that that is one of the things we have to think about as we try to fix the world system over the next... It's one of the things. But then again, the problem of humanity is not just the advertisement model. I mean, the basic tools were designed... you had the brightest people in the world, 10 or 20 years ago, cracking this problem of how do I get people to click on ads. Some of the smartest people ever; this was their job, to solve this problem, and they solved it. And then the methods they initially used to sell us underwear and sunglasses and vacations in the Caribbean were hijacked and weaponized, and are now used to sell us all kinds of things, including
political opinions and entire ideologies. And it's now no longer under the control of the tech giants in Silicon Valley that pioneered these methods; the methods are now out there. So even if, you know, Google and Facebook completely gave them up, the cat is out of the bag. People already know how to do it, and there is an arms race in this arena. So yes, we need to fix this advertisement business, it's very important, but it won't solve the human problem. I think the only really effective way to do it now is on the global level, and for that we need global cooperation on regulating AI, regulating the development of AI and of biotechnology, and we are of course heading in the opposite direction, away from global cooperation. I agree, actually. There's this notion that, sure, Facebook and Google could do it, but that doesn't matter, because the cat's out of the bag: governments are going to do it, other companies are going to do it, Russia's tech infrastructure is going to do it. So how do you stop it from happening? To bring it back, and again not to equate the two: when the British Empire decided to abolish slavery and subtract their economy's dependence on it, they were actually concerned that if they did it, France's economy would still be powered by slavery and would soar way past them, so from a competition perspective they couldn't do it. But the way they got there was by turning it into a universal global human-rights issue. That took a longer time, but I think, as Yuval says, this is a global conversation about human nature and human freedom, if there is such a thing, or at least the kinds of human freedom that we want to preserve, and that, I think, is something that is actually in everyone's interest. There's not necessarily equal capacity to achieve it, because governments are very powerful, but it's
going to... we're going to move in that direction by having a global conversation about it. So let's end this by giving some advice to someone who is watching this video. They've just watched an Alex Jones video, and the YouTube algorithm, having changed, sent them here, and they somehow got to this point. They're 18 years old, and they want to devote their life to making sure that the dynamic between machines and humans does not become exploitative, and instead becomes one in which we continue to live our rich, fulfilled lives. What should they do, or what advice would you give? I would say: get to know yourself much better, and have as few illusions about yourself as possible. If a desire pops up in your mind, don't just say, "well, this is my free will, I chose this, therefore it's good, I should do it." Explore much deeper. Secondly, as I said, join an organization; there is very little you can do just as an individual by yourself. Those are the two most important pieces of advice I could give an individual who is watching us. And I'd add your earlier suggestion: understand that the philosophy of simple rational human choice is outdated; we have to move from an 18th-century model of how human beings work to a 21st-century model of how human beings work. Personally, in our work we're trying to coordinate a kind of global movement toward fixing some of these issues around humane technology, and I think, as Yuval says, you can't do it alone. It's not "let me turn my phone grayscale" or "let me petition my congressmember by myself." This is a global movement. The good news is that no one wants the dystopic endpoint of the stuff we're talking about; it's not as if someone says, "no, no, I'm really excited about this dystopia, I just want to keep doing what we're doing." No one wants that. So it's really a matter of whether we can all unify around the thing that we do want, and it's somewhere in the vicinity of what we're talking about, and no one has
to capture the flag, but we have to move away from the direction we're going, and I think everyone should be on the same page about that. Well, we started this conversation by talking about whether we were optimistic, and I am certainly optimistic that we have covered some of the hardest questions facing humanity, and that you have offered brilliant insights into them. So thank you for talking, and thank you for being here. Thank you, Tristan; thank you, Yuval. Thanks.
Info
Channel: Yuval Noah Harari
Views: 128,747
Rating: 4.911324 out of 5
Keywords: tristan harris, yuval noah harari, nicholas thompson, wired, ai, interview, time well spent, ethics, ethical design, artificial intelligence, technology, california, democracy, society, corporations, know yourself, robots, algorithms, technological disruption, future, humanity, future of humanity
Id: v0sWeLZ8PXg
Length: 56min 40sec (3400 seconds)
Published: Mon Dec 03 2018