Munk Debate on Artificial Intelligence | Bengio & Tegmark vs. Mitchell & LeCun

Captions
you don't know which of your facts will be demolished you don't know which of your arguments will be totally destroyed and you now rattled you're shaken up and you don't know what the hell to say but you got to say something capitalism doesn't just have problems capitalism is broken I think it's time for this toxic binary zero-sum madness to stop the biggest threat to the liberal international order [Music] paradoxically is not a non-liberal society like China but a liberal society like the United States of America manipulated by flattery goaded by tweets his hand near the launch code when he overheats if he's that outrageous then you must think half of your countrymen are just completely bonkers or more whatever you want to call this system a mafia state a feudal empire it's a disaster for ordinary Russians having different opinions is so 20th century this century we have different facts what a pity we can't have a vote between who prefers bald Brits to curly-headed Canadians we don't require divine permission to know right from wrong we don't need tablets administered to us ten at a time in tablet form to be able to have a moral argument we do not want sympathy we do not want pity we want opportunities and that's why the whole thing whole thing Ukrainian crisis Ukrainian war Ukraine tragedy it's an issue that affects everybody on this planet so what I want to say to you is don't give in to the fatalism here don't give in to the sense that these are the great forces [Music] we believe we make our own destiny [Music] good evening ladies and gentlemen my name is Rudyard Griffiths I am the chair of the Munk Debates and it's my privilege to once again have the opportunity to moderate tonight's Munk Debate on AI when we advertised this debate to you our valued members donors ticket holders we genuinely felt that this could be one of the more consequential debates in the history of this series this is our 28th main stage debate here at Roy Thomson Hall look at that over a hundred debaters have been on this stage but tonight for the first time in this series we're going to address what could be at least for half of our debaters on stage an existential question the threat that AI could pose to humanity in the years to come and into our future this would not be possible this evening all of those past debates without again the generosity of you so many of you are Munk donors now numbering in the multiple thousands so thank you for your generous charitable support for the Munk Debates and also a big thank you for the organization the foundation that imagined these debates in the first place and makes them happen twice a year every year for over 15 years so thank you to the Peter and Melanie Munk Charitable Foundation bravo guys thank you [Applause] you know Peter would have loved this debate he was a man who uh was fascinated with big issues big challenges global in scale and context and tonight's debate as with all of our debates is held in his memory these debates are part of his incredible legacy so let's give Peter a well-deserved applause I want to move fast tonight because we have got a lot to cover together and I want to promise that like you I am uh I've got a lot to learn this evening so I'm going to try to keep the conversation on an accessible level we've found really four of the world's leading authorities in AI these are people who have worked and innovated and moved this technology forward to the moment that we are at now this moment of international debate about the future of AI so let's
start our debate by getting our debaters out here center stage the first debater to join us is Max Tegmark he's a professor at MIT doing research focused on discovering in his words how inscrutable machine learning systems work under the hood he has an impressive body of scientific research he's the author of numerous best-selling books recently you like me have seen him in the news leading with Elon Musk and other scientists around the world a public call for a moratorium a pause on AI research ladies and gentlemen please welcome to Toronto MIT's Max Tegmark thank you Max well one great debater deserves another and here we have not let you down Yoshua Bengio is truly one of the world's leading experts in artificial intelligence known for his pioneering work in the field of deep learning in 2018 he won the prestigious Turing Award which is widely recognized as the Nobel Prize of computing he's a full professor at the Université de Montréal and a founder and scientific director of Mila Quebec's AI Institute he's a driving force behind the Montreal Declaration for the Responsible Development of Artificial Intelligence please welcome Canada's Yoshua Bengio thank you thanks for coming okay formidable debaters on the pro side of our debate tonight the motion be it resolved AI development and research poses an existential risk arguing against the motion let's start with our first debater Yann LeCun is vice president and chief AI scientist for Meta the parent company of Facebook WhatsApp Instagram all those amazing applications that we use a world leader in VR some of you maybe were trying out Meta's VR applications in the lobby before and they'll be available after the debate too he is responsible for assisting in innovating AI applications across Meta's three billion daily active users Yann LeCun is also like Yoshua a winner of the very same year 2018 Turing Award for his foundational scientific contributions to machine learning he's an active academic lecturing and teaching as the Silver Professor at the Courant Institute of Mathematical Sciences at New York University ladies and gentlemen please welcome Yann LeCun thank you appreciate it our final debater this evening made a harrowing trip today from Santa Fe multiple flights delays but she's here on the ground just hours before tonight's proceedings she's a world-leading expert in various fields of artificial intelligence and cognitive science at the Santa Fe Institute which focuses broadly on issues related to complex systems she has really distinguished herself in the popular conversation around AI and its implications having authored numerous international best-selling books including her latest Artificial Intelligence: A Guide for Thinking Humans please welcome Melanie Mitchell thank you okay so we're gonna do what we do at every Munk Debate which is vote on tonight's resolution so this evening the way to do that is in your program there is a QR code and you can scan that code with your phone or it should work if you point it at the QR codes that you see on the screens here on the stage this is going to spawn a web page that will have tonight's various voting questions for you please keep that web page open we also recommend that you do not use the Roy Thomson Hall Wi-Fi it gets overwhelmed by this many people hitting it all at once but 5G you should be fine to connect with the web page so I'm going to ask you to do that now and you're going to get that first question you're going to vote is AI
research and development right now an existential risk does it pose an existential risk as you're doing that voting I just want to remind you and our debaters about our countdown clocks so the debaters have a cheat monitor here on the stage for their opening statements rebuttals and closing statements but the audience is going to see the final minute of each of those segments when that clock hits zero I want you to join me in a round of applause for our debaters that'll keep them on their toes and our debate on time okay our second question you should also have coming up right after you answer the first one on the resolution which is would you change your mind tonight so depending on what you hear on our resolution be it resolved AI research and development poses an existential risk could you have a different opinion by the end of the evening based on what you hear from our debaters so answer both of those questions and then if you can just keep that tab open on your phone and that'll let you vote on the resolution at the end of the debate okay let's go now and see if we've got some preliminary results to the resolution question for this audience does this pose an existential risk in your view AI research and development we'll try to get the first round of audience votes on the resolution 67 percent versus 33 percent so two-thirds versus one-third in favor of the motion so 67 to 33 now the key thing is could you change your mind so how much is public opinion in this room in flux this evening let's see the percentages now people who could possibly change their minds depending on what they hear tonight thank you all for taking your time to vote on that second question and we'll keep those voting tabs open if we can I'll just give a few more moments if we can't get these results right now I'll make sure that you get them at some point as the evening unfolds okay I'm gonna say that we're still tabulating those possibly we'll give it a little bit more time but I do not want to delay any longer so let's go on to opening statements Max Tegmark as we agreed you're up first the stage is yours we're going to put your six minutes on the clock here enjoy thank you first slide please hey folks and the next yes technology can be used for good and for bad and its power is growing exponentially which means of course that its blast radius you know the amount of damage it can do to our society is also growing exponentially back in the Stone Age with a rock maybe someone could kill five people 300 years ago with a bomb maybe a hundred people in 1945 with a couple nukes 250,000 people with bioweapons even more with nuclear winter according to a recent Science article over 5 billion people so now the blast radius has risen up to about 60 percent of humanity and since as we'll argue superhuman intelligence is going to be way more powerful than any of this its blast radius can easily be a hundred percent of humanity giving it the potential power to really wipe us out now to stir things up I have a tweet here from my friend Yann LeCun who says that making AI safe is going to happen as with every new tech and it's going to be a process of iterative refinement now I think iterative refinement is great for less powerful technology like jet engines and less powerful AI where you have the luxury of trying many times but I think it's a terrible strategy when you're talking about things as powerful as nuclear winter or some future AI that we could completely lose control over because then you really only
get one chance you have to get it right the first time and can't come back and iteratively refine now why is it that superhuman AI can be so much more powerful than all those other technologies it's because by definition it can do all the intelligent things the way humans can do just better for example it can do goal-oriented behavior it can persuade manipulate hire people start companies build robots do scientific research it could for example research how to make more powerful bioweapons or how to make even more intelligent systems so it could recursively self-improve itself it can also do things that humans cannot do at all it could very easily make copies of itself so if you imagine for a moment a superhuman AI that can think say a thousand times faster than a human researcher so it can in nine hours do a year's worth of research but now instead think of a million of those a swarm of a million superintelligent AIs where as soon as one of them discovers something new it can instantly share that new skill with all of them that's the kind of power we're talking about here and finally superhuman AI will probably be a very alien kind of intelligence that lacks anything like human emotions or empathy so of course it would have the power and potential to wipe us out if it had those goals but why would it possibly have those goals next slide please well one the most talked about way in which this could happen is if we've given it some sort of goal that it actually faithfully obeys but that goal isn't fully aligned with human goals this is uh the way we humans have usually wiped out other species for example when we wiped out this this guy here the West African black rhino you know it wasn't that we hated rhinoceroses and were deliberately trying to kill them it's just that some humans had this weird goal that they thought that if they ate ground-up rhino horn their sex life was going to get better even though this was totally debunked in the medical literature but the humans were smarter than the rhinos and so it was the humans' goals that prevailed or if you chop down the rainforest for the goal of making money you know and you extinct the species that was living there another example of what could happen to us if it's rogue AI at the top here we have a second route to extinction which could be malicious use so to stir things up more I have another tweet from Yann where you see he says we would have obviously never designed AIs to have that sort of goals but who is we I'm sure you wouldn't Yann wouldn't do it because I know personally he's a very nice guy and to you in the audience it might seem really weird that anyone would do it because you're all so nice because you're Canadian but in the US we regularly have mass shootings where people deliberately try to kill as many people as possible so if there's some sort of super powerful open source AI you know a person who wants to kill as many as possible might well use that and the AI is safe in the sense that it's totally obeying its owner right but it still causes disaster the third route to disaster I placed here is just that we get out-competed this is the one that's the least discussed and I would love to hear from Yann and Melanie what their plan is to avoid the one in the middle and the one at the top so out-competed well by definition AI can do if it's superhuman all our jobs better right so companies that don't replace their humans by AI will get out-competed by those who do similarly it's
about delegating not just jobs but also decisions so companies that don't have an AI CEO will be out-competed by those who do militaries that don't have AI generals will be out-competed by militaries who do countries that don't have AI government will be out-competed by countries who do and we end up in a situation where we humans have quite voluntarily just given away more and more power to AI and we have this future where all sorts of stuff is happening but it's no longer our future we've been disempowered giving away control to machines that don't even need us for anything that's a very bad situation let's not go there [Music] thank you Max Tegmark terrific opening statement now we've got those results quickly for you now on how many of you said might change your minds I've been told it's 92 percent of you could vote a completely different way at the end of uh this 90-minute debate so let's see what happens and we'll keep going in that spirit Yann LeCun you're up next with your opening statement okay well first I want to thank Max for showing some of my tweets I won't have to uh mention that okay so first of all we are still very far from having human-level AI the current technology is very limited we have systems that can pass the bar exam but any 10-year-old kid can learn to clear up the dinner table and fill up the dishwasher in minutes convincing them to do it is a different story any 17-year-old can learn to drive a car in about 20 hours of training we still don't have domestic robots we still don't have self-driving cars at least level five self-driving cars so we still have some basic uh you know major progress to make to reach human-level AI um we're missing something big there are immediate risks to AI but those risks are not new there are things like is AI going to be able to generate a lot of misinformation or convince people to do bad things or give information to people that they shouldn't have but we already have the internet we already have social networks we already have countermeasures against things like disinformation and hate speech and things like that and by the way all of those countermeasures make massive use of AI AI for those problems is the solution it's not the problem so now is there a basic design for AI that will make it safe steerable now if we extrapolate the capabilities of our current AI systems I would agree that we might be worried about the fact that they may do bad things and be non-controllable so most of you probably have played with these systems and you know that they confabulate things they make up facts that don't exist they can't really reason they don't plan they don't really understand the reality of the world because they're all purely trained on text and most of human knowledge has nothing to do with text or with language for that matter so if you believe that future AI systems are based on the same blueprint I think you're entitled to believe that they might be dangerous and my prediction is that within five years we're not going to be using those things anymore next slide please so what I'm proposing is something called objective-driven AI so this is a type of AI whose behavior is controlled by a set of objectives so those AI systems cannot produce an output unless it satisfies a number of constraints safety constraints for example and objectives that measure whether the system sort of answers the question they're being asked or accomplishes the task they're asked to do those systems are
controllable they can be made safe as long as we implement the safety objectives and the surprising thing is that they will have emotions they will have empathy they will have all the things that we require entities in the world to have if we want them to behave properly so I do not believe that we can achieve anything close to human-level intelligence without endowing AI systems with this kind of emotions similar to human emotions this will be the way to control them now one set of emotions that we can hardwire into them is subservience the fact that they will be at our service so imagine a future 10 20 years from now perhaps every one of us will be interacting with the digital world through an AI assistant this AI assistant will be our best digital friend if you want it will help us in our daily lives it would be like having a staff of people who might be smarter than us and that's great working with a bunch of people who are smarter than you is the best thing you can ever do right um so if that's the case we would want those AI systems to be transparent and open kind of like Wikipedia if you want we trust Wikipedia because the content is contributed by millions of people who have been vetted and there is some editing process and so we have some level of trust in the veracity of the content it's going to have to be the same for AI systems the way they learn about the world because they're going to be the repository of all human knowledge will have to be crowdsourced which means they're going to have to be open source and if we are afraid that they are a danger to humanity they cannot be open source so we're at a fork in the road are we going to keep this under lock and key as if it was a weapon or are we going to go open and I really argue for going uh open because I think AI is going to open sort of a new era for humanity a new Renaissance a new era of Enlightenment if you want everybody will be smarter for it and being smart is intrinsically good so I have a positive view as you can tell and I think there is a very efficient way or good way of making AI systems safe it's going to be arduous engineering just like making turbojets safe it took decades and it's hard engineering but it's doable thank you very much thank you Yann LeCun two terrific opening statements let's go for another Yoshua Bengio you're up six minutes on the clock thank you um let me start by stating something that's probably obvious for many researchers in AI and neuroscience but for many people it isn't which is that your brain is a machine it's a biological machine that's what neuroscientists are trying to figure out and what it means is that there's no scientific reason to think that we couldn't build machines as smart as humans in the future well when exactly is another question but as Geoff Hinton has explained in a recent talk digital computers have advantages over analog brains analog brains have advantages in terms of energy efficiency but digital computers have advantages in terms of the ability to absorb large quantities of data and to share information at high rates between computers so that they can learn in parallel as Max was talking about so you can have thousands of computers sharing what they've learned for example um you know right now we already have AI systems that can read the whole internet very quickly which a human couldn't do in you know many tens of thousands of lifetimes so this gives them advantages if we are able to build machines that have the same
principles of intelligence as we have um and that means it's very likely that we'll build machines that are superhuman now when is that going to happen well um Geoff Hinton and Yann LeCun and myself who won the Turing Award for deep learning actually all agree that it may be anywhere between just a few years and a few decades something like 5 to 20 years with some you know confidence interval now if it's a few decades that you know may be reassuring but if it's a few years I think we should be concerned and in any case it doesn't change the question of tonight which is is there existential risk it's not about how far into the future it is I changed my mind about this whole question because the time scale changed if you had asked me just a few years ago I would have said well maybe a few decades or centuries because I thought like Yann like Melanie that it was too far away now I'm sure many of you have tried ChatGPT or GPT-4 and you can't really avoid noticing that these machines are incredibly powerful in fact they pass what Turing who gave his name to the prize we won defined as the Turing test in the sense that when you dialogue with these systems you may not be sure if you're talking to a human or a machine now it's been also the job of many scientists like myself to try to figure out what is missing and I agree with Yann and Melanie that there are important ingredients missing but are they something we'll figure out in just a few years or a few decades it's hard to say I can tell you that the things that I'm working on uh which is something on the order of reasoning um right now these systems are really good at something like intuitive intelligence but not so much at reasoning and thinking through before saying stupid things um is something that could potentially be solved very quickly or maybe there'll be obstacles on the way we don't know so this is a problem and as Max said once the recipe for building these things is available or even worse the parameters the weights somebody could download this and give it instructions that could be very very harmful because imagine systems that are many times smarter than us could defeat our cybersecurity could hire you know organized crime to do things could even hire people who are legally working on the web to do things could open bank accounts could do all kinds of things just through the internet and eventually do the R&D to build robots and have its own control in the world so in a way where we're going and I'm channeling Geoff Hinton here is you know coming up with a new kind of entity in the world that may have its own self-preservation goals which is something that Max also explained why it may happen it may happen because somebody asks it to do something bad or it may happen because of what I call a Frankenstein sort of tendency of us wanting to build machines that are like us and once we have machines that have a self-preservation goal well we are in trouble um you know think about what happens when you want to survive you don't want others to turn you off right and you need to be able to control your environment which means control humans so existential risk isn't just well we all disappear it might be that we're all disempowered that we are no longer in control of our destiny and I don't think this is something we want um last slide which I won't
have time to go into in detail but um I'm writing a blog post which I'll post soon where I've tried to take all of the comments and questions that people have asked me about this question and trying to answer all the critiques about oh we should not worry about existential risk um and up to now I unfortunately haven't been convinced by these arguments I would really like to be convinced and lay these concerns to rest but all of the arguments I've heard and we'll discuss tonight have not been good enough thank you Yoshua Bengio let's have Melanie Mitchell give our final opening statement six minutes on the clock Melanie human extinction it's in our collective psyche but tonight we're debating whether these fears belong in the realm of science fiction and philosophical speculation or whether AI is an actual real-life existential threat I'm going to argue that AI does not pose such a threat in any reasonably near future first I'll argue that the possible scenarios that people have dreamed up for AI existential threats are all based on unfounded speculations rather than on science or empirical evidence second while we can all acknowledge that AI presents many risks and harms none of them rise to the extreme level of existential saying that AI literally threatens human extinction sets a very high bar finally claiming that AI is an existential threat is itself harmful it misleads people about the current state and likely future of AI such sensationalist claims deflect attention from real immediate risks and further might result in blocking the potential benefits that we could reap from technological progress let's look at the three scenarios that people have posited for AI to be an existential risk the first is that a malevolent superintelligent AI somehow emerges and uses its evil genius to destroy humanity we've all seen that movie I believe that no one here takes that scenario seriously for the foreseeable future AI systems will not have their own desires or intentions for good or for evil the way that living beings do they're not alive the second scenario is also that a superintelligent AI emerges but it's not malevolent it just misinterprets our wishes and accidentally kills us all sort of like a Sorcerer's Apprentice gone nuclear for example Yoshua Bengio wrote about this thought experiment we might ask an AI to fix climate change and to solve the problem it could design a virus that decimates the human population presto humans dead no more carbon emissions this is an example of what's called the fallacy of dumb superintelligence that is it's a fallacy to think that a machine could be quote smarter than humans in all respects unquote and still lack any common sense understanding of humans such as understanding why we made the request to fix climate change and the fact that we'd prefer not to be wiped out intelligence is all about having insight into one's goals and the likely effects of one's actions we would never give unchecked autonomy and resources to an AI that lacked these basic aspects of intelligence it just does not make sense the third scenario is that a genocidal group of humans uses AI to help them destroy humanity indeed humans often use technology to do very bad things but we can't conclude from this that AI research and development is itself an existential threat a terrorist group could conceivably carry out a nuclear or biological attack that kills millions of people with or without AI there's information online right now
about how to make weapons how one might go about killing millions of humans AI systems could make it easier to get that information but the threat is still there without AI but more importantly our society our institutions and our technologies are enormously complicated diverse and resilient they create a barrier of complexity that puts the brakes on such an attack which would require a cascade of highly improbable events everyone here would agree that there's a long list of actual near-term risks and harms of AI such as the spread of disinformation or job losses we should take those risks very seriously but it's also harmful to take unfounded speculations about existential threats too seriously consider vaccines which provide immense benefits in mitigating diseases but we've seen calls for them to be severely restricted and even halted due to unfounded speculations about their risks we don't want to kill off potential benefits from AI in science in health care or education would you call science itself an existential risk it gave us nuclear weapons after all but not pursuing science is an even greater existential risk just as for vaccines and other technologies our assessments of AI threats need to be founded in science and empirical data not unsupported speculations there's no evidence that AI research and development poses an existential threat now or in the reasonable future thank you thank you debaters the table is now set for rebuttals so we're going to go around the horn three minutes each in the same order Max you're up first thank you this is lots of this is very interesting so first of all Melanie we saw here that 67 percent of the audience already thinks this is an existential threat so just coming out and saying it's not is an extraordinary claim which I think requires extraordinary evidence and I would love to hear from you and Yann what your actual evidence is that there is no risk so I would love if you can get a bit more nuanced and say instead of just saying oh it's unfounded speculation you know and it's sci-fi tell us what do you actually think the probability is that we are going to get superhuman intelligence say in 20 years say in 100 years and what is your plan for how to make it safe uh what is your plan for how we're going to make sure that the goals of the AI are always aligned with humans how are you going to avoid the malicious use case that I brought up none of you said anything about that Yann you again reiterated that you want to make sure that we give good goals to AI I'm completely confident that that's what you would do but if someone is a terrorist or just wants to take over the world for their own benefit you know what is your plan actually for stopping them from putting into their AI not very submissive goals but goals that are going to make that AI take over the world for them I'd like to hear some details and finally uh yeah what do you each think actually the risk is we're here to debate whether there's a threat right we're not debating whether there's a hundred percent certainty that we're all gonna get wiped out that is not what Yoshua and I are arguing we're just arguing that the risk is not zero percent and that it's too high for our comfort levels it would be wonderful if you can actually tell us what do you each think do you think it is really zero percent that this wipes us out or do you think it's one in a million that it's going to pose an existential threat or
ten percent or one percent when we can get past slogans like this saying it's speculation and sci-fi and get into these nuances I think this is going to be really really um helpful and productive and finally I wanted to say to you Yann it was actually great to hear that you actually admit that you feel that it is a real existential risk if we just go with today's autoregressive technology and just like for example GPT-5 GPT-6 GPT-7 and what you say is the hope is that we switch to some better technology right so if I misinterpreted please clarify if not uh how isn't there still a risk though that some companies are going to continue doing it the GPT-6 kind of way because they already are how do you ensure that people actually only use the safe way Yann LeCun your opportunity for rebuttal okay let me answer this one just right away uh no GPT-whatever is an existential risk it's a risk you know might be dangerous might be not as useful as people make it to be but it's not going to be an existential it's not going to wipe out humanity no um you'll need something considerably smarter than this and that's what I said we're missing something big to make systems really intelligent so the first thing I want to say is that yeah the science fiction scenarios of you know the Earth being wiped out humanity being wiped out this sounds like a James Bond movie right it's like the supervillain who like goes in space and then you know kind of puts like some deadly gas and eliminates all of humanity it's just a movie and I can't disprove it the same way if I tell you I'll use the Bertrand Russell idea if I tell you there is a teapot flying between the orbits of Jupiter and Saturn you're going to tell me I'm crazy but you can't disprove me right you can't disprove that assertion it's going to cost you a huge amount of resources to do this so it's kind of the same thing with those doom scenarios they're sci-fi but I can't prove that they're wrong but the risk is negligible and the reason the risk of extinction is negligible is because we build those things we build them we have agency superhuman intelligence is not something that's going to just happen it's something that we are building and so of course if it's not safe we're not going to build it right um I mean would you build a bomb that just blows up randomly no right okay um so I think a lot of the fears around AI are predicated on the idea that somehow there is a hard takeoff which is that the minute you turn on an AI system that is capable of human intelligence or superintelligence it's going to take over the world within minutes and this is preposterous it's one of the tweets that you posted that I wrote the reason it's preposterous is that it's just not the way anything works in the world you build something you build it small you don't make it superintelligent right away you make it as smart as a mouse and then you figure out if it behaves properly and then you make it as smart as a cat and then a dog and then something a little bigger right and of course you do this progressively iteratively that's what I was referring to in that tweet that you posted that I wrote so um you know this is going to be engineering it's like asking today you know precisely how we're going to make a superintelligent system safe it's kind of like asking in 1930 whether we're going to be able to build turbojets that are incredibly reliable so reliable that you're going to be able to cross the
Atlantic near the speed of sound this would have sounded impossible but we did it thank you Yann I'd like to go back to the teapot well there's no evidence for a teapot but there is an asteroid coming to us it's not clear that it's going to hit the Earth but we're seeing a very clear trajectory of improvements in intelligence of the systems we've been building steadily over the last two decades or more and you and I have been part of that and yeah I agree with you it's not yet arrived but does it mean that we should you know do nothing in fact it's interesting that you've been proposing solutions to the safety problem which means you believe that we need to build safe AI yes which means that there is a problem that needs to be fixed yes I mean also you just said that if it was dangerous we wouldn't build it well let me remind you of a few things that we've done I mean collectively not you and I of course um that are existential risks you know fossil fuel companies for many decades have known and hidden the fact that their activity you know could be highly destructive for the planet and well it was the profit motive uh companies are actually acting in a way that is not quite aligned with what society needs and in fact there's an interesting analogy between that kind of behavior and AIs we are asking companies to do things that are useful for society um you know produce goods and services we need but we can't exactly tell them to do that in a formal way so we tell them maximize profit and stay legal pay your taxes but that recipe although in principle if you know all companies were microscopic and there was no environment it would be good there's a mismatch between what companies are trying to achieve and what society really needs so how do we deal with that well we need governments to intervene to try to reduce that and we need to understand the problem in fact what Max and I and others are saying is not necessarily there's going to be a catastrophe but that we need to understand what can go wrong so that we can prepare for it we can build safe AI systems the other thing is that you seem to think that everyone is like you but you know there are a lot of people out there who can have all kinds of motives that could be very dangerous for everyone um so um Melanie you said that we don't yet have existential risk and I kind of agree but how do you know that in two three five twenty years it's not going to be the case [Applause] so uh Yoshua uh we're not debating whether or not there's a problem to be fixed or whether there's any harms we're debating whether there is an existential threat from AI that's the resolution and if the existential threat is you know 100 years 500 years from now I don't think then we would be here debating you're talking about something very near term and uh Max asked like what is the risk I don't know I mean the risk is non-zero the risk of anything is non-zero you know we could talk about any kind of scenario speculative scenario like you know uh malicious aliens coming to Earth in their spaceship and destroying humanity that's an existential risk and then we can say wait wait a minute maybe we should ban radio broadcasts because that's how they're going to find us but clearly the probability of that is quite low it's not high enough to justify the kind of attention that you're seeming to ask for now I want to say you know there's this word superintelligence smarter
than humans these phrases that are being glibly thrown around as if we understand what they mean Yoshua said you know your brain is a machine sure I agree with that so we could build a human-level intelligence in principle I also agree with that I think we're quite far away but in principle but human intelligence is not any old machine it's a very special kind of biological machine that's adapted to our human problems we're different from octopuses we're different from rats we're different from viruses we have our own specific human problems needs and motivations and we're fundamentally embedded in a physical cultural social system and that's the kind of thing that gives rise to human-level intelligence AI systems while they learn from human data and that's exactly all they learn from the data that humans created and so they have captured some aspects of our intelligence they lack fundamental important aspects of what it is to understand the world and I think that when we throw around words like superintelligence we're using sort of our intuitive kind of associations and assume that these things could be easily built and that we're on a trajectory to build them but really that's not been shown at all I don't think it's something that science really gives much evidence for now and I'll just conclude to say that the whole history of AI has been a history of failed predictions back in the 1950s and 60s people were predicting the same thing about superintelligent AI and talking about existential risk but it was wrong then and I'd say it's wrong now thank you thank you well let's uh thank our debaters for a terrific opening for this debate I'm going to join the conversation now and try to think up some questions that are top of mind for our audience and speaking with the debaters before we all agreed that we really want to center this debate on that keyword in the resolution which has been brought up many times tonight which is existential so let's throw a definition just up onto the screens here just to kind of center ourselves for this uh portion of the debate this is from Nick Bostrom a kind of leading thinker on existential risk his words an existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development so I think we all agree that's a relatively fair formulation of what we're debating tonight so let me come to you first Max because you were the last to talk in the first the first to talk in the last round and just ask a question I think that is on the minds of a lot of the audience members which is to try to help us understand why would whatever we want to call it an AI system machine intelligence why would it want to harm us why would it have an intention to do something to us can you give us that answer yeah let's not anthropomorphize by talking about one thing but why might it have goals to do bad things like this it's either because of malicious use that some user actually gives it those goals not everybody is as kind as Yann and Melanie or because we specified some other goal that we thought was kind of getting it right but it turned out when it's pushed to its extreme you know does great harm so in both cases the AI is very loyal to us and it's just that the goal was off or the human goal that was put into them was off the third one which I have heard no counterarguments to whatsoever is the one where we just get out-competed
that's kind of the trajectory we're already on right now everybody just wants to make a buck you know companies have a goal to make profit and we gradually get more and more disempowered so those are three routes to it and I would love again if Yann and Melanie can answer this question like what do you think the probability actually is that we're going to build machines that can do most of our stuff in 20 years what do you think the probability really is that there will be an existential risk and what's your plan for avoiding these three different threats because we can't just dismiss arguments as being ridiculous or sci-fi when people like Geoff Hinton are making them the CEO of OpenAI is saying it's an existential risk the CEO of DeepMind is saying it which authorities do we want to pay attention to you know I don't think we have proof by authority here right but we don't need an argument right so let's say we don't know and I'm agnostic actually I don't know what's going to happen but given the stakes shouldn't we pay attention to this right so yeah this is an interesting analogy let's say Yoshua brought up earlier climate there's an idea of a precautionary principle Yann that you do certain things because you want to be careful maybe not what you want to do but you take steps and actions now to avoid to close off certain worst-case scenarios why isn't this an example like climate AI is one where we want the precautionary principle to be in effect to deal with maybe it's a tail risk only but this tail risk of an existential threat but that's exactly what we're doing we're developing technology and testing it making sure it's safe before deploying it I mean current AI systems are pretty much deployed this way there's no reason for this to change as the power of those systems progresses but um you know I mean there's several questions uh you know if bad guys can use AI for bad things there's many more good guys who can use the same more powerful AI to counteract it right so in the end the good guys are going to win sometimes the attacker has the advantage there is absolutely no reason to believe that's the case for AI in fact we're facing the situation right now what do you know I mean because it already exists because people are currently using it I just I just this is a really interesting point so let's have Yann uh sum up on this and then I'll come to you yeah no no it's good but I like the crosstalk but these are complicated ideas I'm struggling to keep up so I know the audience might be too because some of the dangers that you talked about are people using AI for bad things they already exist admittedly you said that before those threats exist with or without AI AI might help a little bit let's say misinformation okay is AI going to help with misinformation well you know QAnon is two guys they don't use AI they have a big impact the countermeasures against misinformation hate speech etc um and propaganda attempts to corrupt the electoral process and democracy again as I said before the solution is AI to take down this content on social networks and various other channels AI is used massively so to take a precise example the proportion of hate speech taken down automatically by AI systems about five years ago on Facebook was about 25 percent last year it was 95 right but we need 100 to avoid extinction this
is no we don't we don't there's never anything that is completely perfect well that's the issue that's why we need to do more than what we're doing now again it's the good guys' AI which is superior to the bad guys' AI that's like saying that the way to stop a bad guy with a bioweapon is to have a good guy with a bioweapon that's not what you do the way you stop a bioweapon attack is with vaccines and banning bioweapons and having various forms of regulation and control no the way to stop bioweapons is to have counterintelligence because you can do this in secret too much and it's the same if you use AI it's not going to make your exactly and how do we build that counterintelligence we're going to build we're gonna need sorry the infrastructure to protect ourselves and so for that we need to recognize that there is a risk there is a risk uh there is a risk no there are a lot of risks but the question we're asking is are they existential are they going to end civilization it could and that's good and I think you know uh Rudyard brought up the precautionary principle which says you know well if there's a risk we should probably do something about it and I'm you know I'm not opposed to that but I think that we have to be balanced we only have so much resources so much attention and the whole narrative on existential risk in AI has diverted a lot of the attention of the world maybe part of the 67 percent that are here saying that they think it's an existential risk it takes our attention away from some of the real immediate risks like disinformation and bias okay so are you saying it's a zero percent risk zero percent existential risk is that your claim um do I think there's a zero percent risk of any scenario no of course not what do you think it is we're really debating I can't put a number on it I think it's quite low and I think what we're debating is is there a reasonable existential risk risk of end of civilization in the reasonable future otherwise we wouldn't be up here today we're not I'm not going to like say 0.0001 I mean I can't see how high is too high for you then how high a probability do you think is too high I don't think we can put a probability on it we don't know we don't have enough okay it's lower than the Earth being wiped out by a meteor and by the way AI can help with that problem um but let's just bring the audience in here so in a sense what we're hearing from the con side of the debate is that the risk is in a sense so negligible that we should move on to other things the development of AI with a variety of controls with regulation possibly but to assume that there's any demonstrable existential risk of a quantum that would require a pause a moratorium as both of you have argued is not in the cards let me push you a little bit if you please about these probabilities let's consider very simple scenarios there are more complicated ones for example that Max and I have been talking about but let's consider first the probability that we will build we will know how to build intelligent machines that are powerful enough to be dangerous existentially to us in say 10 years okay that probability when I ask a lot of AI people they agree that it's significant you know it's 20 30 40 50 and people like Yann and Geoff and many others that I've asked agree that it's clearly not like ten to the minus something right it's not like uh asteroids coming and destroying the Earth so that's the first thing we
need I guess wait let me finish let me finish second what's the probability that once this is available and known and open source so that anyone can you know play with it what's the probability that there will be someone with uh you know misguided intentions or malicious intentions who uses it and we get catastrophic consequences possibly leading to extinction or at least you know the same scale as we're talking about nuclear weapons explosions and things like that so what's the probability of that most people I ask say it's like 99.9 that once it's available somebody's gonna misuse it so I don't understand how they can come up with very small probabilities now Yann's answer is oh it's okay he agrees more or less with what I'm saying but he says we'll fix it before we get there yes let's do that okay so I think um you know you say the people you talk to in AI have this belief well the people I talk to in AI don't have that belief I think the field is quite split I would say and I think there's quite a there's a coined at 50 that we're all gonna die well to any of the people who signed this letter saying that this is an existential risk people who have signed letters saying they think it's an existential risk they don't say over what time scale but this is yeah Geoff Hinton is a godfather but he doesn't know everything you don't talk to him yeah I think that he's a smart guy but I think that a lot of people have way overhyped the risk of these things and that's really convinced a lot of the general public that this is what we should be focusing on not the more immediate harms of AI and that itself is harmful let's uh good answers everybody let's just follow this risk question and come to you Yann on this um because you're a scientist and you're at Meta and you've got three billion active daily users it's hard to wrap my mind around that number but you've got big data and all kinds of really interesting ways to deploy this technology historically scientists at times have made decisions to engage in risky behavior because they want to do things they want to innovate and they want to discover there are you know hair-raising tales around the Manhattan Project and probabilities of igniting the entire atmosphere of the planet with the first Trinity test a very low risk but they still went ahead and did the test why are you confident that this time scientists and the scientific community writ large around the world are going to approach this with the prudence that you are and with the care and caution that Meta is going to bring to this uh this challenge of safe development of AI okay so first of all it's not a question for Meta because the progress I mean the question of really what is intelligence is a very deep scientific question that will take the combined effort of the entire scientific community so this is not going to happen within the confines of a small lab or a big lab this affects everyone that's one reason we need this research to be open by the way but the point is humanity is still around despite playing with all those toys like nuclear bombs now the assimilation of AI with nuclear weapons I think is a complete fallacy nuclear weapons are designed to kill people and destroy entire cities AI is designed to basically amplify human intelligence like this is intrinsically good right well
it depends of course of course it can be dangerous um the same way flying on an airplane that is not properly engineered is dangerous or a submarine for that matter or not properly maintained okay so to ensure that things that are deployed in the public are safe you need safety testing you need certification and I'm all for that you know we need regulation for various things they already exist we all have automated systems that can drive our car on the highway you know soon they'll drive a car completely automatically those have to go through certification they don't get into your car just because the manufacturer decides uh it's good AI systems are used to help medical diagnosis for you know imaging or for you know various other applications they have to go through approval by the regulatory agencies so we already have those regulations that's good and you know we're talking about the risk of AI but like what about the benefits you know everything is a trade-off between risk and benefit and what Melanie said is you know if the risks are infinitely small or negligible you know we have to consider the benefits like what about killing all the benefits the progress that we can make in science in medicine in technology yeah let's uh let's bring in uh Max you want to get in on this point yeah so you just said that intelligence is intrinsically good I disagree intelligence is a tool that makes you able to accomplish more good things or bad things what are you talking about if Hitler had been more intelligent you're a professor at MIT yeah you are a good person so your morality combined with more intelligence makes the world better but if Hitler had more intelligence I think the world would actually have been worse so it's naive to think that just because you make something smart it's only going to suddenly care about humans you know ask some woolly mammoths if they feel so reassured that we're smarter than them that therefore we would automatically adopt mammoth ethics and do things that were good for the mammoths we didn't you know that's why you probably haven't met any I also want to say that if anyone in the audience works in biotech maybe you could raise your hand you know the whole discussion in the last 10 minutes must sound kind of weird for you where like we're going to keep building this unless someone can prove to us that it's dangerous because in biotech it's exactly the other way around if you had come up with a new medicine you'd say this cures cancer it's awesome you can't just go sell it in the supermarket until someone proves it's dangerous it's your job to convince the Food and Drug Administration in the US or the Canadian authorities or whatever that this is safe and that the benefits yes the benefits outweigh the risks and also someone can't just come in and build a new weird hitherto unseen design for a super powerful nuclear reactor right underneath the CN Tower just because nobody else can prove that it's dangerous it's their job to prove that it's safe and we need to totally flip it around like Yoshua said that with future very powerful AI systems it should be the role the responsibility of the companies to first prove that this is safe before it gets deployed we need to become like biotech no I don't support the resolution but I think we you know I never said AI has no risk regulation oh regulation absolutely I totally support regulation and no one
ever said AI has no risks we're talking about this extremely high bar here of ending civilization of ending Humanity but why is it our role to explain to you why it's dangerous you still haven't answered I've never said it wasn't dangerous I asked you twice what is your plan for avoiding misuse you haven't told me I've asked you twice what's your plan for solving the alignment problem I think you haven't told me I've asked you twice I really just want you to tell me what your plan is for avoiding the scenario where we get outcompeted and disempowered you've said nothing all right so unfortunately unfortunately I don't get paid enough to solve all these problems I think you have to have a plan I don't I don't have to have a plan I think the AI Community is developing plans to to mitigate risks but that that's not what we're debating here we're not debating whether AI has risks or whether the community is going to solve them we're debating whether AI research and development poses an existential threat that's something very different you're not answering my question what is your plan to make sure it doesn't have an existential risk I don't think that there is an existential risk I think there are many risks but there are people who are working extremely hard and including I think many of you know Yoshua particularly on mitigating the more immediate real real-world harms we need uh Melanie you and I know each other and uh we need regulation we need to make sure that we get all the upside that Yann was talking about for I I've been working on AI because there's all this upside that that you know is already starting to happen and there's also uh downsides that you and I have been talking about in you know for many years now and uh we need to take care of that um but we need to take care of all the downsides including the ones that seem maybe a little bit um extraordinary but I want to say that you're saying oh existential risk is a very high bar so um if I tell you oh um maybe it's only going to kill one percent of humanity uh you consider that's not important that's extreme extremely important but I wouldn't call that existential but it won't only kill one percent it can kill 50 and it can kill a hundred percent right I mean I think that's very I I think the you know killing one percent would be a complete and utter catastrophe and we should do everything we can but then but that's not existential it's hard to kill these eight billion people well but the reason why it could kill the reason why it could kill so many people whether it's only one percent or a hundred percent is because these systems we're talking about systems that don't exist today uh that I think might be coming in a few years or in a few decades and that these systems would be smarter than us okay and so they find ways that we don't have easy defenses against that's the scenario that I don't know is going to happen but I think it's plausible enough that we need to worry about it and so there is a risk let's have Yann come in on come in on this point because he wants uh two points I want to make first of all this is complete fallacy that the desire to dominate or destroy is linked with intelligence this is false it's not even true within the human species uh some humans it's not necessarily the smartest Among Us who want to become the leaders in fact we have plenty of examples to the contrary on the international uh political scene and this desire to dominate is something that is intimately linked with human nature because we are social
animals we are a social species and nature has evolved us to organize ourselves hierarchically like baboons like chimpanzees orangutans have no desire to dominate anybody because they're not a social species so this desire to dominate has nothing to do with intelligence they're almost as smart as we are by the way so this has nothing to do with intelligence we can make intelligent machines that are super Superior to us but have no desire to dominate I'm I lead a research lab I only hire people who are smarter than me none of them want my job now but again just to refer to the other side of the debate because it's something I think the audience would appreciate understanding it's not so much the desire to dominate it's the control problem it's that you've set them some goals maybe very Noble and great goals but they start doing other things to achieve those goals which are antithetical to our interests it's not that they're trying to dominate it's that there is a tragedy of the commons that goes on it's the same thing so how do we design goals for machines so that they behave properly and again this is you know a difficult engineering problem but this is not a problem that we are unfamiliar with because as societies we've been doing this for Millennia this is called making laws right it doesn't work we design laws of course it works um because we design laws to align our objectives with the common good so that we prevent people from doing bad things by telling them you're going to get this punishment if you do these bad things we even do it for super intelligent entities called corporations it's not perfect but it's not existential okay but here is another point to uh I want to uh respond to to Max you're a professor at MIT which means you educate really smart kids you are building super intelligent machines I mean human machines in that case right because you're educating them what is the probability that in your class one of those students is the new Hitler and it's going to wipe out uh liberal democracy well as Yoshua mentioned what we can build with machines is going to be vastly more powerful even than my MIT students and I still love to brag about them right it's that scaling and also the entirely alien nature of their minds which is so intimidating you know the students the computers yes exactly and I I feel a little bit like we're having this this we're on this big ship sailing South from here down the Niagara River and Yoshua is like hmm I heard there might be a waterfall down there maybe this isn't safe and Melanie is saying well I'm not convinced that there even is a waterfall even though Geoff Hinton says there is and so do I and moreover I don't know how far away the waterfall is it might be like really far away there's big uncertainty so I'm not really going to worry and tell you about my plan for keeping control and then Yann it feels like you're saying that yeah there is definitely a waterfall there we'll get there we'll figure it out but but we're gonna figure out how to make things safe how to stay in control of our boat when we get close we don't know yet how to make it safe I think we all admit this uh some of us have worked very hard on that research for close to a decade but you're like it's going to be easy we'll figure it out when we get there and I think in both in both of your cases it would really be um If This Were again biotech this kind of argument is not reasonable if a tech company says oil companies told us oh extinction threats whatever from climate
change it's not a thing that's sort of like what I'm hearing from you Melanie so therefore they shouldn't be obligated to tell us why what they're doing is safe surely it should be their responsibility to tell us how they're going to mitigate this problem and similarly with biotech again just because okay almost all big companies will always claim that there there is no risk with their product initially the tobacco industry told us that too the asbestos industry told us that too so Max I think you're misstating what we're saying we're not saying there's no risk and you know I think this analogy with the waterfall is well okay we all know about waterfalls so that seems very reasonable but this idea of super intelligent AI is not something that you know you're sort of extrapolating wildly from what we have to something that's smarter than us in every possible way a hundred times smarter than us maybe and yet still misinterprets our goals this is a chartered cruise ship towards the falls it's set down on the ticket this has been the goal of artificial intelligence since its inception to build superintelligence I mean everybody's been saying oh the waterfall's up ahead we're almost at the waterfall we're almost at the waterfall we're almost at the waterfall that happened in 1960 not by Geoff Hinton but people like Claude Shannon and Herbert Simon and they were just dead wrong and you know what I'd be happy if I'm wrong but I don't know and you know what else I don't know what other kinds of existential risks are going to happen maybe we're going to have lots you know genetic engineering is going to we are moving in that direction right that's the thing it's not like some crazy crazy thing that's unrelated to what we're seeing it's very much related to the work that I'm doing I I see where we are going and I don't like it that's why I'm doing this and we have to be let's be we have to be more humble here because we made another terrible prediction an epically failed one about three years ago most AI researchers thought we were decades 30 years maybe 50 years away from passing the Turing test we just heard Yoshua argue now that it's already happened there is no Turing test but the prediction did not hear me hear me out I'm just making the point that three years ago most AI researchers that I know did not think we were going to get ChatGPT or GPT-4 in 2023.
this took a lot of people by surprise wouldn't you agree no absolutely not maybe not you but there are actually serious polls out there showing that most people no the basic technology for it existed two you know three years before yeah so I didn't know what was going on there and from the outside it was it was a big surprise really it's a big surprise and no I'm not no one's denying that ChatGPT and GPT-4 and so on are amazing but that doesn't point in the direction of this sort of misalignment follow the line I don't know if there is a line that I can well look at the last 20 years there is a lot there's there's a line in a certain direction where these systems are learning from huge amounts of human data to be able to synthesize knowledge they're not able to feel they're not able to think they're not able to form their own goals that won't come soon that's harder than intelligence I would say right and that's the problem we don't know how to program a computer unlike what Yann seems to claim right now it's an open problem that economists uh reinforcement learning researchers AI safety people have been studying for many years and essentially we don't know how to make a computer that will do what we intend and there are several reasons for this but one of them is that we're not even able to express it in a clear way that a computer for sure will understand and then the issue that they might go into something else that turns out because they want to achieve those goals that they they might want to preserve their existence they don't want to do anything they're not alive no that's very easy to do no you're you're wrong you're wrong the the systems that for example ChatGPT is essentially like an oracle it doesn't really have a want although a little bit it wants to please us because of the reinforcement learning yes it's been trained to please us then it's actually easy to put a wrapper around them to turn them into agents that have goals it's actually easy to do goals that humans gave them yes not their own goals yes and in order to achieve those goals they're going to have sub goals and those sub goals for example may include things like deception and and this is happening that never happened it happened already it did not happen actually we studied that example there was an example where the New York Times in fact reported that GPT-4 had deceived a TaskRabbit worker but if you actually look at the actual paper and look at the details and I ask you all to think more critically about what you hear in the media a human asked it prompted it to push it in that direction it has happened when that that was the point they were trying to make a test yeah and that the system could could deceive a human so when it's when it told this journalist that it loved him systems yeah that's what people do when it told the journalist that it loved him and he should leave his wife that was deception do you really believe that it loved the guy or is it deceiving him it was not deceiving that's a very anthropomorphic idea it's an intention I don't believe it was deceiving him it's the consequence of trying to achieve a goal in order to achieve the goal you you try to find you know the means to the end and just it's not just humans it's it's you know that system has no goal uh other animals do it as well this has been a terrific debate because the moderator has been completely obsolete for the last 10 minutes I love that I mean let's uh let's do this more often um in just our remaining moments though I
want to touch on one point that Max raised in part of his three kind of critiques and it's to go back to the second point maybe we can have that definition again up on the screen that we agreed on for existential risk and it's the second point that an existential risk isn't simply an Extinction event it's a permanent drastic destruction of kind of human potential our inability to return to our previous track of development again a very high bar but I guess the question Melanie that interests me is I know your views on this is is this idea of losing our agency as Max presents it you know the corporations that adopt AI will be more successful and outperform the ones that don't the governments that adopt AI will be more powerful and outperform the ones that don't the citizens the individuals that do and all of this will encourage us to hand over our decision making to machines because performatively the outcomes will just be so much better for us individually and collectively why isn't that an existential risk a slide into the end of human agency and an inability to return to our full human potential I I don't so I think that first of all it's not clear to me at all that companies that use AI or humans that use AI are going to outperform those that don't you know AI has lots of limitations and you know for example there was recently a lawyer who used AI to put together a case and it actually made up all kinds of cases it cited and he got you know pounded by the judge he did not out-compete the other lawyers now maybe AI will get better but I think that um you know these um these sort of suppositions the assumptions are not obvious at all and humans are very reluctant to give up their agency you know we've had lots and lots of Technologies in the past and we've had you know people used to think that you know having the the introduction of say writing and calculators would make people lose things like their memories their abilities to to reason or that you know Google Maps would make us lose our ability to navigate and stuff but humans actually kind of adapt to those Technologies and go beyond that yeah and you're nodding to this you agree that there isn't an agency problem here well you just said it the the point has been made for every technological Revolution or or Evolution for writing Socrates was against writing he said people are are going to lose their memory right the Catholic church was against the printing press saying they would lose control of uh the Dogma which they did they they could do nothing about it the Ottoman Empire banned the printing press and according to some historians that's what uh accelerated their decline it took them 300 years or at least 200 to uh authorize it again and this was just because they wanted to control their population and so every technology that makes people smarter or enables communication between people facilitates education again is intrinsically good and AI is kind of a new version of this it's the new printing press so long as it doesn't blow up in our face and then you can smash your hand with the printing press right yes but the problem is the scale right so so long as we build technologies that could be harmful but on a small scale the goods the the benefits overwhelm the dangers but now we're talking wait wait now we're talking about building Technologies unlike any other technology because it's technology that can design its own technology no yes we're talking about superhuman AI this is this is the subject it's under our
control it's under control and we remain in control it's not it's it's very much like previous technology it's not really it's not qualitatively different the experts who have been studying this question say it's going to be very hard to keep it under control and that is why I'm here today that very argument has been made for computers you know 60 years ago um this this is I recommend I recommend the audience to go uh to a website called the Pessimists Archive it's hilarious it's uh newspaper clips of all the stupid things people said whenever a new cultural phenomenon or a new technology appeared okay the train oh you know you're not going to take the train it's going to go 50 kilometers an hour and you can't breathe at that speed so it is full of this right everybody has said this kind of thing every single time there was a technological Evolution or even a cultural revolution you know Jazz was going to destroy Society the printing press was going to destroy society and it did it totally did for the better it enabled the Enlightenment philosophy science rationalism so let's before we go to closing statements I want to give Max the last word in the segment an Enlightenment Max are we on the verge of a a reimagining and blooming of human thought powered by AI I think well stock Traders will tell you that past performance is not an indicator of future performance future results yeah future results and it would be a huge mistake in an era of exponential technological growth to assume that just because something happened one way in the past it's going to continue being this way what's actually happened is first during the Industrial Revolution we made machines that were faster and stronger than us so we started working less with our muscles and more with our brains and that was fine we typically got paid better now in this disempowerment scenario that you were asking them about we're now instead building machines that can think better than us gradually and we're not there yet so we shouldn't be basing arguments based on today's pathetically bad AI we should be looking at tomorrow's the the the um superhuman AI that some people think might happen in five years or 20 years and you think maybe would happen in you know 300 years that's what we should be talking about that's the elephant in the room because if that happens it's going to be very different from the Industrial Revolution now we can neither compete with our muscles nor with our brains and we are really going to start seeing um a true disempowerment and I don't think it has to be that way I can be a bit optimistic too you know if we stop dismissing these existential threats and start taking them seriously which I think dismissal of them is exactly the thing which is preventing us from doing the right thing I think that a lot of the research that you were mentioning Yoshua on how we can keep the machines under control how we can use them to empower rather than disempower us and do all these great things will actually succeed it's not that it's impossible to make AI safe and controllable like Yann hopes we all hope for this it's just that right now the pace of the growth of power of tech has been faster than the pace of the Safety Research the alignment research and and putting in place the right policies and so by talking about and acknowledging that there is a risk we can make sure we accelerate the pace of the safety work and put some some safety requirements in place to make sure that we get the safety in time and get all the upside okay
fantastic uh four-way discussion I've learned a lot and I know our audience has too so let's go to closing statements we're going to put three minutes on the clock these will be in the opposite order of the openings so Melanie you're up first the stage is yours thank you I'm in awe of what our field has accomplished you know we've accomplished computers that can describe images to blind people we've developed computers that can help doctors diagnose diseases predict the structures of proteins and fluently synthesize enormous amounts of human Collective knowledge the potential for benefiting humanity is breathtaking but hear me on this we can acknowledge the incredible advances in AI without extrapolating to unfounded speculations of emerging super intelligent AI we all know that Science and Technology are double-edged swords with potential risks for misuse there are many risks of the AI technology to be sure but but hear me on this as well we can acknowledge the risks and harms of AI without extrapolating to the mythical Specter of existential threats there are risks there are harms they're not existential while talk of super intelligent machines destroying Humanity or even helping evil humans to do so may resonate with our base fears everything science tells us about the nature of intelligence and about the resilience of our society argues against the existential threat narratives that we've heard here today I'm worried that overstating the so-called existential threats of AI takes our Collective attention and focus away from the real very real harms and risks that modern AI actually presents and this is not an idle worry remember Geoff Hinton who we've all been talking about he he was asked on CNN why he hadn't pushed the concerns of AI ethicists at Google who had long warned about risks of AI spreading misinformation and magnifying bias he said they were rather different concerns than mine their concerns aren't as existentially serious as the idea of these things getting more intelligent than us and taking over so to me this minimization of the real risks of AI encapsulates what's so dangerous about this existential threat narrative it takes all the oxygen out of the room and leaves no space for the real evidence-based risks that we need to address let's not allow ungrounded speculations about the future of AI to inflame our emotions and fears to distract us from real harms that we can address let's design ways to make AI safe fair and beneficial based on science not science fiction thank you great closing statement Melanie thank you Yoshua Bengio you're up next I've been working on machine learning for all my life and I've seen the progress I did not expect the advances we've seen in in recent years and we now have machines that we can have a dialogue with and they can pass for humans and this was something that for a long time was considered as a milestone maybe of actual uh human level intelligence now when we look carefully we see that it's not exactly that and there are some things missing Melanie says let's not extrapolate I say we have to extrapolate the reason is we have no choice if something this awful is uh maybe a few years or maybe even one or two decades in front of us we have to prepare for it we have to do the social adaptations we have to do the AI safety work that Yann has been talking about that I've been talking about that I think we have a chance we have agency right now to control our future and for that we need to accept that there are all
kinds of risks Melanie I'm totally with you with all the harms that AI currently poses but it doesn't mean that we have to deny the risks that are existential that are already on our radar screen at least on mine um now just uh quickly you know why do I think that this is something that can happen uh and have dire consequences it's because of humans you know it's not just about AI getting crazy it's because we have weaknesses um there are many ways in which we can get deluded there are uh you know conspiracy theories and lots of people believe them so there will be people who act in strange ways and will do things that can be very harmful so long as um each of us has just our hands uh maybe guns unfortunately um and you know damage can be local and and not existential but when we build very powerful tools I think we really need to be much more careful and and that's why we're talking about this today so just uh to end on a positive note um Melanie asked me you know why don't you stop working on this well I've been thinking a lot about this I want to do what I think is is best to to go in the right direction and I think that I'm going to reorient my research so that uh either I'm working on applications that are not dangerous but that are very safe like working on Healthcare and the environment or working on AI safety in order to prepare and prevent uh the bad things that could happen so thank you very much and think about it the stage is yours so yes there are risks as with every technology and and they are not existential I'm just repeating what you said I couldn't say it better it's you know every technology is a trade-off between the benefits and the side effects some side effects are predictable some aren't there are going to be side effects bad side effects of AI there are going to be people who are going to try to use AI for bad things um and it's it's a game of cat and mouse and welcome to the real world this is the way this is the world we live in um every time something bad happens we find countermeasures this is true for defense and everything military this is true for intelligence it's true for terrorism crime just about everything right and the statement has been made for every technological uh Evolution remember what people were saying when the internet started coming online and and became generalized oh there's going to be cyber attacks people are going to steal your credit card number and maybe the financial system will be brought down remember what people were saying just before the year 2000.
satellites are going to fall out of the sky and crash into cities and the phone system was going to crash and civilization will end um it didn't happen we're still here so I think there's a little bit of the same kind of feeling of uncertainty a lot of people have the feeling that bad things are going to happen because they're not in control the other feeling that AI is just going to happen and there's nothing they can do about it and that creates fear and I can completely understand that but some of us are in what could be construed as the driver's seat um there are plans to make those things safe and if we can't make them safe we're not going to build them here is an example in the 50s people thought about building nuclear-powered cars there were prototypes that were built they were never deployed obviously because it's not safe so I think it's going to be the same thing for for AI we're not going to build nuclear powered AI although arguably in certain countries it is nuclear powered um so I I think there are risks attached to not developing AI there's so many potential benefits we have to think about that and sort of weigh it against um against the actual risk but the risks are not existential AI is going to be subservient to humans it's going to be smarter than us but it's not going to reduce our agency on the contrary it's going to empower us it's like having a staff of really smart people working for you that's a new Renaissance thank you [Applause] thank you Yann LeCun we're going to give the last word in tonight's terrific debate to Max Tegmark Max take us away I'm going to make a pitch for humility science is all about being humble my definition of what it means that I'm a scientist is that I would actually rather have questions I can't answer than answers I can't question now what you're going to vote on here in a few minutes you're going to vote on on this question of whether this is an existential threat what does that mean you're not asked to vote on whether you're 100 percent sure that we're going to get wiped out you're being asked to vote on whether there is a non-zero risk an existential risk some risk or a non-zero probability right that's what you're asked to vote on so if if you think yeah it's probably going to be fine but maybe there's a five percent chance that we're gonna get wiped out and if you wouldn't get on an airplane if you're told that it has a five percent chance of crashing that's too high for you right then you should vote Yes it is an existential threat now we all need to be more humble you were right Yann making fun of that pessimism website where people worried too much about stuff that didn't happen you were right Melanie pointing out how ridiculously over optimistic McCarthy and Minsky were in the beginning thinking it would happen sooner scientists have lacked humility in that direction but we've also just as often lacked humility in the opposite direction the world's most famous nuclear physicist at the time Lord Rutherford said oh getting nuclear energy out of atoms is like moonshine the very next day Leo Szilard invents the nuclear Chain Reaction right so it's and as I said a few years ago most people including even Yoshua Bengio thought we weren't going to have GPT-4 well here it is and we also have to be humble about these probabilities that I kept pestering you about there so I thank you so much Yann for actually giving me one you said it's about the same risk as getting struck by an asteroid since I'm a nerd I happen to know that
the risk of that wiping us out is about one in a hundred million per year and I think that's not a humble estimate I think it's way too low as you actually explained it reminds me a lot actually of of when the Challenger space shuttle investigation happened where many scientists said the probability of a space shuttle blowing up was 10 to the minus five one in a hundred thousand they sent up about 100 of them two of them blew up right lack of humility and the designers of the Fukushima Nuclear Power Plant had said that the chance is less than one in ten thousand that you'll get that kind of bad tsunami in any given year lack of humility let's be humble the truth is we don't know how soon we're gonna get super human AI the three of us think it might happen in five to twenty years Melanie thinks it's going to take a lot longer we just don't know who's right that means there's a risk that it is going to happen soon and we also don't know whether it's going to go well or whether we can get disempowered or wiped out let's be humble it is an existential threat that's what it means oh thank you guys for a terrific debate this did exactly what we hoped which was an opportunity to learn uh to think big thoughts about AI and now to have an opportunity to vote for a second time on tonight's resolution be it resolved AI research and development poses an existential risk so if you need to go back and scan that QR code either in the program or on the screens or just pop open your browser and you will see that single question there ready for you to vote again we recommend you use your cellular network for voting as opposed to voting on the Wi-Fi which will have a hard time registering your ballot so let's do that voting now and while we do that let's review our audience vote at the start of the evening so when all of you came in here unedified by 90 plus minutes of debate you had a vote of 67 percent in favor of the motion 33 percent opposed so an audience definitively in the Pro camp arguing in support of the motion we then asked you how likely the chances were of you changing your mind over the course of the proceedings of the debate and we got a really high number on that which was interesting 92 percent that might be a record for this debate series uh so public opinion in this room really in flux as we went into what proved to be a fascinating conversation it's not working okay we always have problems with voting don't we I miss the paper ballots I gotta say uh should we bring back the paper ballots okay it's working now oh okay sorry a hand vote okay let's try a hand vote that's a good idea we were talking about ancient Rome today and Athens democracy we can do this by hand so let's see if the voting works it probably it didn't but uh while we're waiting for that let's see a hand vote how many of you remember 67 how many of you are in favor of the motion at the end of tonight's debate put up your hand you guys can put up your hands if you want okay is that 67 percent I need a I need a visual AI to read this room okay how many of you are opposed whoa that's hard to call okay is anyone having any luck with the voting application now okay what we actually you know what we have a solution to this we have no we have a solution we have all of your emails all of you that bought tickets we have your emails we will email you in the next 24 hours you can fill out a ballot send it back to us and we'll announce that on a website and we'll call it a draw for now but let's thank our Debaters for a terrific debate
thank you thank you John and thank the Munk foundation for a fabulous evening we've got VR demonstrations outside courtesy of Meta please enjoy those they're in the lobby for as long as you'd like yeah yeah yeah exactly thank you everybody
Info
Channel: Policy-Relevant Science & Technology
Views: 66,525
Keywords: science, technology, artificial intelligence, ai, machine learning, ml, risk, existential risk, ai risk, ai alignment, crisis, climate, pandemic, gpt, chat gpt, debate, munk debate, munk debate ai, bengio, mitchell, tegmark, mit, lecun, meta, twitter, algorithm, munk debates, agi
Id: 144uOfr4SYA
Length: 107min 20sec (6440 seconds)
Published: Sat Jun 24 2023