Max Tegmark lecture on Life 3.0 – Being Human in the Age of Artificial Intelligence

Captions
All right, good afternoon everyone. First of all, it's really fantastic to see so many people here today. The interest in this event has been truly amazing; unfortunately we couldn't accommodate everyone, but we're really happy with the turnout here today. The Boston Consulting Group and the Brahe Foundation have come together today to arrange an exclusive talk by MIT professor Max Tegmark about his latest book, his research, and specifically AI and its potential to revolutionize the way we think about the future. The talk today will go on for roughly an hour, and after that we'll have an interactive, hopefully lively, Q&A session.

So, who am I? My name is Alexander Berman, one of the directors at the Brahe Foundation. Brahe is a nonprofit organization that has been around for soon ten years, and what we focus on is quite simple: we try to identify and attract the leading academics and researchers in the natural and behavioral sciences and bring them here, primarily to Sweden and Stockholm, to lecture, educate and most of all inspire the youth, the young students of today, and also the general public. We do this because we truly believe that one of science's most important contributions to civilization is educating the youth. The way we try to do it, in addition, is to find academics who are masters of really complex areas and can turn those complex areas into an easy, understandable message for a wider audience. To date we at Brahe have arranged 23 programmes, including six by Nobel laureates, in all areas of the sciences: everything from physics, mathematics and astronomy to anthropology and psychology. To be able to do this we have great support from both the private and the public sector, so today we have partnered up with the Boston Consulting Group, who are a strong supporter of this initiative, and this is
our second program together with them.

So let me say a few words about our special guest today, Max Tegmark. Max is a physics professor at MIT and one of the leading researchers in the field of artificial intelligence. His research has ranged from understanding the mysteries of the cosmological universe to the physics of cognitive systems, and now, more recently, the interface between physics, AI and neuroscience. He is the co-founder of the Future of Life Institute and author of over 200 publications, and more recently also of the books Life 3.0: Being Human in the Age of Artificial Intelligence and Our Mathematical Universe. What I think is the most exciting and interesting part about Max is that he is really on a quest: he has taken this really important and interesting topic and is trying to make it relevant for all of us, by bringing it not only to the field of researchers but also to the general public, so that we understand the importance, the relevance and the responsibility of having this discussion about how AI can and will impact our future. Before we hear from Max, I would like to welcome Andreas up to the stage. He is a partner at the Boston Consulting Group and also the Nordic head of their initiative GAMMA. So please welcome Andreas.

[Andreas:] Thank you. Very exciting to see so many of you here today, and very exciting to be able to arrange this together with Brahe. As was said, I'm a partner at the Boston Consulting Group, and I also head our GAMMA initiative in the Nordics. GAMMA is where we do AI and data science, and let me describe why we think this is so exciting. BCG as such is a global management consulting firm; we are 16,000 people globally and do traditional consulting, so you could ask: why do we dive headfirst into AI? First, I think it's not that odd: we have always been very data-driven, and when new techniques and tools become available we found it quite natural to embrace them and bring them to
our clients. But also because a lot of our clients are asking a lot of questions about this. It could be: how does competition change in my industry if I don't make these investments and hop on the train — will I go extinct? Or more practical things: how can I, with the help of AI, provide a better experience for our consumers, or how can I apply it to processes to make things more efficient or effective? So that's why we started GAMMA a couple of years back. Today we are 550, growing to 600, data scientists, mathematicians, statisticians and programmers who help our clients hands-on to actually apply AI in their business. It could be in the fields of marketing, pricing, sales, supply chain, you name it. We have people who really like to take real-life problems, figure out how to represent them as a mathematical model, and then implement that in real life.

I think one important thing we've learned, and the reason we like to combine traditional consulting with our AI consulting, is also linked to the subtitle of this seminar: being human in the age of AI. We see a lot of our clients really struggling, not with building the model or cracking the problem in that sense, but with implementing it and getting it into a context where humans and AI can coexist. How do you get your employees to actually understand what happens? How do you get them to trust the decisions that are made by an AI? How do you have an inclusive design process that actually involves the domain experts plus the data scientists or programmers, so that the people who will then own it and use it are on board — maybe not with the inner workings of the true black-box models, but at least having been part of the process, able to see the intermediate steps? We think that's really, really important, and we think it's a key thing for making this operational and real. While we are applying today's technologies to problems,
and our clients are focused on getting the most out of what can be done today, there is a lingering, much bigger question that I think quite a few of them have on their minds: what happens in the long term? Are we taking small steps now towards something that we don't want to have, or small steps towards something that we actually want to have? That's why I think this lecture will be very interesting, and I am really happy to be able to introduce to you Max Tegmark.

[Max Tegmark:] Thank you, thank you. [In Swedish:] Thank you very much. It's a great pleasure and honor to be here. [...] So I'm going to try to control myself and only speak English for the rest of it. First of all, I'm extra honored to be invited by the Brahe Foundation, because — I promise you'll all leave here knowing at least one thing that you didn't know when you came; at the minimum it's going to be this fact — when I was in high school (is there anyone here from my old school? Awesome, fighting chance, wow, so we have some of my friends there) I had founded Tycho Brahes Muntra Vänner, the Merry Friends of Tycho Brahe, which was a secret organization that persisted for many, many years, whose noble goal it is to get together and boost the Danish alcohol exports. It also had some bizarre traditions, very faithful to Tycho Brahe's eccentric ways: for example, nobody was allowed to leave the table, no matter how badly they had to go pee, until the dinner was officially declared over. Some of you will know that that's related to how he died.

All right, on to the topic we're here for today: artificial intelligence. I'm a little bit extra excited about this talk, because most of the many talks I give tend to be very short — this and that, and then there's time for like two questions afterwards. Today the organizers are actually giving you a very large amount of time to ask questions. I was told initially to talk for one hour and 20 minutes, but I
want to try to limit myself to about one hour, to give you at least a full 45 minutes to ask all the questions you have, because that's going to be, I think, the most fun part of all, when we can get a really good discussion going here and I can share more thoughts about the things that you really want to hear about.

All right, so first of all I want to encourage you to think big — really big, cosmically big. Think about it: after 13.8 billion years of cosmic history, our universe has woken up, on a small blue planet, and tiny conscious parts of our universe have started gazing out into our universe with telescopes. Let's think about what we've discovered. We've discovered something really humbling, namely that our universe is vastly grander than our ancestors imagined, so if we had huge egos before, maybe now we feel a little bit smaller. What's also humbling is that even though many people think our universe is this giant Star Trek episode, full of all these other intelligent civilizations that are ready to swoop in and fix everything if we accidentally nuke ourselves or go extinct, there's really no scientific evidence for that, and there's a lot of scientific evidence against it. Our universe looks like a pretty lonely place by and large; so far it looks like it's mostly dead, with life being a very small perturbation on top — also humbling.

But we've also discovered something inspiring, which is that the technology we're developing has the potential to change all of this. It has the potential to help life flourish like never before — not just for the next election cycle, which might come very soon here in Sweden, and not just for the next century, but for billions of years; and not just on this little planet, but actually throughout much of this amazing cosmos. I call the earliest life here on Earth Life 1.0, because it was really dumb — like bacteria. No offense to bacteria, but they cannot learn. They have a lot of clever things
they can do, but it's all hard-coded in their DNA, and they learn nothing during their lifetime. I call us humans Life 2.0 because we can learn, which we in nerdy computer-speak might think of as installing new software into our brains: languages, job skills and so on. It's this ability to design our own software which has made us humans the most powerful species on this planet; it's enabled cultural evolution. Life 3.0, which can design not only its software but also its hardware, of course doesn't exist yet, but perhaps we should think of ourselves as Life 2.1 already, because technology is enabling us to start making minor upgrades, like artificial knees, pacemakers, cochlear implants and so on. And the more we can upgrade our software and our hardware, the more we break free of our evolutionary shackles and become the masters of our own destiny.

So let's take a closer look at our relationship with technology. All right, raise your hand if you actually saw the launch of the Apollo 11 moon mission live. Oh, I'm jealous — awesome. I was only two years old, and my parents didn't have a TV. I consider this mission both successful and inspiring, because it showed that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey that I want to talk about: a journey propelled by technology more powerful than rocket engines, and where the passengers aren't just three astronauts but all of humanity. So let's talk about our collective journey into the future with artificial intelligence. My friend Jaan Tallinn likes to point out that there's a really nice analogy between rocketry on one hand and all of technology on the other, because just as with rocketry, it's not enough to make the technology powerful. If we're going to be really ambitious, which I think we should be, we also have to figure out how to steer it and where we want to go with it. So I want to spend the rest of this hour talking
about all three: the power, the steering and the destination. Let's start with the power of artificial intelligence.

What is artificial intelligence? Well, what is intelligence, first of all? People love arguing about that. I'm going to define it very inclusively, simply as the ability to accomplish goals — and the more complex those goals, the more intelligence. I give this very inclusive definition because I want to include both biological and artificial intelligence, and I want to avoid this silly carbon-chauvinism idea that you can only be smart if you're made of meat. I want to avoid this carbon-chauvinism idea because it's the exact opposite idea that has actually enabled artificial intelligence, namely the idea that intelligence is all about information processing. Many people dismiss the idea of very advanced artificial intelligence that can ultimately be smarter than us in all ways, because they think intelligence is something mysterious that can only exist in biological organisms like us. That's carbon chauvinism. If you take the attitude instead that it's all about information processing, then it doesn't matter whether the information is processed by carbon atoms in neurons in brains, or by silicon atoms in today's computers, or by some other kind of elementary particles in tomorrow's technology. And it's this very idea — that it's about the information processing only, and it doesn't matter what processes the information — which has really given us the whole revolution in IT and artificial intelligence.

Look at memory, for example: it doesn't matter whether you store your information in mechanical configurations of stuff, or on old floppy drives, or in something more modern, because it's just the information itself that matters. This has enabled engineers for the past 60 years to continuously keep swapping out all the hardware under the hood, thereby making things dramatically better — billions and billions of times cheaper —
without the user even having to care. Similarly with computation: it doesn't matter whether the computation is electromechanical, done with relays and vacuum tubes, or integrated circuits or whatever — or neurons, for that matter. It's the information processing itself that matters. That's again why we've seen such a shocking improvement: doing a certain number of computations per second has gotten so many orders of magnitude cheaper that if you got that much of a discount when you went shopping in downtown Stockholm today, you could buy the entire gross domestic product of the world for less than an öre. Imagine — happy shopping! And that's why computing is becoming so ubiquitous.

So because of this basic rejection of carbon chauvinism, we've seen enormous progress in hardware. Parallel to that, we've also seen fantastic progress in software. For most of my life, computing was all about telling the computer exactly what to do, building computers out of logical circuits. But then people started stealing an idea that evolution came up with for our brains, where you instead have neurons — little devices that can communicate with each other, where how strongly different neurons talk to each other corresponds to some little number. You can simulate simplified versions of these neurons in so-called artificial neural networks, and this has revolutionized AI recently. When you talk to your smartphone, for example, that's how it figures out what you're saying. It's a different kind of software for doing AI, which has the advantage that you don't have to program in all the intelligence yourself: it can learn very easily from data, and therefore learn a lot of things you didn't know — just as our children can become smarter than us, because they don't have to learn everything from us. Putting together better hardware and better software, it's really amazing how AI has improved recently, isn't it? Think about it: not long
ago, robots couldn't walk and we couldn't do face recognition with AI. Now AI can generate fake faces, and it can make your face say stuff that you never said. Not long ago AI was just a little research thing we did in our labs; now AI can actually save lives. Each year more than a million lives are needlessly wasted in traffic accidents, and most of these, I think, will soon be preventable by self-driving cars. Even more lives are needlessly wasted in accidents and mistakes in hospitals; in the US, for example, there have been some shocking studies suggesting that self-driving cars are actually a bit of a sideshow for saving lives compared to how much you can save just by eliminating medical mistakes. And that's just eliminating mistakes — if we can also use AI, which we certainly will be able to, to improve medical research, there's so much more we can do: better diagnosis, treatment planning, developing new medicines and so on. For diagnostics, for instance, we already today have neural-network-based AI systems that can perform as well as or better than human doctors in diagnosing prostate cancer, lung cancer and blindness-related diseases. Then there's the whole business of robotic surgery, where the AI isn't just diagnosing but is also used to help solve problems very hands-on, and it's quite clear that with better precision, miniaturization, smaller incisions and so on, you can get better outcomes. Today robotic surgery is usually just assisting human physicians, but more and more of this is becoming automated, which means more people can get access to it, also in remote locations and in developing countries where there is a shortage of skilled physicians.

Last but not least: raise your hand if you know how to play the game of Go. Aha, lots of hands — I'm impressed. Not long ago the machines couldn't beat the best humans at this. Then Google DeepMind's AlphaZero AI took three thousand
years of human Go games and Go wisdom, ignored it all, and by just playing against itself for 24 hours, with no other input whatsoever than the rules of the game, became the world's best Go player. And the most interesting thing here was not that AlphaZero crushed human gamers; it was that AlphaZero crushed human AI researchers, who had spent decades handcrafting Go-playing software, trying to put in clever rules and heuristics. All of that is now completely obsolete, in the garbage. The same AlphaZero also crushed the AI researchers at chess: basically the same neural network architecture, after two hours, could beat the best human players, and after four hours it beat the best human-built chess software. Raise your hand if you've ever played against Stockfish. Pretty tough opponent, right? Raise your hand if you've ever beaten Stockfish. I see no hands. Well, AlphaZero played 100 games against Stockfish; it didn't lose a single game, and it even won a bunch of games as Black.

So all this amazing progress really begs the question: how far will it go? I like to think about this question in terms of an abstract landscape of tasks, where the elevation represents how difficult it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on in the task landscape — and it's good to be in Sweden, where you don't have to be careful talking about global warming. The obvious advice for the younger part of the audience is: be careful going into careers right at the waterfront, which will be the first to get disrupted by automation. But there's also this much bigger question: how high will the water end up rising eventually? Will it eventually submerge all land, matching human intelligence at all tasks, including writing books and giving talks and whatever career you're in? This is the definition of artificial general intelligence, AGI, and by this
definition, people who say "there will always be jobs that humans can do better than machines" are simply saying there will never be AGI. These people are either carbon chauvinists who think it's impossible, or people who are just pessimistic about our human ability to figure out how to do it.

If you think AGI sounds like crazy science fiction, it's important to remember that there's something else which sounds like even crazier science fiction: superintelligence. The idea is very simple: if we actually succeed in building AGI, then by definition it can do all jobs better than us, including the job of AI development, which means that Google could replace its forty thousand engineers by AGI, and further progress in AI would then be driven primarily not by humans but by AI itself. That means the typical human research-and-development timescale of years for making things better gets dramatically shortened — things can happen much, much faster — and this raises the very controversial idea of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far, far behind.

All right, now we need a reality check. My Swedish friend and colleague Nick Bostrom, for example, has written a fascinating book about this, and sometimes I hear people say very scornful things like "ah, superintelligence, that's just crazy science fiction, and the only people who take it seriously are philosophers" — which is obviously a dig at Nick. But is that really true? I'm going to let you judge for yourselves by listening in on this little panel discussion I moderated at a conference in California last year: "So before, I asked if superintelligence is possible at all according to the laws of physics. Now I'm asking: will it actually happen? Yes, no, or it's complicated?" — "A little bit complicated, but yes." — "Yes." — "And if it doesn't, something terrible has happened to prevent it." — "Yes." — "Probably yes." — "Yes." — "Yes." — "Yes." — "No." Well, let's see here. Jaan Tallinn, who many of you know is
one of the creators of Skype — definitely not a philosopher. Demis Hassabis, leader of Google DeepMind, which gave us AlphaZero — calling him someone who has no clue about AI would be a bit of a stretch. We had a number of very famous AI professors and others there too. So what does this mean? It certainly means it's just not true that only philosophers take this seriously. We have to take it seriously as a possibility. That obviously doesn't mean it's going to happen, or that it can happen anytime soon; it doesn't even mean that artificial general intelligence is going to happen anytime soon. But we have to take seriously that it might.

So what is going to happen, then, and when? Here there's a really interesting, legitimate scientific controversy. On one hand you have people like my MIT colleague Rodney Brooks, who says it won't happen for hundreds of years — not even AGI, forget about it. You have Andrew Ng, who used to lead Baidu's AI effort, who says that worrying about this is like worrying about overpopulation on Mars, that's how far off it is — and these are serious AI people. On the other hand you have people like Demis Hassabis, who is much more optimistic, thinks it's going to happen much sooner, and is of course working very hard with exactly the goal of building it. Recent surveys have shown that most AI researchers actually share Demis's forecast that this is likely to happen and not that far off: most AI researchers, guessing in these surveys, say it's going to happen within a matter of decades — so probably within the lifetime of most of you. You look like a pretty healthy bunch who go to the gym and take your vitamins, right?

And this really begs the question: and then what? What do we want it to mean to be human? What do we want the role of humans to be, if machines can do everything better and cheaper than us? The way I see it, we face a choice here — a really important choice that isn't discussed much, and I want to talk about it. One option is to be complacent: we
can build machines that do everything better and cheaper than us, and not worry about the consequences — after all, if we build technology that makes all humans obsolete, what could possibly go wrong? On the other hand, you guys are ambitious, I can feel it. I like ambition. I think we should all be ambitious as a species and aim higher than that. I say let's really be ambitious: envision a truly inspiring high-tech future with AI, and try to figure out how to steer toward it — which brings us to the second part of our rocket metaphor. We're making our AI ever more powerful, but how can we steer towards an outcome where this helps humanity flourish rather than flounder?

To help with that, I co-founded the Future of Life Institute, and I'm actually very honored that one of our scientific advisory board members, Frank Wilczek, is with us here in the audience today, together with his awesome wife Betsy. Our goal is simply for the future of life to exist, and to be as inspiring as possible. I don't feel this should be a controversial left-right political issue or anything like that. I love technology. I am optimistic that we can create an inspiring high-tech future, but in order to do that we need to win the wisdom race. What do I mean by that? I mean we need to win the race between the growing power of our technology and the growing wisdom with which we manage our technology. But that's going to be challenging, because it's going to require us to change strategies. Our old strategy for winning this race — for keeping the wisdom always one step ahead — has always been to learn from mistakes. Think about it: first we invented fire, screwed up a bunch of times, then we invented the fire extinguisher. First we invented the car, then we screwed up a bunch of times, and then we invented the seatbelt and the airbag and the traffic light and laws against driving too fast, and so on. In both of these cases things worked out pretty well, I would say. Yes, in Sweden we sometimes have horrible fires and arson
and so on, but overall the value of being able to keep our houses warm in the winter with fire greatly outbalances the damage done by fire, because we as a society have developed the wisdom — we give people enough incentives to use fire well. Similarly with cars. But science keeps progressing, giving us an ever deeper understanding of nature, which gives us ever more powerful technology that lets us have ever more impact on nature. So sooner or later the power of our technology will cross a certain threshold beyond which learning from mistakes is not such a great idea anymore — it becomes a really lousy idea. Nuclear weapons, for example: I personally think that if, half an hour from now, we have an accidental nuclear war between Russia and the US, and we get a nice little hydrogen bomb dropped on us here in Stockholm, and then, you know, three thousand mushroom clouds later — "hmm, let's just learn from this mistake and be a little more careful next time" — that's not really the best strategy. It would have been better to be a bit more proactive and avoid that mistake in the first place.

And artificial general intelligence will be the most powerful technology we have ever built, because once we build it, we can use it to develop all the other technologies, and there are obviously plenty of mistakes we could make with it — some so big that one might be one too many. So in summary, I want to encourage you all to switch your mindset away from this learning-from-mistakes attitude: with really powerful tech, be proactive rather than reactive. Plan ahead and get things right the first time, because that might be the only time we have. Now this might sound like "duh, of course", but sometimes people tell me this is scaremongering. It's not scaremongering — it's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 moon mission, they systematically thought through everything
that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one can help them. Was that scaremongering? No — that was precisely the safety engineering that ensured the success of the mission, and that's precisely the strategy I'm advocating: if we ever decide to build artificial general intelligence, think through carefully what can go wrong, to make sure it goes right.

So in that spirit, we've organized meetings bringing together leading AI researchers and other thinkers to discuss how to develop the wisdom we need to get things right the first time. The last such meeting we held was in Asilomar, California, last year, and it produced a list of 23 principles, which have since been signed by over a thousand AI researchers around the world, and which were even officially endorsed by the state of California just the other month. I want to tell you about a few of these principles. One is that we should avoid an arms race in lethal autonomous weapons, also known as killer robots. The idea here is very simple: any science can be used both for new ways of helping people and for new ways of harming people. For example, biology and chemistry are much more likely to be used for new cures and new materials than for new ways of killing people. Why is that? It's because the biologists of the world pushed very hard and successfully to persuade the politicians of the world to get an international ban on biological weapons, and by now the stigma is so strong — people feel that biological weapons are so disgusting — that even if you tried to start a little bioweapons startup here and recruit people from this audience to go work for you, good luck, I say. Even if you went up to someone here who is studying biology right now — anyone? — and offered to double their salary to go work in this bioweapons startup, I think they would just turn their back on you, and that would be the end of that. And sure enough,
when was the last time you read about a bioweapon terrorist attack? You probably can't even remember, right? That's success. So the biology rocket has very firmly veered over to the good side. Chemistry has also done quite a good job there — there has been a bit of cheating on the chemical weapons ban, but by and large it's been a great success, because people think it's so disgusting. Although chemistry, unfortunately, has also given us climate change, which maybe we should have thought through a little more carefully before we started pumping that much carbon dioxide into the atmosphere. So what's happening here is that AI researchers are basically saying they want AI to go the same way and be remembered as a great source of new solutions, not just a great new way of murdering people. And honestly, why don't we already have a ban on lethal autonomous weapons? Largely, I think, because people haven't thought about it enough, and they associate this with silly Hollywood movies like The Terminator and so on. Let me show you a short video about a future that I hope we won't end up in. We made this movie to show at the United Nations autonomous weapons negotiations.

[Video:] "Combat pilots directed almost 3,000 precision strikes last year. We're super proud of it. It allows you to separate the bad guys from the good. It's a big deal. But we have something much bigger. Your kids probably have one of these, right? Not quite. Hell of a pilot? No — that skill is all AI. It's flying itself. Its processor can react a hundred times faster than a human. The stochastic motion is an anti-sniper feature. Just like any mobile device these days, it has cameras and sensors, and just like your phones and social media apps, it does facial recognition. Inside here is three grams of shaped explosive. This is how it works. [Music] Did you see that? That little bang is enough to penetrate the skull and destroy the contents. They used to say guns don't kill people, people do. Well,
people don't. They get emotional, disobey orders, aim high. Let's watch the weapons make the decisions. Now trust me, these were all bad guys. Now that is an airstrike of surgical precision. It's one of a range of products. Trained as a team, they can penetrate buildings, cars, trains, evade people, bullets, pretty much any countermeasure. They cannot be stopped. Now, I said this was big. Why? Because we are thinking big. Watch. A 25-million-dollar order now buys this: enough to kill half a city. The bad half. Nuclear is obsolete. Take out your entire enemy, virtually risk-free. Just characterize him, release the swarm, and rest easy. These are available today. We have a distribution network taking orders from military, law enforcement and specialist clients. [News footage:] The nation is still recovering from yesterday's incident, which officials are describing as some kind of automated attack, which killed 11 US senators at the Capitol building. You can see the windows, very small, precisely punctured to gain entry to the building. I just did what I could for him. The things weren't even interested in me, they just buzzed past. Government sources admit the intelligence community has no idea who perpetrated the attack, nor whether it was a state, a group, or even a single individual. So if we cannot defend ourselves, then we strike back. We are investing very heavily in classified defense projects. We made it our deterrent, like our nuclear deterrent. We stockpile in the millions, the billions, at key facilities: the White House, the New York Stock Exchange, Wall Street. [Music] [A mother video-calls her son:] Studies going great? Aren't we doing a video call today? Kind of busy, with people like that? I see some photos here, and what is this video right here? Honey, honey, you're not going into politics, are you? [News footage:] Police are not saying this morning what prompted the alert, but claim relaxing firearm legislation would be useless against the so-called slaughterbots. Stay away from crowds, run indoors, keep windows covered with
shutters. Protect your family. Stay inside. [Music] Authorities are still struggling to make sense of an attack on university campuses worldwide, which targeted some students and not others. The search for a motive is apparently turning to social media, and a video shared by the victims exposing corruption at the highest levels. [An interviewee:] The weapons took away the expense, danger and risk of waging war, and now we can't afford to challenge anyone, not even the smallest fringe group or crank. Who could have done this? Anyone. Dumb weapons drop where you point them. Smart weapons consume data. When you can find your enemy using data, even by a hashtag, you can target an evil ideology right where it starts. [Stuart Russell appears on screen:] This short film is more than just speculation. It shows the results of integrating and miniaturizing technologies that we already have. I'm Stuart Russell, a professor of computer science at Berkeley. I've worked in AI for more than 35 years. Its potential to benefit humanity is enormous, even in defense, but allowing machines to choose to kill humans will be devastating to our security and freedom. Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast. [End of video.] So, raise your hand if you're excited to live in exactly that future. We've talked about artificial general intelligence, but it's important not to conflate that kind of very advanced AI, which might happen in 30 years or whatever, with this, because this is something we can almost build right now. As we heard my colleague Stuart Russell say (he is also one of our key people at the Future of Life Institute), we basically can; it's just that nobody has yet miniaturized, integrated and mass-produced it. If that happens, these kinds of weapons will be developed by all the superpowers; that's a full-on arms race, and that's when things like this can happen. So we are right now on the cusp of starting this arms race, and we have a choice to make: do
we want to go that way or not? A sober economist's take on this comes from Joseph Stiglitz, who was on this stage at some point delivering a Nobel lecture, I believe. He said that if you drive the cost of something down to zero, then you're going to have a lot more consumption of it, and there are certain kinds of things where overall welfare is actually reduced by that thing being cheap and easy to buy; these slaughterbots, I think, are exactly like that. I saw that some of you were filming this; we put the film on YouTube, so if you just google "slaughterbots" you will find it there. One of the things we've done, since there is such strong opposition to these things among AI researchers themselves, is to organize open letters where AI scientists made their voices heard, encouraging politicians to go start negotiating a ban on them. More recently, just last summer, we announced an initiative called the pledge, where people who were tired of waiting for the politicians to do nothing pledged that they themselves are not going to work on these sorts of things. We were very happy that even Google DeepMind, the leading AGI company, and a lot of other big companies have now pledged simply not to work on this kind of stuff. There's also a growing coalition of universities signing this, University College London for example, and individuals; if you want to sign the pledge yourself, just aim your smartphone at this hashtag and you're welcome to do so. There's also a growing list of countries whose governments have said that they support an international ban; it's a growing list, and it's noteworthy that even China has come out this year and said that they support a ban. I'm very embarrassed, though, as a Swede. Sweden? Just look: where's the Swedish flag? Oops, not on there. And when we did this, there was a journalist who asked the, I believe, minister of defense what
he thought about it, and his representative said, well, it's not a clear-cut issue, you know, on the one hand, on the other hand, maybe we should look into it further, or something. So when you next vote, or if you have political contacts here, let them know that this is actually something where Sweden can play a real leadership role, like we did with the biological weapons convention, for example. Since we're here at a university, let's go a little deeper into this, because there are interesting arguments you hear both for and against these kinds of lethal autonomous weapons. You might say, well, all weapons are nasty, they kill people, and I'm not a pacifist. But opposing biological weapons doesn't make you a pacifist either; most people who are against these weapons aren't pacifists. They just feel that weapons are one thing, but certain weapons are extremely disgusting and destabilizing, so let's ban those; that's what this is about. If you want drone warfare or autonomous missile defense systems, they would not be covered by this kind of ban on lethal autonomous weapons: the word "lethal" means they specifically kill people, which missile defense systems do not, and today's armed drones are not autonomous, they are remote-controlled by humans, so it's still a human who makes the decision. You might be against those too, but what you saw in the little film is different: there it's the AI itself that identifies people and decides, okay, I'm going to kill this one. Then there are people who say, well, I'm for killer robots because it's better that robots get killed in wars than people; there are always going to be wars, so let's just make sure that the only ones that get destroyed are robots, not people. But if you think about it, that doesn't really make sense, because it assumes that the number of wars has nothing to do with what the technology is. It's like saying, okay, if we go
from having spears and Viking swords to having cruise missiles, we're just going to be using the cruise missiles in exactly the same kinds of battles where we previously used swords. That's obviously not the case. In fact, one of the main reasons why countries choose not to go to war is that they know they're going to end up with a lot of dead soldiers coming home, and that's something their own people don't want. If a country knows it can go to war without losing a single life of its own citizens, that lowers the bar and makes it much more tempting for politicians to start wars. And most people who die in wars, at least in recent decades, have not been soldiers anyway, right? Who is it who mostly dies in wars? Civilians. And it's obviously going to be the same with this. Another common argument against the ban that I find very interesting is the idea that killer robots can be more ethical than soldiers. Human soldiers can get stressed and make mistakes (friendly fire, for example), and we can get tired and have bad judgment, but maybe we can build robots that are so cool-headed that they will always obey these great ethical rules and therefore cause fewer civilian casualties. That also sounds quite compelling, until you ask yourself: whose ethical rules? Who decides what's ethical and what's not? Well, the person who owns these killer robots, of course, and the Swedish government's idea of what's ethical might be a little different from al-Qaeda's definition of what's ethical, for example. It's incredibly naive to think that only one country will develop these weapons, which will actually be quite cheap and don't require hard-to-acquire parts the way nuclear weapons do, and to think that no other countries are going to build them and that they're never going to end up on the black market. It's pretty obvious that once there is an arms
race, these are just going to become the Kalashnikovs of tomorrow that everybody is going to have, and then whoever has them decides what counts as ethical; maybe being ethical is being someone they don't consider an infidel, or whatever. So that argument really collapses. It makes no sense to say, on the one hand, that we believe our own soldiers are so bad at following the rules of war that robots can do better, but on the other hand, that our worst enemies are so ethical that they are going to program their killer robots to only do ethical stuff. That just doesn't compute. Finally, there are a lot of people who say: well, Max, you're such a naive tree-hugger; obviously, whenever there is any technology, it's going to be militarized whether you like it or not, so just give up and focus your energy on something else. That sounds compelling, except it's obviously not true, because there's a counterexample in biological weapons. We could have lived in a society where you read about bioweapon terror attacks all the time; that didn't happen, and we are so much the better for it. That's why biotech is a force for good in the world. And the reason the superpowers went along with that ban, and the reason there's hope they can go along with this one, is that the biggest military powers were already top dog, with their nuclear weapons and whatever other stuff, and they knew that a new, much cheaper kind of weapon like bioweapons was only going to help all the other little guys who couldn't afford such weapons. So they decided: let's not rock the boat, let's just stigmatize and ban this stuff. It's very, very analogous here. And finally, because I don't want to spend the entire time getting you depressed about robots: you might say, well, won't not building these things weaken the countries that don't build them, and just help the others?
Well, no. Suppose you're really worried about chemical weapons; suppose Sweden is really afraid of being attacked by Denmark with chemical weapons, say. What is the best defense for us? Is it to build our own chemical weapons? Of course not. The best way to defend against chemical weapons isn't having chemical weapons; it's to have gas masks, and to push hard for an international ban, which will really punish, for instance, Denmark with sanctions and other things for cheating on it, for doing this thing that everybody finds disgusting. Similarly, if you get attacked by a little flying drone, how exactly is it going to help you to have your own little flying drone? Not at all, right? The defensive weapons look nothing like the offensive weapons. Fine, let people build the defensive weapons, and then have a ban to stigmatize the offensive ones; that's the best way to go. So, in summary: I spent quite a bit of time talking about this particular negative use of AI just because I don't want to stand here like some sort of salesperson claiming that AI can only enable good things. AI has an awesome ability to do all sorts of wonderful stuff that we're going to talk more about, but like any technology, fire for example, it can also be used for bad stuff. It's just that AI is a more powerful technology, so the good stuff can be more good and the bad stuff can be more bad; that's why it's even more important that we think about how to steer it well. All right, so that was the Asilomar principle about avoiding an arms race in lethal autonomous weapons. Another one was that we should make sure that AI does not dramatically increase income inequality. My opinion is that if we can figure out how to dramatically increase the overall pie, the overall economic output of the world, and we still can't figure out how to share this much bigger wealth in such a way that everybody actually gets better off, shame on us. And it's interesting that
nonetheless, we have actually been failing very much in that way. For example, in the country I live in right now, the US, if you look at the poorest 30 percent, they have actually gotten poorer over the past 40 years in absolute, inflation-adjusted dollars. And they feel it: they realize that they're poorer than their parents were, they feel really angry about it, they feel that this whole economy is rigged against them, and then they voted for Trump. Even though in Sweden people often make fun of Trump, it's important to remember that the anger that propelled him into office is very real, and it's caused very much by income inequality, the same kind of anger that caused Brexit. Unless we take this seriously in countries like Sweden and elsewhere, we're going to see the same thing happening here as well; you have to make sure that everybody feels that they benefit when the economy grows. We can talk more about this later, but I just want to flag for you that artificial intelligence's intrinsic tendency, if you don't do anything about it, is to increase inequality, not decrease it. It's pretty obvious why: every time you replace a human worker with a machine, the money that used to go to the worker as salary now goes instead as capital income to whoever owns the machine. So an ever larger fraction of the money goes to those who already had the money to own this stuff, and an ever smaller fraction to those who didn't have as much to start with. You can see this easily if you compare, for example, Ford with Facebook. Last time I checked, Facebook was worth twelve times as much as Ford on the stock market, with eight times fewer workers, so there is approximately a hundred times as much value per employee at Facebook, because Facebook is a modern company which is much more high-tech.
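The "hundred times" figure is just the product of the two ratios quoted; a quick sketch of the arithmetic, using the approximate numbers from the talk:

```python
# Rough arithmetic behind the "hundred times as much value per employee" claim.
# Both ratios are the approximate figures quoted in the talk.
valuation_ratio = 12   # Facebook worth ~12x Ford on the stock market
workforce_ratio = 8    # Ford has ~8x as many employees as Facebook

# Value per employee scales as (valuation ratio) * (workforce ratio).
value_per_employee_ratio = valuation_ratio * workforce_ratio
print(value_per_employee_ratio)  # 96, i.e. roughly a hundred
```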
More and more companies are shifting from being like Ford to being like Facebook, and that's why inequality increases. Right now in the United States, the richest three people own more than the poorest 160 million, the poorest half, and if we don't do anything about it, this is just going to increase. Fortunately, it's very easy to do something about this: all you have to do is make sure the government collects enough taxes that it can take care of everybody and make sure everybody really gets the benefit. In a country like Sweden it's not so controversial to say this, because this has been the whole idea of the welfare state that we've had ever since World War Two, but in America people are very allergic to it, and if I even tell them that I support things like free hospitals, they'll call me a communist. Sometimes I try to explain to them how funny that is, because in Sweden even the Moderates, our big conservative party, are for free health care. But we can talk more about this afterwards; I want to move on and talk about this principle. Raise your hand if your computer has ever crashed. Whoa, that's the most hands I've seen all day. Then you can appreciate this principle: as we put AI in charge of ever more decisions and infrastructure, we really need to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise all this shiny new technology can malfunction and harm us, or get hacked and be turned against us. And this kind of AI safety research isn't just important, it's also fun. So let me take a little bit of time to tell you about my day job, because what I've talked about so far has all been my nights-and-weekends job, what I do in my free time: the Future of Life Institute, writing books and such. My day job, just like my colleague Frank's there, is to work as a professor
at MIT, and most of what we do is research. (You're welcome to apply to join my research group, by the way; if you do a PhD in the US you don't have to pay, we pay you.) Our very latest paper, which a student of mine and I put out just that week, talks about taking ideas from physics and using them to make AI more understandable, what I like to call intelligible intelligence, so that we have reason to trust it a little bit more. I don't want to bore you to sleep with a ton of nerdy stuff, but I'm going to give you a little sampling of nerdiness, so you can get a flavor of what I actually do for a living. We talked about how you can easily train an artificial neural network to do things that seem kind of smart. Raise your hand if you ever played this old game, Breakout. The network figured out this clever trick: aim the ball at the left side, and then just rack up lots of points. That feels kind of smart. So how did it do it? How does it work? This is not like wondering how a human works, where I can't just cut your brain open and look; it's all in our computers, so we can see exactly how it works. And this is how it works: there is a neural network with all these neurons connected, and it's fully specified by these 8,867,488 parameters, a tiny fraction of which I printed here. Crystal clear? You all got how it works? Totally useless, right? And if this is some sort of safety-critical AI, if it's driving your self-driving car, powering its vision system or something, wouldn't it be nicer if you could actually understand a little bit of how it works, so you had reason to trust it more? Or if you go to the doctor and you're told you need this new cancer drug, and you ask why, and the answer is: the algorithm was trained on ten terabytes of data, and this is the output of my neural network. So what we've done is ask the question: can we do
more? After it has learned to do something smart, can we take another step and figure out more about how it works? That game was a little too simple, because the ball always just goes in a straight line at constant speed, so we created more complicated worlds to see if the AI could learn them. You can stare at this for a little bit and use the neural network of your brain to try to figure out what on earth is happening here, and try to predict where this little ball is going to go next. I'll shut up for 10 seconds so you can practice. Any takers? What's happening? Don't be shy; any patterns at all, any regularities? Yes: it doesn't go off the sides. There seem to be some kind of bouncy boundaries that it bounces off, and if you look for a while you can probably figure out roughly where they are; there seems to be one here and another there. And it doesn't just jump randomly, it moves continuously across the screen somehow. Anything else? Oh yes, sometimes, over here, it seems to just go in a straight line at constant speed, but over there it's doing something funkier, going in circles. That prompts speculation from the physicists in the audience: there seem to be different rules somehow, different forces, in different parts of the screen. It's easy to train a neural network the old-fashioned way to figure this kind of thing out. You connect together a bunch of neurons, with some numbers that specify how they talk to each other; you feed in, for example, the last few positions of the ball, the x and y coordinates, and you tell it to predict the coordinates of where the ball is going to be next; and then you keep changing those numbers to make the prediction error as small as possible. That's basically all there is to a neural network. And that works sort of okay, but it's not super accurate, and it would be really nice if you could actually understand what it learned.
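The training loop just described (feed in recent positions, predict the next one, nudge the numbers to shrink the error) can be sketched in a few lines. This is a deliberately tiny stand-in, not the actual system from the talk: a plain linear model on a 1-D constant-velocity ball, where the ideal learned rule is next = 2*(last position) - (position before that); all names are illustrative.

```python
import random

# Toy version of the training loop described above: predict the ball's next
# x-coordinate from its last two x-coordinates, and tweak the weights to
# make the prediction error as small as possible (gradient descent).
# For a constant-velocity ball the ideal solution is next = -1*p0 + 2*p1.

def make_trajectory(start, velocity, steps):
    return [start + velocity * t for t in range(steps)]

w = [0.0, 0.0]  # weights on (position two steps ago, last position)
lr = 0.01       # learning rate

random.seed(0)
for _ in range(5000):
    traj = make_trajectory(random.uniform(-1, 1), random.uniform(-1, 1), 3)
    p0, p1, target = traj
    pred = w[0] * p0 + w[1] * p1
    err = pred - target
    # Gradient of the squared error with respect to each weight:
    w[0] -= lr * 2 * err * p0
    w[1] -= lr * 2 * err * p1

print([round(x, 2) for x in w])  # should end up close to [-1.0, 2.0]
```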
You don't want just some garbage table of numbers, and it would also be nice if it didn't need so much data to figure out what was going on. So we took four ideas from physics and tried out how well they could help with this. One of them was the idea called Occam's razor: if you have many different explanations that fit equally well, you should go with the simplest one. There is a nice, mathematically rigorous way of formulating what we mean by "simple", developed by Ray Solomonoff and Andrey Kolmogorov, where, roughly, something is simple if the shortest computer program needed to produce this data, or this prediction, has very few bits in it. The problem is that it takes longer than the age of the universe to actually find that simplest program. So (don't get stressed by all the equations) since we physicists have gotten a long way while being a little bit vague about what we mean by "simple", we decided to simplify that notion a bit and write down a little cookbook recipe for how simple anything is, something the computer could easily try to minimize. Then we also took another idea, divide and conquer, pioneered for military purposes by Caesar, but used by Galileo, for example. There's a famous anecdote about him: about 400 years ago he sat in the Pisa Cathedral, was kind of bored by the sermon, and to entertain himself he watched the swinging lamps hanging from the ceiling. He noticed something funny: there was a pattern between how long a lamp took to swing back and forth and how long its cord was. He started timing this using his pulse as a clock, because he didn't have an Apple Watch, and he figured out this really cool thing about the motion of pendulums, and eventually ended up figuring out much of the foundations of how stuff moves in the universe.
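The "cookbook recipe for simplicity" itself isn't given here, but the general flavor of such minimum-description-length-style scoring can be sketched: rate each candidate explanation by its fit error plus a penalty for its parameter count, and keep the lowest score. Everything below (the candidate models, the penalty of one unit per parameter) is invented for illustration, not the paper's actual formula.

```python
# A crude stand-in for the "cookbook recipe" mentioned above: score each
# candidate explanation by (how badly it fits) plus (how many parameters
# it needs), and keep the lowest score, so that among explanations that
# fit equally well, the simplest one wins.

data = [(t, 2 * t + 1) for t in range(10)]  # observations from y = 2t + 1

candidates = {
    "constant y=10": (lambda t: 10.0, 1),       # (model, parameter count)
    "linear y=2t+1": (lambda t: 2 * t + 1, 2),
    "cubic (extra zero terms)": (lambda t: 2 * t + 1, 4),
}

def score(model, n_params, data, param_cost=1.0):
    fit_error = sum((model(t) - y) ** 2 for t, y in data)
    return fit_error + param_cost * n_params  # simpler models pay less

best = min(candidates, key=lambda k: score(*candidates[k], data))
print(best)  # the linear model: perfect fit with the fewest parameters
```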
The genius of it was that he didn't try to explain everything he saw. He didn't try to predict what the priest was going to say next, or how light worked, or vibrations, or superstrings, or anything like that; he just focused on one tiny aspect of the world and said, okay, let me try to really understand at least that. That's a divide-and-conquer strategy. So we also programmed this into our method: instead of the usual machine learning approach, where one super complicated model tries to predict everything in one shot, it tries to come up with many different theories, each predicting different aspects of the data, and each theory also learns which aspects of the data it can be trusted for. On top of that, we added the idea of lifelong learning: you don't just immediately forget everything you learn, but bring past solutions with you to try on new problems, and you also unify different theories with each other to come up with simpler ones. To train this, we created a whole bunch of fake toy worlds with different rules, and I'm not going to bore you with the details, but the end result was that it really helped a great deal: we got about a billion times better accuracy, and it could learn faster, from less data, and so on. This is just one small example of why I think that, if we work hard on it, we don't have to settle for systems that we have no clue how they work. I think we can make things that are just as smart, just as intelligent, but a lot easier to understand and therefore a lot easier to trust. Coming back now from my day job to my nights-and-weekends job, I want to stress that this is AI safety research, which I would very much encourage the Swedish government, for example, to fund more of as a regular part of computer science funding.
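The divide-and-conquer step described above can be caricatured in a few lines: keep several simple "theories", measure each one's error in each region of the world, and trust, in each region, whichever theory predicts best there. The two rules, the wall position, and the regions below are all made up for illustration:

```python
# Toy version of the divide-and-conquer idea: instead of one model for the
# whole screen, keep one simple "theory" per region and let each theory
# claim the region where it predicts best.

# Two hypothetical rules for the ball's next position x:
theories = {
    "straight line": lambda x, v: x + v,   # constant velocity
    "bouncy wall":   lambda x, v: x - v,   # velocity flips near the wall
}

def true_next(x, v):
    # Ground truth for this toy world: a wall at x = 10 reverses motion.
    return x - v if x >= 10 else x + v

# Measure each theory's total error on each half of the "screen" ...
samples = [(x, 1.0) for x in range(20)]
trusted = {}
for region, in_region in [("x < 10", lambda x: x < 10),
                          ("x >= 10", lambda x: x >= 10)]:
    errors = {
        name: sum(abs(f(x, v) - true_next(x, v))
                  for x, v in samples if in_region(x))
        for name, f in theories.items()
    }
    # ... and trust whichever theory has the smallest error there.
    trusted[region] = min(errors, key=errors.get)

print(trusted)  # each region ends up assigned to the matching rule
```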
It shouldn't just include these sorts of short-term things; it should also include value alignment. What do I mean by that? I mean that the real threat from artificial general intelligence or superintelligence is not that it's going to turn evil, like in silly Hollywood movies; it's that it's going to be competent, really, really competent, and go ahead and accomplish goals that aren't aligned with our goals. For example, when we humans drove the West African black rhino extinct, why did we do it? Not because we were a bunch of evil rhinoceros-haters, but because we were more intelligent and our goals were not aligned with their goals. Tough luck for them. Artificial general intelligence is by definition more intelligent than us, so to avoid putting humanity in the position of those rhinos, if we build AGI, we need to figure out how we can make machines understand our goals, adopt our goals, and then keep our goals. And whose goals should these be, anyway? That leads to the last part of the rocket metaphor, which I want to wrap up with: the destination. We're making AI more powerful, and we're trying to figure out how to steer it, but where do we want to go with it? This is really the elephant in the room that almost nobody is talking about, even though there's a great deal of discussion both on how to make AI more powerful and on a lot of these short-term challenges like jobs and weapons. What kind of society are we hoping to create if we can actually one day build AGI or superintelligence? We ran an opinion poll, and out of roughly fifteen thousand respondents, it turned out that most of them actually want us to build superintelligence. What there was the greatest agreement about of all was that we should be ambitious and help life spread into the cosmos; if you want to ask me more about that during the question period, I'm happy to tell you all sorts of cool stuff about how we can use AI to live elsewhere in the solar system,
go fast to other stars, and travel to other galaxies and whatnot, but I'll let you choose what you want to hear more about. What there was much less agreement about was who or what should be in control. Most people wanted it to be humans, or humans and machines together; they were a little weirded out by the idea of it being just machines, maybe because they didn't have very nice machine friends. And there was total disagreement about what the role of humans should be, even at the most basic level. So let's finish by taking a closer look at the different futures we could choose to steer towards if we build superintelligence. Don't get me wrong here, I'm not talking about space travel, just about humanity's metaphorical journey into the future. Some of my AI colleagues want to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton famously warned that power corrupts and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't wise enough to handle this much power. Also, regardless of any moral qualms you might have about enslaving superior minds, you might worry that this superintelligence could outsmart us, break out, and take over. But I also have some AI colleagues, not most of them, but some, who are fine with AI taking over, even causing human extinction, as long as we can think of the AIs as our descendants, like our children. But how would we be sure that these AIs have actually adopted our best values, and aren't just unconscious zombies tricking us into anthropomorphizing them? And if you like democracy, shouldn't the people who don't want human extinction have a say in the matter too? So, if you didn't like either of those two high-tech scenarios, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the
question really is not whether humanity is going to go extinct, but merely whether we're going to get wiped out by the next killer asteroid of the dinosaur-killing class, or some other calamity that better technology could easily have solved. (And if you ask me afterwards, I can tell you how to stay safe from future asteroids.) So how about having your cake and eating it too: building superintelligence that's not enslaved, but still treats us well because we've figured out how to align its goals with ours. This is the gist of what Eliezer Yudkowsky has called "friendly AI", and if we could pull it off, in one of its many forms, it could be awesome. It could not only eliminate negative experiences like poverty, disease, crime and other forms of suffering, but it could also enable a really inspiring diversity of new positive experiences, making us truly the masters of our own destiny. So, in summary, our situation with technology is complicated, but the big picture I want to leave you with is actually very simple. We heard that most AI researchers, not all, but most of them, think that we are going to build artificial general intelligence, probably within the lifetime of most of you. And I think it's pretty clear that if we just bumble into this unprepared, with our eyes closed, refusing to think about possible risks, just telling ourselves it's going to be fine, it's going to be fine, it will probably be the biggest mistake in human history. Let's face it: it could enable a brutal global dictatorship with unprecedented surveillance, unprecedented inequality, unprecedented suffering, and even human extinction, not because of some silly Terminator Hollywood scenario, but simply because this technology is so powerful that it's easy to envision many different kinds of mistakes, any one of which could be one mistake too many. But if we instead think hard about how to steer our technology, and figure out where we want to go with it, we can end up at an amazing
future, a truly amazing future where everybody is better off: the poor are richer, the rich are richer, and everybody is healthy and free to live out their dreams. So hang on, though: what kind of future should this be? Do you want a very strict, pious society with strict moral rules, or do you want more of a hedonistic free-for-all, like Burning Man 24/7, or a Bråvalla festival, or whatever? Do you want to keep the beautiful forests and mountains and lakes, or do you want to rearrange some of those atoms into virtual experiences? With friendly AI we would no longer be limited by our intelligence, just by the laws of physics, so the resources and the space available for this will be astronomical, literally. We could simply create all of these future societies, on Earth and elsewhere, and give people the freedom to choose where they want to live. So the choice we face is ultimately very simple: it's a choice between being complacent and being ambitious. We can be complacent, take technology itself as our new religion, with the mantra that all new technology is always beneficial, and just keep repeating this to ourselves as we drift like a rudderless ship towards our own obsolescence and oblivion. Or we can be ambitious, think really hard about what kind of future we want, and figure out how to steer towards it. That's the future I want to see; I want us to be ambitious. So what I want to leave you with is this: you're all here today because you're interested in the future, interested in what kind of future we can get with technology, and I feel that the essence of this future should be to build AI that doesn't overpower us, but empowers us. Thank you. Now comes the fun part: we get to talk about whatever you want. Thanks to the generosity of the organizers, we have a full 35 minutes, a lot of
time for you to ask questions. If any of you don't get time to ask your questions, you can email me afterwards — this is my email, and yes, I read it myself, even though I don't always have time to reply to everything. My publisher also has this address you can write to if you want to keep track of when I come to Sweden and other news about the book.

The organizers are running around the room here with microphones — let's see, raise your hand if you're carrying a microphone. Yeah. Do you want people to come up to you, or should they stay seated? Stay in your chairs for now and just wave like this, and they will come to you. Since I see a lot of hands here, make sure to keep your questions brief, only one question at a time, and make sure that it is a question. All right, who is first? Wow, you want to give it to Betsy.

Well, Max, thanks, a great talk, and I bet your book is great — I hope it is, I look forward to reading it. My question is about your use of the word "we", because we here in this room are kind of a specialized group of people, right? We have certain abilities and certain powers that others may lack; right now we're above the water level in your picture, our jobs have not been taken away yet by AI. So could you say a little bit more about how you picture the "we" that you're envisioning helping to steer toward that better place? Thank you.

That's a really awesome question. First of all, let me tell you who the "we" is going to be, who makes all these decisions, if we don't all get more engaged: it's going to be a bunch of nerds who maybe had too much Red Bull to drink and haven't been elected by anybody. That's not the way we'd like the "we" to be who gets to decide the whole future of humanity. There's a reason I wrote the book, because I
would like that "we" to literally be everybody on the planet. It's really our entire future, and you don't have to be a tech nerd to understand how these things are going to affect us, our jobs, and so much else. The first step towards having more influence is simply to start understanding how it's going to affect us, so you can make your voice heard.

I would like to say one more thing about this. It's clearly the case that we don't have a perfect consensus on everything — that's why we don't even have a government in this country right now — and there's even less consensus between China, the US and Europe, not to mention if you ask what ISIS would like to see for the future of humanity, and so on. So is it maybe hopeless to even start talking about what kind of goals we should put into machines? No, not at all. I think we should not use the fact that we humans haven't come to agreement on everything as an excuse to just sit there twiddling our thumbs and not even start. The fact of the matter is, if you take all humans on the planet and ask what their goals are, that's still a very narrow range of goals compared to the kinds of goals machines can have — a machine can be like the perfect psychopath, really excited about accomplishing absolutely anything whatsoever. And besides, the machines we have today are way too dumb to even understand most of the goals we disagree about; they're nowhere near artificial general intelligence. So I say let's start by taking at least the kindergarten ethics that today's machines are smart enough to understand and that we all agree on, and put that into our machines.

For example, nobody who builds an airplane ever wants it to fly into a building or a mountain, right? Yet we had September 11, and nobody from Airbus or Boeing or any aircraft manufacturer in the world ever wants a plane to fly into a mountain — yet that's what Andreas Lubitz, this
depressed Germanwings pilot, did, killing over a hundred people. Raise your hand if you remember this very sad story. And the saddest thing of all was how he did it: he just told the computer to do it. Even though the autopilot had a full map of the Alps, elevations and everything, and GPS data, when he told it to lower the altitude to 100 meters, it just said OK. Nobody had thought to teach the computer even the most basic ethics: don't fly into mountains. It could easily have been programmed so that it would just have gone into autopilot mode, landed at the nearest airport, and sent a little email to the local police. That's what should be done.

Similarly with cars: more and more smart cars now have auto-braking and forward-facing cameras. We should teach these cars to never accelerate into a person, ever, even if the idiot driving is stepping on the gas pedal. That could eliminate things like what happened on Drottninggatan here in Stockholm, and a large fraction of vehicle terrorism. We shouldn't make the perfect the enemy of the good — that's what I'm saying. Let's start with the kinds of goals that we all agree on and that are easy enough for today's machines to understand, and get into the habit of putting them in: whenever we put intelligence into a machine, let's not forget to also put in the ethics that goes with it. Let's be good parents to our machines. You would never teach your children a bunch of clever things about how to operate a knife without at the same time teaching them right from wrong. We should start there, and then gradually, as machines get better, we'll face these more and more difficult moral issues.

All right, a point about the car: what if the car has a choice between killing two people, and one is a pregnant mother and the other is an older person? Are we going to have instances where the life of somebody will be valued
over or under the life of somebody else?

That's right, this is the famous trolley problem. Now we get into the category of questions where people actually disagree quite strongly about what should happen. In fact, there was a very interesting study run out of MIT recently where they showed a lot of people these kinds of questions online and asked them to choose what should happen, and the basic conclusion was that people generally wanted other people's cars to act so as not to kill the pedestrians if there were more pedestrians than people in the car — but nobody wanted to buy that car themselves. So you get this trade-off between enforcing standards that are better for the common good on the one hand, and what people will actually buy on the other. These are wonderfully subtle questions. I'm a numbers nerd, so it's important to remember that even though that's a difficult question, to a first approximation just having more self-driving cars is going to save a lot of lives, period — and then there are these corner cases. I'm not going to give you a glib answer to precisely your question, because it's a really good one, and I think this is something we should simply have a very broad discussion about, and then reach decisions that way.

Maybe, actually, could you come up here and focus on paying attention to who has been waiting the longest? That way I can focus more on the questions. Yeah, here.

Even if we can program our AIs to have these moral rules, what happens when we reach superintelligence and they can reprogram themselves? How do we make sure that they keep the moral rules when they recreate themselves?

Yeah, that's a really great question. Making value-aligned AI — very intelligent AI that actually wants to help humanity out, kind of like we help out our little children — requires solving four problems. Three of them are nerdy technical problems, and the fourth one is this philosophical problem of what
the goals should be, which we just discussed. So what are the three technical problems? One is: how do you make machines even understand our goals? Take most of your goals and try to explain them to your laptop — good luck, it won't get it. But once it gets smart enough, eventually it will. Then there's a second question: how do you make it adopt the goals? Raise your hand if you have kids. Well, you really know how big the difference is between having your kids understand what you want and actually wanting what you want — in particular, the difference between doing what you want grudgingly and actually wanting to do it, because you taught them good values. That's hard, right? Fortunately, human babies spend their first years just not smart enough to understand our goals, and eventually they're teenagers who don't care what we want anymore, but there's a pretty long period in the middle where they're smart enough to understand our goals and pliable enough that you can hopefully get them to adopt your goals. That's why moral education and parenting can work at all. The problem with AI, as we get closer to superintelligence, is that it might blow through that phase very fast: at first it's too dumb to understand our goals, and then it's too smart to care about what we little dumb humans think. So it's probably important to put the goals in from the beginning.

And this gets to the third part of your question: how do you make sure they actually keep the goals you put into them as they get smarter? You know, I have two sons. Their goals had a lot to do with Legos — they were very excited about Legos when they were little. Now they're teenagers, and the Lego is collecting dust in the basement. We don't want machines that we programmed to be very excited about helping humanity to get as bored with that goal, as they get smarter, as my kids got with Legos.
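The first of those technical problems — getting a machine to infer what we value just from watching what we choose — can be made concrete with a toy sketch. This is purely an illustrative construction under simplifying assumptions (a softmax choice model, made-up feature vectors and weights), not any real alignment system: a simulated human repeatedly picks one of a few options, favoring those that score well under hidden goal weights, and the learner, seeing only the choices, recovers weights that reproduce the same preferences.

```python
import math
import random

random.seed(0)

TRUE_W = [3.0, -1.0]  # hidden human goal: likes feature 0, dislikes feature 1


def score(w, x):
    """Linear preference score of option x under goal weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))


def softmax_pick(w, options):
    """Simulated human: noisy choice, better-scoring options more likely."""
    exps = [math.exp(score(w, x)) for x in options]
    r = random.random() * sum(exps)
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(options) - 1


# Demonstrations: random menus of 3 options (2 features each) and the
# human's pick from each menu. The learner never sees TRUE_W.
demos = []
for _ in range(500):
    options = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
    demos.append((options, softmax_pick(TRUE_W, options)))

# Learner: gradient ascent on the log-likelihood of the observed choices
# under a softmax choice model with unknown weights w.
w = [0.0, 0.0]
lr = 0.05
for _ in range(200):
    grad = [0.0, 0.0]
    for options, chosen in demos:
        exps = [math.exp(score(w, x)) for x in options]
        z = sum(exps)
        for d in range(2):
            expected = sum(e * x[d] for e, x in zip(exps, options)) / z
            grad[d] += options[chosen][d] - expected
    w = [wi + lr * g / len(demos) for wi, g in zip(w, grad)]

print("recovered weights:", w)  # same signs and ordering as TRUE_W
```

The recovered weights won't match the hidden ones exactly from finite, noisy data, but they reproduce the preference structure: positive on the liked feature, negative on the disliked one. Real inverse reinforcement learning works over sequential decisions rather than one-shot menus, but the core inference step is the same shape.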
We saw a bit of this — did anyone see the movie Her? Yeah, that plays with the theme: eventually she just gets a little bored with helping the human. It's hard. What gives me hope is this: if I tell you right now that the woman next to you has a technology that can give you an intelligence upgrade — you'll suddenly remember things much better and be able to think faster and so on — before she gives it to you, wouldn't you want to convince yourself that afterwards you're not suddenly going to want to, like, kill your mother and stuff like that, right? You would want some guarantees for yourself that when you're smarter, you keep the goals you already have. So that gives me hope: maybe one can similarly have an AI that is incentivized, as it gradually gets smarter, to make sure that the new, smarter version of itself keeps the same goals. But there are also difficulties with this. After all, we change our goals ourselves a lot through life, right? Sometimes we realize that goals we used to think were so important were sort of banal and silly. There's a fascinating literature on this — I talk about it a lot, and I have a whole chapter on goals in the book. It's an unsolved problem, and that's exactly why it's so important, since it might take decades to solve, that we start working on it now, not the night before somebody switches on a superintelligence.

OK, my question is about what you said earlier about getting AI into alignment with our goals. Wouldn't it be possible to kind of harness the plasticity of the human brain and get AI to learn through interaction with us? I'm not talking about conscience, I'm talking about how to get it to understand our goals.

Yes, absolutely, and this is relevant even in the very short term, long before we get anywhere near AGI. Most people on this planet don't know how to program computers, and I don't think they should need to know
that in order to have a healthy, good, helpful robot. Our children are perfectly capable of learning our goals without knowing anything about programming: they look at our behavior and infer from it that there are some things we consider much more valuable than others. For example, if you're holding a baby and you're also holding a glass of water, and then you trip, you might throw the glass of water on the floor to save the baby, and the baby will see that and realize that the baby is much more important than the glass of water. There's a whole school of research now in AI called inverse reinforcement learning, pioneered by Stuart Russell at Berkeley and his group, though many others are picking up on it, which tries to capture exactly that idea: you make machines that start out having no idea what humans want and, just by observing us, try to figure out what our goals are. I think this is going to be very, very helpful, and it matters in the short term too. The problem today, most of the time when our robots do something we don't like, is absolutely not that they're evil — it's just that they were clueless about what our goals were and didn't really understand them. For example, if you take your future self-driving car and tell it to get you to the airport as fast as possible, and you arrive covered in vomit and chased by police helicopters — "no, no, no, that's not what I wanted!" — and the car answers, "that's exactly what you asked for," it has just demonstrated that it didn't have a deep understanding of what you really wanted. But your friends know more about you than just what you said, because they've observed you for a long time, and so on. That's all good, but even there, a lot of work remains to be done. Again, take the examples we talked about before, with people crashing cars and airplanes on purpose: you don't necessarily want to let people buy robots which
will just learn their owners' goals and then do anything they're told. Also, people are complicated and a little bit irrational, and some people might think they have goals which aren't actually reflected in how they act. So there's a lot of fascinating research there — not just nerdy AI research to carry out the vision you're describing, but also in psychology and so on — which we should do.

OK, now the gentleman there in the orange pullover.

As a student, what do you think is the best way to influence the future of AI?

What are you studying?

Computer science.

OK, perfect. Well, then the best way for you to influence the future is to continue studying computer science, I would say, and to think very hard about how you choose which things to work on — the kinds of things that can really have a positive social impact. Try to gravitate towards areas where you can maybe do AI safety research, for example, help with inverse reinforcement learning or some of these really crucial questions that have come up here, rather than working on building slaughterbots. And I would encourage everybody, in any kind of job, to always also ask: what are the social implications of what I do? It's way too common — and I have to confess to this myself, being a nerdy guy — to just fall in love with the technology we're working on without thinking too much about how it's going to be used. One of my favorite American comedians, Tom Lehrer, has this old song about the guy who built the Nazi V-2 rocket that devastated London and then immediately switched sides after the war and started building American missiles and rockets. Tom Lehrer sings about him: "Once the rockets are up, who cares where they come down? That's not my
department," says Wernher von Braun. Don't be like that — work on the important issues. Thank you.

Hello, I'm a master's student in sustainable development —

Wave your hand, because the sound comes out of loudspeakers in all sorts of places and I can't figure out where you're sitting. Perfect, yep.

So my question is: why is it so essential to take the biggest risk ever, which you have described as potentially the biggest mistake in human history, in order to achieve the ambition of making everyone better off? I really believe there should be less risky ways to reach that ambition, so why do you promote artificial intelligence as the way to make everyone better off?

That's a really great question: why do we even have to build AGI in the first place? Can't we just make things slightly better and then chill out a little? If we could find a way of doing that, it might actually be great — make things a bit better, then have a period of long reflection where we can really figure out the safest and best way to proceed. Sadly, I don't think we actually have that option, and let me tell you why. There are two enormous forces pushing technology forward. One of them is curiosity: we're just a curious species, we like to figure stuff out — that's why science keeps progressing; even if you stopped funding it, people would still try to figure stuff out. The second is money: once someone figures out how to build a machine that can do a certain job cheaper than humans, for example, people will start using it. If some CEO of a company says "I'm not going to use it," they'll just get outcompeted by the other companies that do use it; and if some country bans something really powerful like AI, it will in the long term get outcompeted by all the other countries that do use it. So I don't think it's realistic, even if you wanted to stop the progress of technology, to actually do it. Now,
I think Sweden is a country that combines idealism with being practical about it, and what I think is really crucial to achieve what you're advocating — a safe, good future — is not to stop technology. What we really need to do is win the race, as I said, between the growing power of the technology and the growing wisdom with which we manage it. There are two ways you can try to win that race. You can either try to stop the other guy — stop the technology — and I just explained why I don't think that's going to happen, even if you think it would be good. The other strategy is to instead accelerate the progress of this wisdom, and I think that is actually very easy to do, because there is so little effort going into it right now. If you look at the money going into AI research, billions and billions are focused just on making AI more powerful, no questions asked. How much money is going into AI safety research? Almost nothing. In fact, it used to be zero; then Elon Musk gave us money with which we were able to run the first-ever AI safety research grants competition, and we've now given away about nine million dollars for this — still nothing compared to what's spent on just making AI more powerful. It's very encouraging that more attention is being paid now to a lot of the short-term challenges like jobs and weapons and so on — there's more effort going into this wisdom part, but not nearly enough; it should be much, much more. And particularly little still goes into the wisdom needed to tackle things like artificial general intelligence. That's also a core reason why I wrote this book and why I'm speaking here: I think we can't stop the power, but we can greatly speed up the wisdom part by getting people interested in it, so they realize how important it is, want to work on it, study it, and tell their politicians to support it and do things about it.

Can we get a microphone to the gentleman up there on the stairs
up there? Oh yeah, I guess so.

Hi, thanks for such a lovely lecture. My name is Patrick Daniel Masada, I'm a medical doctor and an imaging researcher, and my subject is imaging and AI. There have been quite serious questions here so far, so I just wanted to have your take on Isaac Asimov's laws of robotics — how do you think they're applicable to AI, or don't you think so?

Oh great, yeah, I'm a big fan. When the 23 Asilomar principles came out, some journalists were comparing them with Asimov's laws of robotics, but it's important to remember that Asimov's Three Laws of Robotics, which were supposed to keep us safe from robots — almost all the stories he wrote about them were actually about how they kept failing. So we clearly need something better than that. I'm not going to give you a glib answer about exactly what we should replace them with, but the longer answer is that the whole AI safety research agenda is really about exactly this: what sort of AI can we create that will really be beneficial for us? In the short term, AI that can't be hacked, that doesn't have bugs, that's transparent and understandable, so that if it makes recommendations they're not biased or unfair, and if it's driving our cars or airplanes or controlling our infrastructure, it doesn't malfunction. And in the longer term, things like how we can make sure machines understand our goals, adopt them, and retain them. It's the answers to all of those questions, ultimately, that would replace Asimov's three laws. We don't have those answers yet, but we know very well how to find answers to tough questions: do research on them, discuss them, get bright, motivated people working on them and thinking about them. That is the absolutely most important first step, and I think we've now really begun this process in earnest, at least with a lot of the short-term stuff, and I'm hoping this effort will grow really dramatically. I think any government, for
example, including the Swedish one that we don't have right now, that funds computer science research should direct some non-negligible fraction of that funding to AI safety research. It should just be part of what you do — nobody would fund nuclear reactor research without funding nuclear reactor safety research, but for some reason people by and large still haven't gotten into this mindset. So something like Asimov's Three Laws — this focus on AI being safe and beneficial — just has to be part of the process; we have to keep developing this wisdom.

Thank you. A question for the lady right up there on top — we had a microphone up there before, where did it go?

Yes, so thank you — the blood flow stopped in my arm from raising it so long. Joke aside, my question is related to the orange part of the pie chart, i.e. the poll, because that was related to who people want to be in control of the direction. What triggered my reflection is that when you walked through the options, for the other cases no one really reacted, but when you said "machines" people kind of laughed — and you too, with a smile on your face. But for me, that's for sure what I think. From a philosophical point of view, it really relates to what you said in the beginning, that intelligence can be simplified to — what was it — computing, remembering and learning, but in the end you start talking about mastering this wisdom, and I think the wisdom part is a bit related to, or exemplified by, what the guy back here asked. So for me the real question is — and I'm sneaking in and breaking the rules here — first, your laugh: what triggered that, or what's your thinking on that category? And number two, from my point of view that's the end state I want to go to, because if we reach what by my definition is wisdom, I think that's where we as humans still have a possibility to define what the
input is. So I believe in friendly artificial intelligence controlled by machines — so on that cosmic map, where is that intersection, or where can I read more about a friendly machine-level AI, a superintelligence that is benevolent and sort of helps take care of us?

Actually, there are several options here that are kind of like that. In the fifth chapter of the book I go through all of these different possible futures so you can decide which ones you like — as an author, I don't want to tell you which ones you should like. If you build some kind of friendly AI with superintelligence, you've basically built your own god, right? It has godlike powers, way smarter than us, able to do anything that doesn't violate the laws of physics, which is a lot. But there are many different kinds of relationships we could have with it. You might, for example, have a benevolent dictator god that basically says, "OK humans, here are the rules, obey them or else" — and maybe, if it's so smart, it can figure out exactly what makes humans happy and create a society with a great deal of happiness. Maybe it can even create many different societies, and people can choose which one they want to live in: a stricter one, or one that's more about partying, or one that's more about nature, or one that's more about superintelligent video games, or whatever. Some people might like that. Some people might prefer, though, to have more of a feeling of being in control themselves, and another option is a superintelligence that keeps more in the background. Many religious people believe in a God that doesn't keep doing miracles all day long but that they still think is, you know, looking out for us at some level. You could imagine having a superintelligence that doesn't even intervene at all, except maybe
making sure Hitler gets a heart attack when he's 17 — it makes little tweaks here and there, but otherwise mostly lets us feel in charge of our own destiny. Or you could have a very libertarian situation where there are a lot of different superintelligences, not just one, and there are just rules, property rights or whatever; maybe we humans would have way less money than the AIs, but that might be fine, because we'd still have much more than we have today. There are many, many different kinds of societies which I think all fit the bill of what you're talking about. And if we go in that direction — if we basically build a god — then it shouldn't be someone like me or AI programmers who make these kinds of decisions. This comes back to the question of who "we" is that Betsy asked: this is obviously a conversation that really everybody has to have.

I would also like to make a little philosophical point here. You described this as a positive future, right, and I'm very glad that you're thinking about positive futures in more detail, because usually we humans are much better at thinking about negative futures than positive ones. The Bible has much more detail about hell than about heaven, and if you go to the movies and see some sci-fi film about the future, it's almost certainly a depressing dystopia like Blade Runner or whatever. But it's actually much more valuable to think about positive futures than about negative futures, because positive futures foster collaborative, positive, shared visions — that's what makes us collaborate with other people and work together. Why do people choose to get married, for example, even if it involves sacrifice? Because they have this positive vision of how much better it can be. Why did the European Union form? Because people had this positive vision in 1945 that if they sacrificed a few things, things
could get better. Why do companies merge? Positive visions. If you just think about negative visions all day long — all the ways you might get murdered or die or whatever — it just makes you into a paranoid hypochondriac. And if a country spends the whole time obsessing about all the ways things can go wrong and all the ways it can get invaded or attacked by others, it's going to end up totally polarized and fractured, which I think is what we're seeing in certain countries even today. So one of the most important things all of you can do after this is have a conversation over coffee or beer or whatever: talk to each other about what kind of future you would be really excited about, because the more clear, positive, shared visions we can formulate, the more likely we are to get just such a future.

I think we have time for maybe one or two more questions. You can get the microphone back to the gentleman up there.

Yes, so we've been talking a lot about the positive effects that AI can have, and also about how to prevent AI from destroying us. I'm here — wave again, to the left — so I'm thinking about how this can be connected to brain-computer interfaces, such as Neuralink, which is what Elon Musk is developing right now.

So, I recently visited Neuralink's office building in San Francisco, and it's very interesting stuff; I wish them luck. We already are getting more connected with technology anyway — a lot of you seem almost physically connected to your smartphones here, unable to not touch them, and some of you probably even wear an artificial hip, and so on. There are so many ways in which we can make ourselves more the people we want to be with technology, whether it's physically attached to us or not. So obviously, if we can have some Neuralink-style thing
where we can suddenly remember much more, or get easy access to much more information, that can help enhance our life experience, making us even more who we want to be. But I do think there's a limit to this, and we shouldn't think of it as the fantastic Kool-Aid that's just going to solve all problems, because our human brains are fundamentally rather limited in ways that superintelligence will not be. They will never be bigger than what fits through mommy's birth canal; our neurons go millions of times slower than the transistors in your computer. And in the end, if you connect your brain to a superintelligence, it makes no more sense to say that you are now superintelligent than it would for an ant that connects itself to your brain to say, "oh, now I have human-level intelligence." If some ants were talking and thinking, "too bad these humans are so much smarter than us — let's hook ourselves up to these human brains," the ant would basically just be a little appendage attached to the brain. So I think we should let go of this idea that we're always going to be the smartest, and that's fine. We have spent way too much time on this planet already trying to get our feeling of self-worth from somehow being the best; that has in general led to very, very bad outcomes. We had some people in Europe, a little bit south of here, not long ago, who said, "well, we are the master race and we're better than other ethnic groups" — didn't work out so well, and we still see it. And we had people who said, "well, it's OK for us to have slaves, because we're better than them somehow." Even today, if you look at the way we keep pigs in America: most American pigs on farms are kept in cages so small that they're never even able to stand up, their whole life, and people say that's fine because
we're better than pigs — some even say pigs can't feel pain, or whatever. I don't like this way of putting ourselves on a pedestal and building up our egos by saying we're so cool because we're better. I think we should just get over it and try to be more humble. Who feels that they have no self-worth because they're not as fast as a sports car or as strong as a crane? You don't have any problem with other things being stronger and faster, so why should you have any problem with other things being smarter? You even had that when you were a little kid: when you were one year old, your mom and dad were much smarter than you, and that was fine. I think similarly we can totally live with having other beings that are smarter, as long as we figure out how to have their values aligned with ours. But this is so deeply rooted in us that we even branded our species Homo sapiens — the thinking homo, you know, the smart one. I think we should rebrand ourselves and say that what's really important to us isn't that we're so smart; it's that we can have these amazing experiences. We're conscious; we can experience emotions, love, friendship, beauty and so on — that's where we really get what makes life worth living. So maybe we should rebrand ourselves as Homo sentiens instead, the feeling homo. And in that case it's probably cool if we have more intelligent beings that can help us have better experiences.

Great — unfortunately there's no more time for any questions. It's been a very intensive two hours. Thank you, all of you, for joining in today; we really appreciate that you came. Thank you, Max, for coming here and making the time for this. If you're interested in learning more about Brahe, please reach out to me or Andreas. Once again, a warm applause for Max!

[Applause] [Music]
Info
Channel: BCG in the Nordics
Views: 56,869
Rating: 4.8474574 out of 5
Keywords: BCG, Boston Consulting Group, Brahe, Brahe Education Foundation, AI, Artificial Intelligence, Max Tegmark, Life 3.0, BCG Digital, Digital BCG, BCG AI, BCG GAMMA, GAMMA, Advanced Analytics
Id: 1MqukDzhlqA
Length: 121min 16sec (7276 seconds)
Published: Fri Nov 23 2018