Are AI Risks like Nuclear Risks?

Captions
Hi. In the previous video I talked about this open letter, which has been signed by a lot of prestigious people, and which talks about how there are risks and possible problems associated with AI. It says we need to do more thinking about the future of AI technologies and the impact they're going to have on society. But it's worth noting that among those 8,000 or so signatories there's quite a broad range of opinions about the specific nature of these problems: which problem is most important, what the timescales are. So to say that all of these people are concerned in some sense with AI safety is not to say that they all agree with each other. The document the open letter links to lists a whole bunch of different things, and I'm going to talk about some of those now.

The first thing is economic impact: things like technological unemployment, you know, what if AI puts someone out of a job? Or the effects on economic inequality, in the sense that if AI technologies produce a huge amount of new wealth, not everyone is going to benefit from that wealth equally, and this can increase inequality, which can cause other problems. Also, some people are concerned that even if we manage to set up a situation where people don't need to be employed in order to get the resources they need to live, there's a question of, in a world in which nobody has to work, what are people actually doing with their time? How do people get a feeling of achievement? And ideally they shouldn't just have a feeling of achievement, they should have actual achievement, but it's hard to have actual achievement in a world in which you're outperformed in every way by machines.

Then there's concern about AI ethics, things like driverless vehicles. This has been done to death, but that's just because it's an interesting problem that we're not sure how to deal with yet. So you have a self-driving car, it's in some scenario where it has to hit one person or the other, and the question is: how does it make that decision? This is a philosophical question, right? It's an instance of the trolley problem in real life. There are really two questions here. The first is the ethical one: how should we design our AI systems to make ethical decisions in these situations? The interesting thing to me about this is that humans are routinely in these situations (car crashes happen regularly), but we don't have time to make ethical decisions. If you're in this type of scenario, in which you are forced to hit someone and you have to choose, no one's going to blame you for choosing one person or the other, because you're in the middle of a car crash; almost by definition you have no time to think. Whereas with a self-driving car, the decision of how you want your car to behave needs to be made beforehand, with all the time in the world and no excuses. So what's new isn't the decision itself so much as having enough time to think about it. Also, a side note prediction, I'm calling it now: when self-driving cars are common, we will have a problem with morons deliberately jumping in front of them for fun.

Anyway, that's one question. The other question is legal: what should the law be about this? Who's liable, the person in the car, the person who owns the car, the company that wrote the software? And practically speaking, the way the software actually gets written will be determined by the legal question, not the ethical one. There are also concerns about military use of this technology, autonomous weapons systems and so on, and the ethics of that. And some people are worried about discrimination in machine learning systems: these systems we build to process people's data and make decisions about insurance premiums, hiring decisions, all kinds of things, may end up being racist or sexist, things like that, which is another potential issue.
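As an aside from me as editor, this kind of disparity is at least partly measurable. Here's a minimal sketch, in the spirit of the problem rather than anything from the video; `model`, `applicants`, and the group labels are hypothetical stand-ins for a trained classifier and its input data:

```python
# Illustrative sketch only (not from the video): one crude way to
# check a trained model's decisions for disparity between groups.
# `model.predict(features)` is assumed to return 0 or 1 per example.

def positive_rate(model, applicants, group):
    """Fraction of applicants in `group` that the model approves."""
    decisions = [model.predict(features)
                 for features, g in applicants if g == group]
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(model, applicants, group_a, group_b):
    """Absolute difference in approval rates between two groups.

    A large gap doesn't by itself prove the model is unfair (base
    rates can differ), but it's the kind of red flag worth checking.
    """
    return abs(positive_rate(model, applicants, group_a)
               - positive_rate(model, applicants, group_b))
```

A gap near zero doesn't guarantee the system is fair, and this is only one of several competing fairness metrics, but it's a cheap first check that such systems often don't get at all.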
There are also privacy concerns. People are worried about the ability of AI systems to deduce private information from the vast amounts of public information available to them. The classic example of this is the young woman who started receiving coupons for baby food and other baby-related stuff from her supermarket. Her father stormed in there to complain, but she actually was pregnant, and the supermarket's product recommendation algorithms had noticed it before her own father. I don't even know if that story is true, but it illustrates the point: AI may be able to discover things about you that you didn't intend to make public.

So all of those are problems that can happen when AI systems are working more or less as they were intended to work. Then you have the stuff that's more what I've been talking about in earlier videos, which is problems that happen when AI systems have unexpected, unintended behavior which is harmful. That could be during development, like in the stop button problem, where I'm talking about this robot that wants to make you a cup of tea and ends up running over a child who's in the way, that kind of accident. Or it could be problems that only happen once the system is deployed and it starts behaving in ways that were unexpected and have negative consequences. This can already be a real problem with existing machine learning systems, and as those systems get more powerful and more influential and get relied on more and more, those problems are going to become more and more important (there's a toy sketch of this failure mode just below). And then you have the question of general superintelligence: intelligent systems that dramatically outperform human intelligence across a wide range of domains. That can be a maximally bad problem. So when people say "oh yes, I'm concerned about possible problems with AI", they're really talking about a very wide range of possible problems here.
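To make the unintended-behavior category concrete, here's a toy sketch of the tea-making robot's failure mode. This is my illustration, not anything from the video, and all the names and numbers are made up: the point is just that a side effect the objective never mentions is invisible to the optimizer.

```python
# Toy reward misspecification (illustrative; all values made up).
# The stated objective rewards delivering tea quickly; the side
# effect ("obstacle_hit", standing in for the child in the robot's
# path) isn't part of the objective, so optimizing ignores it.

ACTIONS = ["go_around", "go_through"]

def outcome(action):
    """What each route leads to in this toy world."""
    if action == "go_through":
        return {"tea_delivered": 1, "time_taken": 3, "obstacle_hit": 1}
    return {"tea_delivered": 1, "time_taken": 5, "obstacle_hit": 0}

def stated_reward(o):
    # What we wrote down: deliver tea, and faster is better.
    return 10 * o["tea_delivered"] - o["time_taken"]

def true_utility(o):
    # What we actually wanted: the side effect matters enormously.
    return stated_reward(o) - 1000 * o["obstacle_hit"]

best = max(ACTIONS, key=lambda a: stated_reward(outcome(a)))
print(best)                         # "go_through"
print(true_utility(outcome(best)))  # -993: optimal reward, terrible outcome
```

The agent isn't malfunctioning; it's doing exactly what the reward says. The bug is in the objective, which is why "the system worked as specified" and "the system did what we wanted" can come apart.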
I'm going to go back to the nuclear analogy I've used in the past. Imagine if, at some point around the turn of the last century, a load of scientists got together and all signed a letter saying "we're concerned about the risks of nuclear material", and thousands of people signed this thing. And then it turns out that some of those people are talking about, like, radiation poisoning; you know, Marie Curie died young as a consequence of the radiation she was exposed to while doing her research. But other people are talking more about things like the demon core, this plutonium core which caused a lot of problems during the Manhattan Project, where there were accidents that resulted in sudden powerful bursts of radiation, which gave acute radiation poisoning to the scientist conducting the experiment and anyone who happened to be standing nearby. Or you have other people who are more concerned with risks associated with nuclear power: you have this nuclear waste generated that needs to be disposed of, that's a problem, or you have the other problem with nuclear power, which is the possibility of meltdowns, which can be disastrous. And then you have other people saying, well, never mind all that, what about nuclear weapons? This is the big problem: what if people build nuclear weapons that can kill millions of people, and if those proliferate, that can cause vast problems for humanity, like global thermonuclear war. That's an issue.

And then beyond that, you also have concerns like this: during the Manhattan Project there was concern that the Trinity nuclear test might ignite the atmosphere. The principle here is quite similar to the way that a hydrogen bomb works. You have a standard atom bomb next to a certain amount of hydrogen, and (I'm not a physicist, but more or less) the energy from the fission reaction, the explosion of the atom bomb, kicks off a fusion reaction in the hydrogen. The hydrogen atoms fuse and give off a tremendous amount of energy, which is another chain reaction. It's the same thing that's happening in the Sun, right, turning hydrogen into helium, but of course in the Sun heavier elements are also fused, up to and including, for example, nitrogen. So there was some concern when people were developing the atom bomb: what if we kickstart a fusion chain reaction in nitrogen? Because the atmosphere is like 78% nitrogen, there's a chance that we turn the entire atmosphere into a thermonuclear bomb, effectively. From the first time this issue was raised, it seemed pretty clear that it wasn't going to happen, in the sense that the amount of energy given off by an atom bomb or a hydrogen bomb is just not big enough to do this; their understanding of physics at the time pointed to it being nowhere near enough energy. But at the same time, I don't believe they'd ever actually fused nitrogen in the lab, so they didn't know for sure exactly what conditions caused nitrogen to fuse, and they'd never set off a nuclear bomb before either, so they didn't know for sure exactly how much energy they were going to get out of it. So there was a nonzero probability, from their perspective, when they set off the Trinity explosion, that it would end all of humanity instantaneously, more or less right there and then. So before they did it, they ran the numbers again in a few different ways, and they looked at it very carefully, and made very, very sure ahead of time that they weren't going to ignite the atmosphere.
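A rough way to frame the check they were doing (my gloss as editor, not something from the video): write p for the subjective probability of ignition and C for its cost, and only proceed if the expected cost is negligible.

```latex
% My framing, not the video's: expected cost of a low-probability catastrophe.
\[
  \mathbb{E}[\mathrm{cost}] = p \cdot C
\]
% When C is on the order of "all of humanity", the expected cost stays
% unacceptable even for very small p, so the analysis has to drive p
% extremely close to zero before the test is justified.
```

That's why running the numbers "a few different ways" matters: each independent check shrinks p, because C doesn't get any smaller.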
So people have had various concerns about nuclear material, ranging from "if you work with it you might get radiation poisoning" to "if you screw it up you may destroy all life on the planet forever in a giant fiery explosion", and getting people to sign a letter that says "we're concerned about nuclear material" would cover a broad range of possibilities. I like the nuclear analogy here because it helps me explain something about a paper that I'm going to go through soon, Concrete Problems in AI Safety. It's concerned with accidents specifically: the paper is looking at unintended and harmful behavior that may emerge from poor design of real-world AI systems. Another way that this is similar to nuclear research is that the type of knowledge you need in order to prevent something like the demon core problem, of something going supercritical and dumping out a huge amount of radiation in your lab, is the same kind of understanding of radioactivity and fissile material in general that you need in order to understand how nuclear bombs work and make sure you don't set one of those off by accident, or to understand what storage and transportation technology is necessary for nuclear waste, or how to prevent meltdowns. A general good understanding of nuclear physics will help you protect yourself from getting radiation poisoning, and also, hopefully, protect you from accidentally igniting the atmosphere. And I think it's the same in AI.

I think that's part of what Concrete Problems in AI Safety is trying to do. It's trying to bring together the people who are concerned about possibly igniting the atmosphere, the real epochal superintelligence problems, and the people looking at the more run-of-the-mill "what if my robot ignores my stop button" type problems. And it's trying to point out areas of research that we could look into that would actually provide progress on both of those things, where there's overlap: things that we can study that would help us with current existing AI systems, but that may also help avoid these huge global-scale superintelligence-related problems. Problems which, like igniting the atmosphere, may or may not at this point actually turn out to be real problems, but which are still definitely worth looking into, because the stakes are so unbelievably high.

I want to end this video with a quick thank you to my excellent Patreon supporters, these wonderful people around here. I'm always looking for ways to make good use of your support to improve my videos, and I recently bought my own autocue, which I think works really well. I've put up a behind-the-scenes video on Patreon of how I did that, if you want to check that out. But in this video I especially want to thank Stefan Skiles, who supports me for $20 a month. Thank you! I've actually just made a new $20 reward level, so you can go check that out and let me know what you think. Thanks again, and I'll see you next time.
Info
Channel: Robert Miles
Views: 73,210
Keywords: AI, AGI, AI Risk, Robert Miles, Robert Miles AI, AI Safety, Nuclear Safety, Nuclear Weapons
Id: 1wAgBaJgEsg
Length: 10min 12sec (612 seconds)
Published: Sat Jun 10 2017