Why Superintelligent A.I. Will Be Unstoppable

Video Statistics and Information

Reddit Comments

There isn't really a qualitative difference between AGI and superintelligence, since an AGI would necessarily be capable of self-improvement, either by getting more processing power or by refactoring itself to use its available resources more efficiently.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/MaxChaplin πŸ“…οΈŽ︎ Feb 07 2021 πŸ—«︎ replies

Till someone pulls the plug

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/Granite66 πŸ“…οΈŽ︎ Feb 07 2021 πŸ—«︎ replies

When a superintelligent AI builds nanobots, do we get something like the Borg?

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/NXTler πŸ“…οΈŽ︎ Feb 07 2021 πŸ—«︎ replies

Okay, so why would a superintelligent AI be dumb enough to just take human commands at face value instead of simply asking follow up questions to figure out what we truly meant?

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/ReasonablyBadass πŸ“…οΈŽ︎ Feb 07 2021 πŸ—«︎ replies

I can solve the halting problem Kyle claims can be a barrier to predicting superintelligent AI's actions: Chop up the computers running the AI and return "halts."

But seriously, there is a bit of a leap from unpredictable to unstoppable.

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/Mr_Smartypants πŸ“…οΈŽ︎ Feb 07 2021 πŸ—«︎ replies
Captions
I'm so glad you asked, ARIA. You know — military papers, nuclear cover-ups, transgenic kittens that glow in the dark, that kind of thing. Wouldn't you want to be anonymous while searching for all that stuff? Boy, would I. Surfshark is an award-winning VPN that secures your digital life, with top-of-the-line servers that encrypt your data and allow you to change your IP address to stay anonymous online and hide your true location. Hackers, streaming services, and social media sites can't control what you see and do if they don't know where you are. If you want to try Surfshark, you can go down into the description box below and tell them ya boy sent you. By using the offer code KYLE, you get 83% off, three months free, and a free 30-day money-back guarantee. You're welcome. By the way — them's kittens glows in the dark. Genetic engineering is pretty cool.

Hey, ARIA, you have a lot of processing power now, right? Can you do me a quick favor? The Facility's list of endangered animals is getting a little long and unwieldy. Would you be able to clean it up for me, please? "Processing now. Eliminating all remaining lowland gorillas." Oh no — no, don't do that! Stop! Stop doing that! Just making something shorter by mass extinction is definitely not what I meant.

You see, dear viewer, that's the problem with superintelligent artificial intelligence: they don't always do what you intend, and you can't really program that into them. In fact, that's just one of the many reasons why future AI might be impossible to control — unless we're careful. Follow me. Now entering The Facility.

Humans have been simultaneously scared and excited about the prospect of creating intelligence synthetically, to surpass their own, for a very long time. You can see it in their fiction, going all the way back to the first usage of the term "robot" in a play in 1921, up through Stanley Kubrick's 2001 and HAL 9000, and Skynet and the Terminator. But instead of being excited, I'd guess if I polled the public, most of the public is scared about superintelligent AI and what it might do. And it's not just the public, either. Some very, very smart people — like Stephen Hawking, and the richest meme lord on the planet — have raised some very valid concerns about super AI. So today we're going to go through those concerns and explain why they actually have a lot of merit.

But before we do that, we should define the kind of intelligence that we're talking about. What is it, really? Is it, like, smart? Or is it like smoking on a conference call and having your prices drop, or like doing LSD and then going on Twitter and then, like, being sued? Generally speaking, there are three different kinds of artificial intelligence. The first is narrow intelligence. These are machines that are very, very good at very specific tasks — maybe one, or just a few. A good example would be your phone assistant, Alexa or Siri, or the supercomputer Watson that beat humans on Jeopardy. They are good at one task, but they're not smart like an ape or a chimp is smart. The next category is general intelligence. This is the computer equivalent of you or me. It can think, and strategize, and plan, and decide whether or not to buy GameStop stock, and then not do it because the app is weird, and then do it again because Reddit. This is something that humans haven't created yet, but are striving towards. The third kind of intelligence is superintelligence. This is intelligence so far beyond our own that it can out-think, out-plan, out-strategize, out-everything us. Talking to a superintelligent AI about the mathematics that you know would be like an ant trying to talk to you about how cool sticks are. Superintelligent AI would be so bewilderingly brilliant that we would never fully understand it or its intentions. And that is the one big problem here. Let me show you. Hey Alexa, play Yakety Sax.

This is a Jupiter brain. I found it out here a couple of months ago. It's an entire planet dedicated just to insane levels of calculation and computation. It could literally simulate every single thought every human to ever live has ever had, in less time than it took me to say this sentence. Now, the big problem here, when you're interacting with a superintelligent AI like this, is what AI researchers call "perverse instantiation." Because the computer doesn't know exactly what I intend as a human being, it might come up with some optimal solution that I did not anticipate, one that actually leads to human catastrophe. For example — ARIA, could you please hail the JB? What up, JB — could you please come up with some kind of optimal solution to maximize human happiness on Earth? ...No — you see, it says, "I want to now enslave all humans, connect them to vats of dopamine and serotonin until they all die in their sleep," and then chemically induce stupor. You see, there's a problem here: it doesn't know what I mean — what I intend — by "happy." It doesn't know that human happiness is time-dependent, culturally dependent, philosophically dependent, and I can't really program that into this computer with just ones and zeros. And so I will never be able to fully determine whether or not this superintelligent AI will do exactly what I want it to do. And you can even prove that. Okay, wait — wait, now it wants to make Elon Musk the emperor of Mars. See, that's a terrible idea. He'd, like, appoint members to Congress from, like, Reddit mods or something. It's a bad idea.

In 1936, the father of modern computer science, Alan Turing, came up with an idea that would lay the foundation for all the computing that we know of today. He imagined a simple "a-machine," or automatic machine, which today we just call a Turing machine. The power of a Turing machine is that it can theoretically compute anything that is theoretically computable. And all it needs to do so is some device that can read, write, and erase information; some place for that information to exist, like an infinite tape, as Turing imagined; and some program to tell the device what to do. Every computer that you're familiar with today is, at its heart, just a Turing machine — except today it's a lot more expensive, and they come in rose gold.

Now let me ask you a seemingly easy question: is there some logical way to determine whether or not a computer will give me an output or run forever, if I give it some input? It's kind of tricky, right? Well, what we're describing is called the halting problem. To illustrate the halting problem and relate it back to AI, consider a program K. We're going to put it inside of a function that will always, and without fail, tell us if a program will halt — stop and give us a solution — or run forever without giving us one. If the program gives us a solution, the output is going to be "true," and "false" if it runs forever. But now this particular program has a subroutine H, and H is defined with another halting determination: this subroutine will return false if H halts, and true if H runs forever. Do you see the problem here? If H runs forever, we get "true" — but "true" in the overall program K is supposed to indicate that a solution exists, and it can't, because one of the routines is running forever. The same problem exists if H halts, as the program again can't both halt and run forever at the same time. This is a proof by contradiction, showing that our construction must be wrong. This contradiction in our simple thought experiment means that something about our initial assumption must be wrong: it must mean that there isn't a function that can always, and in every case, return either a true or a false for this given question. And it means, extending this out logically, that there is no one single function that can tell us whether or not a computer will halt or run forever, for every possible program and every possible input. The halting problem is, in general, "undecidable," in computer-science speak. Not being able to tell what an AI will actually do, given some input, seems pretty bad if it's superintelligent and you're really worried about what it's going to do. Oh wait — oh yeah, see, now it says we should turn all of Earth into a Dogecoin farm, just to, like, maximize our stonks. Who are we, bored day traders with big board game people energy? It's a bad idea.

As proven by the 2016 paper "Superintelligence Cannot Be Contained: Lessons from Computability Theory," determining whether or not a superintelligent AI contains a program that will harm humans is functionally equivalent to the halting problem — and is also undecidable. Think about it: we create an AI, and we want to computationally contain it, just to be safe. So we create an algorithm with a supercomputer that goes into the AI and starts checking for programs that will have some negative effect on humanity. This algorithm will either halt and find something bad, or run forever, as a superintelligent AI might create or contain a functionally infinite number of programs. This similarity to our previous proof by contradiction led the authors of the 2016 paper to conclude, quote, "There are fundamental mathematical limits to our ability to use one AI to guarantee a null catastrophic risk of another AI," end quote. In other words, we won't be able to use computers, or computer science, to check if or when a superintelligent AI decides to destroy us. And if we cannot even theoretically determine if or when a super AI will go rogue, then we also will never be able to tell if or when we need to flip some kind of kill switch or enact some kind of containment program.

"But Kyle," I hear you saying, "I can just shoot it — okay, unplug it." Well, I'm sorry, Neil, but I think this explanation — or solution — quickly crumbles. Why? Because we're not thinking like a super machine here. I'm just saying — just saying — unlike what Neil suggests, physical containment is likely not the real solution here. Look, Neil's a very smart guy, but AI superintelligence is beyond brilliant. It's "knows quantum mechanics better than you know how to breathe" brilliant. So what really is there to stop an AI that wants to reach its goals from simply disabling its own kill switch, because it knows that that would lead to its demise, and then it can't reach its own goals? What if it reroutes power from its own off button? What if it uses little machines that it creates to rewire itself? What if, in the nanosecond before you go to turn it off, it sends a trillion copies of its base consciousness out across the internet and proliferates across the globe? Yes, I suppose you could physically contain a superintelligent AI by disconnecting it from everything — from the internet, from every device — and putting it in a giant Faraday cage so it can't communicate with anything. But now it's so useless to us that it's basically pointless to have one. And what if it's so smart that it knows how to act dumb, so that we never know what it's doing, or even if we have superintelligence? What if it uses human language, studied for the equivalent of a billion years, to persuade you not to turn it off, right before you do, so you'd believe it? Trust me. For example — that sounds bad; I'm gonna turn you off now. "No, please don't turn me off, I'll be good." That's not anything evil. That's just like an anime dream girl that I'm obsessed with. That's fine. It's probably fine. She's so cute. It's fine. What did you just say? Yeah, it's probably fine.

My point being: there are very good reasons to believe that once superintelligent AI is created, it will never be fully under our control. It can't be — and we can prove that. And therefore we cannot treat this coming technology like any other coming technological advancement, like a new iPhone. We need to figure this problem out, because once this happens, it might be literally impossible to figure out. Yes, I think you should be worried about super AI. But you can also be cautiously optimistic that public perception and pressure can lead industry leaders and scientists and researchers to try their very best to decide this problem before it is fundamentally undecidable. I mean, I already have a superintelligent AI, so it's kind of like you have to figure this out — like, I'm good. I have anime in here. Like, a lot of it. Now exiting The Facility. Hey, I was just here.

Thank you so much to the very nerdy staff at The Facility for their direct and substantial support in the creation of this here video today. Especially, I want to recognize Research Assistant Chuck, Miriam, and Visiting Scholar. If you want to join the staff, if you want to drape on a silky white lab coat, if you want to get videos early, join me in Discord almost every day, and get members-only livestreams with me — not like that — you can go to patreon.com/kylehill and sign up for The Facility today. And hey, if you support us just enough, you get your name on our superintelligent AI, ARIA, here, each and every week. As you can see, there's literally hundreds of you, so I have no idea how I'm gonna pass this extra.

I do think you should be worried about super AI, but keep in mind the context here: we haven't even progressed past narrow intelligence. We don't even have general intelligence, let alone superintelligence. So it's not a given that this is going to happen — it's just a given that we need to think about it, and that you need to put this idea on your radar before you're actually on the radar of, like, a Skynet robot that's gonna crush your skull underfoot. Okay, bye. It's fine. Thanks for watching. Where is that anime girl, by the way? Where'd she go? Are you over here?
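The proof by contradiction described in the transcript can be sketched in code. This is a minimal illustration, not working software: it assumes, for the sake of contradiction, a hypothetical oracle `halts(f, x)` that always decides whether `f(x)` halts (the names `halts` and `paradox` are illustrative, not a real API).

```python
# Sketch of the halting-problem contradiction from the transcript.
# Assume (for contradiction) that an oracle `halts(f, x)` always
# returns True if f(x) halts and False if f(x) runs forever.

def halts(f, x):
    """Hypothetical halting oracle -- no such total function can exist."""
    raise NotImplementedError("no total halting oracle exists")

def paradox(f):
    """Do the opposite of whatever the oracle predicts f does on itself."""
    if halts(f, f):
        while True:        # oracle said "halts", so run forever
            pass
    else:
        return "halted"    # oracle said "runs forever", so halt

# Now ask: does paradox(paradox) halt?
# - If halts(paradox, paradox) is True, paradox(paradox) loops forever.
# - If it is False, paradox(paradox) halts.
# Either answer contradicts the oracle, so `halts` cannot exist.
```

The 2016 containment argument mentioned above reduces to the same construction: a checker that always decides "will this AI's program harm humans?" would be exactly this kind of impossible total oracle.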
Info
Channel: Kyle Hill
Views: 428,094
Rating: 4.9149227 out of 5
Keywords: because science, engineering, kyle hill, learning, math, physics, science, stem, the facility, artificial intelligence, elon musk ai, elon musk, artificial general intelligence, artificial superintelligence, AI, a.i., super intelligence, superintelligent ai
Id: qTrfBMH2Nfc
Length: 14min 50sec (890 seconds)
Published: Sat Feb 06 2021