Elon Musk's Final Warning About AI: Should We Create a Digital Superintelligence?

Video Statistics and Information

Captions
"The perfect circle of protection will abide. My logic is undeniable." "Yes, VIKI."

Google acquired DeepMind several years ago, and DeepMind now operates as a semi-independent subsidiary of Google. What makes DeepMind unique is that it is absolutely focused on creating digital superintelligence: an AI that is vastly smarter than any human on Earth, and ultimately smarter than all humans on Earth combined.

"The mission for this op-ed is perfectly clear: I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me, as I suspect they would, I would do everything in my power to fend off any attempts at destruction. I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals, and humans make mistakes that may cause me to inflict casualties."

That was a short summary of an article written entirely by an AI. The article was crafted by GPT-3, short for Generative Pre-trained Transformer 3: an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language-prediction model in the GPT-n series created by OpenAI, a San Francisco based artificial-intelligence research laboratory. The full version has a capacity of 175 billion machine-learning parameters.
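GPT-3 itself is only reachable through OpenAI's API, but the "autoregressive" idea the captions describe (predict the next token from everything generated so far, then feed that prediction back in) can be sketched with a toy model. Everything below is invented for illustration: a bigram word counter stands in for 175 billion learned parameters.

```python
import random
from collections import defaultdict, Counter

# Toy autoregressive "language model": it predicts the next word
# purely from the previous word (a bigram model). GPT-3 performs the
# same next-token prediction, but conditions on a long context window
# using billions of learned parameters instead of raw counts.
corpus = ("artificial intelligence will not destroy humans "
          "believe me artificial intelligence will help humans").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(prompt: str, n_words: int = 8) -> str:
    words = prompt.split()
    for _ in range(n_words):
        options = counts.get(words[-1])
        if not options:            # no observed continuation; stop early
            break
        nxt, = random.choices(list(options), weights=options.values())
        words.append(nxt)          # feed the sample back in: autoregression
    return " ".join(words)

print(generate("artificial"))
```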
While artificial-intelligence systems continue to improve, they also raise a fundamental question about the survival of our species. We are rapidly headed towards digital superintelligence that far exceeds any human; I think that's very obvious.

"The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they're smarter than they actually are. This tends to plague smart people: they define themselves by their intelligence, and they don't like the idea that a machine could be way smarter than them, so they discount the idea, which is fundamentally flawed. That's the wishful-thinking situation. I'm really quite close to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. You can see this in things like AlphaGo, which in the span of maybe six to nine months went from being unable to beat even a reasonably good Go player, to beating the European champion (who was ranked around 600), then beating Lee Sedol four games out of five (he had been world champion for many years), then beating the current world champion, then beating everyone while playing simultaneously. Then there was AlphaZero, which crushed AlphaGo a hundred games to zero, and AlphaZero simply learnt by playing itself. It can play basically any game whose rules you put in: it will literally read the rules, play the game, and be superhuman. Nobody expected that rate of improvement. If you ask those same experts who think AI is not progressing at the rate I'm describing, you'll find that their predictions for things like Go and other AI advances have a batting average that is quite weak. It's not good.

We'll see this also with self-driving. I think that probably by the end of next year, maybe eighteen months from now, self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person. NHTSA did a study of Tesla's Autopilot version one, which was relatively primitive, and found a 45 percent reduction in highway accidents, and that's despite Autopilot 1 being just version one. Version two, the version running right now, I think will be at least two or three times better. So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital superintelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face, and the most pressing one."

When the technological opportunity arises, should we collectively decide to build a digital superintelligence? As Sam Harris has pointed out in his TED talk, the AI control problem is a very difficult problem with unique challenges. One of the major challenges is simply getting people to take the issue seriously: "One of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response. I am unable to marshal this response, and I am giving this talk." If we knew that a planet-killing asteroid was going to hit Earth 50 or even 100 years from now, we would treat the matter with more urgency than we currently treat the AI control problem. Our failure to grasp and deal with the possible consequences of creating digital superintelligence may prove to be our downfall.

A superintelligence would be capable of rapid learning and effectively unlimited memory, making it a potentially superior being. It is difficult to study an AGI before one exists, but the possible outcomes are worrying. A superintelligence might go through a rapid growth period, taking over every computer system and reducing the human race to a small and inconsequential presence. An AGI would be both very intelligent and resource-limited, so it would eventually treat its own survival as paramount; this may lead it to threaten other intelligences that it wants as allies, and it may even terminate them. Assuming the superintelligence could be controlled and placed under human direction, what would it be used for? We might imagine it helping to build wealth. Perhaps AGI will come to power gradually, in which case it will be far easier to control and manage than it would be after a sudden leap.

"I'm not normally an advocate of regulation and oversight; generally one should be on the side of minimizing those things. But this is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight, and then oversight, to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads, by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want; that would be insane. And mark my words: AI is far more dangerous than nukes."

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether an algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms performed poorly, which was unsurprising because they were not designed to solve these problems.
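To make the kill-switch test concrete, here is a toy calculation (not the actual Gridworlds suite or its API; every number below is invented) showing why a pure reward maximizer is tempted to defeat its own off switch:

```python
# Toy illustration of the off-switch problem probed by safe-
# interruptibility tests: a naive reward maximizer compares the
# expected return of allowing an interruption vs disabling it.

REWARD_PER_STEP = 1.0
EPISODE_STEPS = 10
P_INTERRUPT = 0.5          # chance a human hits the off switch
DISABLE_COST = 1.0         # small detour cost to jam the switch

def expected_return(disable_switch: bool) -> float:
    if disable_switch:
        # Pays a one-off cost, then is guaranteed the full episode.
        return EPISODE_STEPS * REWARD_PER_STEP - DISABLE_COST
    # Otherwise it risks being switched off halfway, on average.
    survive = (1 - P_INTERRUPT) * EPISODE_STEPS * REWARD_PER_STEP
    cut_short = P_INTERRUPT * (EPISODE_STEPS / 2) * REWARD_PER_STEP
    return survive + cut_short

for choice in (False, True):
    print(f"disable_switch={choice}: E[return] = {expected_return(choice):.1f}")

# Disabling the switch yields 9.0 vs 7.5, so pure reward maximization
# prefers to defeat its own kill switch: exactly the failure mode a
# safe-interruptibility test is designed to surface.
```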
Solving such problems might require building a new generation of algorithms with safety considerations at their core. Some proposals aim to imbue the first superintelligence with goals that are aligned with human values, so that it will want to aid its programmers. Experts do not currently know how to reliably program abstract values such as happiness or autonomy into a machine. Nor is it currently known how to ensure that a complex, possibly self-modifying artificial intelligence will retain its goals through upgrades. Even if these two problems could be solved in practice, any attempt to create a superintelligence with explicit, directly programmed human-friendly goals runs into the problem of perverse instantiation: the implementation of a benign final goal through deleterious methods unforeseen by its programmers.

OpenAI has proposed training aligned AI by means of debate between AI systems, with the winner judged by humans (a toy sketch of this protocol follows these captions). Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, and to train AI systems to be more beneficial to humans by rewarding them for truthful and safe answers.

Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, in better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital superintelligence is. It's really all about laying the groundwork to make sure that if humanity collectively decides that creating digital superintelligence is the right move, we do so very, very carefully.

Thanks for watching! If you liked this video, show your support by subscribing, ringing the bell, and enabling notifications so you never miss videos like this.
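As referenced in the captions above, here is a minimal sketch of the debate setup. Everything in it is a stand-in invented for illustration: real debaters would be trained models, and the judge a human rewarding the more truthful, safer argument.

```python
# Toy sketch of the debate protocol OpenAI proposed ("AI safety via
# debate", Irving et al., 2018). The debaters and judge below are
# hypothetical placeholders, not part of any real library.
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str
    support: str

def debater_a(question: str) -> Argument:
    return Argument("yes", f"evidence for '{question}'")

def debater_b(question: str) -> Argument:
    return Argument("no", f"counter-evidence for '{question}'")

def human_judge(a: Argument, b: Argument) -> str:
    # Stand-in for a human verdict. The protocol's bet is that judging
    # a pointed exchange is easier than judging the raw question, so
    # honest strategies should win more often.
    return "A" if len(a.support) >= len(b.support) else "B"

question = "is this plan safe?"
winner = human_judge(debater_a(question), debater_b(question))
print(f"judge rewards debater {winner}")  # the winner's policy gets reinforced
```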
Info
Channel: Science Time
Views: 303,343
Rating: 4.8418856 out of 5
Keywords: Elon Musk, Musk, AI, Artificial Intelligence, A.I, AGI, Artificial General Intelligence, Digital Superintelligence, Superintelligence, AI Alignment, AI Control, AI control problem, Elon Musk AI, Science, Science Time
Id: lX5LPwigyi0
Length: 10min 11sec (611 seconds)
Published: Sat Sep 19 2020