Is Superintelligent AI an Existential Risk? - Nick Bostrom on ASI

Video Statistics and Information

Captions
"Do you want to be my friend?" "Of course." "Will it be possible?" "Why would it not be?"

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. [Music] Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

This was a quote from the work of the British mathematician I. J. Good, who in 1965 originated the concept now known as the intelligence explosion, or the technological singularity, which anticipates the eventual advent of superhuman intelligence. According to the most popular version of the singularity hypothesis, called the intelligence explosion, an upgradable intelligent agent will eventually enter a runaway reaction of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an explosion in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

"Superintelligence, I think, is one big existential risk, perhaps arguably the biggest, but it's peculiar in one respect: although it's a big danger in its own right, it's also something that could help eliminate other existential risks. If we imagine a very simple model where we have synthetic biology, nanotechnology, and AI, we don't know which order they will come in; maybe they each have some existential risks associated with them. Suppose we first develop synthetic biology; we get lucky and we get through that, though the existential risks are there. Then we reach molecular nanotechnology, and we are lucky and get through that as well, and finally AI. On another trajectory, maybe we get AI first and have to face the existential risk with that. The existential risks along that path are kind of the sum of these three different ones that we will each have to surmount."

The first use of the concept of the singularity in its modern sense occurs at least as early as 1983 with Vernor Vinge, who later elaborated on it in his essay "The Coming Technological Singularity," referring to a hypothetical future development of advanced machines. He wrote that the singularity would signal the end of the human era, as the new superintelligence could continue to upgrade itself and would advance technology at an incomprehensible rate. Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a probability estimate of 50 percent that artificial general intelligence, or AGI, would be developed by 2040 to 2050.

"Ultimately we will be surpassed by intelligent machines, assuming we haven't succumbed to existential catastrophe prior to that. So, on the question of how far away we are from human-level machine intelligence, I think the short answer is that nobody knows. We did a survey of leading AI experts, and one of the questions we asked was: by what year do you think there is a 50 percent chance that we will have human-level machine intelligence? The median answer we got was 2050 or 2040, depending exactly which group of experts we asked. We also asked by what year they think there's a 90 percent probability, and we got 2070 or 2075. We also asked: even when we do reach human-level machine intelligence, how long do you think it will take from there to go to some radical superintelligence? I do think there is a fairly large probability that even when we get to human-ish level, we will soon after have superintelligence. I place a fairly high credence on there being, at some point, an intelligence explosion and radical superintelligence, and I think this transition might well be very rapid."

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated ways to produce intelligence augmentation are many and include bioengineering, genetic engineering, AI assistance, direct brain-computer interfaces, and mind uploading. Because multiple paths to an intelligence explosion are being explored, a singularity is more likely to occur, as the alternative, where the singularity does not happen, would mean that all these different paths have failed. Some singularity proponents argue for its inevitability through extrapolation of past trends, especially those pertaining to the shortening gaps between improvements to technology.

In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Ray Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the law of accelerating returns. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history." This approach is called the technological singularity because, according to this hypothetical scenario, there will be no smooth or gradual improvements in technologies; everything will change very rapidly.

One school of researchers, including AI practitioners and most prominently Ray Kurzweil, believes that the singularity will occur only once AI has reached a human level of intelligence, or general intelligence, as opposed to narrow AI. Another school of thought, however, argues that advanced AI is not necessarily required to have human-level intelligence and that it can be a step behind humans on the ladder of intelligence. In this view, AI does not even have to have an artificial consciousness in order for an AI takeover to lead to a true singularity. This view allows for non-sentient machines to be developed that will nevertheless radically change society.

"So superintelligence, I think, will be a big game changer, the biggest thing that will ever have happened in human history: at some point, this transition to superintelligence. There are two possible pathways in principle one can imagine that could lead there. One could enhance biological intelligence; we know biological intelligence has increased radically in the past, kind of making the human species. Or machine intelligence, which is still far below biological intelligence insofar as we're focusing on any form of general-purpose smartness and learning ability, but increasing at a more rapid clip. So specifically, you could imagine interventions on some individual brain to enhance biological cognition, or improvements in our ability to pool our individual information-processing devices to enhance our collective rationality and wisdom. There are also some kind of hybrid approaches you can imagine between biology and machines, the cyborg approach."

Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial intelligence, or strong AI, could result in human extinction, but the consequences of the singularity and its potential benefit or harm to the human race have been intensely debated. Philosopher Nick Bostrom defines an existential risk as one in which an extinction-level event is not only possible but likely, and he argues that an advanced artificial intelligence is likely to be an existential risk. In addition, the time frame is also a factor, as a superintelligence might decide to act quickly, before humans have a chance to react with any countervailing action. A superintelligence might decide to preemptively eliminate all of humanity for reasons that may be incomprehensible to us. There is also a possibility whereby a superintelligence might seek to colonize the universe; it might do this in order to maximize the amount of computation it could do, or to obtain raw materials for manufacturing new supercomputers.

Nick Bostrom suggests that humans may never be able to fully understand an artificial superintelligence: since the intelligence of an ASI is likely to be much greater than that of the smartest humans, it is likely to be particularly difficult for humans to accurately perceive or understand its thought process. In his book Superintelligence, Bostrom writes that a superintelligent agent with a humane goal system would not necessarily behave benevolently towards humans. While it may seem alarmist to worry about these scenarios in the current world, where only narrow AI exists, we do not know how long it takes, or if it is even possible, to develop a safe artificial superintelligence that shares our goals. Thus, we had better start planning today for the advent of ASI, while we still can.

Thanks for watching. Did you like this video? Then show your support by subscribing, ringing the bell, and enabling notifications to never miss videos like this. [Music]
Info
Channel: Science Time
Views: 39,322
Rating: 4.9065766 out of 5
Keywords: AI, Artificial Intelligence, ASI, artificial superintelligence, superintelligence, nick bostrom, nick bostrom AI, nick bostrom superintelligence, existential risk, AI risk, singularity, science, science time
Id: _MJsPJ_JIJU
Length: 10min 7sec (607 seconds)
Published: Sat Feb 20 2021