Elon Musk's Message on Artificial Superintelligence - ASI

Video Statistics and Information

Captions
The advancements in AI, I think, are quite astonishing. I mean, once it reaches the threshold where it's as smart as the smartest, most inventive human, then it really could be a matter of days before it's smarter than the sum of humanity.

What happens when machines surpass humans in general intelligence? If machine brains surpass human brains in general intelligence, then this new superintelligence would have undergone an event called the intelligence explosion, likely to occur in the 21st century. It is unknown what or who this machine network would become. The issue of superintelligence remains peripheral to mainstream AI research and is mostly discussed by small groups of academics.

I don't think we can solve the problem just technologically. Imagine that we've done our job perfectly and we've created the most safe, beneficial AI possible, but we've let the political system become totalitarian and evil. It's not going to work out well, because we're talking about human AI. Human AI is by definition at human levels and therefore is human, so the issue of how we make humans ethical is the same issue as how we make AIs that are human-level ethical.

What is an actual good future? What does that actually look like? All of us already are cyborgs. You have a machine extension of yourself in the form of your phone and your computer and all your applications. You are already superhuman. By far, you have more powerful capability than the President of the United States had 30 years ago. If you have an internet link, you have an oracle of wisdom: you can communicate to millions of people, and communicate to the rest of Earth, instantly. These are magical powers that didn't exist not that long ago. So everyone is already superhuman and a cyborg. The limitation is one of bandwidth. We're bandwidth-constrained, particularly on output. Our input is much better, but our output is extremely slow. If you want to be generous, you could say maybe it's a few hundred bits per second, or a kilobit, or something like that. Compare that to a computer, which can communicate at the terabit level: very big orders-of-magnitude differences. We're headed towards either superintelligence or civilization ending; intelligence will keep advancing unless something puts civilization into stasis or destroys civilization. What is a world we would like to be in, where there is this digital superintelligence?

Studies have proposed a number of directions in which AI could develop, such as the development of stronger and smarter artificial servants, the development of a network of increasingly intelligent systems, the development of AIs with human-like personalities, or the development of AIs with moral reasoning capabilities that can make decisions autonomously and care about humanity.

The term AI is used to mean anything from incremental improvements to the software of today's computers to the development of human-like thinking machines. However, this type of AI is an extreme case. It is also called ASI, strong, or general AI, in contrast with today's narrow and weak AI. It represents the creation of a machine with intellectual abilities that match or exceed those of humans across the board. By definition, an ASI can perform better than us in any conceivable task, including intellectual skills. It could engage in scientific research, teach itself new abilities, improve its own code, create unlimited copies of itself, choose better ways of deploying its computational resources, and it could even transform the environment on Earth or colonize other planets. The evolution of this new type of sentience could follow many paths. It could present a potential existential risk to humanity, depending on the nature and capabilities of the system. However, the hypothesis of a powerful machine taking over the world is not the only outcome.

There are really two sides. One is getting rid of a lot of the negatives: the compassionate use to cure diseases and all the other kinds of horrible miseries that exist on the planet today. That is a large chunk of the potential. But beyond that, if one really wants to see what the positive things are that could be developed, I think one has to think outside the constraints of our current human biological nature. It's unrealistic to imagine a trajectory stretching hundreds of thousands of years into the future where we have superintelligence, we have material abundance, and yet we are still these bipedal organisms with three pounds of gray, cheesy matter. All of these basic parameters that sort of define the human game today, I think, become up for grabs in this future.

[Music] Philosopher Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. Now imagine a machine structurally similar to a brain but with immense hardiness and flexibility, designed from scratch to function as an intelligent agent. Given a sufficiently long time, a machine like this could acquire enormous knowledge and skills, surpassing human intellectual capacity in virtually every field. At that point, the machine would have become superintelligent. In other words, the machine's intellectual capabilities would exceed those of all of humanity put together by a very large margin. This would represent the most radical change in the history of life on Earth. While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate natural sub-goals such as self-preservation, goal-content integrity, cognitive enhancement, and resource acquisition.

There are three overlapping revolutions that people talk about: GNR, that is, genetics (biotech), nanotechnology, and robotics (which is AI). There's a difference with AI in that there really isn't a foolproof technical solution to this. You can have technical controls on, say, nanotechnology; one of the guidelines is it shouldn't be self-replicating. If you have an AI that's more intelligent than you, and it's out for your destruction, or it's out for the world's destruction, and there's no other AI that's superior to it, that's a bad situation.

According to Ray Kurzweil, this is a type of artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not. ASI is often associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. However, these are not necessarily characteristics of an ASI. An ASI may be non-conscious to various degrees; even if it has some degree of consciousness, it could have a non-human nature, such as radically different mental contents from a human mind. Kurzweil described an ASI as a machine that passes what he calls the Turing threshold: the point at which the machine's intelligence will be so far superior to people's that any interaction between human and machine would raise the question of whether the machine is in control.

AI can be widely available. The analogy to the nuclear bomb is not exactly correct. It's not as though it's going to explode and create a mushroom cloud. It's more like, if there were just a few people that had it, they would be able to be essentially the dictators of Earth, whoever acquired it. And if it was limited to just a small number of people, and it was ultra smart, they would have dominion over Earth. So I think it's extremely important that it be widespread. Then it will be tied to our consciousness, tied to our will, and everyone would have it, so it would be sort of still a relatively even playing field. In fact, it would probably be more egalitarian than today. It really just comes down to two things: solving the machine-brain bandwidth constraint, and democratization of AI. If we have those two things, the future will be good. As long as AI power is such that anyone can get it if they want it, and we've got something faster than meat sticks to communicate with, I think the future will be good.

It is currently unknown what this kind of machine would become. Discussion of superintelligent AI focuses on several fundamental questions: How does one create minds that are better than humans'? What are the possibilities to create a friendly ASI? There are several competing ways to get there, and many are not mutually exclusive. One of the first is to simply replicate the biological brain. It is not necessary to understand exactly how a brain works to replicate it digitally. However, a superintelligent AI will need to have the ability to learn; no brain can do that without a massive database of past experiences. In order to develop a superintelligence that would benefit humanity, the process has to be done in a series of steps, with each step being determined before we move to the next one. In fact, it might just be possible to program the AI to help us achieve the things we humans may not be able to do on our own. It's not simply being able to create them and learning how they can be commanded, but interacting with them and involving ourselves while, at the same time, it is learning how to be human, after the first ASI.

Thanks for watching! Did you like this video? Then show your support by subscribing, ringing the bell, and enabling notifications to never miss videos like this.
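The bandwidth gap Musk describes (roughly kilobit-per-second human output versus a terabit-level machine link) can be put in rough numbers. This is a back-of-the-envelope sketch using the transcript's own illustrative figures, not measured values:

```python
import math

# Illustrative figures from the transcript (assumptions, not measurements):
human_output_bps = 1_000               # generous human output: ~1 kilobit/s
machine_link_bps = 1_000_000_000_000   # terabit-level computer link

ratio = machine_link_bps / human_output_bps
orders_of_magnitude = math.log10(ratio)

print(f"machine link / human output = {ratio:.0e} "
      f"(~{orders_of_magnitude:.0f} orders of magnitude)")
```

Even with the generous kilobit estimate for human output, the gap is about nine orders of magnitude, which is the "very big orders-of-magnitude difference" the transcript refers to.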
Info
Channel: Science Time
Views: 569,476
Keywords: Elon Musk, Elon Musk AI, Artificial Superintelligence, ASI, Digital Superintelligence, Artificial Intelligence, AI, Elon Musk Superintelligence, Nick Bostrom, Ray Kurzweil, Technology, Science, Future, Science Time
Id: ZCeOsdcQObI
Length: 10min 3sec (603 seconds)
Published: Sat Oct 24 2020