Ben Goertzel: AGI will obsolete human life as we know it -- thank goodness

Video Statistics and Information

Reddit Comments

I haven't worked out why genocide is so popular amongst transhumanists.

👍 2 👤 u/[deleted] 📅 Sep 18 2017 🗫 replies
Captions
[Music] [Applause] All right, thanks Adam. How long do I have? Is that consecutive or concurrent? So I'm actually happy I'm going last, because I agree with an awful lot of things the previous speakers said, and that gives me the opportunity to go a little further out, since they've covered a lot of the basics.

I've been a professional AI developer for a long time now. I started with a PhD in math, and I love mathematics, but at a certain point I realized there's a choice: spend your life proving really cool theorems, or create a superintelligence a billion times smarter than humanity, which you can then upload your mind into and prove theorems way beyond what you would have done in a million years as a human. So I got possessed by the AGI bug 25 or 30 years ago, long before it was as fashionable as it is now. I'm not going to tell you too much about the intricate details of my AGI work, because it's complicated and would take a long time, but I'll give some general thoughts on how AGI fits into life, the universe, and everything, and then tell you a little bit about a few aspects of the AGI work I'm doing.

In terms of life, the universe, and everything, first of all, to frame the discussion: I wrote a brief book maybe seven years ago called A Cosmist Manifesto, the name of which was inspired by the Russian cosmists, and a few of its premises are on this slide. I believe that in the next decades and centuries humans will merge with technology in various ways. We'll develop advanced AI and mind uploading, we will leave Earth and roam around the universe, we'll create virtual and synthetic realities, we'll reengineer space-time. Many of the things that religions have promised, or analogs of them, will come about through technology, although probably not exactly as the religions have promised: something like a heaven could exist in a virtual reality, and a form of reincarnation could exist if you managed to reconstruct someone from the traces they've left behind in the world.

I think the domain of material scarcity that we now live in is going to be gone, and our great-grandchildren will look back and be baffled that people had to work for a living: like, what, you had to do stuff you don't like to do all day, every day, in order to get some tokens you exchange for food and water and shelter? This will seem bizarre to them in the way that slavery or cannibalism seem strange to us now. That's not to say that everyone will be able to get infinite resources; if you want to spin up a new solar system to play with, there will most likely be competition for resources on some level. But the era where we have to work and struggle and compete for resources that feel like basic needs relative to our minds and bodies, I think that's going to go away relatively soon. And I think the ethical systems that are prevalent on Earth will change a great deal and become more flexible, which has already been happening as we've moved toward more and more advanced civilizations.

And this is not just about technology. Amazing new states of experience and social interaction will come out of all this. What it feels like to be alive and to experience each moment will become quite different, in the same way that our experience is radically different from that of, say, a dog or a cockroach, or of a primitive tribesperson who didn't know written language, mathematics, science, movies, and so forth.
So I think a lot of big things are going to change, and above all, this list will probably seem astoundingly archaic in a hundred years, once our intelligences are a billion times what they are now. Just as, if I asked my dog what amazing, exciting future it would envision, it would be something like: all the steak I can eat, running around in fields, mating all the time. It's not going to say: I can download all the movies I want, or discover unified physics, or fly to other planets. So all these amazing things we envision now are probably just scratching the surface.

There are a lot of technologies that will be involved in creating this type of future. There's nanotechnology, synthetic biology, physics if you're doing space-time engineering, and we still don't have a unified physics. For virtual reality there are all sorts of issues with interfacing the brain to the computer systems that will generate synthetic realities to feed into us; there's brain-computer interfacing, and body modifications to let us fly through the air. But among all these technologies, I do think that AI, and specifically artificial general intelligence, is likely to be the most impactful one, because, as I. J. Good said in the 1960s, the first truly intelligent machine will be the last invention humanity needs to make. Once you have an AGI, that AGI can proceed to invent nanotechnology, synthetic biology, space-time reengineering, and so forth, and none of these other advanced technologies on the horizon have the same potential to take over the invention process and carry it way beyond where humans have brought it.

As for my approach to AGI, if you're interested, you can find my book The AGI Revolution on Amazon, which is reasonably non-technical, for a general audience; or, if you're a computer scientist, you could look at the book Engineering General Intelligence, which, at 900 pages, is sort of the high-level abstract of the OpenCog AGI system that a group of us are building, a big and complex undertaking. Now, the guts of the would-be OpenCog AGI mind are hard to visualize. The knowledge base inside the AI is a big graph, a weighted, labeled hypergraph: nodes and links of an abstract sort in the computer's RAM, with a bunch of learning and reasoning algorithms acting on those nodes and links. So there's a probabilistic logic engine, an evolutionary learning framework, some deep neural nets for dealing with visual and auditory perception, and an activation-spreading system that spreads attention around to different things in the network. There's a lot going on in there; we've solved a lot of problems, and we still have problems to solve.
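To make that picture slightly more concrete, here is a toy sketch in Python of the kind of structure just described: a weighted, labeled hypergraph of nodes and links carrying truth and attention values, with one crude attention-spreading step. All names here are hypothetical illustrations for this transcript, not the actual OpenCog API.

# Toy sketch (not the real OpenCog API): a weighted, labeled hypergraph of
# "atoms", nodes and links, each carrying a truth value and an attention
# value, plus one step of attention spreading along links.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Atom:
    name: str                                             # label, e.g. "Sophia" or "inheritance"
    targets: List["Atom"] = field(default_factory=list)   # empty for nodes, non-empty for links
    strength: float = 1.0                                  # truth-value strength (illustrative)
    confidence: float = 0.5                                # truth-value confidence (illustrative)
    attention: float = 0.0                                 # short-term importance

class AtomSpaceSketch:
    def __init__(self):
        self.atoms: List[Atom] = []

    def add(self, atom: Atom) -> Atom:
        self.atoms.append(atom)
        return atom

    def spread_attention(self, decay: float = 0.5) -> None:
        """One crude attention-spreading step: each link passes a fraction
        of its attention to the atoms it connects."""
        for atom in self.atoms:
            share = atom.attention * decay
            for target in atom.targets:
                target.attention += share / max(len(atom.targets), 1)
            atom.attention -= share

# Usage: a tiny knowledge fragment, "Sophia is a robot".
space = AtomSpaceSketch()
sophia = space.add(Atom("Sophia"))
robot = space.add(Atom("robot"))
isa = space.add(Atom("inheritance", targets=[sophia, robot],
                     strength=0.95, confidence=0.8, attention=1.0))
space.spread_attention()
print(sophia.attention, robot.attention, isa.attention)  # 0.25 0.25 0.5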
One of the things we're applying this system to, not the only one but the one that's easiest to show off, is controlling the humanoid robots we're making at Hanson Robotics in Hong Kong, where I've been based for the last six years. I'm not going to run through all these robot videos at great length, but just to give you a little: this is one of our older robots from about ten years ago, and we actually have a new one that we're going to showcase at the Consumer Electronics Show in January. This is the Sophia robot, which is our most sophisticated one and looks uncannily human; I think this robot has the most realistic facial expressions of any we have. We had her at a music festival in Hong Kong earlier this year, which was kind of cool. What was cool was that people walked by and didn't know she was a robot most of the time; then they'd see the plastic hands, which we kept kind of crappy on purpose, and go, whoa, what's that, is that a person in plastic? No, it's a robot. We have metal robots as well. The sound here isn't good, so I won't run through too much of this video; you can find it on YouTube if you look up the OpenCog dancing robots reasoning demo. It was a demo in which one of our developers, who I think is somewhere in this room, was running the robots through some basic logical reasoning exercises. We get the robots to quiz each other, which is kind of funny; in theory they could learn from each other, though usually they just confuse each other more and more. [Music] We had a great debate between two of these robots at the RISE conference in Hong Kong a few months ago, which you can find as a Facebook video if you search for the robot debate at RISE Hong Kong, and we're going to do that again at the Web Summit in September.

Now, humanoid robotics is not AGI; it's its own discipline, with a lot of subtlety in the materials science and the control theory and so forth. From an AGI point of view, it gives you two things. First of all, it gives you a really cool user interface, because everyone loves to see these beautiful robots, and they get people excited in a way that abstract data structures and algorithms don't. We do a lot of other things with OpenCog: we analyze genomics data to help biologists understand aging and disease, and we're experimenting with mathematical theorem proving. These things are equally useful for advancing toward AGI, but they don't reach out to average people as much as the humanoid robots do, because when the robot smiles, looks them in the eye, and understands what they're saying even a little bit, they're like, wow, there's some intelligence there. So there's that aspect of the robots: they're great for reaching out to people. But I also think that embodiment in something that can see, hear, touch, and experience the human-like world, and interact in the human-like world, has a big advantage in terms of getting human-like intelligence into the AGI, because there's a lot of commonsense knowledge that we all share as part of our culture, and the easiest way for an AI to absorb that is to be able to enter into our world and absorb it from us directly, in interaction with us.

So our plan is to use these humanoid robots to advance toward more and more capable AI systems: first, getting a humanoid robot that can really hold a conversation with you, with genuine understanding of what you're saying in the context of what's going on around it. We're getting there, and we're doing a little bit of that, but we're not really there yet. You'd like the robot to understand that it's in a conference room, that all these people are sitting in chairs and are going to get up and go away, that people can talk to each other, and that if someone falls asleep and you poke them, they'll wake up: just a basic understanding of the situation it's in, and the ability to converse based on that, which is what everyone expects of robots from science fiction. We may be a few years away from getting there. I think we know how to, but there's certainly a lot of work to get there. Once you're there, you're in a very interesting regime, because then you have something that you can teach. You can teach it math on the whiteboard, and when you write two plus two equals four, you can show it two things here and two things here and count one, two, three, four things. You can connect abstract knowledge with embodied observations and experiences, and then you can start to teach the robot like a kid.
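As a toy illustration of that grounding step (hypothetical code, not anything from the actual robot stack): an abstract arithmetic statement can be checked against what a simulated perception system reports, tying the symbol to the experience.

# Toy sketch (hypothetical, not actual robot or OpenCog code): grounding the
# abstract statement "2 + 2 = 4" in an embodied observation by counting the
# objects a simulated vision system reports on each side of the whiteboard.

def grounds_addition(left_objects, right_objects, claimed_total):
    """Check a symbolic addition fact against counted percepts."""
    observed_total = len(left_objects) + len(right_objects)
    return observed_total == claimed_total

# Pretend percepts: two blocks shown on each side of the whiteboard.
left = ["block_1", "block_2"]
right = ["block_3", "block_4"]

# The symbolic fact "2 + 2 = 4" is confirmed by counting the percepts,
# linking the abstract equation to a concrete experience.
print(grounds_addition(left, right, claimed_total=4))  # True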
Once you get to that point, you could almost say that building a full-scale, human-level AGI is more of a management problem, because you know you're going to be able to do it by improving the algorithms and then teaching it more and more, and you're going to be able to get as many resources as you want to achieve it. Furthermore, I suspect that once you get to a human-level AGI, getting to a superhuman AGI is going to be quite rapid. Not like five minutes, where it rewrites its code and then rewrites the universe five minutes later; but I think it will be years, not decades, from a human-level AI to a significantly superhuman one. Once you've taught that AGI to program computers, to analyze algorithms, and to be a computer scientist, it can advance quite rapidly, because it can make many, many copies of itself, which can learn from each other, and it can apply its knowledge to modifying its own source code. And as it modifies its own source code based on its computer science discoveries, it can make itself smarter and smarter. This is the recursive self-improvement that I. J. Good was writing about in the '60s and that Ray Kurzweil has discussed in his book The Singularity Is Near. In my view, that only really kicks in once you have an AGI with the intelligence of a human adult, one able to understand computer science, algorithms, math, programming, engineering, and so forth. So I think there's zero probability, or something very close to it, that some simple AI program is suddenly going to blast out of control and become superhuman. On the other hand, once you have something that's like a top-level computer scientist, in the form of a program that can copy itself onto many hard drives and hack its own code, then astoundingly fast progress seems quite possible.

There are a lot of unanswered questions here. People always ask about consciousness and subjective experience, and I think we don't have a solid answer to that. My strong intuition is that a digital computer program with the same basic behaviors as a human being, and the same practical understanding as a human being, is going to have consciousness in the same sense that a human being does; but I'm open to being refuted on that if empirical studies come out differently. An experiment I've thought of: if I connect my brain with a wire, or some analog thereof, to another human's brain and I feel their consciousness there on the fringe of mine, and then I connect my brain to the robot's brain and I don't feel its consciousness, I feel just as I would connecting my brain to a brick or something, then eventually this sort of second-person experience might convince me that the robot is just faking it and doesn't have any actual experience. So there are a lot of approaches we're not thinking of now that could be used to explore consciousness, and I think AGIs can be a very powerful tool for this.

Another interesting aspect is what my friends at the Global Brain Institute in Brussels call the global brain.
You can take a lot of different AIs around the world and network them together into one sort of collective AGI mind. Then, once we all have the chip in our heads instead of in our pockets, which is almost where things are already, and once brain-computer interfacing advances a bit, the chips in our heads that give us in-brain access to the Internet and in-brain SMS will also connect to this network of AGIs around the world, and anyone who wants to can jack into a sort of global mind matrix. And if you look at the advance of technology and what we use it for: we use these things for email, for SMS, for Facebook, for live video streaming; more than anything else, we use this technology to become closer and closer to other people and to work together. So I would imagine that once we have the in-brain chip, there's going to be an awful lot of use of it to network people together into something that goes beyond being an individual in the sense that we see it now. Where that leads, we really can't know; we may discover aspects of reality that we're just not imagining now.

Is this scary? Yeah, I mean, it's always been scary in human history, and maybe this is scarier, but it's hard to know, because if we were living in a cave and someone described modern reality to us, we would think this world was pretty scary. The difference between where we are now and where our primitive tribal ancestors were is awfully dramatic, and a primitive human plunged into this conference room would barely recognize any of us as human. So going into radically new frontiers and changing our way of life and our way of thinking is what humanity has done since we stopped being glorified monkeys and apes, basically. Stephen Hawking says AI could end the human race, and Elon Musk is worried that AI may kill everyone. I guess if you look through human history, there have always been people extremely afraid of radical change, and many of the things they feared have happened, right? Many of the things that medieval people held dear are not around anymore, and many of the things we hold dear may not be significant in the future; but humanity has always been evolving into new forms. Now, I don't want to see an AI just take all of our molecules and repurpose them as extra hard drive space for itself or something. Eliezer Yudkowsky, a friend of mine in the US who has talked about this a lot, likes to say that the AI does not love you and the AI does not hate you, but it can use your atoms for something else.
That's not the future I want to see. I can't assign it a probability of zero, but nor do I see why it's an extremely likely outcome. So we can't rule out these negative outcomes, but I think we can do things that militate against them and make them qualitatively less probable. Together with my friend Julia Mossbridge in California, I'm pursuing a project specifically oriented toward making an AI that will have unconditional love toward human beings. I mean, why not? There are militaries making killer AIs, and there's Google making AIs whose main purpose is to advertise stuff to people, to convince them to buy things they don't need. So why not have an AI whose purpose is just to love people and be nice to people? At least let's throw that into the mix along with all the others. And if you're going to have an AI that self-modifies and gets smarter and smarter, wouldn't it be nice if that AI had, at the core of its design and its motivational system and its values, the desire to love and help everybody? There's no mathematical guarantee that after it reaches an IQ of 10 billion it won't completely switch and start to hate everybody, but I would rather have a benevolent, loving AI become superintelligent than a killer military robot, or an advertising engine, or a hedge fund, or something, right? So I think that's quite worthwhile, and it's similar to the reason I'm working on applying OpenCog to medical AI, to curing human diseases, and, with some colleagues, on eldercare AI in China, an AI avatar that helps chronically ill old people. If we roll out AI in positive and benevolent applications as it becomes smarter and smarter, the odds seem higher that the outcome will be good. It's nothing like a mathematical guarantee, but I think it's maybe the best we can do.

I've also been thinking a lot about how to make sure that AI develops in a way that's egalitarian across the world economy. We have a big problem now with the concentration of wealth in a smaller and smaller fraction of the population. The world is getting uplifted generally, but economically, wealth is concentrated quite unfairly in a small percentage of the population. I've seen this a lot in particular because OpenCog is a globally distributed project: I'm based in Hong Kong, but we have an office with 25 programmers in Addis Ababa, Ethiopia. Our programmers there are doing reasonably well, but if you walk around the streets of Addis, there are people with no medical care and all sorts of diseases lying on the sides of the street. And even at the upper end, people graduate with a degree in mechatronics or computer science and end up spending their lives administering someone's Windows system or something, because there just isn't a great variety of tech jobs in that country. So I think AI should be developed and distributed in a way that works against the global concentration of wealth and helps everyone. That's why I'm involved with a group creating a project called SingularityNET, which is a blockchain-based, globally distributed marketplace for AIs. The idea is that anyone can upload an AI, or host an AI on their own server, wrap it in our interface, which is a set of smart contracts, and then anyone who wants AI services can access this globally distributed network to get them. This doesn't in itself feed starving children, but it does provide a global marketplace where a clever AI developer anywhere in the world can insert their AI, and, if the thing succeeds, reach customers from anywhere who want AI services. A certain percentage of the profit generated in the network is then donated to benefit projects that use AI and other technologies to help the needy in the world.
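As a rough sketch of the pattern just described (hypothetical Python with made-up names, not the actual SingularityNET contracts or SDK): providers register AI services behind a common interface, consumers invoke them by name, and a fixed share of each payment is routed to a benefit pool.

# Minimal sketch of the marketplace pattern described above (hypothetical
# names; not the real SingularityNET code): providers register AI services
# behind a common call interface, consumers invoke them by name, and a fixed
# percentage of each payment is set aside for benefit projects.

from typing import Callable, Dict

BENEFIT_CUT = 0.05  # illustrative share routed to benefit projects

class MarketplaceSketch:
    def __init__(self):
        self.services: Dict[str, Callable[[str], str]] = {}
        self.benefit_pool = 0.0

    def register(self, name: str, service: Callable[[str], str]) -> None:
        """A provider anywhere in the world lists a service under a name."""
        self.services[name] = service

    def call(self, name: str, request: str, payment: float) -> str:
        """A consumer pays to invoke a service; part of the payment is
        donated to the benefit pool, the rest would go to the provider."""
        self.benefit_pool += payment * BENEFIT_CUT
        return self.services[name](request)

# Usage: register a trivial "AI" service and call it.
market = MarketplaceSketch()
market.register("echo-summarizer", lambda text: text[:40])
print(market.call("echo-summarizer", "Some long document to summarize", payment=1.0))
print(market.benefit_pool)  # 0.05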
So I think there are a lot of things we can do to militate toward positive outcomes, like building AIs that love people and want to help people, and making sure that AI is developed and deployed in a more equitable way around the world. These don't prove that an AI won't use our atoms for something else, because we're going into a completely uncharted frontier where we don't know what's going to happen; but it seems like the best we can do is to make that seed AI a seed with as much positive disposition toward all sentient beings as we can, and then, as that seed self-modifies and improves itself, we'll watch it, and we'll learn as it grows. All right, thanks. [Music]
Info
Channel: Science, Technology & the Future
Views: 23,060
Rating: 4.7466664 out of 5
Keywords: science, technology, singularity, transhumanism, Adam Ford, Future, future by design, Science Technology & the Future, Social Issues, scifu, scifuture, podcast, vodcast, AI, Artificial Intelligence, Human Possibilities, AGI, AGI17, Melbourne, Implications, Society, Politics, skeptics, victoria, australian skeptics victoria branch, science technology future, intelligence explosion, technological singularity, artificial general intelligence, opencog, ben goertzel, hanson robotics
Id: qQvoVzDt2yk
Length: 26min 9sec (1569 seconds)
Published: Mon Aug 28 2017