Ben Goertzel - The unorthodox path to AGI

Captions
Hey everyone, welcome back to the podcast. Today we're talking to Ben Goertzel, who is actually one of the founders of the AGI research community. He's been interested in this area for decades, long before most people were, and he's done a whole lot of interesting thinking about what it's going to take to build an AGI. We'll be talking about that a fair bit. We'll also be talking about his current AGI initiative, SingularityNET, which is taking a very different, and some might say radical, approach to building AGI that differs considerably from what we see at DeepMind and OpenAI. Instead of focusing on deep learning or reinforcement learning alone, Ben is taking an approach centered on decentralization: trying to get different AIs to collaborate constructively, in the hope that the collaboration gives rise, through emergence, to artificial general intelligence. It's a very interesting strategy with a lot of moving parts, and it leads to conversations about AI governance, AI risk, AI safety, and some more exotic topics as well, like consciousness and panpsychism, which we'll get into too. I'm really looking forward to this wide-ranging discussion with Ben, and I hope you enjoy it as much as I did.

Hi Ben, thanks so much for joining me for the podcast.

Thanks for having me, I'm looking forward to the conversation.

Me too. One of the areas I want to start with, because it's upstream of a lot of other things: people often talk about when AGI is coming, what AGI will look like, and what the risks associated with AGI are, but upstream of all that is a separate conversation about what intelligence itself is, and that seems like a hard thing to pin down. I'd love to get your thoughts on what you think intelligence is and how you define it.

I do think intelligence as a concept is hard to pin down, and I don't think that matters too much. Life, for example, is also hard to pin down as a concept: you can debate whether viruses are really alive, or whether digital life forms or retroviruses are alive. There are some things that are clearly alive, some things it's much less useful to think of as alive, like a rock, and then things near the border in an interesting way. Biology didn't grind to a halt because we don't have an exact definition of life, and I think something similar is true in cognitive science and AI.

I've gone through a bit of a journey myself in thinking about intelligence, a journey that has made me less and less enthusiastic about precisely defining intelligence as an important part of the quest. When I started working on the theory of AGI in the 1980s, I wanted some mathematical conceptualization of the problem, so I started from the view that intelligence is the ability to achieve complex goals in complex environments. I think that's in the spirit of what Shane Legg wrote about much later in his thesis, Machine Super Intelligence. My conception was a little broader, because he's looking more at a reinforcement learning paradigm where an AI is trying to optimize some reward function, and optimizing expected reward is only one species of goal achievement: you could instead try to optimize some function over your future history, which isn't necessarily an average of a reward.
You can also look at multiple objective functions, so you're Pareto-optimizing several functions defined over your future history. Going in that direction is interesting; it leads you down some rabbit holes of algorithmic information theory and whatnot.

On the other hand, you can look at intelligence much more broadly. My friend Weaver, David Weinbaum, has a PhD thesis from the university in Brussels called Open-Ended Intelligence, where he looks at intelligence as a complex self-organizing system that is modifying, extending, and transforming itself and its environment in a way that is synergetic with that environment. From that point of view, achieving goals is one thing that happens in a complex self-organizing system of this nature, but goals may pop up, go away, and be replaced by other goals; the synthesis and interpretation of goals is just part of the overall cognitive self-organization process. So achieving complex goals in complex environments is part of the story, but not necessarily the whole story, and obsessing on that part of the story could lead you in bad design directions. You could see the current focus on deep reinforcement learning in much of the AI community as being driven, in part, by an overly limited notion of what intelligence is, and of course successes in that regard tend to reinforce that overly limited definition.

You can then go down the path of formalizing the notion of open-ended intelligence, and you can do that. Weaver, in his thesis, wrote a bunch about category theory and algebraic topology formulations, but that becomes a whole pursuit on its own, and every researcher has to ask how much time to spend formalizing exactly what they're doing versus actually doing things and building systems. Some balance is useful, because it is useful to have a broader concept of what you're doing than just the system you happen to be building at that point in time. On the other hand, going back to the analogy with biology: if you're trying to build synthetic life forms, it's useful to reflect on what life is and on the fundamental nature of metabolism and reproduction and so forth, but the bulk of your work is not driven by that; the bulk of your work is figuring out how to string these amino acids together. So I'm attracted by the philosophical side, but it's probably best addressed in synergy with building stuff.

It's interesting: when Shane Legg, who went on to co-found DeepMind, was working with me in the late 1990s at Webmind, his view then was that if we want to build AGI, first we have to fully understand and formalize the definition of intelligence. Then he published the thesis on machine super intelligence, so he did, to his satisfaction, formalize the definition of general intelligence, although I have a lot of issues with his definition, he was happy with it. And shortly after that he decided that this was actually not the key to creating general intelligence; instead, let's look at how the brain works and follow that.
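As a rough way to see the contrast being drawn here (my notation, a sketch rather than Ben's or Legg's exact formulation): the Legg-Hutter universal intelligence measure scores a policy \pi by

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where V_\mu^\pi is the expected cumulative reward the agent \pi obtains in computable environment \mu, and K(\mu) is the Kolmogorov complexity of that environment, so simpler environments count more. The broader reading Ben sketches replaces the single reward sum with several functionals F_1, ..., F_k defined over the agent's whole future history h_\pi, and asks only for Pareto-optimality rather than maximization of one scalar:

    \pi^\ast \text{ such that there is no } \pi' \text{ with } \mathbb{E}[F_i(h_{\pi'})] \ge \mathbb{E}[F_i(h_{\pi^\ast})] \text{ for all } i, \text{ strictly for some } i.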
So I guess each of us working in the field finds it useful to clarify, in our own mind, something about how intelligence works, and then you go off and pull in a whole lot of other ideas to actually work on it.

That's really interesting, and one of the fascinating things about the approach you're taking is that it almost implies that the moment we specify a loss function, the moment we get too absolutist about what we're trying to optimize, we create room for pathology, room for an almost ideological commitment to a certain loss function that maybe we don't understand. Is that right?

Yeah. One thing you find in experimenting with even simple AI systems is that, just as a computer program has the disturbing feature of doing what you programmed it to do rather than what you wanted it to do, an AI system given a certain objective function has the property of figuring out how to optimize that objective function, rather than the function you hoped you were giving it. It's common in game playing: if you set an AI to play a game, it doesn't care about the spirit of the game; it's just trying to find a way to win given whatever funny little loopholes there are in the rules. For a relatively simple game like chess or Go that's all fine, because there aren't that many loopholes, the board isn't that big, and there aren't that many pieces. If you're looking at a more complex video game, very often an AI will find some really insane, weird-looking strategy that looks like it can't possibly work, but it does work; it's allowed by the rules of the game. I found this playing with genetic algorithms for learning game strategies a long time ago, back in the 80s, as others did even before that, and the same phenomenon occurs on a larger scale. There's a whole bunch of papers on pathologies where you ask an AI to optimize a reward and it finds some way to optimize that reward that wasn't what you were thinking. This ties in with wireheading, which gets talked about in the transhumanist and brain-computer-interfacing communities: if your goal is to maximize pleasure, defined as stimulation of your pleasure center, then why not just stick a wire in your head and stimulate your pleasure center directly? That's a trivial example, but there are indefinitely complex versions of that pathology.

It's not an easy problem, because if you look at the broader issue: I genuinely want an AGI that I create to, in some sense, reflect the values I have now. Even if I say I don't want it to fully agree with me on everything, the notion of "not fully agree" is one I want it to understand in terms of my own mindset. I don't want it to slice my children into little strips and stir-fry them, say; there are a lot of things I clearly don't want to happen. And there are ways in which I don't mind the AI leading me in a different direction than I was thinking, just as I want people wiser than me to lead me in different directions. But formalizing the wiggle room I want the AI to have in deviating from my values is also hard.
And if I had an AI that had exactly the values I had when I was 20, I would be annoyed at that now, let alone the exact values the human race had in 1950, let alone 1500. So even if we had an exact formulation of our values, we wouldn't want the AGI to be pinned to that forever; we want its values to evolve over time in synchrony with us, but we're evolving in directions we don't know. Formalizing all of that is infeasible given the tools currently at our disposal, which leads one to think that formalizing these things hopefully can't be necessary and can't be the right way to do it. That leads you in whole other directions: if formalizing ethics, and pinning an AI down to formalized ethics, leads to all sorts of bizarre conundrums that are very far from resolution, maybe we take a step back and take a more loosey-goosey, human view of it, which is what we do with ethics in our own lives. If we take early-stage AGI systems and relate to them in a way involving positive emotional bonding, and if we have them doing beneficial things like education, health care, and science, then aren't we qualitatively moving that AGI in a positive ethical direction, rather than trying to approach it by formalizing our goals in some precise way and then guaranteeing the AI won't deviate from those goals except in the ways we want it to deviate, which we also can't formalize? That's both satisfying and unsatisfying, but that seems to be the reality as I see it now.

There's a sense I keep getting from this. As you've described, our morality has shifted dramatically over the last 500 years; there are things we do today and take for granted that would have been considered morally abhorrent 500 years ago.

Yeah. My mom is gay; she would have been burned at the stake. And right now I eat meat every day, with some moral misgivings, but it's easy for me to imagine that in 50 years, especially once synthetic meat really works, and it almost works now, people will look back on eating hamburgers the way we now look back on cannibalism. We say "but the burger tastes good," and the stone-age tribesman might have said the other guy's upper arm tastes good.

And to the degree that the cosine similarity, basically, between our moral fabric today and the moral fabric of 1500s or 1300s Europe, or wherever we've come from, is basically zero...

Well, it's not zero. Kinship is a value: we love our parents and our children, we love our spouses, we don't want to torture our own bodies. There is a core of humanity there. But the subtle thing is that what we identify as that core is not what they would have identified in 1500. They would have said belief in God is the core, and to me that was just some random nonsense people believed back then; that wasn't the core.

I guess you do have populations, too, who would look at it that way. There are individuals within humanity who would say it's morally bad to love your family; I'm sure among seven billion people you can find a handful who would argue for that.
Right, and I guess part of what I'm wondering...

There are transhumanists who believe that, because that's tribal thinking which is holding us back from moving on to a broader singularity where you're not clinging to all this DNA stuff.

Sure. So I guess where that leads me is the question of our moral trajectory, of which AGI is eventually going to be a part. Is the process more important than the end point? Is it more about the stepwise progression, or do you think there's actually a subset of moral norms that will be preserved throughout because they're intrinsically good?

Clearly the process of transformation is a lot of what's important to us. If we could upgrade our intelligence at any speed, we would probably not choose to multiply our intelligence by 1,000 every second, because we would just lose ourselves; that's basically replacing yourself with a god-mind. Which could be a cool thing to do: if the choice were either I exist or the god-mind exists, maybe I'd decide the god-mind should exist. But if you doubled your intelligence every six months or so, that would be more satisfying from the point of view of our current value system, because you're getting continuity: you get to experience yourself becoming a superintelligent god-mind, rather than it just happening. I think that has to do with why we feel we're the same person we were at three years old. I remember back to around 18 months, and the sense in which I'm the same person I was at 18 months, well, of course you can cherry-pick common personality traits, but the main thing is that at each point I thought I was the same guy as the day before and the day after, just changed a little bit. Even at an incredible inflection point in your life. A few times in my life I've been through some big consciousness transformation where I felt like within a week I was a whole different person, but still, I didn't really think I was a whole different person; I realized it was still me, just in a different state of consciousness. That continuity is important to us on the cultural level too, and I think it will continue to be important to us going forward for a while. Whether we will eventually lose our taste for continuity is a big question I don't have the answer to. You can imagine the taste for continuity going down year by year until, by around 2100, we're just a super AI mind that really isn't attached to itself at all and doesn't care about upgrading its intelligence by a factor of a thousand in a second. But right now we clearly do care about that, and it's going to drive progress toward the singularity in a number of different ways.

Speaking of progress toward the singularity, there are a number of organizations working on that big project. Two of the most high-profile in the media are OpenAI and DeepMind, and I know you've had a lot of influence on DeepMind in particular, in terms of the folks who worked there
today and the thesis, but I'd love to get your sense of, first off, what's your mental model of how OpenAI and DeepMind are approaching AGI, and then how does that contrast with SingularityNET?

The first thing I would say is that, to me, putting those two in the same category is like putting, I don't know, Winston Churchill and Trump in the same category or something. They each have their own strengths, but I don't think those are really comparable organizations. And I'd talk about Google as a whole, not just DeepMind but Google Brain in the Mountain View office, which is where BERT and transformer neural nets came out of, for example, along with a whole lot of other valuable stuff. Google as a whole has put together by far the best AI organization of any centralized company out there. Of course academia as a whole is stronger than any one company, by a humongous amount, in terms of coming up with new AI ideas, but if you're looking at a specific organization, a company or a government lab, Google has just done a really good job of pulling in AI experts from a wide variety of backgrounds. They have deep learning people, they've pulled in cognitive architecture people, they've pulled in a whole bunch of algorithmic information theory people; they got Marcus Hutter, who invented the general theory of general intelligence. So there's a great deal of depth there. I know there's some internal rivalry between the Mountain View people and the UK people, but DeepMind is very strong, Google Brain and other teams in Mountain View are also very strong, Google in the New York area is very strong. All in all there's a lot of depth, and there are a lot of different approaches being pursued behind the scenes which are qualitatively different from the things that get a lot of publicity. My oldest son Zarathustra is doing his PhD in AI for automated theorem proving, machine learning to guide theorem proving, and his supervisor, Josef Urban, is an amazing guy who organizes the AITP (AI for Theorem Proving) conference every year. You've got a bunch of Google and DeepMind people there every year doing work on AI for theorem proving and connecting that with AI ethics and some of the things we were talking about earlier, and that's pretty much unconnected with video game playing or brain modeling or the things Demis Hassabis is personally into. So there's a lot of depth there, and they really want to create AGI. Larry and Sergey understand what AGI is and what the singularity is; Demis and Shane understand it very deeply. I think they are still, at a high level, predominantly committed to deep reinforcement learning and a sort of conceptual emulation of how the brain works on modern hardware as the approach to AGI. They're clearly open-minded enough to have hired great people who don't share that predilection, which is to their credit, but still, the vast bulk of their machine is deep reinforcement learning: crunching a lot of data on a lot of machines, trying to figure out what the brain is doing, and finding ways to do the same kind of thing in neural nets on a lot of GPUs. If that approach is going to work, it would shock me if Google weren't the ones who got there. Now, I happen
to think that is not the best approach, which is a different topic. OpenAI, on the other hand, I don't know as well, but my wife, who has an AI PhD (mine is actually in math, so she's more of a certified AI expert than I am), and I went to OpenAI's unconference a few years ago in San Francisco, and it felt like a room full of super-high-IQ, brilliant, passionate 25-year-old guys or younger, mostly guys, literally, who thought that AI had been invented about five years previously and that backpropagation was the only AI learning algorithm there is. There were a few guys there saying, "for our hackathon project this weekend we're going to enable natural-language-based authoring of Python programs; we're going to train a seq2seq neural network to map natural language to Python." After three days of hacking day and night: "well, we found the irregularities in natural language are a bit more complicated than anticipated." These guys didn't know that linguistics or computational linguistics had anything to say; they may not have known it existed as a field. Of course there were people at OpenAI who were very deeply knowledgeable, but I felt that at that stage OpenAI was sort of a deep-neural-nets-only shop, and they were just hiring more and more people to use that one hammer to bang on whatever they could, which is the exact opposite of what DeepMind and Google Brain have done. And then the whole thing of "GPT-2 is too dangerous to release because it's going to destroy the world." Spouting nonsense is bad, but it's not going to destroy the world, and that architecture was invented at Google anyway; it's just transformer nets, it's basically BERT run in one direction.

Now, I'm working on a number of things: a bunch of AGI research that we'll talk about in a few minutes, and also some applied, practical AI projects. One thing we announced yesterday is a robot called Grace, an elder-care robot built in collaboration with Hanson Robotics, which is supposed to go into elder-care facilities and hospitals to provide social and emotional support along with some practical functions. For that project, which I want to launch well before we get to AGI, I need a practical dialogue system. I wish we could use OpenAI's stuff, I wish we could use GPT-2 or GPT-3, but a high percentage of what they say makes no sense; it's just bloviating, and you can't put that in an elder-care robot. So you see these things they were claiming are going to destroy the world because they're so human-like, and they're not even good enough to use in almost any practical project right now. I'd say DeepMind has not done that kind of thing, on the other hand. GPT-2 I didn't mind so much, because they hadn't gotten their payday yet, but now they've already taken Microsoft's money, so why do they still need to overblow things? It's not financially necessary anymore.

What is it that you think prevents GPT-3 from performing at the level it needs to? Is it that, symbolically, it has no
understanding of anything it's talking about. My friend Gary Marcus, who's working on robots with Rodney Brooks, put it beautifully in his article, "GPT-3, Bloviator": it just spouts stuff that sounds plausible, but it has no idea what it's talking about. You can see it with multiplying numbers. It can do two-digit by two-digit multiplication, but when you get up to four-digit by four-digit multiplication of integers, it's right maybe 15 or 20 percent of the time, which is just really weird. If you know the algorithm, you should be right almost 100 percent of the time unless you make a careless error; if you don't know the algorithm, you should be at roughly zero. What it did is look at all the multiplication problems online, memorize the answers, and come up with some weird extrapolations that let it do a few problems that weren't in its training database. It doesn't understand what multiplication is, or it would never get only 15 or 20 percent of those problems right. You can see the same thing in many other cases: ask it who were the best presidents of the US and it'll give a lot of good answers and throw a few kings of England in there just for fun, because it doesn't know what "of the US" means. In the end it has no more to do with AGI than my toaster oven does; it's not representing knowledge in a way that will allow it to make consistently meaningful responses. That's not to say everything in there is totally useless for AGI; it's just that you're not going to make GPT-4, 5, 6, 7 and get AGI. There's very interesting work on tree transformers, where you use a constituency grammar to bias the generation of a transformer network, and I've been playing with something like that where you use a whole knowledge hypergraph, a knowledge graph dynamically constructed based on what's been read, to guide the generation of the transformer. Then you have some semantic representation, some knowledge, playing a role in the generative network.

And that's in OpenCog?

Right, yeah. So some of the ideas and tools in transformer networks may end up being one component among others in a viable AGI architecture, but OpenAI is not working on those things as far as I know. From what I can see, their philosophy is mostly: take the best AI out of the literature and implement it at very large scale with a lot of data and a lot of processors, and then it will do new and amazing things. The thing is, that's semi-true. There's so much in the AI literature that gave so-so results and will give amazing results once you run it on enough data and enough processor time, but you usually need to rejigger the architecture, rethink things, and add some important new features in the process of doing that scaling up. I was teaching deep neural networks in the 90s at the University of Western Australia; I taught a class in neural nets cross-listed between the computer science and psychology departments, back when I was an academic. We were doing multi-layer perceptrons with recurrent backpropagation, teaching deep neural nets, and you could see you needed to scale that massively, because you were running a neural
net with like 50 neurons and it took all your processor time for hours. But the process of scaling it up wasn't "take that exact code and idea and run it on a lot more machines." People still aren't using recurrent backprop; they came up with other methods. So at a very high level I do think we can get to AGI by taking ideas we have now and scaling them up on a massive number of machines with a massive amount of data, but when you're in the weeds, that scaling up involves reinventing a whole bunch of new math and algorithms in the spirit of what was there before. I think OpenAI so far is sticking a little too closely to "let's take what someone else invented and literally scale it up on more machines with more data." That leverages their position very well, in that they have a lot of money, a lot of data, and a lot of computers. I don't think we necessarily need a humongous conceptual revolutionary breakthrough to get to AGI, but I think we need more creativity than that, and that leads on to my own AGI work.

Yeah, I was going to say maybe that gets us to OpenCog and SingularityNET, and I think this is entangled with the next question I wanted to ask: what are some of the risks you see coming from AGI? I know you've generally been more optimistic than some of the hardcore pessimists in the community, but you do see some risks, and I'm curious how that plays into your view about what the most promising route is.

There are two categories; let me address the risk thing first, because once I start talking about OpenCog and true AGI it's hard to stop. One risk is the Nick Bostrom-style risk: you do your best to create an ethical AGI using all your math theory and your loving care and common sense, you get all the politics right, and you still can't reduce the uncertainty to zero when you're creating something that is, in the end, more generally intelligent than you are. No matter how optimistic you are, there are some odds that there's something you can't foresee that's going to unfold. Now, I don't buy Nick's argument that a superhuman AGI is intrinsically going to have a drive to turn us all into computronium to make more hard drive space for itself. The whole drive to be megalomaniacal, consume all the resources, and squash your enemies is not necessarily there in an engineered system, by any means.

But you're not looking at that as an instrumental goal that emerges?

No. I think that emerges in systems that evolve through competition, or through some complex mix of competition and cooperation. If you're engineering a singleton mind, say, it isn't competing with anything; it didn't evolve in that competitive way, so it doesn't have to have that motivational system. I don't think the drivers that cause humans to be that way have to be there for an AGI, because, you know, you and I could compete, but we can't merge our brains if we want to become a super-organism, whereas two
AGIs that started competing could decide to merge their brains instead. So I don't think that logic applies to systems that are engineered and don't evolve the way we did. On the other hand, I have to say there is an irreducible uncertainty in creating something a hundred times smarter than me. For all we know, it could immediately make contact with some even more superintelligent mind that's immanent in the quantum fluctuations of elementary particles we think are random, and that other AI it contacted could be good or bad from a human point of view. So there's an irreducible uncertainty there that is really hard to put a probability on; but then, to me there's also an irreducible uncertainty that I wake up in five seconds and realize I'm a brain in a vat in some other universe. There are irreducible uncertainties all around, if you reflect on them.

Then there's a more concrete risk, which is that humanity could develop malevolent or indifferent AIs in a pretty simply comprehensible way even before you get to massive super-AI. This gets to my oft-made observation that the main things AIs are used for in the world today are selling, killing, spying, and crooked gambling: advertising, military and surveillance applications, and Wall Street trading. If this is what's shaping the mind of the baby AGI, maybe it will end up being a greedy, megalomaniacal sociopath reflecting the cognitive structure of the corporate and government organizations that gave rise to it. That's a palpable risk. You can see it: if the early AGIs are spy agencies, Wall Street traders, and advertising companies exacerbating global wealth inequality, and you've got a bunch of people in the developing world with no jobs anymore because robots took them, reduced to being subsistence farmers and computer hackers, there are a lot of potential dystopic scenarios that don't need any super-AGI, and from some points of view they even look like the most likely scenario given the nature of world politics and technology development.

Which also seems to motivate your approach, the decentralization.

That certainly motivates my approach. And I think this ties in with AI algorithmics in a way that's not all that subtle but isn't commonly appreciated. Big tech companies, even the more beneficially oriented ones run by good-hearted human beings, are focusing on the AI approaches that best leverage their unique resources, which are huge amounts of data and huge amounts of compute power. If you look at the space of all AI algorithms, maybe there are some that don't need that much data or compute, and some that need a lot. The big tech companies sort of have a fiduciary duty to put their attention on the ones that require a lot of data and compute, because that's where they have a competitive advantage over everyone else. And if you're working at those companies, the APIs for leveraging their data and compute power are really slick and so much fun to
work with. If you're working at Google, one simple command and you're running a query over the whole web; that's amazing. Of course you've got a bias to use those tools, but the result is that the whole AI field is being pushed in a direction that's hard to pursue if you're not inside a big tech company. It's a valid direction, but there may be other directions, and I think there are, that don't need as much data or compute power, and those don't have nearly such slick toolchains associated with them. OpenCog, which I'll talk about in a moment, is kind of a pain in the ass to work with; TensorFlow is much easier, and that's not entirely for fundamental reasons. We don't have the UI developers, we don't have the team needed to make parallelism scale up automatically. So the approaches favored by the big tech companies have slicker tools, which attracts more developers, even ones not working for those companies. And if you're a PhD student, writing a thesis that matches what a big tech company is doing makes you more likely to get a good job right away, so why not do that, even though there's other interesting stuff? The whole AI field gets sucked into what's valuable for these companies doing selling, crooked gambling, spying, and supporting killing by national governments.

It's funny how common these effects are. I'm just thinking back to my time in academia in physics, where I was studying interpretations of quantum mechanics, and that's a great career-ender: if you ever want a career-ending thing to study, go into foundational quantum mechanics. It's the same sort of thing.

Yeah, and that's an area I've studied a lot; there's a lot of depth in the interaction between that and quantum computing, but that's a whole other fascinating topic. I think reinforcement learning really suits a company with a metrics-oriented business model, and supervised learning does as well. If you're running an advertising company, sales of all kinds is all about metrics: how big is your pipeline, how many deals have you closed, what has the ROI of this advertising channel been. And Wall Street obviously as well; one of the beautiful things about working in computational finance, as a geek, is that you get immediate feedback on what your algorithms are doing, as opposed to medicine, where it can take years to get feedback. Because these business areas are metrics-oriented, they've really driven AI toward things like supervised learning and reinforcement learning, where you're optimizing quantitative metrics all the time. It's not that that's bad or invalid, but it's not the only thing you can do in advanced AI or AGI. Other things, like hypothesis generation, which is important for medicine and science, are just nastier to quantify; computational creativity in general is. Google's DeepDream got a lot of news, and it's creative compared to a lot of things, but in the end it's just combining pictures that were
found online; it's not that creative. Computational creativity is hard to put a metric on, and AI development has been driven by these metrics-driven business models. There's some good in that: having a metric lets you cut through internal politics in certain ways, and it can drive progress. On the other hand, it also drives progress away from valuable things. My mom spent her career running social work agencies, and she was always fed up with philanthropic organizations that would only donate to non-profits demonstrating progress according to metrics. It's very hard to show progress according to metrics if you're doing, say, enrichment education for low-income single mothers: the metrics unfold over years, and it costs a lot of money to follow up with people and see how they're doing. So philanthropic organizations prefer to donate to breast cancer research or something else where you can quantify progress. Focusing on quantifiable progress is good, but it pushes you toward certain things, and in AI it's pushing toward reinforcement learning, where you're doing reward maximization, and away from education, healthcare, and science, where things are harder to quantify immediately. That's what I see as the short-term danger, and in a way, focusing on the long-term Nick Bostrom danger, which is valid in a way, is a disinformation campaign to distract attention from the short-term danger. When a big tech company tries to get you to pay attention to "super-AGI might kill you 50 years from now," and sponsors conferences on that, partly it's a way of distracting your attention from the damage they're causing in the world at this exact moment, although there's a valid aspect to it too. This world is very tangled up.

It's funny how often problems come down to the fact that important things are hard to measure, and we score the functioning of, for example, the economy in terms of the stock market or GDP or some metric that's easily...

GDP is going up a lot this quarter, right? Very exciting, what's the problem, everything's great. Well, you and I are probably doing okay; I'm not complaining personally at the moment. But overall, that's a metric covering profound economic issues. My sister's a school principal, and there are low-income kids just sitting at home watching TV all day, getting no education, and there are going to be a lot of ramifications to that. That's hard to measure; GDP going up is easy to measure.

I do want to tack into the last area I want to make sure we touch on, which is: which AI risk mitigation strategies do you think are most promising? Maybe focusing on the short-term risk you've highlighted.

Let me say a little about my own AGI work now, because that will tie into it. My feeling since the late 90s has been that what I would call a hybrid approach to AGI is going to be most successful. There were architectures in the 80s called blackboard architectures, where you had sort of a,
you know, there were blackboards back then; now it would be whiteboards or Promethean boards. But imagine a blackboard and a bunch of experts in different areas, all writing on the same blackboard, all erasing what each other wrote, and collectively doing a proof or making a picture or something. I think each of the historical AI paradigms, which I would list as neural nets, supervised and unsupervised learning, logical inference, evolutionary learning, and a number of others, is like one of the blind men grabbing part of the elephant of intelligence: one guy's grabbing the nose, another is pulling the ears, another is pulling on something else. Each of the traditional AI paradigms is getting at a key part of what we need for human-like general intelligence. Given that, one approach is to try to make one AI paradigm incorporate what's good about all the others: make a neural net do logical reasoning and do evolution. Another approach is to come up with some new meta-algorithm that incorporates what's good about all these different paradigms, and that's very appealing to me personally. But the approach I think is probably going to succeed first is a hybrid approach, where you let algorithms from these different classical AI paradigms cooperate in real time on updating a common knowledge base. I think work in that direction will ultimately lead to what looks like a single meta-algorithm incorporating the best of these paradigms, but we're going to get there by hybridizing different algorithms from different classical AI paradigms and having them work together.

In the OpenCog architecture, what we have is a knowledge hypergraph: a weighted, labeled hypergraph. Actually it's a metagraph, not a hypergraph.

Can you define those things?

I was about to. A graph has nodes with links between them. A hypergraph has nodes and links, but a link can go between more than two nodes; you can have a link spanning three nodes, say. In a metagraph you can have a link pointing to a link, or a link going to a whole subgraph, so it's about the most abstract kind of graph you can get. The OpenCog Atomspace is a weighted, labeled metagraph. Weighted means each node or link can have a set of quantitative or discrete values associated with it; labeled means there are types associated with nodes and links. We don't enforce a single type system on the Atomspace; you can have a collection of type systems on it, so from a programming-language point of view it's like a gradual typing system: you can have something untyped, or something with a partial type, and new types can even be assigned via learning, but you can have type checkers and that whole instrumentation on there. So it's a weighted, labeled knowledge metagraph.
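To make the data structure concrete, here is a minimal sketch of a weighted, labeled metagraph in Python. It is illustrative only, not the real Atomspace API: links are themselves Atoms, so a link can point at other links, and every Atom carries a type label plus weights.

    # Minimal sketch (not the actual OpenCog Atomspace API) of a weighted,
    # labeled metagraph: Links are Atoms too, so a Link may target other Links.
    class Atom:
        def __init__(self, atom_type, weights=None):
            self.atom_type = atom_type        # type label, e.g. "ConceptNode"
            self.weights = weights or {}      # e.g. {"strength": 0.8, "confidence": 0.6}

    class Node(Atom):
        def __init__(self, atom_type, name, weights=None):
            super().__init__(atom_type, weights)
            self.name = name

    class Link(Atom):
        def __init__(self, atom_type, targets, weights=None):
            super().__init__(atom_type, weights)
            self.targets = targets            # any Atoms: Nodes *or* other Links

    # "Cats are animals", with a truth-value-like weight...
    cat, animal = Node("ConceptNode", "cat"), Node("ConceptNode", "animal")
    inherits = Link("InheritanceLink", [cat, animal],
                    {"strength": 0.95, "confidence": 0.9})

    # ...and, because it's a metagraph, a link *about* that link:
    believed = Link("EvaluationLink",
                    [Node("PredicateNode", "believed-by"), inherits,
                     Node("ConceptNode", "Ben")])
    print(believed.atom_type, [t.atom_type for t in believed.targets])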
Then you allow multiple different AIs to act on this knowledge metagraph concurrently, and this is where things get interesting. If you have a probabilistic logic system, an attractor neural net, a reinforcement learning system, and an automated program learning system all working on the same knowledge hypergraph, you need them to cooperate in a way that leads to what we call cognitive synergy, which roughly means they don't screw each other up; basically, if one of the algorithms gets stuck, the others should be able to help it overcome whatever obstacle it's facing. That requires the various algorithms to share some abstraction of their intermediate state with each other, which means some abstraction of the intermediate state of each algorithm, as it operates on the knowledge hypergraph, needs to be in the hypergraph itself. And this is where the design gets subtle, because doing everything in the hypergraph is slow, but doing nothing in there means you just have a multi-modular system with no ability for each algorithm to see the others' intermediate state. So the question is what abstraction of its state, what portion of each algorithm's state, goes in the hypergraph versus outside it. On the neural net side, for example, we developed what we call cognitive module networks: if you break a deep neural architecture into layers, you may have a node in the hypergraph representing each layer and its parameters, and the piping between layers happens in the hypergraph, but the backpropagation inside the layer happens in some Torch object that lives outside the hypergraph, so you're constantly bouncing back and forth. The nice thing with Torch is that you have very good access to the compute graph, unlike in TensorFlow. So if you have two different Torch neural nets, you represent them by nodes in the hypergraph, and composing those Torch neural nets is represented by a symbolic composition of the nodes in the hypergraph. If your reasoning engine comes up with some new way to compose neural modules, that can be backed out to a composition in Torch, and the composition of the compute graphs just passes through; in math terms you have a morphism between the Torch compute graph and the logic graph within the hypergraph. There's a lot to work out there, and I've just described one little bit of it, which Alexey Potapov, who leads our St. Petersburg team, published last year in a paper on cognitive module networks. You need similar thinking pairwise for each pair of AI algorithms: how does your evolutionary learning algorithm make use of probabilistic reasoning to do fitness estimation, and so on. So it's a hybrid approach, and admittedly it has a steep learning curve, because you need to understand this cross-cutting knowledge representation and all the algorithms playing a role in it.

Where we're at now: we came to the very painful and annoying conclusion that we needed to rebuild almost the whole OpenCog system from scratch. So this is OpenCog 2.0; we're calling it OpenCog Hyperon. I decided to name the versions after obscure elementary particles instead of numbers, the way Linux has all these funny animals or Apple has California place names. We're doing Hyperon now, and when we pivot to quantum computing we'll make it Tachyon.
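The "symbolic composition passes through to Torch" idea can be sketched in a few lines. This is my own hypothetical rendering, with invented names (ModuleNode, ComposeLink), not Potapov's or OpenCog's actual code: Torch modules sit behind symbolic nodes, and composing the nodes composes the underlying compute graphs, so gradients still flow through autograd.

    # Hypothetical sketch of a "cognitive module network": symbolic nodes
    # wrap Torch modules, and a symbolic composition link maps directly onto
    # composition of the underlying compute graphs.
    import torch
    import torch.nn as nn

    class ModuleNode:
        """Hypergraph node standing for one neural module and its parameters."""
        def __init__(self, name: str, module: nn.Module):
            self.name = name
            self.module = module          # the Torch object lives outside the graph

        def forward(self, x):
            return self.module(x)

    class ComposeLink:
        """Symbolic composition; composing nodes composes compute graphs."""
        def __init__(self, *nodes: ModuleNode):
            self.nodes = nodes
            self.name = " . ".join(n.name for n in reversed(nodes))

        def forward(self, x):
            for node in self.nodes:       # piping between modules, "in the graph"
                x = node.forward(x)
            return x                      # gradients still flow via Torch autograd

    # Example: a reasoning step proposes composing an encoder with a classifier.
    encoder = ModuleNode("encoder", nn.Sequential(nn.Linear(16, 8), nn.ReLU()))
    classifier = ModuleNode("classifier", nn.Linear(8, 2))
    pipeline = ComposeLink(encoder, classifier)

    out = pipeline.forward(torch.randn(4, 16))   # composition passes through to Torch
    print(pipeline.name, out.shape)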
Basically it's about scalability. We can do whatever we want with the current OpenCog, but it's too slow, in a few different senses. Probably the most obvious is that we need to be massively distributed. Right now we can have a knowledge hypergraph in RAM on one machine, and we can have a bunch of hypergraphs share a Postgres data store in a hub-and-spokes architecture, but we can't use the current system across thousands or tens of thousands of machines. For our work with transformer neural nets we have a pretty big server farm with multi-GPU servers; for OpenCog we just can't use scalable infrastructure right now, and it's obvious we need to. Part of our bet is that, just as happened with deep neural nets, where scaling them up the right way suddenly showed what they could do, once we've scaled up the OpenCog infrastructure we're going to see the system able to solve a whole bunch of hard problems it hasn't been able to solve so far. So part of it is just scaling up the knowledge hypergraph, and that mostly means scaling up the pattern matching engine across it, because scaling up nodes and links alone isn't that hard. Getting the kinds of pattern matching we need, which go significantly beyond what current graph databases support, to scale across a distributed knowledge hypergraph is not impossible, but it's work.

The other thing, which is more interesting from an AGI point of view: from the work we've been doing putting neural nets, evolutionary learning, and logic together, we've come to a much subtler understanding of how the knowledge hypergraph needs to work. So we're creating what's effectively a new programming language. In OpenCog, the nodes and links are called Atoms; Atom is the superclass of the Node and Link subclasses, and we informally referred to the dialect of Scheme we've been using to create and manipulate nodes and links as Atomese, because both nodes and links are Atoms. So this is Atomese 2; maybe we'll come up with another name. We're understanding better what we need to do in terms of a type system, and then a family of indexed type systems inside the Atomspace, to better support the integration of neural nets, reasoning, and evolutionary learning. This has led us to dig fairly deep into Idris and Agda and various of these dependently typed programming languages. We're looking at how you do gradual, probabilistic, linear, dependent typing in a reasonably scalable way, because it seems that if you can do that, you can get the multiple AI algorithms we're working with to interoperate scalably and cleanly. It comes down to how much of the operation of these AI algorithms you can pack into the type checker of a gradual probabilistic linear dependently typed language: the more of the AI crunching you can fit into the type checker, the more you can just make sure that type checker is really efficiently implemented. And this ties in with which aspects of the internal state of the AI algorithms get put into the knowledge hypergraph.
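For a feel of what "pattern matching that goes beyond plain edge lookup" means here, below is a toy matcher over a tiny Atomspace-like store: queries are conjunctions of link patterns containing variables, and the matcher returns all consistent bindings. A deliberately simplified sketch, not the OpenCog pattern matcher.

    # Toy hypergraph pattern matching with variables; illustrative only.
    # A tiny "Atomspace": links are (type, target, target, ...) tuples.
    atomspace = [
        ("Inheritance", "cat", "mammal"),
        ("Inheritance", "mammal", "animal"),
        ("Inheritance", "dog", "mammal"),
    ]

    def is_var(x):
        return isinstance(x, str) and x.startswith("$")

    def match_link(pattern, link, bindings):
        """Try to unify one pattern link against one concrete link."""
        if len(pattern) != len(link):
            return None
        b = dict(bindings)
        for p, v in zip(pattern, link):
            if is_var(p):
                if p in b and b[p] != v:
                    return None
                b[p] = v
            elif p != v:
                return None
        return b

    def query(patterns, bindings=None):
        """Find all variable bindings satisfying every pattern (a conjunction)."""
        bindings = bindings or {}
        if not patterns:
            return [bindings]
        results = []
        for link in atomspace:
            b = match_link(patterns[0], link, bindings)
            if b is not None:
                results.extend(query(patterns[1:], b))
        return results

    # "What $x inherits from mammal, and what does mammal inherit from?"
    print(query([("Inheritance", "$x", "mammal"),
                 ("Inheritance", "mammal", "$y")]))
    # -> [{'$x': 'cat', '$y': 'animal'}, {'$x': 'dog', '$y': 'animal'}]

Distributing exactly this kind of conjunctive, variable-binding query across many machines is the scaling problem being described.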
So we're digging very deep into the functional programming literature on that side, as well as into distributed graph databases. And I think this may be how, in the end, hybridizing different algorithms eventually leads you to the point where what you've ended up with bears little resemblance to the algorithms you started with, and you have a meta-algorithm. To start with, OpenCog Hyperon will certainly be a hybrid: we're going to keep using Torch, or whatever better comes along, along with our probabilistic logic network framework, and we can make those work together. But it may be that after a few years of incremental improvement we've modified the neural, evolutionary, and logical parts enough that you just have something you want to call a different, more abstract learning algorithm, because when you cash these things out at the category theory level, the differences between a neural learning algorithm and a logical inference algorithm are much smaller than one would think looking at them at the current implementation level. So part of this is about having an implementation fabric where the underlying commonalities between algorithms from different paradigms are exposed in the language, rather than obscured, which is the current situation.

That itself is really interesting, because I've generally seen it framed as an adversarial relationship, neural learning versus symbolic logic, and what I find cool about this is that you're really fusing the two together and making them play nicely in a very formal way.

And we've already done that in simple ways. For example, you have the nodes and links in the hypergraph, and you can do embedding, embedding a node as a vector. We tried DeepWalk and graph convolutional networks; actually we're doing it with kernel PCA in a certain way now, some much more traditional tools. You can set that up so that, in category theory terms, you have a morphism between the vector algebra on the vector side and the probabilistic logic algebra on the graph side. What's interesting is that if you're trying to do logical inference, you have some premises and a conclusion you want to derive from the premises. You can make a graph embedding of the premises as a vector, make an embedding of the conclusion, and then look at the midpoint, or the intermediate points along the path between those vectors. You map those midpoints back into the knowledge hypergraph and treat them as potential intermediate premises for the logic engine to use in inferring from the premises to the conclusion. So you're using the morphism between graph space and vector space as a method of logical inference control. And that depends on how you do the mapping, because if you just straightforwardly use DeepWalk or a GCN to do the mapping, you don't get a morphism that's accurate. There are a lot of subtle things there.
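The embedding-guided inference control idea can be sketched concretely. This is a hedged illustration in my own notation: the concept names and embedding values are made up, and real systems would use learned embeddings and a real logic engine rather than nearest-neighbor lookup.

    # Sketch of embedding-guided inference control: interpolate between premise
    # and conclusion embeddings, then map intermediate points back to the nearest
    # knowledge-graph concepts as candidate stepping stones for the logic engine.
    import numpy as np

    embeddings = {                      # node -> vector (e.g. from kernel PCA)
        "Socrates": np.array([0.9, 0.1]),
        "human":    np.array([0.7, 0.4]),
        "mammal":   np.array([0.5, 0.6]),
        "mortal":   np.array([0.2, 0.9]),
    }

    def nearest_nodes(point, k=1, exclude=()):
        """Map a point in vector space back to the closest graph nodes."""
        candidates = [(np.linalg.norm(vec - point), name)
                      for name, vec in embeddings.items() if name not in exclude]
        return [name for _, name in sorted(candidates)[:k]]

    def candidate_intermediate_premises(premise, conclusion, steps=3):
        """Suggest intermediate concepts for the chain premise -> conclusion."""
        a, b = embeddings[premise], embeddings[conclusion]
        suggestions = []
        for t in np.linspace(0, 1, steps + 2)[1:-1]:   # interior points only
            point = (1 - t) * a + t * b                # straight-line interpolation
            suggestions.extend(nearest_nodes(point, exclude={premise, conclusion}))
        return suggestions

    # The logic engine would try these as stepping stones,
    # e.g. Socrates -> human -> mortal.
    print(candidate_intermediate_premises("Socrates", "mortal"))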
and i think once we get rid of back propagation, you're going to see it replaced with algorithms that map more naturally between the logic and evolutionary sides as well. this is another subtle point: we've been playing around with infogan and other more subtle neural algorithms. infogan is a form of gan that automatically learns semantic latent variables on the generative network side, and we've gotten it to do transfer learning between clinical trials successfully, which is pretty cool. you train the infogan network to predict which patients in, say, a breast cancer clinical trial are going to be helped by which medication, and the semantic latent variables there can be used for patient segmentation in a way that lets you transfer to a different clinical trial better.

clinical trials still on breast cancer, i assume, right?

yeah, it can still be breast cancer but maybe with different drugs, or a very different patient population, and it might be you can tell something even for different cancers — we're looking at tumor gene expression, and there's actually a lot of similarity between tumor gene expression in cancers in different tissues — but we've been looking at breast cancer mostly because there's more open data about it than about other things. so there we got it to work. but what i was coming to is: if you try infogan on, say, video data, the learning just doesn't converge, and so you give up and conclude that it's a bad architecture. but maybe it's not a bad architecture — maybe back prop is just a bad learning algorithm for it. so i'm suspecting that using floating point evolutionary learning, like cma-es or something, on these really complex neural architectures may work better. and if so, that gives you a lot to go on in cognitive synergy, because once you're using an evolutionary learning algorithm, you can use inference for fitness estimation — there are a lot of openings for other ai tools in your hybrid system to help guide the evolutionary learning, in a way that's more challenging in a back prop framework. so this is something i'm curious about, which is a purely technical point: how many promising neural architectures are being discarded just because they're not suitable for back prop, which is a very good algorithm but has its strengths and weaknesses like everything else.
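To make the "evolutionary learning instead of backprop" idea concrete, here is a stripped-down (mu, lambda) evolution strategy over the weights of a tiny network, written as a sketch rather than full CMA-ES (there is no covariance adaptation). The task, architecture, and hyperparameters are arbitrary choices for illustration; the relevant point is that fitness() is an ordinary function call, so it could just as well be a cheap estimate supplied by a reasoning engine as a full evaluation.

```python
# A stripped-down (mu, lambda) evolution strategy over the weights of a tiny
# neural net, as a stand-in for the CMA-ES-style floating point evolutionary
# learning mentioned above. fitness() is pluggable, which is what opens the
# door to inference-based fitness estimation in a hybrid system.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit XOR with a 2-2-1 network (9 weights including biases).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])


def forward(weights, x):
    w1 = weights[:4].reshape(2, 2)
    b1 = weights[4:6]
    w2 = weights[6:8]
    b2 = weights[8]
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)


def fitness(weights):
    # Negative mean squared error; could be replaced by a cheap estimate.
    preds = np.array([forward(weights, x) for x in X])
    return -np.mean((preds - Y) ** 2)


def evolve(dim=9, pop=40, parents=10, sigma=0.5, generations=200):
    mean = np.zeros(dim)
    for _ in range(generations):
        # Sample a population around the current mean.
        offspring = mean + sigma * rng.standard_normal((pop, dim))
        # Rank by fitness and recombine the best candidates.
        ranked = offspring[np.argsort([-fitness(w) for w in offspring])]
        mean = ranked[:parents].mean(axis=0)
        sigma *= 0.99  # crude step-size decay (CMA-ES adapts this properly)
    return mean


if __name__ == "__main__":
    best = evolve()
    print("fitness of best weights:", round(fitness(best), 4))
```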
well, maybe this ties in, at least thematically, with another kind of contrarian position — at least relative to the way agi, or ai, or intelligence is looked at in the west, where we tend to take a materialistic view of consciousness. i do want to touch on this idea of panpsychism that you've been a fan of, which is almost as controversial in the west as discarding backprop. would you mind elaborating a little bit on panpsychism, and maybe its connection to some of your thinking on agi?

i will elaborate on panpsychism, but i've skipped over the company i'm now the ceo of, which i must tell you about for a few minutes. i've talked a lot about opencog and opencog hyperon, which is something i've been working on a long time. what i've been doing the last few years is leading this project called singularitynet, which is a blockchain-based agi platform — basically an ai platform, not just agi; narrow ai too. it lets you operate a bunch of docker containers, each of which has an ai algorithm satisfying a certain api in it, and these docker containers can coordinate together: they can outsource work to each other, rate each other's reputation, pay each other. someone can query the network, and there's a peer-to-peer mechanism that passes the query along through the network to see if there's anyone who can do what that query asks for. the whole thing works without any centralized controller, so it's a society of minds, as ai pioneer marvin minsky was talking about.

and why is that important, by the way?

this ties back to the more political, industry-structure aspects we were talking about. it's important because, as we move from narrow ai toward agi, we're going to be a lot better off as a species if the emerging agi is not owned or controlled by one actor, because any single actor is corruptible. i'm not very corruptible, but if some thugs come to my house and threaten to murder my children if i don't give them the private keys to my repository, then i probably am corruptible, right? so i would rather not be in that position, or have anyone be in that position. i think we want the early-stage agi, as it evolves, to be more like linux or the internet than like, say, os x or some company's private internal network. and enabling that is challenging, because you're talking about runtime systems that use a lot of ram and processing power and data and network and so forth. so singularitynet is a platform that allows a bunch of different nodes in a distributed ai network to cooperate and operate in a purely decentralized way, and it's out there now. it's not as sophisticated as it will be — we're working with the cardano blockchain right now; it's implemented on top of the ethereum blockchain, and we're moving a bunch of the network to the cardano blockchain, partly because it's faster and cheaper, but partly because we're introducing some more abstract features that cardano supports better, since their smart contracts are in haskell, which is a nice abstract language. we're looking at how one ai in the network describes, at an abstract level, what it does to the other ais — in terms of the resources it uses and the data it takes in and spits out, but also what properties its processing fulfills. so we're introducing an abstract, dependent-type-theory-based description language for ais to describe what they're doing to each other, which is supposed to make it easier for one ai to reason about how to combine other ais to do something. compare that with opencog: in opencog you have this knowledge hypergraph, and multiple ai algorithms are tightly integrated on top of it, because they all have to understand what each other are doing semantically. singularitynet is looser integration: you have multiple different ais in the network, they communicate through a description language that says what each of them is doing and why and what properties they obey, but in the end they can be black boxes — they can have their own knowledge repositories while exposing certain properties. so i think we want both of those layers: you want a society-of-minds layer with multiple ais that are semi-transparent with regard to each other, and then within that you'll have some things doing more narrow functions, like processing certain data types or doing certain kinds of optimization, and some ais in that network serving more general-intelligence-type functions — just as we have a cortex, and then a peripheral nervous system and a cerebellum and so forth.
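Here is a very rough sketch of what such a service-description-and-matching layer could look like, reduced to plain Python data structures. It is an illustration of the concept only, not SingularityNET's actual registry, description language, or on-chain protocol, and the service names, types, and fields are made up.

```python
# A rough sketch of the service-description idea: each agent publishes a small
# typed description of what it consumes and produces plus the properties it
# claims, and a caller matches a query against those descriptions.
# Illustration only, not SingularityNET's real registry or protocol.

from dataclasses import dataclass


@dataclass
class ServiceDescription:
    name: str
    input_type: str        # e.g. "Image", "Text", "GeneExpressionMatrix"
    output_type: str
    properties: frozenset  # claimed properties, e.g. {"deterministic"}
    price_per_call: float  # in some token, settled over a payment channel


REGISTRY = [
    ServiceDescription("face-detector", "Image", "BoundingBoxes",
                       frozenset({"deterministic"}), 0.001),
    ServiceDescription("summarizer", "Text", "Text",
                       frozenset(), 0.002),
    ServiceDescription("expr-clusterer", "GeneExpressionMatrix", "Clusters",
                       frozenset({"deterministic"}), 0.01),
]


def find_services(input_type, output_type, required_properties=frozenset()):
    """Return services whose declared signature and properties satisfy the query."""
    return [
        s for s in REGISTRY
        if s.input_type == input_type
        and s.output_type == output_type
        and required_properties <= s.properties
    ]


if __name__ == "__main__":
    for s in find_services("Image", "BoundingBoxes", frozenset({"deterministic"})):
        print(s.name, s.price_per_call)
```

A dependent-type-based description language would let the declared properties be checked and reasoned about rather than merely string-matched, which is the direction described above.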
so in that landscape, opencog is intended to power the agents that run in the singularitynet network doing the most abstract cognition, but we want a lot of other things in there complementing them.

i was going to ask how the coherence then emerges from singularitynet, and it sounds like opencog gives you that coherence, that high-level reasoning, and then outsources tasks through singularitynet, through the blockchain, to other areas.

yeah, and you can deploy opencog through singularitynet, so you can have multiple different opencog agents running in singularitynet. but from a pure singularitynet point of view, opencog doesn't matter — people can deploy whatever they want in there. from the view of why i personally created it in the first place, it's partly because you want a decentralized, open way for your opencog systems to cooperate with a bunch of other things. singularitynet is run by a non-profit foundation, somewhat like the ethereum foundation, and we've spun off a for-profit company called trueagi which is working on building systems using the opencog hyperon framework. so it's sort of like a linux and red hat thing: opencog hyperon is open, but just as red hat commercialized stuff on top of linux, trueagi is working toward commercializing systems built using opencog hyperon and singularitynet. so yeah, there are a lot of layers there.

this actually ties in with your question about panpsychism and consciousness in some indirect ways. part of the idea underlying panpsychism — the philosophical premise that everything is conscious in its own way — is that, just as in george orwell's animal farm all animals are equal but some animals are more equal than others, in panpsychism everything is conscious but some things may be more conscious than others, or differently conscious than others. from that point of view it can seem ridiculous to say that this coffee cup is conscious, but if you dig into quantum field theory and quantum information theory at a certain level, all these wave functions are interacting with the wave functions outside the system, they're processing information all the time, and they're incorporating that into some aspects of global coherent state as well as local state. if you try to boil down consciousness to information processing, or distributed coherent awareness, it becomes hard to argue that those processes are absolutely not there in some physical object, although you can certainly argue they're there to a greater degree in the human brain or a mouse's brain. but if you're talking about a philosophical principle like "is there a conscious versus unconscious dividing line," it's not clear in what sense that makes sense. certainly you can speak about abstract reflective consciousness, like self-modeling at a cognitive level, and we are doing that in a way that this cup is not.
so there are some aspects of the natural-language term "consciousness" that humans have and a coffee cup doesn't have. what becomes subtle in thinking about consciousness is what chalmers called the hard problem of consciousness. we have various empirical properties we can talk about — can you model your mind in a practical sense and answer questions about what you're doing, are you exchanging information with your environment and registering that information exchange in your global state — all these empirical properties associated with consciousness. then you have what are called qualia, like the experience of existing. what many people do is correlate the experience of existing with reflexive self-modeling, which the human brain clearly does in a way that a coffee cup doesn't. i think the key aspect differentiating panpsychism from the more common view of consciousness in the modern west is that, as a panpsychist, you tend to think the basic qualia — the basic experience of "i am" — is not uniquely associated with reflective, deliberative, self-modeling-type consciousness, but rather with the more basic, information-exchange-type consciousness that is immanent in every physical system. that's not incredibly relevant to the everyday ai work i'm doing now. it will become relevant when you start building femtoscale quantum computers, or maybe even simpler quantum computers, and it will certainly become relevant when you start doing brain-computer interfacing. then you can ask yourself questions like: this computer that i've wired into my head — do i feel it there on the other end in the same way i'd feel another human brain wired into my head, or does it feel like what i get when i wire a coffee cup into my head? because i'm guessing that if i wire a coffee cup into my brain, or wi-fi it, i'm not going to feel that much of what it is to be a coffee cup, and i guess that if i wire my brain into yours and increase the bandwidth, i will feel a lot of what it is to be you, which will be weird, right? what if i wire my brain into version 3.0 of our grace elder care robot from awakening health? will i feel what it is to be an elder care robot, will it feel something like what it is to be a human, or will it feel like what it is to be a coffee cup? so i think the rubber will hit the road with this stuff. and it gets very interesting in terms of something like singularitynet as a decentralized society of minds, or even thinking about human society and the global brain of computing and communication and human intelligence cloaking the earth right now — one could argue the real intelligence is in human society and culture, and we're all just neurons in the global brain, responding to what it sends us on the internet. but what kind of consciousness or experience does the global brain of the earth have, or would a decentralized singularitynet society of minds have? in a panpsychist view you might say, well, an opencog system has a focus of attention and a working memory, so its conscious experience would be a little more like a human being's, though quite different because it doesn't grow up with a single body that it's uniquely associated with.
something like a decentralized singularitynet network might have its own general intelligence in a way that exceeds an opencog system or a human, but its conscious experience would just be very, very different, because it's not centered on a single knowledge base, let alone a single body. this gets back to your first question of what intelligence is, because our whole way of conceptualizing intelligence is overfitted to organisms like us — we're here to control a body and get food and sex and status and all the things that we do. an opencog system, even though it's very mathematical, is ultimately built on the metaphor of human cognitive science: you've got perception, action, long-term memory, working memory, because that's what we have to go on. but is that kind of intelligence greater in a fundamental sense than what would emerge in a singularitynet network, which might develop self-organized emergent structures we can't even understand? this brings us back to weaver's notion of open-ended intelligence. singularitynet, you could say, is more of an open-ended intelligence approach toward agi: everyone around the world puts their stuff in there, has it describe what it's doing using the abstract description language; if an agent is going to flourish, it should get a lot of its processing done by outsourcing to others in the network rather than being solipsistic; it's trying to make money by providing services, hopefully providing its creator with some income — helping with income inequality if they're in a developing country. but what does this whole thing develop into? no one's scripting it, and that's also very cool. if the breakthrough to agi happened in that decentralized way, it would be really awesome and fascinating. the opencog way is a little more determinate: we're architecting a system modeled on human cognitive science, and we're going to use it to control these elder care robots, which even have human-like bodies, through a partnership of trueagi and the robot company. so that's a little more determinate, and it may end up being a little of each — we don't know how this is going to evolve, because the robots are going to draw on singularitynet ai, including opencog plus other stuff, on the back end, and from the robot's point of view it will just draw on whatever works best for achieving its functions. we'll see to what extent that's opencog versus some incomprehensible, self-assembled conglomeration of a thousand agents.

well, it's a beautiful and exciting vision, and a very open-ended one too, which is kind of interesting. i guess we'll all have to wait and see how this develops.

yeah, the vision is one thing — making it work is what i spend most of my time on, and it's really astoundingly difficult. but it's amazing the tooling we have available now. you couldn't have done this when i started my career — you could have written it down, but each step would have been so slow in terms of compute time and human time, using the crappy tools available at the time. so this is all very, very hard, but it's amazing that we can even make progress on it now. it's certainly a fun time to be doing ai,
as you and all your listeners know.

absolutely, and it's a great time to be learning about things like this too — all the different approaches to solving this problem. thanks a lot for sharing yours with us. do you have any places you'd recommend people go if they want to contribute to opencog or singularitynet?

yeah, absolutely. probably our best developed media property is singularitynet, so if you go to singularitynet.io you can look at the singularitynet blog and the singularitynet youtube channel, which is linked from singularitynet.io — that will lead you to everything. regarding opencog in particular, there's an opencog wiki site: go to opencog.org, then go to the wiki, and there's a page of stuff on opencog hyperon, which is our new system. the current opencog is in github, and it's all open; while i've been thinking a lot about the new version, the current version is what we're using inside these nursing assistant robots now, so it does something. and there are two online conferences that i organized earlier this year which might be interesting: one was the opencogcon online conference, which is just about opencog, and then every year since 2006 i've led the artificial general intelligence conference, which had been face-to-face until now — this year it was the online agi-20 conference. you can find all the videos and papers from that online as well: some of my stuff, but also other things from the modern agi community that i haven't had time to go into here.

well, great, thanks so much — really appreciated, and something i'm sure a lot of people will want to check out. ben, thanks so much for making the time.

yeah, thanks for the interview, it was good fun. there's always more to cover than you possibly can, but it was some fun conversation.
Info
Channel: Towards Data Science
Views: 8,127
Rating: 4.8552632 out of 5
Id: -VKF1lJhspg
Length: 81min 39sec (4899 seconds)
Published: Wed Dec 09 2020