Possible Paths to Artificial General Intelligence

Captions
JOSH: All right, thank you very much, everyone. I'm Josh, and the other panelists have already been introduced. I'm going to start with just very brief introductory remarks, and then we're each going to give short opening statements. I'm not sure how much debate there's going to be, because I think there's actually remarkable agreement among many of the panelists, in a broad sense, about the kind of path we're on. But what I'd like you to focus on in your introductory statements is: what path? Of the five of us up here (I'm sure you know something about each of us), four are actually working on AI as one of our main things; Nick has been more of a thinker about a broad range of paths and their implications. So I want to start by asking the four people here, and I'll go at the end and say a little bit about my view: what path to AGI are you on? Yoshua actually just gave a really great explanation of his, so he can just summarize it. What path are you on, meaning: what is your goal and your roadmap, why did you choose that path, and how far along it do you think you, or your part of the community, are? Then we'll dig into that a little and think about next steps. Irina, do you want to start?

IRINA: I think we all agree that there are multiple viable paths to AGI, and all of them come with certain trade-offs: how long it will take us to get to AGI if we take a particular path, and what the resulting AGI might look like in terms of things like memory or computational requirements. So when I was making a choice of which path to take, the question I asked myself was: what is it that I want AGI to actually look like? For me, I want AGI to do as well as humans on natural tasks in our natural world, and so it's natural for me to look to biology for inspiration. In our world we have numerous examples of biological intelligence, each one evolved to perform optimally in the particular niche of the universe that its body allows it to explore. We know that these intelligences are very diverse, ranging from the distributed nervous system of the octopus to the centralized brains of the vertebrates, but they also share quite a lot of similarities. For example, if you look at the human brain, the cerebellum is shared with ancient ancestors like fish, and parts of the limbic system are shared with lizards. And if you look at the different areas of the cortex even within a single brain, it's widely agreed that they share a lot of computational principles and architecture, even though they may be doing very different jobs, like visual cortex versus auditory cortex. So it seems that there are some core computational principles, shared between many examples of existing biological intelligence, that we still don't really understand. For me, this is the path: trying to figure out what these principles are. I'm not talking about trying to replicate the minute details of how, say, spiking dynamics work; what I care about are the core computational principles. In terms of Marr's levels of description, it's more the algorithmic level. Another thing that I think is really important is that intelligence does not exist in a vacuum. If we had a completely random world, then we would not have intelligence; there is something about the structure of our universe, and of our world, that enables intelligence to exist.
So for me it's really important to examine this embodied interaction between the brain and the world, to understand where intelligence comes from. That is my approach: looking both at the structure of the world and at the core computational principles of the brain, to figure out which principles we can then port to an artificial substrate. I also think this path is quite appealing because even if we don't reach AGI, hopefully along the way we discover quite a lot of interesting things about ourselves.

JOSH: Great. Go ahead.

YI: Hi, my name is Yi, from the Chinese Academy of Sciences, and I'm working on a brain-inspired approach to artificial general intelligence. So I guess our paths are quite similar, but I do see a difference. Irina just mentioned the question of what we really need for the realization of artificial general intelligence: do we need spiking patterns of activity, things like that? To my mind, the science is in the details, so my approach leans more on the brain's details, on spiking neural networks as the substrate for realizing artificial general intelligence. When we talk to people, they say: you're working on brain-inspired AI, so that means you're a connectionist. And I answer that question like this: as a connectionist, do I reject symbolic AI? People often say yes, we don't really need symbolic AI. But my understanding is that for brain-inspired AI, what we need is not only the connectome, not only the computational model and the self-organization of the network, but also how the neural dynamics give rise to symbolic reasoning. This is truly important, so in that sense I wouldn't say brain-inspired AI is connectionist. And I'm not the first one to say this. In 1955, at the Western Joint Computer Conference, a year before the first AI conference, Walter Pitts said that emulating the nervous system, and emulating the hierarchy of final causes that is traditionally called the mind, will eventually come to the same thing; there's no doubt. So what I expect for brain-inspired AI is synergy, a collective effort from both connectionist and symbolic AI, but grounded more in the details. All my work is based on modeling single neurons of different types: how they give rise to different kinds of functions, and how these cognitive functions self-organize together to realize more complex multitasking and even AGI. At a more specific level of detail, we're working on self-recognition, different levels of self-consciousness, and theory of mind, because I think this is a fundamental issue for safe AI. I cannot expect an AGI system without a sense of self to really understand the world and interact with us. So the building blocks should start from the very early stages of self-consciousness and go all the way up to theory of mind, cognitive empathy, and also emotional empathy, and then we could have really safe AI.
JOSH: Okay, great. I'll just say for myself: again, I think we're all inspired by cognitive science and neuroscience. I come out of cognitive science, and my fundamental goal is to reverse-engineer intelligence in the human mind. That means also the brain, but I'm approaching it more from the functional, behavioral, and, I'd say, software level of algorithms, as opposed to starting from the network structures. What excites me right now, the path I think I'm on, is in some ways AI's oldest dream about how to get to human-level intelligence, going back to Turing's paper in which he proposed the Turing test: the dream of being able to build a machine that grows into intelligence the way a human being does, that starts like a baby and learns like a child. Turing famously proposed that as the only idea he had about how you could do it, and it has been promoted, from those early AI conferences on, by Marvin Minsky, John McCarthy, in some sense all of the great AI thought leaders from the beginning. And Yoshua and I agree very much here; we show the same videos, asking what is that basic common sense that even young kids have when they're playing with blocks or cups. I think there's a good reason for seeing this as a scaling route to AI: it's actually the only known scaling route in the known universe that we know works. A human child is the only learning system that reliably, reproducibly, demonstrably grows into adult human-level intelligence, starting from infancy. So if we could reverse-engineer how that works, that would be a route, and it's also a route that we can measure. There are milestones and metrics, and there's an entire field of cognitive development that over the last couple of decades has made great strides toward actually laying out what that roadmap might look like. I'm always asking myself: if that's what we're working on, and all these great people have thought about it, why hasn't it worked before, and why might it work now? I think we're at a very important time, in which the fields of engineering and of cognitive development have both matured to the point where they can talk to each other: where engineering, just as Yoshua said, can now offer useful tools to the people trying to reverse-engineer how babies' minds work, and where developmental psychology has enough qualitative and even quantitative insight that it can actually start to guide those goals. In the work I've been doing as a cognitive scientist for the last ten years or so, that's exactly what we've been doing: building computational models of infant perception, infant common sense, and the learning mechanisms that take that forward. What I'm especially excited about right now, what I see as likely a moonshot stage for us, is roughly the common sense of an eighteen-month-old: the common sense that is the height of where we get before language comes into the picture, but which language builds on. I think that's where we have enough understanding on the science side, and enough maturity in our engineering toolkit, that we can really get there in a practical way. And we're just at the beginning of that, because then you have to ask what happens when language comes into the picture, and culture. I completely agree with Yoshua about the essentiality of culture and of the cognitive mechanisms of language, and of empathy and the other kinds of things that support it, as getting us to the real singularity for human intelligence. But I think we understand enough about that first step that it's extremely exciting to be working on it as a computational cognitive scientist and as an AI engineer.
Okay, so that's the range of views, which in some sense are all biologically inspired in some way. Of course there are other possible paths to AGI, and Nick has had a lot to say about the different, let's say, factors. Do you want to comment on how you see the panelists' thinking, or on other possible paths?

NICK: I agree this looks like kind of the most promising path. But it may be fun just to map things out. If one indexes, say, on Yoshua's intuitions and vision, then one could diverge from that in different directions and get different versions of the ambition. One direction would be to say that we need even more inspiration from biology: that we really need to capture the nitty-gritty, the exact dynamics of spike trains and dendritic compartments and so forth. That could be a vision somebody would have: we should focus more on neuroscience than AI. And at the extreme of that, you have people who think we will first get machine intelligence not just by being inspired by the brain but by replicating the whole thing, with some kind of whole brain emulation, which I agree looks much less likely; but if one maps out the space of possibilities championed by at least some, you know, non-stupid people, then that exists on the map. One could go in another direction and say: well, we can abstract more from the brain part and look more at the mind part. Maybe good old-fashioned AI looks bad, but there are other ways in which you could go up a level of abstraction. You could think that some kind of evolutionary algorithm, some algorithmic search with enough compute, could be the way we get it: do away with all these human scientists trying to figure out the mechanisms, and just hope we find it in a big enough search space. Or maybe you find the right kind of idealization, something different, something more principled, some kind of graphs, something put together by repeated application of simple functions that you build up, such that the deep learning approach comes to look like a kind of dirty way of approximating something that has a cleaner form. And I guess if you take a more circuitous route, you could say: we are right now kind of incapable, and maybe we should first enhance humans, either collectively, by improving the collective intelligence of the scientific community, or individually, with biological enhancement, and let that be what faces machine intelligence. So that maps out a little space, and one can maybe place different people's approaches in it. My impression is that you, Josh, are somewhere near Yoshua, but with a bit more theoretical understanding of the modules and of the specific things that human infants do: more cognitive science.

JOSH: Yeah, I'd say something like that. So maybe we could talk about our different levels of abstraction; I think that's an interesting place to start.
YI: I want to comment on Nick's point about evolutionary computation. My view is that intelligence is not only about learning on the timescale of milliseconds to days, but also about decades of development and billions of years of evolution. This is why, as Max's book Life 3.0 puts it, with AI we get the opportunity to rewire both software and hardware. Let me give one concrete example from the biological brain: we have excitatory and inhibitory neurons. Why not fifty-fifty? Half and half is actually not what you find in the brain; in the sensory cortex of the mammalian brain, roughly fifteen to twenty percent of the neurons are inhibitory. And when I was developing a spiking neural network for MNIST recognition, I found that around fifteen percent inhibitory neurons was actually the optimum for MNIST recognition. That means that through billions of years of evolution, what biology found is an optimization procedure that helps find the optimum for solving problems. So now we have the power of learning, of development, and also of evolution, so that we can rewire the whole system, the whole system design: not only the parameters of the connections between the building blocks, but also the building blocks themselves and how they connect to each other.
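(As a concrete aside: the kind of sign-constrained layer Yi describes can be sketched in a few lines. This is an illustrative toy, not code from his group; the 15% split, the PyTorch formulation, and the magnitude/sign parameterization are all assumptions made for the example.)

```python
import torch
import torch.nn as nn

class EILinear(nn.Module):
    """Linear layer whose input units are split into excitatory and
    inhibitory populations (Dale's law): excitatory units contribute
    only non-negative weights, inhibitory units only non-positive."""
    def __init__(self, in_features, out_features, inhibitory_frac=0.15):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        n_inh = int(in_features * inhibitory_frac)
        # Sign mask: -1 for the inhibitory columns, +1 for the excitatory ones.
        sign = torch.ones(in_features)
        sign[:n_inh] = -1.0
        self.register_buffer("sign", sign)

    def forward(self, x):
        # Magnitudes are learned; signs are fixed by the E/I assignment.
        w = torch.abs(self.raw) * self.sign
        return x @ w.t()

# A minimal MNIST-style classifier with an 85/15 E/I hidden layer.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    EILinear(256, 10, inhibitory_frac=0.15),
)
x = torch.randn(32, 784)   # stand-in for a batch of flattened MNIST images
print(model(x).shape)      # torch.Size([32, 10])
```

Fixing the signs and learning only the magnitudes is one simple way to respect an excitatory/inhibitory split during ordinary gradient training, which lets the E/I ratio be treated as a design parameter of the kind Yi says evolution tuned.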
JOSH: This is very important. The scientific approach looks at this and says: there are adaptive processes, you could call them all learning in a sense, across all these timescales, from milliseconds to millions of years of evolution, with cultural evolution also in the middle. And I think there might be important differences of opinion there. You could say there's another path, maybe the one you referred to a little bit, that is very well represented in some parts of DeepMind or in OpenAI, for example, and it's often put out there as what people may be scared about or excited about: a kind of massively distributed deep RL as a synthetic model of evolution. We run this evolutionary process with enough games, whether it's Dota or Capture the Flag or the entire OpenAI Gym universe. I think that's an interesting thesis, that that toolkit might in some sense give you a kind of evolution. But then there's the question of whether it can do enough of the structural construction and reconstruction that biological evolution has done, which you're pointing to.

YOSHUA: Let me add a few things. First of all, regarding the different paths that were mentioned: I think some are safer than others. This conference is a lot about safety, and in particular I believe that if we more or less stick to human-like intelligence, we end up with something much safer than the other options, because we roughly know the limitations of human minds, and they're not infinite, even if, as I said, we build bigger minds. One thing connected to this, which I think is also a good engineering reason for going that way, is connected to what Irina was saying about the shared principles between intelligence in different animals. Presumably those principles, if we understood them, would allow us to build intelligent machines. You can think of those principles as a small set of laws, like the laws of physics. That would be amazing; I think it's an amazing hypothesis. If it were true that we could understand intelligence with such a small set of principles, it would have amazing consequences. It would of course be aesthetically pleasing, but it would also make it much easier to build smarter machines, which could be a good thing or a bad thing, and it would mostly mean building a kind of intelligence very close to human intelligence. The other thing I want to mention is about the levels of abstraction. The reason I chose to do neural nets in the mid-eighties was that I felt they were at the right level of abstraction, between low-level notions of how particular neurons actually compute and high-level cognition, in the sense that with the neural-net toolbox, and especially the one we have today, we can represent high-level cognitive computation and we can also represent low-level things. One of the things I've been working on is how the brain could do backprop: low-level computation that gives rise to credit assignment. All of these things can be talked about in that language, so that's very convenient. One more thing about those principles: there's a lot of effort in neuroscience, and maybe to some extent in cognitive science, to describe human intelligence through all of the things that we can do, and in the case of neuroscience, which neurons are doing what: mapping out a huge encyclopedia of every function the brain computes, where it's done, and how things are connected to each other. I think this is hopeless; it would take forever to understand the brain that way. If instead we can figure out a small set of principles that can give rise to this computation, then of course it would be much simpler, much faster, to understand the brain. And principle number one is learning: you don't need to describe the function computed by the brain, which would take a huge number of bits; you only need to describe the learning procedure, which could be a very small number of bits, small enough to fit into your genome.
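(Yoshua's bits argument can be made concrete with a toy. The learning procedure below is a handful of lines, a "small number of bits", while the function it ends up computing is specified by a thousand learned parameters. The perceptron rule and the synthetic task are stand-ins chosen purely for the illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# The "genome": a learning procedure describable in a few lines.
def perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:   # misclassified -> nudge the weights
                w += lr * yi * xi
    return w

# A target function the learner is never told explicitly.
d = 1000
w_true = rng.normal(size=d)
X = rng.normal(size=(5000, d))
y = np.sign(X @ w_true)

w = perceptron(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(f"train accuracy: {acc:.3f} | learned parameters: {w.size}")
```

The few lines of update rule are cheap to specify, yet after exposure to data they pin down a classifier over a thousand-dimensional space: a miniature version of describing the learner rather than the learned function.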
IRINA: I see the pre-linguistic stage as really important, and of course most animals are there. I believe the reason children have this amazing vocabulary spurt is that they have already pre-built the concepts, and the fast word learning is just fast attachment of word labels to these pre-existing concepts. If we take AI along non-human routes, we have a danger of the AI learning different kinds of concepts than ours, and then it might actually be harder for it to learn our symbolic labels for those concepts, which also means that humans might struggle to communicate with the AI. So we might run into the interpretability constraints that come from that.

JOSH: I think that's a great point. If we want AI, whatever form is really going on inside, to live in a human world, to live safely, effectively, and valuably, to be an entity that humans can talk to, teach, and trust the way we do with other people, even people we haven't met before, then in some sense there has to be something like a human-like API. Just as we have a certain API to each other, which had better be justified by some common mechanisms at some level, I think it's a route not only to safety but also to some of the other dimensions of benefit. Now, maybe to try to create some controversy: how much time do we have? Okay, great. So I can throw out something controversial, but let me first respond to what Nick said, and then you can generate controversy around that. Coming as a cognitive scientist, as opposed to a neuroscientist, there are a few stylistic differences, and they're related to our orientation toward classical AI. I think Yoshua thinks of classical AI as having had certain goals but the wrong mechanisms. My feeling, and I think the collective insight of the field of cognitive science, is that symbols actually are important. There really are symbols in the brain; there really are symbolic languages of abstraction, symbolic systems in the brain. It's remarkable how that works, and we don't know how it works, but there's a lot of success we can have with the combination of several good ideas. So I wouldn't say neural networks are the universal representation layer; I would say programs are the universal representation layer. Those programs could be differentiable programs of the sort that a deep neural network is, but they could also be Lisp expressions, or they could be grammars. To me, what's remarkable about the maturing engineering toolkit is our understanding of all the different possibilities of programs as models, and of the different ways to learn programs. If your models in general are some kind of program, then learning has to be something like programming. One way to program is to write code by hand. Another, as we've now found, is to take a big differentiable architecture with lots and lots of parameters, set up the right data set, and, if it's end-to-end differentiable, effectively tune and shape the program using stochastic gradient descent and similar methods. But there are many others, where there's no programmer who programs it exactly. One of the maybe most exciting things in machine learning, which has been somewhat independent of deep learning but is now also drawing on it, is what's sometimes called machine learning meets programming languages, or programs that write programs. These are people coming at it with a different technical toolkit, from the design of programming languages and from program synthesis: code that writes code. It's interestingly also somewhat related to older ideas of genetic algorithms and genetic programming, but it's different from that; it also draws on some Bayesian tools, but it's different from that; and it draws on techniques from programming-language design and even from compilers. So I see that area as a hub.

QUESTION: Does that match with the evolution of intelligence in animals and humans?

JOSH: I think it matches very well. DNA is code, and a body and a brain is a machine that builds itself from some code. And one of the things it does is write new programming languages, both individually and collectively.
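(The two ways of "programming" that Josh contrasts, writing code by hand versus shaping a differentiable architecture with stochastic gradient descent, fit in a short sketch. Everything here, from the target function to the architecture, is an arbitrary illustrative choice, not anything from the panel.)

```python
import torch
import torch.nn as nn

# Way 1: program the function by hand.
def f_handwritten(x):
    return torch.sin(3.0 * x)

# Way 2: "program" the same function by tuning a differentiable
# architecture with stochastic gradient descent.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

for step in range(3000):
    x = torch.rand(128, 1) * 2 - 1                 # samples in [-1, 1]
    loss = ((net(x) - f_handwritten(x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final fit loss: {loss.item():.4f}")
```

Both artifacts compute roughly the same mapping; they differ in how the program was arrived at, which is exactly the sense in which learning is "something like programming."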
So I think that general idea is actually a very powerful one, and there are many different versions of it, corresponding to the different scales and levels of analysis and the different scales of adaptive process: learning, biological evolution, cultural evolution. That's a very big picture, but I think that's one key difference.

IRINA: So I totally agree that programs are incredibly important, especially for human-level intelligence, but I wonder what you think about this idea: that there are different levels of intelligence, with abstract reasoning, which programs map well onto, at the top, and below that more grounded, perception-based abstractions. There was an interesting experiment by Luria, who traveled around Russia at the beginning of the twentieth century asking people abstract reasoning questions, things like: giraffes don't live in big cities; Moscow is a big city; could there be a giraffe there? And people could not answer these abstract questions; they based their answers only on their personal experience: "I've never seen a giraffe, I can't answer this question." It seems this kind of reasoning wasn't present in people who didn't have formal education, and that formal education is what gives people that ability.

JOSH: That's one kind of program, the conscious, symbolic, propositional programs, but there are lots of other programs in the brain, and I think we want to think about how to make perception really work. In my view, and I'm all for grounded language as well, a broad view of programs that embraces the strengths of deep learning, together with the idea of generative models and symbolic abstraction, is also the key to perception and to grounded language. But that's my view. Another interesting difference, though, is the relation of the engineering enterprise to the scientific enterprise. Part of that is how much scientific detail matters: does the connectome matter, does the spiking matter, and so on. But the other part is the relation between the experimental world and the theoretical and engineering toolkit. A nice thing about cognitive science, as opposed to the neuro approach, is that I can do those experiments in my lab or on Mechanical Turk; I can work actively with people studying infants or young children. And to me that's important. When I say I want to build a machine that starts like a baby and learns like a child, it's not simply my intuition about how children think, which, you know, Marvin Minsky, brilliant as he was, was either stating his own intuition or Piaget's intuition, and Piaget was more or less a contemporary of Turing. A lot has been discovered in the last couple of decades. Babies' brains, it turns out, don't start as blank slates; they start with a lot of structure, and learning is a lot more sophisticated, a lot more diverse, and a lot more symbolic than we used to think. And that's the science.

YOSHUA: I have the impression that what you're proposing is to use the formalism that seems naturally adapted to system-2 computation in order to explain system-1 computation.
JOSH: No, and I don't want to miscommunicate here. When I say programs, I don't simply mean Lisp programs; I mean a broad thing which includes deep networks. These days in my group we do a lot of work with deep networks, and one of the things we do is use deep networks to guide the construction of the symbolic layer, which I think is very exciting. I want to be controversial in the right ways, but I don't want to miscommunicate: I'm not saying we should go back to symbolic AI. I'm saying they had a good idea, you guys had a good idea, Judea Pearl had a good idea, and we have a maturing toolkit that allows us to see how these several good ideas fit together and can actually extend each other.
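(A toy version of "deep networks guiding the construction of the symbolic layer": enumerative program synthesis over a small DSL, where a scoring function decides which primitives to try first. In a real system the scorer would be a trained network; here a crude stub stands in, and the whole DSL is invented for the example.)

```python
import itertools

# Toy DSL of integer -> integer primitives.
PRIMS = {"inc": lambda v: v + 1, "dbl": lambda v: v * 2,
         "neg": lambda v: -v,    "sq":  lambda v: v * v}

def run(program, v):
    for name in program:
        v = PRIMS[name](v)
    return v

def guided_synthesis(examples, score, max_len=3):
    """Enumerate programs, ordering candidates by a learned score.
    `score(name, examples)` would normally come from a trained network;
    here any callable returning a float works."""
    ranked = sorted(PRIMS, key=lambda n: -score(n, examples))
    for length in range(1, max_len + 1):
        for prog in itertools.product(ranked, repeat=length):
            if all(run(prog, x) == y for x, y in examples):
                return prog
    return None

# Placeholder "neural" scorer: in a real system, a network trained to map
# input/output examples to promising primitives; a heuristic stands in.
def stub_score(name, examples):
    return 1.0 if name == "dbl" else 0.0

# Find a program mapping x to 2 * (x + 1).
print(guided_synthesis([(1, 4), (2, 6), (5, 12)], stub_score))
# -> ('inc', 'dbl')
```

The search stays fully symbolic, so the result is an inspectable program; the network's only job is to make the combinatorial search tractable by proposing where to look first.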
NICK: I think the symbolic is crucial, but it comes toward the end rather than the beginning. You have to build up to it; you have to sort of earn your symbols, by first sorting things out with maybe non-symbolic representations. Good old-fashioned AI is a little bit like being in some very low-tech tribe, seeing a jet plane shooting across the sky, and saying: we want to build that, we want this high-tech civilization. So they put together a wooden replica. It's never going to fly. The way to actually get there is to first invent simpler technologies, and then eventually factories and electricity, and then you can do the whole thing. But it's still helpful, when doing this upstream work, to know what you're aiming toward, because it gives you constraints on the kinds of grounded representations you create, if you know that in the end they have to be combinatorial.

JOSH: And that's what I'm proposing: that the needs of system-2 computation can be used to help structure system 1.

MAX: I'd love to hear how you on the panel place yourselves on the spectrum of how brain-inspired you want your path to be, from very little to very much. And I'd love it if you could also place yourselves on a different axis: how intelligible are you guessing AGI is going to be? One end of this spectrum would be to say it's just going to be ML all the way, plus a bunch of clever new tweaks, but a completely opaque black box; we'll have very little clue how it actually works. The other is something maybe closer to what Josh is saying, where there's a lot of GOFAI married into it, maybe a lot of Josh's system 2, so that we can understand more.

YOSHUA: There's going to be both. If you look at humans, and at the system-1/system-2 distinction: part of the explanation is going to be hard to express in words, and there's a part of it which is higher-level, which probably gives a good approximation of the kind that humans communicate with language. But that's not going to be the complete explanation, unlike GOFAI, which was trying to explain everything with symbols. To explain our actions, our complicated decisions, there's going to be a part of it that remains difficult to communicate. That's what I believe.

IRINA: My view is that what perception does in biological intelligence is exactly grounding the symbols, and then everything, even system 1, is built on top of that. It's the kind of disentangled representations that perception builds first, and then you can do all sorts of things on top, either model-based planning or model-free RL. So even the model-free part might be more interpretable than a pure black box.

YOSHUA: But some things remain out of reach of interpretability. Think about trying to explain the Go moves of AlphaGo. Maybe, if it had been trained with the constraint of having system-2-type representations at a high level, you might come up with some of the same things you find in a Go book, but that's not enough to completely capture what it has learned.

JOSH: I agree with what you're saying; on the other hand, explainability is a key part of human intelligence. Exactly: I agree the explanations are incomplete, but we shouldn't underweight their importance, especially for teaching a human. When a human player learns Go, the explanations from the Go book or from a master are crucial from the very beginning. And if our goal, in Go or any other game, is to build a machine not that ultimately, with an incredible amount of compute, learns to do this one thing at a superhuman level, but that can learn, with the data complexity, as you said, and the time complexity of a human, so many different things so quickly, then I think explainability is going to be a key part of that, on both the input and the output side. So maybe we don't disagree on that as the endpoint of the path; but in terms of where the paths go in the near term, this is where there is some difference. You talked about earning your symbols. I think in certain ways that are not fully appreciated in the deep learning community, we have already earned our symbols. There are already quite remarkable things we can do with symbols, and many deep learning people are doing it anyway: TensorFlow or PyTorch are symbolic languages that allow individuals to build deep learning systems, and that allow, collectively, the cultural-evolution process of what you might call the deep learning community to innovate so quickly, in early exponential growth curves that you can measure in people's Google Scholar profiles and in their impact on the world. Those are cultural-evolution processes, and of course we don't know how to completely engineer those symbolic capacities in artificial systems, at least not at that scale. But if you look at what's going on right now in other parts of the AI community, we have actually made real progress in machines that write code, and in machines that do probabilistic inference in code. So in the short term, I believe the symbolic is not just a thing we're going to have to earn, something that will somehow emerge out of networks; it is also, right now, on our route to building explainable AI: to understanding the explainable parts as well as the unexplainable, more intuitive parts of human intelligence.
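(One of the directions Josh mentions, probabilistic inference in code, can be illustrated with the smallest possible probabilistic program: a generative model written as ordinary code, conditioned on data by rejection sampling. This is a generic textbook construction, not any specific system he is referring to.)

```python
import random

# A tiny generative "program": flip a coin of unknown bias ten times.
def model():
    bias = random.random()                        # prior: Uniform(0, 1)
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

# Rejection-sampling inference: keep only runs that match the observation.
def infer(n_samples=200_000, observed_heads=8):
    accepted = [bias for bias, flips in (model() for _ in range(n_samples))
                if sum(flips) == observed_heads]
    return sum(accepted) / len(accepted)

print(f"posterior mean bias ~ {infer():.2f}")     # analytically 9/12 = 0.75
```

The model is just code, so arbitrary control flow can appear inside it; inference engines in real probabilistic programming systems replace the brute-force rejection loop with something far more efficient, but the interface is the same.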
YI: Maybe I should echo directly Max's question about how optimistic we are for brain-inspired AGI, with a very simple but, I think, very intuitive example. You asked how optimistic we are about explainability; everyone seems really optimistic. I'm actually not, at least not for the long term, because of examples like this. I have a two-and-a-half-year-old girl. One day she said, "Dad, I'm going to be a bad girl." I said, "What?" "I'm going to punch you." I said, "No, don't hit Dad, I'll get hurt." And she thought about it a little. This is why I am actually bringing theory of mind and cognitive empathy to my robots, so that they can understand that others have minds. But what I'm really worried about is that once we have a brain-inspired cognitive model that develops general intelligence the way a human brain does, developmentally, then one day it may say, just as my daughter did: I'm not going to do this, because I don't want to follow you. Children can do that starting from year one; they have their own thinking and say, "I'm not doing that." So this is why I feel we will still meet some dangerous points. Another example is bias. Humans have bias. So when simulating, or building, human-like, brain-inspired cognition, there are risks and there are limitations. We should study the neuroscience and cognitive science of bias, and how to avoid building that kind of thing in as a building block.

JOSH: Some of those might be inevitable in any kind of inductive system, and others might be avoidable, and we have to study them to understand which.

AUDIENCE MEMBER: First off, great panel, and I like the discussion about the analogies to, and learning from, humans and human minds, and all the advantages of that. I want to give some counterarguments and have you react to them. Let me first stipulate that I understand there is this existence proof, and there is some benefit from the biology; as Irina said, it makes it easier to communicate, and Yoshua mentioned that maybe it's less risky because it's less likely to be bizarre. But from an economist's perspective, there are some advantages to being very different. First, one of the reasons that bulldozers or jets are so valuable is that they have very different ways of generating power and so forth; that's why they're much more valuable than a biological version of those machines would be. Secondly, maybe a little more subtly: if a machine is a close substitute for a human, then it makes human labor much less valuable; whereas if it's very different and complementary, with superhuman capabilities on other dimensions but not on some of the things humans can do, it's more likely to lead to an equitable distribution of income and wealth, and more likely to keep humans a valuable part of the ecosystem. So it would be interesting to have you react to those benefits and costs of systems that are very different from us, versus systems that closely mimic us and gradually become close substitutes.

YOSHUA: I can mention the analogy of birds and planes. It's the same underlying principles of aerodynamics, but of course planes look very different, and you might argue that planes are better in some ways, but actually they're just optimized along a different axis. Birds have constraints, like energy consumption, that are not as present for planes, and we don't have planes that are as efficient at flying in terms of energy consumption. So I think what's going to happen is that we'll discover these principles, we'll make progress, and these will of course be inspired by animal and human intelligence, but then we'll want to apply that intelligence to other things, with different constraints than the ones human bodies have.
And then I think it will happen as you say: we will move away, maybe even before we get AGI (it doesn't have to wait for AGI), in directions that look different in nature, because we push it into applications where the constraints are different. But I don't think mimicking humans is what most of, say, the deep learning community is doing. There's a big difference between mimicking humans, in the details of what we do, of our failures, of our neurons, and taking inspiration and trying to keep that inspiration at an abstract level corresponding to where we think the right principles are. Maybe Josh and I differ on where that level is, and that's okay; we need to explore all the routes. But that's not mimicking; I think the notion of mimicking just doesn't apply here.

JOSH: Can I jump in, with an opinion and some moderation? I agree with the points you're making, both the value of alternative paths and the risk of a certain path of mimicking humans. Another way to put the point is that I don't think any of us are interested in, or working on, building digital humans. But there might be some capacities of humans where we want to achieve human-level goals with human-like means: for example, common-sense abilities, or certain kinds of social cognition that allow us to learn from each other. Whatever kind of AGI we have, if we want to be able to interface with it, to talk to it and trust it, that doesn't mean it has to be a full person. And there's an interesting point: if you're actually going to build agents, your point about having a self is essential, but we don't necessarily want to build AGIs that have a full human-like self. We might want them to have human-like common sense but not a full self; we might not want them to have some of the same fundamental motivations and will that we have. Understanding, from an engineering point of view, how that actually works in humans allows us to pick apart the components and engineer the ones we want, and not the ones we don't. That's one vision. To go back to the popular bird-versus-airplane analogy: people say we should build airplanes, not birds, but some of the things we're building are also more like robot birds, agents that might lead the flock in the right direction. If there are birds flying around, and there are robots flying around with the other birds, the robots shouldn't be like airplanes, because when an airplane meets the flock, both can die; we've seen that happen too. I like to put this in the self-driving-car analogy. We could get autonomy right now if we just passed global laws that banned all human drivers and pedestrians and bicyclists from the streets and said everybody has to be in a self-driving car; it would be a lot easier, and safer in certain senses, to build autonomous vehicles. But because the route we seem to be on is one in which autonomous vehicles have to live in a human world, where there are humans driving and walking and bicycling and leaving stuff on the road and whatever, they are going to have to have some understanding of human intentions (and I think most people now agree this is a key bottleneck in autonomous vehicles), so that they can be safe and valuable in interacting with the world. So that's the kind of thing that at least I'm talking about, that can be both economically valuable and safe.
AUDIENCE MEMBER: I've noticed in quite a few of the conferences I've been to lately that the need to bridge the symbolic and the neural-network-based machine learning approaches is getting emphasized as one of the tasks, and also that there's a recognition that, at least in the learning of children, it's much more robustly embodied. In us humans, evolution forged creatures in which these capacities are deeply integrated, in ourselves and in our adaptive relationship to our environment. So my question is: are we really just looking at these as individual capabilities and trying to forge them separately, or do we do all of these tasks but fail to integrate them well into an embodied relationship with the environment?

IRINA: My view is that there needs to be integration, and we see that in humans; we see how our language shapes how we see the world. But at the same time, as I mentioned before, we can't really learn language at the same time as we learn the basic concepts. There seems to be a bootstrapping, where we first need some basic understanding of the world, from which we learn the symbols, which is language; and then language is what gives humans the ability to turbocharge our intelligence, because now we can move to the symbolic domain and manipulate and create new structures purely in the symbolic space, mainly using programs, structures that can only exist in that space. That's where humans start overtaking other animals.

JOSH: I would say yes, very much, to the things you're looking for; that's the thesis I'm describing, and there are already very interesting, promising first steps toward what you'd call neural-symbolic integration. It already exists, and it's very important for embodiment. Recently I've found myself working with a number of robotics groups, and when I work with robotics, what that mostly means is that they do all the hard work, the engineering. But I'm very proud that a few robotics labs I've collaborated with have published a couple of robotics papers in the last years, two of which won best-paper prizes at leading robotics conferences. I don't claim any credit for that, but what they've done, in different ways, is use combinations of symbolic and differentiable approaches. One of them has no neural networks at all, but has differentiable dynamics, differentiable physics dynamics, plus a symbolic planner that basically decides which dynamic mode to use; that was work with Marc Toussaint. Another, which was work that Anurag Ajay, an MIT student, did with Leslie Kaelbling and others, took an analytic, symbolic physics dynamics model and then added a neural network to learn the residuals. We can write down really good physics engines right now, which are very powerful at capturing a range of different kinds of physical interactions, but they're not perfect, and there are many things we don't know how to capture that way; so the neural net learns the residuals. That is a very powerful combination that already exists and is being used on real robots.
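(The residual-learning idea Josh describes, an analytic physics model doing the heavy lifting with a neural network trained on what the equations miss, can be sketched as follows. The point-mass dynamics, the fake "unmodelled drag," and all dimensions are invented for the illustration; this is not the cited paper's code.)

```python
import torch
import torch.nn as nn

def analytic_physics(state, action, dt=0.05):
    """Hand-written point-mass model: state = (position, velocity)."""
    pos, vel = state[:, :1], state[:, 1:]
    return torch.cat([pos + vel * dt, vel + action * dt], dim=1)

residual = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 2))

def predict(state, action):
    # The analytic model does the heavy lifting; the network corrects
    # whatever the equations fail to capture (friction, contact, ...).
    return analytic_physics(state, action) + residual(
        torch.cat([state, action], dim=1))

# Training: regress the residual onto (true_next - analytic_next).
opt = torch.optim.Adam(residual.parameters(), lr=1e-3)
state = torch.randn(64, 2)
action = torch.randn(64, 1)
true_next = analytic_physics(state, action) - 0.1 * state  # pretend drag
for _ in range(500):
    loss = ((predict(state, action) - true_next) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"loss: {loss.item():.5f}")
```

Because the network only has to model the gap between the equations and reality, it can stay small and data-efficient, while the symbolic model keeps the overall predictions physically sensible far from the training data.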
Another example is a paper that will soon come out in Science Robotics, by Nima Fazeli and colleagues, describing a robot that plays Jenga using probabilities and symbols. Those are real things that are happening right now, and it's incredibly exciting.
Info
Channel: Future of Life Institute
Views: 11,320
Rating: 4.823009 out of 5
Id: c8-OeGYRFpI
Length: 48min 6sec (2886 seconds)
Published: Wed May 01 2019