Ian Cheng & Richard Evans in Conversation

Captions
Ian Cheng: I just want to first say that I'm a huge fan of Richard's work. I came across it through several papers online, because I was obviously a fan of The Sims 3 and a fan of Black & White, and I came across a very special paper of Richard's about a story engine called Versu, which was a way to simulate social interaction among a group of agents. If you've played video games, you know that social agents are very rare in the world of video games. Normally you're shooting at something, or you're trying to build something, or you're trying to destroy something. But sociality is very difficult to model, even though it's something we are all experts at, and so it's very hard to make it feel right in the context of a game or a simulation. I wanted to talk a little about this engine called Versu and how you went about modeling what you called social practices.

Richard Evans: Yes. So first of all, I'd just like to motivate why I think these social practices are so important to model. It goes back to when I first saw the original Sims game, which was hugely impressive at the time and in a sense still is. I was like, wow, there are these little magical people, and they're walking around and interacting with each other. This is all great. And then my Sim invited another Sim to come around to his house. The visiting Sim came and rang the doorbell, it went "ding dong", and my Sim answered the door. I was like, wow, this is magical. And then, after they'd spoken for a few minutes, my Sim went off, went upstairs, and had a bath. That was the moment for me where I thought, ah, this is not yet fully right. Because we all know that if you invite someone over to your house, you don't just walk off and have a bath. This is a violation of a subtle social norm. It wasn't that there was just some bug in the software; it wasn't that they put a ++ where they should have put a --. This was a deep lack of understanding of the norms of social practices. The norm the agents didn't understand is: if you invite someone over, you need to look after them, you need to pay attention to them; you can't just walk off and have a bath. So I thought: what would it take to actually model the myriad social practices that we're in all the time? What would a computational model of practices be? That was really the driving force behind this system. It seems to me that we are inside these practices all the time, and what we need is a computational model of them, so the agents can understand them and won't go off and have a bath when they've invited someone around.

Ian Cheng: And you make a distinction I think is very fundamental and beautiful. Often, when we think of an agent or an AI in the context of a video game, we're thinking of it as having what you call a regulative view of its agency: you're an agent, you have a certain set of goals, and no matter what situation you're in, you're going to maximize those goals and achieve some value based on some reward, regardless of context. So if you had the goal of really wanting to take a bath, you would do it even in the context of a party. But you've developed what you call the constitutive view of agency: we could be doing a million possible things right now, but we don't. You and I are speaking into this mic in front of you all, you're watching, and this is a kind of social practice. We're limited, in a way, to this set of things. But within that, the view acknowledges that context actually matters as a part of intelligence, and I think that's something you've really got at the heart of in terms of modeling it in a computer system.

Richard Evans: That's right. So right now, exactly as you say, there's an infinite number of actions we could do. I could enumerate all the prime numbers; I could ring up my wife and say "wibble". There's an infinite number of possible actions I could take, but almost all of them don't even dawn on me right now. Somehow, from this infinity of possible actions, we're focused on just a small finite set. Now, that's a huge achievement, but it's an achievement that goes unnoticed in most AI systems, because they assume that restriction in advance. The question was: what are the units that provide these collections of affordances? And the argument is that it's a social practice. A social practice is a collection of affordances. That's how we end up with a finite number of things we think about doing: because we're in a finite number of practices at any moment.

Ian Cheng: And it seems to me that the limitation of affordances is actually a kind of freedom, because it actually makes things meaningful. I think of this thing Philip K. Dick said: he goes about writing his novels by changing one small thing in reality. Maybe you make gravity just 50%, and then the task of the sci-fi author is to imagine all the different ways in which life and culture would change based on that one tweak. In a way, the best sci-fi authors pre-imagine all the social practice consequences. And I gather that if we had AI with more and more of this ability to respond to context, with an accumulated knowledge of social contexts, then by tweaking one thing you could get something quite artistic and beautiful.

Richard Evans: Yeah, exactly. I think that's a great thought. What I find personally fascinating about science fiction novels is not the fact that they've got spaceships and aliens and so on, but the fact that there's a different set of practices in their world. So what we really want from computational agents is the ability to construct entirely new practices. In our world, eating is a social thing that we do together, and going to the toilet is something we do on our own. But you could imagine them swapped around, so that eating is something we don't do with other people; it's a shameful thing we do alone, in the privacy of our own kitchens. The set of possible social practices is infinite. We could, for example, on the full moon, sacrifice all our cheese: make a giant cheese toastie and then sacrifice it rather than eat it, and the highest-status people would be the ones who sacrifice the most cheese. There's an infinite number of practices we could invent. What we really want is computational agents inventing their own new practices, and then we'd be seeing an infinite number of possible worlds: not possible worlds in the sense of physically different worlds, but socially different worlds.

Ian Cheng: It's very interesting, because, as you also said to me backstage, if I just chose to do something extremely random right now, that in itself is constrained by a social practice that is perhaps hidden even from myself. Yes, I wanted to make a disruptive point, but for this randomness to mean something for you and for me, the idea of my doing it has to come from its own accumulated set of social practices that I'm bringing into this one. A social practice interrupts a social practice. But to do something in a total vacuum is, in a way, to not be alive.

Richard Evans: Yeah, exactly. For it not to mean anything is for it to be just physical flailing; if it's going to mean anything, it has to sit inside these contexts. So what we want to do as AI researchers is model these contexts formally.

Ian Cheng: I'm curious now: the way you've modeled social practices has been implemented using symbolic logic, but the orthodoxy of today's AI is machine learning, deep learning. How do you see those things ever coming together? Does one replace the other? Is deep learning the forever thing, or is there a kind of middle ground?

Richard Evans: Yeah, it's a really good question. Deep learning is certainly hugely impressive at a large number of perceptual tasks, and old-school symbolic computing is also good at a number of different tasks: for example, long-range planning, if you want to plan over multiple steps, and theorem proving. There are lots of things symbolic systems are extremely good at, and there are a lot of things neural networks are very, very good at, including pattern recognition. What we really want, exactly as you say, is a unified system that has the best of both worlds: something that's both symbolic and crisp, but also fuzzy and neural, tolerant to noise and robust to error. A unification of those things is the holy grail of AI; if we really could unify them, we could all go home and have a big holiday. But there's something historical I'd like to draw your attention to as a sort of analogy. In the 18th century there were two major schools of philosophy: the empiricists and the rationalists. The empiricists were British, people like Locke and Berkeley and Hume, and they were effectively the old-school philosophy version of the deep learning people: they held that we start with nothing and learn a model of the world. Opposed to them were the rationalists, who were mostly German, and they came from a very different view: that we impose a lot of logical structure, and it's only because we impose that structure that we're able to make sense of the world. So you had these two competing schools of philosophy, which in many ways were exactly the same as the two competing types of AI we've got now: the deep learning people and the symbolic people. And then Immanuel Kant produced this hugely impressive synthesis of empiricism and rationalism, and my hope is that there's going to be some sort of parallel unification in AI, where we can combine the power of neural networks with the power of symbolic logical reasoning.

Ian Cheng: And lastly, I just want to ask: I hear you have children. How do your children influence your work in AI? It strikes me that I learned the most about my own work and my own interest in AI through my dog, and I can only imagine what children might be: self-learning, unsupervised learners through and through, constantly learning new social practices, breaking them, and inventing their own. I'm curious whether they're a source of inspiration or a source of torment.

Richard Evans: Largely, it's a source of humility, because they learn so much so quickly. It makes you realize how many difficult challenges there are to implement. The sort of thing someone can do by the time they're two and a half is quite remarkable. So it really is a source of humility. And it's also kind of cute.

Ian Cheng: Thank you, Richard.
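Evans's core claim above, that a social practice is a collection of affordances, and that an agent only deliberates over the affordances of the practices it is currently in (the constitutive view), can be sketched in a few lines of code. This is a minimal illustrative sketch, not Versu's actual implementation; all class and attribute names here (`Practice`, `Agent`, `available_actions`, and so on) are invented for the example.

```python
# Sketch of the "constitutive" view of agency: an agent's candidate
# actions are not drawn from the space of all physically possible
# actions, but from the affordances of the social practices it is
# currently engaged in. Illustrative only; not Versu's API.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Practice:
    """A social practice: a named bundle of afforded actions."""
    name: str
    affordances: frozenset[str]


@dataclass
class Agent:
    name: str
    active_practices: list[Practice] = field(default_factory=list)

    def available_actions(self) -> set[str]:
        # Context (the set of active practices) determines what even
        # occurs to the agent as an option to deliberate over.
        actions: set[str] = set()
        for practice in self.active_practices:
            actions |= practice.affordances
        return actions


hosting = Practice(
    "hosting a guest",
    frozenset({"offer drink", "chat", "say goodbye"}),
)
host = Agent("sim", [hosting])

# "Take a bath" is physically possible, but it is not an affordance of
# any practice the agent is in, so it never enters deliberation. This
# is the norm the Sims anecdote shows being violated.
print("take a bath" in host.available_actions())  # False
print(sorted(host.available_actions()))
```

In this framing, the Sims bath bug is not a mistaken choice among candidate actions but a failure to restrict the candidate set in the first place: a regulative agent with a strong "bathe" goal would happily select it mid-visit, while a constitutive agent never considers it until the hosting practice ends.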
Info
Channel: Serpentine Galleries
Views: 1,045
Rating: 5 out of 5
Keywords: ian cheng, guest ghost host machine marathon, richard evans, sims, black and white, rationalism, ai, empiricism, philosophy, bad corgi, computer, computer games
Id: UZRfofoXXI4
Length: 10min 29sec (629 seconds)
Published: Wed Jan 03 2018