Demis Hassabis: creativity and AI – The Rothschild Foundation Lecture

Our distinguished speaker this evening, Demis Hassabis, co-founded the renowned artificial intelligence lab DeepMind and is recognised worldwide as one of the smartest thinkers in his field; he was nicknamed "the superhero of artificial intelligence" by the Guardian. He is a former chess prodigy with degrees in computer science and cognitive neuroscience. This evening's topic, creativity and AI, draws on his eclectic experiences as an artificial intelligence researcher, neuroscientist and video game designer, to discuss the implications of cutting-edge research for creativity and scientific discovery. There will be an opportunity for questions at the end, which Tim will be handling. Without further ado, please give a big welcome to Demis Hassabis. [Applause]

Thank you, Chris, for that great introduction. It's a great honour for me to be here at the Royal Academy to give this inaugural Rothschild lecture, in this wonderful and inspiring amphitheatre. I always love visiting the Royal Academy, and I think it's great to have these kinds of dialogues between the sciences and the arts — dialogues that I think will become increasingly vital as we rush headlong into the modern technological world. So today I'm going to explore a theme that's at the heart of everything at the Royal Academy, namely creativity, and I'm going to examine it through the lens of science and, more specifically, through the lens of the latest advances in artificial intelligence.

As all of you will know, AI is the science of making machines smart. As Chris mentioned, we founded DeepMind in 2010 with the goal of trying to advance artificial intelligence, and we thought of DeepMind as a kind of Apollo programme effort to advance AI as quickly as possible. What we mean by that is trying to bring together the world's greatest research scientists and engineers and give them
all the resources they require — compute power and other things — in order to see how much progress we could make towards solving AI, with an ambitious roadmap that we are carrying out to this day. The other big idea and vision behind DeepMind was to try to organise scientific endeavour in a new way. The way I'd summarise it is that we try to fuse the best from academia — the blue-sky, ambitious thinking you get in the best academic environments — with the best from the startup world: the focus, energy and pace you get at the world's best startups. We didn't see why those two types of environment had to be mutually exclusive, and we thought we could advance science more quickly by combining the best of both worlds.

We articulate the mission at DeepMind as a two-step mission. Step one: fundamentally understand intelligence — understand what intelligence is, and recreate it artificially. Then we believe that if you do step one in a general enough way, step two naturally follows: we should be able to use this technology to help solve almost everything else. That might sound a little fanciful, but I hope by the end of the talk to have convinced you that maybe it's not so far-fetched after all — perhaps it's actually the logical next step once we have general artificial intelligence.

More prosaically, we plan to do this by building the world's first general-purpose learning system. What do those words mean, "general" and "learning"? Let me take you through the two main approaches to building AI, which fall into two schools of thought. In the early days of artificial intelligence the main approach was what's called expert systems — sometimes called good old-fashioned AI, or GOFAI, these days, or traditional AI. What's involved is that teams of programmers and researchers hard-code knowledge, in the form of rules, into complex databases: you can imagine a series of thousands of if-then rules trying to encapsulate the solution to whatever problem the program is supposed to deal with. The issue with those types of system is that they can't generally deal with the unexpected, and they're quite brittle because of that. They're limited to the solutions that have been pre-programmed into them; they can't think for themselves, and they can't deal with anything they weren't already prepared for. What this means is that they're limited to solutions the programmers can express — the programmers themselves have to understand in minute detail what the solution is in order to hand-code these knowledge systems. These expert systems were inspired by logic systems and logic theory.

Compare that to modern-day learning systems, whose advent has been one of the reasons behind the rejuvenation and revolution in artificial intelligence in the last decade — these learning systems are now really starting to work. Learning systems learn solutions from first principles: they learn directly from data and directly from experience, and they learn for themselves. They're not pre-programmed with solutions; they have to figure solutions out for themselves. And if we can build these systems in a general enough way, they can generalise to new tasks they've never seen before, and perhaps even solve things that we don't know how to solve ourselves. That's the really amazing promise of these systems — they could go beyond the knowledge that we have ourselves in all sorts of interesting domains, which I'm going to talk about later in this lecture. In the main, learning systems are inspired and informed by neuroscience and how the brain works — that's where we get a lot of our inspiration for building these types of architecture. So: expert systems are inspired by logic, and learning systems are inspired by the brain.
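To make the contrast concrete, here is a minimal, hypothetical sketch of the expert-systems style — not code from any real system, just an illustration of knowledge hand-coded as if-then rules, and of the brittleness that follows when no rule matches:

```python
# A toy hand-coded "expert system": every behaviour must be anticipated
# in advance and written down as an explicit if-then rule.
# The game and the rule set here are invented for illustration.

def expert_move(position):
    """Return a move for a trivially simple game using hard-coded rules."""
    # Rule 1: if we can win immediately, take the winning square.
    if "winning_square" in position:
        return position["winning_square"]
    # Rule 2: if the opponent threatens to win, block them.
    if "threatened_square" in position:
        return position["threatened_square"]
    # Rule 3: otherwise prefer the centre if it is free.
    if position.get("centre_free"):
        return "centre"
    # No rule matched: the system cannot improvise a sensible answer
    # to a situation its programmers did not anticipate.
    raise ValueError("position not covered by any hand-coded rule")
```

The system does exactly what its rules say and nothing more: change the game even slightly and every rule has to be rewritten by hand, which is the brittleness described above.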
Now, still the most famous example of an expert system is IBM's Deep Blue computer, which beat Garry Kasparov — then the world chess champion — in the late '90s, a match I'm sure all of you will remember. This was obviously a very impressive technical feat, and I remember it very well: I was doing my undergraduate degree at Cambridge, and we were all watching. As you can imagine, I was extremely interested, both from the chess side and from the computer science side. But what I came away with was that, although it was an impressive technical feat, I was more impressed by Garry Kasparov's mind than by the machine. Here was Garry — this amazing creative genius, probably one of the best if not the best chess player of all time — able to more or less hold his own against this big brute of a machine: a huge supercomputer, with teams of programmers behind it and all these rules programmed into it. And not only could he compete on a more or less level footing with the machine; he could, of course, do all the other things we can do as humans — speak three languages, drive a car, ride a bike, the myriad other things we're able to do. Compare that to Deep Blue, which was obviously amazing at chess: Deep Blue could not play even a strictly simpler game, say noughts and crosses, without being totally reprogrammed. Nothing in the knowledge base of Deep Blue would help it do anything else. It was a hard-coded, specialised system that was only good for one thing — playing chess. And it seemed to me that, in terms of thinking about intelligence, some critical things were missing here. What I believe was missing were these two notions: learning, and generality. Both were missing from Deep Blue, and from expert systems in general. After that match concluded, one of the things I resolved to do was to one day build a general games-playing machine that could play any game out of the box.

So let's look at what's been happening with learning systems. The framework we use to think about intelligence at DeepMind is called reinforcement learning, and the idea behind it is that these systems — these agents, as we call the AI systems at DeepMind — learn from first principles, through trial and error. That's how they build up knowledge about the world. Let me show you, with the aid of a simple diagram, how these systems work at a very high level. First of all we start with the agent — the AI system — and the agent finds itself in some kind of environment. The environment could be the real world, in which case you can think of the agent as a physical robot; or it could be a virtual environment like a game or a computer simulation, in which case the agent would be like an avatar in that game world. The agent is given a goal by the designers to achieve within that environment, and it interacts with the environment in only two ways. Firstly, through its sensory apparatus — we normally use vision, though you could use other modalities like audition and touch — it receives observations about the environment, and those observations also include rewards from the environment for doing the right things. The first job of the agent is to build up a model of the world out there: to figure out a statistical model of the environment it finds itself in and the linkages within it. Once it has a model of the world — which it continually updates based on new observations — the second job of the agent is to pick the right action to take. At any moment in time the agent might have a whole array of actions available to it, and it has to select the action that will get it closest to achieving its goal. If the agent's model of the world is very good, it can hypothesise what the consequences of certain actions will be and how the environment is likely to change. You can think of this as a cycle: it's a real-time system, so once the agent runs out of thinking time it outputs the best action it has found so far; the action gets executed; that may drive a change in the environment, which drives a new observation; the agent updates its model of the world and selects a new action. This goes on incrementally until, through sophisticated trial-and-error processes, the agent eventually reaches its goal. The diagram looks quite simple, but there's a huge amount of technical complexity behind it — very hard technical challenges that need to be solved. But we know that if we could solve all those challenges, this framework of reinforcement learning is enough to give us general intelligence — and we know that because this is how the brain works. In the primate and human brain, the dopamine system — dopamine neurons — implements a form of reinforcement learning, and that's what allows us to learn based on rewards.

So how did we develop this further? Partly because of my background in my previous career designing video games and building AI systems for them, I realised that games would be the perfect proving ground for developing and testing AI algorithms.
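The observe–model–act cycle just described can be sketched in a few lines of code. What follows is my own toy illustration, not DeepMind code: tabular Q-learning — one classic reinforcement-learning algorithm — on an invented six-cell corridor world, where the agent starts on the left, the only reward is for reaching the rightmost cell, and everything it knows is built up by trial and error:

```python
import random

# Toy environment: a corridor of 6 cells; the agent starts at cell 0
# and receives a reward of +1 only when it reaches cell 5.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    """Environment dynamics: return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Q-table: the agent's learned estimate of how good each action is in
# each state -- its picture of which actions lead towards reward.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for episode in range(200):
    s, done = 0, False
    while not done:
        # Trial and error: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Update the value estimate from the observed reward (TD learning).
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-goal state is "step right".
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

Nothing about "go right" was programmed in; the preference emerges purely from rewarded experience, which is the point of the framework.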
Normally when you work on AI you might work with robotics — and we love robotics as an application area for AI — but as a development platform it's quite tricky: you end up spending most of your time on the hardware, dealing with the servo motors on the robots, which always break; robots are quite slow and very expensive to use. It's much more convenient to use virtual simulations like games, and to test the sophistication of your AI algorithms in those simulations. So we started with games — in fact, we started with the first iconic games console, the Atari 2600, which some of you may remember from the '80s. It was the first really big console with a great diversity of very different sorts of games on it, and we tested our first system on these Atari games back in 2013. Before I show you a video of the system working, I want to explain what you're going to see. The agent, which we call DQN, only gets the raw pixels as input — the colour values of the pixels on the screen. It isn't told anything else about the game; everything else is learned from scratch. It doesn't know what it's controlling, it doesn't know how to get points; all it knows is that here is a stream of numbers — around 30,000 numbers per frame — and that the goal is to maximise the score. It has to learn everything else for itself. And the final challenge, if you like, is this notion of generality: we wanted one single system to be able to play all the different games out of the box.

I'm going to show you my favourite video of the Atari work, on the game Breakout, one of the seminal games on the Atari system. In this game you control the bat at the bottom of the screen — the pink bat that goes left and right — and you have to bounce the little pink ball against the rainbow-coloured wall. The idea of the game is to knock out all the bricks in the wall; you've got five lives, and if you let the ball go past the bat you lose a life. I'll run the video, and you'll see the system improving over time as it gains experience from playing more games and starts to figure out what's happening. This is what it looks like after 100 games: the system is starting to get the hang of what it's supposed to be doing — moving the bat towards the ball — but it's still missing the ball most of the time. After 300 games it's about as good as any human can play; it almost never misses the ball any more, even when it comes back at very fast angles. We thought, well, this is great — but what happens if we leave it playing for another 200 games? To our surprise, it found the optimal strategy: dig a tunnel up the left-hand side and send the ball behind the brick wall. That's an amazing solution to the problem in a way — it's very low risk, because the ball can't go past your bat, and it's very highly rewarding, because you hit many bricks with just one shot. This was the first of what have since been many "aha" moments for us, where we actually learnt something from our own system: the programmers and researchers behind this are amazing researchers, but they're not so good at playing Atari games, so they didn't know about this tactic themselves — and it was being executed with incredible precision by the system.
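The agent in the video is a deep Q-network (DQN). The following is a deliberately simplified, hypothetical sketch of its core ingredients as described above — Q-values predicted directly from raw pixel input, an epsilon-greedy policy, a replay buffer of past experience, and the temporal-difference update — with a tiny linear model standing in for the deep network, and an invented 4x4 "screen":

```python
import random

N_PIXELS, N_ACTIONS = 16, 2       # a toy 4x4 "screen", two actions
GAMMA, LR, EPSILON = 0.9, 0.1, 0.1

# Linear stand-in for the deep network: one weight vector per action.
weights = [[0.0] * N_PIXELS for _ in range(N_ACTIONS)]

def q_values(pixels):
    """Predict one Q-value per action directly from the raw pixels."""
    return [sum(w * p for w, p in zip(weights[a], pixels))
            for a in range(N_ACTIONS)]

def select_action(pixels):
    """Epsilon-greedy: mostly take the best-looking action, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    q = q_values(pixels)
    return q.index(max(q))

replay_buffer = []  # stores (pixels, action, reward, next_pixels) transitions

def train_step(batch_size=4):
    """Replay past experience, nudging Q(s, a) towards r + gamma * max Q(s')."""
    batch = random.sample(replay_buffer, min(batch_size, len(replay_buffer)))
    for pixels, action, reward, next_pixels in batch:
        target = reward + GAMMA * max(q_values(next_pixels))
        error = target - q_values(pixels)[action]
        for i in range(N_PIXELS):  # gradient step for the linear model
            weights[action][i] += LR * error * pixels[i]

# Minimal demonstration: one rewarding transition, replayed until learned.
s_hit = [1.0] + [0.0] * (N_PIXELS - 1)   # invented "frame" where a brick is hit
s_next = [0.0] * N_PIXELS
replay_buffer.append((s_hit, 0, 1.0, s_next))
for _ in range(50):
    train_step()
```

Only the scale differs in spirit: the real DQN swaps the linear model for a convolutional network and plays the actual games, but the pixels-in, score-out structure is the same.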
So then we took these systems, and the next thing we applied them to was probably our most famous program, AlphaGo. AlphaGo was our program — using these reinforcement learning ideas scaled up even further — to play the ancient game of Go. For those who don't know the game (and I encourage you all to learn it; it's an amazing game that I think you would all like), this is what the board looks like. It's a very esoteric and artistic game, played on a 19-by-19 grid: black and white take turns to place stones on the intersections of the board, which starts empty and gradually fills up. The history of Go is a long and storied one. It's over 3,000 years old; it was invented in China and is played all over Asia, where it's considered to be more than just a game — something more akin to poetry or art. Confucius wrote of Go as one of the four great arts that any true scholar should master, along with poetry, calligraphy and music. So it's really considered one of these profound arts, and today it's as popular as ever: 40 million active players and 2,000 professionals. The game of Go is incredibly simple — I could teach it to you in five minutes; there are only two rules — but the complexity that comes out of it is what makes it so elegant. One measure of that complexity is the fact that there are 10 to the power of 170 possible board positions. That's a one with 170 zeros after it — more than there are atoms in the observable universe. That's the level of complexity that emerges from just two rules, and that's what makes the game so deep and so profound. Again, those ancient scholars thought of Go as containing some of the mysteries of the universe, and therefore worthy of an incredible amount of study. And of course that complexity, and the esoteric nature of the game, is one of the reasons it is so difficult for computers to play.

The game proceeds one stone at a time; you place stones down and the board fills up, like this — this is the end of a game. The way you determine the winner is this: what you're trying to do with your stones is wall off empty areas of territory, and at the end you count the number of points you've surrounded compared to your opponent; the player who has surrounded the most territory wins. In this case it's a very close game, but white wins by one point.

So why is Go so hard for computers to play? After Deep Blue beat Garry Kasparov, the next big challenge — the Mount Everest, if you like, of computer AI research — was Go. Go is much harder than chess for computers, firstly because of the enormous number of possibilities I've just talked about, those 10-to-the-170 possible positions: the search space is much, much bigger than it is for chess. But the second, and even harder, problem is that chess programs — chess engines, including Deep Blue — rely on what's called an evaluation function. This is one of those handcrafted, rules-based components that tells the machine which side is winning in the current position, and it's what allows Deep Blue and its successors to figure out the right move to play. The problem in Go is that the game is so esoteric that it's impossible to figure out the right set of rules and encapsulate them in a rules-based system. Even if you ask top Go players why they made a particular move, they'll often tell you it just felt right — they won't be able to explicitly tell you why they picked it. Whereas if you ask a top chess player, they'll almost certainly give you a specific plan they were thinking about: I was planning A, then I thought B would happen, and I was going to answer C. Now, that plan in the end may
fail for some reason, but they normally have an explicit plan. In Go it's much more about feel — much more how an artist would think. One way to put it is that Go is primarily a game of intuition, rather than the calculation that dominates a game like chess. That's how human professional players deal with this enormous complexity in place of an evaluation function: they rely on their instincts, their intuition.

So we ended up taking a totally different approach from the way chess computers were built, and we built AlphaGo with these learning systems. We created two neural networks — loosely based on how the brain works — to deal with these two hard problems. The first, called the policy network, takes in the current board position and, by looking at and experiencing millions of games of Go, including millions of games played against itself, learns what sorts of moves are most likely to be played in a particular position. You can think of it as taking the current board position and returning, say, the top five most likely moves. That really narrows down the enormous search space you need to explore: you no longer have to look at everything, as a brute-force system would; you can look at just the most likely moves. The second thing — and it was unknown whether this could even be done — was to build a second neural network, called the value network, that takes the current board position and returns a value: a probability between zero and one of who is winning. Zero would mean white is certain to win, one would mean black is certain to win, and 0.5 would mean an equal position. Over the course of training, playing millions of games against itself, it learnt to predict, from any position, who was going to win the game, and how confident it was in that prediction.
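Here is a cartoon of how those two networks divide the labour. The stub functions, move names and numbers below are invented stand-ins for the trained networks (the real system embeds them in Monte Carlo tree search); the point is only the structure — the policy network prunes the candidate moves, and the value network replaces the handcrafted evaluation function:

```python
# Cartoon of the two-network division of labour in an AlphaGo-style player.
# Simple stubs stand in for the trained deep networks.

def policy_network(position):
    """Return {move: prior probability} -- which moves look plausible here."""
    # Stub: pretend three of the ~361 legal moves look plausible.
    return {"D4": 0.5, "Q16": 0.3, "C3": 0.2}

def value_network(position):
    """Return the probability (0..1) that the side to move goes on to win."""
    # Stub: fixed guesses; the real net learned these values by
    # playing millions of games against itself.
    return {"after:D4": 0.55, "after:Q16": 0.62, "after:C3": 0.40}.get(position, 0.5)

def play(position, move):
    """Stub transition: the board position after playing a move."""
    return f"after:{move}"

def choose_move(position, top_k=3):
    # 1. The policy network prunes the huge move list to a handful.
    priors = policy_network(position)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    # 2. The value network scores the position each candidate leads to,
    #    doing the job a handcrafted evaluation function does in chess.
    return max(candidates, key=lambda m: value_network(play(position, m)))
```

Notice that the chosen move need not be the policy network's favourite: a lower-prior candidate wins if the value network judges the resulting position better.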
By combining these two neural networks into one system — which became the AlphaGo system — we were able to solve the two very difficult challenges that Go presents. Once we had this system, we decided to challenge one of the greatest players in Go history: the genius South Korean grandmaster Lee Sedol. He has won 18 world titles and was considered the greatest player of the past decade. We held a million-dollar challenge match in Seoul, South Korea, back in 2016, and before that match everybody, including Lee Sedol, thought it would be a whitewash — five-nil to Lee Sedol — because until that point no Go program had ever beaten even a professional player, let alone a world champion. All the traditional techniques used to build chess computers, even though they had been developed for a further twenty years since the Deep Blue match, still hadn't got anywhere near professional level in Go. We played the match; over 200 million people across the world watched the five games; and AlphaGo, incredibly, won the match four to one. It was proclaimed by many experts, both in AI and in Go, to be a decade before its time. It was a momentous match that I think will go down in history as an AI landmark, and it has often been called since a sort of Sputnik moment for AI, especially in China and across Asia. But the most important thing — obviously we wanted AlphaGo to win the match; that's what we built it for — the most interesting thing about the match was how AlphaGo won, how it played. So I want to explain a little about AlphaGo's play; even if most of you don't know how to play Go, I think you can still appreciate what AlphaGo did. This is a board position from game two, and this is move 37, probably the most famous move in the
match. AlphaGo is black here, very early in the game, and Lee Sedol is white. AlphaGo plays this move here on the right-hand side, outlined in red — this stone here. The key thing to notice about where this stone has been placed is that it's on the fifth line: you can see it's on the fifth line in from the right-hand edge of the 19-by-19 board. In the opening, if you're a professional, you almost always play on the third or fourth lines — those are the important lines to be disputing early in a game of Go. To play on the fifth line this early is almost unthinkable; no professional player would even consider this move, because it seems suboptimal and wasteful. And yet AlphaGo decided to play here. It turned out that, a hundred moves later, the fighting around these two stones in the bottom left-hand corner, ringed in red, ended up spilling all the way across the middle of the board — and then connected up perfectly with that stone from move 37 on the right-hand side. That connection ended up being decisive in the battle, and it won AlphaGo the whole game. Somehow it's as if AlphaGo had presciently understood what was going to happen, and positioned that stone perfectly, a hundred moves into the future.

Of course, the interesting question — and I'll come back to it in the latter part of this talk — is what creativity is. Any of us could play an "original" move in some sense: even if we didn't know how to play Go, we could just put a stone down anywhere on the board, and that would be surprising in some sense. But the key thing about Go is that, although it's considered an art form, it's objective art: a move is only considered original and creative if it ends up being effective, and you can measure the effectiveness by seeing the result of the game, studying it afterwards, and seeing whether that move really made a material difference to the outcome.

And you don't have to take just my word for it. I'm going to play a very short clip from the live commentary stream that was going out to the millions of people watching on YouTube. On the right-hand side is the strongest player the West has ever produced, Michael Redmond, a nine-dan professional, and you'll see his reaction, watching the game live and commentating on it, to move 37: "That's a very... that's a very surprising move — a mistake?" So you can hear he thought it was a mistake, and he goes on to say later in the clip that he thought it was a misclick — that our computer operator had clicked the wrong place on the computer board — because he couldn't believe AlphaGo would play that move.

Then, of course, I must mention that Lee Sedol himself came up with his own incredibly brilliant move in game four, the game that he won: move 78, this wedge move in the middle. Both that move and move 37 have been analysed by players around the world in the two years since, and both have been proclaimed amazing moves. This move here — I haven't got time to explain it fully — triggered a misevaluation in AlphaGo's networks, and that's what allowed Lee Sedol to win that game. For us this was an amazing, once-in-a-lifetime experience, full of drama, and if you're interested in seeing a little more of the human emotions and the spirit of human endeavour behind this match, I'd encourage you to watch the award-winning documentary made by the brilliant director Greg Kohs, which is available on Netflix. You'll see what went into the match, and the nuances
behind it, and what the Go players thought. But there's one thing I want to quote from it, which is Lee Sedol's own reflections on move 37 after the match. The director asks him what he thought about AlphaGo and move 37, and he says: "I thought AlphaGo was based on probability calculation, and that it was merely a machine. But when I saw this move, I changed my mind. Surely AlphaGo is creative. This move was really creative and beautiful." It was a really amazing moment — I pretty much cried when I saw it in the film afterwards, as obviously I didn't see him say it live — and I thought it was an amazing, and very deep, thing for him to realise.

I want now to talk a little about these words I've been using and throwing around — intuition and creativity. What do I mean by them, at least in this context? I should caveat this — and I'm sure we'll get into it in the Q&A — by saying that I'm not claiming this encompasses all of what we think of as intuition and creativity; but I at least want to operationalise these definitions so we can discuss them in a scientific way. Intuition, then, the way I think about it, is simply knowledge that we have acquired through experience, but knowledge that is not consciously expressible or accessible: we can't consciously access it, and we can't express it to others — and that's why this kind of implicit knowledge seems a little mysterious to us. Of course, we know we have it, and you can test its existence and its quality behaviourally. In a game like Go it's very easy: you can give somebody a position, ask them to come up with a move, and then evaluate the quality of that move. I think that largely encompasses what intuition is. So what about creativity? I think one way you could operationalise the definition of creativity is as the ability to synthesise knowledge in the
service of producing a novel, original idea. And I think that, under those definitions, AlphaGo in some sense clearly demonstrated these abilities during the match — albeit caveated by the fact that this was still the very constrained domain of a board game. But let's think about creativity more generally. Here's another definition — I believe I got it from the Oxford Dictionary: the ability to use skill and imagination to produce something new. And I think there are at least three types of creativity, or three levels of creativity if you like. For the first type, imagine you're given three or more examples in a particular topic — let's say the green dots are the examples, and the white box is the topic or field of endeavour — and your task is to create something new. One way you could do that is what I call interpolation, a term used often in machine learning and AI. Interpolation you can think of as a kind of averaging: here are three training examples, three examples of things we'd like from this world of possibilities, and you find an average of them. In some sense that orange dot is new — it's different from each of the examples — but it's still contained within the space the examples cover, marked out by the dotted green line. The next level of creativity, a higher level, would be extrapolation: you have those examples, but instead of just finding an average you extend the boundaries of what you already know. These would be the blue dots, and you can see the three blue dots lie outside the boundary marked out by the training examples. And then, finally, there's what I would call invention, or innovation — represented here by the yellow dot, outside the white box completely: something completely new, perhaps informed in some way by what's inside the box.
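A deliberately simple numeric cartoon of the first two levels (one-dimensional, with invented numbers): interpolation produces something new yet inside the range the examples span, while extrapolation steps beyond that range.

```python
# The "green dots": three hypothetical training examples on a line.
examples = [2.0, 3.0, 7.0]

def interpolate(points):
    """An 'average' of the examples: novel, but inside the range they span."""
    return sum(points) / len(points)

def extrapolate(points, step=1.0):
    """Extend beyond the boundary marked out by the examples."""
    return max(points) + step

orange_dot = interpolate(examples)   # new, yet within [2, 7]
blue_dot = extrapolate(examples)     # outside the examples' range
```

Invention — the yellow dot — has no such formula here, which is rather the point: it would mean stepping outside this representation entirely, and that is exactly what current systems cannot do.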
this is something completely new, perhaps informed in some way by what's inside the box. Now, how are we doing on the AI front with these levels of creativity? Let's examine machine creativity. Neural network systems, the kind of systems I showed you, the fashionable ones are called deep learning these days, are pretty good at interpolation: they're massive statistical machines, if you like, and they're very good at averaging things and spotting patterns in data. Then you have AlphaGo-like systems, which I would say are getting pretty good at extrapolation: finding new things beyond the boundaries of even what the human designers knew about, but still within the same general context. And then you've got true invention, which I think no AI system is anywhere close to yet. This would be, instead of coming up with an original move in Go, inventing Go itself, or inventing chess, and there are no systems able to do that. I think AlphaGo definitely demonstrated extrapolation: it wasn't just averaging what humans had done before, or mimicking what humans had done before, it was coming up with genuinely new ideas. But it can't invent something truly new. So you can ask, what's missing? Well, although AI systems have been pretty successful so far, there's a whole set of things we still need to crack: things like concepts, abstract thinking, reasoning by analogy, memory systems, and imagination, which as we just saw appears in many of the definitions of creativity. All of these capabilities are missing from our current AI systems, and this is where the cutting edge of AI research is at the moment; we're working very hard on all of these topics. And I think those things are the key to invention, to this out-of-the-box thinking.
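To make the interpolation-versus-extrapolation distinction concrete, here is a minimal numeric sketch of my own (a toy illustration, not anything from DeepMind's systems): treating each "example" as a point in a 2D idea space, interpolation produces a convex combination that always stays inside the region the examples span, while extrapolation steps beyond it.

```python
import numpy as np

# Three "training examples" in a 2-D idea space (the green dots).
examples = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 1.0]])

# Interpolation: a convex combination (weights >= 0, summing to 1)
# always lands inside the triangle spanned by the examples.
weights = np.array([0.4, 0.3, 0.3])
interpolated = weights @ examples  # the "orange dot"

# Extrapolation: step from the centroid *past* one example,
# landing outside the region the examples cover (a "blue dot").
centroid = examples.mean(axis=0)
extrapolated = examples[1] + 1.5 * (examples[1] - centroid)

def inside_triangle(p, tri):
    # Barycentric-coordinate containment test.
    a, b, c = tri
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return u >= 0 and v >= 0 and u + v <= 1

print(inside_triangle(interpolated, examples))   # True
print(inside_triangle(extrapolated, examples))   # False
```

Invention, the third level, has no counterpart in this sketch: it would mean leaving the coordinate system (the white box) altogether.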
A lot of that comes from interdisciplinary thinking, spotting unusual connections between different subjects, and doing things like imagining counterfactuals, fantastical scenarios, and a lot of our creativity, I believe, comes from those capabilities that we currently don't have in our AI systems. So where can we look for inspiration? I've only got time to cover one of these topics, each one could be a whole lecture in itself, so I'm just going to talk about a topic I've studied for a long time: imagination, which I think is one of the main keys to creativity. We can actually take inspiration from the brain, especially from what I call a systems neuroscience point of view, which is a high-level understanding of the brain, interested in the algorithms and the architectures the brain uses. I actually studied memory and imagination for my PhD, and I was investigating the question of how we imagine: what are the brain mechanisms behind imagination? When I started my PhD, one of the things I began looking at was how memory works, and I became convinced that memory is a reconstructive process. You shouldn't think of memory as a videotape; it's not a perfect recording. If tomorrow we think back to this lecture, or to what you had at lunch today, you won't replay a perfect videotape: you'll actually reconstruct the memory from its components. We reassemble our memories from components and put them back together, and there's a lot of evidence that this is how memory works. So I was thinking: if that's how memory works, if you can think of memory as this reconstructive process, then maybe imagination, which is a constructive process where you're putting components together in a novel way, relies on the same brain mechanisms and the same brain areas. We've known for more than 50 years that memory relies on an area of the brain called the
hippocampus, which is shown here in pink at the centre of your brain. Without your hippocampus you would become amnesic, and that's what happens in terrible diseases like Alzheimer's. So what we thought was: why don't we test some patients who have damage to the hippocampus, but with the rest of their brains intact, on imagination tasks, and see if they can imagine? What we did was quite a simple test, but no one had thought to do it, even though we've been researching memory for almost a hundred years now. We tested these patients on a simple imagination task where we got them to imagine scenarios, for example: imagine you're lying on a white sandy beach in a beautiful tropical bay, and describe everything you can see around you. This is no problem for healthy people. We got the patients to describe this, and we got age-matched and IQ-matched control subjects to also describe scenarios, and what we found was that the patients' descriptions were hugely impoverished compared to their control cohort. You can see that here on the right-hand side: this is a graph measuring the richness of their descriptions, with the patients on the left-hand bar and the control subjects, whose imaginings were a lot richer, on the right-hand bar. What we found after further investigation is that the problem was that they couldn't bind disparate elements of a scene together into a coherent whole; we call this the spatial coherence problem, and that's what we think the hippocampus is actually doing for imagination: binding all of these elements together into a whole. And of course, what does imagination do for us? It's an extremely valuable skill that humans have: it allows us to more accurately predict the future by hypothesizing about different plans you could follow and seeing how they would turn out, and I also think it's the beginning of creativity in the sense of allowing us to
think of counterfactual situations. We later did some brain scanning work on healthy subjects imagining while inside scanners, and we found five different brain areas that were heavily involved in different aspects of imagining. Most recently, then, we've tried to recreate this aspect of imagination, imagining scenes, in our AI systems, and we've recently had some big breakthroughs on that front. We created a system called the Generative Query Network, the GQN, and what this system is able to do is, rather amazingly, reconstruct a 3D model of a scene from just a handful of 2D snapshots. Imagine giving the AI system a few 2D pictures of a 3D scene: it recreates the whole 3D scene from just those few stills. Then, to test the system, we ask it to render the scene from a new angle it has never seen before; we can ask it to render from any arbitrary new angle. In computer graphics and AI circles this is called the inverse graphics problem. In computer graphics you have algorithms, essentially mathematical equations, that produce all the beautiful pictures you see in games, in 3D artwork and CGI, creating those 3D scenes. What this system is doing is the reverse of that: here's a 3D scene, here are some pictures of it, now recover the generative equations that actually produce that scene. That's why it's called the inverse graphics problem, and it has been a long-standing problem in computer graphics. The scenes we're able to handle, I should say, are very simple currently, but it's kind of amazing that this works at all. So I'm just going to show you a quick video of it working. What you're going to see are quite toy-like 3D scenes with three, four or five geometric objects in them, things like spheres and hemispheres and so on, and boxes of
different colours and different textures. What we do is give the system a couple of still snapshots of the scene and then tell it to render the scene from a new angle, which I'll show you in this video. You'll see the scene on the left-hand side, this little box world with three objects in it. The system only gets two snapshots, view one and view two, and then we ask it to render the view from a new viewpoint, view three, coming from another angle, and we'd like to see what the image looks like from that new angle. So it gets given view one, which is input into the neural network, processed, and represented inside the network. Then we give it a new camera angle, view two, that's what the scene looks like from view two, which goes into the input and gets added to its scene representation. Then we query it, that's why it's called a Generative Query Network: what would it look like from this third view? The network outputs its prediction of what the scene should look like, and we compare it to the ground truth of what it really looks like, and you can see that they match almost perfectly. We're then able to spin around and take it from any new angle, and we can give it new pictures of new rooms with different objects, and you can see it can move around, zoom in, zoom out. It's just as if it were a computer game and we'd built a complete new graphics engine to do that, except it's recovering all of this from those 2D stills. Obviously we're now building up to scenes of higher complexity, and eventually we would like to get to real-world scenes, where you could recreate a real-world scene from just a few 2D pictures. So I hope I've given you a good flavour of what's happening at the cutting edge of AI at the moment, and even though, as I said earlier, there are many big unsolved problems we still have to tackle, even the kinds of technologies we have today are already proving very useful.
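The pipeline just described, encode each observed (image, viewpoint) pair, aggregate the encodings into a single scene representation, then decode a query viewpoint into a predicted image, can be caricatured in a few lines. This is purely a structural sketch with random, untrained weights; the sizes, the two linear maps, and everything else here are my own stand-ins, nothing like the real GQN architecture (which uses convolutional encoders and a recurrent rendering decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, VIEW, REP = 64, 7, 32  # flattened-image, viewpoint, representation sizes

# Untrained stand-ins for the encoder and generator networks.
W_enc = rng.normal(size=(IMG + VIEW, REP)) * 0.1
W_dec = rng.normal(size=(REP + VIEW, IMG)) * 0.1

def encode(image, viewpoint):
    """Encode one observation (image + camera pose) into a vector."""
    return np.tanh(np.concatenate([image, viewpoint]) @ W_enc)

def scene_representation(observations):
    """GQN-style aggregation: the encodings of the observed views
    are summed, so the result is order-invariant."""
    return sum(encode(img, vp) for img, vp in observations)

def render(rep, query_viewpoint):
    """Predict the image seen from a *new* camera pose."""
    return np.tanh(np.concatenate([rep, query_viewpoint]) @ W_dec)

# Two observed snapshots (view one, view two) of the same scene...
v1 = (rng.normal(size=IMG), rng.normal(size=VIEW))
v2 = (rng.normal(size=IMG), rng.normal(size=VIEW))
rep = scene_representation([v1, v2])

# ...queried from a third, unseen viewpoint (view three).
v3 = rng.normal(size=VIEW)
prediction = render(rep, v3)
print(prediction.shape)  # (64,)
```

The one real design point the sketch preserves is the summed scene representation: because addition is commutative, the system's "belief" about the scene doesn't depend on the order in which snapshots arrive.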
So I'm just going to briefly mention a few applications. There's a whole host of commercial applications that we and others are looking at: helping with healthcare and medical diagnostics, where we have a bunch of collaborations with hospitals around the world in all sorts of different areas, especially with image recognition; and work with optimization and energy, where we did some work for the Google data centres and managed to save 40% of the power their cooling systems used by controlling all the cooling equipment more efficiently. I think there's lots of potential in education, for personalized education using these AI systems, and also for making the virtual assistants on your phone a lot smarter. It's also being used a lot in art and design, as many of you will know, especially in architecture: I believe the opera house on the left-hand side was designed using machine learning, as was the engine block for a car engine on the bottom right. There have also been some interesting things in art, such as style transfer, transferring styles between different artworks on the same picture, as well as creating art itself, on the top right. And then for me, my particular passion is using it for science, to accelerate scientific endeavour. These kinds of AI systems I've talked about have already been used successfully for discovering new exoplanets, for trying to control the plasma in nuclear fusion reactors, for designing new chemical compounds, and for detecting eye disease from things like retinal scans: a whole host of areas in both medicine and science, and I think this is just the beginning; I think we're going to see a huge revolution over the next decade. So I just want to close now by going back to my initial statements about our mission and the way I think about it. I think of AI as a kind of meta-solution to a lot of the other problems and challenges we have as a society.
I think one of the big challenges we face in all sorts of domains, from science through to even things like entertainment, is information overload and system complexity. We're bombarded, in both our personal and professional lives, with overwhelming amounts of information and data, so how can we make sense of all these data streams? And the other thing is that, as a society, we want to understand and master increasingly complex systems, some of which are bordering on chaotic systems: things like macroeconomics and climate, areas where the systems are so incredibly complicated that we would like to understand them better. These are huge challenges that we face without something like AI helping us. For a long while, through the first decade of this century, big data was a huge buzzword, and in a way you can think of big data as the problem and AI as the answer: everyone has tons of data now, all companies do, we all have tons of data, but what do you do with all of that data, how do you make sense of it? I think the only way to do that at scale is to use AI. At a very general level, you can think of intelligence as a process, almost a magical process in some ways, that converts unstructured information or data into useful, actionable knowledge. That's what I think intelligence fundamentally is, and AI is a way of automating that process. As I mentioned, my personal dream, and why I've spent my whole career working on AI, is to use it and build it as a powerful tool to help scientists, experts and clinicians accelerate desperately needed scientific breakthroughs. So it's an incredibly exciting time in AI, and AI holds incredible promise for the future, but it must be used responsibly and safely, just like any
other powerful technology, and we have to ensure that it's used for the benefit of everyone and that the benefits accrue to everyone. I think of AI as, in and of itself, an inherently neutral technology; as with any powerful technology, it depends how we as a society decide to deploy and use it. On this topic I think a lot more research and discussion is needed with a wide set of stakeholders, and as I said, that's why I think it's very important to have dialogues like this between scientists and technologists, artists, and the social sciences; I think that's going to be critical if we're going to get this right for everyone. We've started several efforts ourselves: internally at DeepMind we have an ethics and society group with policy thinkers, philosophers and ethicists, and we've also been instrumental in co-founding an industry group called the Partnership on AI, which includes nonprofits and academics as well as the big companies, trying to think about these topics for the benefit of everyone in society. I just want to end this talk by thinking a little more philosophically. For me as a neuroscientist, one of the other really interesting things about this journey we're on is that I believe that by trying to distill intelligence into an algorithmic construct, as we're doing with AI, and then comparing that to the human brain, we might better understand what's unique about our own minds, including profound mysteries like the nature of creativity that we've been discussing, what dreams are, and perhaps even the big questions like consciousness. As Richard Feynman, one of my all-time scientific heroes, said: "What I cannot create, I do not truly understand," and I think about that in relation to intelligence. I just want to finish by giving the last word to Feynman, with a passage from one of his books that really inspired me as a child to think about science and art. This is the way
I feel, and it echoes my views on the topic. Feynman said: "Although I may not be quite as refined aesthetically as my artist friend is", he was walking through a meadow with a good friend of his who was an artist, and they were looking at a flower and discussing it, "I can appreciate the beauty of a flower. At the same time, I can see much more about the flower. I could imagine the cells in there, the complicated actions inside, which also have a kind of beauty. The fact that the colours in the flower evolved in order to attract insects to pollinate it is very interesting; it means that insects can see the colour. All kinds of interesting questions: the science knowledge only adds to the excitement, the mystery and the awe of the flower." I think he's really right about that, and that's why I love both science and art. Thank you. [Applause] Demis, there's an Academy here waiting, not to get at you, but to ask you some questions, but I just want to pick up on a couple of points. Your background was in gaming, it was competitive, and you've come here, I think, in the spirit of collegiality, of openness, of dialogue. It was interesting that in the Go match Lee Sedol said he felt he was there defending human intelligence, and lost. I think we've gone well past the stage, at least I hope we have, where the arts and sciences are pitted against each other, but do you think that element of competitiveness, humans' inevitable competitiveness, is still essential in developing what it is you're trying to develop, in establishing relationships and knowledge between AI and human creativity? Yeah, I mean, look, it's an interesting thing, because competitiveness, when used positively and constructively, can be a very powerful driving force and a very good one for progress. In response to what Lee Sedol said, I can understand why he felt like that, because he was representing the Go world, and it was quite surprising for him;
it was definitely at least a decade before he was expecting that to happen. But one thing to remember is that, of course, AlphaGo is a human endeavour too, and there are all sorts of amazing programmers and researchers on the DeepMind team who have spent their whole lives building up their skills, in the way Lee Sedol had in his art, to be able to program the architecture behind AlphaGo, which then went on to learn for itself; but of course all the initial conditions were created by human scientists. So I think the whole thing, and you'll see this if you watch the film, is a celebration of the spirit of human endeavour from all sides: the Go players, the programmers, everyone collaborating together, including actually the journalists and the writers who were writing about the match. In fact, some of my favourite pieces of writing were done by a Wired journalist who was writing, I thought, very poetically about the whole match as he was watching it live. So I think it's actually a wonderful celebration of human ingenuity all around. And after the match, if you talk to Lee Sedol, he's had time to reflect on it, and I think it's been amazing for the Go world: they've unleashed their own creativity, because not only are they playing what are called AlphaGo-like moves, but many of the top Go players I've spoken to have said it's as if it freed their minds from the shackles of tradition. They're all trying to think the unthinkable now, and they've come up with their own brilliant new ideas, moves that in the hundreds and thousands of years past they would, as junior Go players, have been told off for playing, and now they're able to explore their own creativity fully. Without peddling a stereotype, which is always the prelude to peddling a stereotype: the art world loves the notion of randomness and chance, it wants to harness it, and possibly for many
artists here, certainly for me, and I'm not an artist, that moment where the commentator thought a mistake had been made becomes really interesting: less the resolution, where you saw the beauty, but more the fact that Beckett's idea of failing, or failing better, lies at the heart of many people's creative vision. You could see it as an impossible pursuit of perfection, whereas certain scientists see it as a potential pursuit of perfection; again, that's stereotyping. Art predicates itself in certain areas on having no rules, on wanting always to break the rules; that almost becomes a media trope, but at its best it offers endless possibilities, and at its worst it's an anarchic void of meaninglessness. How does that play into your pursuit of understanding human creativity? Well, I think that's what I was trying to talk about with the types of creativity. The breaking of all the rules, going beyond what the rules allow you to do, that for me would be true invention, which I was representing as the yellow dot outside the box, and I think our systems are currently not capable of that. They're capable of being creative, but within the rules, so to speak, which is what I meant by extrapolation: here are the rules of Go, come up with some new motifs, some new strategies, some new tactics and new theories, and it was able to do that in ways that were genuinely new. So I think that is a genuine form of creativity, but not the highest level of creativity, which would be something like coming up with Go in the first place. I think there are many people in this room who would love an AI to take over the role of the critic. The notion of criticism, how we judge things, human taste, how we decide that something's more interesting or better than something else, is an inexact science; there can be a poetry to it, it isn't an exact science. How does that play into your thinking? Well, I think some aspects of aesthetic judgment could
potentially be learnt by these systems, given enough training data. Maybe there's some amazing art critic or restaurant critic whose judgment you wanted to mimic, and maybe, given enough data, some aspects of that judgment could be mimicked in some way. But I think it goes beyond that, because when an art critic is judging art, one of the things I regard highly about human-created art, why I think it's higher than machine-created art, is that, apart from the technicalities of it, there's always the imprint of the artist in their artwork. I think some of the soul of the artist comes through their art, and that's what we're appreciating as human viewers of that art, and perhaps the art critic is too. Take someone like van Gogh: the tortured nature of his soul comes through in almost every brushstroke, and that's one of the reasons his art is so incredible. I think it wouldn't be the same even if a machine could match it technically, which is a big if anyway, because part of what's great about art is the imprint of what the artist has experienced in creating it. Well, there are many things that art can affirm, but one of them is the affirmation that I am here, I'm alive; what it is to be human, wrestling with the human condition. Presumably a machine can't do that, but presumably you're arguing that at some stage in the not-too-distant future it might be able to have a semblance of that, or is that nonsense? What's the timeframe? I mean, months, years you'll obviously say not, but is there a sense that we're looking at something approaching this in the next decade? I think aspects of it in the next decades, but I think this is what I mentioned at the end: what fascinates me, as both a neuroscientist and a computer scientist, is what
are the aspects of the brain, if any, that cannot be done computationally, and if there are such aspects, what are they and what mechanisms do they use? Can they be explained, or is there something mysterious? I'm quite open-minded about that, and I think part of what we're doing with this neuroscience-inspired AI is: let's see where it takes us, and then we'll see which aspects remain that only the human brain can do. I think about that for creativity, I think about that for dreams, I think about that for consciousness. We don't know what these things are, the nature of consciousness; we don't know how they manifest themselves in the physics of our brain. There are theories, but we don't know, and I think one way of examining that may be to try to build aspects of intelligence and then see what's missing. Some of those things may be impossible, although for the moment, at least from a biological point of view, there doesn't seem to be anything non-computable in the brain. There is speculation about that: there's a famous mathematician called Roger Penrose who talks about quantum consciousness, and he thinks there are quantum effects in the brain, in which case, if he's right, we will not be able to model those on a conventional, traditional classical computer. But so far, and biologists have looked for this quite hard, no one has found any quantum effects in the brain. I love the idea that what the computer would have to do is learn to reinvent Go itself; maybe in the end it will reinvent art, but art does this constantly, it reinvents itself. You've already given us about six potential lectures; I'm not so cheeky or opportunistic as to ask you to come back and give a series here, but you should really. I'm also conscious that there are people here who, in the brief time we have left, will have
questions to ask of you, and I do think there should be another lecture to look through the implications of much of what you've said. Could I ask people to ask questions rather than make long statements? I know that's difficult, because there's so much that's been thrown out, but I'd love to take some questions from the floor. Could you wait for the mic? There's a hand up at the back there. Hi, thank you for a great lecture. Following on from what you were just discussing: you said at the beginning that reinforcement learning is a model we know can lead to general intelligence. So I'm wondering, in order to get to general intelligence and the higher levels of creativity, like invention, do you think we need to work out consciousness and intentionality? And do you agree with philosophers like John Searle, who say that we need to understand the physical material of the brain rather than just the algorithms? No, I disagree with John Searle, and I do think you can make progress on this question without fully understanding the substrate. In fact, I believe that intelligence will be substrate-independent, in the sense that reinforcement learning is the way we're going to try and build it, but there are probably other ways of building intelligence that are more mathematical and less neuroscience-based. And even the neuroscience-based way, as we're doing it, is really looking at the systems level, the algorithmic level, not at the actual wetware itself, the exact way that neurons and cortical columns work. Other people are doing that; it's sometimes called whole brain emulation, where you're effectively trying to reverse-engineer the brain precisely and implement intelligence in the same way the brain does, and I don't believe that will be necessary for intelligence. As to whether we'll need consciousness for true creativity and other things, I'm not sure. I think it's an open scientific question, and we need to get further with the research to
understand that. But I would say that if I were to bet on it, I think it's likely that intelligence and consciousness are what's called doubly dissociable. I think you'll be able to have systems that are fantastically intelligent in terms of their capability but will not feel conscious in any way, in the way that I do to you or you to me. And on the other end of the spectrum, if you look at animals, for example our pets, dogs and cats and so on, I think it's pretty clear they have some form of consciousness: you see them dreaming, and they seem to have those kinds of traits of self-awareness and other things, but obviously they're not close to human-level intelligence. So it seems as though they may be dissociable traits, but who knows, maybe in twenty years' time we'll get to a point where we're stuck against a brick wall, and the reason we can't build more intelligent systems is that we now understand what this consciousness thing is. Thank you, questions beautifully answered. The gentleman there, and then... Hi, thanks for the great lecture. Do you think that the current speed of AI research has had a negative impact on practices in the field, and if so, what do you think can be done about it? Do you mean negative in terms of its applications? No, I don't think so; I think it's mostly been positive, I would say. Like with any hot topic, I think it's a bit too hyped, and that causes the sorts of bad cycles you get when an area reaches the top of the hype cycle. So there's been a lot of amazing work, but some of the promises are still over-promising compared to where we are, and I think that can sometimes lead to science that's rushed in some way. But I think mostly the research community around AI is actually very good, and it's very open; everyone
publishes everything, and I think it's pretty collegiate at the moment, so I would say the research community is actually pretty solid. And I actually think that in order for us to get to best practices and protocols, let's say around how these systems are deployed, we need to get further with the systems themselves, so we have concrete systems to experiment on and actually figure things out, because computer science isn't a theoretical subject, it's an engineering discipline. In order for us to make progress with AI we have to have systems that we can actually test, and test empirically, and I think all the best science is done with empirical work in tandem with theoretical work; for us, the empirical work is engineering. Thank you for the lecture. I think I need an AI to help me process all the information you've just given us in the last hour. I'd like to take you back to your art and science comment at the beginning. Have you looked at using the GQN, instead of trying to recreate three-dimensional computer graphics, to potentially recreate architecture or environments that no longer exist, whether because of war or just dilapidation? There's a lot of amazing architectural art in the world that has been lost, and there are a lot of paintings and photographic representations of it; is that something you've looked at, in terms of trying to help us recapture some of it? Yes, we have started to look at that. As I mentioned, with the GQN we're now trying to build up to more complex scenes, and eventually real-world architecture would be very interesting to try, like a room, or a dilapidated room and its reconstruction. Another area that's been worked on a lot is what are called generative models, of which the GQN is an example, which try to fill in pictures or even draw photos: you can leave a part missing and the model will fill it in, though they're not photorealistic yet.
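As a toy illustration of that fill-in idea, here is a classical diffusion-style baseline of my own, nothing like the learned generative models being described: repeatedly replacing each missing pixel with the average of its neighbours will plausibly fill a hole in a smooth image.

```python
import numpy as np

def inpaint(image, mask, iters=200):
    """Fill masked-out pixels by iteratively averaging their
    4-connected neighbours (simple diffusion inpainting).
    mask is True where the pixel is missing."""
    img = image.copy().astype(float)
    img[mask] = img[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        # Average of the four neighbours (np.roll wraps at the edges,
        # harmless here because the hole is interior).
        neighbours = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
        img[mask] = neighbours[mask]  # only update the missing region
    return img

# A smooth horizontal gradient with a square hole punched in it.
x = np.linspace(0, 1, 32)
scene = np.tile(x, (32, 1))
mask = np.zeros_like(scene, dtype=bool)
mask[12:20, 12:20] = True

restored = inpaint(scene, mask)
err = np.abs(restored[mask] - scene[mask]).max()
print(err < 0.05)  # the hole is filled close to the true gradient
```

This works only because the surrounding image is smooth; the point of the learned generative models is precisely that they can hallucinate plausible structure and texture, not just diffuse the boundary inward.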
They're not as good as the originals; you would obviously spot them immediately as computer-generated, but they're getting better all the time. One of the issues with architecture, or anything more complex than our simple scenes, is that the systems still don't really understand the semantics of a scene: they don't really understand that objects are separate, what background and foreground are, and how physics interacts with structures. That's the concepts part I was talking about, and I think systems like the GQN, if they had abstractions and concepts, would start being able to parcellate the world up into semantic meaning and structure, which should then allow them to model much more complicated scenes. I think that's what's holding us back right now, but eventually I would expect them to be able to do those kinds of things. We should say that art has always harnessed technology, and recently a painting was made by algorithms; all I can say is, well, it might have made it into the Summer Exhibition. Thank you very much. I just wanted to ask you a question about explainability. You mentioned the move that AlphaGo made where, after the fact, the Go experts could say, ah, we know why it did that. But you could probably also imagine situations where it would be harder to understand why a machine made the decision it did. Do you think it's important to build systems that are able to explain themselves, or do you think it's natural that we're going to decouple from machines and lose a bit of that agency? I think it's a great question. I think it's incredibly important that we have interpretability in our systems, for a couple of reasons. One is that it's useful for advancing the science, the better you understand the current systems and what their limitations are; but also, once you start deploying these AI systems in any safety-critical application, of course you would need to understand why the decision
was made. And I would actually advocate going further: always have a human in the loop to make the final decision, and think of the AI as a tool that provides information to that ultimate human decision maker. In order to do that we need to explain these black-box systems better, and I don't worry about that as much as other people do. I think we're just going through a phase at the moment. You can think of it in terms of the evolution of AI systems: in the last decade there's been a huge explosion of AI systems that are really good now and can do interesting things, but that's very new, and the challenge of the last ten years has been, can we get these systems working at all, never mind about interpretability. Now we have them working, we have something to reverse engineer and analyze, and now we, and many other teams around the world, are concentrating on building analysis tools, visualization tools, all sorts of things, even doing behavioral testing, more like you'd have in a psychology lab, so thinking about both behavioral testing and looking into the architecture and measuring it. It's almost like doing brain analysis, like neuroscience, but on an artificial brain. All those tools are very embryonic right now, because they've only started to be worked on in the last couple of years, but I'm pretty confident that with another five years or so of work on those kinds of tools, a lot of these systems that look quite black box, or are quite black box, will become understandable and interpretable. So I think it's vital, but I think we're just at the beginning, the starting point, of that, and I wouldn't worry too much about it, though at the moment a lot of these systems are quite black box.

We've been rigorously programmed to stick to an hour and we've crashed through it, but let's take one more question here. Take one
more question, yeah, let's take one, and then we can carry on over a drink, he said, offering up Demis.

Okay, thank you. Hollywood has always been pretty fearful of AI, with representations in things like Westworld and Terminator and so on, and I'm very glad you mentioned ethics. So, what do you think of the art world's representation of AI, and what's your advice on preventing that kind of future from happening?

Yeah, I think it would be nice if the art world were a little bit more creative, in some sense, because it's easy, and obviously more dramatic, to have dystopian futures and villains and so on; it obviously creates more excitement. But I think most of those scenarios are pure science fiction, and we shouldn't worry about them too much. I actually think a lot of science fiction can be very helpful: lots of scientists, including myself, were inspired by science fiction to try to make some of the things they read come true. Certainly for me, I read probably too much science fiction when I was young. And there are actually brilliant books about futures with AIs and humans in them that have really interesting worlds, like those of Iain Banks, a great writer, and also Asimov, not his robot stories, which I've never read actually, but things like the Foundation series, which is more serious sci-fi, and which I think is very interesting. It would be useful, I think, to explore the whole spectrum of possibilities with AI, rather than this quite crude and narrow way of exploring it. And, you know, I don't particularly like Westworld, for example; I think it's pretty boring and obvious, sorry.

I think it's a good note on which to end, that the scientist comes into the Royal Academy and says the art world needs to be more creative. It actually will accept science fiction and Hollywood films and television as part of the broad visual culture. We've just
finished a festival of ideas here, in which major artistic practitioners, philosophers and theorists have come to the Academy, and we need to expand our networks, we need to get out more. We certainly need to generate more discussions with scientists at the cutting edge of artificial intelligence, among other things, and we probably need to make this place a forum where human consciousness gets debated, and you'd be a great person to do that. But for this evening: Demis Hassabis, thank you so much. [Applause]
Info
Channel: Royal Academy of Arts
Views: 65,523
Keywords: Royal Academy of Arts, art, visual arts, AI, Artificial Intelligence, Demis Hassabis, Deepmind, Rothschild
Id: d-bvsJWmqlc
Length: 71min 46sec (4306 seconds)
Published: Wed Oct 10 2018