How Do Neural Networks Grow Smarter? - with Robin Hiesinger

Captions
[Music] Well, thank you so much, Lisa, for the kind introduction, and thank you so much for having me here. It's a great honor to be with the Royal Institution; I've been a fan for many years. I dare say I'd really love to be in your famous lecture hall right now; as it is, I'm grateful for the opportunity to join you from my somewhat more boring office in Berlin, and I'm glad to tell you about some of the work and ideas that are in The Self-Assembling Brain.

As Lisa said, I'm a neurobiologist, and my lab and I have been working on one question for many years: how does a neural network become a brain? While we and many others have been trying to figure this out by looking at the existing neural networks in biology, over the last few years there has been an explosion in another, also quite old, field; only in the last few years has it become clear just how powerful an approach simulating neural networks is. When I learned more about this, I started to wonder: what do these people know that I don't? What can I learn from the field of artificial intelligence, and how much are they actually looking at what we are finding out about how the brain develops? When I started to do this, I realized just how little these two fields are talking to each other, even though the two scientific disciplines really have a shared problem that I call the information problem: what kind of information do you need to wire up smart neural networks, and how do you get that information into those networks? To find out how that came about, I want to tell you the story of those two scientific disciplines up to where we are right now and, based on the work in both fields, where we may be going.

Our journey starts with this rather severe-looking gentleman: Heinrich Wilhelm Gottfried von Waldeyer-Hartz. He was a German neuroanatomist, and he coined the term "neuron." When he did that, he defined neurons as individual, interconnected cells, which sounds innocent enough but really was the origin of a huge problem. The problem is this: if neurons are individual physiological units, and anything you look at in the brain looks like this mess of a forest, how do you know that you really have individual cells doing things, and how do they relate to the network? Even twenty years before Waldeyer coined the term neuron, Joseph von Gerlach, another German, produced as early as 1871 this beautiful depiction of cells and fibers in the spinal cord of an ox. He described what he saw, as I wrote down here, as a dividing nerve fiber whose two branches hang together with the nerve fiber network, which is connected to two nerve cells. In other words, he thought the network may be connected to some kind of cells, but it is not the cells themselves; it must exist by itself. A large part of the reason, for sure, was that people could not imagine how something like a network could be put together by many individual cells. This debate went on for many decades, well into the first half of the twentieth century, and the two protagonists who perhaps best exemplify it were the Italian Camillo Golgi and the Spaniard Santiago Ramón y Cajal. Golgi had developed a beautiful staining technique that allowed one, basically for the first time, to see with unrivaled precision the trees inside the forest, and Cajal used this method to depict thousands and thousands of different types of neurons.
Like this one. This is still, to this day, one of the most iconic pictures of neurons ever; it's from 1905, a human Purkinje cell. What a beautiful cell, with a cell body and a large dendritic tree, and it receives input from thousands or tens of thousands of other neurons onto this tree, and each one of those is important and carries some information about how, ultimately, the brain works. A year after this picture, both of them received a shared Nobel Prize for their work, and what happened then is quite interesting. Cajal said: look, we found all these beautiful neurons, and he used the term Waldeyer had coined; as a matter of fact, Waldeyer had learned Spanish in order to talk to Cajal, and they became good friends. Golgi then went ahead, for the same Nobel Prize, and said: well, you may be looking at cells, but I don't buy it; there is no such thing as neurons connecting to make a network; the network exists separately, and the neuron doctrine will not stand the test of time. This was 1906, and the dispute went on for quite a while.

Of course the neuron theory was right. We do know that there are a lot of neurons; in the case of the human brain, the estimates are now something like 86 billion. But the problem Golgi saw hadn't gone away, and it still hasn't: how do the many, many neurons of a brain wire up to produce an intelligent network? This is the basis of a very famous debate, one that was as relevant back then as it is today: the nature versus nurture debate. How do you get information into a network to make it intelligent? The debate goes somewhat like this: there is connectivity, usually dubbed "it's in your genes," and there is learning, information from the environment. Well, it's not that simple. Connectivity is not just in your genes but also receives information from the environment and from neural function, and learning itself is not just environment but is influenced by genes. But let's ignore the genes-versus-environment subtleties for a moment and just focus on these two as sources of information: whatever makes a neural network smart has got to be in the connectivity, or it has got to be learned. So which one is it?

One extreme solution would be to just make the connectivity random; then it would all be learning. That sounds a bit extreme, but as a matter of fact, in the first half of the twentieth century the majority of neuroscientists surely thought that the information could not come just from a genome-encoded developmental process, and that most of it had to be learned. And when the first artificial neural networks were built, their builders went a step further and said: we're sure it's totally random. Here is the first real artificial neural network, and it was entirely based on random connectivity. It's a picture from the 1958 paper by Frank Rosenblatt, and he called this thing the perceptron. It had some kind of input, and then it had this area here, the so-called association area, which is randomly connected to the input and randomly connected to the output. Here is Frank Rosenblatt fiddling with one of these association unit elements. What he said in his 1958 paper is, I quote: "at birth, the construction of the most important networks is largely random, subject to a minimum number of genetic constraints." So this is what he thought.
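A minimal sketch of Rosenblatt's scheme as just described: a fixed, random "association" layer and a single trainable threshold output. All sizes, the toy task, and the training details below are illustrative assumptions, not taken from the 1958 paper.

```python
import random

random.seed(0)
N_IN, N_ASSOC = 9, 20   # toy sizes, chosen arbitrarily

# Fixed random "association area": each unit sees the input
# through random, untrained weights, as Rosenblatt described.
assoc_w = [[random.choice([-1, 0, 1]) for _ in range(N_IN)]
           for _ in range(N_ASSOC)]

def associate(x):
    # An association unit fires (1) if its random weighted sum is positive.
    return [1 if sum(w * xi for w, xi in zip(ws, x)) > 0 else 0
            for ws in assoc_w]

# Only the association-to-output weights are learned.
out_w = [0.0] * N_ASSOC

def predict(x):
    a = associate(x)
    return 1 if sum(w * ai for w, ai in zip(out_w, a)) > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Classic perceptron rule: nudge output weights only on mistakes.
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(x)
            if err:
                for i, ai in enumerate(associate(x)):
                    out_w[i] += lr * err * ai

# Invented toy task: does a 3x3 "image" have a filled top row?
samples = [([1, 1, 1, 0, 0, 0, 0, 0, 0], 1),
           ([0, 0, 0, 1, 1, 1, 0, 0, 0], 0),
           ([1, 1, 1, 0, 1, 0, 0, 0, 0], 1),
           ([0, 1, 0, 0, 1, 0, 0, 1, 0], 0)]
train(samples)
print([predict(x) for x, _ in samples])  # should match the targets
```

The random layer contains no task information at all; everything the machine "knows" afterwards sits in the handful of learned output weights.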
Rosenblatt went on to build a really big neural network. It was quite complicated, and it actually did achieve quite a bit. Here is a picture of the so-called Mark I Perceptron as it was put up at Cornell. Science magazines ran headlines like "human brains replaced," there were articles in the New Yorker, and so forth; it made a big splash. The idea, of course, was: here is a solution to produce an intelligent network, and this is what it's got to be like in biology. So, a possible answer to what makes a neural network intelligent, provided by the first people trying to build one.

The problem, when we try to measure how good and how intelligent something like this was, is that we run into the definition problem: what do we actually mean by intelligence? Let's talk about this for a moment. How did the people who were building, and who are still building, artificial intelligence systems define those things? Here is what I found to be the most agreed-upon, and surely the most famous, definition of artificial intelligence. It comes from Marvin Minsky and has been adopted by many others, and it goes like this: "the science of making machines do things that would require intelligence if done by men." It's a curious definition, because what it really does is define one kind of intelligence by some other kind of intelligence; it defers the problem, if you will, to something that humans apparently do. It defers the problem to a kind of biological intelligence, which of course leads us to ask: what is biological intelligence? When you try to investigate that, and I encourage you to look at the Wikipedia page, it's quite interesting to see that there truly is no consensus among scientists; the first paragraph of the Wikipedia page even includes plants. Why is there no consensus? Why is it so hard for neurobiologists to say what intelligence is?

Well, let's look at the intelligence that the artificial intelligence community wants to achieve with artificial neural networks, and start with human intelligence. Human intelligence sits in this thing here; this is an actual preparation of a brain from the Perot Museum in Dallas. What you see from the outside is very similar to the iconic pop-culture version of brains: this folded structure, the cortex. The cortex is what has expanded most in primate evolution, and neuroscientists agree it is where human intelligence sits. The cortex is the subject of much study and much debate; as a matter of fact, even as we speak, there are wonderful theories out there saying that maybe we need to make artificial neural networks more like the cortex, otherwise they're not going to give us something like human intelligence. But do you need a cortex to be intelligent? Is that what it is? Is there no other intelligence?

How about bee intelligence? What do you see here? Let me tell you a story, and then you can tell me whether you think it's intelligent. Here is a bee, and this bee is very excited. Maybe a little while ago it found, a mile away or so, some beautiful flowers, and it came back to the hive and performed some kind of strange dance that leads all these other bees around it to listen in. This one here is in fact telling everybody around it (this is why they're sitting around like this) where it found the flowers. How do we scientists know that? How do you find out?
Here is a picture of my colleague Randolf Menzel, who has studied this in great detail. The way he studies it is with a high-speed camera, watching how the bee tells the other bees; then he takes the same bees, puts a transponder on them, and measures where they fly using a big radar out in the field. What he found is that the way the bee is oriented when it performs this dance (in the dark, in the hive, on a vertical surface, relative to gravity) tells the others which direction to fly, and the number of waggles, the movements of the back part of its body, tells them how far. When he plotted how far they fly against how many waggles they do, he found a beautiful relationship where literally one waggle counts for about 74 meters. So if this bee does, say, 17 waggles: 17 times 74 is 1258 meters. The next morning another bee may just take that information: it has learned which direction to fly, it has learned to fly roughly 1258 meters, it corrects this information based on its own internal clock for the position of the sun, and it flies to those flowers.

Is that intelligent? There is something amazing about this, because what the bee in fact does is encode information about the world in something utterly different, and then it transmits this information, encoded, from its own brain to some other bees that are listening in. They don't even see anything; it's vibrations and electrostatic forces. And then that other bee needs to decode this information, some abstract version of a landscape, and find that same place. If you wanted to build a machine that is artificially intelligent enough to do that, I think it would be a pretty big challenge. So how about "the science of making machines do things that would require intelligence if done by bees"? What we do know is that it requires a combination of connectivity and learning, and these two are actually not easy to disentangle.
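The dance-decoding arithmetic described above is simple enough to write down. The 74-metres-per-waggle calibration is the number quoted in the talk; the function name and the sun-compass simplification are mine.

```python
METERS_PER_WAGGLE = 74.0   # rough calibration quoted in the talk

def decode_waggle_dance(n_waggles, dance_angle_deg, sun_azimuth_deg):
    """Decode a waggle dance into a flight distance and compass bearing.

    On the vertical comb, "straight up" stands in for the sun's
    direction, so the dance angle is read relative to gravity and
    added to the sun's current azimuth. Simplified: this ignores
    the bee's internal-clock correction for the sun's movement.
    """
    distance_m = n_waggles * METERS_PER_WAGGLE
    bearing_deg = (sun_azimuth_deg + dance_angle_deg) % 360.0
    return distance_m, bearing_deg

# The example from the talk: 17 waggles -> 17 * 74 = 1258 metres.
d, b = decode_waggle_dance(n_waggles=17, dance_angle_deg=40.0,
                           sun_azimuth_deg=135.0)
print(f"fly {d:.0f} m on bearing {b:.0f} degrees")
```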
Maybe the best way to disentangle them would be to find a type of intelligence where we know it doesn't use one or the other. So let's meet butterfly intelligence; I'm going to tell you a story about a bunch of butterflies. Monarch butterflies are beautiful animals. They occur in the millions in northern North America, and when the days get shorter and the temperatures drop in late fall, millions and millions of butterflies embark on a journey 3,000 miles south, to a few trees in a rather small area up in the mountains in Mexico, marked here with an X. There they overwinter, and those that survive embark, in the spring, on a journey back. They start to fly north and stop somewhere around Texas, for example, where they know there are beautiful flowers, and they eat and they mate and they die. A new generation of butterflies emerges and picks up the trail; they migrate further north, eat, mate, and die; and another generation emerges, until the butterflies have finally spread all over northern North America. And by the time the days get shorter and the temperatures fall, millions of butterflies embark on a journey 3,000 miles south that was last taken by their great-great-grandparents.

Where is that information coming from? If we look at connectivity and learning: it's not learning. The great-great-grandparents have long been dead, there is no indication of any information tradition across the generations, and it can be a different number of generations to begin with. So it is all in its connectivity; it's a half-gram animal that can just do this. Some people say it's in its genes, in the genome. Well, that doesn't quite work either: the genome doesn't encode a navigation system, and it doesn't encode a migration route. What the genome encodes is a process to develop, with a lot of time and a lot of energy, an outcome, and this outcome, the connectivity, is what allows the butterfly to do all these things. Not only to migrate: this butterfly's neural network knows how to fly (in 3D, of course), navigate, recognize food sources, coordinate eating and sleeping, and recognize, court, and mate with other butterflies. And it does all of that prior to any learning.

Let's compare that to artificial intelligence based on artificial neural networks: they know nothing prior to learning, and we'll have to talk more about what that means for artificial intelligence. We do know that when it comes to individual abilities, artificial intelligence has become immensely successful, better than humans in fact. We have artificial intelligence now that can recognize faces and voices better than humans; we have artificial neural networks that can predict who and what you want better than your spouse; and we have, for many years now and getting better and better, AI that can beat world champions at chess and many other games. What we do not have, however, is an AI that can do any two of these things at the same time.

With this in mind, how can we compare butterfly and artificial intelligence? The butterfly's intelligence was all in the connectivity, based on a time- and energy-consuming process, based on a genome. Artificial intelligence uses learning, based on a network that is designed, with more or less random connectivity. And it uses random connectivity because, prior to learning, you don't want it to contain any information; it really is all about learning. Current artificial neural networks used in AI in 2021 have no genome and no developmental process: they are designed, they are built, and they are switched on to learn.

So do we need a genome? AI has good arguments for "no," and the arguments are these. First, learning changes network connectivity, and that's close enough. Second, we don't need every biological molecule to simulate the function of a neuron; we can simulate the essence of a neuron's function, which is the strength of its connections to other neurons. And third, computers are getting exponentially faster. This one we've all heard, I think, more than any of the other arguments: they'll overtake us soon enough. This leads leaders in artificial intelligence research to say things like this beautiful quote: "It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied." The founding father of AI who said that is Herbert Simon, and he said it in 1957.
Well, we wouldn't actually say this today; today we would probably say things like "maybe we'll have machines that do all those things in ten years." But Herbert Simon had good reason for saying what he said. He was a participant of the 1956 workshop in Dartmouth that gave AI its name. It was started on the invitation of John McCarthy, another founding father, who had just gotten a job at Dartmouth in 1955 and invited lots of his friends, including Marvin Minsky, whose definition of artificial intelligence I read to you, and many other people like John Nash from A Beautiful Mind; the founding father of information theory, Claude Shannon, is sitting here; the founding father of algorithmic information theory, Ray Solomonoff, is sitting here. And of course Herbert Simon was there, together with his colleague Allen Newell, and they were the only participants at the workshop who brought with them an actual program that could, as Herbert Simon would say, think. It was a simple program, but it would be defining for AI for at least fifty years, and it worked exactly like computers work, the von Neumann architecture that we still have today. Here is a picture from the 1956 paper: it uses decision-making trees. You have some input, then an operator makes a decision here, and, using certain knowledge, the program decides whether to go further down the algorithm or branch out to some other option; it may loop back, and in the end, using decision-making and so-called symbol-processing logic, it comes to a conclusion. It actually proved theorems of the Principia Mathematica. This is how computers work to this day, but it is not how neural networks work, and it is not how the brain works.

So what happened to the perceptron? Remember, that was 1958, just a year later. What happened was maybe a bit unexpected from our perspective today: it was not well received at all by the majority of people in the field, and definitely not by those at the AI workshop, where Frank Rosenblatt was not present. The leaders in the field, especially Marvin Minsky and his colleague Seymour Papert, disliked the concept of neural networks so much that they actually wrote a book about it and gave it the name Perceptrons, and the whole point of the book is to tell you all the things neural networks cannot do. Imagine disliking a concept so much that you write a book, give it the name of that concept, just to say no. Well, it did have a huge impact. It led to artificial-neural-network-free AI for the better part of fifty or sixty years, and it led to several AI winters in which the promises of AI were not met. It's not that neural networks didn't exist; of course they existed and proliferated throughout the decades, but not as a main field within artificial intelligence; it was a kind of parallel thread, a parallel community working on these concepts. And here is Marvin Minsky fifty years after the workshop, at a conference in Germany in 2006, "AI at 50," with still no human-level AI, telling us why: many computer programs today can rival human performances (this is what we still see today), but each program does only one thing and nothing else; they do not have enough common-sense knowledge; they cannot do common-sense reasoning; AIs still have no common sense because they lack the knowledge all humans possess.
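For contrast with the neural networks discussed below, here is what symbol-processing logic of that flavor looks like in miniature: explicit symbols and if-then rules, no weights anywhere. The rules here are invented; the real Logic Theorist searched proof trees over Principia Mathematica theorems.

```python
# A toy forward-chaining rule system, purely illustrative.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts):
    # Repeatedly fire any rule whose premises are all known,
    # until no rule adds a new fact (a fixed point).
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}))
# -> derives 'is_bird' and then 'is_penguin'
```

Every step is an explicit, human-readable decision; nothing is learned, which is exactly the strength and the limitation Minsky was pointing at.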
This is interesting in many respects. One is, of course, the holy grail of human-level performance, but the other is the focus on knowledge. Knowledge-based expert systems are all symbol-processing logic systems, and even in 2006 the field of AI stood squarely behind them. At that point, very few people would have guessed that neural networks would ever do what they do today.

Until 2011. Around 2011-2012 began what we now call the deep learning revolution. What happened was that when neural networks were sent into competitions (importantly, the ImageNet competition on visual recognition tasks, where you could send in your program and see who wins), a neural network suddenly won. How did it win? It could predict better, it could recognize better: computers were fast enough to simulate neural networks of several layers, and those networks could learn from what we now call big data. But they are basically still perceptrons. The deep neural networks we use today are quite similar to the original 1958 perceptron, except that they don't have just one layer. One of the key things the one-layer perceptron famously could not do is compute, for example, the XOR function; once you have several layers in a deep neural network, that's not a problem anymore. You can feed it a lot of data, and if you feed it enough, it suddenly learns something and can do something. But these networks are all designed: they are built, and they are switched on to learn.
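The XOR point is easy to make concrete: no single threshold unit can separate XOR's classes, but two hand-wired hidden units suffice. A sketch with hand-chosen (not learned) weights:

```python
def step(z):
    # Threshold unit: fires iff its weighted input exceeds zero.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other computes AND.
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output: OR but not AND -- exactly XOR. No single threshold
    # unit on (x1, x2) alone can draw this decision boundary.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints 0, 1, 1, 0
```

The hidden layer buys a new representation of the input, and that is the whole trick that "deep" networks repeat, layer after layer, with learned rather than hand-wired weights.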
Now we're coming close to our own time. The last ten years have been very interesting, and they are the reason a neurobiologist is talking to you about this today. The question became, more and more: is the brain after all a good model for AI? If so, big data can't be all; we are not fed big data, we learn by doing. Here are the areas where you could possibly improve. You can improve connectivity by improving network architecture and making it more like the brain; this has actually happened quite a bit, and convolutional neural networks, for example, are modeled on the visual cortex. But the biggest improvements have all been squarely in learning, and the biggest shift was from learning from big data to learning through self-learning: make the ANN, the artificial neural network, learn by itself, using techniques called reinforcement learning.

We could do worse than track the success of AI using the London-based startup DeepMind. It was founded in 2010, a perfect time, just before most people thought neural networks would be such a big deal, and it was bought, as many of you will know, by Google in 2014. DeepMind started with a neural network that was successful in beating masters of the ancient Chinese game Go, which is, like chess, a two-person zero-sum game, just with vastly more possible moves, and therefore very difficult to handle in any other way. After they did that, based on a neural network trained with big data, they developed a version they called AlphaGo Zero, the "Zero" indicating that it wasn't fed any data anymore: a purely self-learning neural network that learned to play Go better than a Go master by playing against itself. They then generalized it and built AlphaZero, which can win not only against Go masters but also at other games. And the newest system, maybe the pinnacle of AI right now, is MuZero, published just a few months ago. It not only gets no information about previous games, no big data; it was not even told the rules of the game. It is entirely naive, and just by playing against a computer that knows the rules and plays well, it learns to beat it. It's just a neural network; it learns.
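Nothing below resembles MuZero's learned model and tree search, but a tabular sketch can show the bare principle of learning a game purely from self-play, given only the legal moves and win/loss feedback. The game (a tiny Nim variant) and all parameters are my own toy choices.

```python
import random

random.seed(1)
Q = {}   # Q[(heap, move)]: estimated value of `move` for the player to act

def moves(heap):
    return [m for m in (1, 2, 3) if m <= heap]

def choose(heap, eps):
    # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
    if random.random() < eps:
        return random.choice(moves(heap))
    return max(moves(heap), key=lambda m: Q.get((heap, m), 0.0))

def self_play_episode(eps=0.2, lr=0.5):
    heap, history = 15, []       # Nim: take 1-3 stones, last stone wins
    while heap > 0:
        m = choose(heap, eps)
        history.append((heap, m))
        heap -= m
    # The player who made the last move won. Walking the game
    # backwards, moves alternate between winner and loser.
    reward = 1.0
    for state_move in reversed(history):
        old = Q.get(state_move, 0.0)
        Q[state_move] = old + lr * (reward - old)
        reward = -reward

for _ in range(20000):
    self_play_episode()

# Perfect Nim play leaves a multiple of 4; from 15 that means take 3.
print(max(moves(15), key=lambda m: Q.get((15, m), 0.0)))  # expect 3
```

Nobody feeds the learner games or strategy; the table fills itself in from wins and losses, which is the conceptual core of the self-play approach.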
So this is the history of AI. What do we learn from it, and why is it important? The history of AI is a history of trying to avoid unnecessary biological detail while trying to create something that so far only exists in biology. And on this journey to avoid biological detail, yet in fact come closer to the brain, we witnessed the move from symbol-processing logic, which most decades of AI research focused on, to, just in the last few years, neural networks; we saw the move from random connectivity to more biological topologies; and we saw the move from data feeding to self-learning. All of this begs the question: if we're trying to avoid unnecessary biological detail, what is unnecessary? AI today, as I already said, still has no genome and no developmental process; networks are still designed and switched on to learn. But babies and butterflies are not born with random connections, and they are not switched on to learn. Are we missing something? Can we build an AI that can do everything the butterfly does? As a neurobiologist, I can tell you that nobody has ever built a butterfly brain. A butterfly brain is part of a butterfly, and it has to grow together with the rest of the body to be what it is, in all its detail and all its beauty. Maybe there is no other way to get the butterfly into the brain than to start with the genome and decode it, in a time- and energy-consuming process, to produce the butterfly.

So let's revisit our question: why do we need genome and growth, and what shortcuts can we really take? Here are the arguments I showed you before, the AI arguments for "no genome needed." First, learning changes network connectivity, and that is a bit like growth. Second, maybe we do not need every biological molecule to simulate the function of a neuron; all we need to simulate is the essence of what a neuron does, which is to strengthen or weaken connections. And finally, computers are getting ever faster, so what does it all matter? Let's go through these arguments one by one.

Start with "learning changes network connectivity." How good is this argument? It's actually quite good. The question is: can training replace growth? The answer is that we don't know yet, but the AI community is making a pretty good case. What the AI community painfully had to recognize is that the critical step of getting any artificial neural network to be what you might call intelligent requires immense amounts of time and energy in training; this is why, until a few years ago, we didn't get those things to be smart. You need fast computers and lots of data, or lots of self-learning, for this to happen. Other things are also quite similar to our brains. Training order matters: if you learn something now, it's put on top of everything you've ever learned; you already have your biases; it's not compared on an equal footing. Similarly, if you feed an artificial neural network the same data in a different order, it produces a network that learns somewhat differently. So there is something about the learning process that resembles a growth process. However, the network that does the learning is designed, and it uses simplifications, and both of those must impose limitations; we need to understand what those limitations are and whether they are relevant.

So let's talk about those simplifications for a moment. Here is the core of an artificial neural network: the artificial neuron. It was first proposed as a mathematical model, still celebrated and basically used today, often in its original form, by Warren McCulloch and Walter Pitts, McCulloch being another participant of the 1956 Dartmouth AI workshop. The essence is this: if you have two neurons (this is a picture from their 1943 paper), then really all that matters is the connection strength onto, say, a third neuron, and you learn by increasing or decreasing the strength of individual so-called synaptic connections, the synapses. Well, this is what a synapse looks like when we look at it in the brain of a fly with an electron microscope, and it's super messy. If you're not trained in looking at these pictures, you basically just see a mess, but it is actually quite beautiful: you see different neurons with membranes between them; there are mitochondria, which produce energy; there are so-called glia cells; here is a release site where so-called neurotransmitter is released from synaptic vesicles; all that jazz, in order to strengthen or weaken synapses. So do you need all that stuff? The artificial intelligence community will say: we can get away without all of this by simply doing the essence of what it does, strengthening and weakening synapses. The question has to be: what level of detail do we actually need?
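The McCulloch-Pitts abstraction just described fits in a couple of lines. This sketch uses the modern weighted-sum form rather than the 1943 paper's exact formalism; the numbers are arbitrary.

```python
def mcculloch_pitts(inputs, weights, threshold):
    # The 1943 abstraction: a neuron fires (outputs 1) iff the
    # weighted sum of its inputs reaches a threshold. Everything
    # else -- vesicles, mitochondria, glia, neuromodulators -- is
    # collapsed into one weight per connection.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Two presynaptic neurons driving a third; only the connection
# strengths matter, and "learning" means changing them.
print(mcculloch_pitts([1, 1], weights=[0.6, 0.6], threshold=1.0))  # fires: 1
print(mcculloch_pitts([1, 0], weights=[0.6, 0.6], threshold=1.0))  # silent: 0
```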
Here is the simplest of the intelligent organisms I'm going to show you today. It's a worm, a famous worm, called Caenorhabditis elegans. It's a nematode that has been used as a genetic model organism in hundreds of laboratories around the world, and its nervous system, its neural network, is very well understood. As a matter of fact, we have a full connectome of the exactly 302 neurons this animal has; most of those 302 are here at the front, and a few are here at the back. We have not only had all the neurons and all the connections for decades; we are also gaining more and more knowledge about the molecular composition of every single neuron and what it does. And when you do that, you find interesting things. You find, for example, this: there is a neuron here, indicated by a red dot at the front of the animal, that releases a molecule, and the only other neuron with a so-called receptor for that molecule is at the very end. How does that work? Well, it's a small worm; the molecule has to diffuse through whatever is in between (there's a gut and all kinds of other stuff), and some molecules at some point arrive there. These are so-called neuromodulators. They act not just at the actual connections between neurons; they are molecules that can change, in our brains too, the state of many thousands of neurons at the same time by diffusing through brain regions. If you take this kind of information, needed to make an animal intelligent in its own way, it is not captured in a wiring diagram.

This is part of a wiring diagram from a huge, beautiful, and very successful effort to map the so-called connectome, that is, all the connections in the brain of the fly. The little fruit fly has about 100,000 neurons, and these are just a few of them. It is very useful to have these diagrams, but they do not contain all the information you need to understand how the brain works, because you also need to know, for example, what kinds of molecules are secreted from one neuron to the other: are they excitatory, are they inhibitory, and so forth. We still love to draw these diagrams, and we learned how to draw them from other wiring diagrams, like electrical wiring diagrams. Those are also very useful, and we love them, because an electrical wiring diagram really is a blueprint: it contains all the information you need to put the whole thing together, and it doesn't matter in what order you assemble the components, as long as they are all there before you flip the on switch. But the biological wiring diagram does not have an on switch. It has to grow, slowly, in a time- and energy-consuming process, to unfold information that is not even shown in this particular depiction.

So we're back at the 150-year-old question: where is that information coming from? We've basically already talked about at least two of the three ingredients we need. The first is a genome, which does not describe the neural network; it contains the information to grow, in a time- and energy-consuming process (the second ingredient), a neural network that then, and that's the third ingredient, underlies natural selection based on its performance, which is feedback to the genome. So the way the genome is programmed to produce networks, and thereby the way the neural network is programmed, is through an evolutionary principle, and what that means is something we still need to talk about. Let's go through those three: the genome, the time- and energy-consuming growth process, and the evolutionary programming.

We start with the genome. The genome, as I already said, is not a blueprint, because it does not contain information that describes the brain; it contains the information to grow a brain, and this is a very important difference. What every individual gene in the genome encodes (thousands of genes, whether you're a fly or a human or a worm) is a protein, and many proteins have the property that they feed back to the genome: they bind to the material of the genome to select some other gene, but not the thousands of others, to be expressed next and produce another protein. That new protein may feed back to the genome to read out yet another gene and produce yet another protein, and so on and so forth. This feedback process, in which ever-new proteins are produced and the cell, the neuron, changes its state in a time- and energy-consuming manner, is truly a feedback of the genome with its own products. And we cannot predict how it unfolds; as developmental biologists we observe it and try to understand it, but based on just looking at the genome you would not easily be able to predict it.
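A toy illustration of that feedback, in the spirit of Boolean gene-network models: each "gene" is on or off, and its next state depends on the proteins currently present. The three genes and their rules are invented for illustration, not taken from any real genome.

```python
def step(s):
    a, b, c = s["A"], s["B"], s["C"]
    return {
        "A": 1 if not c else 0,        # protein C represses gene A
        "B": 1 if a else 0,            # protein A activates gene B
        "C": 1 if (a and b) else 0,    # A and B together activate C
    }

state = {"A": 1, "B": 0, "C": 0}       # an arbitrary initial condition
for t in range(6):
    print(t, state)                    # the trajectory only appears by running it
    state = step(state)
```

Even this three-gene loop produces a trajectory (here, an oscillation) that you read off by running it, not by staring at the rule table; a real genome runs thousands of such interlocking loops.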
There is a fundamental theory behind why that may be so, and that theory is algorithmic information theory, whose founding father is Ray Solomonoff, another participant of the 1956 Dartmouth workshop. He described algorithmic information in terms of probabilities. It's a beautiful theory, and I'm not going to go into all the details; I'll just give you one idea from it that is important for us right now. The idea is this: you can compress information, but then you will need time and energy to uncompress it, to unfold the system. How can you compress? You need ordered data. Imagine you have ten zeros and ten ones. I can easily compress this and write down "ten times zero, ten times one"; it's actually shorter. As a matter of fact, the algorithmic information content of ten zeros and ten ones is pretty much the same as for, say, ten million zeros and ten million ones, because then all I need to write is "ten million times zero, ten million times one." If I wrote down the whole system, I would need one bit for every digit, but the algorithmic information is much shorter. If I want to know what comes out of it, though (and I made this a particularly simple example; you can imagine much more complicated ones), I need to put in time and energy. This is what algorithmic information does: it needs to be decompressed in a time- and energy-consuming manner to produce a complex system.

How does that help us understand the development of a butterfly brain? Let me give you two examples from the fields of artificial intelligence and artificial life research that give us a glimpse of what we may not be appreciating enough about what the genome does when it encodes a growth process: the idea that a very simple set of rules can encode something that is surprisingly complicated. This may be most beautifully exemplified in what is called the Game of Life. A mathematician called John Conway invented this game, and it's simple enough: take a sheet of graph paper, define every square as dead (here, when it's gray) or alive (when it's yellow), and apply three simple rules. First, any live cell with two or three live neighbors survives; here is one that's yellow and has two yellow neighbors, so next round it should still be there. Second, any dead cell with three live neighbors becomes a live cell; this one here has one, two, three live neighbors, so next round it should become yellow. Third, all other live cells die, and dead cells stay dead. I did this here (on the internet, just search "play game of life"): I clicked, as randomly as I could, on a lot of yellow dots, and if you start it, this is what happens. You suddenly see patterns emerge. Patterns come, patterns go; beautiful patterns, repetitive patterns; some structures stay stable, some structures pulsate; then entire areas die, and suddenly a little bit comes back. Here is another region: you suddenly have two civilizations on the two ends of the board, but they are not in contact. Then maybe one of the civilizations sends a missionary that goes all the way down to the other civilization; here is one of those things, it's called a glider. And these things keep on living, and they fight wars, and they influence each other, and they keep producing these structures, and none of this could you actually predict from the simple rules.
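Conway's three rules as stated above fit in a few lines. A minimal sketch on a small wrap-around grid (the grid size and random start are my choices):

```python
import random

random.seed(2)
SIZE = 12

def step(grid):
    # Conway's rules: a live cell with 2-3 live neighbours survives,
    # a dead cell with exactly 3 live neighbours is born, all else dies.
    nxt = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            n = sum(grid[(r + dr) % SIZE][(c + dc) % SIZE]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if (n == 3 or (grid[r][c] and n == 2)) else 0
    return nxt

# Start from random "clicks," as in the talk, and just watch.
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(5):
    print("\n".join("".join("#" if x else "." for x in row) for row in grid), "\n")
    grid = step(grid)
```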
My second example is even simpler. This is also what is called a cellular automaton; the Game of Life is a two-dimensional cellular automaton, and this is a one-dimensional one. All the information that goes into it is this one rule, call it Rule 110, and it works like this. It's one-dimensional, so take your graph paper and apply the rule row by row. If I take three white boxes, for example, then the rule says the middle one in the next row has to be white. So this is what it looks like: white-white-white, middle one in the next row white; white-white-black (where's my lookup table? here), middle one black; black-white-black, lookup table says black; and so forth. If you do this row after row, it produces a pattern, and if you do it for 250 rows, the pattern looks like this. There is a beautiful part of the pattern that keeps repeating itself; there is a boring part where nothing much seems to happen; and then there is this region here (I hope you can see my laser pointer) which, as long as it has ever been run, never repeats itself. It produces ever-new patterns, it produces any pattern that is imaginable, and because the pattern consists of zeros and ones, it can carry out every computation known to mathematics. This is what is called a universal Turing machine: this thing is Turing-complete. Not an easy concept to wrap our brains around, but what it really says is that this ridiculously simple set of rules is sufficient to produce infinite complexity and every possible computation, if only you put in enough time and energy to grow it. And what is even more astonishing is that there is a proof, performed by Matthew Cook already in the 1990s, that this kind of system is undecidable, and undecidable, for our purposes, basically means unpredictable. If you want to know what this pattern looks like at iteration, say, 200, there is only one way to find out: you have to let it grow, you have to run the simulation. There is no math, no analytical method, to find out otherwise.
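A minimal sketch of Rule 110 itself; the only free choices here (row width, wrap-around edges, starting cell, number of rows) are mine, and the only way to see row 200 is, as the talk says, to grow all 200 rows.

```python
RULE = 110
# Rule 110's lookup table: the new cell depends only on the three
# cells above it; bit k of the rule number gives the output for
# neighbourhood pattern k (left*4 + centre*2 + right).
TABLE = [(RULE >> k) & 1 for k in range(8)]

def next_row(row):
    n = len(row)
    return [TABLE[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
            for i in range(n)]

row = [0] * 79
row[-2] = 1                     # a single live cell to start
for _ in range(30):
    print("".join("#" if x else " " for x in row))
    row = next_row(row)
```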
This is why developmental biologists just like to watch things grow. This, by the way, is a pupa of a fruit fly, the kind of little animal that develops on your trash can lid to produce more little fruit flies; it's just two millimeters long or so. If we could know, just by looking at the genome, that this beautiful emergence of an animal was going to happen, we wouldn't make these movies. What comes out of it is a real fly; it looks somewhat like this, normally minus those teeth. And if you take the outer exoskeleton away, there is a real brain in there, a surprisingly complicated brain with more than a hundred thousand neurons (it's not even quite clear how many synapses overall). This particular brain we can now watch, and this is the kind of thing we do in the lab: we put a living animal under a so-called two-photon laser microscope and image, inside the brain, how the neurons are growing. This is what that looks like: this is an intact fly, and we are seeing, at a resolution of a thousandth of a millimeter, the cables growing into the brain and changing their shape as the brain grows. With these techniques we can obtain full three-dimensional data sets for individual time points, and then we can make movies like this, where we basically watch the growth in time lapse on a computer, just as I'm showing you right now. You can fly into the fly brain through its eye, follow the cables down into the brain, and observe the endings of those cables, which I'm segmenting out here; these are the ones that are actually making all those contacts, where all the synaptic contacts have to occur. And all of this happens based on a genetic program, a program originally encoded in the genome that unfolds as the genome engages in a feedback growth process with its own products.

So here we are, back at the 150-year-old question: where is the information coming from? I told you about the genome; I told you what it means that something grows based on the genome, why we can't read all the beauty of the brain in the genome, and how it produces a neural network. The last question was: how do you program this thing? And the answer is: if you can't predict what comes out of a genome, where an energy- and time-consuming process produces the outcome, then there is really no way of programming it in the classic way. You have to do it via evolution; this is what we call evolutionary programming.

Here is Stephen Wolfram. He is the mathematical genius who really discovered Rule 110 and wrote the book A New Kind of Science, and to him the idea of evolution seemed incompatible with this, for exactly the reason I just gave you: if you can't predict, if there is a mathematical proof that even the simplest of rule sets, much simpler than a whole genome, cannot be predicted in its outcomes, then evolution can't do anything. Here is a quote from his book: "In a sense it is not surprising that natural selection can achieve little when confronted with complex behavior, for in effect it is being asked to predict what changes would need to be made in an underlying program in order to produce or enhance a certain form of overall behavior." So he felt evolution can do nothing if you can't predict anything. But evolution, of course, doesn't need to predict anything at all. Evolution only selects at the level of the outcome. Every round of programming is painful: a genome has to go through all the growth, in a time- and energy-consuming process, to produce a network, and only based on that network's performance is there selection, which reprograms the genome through random mutations, to see whether they produce a different or a better network.

This brings us back to our original question: can we build an AI that can do everything the butterfly does? In a way, yes, except that nobody has ever built it, with Lego pieces or in a computer simulation; it has to grow together with the rest of the body. If you could simulate the entire growth process, there is no reason in principle why you wouldn't actually be making a butterfly brain. But there may be no other way to get the butterfly into the brain than to have its genome evolve via natural selection and grow through the time- and energy-consuming process that I call algorithmic growth. And algorithmic growth is really at the heart of the self-assembling brain.
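A toy version of evolutionary programming under the assumptions just described: genomes are never inspected or predicted, they are grown with an (invented) unfolding rule, and selection acts purely on the grown outcome. Population size, rule, and fitness target are all illustrative.

```python
import random

random.seed(3)
L, POP, GENS = 16, 30, 60

def grow(genome, steps=8):
    # "Development": unfold the genome with a simple local rule for
    # several steps. We never reason about this mapping; we only run it.
    s = list(genome)
    for _ in range(steps):
        s = [s[i - 1] ^ (s[i] | s[(i + 1) % L]) for i in range(L)]
    return s

def fitness(genome):
    # Selection sees only the grown outcome's "performance":
    # here, how well the final pattern alternates 0101...
    s = grow(genome)
    return sum(1 for i in range(L) if s[i] == i % 2)

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for g in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]            # select on outcome only
    children = []
    for parent in survivors:
        child = parent[:]
        child[random.randrange(L)] ^= 1   # random mutation in the genome
        children.append(child)
    pop = survivors + children

print("best fitness:", fitness(pop[0]), "of", L)
```

Note what the loop never does: it never predicts which mutation will help. Every candidate genome has to pay the full cost of growth before selection can say anything about it, which is exactly the point made about evolution above.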
We're almost at the end of our livestream today, so I'll just give you a brief summary of the idea of algorithmic growth as we've discussed it for neural networks. We've seen that there is no shortcut to the time and energy it takes to grow: growth unfolds an amazing amount of information, and all the details are important (remember the neuromodulators), even though they cannot be predicted or calculated from the genome. And if they cannot be predicted or calculated, then the only way to program the system is by evolution.

There are lots of interesting consequences to this idea; let me very briefly give you just one. It is very hard to imagine how you would download or upload this kind of information from the brain to a traditional computer. You don't even know what the information is, really; we as neuroscientists have a problem telling you how a four-digit PIN is saved in the brain, and it's going to be hard to interface with this kind of information. Of course we can interface with neural networks and with our own brain; there are beautiful installations, like this one, where a simple measuring device can pick up the electrical waves outside your head, and if you think really hard about a math problem you can make this light go up. More advanced versions of this can be, and already are being, used to insert thousands of little electrodes, little threads, into the cortex, for example to let paralyzed patients use their brain activity to control an artificial limb. But now imagine you want a third arm, or an extension of your cortex, and you want to interface it with what is already there. How would that communication work? The one thing I can tell you is that it would take a lot of time and energy to learn how to use it. Just think of how difficult it is to learn to play the piano with two arms; now imagine you have a third. It's not going to go faster than that, and I don't think you're going to like it when the battery of that arm runs out.

With that, here are the ultimate questions the field is really about: human-level AI, or what is also called artificial general intelligence. What do we mean by that? Today I told you a lot about bee intelligence, butterfly intelligence, human intelligence, worm intelligence, fly intelligence, and all kinds of artificial intelligence. What all of those have in common is that they are their own history. They have to grow, like all the biological examples (and like the first attempts at artificial neural networks, too), and they carry their history in their learning. But they are not on the same scale; they are not all the same type of intelligence where one is just a bit better than another. They are their own histories; they are fundamentally different types of intelligence. If you want human intelligence, there is no shortcut to the growth process; and if you want your own intelligence, there is no shortcut to the algorithmic growth of your own history, of the self-assembling brain.

And with that, I want to thank Princeton University Press very much for helping with this project, all the students and postdocs in my lab, and our funding over the years in the US, in Germany, and from the European Research Council. I want to thank my family for helping me in more ways than I understand. You'll find more information on The Self-Assembling Brain here, and I thank you very much for your attention; I'm looking forward to your questions. [Applause]
Info
Channel: The Royal Institution
Views: 128,885
Keywords: Ri, Royal Institution, robin hiesinger, neural networks, self assembling brain, brain, neurobiology, consciousness, machine learning, neurology
Id: Xv_JJ2ZuDJM
Length: 54min 7sec (3247 seconds)
Published: Fri Jul 02 2021