HC29-K1: The Direct Human/Machine Interface and Hints of a General Artificial Intelligence

Captions
Welcome to Hot Chips 29, Keynote One: The Direct Human/Machine Interface and Hints of a General Artificial Intelligence.

All right guys, we're ready to start the afternoon session. Welcome back. With the eclipse out of the way and lunch out of the way, the fun is basically over and it's all just hard slogging for the rest of the afternoon, so brace yourselves. It's my great pleasure to introduce Dr. Phillip Alvelda, our first keynote speaker. Dr. Alvelda just finished a three-year stint at DARPA, where he was a program manager in the biology initiatives, running a program working on direct computer-human interfaces, brain interfaces, very exciting stuff. I first met Phillip a couple of years ago. I was going to Washington, DC on other business, and I put up on my Facebook page, "Hey, I'll be in Washington, what should I do?" and a friend of mine said, "You've got to meet Phillip." So I went to meet Phillip, and after he had shown me the Lego models of working engines, V8s and V12s and inline fours that actually worked, we got down to looking at all sorts of amazing stuff about how you interface hardware to wetware, and it's pretty cool stuff. So I'm really looking forward to what he's going to tell us about that, and also some of the hints from that research and work of where we might go next, beyond what we understand today from deep neural nets and that sort of thing, which is so exciting. So please welcome Dr. Phillip Alvelda.

Thanks, Fred. Well, good afternoon everyone. It's good to be back here in Silicon Valley. When we moved to Washington, DC with our two daughters for the DARPA gig, I was telling my wife: our older daughter is 14 years old, she's grown up in Berkeley, she's never met a Republican, and wouldn't it be great if she could have an experience that broadens her perspective? Well, we didn't quite expect what we got. Our experience now is very broad; we're thinking about perhaps fleeing to a bluer state now. But I'd say the DARPA experience was fantastic all around, and what I hope to do today is give you a little bit of a flavor of how a DARPA-type technology initiative can come about and what it's like to be on the inside of it. For the Hot Chips community, what I'd like to offer is a little bit of a different talk. Rather than talking about specific technologies and specifications, I'd like to give you a view that looks forward a little further into the future: what are the types of technology, data rates, data formats, information theory, and application challenges that we think could really drive new capabilities in machine learning, and how we can apply some of the things we learned in trying to interface artificial electronics to real intelligences, and take some of those learnings to create more capable, more anthropomorphic systems in the future. So my talk today is really to introduce this notion of new combinations of carbon and silicon. Coming out of DARPA, I've started Cortical AI, really to take the technologies and the approaches, the new types of computation, the new types of systems, the technologies that we required to make those interfaces possible and actually function, and apply them to building artificial people. In the company we've focused on a couple of different areas. Most notably, the thing I'm going to concentrate on today is the inside: what is the computing element, the thing that binds us together
cognitively, which allows us to do very, very powerful things, some of which are still out of reach of traditional processing systems as we manufacture them in electronics and photonics. We also work on the outside, how you can apply more anthropomorphic computing systems to make more realistic animated figures and renderings, but you're going to get a lot of that from the video game and graphics folks, so I'm going to leave that out of this presentation and concentrate on the guts of the technology. In the sense of neural technology: what is that machinery inside the head that makes us so powerful computationally?

Now, the journey for DARPA started long before I got there, back in the 2002 era, when we began investing in prosthetics for wounded warfighters, people who had lost a limb. Obviously, for a couple hundred years we had the hook for the arm and the peg for the leg, but the investments in Dean Kamen's DEKA Research drove what we call the LUKE arm, in homage to Luke Skywalker and his prosthetic at the end of the third movie. This was an eight-pound prosthetic with a modular system; you could have elbow, wrist, and shoulder versions, and when it operates it's really quite remarkable. You can see that these folks who have lost different types of appendages, depending on their wounds or congenital defects, can actually control quite accurately what they do with the system. They can pass grapes back and forth, they can take these flimsy water bottles and drink from them without squirting the water out. It's an incredible leap beyond what was happening before, and I'm happy to report that now, with the Veterans Administration, these things have an insurance reimbursement code and are being distributed, and the volume is growing year-on-year in what we distribute to our wounded veterans.

In a sense this was a fantastic first step, but as in many of these demonstration videos, one of the key issues of course is what's not on the screen. What you don't see is the system that's actually controlling the function of the robot arm, which is a mechanism that straps to the shoe; it's got a gyroscope and some buttons built into it. So all of the sophisticated functions that you see happening there are mostly the result of clever programming and foot motions and pedals, which the person learns while learning to use the prosthetic. That really isn't the holy grail. The holy grail, of course, is to take something like this and connect it to the peripheral or central nervous system so that, like Luke Skywalker did, you could control it with your thoughts.

Now, that journey at DARPA began, of course, with animals. This is out of Miguel Nicolelis's lab at Duke University, where, hidden behind the aluminum block there, you can see that the monkey has a box with some wires coming out of it. There is a probe of wires pushed directly into the motor cortex of the monkey, the area of the monkey's brain that ordinarily sends the signals out to control its own arms. But you might notice that its arm is currently trapped in that transparent sleeve, and he's being presented the marshmallows, and simply with his thoughts he is controlling the robot arm to grasp the marshmallow and feed himself. He's using his visual system, targeting his grasp, maneuvering the fingers and the arm to grab the marshmallow and give it to himself. Now, a couple of profound realizations: this involved, again, what's
not on the screen: the giant refrigerator of computing, the big bundle of cables, the screw plug transiting the skull, quite a lot of effort to digitize those nervous-system signals at very, very high rates with 12-bit ADCs, and a very noisy signal (we'll see a little bit more of that). But this was the first time, the watershed moment, where a creature controlled an electronic system with reasonable degrees of freedom, call it maybe two or three degrees of freedom for this particular arm, completely under mental control.

It wasn't long before that eventually came through and was used in humans. Some of you may have seen this 60 Minutes video: Jan Scheuermann, quadriplegic, paralyzed from the neck down. You can see the box on her head. "I can close it and open it, and I can go forward and back." "That is just the most astounding thing I've ever seen. Can we shake hands?" "Sorry, no... really? Yeah, come right over here." "Oh my goodness. Wow. It's amazing." And it really is, it's amazing. That was the first moment a human was able to control an electronic system just with thought. But again, what's not on the screen: the giant refrigerator of computers, the big bundle of wires, the refrigeration system for the computers, and the fact that the system controlling the arm had quite sophisticated digit control, but those grasps were all manufactured from watching people and how they habitually grasp things. There are really only about five or six degrees of freedom in that robot arm, and it was still slow. It's not something where you could have individual control of all the joints in the fingers, as you might want if you were to try and play a piano, and it certainly didn't operate fast enough to catch something that was thrown at you, which we do quite easily.

Since that program, one of the first programs I had the opportunity to work on in coming to DARPA was the HAPTIX program, which was generally run by Doug Weber; I got to assist him on several of the different projects. Here the idea was to accomplish the return path. Instead of just controlling the robot arm with your thoughts, the challenge for Jan Scheuermann and the other operators was that to use that system she had to concentrate very heavily and look at the arm, because there was no built-in feedback like you have in your own body. There was no way to tell, without looking, where your joints were or where your limb might be; there was no feedback when you bumped into something or touched something, or how hard you were grasping. So this was the project to build electronic sensors into the digits of the robot, to build proprioceptive sensors into the joints, to feed back the position. And I'm happy to report that this has been a fantastic success. You can see some of the early tests here of wiring directly into the peripheral nerves, into the root nerves of the arm that remain after an amputation, and there are similar systems now in which we have accomplished the same functionality directly into the cortex, so both to the peripheral nervous system and to the central nervous system.

But the really remarkable thing about all those experiments is that, as wondrous as they were, as much of a watershed moment as we saw in that first opening of the door of controlling things with your thoughts, the state-of-the-art technology (and keep in mind those early demonstrations with Jan Scheuermann were 2005, and last year's HAPTIX demonstrations with the return path and the sensors were early 2017) across
that entire span, this remained the state-of-the-art technology that the FDA would allow us to implant in a person. Some of our technical staff at DARPA would laughingly joke about it and say, oh my god, stone knives and bearskins, it's nails and nerves. We're taking this thing and shoving it in; of course they do it carefully, they've got machinery and they clean it and all that, but the point is they're just pushing this little pad of a hundred wires, a couple of millimeters on a side, getting a really noisy signal out of those hundred wires, and out of that we eke out on the order of six degrees of freedom and can slowly and herky-jerkily control a robot. So I come to this situation and say, my god, the technology hasn't really advanced since 2005, and really that's probably 1980s-level technology. What could we do? Since 1958 we've been trying to hook up electrodes to the brain. This is an early experiment over the visual cortex with an array of rather large, couple-of-millimeter-sized electrodes. You can see the pattern on the right, where they were drawing in the notebook the dots that they would make in their vision, the idea of phosphenes: you put an electric current right over your visual cortex and you get this bloom of light. We've known since that time that electric stimulation was not ideal. Yes, you'd get a visual response, but you weren't speaking the language of neurons; you were just pumping a bunch of current in and setting them all off together. But in that agglomeration of neurons, each one has a different computational function. They represent different things even though they might be right next to each other: some stimulate activity, some inhibit activity, some are different colors, some are orientations, some emotions. When you just plug the current in and turn it on, yes, you get activity, but it's not in the language, and it doesn't have the precision to represent how finely we know our eyes work.

So of course I come to DARPA with my background as a physicist and an entrepreneur and an engineer, and I look at the system: you've got trillions of synapses on one side, trillions of transistors and wires on the other, and so far we have what, a hundred wires? That's a nice ten-order-of-magnitude bottleneck, ripe for exploitation. So I saw the opportunity to apply decades' worth of technologies to think about how you would develop a next generation of neural system interface specifically for use in humans, with a rather audacious goal: I wanted a four-order-of-magnitude improvement in the scale at which we could have that interface. Now, when we first came up with that number, we actually did it with a pretty rigorous bottom-up study: what is the fundamental transduction element in the neuron, what is the amount of energy required to change its state, what would we need to transduce it, what would you need to process it and encode it? From a first-principles perspective, what did we think was possible? Four orders of magnitude is a big jump, and there's a huge motivator to do it. This graph was one we made to get the project approved. What we're looking at is spatial precision on the bottom axis and the number of independent channels of neural access vertically, on a log scale, and you can see that we started the DARPA project with the types of technologies from animal experiments, where we could demonstrate very, very limited scale and a very limited independent channel count.
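As a quick back-of-the-envelope check of those two numbers, the ten-order-of-magnitude bottleneck and the four-order-of-magnitude target, here is a sketch using the round figures quoted in the talk (the synapse and wire counts are order-of-magnitude estimates, not measurements):

```python
import math

synapses = 1e12        # "trillions of synapses", as quoted; order of magnitude only
implant_wires = 100    # the ~100-wire electrode array used in humans

bottleneck = math.log10(synapses / implant_wires)
print(f"interface bottleneck: ~{bottleneck:.0f} orders of magnitude")   # ~10

# The program goal: a four-order-of-magnitude jump in independent channels.
target_channels = implant_wires * 10**4
print(f"target independent channels: ~{target_channels:,}")            # ~1,000,000
```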
There were also some things where we had larger scale: you could use MRI to see many neurons, but you couldn't see many independent channels, because you would have very, very large voxels. The goal, of course, is to have more precision and more scale, so you can do more complicated things through your interface; you want higher bandwidth and higher precision at the same time. That was the core requirement. So the question is, what was a reasonable target? Well, there were some new technologies that would take what had been done in humans (that's in the diamond boxes) and apply some of the latest technologies that had been demonstrated in animals with more scalable electrode systems, but they hadn't been pushed through the FDA. And there were some exciting new technologies coming out which were the catalyst for me to come to DARPA. It was in reading the Ahrens and Keller Nature paper that I saw we could build not just electronic systems, where, say, to get to the scale of the auditory cortex we could just run a DARPA-hard program, apply the very latest 2015-2017-era technology nodes and electronics, and have something that could achieve what we thought was about the scale of the auditory cortex. The new technology catalyst, the idea of an optical interface, was super interesting, and we believed that those early experiments on fish and rodents and monkeys would offer opportunities at tremendously larger scale. So this was really the product sell sheet for the DARPA management: we think we can move the state-of-the-art human interface from the coarse motor control era to the fine sensory interface scale.

So, three years back, this is what the technology looked like. This is out of Mark Schnitzer's lab right here at Stanford. What you're looking at on the screen: that vertical silver bar is an aluminum track, and down at the bottom you can see there's a tiny mouse, and it's got an imaging light and fluorescence microscope on its head, looking at the part of its brain that has the place and grid cells. What you're seeing here is one of the first demonstrations where we're starting to decompose the spatial, temporal, and physical function, computationally, of the neural substrate. Here you can see the tiny little circle that's circling a neuron corresponding to the triangular point towards the top of the track: that neuron fires when the mouse passes that triangular point in a specific direction, so it's direction- and position-sensitive. You can see the mouse run, you can see the total energy inside the circle in the bottom graph, and every time it passes that little triangle you see the blip of activation. That whole area of neurons is encoding every position on the track, and by hooking up a machine learning system, they could actually turn off the camera, just look at the neural representation inside the mouse's brain, and tell where the mouse thought it was. How cool is that?
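To make that "turn off the camera and decode where the mouse thinks it is" idea concrete, here is a minimal, hypothetical sketch of that kind of decoder: binned per-neuron activity in, position on the track out. The simulated place-cell responses and the ridge-regression choice are illustrative assumptions, not the pipeline the lab actually used:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are time bins, columns are per-neuron activity
# (e.g. deltaF/F from calcium imaging); y is the animal's position on the track.
rng = np.random.default_rng(0)
n_bins, n_neurons = 5000, 300
position = rng.uniform(0.0, 2.0, size=n_bins)                 # meters along the track
preferred = rng.uniform(0.0, 2.0, size=n_neurons)             # each cell's preferred place
activity = np.exp(-(position[:, None] - preferred[None, :]) ** 2 / 0.05)  # place-cell-like tuning
activity += 0.3 * rng.standard_normal(activity.shape)         # imaging noise

X_tr, X_te, y_tr, y_te = train_test_split(activity, position, test_size=0.2, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("decoded-position R^2 on held-out data:", round(decoder.score(X_te, y_te), 3))
```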
This is some work out of University College London, also in a rodent brain, where they are doing a bidirectional read and write. These technologies for making the neurons optically active are derived from the ability to program with DNA, synthetic biology: crafting an instruction that you administer to a cell, and it creates a protein such that when the neuron fires, calcium rushes in and it fluoresces. That's how you read. You write by having a mirror, a dual, an analog to that, where when you shine light on the cell, it opens up a membrane channel, forces ions to flow in, and the neuron will fire. So we have an optical interface system which they can now use; in this case it's a scanning system where they will pulse the laser and add energy, in a linear sequence, to all of the circles in that area of the rodent brain, and you can see that even when stimulating only a subset of the neurons in the area, through their dense interconnection to the neighboring neurons you can see other neurons light up. So in a way this was one of the first experiments where, in this case in the area of the mouse brain that controls whisker sensation, just with light they could write a code that would give an artificial sensation to the mouse. That was three years ago.

This is last year, also Mark Schnitzer's lab, so significant progress: we're looking at the entire cortex of the rodent. Think about that: we're looking at and seeing the individual neurons computing as the mouse is awake and running on a treadmill. You can zoom in a little bit to see some of the individual neural activity, and here it is with a little bit of image processing, where you have no difficulty making out the individual neurons. One of the obvious applications for machine learning in this field was: we don't understand what all of those flashing lights mean, but we can watch the behavior, we can instrument the mouse, we can look at what's happening computationally and develop correlations, and start to tease apart what the actual neural code is, what these things represent, like the position of the mouse on the track. But now we're looking at putting images in front of the mouse and reading out how the brain is processing those images. When you start to look at neurons at this scale, and here we're just starting to zoom in on one particular bit of the mouse brain, you can see some of the areas of the brain illuminated according to the Allen Brain Atlas for the rodent, and you can start to see what the machine learning systems will do: they isolate, identify, and tag each individual neuron and its isolated activity, and you start to transduce the energy in each one of those circles and begin to develop a representation of the neural code, when the neurons are firing, and what they are related to behaviorally. But think about that: we're going from about a hundred wires to being able to transduce, in this case, a million and a half neurons. That imposes some challenges.

Now, the brain system that we hooked up to Jan Scheuermann was already at the limits of scale that we could manage, and the state of the art was to take that hundred-wire implant in the motor cortex. Think of it: these wires are very, very coarse with respect to neurons. Any one of those wires is surrounded by somewhere between twenty and fifty thousand neurons, all actively generating electrical signals, so you end up with something that is not really an identifiable code of any sort; it's an aggregate with a tremendous amount of noise. Computationally this was very, very difficult. We would sample at thirty kilohertz, we used 16-bit ADCs over ninety-six channels, that's a sixty-megabit data stream, just to extract six degrees of freedom, and the code that would drive the robot arm was only about five kilobits per second of data to control the arm (the arm had a lot of smarts to do the finger control and grasp and other things). But this was the kind of sparsity of what we could extract from that very, very noisy signal.
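Using the numbers just quoted, here is a rough sketch of how lopsided that pipeline was, and, anticipating the compressibility point that comes next, what a sparse spike-event code at million-neuron scale might need instead. The 60 Mbit/s figure in the talk presumably includes overhead beyond the raw product below, and the average firing rate and event sizes are assumptions for illustration:

```python
# Dense acquisition for the ~100-wire array, per the numbers in the talk.
channels, sample_hz, adc_bits = 96, 30_000, 16
raw_bps = channels * sample_hz * adc_bits
print(f"raw acquisition stream: {raw_bps / 1e6:.1f} Mbit/s")         # ~46 Mbit/s sampled...

control_bps = 5_000                                                  # ...to drive ~5 kbit/s of arm control
print(f"useful control stream: {control_bps / 1e3:.0f} kbit/s "
      f"(~{raw_bps // control_bps:,}x less than what was sampled)")

# A sparse event code at the million-neuron scale: send (neuron id, spike time) only.
n_neurons = 1_000_000
mean_rate_hz = 5              # ASSUMED average firing rate, for illustration
bits_per_event = 20 + 20      # ~20 bits of neuron id + ~20 bits of timestamp (assumed)
event_bps = n_neurons * mean_rate_hz * bits_per_event
print(f"sparse event stream, 1M neurons: {event_bps / 1e6:.0f} Mbit/s")     # ~200 Mbit/s

# Versus naively sampling a million channels densely the old way:
dense_bps = n_neurons * sample_hz * adc_bits
print(f"dense sampling of 1M channels:   {dense_bps / 1e9:.0f} Gbit/s")     # ~480 Gbit/s
```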
Now the challenge is that when you want a sensory interface, we're talking about what the architecture or the system looks like that would be able to transduce and transmit millions of channels. Well, the beauty is that if you were to try and do the same thing you were doing with the larger-scale system, it would be intractable; but if you think about making a microsystem where the scale of the transduction and the accuracy of the interface are closer to those of the neurons themselves that you're trying to speak with, now you've got something where you can begin to isolate the individual activity of individual neurons, which is very, very compressible. So we realized at one point that just to say we want a million-neuron interconnect was only half the challenge: in order to make the problem tractable, you simultaneously had to have the precision to be able to make a compressible signal in a manageable data stream.

But there are bigger questions. Even if we can do dimensional reduction and say we're just going to track with high precision when a neuron fires, it's an impulse function, and we can make a code that looks like this, where you translate from an image, within that hundred-sample unit interval, into streams of pulses. We actually need to develop these transcoding algorithms which can efficiently go from the neural representation to the representation of the sensory media that is causing that stimulus. And of course our goal was to make an artificial system where you could take any stimulus, whether it's a real camera feed or a synthesized image, and be able to translate it into this pulsed neural code and write it into the brain, so you could create a real interface. But then you start to say, well, what is that code, and how do you do the decomposition? The more we started to look at the fundamental principles of how you represent information and how the brain was doing this computation, the more we realized that our first guesses, and really what neuroscience has been using for the last 60 or so years to represent neural activity, weren't telling the whole story.

For you EE folks, Shannon-Hartley digital channel capacity is a very familiar graph, where, for example, you take a bundle of neurons, say your optic nerve, and you count them up (there are about two million of them), and then you say, well, how fast can a neuron fire? That's my switching rate. You look at what the magnitude of the signal is, you figure out what the background noise is, and you have a limiting channel capacity. Makes sense; we do those calculations. But then how do you apply it to a neuron? You've got the neuron that's going to fire a signal; it fires it down the axon, to the right, where those blue junctions connect to the other neurons on the other side. Your temptation is to say, well, let's just use Shannon-Hartley: the axon is the communication channel, there might be some background noise, you've got the impulse of the nerve, we calculate the channel capacity, we count firing rates in the nerves, and we end up with a channel-capacity description of about two bits per neuron.
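Here is a minimal sketch of that back-of-the-envelope capacity argument. The bandwidth, signal-to-noise ratio, and peak firing rate are illustrative assumptions chosen to land near the roughly two-bits-per-spike figure quoted in the talk; they are not measured values:

```python
import math

# Treat a single axon as a noisy analog channel in the Shannon-Hartley sense.
bandwidth_hz = 500.0   # assumed usable signalling bandwidth of the axon
snr = 3.0              # assumed signal-to-noise power ratio

capacity_bps = bandwidth_hz * math.log2(1.0 + snr)   # Shannon-Hartley limit
print(f"per-axon capacity: {capacity_bps:.0f} bit/s")

max_firing_hz = 500.0  # assumed ceiling on firing rate, spikes/s
print(f"=> about {capacity_bps / max_firing_hz:.1f} bits per spike")

# Scaling the naive estimate to the optic nerve ("about two million" axons):
optic_nerve_axons = 2_000_000
print(f"naive optic-nerve capacity: ~{capacity_bps * optic_nerve_axons / 1e9:.1f} Gbit/s")
```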
Well, it turns out that's completely wrong. Or "wrong" is probably not the right way of saying it: it's an inappropriate model, and we have the proof, because we can do behavioral experiments and calculate the information necessary for you to discriminate patterns that are very close to each other. The two most obvious ones are discriminating two tones in your auditory system that are very, very close. Here you can look at the line for what Shannon-Hartley says you should be able to detect, which is the blue line, and the finer resolutions that people actually seem to be able to detect, which show that you can do a couple of orders of magnitude better with your biological system than Shannon-Hartley would seem to allow. So if we were to describe the code the way Shannon-Hartley describes a digital system, it's clear that that's not capturing the information that's in the channel. The same is true with your visual system: you look at whether you can discriminate two lines that are close to each other, same calculation, same result; you're about two orders of magnitude shy of what your body seems to be able to do.

So where is the missing information hiding? Or, a different way of saying it: what aspect of the coding, or the transmission channel, or the computing happening in the channel, are we not appreciating when we describe the thing as a simple digital channel? There are a few general areas where we believe a good amount of information could be derived, but it turns out that the one which seemed to hold the most was in the code being not a code of signal level and switching, but of time of arrival, and in fact not even the time of arrival of a single symbol, but the relative time of arrival of multiple signals in a shared channel. So these are distributed codes, population codes, that are time-of-arrival based, and the experiments that back up this idea told us some really interesting things. This is a beautiful, simple experiment by a researcher who is now at Edinburgh, where he took electrodes and put them on both sides of an axon channel, sampling many of them in parallel, and began to do information-content and entropy measurements to determine the information in the channel relative to the sample rate. Now, for 60 years, most neuroscientists would have told you that 100 milliseconds, 20 milliseconds, 10 milliseconds is a fine sampling resolution for patch-clamp experiments, for seeing what a single neuron does. But look at this: as you decrease the sampling interval, the measured information rate goes up, and the experiment broke, the system failed, at two kilohertz, while still on an increasing slope of deriving more and more information. What this told us was that if all you look at is a spectral decomposition or an average firing rate, you're not getting the majority of the information in the neural signal.
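A toy illustration of why coarse binning hides a timing code: two stimuli that differ only in spike latency are indistinguishable with 10-millisecond bins but carry close to a bit of information at sub-millisecond resolution. The latencies, jitter, and plug-in information estimate below are assumptions for illustration; estimating information from real, limited data is far more subtle:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(stimulus, response):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    s_vals, r_vals = np.unique(stimulus), np.unique(response)
    joint = np.zeros((s_vals.size, r_vals.size))
    for i, s in enumerate(s_vals):
        for j, r in enumerate(r_vals):
            joint[i, j] = np.mean((stimulus == s) & (response == r))
    ps, pr = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Two stimuli whose responses differ only in spike *timing*: 5 ms vs 7 ms latency, 1 ms jitter.
n_trials = 20_000
stim = rng.integers(0, 2, n_trials)
spike_time_ms = np.where(stim == 0, 5.0, 7.0) + rng.normal(0.0, 1.0, n_trials)

for bin_ms in (10.0, 5.0, 2.0, 1.0, 0.5):
    binned = np.floor(spike_time_ms / bin_ms).astype(int)
    print(f"bin = {bin_ms:4.1f} ms -> I(stimulus; response) ~ {mutual_information(stim, binned):.2f} bits")
```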
So now we find ourselves at the uncomfortable junction of high spatial requirements, high temporal sampling requirements, and a desire to take these big boxes that are bolted to the skull, with large bundles of cables, and say: I want to integrate this into a package that you could insert in a human. So what's the solution? We realized we're screwed, totally screwed; we're never going to get to a million neurons. Well, we'll make something that samples at 30 kilohertz, with megapixel resolution, and put it on the cortex: temperature rise of maybe 30 degrees. The FDA will allow one degree of temperature rise inside your head, so boiling the cortex is probably a negative outcome. But we realized at some point that obviously the brain was doing it. We know that your brain, even actively thinking very, very hard, eyes working, ears working, all of your cognitive processes going, peaks at about 25 or 30 watts. It's a remarkably efficient machine, computing at gigaflop rates for 30 watts. So for the area we wanted, we wanted to put a device over your visual cortex; surely we could make something within the power budget if we but designed it to operate more in the way the human brain works. And here the key was to realize that the brain is not a synchronously clocked architecture; it uses a spatio-temporal code, but it is sparse and it is asynchronous, and those two features, it turned out, were the key. So we began looking for technologies that would allow us to incorporate sparse, asynchronous coding in both the sensors and the computers, to build these integrated devices that would offer the possibility of getting to the scale we wanted.

The first one we found, among a few examples that DARPA had funded in previous programs, is the Chronocam group out of Paris. They were looking at the traditional limits of a synchronous clocked architecture, where for power reasons you just can't have simultaneous scale and sample rate, so they went to a different architecture with a much more complicated pixel that detects changes and samples only when there is a change. That way they were able to preserve microsecond-scale temporal resolution at megapixel densities at very, very low power, knocking two or three orders of magnitude off the power budget, specifically for sparse images. This architectural transition is one of those great technology situations where you realize that your limitation was only because of an architectural assumption you were making for historical reasons, and if you could just change your perspective a little and go to an architecture more like the brain, boom, new avenues. And the applications are pretty cool: you can see these artificial-retina types of things that you may have seen here and about; that's Christoph Posch, one of the designers of the camera. The applications range across things where telemetry and power are limitations. You can see the traditional camera in the top left and then the two channels of the asynchronous temporal-code camera: I think the top right is the events, and the bottom left is the illumination measurements that are made upon those events. There are clearly applications where this would start to fail. It's the traditional MPEG encoding problem: if you're watching the NBA basketball game and the HD camera pans across the scene and everything is changing at once, the visual system would be overwhelmed, the buffer of events would stack up, and you'd lose imagery. But guess what: that's exactly what your eye does. You can't pan across the basketball scene any better than your TV can; you're fooled when you're watching the TV because you've got a fixed gaze and the TV image is moving. So you see that particular edge case, but this would fail in exactly the same way that your eyes fail, which makes it a great interface technology for direct access to the brain. There are other applications where sampling precision is very important, and when we started to look at how to extract the most information from the neural code, this was specifically the feature that we were looking for: we wanted very, very high temporal precision with very, very low power.
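A rough sketch of the change-detection idea behind that kind of asynchronous pixel: emit an event only when a pixel's log intensity has moved by more than a threshold since its last event. The threshold and the log-intensity model are generic assumptions about this class of sensor, not Chronocam's actual circuit:

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Emit (t, x, y, polarity) events when a pixel's log intensity changes by more
    than `threshold` since the last event at that pixel: the basic idea behind
    asynchronous change-detection ("event") cameras."""
    log_ref = np.log(frames[0] + 1e-6)              # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        delta = log_now - log_ref
        ys, xs = np.nonzero(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]           # reset that pixel's reference
    return events

# A mostly static scene with one moving bright dot: only the dot produces events.
rng = np.random.default_rng(2)
frames = np.full((30, 64, 64), 0.5) + 0.01 * rng.standard_normal((30, 64, 64))
for t in range(30):
    frames[t, 32, t + 10] = 1.0                     # the moving object
evts = events_from_frames(frames)
print(f"{len(evts)} events instead of {30 * 64 * 64:,} dense pixel samples")
```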
You can see the asynchronous camera on the left and the conventional camera on the right: the power consumption is about one one-thousandth on the asynchronous camera, while you're getting better tracking precision in the imagery. It's also designed with very, very high dynamic range, and it functions in a couple of other modalities that make it useful for other military applications like target tracking, but for us it was the key technology for extending the brain interface to scales that were meaningful.

So, from a company spin-out perspective, I decided, upon leaving DARPA, to start Cortical specifically to apply some of the things we learned in trying to figure out how you interface these systems to natural biology and have a meaningful information exchange. The first core idea, the first opportunity, is this notion of how you can use these temporal codes to build richer interfaces at much lower data and power rates; we see lots of opportunity there. But beyond that you start to think of packaging. When you say I want a general neural interface, how do you avoid boiling the cortex, so that wherever I might want an interface to any of the different sensory systems, I can put in a package that's wireless and can be fully implanted, with no wires transiting the skull? That turned out to be the limit of all our experiments: the open wound where the wires were coming out. We had to take them out of those people after about six months of experimentation. But the packaging is evolving incredibly rapidly, from the first experiment a few years ago of light-sheet microscopy on the optical bench, to the first modular commercial product with the kit optical parts, to the hand-machined graduate-student project for what was at that time the smallest fluorescence microscope, which now exists as a spin-off called Inscopix in the Palo Alto area, again from Mark Schnitzer's lab, and there are probably now half a dozen Nobel-prize-winning labs using that technology to understand what the brain is doing. And of course DARPA looks at that and says, well, that's a great graduate-student product, but the FDA is not going to approve inserting that into human brains. But all of those technologies, the webcam, the optics with the filters, the illuminator, all of these are systems we can now replace with planar photonics. So we believed it would be possible to take these types of systems and make a program where you have effectively a universal neural interface: a modular device, fully self-contained, wireless, that can perform the same function for the brain that the modem performed in the early days of the computer and the development of the ARPANET. It standardizes the interface to a system that is otherwise inaccessible, the goal of course being that we want to access whichever part of our brain we'd like to connect to, at very, very high data rates, with high information capacity.

Part of the fun of being at DARPA, of course, is that everyone takes your call, because you've got a lot of money, and I took this opportunity to realize and discover, much to my frustration, that all of the technologies necessary to make this sort of leap in interface were possible, but they were stove-piped, held by all kinds of different companies, and generally university researchers did not have access to them,
and even when the universities had access to them, they would be working in their own discipline. But to make all of this operate as an integrated system, you needed computational neuroscience, photonics, electronics, microelectronics, regulatory, packaging, computer science, electrical engineering, fabrication, the whole nine yards. It was a huge reach just to get people to talk to each other and realize they could each manage their part, but if they would but circulate amongst the experts, they could assemble systems. So really the challenge, more than anything, was ecology and ecosystem building, where we needed to tell people that if they could just get together, great things could happen. We hosted, over the course of three years, a series of workshops, and ultimately we defined the program with a huge pot of money and set out a very, very challenging goal for this next-generation interface. It turned into a kind of three-continent effort, and these were the entities that we funded; we probably engaged about ten times this number and visited over 80 labs to learn who could do what.

So let me tell you a little bit about some of the systems we funded. These are still works in progress, so this is a little bit of a sparse report, just to tell you generically what technologies are being applied to address the problem. This is a prosthetic being developed by a team led by Ken Shepard at Columbia University, where they're making thin CMOS arrays that are purely electronic. They're so thin, in fact, that they're almost tissue consistency, but they have a completely integrated system in that one CMOS device: it includes power reception, bidirectional telemetry, the neural sampling grids, and the neural stimulation grids, and it would take the types of technologies we see in epilepsy localization, where they've got the big patches they put over large parts of your brain, and take them to individual-neuron resolution. So this is a fascinating, very integrated project, and you can see that the participants in the program span universities, startups, large medical companies, and regulatory agencies.

Here's another project, by FVE in Paris, the foundation for vision and hearing, in collaboration with Stanford, the Friedrich Miescher Institute in Basel, the Chronocam company with the asynchronous sensor, and GenSight, which has a technology where they can deliver genetic instructions, cell-specific instructions, to make individual types of neurons optically active. Here on the right you can see these illuminated, colored parts of the retina: they were able to send different genetic instructions to the different retinal components and activate them independently. The idea here is cortical sight: an LED stimulator for an optically activated patch of the visual cortex. Fascinating.

Here is a similar project run by the John B. Pierce Laboratory, Vincent Pieribone at Yale University, in collaboration with Jacob Robinson and Caleb Kemere at Rice and a group at St Andrews working on the LEDs.
This one also uses a thin CMOS device, singly integrated with 3D epitaxial-liftoff III-V materials for the illuminators, so the idea is to be able to read optically and write with the LEDs. But one of the real innovations here is not just to light up an area and have the neurons fluoresce, but to actually write genetic instructions to have the neurons generate their own light, like fireflies. In this case they're using DNA from ctenophores, these tiny little mini-jellyfish that generate their own light. So the idea is that we use synthetic DNA programming to write the instructions so that whenever a neuron fires, it generates light. This actually solves a bunch of problems, but it doesn't generate very much light, so you need a very high-sensitivity detector. This is where Columbia comes in: they are developing avalanche photodiode detectors that would be integrated in CMOS, but again as a single monolithic device, with a relay patch external to the head. This is a graph from the St Andrews folks, who are generating the high-efficiency blue LEDs; the wavelength is very well matched to the sensitivity of the opsins that open the ion channels in the neurons. You can see the level of integration and the different technologies that are necessary beyond understanding the coding: you need the integrated photonics, the integrated power management, the integrated compute, the integrated memory, all of these, and of course they need to be packaged and go through the regulatory machinery.

The detector, the optical detector, is also very interesting, and this is work out of Jacob Robinson's lab at Rice. Here the challenge is that you don't have much offset from the neuron to the detector. It needs to be flat, because you don't want an unsightly bulge in the skull or an open wound of any sort, so it has to be a flat system, but you also want precision focus and high numerical aperture to collect all the photons coming off of those fluorescent materials. So they developed this phase mask, which essentially makes a 3D sensor. It's fascinating. Here's a video of it operating on a 3D surface. It's a little bit of a confusing viewgraph: the bottom-right green thing is the actual pattern being sensed directly by the camera, and you can see the camera positioned at the top, looking at two different channels in an agar target that simulates the human brain, and they pass fluorescent beads through the two channels at different depths. The point is that you can see the reconstructed images computed from that imagery, where you're combining the computational power of the sensor and replacing the optical element, so that there's no relief or bulk in the box of optics that does the sensing. You have a new type of optical microscope which can lie flat, like a tissue, on whatever it is that you're sensing, and give you very, very high-precision microscopic images. That's what we need to look at the brain.
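A toy sketch of the computational-reconstruction idea behind such flat, mask-based sensors: the sensor records the scene convolved with the mask's point-spread function, and an inverse filter recovers the point sources. The random speckle mask and the Wiener-style inverse are generic stand-ins, not the actual phase-mask design or algorithm used at Rice:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
F = np.fft.fft2

# Toy scene: a dozen fluorescent point sources (stand-ins for labelled neurons).
scene = np.zeros((n, n))
scene[tuple(rng.integers(20, n - 20, size=(2, 12)))] = 1.0

# Stand-in "phase mask" point-spread function: a broad random speckle pattern.
yy, xx = np.mgrid[:n, :n]
envelope = np.exp(-((yy - n / 2) ** 2 + (xx - n / 2) ** 2) / (2 * 30 ** 2))
psf = rng.random((n, n)) * envelope

# Forward model: the flat, lensless sensor records scene convolved with the psf, plus noise.
H = F(np.fft.ifftshift(psf))
measurement = np.real(np.fft.ifft2(F(scene) * H)) + 0.001 * rng.standard_normal((n, n))

# Reconstruction: Wiener-style inverse filter using the known mask response.
reg = 1.0   # regularization constant, tuned for this toy only
recovered = np.real(np.fft.ifft2(F(measurement) * np.conj(H) / (np.abs(H) ** 2 + reg)))

top_true = set(np.argsort(scene, axis=None)[-12:])
top_rec = set(np.argsort(recovered, axis=None)[-12:])
print("recovered peaks coinciding with true sources:", len(top_true & top_rec), "of 12")
```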
A similar project, optically, is also being conducted by Berkeley, the University of Paris, and Argonne labs. This is more of a standard architecture, with a spatial light modulator and a camera in a folded configuration. This one is probably more suitable for experiments: it's got an open skull with a screw-in fixture and an aperture, and there are still going to be some technical challenges in passing that through the FDA or making new systems that are small enough to be implanted.

Now, one of the other challenges is that, up to this point, the things we talked about funding were monolithic units. For parts of the brain like the visual cortex or the auditory cortex, where you want very, very high precision over a very well-defined area, the monolithic units seem appropriate, but there are all kinds of therapies and interfaces we could imagine building that need a much more distributed contact with larger areas of the brain, where we don't necessarily have a good idea of how dense the connection needs to be, because we don't understand what the coding is in those parts of the brain. We know that, for example, there are centers that govern speech, and we want an interface that can essentially read what you're imagining saying from your brain. Imagine you have a relative, or yourself, one of these locked-in people who appear to be in a vegetative state but are actually conscious underneath; they just can't control their musculature to communicate. This is the sort of system where, with a distributed set of contacts, we could parse what they're imagining saying, electronically. This system from Paradromics, a spin-out of Krishna Shenoy's and Nick Melosh's labs at Stanford, in collaboration with folks there, the Francis Crick Institute, Bob Knight's lab at UC Berkeley, partners doing circuit design, and a contract medical device manufacturer handling the regulatory work, takes a series of optical or electronic fibers with glass cladding, bundles them, polishes the interface, and abuts it to a CMOS sensor, or to an array to write. Then you've got these flexible fibers which you can distribute, and which are flexible enough not to just penetrate and damage nerves and blood vessels, but to wiggle their way in and contact the systems. This is fundamentally an electronic interface with a long wire to the neuron.

The last one, which is also very interesting, is led by Brown University, by Arto Nurmikko, in collaboration with Stanford, the Wyss Center in Switzerland, the Salk Institute in San Diego, and Qualcomm for the wireless systems. This is almost like a mini cellular network for a shower of tiny, miniature neurograins, which are distributed and scattered across the brain interface of interest, with a relay patch on top of the skull. These are miniaturized systems that have power delivery, backscatter-based low-power telemetry, and all of the neural sensing and stimulation technology isolated in individual grains. What was really interesting about this one was, first of all, being able to scatter it over wide parts of the cortex and interface with larger parts of the brain. But also: you think of a brain-machine interface as an application where you read stuff out and then write stuff in, but here you could have an immediate jump relay. If, for example, you've got schizophrenia, which we can now look at in terms of the physical underpinnings of the psychiatric disease and say there's a deficit in connectivity between two brain regions, we could make artificial neural links to connect disparate parts of the brain that have a missing component. So there's a whole series of things where we think we can both correct and possibly even enhance the connectivity between parts of your cognitive processes. Kind of a tantalizing thought.
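To make the "artificial neural link" idea slightly more concrete, here is a purely hypothetical sketch of the relay loop such a system implies: detect events in one region, map them through a transform, and issue stimulation commands to another region. Every class, mapping, and threshold here is invented for illustration and corresponds to no real device API:

```python
import numpy as np

rng = np.random.default_rng(4)

class NeuralLinkRelay:
    """Hypothetical relay bridging two brain regions: read spike events from region A,
    pass them through a transform, and emit stimulation targets in region B."""

    def __init__(self, n_in, n_out):
        # A fixed random linear mapping stands in for whatever transcoding the
        # real system would have to learn from data.
        self.mapping = rng.normal(0.0, 0.3, size=(n_out, n_in))
        self.threshold = 1.5

    def step(self, spikes_a):
        """spikes_a: binary vector of which region-A channels fired this tick."""
        drive = self.mapping @ spikes_a
        return np.flatnonzero(drive > self.threshold)   # region-B channels to stimulate

relay = NeuralLinkRelay(n_in=256, n_out=128)
for tick in range(5):
    spikes = (rng.random(256) < 0.05).astype(float)     # sparse activity in region A
    targets = relay.step(spikes)
    print(f"tick {tick}: {int(spikes.sum())} input spikes -> stimulate {targets.size} region-B channels")
```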
So the great news, though, and really one of my stated goals coming into DARPA, was that by creating this ecology, spending the last three years getting the right people together and getting the right industries aware that they have a role to play and opportunities in this field, even though it might seem quite far off in the future, it's actually moving incredibly fast, and we were very, very successful in getting very large companies to participate. We started the neural engineering industry group, and since we began the workshops, started this work, and got the whole community together, a few hundred people participating across a couple hundred institutions, you've seen Neuralink and Elon Musk's investment, where he's talking about targeting 700 million dollars of investment in this field; Kernel, founded by Bryan Johnson, with a hundred million dollars; the mental typewriter by Facebook; and Galvani Bioelectronics, the joint venture with 700 million dollars from Google and GlaxoSmithKline. We have been successful in catalyzing what we see as a new industry. So if NESD is successful, we expect that within three years we should have a digital interface with APIs to your primary senses, and I would hesitate to say we're going to cure blindness and deafness and aphasia, but we should make a good amount of progress. Cochlear implants are the state of the art today, with about 6 to 15 contacts; we're talking about having tens of thousands to millions. Think about the improvement in precision. If we're successful in developing the codes for the visual system, we'll go from phosphenes and blurs of white light to starting to write codes to specific neural encoding elements, so that you have sharp edges and fine points and color and motion. And then, of course, for the people who are locked in with ALS and other diseases that inhibit your ability to communicate verbally, we believe we will also be able to read your speech.

So what's next? Well, this is the part where I want to get you thinking and tell you a little bit about what we're working on at Cortical. Our goal, in a sense, is beyond restoration. When I was at DARPA, the challenge was: how can we heal the infirmities of the damaged mind, loss of an eye, loss of an ear, stroke, neurological deficit, Parkinson's? But thinking beyond that, to the next generation, and for you technologists who are developing the next generation of processors and switches and memories and neural networks and learning systems: our goal is to free the mind from the limitations of even healthy bodies. What might come beyond restorative prosthetics? Whether it's digital telepathy, being able to read out what someone is intending to communicate, or communicating directly with them electronically through one of these interfaces; or virtual reality, where you could begin writing information display directly into your senses without any other machinery than your implant: that's coming. And there have already been experiments in animals of collective cognition, of monkeys with their prefrontal cortices wired together so that together they can do a task better than any of the three individual monkeys. It's coming for us too. And when we talk more specifically about what the next generations of artificial intelligence architectures are going to look like, if we take what we learned in making these artificial systems to connect to the brain, what do we know? Well, there are a couple of core areas I can point to that would be transformative if we can enhance them, and these are the areas that Cortical is focusing on.
I think this paper by Baum and colleagues had a great title: "Modular segregation of structural brain networks supports the development of executive function." As human brains, and of course other primate brains, mature, they start out deeply and densely connected, all to each other, kind of like most of our deep learning networks are today. But as you grow older, go through puberty and your teens, and begin to develop executive function, that happens through functional decomposition and abstraction into different, unique parts of the brain that accomplish and represent different things, and it's through that isolation, through that balkanization into more complex architectures, that the more complex brain capabilities emerge. It is literally true that with your preteen, your tween children, you can yell at them and say, don't you ever think ahead, can't you imagine the consequences of your actions? And the answer is no: they don't have the physical machinery to do those computations, they literally do not have it. So you can rail about it as a parent all you want; what you really have to do is just be a good example and wait until the circuitry emerges. What I would say is that our deep learning networks today are pre-pubescent components of a thinking system that haven't gone through the maturing process yet.

But let me be a little more specific. One of the key areas that Demis Hassabis and the DeepMind people are starting to poke around in is what I would call assembling sub-AIs into larger, integrated meta-AIs. When you start to look at the individual systems like DeepMind or Watson, these knowledge stores, they're very bespoke; it's almost as if these systems are taking some analog of a tiny little subsystem of the brain and developing that functionality. Alexa and Siri and the Google voice translator are systems that emulate just a small portion of the auditory cortex. The Facebook and Google image identifiers are using convolutional networks to mimic a tiny portion of the visual cortex. But no one yet has successfully built the center of all those things. By the way, in case you're wondering where this art came from, this is a wonderful piece of actual art by Greg Dunn, taken from an actual cross-section of a real brain, showing the connectivity between the brain regions along a plane that cuts you almost right down the middle. But that integration and synthesis component: you can see that every single part of the brain is connected to that central area. That area is called the hippocampus. We used to think that the hippocampus was some of the machinery that would store memories, and it does, to a certain extent, but it turns out, as we're starting to piece together, that the way your brain really works is that it's almost like a switching network that activates the entire machinery of your cortex. When you're remembering something, or when you're trying to process or recognize something, it's actually the activity of your entire brain, different parts of it lighting up at different strengths, that characterizes different representations, and it's through this integration in the middle that everything translates from one domain to the other. We talk about our AI not being able to generalize out of its domain. How do we synthesize auditory and visual information? If one of you were to speak in the audience, I probably couldn't hear you, but if I look very closely, I can barely hear you and I can see your lips moving, and I will be able to integrate the visual information of what your lips are doing with the bare amount that I hear, and estimate much better what you're saying.
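A minimal sketch of that audio-visual integration: two noisy, modality-specific estimates of the same word are mapped into one shared code (here, log-likelihoods over a tiny vocabulary) and fused, so a clear view of the lips sharpens a barely audible word. The vocabulary, reliabilities, and independence assumption are all illustrative, not a model of the hippocampus:

```python
import numpy as np

rng = np.random.default_rng(5)
vocab = ["ship", "chip", "sip", "zip"]
true_word = 1                                    # the speaker actually said "chip"

def noisy_loglik(true_idx, reliability):
    """Per-modality log-likelihoods over the vocabulary: a peak at the true word,
    separated from the rest according to how reliable that modality is (a stand-in
    for a real acoustic or lip-reading model)."""
    scores = -np.full(len(vocab), reliability)
    scores[true_idx] = 0.0
    return scores + rng.normal(0.0, 1.0, len(vocab))   # sensory noise

audio = noisy_loglik(true_word, reliability=0.5)        # barely audible: unreliable
visual = noisy_loglik(true_word, reliability=2.0)       # clear view of the lips

def posterior(loglik):
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

for name, ll in [("audio only", audio), ("visual only", visual), ("fused", audio + visual)]:
    p = posterior(ll)
    print(f"{name:11s} best guess = {vocab[int(np.argmax(p))]:4s}  confidence = {p.max():.2f}")
```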
The same is true for every context, relative to our memories and how we compare what we've learned with what we're seeing. All of this is an integration that assumes one very important thing which we haven't done in our learning systems yet: it requires that there's a common code that is semantically relevant to both sensory and memory integration. Our deep learning systems don't really do much of that, so I think this is one of the areas where we're talking about building a common code that has a general representation across all the senses and memory systems, so we can begin to compute meaningful things with learning systems there. So building an artificial hippocampus is the first challenge.

The second one that I want to talk about is anticipation. This little fist at the bottom, this is your cerebellum, in the back here; it's about a fist-sized chunk of tissue. Now, when I was learning neuroscience as an undergraduate and graduate student in the 80s, everyone was pretty convinced that your cerebellum was involved in motor control, and in fact you talk about muscle memory, learning to ride a bicycle without thinking about it, skating without thinking about it, trajectory estimation, the ball is coming at you and you need to catch it, and it turns out that that's true. If you start to look at the integrated components of the hippocampus (I'm going to come back to some of these) and the cerebellum: this is what the cerebellum looks like, these incredible sheets of neurons, all in a line, and it's a system which learns trajectories. The neural signals flow perpendicular to these sheets, and as you learn a new routine, a new dance, a new language, how to speak, how to talk, how to move, the connections between the pattern generators and these sheets are refined. But the mind-blowing aspect of the cerebellum, the thing we've learned just in the last couple of years, is that that fist of neurons in the back of your head actually has more neurons than the entire rest of your brain. It just doesn't make sense; it didn't make sense that all it was doing was motor control refinement. And it turns out that was not, in fact, all it was doing, because the cerebellum, it turns out, is not just connected to your motor cortex; it's connected to your entire brain. So we now believe, or I should say some people, and I myself, believe, that the core function, the unitary function, of the cerebellum is actually quite simple: it's to project future state. Not just of your muscles and your body; it's to project the entire state of your cognitive processes: what you're feeling, where you are, who you're looking at, what word you expect to hear, what is the next thing that's going to come out of my... you all know what I'm going to say, and when you're surprised, now you're paying attention, because you guessed something different. That mechanism to project your future state, to anticipate what is going to be, turns out to be fundamental. So we look at this sort of opportunity and say: what if we could build an artificial cerebellum? And it turns out that for the neural interface work, it was critical to make the thing work at all.
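A minimal sketch of "project the future state" as a learning problem: fit a model that predicts the next state of a trajectory from a short window of its recent past, then roll it forward to anticipate several steps ahead. The toy limb trajectory and the linear autoregressive model are assumptions for illustration, not the architecture being built at Cortical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "trajectory": a 2-D state (say, a joint angle and its velocity) evolving smoothly.
t = np.arange(0.0, 60.0, 0.01)
state = np.stack([np.sin(t) + 0.05 * rng.standard_normal(t.size),
                  np.cos(t) + 0.05 * rng.standard_normal(t.size)], axis=1)

window = 10   # how much recent history the predictor sees

# Build (history -> next state) pairs and fit a linear forward model by least squares.
X = np.stack([state[i - window:i].ravel() for i in range(window, len(state))])
Y = state[window:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Anticipate: roll the model forward from the most recent history.
history = state[-window:].copy()
for step in range(1, 6):
    nxt = history.ravel() @ W
    print(f"predicted state {step} steps ahead: {np.round(nxt, 3)}")
    history = np.vstack([history[1:], nxt])
```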
Here we were looking at a motion-capture-labeled monkey as he's walking on a treadmill. We had his muscles wired up (that's the red signal on the top right), you had the neural state on the bottom left, and the brain implant to look at the code that was generating it, and so we were able to translate between all these domains. But the performance of the artificial system sucked as long as we looked at fixed-state, synchronous snapshots of the state of the brain. It wasn't until we looked at the trajectories of the systems that we finally understood the role of time, the importance of anticipation, the projection of the future state. So our goal at Cortical is to extend this trajectory estimation to the entire cognitive arena, to all the types of mental processes you do, and we expect significant enhancement across all of them.

So let me start to close with a couple of provocative questions. It's not really a question of if this is going to happen anymore, because we're already building the machinery to do it; within three years we expect some of this to be working. What do you think is going to happen to written and spoken language when, instead of just looking at the phonemes at the auditory-cortex level, we can begin to look higher, at the object-representation level in your brain? What does telecom and media look like when we can have digital APIs not just to audio and video but to your actual senses and emotions? What does your relationship with your spouse look like? Our goal is to be able to solve problems we couldn't solve before, by integrating the senses, by embodying fundamental cognitive abstractions of how the patterns of the brain work, and by being able to anticipate and imagine future consequences. And that simple set of three advancements, even if they're not superbly functional in their first iteration, is kind of the rudiments of having value judgment and ethics. So the machinery that we need: the processors, the graph representations, the systems to emulate, understand, and implement temporal codes, to be able to represent these abstractions. Now, once we can emulate one of those abstractions, we can start to spawn recursive versions of it, so one AI instance can begin to spawn other AI instances and anticipate how your own actions and decisions influence others, and I would argue that this is the root of digital empathy. And fundamentally, in order to transcend the current AI systems, which today are more or less tools that we use, and embody them with sufficient capability that they can anticipate the consequences of their actions and recognize what's good and bad for themselves and others, without that we won't be able to turn them into actors that we trust.

But here I'm going to leave you with my last provocative suggestion. When you think of yourself as a conscious being, what is the root of that consciousness? I've been working on this for the last year, and I think the fundamental mechanism that underlies all the complexity is the future prediction of your conscious state. The machine is not magical; it's something that's just continually estimating what you're about to feel, what you're about to see, what you're about to hear, and, if you take this action, what you expect to happen. That is consciousness; I don't think it's any more complicated. So I think, and I'm being very careful here, I'm not suggesting that we're going to go out and make a conscious machine, but if we can make an artificial machine that can begin to appreciate and embody some aspects of ethics and consequence, and earn trust, that's when I think AI is really going to be powerful. Thank you very much. [Applause]
All right, we've gone a couple of minutes over, but we were having so much fun back there that we decided to let Philip go a little longer and give him an extra five minutes for questions here. So if you do have questions, please get to the mics. We'll be about ten minutes behind, and we'll squeeze it out of one of the breaks. Thank you.

Anyone? Ask me anything.

Yeah, Charlie Demerjian, SemiAccurate. Have you, or do you have any plans to, test any of these implants on yourself, and if not, why not?

That's a great question. Rev one, probably not. There's a natural dynamic of risk versus reward depending on what sort of deficit you have. The early implants require invasive surgery, and the FDA wouldn't even let that happen to someone unless there was a tremendous benefit to be earned for the risk, so the early people are the paralyzed, the paraplegic, the blind, and so on. But we do expect that every year they'll become more integrated, less invasive, and eventually non-invasive, and at that point there's no FDA involved anymore. You can imagine, in maybe three or four, maybe five generations, having a general brain implant that's successful without a regulatory process.

Are viruses like Code Red conscious? I would say, in a rudimentary way, yes. The question is what level of complexity there is in what you can predict, and that depends on how differentiated and how complex the prediction machinery is, and what abstractions it can represent and move forward in time. So I would say: very limited predictive power, and the virus is conscious only in that very limited sense.

We're talking about empathy, digital empathy, morality. Does that mean that killing computer viruses is actually a crime?

I suppose, if you're a virus bigot, that probably wouldn't be judged badly at this time. Anything else?

Yeah, if you can figure out the electrical impulses behind a thought, and you can feed the same impulses to some other brain, does the process remain the same?

Well, the architecture is under your control, and I think that one of the challenges in developing these neural transcoding systems is really understanding the role of randomness. I think a lot of neuroscience has confused what's really happening randomly in biological systems with the probabilistic systems being used to describe them. In the former, there's a naturalistic function which has very repeatable behavior that's not random at all; the latter is a statistical description, because you've got limited sampling capability. So there's an inclination to use the sampling technique to try to make something deterministic, but that's an inverse mapping that doesn't really work very well. The randomness that's really happening in biological systems is more related to how the individual biological computing element is synchronized with the external stimulus at any given time, so it might have been active, or it might have been quiescent and recharging. There is randomness in what you expect to happen, depending on time synchronization, but the machine is very, very repeatable, especially in its aggregate shared computing function. It's a deeper conversation, but find me afterward and we can talk more about it. Thank you.
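To illustrate the distinction in that last answer between genuine randomness and deterministic machinery sampled out of phase, here is a toy sketch in Python (entirely an illustrative assumption, not anything shown in the talk): a unit with a fixed recharge cycle responds deterministically, yet looks unpredictable to an observer who ignores its phase, while the population-level response stays stable from stimulus to stimulus.

    # Toy model, made up for illustration: each "unit" fires iff a stimulus
    # lands exactly when its recharge cycle completes. Nothing is random in
    # the unit itself; the apparent randomness comes from not knowing phase.
    import random

    RECHARGE = 7  # time steps a unit needs to recharge after firing

    def responds(phase, stim_time):
        # Deterministic rule: fire only when the cycle has just completed.
        return (stim_time - phase) % RECHARGE == 0

    random.seed(1)
    phases = [random.randrange(RECHARGE) for _ in range(1000)]  # unknown to the observer

    stim_time = 100

    # Across units, with phases unknown, responses look scattered and "random".
    print("10 single-unit responses:", [responds(p, stim_time) for p in phases[:10]])

    # The same unit with the same stimulus timing is perfectly repeatable.
    print("one unit, 5 repeats:     ", [responds(phases[0], stim_time) for _ in range(5)])

    # The aggregate (population) response is stable from stimulus to stimulus.
    for t in (100, 101, 102):
        frac = sum(responds(p, t) for p in phases) / len(phases)
        print(f"population response at t={t}: {frac:.3f}")

The toy only captures the framing: what an experimenter with limited sampling describes as a firing probability is, per unit, a deterministic function of timing relative to the stimulus.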
Maybe two more questions? Yeah. Hello, Dennis from Cisco. The ability to make a neuron fluoresce upon stimulation, and to activate a neuron by illuminating it, sounds like a big breakthrough all by itself. What is that? Is it a plasmid, is it CRISPR, or could you give more detail?

Yeah, so essentially you're talking about a DNA instruction that synthesizes a protein using ribosomes that are part of the cellular machinery. As for how you deliver that DNA instruction, there's a variety of techniques. You can use viral injection: you take a virus, empty out its natural genetic instructions, cram the payload with your code, and administer the virus, and you have to design the virus so that it heads to the tissue that you want, or maybe control the injection locally. There are other technologies that use mRNA instead of DNA, so it's a temporary thing that can be amplified and controlled in duration. And there are some where you just spray it on and it diffuses in, although that's a little bit more challenging in certain parts of the body. But again, a longer conversation; there are probably a dozen methods, and they're growing by the month.

All right, unfortunately my cerebellum anticipated that the talk is over and we're out of time, so could we thank Dr. Valda one more time. [Applause]
Info
Channel: hotchipsvideos
Views: 13,497
Keywords: artificial intelligence, human-machine interface, brain-computer interface, artificial limbs, DARPA, Wiseteachers.com
Id: PVuSHjeh1Os
Length: 70min 40sec (4240 seconds)
Published: Wed Aug 30 2017