SUPERCUT: Neuralink Show & Tell Event (25 Minutes)

Captions
So we've got an amazing number of new developments to share with you that I think are incredibly exciting, as well as to tell you about the future of what we're planning to do here. The overarching goal of Neuralink is to create, ultimately, a whole-brain interface: a generalized input-output device that in the long term literally could interface with every aspect of your brain, and in the short term can interface with any given section of your brain and solve a tremendous number of things that cause debilitating issues for people.

We are all already cyborgs in a way, in that your phone and your computer are extensions of yourself. I'm sure you've found that if you leave your phone behind, you end up tapping your pockets; it's like having missing-limb syndrome. Leaving your phone behind is kind of like a missing limb at this point. You're so used to interfacing with it, you're so used to being a de facto cyborg.

So what's the limitation on a phone or a laptop? The limitation is the rate at which you can receive and send information, especially the speed with which you can send information. If you're interacting with a phone, it's limited by the speed at which you can move your thumbs, or the speed at which you can talk into your phone. This is an extremely low data rate, maybe 10, optimistically 100, bits per second, but a computer can communicate at gigabits, terabits per second.

So Justin Roiland is in the audience. Hi, Justin. This is a little Rick and Morty reference here; there's a great Rick and Morty episode about intelligence enhancement of your dog, and what's the worst that can happen.

This video is now 18 months old. This is Pager, who is playing monkey MindPong. Pager has a Neuralink implant in this video, and the thing that's interesting is that you can't even see the implant. We've miniaturized the implant to the point where it matches the thickness of the piece of skull that is removed, so it's sort of like having an Apple Watch or a Fitbit: we're replacing a piece of skull with, you know, a smartwatch. Pager first learned to play pong with a joystick. I was like, that's novel, I didn't know monkeys could play pong, but they can. Then we took the joystick away and had him play Neuralink telepathic video games, essentially.

What we've been doing since then is being on the very difficult journey from prototype to product. We've been working hard to be ready for our first human, and obviously we want to be extremely careful and certain that it will work well before putting a device in a human. We have submitted, I think, most of our paperwork to the FDA, and probably in about six months we should be able to have our first Neuralink in a human. [Applause]

We do everything we possibly can to test the devices before going into a human, before even going into an animal. We do benchtop testing, we do accelerated life testing, we have a fake brain simulator that has the texture of a brain and emulates it, though it's sort of rubber. We do everything we possibly can with rigorous benchtop testing, so we're not cavalier in putting devices into animals. Since the Pager demo, we've expanded to work with a troop of six monkeys, and we've actually upgraded Pager. They do varied tasks, and we do everything possible to ensure that things are stable and replicable, and that the device lasts for a long time without degradation.
What you're seeing there looks like the Matrix, but that's actually a real output of neural signals. It's not a simulation or just a screensaver or something; those are actual neurons firing. Sake, one of our monkeys, is typing on a keyboard, and what's really cool here is that Sake is moving the mouse cursor using just his mind. It's so important to show that Sake actually likes doing the demo and is not strapped to the chair or anything. The monkeys actually enjoy doing the demos, because they get the banana smoothie and it's kind of a fun game.

The first two applications we're going to aim for in humans are restoring vision — and this is notable in that even if someone has never had vision, ever, like they were born blind, we believe we can still restore vision — and the motor cortex, where we would initially enable someone who has no ability, or almost no ability, to operate their muscles, sort of a Stephen Hawking type situation, and enable them to operate their phone faster than someone who has working hands. As miraculous as it may sound, we're confident that it is possible to restore full-body functionality to someone who has a severed spinal cord.

I want to emphasize again that the primary purpose of this update is recruiting. If there's one message I want to convey, it is that if you have expertise in creating advanced devices like watches, phones, or computers, then your capabilities would be of great use in solving these important problems.

Our first step along these dimensions is our device, which we call the N1 implant. It's about the size of a quarter, and it has over 1,000 channels capable of recording and stimulating, microfabricated on flexible thin-film arrays that we call threads. It's fully implantable and wireless, so no wires: after the surgery the implant is under the skin and it is invisible. It also has a battery that you can charge wirelessly, and you can use it at home.

Similarly, for implanting our device safely into the brain, we built a surgical robot that we call the R1 robot. So here it is: that's our R1 robot, with our patient Alpha, who is lying comfortably on the patient bed. This is what we call the targeting view. What you're seeing is a picture of our brain proxy: the pink represents the cortical surface that we want to insert our electrodes into, and the black represents the vasculature that we want to avoid. Here's another view, real quick. On the left is the view of the insertion area, and on the right, what the robot's going to do is peel the threads one by one from their silicon backing and insert them into the targets that we predetermined in the targeting view. There you go, that's the first insertion. It's quite accurate, but it's a bit slower than we would like.

Cursor control is the foundation for interacting with most computer applications, so since then we've been working to improve cursor speed and accuracy. As you can see, it's much, much faster — almost twice as fast. We are working on designing mouse and keyboard interfaces for the brain. The way we do that is by training Pager and his friends on a variety of computer tasks and then designing algorithms to predict the behavior. Here you can see a few example tasks in different phases of monkey training: for example, left and right click, click and drag, cursor typing, swipe typing, handwriting, and even hand gestures.
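The talk doesn't spell out the decoding model, but as a concrete illustration of mapping neural activity to cursor movement, here is a minimal sketch using a regularized linear map from binned spike counts to 2-D cursor velocity. The synthetic data and the use of scikit-learn's Ridge are illustrative assumptions, not Neuralink's actual stack.

```python
# Minimal sketch: decode 2-D cursor velocity from binned spike counts.
# All names, shapes, and data here are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Fake training data: 5000 time bins x 1024 channels of spike counts,
# paired with the cursor velocity (vx, vy) observed in each bin.
spike_counts = rng.poisson(lam=0.5, size=(5000, 1024))
cursor_velocity = rng.normal(size=(5000, 2))

# Fit a regularized linear map from neural activity to velocity.
decoder = Ridge(alpha=1.0).fit(spike_counts, cursor_velocity)

# At runtime, each new bin of spike counts yields a velocity command.
new_bin = rng.poisson(lam=0.5, size=(1, 1024))
vx, vy = decoder.predict(new_bin)[0]
print(f"decoded velocity: ({vx:.3f}, {vy:.3f})")
```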
I like it when I click on a button and can physically feel the button being pressed. When a potential N1 user attempts to click, they won't be able to feel it. An example of how we are addressing that is by providing real-time visual feedback that represents the strength of the neural click, by changing the color of the cursor.

Typing on a physical keyboard is much faster and easier than typing on an iPad keyboard, and this will make brain control much faster and easier to use. We started this project with our monkeys, but of course they don't know how to write, so to mimic writing we trained Angela, one of our favorite monkeys, to trace digits on an iPad. Here you can see him tracing the digit 5 and the digit 2. Then we recorded his neural activity with the N1 device, but now, instead of decoding the cursor velocity, we decode in real time the digit that he's tracing. The problem with this second approach is that although it can increase the typing rate, it requires hundreds of examples and samples of each of the digits and characters we want to classify. This would not scale.

The way we are solving that is by indirection: instead of decoding the digits directly, we first decode the hand trajectory on the screen, and once we've decoded the hand trajectory, we can use any off-the-shelf handwriting classifier to predict the digits and characters — for example, classifiers that are trained on the MNIST dataset. Why is this so important? It's important because now we can potentially decode any character in any language with only one neural decoder for hand trajectory. It means that you can write in English, Hebrew, Mandarin, or even monkey language, and we can understand you wanted a banana.
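As an illustration of the two-stage idea described above — decode a hand trajectory, then hand it to an off-the-shelf classifier — here is a self-contained sketch. The stand-in "decoded" velocities and the choice of scikit-learn's 8x8 digits set (instead of MNIST, to keep the example dependency-light) are assumptions for illustration only.

```python
# Sketch of the two-stage idea: integrate decoded pen-tip velocities into a
# trajectory, rasterize it, and hand it to an off-the-shelf digit classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

# Stage 2: any off-the-shelf classifier works once we have an image.
digits = load_digits()
classifier = KNeighborsClassifier(n_neighbors=3).fit(digits.data, digits.target)

def rasterize(trajectory, size=8):
    """Render an (N, 2) pen trajectory into a size x size grayscale image."""
    img = np.zeros((size, size))
    t = trajectory - trajectory.min(axis=0)
    t = t / (t.max() + 1e-9) * (size - 1)
    for x, y in t.astype(int):
        img[y, x] = 16.0          # load_digits uses 0..16 intensities
    return img.reshape(1, -1)

# Stage 1 stand-in: pretend these velocities came from the neural decoder,
# then integrate them into a pen trajectory (here, a crude "1" stroke).
decoded_velocities = np.tile([[0.0, 1.0]], (20, 1))
trajectory = np.cumsum(decoded_velocities, axis=0)

print("predicted digit:", classifier.predict(rasterize(trajectory))[0])
```

The design point is that only stage 1 touches neural data; stage 2 is ordinary computer vision, which is what makes the approach language-agnostic.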
Here's what we want that experience to feel like. In this video you can see Sake walking over to his MacBook and choosing to work on his typing task. The entire decoding system works out of the box, and it feels totally plug and play.

The first step to achieving this kind of high reliability is to test extensively offline. A typical flow for using the N1 Link is to connect over Bluetooth, stream neural activity out of the brain, and then use that neural activity to train decoders and do real-time inference. We've built a simulation of exactly this sequence, but instead of using a monkey with an implant, we use a simulated brain that injects synthetic neural activity into an implant sitting in a server rack. From the point of view of that implant, it's in a real brain.

Here on the right, you can see that this bias is making it hard for the cursor to move to the upper right corner. You see it struggling to make it up to the upper right, and then it moves much more effortlessly down to the bottom left. We're trying many approaches to mitigate this problem. Some examples include building models on large datasets spanning many days of data, to try to find patterns of neural activity that are stable across days. Another approach we're trying is to continuously sample statistics of the neural activity on the implant and use the latest estimates to pre-process the data before feeding it into the model. Over the last year, we've made tremendous improvements to the stability and reliability of our system, and we've been able to demonstrate consistent high performance across many sessions and many months.

We designed custom neural sensors, which include both analog and digital circuitry, to record and stimulate across 1,024 independent channels. We face challenges across all three major metrics: performance, power, and area. Not only do we have to fit all 1,024 channels into a single quarter-sized implant, but we also have to measure spiking activity less than 20 microvolts in amplitude. Today I'd like to focus on the last challenge I mentioned: power. Power consumption is important to us because we want to give future users a full day of use of their implant without any interruption for charging. Back in 2018, we were sending every sample from every channel off the device for processing, which burned a ton of power. In 2020, we brought spike detection onto the chip. As you may know, neurons transmit information by firing, so simply monitoring for these spikes, and only sending these spike events off the implant, acts as a very efficient form of compression. Over the past two years, we've continued to make optimizations within the ASIC, dropping the total system power consumption down to just 32 milliwatts and doubling battery life.

Let's take a look at our on-chip spike detection algorithm, which makes our battery-powered implants possible. We first start by applying a 500 Hz to 5 kHz bandpass filter to remove noise that's out of band. Next, we use an estimate of the noise floor to generate an adaptive threshold per channel. And finally, our spike detector module identifies three key points of a spike. Identifying three points allows us to detect not just the presence of a spike, but the shape of the spike as well; this can be extremely important for distinguishing between multiple neurons adjacent to a single channel.

Today I'd like to focus on one of the many optimizations that we've made in our latest chip, one that cuts system power by 15 percent. In our latest chip, we split the state into two parts: a hot state and a cold state. The hot state is accessed on every cycle, while the cold state is only accessed once the threshold is crossed, reducing the average access width and saving power.
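Here is a floating-point sketch of the spike-detection recipe as described: band-pass 500 Hz to 5 kHz, then an adaptive per-channel threshold derived from a noise-floor estimate. The sample rate, the MAD-based noise estimator, and the threshold multiplier are common-practice assumptions, not published specs; the real ASIC implements this in fixed-point hardware and additionally captures three key points per spike.

```python
# Sketch of the on-chip spike-detection recipe, in floating point for clarity.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20_000  # assumed sample rate, Hz

def detect_spikes(raw, fs=FS):
    # 1. Band-pass 500 Hz - 5 kHz to remove out-of-band noise.
    sos = butter(4, [500, 5000], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)

    # 2. Adaptive threshold from a robust noise-floor estimate
    #    (median absolute deviation is a common choice).
    noise_floor = np.median(np.abs(filtered)) / 0.6745
    threshold = -4.5 * noise_floor   # negative-going spikes

    # 3. Report threshold crossings; the real detector also records three
    #    key points per spike so downstream code can recover its shape.
    crossings = np.where((filtered[1:] < threshold) &
                         (filtered[:-1] >= threshold))[0] + 1
    return crossings, threshold

# Synthetic 1-second trace: noise plus a few injected downward spikes.
rng = np.random.default_rng(1)
trace = rng.normal(0, 5e-6, FS)
for t in (2_000, 9_000, 15_500):
    trace[t:t + 20] -= 40e-6
spikes, thr = detect_spikes(trace)
print(f"threshold {thr * 1e6:.1f} uV, detected {len(spikes)} spikes")
```

Note how this doubles as compression: only the spike events (a handful per second) leave the implant, instead of 20,000 samples per second per channel.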
This is a view of the insertion site, similar to the one that DJ showed you earlier, but instead of the targeting reticles, if you look closely you can see that all 64 threads, each carrying 16 electrodes, have been inserted into the brain proxy while avoiding vasculature — and all just within the past 20 minutes.

In pursuit of these goals, our charging system has gone through several engineering iterations. If you watched our big demo in August of 2020, Gertrude was implanted with a version of the N1 charged with our first-generation charger. That device was implemented in a small puck package, and later separated into a remote coil and battery base. This charger was challenging to use; however, we learned a lot through its implementation. With the addition of one new outer control loop, plus a banana smoothie pump, the troop has been trained to charge themselves. So let's see how Pager charges his implant. On the right, we're streaming real-time diagnostics from Pager's N1. When he climbs up and sits below the coil, you can see the charger automatically detects his presence and transitions from searching to charging. We see the regulated power output on a scale of zero to one, and the current driven into his battery.

When we started building implants, in our original implementation of these systems we used off-the-shelf components to start automating tests quickly. However, these systems were constructed in a relatively ad hoc fashion and were very difficult to maintain, and this meant that testing quickly became the bottleneck for development. To alleviate this, the hardware and software teams developed a new system which integrates all the required components onto a single baseboard. We can then put the charger and implant hardware on individual modules that plug into this baseboard, including one board with opposing coils so that we can test charging performance. This architecture allows us to rapidly iterate on different hardware prototypes, because we can simply drop them into this system and reuse all the testing infrastructure. Additionally, we can host the current and next generation of our neural ASICs on FPGAs and plug those into this board as well, and that allows us to test a whole extra layer altogether. That's how we generated this rather Inception-like image here on the right: what you're looking at is spiking activity emitted from some of our simulated neural sensors, streamed through the entire system over Bluetooth, and then displayed on a phone. This allows us to test everything in one system, from chip to cloud. This system is one-fifth the cost and one-fifth the volume, and it's very easy to manufacture. That allows every developer to have a personal unit on their desk, and it also allows us to shard the entire test suite over a large number of these units mounted into a rack. All of this has greatly accelerated our rate of development.

In our original implementation of these impedance scans, it took four hours to get through all 1,000 channels; we're now able to scan all 1,000 channels in just 20 seconds. This means that we can run impedance scans on every implant every day, and our internal dashboards can play back a history of this impedance, so that we can get really good quantitative insight into the interface between biology and electronics.

As you can see, our internal humidity sensing is so sensitive it can even detect the very small and slow humidity rise just from diffusion through our implant materials. Now, in blue you can see that same internal humidity data, but from devices in our accelerated system. If we adjust this data for the acceleration factor, you can begin to see not only the agreement in this data, but also just how far into the future the data extends. In red you can see a device which has failed in our accelerated system; this device showed an abnormal increase in humidity over the duration of many months before implant electronic failures occurred.
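The "acceleration factor" used to map time in a hot accelerated soak onto equivalent time at body temperature is conventionally an Arrhenius temperature-acceleration model. The talk doesn't give Neuralink's actual parameters, so the activation energy and temperatures below are textbook-style stand-ins for illustration.

```python
# Generic illustration of the acceleration-factor idea: accelerated soak
# tests run hotter than body temperature, and an Arrhenius model converts
# elapsed test time into equivalent real-world time. Ea and temperatures
# here are assumed values, not Neuralink's.
import math

BOLTZMANN_EV = 8.617e-5          # Boltzmann constant, eV/K
ACTIVATION_ENERGY = 0.7          # eV, assumed for the failure mechanism

def acceleration_factor(t_use_c, t_stress_c, ea=ACTIVATION_ENERGY):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(t_use_c=37.0, t_stress_c=67.0)   # body vs. hot soak
months_in_test = 6
print(f"AF = {af:.1f}; {months_in_test} test months = about "
      f"{months_in_test * af:.0f} equivalent implant months")
```

With these example numbers the factor is roughly 10x, which is why a few months of accelerated soak data can extend "far into the future" on the real-time axis.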
We started building the first system prototype just after the COVID shutdown had begun in early 2020, so we had to get a little creative. As you can see, our first system prototype was a little scrappy and operated out of one of our apartments, as indicated by the carpeting. Over the duration of just a few months, the system was built out, totally custom and highly iterated, with two system versions and countless minor iterations, leading us to our currently operated third-generation system, which achieves high-density testing with automatic in-vessel charging as well as automatic data collection. We also integrated the system into a high-density rack-mount form factor, along with a centralized fluid management system, both for chemical uniformity across vessels and to reduce operational maintenance. The system has been in operation for the last year and a half and has had its fair share of challenges. Since then, we have started work on our fourth-generation system and have totally redesigned it from the ground up to be a hot-swappable, single-implant-per-vessel design, partly inspired by high-density compute servers. With this new system, we will achieve a whole new level of density, robustness, and scale.

The way the team solved this is by putting all three of these optical paths into one optical stack, using photon magic — or polarization, whatever you want to call it — and that enables us to do vessel avoidance in real time. As I mentioned, the brain is moving, and where we placed targets in the beginning may not be where we want to insert at the moment the needle is going down. So the robot can actually detect the vessels, determine whether we're about to insert onto a vessel, and decide if it's safe to insert; that way we can avoid inserting onto major vessels. And that brings us to the robot that we have here today.

Our current custom optical systems offer pretty incredible capabilities for imaging the exposed brain surface, but once the dura is in place, you can't see the dense vasculature at the brain surface: the dura is in the way, and there's simply too much attenuation. To solve this problem, we're developing a new optical system that uses a medical-standard fluorescent dye to image vessels underneath the tissue. We're also exploring applying our laser imaging system to deeper tissue structures.

This is a real-life SEM image of our latest design. On the left there, you can see the end of the thread; in the middle is the needle; and on the right is actually a piece of my hair. So yeah, it's extremely small. And besides being really small, there are a lot of other challenges associated with designing this. One challenge is that we have to use the needle, and the protective cannula that it sits in, to grab onto the thread and hold it while we peel it from its protective silicon backing; then we have to keep holding it while we bring it over to the surface and release it from the cannula during insertion. Another challenge is that we don't just have to get the needle through, we have to get the thread through as well, so we really have to focus on optimizing the combined profile of the needle and thread together. These are just some of the challenges associated with designing something like this. Our process allows us to iterate on new designs in under an hour. The latest design, seen on the right, can actually insert through nine layers of dura proxy, totaling three millimeters, on the benchtop; this is far more than we could ever expect in a human, with significant margin.

We're developing synthetic materials that mimic the biological environment. This allows us to learn as much as we can on the benchtop and start taking steps away from the industry standard of animal testing.
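As a toy illustration of the real-time vessel-avoidance decision (the actual optics and algorithms are not described in detail in the talk), here is a sketch that checks a planned insertion target against a segmented vessel mask and nudges it to the nearest safe pixel. The mask format, clearance radius, and nudging strategy are all invented for illustration.

```python
# Toy vessel-avoidance check: given a boolean vessel mask from imaging and a
# planned insertion target, verify a clearance radius and, if unsafe, move
# the target to the nearest safe pixel. Everything here is illustrative.
import numpy as np

def is_safe(vessel_mask, target, clearance=5):
    """True if no vessel pixel lies within `clearance` of the target."""
    y, x = target
    ys, xs = np.ogrid[:vessel_mask.shape[0], :vessel_mask.shape[1]]
    nearby = (ys - y) ** 2 + (xs - x) ** 2 <= clearance ** 2
    return not vessel_mask[nearby].any()

def nearest_safe_target(vessel_mask, target, clearance=5):
    """Search outward from the planned target until a safe pixel is found."""
    candidates = np.argwhere(~vessel_mask)
    order = np.argsort(((candidates - target) ** 2).sum(axis=1))
    for cand in candidates[order]:
        if is_safe(vessel_mask, tuple(cand), clearance):
            return tuple(cand)
    return None  # no safe site in view; skip this thread

# A fake 100x100 image with one vertical "vessel" near the planned target.
mask = np.zeros((100, 100), dtype=bool)
mask[:, 48:52] = True
print(nearest_safe_target(mask, (50, 50)))   # nudged away from the vessel
```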
Developing accurate proxies, though, is challenging. We've come a long way from our humble first brain proxy, shown here sitting on a plate and consisting of agar and a paraffin sheet. While simple, it allowed us to perfect robot insertions through countless benchtop tests. Today our proxy is slightly more complex: we've upgraded to a composite hydrogel-based brain proxy that better mimics the modulus of a real human brain.

In this image, I've highlighted the calcarine sulcus in red in an MRI. It contains a map of the visual world, the visual field. It has a surface area about equal to a credit card on each side, and if you unfold it and flatten it, you see that the image is inverted — it's upside down — but, more interestingly, it's distorted, so that the central part of the visual field, the fixation point, is greatly magnified. For example, if you look at this image of Lincoln, and you look directly into his right eye, everything to the left of that fixation point is directed to your right visual cortex, and everything to the right goes to your left visual cortex. His eye, even though it's very small in the image, is magnified in the brain to occupy nearly a quarter of the surface area of the visual cortex.

We've inserted our device into the visual cortex of two rhesus monkeys, whose names are Code and Dash. That means we can record activity from their visual cortex, generated by their normal home environment as they roam around. And as we all know, monkeys love banana smoothie, which means we can easily teach them to fixate points on a screen and reward them. We can reward them very precisely, because we can track the location of their eye using an infrared camera. If we take all these receptive fields, accumulate them together, overlap them, and place them on a computer monitor for scale at a typical viewing distance, you begin to get an idea of how much of the visual field we can cover with this preliminary device.

Let's look at Code performing this task. I want to show it to you first at one-quarter speed. There's a visual flash, and he makes an eye movement towards it. The monkey can only see what is white on this screen; you can't see his own eye movement, and you certainly can't see when we stimulate. But here we stimulate, and he makes the same saccade to the same location, because we stimulated the same electrode. Nothing appears on the screen at that time, and he hears no other cue to make that eye movement.

This is a schematic of what a visual prosthesis using our N1 device might look like. The output from a camera would be processed by, for example, an iPhone, which would then stream the data to the device, and the image would be converted into a pattern of stimulation of the electrodes in the visual cortex.
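To make that schematic concrete, here is a heavily simplified sketch of the camera-to-stimulation path: sample a grayscale frame at each electrode's receptive-field center and map brightness to a stimulation amplitude. The receptive-field coordinates and the brightness-to-current mapping are invented for illustration; a real prosthesis would involve far more processing.

```python
# Heavily simplified camera-to-stimulation sketch. All parameters invented.
import numpy as np

rng = np.random.default_rng(2)

# Assume each of 1024 electrodes has a known receptive-field center,
# expressed as (row, col) coordinates within the camera frame.
FRAME_SHAPE = (480, 640)
rf_centers = np.column_stack([
    rng.integers(0, FRAME_SHAPE[0], 1024),
    rng.integers(0, FRAME_SHAPE[1], 1024),
])

MAX_CURRENT_UA = 20.0   # assumed per-electrode stimulation ceiling

def frame_to_stim(frame_gray):
    """Map a grayscale frame (0-255) to per-electrode stim amplitudes (uA)."""
    samples = frame_gray[rf_centers[:, 0], rf_centers[:, 1]].astype(float)
    return samples / 255.0 * MAX_CURRENT_UA

frame = rng.integers(0, 256, FRAME_SHAPE, dtype=np.uint8)
amplitudes = frame_to_stim(frame)
print(f"{(amplitudes > 0).sum()} electrodes active, "
      f"peak {amplitudes.max():.1f} uA")
```

The cortical magnification described above is exactly why the mapping matters: electrodes near the foveal representation cover a tiny patch of the camera frame, while peripheral electrodes cover much larger ones.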
You've already heard about how we can use the N1 Link as a communication prosthesis, to help someone with a spinal cord injury control a computer or a phone, but it can also be used to reanimate the body. Let me show you how. First, a little neuroanatomy: movement intentions arise in the motor cortex and are sent down long nerve fibers through the spinal cord. These are upper motor neurons. In the spinal cord they synapse — that is, make a connection — with another motor neuron, a lower motor neuron, which sends these movement intentions to the muscles, which contract, and in turn you have movement.

This pig has more than one Neuralink device: there's a device in the brain, but there's also one in the spinal cord, and we can stream neural data from these devices in real time and use it to do things like decode the movement of the pig's joints. Here you can see on the left a time series of the hip, knee, and ankle, and we're decoding those movements. As before, you can see we're able to track the position of the joints and stream neural data as well.

Okay, so let's stimulate an electrode. Here's one electrode, on one thread, that when we stimulate causes a flexion movement of the leg. On the left you can see the movement of the joints, and you can also see the time series of the stimulation pattern in yellow; the leg is moving up. Here's another electrode which, when we stimulate, causes an extension movement. This is actually a little harder to see, because the leg is straightening and the hips are shifting, but if you look carefully, you can see how the leg is moving.
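As a toy sketch of the brain-to-spine "bridge" this demo implies — a decoded movement intent triggering a canned stimulation pattern on the spinal implant — here is an illustrative mapping. The electrode IDs and pulse parameters are placeholders, not measured values from the talk.

```python
# Toy brain-to-spine bridge: map a decoded movement intent to a canned
# stimulation pattern on the spinal implant. All values are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StimPattern:
    electrode: int       # which spinal-cord electrode to drive
    amplitude_ua: float  # pulse amplitude in microamps
    frequency_hz: float  # pulse train frequency
    duration_ms: int     # train length

# One pattern per movement primitive, as in the flexion/extension demo.
PATTERNS = {
    "flex_leg":   StimPattern(electrode=12, amplitude_ua=15.0,
                              frequency_hz=50.0, duration_ms=300),
    "extend_leg": StimPattern(electrode=47, amplitude_ua=18.0,
                              frequency_hz=50.0, duration_ms=300),
}

def bridge(decoded_intent: str) -> Optional[StimPattern]:
    """Map a decoded intent label to the stimulation it should trigger."""
    return PATTERNS.get(decoded_intent)

print(bridge("flex_leg"))
```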
Info
Channel: Neura Pod (Neuralink Updates)
Views: 227,706
Keywords: Neuralink, Elon Musk, Max Hodak, What is Neuralink?, Neura link, Neura Pod, Tesla, SpaceX, starlink, Nueralink, Nuralink, Brain computer interface, What does Neuralink do?, brain machine interface, Artificial Intelligence, Metaverse, Facebook, Neuralink News, Neuralink news 2021, Neuralink 2021, Neuralink Update, Neuralink Update 2021, Neuralink news and updates, neauralink update, Neuralink monkey, neuralink presentation 2021, neuralink pig, neuralink demos, Neuralink stock
Id: VlDx6yzomEg
Length: 25min 2sec (1502 seconds)
Published: Thu Dec 01 2022