Intel Advances in AI: Brain-Like Computing and Spiking Neural Networks Explained

Video Statistics and Information

Captions
This year was huge for AI. Neural networks can now achieve superhuman performance in many tasks; computers can already program themselves and can write text which is indistinguishable from human speech. But how do we bring all this intelligence into the real world, into devices like phones and robots? Here, neuromorphic computing seems to be the best choice. The idea of neuromorphic is to replicate the biological brain in silicon in order to build the most efficient computer chips.

The way I look at it is: if you want to deploy intelligence to the data center, where you have vast troves of knowledge built up in data sets and databases that sit on disk right next to those processors, conventional architectures are probably going to do very well for a long period of time. On the other hand, if you want to deploy the intelligence out into the world, into systems in your vehicles, into robots, or even just into your cell phone, where real-time interaction is required, that's where neuromorphic computing will really thrive and succeed in the future.

Intel is working on such a neuromorphic chip, called Loihi. They are basically taking the very latest understanding of what we know about the brain and putting it into a computer chip, which is built in the state-of-the-art Intel 4 process node. What excites me the most about this concept is that it basically turns upside down the way we think about computing and artificial intelligence. The first paradigm shift is that it does not use a synchronous clock like almost all chips do; the information is encoded and processed in spikes, just like in our brain. This means that when a spike happens, a communication from one neuron to the next, then the circuit is activated.

We don't use a synchronous clock, which is the standard, conventional way of designing just about every commercial chip that you might buy. So Loihi is
different in that we have asynchronous handshaking signals at the lowest level, and the circuits only activate when there is some kind of useful work to be performed. So in this paradigm of spiking neural network chips that we're building here, when there is some kind of a spike, a communication from one neuron to the next, then the circuits activate, there's handshaking, and the spike is passed through the network of neurons and processed as it goes. That's in contrast to conventional design, where you are strobing all of the flops together with a global clock and the information is flowing in a very synchronized way. That's not how brains process information: brains process information through spikes, where the timing of those spikes is actually encoding information.

What is interesting is that such a neuromorphic chip is intended for running spiking neural networks, and this is a whole different class of neural networks in comparison to conventional neural networks. Conventional models are trained using a supervised learning approach: the model is trained on a labeled data set, and the weights and biases are updated frequently based on the error between the actual and predicted output. Basically, a conventional neural network is a sort of static function: a neuron takes input from many neurons and computes a single output, the so-called ReLU function.

But spiking neural networks are very different in that they have internal state, temporal state, so the timing of when you present the input to these neurons matters. If you give a neuron an input and you wait a little while, that internal state decays away and the neuron returns to its initial state. But if two spikes arrive from two different inputs at the same time, the neuron will have a stronger activation and might fire at that point in time. So it's a very different class of networks, because they're more like filters; they're
temporally behaving, as opposed to being static input-output functions. That opens up a whole wide class of different types of algorithms and computation that spiking neural networks can perform compared to deep learning models. We certainly don't look at Loihi, or the neuromorphic architecture, as simply a better way to run these deep learning models. It's possible to do that, to take deep learning models and map them onto Loihi and run them, but these are never the best-performing examples. The best-performing examples always rethink neural network processing to make use of this temporal quality of the neurons.

That's why they're good for temporal, time-series processing. They're good for optimization problems, because as you loop around these neurons and create recurrent networks, loops of these, the spiking system forms, mathematically, a dynamical system that moves to an equilibrium point. So rather than computing some clear output as a function of the inputs, it's more like a pond: you throw a stone into the pond, there are all these interactions between the water molecules, these ripples, and that pattern of rippling is maybe the answer to the computation you're seeking.

Spiking neural networks fire spikes only rarely; that's why they actually shuffle less data than a typical neural network, and they require less power in general. SNNs prove to be more efficient for real-world applications, for the kind of problems our brain evolved to solve, like processing video or audio or other signal streams. Wherever there is temporal content in the signal, this is exactly the type of problem that chips like Loihi are able to solve in the most efficient way.

Today, if you look at the progress that's being made, it's very exciting what's happening with conventional architectures and these very
large Transformer networks. So I would not bet against the conventional architecture in terms of overcoming the limitations it faces today. But where the neuromorphic architecture clearly thrives, and where I think it's very clear it will succeed in the long term, is where we need to deploy this intelligence into devices that respond to real-world change and stimulus, that have to control systems, and that, in response to that stimulation, make decisions and inferences, adapt, add to their knowledge, and do all that in a real-time setting.

Another fundamental difference of the neuromorphic approach is distributed memory. One of the key bottlenecks of traditional computers based on the von Neumann architecture is the separation of compute and memory. Moving the data between the CPU and the memory is often inefficient, and this becomes particularly critical when we are dealing with large amounts of data in real time. Our brain is built entirely differently: in our brain, the memory is distributed, each computing node gets its own local memory, and the brain can access it anytime, without the need to wait for some clock.

One of the most fundamental properties of the neuromorphic architecture is that there is no external memory. The memory, which in a conventional processor is the DRAM or your cache, sits apart from the processor, in some cases far away on a different chip, as in the case of DRAM. That's not the case in a neuromorphic architecture: the memory elements are intertwined at a very fine scale; they sit close to the processing elements where the neural processing is happening. In this way the bottleneck discussed above is eliminated, and the energy consumption drops by a factor of a hundred or even a thousand. This is actually why we are drawing inspiration from our brain: it is the most efficient computing engine that we know. We have examples where we have over
three orders of magnitude gains in energy-delay product, so over a thousand times improvement over the best possible conventional solutions.

This is what makes neuromorphic chips like Loihi so well suited for robotics, because robots often need to operate for long periods of time on a single battery and also quickly adapt to changes in the real world and take action. People see neuromorphic as being the final, correct architecture for robotics, because brains evolved to control bodies and organisms, their movement through the world, and to understand and adapt to real-time sensor input.

There have certainly been some examples in the robotics domain, things like controlling robotic arms with adaptation: using some of the learning capabilities in the chip to understand if the kinematics of the arm change, say if friction develops in the arm or something else changes about the model of that system, Loihi can adapt in real time to compensate for the kinds of changes and perturbations the arm may experience. There have been other examples of learning objects in a kind of active-sensing way: when we try to detect a new object, our attention might move towards it, we might study it a little bit, and then we form a kind of template in our mind of what that new object is. There has been some work using the iCub robot to demonstrate that kind of interactive object learning on Loihi as well.

There are examples in other areas too, things like optimization problems and planning. One thing our brains are doing all the time is solving optimization problems, whether that's finding the best path for me to move from my kitchen to my living room, charting the course of how I'm going to plan my day, or even just moving
my arm to avoid obstacles. This is something we do completely effortlessly, and it has been no surprise to find that there are networks that run well on Loihi and solve similar kinds of planning and optimization problems.

Eventually, we want neurons to learn while they compute. The idea is to get computers to the point where they can think creatively, recognize objects or people they have never seen before, and create something which hasn't existed before, and neuromorphic computing is one of the most promising approaches here. I find this research very exciting, but at the same time very challenging. For sure, Intel wants to commercialize this chip in the near future and bring it into Intel products. We've already talked about using neuromorphic chips for AI in gadgets and robots; another possibility would be to embed one into an Intel SoC to add extra intelligence to the chip.

Apart from Intel, there are other companies working in the same direction, like the startup BrainChip from Australia, who are developing the neuromorphic chip Akida. This year their hardware was integrated into Mercedes cars, where it is used for keyword spotting for voice control, you know, when you say "Hey Mercedes". According to Mercedes, neuromorphic chips are at least ten times more efficient than conventional chips for voice control. I have a separate video about it; you can watch it later.

And of course there is another way to build neuromorphic chips, in an analog fashion: instead of using standard cells to implement neurons and synapses, the design is based on analog devices like memristors. One of the recent designs here is, for instance, the NeuRRAM chip. This chip is based on resistive non-volatile memory, and it allows computing directly in the memory. This technology is actually not new, but usually it leads to a loss in accuracy. This design seems to solve that problem: the chip achieves 99% accuracy on handwritten digit recognition and 85% on an image classification task. This is actually
comparable to digital chips, but achieved at much lower power consumption. IBM and MIT are also working on another interesting piece of research: they found that electrochemical RAM, so-called ECRAM, is also perfectly suited for building memory for neuromorphic chips, and this device draws inspiration from batteries. Now a team at IBM is working on manufacturing such a device; we definitely have to keep an eye on this research.

Well, one thing is clear: neuromorphic computing has huge potential, and this biologically inspired approach to running neural networks seems to be the best hardware choice for the future, at least for edge AI. Thanks for watching, guys. I wish you beautiful holidays, and I will see you in the next video. Ciao!
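The temporal neuron behavior described in the captions, internal state that decays over time, stronger activation when two input spikes coincide, and only occasional output spikes, can be sketched with a minimal leaky integrate-and-fire (LIF) model. This is an illustrative toy, not Intel's Loihi implementation; the decay, weight, and threshold values are arbitrary choices for demonstration.

```python
def simulate_lif(input_spikes, decay=0.8, weight=0.6, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: list of lists; input_spikes[t] holds the input lines that
                  fire at step t (e.g. [0], [0, 1], or [] for no spikes).
    Returns the list of time steps at which the neuron fired.
    """
    potential = 0.0
    output_spikes = []
    for t, arrivals in enumerate(input_spikes):
        potential *= decay                   # internal state leaks away over time
        potential += weight * len(arrivals)  # each arriving spike adds charge
        if potential >= threshold:           # fire only when the threshold is crossed
            output_spikes.append(t)
            potential = 0.0                  # reset after firing
    return output_spikes

# Two spikes far apart: the state decays between them, so the neuron stays silent.
apart = simulate_lif([[0], [], [], [], [1], [], []])
# Two spikes arriving at the same step: their contributions add and the neuron fires.
together = simulate_lif([[], [0, 1], [], [], [], [], []])
print(apart)     # → []
print(together)  # → [1]
```

Note how the same total input produces different outputs depending purely on spike timing, which is the "filter-like" temporal behavior the transcript contrasts with a static ReLU function; the output spike list is also sparse, hinting at why event-driven hardware moves so little data.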
Info
Channel: Anastasi In Tech
Views: 90,387
Keywords: AI, Neuromorphic, Intel, Neuromorphic Computing, Loihi2, The Future of AI, neuromorphic chips
Id: GY69IuTLmkk
Length: 14min 58sec (898 seconds)
Published: Thu Dec 22 2022