Neuromorphic, quantum computing and more: Intel Labs' vision of the future at CES 2021

Video Statistics and Information

Reddit Comments

20:10 - 20:45: "deep seeding neuronal messages by VR"
u/Yvwh3, Jun 16 2021 (1 upvote)
Captions
Chris Schodt: Welcome, folks. I'm Chris Schodt, the host of Engadget's explainer show Upscaled on YouTube, and my secret is that I pretty much started this show as an excuse to talk to really smart people about the work they're doing and to try my best to understand it. In that spirit, today I'm joined by Intel's Dr. Rich Uhlig, the director of Intel Labs, a division of Intel that is doing some really impressive work. Rich, maybe you can give us a little introduction to Intel Labs' mission. I'm not sure it's a part of the company people are familiar with; everyone knows Intel for processor design and computing, but what does Intel Labs specifically focus on?

Rich Uhlig: Happy to talk about that, Chris, and thanks for having me on; I'm delighted to be part of this event. Intel Labs is the research organization for Intel, and our primary purpose is to explore the future for the company. We use a number of methods for doing that: we engage with academic researchers, we do our own original research within the company, and we work closely with our business unit and product team partners to take our research innovations and translate them into a form Intel can ultimately bring to market at scale.

Chris Schodt: I want to dig into the specifics of your research in a second, but how do you decide which avenues of research are worth exploring?

Rich Uhlig: We use a number of different criteria. It's not so much about the time scale, whether it's far out or near in; rather, we look for pain points, areas where something makes computing difficult for people to engage with. Sometimes we look for an emerging barrier; it might have to do with power or some other limiter on doing something new with computing. And sometimes we look for enablers, technologies that make entirely new things possible. To give some examples: several years ago in the labs we observed that one of the pain points of working with PCs was the abundance of different connector types littering their sides, and our concept was to converge all of them onto a single I/O protocol. That ultimately became Thunderbolt and USB Type-C, so that's one example of an innovation that started in the labs. Similarly, when we recognized that there was a lot of computing capability in our server systems that often wasn't fully utilized, we invented virtualization technology and brought it to market; it's been widely adopted in the cloud by cloud service providers. And maybe one we could talk more about today is our work in silicon photonics, which is all about identifying and overcoming a limit on how much I/O we can deliver to compute devices.

Chris Schodt: I know this is one of your particular areas of interest, and probably not a field folks are familiar with. Can you give us a general overview of the technology? You said it's trying to get through bottlenecks in I/O, so we're talking about feeding data into the chip faster?

Rich Uhlig: That's exactly right, Chris. Think about what we need from a high-performance compute device, whether it's a high-end server CPU, a GPU, or an FPGA: you have to feed it data, and as it gets more capable, the amount of data needed to feed the beast keeps increasing. One of the things we've identified as an impending barrier to further progress is that the power that goes to I/O, just getting the data onto the chip, is taking an increasingly larger proportion of the available budget, leaving less and less power for the compute itself. A lot of that has to do with the limits of electrical signaling: the amount of data you can push over electrical links is getting increasingly limited, and it's getting harder and harder, especially at larger distances. So we see a very exciting opportunity in moving from electrical signaling to optical communication, and in the labs we've been developing technologies that let you go end-to-end with optical interconnect straight into a CPU or GPU package.

Chris Schodt: When we talk about current electrical interconnects, that's systems like PCI Express that people use to hook memory and graphics into the processor?

Rich Uhlig: That's right. The predominant form of communication in computing systems today is electrical signaling. It's the more cost-effective option, and it's brought us a long way over many decades of improvement, but what we're starting to see is that the ability to deliver high-bandwidth electrical links is getting limited, especially at distance. What you've already begun to see in data centers is that servers in different racks are connected through optical links, and we think those optical links will, over time, get closer and closer to the individual compute nodes and compute packages. So in the labs we've been developing technologies that let us integrate those optical links straight into the compute package.

Chris Schodt: So right now in data centers you've got the racks hooked together with fiber-optic cable, and you're talking about building an optical link directly into the server itself, for moving data between its different parts?

Rich Uhlig: You got it; that's exactly the vision. To make that possible we've had to invent a number of new components, so that deeper integration into the compute package becomes possible. When you think about going end-to-end with optical links, first of all you have to have a source of light, and going back many years the labs have developed technologies around what's called a hybrid silicon laser, where we can generate light with silicon devices. Once you can do that, you have to be able to modulate data signals onto those optical signals over different wavelengths of light; you need ways of guiding the light from endpoint to endpoint; you need ways of amplifying it; and you need ways of detecting it at the other end. So there are a number of different components, and physics behind implementing each of them, that we've been investigating in the labs.

Chris Schodt: I haven't spent a lot of time in servers, but I've at least seen fiber-optic links used in PCs and the like, and we're talking about a fairly chunky plug that usually goes into a port in the back, and a hefty cable that strings to the next port. What level of miniaturization... oh, we've lost Rich here for a second. We're going to attempt to get him back online.

Rich Uhlig: I'm back. Can you hear me?

Chris Schodt: I can hear you. I guess we just lost the link there.

Rich Uhlig: I'm not sure where you lost me.

Chris Schodt: I think we were in the middle of talking about building this into the different parts, what you'd need to build this into an actual board. I was wondering how different this is from the fiber-optic connectors used in servers; a lot of them are a pretty chunky port, maybe almost the size of a Sharpie marker. What scale of miniaturization are we talking about to bring this down to the level of doing it on a compute package?

Rich Uhlig: You're right that today's optical transceivers are still discrete devices, and they're larger in dimension. Part of the reason is that until recently we hadn't figured out how to modulate electrical signals onto the optical link with devices of small dimension, and by small I mean something small enough that you could situate it right next to the compute package. We've developed what are called micro-ring modulators; you can have as many as a hundred of them arrayed around the compute package, so you have the option of integrating them straight into the package. What that means physically is that you remove the electrical link that would otherwise be required at each endpoint of an optical fiber, which is really the last inch or so you need to remove to achieve this vision of really energy-efficient, high-bandwidth communication at distance.

Chris Schodt: So we're talking about actually integrating tiny optical fibers onto the chips or boards themselves to form these connections?

Rich Uhlig: That's right. Yep, you got it.

Chris Schodt: One thing I thought was really interesting when researching this is how the modulation works with these ring structures. For folks who aren't following all of this: by modulation we mean turning the light beam into a pattern that actually carries data, and it doesn't work the way I thought it would. In my mind I assumed you'd just be turning a flashlight on and off, essentially, to get a light signal with your ones and zeroes, but it turns out to be a little more complicated than that. Do you want to talk through how this works with the ring structures?

Rich Uhlig: Sure. There are a few different components coming together. To start, you've got the electronics within the compute device that are ready to send data across the package pins; you're still in the electrical domain at that point, with the ones and zeroes that are the data you want to send off the chip. You also have a light source, this hybrid integrated laser, that's ready to generate light at different wavelengths and send it onto an optical fiber. What you need to do is cross the boundary between the electrical domain and the optical domain, and that's achieved by micro-ring modulators that are tuned to certain resonant frequencies, so that they can generate light at specific wavelengths to be sent onto the optical fiber, coupled to the data signals generated on the sending side of the link. What's really amazing about optical technology is that multiple different wavelengths can all share the same optical fiber, which means you get much higher density of communication through a single link, something that with electrical signaling would require a whole bunch of package pins and wires coming off the chip.

Chris Schodt: I realize the frequencies are maybe not quite the same, but it's essentially like shining a bunch of different-colored lights down the fiber simultaneously, and they can be picked apart so each one carries its own signal?

Rich Uhlig: That's right. They're multiplexed onto the same fiber on the sending side and teased apart on the receiving side so that you reconstruct the signal, and you get a very efficient use of that fiber. Another really important part of optical is that you can do this at distance: you can separate the receiver and the transmitter by large distances, and that's also a distinguishing factor of optical communication compared to electrical links.
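To make the resonance idea concrete, here is a toy calculation of the wavelengths a single micro-ring responds to. The effective index and ring radius below are illustrative assumptions, not Intel's actual device parameters; the point is only that a ring of a given size picks out a comb of specific wavelengths, which is what makes it useful for both modulation and wavelength selection.

```python
# Toy micro-ring resonance calculation (illustrative values, not real device data).
# A ring resonates when its round-trip optical path length is a whole number
# of wavelengths: m * wavelength = n_eff * 2 * pi * R.
import math

n_eff = 2.5        # assumed effective refractive index of the waveguide
radius_um = 5.0    # assumed ring radius, in micrometers
path_um = n_eff * 2 * math.pi * radius_um   # round-trip optical path length

for m in range(48, 53):                     # mode numbers near the 1.5 um band
    print(f"mode {m}: resonance at {path_um / m:.3f} um")

# Modulation idea: electrically nudging n_eff shifts every resonance slightly,
# so a carrier wavelength parked on a resonance edge sees its transmitted
# intensity swing up and down; that swing encodes the ones and zeroes.
```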
Chris Schodt: What kind of performance difference are we talking about over distance? A lot of the USB cables and the like that we work with in the non-server, consumer world tend to max out at not terribly long lengths. What's possible with optical fiber here?

Rich Uhlig: Once you account for the fact that you can get greater distance with optical, which is an important property in itself (with electrical links you can still push higher bandwidths, but you need to go to shorter and shorter distances, and that's one of the limiters), there are different ways of thinking about performance. One is the efficiency of the link: how much energy is required to transmit the bits of information. If you look at best-in-class Ethernet-style communication at distance, you might be in the tens of picojoules per bit, something like 28 to 30 picojoules per bit for long-distance communication.

Chris Schodt: Just to clarify, picojoules per bit: that's essentially electricity consumed per unit of data moved?

Rich Uhlig: Yes, it's the amount of energy (joules are a measurement of energy) needed to transmit a bit of information, and you want that number to be as small as possible; the smaller it is, the more data you're able to push over the link. Currently with optical communications you do have to spend a fair amount of energy to push the bits across the optical link, but through some of the innovations I was describing earlier, we believe we can improve that by an order of magnitude or more. So that's one way of thinking about performance: how much energy is required to send the bits. Another is what you could call bandwidth density: how much space you need in order to push data off the chip. One of the problems we have with electrical links today is that we have to add more and more pins to the package to reach higher data rates, but because of the property I described before, where we can multiplex multiple different colors or wavelengths of light onto the same optical link, we can get much higher bandwidth density. There we think we can get 20 to 40x kinds of improvements, and we want to get north of a terabit per second of bandwidth within a millimeter of "shoreline" around the chip package.

Chris Schodt: A terabit of data moved per second, in and out of a chip, over a millimeter's worth of shoreline on the chip?

Rich Uhlig: Out of the chip, yes, and we think we can push that even higher by aggregating more and more optical links onto the package. You can see that you have much more headroom for improvement with this optical technology than you would have with electrical, and that's what has us so excited about it. We think it will really change the way computing systems are built, because you can begin to disaggregate the different things we think of as servers in new ways: you can separate compute from storage and memory, and you can assemble systems in entirely new ways that will lead to all kinds of improvements in performance and efficiency.

Chris Schodt: What are the odds this filters down to my SSD at some point 15 years in the future, or is this mostly a high-performance computing innovation we're looking at?

Rich Uhlig: It's probably a high-performance computing innovation in the near term, and a lot of that has to do with the fact that we still have to get the cost down; high-performance applications can tolerate higher costs. However, I mentioned Thunderbolt earlier: in our original vision, that was going to be an optical link, even in client applications, though ultimately we went with electrical signaling. You can imagine a future where, even in client applications, where you're multiplexing multiple I/O protocols between a PC and the peripherals you want to connect, if you have a need for higher bandwidth you could imagine moving to optical links. So there could be client applications as well; it's something that would be enabled once we can bring the costs down.

Chris Schodt: That's pretty cool. I look forward to being able to connect my external hard drives via a terabit-per-second link.
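The arithmetic behind those figures is straightforward: link power is data rate times energy per bit, and aggregate off-package bandwidth is bandwidth density times available shoreline. A quick sketch using the numbers quoted in the interview, plus a hypothetical 20 mm of usable package edge (an assumption for illustration only):

```python
# Back-of-the-envelope link math using the figures quoted above.
def link_power_watts(rate_bps: float, pj_per_bit: float) -> float:
    """Power = data rate (bits/s) * energy per bit (joules); 1 pJ = 1e-12 J."""
    return rate_bps * pj_per_bit * 1e-12

TERABIT = 1e12
for pj in (30.0, 3.0):   # ~30 pJ/bit today vs. an order-of-magnitude improvement
    print(f"1 Tb/s at {pj:g} pJ/bit costs {link_power_watts(TERABIT, pj):g} W")

# Bandwidth density: north of 1 Tb/s per millimeter of package shoreline.
shoreline_mm = 20        # hypothetical usable package edge
print(f"{shoreline_mm} mm of shoreline at 1 Tb/s/mm -> {shoreline_mm} Tb/s off-package")
```

At 30 pJ/bit a terabit per second costs 30 W of I/O power alone, which is why cutting the energy per bit by 10x matters so much for the package power budget.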
Chris Schodt: I wanted to jump to one of the other areas you folks have been working on at the labs, one that again may not be a super common area of knowledge for our general audience: neuromorphic computing, which has a heck of a name. Can you give us a little introduction to this project, what neuromorphic computing is, and what you're hoping it can do?

Rich Uhlig: Sure, let me just start by defining the term. The name is suggestive of the idea that biological brains are an amazing thing in what they're able to do, and if you think about neurobiology, we believe there's much you can learn from it. So we're considering how we can take lessons from biology, from biological brains, and build computing devices that take the form of, or at least mimic some of the dynamics of, that kind of intelligence. It's really inspired by recognizing that there's been tremendous progress in AI in the last several years, based on a model of artificial neural networks that abstracts away some of the more subtle nuances and behaviors of biological neural systems. What we want to do with neuromorphic computing is capture some of those dynamics, in particular what we see in biological brains, where you have many neurons interconnected in a dense network. Any given neuron may fire based on the inputs it receives over its synapses; each neuron is connected to other neurons, and they can fire, sending what are called spiking messages to the other neurons in the network and exciting one another. That dense interconnection of neurons is what happens in biological brains, and we're trying to model some of those dynamics to see if we can build a system that exhibits some of the same qualities. Just to underscore the point by way of analogy: if you look at something like a cockatiel, it has a tiny little brain, but it's able to do amazing things. It can navigate very complex physical environments; cockatiels can even learn words and parrot them out; they can even turn objects into tools. So a lot can be done in the very small brain of an animal, and we're far away from that. When you think about the compute that goes into, say, a drone, it consumes many orders of magnitude more power, and its ability to navigate an environment is much more limited. So we know it's possible; there's an existence proof in biological brains. What we're trying to do with neuromorphic computing is capture the essence of that and see if we can't solve some of those problems and build more efficient intelligent systems.

Chris Schodt: So what are these neuromorphic chips or processors suited for? What's the advantage they're able to provide? I know it's still relatively early days, but what can they do?

Rich Uhlig: Great question. We're looking at them for a few different classes of applications. The first is the things I just talked about: we think they have some promise in building the control systems for robots or drones. If you think about biological brains, that's what they're really good at; if you're an animal out in the wild, being able to navigate a complex, dynamically changing environment is something you can do with high efficiency. There's an analogy in building robotic systems, where we could potentially use this as a control system for, say, a drone or a robot, and to do things that biological systems do: building systems that can learn how to smell, taking the inputs from a chemical sensor and recognizing what the scent is, the same way we can discern what's in an environment when we take in those inputs through our noses; or building systems that can touch, making sense of tactile sensors that might be built into a robotic arm. So there's a whole class of applications in that space. But what's interesting about neuromorphic computing is that it's also very energy efficient, and we think it can therefore be used for other classes of problems. When we build our neuromorphic computing systems, we're not only building small single-node implementations but also larger clusters of neuromorphic systems, where we scale up to a really large number of simulated neurons. We think there are problems in the space of constraint satisfaction, for example: solving a sudoku puzzle is a constraint-satisfaction problem, and there are many other optimization problems like that; neuromorphic systems seem to be really good at that kind of application. Another application would be solving problems in similarity search. Imagine you've got a really large visual database of images, and you've got a source image you'd like to submit against that database to find all the other images that look like it; say you've got a picture of a couch, and you want to find all the other pictures in your visual database that look like a couch. That's called similarity search, and it's something neuromorphic systems are pretty good at. If you think about it, there's an analogy there too: with biological brains we build a sort of memory of different kinds of visual imagery, so when we match a new sensory input against it, we're able to make the linkage between a past experience and a new experience. Those are the kinds of things we're exploring with our neuromorphic prototypes.
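Similarity search itself is easy to state in conventional code; what a neuromorphic system changes is how efficiently the matching runs, not the task. A minimal sketch with hypothetical data, representing each image as an embedding vector and ranking the database by cosine similarity to the query:

```python
# Minimal similarity-search sketch: rank a database of feature vectors
# (e.g., image embeddings) by cosine similarity to a query vector.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 128))   # 10k images, 128-d embeddings (toy data)
query = rng.normal(size=128)                # embedding of the "couch" photo

# Normalize rows; cosine similarity then reduces to a dot product.
db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = db_norm @ q_norm

top5 = np.argsort(scores)[::-1][:5]         # indices of the closest matches
print("best matches:", top5, "scores:", scores[top5].round(3))
```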
Chris Schodt: Is it fair to say that a lot of the neural-network and deep-learning algorithms used in AI work for the past while are doing a software simulation of how a brain structure might work, and this is trying to actually build some of that into the processor design itself?

Rich Uhlig: At a high level I think that's a fair characterization. If you look at artificial neural networks, what are known as DNNs, they abstract away some of the dynamics we see in biological brains. Those temporal dynamics, what we see with spiking messages in what's called an SNN, a spiking neural network, are the additional, differentiating thing, and we think that by capturing these additional dynamics we'll be able to get new forms of efficiency. We're also hoping we'll be able to build systems that can learn on the fly. If you look at a lot of DNN designs, deep neural networks, they tend to operate in two modes: there's a mode where you train against something like a labeled data set and learn the weights of the neural network, and then you deploy it and use it for recognition and classification tasks. It's really a bimodal mode of operation. With neuromorphic systems we want to both get efficiency and be able to learn on the fly, and that's one of the things we're exploring with this research.

Chris Schodt: Do these chips have to be designed in concert with the learning algorithms that are actually going to run on them? A normal processor may not be terribly efficient, but there are a lot of different AI models you could run on one. Are these built in concert with the software that emulates these neural networks, or how does that design work?

Rich Uhlig: Great question. In the same way that we design classical computers so they can be programmed, we're doing the same thing with our neuromorphic designs. We have a single design, and you can think of it as an efficient implementation of the dynamics we see in biological brains, but it doesn't actually look like a biological brain: it really is a bunch of cores with an on-die fabric connecting what we call neuromorphic cores. Those neuromorphic cores simulate what a neuron would do, and any given core can model a bunch of virtual neurons that receive inputs and generate outputs, the spiking messages I described earlier. To program this, we have a software stack, a layer that runs on top of the architecture, that allows a programmer to express what they would like any given virtual neuron to do. We have these things we call firing rules, which are just expressions of the conditions under which a virtual neuron will generate a message into the network, and that's how you program a neuromorphic system. Through the flexibility of defining those firing rules, you can solve lots of different kinds of applications, and that's really how we're beginning to do these explorations. I should also say a little about our methodology. At Intel Labs we believe really strongly in collaborating outside the company, and one of the things we've done is set up a network of neuromorphic computing researchers outside of Intel; by the way, this is a field that's been going for many years now, but we're enabling it with these prototype neuromorphic chips. We have a system we call Loihi, a neuromorphic implementation that we've put in the hands of these external researchers, and we've also enabled them with software that allows them to express the algorithms and applications they think may be interesting and useful to explore. That's how we're beginning to learn what kinds of problems this style of computing will be good at.
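Loihi's actual programming stack isn't shown in this interview, but a leaky integrate-and-fire neuron, a standard model in the spiking-neural-network literature, gives the flavor of what a "firing rule" expresses: accumulate weighted input spikes into a membrane potential that decays over time, and emit a spike when it crosses a threshold.

```python
# A leaky integrate-and-fire neuron: a generic SNN sketch, not Loihi's API.
def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Integrate weighted input spikes into a leaky membrane potential;
    emit a spike and reset when the potential crosses the threshold."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = v * leak + weight * s   # decay old potential, add weighted input
        if v >= threshold:          # the "firing rule"
            out.append(1)
            v = 0.0                 # reset after spiking
        else:
            out.append(0)
    return out

# Closely spaced inputs push the potential over threshold; sparse ones decay away.
print(simulate_lif([1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1]))
```

Timing matters here in a way it doesn't for a conventional DNN layer: the same number of input spikes produces different outputs depending on how they are spaced, which is the "temporal dynamics" point made above.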
Chris Schodt: Makes a lot of sense. And the chips you're making, the Loihi: is the secret just in how it's organized? Is this built like one of your other processors, just like an Intel processor, or does it take some special fabrication technology to assemble?

Rich Uhlig: It doesn't require a special process; in fact, the Loihi design was implemented on our 14-nanometer process. It uses regular digital logic; it's not exotic in that sense. There are some kinds of neuromorphic designs that are based on analog computing principles, but this is still a digital design. One thing that's unique about it is that it uses a different design methodology, what's called asynchronous design, which is a way to get some additional efficiency without depending on a synchronous clock to drive the advancement of the computation. But aside from that, the innovation in the design has to do with how we've built a system that's able to model the dynamics of a neuromorphic system, and how you program it in the manner I described earlier. That's where we think the efficiency is going to come from.

Chris Schodt: That's very cool. So it's still transistors and memory and everything, but the secret sauce is the organization, how it's all structured.

Rich Uhlig: That's right, yeah, exactly.

Chris Schodt: I'm taking too long; these are far too interesting topics. I want to move on to another one of the things I've seen you working on. I've done the bare minimum of coding in my life, I'm not a programmer, and this one felt like it was at the absolute leading edge of my ability to understand: machine programming. Can you give us a little intro to what this topic is? It was a little confusing for me, I've got to be honest.

Rich Uhlig: Okay, sure. Let me motivate the problem. Intel is primarily a hardware vendor, obviously; we build our business on building all kinds of compute and communication devices, and we're very dependent on a really rich and capable ecosystem of software developers. We do our own software development too, but we really need lots of people able to program computing systems. What we've observed is that it's getting increasingly difficult to program computing systems, especially as they get more and more complex. We have this notion of a "ninja programmer": someone who is really good at getting the ultimate performance out of a computing system. The problem is that there aren't many ninja programmers to go around for the magnitude of problems we'd like to solve. In fact, while we're getting more and more programmers, those programmers are emphasizing approaches that increase productivity: they're using languages like Python to develop code quickly, but it's not necessarily the highest-performing code that gets the most out of the underlying hardware. We considered that a problem at Intel, and we wanted to bring some technologies to bear. The basic idea of machine programming is to teach computers how to program themselves; that's literally what the idea is, and you can take it apart in different ways. The grand vision is that we'd like anybody, whether they're a programmer or not, to be able to express their intent and have a computing system do it. At the highest level, that's what we'd like to achieve: you stop thinking about being a programmer or a coder, you just express your intent, and you get what you want out of the system. Now, that's kind of abstract and high-level, and in the labs we really like to get concrete and make forward progress, so a near-term vision is to develop technologies that just make programming more efficient. As an example, a lot of time today goes not just into writing code but into debugging it, making sure that it's correct and finding the errors in it. We've been developing some new systems that are able to find errors in code automatically. They do so by scanning lots of examples of code, then using an understanding of what good examples of correct code look like to find errors in code as it's being written, automatically. We hope this will dramatically lower the amount of time spent debugging software.

Chris Schodt: If one of the issues is that a lot of software, even written by good developers, has bugs in it, how do you pick what you train the system on? Is someone having to go through and write code you can verify and debug and know to be valid? I understand the scale of the issue, but because of that it feels like finding appropriate training data to feed into the system would be very challenging. How do you approach that part of the problem?

Rich Uhlig: You're asking a really great question: how can you be sure you aren't feeding the system code that's buggy, and if you are, how can you trust what you're getting out of it? The short answer is that you can't be sure you aren't feeding the system buggy data, but that doesn't have to stop you from making progress. Basically, we're using a method known as anomaly detection, and we break the process into two steps. The first is that we point the system at a lot of examples of code, like billions of lines of production code that's out there. It's actual operating software, so it's got to be working correctly at some level, although it may have bugs in it. We mine all of that existing production code for patterns, and there are distinct patterns you can find in properly functioning code. You can use machine learning methods for that, and you can also use other forms of analysis, formal methods, for understanding the structure of code: does it look like a good example, a pattern of correct code? Most correctly functioning code falls into certain distinguishable patterns, and that's what you'd expect. Anomaly detection is about finding the outliers, the snippets of code that don't seem to follow a common pattern; it's a sort of probabilistic approach to finding where bugs might lie. That's the second part of the process: scanning the new code you're trying to debug and identifying the places where it deviates, where it has anomalous structure compared to lots of examples of production code. While it still requires a human in the loop to look at the pattern violations and confirm that there is in fact a bug, you can take a lot of the work out of it by doing this automatically. That's the basic idea. We have a system we call ControlFlag that we've recently been publishing papers on, and it has actually been able to successfully find bugs in code that had been latent for many years and were only discovered by the system.
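ControlFlag's internals aren't detailed in this interview, so the following is only a cartoon of the two-step idea described: mine a corpus for common structural patterns, then flag new code whose pattern is a rare outlier. Real systems work over parsed syntax trees and far richer statistics; this toy version just abstracts if-conditions into shapes and counts them.

```python
# Toy anomaly-detection sketch in the spirit of the two steps described above.
import re
from collections import Counter

def shape(condition: str) -> str:
    """Abstract names and literals so structurally similar code matches."""
    cond = re.sub(r"\b[A-Za-z_]\w*\b", "VAR", condition)  # identifiers -> VAR
    return re.sub(r"\b\d+\b", "NUM", cond)                # number literals -> NUM

# Step 1: mine a "mostly correct" corpus for pattern frequencies.
corpus = ["x == 0", "ptr == NULL", "n == 1", "flag == done",
          "y == 10", "count == 0", "p == NULL"]
counts = Counter(shape(c) for c in corpus)

# Step 2: scan new code and flag shapes that are rare outliers.
for cond in ["err == 0", "val = 7"]:   # note: the second uses '=' not '=='
    freq = counts[shape(cond)]
    verdict = "typical" if freq >= 2 else "ANOMALY: possible bug"
    print(f"{cond!r} -> shape {shape(cond)!r}, seen {freq}x: {verdict}")
```

The second scanned condition, an assignment inside a comparison context, never appears in the mined patterns, so it gets flagged for a human to review; that kind of `=` vs `==` slip is a classic example of the bug class this approach can surface.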
Chris Schodt: That's very cool. It's almost enough to make me want to try to learn some more Python; maybe not in the near term. Well, I know we put quantum computing in the title of this, but I think we're actually reaching the end of our time, so we might have to save that for another chat in the future; I know Intel is doing a lot of work there as well. Thank you so much for taking the time this morning to talk to us. This stuff is all super interesting; we could have spent probably an hour on any one of these topics, but it's great to get an overview of all the work Intel Labs is doing.

Rich Uhlig: Happy to come back at any time. I appreciate it.

Chris Schodt: We may give you a call for some future Upscaleds, for some research into some of these. This has been great; thank you. Folks, thanks for watching. Stick around: we've got a short break, and then our editor Cherlynn Low will be sitting down with Microsoft for our next interview, so stay tuned for that.
Info
Channel: Engadget
Views: 22,794
Keywords: engadget, technology, consumer tech, gadgets, science, gear, tech, neuromorphic, quantum computing, intel labs, intel, ces2021
Id: qkzlXmAoGNA
Length: 37min 27sec (2247 seconds)
Published: Tue Jan 12 2021