2019 WMIF | 1:1 Fireside Chat: Jensen Huang, CEO, NVIDIA

Captions
Last year it was Mario. Oh yeah, that's a wedding song. Next year it's gonna be... this is one of the highlights. Everybody, we're normal. I might not get invited back. Well, this is one of the highlights. The nice guy that welcomed us said, "Last year I told you to be yourself. This year I'd like to give you new advice: whatever you do, don't be yourself." He saw that. Be real.

Well, thanks so much for coming. It's always a pleasure to spend time with Jensen, and I hope you get a chance to grab him here and there over the next couple of days while he's around; I always learn so many things. Let me ask you, just to start this off: what have you been doing for the last year in AI?

Well, last year in AI. First of all, you guys should observe something that's really important. This year the Turing Award, the Nobel Prize of computer science, went to three people who made a profound contribution to modern AI and the advancement of deep learning: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. All three of them made tremendous contributions, and they were awarded the Turing Award. The takeaway from that is that this is probably not a fad. Deep learning, this data-driven approach of developing software, where the computer is writing software by itself, this form of AI is likely to have a great, profound impact. I think that's a big deal.

There are a few things that I learned this year. We've been working a lot in deep learning, going deeper and deeper, and something happened this year that, I'm pretty sure, when we look back on it, is going to be seen as the ImageNet moment of natural language understanding: the approach to language understanding called the Transformer. This new approach to understanding language has turned out to be successful in many, many ways: summarization of text, question answering, and of course natural language understanding. We're going to
see some really significant breakthroughs in the coming years, so I would keep an eye on natural language processing, NLP. I think this is its ImageNet moment; in the next five years we're going to see some gigantic breakthroughs. I'm also incredibly excited about physics-inspired, physics-informed, physics-integrated neural networks, where parts of the network, parts of the layers, are partial differential equations, whether it's on the input side feeding into neural networks or the neural networks feeding into partial differential equations. I think we're going to see the fusion of computational and data-driven science, and we're seeing some really big breakthroughs there. I'm excited about those things.

How do you think that will first apply itself to the real world?

Okay, let's see. Instead of doing molecular dynamics simulations purely from Newtonian physics and particles, we might decide to do physics on some parts and use neural networks on other parts. Or very large-scale weather simulations, where the partial differential equations are inferred on some level and computed on some level, or combined. You could use traditional methods as ground truth to teach the neural networks, and all of a sudden you could do a weather simulation at a scale that was unthinkable in the past. Quantum chemistry: now you could fuse the two methods and get a speed-up of a thousand times, ten thousand times. The progress that's being made is almost unthinkable. Instead of one field of science or the other, it's really the fusing of the two.

Today's announcement with ATOM, tell us about that.

That's right. Well, let's see. The ATOM consortium, as you know, has been trying to create a new form of computation, a new method of computation, to augment the discovery of drugs, to accelerate the discovery of these drugs. At some point there's the molecular dynamics simulation.
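The triage pattern described here, a cheap learned scorer filtering a large candidate pool so that only the most promising molecules reach expensive simulation, can be sketched as follows. This is a minimal illustration, not ATOM's actual pipeline; the molecule names and random scores are stand-ins for a real surrogate model's predictions.

```python
# Sketch of "filter before you simulate": a cheap learned scorer triages a
# large pool of candidate molecules so only the most promising ones go on to
# expensive molecular dynamics simulation. Random scores stand in for a real
# surrogate model's predictions (illustrative only).
import random

random.seed(0)
candidates = [f"mol_{i}" for i in range(10_000)]
scores = {m: random.random() for m in candidates}  # surrogate "promise" score

def triage(candidates, scores, keep_fraction=0.01):
    """Keep only the top-scoring fraction of candidates for simulation."""
    ranked = sorted(candidates, key=lambda m: scores[m], reverse=True)
    return ranked[: int(len(ranked) * keep_fraction)]

shortlist = triage(candidates, scores)
print(len(shortlist))  # 100 of 10,000 move on to simulation / in-vitro testing
```

In practice the scorer would be a model trained on ground-truth assay data, and the kept fraction would be tuned to the available simulation budget.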
But before that, you use a neural network, trained from a large body of ground truth, to filter through the huge combinations of molecules, to decide which ones are worthwhile to simulate, and then which ones beyond that are worthwhile to take on to real testing, to in vitro testing. That method of using neural networks, not to predict the outcome but to narrow the focus of the experimentation, is ultimately going to accelerate drug discovery. Really clever methods like this are appearing all over the world, and I think that, on a first-principles basis, at the fundamental level, what's really happening is that data-driven science is becoming the fourth pillar of scientific discovery. There's the theoretical pillar, the way Einstein did it. I can just imagine Einstein: he sat there at his desk, probably mostly sleeping, he closes his eyes, he does some thought experiments from first-principles theory, and he discovers what he discovers. There's the experimental pillar, the Newtonian method. There's the computational pillar, the simulation-based methods we know today. And now there's going to be the data-driven pillar, which fuses some of those methods together. It'll be the fourth pillar, and I think it's really powerful.

So how does a company like NVIDIA not go too far up the stack? I just feel like that's the longest period of time in a week that I've been that serious. Well, it'll switch in a minute. The things you've described seem like they would apply to a lot of different domains, so how do you understand these different domains broadly enough? You just talked about ATOM; we could easily have a deep dive into autonomous vehicles. How do you allow this kind of
arbitrage between these very diverse disciplines, all using that same fourth pillar, if you will? How do you handle all that?

Well, it turns out that one of our jobs, and yours too, Keith, is to try to find some universal truth about the way we do things. If we can discover a universal truth about what's happening in the world, then the company can build upon it as a foundation. One of the universal truths we discovered recently is this data-driven approach to writing software: writing software that, quite frankly, no human can write. We discovered this foundational new technology we call deep learning, or machine learning, about seven or eight years ago, and we realized it was three things; this was the transformative moment for the company. In the future, the way software is written won't just be engineers typing on a keyboard, codifying algorithms from everything we've learned and all of our imagination and training, deriving them from a combination of first-principles sciences or through heuristics. That approach is going to be augmented by something else, and this machine learning approach has three pillars. One, the large amount of data that we now have, combined with, two, new algorithmic innovations called deep learning, which are repeatable: AI that's repeatable, AI that you and I can engage with. And third, it requires a lot of computation to bring all of this together so the software writes itself. The observation therefore translated to: you need to be a data-driven company. A company like ourselves, if we want to take advantage of this capability, needs to have a data strategy. How do we capture the data? How do we figure out what data to use and what data not to use? How do we come up with clever ideas about what kind of data we
fuse together, called features, to train our models? What kind of infrastructure do we need to create? Until now, our company really didn't have supercomputing infrastructure. We now have one of the world's largest supercomputers; I think we're well within the top hundred, probably the top fifty, supercomputing companies in the world. How do we create our storage system, our storage architecture? How do we develop all the software on top? What's the methodology, what are the tools we use to plumb the data from wherever we collect it, largely the edge, in the case of cars, or ground truth, whatever it is? How do we get this data piped through the company, computed efficiently, and get results back into the engineers' hands fast enough? How do we validate and simulate the results? All of this transformed our company, frankly.

Do you use this for chips, for design, for fabrication?

We use it for everything. We use it for designing chips, we use it for designing systems and improving our yields, we use it for computer graphics. As you know, we're a computer graphics company, and we used GPUs to enable deep learning; now we use deep learning to go back and reinvent computer graphics. For example, there's a new field of computer graphics that we've made real-time, called ray tracing. But even with all the computation we can now bring to bear, and the new algorithms we've created, we still can't do ray tracing fast enough. We can only generate enough samples to create, essentially, a speckle of an image on the screen, and so for everything else we've taught a neural network how to infer the rest of the scene. We give it a few dots, and it figures out the rest from all the training we've done for it. We use it for animation too: we taught characters how to animate. And one of my favorite places for robots
is in virtual reality. The first place where you're going to have interactive, realistic avatars that seem like they're AIs, that can interact with you, living, breathing AIs, is inside a virtual reality environment, most likely in video games.

So let me ask you the question before the next question: let's go back to 2016. If I had asked you then what was going to happen in the next three years, what did you see that was right, and what did you miss? What happened that you didn't expect?

Cryptocurrency. Didn't see it. I never imagined that you could plug our computer, our chip, into the wall and money would squirt out. Didn't see it coming. It squirted out really fast, and then I didn't see that it would end either. At first I was surprised; I didn't think it would last, and then after a while I wished it would last forever. Okay, I confess, that was truthful.

So what were some of the good things we did? For example, data science. We realized that deep learning is an algorithm, a body of algorithms, that's part of a larger field called data science. That was a great observation that changed everything for our company. We focused on ingesting data, processing data, doing data analytics: petabytes and petabytes of data coming from different places for us to, if you will, wrangle with our GPUs. That's a supercomputing problem, really hard to do, and we developed a ton of software for it. It's called RAPIDS, and it's open source; you guys should really take a look at it. It's probably even more important than all the work we've done for deep learning. We dedicated a bunch of resources to machine learning, a larger body of work beyond convolutional neural networks. And then lastly, we figured out that it's not just about data centers; it's about data centers and clouds and the edge working together,
that unless you can figure out a way, as a company, to get data from the edge into your company, create software from it, then get the AI back out to the edge, and have the system work continuously, it doesn't come together. With the work we announced today, the partnership we announced today, essentially we've taken the AI and put it out at the edge. There's a reason why the AI needs to be at the edge. In the context of a robot or a car, the latency is too important; the contextual information of the environment around it can't afford the round trip to the cloud and back. In the case of radiology, the privacy of the data is too important. The expertise is physical; it's in the radiologist; it's local. The data they have is local, and the expertise tends to be local. It's hard to put everything back into the cloud, so you want to put computing at the edge, so the AI can be created at the edge, then federated in the cloud somehow, and then brought back to the edge. That edge computing was really a big deal.

So let's talk about that; that's a really interesting concept. Do you think it's domain-specific, where the intelligence should be: edge, cloud, back and forth? Does it depend on the use case?

Yeah, the amount of it depends on the use case. In the case of a self-driving car, the amount of computation we have to put at the edge is a lot, and the reason is that the car is traveling at 75 miles an hour and everything is moving, so the perception is computationally intensive. We use multiple sensors; we do sensor fusion. It's no different than radiology; you guys do sensor fusion too. Sometimes you just use CT, sometimes an MRI, sometimes ultrasound, sometimes a combination of all of them. We do the same thing in self-driving cars: we fuse the sensors, we use different algorithms. You do
the same, surprisingly, in radiology. We use a concept called redundancy and diversity. You can't just have one intelligence make a decision; you try to have multiple intelligences make the decision, and you hope that the approaches they use are diverse, not exactly the same. For example, in the case of radiology, in the work we just announced today, the doctor is the initial body of intelligence, now augmented by another form of intelligence; it's now redundant and diverse, and the two of them will augment each other. And of course we're going to have other augmentations. For example, if another doctor from another place were to use their data and create an AI that was very effective, instead of fusing the two you might decide to just have the two of them sit next to each other, and both inform you, the doctor, about the diagnostics.

So one of the announcements Jensen's talking about today was this notion of democratization of AI, a little bit of what I talked about in the last session: the complexity of being able to move data around in healthcare. One notion is to say, if we created a solution here at Partners, and we shared it with Ohio State in model form, then Ohio State could take it, optimize it for their data, and get the accuracy back up again. So first, is that the case in any other domain, or do you think it's something that has to happen in healthcare because the data can't move around like it does in other areas?

In fact, it's almost exactly the same parallel in the self-driving car; it's just that your challenges are much harder than in the case of a self-driving car. Let me give you an example. You have a network of cars, and what one car experiences is not exactly the same as what another car experiences, because one car could be mostly in a city and another mostly in a rural environment.
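The redundancy-and-diversity idea above is, in machine learning terms, an ensemble: several independently built predictors vote instead of one model deciding alone. A minimal majority-vote sketch, with string labels standing in for real reads from a doctor plus diverse models:

```python
# "Redundancy and diversity" as a minimal ensemble: several independently
# built intelligences vote, rather than one model deciding alone. The string
# labels are illustrative stand-ins for a doctor's read plus diverse AI models.
from collections import Counter

def majority_vote(predictions):
    """Return the label that the most voters agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Three diverse opinions about the same finding:
reads = ["malignant", "malignant", "benign"]
print(majority_vote(reads))  # 'malignant'
```

The hope Huang describes is that the voters' errors are uncorrelated, so the combined decision is more reliable than any single one.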
Another car could be in China, another in Europe, another in the United States, and the roads are slightly different. However, every single time there is, if you will, a mistake, and the driver intervenes, it becomes a new label. It's exactly the same as a radiologist with this AI tool we're working on together. You put it in the hands of the radiologist; if it's exactly the right diagnosis, big thumbs up. If it's not, the AI asks how you would adjust it, and with a small tweak the doctor can say: this is the right segmentation, or this is the right identification. All of a sudden that new piece of information goes back to improve the overall network. The same thing happens with a self-driving car: there's a sign that says stop and you didn't recognize it as a stop sign, because, well, I just bought a company in Israel, so maybe it's in Hebrew, and so it's backwards and upside down. It's true; do you guys know why it's backwards and upside down? Because the language is so old it was chiseled, and therefore it has to come from that side. We know, it's not that interesting. Oh dear.

Okay, so a couple of questions on this. So when a car driver makes a mistake... and I just want to talk about radiology. I know, every time I see Keith he just wants to talk about one thing. He pretends to care about my work; he only uses my work to inform his work. He only cares about radiology and healthcare. Guilty as charged. So with autonomous vehicles, if there are millions of drivers making mistakes, or the cars making mistakes, is that all coming back up to a cloud somewhere? I assume the answer is yes. So let me ask the real question: how do you then make these on-the-fly changes? How do you make sure that upgrade is approved? Who do you have to go through, regulatory and all of that, to have all those things happen?
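The loop just described, where every human intervention becomes a new label, can be sketched as follows. This is an illustrative outline, not NVIDIA's or the radiology tool's actual code; the scan IDs and labels are hypothetical.

```python
# Sketch of the "intervention becomes a label" loop: when an expert (a driver,
# a radiologist) corrects the model, the correction is kept as new ground
# truth for the next training round. IDs and labels here are hypothetical.

training_set = [("scan_001", "normal"), ("scan_002", "nodule")]

def record_correction(training_set, example_id, model_label, expert_label):
    """Keep the expert's answer as a new labeled example when it disagrees."""
    if model_label != expert_label:
        training_set.append((example_id, expert_label))
    return training_set

record_correction(training_set, "scan_003", model_label="normal", expert_label="nodule")
print(len(training_set))  # 3: the disagreement became a new training example
```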
You've got to go through the whole loop again. You take all of that data, which is no different than the originally labeled data; it's just new ground truth. Then you've got to go through the entire path, including validation, including testing, including simulation.

So it's not truly continuous learning, where changes happen on the fly?

No, that's right. It won't be continuous. However, the data will be continuous, and the loop will be continuous. And here's the big insight, if you just describe this: in the future, not only will your software be software-defined, because it's getting updated all the time, and the capability will be defined by the software, so your car needs to be programmable, your medical instrument needs to be programmable, your doctors need to be in front of a programmable system that can run all of these interesting neural networks as they come out; the more important thing is this: your company needs to be software-defined. The whole company needs to be software-defined. In the future, all of our companies are going to become, from a block-diagram perspective, AI companies. If you draw the architecture of today's modern AI companies, the ones building AI products, as a block diagram, it looks just like an AI computer. Identical. The whole path, the whole loop of data coming in: the company perceives, reasons, plans, and then takes action, actuates. It's just one of the outer loops, or the inner loops, of what is ultimately the software-defined product. And that flow, petabytes and petabytes of data coming in, and figuring out how much of it you should reject, ideally 99.999%, because your AI is so smart that it realizes: I've read this before, I know this already, I don't have to keep learning multiplication because I get it already. Rejected, rejected, rejected.
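That reject-what-the-model-already-knows filter can be sketched as keeping only the samples the model is still uncertain about. A minimal illustration: the confidence values are stand-ins for a real model's outputs, and production systems would use more sophisticated novelty measures.

```python
# Sketch of the "reject what you already know" filter: keep only the samples
# the model is still unsure about, discard the rest. Confidence values are
# stand-ins for a real model's outputs (illustrative only).

def select_for_training(samples, confidences, threshold=0.95):
    """Keep samples whose prediction confidence is below the threshold."""
    return [s for s, c in zip(samples, confidences) if c < threshold]

samples = ["frame_a", "frame_b", "frame_c", "frame_d"]
confidences = [0.999, 0.62, 0.97, 0.40]  # model certainty per sample

print(select_for_training(samples, confidences))  # ['frame_b', 'frame_d']
```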
That way you can focus only on the things you need to learn from, which are still gigantic amounts of data. They go through data analytics, data preparation, feature engineering; through machine learning of different types; through validation; through simulation; eventually into experimentation; and when it finally passes, you OTA it out to the products. You run that loop continuously. That is basically the loop of artificial intelligence, and future companies are going to be software-defined companies.

So, back to radiology. I know, I know, you're killing me. No, I love radiology; it's the lens through which you and I know each other. So last week the FDA came out with a white paper, and it was comforting to see that they're actually considering continuous learning. They know this is a problem, and I champion their solution: rather than just shrinking the pre-market regulatory approval process, actually putting post-market analysis in place, to be able to surveil these things in the wild, looking at these algorithms and recognizing that things change. Our modalities change, MRI and CAT scans change, patient data changes. So even if the algorithms stay the same, the things coming at them are going to look different. It's probably not enough to control an algorithm on the way in; you also have to monitor it after it gets out there. Do you think that's the way to go? Is it similar in other domains? Has automotive looked at this and said, well, we never thought it was going to rain on a mountain, and now it is, so something's happened and we need to reconsider?

One of my favorite things, and it was a really weird thing when I first saw it a long time ago, groundbreaking for me in the way I think about developing products, was the concept of beta. If you look at cloud software, it stays
in beta for years, as millions and millions, hundreds of millions, of people use it. Frankly, I think it's still in beta; every single cloud application that we know is in beta, and the reason is that it's changing all the time. The idea that a software product would be shrink-wrapped ended with Windows Vista; it's the last product like that that I know of. And I don't mean that as a knock: it was a milestone product, and many of us, including myself, worked on Vista for five years. The reason it was such an important observation isn't what you think it is. It's that software finally became such a gigantic body of work, and it needed to be a gigantic body of work, that you can't do bodies of work like that anymore. You can't shrink-wrap it. There is no moment of: hey guys, hit the button, print ten million of them. It doesn't exist. In the future, our companies will be software-defined, our products software-defined, our companies programmable; the product is software, programmable, with OTA happening all the time and QA happening all the time. If we can't get our arms around that idea, we won't get the benefits of the new exponential, the new Moore's Law. And it's happening, you guys. Silicon alone is no longer advancing at the rate of Moore's Law; the amount of data, plus silicon, plus algorithms is moving at the rate of Moore's Law. You mentioned this earlier in your talk: there is no question that we now have software that is superhuman, able to do one specific task better than we can. Literally five years ago it was dumb as a brick, and somewhere in those five years it became better than any human in the world. If that's not Moore's Law, it's Moore's Law squared, probably, or cubed. There's no question that this type of software, which is data-driven, algorithmically learned, algorithmically written, on
a supercomputer: this combination, this cauldron of three things happening, is going to produce some amazing results.

So I'd be remiss if I didn't, in front of this crowd, ask you, and I'll pause just to torture you a little bit: you've probably got a better vision than anybody of what's going to happen in the next five years, with that kind of hockey-stick curve, beyond chips and into intelligence. What are the capabilities, and how do we prepare for that? How do you prepare the company for that?

Well, the thing we're doing is making sure of a few things. Number one, my job is to create the conditions by which the company can succeed. I can't guarantee it succeeds; I can only create the conditions by which it succeeds. There's no question that velocity and agility in my field of work are vital to survival; it's completely existential. So the thing I have to do is make sure of the things we just talked about: that we are a data-driven company, that we are a software-defined company, that the computing infrastructure of our company is leading-edge and world-class, that we understand the pipeline, the flow of data coming in, curated, wrangled, learned, validated, simulated, tested, OTA'd, and that this process, this loop, is absolutely world-class. I have to create the conditions by which our company understands that, and I have to make sure I've got the right expertise in the company. The rest of it is just strategy, and strategies change all the time; we're smart, and we can stay on top of that.

As for the next five years, a couple of things. I expect tremendous breakthroughs, which we talked about at the beginning. I expect that natural language processing will surprise everybody. Our ability to have an agent go off, read a whole bunch of documents, and summarize them for us is going to be fantastic. Our ability to,
the next time we do a search, get not only a whole body of links but an actual summary of what it learned, and then a body of links if I want to go learn more.

Now, is that separate from conversation, though?

I think conversational language processing will also surprise us. Of course, it learned from a whole bunch of patterns, so is it soulful conversation? I doubt it. But it will hold our attention; it'll tell us a story. One of the things that's really amazing is what OpenAI did with their GPT-2. It's called one-shot learning: basically it read through, I think, eight million web pages, eight million documents, and it can complete a sentence, meaning it can figure out the next word. And once it figures out the next word, it can figure out the next word after that, and the next one after that. The way to think about it is that you give it a seed. So, for example, you type "Jensen goes to Boston," and it'll finish the story. And it might even be contextually aware, because it knows my calendar, it knows a few things, it knows you and I get together once a year, so it might just say: Jensen's going to go to Boston, to what he calls the women... and what do you guys call it? WMIF. Okay, all right, so we're going to need to work on the name of this conference, otherwise known as "the women..." Who does that? Chris? Chris does. Sorry about that. That's good humor. That's creepy. Okay, you've been starting to really rub off on me.

So last year we were talking about autonomous vehicles and the challenges automotive companies had, and you said you spent a ton of money on people annotating things: lanes, stop signs, pedestrians, cars. Is that still a challenge for AI, and is it ever going to go away? How does a company like yours approach that challenge, here and in the other domains?
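The next-word loop Huang describes for GPT-2 can be sketched with a toy stand-in model. A hand-made bigram table replaces the real network, but the generation loop (predict the next word, append it, predict again) has the same shape:

```python
# The generation loop Huang describes: predict the next word, append it,
# predict again. A tiny hand-made bigram table stands in for GPT-2 itself
# (illustrative only; the real model predicts from a learned distribution).

bigram = {
    "Jensen": "goes",
    "goes": "to",
    "to": "Boston",
}

def complete(seed, steps=3):
    """Repeatedly append the model's most likely next word."""
    words = [seed]
    for _ in range(steps):
        nxt = bigram.get(words[-1])
        if nxt is None:  # the model has nothing more to add
            break
        words.append(nxt)
    return " ".join(words)

print(complete("Jensen"))  # 'Jensen goes to Boston'
```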
We're not annotating cars and lanes anymore; we're now annotating scenes.

What does that mean? Give me a scenario.

So, for example, intersections, as it turns out, are really tricky. They don't always have lights, they don't always have stop signs, they don't always look like intersections. Somehow you and I know that. We drive up to an intersection and go: I'm not gonna fall for that, I'm not gonna drive right through it. Come on, it's okay. No, I'm not falling for it; I'm gonna slow down. Somehow we figure it out, and those are the circumstances we annotate.

Can you have AI annotate for you?

Yeah, we do. For example, we use AI that annotates cars by itself. We just go "dink," that's a car, and it goes: oh yeah, I recognize this car, and it draws a bounding box around it. Because if we can figure it out, we can teach an AI how to figure it out. And we do the same thing in the work we're doing with you guys in medical imaging: you start out with a small lead, and it finishes the rest of the sentence for you. Does that make sense? You give it a little bit of a hint, and it tries to finish the rest for you. It's no different: you point at the heart, and it fills in the rest of the heart. It finishes the sentence for you, except instead of language, it's the visual world; the technology is very similar. So you can use AI to train AI. We use AI to go train AI, and then where we take over, where we now add a ton of value, is the hard stuff. All the brute force, the grunt work, is now done by the AI; the hard stuff is where we get involved. How do we teach an AI to understand a circumstance that is, gosh, really subtle? Now you have to apply some real intelligence, and this is where AI is going to go.
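Using AI to annotate for AI, as described above, is commonly done by pseudo-labeling: the model's confident proposals are kept as automatic annotations, and only uncertain ones go to a human. A minimal sketch with made-up detections:

```python
# Pseudo-labeling sketch of "AI annotating for AI": a trained detector
# proposes boxes, confident proposals are kept as automatic labels, and only
# uncertain ones go to a human reviewer. The detections are made up.

detections = [
    {"box": (10, 10, 50, 40), "label": "car", "score": 0.98},
    {"box": (60, 20, 80, 35), "label": "car", "score": 0.35},
    {"box": (5, 70, 30, 90), "label": "sign", "score": 0.91},
]

def split_annotations(detections, threshold=0.9):
    """Separate machine-labelable detections from ones needing human review."""
    auto = [d for d in detections if d["score"] >= threshold]
    review = [d for d in detections if d["score"] < threshold]
    return auto, review

auto, review = split_annotations(detections)
print(len(auto), len(review))  # 2 1
```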
You ask what's going to happen: we're going to have AI do a lot of our grunt work for us. It's doing it already. For example, it goes through our spam, it goes through our junk mail; if it weren't for AI doing that, we'd be flooded with junk mail, so somebody is already cleaning that up for us. Now it's even sorting, in Outlook, between things it's learned I'd be interested in, things it calls focused, and things that are background. Pretty soon it's going to prioritize my email for me, because it knows which ones I go to naturally, or spend more time in, or take action on.

Well, we've gone a little over time. Chris is right there. Any closing thoughts?

I think... remember what AI is. Remember, ultimately, what we're talking about. Are we going to harness AI? Should we put AI to work? How should we think about AI in the context of our company? Remember what AI is: AI is software. Should we use software in the future of our work? The answer is absolutely. This software just happens to be magic software, in the sense that it's software that writes software. And one of the things that is most powerful for me, as a tool, is to realize what the purpose of software ultimately was. Software's fundamental purpose is automation. AI, therefore, is the automation of automation, and if we can harness the automation of automation, imagine what good we could do.

That's wonderful. Thank you very much. Thanks.
Info
Channel: World Medical Innovation Forum
Views: 21,079
Rating: 4.80756 out of 5
Keywords: Keith Dreyer, Jensen Huang, NVIDIA
Id: 0W_OaPa8v7Y
Length: 35min 8sec (2108 seconds)
Published: Mon Apr 08 2019