E9: Should Tesla Be Scared Of Nvidia's Self-Driving AI Chips?

Video Statistics and Information

Captions
This is a really exciting episode of Funding Awesome. Most people think that Nvidia only builds AI chips for data centers and PCs, but let me show you a different application altogether. While I was at Nvidia GTC, I had the privilege of interviewing Danny Shapiro, Nvidia's Vice President of Automotive. I used that time to learn as much as I could about Nvidia's self-driving programs, including the sensors they support for all kinds of autonomous vehicles, the supercomputers they put in every single one, how Nvidia uses generative AI and simulation to enhance the self-driving experience, and all the crazy capabilities that Omniverse is unlocking for the entire auto industry. Your time is valuable, so let's get right into it.

I'm Danny Shapiro, Vice President of Automotive at Nvidia, and we're here live at GTC. I'm excited to be here with you. So this is the Polestar 3. It's powered by Nvidia, running the new Nvidia Drive platform. We have lidar, which is a laser scanner, plus cameras, radar, and ultrasonics. All these sensors generate a massive amount of data that goes into the brain, and we have to make sense of it: what's a car, what's a bike, what's a truck, where are the lane lines and the lights? Anything out there needs to be deciphered, and this is where artificial intelligence comes in. We take that signal data and understand what you and I recognize as a person: artificial intelligence has to scan through all the pixels in a frame of video to figure out where a person is in the scene. In this case it's automated driving, so there's still going to be a driver behind the wheel. Still a driver, yep. But the brain of the car is so powerful that we can update the software over time and add new features, new capabilities, and different kinds of autonomous modes throughout the life of the vehicle.

Let's start from the top. Lidar, light detection and ranging, that's the sensor up there, right? Is there just one lidar? How many lidars? In some vehicles there are more; some of the robotaxis and robobuses have multiple lidars. In this case there's one in the center of the car, so it's front-facing. Sometimes there's one on the rear, and there can be lidars on the sides as well. We have a combination of surround cameras and surround radar, and together they help us form a 360-degree picture of what's going on around the car. Sure, okay. So where are the cameras? Cameras are also going to be right there behind the mirror. Often there'll be cameras in the side mirrors, and rear-facing cameras as well. Radar is always hidden behind the fascia, so you can't see it, and the ultrasonics you've probably seen on cars: those little dots around the bumpers. They give you short-range coverage for parking scenarios.

So ultrasonic is for short range. Why have a lidar and a radar? What's the difference between those? It's a really good question. They have different strengths and weaknesses in terms of distance, in terms of range, in terms of resolution. A camera needs light, whereas radar and lidar can see in the dark, and it's the combination of these different kinds of sensors that gives greater levels of safety and security. Got it. I have a quick question about fog: fog precludes you from using cameras, but does radar work well in fog? Does lidar? That's part of it; different weather affects different sensors, but again, by combining and fusing these different signals together, you get a more robust picture in a variety of conditions.
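To make that fusion step concrete, here's a minimal Python sketch of the idea: each sensor contributes detections with different strengths, and a fusion pass merges them into one picture of the scene. Every name here is hypothetical, invented for illustration, not part of NVIDIA's actual Drive software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", "lidar", or "ultrasonic"
    label: str         # e.g. "car", "bike", "pedestrian"
    distance_m: float  # estimated range to the object
    confidence: float  # 0.0 to 1.0

def fuse(detections: list[Detection], gate_m: float = 2.0) -> list[Detection]:
    """Greedily merge detections of the same object from different sensors.

    Two detections are treated as the same object if their range estimates
    fall within gate_m meters of each other. A real stack would associate
    in full 3D and filter over time (e.g. with a Kalman filter).
    """
    fused: list[Detection] = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for track in fused:
            if track.label == det.label and abs(track.distance_m - det.distance_m) < gate_m:
                # Keep the higher-confidence estimate; a real system would
                # blend both, weighted by each sensor's noise model.
                break
        else:
            fused.append(det)
    return fused

frame = [
    Detection("camera", "pedestrian", 24.5, 0.91),
    Detection("lidar", "pedestrian", 24.1, 0.97),  # same person, tighter range
    Detection("radar", "car", 61.0, 0.88),
]
print(fuse(frame))  # one pedestrian track, one car track
```

The point of the sketch is the interview's main claim: no single sensor is trusted alone; overlapping, dissimilar sensors are reconciled into one 360-degree model.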
Cool. So all of those sensors go to a brain. Is that brain a chip in the car, or is it in the cloud? Where is that brain? That brain absolutely is in the car, and that's our Nvidia Drive platform. This is a computer designed for automotive applications, so it has to be automotive grade; it's not like a GPU that goes in your PC. It has to operate at any temperature, from the coldest winter to the heat of driving through the desert. If you leave your phone in a car on a summer day, it'll overheat and stop working. A drive computer, and basically any circuitry in a vehicle, needs to be automotive grade and tolerate those wide temperature ranges, plus the shock and vibration of being in a car, plus dust. Those harsh conditions are what we develop the chips to withstand.

Harsh conditions are just one of the challenges. Here's another: what about when you don't have a lot of cell service or internet connectivity? Does it do everything in the car, or does it talk to the cloud? How does that work? Requiring connectivity is kind of a misconception about autonomy: you don't need connectivity to drive. Connectivity is one way you can get software updates when the car is in your garage. You can stream Spotify; if you want to search for a restaurant, that'll go to the cloud; your navigation often streams based on your GPS. But for the autonomy, that decision-making has to happen on board. It's too mission critical; the amount of time to make the decisions is too short to go to the cloud, do some computation, and come back. All that sensor data feeds straight into the car's computer, and we basically have a thirtieth of a second, about one frame of video, to identify everything in that scene and then make those driving decisions.

How are you able to make those decisions so fast? Is it some specialized algorithm? What's going on at the software level? Another really good question: that is where artificial intelligence comes in. The world is too complex to write code that says "if you see this, then do that; else if you see this other thing..." There are too many things that could happen and too much in a scene. So a rules-based approach doesn't work? Correct. With AI, we can train the system to recognize all different types of objects and scenarios, and so, in a fraction of a second, with a powerful supercomputer, we're able to understand that full environment. It almost starts to look like a video game inside the brain of the car: we're creating a digital twin of the real world inside the brain. We know our car is here, and then we can decide whether to accelerate, brake, turn left, or turn right based on everything around us.

So there is a literal AI supercomputer somewhere inside this vehicle, connecting everything together and making decisions way faster than a human can? Right, and not only way faster, but it can see way more than we can. We have stereoscopic vision; we see in a cone in front of us; we don't know what's going on at our sides, and when we look at our side mirrors we're changing our field of view. This thing has sensors everywhere. So some of that computation is just because it's seeing so much more? Absolutely, and not just visually: lasers, radio waves, sonar. The key thing, you're absolutely right, is 360-degree perception. It doesn't get distracted, it doesn't get drowsy; it's constantly monitoring. The other thing is that its level of precision is so much greater than yours or mine, because it knows precisely the distance, our speed, and the closing speed of the car in front of it, so it can apply the right amount of brake to avoid a collision. That's actually a really great point: I'm terrible at estimating distance, even though I can see in 3D. Exactly, and here you have special sensors that are dedicated just to range finding.
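That one-frame budget is easy to picture as a control loop. Below is a hedged Python sketch (every function is a stand-in I made up, not NVIDIA's API) of why a roughly 33 ms deadline per frame rules out a cloud round trip, whose network latency alone would eat most of the budget.

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0  # ~33 ms: one frame of video, per the interview

def perceive(frame):
    # Stand-in for the on-board networks that find cars, bikes,
    # pedestrians, lane lines, and lights in the raw sensor data.
    return {"objects": [], "lanes": []}

def plan(world_state):
    # Stand-in for the planner choosing accelerate / brake / steer
    # from the digital twin of the scene that perception built.
    return {"steer": 0.0, "throttle": 0.1, "brake": 0.0}

def drive_loop(get_frame, send_command, frames=10):
    for _ in range(frames):
        start = time.monotonic()
        command = plan(perceive(get_frame()))
        send_command(command)
        elapsed = time.monotonic() - start
        # Missing the budget means reacting a frame late; this is why the
        # decision has to happen on board rather than in the cloud.
        if elapsed > FRAME_BUDGET_S:
            print(f"frame overran budget: {elapsed * 1000:.1f} ms")

# Toy harness: a fake sensor feed and actuator so the loop actually runs.
drive_loop(get_frame=lambda: {"pixels": None}, send_command=lambda cmd: None)
```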
So what we're focusing on here is AI sensing outside the car. We've announced our new Blackwell processors, our next-generation GPU, which will go into Drive Thor, our next generation of car computer, and that's going to enable generative AI applications inside the car: an AI cockpit where you can have a conversation with the car. It will know everything about your car, and it will get to know you and your preferences.

What would I talk to my car about? Can you give me a few examples? Well, you could control every aspect of the vehicle with your voice. There may be diagnostics that it's running: if there's a certain vibration, there may be an issue, and it can identify it and communicate it to you, or back to the factory. "Hey, I'm getting low on gas," "Hey, I have a tire with low pressure": are those the kinds of things it will say to us? It certainly could be, but you can maybe even control other aspects of the car. And this is where the cloud comes in too. You can make requests of the car, and if it needs to go outside, say to find out what time a movie is going to start, or the weather at your destination, or anything like that, then it's a hybrid approach where it goes to fetch data from other services. So it's almost like a generative AI agent that (a) understands the car but (b) understands you well enough that I can say, "Hey, I want to go see a movie," and it'll say, "I'm routing you to the nearest theater where the movie is playing closest to now," or whatever the right answer is. It's able to do relevant work for you as you drive, not just work related to the car itself; that's what I'm trying to get at. That's super interesting.

So what does the interface for that look like? Is it a screen in there? Is it just voice-to-text and text-to-voice? How do people interface with a car AI? We've developed a technology called ACE, our Avatar Cloud Engine, and this is a way to have different kinds of avatars: a concierge in your car that can be personalized. We have a number of technologies for text-to-speech and speech-to-text, so there are a lot of different interfaces, but we can also animate that avatar based on spoken language. It's automatic animation, so we'll be able to see the thing we're talking to. So it's a little embodied, even though it's virtual. That's cool.
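The hybrid on-board/cloud split Shapiro described a moment ago can be sketched as a simple router. The topic names, handlers, and replies below are all hypothetical, just to show the shape of the idea: vehicle state is answered locally, open-world questions go out only when connectivity exists.

```python
# Hypothetical sketch, not NVIDIA's actual stack.
LOCAL_TOPICS = {"fuel_level", "tire_pressure", "diagnostics", "climate"}

def answer_from_vehicle_state(topic: str) -> str:
    # Stand-in for an on-board model reading live vehicle data.
    readings = {
        "fuel_level": "Fuel is at 15%. Want me to route past a station?",
        "tire_pressure": "Front-left tire is 4 PSI low.",
    }
    return readings.get(topic, f"Checking {topic} on board.")

def fetch_from_cloud(query: str) -> str:
    # Stand-in for a network call to an external service.
    return f"(cloud) result for {query!r}"

def handle_request(topic: str, query: str, cloud_available: bool) -> str:
    if topic in LOCAL_TOPICS:
        return answer_from_vehicle_state(topic)  # never needs connectivity
    if cloud_available:
        return fetch_from_cloud(query)           # e.g. showtimes, weather
    return "I can't reach that service right now."

print(handle_request("tire_pressure", "", cloud_available=False))
print(handle_request("movies", "showtimes near my destination", cloud_available=True))
```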
We think of autonomy in five levels, where the difference between level two and level three is pretty crucial: at level two the driver is still responsible, while at level three you start shifting the responsibility to the car. What level is this? When these cars come out, they're going to be what we call level two plus. They're very advanced, but the driver still has to be in control; the driver is responsible. Moving to level three is more of a highway pilot, and we're doing that with Mercedes-Benz. The CLA concept down in the lobby, a beautiful vehicle: their next generation will be based on that, and beyond that they'll have Nvidia Drive inside, so when those cars come out they can get software updates that add higher levels of autonomy to the vehicle. So Drive is slowly moving up those levels over time? It's the same platform, currently at level two, that crosses into level three? Exactly.

Can you speak to the differences between level two and level two plus? Level two plus usually means a lot more sensors on the car, which is what enables it to reach that next level without changing anything other than the software. Traditional level two is more basic driver assistance: emergency braking, maybe blind-spot monitoring, but usually not lane keeping and things like that. When we start to merge adaptive cruise control and lane keeping, what we're doing is getting it robust enough that we can take the driver out of the loop and know the car is going to do what it needs to do. So we'll probably see a highway pilot first, since that's a smaller set of things to worry about, and then beyond that move to an urban pilot, where you have more pedestrians.

When you say highway pilot, is that above a certain speed? Is it only on certain specific highways, or is it more a function of "hey, we recognize you're on a big road going fast, so you can now enable this feature"? How does a highway pilot work? It really varies by automaker; they can have different interfaces and different conditions that need to be met. But in general, what we see is a single-direction road with controlled exits, on-ramps, and off-ramps, and multiple lanes. The driver sets their destination and the car will just drive: it will stay in its lane, it can overtake other vehicles if they're going slower, and it can even go off of one freeway onto another freeway. Whoa. The urban pilot is then much more complex, because it's dealing with stop signs, stop lights, intersections, pedestrians, and a lot of congestion.
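Since Shapiro notes that the exact engagement conditions vary by automaker, here's only an illustrative sketch of the kind of gating a highway pilot might run before offering to engage. The fields and thresholds are made up; no vendor's actual logic is being shown.

```python
from dataclasses import dataclass

@dataclass
class RoadContext:
    divided_highway: bool   # single-direction road with controlled on/off-ramps
    lane_count: int
    lane_lines_visible: bool
    speed_kph: float
    destination_set: bool

def highway_pilot_available(ctx: RoadContext) -> bool:
    """Check an illustrative operational design domain for engagement."""
    return (
        ctx.divided_highway
        and ctx.lane_count >= 2
        and ctx.lane_lines_visible
        and 60.0 <= ctx.speed_kph <= 130.0  # made-up operating speed range
        and ctx.destination_set
    )

ctx = RoadContext(divided_highway=True, lane_count=3,
                  lane_lines_visible=True, speed_kph=110.0,
                  destination_set=True)
print(highway_pilot_available(ctx))  # True: offer to engage the pilot
```

An urban pilot would need a far larger context (intersections, signals, pedestrians), which is exactly why it comes later.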
But we're doing a lot of trials and pilots right now with a number of different companies. We have over a hundred different automakers, truck makers, robotaxi companies, and shuttle companies developing on Drive, with a whole range of different pilots. On the show floor here we have the Volvo EX90, another beautiful vehicle that will have Nvidia Drive; it's coming out this summer. That's cool. The WeRide robobus behind us is a fully autonomous bus that's operating in Beijing, in Singapore, and in Abu Dhabi. Nuro is another customer; they have their last-mile delivery robot, so you can order things from the grocery store, or a pizza, and this autonomous vehicle will drive on the roads and deliver your goods.

And all of these different vehicles will have supercomputers in them that can one day reach level three autonomy? Is that the right way to think about it? In some cases, level four. Can you quickly explain the difference between level three and level four? A level four vehicle doesn't need to have a steering wheel or gas pedal, so you don't even get human intervention at that point. Okay. That's right.

One of the questions I have is from a few GTCs ago: Jensen was talking about mapping large road networks, and that was a big effort. Do you need those digital maps? Does this rely on digital twins? Is that part of this framework? Really good question. Digital twins are something we're using throughout the entire auto industry. Let's talk about even just designing cars: Nvidia Omniverse is our platform for digital twins. Automakers can create a digital twin of the car, modify it, put it into a virtual wind-tunnel simulation, and determine the coefficient of drag and what the fuel economy of that car is going to be. They can make modifications to the design and see how each one affects the result, without ever manufacturing anything physical. No clay models, nothing, so it really streamlines the process. Then you can also do virtual crash-test simulations and see structurally how safe the vehicle is without wrecking a physical car? That's right.

And then what we can do is create a digital twin of the factory and really build out how the assembly line works: how the robots interact, making sure those robots don't collide, that there's enough space in the factory for everything to move, material handling. This is a massive win for the auto industry: they can do factory planning in the cloud, in Omniverse, before they actually break ground on the real factory. They save a massive amount of money by not having to do change orders and rebuild things. It might be "oh my god, we didn't make the ceiling high enough for the reach of the robot's arm," whatever it is. It's really a great tool.

And then we can even take that simulation into the development of autonomous vehicles. We created a digital twin of a city, we put all these vehicles in it, and we can test how they operate. Are they going to detect the children running across the street, based on the actual sensors, at the actual resolutions, going into the car? We simulate everything; we simulate all these sensors. We actually take our Nvidia Drive brain and put it in a server, and then we have a different server creating synthetic data, all the radar, lidar, and cameras, as if it were driving on the road. Our Drive computer thinks it's on the road: it's getting these signals as if it were driving, and it's making the driving decisions, turning left or right, braking, accelerating. So we can test it fully before we actually put it in a real vehicle on the road.

You mentioned at the start of this conversation the standards the automotive industry has to meet: it's highly regulated, and it has to adhere to a different standard than, say, a traditional GPU. What about the simulation data? Does it have to adhere to a certain standard, so that they know you're modeling sensors with a certain accuracy, and that something trained in a virtual environment is good enough to be put in a physical environment? That's exactly what we do: Omniverse is physically accurate. We work with all the sensor makers to model the characteristics of their sensors exactly, so we know those models are accurate. And the materials? That's right. And it's a combination: we still do real-world testing, but it's the real world plus simulation, where in simulation we have ultimate control over the time of day, the lighting, the weather, all these things, so we can repeat tests that you couldn't really repeat in the real world. If there's an issue we're trying to solve, we can continually hammer on it. Sure: you can drive at sunset all day long in simulation, whereas if you're testing what the sensors do at sunset in the real world, you only get a couple of minutes a day. That's right. So you're getting real edge cases that don't have to stay edge cases, because in a virtual environment you can repeat them. That's really interesting; that's a great point.
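The setup Shapiro describes, a Drive computer in one server fed synthetic sensor data by another, is essentially closed-loop, hardware-in-the-loop testing. Here is a toy single-process version of that loop in Python; the dynamics, noise model, and all names are invented for illustration, and repeatable conditions like weather are just parameters.

```python
import random

def synthesize_frame(world, weather="clear"):
    # Render what the sensors would report given the car's current pose;
    # degraded sensing in rain or fog is modeled as extra noise.
    noise = 0.05 if weather == "clear" else 0.20
    return {"lane_offset_m": world["lane_offset_m"] + random.gauss(0, noise)}

def drive_brain(frame):
    # Simple lane-centering controller standing in for the real stack.
    return {"steer": -0.5 * frame["lane_offset_m"]}

def step(world, command, dt=1 / 30):
    # Toy vehicle dynamics: steering moves the car back toward center.
    world["lane_offset_m"] += command["steer"] * dt * 5.0
    return world

world = {"lane_offset_m": 1.0}          # start one meter off-center
for _ in range(300):                    # ten simulated seconds at 30 fps
    command = drive_brain(synthesize_frame(world))
    world = step(world, command)
print(f"final offset: {world['lane_offset_m']:.3f} m")  # converges toward 0
```

Because the "world" is software, the same sunset, the same fog, or the same child-runs-into-the-road scenario can be replayed thousands of times, which is the repeatability argument made above.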
Behind us is the Nuro, and you can see the lidar sensor on top. This is a delivery bot; there's no driver inside. You can see on the side here the interface for how people would order something: it could be groceries, it could be pizza, and it gets delivered. Then, with their cell phone app, they're able to unlock a compartment and take out their order. They've also set up a shop here; it looks like it's doing very well, it's selling out. But there are cameras that can track what's been removed from the vehicle, and you'd automatically be charged to your account. This is awesome. Step right up. That blue button at the bottom: you can imagine you just got your order and now it's about to leave. And then we'll show you the other flow: it would pull up to your home and text you a code, saying "enter this code and it'll open the vehicle." So you'd tap on the screen and enter, say, 1-2-3-5, and then... That is super cool. That's awesome. So what am I taking? Do I get a full pizza? If you really want, but we'd suggest you start with a snack and a poster. Did you say there's a poster? Oh dude, these are cool. Can I grab one of each? I'm greedy. Thank you so much.

This is a massive convention, and we're standing behind a huge Blackwell rack right now, but Blackwell is also inside all the vehicles we just talked about, right? Blackwell is our newest GPU architecture. It's a platform that scales from a single GPU to a full rack, but that same technology is what's going into Nvidia Drive Thor, our next-generation processor that's going to enable autonomous vehicles, autonomous trucks, robotaxis, and shuttles. So what is the difference between Thor and Drive? How do we think about those? Thor is the SoC, the system-on-a-chip, that goes inside the whole Drive platform. Those ECUs along the wall are all Drive platforms from different partners, from Lenovo, from WeRide, and others. They're taking the technology here, from the data center, and putting it into that SoC, and then the whole platform has all the connectors for the lidar, the radar, and the cameras to plug in. So we've come full circle. I really, really appreciate your time. Thanks so much for walking us through all the different levels: level two autonomy today, level two plus, and soon level three autonomy. Absolutely, it was a real pleasure. Thank you so much for your time. You're welcome.

A huge thank you to Danny Shapiro for walking us through Nvidia's Drive platform, the AI chips that power it, the sensors it supports, and how Nvidia is combining real-world and simulated data to help the entire auto industry achieve autonomy. There's a lot more to Nvidia than data centers and PCs, so another big thank you to them for inviting me to GTC to learn everything I can in person and share it with all of you. And of course, thank you for watching and for supporting the channel. Until next time, this is Ticker Symbol: YOU. My name is Alex, reminding you that the best investment you can make is in you.
Info
Channel: Ticker Symbol: YOU
Views: 48,865
Keywords: nvidia vs tesla, tesla fsd, nvidia drive, nvda, nvda stock, nvidia stock, top stocks, best stocks, ai stocks, chatgpt, openai, growth stocks, tech stocks, pltr, pltr stock, palantir stock, amd stock, smci, smci stock, sora, nvidia gtc, jensen huang, jensen huang keynote, best ai stocks, best stocks to buy now, top ai stocks, nvidia gtc 2024, gtc 2024, nvidia keynote, nvidia blackwell, tsla, tsla stock, tesla stock, tesla vs mercedes
Id: R_UhKVk2Smo
Length: 19min 55sec (1195 seconds)
Published: Mon Apr 15 2024