LiDAR for Autonomous Vehicles: The Future of 3D Sensing and Perception

Video Statistics and Information

Captions
Welcome, all. If anyone wants to sit, there are still about five seats open for the three of you in the back. Thank you for sticking around; this is the last talk of the day on this stage, I believe. That was a really good introduction, so I'm not going to say much more by way of introduction. At Quanergy we make solid state lidar; that's our core capability, though we are also strong in software and full system integration.

I have to show this slide. All of you here already know that we are what's called a unicorn. That's part of what's interesting: it's always exciting when us optical geeks manage to find the right application and the right need and make it big in the business world. In our last round we raised 90 million on a 1.5 billion pre-money valuation, we're a couple of billion in valuation now, and our next event is the IPO. We have 130 employees and will be 200 by the end of the year. We are hiring lots of optical people, and this is probably the strongest pool of talent anywhere. We have offices in 10 locations globally, listed here, on every continent.

These are some of our public partners. In reality we have about 300 partners, but the ones who gave us permission to mention their names are Mercedes, Renault-Nissan, Honda, Kia, Delphi, Koito, which is actually the largest headlight maker in the world, and Sensata, which is the former sensing division of TI.

What makes us jump out of bed, although it feels like we sleep only two hours a day, is the mission and social impact of the company. We say that we do five things: we save lives, most importantly, we save space, we save time, we save energy, and we save cost. That's very motivating for our team.

We are after four pillars; a pillar is basically a group of applications. Transportation, broadly defined: of course autonomous vehicles for passengers, but also trucks, trains, and the infrastructure that goes with them. Security: everything from smart homes to smart borders; we were in the news a bit recently because of that. Industrial automation, which includes many applications in warehouses, mining, agriculture, and so on. And 3D mapping, both terrestrial and aerial. Those are some of the applications that fall in the four pillars. The key point is that in all cases the solutions a good lidar system provides should lead in cost, performance, reliability, size, weight, and power consumption.

So, why lidar? We're all optical people here; it's the only true 3D sensor. Sometimes people will say, why not video, why not radar? Radar is a 1D sensor: it tells you how far something is, with virtually no information in X and Y about how big it is or what it is, just that there seems to be an obstacle at such a distance. It's 1D, but it's really good at it: very precise, very robust, and it works in fog. Video is 2D. In video perception you have to analyze each 2D pixel array, frame by frame, and try to estimate how far something is, how big it is, how it might be behaving, and so on, and the outcome of all that crunching is really a low-quality version of the raw data that we can get with lidar. Lidar is all direct measurement: we send out pulses all around the sensor and we get a 3D envelope of the environment with centimeter accuracy up to 200 meters. We don't have to do any processing to figure out how far someone is, how big they are, what they are, or how they are behaving.
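As a quick illustration of that direct time-of-flight measurement, here is a minimal sketch (illustrative only, not Quanergy code) of how a measured round-trip pulse time becomes a range:

```python
# Minimal time-of-flight range calculation (illustrative sketch,
# not Quanergy's implementation).
C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_time_s: float) -> float:
    """Convert a measured round-trip pulse time to distance in meters.

    The pulse travels to the target and back, so the one-way
    distance is half the round-trip path.
    """
    return C * round_trip_time_s / 2.0

# A target at 200 m returns the pulse after about 1.33 microseconds:
print(tof_to_range(1.334e-6))  # ~200 m
```

Centimeter accuracy at these ranges implies timing resolution on the order of tens of picoseconds (2 x 0.01 m / c is about 67 ps), which is why the TDC (time-to-digital converter) circuitry discussed below matters.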
So in autonomous vehicles, everyone agrees that the most capable sensor is lidar and that the primary sensor must be lidar; I would be more than happy to have a discussion if someone disagrees. What we do is make it commercially viable. Even Elon Musk, who made some statements about lidar, said, for instance, that you don't really need a lidar, that maybe it's too good for the job, especially when it's expensive, and that maybe we can make do with a suite of other sensors. But even he never said that lidar is not the best technology. He made that statement a couple of years ago, and it was true then: a lidar used to cost between eight thousand and two hundred thousand dollars, and even at the low end, at eight thousand, you cannot have a few of those in a vehicle. So when people used expensive lidars, even when they were willing to pay for one, they were only able to pay for one, as you see on the Google car. When you have a single lidar, you have to put it on top of the vehicle, up on a pedestal, to minimize the shadows, and still you're compromising in terms of seeing around the vehicle; and of course that's $80,000 just for that particular unit. Even at eight thousand, that's still a big chunk of money just for the hardware.

The vehicle on the bottom right is the Mercedes GLE 450 AMG Coupe that our partner Daimler gave us. Our other partner Delphi reconfigured the fascia, and hiding behind it are two solid state lidars. So when you buy the vehicle, it looks like that, like the red vehicle: you don't see the sensor, and that's the way it should be. We're not going to drive around with a lab bench on the roof of the vehicle.

This is a simple version of our roadmap. First we built a mechanical lidar, because it was easy and it allowed us to start developing the software. Our core product is the solid state lidar. The first version is the automotive solid state lidar shown here, the Gen 2; it's basically the size of two decks of cards, and two of those are hiding behind the grille of the red Mercedes that I showed you. This is not a sales pitch, but price is a key point for adoption, and that's why I talk about it. We were able to reduce the price even for the mechanical unit to $1,000, but this one is $250 in volume, and that's key. When we started the company, we talked to automotive OEMs, and they told us: if you don't show us a roadmap where you're going to break the $100 barrier, don't even start the company. So we had to have a plan. Everything in here is silicon CMOS, all at a hundred-plus nanometer node, really the cheap stuff; the magic is all in the photonic circuitry, which is robust, has a high yield, operates across a very wide temperature range, and so on. We have five types of silicon CMOS chips inside this sensor today, and I'll talk about all of them. Long term, as you see in Gen 3, all of these chips, being based on the same silicon CMOS platform, will be integrated on one substrate, so the whole lidar will become one chip. At that point it can go in a cell phone, it can be wearable, it can go in a helmet, it can go inside the Nest thermostat, behind every light switch in every home, and so on.

The main ICs, and a couple of those are PICs (there are not many conferences where I can say "PIC" and people understand what I mean), are the photonic ICs: the optical phased array on the transmit side and the SPAD array on the receive side, plus the control ASIC on the transmit side, the ROIC, the readout IC with the TDC circuitry, on the receive side, as well as the processor. These are the guts of the sensor, basically what's inside the box. The optical phased array is basically a reconfigurable lens on a chip. The lensing effect is achieved by controlling the phase, which slows down the light the way you would normally slow it down with a different thickness of glass in a normal lens. So this acts like a lens where, at every point in an array of a thousand, five thousand, or a million elements, you can change, if you think of it as a glass lens, the thickness of the glass. Since the lensing effect is built in, you don't need a lens in front of the emitter; that's why right in front of the transmitter there's no lens, just a flat plate. Much like a phased array radar, the optical phased array forms the radiation pattern by phase control: you can control the direction, and you can control the spot size, if you have the right circuitry and capability to do that.
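To make the phase-control idea concrete, here is a small sketch under textbook phased-array assumptions; the wavelength, element spacing, and element count below are illustrative choices, not Quanergy design parameters:

```python
import math

# Textbook 1-D phased-array steering sketch (illustrative only).
WAVELENGTH_M = 1.55e-6   # a common telecom-band wavelength, assumed
SPACING_M = 0.775e-6     # half-wavelength element spacing, assumed

def steering_phases(num_elements: int, theta_deg: float) -> list[float]:
    """Phase (radians) to apply at each emitter so the wavefronts
    add constructively in the direction theta_deg from boresight."""
    k = 2.0 * math.pi / WAVELENGTH_M          # free-space wavenumber
    delta = k * SPACING_M * math.sin(math.radians(theta_deg))
    # A linear phase ramp across the aperture tilts the beam.
    return [(n * delta) % (2.0 * math.pi) for n in range(num_elements)]

phases = steering_phases(num_elements=1000, theta_deg=10.0)
```

Adding a quadratic term to that linear phase ramp focuses the beam instead of merely steering it, which is the reconfigurable-lens behavior described above.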
On the receiver side we have a SPAD array, single photon avalanche diodes made in silicon. You might say SPAD arrays in silicon don't exist; well, that's also part of our IP. We make SPAD arrays in silicon CMOS, and that SPAD array is chip-to-chip attached to a readout IC, and the output is then sent to the processor.

Now, on the receiver side we do quite a bit of signal processing to get high quality data and also to make our system robust against background light, robust against hacking, and so on. The first thing we do is TCSPC, time-correlated single photon counting, which is statistical analysis of the data you collect; I'll show you how that works. We send multiple pulses: each one of these lidars collects roughly half a million, more precisely four hundred eighty thousand, points per second, and for each one of these points we don't actually send a single pulse, we send a pulse train. That pulse train is coded with a pseudo-random code, and we have a matched filter to check that the signal coming back has the right code; and we do a number of other things, including spatial and temporal correlation.

With TCSPC, for every laser pulse that we send we do binning: we have time bins in which we count photons. This is a single photon avalanche diode; by the way, that's the origin of the name of the company, Quanergy, from quantum and energy, because we count quanta of optical energy coming back at us. So you count photon by photon, you create a histogram, and then you use statistical analysis to make the best measurement of the time of flight.

Then there's the coding. Every time we collect a sample, we send a pulse train; in this case, for instance, the code is 1 1 1 0 0 1 0, and we apply a matched filter based on this code. That allows you to pick out the signal from the noise. When you look at the light that comes back at you, it might look so noisy that it seems impossible to pick the signal out, but when you have a code like this, it actually becomes very easy to pick the signal out of the noise.
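As a toy illustration of that coded pulse train and matched filter (a sketch of the general technique with made-up photon counts, not the actual pipeline), correlating a noisy histogram against the code 1 1 1 0 0 1 0 makes the return's arrival bin stand out:

```python
import numpy as np

# Toy matched-filter sketch for a coded pulse train (illustrative of
# the general technique only; bin counts and noise levels are made up).
CODE = np.array([1, 1, 1, 0, 0, 1, 0], dtype=float)

rng = np.random.default_rng(0)
num_bins = 200
true_arrival_bin = 120  # unknown to the receiver; we want to recover it

# Simulated per-bin photon counts: Poisson background noise plus the
# code pattern echoed back starting at the true arrival bin.
counts = rng.poisson(lam=2.0, size=num_bins).astype(float)
counts[true_arrival_bin:true_arrival_bin + len(CODE)] += 6.0 * CODE

# Correlate the histogram with the known code; the peak of the
# correlation marks the most likely time-of-flight bin.
correlation = np.correlate(counts, CODE, mode="valid")
estimated_bin = int(np.argmax(correlation))
print(estimated_bin)  # ~120
```

Because the code is pseudo-random, background light or another lidar's pulses are very unlikely to correlate strongly with it by accident, which is the robustness against interference and hacking mentioned above.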
We also look for temporal concurrency: for each pulse train that we send, for each 1 (as opposed to 0) in the code, we expect to measure the same time of flight. If all of the 1s in the code do not give us the same measurement for the time of flight, we just discard that sample and go on to collect a new sample. And there's spatial concurrency. People who come from the camera or video world sometimes don't understand how the SPAD array works: it's not an imaging sensor. You're not imaging the scene on this SPAD array; the whole SPAD array acts as one component that collects, sequentially, the measurements of the time of flight for each pulse, and the pulses are sent sequentially. You have to hit multiple adjacent pixels in this SPAD array to say that you have a hit, and that's spatial concurrency.

Now, once you create this hardware, the solid state lidar, you can have lots of fun on the software side, because a lot of the capability and the algorithms can be controlled by software: for each application you decide how to react to the scene that you are sensing and what to do about what you're seeing. For instance, say you see a deer in the middle of the road. You can alternate between looking at the big picture with the big spot and looking at the deer with the small spot, one point from here, one point from there, and so on, so you keep your eye on the big picture and yet at the same time you get a high-resolution image of, in this case, the obstacle that's in your path. Also, because it's not a mechanical system that, say, spins a lidar and scans the scene sequentially, here you have random access: you can point in this direction, then disappear and reappear in that direction, without physically scanning in between. That's valuable in terms of giving you more capability. You can adjust the window of interest: you have four hundred eighty thousand points per second, and in an autonomous vehicle application you might say, I want very high resolution data along the horizon, so I have a very high quality envelope of pedestrians and any objects that are on the road along the horizon, and I might use a larger spot looking up and down at lane markings, bridges, and so on.
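Here is a hypothetical sketch of what such software-defined, random-access scanning could look like; the scan-point structure, the patterns, and the budget split are my illustrative assumptions, not Quanergy's actual interface:

```python
# Hypothetical random-access scan scheduler (illustrative assumptions,
# not Quanergy's API).
from dataclasses import dataclass
from itertools import cycle, islice

POINTS_PER_SECOND = 480_000   # measurement budget quoted in the talk
FRAME_RATE_HZ = 100           # assumed frame rate for this example

@dataclass
class ScanPoint:
    azimuth_deg: float
    elevation_deg: float
    spot: str                 # "wide" = big spot, "narrow" = small spot

def coarse_pattern():
    """Sparse sweep of the whole field of view with the big spot."""
    return [ScanPoint(az, el, "wide")
            for el in range(-10, 11, 5) for az in range(-60, 61, 4)]

def roi_pattern(center_az, center_el):
    """Dense grid over a window of interest, e.g. a deer on the road."""
    offsets = [i / 10 for i in range(-30, 31, 5)]  # +/- 3 deg, 0.5 deg steps
    return [ScanPoint(center_az + da, center_el + de, "narrow")
            for de in offsets for da in offsets]

def frame_schedule(roi_center):
    """Alternate one point from the big picture and one from the ROI,
    filling one frame's share of the per-second point budget."""
    budget = POINTS_PER_SECOND // FRAME_RATE_HZ
    pairs = zip(cycle(coarse_pattern()), cycle(roi_pattern(*roi_center)))
    flat = (pt for pair in pairs for pt in pair)
    return list(islice(flat, budget))

# Deer near boresight, slightly below the horizon:
schedule = frame_schedule(roi_center=(0.0, -2.0))
```

Because there is no spinning mechanism, reordering this schedule costs nothing, which is exactly the random-access advantage described above.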
We did the world premiere of this product at CES 2016; the world premiere was the first time we made it work, but it became a product that we are providing to customers this year. So at CES this year we did the actual product launch, and we received the Best of Innovation Award from CES; that's the pin that I wear every day. One of the very exciting and very innovative products that we showed at CES was this headlight that we jointly developed with Koito. Koito is the Japanese headlight maker, actually the largest headlight maker in the world; they make headlights for Toyota and other companies, also for the VW Group. In this case Koito built this LED headlight with two of our solid state lidars built in. They used our smaller lidar. The S3 name just stands for SSS, solid state sensor; people think it means three dimensions, or that it has three of something, but no. The smaller one is called the S3 Qi, pronounced "chi" and written Qi. We developed it for drone applications, robotics, any application where size, weight, and power draw are critical, basically because you're running off a battery; this one can already go inside the helmet of a soldier or a firefighter. So that's the version they integrated two of in this headlight: you see one here, one here.

How much time do we have? Let me talk about the levels of automation. You hear about levels 1 through 5: at level 1 the vehicle gives you some level of driver assistance, and at level 5 the car fully drives itself; the car comes without a human being in it to pick you up, and when you drive to work you can take a nap in the back. Let's not worry about when, or whether, level 5 is going to happen: at this price point, even for level 1 automation, this makes sense, so we don't stress too much about whether or when level 5 is going to happen. The most exciting level is level 4. The industry already has level 1 and level 2 in quite a few vehicles: if you have a high-end vehicle, you already have lane detection, adaptive cruise control to keep the distance from the vehicle ahead, and automated parking in some cases, and all of that is level 2. In my opinion, the industry is going to skip level 3, because level 3 is the most controversial one: it's the one that requires you to take control when there's a situation the car cannot deal with, and that's not acceptable in my opinion. I think the industry should skip level 3. Most of our customers are talking about skipping level 3; some are actually doing level 3, and that's their decision and their responsibility. Some people say, I don't mind taking over when the car tells me there's a situation it cannot deal with. But usually when someone says that, they might be very athletic and young and having fun with it. If an older person is in the vehicle, takes a nap, and the car says, okay, take over now, it's different. Just because you think you can handle it doesn't mean you can sell everyone a car that tells you to take over when there's a difficult situation. So in our opinion level 3 should be skipped, and we should go straight to level 4; that's the main level of automation that makes sense. Essentially the vehicle drives itself in most situations, and in some situations you drive the vehicle, but there's no mix where the car is driving itself and then asks you, hey, help me out.

When you look at the whole perception system in the vehicle, there's the data that comes from the lidar, the point cloud, which essentially gives you an envelope of the environment at a certain frame rate; you have information from other sensors, IMU, GNSS, and so on, and other vehicle data; and when you add context based on pre-existing maps and so on, you can now do perception. There are multiple layers in the perception pipeline. First you do the data formatting, then ground plane detection, then object detection, classification, and tracking, and out comes an object list, which is basically tracked objects. And with the solid state lidar, because it's not mechanically spinning, you can go to ten, a hundred, even a thousand frames per second.
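A skeletal version of that pipeline might look like the following; every stage here is a deliberately crude stand-in for the real algorithms (which the talk does not detail), just to show the data flow:

```python
import numpy as np

# Skeletal per-frame perception pipeline (toy stand-ins only, not
# Quanergy's algorithms).

def split_ground(cloud, ground_z=0.2):
    """Crude ground-plane split: points below a height threshold are
    ground. Real systems fit a plane instead of thresholding z."""
    ground_mask = cloud[:, 2] < ground_z
    return cloud[ground_mask], cloud[~ground_mask]

def cluster_objects(points, cell=0.5):
    """Toy clustering: bucket points into coarse 2-D grid cells.
    Real systems use Euclidean clustering or learned detectors."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        cells.setdefault(key, []).append(p)
    return [np.array(v) for v in cells.values()]

def classify(cluster):
    """Toy size-based label standing in for a real classifier."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return "pedestrian" if max(extent[:2]) < 1.0 else "vehicle"

def perception_frame(cloud):
    """One frame: ground split, clustering, classification. Tracking
    across frames (which yields velocities) is omitted here."""
    _ground, objects = split_ground(cloud)
    clusters = cluster_objects(objects)
    return [(classify(c), c.mean(axis=0)) for c in clusters]

# 100 random returns in a 20 m x 20 m patch, heights 0-2 m:
cloud = np.random.default_rng(1).uniform([0, 0, 0], [20, 20, 2], (100, 3))
print(perception_frame(cloud))
```

Running this per frame, with a tracker associating detections across frames, is what produces the per-frame object list handed to the OEM, as described next.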
That's what most OEMs want: basically, hand them this object list for every frame, tell them there's a person here, the curb is right there, there's a cyclist there, and so on, maybe a hundred times per second, and they will decide how the vehicle behaves based on their brand and their willingness to take risk. Tesla has a mode called insane mode, so clearly Tesla has a certain pedigree and culture, and they think that whoever buys their cars enjoys being in a car that drives in a fairly aggressive way. But when you're in a Mercedes S-Class, maybe you want more luxurious behavior, and in a situation where a Tesla might decide to pass, the Mercedes S-Class might decide it's not worth shaking up the person, applying lots of torque, and doing a crazy maneuver, because people who buy a Mercedes S-Class tend to want luxury more than sporty behavior.

This is a visualization of what you see in every frame: the ground detection in green, a vehicle, in this case, in red, a bicycle or a bike in magenta, a pedestrian in yellow, and the stationary objects, the trees in this case, in blue. That's not what you show the driver; the driver is not going to look at this and make decisions. This is a visualization of the information that we give to the OEM so they can make decisions.

Now, the lidar is the primary sensor. People say you can always use lidar, radar, and video; we agree. However, it's critical to also agree that the lidar has to be the primary sensor, which means it's the sensor that's in charge of perception, localization, and navigation. No other sensor is as capable. This shows a configuration of a lidar-radar-video system by our partner Delphi. This data was actually collected by Nvidia using our lidar, on a first pass up 101 in the Bay Area. First they did the 3D mapping; then the car was driven in a mode where it actually drives itself. The first thing it builds is what's called an occupancy grid: you do perception, you figure out what's where and at what speed things are moving, and you collect that kind of information at a rate of ten or a hundred or more frames per second. Once you do that, you can do path planning: in this case you see this red line here, which is the path that the car has decided to take, given these vehicles and how fast they're moving. And once you've done the path planning, you navigate: you activate the controls, and the car takes over and controls the acceleration, the steering, and so on.
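As a rough sketch of that occupancy-grid step (the generic technique, with arbitrary grid dimensions; not the Delphi/Nvidia system): each lidar return marks the grid cell it falls in as occupied, giving the planner a fast, top-down map of free space.

```python
import numpy as np

# Generic 2-D occupancy grid sketch (illustrative technique only; grid
# size and resolution are arbitrary choices, not from the talk).
RESOLUTION_M = 0.2     # each cell covers 20 cm x 20 cm
GRID_SIZE = 500        # 500 x 500 cells = 100 m x 100 m around the car

def build_occupancy_grid(points_xy: np.ndarray) -> np.ndarray:
    """Mark cells containing at least one (non-ground) lidar return.

    points_xy: (N, 2) array of x, y coordinates in meters, with the
    vehicle at the grid center.
    """
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
    idx = (points_xy / RESOLUTION_M + GRID_SIZE // 2).astype(int)
    # Keep only returns that fall inside the grid.
    in_bounds = ((idx >= 0) & (idx < GRID_SIZE)).all(axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 0], idx[:, 1]] = True
    return grid

# Example: obstacles 10 m ahead and 4 m to the left.
returns = np.array([[10.0, 0.0], [10.2, 0.1], [0.0, -4.0]])
grid = build_occupancy_grid(returns)
print(grid.sum())  # number of occupied cells
```

The path planner then searches for a collision-free trajectory through the unoccupied cells, refreshing as each new frame arrives.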
This is the 3D composite point cloud that we collected with just a single pass around the block that's next to our headquarters in Sunnyvale. The data is so rich that there's not enough resolution on this screen. If you zoom in, you see this kind of cross section: you can see the curvature of the road, you can see the trees, you can even count the leaves on the trees if you want. This is color coded based on height, and you can even see the lane markings just based on the thickness of the paint, because when you drive by, you're as close as possible to the lane markings, just a few feet away, and there we have submillimeter resolution. We can also visualize the data based on intensity, which is a representation of the reflectivity of objects, and in that case any retroreflective features really stand out, such as lane markings, turn signals, and stop signs. We can also do an overlay between the point clouds that we collect and the satellite image, for positioning.

So that's all I have in terms of presentation. Those are our 10 locations globally; we are hiring basically five people a week, in all areas. So if this is exciting to you, but more importantly, if you're an expert in the space and want to start a company and so on, it's a good time. In the 1990s we all got into telecom and fiber optics, and optics was king; optics is king again now, because of this particular application of autonomous vehicles, which is arguably the hottest space in technology. So thank you very much; I'll take a few questions. [Applause]
Info
Channel: SPIETV
Views: 15,819
Keywords: SPIE, optics, photonics, innovation, science, engineering, technology, lidar, self driving cars, autonomous vehicles, LiDAR, quanergy, louay eldada
Id: LvOjKIZ5aIk
Length: 30min 35sec (1835 seconds)
Published: Mon May 01 2017