Innoviz Webinar: LiDAR vs Camera (with Mati Shani & Dr. Raja Giryes)

Video Statistics and Information

Captions
…of Tel Aviv University. The webinar is based on an opinion article I recently wrote for Autonomous Vehicle Technology magazine; feel free to look it up on Innoviz's social media channels. The session is recorded and will be sent to you in the days after the webinar.

The webinar will discuss the following topics: the sensing technologies used in autonomous driving, namely camera, radar, and lidar. We will explore the strengths and weaknesses of these technologies and also analyze the synergies between them. We will also touch on sensor fusion, as it is related to the sensing technologies; however, it is not the primary goal of this webinar. Then we will try to answer your questions in a Q&A session. To make logistics simpler, the audience is muted; however, you are invited to submit your questions using the chat feature. Finally, we will conclude the webinar with a short summary.

When we talk about autonomous driving, we actually mean that the driver waives the driving functionality to a machine. In other words, the driver loses control of this task, and we all know that losing control is not something people like. A poll conducted by SurveyUSA on 1,200 American adults last March found a correlation between respondents' knowledge of autonomous driving technologies and their confidence in these technologies. Perhaps trivial, but very simply put: the more we know and understand a technology, the more we trust it, and this is even more prominent with autonomous technologies, in which people waive control and hand it to a machine. Bottom line: if we want consumers to adopt autonomous driving, knowledge, experience, and trust must be obtained. So the question is, how can we further increase trust? What should we do? At Innoviz we believe that ensuring customer safety should be the key design goal of autonomous driving, and therefore we are hosting such webinars to help shed some light on sensing technologies.

Let's understand why these sensing technologies are required. The best analogy to autonomous driving sensors is the human senses.
So why do we humans need multiple senses to comprehend our environment? Setting aside the biology, let's consider the following: objects have varying signatures, such as sound, size, smell, and color. The more ways we comprehend an object, the better classification we have, and with improved classification we have certainty. Is it a friend or a foe? The classical fight-or-flight dilemma. Finally, we need to think safety: what happens when one of our senses fails? For example, the vision impaired or hearing impaired retain many capabilities, as they have other senses to use. Similarly, what happens when a camera or radar fails? What replaces them? Bottom line: multiple senses increase certainty and redundancy, which leads to a much greater confidence in the overall performance and safety of the system.

Let's talk about the technologies. When referring to autonomous driving sensors, the prominent technologies are camera, radar, and lidar. Ultrasonic is another sensing technology used in vehicles, but it will not be discussed, as its functionality is fundamentally different from the other sensing technologies we are discussing today. Each technology is further subdivided by functionality or specification; for example, there are long-range, mid-range, and short-range radars, each having different functionality and specifications. Today we will focus on the variants that are the closest in their functionality to the lidar, which are the forward-looking camera and the long-range radar.

Let's start with radar, which stands for radio detection and ranging. Radar goes back to the late 19th century but was mainly adopted with the introduction of air defense surveillance systems in World War II. Radar operates in the microwave part of the electromagnetic spectrum. It sends out electromagnetic waves, hence it is called active sensing, which are reflected from targets and detected by the radar system. The system provides distance and direction to a target, hence three-dimensional, as well as direct measurement of the target's dynamic parameters, such as velocity through Doppler shift measurements.
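The Doppler-shift velocity measurement just described can be sketched numerically. This is a minimal illustration, not radar-vendor code; the 77 GHz carrier and the sample shift value are typical automotive assumptions, not figures from the webinar.

```python
# Radial velocity from a radar Doppler shift: v = f_d * wavelength / 2.
# The factor of 2 accounts for the round trip of the reflected wave.
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Target radial velocity (m/s) implied by a measured Doppler shift."""
    wavelength_m = C / carrier_hz
    return doppler_shift_hz * wavelength_m / 2.0

# A 77 GHz automotive radar measuring a ~14.3 kHz shift implies a target
# closing at roughly 28 m/s (about 100 km/h).
v = radial_velocity(14_300, 77e9)
```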
The technology is especially good in harsh environments such as low light, bad weather, and extreme temperatures. The first automotive radar was introduced by Daimler, back then Mercedes-Benz, in 1998 for automatic cruise control. Today radars are used in multiple applications and are subdivided mainly by detection range. They are considered a mature and relatively cheap technology, taking into account the many generations since they were first introduced.

Switching to camera, which operates in the visible part of the electromagnetic spectrum: it detects electromagnetic waves which are reflected from targets illuminated by the sun's light, hence it is called passive sensing. Cameras usually provide direction and classification of targets, as the technology is especially good in semantics and color recognition. But cameras have two fundamental flaws. As they depend on external light, cameras are very vulnerable in harsh environments, or completely non-functional, such as in low light and bad weather. Regular cameras also lack the ability to provide range, as the basic technology is only two-dimensional. Toyota introduced an automotive camera in 1991, and it was used for reverse driving the vehicle. In 2007 Mobileye launched systems for series production: lane departure warning on General Motors and BMW vehicles, and also radar-camera fusion for adaptive cruise control on Volvo vehicles. Today cameras are used in multiple applications and have many different variants, such as front-looking, surround, and near-infrared cameras. Cameras are considered the leading technology in ADAS due to their low price and perception capabilities, which are similar to human perception. One word regarding near-infrared, shortwave-infrared, or longwave-infrared cameras: they definitely have the potential to be integrated as additional technologies in autonomous driving, but this is dependent on their maturity, the ability to mass-produce the technology, and its price.
Time to market is especially important in trying to predict whether such technologies will be added or not.

Finally, lidar, which stands for light detection and ranging. It was first introduced in the automotive industry as part of the DARPA Grand Challenge in 2004, with lidars from Riegl and SICK. In 2005 Velodyne built and introduced its first lidar, the HDL-64E, in the second DARPA Grand Challenge. Since then, lidars have been used in most development fleets of autonomous vehicles. Lidar operates in the near-infrared part of the electromagnetic spectrum. Similar to radar, it sends out pulses or continuous electromagnetic waves, hence active sensing, which are reflected from targets and detected by the lidar's detectors. It is especially good in low-light conditions and relatively good in bad weather. The system provides distance and direction to a target, hence three-dimensional, with a very high resolution and a long range, therefore making it the perfect sensor to find all those small obstacles on the road that potentially risk the vehicle and its passengers. The picture in this slide shows an incredibly detailed image of the Innoviz parking lot, created by a single InnovizOne frame. Lidar is the youngest of the three technologies, hence the least mature, which is reflected by a higher unit price. But significant price reductions in the past years, especially those introduced with solid-state lidars, made it possible for vehicle manufacturers to integrate lidars as an important sensing technology in their autonomous driving projects. The first, and currently the only, series-production automotive lidar was introduced by Audi in its A8 at the end of 2017, using the Scala by Valeo.

So we understand what lidar is, but why has it become such a prominent technology in the last three to four years? Until 2016, lidars were mechanical systems, simple in design, usually coupling a laser with a detector, multiplying this along the vertical axis, and spinning the entire mechanism horizontally through 360 degrees.
But the technology had performance limitations, an incredibly high unit price, difficulties in scaling up the performance, and a very large and bulky size, making it impossible to integrate with mass-production vehicles. In 2016 Innoviz was founded, introducing an automotive solid-state lidar, solving all the above issues of performance, price, size, and durability. So the opportunity for car manufacturers to add a lidar to their mass-production autonomous driving projects became real.

As lidar is not as well known as cameras and radars, let's have a few words on this technology. Lidars are usually classified by three attributes: the wavelength they use, the way the field of view is created, also known as beam steering, and the method by which they calculate the range to a target. As for the wavelength, Innoviz chose to base its technology on the 905-nanometer wavelength range, as it is a mature and cheap technology; devices are silicon based, which makes their fabrication mature, efficient, and simple. There are other wavelengths used in the industry, as can be seen in the slide. As for beam steering, there are a few major technologies used in the industry, but there are many sub-variants, hence it is probably the area in which lidars differ most from one another. It is also the area in which companies mostly innovate and differentiate their technology. Innoviz chose a MEMS-based beam steering design. Similar to the wavelength considerations, we understood that price, functionality, and maturity are critical for automotive mass production, hence we relied on the maturity and pricing of MEMS technology. Innoviz developed its own MEMS mirrors, and these provide unprecedented performance. Finally, distance measurement: lidars are divided into those emitting laser pulses and those which use continuous waves. Here as well, Innoviz chose to use a simpler and cheaper mechanism: measuring the time of flight of pulses.
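The pulsed time-of-flight principle can be sketched in a few lines. This is a minimal illustration under the usual round-trip assumption, not Innoviz code; the sample round-trip time is an assumed value.

```python
# Pulsed time-of-flight ranging: distance = c * round_trip_time / 2.
# Half the round trip, because the pulse travels to the target and back.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range (m) to the target implied by the pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A photon returning after about 1.33 microseconds corresponds to a
# target roughly 200 m away.
r = tof_range_m(1.334e-6)
```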
This means that the system calculates the time from which the laser pulse was emitted to the time at which photons reflected from the target were detected.

OK, so far we have introduced the sensing technologies. Before we discuss their differences, let's briefly discuss the sensing needs of autonomous driving. The basic question is: what do autonomous vehicles require in order to drive safely? Naturally, the basic need is comprehending the road ahead and its users. Easily said, but very difficult to achieve. The sensors need to detect all possible obstacles, from small, like tire debris, to very big, like an overturned truck. All possible road users, such as pedestrians, vehicles, motorcycles, and trucks, must not only be detected but, more importantly, be recognized. Why? Take, for example, a motorcycle, which behaves completely differently than a truck; this is crucial for predicting the driving scenario. Driving infrastructure such as road signs, traffic lights, and guardrails is important for driving maneuvers as well as driving path planning. Also, the vehicle must comprehend its own driving, also known as ego motion, and its path planning with respect to other road users. For example, an autonomous vehicle cannot change lanes if it is not 100% certain that it does not cut off other vehicles by doing so.

The next obvious need is availability. Sensors cannot operate just part of the time or have very limited functionality; safe driving requires fully functional sensors, and so sensors must be usable in all driving use cases, such as highway driving, including both fast driving and traffic jams, urban driving, etc. Also, as driving scenarios are never similar, sensors need to cope with new and unfamiliar situations. Availability also requires 24/7 operation, which means all lighting and weather conditions. Finally, sensors need to provide the autonomous vehicle with the confidence to decide.
Of course, one hundred percent confidence is impossible in reality; there are always mistakes. But sensors are still required to evaluate themselves and estimate whether they are working properly or not. Also, they need to attach a confidence value to each estimated parameter they output, meaning how certain the system is of that parameter value. We are now ready to move to the next chapter, discussing the technical differences between the three sensing technologies, and I am passing the mic to Professor Giryes.

OK, so what we'll do now is survey the properties of the different sensing technologies that we mentioned; in particular, we'll compare radar, camera, and lidar. In this scheme you can see an overview of the comparison between them all, and you can see that each device has a different advantage compared to the others. This scheme emphasizes the main advantages of each device and shows that all of them are essential for making autonomous driving work. We will focus on some of these properties next; for example, you can see in this scheme that lidar has a major advantage in small-object detection and low-light performance, which are of immense importance for autonomous driving.

Now we start with the technical table that shows the main technical properties of each system. For example, you can see here that a camera provides us with 2D color information, while lidar gives high-resolution 3D information with reflectivity, which is absent from the 2D information of the camera. If we look at the high-resolution radar, it gives us low 3D resolution and a velocity estimation. Another thing that lidar gives us, compared to the other devices, is its very high resolution, which makes it more resilient to spoofing and phantom objects.

We will now turn to focus on some specific properties, and we will start with spatial separation.
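The technical table described above can be paraphrased as a small lookup structure; the entries below are a simplified rendering of the discussion, not an official specification.

```python
# Qualitative sensor properties, paraphrased from the comparison above.
SENSOR_PROPERTIES = {
    "camera": {"dims": "2D", "color": True,  "resolution": "high", "velocity": False},
    "radar":  {"dims": "3D", "color": False, "resolution": "low",  "velocity": True},
    "lidar":  {"dims": "3D", "color": False, "resolution": "high", "velocity": False},
}

def sensors_with(prop, value):
    """Names of sensors whose property `prop` equals `value`."""
    return sorted(n for n, p in SENSOR_PROPERTIES.items() if p[prop] == value)

# Only radar measures velocity directly (via the Doppler effect), while
# radar and lidar both provide 3D information.
```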
For spatial separation we have the following diagram, which shows that the resolution of a measurement system depends on the wavelength used and the collector size. Here you can see the wavelengths of lidar and camera and the wavelength of radar. Now, using the formula that relates resolution to the wavelength and the collector size, we can see that in the case of lidar the wavelength is about 1 micrometer, which is similar to that of the camera, and therefore both of them achieve a similar angular resolution of 0.2 degrees. On the other hand, if you consider the case of a high-resolution radar, the wavelength is much larger, and therefore the angular resolution is much worse. The same holds for the depth resolution: lidar achieves a resolution of 1 millimeter, which is two orders of magnitude better than what you get with radar, even high-resolution radar, which has 15-centimeter resolution. And note that with radar you also need a much larger collector. So this shows that if you want accuracy, you get it with lidar; there are things that you cannot get with radar, and with lidar you also get depth information.

We now consider the semantic perception of each system. If we compare a camera to a lidar, where both of them in some sense give us high-resolution information, the camera focuses on the color of the scene it is capturing, and the lidar gives us the depth information. Now, in some scenarios, looking only at the color will make us miss some of the objects. See this example: with lidar we can see objects marked in the right image that we cannot see in the left image because of color ambiguity. Another thing is that when you look at the RGB image of the camera, it is very hard to distinguish between the road and the border, which is easily distinguishable when you look at the 3D map of the lidar. Getting this information is very important for estimating drivable areas, where semantic classification of entities and ground is a critical element.
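The wavelength-to-resolution relation discussed above can be checked with a quick diffraction-limit estimate, theta ≈ wavelength / aperture. The aperture sizes below are illustrative assumptions, not figures from the webinar.

```python
import math

# Diffraction-limited angular resolution: theta ~ wavelength / aperture.
def angular_resolution_deg(wavelength_m: float, aperture_m: float) -> float:
    """Approximate angular resolution in degrees for a given aperture."""
    return math.degrees(wavelength_m / aperture_m)

# 905 nm lidar with an assumed 5 mm optical aperture vs. a 77 GHz radar
# (~3.9 mm wavelength) with an assumed 10 cm antenna: the radar's angular
# resolution is orders of magnitude coarser despite the larger collector.
lidar_res = angular_resolution_deg(905e-9, 5e-3)
radar_res = angular_resolution_deg(3.9e-3, 0.10)
```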
So here you can see an example where using just color or just depth lets you miss some objects, showing that you need both of them. Now, one may inquire whether we could have achieved the same results shown in the previous slide also with radar. For that, let us consider the problem of small-object detection. As we mentioned earlier when we looked at the wavelengths, radar measures the velocity of objects, and you can see that in the right image, which gives us a demonstration of that; it depends on the Doppler effect. But the way it measures things leads to a poor spatial resolution of the 3D world. You can see here the 3D map of a high-resolution radar and the 3D map of a lidar, and you can see that with lidar you can distinguish between objects, you can understand the world better, and you can detect small obstacles, which you cannot do with radar, because with radar you will miss lots of information.

Another very important case when you drive is low-light conditions, or driving at night. In this case the camera's performance degrades significantly, and it can miss many objects. Lidar, on the other hand, excels at low-light conditions. In the example that you see in this slide, observe how the two pedestrians are invisible in the camera, while they can be clearly seen in the lidar. Note that an ideal environmental setting for one sensor, lidar in this case, which excels at low-light conditions, could be the worst for the other, which is the case for the camera; it is very hard to see with the camera at night. Therefore, using a fusion of different sensors is essential to make autonomous driving safe.

Another important aspect is resilience to weather conditions. In this case radar achieves the best performance, yet as we have shown before, it cannot be used for small-object detection because of its poor resolution, and therefore we will focus on a comparison between camera and lidar, and we will show the case of driving when it is raining.
Here you can see an example of an image captured with the camera in the rain, and the same scenario with the lidar. Because of the shutter speed of the lidar and its scanning strategy, it is less sensitive to rain compared to the camera. You can see here how the raindrops, together with the sunlight, strongly affect the visibility in the camera. In the top image we cannot see the car that is clearly visible in the lidar on the right. In the bottom image the bus and car are distorted, in a way that might still be visible to a human eye but will almost surely fail with any machine-learning-based algorithm that is commonly used today. In terms of driving, in the lidar scan the objects look the same with rain and without rain, and therefore you can use the same machinery that is used in clear weather conditions, which is very important.

Another important factor in safe driving is being robust to changes in the capturing conditions. Here you can see the case of the recent Tesla accident, which demonstrates how important this is. In this accident, a car crashed into a truck lying on the road. The truck was not detected by the camera because of the reflectivity of the covering of the truck, and the camera detected the truck as part of the road. It seems that the radar failed as well, because of the scattering from the covering of the truck. On the right you can see the front view from the camera, and on the left you can see the same accident from another angle. In the next slide we will see a short video prepared by our partners from Cognata, simulating the scenario seconds before the Tesla vehicle hit the overturned truck. In this simulation we recreated the same scenario with lidar, and it clearly shows that if you would use a lidar, the accident would have been prevented in this case. You can see here the top view of the driving car and the object on the road.
In a minute we will see it in front of you: our partners from Cognata simulated the expected InnovizOne point cloud based on the specific capabilities of the system. Note that the reflectivity and scattering of the static object, which confused the camera and radar, do not affect the lidar scan, which clearly observes the big lying object on the road. If we watch the movie again, you can see that in the lidar we clearly see the truck that before was not detected by the camera or by the radar. If we also apply our machine learning techniques here, they detect that the car is facing a non-drivable object, and you can see in green the road on which we can safely drive. So this example shows us how important it is to use all three sensors together and not just a single sensor.

Another important case is a sudden change in the lighting conditions, as is the case when we enter a tunnel. Before we enter the tunnel, you can see here that the camera clearly detects objects outside the tunnel, but not the cars that are inside the tunnel, which are observed by the lidar, as you can see on the right. When we enter the tunnel, the car and truck are much less noticeable by the camera, while the lidar sees both, and another car that is in front of them, as you can see on the right. When we are inside the tunnel, it takes time to open the shutter or increase the exposure, which makes the truck invisible to the camera, while nothing changes for the lidar. So this is another example that shows why we need more than one sensor.

Another prominent example of an unexpected change in the lighting conditions is sudden direct sunlight. On the left you can see that the sun is covered by the bridge, while half a second later it appears and dazzles the camera, and therefore nothing can be seen; this will not happen with the lidar. Another artifact that appears when you rely on color information is color confusion.
On the left, the yellow lane marking is distinct from the barrier, but then the camera merges the yellow barrier with the lane, and you can see that the car here in front hits the barrier, because it confuses the barrier with a yellow lane. Again, if we had the depth information, this accident would have been prevented.

So, to summarize: each of the sensors has its unique advantages that are needed if you want to have safe self-driving. This clearly answers the question of which sensor we should use, and the answer is that we should use all of them, and for this we should fuse all the sensors together. There are several factors that make this answer so clear. First, when we take into account the cost versus performance, we have seen that the benefit of the sensor that costs more outweighs its cost. There are clear cases where removing one sensor will lead to a clear failure, which could easily be prevented if you would use all the sensors. Now, indeed, sensor fusion is a challenge by itself, and one needs to decide what information should be included from each sensor. Yet it allows us to cope well with failure and recovery, cases where one sensor fails and then the others can be used to keep the car moving safely.

OK, so moving to questions, and I would like to remind you that you're able to submit questions via the chat feature. We'll start with two questions that we often get asked. The first one is what we call the Elon Musk question: Elon Musk has repeatedly stated that lidars are not required for autonomous driving, so how do we answer such a claim? We definitely hope that this webinar has answered this question, but if not, let's go back a little bit and review what we discussed, plus a few more topics. The need for a lidar is not only about performance or the capabilities of the system; it has other aspects, as we discussed, such as functional safety, regulation, consumer trust, etc. But say that we want to analyze just the performance for a second.
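The failure-and-recovery argument above, that a failed sensor should degrade rather than blind the system, can be sketched with a toy late-fusion rule. This is a hypothetical illustration assuming independent per-sensor confidences for one candidate object, not Innoviz's actual fusion stack.

```python
# Toy late fusion: combine independent per-sensor detection confidences.
# Miss probability multiplies across sensors; a failed sensor (None) is
# skipped, so the remaining sensors still yield a usable confidence.
def fused_confidence(scores: dict) -> float:
    p_miss = 1.0
    for p in scores.values():
        if p is not None:
            p_miss *= 1.0 - p
    return 1.0 - p_miss

all_working = fused_confidence({"camera": 0.6, "radar": 0.5, "lidar": 0.9})
camera_down = fused_confidence({"camera": None, "radar": 0.5, "lidar": 0.9})
# With all three sensors the object is detected with 0.98 confidence;
# losing the camera drops it only to 0.95.
```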
Going back to Elon Musk, we need to remember that Mr. Musk is the CEO of a public company selling a specific product, so he is taking a side and a stand of justifying Tesla's specific solution; it is not an objective stand. And by the way, Innoviz does not have an objective side on this matter either. We would like to switch to Waymo in a second, but first bear in mind that changing Mr. Musk's statement, that is, admitting that lidar is necessary, would have a profound economic cost for Tesla, as customers bought their autonomous driving feature for more than seven thousand US dollars based on the assumption that the hardware is there and it's only a matter of a software version. Furthermore, as we've seen in this webinar, Tesla's accidents are no coincidence; it's a matter of system design.

Now let's talk about Waymo for a second. Waymo is probably the most experienced company, other than Tesla, in driving and developing an autonomous driving platform, and they heavily depend on and use lidar technology, so much so that they have decided to develop their own lidar solutions. They claim that lidars are necessary in autonomous driving, and we can only believe that such capable engineers would have waived lidars, or decided not to use them and save the money, if they had thought they could develop an autonomous driving platform without the technology.

The second question we often get asked is about thermal cameras. They have the potential to be integrated as an additional technology in autonomous driving, but this is dependent on their maturity, the ability to mass-produce the technology, and their price. Time to market is especially important in trying to predict such technologies, and we believe that, due to maturity and project timelines, lidars will be added to autonomous driving platforms before thermal cameras. This means that adding thermal cameras in the future will depend on whether they add missing functionality, or whether they can completely replace an existing technology with improved performance and pricing.
Now, this is a major challenge that companies developing thermal cameras need to address. Another important aspect of thermal cameras is that, like regular RGB cameras, thermal cameras do not give us the depth information and reflectivities which are given by lidars. The absence of depth information is crucial; as we have seen previously, this information is very important for understanding the scene, and we cannot get it with thermal cameras, and therefore I don't believe that they can replace lidars.

OK, a few more questions that we are getting now through the chat feature. First of all, yes, the presentation will be shared; as we initially said, it will be sent out in a few days. There is a question on which of the three technologies is best for augmented reality glasses. This is a little bit out of scope for our webinar; I'm sorry, but our expertise is in the automotive industry and its technologies, and less in augmented reality glasses, so I'm sorry for not answering that. There was a question on night-vision cameras, thermal and gated imaging, and we believe that this was answered just now by Professor Giryes.

There's also a question on how light signals that enter the receiver are being processed. I assume that the question refers to the lidar technology. Lidar is a time-of-flight technology that emits laser light and then measures how much time it takes for the light to go and come back. Each lidar has a different way to process things; in Innoviz we have an array of mirrors that steer each light ray, and we then measure the number of photons that return and how much time it took them to return, and this allows us to process the rays in an efficient way.

Now, another question is: what is Innoviz's depth measurement accuracy?
Today, the measurement accuracy that Innoviz achieves, even at very large ranges, is one centimeter, which is what is required for autonomous driving. Another question is on our view of lidar concepts where a camera helps the lidar scan a region of interest. Basically, this is fusion at the sensor level, and this is not the path that Innoviz chose; our approach is to provide the lidar by itself and not to couple it with other sensors. We believe that eventually the sensor fusion should take the basic information coming out of each technology. Once you use, for example, a camera for the lidar, or vice versa, then you kind of contaminate that basic technology with another technology, which means that the overall sensor fusion is not independent. Therefore we believe that sensor fusion should be built on independent sensing technologies, not intertwined ones.

There is also a question about lidar performance in heavy rain, mist, or snow. As we discussed earlier, lidar is better than the camera in bad weather, though not as good as radar. Of course, the performance is highly dependent on many attributes: on the lighting conditions in general (for example, heavy rain at night will be handled better than heavy rain during the day), on the reflectivity of the target, and on other parameters. But in general, with the Innoviz lidar we've already proven that it's possible to function under bad weather conditions as well.

Another question: is it possible to have a surround-view lidar system with a single sensor? Excellent question on whether it's possible to have a 360-degree lidar. We would probably think that a solid-state lidar covering 360 degrees would be kind of a holy grail for many, many applications, and that's as far as we can comment on that.

Oh, sorry, one more: which frequency do you use? As we've discussed, Innoviz uses 905 nanometers, the lower part of the near-infrared spectrum. How long does it take to have a confident object on the vehicle interface?
In general, for object detection information there are a few latencies involved. The first one is how fast we can provide the point cloud to the vehicle, and that's fairly quick; the latency is a few microseconds. Then there is a layer of object detection that, of course, depends on the frame rate and other parameters that we measure, but overall it would be a matter of between two to three or four frames until we get all the information; usually the basic information is provided within two frames.

OK, I think these were the questions. Oh, there are some more popping up. When can we see the first introduction of FMCW lidar in the automotive market, and is Innoviz planning to move in this direction? I'll start with the second one. As we discussed, FMCW is always associated with, or has to use, the 1550-nanometer wavelength range. Currently, although it has some advantages, we do not believe that this is a technology we want to pursue. We chose 905, as stated earlier, because of its price and maturity, and because it's a silicon-based technology. 1550 uses what are called III-V materials; it's a very expensive fabrication process and relatively immature in this industry. Maybe in the future, but currently this is not the direction we're pursuing. As for when it would be introduced: 1550, or FMCW in general, is a more expensive technology, and whether it could be produced at viable costs and prices for the automotive industry is, I would say, a difficult question at this point.

OK, I'm sorry, we have more questions, but we do need to wrap up, so I'm continuing to a summary. Sorry, OK, I'm getting a signal to answer one more question: how does your technology compare with Waymo's lidar? Well, the lidar that Waymo, I would say, made public and intends to sell to the general market, the Honeycomb, is a little bit different from what we have.
They also seem to have their front-looking lidar, or long-range lidar, but we don't have their specifications, so it's a little bit difficult to say how we compare.

OK, moving to the summary. I hope that we answered most of your questions; I'm sure that we will go back following this webinar and verify that we answered everything, and if not, I guarantee that we'll send each one of you the answers and our comments to your questions. I'm sorry that we had no time to go over every question.

So let's have a short recap of what we have discussed today. The core theme of this webinar was addressing a question which is raised once in a while in the automotive industry: whether a lidar is really needed for autonomous driving. We believe the answer is a fundamental yes, and there are many reasons for that. First, performance and availability: simply put, camera and radar alone cannot cover the entire range of driving use cases, let alone do it safely. Second, functional safety: we're familiar today with ADAS, in which the driver is in full control, both legally and operationally, but we're transitioning to autonomous driving, in which there is no driver intervention in critical decision-making procedures such as automatic emergency braking, cut-in scenarios, etc. This is a major difference: in autonomous driving, the system cannot have a momentary lapse of reason. It must operate at all times and in all conditions, and because failures do occur, immediate backup is mandatory to come to a safe state. Third, we discussed the importance of public trust. Consumers will accept autonomous driving technology if they understand it, but also if they trust that the technology is safe, and since one hundred percent is never guaranteed, consumers will want to believe that the automaker did whatever it could in order to maximize safety. Considerations such as unit price are less accepted when it comes to human lives. Last but not least, regulation: regulators have a hard time keeping up with the technology.
We all want regulation to support technology instead of limiting it, but we also want regulation to be a safety net and protect from wrong usage or cutting corners. When regulators authorize new technologies, they always prefer to walk on the safe side; therefore, adding capabilities and safety can only expedite the regulation of autonomous driving.

So that's it for today. Thank you, Professor Giryes, for joining me. We hope that you found this webinar useful and intriguing. We look forward to getting your responses and comments, and we promise to answer each of your questions. I would like to remind you that the session was recorded and will be sent to you in the coming days. Thank you and goodbye.
Info
Channel: Innoviz Technologies
Views: 5,709
Id: yDwWIMy5rJ4
Length: 46min 0sec (2760 seconds)
Published: Mon Jun 22 2020