Stanford Seminar - Edge Computing in Autonomous Vehicles (panel discussion)

[Richard Dasher] Okay, well, good afternoon everyone, and welcome to the Thursday, October 17, 2019 session in our series on edge computing and whether there will be different directions for Asia and the United States or not. I'm Richard Dasher; I direct the U.S.-Asia Technology Management Center at Stanford. We're very happy to produce this series, and we want to thank all of our member companies for their support of this series as well as of our other activities. Please stay afterward and have some refreshments outside with us; you'll get to meet our speakers and also representatives of our member companies.

I'm very happy that today we're moving into actual use cases of edge computing. We started off with an overview that I gave at the end of September, then we had a presentation on 5G, and last week a presentation on chip acceleration. This week we're going to talk about a very important use case where a large amount of the information processing has to happen at the edge device, meaning in this case the automobile: we're going to talk about self-driving cars.

I have a great panel with us today. Closest to me is Dr. Sven Beiker, whom I've known for years. Sven was the executive director of the CARS lab at Stanford, the Center for Automotive Research at Stanford. Since then he has been an expert on mobility topics at McKinsey & Company, and most recently he is the founding managing director of Silicon Valley Mobility, a consultancy and advisory firm. Before he came to Stanford he worked for the BMW Group for 13 years, in Silicon Valley, in Germany, and in Detroit, and he received his master's and PhD degrees in mechanical engineering from the Technical University of Braunschweig, Germany. He has published various technical papers and holds several patents in the areas of vehicle dynamics and powertrain. Farther away from me is Dr.
Maarten Sierhuis. Maarten is the chief technology officer of what is now called the Alliance Innovation Lab in Silicon Valley. This was originally the Nissan Research Center Silicon Valley, and Maarten was the founding director. He established their research agenda, including leading a team of researchers on artificial intelligence technologies for autonomous vehicles, connected vehicles, and human-machine interaction. Before he came to Nissan, he spent 12 years at NASA Ames working on human and autonomy systems for space exploration: not only how to have an autonomous satellite, but how to automate the work of flight controllers in NASA mission control. So he has a long career in research and software engineering; he has worked at the Palo Alto Research Center, at NYNEX Science and Technology, and at IBM Corporation, and he's an entrepreneur who founded the startup company Ejenta. We will ask Sven to give some comments, then we'll ask Maarten to give some comments, and then all three of us will come up here and have a real panel discussion. So, Sven, the floor is yours.

[Sven Beiker] Great, thanks so much for the invitation, Richard, and for the introduction, and it's great to be on the panel with Maarten a little bit later on. I think we want to go back almost 10 years or so; for about that long we have been discussing when we will finally have autonomous vehicles, and I'm sure we will figure out tonight when autonomous vehicles will come to the market. That's just a little bit to highlight, or to visualize, what Richard was just saying in the introduction, which was nicely done, thanks very much. Just a little bit of a rundown on my background: I'm a mechanical engineer. I really got excited about how a vehicle moves based on the four tires, basically drive forces, braking forces, and steering forces, which got me into BMW in the mid-1990s, and I'm still very excited about how a vehicle
could move. But I have to say that, in my mind, that is not what ultimately defines the future of mobility, because it's not so much how the vehicle moves; it's much more how we get moved, how we move ourselves with vehicles, or actually how those vehicles move us. Therefore I moved a little bit from vehicle dynamics into mobility, which is what I do today. As Richard pointed out, I spent quite some time here full-time at Stanford at the Center for Automotive Research, then got into consulting, where I still am with my own consulting firm based in Palo Alto, and these days I lecture at the Stanford Business School. As I was discussing earlier when we were walking in, I still have to grade a few papers tonight, because at the Business School we have these compressed schedules where we only have the first two weeks to teach and then two weeks for the students to write their papers. The papers are all about what incumbents can do in order to remain relevant in the automotive industry, and what newcomers can do in order to get into the automotive industry, and that's quite interesting: this disruption and innovator's-dilemma material and all of these things. All of this is basically telling you that there is an engineer speaking to you right now, but I guess I have also gotten my fingers dirty in the business of things, and so I profoundly believe that a great engineering solution alone does not ultimately make for market success, because there really are different things that need to come together: obviously technology and business, but also regulation; the customer should not be forgotten, and certainly environmental concerns should not be forgotten. Many of these things we hope to actually address, and maybe ultimately solve, with autonomous driving. So I thought I might talk a little bit more about how an autonomous vehicle might work; Maarten will then give us a lot of insights into all the data that is being generated, and then all together we will
discuss what edge computing has to do with it. A little bit about the autonomous car: believe it or not, this is a slide I created maybe eight or nine years ago for my class, "The Future of the Automobile," I think over there in the Earth Sciences Building. We were discussing how it actually works, and so I came up with this little animation; I hope it's still helpful for you. When you get into one of these autonomous cars, you need to tell it something, and you had better tell it the destination: where do you want to go? That can be a user interface, where you see the little marker, number eight, where you say "that's where I want to go." Then the autonomous car basically says: okay, I have you on a map, I know where we are, and with a lot of sensors we can be very precise and improve our position accuracy, so that we really know exactly where we are on the planet. Then we can actually get going: we know where we want to go, we know where we are, and we basically know what the map, the lay of the land, looks like, so let's plan a route. Guess what problem might be there: obstacles on the route, the environment, which can be other vehicles, pedestrians, obstacles, something that is not in the map and all of a sudden shows up in front of the vehicle. This is where all these sensors come in, one, two, and three, as you can see here: laser sensors, cameras, and radar sensors. This is where Maarten, I believe, will have quite something to say about how much data they actually generate, and then we will have a discussion about where we should process all this data. But what we do know is: I know where I am, I know basically what the routes are, and I know where we are going, but there is an obstacle. Now we can get a little bit more reactive about our routing and plan a maneuver, like "I actually need to go around this obstacle," and then maybe even get somewhat tactical.
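The loop just described (enter a destination, localize on the map, plan a route, then replan a maneuver when the sensors spot an obstacle not in the map) can be sketched very roughly in code. This is a toy illustration only; every name below is invented, and a real autonomous-driving stack is vastly more complex:

```python
# Toy sketch of the sense-plan-act pipeline described above.
# All names are illustrative; no real AV stack works at this level of simplicity.

def plan_route(start, goal, blocked):
    """Plan a path of positions from start to goal, flagging a maneuver
    wherever an obstacle (something not in the map) blocks the next step."""
    path, pos = [], start
    while pos != goal:
        pos += 1 if goal > pos else -1
        # tactical decision: swing around the obstacle instead of stopping
        path.append((pos, "swing_wide" if pos in blocked else "keep_lane"))
    return path

def torque_command(speed_error_mps, gain=50.0, cap_nm=150.0):
    """By-wire actuation: request drive torque (Nm) proportional to the
    speed error, capped at what the motor or engine can deliver."""
    return max(min(gain * speed_error_mps, cap_nm), -cap_nm)

path = plan_route(0, 5, blocked={3})
print(path)                 # step 3 is flagged as the "swing_wide" maneuver
print(torque_command(3.0))  # large error: request is capped at 150.0 Nm
```

The point of the separation is the one made on the slide: routing, maneuver planning, and actuation are distinct layers, and the sensors feed the middle layer with whatever the map could not know in advance.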
What is the actual path? Do you want to cut the corner a little bit, or do I want to swing a little bit wider because there is a pedestrian or bicyclist I don't want to get too close to? So we get these different steps, and then we're almost done. This is the later part, where my heart actually is: the four forces on the tires. You apply them basically by turning through the steering system, deceleration with the brakes, and acceleration with a by-wire system. Technically it doesn't really matter if it's an electric car or a combustion-engine, gasoline-powered vehicle; today everything in the vehicle is computer-controlled, and you can just command "I need one hundred and fifty newton-meters now" and you basically get it, a little bit faster from an electric motor than from a combustion engine, but still. And let's not forget you also want to share some information: just as the person in the vehicle, who is no longer the driver, needs to enter his or her destination of choice, you also need to reply back, "yes, we will get there in 26 minutes," and you might need to communicate even more, like "we have to slow down a little bit because there is a challenging traffic situation here," which is also something quite a bit of research here on the Stanford campus has looked at. So that's basically how an autonomous car works. You can cluster all these things a little bit further and also throw a few company logos on here to show who is actually in this field. If you look at it, we can talk about these three categories, and we will, but you don't see any car company: you don't see GM, you don't see Ford, you don't see BMW, my former employer; Toyota is not on there. That's not because they would not matter; it's just a little bit too obvious, so one should state it even more clearly: what this becomes now is really a lot of new technology that we need to integrate into probably existing vehicles. Some would say: you know what, now
that we get an autonomous car and no one is actually going to be in there driving anymore, let's really start from scratch. There are quite a few companies here; Zoox, I think, should be pointed out, and Nuro, which is more of a delivery vehicle. They say: we are not just going from automobile 1.0 to 2.0, we are really reinventing mobility, and that should probably not happen within the existing thinking of what an automobile is. So there are these companies that say: okay, an autonomous vehicle in a completely new shell, and maybe even a completely new business model. But then there are the companies that may be a little bit better known: Aurora is talked about a lot; TuSimple is building the autonomous driving stack, as it's called, for trucks; Drive.ai, which got a little bit quieter, was started by Stanford graduates out of the CS program and got acquired by Apple over the summer; Pony.ai is talked about a lot; and there are a few others here as well. These are the ones who basically say: we mostly focus on software. That is what it means if we say we do path planning, we do a maneuver, we do some more technical driving strategies, if you will; that all happens in here. Beyond the full vehicle, let's talk about perception. That is obviously by and large the largest group here, and that is somewhat indicative, I guess, of the action, and maybe of the investment as well, that goes into it. There is a lot here in lidar, which again is laser sensing: Velodyne, Quanergy, and Blackmore are companies that are talked about a lot. But don't be deceived here: there are, what, seven or eight logos on there, but the people who do this for a living, who scan the market for what's going on in startup land and among technology innovators, are tracking close to 100 lidar companies. Lidar companies just by themselves: about 100, and I'm not saying that's exhaustive; I don't have that spreadsheet, but someone was telling me already three years ago, no, we're
looking at 75, and then someone said, no, we're looking at more like 90, so it must be about 100, plus or minus, in this field. Then camera and computer vision; one company that is maybe missing here is Mobileye, I'm not sure if we have them anywhere else. That is basically where people say: you know what, we do all of this with cameras, because cameras are actually the only sensors that can detect a traffic light. The laser beam cannot really tell if it's green or red, and once we get to radar down here, radar also cannot detect whether a traffic light is green or red. But I assume all of you will agree that it actually matters a lot whether a traffic light is green or red, so there is a lot of work in cameras as well. Radar, millimeter wave, actually just a stone's throw from campus; there is a lot of work here in the local Bay Area as well, and there are obviously others. You might ask: well, I get the camera idea, but laser and radar, isn't that all a little bit overkill? Maybe yes, maybe no; this is where a lot of the discussion is still happening. What do we actually need in order to detect a person? To detect a person, a camera might be good, but then you don't really know exactly what the person is doing. If you really want to see whether the person is walking or maybe just standing, you might need to detect the extremities and say: well, that's an arm, and the arm goes like this, so probably the person is walking. You might even need to, believe it or not, detect where the person's nose is pointing, which sounds like we are really getting carried away. But think about it: I'm a pedestrian, this is the curbside, this is where the autonomous car comes. It makes a big difference if I'm standing like this, or like this and waving to Maarten, because if I'm waving to Maarten it's unlikely that I will walk into the street, but if I stand like this it's very likely I will step into the street. It tells the car something about what we are about to do here. If we want to insert these self-driving cars, as what
they are quite often called, into the traffic as we know it, the system needs to pick up all these little cues. It might even be something like, and I'm not making this up, the sports coat. Maybe that says something: if that is my sports coat, maybe with a logo on it, and there are five other people with the same sports-coat logo, very likely I belong to them; and if the people with the same sports coat are on the other side of the road, very likely I belong to them and want to go interact with them. This is how we drive, and that is what we need to get into the system. So this is where we can say: the camera is great for seeing color and maybe getting an idea of what an object is all about; lidar is a lot about the geometry, the extremities, exactly how big something is. But guess what: just as our eyes don't work very well in bad weather conditions, cameras and lidar don't work too well in very heavy rain, snowfall, and the like, and this is where radar is very good. This is why many people say we need this sensor fusion: we really need to make sure that we can safely detect what it is we are looking at. Then there are other topics I don't want to go into in too much detail. Simulation is a pretty big topic, because we are asking: how many miles do we need to drive in order to say it's safe enough? Big debate, no answer yet. Some people say, for a number of reasons, you need to drive a hundred million miles in order to say it's safe enough; others say, and I would agree with them, even a hundred million miles are not enough. How long might it take to drive a hundred million miles? You might have a fleet of 100 vehicles, but each still driving a million miles takes a long time, so simulation is very popular here. And then V2X systems, which is what we will discuss a little later, also through Maarten's presentation: vehicle-to-X, vehicle to everything else.
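Sven's fleet arithmetic can be made concrete with a quick back-of-the-envelope calculation. The average speed and daily utilization below are my own assumed numbers, not figures from the talk:

```python
# Back-of-the-envelope: how long would a 100-car fleet need to log 100M miles?
# The speed and utilization figures are assumptions for illustration only.
fleet_size = 100
target_miles = 100_000_000
avg_speed_mph = 30        # assumed mixed urban/suburban test driving
hours_per_day = 12        # assumed two driving shifts per car, every day

miles_per_car = target_miles / fleet_size                  # 1,000,000 each
days = miles_per_car / (avg_speed_mph * hours_per_day)     # ~2,778 days
print(f"{days / 365:.1f} years")                           # roughly 7.6 years
```

Even with generous utilization, the fleet needs the better part of a decade, which is why simulation carries so much of the validation load.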
So that's all about connectivity: how do you get data to and from this autonomous car? Localization is something I went over relatively briefly on the previous slide with the map, but the map is not just a map. It's not just "okay, there is a two-lane highway that basically goes from, whatever, San Jose to San Francisco, and you are on it or not." It's much more: what exactly is the curvature of this highway, where exactly is even a tree, where was a building? Because if I see this tree, I know that thing at least is not going to move, which actually does make a difference: in challenging conditions, a tree maybe this tall is not very different from a person to one of these perception systems. So as much as you can tell "this is a stationary object that does not move," that is very important to know. Therefore localization and maps are really important, and even the precision that we need for localization is very important. Today, standard GPS gets you, let's say, about 30, maybe 15 feet of accuracy on a good day. That is not enough for a self-driving car, because you really need to know which lane you are in on the road. You can still do something with your camera, so you know whether you are in between the lane markings and everything, but you also need to know "okay, I am in this lane, and that is actually the lane that takes a left turn, not the lane that goes straight," because that does make a big difference. So all of this we need to know. I think I will come to a close here pretty soon, because timing is of the essence and we definitely want to reserve enough time for your questions as well. But I want to close with one thing, which I created because I got upset when people say "oh, you know, the car will just be a computer on wheels." Then you really don't know anything, because a car today is already about 100 computers and sensors on wheels.
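To put numbers on the lane-level localization point above: a US freeway lane is roughly 12 feet wide, so 15 to 30 feet of GPS error cannot tell you which lane you are in. The half-lane-width threshold below is my own rule of thumb, not a standard:

```python
# Why 15-30 ft of standard GPS error is not lane-level localization:
# a typical US freeway lane is about 12 ft wide.
LANE_WIDTH_FT = 12.0
FEET_PER_METER = 3.28084

def can_resolve_lane(position_error_ft):
    """Rule of thumb (an assumption): error must be well under half a lane."""
    return position_error_ft < LANE_WIDTH_FT / 2

print(can_resolve_lane(15.0))                   # standard GPS, good day: False
print(can_resolve_lane(0.10 * FEET_PER_METER))  # ~10 cm, HD-map/RTK grade: True
```

This is why self-driving stacks lean on HD maps, camera lane detection, and carrier-phase GPS corrections rather than plain GPS fixes.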
Everything that you see here you can read for yourself: transmission, engine, surroundings, weather, driver input; all of that is already computer-controlled. It is connected by, I think the number someone gave me was, up to 20 communication networks within the vehicle. This is where I say: well, I got into the automotive industry as a mechanical engineer thinking that I could really define what a car is. Mechanical engineering is still very important, but there is so much in the vehicle that is already computer-controlled, which adds to the complexity, where today, if an automobile company launches a new car, mastering that complexity really becomes the challenge and often ultimately defines success. But it also tells us that all this information is already in there in the vehicle: whether someone sits on the seat or not, what the temperature setting is, the door locks, everything on the steering wheel, brake application, and all of that. Into this architecture we now need to integrate the autonomous car, which might evolve out of it; that is one direction, the more evolutionary approach, which is what we discuss at length in my class at the Business School. Or it might be a disruptive or revolutionary approach, where we say: let's do away with all of this and build an autonomous car from scratch, like the companies we saw on the previous slide that have this belief. I cannot say that they are wrong, but building an entire car from scratch without much experience should not be underestimated. So we shall see. One thing is for sure: these cameras and radar and lidar in the car generate a lot of data, and I think that is what Maarten will tell us a little more about.

[Richard Dasher] That's great, Sven, thank you. While we're changing machines, may I ask you one follow-up? In terms of recent technology developments, one of the questions I'm curious about from that last slide you showed, about the
incredible computer systems, plural, in a car, and all the wires: do you see a movement to make the inside of the car wireless?

[Sven Beiker] No, not too much, and I don't want to sound like someone who says "yeah, we tried that, it didn't work," but, a little bit of inside baseball, that was one of the projects I had at BMW around 2003 or so: wireless sensor networks in a car. It doesn't really apply that much, because in the car you basically know exactly where a sensor is. It sits exactly on the top of the vehicle and scans the environment; it is going to sit there for the life of the car, and it certainly needs power, so why go wireless and have the problems that come with wireless? Wireless does apply for our tires, because they obviously spin, and it also applies for testing. We did say: someone brings the vehicle into a workshop at a dealership, something rattles and screeches, but it's not doing it right now, it's only Thursday morning; okay, we'll give you a wireless sensor kit just to check it out, a little bit like the 24-hour heart rate monitor.

[Richard Dasher] Awesome, okay. Maarten, the floor is yours.

[Maarten Sierhuis] Good afternoon, still. I'm Maarten Sierhuis; Richard did a wonderful introduction, so I don't know if there is anything more I need to say. Like Richard said, I run the research at the Alliance Innovation Lab. We started as the Nissan Research Center, but as you all know, Nissan is in an alliance with Renault and Mitsubishi, so we are an alliance center now, which means that we work for all companies in the Alliance. My startup is in healthcare, so completely different, but also about autonomy; I'm not talking about that today, but anybody interested can ask me questions later on. Let me start with a question. Sven gave an overview of the autonomous vehicle; how many people really know the technical insides of an autonomous
vehicle? Okay, so there are still a lot of people who don't. I don't want to overwhelm you with technology and details, so I just wanted to see where I should pitch my discussion. I titled my talk "Autonomous Systems with Human-in-the-Loop," because this, for me, is the key point. Like Richard said, I worked at NASA for more than twelve years as a senior scientist, and one of the things that I learned there, and that I have often said, is: show me an autonomous system without a human in the loop, and I'll show you a useless system. What is useless? A useless system, a system that is useless. I had debates with AI scientists at Ames in the late 1990s about autonomous spacecraft, and the argument was that it would obviously be easier if we just took the human out of the loop. And I said: okay, so now your spacecraft goes to the edge of the universe, and then what? "Oh no, it's going to communicate information back." And I'm like: well, there is a human then, right? If you don't have a human in there, why are you sending the spacecraft? The same thing, I say, holds for autonomous systems here on Earth, and this counts for autonomous vehicles; it counts for any autonomy. There is always a human in the loop, and this, to me, is a very important aspect to keep in mind when we talk about autonomy and autonomous systems driving around. I always say: how do we imagine having millions of autonomous vehicles driving around on the road without humans in the loop? In the United States airspace we have about 6,000 airplanes in the sky at one time, with two pilots in the cockpit, mind you, and we have air traffic controllers talking to the pilots all the time. Why do we think we can have millions of autonomous vehicles driving around and nobody needs to interact with them? Okay, so that is my start, and then I say: the role of
edge and cloud computing; we will get to why. Just as a start, one of the points is that the humans are not just in the car. Most of the humans are not in the car, and so we need to be able to communicate and interact with humans who are not in the car. As Sven was saying, V2X, communicating from the vehicle to the outside, is a very important part of building autonomous vehicles, and we will talk about that. So, the human in the loop: why do I say that? It is because of this slide. This is our vision; it is about what I call socially acceptable autonomy. If I build an autonomous system that nobody likes and we have millions of them driving on the road, that is not going to create a nice environment. Now, of course, we can debate what it means to be socially acceptable; I don't want to go into that today, but I just want you to see what we state as an equation: it is about the human and the robot as teammates. The robot needs to be interacting with humans. To do that we need AI, everybody agrees with that, but we need AI that can explain what it is doing to humans, otherwise we are going to get all kinds of problems. In another talk I gave recently, I talked about the 737 MAX accidents; what happened in the cockpit is an example of the problems you get when AI can't explain itself. And to be socially acceptable, we need social science to understand how humans behave, in order for the autonomy systems to understand how to interact with humans. This, to me, is really what it is all about: to develop vehicles that are socially acceptable, we need explainable AI and we need to be able to understand humans, and so people from the social sciences are very important. Let me show you a little video. Those who know autonomous vehicles will understand why I am showing this; for those who
don't, just put yourself, as the driver, into this vehicle. You see an image coming from the front of our autonomous vehicle in Mountain View, and this video is meant to show how incredibly difficult what we call level 4 and level 5 autonomy, fully autonomous systems, really is. This is an intersection at Castro Street. What we see here is that the traffic lights are flickering. You see this truck; the truck is going; and you see all kinds of construction going on. We have already given up driving in autonomous mode by now, all right? So now we just watch this scene. This car is coming here; the car doesn't know what to do; people are walking around; it stops in the middle of the road; that backhoe, I don't know what it is doing; this car just goes "whoa"; here is a bicyclist; now you think, okay, now I am going to go, and oops, that car suddenly takes a left-hand turn. Now watch the guy there on the side: you think he is taking away cones, or replacing cones, and then, boom, he just crosses the road. Luckily there was a human driving. So, yes, can I have my water? There is no autonomous vehicle in the world that can do this, okay? Let me just say that: it does not exist. Anybody who tells you that we can do this today, no; no company, nobody can. That is just to set the level of where we are with autonomy and of how difficult this is. And this is just a Wednesday morning in Mountain View; this is not even San Francisco, this is not Tokyo, this is just little Mountain View. All right, so what about the future, where we want to go? We say it is the future of connected and autonomous vehicles, because we are not going to go "boom, tomorrow everybody drives autonomously." We are going to have connected vehicles and autonomous vehicles, and it is really about designing a
world of connected and autonomous vehicles with onboard and off-board intelligence. The intelligence cannot just live on board the vehicle. To solve the problem of that intersection, we need to know that there is construction there, we need to know that the traffic lights are not working, and that we should not expect the normal rules of the road to apply. We need that information from somewhere; that is intelligence that is not on board. So what it is really about is this: there is the intelligence on board, which Sven was talking about, and what I say is that we need intelligence in the cloud to communicate information about the state of the world and about the issues the car faces; we need to be able to connect to humans in a control center; and we need a traffic management center. Maybe there is a traffic management center for the Bay Area? It doesn't exist. Palo Alto has a little bit, some traffic lights that are controlled this way; then Mountain View has their own little thing; and Sunnyvale has something. Do you think they talk to each other? No, they don't talk to each other. When we go to do research about traffic management centers, we actually go to Europe; Europe has a way better system. For instance, in the Netherlands there is one province about the size of the Bay Area with somewhere around 1,500 traffic lights, and they are all connected; it all works, and they have information coming in. Can we get that data here? No, not so much. But we need that. And then we get to smart cities; we need infrastructure. It would be nice if we had a lidar on the intersection that we just saw in the video, so we could get information from that intersection about the objects: about the backhoe that was there, about the cones that were out there. It would be really nice if we had that information off the vehicle, for
a couple of reasons. The vehicle could have known the situation before it went to the intersection and decided: maybe it's not such a good idea to go there, maybe I should reroute myself. Or it could be that I don't need that expensive lidar on the vehicle anymore; if the infrastructure provides the information, my vehicle becomes cheaper. This is the game that we need to play; it is really about sharing the intelligence in these systems. And as I already said, we don't have this in the Bay Area. How long is it going to take before we have it? I will probably be in a wheelchair, an autonomous one, by the time we have that. So it is going to be a while, and it is going to be piecemeal, but this is what we are after. And this is where I want to go with the next slide: as I already showed you in the video, we can't handle every situation, but we still want autonomous vehicles before the AI is smart enough to handle those kinds of situations. So how do we do this in the next five years? I say we need human intelligence, and let me show you an example of what I mean. What you are going to see is the car streaming all its sensor data to the cloud in real time, and I say to the edge, because time delay really starts to become an issue here. We have worked with NASA Ames to develop a system to interact with the car in the way that NASA interacts with robots on Mars. And one thing to think about: what can you not do when interacting with a robot on Mars? Anybody, any clue? Yeah: you can't drive the vehicle in real time. I can't sit in my living room and steer and give gas, and thirty minutes later the rover on Mars receives my command and does something; the time delay is just too long. So that is not how we are driving vehicles; it is very dangerous, in my opinion, to drive vehicles remotely.
We already have, how many is it, thirty-four thousand deaths in the United States with people inside the car; now we put the people somewhere else? That is not the solution. So we don't drive the vehicle remotely. What we do is observe what the vehicle does, and we give direction to the vehicle; we give commands, and then the vehicle can autonomously execute those commands. We say we give the vehicle go/no-go decisions; that is what the human intelligence provides. Let me show you an example of how that looks. Here you see half the screen; this is an image of our car driving. You see that the vehicle has noticed an obstruction in the road; it sends information, it streams its lidar and its cameras in real time to the person, and it says: here, I want to go around this vehicle. If you look here, there are two cars parked and somebody working there, and there is a double yellow line, which means you cannot go around; that is not allowed. So do we allow autonomous vehicles to just go over a double yellow line whenever they want? No. So what do we need? We need a human to give a go/no-go decision, and let me show you how that works. So, I think I jumped ahead; here, the user says "authorize," it sends that to the vehicle, the vehicle plans around the obstruction, and it goes. Now, this is a very good point: if I need one human for one car, we haven't won very much; we might as well have the human in the car. That is not what we are proposing. What we are proposing is something like this, just like air traffic control. Most of you probably don't realize that if you take the bus in San Francisco, for every four buses there is a controller in a control room somewhere, continuously talking to all the bus drivers to make sure that, oh, a passenger just got sick in the bus, what do I do? I can't get to the bus stop, what do I do? There is continuous communication between the bus drivers and a control center.
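The go/no-go exchange Maarten describes can be sketched as a tiny protocol: the vehicle proposes a maneuver it may not take on its own, a remote human approves or denies, and the vehicle then executes autonomously (it is never steered remotely). All names and message shapes here are invented for illustration, not Nissan's actual system:

```python
# Illustrative sketch of the go/no-go pattern: propose, authorize, execute.
from dataclasses import dataclass

@dataclass
class ManeuverRequest:
    maneuver: str      # e.g. "cross_double_yellow"
    reason: str        # why the vehicle cannot proceed on its own

def vehicle_step(request, mobility_manager):
    """The vehicle holds until a human go/no-go decision arrives; it then
    executes the maneuver autonomously rather than being teleoperated."""
    if mobility_manager(request) == "go":
        return f"executing {request.maneuver} autonomously"
    return "holding position, replanning"

# A stand-in for the human mobility manager reviewing the streamed sensor data.
approve = lambda req: "go" if req.maneuver == "cross_double_yellow" else "no-go"

req = ManeuverRequest("cross_double_yellow", "parked cars and road worker in lane")
print(vehicle_step(req, approve))
```

The design choice to transmit decisions rather than steering inputs is what makes the scheme latency-tolerant: a command that arrives a second late is still safe to execute, whereas a steering input a second late is not.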
This is a similar idea: we would have a control center with a person controlling, well, we hope, at least a hundred vehicles. So the game is really: how can we make this work system, as I call it, between vehicles and control center as efficient as possible, so that you need as few people as possible to manage a fleet of vehicles on the road, just like we do with airplanes. Well, we can discuss that in the panel, hold your thought, but this is one concept that we have developed and are working on. We call this a mobility manager: personnel responsible for the safe, orderly, and expeditious flow of robot fleets in the global traffic system. So to do that, what do we need? We need, both for autonomous vehicles and for connected vehicles, an edge. If we want fast communication, it would be nice if we had 5G; today we don't have 5G, so we need to do this over 4G, which makes remote driving even more dangerous. We need pipes going to the cloud, where there is AI and analytics that can take all the video and all the information from all the cars streaming their sensor data, so we can learn from that sensor data. We want to be able to go from vehicle to infrastructure: vehicle to vehicle, vehicle to traffic lights, to pedestrians. If you have a cell phone and you send GPS, it would be nice if I could use that GPS to know where you are, just like Sam was saying, and maybe predict where you might be going. And sharing of data with other businesses, the mobility center, and connected vehicles, we don't leave that out either. So this is the architecture that we need to develop and create, and this is where edge computing becomes very valuable: because if the computers for my San Francisco mobility center run in a cloud somewhere in New York, the time delay alone, to send the data from the car to New York and back to my car over the Internet, might be too long.
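That cross-country delay can be bounded from below by propagation alone. The route lengths below are rough assumptions, not figures from the talk:

```python
# Light in optical fiber travels at roughly 2/3 the vacuum speed,
# about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(route_km: float) -> float:
    """Propagation-only lower bound on round-trip time. Real networks
    add queueing, routing, and serialization delay on top of this."""
    return 2 * route_km / FIBER_KM_PER_MS

cross_country_ms = min_rtt_ms(4700)  # SF to New York fiber path: >= 47 ms
metro_edge_ms = min_rtt_ms(50)       # car to a Bay Area edge site: >= 0.5 ms
print(cross_country_ms, metro_edge_ms)
```

A coast-to-coast round trip costs tens of milliseconds before any processing happens, while a metro-area edge keeps the propagation budget under a millisecond, which is the whole argument for a regional edge.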
So maybe we want an edge in the San Francisco Bay Area for the mobility center of the San Francisco Bay Area. If I want fast processing of images from my vehicle, I might want it very close, at the intersection; maybe there is an edge in the intersection, to make the intersection smart for me and communicate that way. So this is where we are really starting to think about this architecture of computing, communications, and intelligence: where, in essence, does the intelligence live? What I always say is, look, do we really want a million data centers driving around? Do we all need to see the same data? If I have 15 cars in an intersection and they all process the same intersection data, is that really what we want? Is that really efficient? They all need the same computers, they all process the same images; that seems inefficient to me. That processing should be done somewhere else; it shouldn't be done on the car. So one of the reasons I say that is the following. This is kind of the same picture that Sven had: this is the car with its sensors. It has CAN data; the CAN bus is one of those networks that Sven was talking about in the car. It has sensor data from these sensors, and it goes through the autonomous system. This is a high-level picture of an autonomous system: we have sensing with perception, we have the world model, which is all the objects that we see in the world, we know where we are ourselves, we have the map, we have decision-making, we have control, and we have I/O to store this data in the cloud. And here are all the sensors and the CAN bus that generate data. So how much data is generated? This is my back-of-the-envelope calculation for one of our vehicles at Nissan: we have 15 cameras, two lidars, one radar, one GPS, and a CAN bus, and in total it generates 140.2 megabytes per
second. Okay, so then I did some calculations. That means one minute of driving generates about 10 gigabytes; 10 minutes of driving generates about a hundred gigabytes; an hour of driving, about 500 gigabytes; and 24 hours of driving, about 12 terabytes of data. Then I did some more calculation. I said, okay, what do Google, Amazon, Microsoft, and Facebook have? In total, today, they say, 1.2 million terabytes, which is 1.2 exabytes. A hundred thousand AVs will generate that in 24 hours. So 100,000 AVs will generate as much data in a day as Google, Amazon, Microsoft, and Facebook have today, combined. Sorry, I did some more calculations for you. There are 264 million vehicles in the US today; that's over 2,600 Google-Amazon-Microsoft-Facebooks' worth of data every 24 hours. So I hope the edge is a big edge. [Laughter] 24 hours is one day, yes; on Mars it's 24 hours and 37 minutes. Now, not every piece of data do we need to save, and not every piece do we want. I'm not going to argue that this is all data we need to store in the cloud; of course not, that would be impossible. But even if I cut it to 50 percent, even to 20 percent, even if it's just 10 percent of this number, it's still a lot of data. And we need to use that data to learn; we need to store it, send it, communicate it. It's a lot. So my last slide, not to scare you completely if I've not scared you enough with this number of all these Googles together, but not to scare you even more: I hope that somebody is thinking about privacy and security of all this data. I use this paper by Bloom et al. about privacy and security from the USENIX conference in 2017. Basically, we're building data centers with cameras and sensors that drive around our city, and we will have every image you can imagine of everybody in the city, walking around, driving around, doing whatever they want to do, as part of those terabytes of data we capture.
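Martin's back-of-the-envelope numbers can be reproduced directly from the 140.2 MB/s figure, using decimal units and the same rounding as the talk:

```python
RATE_MB_S = 140.2  # 15 cameras + 2 lidars + radar + GPS + CAN bus

def gigabytes(seconds: float) -> float:
    return RATE_MB_S * seconds / 1_000        # MB -> GB, decimal units

def terabytes(seconds: float) -> float:
    return RATE_MB_S * seconds / 1_000_000    # MB -> TB, decimal units

per_minute = gigabytes(60)        # ~8.4 GB   (rounded to "10" in the talk)
per_hour = gigabytes(3600)        # ~505 GB   ("about 500")
per_day = terabytes(24 * 3600)    # ~12.1 TB  ("about 12")

# 100,000 AVs for one day, versus the quoted 1.2 million TB (1.2 EB)
# held by Google, Amazon, Microsoft, and Facebook combined:
fleet_day_tb = per_day * 100_000  # ~1.21 million TB, the same order
print(per_day, fleet_day_tb)
```

The exact values come out slightly under the spoken round numbers, which is consistent with the on-stage rounding.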
There are these aspects of privacy-invasive technologies that the security community has known about for a long time. It's ubiquitous capture of data in public; that's what we're doing. It's physical surveillance by a privately owned company, if this data is stored somewhere that a company owns. It's the ability to scale without additional infrastructure: I just add another car and I have more data. I don't want to paint too negative a view here, but if some organization says, I want data, they can just buy a number of autonomous vehicles, let them drive around, and they have all the data they want for a particular situation. There's the difficulty of notice and choice about data practices for physical sensors that capture data about non-users: you as a pedestrian have no choice but to be filmed by the autonomous vehicle. The comparisons, of course, are CCTV, dashcams, Google Street View; these are all technologies that we as a society are developing that capture data about you as an individual, and what we need is privacy standards and regulations, but of course technology always outpaces those. So there is a habituation to lack of privacy going on in society today, and one can wonder whether that's good or bad; I don't want to say which side you should lean toward. This is something that we as a society, and the world, need to grapple with and deal with. But fair information and privacy practices, I think we really need to think about. And if we think about edge computing, if we think about 5G and edge computing, the amount of data that can be stored and communicated within seconds is enormous. So it's not just autonomous vehicles that have this issue; it's going to be everything we develop, from in the home, to on the street, to infrastructure, to autonomous vehicles. Okay, that's it. Why don't you take the seat closest to the stage? I'm going to start off by
asking a question to both of our panelists, and it kind of comes into the whole incumbents-versus-newcomers thing. Up until now, the automobile companies have each individually kept extremely tight control over anything needed to interface with their car systems, and just last summer Baidu announced that they were going to create an operating system for autonomous vehicles; they call it Apollo. I'm curious how you see the whole idea of a third-party operating system working. Will this change the relationship between incumbents and newcomers? And specifically, Martin, how do you see the operating system question, because I know some of the Chinese companies have been using it, Baidu using it outside of China. So you go first. Apollo, yeah, that's Baidu Apollo, and there's also some sort of a competitor called Autoware, which has a foundation behind it, and as much as I understand it's an open-source project now. Where's that from? The foundation is based in Japan. Okay. But it's an international, global consortium, I feel. Okay. And it remains to be seen who's going to win; there are pluses and minuses, as always. But to your question: is there going to be a universal operating system for these autonomous vehicles? Maybe. Let me peel the onion a little bit. What Martin was showing, what's running on the CAN bus: that was pushed largely by an automotive supplier, Bosch, back in the 1980s, I believe, because it was realized, you know what, we're putting in electronic fuel injection, we're putting in brake controllers, ABS, and the lights, and all of these need to work together somehow; let's implement CAN, the controller area network. Not quite an operating system, but an architecture nonetheless. So it's not completely new to the industry. It obviously raises the question, which consultants always get excited about, of who's holding the control point. Who's in control of things? It is definitely safe to say that in the automotive industry, the balance of power is shifting.
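For a concrete sense of how constrained CAN is compared to a general operating system: a classic CAN frame carries at most 8 data bytes. Below is a minimal sketch of packing and unpacking frames in the 16-byte layout that Linux SocketCAN uses; the arbitration ID and payload are made-up examples:

```python
import struct

# Linux SocketCAN lays out a classic CAN frame in 16 bytes:
# 4-byte CAN ID, 1-byte data length code (DLC), 3 pad bytes, 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")

def pack_frame(can_id: int, data: bytes) -> bytes:
    assert len(data) <= 8, "classic CAN payload is at most 8 bytes"
    return CAN_FRAME.pack(can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw: bytes):
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id, data[:dlc]

# Hypothetical wheel-speed message on arbitration ID 0x123.
raw = pack_frame(0x123, b"\x2a\x01")
print(unpack_frame(raw))
```

Tiny fixed-size frames like this are why CAN is an architecture for coordinating ECUs rather than anything resembling an operating system.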
I do not think it's going to be some sort of black or white. There are new players, and there will be laggards who are not getting the message and therefore falling behind. Definitely, Chinese companies, I think, have a very strong play in this, for one because there are extremely talented, skillful, and ambitious researchers, industry engineers, and business people at play. And I'm sure that in this seminar you've also discussed politics across the Pacific: it is pointed out as one of the key objectives to really exercise leadership in key industries, and the automotive industry is seen as one. So it's a pretty long answer, and it's not razor sharp, I do realize that, but there's going to be some sort of a standard evolving, one that will need to leave some flexibility for a volume product versus a high-end, maybe premium, product, and China is definitely in it. We'll come back to you with a few things about the Chinese makers; I noticed that Pony.ai was one of the ones you had on your slide, Martin. First of all, though, how do you see the operating system from the software side, and especially an open API and that kind of thing? Yeah, so my background: I don't have such a long background in the automotive industry as Sven has; basically the Nissan Research Center is it for me. But what I've noticed, and rightfully so, is that the OEMs are incredibly safety-conscious, so everything is about safety. Now, I'm more of a software guy, and we all know that software goes where the people who use it want it to go, and open source has proven to be a pretty efficient way of building actually reliable software. Strangely enough, in the beginning everybody said open-source software is dangerous. I know at NASA they didn't want it; I actually have software running in Mission Control, and when we had to show what software we had, all the open-source software had to be taken out. And as researchers, you know, so it's like, but
what we have shown, a model that works really well, is the Red Hat model: a party will come along that takes this on, gets all the bugs out, and takes it from an open-source version to a supported version that the tier ones might be able to start using. Let me ask a quick question: how many people know Red Hat? Yeah, so UNIX became Linux became Red Hat, I'd say. But the OEMs don't build their own software; they use the tier ones for this. OEMs actually don't really know how to build software. And we're not worried about that one. Yeah, but I would even make a stronger statement: the tier ones don't really know how to build this software either, because they are car suppliers. Yes, they know how to build embedded software, but autonomy software is not something they know; they don't have a whole bunch of AI people. And that matters; that's why we see all the startup companies. So I could see a model where an open-source stack becomes a supported stack, by a player that then delivers it to tier ones, who use it to build components and systems for the OEMs. If you ask me, that is the model that could happen. Well, I'm going to cut to the chase: do you see China going in a different direction than the rest of the world, or the US in a different direction than the rest of the world? You're asking me? Yeah. No, I think, and again this is just my opinion, China has even less regulation, or more regulation, depending on how you look at it, but it's easier in China to get technology into place. And they will run into the same brick wall, where it's just as hard in China as it is here. So I see a similar kind of evolution: people will realize, all these startups in China will realize, that it's really hard. Okay, well, let's go to audience questions. Go ahead, you're first, I saw you first. If you're
talking about open source, then again and again you see that borders are less relevant: whatever technology you choose in open source, you see collaborations, Korea, China, everywhere. So decisions from a business point of view, if they're going for open source, once they do that, they kind of, yeah. Open source is one thing, but I have to say that this is something very likely to be considered dual-use technology, so you're going to run into all the problems of export controls, and all the problems with CFIUS being in charge of what kinds of foreign investment a company can take. So your point is really well taken, but this is why we're living in kind of a messy world. I want to give the floor to our guest, who is a managing director at a venture firm; you have a comment, a question from the investor perspective anyway. Okay, all right. Well, as you said, we have a couple of portfolio companies in this autonomous world. So: how do you see the evolution of perception versus compute, sensors versus brain? My analogy is that historically, as things evolve, you have two different paths. You can invest in sensors, and so nature has, say, bats, with sensing modalities beyond what we can do. Or you can invest in brains and have relatively narrow eyes, compensating for the weaknesses or constraints of perception with the compute models and what we can run in a cortex. Clearly all biological animals have constraints, whether it's energy budget or size; you're always under constraint, and over here you pay in power, energy, cost, everything else. But how do you see the trade-offs between getting more input versus having more compute power? If you project five or ten years from now, what are those trade-offs going to be, and how will the configuration evolve? Okay, let's have Sven take that question first. Okay, so as a quick answer I would pick the brain, because the senses, the eyes and ears and tactile senses of the car, if you will, are in my mind already very
well-developed. You can actually detect an object over a pretty long distance, in pretty bad weather conditions, even if, say, you drive out of a tunnel and there's light coming directly into the tunnel from the sun setting right in that direction. That works relatively well. What does not work well at this point is to detect, safely enough, what it is that I'm looking at, and even more so, what this thing is going to do within the next five seconds. It's not enough to say it's a person. The examples I gave may have seemed a little bit jokey, but they actually weren't: is that person going to cross the street? At that point I already know it is a person; I know it because the sensors are pretty good. So therefore I think it has to be the brain, and for a number of reasons artificial intelligence is being applied to this to a large extent. I expect the pendulum might swing back a little, because some seem to be relying a little too much on artificial intelligence, and this was Martin's point when he brought up explainable artificial intelligence. But in the end I would pick the brain, because the rest is already very well developed. If I can push on that: Martin, if you're going to comment on that, what about redundancy among the sensors? Yes. So, it's a different talk, but I have a whole talk about resilient autonomous systems, and I say we don't even know how to build resilient systems, let alone resilient autonomous systems. So as an answer to your question: today, and over the next couple of years, the sensors are actually not the most expensive part of an autonomous vehicle. It's going to be the computer, and it's going to be the software and the compute; that's going to be 80 percent of the cost of the system. And I say: why the heck do we need all this compute on the vehicle if we have the edge and we have 5G? Get as much as
you can off the vehicle, into the cloud, into the edge; software becomes a lot cheaper, and you can share the decision-making between a number of vehicles. What needs to be on the vehicle is safety: the job of the autonomous system on the vehicle is to provide safety in all situations. And that is where perception comes in. In the perception stack, the sensor alone is not enough; you need perception, which is part of the brain but not the entire brain. The perception needs to understand, and you want to predict. So I say: any decision that is below a second needs to be made on the vehicle; any decision you can make in more than a second, you can make off the vehicle. That's how I see the system evolving, and that's how I would design the future. Where I have thousands of autonomous vehicles in a fleet, I would want one decision-making module in the cloud that does the decision-making for all thousand vehicles, rather than all the vehicles doing their own. But then that becomes a design problem, and there are different ideas. In a multi-agent, peer-to-peer kind of system, you let each agent decide by itself; they communicate and they can optimize. Or you have a shared, central control system that does this. And it depends on the application: if I have a robo-taxi company, I probably want that central control system doing the decision-making for all my vehicles, and my vehicles just keep themselves safe. That's hard to manage. We've got to let you ask your second question, go ahead. So, I don't want to go into ISO 26262 and all the other standards that define safety, but the question is this. Think about the evolution of machine vision toward deep neural nets, how we're moving from human-predefined features into a world where the machine finds its own variables and runs a neural net on whatever really matters for machine vision,
and so think about chess problems: people wrote chess programs assigning human weights, then you let AlphaGo do whatever AlphaGo did, and eventually came more deep-learning methods. So you see pretty drastic improvement in performance, from systems programmed and comprehensible by human beings to systems we don't know how they work: we know the inputs, but then it does something we cannot explain or understand; we just test it, and it does really well. And so the question is, I understand that from a safety, security, and automotive perspective you want comprehensible systems, but in terms of performance there is a strong argument that if you don't constrain the problem by requiring it to be comprehensible, you can evolve a better solution to the problem. So how do you see it? Do you see some companies, some countries, say China, saying, you know what, I don't care whether it's comprehensible or not, I just measure the final output of the system; it doesn't kill people, and why it doesn't kill people, I don't know. Whereas you go to Germany and they will say, no, no, I want to go back and slice the neural nets, partition them, and say, this is where my problem is coming from. So how do you see the difference in philosophies and evolution, based on regulations and sector claims? Martin, I think that one's for you. So, I think it's more about the philosophy of how you develop resilient systems. My personal view, as I said: if we cannot even build a 737 with an autonomous system for the pilot that doesn't crash, after 40 years of trying, why do we think we can do this with autonomous vehicles without problematic edge cases? And it's in these edge cases that the explainability needs to come in. If everything goes fine, I don't need explainability; I don't need to explain myself to the pedestrian. If you saw in my
example, when the car needs to go across the double yellow line, and the rule of the road is that you're not allowed to cross the double yellow line, we say, from the regulation point of view, that the car needs a human intervention: it's the human who breaks the law and says go. And at that point the human wants to know: why are you asking me this question? What is it that you want me to do? So the AI goes from being able to go around the car, to having to explain to the human why it can't go around the car on its own. For those types of systems we need explainable AI, and your deep neural net will not be able to do that very well in the near future. But if you never get into this problem, if you have a society where it is okay to just go, then yeah, we never need to explain, and maybe you're right. What the right answer is depends on the philosophy you take about safety, and the philosophy you take about whether we want socially acceptable autonomy, or whether we just want humans to adapt to whatever autonomy we put out there. That is a societal question. If I need to watch out every time I see an autonomous vehicle, let that thing pass before I go, and you think that is okay, then fine, we can do that. But I think in the Western world, in certain cities like San Francisco, that's not going to fly. One word of caution, really quickly, because we heard this example again, chess computers and AlphaGo and so on, and quite often the story goes: look, they can do it in chess, they can play Go, why can you not just have these incredible machines drive a car? I really want to caution everyone, because of the fundamental difference between playing a however-complex board game and handling traffic just on this campus: that's the difference between a super huge number and infinity. And of course safety and everything, but even if you take Go, which I have to admit I don't
play (I don't know how to play Go, but I know how to play chess), there is a huge number of different variations of where all the different pieces, black and white, can be, but it's a finite number, and it suits a computer fantastically well: if you have a super-fast computer, you can calculate all these different variations. That doesn't apply in traffic. It would only be the same thing if the king on its square moving just a millimeter to the left mattered, maybe showing the intent of where the king might move, and then you'd want to recalculate; and maybe the king turns a little bit, and that matters too. In driving, all of this matters, as in the example I gave earlier. So I want to caution a little bit: it's important to have these discussions, but the analogy only applies to some extent. Okay, I've seen a number of hands, but I saw yours first, go ahead. I actually worked on Tesla Autopilot in 2016, and I'm kind of curious what the discussion would be like if we had someone from the CS department here today. I'm curious to hear your thoughts on your presentation, with respect to whether what you're proposing is something you believe is fundamentally the right way, for the long term, to implement self-driving cars, or whether you think this is kind of a stopgap solution, because you think general AI is far enough away from where we are today that it makes sense to invest, as a business, as a commercial enterprise, in a pretty huge infrastructure project. You're asking me about what I presented? And as you say, there are some limitations, like the amount of data you need to pipe to the central command system; that's a lot of infrastructure investment. Right, and that infrastructure investment only makes sense if, for example, car-based AI can't drive on its own in five, ten, or twenty years. Yeah, I mean, so I think
it's a great question, and I'm a very pragmatic person. The industry has been saying that we were getting fully autonomous vehicles in the next five years for the last ten years, and we're not there, and I don't see us being there in the next five years either; maybe in certain very limited areas, and even then it's going to be minimal. So that's why I think it's a stopgap, but not really, because of my example of the 737 MAX: after 45 years of autopilot in the cockpit, we suddenly have two plane crashes that were pretty dramatic. And if you look at what happened, it's not that we don't know how to build these systems safely; we do, or they do. But the pressures, organizational pressures, came in and made the system the way it was, and the crashes actually happened. So I'm just somebody who thinks we will never have autonomous systems so completely safe that we don't need humans in the loop. That's just my strong belief. If I can follow up with a question on that, though: partial autonomy has its own dangers, of course. If your car is going to send the decision to you, and you have to react within a second, and you're kind of spaced out, much less asleep at the wheel, isn't that even more dangerous? Yeah, and we've got a Tesla expert here. So that's what I'm saying: it's hard, it's really hard. And that's why my solution is a human supervisor, just like air traffic control. And believe me, the aerospace industry has been trying for decades to put more autonomy in the cockpit, and now they're working on free flight. What they've done is they've said, okay, fly three miles apart, from top to bottom, so that if something goes wrong we have time
to react. So if we are willing to do that on the road, maybe we can make it safer, but I don't think so. Other people say, well, let's get the damn pedestrians off the road. I saw a simulation from MIT: oh, with cars just driving, we don't need traffic lights anymore. And then you look at the simulation, and there are no people in it. What utopia is this? We will have people on the road. So I think, if we take the step you're describing, it's going to take a long time before industry will be willing to take the risk of putting fully autonomous vehicles on the road. Sven, comments on this? You know, I really appreciate the question about infrastructure, and my answer is: those who say autonomous have to say infrastructure in the same sentence, otherwise I don't see how it's going to work. And I find it peculiar, and you brought it up, coming from the CS perspective: I do know that computer scientists by and large feel that it should be all self-contained, a really autonomous system, which means it doesn't need anything else to fulfill its mission or its tasks. I understand the appeal, because infrastructure means huge investments, as you said. Just to give some reference: 50 percent of the road infrastructure in the United States today is underfunded. That means we only have enough budget to fix 50 percent of the bridges, to fill 50 percent of the potholes, and to stripe 50 percent of the lane markings that need striping, and we're not even talking yet about adding any of this communication infrastructure for autonomous vehicles. And still I find it surprising that especially computer scientists, I would say, argue the case for not connecting automobiles, highly safety-critical systems, when we are connecting everything else. We're connecting laptops, obviously; we're connecting watches; we're connecting
refrigerators; we're connecting the light switches in our living rooms. Everything is connected, but the automobile should do without? I'm not quite sure we're leading the right discussion there. And part of it, and I've been thinking about this quite a bit, is that autonomous driving is such a vast topic. Again, it's not getting from automobile 1.0 to automobile 2.0; we are reinventing traffic as we know it. Very often we see the brightest minds on the planet focusing on very specific things, maybe figuring out with artificial intelligence how to do an unprotected left turn, or how to build a laser scanner that can detect a nose at, whatever, 400 feet, and once we get this, we say we've taken a step forward. But we still don't know how many steps we need to take to actually have a fully autonomous car. The last thing I would say on this, maybe a picture that will resonate with the audience: Steven Shladover told me this; he's from UC Berkeley, really amazing research in the field of autonomous vehicles. If you want to go up Mount Everest and you are here in the San Francisco Bay Area, you basically get on a plane, you fly to London, you switch planes, you fly to Kathmandu, you get out of the airport, someone takes you to the first base camp of Mount Everest, and maybe by distance, or number of steps, you've covered 90 or 95 percent. But this is really when the challenge begins, and this is when you figure out whether or not you make it up there. And this might be roughly where we are: maybe we've made it to the second or third base camp, and I don't know how many there are going up Mount Everest, but it's not linear, where we say we've come this far, therefore, as you say, in five years we will be able to do it. Okay, I want to do one more. Well, let me just make one quick comment: I only know one really good autonomous system, and it's called a human being. We don't do stuff alone; we collaborate, and
we communicate in order to collaborate. So to think that autonomous systems don't need to collaborate and interact with others is just too narrow an understanding, of course. Thank you. One last question, yours, in the back. On the ethics: can you expand a little bit more on the different ethical considerations that go into this? It's a big debate, this topic of ethics for autonomous vehicles. I heard about it for the first time, say, eight or nine years ago, relatively soon after the discussion around the legal aspects of autonomous driving; then we got into the ethics of autonomy. Actually, there's a class here at Stanford, lectured by Professor Chris Gerdes and Dr. Stephen Zoepf, on the ethics of autonomous cars. And once we reach the point that we can actually distinguish, safely enough, between one person and another, that would be far out, and I don't know how a system should make that decision, because it could still be wrong. I've been in many discussions where I said: if these are your only two options, the eight-year-old school kid or the eighty-year-old grandmother (by the way, my mom is eighty), then if you have to make that decision, it means you were going too fast a while ago. If your only two options are hitting and killing someone, you were overestimating your skills and underestimating the risk, and you will kill someone. And still someone told me, that's not what we're getting at, you're an engineer, you don't understand it; that was a philosopher, an actual PhD in philosophy. It's important that we're having this discussion, even if we don't find a solution. So the ethics matter a lot, because now we really design systems that will make decisions on someone's behalf, with all the consequences, and therefore we need to have the discussion. But until we get to the point that we can differentiate between, like, the age of a person, race,
probably will need to be discussed. And in spring, actually, there was news on the web that these automated vehicles cannot detect people of color as well as they can detect middle-aged white men. Maarten, any last comments? Well, on the color thing, yeah, it doesn't have anything to do with race, but the outcome, and the ethics question: an autonomous vehicle is just a manifestation of today's machine learning capability. So, you know, the problem with ethical decisions is here today. I mean, if machines can distinguish a cat from a dog, we start having a problem once they can do that, and we need to discuss that. It's not just autonomous vehicles; your Apple camera or your Samsung camera will soon make these kinds of ethical decisions if we would let them. So it's really a debate that, as Sven says, is larger than just autonomous vehicles. That doesn't mean that it isn't important; that's why I had the privacy slide on there, right? Because they become pervasive, and the person from Tesla already knows it takes camera images all over the place. So companies need to have, you know, I work with my startup in healthcare, and there we have the HIPAA rule, right, and we need to be HIPAA compliant if we store private healthcare information in the cloud. Why don't we have that for other stuff, right? So these are the kinds of things that I think we need to really think about and grapple with, and that is a societal question, a legal question, and Europe is different from the United States, which is different from China. I don't think this question is so much of a problem in China; you walk into China and you will be body scanned today. So it depends on where your tolerance is for ethics and for privacy, and I think our culture will change because of these technologies. But it's not autonomous vehicles, it's the machine learning people
you need to ask the question, too. Okay, so we've got some refreshments outside to encourage continuing the conversation for a little bit. Please ask your questions after that, but if you would, please join me in thanking our speakers. [Applause]
Info
Channel: stanfordonline
Views: 5,249
Keywords: Stanford Online, seminar, EE402A, Sven Beiker, Maarten Sierhuis, Edge Computing, Autonomous Vehicles
Id: klIn8dEzuAA
Length: 83min 48sec (5028 seconds)
Published: Fri Nov 01 2019