Vladlen Koltun: Autonomous Driving: The Way Forward (July 2020)

Video Statistics and Information

Reddit Comments
  • 00:00 - Autonomous driving is still an open research problem (not an execution problem), whatever companies say.
  • 13:06 - For this open research problem of fully autonomous driving, there is currently no objective measure of progress. Everything is secret, and companies use different standards. "Disengagements per mile" is not useful since it is trivial to game.
  • 16:46 - The lack of objective measurement holds progress back. It is an open research problem, so many key implementation choices (e.g. use of HD maps) have no consensus. Worse, they are not getting closer to consensus, because no objective measurement exists to show which choices are best.
  • 19:45 - More mature fields (NLP, computer vision) once had this same 'demo culture' problem and have already fixed it. Those fields established the common task framework: automated evaluation on private test datasets populating public leaderboards. This creates a culture of empirical results with healthy sharing of knowledge.
  • 26:44 - Their attempt to establish a common task framework is the CARLA challenges, which measure full driving ability and can run on a single researcher's desktop machine. Summary of improvements since launch.
  • 36:19 - Overview of recent papers testing ideas in CARLA, all from other (non-Intel) groups, all addressing core design questions of AVs.
  • 41:20 - Announcement of the new CARLA leaderboard. It is not sufficient to benchmark static results; the new CARLA leaderboard takes submitted code and drives it in the cloud, in an unseen test environment (containing challenging NHTSA pre-crash scenarios), then assigns scores.
  • 49:40 - Demo video, evaluation metrics, call to submit agents to the leaderboard.
  • 1:01:52 - TL;DW and wrap-up.
👍︎︎ 7 👤︎︎ u/I_ATE_LIDAR 📅︎︎ Jul 13 2020 🗫︎ replies

Interesting talk but why does..he..end..every...sentence......like........this.

👍︎︎ 1 👤︎︎ u/Krunkworx 📅︎︎ Jul 18 2020 🗫︎ replies
Captions
Hello. This is a talk for the autonomous driving workshop at ICML 2020. In some ways it is a follow-up to the talk I gave at the CMU Robotics Institute seminar more than two years ago, in April 2018. I will aim to minimize redundancy between the two talks, so that if you watch both of them you will see material that builds on and complements each other, without boring you and without repeating myself.

One thing that I showed in the talk at CMU more than two years ago was this slide. A month before I gave the talk at CMU, Elon Musk said: "I think probably by end of next year self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a person, by end of next year." That is 2019; we're talking maybe 18 months from then. A month after that, in April, I gave the talk at CMU, showed this slide, and said that Mr. Musk is wrong and this is not going to happen. There are two takeaways at this point. One is that I was right. The second, perhaps more interesting, is to ask why: why did this happen? Why could a savvy and genuinely smart person such as Elon make this mistake, and why was it obvious to some of us that this prognostication was not going to come to pass, that this projection was wrong?

A key point that we need to understand is that autonomous driving is, at this stage, a research problem, not an execution problem. Execution is of course absolutely critical, and it is going to be essential to the rollout of autonomous driving, but at this stage autonomous driving is primarily a research problem. Let me talk about a couple of aspects of this.

Autonomous driving involves perception-based control of safety-critical systems. Now, we have made amazing strides in perception. For example, every time you upload images or video to any of the big social networks, they are analyzed by some of the most advanced perception systems we have ever developed: they are screened for offensive content automatically by convolutional networks, and they are categorized automatically for association with other content, such as advertising, and for subsequent search and retrieval. So we have scaled up amazingly advanced perception systems. However, these systems are not being used in safety-critical settings in which they must function under hard real-time constraints, with human lives on the line if the wrong prediction is made. In the settings in which we have deployed such advanced high-dimensional perception systems based on modern computer vision, there is a second line of defense: if the system is uncertain, it can hand off some content to human moderators who can screen it further, and even if a mistake is made, usually human lives are not at stake. That's perception.

Now, control. We have also implemented incredibly advanced control and successfully deployed it in safety-critical settings. Landing a spacecraft on the moon is an amazing achievement of control engineering; LASIK surgery is an amazing achievement of control engineering. However, these examples of exquisite control do not involve the same kind of high-dimensional and extremely fallible perception in the inner loop. Perception of the kind that we see today in the most advanced computer vision systems is not on the critical path of such safety-critical control systems. Of course we do perception-based control: a thermostat is an example of a closed-loop perception-based controller, one that senses the temperature and regulates it in response. But there is a qualitative difference between these examples and the kind of perception-based control that must be done in ubiquitous, societal-scale deployment of autonomous driving of the kind that Elon was alluding to, where autonomous cars function in all modes of driving and go wherever human drivers can go, with the same level of flexibility and robustness. This involves perception-based control where computer vision of the most advanced kind is on the critical path, in the inner loop of safety-critical control. We have never deployed a system like this. We have never deployed a system that took such high-dimensional and incredibly noisy perception and put it in the inner loop of a real-time, safety-critical system where human lives are at stake. We actually don't know how to do this; this is a qualitatively new technology that is being developed.

Another key point is that 98% accuracy is not good enough in autonomous driving. This is very important, because in computer vision, for example, the culture so far has been that if we reach 98% accuracy on any problem in any domain, as a research community we generally move on: we declare victory, we say that the problem is solved, it is now boring, it's no longer interesting. But in autonomous driving, 98% is not even close to enough. What does 98% mean? To oversimplify a bit: if you drive twice a day, to work and back, your regular commute, 98% means that one time out of 50 you have an accident. One time out of 50, if you drive twice a day, is once a month. Once a month, on your way to work or back, you have an accident. And what does 99.8% mean? It means that roughly once a year you crash, roughly once a year you have an accident. Now, this is an oversimplification, but it illustrates that autonomous driving is a qualitatively different problem from problems that we have solved in, for example, computer vision. It has qualitatively different requirements that have not been met so far by any comparable system on any comparable scale.
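To make the oversimplified arithmetic above concrete, here is a minimal sketch. It assumes, as the talk does, two trips per day and treats each trip as succeeding independently with the stated accuracy; the constants are only those given in the talk.

```python
# Back-of-envelope from the talk: per-trip success rate -> expected time
# between accidents, assuming ~2 trips per day (commute to work and back).
TRIPS_PER_DAY = 2

for accuracy in (0.98, 0.998):
    failures_per_trip = 1.0 - accuracy
    trips_between_accidents = 1.0 / failures_per_trip
    days_between_accidents = trips_between_accidents / TRIPS_PER_DAY
    print(f"{accuracy:.1%} accuracy -> an accident every "
          f"{trips_between_accidents:.0f} trips (~{days_between_accidents:.0f} days)")

# 98.0% accuracy -> an accident every 50 trips  (~25 days: roughly once a month)
# 99.8% accuracy -> an accident every 500 trips (~250 days: roughly once a year)
```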
As a historical note, we've had 98% accuracy since the mid-90s. For 25 years now, we have been at 98% accuracy in autonomous driving. In a famous demonstration, CMU researchers Dean Pomerleau and Todd Jochem took an autonomous vehicle from the east coast to the west coast, from Pittsburgh to San Diego, close to 3,000 miles, 98% of which were driven autonomously. They had 98% accuracy in autonomous driving 25 years ago.

Since I have veered into history, I want to also mention Ernst Dickmanns, a pioneer of autonomous driving who initiated autonomous driving research in the mid-80s and by the mid-90s was likewise demonstrating success rates in the mid-90-percent range. He had a vehicle drive essentially autonomously from Munich to Denmark and back, on the order of 2,000 kilometers, 98% autonomously, with fully onboard sensing and computation; here you can see the server rack in the trunk. And in Ernst Dickmanns's case, the autonomous driving system was in full control of the car: it controlled steering, throttle, and brakes, full control, unlike the CMU system, where only the steering was controlled autonomously. Here you can see real-time tracking and detection of objects on the road, in real traffic, at high speed. And here you will see this autonomous Mercedes changing lanes, changing lanes autonomously in response to traffic conditions.
This is an absolutely fascinating chapter in autonomous driving history. It is not widely known and not widely discussed today, which is an injustice, because Ernst Dickmanns really did incredibly prescient, incredibly farsighted work, and we should all be familiar with it and remember it as we move forward. There is a talk by him that I am linking to on this slide, and I hope that you will all familiarize yourselves with this work and remember it for the pioneering work that it really was.

After this historical aside, let me come back to the present moment and ask how well we are doing now. If we were at 95-98% twenty-five years ago, where are we now? Here we can observe that we don't really know. Nobody really knows, because we don't have objective measures of progress that are reliable and comparable across different systems and different operators in the industry.

What you see on this slide is the so-called disengagement report collected by the California Department of Motor Vehicles. This is currently the closest we have come to objective assessment of autonomous driving performance. In this report, different operators of autonomous vehicles submit statistics to the California DMV, and the department collects and publishes them. And what are these statistics? The total miles driven by vehicles equipped for autonomy, and the average number of miles between disengagements: between occasions on which the human safety driver has to take control away from the autonomous system, usually because there is a dangerous situation in which autonomous operation is no longer trusted. So, for example, we can see here that Baidu in 2019 drove a bit more than 100,000 miles autonomously, and out of these, perhaps on the order of 20,000 miles on average were driven between disengagements. This may look good in comparison to other operators: here, for example, you can see that BMW drove around 20 miles autonomously, with maybe 2 or 3 miles on average between disengagements. Notice that this is on a log-log scale.

However, are these numbers comparable? They are not. There is no control over what these miles actually are. You could get incredible numbers by just driving around the block in a quiet neighborhood; you could rack up a huge number of miles with a very low disengagement rate. Or you could drive on a highway, a well-understood domain that we have by now mastered quite well, and likewise post fantastic statistics, without actually addressing the core problems that the industry is confronted by in less structured domains.
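To see how little this metric controls for, here is a toy computation in the spirit of the report. The operator names, mileages, and disengagement counts below are invented for illustration; the point is only that the headline number rewards easy miles exactly as much as hard ones.

```python
# Hypothetical logs illustrating why "miles per disengagement" is not
# comparable across operators: the metric is identical whether the miles
# were quiet suburban loops or dense, unstructured urban driving.
logs = {
    "operator_A_quiet_suburb": {"miles": 108_300, "disengagements": 6},
    "operator_B_dense_urban":  {"miles": 21,      "disengagements": 9},
}

for name, log in logs.items():
    miles_per_disengagement = log["miles"] / max(log["disengagements"], 1)
    print(f"{name}: {miles_per_disengagement:,.1f} miles per disengagement")

# Operator A looks four orders of magnitude "better", yet the numbers say
# nothing about the difficulty of the miles that were driven.
```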
This is a problem. The lack of objective assessment of progress in the relevant domains is blocking further progress in this field. The reason it is a problem is, again, that autonomous driving is blocked on research, not execution. At this stage, the basic design of autonomous driving systems is not settled. The basic architecture of autonomous driving systems is under active consideration, and there is no consensus, no agreement, on what that architecture should be. This goes to the very core aspects of the design of these systems: What modules should an autonomous driving system be made up of? What are the interfaces between these modules? What kinds of representations should the modules construct and communicate? These are open questions on which there is genuine disagreement between leaders in the field.

A particular question that is indicative of the deep disagreements in the industry is, for example, the role of HD maps: should we rely on HD maps, and to what extent? This is an important question, because reliance on HD maps affects the core architecture of autonomous driving systems. Systems that assume the existence of reliable HD maps are architected in a very different way than systems that function purely from onboard perception. These decisions affect the core architecture, the core design, of these systems, and there is no agreement on the answers.

When a field is at such a basic, early research stage, it needs a healthy methodological culture, and we know what such a culture should be. We have examples of healthy methodological cultures that have successfully carried related fields through this growth stage, this research stage in which basic conceptual questions are being hashed out. That methodological culture is called the common task framework.

The common task framework is a core concept in artificial intelligence that we should all be familiar with. It is reviewed quite well by David Donoho in his survey paper on the first 50 years of data science. David's account closely follows talks given on this subject by Mark Liberman, and I will refer you to Mark's talks; this is absolutely essential material that you should all know. Mark chronicles the development and adoption of the common task framework in natural language processing. The common task framework was established in the 1980s in natural language processing with the aim of lifting the field out of the morass it was in. The morass was that there was no standard evaluation methodology: different groups were evaluating in different scenarios on different datasets. There was a strong demo culture, where groups simply demonstrated their systems in order to impress investors and managers; however, these demonstrations were on different data and were not comparable. The field was dominated by different ideologies that could not be reconciled, because each ideology, each fashion (or "glamour and deceit," as Mark and others have called these), could be demonstrated successfully on some boutique dataset in some boutique scenario, while the field as a whole was not making progress.

The methodological framework that was introduced in response is the empirical culture we are all now familiar with in computer vision and natural language processing: standard datasets, standard metrics, test data that is kept secret and that everybody must be evaluated on by objective, automated systems, with results posted on public leaderboards where they can be examined by everybody. This culture, which many of us in computer vision and natural language processing now take for granted, was in fact an amazing achievement that did not come early, did not come easily, and was very controversial at the time. However, it is perhaps the single most important factor responsible for the progress we have seen over the past two decades in computer vision and natural language processing. This methodological framework clears the fog that surrounds ideological debates in fields that have not adopted the common task framework. It guards us against inadvertent self-deception, where we convince ourselves that our favorite approach works so well when in fact it doesn't.
And critically, it creates a culture, a community, in which different groups can see how everybody is doing, can exchange knowledge and ideas, and can build on each other's work.

Here is an example of a healthy common task framework at work. This is the progress in the field of object detection over ten years; these are results on the PASCAL VOC dataset. You can see accuracy starting out around 17% in 2007 and climbing to around 90% a decade later. Slowly but surely, steadily, the accuracy climbs. This is what the common task framework can do when it is universally adopted, and this is what must happen in autonomous driving if the field is to get on the same ramp of incremental improvement. This is absolutely critical, because the field is now at an early research stage where basic design decisions need to be made based on objective evidence that everybody in the field has access to.

As you can probably guess from my talk two years ago at CMU, I believe that CARLA, our autonomous driving platform, holds the key to widespread deployment and adoption of the common task framework in autonomous driving. A key reason for the development of CARLA is to instantiate the common task framework in autonomous driving, where it is so badly needed today. In this talk I am going to give you an update on CARLA: I will show you what we have done in the past two years. I will not repeat the basic introduction that I gave two years ago at CMU; rather, I will show you what has happened since then. I will briefly highlight research being done with CARLA today that tackles these core design questions, these core conceptual questions about the very architecture of autonomous driving systems. And I will announce the CARLA leaderboard: for the first time, today, I will show you materials on the CARLA leaderboard, which instantiates the common task framework in autonomous driving.

Let's begin with an update on CARLA, taking the presentation at the CMU Robotics Institute seminar as a starting point, not repeating the same material, and focusing on what has happened in the last two years. Let's begin with visual quality. Here is Town 10, the latest environment that our amazing art team has released. You can see the content of the environment and the rendering quality; this is real-time rendering with CARLA today. You can see how far we have come in terms of visual quality and visual realism. Our art team has continued to produce materials, open-source Creative Commons materials, that are released openly and freely to the community, so that everybody can build on this content and advance the field. You might remember that we released CARLA with two environments, Town 1 and Town 2; by now we are up to Town 10, and we are continuing to create content such as this.

Let me summarize some major axes of progress in CARLA over the last two years. First, there is content: content that we ourselves have developed and released but, perhaps more importantly, we have rearchitected CARLA to make it easy to use custom content. You can load your own content into CARLA; you can simulate traffic in other cities created by various modeling tools; you can essentially load arbitrary cities, arbitrary maps, arbitrary environments on the fly. We have rich and realistic sensors, such as lidar, radar, IMU, GPS, and very realistic camera models.
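As a concrete illustration of the sensor suite just mentioned, here is a minimal sketch, assuming a CARLA 0.9.x server running on localhost:2000, of attaching one of these sensors, an RGB camera, to a vehicle via the CARLA Python API. The spawn index, mounting position, and resolution are arbitrary choices for illustration.

```python
import carla

# Connect to a running CARLA server (assumed at localhost:2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego vehicle at the first recommended spawn point.
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Configure an HD RGB camera and mount it above the hood.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1920")
camera_bp.set_attribute("image_size_y", "1080")
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)

# Save each frame to disk as it arrives from the simulator.
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```

Lidar, radar, IMU, and GNSS sensors are attached the same way, via their respective `sensor.*` blueprints.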
We have advanced rendering, both in terms of visual fidelity, as you saw, and in terms of speed, and you can now trade off visual fidelity against rendering frequency: you can render at very high visual fidelity, or you can reduce fidelity and increase the frame rate. We have rearchitected the traffic simulation to make it much more scalable and much more customizable, to the point that you can now orchestrate very specific traffic scenarios. If you want a particularly challenging traffic scenario to occur along the way, and you want to train and test your autonomous driving system on, let's say, unprotected left turns, you can do that: you can script traffic scenarios very flexibly to make sure that your driving stack will be subjected to plenty of unprotected left turns along the routes it drives. And CARLA can at this point function as a kind of hub that communicates information across the different frameworks involved in autonomous driving: we are interoperable with ROS, we are interoperable with OpenDRIVE and other standards on which you can deploy and test autonomous driving stacks, and we are interoperable with traffic simulators that operate on other levels of abstraction, such as SUMO.

Let me review a few of these aspects in a bit more detail. Let's look at visual quality again. On the left you see Town 10 rendered with the rendering we had last year in CARLA; this is what Town 10 would have looked like last year. On the right you see rendering in CARLA today, after we integrated quite a few powerful rendering functionalities, such as atmospheric scattering and mesh distance fields. Furthermore, we have optimized the rendering to the point that you can render, for example, four cameras at HD resolution, with this fully featured Town 10 environment, on a single gaming workstation with one GPU, at 25 to 30 frames per second. So that was four HD cameras rendered in real time, on a standard gaming workstation, at 25 to 30 fps.

Let me show you some scripting that is quite standard and quite easy in CARLA; I like it a lot. Here in the terminal we are modifying various things in real time. We moved the sun below the horizon, which triggered night mode. Now we triggered rain and made the ground wet, all from the terminal, in real time, while the simulator is running. Now there are puddles on the road, and there is wind. Next, let's play around with traffic a little bit. Now we are back in night mode and we put cars on the road; now the cars are driving; and now we turned on the lights, on all the cars, and here you can see the beautiful police car with its lights.
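A minimal sketch of this kind of live scripting, assuming a CARLA 0.9.9+ server (the vehicle light API appeared in 0.9.9) and using only documented API calls; the specific weather values and the 20-vehicle count are arbitrary choices for illustration:

```python
import carla

# Connect to the running simulator and script the world live.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Move the sun below the horizon (night mode), add rain, puddles,
# wet ground, and wind, while the simulator keeps running.
weather = world.get_weather()
weather.sun_altitude_angle = -15.0
weather.precipitation = 80.0
weather.precipitation_deposits = 60.0  # puddles on the road
weather.wetness = 80.0
weather.wind_intensity = 50.0
world.set_weather(weather)

# Put cars on the road, hand them to the built-in autopilot,
# and turn on all their lights.
vehicle_blueprints = world.get_blueprint_library().filter("vehicle.*")
spawn_points = world.get_map().get_spawn_points()
for blueprint, spawn_point in zip(vehicle_blueprints, spawn_points[:20]):
    vehicle = world.try_spawn_actor(blueprint, spawn_point)
    if vehicle is not None:
        vehicle.set_autopilot(True)
        vehicle.set_light_state(carla.VehicleLightState.All)
```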
Let me show you an example of content import that I am incredibly excited about. This is still at an early stage, but you will see an OpenStreetMap selection of Manhattan being imported into CARLA. That was a selection of Manhattan from OpenStreetMap, and here is that map in CARLA. This is under development; you can see that this is just the road layout, we have only imported the roads, but these are consistent, drivable roads that fully reflect the content in OpenStreetMap. Of course we want to build on this and expand it into a fully featured urban environment, with buildings and everything else that makes up the full urban environment of Manhattan. But it's a start.

Let me highlight some research that is happening in CARLA today. I have deliberately selected work that is not from our lab: work from other labs, work that has only been published in the last couple of months, to give you a snapshot of what is being done and published with CARLA right now, and work that really addresses the core design issues that have to do with coupling perception and control in the way we need for broad deployment of autonomous driving.

Here is work from UC Berkeley, from Ben Recht's lab, recently presented at the Learning for Dynamics and Control conference. They are asking a core question about perception-based control: can we have the robustness guarantees that we are familiar with from classic control engineering if we put computer vision in the loop, where the control system must act on the output of incredibly high-dimensional and incredibly noisy perception? This paper tries to develop the basic methodology for obtaining such robustness guarantees in autonomous driving.

Here is another paper, a joint effort by Oxford and UC Berkeley, that addresses a related question: can we ensure robustness when an autonomous driving system is taken out of the domains it was trained in? What happens when an autonomous driving system is deployed in a domain whose structural features differ from those present during training? Can the system remain robust? Can we develop training techniques and model architectures that are robust to such distribution shift? Again, note how this tackles the basic coupling of perception and control that is at the core of the problem.

Here is another work from UC Berkeley, from Sergey Levine's lab, that addresses imitation learning, a very prominent, very promising approach to machine-learning-based autonomous driving. This work aims to make imitation learning more interpretable and more controllable, and it was presented earlier this year at the ICLR conference. Here is work from Andreas Geiger's lab in Tübingen that also tackles imitation learning and tries to expand it to make it more expressive and more robust. And here is another work from Andreas's lab, made public a couple of months ago, that looks at the intermediate representations passed from perception modules to control modules in autonomous driving stacks. This work asks a fundamental question: what should these representations be? What is the best trade-off between the expressive power of such representations and the cost of obtaining them, in terms of the training data that must be created for the perception modules that produce them? Again, note how this is a core design question: what are the representations that are passed between the different modules of an autonomous driving system?

We have now come to the CARLA leaderboard, an announcement that I am thrilled to make today. The CARLA leaderboard is, in many ways, what we on the CARLA team have been working towards since the beginning: it operationalizes the common task framework for autonomous driving. Let me review. This is a table that I showed two years ago in my CMU talk, taken from the original CARLA paper published three years ago. In that paper, we implemented a small set of very simple scenarios and used them to benchmark a number of autonomous driving systems, and, as I pointed out two years ago, those autonomous driving systems did not work well at all.
If you analyze the numbers, they are very poor, nowhere near good enough for broad deployment. In the two years since, amazing progress has been made, in part because there was an objective metric, an objective measure of progress: work from different teams built on each other, to the point that we have achieved a 100% success rate on all the scenarios used in the original CARLA paper. That was achieved last year in a paper we published called "Learning by Cheating," published at the Conference on Robot Learning. It is a delightful paper with a cheeky title, and I recommend it. Of course, the 100% success rate is related to the fact that those scenarios were very, very simplistic. We really did not implement scenarios that reflect the complexity of real-world driving, because at that time the autonomous driving systems we had access to could not cope with even very simple scenarios. Well, that has changed.

So last year, as a warm-up exercise towards a standing, functional leaderboard for autonomous driving, we implemented what we call the CARLA challenge. The CARLA challenge was an instantiation of the common task framework in autonomous driving. We released materials early last year; there was a period of time in which participants could upload their driving systems to our cloud-based infrastructure, where the systems were tested in secret environments that were sequestered, withheld, never released to the public, so that the systems were tested for generalization: for functioning in environments they could not overfit to, that they could not access during training. The winners presented their work a year ago at a workshop at the CVPR conference.

The basic mechanics here differ in an important way from the leaderboards we are familiar with in fields such as computer vision and natural language processing, because here it is not sufficient for participants to upload labels, to upload passive data. Participants must upload full, functional software systems, which must then be run in the cloud, safely, in new environments that the participants who created the systems did not have access to. This is logistically much more complex and much more expensive than running standard computer vision or natural language processing leaderboards, and we learned a lot from doing it last year for the CARLA challenge.

The basic setup is that participating driving systems must drive through a set of routes. They are given a set of routes for training, in training environments; there are seven towns available for training, as well as, of course, any environment that a participating team wants to construct. But at test time, the participating systems have to drive routes in a new environment that has never been released to the public, and we make sure that interesting, challenging, important situations happen along these routes: our traffic manager injects scenarios stochastically along the routes as the participating autonomous driving systems drive. And what are these scenarios? They are taken from the NHTSA pre-crash scenario typology. This is a catalog of the real-world traffic scenarios that are most likely to lead to accidents in the physical world, a public resource maintained by the US Department of Transportation. We used this catalog, this typology, to distill ten scenarios that are representative of the dangerous situations that most frequently lead to accidents in the physical world.
These scenarios are automatically injected along the routes as the participating autonomous driving systems drive in our test environment. Last year, the winners of the CARLA challenge presented their work at a workshop at the CVPR conference, and you can read the paper published by the winning team at this year's conference: at CVPR 2020, last month, the team that won the CARLA challenge last year wrote up their findings and published them as a paper.

For us, this was a practice run, a warm-up exercise that we did last year to understand how to implement the common task framework in autonomous driving and what issues we would encounter in doing so. Based on what we learned, I am happy to announce that we have now implemented a standing leaderboard that we are releasing. The leaderboard is live, and I am announcing it with this talk: the leaderboard is officially operational and ready for submissions.

Let me show you a video that our team made for this occasion, which we are all so excited about; this is all rendered in CARLA. Here you see a summary of the basic mechanics: participants drive in urban environments, they stochastically encounter the dangerous scenarios injected by our traffic manager, and the participating systems must deal with them. There are two tracks: the first is for systems that use purely onboard perception; the second is for systems that rely on an HD map that was precomputed and provided. The systems are evaluated on shared, objective metrics of performance, the driving score that I will describe in a bit, and you can of course make your score public on a public leaderboard.

You will see some scenarios now. This is a scenario in which you must change lanes when somebody aggressively cuts in front of you. Here you approach a roundabout and need to yield, a pretty standard yielding situation. Here, a bit more dangerously, a car is going to pull onto the road ahead of you, and you need to notice this and slow down to avoid crashing into it. This next one is a very, very dangerous scenario that hopefully doesn't happen too often, but we do want our autonomous driving systems to deal with it: this red car is going to run a red light, illegally, in front of us, and we have to notice in time, slow down, and brake to let it pass and avoid an accident, which is what happens here. Here, there is a wet patch with reduced friction on the road, and you will see the car slow down to safely traverse this patch. The CARLA leaderboard is now live, and you can all submit your driving systems to be evaluated.

Let me say a few words about the driving score, the metric, because we worked on this a lot. A good metric is really critical for a successful implementation of the common task framework, and a good metric in autonomous driving is challenging because it must reconcile multiple objectives, multiple aspects of what constitutes good driving. After all, you want to get where you need to go: that is part of good driving. But you also want to do so safely, without breaking rules and without causing havoc on the road. How do you reconcile these different objectives, and how do you combine them in a single number, a single metric, that must assess how well you got where you needed to go in the allotted time while also doing so safely?
The metric we arrived at, after a lot of experimentation, including experimentation last year in the CARLA challenge, which used a different metric from which we learned a lot, is the driving score. The driving score averages, across routes, the product of a completion score and an infraction penalty. The completion score is a percentage between 0 and 100 that quantifies how far along the route you got in the allotted time: if you traversed the whole route and reached the destination, you get 100 percent on completion; if you couldn't even get started, you get zero; if you got halfway, you get 50. The infraction penalty quantifies how safely you did this: for every type of infraction, the system gets hit with a multiplicative penalty between zero and one that reduces the driving score.

Here, by necessity, we introduced some constants that had to be set empirically, basically arbitrarily, but that is part of the nature of this game. For example, take a collision with a pedestrian. This really should not happen at all; a good driving system should never, ever collide with a pedestrian. In this case, for each collision with a pedestrian, you get hit with a multiplicative penalty of 0.5. For running a stop sign, you get hit with a penalty of 0.8. These penalties are imposed every single time: for every single stop sign, every single red light, every single pedestrian, every single collision with the static layout, again and again, you get hit with a multiplicative penalty.

Multiplicative penalties are not standard, but we implemented this solution because it keeps the score above zero and gives the participating teams a gradient signal that they can use to improve. Additive penalties, which are more common, can easily underflow, to the point that a poorly functioning driving stack will just get zero: if you subtract some number of points per infraction, it is very easy for a participating system to end up at zero points. In that regime, even if a team improves their system by a bit, their score might still be zero, and they will not even realize that they did something good, that they improved their system. With a multiplicative penalty, as we implemented for the driving score on the CARLA leaderboard, participating teams always see some improvement; they always get a gradient when they improve their driving stack.
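A minimal sketch of this aggregation, using only the two penalty coefficients stated in the talk (0.5 per pedestrian collision, 0.8 per stop-sign infraction); the function names and the example numbers are illustrative, and other infraction types would be added analogously:

```python
from typing import Dict, Iterable, Tuple

# Per-infraction multiplicative coefficients from the talk; other types
# (red lights, collisions with the static layout, ...) work the same way.
PENALTY_COEFFICIENTS: Dict[str, float] = {
    "collision_pedestrian": 0.50,
    "stop_sign_infraction": 0.80,
}

def route_score(completion_pct: float, infractions: Dict[str, int]) -> float:
    """Completion score (0-100) times a penalty applied per occurrence."""
    penalty = 1.0
    for kind, count in infractions.items():
        # Every single occurrence multiplies the penalty in again, so the
        # score shrinks toward zero but never hits it: a gradient remains.
        penalty *= PENALTY_COEFFICIENTS[kind] ** count
    return completion_pct * penalty

def driving_score(routes: Iterable[Tuple[float, Dict[str, int]]]) -> float:
    """Average of per-route scores, as on the leaderboard."""
    routes = list(routes)
    return sum(route_score(c, i) for c, i in routes) / len(routes)

# Example: a fully completed route with one stop-sign infraction, plus a
# half-completed route with one pedestrian collision.
print(driving_score([(100.0, {"stop_sign_infraction": 1}),
                     (50.0, {"collision_pedestrian": 1})]))
# (100 * 0.8 + 50 * 0.5) / 2 = 52.5
```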
The leaderboard is now live, and there is a baseline that we provide with a full starter kit that you can use for development and testing of your own systems. This is the LBC baseline, currently at a driving score of around 11%. It is based on our "Learning by Cheating" paper, published last year at CoRL, the work that accomplished a 100% success rate on all the scenarios used in the original CARLA paper three years ago. Based on this learning-by-cheating approach, Brady Zhou, a fantastic PhD student at UT Austin, implemented a full open-source starter kit for the CARLA leaderboard that you can access at this link and use as a starting point for your own entries. It comes with extensive training data that many teams may find useful.

Building on the leaderboard, we plan to hold a CARLA challenge 2020, organized in conjunction with the autonomous driving workshop at the Neural Information Processing Systems conference. The workshops have not been selected yet, so we cannot say with 100% confidence that there will be an autonomous driving workshop at the conference this year. But if the autonomous driving workshop is selected, we will organize the CARLA challenge at that workshop, and the CARLA challenge this year will simply take a snapshot of the leaderboard. So you don't have to wait. If you want to do well in the CARLA challenge this year, if you want to win and be able to say that you won CARLA challenge 2020, it is very simple: you need to do well on the leaderboard, you need to be number one on the leaderboard at some point, and we will announce when that snapshot will be taken. We will simply take a snapshot of the leaderboard, and that will be the ranking for the CARLA challenge. So if you want to do well, submit your driving system to the leaderboard, get your driving score, and improve. We hope that this implementation of the common task framework will catalyze the kind of incremental progress that is behind the remarkable achievements in fields such as computer vision and natural language processing.

I am now ready to conclude. I want to thank our team, which is behind all the amazing developments you have seen in this talk. First of all, I want to thank our team lead, German Ros, who has driven the development of CARLA for the past three years. He has been living and breathing CARLA, and his passion for this project is evident in the amazing growth of the platform. And I thank our amazing team of developers and artists who have implemented all the functionality that you saw today.

To summarize: it is important that we all remember that we are still in the basic research phase of autonomous driving. We are still trying to understand the basic design, the basic architecture, of autonomous driving systems. The very core design and architecture questions have not been answered; there is no agreement in the industry on what the basic design of autonomous driving systems should be. This is in flux; this is an open challenge. For fields at this basic design stage, it is absolutely critical to adopt a healthy experimental methodology that allows different approaches, different quote-unquote ideologies, to be measured and reconciled objectively. And we know what such an experimental methodology looks like: it is known as the common task framework, and it is perhaps the single factor most responsible for the remarkable progress we have seen in fields such as computer vision and natural language processing over the past two decades. Perhaps the single most important problem in autonomous driving today is to implement the common task framework in autonomous driving, because this is the factor that is going to put the whole field on the same kind of ramp that has been responsible for progress in computer vision and machine learning.

I am happy to say that the CARLA leaderboard, which implements the common task framework for autonomous driving, is now live. In many ways, this is what we on the CARLA team have been working towards for years. I am very, very excited that we have reached this stage, and I hope that you all check out the leaderboard. If you think you have good ideas about autonomous driving and you want objective evidence about how good those ideas are, you should participate: upload your driving system and get your driving score.
Thank you very much.
Info
Channel: Vladlen Koltun and Collaborators
Views: 3,836
Id: XmtTjqimW3g
Length: 65min 13sec (3913 seconds)
Published: Mon Jul 13 2020