Using TypeDB for Self-Driving Autonomous Vehicles

Captions
Hello everybody, welcome, welcome! How's everybody doing? As always, I jump-start this, but then we have so many people join and I never quite know when their audio kicks in, so it's usually just me talking, and I imagine some can hear me and some can't. But anyway, welcome everyone, it's great to have you back. Today we're really excited: we have Willeke calling in, and she's going to be sharing some really amazing work she's been doing over the last number of months.

As we get everybody into the webinar, do find the chat section of Zoom, select the blue panelist drop-down, then "panelists and attendees", and drop in your first name and where you're calling in from. That way we can say hello and see where everybody's calling in from; I like to count the time zones, which I always get a kick out of. Usually Tomas is here for my little comedic-relief back and forth. I don't have that today, so Willeke, you're going to have to play that role. No, it's okay, I'm just kidding, I won't make you do that; you've got your presentation to worry about.

Let's see. I'm Daniel, calling from London, at Vaticle headquarters right in downtown Soho. We've got Bob from Seattle, hello, hello! Marcus from Vienna, good to have you. Willeke, I didn't realize you were calling in from Rotterdam. Amazing. And Alan, our stalwart in Hoboken, New Jersey: Alan, welcome back, glad to have you again. We'll let a few more folks trickle in, then I'll give a quick community update for those who are new to these events we run, and then I'll hand it off to Willeke and let her steal the show and blow you all away.

I've got a couple more minutes, so as you can see from these delightful pictures we've got up on the
screen, I think we're getting close to being able to run some live events in your local cities. The entire team here has been itching to get back out and see you all in person, have some pizza, maybe share a drink or two. We're really excited; I think we're going to start kicking those off over the next few months, once we can safely do so. We really hope to see everybody in your local area soon, so make sure you stay tuned for that.

I'm going to dive in here, because I know a few folks will trickle in as I'm talking, and then Willeke can get the full cast and crew. As always, I start by introducing a bit about our community, and really our home for the community is on Discord. Discord has been an amazing platform for us to connect, collaborate and learn, and one of my favorite things about it is that we can bring together community members who are working on, or interested in, similar topics. This happens quite organically, and I like to facilitate it because I constantly see all the introductions that happen. When you join, you get popped into the introductions channel, where you can share a bit about who you are, what you're interested in, and what you're working on, and then we can connect community members spread across the world in really interesting conversations. Currently, in the TypeQL ideas channel, there's been some talk around time-series modeling and about bringing units into TypeDB, which is really exciting, and we've got a community member, Brett in Sydney, working with Gary, who I want to say is on the west coast of the US. So again, this is a great place for you all to connect and find commonalities. I encourage you to join in; I'll drop some links, but you can always use your phone
with the QR code on the screen and join that way. Next are a couple of resources. The first is our YouTube channel; I affectionately call it our inspiration hub, simply because all of the work presented at these events, including today's talk, which will go up on YouTube tomorrow morning, lives on our YouTube channel. As you think about your own applications, the products you're developing, and how TypeDB might fit into them, inevitably a community member has either thought about it or worked on it and shared it with everyone else here on YouTube. So it's a great way to draw inspiration or find some acceleration in solving problems or in implementation using TypeDB. I really encourage you to check that out; again, the QR code will get you there, but I'll share a link here in just a bit.

Next are our upcoming events. Obviously, today we're talking about self-driving cars, autonomous vehicles, but the month of August is packed. We have a couple of internal presentations from our Vaticle engineers, on how to build a TypeDB client and how to scale TypeDB for the enterprise. Then we get into some really interesting conversations from the community around spacecraft engineering models, specifically some of the work being done at the European Space Agency, which is really exciting. We'll talk a little bit about the computational future of biology, and then James Fletcher, our principal scientist, is going to join us for a conversation I know he really loves to give, about how strongly typed data for machine learning can give you some interesting results in terms of accuracy and performance. So I encourage you to check those out. All of those live on our global Meetup pages, so you can find your local meetup via the QR code there on the
screen, but again I'll drop a link for you as well. I encourage you to find the meetup in your nearest city and stay up to date with all the events coming up.

So that brings me to the end of my little update. Willeke, I'm going to turn it over to you; I'll be hanging out in the background. If you've got questions, feedback or comments, please drop them into the chat or into the Q&A section of Zoom. Then at the end, as a way to sort of model our live meetup experience, we give a little bit of open mic, so you can take the floor, ask questions directly to Willeke, and share any feedback, ideas or curiosities with the rest of the group. You do that simply by raising your hand (I don't know if you can see me providing that example there), and I'll bring you off mute at the end so you can share. Willeke, they're all yours. I'll be out here managing questions, so don't worry about that, and I'll see you at the end.

Okay, thank you Daniel. Let me share my screen. I guess it's visible now; I'll get a laser pointer. Okay, cool. Thank you all for being here. They asked me to present something about our work and what we do with TypeDB, so I'm glad to, and thank you for this invitation. Today I'm going to talk about the work we did on self-aware autonomous cars. We also do a lot of work on self-aware autonomous robots; my colleague gave a talk earlier and will probably present later in the year as well about our work on that. We use similar principles, and we applied them to autonomous cars. Me myself: I'm Willeke. I live in Rotterdam and work in Soesterberg, which is near Utrecht in the Netherlands. My background is in AI, and I do a lot of research on how humans and AI technologies, which can be robots, computer systems or autonomous cars, work together efficiently. I want to start briefly with
the company I work for: TNO. The slogan is "innovation for life". We are an independent research organization where we connect people and knowledge to create innovations. We work today with some 3,400 professionals, scattered around the Netherlands at different locations, and we also have some international locations. All these locations are eventually clustered into units, by the topics they work on. I'm going to talk today about these autonomous cars, and I didn't do it by myself: we work with many colleagues in the unit Traffic and Transport, also some ICT colleagues, and I myself, with some others, work in the unit Defence, Safety and Security. So we have a range of projects and ideas that we can work on multidisciplinarily throughout the company, sharing the knowledge we gain in our projects.

Briefly about the unit Traffic and Transport: they do a lot of things. They do research on powertrains, developing efficient and economical powertrain technology. They also research urban mobility and safety, so how traffic flows are influenced and can be made more efficient, for example by modeling. There's sustainable transport and logistics, so how to effectively and efficiently transport people and goods. And I work a lot with the department concerned with integrated vehicle safety. Some colleagues there have a nice slogan: we work on AI for safety in these autonomous cars, but we also work on safe AI, and throughout this story I will touch upon both subjects. I myself work in the department of Human-Machine Teaming, and, well, we love slogans, so we also have one: "no AI is an island". We work on use cases where these robots, or agents, or however you want to call them, these smart agents, work
in a real environment, with real humans. Therefore, to unlock the full potential we need good human-AI collaboration, and we strive to make this collaboration more effective, efficient and social.

I want to go quickly to the topic of today, because I actually have a lot to tell and I'm very curious about your input as well, so I want to save some time for that too. I want to start by noting that a lot of work has been done on autonomous cars, as you probably know, and I want to scope our place in that a bit. A lot of work is done on the autonomous driving part: a lot of end-to-end learning, where we receive a lot of data, train our algorithms, and then the car can behave like people behaved in that data. Also basic control theory; a lot is known there. Deep neural nets are often used in autonomous cars, and on the other hand cars have a lot of sensors, so you can of course retrieve a lot of data as well. We also know a lot about human driving: we know about ethics, laws, norms, preferences, and we have a common understanding of the environment. One example, which you can find on YouTube: an autonomous car on the road sees another car that is transporting traffic lights. We understand that, while such an algorithm has never seen it, so this is an out-of-distribution case. The car thinks these traffic lights are static objects, and while driving towards them it appears as if these static objects are coming towards the car, which is obviously not the case. What we try to do is combine both of these things, including knowledge about ethics and laws, because how can you represent those in data? Maybe with enough data you would learn them, but you would never specify it in such a way that you know exactly
which ethics or which laws are incorporated. What we try to do is combine this knowledge in a knowledge base, and we use TypeDB for that, to be able to reason about both, and to reason about the collective. As you know, we work on hybrid AI, so I place us across all of these parts: we have a lot of knowledge about autonomous driving and also safe driving, we have knowledge about creating and testing these algorithms, also for safety, and we work on the intermediate part. By combining data-driven and knowledge-based AI, we think this hybrid approach will make the situation safer.

We actually have quite a nice example in the Netherlands. At the end of 2020 there was a pilot of a European project that created this car, and it ran for three months in Helmond, which is near Eindhoven, driving autonomously from the Automotive Campus to the train station. Of course there was a driver in there, because in the Netherlands we have to. It was programmed such that part of the route was hard-coded; not many, maybe no, smart things were incorporated, but it drove. So we take that as a starting point and try to improve this car. Some of my colleagues could ride in it, also during the corona times, and they noticed that many things are hard-coded: if something becomes hard or unknown, the car just stops, and it can stop quite abruptly, so it's not always pleasant. There are also rules like: it cannot cross a crossing by itself, the human driver should do that; or around a school it should drive very slowly, because it might be dangerous with kids running around. But even when the school is
closed, it still drives that slowly. It was a bit frustrating for the driver that some of these things were not worked out yet, but hey, it was a pilot, and it drove. So we started with this use case. The current situation is this: when the car encounters something unknown (and it really did encounter a pile of leaves in the autumn) it says "I don't know what this is, so I will brake to avoid a dangerous situation"; then a human driver can deal with it, and afterwards the car picks up again. What we want instead is that the car can say: "I don't know what this is, but I can reason that it's maybe not a traffic participant, not something that can get hurt or move. I know the context: maybe there's no one around, maybe it's late in the evening. And I know the risk of my actions: whether I can go around it or not, or the risk of going over it. Therefore I will take this action instead of braking: I will go around the pile of leaves." This is what we would like to achieve for this car.

When we talk about these autonomous systems we often think about the loop: these systems are in the real world, so we observe things in the real world, we orient on our possibilities (what can we do), we decide what to do, and then we act in the open world. What I want to focus on today is neither of the outer parts, so not the observing and the acting, not the sensors and the actuators, but the reasoning process in between. For the competence assessment, I want to understand my surroundings, I want to understand my own capabilities, and I want to understand how competent I am in the current environment: what can I do, what risk does it have, how uncertain am I? Based on that we can advise the planner, for example to select or not select some action in this environment. So actually there are four topics today.
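The pile-of-leaves reasoning just described (unknown object, rule out that it is a traffic participant, check the context, weigh the action risk) can be sketched roughly as follows. This is only an illustrative Python sketch, not TNO's actual implementation (which reasons over a TypeDB knowledge base); all the type names and fields are assumptions.

```python
# Illustrative sketch (not the project's actual code) of the reasoning above:
# instead of always braking for an unknown object, the car weighs what it
# knows about the object, the context, and the risk of each action.

from dataclasses import dataclass

@dataclass
class Obstacle:
    recognized: bool                   # did the perception stack classify it?
    may_be_traffic_participant: bool   # could it move or get hurt?

@dataclass
class Context:
    people_nearby: bool
    swerve_is_safe: bool               # e.g. adjacent space free, no oncoming traffic

def choose_action(obstacle: Obstacle, ctx: Context) -> str:
    """Pick an action for an object ahead, per the pile-of-leaves example."""
    if obstacle.recognized:
        return "normal_planning"
    # Unknown object: only consider driving around it when we can rule out
    # that it is a traffic participant and the maneuver itself is safe.
    if (not obstacle.may_be_traffic_participant
            and not ctx.people_nearby
            and ctx.swerve_is_safe):
        return "go_around"
    return "brake"                     # fall back to the old behavior: stop safely

# A pile of leaves, late in the evening, space to pass:
leaves = Obstacle(recognized=False, may_be_traffic_participant=False)
quiet_evening = Context(people_nearby=False, swerve_is_safe=True)
print(choose_action(leaves, quiet_evening))  # go_around
```

The point of the sketch is the fallback structure: braking remains the default whenever any of the conditions cannot be established.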
I'm going to start with understanding the environment. To understand the environment we work with three types of information. First, we want information about the automotive domain: what is a road, how is it composed, what kinds of infrastructure do I have, traffic rules, laws, traffic signs. Besides that, when we drive with our car we have real sensor input: the camera data, maybe the lidar data, and also the location, GPS. This input can be taken into account. We also want to retrieve knowledge from other sources, because in most projects we have, for example, intention prediction of other road users, or traffic prediction (maybe you know where the traffic jams are; we often hear it on the radio, so this car should definitely be able to know it), and maybe points of interest: if we know there's a school, we might take that into account as well. We want to combine all of that to build situation awareness in our knowledge graph.

I'll go over these three information domains shortly. For the domain knowledge we needed a schema of the automotive domain: the basics we know as humans, which a computer might not know explicitly, so we want to bring that into the knowledge base. We noticed that a lot of work has already been done on creating ontologies of the automotive domain, so we didn't reinvent the wheel ourselves. These two papers are really extensive descriptions of everything you can encounter when you drive on a road in the outside world, so we based our model on this literature. Just to show you a bit how it looks: we specified that there might be a highway, there might be a speed limit on a highway, and it is composed of different lanes, so it has emergency lanes and one-way lanes, maybe two emergency lanes in this case, so you can imagine the layout.
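The kind of domain schema just described (a highway with a speed limit, composed of lanes of different kinds, with typed adjacencies between them) could be mirrored roughly like this. In the project this is a TypeDB schema; the Python below is only an illustrative stand-in, and every name in it is an assumption.

```python
# Rough Python mirror of the domain schema described above (the project
# itself defines this in TypeDB; names and structure here are illustrative).

from dataclasses import dataclass, field

@dataclass
class Lane:
    name: str
    kind: str          # "one-way" or "emergency"

@dataclass
class Adjacency:
    left: Lane
    right: Lane
    line: str          # "solid" or "broken" marking between the two lanes

@dataclass
class Highway:
    speed_limit: int
    lanes: list[Lane] = field(default_factory=list)
    adjacencies: list[Adjacency] = field(default_factory=list)

# A straight highway: emergency lane | one-way | one-way | emergency lane
em1 = Lane("em1", "emergency")
l1 = Lane("l1", "one-way")
l2 = Lane("l2", "one-way")
em2 = Lane("em2", "emergency")
road = Highway(speed_limit=100,
               lanes=[em1, l1, l2, em2],
               adjacencies=[Adjacency(em1, l1, "solid"),
                            Adjacency(l1, l2, "broken"),
                            Adjacency(l2, em2, "solid")])
```

Modeling adjacency as its own relation (rather than an attribute of a lane) matches the graph style of the knowledge base: the line type lives on the relation between two lanes, which is exactly where rules about lane changes need it.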
That is a straight road with two lanes and two emergency lanes, all related to each other: they are adjacent. This emergency lane is adjacent to this one-way lane via a solid line, and these lanes are adjacent via a broken line. In this way we specify how our road, our infrastructure, looks. We also have entities like passenger cars, buses and the ego vehicle (our own vehicle), and they are of course driving on a lane; here the bus and ego drive on the same lane, and this car is driving on the other lane.

Then we can have traffic, legal and safety rules. To show you an example: before we started, we actually thought we could implement all basic traffic rules, since it's a finite set, but it turns out to be quite a lot. So we started from our use case, built some generic rules from there, and we can of course extend the set as we encounter more advanced situations. In this project we have a highway use case and an urban use case. With the knowledge I just showed you, you can have a rule like: when there is a solid line, for example here towards the emergency lane, it's not legal to do a lane change. Or in this case, when someone is approaching from the right at a crossing, you have to give right of way. These rules can simply be implemented in TypeDB.

The next type of information is sensor information. We drove around with this car, and we worked with a lot of partners who provided us with many sensors, so we had a lot of sensor data which we can use to train models and algorithms. Some of this data can also be inserted into the database, as I mentioned before: think about the location of vehicles, the speed of a vehicle. These models can also calculate the distance between vehicles, and we can compute the time headway (how fast vehicles are approaching) and the time to collision.
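The legality rule mentioned above (no lane change across a solid line) is a good example of how such knowledge becomes an inference rule. In the project this is a TypeDB rule over the adjacency relations; the sketch below is a hedged Python stand-in with made-up lane names.

```python
# Sketch of the legality rule described above: a lane change is legal only
# when the two lanes are adjacent via a broken line. In the project this is
# a rule in the TypeDB knowledge base; this version is only illustrative.

ADJACENT = {                          # (lane, lane) -> line type between them
    ("emergency", "lane1"): "solid",
    ("lane1", "lane2"): "broken",
    ("lane2", "emergency2"): "solid",
}

def legal_lane_change(frm: str, to: str) -> bool:
    """Adjacency is symmetric, so look the pair up in both orders."""
    line = ADJACENT.get((frm, to)) or ADJACENT.get((to, frm))
    return line == "broken"

print(legal_lane_change("lane1", "lane2"))      # True: broken line
print(legal_lane_change("lane1", "emergency"))  # False: solid line
```

Note that non-adjacent lane pairs also come out as illegal, which is the safe default: a rule system should only license a maneuver it can positively justify.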
Time to collision: if ego stops, how much time do I have before another vehicle gets close to me? This kind of information we can get from the sensor data while the car is driving, and here too we can create rules based on it. For example: it is not safe to do a lane change when there is a vehicle close to ego in the lane it wants to move into. So if ego is here and wants to overtake the target, but there is a car in that lane, please don't do it, it's not safe. Or if someone is coming from behind and driving very fast towards you (we had that information too), it might not be safe to pull your car in front of it. That could also be a rule to implement.

Besides safety rules we can also have preference rules. For example, if the car ahead is driving slower than you, it is perfectly safe to stay behind it, but you might prefer to keep your speed and overtake. So when someone is driving slower in your lane and there is no unsafe lane change, you can do it. That's another rule you can implement. In the urban scenario, an example is that when there is a cyclist in your lane, you need to make a bit of room for the cyclist: maybe you keep going straight but move a bit to the left to give the cyclist space. That could also be a rule to include.

Lastly, I want to talk briefly about the other information we can introduce. In one project we had traffic prediction, in another we had intention prediction; we can also have points of interest like hospitals and schools, to be a bit more careful around them or behave a bit differently. Based on this information (we have roughly the same kind of schema, which I won't show) we can have separate rules, for example: when there is a traffic jam ahead, don't overtake if it's not necessary.
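The sensor-derived quantities and the lane-change safety rule above can be sketched in a few lines. This is a hedged illustration: the formulas for time headway and time to collision are the standard gap-over-speed definitions, but the thresholds are made up, not the project's values.

```python
# Hedged sketch of the quantities and safety rule described above: time
# headway and time-to-collision computed from gap and speed, then used to
# flag an unsafe lane change. The thresholds are illustrative assumptions.

def time_headway(gap_m: float, follower_speed_ms: float) -> float:
    """Seconds until the follower reaches the leader's current position."""
    return float("inf") if follower_speed_ms <= 0 else gap_m / follower_speed_ms

def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until contact if neither vehicle changes speed."""
    return float("inf") if closing_speed_ms <= 0 else gap_m / closing_speed_ms

def lane_change_unsafe(gap_m: float, closing_speed_ms: float,
                       min_gap_m: float = 10.0, min_ttc_s: float = 3.0) -> bool:
    """Unsafe when a vehicle in the target lane is close or approaching fast."""
    return gap_m < min_gap_m or time_to_collision(gap_m, closing_speed_ms) < min_ttc_s

# Car 8 m behind in the target lane, closing at 5 m/s: too close, don't go.
print(lane_change_unsafe(8.0, 5.0))    # True
# Car 60 m away, closing at 2 m/s: TTC is 30 s, the change is acceptable.
print(lane_change_unsafe(60.0, 2.0))   # False
```

A preference rule ("overtake a slower vehicle") would then simply be gated on `not lane_change_unsafe(...)`, mirroring the "no unsafe lane change" condition in the talk.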
It's preferred to keep your lane instead of overtaking, because it doesn't make sense when there is a traffic jam in front of you.

If we then have some situation awareness about the environment, we want to understand our own confidence, so that we can reason about our competence in this environment. This was a project where we tried to estimate the competence of a deep neural net that made a traffic prediction. We had a data-driven algorithm, a neural net, that predicts driving behavior, specifically cut-ins. In this case, for example, this is the ego vehicle, and since this is a merging lane, we can estimate that this truck has to make way for this other truck to merge onto the highway. But the driver prediction model, the deep neural net, didn't predict such a thing, so if you run the simulation, the car crashes into the truck. Here we want to make sure that the AI itself is safe; this is the second part I mentioned at the start. The use case: we have a cut-in classifier, and the goal is to assess the competence of this classifier. We do that in a hybrid approach again: there is a data-driven part, namely an out-of-distribution analysis, and we can reason with the ontology about the situation and about the competence of the model. We need both to assess the eventual competence of the deep neural net.

We did that roughly as follows. The first part is that we get observations; in this case we had a simulation running, and from the simulation we feed information into our knowledge base about the environment, as mentioned before, for example about the road geometry and the visibility of lanes (you can read out your sensor and see how much of a lane is visible or obscured, maybe due to a truck). These same
observations go into the cut-in classifier, which makes a yes/no prediction; this is the prediction whose competence we want to assess. We also have a feature uncertainty estimator; this is the data-driven part, which looks at the importance of the features and how far out of distribution the sample is. If the sample is out of distribution, the classifier might be less certain, since it has never, or only a few times, encountered that situation before.

What we can then add to our schema is a cut-in predictor: we want this algorithm itself as an entity in our schema, as we might have different cut-in predictors, maybe trained on different data sets. So we say: for this cut-in predictor, this vehicle gets this prediction value of a cut-in. We also have the feature uncertainty from the data-driven component, and a vector of feature uncertainties: we first thought about using the uncertainty of only the most important features, but eventually we took a combined feature uncertainty into account in our model. We implement this in our environment model: this vehicle is on a lane, has a name, and can of course have connections with ego.

What we actually introduced here to assess the competence is an importance value and a doubt value, and we needed some extra information, namely the visibility of lanes. I'll explain briefly. We reason about the importance of entities, and this was encoded by domain experts. We ourselves know that where there is a lane entrance, things can happen; you have to be a bit more alert than when you're just on a straight road going ahead. So if there is a lane entrance, we label it as an important entity. Very close-by entities are also labeled as more important, since you have to take
them into account right away, while for entities further away it's easier to anticipate over a longer run; when something happens close to you, you have to react right now.

The other value is doubt. It is a kind of uncertainty value, but we didn't want to call it uncertainty because it isn't exactly that; it's more of a doubt. What it means is: if a sample is far out of distribution, so the feature uncertainty is high, then you are more doubtful about the situation, because you have seen few or none of these samples before. You may also have unknown physical entities: our expert on this deep neural net knew that the training set didn't include motorcycles, for example, so if we encounter a motorcycle, we know the model is more doubtful. Then there is low visibility: in the example I showed you with the merging trucks, the trucks really obscure the visibility of the ego car, so we know that when this situation occurs the algorithm is more doubtful. And when the classifier itself is uncertain, you want to take that into account too, since it has its own measure.

When we fill this in, it looks a bit like this. We have a truck driving on a one-way lane. We know the lane, that's fine; we have high visibility, so that's great; this vehicle is in this lane, so we don't have high doubt about that; it's something we've seen before. A truck, we've also seen before. It's close to ego, so the importance is set to medium. But the truck has a high feature uncertainty: maybe it went very fast, or had a weird angle, maybe the driver was a bit sleepy and doing weird things. So the cut-in predictor does provide a prediction value of 0.6, but because of the high feature uncertainty we want to assign a somewhat higher doubt.
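The expert-encoded labeling just described can be sketched as two small functions: importance from distance and lane entrances, doubt from feature uncertainty, unknown entity types, and low visibility. All thresholds, and the "missing from training" list, are assumptions for the sketch, not the project's actual values.

```python
# Illustrative version of the expert-encoded labeling described above.
# Thresholds and the UNSEEN_IN_TRAINING set are assumptions.

UNSEEN_IN_TRAINING = {"motorcycle"}   # the expert knew these were not in the data

def importance(distance_to_ego_m: float, on_lane_entrance: bool) -> str:
    """Close-by entities and lane entrances demand attention right away."""
    if on_lane_entrance or distance_to_ego_m < 15:
        return "high"
    return "medium" if distance_to_ego_m < 40 else "low"

def doubt(feature_uncertainty: float, entity_type: str, visibility: float) -> str:
    """High doubt when out of distribution, unknown to training, or obscured."""
    doubtful = (feature_uncertainty > 0.7          # far out of distribution
                or entity_type in UNSEEN_IN_TRAINING
                or visibility < 0.5)               # view largely obscured
    return "high" if doubtful else "low"

# The truck from the example: known type, high visibility, medium distance,
# but a high feature uncertainty -> medium importance, high doubt.
print(importance(25.0, on_lane_entrance=False))   # medium
print(doubt(0.9, "truck", visibility=0.9))        # high
```

The labels stay symbolic ("high"/"medium"/"low") on purpose: they are attributes attached to entities in the knowledge graph, which the reasoning step then aggregates.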
Higher doubt, because we're not sure whether this prediction is really reliable. Next, after this slide I guess, is how we use this graph to calculate competence. There's my agenda: indeed, we're going to reason about current performance. I showed you this picture before: the observations go into the cut-in classifier and the feature uncertainty predictor, and eventually we reason with this information. The graph I ended with is used by the competence assessment module to reason about whether the car can continue in autonomous mode: whether the car is competent to continue, or a human should take over. We do that in three steps, which I'll show briefly. We remember some seconds of the past, so we have the current situation and a couple of situations before; then we forecast what will happen in the coming two seconds; and then we decide: stay autonomous, or hand over control. These slides build up the graph a bit, so I'm going to go to the example; I think I have time, yes.

This is our ego vehicle, and this is the situation: three trucks, a merging lane with a lane entrance, and this motorcycle, which we don't know. We are specifically looking for these edge cases, because for normal cases it performs quite well, but these edge cases can be fatal, and there we hope we can improve the algorithm. Here we have our first truck, truck one, and it's quite close to ego, so the importance is high. This is the doubt value, and it's also quite high, I think because it's on this merging lane, this entrance lane, yes. We also have the entrance lane itself, which likewise has a high importance, because, as I mentioned, this is statically encoded: we think entrance lanes are
important, because things can happen when people are merging. There is also quite high doubt here, and that is because this truck is obscuring the view of the ego vehicle: it cannot see the motorcycle well, maybe just the front of it; it can't see this part of the road, and therefore we say the algorithm should be a bit more doubtful. Then we have all the other concepts. The lane of ego has low importance, because we can see there's no one in it, nothing exciting happening, so doubt is also low. Truck two is far away; we know trucks, we know how they behave, it's in our model, so the importance here is set to low, but it's still on this entrance lane, so that doubt value is somewhat higher. The motorcycle is something we haven't seen before and are very doubtful about.

These doubt and importance values are combined in a weighted average to compute the competence value, something between zero and one. This competence value is then saved for, I think, one and a half seconds, and then a linear regression model does the forecasting: if things keep going like they did in the last one and a half seconds, we think that in the coming two seconds we will be this competent. We can of course set a threshold on how competent we want the autonomous car to be in order to drive autonomously or hand over control. This is a picture of the forecaster given the graph I just filled out with you. Here you see the competence values, and we look up to one and a half seconds into the past. These are the competence values of the last one and a half seconds, this is where we are now, and if we fit this linear regression into the future, then we see a trend starting: we are becoming less competent, and we forecast on that.
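The pipeline just described (weighted average of per-entity importance and doubt giving a competence value in [0, 1], a linear fit over the recent history, extrapolation two seconds ahead, and a threshold deciding autonomous versus hand-over) can be sketched end to end. The numeric weights, scores and the 0.6 threshold are assumptions for illustration, not the project's values.

```python
# Sketch of the competence computation described above. The mapping from
# symbolic labels to numbers, and the decision threshold, are assumptions.

IMPORTANCE_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 3.0}
DOUBT_SCORE = {"low": 0.1, "medium": 0.5, "high": 0.9}

def competence(entities: list[tuple[str, str]]) -> float:
    """entities: (importance, doubt) label pairs; importance weights the doubt."""
    weights = [IMPORTANCE_WEIGHT[imp] for imp, _ in entities]
    doubts = [DOUBT_SCORE[d] for _, d in entities]
    avg_doubt = sum(w * d for w, d in zip(weights, doubts)) / sum(weights)
    return 1.0 - avg_doubt             # low weighted doubt -> high competence

def forecast(history: list[float], dt: float, horizon_s: float) -> float:
    """Least-squares line through the recent samples, extrapolated ahead."""
    n = len(history)
    xs = [i * dt for i in range(n)]
    mean_x, mean_y = sum(xs) / n, sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * (xs[-1] + horizon_s - mean_x)

# Truck one and the entrance lane: important and doubtful; ego's lane and
# truck two: less so. Competence drops, the trend is extrapolated 2 s ahead.
scene = [("high", "high"), ("high", "high"), ("low", "low"), ("low", "high")]
history = [0.8, 0.7, 0.6, competence(scene)]   # ~1.5 s of samples, 0.5 s apart
predicted = forecast(history, dt=0.5, horizon_s=2.0)
print("stay autonomous" if predicted >= 0.6 else "hand over control")
```

Worked through, the scene gives a competence of 0.2, the fitted line slopes downward, and the two-second forecast falls well below the threshold, so the sketch hands control to the human, matching the behavior shown in the demo.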
We note that it's a simple model and a very simple forecasting method, but already we can anticipate what might come, and we show this in this visualization. I actually think I have a running example. It happens quite fast, but you see the forecaster here, the embeddings here, and the confidence level here. You see that it's zero: in this case we took five seconds as the window, and the system always takes the lowest value of your forecast, so it says zero. I think now it will go a bit higher, but this is a very doubtful situation; you see that the cutting predictor predicts a lot of cut-ins and we're really uncertain about the values. And actually now I cannot stop it, because of this pointer, I guess. So as you see, at this point the truck is merging and the motorcycle at the side is cutting into the lane of ego; when this happens, it makes space for the truck, and I think we can build up confidence from there once the situation becomes quieter. But you see that in this situation our model says the car is not competent enough to drive autonomously. Just to go back over the information I showed you: here you see the cutting predictor output, which is one or zero, for all the things it sees (it doesn't see this truck yet, so it only sees these), and we have the feature uncertainty of the important features of the cutting predictor. And here is an even faster animation, where you see the distribution of one of the features: it starts quite far out of distribution and then moves to a more familiar situation at the end. We wrote a paper about this, I think it's on one of my slides; if you are interested you can of course read it. We had two use cases: the one I showed you before, where a lot is happening, and a quieter one.
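[Editor's note] The handover rule described here, take the lowest forecast value in the window and compare it against a confidence threshold, reduces to a few lines. The 0.6 threshold and the function name are illustrative assumptions; the talk only says a threshold can be set:

```python
def autonomy_decision(forecasts, threshold=0.6):
    """Decide between autonomous driving and a handover request.

    `forecasts` holds the forecast confidence values over the chosen window
    (five seconds in the demo). Per the talk, the system always takes the
    lowest value in the window, so a single pessimistic dip is enough to
    trigger a handover.
    """
    worst = min(forecasts)
    return "drive autonomously" if worst >= threshold else "hand over control"
```

Taking the minimum rather than the mean makes the policy deliberately conservative: one low forecast anywhere in the window forces the handover.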
The second use case is quite quiet: some cars just rush past you and come into your lane. The situation I showed you before covers use cases three and four, and we compared it with and without the reasoner. We built the baseline confidence value from the feature uncertainty, saying: the feature uncertainty tells us something about the competence of the algorithm. In this case the feature uncertainty is quite high, so the baseline also says here that one should take over; it's not certain, and with our reasoner the assessment is even less competent. But in the first situation you see that the feature uncertainty is high as well (we set a threshold of 0.7), and that was because this car is driving very fast. It was a simulation, while the algorithm was trained on real data, and it wasn't used to cars going that fast; maybe that doesn't happen in this scenario, but some crazy person could of course do it. So without the reasoner, the feature uncertainty implied that the human should take over, but with the reasoner we can say something like: yes, but it's far away, it's not in my immediate surroundings, I have good visibility of everything, so I actually have quite good situation awareness and we think we can deal with this situation. And indeed, the car just passes ego by, so we say the car is competent to drive autonomously in this situation; you see the difference here. The last thing I want to show you briefly is action selection. So far we talked about the modality, driving autonomously versus giving control to the human, but in another project we really looked at the specific actions the car could take.
We reason about which actions are legitimate, safe, and maybe preferred, and which actions we eliminate before passing them to the planner. Just to give a bit of background: we want to decide on an action, so in this case one could brake, accelerate, or switch lane in the current situation, and we need to do that fast, because many things are happening; we need it ten times per second. We collected our own data and trained our own algorithms: we drove around with the autonomous car I showed you before, with all the sensors, and from that information we trained a machine learning algorithm to score each action given a certain state. We did that via inverse reinforcement learning: we had a lot of examples, and we said, okay, if you are in this state and you take this action, we see that that's preferable, so it gets a high score. Then we implemented a planner, a Monte Carlo tree search (MCTS) planner, that made the actual decision. So the learned model already did a pre-selection by scoring all actions, and then MCTS decides what to do. What we actually added with Grakn (now TypeDB) is that we use information about the context: we know the actions of the actors, and besides the current environment we also have information about the traffic prediction, and based on that we can already select which actions the MCTS planner should take into account. So besides these scores, we want to filter the actions, because Monte Carlo tree search is very expensive: it builds a huge tree of possibilities, and if we can prune this tree in advance, that helps a lot when we have to work in real time with fast-changing information. This is an example of how it looks in the knowledge base.
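[Editor's note] The pruning step described here, the reasoner narrowing the action set before the expensive tree search runs, might look like this minimal sketch. The action names, scores, and function signature are assumptions for illustration; the real system queries the knowledge base for the allowed set:

```python
def prune_actions(scored_actions, allowed):
    """Keep only the actions the context reasoner allows before MCTS expands them.

    `scored_actions` maps action name -> IRL score; `allowed` is the set of
    actions the reasoner (lane layout, traffic prediction) deems worth
    considering. Shrinking the root action set shrinks the whole Monte Carlo
    search tree, which is what makes a 10 Hz decision rate feasible.
    """
    return {a: s for a, s in scored_actions.items() if a in allowed}
```

A usage example: if the reasoner rules out lane changes, MCTS only ever branches over the longitudinal actions.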
We have our ego vehicle here, and this is a situation where you want to overtake this car because you're driving faster. You cannot do it to the right, because there is an emergency lane; you want to do it to the left, but there is a car approaching very fast, and you don't want to throw your car in front of it. This is quite a hard situation for the planner to anticipate. So what we did is model ego (this is part of the huge model I showed you pieces of before), and it can do a lot of actions: cruise, full stop, accelerate, decelerate. But as you might see, we didn't filter on those; we filtered on lane changes: left lane change, lane keeping, or right lane change. As you see in this environment, the output of our model, given this information, says: a right lane change is not legal, because there was a rule stating it's not legal when there is a solid line, but it is safe, since there's no car there. Maybe you want to trade off legality against safety: if everything else is unsafe and this is your only safe option, you do want to break the law to be safer, so it's nice to have both scores. Lane keeping has a legal score, you can do that, and a safe score, because you then have to decelerate, but it's safe. The left lane change is also legal, it's even preferred, because you want to overtake when you're driving faster, but it's not safe, since this car is approaching. What then comes out is: if you filter really hard, if you don't want any zero among your action scores and only accept positive ones, then you can tell the planner that lane keeping is the only possible action right now, you cannot do a left or right lane change. Or the planner could take the numbers into account; for now we just filter.
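[Editor's note] The hard filter described here, drop any action with a zero legal or safe score, can be sketched with the overtaking example. The numeric scores below are hypothetical stand-ins mirroring the talk's description, not values from the actual model:

```python
# Hypothetical scores for the overtaking scenario: (legal, safe, preferred).
ACTIONS = {
    "right_lane_change": (0.0, 1.0, 0.0),  # solid line: illegal, but safe
    "lane_keeping":      (1.0, 1.0, 0.5),  # legal and safe, requires decelerating
    "left_lane_change":  (1.0, 0.0, 1.0),  # legal, preferred, but a fast car approaches
}

def hard_filter(actions):
    """Strict filtering: keep only actions with strictly positive legal AND safe scores."""
    return [name for name, (legal, safe, _pref) in actions.items()
            if legal > 0 and safe > 0]
```

With these scores, only lane keeping survives the filter, which is exactly what the planner is told in the demo; a softer planner could instead weigh the raw scores and trade legality against safety.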
In the future we want the planner to include a bit more information, so it can make its own trade-off and, for example, weigh the safety score. So what we did is research a hybrid AI approach for self-aware autonomous cars, using information about our environment and about our own car to reason about the competence of the car. We think that with this hybrid approach to building up situational awareness, we can reduce the risk of autonomous driving functions in an open world with unknown situations; we know we will encounter unknown situations, so better be prepared for that. What we're still planning to do: we are now working on, instead of modeling the environment as loose entities, linking them to kinds of context, so that you know more about the context you are in and can reason at a higher level about the combination of these components. We want to do the confidence assessment for more vehicle features, maybe for the whole vehicle, and for richer action spaces, and eventually we want to implement this in a real car; maybe this cool demo car can be used for that as well, so we hope we can work together with them in the future. So that was it for today. I think it was quite a talk; I hope you enjoyed it, and I'm very open to any questions, suggestions, or ideas. Thank you, Willa!
Info
Channel: Vaticle
Views: 108
Rating: 5 out of 5
Keywords: self-driving cars, autonomous vehicles, typedb, vaticle, tno research, cars, database, artificial intelligence
Id: aGgrzYaBMv8
Length: 47min 47sec (2867 seconds)
Published: Thu Jul 29 2021