Self-Driving Cars: Planning (Benedikt Mersch)

Captions
Welcome to the lecture on planning for self-driving cars. Today we will look at the different strategies a self-driving car needs in order to get from a start configuration A to a goal configuration B. We will identify the main challenges, tackle them with several approaches, and solve the path planning problem in a hierarchical fashion. This lecture is part of the course Techniques for Self-Driving Cars, also held by my colleagues. Let's get started. This is a typical view from a self-driving car on a road, and today's question is: how do we get this green trajectory? We want to get from our current position to a goal, and the goal might be somewhere outside this view, so we need to know which overall route to take: should we make a right turn here, or go straight and then make the right turn? From the picture you can already see that we also need to take the different lanes into account: some lanes only allow a right lane change, and the lane on the left here only allows going straight. We also need to take into account traffic rules, traffic lights, and other traffic participants, so there is a lot to consider. Let's first look at the requirements for the path our planner should output. The most important one is that the trajectory is safe: we want a trajectory without any dangerous situations or collisions. On top of that, the next most important requirement is that the route is legal: we don't want to run red lights or cross a solid line we are not allowed to cross. The next two requirements are more related to the passenger who is actually sitting in the self-driving car. The passenger should also feel safe, so perceived safety is an important factor. On top of that comes comfort: we don't want very strong accelerations, and the
whole ride should feel smooth for the passenger. The final level on top is route efficiency, because we might not take the self-driving car if it takes long detours and we don't reach the goal in time. If we connect these requirements to the task of getting from A to B, we want a path which is safe, physically feasible, smooth, fast, short, or satisfies other user-defined requirements. In this picture you can see a typical situation where the car wants to take a left turn and needs to take the other participants into account: in this case there are cars on the right crossing the intersection, so we need to wait for our traffic signal to turn green and then watch out for other cars. From this picture you can already see that for a single left turn this might be easy, but if you think about a route from city A to city B, we cannot take everything into account in advance and think about all the different situations that can occur. In general, the path planning task gets harder as we consider larger planning horizons. The challenges we can identify are that the self-driving car needs to react to different events: there are speed limits, we need to account for red lights, and we need to consider that there might be jaywalking, people stepping onto the street against a red light, which requires a sort of emergency braking rule. There are also slower cars we want to overtake; we don't want to follow a slow car for three hours. These challenges can be tackled with a specific way of performing the planning. First, we observe that not all information is available from the start: while driving, we gather new information about the environment and want to include it in the planned path. This leads us to formulating path planning as online planning, so we constantly update the best path we want to take
based on these new observations. The second observation is that not all information is needed for each decision: if we just want to know the overall route, we don't need to know where a specific car will be in the future, because we can deal with that when we reach the corresponding intersection. The conclusion is that we formulate path planning in a hierarchical fashion with different modules, which each have access to different information. Since online planning simply means updating the path based on new observations, we now focus on the hierarchical planning. Here is an overview of the different planning modules; the general idea is to divide the big task of path planning into different sub-problems. The hierarchy is shown from top to bottom. At the top we have global planning, which defines the overall route; it is like a satellite navigation system which tells us to take the next turn left, follow this road, and then turn right. The next level is the behavior planner, which accounts for different driving scenarios or situations with other traffic participants and handles these situations. The next level is local planning, which actually computes the trajectory as an output and accounts for dangerous situations and other requirements, like a specific acceleration we don't want to exceed. Finally, we pass this trajectory to the controller. In this lecture we assume that the controller is already given: we have a low-level controller that takes the trajectory and outputs the corresponding control commands, so we will only focus on the three planning modules. The hierarchy also defines the amount of information available to each planner and its level of abstraction. This is our overview for today; we will cover the different
planners during the course of the lecture. The hierarchy is roughly defined by the level of abstraction, as I said, and also by the update frequency. The top level has a very high abstraction, because we only consider the overall route, and we do not need to update it at a high frequency: it is enough to plan the route once in advance and then update it if, for example, we see that the route is blocked and we cannot go there at all. The mid level has a mid abstraction level and a mid update frequency: we update it more frequently, but only if the overall driving scenario changes, for example if we move from an intersection to a highway scenario or to a straight road we need to follow. The lowest abstraction level is the local planning, which needs to be updated at a high frequency, because there are many changes and dynamic obstacles that move, so we need to update this path much more often than the global route. First, let's look at global planning and the inputs and outputs this module needs. The input is the user-defined destination, so we need to know where the passenger wants to go, and a map of the road network, so we need to know what the road looks like. The output should be a sequence of waypoints, but note that these waypoints do not need to be at the highest resolution: it is sufficient to know roughly to go from intersection one to intersection two and then move on, so it is only a rough list of waypoints. I already mentioned the comparison: global planning is similar to a satellite navigation system. We assume that we know where we are and what the overall map looks like, and the goal is to find this list of waypoints with minimum cost. In advance we need to specify which requirement the final path should satisfy, for example a very short one or a fast one, so we can combine different costs and
objectives in order to get the best path, which is then defined as the path with the minimum cost. As I said, this can be done offline in advance, just to have the overall route, and is then updated when new information comes in: besides the already mentioned blocked road, this can also be a traffic jam or a new speed limit which was not yet in the map. The maps we usually use here are high-definition maps, and the idea for global planning is to convert this map structure into a graph structure; we can then apply different search techniques on this graph to get the optimal path. From robotics you might know that a common approach is to discretize the world: we discretize actions and states, obtain a grid, and plan in that grid. But since we are dealing with self-driving cars, we can make use of the fact that we have highly detailed information about the road network: we know, for example, where intersections are, whether a street is a one-way street, or what the speed limit is. Here is the comparison: on one side the already mentioned occupancy grid map, where we have discrete states and actions, and on the other side a more topological map, where we discretize the world into topological elements. This is an example of a high-definition map: you can see different lanes and different colors which indicate whether a lane is a common road, an intersection, or part of a roundabout, and we also have information about traffic lights and traffic signs. The idea now is to generate a graph from this map. We take a different map as an example; you can see the red path segments, and we now define the nodes on this map. A node is simply the end of one segment and the start of the next one, so we divide all these segments and describe the
transition between one segment and the next as a node. These are the blue dots you can see here, and they just represent a pose, in the simplest case the position and orientation of the car. The already mentioned red lanes or segments are defined to be the edges of the graph. They contain information about the length of the path from one node to the other, and we can also encode information like a speed limit or the type of street: is it part of an intersection, or part of a straight road? So we can encode as much information as we need in order to later compute the cost from it. Here is a very simple example with four nodes: we have nodes one and two, and then three possible next nodes from node two. In this case we would build the graph like this: we can go from one to two, and then there is a decision whether to move to node three or node four, depending on which lane you want to be in afterwards. What I already said about updates applies here: in case we get new information about the map, due to a construction site like here or a blocked road, we can update the corresponding edge. In this case, the edge cost from two to four would become larger, which then tells us to prefer the way to node three, for example, rather than taking this path. The overall goal is to find the best path from start to goal, and we want to do this on a very small and simple example graph. Let's assume our car is here and we want to reach the goal. In this case it is very easy: we can already see there is a direct connection by taking the right turn here, and we are directly on the lane we want to be in. But in contrast to grid approaches, it is sometimes not that intuitive what the best path will be.
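As a rough illustration of this graph construction (a minimal sketch with made-up node IDs and costs, not the lecture's implementation), the road graph can be held as an adjacency map from node to neighbor-cost pairs, and a blocked road is handled by raising the corresponding edge cost:

```python
# Minimal sketch of the four-node road graph from the example above.
# Node IDs and edge costs are illustrative, not from a real HD map.
road_graph = {
    1: {2: 1.0},          # segment from node 1 to node 2
    2: {3: 1.0, 4: 1.0},  # from node 2 we can reach node 3 or node 4
    3: {},
    4: {},
}

def update_edge_cost(graph, src, dst, new_cost):
    """Encode new map information (e.g. a construction site) in an edge."""
    graph[src][dst] = new_cost

# A construction site between nodes 2 and 4 makes that edge expensive,
# so a search will then prefer the route via node 3.
update_edge_cost(road_graph, 2, 4, 100.0)
```

Any additional per-edge attributes from the HD map (speed limit, road type) could be stored alongside the cost in the same structure and combined into the cost later.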
In this case the goal is here, but there is actually no direct connection, at least on this part of the map, from the start to the goal, which means that we need to take a large detour and somewhere use a roundabout or make a U-turn in order to reach the goal. This is also the advantage of using high-definition maps: we know which paths are legal, because we cannot just drive onto this intersection and switch into the other lane; that is not allowed, and this is encoded in the high-definition map. What we do now is take this graph, convert it into a tree (I will explain why), and then use tree search techniques you might already know from robotics in order to find the best path. On a graph like the one on the left we can move from node to node, resulting in a path, but when we are at one of the nodes of the graph, we don't actually know where we came from; we don't know the path we have taken to reach the node. That is why we transform the graph into a tree, where we start at the top with the root node, which is the current one, and all the decisions we can take become new branches, such that one path in the graph is one unique path in the tree. If we now locate ourselves at one of the subsequent nodes, we already know the sequence of nodes we have taken to reach it, and this makes it possible to use tree search techniques to find the best path from the start node to the goal node. Coming now to tree search: we want to find the best path to our goal node, and there are different strategies to do this. We will compare the strategies in terms of completeness, which asks: if there is a solution, are we going to find it? There is also the metric of optimality: is the search algorithm optimal, so if it finds a solution, is it really the best one, or is
it just one of many solutions but not the best? We can also look at the time consumption of an algorithm, to see how much time it needs to find the solution, and the memory consumption, so how many different paths we need to remember until we find the best one. The last two are very important for implementing these techniques in practice, because we have hardware constraints, and in the self-driving car scenario we don't want to plan forever until we get the best path. In general we can also differentiate between uninformed and informed search. In uninformed search we do not use any prior information about the goal. If we have this information, which is usually the case because we know our goal position, we can formulate the search as an informed search: we make use of the prior knowledge of roughly which direction the goal lies in, define a heuristic, which is a cost estimate, and then find the best solution faster. Let's first start with uninformed search. Two very basic techniques are breadth-first search and depth-first search. You can see the search tree here; we start at the current node, the top node, and subsequently explore all the neighboring nodes. The way this works is that you usually have a list, or fringe, which is the open set of nodes that are not explored yet. We look at the list, check which node to explore next, explore that node, check whether it is the goal node, and if not, continue the search and put all the subsequent nodes on the fringe, because we want to explore them afterwards. In breadth-first search we always explore the closest nodes first: if you treat this as different levels of the search
tree, we first explore the second, third, and fourth node here, and then go to the next level. So we work under the assumption that the solution is close, and we just explore the next level. This algorithm is complete, because it will find the solution in case there is one. It is also optimal in case the action costs are equal; here we could just assume that the cost is one to go from one node to the other. Concerning time and space, the algorithm scales with b to the power of d, where b is the branching factor and d is the goal depth, because we need to explore that many branches at each level and then reach level d to find the solution. Another strategy is depth-first search, which means that we do not explore level by level but first commit to one branch and try to go as deep as possible into the search tree, in this example exploring all the way down to the last possible node four; only when there is no next state from there do we go back and continue the search in the other directions. In an infinite space this is not complete: consider the self-driving car scenario where we search for a route; with depth-first search we might just drive around the block forever, because that is the deepest branch in the tree, and we would only keep trying to find an even deeper path. In this case we just don't find the solution, because in the graph example with the HD maps we might have an infinite space: you can go in a circle in a roundabout or just drive around the block. Depth-first search is also not optimal: since we favor the deeper solutions, if the cost from one node to the next is one, we have just found the solution which is the
farthest away, which is not necessarily the optimal one. Concerning time and space, it scales in time with the branching factor to the power of the maximum tree depth, so it is usually slower than breadth-first search, but concerning memory it can be better, because we can forget many paths and subtrees once we have explored them. Here, for example, if you have explored the left branch and move to this branch, you can forget the complete left half, because you have fully explored it. So far we have assumed that the cost is equal for each transition from one node to the other, but we can also encode different edge costs, as I mentioned previously for the maps: not every path segment is equally long, and we might want to account for that. So here I want to introduce the uniform cost search, where we now take different costs into account. "Uniform" just means that the expansion of nodes is treated uniformly: we do not consider the different levels of the nodes but focus on the cost, so it is not that the costs are uniform, just the treatment of the nodes with respect to their level. We define an accumulated cost function g which keeps track, for each node, of the cost that was needed to reach it. In this example you see the different edge costs as red numbers, and you can see that they differ. When starting at node one, for example, node six is not explored first, because its cost is seven, which is higher than the other costs; instead we prefer exploring node two with a cost of two, and then move on to node three, which has an accumulated cost of three, still less than exploring this one with seven, this one with five, or this one with an accumulated cost of six. So on the fringe, the open set of nodes, you keep track of the nodes which are not explored yet together with this accumulated
cost, and then you expand the one with the lowest accumulated cost so far. This algorithm is complete and optimal, and its time and memory requirements scale with b to the power of C over e, which is a rough estimate of how many levels we need to explore when the minimum edge cost is e and the solution cost is C. The previous methods are uninformed search techniques, so they don't use any prior information; now we want to make use of the fact that we know where the goal is and guide the search a little in order to make the algorithm faster. Informed search techniques exploit additional knowledge about the goal. What is usually done is to define a heuristic, which is basically a cost estimate that tells us what we expect the cost to be from a given node to the final goal, and this speeds up the search, as you will see soon. One very simple implementation of this is greedy search, which means that for each node which is not explored yet, you only take into account the estimate of future cost from this node to the goal: you just look ahead, you don't consider the past costs, you just look, for example, for the node with the closest distance to the goal and explore that. The heuristic h describes, in this case for node 4, the estimated cost to node 12.
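As a hedged sketch (my own minimal Python, not code from the lecture), both uniform cost search and greedy search can be written as one best-first search over the graph, differing only in the priority used to order the fringe: the accumulated past cost g for uniform cost search, or the heuristic estimate h alone for greedy search. The example graph and heuristic values below are made up for illustration:

```python
import heapq

def best_first_search(graph, start, goal, priority):
    """Expand the fringe node with the smallest priority(g, node).

    graph: dict mapping node -> {neighbor: edge_cost}
    priority: function of (accumulated cost g, node) -> number
    Returns (accumulated cost, path) or None if no path exists.
    """
    fringe = [(priority(0.0, start), 0.0, start, [start])]
    visited = set()
    while fringe:
        _, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            heapq.heappush(
                fringe,
                (priority(g + cost, neighbor), g + cost, neighbor, path + [neighbor]),
            )
    return None

# Illustrative graph and heuristic (hypothetical, not the lecture's example).
graph = {"S": {"A": 2, "B": 7}, "A": {"G": 5, "B": 1}, "B": {"G": 3}, "G": {}}
h = {"S": 6, "A": 5, "B": 3, "G": 0}

ucs = best_first_search(graph, "S", "G", priority=lambda g, n: g)        # uniform cost: g only
greedy = best_first_search(graph, "S", "G", priority=lambda g, n: h[n])  # greedy: h only
```

With these made-up numbers, the greedy variant happens to return the more expensive path S→B→G (cost 10), while uniform cost search finds the cheapest path S→A→B→G (cost 6), which matches the trade-off discussed above: greedy search only looks ahead and ignores the accumulated past cost.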
So the heuristic estimates the cost from 4 to 12, and this is the only thing we take into account for selecting the next node; as I said, we ignore the accumulated past cost. This is important, because the A* search is a combination of uniform cost search and greedy search: we take both into account, the cost so far which led to the corresponding node, and the cost to go, which is the estimated cost to reach the goal. So we combine our already defined accumulated path cost function g and the heuristic h. Again we consider node 4: we know the cost from 1 to 4, which is given by the function g, and we have an estimate of the cost from 4 to 12, which is given by the heuristic h. What we do is expand the next node with the minimum sum of accumulated cost and estimated cost; this sum is the f cost. So what does our heuristic need to look like? We use this f function, which determines the overall cost based on the accumulated past cost g and the heuristic h. Let's denote by h* the actual cost from the node to the goal, so the real cost we will encounter. It is important that h is admissible. What does that mean? The heuristic h is only allowed to underestimate the cost: it needs to be equal to the real cost or lower. It may not be higher, because in that case optimality is no longer guaranteed: the heuristic might guide us to a sub-optimal path, we reach the goal on that path and think we are done, but the path we found is not optimal. So for A* we require that the heuristic h is admissible. One example heuristic, in Euclidean space, is the straight-line distance: if you are at a position, you can use the direct distance from that position to the goal, since this is always the
minimum cost in this case; any detour is already longer than the direct connection. So if you want to minimize the path length, this heuristic is admissible and very easy to implement. If you instead want to optimize for travel time, you can use a similar heuristic: take the smallest distance and the highest achievable velocity and compute the time required for the car to drive there; any longer path or slower velocity will end up with a higher cost, so this is always an underestimate. Here is a small example of the A* search. We start in this configuration space at this position, and the goal is here; you can see that there are obstacles, so we need to find a path around them. In black you see the different nodes that have been explored (it is very fast, I will play it a couple of times); the light gray nodes are the ones which are not explored yet, so they are on the fringe, and for each of them we keep track of the cost that led to the node and of the estimate to the goal. The red node is always the one currently being explored. You can see that, due to the heuristic, we avoid exploring regions like these, which are too far away from the goal: breadth-first search would have explored these, because they are close to the start, and depth-first search would have explored those, because they are very far from the start, but A* accounts for the distance to the goal. At this point you see that it has found two possible ways, going around the obstacle like this or like this, and the one that reaches the goal first is the optimal path. Okay, now we come to the second module, the behavior planning. We now have, from the first planning module, the global planning, the sequence of waypoints, and from our
perception system we also get measurements of the surroundings. The goal now is to output a rough maneuver which specifies the behavior the car wants to perform: it can be making a full stop at an intersection, driving straight, or overtaking a car, so it is a rough description out of many possible maneuvers for the same scenario. We do this because we know that following the global path requires different maneuvers and behaviors depending on the current situation, and since we cannot know in advance which situation might occur, we need to keep track of all the possibilities our car could encounter. This takes into account the interactions with other traffic participants, and also stop signs, traffic rules, and emergency stops. A common way to achieve this is to define these states and transitions in a finite state machine. A finite state machine is described by a finite set of possible states we can be in, where one state describes the scenario or maneuver we want to handle, and the transitions between the states are based on observations: if something changes in our perception of the environment, we move to another state and the car behaves differently. One way to implement this is a very simple rule-based approach: if X holds, then go to state Y. I will now show a simple example. Here we have a finite state machine with three states. The first state is that we are on a road, have a reference speed according to our trajectory, and want to follow this trajectory at that speed. Another maneuver could be that we have a lead vehicle in front and need to adapt our velocity to it in order not to crash into it. The third possible state is overtaking: in case the lead vehicle is slow, we might want to overtake. These are roughly the three possible behaviors you might have on the road. Let's now
imagine we are in this state right now: we have the trajectory and we follow it, and now we need to measure our environment and see what happens around us. We stay in this state as long as these two transition conditions do not hold. In case we detect a leading vehicle and oncoming traffic, we decide to follow the lead vehicle, because we don't want to crash; if there is a lead vehicle and no traffic on the other lane, we can overtake, so in that case we move to that maneuver. Let's now assume the first condition holds: we move to a different state which describes following the lead vehicle and adapting our speed to it. We again keep track of our surroundings and check whether the leading vehicle is still there. In case it turns off somewhere and the road in front of us is free again, we can go back to our old velocity and follow the trajectory. In case the vehicle is still there and we detect no oncoming traffic, so the other lane is free, we can perform the overtaking maneuver, which we simulate now. During the overtaking we still need to keep track of everything: if we now detect oncoming traffic, we need to quickly merge back into the lane and abort the maneuver, returning to following the lead vehicle, in order not to crash into the other car; or if the lead vehicle is gone, because we for example completed the overtaking, we can follow our old trajectory at the old speed again. You see that these transitions between the states depend on other traffic participants, so we need to observe them and also estimate what they are going to do, since this affects our decision. But the future behavior of other vehicles is of course usually not known, so we can only infer it from our observations, and it is possible to support this decision making, to decide whether a transition between the states needs to be taken now, by
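The three-state machine just walked through can be sketched as a small rule-based transition function (a minimal illustration with hypothetical boolean observation flags, not the lecture's code):

```python
# States of the simple behavior FSM from the example above.
FOLLOW_TRAJECTORY = "follow trajectory at reference speed"
FOLLOW_LEAD = "follow lead vehicle, adapt speed"
OVERTAKE = "overtake lead vehicle"

def next_state(state, lead_vehicle, oncoming_traffic):
    """Rule-based transitions: 'if X holds, then go to state Y'."""
    if state == FOLLOW_TRAJECTORY:
        if lead_vehicle and oncoming_traffic:
            return FOLLOW_LEAD
        if lead_vehicle and not oncoming_traffic:
            return OVERTAKE
    elif state == FOLLOW_LEAD:
        if not lead_vehicle:
            return FOLLOW_TRAJECTORY  # lead turned off, road is free again
        if not oncoming_traffic:
            return OVERTAKE           # other lane is free
    elif state == OVERTAKE:
        if oncoming_traffic:
            return FOLLOW_LEAD        # abort, merge back behind the lead
        if not lead_vehicle:
            return FOLLOW_TRAJECTORY  # overtaking finished
    return state                      # otherwise, stay in the current state
```

In a real behavior planner, the two boolean flags would themselves come from perception and prediction, which is exactly why estimating the other participants' behavior matters.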
predicting what the other participants might do. In this picture, which was already shown before with the left turn, you can see different predictions for the cars: first we detect the cars with these bounding boxes, and then we predict what they might want to do. We predict that this one, for example, is taking a right turn, so we don't need to let it pass; we can roll forward and make the left turn behind it. These cars are driving straight or making a left turn, so we need to let them pass first. This behavior estimation is typically based on different inputs. One thing we can use is the set of past positions, velocities, and accelerations: if you know where the other cars have been and how fast they were, you can estimate where they might go. We can also use the map information: if you know that a car is on a lane which is only intended for a left turn or a left lane change, you would expect that this car will turn left. Of course it could still break the rule, but that is really hard to model, so you usually assume that the car follows the rules and performs the left turn or lane change. We can also use the sensor data directly: the self-driving car is equipped with camera, lidar, and radar systems, so we can use this information to estimate what the cars might do in the future. Common approaches to behavior estimation can be loosely grouped into physics-based, maneuver-based, and interaction-aware approaches. In physics-based approaches we could, for example, take the past positions and forward-integrate the velocities and accelerations: if the car was going fast, we know that at the next step it will be farther away. Maneuver-based approaches try to estimate the overall maneuver, for example that this car is performing a lane change on the
highway or just driving straight, and based on that you can predict what the next positions will be. Or you can model it in an interaction-aware approach, or a combination of these; in this overview I treat them separately. In the interaction-aware approach we try to predict the behavior based on what the others do, since in the driving scenario everything is coupled: vehicles pay attention to each other, and their future behavior heavily depends on each other. As I said, you can of course combine these, and there is a broad variety of estimation techniques. One important thing in behavior estimation is to account for the multi-modal nature of the problem: sometimes different future behaviors of a car are equally likely, so you cannot say whether the car is going straight or going straight and then making a left turn, and you need to account for the different possible outcomes. All right, so we have now done the global planning, so we have the overall route, and we know our current behavior based on the observations, so we know whether we want to follow the route, make the turn at the intersection, or merge lanes. Now we finally come to the local planning, which outputs the desired trajectory; that was our overall goal, to output the trajectory which is then passed to the controller. The input here is the desired maneuver and the measurements of the surroundings, so we know where we want to go and how we want to do it; the behavior planning was more about how to follow the route, and now we use this information to come up with the actual path and the velocity profile. This can now be a path with a high resolution which specifies for each time step where the car should be in the future. The goal is to return a reference trajectory which should be followed within a predefined horizon. Since
We do not plan the whole path in full detail, so we usually pick a specific horizon and only output the path for, say, the next five or ten seconds. We also generate a velocity profile, which is based on the previous decisions of the behavior planner and on the speed limit from the map used in global planning. And of course, remembering the pyramid from the beginning, the trajectory should be feasible, so the car is actually capable of following it, and obstacle-free, which makes it safe. A common approach is to generate different trajectory candidates, score them according to some objectives (we can encode the requirements from above in these objectives), and pass the trajectory with the best score to the controller. So first, as I said, we need to generate the path, and I now want to present different ways to do this. One is a combinatorial planner: we have different obstacles, shown in green, and we try to find a path that does not interfere with them. The output is already an obstacle-free path, which is good because we do not need to check again whether we hit an obstacle, but it can become intractable if there are many obstacles, we need to account for safety margins, and with dynamic obstacles it gets really hard to directly output an obstacle-free trajectory. Another way is a variational planner: we propose an initial trajectory and then optimize it by checking small deviations from this first guess, with respect to obstacles and also the dynamics, in order not to exceed specific accelerations that would make the ride uncomfortable. The drawback is that the optimization can be slow and might not converge, so in the end we might still have a non-optimal path, because it, for example, hits an obstacle or exceeds an acceleration we
initially decided not to exceed. Other ways to generate the path are sampling-based planners: we sample configurations from the whole configuration space and evaluate them. This is in fact fast and easy to implement, because we know the configuration space, sample from it, and combine the samples into a path; but it is often sub-optimal, because we cannot explore the whole configuration space, and it might have been better, for instance, to pass closer to the obstacles in order to be faster. So it is one way to reach the goal, but maybe not the optimal one. Especially for self-driving cars, we can also think about lattice planners: since we know what the road looks like, we can account for its shape and propose trajectories that align with it, for example keeping the current lane or performing lane changes, and then select among them. They adapt to the road structure, but of course the action space is limited, because we do not consider every combination of, say, steering commands; in this case the velocity or acceleration stays roughly the same, and we only consider different steering angles in order to end up at different lateral offsets. One way to generate such a trajectory is to use parametric curves; this is in contrast to sampling-based methods, where we just sample configurations and combine a list of them. Here we have different conditions to meet, like the current position and orientation, the final position and orientation, and maybe also a constraint on the curvature, because the curvature determines how pleasant following the trajectory is for the passenger. This could be done by fitting a cubic spline, or by combining different splines into a trajectory. So we define how the trajectory looks in parameter space.
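A minimal sketch of the lattice idea (my own illustrative code, not from the lecture): generate candidate paths in a road-aligned frame that all start at the current lateral offset and end at different target offsets, here using a cubic polynomial with zero slope at both ends so each candidate leaves and rejoins the lane direction smoothly.

```python
def lateral_candidate(d0, d1, length, n=20):
    """Cubic lateral profile d(s) from offset d0 to d1 over arc length
    s in [0, length], with d'(0) = d'(length) = 0 so the path starts
    and ends parallel to the lane. Returns a list of (s, d) samples."""
    pts = []
    for i in range(n + 1):
        s = length * i / n
        u = s / length                                # normalized position in [0, 1]
        d = d0 + (d1 - d0) * (3 * u**2 - 2 * u**3)    # "smoothstep" cubic
        pts.append((s, d))
    return pts

# Candidates ending at different lateral offsets: shift right, keep lane, shift left.
offsets = [-1.0, 0.0, 1.0]   # target offsets in meters (assumed values)
candidates = [lateral_candidate(0.0, d1, length=30.0) for d1 in offsets]
```

A scorer would then pick one of these candidates; the road-aligned (Frenet-style) frame is what lets the lattice adapt to curved roads.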
We have different parameters alpha and beta, and a parameter u that defines where we are on this part of the trajectory; the parameters can then be fitted to the initial conditions we have. As I said, after generating the trajectories we need to score them, so we need to come up with objective functions in order to select the best one. Different objectives can be: the distance to our reference from the global planner, since overall we should of course follow this route and only deviate from it when needed, for example to overtake a car, after which we want to merge back into the center of the lane, because that is the common way of driving; the control effort needed to follow the trajectory; the curvature, which relates to how nicely we can follow it; and collisions, so in case we would collide with another car we should not take that trajectory. In the example with a path from A to B, we might favor the red one because it is shorter; the other one takes detours and is also not that pleasant to drive, with a lot of lateral acceleration. The goal is then to select the trajectory with the lowest score, where the score is usually the sum of these different objectives. We can also weight them, if for example the distance to the reference is more important than the control effort, and discount them over time: a collision ten seconds into the future is less important, because in ten seconds the situation may have changed, so it is still okay to follow this path for the first segment and then re-plan anyway. The obstacle objective is particularly important, since not crashing into other cars is actually the most critical requirement, so we need an obstacle-free path; from the view at
the beginning, you can see that there are many different obstacles we might encounter. In general there are static and dynamic obstacles: parked cars we have to go around, since they may simply be standing on the road; lanes we are not allowed to enter, where the other cars are driving; and sidewalks, which we must not cross, since that is not allowed and we might hit a pedestrian. Then there are dynamic obstacles, like other vehicles driving around us, or pedestrians, who, as I said, can just step onto the road, so that we have to perform an emergency brake in order not to collide with them. Collision avoidance can be done with roughly two strategies. One is the combinatorial planner, which already generates collision-free trajectories, so we can be sure that the output trajectory does not collide with anything. The other way is to generate arbitrary trajectories without taking collisions into account, and then check for collisions, which then influences the score. Usually we use safety margins; for example, we can make the obstacles larger than they are, just to have a margin of safety. The problem with the first strategy is that a collision-free trajectory might not exist, especially for long planning horizons, for many obstacles, or for dynamic obstacles, which is why in this lecture we focus on the second one, the generate-score-and-pass scheme. Here you can see different ways to check for collisions. One very simple metric is the time to collision: in case we have a leading vehicle, we compare the velocities of our car and the leading car and compute, assuming both keep their velocities, the time that passes
until we crash into the leading car. You can then define a threshold and say: if this time to collision is above the threshold, it is not critical for us, because we can still change our behavior later; but if it is below the threshold, we need to act, because otherwise, in for example five seconds, we will crash into the leading car. Another way of checking for collisions, once you have the trajectory, is the so-called swath computation, which can be done for occupancy grid maps, as in this example. We have a grid map of our environment and a trajectory from A to B, with the car at A. We take the footprint of the car, whose dimensions we know, and translate and rotate this footprint along the reference trajectory; for each cell covered by the footprint we can then say that at a specific point in time there will be a car there. To check for a collision, we do the same for the other car, see which cells it occupies, and compare, for each cell in the grid, whether there is a conflict. Yet another approach is circle collision checking: we have two cars and approximate their dimensions by circles; in this case three circles are sufficient to cover the whole space a car occupies. We then check for a collision by comparing, for each pair of circles, the distance between the two centers: there is a collision if this distance is smaller than two times the radius (for circles of equal radius, i.e., smaller than the sum of the radii). This is very efficient, because we just need the three circles per car and a couple of checks between them to say whether the two cars collide at this specific time. Another way of scoring the trajectories is to look at the motion constraints.
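The two simplest collision checks above, time to collision and the circle approximation, can be sketched in a few lines (illustrative code with assumed units, not from the lecture):

```python
import math

def time_to_collision(gap, v_ego, v_lead):
    """Time until we hit the leading vehicle if both keep their speeds.
    gap: bumper-to-bumper distance [m]; velocities in [m/s].
    Returns math.inf if we are not closing in on the leader."""
    closing_speed = v_ego - v_lead
    if closing_speed <= 0.0:
        return math.inf
    return gap / closing_speed

def circles_collide(centers_a, centers_b, radius):
    """Approximate both car footprints by circles of equal radius and
    report a collision if any pair of centers is closer than 2 * radius."""
    return any(
        math.hypot(xa - xb, ya - yb) < 2.0 * radius
        for (xa, ya) in centers_a
        for (xb, yb) in centers_b
    )

# 20 m gap, closing at 5 m/s -> 4 s until collision.
print(time_to_collision(20.0, 15.0, 10.0))   # 4.0
```

With three circles per car this is only nine pairwise distance checks per time step, which is why the circle approximation is popular in practice.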
From the pyramid, you remember that perceived safety and comfort are also important for a self-driving car: the passenger should feel safe, and the ride should be pleasant. From the bicycle model we usually assume for the car's motion, we can see that when steering we move on a circle, and the curvature of this circle can be related to the acceleration the passenger experiences while driving. Looking at this trajectory, we can locally approximate it by circles around the center of instantaneous curvature; the radius of the circle is inversely proportional to the curvature, and from the curvature we can compute the experienced lateral acceleration. We then define a maximum acceleration that is still acceptable for a passenger. From the derivation you can see that the lateral acceleration is the velocity squared divided by the radius, so with the curvature definition it is c times the velocity squared, and this should stay below the threshold. To meet this condition, we can either reduce the curvature, making the trajectory straighter, or reduce the velocity. Another task of the local planner is to provide the velocity profile. This is the last step of the whole hierarchy, the last module before we pass everything to the controller, and the controller then just does what you tell it to do and follows the trajectory. Here we need to account for the different velocity limits that are set by the other modules: we might have a speed limit from the global planner, because we know from the map that there is a maximum allowed speed, and we might also have the constraint concerning the lateral acceleration from the local planner just discussed, because we know we need to
limit the curvature or the speed. From the behavior planner we can also get a reference: if we for example switch into a braking state, the set velocity should go to zero as fast as possible, because we want to make a full stop in order to, say, not hit a pedestrian or another car. What we do then is take these different velocity limits and consider their minimum: each module says this is the maximum velocity we should have, so taking the minimum guarantees that the constraint from every planning module is met. One simple way to follow the set velocity is a linear ramp: if v0 is our current velocity and the minimum of the limits is our set velocity, we change the velocity linearly until we reach the reference and then keep it constant, as long as no new set velocity arrives, for example because the perception changes. Okay, so now we have come to an end. We discussed all the steps needed to solve the path planning problem, and you have seen that we approached it in a hierarchical fashion by dividing the whole task into different sub-problems, which have access to different information and consider path planning at different levels of abstraction. The global planner returns the optimal overall route on the given road network, just a coarse sequence of waypoints we should follow. The behavior planner then takes the other traffic participants into account and considers traffic rules and socially compliant behavior, when you let someone else pass first, for example. After that we have the local planner, which finally computes a feasible trajectory and makes sure it meets the requirements from the other planners, so that the path stays close to the
global route and takes into account the velocities given by the behavior planner, if you want to make a full stop, for example. Finally, the local planner checks for collisions and guarantees that the final trajectory meets all the requirements we defined before. This trajectory is then passed to the controller, which can follow it. With this we are done with the planning problem, and thank you for your attention.
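As a closing illustration of the velocity-profile step discussed above (take the minimum of all proposed limits, including a curvature-based one derived from a_lat = c * v^2 <= a_max, then approach it with a linear ramp), here is a sketch; all names and numbers are my own assumptions, not from the lecture:

```python
import math

def curvature_speed_limit(curvature, a_lat_max):
    """From a_lat = c * v^2 <= a_lat_max it follows v <= sqrt(a_lat_max / c)."""
    if curvature <= 0.0:
        return math.inf              # straight segment: no curvature limit
    return math.sqrt(a_lat_max / curvature)

def set_velocity(map_limit, behavior_limit, curvature, a_lat_max=2.0):
    """Each module proposes a maximum speed; the minimum satisfies them all."""
    return min(map_limit, behavior_limit,
               curvature_speed_limit(curvature, a_lat_max))

def linear_ramp(v0, v_set, accel, dt, steps):
    """Move from v0 toward v_set with a bounded rate of change, then hold."""
    profile, v = [], v0
    for _ in range(steps):
        dv = max(-accel * dt, min(accel * dt, v_set - v))
        v += dv
        profile.append(v)
    return profile

# Map allows 13.9 m/s (~50 km/h), behavior planner allows 15 m/s, but a curve
# of curvature 0.05 1/m caps the speed at sqrt(2.0 / 0.05) ~ 6.32 m/s.
v_set = set_velocity(13.9, 15.0, 0.05)
profile = linear_ramp(v0=10.0, v_set=v_set, accel=2.0, dt=0.1, steps=30)
```

Re-running this every planning cycle with fresh limits gives exactly the behavior described: the car slows down for the curve and holds the new set speed until another module changes the minimum.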
Info
Channel: Cyrill Stachniss
Views: 3,465
Keywords: robotics, photogrammetry
Id: wpjUQ7IRQZI
Length: 54min 33sec (3273 seconds)
Published: Sat Nov 14 2020