Robot Localization - An Overview (Cyrill Stachniss)

Captions
So welcome, everyone, to this lecture on robot localization. The goal is to give you an overview of the different techniques that can be used for robot localization; later chapters then dive deeper into the individual techniques. Today I am only going to provide a rough overview: what localization means, what it is about, and which types of techniques can be applied to localize a robot, an autonomous car, or another moving vehicle that navigates through the environment. There is also a five-minute summary of this lecture that you may want to watch beforehand as preparation.

Now let's dive directly into the topic. Localization basically means: where am I? We need to answer the question of where the robot is in the world, where the vehicle is driving on a road, where we are. Location information is the central part here, and position information matters because it is the basis for navigating effectively through an environment. We need to know our pose, for example, to avoid bumping into obstacles and to plan correct, feasible, and efficient paths from our current location to a goal location. So positional information is key to a lot of basic robotics tasks, especially navigation tasks, and therefore we care about where we are. Localization tries to answer the question "where am I?", and often also "where am I looking or pointing to?", so which orientation I am in, in order to provide relevant information for downstream tasks.

The question "where am I?" means we want to estimate the position and orientation, sometimes called heading, of a mobile system in some external or given reference frame. That could be a GPS reference frame, but it could also be a local reference frame for a building, for a map, whatever we are using; typically I localize with respect to a map, and I want to do this based on sensor information, assuming I have a mobile system that moves through the environment. So I want to estimate where the platform is, for example an x, y or x, y, z location, as well as a heading, so where the platform is actually looking, in some reference frame. For that, we typically assume to have information about which commands the platform is executing. For an autonomous car, this may be the steering angle and how hard the gas pedal is being pressed, to estimate where the platform is going just from the commands; or it could be the odometry commands of a mobile platform. In addition, we typically assume to have sensor data: this could be a camera, a laser range finder, a GPS, or an IMU sensor; sensor data that we receive and want to use to localize ourselves in the environment. Some of our sensors allow us to localize with respect to a map, others with respect to some external reference frame even without explicitly having a map, but quite often in localization we refer to our location with respect to a map that defines a reference frame, or some other global frame such as GPS coordinates.

You also know this question as a human: if you take your mobile phone, open your favorite map application, and press "localize me" or "where am I?", you get a map that shows you a spot, for example here in our building, which tells you where you are. In this example you also get a longitude/latitude value, which describes a position in the GPS coordinate system, so you can use this global reference frame to describe your location. But that is not the only way to do this; we can also rely on maps and localize ourselves with respect to them. What you see here is actually an old robot: the
robot Rhino, developed here in Bonn maybe 25 years ago, or even a bit more. In this example it used a map of the environment, basically a local map showing the free space and, in black, the obstacles around it, and it used its sensors, such as sonars or laser rangefinders, to estimate its own pose with respect to those obstacles: a sensor model compares what the platform sees with how that fits into the map it has of the environment. As a result, the platform estimates its pose within this map, and in this case the map provides our reference frame.

We can also think about autonomous cars navigating through the environment: a vehicle like this, equipped with different types of sensors such as cameras, lidars, radars, GPS, IMUs, wheel odometers, and others, trying to estimate where it is. Here we typically have map information that could consist of lane markings, poles, buildings, or other information about what is where in the environment, which we use as a map in order to localize. This describes the problem of robot localization or vehicle localization: using our sensor information and the control commands the platform is receiving, we want to estimate where we are with respect to some map or other reference frame.

Most of the techniques for robot localization used today are probabilistic approaches. That means they take into account that our observations are not perfect, that they are affected by noise, and the same holds for our motion commands: when sending a certain motion command, we cannot rely on the platform executing it precisely; there will always be noise associated with it. Probability theory allows us to represent these uncertainties and take them into account when performing our estimation of where the platform is and where it is looking. In the non-probabilistic world, the description would be, for example: "the vehicle is exactly here". If we describe this in a 1D space, where x is the variable describing where the platform is, then standard, non-probabilistic localization would say "you are precisely here", while the probabilistic approach may fit a distribution, which may look like a Gaussian but could also be any other type of distribution, saying "the vehicle is somewhere over here": the higher the value, the higher the probability that the platform is in that region of the environment. We typically estimate such a probability distribution to describe where the platform is; we are performing state estimation, or probabilistic state estimation.

So which state are we estimating? In localization we often need to estimate the position of the platform and its heading, so where it is looking. If we live in a 2D world, we typically have a three-dimensional vector: an x, y location and a 1D heading describing which direction the platform is looking in, so three variables to estimate. If we think about 3D localization with six degrees of freedom, we have an x, y, and z coordinate and three rotational angles, for example roll, pitch, and yaw. So we either have to estimate a 3D or a 6D vector describing the current pose of the platform. Often we are actually interested in estimating just the current pose; sometimes we are also interested in estimating the full trajectory, the full path the platform has taken, but quite often we refer here just to
estimating the current location, so just a three-dimensional or six-dimensional vector. There is also the word "pose", used pretty often: pose here describes position and heading information together. The position would be, for example, x, y, z, and the heading roll, pitch, and yaw. So when you hear the word pose, it typically means position plus heading, and that is the quantity we want to estimate.

In probabilistic state estimation we are concerned with estimating the probability distribution over the state, so over where the platform is, given our observations and our control commands. In its most simple form: u is a control command, z is an observation, and x is the state I want to estimate, and it often boils down to estimating the distribution p(x | z, u). Estimating just the most likely state is also an option, but given that we are in the probabilistic framework and want to take into account the uncertainty of our pose estimate, we often really estimate the full probability distribution, which could be a Gaussian but could also be a different type of representation.

As most online navigation problems are concerned with the question "where am I now?", not "where have I been 20 minutes ago?", we often use recursive state estimation, i.e., filtering techniques, to perform this task. These filtering techniques are most of the time based on the Bayes filter, or recursive Bayes filter, that we have discussed here in the past. The belief about where the platform is right now, i.e., the probability distribution of the state at time t given all observations and control commands received so far, is bel(x_t) = p(x_t | z_1:t, u_1:t), and the recursive Bayes filter allows us to put this equation into a recursive form: the same expression, the belief at time t-1, pops up again. The current belief is computed from the previous belief plus quantities that depend only on the currently obtained variables, the current control command and the current observation. There are two key distributions in here: the motion model and the observation model. As a reminder, the observation model p(z_t | x_t) describes how likely it is to obtain the observation I actually received given the state I am currently in. The motion model p(x_t | x_t-1, u_t) describes how likely it is to end up in state x_t given that I was at x_t-1 before and executed a certain motion command, like pressing the gas pedal of my car with a certain intensity and turning with a given steering angle, or commanding certain velocities to the right and left wheels of my differential drive. It describes, in a probabilistic way, how we evolve from one state to the next. With the observation model and the motion model we can turn the previous belief into the belief at the current point in time, so we recursively update our pose estimate with such a filtering equation. Not all localization systems are based on the recursive Bayes filter, but several popular ones are.

Another pair of terms I would like to distinguish is global localization versus what is called pose tracking. Global localization means we have no idea where we start, an unknown initial configuration: we could be anywhere. Pose tracking means we typically start from a known position, or we have a fairly, let's call it, focused belief about where we are.
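As a small aside, the recursive Bayes filter update described above, prediction with the motion model followed by correction with the observation model, can be sketched for a discrete state space. This is only an illustrative sketch: the 5-cell world, the transition probabilities, and the sensor values below are made-up numbers, not from the lecture.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One recursive Bayes filter update over a discrete belief.

    belief:     (N,) prior belief over N states, sums to 1
    transition: (N, N) motion model, transition[j, i] = p(x_t = j | x_{t-1} = i, u_t)
    likelihood: (N,) observation model, likelihood[j] = p(z_t | x_t = j)
    """
    predicted = transition @ belief     # prediction: push the belief through the motion model
    corrected = likelihood * predicted  # correction: weight by the current observation
    return corrected / corrected.sum()  # normalize so it stays a probability distribution

# Made-up 5-cell 1D world: the command "move right" succeeds with probability 0.8.
n = 5
T = np.zeros((n, n))
for i in range(n):
    T[min(i + 1, n - 1), i] += 0.8  # motion succeeds (clipped at the last cell)
    T[i, i] += 0.2                  # motion fails, platform stays put
belief = np.full(n, 1.0 / n)        # global localization: uniform initial belief
z = np.array([0.1, 0.1, 0.9, 0.1, 0.1])  # a sensor reading that favors cell 2
belief = bayes_filter_step(belief, T, z)
```

After one step the belief peaks at the cell favored by the observation while still keeping some probability mass elsewhere, which is exactly the "distribution instead of a single point" idea from above.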
Such a belief has small uncertainty, and we just want to estimate how the system moves given that start location; this is what is often referred to as pose tracking. Pose tracking is typically a simpler task than global localization, because in global localization we have more ambiguities about what happens in the environment. The key difference between the two is that for global localization, from a technical perspective, I typically need a way to represent large uncertainties really well, while for pose tracking a fairly compact representation of the uncertainty is typically sufficient. For example, a Gaussian distribution is often well suited for pose tracking, but not necessarily very good for global localization, at least when I have ambiguities in the environment, so multiple hypotheses about where the system could be: I have no idea where the system starts, I need to estimate its location from the sensor observations obtained, and I need to narrow down the different possibilities at the same time. Global localization techniques therefore often need a way to represent very large uncertainties. For a Gaussian distribution it does not really matter how large my uncertainty is, as long as it can be well described by a covariance matrix; other representations, such as sample-based ones, may suffer in large-scale global localization, because representing more states where the system could be also results in higher computational or memory complexity. So depending on which problem you are addressing, the design of your localization algorithm may look different, although there are also systems where both are done together.

So the key questions we need to answer when deciding which techniques to use for global localization or pose tracking are: which type of belief do we need to maintain in order to track the position of our system? Is a Gaussian belief fine? Do we need to take multimodalities into account? And how much uncertainty can the localization system handle: can it deal with very large uncertainties, or is it constrained to rather local estimates about where the system is? This will have an impact on which localization system you choose and how you set it up.

We also often distinguish between online and offline localization. The question here is basically: is all sensor data available beforehand? If so, we typically talk about an offline approach, where we want to use all the information at hand to estimate where the system was at a given point in time, or at every point in time. In contrast, online localization typically means we only have the information available up to the current time step, and we typically estimate where we are right now. If you think about online navigation, an autonomous robot or an autonomous vehicle driving through the world, we are typically interested in online localization, because we want to know where we are right now in order to make our navigation decisions. If we recorded data and post-process it later, perhaps to use the location information to update a map of the environment, then we may be interested in offline localization: we have all the data at hand before we start our computations, and it is probably more important to come up with a good estimate of where the system was at a certain point in time, also using the "future" observations. Given that all information is available beforehand, we would probably choose an offline
localization approach for these types of problems. But in 95% of the cases, or even more, we probably head for an online system, because we want to make decisions based on where we are.

Okay, before I dive into the different techniques, I want to distinguish one more term which is often used: sensor odometry, in contrast to localization. Localization really means answering the question "where am I?". Sensor odometry, in contrast, means estimating the relative motion of our platform by exploiting sensor information. We are interested in estimating local motion updates, like local odometry, but we do not compute them from our wheel encoders; we estimate them from a sensor, for example a laser range finder observing the environment, or cameras. So we use sensor data, not wheel odometry, to compute relative motion estimates, and this is typically done by registering, or aligning, the sensor observations with respect to each other. We typically do not come up with a global pose estimate here.

There are some examples you can look at. One is visual odometry: estimating the trajectory of a camera based on visual information. What you see here is a camera stream with points scattered all over the image; these are extracted feature points, and based on them you can align one camera image with the next and then estimate the trajectory of the camera, which you see in the local view down here, tracking the motion of the camera just from the consecutive frames we receive. This is the typical visual odometry case, where we use our camera, or vision sensor, to estimate local motion updates, so local odometry, so to say; we can use this to estimate where we are, but it is typically not provided in some global frame. Often visual odometry is combined with inertial sensing, as visual-inertial odometry, which relates camera positions to each other with the help of an IMU.

If we have two consecutive frames, one camera image taken at time t and one at time t+1, we can match features, so distinct points in the world, between those images and then relate them to each other, for example using techniques such as the five-point or the eight-point algorithm, basically relating one camera location to the previous one. As you may know, this can be determined only up to scale, at least with a monocular camera: we cannot estimate how far apart the cameras are, only in which direction the camera has moved and where it is looking, so we can determine five degrees of freedom. Then we either use a stereo camera to recover the scale, or we use the IMU to provide scale information, so that we actually get a six-degree-of-freedom transformation: Δx, Δy, Δz and the three rotation angles. If you do that, key ingredients of your approach will be some way of estimating the essential or fundamental matrix, via the five-point or eight-point algorithm depending on whether you have a calibrated or an uncalibrated camera, and typically outlier rejection techniques such as RANSAC, in order to build a robust visual odometry system. But again, this uses just the camera information, reduced to a relative motion estimate, so we can update our pose; and of course, these estimates are noisy as well.
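To make that two-view step concrete, here is a minimal sketch of the linear eight-point algorithm for the essential matrix, run on synthetic, noise-free correspondences in normalized (calibrated) image coordinates. All poses and points are made up; a real visual odometry system would wrap this in RANSAC and additionally recover R and t from E, which is not shown here.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear eight-point estimate of the essential matrix E.

    x1, x2: (N, 2) corresponding points in *normalized* (calibrated) image
    coordinates of two frames, N >= 8. E satisfies x2_h^T E x1_h = 0.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    # Every correspondence gives one linear constraint on the 9 entries of E.
    A = np.einsum('ni,nj->nij', h2, h1).reshape(len(x1), 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                      # null-space solution (up to scale)
    # Enforce the essential-matrix structure: singular values (s, s, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic demo: 3D points seen from frame t ([I | 0]) and frame t+1 ([R | t]).
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
yaw = 0.1                                # small made-up rotation between frames
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
t = np.array([1.0, 0.2, 0.0])            # made-up translation (recoverable only up to scale)
x1 = X[:, :2] / X[:, 2:]                 # projection into frame t
X2 = X @ R.T + t                         # same points in frame t+1 coordinates
x2 = X2[:, :2] / X2[:, 2:]               # projection into frame t+1
E = eight_point_essential(x1, x2)
```

On this clean data the epipolar residuals x2ᵀ E x1 are near zero; note that E, and hence the translation, is only determined up to scale, matching the "five degrees of freedom" point above.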
What we can also do is lidar odometry: we use a lidar, i.e., a laser scanner, to compute odometry. The term lidar odometry is more frequently used today; before, this technique was typically called incremental scan matching. It means we register consecutive scans obtained with the laser range finder in order to estimate how the platform, or rather the reference center of the sensor, has moved from one time step to the other. This is done with scan matching techniques: the iterative closest point (ICP) algorithm, or variants of it, are the standard choice here to register two scans with respect to each other. Then I know how the platform has moved, given this pair of scans and the data associations. And given that the lidar scanner also provides me with scale information, I do not need an IMU here; I can exploit one if I have it, but otherwise the system itself provides the scale, so I directly get a six-degree-of-freedom transformation when registering pairs of lidar scans.

This is again something we refer to as sensor odometry: it provides relative motion estimates, computed from pairs of sensor observations, and it typically does not use a global map. So the question is: is this really localization? Often people would say no, because we do not localize ourselves with respect to some coordinate frame. But of course it can be helpful for tracking our pose: if we know where we are and have a very good estimate of our relative motion, we can use this to estimate where we are. I would say sensor odometry is a tool that supports localization, because we can use this information to reduce the uncertainty of our motion, but it typically does not provide a position estimate with respect to a global frame such as one provided by a map.

Okay, so now let's dive into the localization problem and see how we can answer the question of where we are in the world.
There are basically four different approaches used to localize a mobile robot or platform. The first is so-called Markov localization, sometimes also called grid localization. Here we use a discretization of the environment, where every cell of this discretization of the world stores the probability that the robot is at that location; similar to an occupancy grid map, just with more dimensions, we estimate where the platform is as a non-parametric belief. This is also sometimes called non-parametric or histogram-based localization. Then there is Monte Carlo localization, a sampling-based technique which is very frequently used to estimate the position of a platform, especially because it supports multimodal beliefs. Kalman-filter-based techniques are also very popular: if a Gaussian belief is well suited to represent the location information we want to estimate, then the Kalman filter or extended Kalman filter is a good choice. And we may also run least squares approaches, plus what is called sliding-window least squares. Least squares is an offline technique, the other three are online techniques, and the sliding window sits somewhere in between: you can see it as an interpolation between least squares and the Kalman filter. Depending on the computational resources you have, you can use the sliding-window idea to not build the full least-squares system but to estimate only over the recent observations, so that it can still be executed efficiently; sliding-window approaches have certain advantages, especially if you want to fuse different sensing modalities. The most common approaches to robot localization, however, are Monte Carlo localization, probably the most frequently used one, and Kalman-filter-based localization. I just want to illustrate these different techniques here to give you an overview; you then have the possibility to dive deeper into individual aspects such as Monte Carlo localization or Kalman-filter-based localization.

Okay, let's start with Markov localization, or grid localization. Here the belief about where the platform is in the environment is represented with a histogram. Take this example: we have a one-dimensional world, just to make it easier to illustrate, and a robot which can move through the environment and can sense doors; its sensor either says "I see a door" or it says nothing. The question is how we can use this information to estimate where the robot is in this 1D world. Markov localization uses a histogram to estimate the probability that the robot is within a certain small interval, say a five-centimeter region: I basically discretize the environment into five-centimeter cells and estimate the probability that the robot is in each cell. You can consider this a 1D histogram. If the robot moves in a 2D world, it would be a two-dimensional histogram; if you also want to estimate its orientation, three-dimensional, so x, y, and the orientation θ; and with six degrees of freedom it would be a six-dimensional histogram, which gets pretty complex quite quickly. For typical 2D localization, say a wheeled platform driving on the ground of an indoor environment, not flying through it like a UAV, a 3D histogram or 3D grid representing this belief is typically what you use. The great thing about this approach is that it naturally handles multimodal beliefs: I can represent as many modes as I want; it is simply that I can have a probability value
for every cell, so it is very easy to handle multimodal beliefs; there is nothing extra I need to do for that. However, there are limitations with respect to the localization accuracy I can obtain, dictated by the cell size: with five-centimeter cells, I can only localize my platform up to five centimeters, or five by five centimeters. And if you think about larger, higher-dimensional spaces, say a 6D space x, y, z, roll, pitch, yaw, you can imagine that you typically do not have enough memory available for a fine-grained discretization. That is the reason why this approach is typically not used for localization in the 3D world, where you would need a 6D representation, but for 2D you still sometimes find it. Another downside is that all cells need to be updated for every motion command and every observation obtained, which gets computationally very costly: even if you can store the grid in memory, updating it is expensive.

For example, say the platform stands in front of a door and we have no idea where the system is initially, so we have a global localization problem: the same uncertainty at all positions. Then the sensor says "I see a door". The observation model would then have peaks in front of the doors; this is what a door sensor tells us. Elsewhere the probability is not zero, because the sensor can make a mistake and we could still be anywhere in the environment, but if we are in front of a door, it is more likely to give us the right information. As a result, our histogram is updated and all positions that lie in front of a door get an increased probability. If in the next step the robot moves through the environment, we basically need to shift the histogram in the direction of movement, and we also smear it out, because the motion is uncertain: we increase the uncertainty due to the motion of the platform. If we then get the next observation, that we are in front of the second door, we again multiply the observation likelihood with our belief, and we get a belief where the probability of being at one position gets larger and larger. Then the robot can move again, and the histogram flattens a little, because I get more uncertain if I do not receive further sensor information. That is the basic idea of Markov localization, or grid localization: I use a histogram over the states the system can be in and estimate where the robot is based on that histogram.

The second approach I want to dive into is Monte Carlo localization. Monte Carlo localization is an alternative technique where the belief is represented differently: not with a histogram, but with random samples, and every sample is a state hypothesis, a guess where the system is, with respect to an (x, y, θ) or (x, y, z, roll, pitch, yaw) state. If this is my robot navigating through the environment, the small black lines you see represent the positions of samples, the state configuration of each sample; in this case it is just the one-dimensional position of the platform. The key idea is that we do not have a discretization, a histogram of the environment, but use random samples to represent the belief about where the platform is. Then we can do the same thing as before: if we have no idea where the platform is, we initialize the samples roughly uniformly.
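Before going into the details of Monte Carlo localization, the door-corridor example above can be sketched as a tiny grid (Markov) localization filter. The corridor length, door positions, and sensor probabilities below are made-up illustration values, not from the lecture.

```python
import numpy as np

# Made-up 1D corridor of 100 cells, with doors at three positions.
n_cells = 100
doors = [20, 45, 80]
p_see_door = np.full(n_cells, 0.05)   # the door sensor can err anywhere
for d in doors:
    p_see_door[d - 2:d + 3] = 0.6     # much more likely to fire right in front of a door

belief = np.full(n_cells, 1.0 / n_cells)  # global localization: uniform prior

def correct(belief, likelihood):
    """Observation update: multiply by the likelihood, then renormalize."""
    b = belief * likelihood
    return b / b.sum()

def predict(belief, shift):
    """Motion update: shift the histogram, then smear it out (motion is noisy).

    np.roll wraps around at the corridor ends, which is fine for this toy."""
    moved = np.roll(belief, shift)
    smeared = np.convolve(moved, [0.25, 0.5, 0.25], mode='same')
    return smeared / smeared.sum()

belief = correct(belief, p_see_door)  # robot reports: "I see a door"
belief = predict(belief, 25)          # robot drives 25 cells to the right
belief = correct(belief, p_see_door)  # "I see a door" again
```

Only the hypothesis consistent with both door sightings and the motion in between keeps a strong peak (near cell 45 here, since 20 + 25 lands in front of the second door), which is exactly the narrowing-down behavior described above.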
If the robot stands in front of a door, we apply the same observation likelihood as before, and the samples that are near the doors basically get amplified; they become more important. We will dive into how exactly that works in the next lectures, but for now we just say: those samples are better than the other samples; it is more likely that we are at those places. If the platform then moves through the environment, you can see the samples getting denser in certain regions as new observations arrive. This holds here as well: the robot moves through the environment, we get clusters of samples, and the more samples we have in a certain region, the more likely that region actually is. This is the idea of Monte Carlo localization: being able to represent multimodal beliefs, but without a fixed discretization of the environment, through a sample-based representation. It is one of the most popular localization techniques out there; especially for indoor localization, where no GPS or similar is available, Monte Carlo localization is probably the gold standard today.

Another alternative is Kalman-filter-based localization. Here we typically have Gaussian beliefs, and the motion and observation models are also Gaussian, so everything is assumed to be Gaussian, and we estimate where the platform is by updating Gaussian distributions using a Kalman filter. Quite often this is used when I localize using landmarks, because the estimate of a landmark location can be described quite well with a Gaussian distribution, at least if I am free of outliers. Then I can have a trajectory of the robot driving through the environment, shown over here, certain observations that it obtains, and a Gaussian estimate about where the platform is in the environment.
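Going back one step, the sample-based Monte Carlo update just described, weighting the samples by the observation likelihood and then resampling, can be sketched as a tiny particle filter for the same kind of 1D door world. All positions, noise levels, and sensor values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up 1D corridor of length 100 with doors at three positions.
doors = np.array([20.0, 45.0, 80.0])

def door_likelihood(x):
    """p(z = "door" | x): high close to a door, a small constant elsewhere."""
    return 0.05 + 0.55 * np.exp(-0.5 * ((x[:, None] - doors) / 2.0) ** 2).max(axis=1)

n = 1000
particles = rng.uniform(0.0, 100.0, size=n)  # global localization: uniform samples

def mcl_step(particles, motion):
    # Motion update: move every sample and add noise (the motion is uncertain).
    particles = particles + motion + rng.normal(0.0, 0.5, size=len(particles))
    # Observation update: weight each sample by the observation likelihood.
    w = door_likelihood(particles)
    w /= w.sum()
    # Resampling: draw a new sample set; likely samples get duplicated,
    # unlikely ones die out, so the samples cluster in probable regions.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = mcl_step(particles, motion=0.0)   # "I see a door", no motion yet
particles = mcl_step(particles, motion=25.0)  # drive 25 units, "I see a door" again
```

As in the grid version, only the hypothesis consistent with both door sightings and the motion in between survives as the dominant cluster; unlike the grid, no fixed discretization of the environment is needed.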
environment. Here we are basically predicting the state of the system using a Gaussian, performing a correction using our observations, and recursively updating our belief using these Gaussian distributions. This is also a commonly used technique; actually, a lot of localization techniques started as Kalman filter-based localization techniques. It works well as long as the assumption that my beliefs are Gaussian is not too strong a violation of what actually happens in reality, especially with respect to the current belief. So often, especially for global localization, this does not work that great, especially if you have multiple modes in the environment.

Okay, the next technique I want to look into is least squares, or the least squares approach to localization. This is typically an offline approach, not an online approach anymore, where we assume all the sensor information to be available beforehand and then use all of it in order to compute the full trajectory, so all poses of the platform, or maybe just one position. This uses the standard least squares error minimization approach that you know already, where we want to minimize the discrepancy between what the observations tell us and what the motion tells us, in a global setup, having all observations at hand beforehand. The least squares approach typically also takes Gaussian beliefs into account, but as it re-linearizes at every point in time, it works somewhat better than the extended Kalman filter; still, if I'm very far away from a Gaussian distribution, it also may not work very well. Quite often, least squares approaches are used as reference solutions in order to evaluate other localization systems: recording all the data, making sure there are no outliers, providing a proper estimate taking all observations into account, computing a statistically optimal solution under the assumptions, and then using this as a
reference position, and comparing online localization systems against this offline approach where all the information is available beforehand.

There are different ways how we can represent this. One way, which became more and more popular over recent years, are factor graphs. A factor graph is a graphical representation of the least squares problem where we have the states we want to estimate, in this case poses, so the positions at time zero, time one, time two, time three; we have landmark variables, here xl for the landmark location; we have observations, represented by those factors, for example seeing a landmark from different positions; we have odometry factors in between, saying how the system evolved over time; and we may have GNSS or GPS information available. We can build up this type of factor graph, where I basically want to compute configurations for those nodes so that the errors introduced by those factors get minimized. It turns into a least squares problem, but factor graphs are quite an attractive visual representation of the underlying problem; Frank Dellaert, for example, is one of the persons who made factor graphs popular in robotics and used them for the SLAM problem, for localization, and for other types of problems. It's an attractive formulation because it allows us to describe the least squares problem visually in a fairly elegant way, and you can transform it basically directly into the matrix of a normal system, or an information matrix, where you can then directly see the contributions of those factors to certain elements of your matrix.

Last but not least, we have the sliding window-based approaches, which basically use the least squares approach but don't take the whole history into account, because the more data I record, the more data I need to take into account in my least squares problem. What the sliding window least squares approach does: it basically
takes only the most recent n observations into account, let's say the most recent 20 observations, and then tries to approximate everything which happened before with one single variable. So while moving through the environment, I'm basically taking a new observation in, throwing out an old one, and trying to estimate a good belief about where the system is in the environment. The key idea is to be better than a Kalman filter, and this requires a proper marginalization of the information that I take out of my least squares problem, while still not being computationally as expensive as the full least squares problem. This is a technique which you find quite often today, especially for outdoor localization. If you think about autonomous cars driving through the environment, they typically use some type of least squares or sliding window least squares approach to estimate their position, taking different sensors into account, such as GNSS information, visual odometry, wheel odometry, LiDAR odometry, maybe radar information, and other sources, and they fuse this information with a factor graph which basically looks like this: it chops off nodes which are old and adds new nodes, taking a certain small history into account.

We can see the sliding window approach as sitting somewhere between the least squares approaches and the Kalman filter-based approaches, and depending on where I am on that line, I have a very short or a very long sliding window: with a sliding window of size one, I basically end up with a Kalman filter, and with an infinitely large sliding window, I'm actually performing a full least squares approach. You can use the size of your sliding window to control how many computational resources you need at every time step in order to solve your localization problem, and in this way turn the offline least squares approach into kind of an online approach, taking
only the most recent n observations in order to do your computations. Often, approximations are also made in terms of how to throw away, so to say, the old information that is taken out of this graph structure, so the marginalization, because exact marginalization can be computationally expensive depending on how your graph looks, and therefore approximations are often made along the way.

I brought you a small example. This is a localization system of an autonomous car driving through the environment, a system developed by one of my PhD students, Daniel Wilbers. You see a vehicle driving through the environment, and you can see the graph which is built up here. You see certain edges, illustrated here, as observations: the vehicle could observe buildings, poles, or corners, which provide additional information to the system. Then the vehicle drives through the environment; it can also observe new features and use different types of objects in the environment, such as poles or road markings. It observes those road markings and builds up this graph, and this graph that you can see over here is the factor graph that I was talking about. You can also see that this is not an infinitely long chain; it's chopped off over here, so this may be a trajectory of whatever, 30 or 50 meters maybe. The system throws away all the old information which sits back here, removing information at the back and adding new information at the front, trying to provide a good estimate about where the vehicle is in the environment, without any jumps, and taking multiple sensing modalities into account. This is just one example of how such a localization system can be used; it's actually an up-to-date localization system used in prototype autonomous vehicles built in industry today.

With this, I'm coming to the end of the overview lecture that I've given here about robot, or vehicle, localization, and in
subsequent lectures we may dive deeper into different aspects or different techniques for localization, such as Kalman filter-based localization, Monte Carlo localization, or similar.

So, in sum: localization means we want to estimate the position and the heading of a mobile system in some environment. Central building blocks that we use for this are our observation model, our sensor model, and the map of the environment. Localization itself is a key element for other tasks performed by mobile robots, such as navigation, task planning, collision avoidance, picking tasks, or delivery tasks; for all those tasks you need to know where you are. Whenever you want to move efficiently through the environment, you should know where you are, otherwise you can't navigate well, and localization provides this key building block. I guess I've shown that there are different variants of how localization can be run: online or offline, with a map or without a map, global localization or pose tracking. So there's a large set of different variants, and actually hundreds of papers on localization are out there and are still being published.

If you look at the underlying estimation techniques, we basically find four types: grid-based or Markov localization, the Kalman filter, Monte Carlo localization, and least squares, plus a variant of least squares, the sliding window least squares approach. These are the techniques that are often used in modern, up-to-date localization systems. Several of these approaches realize a Bayes filter, but the least squares approach doesn't belong to this category, because we don't have a recursive belief where we only integrate the most recent observation; we're either taking all observations into account or a certain window of observations. So this is not a Bayes filter, but the Kalman filter, Monte Carlo localization, as well as Markov localization are Bayes filter-based approaches.

With this, I'd say thank you very much
for your attention. And again, there's a short five-minute video, kind of a summary of this lecture, available as well. I hope this gave you a good introduction to what localization is before you dive deeper into studying the individual techniques. Thank you very much for your attention.
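As an editorial aside, the grid (Markov) localization cycle described in the lecture, shift and smear the histogram on motion, multiply by the observation likelihood on measurement, can be sketched in a few lines of Python. The corridor map (20 cells, doors at cells 3, 7, and 14) and all probability values below are invented purely for illustration:

```python
# Sketch of 1D grid (Markov) localization in a corridor.
# The map and all probabilities are made-up illustration values.

N_CELLS = 20
DOORS = {3, 7, 14}

def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def predict(belief, shift, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Motion update: shift the histogram by the commanded motion and
    smear it out, since the motion is uncertain (corridor treated as cyclic)."""
    n = len(belief)
    return [p_exact * belief[(i - shift) % n]
            + p_under * belief[(i - shift + 1) % n]
            + p_over * belief[(i - shift - 1) % n]
            for i in range(n)]

def correct(belief, saw_door, p_hit=0.9, p_miss=0.1):
    """Measurement update: multiply the belief by the observation
    likelihood, then renormalize (the Bayes filter correction)."""
    new = [(p_hit if ((i in DOORS) == saw_door) else p_miss) * b
           for i, b in enumerate(belief)]
    return normalize(new)

belief = [1.0 / N_CELLS] * N_CELLS       # global localization: uniform belief
belief = correct(belief, saw_door=True)  # robot reports a door
belief = predict(belief, shift=4)        # robot moves four cells
belief = correct(belief, saw_door=True)  # sees the next door
best = max(range(N_CELLS), key=lambda i: belief[i])
print(best)  # cell 7: four cells past the door at cell 3
```

After the second door observation the probability mass concentrates on the one cell consistent with both sightings, exactly the "gets larger and larger" behavior described above.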
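The Monte Carlo localization idea, representing the belief by samples, weighting them by the observation likelihood, and resampling so that good hypotheses get amplified, can be sketched similarly. Again, the corridor, the Gaussian sensor model, and all noise values are assumptions made up for this illustration:

```python
import math
import random

# Sketch of Monte Carlo localization in an invented 1D corridor
# (length 20 m, doors at x = 3, 7, 14); sensor and noise models are made up.
random.seed(0)
DOORS = [3.0, 7.0, 14.0]
WORLD = 20.0

def door_likelihood(x, sigma=0.5):
    """Likelihood of observing 'I am at a door' from position x:
    a Gaussian around the nearest door."""
    return max(math.exp(-0.5 * ((x - d) / sigma) ** 2) for d in DOORS)

def mcl_step(particles, motion, sigma_motion=0.2):
    # Motion update: move every sample with noise (uncertainty grows).
    moved = [(p + motion + random.gauss(0.0, sigma_motion)) % WORLD
             for p in particles]
    # Measurement update: weight each sample by the observation likelihood.
    weights = [door_likelihood(p) for p in moved]
    # Resampling: likely samples get duplicated, unlikely ones die out.
    return random.choices(moved, weights=weights, k=len(moved))

# Global localization: samples drawn uniformly over the corridor.
particles = [random.uniform(0.0, WORLD) for _ in range(1000)]
particles = mcl_step(particles, motion=0.0)  # robot sees a door
particles = mcl_step(particles, motion=4.0)  # drives 4 m, sees another door
mean = sum(particles) / len(particles)       # cluster forms near x = 7
```

After the first update the samples cluster multimodally at all three doors; the second update keeps only the hypothesis consistent with "door, then 4 m, then door again", which is the multimodal-to-unimodal convergence described in the lecture.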
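The Gaussian predict/correct cycle of Kalman filter-based localization reduces, in one dimension, to a couple of lines. The noise values and the measurements here are again invented for illustration:

```python
# Minimal 1D Kalman filter localization sketch: the belief is a single
# Gaussian with mean x and variance P; Q and R are made-up noise values.

def kf_predict(x, P, u, Q=0.1):
    """Prediction: shift the mean by the odometry u; variance grows."""
    return x + u, P + Q

def kf_correct(x, P, z, R=0.5):
    """Correction: fuse a position measurement z; variance shrinks."""
    K = P / (P + R)                        # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                  # initial Gaussian belief
x, P = kf_predict(x, P, u=4.0)   # robot drives 4 m (variance grows)
x, P = kf_correct(x, P, z=4.3)   # e.g. a landmark-based position fix
print(round(x, 3), round(P, 3))  # mean pulled toward z, variance reduced
```

Note that the belief stays a single Gaussian throughout, which is exactly why this works well for pose tracking with landmarks but struggles with the multimodal beliefs of global localization.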
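Finally, the least squares / factor graph formulation can be illustrated with the smallest possible example: two 1D poses, one odometry factor, and two GNSS-like position factors, with all measurement values invented for this sketch. Each factor adds its contribution to the normal equations, the matrix view mentioned in the lecture, and solving them gives the pose configuration that minimizes the sum of squared factor errors:

```python
# Tiny 1D least squares (factor graph) localization sketch.
# Unknowns: poses x0, x1. Factors (all measurement values made up):
#   position fix on x0: z = 0.1
#   odometry x1 - x0:   u = 1.0
#   position fix on x1: z = 1.3
# Each factor contributes J^T J to the normal matrix H and J^T z to b;
# solving H x = b minimizes the sum of squared residuals.

H = [[0.0, 0.0], [0.0, 0.0]]  # normal (information) matrix
b = [0.0, 0.0]

def add_factor(J, z):
    """Accumulate a linear factor with Jacobian row J and measurement z."""
    for i in range(2):
        for j in range(2):
            H[i][j] += J[i] * J[j]
        b[i] += J[i] * z

add_factor([1.0, 0.0], 0.1)    # position fix on x0
add_factor([-1.0, 1.0], 1.0)   # odometry factor between x0 and x1
add_factor([0.0, 1.0], 1.3)    # position fix on x1

# Solve the 2x2 system H x = b by Cramer's rule.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
x0 = (b[0] * H[1][1] - H[0][1] * b[1]) / det
x1 = (H[0][0] * b[1] - b[0] * H[1][0]) / det
print(round(x0, 3), round(x1, 3))
```

Note how the odometry factor couples the two unknowns through the off-diagonal entries of H: this is the "contribution of each factor to certain elements of the matrix" that one can read directly off the factor graph.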
Info
Channel: Cyrill Stachniss
Views: 1,999
Keywords: robotics, photogrammetry
Id: 8VJ-A9OlhAE
Length: 40min 21sec (2421 seconds)
Published: Fri Nov 19 2021