MSR Course - 05 Motion and Sensor Models (Stachniss)

Captions
Okay, good morning everyone, it's my pleasure to have you all here. Today we are going to continue with our consideration of locomotion and vehicles moving through the environment. What we did last week was basically to discuss how we can describe motion if we assume perfect rolling: we just used the basic laws of physics in order to estimate where a vehicle is at time step t, given we know where it was at time t-1. In the end we were just looking into how we can describe going from A to B using the basic laws of physics. What we have also seen, however, is that those motion estimates are typically not correct, and the reason is that we have uncertainty in our system. Maybe the wheels don't have exactly the diameter we expect, maybe the ground is not perfectly flat, maybe the commands we give to our motors are not executed immediately but with a certain delay, or we cannot instantaneously change the velocity from, say, three rotations per second to four rotations per second without a small ramp-up. Therefore we have uncertainty in all these motions, and what we are looking into today is how we can actually describe this uncertainty. That is the first part of today's lecture; in the second part we will look into very basic observation models, because the motion model and the observation model are the two components we need in order to realize a recursive Bayes filter for performing state estimation. All of this is motivated by the goal of state estimation, which is something that's going to happen next week, when we look into the Kalman filter and the extended Kalman filter. For that we need a probabilistic or stochastic interpretation of what our motion looks like and what our observations look like, and the first essential ingredient is the motion model.

We saw already last week that motion is inherently uncertain, and the question is how we can describe this uncertainty. I showed this example: the system started over here and traversed the maze, and this corresponds to the corrected trajectory, the trajectory the vehicle actually took. If you take only the odometry information into account, that means integrating the motion according to the laws of physics we saw last time (in this case for a synchro drive, but it is very similar for a differential drive), then it looks as if the platform started over here and moved along this path. This is the internal estimate of the system, the x, y, theta position at which the platform believes itself to be at every point in time. What you can clearly see is a certain drift towards the right-hand side of the platform: it always goes slightly to the right. Maybe the wheels are not perfectly calibrated, or the weight distribution on the platform was unequal, and this leads to some drift. So this is what the system internally thinks it has done, this is what it has done in reality, and you can see that there is a mismatch between the two. The question is: can we describe the relationship between those two in a probabilistic sense? That means, for example, whenever we go a meter forward we need to add some uncertainty to the process. If I am the robot, standing here, and I get the motion command "go one meter forward", maybe I went exactly one meter forward, but maybe it was 99 centimeters, or a meter and five centimeters, or a meter with a slight turn to the right-hand side. So I want to describe the probability distribution of where I am at a point in time, given that I know where I was previously.

The motion model is therefore described in this form: a probability distribution over where I am at time t (my position and orientation, x, y, theta, or whatever my configuration space is), given I know where I was at the previous point in time and which motion command I executed. Let's assume our time steps are discretized, say in one-second steps; in reality they are smaller, but let's assume one second for now. So the question is: assuming I was here, this is x_{t-1}, and my motion command was "go 1.20 meters forward and rotate ten degrees to the right-hand side", then I should be somewhere over here, and this should be a probability distribution somehow centered around that position, at least if we don't have a systematic error. We could take, for example, a Gaussian with that position as its mean; again, things don't have to be Gaussian, and in reality they are not Gaussian, but it is one possible approximation. We are interested in describing this probability distribution: what's the probability that I end up here, or here, or here? These are all the different x_t values we can query, and for each we want a value out, which gives us a probability distribution, or rather a density function, since we are in a continuous space, as we typically are in our x, y, theta world. Okay, so that's what the motion model is about. Is it roughly clear to you what this motion model is? Okay, good.

The next question is why we are doing this. I briefly mentioned it already: starting next week, we are interested in performing state estimation in terms of a recursive Bayes filter, where we want to estimate the state of the system given observations and given control commands. For that we derived the Bayes filter in one of the first lectures, which turns the belief about the current state of the system given observations and controls into a recursive formula, a formula split up into several parts, one of which is the recursive term: the belief shows up again, but one time step earlier. So the belief at time t, where I am at time t given all the observations and controls seen or executed so far, is expressed through the belief at time t-1 plus some modifications, and these modifications incorporate a normalization constant and two further terms, two probability distributions, as written out in the equation that follows.
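As a quick reminder, in the notation we used when deriving it, with bel(x_t) the belief, z_t the observation, u_t the control, and \eta the normalization constant, the recursion reads:

    bel(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \, bel(x_{t-1}) \, dx_{t-1}

The integral is the prediction step discussed next, and the multiplication by p(z_t \mid x_t) is the correction step.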
The first of these two distributions is the observation model, and the second one is the motion model. Typically, if you think about the standard Bayes filter formulation, as it is done for example in the Kalman filter, these things are split up into two steps: a prediction step and a correction step. The prediction step uses the information about the motion, about the control, to predict where the system will be at the next point in time, without taking any observations into account. That is the first part, which starts with the previous belief, integrates over all the positions where the system could have been at the previous point in time, and asks where the system would evolve to under the motion model. If you think of the previous belief as a set of possibilities of where you are, let's say 100 possible locations, you basically go over all those 100 possibilities (that's what the integral is doing), and for every one of them you estimate what the forward motion would look like; this gives you the overall predicted probability distribution. That is how you can interpret the prediction, and it is the first step the Bayes filter typically does. The second step is what is referred to as the correction step, and it is basically a product of the predicted belief and what my observation tells me: a weighting between the probability distribution of where my prediction says I am and where my observation says I am. For the correction we need the observation model, which is what we are going to do in the second part of the lecture today, and for the prediction we need the motion model, which is what we are doing right now. (Sorry, the wrong slide got copied in here; we are not talking about the observation model yet, that comes later, but about the motion model.)

So we are talking about this term over here: where am I at time t, given I know where I was at time t-1 and the motion command I executed, which guides me towards x_t. We want to describe that; that is our main goal, and there are different ways you can formulate it. We will look into two variants here, and the important thing is that I now want to do this somewhat independently of the way we described motion last week. Last week we looked into the differential drive and the Ackermann steering and how to describe them; today we use a slightly simplified form, with one generic way to go from one configuration to another. One option is to describe the motion by a forward motion, a sideways motion, and a rotation: to go from any configuration in 2D, with x, y, theta as my state variables, to any other configuration, I can move in the forward direction of the ego frame at time t-1, move in the y direction, and change the orientation, and with that I can reach every other configuration. The alternative is to first execute a rotation, then a forward motion, and then a second rotation; with this I can equally describe the motion from one point in time to the next. Of course, in reality it is very seldom the case that a system rotates, moves forward, and rotates again; in reality the system most likely drove on a circular arc. But what this arc looks like exactly depends on the underlying drive, as we saw last week, and we are trying to find a slightly simplified notation so that we do not depend that much on the drive. We could select either of those models, the forward-sideways-rotate model or the rotation-translation-rotation model; for now I will stick with the rotation-translation-rotation model. Here we describe the motion as a first rotation, a forward translation, and a second rotation, and we use probabilistic concepts to describe what a noisy motion looks like in this decomposition.

In addition, we have two different types of motion models on a higher level: the first are the so-called odometry-based models, and the second are the velocity-based models. What is an odometry-based model, and what is a velocity-based model? Odometry means we have a small sensor, a wheel encoder, which counts the revolutions of the wheels. This can be a simple disc with white and black stripes and a small sensor which measures how often it sees dark-white transitions; counting these at a very high frequency allows you to estimate the rotation of your wheel. You can see the same idea on a bike: if you have a tachometer on your bike to estimate its speed, you typically have a magnet attached to one of your spokes, and on every full rotation the magnet passes a sensor and produces one tick. One tick then corresponds to one full revolution of the wheel, and if you know the outer circumference of your wheel, something around 2 meters and 6 centimeters for a regular bike, then you know that with one tick you have driven a little more than 2 meters. This is typically not precise enough for our robotics applications, so you will usually have many ticks per revolution, something like 1024 or 4096, so that every fraction of a degree generates its own tick. You just count the ticks and then you know how much your wheel has turned. That is the standard basis for odometry-based motion models: you count the ticks of the different wheels, integrate them into a prediction of the motion that was executed, and then put a probabilistic interpretation on top of that. A small sketch of this tick arithmetic follows.
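Just to make the tick arithmetic concrete, here is a minimal sketch of turning encoder ticks into a distance estimate. The tick count of 4096 per revolution comes from the example above; the wheel size is a hypothetical value, not a property of any particular robot:

import math

TICKS_PER_REVOLUTION = 4096   # resolution of the encoder disc (example value)
WHEEL_DIAMETER = 0.15         # meters; hypothetical robot wheel

def distance_from_ticks(ticks: int) -> float:
    """Distance rolled by the wheel, assuming no slippage."""
    revolutions = ticks / TICKS_PER_REVOLUTION
    return revolutions * math.pi * WHEEL_DIAMETER

# Example: 10240 ticks are 2.5 revolutions, about 1.18 m for this wheel.
print(distance_from_ticks(10240))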
The problem arises if you don't have motors which provide you those ticks directly and you cannot modify your system: then the only thing you know is which command you sent to the motors. You send a command "do something", and this "do something" is typically expressed as a velocity: you speed up the motor so that it reaches a certain velocity, or you change the acceleration, depending a little on the motor setup; for simplicity, let's say we set the velocity the motor should generate. Then you only have the information about what you sent to the system, not what the system actually did, since there is no sensor counting ticks. You can tell the system "move forward at one meter per second", but you have no idea whether it actually does so: maybe a person is holding the platform in the air, or it stands in front of a wall and is not moving at all, and there is no way for you to detect that. Or maybe your calibration is not perfect, your timer is imprecise, or the wheels have some other issue; that is very hard to track. So the odometry-based models are typically better, from a quality point of view, than the velocity-based models. The velocity-based model is an open-loop system: you send a command and basically hope, or pray, that everything goes well. With odometry you also send a command, but you get feedback by counting the number of ticks and checking whether this matches what you commanded, and you trust that feedback more than what you sent to the system.

What do those wheel encoders look like? They can be small devices; these are examples of the discs. There is a disc which you place on the axle and a small sensor which counts the black-and-white transitions; whenever you see a small pulse in the signal, you know that was one tick, and by counting the transitions from black to white and white to black you estimate how far the wheel has turned. This lets you estimate the rotation of the wheel quite well. What is not accounted for is, for example, wheel slippage: if one wheel slips, you cannot see that with odometry. So odometry is not perfect, but under well-defined conditions it gives you a pretty good idea of the revolutions of the wheels.

I want to talk a little about the reasons for errors in wheeled systems, even when we use odometry. In the perfect case everything looks great and both wheels are on the ground, but why might it not work out that well? Any ideas? There is the slip we already mentioned, but can you imagine other things that could impact the quality of your motion estimate? Yes: the wheels are not perfectly aligned, or one wheel is a little loose, tilted slightly to the outside or inside; then your calibration is wrong, the center of rotation may not sit exactly in the middle of the axle as we assume, and this will definitely cause errors in your system. Absolutely. What else? Different wheel sizes, yes. And what could cause different wheel sizes? Intuitively you would say that mounting two differently sized wheels on a platform would take a very careless engineer, but there are indeed situations where the wheel diameter changes. Any idea what that could be? Tire pressure: if you have air-filled tires and the pressure in one tire is larger than in the other, the wheels will have different diameters. Or someone may have done a very good job and pumped up both tires exactly the same way, but the weight distribution on your platform is unequal; this puts higher pressure on one side, slightly changes the diameter of that wheel, and leads to the same effect as different tire pressures with equal weight distribution. So there are even effects that are genuinely hard to control or calibrate. What else do we have? Exactly, the surface may be imperfect. On a perfect indoor floor like the one here (not very beautiful, but very good for a robotics application) there is very little slippage and everything is flat. But if you drive outside and bump over a curb or a small stone lying around, the bump kicks the wheel up and will definitely corrupt your odometry information. And, as we said, different wheel diameters can be an issue, as can different surfaces, say carpets in indoor environments; such things are very hard to model precisely. So there are things I can control and things I cannot control, and all of them affect the odometry of my system. We therefore need to move away from the idea of perfect motion and take a probabilistic view: we cannot describe the motion perfectly, so we model it with probability distributions and take those distributions into account when we are doing state estimation.

Okay, so let's go to our rotation-translation-rotation model. Say we have a platform pointing in this direction; this is x_{t-1}, and this is x_t. In this model, also called RTR for rotation-translation-rotation, we describe the motion first as a rotation rot1 which turns the platform towards the position where the system sits at the next point in time, then a translational movement straight ahead, and then, from there, a second rotation. This is what we use to describe where we are going. The next question: if I know this configuration and that configuration, how can I compute the first rotation, the translation, and the second rotation? It turns out you can do this with standard trigonometry. We know x, y, theta at time t-1 (that is assumed to be known) and likewise x, y, theta at time t; these are the quantities the locomotion system we discussed last week provides, telling us where we should be if we completely trust the system. Given both, we express the relative movement as a rotation, a translation, and a rotation. The easiest part is the translation: how far apart are the two positions? Exactly: take the x and y locations of the two configurations, subtract them from each other, square the differences, add them up, and take the square root; the standard Euclidean distance. Then we estimate the two rotations, and this is where the arctangent comes into the game: we need this angle over here, and, relative to this line, the remaining angle over there, which a little basic geometry gives us. So we obtain the translation (the thing I called trans), the first rotation, and the second rotation; these are the motion parameters estimated from the knowledge of the two configurations. These three parameters are basically my odometry command: the u_t I use in state estimation is this three-dimensional vector consisting of a rotation, a translation, and a second rotation. That is what I typically refer to as u_t.
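Written out, with (x, y, \theta) the pose at time t-1 and (x', y', \theta') the pose at time t (the same primed/unprimed notation reappears in the algorithm later), the three parameters are:

    \delta_{rot1}  = \operatorname{atan2}(y' - y,\ x' - x) - \theta
    \delta_{trans} = \sqrt{(x' - x)^2 + (y' - y)^2}
    \delta_{rot2}  = \theta' - \theta - \delta_{rot1}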
Now, the most standard approach to describe this as a probability distribution would probably be to use a Gaussian and simply center it around this point, taking it as the mean estimate. That is what you would do if you are required to use a Gaussian motion model, for example because you want to use a Kalman filter: take a Gaussian and assume the motion is Gaussian. It is one way to do it; it turns out not to be the best way to describe the motion, but it is a first approximation.

If we already assume that the motion consists of a rotation, a translation, and a second rotation, it may be better to put the noise into those individual parameters: a certain amount of noise in the first rotation, a certain amount in the translation, and a certain amount in the second rotation. The easiest version is to assume Gaussian noise in each individual parameter: the first rotation has some Gaussian error, the forward movement has some Gaussian error in its direction, and the second rotation again has some Gaussian noise. So each parameter is affected by additive, zero-mean Gaussian noise with some variance, in the 1D case a single one-dimensional noise parameter each. This is the slightly more advanced approach.

Can you tell me why this idea, using Gaussian noise on the individual parameters, is not the same as using one Gaussian distribution for the overall movement? I am telling you it is not the same, but can you give me a reason or an insight why? Any ideas? Hmm, I am not entirely happy with that explanation; there is some truth in it, but the reason you gave is not quite the right one. If we have two Gaussian-distributed quantities and we add them up, we get another quantity which is also Gaussian distributed, right? Addition is a linear transformation, and the sum of two individually Gaussian random variables is Gaussian as well. So here all our individual parts are Gaussian, all the parameters are Gaussians. What, then, makes this approach different from using one Gaussian for the final state? Why is the result not Gaussian? What you said is correct, chaining up the three Gaussian estimates does not give a Gaussian, but can you give me the precise reason? "Because of the math", yes, but let's be more exact. If you push a Gaussian distribution through some function and get an output distribution, which property must this function have so that the output stays Gaussian? Come on guys, that is something you learn very early, in the second semester: the function must be a linear function. You can push a Gaussian through a linear function, a linear model, and it stays a Gaussian distribution; if you take a nonlinear function, it does not stay Gaussian. That is exactly why error propagation approximates a nonlinear function by a linear one, to make it linear and then get an estimate of the uncertainty. And if you go back a slide, this transformation is not linear: we take the Gaussian and squeeze it through a nonlinear function, so the output will not be a Gaussian. If the transformation were linear the output would stay Gaussian, but it is clearly nonlinear. As a rule of thumb: as soon as orientations are involved there are sines, cosines, tangents and similar functions in the equations, and as soon as you see those you know the model is not linear, and therefore the output distribution will not be Gaussian, unless you make some approximation to squeeze it back into a Gaussian. So we can state at this point: the output will not be a Gaussian distribution if we do it like this. Still, I want to investigate this way of doing it, because it turns out to be much closer to reality than assuming one overall Gaussian.

So in the end we have three parameters, rotation, translation, rotation, as mean parameters, with noise added on top, and these noise terms have structure. The noise in the first rotation depends on how large that rotation is, but it may also depend on the translation: if I translate over a larger distance, because I travel faster or longer, I am typically more likely to also accumulate error in my orientation. Similarly for the translation: there is a translational part of the noise that depends on the distance traveled, but it also depends on the magnitude of the rotation; if I don't rotate at all and just go straight, my translational error is typically small, while rotating while driving also increases the uncertainty of my translation. And the second rotation likewise depends on the magnitude of the motion. So the typical model has four different parameters: one for how the magnitude of rotation influences the rotational error (typically, the more you rotate, the larger the error is likely to get); one for how the distance traveled impacts the translational noise (the further you move, the larger the noise); one for how translation affects the rotational noise; and a fourth for how rotation affects the translational noise. In general, rotation-on-rotation and translation-on-translation have larger values than translation-on-rotation and rotation-on-translation. If you want to make this model even simpler, you can neglect the cross terms, the effect of rotation on translation and of translation on rotation, and only keep rotation on rotation and translation on translation; but in the common case we have those four parameters.
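One common way to write this four-parameter model (I follow the \alpha_1 \dots \alpha_4 naming of the Probabilistic Robotics book; treating the weighted sums as standard deviations rather than variances is one of the two parameterizations in circulation) is:

    \hat{\delta}_{rot1}  = \delta_{rot1}  + \varepsilon(\alpha_1 |\delta_{rot1}| + \alpha_2\, \delta_{trans})
    \hat{\delta}_{trans} = \delta_{trans} + \varepsilon(\alpha_3\, \delta_{trans} + \alpha_4 (|\delta_{rot1}| + |\delta_{rot2}|))
    \hat{\delta}_{rot2}  = \delta_{rot2}  + \varepsilon(\alpha_1 |\delta_{rot2}| + \alpha_2\, \delta_{trans})

where \varepsilon(b) is a zero-mean noise sample with standard deviation b: \alpha_1 is rotation-on-rotation, \alpha_2 translation-on-rotation, \alpha_3 translation-on-translation, and \alpha_4 rotation-on-translation.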
The only question is what this noise distribution looks like. It can be a Gaussian, which is the first choice, but it can also be something else; other distributions are entirely possible. To give you two examples: you can use the standard Gaussian distribution for the noise term, or you can take a different distribution such as a triangular distribution. One advantage of the triangular distribution is that it has bounded support: a Gaussian puts some probability mass, however tiny, arbitrarily far away, but it is essentially impossible that your platform gets teleported somewhere far away. In the Gaussian that probability exists, however small; in the triangular distribution it goes exactly to zero, so you bound the maximum movement that can occur. There are other reasons why you might prefer a triangular over a normal distribution, but these are just two options; I am mainly doing this exercise to show you that not everything has to be Gaussian. In most cases you will stick with the Gaussian distribution, which is often a smart idea because we know how to handle Gaussians really well. So I am not advising you to avoid Gaussians; just be aware that the Gaussian is not always the best choice and that there are different possibilities.

Okay, the first thing we need to do is calculate the probability density of a movement that was executed. Assume I say: I rotate 20 degrees, move a meter forward, and rotate again by 20 degrees, so I am at a new position. How does the probability distribution look in the area around that mean, the most likely position I have right now? I want to query different x, y, theta values, say a bit more to the left, a bit more forward, and ask what the probability will look like at each of those positions. I can do this with a simple helper function which evaluates the density of a zero-mean normal distribution and, in the simplest reading, tells me how probable my deviation from the mean estimate is. It takes two parameters, called a and b: a is the query value (or the query value minus the mean, if the distribution is not zero-centered), and b is basically the standard deviation. For a triangular distribution the formula looks slightly different, basically a linear function of how far you are from the center.
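A minimal sketch of these two helpers in Python, following the prob(a, b) convention from the lecture, where a is the deviation from the mean and b is the standard deviation (the function names are mine):

import math

def prob_normal(a: float, b: float) -> float:
    """Density of a zero-mean normal distribution with std dev b, at a."""
    return math.exp(-0.5 * (a / b) ** 2) / (math.sqrt(2.0 * math.pi) * b)

def prob_triangular(a: float, b: float) -> float:
    """Density of a zero-mean triangular distribution with std dev b, at a.
    The support is |a| <= sqrt(6)*b; outside of it the density is exactly zero."""
    return max(0.0, 1.0 / (math.sqrt(6.0) * b) - abs(a) / (6.0 * b ** 2))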
What we then need to do, if we assume that the Gaussian noise sits in our odometry parameters, that is, in the rotation-translation-rotation decomposition, is to compare the rotation, translation, and rotation parts of the two motions and see how likely the deviation is. Say my command was to rotate 20 degrees, and the motion I am querying says I rotated only 10 degrees: how likely is it that this happened? Probably very unlikely, because 10 degrees is a pretty big error. So we compare the parameters of the odometry command, because those are the quantities we assumed to be Gaussian distributed. This can be turned into an algorithm which may look a little scary the first time you see it, but is actually not very complicated. We have two start-and-end position pairs. One pair is what odometry told me, what my wheel encoders said about where I was at time t-1 and at time t, here written as the unprimed and primed variables. The other pair is my hypothesis, the position I want to query. First I compute the rotation, translation, and rotation parameters for my odometry movement, which happens over here; these are my odometry parameters, what my wheel encoders told me I did within the last second, computed with exactly the same equations we defined before, just copied and pasted. Then I do the same for the query locations: besides what the wheel encoders reported, there is a second position difference I want to evaluate. How likely is it that I am not here, but actually here, or here, or here? Those are the hypotheses, the locations of interest, which result from the x and x-prime that I plug in. Then I need to compare the translation and rotation parameters of where the odometry says I am and of the position I am querying, which happens down here: the difference in the first rotation, the difference in the translation, and the difference in the second rotation, together with the noise parameters I just mentioned on the previous slide, which provide the standard deviations of the Gaussian distributions. Maybe you make large errors in your rotation because your rotational movement is very imprecise, but you are very accurate going forward; then the translational noise parameter would be very small and the rotational one rather big. So you can have different uncertainties for the first rotation, the translation, and the second rotation. There is a rotational part which is Gaussian distributed, a translational part which is Gaussian distributed, and another rotational part which is Gaussian distributed, and we treat them as independent of each other. I therefore get three different probability values, one for the first rotation, one for the translation, and one for the second rotation, and since they are independent, the overall probability is simply the product of those three numbers. With this I can query, for any position, the probability that the platform actually ends up there.
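Putting the pieces together, here is a compact sketch of this evaluation, along the lines of the motion_model_odometry algorithm from the Probabilistic Robotics book. The variable names are mine, and whether the noise scale uses the measured or the hypothesized deltas is a detail that varies between formulations; prob_normal is repeated from the previous sketch so this one is self-contained:

import math

def prob_normal(a, b):
    # Density of a zero-mean Gaussian with standard deviation b, at a.
    return math.exp(-0.5 * (a / b) ** 2) / (math.sqrt(2.0 * math.pi) * b)

def angle_diff(a):
    # Normalize an angle difference to [-pi, pi).
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def inverse_odometry(pose, pose_prime):
    # Decompose the relative motion between two poses into (rot1, trans, rot2),
    # using the formulas derived above.
    x, y, theta = pose
    xp, yp, thetap = pose_prime
    rot1 = angle_diff(math.atan2(yp - y, xp - x) - theta)
    trans = math.hypot(xp - x, yp - y)
    rot2 = angle_diff(thetap - theta - rot1)
    return rot1, trans, rot2

def motion_model_odometry(x_query, x_prev, odom_prev, odom_now, alpha):
    # Evaluates p(x_t | u_t, x_{t-1}) for the odometry motion model.
    # odom_prev, odom_now: poses reported by the wheel encoders (define u_t);
    # x_prev, x_query:     the hypothesis pair we want to evaluate;
    # alpha:               the four noise parameters (a1, a2, a3, a4).
    a1, a2, a3, a4 = alpha
    rot1, trans, rot2 = inverse_odometry(odom_prev, odom_now)      # measured
    rot1_h, trans_h, rot2_h = inverse_odometry(x_prev, x_query)    # hypothesis
    # One independent Gaussian per component; the standard deviations scale
    # with the magnitude of the measured motion (must stay strictly positive).
    p1 = prob_normal(angle_diff(rot1 - rot1_h), a1 * abs(rot1) + a2 * trans)
    p2 = prob_normal(trans - trans_h, a3 * trans + a4 * (abs(rot1) + abs(rot2)))
    p3 = prob_normal(angle_diff(rot2 - rot2_h), a1 * abs(rot2) + a2 * trans)
    return p1 * p2 * p3  # independence: the product of the three densities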
Okay, let's visualize this. This is my pose at time t-1, this is my motion command according to the odometry, in this case just moving forward, and these grayish areas show what happens when you evaluate the model at every position in here, varying the hypotheses: computing this value for this position, for this position, and for all those positions in a certain area. The darker the value, the larger the probability. What you can see is that the darkest values are centered around the odometry command, which makes sense: the system basically did what we told it to do. You may even see that it is maybe not perfectly centered but sits slightly closer to the platform (not sure you can see it perfectly). This can happen because the model assumes we can set a velocity instantly: when you say "go one meter per second forward", the system needs a little time to accelerate, and if it does not take this into account you may get a small systematic offset. But in general we get a distribution which looks like this: in the forward direction it is very unlikely here, then gets more likely, then unlikely again; this is my uncertainty in the forward motion. In the rotational part we see a spread over here, and as you can see it is not really Gaussian; it looks a little like a banana. That is why we call these banana-shaped distributions: the probability mass lies roughly on a circular arc, and the reason is the uncertainty in the orientation part. Depending on the trajectory you take, if you make multiple movements your distribution may look different; you may end up with distributions like this, because you are accumulating the error.

You can see this visualization basically as a histogram, shown here in 2D but in reality of course a 3D histogram over x, y, and theta with all the possible values, where we query all those values: these are my x-prime values, and that is the probability distribution I can generate. That is one way of visualizing these probability distributions, because I cannot provide you with an easy parametric form (it is not a Gaussian anymore), but a histogram is something I can visualize quite easily. I may also represent or display this in a different form: a sample-based representation. What is a sample-based representation? You can see it basically as doing an experiment: put the robot in simulation here, execute the command, see where it ends up, and repeat this process with different sampled noise a hundred, a thousand, ten thousand times. If you have a function which looks like this, you can represent this function by samples: the denser the samples lie together, the more likely that area is. There are actual state estimation techniques, the so-called particle filters, which we will also investigate in this course, that rely on this sample-based representation. So we may be interested in generating those samples, samples which stem from a certain distribution, say a Gaussian. If I have a technique which generates samples from a Gaussian, I can go back here and say: my mean is, say, ten degrees, a meter forward, fifteen degrees. I sample from a Gaussian centered around 10 degrees and maybe get eight degrees, maybe eleven, maybe ten, maybe three: Gaussian samples from this distribution. Then I do the same for the forward movement, sampling around the mean of one meter, so maybe I get ninety centimeters, or ninety-nine, or one meter, or one meter ten. Then the orientation as well, and then I chain them up, and this generates one possible sample of where I end up, one of those experiments. If I repeat this, I get a distribution of those values.

The question is how we actually generate samples from a normal distribution. Most of you use MATLAB, where there is already a function for this, and you may never have thought about how it is generated, but there is a quite easy way of generating at least approximately Gaussian samples. Again b is the standard deviation, and we generate samples around zero: we simply draw twelve random numbers uniformly between -b and +b, add them up, and divide the sum by two. The result lies somewhere between -6b and +6b, which translates to plus/minus six sigma, so it does not cover the whole Gaussian distribution, and we only used uniformly drawn random numbers. But it turns out that if I actually do this a million times, I get a distribution which is pretty close to a Gaussian. Even this very simple procedure, drawing random numbers, adding them up, and dividing by two, generates approximately Gaussian samples. This has to do with the central limit theorem: if you sum up random numbers, the distribution of the (scaled) sum approaches a Gaussian, and that is why this works. We can also do it for other distributions, for example the triangular distribution; the formula looks slightly different, but there is a way to do it. As an illustration for the triangular distribution: if you draw a thousand samples it looks like this, ten thousand samples like this, a hundred thousand like this, and a million samples like this, so you can see that you really do get samples from this triangular distribution.

If you now want to do this virtual experiment of simulating where the platform ends up, you can do it with the sample-based motion model algorithm: assume I am at zero-zero-zero, or at any given x, y, theta, take the motion command into account, generate random noise around this motion command, and integrate the motion. I say: my estimated odometry command is my actually measured odometry command, the input u, plus noise sampled either from the Gaussian or from the triangular distribution, with the b parameters I was using before as noise magnitudes. Then I simply compute the resulting pose from the forward-motion equations, turning the estimated odometry command, the rotation, the translation, and the rotation, into a change in x, y, theta space. If I do those experiments, I get distributions like the ones at the bottom here: I am here, this is my motion, and I repeat the experiment with differently sampled noise in the first rotation, in the translation, and in the second rotation, let's say 300 times; all those dots are the outcomes of my simulations, my virtual experiments.
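A sketch of both pieces: the twelve-uniforms trick for drawing approximately Gaussian noise, and the forward-sampling motion model built on it (following the structure of sample_motion_model_odometry in Probabilistic Robotics; the variable names and the example numbers at the end are mine):

import math
import random

def sample_normal(b):
    # Approximately Gaussian, zero-mean sample with standard deviation b:
    # add 12 uniform samples from [-b, b] and halve the sum. The result lies
    # in [-6b, 6b]; by the central limit theorem it is close to N(0, b^2).
    return 0.5 * sum(random.uniform(-b, b) for _ in range(12))

def sample_triangular(b):
    # Zero-mean triangular sample with standard deviation b.
    return (math.sqrt(6.0) / 2.0) * (random.uniform(-b, b) + random.uniform(-b, b))

def sample_motion_model_odometry(pose, u, alpha):
    # Draws one possible successor pose given the odometry command
    # u = (rot1, trans, rot2) and the noise parameters alpha = (a1..a4).
    a1, a2, a3, a4 = alpha
    rot1, trans, rot2 = u
    x, y, theta = pose
    # Perturb each motion component; the noise magnitude scales with the motion.
    r1 = rot1 + sample_normal(a1 * abs(rot1) + a2 * trans)
    t = trans + sample_normal(a3 * trans + a4 * (abs(rot1) + abs(rot2)))
    r2 = rot2 + sample_normal(a1 * abs(rot2) + a2 * trans)
    # Integrate the motion: rotate, translate, rotate.
    return (x + t * math.cos(theta + r1),
            y + t * math.sin(theta + r1),
            theta + r1 + r2)

# Repeating the experiment, e.g., 300 times gives the dot clouds on the slides:
samples = [sample_motion_model_odometry((0.0, 0.0, 0.0), (0.1, 1.0, 0.1),
                                        (0.05, 0.01, 0.05, 0.01))
           for _ in range(300)]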
Under the model presented here, different noise parameters give you different distributions. This example requires, or rather assumes, a rather large noise in the orientation, because the samples spread quite substantially in that direction, but a smaller uncertainty in the translational component. This example over here says: I am pretty precise in my orientation estimate, but pretty imprecise in how far I go forward, hence the spread along that direction. And this is another extreme case, very large noise in the rotations and very small noise in the translation, which gives you this banana-shaped distribution. These are simulations of what would happen if this were the right model. What we can then do is run the experiment in reality: take a mobile robot, let it drive a certain pattern, repeat the experiment hundreds of times, and see what the distribution actually looks like. If you do this, you can see the robot driving, and all those dots are the sampled end positions of the platform. You can see that if the motion is small, the distribution is still very Gaussian; it looks really nice. The more rotations you take into account, the more it spreads out, and in the end the thing you see over here is more of a banana-shaped distribution than a Gaussian. So depending on your noise parameters it can be more Gaussian-ish or more banana-shaped; typically, the larger the noise in the rotations and the more rotations you execute, the more you move towards a banana-shaped distribution. But it is a model which fits reality quite well.

So, what have we done so far? We looked into odometry motion models and asked how we can describe uncertain motion in a probabilistic form, and there were two main things we did. First, we computed the probability of a certain motion being executed: we turned the x, y, theta at time t-1 and the x, y, theta at time t into a relative motion command, assumed Gaussian noise on this relative motion command, and could then compute, for every end configuration, how likely it is to actually occur. Second, we did a kind of simulation, a sample-based representation: forward propagation, where we assume we know where we are and which motion command was executed, add noise to this motion command, and see where we end up. So in one case we compare a start and an end configuration and get a probability value out; in the other case we go from the current position via a motion command to a sampled end configuration. Both things are needed: for some problems I need to evaluate how likely it is to end up in a certain location, and then I use the first technique; for other setups it is more important to generate samples from the distribution in order to propagate forward and make predictions of where we are, and then the second approach is the relevant one. We did this for the odometry-based models, where we assume the system provides us this u, the rotation, translation, rotation, or in theory also other models. I used the rotation-translation-rotation model because it fits most situations really well and we do not need to dive deeply into drive-specific parameters. Of course, if you want to model one specific platform, you may want to go through the physics in order to describe it even better and exploit its constraints; for example, for an Ackermann drive the rotation-translation-rotation model is not the best choice, because the Ackermann vehicle cannot rotate on the spot, and for such drives you may want to investigate things a bit more. But in most situations we have a differential drive or something similar to it, and in this case the model gives a pretty good explanation of what is going on and is quite useful for your estimations. Are there any questions up to this point? Okay.

The take-home messages: if you have odometry, you should be able to estimate the probability distribution of that motion, and this will be the key ingredient for recursive state estimation next week, when we look into how to do the prediction step of a recursive filter. Again, this is a form of predicting where we are given certain noisy commands. Depending on the filter you are implementing, you may use the sample-based representation, or you may evaluate the probability of a certain configuration; which one you need depends on your Bayes filter implementation, as we will see later. With this I would like to make a five-minute break, and then we will continue. Thank you.

Okay, in the second part of today we will look into observation models, basic observation models. We are not looking into camera models here, because that is something at least a substantial part of you has extensively done in the photogrammetry course with me, so we are not looking into how to map a 3D point onto the image plane. I want to look into basic range sensors such as laser scanners, because this range information is very commonly used; most of the problems we solve here use some kind of basic range sensor, for example for collision avoidance, where cameras are typically not good enough to be really bulletproof, and therefore you most of the time have some form of distance sensor attached to the platform. The motivation is the same slide as 45 minutes ago: we want to do state estimation, we want to estimate where we are given what we have seen and what we have done. We come back to our Bayes filter, the same equation as before, but now it is not the motion model we look into; we look into the second part, the observation model, or measurement model, as it is sometimes called. What is it? It basically gives me the probability of a measurement given that I know my state. Assume I am a robot with a 1D range sensor, like this laser pointer, and I know my state: I am standing here, looking in this direction, and my laser pointer projects this point onto the wall. Assume this device could also measure distances; it would tell me, say, five meters. My sensor model then says: assuming I know where I am, how likely is it to measure 4 meters, how likely 4.50 meters, how likely 5 meters, how likely 5.10 meters? That is what this model is telling me: assuming I know where I am, and assuming I know where the wall is, what is the probability of measuring a certain value?
You can see this as describing how good my sensor is. If I think of it as a proximity sensor, it makes a certain error in measuring the proximity. Let's say it is a pretty good laser pointer, so the distance I measure is accurate up to a millimeter; then you could express this, for example, as a narrow Gaussian distribution centered around 5 meters with a standard deviation of 1 millimeter. That would be one very simple form of a range sensor model. Under this model, if I stand here and measure 5 meters and 2 centimeters instead of 5 meters, it returns a probability close to zero, given I am that accurate with a one-millimeter standard deviation; but if I measure something like 5.001 meters, I get a high value.

That is what this model is, and we need it in the correction step of our Bayes filter. In the correction step we have the recursively computed predicted belief and our observation model, and we multiply those two probability distributions, which gives a new probability distribution. If you think about this in terms of Gaussian distributions, it is basically combining two distributions by computing a form of weighted mean, weighting the prediction and the correction together into one Gaussian. Which distribution gets the higher weight depends on the uncertainty: if one distribution is very precise and the other very uncertain, a lot of weight goes to the precise one. For example, if you are very uncertain about your prediction but you have a very good sensor, you are going to trust the sensor more, and this term will have the higher weight in the product, simply because when you multiply two Gaussians, the one with the much smaller variance dominates the resulting Gaussian. That is the reason we need this model, and today we want to look at how to describe it for a typical sensor. (This is the slide from before, which I copied to the wrong position.) So this is our observation model, and for now we look into the question of how we compute this term.
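For the single-beam laser-pointer example, this observation model is just a Gaussian centered on the expected distance. With the numbers from above (expected range 5 m, standard deviation 1 mm), and writing z_{exp}(x, m) for the distance I should measure given pose and map (a name introduced here for convenience):

    p(z \mid x, m) = \mathcal{N}(z;\ z_{exp}(x, m),\ \sigma^2), \qquad z_{exp} = 5\,\mathrm{m},\ \sigma = 0.001\,\mathrm{m}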
The first sensor we often work with is the laser range scanner. It is a very popular sensor for applications where you want to avoid bumping into something, because the laser scanner provides you the distance to the closest obstacle in different directions. If I again take the example of my laser pointer, I can measure the distance to the wall, say seven meters in this example. What laser scanners typically do is rotate such a laser around, very often, say 10, 20, or 50 times per second, and for every orientation, at a one-degree resolution, or half a degree, a quarter degree, a tenth of a degree, depending on the overall setup and how fast the device rotates, I measure an angle and the distance to the closest obstacle in that direction. If the thing rotates, say, ten times per second while I move around in the environment, you can easily imagine that I get the distances to the closest obstacles all around me at a very high frequency, which is great for any form of collision avoidance, for basic mapping, for building floorplan-like maps.

The question, however, is how to interpret this information. For a laser scanner we typically consider one scan, which is one rotation of the laser, or rather of the mirror which directs the laser beam into the world. Say, for simplicity, we have a 360-degree field of view at one-degree resolution; then we have 360 (or 361) proximity values that we measure, and we treat them as individual measurements. Each of those individual measurements is typically called a beam, and what most models say is: all the beams are independent of each other, each one its own measurement taken in a different direction. So we take our scan and break it up into several beams. This scan is again taken to be at time t, which is also an approximation: in theory every beam is taken at a slightly different point in time, but most of the time these systems output one scan and we consider it one point in time, one rotation of the mirror. This is not very precise; for really precise work or fast motion you need to timestamp all the beams separately. Thinking of the scan as a probability distribution, I can break it down by treating every beam individually: the probability of the whole scan is the probability of beam one having a certain distance, and beam two having a certain distance, and beam three, and so on. If we assume the beams to be independent, this turns into a simple product over the individual beams. So if we have 360 different measurements, we treat them as 360 individual measurements, compute the probability for every individual beam, and just multiply them. This results from the assumption that the beams are independent of each other. You may argue whether this is right or wrong, but it is the assumption we make, and the factorization is written out next.
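With K beams z_t^1, \dots, z_t^K in one scan and m the map, this factorization reads:

    p(z_t \mid x_t, m) = \prod_{i=1}^{K} p(z_t^i \mid x_t, m)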
Still, some correlation can remain through the measurement properties of your sensor, but the assumption is not about man-made objects having certain sizes so that moving your laser beam a little to the right or left gives you a similar measurement; that is not what is meant here by independence. It is independence of the noise, of the parameters we have in the model, because we assume we know what the world looks like in this setup, and assuming we know what the world looks like, we can often make the assumption that the individual beams are independent of each other.

So in the end we have beam-based sensor models, and these do not necessarily have to come from a laser scanner; it can also be a sonar or some cheaper range sensor. In general, you get measurements in different directions. This is the example of what you get from a laser scanner: a platform with a laser scanner looking to the front and one looking to the back, taking range readings. You get beams; the aliasing of the projector does not make it look very nice, but these are individual beams shot out at very high angular resolution in individual directions, and you get a pretty precise match of what the environment looks like. So this is the simulated environment, this is the position, and this is the measurement you are going to get for a precise sensor like a laser range finder; these are all the individual beams. If you use a much cheaper sensor like a sonar, the result is more likely to look like this: you still have beams, but the number of beams is typically much lower; some of them match the environment really well, but others are completely screwed up, like this one over here. This is typically caused by a specular reflection redirecting the beam somewhere else, so you get measurements which simply do not match reality well. And maybe here an object was standing which is not in the map, which can also cause such effects. But these models typically assume that the world model is correct: the map is correct, and this is what I am going to see if I am at that position and the world looks as I expect it to look. The key thing is that we treat every beam separately, compute the probability for every individual beam, and multiply those probabilities; that was the result of the independence assumption.

The next thing is to go down to the beam level: now that we know how to turn beam probabilities into a scan probability, let's investigate how a single beam is modeled. What we typically do is something often called a ray casting model, which basically says: I assume to know what I should measure, where the next obstacle is. It is called ray casting because, assuming I know that I am here and the world looks like this, I can do a raycast operation: I trace through the grid until I hit an occupied cell, and if I hit an occupied cell, okay, there is the closest obstacle according to this ray casting operation. So you basically follow the ray in the map and say: this is, say, four meters, so I should measure four meters, and then you assume some Gaussian noise around that. That is exactly the example we had before.
Assuming I am standing here and measuring, I know the wall is five meters away; that is my ray casting operation, I can count the distance or take a ruler and say okay, it is five meters. Then I take my sensor, see what it measures, put a Gaussian distribution around those five meters, and check whether what I measure is close to the mean or not. So I basically assume Gaussian noise, and this is the simplest form of a ray cast model. Say the star is where I expect my obstacle to be according to my map; then the probability distribution over the measurement, taking into account the measurement noise of my laser scanner, may look like this Gaussian: values close to zero, increasing, reaching the maximum around the expected distance, decreasing again, going toward zero and staying there. That means, when I am standing here and the wall is 5 meters away, it is extremely unlikely that I measure 10 meters, but it is also extremely unlikely that I measure 50 centimeters. Of course, if I put my hand in front of the sensor, I can measure a distance of 50 centimeters; it is possible, but it means the world looks different from what I expect, a violation of my assumption that I know the model.

There is a question of how strict you want to be, of how much you actually trust your model. If you fully trust your model, if you say my model is absolutely right and I cannot imagine a situation where it is wrong, then you may go with one of those simple models. But reality is different: people walk through the environment, maybe the map is not perfectly accurate, I walk here and bump a bit into the table and the table moves by two centimeters, and your model is incorrect. So in reality you will not make such extremely strong assumptions; taking into account only Gaussian measurement noise and nothing else will really screw up your estimates, because you are not accounting for the fact that your model is imperfect.

A more realistic, more advanced ray casting model often looks like this. When you see it for the first time, it looks really weird: there is some value above zero close to the platform, then it kind of decreases, then it increases, has a bump around the expected distance, goes down close to zero, and then has a high peak at the very end. That is completely weird; at least that was my impression when I saw it for the first time years back. But it turns out this is actually a pretty good model, because it tries to take into account the key effects that can actually happen in an environment, models those effects separately, and then combines the models. One thing that can happen is sensor noise: there is some measurement noise around the expected distance, and that is why I have high values around the expected obstacle and lower values elsewhere. This is similar to the Gaussian from the previous slide; do not consider the absolute scale, it is just a drawing, but this component is responsible for the measurement noise.
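As a concrete illustration, here is a minimal sketch of this simplest ray-cast model on an occupancy grid. The stepping implementation and function names are my own simplifications, not the lecture's; real systems use exact grid traversal and precomputation to make this fast:

```python
import numpy as np

def raycast(grid, x, y, theta, max_range, resolution=0.05):
    """Step along the ray in small increments until an occupied cell is
    hit; returns the expected range in meters. `grid` is a 2D array of
    0 (free) and 1 (occupied), indexed as grid[row, col]."""
    step = resolution / 2.0
    r = 0.0
    while r < max_range:
        col = int((x + r * np.cos(theta)) / resolution)
        row = int((y + r * np.sin(theta)) / resolution)
        if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
            return max_range           # left the map: no return expected
        if grid[row, col] == 1:
            return r                   # first obstacle along the ray
        r += step
    return max_range

def gaussian_beam_likelihood(z, z_expected, sigma=0.01):
    """Simplest ray-cast model: Gaussian noise around the expected range."""
    return np.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / \
           (sigma * np.sqrt(2.0 * np.pi))
```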
Then let's look at this peak at the very end, which seems completely weird. This peak corresponds to maximum range readings, the case that your scanner gets no return at all. If I point the sensor into the sky, there is nothing that reflects the beam, and I get no return, a maximum range reading. Or someone placed a mirror somewhere, the laser beam hits this mirror, and the mirror reflects it off into the middle of nowhere; here also we get no return. So there are certain situations where you get those maximum range readings, and that is this effect over here.

Then there are the values in between: the function does not go to zero, it has some value larger than zero for every possible measurement. This is the random effect, something I cannot explain; it is probably small, but let's say one out of 500 measurements something weird happens, I have no idea what it is, so I assume a uniform distribution: everything is possible with a small probability. That is intended to cover everything else I do not know. And then there is this strange decay over here, which has to do with dynamics in the environment, unforeseen obstacles that are not in your model, like people. You cannot model people and know perfectly, at every future point in time, where a human will be; that is impossible. But you can take into account that there is a certain distribution of people in the environment, and this distribution leads to a certain effect in your model; that is what this component is for.

I want to dive a little more into the details and look at the typical measurement errors you get with range sensors. First, beams are reflected by obstacles: in the standard case you measure what you want to measure, the beam is reflected by an obstacle and comes back to your sensor, but with some measurement noise. If you look at the cones here, there is a cone which does not perfectly match the distance to the obstacle but is close; that measurement noise is the first effect, where everything is basically fine but some noise is involved. Then you may have the case that a person is standing somewhere between the sensor and the expected obstacle, leading to a shorter reflection: these readings over here are situations where some object or person in front of the platform produced a shorter range reading than expected. Crosstalk is also possible. It is not that critical for a 2D laser setup, but for sensors like sonars: one sensor sends out a signal and another sensor receives it, so there is a mix-up between who was the sender and who was the receiver. One sensor sends something out, another receives the reflected signal and cannot distinguish that it is not its own signal, leading to weird measurements; this is what is called crosstalk.
You may also have completely random measurements, things you cannot explain, and you may have maximum range readings, like this one, which result for example from a total reflection at a surface over there.

So let's start explaining those individual model components. The first is my measurement noise: I have my expected observation, and a good assumption for the measurement noise is typically a Gaussian noise model. This actually fits reality quite often for the imprecision in measuring range, which depends on how quickly the sensor and receiver react, or on the timing uncertainty of a time-of-flight sensor; those uncertainties can often be expressed quite well with a Gaussian distribution. So we have a Gaussian over what I measure, centered at the expected measurement where I expect the obstacle to be, with some uncertainty expressed by the standard deviation or variance of the Gaussian; this tells you something about the measurement noise. That is all fine; no one is, or should be, surprised by this component of the model.

Then we have the unexpected obstacles, and how to treat them. For me, at first, this was a weird function: it seems to be a decaying function with a step down to zero at the expected distance. My first question to you, the easier one: why is there this step which brings it to zero over there? [A student answers.] You probably mean the right thing; let me reformulate your answer so it becomes a bit clearer. We have an expected obstacle, a place where we know an obstacle is, say this desk: say I am standing here, the scanner is mounted low, and I am scanning the desk. If there is an unexpected obstacle in the environment that sits behind the expected obstacle, I do not care about it, because the sensor will measure the expected obstacle first; I only care about things between the sensor and the obstacle I know about. Everything behind the expected obstacle I will never measure anyway, so I only care about the area from zero to the expected distance.

The second question is: why this particular shape? I can tell you it is an exponentially decaying function, and that was the weird thing for me: why an exponential decay? Does anyone have an idea? I guess no one does, and that is totally fine, so let's try to derive, at least intuitively, why this makes sense. We reduce this to a 1D problem: our platform sits here, this is our grid map, and this is our expected obstacle. We only care about people walking around in the environment, who can stand in these different grid cells, so we simplify to a grid model where a person is standing in a grid cell, yes or no. And I do not want to make any assumption that people have preferred locations or anything like this; I just assume people are randomly spread through the environment, taking random positions.
So we assume people can be anywhere: I can generate random samples of people in the environment and they show up at completely random locations, a uniform distribution over worlds that puts a person into any of those cells. The next thing to take into account is: what am I measuring if I randomly put persons into this 1D world? If I randomly make cells occupied, what do I measure with my laser? The distance to the closest person. That is the key insight: if I randomly place persons, say one person here and one person there, I will only measure the distance to the first person, never to the second. Only the first person matters, no matter how the rest are distributed in the world; only the one closest to the platform.

Now let's simplify the modeling further: we can place an arbitrary number of persons, from none up to filling every grid cell, and every configuration has the same probability; there is no preference for which cells they occupy. In this case we can enumerate all possibilities, coding them as zeros and ones: zero means free, one means a person is standing there. For example, the empty world is the sequence of all zeros; if I make this cell and this cell occupied, my string looks like this. So the possible worlds are all bit strings whose length equals the number of cells, say 6 cells: 1, 2, 3, 4, 5, 6. How many such bit strings are there? 2 to the power of 6, two to the number of cells.

However, I am only interested in certain bit strings: as soon as I know where the first 1 sits, I do not care about the rest. So how many bit strings have a 1 in the first position, with the rest arbitrary? 50 percent of all bit strings, because every other position can freely be 0 or 1; constraining only the first bit to be 1 leaves half of all strings. How many bit strings have a 0 in the first position and a 1 in the second, so that I measure that distance? 25 percent: a half for the 0 times a half for the 1. So the probability of measuring distance 1 is 1/2, of distance 2 is 1/4, and of distance 3, bit strings starting 0 0 1, is 1/4 times 1/2, so 1/8, and so on and so forth. You can see that for every longer distance measurement we constrain one bit more, which reduces the probability by a factor of 2, and therefore we obtain an exponentially decaying function.
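Written out compactly, the counting argument looks like this (my own formalization of the toy model above; the continuous form is the standard exponential component for short readings, with parameters that have to be calibrated):

$$
P(d = k) = \left(\tfrac{1}{2}\right)^{k-1} \cdot \tfrac{1}{2} = 2^{-k}
$$

so each additional cell of measured distance halves the probability of that reading. In the continuous limit this becomes an exponential density, truncated at the expected range:

$$
p_{\text{short}}(z \mid x, m) = \eta \, \lambda \, e^{-\lambda z} \quad \text{for } 0 \le z \le z_{\text{expected}}, \qquad 0 \text{ otherwise},
$$

where $\lambda$ encodes the density of unexpected obstacles and $\eta$ renormalizes over the truncated interval; both are assumptions of this sketch, to be fitted to data as described below.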
What this exponentially decaying function encodes is that persons are uniformly distributed in the environment, and that anything from zero persons to a completely filled environment happens with the same probability. That is the assumption underlying the exponential decay; hopefully it makes more sense now than ten minutes ago. So we model the area from zero to the expected distance with an exponentially decaying function; there are some parameters I need to calibrate, namely how strong the decay is and how much influence it has. And it is zero afterwards, because of the effect we discussed: if something is behind what I expect, I do not care. If I measure the wall, I do not care about the person next door in the other room; I will never measure them.

Then we have the random effects, things we have not taken into account: I do not know what they are, I just assume they can happen anywhere, so I model them with a uniform distribution, a small probability that something random happens. And there is the maximum range reading: a certain probability mass for getting no return at all, sitting at the maximum range. If I take those four functions and put them together as a weighted sum, I obtain exactly the distribution shown before. The only thing I do not know yet is what the parameter vector looks like: how much weight do I give to the measurement noise, how much to random people walking around in the environment? And this can be dramatically different depending on the setting: for a robot operating in the Mensa around noon, the weight for unexpected obstacles is probably pretty high; in a nuclear power plant, the probability of people walking around is probably very small. So it depends on the environment how you need to calibrate these models; you need to adapt them to the environment you are in.

What you can do is take data and fit this function to the data, estimating the parameters. What you see here are two examples, measured distance over time, for sonar and for laser, and the results are actually not too different. This is done for an expected distance of three meters, so this is where the obstacle was. You can see that you get more measurements close to the obstacle: it is darkest around the obstacle and much lighter elsewhere, and the maximum range readings show up at the top. Laser and sonar seem to have different properties, especially for the maximum range readings, which seem much more likely for sonar than for laser; that is indeed the case, due to reflections of the sonar signal. You can record such data series in typical environments and calibrate your parameters with different algorithms, for example hill climbing or gradient descent approaches that maximize the likelihood of the data given the expected measurements. In other words, you want to estimate the model parameters so that, given the known expected distances to the obstacles, the probability of the observation data you actually recorded is maximized.
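Putting the four components together as a weighted sum, a sketch might look as follows. The weights and noise parameters here are placeholders for what the calibration just described would produce, and I omit the normalizers that a careful implementation would add for the truncated Gaussian and exponential:

```python
import numpy as np

def beam_model(z, z_exp, z_max,
               w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
               sigma_hit=0.05, lambda_short=0.5):
    """Four-component beam model: Gaussian measurement noise around the
    expected range, exponential decay for unexpected obstacles in front
    of it, a spike at the maximum range for missed returns, and a
    uniform floor for everything unexplained. Weights should sum to 1
    and would normally be fitted to recorded data."""
    # 1) measurement noise: Gaussian around the expected range
    p_hit = np.exp(-0.5 * ((z - z_exp) / sigma_hit) ** 2) / \
            (sigma_hit * np.sqrt(2.0 * np.pi))
    # 2) unexpected obstacles: exponential decay, zero beyond z_exp
    p_short = lambda_short * np.exp(-lambda_short * z) if z <= z_exp else 0.0
    # 3) max-range readings: all mass at z_max (a narrow box in practice)
    p_maxr = 1.0 if z >= z_max else 0.0
    # 4) random measurements: uniform over [0, z_max]
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    return (w_hit * p_hit + w_short * p_short +
            w_max * p_maxr + w_rand * p_rand)
```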
If you do this, you can compute estimates: the histograms here are those estimates computed from data, and the curved line is the fitted parametric function. You can see that for laser and sonar the models are not too different; the sonar is typically a bit more uncertain and provides a little less information, while the laser does better. But again, this was a rather imprecise laser; these plots are more than twenty or twenty-five years old, and nowadays lasers are much more precise than the typical sonar sensors we use.

What I can then do is compute observation likelihoods. Assume I have a grid map, this is the position of the platform, and this is the scan it actually gets, a real measurement. I can now go to every individual location and ask how likely it is to see this scan from there: I do my ray casting, compute and evaluate the model for every beam, and put a value into the map; the darker, the higher the probability. What I get is this plot: the dark areas are areas where this scan could occur with high probability, and you see repetitive patterns, because there are structures in the environment which reoccur. The corridor has doors, so you have these openings, and there are certain areas which are more likely: either you are here looking in this direction, or here looking in the other direction, which is why you get these two track-like bands in the corridors. This is a standard model that is still used in localization systems today, although it is quite old: it was developed back in the 90s in the old computer science building here in Bonn, far before my time, a building which by now looks close to being demolished, and that is one reason I keep showing this plot.

This model works pretty well and provides high-quality localization estimates. The disadvantage, however, is that it is not the easiest to compute: you need to perform ray casting operations in your map to estimate where the beam is going to be reflected, which takes computational effort, and you typically need precomputations on the map to do it fast. Therefore, alternative models popped up which are approximations of this model. One very popular example is the so-called beam endpoint model. The thing is, this model is physically not plausible: from a physics point of view it does not make sense, but in practice it turns out to give pretty good results, and it is extremely fast to compute. What it does: if this is my map, say everything here is free space and these are obstacles, it discretizes the world and computes a Euclidean distance transform on this map. Those of you who attended Photogrammetry I may remember: a Euclidean distance transform is a grid where every cell stores the distance to the closest obstacle.
What you see here: the brighter the color, the closer the cell is to an obstacle. These are the obstacles, so this point is close to this obstacle, this one to that obstacle, and this point is very far away; the further away, the darker the value. You may then smooth this distance map with a Gaussian. What the system then does is simply take the endpoint of the beam, which is why it is called the beam endpoint model, and query how far that endpoint is from a real obstacle. It is like measuring 10 meters and asking: assuming my current position, how far is the point 10 meters out along the beam from a real obstacle? If the endpoint lies close to an obstacle, the fit is good.

What is the advantage of this model? It takes into account how far the assumed endpoint is from an obstacle, but it does not consider what happens along the beam. For example, if I am here and measure 8 meters, and there is a wall at 8 meters, everything is perfect, I am very happy, high score. If I measure from here, it is exactly the same thing. But if I measure from here and there is an obstacle in between, the model would still say: 8 meters, fits the wall over there pretty well, this position explains the measurement; ignoring that there is a table in between and the ray cannot physically pass through this table. That is what the system ignores, and that is the problem of the beam endpoint model, why it is physically not very plausible. If you move along this ray over here, you get a likelihood function which looks like this, because you are getting closer to an obstacle that the beam could never actually reach, since it is off to the side; the system simply does not care about obstacles to the side of or in front of the beam. With the ray casting model you would get a very flat function which only peaks at the end, so the two can be very different, but it turns out the endpoint model also works really well.

The big advantage of these models is that they are super fast to compute: the only thing you need to do is turn your map, once, into a likelihood field, a Euclidean distance transform smoothed with a Gaussian kernel. Evaluating a beam then means checking a single cell: a sine and cosine operation to find the cell where the endpoint lands, and one lookup. As a result, especially for estimation algorithms like the particle filter (not the Kalman filter), where the quality scales with the number of operations you can afford, this matters a lot: if you have a certain budget of computational resources, it is often better to use a very simple model and do ten or a hundred times more computations; you get higher robustness or higher precision than with a very precise model that you can only evaluate a few times. That is one reason why beam endpoint models became so popular: they are super easy and fast to compute, and even though they do not have the quality of the more advanced models, they work surprisingly well.
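A minimal sketch of such a likelihood field, using SciPy's distance transform (the function names and parameter values here are my own choices, not the lecture's):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_likelihood_field(grid, resolution, sigma=0.1):
    """Precompute, for every cell, the distance to the closest occupied
    cell (Euclidean distance transform), then turn that distance into a
    likelihood via a Gaussian. `grid` holds 1 for occupied cells."""
    dist = distance_transform_edt(grid == 0) * resolution  # meters
    return np.exp(-0.5 * (dist / sigma) ** 2)

def endpoint_likelihood(field, x, y, theta, z, resolution):
    """Beam endpoint model: project the measured range to its endpoint
    and look up the precomputed likelihood there; what happens along
    the beam is deliberately ignored."""
    col = int((x + z * np.cos(theta)) / resolution)
    row = int((y + z * np.sin(theta)) / resolution)
    if 0 <= row < field.shape[0] and 0 <= col < field.shape[1]:
        return field[row, col]
    return 1e-9  # endpoint outside the map: small floor value
```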
So in most ad hoc systems, or systems optimized for speed, you will find the beam endpoint model today.

So far these were models evaluated on grid maps, maps with a grid structure where each cell is either occupied or free. You may end up with other types of maps, like landmark maps, where you store certain objects in the environment in parametric form, poles for example, or corners, and use those to localize. Again we typically have a range-bearing sensor, which gives me information about where the object is relative to me and thus what I am expected to measure, and for these models you typically add Gaussian noise to the expected measurement of the landmark location. Say I have a detector optimized for corners: I take my laser scan, extract all the corners, and want to match them to corner locations in the map; I know one corner is there, one there, one there, and maybe here there are more corners because the windows generate extra corner signals. What I compute is how likely it is, standing here in this orientation, to measure that corner over there. I compute the distance between me and the corner: take the x position of the corner and my own x position, subtract, square, do the same for the y coordinate, sum and take the square root; this Euclidean distance is the range I expect to measure. I can do the same for the bearing: given my x and y coordinates and where I am looking, the angle between me and the corner in the global reference frame tells me at which orientation I should observe it. Then I add Gaussian noise, expressed by a noise parameter, so there is some uncertainty. This is a standard landmark model, and it typically does not take the dynamic aspects into account, because it assumes there is a feature extractor: if I extract a corner from a scan, I assume it is actually a corner in reality, not a person walking around, at least in the basic models.

Given this, you can evaluate landmark observations and localize with them. In this example there are three landmark locations, landmarks one, two, and three, and this is the true position of the robot. You only get range-bearing information, no other idea where you are. The question is how you can tell where you are given these range-bearing observations. Landmark number one generates this circle around it: it gives you a range and a bearing, which means you sit somewhere on that circle, looking towards the landmark. For example, if I observe an obstacle one meter directly in front of me, I can be here measuring it this way, or anywhere on a circle around it, always pointing at it. So every one of these observations generates a circular distribution around the landmark location.
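A minimal sketch of this range-bearing landmark model, including the combination of several landmark observations by multiplication described next (the noise parameters and data layout are placeholders of mine):

```python
import numpy as np

def range_bearing_likelihood(pose, landmark, z_range, z_bearing,
                             sigma_r=0.1, sigma_b=0.05):
    """Expected range and bearing to a known landmark from pose
    (x, y, theta), each compared to the measurement under independent
    Gaussian noise."""
    x, y, theta = pose
    lx, ly = landmark
    r_exp = np.sqrt((lx - x) ** 2 + (ly - y) ** 2)
    b_exp = np.arctan2(ly - y, lx - x) - theta
    db = (z_bearing - b_exp + np.pi) % (2 * np.pi) - np.pi  # wrap angle
    return np.exp(-0.5 * ((z_range - r_exp) / sigma_r) ** 2) * \
           np.exp(-0.5 * (db / sigma_b) ** 2)

def pose_likelihood(pose, observations, landmarks):
    """Independent landmark observations multiply, so only the pose
    region consistent with all of them keeps a high value.
    `observations` maps landmark id -> (range, bearing);
    `landmarks` maps landmark id -> (x, y)."""
    p = 1.0
    for i, (z_r, z_b) in observations.items():
        p *= range_bearing_likelihood(pose, landmarks[i], z_r, z_b)
    return p
```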
At every position on such a circle I have a different orientation. So I take landmark number one, which says I am somewhere on this circle; I take landmark number two, which says I am somewhere on that circle, projected into the 2D world with different orientations along it; and I can do the same for the third one. Then I combine them, multiplying those distributions with each other, and only the area of high probability that is consistent with all of them survives, which gives me this position. The different sensing information gives me different constraints on where I am, and this is the last step that leads towards the localization problem. That is what is going to happen next week, when you will be introduced to the standard form of localization using a Kalman filter: we will use a Kalman filter to estimate where you are in the environment given your sensor observations.

With this I am coming to the end for today. The Bayes filter framework is something we briefly revisited, showing how it works and motivating the different models we need in order to fill the filter; the different ways of implementing it, the extended Kalman filter and the particle filter, are things we are going to discuss in the coming weeks. Today we discussed the motion and observation models, which are the central elements of the Bayes filter. With this, I thank you very much for your attention; you should now have the ingredients at hand to start implementing a localization system, starting next week.
Info
Channel: Cyrill Stachniss
Views: 2,732
Rating: 5 out of 5
Keywords: robotics, photogrammetry
Id: rjpbE-X23wc
Length: 95min 6sec (5706 seconds)
Published: Fri Apr 03 2020