Understanding Sensor Fusion and Tracking, Part 3: Fusing a GPS and IMU to Estimate Pose

Video Statistics and Information

Captions
Let's continue our discussion on using sensor fusion for positioning and localization. In the last video, we combined the sensors in an IMU to estimate an object's orientation, and showed how the absolute measurements of the accelerometer and magnetometer were used to correct the drift from the gyro. In this video we're going to do a similar thing, but we're going to add a GPS sensor. GPS can measure position and velocity, and so in this way we can extend the fusion algorithm to estimate them as well. Just like the last video, the goal is not to fully describe the fusion algorithm; it's again too much for one video. Instead, I mostly want to go over the structure of the algorithm and show you visually how each sensor contributes to the final solution, so you have a more intuitive understanding of the problem. So I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.

Now, it might seem obvious to use a GPS if you want to know the position of something relative to the surface of the earth: just strap a GPS sensor onto your system and you've got latitude, longitude, and altitude. Simple enough. And this is perfectly fine in some situations, like when the system is accelerating and changing direction relatively slowly and you only need position accuracy to a few meters. This might be the case for a system that's determining directions in your car; as long as the GPS locates you to within a few meters of your actual spot, the map application can figure out which road you're on and therefore where to go next. On the other hand, imagine if the system requires position information to a few feet or less, and it needs position updates hundreds of times per second to keep up with the fast motion of your system, for example trying to follow a fast trajectory through obstacles with a drone. In this case, GPS might have to be paired with additional sensors, like the sensors in an IMU, to get the accuracy that you need.

To give you a more visual sense of what I'm talking about, let's run an example from the MATLAB Sensor Fusion and Tracking Toolbox called Pose Estimation From Asynchronous Sensors. This example uses a GPS, accelerometer, gyro, and magnetometer to estimate pose, which is both orientation and position, as well as a few other states. The script generates a true path and orientation profile that the system follows: the true orientation is the red cube and the true position is the red diamond. The pose algorithm is using the available sensors to estimate orientation and position, and it shows the results as the blue cube and the blue diamond, respectively. So that's what we want to watch: how closely do the blue objects follow the red objects? The graph on the right plots the error, if you just want a more quantitative result. And the cool thing about this is that while the script is running, the interface allows us to change the sample rates of each of the sensors, or remove them from the solution altogether, so that we can see how it impacts the estimation.

So let's start by removing all of the sensors except for the GPS, and we'll read the GPS five times a second. The default trajectory in the script is to follow a circle with a radius of about 15 meters, and you can see that it's moving around this circle pretty slowly. Now, the orientation estimate is way off, as you'd expect, since we don't have any orientation sensors active, but the position estimate isn't too bad: after the algorithm settles and removes that initial bias, we see position errors of around plus or minus 2 meters in each axis.
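To put a rough number on why this slow circle is so forgiving, here's a small back-of-the-envelope MATLAB sketch. It's my own illustration, not part of the shipped example: between fixes, the best a GPS-only filter can do is coast on the last measured velocity, so the error it accumulates depends on how much the velocity actually changes within each gap. The second case previews the faster run that comes up in a moment.

    % Hypothetical GPS-only check: coast on the last measured velocity
    % between fixes and compare against the true circular motion.
    r = 15;                                % circle radius in meters, as in the example
    cases = [ 2.5  0.2;                    % slow run: 2.5 m/s, 5 Hz GPS
             12.5  1.0];                   % fast run: 12.5 m/s, 1 Hz GPS
    for c = 1:2
        speed = cases(c,1);  dtGPS = cases(c,2);
        w = speed / r;                     % angular rate around the circle, rad/s
        t = 0:dtGPS:20;                    % a stretch of GPS fix times
        p = r * [cos(w*t); sin(w*t)];      % true positions at each fix
        v = speed * [-sin(w*t); cos(w*t)]; % true velocities at each fix
        pPred = p(:,1:end-1) + dtGPS * v(:,1:end-1);  % constant-velocity coast
        err = vecnorm(pPred - p(:,2:end)); % miss distance at the next fix
        fprintf('%4.1f m/s, %.1f s gaps -> worst coast error %.3f m\n', ...
                speed, dtGPS, max(err));
    end

On the slow circle the coast error works out to well under a centimeter, buried in the GPS noise; at 12.5 meters per second with one-second gaps it grows to roughly 5 meters, which is exactly the breakdown we're about to see.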
So now let me add back in the IMU sensors and we can see if our result improves. Well, it's taking several seconds for the orientation to converge, but you can see that it's slowly correcting itself back to the true orientation. Also, the position estimate is, well, about the same: plus or minus 2 meters, maybe a little less than that. This is a relatively slow movement, and it's such a large trajectory, that the IMU sensors modeled here are only contributing a minor improvement over the GPS alone. The GPS velocity measurement is enough to predict how the object moves over the 0.2 seconds between measurements, since the object isn't accelerating too quickly. This setup is analogous to using GPS to get directions from a map app on your phone while you're driving; adding those additional sensors from the IMU isn't really going to help too much.

So now let's go in the opposite direction and create a trajectory that is much faster. In the trajectory-generation script, I'll just speed up the velocity of the object going around the circle from 2.5 to 12.5 meters per second. This is going to create more acceleration in a shorter amount of time, and to really emphasize the point I'm trying to make here, I'm going to slow the GPS sample rate down to once per second. So let's give this a shot.

Okay, so what's happening here is that when we get a GPS measurement, we get both position and velocity. Once a second we get a new position update that puts the estimate within a few meters of the truth, but we also get the current velocity, and so for one second the algorithm propagates that velocity forward to predict what the object is doing between measurements. This works really well if the velocity is near constant for that one second, but poorly, as you can see, when the velocity is rapidly changing. This is the type of situation a drone faces when it has to make rapid turns and avoid obstacles, and it's here where the addition of the IMU will help, because we won't have to rely on propagating a static velocity for one second; we can estimate velocity and rotation using the IMU sensors.

Now, to see the improvement, I've placed two different runs next to each other. The left is the GPS-only run that we just saw, and the right is with the addition of the IMU. You can see, at least visually, how the GPS with the IMU is different from the GPS alone: it's able to follow the position of the object more closely and creates a circular result rather than a saw blade.

So adding an IMU seems to help estimate position, and the question at this point might be, why is this the case? How does the algorithm combine these sensors to get this result in the first place? Well, intuitively we can imagine that the IMU is allowing us to dead reckon the state of the system between GPS updates, similar to how we used the gyro to dead reckon between the mag and accel updates in the last video. And this is true, except it's not as cut and dried as that; it's a lot more intertwined than you might think. To understand why, we need to explore the code a little bit.

The fusion algorithm is a continuous-discrete extended Kalman filter, and this particular one is set up to accept the sensor measurements asynchronously, which means that each of the sensors can be read at its own rate. This is beneficial if you want to run, say, your gyro at 100 Hz, your mag and accelerometer at 50 Hz, and your GPS at 1 Hz. You're going to see how this is handled.
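To make that asynchronous structure concrete, here's a minimal sketch of such a loop built on the insfilterAsync object that this toolbox example uses. The constant sensor readings and noise values are placeholders of mine, just so the loop runs; a real system would pull fresh samples from hardware or from sensor models at each step.

    % Minimal sketch of an asynchronous predict/correct loop.
    filt = insfilterAsync('ReferenceFrame', 'NED');

    fs = 100;  dt = 1/fs;                % filter prediction rate: 100 Hz
    gyro  = [0 0 0.01];   Rgyro = 1e-4;  % rad/s, assumed noise variance
    accel = [0 0 -9.81];  Racc  = 1e-2;  % m/s^2
    mag   = [19 0 48];    Rmag  = 0.5;   % microtesla
    lla   = [42.3 -71.4 50];             % latitude, longitude, altitude
    vel   = [1 0 0];  Rpos = 4;  Rvel = 0.25;

    for k = 1:5*fs                       % five simulated seconds
        predict(filt, dt);               % propagate state and covariance by dt
        fusegyro(filt, gyro, Rgyro);     % gyro fused every step: 100 Hz
        if mod(k, 2) == 0                % accel and mag every other step: 50 Hz
            fuseaccel(filt, accel, Racc);
            fusemag(filt, mag, Rmag);
        end
        if mod(k, fs) == 0               % GPS once per second: 1 Hz
            fusegps(filt, lla, Rpos, vel, Rvel);
        end
    end
    [pos, q, v] = pose(filt);            % current position, orientation, velocity

Each fuse call is a Kalman correction applied against whatever the prediction step has propagated so far, which is why no sensor has to wait on any other.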
But the thing I want to point out here is that this is a massive Kalman filter: the state vector has 28 elements in it that are being estimated simultaneously. There are the obvious states like orientation, angular velocity, linear position, velocity, and acceleration, but the filter is also estimating the sensor biases and the magnetic field vector. Estimating sensor bias is extremely important because bias drifts over time. This means that even if you calculate sensor bias before you operate your system and hard-code that calibration value into your software, it's not going to be accurate for long, and any bias that we don't remove will be integrated and cause the estimate to walk away from the truth when we rely on that sensor.

Now, if you don't have a good initial estimate of sensor bias when you start your system, then you can't just turn on your filter and trust it right away. You have to give it some time to estimate not just the main states that you care about, like position and velocity, but also some of the secondary states, like bias. Usually you let the Kalman filter converge on the correct solution while the system is stationary and not controlled, or maybe while you're controlling it using a different estimation algorithm, or maybe you just let it run and accept that the system performs poorly while the filter converges. But this is one of the things you need to consider during the initialization of your system.

Another thing we need to talk about is how to initialize the filter. This is an EKF, and it can estimate state for nonlinear systems. It does this by linearizing the models around its current estimate and then using that linear model to predict the state into the future. So if the filter is not initialized close enough to the true state, the linearization can be so far off that the filter never actually converges. This isn't really a problem for this example, because the ground truth is known in the simulation, so the filter is simply initialized to a state close to truth. But in a real system, you need to think about how to initialize the filter when you don't know that truth. Often this can be done by just using the measurements from the sensors directly: using the last GPS reading to initialize position and velocity, using the gyro to initialize your angular rate, and so on.

All right, with the filter initialized, we can start running it, and every Kalman filter consists of the same two-step process: predict and correct. To understand why, we can think about it like this. If we want to estimate the state of something, you know, where it is or how fast it's going, there are two general ways to do it. We could just measure it directly, or we could use our knowledge of dynamics and kinematics to predict where it is. For example, imagine a car driving down the road and we want to know its location. We could use GPS to measure its position directly; that's one way. But if we knew where it started and its average speed, we could also predict where it'll be after a certain amount of time, with some accuracy. And using those predictions alongside a measurement can produce a better estimate.

So the question might be, why wouldn't we just trust our measurement completely? It's probably better than our prediction. Well, as sort of an extreme example, what if you checked your watch and it said it was 3:00 p.m., and then you waited a few seconds and checked it again and it said 4:00 p.m.? You wouldn't automatically assume an hour has passed just because your measurement said so. This is because you have a basic understanding of time; that is, you have an internal model that you can use to predict how much time has passed, and that would cause you to be skeptical of your watch if you thought seconds had passed and it said an hour. On the other hand, if you thought about an hour had passed but the watch said 65 minutes, you'd probably be more inclined to believe the watch over your own estimate, since you'd be less confident in your prediction. And this is precisely what a Kalman filter is doing: it's predicting how the states will change over time based on a model that it has, and along with the states it's also keeping track of how trustworthy the prediction is, based on the process noise that you've given it. The longer the filter has to predict the state, the less confidence it has in the result. Then, whenever a new measurement comes in, which has its own measurement noise associated with it, the filter compares the prediction with the measurement and corrects its estimate based on the relative confidence in both.
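That watch analogy maps directly onto a one-dimensional Kalman update. Here's a tiny sketch with made-up variances, my numbers rather than anything from the video, showing that the corrected estimate lands closer to whichever source is trusted more.

    % Scalar Kalman-style correction of the watch analogy.
    xPred = 60;   Ppred = 25;   % predicted elapsed minutes, large variance (shaky)
    z     = 65;   R     = 1;    % watch measurement, small variance (trusted)

    K    = Ppred / (Ppred + R);    % Kalman gain: the relative confidence
    xEst = xPred + K*(z - xPred);  % corrected estimate, pulled toward z
    Pest = (1 - K) * Ppred;        % confidence improves after the update

    fprintf('gain %.2f, estimate %.1f min, variance %.2f\n', K, xEst, Pest);
    % gain 0.96, estimate 64.8 min: the filter sides with the watch.
    % Shrink Ppred (a confident prediction) and the gain drops, so the
    % same 5-minute disagreement barely moves the estimate.

Flip the variances and the same arithmetic reproduces the first scenario: a watch that jumps an hour in a few seconds gets almost no weight.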
And this is what the script is doing as well. The simulation runs at 100 Hz, and at every time step it predicts the estimate of the states forward. Then, if there's a new measurement from any of the sensors, it runs the update portion of the Kalman filter, adjusting the states based on the relative confidence in the prediction and the specific measurement. It's in this way that the filter can run with asynchronous measurements.

Now, with the GPS-only solution that we started with, the prediction step could only assume that the velocity wasn't changing over the one second, and since there were no updates to correct that assumption, the estimate would drastically run away from truth. With the IMU, however, the filter is updating a hundred times a second, looking at the accelerometer and seeing that the velocity is in fact changing. So in this way, the filter can react to a changing state faster with the quick updates of the IMU than it can with the slower updates of the GPS. And once the filter converges and has a good estimate of the sensor biases, that gives us an overall better prediction, and therefore a better overall state estimate. This is the power of sensor fusion.

Now, I know this explanation might not have been perfectly clear, and was probably a bit fast, but I think it's hard to really grasp the topic just by watching a video. So I would encourage you to play around with this example: turn sensors on and off, change the rates, noise characteristics, and trajectory, and see how the estimation is affected yourself. You can even dive further into the code and see how the EKF is implemented. I found it was helpful to place breakpoints and pause the execution of the script so that I could see how the different functions update the state vector.

Okay, this is where I'm going to leave this. In the next video, we'll start to look at estimating the state of other objects when we talk about tracking algorithms. So if you don't want to miss that and future Tech Talk videos, don't forget to subscribe to this channel. And if you want, you can check out my channel, Control System Lectures, where I cover more control theory topics as well. Thanks for watching.
Info
Channel: MATLAB
Views: 61,745
Rating: 4.970962 out of 5
Keywords: MATLAB, Simulink, MathWorks, Kalman filter, sensor fusion, orientation estimation, system state estimation, GPS, IMU, point positioning, autonomous systems, object tracking
Id: hN8dL55rP5I
Length: 14min 0sec (840 seconds)
Published: Wed Oct 23 2019