Control Bootcamp: Introduction to Robust Control

Video Statistics and Information

Captions
Welcome back, everybody. In the last few videos we've been talking about optimal control. We have a state-space system ẋ = Ax + Bu and y = Cx, where x is the state of our system, u is the input that we get to control, and y is some set of measurements. We've essentially shown that if you have access to the full state x of the system, then you can design an LQR (linear quadratic regulator), u = -Kx, and stabilize the system. But if you only have access to limited measurements y rather than the full state, then we need to add an estimator block, something like a Kalman filter. A Kalman filter is a particular optimal estimator: it produces x̂, an estimate of the full state x, taking into account the measurements y and also the control input u.

So what we've shown in the previous few lectures is called optimal control: LQR is an optimal regulator, the Kalman filter is an optimal estimator, and when you put them together the result is called LQG, the linear quadratic Gaussian regulator. "Regulator" just means that we're trying to drive the state x to zero, to the origin. I could instead make this a reference-tracking problem, where I try to drive the state to some desired reference, but if we're stabilizing the system it's a regulator.

What we've shown is that if the system is controllable, meaning (A, B) is controllable, and if (A, C) is observable, then it's possible to design the LQR so that the closed-loop eigenvalues are arbitrarily stable, and it's possible to design the estimator so that the estimate x̂ converges to the true state x as fast as we want. There are also optimal eigenvalue placements based on trade-offs between how fast you drive the state to zero and how much control effort you spend, and between the types of noise and disturbances you see in your system. That, broadly speaking, is the field of optimal control.

Today we're going to start talking about a very important modern perspective in control theory called robust control. Optimal control has been a mainstay and is still possibly the most used state-space formulation of control today: if you have a state-space system ẋ = Ax + Bu, y = Cx, chances are you're going to start by developing an LQR regulator and a Kalman filter.
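To make that recipe concrete, here is a minimal sketch in Python with NumPy and SciPy (the Control Bootcamp videos themselves use MATLAB, so this is only an illustrative translation); the example system, the weights Q and R, and the noise covariances Qn and Rn are made up purely for illustration. It solves the two Riccati equations to get the LQR gain K and the Kalman filter gain L.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-state system x_dot = A x + B u, y = C x
# (these matrices are made up purely for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# LQR: full-state feedback u = -K x, trading state error against control effort
Q = np.eye(2)           # state penalty (design choice)
R = np.array([[1.0]])   # control-effort penalty (design choice)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal feedback gain

# Kalman filter: estimator gain L from the dual Riccati equation in (A.T, C.T)
Qn = np.eye(2)          # assumed process-noise covariance
Rn = np.array([[1.0]])  # assumed measurement-noise covariance
S = solve_continuous_are(A.T, C.T, Qn, Rn)
L = S @ C.T @ np.linalg.inv(Rn)

# LQG: run the LQR gain on the Kalman estimate,
#   u = -K x_hat,  x_hat_dot = A x_hat + B u + L (y - C x_hat)
print("LQR gain K:\n", K)
print("Kalman gain L:\n", L)
```

The estimator gain comes from the same Riccati equation applied to the transposed pair (Aᵀ, Cᵀ), which is the standard duality between the LQR and Kalman filter problems mentioned in the earlier videos.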
But there was a remarkable paper by John Doyle in 1978, one of the most important papers in control theory ever written and one of my favorite papers, called "Guaranteed Margins for LQG Regulators," where margins means stability margins. I want to break this down for you. It was well known at the time that if you were actually measuring the full state, if y = x and I didn't have to use a Kalman filter, then the LQR regulator was pretty robust: if there were some uncertainty in my system, if my system was a little bit different, or if I had some time delays, the LQR regulator would still be able to stabilize the system, as long as y equaled x. What John Doyle investigated in this 1978 paper is: if I don't have measurements of x, and I have to build up an estimate using something like a Kalman filter, is the combined LQG regulator still robust to things like uncertainties in my system or time delays?

One of my favorite things about this paper is the abstract. It reads, "There are none," and I suspect the editor made him change it from "there ain't none." It is one of the simplest and most direct abstracts in the literature. What John Doyle showed in this 1978 paper is that there are no guaranteed stability margins for LQG regulators. Depending on my system and the types of noise and disturbances that I have, my combined LQR and Kalman filter, my LQG regulator, might be arbitrarily sensitive to things like model uncertainty and time delays; it could be arbitrarily non-robust. This is really a beautiful paper, it's less than one printed page, and it changed the field of control forever.

So we have this idea that we can place the closed-loop eigenvalues of the LQR wherever we want, and we can place the eigenvalues of the estimator wherever we want, to be more stable or less stable. What this paper shows is that stability of this system does not guarantee that it's robust; it does not mean the system won't blow up if I have some model uncertainty. So there's this new concept of robustness, and robustness and performance are now dual objectives. The closed-loop eigenvalues of my system and how fast I can estimate the state are measures of performance: how well does my system perform, how fast does it respond. Robustness is more a measure of how sensitive my system is to model uncertainty, time delays, or disturbances. This idea, that LQG regulators could have very high nominal performance but almost zero robustness, really turned the field upside down, and it started the movement toward robust control.

So everything I'm going to tell you about for the next few videos is about robust control: how do we tell whether a system is or is not robust to model uncertainty, how do we even know what that means, how do we design a system to have high performance and high robustness, and what are the trade-offs, when can I have both and when can I not? To jump into this we're going to need a few new concepts, like the transfer function and the frequency domain, because performance and robustness are often easier to understand in the transfer function, or Laplace transform, domain. So that's where things are going: we're going to figure out what robustness means, how we can have robust performance, when we can and when we can't, and how to do things like fix our LQG regulator to make it more robust. That's all coming up, and this is going from optimal control to robust control. Thank you.
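As a small illustration of the kind of margin check Doyle's result is about, here is a minimal sketch assuming the python-control package is available; it reuses the same hypothetical system and LQG design as the sketch above, forms the LQG compensator, breaks the loop at the plant input, and asks for the classical gain and phase margins. Nothing in the LQG design guarantees these margins are large; for an unlucky plant and noise model they can be made arbitrarily small.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
import control  # python-control package (assumed to be installed)

# Same hypothetical system and LQG design as in the earlier sketch
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_are(A, B, np.eye(2), np.array([[1.0]]))
K = B.T @ P                                   # LQR gain (with R = I)
S = solve_continuous_are(A.T, C.T, np.eye(2), np.array([[1.0]]))
L = S @ C.T                                   # Kalman gain (with Rn = I)

# LQG compensator from measurement y to control u (the minus sign is
# carried by the standard negative-feedback convention):
#   x_hat_dot = (A - B K - L C) x_hat + L y,   u = K x_hat
compensator = control.ss(A - B @ K - L @ C, L, K, np.zeros((1, 1)))
plant = control.ss(A, B, C, np.zeros((1, 1)))

# Loop transfer function broken at the plant input, and its classical margins
loop = control.series(compensator, plant)
gm, pm, wcg, wcp = control.margin(loop)
print(f"gain margin = {gm}, phase margin = {pm} deg")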
Info
Channel: Steve Brunton
Views: 46,538
Keywords: Control, Control theory, Linear algebra, Eigenvalues, Closed-loop, Feedback, Controllability, Robust control, Sensitivity, Complementary sensitivity, Loop shaping, Robust design, Optimal control, Matlab, Applied math, LQR, LQG, Simulink, Kalman filter
Id: Y6MRgg_TGy0
Length: 8min 13sec (493 seconds)
Published: Tue Mar 07 2017