Lecture 17 - System Identification and Recursive Least Squares - Advanced Control Systems

Captions
we're going to talk about part two of the reader, which is about system identification and adaptive control. If you think about what we have learned, we have so far been assuming we know the plant when we design controllers. In practice the plant may be unknown, or the plant may even be time-varying, so in these cases, if you want to do model-based control design, you're going to encounter some difficulties. In practice you can obtain these plant models from two approaches. The first is modeling by physics: modeling by Newton's laws, conservation of energy, and so on. For example, if a robot picks up an object, the weight of the object will influence the model of the robot; but if you know exactly the mass of the object it picks up, then you can incorporate that into your model. So that's the first approach, modeling by physics. Usually with modeling by physics you obtain systems of low order — for example Newton's law, F = ma, gives a second-order system — and you start from there, build up small pieces by small pieces, and finally obtain a large model of the plant. The other way: suppose it's like you're giving an interview to the system. You feed the system some input that you carefully prepared, like a series of questions, you see how the system responds to your questions, and based on this input and output data you obtain a model of the system. This is called system identification — data-based system identification. In this case you're not concerned too much about how the physical system looks; you just want to know how the input-output behavior looks. Sometimes these two are combined: for example, first you model by physics, but there are parameters that you
don't know — this is very common — and you can obtain those parameters by doing some input-output analysis. So this is very important and extremely useful in many applications, which we have seen here: unknown plants, time-varying plants. And most often you're going to have this problem: you don't know the disturbance — maybe you know the disturbance structure, but you don't know the disturbance parameters. In these cases you have to use adaptive control. Adaptive control means you let the system adaptively find the knowledge, the information, either of the plant or of the disturbance, and then perform control compensation to handle these cases. That's the basic picture for system identification and adaptive control.

I would like to start the analysis by reviewing something that you have seen but I haven't officially introduced yet: system modeling. For example, suppose we write the plant transfer function as — this is just an example — (0.2 + 0.5 z⁻¹)/(1 + 0.9 z⁻¹). When we say the transfer function of the system looks like this, and we have an input u(k) going into the system and get an output y(k), then essentially, if you write out the difference equation, you're going to see y(k) = [(0.2 + 0.5 z⁻¹)/(1 + 0.9 z⁻¹)] u(k). Again, this z⁻¹ is a delay operator: z⁻¹ u(k) = u(k-1). If you write it out and shift the denominator to the left-hand side, you get y(k) + 0.9 y(k-1) = 0.2 u(k) + 0.5 u(k-1). OK, and this final step is going to reveal what I want to say today: y(k) = -0.9 y(k-1) + 0.2 u(k) + 0.5 u(k-1), which equals [0.9, 0.2, 0.5] · [-y(k-1), u(k), u(k-1)]ᵀ. So look at this final result; this is the main message of this slide.
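To make the example concrete, here is a minimal numerical check of the first-order plant above, (0.2 + 0.5 z⁻¹)/(1 + 0.9 z⁻¹); the random test input is made up purely for illustration.

```python
import numpy as np

# Plant from the example: y(k) = -0.9*y(k-1) + 0.2*u(k) + 0.5*u(k-1)
rng = np.random.default_rng(0)
N = 50
u = rng.standard_normal(N)           # arbitrary test input
y = np.zeros(N)
for k in range(1, N):
    y[k] = -0.9 * y[k - 1] + 0.2 * u[k] + 0.5 * u[k - 1]

# The same output written in regressor form: y(k) = theta^T phi(k-1)
theta = np.array([0.9, 0.2, 0.5])    # plant parameters
for k in range(1, N):
    phi = np.array([-y[k - 1], u[k], u[k - 1]])   # regressor vector
    assert np.isclose(y[k], theta @ phi)
print("regressor form matches the difference equation")
```

The point of the assertion is exactly the message of the slide: every output sample is a linear combination of past data, with the plant parameters as coefficients.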
output at time k is a linear combination of the previous output and the inputs. It's a linear combination, right? These coefficients here — 0.9, 0.2, 0.5 — are the parameters of the plant model. So if we want, we can write it this way: we can define this coefficient vector as θ, and then this data vector as φ(k-1) — you'll see why we use the k-1 notation. So here it's saying the output is a linear combination of the data you measured previously and the input you have at time k. Let me start a new page: this is called the regressor form of the plant model. Essentially this is a model of the plant that gives us an alternative representation of the transfer-function model; we write it this way and call it a regressor model. More formally, for the case of a general system, say the plant model is B(z⁻¹)/A(z⁻¹), where B and A are polynomials in z⁻¹: B(z⁻¹) = b₀ + b₁z⁻¹ + … + b_m z⁻ᵐ and A(z⁻¹) = 1 + a₁z⁻¹ + … + a_n z⁻ⁿ. Before going on, let me ask you one question. When I say this is a general form of the transfer function, I put the first coefficient of A as 1, and I claim this is still a general representation — I didn't write a₀ here; I said this is 1. Why can I do that? Just a simple question. Yes — everybody sees that if I have an a₀ here, I can divide both the numerator and the denominator by a₀, and then I get this normalized form. This is the usual way to write a general transfer-function model, because it avoids confusion: if you write it in a fully general form with A having the first
coefficient being a₀, then it's possible that a₀ = 0, and that will give you inconveniences when you analyze the transfer-function model. You won't even know the order of the system, because if a₀ is 0, and b₀ can be 0 as well, then the order of the system reduces: you can factor out z⁻¹ in both the numerator and the denominator. That gives you all kinds of inconveniences, so we write it like this. [Student question] That's possible, yes — so, following what you said, in the continuous-time case it's inconvenient to write it this way, because it's possible both of those leading coefficients are zero and you get this confusion. You're saying in the continuous-time case? Yeah, that's right; in the discrete-time case it's a little bit different — that's a good observation. Indeed, in the continuous-time case a zero leading coefficient is sometimes necessary for writing out integrators. But in the discrete-time case, let's look at this together in more detail: you don't have that issue. Maybe the closest case is an integrator, but you can write that in discrete time without a problem, because with what we have been writing — say a first-order system — you can take b₀ = 0 and keep b₁, and then the numerator becomes b₁z⁻¹; there's no need for a zero leading coefficient in A. All right, so the discrete-time case is a little bit different, and that expression is a general expression. OK, so now, for this discrete-time case, what we said is exactly correct. Let's work on equation (1): you can write the plant dynamics as A(z⁻¹) y(k+1)
equals B(z⁻¹) u(k). So just write out these polynomials — this time I'm going to go in a little bit faster fashion. When you expand the polynomials and shift everything except y(k+1) to the right-hand side, you get negative signs: y(k+1) = -a₁ y(k) - a₂ y(k-1) - … - a_n y(k+1-n) + b₀ u(k) + b₁ u(k-1) + … + b_m u(k-m). (Let's check the index — yes, the last output term should be y(k+1-n).) So this is exactly the same fashion as in the first-order example — everyone can see this, OK? The essential message is exactly the same: all of this information, y(k), u(k), and so on, is the collected information about the system, either from measurement or from the input. So we can write the vector [-y(k), …, -y(k+1-n), u(k), …, u(k-m)]ᵀ, and the coefficients are a₁ through a_n and then b₀ through b_m. So again we can write it in the regressor form, where — let me now officially introduce this — φ(k) is called the regressor vector, and these plant coefficients, usually denoted θ, are called the plant parameters, or the parameter vector. The essential message is, first of all, that this is a linear system: when you see this you should immediately see that it is linear — the output depends linearly on the previous outputs and the control inputs. So finally we can write it in the simple form y(k+1) = θᵀ φ(k). The transpose is just for convenience: when we write θ we mean the column vector, so both θ
and φ are column vectors, and we can write it in that form. So this is what we have just derived: the simple regressor form of the plant dynamics, where you have this measurement vector — the regressor vector — and the measured output. The goal is: you have this plant model, and you want to estimate θ based on the information in φ(k) and y(k+1). So this is our assumption: we assume we know the plant model structure, but we don't know the detailed values of these plant parameters. The system identification process is actually pretty intuitive, in the sense that it's based on this basic idea: if you don't know θ, try something — try to assign some values to θ and see what that gives us. This is called an estimation: we assign some values to θ, call this θ̂, and then try to estimate y(k); if this ŷ doesn't match y(k), then we make improvements on this plant-parameter estimation. The notations are explained like this: the hat usually means estimate — it's similar to the observer case, where we say x and x̂ and we want x̂ to estimate x — and this index k here means the estimate of θ at time k, as I wrote. So this is the basic idea of estimation and correction in system identification.

The simplest way of doing plant parameter identification is least squares. It's not the only way, but it's the simplest way to start with. It works on this performance index: J(K) = Σᵢ₌₁ᴷ (y(i) − θ̂(K)ᵀ φ(i-1))². Pay attention to the notation here: J(K) means I want to estimate the parameter θ̂(K) based on all the information I can collect from time 1 to time K — that means I have y(1) all the way to y(K), and I have φ(0) all the way to φ(K-1). So I am estimating θ̂(K) based on all this information that I can collect up to time
K. OK, so can someone tell me: if I want to find the best estimate, what is the exact solution for θ̂(K) in this case? You see a quadratic form of something, so start thinking about whether it has an exact mathematical solution. Given this cost function, does it have an exact solution, and what is the best estimate θ̂(K) that solves this problem? Yeah — it's actually in the title: it's least squares. So how would you solve a least-squares problem? θ̂(K) is the minimizer, the argmin of J(K), given y(1) to y(K) and φ(0) to φ(K-1). If you want to solve this unconstrained optimization problem, just take the partial derivative and set it to zero. There are just a few places where you need to pay a little bit of attention when doing this — you may have seen this or you may not have, but let's do it together. You take the partial derivative ∂J(K)/∂θ̂. J(K) is a summation of terms, so you take the partial derivative of the individual terms inside the summation sign. Let me write out the square first: y(i)² + θ̂(K)ᵀ φ(i-1) φ(i-1)ᵀ θ̂(K) − 2 y(i) φ(i-1)ᵀ θ̂(K). There's only one thing I want to mention here — a common trick. When you expand this quadratic, pay a little bit of attention to the fact that this term, (θ̂ᵀ φ(i-1))², is exactly θ̂ᵀ φ(i-1) φ(i-1)ᵀ θ̂. Can everyone see that? I wrote it in a different form, but these two are exactly the same. Why is that? This quantity θ̂ᵀ φ(i-1) times itself — is this a scalar or a vector? It's
a scalar, so it equals exactly its own transpose; these two are exactly the same. So, writing it in this form, you see it's a quadratic form in θ̂. OK, take the partial derivative of the whole thing with respect to θ̂ and set it to zero — let's do this quickly together. The first term doesn't depend on θ̂, so its partial derivative is zero. Then you have the summation: the partial derivative of the second term is 2 φ(i-1) φ(i-1)ᵀ θ̂(K), and the last term gives −2 y(i) φ(i-1). If you want, you can check the dimensions: the partial derivative of the last term, for example, should be a vector — and φ(i-1) is a vector, so there is no transpose there. So, for this simple minimization problem, setting the derivative to zero you get Σᵢ₌₁ᴷ φ(i-1) φ(i-1)ᵀ θ̂(K) = Σᵢ₌₁ᴷ φ(i-1) y(i) — the y(i) is a scalar, so I can shift it around without problem. Finally, noticing that the left-hand side is a matrix times θ̂(K), you can apply the inverse of that matrix on both sides to get the final result: θ̂(K) = (Σᵢ₌₁ᴷ φ(i-1) φ(i-1)ᵀ)⁻¹ Σᵢ₌₁ᴷ φ(i-1) y(i). So let me ask one question about this matrix, whose inverse we'll call F(K): even the matrix inside this summation — is it positive definite or positive semidefinite? For example, if you want to test it, you can set K to 1. And even if your input is not zero, it's still not guaranteed to be positive definite. So when you write the solution you have to be careful. Look at this: if I set K = 1, then inside the
summation is φ(0) φ(0)ᵀ. Is this matrix positive definite or not? It's actually not positive definite. You can try many examples — even the all-ones case is not positive definite — because the rank of this matrix is always 1; it's a rank-one matrix. For example, you want to consider whether xᵀ φ(0) φ(0)ᵀ x can be zero for nonzero x, right? Take φ(0) = [1, 1]ᵀ and set x = [1, −1]ᵀ: then the quadratic form is 0 — nonzero x but zero output. So be careful: any matrix that looks like a aᵀ is not full rank at all; its rank is always one. OK, so be careful — this summation of matrices can fail to be positive definite, and it can fail to be invertible. That's the first observation. The second observation, however, is that when K becomes pretty large, this summation will give you full rank. So that's one thing you can observe from this solution: if you solve it this way, you have to be careful about whether this matrix is invertible or not; it may not be invertible. Let me mention another thing: if this matrix is very large — for example a 10-by-10 matrix — the computation of the inverse here is going to be pretty heavy. So the least-squares solution is very straightforward; you can derive it very easily, but you have to pay attention to whether this matrix is safe to invert or not, and you have to be careful about the computations there.

Now, what I want to talk about next is recursive least squares — I'll explain what recursive means in a short moment, and it will also give a little bit more justification to the solution concept. Suppose we are at time k+1. OK, let's see the difference here: at time k you are solving this
optimization, this least-squares problem. The concept of recursive least squares is: consider time k+1, where you do the same thing — you want to estimate the plant parameters again. What you want to do now is use as much information as possible: you want to use, for example, the output information from 1 to k+1, so you are extending the information you can use in this minimization problem. (I think I have a little typo here — this should be k+1.) So you want to estimate the parameter θ based on all the information up to k+1, and you solve exactly the same least-squares problem; you obtain exactly the same formula, but this time everything indexed by k is shifted to k+1. So this is the solution — the improved solution compared to θ̂(k), because you are using more information in this estimation problem. And it turns out that if you want to do this, you don't need to solve this very complicated inverse problem: the solution θ̂(k+1) can simply be obtained by adding a correction term to θ̂(k), the solution you already obtained at time k. That's the essential idea, and it's why this is called recursive least squares — it's an iterative process where you improve the result step by step. This is the main equation we're going to focus on, and I will explain why it holds. So everyone should get the basic idea: at time k, what information are you using, and at time k+1, what information are you going to be using? Let's spend some time on the details of this recursive formula. For convenience, let me rewrite the solution of the problem at k+1 here: θ̂(k+1) = (Σᵢ₌₁ᵏ⁺¹ φ(i-1) φ(i-1)ᵀ)⁻¹ Σᵢ₌₁ᵏ⁺¹ φ(i-1) y(i).
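Before deriving the recursion, the batch formula itself can be sanity-checked numerically. This is a sketch: the plant, its parameters, and the input below are made up for illustration, and I use `np.linalg.solve` on the normal equations rather than forming the inverse explicitly.

```python
import numpy as np

# Simulate a plant whose parameters the estimator does not know
theta_true = np.array([0.9, 0.2, 0.5])          # [a1, b0, b1] in regressor form
rng = np.random.default_rng(1)
N = 200
u = rng.standard_normal(N)                      # persistently exciting input
y = np.zeros(N)
for k in range(1, N):
    y[k] = theta_true @ np.array([-y[k - 1], u[k], u[k - 1]])

# Batch least squares: theta_hat = (sum phi phi^T)^{-1} sum phi*y
S = np.zeros((3, 3))                            # sum of phi phi^T
b = np.zeros(3)                                 # sum of phi * y
for k in range(1, N):
    phi = np.array([-y[k - 1], u[k], u[k - 1]])
    S += np.outer(phi, phi)
    b += phi * y[k]
theta_hat = np.linalg.solve(S, b)               # solve S theta = b
print(theta_hat)                                # recovers [0.9, 0.2, 0.5]
```

Note that with a rich random input and enough data, S is full rank and safe to invert, exactly as discussed above; with K = 1 it would be a rank-one matrix and `solve` would fail.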
OK, and we are calling this inverse matrix F(k+1). So let's do a little algebra, focusing on the last term first: Σᵢ₌₁ᵏ⁺¹ φ(i-1) y(i) = Σᵢ₌₁ᵏ φ(i-1) y(i) + an additional term φ(k) y(k+1) — be careful with the index, it should be k. Now look at the first piece: it looks very familiar, because when we write θ̂(k) = F(k) Σᵢ₌₁ᵏ φ(i-1) y(i), these two are actually the same. So we can write Σᵢ₌₁ᵏ φ(i-1) y(i) = F(k)⁻¹ θ̂(k). Putting this in, we get θ̂(k+1) = F(k+1) [F(k)⁻¹ θ̂(k) + φ(k) y(k+1)]. OK, so far so good; now do one more step and I think things will be almost clear. What does F(k)⁻¹ equal, if you follow the notation we have been using? F(k)⁻¹ = Σᵢ₌₁ᵏ φ(i-1) φ(i-1)ᵀ, which — look here, watch the index — equals Σᵢ₌₁ᵏ⁺¹ φ(i-1) φ(i-1)ᵀ − φ(k) φ(k)ᵀ, and that full sum equals F(k+1)⁻¹, right? So now we have θ̂(k+1) = F(k+1) [(F(k+1)⁻¹ − φ(k) φ(k)ᵀ) θ̂(k) + φ(k) y(k+1)]. Finally, you just need to factor this out: F(k+1) times its own inverse is the identity, giving θ̂(k), and collecting the remaining terms you get θ̂(k+1) = θ̂(k) + F(k+1) φ(k) [y(k+1) − φ(k)ᵀ θ̂(k)]. Arriving at this final result, let's take a moment to see the intuition behind it. When we introduced the plant models, if you remember, we introduced the regressor form of the plant model, which is y(k+1) = θᵀ φ(k).
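The derivation above claims the recursive update reproduces the batch solution exactly. Here is a small check of that claim; the plant and input are illustrative, and I start the recursion from a batch solution over the first few samples so that F is well defined.

```python
import numpy as np

theta_true = np.array([0.9, 0.2, 0.5])
rng = np.random.default_rng(2)
N = 100
u = rng.standard_normal(N)
y = np.zeros(N)
data = []                                        # pairs (phi(k-1), y(k))
for k in range(1, N):
    phi = np.array([-y[k - 1], u[k], u[k - 1]])
    y[k] = theta_true @ phi
    data.append((phi, y[k]))

# Batch-initialize with enough samples that sum(phi phi^T) is invertible
n0 = 10
S = sum(np.outer(p, p) for p, _ in data[:n0])
b = sum(p * yk for p, yk in data[:n0])
F = np.linalg.inv(S)
theta_hat = F @ b                                # batch solution at time n0

# Recursive update for the remaining data
for phi, yk in data[n0:]:
    F = np.linalg.inv(np.linalg.inv(F) + np.outer(phi, phi))    # F(k+1)
    theta_hat = theta_hat + F @ phi * (yk - phi @ theta_hat)    # correction term

# Same answer as solving the full batch problem at the final time
S_all = sum(np.outer(p, p) for p, _ in data)
b_all = sum(p * yk for p, yk in data)
assert np.allclose(theta_hat, np.linalg.solve(S_all, b_all))
print("recursive and batch solutions agree")
```

This version still inverts a matrix at every step to update F — that is exactly the remaining inefficiency the matrix inversion lemma will remove below.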
then we introduced this sort of estimate of y: θ̂(k)ᵀ φ(k) is exactly an estimate of y(k+1). Based on the error of this estimation, we are making an improvement to the previously obtained result θ̂(k). And the central idea of least squares is that it actually provides us an optimal gain, if you will, for this estimation error — this gain, F(k+1) φ(k), is obtained from the recursive least-squares solution. OK. These intuitions can be written more formally as follows. I have copied down the equation we just derived, equation (10), and more formally we can define these estimates: ŷ⁰(k+1) = θ̂(k)ᵀ φ(k) is the estimate of y, and ε⁰(k+1) = y(k+1) − θ̂(k)ᵀ φ(k) is the estimation error. I'm using the superscript-zero notation here; we will later see this is called the a priori estimate. The name comes from this: when we estimate y(k+1), we are using our plant-parameter estimate at time k — we are using the result from the last step to predict the current step. That's the idea of why it's called an a priori estimation. OK, now, this is the error-based formula: if you do it this way, you see we won't have to solve a large matrix-inversion problem here; instead we are just accumulating, iteratively improving, our result. But if you are careful about the notation, you'll see that F(k) equals an inverse — so when I say this avoids the matrix inversion, you might be confused, because there is still a matrix inversion here. OK, so our next result: I'm going to show you that this F(k+1) matrix also looks like F(k) minus some update term. In the next slide I'm going to show you that indeed we don't
have to do that matrix inversion; we can do this by, again, an iterative formula, as follows — that's the main goal of this slide. OK, so it turns out F(k+1) is going to equal F(k) minus this update part here. Let me explain the intuition. It turns out that when you are considering the inverse of a matrix that consists of a sum of terms, you can decompose this inverse into whatever it was up to time k, minus some other terms. More mathematically, this is what I mean: if you are considering the inverse of some matrix A plus some added term BC, the result is going to be the inverse of A minus some terms related to B and C: (A + BC)⁻¹ = A⁻¹ − A⁻¹B(I + CA⁻¹B)⁻¹CA⁻¹. This is a general version of the matrix inversion lemma. You can verify that this is true by, for example, multiplying (A + BC) by this whole expression here — you can verify that this gives the identity, so this is actually the inverse. But we won't be doing that in class; you can do it as homework or something. I want to show you, however, the intuition behind this. Suppose BC is some positive definite matrix, so the "size" of the matrix, roughly speaking, increases from A to A + BC. Let me do a scalar version of the intuition: if you have 1 + 2·3 and take the inverse, it's going to equal 1/7. So this 1 is roughly the A here, and A⁻¹ would be 1. You see, when you increase the size here from 1 to 7, the inverse decreases from 1 to 1/7, right? So, very similarly, for the
matrix version: if you are increasing from A to A + BC, then the inverse is going to "decrease" — it's going to be A⁻¹ minus something. That's the rough intuition of this matrix inversion lemma. Anyone have questions about this? Take a moment to think about this intuition. Why do we work on this special case? We have explained F(k+1) = (Σᵢ₌₁ᵏ⁺¹ φ(i-1) φ(i-1)ᵀ)⁻¹, and Σᵢ₌₁ᵏ⁺¹ φ(i-1) φ(i-1)ᵀ = Σᵢ₌₁ᵏ φ(i-1) φ(i-1)ᵀ + φ(k) φ(k)ᵀ. You see, this matrix is increasing in size — the notation reminds me, this is F(k+1)⁻¹ — so the inverse of this matrix is, intuitively, sort of decreasing in size: it's going to be something minus a term. And the exact formulation of the update term comes from the matrix inversion lemma: treat Σᵢ₌₁ᵏ φ(i-1) φ(i-1)ᵀ = F(k)⁻¹ as the A here, and treat φ(k) φ(k)ᵀ as the BC in this matrix inversion lemma. OK, so where are you confused? Here, this is just the A; this is just for calculating F(k+1). When you want to run the recursion you need to know F(k+1), and this equation is exactly telling you how to calculate F(k+1) from F(k): F(k+1) = F(k) − F(k) φ(k) φ(k)ᵀ F(k) / (1 + φ(k)ᵀ F(k) φ(k)). You don't need to do the inversion — just use this; the detailed steps of obtaining it are just applying the matrix inversion lemma. So let me do this now. Up till now, if we summarize what we have obtained: θ̂(k+1) = θ̂(k) + F(k+1) φ(k) ε⁰(k+1), with F(k+1) = F(k) minus that update term. Also, if you want to check that this equation is correct, you can check the dimensions first — this will help you understand the structure of the equation. The denominator at the bottom has to be φᵀ F φ, because only this way do you obtain a scalar, which matches the dimension of the 1; and on the top you have
to produce a matrix, so inside it should be φ φᵀ, with F on the left and on the right. So do a dimension check if, for example, in the exam you have to write down this equation yourself. Up till now we have obtained this. There is usually an alternative way of writing these equations — different references prefer different forms. Some people prefer writing it using information containing only F(k): θ̂(k+1) = θ̂(k) + F(k) φ(k) ε⁰(k+1) / (1 + φ(k)ᵀ F(k) φ(k)). OK, so for computation it turns out you can use this equation or that equation — both are fine. And let me now explain that equation (★) and equation (★′) are equivalent. Essentially we just need to show that F(k+1) φ(k) = F(k) φ(k) / (1 + φ(k)ᵀ F(k) φ(k)). So how would you proceed? If you look at these three equations and are asked to prove this, it's actually not complicated. Anyone have any suggestions? Substitute the expression for F(k+1) — let's try: F(k+1) φ(k) = F(k) φ(k) − F(k) φ(k) φ(k)ᵀ F(k) φ(k) / (1 + φ(k)ᵀ F(k) φ(k)). Your proposal is exactly the answer I was looking for: put everything over the common denominator 1 + φ(k)ᵀ F(k) φ(k). On top, the first term becomes F(k) φ(k) + F(k) φ(k) · φ(k)ᵀ F(k) φ(k), and then you subtract the second term, F(k) φ(k) φ(k)ᵀ F(k) φ(k), now with the same denominator. You see these terms cancel out, and the remaining part is only F(k) φ(k). So we have thus shown that the original expression
can be written in this form, as follows. OK, so for notation purposes we're going to refer to this set of equations as the parameter adaptation algorithm (PAA) — how we adapt the parameters. You see this is a recursive formula, so you have to initialize the equations at some point. θ̂(0) is your initial guess of the parameter vector: if you know some values — the mass of the system, the damping coefficient of the system — you can set them here; if you don't know, you can set it to zero and then let the system run. And similarly for the matrix F. You see, from here: if you are making improvements — if you are reducing this error, if you are doing things the right way — then intuitively, what should F be doing? Should it be increasing or decreasing? If you are on the right track, you can put less correction into the equation, so F should be decreasing as you move along and get better results. If you see that, then when you assign the initial value of F, you had better assign it to be large enough, so that as you keep reducing F you still have some values to keep the equation running. One common way is to set F(0) to be the identity matrix scaled by a very large number.

So these are the basic ideas of system identification. If we want to do a quick review, this is what we have done: first of all, the fundamental result we have to keep in mind is the alternative, regressor representation of the plant model; then the concept of how θ is actually obtained in the least-squares sense, minimizing J(K) = Σᵢ₌₁ᴷ (y(i) − θ̂(K)ᵀ φ(i-1))². These are the most important parts of this recursive least-squares problem. And throughout this derivation we used the a priori estimate ŷ⁰(k+1) and its error ε⁰(k+1), as I mentioned.
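Putting the whole parameter adaptation algorithm together — the inversion-free F update from the matrix inversion lemma, plus the large initial F just discussed — here is a sketch. The plant and input are made up for illustration; only the update equations come from the derivation above.

```python
import numpy as np

theta_true = np.array([0.9, 0.2, 0.5])           # unknown to the algorithm
rng = np.random.default_rng(3)
N = 300
u = rng.standard_normal(N)
y = np.zeros(N)

theta_hat = np.zeros(3)                          # theta_hat(0): no prior knowledge
F = 1e6 * np.eye(3)                              # F(0): large identity, as suggested

for k in range(1, N):
    phi = np.array([-y[k - 1], u[k], u[k - 1]])  # regressor from measured data
    y[k] = theta_true @ phi                      # plant measurement
    eps0 = y[k] - theta_hat @ phi                # a priori prediction error
    denom = 1.0 + phi @ F @ phi                  # scalar 1 + phi^T F phi
    F = F - np.outer(F @ phi, phi @ F) / denom   # matrix-inversion-lemma update
    theta_hat = theta_hat + F @ phi * eps0       # parameter adaptation

print(theta_hat)                                 # converges toward [0.9, 0.2, 0.5]
```

Note there is no `np.linalg.inv` anywhere in the loop, and F shrinks as the estimate improves — both points made in the lecture.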
This is called the a priori prediction, and this is the a priori prediction error. It has a special name because it has a sibling called the a posteriori prediction, with its a posteriori prediction error, written this way:

a priori:     ŷ°(k+1) = θ̂(k)^T φ(k),     ε°(k+1) = y(k+1) − θ̂(k)^T φ(k)
a posteriori: ŷ(k+1) = θ̂(k+1)^T φ(k),   ε(k+1) = y(k+1) − θ̂(k+1)^T φ(k)

They look very similar, but pay very close attention to the time indices. The a posteriori prediction uses all the information: when you predict y(k+1), you are using θ̂(k+1) to do the prediction. So let me ask a question first: should this be a better prediction or not? Intuitively yes, because we are using more information to do the estimation. Now, if this is the better prediction, why don't we use it in the adaptation? When we write down the PAA, it looks like θ̂(k+1) = θ̂(k) + F(k+1) φ(k) ε°(k+1). If ε(k+1), the a posteriori error, is the better one, why don't we use it here? Calculation time: that's one factor. I heard someone say there's another reason too. Let me give you two hints. The first is the PAA above; the second is the batch least-squares solution, θ̂(k+1) = F(k+1) · Σ_{i=1}^{k+1} y(i) φ(i−1), with F(k+1)^{-1} = Σ_{i=1}^{k+1} φ(i−1) φ(i−1)^T (plus the F(0)^{-1} initialization term). Think about why. The final conclusion is that you can't simply use the a posteriori error in the update, even though it is a better estimate. Memory is one issue; memory we can probably handle, but there is another fact here. If you want to form the a posteriori prediction ŷ(k+1), you need θ̂(k+1); and since θ̂(k+1) is the estimate based on all the information of the process up to time k+1, you need y(k+1) to compute it.
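To make the second hint concrete: the recursive gain F is exactly the inverse of the accumulated information matrix from the batch formula, including the F(0)^{-1} initialization term. The sketch below checks this numerically; the first-order plant, the gains, and the 50-step horizon are illustrative choices, not from the lecture.

```python
import numpy as np

# Numerical check: the recursively updated F(k) satisfies
#   F(k)^{-1} = F(0)^{-1} + sum_{i=1}^{k} phi(i-1) phi(i-1)^T,
# which ties the recursion back to the batch least-squares formula.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, 0.5])     # illustrative plant y(k+1) = 0.8 y(k) + 0.5 u(k)
theta = np.zeros(2)
F = 1000.0 * np.eye(2)
info = np.linalg.inv(F)               # accumulated information, starts at F(0)^{-1}
y = 0.0
for k in range(50):
    u = rng.standard_normal()
    phi = np.array([y, u])
    y_next = theta_true @ phi
    eps0 = y_next - theta @ phi
    F = F - np.outer(F @ phi, phi @ F) / (1.0 + phi @ F @ phi)
    theta = theta + F @ phi * eps0
    info += np.outer(phi, phi)        # accumulate phi(k) phi(k)^T
    y = y_next

print(np.allclose(np.linalg.inv(F), info))   # → True
```

This is why you would need y(k+1) to form θ̂(k+1): the batch sum defining θ̂(k+1) already contains the (k+1)-th measurement.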
Now comes the confusing part. It becomes circular, in the sense that this is not a feasible way to do it: to compute θ̂(k+1) you need ε(k+1), and to compute ε(k+1) you need θ̂(k+1) itself. Let me take a step back. Originally, this update equation, the implementable way of writing it, is θ̂(k+1) = θ̂(k) + F(k+1) φ(k) ε°(k+1). It also uses y(k+1), but only through the a priori error ε°(k+1), which is built from θ̂(k). So that equation is definitely implementable: everything on the right-hand side is available, and the estimate can update itself. But the version written with ε(k+1) you won't be able to implement directly. It's a chicken-and-egg situation: you want to obtain θ̂(k+1), but the equation uses θ̂(k+1) itself. So keep in mind that this a posteriori form is not for implementation; when you implement, use the a priori equation. The a posteriori form is important for analysis, for example stability and convergence analysis; there we are going to be using the a posteriori information. Just keep straight which one we implement and which one we don't. To give you some intuition (you actually already saw some of it earlier): the a posteriori estimation error is ε(k+1) = y(k+1) − ŷ(k+1), and since ŷ(k+1) is a better estimate than ŷ°(k+1), this error should be relatively smaller. That is the main result for this page: we can show mathematically that |ε(k+1)| ≤ |ε°(k+1)|, and exactly by how much it is smaller.
This is explained by the equation

ε(k+1) = ε°(k+1) / (1 + φ(k)^T F(k) φ(k)).

Because φ(k)^T F(k) φ(k) is always non-negative, the denominator is always at least 1, so the whole thing is less than or equal to ε°(k+1) in magnitude. This equation is obtained by the following steps; I won't explain every step in detail, so look at the result instead. You can obtain the error information by multiplying the update equation by φ(k)^T: forming φ(k)^T θ̂(k) immediately gives the a priori prediction, and forming φ(k)^T θ̂(k+1) gives the a posteriori prediction, the one using information up to k+1. So do this multiplication; then, to turn predictions into errors, subtract y(k+1) on both sides. Finally, massage the two terms together, 1 + φ^T F φ − φ^T F φ, and you obtain the equation above. It's just linear algebra; there aren't many tricks I want to mention here. Now keep this result, equation 13, in mind, and look at the boxed equation. Can anyone give me a simplification of the boxed equation using equation 13? The boxed PAA contains F(k+1) φ(k) ε°(k+1), and using equation 13 together with the update formula for F, this equals F(k) φ(k) ε(k+1), so we can simply rewrite the boxed equation as

θ̂(k+1) = θ̂(k) + F(k) φ(k) ε(k+1).

This is the a posteriori version of the parameter adaptation algorithm. As I mentioned, it is useful for stability analysis: one reason is that it is a bit simpler than the previous form of the parameter adaptation algorithm, and when doing stability analysis it is going to be pretty useful. I think this will be the end of today's lecture. Next time we're going to talk about forgetting factors, which is a very important part of system identification. I have office hours today from 1:00 to 2:30; if you have any questions, feel free to come see me.
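As a closing check, the two identities from this lecture, the error relationship of equation 13 and the a posteriori form of the PAA, can be verified numerically on a single adaptation step. All numbers below are arbitrary illustrative values, not from the lecture.

```python
import numpy as np

# One adaptation step with arbitrary illustrative values.
theta = np.array([0.3, -0.1])            # current estimate theta_hat(k)
F = np.array([[4.0, 1.0],
              [1.0, 2.0]])               # current adaptation gain F(k), any SPD matrix
phi = np.array([1.2, -0.7])              # regressor phi(k)
y_next = 0.9                             # new measurement y(k+1)

eps0 = y_next - theta @ phi              # a priori error eps°(k+1)
denom = 1.0 + phi @ F @ phi
F_next = F - np.outer(F @ phi, phi @ F) / denom
theta_next = theta + F_next @ phi * eps0 # implementable (a priori) PAA

eps = y_next - theta_next @ phi          # a posteriori error eps(k+1)

# Equation 13: eps(k+1) = eps°(k+1) / (1 + phi^T F(k) phi)
print(np.isclose(eps, eps0 / denom))                      # → True
# A posteriori PAA gives the identical update:
#   theta_hat(k+1) = theta_hat(k) + F(k) phi(k) eps(k+1)
print(np.allclose(theta_next, theta + F @ phi * eps))     # → True
```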
Info
Channel: S K
Views: 2,587
Rating: 5 out of 5
Keywords: Control Systems, Control Systems 2, Control Theory, Advanced Control Systems, UCB, University of California Berkeley, ME 223
Id: oPhy59U0WSk
Length: 74min 29sec (4469 seconds)
Published: Tue Oct 30 2018