64 - Finding eigenvalues and eigenvectors

Captions
I want to start with a brief reminder of what we're doing. The goal of this lesson is to write a theorem that says precisely how to carry out the procedure we exhibited in the example we just did.

We're starting with a linear operator T and a nonzero vector v with the property that all T does to v is multiply it by a scalar α, so T(v) = αv. Such a special vector is called an eigenvector ("self-vector") of T — it's a property of v and T together — and the scalar α is called an eigenvalue of T. So α and v go together: an eigenvector and its corresponding eigenvalue.

If the space V has a basis comprised entirely of eigenvectors of T, then when we construct the matrix representation of T with respect to that basis, here is what we get. The way to construct a matrix representation is to let T operate on each of the basis elements, and since they're all eigenvectors, T(v₁) = α₁v₁, T(v₂) = α₂v₂, all the way up to T(vₙ) = αₙvₙ. I'm omitting all the zero coefficients of the other basis vectors: when I say T(v₂) = α₂v₂, what it means is that T(v₂) = 0·v₁ + α₂v₂ + 0·v₃ + 0·v₄ + ⋯ + 0·vₙ. There's a whole bunch of zeros that I'm not writing, and all that remains are these single components in each of the images. When we take the matrix of coefficients, transposed, what we get is a diagonal matrix. That's the matrix representation of the map T with respect to the basis B of eigenvectors. So if — and there is an "if" here — the space has a basis of eigenvectors of T, then T has a matrix representation which is a diagonal matrix. Such a T is called diagonalizable. That's the framework in which we're working.

Now, we did an example: a 2×2 matrix that defines a linear map, and we saw how to find its eigenvalues and their corresponding eigenvectors. What I'm going to write now is the theorem saying that what we did in that example is precisely what we do in general.
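As a quick sanity check of the definition, here is a minimal numerical sketch. The matrix, the vector, and the scalar below are hypothetical illustrative values, not the example from the lecture:

```python
import numpy as np

# Hypothetical example matrix with a candidate eigenvector/eigenvalue pair.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])
alpha = 3.0

# The defining property of an eigenvector: v is nonzero and T(v) = alpha * v.
print(np.allclose(A @ v, alpha * v))   # True
```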
Before the theorem, I want to state formally one thing we said informally. You can call this a definition. Suppose T: V → V is linear, so we have a linear operator. We define the determinant of T — the determinant of the operator — to be the determinant of the matrix representation of T with respect to some basis F.

The first thing we have to do with this definition is verify that it makes sense. What do I mean? The notion of a determinant was defined for a matrix: when we have a square matrix, we know how to find its determinant; it's a formula involving the entries of the matrix. Now we have an operator, and a priori "the determinant of an operator" doesn't compile — an operator is not a matrix. But we're defining it: take a matrix representation of the operator, take its determinant, and define that to be the determinant of T. So the definition makes sense. That's not sufficient for it to be a good definition, though; we also need to verify that it is what's called well-defined. What do I mean by well-defined? Take T, take your favorite basis F for V, take the matrix representation with respect to that basis F, and find the determinant. A priori it seems like the answer you get — the determinant — would depend on which basis you chose, and that would make the definition not well-defined: one person would take one basis and get a certain determinant for the map, and another person would take a different basis, get a different matrix representation, and get a different determinant. But as we mentioned earlier, that's not the case: if you take different bases — and it doesn't matter which — the matrix representations will be different, but they are all similar. That's the definition of similar: different matrix representations of the same map are similar. And one of the properties of similar matrices is that they share the same determinant. So this definition is independent of the choice of F.

Let's record this as a remark. Remark: the determinant of T does not depend on the choice of the basis F with respect to which we find the matrix representation of T, since any two matrix representations of T are similar (by definition) and thus have the same determinant. Therefore the definition is well-defined, and we can write things like "the determinant of a map" and it makes perfect sense.
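A minimal numerical illustration of that remark, with a hypothetical matrix A standing in for some matrix representation and a random (almost surely invertible) change-of-basis matrix P:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrix representation of an operator in some basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# A change of basis replaces A by the similar matrix P^{-1} A P.
P = rng.standard_normal((2, 2))          # random, almost surely invertible
B = np.linalg.inv(P) @ A @ P

# Similar matrices share the same determinant, so det(T) is well defined.
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
```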
Likewise — and I won't state this as a formal definition — we will write things like "the kernel of a matrix" and just mean the null space of the matrix: all the vectors that the matrix, acting as an operator, sends to zero. That is, ker A = {v : Av = 0}; if we define T(v) = Av, these are precisely the elements of the kernel of the operator defined by A. If T is a map and A is its matrix representation, then the kernel of A and the kernel of T are the same thing. Those are the two remarks I wanted to clarify before we write the theorem, so now let's write it. The framework is again T: V → V, a linear operator, and here are the parts of the theorem.

Part 1: the eigenvalues of T are the roots of the polynomial det(T − αI). Remember what we said: T is an operator and I here is the identity operator, so T − αI is again an operator, and what we mean by its determinant is what we just defined — take a matrix representation and look at the determinant of that matrix representation; it doesn't matter which representation you choose. What you get is a bunch of numbers, but there's still this α: α here is a parameter, and when you take the determinant, what you get is a polynomial in α, just like in the example. So what's written here, although it may not look like one, is in fact a polynomial in the variable α, and it has a name: det(T − αI) is called the characteristic polynomial of T. Maybe we'll underline this, because I don't think we actually defined it formally until now; this definition now becomes part of the theorem. So that's the first part. This is how we find eigenvalues: we write down the characteristic polynomial and find its roots.

Part 2: if α is an eigenvalue, then its corresponding eigenvectors — those vectors v with T(v) = αv for this specific α — are the nonzero vectors in ker(T − αI). Again, this is precisely what we did in the example: for each α we plugged it into the matrix and solved the homogeneous system of equations for which it is the coefficient matrix. The solutions of a homogeneous system are precisely the vectors sent to zero — precisely the kernel — and again there is a deliberate blur here between the matrix representation A and the map.

Maybe we'll prove these two parts first and then state a couple more statements separately, but before the proof let's add a couple of remarks.

Remark 1: an analogous theorem, with all the definitions involved, can be written for a matrix A. What do I mean? This theorem starts with a linear map, and it says what the eigenvalues of the map are, what the characteristic polynomial of the map is, and what the eigenvectors of the map are. All these words — eigenvalue, eigenvector, characteristic polynomial, kernel — relate in exactly the same way to matrices. Start with a square matrix A, n × n. The eigenvalues of the matrix are the roots of the polynomial det(A − αI), called the characteristic polynomial of the matrix, and it makes perfect sense: an eigenvalue of A is an α for which there is a nonzero v with Av = αv, just like for the map corresponding to the matrix. And if α is an eigenvalue of the matrix A, then its corresponding eigenvectors are the nonzero vectors in ker(A − αI). Everything makes perfect sense, and in the examples this is actually what we did; the words are used freely both for maps and for matrices, because we really identify them — we think of a map as a matrix and of a matrix as a map.
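Here is the matrix version of the recipe as a small symbolic sketch, using sympy on a hypothetical 2×2 matrix (not the example from the previous clip): first the characteristic polynomial and its roots, then the kernel of A − αI for each root.

```python
from sympy import Matrix, symbols, eye, solve

alpha = symbols('alpha')

# Hypothetical 2x2 matrix.
A = Matrix([[2, 1],
            [1, 2]])

# Part 1: eigenvalues are the roots of the characteristic polynomial det(A - alpha*I).
char_poly = (A - alpha * eye(2)).det()
eigenvalues = solve(char_poly, alpha)                  # [1, 3]

# Part 2: eigenvectors are the nonzero vectors of ker(A - alpha*I), for each eigenvalue.
for ev in eigenvalues:
    eigenspace_basis = (A - ev * eye(2)).nullspace()
    print(ev, [list(v) for v in eigenspace_basis])
```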
Remark 2: the set of all eigenvectors corresponding to a specific eigenvalue α — fix an α and look at all the v's satisfying T(v) = αv — together with the zero vector, is a subspace of V. Why is this? Normally, to prove that something is a subspace of a bigger space, we need to show that it contains zero (it does, because we threw it in) and that it is closed under sums and under products by a scalar. But what I'm claiming here is that this is an immediate statement — you don't actually need to verify it — and the reason is the theorem. Part 2 of the theorem says that if α is an eigenvalue, its corresponding eigenvectors are the nonzero vectors in a certain kernel, and a kernel of a linear map is always a subspace of the domain — remember that. T − αI is again a map from V to V; it's not T, it's a different map, but its kernel is still a subspace; that's a result we already know: kernels are subspaces. We took all the nonzero vectors of that kernel — those are all the eigenvectors — and when we throw zero back in we get the entire kernel, and a kernel is a subspace. So this statement is a direct corollary of part 2: the set of all eigenvectors for α, together with zero, is a subspace of V, since it is a kernel.

And it has a name and a notation. It is a space of eigenvectors, so it is called the eigenspace of T corresponding to the eigenvalue α, and I'll denote it — I don't know if this is completely standard notation, but that's how I'm going to write it — V_α. For every eigenvalue you get a different eigenspace: a different collection of eigenvectors, namely that kernel, and therefore a different eigenspace. That's the second remark, and it is in fact another definition.
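To make the remark concrete, here is a hypothetical matrix with a repeated eigenvalue whose eigenspace is more than a single line (again a sketch with illustrative numbers, not the lecture's example):

```python
from sympy import Matrix, eye

# Hypothetical matrix with the eigenvalue 2 repeated twice.
A = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 5]])

# The eigenspace V_2 is the kernel of A - 2I; nullspace() returns a basis of it.
V_2 = (A - 2 * eye(3)).nullspace()
print(len(V_2))   # 2: this eigenspace is a two-dimensional subspace of V
```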
Now let's prove the theorem. I want to prove two things — look at the theorem again: that the eigenvalues are the roots of this polynomial, and that the eigenvectors are the elements of this kernel. In fact, the way I'm going to show it, part 2 is going to emerge first and then part 1, and it's going to be very simple.

Proof of the theorem. We first need to recall what an eigenvector and an eigenvalue are, but we just wrote that. So let v be a nonzero vector, and let's write this as one big chain of "if and only if"s. v is an eigenvector if and only if T(v) = αv for some α — that's the meaning of being an eigenvector. Saying T(v) = αv is the same as saying T(v) − αv = 0. That in turn holds if and only if T(v) − αI(v) = 0, where I is the identity map, the map that does nothing: I(v) = v for every v, so this is just the same thing rewritten. But now, by the way we defined operations on linear maps, I can rewrite this as (T − αI)(v) = 0. And what does it mean that (T − αI)(v) = 0? It means that v is in the kernel of this map: it holds if and only if v ∈ ker(T − αI) — that's what a kernel means.

To be careful, all of these equivalences should carry the condition that v is nonzero, so let's phrase it precisely: let v be nonzero; then v is an eigenvector if and only if v ∈ ker(T − αI). This proves part 2: the eigenvectors corresponding to α are precisely the nonzero elements of this kernel.
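In one display, the chain of equivalences we just walked through, for a fixed α and a nonzero v:

```latex
v \neq 0:\qquad
T(v) = \alpha v
\;\Longleftrightarrow\; T(v) - \alpha v = 0
\;\Longleftrightarrow\; (T - \alpha I)(v) = 0
\;\Longleftrightarrow\; v \in \ker(T - \alpha I).
```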
Now let's prove part 1. The question in part 1 is: what are the eigenvalues? Part 2 we proved: if α is an eigenvalue, its corresponding eigenvectors are the nonzero vectors of ker(T − αI). Now we're asking which α's are eigenvalues. The eigenvalues are those α's that admit such eigenvectors — that have some v's with T(v) = αv — meaning that this kernel is not just the zero vector. If ker(T − αI) = {0}, then there are no nonzero elements in it, therefore there are no eigenvectors, and α is not an eigenvalue. So we're asking: when is this kernel nonzero, when does it contain vectors other than the zero vector, meaning α really has eigenvectors?

What does it mean for a kernel to be nonzero — what does it translate to automatically? The map is not one-to-one: a kernel is nonzero if and only if the map is not one-to-one. So ker(T − αI) ≠ {0} if and only if the map T − αI is not one-to-one. For a linear operator, being one-to-one is equivalent to being onto — remember, those properties coincide for a linear operator — and thus T − αI is not one-to-one if and only if T − αI is not invertible. And when is a map not invertible? If and only if its matrix representation is a non-invertible matrix. This is a theorem we had: the map is invertible if and only if the matrix is invertible, and moreover the matrix representation of T⁻¹ is A⁻¹ — there is a tight correspondence between maps and matrices. So saying that T − αI is not invertible is the same as asking when a matrix is not invertible, and we had a whole list of statements equivalent to that: one saying that for every b the system Ax = b has a solution (when the matrix is invertible), one about the rank, one about the determinant. The one we're interested in now is the determinant: the matrix is not invertible if and only if its determinant is 0 — and the determinant of the map is, by our definition, the determinant of a matrix representation, the same thing.

So there are eigenvectors corresponding to α if and only if det(T − αI) = 0, and that proves part 1. That's all we wanted to say: the α's that admit eigenvectors — the α's that are eigenvalues — are precisely the roots of this polynomial, the α's satisfying det(T − αI) = 0. Note how short this is: extracting this from the simple 2×2 example took us quite a while, but once you have the ideas, spelling them out abstractly is in fact rather straightforward. So this proves part 1: we agreed that det(T − αI) is a polynomial in α, and what we just found is that α admits eigenvectors — and is therefore an eigenvalue — if and only if this determinant is 0, that is, if and only if α is a root of this polynomial.
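Part 1 can be checked numerically on a hypothetical matrix whose eigenvalues are 1 and 3: the determinant of A − αI vanishes exactly when α is an eigenvalue (a sketch with illustrative numbers, not the lecture's example):

```python
import numpy as np

# Hypothetical matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)

# At an eigenvalue, A - alpha*I is singular: its determinant is 0 and its kernel is nonzero.
print(np.isclose(np.linalg.det(A - 3.0 * I), 0.0))   # True
# At a non-eigenvalue, A - alpha*I is invertible, so its kernel is only the zero vector.
print(np.isclose(np.linalg.det(A - 4.0 * I), 0.0))   # False
```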
What I want to do later is another example, a 3×3 example, which is more involved simply because it's 3×3 rather than 2×2: computing the determinant and finding the kernels become 3×3 issues and therefore a bit longer. But I'm not going to trace that example carefully in order to extract the theorem — I already know the theorem; I'm just going to apply it. That I want to do separately, in a separate clip. Before that, I want to write another theorem which collects a few more facts that we already encountered in the 2×2 example, and which we'll verify on the 3×3 example as well. I know that once things get abstract they feel slightly more challenging than the concrete examples, but really the way to grasp this well is to read it in parallel to the example, and to see that everything we're writing is precisely what we did in that baby 2×2 example. The two facts we already know from the example are: first, if we take different α's, then their corresponding v's are always linearly independent; and second, the matrix P constructed by taking those eigenvectors as columns is precisely the P that gives the similarity — the P that satisfies D = P⁻¹AP. Those are the two facts that I want to write down now and outline the proofs of, maybe not in full detail, but at least outlining them.

Here's the first theorem. Theorem: eigenvectors corresponding to different eigenvalues are linearly independent. The proof is rather straightforward; it's based on mathematical induction. Here it is. We need to start with a collection of eigenvalues and eigenvectors, so let α₁, α₂, ..., αₖ be different — and this is emphasized: we're assuming different eigenvalues; if you take two of them to be the same, the statement is no longer true — and let v₁, v₂, ..., vₖ be corresponding eigenvectors, so T(v₁) = α₁v₁, T(v₂) = α₂v₂, and so on: they go in pairs, an eigenvalue with its corresponding eigenvector. We want to show that the set {v₁, ..., vₖ} is linearly independent; that's the assertion of the theorem, and we will show it by induction.

How do we execute an induction? We start with the base case, then we have the induction hypothesis on which we base the induction step. The base case is k = 1: one eigenvalue, one eigenvector. Is a set of just one vector linearly independent? You want to say yes, but you have to be a bit careful: a set comprised of one vector is linearly independent exactly when that vector is not zero. So suppose we have just one α and just one v — can that v be zero? No. Why not? One word, starting with "def": the definition — an eigenvector is by definition a nonzero vector satisfying T(v) = αv. So for k = 1 the set {v₁} is linearly independent, since v₁, being an eigenvector, is not 0; the base case is indeed trivial.

Now the hypothesis. The induction hypothesis is that if we have k − 1 distinct eigenvalues and corresponding eigenvectors, then those eigenvectors are linearly independent. Then I'm going to throw in another eigenvalue, different from all the rest, and show that the resulting set of k eigenvectors is linearly independent.

So here's the induction step. How do we show that a set of k elements is linearly independent? By definition: take a linear combination, assume it equals 0, and show that all the coefficients are 0. The α's are already taken, so I'll call the general coefficients β. Consider β₁v₁ + β₂v₂ + ⋯ + βₖvₖ = 0: suppose we have a linear combination of the v's which is 0. If we can show that all the β's are 0, that means the v's are linearly independent, by definition. The first thing I want to do is multiply this equation by α₁, which gives β₁α₁v₁ + β₂α₁v₂ + ⋯ + βₖα₁vₖ: I took the combination, multiplied it by α₁, and distributed the α₁ over all the terms, and the result still equals 0, because it's α₁ times 0.
On the other hand, I'm going to take the same combination and do something separate to it. What is T of this linear combination β₁v₁ + ⋯ + βₖvₖ? On the one hand it equals T(0), because the combination is 0, and T(0) = 0 for any linear map — something we proved from the definition. So 0 = T(0) = T(β₁v₁ + ⋯ + βₖvₖ), and T of a linear combination is the linear combination of the T's — that's the property of T being a linear map — so this equals β₁T(v₁) + ⋯ + βₖT(vₖ). Now, what is T(v₁)? It's α₁v₁: we're assuming these v's are eigenvectors corresponding to the eigenvalues α₁ up to αₖ. So what's written here is the same as β₁α₁v₁ + β₂α₂v₂ + ⋯ + βₖαₖvₖ, and it equals 0, simply because these are eigenvalues and eigenvectors.

So on the one hand we have this, and on the other hand let me copy the bottom row from the previous board: β₁α₁v₁ + β₂α₁v₂ + ⋯ + βₖα₁vₖ = 0, which we obtained just by taking the linear combination and multiplying it by α₁. We have these two statements, and I'm going to subtract them. What do we get? When we subtract, the first terms are equal and therefore disappear, and for the rest — tell me if you agree — we get β₂(α₂ − α₁)v₂ + β₃(α₃ − α₁)v₃ + ⋯ + βₖ(αₖ − α₁)vₖ = 0.

Now look at what we have here: some linear combination of the k − 1 vectors v₂ up to vₖ which equals 0. By the induction hypothesis, v₂ up to vₖ are linearly independent — the hypothesis says that if you take k − 1 of them you're fine, and we prove it for k; that's how we do induction. So all of these coefficients are 0: β₂(α₂ − α₁) = β₃(α₃ − α₁) = ⋯ = βₖ(αₖ − α₁) = 0. These are scalars. Now, α₂ and α₁ are different — we're looking at different eigenvalues — so α₂ − α₁ is not zero; we don't know much about it, but we know it's not zero. So if the product β₂(α₂ − α₁) is zero, β₂ has to be 0. Likewise α₃ − α₁ cannot be zero, so β₃ has to be 0, and so on. So we conclude that β₂ = β₃ = ⋯ = βₖ = 0.
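Collecting the manipulations above in one display (apply T to the vanishing combination, multiply the combination by α₁, and subtract):

```latex
\begin{aligned}
\beta_1 v_1 + \beta_2 v_2 + \dots + \beta_k v_k &= 0\\
\text{apply } T:\quad \beta_1\alpha_1 v_1 + \beta_2\alpha_2 v_2 + \dots + \beta_k\alpha_k v_k &= 0\\
\text{multiply by } \alpha_1:\quad \beta_1\alpha_1 v_1 + \beta_2\alpha_1 v_2 + \dots + \beta_k\alpha_1 v_k &= 0\\
\text{subtract:}\quad \beta_2(\alpha_2-\alpha_1) v_2 + \dots + \beta_k(\alpha_k-\alpha_1) v_k &= 0
\end{aligned}
```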
So we started with this linear combination that was zero and we proved that β₂ up to βₖ are zero. Are we done? Not quite: we need to prove that if we take a linear combination that's zero, all the coefficients are 0, and so far almost all of them are 0 — maybe β₁ is not. So we still need to prove that β₁ is also 0. Let's plug all these zeros into the original combination. We know the combination is 0 — that's where we started — and all the β's from β₂ on are 0, so β₁v₁ + 0·v₂ + 0·v₃ + ⋯ + 0·vₖ = 0, which implies that β₁v₁ = 0: if the whole sum is zero and every term except the first is 0, the first one equals zero. So we have β₁v₁ = 0, and v₁, as an eigenvector, is not 0; therefore β₁ has to be 0 as well, since v₁ is not the zero vector. So we started with a linear combination that's zero and we proved that all the coefficients are 0, meaning the vectors are linearly independent. This proves the fact that eigenvectors corresponding to different eigenvalues are linearly independent.
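A quick numerical check of this theorem on a hypothetical matrix with three distinct eigenvalues: putting the eigenvectors into the columns of a matrix, linear independence shows up as full rank.

```python
import numpy as np

# Hypothetical matrix with three distinct eigenvalues (1, 3 and 7).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 7.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors sit in the columns

# Distinct eigenvalues, so the theorem predicts linearly independent eigenvectors:
# the matrix whose columns they are has full rank.
print(np.linalg.matrix_rank(eigenvectors) == 3)   # True
```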
There's one more statement that I wanted to make in this rather long theoretical discussion, one final statement, and I can see that you're getting a bit overwhelmed, so maybe I'll omit the proof — let me look at it and see whether it's crucial. Actually it's rather straightforward, and short enough to be worth writing. So here is the last theorem I want to mention — again something we said in relation to the example, but now stated as a theorem. Theorem: let A be diagonalizable, and let B = (v₁, ..., vₙ) be a basis of eigenvectors with corresponding eigenvalues α₁, ..., αₙ (note the indices run up to n, matching v₁ through vₙ; those are precisely the diagonal entries of D). Then the matrix P whose columns are the vᵢ — I'm taking these n vectors and putting them as the columns of an n × n matrix P, and let's emphasize that it is n × n — is the matrix that implements the diagonalization, the similarity of A to the diagonal matrix: it satisfies P⁻¹AP = D, the diagonal matrix with the αᵢ along the diagonal. That's the statement. So when we find the eigenvalues and eigenvectors, which is the procedure we carry out using the main theorem we proved a few minutes ago, taking the eigenvectors and setting them as columns gives precisely the matrix that produces the diagonalization — the similarity of A to the diagonal matrix we called D. The proof is just a matter of being careful with matrix multiplication, a bunch of indices, so bear with me for another five minutes to have the complete picture; later on we'll have a few more theorems that I'm not going to prove, but these are really the main statements about this entire topic. So let's prove this as well and wrap up with that proof.

Here's the proof. We want to show that P⁻¹AP is that diagonal matrix. Equivalently, I can multiply both sides from the left by P: then P·P⁻¹ = I cancels, so an equivalent statement is that AP equals the diagonal matrix with P in front of it. So we will show that AP = PD, and that's the same thing. The way to show it is very simple: calculate the left-hand side, calculate the right-hand side, and see that we get the same thing.

Let's start with the right-hand side: P times the diagonal matrix with α₁, α₂, ..., αₙ on the diagonal and zeros everywhere else. To carry out this calculation I have to write what P is, but I know what P is: it's the matrix whose columns are the eigenvectors. The first column is the eigenvector v₁, and I'll name its components v₁₁, v₁₂, ..., v₁ₙ; that whole column is just the eigenvector v₁. The second column of P is v₂, with components v₂₁, v₂₂, ..., v₂ₙ; and so on, until the last column, which is vₙ, with components vₙ₁, vₙ₂, ..., vₙₙ. That's P — that's the assumption: take the eigenvectors as columns. Now multiply this P by the diagonal matrix with α₁, α₂, ..., αₙ on the diagonal and zeros elsewhere, and let's see what we get. The first entry of the product is v₁₁ times α₁, and the rest of that row hits zeros in the multiplication, so what sits there is α₁v₁₁. Underneath it: second row times first column — everything hits zeros except v₁₂, which meets α₁, so we get α₁v₁₂; and so on down to the last row times the first column, whose only nonzero product is α₁v₁ₙ. Now the second column of the product: each row of P times the second column of the diagonal matrix, and only the second entry of each row meets α₂, which is the only nonzero entry there; so we get α₂v₂₁, α₂v₂₂, ..., α₂v₂ₙ. And so on: in the last column, only the last entries meet αₙ — that's the only thing that survives.
All I'm doing is matrix multiplication. By the way, we didn't do many proofs of this sort, because they look abstract and scary, but in fact this is in a sense the most straightforward way of proving that two matrices are equal: it's an equation between two matrices — we're claiming they're the same — and we just write down what their entries are. It's really an important, and in fact basic, way of proving things. So what we get in the last column, finally, is αₙvₙ₁, αₙvₙ₂, ..., αₙvₙₙ.

Let me write the result symbolically. The first column of this product is precisely the eigenvector v₁ multiplied by α₁ — the vector α₁v₁ sitting in the first column of the matrix. The second column of the product is α₂v₂, all the way up to the last column, which is αₙvₙ. That's what we get on the right-hand side: a matrix whose columns are the corresponding α's times the corresponding eigenvectors.

Now let's see that we get precisely the same thing on the left-hand side. Any questions so far — is what we're doing clear? Sometimes people get confused by such calculations just because they're abstract, because they're not numbers, but this really is a very routine, straightforward proof. The left-hand side of the equality we're trying to prove is A times P, and we want to see that we get the same thing we got for the right-hand side. What is P? The matrix whose columns are the eigenvectors; using the same notation, its first column is v₁, its second column is v₂, and its n-th column is vₙ. Now tell me if you agree with the following claim — and I'm not even going to write A explicitly as a₁₁, a₁₂, a₁₃ and so on; I'll leave it as A. My claim is that the first column of the product is the vector Av₁. How do I get the first column of the product? I take each of the rows of A and multiply it by the first column of P, which is v₁; so the entire first column is just A multiplied by v₁, which is a vector. That's a key step here: if you allow me to write this without breaking A explicitly into all its entries, it's much more illuminating than writing all the little entries. What's the second column? Av₂, all the way up to Avₙ. And what is Av₁? v₁ is an eigenvector corresponding to the eigenvalue α₁, so Av₁ is nothing but α₁v₁; likewise Av₂ = α₂v₂ and Avₙ = αₙvₙ, because these are eigenvectors and eigenvalues. So that's the left-hand side: AP is the matrix whose columns are the αᵢvᵢ. Look at the previous board: the right-hand side, PD, was the same matrix. Both sides give the same thing, and this completes the proof.
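The identity we just proved, verified numerically on a hypothetical 2×2 example (P has the eigenvectors as columns, D the corresponding eigenvalues on the diagonal):

```python
import numpy as np

# Hypothetical diagonalizable matrix with eigenpairs (3, (1, 1)) and (1, (1, -1)).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])        # eigenvectors as columns
D = np.diag([3.0, 1.0])            # corresponding eigenvalues on the diagonal

# The two equivalent forms of the statement proved above.
print(np.allclose(A @ P, P @ D))                   # AP = PD
print(np.allclose(np.linalg.inv(P) @ A @ P, D))    # P^{-1} A P = D
```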
I know this was pretty heavy and pretty long, but I hope it wasn't too discouraging, and that you managed to trace all the ideas. If you did get a bit lost, what we're going to do next is precisely go back to the solid ground of an example: take a 3×3 example and do precisely what the theorem says. Let's recall the main theorem of this clip. It said: to find the eigenvalues, look at the characteristic polynomial, det(T − αI), and find its roots — those are the eigenvalues. Once you have the eigenvalues, for each of them separately find the kernel of T − αI — those are the eigenvectors. And once you have the eigenvalues and the eigenvectors, you can diagonalize: put the α's along the diagonal — that's your D — and put the eigenvectors as columns — that's your P. That's what we're going to do next, after we take a short break.
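As a preview of that procedure, here is the whole recipe in one compact sketch, under the assumption that the matrix really is diagonalizable (enough independent eigenvectors to fill the columns of P); the matrix and the helper name are hypothetical, chosen only for illustration:

```python
from sympy import Matrix, symbols, eye, solve, diag

def diagonalize_sketch(A):
    """Minimal sketch of the recipe above, assuming A has a basis of eigenvectors."""
    n = A.rows
    alpha = symbols('alpha')
    # 1. Eigenvalues: roots of the characteristic polynomial det(A - alpha*I).
    eigenvalues = solve((A - alpha * eye(n)).det(), alpha)
    # 2. Eigenvectors: for each eigenvalue, a basis of ker(A - alpha*I).
    columns, diagonal = [], []
    for ev in eigenvalues:
        for v in (A - ev * eye(n)).nullspace():
            columns.append(v)
            diagonal.append(ev)
    # 3. P has the eigenvectors as columns, D the eigenvalues along the diagonal.
    return Matrix.hstack(*columns), diag(*diagonal)

# Hypothetical 3x3 example.
A = Matrix([[4, 1, 0],
            [1, 4, 0],
            [0, 0, 2]])
P, D = diagonalize_sketch(A)
print(P.inv() * A * P == D)   # True
```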
Info
Channel: Technion
Views: 15,279
Rating: 4.8540144 out of 5
Keywords: Technion, Algebra 1M, Dr. Aviv Censor, International school of engineering
Id: Y_PfMmf3ego
Length: 71min 37sec (4297 seconds)
Published: Mon Nov 30 2015