55 - Matrix representation of linear maps (continued)

Captions
So we know how to represent a linear map by a matrix when the map goes between two vector spaces of the form F^n and F^m, when the vectors really are column vectors. What I want to do now is more general, and again I'm going to do it in two steps. First I'll start with the simpler situation of a linear map T from V to itself, and then, a couple of boards down, we'll replace that V by a general W and handle the most general, abstract case. So first: a map from V to itself. We want to find a matrix A such that, in some sense, T(v) = Av. I write "in some sense" because v might be a polynomial, and then the product Av doesn't literally make sense; we first have to make it make sense, and then show how to do it. By the way, a map from a space to itself is often called a linear operator, in case you encounter that terminology; it's usually reserved for maps that go from a space to itself.

Let's start with an example. Take T from R_3[x] to R_3[x], the polynomials of degree at most 3 over R, defined by T(p(x)) = p'(x). We met this specific example before and showed that it is in fact a linear transformation: the derivative of a sum is the sum of the derivatives, and similarly for scalar multiples. The image actually lies in R_2[x], since derivatives of polynomials of degree at most 3 have degree at most 2, but in particular it lies in R_3[x], so I can take the codomain to be R_3[x] and we are in the operator situation: a map from V to V that takes a polynomial to its derivative.

I want to represent this T by a matrix, and here is what we do. First, choose a basis to work with; it is a genuine choice, and we will see that the result depends on it. Choose a basis for your space V, in this example R_3[x]; let's naturally take the standard basis and call it E, so E = (x^3, x^2, x, 1). Give these elements names: e_1 = x^3, e_2 = x^2, e_3 = x, e_4 = 1.

A general polynomial p(x) in R_3[x] has the form p(x) = a·x^3 + b·x^2 + c·x + d·1. I'm emphasizing that the last term is d times 1: p(x) is a linear combination of the basis elements. So its coordinate vector with respect to the basis E, in the notation from before, is [p(x)]_E = (a, b, c, d). Knowing (a, b, c, d) is knowing the polynomial: all the information about the element we're handling is contained in a, b, c, d.
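As a small illustration of that encoding step (my own sketch, not part of the lecture; the names `encode` and `evaluate` are made up), here is the coordinate vector of a polynomial with respect to E = (x^3, x^2, x, 1):

```python
# Encode p(x) = a*x^3 + b*x^2 + c*x + d as its coordinate vector with
# respect to the standard basis E = (x^3, x^2, x, 1).

def encode(a, b, c, d):
    """Return [p]_E for p(x) = a*x^3 + b*x^2 + c*x + d."""
    return [a, b, c, d]

def evaluate(coords, x):
    """Evaluate the polynomial encoded by coords = [a, b, c, d] at x."""
    a, b, c, d = coords
    return a * x**3 + b * x**2 + c * x + d

p = encode(2, -1, 0, 5)   # p(x) = 2x^3 - x^2 + 5
print(p)                  # [2, -1, 0, 5]  -- all the information about p
print(evaluate(p, 3))     # 2*27 - 9 + 5 = 50
```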
Now apply T to that general element. If p(x) = ax^3 + bx^2 + cx + d, then p'(x) = 3a·x^2 + 2b·x + c·1. And what is p' as a coordinate vector? With respect to the same basis E it is [p'(x)]_E = (0, 3a, 2b, c); the first entry is 0 because there is no x^3 term.

So when we work in coordinates, the map that sends a polynomial to its derivative becomes a map that sends (a, b, c, d) to (0, 3a, 2b, c), and that can be expressed by a matrix; that's exactly what we discussed in the previous clip. We're looking for a matrix A such that A(a, b, c, d)^T = (0, 3a, 2b, c)^T; such an A represents the transformation. And I can tell you what A is:

A =
( 0 0 0 0 )
( 3 0 0 0 )
( 0 2 0 0 )
( 0 0 1 0 )

Why? You can decipher it row by row. Multiplying the first row by (a, b, c, d) has to give 0, so all its entries must be 0: no a's, no b's, no c's, no d's. The second row has to produce 3a, so put a 3 in the first position and zeros elsewhere; the third row produces 2b, the fourth produces c. That's how the matrix was formed, and it was easy.

The question is how to do this in general. There has to be a systematic way; we're not going to play guessing games, and in general the maps can be much more complicated. Before the general procedure, let's give things names. Is what just happened clear, namely that this matrix represents that transformation, in the sense that if you take a polynomial of degree at most 3, encode it by its coefficient vector, and apply A, you get precisely the derivative encoded by its coefficient vector? Good. The matrix A we found is called the matrix representation of T, and it has a notation that records the basis we chose: we put T in brackets, just as we put vectors in brackets, and add a little E to indicate which basis we are representing T with respect to, written [T]_E. It satisfies

[T]_E · [v]_E = [T(v)]_E,

that is, the matrix representing T with respect to E, times the coordinate vector of v with respect to E, gives the coordinate vector of T(v) with respect to E. Students sometimes freak out when they see this, but there's no reason to: it's really just what we did. [T]_E is the matrix A that represents T with respect to the chosen basis, [v]_E is the coordinate vector of v, of p(x) in our case, and [T(v)]_E is the coordinate vector of T(v), of p'(x) in our case. The formula is just notation for precisely that statement, so nobody should be scared of it anymore.
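A quick numerical check of that matrix (my own sketch, not from the lecture; note that NumPy's `polyder` uses the same highest-degree-first ordering as the basis E, but drops the leading zero):

```python
import numpy as np

# The matrix found by inspection: it should send [a, b, c, d] (coordinates of
# a*x^3 + b*x^2 + c*x + d) to [0, 3a, 2b, c] (coordinates of the derivative).
A = np.array([[0, 0, 0, 0],
              [3, 0, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 1, 0]])

v = np.array([2, -1, 0, 5])          # p(x) = 2x^3 - x^2 + 5
print(A @ v)                          # [ 0  6 -2  0]  i.e. p'(x) = 6x^2 - 2x

# Cross-check against numpy's polynomial derivative.
print(np.polyder(v))                             # [ 6 -2  0]
print(np.array_equal((A @ v)[1:], np.polyder(v)))  # True
```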
So how do we find [T]_E in general? Not fully in general yet: we're still in the realm of a linear operator, a map from a space to itself, from V to V, not yet from V to W; that comes in a minute.

So how do we find the matrix representation? Let T: V -> V be a linear operator, a linear map from a space to itself; that's one ingredient. The other ingredient is the basis: let E = (e_1, e_2, ..., e_n) be a basis for V. Those are the two ingredients involved.

What did we do before, once we were in the coordinate realm? We took the images T(e_i), which we previously called the w_i, and made them the columns of A. That's precisely what I'm going to do here, but first I have to translate things into coefficients. Take T(e_1). It is again an element of V; I could call it w_1, but the point is that I can write it as a linear combination of the basis elements, and I'll give the coefficients specific names:

T(e_1) = a_{11} e_1 + a_{21} e_2 + ... + a_{n1} e_n.

The reason for the two indices: the second index indicates that this came from T(e_1), and the first index, running from 1 to n, indicates which e_i the entry is the coefficient of. Now do T(e_2); it is again some linear combination of the e_i's:

T(e_2) = a_{12} e_1 + a_{22} e_2 + ... + a_{n2} e_n.

So a_{11}, a_{21}, ..., a_{n1} are precisely the coefficients of T(e_1) with respect to the basis E, and I am going to set them as a column of my matrix A, just as we did before. Continuing in the same way down to the last basis element,

T(e_n) = a_{1n} e_1 + a_{2n} e_2 + ... + a_{nn} e_n.

The matrix will be square, because we're working from a vector space to itself. Now remember: the first list is the coefficients of T(e_1), the second is the coefficients of T(e_2), and so on, and I want them as the columns of A, which is how we did it originally when T went from F^n to F^m. If you look at the coefficient matrix written above, the coefficients of each T(e_j) form a row, so to make them columns I take the transpose of that coefficient matrix. In a concrete example these entries are just numbers, an 8 here, a 3 there, a 17, a -5, a pi, whatever; the one thing you have to remember is to take the transpose.
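In symbols, the construction just described reads as follows (this only restates the board, with the lecture's indexing convention):

```latex
T(e_j) \;=\; \sum_{i=1}^{n} a_{ij}\, e_i \qquad (j = 1,\dots,n),
\qquad
[T]_E \;=\;
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots &        & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix},
```

so the j-th column of [T]_E is exactly the coordinate vector [T(e_j)]_E.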
So A is precisely the matrix whose j-th column is (a_{1j}, a_{2j}, ..., a_{nj}), the coefficient vector of T(e_j): the first column is a_{11}, a_{21}, ..., a_{n1}, the second is a_{12}, a_{22}, ..., a_{n2}, and so on up to the last column a_{1n}, a_{2n}, ..., a_{nn}. The coefficients were labelled exactly so that this fits. And this A satisfies: A is none other than [T]_E, the matrix representation of T with respect to the basis E.

Let's show that this indeed works in the example we just did. There we guessed A; we said these entries have to be zeros because we want a 0 in the first coordinate, and so on. Now let's run the procedure we just described and check that we end up with the same matrix. Before I erase, is the "in some sense" clear now? The sense is that it is not T(v) = Av literally; it is [T(v)]_E = [T]_E [v]_E, the coordinate vector of T(v) equals the representation of T times the coordinate vector of v.

Back to the example: T: R_3[x] -> R_3[x] given by T(p(x)) = p'(x), with the basis E = (x^3, x^2, x, 1). According to the procedure, we apply T to each of the basis elements and write the result as a linear combination of the basis:

T(e_1) = T(x^3) = 3x^2 = 0·x^3 + 3·x^2 + 0·x + 0·1
T(e_2) = T(x^2) = 2x = 0·x^3 + 0·x^2 + 2·x + 0·1
T(e_3) = T(x) = 1 = 0·x^3 + 0·x^2 + 0·x + 1·1 (that last dot is a product; there is no mysterious "11" there)
T(e_4) = T(1) = 0 = 0·x^3 + 0·x^2 + 0·x + 0·1

Now take these coefficients and transpose: each row of coefficients becomes a column. So [T]_E, the matrix representation of T with respect to E, has columns (0, 3, 0, 0), (0, 0, 2, 0), (0, 0, 0, 1) and (0, 0, 0, 0), which is exactly the A we found by hand, without knowing the algorithm.

So this is the idea: this is how we find a matrix representing a transformation with respect to a chosen basis, in the situation where T is a linear operator from V to V.
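Here is the same procedure carried out mechanically (a sketch of my own, not from the lecture): write each basis polynomial as a coordinate vector, differentiate, and stack the results as columns.

```python
import numpy as np

def deriv_coords(v):
    """Coordinates of p' in the basis E = (x^3, x^2, x, 1),
    given the coordinates v = [a, b, c, d] of p."""
    a, b, c, d = v
    return np.array([0, 3 * a, 2 * b, c])

# e_1 = x^3, e_2 = x^2, e_3 = x, e_4 = 1, as coordinate vectors.
E = np.eye(4, dtype=int)

# Column j of [T]_E is the coordinate vector of T(e_j).
T_E = np.column_stack([deriv_coords(e) for e in E])
print(T_E)
# [[0 0 0 0]
#  [3 0 0 0]
#  [0 2 0 0]
#  [0 0 1 0]]
```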
Now that you're no longer scared of the statement [T]_E [v]_E = [T(v)]_E, assuming you feel you understand what it says (you take the coordinate vector of v, the coordinate vector of T(v), and the matrix representation in between), we're going to expand this to a map not from V to V but from V to W in general. It will be precisely the same idea, just a bit more involved, because we won't be able to use the same basis E for V and W: they are different spaces. So there will be a basis E for V and a different basis, call it F, for W, and the representing matrix will be taken with respect to those two bases. It will look slightly scarier, but it's the same idea.

Let me also say what is really happening here. There are two things going on. One is that linear transformations can be encoded by matrices. The way to do that rests on an underlying encoding of vectors as their coordinate vectors, taking a polynomial ax^3 + bx^2 + cx + d and encoding it as the vector (a, b, c, d); and that gives rise, one level up, to an encoding of a linear transformation as a matrix.

So here is an example that will lead us to the more general situation of a map from V to W. Take T from M_2(R), the 2x2 real matrices, to R_2[x], the polynomials of degree at most 2. These are two different spaces: you can't take a basis for one and a basis for the other and have them be the same basis. To give T, I have to tell you what it does to a 2x2 matrix, and what it produces should be a polynomial:

T( [[a, b], [c, d]] ) = (a + 2b + 3c + 4d)·x^2 + (2a - b + c)·x + (3a - 2b + c + 2d).

This is indeed a polynomial of degree at most 2. The first thing to do is verify that T is linear. In fact, if you've practiced a bit and gotten used to the ideas, you should see right away that it is: each coordinate of the image is a linear combination of the coordinates of the domain element, and we said that such maps are linear. If you're not sure you could carry out this verification immediately, that means you still need the practice, so do it (there is also a quick numerical sanity check in the sketch below).

What we're going to do now is trace the exact same steps as on the previous board: when it was the same space, we chose a basis, applied T to the basis elements, wrote the results as linear combinations of basis elements, and took the transpose. Now we do the same thing, slightly differently. We choose a basis E for the domain:

E = ( [[1,0],[0,0]], [[0,1],[0,0]], [[0,0],[1,0]], [[0,0],[0,1]] ),

a basis for M_2(R); call these e_1, e_2, e_3, e_4. For the range space there will be a different basis, which again we have the liberty to choose, and since we're just doing an example let's make the easiest choice possible: F = (x^2, x, 1), a basis for R_2[x]; call these f_1, f_2, f_3.
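The promised sanity check of the linearity claim (my own sketch; a 2x2 matrix is handled through its entries, and a polynomial in R_2[x] through its coefficient triple with respect to F):

```python
import numpy as np

def T(M):
    """T applied to the 2x2 matrix M = [[a, b], [c, d]]; the result is the
    coefficient vector (w.r.t. x^2, x, 1) of the polynomial in the lecture."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return np.array([a + 2*b + 3*c + 4*d,
                     2*a - b + c,
                     3*a - 2*b + c + 2*d])

# Linearity check on random inputs: T(s*M + t*N) == s*T(M) + t*T(N).
rng = np.random.default_rng(0)
M, N = rng.random((2, 2)), rng.random((2, 2))
s, t = 2.0, -3.5
print(np.allclose(T(s*M + t*N), s*T(M) + t*T(N)))   # True
```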
Now, just as before, we apply T to each basis element of the domain and write each result as a linear combination of the basis elements of the range. Let me show the first one carefully so you see what's going on. What is T(e_1)? e_1 is [[1,0],[0,0]], so I plug a = 1 and b = c = d = 0 into T and get 1·x^2 + 2·x + 3·1; written as a linear combination of the range basis, T(e_1) = 1·f_1 + 2·f_2 + 3·f_3. The first time you see this it looks confusing, but after you do a couple it becomes a completely standard procedure; there is really nothing tricky going on. So we need T(e_1), T(e_2), T(e_3) and T(e_4), each written as a linear combination of the elements of F, and then we take the transpose. Let's do all four, checking each against the formula for T:

T(e_1) = T([[1,0],[0,0]]) = 1·x^2 + 2·x + 3·1 = 1·f_1 + 2·f_2 + 3·f_3
T(e_2) = T([[0,1],[0,0]]) = 2·x^2 - 1·x - 2·1 = 2·f_1 - 1·f_2 - 2·f_3 (here b = 1 and a, c, d are 0)
T(e_3) = T([[0,0],[1,0]]) = 3·x^2 + 1·x + 1·1 = 3·f_1 + 1·f_2 + 1·f_3
T(e_4) = T([[0,0],[0,1]]) = 4·x^2 + 0·x + 2·1 = 4·f_1 + 0·f_2 + 2·f_3

Now I take the coefficients and write the transpose. The resulting matrix is denoted with both bases that were involved, the basis E for the domain and the basis F for the range, as [T]_{E,F}, and it is

[T]_{E,F} =
( 1  2  3  4 )
( 2 -1  1  0 )
( 3 -2  1  2 )

where the coefficients of T(e_1) became the first column, and so on. And it satisfies: if you take this matrix and multiply it by a coordinate vector [v]_E with respect to the basis E, what you get is [T(v)]_F, what T does to v, expressed as a coordinate vector with respect to the basis F of the range:

[T]_{E,F} · [v]_E = [T(v)]_F.

Now are you freaked out? No, right? It's precisely what happened before. And this is essentially already completely general; when I write the general statement I'll scare you one bit more by writing f_1, f_2, f_3 instead of x^2, x, 1, but that's it, that's the most general case.
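The column-by-column construction and a check of that identity in code (again my own sketch, reusing the coordinate version of T from above):

```python
import numpy as np

def T(M):
    a, b, c, d = M.ravel()                  # entries of [[a, b], [c, d]]
    return np.array([a + 2*b + 3*c + 4*d,   # coefficient of x^2
                     2*a - b + c,           # coefficient of x
                     3*a - 2*b + c + 2*d])  # coefficient of 1

# Basis E of M_2(R): the four standard matrix units e_1, ..., e_4.
E = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

# Column j of [T]_{E,F} is [T(e_j)]_F, the coefficient vector of T(e_j).
T_EF = np.column_stack([T(e) for e in E])
print(T_EF)
# [[ 1  2  3  4]
#  [ 2 -1  1  0]
#  [ 3 -2  1  2]]

# Check the identity [T]_{E,F} [v]_E = [T(v)]_F on a sample matrix.
V = np.array([[1, 2],
              [3, 4]])
v_E = V.ravel()                              # coordinates (a, b, c, d) w.r.t. E
print(np.array_equal(T_EF @ v_E, T(V)))      # True
```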
Before the general statement, do you recognize this matrix? It's one we've become buddies with; we've encountered it many times throughout this course so far. So notice what happened: I somehow managed to construct an example that yields this particular matrix as the matrix representation of the transformation. You might ask how I did that, because it's the opposite direction: I started from a matrix and wanted to construct a linear transformation that has it as its representing matrix.

So how did I do it? All I need, for this to be the representing matrix of some T, is for T to be given, in coordinates, by multiplication by A. The matrix is 3x4, so it can multiply 4-tuples; therefore it acts as a linear transformation on a four-dimensional space. If I just took R^4 that would be too easy; I wanted to make it interesting, so I took another four-dimensional space, M_2(R). I don't care how it started out; I know that after writing coordinate vectors I'm going to get (a, b, c, d), and that's what A will multiply. For the range space: multiplying this matrix by a column vector with four entries gives a column vector with three entries, so the range space has to be three-dimensional, and I chose, for example, R_2[x], the polynomials of degree at most 2. Again, I don't care which three-dimensional space it is; all I care about is that its coordinate vectors have three entries: something times the first basis element, plus something times the second, plus something times the third. That's how I chose the domain and range spaces, M_2(R) and R_2[x], a four-dimensional space and a three-dimensional space.

That still doesn't say what the map does. How did I get T to be given by precisely that formula? I chose the basis E for the domain, and I decided what I want T of the basis elements to be; and I know that once T is fixed on basis elements, the entire linear map is determined. That was a statement from the previous clip: defining T on the e_i's determines the map. So I knew I wanted T([[1,0],[0,0]]) to be the polynomial whose coefficients form the first column, T([[0,1],[0,0]]) to give the second column, and so on. Once that is fixed, T([[a,b],[c,d]]) is determined: it is a times the first image plus b times the second plus c times the third plus d times the fourth, which is precisely the formula I defined.

I'm not going to write out everything I just said, because it was really a rephrasing of ideas that were already out there; if it went too fast or you didn't absorb everything, rewind and play it again.
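That reverse direction can be written out the same way (a sketch under my own naming; the function `T_from_A` and the string output are illustrative choices, not the lecture's): encode the input in the basis E, multiply by A, decode in the basis F.

```python
import numpy as np

A = np.array([[1,  2, 3, 4],
              [2, -1, 1, 0],
              [3, -2, 1, 2]])

def T_from_A(M):
    """Define T: M_2(R) -> R_2[x] from the matrix A alone."""
    v_E = M.ravel()                   # [M]_E = (a, b, c, d)
    w_F = A @ v_E                     # [T(M)]_F
    p, q, r = w_F
    return f"{p}*x^2 + {q}*x + {r}"   # the polynomial p*x^2 + q*x + r

print(T_from_A(np.array([[1, 0], [0, 0]])))   # 1*x^2 + 2*x + 3
```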
But I will give it as an exercise, to see if you really understood. Homework: try this. Start with any matrix A, your favorite one; don't choose a 17x35 matrix, that would take too long, but pick your favorite 3x4 or 2x3. Find a T, from some V to some W (you have to decide where it goes from and to), together with a basis E for V and a basis F for W, such that [T]_{E,F}, the matrix representation of T with respect to E and F, is the A you started with. That is precisely what I described here: how I found a T, a V and a W so that the representing matrix came out to be this particular matrix. If you can do that, then you really understood this material; you know what you're doing.

So now I want to write this in general: take the example, generalize it, make it abstract, and then it will become a theorem. All the ingredients are already on the board for the operator case, and I'm only going to modify them slightly.

How do we find [T]_{E,F}? Let T: V -> W be a linear map; it's no longer a linear operator, because it doesn't go from a space to itself. Let E = (e_1, ..., e_n) be a basis for V; but that is not enough now: let F = (f_1, f_2, ..., f_m) also be a basis for W. There are two bases, one for the domain and one for the range.

Now I apply T to the e_j's, just as before, but the images are no longer linear combinations of the e_i's; they are elements of W, so they are linear combinations of the f_i's, and the sums now stop at m. So:

T(e_1) = a_{11} f_1 + a_{21} f_2 + ... + a_{m1} f_m,

since any element of the range is a linear combination of the basis of the range. Likewise

T(e_2) = a_{12} f_1 + a_{22} f_2 + ... + a_{m2} f_m,

and so on, down to

T(e_n) = a_{1n} f_1 + a_{2n} f_2 + ... + a_{mn} f_m.

As before, the first index of each coefficient records which f it multiplies, and the second index records which e_j it came from.
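In symbols, the general setup just described (again only a restatement of the board):

```latex
T(e_j) \;=\; \sum_{i=1}^{m} a_{ij}\, f_i \qquad (j = 1,\dots,n),
\qquad
[T]_{E,F} \;=\;
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots &        & \ddots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix},
```

an m x n matrix whose j-th column is the coordinate vector [T(e_j)]_F.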
Take the transpose of the coefficient matrix, and the matrix you get, now with m rows and n columns and with last row (a_{m1}, a_{m2}, ..., a_{mn}), is [T]_{E,F}, the representing matrix of the transformation T with respect to the two chosen bases. You can trace back through the example and see that this is precisely what we did there.

Now, there is a theorem here. I presented this as an algorithm; I told you "this is what you do," and that is not a way of proving things. The statement is: if you follow this procedure, that is, take a linear transformation, choose a basis for the domain space V, choose a basis for the range space W, apply T to all the basis elements, spell them out as linear combinations of the basis elements of W, and take the transpose of the coefficients, then the matrix you get satisfies

[T]_{E,F} · [v]_E = [T(v)]_F.

That is the theorem: the full formulation starts with the setup above and ends not with "this is A" but with "then A satisfies this identity." It requires proof, and proving it is a bit of a nuisance; it's not long, just a bunch of index work that does precisely what we did in the examples and shows that you get the same thing. You decipher what the columns of the matrix are (they are the coefficient vectors above), write v = alpha_1 e_1 + ... + alpha_n e_n so that T(v) is the corresponding linear combination of the T(e_j), multiply by [v]_E, and check that you get exactly the coordinate vector of T(v) with respect to F. I'm not going to go to the trouble of writing out all the details, because it is just tracing indices, and there is not much benefit in spelling it out; it's good practice if you want to check that indices don't scare you, but that's all there is to it. What is important is to remember the algorithm, to understand via the example why it does precisely what we want, and to understand the statement itself: not to freak out when you see it written, but to know what it stands for.
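The whole algorithm fits in a few lines once you fix coordinate maps for the two spaces (my own sketch; the function names are not from the lecture, and the example reuses the derivative operator, where the vectors are already stored as coordinate arrays so the coordinate map for W is the identity):

```python
import numpy as np

def representing_matrix(T, basis_V, coords_W):
    """[T]_{E,F}: apply T to each basis vector of V and stack the
    F-coordinate vectors of the images as columns."""
    return np.column_stack([coords_W(T(e)) for e in basis_V])

# Example: the derivative operator on R_3[x], with vectors stored as
# coefficient arrays (a, b, c, d) for a*x^3 + b*x^2 + c*x + d.
deriv = lambda v: np.array([0, 3 * v[0], 2 * v[1], v[2]])
E = list(np.eye(4, dtype=int))
print(representing_matrix(deriv, E, coords_W=lambda w: w))
# reproduces the matrix [T]_E found earlier
```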
A question was asked: is there any reason in the world to ever use a basis other than the standard basis? E is a basis for the entire space V, so why choose a different one; isn't the standard basis nice and friendly and easy to work with? That's a great question, and it is an entire topic that is going to be our next topic, but I can give you a spoiler and answer in brief.

Given a linear transformation, we choose a basis and produce a matrix. Go back to the example: the matrix we produced is a pretty nasty one, it has only one zero in it. In this case it's a 3x4 matrix, so that's not such a big deal; but suppose you have a linear transformation from a 257-dimensional space to a 4524-dimensional space and you want your computer to compute with it. You produce the matrix representation, and then all the computer has to do is multiply that matrix by coordinate vectors. If you can somehow arrange for the matrix to have many, many zeros in it, the computer will work a lot less. So the goal, and that's where we're heading, is not just to take the standard basis and find the representing matrix with respect to it; that gives you no control, it just produces whatever matrix it produces. We want to choose the bases wisely, so that the representing matrices become very easy to work with, for example have lots of zeros and ones. That's when we'll say: fine, we compromise, we don't take the standard basis, we take a different basis, possibly different bases for the two spaces, and the benefit is that we get very nice-looking matrices. A whole topic is going to be: when can we guarantee that the matrix we get is, say, diagonal, which is about the nicest thing possible; and if it can't be diagonal, maybe it can be almost diagonal, with just a few nonzero off-diagonal entries here and there. There will be terminology for all of this; we're not there yet, but that was a great question and I'm very happy it was asked. The standard basis is not optimal, in the sense that choosing different bases, not E and F but some other pair, might produce a representing matrix here with ones and all the rest zeros; maybe. We don't know that yet, and we don't yet know how to control which matrices we get; there is a lot we still have to discuss, the semester is not over, there's still a month to go. But that is a big goal, and the flexibility we have is precisely in choosing different bases.
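A tiny numerical preview of that point (my own illustration, not the lecture's example): the same operator on R^2 represented in the standard basis and then in a basis of eigenvectors, where the change-of-basis formula P^{-1} A P produces a diagonal matrix.

```python
import numpy as np

# An operator on R^2, written in the standard basis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of P form another basis of R^2 (here: eigenvectors of A).
_, P = np.linalg.eigh(A)

# The representing matrix with respect to the new basis is P^{-1} A P,
# which comes out (approximately) diagonal: far nicer to compute with.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # approximately [[1, 0], [0, 3]]
```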
So this wraps up this discussion. The bottom line is the theorem, but the bottom line is also how we actually carry it out, and I hope you got the idea.

What we want to do next is the following. There is a relation between matrices and linear transformations, and it is not a vague relation, it is very precise: given a linear transformation I know how to produce a matrix, and given a matrix I know how to produce a linear transformation. It goes both ways; matrices and linear transformations are, in fact, two ways of looking at the same thing. Now, on matrices we have a bunch of operations: we know how to add matrices, how to multiply a matrix by a scalar, how to multiply matrices, and we have inverse matrices; there is a lot of structure there. That structure should translate to linear transformations: operations on linear transformations. How do we multiply linear transformations, and what does that stand for? How do we add them, and what does that stand for? And once we define all of those operations, how do they relate to going back and forth between linear transformations and matrices? The statement will be that they correspond precisely: add the linear transformations and then pass to the matrix side using this procedure, or pass to the matrix side and then add, and you get the same thing; multiply and then pass over, or pass over and then multiply, and you get the same thing. So we have to define the operations on linear transformations and then show that they correspond exactly to the equivalent operations on matrices. That's where we're heading.
Info
Channel: Technion
Views: 22,119
Rating: 4.9679999 out of 5
Keywords: Technion, Algebra 1M, Dr. Aviv Censor, International school of engineering
Id: DKshIFvXSns
Length: 60min 26sec (3626 seconds)
Published: Sun Nov 29 2015