Properties of Matrix Transformations

Video Statistics and Information

Captions
Having looked at some basic matrix transformations, in particular operators in two- and three-space that do interesting things for vector graphics, we want to look more in depth at properties of matrix transformations. The main property we want to examine is the composition of matrix transformations. Suppose we are given two matrix transformations: T_A, going from a domain of n-space to a codomain of k-space, and T_B, going from a domain of k-space to a codomain of m-space. From these two we can create a transformation from n-space to m-space called the composition of T_B with T_A, written T_B ∘ T_A. You have seen this notation before when we talked about composition of functions: f ∘ g means f(g(x)), and that is exactly what this is. (T_B ∘ T_A)(x) = T_B(T_A(x)); we are composing the two transformations.

For instance, let one plane represent n-space, containing a little vector x, an n-tuple, and let another plane represent k-space. The transformation T_A goes from its domain, n-space, to its codomain, k-space, so it maps x to a vector in k-space. How do we denote that vector? It is what we call the image of x under the transformation, T_A(x), and it is found by matrix multiplication. Then I can go from k-space to m-space by transformation T_B. How would we denote the image under T_B, the vector in m-space (an m-tuple)? It is T_B of what we are inputting, namely T_B(T_A(x)). So we have a composition going: first I take x and apply T_A to get T_A(x), then I plug that into
transformation T_B, so to speak, and I get the final result. Again, it is like taking a little trip in an airplane from n-space to m-space with a layover: we land in k-space (in Atlanta or something), get off, and get on the next plane. It is much better if you can take a one-way, nonstop flight, and that is what a composition does: it goes directly from n-space to m-space. That transformation is what we call T_B ∘ T_A, which is sometimes written as T_BA.

Now, this is really important to understand: the order matters here. Notice that we do transformation T_A first, even though T_B appears first in the notation; the product builds to the left as we go. Be really careful, because the common mistake is to get this order wrong. It is T_A first, then T_B.

Next, the standard matrix for transformation T_A, which we will just call A: what size is that matrix? Do you remember how to determine the size of a standard matrix? T_A goes from n-space to k-space, so A has to multiply n-tuples, which means it has n columns, and its outputs are k-tuples, so it has k rows. A is k × n because T_A goes from n-space to k-space: if I am going to multiply A by an n-tuple, an n × 1 vector, it has to be that size. Transformation T_B's standard matrix we will call B. What size is it? It is m × k, because it takes k-tuples and outputs m-tuples.

So when we look at T_BA(x), this transformation is T_B(T_A(x)); that is the definition. How do we find T_A(x)? It is A times x: to take the first flight, I multiply by the standard matrix A. We said A is k × n, and x is n
× 1, which makes the whole thing Ax a k × 1 vector, as it has to be: the image under the transformation resides in k-space, and it is a k-tuple. Then T_B(Ax): how do I do that? It is B times (Ax); to travel along the second flight, so to speak, I multiply by the standard matrix B for transformation T_B. Now B, again, is m × k, and we said Ax is k × 1, so the whole thing B(Ax) is m × 1, an m-tuple, as it should be. And we can write it as just (BA)x. What size is this matrix BA? If I take an m × k times a k × n, I get an m × n matrix, and it goes from n-space to m-space. That is indeed the standard matrix for the composition T_BA. Does the way we multiply the matrices matter? Is the order important? Yes: remember, matrix multiplication is not necessarily commutative, so it is really important that you understand the order. I am doing T_A first and then T_B, but the product is written BA.

Now I can extend this concept. Imagine a flight with two connecting stops: you fly here, then here, then board another plane. We can have compositions of more than two transformations, three or four, but we can always form the one nonstop route. For example, what would it look like with a third transformation, T_C ∘ T_B ∘ T_A? By that we mean T_C(T_B(T_A(x))), and we call this matrix transformation T_CBA. What is its standard matrix? Just CBA, in that order. Notice again that T_A is what we do first and T_C is what we do last, and it is important that you get that order right. That is a common mistake: if you change things and multiply in the reverse order, the result can be incorrect, because again
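The size bookkeeping described here (A is k × n, B is m × k, and the nonstop matrix BA is m × n) is easy to verify numerically. Below is a minimal NumPy sketch (NumPy is not part of the lecture; it is used here only to check the arithmetic), with the sizes n = 4, k = 3, m = 2 chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 4, 3, 2                       # T_A: R^n -> R^k, T_B: R^k -> R^m
A = rng.integers(-3, 4, size=(k, n))    # standard matrix of T_A, k x n
B = rng.integers(-3, 4, size=(m, k))    # standard matrix of T_B, m x k
x = rng.integers(-3, 4, size=(n, 1))    # an n-tuple as a column vector

# Two-hop trip: apply T_A first, then T_B
two_hop = B @ (A @ x)

# Nonstop flight: the single standard matrix BA of the composition
BA = B @ A                              # m x n
nonstop = BA @ x

assert BA.shape == (m, n)
assert np.array_equal(two_hop, nonstop)
```

The equality holds for any choice of sizes and entries, since matrix multiplication is associative: B(Ax) = (BA)x.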
matrix multiplication is not necessarily commutative.

Let's take a look at an example using results from the last lecture on the basic matrix transformations (assuming you watched that). Find the standard matrix for the operator on two-space obtained by a rotation of 90 degrees about the origin followed by a dilation with factor 4. Call the rotation transformation T_A and the dilation transformation T_B; we do T_A first and then T_B. Let A be the standard matrix for T_A. In the last video we saw that a rotation matrix is

  [cos θ  −sin θ]
  [sin θ   cos θ]

where θ in this case is 90 degrees. Evaluating, cos 90° = 0 and sin 90° = 1, which gives

  A = [0  −1]
      [1   0].

Now let B be the standard matrix for T_B. If you recall, dilations and contractions are just scalar multiples: the factor k, which here is 4, goes down the main diagonal with zeros everywhere else:

  B = [4  0]
      [0  4].

That is how you do any dilation or contraction; a dilation is an expansion because the value of k is greater than 1. So our standard matrix for T_BA (again, notice it is not T_AB, because we do T_A first and build to the left) is just B times A:

  BA = [4  0] [0  −1] = [0  −4]
       [0  4] [1   0]   [4   0].

The multiplication is easy, row by column: the entries come out 0, −4, 4, 0. There is our standard matrix, so I can take the nonstop flight: I multiply by this single matrix instead of multiplying by A and then by B.

Now let's look at the image of the vector x = (2, 0) under this operator. What is T_BA(2, 0)? We multiply by the standard matrix we just found, the nonstop flight. With the vector in column form, 0·2 + (−4)·0 = 0 and 4·2 + 0·0 = 8, so the image is (0, 8) in column form, or I
can write it back in comma-delimited form: (0, 8). Let's see if this makes sense. We can look at it geometrically because we are in two-space. On our xy-axes, we start with the vector (2, 0); we will just plot the terminal point. What is the first thing we do? A rotation (not a reflection) of 90 degrees about the origin. Rotating 90 degrees about the origin, transformation T_A lands us at the point (0, 2). Then what do we do? We dilate by a factor of 4, an expansion, which basically multiplies the vector by 4: multiplying (0, 2) by 4 gives (0, 8). So transformation T_B just shoots the vector straight out to the point (0, 8), which is exactly what we computed.

Now, I did say you have to be careful about doing things in the correct order. What would the matrix AB be? That would be doing the dilation first and then the rotation, that is, T_AB. But if you swap the order of these two matrices and multiply A times B in this case, guess what happens: I'll let you check, and you get exactly the same matrix. So T_AB = T_BA; the standard matrices are the same, and in this case we say that the operators T_AB and T_BA commute. You can see why that is the case here: if I instead did the expansion first, expanding (2, 0) by a factor of 4 away from the origin takes me out to (8, 0), and then doing the rotation brings me around to (0, 8). You
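Both routes of this example, the two-stop trip (rotate, then dilate) and the nonstop matrix BA, can be checked in a few lines of NumPy (an illustration only, not part of the lecture):

```python
import numpy as np

theta = np.pi / 2                                   # 90-degree rotation about the origin
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rotation, done first
B = 4 * np.eye(2)                                   # dilation with factor 4, done second

BA = B @ A                                          # standard matrix of T_BA
x = np.array([2.0, 0.0])

image = BA @ x
assert np.allclose(image, [0.0, 8.0])               # (2, 0) maps to (0, 8)

# In this particular case the operators commute: AB equals BA
assert np.allclose(A @ B, B @ A)
```

The final assertion succeeds here only because a dilation matrix is a scalar multiple of the identity, which commutes with every matrix; it fails for general pairs of operators.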
see, it is the same thing either way, and in this case you would be okay if you mixed up the order when forming the standard matrix. In general, though, that is not the case; let me show you in this next example. Find the standard matrix for the operator given by a reflection about the line y = x, which we will call transformation T_A, then an orthogonal projection onto the y-axis, transformation T_B, and then a reflection in the x-axis, transformation T_C. (If you printed my notes out early in the semester: I added that last piece to the problem, hence the extra dots.) So there are three things going on. Let A be the standard matrix of T_A; we found the matrix for a reflection about the line y = x in the last video:

  A = [0  1]
      [1  0].

Let B be the standard matrix of T_B, the orthogonal projection onto the y-axis, which we also did:

  B = [0  0]
      [0  1].

And let C be the standard matrix of T_C, the reflection in the x-axis, which we did as well:

  C = [1   0]
      [0  −1].

We derived all of these in the last video, so if you are wondering how to get them, make sure you review it. What I am interested in is the standard matrix of the transformation T_CBA. Notice again that T_A is the first thing we do and T_C is the last, so the standard matrix is C times B times A, multiplied in that order. I'll let you do the arithmetic; if you do, you get

  CBA = [ 0  0]
        [−1  0].

So there is the answer. Instead of multiplying by A, then B, then C, I can multiply once by this single matrix and get the output of the composition of the three transformations: reflection about the line y = x, orthogonal projection onto the y-axis, and reflection in the x-axis. Now consider T_ABC, where we are doing T_C first
and T_A last. What is its standard matrix? It is A times B times C, the opposite order:

  ABC = [0  1] [0  0] [1   0] = [0  −1]
        [1  0] [0  1] [0  −1]   [0   0].

Notice that these two matrices are not equal to each other. Unlike the last example, where the two operators commuted, these operators do not commute. So this is really important: when you are composing transformations, the order matters, and sometimes you will get different standard matrices. Make sure you build the product in the correct order: whatever is done first goes to the far right, and you build to the left as you apply the remaining operators. You will get plenty of practice with that in the homework for section 4.10.

Let's look at one last topic and summarize a few things we have already talked about, to put them in perspective as we finish chapter 4; then I have one more thing to say about properties of matrix transformations. A matrix transformation T_A from n-space to m-space is said to be one-to-one if it maps distinct vectors (points) in n-space into distinct vectors in m-space. Another way of saying T_A is one-to-one: if two images under the transformation are equal, T_A(u) = T_A(v), then the corresponding pre-images, the inputs, must be equal, u = v. In other words, I cannot get the same image from two different vectors if the transformation is one-to-one.

Here is something else we have talked about, as review. For T_A from n-space to m-space (domain n-space, codomain m-space), the set of all vectors in the domain that map into the zero vector in the codomain is called the kernel of the transformation, denoted ker(T_A). We have talked about the kernel before; it is
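The two triple products of this example can be checked directly; a short NumPy sketch (again just an illustration) confirms that CBA and ABC disagree:

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])      # reflection about the line y = x (done first)
B = np.array([[0, 0], [0, 1]])      # orthogonal projection onto the y-axis (second)
C = np.array([[1, 0], [0, -1]])     # reflection in the x-axis (done last)

CBA = C @ B @ A                     # standard matrix of T_CBA
ABC = A @ B @ C                     # the reversed order

assert np.array_equal(CBA, [[0, 0], [-1, 0]])
assert np.array_equal(ABC, [[0, -1], [0, 0]])
assert not np.array_equal(CBA, ABC)   # these operators do not commute
```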
basically the same as the null space of the matrix A, as we are going to see. The set of all vectors in the codomain (m-space) that are images under the transformation of some vector in the domain is called the range of the transformation. This is important: when we first started talking about domain and codomain, a lot of you said, "I thought it was domain and range." Well, here is where the range comes into play. The domain and the codomain are the vector spaces where these functions operate, so to speak: the inputs live in the domain and the outputs, the images, live in the codomain. The range is a subset of the codomain; in fact, we are going to see it is a subspace of the codomain. The range is the set of all outputs you get under the transformation: a vector in the codomain is in the range as long as there is a vector in the domain that maps to it.

As I just said, the kernel of the transformation is the same as the null space of A, and the range of the transformation is the same as the column space of A. Remember where that last fact comes from: we proved that the system Ax = b is consistent if and only if b is in the column space of A. Why? Because Ax can be written as a linear combination of the columns of A. So if Ax = b, then b can be written as a linear combination of the columns of A, which means b is in the column space of A.

So we have been talking about a lot of these things from three different viewpoints: we have a system-of-equations view, we have a
matrix view, and we have a transformation view. In the matrix view, the null space of a matrix A is the same as the solution space of Ax = 0; that is the system view, the set of all vectors x that satisfy Ax = 0. The transformation view of the same thing is the kernel: the set of all x that get mapped to the zero vector in the codomain. The column space of the matrix A is the set of all b in m-space such that Ax = b is consistent (the system view), which is the same as the range of the transformation. This is a summary of things we have talked about, but it brings these very important viewpoints together.

One more definition: a matrix transformation from n-space to m-space is called onto if the range of the transformation is equal to its codomain, which is m-space. In other words, if the range fills all of the codomain, then the transformation is onto.

Now let's take a look at this important theorem. Notice the hypothesis: A is not m × n in general but n × n, a square matrix, and T_A from n-space to n-space is the corresponding matrix operator, so codomain and domain are the same. Then the following are equivalent:

(a) A is invertible.
(b) The kernel of T_A is {0}.
(c) The range of T_A is all of n-space.
(d) T_A is one-to-one.

Why is (b) true? Remember, if A is invertible, then Ax = 0 has only the trivial solution, and the solution space of Ax = 0 is the null space of the matrix, so the null space is only the trivial solution, the zero vector, the zero subspace. Why is (c) true? Because the range is the column space, and the column
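The dictionary above (kernel = null space, range = column space) suggests a quick numerical test: the kernel is trivial exactly when rank(A) equals the number of columns, and T_A is onto exactly when rank(A) equals the number of rows. Here is a small sketch with my own helper names (these helpers are not from the lecture, just an illustration of the idea):

```python
import numpy as np

def kernel_is_trivial(A):
    # ker(T_A) = null space of A is {0} iff rank(A) = number of columns
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_onto(A):
    # range(T_A) = column space of A fills the codomain iff rank(A) = number of rows
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[0, -4], [4, 0]])     # rotation-then-dilation operator: invertible
M = np.array([[1, 2], [2, 4]])      # rank 1: second column is twice the first

assert kernel_is_trivial(A) and is_onto(A)
assert not kernel_is_trivial(M)                        # e.g. (2, -1) is in ker(T_M)
assert np.array_equal(M @ np.array([2, -1]), [0, 0])
```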
space, if A is invertible, is all of n-space, because, if you remember those equivalent statements, when A is invertible the column vectors of A are linearly independent, they span n-space, and therefore they are a basis for n-space. So the column space of A is all of n-space. And (d), T_A is one-to-one, is what we talked about above; it is true because if A is invertible then there is exactly one solution to Ax = b for every b in n-space. So those statements are equivalent.

If we go to the textbook, we can pull up our wonderful list of equivalent statements. If you remember, in the previous videos we got down to (q), I believe; now I have added (r), (s), and (t) to the list: the kernel of the transformation is the zero subspace, the range of the transformation is all of n-space, and T_A is one-to-one. Those are the three we just added, and they are equivalent to A being invertible. These are all connections we need to know about, because when we are asking about, say, linear independence, sometimes it is easier to compute the determinant of the coefficient matrix and check that it is not equal to zero, or to find something else that is a little easier, and we can still obtain the answer, because all of these statements are either all true or all false. That is the beauty of this big equivalent-statements theorem.

In closing, I want to talk about one last thing that is quite simple, going back to matrix transformations themselves. If we have a matrix operator, so the domain and codomain are the same, that is one-to-one, in other words the standard matrix A of the transformation is invertible, then the
matrix operator T with the subscript A⁻¹, that is T_{A⁻¹}, is called the inverse operator, or more simply the inverse of the transformation, and quite simply the standard matrix for that operator is the inverse of the standard matrix for the operator T_A, which is just A⁻¹. So it is just the idea of inverses viewed from the matrix-transformation side.

Let's do these two examples and we will be done. First: show that the matrix operator T defined by the equations

  w₁ = 5x₁ − 7x₂
  w₂ = −2x₁ + 3x₂

is one-to-one, find the standard matrix for the inverse transformation, and then find the image of (w₁, w₂) under that inverse transformation. Remember, this system is represented as w = Ax, where w is the vector (w₁, w₂), A is the standard matrix of the transformation, and x is the vector (x₁, x₂). So what is the standard matrix of this matrix operator? It is just the coefficient matrix:

  A = [ 5  −7]
      [−2   3].

(Careful: that is the standard matrix for T; the standard matrix for the inverse comes in a moment.) First let's show that this operator is one-to-one, so that an inverse even exists. The easiest way, from our list of equivalent statements, is the determinant of the standard matrix: det(A) = 5·3 − (−7)(−2) = 15 − 14 = 1, which is not equal to zero, and therefore, from the equivalent statements, the transformation is one-to-one. So there is the first part. Now, what is the standard matrix for the inverse operator? It is just the inverse of the standard matrix. We can find it with a calculator, or there is also the little 2 × 2 formula we gave before
that we can use. I will let you check that

  A⁻¹ = [3  7]
        [2  5]

is indeed the standard matrix for the inverse operator, usually denoted T_{A⁻¹}. The last thing was to find the image of (w₁, w₂) under this inverse transformation. Multiplying A⁻¹ by (w₁, w₂) in column form, row by column: the first entry is 3w₁ + 7w₂ and the second is 2w₁ + 5w₂. This is important, because it gets us back to (x₁, x₂), where we started. Notice we get the system of linear equations

  x₁ = 3w₁ + 7w₂
  x₂ = 2w₁ + 5w₂.

That is the answer to what we were looking for, and I want it written in this form because of what it says: this is how we undo the transformation, in a sense. Come back up to the original equations: if I take x₁ and x₂ and plug them in, I get w₁ and w₂; if I then take those values of w₁ and w₂ and plug them in here, I get back my original values of x₁ and x₂. Try it: pick any two real numbers x₁, x₂, plug them in to get w₁, w₂, then take those values, plug them in here, and get back your x₁, x₂. It is a good exercise to see how that works. That is how we undo operators.

Now the last question: is the operator on two-space that performs an orthogonal projection onto the y-axis one-to-one? Before we do anything, let's just think about it. Here is our y-axis. Take any point at a height of, say, y = 2, and put some x-values down. If I am at the point (2, 2), where does it go when it is projected orthogonally onto the y-axis? It goes
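This whole worked example, the determinant test, the inverse matrix, and the round trip x → w → x, can be reproduced in a few lines of NumPy (an illustration only, not part of the lecture):

```python
import numpy as np

A = np.array([[5.0, -7.0], [-2.0, 3.0]])    # standard matrix of the operator T
assert np.isclose(np.linalg.det(A), 1.0)    # nonzero determinant, so T is one-to-one

A_inv = np.linalg.inv(A)                    # standard matrix of the inverse operator
assert np.allclose(A_inv, [[3.0, 7.0], [2.0, 5.0]])

# Round trip: pick any x, map forward to w = Ax, then recover x = A_inv w
x = np.array([1.0, 2.0])
w = A @ x
assert np.allclose(A_inv @ w, x)
```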
right to (0, 2). What about (−2, 2)? It projects to the same point. What about (3, 2)? It projects there too. (−3, 2)? There. (10, 2)? Also there. Everything on the line y = 2 is projecting onto that one point, so clearly this is not one-to-one. One-to-one requires that distinct vectors in the domain have distinct images in the codomain, and that fails here: every point, every vector on the line y = 2 goes to the same point. We can also see it from the standard matrix for this projection, which we found in the last video:

  [0  0]
  [0  1].

What is the determinant of that standard matrix? It is quite clearly zero: there is a row of zeros (and a column of zeros), and as we know, that makes the determinant zero, which means T is not one-to-one. So we can quickly compute that the determinant of the standard matrix is 0, which means all of those statements we were looking at for an invertible matrix are false, and of course then T is not one-to-one, and we cannot undo this transformation. That kind of makes sense: if all of these points map to (0, 2), how would I go back? Going the other direction, (0, 2) can only get mapped to one point, so which one should it go to? It doesn't work; you can't undo that.

That is all we are going to talk about as far as the properties; there are other things that could be said. Definitely go to page 277 in the book and take a look at that list of equivalent properties, Theorem 4.10.2. It is the big theorem, our list of equivalent statements so far, and you definitely want to go over it and think about those relationships. That ends all of chapter 4.
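As a final check, the failure of one-to-one-ness for this projection can also be demonstrated numerically (one last sketch, illustration only):

```python
import numpy as np

P = np.array([[0, 0], [0, 1]])      # orthogonal projection onto the y-axis

# Many distinct inputs share the same image, so T_P is not one-to-one
for x in ([2, 2], [-2, 2], [3, 2], [10, 2]):
    assert np.array_equal(P @ np.array(x), [0, 2])

# Equivalently, det(P) = 0: P is not invertible, so the projection cannot be undone
assert np.isclose(np.linalg.det(P), 0.0)
```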
Info
Channel: Professor Bobby Jackson
Views: 776
Rating: 5 out of 5
Id: MKPlNQFAYQk
Length: 36min 31sec (2191 seconds)
Published: Sun Apr 12 2020