53 - The rank-nullity theorem revisited

Captions
So here's the statement of the rank-nullity theorem in the context of linear transformations. It says the following: let T from V1 to V2 be a linear transformation, a linear map. The theorem is a theorem about dimensions, and it says that the dimension of V1, the domain space of the transformation, equals the dimension of the kernel of T, which is a subspace of V1, plus the dimension of the image of T, which is a subspace of V2.

Now let me remind you what we have seen so far in the specific case where T is given as multiplication by a matrix. If T(v) = Av, then we've seen that this theorem is indeed valid. It was a direct consequence of some observations about what the kernel of T and the image of T are in that case, in terms of the null space and the rank of the matrix A, and then it followed directly from the corresponding theorem for the matrix A. But now T is a general linear transformation. We don't know yet that it is related to a matrix in any way, and we're not going to use that yet. We're only going to prove this theorem from scratch, from the ingredients and the facts that we already know about linear transformations.

Now let me give you the idea of the proof. The proof is not too complicated, but it is going to be a mouthful; it is going to take a board or two. Let me give you the idea and then we'll spell it out. The idea is the following. Remember that this is a theorem about dimensions. The kernel is a subspace of V1, so we're going to take a basis for the kernel and complete it to form a basis of V1; we know that we can always complete a basis of a subspace to a basis of the ambient space. Then we're going to send that basis, using T, to V2, and we'll get a set of elements there. They're not necessarily going to be a basis; in fact they're not going to be a basis, because there are a lot of fellows there that T sends to zero, since we started out with a basis for the kernel. Every vector that was originally in the basis for the kernel gets sent by T to zero. So all of those we're going to throw out, and what we're left with, at least in terms of the number of elements, is exactly what we need in order to prove this theorem.

Let's add some numbers to make this easier: call n the dimension of V1 and k the dimension of the kernel. What we need to prove is that the dimension of the image is n - k; then we're done, do you agree? So we take the k kernel basis vectors, complete them to a basis of n vectors for V1, send them to V2, get a set of n elements there, throw out the k that were originally a basis for the kernel because they get sent to 0, and we're left with n - k elements. We know, by the way, that they span the image, because they are the image under T of a spanning set. So if we can show that they're linearly independent, we're done, and that is precisely the idea of the proof.

If that was too quick, here comes the slower version with all the details. So here's the proof; the idea is really just what I said, there are no additional tricks or heavy machinery, it's one basic idea. Denote by n the dimension of V1 and by k the dimension of the kernel of T. We will prove that the dimension of the image of T is precisely n - k, and that will complete the proof of the theorem. Good so far, everybody?
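For the matrix special case recalled above, the statement is easy to check numerically. A minimal sketch, assuming NumPy and an arbitrary example matrix (my choice, not from the lecture), where the nullity is computed as the number of columns minus the rank:

    import numpy as np

    # T(v) = A v maps R^5 -> R^3; the domain dimension is the number of columns.
    A = np.array([[1., 0., 2., 0., 1.],
                  [0., 1., 3., 0., 0.],
                  [1., 1., 5., 0., 1.]])
    n = A.shape[1]                        # dim V1 = 5
    rank = np.linalg.matrix_rank(A)       # dim im T
    nullity = n - rank                    # dim ker T
    print(rank, nullity, rank + nullity)  # rank + nullity == n, as the theorem says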
So let's do precisely what we declared. We need to give them names: let v_1, v_2, ..., v_k be a basis for the kernel. The kernel is a subspace, we know it has dimension k, so it has a basis comprised of k elements; call them v_1 through v_k. Complete it (we know this can be done, it's a theorem we proved a while ago) to form a basis for the entire space V1. We'll call that basis v_1, v_2, ..., v_k, and the vectors that complete it we'll call v_{k+1} up to v_n. We know that in total there should be n of them, because V1 is a space of dimension n.

Now, v_1 was an element of a basis for the kernel, so in particular it is itself an element of the kernel, and when you apply T to somebody in the kernel you get 0; that's the definition of being in the kernel. So T(v_1) = T(v_2) = ... = T(v_k) = 0: all the first k vectors get sent to 0, since v_1 through v_k belong to the kernel. Good.

So now we take T and apply it to all these n elements. This was a basis for V1, in particular a spanning set for V1, and therefore T(v_1), T(v_2), ..., T(v_n), T applied to all of them, spans the image. This is another theorem that we proved: if you take a basis, or a spanning set, of the domain and apply T to it, you get a spanning set for the image. But the first k of them are zeros, so they contribute nothing to the span and we can throw them out. Therefore, if we just take T(v_{k+1}), T(v_{k+2}), ..., T(v_n), this also spans the image, because everything we threw out was just a bunch of zeros. Do you agree? Not happy? You're asking: does this still span zero? Of course: take zero times this plus zero times this plus zero times this. You don't need to have zero in a spanning set in order to span zero.

So do you agree that this is a spanning set for the image? And how many elements does it have? n - k. So all that remains, as I declared when I outlined the idea of the proof, is to show that this set is linearly independent. That does not follow from the theorem that says applying T to a spanning set gives a spanning set; we took a spanning set, applied T, and got a spanning set, but that one was not linearly independent, there were a bunch of zeros in it. So the question is: once we throw out the zeros, is what's left a linearly independent set? If it is, we're done: it's a spanning set, it's linearly independent, therefore it's a basis, it has n - k elements, and we're done. So to complete the proof we need to show that this set is not just a spanning set but also linearly independent, and therefore a basis. Do you agree?

Here we're going to resort to the properties of being a linear transformation, and we're going to show linear independence straight from the definition. Let's give the set a name so it's easier to refer to: call it B, for basis. We want to show that B is linearly independent, hence a basis with n - k elements, and then we're done. So let's do that.
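Here is a concrete sketch of the same construction for a matrix map, assuming NumPy; the right singular vectors belonging to the zero singular values give a basis of the kernel, and the remaining ones complete it to a basis of the domain. This is only an illustration of the idea, not the lecture's matrix-free argument:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.]])           # rank 1, so the kernel has dimension 2
    n = A.shape[1]
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))             # rank of A, i.e. dim im T
    kernel_basis = Vt[r:].T                # k = n - r columns spanning ker T
    completion   = Vt[:r].T                # n - k more columns: together a basis of R^3
    print(np.allclose(A @ kernel_basis, 0))       # True: these vectors go to zero
    print(np.linalg.matrix_rank(A @ completion))  # r: their images are independent and span im T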
So what does it mean to be linearly independent? I need to take a linear combination of these elements, assume that it equals zero, and show that necessarily all the coefficients are 0; that's what it means, by definition, to be linearly independent. So suppose

    α_{k+1} T(v_{k+1}) + α_{k+2} T(v_{k+2}) + ... + α_n T(v_n) = 0,

a linear combination of the elements of B that equals 0. We have to prove that all the alphas are 0. If we have such a combination, then by the properties of being a linear map we can pull the alphas inside: we can write this as T(α_{k+1} v_{k+1}) + T(α_{k+2} v_{k+2}) + ... + T(α_n v_n), because a linear transformation absorbs scalars. Furthermore, a sum of T's equals T of the sum; again I'm using the properties of being a linear transformation, and as I mentioned before, passing a linear combination inside the T is a standard step in many, many proofs which use the fact that a transformation is linear. So

    T(α_{k+1} v_{k+1} + ... + α_n v_n) = 0.

Do you agree? In fact, going from the first line directly to this one is a legitimate step at this point: we already know that T of a linear combination is the linear combination of the T's; that's the property of being linear.

Now, what does it mean that T of somebody, whatever that somebody is, equals 0? It means that this somebody is inside the kernel; by definition, an element whose image under T is 0 is an element of the kernel. So α_{k+1} v_{k+1} + ... + α_n v_n belongs to the kernel. But we already have a basis for the kernel, namely v_1 through v_k. So this linear combination of the v's with indices from k+1 to n is an element of the kernel, and therefore it can be written as a linear combination of v_1 through v_k. Do you agree? So there are some coefficients, which I'm going to call α_1 through α_k, such that

    α_{k+1} v_{k+1} + ... + α_n v_n = α_1 v_1 + ... + α_k v_k

for some alphas. If it's an element of the kernel, and v_1 through v_k are a basis for the kernel, then it can be written as a linear combination of those elements with some coefficients; I chose to call them alphas, you can call them betas if you prefer, it doesn't matter. Everybody good?

So now I have an equality, and I can move everything to one side to get a linear combination that equals zero. Therefore

    α_1 v_1 + ... + α_k v_k - α_{k+1} v_{k+1} - ... - α_n v_n = 0.

I took the right-hand side, the first k terms, subtracted the left-hand side, and got zero. Do you agree? So I have a linear combination in which everything lives in V1, the domain space.
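As a compact restatement of the chain of steps just described (same content, gathered in one place):

    \[
    \begin{aligned}
    0 &= \alpha_{k+1} T(v_{k+1}) + \cdots + \alpha_n T(v_n)
       = T\bigl(\alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n\bigr) \\
      &\Longrightarrow\ \alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n \in \ker T = \operatorname{span}\{v_1,\dots,v_k\} \\
      &\Longrightarrow\ \alpha_{k+1} v_{k+1} + \cdots + \alpha_n v_n = \alpha_1 v_1 + \cdots + \alpha_k v_k
       \quad\text{for some } \alpha_1,\dots,\alpha_k .
    \end{aligned}
    \]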
It's a linear combination, equal to zero, of the elements of a basis. When can you have a linear combination of the elements of a basis which equals zero? Only when all the coefficients are 0. So all these alphas are 0. In particular α_1 through α_k are 0, but those I don't care about; what matters is that α_{k+1} through α_n have to be 0, and that's what I wanted to prove. So all the alphas, and in particular α_{k+1}, ..., α_n, are 0. We started with a linear combination of the elements of B, that is T(v_{k+1}) up to T(v_n), we assumed that it equals 0, and we showed that necessarily all the coefficients are 0. That means these vectors are linearly independent, and therefore B is a basis. One second: therefore B is linearly independent; we already knew it was a spanning set, and thus it is a basis for the image with n - k vectors. So the dimension of the image is n - k, and that completes the proof of the theorem.

Yes, questions? From here to here I have a linear combination of v_1 through v_n which equals zero, and v_1 through v_n is a basis for V1; if you have a linear combination of linearly independent vectors that equals zero, then all the coefficients are 0, and that is exactly what it means to be linearly independent. We know that v_1 through v_n are linearly independent, they're a basis. It's the definition of linear independence, no theorem, nothing: if you have a linear combination that equals zero, all the coefficients are 0. Good.

So this completes the proof. You may need to read it again; as I said, it spans two boards. But the ideas are very basic, and when I do spend time, ink and boards on a proof, it's not just because I think it's fun (well, I do), but because it incorporates ideas which are important not only for this particular proof but for many other proofs, exercises and questions that you may encounter in homework and exams. So there are many ideas here that are basic and very important regarding linear transformations; it's a very basic theorem.

So, back to what we know: we finally have the rank-nullity theorem in general, for a general linear transformation. Note that in everything we did here we never said the word matrix. We did not use the fact, which we've already hinted at and which is going to be the next clip, that any linear transformation can be represented by a matrix. So this result is completely valid in its own right.

One more thing I want to say. Usually after stating a theorem we would give many examples, but in fact we've already seen many, many examples: for every linear transformation that we wrote down as an example so far, I pointed out that the dimension of the domain space equals the dimension of the kernel plus the dimension of the image. We've seen it exhibited in all the examples so far, so I'm not going to give more examples right now; it's a fact we've seen many times. What I do want to do is write two corollaries, two statements about linear transformations that follow from this. The nice thing about this theorem is that it really gets used a lot; it's very useful, and it often gets used in statements that a priori don't involve the kernel or the image at all, simply as a tool.
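One of those familiar examples, sketched numerically (the lecture deliberately avoids matrices here, so this coordinate version is only an added illustration): differentiation on polynomials of degree at most 3, written in the monomial basis 1, x, x^2, x^3.

    import numpy as np

    # D(p) = p' on P_3, represented in the monomial basis (illustration only).
    D = np.array([[0., 1., 0., 0.],
                  [0., 0., 2., 0.],
                  [0., 0., 0., 3.],
                  [0., 0., 0., 0.]])
    rank    = np.linalg.matrix_rank(D)   # dim im D = 3 (polynomials of degree <= 2)
    nullity = D.shape[1] - rank          # dim ker D = 1 (the constant polynomials)
    assert rank + nullity == 4           # 4 = dim P_3, as rank-nullity predicts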
So let's see these two corollaries. Corollary number one: if T from V1 to V2, a linear transformation, is one-to-one and onto, then the dimension of V1 equals the dimension of V2. So a one-to-one and onto linear map between two linear spaces can only occur when they have the same dimension. This is part of a more general statement that we've also hinted at already; I can even throw in the word: the notion of isomorphism, of when we think of two spaces as being the same. We're getting there, it's going to come soon, but this is part of it. If we can move back and forth linearly, respecting the linear structure of the spaces (that's what a linear transformation which is one-to-one and onto means), then they share the same dimension, they are of the same size. That's not the full statement, but it's a partial statement.

So let's prove this. (I forgot to write that T is linear; obviously it's linear, we're in a linear algebra course, but it's worth mentioning.) The proof is going to be very short, because we're going to make use of the theorem. I'll start in the totally naive way: I start with the dimension of V1 and hope to get to the dimension of V2. What is the dimension of V1 equal to? By the theorem, it equals the dimension of the kernel plus the dimension of the image. Do you agree? That was the theorem. What's the dimension of the kernel? Zero. Why? We have a theorem: a linear transformation is one-to-one if and only if the kernel is the zero subspace, which in particular has dimension zero. So that term is zero since T is one-to-one. And what's the dimension of the image? The image is a subspace of V2, but since the map is onto, the image is all of V2, so its dimension is the dimension of V2. So the dimension of V1 is zero plus the dimension of the image, which is the dimension of V2, and we're done. This is a direct consequence of the theorem; try to prove it without it and you'll see that you run into a lot of "how do I even start?". Do you agree? Everybody good?

Another corollary, corollary number two, and this one may look a bit surprising. Why surprising? Because for general functions it's absolutely false, but for linear transformations it's true. Suppose the dimension of V1 equals the dimension of V2, so we take a linear map between two spaces which share the same dimension. That doesn't necessarily mean the map is one-to-one and onto, of course: you can take the zero map, which sends everything to 0; that's a linear transformation, and it's obviously not onto and not one-to-one. Do you agree? So the converse of corollary one is clearly false: equal dimensions don't force a linear map to be one-to-one and onto, that would be way too much. But if it is one-to-one then it's automatically onto, and if it is onto then it's automatically one-to-one. So for a linear map between two spaces of the same dimension, the properties of being one-to-one and being onto collapse into a single property: one holds if and only if the other does.
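Written out, the whole proof of corollary one is this single chain of equalities:

    \[
    \dim V_1
    \;=\; \underbrace{\dim \ker T}_{=\,0,\ \text{$T$ one-to-one}}
    \;+\; \underbrace{\dim \operatorname{im} T}_{=\,\dim V_2,\ \text{$T$ onto}}
    \;=\; \dim V_2 .
    \]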
Let's write it like this: if the dimension of V1 equals the dimension of V2 and T from V1 to V2 is linear, then T is one-to-one if and only if T is onto. This is not true for functions in general: a function can easily be one-to-one and not onto, or onto and not one-to-one; the properties are very different in nature. But for a linear map between two spaces of the same dimension they coincide.

So let's prove this. We have two things to prove: one, that if the map is one-to-one then it's necessarily onto; two, that if the map is onto then it's necessarily one-to-one. Both are going to follow from the rank-nullity theorem, and just by looking at it you can see where they will stem from: if T is one-to-one then the kernel term is 0, so the dimension of V1 has to equal the dimension of the image, which by assumption equals the dimension of V2, so the image is everything and the map is onto; and vice versa. So you see it's going to be a direct corollary; let's write the details. I may need another board, so let's take another board.

(Regarding the question just asked: no, you're looking for a linear map between two spaces which have the same dimension; you can't say that V1 and V2 are "one-to-one and onto", that doesn't compile. What's one-to-one and onto is the map. If you take two spaces V1 and V2 of the same dimension, for example 2-by-2 matrices and R^4, or 2-by-2 matrices and polynomials of degree less than or equal to 3, which are very different spaces of the same dimension, and you have a linear map between one and the other that's one-to-one, it's going to be onto as well. I can't give you an example of a map that's only one-to-one and not onto, because the corollary says there is no such example: if it's one-to-one it's automatically onto.)

So we need to prove two directions. Here's the first: assume that T is one-to-one. Then the kernel of T is {0}; this we already know. Therefore the dimension of the kernel of T is 0, because it's the zero space. Therefore, by the rank-nullity theorem, the dimension of V1 equals the dimension of the kernel, which is 0, plus the dimension of the image. Now write it the other way around: the dimension of V2 equals the dimension of V1 (that's the assumption of the corollary, the two spaces have the same dimension), and this equals, by the rank-nullity theorem, 0 plus the dimension of the image. So the dimension of the image equals the dimension of V2, and therefore, by another statement that we proved, a subspace whose dimension equals the dimension of the ambient space is the whole space; you can't have a four-dimensional subspace of a four-dimensional space which is not everything, we proved that. Therefore the image is V2, and therefore T is onto. Good. It feels like just fiddling around with some numbers, this is zero, this is everything, but there is theory hiding here; in particular the rank-nullity theorem itself is not just an observation, it required work.
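Written as one chain, the direction just proved reads:

    \[
    T \ \text{one-to-one}
    \;\Longrightarrow\; \dim \ker T = 0
    \;\Longrightarrow\; \dim \operatorname{im} T = \dim V_1 = \dim V_2
    \;\Longrightarrow\; \operatorname{im} T = V_2
    \;\Longrightarrow\; T \ \text{onto}.
    \]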
Now the other direction: assume that T is onto. It's basically going to be the same ideas, just read backwards. If T is onto, then the dimension of the image equals the dimension of V2, because if T is onto the image is V2, that's what it means, so they have the same dimension. And that equals the dimension of V1, by the assumption of the corollary. So the dimension of the image is already the dimension of V1. But by the rank-nullity theorem, if we add the dimension of the kernel we should still get the dimension of V1, so there are no dimensions left for the kernel. Do you agree? So by the rank-nullity theorem the dimension of the kernel has to be zero, there's nothing left for it, therefore the kernel is {0}, which implies that T is one-to-one. Good.

So these two corollaries indeed follow directly from the rank-nullity theorem, and we're going to see why they're useful: we're building up to the definition and the idea of isomorphism, of when two spaces are just two different ways of looking at the same space. Questions so far? Again, this was a bit theoretical, but theory is part of this course. There's some theory here, and it's really just about understanding the definitions and the notions we're dealing with. We didn't throw in any deep ideas; everything was very basic, one little step after another, just following very basic things, as long as you know the definitions. If the definition of what it means to be linearly independent suddenly slips your mind, then there's no way to do this and no way to understand it. You have to know the definitions. So let's stop here. In the next one, what we're finally going to do is prove that any linear transformation can be written in terms of a matrix. How do we do that? That's coming up next.
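Once linear transformations are represented by matrices (the topic of the next clip), corollary two specializes to a familiar statement about square matrices. A minimal numerical sketch of that special case, assuming NumPy (an added illustration, not part of the lecture):

    import numpy as np

    # For T(v) = A v with dim V1 = dim V2 = n (so A is square), "one-to-one"
    # means nullity 0 and "onto" means rank n; corollary two says they coincide.
    n = 4
    A = np.random.rand(n, n)
    rank = np.linalg.matrix_rank(A)
    one_to_one = (n - rank == 0)   # ker T = {0}
    onto       = (rank == n)       # im T = R^n
    assert one_to_one == onto      # the same condition for a square matrix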
Info
Channel: Technion
Views: 11,884
Rating: 5 out of 5
Keywords: Technion, Algebra 1M, Dr. Aviv Censor, International school of engineering
Id: a5pTAuNm--k
Length: 37min 11sec (2231 seconds)
Published: Sun Nov 29 2015