73 - Orthogonality

Captions
This is our very last lesson for the semester. The title is orthogonality, and the idea is the following. We defined inner product spaces: we defined the notion of an inner product, and we saw how, via inner products, we get geometry. Once we have that geometry we have notions of angles and notions of length and distance, so the geometry provides further structure. On the other hand, we temporarily set aside the purely algebraic things we did before: a basis for a vector space, matrix representations of linear transformations, and so on. What I want to do, very briefly — because this is a topic we are only just introducing — is to relate the two. For example, I want to make precise what is special about a basis like the standard basis of R^2 or R^3, where all the vectors are perpendicular: if you think of the standard basis of R^3, the x, y and z axes and unit vectors in those directions, any two of them are perpendicular, with 90 degrees between each pair. We want to be able to say that for a general basis, and to see why it is useful. In particular, what does the word orthogonal mean?

Here is a brief reminder. Via the inner product, which we denote ⟨u, v⟩, on a vector space we defined the norm, or length, of a vector: ‖v‖ = √⟨v, v⟩. We defined angles: the angle α between two vectors satisfies cos α = ⟨u, v⟩ / (‖u‖ · ‖v‖). And we defined distances, or a metric: d(u, v) = ‖u − v‖. This is the geometric structure on an inner product space V.

Now let's recall what we said about R^3. Note that the standard basis of R^3 — denote it E, with elements e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1) — satisfies two things.

First, ‖e_i‖ = 1. What is the norm of e_i? It is the square root of the inner product of e_i with itself. As a reminder, the standard norm in R^3 is ‖v‖ = √(v_1² + v_2² + v_3²), so the norm of each e_i is just 1, and this is true for i = 1, 2, 3.

Second, ⟨e_i, e_j⟩ = 0 whenever i ≠ j. Recall that the standard inner product on R^3 is ⟨u, v⟩ = u_1 v_1 + u_2 v_2 + u_3 v_3, so for e_i and e_j you just sum the products of their coordinates, and when i ≠ j that sum is zero. For example, for e_2 and e_3 the inner product is 0·0 + 1·0 + 0·1 = 0.
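To make the reminder concrete, here is a minimal numpy sketch (my own illustration, not part of the lecture; the vectors u and v are made-up examples) computing the norm, the angle, and the distance that come from the standard inner product on R^3:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 0.0])

norm_u = np.sqrt(np.dot(u, u))                 # ||u|| = sqrt(<u, u>) = 3
norm_v = np.linalg.norm(v)                     # same thing via the library: ||v|| = 2
cos_alpha = np.dot(u, v) / (norm_u * norm_v)   # cos(alpha) = <u, v> / (||u|| ||v||)
alpha = np.arccos(cos_alpha)                   # angle between u and v, about 1.23 radians
dist = np.linalg.norm(u - v)                   # d(u, v) = ||u - v|| = 3

print(norm_u, alpha, dist)
```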
So each of these vectors has length 1 — such a vector is often called a unit vector — and, I claim, they are pairwise perpendicular. Why? Look back at the definition of the angle: cos α = ⟨u, v⟩ / (‖u‖ · ‖v‖). If the inner product is 0, then cos α = 0, and the angle whose cosine is 0 is π/2. So saying that the inner product of two vectors is 0 means there is an angle of π/2 between them, meaning they are perpendicular.

Let's make this a bit more formal. Definition: u and v are called perpendicular, or — this is the standard algebraic word — orthogonal, denoted u ⊥ v, if ⟨u, v⟩ = 0. This makes sense in an inner product space, where we have angles: if ⟨u, v⟩ = 0 then cos α = 0, hence α = π/2.

In summary, the elements of E, the standard basis of R^3, are pairwise orthogonal and each has length 1.

I want to take these two notions — orthogonality and length 1 — and define them for a general collection of elements in a general vector space. Definition: a set B = {v_1, v_2, ..., v_n} in an inner product space V is called orthogonal if ⟨v_i, v_j⟩ = 0 whenever i ≠ j — not always, only for i different from j. If, in addition, ‖v_i‖ = 1 for every i — vectors of length 1 are called normalized — then B is called orthonormal; that is the terminology. Furthermore — and I was hinting at this by calling the set B — if B is a basis of V, meaning it is linearly independent and spans the entire space, then it is called an orthogonal basis, or an orthonormal basis.

(A question from the class: does orthogonality imply being a basis?) That is a very good observation. Not quite — an orthogonal set need not span the space — but there are theorems in this direction; for instance, an orthogonal set of nonzero vectors is automatically linearly independent. I'm not going to go into those details. As I said, we are only touching this topic; there is a whole theory that develops from here, and in fact it extends beyond finite dimensional spaces, where things become even more interesting — at least to me. We are really just putting down some definitions, and I'm going to prove exactly one tiny, very useful fact. That's it — the semester ends; come again next semester, not to this course but to the courses that follow.
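A small numpy sketch of the definition just given (my own illustration; the second set of vectors is a made-up example), checking both conditions at once:

```python
import numpy as np

def is_orthonormal(vectors, tol=1e-12):
    """Check the definition: <v_i, v_j> = 0 for i != j and ||v_i|| = 1 for every i."""
    for i, u in enumerate(vectors):
        for j, w in enumerate(vectors):
            target = 1.0 if i == j else 0.0   # norm 1 on the diagonal, orthogonality off it
            if abs(np.dot(u, w) - target) > tol:
                return False
    return True

e = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
print(is_orthonormal(e))   # True: the standard basis of R^3 is an orthonormal basis

s = [np.array([1., 1., 0.]) / np.sqrt(2), np.array([1., -1., 0.]) / np.sqrt(2)]
print(is_orthonormal(s))   # True as a set, but it spans only a plane, so it is not a basis of R^3
```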
Is it clear what an orthonormal basis is? So now we can say that E, the standard basis of R^3, is an orthonormal basis. You cannot define the standard basis as being "the" orthonormal basis, because there are many more orthonormal bases than just the standard one, but the standard basis we had is orthonormal. (A question: were all the standard bases we saw orthonormal?) Yes — in fact they were all the same basis in disguise. Not just for R^3 but for R^n; when we worked with polynomials, for example, we took the standard basis 1, x, x², x³, and in terms of the coefficient vectors it was the same thing. Once two vector spaces have the same dimension they are — not quite the same vector space, but isomorphic: two ways of looking at the same space, and their bases are just translations of each other. So the answer is yes, every one of those standard bases is an orthonormal basis, but not vice versa: there are many orthonormal bases which are not the standard basis.

Example: E from the previous discussion is an orthonormal basis of R^3. And here is a very good exercise: in the space of 2×2 matrices, take the four standard basis elements — the matrices with a single 1 and zeros elsewhere — and show that they are orthonormal. Orthonormal with respect to what? Orthonormality is a statement about the norm and a statement about the inner product, so you have to say what the inner product on matrices is and what norm it gives rise to; take the standard inner product and the norm it induces. Those are good exercises.

Once we have this notion, the following question is natural. Suppose somebody hands us a set of vectors — maybe it is a basis, maybe just a set. That set spans some subspace, maybe the entire space. Suppose we want to produce from it a different set, with the same span, that is orthonormal. (Simply throwing the vectors away and taking different ones is not interesting; the point is to keep the span.) There are two things to do. The first is to normalize the vectors — make sure each has norm 1 — and that is easy. Remark: it is easy to normalize a nonzero vector v. (If v is the zero vector there is nothing you can do: no scalar multiple of it has norm 1.) The span of a single vector is the set of all its scalar multiples, so we want a scalar multiple of v with norm 1: take (1/‖v‖)·v. Since v is nonzero, ‖v‖ is a nonzero number, so 1/‖v‖ is a scalar, and we take that scalar multiple of v. It has the same span as v — we only multiplied by a scalar — and it has norm 1, because the norm of a scalar times a vector is the absolute value of the scalar times the norm of the vector: 1/‖v‖ is already a non-negative real number, so ‖(1/‖v‖)·v‖ = (1/‖v‖)·‖v‖ = 1.
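A one-line numpy version of this normalization step (a sketch of my own; the example vector (5, 0, 0) is the one used a bit later in the lecture):

```python
import numpy as np

def normalize(v):
    """Return (1/||v||) * v: same span as v, norm 1. Undefined for the zero vector."""
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("the zero vector cannot be normalized")
    return v / n

print(normalize(np.array([5.0, 0.0, 0.0])))   # [1. 0. 0.]
```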
So normalizing a single vector is very easy, and if you have a bunch of vectors you just normalize each one separately. But that does not guarantee the second property of an orthonormal set — or rather the first property, of being an orthogonal set: that pairwise the vectors are perpendicular, i.e. that the inner product of each two is 0. There is a way to achieve that as well, and it is a more sophisticated algorithm, because once you have dealt with two vectors and start taking care of a third, you risk messing up the first two. I am not going to describe it in detail. It is a more sophisticated procedure named after two mathematicians — not Cauchy-Schwarz, that is something else, as you know — but Gram and Schmidt: the Gram-Schmidt process. It is an algorithm whose input is a set of vectors v_1, ..., v_n — they have to be linearly independent if you want to end up with the same number of vectors — and whose output is a set with the same span which is orthonormal. (Can you always do that? In a finite dimensional space, yes; many of the things I am saying here extend to infinite dimensional spaces, but those are beyond the scope of our course.)

So: a more sophisticated process, called Gram-Schmidt, enables us to take any linearly independent set — in particular a basis — and produce an orthonormal set with the same span. The process does two things, and in fact what it really does is first orthogonalize the vectors and then normalize the result, which does not change their orthogonality. It produces a new set — in general none of the vectors stay the same, they are all modified — but the span does not change; in fact, at every step the span remains the same. Describing it requires defining the notion of projecting one vector onto another; then you define the algorithm and prove that it really does what is claimed. It is not hard — if you open the Wikipedia entry you will see it, with an example and everything — but I am not going to go into the details. So, given a basis, for example, you can always produce an orthonormal basis from it.

(A question from the class.) I am not sure I understood the question, but let me say this: if you are looking at the entire space and you say "give me a linearly independent set and I will produce an orthonormal set with the same span", you could just answer with the standard basis. It becomes interesting when you take subspaces, where you cannot simply take the standard basis — a subspace has restrictions, not all the elements are there — and you still want to take a basis of the subspace and make it orthonormal. I don't know if that answers what you were asking.
Otherwise you could do it very naively, in an uninteresting way. And note that normalizing alone is not enough. Normalizing a single vector is just the recipe above, and you get a different vector — if you start with (5, 0, 0), normalizing it gives (1, 0, 0). But now take (5, 0, 0) and (1, 2, 0). They span a two dimensional subspace of R^3. You can normalize each of them separately, but they are not perpendicular: their inner product is not zero, it is 5. So how do you produce two vectors of norm 1 that are perpendicular and span the same subspace of R^3 as (5, 0, 0) and (1, 2, 0)? That is exactly the kind of question the Gram-Schmidt process answers.
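The lecturer leaves the algorithm to the literature; a minimal sketch of classical Gram-Schmidt in numpy (my own illustration, not the lecture's code), applied to the example just posed, might look like this:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: from a linearly independent list of vectors,
    produce an orthonormal list with the same span."""
    result = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in result:
            w = w - np.dot(w, u) * u           # subtract the projection of w onto u
        result.append(w / np.linalg.norm(w))   # normalize (nonzero by linear independence)
    return result

q1, q2 = gram_schmidt([[5, 0, 0], [1, 2, 0]])
print(q1, q2)             # [1. 0. 0.] [0. 1. 0.]
print(np.dot(q1, q2))     # 0.0: orthonormal, and the span is still the plane z = 0
```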
So: we know what an orthonormal basis is, and we know there is a method, accessible in the literature, to produce orthonormal bases. The next question is what they are good for, and here is a first, partial answer. When you have a basis, the first thing you know — that is what a basis is all about — is that you can take any element of your space and write it as a linear combination of basis elements; the basis encodes everything about the space algebraically. But what are the coefficients? Suppose I give you a vector and tell you it lies in the subspace spanned by (5, 0, 0) and (1, 2, 0). To write it precisely as α·(5, 0, 0) + β·(1, 2, 0), you would need to solve a system of equations to find α and β. But if the basis is orthonormal, the coefficients are right there looking at you.

Theorem. If B = {v_1, v_2, ..., v_n} is an orthonormal basis of V, then for every v in V,

v = Σ_{i=1}^{n} ⟨v, v_i⟩ · v_i.

That is, v = α_1 v_1 + α_2 v_2 + ... + α_n v_n, where the coefficients are exactly α_i = ⟨v, v_i⟩: α_1 is the inner product of v with v_1, α_2 is the inner product of v — the vector you are expanding — with v_2, the second basis element, and so on. The coefficients α_i in expressing v as a linear combination of the v_i are given explicitly by inner products with the v_i themselves, provided the basis is orthonormal. Is the statement clear? Let's prove it; the proof is very easy.

Proof. Take v in V. Since B is a basis, v can be written as v = α_1 v_1 + α_2 v_2 + ... + α_n v_n; these α_i exist for every v — that is what it means to be a basis — and we even know they are unique, since a basis expresses every vector uniquely in terms of the coefficients. Now let's compute ⟨v, v_i⟩ and see that it simply equals α_i, which is precisely the statement of the theorem. What do we expect to go into the calculation? Somewhere we must use the fact that this is an orthonormal basis and not just any basis. So:

⟨v, v_i⟩ = ⟨α_1 v_1 + α_2 v_2 + ... + α_n v_n, v_i⟩.

What we have here is the inner product of a linear combination with a vector, and by linearity of the inner product in the first argument we can break it into a sum of inner products:

= ⟨α_1 v_1, v_i⟩ + ⟨α_2 v_2, v_i⟩ + ... + ⟨α_n v_n, v_i⟩.

Furthermore, scalars pull out of the first argument as they are:

= α_1 ⟨v_1, v_i⟩ + α_2 ⟨v_2, v_i⟩ + ... + α_n ⟨v_n, v_i⟩.

Up to here we have not used orthonormality at all; here it comes. By orthogonality, ⟨v_j, v_i⟩ = 0 for every j ≠ i, and ⟨v_i, v_i⟩ = ‖v_i‖² — because ‖v_i‖ is the square root of ⟨v_i, v_i⟩ — which equals 1; that is the normality part of orthonormality. So the only term that survives is

α_i ⟨v_i, v_i⟩ = α_i · ‖v_i‖² = α_i,

which is what we claimed. If you want to write v as a linear combination of the v_i, the coefficients are none other than the inner products of v with each of the v_i, and they are very easy to calculate — no systems of equations needed.

Moreover, if you want to know the norm of a vector, you can calculate it in terms of these coefficients; let's write that as well. This works only when you use an orthonormal basis — in general it is not true. In many statements we made this semester we started with "let V be a vector space and let B be a basis". In inner product spaces and in what follows from them — you will hear terms like Hilbert spaces and much more — many theorems start with "let V be an inner product space and let B be an orthonormal basis". You can always get one: that is the content of Gram-Schmidt. And, furthermore, it is useful.
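As a quick numerical illustration (my own example, not the lecturer's): with a rotated orthonormal basis of R^2, the coefficients of any vector are just dot products, and they rebuild the vector exactly.

```python
import numpy as np

# An orthonormal basis of R^2: the standard basis rotated by 45 degrees.
b1 = np.array([1.0, 1.0]) / np.sqrt(2)
b2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([3.0, 5.0])
alpha1, alpha2 = np.dot(v, b1), np.dot(v, b2)   # alpha_i = <v, v_i>, no equations to solve

print(alpha1, alpha2)                  # about 5.657 and -1.414
print(alpha1 * b1 + alpha2 * b2)       # [3. 5.] -- reconstructs v
```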
Here is the second claim — let's call it a theorem again. If B = {v_1, v_2, ..., v_n} is an orthonormal basis, then for every v,

‖v‖ = √( Σ_{i=1}^{n} |⟨v, v_i⟩|² ).

In general these inner products can be complex numbers, so |·| here is not merely the absolute value but the modulus. So if you want to calculate the norm of a vector, it is straightforward: the ⟨v, v_i⟩ are just the coefficients of writing v as a sum of the v_i; take the squared moduli of these coefficients, sum them, and take the square root. Again, everything is explicit in terms of the basis elements. (Someone asks what happens with complex scalars: we will see that what shows up in the proof is a number times its complex conjugate, which is exactly its modulus squared.)

Let's prove this; again the proof amounts to simple manipulations with inner products. We know that ‖v‖ = √⟨v, v⟩ — that is how the norm is defined in an inner product space. Implicit here is that B is orthonormal with respect to that inner product, and that the norm comes from the same inner product: we are assuming all the structures stem from the same given inner product. (If you take a different inner product, orthonormality changes meaning, and if you take a norm that does not come from that inner product, things change as well — something to keep in mind.) So if we calculate ⟨v, v⟩ and show that it equals the sum above, that proves the theorem, because then we just take the square root.

By the previous theorem we know how to expand v, so — this may look a bit confusing, but bear with me —

⟨v, v⟩ = ⟨ Σ_i ⟨v, v_i⟩ v_i , Σ_j ⟨v, v_j⟩ v_j ⟩,

where each argument is v, written as a sum. We could open both sums up as ⟨v, v_1⟩v_1 + ⟨v, v_2⟩v_2 + ... on each side, but let's work with them as they are. We have a linear combination against a linear combination, and we can use the linearity of the inner product. Remember that linearity is a bit delicate here: with respect to addition it works perfectly, so every term of the first sum meets every term of the second sum; but for scalars, a scalar in the first argument pulls out as it is, while a scalar in the second argument pulls out with a bar.
Now use the orthonormality. Whenever one of the v_i meets a v_j with j ≠ i, that term is zero, so the only surviving terms are those where v_i meets v_i. What are their coefficients? The coefficient of v_i in the left argument was ⟨v, v_i⟩, and it pulls out of the inner product as it is; the coefficient of v_i in the right argument was also ⟨v, v_i⟩, but when it comes out it gets a bar. And ⟨v_i, v_i⟩ = ‖v_i‖² is just 1 — that is the normality part. A number times its complex conjugate is the modulus of that number squared, so

⟨v, v⟩ = Σ_{i=1}^{n} ⟨v, v_i⟩ · conj(⟨v, v_i⟩) = Σ_{i=1}^{n} |⟨v, v_i⟩|²,

and therefore ‖v‖ is the square root of that sum, which is exactly the theorem. Note that a priori there are n² additive terms in the expansion, because each v_i meets each v_j, but most of them are zero because of orthogonality; only n of them survive, and even in those, the factor ⟨v_i, v_i⟩ is just 1.
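A quick numerical check of this norm formula, reusing the rotated basis from the previous sketch (again my own illustration):

```python
import numpy as np

b = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]  # orthonormal basis of R^2
v = np.array([3.0, 5.0])

coeffs = [np.dot(v, u) for u in b]                        # the coefficients <v, v_i>
norm_from_coeffs = np.sqrt(sum(abs(c) ** 2 for c in coeffs))

print(norm_from_coeffs)       # 5.8309... = sqrt(34), computed from the coefficients
print(np.linalg.norm(v))      # the same number, computed directly from v
```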
So this addresses, at least in part and at a motivational level, the usefulness of having an orthonormal basis: we know how to write elements and how to compute norms very explicitly. In applications these v's can be matrices, they can be functions, they can be many things, and we know how to handle them. And when the v's are functions, what are these coefficients? Remember the inner product on function spaces: it was an integral — take f and g, and the inner product is the integral of f times g, or of f times the complex conjugate of g if the functions are complex valued. So things become very rich and very interesting, with a vast amount of applications.

One more statement — one really final statement. Suppose we have a linear map. We know that we can sometimes, not always, diagonalize it. Diagonalization is a process which in particular involves a basis: we pick out a particular basis to work with because it produces a diagonal matrix, a matrix with lots of zeros that is easy to work with — that was the whole idea of diagonalization. Wouldn't it be nice if we could diagonalize not just with some basis but with an orthonormal basis? That is called — wild guess — orthonormal diagonalization (say that five times quickly in a row). It cannot always be done, but sometimes it can, and that is the next and final statement. Once again we are only tapping on a broader discussion, and I am not going to prove it.

Theorem. Let A be an n×n symmetric matrix over R — recall that symmetric means A equals its transpose, A = Aᵀ. Then A admits what is called (careful, there is a terminology quirk here) orthogonal diagonalization. What does that mean? It means that A has a diagonal form D — saying that A has a diagonal form means there is an invertible P such that D = P⁻¹AP, so they are similar via that P, where P has the eigenvectors of A as its columns and D has the eigenvalues along its main diagonal — and, in addition, P is what is called an orthogonal matrix, which by definition means that the columns of P, in this case the eigenvectors of A, are not just orthogonal but orthonormal. They then form an orthonormal basis of eigenvectors, because A is diagonalizable. So note the bit of trickiness in the terminology: it is called orthogonal diagonalization, and it means that P is an orthogonal matrix, but an orthogonal matrix is one whose columns are orthonormal, not merely orthogonal.

By the way, a remark: the definition of an orthogonal matrix is usually not stated this way. The standard definition, which is equivalent, is that P⁻¹ = Pᵀ — the inverse and the transpose, which a priori are completely different notions (the transpose is just a rearrangement, the inverse is a more subtle object), coincide. A matrix is orthogonal if and only if they coincide, so these are nice matrices: their inverses are easy to find.

I am not going to prove the theorem, but in fact it is even an if-and-only-if statement: A admits an orthogonal diagonalization if and only if A is symmetric. It is not that other matrices also admit orthogonal diagonalization and you can merely see some relation to being symmetric.

I think we are going to stop here. This was really just to give you a flavor of the directions in which things can go — and there are many other directions. It ties the notion of an inner product, which is hidden in the word "orthogonal", and the geometric structure it gives, to the topics we discussed previously in this course. Any questions? Is everybody happy? You still have a lot of work to do to practice these things — you will see them in exercises and in tutorials — and I wish you all a lot of success in whatever direction you take. Thank you.
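A minimal numpy illustration of this last theorem (my own sketch; numpy.linalg.eigh is one standard routine for symmetric matrices and returns orthonormal eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # a symmetric matrix: A equals its transpose

eigenvalues, P = np.linalg.eigh(A)         # columns of P: orthonormal eigenvectors of A
D = np.diag(eigenvalues)                   # eigenvalues along the main diagonal

print(np.allclose(P.T @ P, np.eye(2)))     # True: P is orthogonal, i.e. P^{-1} = P^T
print(np.allclose(P.T @ A @ P, D))         # True: D = P^{-1} A P is the diagonal form
```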
Info
Channel: Technion
Views: 16,422
Rating: 4.8423643 out of 5
Keywords: Technion, Algebra 1M, Dr. Aviv Censor, International school of engineering
Id: jhVMytb7nUA
Length: 57min 14sec (3434 seconds)
Published: Mon Nov 30 2015