2. Elimination with Matrices.

Captions
Okay. This is it. The second lecture in linear algebra, and I've put below my main topics for today. I put right there a system of equations that's going to be our example to work with. But what are we going to do with it? We're going to solve it. And the method of solution will not be determinants. Determinants are something that will come later. The method we'll use is called elimination. And it's the way every software package solves equations. And elimination, well, if it succeeds, it gets the answer. And normally it does succeed. If the matrix A that's coming into that system is a good matrix, and I think this one is, then elimination will work. We'll get the answer in an efficient way. But why don't we, as long as we're sort of seeing how elimination works -- it's always good to ask how could it fail? So at the same time, we'll see how elimination decides whether the matrix is a good one or has problems. Then to complete the answer, there's an obvious step of back substitution. In fact, the idea of elimination is -- you would have thought of it, right? I mean Gauss thought of it before we did, but only because he was born earlier. It's a natural idea... and died earlier, too. Okay, and you've seen the idea. But now, the part that I want to show you is elimination expressed in matrix language, because the whole course -- all the key ideas get expressed as matrix operations, not as words. And one of the operations, of course, that we'll meet is how do we multiply matrices and why? Okay, so there's a system of equations. Three equations and three unknowns. And there's the matrix, the three by three matrix -- so this is the system Ax = b. This is our system to solve, Ax equal -- and the right-hand side is that vector 2, 12, 2. Okay. Now, when I describe elimination -- it gets to be a pain to keep writing the equal signs and the pluses and so on. It's that matrix that totally matters. Everything is in that matrix. But behind it is those equations. So what does elimination do? What's the first step of elimination? We accept the first equation, it's okay. I'm going to multiply that equation by the right number, the right multiplier and I'm going to subtract it from the second equation. With what purpose? So that will decide what the multiplier should be. Our purpose is to knock out the x part of equation two. So our purpose is to eliminate x. So what do I multiply -- and again, I'll do it with this matrix, because I can do it short. What's the multiplier here? What do I multiply -- equation one and subtract. Notice I'm saying that word subtract. I'd like to stick to that convention. I'll do a subtraction. First of all this is the key number that I'm starting with. And that's called the pivot. I'll put a box around it and write its name down. That's the first pivot. The first pivot. Okay. So I'm going to use -- that's sort of like the key number in that equation. And now what's the multiplier? So I'm going to -- my first row won't change, that's the pivot row. But I'm going to use it -- and now, finally, let me ask you what the multiplier is. Yes? 3 times that first equation will knock out that 3. Okay. So what will it leave? So the multiplier is 3. 3 times that will make that 0. That was our purpose. 3 2s away from the 8 will leave a 2 and three 1s away from 1 will leave a minus 2. And this guy didn't change. Now the next step -- this is forward elimination, and that step's completed. Okay. Oh, well, you could say wait a minute, what about the right-hand side?
Shall I carry -- the right-hand side gets carried along. Actually MATLAB finishes up with the left side before -- and then just goes back to do the right side. Maybe I'll be MATLAB for a moment and do that. Okay. I'm leaving room for a column of b, the right-hand side. But I'll fill it in later. Okay. Now the next step of elimination is what? Well, strictly speaking... this position that I cleaned up was like the 2, 1 position, row 2, column 1. So I got a 0 in the 2, 1 position. I'll use 2, 1 as the index of that step. The next step should be to finish the column and get a 0 in that position. So the next step is really the 3, 1 step, row three, column one. But of course, I already have 0. Okay. So the multiplier is 0. I take 0 of this equation away from this one and I'm all set. So I won't repeat that, but there was a step there which MATLAB would have to look at -- it would look at this number and do that step, unless you told it in advance that it was 0. Okay. Now what? Now we can see the second pivot, which is what? The second pivot -- see, we've eliminated -- x is now gone from this equation, right? We're down to two equations in y and z. And so now I just do it again. Like, everything's recursive at this -- this is like -- such a basic algorithm and you've seen it, but carry me through one last step. So this is still the first pivot. Now the second pivot is this guy, who has appeared there. And what's the multiplier, the appropriate multiplier now? And what's my purpose? Is it to wipe out the 3, 2 position, right? This was the 2, 1 step. And now I'm going to take the 3, 2 step. So this all stays the same, 1 2 1, 0 2 -1 and the pivots are there. Now I'm using this pivot, so what's the multiplier? Two of this row gets subtracted from this row and makes that a 0. So it's 0, 0 and is it a 5? Yeah, I guess it's a 5, is that right? Because I have a one there and I'm subtracting twice this, so I think it's a 5 there. There's the third pivot. So let me put a box around all three pivots. Is there a -- oh, did I just invent a negative one? I'm sorry that the tape can't correct that as easily as I can. Okay. Thank you very much. You get an A in the course now. Is that correct? Is it correct now? Okay. So the three pivots are there -- I know right away a lot about this matrix. This elimination step from A -- this matrix I'm going to call U. U for upper triangular. So the whole purpose of elimination was to get from A to U. And, literally, that's the most common calculation in scientific computing. And people think of how could I do that faster? Because it's a major, major thing. But we're doing it the straightforward way. We found three pivots, and by the way, I didn't say this, pivots can't be 0. I don't accept 0 as a pivot. And I didn't get 0. So this matrix is great. It gave me three pivots, I didn't have to do anything special, I just followed the rules, and the pivots are 1, 2 and 5. By the way, just because I always anticipate stuff from a later day, if I wanted to know the determinant of this matrix -- which I never do want to know, but I would just multiply the pivots. The determinant is 10. So even things like the determinant are here.
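Here is a minimal sketch of the forward elimination just described. It is not from the lecture (which mentions MATLAB); it assumes NumPy and assumes no row exchanges are needed, which is true for this particular matrix.

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 1.]])

U = A.copy()
n = len(U)
for col in range(n - 1):
    pivot = U[col, col]                    # pivots must be nonzero for this to work
    for row in range(col + 1, n):
        multiplier = U[row, col] / pivot   # e.g. 3 for the 2,1 step, 2 for the 3,2 step
        U[row] -= multiplier * U[col]      # subtract a multiple of the pivot row

print(U)                    # [[1. 2. 1.], [0. 2. -2.], [0. 0. 5.]] -- upper triangular
print(np.diag(U))           # the pivots 1, 2, 5
print(np.prod(np.diag(U)))  # their product, 10, is the determinant
```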
Okay. Now -- oh, let me talk about failure for a moment, and then -- and then come back to success. How could this have failed? How could -- by fail, I mean fail to come up with three pivots. I mean, there are a couple of points. I would have already been in trouble if this very first number here was 0. If it was a 0 there -- suppose that had been a 0, there were no x's in that equation -- first equation. Does that mean I can't solve the problem? Does that mean I quit? No. What do I do? I switch rows. I exchange rows. So in case of a 0, I will not say 0 pivot. I will never be heard to utter those words, 0 pivot. But if there's a 0 in the pivot position, maybe I can say that, I would try to exchange for a lower equation and get a proper pivot up there. Okay. Now, for example, this second pivot came out two. Could it have come out 0? What -- actually, if I change that 8 a little bit, I would have got a little trouble. What should I change that 8 to so that I run into trouble? A 6. If that had been a 6, then this would have been 0 and I couldn't have used that as the pivot. But I could have exchanged again, in this case. Because when can I get out of trouble? I can get out of trouble if there's a nonzero below this troublesome 0. And there is here. So I would be okay in this case. If this was a 6, I would survive by a row exchange. Now -- of course, it might have happened that I couldn't do the row exchange, that there were 0s below it, but here there wasn't. Now, I could also have got in trouble if this number 1 was a little different. See, that 1 became a 5, I guess, by the end. So can you see what number there would have got me trouble that I really couldn't get out of? Trouble that I couldn't get out of would mean if 0 is in the pivot position and I've got no place to exchange. So there must be some number which, if I had had it here, would have meant failure. Negative 4, good. If it was a negative 4 here -- if it happened to be a negative 4, I'll temporarily put it up here. If this had been a negative 4 z, then I would have gone through the same steps. This would have been a minus 4, it still would have been a minus 4. But at the last minute it would have become 0. And there wouldn't have been a third pivot. The matrix would have not been invertible. Well, of course, the inverse of a matrix is coming next week, but, you've heard these words before. So, that's how we identify failure. There's temporary failure, when we can do a row exchange and get out of it, or there's complete failure, when we get a 0 and there's nothing below that we can use. Okay.
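Those two trouble spots can be checked with a small sketch. This routine is an illustration only, not from the lecture; it assumes NumPy and exchanges rows when it finds a 0 in the pivot position, giving up only when there is nothing nonzero below.

```python
import numpy as np

def pivots(A):
    """Forward elimination with row exchanges; returns the pivots or raises if one is missing."""
    U = A.astype(float).copy()
    n = len(U)
    for col in range(n):
        if U[col, col] == 0:                          # 0 in the pivot position
            below = np.flatnonzero(U[col + 1:, col])
            if below.size == 0:
                raise ValueError("no pivot available -- matrix is not invertible")
            swap = col + 1 + below[0]
            U[[col, swap]] = U[[swap, col]]           # a row exchange rescues the pivot
        for row in range(col + 1, n):
            U[row] -= (U[row, col] / U[col, col]) * U[col]
    return np.diag(U)

# 8 changed to 6: a temporary failure, fixed by a row exchange.
print(pivots(np.array([[1, 2, 1], [3, 6, 1], [0, 4, 1]])))
# corner 1 changed to -4: complete failure, no third pivot.
try:
    print(pivots(np.array([[1, 2, 1], [3, 8, 1], [0, 4, -4]])))
except ValueError as err:
    print(err)
```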
Let's stay with -- back to success now. In fact, I guess the next topic is back substitution. So what's back substitution? Well, now I'd better bring the right-hand side in. So what would MATLAB do and what should we do? Let me bring in the right-hand side as an extra column. So there comes b. So it's 2, 12, 2. I would call this the augmented matrix. "Augment" means you've tacked something on. I've tacked on this extra column. Because, when I'm working with equations, I do the same thing to both sides. So, at this step, I subtracted 3 of the first equation away from the second equation so that this augmented -- I even brought some colored chalk, but I don't know if it shows up. So this is like the augmented -- no! Damn, circled the wrong thing. Okay. Here is b. Okay, that's the extra column. Okay. So what happened to that extra column, the right-hand side of the equations, when I did the first step? So that was 3 of this away from this, so it took -- the 2 stayed the same, but three 2s got taken away from 12, leaving 6, and that 2 stayed the same. So this is how it's looking halfway along. And let me just carry to the end. The 2 and the 6 stay the same, but -- what do I have here? Oh, gosh. Help me out, now. What -- so now I'm -- this is still like forward elimination. I got to this point, which I think is right, and now what did I do at this step? I multiplied that pivot by 2 or that whole equation by 2 and subtracted from that, so I think I take two 6s, which is 12, away from the 2. Do you think minus 10 is my final right-hand side -- the right-hand side that goes with U, and let me call that once and forever the vector c. So c is what happens to b, and U is what happens to A. Okay. There you've seen elimination clean. Okay. Oh, what's back substitution? So what are my final equations, then? Can I copy these equations? x+2y+z=2 is still there and 2y-2z=6 is there, and 5z=-10. Okay. Those are the equations that these numbers are telling me about. Those are the equations Ux = c. Okay, how do I solve them? Which one do I solve for first? z. I see immediately that the correct value of z is negative 2. And what do I do next? I go back upwards. I now know z here. So, if z is negative 2, that's 4 there, is that right? And so 2y plus a 4 is 6, maybe y is 1. Going -- this is back substitution. We're doing it on the fly because it's so easy. And then x is -- so x plus 2y plus z is 2, and 2y plus z is 2 minus 2, so maybe x is 2? So you see what back substitution is. It's the simple step of solving the equations in reverse order, because the system is triangular. Okay. Good. So that's elimination and back substitution, and I kept the right-hand side along.
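A minimal sketch of the augmented matrix and the back substitution just done, again assuming NumPy; the two row operations and the reverse-order solve mirror the steps on the board.

```python
import numpy as np

# Augmented matrix [A | b]: do the same row operations to both sides.
Ab = np.array([[1., 2., 1.,  2.],
               [3., 8., 1., 12.],
               [0., 4., 1.,  2.]])
Ab[1] -= 3 * Ab[0]           # the 2,1 step
Ab[2] -= 2 * Ab[1]           # the 3,2 step
U, c = Ab[:, :3], Ab[:, 3]   # c = [2, 6, -10]

# Back substitution: solve in reverse order because U is triangular.
x = np.zeros(3)
for i in reversed(range(3)):
    x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
print(x)                     # [ 2.  1. -2.]  so x = 2, y = 1, z = -2
```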
Okay, now what do I -- that, like, is the first piece of the lecture. What's the second piece? Matrices are going to get in. So I wrote stuff with x's, y's and z's in there, then I really got the right shorthand, just writing the matrix entries, and now I want to write the operations that I did in matrices, right? I've carried the matrices along, but I haven't said what the operations were. Those elimination steps, I now want to express as matrices. Okay. Here they come. So now this is elimination matrices. Okay. Let me take that first step, which took me from 1 2 1 3 8 1 0 4 1. I want to operate on that -- I want to do elimination on that. Okay. Okay, now I'm remembering a point I want to single out as especially important. Let me move the board up for that. Because when we do matrix operations, we've got to, like, be able to see the big picture. Okay. Last time, I spoke about the big picture of -- when I multiply a matrix by a right-hand side. If I have some matrix there and I multiply it by 3 4 5, let's say -- so here's a matrix -- what did I say -- well, I guess I only said it on the videotape, but -- do you remember how I look at that matrix multiplication? The result of multiplying a matrix by some vector is a combination of the columns of the matrix. It's 3 times the first column. It's 3 times column one plus 4 times column two plus 5 times column three. Okay. I'm going to come back to that multiple times. What I wanted to do now was to emphasize the parallel thing with rows. Why? Because all our operations here for this two weeks of the course are row operations. So this isn't what I need for row operations. Let me do a row operation. Suppose I have my matrix again and suppose I multiply on the left by some -- let's say 1 2 7. Again, I'm just, like, saying what the result is. And then we'll say how matrix multiplication works and we'll see that it's true. Okay. But maybe already I'm making -- I'm sort of bringing up -- the central idea of linear algebra is how these matrices work by rows as well as by columns. Okay. How does it work by rows? What -- so that's a row vector. I could say that's a one by three matrix, a row vector multiplying a three by three matrix. What's the output? What's the product of a row times a matrix? And -- okay, it's a row. A row -- a column -- I'm sorry. A matrix times a column is a column. So matrix times a -- yeah. Matrix times a column is a column. And we know what column it is. Over here, I'm doing a row times a matrix. And what's the answer? It's one of that first row, so it's 1 times -- 1 times row one, plus 2 times row two plus 7 times row three. When -- as we do matrix multiplication, keep your eye on what it's doing with whole vectors. And what it's doing -- what it's doing in this case is it's combining the rows. And we have a combination, a linear combination of the rows. Okay, I want to use that. Okay, so my question is what's the matrix that does this first step, that takes -- subtracts 3 of equation one from equation two? That's what I want to do. So this is going to be a matrix that's going to subtract 3 times row one from row two, and leave the other rows the same. Just in -- I mean, the answer is going to be that. So whatever matrix this is -- and you're going to, like, tell me what matrix will do it, it's the matrix that leaves the first row unchanged, leaves the last row unchanged, but takes 3 of these away from this so it puts a 0 there, a 2 there and a minus 2. Good. What matrix will do it? It should be a pretty simple matrix, because we're doing a very simple step. We're just doing this step that changes row two. So actually, row one is not changing. So tell me how the matrix should begin. One -- the first row of the matrix will be 1 0 0, because that's just the right thing that takes one of that row and none of the other rows, and that's what we want. What's the last row of the matrix? 0 0 1, because that takes one of the third row and none of the other rows, that's great. Okay. Now, suppose I didn't want to do anything at all. Suppose my row -- well, I guess maybe I had a case here when I already had a 0 and didn't have to do anything. What matrix does nothing, like, just leaves you where you were? If I put in -- if I put in 0 1 0, that would be -- that would be -- that's the matrix -- what's the name of that matrix? The identity matrix, right. So it does absolutely nothing. It just multiplies everything and leaves it where it is. It's like a one, like the number one, for matrices. But that's not what we want, because we want to change this row to -- so what's the correct -- what should I put in here now to do it right? I want to get -- what do I want? I'm after -- I want 3 of row one to get subtracted off. So what's the right matrix? Finish that matrix for me. Negative 3 goes here. And what goes here? That 1. And what goes here? The 0. That's the good matrix. That's the matrix that takes minus 3 of row one plus the row two and gives the new row two. Should we just, like, check some particular entry? How do I check a particular entry of a matrix in matrix multiplication? Like, suppose I wanted to check the entry here that's in row two, column three. So where does the entry in row two, column three come from? I would look at row two of this guy and column three of this one to get that number. That number comes from the second row and the third column, and I just take this dot product: minus 3 plus 1 plus 0 gives the minus 2. Yeah. It works.
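Both points just made -- a row times a matrix is a combination of the rows, and the elementary matrix E21 does the first elimination step -- can be checked with a short sketch (NumPy assumed; not part of the lecture):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
row = np.array([1, 2, 7])
print(row @ M)                          # a row times a matrix ...
print(1 * M[0] + 2 * M[1] + 7 * M[2])   # ... is a combination of the rows (same answer)

E21 = np.array([[ 1, 0, 0],             # leave row one
                [-3, 1, 0],             # new row two = -3 (row one) + row two
                [ 0, 0, 1]])            # leave row three
A = np.array([[1, 2, 1],
              [3, 8, 1],
              [0, 4, 1]])
print(E21 @ A)                          # [[1 2 1], [0 2 -2], [0 4 1]]
```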
So we got various ways to multiply matrices now. We're sort of, like -- informally. We've got by columns, we've got -- well, we will have by columns, by rows, by each entry at a time. But it's good to see that matrix multiplication when one of the matrices is so simple. So this guy is our elementary matrix. Let's call it E for elementary or elimination. And let me put the indexes 2, 1, because it's the matrix that we needed to fix the 2, 1 position. It's the matrix that we needed to get this 2, 1 position to be 0. Okay. Good enough. So what do I do next? I need another matrix, right? I need to -- there's another step here. And I want to express the whole elimination process in matrix language. So tell me what -- so next step, step two, which was what? Subtract -- what was -- what was the actual step that we did? I think I subtracted -- do you remember? I had a 2 in the pivot and a 4 below it, so I subtracted two times row two from row three. From row three. Tell me the matrix that will do that. And tell me its name. Okay, it's going to be E, for elementary or elimination matrix, and what's the index number that I used to tell me what E -- 3, 2, right? Because it's fixing this 3, 2 position. And what's the matrix, now? Okay, you remember -- so E 3 2 is supposed to multiply my guy that I have and it's supposed to produce the right result, which was -- it leaves -- it's supposed to leave the first row, it's supposed to leave the second row and it's supposed to straighten out that third row to this. And what's the matrix that does that? 1 0 0, right? Because we don't change the first row, and the next row we don't change either, and the last row is the one we do change. And what do I do? Let's see, I subtract two times -- so what's this row? What's this here? 0, right, because the first row's not involved. It's just in the 3, 2 position, isn't it? The key number is this minus-the-multiplier that sits there in that 3, 2 position. It's a minus 2, to subtract 2, and then this is a 1, so that the overall effect is to take minus 2 of this row plus 1 of that. Okay. So, I've now given you the pieces, the elimination matrices, the elementary matrices that take each step. So now what? Now the next point in the lecture is to put those steps together into a matrix that does it all and see how it all happens. So now I'm going to express the whole -- everything we did today so far on A was to start with A, we multiplied it by E 2 1, that was the first step -- and then we multiplied that result by E 3 2 and that led us to this thing and what was that matrix? U. You see why I like matrix notation, because there in, like, little space -- a few bits when it's compressed on the web -- is everything -- is this whole lecture.
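A sketch of the two elementary matrices acting in sequence, E32 (E21 A) = U, assuming NumPy; it is not from the lecture, but it reproduces the numbers above.

```python
import numpy as np

A   = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
E21 = np.array([[1, 0, 0], [-3, 1, 0], [0,  0, 1]])   # fixes the 2,1 position
E32 = np.array([[1, 0, 0], [ 0, 1, 0], [0, -2, 1]])   # fixes the 3,2 position

U = E32 @ (E21 @ A)
print(U)            # [[1 2 1], [0 2 -2], [0 0 5]] -- upper triangular, pivots 1, 2, 5
```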
Okay. Now there -- there are important facts about matrix multiplication. And we're close to maybe the most important. And that is this. Suppose I ask you this question. Suppose I start with a matrix A and I want to end with a matrix U and I want to say what matrix does the whole job? What matrix takes me from A to U, using the letters I've got? And the answer is simple. I'm not asking this as -- but it's highly important. How would I create the matrix that does the whole job at once, that does all of elimination in one shot? It would be -- I would just put these together, right? In other words, this is the thing I'm struggling to say. I can move those parentheses. If I keep the matrices in order -- I can't mess around with the order of the matrices, but I can change the order that I do the multiplications. I can multiply these two first -- in other words, you see what those parentheses are doing? It's saying -- multiply the Es first, and that gives you the matrix that does everything at once. Okay. So this fact, that this is automatically the same as this -- for every matrix multiplication, which I'm conscious of still not telling you in every detail, but, like, you're seeing how it works -- and this is highly important -- and maybe tell me the long word that describes this law for matrices, that you can move the parentheses? It's called the associative law. I think you can now forget that. But don't forget the law. I mean, like, forget the word associative. I don't know. But don't forget the law. Because actually, we'll see so many steps in linear algebra, so many proofs, even, of main facts come from just moving the parentheses. And it's not that easy to prove that this is correct. You have to go into the gory details of matrix multiplication, do it both ways and see that you come out the same. Maybe I'll leave the author to do that. Okay. So there we go. So there's a single matrix, I could call it E -- while we're talking about these matrices, tell me one other -- there's another type of elementary matrix, and we already said why we might need it. We didn't need it in this case. But it's the matrix that exchanges two rows. It's called a permutation matrix. Can you just, like, tell me what that would be? So I'm just -- like, this is a slight digression and we'll -- yes, so let me get some -- let me figure out where I'm going to put a permutation matrix. You'll see I'm always squeezing stuff in. So permutation. Or, in fact, this one will, like, exchange rows -- shall I exchange rows one and two, just to make life easy? So if I had my matrix -- no, let -- let me just do two by two. |a b; c d|. Suppose I want to find the matrix that exchanges those rows. What is it? So the matrix that exchanges those rows -- the row I want is c d and it's there. So I better take one of it. And the row I want here is up top, so I'll take one of that. So actually, I'm just -- the easy way -- this is my matrix that I'll call P, for permutation. It's the matrix -- actually, the easy way to find it is just do the thing to the identity matrix. Exchange the rows of the identity matrix and then that's the matrix that will do row exchanges for you.
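A sketch of the associative law and of a row-exchange permutation matrix (NumPy assumed; the 2 by 2 matrix of numbers stands in for |a b; c d|):

```python
import numpy as np

A   = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
E21 = np.array([[1, 0, 0], [-3, 1, 0], [0,  0, 1]])
E32 = np.array([[1, 0, 0], [ 0, 1, 0], [0, -2, 1]])

# Associative law: move the parentheses, but keep the order.
print(np.array_equal(E32 @ (E21 @ A), (E32 @ E21) @ A))   # True
print(E32 @ E21)          # the single matrix that does all of elimination at once

# Permutation: exchange the rows of the identity, then multiply on the left.
P = np.array([[0, 1],
              [1, 0]])
M = np.array([[1, 2],     # stands in for |a b; c d|
              [3, 4]])
print(P @ M)              # [[3 4], [1 2]] -- rows exchanged
```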
Suppose I wanted to exchange columns instead. Columns have hardly got into today's lecture, but they certainly are going to be around. How could I -- if I started with this matrix |a b; c d| then I wouldn't -- I'm not even going to write this down, I'm just going to ask you, because in elimination, we're doing rows. But suppose we wanted to exchange the columns of a matrix. How would I do that? What matrix multiplication would do that job? Actually, why not? I'll write it down. So this is -- I'll write it under here and then hide it again. Okay. Suppose I had my matrix |a b; c d| and I want to get to a c over here and b d here. What matrix does that job? Can I multiply -- can I cook up some matrix that produces that answer? You can see from where I put my hand I was really asking can I put a matrix here on the left that will exchange columns? And the answer is no. I'm just bringing out again this point that when I multiply on the left, I'm doing row operations. So if I want to do a column operation, where do I put that permutation matrix? On the right. If I put it here, where I just barely left room for it -- so I'll exchange the two columns of the identity. Then it comes out right, because now I'm multiplying a column at a time. This is the first column and says take one -- take none of that column, one of this one, and then you've got it. Over here, take one of this one, none of this one, and you've got a c. So, in short, to do column operations, the matrix multiplies on the right. To do row operations, it multiplies on the left. Okay, okay, and it's row operations that we're really doing. Okay. And of course, I mentioned in passing, but I better say it very clearly, that you can't exchange the order of matrices. And that's just the point I was making again here. A times B is not the same as B times A. You have to keep these matrices in their Gauss-given order here, right? But you can move the parentheses. So, in other words, the commutative law, which would allow you to take it in the other order, is false. So we have to keep it in that order. Okay. So what next? I could do this multiplication. I could do E 3 2. So let me come back to see what that was. Here was E 2 1. And here is E 3 2. And if I multiply those two matrices together -- E 3 2 and then E 2 1, I'll get a single matrix that does elimination. I don't want to do it that way -- if I do that multiplication -- there -- there's a better way to do this. And so in these last few minutes of today's lecture, can I anticipate that better way? The better way is to think not how do I get from A to U, but how do I get from U back to A? So reversing steps is going to come in. Inverse -- I'll use the word inverse here. Okay. So let me make the first step at what's the inverse matrix? All the matrices you've seen on this board have inverses. I didn't write any bad matrices down. We spoke about possible failure, and for a moment, we put in a matrix that would fail. But right now, all these matrices are good, they're all invertible. And let's take the inverse -- well, let me say first what does the inverse mean, and find it. Okay. So we're getting a little leg up on inverses. Okay, so this is the final moments of today. Sorry, he's still there. Okay. Inverses. Okay, and I'm just going to take one example and then we're done. The example I'll take will be that E. So my matrix is 1 0 0, minus 3 1 0, 0 0 1. And I want to find the matrix that undoes that step. So what was that step? The step was subtract 3 times row one from row two. So what matrix will get me back? What matrix will bring back -- you know, if I started with a 2 12 2 and I changed it to a 2 6 2 because of this guy, I want to get back to the 2 12 2. I want to find the matrix which undoes elimination, the matrix which multiplies this to give the identity. And you can tell me what I should do in words first, and then we'll write down the matrix that does it. If this step subtracted 3 times row one from row two, what's the inverse step? I add 3 times row one to row two, right? I add it back. What I subtracted away, I add back. So the inverse matrix in this case is -- I now want to add 3 times row one to row two, so I won't change row one, I won't change row three, and I'll add 3 times row one to row two. That's a case where the inverse is clear. It's clear in words what to do, because what this did was simple to express. It just changed row two by subtracting 3 of row one. So to invert it, I go that way. And if we do that calculation, 3 times this row plus 1 times this row comes out the right row of the identity.
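Finally, a sketch of the last two points: a permutation on the right exchanges columns, and the inverse of E21 adds back what E21 subtracted (NumPy assumed, not from the lecture):

```python
import numpy as np

M = np.array([[1, 2],      # stands in for |a b; c d|
              [3, 4]])
P = np.array([[0, 1],
              [1, 0]])     # the identity with its columns exchanged
print(M @ P)               # [[2 1], [4 3]] -- columns exchanged: multiply on the right

E21     = np.array([[ 1, 0, 0], [-3, 1, 0], [0, 0, 1]])   # subtract 3 (row one) from row two
E21_inv = np.array([[ 1, 0, 0], [ 3, 1, 0], [0, 0, 1]])   # add 3 (row one) back to row two
print(E21_inv @ E21)       # the identity matrix
```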
Okay, so inverses are an -- so if this matrix was E and this matrix is I for identity, then what's the notation for this guy? E to the minus one. E inverse. Okay. Let's stop there for today. That's a little jump on what's coming on Monday. So, see you Monday.
Info
Channel: MIT OpenCourseWare
Views: 1,671,756
Rating: 4.9345784 out of 5
Keywords: matrix theory, linear algebra, systems of equations, vector spaces, determinants, eigenvalues, similarity, positive definite matrices, least-squares approximations, networks, Fourier transforms, Markov processes
Id: QVKj3LADCnA
Length: 47min 41sec (2861 seconds)
Published: Wed May 06 2009