OK, here is lecture
ten in linear algebra. Two important things
to do in this lecture. One is to correct an
error from lecture nine. So the blackboard with that
awful error is still with us. And the second,
the big thing to do is to tell you about
the four subspaces that come with a matrix. We've seen two subspaces,
the column space and the null space. There's two to go. First of all, and
this is a great way to recap and correct
the previous lecture -- so you remember I
was just doing R^3. I couldn't have taken a
simpler example than R^3. And I wrote down
the standard basis -- the obvious basis
for the whole three dimensional space. And then I wanted
to make the point that there was nothing special,
nothing about that basis that another basis
couldn't have. It could have
linear independence, it could span a space. There's lots of other bases. So I started with these vectors,
one one two and two two five, and those were independent. And then I said
three three seven wouldn't do, because three
three seven is the sum of those. So in my innocence, I
put in three three eight. I figured probably if three
three seven is in the plane with these two -- which I know it is -- then probably three three
eight sticks a little bit out of the plane and it's
independent and it gives a basis. But after class, to my
sorrow, a student tells me, "Wait a minute, that third vector, three three eight, is not independent." And why did she say that? She didn't actually take the time -- didn't have to -- to find what combination of this one and this one gives three three eight. She did something else. In other words,
she looked ahead, because she said, wait a minute,
if I look at that matrix, it's not invertible. That third column can't be
independent of the first two, because when I look
at that matrix, it's got two identical rows. I have a square matrix. Its rows are
obviously dependent. And that makes the
columns dependent. So there's my error. When I look at the matrix A
that has those three columns, those three columns
can't be independent because that matrix
is not invertible because it's got two equal rows. And today's lecture will
reach the conclusion, the great conclusion,
that connects the column space with the row space. So the row space
is now going to be another one of my fundamental subspaces. The row space of this matrix,
or of this one -- well, the row space of this one is OK,
but the row space of this one, I'm looking at the rows of
the matrix -- oh, anyway, I'll have two equal rows and
the row space will be only two dimensional. The rank of the matrix with
these columns will only be two. So only two of those columns can be independent too. The rows tell me something about the columns -- in other words, something that I should have noticed and I didn't. OK.
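(Her check is easy to replay on a machine. Here is a quick sketch in Python with SymPy -- not something from the lecture, just a confirmation; the columns are the three vectors from the board:)

```python
from sympy import Matrix

# Columns (1,1,2), (2,2,5), (3,3,8) -- the attempted basis for R^3.
A = Matrix([[1, 2, 3],
            [1, 2, 3],
            [2, 5, 8]])

print(A.det())    # 0: two equal rows make the matrix singular
print(A.rank())   # 2: only two independent columns (and rows)
```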
So now let me pin down these four fundamental subspaces. This is really the heart of
this approach to linear algebra, to see these four subspaces,
how they're related. So what are they? The column space, C of A. The null space, N of A. And now comes the row
space, something new. The row space, what's in that? It's all combinations
of the rows. That's natural. We want a space, so we have
to take all combinations, and we start with the rows. So the rows span the row space. Are the rows a basis
for the row space? Maybe so, maybe no. The rows are a basis for the row
space when they're independent, but if they're dependent,
as in this example, my error from last
time, they're not -- those three rows
are not a basis. The row space wouldn't --
would only be two dimensional. I only need two
rows for a basis. So the row space,
now what's in it? It's all combinations
of the rows of A. All combinations
of the rows of A. But I don't like working
with row vectors. All my vectors have
been column vectors. I'd like to stay
with column vectors. How can I get to column
vectors out of these rows? I transpose the matrix. So if that's OK with you,
I'm going to transpose the matrix. I'm going to
say all combinations of the columns of A transpose. And that allows me to use the
convenient notation, the column space of A transpose. Nothing, no mathematics
went on there. We just got some vectors that
were lying down to stand up. But it means that we
can use this column space of A transpose, that's
telling me in a nice matrix notation what the row space is. OK. And finally is
another null space. The fourth fundamental
space, the fourth guy, will be the null space of A transpose. And of course my notation
is N of A transpose. That's the null
space of A transpose. Eh, we don't have a perfect
name for this space, connecting with A, but our usual
name is the left null space, and I'll show you
why in a moment. So often I call this the -- just to write that word --
the left null space of A. So just the way we
have the row space of A and we switch it to the
column space of A transpose, so we have this space
of guys that I call the left null space
of A, but the good notation is it's the null
space of A transpose. OK. Those are four spaces. Where are those spaces? What big space are they in when A is m by n? In that case, the
null space of A, what's in the null space of A? Vectors with n components,
solutions to A x equals zero. So the null space
of A is in R^n. What's in the column space of A? Well, columns. How many components
do those columns have? m. So this column space is in R^m. What about the column
space of A transpose, which are just a disguised
way of saying the rows of A? The rows of A, in this
three by six matrix, have six components,
n components. The column space is in R^n. And the null space
of A transpose, I see that this fourth space
is already getting second-class citizen treatment, and it doesn't deserve it. It should be there, it is there, and it shouldn't be squeezed. The null space of A transpose -- well, if the null space of A
had vectors with n components, the null space of A
transpose will be in R^m. I want to draw a picture
of the four spaces. OK. Here are the four spaces. Let me put n dimensional
space over on this side. Then which were the
subspaces in R^n? The null space was one, and the row space was the other. So here -- can I make this picture of the row space? And this kind of picture of the null space? That's just meant
to be a sketch, to remind you that they're in
this space. What type of vectors are in it? Vectors with n components. Over here, inside, consisting
of vectors with m components, is the column space
and what I'm calling the null space of A transpose. Those are the ones
with m components. OK. To understand these spaces
is our job now. Because by understanding
those spaces, we know everything about
this half of linear algebra. What do I mean by
understanding those spaces? I would like to know a
basis for those spaces. For each one of those
spaces, how would I create -- construct a basis? What systematic way
would produce a basis? And what's their dimension? OK. So for each of
the four spaces, I have to answer those questions. How do I produce a basis? That has
a somewhat long answer. And what's the dimension,
which is just a number, so it has a real short answer. Can I give you the
short answer first? I shouldn't do it,
but here it is. I can tell you the dimension
of the column space. Let me start with this guy. What's its dimension? I have an m by n matrix. The dimension of the
column space is the rank, r. We actually got to that at
the end of the last lecture, but only for an example. So I really have to say,
OK, what's going on there. I should produce
a basis and then I just look to see how many
vectors I needed in that basis, and the answer will be r. Actually, I'll do that,
before I get on to the others. What's a basis for
the column space? We've done all the
work of row reduction, identifying the pivot
columns, the ones that end up with pivots. But the pivot columns
I'm interested in are columns of A, the original A. And those pivot columns,
there are r of them. The rank r counts those. Those are a basis. So if I answer this question
for the column space, the answer will be a
basis is the pivot columns and the dimension is the rank
r, and there are r pivot columns, and everything is great. OK. So that space we pretty well understand. I have a little going back to do, to prove that this is the right answer, but you know it's the right answer.
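(As a sketch of that recipe in SymPy, using for concreteness the 3 by 4 matrix that appears later in this lecture: rref() reports the pivot column indices, and the basis is read out of the original A, not out of R.)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

_, pivots = A.rref()                 # indices of the pivot columns
basis = [A.col(j) for j in pivots]   # pivot columns of the ORIGINAL A
print(pivots)                        # (0, 1) -- so the rank r is 2
for v in basis:
    print(v.T)                       # [1, 1, 1] and [2, 1, 2]
```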
Now let me look at the row space. OK. Shall I tell you the
dimension of the row space? Yes. Before we do even an
example, let me tell you the dimension of the row space. Its dimension is also r. The row space and the column
space have the same dimension. That's a wonderful fact. The dimension of the column
space of A transpose -- that's the row space -- is r. That space is r dimensional. And so is this one. OK. That's the sort of insight that got used in this example. If those are the three
columns of a matrix -- let me make them the three
columns of a matrix by just erasing some brackets. OK, those are the three
columns of a matrix. The rank of that matrix,
if I look at the columns, it wasn't obvious to me anyway. But if I look at the
rows, now it's obvious. The row space of
that matrix obviously is two dimensional, because
I see a basis for the row space, this row and that row. And of course,
strictly speaking, I'm supposed to transpose
those guys, make them stand up. But the rank is two, and
therefore the column space is two dimensional by
this wonderful fact that the row space and column
space have the same dimension. And therefore there are only
two pivot columns, not three, and those three columns are dependent. OK. Now let me bury that error
and talk about the row space. Well, I'm going to give you the
dimensions of all the spaces. Because that's
such a nice answer. OK. So let me come back here. So we have this great
fact to establish, that the row space, its
dimension is also the rank. What about the null space? OK. What's a basis for
the null space? What's the dimension
of the null space? I'll put that answer
up here for the null space. Well, how have we
constructed the null space? We took the matrix A, we
did those row operations to get it into a form
U, or even further. We got it into the
reduced form R. And then we read off
special solutions. And every special solution
came from a free variable. And those special solutions
are in the null space, and the great thing is
they're a basis for it. So for the null space, a basis
will be the special solutions. And there's one for every
free variable, right? For each free variable, we give
that variable the value one, the other free variables zero. We solve for the pivot variables, and we get a special solution. So we get altogether
n-r of them, because that's the
number of free variables. This r is the number of pivot variables, and this n-r is the number of free variables. So the beauty is that those
special solutions do form a basis and tell us immediately
that the dimension of the null space is n -- I better write this well,
because it's so nice -- n-r. And do you see the nice thing? That the two dimensions in
this n dimensional space, one subspace is r dimensional -- to be proved, that's
the row space. The other subspace
is n-r dimensional, that's the null space. And the two dimensions together give n. The sum of r and n-r is n. And that's just great. It's really copying the fact that we have n variables: r of them are pivot variables, n-r of them are free variables, and n altogether. OK.
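(In the same SymPy sketch, nullspace() returns exactly the special solutions, one per free variable, so the count n - r can be watched directly:)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])       # n = 4 columns, rank r = 2

special = A.nullspace()           # one special solution per free variable
print(len(special))               # 2 = n - r
for v in special:
    print(v.T, (A * v).T)         # each one really solves A x = 0
```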
And now, what's the dimension of this poor misbegotten fourth subspace? It's got to be m-r. The dimension of this left null
space, left out practically, is m-r. Well, that's really just
saying that this -- again, the sum of that plus that
is m, and m is correct, it's the number of
columns in A transpose. A transpose is just
as good a matrix as A. It just happens to be n by m. It happens to have m columns,
so it will have m variables when I solve A transpose y equals 0; r of them will be pivot variables and m-r will be free variables. A transpose is as good a matrix as A. It follows the same rule: this dimension plus this dimension adds up to the number of columns. And over here, A transpose has m columns. OK.
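(All four counts at once, as a sketch -- nothing new here, just the statement r, r, n - r, m - r in code:)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
m, n = A.shape                        # 3, 4
r = A.rank()                          # 2

print(A.rank(), A.T.rank())           # dim C(A) = dim C(A^T) = r = 2
print(len(A.nullspace()), n - r)      # dim N(A)   = n - r = 2
print(len(A.T.nullspace()), m - r)    # dim N(A^T) = m - r = 1
```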
So I gave you the easy answer, the dimensions. Now can I go back
to check on a basis? We would like to think
about, say, the row space, because we've got a basis
for the column space. The pivot columns give a
basis for the column space. Now I'm asking you to
look at the row space. You could say, OK, I
can produce a basis for the row space by transposing my
matrix, making those columns, then doing elimination,
row reduction, and checking out the pivot
columns in this transposed matrix. But that means you
had to do all that row reduction on A transpose. It ought to be possible,
if we take a matrix A -- let me take the matrix -- maybe
we had this matrix in the last lecture -- its columns are 1 1 1, 2 1 2, 3 2 3, and 1 1 1. OK. That matrix was so easy. We spotted its pivot columns,
one and two, without actually doing row reduction. But now let's do
the job properly. So I subtract this away
from this to produce a zero. So 1 2 3 1 is fine. Subtracting that away leaves me 0 -1 -1 0, right? And subtracting that from the
last row, oh, well that's easy. OK? I'm doing row reduction. Now I've -- the first
column is all set. The second column I
now see the pivot. And I can clean up, if I -- actually, OK. Why don't I make
the pivot into a 1. I'll multiply that row through
by -1, and then I have 0 1 1 0. That was an elementary
operation I'm allowed, multiply a row by a number. And now I'll do elimination. Two of those away from that
will knock this guy out and make this into a 1. So that's now a 0 and that's a 1. Done. That's R. I'm seeing the identity matrix here, I'm seeing zeros below, and I'm seeing F there. OK.
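(SymPy's rref() reproduces exactly this R, if you want to check the board; a sketch, with the expected output in the comments:)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

R, pivots = A.rref()
print(R)        # [1, 0, 1, 1]
                # [0, 1, 1, 0]   identity in the pivot columns, F beside it
                # [0, 0, 0, 0]
print(pivots)   # (0, 1)
```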
What about its row space? What happened to its row space? Well, let me first ask, because sometimes something does happen. Its column space changed. The column space of R is not
the column space of A, right? Because 1 1 1 is certainly
in the column space of A and certainly not in
the column space of R. I did row operations. Those row operations
preserve the row space. So the column spaces are different. But I believe that they
have the same row space. Same row space. I believe that the row space of
that matrix and the row space of this matrix are identical. They have exactly the
same vectors in them. Those vectors are vectors
with four components, right? They're all combinations
of those rows. Or I believe you
get the same thing by taking all combinations
of these rows. And if true, what's a basis? What's a basis for
the row space of R, and it'll be a basis for the
row space of the original A, but it's obviously a basis
for the row space of R. What's a basis for the
row space of that matrix? The first two rows. So a basis for the
row space of A, or of R, is the first r rows of R. Not of A. Sometimes it's true for
A, but not necessarily. But R, we definitely have a
matrix here whose row space we can identify. The row space is spanned
by the three rows, but if we want a basis
we want independence. So out goes row three. The row space is also spanned
by the first two rows. This guy didn't
contribute anything. And of course over here
this 1 2 3 1 in the bottom didn't contribute anything. We had it already. So here is a basis: 1 0 1 1 and 0 1 1 0. I believe those are
in the row space. I know they're independent. Why are they in the row space? Why are those two
vectors in the row space? Because all those
operations we did, which started with these rows
and took combinations of them -- I took this row minus this
row, that gave me something that's still in the row space. That's the point. When I took a row minus a
multiple of another row, I'm staying in the row space. The row space is not changing. My little basis
for it is changing, and I've ended up with
sort of the best basis. If the columns of the identity
matrix are the best basis for R^3 or R^n, the rows of
this matrix are the best basis for the row space. Best in the sense of being
as clean as I can make it. Starting off with the
identity and then finishing up with whatever has
to be in there. OK. Do you see then that
the dimension is r? For sure, because we've got
r pivots, r non-zero rows. We've got the right
number of vectors, r. They're in the row space,
they're independent. That's it. They are a basis
for the row space. And we can even pin
that down further. How do I know that every
row of A is a combination? How do I know they
span the row space? Well, somebody says, I've
got the right number of them, so they must. But -- and that's true. But let me just say, how
do I know that this row is a combination of these? By just reversing the
steps of row reduction. If I just reverse the steps and
go from A -- from R back to A -- then what am I doing? I'm starting with
these rows, I'm taking combinations of them. After a couple of steps,
undoing the subtractions that I did before, I'm
back to these rows. So these rows are
combinations of those rows. Those rows are
combinations of those rows. The two row spaces are the same. The bases are the same. And the natural
basis is this guy. Is that all right
for the row space? The row space is
sitting there in R in its cleanest possible form. OK.
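(One machine check of that claim, as a sketch: every row of A should lie in the span of the first r rows of R, so stacking a row of A onto that basis must not raise the rank.)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])
R, _ = A.rref()
basis = R[:2, :]                  # the first r = 2 rows of R

for i in range(A.rows):
    stacked = Matrix.vstack(basis, A.row(i))
    print(stacked.rank())         # 2, 2, 2: each row of A adds nothing new
```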
Now what about the fourth guy, the null space of A transpose? First of all, why do I call
that the left null space? So let me save that
and bring that down. OK. So the fourth space is the
null space of A transpose. So it has in it vectors,
let me call them y, so that A transpose y equals 0. If A transpose y
equals 0, then y is in the null space of
A transpose, of course. So this is a matrix times
a column equaling zero. And now, because I want
y to sit on the left and I want A instead
of A transpose, I'll just transpose
that equation. Can I just transpose that? On the right, it makes
the zero vector lie down. And on the left, it's a
product, A, A transpose times y. If I take the transpose, then
they come in opposite order, right? So that's y transpose times
A transpose transpose. But nobody's going to
leave it like that. A transpose
transpose is just A, of course. When I transposed A
transpose I got back to A. Now do you see what I have now? I have a row
vector, y transpose, multiplying A, and
multiplying from the left. That's why I call it
the left null space. But by making it --
putting it on the left, I had to make it into a row
instead of a column vector, and so my convention is
I usually don't do that. I usually stay with A
transpose y equals 0. OK. And you might ask, how do we
get a basis -- or I might ask, how do we get a basis
for this fourth space, this left null space? OK. I'll do it in the example. As always -- not that one. The left null space is not
jumping out at me here. I know which are the
free variables -- the special solutions, but those
are special solutions to A x equals zero, and now I'm
looking at A transpose, and I'm not seeing it here. So -- but somehow you feel that
the work that you did which simplified A to R should have
revealed the left null space too. And it's slightly less
immediate, but it's there. So from A to R, I
took some steps, and I guess I'm interested
in what were those steps, or what were all
of them together. I'm not interested in
what particular ones they were. I'm interested in what
was the whole matrix that took me from A to R. How would you find that? Do you remember
Gauss-Jordan, where you tack on the identity matrix? Let's do that again. So I'll do it above, here. So this is the idea: I take the matrix
A, which is m by n. In Gauss-Jordan, when
we saw him before -- A was a square
invertible matrix and we were finding its inverse. Now the matrix isn't square. It's probably rectangular. But I'll still tack on the
identity matrix, and of course since these have length
m it better be m by m. And now I'll do the reduced row
echelon form of this matrix. And what do I get? The reduced row echelon form
starts with these columns, starts with the first columns,
works like mad, and produces R. Of course, still that
same size, m by n. And we did it before. And then whatever
it did to get R, something else is
going to show up here. Let me call it E, m by m. It's whatever -- do you see
that E is just going to contain a record of what we did? We did whatever it took
to get A to become R. And at the same time,
we were doing it to the identity matrix. So we started with the identity
matrix, we buzzed along. All this row reduction amounted
to multiplying on the left by some matrix, some series
of elementary matrices that altogether gave us one
matrix, and that matrix is E. So all this row reduction stuff
amounted to multiplying by E. How do I know that? It certainly amounted to
multiplying by something. And that something took I to
E, so that something was E. So now look at the
first part, E A is R. No big deal. All I've said is that the row
reduction steps that we all know -- well, taking A to
R, are in some matrix, and I can find out what that
matrix is by just tacking I on and seeing what comes out. What comes out is E. Let's just review the
invertible square case. What happened then? Because I was interested
in it in chapter two also. When A was square and
invertible, I took A I, and I did row elimination. And what was the
R that came out? It was I. So in chapter two, R was I. The
reduced row echelon form of a nice invertible
square matrix is the identity. So if R was I in that case, then
E was A inverse, because E A is I. Good. That was good and easy. Now what I'm saying is
that there still is an E. It's not A inverse any more,
because A is rectangular. It hasn't got an inverse. But there is still some matrix
E that connected this to this -- oh, I should have figured
out in advance what it was. Shoot. I did those steps and sort of erased as I went, and I should have done them to the identity too. Can I do that? I'll keep the identity matrix,
like I'm supposed to do, and I'll do the same operations
on it, and see what I end up with. OK. So I'm starting
with the identity, which I'll write in lightly. OK. What did I do? I subtracted that row from that
one and that row from that one. OK, I'll do that
to the identity. So I subtract that first row
from row two and row three. Good. Then I think I multiplied -- Do you remember? I multiplied row
two by minus one. Let me just do that. Then what did I do? I subtracted two of row
two away from row one. I better do that. Subtract two of
this away from this. That gives minus 1, a plus 2, and 0. I believe that's E. The way to check is to multiply that E by this A, just to see, did I do it right? So I believe E was -1 2 0, 1 -1 0, and -1 0 1. OK. That's my E, that's my A, and that's R. All right.
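(Here is that computation replayed as a sketch, with the lecture's three row operations written as elementary matrices. One caution if you automate it: calling rref() on the whole augmented matrix [A I] would keep pivoting into the identity block, because this A is rank deficient, so the steps are spelled out instead.)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

E1 = Matrix([[1, 0, 0], [-1, 1, 0], [-1, 0, 1]])  # row 1 off rows 2 and 3
E2 = Matrix([[1, 0, 0], [0, -1, 0], [0, 0, 1]])   # multiply row 2 by -1
E3 = Matrix([[1, -2, 0], [0, 1, 0], [0, 0, 1]])   # 2 * row 2 off row 1

E = E3 * E2 * E1
print(E)                     # [-1, 2, 0], [1, -1, 0], [-1, 0, 1]
print(E * A == A.rref()[0])  # True: E A = R
```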
The reason I wanted this blasted E was so that I could figure out the left null space, not only its dimension, which I know. Actually, what is the dimension
of the left null space? So here's my matrix. What's the rank of the matrix? And the dimension of the null
-- of the left null space -- is supposed to be m-r. It's 3 - 2 = 1. I believe that the left null
space is one dimensional. There is one combination
of those three rows that produces the zero row. There's a basis -- a basis for
the left null space has only got one vector in it. And what is that vector? It's here in the last row of E. But we could have
seen it earlier. What combination of those
rows gives the zero row? -1 of that plus one of that. So a basis for the left
null space of this matrix -- I'm looking for combinations
of rows that give the zero row if I'm looking at
the left null space. For the null space, I'm looking
at combinations of columns to get the zero column. Now I'm looking at combinations
of these three rows to get the zero row, and of
course there is my zero row, and here is my vector
that produced it: -1 of that row and one of that row. Obvious. OK.
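(The same basis vector comes straight out of SymPy by solving A transpose y = 0, and it matches the last row of E; a sketch:)

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 1],
            [1, 1, 2, 1],
            [1, 2, 3, 1]])

y = A.T.nullspace()[0]   # solve A^T y = 0; one basis vector, since m - r = 1
print(y.T)               # [-1, 0, 1]: the last row of E
print(y.T * A)           # the zero row -- y^T A = 0
```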
So in that example -- and actually in all examples -- we have seen how to produce a basis for the left null space. I won't ask you that
all the time, because -- it didn't come out
immediately from R. We had to keep track of E
for that left null space. But at least it didn't require
us to transpose the matrix and start all over again. OK, those are the
four subspaces. Can I review them? The row space and the
null space are in R^n. Their dimensions add to n. The column space and the
left null space are in R^m, and their dimensions add to m. OK. So let me close
these last minutes by pushing you a little bit more
to a new type of vector space. All our vector spaces, all the
ones that we took seriously, have been subspaces of some real
three or n dimensional space. Now I'm going to write
down another vector space, a new vector space. Say all three by three matrices. My matrices are the vectors. Is that all right? I'm just naming them. You can put quotes
around vectors. Every three by three matrix
is one of my vectors. Now how am I entitled to
call those things vectors? I mean, they look very
much like matrices. But they are vectors in my
vector space because they obey the rules. All I'm supposed to be able to
do with vectors is add them -- I can add matrices -- I'm supposed to be able to
multiply them by scalar numbers like seven -- well, I can
multiply a matrix by seven. And I can take
combinations of matrices, I can take three of one
matrix minus five of another matrix. And there's
a zero matrix, the matrix that has all zeros in it. If I add that to another
matrix, it doesn't change it. All the good stuff. If I multiply a matrix by
one it doesn't change it. All those eight rules
for a vector space that we never wrote down,
all easily satisfied. So now we have a different -- now of course you can say you
can multiply those matrices. I don't care. For the moment, I'm only
thinking of these matrices as forming a vector space --
so I'm only doing A plus B and c times A. I'm not interested
in A B for now. The fact that I
can multiply is not relevant to a vector space. OK. So I have three
by three matrices. And how about subspaces? Tell me a subspace
of this matrix space. Let me call this matrix space M. That's my matrix space, my space
of all three by three matrices. Tell me a subspace of it. What about the upper
triangular matrices? OK. So, subspaces of M. All upper
triangular matrices. Another subspace. All symmetric matrices. The intersection
of two subspaces is supposed to be a subspace. We gave a little effort
to the proof of that fact. If I look at the matrices
that are in this subspace -- they're symmetric, and
they're also in this subspace, they're upper triangular,
what do they look like? Well, if they're
symmetric but they have zeros below the
diagonal, they better have zeros above the
diagonal, so the intersection would be diagonal matrices.
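(A tiny symbolic sketch of that intersection: take a general symmetric 3 by 3 and impose zeros below the diagonal; what survives is diagonal.)

```python
from sympy import symbols, Matrix

a, b, c, d, e, f = symbols('a b c d e f')
S = Matrix([[a, b, c],
            [b, d, e],
            [c, e, f]])             # a general symmetric 3 x 3

# Upper triangular forces the entries below the diagonal to vanish:
print(S.subs({b: 0, c: 0, e: 0}))   # diag(a, d, f) -- diagonal
```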
That's another subspace, smaller than those. How can I use the word smaller? Well, I'm now entitled
to use the word smaller. I mean, well, one way
to say it is, OK, these are contained in those, and these are contained in those. But more precisely, I could give
the dimension of these spaces. So I could -- we can compute
-- let's compute it next time -- the dimension of the subspace of upper triangular three
by three matrices. The dimension of symmetric
three by three matrices. The dimension of diagonal
three by three matrices. Well, to produce
dimension, that means I'm supposed to produce
a basis, and then I just count how many I needed in the basis. Let me give you the
answer for this one. What's the dimension? The dimension of this
-- say, this subspace, let me call it D, all
diagonal matrices. The dimension of
this subspace is -- as I write, you're
working it out -- three. Because here's a matrix in
this -- it's a diagonal matrix. Here's another one. Here's another one. Better make it diagonal,
let me put a seven there. That was not a
very great choice, but it's three
diagonal matrices, and I believe that
they're a basis. I believe that those three
matrices are independent and I believe that
any diagonal matrix is a combination of those three. So they span the subspace
of diagonal matrices. Do you see that idea?
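(Here is that basis as a sketch, seven and all, with a check that a general diagonal matrix is a combination of the three:)

```python
from sympy import Matrix, symbols

B1 = Matrix([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
B2 = Matrix([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
B3 = Matrix([[0, 0, 0], [0, 0, 0], [0, 0, 7]])   # the lecture's seven

d1, d2, d3 = symbols('d1 d2 d3')
D = Matrix([[d1, 0, 0], [0, d2, 0], [0, 0, d3]])
print(D == d1*B1 + d2*B2 + (d3/7)*B3)            # True: they span D
```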
It's like stretching the idea from R^n to R^(n by n), three by three. But we can still add, we
can still multiply by numbers, and we just ignore the fact that
we can multiply two matrices together. OK, thank you. That's lecture ten.
https://youtu.be/S734XxFd0Qo