Okay. This is lecture five
in linear algebra. And, it will complete
this chapter of the book. So the last section
of this chapter is two point seven that talks
about permutations, which finished the previous
lecture, and transposes, which also came in
the previous lecture. There's a little more to do
with those guys, permutations and transposes. But then the heart of the
lecture will be the beginning of what you could say is the
beginning of linear algebra, the beginning of real linear
algebra which is seeing a bigger picture with vector
spaces -- not just vectors, but spaces of vectors and
subspaces of those spaces. So we're a little ahead
of the syllabus, which is good, because we're
coming to the place where there's a lot to do. Okay. So, to begin with permutations. Can I just -- so these permutations, those
are matrices P and they execute row exchanges. And we may need them. We may have a
perfectly good matrix, a perfect matrix A that's
invertible that we can solve A x=b, but to do it -- I've got to allow myself
that extra freedom that if a zero shows up in the
pivot position I move it away. I get a non-zero. I get a proper pivot there by
exchanging from a row below. And you've seen that
already, and I just want to collect
the ideas together. And in principle, I could even have to do that two times, or more times. So I have to allow -- to complete the theory -- the possibility
that I take my matrix A, I start elimination, I find
out that I need row exchanges and I do it and
continue and I finish. Okay. Then all I want to do is say --
and I won't make a big project out of this -- what happens to A equal L U? So A equal L U -- this was a matrix L with ones
on the diagonal and zeroes above and multipliers
below, and this U we know, with zeroes down here. That's only sometimes possible, though. That description of
elimination assumes that we don't have a P, that we
don't have any row exchanges. And now I just want
to say, okay, how do I account for row exchanges? Because this description doesn't. The P in this factorization
is the identity matrix. The rows were in a good
order, we left them there. Maybe I'll just add a
little moment of reality, too, about how Matlab
actually does elimination. Matlab not only checks whether
that pivot is not zero, as every human would do. It checks whether that pivot is big enough, because it doesn't like
very, very small pivots. Pivots close to zero
are numerically bad. So actually if we ask
Matlab to solve a system, it will do some elimination and some row exchanges which we don't think are necessary. Algebra doesn't say they're
necessary, but accuracy -- numerical accuracy
says they are. Well, we're doing
algebra, so here we will say, well, what
do row exchanges do, but we won't do them
unless we have to. But we may have to. And then, the result is -- it's hiding here. It's the main fact. This is the description of
elimination with row exchanges. So A equal L U
becomes P A equal L U. So this P is the matrix
that does the row exchanges, and actually it does them -- it gets the rows
into the right order, into the good order
where pivots will not -- where zeroes won't appear
in the pivot position, where L and U will come
out right as up here. So, that's the point. Actually, I don't want
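Here is a minimal numpy sketch of that factorization. The matrix is a made-up two by two example with a zero sitting in the first pivot position, so plain A = LU breaks down and one row exchange fixes it:

```python
import numpy as np

# A hypothetical matrix with a zero in the first pivot position,
# so plain A = LU fails and a row exchange is needed.
A = np.array([[0., 1.],
              [2., 3.]])
P = np.array([[0., 1.],
              [1., 0.]])   # exchanges rows 1 and 2

# After the exchange, PA is already upper triangular,
# so L is just the identity: no multipliers were needed.
U = P @ A
L = np.eye(2)
assert np.allclose(P @ A, L @ U)   # PA = LU
```

In bigger examples L picks up genuine multipliers below the diagonal, but the identity PA = LU is the same.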
to labor that point, that a permutation matrix -- and you remember
what those were. I'll remind you from last time
of what the main points about permutation matrices were -- and then just leave
this factorization as the general case. This is -- any
invertible A, we get this. For almost every one,
we don't need a P. But there's that handful
that do need row exchanges, and if we do need
them, there they are. Okay, finally, just to
remember what P was. So permutations, P is
the identity matrix with reordered rows. I include in reordering the
possibility that you just leave them the same. So the identity
matrix is -- okay. That's, like, your basic
permutation matrix -- your do-nothing permutation
matrix is the identity. And then there are the ones
that exchange two rows and then the ones that exchange three
rows and then the ones that exchange four -- well, it gets a little -- it gets more interesting
algebraically if you've got four rows,
you might exchange them all in one big cycle. One to two, two to three,
three to four, four to one. Or you might have -- exchange
one and two and three and four. Lots of possibilities there. In fact, how many possibilities? The answer was n factorial: n! = n(n-1)(n-2)...(3)(2)(1). That's the number of --
this counts the reorderings, the possible reorderings. So it counts all the
n by n permutations. And all those
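That count is easy to check by brute force with Python's standard library -- each ordering of (0, 1, ..., n-1) reorders the rows of the n by n identity, so it names one permutation matrix:

```python
from itertools import permutations
from math import factorial

# the number of n by n permutation matrices is n!
for n in (3, 4, 5):
    assert len(list(permutations(range(n)))) == factorial(n)
```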
matrices have these -- have this nice property
that they're all invertible, because we can bring those rows
back into the normal order. And the matrix that
does that is just P -- is just the same
as the transpose. You might take a
permutation matrix, multiply by its transpose
and you will see how -- that the ones hit the ones and
give the ones in the identity matrix. So this is a -- we'll be highly
interested in matrices that have nice properties. And one property that -- maybe
I could rewrite that as P transpose P is the identity. That tells me in
other words that this is the inverse of that. Okay. We'll be interested in
matrices that have P transpose P equal the identity. There are more of them
than just permutations, but my point right now is that
permutations are like a little group in the middle -- in the center of these
special matrices. Okay. So now we know how
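That transpose property is quick to verify numerically. A sketch with one hypothetical four by four permutation, the single big cycle mentioned above:

```python
import numpy as np

# the cycle one to two, two to three, three to four, four to one,
# built by reordering the rows of the identity
P = np.eye(4)[[3, 0, 1, 2]]

# the ones in P transpose hit the ones in P and give the identity,
# so the transpose is the inverse
assert np.allclose(P.T @ P, np.eye(4))
assert np.allclose(P.T, np.linalg.inv(P))
```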
many there are. Twenty four in the case of
-- there are twenty four four by four permutations, there
are five factorial which is a hundred and twenty, five times
twenty four would bump us up to a hundred and twenty -- so
listing all the five by five permutations would
be not so much fun. Okay. So that's permutations. Now also in section two seven is
some discussion of transposes. And can I just complete
that discussion. First of all, I haven't
even transposed a matrix on the board here, have I? So I'd better do it. So suppose I take a matrix
with columns one two four and three three one. It's a rectangular matrix, three by two. And I want to transpose it. So what's -- I'll use a T, also
Matlab would use a prime. And the result will be -- I'll write it here, because this
was three rows and two columns, this was a three by two matrix. The transpose will be two
rows and three columns, two by three. So it's short and wider. And, of course, that row --
that column becomes a row -- that column becomes
the other row. And at the same time,
that row became a column. This row became a column. Oh, what's the general
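In Matlab that flip is A'; in Python with numpy, a quick sketch using the three by two matrix from the board, it is A.T:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 3],
              [4, 1]])          # three rows, two columns
assert A.T.shape == (2, 3)      # the transpose is short and wide

# each column of A becomes a row of A transpose
assert np.array_equal(A.T[0], A[:, 0])
assert np.array_equal(A.T[1], A[:, 1])
```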
formula for the transpose? So the transpose -- you see it in numbers. What I'm going to write is
the same thing in symbols. The numbers are the
clearest, of course. But in symbols, if
I take A transpose and I ask what number is in row i and column j of A transpose? Well, it came out of A. It came out of A by this flip
across the main diagonal. And, actually, it
was the number in A which was in row j, column i. So the row and column -- the row and column
numbers just get reversed. The row number becomes
the column number, the column number
becomes the row number. No problem. Okay. Now, a special -- the best matrices, we could say. In a lot of applications,
symmetric matrices show up. So can I just call attention
to symmetric matrices? What does that mean? What does that word
symmetric mean? It means that this transposing
doesn't change the matrix. A transpose equals A. And an example. So, let's take a matrix
that's symmetric, so whatever is sitting
on the diagonal -- but now what's above the
diagonal, like a one, had better be there, a
seven had better be here, a nine had better be there. There's a symmetric matrix. I happened to use all positive
numbers as its entries. That's not the point. The point is that if I
transpose that matrix, I get it back again. So symmetric matrices have this
property A transpose equals A. I guess at this point -- I'm just asking you to notice
this family of matrices that are unchanged by transposing. And they're easy to
identify, of course. You know, it's maybe not so easy -- before, we had a case where the transpose gave the inverse. That's highly important,
but not so simple to see. This is the case where the
transpose gives the same matrix back again. That's totally simple to see. Okay. Could actually -- maybe I could
even say when would we get such a matrix? For example, this -- that
matrix is absolutely far from symmetric, right? The transpose isn't
even the same shape -- because it's rectangular,
it turns the -- lies down on its side. But let me tell you a way to
get a symmetric matrix out of this. Multiply those together. If I multiply this
rectangular, shall I call it R for rectangular? So let that be R for
rectangular matrix and let that be R
transpose, which it is. Then I think that if I
multiply those together, I get a symmetric matrix. Can I just do it
with the numbers and then ask you why, how did
I know it would be symmetric? So my point is that R transpose
R is always symmetric. Okay? And I'm going to do it for that
particular R transpose R which was -- let's see, the columns were one two four and three three one. I called that one R
transpose, didn't I, and I called this guy one
two four three three one. I called that R. Shall we just do
that multiplication? Okay, so up here
I'm getting a ten. Next to it I'm getting two, a
nine, I'm getting an eleven. Next to that I'm getting
four and three, a seven. Now what do I get there? This eleven came from one
three times two three, right? Row one, column two. What goes here? Row two, column one. But no difference. One three two three or two
three one three, same thing. It's going to be an eleven. That's the symmetry. I can continue to fill it out. What -- oh, let's
get that seven. That seven will show
up down here, too, and then four more numbers. That seven will show up here
because one three times four one gave the seven, but also
four one times one three will give that seven. Do you see that it works? Actually, do you want to see it
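The same hand computation, done by machine -- a numpy sketch, with R as the two by three matrix whose rows are one two four and three three one:

```python
import numpy as np

R = np.array([[1, 2, 4],
              [3, 3, 1]])
S = R.T @ R                  # three by three

# the hand-computed entries -- the ten, the eleven, the seven --
# and each one shows up again across the diagonal
assert S[0, 0] == 10
assert S[0, 1] == 11 and S[1, 0] == 11
assert S[0, 2] == 7 and S[2, 0] == 7
assert np.array_equal(S, S.T)   # R transpose R is symmetric
```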
work also in matrix language? I mean, that's quite
convincing, right? That seven is no accident. The eleven is no accident. But just tell me how do I know
if I transpose this guy -- How do I know it's symmetric? Well, I'm going to transpose it. And when I transpose
it, I'm hoping I get the matrix back again. So can I transpose
R transpose R? So just -- so, why? Well, my suggestion
is take the transpose. That's the only way to
show it's symmetric. Take the transpose and
see that it didn't change. Okay, so I take the
transpose of R transpose R. Okay. How do I do that? This is our little practice
on the rules for transposes. So the rule for transposes
is the order gets reversed. Just like inverses,
which we did prove, same rule for transposes
and -- which we'll now use. So the order gets reversed. It's the transpose of
that that comes first, and the transpose of
this that comes -- no. Is that -- yeah. That's what I have
to write, right? This is a product of two
matrices and I want its transpose. So I put the matrices
in the opposite order and I transpose them. But what have I got here? What is R transpose transpose? Well, don't all speak at once. R transpose transpose, I
flipped over the diagonal, I flipped over the diagonal
again, so I've got R. And that's just my point, that
if I started with this matrix, I transposed it, I
got it back again. So that's the check, without
using numbers, but with -- it checked in two lines that I
always get symmetric matrices this way. And actually, that's
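The reverse-order rule itself is easy to spot-check. A sketch with random matrices -- the shapes here are arbitrary choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# transpose of a product: the order reverses, just as with inverses
assert np.allclose((A @ B).T, B.T @ A.T)

# and that is the two-line argument in action: (R^T R)^T = R^T R
R = rng.standard_normal((2, 3))
assert np.allclose((R.T @ R).T, R.T @ R)
```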
where they come from in so many
practical applications. Okay. So now I've said something today
about permutations and about transposes and about
symmetry and I'm ready for chapter three. Can we take a breath -- the tape won't take a breath,
but the lecturer will, because to tell you
about vector spaces is -- we really have to start now
and think, okay, listen up. What are vector spaces? And what are subspaces? Okay. So, the point is: the main
operations that we do -- what do we do with vectors? We add them. We know how to add two vectors. We multiply them by numbers,
usually called scalars. If we have a vector, we
know what three V is. If we have a vector V and
W, we know what V plus W is. Those are the two
operations that we've got to be able to do. To legitimately talk
about a space of vectors, the requirement
is that we should be able to add the things
and multiply by numbers and that there should be
some decent rules satisfied. Okay. So let me start with examples. So I'm talking now
about vector spaces. And I'm going to
start with examples. Let me say again what this
word space is meaning. When I say that word
space, that means to me that I've got a bunch of
vectors, a space of vectors. But not just any
bunch of vectors. It has to be a
space of vectors -- has to allow me to do the
operations that vectors are for. I have to be able to add
vectors and multiply by numbers. I have to be able to
take linear combinations. Well, where did we meet
linear combinations? We met them back in, say in R^2. So there's a vector space. What's that vector space? So R two is telling me I'm
talking about real numbers and I'm talking about
two real numbers. So this is all two
dimensional vectors -- real, such as -- well, I'm not going to
be able to list them all. But let me put a few down. |3; 2|, |0;0|, |pi; e|. So on. And it's natural -- okay. Let's see, I guess I
should do algebra first. Algebra means what can
I do to these vectors? I can add them. I can add that to that. And how do I do it? A component at a
time, of course. Three two added to zero
zero gives me three two. Sorry about that. Three two added to pi e gives
me three plus pi, two plus e. Oh, you know what it does. And you know the picture
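Those two operations are all there is to it. A small pure-Python sketch of the arithmetic just done on the board:

```python
import math

# componentwise addition and scalar multiplication in R^2
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

assert add((3, 2), (0, 0)) == (3, 2)
assert add((3, 2), (math.pi, math.e)) == (3 + math.pi, 2 + math.e)
assert scale(3, (1, 1)) == (3, 3)   # what "three v" means
```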
that goes with it. There's the vector three two. And often, the
picture has an arrow. The vector zero zero, which is
a highly important vector -- it's got, like, the
most important here -- is there. And of course there's
not much of an arrow. Pi -- I'll have to remember --
pi is about three and a little more, e is about two
and a little more. So maybe there's pi e. I never drew pi e before. It's just natural to -- this is the first
component on the horizontal and this is the
second component, going up the vertical. Okay. And the whole plane is R two. So R two is, we
could say, the plane. The xy plane. That's what everybody thinks. But the point is it's a vector
space because all those vectors are in there. If I removed one of them -- Suppose I removed zero zero. Suppose I tried to take the --
considered the xy plane with a puncture, with
a point removed. Like the origin. That would be, like, awful
to take the origin away. Why is that? Why do I need the origin there? Because I have to be allowed --
if I had these other vectors, I have to be allowed to
multiply three two -- this was three two -- by anything, by any
scalar, including zero. I've got to be allowed
to multiply by zero and the result's
got to be there. I can't do without that point. And I have to be able to add
three two to the opposite guy, minus three minus two. And if I add those I'm
back to the origin again. No way I can do
without the origin. Every vector space has got
that zero vector in it. Okay, that's an
easy vector space, because we have a
natural picture of it. Okay. Similarly easy is R^3. This would be all -- let
me go up a little here. This would be -- R three would be all three
dimensional vectors -- or shall I say vectors
with three real components. Okay. Let me just to be
sure we're together, let me take the
vector three two zero. Is that a vector in R^2 or R^3? Definitely it's in R^3. It's got three components. One of them happens to be zero,
but that's a perfectly okay number. So that's a vector in R^3. We don't want to mix up the -- I mean, keep these vectors
straight and keep R^n straight. So what's R^n? R^n. So this is our big example, is
all vectors with n components. And I'm making these darn
things column vectors. Can I try to follow
that convention, that they'll be column vectors,
and their components should be real numbers. Later we'll need complex
numbers and complex vectors, but much later. Okay. So that's a vector space. Now, let's see. What do I have to tell
you about vector spaces? I said the most important thing,
which is that we can add any two of these and
we -- still in R^2. We can multiply by any number
and we're still in R^2. We can take any combination
and we're still in R^2. And same goes for R^n. It's -- honesty requires me to
mention that these operations of adding and multiplying
have to obey a few rules. Like, we can't just arbitrarily
say, okay, the sum of three two and pi e is zero zero. It's not. The sum of three two and
minus three two is zero zero. So -- oh, I'm not going
to -- the book, actually, lists the eight rules that the
addition and multiplication have to satisfy, but they do. They certainly satisfy it in
R^n and usually it's not those eight rules that are in doubt. What's -- the question is, can
we do those additions and do we stay in the space? Let me show you a
case where you can't. So suppose this is going
to be not a vector space. Suppose I take the xy
plane -- so there's R^2. That is a vector space. Now suppose I just
take part of it. Just this. Just this one -- this is one
quarter of the vector space. All the vectors with positive
or at least not negative components. Can I add those safely? Yes. If I add a vector
with, like, two -- three two to another
vector like five six, I'm still up in this quarter,
no problem with adding. But there's a heck of a problem
with multiplying by scalars, because there's a lot of scalars that will take me out of this quarter plane,
like negative ones. If I took three two and I
multiplied by minus five, I'm way down here. So that's not a vector
space, because it's not -- closed is the right word. It's not closed
under multiplication by all real numbers. So a vector space has to be
closed under multiplication and addition of vectors. In other words,
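That failure of closure can be written out as a membership test -- a sketch, with the quarter plane as a simple predicate:

```python
# the quarter plane: both components non-negative
def in_quarter(v):
    return v[0] >= 0 and v[1] >= 0

v, w = (3, 2), (5, 6)
# adding two vectors in the quarter keeps us in the quarter...
assert in_quarter((v[0] + w[0], v[1] + w[1]))
# ...but multiplying by the scalar -5 takes us way down and out
assert not in_quarter((-5 * v[0], -5 * v[1]))
```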
linear combinations. It -- so, it means that if
I give you a few vectors -- yeah look, here's an
important -- here -- now we're getting to some
really important vector spaces. Well, R^n -- like, they
are the most important. But we will be interested in
vector spaces that are inside R^n. Vector spaces that follow
the rules, but they -- we don't need all of -- see,
there we started with R^2 here, and took part of it
and messed it up. What we got was
not a vector space. Now tell me a vector space that
is part of R^2 and is still safely -- we can multiply, we
can add and we stay in this smaller vector space. So it's going to be
called a subspace. So I'm going to change this
bad example to a good one. Okay. So I'm going to
start again with R^2, but I'm going to take an
example -- it is a vector space, so it'll be a vector
space inside R^2. And we'll call that
a subspace of R^2. Okay. What can I do? It's got something in it. Suppose it's got
this vector in it. Okay. If that vector's in
my little subspace and it's a true
subspace, then there's got to be some more in it, right? I have to be able to
multiply that by two, and that double vector
has to be included. Have to be able to multiply
by zero, that vector, or by half, or by
three quarters. All these vectors. Or by minus a half,
or by minus one. I have to be able to
multiply by any number. So that is going to say that I
have to have that whole line. Do you see that? Once I get a vector in there -- I've got the whole line of
all multiples of that vector. I can't have a vector space
without extending to get those multiples in there. Now I still have
to check addition. But that comes out okay. This line is going to work,
because I could add something on the line to something
else on the line and I'm still on the line. So, example. So this is all examples
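A quick closure check for a line through the origin -- a sketch where the direction vector (1, 2) is just an arbitrary choice:

```python
# the line of all multiples of (1, 2): the points with y = 2x
def on_line(v):
    return v[1] == 2 * v[0]

a, b = (1, 2), (-3, -6)                       # two vectors on the line
assert on_line((a[0] + b[0], a[1] + b[1]))    # their sum stays on the line
assert on_line((7 * a[0], 7 * a[1]))          # so does any multiple
assert on_line((0, 0))                        # and zero is there, as it must be
```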
of a subspace -- our example is a line in R^2
actually -- not just any line. If I took this
line, would that -- so all the vectors on that line. So that vector and that vector
and this vector and this vector -- in lighter type, I'm drawing
something that doesn't work. It's not a subspace. The line in R^2 --
to be a subspace, the line in R^2 must go
through the zero vector. Because -- why is
this line no good? Let me do a dashed line. Because if I multiplied that
vector on the dashed line by zero, then I'm down here,
I'm not on the dashed line. Zero's got to be there. Every subspace has
got to contain zero -- because I must be allowed to
multiply by zero and that will always give me the zero vector. Okay. Now, I was going to make -- create some subspaces. Oh, while I'm in R^2,
why don't we think of all the possibilities. R two, there can't be that many. So what are the possible
subspaces of R^2? Let me list them. So I'm listing now
the subspaces of R^2. And one possibility
that we always allow is all of R two, the whole
thing, the whole space. That counts as a
subspace of itself. You always want to allow that. Then the others are lines -- any line, meaning infinitely
far in both directions through the zero. So that's like
the whole space -- that's like the whole two D space. This is like one dimension. Is this line the same as R^1? No. You could say it
looks a lot like R^1. R^1 was just a line
and this is a line. But this is a line inside R^2. The vectors here
have two components. So that's not the same as R^1,
because there the vectors only have one component. Very close, you could
say, but not the same. Okay. And now there's a
third possibility. There's a third
subspace that's -- of R^2 that's not the whole
thing, and it's not a line. It's even less. It's just the zero vector alone. The zero vector alone, only. I'll often call this
subspace Z, just for zero. Here's a line, L. Here's a plane, all of R^2. So, do you see that
the zero vector's okay? You would just -- to
understand subspaces, we have to know the rules -- and
knowing the rules means that we have to see that yes, the
zero vector by itself, just this guy alone
satisfies the rules. Why's that? Oh, it's too dumb to tell you. If I took that and added it
to itself, I'm still there. If I took that and multiplied
by seventeen, I'm still there. So I've done the operations,
adding and multiplying by numbers, that are
required, and I didn't go outside this one point space. So that's always -- that's
the littlest subspace. And the largest subspace is the
whole thing and in-between come all -- whatever's in between. Okay. So for example, what's
in between for R^3? So if I'm in ordinary three
dimensions, the subspaces are all of R^3 at one extreme,
the zero vector at the bottom. And then a plane, a
plane through the origin. Or a line, a line
through the origin. So with R^3, the subspaces were
R^3, plane through the origin, line through the origin and
a zero vector by itself, zero zero zero, just
that single vector. Okay, you've got the idea. But, now comes -- the reality is -- what are these -- where
do these subspaces come -- how do they come
out of matrices? And I want to take
this matrix -- oh, let me take that matrix. So I want to create some
subspaces out of that matrix. Well, one subspace
is from the columns. Okay. So this is the
important subspace, the first important subspace
that comes from that matrix -- I'm going to -- let
me call it A again. Back to -- okay. I'm looking at the columns of A. Those are vectors in R^3. So the columns are in R^3. The columns are in R^3. So I want those columns
to be in my subspace. Now I can't just put two
columns in my subspace and call it a subspace. What do I have to throw in --
if I'm going to put those two columns in, what else has got
to be there to have a subspace? I must be able to
add those things. So the sum of those columns -- so these columns are in R^3,
and I have to be able -- I'm, you know, I want
that to be in my subspace, I want that to be
in my subspace, but therefore I have to be able
to multiply them by anything. Zero zero zero has got
to be in my subspace. I have to be able to add
them so that four five five is in the subspace. I've got to be able to add one
of these plus three of these. That'll give me
some other vector. I have to be able to take
all the linear combinations. So these are columns in R^3 and
all their linear combinations form a subspace. What do I mean by
linear combinations? I mean multiply
that by something, multiply that by
something and add. The two operations of linear
algebra, multiplying by numbers and adding vectors. And, if I include
all the results, then I'm guaranteed
to have a subspace. I've done the job. And we'll give it a name -- the column space. Column space. And maybe I'll call it C of A. C for column space. There's an idea there that -- Like, the central idea
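In matrix language, taking a combination of the columns is exactly multiplying A by a vector x. A numpy sketch with the matrix from the board:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 3],
              [4, 1]])

# x1 times column one plus x2 times column two is just A times x
x = np.array([1, 3])
assert np.array_equal(A @ x, 1 * A[:, 0] + 3 * A[:, 1])

# the sum of the two columns -- four five five -- is in the column space
assert np.array_equal(A @ np.array([1, 1]), np.array([4, 5, 5]))
```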
for today's lecture is -- got a few vectors. Not satisfied with
a few vectors, we want a space of vectors. The vectors, they're in --
these vectors are in R^3, so our space of vectors
will be vectors in R^3. The key idea's -- we
have to be able to take their combinations. So tell me, geometrically,
if I drew all these things -- like if I drew one two four,
that would be somewhere maybe there. If I drew three three one,
who knows, might be -- I don't know, I'll say there. There's column one,
there's column two. What else -- what's in
the whole column space? How do I draw the
whole column space now? I take all combinations
of those two vectors. Do I get -- well, I
guess I actually listed the possibilities. Do I get the whole space? Do I get a plane? I get more than a
line, that's for sure. And I certainly get more
than the zero vector, but I do get the
zero vector included. What do I get if I combine -- take all the combinations
of two vectors in R^3? So I've got all this stuff on -- that whole line gets filled
out, that whole line gets filled out, but all in-between
gets filled out -- between the two
lines because I -- I allowed to add something
from one line, something from the other. You see what's coming? I'm getting a plane. That's my -- and it's
through the origin. Those two vectors, namely one
two four and three three one, when I take all
their combinations, I fill out a whole plane. Please think about that. That's the picture
you have to see. You sure have to see it in R^3, because we're going to do it in R^10, and we may take a
combination of five vectors in R^10, and what will we have? God knows. It's some subspace. We'll have five vectors. They'll all have ten components. We take their combinations. We don't have R^5, because our
vectors have ten components. And we possibly have, like,
some five dimensional flat thing going through the
origin for sure. Well, of course, if those five
vectors were all on the line, then we would only
get that line. So, you see, there are, like,
other possibilities here. It depends what -- it depends
on those five vectors. Just like if our two columns
had been on the same line, then the column space would
have been only a line. Here it was a plane. Okay. I'm going to stop at that point. That's the central idea of
-- the great example of how to create a subspace
from a matrix. Take its columns, take
their combinations, all their linear combinations
and you get the column space. And that's the
central sort of -- we're looking at linear
algebra at a higher level. When I look at A -- now,
I want to look at Ax=b. That'll be the first
thing in the next lecture. How do I understand
Ax=b in this language -- in this new language of vector
spaces and column spaces. And what are other subspaces? So the column space is a big
one, there are others to come. Okay, thanks.
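As a look ahead to that question: asking whether Ax = b has a solution is exactly asking whether b is in the column space C(A). A numerical sketch, using the board's matrix and a least-squares residual as a hypothetical stand-in for that membership test:

```python
import numpy as np

A = np.array([[1., 3.],
              [2., 3.],
              [4., 1.]])

def in_column_space(A, b, tol=1e-8):
    # solve Ax = b as well as possible; b is in the column space
    # exactly when the residual is (numerically) zero
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) < tol

b_in = A @ np.array([2., -1.])          # a combination of the columns
assert in_column_space(A, b_in)
assert not in_column_space(A, np.array([0., 0., 1.]))  # off the plane
```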