GILBERT STRANG: OK, here's
the, well, the title slide. Since this year
happened to be 2020, and that means clear
vision, I thought I'd get that into the
title of these slides. And then you've seen these six pieces as a sort of look ahead, and I'm going to start on
that first piece, A equals CR. That's the new way I like to
start teaching linear algebra. And I'll tell you why. OK, oh, here, we
have a few examples. Well, that will
lead to our ideas. You see that matrix, A0. A matrix is just a square
or a rectangle of numbers. But those numbers
have special features. If you look closely, well,
you see 1, 3, 2 as row 1. And then what do
you see for row 3? 2, 6, 4. And those are two vectors
in the same direction. Why is that? Because 2, 6, 4 is
exactly 2 times 1, 3, 2. And in the middle there
is 4 times 1, 3, 2. So I have three rows
in the same direction. And actually, also,
this is the magic. Can I tell you this
right at the start? The columns, look
at the columns. 1, 4, 2. If I multiply that
by 3, I get 3, 12, 6. If I multiply it by
2, I get 2, 8, 4. So somehow,
magically, the columns are in the same direction
exactly when the rows are in the same direction. They're different, but that's what linear algebra is about, the relations between columns and rows.
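Here is a minimal NumPy sketch of that check, using A0 exactly as read off above:

```python
import numpy as np

# A0 as read off the slide: every row is a multiple of (1, 3, 2)
A0 = np.array([[1, 3, 2],
               [4, 12, 8],
               [2, 6, 4]])

print(A0[1] / A0[0])              # [4. 4. 4.]  -> row 2 = 4 * row 1
print(A0[2] / A0[0])              # [2. 2. 2.]  -> row 3 = 2 * row 1

# And the columns are all multiples of (1, 4, 2)
print(A0[:, 1] / A0[:, 0])        # [3. 3. 3.]  -> column 2 = 3 * column 1
print(A0[:, 2] / A0[:, 0])        # [2. 2. 2.]  -> column 3 = 2 * column 1

print(np.linalg.matrix_rank(A0))  # 1: one independent column, one independent row
```

OK, and well, here's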
another one I'll look at. There again, you see row
1 plus row 2 equals row 3. So it's not quite like this where every row was in the same direction. But here, if I add rows 1 and 2, I get row 3. So that's a matrix of rank 2, we'll say. You'll see it. OK, then here, S is
for symmetric matrices. Those are the kings
of linear algebra. And here are a
few small samples. And the queens of
linear algebra are these matrices I call Q. Those
are called orthogonal matrices. Orthogonal meaning
perpendicular. And they tend to express a rotation. So that's a rotation matrix, an orthogonal matrix. That rotates the plane. And there is a pretty general matrix that we'll see at the very end.
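To see that perpendicularity concretely, here is a small sketch (the angle is an arbitrary choice): a 2 by 2 rotation matrix Q has Q transpose times Q equal to the identity, so its columns are perpendicular unit vectors, and rotating a vector keeps its length.

```python
import numpy as np

theta = 0.3  # any rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonal means Q^T Q = I: the columns are perpendicular unit vectors
print(np.allclose(Q.T @ Q, np.eye(2)))  # True

# Rotating a vector keeps its length
v = np.array([1.0, 2.0])
print(np.linalg.norm(Q @ v), np.linalg.norm(v))  # equal lengths
```

OK, so I'm into the start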
of the column space. So that's a word I don't use in
the videos for quite a while. But here, you see I'm using
it in the first minutes. So I look at a matrix. Well, first, let's
just remember how to multiply a
matrix by a vector. OK, there is a matrix
A. There is a vector x with three components. And the way I like
to multiply them is to take the columns of A.
That's what I'm focusing on, columns of A. There
they are, 1, 2, and 3. I multiply them by those three
numbers x1, x2, x3, and I add. And that's called a
linear combination. Linear because nothing is
squared or cubed or anything. And combination because
I'm putting them together, adding them together. OK, so that's the idea.
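Here is a minimal sketch of that column way of multiplying. The entries are illustrative, chosen so that the third column is the sum of the first two and equals 5, 5, 3, the dependent column that comes up below; the slide's actual entries may differ.

```python
import numpy as np

# An illustrative matrix whose third column is the sum of the first two
A = np.array([[1, 4, 5],
              [3, 2, 5],
              [2, 1, 3]])
x = np.array([1, 1, 1])  # one particular x1, x2, x3

# Multiply the columns of A by x1, x2, x3 and add: a linear combination
combination = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(np.array_equal(A @ x, combination))  # True: Ax is a combination of the columns
```

And now, the big idea is in that top line. I want to think of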
all combinations. So this is one
particular combination with a particular
x1 and x2 and x3. But now, I think of
every x1 and x2 and x3, all the vectors
that I could get. Well, of course, I could
get the first column by taking 1 and 0 and 0. That would give me
the first column. But it's really mixtures of
the columns that this produces. And it fills out. It fills out, in this
case, a whole plane in three dimensions. These vectors have
three components. We're in three dimensions. And can you just
imagine in your head, two lines meeting at 0, 0, 0. So they cross. But I just have two lines. And now, I fill in
between those lines. Filling in between
those two lines is taking the
linear combinations. That's where they are. And the result is I get a plane. I do not get the whole
space because nothing is going in a third
direction for this matrix. All right. So let's see more about this. So that's that
word column space. And I use the
capital C for that. And it's all the
vectors I can get that way, all the
combinations of the columns. And now I ask. Oh, well, maybe I just
answered this question. Sorry. I ask, is the column space,
all the combinations, is it the whole 3D space, which
everybody calls R3 for real 3, or is it a plane, or
is it just a line? Well, the answer is a plane. That probably even
gives us the answer. That's the good thing
about this subject. The answer is a plane because
I have two different lines that meet at the 0. And when I fill in between
them, I have a flat plane. I don't go in the
third direction. Good. So that's the column space for this matrix.
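A quick numerical confirmation, reusing the illustrative matrix from the sketch above: the rank, which is the dimension of the column space, comes out 2, a plane in R3 and not the whole space.

```python
import numpy as np

A = np.array([[1, 4, 5],
              [3, 2, 5],
              [2, 1, 3]])

# The column space is all combinations A @ x. Its dimension is the rank.
print(np.linalg.matrix_rank(A))  # 2: a plane in R^3, not the whole space

# Column 3 adds nothing new: it already lies on that plane
print(np.array_equal(A[:, 0] + A[:, 1], A[:, 2]))  # True
```

And here's a little more to say about that. We kept column 1. And we kept column 2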
because you remember those two columns, the
first two, were different. They went in
different directions. They go in different directions. We did not keep the third
column because it was just the sum of the first two. It's on the plane, nothing new. So the real meat of the matrix
A is in the column matrix C that has just the two columns. And what about R? Because my plan for the first few weeks, the first two to three weeks of linear algebra, is to understand this. So that 5, 5, 3 would be called
a dependent vector because it depends on the first two. Those were independent. So those are the two that
I keep in the matrix C. And then that matrix
R, oh, well, now I'm multiplying two matrices. And you know how to do that. But I always have another
way to look at it. So the way I look at it
is by linear combinations. Do you remember those? So multiplying is a
combination of these guys. First, I have 1 of
the first column. That's my first column. And the next time, I have
1 of the second column. That's my second vector. And the third one is this
guy, 1 of that and 1 of that. So these two are the independent
ones, and that's dependent. And a full set of
independent ones is called a basis,
really fundamental.
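Putting the pieces together, here is the whole A equals C R story as a sketch, again with the illustrative matrix from above:

```python
import numpy as np

A = np.array([[1, 4, 5],
              [3, 2, 5],
              [2, 1, 3]])

# C keeps the two independent columns of A
C = A[:, :2]

# R records how to combine the columns of C to rebuild each column of A:
# column 1 is 1 of the first, column 2 is 1 of the second,
# column 3 is 1 of that and 1 of that
R = np.array([[1, 0, 1],
              [0, 1, 1]])

print(np.array_equal(C @ R, A))  # True: A = C R
```

So I guess I think that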
linear algebra should just start with these key
ideas, just go with them. And we learned something. It almost falls in our laps. It's a first great
and not obvious fact about linear algebra. I'm just amazed to have it here. The number of independent columns in A, which was two, is equal to the number of
independent rows in R, also two. You remember that we had
two rows and two columns? So two columns first in C,
two rows in R. And the point is that that's telling us-- and we just checked it-- that those two columns were independent, and the two rows are independent. Those are bases. And then we learned that the column space has dimension 2. The rank r equals 2 for this example. And the row space has the same dimension. So the column rank r equals the row rank r. It's like if you
had a 50 by 80 matrix, OK, that's 4,000 numbers. You couldn't see what those dimensions are. But linear algebra is telling you that the dimension of the row space and the dimension of the column space, one coming from the 50 rows and one from the 80 columns, are equal.
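Here is a sketch of that fact on a random 50 by 80 matrix, built to have rank 7 (an arbitrary choice): the column rank of B and the column rank of B transpose, which is the row rank of B, agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 50-by-80 matrix of rank 7, built from 7 rank-1 pieces
B = rng.standard_normal((50, 7)) @ rng.standard_normal((7, 80))

# Column rank of B and column rank of B^T (= row rank of B) agree
print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(B.T))  # 7 7
```

OK, so this is again coming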
early, and we'll see it again. But it's good to start
linear algebra from day one. And then here is another
great fact about equations because matrices lead
to these two equations where x is the unknown. And this equation has 0
on the right hand side. So how could we get 0
on the right hand side? We could take 1 of that. And let me change that to a minus sign and that to a minus sign. One of those minus one of those minus one of those would be 0, 0, 0. So that 1 and minus 1 and minus 1 would tell us an x. And that's the solution.
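Checking that solution with the same illustrative matrix: since column 3 is column 1 plus column 2, one of the third column minus one of each of the first two gives zero.

```python
import numpy as np

A = np.array([[1, 4, 5],
              [3, 2, 5],
              [2, 1, 3]])

# Column 3 = column 1 + column 2, so 1 of the third column minus
# 1 of each of the first two gives zero: x = (-1, -1, 1) solves Ax = 0
x = np.array([-1, -1, 1])
print(A @ x)  # [0 0 0]
```

In applying linear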
algebra in engineering, in physics, in
economics, in business, you end up with equations. Things balance. And you want to know how
many solutions there are. And linear algebra was created
to answer that question. OK, so now, I'm just
going to say a little more about this starting
method of the course. Oh, I want to focus here on
these interesting matrices, where every column is a
multiple of the first column. Every row is a multiple
of the first row. Instead of having two
independent columns and rows, these matrices have only one. So then C has one column. And R has one row. And the rank is 1. These are the building blocks
of linear algebra, these rank 1 matrices, column times row. The previous matrix would
have one of those blocks and a second block. A big matrix from data science
would have hundreds of blocks. But the great theorem
in linear algebra is to break that big matrix
into these simple pieces.
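A sketch of those building blocks: a column times a row is an outer product of rank 1, and the rank 2 example from above splits into exactly two such pieces, the columns of C times the rows of R.

```python
import numpy as np

A = np.array([[1, 4, 5],
              [3, 2, 5],
              [2, 1, 3]])

# A rank-1 block: one column times one row (an outer product)
block = np.outer([1, 3, 2], [1, 0, 1])
print(np.linalg.matrix_rank(block))  # 1

# The rank-2 matrix A splits into two such blocks: columns of C times rows of R
C = A[:, :2]
R = np.array([[1, 0, 1],
              [0, 1, 1]])
blocks = np.outer(C[:, 0], R[0]) + np.outer(C[:, 1], R[1])
print(np.array_equal(blocks, A))  # True: two rank-1 pieces rebuild A
```

So that's the goal for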
the end of the course. OK, and finally, a last
thought about these. So this is C times R. I'm urging teachers to present that part early in the course. So, the good things I've marked with a plus. First of all, the columns,
we're looking at them in C. And we see them from A. We
take them directly from A. R turns out to be
a famous matrix. Row reduced echelon
form it's called. So to see that pop
up here is terrific. And then this
wonderful fact that row rank equals column rank is
clear from this C times R. So those are all
terrifically good things. The other thing I have to
say is that C and R are not great for avoiding
round off or being good in large computations. This is a first
factorization but not the best one for big computing. Right. So ill conditioned means they
are difficult to deal with. And also, we often have a matrix whose columns are all independent. And it's a square matrix. We can solve Ax
equals b all the time. But then if all the
columns are independent, then our matrix C is
just the same as A. We didn't get anywhere. And R would be the
identity matrix, like a 1, because A equals C. So
this is the starting point, picking out the independent
columns, but not the end, of course.
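A small sketch of that invertible case, assuming SymPy is available to compute the row reduced echelon form: when all the columns are independent, R comes out as the identity, so C is just A.

```python
from sympy import Matrix

# An invertible matrix: all columns independent
A = Matrix([[2, 1],
            [1, 3]])

# Its reduced row echelon form is the identity, so C = A and R = I
R, pivots = A.rref()
print(R)       # Matrix([[1, 0], [0, 1]])
print(pivots)  # (0, 1): every column is a pivot (independent) column
```

And I'll stop here and pick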
up on the next factorization right away. Thanks.