GILBERT STRANG: OK. More about eigenvalues
and eigenvectors. Well, actually, it's
going to be the same thing about eigenvalues
and eigenvectors but I'm going to
use matrix notation. So, you remember I have a
matrix A, 2 by 2 for example. It's got two eigenvectors. Each eigenvector
has its eigenvalue. So I could write the
eigenvalue world that way. I want to write
it in matrix form. I want to create an
eigenvector matrix by taking the two
eigenvectors and putting them in the columns of my matrix. If I have n of them, that
allows me to give them one name: the eigenvector matrix. Maybe
I'll call it V for vectors. So that's A times V. And
now, just bear with me while I do that
multiplication of A times the eigenvector matrix. So what do I get? I get a matrix. A is 2 by 2, V is 2 by 2, so you get a 2 by 2 matrix. What's the first column? The first column of
the output is A times the first column of the input. And what is A times x1? Well, A times x1 is
lambda 1 times x1. So that first column
is lambda 1 x1. And A times the second column
is Ax2, which is lambda 2 x2. So I'm seeing lambda
2 x2 in that column. OK. Matrix notation. Those were the eigenvectors. This is the result of A times V.
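In symbols, that column-by-column multiplication is

\[ AV \;=\; A\begin{bmatrix} x_1 & x_2 \end{bmatrix} \;=\; \begin{bmatrix} Ax_1 & Ax_2 \end{bmatrix} \;=\; \begin{bmatrix} \lambda_1 x_1 & \lambda_2 x_2 \end{bmatrix}. \]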
But I can look at this a little differently. I can say, wait a minute, that
is my eigenvector matrix, x1 and x2-- those two
columns-- times a matrix. Yes. This first column, lambda 1 x1, is lambda 1 times x1 plus 0 times x2. Right there I did a
matrix multiplication. I did it without
preparing you for it. I'll go back and do that
preparation in a moment. But when I multiply a matrix by
a vector, I take lambda 1 times that one, 0 times that one. I get lambda 1 x1,
which is what I want. Can you see what I want
in the second column here? The result I want
is lambda 2 x2. So I want no x1's, and
lambda 2 of that column. So that's 0 times that
column, plus lambda 2 times that column. Are we OK? So, what do I have now? I have the whole thing
in a beautiful form, as this A times the
eigenvector matrix equals, there is the
eigenvector matrix again, V. And here is a new matrix
that's the eigenvalue matrix. And everybody calls that capital lambda-- because those are lambda 1 and lambda 2, the natural letter is a capital lambda. That's a capital Greek lambda there, the best I could do. So do you see that the two equations written separately, or the four equations, or the n equations, combine into one matrix equation?
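In symbols, regrouping the two columns the way just described:

\[ AV \;=\; \begin{bmatrix} \lambda_1 x_1 & \lambda_2 x_2 \end{bmatrix} \;=\; \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \;=\; V\Lambda. \]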
This is the same as those two together. Good. But now that I have
it in matrix form, I can mess around with it. I can multiply both
sides by V inverse. If I multiply both sides by
V inverse I discover-- well, shall I multiply on
the left by V inverse? Yes, I'll do that. If I multiply on the left by
V inverse that's V inverse AV. This is matrix multiplication
and my next video is going to recap
matrix multiplication. So I multiply both
sides by V inverse. V inverse times V
is the identity. That's what the
inverse matrix is. V inverse, V is the identity. So there you go. Let me push that up. That's really nice. That's really nice. That's called diagonalizing
A. I diagonalize A by taking the eigenvector
matrix on the right, its inverse on the left,
multiply those three matrices, and I get this diagonal matrix. This is the diagonal
matrix lambda.
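In symbols, the diagonalization just described is

\[ V^{-1} A V \;=\; \Lambda \;=\; \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}. \]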
Or other times I might want to multiply both sides here by V inverse coming on the right. So that would give me A, because V V inverse is the identity. So I can move the V over to the other side as a V inverse. That's what it amounts to: I multiply both sides on the right by V inverse. So this is just A, and this is
the V, and the lambda, and now the V inverse. That's great. So that's a way to see how
A is built up or broken down into the eigenvector matrix,
times the eigenvalue matrix, times the inverse of
the eigenvector matrix.
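As a numerical sketch of that factorization, here is a quick check in Python with NumPy; the particular 2 by 2 matrix is an assumed example, not one from the lecture.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])           # assumed example matrix; its eigenvalues are 5 and 2

    lam, V = np.linalg.eig(A)            # lam holds the eigenvalues; the columns of V are x1, x2
    Lam = np.diag(lam)                   # the eigenvalue matrix, capital lambda

    print(np.linalg.inv(V) @ A @ V)      # V inverse times A times V: diagonal, entries 5 and 2 (in some order)
    print(V @ Lam @ np.linalg.inv(V))    # V times lambda times V inverse: rebuilds A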
OK. Let me just use that for a moment. Just so you see how it
connects with what we already know about eigenvalues
and eigenvectors. OK. So I'll copy that great fact,
that A is V lambda, V inverse. Oh, what do I want to do? I want to look at A squared. So if I look at
A squared, that's V lambda V inverse
times another one. Right? There's an A, there's an
A. So that's A squared. Well, you may say I've made
a mess out of A squared, but not true. V inverse V is the identity. So it's just the identity
sitting in the middle. So the V at the far left,
then I have the lambda, and then I have the other
lambda-- lambda squared-- and then the V inverse
at the far right. That's A squared. And if I did it n times, I would have A to the n-th, which would be V times lambda to the n-th power times V inverse.
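Written out, the squaring step and the n-th power are

\[ A^2 \;=\; V\Lambda V^{-1}\, V\Lambda V^{-1} \;=\; V\Lambda^2 V^{-1}, \qquad A^n \;=\; V\Lambda^n V^{-1}. \]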
What is this? What is this saying? This is A squared. How do I understand that equation? To me that says that the
eigenvalues of A squared are lambda squared. I'm just squaring
each eigenvalue. And the eigenvectors? What are the eigenvectors
of A squared? They're the same, the same vectors, x1 and x2, that went into V. They're
also the eigenvectors of A squared, of A cubed, of
A to the n-th, of A inverse. So that's the point of
diagonalizing a matrix. Diagonalizing a
matrix is another way to see that when I square
the matrix, which is usually a big mess, looking at the
eigenvalues and eigenvectors, it's the opposite of a big mess. It's very clear. The eigenvectors are
the same as for A. And the eigenvalues are squares
of the eigenvalues of A. In other words, we can
take the n-th power and we have a nice
notation for it. We learned already
that the n-th power has the eigenvalues
to the n-th power, and the eigenvectors the same. But now I just see it here. And there it is
for the n-th power. So if I took the same
matrix step 1,000 times, what would be important? What controls the thousandth
power of a matrix? The eigenvectors stay. They're just set. It would be the thousandth
power of the eigenvalue. So if this is a matrix with
an eigenvalue larger than 1, then the thousandth
power is going to be much larger than one. If this is a matrix with
eigenvalues smaller than 1, they are going to
be very small when I take the thousandth power. If there's an eigenvalue
that's exactly 1, that will be a steady state. And 1 to the thousandth
power will still be 1 and nothing will change. So, the stability. What happens as I multiply,
take powers of a matrix, is a basic question parallel
to the question what happens with a
differential equation when I solve forward in time? I think of those two
problems as quite parallel. This is taking steps, single
steps, discrete steps. The differential equation is
moving forward continuously. This is the difference
between hop, hop, hop in the discrete case
and running forward continuously in the differential case. In both cases, the eigenvectors
and the eigenvalues are the guide to what
happens as time goes forward.
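Here is a small numerical illustration of that behavior, again a sketch in NumPy with an assumed example matrix, chosen so that one eigenvalue is exactly 1 and the other is smaller than 1.

    import numpy as np

    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])                  # assumed example; eigenvalues are 1 and 0.7

    lam, V = np.linalg.eig(A)
    print(lam)                                  # 1.0 and 0.7, in some order

    # hop, hop, hop: a thousand discrete steps
    print(np.linalg.matrix_power(A, 1000))      # the 0.7 part has died out; only the
                                                # lambda = 1 steady state survives

    # the same answer from the diagonalization: V times lambda^1000 times V inverse
    print(V @ np.diag(lam**1000) @ np.linalg.inv(V))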
OK. I have to do more about working with matrices. Let me come to that next. Thanks.