Professor Dave here, let’s discuss basis
and dimension. We’ve gone over vector spaces, and the spaces
made from the span of vectors, as well as the linear independence of vectors. Now we want to combine all of this knowledge
to form what is known as a basis, so let’s learn exactly what this means. To put it as simply as possible, a basis is
a set of linearly independent vectors that can be used as building blocks to make any
other vector in the vector space. Written out, the vectors v1, v2, all the way
to vn, form a basis for the vector space V, if and only if these vectors are linearly
independent, and the vectors span the space V. We’ve gone into detail about linear independence
already, so here we can just focus on the part about span. This requirement says that the vectors v1,
v2, to vn, can be combined in some linear combination to express any other vector in
the space V. Let’s start with a simple example using the space of vectors with three components, R3. Our vectors will be (1, 0, 0), (0, 1, 0),
and (0, 0, 1). To check whether these vectors span R3,
we must make sure a linear combination of them can equal any given vector in R3, which
we will express as a vector with arbitrary entries a, b, and c. Now let’s check whether a linear combination
of our vectors can be used to equal this vector. It’s important to remember that the scalars
we multiply our vectors by, which are c1, c2, and c3, are the variables, while the a,
b, and c that make up our given vector should be treated as regular numbers on the right-hand
side of the equation. Writing this vector equation out as a system
of equations, we get the straightforward result that c1 equals a, c2 equals b, and c3 equals c. Because we found a solution that works
for any a, b, and c, these three vectors span the vector space R3. We can easily check that these three vectors
are linearly independent by setting the right side equal to zero. So because a, b, and c equal zero, and as
we saw, c1 equals a, c2 equals b, and c3 equals c, these scalars all end up being zero as well. This was our condition for linear independence,
and we have therefore satisfied the two conditions for these vectors to be considered a basis. They are the building blocks for any vector
in R3, and because they are linearly independent, there is no extra information. We don’t need any more vectors aside from
these three, and if we take any away we will no longer span all of R3. Now let’s look at a more complicated example,
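Before that, here is how the two checks for this first example could be verified numerically, as a minimal Python sketch (numpy is my own choice of tool, not something from the lesson):

```python
import numpy as np

# Columns are the candidate basis vectors (1,0,0), (0,1,0), (0,0,1).
V = np.eye(3)

# Spanning: solve V @ [c1, c2, c3] = [a, b, c] for a sample target vector.
a, b, c = 2.0, -5.0, 7.0
coeffs = np.linalg.solve(V, np.array([a, b, c]))
print(coeffs)  # c1 = a, c2 = b, c3 = c, as in the hand calculation

# Independence: the homogeneous system has only the zero solution
# exactly when the matrix of vectors has full rank.
assert np.linalg.matrix_rank(V) == 3
```

Since the matrix of vectors is just the identity, the solver returns the target entries unchanged, matching the result c1 equals a, c2 equals b, c3 equals c.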
the set of matrices in R2x2. We could check that the four matrices,
each with a single 1 in one of the four entries and zeros elsewhere, form a basis, but let’s instead consider the following four matrices: ones on the top row with zeros on the bottom,
then ones on the bottom row with zeros on the top, then ones on the diagonal with zeros
for the other two entries, and then a one everywhere but the top left, which will be zero. First off, to check if these four matrices
span R2x2, we will once again set a linear combination of them equal to any given two
by two matrix, which we can generalize with the entries a, b, c, and d. So here we have each matrix being multiplied
by its respective coefficient, and their sum equal to the generalized two by two matrix. We can distribute the scalars so that they
show up as entries anywhere we had a one in a particular matrix, and then we can add up
all the matrices to condense things a little bit, and we end up with this new matrix with
sums in each of its entries. From here we can write this out as a system
of four equations. While we could solve this system of equations
to try to find the solution, all we really have to do is make sure a solution exists for every possible choice of a, b, c, and d. For such a solution to always exist, the determinant of
the coefficient matrix must be nonzero, as we recall from learning Cramer’s Rule in
a previous tutorial. So let’s just set up the coefficient matrix,
making sure to put zeros where the coefficients are missing in each equation, and then find
the determinant of this four by four matrix, which won’t be too difficult considering
all the zeros. We end up getting a determinant of 1, which
is indeed not zero, so a solution does exist to this system of equations, and our matrices
can therefore be used to make up any two by two matrix. This means that our matrices span R2x2. Next we must check for linear independence. Let’s take the same linear combination we
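First, though, the determinant check above can be replicated in a few lines; a short numpy sketch (the flattening order is my own bookkeeping choice):

```python
import numpy as np

# The four candidate matrices from the lesson.
M1 = np.array([[1, 1], [0, 0]])  # ones on the top row
M2 = np.array([[0, 0], [1, 1]])  # ones on the bottom row
M3 = np.array([[1, 0], [0, 1]])  # ones on the diagonal
M4 = np.array([[0, 1], [1, 1]])  # one everywhere but the top left

# Coefficient matrix: column j is matrix j flattened, so that
# A @ [c1, c2, c3, c4] lists the entries of c1*M1 + ... + c4*M4.
A = np.column_stack([M.flatten() for M in (M1, M2, M3, M4)])

det = round(np.linalg.det(A))
print(det)  # prints 1, so the system is solvable for any a, b, c, d
```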
just used, and set it equal to a matrix full of zeros. We end up with the same equations as before,
but with all the values on the right hand sides being equal to zero. We could solve these equations a few ways,
but let’s go ahead and use elementary row operations to get the coefficient matrix into
row echelon form. We want the diagonal elements to be leading
ones with all zeros below them. Let’s start off by taking the second row
and subtracting the first from it. Next, to get ones where we want them, let’s
switch rows two and three. Now we can multiply the third row by negative
one to get positive one here. Next, to get rid of this one in the second
column of the fourth row, let’s subtract the second row from the fourth row. And then we can get rid of this other one
in the third column of the fourth row, if we subtract the third row from the fourth,
and this will leave us in row echelon form. We are left with no free variables in this
form, since every diagonal entry is a leading 1. This means the only solution is all the scalars
being equal to zero, which we could check by writing the matrix in equation form once again. Using this last equation, c4 must be equal
to zero, and once we plug that into the equation above to get c3, and continue from there to
get the other two, we end up with all zero scalars. Thus our matrices are linearly independent. We have verified both conditions for these
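Those row operations can also be replayed step by step in code; a sketch with numpy (the library choice is mine, and the in-place updates mirror the narrated steps):

```python
import numpy as np

# Coefficient matrix of the homogeneous system (columns are c1..c4).
A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 1]], dtype=float)

A[1] -= A[0]           # subtract row 1 from row 2
A[[1, 2]] = A[[2, 1]]  # swap rows 2 and 3
A[2] *= -1             # multiply row 3 by negative one
A[3] -= A[1]           # subtract row 2 from row 4
A[3] -= A[2]           # subtract row 3 from row 4

# Row echelon form: leading 1s on the diagonal, zeros below them.
assert np.allclose(np.diag(A), 1)
assert np.allclose(np.tril(A, -1), 0)

# No free variables, so the only solution of A @ c = 0 is c = 0.
coeffs = np.linalg.solve(A, np.zeros(4))
assert np.allclose(coeffs, 0)
```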
matrices to be considered a basis for R2x2. Before we wrap things up, we should go over
one more definition. If a vector space V has a basis made of n
elements, then the vector space is said to have “dimension” n. Taking our two examples from this lesson,
we saw that R3 had a basis made of three vectors, and R2x2 had a basis made of 4 matrices. We can then say that R3 has dimension 3, and
R2x2 has dimension 4. Beyond these examples, it is possible for
vector spaces to be infinite-dimensional, and we also say that the vector space containing
only the zero vector has dimension zero. The dimension of a vector space is fixed,
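That fixed count is easy to sanity-check with two different bases of R3 (the second, skewed basis is my own example, and numpy is again assumed):

```python
import numpy as np

# Two different bases of R3: the standard one and a skewed one.
standard = [np.array(v, dtype=float) for v in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
skewed   = [np.array(v, dtype=float) for v in ((1, 1, 0), (0, 1, 1), (1, 0, 1))]

# Both sets are linearly independent and span R3 (full rank)...
assert np.linalg.matrix_rank(np.column_stack(standard)) == 3
assert np.linalg.matrix_rank(np.column_stack(skewed)) == 3

# ...and both contain exactly three vectors.
assert len(standard) == len(skewed) == 3
```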
so no matter what basis we end up using for a vector space of dimension n, there will
always be n elements in the basis. No more, no less. With vector spaces, subspaces and span, linear
independence, as well as basis and dimension covered, we have established a lot of concepts
and definitions that are important in linear algebra. This means we can now start expanding to more
concrete operations. But first, let’s check comprehension.