Hello everyone, this is the second lecture of this course. In this lecture I will introduce the Gaussian elimination method with partial pivoting, and in the later part of the lecture we will discuss ill-conditioned systems, which are difficult to solve using Gaussian elimination with or without pivoting.
In the last lecture we solved a 3 by 3 system of linear equations using the Gaussian elimination method, and after the elimination and back substitution we got the solution x1 = -0.950, x2 = -2.82, x3 = -2.00. However, the exact solution is x1 = 1, x2 = 2, x3 = -3. So there is a big difference between the two solutions: the one we obtained using Gaussian elimination and the true solution. It is not that we made a mistake while applying the Gaussian elimination method; the procedure was correct. The reason for the error is that round-off error propagated to such an extent that the final solution became hopelessly inaccurate. So what is the solution to this problem? The solution is the Gaussian elimination method with partial pivoting, a modified version of the Gaussian elimination procedure in which we search for the pivot elements and, based on them,
perform the elementary row operations. Again, consider a 3 by 3 system, 3 equations in 3 unknowns:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In matrix form the coefficient matrix is

[ a11  a12  a13 ]
[ a21  a22  a23 ]
[ a31  a32  a33 ]

these being the 9 coefficients of x1, x2, x3 in the 3 equations; appending the right-hand side vector (b1, b2, b3) gives the augmented matrix. In Gaussian elimination we perform elementary row operations on this matrix until it reduces to a row-equivalent row echelon form. In Gaussian elimination with partial pivoting
we first search for the element of maximum absolute value in the 1st column. So I look at the 3 elements of the 1st column and see which one has the maximum absolute value: if it is a11, I do nothing at this particular moment; if it is a21, I interchange the 1st and 2nd equations; and if it is a31, I interchange the 3rd equation with the 1st. Let us assume the maximum element is a31; then I interchange my 3rd equation with the 1st, and the augmented matrix becomes

[ a31  a32  a33 | b3 ]
[ a21  a22  a23 | b2 ]
[ a11  a12  a13 | b1 ]

So I have interchanged R1
with R3, and now a31, the element of largest absolute value in the 1st column, sits in the pivot position. Next I make this element 1; for that I divide the 1st row by a31, that is, I replace R1 with (1/a31) R1. The pivot entry becomes 1, the next entry becomes a32/a31, then a33/a31, and the right-hand side entry becomes b3/a31. Now, with the help of this 1st
equation I will make the 2 elements below the pivot 0, that is, the entry in the 1st column of the 2nd row and the entry in the 1st column of the 3rd row. For this I need to perform two more row operations: R2 is replaced by R2 - a21 R1, and R3 is replaced by R3 - a11 R1. Please note that the leading element of the 3rd row is now a11, because we interchanged R1 and R3 in our 1st operation. If I perform these 2 row operations, the 2 elements become 0 and the remaining entries change to some other values. So this is the 1st pass of the Gaussian elimination
method with partial pivoting. In the 2nd pass I leave out the 1st row and the 1st column and perform the same operations on the remaining submatrix, so that its pivot entry becomes 1 and the entry below it becomes 0. In the end we are again reducing the matrix to row echelon form; we are just performing some extra operations, searching for the pivot elements and using the pivots in the diagonal positions to make the entries below them 0. This method is called Gaussian elimination with partial pivoting.
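The procedure just described, searching the pivot column, interchanging rows, scaling the pivot row so the pivot becomes 1, eliminating below it, and finally back substituting, can be sketched in Python. This is a minimal sketch; the 3 by 3 system in the usage line is a made-up example, not the one from the lecture:

```python
def solve_partial_pivoting(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: pick the row with the largest |entry| in column k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]          # interchange rows, e.g. R1 <-> R3
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]   # scale the pivot row so the pivot becomes 1
        for i in range(k + 1, n):        # make the entries below the pivot 0
            f = M[i][k]
            M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    # back substitution on the row echelon form (all pivots are 1)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

# usage: a made-up system whose exact solution is (2, 3, -1)
x = solve_partial_pivoting([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])
```

Each iteration of the outer loop is one "pass" in the lecture's terminology: pivot search, interchange, scaling, elimination.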
I will consider the same example I took in my previous lecture, for which Gaussian elimination gave a wrong answer. We have the same augmented matrix, 3 equations in 3 unknowns. In the 1st operation of Gaussian elimination with partial pivoting, I search for the element having the maximum absolute value in the 1st column; here you can see the element 11.2 has the largest absolute value, so I interchange my 1st and 3rd equations. This is exactly what we did
while I was explaining the procedure. So here I have interchanged the 1st and 3rd rows: the 3rd equation has become the 1st, and the 1st equation has become the 3rd. Now I divide the 1st row by 11.2 to make the pivot entry 1, that is, R1 is replaced by (1/11.2) R1. The pivot becomes 1 and the other entries change accordingly; for instance, -4.30/11.2 comes out as -0.384, and so on. Now I will make these elements
below the pivot 0, using the pivot element, which is now 1. For this I replace the 2nd row, R2, by R2 + 1.31 R1, and R3 by R3 - 0.143 R1. After these 2 operations I get a matrix in which the pivot entry in the 1st row is 1 and the entries below the pivot are 0. This completes the 1st pass of the Gaussian elimination method with partial
pivoting. Now I will do the same on the 2nd and 3rd equations, leaving out the 1st column, whose remaining entries are already 0. So I go to the 2nd column and choose the element of maximum absolute value out of the 2 candidates, 0.408 and 0.412. Since 0.412 is bigger than 0.408, I interchange the 3rd equation with the 2nd, and the system changes accordingly. Now, to make this pivot entry 1, I
need to divide the 2nd row by 0.412. When I divide the 2nd equation by 0.412, I get an augmented matrix whose 2nd pivot element is 1. Again I make the element below it 0 with the help of this pivot: I replace the 3rd row, R3, by R3 - 0.408 R2. After performing this operation the matrix is in triangular form; however, the last diagonal element is still not 1. It is a pivot element and I need to make it 1, so in the 3rd pass I divide the 3rd equation by that particular number. By dividing the 3rd row by 0.0800,
I get this element as 1, and now you can see the matrix is in upper triangular form, that is, in row echelon form, with all the diagonal (pivot) entries equal to 1. From here, using back substitution, x3 comes out as -3, x2 comes out as 2 from the 2nd equation after substituting the value of x3, and x1 comes out as 1 after putting in the values of x3 and x2. The exact solution is x1 = 1, x2 = 2, x3 = -3, which is exactly what we obtained using Gaussian elimination with partial pivoting; hence the numerical solution agrees with the exact solution when rounded to three significant digits. So this is the modification of Gaussian elimination,
and this particular method is called the Gaussian elimination method with partial pivoting. The term "partial" refers to the fact that in the 1st pass we choose the entry of largest absolute value from the 1st column only; we look only at the 1st column while going through the 1st pass. If, instead of searching the 1st column, we search the whole matrix (or, in the 2nd pass, the whole 2 by 2 submatrix), the pivoting is called complete or full pivoting. Unfortunately, there are some systems
that we cannot solve accurately using either partial pivoting or complete pivoting. Let me now talk about some of those systems. First of all I will take the systems called ill-conditioned systems: some linear systems, the ill-conditioned ones, are extremely sensitive to round-off error or any other kind of numerical error. For such systems pivoting does not help much; even if you use partial pivoting or full pivoting, the numerical scheme can give an incorrect solution. I will illustrate such an
ill-conditioned system with the help of an example. After that, I will define matrix norms and relate them to the condition under which a system becomes ill-conditioned, that is, how to measure whether a system is ill-conditioned or not. Let us take an example with
2 equations and 2 unknowns, a very small example, and we will see that it is an ill-conditioned system. Take the 1st equation as x1 + x2 = 2 and the 2nd equation as x1 + 1.0001 x2 = 2; in matrix form the coefficient matrix is [1, 1; 1, 1.0001], the unknown vector is [x1; x2], and the right-hand side vector is [2; 2]. Just by looking at this 2 by 2 system we can say that its solution is x1 = 2, x2 = 0; this solution satisfies our system of equations and hence
it is the exact solution of the system. Now I will not change my coefficient matrix, but make a little change in the right-hand side entry of the 2nd equation: say it was 2 and it becomes 2.0001 due to some error; it may be a numerical error or an error in measurement, say a sensor error. So I have a very small change in the input data: earlier it was 2, now it is 2.0001, a change of the order of 10^-4. If you now solve this new system, you get x1 = 1 and x2 = 1. Now look at these 2 solutions and
the 2 systems: in the input data there is a very little change, of the order of 10^-4 (it was 2, now it is 2.0001), yet the solution shows a large deviation. When a large deviation in the solution results from such a small change in the input data, the system is called an ill-conditioned system; so this is a perfect example. This example shows that a small change of 0.0001 in the system of equations makes a significant change in its solution. In a system of equations Ax = b, when the solution is highly sensitive to the values of the coefficient matrix A or of the right-hand side vector b, the equations are said to be ill-conditioned. A matrix norm is a real-valued function that
is defined on the set of, say, n by n matrices and satisfies the following conditions. Condition 1: the norm of a matrix is always non-negative. Condition 2: it is 0 if and only if A is the null (zero) matrix. Condition 3: if you multiply the matrix A by a scalar alpha, then ||alpha A|| = |alpha| ||A|| for all alpha in the set of real numbers. Condition 4 is the triangle inequality: if you have 2 matrices A and B of order n by n, then ||A + B|| <= ||A|| + ||B||. Any real-valued function on the set of n by n matrices satisfying these properties is called a matrix norm. We have different types of norms; the
1st one is the natural or induced matrix norm, defined for a matrix A by the equation ||A|| = max ||Ax||, where the maximum is taken over all vectors x in R^n having unit norm, that is, vectors of unit length. So you take a vector of unit length, multiply it by A, and the norm of A is the maximum of ||Ax|| over all such vectors. The 2nd type of norm we are going to define
here is the maximum row norm: for any n by n matrix A, it is defined as ||A|| = the maximum over all rows of the absolute row sum. Absolute row sum means: take the entries of the 1st row, take the absolute value of each element, and add them; similarly take the absolute sum of the 2nd row, and so on; the maximum of those sums gives the maximum row norm. The 3rd norm we are going to define here, over
the set of matrices, is the Euclidean norm, obtained as the square root of rho(A^T A), where rho(A) denotes the maximum of |lambda| over the eigenvalues lambda of A; that is, the largest of all the eigenvalues in terms of absolute value. Let us take the matrix with 3, -2, 4 as the 1st
row, 1, 2, -3 as the 2nd row, and 2, 4, 1 as the 3rd row. The maximum row norm of this matrix A is the maximum of the absolute row sums: for the 1st row, 3 + |-2| + 4 = 9; for the 2nd row, 1 + 2 + |-3| = 6; and for the 3rd row, 2 + 4 + 1 = 7; so the norm is 9. On the other hand, the Euclidean norm of this matrix is the square root of the maximum eigenvalue of A^T A. Here A^T A has the eigenvalues 3.61, 23.66, and 36.73, so the norm is the square root of 36.73, the largest of these 3 eigenvalues in absolute value, and this comes out as 6.06.
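The two computations above can be checked numerically. A small sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[3.0, -2.0,  4.0],
              [1.0,  2.0, -3.0],
              [2.0,  4.0,  1.0]])

# maximum row norm: the largest absolute row sum (9, from the 1st row)
row_norm = np.abs(A).sum(axis=1).max()

# Euclidean norm: square root of the largest eigenvalue of A^T A
lam = np.linalg.eigvalsh(A.T @ A)   # eigenvalues of the symmetric matrix A^T A
euclid_norm = np.sqrt(lam.max())    # about 6.06
```

NumPy also computes these two quantities directly as np.linalg.norm(A, np.inf) and np.linalg.norm(A, 2).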
So here you can see that the Euclidean norm is an induced matrix norm. Now, to relate these norms to ill-conditioned systems, we have an important result. Theorem: let A be a non-singular matrix; then the solutions x and y of the systems Ax = b1 and Ay = b2, respectively, satisfy the inequality

||x - y|| / ||x|| <= ||A|| ||A^-1|| (||b1 - b2|| / ||b1||).
satisfy this particular inequality. So I am having to system A x equals to b1
on other system is A y equals to b2, so I can write A x minus A y if I subtract the
2nd from the 1st equals to b 1 minus b 2 or it can be written as A x minus y equals to
b 1 minus b 2 of our x minus y equals to A times sorry A inverse b1 minus b2, it is given
that A is non-singular so A inverse exist. Now norm of this particular vector equals
to norm of this vector and by the inequality I can write norm of Ab less than equals to
norm of A into norm of B. So I can write norm of A inverse into norm
of b 1 minus b 2, so norm of x minus y less than equals to norm of A inverse b 1 minus
b 2, if I divide it by the norm of x, this will also divide divided by the norm of x. Now I can write it the left-hand side as such
x minus y norm upon norm of x, this is less than equal to if I multiply in the right inside
in the numerator as well as in denominator by the norm of A so it will become norm of
A, norm of A inverse into norm of b 1 minus b 2 upon norm of A since I have multiplied
in the numerator so I have to multiply in denominator also into norm of x. We know that norm of A into norm of x will
be always greater than equals to norm of A x, so if I replace it
this inequality will not change and x you know that norm of A x equals to here A x equal
to b1 so norm of A x will be norm of b 1, so I can replace here it norm of b 1 and this
is the result of the theorem. Here we can see that multiplying coefficient
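The inequality can also be verified numerically for a concrete case; a quick sketch with NumPy, where the matrix and vectors are made-up values for illustration:

```python
import numpy as np

A  = np.array([[4.0, 1.0],
               [2.0, 3.0]])
b1 = np.array([1.0, 2.0])
b2 = np.array([1.0, 2.1])   # a perturbed right-hand side

x = np.linalg.solve(A, b1)
y = np.linalg.solve(A, b2)

# left-hand side: relative change in the solution (maximum norm)
lhs = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)

# right-hand side: ||A|| ||A^-1|| times the relative change in b
amplifier = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
rhs = amplifier * np.linalg.norm(b1 - b2, np.inf) / np.linalg.norm(b1, np.inf)

assert lhs <= rhs   # the theorem's inequality holds
```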
Here we can see that the multiplying coefficient ||A|| ||A^-1|| is interesting: it depends entirely on the matrix of the problem and not on the right-hand side vector, yet it shows up as an amplifier of the relative change in the right-hand side vector. So if ||A|| ||A^-1|| is a high value for a matrix, there will be a large change in the solution due to a small change in the right-hand side vector, and hence the system will become ill-conditioned. Based on this idea we can define the
condition number: for a given non-singular matrix A with real entries, of size n by n, and a given matrix norm, the condition number of A with respect to that norm is defined by cond(A) = ||A|| ||A^-1||, that is, exactly this amplifying term. If the condition number of a matrix is large, even a small variation in the right-hand side vector, in b1 or b2, can lead to a drastic variation in the solution, and such matrices are called ill-conditioned matrices.
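Returning to the 2 by 2 example from earlier in the lecture, the condition number and its effect can be checked directly; a small sketch with NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

# cond(A) = ||A|| ||A^-1||, here with the maximum row norm
cond = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)

# a change of 1e-4 in b moves the solution from about (2, 0) to about (1, 1)
x = np.linalg.solve(A, np.array([2.0, 2.0]))
y = np.linalg.solve(A, np.array([2.0, 2.0001]))
```

Here cond comes out on the order of 10^4, so a relative change of order 10^-4 in b can be amplified into an order-1 change in the solution.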
Looking back at the earlier example, taken at the beginning when I introduced the ill-conditioned system, with coefficient matrix [1, 1; 1, 1.0001]: the condition number of this matrix comes out to be around 40000 in the maximum row norm, which is quite large, and hence the system was an ill-conditioned system. So I will stop this lecture here; in the summary
of this lecture, I first introduced Gaussian elimination with partial pivoting, and in the 2nd part I introduced ill-conditioned systems and the idea of the condition number, which measures whether a system is ill-conditioned or not. In the next lecture I will continue with the direct methods of solving linear systems, and there I will introduce a technique called LU decomposition. Thank you very much.