Linear Operators and their Adjoints

Video Statistics and Information

Captions
So we're going to talk today about linear operators and their adjoints. This topic is tied to boundary value problems: differential equations that prescribe boundary conditions instead of initial conditions. There are some important concepts around a linear operator and its adjoint, and what we're going to find is that they give us a really powerful solution technique. Once we understand the basic structure of how this works, we'll be able to solve quite a few problems and develop a formal setting for solving boundary value problems in numerous ways. It will also lead us to an architecture in which we can apply perturbation theory, so that when we get to more difficult nonlinear problems we can rely again on linear approximations, with these linear operators playing the key role in giving us information and insight.

So here is what we're going to look at: Lu = f, where L is typically a differential operator, f is some right-hand-side forcing, and the problem is prescribed on a domain from 0 to l. This is ultimately the Ax = b of the function-space world. We're moving from Ax = b — solving linear systems of equations in matrix form, which most people are very familiar with — to Lu = f, where u, f, and L itself are all prescribed continuously on an interval, so in some sense the solution takes on an infinite set of values across that interval from 0 to l. Lu = f is equivalent to Ax = b, and I'm going to frame things so that all the relationships between the two hold: we'll see how they are the same, and how the techniques from Ax = b really come into play in solving Lu = f.

The other thing about Lu = f is that it is not only prescribed on a domain from 0 to l; you also prescribe boundary conditions, and the boundary conditions enforce the kind of solutions you're going to get. You might prescribe the function u at the boundaries, or its derivatives, or some combination of the function and its derivatives. So the boundary conditions have a tremendous impact on the kind of solutions you actually get out of Lu = f. And Lu = f shows up everywhere: in quantum mechanics, electrodynamics, elasticity — across the physical and engineering sciences we see boundary value problems, typically formulated in a linearized way, because those are the ones we can actually solve. So we're going to talk about analytic solution techniques for Lu = f.

Just as with Ax = b, what we'd really like to do to solve Lu = f is to find the inverse of the operator L. If we could construct the inverse of this linear differential operator, the solution would be easy — it's equivalent to finding the inverse of the matrix A. If you're solving Ax = b, the solution is x = A⁻¹b; same thing here, u = L⁻¹f. Now, what would you expect the inverse to be? If L is a differential operator, then its inverse should involve some kind of integral, and in fact that's exactly the kind of solution technique we're going to pursue: some sum or integral representation for the solution. We'll have a couple of different ways to construct versions of this inverse — eigenfunction expansions and Green's functions — as we proceed through this material. For now we want to understand the basic structure and what it means to solve this equation.

Almost everything we do here will require metrics, just as in Ax = b, where we have the concepts of an inner product and a norm. We need those same concepts in function space. So we define an inner product in function space using bracket notation: take two functions u and v, and their inner product is just an integral over the whole domain, ⟨u, v⟩ = ∫₀ˡ u v* dx, where v* is the complex conjugate of v. This is the equivalent of the inner product in vector space, which computes the projection of one vector along another: if you take two vectors and compute their inner product, it tells you how much of one vector is projected onto the other's direction. The inner product here does the same thing — it tells you how much one function is projected onto the other. And looking ahead: if we find that ⟨u, v⟩ = 0, then those functions are orthogonal, just as two vectors with zero dot product are orthogonal. The same holds in function space — functions are orthogonal when this inner product is zero — and this is going to be a very important concept.
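As a quick numerical sanity check (my own illustration, not from the lecture), the function-space inner product ⟨u, v⟩ = ∫₀ˡ u v* dx can be approximated by trapezoidal quadrature, and it exposes orthogonality exactly as described — here with the modes sin(πx/l) and sin(2πx/l), which are orthogonal on [0, l]:

```python
import numpy as np

def inner(u, v, x):
    """<u, v> = integral of u * conj(v) dx, via the trapezoidal rule."""
    w = u * np.conj(v)
    return np.sum((w[1:] + w[:-1]) * np.diff(x)) / 2.0

l = 1.0
x = np.linspace(0.0, l, 4001)
u = np.sin(np.pi * x / l)
v = np.sin(2 * np.pi * x / l)

print(inner(u, v, x))   # ~ 0: the two modes are orthogonal
print(inner(u, u, x))   # ~ l/2: the squared L2 norm ||u||^2
```

The second line also previews the norm: ⟨u, u⟩ is the squared length of the function, here l/2 for a sine mode.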
Many of the functions we'll look at are real, so often I'll drop the v*, since for real functions the conjugation does nothing. However, if you're working with complex functions, make sure to keep the complex conjugate. It's especially important if you want to define a norm: the norm comes from ⟨u, u⟩, the inner product of a function with itself, and you want that to be a positive real value — which is exactly why the conjugate needs to be there for complex functions. In some sense, all the techniques we're going to talk about rely on inner products and orthogonality; those are really the only manipulations that matter as we proceed through this kind of analysis.

With that in mind, let's ask the following question: when can we actually solve Lu = f? This goes right back to the question of when you can solve Ax = b — are there conditions on Ax = b under which you know you can get a solution? We can ask the same question for Lu = f. In particular, what we're going to show is that the operator L has a range, and f had better be in it. This is the idea of solvability, or the Fredholm alternative theorem. Throughout this course the Fredholm alternative plays a critical role in understanding the kinds of solutions that are possible with the linearized operators we'll be working with, so let's discuss it first in terms of the well-known Ax = b, and then for Lu = f.

So I have Ax = b and I want to solve it, and I want to consider an associated problem for the adjoint: A*y = 0, where the adjoint A* of a matrix is its complex-conjugate transpose. When you solve this, what you're really looking at is the null space of A*: you're looking for nontrivial null-space vectors. Suppose you find some vector y that lives in the null space of the adjoint operator — what are the repercussions? Take Ax = b and dot both sides with that null-space vector y, giving Ax · y = b · y. Now use the definition of the adjoint: Ax · y = x · A*y. The adjoint moves from operating on x over to operating on y, and the two expressions are equal. (We'll talk about how to compute adjoints as we go, but that's the definition.) So in Ax · y you can move A over onto y — but A*y is zero. At the end of the day, if A*y = 0, then x · A*y = 0, and you get b · y = 0. In other words, the right-hand-side vector b dotted with y must be zero: the null space of the adjoint operator has to be orthogonal to the right-hand side b. That is the condition for solvability — for the Fredholm alternative to hold — and what it essentially states is that the vector b had better live in the range of the operator A. A acting on x has a range, and if b is outside that range, there is no way to solve the equation. This is just a formal statement of that fact.
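Here is a small numerical illustration of the Fredholm alternative for Ax = b (the matrix and vectors are my own choices, not from the lecture). A is singular with range spanned by [1, 2]ᵀ, and y spans the null space of A* (here A is real and symmetric, so A* = Aᵀ = A). A right-hand side with b · y = 0 is solvable; one with b · y ≠ 0 is not:

```python
import numpy as np

# A singular (rank-1) matrix: range(A) = span{[1, 2]^T}.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Null space of the adjoint A* (= A^T for a real matrix): A^T y = 0.
y = np.array([2.0, -1.0])

b_in  = np.array([1.0, 2.0])   # b . y = 0  -> b lies in range(A): solvable
b_out = np.array([1.0, 0.0])   # b . y != 0 -> no exact solution exists

residual = {}
for name, b in (("in", b_in), ("out", b_out)):
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares "solution"
    residual[name] = np.linalg.norm(A @ sol - b)  # exact solve -> residual ~ 0
    print(name, b @ y, residual[name])
```

The solvable case leaves a residual of essentially zero; the unsolvable one leaves the part of b that sticks out of the range of A.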
By the way, if A*y = 0 has no nontrivial null space — that is, the only way to satisfy it is y = 0 — then the condition b · y = 0 holds trivially and you can solve Ax = b for any b. So this is a really important theorem, and we're going to use it throughout.

The other interesting thing is its relationship to zero eigenvalues. We've been talking about the null space, but a nonzero null space consists exactly of the eigenvectors of a zero eigenvalue: you could imagine Ay = λy with λ = 0. So one way to think about it is that zero eigenvalues of a matrix or an operator can have a very pronounced impact on the kinds of solutions you get out of your problems. This is well known: Noether's theorem essentially states that every zero eigenvalue of these operators corresponds to some kind of invariance in your system, and these invariances mean that an infinite set of solutions is available to you, so solving Ax = b is problematic because you can't nail down a unique solution.

That's the vector version of the Fredholm alternative; now let's turn to the function version. Remember, we're solving Lu = f — a differential operator acting on u, with some right-hand side. We do the same thing: consider the adjoint problem, denoted with a dagger, L†v = 0. Again we're looking at the null space of the adjoint operator — v may be nontrivial, or it could be that the only function satisfying this is v = 0, in which case there is no zero eigenvalue or eigenfunction. We make use of the inner product, which I've already defined as an integral over the domain from 0 to l. Take the inner product of both sides of Lu = f with v: ⟨v, Lu⟩ = ⟨v, f⟩. I'm just multiplying by v and integrating — exactly what I did in the vector case with the dot product; it's a projection onto the null-space function v. The definition of the adjoint is that L moves from acting on u to acting on v: ⟨v, Lu⟩ = ⟨L†v, u⟩. (We'll talk about how to compute L† in a moment.) But once it's moved over, L†v is in the null space, so it equals zero, the whole integral vanishes, and you end up with the same condition as before: ⟨v, f⟩ = 0. The forcing function f has to be orthogonal to the null space of the adjoint operator. That is the condition for solvability. Said another way: the operator L has a range, f has to live in that range, and if it doesn't, you're not going to get a solution.

So the Fredholm alternative holds for Ax = b and it holds for Lu = f — they're equivalent. It's a really important concept, and one we typically don't talk about much in linear algebra, which is a little surprising given how simple and fundamental it is; usually we only discuss it in terms of having a zero determinant — a singular matrix — and the resulting trouble finding solutions.
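A concrete function-space instance (an example of mine, not worked in the lecture): take L = d²/dx² on [0, l] with Neumann boundary conditions u_x(0) = u_x(l) = 0. This operator is self-adjoint and the adjoint null space is the constant functions, so Lu = f is solvable only when ⟨f, 1⟩ = ∫₀ˡ f dx = 0. A quadrature check makes the alternative visible:

```python
import numpy as np

def inner(u, v, x):
    """<u, v> = integral of u * conj(v) dx, via the trapezoidal rule."""
    w = u * np.conj(v)
    return np.sum((w[1:] + w[:-1]) * np.diff(x)) / 2.0

l = 1.0
x = np.linspace(0.0, l, 4001)
v0 = np.ones_like(x)                  # adjoint null-space function: v0'' = 0, v0' = 0 at both ends

f_solvable   = np.cos(np.pi * x / l)  # mean-zero forcing: <f, v0> = 0
f_unsolvable = np.ones_like(x)        # <f, v0> = l != 0 -> f is outside the range of L

print(inner(f_solvable, v0, x))       # ~ 0: solvable
print(inner(f_unsolvable, v0, x))     # = l: not solvable
```

For the solvable forcing, u(x) = −(l/π)² cos(πx/l) is an explicit solution — plus any additive constant, since constants also span the null space of L itself.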
Now here are the equivalences I want to point out between vector and function spaces — I point them out because a lot of your intuition about vector spaces holds for function spaces. In a vector space we have the eigenvalue problem Ax = λx; the function-space equivalent is Lu = λu. We have inner products and orthogonality: in vector space the standard dot product x · y = 0 means the vectors are orthogonal, and in function space ⟨u, v⟩ = 0 means the two functions are orthogonal as well. We also have the concept of a norm — a way to measure length. In vector space x · x is the squared length of a vector; the squared length of a function is ⟨u, u⟩, often denoted by the L2 norm ‖u‖². So now we have orthogonality, inner products, and a notion of the length of a function or a vector — equivalent concepts — and your intuition from vector spaces pours right over to these functional representations for Lu = f. I want to pin that in your mind: as you develop methods, it's really important to understand that so much of your intuition from linear algebra holds for Lu = f.

One last comment about null spaces, because this is really important and closely related to the Fredholm alternative and solvability. Suppose I take my operator L and find that it has a null space: some function u₀ with Lu₀ = 0, so u₀ spans the null space. (You could in fact have multiple functions spanning the null space. In an ideal world u₀ = 0 is the only solution and you don't have to worry about it, but often in the more interesting problems the null space is nontrivial and you have to deal with it.) What's the consequence? When I go to solve Lu = f, if I find some solution — call it ũ — I can always add any amount of the null space and it's still a solution: u = ũ + c·u₀ for an arbitrary constant c. In other words, this zero eigenvalue, this null space, sets me up with an infinite number of solutions, because I can't pin down the null-space component. Essentially u₀ represents some kind of invariance, and I can add any amount of it and still have a solution. This is something we'll have to think about when we solve Lu = f, and we'll address it at different points as we go along, but it is a simple consequence of having a nontrivial null space.
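The same non-uniqueness is easy to see in the matrix picture (a small illustration of mine, using the Ax = b analogue): given one particular solution x̃ and a null vector x₀ with Ax₀ = 0, every x̃ + c·x₀ also solves the system.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([1.0, 2.0])        # b lies in range(A), so solutions exist

x_tilde, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution
x0 = np.array([2.0, -1.0])                       # null vector: A x0 = 0

# Adding any multiple of the null vector still solves the system.
for c in (-3.0, 0.0, 5.0):
    print(c, np.allclose(A @ (x_tilde + c * x0), b))
```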
So those are the concepts we've covered: solvability, inner products, norms, and adjoints. The question now is how you actually compute an adjoint. We have to learn to compute the L† object in order to do these inner-product manipulations — to take the operator and move it across from the function u to the function v. This is a really important piece of everything we do: just like the matrix A, we have to be able to manipulate L, and everything we do comes down to taking inner products and exploiting orthogonality. So how do we compute an adjoint? Lots of integration by parts — that's it. Remember, L is a differential operator, so moving it from one side to the other is just like what you did in your first calculus course: integration by parts, and nothing else.

Consider a general second-order linear differential operator, Lu = a(x)u_xx + b(x)u_x + c(x)u, defined on a domain from a to b, with prescribed boundary conditions: at x = a some relationship between the function and its derivative, α₁u(a) + β₁u_x(a) = 0, and likewise at x = b, α₂u(b) + β₂u_x(b) = 0, with arbitrary constants α₁, β₁, α₂, β₂. This is a generic representation of a second-order linear operator. We're going to ask: how would I compute the adjoint of this operator with these boundary conditions? Remember — and this is a very important point — it's not just the operator; it's the operator and the boundary conditions. You certainly have to deal with the differential expression, but you also have to deal with the boundary conditions, because they are really important in framing what kind of solutions you get out.

Now for the adjoint requirement. As before, I take Lu = f and take its inner product with v on both sides: ⟨v, Lu⟩ on the left and ⟨v, f⟩ on the right. What I care about right now is the operation of taking L off of u and moving it over to operate on v — the definition of the adjoint is exactly that it satisfies ⟨v, Lu⟩ = ⟨L†v, u⟩. The question is how to compute the L† that satisfies this. So let's do it for our operator. ⟨v, Lu⟩ is nothing more than the integral from a to b of v·Lu, and substituting in the second-order operator gives ∫ₐᵇ v(a u_xx + b u_x + c u) dx. I then rewrite it, grouping a(x), b(x), and c(x) with v: ∫ₐᵇ [(av)u_xx + (bv)u_x + (cv)u] dx. The reason is that I'm trying to move all the pieces of the operator L — the a(x), b(x), c(x) and the differentiations — over to be grouped with v, so step one is to break the integral apart and pull the coefficients over.

I still have derivatives parked on u: two in the first term and one in the second. How do I move them over? Integration by parts: differentiate the v-part and integrate the u-part — twice for the first term, once for the second. For the term ∫ₐᵇ (av)u_xx dx, one integration by parts gives [av·u_x]ₐᵇ − ∫ₐᵇ (av)_x u_x dx: a boundary term, minus an integral in which one derivative has moved onto av and only one derivative is left on u. Doing it once more to clear the last derivative off u gives [av·u_x − (av)_x u]ₐᵇ + ∫ₐᵇ (av)_xx u dx: now u stands alone, which is what I wanted, and both derivatives sit on av. So two integrations by parts move the derivatives over, and the cost is the boundary terms I have to evaluate at a and b. For the other term, ∫ₐᵇ (bv)u_x dx, I only integrate by parts once: [bv·u]ₐᵇ − ∫ₐᵇ (bv)_x u dx — again the derivative moves off u onto v at the cost of a boundary term.

Putting the three terms of the original problem back together, what you end up with is

  ⟨v, Lu⟩ = J + ∫ₐᵇ [(av)_xx − (bv)_x + cv] u dx,

where J collects all the boundary terms evaluated at a and b. Notice how u is now isolated and all the derivatives are on v. The remaining integral is an inner product, ⟨L†v, u⟩, with L†v = (av)_xx − (bv)_x + cv — that is your operator L†. Moving things over, ⟨v, Lu⟩ − ⟨L†v, u⟩ = J, where J is the evaluation of u and v at the boundaries with the boundary conditions. The L† written by itself is called the formal adjoint — just the form of the operator. The full adjoint additionally requires that J, often called the bilinear concomitant, be zero. In other words, the adjoint is not just the operator; it's the operator plus corresponding boundary conditions, whereas the formal adjoint is the operator alone. So I have the form of the adjoint, and the remaining task is to set J to zero in order to compute the actual adjoint.
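Here is a symbolic check of the formal adjoint L†v = (av)_xx − (bv)_x + cv (the coefficient and test-function choices are my own, not from the lecture). The functions and their first derivatives are built to vanish at both endpoints, so the bilinear concomitant J is zero and ⟨v, Lu⟩ = ⟨L†v, u⟩ should hold exactly:

```python
import sympy as sp

x = sp.symbols('x')
a = 1 + x          # illustrative smooth coefficients
b = x
c = sp.Integer(2)

# u, v and their first derivatives vanish at x = 0 and x = 1,
# so every boundary term in J is zero.
u = x**2 * (1 - x)**2
v = x**3 * (1 - x)**2

Lu   = a * u.diff(x, 2) + b * u.diff(x) + c * u          # L u
Ldag = (a * v).diff(x, 2) - (b * v).diff(x) + c * v      # formal adjoint acting on v

lhs = sp.integrate(v * Lu, (x, 0, 1))    # <v, L u>
rhs = sp.integrate(Ldag * u, (x, 0, 1))  # <L† v, u>
print(sp.simplify(lhs - rhs))            # 0
```

With general boundary conditions the difference lhs − rhs would instead equal J, which is exactly what forces the adjoint boundary conditions in the next step.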
If J = 0, the boundary term drops out and ⟨v, Lu⟩ = ⟨L†v, u⟩, which is exactly the adjoint relation we wanted. At this point there are some very interesting cases to consider. One is when an operator is formally self-adjoint, meaning L† = L: you move the operator over and it has the same form. More powerful is an operator that is not just formally self-adjoint but actually self-adjoint: ⟨v, Lu⟩ = ⟨Lv, u⟩, which means not only is the operator the same, the boundary conditions are the same as well — that is how you establish a self-adjoint operator. Self-adjoint operators are very important, and we're going to consider the Sturm-Liouville self-adjoint operator, one of the classic, canonical second-order boundary value problem models in all of mathematical physics; it encompasses quite a variety of problems from the engineering and physical sciences. Self-adjointness brings all kinds of nice properties — just as with self-adjoint matrices — including that all the eigenvalues are real and can be ordered, and the eigenfunctions are real. All of those properties hold for operators too, which is fantastic if you can get yourself a self-adjoint operator, although very interesting things happen when an operator is not self-adjoint as well.

So let's work out an example. Consider this specific operator: Lu = u_xx − u_x, the second derivative minus the first derivative, with the following boundary conditions: it's pinned at one end, u(0) = 0, and the derivative vanishes at the other, u_x(l) = 0. It's a pretty simple operator, and the question is how to compute its adjoint.
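Before working that example, the matrix analogue of these self-adjointness properties is quick to verify numerically (my own illustration): for a Hermitian matrix — the matrix stand-in for a self-adjoint operator — the eigenvalues come out real and ordered, and the eigenvectors are orthonormal.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = M + M.conj().T        # Hermitian: H equals its conjugate transpose

lam, V = np.linalg.eigh(H)                     # eigensolver that exploits self-adjointness
print(np.allclose(lam.imag, 0))                # eigenvalues are real
print(np.all(np.diff(lam) >= 0))               # and come out in ascending order
print(np.allclose(V.conj().T @ V, np.eye(5)))  # eigenvectors are orthonormal
```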
So how do I compute the adjoint of this operator? Exactly what we just did: integration by parts. Take ⟨v, Lu⟩ — v is going to live in the adjoint space — and move the operator L over onto v. Look at what L is: two derivatives minus one derivative, and I have to move those derivatives off of u. For the second-derivative term I integrate by parts twice; for the first-derivative term, once. When I do that, I end up with boundary terms from the integration by parts plus an integral in which all the derivatives have moved over to v:

  ⟨v, Lu⟩ = J + ∫₀ˡ (v_xx + v_x) u dx,

so the formal adjoint is L†v = v_xx + v_x, and J collects the boundary terms. (The details are small here, but they're worked out more fully in the notes.) The first thing to do is apply the boundary conditions we were given for u — they're pretty nice: u(0) = 0 and u_x(l) = 0.

Putting those in leaves some equations involving v. Applying the boundary conditions for u does not by itself zero out the bilinear concomitant — J is still nonzero — so I have to impose boundary conditions on v to make the rest of it zero. To make J zero, I must impose v(0) = 0, and at the right boundary the derivative plus the function must vanish: v_x(l) + v(l) = 0. These are different from the boundary conditions for u. So overall the adjoint looks like this — and notice that the original operator L had a minus sign on the first-derivative term, while the adjoint has two derivatives plus the first derivative, so L† ≠ L, and the boundary conditions differ too. My original linear operator was Lu = u_xx − u_x with u(0) = 0 and u_x(l) = 0; the adjoint operator is L†v = v_xx + v_x with v(0) = 0 and v_x(l) + v(l) = 0.

So that's how you compute an adjoint. These adjoints are going to play a fundamental role, not only in thinking about Lu = f but in how we construct eigenfunction expansion solutions and Green's function solutions; in general, everything we do will rely heavily on thinking about adjoint operators and solvability conditions, so computing them is really critical and important. And that's going to be it — this is an intro to linear operators, specifically to the adjoint, and to the idea that we're going to work in function space, where all the ideas we had from linear algebra — orthogonality, norms, lengths of vectors becoming lengths of functions, orthogonality of functions — carry over. We'll be working in an infinite-dimensional space instead of a finite-dimensional one.
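As a check on the worked example (with l = 1 and test functions I picked to satisfy the stated boundary conditions), the adjoint relation ⟨v, Lu⟩ = ⟨L†v, u⟩ holds with L†v = v_xx + v_x and the adjoint boundary conditions v(0) = 0, v_x(1) + v(1) = 0:

```python
import sympy as sp

x = sp.symbols('x')
# Test functions chosen to satisfy the boundary conditions on [0, 1]:
u = x**2 - 2*x                    # u(0) = 0,  u'(1) = 0
v = x**2 - sp.Rational(3, 2)*x    # v(0) = 0,  v'(1) + v(1) = 0

Lu   = u.diff(x, 2) - u.diff(x)   # L u  = u'' - u'
Ldag = v.diff(x, 2) + v.diff(x)   # L† v = v'' + v'  (sign of the first-derivative term flips)

lhs = sp.integrate(v * Lu, (x, 0, 1))    # <v, L u>
rhs = sp.integrate(Ldag * u, (x, 0, 1))  # <L† v, u>
print(lhs, rhs)                          # both equal -7/6, so J has vanished
```

The two integrals agree because the boundary conditions on u and v together zero out the bilinear concomitant, exactly as derived above.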
But all the concepts really carry over from vector space to this Lu = f setting where we'll be working. So keep that in mind: make sure you have good intuition from your vector-space ideas, because you're going to port it straight over to your function-space ideas.
Info
Channel: Nathan Kutz
Views: 3,057
Rating: 4.9699249 out of 5
Keywords: boundary value problems, linear operators, spectral theory, adjoints, self-adjoint, kutz, BVP, Fredholm-alternative, solvability
Id: aG5tFA8GJ78
Length: 34min 3sec (2043 seconds)
Published: Wed Dec 30 2020