Lecture 2 | Modern Physics: Quantum Mechanics (Stanford)

Captions
this program is brought to you by stanford university please visit us at stanford.edu question is can you defeat the uncertainty principle in the following way you measure the position of an object first to a poor accuracy it's somewhere in here somewhere is within some region of size let's call it delta and then you wait a long long time and the particle is someplace else presumably and you measure it again with the same poor accuracy so it's over here somewheres in delta again how far did it move the distance that oh and let's say we discover after having measured it that the center of these fuzzy regions the center of these fuzzy regions are separated by some distance let's call it l all right so how well do we know the position we know the position uh or we know the separation let's talk about the separation we know the separation is l plus or minus something of order delta in other words the distance of separation is l with a fuzziness or an uncertainty which is of order delta two delta or a half delta that's not the important thing now we want to find out how fast it was moving so we divide it by the time that's the amount of time between the two measurements what's the uncertainty in the velocity the uncertainty in the velocity the velocity is l over t plus or minus delta over t and if we make t big enough if we make t big enough delta over t can be arbitrarily small so it seems then that we can measure the position to accuracy delta but the velocity to arbitrary accuracy because delta over t gets smaller and smaller now the answer to this is what we've done is we've measured what we wanted to do was to measure the position and velocity at a time just before the measurement was done we want to know at a particular instant of time right now i want to try to measure the position and velocity sort of simultaneously i want to know both position and velocity well the problem here is that you don't know the velocity or the momentum as accurately as you thought not if you meant the velocity before you measured before you began the measurement before you began the measurement the particle had a certain momentum and then in order to measure it to within accuracy delta you'll have to hit it with a photon of wavelength delta or smaller and that in itself will change the velocity by an amount which is uh given by the uncertainty principle h-bar over delta what you're doing here is you're measuring the velocity accurately but the velocity after you made the first initial measurement the position that velocity is a little bit different than the initial velocity because in order to measure the position you had to hit it with a uh with a photon so you're not measuring what you wanted to measure which was the original position and velocity simultaneously you're measuring the position now to some accuracy but then the velocity after you gave it a shot with the photon so it's not what the uncertainty principle really talks about okay let's uh yeah yeah you have now a particle of some velocity which you in factories measure accurately uh yes but you don't know the velocity anymore because well you don't know the velocity afterwards uh so so if you try but if you try to repeat the measurement the velocity to check it you discover that it was a little different it's a question of repeatability of experiments and and checking your experiments so you're always going to be stuck in that way that that in any case that is what the what the uncertainty what the uncertainty principle says is that you can't know 
uh in this case in this case that the measurement of the final position over here will give a little bit of a kick to the uh to the velocity and you won't know it afterwards it says what it says and you're perfectly right i mean there's something funny but uh but it says what it says and what it says in this case is you can't check that that was the velocity without changing it by an amount of order h bar over delta so you may think that that was the velocity okay i think i measured it but now you try to check it and in classical physics you can measure the position arbitrarily gently and not affect the velocity and quantum mechanics not so okay so let's see so uh are we ready to begin yeah we're ready to begin okay what i want to do before i do anything else is just an exceedingly quick review of complex numbers just uh not because i really want to teach you complex numbers i presume everybody knows about complex numbers but uh because just for some notational reasons incidentally we're going to be continuing with linear vector spaces for a while it's a somewhat dry subject that's the bad news the good news is it's a very easy subject so once you get it you should be it should be easy enough thank you michael but it does pay to spend a uh michael say hello to your mother you're on tv okay so let's let's just remind ourselves very quickly about some a property or two of complex numbers not all about complex numbers just one or two properties that i want to i want to remind you of because we'll see things very similar to them later okay ah before we even do that let me just mention about linear vector spaces linear vector spaces are composed of objects which you can multiply by numbers and add you can add them and multiply them by numbers and in fact complex vector spaces are things that you can multiply by complex numbers and you can add them that's what defines a complex vector space the simplest complex vector space is of course just the complex numbers the complex numbers you can multiply them by other complex numbers and you can add them so the complex numbers by themselves are a very simple linear vector space i just want to tell you that and then we can examine the complex numbers a little bit uh a complex number is just two real numbers added together with i x plus i y x is called the real part y is called the imaginary part and i is the square root of minus one all arithmetic or let's call it algebra all elementary algebra multiplication of complex numbers addition of complex numbers is exactly the same as if they were ordinary numbers except you must remember that i squared is minus one that's all that's that's the theory of complex numbers in a nutshell every complex number can be represented as a point on the x y plane don't confuse this x y plane with any vector space it's just the x y plane so every point uh in the complex plane has an x and a y and it can be plotted and it's usual to call it z it's a common terminology for complex numbers if we wanted to remember the complex numbers are vector spaces that that they can be added and use the notation that we started to use last time for vector spaces namely that uh a vector in a vector space is often represented in quantum mechanics in any case in a complex vector space by a ket vector just a symbol dirac famous symbol that looks like this then we might just describe we might just say the complex numbers are very very simple vectors in a simple vector space in fact as we'll see it's just a one-dimensional vector space as we'll 
see, complex numbers, just in the same sense that the real numbers can be thought of as vectors on a line in one dimension, the complex numbers form a complex vector space of one dimension. we'll find out eventually what dimension means, but for the moment let's not worry about that. now every complex number has associated with it a complex conjugate. the complex conjugate is usually represented by putting a star on it, and it's just the same real part minus i y, and so what it is is the reflection: this is z, and its reflection, as if the horizontal axis were a mirror, just the reflection about the x-axis, is called z star. another way we could represent it, if we were using the notation of complex vector spaces, would be this way: this is called the bra vector, this is called the ket vector, and when you put one next to the other it's called a bracket. really the bra vectors are basically just complex conjugates of the ket vectors. so if we were just dealing with a one-dimensional complex vector space, just the complex numbers, we might easily use the notation z and z star, or ket z and bra z, and they would mean essentially the same thing. okay, let me just point out one more important property that we will use, and we'll use the analog of it for vector spaces; the reason i'm doing this now is to do something in a very simple context that we'll do over in more complicated contexts. supposing i have two complex numbers z1 and z2 and i multiply them together. let's just do it over here quickly: z1 is x1 plus i y1, z2 is x2 plus i y2. we multiply these together, just to do it once. the x's and y's are all real. x1 times x2 is x1 x2, that's real. we also have another term in the real part, that's y1 times y2, but it has a minus sign because i times i is minus 1, so that's minus y1 y2. this is the real part of the product. and then there's the imaginary part, plus i, and what multiplies i: i gets multiplied by x1 y2 plus y1 x2. all right, so here we would take the product and separate it into a real and an imaginary part, and that defines for us the real and imaginary part of the product. a very simple theorem, it's hardly a theorem, it's just a little exercise: the complex conjugate of a product, how do you get it? what you do is you just change each plus i to minus i. each plus i becomes minus i. that gives you the complex conjugates of z1 and z2, and what will it do? you'll work your way through this and it will just change the i over here, nothing else. the simple lemma, theorem, whatever it is, is just that the complex conjugate of z1 z2 is z1 star z2 star, in other words you multiply the complex conjugates. just for notational reasons i'm going to change the order here and put the two here and the one here. another similar fact: what about z1 star z2? supposing i take z1 star z2, that's a thing i can write down, and i take its complex conjugate. the complex conjugate of this product, the rule is just complex conjugate each factor, and that's just equal to z2 times z1 star, so when you complex conjugate a product you just complex conjugate each factor. did i write it right? no, that should be, yeah, thank you: z2 star z1. sure it is, but i don't want to do that now.
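The conjugation rules just stated are easy to check numerically; here is a minimal sketch in Python, with arbitrary illustrative values for z1 and z2:

```python
# Quick numerical check of the conjugation rules discussed above.
# The particular values of z1 and z2 are arbitrary illustrations.
z1 = 3.0 + 2.0j
z2 = -1.0 + 4.0j

# conjugate of a product = product of the conjugates: (z1 z2)* = z1* z2*
assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()

# conjugating z1* z2 swaps the roles of the two numbers: (z1* z2)* = z2* z1
assert (z1.conjugate() * z2).conjugate() == z2.conjugate() * z1

# z times z* is always real and non-negative: x^2 + y^2
print((z1 * z1.conjugate()).imag)   # 0.0
print((z1 * z1.conjugate()).real)   # 13.0, which is 3^2 + 2^2
```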
another way to write the same thing, which generalizes to more complicated vector spaces, is this: suppose i have two complex vectors, and instead of calling them by the notation z let's just call them a and b. all right, the rule here is: think of the b's as being complex conjugates because they're bra vectors, and the a's as being in the original vector space because they're ket vectors. well, the rule that is analogous to this is that if we interchange, look at this for a moment, supposing we interchange z1 and z2, what happens to the result? let's do it right here. the interchange operation will turn this into z1 star z2, just interchanging one and two: we replace z2 by z1 and z1 by z2. but this is just the complex conjugate of this. so interchanging the one and two, or the bras and the kets, does nothing but complex conjugate. so to make it short, if i take some sort of product, we haven't defined this yet but i'm just writing it down for future reference, if i think of complex numbers as vectors in a vector space and i multiply the bras times the kets, which means multiply the complex numbers by their complex conjugates, then you just interchange the two complex numbers and complex conjugate. so when you interchange the bras and kets, that's equivalent to complex conjugation. we'll come back to it in some detail very shortly; it's just a useful thing to keep in mind that the mapping from bras to kets is essentially complex conjugation. okay, let's go back and very quickly review what a complex vector space is. as i said, it is really the heart of the mathematics of quantum mechanics and we can't do without it, so we have to spend a little bit of time. first of all it has a set of elements called a, b, c and so forth. how many of them? an infinite number of them, but as we'll see, infinite in a certain sense, just like the complex numbers: there are an infinite number of complex numbers, and there's also an infinite number of elements that we're going to call a. and they have the property that, first of all, you can multiply any one of them by a complex number alpha and you get a new one, let's call it c. so multiplication by a complex number is an allowed operation that gives you another vector, and adding them, a plus b, is another allowable operation. you can put these two together and say: given any two complex vectors you can form alpha a plus beta b and get another complex vector. if you can multiply them by complex numbers and if you can add them, then you can add them in combination with coefficients in front of them, and the point is that they're closed under these operations. the alphas and betas can be any complex numbers. now for example this would preclude the real numbers from being a complex vector space: the real numbers cannot be multiplied by complex numbers and give back real numbers, so the real numbers are not a complex vector space. they are a real vector space, meaning that the real numbers can be multiplied by other real numbers, real constants, and they can be added, so they're a real vector space. but what about the positive numbers, are the positive numbers a vector space over the reals? no, because part of the definition of a real vector space is that you can multiply by any real number, positive or negative. and that's it, that is the definition of a complex vector space. let me give you two examples, we talked about them last time, just to review them very quickly.
one example: ordinary vectors form a vector space, and it's a real vector space. ordinary vectors, which are just arrows in space, can be multiplied and added, but multiplied only by real numbers; it does not make sense to take an ordinary vector, my finger, and multiply it by a complex number. all right, so ordinary vectors are a vector space, but they're a real vector space. all right, some examples of complex vector spaces: complex functions. let's call it psi, a function of a variable. now the variable does not have to be complex, it can be complex but it doesn't have to be, it could just be an ordinary real argument of the function, argument meaning the thing that the function depends on: psi of x, where x is real but psi is complex. in other words psi is of the form some real part, let's call it psi real of x, plus i times psi imaginary of x. that's the collection of complex functions of a real variable, for example. okay, these form a vector space. why? you can multiply them by numbers and you get new functions, you can multiply them by complex numbers: if i take a complex function psi of x and multiply it by any complex number, say alpha, i get a new complex function, that's number one. number two, i can add complex functions, two functions psi and phi for example: psi of x plus phi of x gives me another complex function. so they form a complex vector space. now unless you've been doing this for some time you may not be used to thinking of functions as vectors, but they are vectors in this mathematical sense. another example that we'll use over and over are column vectors. column vectors are really functions also, they're functions of a discrete index instead of a continuous x, functions perhaps of an integer, but maybe an integer that only goes from one to four, or one to three, or one to seven, and we can represent them by a column. for these discrete kinds of vectors i'm going to use little a's and little b's, and i put the entries into the column: a1, a2, a3, a4, and these are just numbers now, complex numbers. the set of column objects like this with only four complex entries forms a vector space. you can add two of them by the following operation: call the other one b1, b2, dot dot, and define, this is the definition of adding them: a1 plus b1, a2 plus b2, a3 plus b3, a4 plus b4, and so on. you can also multiply them by constants: if i want to multiply the vector a by a constant, let's call that constant alpha again, all i do is multiply the entries by alpha, each entry gets multiplied by alpha, and so alpha times a column vector is by definition just the column vector alpha a1, alpha a2, dot dot. so you can multiply them by constants. it's kind of obvious that this notation here is describing components of a vector; we could in fact think of these entries as kind of like the components of a vector. if i were to give you an ordinary vector in space, let's say three-dimensional space, i could describe that vector by three components x, y and z, and if i wanted to i could exhibit those components in the form of a column x, y and z, but of course now i would be talking about a real vector space and the x's, y's and z's would be real. so there's nothing terribly unusual about what i'm doing here. you can also think of these objects as functions of the index: if i tell you which index, if i tell you i'm talking about 3, then a of 3 would be, well, a would be a function of a discrete variable which goes from one to four here. so in that sense even these discrete vectors can be thought of as functions, not continuous functions but functions whose independent variable just takes on four values, one, two, three and four.
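Since column-vector addition and scalar multiplication are defined entry by entry, a short NumPy sketch can make the closure property concrete; the entries and the constants alpha and beta below are made-up illustrations, not values from the lecture:

```python
import numpy as np

# Column vectors with four complex entries, as in the example above.
# The particular entries are arbitrary illustrations.
a = np.array([1 + 2j, 0.5j, -3 + 0j, 2 - 1j])
b = np.array([2 + 0j, 1 - 1j, 4 + 4j, 0 + 0j])

# addition is defined entry by entry: (a + b)_i = a_i + b_i
print(a + b)

# multiplying by a complex constant alpha multiplies every entry by alpha
alpha = 1j
print(alpha * a)

# any combination alpha*a + beta*b is again a four-entry complex column,
# so the space is closed under these operations
beta = 2 - 1j
print(alpha * a + beta * b)
```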
okay, now what about the dimension of a vector space? the dimension of a vector space can be defined as follows: it's the minimum number of vectors that you need so that you can write any vector as a sum, with coefficients, of that minimum number. all right, let me give you an example. let's take a one-dimensional vector space, just the ordinary line, and we can think of vectors on the line; vectors on the line are of course just real numbers, but i can represent them as arrows from zero to some real number. if i'm allowed to multiply this vector, it could be a unit vector but it doesn't have to be, if i'm allowed to multiply this vector by any real number, let's call it r, and let's call this the vector v, and r can be positive or negative, then i can get any vector out of it. so it only takes one vector to get started and to be able to represent any vector, in this case just by multiplying it by a constant. now let's go to a two-dimensional vector space, and i'm thinking now about real vectors just so we can draw on the blackboard; the blackboard is a two-dimensional vector space. if i give you any two vectors which happen not to lie on the same line, then by multiplying, let's call this v1 and this v2, by multiplying v1 by a constant r1, let's say r1 times v1 plus r2 times v2, i can generate any vector in the plane. all i need is two vectors. i could draw another vector in here, v3, but i don't need it in order to represent any vector by summing up multiples of v1 and v2; it's redundant, in fact v3 itself can be expressed in terms of v1 and v2, v1 and v2 can be added with appropriate coefficients to get v3. so the question is how many vectors do you need, what's the minimum number of vectors that you need in order to be able to add them up with coefficients and generate any vector in the vector space. that minimum number is the dimensionality of the space; in this case it's two. if you had a three dimensional vector space, for example, you could generate any vector by taking three vectors along mutually perpendicular axes, or any other three vectors as long as they don't lie in a plane. so it takes a minimum of three vectors to be able to represent any vector lying in a three-dimensional vector space. and that's the general definition: how many vectors do you need in order to be able to add them up with coefficients and represent any vector in the space. let's look at some examples. how many vectors do you need? here we have column vectors made with four entries, only four entries. i assert that that's a four dimensional vector space. how do i see that? well, there are four vectors and you need all of them, incidentally; with any fewer than this you wouldn't be able to represent every vector. but you can take the four vectors to be one zero zero zero, zero one zero zero, zero zero one zero, and of course the last one is zero zero zero one. if you add up these vectors with coefficients a1, a2, a3 and a4, what do you get? you just get the vector a1 a2 a3 a4: the first one will give you the vector a1 0 0 0, the next one will
give you the vector 0 a2 and so forth and when you add them up you'll get back to this vector so the conclusion is that you need oh incidentally if you threw away one of these vectors then you would never be able to construct vectors there would be vectors that you couldn't construct in this way okay particular let's suppose we threw this one away over here then we'd never be able to get a vector which had something in the bottom entry here so you need all four of them but that's all you need you don't need a fifth vector in order to be able to write down every vector as a sum in this way so a vector a column vector space composed of four entries is four dimensional and so on five entries would be five dimensional two entries would be two dimensional what about just one entry one entry would be just the good old complex numbers it would be a one dimensional vector space so a column vector with just one entry is just a number it's just a complex number and they form the one-dimensional vector space complex vector space all right so let's uh let's just summarize that by saying oh what about the functions what about the functions let's say the continuous smooth functions on a line how many functions does it take to be able to write every function as a sum an infinite number obviously an infinite number there's no finite set of functions that you can add up with coefficients that will generate every function if i have a few functions that one this one a couple of others it's rather limited what i can generate by adding them up with coefficients in particular i'll never generate a function which looks like this so you need an infinite number of functions in order to be able to write all functions now all functions is a huge number of things all continuous differentiable smooth functions you still need an infinite number of them i won't try to tell you how big that infinity is but you need an infinite number and so for that reason the functions are called a infinite dimensional vector space but you can think of it simply as a vector space with an infinite number of axes it's a little mind boggling i suggest you don't try to think about it that way okay so there we are we have functions psi of x and we can think of them this symbol means think of them as we can think of them as vectors in a complex vector space phi of x is another vector these are two different vectors and they form an infinite dimensional vector space in the same way the column vectors a one dot that a let's say six this time a six dimensional vector space can be thought of as the vector a the vector a which who's in which is determined by the six uh small letters a a one through a six all right so that's that's the notation that we'll use to describe vectors and typically vectors are closely related to functions either of continuous variables or discrete variables now we come to the idea of the dual vector space the dual vector space is really just the complex conjugate vector space uh that's a secret that mathematicians uh don't let out they well for good reasons it's not an arbitrary they when they want to be general and they want to be rigorous uh they prefer not to think of the dual vector space as being limited to just complex conjugation you can have other kinds of operations but for us in physics and in quantum mechanics basically the dual vector space means the complex conjugates and we think of the complex conjugates as forming a separate vector space a different one but related related so just like for every complex 
number we have its reflection: if the complex number is in the upper half plane then its reflection is in the lower half plane, and if the complex number is in the lower half plane then its reflection is in the upper half plane. we actually have a pair of redundant descriptions of the same plane, one in terms of the complex numbers and the other in terms of the complex conjugates. it's really the same plane, but we think of them as two different vector spaces, and there's a mapping between them: for every complex number there is a complex conjugate, for every ket vector z there is a bra vector z, its complex conjugate. and so we think of a vector space coming together with a second vector space which is in a certain sense identical to it, or isomorphic to it. i won't say it contains the same objects, that would be wrong, but they're in one-to-one correspondence. and they're labeled: if a is a vector in the ket space, then a with its arrow facing in the other direction is essentially its complex conjugate. what does that mean? well, for example, if the vector space is the space of functions, then the dual vector space is simply the things made up out of the complex conjugates of the functions. there's a mapping between them: for every ket vector there is a bra vector, just as for every complex number there is its complex conjugate. so one example of this is the mapping between complex conjugates, well, it's always a mapping between complex conjugates, but if we were talking about column vectors then the usual notation, now some of these things are just notations, you can't ask me why because it's just a notation that turns out to be convenient, is that if you represent the original ket vectors by a column then the bra vectors are represented by a row. it's nice to keep track of which are the bras and which are the kets by making the kets be columns and the bras be rows, but the entries of the rows are the complex conjugate numbers, that's the important thing. all right, so for every column vector you have the complex conjugate row vector, and those form the bras and the kets. this can be thought of as the bra vector, this can be thought of as the ket vector. notice we don't put a star in here; the star is indicated simply by drawing the bra vector instead of the original ket vector, so think of the bra as being a symbol for complex conjugation. all right, now supposing we have a vector a and its image in the dual vector space: we have the vector a, its ket, and we have its image in the dual space, which is the bra. now supposing we multiply a by the complex number alpha, what is the image of that? you might think that it's just alpha times bra a. what's wrong with that? you want to conjugate it, because you want the bra vector associated with alpha a to be the complex conjugate of the ket vector, and that entails complex conjugating the number that you multiply a by. so you really want to put a star here, alpha star, the complex conjugate. this is just a statement that complex conjugating a product requires you to complex conjugate both pieces of the product. okay, we had it, where was it, yes, here it is, right here: complex conjugating a product means complex conjugating both factors. so the complex conjugate of alpha times a, if a were just a number for example, would be the complex conjugate of a times the complex conjugate of alpha.
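The rule just stated, that the dual (bra) image of alpha times a ket carries alpha star, can be checked in a couple of lines of NumPy; the entries below are arbitrary illustrations:

```python
import numpy as np

# A ket as a column of complex numbers (the values are arbitrary illustrations)
a = np.array([1 + 1j, 2 - 3j])
alpha = 2 + 5j

# the bra (dual) of a ket is the complex-conjugated row vector
bra_a = a.conj()

# the dual image of (alpha * a) carries alpha*, not alpha
lhs = (alpha * a).conj()          # bra associated with the ket alpha*a
rhs = alpha.conjugate() * bra_a   # alpha* times the bra of a
print(np.allclose(lhs, rhs))      # True
```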
notice that when i'm speaking about simple numbers i won't use the notation of vector spaces, i'll just use complex conjugation. so when thinking about vectors, we complex conjugate them by turning them from bras to kets; when thinking of them as coefficients, just numerical coefficients, we'll use the notation of complex conjugation, the star. all right, so keep in mind that in the dual vector space, when you multiply a vector by a constant, the image in the dual vector space gets multiplied by the complex conjugate. otherwise, operations on the dual space are similar to operations on the original space: for example, if a and b are two vectors and we add them, then the dual, or the complex conjugate, will just be the sum of the dual vectors. that's a lot of information and it pretty much allows you to reconstruct the dual vector space, as i said, for example the dual vector space to the space of functions. yeah, good, there's no significance to it, no, there's a sort of tradition of keeping your formulas neat by writing them a certain way, but in this case no, there is no significance to that, that's a good question. other times, for other objects, there will be: when we multiply vectors by operators then it does matter where we put them, but when it's just numbers no, there's no significance. i try to keep things neat by following a certain pattern, but you don't need to worry about my pattern. okay, that's the basic idea of linear vector spaces and their duals, or their conjugates. next, the next construction that you have to keep track of is the inner product between two vectors. the inner product is the analog of the ordinary vector dot product. just to remind you very quickly what the dot product of two ordinary vectors is: you take the two vectors, and you can think of it in a variety of ways. you can think of it as: take the projection of one vector onto the other, take the magnitude of that projection, let's call this b and let's call the original vector a, this distance here is the projection of b onto the axis of a. if we multiply the projection of b by the magnitude of a, that gives us one form of the dot product. we can do it the other way, we can take the projection of a on b and multiply the lengths. the symmetric way to write it is that it's the length of a times the length of b times, what, cosine of the angle between them. but if we write it in terms of components, the components for example along any set of orthogonal axes, the x and y axes for example, then if we know the components of a, let's say a x and a y, and b x and b y, then the dot product of a and b is just a x b x plus a y b y. so we ordinarily write, for ordinary vectors, and by ordinary vectors now i just mean little pointers, a dot b is equal to a x b x plus a y b y. what if there's a third direction? straightforward: if there's a third direction, or a fourth or a fifth or a sixth direction, you just keep going, a z b z and so forth. so in other words it's just the sum of the products of the components, axis by axis, x, y, z and so forth. in parallel, or in analogy with this, we invent the generalized idea of the inner product of two vectors, let's call them capital a and capital b, and we write it as bra a times ket b. now this is the inner product of the ket vector b with the bra vector a. because we were dealing with real vector spaces we didn't have to worry about complex conjugating anything, but when
we're dealing with complex vector spaces when we multiply two vectors we have to decide which one is the ket vector and which one is the bra vector once we do that then i'll tell you what well then i'll tell you what the rule is first of all the inner product has some properties this is this is abstract first of all it has some abstract properties the first abstract property is supposing you multiply the vector b by beta so this is beta times b this is the inner product of a with the vector beta times b so let's put all of this in a bra and a parenthesis here parentheses that inner product is just beta times the inner product of a b if you double in other words if you double the length of b and not double the length of a then you just double the inner product okay if you were to double both a and b of course you get a factor of 4 but we're just doubling doubling or multiplying by beta one of the vectors if we do that then the inner product itself just gets multiplied by beta another fact is that if we multiply a by a sum let's say b plus c the inner product of a with b plus c that's just given by a inner product with b plus a in a product with c the mathematical language is that the inner product is linear as a linear operation on on b these are all properties that ordinary dot products between vectors have and they're properties that you can check for the inner products between more abstract vectors if i give you the rule so far i haven't given you the rule for inner products for column vectors or for functions i'm going to give you that as examples but before i do i want to finish up with the abstract properties another abstract property is that the inner product of a with b ked vector b bra vector a is related to but is not equal to in general the inner product of b with a notice what i've done here effectively i've complex conjugated both a and b i've replaced a by the complex conjugate of b and b by the complex conjugate of a what would you get what would you expect to get on the right hand side here yeah if you take two numbers and you multiply them sorry if you take a number of times if you know what i mean you've complex conjugated both factors you can expect that the result is just the complex conjugation so that's the correct equation when you interchange bra and ket vector you have to complex conjugate and that's the rule for that's analogous to a rule let's see it's analogous to this one z1 star times z2 star is uh z1 stars e2 okay you see what i've done yeah i think it's the first one right okay so if your complex conjugate both factors then the product complex conjugates this is now to be taken as an abstract def part of the definition of an of a good inner product the last definition oh from this we can derive something very simply supposing we take the inner product of a vector with itself then this says that it's equal to the inner product of the vector with itself if i interchange a and a that doesn't do anything complex conjugated so it says that whatever this object is whatever this number is so the inner product is incidentally a number not a not a vector the inner product is a number the inner product of a with a is its own complex conjugate what are numbers called which are their own complex conjugates they're called real right they're numbers on the real axis their own complex conjugates so they're their own reflections and that means they're on the real axis so the inner product of a vector with itself is real now there's one more property that the inner product 
actually inherits from the complex conjugation idea. again it has to do with the inner product of a thing with itself, an a with an a. what do we know about the product of a complex number with its conjugate, besides the fact that it's real? it's positive, it's always positive. let's just check it: it's x plus i y times x minus i y. the imaginary part cancels, both terms have an x y times an i, but one has a plus sign and the other has a minus sign, and then this is equal to x squared plus y squared, because i times minus i is plus 1. okay, so it's always positive: x squared plus y squared, real and positive. this is part of, in fact this is the set of requirements: that when you multiply a bra vector or ket vector by a complex number, that simply multiplies the inner product by the same number; that you can take inner products with sums and you just get the sum of inner products; that when you interchange bra and ket you complex conjugate; and one more rule, that the inner product of a vector with itself is positive. that it's real follows from the previous one; the positivity doesn't follow from anything. as an example of where it doesn't follow: in real vector spaces, four-vectors in relativity can have inner products with themselves which are positive or negative or zero, light-like, space-like and time-like. so it doesn't follow from anything, but it is part of the definition of an inner product in a complex vector space. all right, so these are rules about what the inner product has to satisfy. now you may be able to find more than one construction for an inner product which is consistent with these rules, and in general you can, but i'll give you the simple inner product for functions and for column vectors which satisfies these rules very easily, and i'll leave it for you to go home and prove the various properties, these four properties here; you do it by inspection, it's not very hard. for the vector space of functions, the inner product of phi with psi, phi and psi are both functions, phi of x and psi of x, is equal to the integral dx, now of course you have to decide functions over what interval, for example we could have functions on the interval from zero to one, or functions from minus infinity to plus infinity, but whatever the interval is, the integral of phi star of x psi of x. notice that because phi is the bra vector it gets conjugated in here, and we just multiply phi star of x by psi of x and integrate. easy, very straightforward to prove all of these properties here, i'll leave it to you as i said. so that's the inner product or the dot product between two functions, the integral of phi star psi. for column vectors and row vectors, the bra vectors: let's call the bra vector b1 b2 b3, a three-dimensional complex vector space, and let's take the column vector, the ket vector, to be a1, a2 and a3, and i'm sorry, i should have put complex conjugates here, the entries of the row vector are complex conjugates. then the inner product between b and a is, in very close analogy with the way you take inner products for ordinary vectors, i think we've lost it off the blackboard, but it's just the sum of the products of the components: b1 star a1 plus b2 star a2 plus b3 star a3, however many of them there are.
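For the column-vector inner product just written down, here is a minimal NumPy sketch; np.vdot conjugates its first argument, which plays the role of the bra, and the entries are made up for illustration:

```python
import numpy as np

# Inner product of two complex column vectors: <b|a> = sum_i b_i* a_i
# The entries below are arbitrary illustrations.
a = np.array([1 + 2j, 3 - 1j, 0 + 4j])
b = np.array([2 - 1j, 1 + 1j, -2 + 0j])

ba = np.vdot(b, a)   # <b|a>, with b conjugated
ab = np.vdot(a, b)   # <a|b>, with a conjugated

print(np.isclose(ba, np.conj(ab)))   # True: <b|a> = <a|b>*
print(np.vdot(a, a))                 # real and non-negative: sum of |a_i|^2
```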
yes, the coordinates of the vector b are b1, b2, b3; the entries into the bra vector are the stars. now i think what you're saying is we would normally call this vector a and we would call this one just plain b, we wouldn't put a star there. that's a traditional notation: the star is indicated by the backward arrow here, but the components of it really are the complex conjugates. okay, so that's the idea of an inner product. all right, let me stop now for questions because i'm going pretty fast. questions? yes, so the x is, oh, here, well, yeah, right, but we could for example have more than one argument in here, so it could be x and y, and then we would integrate dx dy. yeah, but certainly they're both real, assuming they're both real. now a function of two variables can be thought of as a function of a complex variable, but let's not do that, let's think of the arguments as real, i think we're better off; in most situations we will want to do that. all right, now shall we take a break for six and a half minutes? all right, now i want to come to the concept of a set of basis vectors. for ordinary vector spaces, let's say two dimensional vector spaces on the blackboard, basis vectors are just a pair of unit vectors pointing in orthogonal directions, which can be chosen somewhat arbitrarily, not completely arbitrarily: you pick one unit vector and then pick the other unit vector perpendicular to it. for most purposes it's usually convenient to draw the basis vectors on the blackboard so that they're horizontal and vertical, and they're simply vectors that are useful in writing any vector in terms of them, or taking components of vectors. let's define a basis, a basis set, the following way. first of all, they're unit vectors; unit vectors means that their inner product with themselves is one, the dot product with themselves is one. incidentally, for ordinary vector spaces, real vector spaces, the dot product of a vector with itself is just the square of the length of the vector, of course, so that's why it's always positive, among other things; the square of the length of a unit vector is one. so first of all let's imagine in our vector space a collection of vectors which i will call, what should i call them, i guess i'll just call them b, b for basis, little b. now the little b's are not numbers, they're a collection of vectors b sub i. how many of them are there? the number is equal to the dimensionality of the vector space, so if it's a seven dimensional vector space there are seven b's, i equals one through the dimension of the vector space, d for dimension. okay, b1, b2, b3, b4. now they're not, i won't say, uniquely fixed, but they satisfy a set of assumptions. the set of assumptions is, first of all, that they're unit vectors. that says that every b, the inner product with itself, and this is not summed over, now often i use the notation that when you repeat an index you sum over it, here i do not mean the sum, no sum, just take the i-th vector and take its inner product with itself, that's assumed to be one. all right, so every one of these vectors b, b1, b2 and so forth, has unit length. next, they're perpendicular to each other. what does it mean to be perpendicular to each other? for ordinary vectors it means that the dot product is zero when two vectors are perpendicular; remember that the dot product is the projection of one onto the other, and if they're perpendicular there is no projection. so let's write it this way, yeah, okay, that's fine: if i take two different
vectors let's say b1 and b2 or bi and bj for j not equal to i j not equal to i then that's zero that's the assumption that they're orthogonal a vector of unit length is called normalized or normal a normal vector is unit length a set of vectors which are mutually orthogonal to each other are called mutually orthogonal to each other uh if they're both normalized and orthogonal they're called an orthonormal collection of vectors orthonormal collection of vectors and if there are d of them then they're a bases they're a basis they're a basis of uh of vectors for the vector space we'll see what that means in a moment it means you can easily expand any vector in terms of them let me give you an example just in terms of column vectors i've already given you the example one zero zero zero there's nothing special about four dimensions here i've just chosen four dimensional vector spaces for uh zero one zero zero this could be b1 let's write it out the vector b1 is to be identified with a column one zero zero zero and i'll write down the fourth one and you can fill in the other two in between b4 is to be identified with one sorry zero zero zero one each one is of unit length that's obvious because each one has only one entry and when you calculate the inner product with itself each one has only one term in the inner product and it's just one times one right the inner product between any two of them will always be zero because they don't have entries in the same place for example the inner product of this one with this one will be zero it's one times zero plus zero times zero plus zero times zero plus zero times one so this is an orthonormal basis of vectors for a simple four dimensional column vectors this is not the only orthonormal basis you can find just in the same way as i could choose these two vectors to form a basis on the blackboard i could also choose these two vectors to form a basis on the blackboard they're supposed to be unit vectors so there's a there's a degree of ambiguity in the basis but uh you can check a given set of vectors and check whether it's a basis or not by calculating their inner products yes oh yeah okay that's right yes yes yes yes yes no no you're right let's let's uh let's let's straighten that out i should have written b4 and i should have written that this is equal to the row vector 0 0 0 1. 
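The orthonormality conditions just written down can be verified directly for the standard column basis; a small sketch using the four-dimensional example from earlier:

```python
import numpy as np

# The standard basis of a 4-dimensional complex column space,
# taken here as the rows of the identity matrix.
basis = [np.eye(4, dtype=complex)[i] for i in range(4)]

# check <b_i|b_j> = 1 if i == j, 0 otherwise (the kronecker delta)
for i, bi in enumerate(basis):
    for j, bj in enumerate(basis):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(np.vdot(bi, bj), expected)
print("orthonormal: <b_i|b_j> = delta_ij for all i, j")
```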
does that help? good. yeah, now that's a good point: just for consistency i really should have written b1 as a ket vector, b4 as a bra vector, and then taken the inner product of b1 with b4. good, i was jumping ahead a little too quickly. right, this equation, now these two equations can be summarized by a single equation: this is b i with b j for j not equal to i; if j is equal to i then the answer is just one. so b i with b j is equal to one if i is equal to j, and it's zero if i is not equal to j. that's the kronecker delta symbol, delta i j. delta i j is defined so that it's 1 when i is equal to j and 0 when i is not equal to j, that's the definition of the kronecker delta, and that defines a basis. now let me show you how to take any vector. let's suppose we have d basis vectors. we know we don't need more than d vectors in a d dimensional vector space in order to expand any vector, that's the definition of the dimensionality of the vector space, that you don't need more than d vectors. we have d vectors, namely the d basis vectors. therefore it should be possible to take any vector, let's call it v, and write it as a sum over the basis vectors with coefficients, and let's call those coefficients v i, times the i-th basis vector, and this is just v1 b1 plus v2 b2 and so forth. all right, now let me take the inner product of both sides of this with some particular one of the b's. i could call it b j, but let me do it extremely specifically: let me take the inner product of this with b1, with the first basis vector, b1 with v. what do we get? well, we get a sum over i of the inner product of b1 with this expression, but that's just v i times b1 b i. notice it's summed over i, but i've chosen the first basis vector on the left here, so one of the indices is just one. i don't want to confuse this with i, let's draw it so that it looks like a one, does that look like a one? i hope so. okay, now what is this thing? it's a sum over i, but there's only one term in that sum, and that's because b1 is orthogonal to all the b i's except for i equals one. okay, so there's only one term, and that term is when i is equal to one; when i is equal to one the inner product here is just the number one, so the result is just v1. what about the inner product of b2 with v? exactly the same argument will tell you that the inner product of b2 with v is the coefficient v2. now this is kind of interesting: i have now figured out what the v's are. if i give you a vector v and i ask you to expand it in terms of the basis vectors, it's very easy to figure out what the coefficients are: the coefficients are just the inner products of v with the basis vectors. so we can now write a more general formula and put it together. first of all we can say that b j with v is v j. now i want you to imagine that v j was the thing i was trying to figure out, i was trying to figure out what expansion coefficients i need in order to expand the vector v, i want to write v in this form. well, the answer is that the expansion coefficients are just these inner products. but now let's rewrite this equation: i can also write, of course, that b i with v is v i, it doesn't matter whether i stick j or i there, same formula. and now let's rewrite this formula here, sticking in the v's that i've calculated. so let's do that: we have v is equal to a sum, and now i'm going to start by writing b i here. you asked me earlier whether there's any significance to whether i put the v i on this side or on that side the
answer is no so i'm going to choose to write it over here but now i'm going to take what vi is namely the inner product of bi with v so let me just substitute that in bi with v that's a kind of neat formula that formula will come up over and over again is sort of easy to remember v is the sum over all the basis vectors of the i basis vector times biv and i choose to write it in this form because later on when we when we find out what an outer product of two vectors is we'll discover that this has some nice significance but this is a formula for any vector if you know its inner product with the basis vectors okay so that's a that's a useful formula which we will draw on over and over again but for the moment that's uh that's it about vector spaces we haven't discussed operators yet i don't know if we'll have time tonight i would like to get the operators we i think we will but let's just talk a little bit now about the physical significance of vector spaces in quantum and classical mechanics the space of states the phase space the space or the collection of possible configurations of a system form a set in the mathematical sense of set theory the operations that you can do with sets are the usual operations that you can do you can form the intersection of two sets that's the operation that's the operation of and now if you have a set if you have a set of some kind and you consider some subset and you consider another subset then the things which are in both subsets are the intersection and the things which are in one set or the other set are the union of the two sets those are the basic operations or two of the fundamental operations of set theory intersection and uh and um and union and they correspond to and and or quantum mechanics is very very different the concepts of and and or are quite different we're not going to talk about them tonight but the difference derives from the fact that the states of a system are not points in a set they are vectors in a vector space and the algebra of vectors is very different than the algebra of boolean algebra boolean algebra being the algebra of of ordinary sets of ordinary set theory so what is the connection between sets sorry between states and vectors in quantum mechanics well let's go back to what we discussed last quarter let's start with some discrete systems the coin the coin was a good system that was useful for us in formulating classical logic the coin had two states the coin on the table i don't have a coin why a coin yeah it's a penny okay going to be heads or tails we came up heads and uh so heads or tails the coin is a two-state system the classical coin is described by a system with two states heads and tails and they form a set that set is called the phase space we discussed this extensively last quarter i'm not going to discuss it again we could also think of this slightly differently we could think of the coin as having a an arrow a pointer embedded in it and i don't want to call this i'm not going to call this a vector on purpose i'm going to call it a pointer just to distinguish it from the vectors which describe vector spaces of states uh we'll call it a pointer it points in some direction and here the pointer is pointing up here the pointer is pointing down and i could call these two states either up and down heads and tails or i could even give them a different name i could call this state plus or even plus one and minus one just giving them names now we call these states plus one and minus one in quantum mechanics we take 
these two states of the coin uh but of course we don't really need quantum mechanics for coins it wouldn't do us very much good but for some very very quantum mechanical variable which could either for one reason or another be up or down i'm really thinking about the spin of an electron but but for the moment it's just a system which can take on two values and only two values either heads or tails plus one minus one or up or down and we simply assign vectors to these two states two basis vectors two orthogonal vectors one of them we can call heads the other one we can call tails or we can call them the vector plus one and the vector minus one now these are just names this plus one doesn't mean that this vector has length plus one and it doesn't mean that this vector has length minus one they're just names the vector plus one and the vector minus one or if you like the vector heads and tails well you could do this in classical physics too you could nothing to prevent you from imagining that you could identify some sort of vector with the two configurations of heads and tails but it wouldn't do you any good there'd be no particular purpose in doing so in particular what sense would it make to add the two vectors heads plus tails sure you can call you can call the configuration heads by the label if you like a ket vector with an h stuck in it and just call it that you could call the down state or the tail state t but you would have no meaning whatever to a combination like oh i don't know again complex numbers alpha times heads plus beta times tails what on earth would that mean in particular with alpha and beta being complex numbers it doesn't mean anything in classical physics so this is something you would not entertain adding the two vectors together and therefore since you wouldn't entertain the idea of adding them together there's no point in calling it a vector space instead you just call it a set heads and tails and you never think about combining them together by the operation of addition uh the opera the operation of addition has no meaning for a point and a set of course you can form the union of two points but that's not adding uh the two points together to form a third point so to speak you don't add two points together in a in a vector space to form a third point but you do add two vectors together to form a third vector so there must be some kind of concept of a state of this two-level system of this two-state system which is the linear superposition of two of them that's what's new in quantum mechanics that this has the meaning this has a meaning and it has a meaning of a particular state of the head's tail system which is different than either heads or tails what way it's different how it's different is at the heart of quantum mechanics this is going to take us a little bit of time we're probably not going to really get to it tonight but that's the basic setup that the states of a system are vectors and not only that that we can add them together and make sense out of new states which is for example a linear combination of heads and tails it's neither heads nor tails but it's some number alpha h times heads plus alpha t times tails now uh let's uh and if there's no reason to restrict ourselves incidentally to two states but before i go on to more states uh six state system or a ten state let's just uh specify a little more carefully the properties of the vectors heads and tails or the properties of the vectors one and minus one first of all they are orthogonal to each other 
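To make the heads/tails construction concrete, here is a small NumPy sketch; assigning heads and tails to the standard basis columns, and the particular amplitudes, are illustrative choices rather than anything from the lecture, and the sketch also uses unit-length basis vectors, which the lecture imposes just below:

```python
import numpy as np

# Two-state "coin": assign orthonormal basis vectors to heads and tails.
heads = np.array([1, 0], dtype=complex)
tails = np.array([0, 1], dtype=complex)

# a general state is a complex superposition alpha_h * heads + alpha_t * tails
# (these particular amplitudes are arbitrary illustrations)
alpha_h, alpha_t = 0.6 + 0.0j, 0.0 + 0.8j
state = alpha_h * heads + alpha_t * tails

# the basis is orthonormal ...
assert np.isclose(np.vdot(heads, tails), 0)
assert np.isclose(np.vdot(heads, heads), 1)

# ... so the amplitudes can be recovered as inner products with the basis
# vectors, exactly the expansion formula derived earlier: v_i = <b_i|v>
assert np.isclose(np.vdot(heads, state), alpha_h)
assert np.isclose(np.vdot(tails, state), alpha_t)
```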
now this is a postulate. the postulate is that states of a system which are clearly distinguishable by a single experiment, and in this case the experiment would simply be to look, is it heads or tails, one experiment, if by a simple experiment you can uniquely distinguish the two configurations, then the vectors that go along with those two configurations are orthogonal. so orthogonal vectors correspond to configurations for which there is an experiment, a clean simple experiment, that can tell you whether it's one or the other. that's the meaning of orthogonality: distinguishable in the sense that there's an experiment that can distinguish them. good, that's number one. so we assume that h and t are orthogonal to each other, which means that h and t have zero inner product. another rule, which is now somewhat arbitrary but we can impose it: we can also choose the vectors h and t to be of unit length, so let's also choose h h equals t t equals one. in other words the two vectors h and t form a basis, a basis identified with the distinguishable states of the system, states which can be distinguished by a clean experiment. we identify the different configurations that we can measure with a basis in the vector space. nothing special about two vectors: another simple system would be the die, die as in dice. we throw a die and it comes up with anything from one to six; say it comes up 3 on the table. we identify this with the state, let's now just call it 3. this doesn't mean the length of the vector is three, it just means the vector is called three. there are six of them, six distinct states, one up to six. here the state vector space was two dimensional, two basis vectors; here the state vector space is six dimensional, six orthogonal vectors, all mutually orthogonal, i'm not going to write that down, mutually orthogonal and of unit length. in other words the six vectors correspond to the six distinct configurations of the die, distinguishable configurations that you can tell the difference between in a single experiment, just one experiment, you just look at it and you see which face is up, and that determines for you which of the six vectors describes the particular configuration. now i haven't told you the physical meaning of adding these vectors together, i just told you that it makes sense in quantum mechanics to do something you would never do in classical mechanics, and that's add these two configurations to find yet some new configuration which is different than either of these, h and t. that's what's new in quantum mechanics: if you had a single coin which whenever you measured it was either heads or tails, there are additional possibilities which are somehow mixtures of heads and tails in a way that doesn't happen in classical physics. the same thing is true here for the die: you can have, let's give them coefficients again, alpha 1 times 1 plus alpha 2 times 2, dot dot dot, up to alpha 6 times 6.
this is some state of the die which doesn't have a classical analog well what does it mean we're going to eventually get around to exactly what it means but let me tell you some properties of such a state let's begin with heads and tails whenever i look at a coin and check whether it's heads or tails i find it's either heads or tails except in the extremely unlucky or maybe lucky situation where it stands on edge and i can't even get it to do that so if i check whether it's heads or tails it will be one or the other all right so if i check it's one or the other then what kind of thing is alpha h heads plus alpha t tails it is a state and it's a state which if you measure the heads or tailsness of it will have a probability of being heads and a probability of being tails and the probabilities for heads and tails will be related to alpha h and alpha t all right so this is some state in which when we look at the penny it has a probability for being heads and a probability for being tails remember quantum mechanics is probabilistic in a way that classical physics is not all right so what is the probability for heads let's call it p sub h that's the probability for heads it has to be positive it has to be real and these alphas are arbitrary complex numbers so what can it be it's just alpha h star alpha h that's a postulate of quantum mechanics and of course the success of this postulate is an experimental fact but it's a postulate of quantum mechanics if we take a sort of axiomatic view of it that the probability for heads in this case would be alpha h star alpha h which is a positive real number and what about t the probability for tails would be alpha t star alpha t well that tells us one more constraint on these vectors describing physical states we do expect the probability for h plus the probability for t to add up to one so we expect then that alpha h star alpha h plus alpha t star alpha t is equal to one let's give this vector a name i don't know what to call it anybody have a suggestion somebody suggests p why p well this isn't the probability this is some vector here are the probabilities i don't want to call a vector by a name which stands for probability let's just call it cc for confused coin the coin doesn't know whether it's heads or tails it's got some probability amplitudes these alphas are called probability amplitudes they're complex numbers they can be positive negative or imaginary whatever you like and when you add these together you don't get a coin you get a state of a coin in which somehow it doesn't know or if you measure it you will find with probability alpha h star alpha h that it's heads and similarly for tails all right this object here alpha h star alpha h plus alpha t star alpha t is just the inner product of the vector cc with itself cc does not stand for cosmological constant and it doesn't stand for what you do when you send an email message and carbon copy it it just stands for confused coin all right any properly defined state of the confused coin should have unit length should be normalized that's another postulate of quantum mechanics that the properly defined states of a system are normalized have unit length in the sense that their inner product with themselves is equal to one that's the statement that all probabilities should add up to one
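[editor's note: a minimal sketch, not from the lecture, of the probability postulate just stated: each probability is the amplitude times its complex conjugate, and for a properly normalized state the probabilities add up to one; the amplitude values below are made-up examples]

import numpy as np

alpha_h = (1 + 1j) / 2        # an arbitrary complex probability amplitude
alpha_t = 1 / np.sqrt(2)      # another arbitrary amplitude

p_heads = (np.conj(alpha_h) * alpha_h).real   # alpha_h star alpha_h = 0.5
p_tails = (np.conj(alpha_t) * alpha_t).real   # alpha_t star alpha_t = 0.5

assert p_heads >= 0 and p_tails >= 0          # probabilities are real and positive
assert np.isclose(p_heads + p_tails, 1.0)     # this state is normalized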
likewise for the die we have alpha 1 alpha 1 star plus alpha 2 alpha 2 star and so on equal to 1 and if we call the state i guess we should call it cd for confused die then its inner product with itself is also equal to one so that's another postulate of quantum mechanics the first postulate is that states which are distinguishable clearly physically different because you can do a measurement to distinguish them correspond to orthogonal vectors the second statement is that all of the linear superpositions which correspond to physically sensible states should themselves be of unit length the basis vectors are of unit length but we should also only consider states whose total normalization is equal to one those are postulates of quantum mechanics yes that is a postulate of quantum mechanics if you like that's the meaning of the alphas but it's not the total meaning of the alphas there's more to the alphas than just probabilities as i said it's a postulate of quantum mechanics that the states of a system form a vector space if they form a vector space we can add them and then we have to know what the physical meaning is of a superposition of states like that i'm not telling you the full meaning i'm telling you now one feature of such states we will learn more about these and we'll get familiar with the idea but for the moment one of the postulates is that the probabilities are simply the squares of the absolute values of these probability amplitudes it's only possible to justify these things by showing you how they work showing you the ins and outs of it and how they lead to a consistent picture it's best at this point to accept it i think the words are clear if this is the state of a confused coin and somebody somehow created a coin confused in this exact fashion what does it mean for the probabilities to be this what it really means is that you have a lot of these confused coins all created the same way let's be very specific now we have a lot of coins a million of them and we have a lot of coin tossers and they all toss the coin or they all do the experiment whatever that experiment is it's not really tossing coins there's something else that we're going to do but we do some experiment starting with all the coins the same let's say we take all the coins and put them heads and then we do something to each coin whatever it is we put it in a magnetic field we turn the magnetic field we do various things to it not just with one coin but with a whole bunch of them each one dealt with identically to the others a zillion of them however many we need in order to be able to use statistics and probability in a sensible way then after having done this we look at the coins i look at the first one heads i look at the second one tails i look at the third one heads tails tails heads tails heads and so forth all right from the frequency of heads and tails i can experimentally determine the probability for heads and tails if i have equal numbers of heads and tails the probability is a half if three quarters of them are heads and one quarter are tails and i have enough of them then i say the probability is three quarters heads one quarter tails and so forth
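[editor's note: a small sketch, not from the lecture, of the frequency interpretation just described: prepare a large number of identically prepared coins whose amplitudes give probability three quarters for heads, simulate measuring each one, and the observed frequency of heads approaches three quarters; all numbers are illustrative]

import numpy as np

alpha_h = np.sqrt(3) / 2          # amplitude whose squared modulus is 3/4
alpha_t = 1j / 2                  # amplitude whose squared modulus is 1/4

p_heads = abs(alpha_h) ** 2       # 0.75

rng = np.random.default_rng(0)
n_coins = 1_000_000               # a "zillion" identically prepared coins
is_heads = rng.random(n_coins) < p_heads   # simulate one look at each coin

print(is_heads.mean())            # observed frequency of heads, close to 0.75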
the postulate is that the probability for heads and tails is given by these coefficients here that's a postulate and you can't ask why a postulate is true of course you can ask what the experiments were which led to this weird guess but we only have ten weeks and we've already gone through two of them yes does that mean that alpha h and alpha t would be equal to one over the square root of two well that would be the case of equal probabilities but remember they can be complex numbers it doesn't have to be 1 over the square root of 2 it has to be a number of length 1 times 1 over the square root of 2 but you've got the right idea remember they can be complex numbers and there are complex numbers whose length is one any complex number which lies on the unit circle is a number whose length is one z star z is equal to one right so if i choose these numbers to be unit numbers divided by the square root of two then we would have the probabilities adding up to one with each probability being one half so yes you've got the right idea okay but do keep in mind that in general these are complex numbers for the die we would put a 1 over the square root of 6 in front of each term to do the same thing
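[editor's note: a minimal sketch, not from the lecture, of the point about phases: any amplitude of the form of a unit-modulus complex number divided by the square root of two gives probability one half, because only the squared modulus enters the probability; the particular phase chosen here is arbitrary]

import numpy as np

phase = np.exp(1j * 0.7)                          # any complex number on the unit circle
assert np.isclose(np.conj(phase) * phase, 1.0)    # z star z = 1

alpha_h = phase / np.sqrt(2)                      # a unit number divided by sqrt(2)
alpha_t = 1 / np.sqrt(2)

p_heads = abs(alpha_h) ** 2                       # 0.5, independent of the phase
p_tails = abs(alpha_t) ** 2                       # 0.5
assert np.isclose(p_heads + p_tails, 1.0)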
now what i haven't told you is how to do this experimentally it's pretty obvious in classical physics how you make a heads or a tails you just flip a coin and slap it down and see whether it's heads or tails but how do you construct for a coin one of these weird states which is half heads and half tails i haven't told you how to do that experimentally we're going to come back to that we're not going to deal with coins of course we're going to deal with electrons but for now we want to get some formal ideas we want to get to the quantum mechanics as fast as possible and to get to it as fast as possible it's best to just tell you what the postulates are and show you how they work and then come back and say how the hell did anybody figure this out where did they come from what were the experiments that led to it the orthogonality of those states which correspond to measurably different configurations in a single measurement is it a necessity well you'd run into bad things if you didn't do it the quantum mechanics would fall on its face very quickly but let's do it and then we can come back and ask what would go wrong if we didn't do it given the shortness of the time that we have available i think the best way is to give you a bunch of postulates show you how they work and then come back and say what would happen if we did something else rather than to try to derive these ideas in fact even if i had 30 weeks i would probably do it the same way well i actually think i'm finished with the things in my notes ah no i'm not i'm not at all finished with the things in my notes but have i saturated you thoroughly go on a little bit all right the next subject a mathematical subject before we go on to its physical interpretation is the concept of operators operators are operations that you do on vectors let me give you some examples from ordinary vector spaces from the kind of vectors that you draw on the blackboard there are operators and there are operations operators are a special case of operations or really strictly speaking one should speak about linear operators linear operators are a special case of operations that you can do on vectors here's an operation that you can do on vectors i'll just draw some axes incidentally when i speak of operations you can do on a vector i mean operations that you do on all the vectors in the vector space all right so here's an operation i could do i could rotate all of the vectors by the same angle every vector in the vector space rotated by 90 degrees that would mean that this vector would go to this vector i wish i had some more colors but the other colors don't show up very well so i'll use wiggly lines this vector would go to this vector and so forth rotation by any angle is clearly an operation that i can do on the vector space it is also a linear operator and we'll define linear operators soon enough let me give you another example i could keep the direction of the vectors fixed and double the length of every vector that's an operation it would take every vector and simply double it in fact that's just multiplying it by a number so multiplying vectors by numbers is also an operation and also a linear operator but we'll come to linear operators soon enough take every vector and reflect it about the x-axis so take this vector and replace it by this vector that's an operation that you can do on the vector space every vector goes to some unique new vector that's also a linear operator let me give you an example of an operation which is not a linear operation take a vector and square its length if its length is two replace it by a vector of length four if its length is three replace it by a vector of length nine that's an operation but it's not a linear operation
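[editor's note: a minimal sketch, not from the lecture, of the operator examples just given and of the two linearity properties spelled out in what follows: on ordinary two-dimensional vectors, rotation by 90 degrees, doubling, and reflection about the x-axis can be written as matrices and each satisfies both properties, while squaring the length does not; the particular vectors and the scalar are arbitrary choices]

import numpy as np

rotate_90 = np.array([[0.0, -1.0],
                      [1.0,  0.0]])      # rotate every vector by 90 degrees
double    = 2.0 * np.eye(2)              # double the length of every vector
reflect_x = np.array([[1.0,  0.0],
                      [0.0, -1.0]])      # reflect every vector about the x-axis

def square_length(v):
    # not linear: keep the direction but make the new length the square of the old one
    return v * np.linalg.norm(v)

a = np.array([1.0, 2.0])
b = np.array([-3.0, 0.5])
alpha = 2.0

for L in (rotate_90, double, reflect_x):
    assert np.allclose(L @ (alpha * a), alpha * (L @ a))   # L(alpha a) = alpha L(a)
    assert np.allclose(L @ (a + b), (L @ a) + (L @ b))     # L(a + b) = L(a) + L(b)

# squaring the length fails the first property, so it is not a linear operator
assert not np.allclose(square_length(alpha * a), alpha * square_length(a))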
so what does the notion of a linear operator mean what does it entail okay two things and we've already used these things over and over but let me spell it out i'll represent linear operators by letters with little hats on top of them that's a somewhat standard notation often operators are represented by boldface but i can't do boldface on the blackboard so i'll use the little hat notation an operator is indicated by a hat on top of it this is not a bra symbol turned on its side or a ket symbol turned on its side it's just a little hat on top indicating a linear operator a linear operator has the property that if it acts on any vector it gives another vector a kind of image vector if it acts on a vector which is alpha times a remember vectors are things you can multiply by numbers multiply the vector a by a number and then apply the linear operator the answer is equal to alpha times the linear operator acting on a in other words if you take twice a vector and apply the linear operator it just gives you twice what the linear operator gives on the original vector was that clear i hope so okay so that's one property of linear operators that if you multiply a vector by a number and then operate with the linear operator it just gives you the number times the linear operator on the original vector that's number one and number two a linear operator acting on the sum of two vectors just gives you the sum of the action on each of the vectors separately it gives you l times a plus l times b times being used in the sense of operating on let's just check that for some simple operations if my operation is rotation by 90 degrees let that be the operation for the moment is it a linear operator we want to check it well the first question is supposing i take a vector and multiply it by a number let's say double it do i just get twice what i would get if i rotated the original vector so here's the original vector here's twice the original vector the operation is rotation it rotates every vector by 90 degrees the result of rotating twice a is the same as twice the result of rotating a that's obvious all right so it satisfies the first rule that if i rotate alpha times a vector i just get alpha times the rotated vector what if i rotate the sum of two vectors here are two ordinary vectors a and b the sum of a and b is just the resultant like that if i rotate a and b the whole parallelogram rotates a and b will rotate and so will the sum of a and b and so the rotation of the sum of a and b is just the sum of the rotations of a and b and so it satisfies the second rule you can check that for yourself multiplying a vector by the number 2 is again a linear operation but squaring the length of a vector is not check that out and you'll discover that it's not a linear operator so there's the notion of linear operators and there's its definition linear operators are important they don't correspond to states they correspond to the quantities that you can measure they're called observables observables means the things that you can measure you can measure the heads-ness or tails-ness of the coin so the heads-ness or tails-ness of the coin is represented by a linear operator i think we have to finish now we'll come back to this next time and i'll explain to you how the observables that you measure are identified as linear operators in quantum mechanics and then we can do some little examples and work out some predictions that would follow from this set of rules the set of rules seems very abstract but we're almost at the stage where we can make predictions about coins based on quantum mechanics by using this set of postulates we're fairly close to that by now the preceding program is copyrighted by stanford university please visit us at stanford.edu
Info
Channel: Stanford
Views: 429,974
Rating: 4.8234611 out of 5
Keywords: Physics, math, calculus, geometry, algebra, statistics, quantum, mechanics, uncertainty, principle, separation, complex, numbers, linear, vector, spaces, space, column, vectors, functions, variables, dual, basis, einstein, relativity
Id: KokditqpAJg
Length: 111min 4sec (6664 seconds)
Published: Thu Apr 10 2008