Lecture 3 | Quantum Entanglements, Part 1 (Stanford)

Captions
This program is brought to you by Stanford University. Please visit us at stanford.edu.

But first I want to make sure we're all on the same page with respect to some very elementary mathematics: complex numbers. Just a few important concepts, so I want to go through it very quickly, extremely quickly. A complex number has a real part and an imaginary part, and you can plot it on a two-dimensional surface. The horizontal axis represents the real part of the number, and the imaginary part of the number is plotted on the vertical axis. A point on this plane is a complex number: if the horizontal coordinate is x and the height is y, then the complex number is z = x + iy.

We can also represent it another way: in terms of a distance, which we can call r, and an angle, which we can call theta. From elementary trigonometry, the distance x is r times the cosine of theta, so x = r cos θ, and y = r sin θ. In other words, we can factor the r out and write z = r(cos θ + i sin θ). The combination cos θ + i sin θ is called e^{iθ}, e to the power iθ. So note that definition: e^{iθ} = cos θ + i sin θ. The reason we call it e^{iθ} is that it satisfies the rules of exponentials, and the rule of exponentials is that when you multiply exponentials, you add the exponents: e³ times e⁵ is e⁸; three and five is eight.

You can check, using elementary trigonometry, that if you have two angles θ and φ, and you write e^{iφ} = cos φ + i sin φ and multiply the two, you get something with a real and an imaginary part. Remember that i times i is minus one. The real part is cos θ cos φ − sin θ sin φ, and we know what that is: the cosine of θ + φ. That's a high-school trigonometry formula, the cosine of the sum of two angles—cosine cosine minus sine sine. Then the imaginary part is cos θ sin φ + cos φ sin θ, and that's just the sine of θ + φ. So by the definition e^{iθ} = cos θ + i sin θ, the product is just e^{i(θ+φ)}. From elementary trigonometry we discover that this combination cos θ + i sin θ has all of the properties of the exponential of iθ.

Now, all of trigonometry—anything you ever wanted to remember about trigonometry and couldn't—is stored in this formula. For example, if you can't remember what the cosine of the sum of two angles is, all you need to do is multiply e^{iθ} times e^{iφ} in this form, and you'll discover that cos(θ + φ) = cos θ cos φ − sin θ sin φ. All of trigonometry. I don't know why they take a year to teach trigonometry in high school, and then at the end of it they come here to Stanford and they don't know this formula.
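(Editor's aside, not part of the lecture: the exponential bookkeeping above is easy to check numerically. Here is a minimal sketch, assuming Python with numpy; the sample angles are arbitrary.)

```python
import numpy as np

theta, phi = 0.7, 1.3  # arbitrary sample angles

# Euler's formula: e^{i theta} = cos(theta) + i sin(theta)
assert np.isclose(np.exp(1j * theta), np.cos(theta) + 1j * np.sin(theta))

# Multiplying exponentials adds the exponents:
# e^{i theta} * e^{i phi} = e^{i (theta + phi)}
prod = np.exp(1j * theta) * np.exp(1j * phi)
assert np.isclose(prod, np.exp(1j * (theta + phi)))

# Reading off the real and imaginary parts recovers the
# angle-addition formulas from high-school trigonometry.
assert np.isclose(prod.real, np.cos(theta + phi))
assert np.isclose(prod.imag, np.sin(theta + phi))
```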
Okay, another fact, which we can check in two ways. Remember what the complex conjugate of a thing is: it's the same number, except with the imaginary part changing sign. It's the reflected number in the lower half plane—if the number is in the upper half plane, it's reflected into the lower half plane, or vice versa. The complex conjugate of e^{iθ} = cos θ + i sin θ is cos θ − i sin θ. But if you remember that minus the sine of θ is the same thing as the sine of minus θ, then you realize this is just e^{−iθ}. So you get the complex conjugate of e^{iθ} just by changing the sign of the θ—which makes sense, because changing the sign of θ is what happens when you rotate in the clockwise direction instead of the counterclockwise direction. Everybody caught that?

And what happens if you multiply these two? If you multiply e^{iθ} times e^{−iθ}, you get one. So the class of complex numbers of the form e^{iθ} are numbers which have the special property that when they multiply their complex conjugate, the result is 1. There's another way to say that: a number times its complex conjugate is just the sum of the squares of the real and imaginary parts, so the numbers of the form e^{iθ}—let's specialize to those—are the numbers for which x² + y² = 1. In other words, they're numbers on the unit circle, the circle of radius 1. They're sometimes called unitary numbers: numbers of the form e^{iθ}, lying on the unit circle, which times their complex conjugate give exactly 1. Every complex number is a unitary number times a positive real number, and the positive real number is just the radius—the distance to that point in the complex plane.

All right, that's all we need to know about complex numbers. There's nothing else—nothing else I can think of offhand that I want to make sure everybody knows. Ah—the other word for a number of the form e^{iθ} is a pure phase. The angle in here is called the phase angle, and the radius is called the modulus; the modulus means the size of it. So a number which has no r in front of it, or where r is one—a number on the unit circle—is called a pure phase, or just a phase. (Excuse me, I swallowed the chocolate chip wrong.) So when I refer to a pure phase, that means a number cos θ + i sin θ.
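(Editor's aside, not from the lecture: a small numpy sketch of the polar decomposition just described—every complex number is a pure phase times a positive real number. The value of z is an arbitrary example.)

```python
import numpy as np

z = 3.0 + 4.0j                 # an arbitrary complex number
r = abs(z)                     # the modulus, the R in z = R e^{i theta}
theta = np.angle(z)            # the phase angle
phase = np.exp(1j * theta)     # the unitary ("pure phase") part

# Every complex number is a pure phase times a positive real number.
assert np.isclose(z, r * phase)

# A pure phase times its complex conjugate is exactly 1,
# i.e. it lies on the unit circle: x^2 + y^2 = 1.
assert np.isclose(phase * np.conj(phase), 1.0)
```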
Okay, now the postulates of quantum mechanics. We want today to go through the basic postulates of quantum mechanics. What quantum mechanics is, is a calculus for calculating probabilities. After a while you get some intuition for it, and maybe you get some pictures of what's going on, but think of it as a calculus—in other words, a calculational procedure for calculating probabilities. Probabilities of what? Probabilities for different values of measurements. The things you measure are called observables. You prepare systems in certain states—not in California or New York, but in certain configurations that are called states—and those states are labeled, or described, by vectors in a complex vector space. We've called them A. You can add them, you can subtract them, you can multiply them by numbers, and they form a complex vector space. That's number one.

Number two: we normalize them. In the particular simple case that we studied, the two-level system, we could describe the state as alpha times the up state plus beta times the down state, where alpha and beta are complex numbers. The coefficients are such that α*α and β*β are the probabilities: if you were to measure up or down, you would measure up with probability α*α, and down with probability β*β. And just to make the total probability equal to one, we require that the coefficients satisfy α*α + β*β = 1. There's another way to write the same equation: the inner product of the state with itself is equal to 1, ⟨A|A⟩ = α*α + β*β = 1. This is the abstract way of writing that the total probability adds up to 1. So state vectors are normalized. Normalized means that the length of the vector, in some sense, is equal to 1, and altogether the probabilities sum to 1. That's the basic postulate for states.

This is an example of a two-dimensional space of states. Two-dimensional doesn't mean the world is two-dimensional; it just means that there are two independent possibilities: up and down, heads and tails. We discussed how you might measure whether a spin is up or down by putting it into a magnetic field and seeing if it emits a photon or not. So given an arbitrary state of an electron, we can measure its spin: if it gives off a photon, we say it's down, and if it doesn't give off a photon, we say it's up, and the probabilities for the two outcomes are just built from the coefficients here.

Now, those probabilities of course do not completely determine the complex coefficients. Why not? Well, if I multiply any complex number by a pure phase, it doesn't change its magnitude. If I take a complex number and multiply it by e^{iθ}, where θ is any angle, I get a new number whose α*α is exactly the same as that of the original number. So knowing the probabilities is not enough to completely determine what these coefficients are. There's more information in these coefficients than just the probabilities, and we're going to find out what that additional information is. And of course, if there are more than two possibilities—up, down, and sideways, or up, down, and chocolate chip—then we add another coefficient, gamma, times chip, where the basis states are orthonormal vectors.

We represented the states in a particular basis, the basis of up and down: up was just (1, 0) and down was (0, 1). They're orthogonal to each other, and they're normalized—the sum of the squares of the entries in each one is 1—so they're called orthonormal vectors: orthogonal and normalized. If we have a higher-dimensional vector space with more possibilities—chocolate chip—then we might have three vectors altogether, the third being (0, 0, 1), still orthogonal and normalized.
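(Editor's aside, not from the lecture: a minimal sketch of the normalization postulate in numpy. The starting coefficients are arbitrary; note that numpy's vdot conjugates its first argument, which is exactly the inner product being used here.)

```python
import numpy as np

# A two-level state |A> = alpha |up> + beta |down>, written in the
# basis up = (1, 0), down = (0, 1). The coefficients are arbitrary.
state = np.array([1.0 + 1.0j, 2.0 - 1.0j])

# Normalize so that <A|A> = 1.
state = state / np.sqrt(np.vdot(state, state).real)
assert np.isclose(np.vdot(state, state), 1.0)

# alpha* alpha and beta* beta are the probabilities of up and down.
p_up, p_down = np.abs(state) ** 2
print(p_up, p_down, p_up + p_down)   # the last number is 1.0
```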
These are up and down in the space of states. We're going to have to remember the difference between these abstract vectors, which represent states, and vectors in real space, because that's going to come up today. The ones in real space we'll call pointers rather than vectors, and we'll label them slightly differently.

All right, now what about the quantities that you measure? They are called observables. There may be more than one thing that you can measure about a system, and you can always reduce the results to real numbers. The results of a measurement are always a collection of real numbers. You may combine them together, for example, into a complex number, but the results of real measurements—pointers on a dial or whatever—are real numbers. So observables are things which, when you measure them, give you, among other things, real numbers as a result. They are represented by matrices, or operators. Strictly speaking, they're represented by linear operators, which for our purposes you can take to just be matrices—but a special class of matrices: the hermitian matrices. Hermitian—h-e-r-m-i-t-i-a-n—named for Hermite, who was a mathematician.

To define them, let's first define a couple of concepts about matrices. First of all, there's the concept of the transpose of a matrix. A matrix has diagonal elements and off-diagonal elements—and there's no reason it has to be two by two; let's take it to be three by three. There's M₁₁, M₂₂, and M₃₃ on the diagonal, and then there are the off-diagonal elements: here's M₁₂, and here's M₂₁, and so forth—they're all there. The transpose of a matrix is what you get by interchanging rows and columns. All it means is that you reflect the matrix about the diagonal: imagine it's written on a piece of paper and you flip the paper over, so that M₁₂ replaces M₂₁ and so forth. The transpose is written by putting a T up here: the diagonal elements are unchanged, but the off-diagonal elements interchange—M₁₂ becomes M₂₁ and vice versa, and likewise for the others. We can write that in the form (Mᵀ)ᵢⱼ = Mⱼᵢ: the ij element of the transpose is the ji element of the original matrix. Interchange of rows and columns—it's called the transpose.

The next concept is the hermitian conjugate—I'll write it out: hermitian conjugate. It's a kind of complex conjugation for matrices, and it involves two operations: the first is to transpose, and then you complex conjugate everything—all the elements, the diagonal elements as well as the off-diagonal elements. It's represented by a dagger. We write it by saying that the hermitian conjugate, the dagger of a matrix, is (M†)ᵢⱼ = M*ⱼᵢ: you interchange rows and columns, and then you complex conjugate.

All right, let's do an example. Here's a matrix: 4+2i and 9−2i on the diagonal, 7−i in the upper right, and 6+4i in the lower left. What's the dagger, the hermitian conjugate, of that matrix? Well, the diagonal elements don't move when you transpose—a diagonal element stays in the same place—but we have to complex conjugate it, so we get 4−2i.
Then we have to transpose, which means we interchange rows and columns, and then complex conjugate. So the 6+4i moves up here and becomes 6−4i; the 7−i moves down and becomes 7+i; and the other diagonal element stays where it is, but it gets complex conjugated: 9−2i becomes 9+2i. All right, so that's the idea of the hermitian conjugate: flip the rows and columns, and then complex conjugate.

Now, a hermitian matrix. First let's talk about symmetric matrices. Go back to the idea of the transpose without the complex conjugation—just the transpose. A symmetric matrix is one that is equal to its own transpose; in other words, when you transpose it, you get back the same thing. In order for that to be the case, all you need is that the reflected elements are equal to each other: M₂₁ has to be equal to M₁₂, and so on. That's the idea of a symmetric matrix, and it's exactly what it says: it's symmetric with respect to reflecting it about the main diagonal. A symmetric matrix satisfies Mᵢⱼ = Mⱼᵢ. A very simple concept.

A hermitian matrix is a little more complicated. A hermitian matrix is a matrix which is equal to its own hermitian conjugate. So if you have a matrix, and you hermitian-conjugate it, and it's equal to the original matrix, it's called hermitian. That's the definition. Let's write the formula for it: Mᵢⱼ = M*ⱼᵢ. What does it say? First of all, it says that the diagonal elements are real: if you interchange i and j on a diagonal element, you just get the same diagonal element back, so it says that M₁₁ = M₁₁*, which means that M₁₁ is real. So real numbers on the diagonal—not all the same real number; I mean any real number in each one of these places. And then the off-diagonal elements: suppose that an off-diagonal element over here is the complex number z; then the reflected off-diagonal element has to be the complex conjugate z*. Then it's called hermitian. So the matrix with 2 and 7 on the diagonal, and 1+i and 1−i off the diagonal, is hermitian: real elements on the diagonal, and complex conjugates when you reflect.

This is a kind of reality property. For a number, being its own complex conjugate says that it's real. For a matrix, the concept of reality is not that the matrix elements are all real—not that each matrix element separately is its own complex conjugate—but that each matrix element is the complex conjugate of the reflected matrix element. We're going to see very shortly why this is a good definition.
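(Editor's aside, not from the lecture: the dagger operation in numpy is just .conj().T, which makes the worked example above easy to reproduce.)

```python
import numpy as np

M = np.array([[4 + 2j, 7 - 1j],
              [6 + 4j, 9 - 2j]])   # the example matrix from the lecture

# Hermitian conjugate (dagger): transpose, then complex conjugate.
M_dagger = M.conj().T
print(M_dagger)
# [[4.-2.j 6.-4.j]
#  [7.+1.j 9.+2.j]]

# A hermitian matrix equals its own dagger: real diagonal elements,
# and reflected off-diagonal elements that are complex conjugates.
H = np.array([[2, 1 + 1j],
              [1 - 1j, 7]])
assert np.allclose(H, H.conj().T)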
Okay, I'm going to write down some theorems. These are your homework to prove. They're very, very simple—they don't have names because they're too simple—very elementary theorems. First of all, take two vectors A and B, and take the inner product of A with B. Remember that when you form the row vector from B, you use the complex-conjugate elements: ⟨B|A⟩ is the row (B₁*, B₂*) times the column (A₁, A₂). The first theorem is about what you get if you interchange A and B—put the A's here and the B's there. The way to think of it is that the row vector is like the complex conjugate of a certain column vector, and if you interchange the order, you're taking the complex conjugate of B and the complex conjugate of A; and a product of complex conjugates is the complex conjugate of the product. So the first elementary theorem to check is that the inner product of B with A is the complex conjugate of the inner product of A with B: ⟨B|A⟩ = ⟨A|B⟩*. That's extremely elementary; anybody who's seen it is of course very familiar with it.

Next statement—this one is less obvious. Take any matrix, hermitian or not hermitian. Now here's an operation that I haven't defined yet, but I'm going to define it for you right now; it's very simple. You have a matrix M and you multiply it by A. That gives you a new vector, M|A⟩. You take that new vector and take its inner product with B: ⟨B|M|A⟩. You can work out what this is in terms of the matrix elements of M and the entries of A and B; it's a very definite expression. First multiply A by the matrix—you're left with a new vector that you can call A′—and then take its inner product with B. This is related to, but not equal to, the expression with everything interchanged: put A on the left (that's a kind of complex conjugate of A), put B on the right, and put the hermitian conjugate of M in the middle. Everything has been complex conjugated, and all rows and columns have been interchanged: when you interchange A and B, you're interchanging a column vector with a row vector and complex conjugating, and the same for M—when you take its hermitian conjugate, you interchange rows and columns and you complex conjugate. Incidentally, this quantity is a number—a complex number in general. M on A gives you a vector, and the inner product of a vector with another vector is a number. A simple number: it's not a matrix, it's not a vector, it's just a number.

Now, the equation as I've written it is not right. What do I have to do? Complex conjugate it. I've complex conjugated A, complex conjugated B, and complex conjugated M, so I have to take the whole thing and complex conjugate it: ⟨B|M|A⟩ = ⟨A|M†|B⟩*. That's a simple fact about complex conjugation of matrices and vectors.

But now suppose our matrix is hermitian. If it is hermitian, I can erase the dagger, because a hermitian matrix is equal to its own hermitian conjugate—that's the definition of hermitian. So for hermitian matrices—and from now on let's think about hermitian matrices—this object is called a matrix element, and the BA matrix element of M is the same as the AB matrix element of M, complex conjugated: ⟨B|M|A⟩ = ⟨A|M|B⟩*.

Now, one last thing that will make the point for you. Suppose A and B happen to be the same vector. Let B equal A, and look what this says: the matrix element of M sandwiched between two states which are the same—the same state A on both sides—is equal to its own complex conjugate. What does that say about ⟨A|M|A⟩? That it's real. So if you have a hermitian matrix—this was a theorem that I was going to ask you to prove; I just proved it—we should give this object a name. It's called the expectation value of M.
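(Editor's aside, not from the lecture: all three of these homework theorems can be checked on random complex vectors and matrices. A minimal numpy sketch; the random seed and sizes are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=2) + 1j * rng.normal(size=2)
b = rng.normal(size=2) + 1j * rng.normal(size=2)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Theorem 1: <B|A> = <A|B>*   (np.vdot conjugates its first argument)
assert np.isclose(np.vdot(b, a), np.conj(np.vdot(a, b)))

# Theorem 2: <B|M|A> = <A|M^dagger|B>*
assert np.isclose(np.vdot(b, M @ a),
                  np.conj(np.vdot(a, M.conj().T @ b)))

# Theorem 3: for a hermitian matrix, the expectation value <A|H|A> is real.
H = M + M.conj().T            # any matrix plus its dagger is hermitian
assert np.isclose(np.vdot(a, H @ a).imag, 0.0)
```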
Why it's called the expectation value—whether it has anything to do with anybody's expectations—well, it just has a name: it's the expectation value of M in the state A. Now remember, just keep in mind: the way we're going to be describing states is through vectors; the way we describe observables is through matrices. You can think of ⟨A|M|A⟩ basically as the average value of M when the state of the system is A. (Yes—this notation comes from Dirac, of course.)

All right, so what have we learned? We've learned that if we have a hermitian matrix, its expectation value is real. We haven't gone through the details yet, but that sounds like a good thing: if we have an observable which, whenever you measure it, always gives you a real number, then obviously its average value should be real. And that's what we find: for a hermitian matrix, its expectation value in a given state is real—in any state; there are many, many states, and any one of them will satisfy this. That's what makes the hermitian matrices special, and what makes them natural candidates to have something to do with observable quantities. Okay, any questions up to now?

[Student question, paraphrased: can't measured quantities be complex?] Anything that you really measure is always a real number. Now, you could decide to describe locations of particles on the surface of the earth by giving a complex number—I'll give you an address in Manhattan, which is laid out on a grid, as a complex number—but you don't have to do that. You can represent any observable quantity as a collection of real numbers; you can always represent the results of an experiment by a series of real numbers, and those real numbers are called observables. When you add them together into complex combinations, then just because of the technical, historical definition, we don't call the combination an observable; the observables are the collection of real numbers.

[Student question, paraphrased: then what's a good way to think about complex numbers when they show up in physical formulas—do they exist if they're not observable?] I think the question you really want to ask is: why are we driven to use complex numbers in quantum mechanics? This will become clear, but it has to do with time and reversibility, and we're not there yet; we're not ready for it. We could have thought about doing quantum mechanics with all real numbers, never bringing in complex numbers, but we would miss some very important things, as you will find out. Let's write down the postulates, examine their consequences, and then I'll tell you what would go wrong if you restricted yourself to real numbers. It has to do in particular with the evolution of state vectors. What we have not talked about at all is how things change with time, and it's really in how things change with time that the i comes from. So hold on to the question, and it'll come back again. I realize it's a different question than you asked, but I think it's the right question: complex numbers are just a mathematical formalism for describing pairs of real numbers in certain ways; why we are driven to use them in quantum mechanics has to do with time evolution. So we will come to it. Good.
All right, we're still doing mathematics. We cannot do this subject without mathematics. I mean, I could fake it: I could tell you lots of stuff about quantum mechanics and give you all sorts of mystical experiences about entanglement, and when you walked out you would know no more about entanglement than you knew when you walked in—but you'd have this fuzzy feeling about it, and you'd say, "Ooh, isn't it wonderful, and isn't it mysterious," but you won't have any idea what it is. So if we're going to really do it honestly, we are forced to do this mathematics. We want to do better than that; we're going to really do it right, in the simplest possible context—basically the two-level system. Then we'll do two two-level systems, and then we'll talk about the entanglement of two two-level systems, and we'll understand it in some detail, and you'll know what the words really mean.

All right, the next concept—and it's basically the last mathematical concept that we'll need for today, and possibly for quite a while—is the concept of eigenvalues and eigenvectors, particularly eigenvalues and eigenvectors of hermitian operators. So what is an eigenvalue and an eigenvector? If I give you a matrix, or an operator, M, there may or may not be certain vectors which, when you apply the matrix to them, don't change—except to be multiplied by a number. Typically there will be, and in the case of hermitian matrices there always are, certain vectors associated with that matrix, called its eigenvectors (not every vector!), which have the property that the matrix just multiplies them by a number: M|A⟩ = λ|A⟩. The number depends on the vector, and it's called the eigenvalue. In general, λ will be called an eigenvalue even if it's complex—but if M is a hermitian matrix, then it follows that λ is real.

Let's prove that. Suppose we have a hermitian matrix, and we've found an eigenvector and an eigenvalue. Let's see what we learn by taking the inner product of this equation with A itself. Multiplying the equation on the left by ⟨A|, we get ⟨A|M|A⟩ = λ⟨A|A⟩—λ is a number, so it just comes on the outside. Now, if you go back in your notes about five and a half minutes, remember that we proved that for any hermitian matrix, ⟨A|M|A⟩—its expectation value—is real. We also know that the inner product of A with itself is real. So we have an equation saying that a real number is equal to a real number times the eigenvalue. That means the eigenvalue is real—or, another way to see it, divide by ⟨A|A⟩: the ratio of two real numbers is certainly real. So the first statement is: if we've managed to succeed in finding an eigenvector and an eigenvalue of a hermitian matrix, the eigenvalue will be real.

Now I'm going to tell you the secret. The secret is that if M is an observable, then the values of it that you can measure when you do an experiment are its eigenvalues. The result of an experiment to measure M will be one of the eigenvalues of the matrix M. That's the significance of eigenvalues: they're the possible measurable values of the observable that's represented by M.
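(Editor's aside, not from the lecture: the reality of hermitian eigenvalues is easy to see numerically. A sketch assuming numpy; the random matrix is arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = X + X.conj().T            # any matrix plus its dagger is hermitian

# The eigenvalues of a hermitian matrix are real, even though
# the matrix itself is full of complex entries.
eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues)            # three real numbers
assert np.allclose(eigenvalues.imag, 0.0)
```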
The eigenvalues are the values that you measure, and the eigenvector associated with a particular eigenvalue represents a state in which, when you measure the quantity M, you get the value λ with certainty. So the eigenvalues and the eigenvectors go together: for any given observable, there are certain states in which, when you measure that particular observable, you will get a particular answer with probability 1, and that answer is the eigenvalue.

So let's do an example or two, and the easiest examples involve diagonal matrices. Here's a diagonal matrix—let's just give it some letters: M₁₁ and M₂₂ on the diagonal, zeros off the diagonal; two-by-two is good enough for us. What are its eigenvectors? Remember, they're vectors which, when you multiply them by the matrix, give you back the original vector times a number. Okay, I'll just tell you the answer: there are two eigenvectors of this matrix. One of them has a 1 in the upper place: (1, 0). Let's check what this matrix does when it multiplies this vector. We take the first row and multiply it by the column: that puts M₁₁ in the upper place. How about the lower place? 0 times 1 plus M₂₂ times 0 is 0. But this result is just M₁₁ times (1, 0). So you see what's happened: the matrix times the vector gives you a number times the original vector. It doesn't change the vector; it just gives you back the original vector times a numerical factor, and in this case that number happens to be M₁₁. So the vector (1, 0) is an eigenvector of this matrix, with eigenvalue M₁₁.

There's another eigenvector, and the other eigenvector is to put a 0 here and a 1 here. Let's see what we get. In the top place we get M₁₁ times 0 plus 0 times 1, which is 0; and in the bottom we get 0 times 0 plus M₂₂ times 1, which is M₂₂. And that's just M₂₂ times (0, 1). So I've found another eigenvector—matrix times vector equals number times the same vector—and this time the eigenvalue is M₂₂. So the two vectors (1, 0) and (0, 1) are both eigenvectors, but with two different eigenvalues, M₁₁ and M₂₂.

You can see where this is going. For example, if I'm interested in the up-down spin of an electron, I might describe it by a matrix with a 1 and a −1 on the diagonal. Then the eigenvalues would just be 1 and −1, and the eigenstates would be (1, 0) and (0, 1). So the state with the electron up is an eigenvector of a certain observable, represented by a matrix, with eigenvalue +1; and the other eigenvector is (0, 1), which the matrix sends to minus (0, 1)—the minus comes from the −1 on the diagonal. So the two states of the electron, up and down, are eigenvectors of the spin operator, which is usually called σ₃. This is the definition of σ₃: 1 and −1 on the diagonal, zeros off the diagonal. And you see the pattern: the value that you measure, either +1 or −1, is the eigenvalue, and the eigenvector is the state in which the measured quantity takes that eigenvalue with certainty. Any questions? (And yes—these eigenvectors are normalized; the rule is we normalize vectors.)

Incidentally, for 3-by-3, or n-by-n, matrices the same pattern holds: if the matrices are diagonal, then the diagonal entries are the eigenvalues, and the eigenvectors are just vectors with zeros everywhere except a 1 in one place. You can check that out for yourself—experiment around with diagonal matrices. Let me give you one other example, which is a little bit less trivial. It involves an off-diagonal matrix.
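(Editor's aside, not from the lecture: the diagonal example, specialized to σ₃, in a few lines of numpy.)

```python
import numpy as np

sigma3 = np.array([[1, 0],
                   [0, -1]])

up = np.array([1, 0])
down = np.array([0, 1])

# For a diagonal matrix, the basis vectors are the eigenvectors
# and the diagonal entries are the eigenvalues.
assert np.allclose(sigma3 @ up, +1 * up)      # eigenvalue +1
assert np.allclose(sigma3 @ down, -1 * down)  # eigenvalue -1
```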
These matrices, incidentally, will play an important role in what we're talking about, but for the time being they're just matrices. Here's σ₁: zeros on the diagonal, ones off the diagonal. It's hermitian—it has real diagonal elements (0 is a real number), and the off-diagonal elements are complex conjugates of each other (1 is its own complex conjugate). It also happens to be symmetric; it's both hermitian and symmetric. Now let's see if we can find the eigenvectors. Rather than do any algebra, I'm going to show you what they are, and then we'll see that they are eigenvectors.

The first one is (1, 1). Now, what's wrong with this eigenvector? It's too big—it's not normalized: one squared plus one squared is two. But here's a statement: if a vector is an eigenvector, it doesn't matter what its normalization is; if you double it or triple it or anything else, it's still an eigenvector. But if you wanted to normalize it, you would write (1/√2, 1/√2). It doesn't really matter for checking whether it's an eigenvector or not, but let's normalize it, just to remind ourselves to always normalize vectors. What is this matrix times this vector? We take the first row, (0, 1), and multiply it by the column: 0 times 1/√2 plus 1 times 1/√2 gives 1/√2—and likewise in the bottom. We get back exactly the same vector. So this is an eigenvector of this operator, with eigenvalue +1.

Can I find another eigenvector? Yes I can—the reason I can find it is because I know the answer, but it's not hard to find, and I can tell you later how you would go about finding them; right now I just want to illustrate the idea of eigenvectors. The other possibility is (1/√2, −1/√2): in other words, instead of having the same element here and here, opposite signs. Let's check it out. In the top: 0 times 1/√2 plus 1 times −1/√2 is −1/√2. And in the bottom: 1 times 1/√2 plus 0 is +1/√2. So we get (−1/√2, +1/√2), and that's the same thing as minus the vector we started with: we have a minus sign here and the opposite sign there. It is minus the original vector. So we've found another eigenvector, and this time the eigenvalue is −1. We've found two eigenvectors, with eigenvalue +1 and eigenvalue −1, and I defy you to find any other eigenvector. There are none—that's a theorem—no more eigenvalues or eigenvectors. Well, right: you can always multiply an eigenvector by a number and it's still an eigenvector, but there are no more which are not just numerical multiples of the same eigenvectors.

All right, I'm now going to prove a fundamental theorem of quantum mechanics—very fundamental—about hermitian matrices. (Am I going too fast? If I am, yell out. Say this one again? Okay.) σ₃, by definition, is the matrix with 1 and −1 on the diagonal and zeros off the diagonal, and it has eigenvalues +1 and −1. σ₁ is the matrix with ones off the diagonal and zeros on the diagonal. Let me show you something interesting about both of these. Does everybody know how to square a matrix—how to multiply a matrix by itself? The square of each one of these matrices is the same, and it's just the unit matrix. The square of σ₃ you get by just squaring the diagonal elements, and 1 squared and (−1) squared are—
—both plus 1. Now let's square σ₁: multiply (0, 1; 1, 0) by (0, 1; 1, 0). The element in the upper left-hand corner is the inner product of the first row with the first column: 0 times 0 plus 1 times 1... let me do it on the side. The diagonal entries come out 1 times 1 plus 0 times 0, which is 1, and the off-diagonal entries come out 1 times 0 plus 0 times 1, which is 0. So both σ₁ squared and σ₃ squared are equal to 1, the unit matrix—the matrix with ones on the diagonal, which is just called 1.

Now, that's kind of an interesting fact, and notice that it's also true of the eigenvalues of these matrices: they were both plus and minus one. If the square of a thing is 1, it stands to reason that the thing is either plus or minus 1. Well, for matrices—in particular 2-by-2 matrices like this, but matrices in general—the fact that σ₃² = σ₁² = 1 is not independent of the fact that the square of each of the eigenvalues is equal to 1. It means that the values you can measure are such that their square is equal to 1, and so you can only measure plus one or minus one.

There's one more matrix I'll write down, and it's called σ₂: zeros on the diagonal, and −i and i off the diagonal. We will come back to the sigma matrices—probably tonight. We found the eigenvectors of σ₁ and σ₃; we have not yet found the eigenvectors of σ₂, and I will tell you in due time what their significance is.

[A student offers a geometric interpretation, roughly: think of (1, 0) and (0, 1) as the unit vectors e₁ and e₂; the columns of a matrix are the images of the unit vectors. σ₁ interchanges e₁ and e₂—a reflection about the main diagonal—so the diagonal directions (1, ±1) are the fixed directions, which is why they're the eigenvectors. σ₃ keeps e₁ fixed and sends e₂ to its negative—a reflection about the x axis—so the vertical vector is an eigenvector with eigenvalue −1. And reflecting twice brings you back, which is why the squares are 1.] Right—it's not so completely obvious for σ₂, but the square of that one is also 1; they all have that property, and in that sense they're all similar. What that also proves is that the eigenvalues of σ₂ are also plus and minus one. It doesn't tell us what the eigenvectors are, but it tells us what the eigenvalues are.
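(Editor's aside, not from the lecture: all three sigma matrices, their squares, and their eigenvalues, checked in numpy.)

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

for sigma in (sigma1, sigma2, sigma3):
    # Each sigma squares to the unit matrix...
    assert np.allclose(sigma @ sigma, I)
    # ...and accordingly each has eigenvalues +1 and -1.
    assert np.allclose(np.linalg.eigvalsh(sigma), [-1, 1])
```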
All right, let me now prove the following interesting theorem—and this theorem is highly significant. Suppose you have an observable, and it has more than one eigenvector. In general, the number of eigenvectors it has is equal to the dimensionality of the matrix: if we have 3-by-3 hermitian matrices, they will typically have three eigenvalues and three eigenvectors; 4-by-4 will have four eigenvectors and four eigenvalues. But here's the theorem. If I have a hermitian matrix M, and it has two different eigenvalues—that means there's an eigenvector A with eigenvalue λ_A, and there's an eigenvector B with a different eigenvalue λ_B—then, if λ_A and λ_B are different, A and B will be orthogonal to each other.

Before I prove it, let me say what it means. Remember: the eigenvectors are the states in which you definitely measure the eigenvalue. A is the state in which, if you make a measurement of M, you will get λ_A; B is the state in which, if you measure M, you will definitely get λ_B. What the theorem says is that states which correspond to distinctly different measurable values of the same quantity are orthogonal. That's the basic idea: orthogonal states are measurably distinct, and you can distinguish between them by measuring appropriate observables. For example, we already have some examples: (1, 0) and (0, 1) were the two eigenvectors of σ₃, and notice that they're orthogonal, because they don't share any nonzero elements—this times this plus this times this is 0. The orthogonality is the thing that tells you that in a single experiment you can distinguish between them; you distinguish them by measuring the quantity represented by the hermitian operator that has them as eigenvectors. Too many words, but you get the idea.

Let me prove to you that if this is so, with two different lambdas, then A and B must be orthogonal to each other. We do this easily; it's not very hard, we just have to juggle a few definitions. First, multiply the first equation, M|A⟩ = λ_A|A⟩, on the left by ⟨B|: we get ⟨B|M|A⟩ = λ_A⟨B|A⟩. Do a similar thing with the second equation, multiplying both sides by ⟨A|: ⟨A|M|B⟩ = λ_B⟨A|B⟩. Now we want to compare these, so let's take the complex conjugate of one of the equations—which one should we choose? The second one. ⟨A|M|B⟩ complex conjugated is ⟨B|M†|A⟩—and since M is hermitian, we don't have to hermitian-conjugate it; it's just ⟨B|M|A⟩. On the right-hand side we get λ_B* times ⟨B|A⟩—and since the eigenvalues of a hermitian matrix are real, we can just throw away the star.
So now I have two equations. One of them says ⟨B|M|A⟩ = λ_A⟨B|A⟩; the other says ⟨B|M|A⟩ = λ_B⟨B|A⟩. It will not surprise you that there are two possible conclusions. One conclusion is that λ_A = λ_B—but I told you already, I'm talking about two eigenvalues which are known to be different; that's my assumption. Let's subtract the two equations: on the left-hand side we get 0, and on the right-hand side we get (λ_A − λ_B)⟨B|A⟩. If a product is equal to 0, one or the other factor must be equal to 0—the product of two things can only be 0 if one or the other factor is 0. So one possibility is that λ_A = λ_B; that's a true possibility, but I explicitly said, let us consider two different eigenvalues, two different values of the same observable. If they're different, then λ_A − λ_B is not 0, and the only possibility is that B and A are orthogonal to each other: the inner product ⟨B|A⟩ = 0. So it follows that for hermitian matrices, if I have two distinct eigenvalues, I can immediately conclude that the eigenvectors are orthogonal.

Let's check it out for the eigenvectors of sigma two—no, I'm sorry, sigma one is what I meant. Let's check it for σ₁. Notice that the entries here are real, so we don't have to complex conjugate anything. To take the inner product of two vectors all of whose entries are real, all we have to do is take the product of the first entries and add the product of the second entries. So we get 1/√2 times 1/√2, which is 1/2, and then 1/√2 times −1/√2, which is −1/2. 1/2 plus −1/2 is 0: this vector is orthogonal to this vector. So there's an example of the orthogonality of eigenvectors with different eigenvalues for hermitian matrices.

All right, as I said, the significance of this is deep. It says that whenever there is an observation you can do that will uniquely distinguish between two possible states of a system—two possible values of an observable—the states associated with them are orthogonal to each other. Or you can say it the other way: if two states are orthogonal, it means that there exists some measurement you can do which will uniquely tell you which one of the two is realized in your system. Okay, so that's one of the fundamental theorems of quantum mechanics, and let's take a little break.

The eigenvectors of σ₂, so that you can check them yourself: one of them is (1, i), with a 1/√2 stuck in front to normalize it, and the other is (1, −i), again with a 1/√2 in front. One of them has eigenvalue +1 and the other has eigenvalue −1. You can check that yourself; it's a little exercise—please do it.
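(Editor's aside, not from the lecture: here is the σ₂ exercise worked in numpy, along with the orthogonality check just done for σ₁.)

```python
import numpy as np

# Eigenvectors of sigma1 (from the lecture), eigenvalues +1 and -1:
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
assert np.isclose(np.vdot(minus, plus), 0)   # orthogonal, as proved

# Eigenvectors of sigma2, stated as an exercise: (1, i)/sqrt(2)
# with eigenvalue +1, and (1, -i)/sqrt(2) with eigenvalue -1.
sigma2 = np.array([[0, -1j], [1j, 0]])
v_plus = np.array([1, 1j]) / np.sqrt(2)
v_minus = np.array([1, -1j]) / np.sqrt(2)
assert np.allclose(sigma2 @ v_plus, +1 * v_plus)
assert np.allclose(sigma2 @ v_minus, -1 * v_minus)
assert np.isclose(np.vdot(v_minus, v_plus), 0)   # also orthogonal
```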
And you see that σ₁, σ₂, and σ₃ are all similar to each other: they have the same eigenvalues, +1 and −1, and they all square to give 1. They have another interesting property that we'll come to in a moment, I think—but let me tell you what their physical meaning is. They are observable quantities, having to do with the electron spin. As I told you the first time, or maybe the second time, the electron spin in classical physics is like a magnet: it has a north pole and a south pole, and it's a pointer—it points along an axis. It has what's called a magnetic moment, and that magnetic moment is—I hesitate to use the word vector, but it's the kind of vector that points in ordinary space; I guess we call it a pointer. The magnetic moment, or the spin—they're the same thing, apart from a numerical constant—is a pointer in space.

As such, it has components: this pointer has a component along the vertical axis, a component along one horizontal direction, and a component along the other horizontal direction, and those components characterize the pointer. They are measurable quantities for a magnet. You could decide to measure the x component of the magnetic moment, you could decide to measure the y component, or you could decide to measure the z component. You might think—and classically you'd be right—that you could measure all three of them simultaneously, and in that way figure out exactly where the magnetic moment is pointing. Let's save the question of whether you can measure all of them simultaneously for an electron—you can't; the answer is no—but you can measure any one of them: the x component, the y component, or the z component.

How do you do it? Suppose I want to measure the x component. The x axis is this way: I put the electron in a big magnetic field along x, and I check whether or not it emits a photon. If it emits a photon, then it had one value of the component; otherwise, the other. And the only possible answers are −1 and +1: in other words, you'll either see a photon or you won't. So the components of the electron spin have only two possible values—each of the components, the x component, the y component, and the z component—and what's more, we're going to see in a minute that the component along any old direction, if you measure it, will have only two possible values. Now, that's a little hard to think about, but let me tell you right now what σ₁, σ₂, and σ₃ are: they represent the observable values of the components of the electron spin along the three axes of ordinary space. I'll show you how that works, and how we can construct the component along any direction, in a moment.

But notice that they do have very similar properties—the same eigenvalues. If you measure the possible values you can get in an experiment for σ₁, you get +1 and −1; for σ₃, you get +1 and −1; for σ₂, you get +1 and −1. That's all you can ever get when you actually measure them, and we're going to see the same is true along any axis. But before we do, let me tell you the last—I think it's the last—postulate of quantum mechanics. Here it is: the probability interpretation.
Suppose you prepare a system in a particular state—let's call that state B. Somebody gave you an electron, or whatever it happens to be, in a particular quantum state B. Now, there is some observable, call it M, that has an eigenvalue λ_A—which means that if you measure M, you may get the value λ_A, or you may get some other eigenvalue; if you measure M, you get one of its eigenvalues—and the associated eigenvector is A. So: M is an observable, meaning a hermitian operator; λ_A is one of the possible values you can measure; and A is the associated eigenvector. B is just any old state—not particularly an eigenstate of M; it could be, or not—just any old arbitrary state.

Then we can ask the question: if the system is prepared in B, and we measure M, what's the probability that the answer will be λ_A? The answer is that the probability is given by the expression: the inner product of A with B, times its complex conjugate. The easy way to write the complex conjugate is ⟨B|A⟩, so P = ⟨A|B⟩⟨B|A⟩, which is just the same as multiplying ⟨A|B⟩ by its complex conjugate. Remember that when you multiply a number by its complex conjugate, you always get a real positive number. If A and B are normalized, it's an easy theorem to prove that this is less than or equal to 1: if A and B are unit length, then the inner product between them has a magnitude at most 1. So this number is always less than or equal to 1, and it is the probability. I'll state again what it is: it's the probability that, starting with a system prepared in the state B, a measurement of the measurable quantity M gives λ_A. (Sorry—let me say it again; I thought I said it the right way, but I probably said it wrong: the system is prepared in the state B, and then the probability that you measure λ_A is the inner product of B and A times its complex conjugate—a real positive number, given by the square, or what I call the square, meaning the thing times its complex conjugate.)

Notice it's sort of symmetric in A and B—it is symmetric in A and B—but for the moment, think of it asymmetrically: you prepare the system in B, you measure M, and you ask what the probability for λ_A is. That's the basic postulate of quantum mechanics—always assuming that all of your vectors are normalized: that your state B is normalized, and that the eigenvector A is normalized.

[Student question, paraphrased: if these were real vectors in a real two-dimensional vector space, the inner product would be the cosine of an angle; since they're complex, is this equal to what we mean by cosine?] What is true is that for ordinary three-dimensional real vectors, the dot product of two unit vectors is equal to the cosine of the angle between them. So it's kind of like the cosine of the angle, but it's not real—in general this won't be real—and that's why
we multiply it by its complex conjugate. [Another observation from the class.] Good—excellent: if B happens to be one of the eigenvectors of M, with a different eigenvalue, then A and B are orthogonal, and so what it says is that the probability that you get λ_A is zero. Perfect. If you prepare the system in one eigenvector and you measure, the probability that you get a different eigenvalue is zero. So that's part of the probabilistic interpretation here, and all it says is that if a quantity is definitely equal to one thing, it's surely not equal to the other thing—expressed in terms of inner products it's a little more complicated, but yes. We're going to do many, many such things, but this is the simplest of the things we could do, and at the moment this is our basic quantum postulate for probabilities.

Let's give an example. We prepare B = (1, 0)—which means, by the way, that σ₃ is equal to +1. But now we're not going to measure σ₃; we're going to measure σ₁, and we're going to ask: what's the probability that when we measure σ₁ we get +1, or −1, as the case may be? Here's the experiment: we prepare the system with the spin up, by putting it in a big magnetic field pointing upward, and then we measure σ₁, which is the component of the spin along the x axis. What's the probability that we get +1? What's the probability that we get −1? You can guess, but we'll work it out.

A is an eigenvector of σ₁ with eigenvalue, let's say, +1, so A = (1/√2, 1/√2). What's the inner product between these—quick, quick: 1/√2, right. And what's the square of it? 1/2. So ⟨A|B⟩ = 1/√2, and times its complex conjugate—that's usually written as the absolute value squared, which is the same as multiplying by the complex conjugate—is 1/2. So what this tells us is that if we prepare an electron in the up state and we measure the x component of spin, we have probability 1/2 of getting plus. And it will also be true that we have probability 1/2 of getting minus. To see that: what's the eigenvector of σ₁ associated with −1? We put a minus sign in one of the entries, and it doesn't change the answer—still 1/2.

So there's an example, and it's a characteristic example: you prepare a system—in this case with the spin vertically up—and you measure it in the horizontal direction. Let's do the opposite; we're going to get the same answer. This time we first prepare the electron horizontally—B is now the state with its x component positive—and we measure the vertical component. It's the same inner product; all we've done is interchange B and A. It's exactly the same thing: 1/2.
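(Editor's aside, not from the lecture: the probability postulate P = |⟨A|B⟩|², applied to the example just worked, in numpy.)

```python
import numpy as np

def probability(a, b):
    # P = <A|B> times its complex conjugate, i.e. |<A|B>|^2
    return abs(np.vdot(a, b)) ** 2

up = np.array([1, 0])                       # prepared state: sigma3 = +1
x_plus = np.array([1, 1]) / np.sqrt(2)      # eigenvector of sigma1, eigenvalue +1
x_minus = np.array([1, -1]) / np.sqrt(2)    # eigenvector of sigma1, eigenvalue -1

print(probability(x_plus, up))    # 0.5
print(probability(x_minus, up))   # 0.5
```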
Let's do a more complicated case. I've told you what the eigenvectors of σ₂ are, so let's ask the most complicated question we can ask: what's the probability if we prepare the state with σ₁ equal to +1, which means b = (1/√2, 1/√2), and this time we measure σ₂? Not σ₃ but σ₂. So this time a is one of the two eigenvectors of σ₂; let's take this one: a = (1/√2, i/√2). We start with the spin pointing along the x axis (that's an eigenvector of σ₁, along the 1 axis) and now we measure it along the 2 axis.

Okay, can somebody compute the inner product of these two? First of all we have to complex conjugate one of them; don't forget the complex conjugate. It's 1/√2 times 1/√2, which is 1/2, and then, because we've complex conjugated, minus i over √2 times 1/√2, which is 1/2 with a −i. So the inner product ⟨a|b⟩ is 1/2 − i/2. (It's a with b, or b with a; it doesn't matter for the probability.) Now, what's this times its own complex conjugate? We have to multiply it by 1/2 + i/2. Quick answer? The imaginary parts cancel: we have 1/2 times i/2 and minus 1/2 times i/2, so the imaginary parts cancel, and the real parts give 1/2 times 1/2 is 1/4, plus another 1/4, which is 1/2.

Okay, so we see that if we line up the spin in any direction with a strong magnetic field and then measure the spin in a perpendicular direction, in fact in any perpendicular direction, we have a probability of 1/2 that it's this way and 1/2 that it's that way. This is called spin polarization: for any spin state, if we start out with a strong magnetic field and freeze the spin into some direction, then remove the magnetic field and measure an orthogonal component of the spin, it will have equal probability of being up and down. What about the components of spin in some arbitrary direction? Let's discuss the components of spin along some other axis. Maybe it's a good time to stop now; no, let's do it, and I promise to do this again.

A question from the audience: in this case versus the classical case, if you have two vectors that are 90 degrees apart, whether vectors or pointers, we'd ordinarily say the inner product of one with the other equals zero. Good, okay; so we have to distinguish vectors from pointers. Here we have a situation where a pointer has been forced to point in some direction, and we measure its pointiness in a perpendicular direction. Or better yet: take the eigenvector which corresponds to pointing with +1 in the x direction and compare it with the eigenvector that corresponds to pointing in another direction. They're not orthogonal eigenvectors, not orthogonal vectors in this sense, and the inner product comes out to one half instead of zero. So what is the thing which has zero probability of pointing up like this? The one that's pointing down: the one pointing down, in this sense, is orthogonal to the one pointing up.
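The complex conjugation is the easy step to get wrong, so here is the same computation checked in numpy (again my own sketch, not from the lecture):

```python
import numpy as np

b = np.array([1, 1], dtype=complex) / np.sqrt(2)    # sigma_1 = +1 eigenvector
a = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # a sigma_2 = +1 eigenvector

amp = np.vdot(a, b)     # conjugates a, giving 1/2 - i/2
print(amp)              # (0.5-0.5j)
print(abs(amp) ** 2)    # 0.5: the imaginary parts cancel in the modulus squared
```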
So let's use the word "perpendicular" for ordinary space and fix a vocabulary, pointers versus vectors. Vectors mean states; pointers mean pointers. Orthogonality means states being orthogonal, being distinctly different from each other, measurably different; perpendicular means at 90 degrees in ordinary space. Okay, "independent" means something a little different. Independent doesn't mean perpendicular; it simply means not pointing in the same direction. For a set of vectors there's a concept of independence, linear independence, which is different from mutual orthogonality. So linear independence and orthogonality are quite different things.

Okay, now let's come to classical pointers. I'm interested now in the component of the pointer along an arbitrary direction in real space. What direction? Let's pick one. How do we pick a direction? We pick it by picking a set of components of a unit vector; a unit pointer, my goodness, a unit pointer (please correct me, don't allow me to do that). This is in real three-dimensional space, with axes 1, 2, 3, where 1 is the same as x, 2 the same as y, and 3 the same as z. So some arbitrary pointer points along that direction, and let's take it to be one unit in length. It's described by a set of three components, and the sum of the squares of the components has to add up to 1. Let's call this pointer n; a unit pointer is usually indicated by putting a little hat over it, n̂, which is the standard notation. Its components along x, y and z are n₁, n₂ and n₃: you add up this component, this component and this component, and you get the pointer pointing in some arbitrary direction.

Now, what about the component of the spin along the direction n? How do we calculate that? Let's think classically for a moment, about a classical magnetic moment, a classical spin. The dot product σ·n is the component of σ along the direction of n; that's what a dot product is, the component of one vector along the axis of another vector (sorry, pointer). What is σ·n? It's the first component, the x component, σ₁ times n₁, plus σ₂ times n₂, plus σ₃ times n₃.

Well, this gives us an interesting candidate in quantum mechanics for the operator which is the component of spin along a direction n: we take the operator σ₁ and multiply it by n₁, add to it the operator σ₂ times n₂, and add to it the operator σ₃ times n₃. That's a candidate, and it is the correct candidate, which we can call σ·n. But it's an operator. Why is it an operator? The n's are just numbers, numerical numbers that multiply here; the σ's are matrices. If we multiply a matrix by a number, we just multiply all the entries by that number, and then we add the three matrices up.
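Before working the matrix out by hand, here is the same construction in numpy, a minimal sketch of my own (the example pointer (0.6, 0, 0.8) is an arbitrary choice that satisfies the unit-length condition):

```python
import numpy as np

# The three sigma (Pauli) matrices
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([0.6, 0.0, 0.8])    # a unit pointer: 0.36 + 0.00 + 0.64 = 1

# The operator sigma . n = n1*sigma1 + n2*sigma2 + n3*sigma3
sigma_n = n[0] * sigma1 + n[1] * sigma2 + n[2] * sigma3
print(sigma_n)
print(np.allclose(sigma_n, sigma_n.conj().T))    # True: it is Hermitian
```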
We can write this down explicitly; given the three terms, let's add them up. n₁ times σ₁: σ₁ is the matrix with 1's off the diagonal, so multiplied by n₁ it has n₁'s off the diagonal. That's σ₁ times n₁. What about σ₂ times n₂? σ₂ is the one with −i and i off the diagonal, and we multiply it by n₂. And σ₃ times n₃: σ₃ has 1 and −1 on the diagonal, so that's just n₃ and −n₃ on the diagonal, with zeros off the diagonal. Now we add them up, and what do we get? On the diagonal the first two have no diagonal elements and this one does, so we get n₃ and −n₃; off the diagonal we get n₁ − i·n₂ and n₁ + i·n₂:

    σ·n = ( n₃          n₁ − i·n₂ )
          ( n₁ + i·n₂   −n₃       )

There are three components, n₁, n₂ and n₃, the sum of whose squares should equal 1 because it's a unit vector, and here is the operator that corresponds to measuring the component of spin along the direction n. So what's the experiment? We take an electron, we put it in a magnetic field pointing along the n direction, and we see what we get, up or down. This is the operator that corresponds to that. Is it Hermitian? Yes: the diagonal entries are real, and this off-diagonal entry is the complex conjugate of that one, so it's Hermitian. Is its square equal to 1, for any unit vector? I don't expect you to be able to see that offhand, so what I'm going to do is show you an important property of these σ matrices, one that you can check: I'll derive its importance and then allow you to check it by yourself.

Take σ₁n₁ + σ₂n₂ + σ₃n₃, leave it in this form rather than the matrix form, and square it. What do we get? Let's write it out: (σ₁n₁ + σ₂n₂ + σ₃n₃) times (σ₁n₁ + σ₂n₂ + σ₃n₃). Multiplying this out, we're going to get a whole slew of terms. The first set of terms are things like σ₁² times n₁². What's σ₁²? 1. So from the σ₁²n₁² term we get n₁², from the σ₂²n₂² term we get n₂², and from the σ₃²n₃² term we get n₃². And what's n₁² + n₂² + n₃²? That's 1. (Yes, each of these squares gives 1; you've checked that for these particular σ's, and we're about to use it.)

Now let's look at the thing which multiplies n₁ times n₂. It's going to contain σ₁σ₂, but there's another term like it, which is σ₂σ₁: you get one term with σ₁ on the left and σ₂ on the right, and another term with σ₂ on the left and σ₁ on the right. Why do I bother writing σ₁σ₂ and σ₂σ₁ instead of just calling them both σ₁σ₂? Because when you multiply matrices, the order counts. Matrices in general have different products when you multiply them in different orders; the technical term is that they don't commute. Then we have n₂n₃ times σ₂σ₃ + σ₃σ₂, and another similar term with n₁ times n₃. All right: the squared terms add up to 1, which means the unit matrix. But what are these cross terms doing here? This is bad; we don't want them there.
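To collect the expansion just described in one formula (an editorial summary, not verbatim from the lecture):

```latex
(\vec{\sigma}\cdot\hat{n})^2
  = n_1^2\sigma_1^2 + n_2^2\sigma_2^2 + n_3^2\sigma_3^2
  + n_1 n_2(\sigma_1\sigma_2 + \sigma_2\sigma_1)
  + n_2 n_3(\sigma_2\sigma_3 + \sigma_3\sigma_2)
  + n_1 n_3(\sigma_1\sigma_3 + \sigma_3\sigma_1)
  = (n_1^2 + n_2^2 + n_3^2)\,I \,+\, \text{cross terms}.
```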
We want the square of the component of the spin in any direction to be the same as in every other direction; there's nothing special about any of these directions. Well, try it out. Go home and prove to yourself that σ₁ times σ₂ is the negative of σ₂ times σ₁. They anticommute. This is a special property: anticommuting means that when you change the order, the product changes sign. This is something to check; sit down and do it numerically: σ₁σ₂ + σ₂σ₁ is 0, σ₂σ₃ + σ₃σ₂ is 0, and so on.

We have three minutes, so let's do one case for fun. I'll avoid imaginary numbers by doing σ₁ and σ₃. σ₁ times σ₃: writing the matrices row by row, that's (0, 1; 1, 0) times (1, 0; 0, −1). Let's see: the top-left entry is 0, the top-right is −1, the bottom-left is 1, and the bottom-right is 0, so we get (0, −1; 1, 0). Did I get that right? I think I did. Okay, now let's do it the other way: (1, 0; 0, −1) times (0, 1; 1, 0). In the first element, 1 times 0 plus 0 times 1 is 0 again; the top-right is 1 times 1, which is 1; the bottom-left is −1; and down in this corner we get 0. So that's (0, 1; −1, 0). Notice that when you multiply them in the opposite order you get exactly the same thing except for a minus sign. When you add σ₁σ₃ and σ₃σ₁ together you get 0, and the same is true for σ₁σ₂ with σ₂σ₁ and for σ₂σ₃ with σ₃σ₂. So the cross terms are all 0.

So what have we learned? We've learned that, with this particular choice of matrices, the square of the component of σ along any axis whatever is 1. That means the eigenvalues of such a matrix, any matrix of this form, are plus and minus 1 (where did I write it? I have it written down; here it is). We'll check this next time, but what it says is that the possible measurable values of the spin along any axis are plus or minus 1. This is a very weird thing in quantum mechanics: you can only get plus or minus 1 if you measure along any axis. But then take another axis: you can also only get plus or minus 1. It's the probabilities which reflect how the two axes are oriented relative to each other; we'll come to that next time. That's enough for this time; I've probably saturated you, or saturated myself. The preceding program is copyrighted by Stanford University. Please visit us at stanford.edu
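To close out the homework from the end of the lecture, here is a numpy check (my own sketch, not part of the transcript) that the three anticommutators vanish, and that as a consequence σ·n squares to the identity and has eigenvalues ±1 along an arbitrarily chosen unit pointer:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Anticommutators: sigma_i sigma_j + sigma_j sigma_i = 0 for i != j
for si, sj in [(sigma1, sigma2), (sigma2, sigma3), (sigma1, sigma3)]:
    print(np.allclose(si @ sj + sj @ si, 0))    # True, three times

# Hence (sigma . n)^2 = 1 for any unit pointer n, so the eigenvalues are +/-1
rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)    # normalize to unit length
sigma_n = n[0] * sigma1 + n[1] * sigma2 + n[2] * sigma3
print(np.allclose(sigma_n @ sigma_n, np.eye(2)))    # True
print(np.linalg.eigvalsh(sigma_n))                  # [-1.  1.]
```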
Info
Channel: Stanford
Views: 262,400
Rating: 4.8052301 out of 5
Keywords: Science, physics, math, theory, relativity, equation, formula, Leonard, Susskind
Id: CaTF4QZ94Fk
Length: 106min 54sec (6414 seconds)
Published: Wed Apr 23 2008