General Relativity Lecture 2

Okay, shall we begin? Any questions before I begin — quick ones?

As I was saying, a good notation will carry you a long way. When a notation is good, it sort of automatically tells you what to do next; it means you can do physics in an almost mindless way. It's like having Tinker Toys: it's pretty clear which end of the stick has to go where — it has to go into the thing with the hole. You can't put a hole into a hole or force a stick into a stick; there's only one thing you can do: put the stick into the hole, and then the other end of the stick can go into another hole, which gives you more holes to put sticks into. The notation of general relativity is much like that: if you follow the rules, you almost can't make mistakes. But you have to learn the rules, and the rules are the rules of tensor algebra and tensor analysis. The question we're aiming at now is to understand enough about tensor analysis and metrics to be able to distinguish a flat geometry from a non-flat geometry.

Now that seems awfully simple: flat means like the plane, non-flat means with bumps and lumps in it, and you would think we could tell the difference very easily. Sometimes it's not so easy. For example, as I mentioned last time, if I take this page, the page is flat. If I roll the page, it looks curved, but it's not curved; it's exactly the same page. The relationships between the parts of the page — the distances between the letters, at least measured along the page — don't change when I do various things with it. So a folded page, or — I don't want to call it curved, because curved would be the wrong word, and I don't want to say deformed either — let's call it curled, where you don't stretch it and you don't deform it: that does not introduce curvature into the space. Technically it introduces what's called extrinsic curvature. Extrinsic curvature has to do with the way a space — in this case the page — is embedded in a higher-dimensional space, the three-dimensional space of this room. When the page is laid out flat, it's embedded in the embedding space in one way; when it's curled, it's embedded in the same space in another way, and one says there is extrinsic curvature. "Extrinsic" refers to the external space: it has to do with how the page is embedded in that space, but nothing to do with the intrinsic geometry. If you like, you can think of the intrinsic geometry as the geometry seen by a tiny bug that moves along the surface and cannot look out of it, only along it. It crawls along the surface; it may have surveying instruments by which it can measure distances along the surface; it can draw triangles and measure the angles of those triangles within the surface, and do all kinds of interesting geometric studies. But it never looks out of the surface, and therefore it never detects the fact that the surface might be embedded in the higher-dimensional space in different ways. It just learns about the intrinsic geometry — the geometry that is independent of the way the surface is embedded. General relativity, and Riemannian geometry, is all about the intrinsic properties of a geometry, and it doesn't have to be two-dimensional.
It can be any number of dimensions. The basic thing that defines a geometry and distinguishes it from other geometries is this: imagine sprinkling a bunch of points — I don't want to ruin my page here; let's do it on the back — draw lines between them so as to triangulate the space, and then state what the distance is between every pair of neighboring points. Specifying those distances specifies the geometry. Sometimes that geometry can be flattened out on the page without changing the lengths of any of the little links. Let me give you an example. Suppose I draw a triangular lattice — it's built up out of triangles — and every triangle is an equilateral triangle: all the edges are equal. Imagine that the nodes are like hinges; the hinging would allow us to fold the thing, but as long as we keep all the lengths the same — all equilateral triangles — this can always be laid out flat on the desk. On the other hand, suppose we take several of the links — say all six of the ones coming into some point — and double their lengths. Then that point has no choice but to come up out of the blackboard, or into the blackboard: there's a bulge at that point, and you can't push the bulge back into the blackboard and flatten it out without changing the lengths of the links. A curved space is basically one which cannot be flattened out without distorting it, and that is an intrinsic property of the space, not extrinsic. (A small numerical version of this bulge test appears at the end of this passage.)

Okay, so our goal — which, as I said last time, closely matches the question of whether there is a real gravitational field present or whether the apparent gravitational field is just an artifact of funny space-time coordinates — is this: is a space really flat, or really curved, or have we merely presented the space in curved coordinates? Those are the same two mathematical problems: is there really a gravitational field there, or is it just an artifact of curved space-time coordinates? Is the top of this table really flat, even though it may look curved because I've introduced — I won't draw on the table — curved coordinates of various kinds? So the question is how you tell whether a space is really flat. What are you given? Typically you're given the metric tensor. But before we go into that question — and it's a hard question; we probably won't get completely to the point of answering it tonight —

Question: and I don't mean this facetiously, I mean it sincerely — what is "flat"? How do we define it?

We're going to define it; we have not defined it yet in any mathematical sense. I've tried to give you an intuition: it means you can flatten it out on the table, but of course that's not good enough. I will tell you exactly what it means — in fact I told you last time, but I'll tell you again: it means the metric tensor can be chosen to be just the Kronecker delta symbol. But we'll come to that. Before we do, we need to understand a little bit about tensors.
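Here is the promised numerical version of the bulge test — a minimal sketch of my own, not from the lecture. Around a vertex of the lattice, add up the triangle angles that meet there: a sum of 2π means the neighborhood can be laid out flat, while a shortfall (an angle deficit) means the vertex must pop out of the plane.

```python
import math

def vertex_angle(a, b, c):
    # Angle between the sides of lengths a and b, opposite the side of
    # length c (law of cosines).
    return math.acos((a*a + b*b - c*c) / (2*a*b))

# Six equilateral triangles around a point: each contributes pi/3,
# so the angles sum to exactly 2*pi and the deficit vanishes.
print(2*math.pi - 6*vertex_angle(1, 1, 1))   # ~0.0: lattice lies flat

# Double the six spokes meeting at the point (sides 2 and 2), keeping
# the outer ring of links at length 1, as in the lecture's example.
# The central angles shrink, the sum falls short of 2*pi, and the
# positive deficit means the point must bulge out of (or into) the plane.
print(2*math.pi - 6*vertex_angle(2, 2, 1))   # ~3.25 > 0: a bulge
```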
We've talked about them a little bit, but I want to formalize it tonight. Scalars and vectors are special cases of tensors; tensors are the general category of objects. They have indices, and — the most important thing — they transform when you go from one coordinate system to another. In particular, we're going to be interested in spaces with quantities on them — fields — so that at every point of space there may be some quantities associated with that point, and those quantities will be tensors. There will be other kinds of quantities too, which are not tensors, but in particular we're interested in tensor fields. A field is a thing that can vary from point to point.

The simplest kind of tensor is a scalar. A scalar S(x) is a quantity at each point of space whose value everybody, in all coordinate systems, agrees about. So the transformation property, in going from the x coordinates to the y coordinates — from x^m to y^m — is that the value of a scalar field at an actual point of space (or space-time, but we'll get to space-time) does not change. We can write that as

S'(y) = S(x).

The prime simply denotes that I'm talking about the quantity in the coordinate system y; without a prime it refers to the coordinate system x. There are two coordinate systems: the x coordinate system, with its lines, or surfaces, of constant x, where every point is labeled by a collection of x's. How many x's and y's? That depends on the dimension of the space. If the space is one-dimensional, a single coordinate labels where you are; if it's two-dimensional, two coordinates, x^1 and x^2, or y^1 and y^2; if it's three-dimensional, three coordinates, x^1, x^2, x^3, or y^1, y^2, y^3. Whatever the x coordinates are, the y coordinates are different, but we assume a correspondence: if you know the values of the x's for a particular point, then the y's — let's write them y^m, where m runs from one to however many — are known functions of the x's. Here x stands for the whole set x^1, x^2, x^3, x^4, however many there are. And we assume we can invert the relationship, so that if we know the y's of a point, we also know its x's. So that's a coordinate transformation of some kind, and it can be pretty complicated; we'll assume it's continuous, and that we can differentiate it when we need to, but nothing more special than that.

Okay, so scalars transform trivially: if you know the value of S at a point, you know it no matter what coordinate system you use. Next there are vectors, and there are two kinds of vectors. We spoke about them last time; now I'm going to tell you a little about the geometry, what it means. There are contravariant vectors, which have an index upstairs, and there are covariant vectors, with an index downstairs. I shouldn't really use the same symbol V for both, because at the moment they're not the same thing, but let's use it anyway. We'll write down the rules for transformation in a moment, but let me first tell you intuitively what the difference between covariant and contravariant means. Suppose for the moment we're located right at this point over here, and we have some coordinates; these could be the x
coordinates, and I'm going to draw them as straight lines for the moment, because right now I'm not interested in the fact that the coordinate directions curve and vary from place to place. I'm mostly interested in the fact that the coordinate axes may not be perpendicular, and in what the implication of non-perpendicularity is for these coordinates. Call this axis x^1 and this one x^2. They're not perpendicular to each other — they could be, but not necessarily. Furthermore, the actual physical distance — say measured in meters or centimeters or whatever units — between x^1 = 0 and x^1 = 1 is not necessarily one unit. Here's x^1 = 0, here's x^1 = 1, here's x^1 = 2, here's x^1 = 3, and so forth; here's x^2 = 1. The axes are neither perpendicular to each other, nor do these labels represent actual physical distance along the axes; they're just names of points.

Okay, now let's introduce some vectors — not necessarily two; here I've drawn a two-dimensional space, but there could be more axes. For each axis I will introduce a vector, which is basically a vector extending one unit of coordinate space along that particular axis. I'm going to give this vector a name: it extends from x^1 = 0 to x^1 = 1, and I'll call it e_1. The subscript 1 means it's along the x^1 axis — not x^2, not x^3. There's another vector along the x^2 axis, which I'll call e_2. These are vectors; think of them as physical little arrows, and of course if there's a third coordinate pointing out of the blackboard, there's an e for that too. We can label them e_i, and as i goes from 1 to 3, or 1 to 4, or whatever it is, they run over the various directions.

All right, now the next issue. Take an arbitrary vector — call it V — for example, one over here. V can be expanded as a sum of coefficients times the e's: the first term having e_1 with a coefficient we'll call V^1, plus V^2 e_2, plus V^3 e_3. The things that are vectors in this formula are the e's; the V's are numbers, but they are the components of the vector V, and they tell you how much of each kind of basis vector is present in the sum V^1 e_1 + V^2 e_2 + V^3 e_3, and so forth. These coefficients are called the contravariant components of the vector V. It's just a name; there's nothing in what I did that required me to put the index on the e downstairs and on the coefficient upstairs — it's a convention to write it in this form. So first of all, you see what the contravariant components are: they're the expansion coefficients, the numbers you have to put in front of the basis vectors to expand a given vector.

Next, let's define the projection of a given vector onto the axes. How do we do that? The definition is the ordinary dot product of the vector V with the vectors e_1 or e_2. Let's start with e_1.
If we were just using conventional Cartesian coordinates, perpendicular to each other, and if these really were unit vectors — if the distance representing each coordinate separation were one unit of whatever units we're dealing with — then these expansion coefficients would be the same as the dot products. However, when we have a peculiar coordinate system, with angles and with non-unit separations between successive coordinate lines, that is not true. So let's see if we can work out what V · e_1 is. Incidentally, V · e_1 is called V_1, with a covariant, downstairs index.

Notice how things fit together. We can also write the expansion as V = V^m e_m, using the Einstein summation convention: this means V^1 e_1 + V^2 e_2 + V^3 e_3. Notice I've concocted things so that an upper index always goes with a lower index — we're going to come back to that; it's called contraction of indices — but just notice that the summation always involves an upper index paired with a lower index. By definition, the V^m are the contravariant components of the vector V, and the covariant components are the dot products of the vector with the e's — we might as well call them by their name: these are the basis vectors. So let's see what we get. Plug in the expansion of V and take its dot product with e_1 — but why fix on e_1? Let's go for the general case right now, the dot product of V with the n-th basis vector:

V_n = V · e_n = V^m (e_m · e_n).

The object e_m · e_n is something new; let's isolate it. It has two lower indices, and it is the metric tensor. Let's go a little bit further to see its connection with the metric. The length of a vector comes from the dot product of the vector with itself, so let's calculate V · V and see the various ways we can write it. The first V is V^m e_m; the second V is V^n e_n — I have to use a different index for the second summation; I mustn't mix up the summation indices. This means (V^1 e_1 + V^2 e_2 + V^3 e_3) times (V^1 e_1 + V^2 e_2 + ...), and I can write it in the form V^m V^n times the quantity e_m · e_n.
Let's call that quantity e_m · e_n = g_mn, and now we have a formula:

V · V = g_mn V^m V^n.

This is the character of the metric tensor: the metric tensor tells you how to compute the length of a vector. The vector could be just a differential displacement, dx^m and dx^n, and then we would be computing the length of a little interval between two neighboring points. Okay — I put all this on the blackboard to give you a little picture of the difference between covariant and contravariant indices. Contravariant components are the things you use to construct a vector out of the basis vectors; covariant components are the dot products with the basis vectors. They're different geometric things, but they would be the same if we were talking about ordinary Cartesian coordinates. I inserted that discussion to give you some geometric idea of what covariant and contravariant mean, and also what the metric tensor is: the metric tensor is constructed out of the basis vectors by taking their dot products. But we'll come back to these things. (Yes — I assume this is what you asked me to do? It is, totally.)

Question: so in orthogonal coordinates the two sets of components turn out to be the same?

With orthogonal axes and unit separations between the coordinate lines, yes, the two sets of components are the same.

Question: and is that also true in curvilinear coordinates?

In curvilinear coordinates, typically, the coordinate directions — the direction along which one coordinate increases, keeping the others fixed — are not orthogonal to each other. But for some curvilinear coordinates they are. Would the components then be the same? Not quite, because there's also the question of the distance between neighboring coordinate lines. Cartesian coordinates mean not only that the axes are orthogonal, but also that the separations are all the same unit. Let me give you an example where the coordinate axes are orthogonal to each other but the separations are not the same: ordinary polar coordinates. The lines along which the radial coordinate increases are just the radial lines, and the lines associated with the angular coordinate are circles; clearly, at every point, a circle is perpendicular to the radial line passing through the same point. I didn't draw them very well, but these coordinates are orthogonal to each other. However, the separation between theta = 0 and theta = theta_1 near the center is not the same as the separation between the same two coordinate lines farther out. So these are not Cartesian coordinates, and according to the rough definition I gave, the e's along the theta direction would be increasing in length as you moved outward. It's more than just perpendicularity: perpendicularity, plus equality of the distance unit as you go a fixed separation in each coordinate, goes into defining Cartesian coordinates.
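Here's a small numerical sketch of the picture above — my own example, not from the lecture; all the numbers are arbitrary choices. It shows a non-orthonormal basis in the plane, the contravariant components as expansion coefficients, the covariant components as dot products, and the metric g_mn = e_m · e_n built from the basis vectors.

```python
import numpy as np

# Two non-orthogonal, non-unit basis vectors in the plane (my choice).
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.5])
E = np.column_stack([e1, e2])    # columns are the basis vectors

V = np.array([2.0, 3.0])         # an arbitrary vector, Cartesian components

# Contravariant components: expansion coefficients in V = V^m e_m,
# found by solving the linear system E @ V_up = V.
V_up = np.linalg.solve(E, V)

# Covariant components: dot products with the basis vectors, V_m = V . e_m.
V_down = np.array([V @ e1, V @ e2])

# Metric tensor from the basis: g_mn = e_m . e_n.
g = E.T @ E

# The squared length computed three ways agrees (13.0 each time):
print(V @ V)             # ordinary dot product with itself
print(V_up @ g @ V_up)   # g_mn V^m V^n
print(V_up @ V_down)     # V^m V_m
```

Note that V_up and V_down come out different here; they would coincide only if the basis were orthonormal, which is exactly the point made above about Cartesian coordinates.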
Okay, so now let's come to tensor analysis.

Question: excuse me — you use two different phrases. You talk about covariant and contravariant vectors, and you talk about covariant and contravariant components.

Oh yes. Before the metric tensor is introduced — when we just have abstract coordinates and vectors — there are covariant vectors and contravariant vectors, and they're different things: dx^m would be a contravariant vector, and dF/dx^m, where F is some scalar, would be a covariant vector. Different kinds of things, as we'll see. Once the metric tensor is introduced — and I've introduced it here by introducing these basis vectors — it's possible to make a correspondence between the covariant and the contravariant, in such a way that every vector can be represented either as covariant or as contravariant. But we'll come to that. At the moment I should only be talking about the covariant and contravariant components of a vector: in the language here, we had one vector, and we had its contravariant components and its covariant components. So think "components" — but I may sometimes lapse into saying "contravariant vector", by which I mean the contravariant components of a vector.

Okay, now: tensors are objects which are characterized — the thing which characterizes a tensor, and makes it a tensor, is the way it transforms under coordinate transformations. We talked about this a little bit; let's do it again. V is some vector; it has contravariant components in the y coordinates and in the x coordinates. If I change coordinates — keeping the vector fixed; I'm not going to change the vector, only the coordinates — I will clearly change its components. So how do the contravariant components change when I change coordinates? Here's the rule. This is the m-th component of V', where V' means the components of V in the y coordinates — prime means y, unprimed means x:

V'^m = (∂y^m/∂x^n) V^n.

V^n is the object in the unprimed coordinates — the x coordinates — and the primed version is gotten by multiplying by ∂y/∂x. An example: dy^m, the primed version of a little interval, is obviously

dy^m = (∂y^m/∂x^n) dx^n,

so this is the archetype of a contravariant vector component, and you can see that it transforms in exactly this way. A covariant component, on the other hand: if you have a scalar S and you differentiate it with respect to y^m, that's the primed covariant component of the gradient of S — it's the gradient of S, and I've differentiated with respect to y, not x, so it's a primed component — and it equals

∂S/∂y^m = (∂S/∂x^n)(∂x^n/∂y^m).

Notice the difference, and notice how the notation carries you along. On the left-hand side of the contravariant rule I have a y with an upstairs index m; on the right-hand side I also have a y with an upstairs index m, and of course there's a dx^n — a dx^n upstairs has to be balanced by a dx^n downstairs. You'll get familiar with this: you're allowed to contract indices, one upstairs and one downstairs, and you can see the pattern, the symmetry of the relationship.
Likewise here: on the left-hand side we have a lower index m corresponding to y; on the right-hand side there's a dy^m below the fraction bar, and the dx^n above the fraction bar is compensated by another dx^n downstairs; and the dS matches on both sides. So the notation pretty much carries you along. This is the standard form for the transformation property of covariant components, and dy^m is the standard example of a contravariant one.

Question: from these two examples, looking just at the dimensions, it seems like a contravariant vector is one whose units are of distance, while the covariant ones are of one-over-distance. Is this true in general?

Not in general — but I take your point. No, it's not true in general: V could, for example, be a velocity, in which case it would be a length per unit time, or it could be the components of a force, in which case it wouldn't be either of those. But units have to match: here you could have, say, a force on the right and a force on the left, with the ∂y/∂x dimensionless — perhaps both coordinates are measured in meters.

Question: should those all be partial derivatives?

They're all partial derivatives. They're partial because there are several coordinates — x^1, x^2, x^3, x^4 — and this is the derivative of one of the y's with respect to one of the x's, keeping all the other x's fixed.

All right, so that's the transformation property of a covariant object, and the corresponding general rule for any covariant vector — I'm just using W and V to give different letters for things — follows the same pattern:

W'_m = (∂x^n/∂y^m) W_n.

Again the ∂x by ∂y, and so forth. When you look at this, you say: there's a lower index of type y. Why y? Because prime means y, so prime is associated with y. There's a lower index m of the y type, so there's got to be a lower index of type m below the fraction bar on the other side. The equation balances, and you get a feel for these things after a while. Good.
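Here is a small numerical check of these two rules — my own example, not from the lecture: Cartesian coordinates to polar coordinates in the plane. A little displacement transforms with ∂y/∂x, the gradient of a scalar transforms with ∂x/∂y, and pairing one of each gives a number both coordinate systems agree on.

```python
import numpy as np

# A point in Cartesian coordinates x = (x1, x2); polar y = (r, theta).
x1, x2 = 1.0, 1.0
r = np.hypot(x1, x2)

# Jacobian dy^m/dx^n for r = sqrt(x1^2+x2^2), theta = atan2(x2, x1),
# and its inverse dx^n/dy^m.
dy_dx = np.array([[x1/r,     x2/r],      # dr/dx1,     dr/dx2
                  [-x2/r**2, x1/r**2]])  # dtheta/dx1, dtheta/dx2
dx_dy = np.linalg.inv(dy_dx)

# Contravariant: a small displacement dx^n maps with dy^m/dx^n.
dx = np.array([0.01, 0.02])
dy = dy_dx @ dx                          # dy^m = (dy^m/dx^n) dx^n

# Covariant: the gradient of the scalar S = x1*x2 maps with dx^n/dy^m.
dS_dx = np.array([x2, x1])               # (dS/dx1, dS/dx2)
dS_dy = dx_dy.T @ dS_dx                  # dS/dy^m = (dS/dx^n)(dx^n/dy^m)

# One upper index contracted with one lower index: the same number
# comes out in both coordinate systems.
print(dS_dx @ dx, dS_dy @ dy)
```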
All right, now what about a tensor of higher rank? A tensor of higher rank simply means a tensor with more indices, and the simplest examples of tensors with more indices are just products of vectors. So let's take a tensor W — W' in the y coordinates — with two indices, m and n, and I'm going to take one of them to be an upstairs index and the other a downstairs index: one contravariant, one covariant. A tensor like this could just be the product of two vectors, one with a contravariant index and one with a covariant index — I won't bother writing it out — but what makes the thing a tensor is its transformation properties, so let me show you what they are. For each index there's the same kind of pattern. The primed component has an upstairs index m, so there must be a ∂y^m upstairs, with a ∂x^p downstairs. The other index is a downstairs y index — it's a y index because it's primed — so there must be a ∂y^n downstairs, and therefore a ∂x^q upstairs. And what do we multiply? The tensor in the unprimed frame: there's a p downstairs in the derivative, so there's got to be a p upstairs on W, and a q upstairs in the derivative, so a q downstairs on W:

W'^m_n = (∂y^m/∂x^p)(∂x^q/∂y^n) W^p_q.

So this tells me how a tensor of rank two, with one contravariant and one covariant index, transforms: for each index there's a ∂y/∂x or a ∂x/∂y, and you simply track where the indices go — m upstairs with m upstairs, n downstairs with n downstairs — and the p's and q's have to balance. (A numerical sketch of this law appears below.)

This is very general. Let me do one other example: suppose we had a tensor with two covariant indices, W_mn. How does it transform? Again there are downstairs y's, ∂y^m and ∂y^n — m and n go with y — and the only things upstairs are x's; call them p and q:

W'_mn = (∂x^p/∂y^m)(∂x^q/∂y^n) W_pq.

So that's the transformation property of a thing with two purely covariant indices, and the general pattern is the same for a tensor with any number of indices: for every index you put either a ∂y/∂x or a ∂x/∂y, and you sum over repeated indices. m and n are not repeated indices — they appear on both sides — but p and q are repeated, so this is a double sum over p and q.

All right, this is the basic notational device. Who invented it? Einstein was the one who dropped the summation symbol, because he realized he didn't need it. Riemann and Gauss were in there; I don't know who invented all of this notation, but it's very systematic. So a tensor is an object which is characterized by its transformation properties.

Now notice something about tensors: if they are zero in one frame, they are zero in every frame. Start with a scalar: if a scalar is zero in one frame, it's zero in every frame — the values are just equal. Now suppose a vector is zero in some frame, say the x frame. For a vector to be zero doesn't mean some component is zero; it means all of its components are zero — a vector is only zero if all of its components are zero. If all the components V^n are zero, then obviously the transformation property is such that all the components of the primed vector are zero. Likewise with any tensor: if all of its components are zero in one coordinate system, then all of its components are zero in every frame. That means that once you've written an equation in tensor form — and you can always transfer everything to the left side of the equation and set it equal to zero — if an equation of that type is true in some frame, it's true in every frame. And that's the basic value of tensors: they allow you to express equations of various kinds — equations of motion, equations of whatever it happens to be — in a form where the same exact equation is true in any coordinate system. That's of course a deep advantage to thinking about tensors. There are other objects — we're going to come across some of them — which are not tensors, and which may be zero in some frames and not zero in other frames. ("Frame" means coordinate system.)
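Here is the promised sketch of the rank-two transformation law, written with einsum so the index pattern in the code matches the one on the blackboard. The Jacobian and the tensor components are arbitrary choices of mine.

```python
import numpy as np

# An invertible Jacobian dy^m/dx^n (arbitrary values) and its inverse.
dy_dx = np.array([[0.8, 0.6],
                  [-0.5, 0.5]])
dx_dy = np.linalg.inv(dy_dx)            # dx_dy[n, m] = dx^n/dy^m

W = np.array([[1.0, 2.0],
              [3.0, 4.0]])              # W^p_q in the x coordinates

# Mixed tensor: W'^m_n = (dy^m/dx^p)(dx^q/dy^n) W^p_q.
W_prime = np.einsum('mp,qn,pq->mn', dy_dx, dx_dy, W)

# Purely covariant tensor: W'_mn = (dx^p/dy^m)(dx^q/dy^n) W_pq —
# a dx/dy factor for each index instead.
W_prime_cov = np.einsum('pm,qn,pq->mn', dx_dy, dx_dy, W)

print(W_prime)
print(W_prime_cov)
```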
So tensors have a certain invariance to them. Their components are not invariant — the components change from one frame to another — but the statement that one tensor is equal to another tensor, say W^p_q = T^p_q, is invariant. Incidentally, when you write a tensor equation, the indices have to balance. It doesn't make sense to write an equation like W_pq = T^p_q. Oh, you could write it — you can write it all you want — but since the left side transforms differently from the right side, even if this were true in one coordinate system, it would not be true in another; the transformation properties of the two sides are different. So normally we wouldn't write equations like that. We might say that in some particular coordinate system — a coordinate system of the blah-blah-blah kind — it happens to be true, but then if you change coordinates it won't be. The kinds of equations which are true in every frame are the ones in which the indices balance, so that both sides transform the same way.

Question: what is it that's invariant?

The truth of the equation: if it's true in one coordinate system, it will be true in every coordinate system.

Question: for a vector, at least the magnitude will be the same?

Oh, its magnitude will be the same, but the individual components differ between coordinate systems. Look — we can always write the equation as W - T = 0; if every component of the object W - T is zero, then that's true in every reference frame.

Question: so it's analogous to the magnitude of the vector?

No, it's not the magnitude of the vector; it is the vector itself — the whole thing. It's true that in ordinary geometry, if the magnitude of a vector is zero, the vector itself is zero. That will not be true in relativity: it does not follow from the magnitude of a vector being zero that the vector is equal to zero. The magnitude of a vector and the vector itself are two different quantities: the magnitude is a scalar; the vector itself is a more complex thing that points in a direction. To say two vectors are equal means their directions are the same and their magnitudes are the same. A tensor of higher rank is a more complicated object which points in several directions — it's got some aspect that points in one direction and another aspect that points in another. We're going to come to what their geometry is like soon enough, but for the moment they're defined by their transformation properties. Okay: as I said, the importance of tensors is that when a tensor equation is true in one frame, it's true in every frame.

Next: operations on tensors — things you can do to tensors that make new tensors. We're not, at this point, interested in things you can do to tensors which make other kinds of objects that are not tensors; we're interested in operations which make new tensors, because in that way we can build a collection of things out of which we can make equations — the equations being the same in every reference frame. So let's write down the set of operations, and then I'll go through what they are and how you do them. They're very simple — well, most of them are simple; the last one is not so simple. First of all, you can multiply a tensor by a number; it's still a tensor. I'm not even going to bother with that one. Okay. One: addition of tensors — that's the first operation we'll talk about. Addition, of course,
also includes subtraction: if you multiply a tensor by a negative number and then add it, that's subtraction. Two: multiplication of tensors makes tensors. Three: contraction — I'll tell you what that means; you may or may not know the word at this point, but you will soon. And four: differentiation of tensors — but not ordinary differentiation; covariant differentiation, which we will define, I think, tonight. Those are the four basic processes you can do on a tensor to make new tensors.

Question: differentiation with respect to what?

Differentiation with respect to position. These tensors may be things which vary from place to place: they live at a point, they have a value at that point, at the next point they have a different value, and at the next point a different value again. Learning to differentiate them is going to be fun — and hard. Not very hard; a little hard.

Okay, adding tensors. You only add tensors if their indices match and are of the same kind. For example, if you have a tensor T with a bunch of upstairs, contravariant indices — m and so on — and a collection of downstairs indices ending in p, and you have another tensor S of exactly the same kind (S does not stand for scalar here) — its upstairs indices might be m, n, r and so forth, the downstairs ones p, q, whatever — if the indices match, then you are permitted to add them and construct a new tensor, which I'll just call (T + S), with the same index structure. In other words, summing the components index by index defines a new tensor, the sum of the two tensors. It's obvious that this transforms the right way: if T transforms by multiplying by a bunch of ∂x/∂y's and ∂y/∂x's, and S transforms the same way, then if you factor out the transformation coefficients, you can see easily that the sum also transforms as a tensor. You can do the same thing with minus — no difference: T - S is also a tensor, and this is the basis for saying that tensor equations are the same in every reference frame. T - S = 0 is a tensor equation: the equation that the tensor T - S is equal to zero.

Okay, next: multiplication of tensors. Unlike addition, multiplication can be done with tensors of any ranks — rank means the number of indices — and independently of whether those indices are upstairs or downstairs. Here's the way it works; let me start with an example, multiplying two vectors. Suppose we have a vector with a contravariant index and, to make life a little more complicated, we multiply it by a vector with a covariant index:

T^m_n = V^m W_n.

This is a tensor with one upstairs index m and one downstairs index n. You have to remember which one is which — do not cross them: if this one is m, that one is m; if this one is n, that one is n. The set of values — one value for each m and n — defines the components of a tensor with two indices. You could have done the same thing with some other vector, say one with another upstairs index, and that would have been some other tensor with two upstairs indices.
I use "upstairs" and "downstairs" because I constantly have to remind myself which one is covariant and which one is contravariant, and upstairs and downstairs are easier to think of — but they're the same thing. You can put a product sign in there if you like, just to keep track of the fact that you're multiplying.

Question: is this the dot product?

Well, no — we're going to talk about that in a moment. This is making a tensor of different, higher rank just by juxtaposing the two. How many components does this object have? Let's work in four dimensions — we're going to be interested in four-dimensional space-times — so the number of values m can take on is four (in three dimensions it would be three). Then this has 16 independent components: four for this index and four for that one. It's a 16-component object. It is not the dot product — the dot product has only one component; it's a number. Sometimes this is called the outer product, but it's just the tensor product of two tensors, and it makes another tensor, typically a tensor of different rank than either factor. The only way you can make a tensor of the same rank is for one of the factors to be a scalar. A scalar is a tensor, and you can always multiply a tensor by a scalar: take any scalar S, multiply it by V^m, and you get a tensor with one index upstairs. In other words, multiplying any tensor by a scalar gives back a tensor of the same kind — not the same tensor; it's multiplied by S — but of the same generic type, the same number of indices in the same places. That's the only situation where multiplying a tensor by some other tensor gives back a tensor of the same type; generally you get back a tensor of higher rank, with more indices. Where will these tensors come in? We'll find out soon enough; so far this is just a notational device. Okay, everybody happy with it? It's really easy; it's very hard to make mistakes.

Question: are there rules like switching sides — is V times W equal to W times V?

Yeah — well, okay, let's be careful. Take the one with two upstairs indices: V^m W^n is equal to W^n V^m, but it is not equal to V^n W^m. V^m times W^n is the same as W^n times V^m — it doesn't matter which order; 7 times 3 is 3 times 7; we're not doing quantum mechanics, and components are just numerical values. On the other hand, when you write the product as a tensor, you must remember your convention that the first index was associated with V and the second with W. But the transformation properties are just the transformation properties of a thing with two contravariant indices. There's nothing abnormal about multiplying components of vectors — they're just numbers, and when you multiply two numbers you can multiply them in any order you like; same with adding them. Okay, good. Incidentally, how do we prove that a thing like this is a tensor? We just write down the transformation property of V and the transformation property of W, and that tells us the transformation property of V times W. It's more or less manifest — I already gave some examples — that products like this continue to be tensors, with the appropriate index structure. So that's good: we have addition and we have multiplication.
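A minimal sketch of the tensor product, with arbitrary values of my own choosing: juxtaposing two four-component vectors gives a sixteen-component object, not a number, and the order of the index slots matters even though the componentwise multiplication commutes.

```python
import numpy as np

# T^m_n = V^m W_n: 4 x 4 = 16 components in four dimensions.
V = np.array([1.0, 2.0, 3.0, 4.0])   # V^m (contravariant components)
W = np.array([0.5, -1.0, 2.0, 0.0])  # W_n (covariant components)

T = np.outer(V, W)                   # T[m, n] = V[m] * W[n]
print(T.shape)                       # (4, 4): 16 components, not a number

# Componentwise, V^m W^n = W^n V^m; but swapping the index slots gives
# a different tensor: T is not equal to its transpose V^n W^m.
S = np.einsum('m,n->mn', V, W)       # same product, index pattern explicit
print(np.allclose(T, S))             # True: same tensor
print(np.allclose(T, T.T))           # False: slots are not interchangeable
```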
And now we have contraction. Contraction is also very easy — an easy algebraic process — but in order to prove that contraction leads to tensors, we need a tiny, minor theorem. No mathematician would call this a theorem; they would call it at most a lemma. Here's what the lemma says. I've mostly used m's and n's and p's and q's for the indices; I'm going to start using a's and b's — there just aren't enough letters to take them all from the same range of the alphabet. Consider the object

(∂x^b/∂y^m)(∂y^m/∂x^a),

where implicit in this formula is a sum over m. What is this object? Anybody know? It's the change in x^b when you change y^m a little bit, times the change in y^m when you change x^a a little bit, summed over m: you change y^1 a little, then y^2 a little, and so on. (I didn't hear what you said, but it's probably right.) Let me write down a slightly more general formula:

(∂f/∂y^m)(∂y^m/∂x^a).

f is a function of x, but because x depends on y, it's also a function of y. You know what this is: the change in f when you change y a little bit, times the change in y when you change x a little bit. What does it give you? It gives you ∂f/∂x^a — the change in f when you change x^a a little — calculated as a sum of steps: first the change in y^1 in response to a change in x^a and the response of f to y^1, then the change in y^2 in response to x^a, and so on. Now, what if f happens to be x^b itself? Then the formula tells me the sum is the derivative of x^b with respect to x^a. That looks like a stupid thing — what does it mean? What's the derivative of x^1 with respect to x^1? One. What's the derivative of x^1 with respect to x^2? Zero. So this thing is just the Kronecker delta:

(∂x^b/∂y^m)(∂y^m/∂x^a) = δ^b_a.

Question: is that true for any coordinates?

Yes — it's true for any set of coordinates. Notice it has one index upstairs and one downstairs. We're going to find out that δ^b_a by itself also happens to be a tensor. That sounds a little weird — it's just a set of fixed numbers — but it is a tensor, with one upper and one lower index; we'll probably come around to that eventually. Okay, that's the little lemma we need in order to understand index contraction.

So let's do an example of index contraction, and then define it more generally. As an example, take the tensor composed out of V and W: T^m_n = V^m W_n. Now, what contraction means is this: take any upper index and any lower index, combine them — identify them — and sum. In other words, take V^m W_m, which means V^1 W_1 + V^2 W_2 + V^3 W_3 and so forth. You've identified an upper index with a lower index. You're not allowed to do this with two upper indices, and you're not allowed to do it with two lower indices, but you can take one upper and one lower. Let's ask how the result transforms. So first, let's write down
the transformation properties before we set the indices equal to each other:

(V^m W_n)' = (∂y^m/∂x^a)(∂x^b/∂y^n) V^a W_b.

Let's check that this equation makes sense. The transformation property always has, for each index in the tensor, one of these ∂y/∂x or ∂x/∂y factors. On the left-hand side we have the y components of something; the upstairs index m gives a ∂y^m/∂x^a; the y index n is a lower index, so that factor is ∂x^b/∂y^n; and then we multiply by the same tensor in the unprimed components — the a-b components. This is just the transformation property of the rank-two tensor with one index upstairs and one downstairs, primed version in terms of the unprimed version. Now let's set m equal to n and contract the indices — contract means identify an upper and a lower index and sum over them. So now we have V^m W_m. How many indices does this object have — what kind of object is it? It has no indices. What about the m's? No — those indices are summed over. This is not the one-component of something or the two-component; it's like a dot product: the 1-1 component plus the 2-2 component plus the 3-3 component, all added together. The result has no free components. All right, let's see what it is. The primed version is

(V^m W_m)' = (∂y^m/∂x^a)(∂x^b/∂y^m) V^a W_b,

where m and n have been identified. And now here's where our little lemma comes in handy: ∂x^b/∂y^m times ∂y^m/∂x^a — I've written the factors in the opposite order, but it's exactly the combination in the lemma — is just δ^b_a. So this monstrosity is just δ^b_a, and δ^b_a is simply an instruction — a little machine — that says "set b equal to a". The result is

V'^m W'_m = V^a W_a.

On the left I have the contraction of an upper index with a lower index in the primed frame, and it's equal to the corresponding quantity in the unprimed frame. It doesn't matter that I called the summation index a — I could call it m if I like; a summation index doesn't care what you call it. So what does this say? It says the object I've made has no components and has the same value in every frame. What would you call it? A tensor? Ah — you'd call it a scalar. So by contracting two indices, I make another tensor — in this case, a scalar. And it's easy to prove — you can do this yourself — that if you take any tensor with a bunch of indices, say upstairs indices m, n, r and downstairs indices p, q, s, and you set one index from the upper set equal to one index from the lower set — say r — then how many indices does the result have? It looks like six, but it's not really a six-index object: the r is not an index anymore; it's been summed over. So it's a four-index object. I've taken a six-index object — a tensor of rank six — and by contracting an upper with a lower index, I have lowered the rank of the tensor by two.
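Here is a numerical version of the lemma and its consequence — my own example, continuing with the Cartesian-to-polar Jacobians from earlier; the point and the components are arbitrary.

```python
import numpy as np

# The little lemma: (dx^b/dy^m)(dy^m/dx^a) = delta^b_a, i.e. the two
# Jacobian matrices are inverses of each other.
x1, x2 = 2.0, 1.0
r = np.hypot(x1, x2)
dy_dx = np.array([[x1/r,     x2/r],
                  [-x2/r**2, x1/r**2]])      # dy^m/dx^a
dx_dy = np.linalg.inv(dy_dx)                 # dx^b/dy^m

print(np.allclose(dx_dy @ dy_dx, np.eye(2))) # True: the Kronecker delta

# Consequence: the upper-lower contraction V^m W_m is a scalar.
V = np.array([1.0, 3.0])    # contravariant components in x
W = np.array([2.0, -1.0])   # covariant components in x
V_prime = dy_dx @ V         # V'^m = (dy^m/dx^a) V^a
W_prime = dx_dy.T @ W       # W'_m = (dx^b/dy^m) W_b

print(V @ W, V_prime @ W_prime)   # same number in both coordinate systems
```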
Question: why does it have to be an upper and a lower index? Why can't they both be upper?

Okay — let's see what you would find. Let's do it: start with the expression over here, but with both indices upper. Then the second factor has to be ∂y^n/∂x^b, so in both cases we have a ∂y/∂x times a ∂y/∂x. Now suppose I set m equal to n and sum — it's an illegitimate notation, but let's do it anyway. This object is not the previous object: the previous one had a ∂x/∂y times a ∂y/∂x, which is the thing that collapses to the Kronecker delta δ^b_a; this one has a ∂y/∂x times a ∂y/∂x, which is nothing in particular — it's certainly not the Kronecker delta. And since it's not the Kronecker delta, the sum V^m W^m in one frame does not become V^a W^a in the other. So the result of "contracting" two upper indices is just some bastardized thing with no particular transformation property. (A numerical illustration of this failure appears below.)

Question: is the contraction the inner product of two vectors?

The inner product of two vectors is one upper and one lower index, right — this is really a generalization: the contraction is the generalization of the inner product of two vectors. That's exactly right: one has to be contravariant, one covariant. As soon as we introduce the metric tensor, in a moment, we'll talk about the inner product of vectors.

Question: and this also relies on each index having the same range of values?

Yes, sure — we're in some space of a given dimensionality, so the range of values of the index runs over the dimensionality of the space.

Question: can I ask a question about something from a while ago? When you mentioned making all the components zero and pushing everything to one side of the equation — so that you can write the same equation in any frame, since zero is zero — it seems to me, and I want to know if this is right, that what that really depends on is that these transformations are linear.

It does, absolutely — yes, it depends on the transformations being linear in the components. Yes indeed, that is correct.
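And the promised illustration: the same Jacobian and the same component values as in the previous sketch, but with both vectors now read as contravariant. The sum V^m W^m comes out different in the two coordinate systems, so it is not a scalar.

```python
import numpy as np

x1, x2 = 2.0, 1.0
r = np.hypot(x1, x2)
dy_dx = np.array([[x1/r,     x2/r],
                  [-x2/r**2, x1/r**2]])

V = np.array([1.0, 3.0])
W = np.array([2.0, -1.0])          # now read as contravariant components W^m
V_prime = dy_dx @ V
W_prime = dy_dx @ W                # both transform with dy/dx

# (dy/dx)^T (dy/dx) is not the identity, so the sum is frame-dependent.
print(V @ W, V_prime @ W_prime)    # different numbers: not a scalar
```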
Okay, we've defined those. Now let's come to the metric tensor, which plays a big role here. I've illustrated it by a particular construction — e_m · e_n is the metric tensor — but let's define it on its own terms, abstractly. Again, things we've done before, but let's do them again. Here's the definition. Take a differential element dx^m, which represents the components of a displacement vector: go to a point of the space, called x = (x^1, ..., x^n), and displace it a little; call that little vector dx. It has contravariant components dx^m — if you want to remember what they mean geometrically, they're the expansion coefficients in terms of some basis vectors, but once you get the hang of it, it's easier just to proceed with the notation. Now we ask: what is the length of that vector? Well, I haven't told you enough to say. This could be some arbitrarily shaped, complicated space, and specifying what the space is is, in effect, specifying the lengths of all the little elements. So take the dx^m and consider the squared length — it's always easier to start with the squared length; that's Pythagoras's theorem, but in coordinates which are not orthogonal, Pythagoras's theorem takes a more complicated form. It's still quadratic in the dx's, but it involves a dx^m and a dx^n and a quantity g_mn:

ds^2 = g_mn(x) dx^m dx^n.

In general this g_mn will depend on where you are — it depends on x. In particular, if you have a complicated curved space with curved coordinates and you pick some little dx, then the length of it will depend not only on the dx's but also on where you are in the space. This is basically the most general thing you can write down that is quadratic.

I'm going to stick with the case of four dimensions, just because I have it firmly planted in my head. How many independent components does this g_mn have? Ten — see why? To begin with there are 16: you have four dx's, and you can multiply any one by any other one, so you start with 16. But dx^1 times dx^2 is exactly the same as dx^2 times dx^1, so there's no point in having a separate g_12 and g_21; set them equal to each other, and if you count, you'll find there are ten independent ones.

Question: I was answering for three-dimensional space.

Oh — for three-dimensional space it's six, right. Here's how you count: make a sort of matrix out of x^1, x^2, x^3, x^4 against x^1, x^2, x^3, x^4 — 16 entries. The diagonal entries multiply (dx^1)^2, (dx^2)^2, (dx^3)^2, (dx^4)^2: that's four. Then there are the off-diagonal entries — the 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4 elements — six of them; you might as well take the 1-2 element and the 2-1 element to be the same, since they both multiply dx^1 dx^2, so we cross out the lower triangle and don't count anything new down there. Four plus six is ten independent components of g_mn. As you say, in three dimensions it would be six. How about in two dimensions? Three: the two diagonal elements and one off-diagonal, which you can take to be the same as the other off-diagonal. So: ten independent elements of the metric tensor in four dimensions.

So far we haven't proved it's a tensor — I keep calling it the metric tensor, but now let's prove that it is one. The basic guiding principle is that the length of a vector is a scalar: we have a little vector somewhere, and everybody agrees on its length, although they don't agree on its components. That's the underlying principle of a metric space — the length of a vector is a scalar. Okay, so let's go from the x coordinates to the y coordinates. The squared length of the little dx vector — I'll write it again — is

g_mn(x) dx^m dx^n,

and in the y coordinates — the primed coordinates — this must equal (let me call it g', and use different indices; I don't want to reuse m and n)

g'_pq dy^p dy^q.

The prime is there because this should be the components of the tensor in the primed coordinates — the y coordinates.
Let's rewrite this by writing

dx^m = (∂x^m/∂y^p) dy^p:

a little differential element of x-separation is the derivative of x with respect to y times a little change in y. Do the same thing for dx^n:

dx^n = (∂x^n/∂y^q) dy^q.

Now plug these two expressions in. What do we get? We get that the squared length equals

g_mn(x) (∂x^m/∂y^p)(∂x^n/∂y^q) dy^p dy^q.

Look at the two sides: this side says g'_pq times dy^p dy^q, and this side says this object times dy^p dy^q. They play the same role — this is the set of p-q objects which you multiply by dy^p dy^q to find the length of the vector — so this object must be g':

g'_pq = g_mn (∂x^m/∂y^p)(∂x^n/∂y^q).

So we've found the transformation property: the metric tensor in the primed frame is given by the metric tensor in the unprimed frame times our good old friends ∂x/∂y times ∂x/∂y. This is exactly the transformation property of a tensor with two covariant indices. So we discover that the metric tensor is indeed really a tensor. That's the first fact about the metric tensor: it really is a tensor, it transforms as a tensor, and it has many applications. Any questions about the metric? We're going to say more about it in a minute.
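A quick numerical check of this transformation law — my own sketch, not from the lecture: flat two-dimensional space, Cartesian to polar. In Cartesian coordinates the metric is the identity; the law should produce the familiar ds^2 = dr^2 + r^2 dtheta^2, i.e. g' = diag(1, r^2).

```python
import numpy as np

r, theta = 2.0, 0.7

# dx^m/dy^p for x1 = r cos(theta), x2 = r sin(theta); rows m, columns p.
dx_dy = np.array([[np.cos(theta), -r*np.sin(theta)],
                  [np.sin(theta),  r*np.cos(theta)]])

g = np.eye(2)                    # g_mn in Cartesian coordinates
g_prime = dx_dy.T @ g @ dx_dy    # g'_pq = (dx^m/dy^p) g_mn (dx^n/dy^q)

print(g_prime)                   # ~[[1, 0], [0, 4]] = diag(1, r^2) at r = 2
```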
Here it is: g_mn times g^np — call the new upper index p, since I don't want to reuse m or n — summed over n, contracted. This is a legitimate expression: if g^np is truly a tensor with an upstairs index, then this is a legitimate product of two tensors with the index n contracted. Can you guess what the answer is for this product? It equals δ_m^p. And what is δ_m^p thought of as a matrix? It's the unit matrix, the identity matrix. What this equation says is that the product of the matrix g_mn with the matrix g^np, the one with upper indices, is the unit matrix; it identifies this object over here as the inverse matrix of this one. This is called the metric tensor with two contravariant indices, this is the metric tensor with two covariant indices, and this is the metric tensor with one contravariant and one covariant index. But in any case, the definition of the matrix with upper indices is just the inverse, and the inverse is such that when you multiply it with the original matrix you get the identity matrix back. The fact that there is a metric tensor with upstairs indices as well as downstairs indices will play an important role, and we'll come to it. I think we'll quit for tonight — we've done enough. We'll talk just a little bit more about the metric tensor next time, and then we'll go on to the subject of curvature: parallel transport, curvature, differentiation of tensors. So far everything I've told you is easy; it's just getting the notation and following the notation. The idea of a covariant derivative is a little more complicated — not much — but it's essential. We have to know how to differentiate things in a space if we're going to do anything useful. In particular, if we're going to study whether the space is flat, we have to know how things vary from point to point: the question of whether a space is flat or not fundamentally has to do with derivatives of the metric tensor, and with the character and nature of those derivatives. So next time we'll talk a little about tensor calculus, differentiation of tensors, and especially the notion of curvature. I hope we get to curvature.
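(Stepping back to the inverse-metric equation for a moment — here is a minimal numerical sketch of my own, with a made-up metric, verifying g_mn g^np = δ_m^p by contracting over n.)

```python
import numpy as np

# g_mn: a made-up symmetric, non-degenerate 3-d metric
g = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])

g_up = np.linalg.inv(g)                # g^mn, the "upstairs" metric

# Contracting over n should give the Kronecker delta: g_mn g^np = delta_m^p
delta = np.einsum('mn,np->mp', g, g_up)
print(np.allclose(delta, np.eye(3)))   # -> True
```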
[Question:] I'm a little confused about what space these things live in. You've got the space you drew on the board, with coordinates that we've been transforming, and at each point there can be a value — a scalar, a vector, something — but those things don't live in that space, do they? Well, they're functions of position in the space. (Right, but then they have to live in their own space.) Yes, but remember that the space they live in has the same dimensionality as the space itself: each index runs over 1, 2, 3 — the dimensionality of the space, the same as the number of coordinates. It's got to be like that. (But I could construct a map from the points of a three-dimensional space to a two-dimensional one.) You could, you could. All right, so let me give you another answer about what space these things live in. Here is the curved — or uncurved, as the case may be — space that everything is a function of. At every point on that space there is what's called a tangent space. The tangent space has exactly the same dimensionality as the space itself; roughly speaking, you can think of it as a flat plane attached to each point. A mathematician would say that these tensors live in the tangent space — I don't know if that helps you or not. The tensors, vectors, and so forth live in the tangent space, they have components in the tangent space, and that's the mathematical way to describe the space they live in. [Question:] That last equation can only be the identity matrix if m equals p; does it still make sense if m is not equal to p? No, no — δ_m^p is a symbol which is zero if m does not equal p and one if m does equal p. The equation makes sense for every m and p. For example, it says that g_1n g^n2 equals zero, and it says that g_1n g^n1 equals one. So there's an equation for each m and p; some of them say the right-hand side is zero, and some say the right-hand side is one. (I understand that, but you also said it was the identity matrix, and that's only true in the case m equals p; the m-not-equal-to-p cases also make sense, they're just not the identity matrix.) No, no — it's not that a particular component is the identity matrix; this whole thing is the identity matrix, this whole thing is the matrix. All right, let's write it out for two-by-two, a two-dimensional space. This is going to be a pain in the neck, but let's actually do it. We start with the matrix whose entries, with lower indices, are g_11, g_12, g_21, g_22 — and g_21 is the same as g_12. Now we multiply it by another matrix whose indices are upstairs: g^11, g^12, g^21, g^22. I'm going to write this equation in its full-blown glory: the right-hand side is the matrix with 1, 0 in the top row and 0, 1 in the bottom row. So what does it say? It says g_11 g^11 + g_12 g^21 = 1 — this times this, plus this times this, equals one. And it says g_11 g^12 + g_12 g^22 = 0. I'm not going to write all of them; there are four equations here, one for each entry: the product for this entry is equal to one, the product for that entry is equal to zero, and so forth. (So m and p run over the same values?) Yes — they're indices in the same space, so they run over the same range: from 1 to 2 in this case, from 1 to 3 in the three-dimensional case — three or four or five or whatever. Each of these is a square matrix. Yes, that's true — they're all square matrices, all n-by-n.
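(The same two-by-two computation can be done symbolically — a sketch of mine using sympy, not part of the lecture: invert a symmetric 2×2 metric and read off the component equations written on the board.)

```python
import sympy as sp

g11, g12, g22 = sp.symbols('g11 g12 g22')
g = sp.Matrix([[g11, g12],
               [g12, g22]])            # symmetric: g_21 = g_12

g_up = g.inv()                         # the upstairs metric g^mn

# Multiplying back gives the identity; each entry is one of the four
# component equations, e.g. g_11*g^11 + g_12*g^21 = 1 in the top corner.
print(sp.simplify(g * g_up))           # -> Matrix([[1, 0], [0, 1]])
```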
[Question:] Right, but they correspond to four distinct equations? Well, to some number of distinct equations — one for each entry. [Question:] Do they have to be positive definite? Now, what do you mean by positive definite — the eigenvalues? Okay, good. For a conventional space with a positive metric, where all distances are positive, the answer is that it has to be positive definite. In the context we will be coming to — relativity — it will not be positive definite. And why is that? Because, if you remember from special relativity, the spacetime distance between two points is t² − x² − y² − z², or, depending on convention, x² + y² + z² − t². So the answer in relativity, in general relativity, is that it's not positive definite. But if we were talking about a conventional Riemannian geometry, in which all distances are positive, then the metric would be positive definite. Is that what you were asking? Okay. At the moment we're just doing ordinary geometry; we will come to Lorentz geometry, and Lorentz geometry is nothing but throwing in an extra sign — we'll come to that. It's worth learning first about differential geometry and tensor analysis in the more conventional situation. [Question:] I don't quite understand why none of the eigenvalues can be zero in that context. None of the eigenvalues are zero — that's what allows you to invert the matrix. [Question:] We took a four-dimensional space and got ten independent components; did that in some way inspire string theory? Absolutely no connection. Ten is ten, but no — no connection whatever. [Question:] Coming back to the question of invertibility — is the metric tensor being invertible a property of Riemannian geometry but not Lorentzian? Because in Lorentzian geometry it seems like a vector on the light cone has zero length. Yes, but it's not an eigenvector with zero eigenvalue — a null vector is not an eigenvector of the metric. The metric tensor in just two-dimensional spacetime, for example, would be diag(1, −1), and that has eigenvalues 1 and −1: no eigenvalue is zero. For more, please visit us at stanford.edu
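(This last point is easy to verify numerically — my own sketch, not from the lecture: the 2-d Lorentzian metric is not positive definite, yet its eigenvalues are nonzero, and a null vector is not an eigenvector with eigenvalue zero.)

```python
import numpy as np

# The 2-d Lorentzian metric diag(1, -1): not positive definite, but its
# eigenvalues are +1 and -1 -- none are zero, so it is still invertible.
eta = np.diag([1.0, -1.0])
print(np.linalg.eigvalsh(eta))         # -> [-1.  1.]

# A light-cone (null) vector has zero squared length ...
v = np.array([1.0, 1.0])
print(v @ eta @ v)                     # -> 0.0

# ... yet it is not an eigenvector with eigenvalue zero:
print(eta @ v)                         # -> [ 1. -1.], not the zero vector
```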
Info
Channel: Stanford
Views: 630,388
Keywords: physics, classical, modern, quantum, einstein, math, relativity, gravity, time, notation, tensor analysis, geometry
Id: 5VKyRVLMMQ4
Length: 105min 47sec (6347 seconds)
Published: Wed Oct 17 2012