Description: More on tensors, derivatives, and 1-forms. Contraction of tensor indices; the dual nature of vectors and the associated 1-form found by lowering the vector index.
Instructor: Prof. Scott Hughes
Lecture 3: Tensors Continued
[SQUEAKING][RUSTLING][CLICKING]
SCOTT HUGHES: All right, welcome to Tuesday. So hopefully, you've all seen the brief announcement I sent to the class. I have to introduce a colloquium speaker over in Astrophysics basically at the second this class officially ends, so I will be wrapping things up a little bit early so that I can take into account the spatial separation and get back there in time to actually do the introduction.
I've already posted the lecture notes of material I'm going to be covering today, and it'll probably spill-- I hope to wrap it all up today, but it's possible it'll spill a little bit into Thursday. So if you've already looked at those notes, today will essentially just be sort of my guided tour through that material.
So I want to pick it up with where I left things last time. So we covered a bunch of material that, again, I kind of emphasize what we're doing right now is just laying the mathematical foundations in a very thorough, almost excessively thorough way in order that we have a very strong structure as we begin to move into more physically complicated situations in the special relativity that we're focusing on right now. So we talked about this definition of an inner product between two four-vectors, two vectors in spacetime.
And it looks just like the inner product between two vectors that you are used to from your Euclidean three-space intuition. It's just that we have an extra bit there that enters with a minus sign having to do with the time-like components of those two four-vectors. And then, using the fact that I can write my four-vector as components contracted onto elements of a set of basis vectors, I can use this to define a tensor, which I will get to the mathematical definition of more precisely in just a moment.
The dot product of any two basis vectors, I will call that the tensor component, eta alpha beta. OK, and so this is the metric tensor of special relativity, at least in rectilinear coordinates. Rectilinear basically just means Cartesian but throwing time in there as well. OK, when we start talking about special relativity in curvilinear coordinates, it'll get a little bit more complicated than that, and I will not use the symbol eta in that case.
I am going to reserve eta for this particular form of the metric in this coordinate system. And of course, when you actually write out these components, a very compact way of writing this is it is just the matrix that has the elements minus one, one, one, one down the diagonal and zeros everywhere else. So this is a fairly common way of writing a diagonal matrix. This just takes into account the fact that there's zeros everywhere else.
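A minimal numerical sketch of that metric and the inner product it defines, using NumPy; the variable names and the sample vectors here are made up for illustration:

```python
import numpy as np

# The metric of special relativity in rectilinear coordinates,
# eta_{alpha beta} = diag(-1, 1, 1, 1), with the (-,+,+,+) signature
# used in lecture.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def inner(a, b):
    """Inner product a . b = eta_{alpha beta} a^alpha b^beta."""
    return a @ eta @ b

# A purely time-like displacement has negative norm; a null
# (light-like) one has zero norm.
t_hat = np.array([1.0, 0.0, 0.0, 0.0])
null_vec = np.array([1.0, 1.0, 0.0, 0.0])
```

The minus sign on the time-like piece is exactly the extra bit that distinguishes this from the Euclidean dot product.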
So I call this the metric tensor, which kind of begs the question, what's a tensor? So this is where we concluded things last time. I'm going to generally define a tensor of type (0, n)-- we're going to change that zero to something else by the end of today's lecture. You'll be able to see where I'm going probably from a mile away, but let's just leave it like this for now.
So a tensor of type (0, n) is-- you can think of it as a function or a mapping of n vectors into Lorentz invariant scalars, which is linear in its n arguments. So the inner product clearly falls into this category. If I think of a dot b, let's say this inner product is some number, lowercase a.
A will be a Lorentz invariant. I forgot to state this, but the reason we define the inner product in this way is that we are motivated by the invariant interval in spacetime between two events. We wanted to find an inner product that duplicates its mathematical structure. If I take one of these vectors, multiply it by some scalar, linearity is going to hold.
This will just be eta alpha beta. This is terrible notation. I'm using the [INAUDIBLE] symbol alpha for both a prefactor and an index. Minus five points to me. Let's call this gamma. You can quickly convince yourself. That just comes out, and you get a factor of gamma on this.
You can do the same thing on the second slot. If I take the dot product of a with the sum of two vectors-- OK, et cetera. You can keep going. All the rules for linearity are going to hold. I'm not going to step through them all. You can see where they all go.
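Those linearity rules are easy to check numerically. A quick sketch, with made-up components, verifying that scaling an argument scales the result and that a slot distributes over sums:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def inner(a, b):
    """a . b = eta_{alpha beta} a^alpha b^beta."""
    return a @ eta @ b

# Arbitrary illustrative four-vector components.
a = np.array([2.0, 1.0, 0.0, -1.0])
b = np.array([1.0, 3.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 1.0, 1.0])
gamma = 2.5

# Linearity in the first slot: scaling pulls out.
lhs_scale = inner(gamma * a, b)
rhs_scale = gamma * inner(a, b)

# Linearity in the second slot: sums distribute.
lhs_sum = inner(a, b + c)
rhs_sum = inner(a, b) + inner(a, c)
```

The same checks would go through for any slot of any (0, n) tensor, which is the point of the definition.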
So whenever I'm going to define a tensor, in my head, I'm imagining it's got properties like this that come along for the ride. Now, especially when you see this defined in certain textbooks, MTW is particularly fond of doing this. So we come back to this idea that it's sort of a function or a mapping.
You can almost abstractly define the tensor as a mathematical machine that's got two slots in it. So several of the recommended textbooks will write down equations. I'm going to put two lines over the symbol to sort of-- if you actually read this in, for instance, MTW or Carroll or something like that, this will be written as a boldface symbol.
It's hard to do on the blackboard, so I'm just going to write double bars over it. So if I imagine this with my slots filled with these, it's got two slots associated with it. I fill it with those two vectors. That is equivalent to saying a dot b, which is equivalent to it in this component notation, something like this.
So we have repeatedly in the little bit of time we've spent together-- I should say I have repeatedly, in the little bit of time we've spent together, emphasized the distinction between frame independent geometric objects, things that reside in the manifold and have kind of an intrinsic, physical, geometric sensibility of their own, and their representations. I have emphasized quite strongly that you should think of the vectors a and b as being geometric objects.
This is something that is pointing in spacetime. We all agree that this points, if that's a displacement vector, it points from event one to event two. OK, everyone agrees on that geometric reality of this. Different observers may represent it using different components. That's just because they're using different coordinate systems.
So when I write down something like this-- so let's go back to where I wrote before. This is going to turn into some frame independent Lorentz scalar a. So this is what I like to call a frame independent object. Frame or Lorentz independent geometric object, as is this scalar.
And so therefore, the tensor must be a frame independent geometric object as well. That's a lot of words around the blackboard, but I really want to nail that point home. So tensors are just like vectors. Think of them as geometric objects that have an intrinsic geometric meaning associated with them that lives in spacetime.
We will talk about certain examples of them. OK, you guys all have some intuition about vectors, because you've been doing vectors ever since you took your first kindergarten physics course, and so you know that there's some kind of an object that points in a certain direction. Tensors are a little bit more challenging in many cases to develop an intuition for.
Some of them really do have a fairly simple geometric interpretation. You can kind of think of them as-- for instance, we're going to introduce one a little bit, which describes the flow of energy and momentum in space time. And so it'll have two indices associated with it, and those indices tell me about what component of energy or momentum is flowing in a particular direction. Really easy to interpret that.
Some of the others, not so much. Nonetheless, they do have this geometric meaning underneath the hood. And that's bound up in the fact that if I put frame independent geometric objects into all the slots, I get a Lorentz invariant number out of it. It's the only way that that can sort of work.
But one reason why I'm going through this is that just like with vectors, different observers, different frames will get different components in general. So there will be different representations according to different observers. I'm going to write down the same thing.
Representations. So if you want to get a particular observer's components out of the tensor, there's actually a very simple recipe for this. All you do is, you take your tensor, and into its slots, you plug in the basis vectors that that observer uses.
So if I want to get the components-- and this, unfortunately, is a fairly stupid example, but it's the only one we've got at the moment, so let's just work with it. If I take the tensor eta, the special relativity metric, I plug in some observer's basis vectors into this thing, this by definition is eta alpha beta. Suppose I have a different observer who comes along, someone who-- we're doing special relativity, so someone who's dashing through the room at three quarters of the speed of light.
I want to know what their components would be. Well, what I do is, I just plug into the slots-- let's put bars on the components to denote this other observer. Do this operation for this other observer's set of basis vectors, and you will get the components that they will measure.
Now, one of the reasons why I'm going through this is that last time, we talked about how to transform basis vectors between different reference frames. We know that these guys are just related to one another by a Lorentz transformation matrix. So let's just take this a step further. So this is telling me that the components of the metric in this barred frame, they're going to be what I get when I put it into the slots.
Those are the basis vectors in the barred frame using the Lorentz transformation matrix to go from the unbarred frame to the barred frame. Now, remember again-- this is one of those places where if you're sort of just becoming comfortable with the index notation, your temptation at this stage is always to go, these are matrices. I should start doing matrix multiplication.
If you set that urge within you aside, you go, no, no, no, no. Those are just a set of 16 numbers. For any particular set of components, I can just pull them out. So because of the linearity of all these slots with this thing, this just becomes those two Lorentz transformation matrices acting on the abstract metric tensor with the unbarred basis vectors in the slots.
And this we already know. This is, by definition, this is just eta mu nu. Repeat this exercise for any tensor you care to write down, any (0, n) tensor you care to write down. Go through all this manipulation, and you will always find that there's a very simple algorithm for getting the components in, let's call it, the barred observer's frame, as converted from the unbarred observer's frame.
Essentially, you just hit it with a bunch of Lorentz transform matrices, and as an old professor of mine liked to say at this point, line up the indices. That's really all we're doing, is we're just going to line up the mus to convert them to alpha bars. Alpha bar here, mu there, put this guy here.
I want to convert my nu into a beta bar. I put my matrix there. Boom. Just line up the indices, and we're done. Now, this is, as I kind of emphasized, a fairly stupid example, because if you take the diagonal of minus one, one, one, one and you apply the most God-awful immense Lorentz transformation you care to write down to it, you do all the matrix manipulation and you line it all up, what you'll end up finding is that this is the diagonal of minus one, one, one, one in all frames.
That's actually one of the defining characteristics of the metric of special relativity. As long as you're working in rectilinear coordinates, the metric is always minus one, one, one, one to all observers. So the recipe holds in general. This will hold whenever we are studying tensors from now on and henceforth. It just so happens that this first example we were given is kind of a dumb one.
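You can watch that invariance happen numerically. In matrix form, the transformation rule for the metric components is L transpose times eta times L, and for any boost you get eta back. A sketch, where `boost_x` is a made-up helper, not a library routine:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def boost_x(v):
    """Lorentz boost along x with speed v (units where c = 1);
    a hypothetical helper for illustration."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

# eta_{alpha-bar beta-bar} = Lambda^mu_{alpha-bar} Lambda^nu_{beta-bar}
# eta_{mu nu}, which as a matrix equation is L^T eta L.
L = boost_x(0.75)              # the observer dashing through at 3c/4
eta_barred = L.T @ eta @ L
```

Lining up the indices is exactly what the transpose is doing in that matrix expression.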
Nonetheless, learn the lesson and overlook the example, and wisdom shall be yours. I want to spend a few moments talking about a particular subset of tensors, of the (0, n) tensors, where n equals one. This is a subset of tensors in general that is known in many textbooks as one forms.
For reasons that I will elaborate on in probably 10 or so minutes, these are also sometimes called dual vectors. And just if that's sort of making some neurons light up in your head, set that aside for a moment. I want to carefully go through them before I indicate the manner in which there is a duality that is being applied here.
So if we go back to the definition of a tensor, this tells us that a one form is a mapping from a single vector to a Lorentz invariant scalar. So using some notation that I will probably only use in this lecture, because we're going to move past this notation pretty soon, let's say a one form-- I'm going to denote this with an over tilde on it.
So let's say p is a one form. It will have in this sort of abstract notation a single slot, so I put the vector a into it. And this then gives me some scalar out. So in my notes, I go through some stuff indicating that this guy is-- it's a linear operation, but that's obvious, because it just inherits all the properties from tensors, so I'm not going to go through that. If you want to double check some of the details, they're in the notes that have been posted.
Just like with the tensors, I extract components from this thing by putting my basis vectors inside. So if I take my one form p and I put it in my alpha basis vector, this gives me the alpha component of the one form. Notice, it's in the downstairs position. So one of the reasons why I want to go through this step is it gives me a way to think about what's going on at this scalar that I wrote down on the top board here.
So what is this scalar that I get by putting a vector into my one form? So I take this, put my vector in here. I use the fact that I can write my vector using its components and the basis vector. I use linearity to pull this guy out.
So the scalar that I get by doing this is just the scalar that one gets by contracting the upstairs components that I use to set up my vector with the downstairs components I use to set up my one form. This is an operation that is called contraction, for reasons that I hope are fairly obvious. So let me define a few other characteristics of this thing, and in just a few moments, we'll see what this is good for.
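In components, that contraction is just a four-term sum. A sketch with made-up numbers, downstairs components for the one form and upstairs components for the vector:

```python
import numpy as np

# Illustrative components: p_alpha (downstairs) for the one form,
# a^alpha (upstairs) for the vector.
p = np.array([2.0, -1.0, 0.0, 3.0])
a = np.array([1.0, 4.0, 2.0, 0.0])

# The contraction p(a) = p_alpha a^alpha, summed over alpha = 0..3.
scalar = p @ a
```

Note there is no metric in this sum; the contraction of an upstairs index with a downstairs index is frame independent all by itself.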
OK, so one of things I'm going to want to do is change the representation of those things. So I'm going to want to know how these components transform between different frames of reference. But we've already done that. We did this using tensors, and this is just a tensor.
So if I change reference frames, if I want to know what the components are according to some barred observer, I will step through the algebra in my notes, but I think you know where I'm going to go with this. You just take the components in the unbarred frame, line it up, contract it with the correct setup of my Lorentz transformation matrix, boom.
Line up the indices, and we've got it there. So the last thing which I want to do with this before I talk a little bit about what this is really good for is say, you know, I've got these basis vectors that allow me to relate the components of my vectors to the geometric object in a way where I don't use a "represented by" symbol.
I actually have an honest to God equal sign. Can I define a similar set of basis one forms? What I want to do is define a family of geometric objects, and I will denote them with an omega and a tilde such that any one form can be written as its components attached to these little basis vectors. Well, the way I'm going to do this is, I'm going to exploit the fact that I already know what basis vectors are.
So I know, for instance, that if I take my one form and I plug in a basis vector, I get this. And so what I want to do is combine this thing which I would like to do with the defining operation of contractions. So I know p alpha a alpha is what I get when I've got my one form and I plug into its slot the vector a.
OK, so let's insist that when I do this, I can write this as p beta omega beta. Now remember, these are the components. This is the actual basis one form. So I'm going to stick into my basis one form this form of the vector. I can then use the linearity of the tensor nature to pull out that component of a.
So what this tells me is this is exactly what I want, provided whatever this geometric object is, it obeys the rule that when I plug basis vectors into it, I get the Kronecker delta back. Now, this may all seem really, really trivial right now. And indeed, if you think about this, in terms of just running down mathematical symbols, this is fairly trivial.
One thing which I want to emphasize is that if you go through and say, well, if I'm working in a basis where this has a time-like component that I'll say is one in the time-like direction and zero everywhere else-- the next one points in the x direction, so it's zero, one along x, zero everywhere else.
This then leads to-- so as an example, a set of basis objects that a particular observer would write just like so. I won't write out the two and three components. And again, you look at this and you think to yourself, dude, you're just repeating basis vectors. What's the big deal here?
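In one frame you can represent this very concretely: the basis vectors are the columns of the identity, the basis one forms are its rows, and feeding each basis vector into each basis one form reproduces the Kronecker delta. A sketch, with the row/column convention as an assumption:

```python
import numpy as np

# Column beta of e is the basis vector e_beta; row alpha of omega is
# the basis one form omega^alpha.
e = np.eye(4)
omega = np.eye(4)

# Entry (alpha, beta) is omega^alpha filled with e_beta, which is the
# defining rule omega^alpha(e_beta) = delta^alpha_beta.
delta = omega @ e
```

The rows-versus-columns picture is also the linear-algebra sense in which one forms are dual to vectors, which comes up just below.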
Now, I'm going to explain the fact that these are sometimes called dual vectors. So if we want to think about this in a language that is reminiscent of linear algebra, if you think of the basis vectors as column vectors, then my basis one forms are essentially row vectors. So these look a lot like my basis vectors.
They enter in a dual way. And so they're going to play an important role in helping us to-- whenever I contract two objects together to make some kind of a Lorentz invariant scalar, I'm going to want to only combine objects that have a dual nature like this. That's the only way I can get something sensible out of it. So let me give you an example. Mathematically, this is an equation that I can write down.
No question. If I'm in a particular frame, I've got the components of vector a, I've got the components of vector b, I can multiply their components together, sum them, and square them. So this is mathematically well-defined but plays no role in the physics we are going to talk about this term, because this is not related to the underlying invariant structure of the manifold that we are working with.
So remember I talked about how a manifold is essentially a sufficiently smooth set of points endowed with a metric? Well, the metric is what defines that invariant structure. This thing is mathematically fine. Write it down, but it means nothing. By contrast, of course this has frame independent meaning. This is important.
So that's the sense in which these one forms are often called dual vectors. It is when they are combined with vectors-- they are dual to them in the sense that when they're combined in this appropriate way, we find that they describe the physics, they capture the invariant characteristics of the physics that are important in the theory that we are describing.
If I may give one more example that is from a completely different field but I think helps to sort of illustrate a useful analogy to think about these things, suppose you are doing quantum mechanics and I give you two wave functions. So suppose you have a wave function psi of x and another wave function phi of x.
If you wanted to, you could multiply them together and integrate over all space. I don't know what you would do with that, but you could. On the other hand, you could take the complex conjugate of one of them, multiply it by the other one, integrate it over all space, and in the notation that you learn about, this is, of course, just the inner product of wave function psi with wave function phi.
Using a one form is akin to selecting an object that allows us to make a mathematical construction similar to the quantum mechanical inner product that we use here. And as we'll see in just a couple of minutes, it's actually really easy to flip back and forth between one forms and vectors.
OK, so a better way to move forward with this is rather than talking in terms of these more abstract things, let me give you a good example. So imagine I have some trajectory through spacetime. So let's let the t-axis go up. There's my x and y-axes. You all can imagine the z-axis.
And some observer moves through spacetime like so. Last lecture, we talked about a couple of important examples of four-vectors. And so one which is germane to the situation is the four velocity of this observer. That just expresses the rate of change of its position in spacetime per unit proper time.
And I'll remind you, tau is time as measured along this observer's trajectory. So roughly speaking, not even roughly speaking, exactly speaking, tau is the time that is measured by the watch of the person moving along there. So suppose in addition to this person sort of trundling along through spacetime here, suppose spacetime is filled with some field, phi, which depends on all of my spacetime coordinates.
Question I want to ask is, what is the rate of change of phi along this observer's trajectory? So if you are working in ordinary Euclidean space, you would basically say, ah, this is easy. So use your three-space intuition. You don't have this proper time to worry about.
So you would just say that d phi dt along this trajectory is just what I get when I calculate dx dt and then look at the x derivative of my field phi, plus dy dt times the y derivative, and so on. This is one of those places where getting the difference between a partial and a total derivative right is important. So if you see me do that again, if I don't correct it, yell at me.
OK, so you get this. And then you say, ah, this is nothing more than that particle's velocity dotted into the gradient of the field phi. It's a directional derivative along the velocity of this trajectory. So generalizing this to spacetime, you basically have the same thing going on.
Only now, time is a coordinate. So we don't treat time as the independent parameter that describes the ticking of clocks as I move along it. I use the proper time of the observer as my independent parameter. So what I will say is, the rate at which the field changes per unit of this guy's proper time, every one of these is a component of the four velocity.
So now, we introduce a little bit of notation. So this derivative is what I get when I contract the four velocity against a quantity that's defined by taking the derivative of this field. We're going to be taking derivatives like this a lot, so a little bit of notation being introduced to save us some writing. This is the directional derivative along the trajectory of this body.
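A numerical sketch of that directional derivative, d phi / d tau = u^alpha partial_alpha phi; the field, the event, and the four velocity components here are all made up for illustration, and the gradient is entered by hand:

```python
import numpy as np

def phi(x):
    """An illustrative scalar field phi(t, x, y, z) = t**2 - x*y."""
    t, xx, y, z = x
    return t**2 - xx * y

def grad_phi(x):
    """Components partial_alpha phi of the gradient one form."""
    t, xx, y, z = x
    return np.array([2.0 * t, -y, -xx, 0.0])

x0 = np.array([1.0, 2.0, 3.0, 4.0])   # an event on the trajectory
u = np.array([1.0, 0.5, 0.0, 0.0])    # four velocity components there

# Contract the four velocity (upstairs) with the gradient (downstairs).
dphi_dtau = u @ grad_phi(x0)
```

Stepping a small parameter distance along u and differencing phi gives the same number, which is the sense in which this is the rate of change along the trajectory.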
Now, this is a frame independent scalar. This is a quantity that all observers will agree on. This is a four velocity. We know this is a four velocity. These are the components of a four-vector, so these are the components of a one form. So the generalization of a gradient is an example of a one form.
So re-using that abstract notation that I gave earlier, I can write-- so the way you will sometimes see this is, the abstract gradient one form of the field phi is represented by the components partial alpha phi. You will also in some cases-- and I actually have a few lines about this in my notes-- sometimes, people will recycle the notation of a gradient that you guys learn about in undergraduate E&M.
I urge a little bit of caution with this notation, because we are going to use this symbol for a derivative to mean something a little bit different in just a couple of lectures. It turns out that the something different reduces to this in the special relativity limit, so there's no harm being done. But just bear in mind this particular notion of a derivative here is going to change its meaning in a little bit.
Another little bit of notation that is sometimes used here, and this is one more unfortunately, I think I'm stuck with this notation, one often says-- so this idea of taking a derivative along a particular four velocity, it comes up a lot. So sometimes, what people then do is, they define it as the directional gradient along u of the field phi.
So don't worry about this too much. We'll come back to this when it's a little bit more appropriate. I just want you to be aware, especially for those of you who might be reading ahead, when you see this. So just think of this as what you get when I am taking a gradient along a velocity u. It basically refers to the gradient one form contracted with the four-vector u.
So the last thing I want to do as I talk about this is revisit this notion of one forms as being dual to vectors for just a moment. So we've just introduced the gradient as our first example of a one form. So the notion of the gradient as a one form, this gives us a nice way to think about what the basis one forms mean.
So when I introduced basis one forms a few moments ago, it was a purely mathematical definition. I just wanted to have objects such that when I popped in the basis vectors, I got the Kronecker delta back. And after belaboring the obvious, perhaps for a little bit too long, we essentially got a bunch of ones and zeros out of it.
Now, putting in math all of those words, I did a lot of junk to get that. But we also know that if I just take the derivative of one coordinate with respect to another coordinate, I'm going to get the Kronecker delta. This and this, these are the same thing.
So I can think of this operation here as telling me if I regard this as this kind of abstract form of the gradient applied to the coordinate itself, this is nothing more than my basis one form. So my basis one forms are kind of like gradients of my coordinates.
OK, you're sitting here thinking, OK, what the hell does this have to do with anything? So when we combine-- set that aside for just a second-- and remind you of some pretty important intuition that you probably learned the very first time you learned about the gradient. So imagine I just draw level surfaces of some function in-- I'll just do two dimensional space.
OK, so I have some function h of x and y. This represents, like for instance, a height field. If I'm looking at a topographic map, this might tell me about where things are high and where things are low. So h of xy, there might be level surfaces on my map. It'd kind of look like this.
And there'd be another one that kind of looks like this. And maybe it'd have something like this, and then something kind of right here. So we know looking at this thing that the gradient is very low here and very high here.
Let's put this into the language of what we are looking at right now. OK, let's let delta x be a displacement vector in the xy plane, and dh is going to be my one form of my height function. How do I get the change in height as I move along my displacement vector?
Well, take my one form, plop into its slot delta x, I get something like this. The thing which I kind of want to emphasize here is, we have a lot of geometric intuition about vectors. So if I have a delta x-- let's say this is my delta x. It lasts for about this long over here. I take the exact same delta x, and I apply it over here.
I get a very different result, because that goes through many more contours on the left side than it does on the right side. And the thing-- this is where this duality kind of comes in, and I'm going to put up a couple of graphics illustrating this here-- is that you should think of the one form as essentially that set of level surfaces.
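The contour-counting picture is easy to check in two dimensions. A sketch with a made-up height field: the same displacement, dropped into the one form dh at two different points, gives a big change where the level surfaces are dense and a small one where they are sparse:

```python
import numpy as np

def dh(x, y):
    """Components of the gradient one form dh for the illustrative
    height field h(x, y) = x**2 + y**2."""
    return np.array([2.0 * x, 2.0 * y])

delta_x = np.array([0.1, 0.0])          # the same displacement both times

change_steep = dh(5.0, 0.0) @ delta_x   # contours densely packed here
change_flat = dh(0.5, 0.0) @ delta_x    # contours far apart here
```

The displacement vector is identical in both evaluations; only the spacing of the level surfaces it pierces differs.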
It's a little confusing. I'm not going to-- I mean, I can see a couple blank looks. Maybe even the majority of you have kind of blank looks on your faces here. And that's fine. So what I want you to regard is that when I'm talking about basis one forms and one forms of functions, they have a very different geometric interpretation, even though you're kind of used to gradient as telling you something about the direction along which something is changing.
Actually, you define that direction. The thing that you're worried about is sort of how close the different level surfaces are of things. OK, so coming back to this idea that my basis one form that I use are essentially just the gradients of the coordinates. So I'm going to put some graphics up on the website, which I have actually scanned out of the textbook by Misner, Thorne, and Wheeler.
And what they basically show is, let's say this is the time direction. Let's say this is my x-axis. And this is my y-axis. Your intuition is that the x basis vector will be a little arrow pointing along x. Well, what your intuition should be like for the x basis one form is a series of sheets normal to the x-axis that fill all of space.
OK, spaced one unit apart, filling all of space kind of like that. So here's one example of one of those sheets. And notice, the x-axis pierces every one of those things. That's another way in which these are sort of a set of dual functions to the vectors themselves.
This notion-- so I'm going to turn to something which is perhaps a little less weird in about 30 seconds. But the one thing which I kind of want to emphasize with this-- and again, I'm going to put a couple graphics up that help to illustrate this. And there's some really nice discussions of this. MTW is particularly good for this discussion.
This ends up being a really useful notion for capturing how we are going to compute fluxes through particular directions. Because if I want to know the flux of something in the x direction, well, my x basis one form is actually like a sheet that captures everything that flows in the x direction. So it's sort of a mathematical object that's designed for catching fluxes.
If this isn't quite gelling with you, that's fine. This is without a doubt one of the goofier things we're going to talk about in this first introductory period of stuff. This is one of those places where I think it's sort of fair to say, if you're not quite getting what's going on, shut up and calculate works well enough. It's kind of like Feynman's mantra on quantum mechanics.
Sometimes, you've just got to say, OK, whatever. Learn the way it goes, and it's kind of like playing a musical instrument. You sort of strum it and practice it for a while, and it becomes second nature after a while. All right, so to wrap this up, what I want to do for the last major thing today is, I hope I can kind of put a bow on our discussion of tensors.
Let's come back to the metric as our original example of a tensor here. So when I give you the metric as an abstract tensor, and I imagine I have filled both slots, I get a Lorentz invariant number. A dot b is what I get when I take this tensor and I put a and b into its slots.
Suppose I only fill one of its slots. Well, if I take this, I plug in the vector a but I leave the other slot blank, well, what I've got is a mathematical object that will take a vector and produce a Lorentz invariant number. That's a one form.
So let's do this very carefully and abstractly for a moment. But at this point, we basically have almost all the pieces in place. And so I'm going to kind of tone down some of the formality fairly soon. So let's define a one form, an object that takes a single vector inside of it as what I get when I take the metric and put that vector in there.
If I want to get its components out, well, I know the way I do that is I put the basis vector in there. The metric is symmetric, so I can flip the order of these guys. And what this tells me is that my one form component of a is just the vector component of a hit by the components of the metric.
In other words, the metric converts vectors into one forms by lowering the indices. This is an invertible procedure as well. So this metric, I can define eta with indices in the upstairs position by requiring that eta alpha beta contracted with it in the downstairs position gives me the identity back.
Incidentally, when you do this, you again find it's got exactly the same matrix representation. So this thing with its indices in the upstairs position is just minus one, one, one, one. It will not always be that way, though. Again, this is just because special relativity in rectilinear coordinates is simple.
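A quick numerical sketch of these two facts (my own illustration, with a made-up vector): the inverse metric has the same matrix representation in these coordinates, contracting it with the metric gives the identity, and the metric lowers indices:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # eta_{alpha beta}
eta_inv = np.linalg.inv(eta)            # eta^{alpha beta}

# Same matrix representation: diag(-1, 1, 1, 1) both upstairs and downstairs
assert np.allclose(eta_inv, eta)

# eta^{alpha beta} eta_{beta gamma} = delta^alpha_gamma (the identity)
delta = np.einsum('ab,bc->ac', eta_inv, eta)
assert np.allclose(delta, np.eye(4))

# Lowering an index: a_alpha = eta_{alpha beta} a^beta
a_up = np.array([2.0, 1.0, 0.0, 0.0])
a_down = eta @ a_up
print(a_down)  # [-2.  1.  0.  0.]  -- only the time component flips sign
```

Note this equality of matrix representations is special to the flat metric in rectilinear coordinates, as the lecture says; in general the inverse metric looks different.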
So I will often call that the inverse metric. And then, you shouldn't have a lot of trouble convincing yourself that if I've got a one form, I can make a vector out of it by a contraction operation. That now tells me that I have about 16 gajillion-- well, actually, it's a countable and finite thing. But I have many ways that I can write down the inner product between two vectors.
This guy is-- if you like, you can now regard a vector as being a sort of a function that you put one forms into. These are all the same. These are all equivalent to one another. And actually making a distinction between vector and one form and all that, it's just kind of gotten stupid at this point.
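Those "many ways" of writing the same inner product can be sketched directly (again my own illustration, with hypothetical components): contract upstairs against downstairs in any combination and you get the same Lorentz scalar:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

a_up = np.array([2.0, 1.0, 0.0, 0.0])
b_up = np.array([1.0, 3.0, 0.0, 0.0])
a_down = eta @ a_up
b_down = eta @ b_up

ways = [
    np.einsum('ab,a,b->', eta, a_up, b_up),          # eta_{ab} a^a b^b
    np.einsum('a,a->', a_down, b_up),                # a_a b^a
    np.einsum('a,a->', a_up, b_down),                # a^a b_a
    np.einsum('ab,a,b->', eta_inv, a_down, b_down),  # eta^{ab} a_a b_b
]
print(ways)  # all four give the same number: 1.0
```

This is the computational content of "the distinction has kind of gotten stupid": once you have the metric, upstairs and downstairs versions carry identical information.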
So the distinction among these different objects, the different names, kind of doesn't matter. And indeed, you sort of look at this. Up until now, we've regarded tensors as being these sort of things that operate on vectors. OK, but why not regard vectors as things that operate on one forms?
What this sort of tells you is that this whole notion of tensors being separate from vectors that I talked about before is kind of silly. So I'm going to revisit the definition of a tensor that I started the lecture off with like so. So a new and more complete definition.
A tensor of type (m, n) is a linear mapping of m one forms and n vectors to the Lorentz scalars. In this definition-- so we've already introduced (0, 1) tensors. Those are one forms. (1, 0) tensors are vectors.
Furthermore, as I kind of emphasized when I wrote this sentence here, the distinction between the slots that operate on vectors and the slots that operate on one forms, it's nice for getting some of the basic foundations laid. This is one of those places where now that the scaffolding is in place-- we've had the scaffolding in place for a while, but this wall of our edifice is pretty steady, so we can kick the scaffolding away.
We can sort of lose this distinction. The metric lets us convert the nature of the slots on a tensor. So if I have an (m, n) tensor and I use the metric to lower, that means I make it an (m minus 1, n plus 1) tensor. So an example would be, if I have a tensor R mu beta gamma delta and I lower that first index, so this went from something that operates on one one-form and three vectors.
Now, it's one that operates on four vectors. Likewise, using the inverse metric, you can raise-- and just for completeness, let's write that out. So an example of this would be if I have some tensor s mu beta gamma, and let's say I raised that first index to get something like this.
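The same raising and lowering works slot by slot on a higher-rank tensor. Here's a sketch of my own (random components standing in for some (1, 3) tensor like the R above): lower the first index with the metric, raise it back with the inverse metric, and you recover the original:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# A (1, 3) tensor R^mu_{beta gamma delta}; random components for illustration
R_up = rng.normal(size=(4, 4, 4, 4))

# Lower the first index: R_{mu beta gamma delta} = eta_{mu nu} R^nu_{beta gamma delta}
R_down = np.einsum('mn,nbgd->mbgd', eta, R_up)

# Raise it again with the inverse metric; the round trip is the identity
R_back = np.einsum('mn,nbgd->mbgd', eta_inv, R_down)
assert np.allclose(R_back, R_up)
```

Each lower/raise touches only the contracted index; the other three slots just come along for the ride.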
OK, so let's see. In my last couple of minutes-- recall, I do have to leave a little bit early today, because I need to introduce a speaker. But I just want to wrap up one thing kind of quickly. I've spent a bunch of time talking about basis objects. And I'm going to go through this fairly quickly.
The notes, if you want to see a few more details, you're welcome to download and look at them. They're not really tricky or super critical. We know we have basis objects for vectors, which hopefully you have pretty good intuition about.
We have basis objects for one forms, where your intuition is perhaps a little bit more befuddled, but it'll come with time. So you might think, uh, now I've got two-index tensors. I've got three-index tensors. There's a four-index tensor on the board.
Scott's probably going to write 17 index tensor on the board at some point. Do I need a basis object for every one of those? So in other words, do we need-- glad that's caught on video-- do I need to be able to say something like the abstract metric tensor is these components times some kind of a two index basis object?
Do I need to do something like this? Cutting to the chase, the answer is no. Basis one forms and vectors are sufficient. So what we're going to do is, abstractly just imagine that if I do have a tensor, I kind of have an outer product. I have both of the basis objects attached to this thing, and each one is just attached to those two slots.
OK, so in this particular case, my two index basis two-form that would go with this thing here, the thing with two indices on it, I'm just going to regard this as-- I'll abstractly write this as just an outer product on the basis one forms. If I ever need a basis object for a tensor like this, I will just regard this as an outer product of these two things.
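To see why the outer product of basis one forms is all you need, here's a small sketch of my own: in a given frame the basis one forms have components (omega^alpha)_beta = delta^alpha_beta, and summing eta's components against their outer products rebuilds the metric's matrix representation:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Basis one forms: row alpha of the identity is omega^alpha's components
omega = np.eye(4)

# eta = sum over alpha, beta of eta_{alpha beta} (omega^alpha outer omega^beta)
rebuilt = sum(eta[a, b] * np.outer(omega[a], omega[b])
              for a in range(4) for b in range(4))
assert np.allclose(rebuilt, eta)
```

So no separate two-index basis object is ever needed; the tensor product of the one-form bases spans everything.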
So what's going on with that notation? What does this mean? Don't lose too much sleep about it. It's basically saying that I have separate objects attached to all of my different indices, and they're kind of coming along here and giving a sense of direction to these things. So for instance, if I have some kind of a tensor-- I mean, a great one, which we're going to talk about in just a little bit.
It's a quantity known as the stress energy tensor, in which I can abstractly think of the tensor as having-- not just a [INAUDIBLE]. It will have two indices. And I can think of it as essentially pointing in two different directions at once.
Now, we're going to talk about this in a lot more detail in a couple of weeks. Actually, not even a couple of weeks. A couple of lectures. What I'm going to teach you is that the alpha beta component of the stress energy tensor tells me about the flux of four-momentum component alpha in the beta direction.
And so this is basically just saying, when I think of it as the actual object, not just the representation according to some observer, here is the thing that gives me the direction of my four momentum, and here is the direction in which it is flowing. And sometimes, we will make more complicated objects.
And so you might need to imagine-- here's one which I actually wrote down in my notes here-- there will be times when we're going to care about a tensor which at least abstractly, we might need to regard as having this whole set of sort of vomitous basis vectors coming along for the ride here.
And it is actually fairly important to have all these things that are in place. Where I will just conclude things for today is that the place where it is particularly important to remember that we kind of sometimes almost just implicitly have these things coming along for the ride here-- it's important when we calculate derivatives.
So I gave you guys an example of a directional derivative for a scalar field that filled all of spacetime. I imagine that there was some trajectory of an observer moving through this. Now, imagine it isn't a scalar field that fills all of spacetime, but it's a tensor field. Here's t. We'll say this is the y direction.
This is the x direction. Here's my observer moving through all this thing. And again, I'm going to say that this trajectory is characterized by a four velocity, dx/d tau, and I'm going to imagine that there is some tensor field that fills all of spacetime. When I go and calculate the derivative of this thing, when we're working in special relativity where we are right now, these guys are going to be constant, so it doesn't really matter.
But soon, we're going to generalize to more complicated geometries, more complicated spacetimes, and the basis objects will themselves vary as I move along the trajectory. And I will need to-- in order to have a notion of a derivative that is a properly formed geometric object, I'm going to have to worry about how the basis objects change as I move along this trajectory as well.
So that tends to just make the analysis a little bit more complicated. I have a few notes about this that I will put up on to the web page, but I don't want to go into too much detail beyond that until we actually get into some of the details of these derivatives. So I'm just going to leave it at that for now. So yeah, I'll just say, the derivative in principle and quite often in practice, it will depend on how these guys vary in space and time.
And let me just say, you guys kind of already know that. Because when you studied E&M, you got these somewhat complicated formulas for things like the divergence and curl and stuff like that. And those are essential, because those are notions of derivative where you are taking into account the fact that when you're in a curvilinear coordinate system, your basis vectors are shifting as you move from point to point.
The stuff we're going to get out of this will look a little bit different, and it comes down to the fact, as I emphasized in my last lecture, that we are going to tend to use what we call a coordinate basis, whereas when you guys learned stuff in E&M, you were using what's known as an orthonormal basis. And it does lead to slight differences.
There's a mapping between them. Not that hard to figure it out, but we don't need to get into those weeds just now. All right, so I will pick it up from there on Thursday. So I'll begin with a brief recap of everything we did. The primary thing which I really want to emphasize more than anything is this board plus this idea that the metric can be used to raise and lower the indices of a tensor.
At this point, talking about vectors, talking about one forms, many of you in a math class probably learned about contravariant vector components and covariant vector components. Once you've got a metric, it's kind of like, who cares? You can just go from one to the other.
And that's why I tend, almost religiously, to avoid the terms covariant and contravariant, and I just say upstairs and downstairs. Because I can flip back and forth between them, and there's really no physical meaning in them. You have to think carefully about what is physically measurable, and it has nothing to do with whether it's covariant or contravariant, upstairs or downstairs.
All right, I will end there, since I have to go and introduce someone.