Description: This lecture covers the rigid rotor and derivation by commutation rules. Angular momentum is a central theme of this lecture.
Instructor: Prof. Robert Field
Lecture 18: Rigid Rotor II
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
ROBERT FIELD: Now the main topic of this lecture is so important and so beautiful that I don't want to spend any time reviewing what I did last time. At the beginning, when we talked about the rigid rotor, I said that this is not just a simple, exactly solved problem but that it tells you about the angular part of every central force problem. And it's even more than that. It enables you to do a certain kind of algebra with operators, which enables you to minimize the effort of calculating matrix elements and predicting selection rules simply on the basis of the commutation rules of the operators, without ever looking at wave functions, without ever looking at differential operators.
This is the really beautiful thing about angular momentum: if we define the angular momentum in this abstract way-- and I'll describe what I mean by this epsilon ijk-- if we say we have an operator which obeys this commutation rule, we will call it an angular momentum. And we go through some arguments and we discover the properties of all "angular momentum," in quotes. Now we define an angular momentum classically as r cross p. It's a vector.
Now there are things, operators, where there is no r or p but we would like to describe them as angular momentum. One of them is electron spin, something that we sort of take for granted, and nuclear spin. NMR is based on these things we call angular momentum because they obey some rules.
And so what I'm going to do is show the rules and show where all of this comes from. This is an abstract and kind of dry derivation, but it has astonishing consequences. And basically it means that if you've got angular momenta, if you know these rules, you're never going to evaluate another matrix element in your life.
Now, it has another level of complexity. Sometimes you have operators that are made out of combinations of angular momenta, and you can use these sorts of arguments to derive the matrix elements of them. That's called the Wigner-Eckart theorem, and it means that the angular part of every operator is in your hands without ever looking at a wave function or a differential operator. Now we're not going to go there, but this is a very important area of quantum mechanics. And you've heard of three j symbols and Racah coefficients. Maybe you haven't. But there is just a rich literature of this sort of stuff.
So today I'm going to talk briefly about rotational spectra because I'm a spectroscopist. And from rotational spectra we learn about molecular geometry. Now it's really strange because why don't we just look at a molecule and measure it? Well, we can't because it's smaller than the wavelength of light that we would use to illuminate our ruler, and so we get the structure of a molecule, the geometric structure, at least part of it, from the rotational spectrum.
Now I'm only going to talk about the rotational spectrum of a diatomic molecule. You don't want me to go further because polyatomic molecules have extra complexity which you could understand, but I don't want to go there because we have a lot of other stuff to do.
I also had promised to talk about visualization of wave functions, and I'll leave that to your previous experience. But I do want to comment. We often take a sum of wave functions for positive and negative projection quantum numbers to make them real or pure imaginary. When we do that, they are still eigenfunctions of the angular momentum squared, but they're not eigenfunctions of a projection. And you could-- in fact, maybe you will on Thursday-- actually evaluate Lz times a symmetrized function. So by symmetrizing them, you get to see the nodal structure, which is nice, but you lose the fact that you have eigenfunctions of a projection of angular momentum, and that's kind of sad.
OK, so spectra-- so we have a diatomic molecule, mA, mB, and we have a center of mass. And so we're going to be interested in the energy levels for a rotation of the molecule around that axis which is perpendicular to the bond axis.
When we do that, we discover that the energy levels are given by the rotational Hamiltonian. And for a rotation-- it's free rotation, so there's no potential. And the operator is L squared or J squared.
That's another thing. You already have experienced my use of L and J and maybe some other things. They're all angular momentum. They're all the same sort of thing, although L is usually referring to electronic coordinates, and J is usually referring to nuclear coordinates. Big deal. But you get a sense that we're talking about a very rich idea where it doesn't matter what you name the things. They follow the same rules.
So we have a single term in the Hamiltonian, L squared over 2 mu r0 squared, and this might be the equilibrium distance instead of just the fixed internuclear distance. But we're talking about a rigid rotor, so r0 is the internuclear distance.
And now I want to be able to write-- OK, so I want to have this quantity. I want to have this quantity in reciprocal centimeter units because that's what all spectroscopists do, or sometimes they use megahertz. In that case, the speed of light is gone. And when I evaluate the effect of this operator on a wave function, we get an h bar squared, which cancels that. We would like to have an energy level expression E sub JM is equal to hcB times L L plus 1.
So the units of B just accommodate the fact that we want it in wave numbers. But this is an energy, so we need the hc. And when the operator operates, we get an h bar squared, and that's canceled by this factor here.
And so the handy dandy expression for the rotational constant is 16.85673 divided by the reduced mass in amu times the square of the internuclear distance in angstroms. So if you want to know the energy-- if you want to know the rotational constant in wave number units, this is the conversion. Big deal. So the energy levels are simply-- I'm going to stick with LM even though I'm hardwired to call it J. Now if I go back and forth between J and L, you'll have to forgive me because I just can't-- yes.
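As a quick numerical sketch of that conversion (the function name and the carbon monoxide numbers below are rough, illustrative values for this sketch, not taken from the lecture):

# Rotational constant in cm^-1 from the conversion quoted above:
# B = 16.85673 / (mu * r0^2), with mu in amu and r0 in angstroms.
def rotational_constant_cm1(mu_amu, r0_angstrom):
    return 16.85673 / (mu_amu * r0_angstrom ** 2)

# Illustrative example: approximate values for carbon monoxide.
mu_CO = 12.000 * 15.995 / (12.000 + 15.995)   # reduced mass, about 6.86 amu
print(rotational_constant_cm1(mu_CO, 1.13))   # about 1.9 cm^-1; lines then appear near 2B, 4B, 6B, ...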
All right, so we have the energy levels, hcB times L L plus 1. Now L is an integer, and for the simple diatomics that you're going to deal with, it's an integer. You can start at zero.
And so the energy levels go as B times L L plus 1. For the first excited level, L L plus 1 is 2-- I want to make this look right-- this is 1 times 2. This is 2 times 3. And the 3 times 4 is 12. And the important thing is that this energy difference is 2B. This energy difference is 4B. This energy difference is 6B. And so what happens in the spectrum-- here's energy. Here's zero. We have a line here at 2B, a line here at 4B, 6B, 8B.
So if you were able to look at the rotational spectrum, the lines in the spectrum would be evenly spaced. The levels are not. That's very important, especially when you start doing perturbation theory because you're going to have energy denominators which are multiples of a common factor, but they're not equal to each other.
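Written out, the levels and line positions being sketched here are:

E_L = hcB\,L(L+1) \;\Rightarrow\; E_L/hc = 0,\ 2B,\ 6B,\ 12B,\ \dots
E_{L+1} - E_L = 2hcB\,(L+1),

so the transitions fall at 2B, 4B, 6B, 8B, ... in cm^-1: evenly spaced lines from unevenly spaced levels.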
But we have a spectrum, and it looks really, really trivial. And textbooks don't talk about this, but if you have a relatively light diatomic molecule and you have a laboratory which is equipped with a microwave spectrometer which is able to generate data that got you tenure and whatever, it's probably a spectrometer where the tuning range of the microwave oscillator is about 30%. That's a lot. If you think about NMR, the tuning range is-- 30% is huge.
So, you see this and you say, oh yeah, I could assign that spectrum because there's an obvious pattern. But what happens in the spectrum is you get one line. And so you say, well, I need to know the internuclear distance of this molecule to 6 or 8 or 10 digits, but I get one line. There's no pattern.
The textbooks are so full of formulas that they don't indicate that, in reality, you've got a problem. And, in fact, in reality you've got something that's also a gift.
So there are two things that happen: isotopes and vibration. So we have this one line-- very, very strong, very narrow-- and you can measure the daylights out of it if you wanted to.
And then down here, there's going to be-- well, actually, sometimes like in chlorine and bromine, there's a heavy isotope and a light isotope, and they have similar abundances. And so you get isotope splittings, and that's expressed in the reduced mass mA mB over mA plus mB. Now the isotope splittings can be really, really small, but these lines have a width of a part in a million, maybe even narrower. And so you can see isotope stuff.
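In symbols, the isotope dependence he is describing enters only through the reduced mass:

\mu = \frac{m_A m_B}{m_A + m_B}, \qquad B \propto \frac{1}{\mu r_0^2},

so the heavier isotopologue has the slightly smaller B, and its lines sit at slightly lower frequency.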
That doesn't tell you anything at all that you didn't know, except maybe that you were confused about what molecule it was, because if you have a particular atom, it's always born with the normal isotope ratios. Except here we have a little problem where, in sulfur, if you look in minerals, the isotope ratios are not the natural abundances of the sulfur isotopes. And this has to do with something really important that happened 2 and 1/2 billion years ago. Oxygen happened.
And so isotope ratios are of some geological and chemical significance, but here, if you know what the molecule is, there will be isotope lines. And they can be pretty strong depending on the relative abundance of the different isotopes, or they can be extremely weak. So there's stuff-- some grass to be mowed-- on the baseline.
In addition-- and this is something that really surprises people. So here is v equals 0, and way up high is v equals 1. Typically, the vibrational intervals are on the order of a thousand times bigger than the rotational intervals. And typically, the rotational constant decreases in steps of about a tenth of a percent per vibration.
Now we do care about how much it decreases because that allows us to know a whole bunch of stuff about how rotation and vibration interact. And I'm probably not going to do the lecture on the rotation-vibration interaction, unless I have to give a lecture on something that I can't do, in which case I'll slip that one in.
So what happens is there are vibrational satellites. So here's v equals 0. It has rotational structure. And here is v equals 1. It has rotational structure. The v equals 1 stuff is typically a hundred to a thousand times weaker than the v equals 0 stuff. And that's basically telling you how a molecule changes its average 1 over r squared as it vibrates, and that's a useful thing. It may even be useful on Thursday.
So in addition to hyperfine, there's other small stuff having to do with vibrations. And in some experiments that I do, we use UV light to break a molecule. And the fragments that we make are born vibrationally excited. And so by looking at the stuff near the v equals 0 frequency, you see a whole bunch of stuff which tells you the populations of the different vibrational levels.
And that's strange because vibration is not part of the rotational spectrum. Vibration is big, but we get vibrational information from the rotational spectrum. And because the rotational spectrum is at such high resolution, it's trivial to resolve and to detect these weak other features. So that's as much as I'm going to talk about spectroscopy, and it's a little bit more than I had originally planned.
And now we're going to move to this topic which is dear to my heart, and it's an example of an abstract algebra that you use in quantum mechanics. And there are people who only do this kind of thing, as opposed to solving the Schrödinger equation or even just doing perturbation theory on matrices.
So the rest of today's lecture is going to be an excursion through here as much as I can do. It's all clear in the notes, but I think it's a little bit strange.
Oh, I want to say one more thing. How do we make assignments? You all took 5.111 or 5.112 or 3.091, and there are things that you learn about how big atoms are. And so you can sort of estimate what the internuclear distance is-- maybe to 10% or 20%. That's not of any chemical use, but it's enough to assign the spectrum.
So what you do is you say, OK, I guess the internuclear distance is this. That determines what rotational transition you were observing. And that has consequences: suppose you're observing L to L plus 1. Well, what about L plus 1 to L plus 2, or L minus 1 to L? So if you make an assignment, you can predict where the other guys are.
And that would require going to one of your friends who has a different spectrometer and getting him to record a spectrum for you, and that's good for human relations. And that then enables you to make assignments and know the rotational constants to as many digits as you could possibly want-- all the way up to 10. It's just crazy. You really don't care about internuclear distances beyond about a thousandth of an angstrom, but you can have them.
So first of all, you know we can define an angular momentum as r cross p, and we can write that as a matrix. Now I suspect you've all seen this. These are unit vectors along the x, y, and z directions. And this is a vector, so there are three components, and we get three components here. Now you do want to make sure you know this notation and know how to use it.
So here is the magic equation. Li, Lj is equal to ih bar sum over k epsilon ijk Lk. Well, what is epsilon ijk? Well, it's got many names, but it's a really neat tool which is very wonderful in enabling you to derive new equations.
So if i, j, and k correspond to xyz in cyclic order-- in other words, xyz, yzx, et cetera-- then this is plus 1. If it's in anticyclic order, it's minus 1. And if any index is repeated, it's 0. So it packs a real punch, but it enables you to do fantastic things. So if we have Lx, Ly, it's equal to ih bar plus 1 times Lz.
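Written out in standard notation, the defining relation and the Levi-Civita symbol he is describing are:

[L_i, L_j] = i\hbar \sum_k \epsilon_{ijk} L_k, \quad \epsilon_{ijk} = +1 for ijk cyclic (xyz, yzx, zxy), -1 for ijk anticyclic (yxz, xzy, zyx), and 0 if any index repeats,

so, for example, [L_x, L_y] = i\hbar L_z.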
And the point of this lecture is with this, you can derive all of the matrix elements of an angular momentum-- L squared, Lz, L plus minus, and anything else. But these are the important ones, and this is what we want to derive from our excursion in matrix element land.
So the first thing we do is we extract some fundamental equations from this commutator. So the first equation is that the commutator of L squared with Lz is equal to the commutator of Lx squared with Lz, plus Ly squared with Lz, plus Lz squared with Lz. And we know this last one is 0, right?
This one, you have to do a little practice, but you can write this commutation rule as Lx times Lx comma Lz plus Lx comma Lz times Lx. So if you have a square, you take it out the front side and then the back side. And now we know what this is. This is minus ih bar Ly. And this is minus ih bar Ly.
And we do this one, and we discover we have the same thing except with the opposite sign. And so what we end up getting is that this is 0. Now I skipped some steps. I said them, but I want you to just go through that and see.
So you know what this is. It's going to be Ly, and it's going to be minus Ly times ih bar. And you get the same thing here. But then you have an LxLy, and you have an LxLy. And when you do the same trick with this, you're going to get an Ly and an Lx again, and they'll be the opposite sign.
So this one is really important because what it says is that you can take any projection quantum number and it will commute with the magnitude squared. The same argument works for Ly and Lz and Lx.
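The steps being skipped here, written out:

[L_x^2, L_z] = L_x[L_x, L_z] + [L_x, L_z]L_x = -i\hbar\,(L_x L_y + L_y L_x)
[L_y^2, L_z] = L_y[L_y, L_z] + [L_y, L_z]L_y = +i\hbar\,(L_y L_x + L_x L_y)
[L_z^2, L_z] = 0

so the sum is [L^2, L_z] = 0, and the same argument gives [L^2, L_x] = [L^2, L_y] = 0.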
So we have one really powerful commutator, which is that the commutator of L squared with Li equals 0 for i equals x, y, and z, which means-- since we like L squared and Lz; we could pick Lx instead of Lz, but we tend to favor these-- that L squared and Lz are operators that can have a common set of eigenfunctions. If we have two operators that commute, the eigenfunctions of one can be the eigenfunctions of the other. Very convenient.
Then there's another operator that we can derive, and that is-- let's define this thing, a step up or step down or our raising or lowering operator-- we don't know that yet-- Lx plus or minus iLy. So we might want to know the commutation rule of Lz with L plus minus.
We know how to write this out because we have Lz with Lx, and we know that's going to be an ih bar Ly. And we have Lz with plus or minus iLy, and that's going to be plus or minus h bar Lx. Anyway, I'm going to just write down the final result, that this is equal to plus or minus h bar times L plus minus.
The algebra of this operator enables you to slice through any derivation as fast as you can write once you've loaded this into your head. Yes?
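The two-line version of that result:

[L_z, L_\pm] = [L_z, L_x] \pm i\,[L_z, L_y] = i\hbar L_y \pm i(-i\hbar L_x) = \pm\hbar\,(L_x \pm i L_y) = \pm\hbar L_\pm.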
AUDIENCE: So for the epsilon, how do you [INAUDIBLE]? Is it like xy becomes-- if it's cyclical it's positive?
ROBERT FIELD: I'm sorry?
AUDIENCE: When you say the epsilon thing, epsilon ijk, so you're saying that if it's in order, it's 1?
ROBERT FIELD: Let's just do this a little bit. Let's say we have Lx and Ly. Well, we know that that's going to give Lz. And xyz, ijk, that's cyclic order. We say that's the home base. And if we have yxz, that would be anticyclic, and so that would be a minus sign. You know that just by looking at this, and you say if we switch this, the sign of the commutator has to switch.
There's a lot of stuff loaded in there. And once you've sort of processed it, it becomes automatic. You forget the beauty of it. So are you satisfied? Everybody else?
All right, so now let's do another one. Let's look at L squared L plus minus. Well, this one is super easy because we already know that L squared commutes with Lx, Ly, and Lz. So I just need to just write 0 here because this is Lx plus or minus iLy, and we know L squared commutes with both of them.
Now comes the abstract and weird stuff. We're starting to use the commutators to derive the matrix elements and selection rules.
So let us say that we have some function which is an eigenfunction of L squared and Lz. And so we're entitled to say that L squared operating on this function gives an eigenvalue we call lambda. And we can also say that Lz operating on the function gives a different constant, mu.
Now this lambda and mu have no significance. They're just numbers. There's not something that's going to pop up here that says, oh yeah, this means something.
So now we're going to use the fact that this function, which we're allowed to have as a simultaneous eigenfunction of L squared and Lz with its own set of eigenvalues, this function, we are going to operate on it and derive some useful results that all are based on the commutation rules.
So let us take L squared operating on L plus minus times f. And we know that L plus minus commutes with L squared. So we can write L plus minus times L squared f. But L squared operating on f gives lambda f. So we have L plus minus times lambda f.
Oh, isn't that interesting? We have-- I'll just write it-- lambda times L plus minus f-- L plus minus f. So it's saying that this thing is an eigenfunction of L squared with eigenvalue lambda. Well, we knew that. So L plus minus operating on f does not change lambda, the eigenvalue of L squared.
Now let's use another one. Let's use Lz L plus minus. Well, I derived it. It's plus or minus h bar times L plus minus. And if I didn't derive it, I should have, but I'm pretty sure I did. And so now what we can do is write Lz L plus minus minus L plus minus Lz is equal to plus or minus h bar L plus minus.
Let's stick in a function on the right, f, f, f. So now we have these operators operating on the same function. Well, we don't yet know what L plus minus does to f, but we know what Lz does to it. And so what we can write immediately is that Lz operating on L plus minus f is equal to plus or minus h bar L plus minus f plus mu L plus minus f. Well, that's interesting.
So we see that we can rearrange this, and we could write plus minus h bar L plus minus f is equal to mu L plus minus f plus Lz L plus minus f-- that's h bar, OK. Oh, I'm sorry, L plus minus f there.
So what's this telling us? So we can simply combine these terms. We have the L plus minus f here. And so we can write mu plus h bar times L plus minus f. That's the point.
So we have this operator operating--
AUDIENCE: I don't think you want the whole second--
ROBERT FIELD: I'm sorry?
AUDIENCE: The first line goes straight to there. I think your second line's [INAUDIBLE].
ROBERT FIELD: I took this thing over to here. So let's just rewrite that again. We have Lz L plus minus f is equal to this. And so here we have mu, the eigenvalue, and it's been increased by h bar. And so what that tells us is that we have a manifold of levels-- mu, et cetera. So we get a manifold of levels that are equally spaced, spaced by h bar.
AUDIENCE: I think it also should be plus or minus h bar, right?
ROBERT FIELD: Plus minus h bar-- yeah. So we have this manifold of levels, and so what we can say is, well, this isn't going to go forever. This is a ladder of equally spaced levels, and it will have a highest and a lowest member.
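Collecting the two results of this argument in one place:

L^2\,(L_\pm f) = L_\pm L^2 f = \lambda\,(L_\pm f)
L_z\,(L_\pm f) = \big(L_\pm L_z + [L_z, L_\pm]\big) f = (\mu \pm \hbar)\,(L_\pm f),

so L_\pm leaves lambda alone and moves mu up or down the ladder in steps of h bar.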
And so we can say, all right, well, suppose we have f max mu, and we have L plus operating on it. That's going to give 0. And at the same time we can say we have L minus operating on f min mu, and that's going to give 0. We're going to use both of these.
Now I'm just going to leave that there. Oh, I'm not. I'm going to say, all right, so since we have this arrangement-- all right, I am skipping something, and I don't want to skip it. So if we have L plus operating on the f with the maximum value of mu, we get 0. And each rung down is lower by one unit of h bar, and so we can say that Lz operating on f max mu is equal to h bar L times f max mu, for some number L. Now this L is chosen with some prejudice. Yes?
AUDIENCE: Why is there an f of x?
ROBERT FIELD: Now I have to cheat. I'm going to apply an argument which is not based on just abstract vectors. We have an angular momentum. It has a certain length. We know the projection of that angular momentum on some axis cannot be longer than its length. I mean, I'm uncomfortable making that argument because I should be able to say it in a more abstract way, but this is, in fact-- we know there cannot be an infinite number of values of the projection quantum number reached by applying L plus and L minus. It must be limited. And so we're going to call the maximum value of mu h bar times L.
Now I have to derive a new commutation rule based on the original one. No, let's not erase this. We might want to see it again.
So let's ask, well, what does this combination of operators do? Well, this is surely equal to Lx squared, and we get a plus i and a minus i, and so it's going to be plus Ly squared. And then we get i times LyLx, and we get a minus i times LxLy.
This is L squared minus Lx squared. We have two of the three components of L squared, and so this is equal to the difference. And now we express this as i times LyLx. And what is this? This is plus ih bar Lx.
AUDIENCE: I think you wrote an x [INAUDIBLE].
ROBERT FIELD: OK, this is, yes, x, and that should be Lz. I didn't like what I wrote because I want to have everything but the z and the L squared disappearing, and so we get that we have L squared minus Lz squared. And then we have plus or minus h bar Lz. I lost the plus and minus. No I didn't. OK, that's it. And so we can rearrange this and say L squared is equal to Lz squared minus or plus h bar Lz plus L plus minus L minus plus.
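Written out, the identity on the board is:

L_\pm L_\mp = (L_x \pm iL_y)(L_x \mp iL_y) = L_x^2 + L_y^2 \mp i\,[L_x, L_y] = L^2 - L_z^2 \pm \hbar L_z,

which rearranges to L^2 = L_z^2 \mp \hbar L_z + L_\pm L_\mp.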
So we can use this equation-- OK, I'd better not-- to derive some good stuff. I better erase some stuff or access a board. We're actually pretty close to the end, so I might actually finish this.
So we're going to use this equation to find-- so we want lambda, the value of lambda for the top rung of the manifold over here. So we apply L squared to f max mu. And we know we have an equation here which enables us to evaluate what the consequences of that will be, and it will be Lz squared f max mu, minus or plus h bar Lz f max mu, plus L plus minus L minus plus f max mu.
So if we take the bottom sign, that L plus on f max is going to give 0. So we're looking at the bottom sign, and we have a 0 here, and so we have L squared f max mu is equal to Lz squared f max mu plus h bar Lz f max mu plus 0. Isn't that interesting?
So we know that Lz is going to give-- so we're going to get an h bar squared and mu max squared. We're going to get a plus h bar h bar mu max.
So what this is telling us is that L squared operating on f max mu is given in terms of l, because we said that we're going to take the maximum value of mu to be h bar l. So I shouldn't have had an extra h bar here.
So we get this result. So lambda-- so L squared operating on this gives the-- oh yeah, maximum mu. So it's telling us that L squared f max mu is equal to h bar squared l l plus 1 times f max mu. Now that is why we chose that constant to be l. And we do a similar argument for the lowest rung of the ladder.
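The top-rung evaluation he just did, in one line (lower sign, with L_+ f_max = 0 and mu_max = \hbar l):

L^2 f_{\max} = (L_z^2 + \hbar L_z + L_- L_+)\, f_{\max} = (\hbar^2 l^2 + \hbar^2 l)\, f_{\max} = \hbar^2\, l(l+1)\, f_{\max},

so lambda = \hbar^2 l(l+1).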
And for the lowest rung of the ladder, we know there must be a lowest rung, and so we will simply say, OK, for the lowest rung of the ladder we're going to set mu min equal to h bar l bar. And we do some stuff, and we discover that lambda-- which has to be equal to h bar squared l l plus 1-- is, using this other relationship and the top sign, also equal to h bar squared l bar l bar minus 1.
And there's two ways to solve this. One is that l is equal to minus l bar, and the other is that l bar is equal to l plus 1.
Well, this is the lowest rung of the ladder. Wait a minute, let me just make sure I'm doing the logic correctly. It's this one, OK, here. So l bar labels the lowest rung of the ladder, and in this solution l bar is supposed to be larger than l. Can't be, so this is impossible. This one is correct. And what we end up getting is this relationship, and so mu over h bar can be equal to l, l minus 1, and so on, down to minus l, stepped by 1.
This seems very weird and not very interesting until you say, well, how do I satisfy this? Well, if l is an integer, it's obvious. If l is a half integer, it's maybe not quite so obvious, but it's true. So we can have integer l and half-integer l. That's weird. There's no connection between the integer l's and the half-integer l's. They belong to completely different problems, but this abstract argument says, yeah, we can have integer l's and half-integer l's.
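The bottom-rung argument, written out: with mu_min = \hbar \bar{l} and L_- f_min = 0 (the top sign this time),

\lambda = \hbar^2\, \bar{l}(\bar{l} - 1) = \hbar^2\, l(l+1) \;\Rightarrow\; \bar{l} = -l \ \text{ or }\ \bar{l} = l + 1.

Since \hbar\bar{l} is the lowest rung, \bar{l} = l + 1 is impossible, so \bar{l} = -l. The ladder then runs from -l to +l in unit steps, which forces 2l to be a non-negative integer-- that is, l must be an integer or a half-integer.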
And if we have electron spin-- well, we call it electron spin because we want it to be an angular momentum. Spin is sort of an angular momentum, and so is nuclear spin. And we discover that there are patterns of energy levels which enable us to count the number of projection components.
And if you have an integer l, you'll get 2l plus 1 components, which is an odd number. And if it's a half integer you get 2l plus 1 components, which is an even number. And so it turns out that our definition of an angular momentum is more general than we thought. It allows there to be both integer and half-integer angular momentum.
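Two concrete counts of those projection components:

l = 1:\quad m = -1,\ 0,\ +1 \quad (2l+1 = 3, \text{ odd})
l = \tfrac{1}{2}:\quad m = -\tfrac{1}{2},\ +\tfrac{1}{2} \quad (2l+1 = 2, \text{ even})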
And this means we can have angular momenta where we can't define them in terms of r cross p. They're defined by the commutation rule. It's more general. It's more abstract. It's beautiful.
And I don't have time to finish the job, but in the notes you can see that we can derive the matrix elements for the raising and lowering operators too. And the angular momentum matrix elements are that L plus minus operating on a function gives this combination square root, and it raises or lowers m. So it's sort of like what we have for the a's and a daggers for the harmonic oscillator, but it's not as good because you can't generate all the L's. You can generate the m sub L's.
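The standard form of those matrix elements, with the usual phase choice:

L^2\,|l, m\rangle = \hbar^2\, l(l+1)\,|l, m\rangle, \qquad L_z\,|l, m\rangle = \hbar m\,|l, m\rangle,
L_\pm\,|l, m\rangle = \hbar\,\sqrt{l(l+1) - m(m \pm 1)}\;|l, m \pm 1\rangle.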
And that's great, but there's still something that remains to be done to generate the different L's. That's not a problem. It's just there's not a simple way to do it, at least not simple to me.
And so now anytime we're faced with a problem involving angular momenta, we have a prayer of writing down the matrix elements without ever looking at the wave function, without ever looking at a differential operator.
And we can also say, well, let's suppose we had some operator that involves L and S, now that we know that we have these things, and L plus S can be called J. So now we have different operators, different angular momenta. We have L, we have S, and we have the total, J. Well, they're all angular momenta. They're going to satisfy the same sorts of selection rules and matrix elements, and we can calculate all of these matrix elements, including things like L dot S and whatever. So it just opens up a huge area where before you would say, well, I've got to look at the wave function. I've got to look at this integral. No more.
But there is one thing, and that is these arguments do not determine-- I mean, when you take the square root of something you can have a positive value and a negative value. That corresponds to a phase ambiguity, and these arguments don't resolve that. At some point you have to decide on the phase and be consistent. And since you're never looking at wave functions, that actually is a frequent source of error. But that's the only defect in this whole thing.
So that's it for the exam. I will talk about something that will make you a little bit more comfortable about some of the exam questions on Wednesday, but it's not going to be tested.