Description: In this lecture, Prof. Kardar continues his discussion of The Landau-Ginzburg Approach, including Scattering and Fluctuations, Correlation Functions and Susceptibilities, Comparison to Experiments.
Instructor: Prof. Mehran Kardar
Lecture 4: The Landau-Ginzb...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK, let's start. So we have been looking at the problem of phase transitions from the perspective of a simple system which is a piece of magnet. And we find that if we change the temperature of the system, there is a critical temperature, Tc, that separates paramagnetic behavior on the high temperature side and ferromagnetic behavior on the low temperature side. Clearly in the vicinity of this point, whether you're on one side or the other, the magnetization is small and we are relying on that to make us a good parameter to expand things in. The other thing is that we anticipated that over here there are long wavelength fluctuations of the magnetization field and so we did averaging and defined a statistical field, this magnetization as a function of position.
Then we said, OK, if I'm just changing temperature, what governs the behavior of the probability that I will see in my sample some particular configuration of this magnetization field? So there is a functional that governs that. And the statement that we made was that whatever this functional is, I can write it as the exponential of something if I want. This probability is positive. I will assume that it is-- locally there is a probability density that I will integrate across the system.
Probability density is a function of whatever magnetization I have around point x. And so then when we expanded this, what did we have? We said that the terms that are consistent with rotational symmetry have to be things like m squared, m to the fourth, and so forth. In principle, there is a long series but hopefully since m is small, I don't have to include that many terms in the series. And additionally, I can have an expansion in gradients and the lowest order term in that series was the gradient of m squared and potentially higher order terms. OK?
We said that we could, if we relied on looking at the most probable configuration of this weight, make a connection between what is going on here and the experimental observation. And essentially the only thing that we needed to do was to expand t around here, so t was made to be proportional to T minus Tc. And then we could explain these kinds of phenomena by looking at the behavior of the most probable magnetization.
Now I kind of said that we are going to have long wavelength fluctuations. There was one case where we actually saw a video of those long wavelength fluctuations and that was for the case of critical opalescence taking place at the liquid gas mixture at its critical point. Can we try to quantify that a little bit better? The answer is yes, we can do so through scattering experiments.
And looking at the sample was an example of a scattering experiment, which if you want to do more quantitatively, we can do the following- we can say that there is some incoming field, an electromagnetic wave that is impinging on the system. It has some incoming wave vector k. It sort of goes through the sample and then when it comes out, it gets scattered and so what I will see is some k [INAUDIBLE] that comes from the other part of the system. In principle, I guess I can put a probe here and measure what is coming out. And essentially it will depend on the angle through which this has rotated.
If I ask, well, how much has been scattered? We'd say, well, it's a complicated problem in quantum mechanics. Let's say this is a quantum mechanical procedure- you would say that there's an amplitude for the scattering that is proportional to some overlap between the kind of state that you started with, which is an incoming wave with k initial together with, presumably, the initial state of my sample before the wave hits it, and the final configuration, which is k f together with whatever the final version of my system is.
Now between these two, I have to put whatever is responsible for scattering this wave so there is in some sense some overall potential that I have to put over here. Now let's think about the case of this thing being, say, a mixture of gas and liquid, well what is scattering light? Well, it is the individual atoms that are scattering light and there are lots of them.
So basically I have to sum over all of the scattering elements that I have in my system. Let's say I have a u for a scattering element i that is located at position-- maybe a bad choice of index, let's call it a sum over alpha. x alpha is the position of, let's say, the atom that is doing the scattering here. OK? So now since I'm dealing with, say, linear order, not multiple scattering, what I can do is I can basically take this sum outside.
So this thing is related to a sum over alpha of the scattering I would have from individual elements. And then roughly each individual element will scatter an amount that I will call sigma of q. If you have elastic scattering what happens is that essentially your initial k simply gets rotated without changing its magnitude. So what happens is that essentially everything will end up being a function of this momentum transfer q, which is k f minus k i, whose magnitude would be twice the magnitude of your k times the sine of half of the angle, if you just do the simple geometry over there. So this is for elastic scattering, which is what we will be thinking about.
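In symbols, the decomposition just described can be sketched as follows, with U denoting the potential of a single scattering element at position x alpha and sigma of q its form factor (the sign in the phase factor is a convention):

```latex
A(\mathbf{q}) \;\propto\; \sum_\alpha \big\langle \mathbf{k}_f \big| U(\mathbf{x}-\mathbf{x}_\alpha) \big| \mathbf{k}_i \big\rangle
\;\propto\; \sigma(\mathbf{q}) \sum_\alpha e^{\,i\mathbf{q}\cdot\mathbf{x}_\alpha},
\qquad
\mathbf{q}=\mathbf{k}_f-\mathbf{k}_i,\quad
|\mathbf{q}| = 2|\mathbf{k}|\sin(\theta/2).
```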
Now the amount that each individual element scatters, like each atom, is indeed a function of the momentum transferred from the scattering probe. But the thing that you're scattering from is something that is very small like an atom, so it turns out that the resulting sigma will vary only over scales where q is comparable to the inverse of whatever is scattering, which is something that is very large. So most of the stuff that is happening at small q, most of the variation that is observed, comes from summing over the contributions of the different elements. So going to the continuum limit, this becomes an integral across your system of whatever the density of the thing that is scattering is.
Indeed if I'm thinking about the light scattering experiment that we saw with critical opalescence, what you would be looking at is this density of liquid versus gas, which if I want to convert to q, I have to Fourier transform here. And so this is the amplitude of scattering that I expect and we can see that it is directly probing the fluctuations of the system, Fourier transformed [INAUDIBLE] number q. Eventually of course this is the amplitude; what you will be seeing is the amount that is scattered, S of q, which will be proportional to this amplitude squared. We'll have a part that at small q is roughly constant, so basically at small q I can regard this sigma as a constant. So at small q all of the variation is going to come from this rho of q squared.
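Written out, the relations just stated amount to something like the following sketch (the sign convention in the Fourier transform is a choice):

```latex
A(\mathbf{q}) \;\propto\; \sigma(\mathbf{q}) \int d^d\mathbf{x}\; \rho(\mathbf{x})\, e^{\,i\mathbf{q}\cdot\mathbf{x}}
 \;=\; \sigma(\mathbf{q})\,\rho(\mathbf{q}),
\qquad
S(\mathbf{q}) \;\propto\; |\sigma(\mathbf{q})|^2\, |\rho(\mathbf{q})|^2
 \;\approx\; \text{const}\times |\rho(\mathbf{q})|^2 \quad (\text{small } q).
```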
Of course again thinking about the case of the liquid gas system, where we were seeing the picture, there were variations, so there are essentially lots and lots of these rho of q's depending on which instant of time you're looking at. And then it would be useful to do some kind of a time average and hope that the time average comes from the result of a probability measure such as this. OK? So that's the procedure that we'll follow. We're going to go slightly beyond what we did before.
What we did before was we started with a probability distribution, such as this, that we posed on the basis of symmetry and then calculated singular behavior of various thermodynamic functions such as heat capacity, susceptibility, magnetization, et cetera, all of them macroscopic quantities. But this is a probability that also works at the level of microscopics. It's really a probability as a functional of configurations and the way that that is probed is through scattering experiments. So scattering experiments really probe the Fourier transform of this probability that we have posed over here. OK?
Now again the full probability that I have written down there is rather difficult. Whereas in the case of the liquid gas system this rho would be the density, here it would be explicitly the magnetization, the fluctuations of the magnetization around the mean. I should note that in the case of the magnet, you can say, well how do you probe things? In that case you need something, some probe that scatters from magnetization at each point. And the appropriate probe for magnetization is neutrons.
So you basically hit the system with a beam of neutrons whose spins may be polarized in some particular direction, and they hit the spins of whatever is in your sample and they get scattered according to this mechanism. And what you will be seeing at small q is related to fluctuations of this magnetization field. Think he was looking for that. Now I realize-- I'm not going to run after him. OK. So teaches them to leave the room earlier. OK.
So let's see what we have to-- we can do for the case of calculating this quantity. Now I'm not going to calculate it for the nonlinear form, it's rather difficult. What I'm going to do is to sort of expand on the trick that we were using last time, which led to this other point, which is to look at the most probable state. So basically we looked at that function when we were calculating the saddle point integration and the first thing that we did was to find the configuration of the magnetization field that was the most likely. And the answer was that because K is positive, the magnetization that extremizes that probability is something that is uniform, does not depend on x, and is pointing all in one direction.
Let's call it e hat 1. That choice of one direction is the symmetry breaking in the zero field limit. Of course that spontaneous symmetry breaking only occurs when t is negative, and for t positive m bar is 0. For t negative, just minimizing the expression t over 2 m squared plus u m to the fourth gives you m bar squared equals minus t over 4u. OK? So that's the most probable configuration.
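As a compact record of this saddle point, writing the exponent of the probability weight as in the expansion above, a sketch is:

```latex
P[\vec m] \propto \exp\left\{-\int d^d\mathbf{x}\left[\frac{t}{2}\,m^2 + u\,m^4
 + \frac{K}{2}\,(\nabla \vec m)^2 + \cdots\right]\right\},
\qquad
\vec m(\mathbf{x}) = \bar m\,\hat e_1,\quad
\bar m^2 = \begin{cases} 0 & t>0,\\[4pt] -\,t/4u & t<0. \end{cases}
```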
What this thing is probing is fluctuations, so let's expand around the most probable configuration. So let's say that I have thermally excited an m of x which is m bar plus a little bit, phi l, that varies from each location to another location; like if I'm looking at this critical opalescence, it's the variation in density from one location to another location. But that is all there is as long as I'm looking at the case of something that has only one component. If I have multiple components, I can also have fluctuations in the remaining n minus 1 directions, and this is a sum over, let's say, alpha going from 2 to n of phi t alpha e hat alpha.
So I have broken the fluctuations into two types. I've said that, let's say if you have n equals 2, your m bar would be pointing in some particular direction. And so phi l corresponds to increasing the length or decreasing the length whereas phi t corresponds to going in the orthogonal direction, which in general would be n minus 1 different components. OK?
So I want to ask, what's the probability of this set of fluctuations? All I need to do, and again this is x-dependent, is to substitute into my general expression for the probability, so for that I need a few things. One of the things that I need is the gradient of m squared. The uniform part has no gradient so I will either get the gradient from phi l squared or I will get the gradient of the n minus 1 component vector field phi t, that I will simply write as gradient of phi t squared. So phi t is an n minus 1 component vector.
I can ask, what is m squared? Basically that's the first term I need to put over there, but for m squared, I have to square this expression. I will get m bar squared plus 2 m bar phi l plus phi l squared; that comes from the component that is along e 1. All the other components here will add up to give me the magnitude squared of this transverse field that exists in the other n minus 1 directions.
The other term that I need is m to the fourth, and in particular we saw that it is absolutely necessary if t is negative to include the m to the fourth term, because otherwise the probability that we were writing just didn't make sense and we need to write expressions for probability that are physically sensible. So I just take the line above and square it. But I want to only keep terms to quadratic order. So to zeroth order I have m bar to the fourth, to first order I have 4 m bar cubed phi l, and then there are a bunch of terms that are order of phi squared.
Squaring this will give me 4 m bar squared phi l squared, but the cross term of these two will also give me 2 m bar squared phi l squared, for a total of 6 m bar squared phi l squared. And then the phi t squared comes simply from twice m bar squared phi t squared. And then there are higher order terms, cubic and fourth order in phi t and phi l, that I don't write, assuming that the fluctuations that I'm looking at are small around the most probable state. OK?
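Collecting the terms just enumerated, to quadratic order in the fluctuations this reads:

```latex
m^2 = \bar m^2 + 2\bar m\,\phi_l + \phi_l^2 + \vec\phi_t^{\,2},
\qquad
m^4 = \bar m^4 + 4\bar m^3\phi_l + 6\bar m^2\phi_l^2 + 2\bar m^2\vec\phi_t^{\,2}
 + \mathcal{O}(\phi^3).
```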
So if I stick with this quadratic order, then the probability of fluctuations across my system, characterized by phi l of x and phi t of x, is proportional to exponential of minus the integral d d x of the following. From the gradient term I have K over 2 gradient of phi l squared. Then I'm going to collect everything that has phi l squared in it: there is t over 2 phi l squared coming from the t over 2 m squared, and there is another phi l squared from over here that gets multiplied by u. If I pull the one-half out front, I will get one-half of t plus 12 u m bar squared, 12 rather than 6 because I factored out the one-half, times phi l squared. And then I have a term that is K over 2 gradient of the vector phi t squared. And then I have one-half of t plus, from 2 m bar squared multiplied by u, 4 u m bar squared, times phi t squared. And there are higher order terms that I will not write down. Yes? Did I--
AUDIENCE: You said phi l [INAUDIBLE].
PROFESSOR: Good. Yes, so the question is I immediately jumped to second order, so what happened to the linear term? There's a linear term here and there's a linear term here.
AUDIENCE: [INAUDIBLE]
PROFESSOR: Let's write them down. So the coefficient of phi l would be t m bar plus 4 u m bar cubed, and minimizing that original expression is exactly setting this first derivative to zero. So if you are expanding around an extremum, the most probable state, then by construction you're not going to get any terms that are linear in either phi l or phi t. OK? Yes?
AUDIENCE: Is there-- what's the reason for not including a term in our general probability, a term like [INAUDIBLE] of m squared?
PROFESSOR: We could. We said that essentially that amounts, over here, to an expansion in powers of the gradient, which if I go to Fourier space would become q squared, and the next one would be q to the fourth, et cetera. If you are looking at small q, or large wavelengths, then we are going to focus on the first few terms. But they exist, just as they existed for the phonon spectrum. We looked at the linear portion and then realized that going away from q equals 0 generates all kinds of other terms.
OK? I mean, these are very important things to ask again and again to ultimately convince yourself, because in reality that expansion that I have written has an infinity of terms in it. You have to always convince yourself that close enough to the critical point all of those other terms that I don't write down are not going to make any difference. OK? All right. So that's the weight.
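Putting the pieces of this discussion in one place, the quadratic weight assembled above can be summarized as (a reconstruction of what was written on the board):

```latex
P[\phi_l,\vec\phi_t] \;\propto\; \exp\left\{-\int d^d\mathbf{x}
\left[\frac{K}{2}(\nabla\phi_l)^2 + \frac{t+12u\bar m^2}{2}\,\phi_l^2
+\frac{K}{2}(\nabla\vec\phi_t)^2 + \frac{t+4u\bar m^2}{2}\,\vec\phi_t^{\,2}\right]\right\},
```

with no linear terms, since t m bar plus 4 u m bar cubed vanishes at the most probable value of m bar.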
What I'm going to do is the same thing that we did last time for the case of Goldstone modes, et cetera, which is to go to a Fourier representation. So any one of the components, be it longitudinal or transverse, I will write as a sum over q of e to the i q dot x times some Fourier component, divided by a square root of V, just so that the normalization will look simple.
And if I substitute for phi of x in terms of phi of q, just as we saw last time the probability will decompose into independent contributions for each q, because once you substitute it here, every quadratic term will have both an integral over x and sums over q and q prime, with e to the i (q plus q prime) dot x; the integration over x forces q and q prime to be the same up to a sign.
So then we find that the probability distribution as a function of these Fourier amplitudes phi l and phi t decomposes into a product. Basically each q mode is acting independently of all the others. And also at the quadratic level we see that there is no crosstalk between transverse and longitudinal, so we will have one weight for the transverse, one weight for the longitudinal.
And what's it actually going to look like? Essentially, let's say for phi l, it's going to be proportional to the exponential of something times phi l of q squared. From the gradient term, once I Fourier transform, I will get K q squared over 2. And then there's something that's not q dependent, which by convention I will write as K over 2 times xi l to the minus 2. It has to have dimensions of inverse length squared because q squared has dimensions of inverse length squared, so I will shortly define a length so that these two terms have the same dimensions; xi l is defined in that fashion.
And similarly I have an exponential that goes like minus K over 2 times q squared plus xi t to the minus 2, times phi tilde t of q squared, and this is a vector so there are n minus 1 components there. OK? And you can see that these two lengths, xi l and xi t, are potentially different. In fact, let's just write down what they are. So this coefficient K over xi l squared is defined to be t plus 12 u m bar squared. OK? Question?
AUDIENCE: [INAUDIBLE]
PROFESSOR: OK. Now this depends on whether you are at t positive or t negative-- it had better be positive since I have written it as one over some positive quantity. For t positive, m bar is 0 so this is just t. For t negative, m bar squared is minus t over 4u, so the 12 u m bar squared becomes minus 3t, and the whole thing becomes minus 2t. OK? And K over xi t squared is t plus 4u m bar squared. It is t for t positive. For t negative, substitute for m bar squared and it will give me 0. OK?
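In symbols, the two inverse squared lengths just evaluated are:

```latex
K\,\xi_l^{-2} = t + 12\,u\,\bar m^2 = \begin{cases} t & t>0,\\ -2t & t<0,\end{cases}
\qquad
K\,\xi_t^{-2} = t + 4\,u\,\bar m^2 = \begin{cases} t & t>0,\\ 0 & t<0.\end{cases}
```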
Actually the top one hopefully you recognize or you remember from last time. We had exactly this expression t and minus 2t when we calculated the susceptibility. This was the inverse susceptibility. In fact, I can be now more precise and call that the inverse of the longitudinal susceptibility. And what we have here is the inverse of the transverse susceptibility.
What does that mean? Let me remind you what susceptibility is. Susceptibility is you have a system and you put on a little bit of the field and then see how the magnetization responds.
If you are in the ordered phase, so that your system is spontaneously pointing in one direction, then if you put the field in this direction, you have to climb this Mexican hat potential and you have to pay a cost to do so. Whereas if I put the field perpendicular, all that happens is that the magnetization rotates, so it can respond without any cost, and that's what this is. These transverse fluctuations are really the Goldstone modes that we were discussing. So again, as we discussed last time, if you break a continuous symmetry you will have Goldstone modes, and these Goldstone modes are the ones that are perpendicular to the average magnetization, if you like. OK?
So now we have a prediction. We say that if I look at these phi phi fluctuations, I can pick a particular q, let's say, and here I have to put q star, the conjugate, in order to get something that is non-zero. Actually, let's put here q prime, and then pick two possibly different components, alpha and beta. Well, if I look at this average, since the weight is the product of contributions from different q's, the answer will be 0 unless q and q prime add up to 0. And if I'm looking at the same q, I had better make sure that I'm looking at the same component, because the longitudinal and transverse components, or any of the n minus 1 transverse components among each other, have completely independent Gaussian weights.
If I'm now looking at the same Gaussian, for the same Gaussian I can just immediately read off its variance, which is 1 over K times the quantity q squared plus the appropriate xi to the minus 2 for that direction, whether it's xi l or xi t; that potentially makes a difference. OK? Right. So now we have a prediction for our experimentalists. I said that these guys can go and measure the scattering as a function of angle, at small angle, and they can fit how much is scattered as a function of angle, fit it as a function of q. So we predict the following if they look at something like phi l squared. If you're thinking about the liquid gas system, that's really the only thing that you have, because there is no transverse component if you have a scalar variable.
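The prediction just stated, read off from the independent Gaussian weights with the normalization used above, is:

```latex
\big\langle \phi_\alpha(\mathbf{q})\,\phi_\beta(\mathbf{q}')\big\rangle
= \frac{\delta_{\alpha\beta}\;\delta_{\mathbf{q}+\mathbf{q}',\mathbf{0}}}
       {K\left(q^2+\xi_\alpha^{-2}\right)},
```

where xi alpha is xi l for the longitudinal component and xi t for any of the transverse components.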
We claim that if you go and look at those critical opalescence pictures that we saw, and do it more precisely, and see what happens as a function of the scattered wave number q, you will get a shape that is 1 over q squared plus xi to the minus 2. This kind of shape, which is called a Lorentzian, is indeed what you commonly see for all kinds of scattering line shapes. OK? So we have a prediction. Of course, the reason that it works is because in principle we know that this series will have higher order terms, as we discussed, like q to the fourth, q to the sixth, et cetera, but they fall way down here where you're not going to be seeing all that much anyway.
Now the place where this curve turns around, from being something that is dominated by 1 over K xi to the minus 2 to something that falls off as 1 over K q squared or maybe even faster, the borderline is this inverse length scale that we indicated, xi l to the minus 1. So now what happens if I go closer to the phase transition point? As I go closer to the phase transition point, t goes to zero, and this xi inverse goes towards zero. So if this is for some temperature above Tc and I go to some lower temperature, then what happens is that the curve will start higher and then [INAUDIBLE]. Yeah?
Actually it doesn't cross the other curve, which is what I drew. Just because it starts higher, it can bend and go and join this curve at a further point. Eventually when you go through exactly the critical point, then you get the union of all of these curves, which is a 1 over q squared type of curve. So right at the point where t equals 0, the prediction is that in the Lorentzian shape the constant that appears alongside q squared vanishes, and you will see 1 over q squared. OK?
Now the results of experiments in reality are very happily fitted by the Lorentzian when you're away from the critical point, but the claim is that when you are exactly at the critical point, it's not quite 1 over q squared. It seems to be slightly different. At Tc, the scattering appears to be more similar to 1 over q to the power 2 minus a small amount. That's where another critical exponent, eta, is introduced, so that's another thing that ultimately you have to try to figure out and understand. OK?
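As a rough numerical illustration of these line shapes (not from the lecture; the values of K, xi, and eta below are made up purely for illustration), a minimal sketch is:

```python
import numpy as np

def lorentzian(q, K=1.0, xi=10.0):
    """Predicted line shape away from Tc: S(q) proportional to 1/[K(q^2 + xi^-2)]."""
    return 1.0 / (K * (q**2 + xi**-2))

def at_tc(q, K=1.0, eta=0.04):
    """Claimed line shape right at Tc: S(q) proportional to 1/(K q^(2 - eta))."""
    return 1.0 / (K * q**(2.0 - eta))

q = np.logspace(-3, 1, 200)        # scattering wave numbers, arbitrary units
S_away = lorentzian(q, xi=10.0)    # plateau of height ~ xi^2/K for q << 1/xi,
                                   # crossing over to ~ 1/(K q^2) for q >> 1/xi
S_critical = at_tc(q)              # pure power law, no plateau

print("plateau height S(q -> 0):", lorentzian(1e-6, xi=10.0))
print("crossover scale q ~ 1/xi:", 1.0 / 10.0)
```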
Of course I drew the curves for the longitudinal component. If I look at the curves for the transverse components, and again, by appropriate choice of spin polarized neutrons you can decompose different components of scattering from the magnetization field of a piece of iron, for example. If you are above Tc, there is really no difference between longitudinal and transverse because there is no direction that is selected.
And you can see that the forms that you will get above Tc would be exactly the same. When you go below Tc, that's where the difference appears because the length scale that would appear for the longitudinal parameters would be finite and it corresponds to having to push the magnetization above this bottom of the Mexican hat potential whereas there is no cost in the other direction. So if you can probe in the fluctuations that would correspond to these Goldstone modes, you would see the 1 over q squared type of behavior. OK?
So the story that we were telling last time about the Goldstone modes, that they fluctuate a lot because of their low cost, certainly remains in this case. We have now explicitly separated out the longitudinal fluctuations, which are finite because they are controlled by this stiffness of going up from the bottom of the potential, whereas there is no stiffness associated with the transverse ones. OK? All right, so good. So we've talked about some of the things that are experimentally observed. Any questions?
Now we looked at things here in Fourier space, corresponding to this momentum transfer in the scattering experiment, but we can also ask about what is happening in physical space. That is, if I have a fluctuation at one point, how much does the influence of that fluctuation propagate in space? So for that I need to calculate things like phi-- let's say, do we need to put an index? why not--- phi l of x, phi l of x prime. Let's say we want to calculate this quantity. OK?
Now I can certainly decompose phi l of x in terms of these Fourier components. And so what do I get? I will get a sum over q-- maybe I should write it in one case explicitly, q and q prime, e to the i q dot x, e to the i q prime dot x prime. Two factors of root V giving me the V, and then I have phi l of q, phi l of q prime, and the expectation value will go over those.
Now we said that the different q's and q primes are uncorrelated. So here I immediately will have a delta function in q plus q prime, and if I'm looking at q prime equal to minus q, I have this factor of 1 over K times q squared plus xi l to the minus 2. OK? So then, due to the delta function, the whole thing becomes a sum over one q of 1 over V, e to the i q dot x minus x prime, because q prime was set to minus q, and then divided by K times q squared plus xi l to the minus 2. OK?
So then I go to the continuum limit of a large size; the sum over q gets replaced with an integral over q times the density of states. The V's disappear, I will have a factor of 2 pi to the d, and what I have is the Fourier transform of 1 over K times q squared plus xi l to the minus 2. And I will write this as minus 1 over K times a function I d that depends on the dimension d and clearly depends on the separation x minus x prime and the correlation length xi l. Why I said correlation length will shortly become apparent.
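Putting these steps together, the real-space correlation just described is, in the conventions used above:

```latex
\big\langle \phi_l(\mathbf{x})\,\phi_l(\mathbf{x}')\big\rangle
= \int \frac{d^d\mathbf{q}}{(2\pi)^d}\;
\frac{e^{\,i\mathbf{q}\cdot(\mathbf{x}-\mathbf{x}')}}{K\left(q^2+\xi_l^{-2}\right)}
\;\equiv\; -\,\frac{1}{K}\, I_d\!\left(\mathbf{x}-\mathbf{x}',\,\xi_l\right).
```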
So I introduce a function I d, which depends on x and xi, to be minus the integral d d q over 2 pi to the d of e to the i q dot x divided by q squared plus xi to the minus 2. If that xi to the minus 2 were not there, that's the integral that we did last time, and it was the Coulomb potential in d dimensions. So this presumably is related to that. And we can use the same trick that we employed to make it explicit last time around.
We can take the Laplacian of this potential I d, and what happens is that I will bring down two factors of i q, that is minus q squared, so the minus goes away. I will have the integral d d q over 2 pi to the d; I will have a q squared in the numerator, the denominator is q squared plus xi to the minus 2, and I have the Fourier factor. I add and subtract xi to the minus 2 in the numerator. OK?
The first part, if I divide by the denominator, is simply 1. The Fourier transform of 1 will give me a delta function. And then what I have is minus xi to the minus 2 times the same integral that I used to define I d, which with the minus sign in that definition becomes plus I d of x divided by xi squared. OK?
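In equation form, the definition and the relation just derived read:

```latex
I_d(\mathbf{x},\xi) \;\equiv\; -\int \frac{d^d\mathbf{q}}{(2\pi)^d}\;
\frac{e^{\,i\mathbf{q}\cdot\mathbf{x}}}{q^2+\xi^{-2}},
\qquad
\nabla^2 I_d \;=\; \delta^d(\mathbf{x}) + \frac{I_d}{\xi^{2}}.
```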
So whereas in the absence of xi you have the potential due to a charge, the presence of xi adds this additional term that corresponds to some kind of a damping. So this equation you have probably seen in the context of the screened Coulomb interaction, giving rise to the [INAUDIBLE] potential in three dimensions. We would like to look at it in d dimensions so that we know what the behavior is in general. OK?
So again, what I'm looking at is the potential that is due to a charge at the origin, so this I d of x in principle only depends on the magnitude of x and not on the direction of it. It is something that has general spherical symmetry in d dimensions. So I use that fact of spherical symmetry to write down what the expression for the Laplacian is. OK? We can again use Gauss's law, if you've forgotten, or whatever, but in the presence of spherical symmetry the general expression for the Laplacian in d dimensions is this. OK?
So if d were equal to 1, it would be a simple second derivative. And in higher dimensions you would have additional factors, as you see if you basically apply Gauss's law to shells around the origin. You can very easily convince yourself of that. This is some kind of an areal factor that comes from the d minus 1 dimensional surface. And then I can write this out: either the d by dx acts on dI by dx, and the x to the d minus 1 cancels to leave the second derivative, or it acts on x to the d minus 1, which gives me d minus 1 times x to the d minus 2, and together with the prefactor that gives me d minus 1 over x times dI by dx.
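With spherical symmetry, then, the equation to be solved away from the origin is:

```latex
\nabla^2 I_d = \frac{1}{x^{\,d-1}}\frac{d}{dx}\!\left(x^{\,d-1}\,\frac{dI_d}{dx}\right)
= \frac{d^2 I_d}{dx^2} + \frac{d-1}{x}\,\frac{dI_d}{dx}
= \frac{I_d}{\xi^{2}} \qquad (x \neq 0).
```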
So the equation that I have to solve is this object equals I over xi squared, plus a delta function [INAUDIBLE]. OK? Now if you were in one dimension you wouldn't have this second term at all, and you would have, except at x equals 0, the second derivative proportional to the function divided by xi squared. So you would immediately write, away from x equals 0, that the answer is e to the minus x over xi. Actually, proportional to that, because you have to fit the amplitude, et cetera.
Now in higher dimensions what happens is that this solution gets modified, falls off with some additional factor of x to the minus p, but we have to be somewhat careful with this. So let's look at this a little bit more closely. If I were to substitute this ansatz into this expression, what would happen? What I need to do is to take the first and the second derivative.
Now if I take the first derivative, the derivative either acts on the exponential factor, which gives me a factor of minus 1 over xi and then the exponential back, so I get the I back; whenever I have an exponential and take a derivative, I just get minus 1 over xi times the exponential. Or, if I act on x to the minus p, I will get minus p x to the minus p minus 1, which differs from the original solution by a factor of minus p over x. OK? If I now take two derivatives, I can take the second derivative acting on I itself, and then dI by dx will give me I back with this factor.
So I will get 1 over xi squared plus 2 p over x xi plus p squared over x squared, times I. But that's not the whole story, because the derivative can also leave I aside and act on the p over x, which if it does so will give p over x squared, so that will be an additional term here. So that's the second derivative. So now what I have done is I have evaluated, with this ansatz, the terms that should appear in that equation away from x equals 0, so let's substitute. Everything now I have is proportional to I, so I just forget about the I.
I have from the second derivative 1 over xi squared plus 2 p divided by x xi plus p times p plus 1 divided by x squared. And then I have d minus 1 times the first derivative, so I have minus d minus 1 over xi, minus d minus 1 times p over x; both of these terms get divided by an additional factor of x because of the d minus 1 over x out front, so I will get x xi and x squared in the denominators. And what I have on the right hand side, away from the origin, is I over xi squared. Dividing by the I, I have 1 over xi squared.
Now, moving away from x equals 0, I can organize things in powers of 1 over x. The most important term is the constant, and clearly you can see that I chose the decay constant of the exponential correctly, as evidenced by the cancellation of the constant 1 over xi squared terms on the two sides. But now I have two types of terms left: terms that are proportional to 1 over x squared and terms that are proportional to 1 over x xi, and there's no way that I can simultaneously satisfy both of these.
So the assumption that the solution of this equation is a single exponential divided by a power law is in fact not correct. But it can be correct in two regimes. For x that is much less than xi, the more important term is the 1 over x squared part; as x goes towards 0, 1 over x squared is more important than 1 over x xi. OK, so then what I do is match those terms that go as 1 over x squared, and they tell me that p times p plus 1 should be p times d minus 1. OK?
And that immediately, getting rid of a factor of p, tells me that the p in this regime is d minus 2. OK? Now the d minus 2, you recall, is what we had for the Coulomb potential. Right? So basically at short distances you are still not screened by this additional term; you don't see its effect and you get essentially the standard Coulomb potential. Whereas if you are far away, what you have to match are the terms that are proportional to 1 over x xi, because they're more important than 1 over x squared. And there you get that 2p should be d minus 1, or p should be d minus 1 over 2. OK?
So let's just plot that function over here. So if I plot this function as a function of the separation x, and it only depends on the magnitude; in fact, what I should plot is minus I d, because it's minus I d divided by K that gives the fluctuations. I find that it has two regimes. Let's say above two dimensions, you have one regime that is a simple Coulomb type of potential, and the Coulomb potential last time we actually normalized properly.
We saw that it is x to the 2 minus d, divided by S d times d minus 2. The e to the minus x over xi I can in fact ignore in this regime, because I'm at distances x that are much less than xi, so the exponential term has not kicked in yet. Whereas when I go to large distances, the exponential term does kick in. So the overall behavior is e to the minus x over xi; that's the most dominant behavior that you have. On top of that, we have a power law, which is x to the power of d minus 1 over 2 in the denominator.
Now those of you who know what the screened Coulomb potential is know that the screened Coulomb potential in three dimensions is the 1 over r, the Coulomb potential, and you put an exponential on top of that. There is no difference in the powers that you have whether or not you are smaller than this correlation length or larger. You can check here, if I put d equals to 3, this becomes a 1 over x and this becomes a 1 over x. So it's just an accident of three dimensions that the screened Coulomb potential is the 1 over r with an exponential on top. In general dimensions you have different powers.
But having different powers also means that somehow the amplitude that goes over here has to carry dimensions so that it can be matched to what we have here at this distance of xi. And so if I try to match those terms, roughly when you're at x of order xi, what I would do is I would put S d times d minus 2 in the denominator here too, and a factor of xi to the 3 minus d over 2. And now you can check that the two expressions will have the right dimensions and will match roughly at x of order xi. OK?
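Collecting the two regimes, with the amplitudes matched as just described (overall constants of order one aside), the result is:

```latex
-\,I_d(x,\xi) \;\simeq\;
\begin{cases}
\dfrac{x^{\,2-d}}{(d-2)\,S_d}, & x \ll \xi,\\[12pt]
\dfrac{\xi^{\,(3-d)/2}}{(d-2)\,S_d}\;\dfrac{e^{-x/\xi}}{x^{\,(d-1)/2}}, & x \gg \xi,
\end{cases}
```

where S d is the d-dimensional solid angle; the two forms agree, up to numbers of order one, when x is of order xi.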
So essentially what it says is that if I ask, in my system, what is the nature of these fluctuations, how correlated they are, they would move more or less together, although with correlations falling off as a power law as if you were at the critical point, because we said that at the critical point, or when you have Goldstone modes, you have just this term. But beyond that they know that you are not exactly sitting at the critical point, and then they are no longer correlated. So basically there is this length scale that we also saw when we were looking at this critical opalescence, and we were seeing things that were moving together. That length scale over which things are moving together is this parameter xi that we have defined over here.
So what we have is-- where do we want to put it? Let's put it here. A correlation length, which measures the extent to which things are fluctuating together; although when I'm saying fluctuating together, there are still correlations that are falling off, but they're not falling off exponentially. They start to fall off exponentially when you are beyond this length scale xi. And we have the formula for xi.
So what we find is that if I were to invert that, for example, what I find for xi l as a function of t is that it is simply the square root of K over t when I am on the t positive side. When I go to the t negative side, it just becomes the square root of K over minus 2t. OK? So this correlation length, as I indicated, has behavior close to a transition where there's a divergence. We can parametrize those divergences through something like T minus Tc to the exponent minus nu, potentially different on the two sides of the transition. But this t is simply proportional to the real T minus Tc, so we conclude that nu plus is the same as nu minus, which we just indicate by nu, and it should be one-half.
The amplitudes themselves depend on all kinds of things. We don't know much about them. But we can see that the amplitude ratio, B plus over B minus, if I were to divide those two, is universal; it gives me a factor of the square root of 2. OK? If I were to plot xi t, for example: on the high temperature side xi t and xi l are of course the same.
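In summary, the saddle-point prediction for the longitudinal correlation length is:

```latex
\xi_l = \begin{cases} \sqrt{K/t}, & t>0,\\[4pt] \sqrt{K/(-2t)}, & t<0,\end{cases}
\qquad
\xi_{\pm} \simeq B_{\pm}\,|T-T_c|^{-\nu_{\pm}},\quad
\nu_{+}=\nu_{-}=\tfrac{1}{2},\quad
\frac{B_{+}}{B_{-}}=\sqrt{2}.
```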
On the low temperature side, we said that the Goldstone modes have these long range correlations. They fall off, or grow, according to the Coulomb potential, but there is no length scale, so in some sense the correlation length for the transverse modes is always infinity. OK. Now actually in the second lecture, what I said was that the fact that a response function such as the susceptibility diverges immediately tells you that there have to be long range correlations, so we had predicted before that xi has to diverge. But we were not sufficiently precise about the way that it does, so let's try to do that.
Let's see, the relationship more precisely between susceptibility and these correlation lengths. So what we said more generally was that the susceptibilities, up to various factors of beta, et cetera, that are not that important, are related to the integrated magnetization-magnetization connected correlation. So basically, what I have to do is to look at m minus its average at x, times m minus its average at some other point, which means that what I'm really looking at is the phi phi averages. OK?
Now what we have shown right now is that these averages are significant. These phi phi correlations are significant only over a distance that is this correlation length, and then they die off. So as far as scaling and things like that are concerned, we could basically terminate this integration at xi. And when we are looking at distances that are below that, you don't see the effect of the exponential; you just see the Coulomb power law, so you would see here fluctuations that decay as x to the 2 minus d. Right?
So essentially what you're doing is integrating x to the 2 minus d over d dimensions of space. So you can see that it immediately gets related to the square of the correlation length: x to the 2 minus d times d d x, the d parts cancel, there's a 2 that remains, and that gives you xi squared. If you like, you can write it in spherical coordinates, et cetera, but the dimensions have to work out to be something like this. So now we can-- yes?
AUDIENCE: Just to clarify, when you say phi of x and phi of 0, are those both longitudinal or both transverse?
PROFESSOR: I wasn't precise, so if I am thinking about chi l, these will be both longitudinal. OK? And then we have this expression. If I'm talking about chi t and I'm above the transition temperature, there's no problem. If I'm below the transition temperature, I can use the same thing but have to set xi to infinity, so I have to integrate all the way to infinity. OK?
But now you can see that the divergence of the susceptibility is very much related to the divergence of correlations, in some sense very precisely, in that if this goes like t to the minus gamma and the correlation length diverges as t to the minus nu, then gamma should be 2 nu. And indeed, our nu is one-half, and we had seen previously that gamma was 1, which is 2 nu.
Secondly, the amplitude ratio for the susceptibility should be the square of the amplitude ratio for the correlation length, and again this is something that we have seen before: the amplitude ratio for the susceptibility was 2, the square of the square root of 2.
Now it turns out that all of this is again within this saddle point approximation, looking at the most probable state, et cetera. Because what we find in reality is that at the critical point, the correlations don't decay simply according to the Coulomb law; there is this additional eta, which is the same eta that we had over here. OK? And because of that eta, here what you would have is 2 minus eta, and you get an example of a number of things that we will see a lot later on. That is, even if you don't know what the exponents are, you know that there are relationships among the exponents. This is an example of an exponent identity, called the Fisher exponent identity, and there are several of these exponent identities. OK?
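The relations just described can be written compactly (the first two at the level of the saddle point calculation, the last being the Fisher identity):

```latex
\chi \;\sim\; \int^{\xi} d^d\mathbf{x}\; x^{\,2-d} \;\sim\; \xi^{\,2},
\qquad
\gamma = 2\nu,
\qquad
\frac{A_{+}}{A_{-}} = \left(\frac{B_{+}}{B_{-}}\right)^{2},
\qquad
\gamma = (2-\eta)\,\nu,
```

where A plus and A minus are the susceptibility amplitudes and B plus and B minus the correlation length amplitudes.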
But that also brings us to the following- that we did all of this work and we came up with answers for the singular behaviors at critical points and why they are universal. And actually as far as the thermodynamic quantities were concerned, all we ended up doing was to write some expression that was analytical and then find its minimum. And we found that the minimum of an analytical expression always has the same type of singularities, which we can characterize by these exponents. So maybe it's now a good time to check how these match with the experiment.
So let's look at the various types of phase transition, an example of a material that undergoes that phase transition, and what the exponents alpha, beta, gamma, and nu are that are experimentally obtained.
AUDIENCE: What is this again?
PROFESSOR: The material that undergoes the transition. So for example, when we are talking about the ferromagnet to paramagnet transition, you could look at materials such as iron or nickel, and if we ask, in the context of the systematics that we were developing for Landau-Ginzburg, what they correspond to, they are things that have three-component fields, so they correspond to n equals 3. Of course everything that I will be talking about in this column will correspond to 3-dimensional systems. Later we'll talk also about 2-dimensional and other systems, but let's stick with the real 3-dimensional world. So that would be one set. We will look at superfluidity.
Let's say in helium, which we discussed last semester; that corresponds to n equals 2. We will talk about various examples of the liquid gas transition, which correspond to a scalar density difference. And this could be anything from, say, carbon dioxide, neon, argon, whatever gas we like. And we'll also talk about superconductors, which to all intents and purposes should have the same type of symmetries as superfluids; an example of a quantum system, it should be n equals 2, and there are lots of different cases such as aluminum, copper, whatever.
So what do we find for the exponents? Actually, for the ferromagnetic systems the heat capacity does not diverge. It has a discontinuous derivative at the transition and kind of goes in a manner such that, if you take its derivative, the derivative appears to be singular and corresponds to an alpha that, if you try to fit it, is slightly negative.
The superfluid has this famous lambda shape for its heat capacity, and a lambda shape is very well fitted by a logarithm type of function. The logarithm is the limit of a power law as the exponent goes to 0, so we can more or less indicate that by an alpha of 0, or really a divergent log. The liquid gas transition does have a weakly divergent heat capacity, so the alpha is around 0.1. The values of beta are all less than one-half: for the ferromagnetic systems it is of the order of 0.4; it is almost one-third, slightly less, for superfluid helium; and less than that for the liquid gas system.
Gamma is something like 1.4 for the ferromagnets. We don't have a gamma for the superfluid; you can't put a magnetic field on the superfluid, there's nothing that is conjugate to the quantum phase. For the liquid gas it is more like 1.3. Nu is 0.-- it's not-- [INAUDIBLE]. OK. So what I have here is more like 1.3, 1.24 for the gammas, and 0.7, 0.67, 0.63 for the nus. OK?
Now these are different from the predictions that we had. The predictions that we had were: alpha equal to 0, a discontinuity; beta equal to one-half; gamma equal to 1; nu equal to one-half. And actually these predictions that we just made happen to match extremely well with all kinds of superconducting systems that you look at.
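As a rough tabulation of the numbers quoted in this discussion (the assignment of some values to particular rows is a plausible reading of the lecture, not a verbatim record of the board):

```latex
\begin{tabular}{llccccc}
Transition      & Example        & $n$ & $\alpha$            & $\beta$       & $\gamma$              & $\nu$ \\
Ferromagnetic   & Fe, Ni         & 3   & $\lesssim 0$        & $\approx 0.4$ & $\approx 1.4$         & $\approx 0.7$ \\
Superfluid      & He             & 2   & $\approx 0$ (log)   & $\approx 1/3$ & --                    & $\approx 0.67$ \\
Liquid-gas      & CO$_2$, Ne, Ar & 1   & $\approx 0.1$       & $< 1/3$       & $\approx 1.3$, $1.24$ & $\approx 0.63$ \\
Superconductors & Al, \ldots     & 2   & $0$ (discontinuity) & $1/2$         & $1$                   & $1/2$ \\
\end{tabular}
```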
So again it is important to state that within a particular class, like liquid gas, you can look at a lot of different systems. We saw that curve in the second lecture. They all correspond to this same set of exponents, and similarly for the different magnets and so forth. So there is something that is universal, but our Landau-Ginzburg approach, with this looking at the most probable state and fluctuations around it, has not captured it for most cases, though for some reason it has captured it for the case of superconductors. So we have that puzzle, and starting from next lecture we'll start to unravel it.