Ineffective Theory

Fluctuations and Phases

(I’m writing a post introducing the sign problem, and it occurred to me that there’s a little point about statistical physics that I’ve never seen written down explicitly. Here it is. Be warned: it’s not rigorous. Take it as intuition, and don’t take it too seriously.)

A statistical system is defined by its partition function. This is a sum (or integral, I don’t care) over all possible states, and it might look something like this:

$$ Z(T,\mu) = \sum_s e^{(-E_s + \mu M_s)/T} $$

Here \(E_s\) is the internal energy of state \(s\), \(M_s\) is the magnetization (“how many spins are pointing up?”), and \(\mu\) is an external magnetic field.
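As a concrete (if tiny) illustration, here's a brute-force evaluation of this partition function in Python. The formula above doesn't specify a model, so the short open-boundary Ising chain below, its length, and its coupling are all assumptions made purely for the example:

```python
# A brute-force partition function for a toy model: a short open Ising
# chain of N spins (the model and parameters are assumptions for the
# example; the formula in the text is completely general).
import itertools
import math

N, J = 6, 1.0  # six spins, nearest-neighbor coupling J

def energy(s):
    # E_s = -J * sum_i s_i s_{i+1}, open boundaries
    return -J * sum(s[i] * s[i + 1] for i in range(len(s) - 1))

def magnetization(s):
    # M_s: net magnetization in the +/-1 spin convention
    return sum(s)

def Z(T, mu):
    # Z(T, mu) = sum over all 2^N states of exp((-E_s + mu*M_s)/T)
    return sum(
        math.exp((-energy(s) + mu * magnetization(s)) / T)
        for s in itertools.product([1, -1], repeat=N)
    )

print(Z(1.0, 0.5))
```

Two sanity checks: at very high temperature every state is weighted equally, so \(Z \to 2^N\); and since flipping every spin leaves \(E_s\) unchanged while negating \(M_s\), we have \(Z(T,\mu) = Z(T,-\mu)\).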

In statistical physics, we’re always interested in expectation values. The expectation value of the magnetization, for instance, is

$$ \langle M \rangle = \frac{\sum_s M_s e^{(-E_s + \mu M_s)/T}}{\sum_s e^{(-E_s + \mu M_s)/T}} $$

Note that this is just a particular derivative of the partition function, \(\langle M \rangle = T \frac{\partial}{\partial \mu}\log Z\). We can also ask how much the magnetization fluctuates. Again, it's just a derivative of the partition function.

$$ \langle M^2 \rangle - \langle M\rangle^2 \propto \frac{\partial^2}{\partial \mu^2} \log Z $$

In fact, every expectation value is just a derivative of the partition function. That’s why we care about the partition function!

There’s real physical content to this statement, and it’s worth slowing down to appreciate it. If you have a little box fluctuating around equilibrium, those fluctuations tell you something about what would happen if, say, you raised the magnetic field. For instance, if the magnetization is fluctuating a lot, that tells you that when you raise the magnetic field, the magnetization will respond strongly. That’s why the second derivative above is termed the “magnetic susceptibility”.
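This connection is easy to check numerically. The sketch below (reusing the same kind of enumerable toy chain, again as an assumed model) computes the susceptibility two ways: from the equilibrium fluctuations \(\left(\langle M^2\rangle - \langle M\rangle^2\right)/T\), and from the measured response of \(\langle M\rangle\) to a small change in \(\mu\):

```python
# Check the fluctuation/response connection on a toy open Ising chain
# (model and parameter values are assumptions for the illustration).
import itertools
import math

N, J = 6, 1.0
T, mu = 1.5, 0.3
states = list(itertools.product([1, -1], repeat=N))

def energy(s):
    return -J * sum(s[i] * s[i + 1] for i in range(N - 1))

def moments(T, mu):
    # <M> and <M^2> by direct enumeration over all 2^N states.
    w = [math.exp((-energy(s) + mu * sum(s)) / T) for s in states]
    Z = sum(w)
    m1 = sum(wi * sum(s) for wi, s in zip(w, states)) / Z
    m2 = sum(wi * sum(s) ** 2 for wi, s in zip(w, states)) / Z
    return m1, m2

# Susceptibility from the equilibrium fluctuations at (T, mu)...
m1, m2 = moments(T, mu)
chi_fluct = (m2 - m1 * m1) / T

# ...and from the actual response of <M> to a small change in mu.
h = 1e-5
chi_resp = (moments(T, mu + h)[0] - moments(T, mu - h)[0]) / (2 * h)

print(chi_fluct, chi_resp)  # the two numbers agree
```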

The other thing we care about in statistical physics is the phase transition. Roughly speaking, the idea is that for \(T < T_c\), the system has one qualitative behavior, and for \(T > T_c\), it has a different qualitative behavior. This is a blurry definition, of course, so two people can spend quite a lot of breath arguing over order parameters and first- vs. second-order transitions and so on. Whatever the details: at the critical temperature \(T_c\) (or maybe the critical field strength \(\mu_c\), or some other parameter), there’s some qualitative change in behavior.

Now remember the lesson above. Fluctuations about equilibrium contain information about the system at other values of the parameters. Not just at nearby values of the parameters, but in principle, far away as well. After all, if you know all about the fluctuations at \(T_0\) and \(\mu_0\), you essentially have a Taylor expansion of the partition function about that point, and that expansion can be used to calculate the properties of the system at an arbitrary, different point \((T_1,\mu_1)\).
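Here's a sketch of that Taylor-expansion claim: measure the cumulants of \(M\) (i.e. the fluctuations) at \(\mu_0\), use them to build a truncated Taylor series for \(\log Z\), and compare against the exact value at a different \(\mu_1\). The toy model and all parameter values are, once more, assumptions for the sake of the demonstration:

```python
# Taylor-expand log Z about mu_0 using only the fluctuations (cumulants
# of M) measured there, then compare against the exact value at mu_1.
# Toy open Ising chain again; all parameters are illustrative assumptions.
import itertools
import math

N, J, T = 6, 1.0, 1.5
states = list(itertools.product([1, -1], repeat=N))

def energy(s):
    return -J * sum(s[i] * s[i + 1] for i in range(N - 1))

def logZ(mu):
    return math.log(sum(math.exp((-energy(s) + mu * sum(s)) / T)
                        for s in states))

def cumulants(mu):
    # First four cumulants of M, built from the central moments at mu.
    w = [math.exp((-energy(s) + mu * sum(s)) / T) for s in states]
    Z = sum(w)
    mean = sum(wi * sum(s) for wi, s in zip(w, states)) / Z
    cm = [sum(wi * (sum(s) - mean) ** k for wi, s in zip(w, states)) / Z
          for k in range(5)]
    return [mean, cm[2], cm[3], cm[4] - 3 * cm[2] ** 2]

mu0, mu1 = 0.1, 0.3
k = cumulants(mu0)
d = mu1 - mu0
# d^n(log Z)/dmu^n = kappa_n / T^n, so the truncated Taylor series is:
approx = logZ(mu0) + sum(k[n - 1] / (math.factorial(n) * T ** n) * d ** n
                         for n in range(1, 5))
print(approx, logZ(mu1))  # close, since both points sit in the same phase
```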

Except, not if the Taylor expansion doesn’t converge at that different point. Should the partition function possess some sort of nonanalytic behavior, the fluctuations of the system at \((T_0,\mu_0)\) won’t tell you anything about the system at \((T_1,\mu_1)\). That is the essence of a phase transition: the fluctuations in one phase don’t tell you about the behavior of another phase; they only tell you about the behavior of systems in the same phase.

Of course, if you have a first-order phase transition, then the Taylor expansion might converge, but to the wrong answer, describing a metastable phase.

One last point: there are many sorts of nonanalyticity! A function can be smooth (in the sense that all of its derivatives exist at every point) but still nonanalytic. This is what happens at the BKT transition.
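The standard textbook example of this phenomenon (not the BKT free energy itself, just the same kind of smoothness without analyticity) is a function like \(e^{-1/x}\), glued to zero:

```python
import math

# The classic counterexample: smooth everywhere, yet nonanalytic at 0.
# (A textbook function, not the BKT free energy itself.)
def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Every derivative of f vanishes at x = 0, so the Taylor series about 0
# sums to the zero function. Knowing everything about f at x <= 0 tells
# you nothing about the x > 0 side.
print(f(1.0))  # e^{-1} =~ 0.3679, while the Taylor series predicts 0
```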