Ineffective Theory

Observing Wavefunctions

At a conference a couple years ago, I was told the story of an argument that occurred during a seminar. The speaker was listing measurable properties of a system (you know, “observables”) and included the wavefunction in this list. Naturally, he was challenged by an audience member: there is no Hermitian operator whose measurement reveals the wavefunction, so of course it can’t be considered an observable. The speaker countered with some form of “who cares? I can measure it”. The two went several more rounds, with the questioner trying to get the speaker to admit that the wavefunction is “not an observable”, the speaker trying to get the questioner to admit that it can (under appropriate circumstances) be measured, and absolutely no progress being made.

After a few rounds of this, the speaker made the mistake of again using the word “observable”, instead of simply claiming “it can be measured”. At this point, the questioner pulled out his phone, set it to record, and challenged his opponent: “Say the wavefunction is an observable. Say it one more time, I dare you!” (#mycameraismyweapon?)

So, is the wavefunction an observable? Can it be measured? Are the two concepts the same? (They really ought to be!)

Suppose it’s your birthday, and you receive a little quantum system in a box as a present (happy birthday!). There’s no way for you to determine the wavefunction. You can measure the expectation values of a set of commuting Hermitian operators, but after those measurements, the wavefunction has changed in an unpredictable way. Moreover, the no-cloning theorem prevents you from making a copy of the wavefunction before doing destructive measurements. It would be a faux pas to go back to your friend and ask for another copy of the same system (hey, it was hand-made!), so you’re just gonna have to live in ignorance.
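To make this concrete, here’s a minimal sketch (mine, not anything from the post’s argument) of why a single copy is so uninformative: measuring one qubit yields a single bit, and the original amplitudes are destroyed in the process.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "gift": a qubit in an unknown state |psi> = a|0> + b|1>.
theta = 1.234  # hidden parameter; the recipient never learns it
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Measure in the computational (Z) basis.
p0 = abs(psi[0]) ** 2
outcome = 0 if rng.random() < p0 else 1

# The state collapses; the amplitudes a and b are gone for good.
post_state = np.array([1.0, 0.0]) if outcome == 0 else np.array([0.0, 1.0])
print(f"outcome = {outcome}, post-measurement state = {post_state}")
```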

Thus the wavefunction of the quantum system is not measurable. We already know it’s not “an observable” (in the sense of there being no Hermitian operator blah blah blah), so the two concepts match. Good!

What was the speaker on about? Well, some friends are better than others. A less-devoted friend might just give you a store-bought quantum system. If you break it, you can always go to the store and get another one. If you trust the manufacturing process to be consistent, you can perform destructive measurements to your heart’s content. This is called quantum tomography: as long as you have a large number of systems which you know share the same wavefunction, you can determine that wavefunction to arbitrary precision by measuring enough different expectation values.
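Here’s a rough sketch of what that looks like for a single qubit, assuming an endless supply of identically prepared copies. Estimating the three Pauli expectation values pins down the state via $\rho = \frac{1}{2}(I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)$. The code (and the hidden state in it) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The manufacturing process: every copy comes out in this (hidden) state.
psi = np.array([np.cos(0.6), np.exp(0.3j) * np.sin(0.6)])
rho_true = np.outer(psi, psi.conj())

def estimate_expectation(pauli, n_copies):
    """Estimate <pauli> by measuring n_copies fresh systems in its eigenbasis."""
    eigvals, eigvecs = np.linalg.eigh(pauli)
    probs = np.abs(eigvecs.conj().T @ psi) ** 2  # Born rule
    outcomes = rng.choice(eigvals, size=n_copies, p=probs)
    return outcomes.mean()

n = 100_000  # destructive measurements are fine; the store has plenty in stock
rho_est = 0.5 * (I + sum(estimate_expectation(P, n) * P for P in (X, Y, Z)))

print(np.round(rho_true, 3))
print(np.round(rho_est, 3))  # agrees with rho_true up to ~1/sqrt(n) noise
```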

To summarize: the wavefunction of a system is neither an observable nor a measurable quantity. However, if you have a black box that produces quantum systems in some reliable, reproducible way, then the wavefunction being produced can be measured, and should be called an observable. The distinction to bear in mind is that although we talk about measuring “the wavefunction of the quantum system”, what we’re really measuring is a property of the machine producing the systems! The fact that there’s no corresponding Hermitian operator acting on the quantum system isn’t concerning at all. The operator of interest — the expectation value we’re measuring when we do quantum tomography — acts on the black box itself.

One last point. Suppose you have an infinite-volume system, in the ground state for simplicity. Oh, let’s also assume the system is gapped (no arbitrarily long-range correlations). Now, although this is “only one” quantum system, since it’s in the infinite-volume limit you can divide it into arbitrarily many, arbitrarily large, arbitrarily well-separated pieces. Thanks to the gap, those pieces are effectively uncorrelated copies of the same state, which is exactly the supply of identical systems that tomography requires. The wavefunction of the vacuum is an observable, after all!

Links for October 2020

How willing are scientists to change the direction of their research, in exchange for money? From NIH data, Kyle Myers concludes that the switching costs are large. (Here is an ungated draft.) Of course, “science” is heterogeneous. Are these large switching costs primarily due to the purchase of new equipment, or the need to re-train in a new field? Anecdotal evidence suggests that switching costs are much lower in, say, mathematics or theoretical physics. I looked for an answer, but all I found was George Borjas complaining about immigration.

The MIP* = RE proof (which I hope one day to understand well enough to write about) had a bug, now fixed. One of the authors writes about the experience.

A room-temperature superconductor has been created, using very high pressures.

The House of Representatives published a report on antitrust issues regarding the big technology companies. Matt Stoller argues that this reflects a larger shift towards strong anti-monopoly sentiment, which is now obvious to everybody. I believe this, because Tyler Cowen has, for a little over a year now, been singing the praises of large, monopoly-like businesses.

Closely related: does concentration reduce labor’s share of income? See also Cowen’s comment.

Tom Lehrer has released the lyrics of his songs into the public domain. The big news here is that he hadn’t bothered to do that previously, either out of sloth or sheer disregard for the notion of copyright law.

The correspondence between git repositories and Github repositories is not one-to-one. If you fork a repository on Github, Github doesn’t keep the two repositories entirely separate. There have been mild security concerns related to this before. Most recently: the RIAA submitted a DMCA takedown notice to Github for youtube-dl. Github complied, of course, and posted the takedown notice to the dmca repository, as usual. An enterprising fellow then forked the dmca repository and added the source code of youtube-dl to his fork. As a result, the source code that the RIAA wanted taken down is, as I write this, visible through Github’s repository of takedown notices. Discussion on Hacker News.

Fluctuations and Phases

(I’m writing a post introducing the sign problem, and it occurred to me that there’s this little point about statistical physics that I’ve never seen written down explicitly. Here it is. Be warned: it’s not rigorous. Take it as intuition, not too seriously.)

A statistical system is defined by its partition function. This is a sum (or integral, I don’t care) over all possible states, and it might look something like this:

$$ Z(T,\mu) = \sum_s e^{(-E_s + \mu M_s)/T} $$

Here $E_s$ is the internal energy of state $s$, $M_s$ is the magnetization (“how many spins are pointing up?”), and $\mu$ is an external magnetic field.
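As a toy example (my own, for illustration), take $N$ independent spins $s_i = \pm 1$ with $E_s = 0$ and $M_s = \sum_i s_i$. The sum over states can be brute-forced and checked against the closed form $Z = (2\cosh(\mu/T))^N$:

```python
import itertools
import numpy as np

N, T, mu = 6, 1.5, 0.4

# Brute-force sum over all 2^N spin configurations.
Z = sum(np.exp(mu * sum(spins) / T)
        for spins in itertools.product([-1, 1], repeat=N))

print(Z, (2 * np.cosh(mu / T)) ** N)  # the two agree
```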

In statistical physics, we’re always interested in expectation values. The expectation value of the magnetization, for instance, is

$$ \langle M \rangle = \frac{\sum_s M_s e^{(-E_s + \mu M_s)/T}}{\sum_s e^{(-E_s + \mu M_s)/T}} $$

Note that this is just a particular derivative of the partition function, $\langle M \rangle = T \frac{\partial}{\partial \mu}\log Z$. We can ask about how much the magnetization fluctuates, as well. Again, just a derivative of the partition function.

$$ \langle M^2 \rangle - \langle M\rangle^2 \propto \frac{\partial^2}{\partial \mu^2} \log Z $$

In fact, every expectation value is just a derivative of the partition function. That’s why we care about the partition function!
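Continuing the toy model from above, we can check both identities numerically, with finite differences standing in for the $\mu$-derivatives (again, just an illustrative sketch):

```python
import itertools
import numpy as np

N, T = 6, 1.5

def observables(mu):
    """Return Z, <M>, and <M^2> - <M>^2 for the toy model (E_s = 0)."""
    states = list(itertools.product([-1, 1], repeat=N))
    M = np.array([sum(s) for s in states])
    w = np.exp(mu * M / T)  # Boltzmann weights
    Z = w.sum()
    avg_M = (M * w).sum() / Z
    var_M = (M**2 * w).sum() / Z - avg_M**2
    return Z, avg_M, var_M

mu, h = 0.4, 1e-4
Zm, _, _ = observables(mu - h)
Z0, avg_M, var_M = observables(mu)
Zp, _, _ = observables(mu + h)

dlogZ = (np.log(Zp) - np.log(Zm)) / (2 * h)
d2logZ = (np.log(Zp) - 2 * np.log(Z0) + np.log(Zm)) / h**2

print(avg_M, T * dlogZ)       # <M> = T d(log Z)/d(mu)
print(var_M, T**2 * d2logZ)   # <M^2> - <M>^2 = T^2 d^2(log Z)/d(mu)^2
```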

There’s real physical content to this statement, and it’s worth slowing down to appreciate it. If you have a little box fluctuating around equilibrium, then those fluctuations tell you something about what would happen if, say, you raised the magnetic field. For instance, if the magnetization is fluctuating a lot, that tells you that when you raise the magnetic field, the magnetization will respond quickly. That’s why the second expectation value above is termed the “magnetic susceptibility”.

The other thing we care about in statistical physics is the phase transition. Roughly speaking, the idea is that for $T < T_c$, the system has one qualitative behavior, and for $T > T_c$, the system has a different qualitative behavior. This is a blurry definition, of course, so two people can spend quite a lot of breath arguing over order parameters and first- vs. second-order transitions and so on. The point is that at the critical temperature $T_c$ (or maybe the critical field strength $\mu_c$, or some other parameter), there’s some qualitative change in behavior.

Now remember the lesson above. Fluctuations about equilibrium contain information about the system at other values of the parameters. Not just at nearby values of the parameters, but in principle, far away as well. After all, if you know all about the fluctuations at $T_0$ and $\mu_0$, you essentially have a Taylor expansion of the partition function about that point, and that expansion can be used to calculate the properties of the system at an arbitrary, different point $(T_1,\mu_1)$.

Except, not if the Taylor expansion doesn’t converge at that different point. Should the partition function possess some sort of nonanalytic behavior, then the fluctuations of the system at $(T_0,\mu_0)$ won’t tell you anything about the system at $(T_1,\mu_1)$. That is the essence of a phase transition. The fluctuations in one phase don’t tell you about the behavior of another phase — they only tell you about the behavior of systems in the same phase.

Of course, if you have a first-order phase transition, then the Taylor expansion might converge, but to the wrong answer, describing a metastable phase.

One last point: there are many sorts of nonanalyticity! A function can be smooth (in the sense of all derivatives being defined at every point) but still nonanalytic. This is what happens in the BKT transition.
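The standard example of such a function (not specific to BKT) is $f(x) = e^{-1/x^2}$ for $x > 0$ and $f(x) = 0$ otherwise: every derivative at $x = 0$ vanishes, so the Taylor series there is identically zero and tells you nothing about the $x > 0$ side. A quick numerical sanity check:

```python
import numpy as np

def f(x):
    """Smooth everywhere, but nonanalytic at x = 0."""
    return np.exp(-1.0 / x**2) if x > 0 else 0.0

h = 0.01
first = (f(h) - f(-h)) / (2 * h)            # numerical f'(0)
second = (f(h) - 2 * f(0) + f(-h)) / h**2   # numerical f''(0)

print(first, second)  # both effectively zero (exp(-10000) underflows)
print(f(0.5))         # but f(0.5) = e^{-4}, about 0.018 -- not zero
```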