A Galactic Simulation of Planck-Scale Physics

Let’s imagine simulating a lattice field theory on a quantum computer. To summarize how this works, for the uninitiated: physical space (the thing we’re simulating) is discretized by a lattice. To each point on this lattice we associate some number of qubits, which will represent the state of the fields at and near that point. Then we apply some sequence of quantum gates mimicking physical time evolution. If this seems excessively simple, it’s because it really is (conceptually) very simple.
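
To make the setup concrete, here is a toy sketch in plain Python/NumPy, simulating the would-be quantum computer classically. The transverse-field Ising chain, the couplings, and every name in it are placeholder choices of mine, standing in for a real lattice field theory; the structure (qubits on sites, a repeated sequence of gates approximating time evolution) is the point.

```python
# One qubit per lattice site; time evolution approximated by a repeated
# sequence of "gates" (a first-order Trotterization). Toy model only.
import numpy as np
from scipy.linalg import expm

n_sites = 4                    # lattice points, one qubit each
J, h = 1.0, 0.7                # nearest-neighbour coupling, transverse field
dt, n_steps = 0.05, 200        # time step and number of Trotter steps

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(single, site):
    """Embed a single-qubit operator acting on `site` into the full 2^n space."""
    full = np.array([[1.0 + 0j]])
    for k in range(n_sites):
        full = np.kron(full, single if k == site else I2)
    return full

# H = -J sum Z_k Z_{k+1} - h sum X_k, split into two easily exponentiated pieces.
H_zz = sum(-J * op_at(Z, k) @ op_at(Z, k + 1) for k in range(n_sites - 1))
H_x = sum(-h * op_at(X, k) for k in range(n_sites))
trotter_step = expm(-1j * dt * H_x) @ expm(-1j * dt * H_zz)

psi = np.zeros(2**n_sites, dtype=complex)
psi[0] = 1.0                   # start with every qubit in |0>
for _ in range(n_steps):
    psi = trotter_step @ psi   # apply the gate sequence

print("survival probability:", abs(psi[0])**2)
```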

An unambitious fellow might wish to simulate the standard model; that is, the fields represented would be the fields of the standard model. In a sufficiently large such simulation, of course, you’re not just simulating the standard model. The standard model gives rise to protons and neutrons, and you’re simulating them too. The standard model yields amino acids, and you’re simulating them too. From the standard model emerges sentient life, and, well, in a sufficiently large simulation…

But that’s what an unambitious fellow wants. Today I’m feeling ambitious. We’re going to put near-Planck-scale physics on the quantum computer. Take whatever effective field theory holds right below the Planck scale (“right below” means go far enough down that we can discretize space without thinking too much about quantum gravity), and put that on the quantum computer. And now, run as large a simulation as possible, to see as much emergent behavior at longer distance scales as possible.

The question is, if we do this, how low in energy can we probe? Obviously, with infinite space and time, there’s no limit: if you want lower energies, just build a larger quantum computer and simulate a larger lattice on it. But that’s not the world we live in. In the real world, there’s a limit to the computational power of any computer we can build.

The Margolus-Levitin theorem is not the only theorem that gives physical bounds on computation, but it’s the most relevant one I know of. It states that a single computational “operation”—defined as the process of transitioning from one quantum state to an orthogonal state—can happen no faster than \(h / 4 E = \pi \hbar / 2 E\), where \(E\) is the average energy of the system in question (relative to the ground state). Factors of order one won’t matter for the estimates below. By the way, the interpretation of this theorem can be tricky; if you want to employ it, it’s important to be confident that transitioning between two orthogonal states really is the right notion of an “operation”.
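
Spelled out, for a computer of mass \(M\) running for a total time \(T\) (this rearrangement is mine, with \(E = Mc^2\)):

\[
\tau_{\mathrm{op}} \;\geq\; \frac{h}{4E} \;=\; \frac{\pi\hbar}{2Mc^2},
\qquad
N_{\mathrm{ops}} \;\lesssim\; \frac{T}{\tau_{\mathrm{op}}} \;=\; \frac{4Mc^2\,T}{h}.
\]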

Okay, back to our quantum simulations. First, what happens if we use the whole universe? The mass of the observable universe is something like \(10^{54}\) kg. By the ML theorem, that means one operation takes at least about \(2 \times 10^{-105}\) seconds. If we allow the computation to take as long as the current age of the universe, that’s a total of only about \(2 \times 10^{122}\) operations. “Only”? Well, assume that:

- simulating one lattice site for one time step costs at least one operation;
- the time step is about one Planck time, since we’re simulating physics just below the Planck scale; and
- the simulation runs for at least as many time steps as there are sites along one side, so that a signal has time to cross the whole lattice.

With those assumptions, the total number of operations scales like the fourth power of the number of sites per side, so our lattice is restricted to a side length of a few times \(10^{30}\) sites. The Planck length is about \(10^{-35}\) meters, so our all-consuming simulation will only manage to simulate a box somewhat less than a tenth of a millimeter across, and only for about \(2 \times 10^{-13}\) seconds. That’s not bad though: it’s certainly enough to see a bit of chemistry, or maybe even biology, emerge all the way from the Planck scale!
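
The arithmetic above is easy to reproduce. Here is a short script (a sketch of my own; the function name, constants, and rounding are mine) that turns a mass and a runtime into an operation budget and a lattice size, under exactly the assumptions listed above:

```python
import math

# Rough physical constants (SI units).
HBAR = 1.055e-34          # reduced Planck constant, J*s
C = 2.998e8               # speed of light, m/s
PLANCK_LENGTH = 1.6e-35   # m
PLANCK_TIME = 5.4e-44     # s

def lattice_reach(mass_kg, runtime_s):
    """Largest Planck-spaced lattice a computer of the given mass could
    simulate in the given runtime, assuming the Margolus-Levitin rate 4E/h,
    one operation per site per time step, Planck-time steps, and as many
    steps as sites per side.  Returns (ops, sites/side, box size m, sim time s)."""
    energy = mass_kg * C**2
    ops = runtime_s * 4 * energy / (2 * math.pi * HBAR)   # T / (h / 4E)
    side = ops ** 0.25                                    # ops ~ side^3 sites * side steps
    return ops, side, side * PLANCK_LENGTH, side * PLANCK_TIME

AGE_OF_UNIVERSE_S = 4.35e17    # about 13.8 billion years
ops, side, box, sim_t = lattice_reach(1e54, AGE_OF_UNIVERSE_S)
print(f"{ops:.1e} ops, {side:.1e} sites/side, {box:.1e} m box, {sim_t:.1e} s")
# -> about 2e122 ops, 4e30 sites per side, a 6e-5 m box, for 2e-13 seconds
```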

(If you’re in the mood for some numerology, feel free to find significance in the fact that the all-encompassing simulation manages to cover almost exactly half of the orders of magnitude that we believe exist in our universe: the ratio of the diameter of the observable universe to the Planck length is about \(10^{62}\).)

Using the entire visible universe and letting the simulation run for the age of the universe is a bit unrealistic. Let’s limit it to a galaxy-sized simulation, and let it run for just 1000 years. The mass of the Milky Way is about \(10^{12}\,M_\odot\), giving us a few times \(10^{103}\) operations to work with. Now we get close to \(10^{26}\) sites per side, for a simulated length scale of about a nanometer. So, we can see an atom. Not terrible.

If we use only the full mass of the Earth, and only for 100 years, the news is still good. We have around \(10^{85}\) operations to work with, giving a simulated size of a few times \(10^{-14}\) meters. This is certainly enough to see the standard model emerge; in fact, it’s roughly comparable to the box sizes of modern lattice QCD simulations!

At some point, though, we need to come to terms with the fact that we’re not going to use the entire mass-energy of the Earth, for 100 years, just to perform a single computation. With a one-ton computer running for that same century, we get a couple times \(10^{63}\) operations. That’s sufficient for a length scale of about \(10^{-19}\) meters, which is no longer quite enough for us to see the standard model emerge.
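
For the record, the same little estimate applied to the three cases above (again my own sketch, with round numbers for the masses):

```python
import math

HBAR, C, PLANCK_LENGTH = 1.055e-34, 2.998e8, 1.6e-35
YEAR_S = 3.15e7   # seconds per year

# (label, mass in kg, runtime in seconds) -- rough round-number inputs.
cases = [
    ("Milky Way, 1000 yr", 2e42, 1e3 * YEAR_S),   # ~1e12 solar masses
    ("Earth, 100 yr",      6e24, 1e2 * YEAR_S),
    ("one ton, 100 yr",    1e3,  1e2 * YEAR_S),
]

for label, mass, runtime in cases:
    ops = runtime * 4 * mass * C**2 / (2 * math.pi * HBAR)  # Margolus-Levitin budget
    box = ops ** 0.25 * PLANCK_LENGTH                       # ops ~ (sites per side)^4
    print(f"{label:>20}: {ops:.0e} ops, box {box:.0e} m across")
# -> roughly 3e103, 1e85, and 2e63 ops; boxes of about 1e-9, 3e-14, and 1e-19 m
```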


Why do this ridiculous calculation? Mainly because I’m putting off doing any actual work. But also because it serves to illustrate an important truth: no matter what quantum hardware advances come our way, we will never be able to perform a “naive” simulation that demonstrates the emergence of the standard model from Planck-scale physics.

This might be fine. One can, for instance, imagine presenting a sequence of effective field theories, starting near the Planck scale and working down. All that needs to be done is to argue convincingly that the \(n\)th theory indeed emerges from the \((n-1)\)th, and if adjacent energy scales don’t differ by more than a few orders of magnitude, that argument can even be made nonperturbatively, with lattice calculations.
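
For a sense of how many rungs such a ladder needs (a back-of-the-envelope count of my own, assuming each lattice matching can bridge roughly three orders of magnitude in energy): getting from the Planck scale near \(10^{19}\) GeV down to the scales where the standard model takes over, around \(10^{3}\) GeV, takes only

\[
N_{\mathrm{EFTs}} \sim \frac{19 - 3}{3} \approx 5
\]

or so matching steps.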

Or, we might not be fine. For example, what if there just isn’t any good EFT description at the scales between a near-Planck EFT and the standard model? The near-Planck EFT could be, in some sense, as “nice” as possible (a traditional, local quantum field theory with no funny stringy stuff), but we would still be powerless to verify that the standard model indeed emerges from it.

Less dramatically, even given a set of regularly spaced EFTs spanning the gap between standard-model physics and Planck-scale physics, it’s not immediately obvious that errors don’t build up, from one matching step to the next, enough to make the whole chain of calculations untrustworthy. They might, but as far as I know nobody has asked.

In general, any plausible calculation that confronts this problem will need to handle multiple scales with a computational complexity that is strongly sublinear in the ratio of the longest distance involved to the shortest. As I write this, whether such algorithms exist in any generality is, to my knowledge, unexplored.