Microcausal and macrocausal explanations
You blow up a balloon, and don’t stop when you should. It pops.
Broadly speaking, there are two types of stories you can tell to explain why it popped. In one, a particular nitrogen molecule (imagine me pointing: this one) inside the balloon had an unusually high velocity when it collided with the rubber surface. This collision was sufficient to sever a nearby hydrogen bond (that one)—the resulting vibrations put extra strain on all neighboring polymers… you get the idea.
Notice that this story has a lot of details that vary from pop to pop. Consider, for example, an over-pressurized metal drum rupturing. Clearly this is a similar phenomenon, yet the corresponding story looks nothing like the balloon’s. (At the very least: no reference to polymers, and no hydrogen bonds to speak of.)
Let’s call that one the “microcausal” explanation, for obvious reasons. The more familiar explanation is “macrocausal”, and goes something like this: pumping air into the balloon raised the pressure inside the balloon, and therefore imposed a larger force on each unit area of the rubber. Rubber only has a certain pressure it can withstand before rupturing, so right around the time the pressure reached that point, the balloon popped.
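For concreteness, here is a minimal way to write down that macrocausal bookkeeping, treating the air as an ideal gas and pretending the rubber simply has some fixed burst pressure \(P_{\text{burst}}\) (both simplifications are mine, for illustration only):

\[
P = \frac{N k_B T}{V}, \qquad \text{pop when}\quad P - P_{\text{atm}} > P_{\text{burst}}.
\]

Pumping in air raises \(N\), and hence \(P\), until the inequality is satisfied.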
The macrocausal explanation makes no reference to individual particles, but rather to statistical properties of large collections of particles. As a result, it generalizes reasonably well to the case of a steel drum rupturing.
Why not just call the latter explanation “statistical”? This is technically accurate and is fine terminology to use among physicists, but the word “statistical” is colloquially taken to imply that a phenomenon is somehow not entirely real. Similarly the microcausal explanation is sometimes termed “reductionist”, but this again comes with misguided connotations. Both are models of how the physical universe behaves. Let’s take a closer look at the properties of these models.
Strictly speaking, the microcausal explanation generalizes perfectly well to all scenarios: “here is the Standard Model Lagrangian, that is the whole of physics, all the rest is corollary, go and derive it”. But of course the task of computing the consequences of the Standard Model is formidable. When we have limited computational resources, macrocausal explanations generalize much more easily.
On the flip side, microcausal explanations are philosophically easier to understand. Most animals (at least those I’ve interacted with) have a natural mental model of the physical universe, and that model involves objects bumping into other objects and causing them to do things. You can see it happen directly! Statistical properties are harder to discern.
As a result, we generally treat macrocausal explanations as if they’re microcausal, neglecting for as long as possible the fact that we’re speaking of statistical and emergent properties. In fact I did that above! The ostensibly “microcausal” explanation invokes a molecule and a hydrogen bond, as if these are primitive physical objects. Still, one explanation is plainly “more macrocausal” than the other, so as long as we all understand that these aren’t two sharply defined categories, let’s stick with the terminology.
It’s common to think of these different explanations as occurring at different levels of abstraction. These levels are often (particularly in physics) different length scales, corresponding literally to the micro and macro. Unfortunately this perspective sheds no light on why the macrocausal explanation should be any good. From one perspective, it’s surprising that there can be two explanations for the same phenomenon, both very accurate, yet with little or nothing in common. Hand-waving about levels of abstraction does not make things better.
Instead, I like to view each explanation as making a particular approximation about the system being described. Equivalently, each explanation is exactly true in some limit, and then we’re hoping that the real world isn’t too far from that limit. A key part of understanding the approximation is to understand precisely what limits have been taken. In the case of the balloon popping, the most obvious limit is that the number of molecules of air is large, as is the number of atoms in the balloon. Stating this explicitly makes it pretty clear why this is such a good approximation: Avogadro’s number is indeed large!
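Here is a small numerical sketch of why that limit is so forgiving (the parameters and the kinetic-theory shortcut \(P = N m \langle v_x^2 \rangle / V\) are my own choices for illustration, not anything taken from the balloon story): sample molecular velocities, estimate the pressure, and watch the run-to-run scatter shrink like \(1/\sqrt{N}\).

```python
import numpy as np

# Toy parameters (assumed for illustration, not a real balloon):
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
m = 4.65e-26         # mass of a nitrogen molecule, kg
V = 1e-3             # volume, m^3

rng = np.random.default_rng(0)

def pressure_estimate(n_molecules, n_runs=200):
    """Kinetic-theory pressure P = N m <v_x^2> / V from sampled velocities.

    Returns the mean estimate over n_runs independent samples and the
    relative run-to-run fluctuation (std / mean).
    """
    sigma = np.sqrt(k_B * T / m)   # Maxwell-Boltzmann spread of one velocity component
    estimates = np.empty(n_runs)
    for i in range(n_runs):
        vx = rng.normal(0.0, sigma, size=n_molecules)
        estimates[i] = n_molecules * m * np.mean(vx ** 2) / V
    return estimates.mean(), estimates.std() / estimates.mean()

for N in (10**2, 10**4, 10**6):
    _, rel_fluct = pressure_estimate(N)
    print(f"N = {N:>9,}  relative pressure fluctuation ~ {rel_fluct:.1e}")
```

The scatter falls like \(1/\sqrt{N}\); carried out to \(N \sim 6 \times 10^{23}\) it would be of order \(10^{-12}\), which is why treating the pressure as a single definite number is such a safe move.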
One advantage of thinking of different explanations as belonging to different limits is that it naturally gestures towards whole new classes of explanations, corresponding to all the different limits you can think of. In particle physics we have perturbative stories and large-\(N\) stories, both corresponding to particular limits rather than particular scales. In some systems these limits might coincide, and then the explanations also line up.
As the old saw doesn’t quite go: everything is about AI—except AI, which is about power. This post is no exception.
Yudkowsky on Twitter:
Remember: The argument for AGI ruin is never that ruin happens down some weird special pathway that we can predict because we’re amazing predictors. The argument is always that ordinary normal roads converge on AGI ruin, and purported roads away are weird special hopium.
I interpret this (along with other related comments, such as those by Zvi) as gesturing towards the possible existence of a macrocausal explanation for AI doom. I have exactly two thoughts.
First, the type of AI-accelerationist argument Yudkowsky is complaining about (“why won’t anybody tell me exactly how AI will destroy the world?”) strikes me as closely analogous to demanding a concrete microcausal explanation for the popping of the balloon—that is, demanding that we pinpoint precisely the hydrogen bond that will fail first. This is absurd.
Yudkowsky goes further though, claiming here and elsewhere that AGI ruin is the default outcome, occupying a substantial supermajority of the probability. I am comfortable with the argument “there are many possible roads with small probability; they ultimately add up to a not-so-small probability”—that is my first thought. My second thought is that I am not so happy with the stronger claim, that ruin is meaningfully the “normal” course of events.
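To illustrate that first thought with toy arithmetic (the numbers are invented, and the independence assumption is doing a lot of work): with \(k\) roughly independent roads to ruin, each of small probability \(p\), the chance that at least one is taken is

\[
1 - (1 - p)^k, \qquad \text{e.g.}\quad 1 - (0.99)^{100} \approx 0.63,
\]

which is not so small even though every individual road is.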
Yudkowsky wants to claim that the probability is close to 100%. In other cases where this sort of assertion is a knowable fact, it’s because the system is close to some limit in which the probability would be exactly 100%. I can’t think of any appropriate limit for AI doom, nor do I see anyone gesturing at one. My estimate of the probability of AGI doom remains considerably under 50%.