You blow up a balloon, and don’t stop when you should. It pops.
Broadly speaking, there are two types of stories you can tell to explain why it popped. In one, a particular nitrogen molecule (imagine me pointing: this one) inside the balloon had an unusually high velocity when it collided with the rubber surface. This collision was sufficient to sever a nearby hydrogen bond (that one)—the resulting vibrations put extra strain on all neighboring polymers… you get the idea.
Notice that this story has a lot of details that vary from pop to pop. Consider, for example, an over-pressurized metal drum rupturing. Clearly this is a similar phenomenon, yet the corresponding story looks nothing like the balloon’s. (At the very least: no reference to polymers and no hydrogen bonds to speak of.)
Let’s call that one the “microcausal” explanation, for obvious reasons. The more familiar explanation is “macrocausal”, and goes something like this: pumping air into the balloon raised the pressure inside the balloon, and therefore imposed a larger force on each unit area of the rubber. Rubber only has a certain pressure it can withstand before rupturing, so right around the time the pressure reached that point, the balloon popped.
The macrocausal explanation makes no reference to individual particles, but rather to statistical properties of large collections of particles. As a result, it generalizes reasonably well to the case of a steel drum rupturing.
Why not just call the latter explanation “statistical”? This is technically accurate and is fine terminology to use among physicists, but the word “statistical” is colloquially taken to imply that a phenomenon is somehow not entirely real. Similarly the microcausal explanation is sometimes termed “reductionist”, but this again comes with misguided connotations. Both are models of how the physical universe behaves. Let’s take a closer look at the properties of these models.
Strictly speaking, the microcausal explanation generalizes perfectly well to all scenarios: “here is the standard model Lagrangian, that is the whole of physics, all the rest is corollary, go and derive it”. But of course the task of computing the consequences of the standard model is formidable. When we have limited computational resources, macrocausal explanations generalize much more easily.
On the flip side, microcausal explanations are philosophically easier to understand. Most animals (at least those I’ve interacted with) have a natural mental model of the physical universe, and that model involves objects bumping into other objects and causing them to do things. You can see it happen directly! Statistical properties are harder to discern.
As a result, we generally treat macrocausal explanations as if they’re microcausal, neglecting for as long as possible the fact that we’re speaking of statistical and emergent properties. In fact I did that above! The ostensibly “microcausal” explanation invokes a molecule and a hydrogen bond, as if these are primitive physical objects. Still, one explanation is plainly “more macrocausal” than the other, so as long as we all understand that these aren’t two sharply defined categories, let’s stick with the terminology.
It’s common to think of these different explanations as occurring at different levels of abstraction. These levels are often (particularly in physics) different length scales, corresponding literally to the micro and macro. Unfortunately this perspective sheds no light on why the macrocausal explanation should be any good. From some perspective, it’s surprising that there can be two explanations for the same phenomenon, both very accurate, but with little or nothing in common. Hand-waving about levels of abstraction does not make things better.
Instead, I like to view each explanation as making a particular approximation about the system being described. Equivalently, each explanation is exactly true in some limit, and then we’re hoping that the real world isn’t too far from that limit. A key part of understanding the approximation is to understand precisely what limits have been taken. In the case of the balloon popping, the most obvious limit is that the number of molecules of air is large, as is the number of atoms in the balloon. Stating this explicitly makes it pretty clear why this is such a good approximation: Avogadro’s number is indeed large!
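To make the large-\(N\) intuition concrete, here is a toy numerical sketch (my own illustration, using an assumed toy model rather than any real gas physics): treat the pressure on the balloon wall as the sum of \(N\) independent molecular impulses and watch its relative fluctuation shrink like \(1/\sqrt{N}\).

```python
# Toy sketch (assumed model, purely illustrative): pressure on the balloon wall
# as a sum of N independent molecular impulses. The relative fluctuation of the
# sum should shrink like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

for n in [10, 1_000, 100_000, 10_000_000]:
    impulses = rng.exponential(scale=1.0, size=n)  # impulse per molecule, arbitrary units
    total = impulses.sum()
    # std of the sum is (single-impulse std) * sqrt(n); compare it to the sum itself
    rel_fluct = impulses.std() * np.sqrt(n) / total
    print(f"N = {n:>12,}: relative pressure fluctuation ~ {rel_fluct:.1e}")

# Extrapolating the 1/sqrt(N) trend to N ~ 6e23 (Avogadro's number) gives
# fluctuations of order 1e-12: the "statistical" pressure is effectively exact.
```

The details of the impulse distribution don’t matter, only that it has finite variance; the \(1/\sqrt{N}\) scaling is the whole content of the limit.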
One advantage to thinking of different explanations as belonging to different limits is that it naturally gestures towards whole new classes of explanations, corresponding to all the different limits you can think of. In particle physics we have perturbative stories and large-\(N\) stories, both corresponding to particular limits, rather than particular scales. In some systems these limits might coincide, and then the explanations also line up.
As the old saw doesn’t quite go: everything is about AI—except AI, which is about power. This post is no exception.
Yudkowsky on Twitter:
Remember: The argument for AGI ruin is never that ruin happens down some weird special pathway that we can predict because we’re amazing predictors. The argument is always that ordinary normal roads converge on AGI ruin, and purported roads away are weird special hopium.
I interpret this (along with other related comments, such as those by Zvi) as gesturing towards the possible existence of a macrocausal explanation for AI doom. I have exactly two thoughts.
First, the type of AI-accelerationist argument Yudkowsky is complaining about (“why won’t anybody tell me exactly how AI will destroy the world?”) strikes me as closely analogous to demanding a concrete microcausal explanation for the popping of the balloon—that is, demanding that we pinpoint precisely the hydrogen bond that will fail first. This is absurd.
Yudkowsky goes further though, claiming here and elsewhere that AGI ruin is the default outcome, occupying a substantial supermajority of the probability. I am comfortable with the argument “there are many possible roads to ruin, each with small probability; they ultimately add up to a not-so-small total probability”—that is my first thought. My second thought is that I am not so happy with the stronger claim, that ruin is meaningfully the “normal” course of events.
Yudkowsky wants to claim that a probability is close to 100%. In other cases where this sort of assertion is a knowable fact, it’s because the system is close to some limit in which the probability would be exactly 100%. I can’t think of any appropriate limit for AI doom, nor do I see anyone gesturing at one. My estimate for the probability of AGI doom remains considerably under 50%.
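To put toy numbers on the gap between those two thoughts (the figures are mine and purely illustrative, and the independence assumption is doing a lot of work): if there are \(k\) roughly independent roads to ruin, each with probability \(p\), the chance that at least one is realized is
\[
1 - (1 - p)^k.
\]
Thirty such roads at \(p = 0.02\) give \(1 - 0.98^{30} \approx 0.45\): not-so-small, but nowhere near certainty. Pushing past 95% requires on the order of 150 such roads, or individual probabilities that are no longer “small”. That is the sort of quantitative gap a genuine limit argument would have to close.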
Reading James C. Scott in China’s mountains.
Also from China, on tech-related trade restrictions (between China and the U.S.):
We can readily predict that as China’s policy of attracting talent grows stronger and as those who return to China continue to succeed, American suspicions of the Chinese who remain in the U.S. will increase, meaning that the space for these Chinese to continue to advance in the U.S. will narrow, which will only encourage more Chinese to return to China. Once the patents prohibited by the U.S. can be reapplied for in China, the Samuel Slaters and Zhang Rujings who master this technical know-how will flock to China. This will reverse the talent flow, leaving the U.S. with a shortage of talent, and the outcome of the U.S.-China technology war will be clear.
A report from “the second International Conspiracy Theory Symposium”. Relative to the relevant opposition, conspiracy theorists are looking quite good these days.
Somewhat relatedly, here’s Musa al-Gharbi, writing in 2019, on the vaunted “diploma divide”. He ends with:
It may be emotionally satisfying for academics and intellectuals to disparage or patronize the less educated and their political allegiances, but this condescension is unearned: the political leanings of highly-educated or intelligent people tend not to be any more rational or informed than anyone else’s. Putting on a pretense of superiority is likely to blow up in our faces.
As Rick Perry would say, “oops!”
In which GPT-4 provides medical advice. Of course this sort of thing was possible before, and has been in principle possible for a long time—but now it has a friendly face!
Again somewhat related, on the topic of AI safety, it looks possible that all hell is about to break loose. I don’t think I’ve read anything on this topic that makes me happy, whether in the “things are going well” sense or simply in the “at least there are some people behaving reasonably” sense. Cowen’s comparison to the early days of COVID is apt, of course, but it’s not an encouraging observation!