Ineffective Theory

Alex Wellerstein's Restricted Data

This book is a history of one corner of American politics: the debate, now approaching a century old, over which bits of nuclear information ought to be protected, and how. As politically oriented books go, it’s remarkable for being level-headed. Nuclear secrecy is a naturally emotional topic, particularly in the American context, where commitments to free expression and distrust of government were strong enough to compete with Cold War fears of annihilation. If Wellerstein didn’t repeatedly point this fact out, one might read Restricted Data and be unaware of how politically contentious the topic is.

An unfortunate aspect of this being a political history is that it’s disappointingly sparse on hard details. Of course there’s only minimal discussion of anything technical—that would be tedious, and has been covered elsewhere anyway. But Wellerstein gives fewer concrete details about what the implementation of a secrecy regime looked like on the ground than I would have liked. A great deal of space is dedicated to the deliberations of the high-ranking officials responsible for writing and interpreting the law. There are several good stories about the intersection of the classification system with people outside the system, some of whom voluntarily chose to obey its requests, and others who played the role of activists, attempting to tear it all down. But the gritty implementation details are largely missing, so a lot of the discussion feels a bit hollow.

That aside, the book is exceedingly thorough from 1943 through nearly the end of the Cold War. Wellerstein states that he has no clearance and is under no obligation to keep secrets, but it’s impossible to write a book that’s well grounded in facts when an important plurality of those facts are not yet publicly known. So the post-Cold War era, and particularly events after around 2010, are discussed only briefly.

Wellerstein suggests (I think partly as a rhetorical flourish) that Western nuclear secrecy essentially began with Szilard’s attempts to convince other scientists not to publish on fission and its applications. Not long after, Szilard became an advocate for relaxing nuclear security, arguing that “secrecy was pointless” and eventually coming to be regarded by General Groves as “a malcontent”. More generally, many of the physicists who were most prominent in the creation of a system of nuclear secrecy pushed for extensive liberalization after the war, and largely failed: this included Oppenheimer and eventually Teller. A common trope with respect to secrecy—and one that gets truer with each decade—is that “the genie cannot be put back in the bottle”. This is typically invoked as an argument for erring on the side of conservatism. It’s interesting to note that at an institutional level, it’s the secrecy itself that has turned out to be irreversible.

Alex Wellerstein, by the way, also has an excellent blog, which I’ve linked to previously.

Links for August 2023

New substack and a good interview regarding PEPFAR. Is this replicable today? Is this a story of executive dysfunction or executive triumph?

Wellerstein discusses the non-destruction of Kyoto in WWII. A lot of effort to deal with a glib just-so story.

Easily repeated claims are less likely to be true

Briefly: if you hear a claim, you heard it for a reason. Someone told it to you! What caused that to happen? Well, it could be something that people like repeating because it’s true, but it could also be getting repeated for other reasons. If you notice that there are non-truth-related reasons for the claim to be repeated (easy to explain; politically convenient; just plain “catchy”), that should lower your probability (conditional on having heard the claim) that it’s true.

There’s a saying often used in physics: “well known to those who know it well”. (After hearing it repeated too many times, this vacuous tripe is now one of my least favorite phrases.) In a similar spirit, the above is obvious once it becomes obvious.

The rest of this post is just being careful and quantitative with the above, for the times it’s not obvious (and is maybe wrong). Here’s a concrete model. Each freshly made claim starts with probability \(p_0\) of surviving. Being true increases the probability to \(p_0 + p_T\), and being catchy increases it to \(p_0 + p_C\). A claim which is both true and catchy has probability \(p_0 + p_T + p_C\) to survive. As long as all probabilities are small, you can think of this as there being three separate, independent mechanisms for survival.

Mostly for convenience, let’s continue to assume that \(p_\bullet \ll 1\), and see what happens to a population of claims for which a fraction \(f_T \in [0,1]\) are true, \(f_C\) are catchy, and the two notions are independent. The total fraction that survive, and the fraction of true ones that survive, are \[ F_{\mathrm{total}} = p_0 + f_T p_T + f_C p_C \,\text{ and }\, F_{\mathrm{true}} = p_0 + p_T + f_C p_C \text. \] So, the fraction of claims that survive that are true is \[ P(\mathrm{True}|\mathrm{Heard}) = f_T \frac{p_0 + p_T + f_C p_C}{p_0 + f_T p_T + f_C p_C} \text. \] That should be the number you think of when you hear a claim and ask “how likely is this to be true?” For claims that are catchy, the probability is instead \[ P(\mathrm{True}|\mathrm{Heard}\land\mathrm{Catchy}) = \frac{f_T F_{\mathrm{true,catchy}}}{F_{\mathrm{catchy}}} = f_T \frac{p_0 + p_T + p_C}{p_0 + f_T p_T + p_C} \text. \] So we see that for generic parameters, \(P(\mathrm{True}|\mathrm{Heard}\land\mathrm{Catchy}) \lt P(\mathrm{True}|\mathrm{Heard})\). The effect is unsurprisingly largest when being catchy results in a large improvement to survival probability. In a more sophisticated model, this would translate to “be more skeptical of catchier claims”.
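To make the comparison concrete, here’s a short Python sketch of the model above. The parameter values are purely illustrative (they don’t come from the post); the point is just to see the two conditional probabilities side by side.

```python
# Additive survival model for claims: each claim survives with probability
# p0 (baseline), plus pT if it's true, plus pC if it's catchy.
# All probabilities are assumed small; values below are illustrative.
p0, pT, pC = 0.01, 0.02, 0.05
fT, fC = 0.5, 0.1  # fraction of claims that are true / catchy (independent)

# Fraction of all claims that survive, and fraction of true claims that survive.
F_total = p0 + fT * pT + fC * pC
F_true = p0 + pT + fC * pC

# P(True | Heard): among surviving claims, the share that are true.
p_true_heard = fT * F_true / F_total

# Now condition on catchiness: the pC term is no longer diluted by fC,
# so catchiness offers a truth-independent route to survival.
F_catchy = p0 + fT * pT + pC
F_true_catchy = p0 + pT + pC
p_true_heard_catchy = fT * F_true_catchy / F_catchy

print(f"P(True | Heard)         = {p_true_heard:.3f}")
print(f"P(True | Heard, Catchy) = {p_true_heard_catchy:.3f}")
assert p_true_heard_catchy < p_true_heard
```

With these numbers, conditioning on the claim being catchy drops the probability of truth from 0.700 to about 0.571, matching the qualitative conclusion: catchiness should make you more skeptical.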

If the reproductive advantages stack multiplicatively instead of additively, the above effect no longer holds. I’ll leave it as an exercise for the reader to decide when that’s a better model.