A common sentiment is that trained scientists have an important skill, often called “taste”, which is crucial in identifying worthwhile work. For a recent example, see Tim Hwang and Caleb Watney’s piece at Macroscience, which summarizes this view pretty well:
> Scientists will often scoff at traditional metrics like citations or patents because, in their view, good science is a “know-it-when-you-see-it” phenomenon and fundamentally about good taste. Large-scale surveys of scientists could help bridge this gap by aggregating the opinions of scientists about promising (or unpromising) developments in their field and in adjacent fields.
If we’re going to assert that thus-and-such mechanism is an effective one, it’s good to have a story for what causes the mechanism to work. In this case the most plausible story looks something like this:
1. Scientists are repeatedly exposed to examples of historical work in their field and adjacent fields.
2. Thanks to obvious selection biases, that work is disproportionately the high-impact work.
3. Scientists begin to develop a feel for what sort of work is “high-impact”.
4. Work that feels familiar (and thus similar to historical high-impact work) is held to be in good taste.
Note that the selection bias in step 2 is doing heavy lifting here—and it’s good heavy lifting. It’s nice to see selection bias being the good guy for once!
Unfortunately, this procedure breaks down for the sort of work that spawns a new (sub)field. By definition and perhaps design, that work doesn’t look much like historically high-impact work. “Taste”, when driven by the mechanism above, is only sensible within the field that spawns it. This isn’t so surprising—who would expect a particle theorist to identify high-quality immunology research?
Another way to see the problem with taste is to remember that, by assumption, most scientists have pretty good taste. Therefore, within any given field, good-taste work is disproportionately likely to have been explored. That doesn’t mean that everything in good taste has been done, just that low-hanging fruit is likely to have been picked. It follows that if you’re looking for something new, it’s worthwhile to spend a lot of time exploring bad-taste ideas. Most other things being equal, they’re more likely to be the important ones.
For straightforward work, all the above considerations lean in favor of judging via taste. It’s only in the search for unusual (can I type “paradigm-changing” without vomiting?) but high-quality work that using taste becomes a handicap.
This book is a history of one corner of American politics: the debate, now approaching a century old, over which bits of nuclear information ought to be protected, and how. As politically oriented books go, it’s remarkable for being level-headed. Nuclear secrecy is a naturally emotional topic, particularly in the American context, where views on free expression and distrust of government were strong enough to compete with Cold War fears of annihilation. If Wellerstein didn’t repeatedly point this fact out, one might read Restricted Data and be unaware of what a politically contentious topic it is.
An unfortunate aspect of this being a political history is that it’s disappointingly sparse on hard details. Of course there’s only minimal discussion of anything technical—that would be tedious, and has been covered elsewhere anyway. But Wellerstein gives fewer concrete details about what the implementation of a secrecy regime looked like on the ground than I would have liked. A great deal of space is dedicated to the deliberations of the high-ranking officials responsible for writing and interpreting the law. There are several good stories about the intersection of the classification system with people outside it, some of whom voluntarily obeyed its requests, while others played the role of activists, attempting to tear it all down. But the gritty implementation details are largely missing, so a lot of the discussion feels a bit hollow.
Except for that, the book is exceedingly thorough, beginning in 1943 and extending nearly to the end of the Cold War. After that, everything is understandably very sparse. Wellerstein states that he has no clearance and is under no obligation to keep secrets, but it’s impossible to write a book well grounded in facts when an important plurality of those facts is not yet known. So the post-Cold War era, and particularly events after around 2010, is discussed quite briefly.
Wellerstein suggests (I think partly as a rhetorical flourish) that Western nuclear secrecy essentially began with Szilard’s attempts to convince other scientists not to publish on fission and its applications. Not long after, Szilard became an advocate for relaxing nuclear secrecy, arguing that “secrecy was pointless” and eventually being regarded by General Groves as “a malcontent”. More generally, many of the physicists who were most prominent in the creation of a system of nuclear secrecy pushed for extensive liberalization after the war, and largely failed: this included Oppenheimer and eventually Teller. A common trope with respect to secrecy—and one that gets truer with each decade—is that “the genie cannot be put back in the bottle”. This is typically invoked as an argument for erring on the side of conservatism. It’s interesting to note that at an institutional level, it’s the secrecy itself that has turned out to be irreversible.
Alex Wellerstein, by the way, also has an excellent blog, which I’ve linked to previously.