Recall McCarthy’s rowboat problem. Usually you can use one to cross a river. But, when you get there, you might discover it’s impossible due to an unexpected infestation of inquisitors.
You could interpret “it’s usually true that you can use a rowboat to cross a river” as “it’s probably true,” in the technical sense according to which “probably” means you assign it a number between zero and one. How?
The mathematics of probability starts from a specified set of possible “outcomes.” The probabilities of all outcomes must add up to 1.0 (or else the math doesn’t work at all). You could treat “I can cross the river” as an outcome, and assign it some probability somehow,1 like maybe 0.9.
So what’s the probability of a missing oar? I don’t know, 0.01 maybe? Hole in the boat? Same-ish? Breaking an arm? Less probable: say 0.001. Electric eels? Ebola epidemic? Invasion of space aliens?
The problem I want to point at is not that this exercise adds up a list of pretty meaningless numbers to get an even more meaningless one—although that does seem to make probabilistic reasoning useless here.
The problem is that there are unenumerable unknowns. Unlike in a formal game, you can’t make a full list of possible outcomes. There are always more things that might happen. If you add up your estimated probabilities, you’ll eventually exceed 1.0.2 Alternatively, if you quit before running out of ideas, you will have underestimated the total probability of failure.
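Here’s a minimal sketch of that arithmetic in Python; every probability in it is an invented guess, like the ones above. Keep enumerating failure modes and the total blows past 1.0; stop early and you’ve underestimated the chance that something goes wrong.

```python
# Toy model of open-ended outcome enumeration. All probabilities
# are invented guesses, as in the text.
p_success = 0.9

failure_modes = [
    ("missing oar", 0.01),
    ("hole in the boat", 0.01),
    ("broken arm", 0.001),
    ("electric eels", 0.001),
    ("ebola epidemic", 0.001),
]
# Imagination keeps supplying more "known unknowns":
failure_modes += [(f"unforeseen disaster #{i}", 0.001) for i in range(1, 200)]

total = p_success
for mode, p in failure_modes:
    total += p
    if total > 1.0:
        print(f"Adding {mode!r} pushes the total to {total:.3f} > 1.0")
        break
else:
    # Quitting early instead leaves the probability of failure too low.
    print(f"Stopped with total {total:.3f}; failure underestimated")
```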
And these are just known unknowns. There are also possibilities no one would ever think of, because we don’t know everything about how the world works.
So, a sensible strategy is to lump all the outcomes you haven’t thought of together as “something else happens,” and estimate the probability of that. This is very like the logicist closed-world idealization approach: you just pretend you have made a complete list, and proceed. In statistics, this is called a “small-world idealization.”3
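In code, the small-world move is a single line: whatever probability your enumerated outcomes don’t account for gets dumped into a catch-all bucket, forcing the books to balance. A hypothetical sketch, again with invented numbers:

```python
# Small-world idealization: pretend the list of outcomes is complete
# by adding a catch-all category. All numbers invented.
outcomes = {
    "cross the river successfully": 0.9,
    "missing oar": 0.01,
    "hole in the boat": 0.01,
    "broken arm": 0.001,
}

# "Something else happens" absorbs whatever probability is left over,
# quietly standing in for everything you haven't thought of.
outcomes["something else happens"] = 1.0 - sum(outcomes.values())

assert abs(sum(outcomes.values()) - 1.0) < 1e-12
print(f"{outcomes['something else happens']:.3f}")  # 0.079
```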
Using probabilistic rationality in the eggplant-sized world always requires a small-world idealization. Because the idealization excludes an unknown set of unaccounted-for factors, it always risks making the analysis so wrong that it is worse than useless. The “other” category is, actually, a probabilistic model of the entire rest of the universe.4
If you use a rowboat regularly, your probability estimates from experience may be good enough for the job. But science explores realms unknown and un-understood, where initial probability estimates must often be meaningless, almost by definition.
Here one may reasonably ask why we are in the business of “estimating” probabilities at all. That’s inherently subjective and unscientific. Can’t we just let the data do the talking? As we make more and more observations, we’ll get increasingly accurate probabilities. Statistical methods can tell us how close to the true values they are. Everyone learned this in high school science class.
This line of thinking is disastrously wrong, as the next chapter explains. Let’s do a thought-experiment first, to get the intuition.
Suppose you are an astronomer in charge of a telescope with an extra-fancy spectrometer. You arrive at work one evening, and read an email from a colleague on a different continent: “We’ve just seen something really odd in lunar crater X9-PDQ-17b. Some kind of explosion; can’t make any sense of it. Could you check it out with your spectrometer asap?” So you look up the coordinates of X9-PDQ-17b—which turns out to be dinky, only a few dozen meters across—and you tell the telescope to look there.
A spectrometer breaks light down into its constituent wavelengths: a spectrum, as with a prism. A wavelength corresponds to a pure color. Sunlight is a mixture of many. Particular materials—such as rocks on the moon—reflect different percentages of incoming sunlight at different wavelengths. This gives each a unique “signature.” A spectrometer measures precisely how bright light is at each wavelength.
Your spectrometer is at the optical focus of the telescope, so it tells you what you are looking at. Its software consults a large public database of spectra of all sorts of materials. Each entry says what the material is, and how much it reflects each wavelength. Entries are uploaded by expert spectroscopists, who have rigorously calibrated their instruments, and made and cross-checked many measurements.
Typically, an observed spectrum doesn’t precisely match anything in the database. Using a statistical algorithm, the software calculates how closely it matches each entry, and then tells you how confident you can be that you are looking at, for example, olivine, ilmenite, or anorthosite—typical moon rocks.5
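As note 5 below concedes, the real software may work nothing like this; but for intuition, here is one naive way such a matcher could be built. Score each database entry by how far its reflectance curve lies from the observed spectrum, then normalize the scores into “confidences.” All the reflectance values, and the sharpness constant, are invented:

```python
import math

# Hypothetical spectral matcher (see note 5): the real software may
# work nothing like this. Reflectance fractions at four wavelengths;
# all values invented.
DATABASE = {
    "olivine":     [0.30, 0.35, 0.40, 0.45],
    "ilmenite":    [0.10, 0.12, 0.15, 0.18],
    "anorthosite": [0.55, 0.60, 0.62, 0.65],
}

def match_confidences(observed, database, sharpness=50.0):
    """Turn the distance to each entry into a pseudo-probability.

    The output sums to 1.0 by construction: the small-world
    idealization is baked in, and "none of the above" is not an option.
    """
    scores = {
        name: math.exp(-sharpness * math.dist(observed, spectrum))
        for name, spectrum in database.items()
    }
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

observed = [0.31, 0.34, 0.41, 0.44]  # a spectrum close to olivine's
for name, confidence in match_confidences(observed, DATABASE).items():
    print(f"{name}: {confidence:.4f}")
```

Run on a spectrum near olivine’s, this prints olivine with confidence near 1.0000 and the others near zero. Whatever the real algorithm, the structural point will matter shortly: a matcher like this can only distribute its confidence over the entries it has.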
So now the software has finished its measurement of X9-PDQ-17b, and says “green cheese, with probability 0.9837.”
What should you believe is the probability that X9-PDQ-17b is made out of cheese?
Very large amounts of data, plus a sophisticated, widely-used, carefully validated statistical algorithm, say it’s 0.9837. Do you believe that?
No? Why not? If you think the probability is very nearly zero—how come?
It occurs to you that the telescope was cleaned during the daytime today, before you came in. Maybe the cleaner had a green cheese sandwich for lunch, and a crumb fell onto a lens? Green cheese in the spectrometer seems way more likely than green cheese in X9-PDQ-17b.
The spectrometer’s probability estimate implicitly relies on a small-world idealization that excludes unenumerable ways the data may fail to tell you what it usually does. “There’s no extraneous matter in the light path” is a usualness condition that the software doesn’t take into account. What is the probability of there being green cheese on a lens? Who knows; you couldn’t get a meaningful estimate. Extraneous matter is meant to be excluded by shielding—the spectrometer is enclosed in a metal box—but maybe its shutter was mistakenly left open during cleaning.
On the other hand… it would be an odd coincidence for your colleague to have sent you an urgent email just at the same time the usualness condition was violated. “Oh, of course!” you realize. “It’s April Fools’ Day. Somehow every year I forget, and get taken in by something.” Someone on your team must have altered the software to say “green cheese,” and colluded with the foreign colleague to pull your leg. Now it seems that the probability of green cheese on the moon or on a lens is slight.
But then… you remember that, driving up the mountain to the telescope, you were listening to the radio; and there was an interview with a spokesperson from the Chinese space agency, who said that Western scientists should expect a “special surprise,” demonstrating new Chinese capabilities.
“OMG!” you think. “Could they have?… Surely not… But what a brilliant prank if they did! Explode a green cheese bomb, scattering a thin film of the material across the crater? As an April Fools’ joke?”
Now what do you think the probability of green cheese in X9-PDQ-17b is? Maybe the spectrometer was right! It’s hard to put a meaningful number on the probability, but it seems more likely than green cheese having fallen out of someone’s sandwich.
So, why did you not believe the 0.9837 estimate in the first place? Because you know that the moon is not made out of green cheese; and because you know that small-world idealizations can fail.
A meta-rational approach to probability considers the specifics of the idealization, asking whether it’s sensible; and frequently reality-checks the conclusions of probabilistic inferences, because there can be no guarantee that they aren’t wildly wrong.
- 1.In general, “assigning some probability somehow” is called the problem of the priors. There doesn’t seem to be any way of doing it that is both rationally justifiable and practically feasible. Most probabilists acknowledge this, but claim it isn’t a big deal in practice. Many skeptics consider it a fatal flaw. I think it’s fatal for probabilism as an absolute, universal theory, while agreeing that in many practical cases it’s not a big deal. Meta-rational probabilistic practice entails figuring out whether choosing priors is fatally impossible in a specific case, or easy, or whether the problems caused by bad priors can be worked around due to particular features of the situation.
- 2.Technically, you could avoid this by assigning possible outcomes successively smaller probabilities that converge asymptotically to a limit less than 1.0; a worked example follows these notes. This artificial procedure would not produce justifiable priors, and I don’t know of anyone advocating it.
- 3.This term is due to Leonard Savage, one of the founders of statistical theory. He suggested that selection of small-world idealizations “may be a matter of judgement and experience about which it is impossible to enunciate complete and sharply defined general principles”—a fine statement of the non-rationality of meta-rationality!
- 4.Or, more precisely, of everything within a light cone. (You could be prevented from crossing the river by the arrival of the first gamma rays from a supernova.) I’ve presented this in Bayesian terms, but the problem affects frequentism equally, in a slightly different form. A frequentist statistical model is an account of “the data generating process.” Inasmuch as its predictions would be upset by a supernova, that too is implicitly a model of the whole universe.
- 5.I have no idea whether spectrometry software works like this. We’re in thought-experiment land here, so it doesn’t matter.
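To make note 2 concrete, here is one version of that artificial procedure, with invented numbers: reserve 0.9 for crossing successfully, and give the kth failure mode you happen to think of the probability 0.05/2^k. However many failure modes you enumerate, the total converges to a limit below 1.0:

$$0.9 + \sum_{k=1}^{\infty} \frac{0.05}{2^{k}} = 0.9 + 0.05 = 0.95 < 1.0$$

The geometric series guarantees the books balance; it also shows why such priors are unjustifiable: each failure mode’s probability is fixed by the order in which you happened to imagine it, not by anything about the world.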