Anti-rationalist: You rationalists just think in black and white. The world is shades of gray!
Rationalist: Yeah, we got this. A gray surface reflects some proportion of the incoming light, varying from none (0.0), corresponding to pure black, continuously to all the light (1.0), corresponding to pure white. Gray is any value in between. It’s just a number. You know… we rationalists can do numbers!
Both have valid points here. “Shades of gray” may be trying to point at nebulosity, and sort-of truths, which do cause fatal problems for rationalism. But quantitative variation does not, or not always.
Recall the possibility that “Alain is bald” is pretty much true, but not altogether so. Or maybe we can say that “Alain is pretty much bald,” but that doesn’t really help. The difficulty for rationalist epistemology is in dealing with borderline cases. What counts as evidence for Alain being actually, versus pretty much, bald? Someone can certainly have some head hairs while counting as unambiguously bald. If you believe that Alain definitely is bald, what exactly is it that you are believing?
Analytic metaphysicians and semanticists—two species of rationalists—have devised several technical approaches to answering these questions. They typically apply their analyses to baldness and to a small, standard set of other examples of graded properties: mainly, colors. A particular apple is basically red, but an orangish red; is it a red apple? They call this problem “vagueness.”
I’ll review briefly some of the attempted solutions, along with some reasons they don’t seem to work. I won’t go into detail, because it is generally acknowledged that none of them are satisfactory, although defenders have added complications to deal with some objections.1
More importantly, none of the approaches would address nebulosity effectively even if they worked technically in the terms in which they were conceived. Nebulosity is not mostly about “vagueness” as understood in this literature—that is, matters of degree rather than kind. In fact, we’ll see that even rationalists’ standard examples turn out not to work that way. So the problem is mis-framed, and must be thought about quite differently. I’ll come back to that at the end of this chapter.
Following the usual pattern, some rationalist approaches mischaracterize the ontological problem as epistemological, and some as linguistic.
An epistemological approach holds that there is an objective fact about how many hairs you can have and not be bald. Maybe it is 438. If you have 438 hairs, it is still false that you are bald. But the moment will come when one drops out, and you have only 437 left. Then it will be an absolute, objective, cosmically true fact that you are bald. The apparent nebulosity of baldness is really an epistemic problem: no one knows exactly what the true cutoff value is. So when we say “Alain is pretty much bald,” we mean that he is close to our estimate of the cutoff value, but we aren’t sure whether he is a bit under or a bit over. Truth stays strictly binary; what varies continuously is only the strength of our belief.
Sentence | Belief strength | Truth |
---|---|---|
Alain is bald | 0.7384261 | True |
The advantage of this approach is that you can use ordinary logic (just-plain-true-and-false) when you are certain, and the well-behaved mathematics of probability for cases that are unclear.
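To make the mechanics concrete, here is a minimal Python sketch of this “epistemicist” picture. The normal distribution and every number in it are invented purely for illustration; the only point being modeled is that the cutoff is assumed to be a definite (though unknown) fact, so truth stays binary while belief strength varies smoothly.

```python
from statistics import NormalDist

# Illustrative only: model our ignorance of the "true" cutoff (the largest
# number of hairs a bald person can have) as a normal distribution.
# The cutoff itself is assumed to be a perfectly definite fact; only our
# knowledge of it is graded.
cutoff_belief = NormalDist(mu=500, sigma=150)  # hypothetical numbers

def belief_that_bald(hair_count: int) -> float:
    """Belief strength that X is bald = P(true cutoff >= X's hair count)."""
    return 1.0 - cutoff_belief.cdf(hair_count)

for hairs in (100, 438, 1000, 10_000):
    print(f"{hairs:>6} hairs -> belief strength {belief_that_bald(hairs):.4f}")
```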
The disadvantage is that it is silly. You can try hard, but you can’t seriously believe that adding or subtracting one hair makes a qualitative difference.
So an alternative approach misinterprets the ontological problem as a linguistic one. Semanticists are generally committed to the project of the previous chapter: finding ways to define the words of ordinary language using formal logic or something like it. Vague words have been a big problem for them, and it’s gradually become clear that most words are somewhat vague. Attempted solutions are highly technical and mostly involve non-standard extensions to formal logic.2 It’s hard to imagine that they correspond to how ordinary people think. Some semanticists find that embarrassing; others consider “how people think” to be a job for psychologists instead, so it’s not their problem.
A mainstream rationalist, an engineer perhaps, is also likely to take a linguistic approach. But whereas semanticists want to find the “real” meanings of vague words, an engineer might reject them instead:
Look, the objective truth is clear: this guy has some number of hairs. How many would count as “bald” is subjective and meaningless. It’s just an opinion, like whether Coldplay suck or are the greatest band ever. Words aren’t reality. If you want to know anything, you measure something and use math.
But if you look at a picture of Coldplay’s drummer, Will Champion, you instantly see that he’s pretty well bald, although he still has quite a few hairs on the sides and back. How many hairs? I don’t have even an approximate belief about that. I couldn’t guess to within a factor of ten. Would counting them change anyone’s judgement about whether he’s bald? No.
It would seem that one can know whether or not someone is bald, that in many cases there is indeed a truth of the matter, and that any workable epistemology has to acknowledge this. But maybe that is a delusion: should we bite the bullet, declare all vague terms defective, and let a rational epistemology simply ignore them?
The problem is that (as the previous chapter explained) nearly everything we think we know about eggplant-sized phenomena, we can only state in vague terms. Eggplants have few interesting mathematical or physical properties. Their biological and culinary properties, the ones that matter, are nebulous. Denying that we can have rational beliefs about eggplant-sized phenomena would limit rationality to physics and maybe chemistry. Some rationalists do back themselves into this position, but most feel it concedes too much.
A third class of approaches bites a different bullet, by admitting that the problem is ontological, not epistemological or linguistic. So: if “Alain is bald” is somewhat true, how true is it? Can we put a number on it? Let’s say an absolute truth is 1.0 true, and an absolute falsehood is 0.0 true, and “Alain is bald” is 0.7384261 true. This addresses the intuition that some sort-of truths are more true than others.3
Fact | Truth |
---|---|
Alain is bald | 0.7384261 |
Putting numbers on truth doesn’t buy you anything unless you can use math to get results that wouldn’t be obvious otherwise. For example, if—
- you somehow determine that “the gboma, an edible plant species related to the common eggplant, is also an eggplant” is 0.8360851 true; and
- “when you swallow a baby eggplant whole, it remains an eggplant for a while in your stomach as you digest it” is 0.71959253 true; and
- if there were a way of combining these observations arithmetically to discover that “a baby gboma is an eggplant when you swallow it whole” is 0.6460186 true; and
- the number 0.6460186 could tell you something of practical value in deciding what to do with a baby gboma;
—if all these were feasible, this would be a compelling theory. Unfortunately, none seem likely.
In particular, no version of this approach handles combination (the third point above) meaningfully, nor does it seem that any arithmetical theory could.
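Fuzzy logic, the best-known theory of this sort, does offer standard recipes (“t-norms”) for combining truth values in a conjunction. A minimal sketch of the three classic ones, applied to the gboma numbers above, shows the difficulty: they give three different answers, and nothing about eggplants tells you which, if any, to trust.

```python
# The three classic fuzzy-logic conjunction operators ("t-norms"), applied to
# the truth values from the gboma example. Each is standard in the fuzzy-logic
# literature; none is privileged by anything in the subject matter.
a = 0.8360851   # "a gboma is an eggplant"
b = 0.71959253  # "a swallowed baby eggplant remains an eggplant for a while"

t_norms = {
    "minimum (Godel)": min(a, b),
    "product":         a * b,
    "Lukasiewicz":     max(0.0, a + b - 1.0),
}

for name, value in t_norms.items():
    print(f"{name:16s} -> {value:.7f}")
# Roughly 0.72, 0.60, and 0.56 respectively -- none matches the 0.6460186 in
# the example, and none says anything about what to do with a baby gboma.
```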
Also, it may make sense to say that one statement is more true than another when both concern a single continuously-varying property. But comparing the degree to which “maroon is a red” is true with the degree to which “a gboma is an eggplant” is true seems inherently meaningless. Assigning a consistent set of numbers to such diverse statements seems impossible.4
This points to a more serious and pervasive problem: nebulosity is not mostly about quantitative degree. In fact, even the standard examples used to illustrate theories of vagueness are not, when considered more seriously, simply matters of degree. Suppose Daniel has 10,000 hairs mostly at the back of his head and Erika has 8,000 spread evenly: one might judge that Daniel is mostly bald, but Erika just has thin hair. Spatial distribution matters; that is why hair transplantation surgery can make a bald person non-bald. However, distribution would be difficult (and probably meaningless) to quantify.
Color is another standard example: where does red turn into orange? But even the simplest color theory has three axes (hue, saturation, lightness). Is vermillion (differing from central red mainly in hue) more or less truly red than crimson (differing from central red mainly in lightness)? You could create a unified difference metric by choosing axis weights, but it would have no meaning, unless it were formulated to address a particular practical purpose. Anyway, the hue/saturation/lightness model applies only to abstract pure colors of light, not to the real world.
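As an illustration of why axis weights do not help, here is a small sketch. The HSL coordinates are stylized to match the description above (vermillion differing from central red mainly in hue, crimson mainly in lightness), not measured values, and the weights are chosen arbitrarily, which is exactly the problem: different weightings give different verdicts about which color is “redder.”

```python
import math

# Stylized HSL coordinates (hue in degrees; saturation and lightness in 0-1).
# These are illustrative values matching the description in the text, not
# colorimetric measurements.
RED        = (0.0,  1.00, 0.50)   # "central" red
VERMILLION = (15.0, 1.00, 0.50)   # differs from red mainly in hue
CRIMSON    = (0.0,  1.00, 0.35)   # differs from red mainly in lightness

def hue_diff(h1: float, h2: float) -> float:
    """Angular difference between two hues, in degrees (0 to 180)."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def distance(c1, c2, w_hue, w_sat, w_light):
    """Weighted Euclidean distance in HSL space; the weights are arbitrary."""
    dh = hue_diff(c1[0], c2[0]) / 180.0  # scale hue difference to 0-1
    ds = abs(c1[1] - c2[1])
    dl = abs(c1[2] - c2[2])
    return math.sqrt(w_hue * dh**2 + w_sat * ds**2 + w_light * dl**2)

for weights in [(1.0, 1.0, 1.0), (10.0, 1.0, 1.0), (1.0, 1.0, 10.0)]:
    dv = distance(RED, VERMILLION, *weights)
    dc = distance(RED, CRIMSON, *weights)
    closer = "vermillion" if dv < dc else "crimson"
    print(f"weights {weights}: vermillion={dv:.3f}, crimson={dc:.3f} -> {closer}")
```

Weighting hue heavily makes crimson the “redder” of the two; weighting lightness heavily makes it vermillion. The ranking is an artifact of the weights, not a fact about the colors.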
Physical objects rarely if ever have “a color.” Examined closely, a “red” apple is invariably speckled, mottled, and streaked with various other colors. It is sometimes possible to say one apple is redder than another, but often no comparison is possible. They are both “red,” but quite differently so. The same may be true even of steel parts painted as uniformly gray as is practically feasible: close up, some texture is visible even to the naked eye. Different “gray” paints also vary slightly in hue, as well as lightness: detectably reddish, greenish, bluish. Shades of gray do not, in fact, form a continuum.
Further, color is not an intrinsic property of a point on a surface, but depends on illumination and viewing angle:
- Depending on the spectrum of incoming light, different wavelengths reflect from the same object. Even outdoors, the color of sky light depends on the weather and time of day. Our nervous system performs elaborate processing to compensate, so we rarely notice the objectively large differences.5
- Viewing angle affects apparent colors because some materials send different wavelengths in different directions. Most dramatic are materials that change their apparent color completely when you move your head. They are used both in car paints and as an anti-counterfeiting measure on money.6
Even considering all these peculiarities, color is one of the simplest cases of nebulosity—which is why it is the one most often tackled by rationalists seeking a workaround. Most categories, properties, and relationships, when examined closely, have unenumerable dimensions of variation. You cannot, in practice, specify all the ways an object differs from others—nor limit which aspects may be relevant in some future situation.
This is another manifestation of the same problem that bedeviled attempts at category specification in the previous chapter: the weird exceptions, complications, and borderline cases; the proliferation of variant subcategories, and the unbounded recursive expansion of features with features with features.
Rationalist analyses of vagueness jumped from the messy complexity of the real world (the colors of an apple) to a simplistic formal abstraction (a single axis of near-red hues). Then they devised esoteric mathematical techniques to attempt to solve the irrelevant simplistic problem. But those formalisms didn’t work even for that.
Let’s go back to the fictional dialogue at the beginning of this chapter. Both the anti-rationalist and the rationalist were partly right. The anti-rationalist was right that rationalists tend to make trivial formal models that leave out critical details, and talk themselves into absurd conclusions by mistaking their model for Truth. (Later in the book, we’ll talk about how to avoid this common error pattern.) This can include an insistence on binary “black and white” oppositions between poles that are not exclusive; notably, absolute truth vs. falsity.
The rationalist was right that continuously-graded phenomena are no obstacle to rationality. Many rationalists would go further:
Look, this whole discussion is making things much too complicated. “Bald” and “gray” are themselves just models. Every engineer learns that models are never perfect. They work because they approximate reality closely enough.
This is approximately true, and important. It points in the general direction of The Eggplant’s account of rationality. (You could take the whole book as applying an engineer’s attitude to traditionally philosophical questions.) However, the next chapter explains why “models are approximations” is only approximately true, and not an adequate understanding of nebulosity in general.
In fact, any attempt to make a definite theory of an inherently indefinite reality can’t work in general. It may produce a useful model in restricted circumstances.
When your model runs into trouble, you may need to do some ontological remodeling as part of building a more sophisticated one. There are various typical patterns for ontological change (discussed later in this book). One of the simplest and most common is replacing a binary opposition with a continuum.
The anti-rationalist’s invocation of “shades of gray” might be a way of pointing to the rationalist tendency to stick to a fixed formal model for too long. Rationalists do tend toward a stubborn reluctance to give up on models that aren’t working, and resist doing the meta-rational work of finding alternatives.
On the other hand, rationalists’ attempts to account for nebulosity using numbers for belief strength or for degrees of truth are examples of exactly the remodeling pattern the anti-rationalist recommended (“shades of gray”).
We found that these models of truth and belief were also inadequate as general theories, although both are routinely used in practice in some domains where they are effective. I suggested that they fail as universals because categories have many “dimensions of variation.” This is another common pattern of ontological remodeling: replacing a single variable with a bundle of several. Again the result is not accurate in general; there are no objective “dimensions” to the variability of eggplant-sized phenomena. That’s just another, more complex formal model of nebulosity. It’s more nearly accurate in a broader variety of cases, but not an absolute truth.
Looking ahead, what alternative approaches can reasonableness and meta-rationality offer?
Reasonableness is not interested in universality. It aims to get practical work done in specific situations. Precise definitions and absolute truths are rarely necessary or helpful for that. Is this thing an eggplant? Depends on what you are trying to do with it. Is there water in the refrigerator? Well, what do you want it for? What counts as baldness, fruit, red, or water depends on your purposes, and on all sorts of details of the situation. Those details are so numerous and various that they can’t all be taken into account ahead of time to make a general formal theory. Any factor might matter in some situation. On the other hand, nearly all are irrelevant in any specific situation, so determining whether the water in an eggplant counts, or if Alain is bald, is usually easy.
Usually, but not always. Rationality comes into play in the anomalous difficult cases. Then reasoning within a formal model may provide extraordinary value. But which formal model? What factors should it take into account, out of unenumerable details? In what sense does it matter whether this thing is an eggplant? Is it adequate to record “to what degree” the transplant patient is bald, or do we need a more sophisticated description? These are meta-rational questions.
- 1. For more detailed reviews, see the Stanford Encyclopedia of Philosophy entries for “Vagueness” and “Sorites Paradox.”
- 2. The best-known semantic approach to vagueness is called “supervaluationism.” It adds new truth values “supertrue” and “superfalse” to logic. How those work is hard to explain, so I won’t. Simple versions of supervaluationism fail quickly, so advocates have devised complex variants that handle some problem cases, and argue extensively about which one works least badly. No one seems to be really happy with any of them, though.
- 3. Theories of this sort are called infinite-valued logics. Fuzzy logic is the best-known of several varieties.
- 4. Infinite-valued logic runs into numerous other technical and philosophical problems. For a catalog of objections, and replies to them from an advocate, see Jeremy Bradley, “Fuzzy Logic as a Theory of Vagueness: 15 Conceptual Questions,” pp. 207–228 in Views on Fuzzy Sets and Systems from Different Perspectives (2009). Originally developed as an ontological model for vague language, fuzzy logic is said to have significant engineering applications, but hardly anyone still accepts it as metaphysically accurate.
- 5. The nervous system’s attempt to correct perceived color for illumination is called “color constancy.” That usually works well, but can be fooled, giving rise to startling optical illusions. The details of its operation are complex and not fully understood. It apparently involves processing both within the retina and in the brain’s visual cortex.
- 6. The ancient Romans produced “dichroic glass,” which changes color according to the angle of illumination, using silver and gold nanoparticles. The most spectacular example is the Lycurgus Cup, which is red when lit from behind and green when lit from the front.