Parts Two and Three of The Eggplant aim for a dramatic perspective shift. Whereas rationalism understands everyday reasonableness as a defective approximation to formal rationality, we will understand formal rationality as a specialized application of everyday reasonableness.1
This flip is best developed in the field of ethnomethodology, which could be described as the empirical study of reasoning and activity, both everyday and technical.2 Parts Two and Three draw primarily on ethnomethodological investigations; I will cite some specific studies as we go along. However, I avoid its difficult jargon, and do not attempt to summarize the field overall, nor even to present it entirely accurately.3
Flipping the relationship between reasonableness and rationality may be disorienting at first. From a rationalist perspective, a natural response might be:
It’s true that human reasoning capability depends on biological hardware that is ill-suited for rationality. You may have a point that cognitive biases and limitations inevitably contaminate science and engineering to some extent. Indeed, it may be worth understanding the details of those in order to eliminate them as much as possible, by bringing our reasoning and beliefs into better accord with rational norms.
This misses the point. Rather, from the meta-rationalist perspective:
Everyday reasonableness provides a wealth of resources that technical rationality necessarily depends on. It does things rationality, unaided, cannot. It is worth understanding the details of those in order to use them better in our practice of rationality, to power it up.
This change of perspective implies a change in explanatory priority. We need to understand reasonableness first (in Part Two), as the basis for understanding rationality (in Part Three). The title of Part Two, “Taking reasonableness seriously,” is meant to suggest that reasonableness is more complex and important than both rationalism and common understanding take it to be.
Rationalism supposes that formal rationality could, at least in principle, serve as a complete mechanism of thinking and acting. However, formal reasoning cannot, unaided, bridge the gap between formalism and reality. Only reasonableness can span that abyss. Reasonableness makes direct contact with nebulous reality, which rationality can’t. This is the main topic of Part Two. Abstracting from a concrete situation to the formal realm, and applying formal solutions in concrete activity, depend on reasonable perception, judgement, interpretation, and improvisation. This will be the main topic of Part Three.
Because it assumes rationality alone could suffice, rationalism takes reasonableness to be the same sort of thing as rationality. That is, it assumes reasonableness is also a way of manipulating representations of beliefs and choosing actions to satisfy goals. Rationalism points to cases in which using rationality, rather than reasonableness, works dramatically better—and rightly so! Indeed, starting from the mistaken understanding that reasonableness and rationality have the same function, it is natural to conclude that reasonableness is a defective approximation. However, manipulating representations and making decisions are not mainly how reasonableness works. Rationality and reasonableness have different functions. Meta-rationality requires choosing the right tool for the job: when, and how, to use reasonableness, rationality, or a particular combination of the two.
Explanatory priority is not a value judgement. Rationalism takes rationality as simply better than alternatives. The meta-rationalist agenda is not to invert that valuation (as some anti-rational ideologies attempt). Rather, meta-rationalism aims to use reasonableness and rationality for different purposes, and also to show their mutual dependence in technical practice.
The cognitive biases field of psychology assumes reasonableness is a bad approximation to rationality and investigates exactly how it differs. The results are both theoretically interesting and useful in practice. Specific patterns of bad reasoning result when one fails to apply rationality in cases where it is necessary. Recognizing these patterns allows correction.
Conversely, specific patterns of bad reasoning result when one applies rationality in cases in which it is not called for. We’ll discuss those at the end of Part Three. Our view will be that rationality is not uniformly superior; it’s a better tool for some jobs.
Shifting the prototypes of reason
Rationality developed as a collection of tools for reasoning better in certain sorts of difficult situations in which people typically think badly. Naturally, rationalism focuses its explanations on those situation types. It takes them as prototypical, and marginalizes and silently passes over more typical sorts of situations and patterns of thinking and acting. This emphasis tends to make rationality seem universally effective.
Gambling games and board games are fun partly because humans are inherently bad at them, and yet we can get better with practice. They are fun also because they are fair, so we can accurately compare skill levels. Making games learnable and fair requires engineering out nebulosity: uncontrolled extraneous factors that are “not part of the game.” That also makes games particularly easy to analyze formally. Much of technical rationality was invented either specifically to play formal games, or by taking formal games as conceptual models for other activities.
Formal games are a tiny part of what most people spend most of their time doing. They are also misleading prototypes for most other things we do, which intimately involve nebulosity. In particular, since technical work constantly grapples with nebulosity, theories of rationality developed to understand poker and chess miss much of what scientists and engineers do.
Cognitive bias research looks at how reasonableness fails to cope with formal games. Our shift in explanatory priority implies starting instead by examining situations that reasonableness deals well with.4 I will use making breakfast as a common source of examples. Formal rationality is neither necessary nor useful when frying an egg.
It’s tempting to think that, since our eventual goal is to understand rationality, it would be best to focus on what it’s good at. But that’s what got rationalism into all the sorts of trouble we reviewed in Part One. In Part Three, we’ll see that, since rationality depends on reasonableness to deal with nebulosity, understanding it really does need to start with breakfast, not chess.
So our strategy is to start by analyzing typical sorts of situations and typical (reasonable) ways of thinking and acting, and then extend that understanding to the atypical (rational) ones.
The transmutation of metaphysics
By shifting the prototypes of reason, the ethnomethodological flip allows us to dissolve the impossible conundrums of epistemology we found in Part One. Through the rest of the book, we’ll remodel metaphysical problems faced by theorists as practical hassles faced by ordinary people in the course of their routine work.
For instance, we replace the insoluble theoretical problem of reference—“how could non-physical propositions refer to physical objects?”—with the practical problem of “I’m explaining how to remove an engine flywheel over the phone; what can I say that will help this person see which screws to loosen?” We replace the insoluble theoretical problem of defining The Scientific Method with the practical problem of “what method can I use to stain only the neurons undergoing toxic dopamine catabolism?”
This may seem unsatisfactory, if you want a tidy general theory of reference, or a precise explanation of why science is unambiguously superior to pseudoscience. Unfortunately, such theories are unavailable. They’re certainly unavailable now, and probably will be forever, because there isn’t a coherent subject matter.
There is no general theory of what makes something a planet, because the category “planet” is inherently incoherent. So are categories such as “reference,” “belief,” and “science.” Attempts to make simple, crisp theories of these phenomena inevitably stray into metaphysical explanations, because no naturalistic ones are possible.
We can find naturalistic understandings of more specific, concrete phenomena, like “why do only mid-sized rocky planets have magnetic fields,” or “how is the word ‘the’ used to accomplish reference,” or “what are some ways of believing,” or “when is null hypothesis significance testing actually justified?”
1. Martin Heidegger first proposed this reversal of the rationalist explanation in Being and Time (1927); see Hubert L. Dreyfus’ Being-in-the-World (1990) for discussion, and Philip E. Agre’s Computation and Human Experience (1997), pp. 5-9, for a concise summary.
2. Ethnomethodologists might find this definition a bit off. They are wary of the word “reasoning” for its cognitivist implications, and of “empirical” for its scientistic implications, although with suitable qualifications most would accept both. In the field, the term for the “flip” is “respecification.”
3. I will do my best not to misrepresent it, but I am not an expert in the field. For an authoritative introduction, see John Heritage’s Garfinkel and Ethnomethodology (1991).
4. For more on the explanatory priority of everyday activity, see for instance Rodney A. Brooks, “Elephants Don’t Play Chess,” Robotics and Autonomous Systems 6 (1990), pp. 3-15; and David Chapman and Philip E. Agre, “Abstract Reasoning as Emergent from Concrete Activity,” Reasoning About Actions and Plans, Michael P. Georgeff and Amy L. Lansky, eds., Morgan Kaufmann, Los Altos, CA, 1987, pp. 411-424. For a broader treatment of the necessity of shifting prototypes, see the discussion of center/margin dynamics and technical metaphors in Chapters 2 and 3 of Philip E. Agre’s Computation and Human Experience (1997).