This is not cognitive science

Cognitive science is the field likely to come to mind if you think about academic studies of everyday thinking and acting. Just as Part One of The Eggplant might be mistaken for philosophy, because philosophy is the field that has mostly studied that Part’s subject matter, Part Two might at first be mistaken for cognitive science.

As we go along, it will sound less and less like cognitive science, and you might suppose that it is cognitive science done in a peculiarly bad way, presumably in order to justify false conclusions.

In fact, it is neither cognitive nor science. There is quite a lot to say about this. As a preview:

Cognitivism is the view that activity is best understood in terms of mental representations and mechanisms that manipulate them, because these cause actions. Cognitivism began as an inversion of simplistic behaviorist theories, which viewed events in the world as causing an organism’s actions in response. The Eggplant takes a third view, interactionism, which holds that causality typically crosses back and forth across the skull rapidly, as a matter of perception and action. For interactionism, understanding activity requires taking into account both environments and people. The unit of analysis for interactionism is skills, not things in the head or individual events. Skills are understood not as programs that live in your brain, but as effective patterns of interaction that extend in time and space and cross the boundaries of individuals.1

Inasmuch as rationality is a matter of action, not just thinking, a good understanding must take circumstances into account. Most rationalisms are cognitivist, and therefore mostly ignore circumstances in favor of explaining mental mechanisms. This is one reason they fail to accurately model real-world rational practice.

Going a step further, we will understand thinking itself as a mainly interactive process. In rare cases, it’s best to think while motionless with your eyes closed. Almost always, though, rational thought involves a high-frequency causal loop that crosses your skull. You work out a math problem using paper and pencil; you cannot write a program without your eyes constantly flicking back and forth across the code you’ve got on the screen.

Cognitive science aims to determine what sort of machinery is in the brain, and the mental mechanisms underlying rationality.2 The Eggplant mostly ignores these questions. No doubt there are such mechanisms, but in my view we don’t know enough about them to improve rationality much, which is the book’s aim.3

Fortunately, we don’t need to know what’s going on in the brain in order to observe, understand, and improve thinking and acting. The Eggplant is about what we can be seen to do, externally, when we do reasonableness, rationality, and meta-rationality. It’s based mainly on field observation, so this Part comes to sound more like anthropology than cognitive science. It’s about how we do reasonableness in the sense of “what ordinarily-observable operations do we apply in which circumstances,” rather than “how” in the sense of “which brain regions are active in an fMRI scan.”

Cognitivism is so pervasive and taken-for-granted as the default, background, assumed understanding of mind and action that you may find it difficult to set it aside. If you are enculturated into cognitivism, you may find yourself trying to understand everything in this Part as a causal explanation of mental machinery. If so, you will miss the point. That is not what this Part aims to do. I encourage you to set cognitivist explanatory reflexes aside if you can.

In Part Three, I will invert the cognitivist move. Cognitivism takes rationality as the basic phenomenon, with reasonableness understandable only as a defective approximation to it. I will present rationality as a particular application of reasonableness—not an overall-better version, but a specialization—and therefore only correctly understandable if reasonableness is understood.

Accounts, theories, and understandings

In The Eggplant, I distinguish three types of explanations: reasonable accounts, rational theories, and meta-rational understandings. I use these words in distinct, quasi-technical ways that don’t quite align with everyday usage, nor exactly with any previous technical usage. So bear in mind, as you read the book, that whenever any of these three words appears, it means something specific and a bit non-standard.

Overall, The Eggplant presents a meta-rational understanding. This was not obvious in Part One, which critiqued rationalist theories, and mostly didn’t provide alternative explanations. It will become apparent here in Part Two that The Eggplant’s explanations are not theories. This is one way in which it is not cognitive science. As a science, cognitive science attempts to formulate and test theories.

There is a chicken-and-egg problem here: until we reach Part Four, I won’t explain in detail what “meta-rationality” means, so I can’t yet explain in detail what a “meta-rational understanding” is. Therefore, the type of explanation in Part Two may seem defective—because it is not a theory, which is probably the sort of explanation you are used to valuing. So the aim of this section is to give a preliminary sense of what an “understanding” is, how it is different from a theory, and why you might find an understanding useful.

A theory is a rational explanation. A theory should be either true or false. (This is more-or-less the meaning of “theory” in the philosophy of science.) A proposed theory can be wrong in either of two ways. It may be false, if it contradicts facts. It may also be not even false, if it makes claims that couldn’t be either true or false. (Those are what the logical positivists called “meaningless” statements, somewhat misleadingly.) It’s hard to see how Hegel’s claim that “self-consciousness recognizes pure Thought or Being as self-identity, and this again as separation” could be either true or false; so it is not a rational theory in this sense.

A common way for a supposed theory to be not-even-false is to be stated in terms of a bad ontology: one with elements that are unusably indefinite, or outright non-existent. We can call those “metaphysical,” as an insult. For example, Hegel’s metaphysical claim is probably not-even-false, because capital-B Being is probably not well-enough defined to make it possible to say anything false (or true) about it. The premodern theory of combustion was wrong because it was stated in terms of a non-detectable metaphysical substance, phlogiston, that turned out not to exist.

Rationalisms attempt, implicitly or explicitly, to be theories of rationality. They may have outright false aspects, but mostly they are not-even-false. For example, decision “theory” is stated in terms of a non-detectable metaphysical substance, “utility,” that almost certainly doesn’t exist. Most correspondence “theories” of truth are stated in terms of non-detectable, ill-defined metaphysical entities, “propositions,” which almost certainly don’t exist. These are failures as “theories” not because they are false, but because their ontologies are wrong.4

A rational theory is primarily epistemological; it wants to be a collection of true beliefs. A meta-rational understanding is primarily ontological; it wants to be a collection of useful distinctions.

Understandings are prior to theories: before you can make true statements, you have to have an ontology within which true statements could be formulated.

An understanding mostly cannot be true or false. It can be wrong if its ontology is misleading or unhelpful: one in which true-enough statements cannot be formulated.

Truths are often valuable. Understandings are not better (or worse) than theories; they are a different sort of thing. Understandings may incorporate individual truths or whole theories, because meta-rationality includes rationality and is not opposed to it. An understanding is good (or not) partly on the basis of whether it supports true theories.

Decision theory and the correspondence theory are often useful in practice despite being not-even-false. There is no utility in reality, but choosing to treat some real-world quantity as if it were utility can lead to helpful analyses.5 Parts Three and Four explain how we make this “as if” work.
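To make the “as if” concrete, here is a minimal sketch, with entirely hypothetical numbers, of the move footnote 5 describes: treating dollar amounts directly as utility and then running a standard expected-value analysis on them.

```python
# Hypothetical example: treat money as if it were utility, then pick the
# option with the highest expected value, as decision theory prescribes.

options = {
    "safe":   [(1.0, 50)],             # $50 for certain
    "gamble": [(0.5, 120), (0.5, 0)],  # coin flip: $120 or nothing
}

# Expected "utility" of each option: sum of probability * payoff.
expected = {
    name: sum(p * payoff for p, payoff in outcomes)
    for name, outcomes in options.items()
}

print(expected)                         # {'safe': 50.0, 'gamble': 60.0}
print(max(expected, key=expected.get))  # 'gamble'
```

The analysis can be helpful even though money is not utility; whether it is helpful in a particular situation is exactly the kind of judgment Parts Three and Four address.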

A meta-rational understanding is not a metaphysical speculation. Like a theory, an understanding should be grounded in specific empirical observations. Like a theory, an understanding can be found to be wrong on the basis of evidence. “Using these two specific different things leads to different results, but this understanding has no way of distinguishing them” shows that the understanding is, at best, incomplete. The way an understanding relies on evidence is different from the way a theory does, however. (We will have to wait for Part Four before getting to that.)

Part Two offers an understanding of reasonableness, and Part Three an understanding of rationality. Neither is intended as a theory. The Eggplant is not science (cognitive or otherwise) because there mostly aren’t yet detailed theories of the phenomena I discuss. However, presenting understandings rather than theories doesn’t mean they don’t rely on evidence, and it isn’t an evasion of testability. Parts Two and Three could be wrong, and could be shown to be wrong. (I expect some details, at minimum, are wrong.)

The evidential base for The Eggplant decreases as it goes along.

It may be best to regard The Eggplant as a collection of observations that doesn’t prove anything, but instead invites you to observe your own technical work; suggests that you may see similar things; and (in Parts Four and Five) offers suggestions for how those observations may lead you to do rationality somewhat differently and better.

Because The Eggplant is meant to be practically useful for technical professionals, not an academic work, I won’t discuss or even footnote sources extensively. I expect most readers will not want to hear much about cognitive-developmental psychology, visual psychophysics, ethnomethodology, or the history of science. If you are an exception, the Further Reading Appendix may be interesting.

At the beginning of this section, I mentioned “accounts” as a third type of explanation. Accounts are a central feature of reasonableness, and we’ll cover them later in Part Two. However, no part of The Eggplant is itself intended as an account, so I don’t need to say more about them in this preliminary meta discussion.

Descriptive and normative theories

Going all the way back to the Ancient Greeks, rationalism has tended to equivocate about whether it is solely a normative theory, or a descriptive one as well. Rationalism is always normative: you should be rational, and here’s how to do that.6 Some rationalisms also make the descriptive claim that you are rational, and here’s how you do do it.

For example, many theories in cognitive science have held that human reasoning not only ought to conform to mathematical logic, it actually does. There’s overwhelming evidence against that claim, so it is much less popular now than a few decades ago. Some cognitive scientists still take probabilism as a descriptive theory of the brain, however.7

Since people are obviously mostly not rational, descriptive rationalism has mainly retreated to the claim that part of your brain is properly rational, and part of it isn’t. We’ll come back to that in the next section.

Like rationality, reasonableness is normative, but quite differently. It is not so much the content of reasonable norms that differs as the way they operate, which we’ll call “accountability.” Understanding this is central to understanding what reasonableness is; I explain it later in Part Two.

Meta-rationalism accepts the distinct normativities of reasonableness and rationality, as they are. Therefore, Parts Two and Three of this book are descriptive only. They merely describe ways of being that are themselves normative, and I do not challenge or amend either type of norm.

However, Part Four is normative, in a third way. Just as rationalism says you ought to be rational, meta-rationalism says that, as a technical professional, you ought to develop your meta-rational competence; you have a moral duty to do so.

This is not a dual-process theory

People are rational some of the time, but not all the time. Maybe there’s part of us that is rational, and part that isn’t? This is an attractive theory, because it suggests that we could be consistently rational, or at least rational more often, if we strengthen our rational part in its struggle with the other part. (This idea also goes back to the Ancient Greeks.)

So what is that other part? Rationality has been contrasted with—among many other qualities—irrationality, emotionality, intuition, creativity, superstition, religion, fantasy, imagination, self-deception, unconscious thought, and subjectiveness.

Rationalists tend to blur or collapse all these non-rational phenomena into a single homogeneous, inferior, and uninteresting category. In psychology, this is called dual-process theory: there are just two primary mental faculties or modes of thought, the rational one, and all that other stuff. A recent version, popularized in Daniel Kahneman’s Thinking, Fast and Slow, describes them as System 1, which is fast, intuitive, and emotional; and System 2, which is slower, more deliberative, and more logical.8 According to this theory, “cognitive biases” explain irrationality, which consists of allowing System 1 to act instead of System 2.

Because ideas like this are pervasive in our “folk understanding” of thinking, an easy way to misunderstand “reasonableness” is as System 1, or the general non-rational way of thinking. This would be wrong for three reasons.

A few things we do know about brains

Brains have evolved for hundreds of millions of years, almost entirely for routine practical activities such as collecting food and avoiding predators. Systematic rationality is primarily a modern product of the European Enlightenment,9 so presumably it is mainly the result of cultural evolution, not biological evolution. There hasn’t been time for a separate brain system for rationality to evolve. We must do calculus mostly by re-using mechanisms that evolved for finding berries. It’s not surprising that we find it difficult and are bad at it.

We don’t have much understanding of what neurons do. One thing we do know is that whatever they do, they do it extremely slowly, relative to computer hardware. It takes about ten milliseconds for a neuron to do anything, yet some mental operations complete in about 100 milliseconds. That leaves time for a chain of only about ten sequential neural steps, which probably rules out many sorts of sequential processing, such as those necessary for logical inference.

Another thing we know about neurons is that there are an awful lot of them, and each connects to an awful lot of others. Estimates vary, but maybe a hundred billion neurons, making a quadrillion connections total.
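As a back-of-the-envelope check, here is the arithmetic these two paragraphs rely on, in a short Python sketch; all the figures are order-of-magnitude estimates from the text, not measurements.

```python
# Rough arithmetic from the estimates above.

neuron_op_seconds = 0.010   # ~10 ms for a neuron to do anything
mental_op_seconds = 0.100   # some mental operations finish in ~100 ms

# A strictly sequential chain of neural steps can be at most this deep:
print(mental_op_seconds / neuron_op_seconds)  # ~10 steps

neurons = 1e11              # ~a hundred billion neurons
connections_total = 1e15    # ~a quadrillion connections
print(connections_total / neurons)            # ~10,000 per neuron
```

Ten sequential steps is nowhere near enough for extended logical inference, but a quadrillion connections is enormous capacity for broad, shallow, parallel processing.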

What we are extraordinarily good at, and don’t have much conscious access to, is making sense of concrete situations in terms of background understanding of meaning. If someone asks “is there any water in the refrigerator?” you immediately know that they want to drink some, without having to reason about it. This sort of understanding was useful in our evolutionary history, while calculus, had it existed, would not have been.

Putting these facts together, it seems that much of what the brain does must involve shallow consideration of extremely large numbers of possible meanings simultaneously. Almost all possible interpretations are wrong. The bit of your brain that tries to explain everything in terms of wooden spoons fails, as does the one with a passion for ocelots. The thirst bit discovers that its obsession is relevant, presumably based on your experiences of drinking water from refrigerators, so that’s the sense you make of the question.10
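Purely as a cartoon of that broad, shallow, parallel idea, and emphatically not as a cognitive model (see footnote 10), one might sketch it like this; all the names and scores are invented for illustration.

```python
# Cartoon: many specialist "bits" each shallowly score a situation for
# relevance, in parallel; the best match supplies the interpretation.

interpreters = {
    "wooden spoons": lambda q: 0.01,  # almost never relevant
    "ocelots":       lambda q: 0.01,  # likewise
    "thirst":        lambda q: 0.90 if "water" in q else 0.05,
}

question = "is there any water in the refrigerator?"
scores = {name: score(question) for name, score in interpreters.items()}

print(max(scores, key=scores.get))  # 'thirst': the sense you make of it
```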

In Part Three, we’ll see how key methods of rationality work by stopping these processes from jumping to wrong conclusions. “Fast vs. slow” is not altogether wrong; it’s just not a good theory of brain mechanisms.

Never mind the Church-Turing Thesis

Some rationalisms claim it is impossible that humans could be other than rational, as a matter of principle.

Any method of reasoning other than mathematical logic will lead you to holding contradictory beliefs. From any two contradictory beliefs, you can deduce all falsehoods. Since we don’t believe all false things, our brains must run on logic.
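The “deduce all falsehoods” step is the classical principle of explosion (ex falso quodlibet). For reference, a standard derivation, in LaTeX notation:

```latex
% From contradictory beliefs P and \neg P, any Q whatsoever follows:
\begin{align*}
&1.\ P          && \text{(belief)} \\
&2.\ \neg P     && \text{(contradictory belief)} \\
&3.\ P \lor Q   && \text{(from 1, disjunction introduction)} \\
&4.\ Q          && \text{(from 2 and 3, disjunctive syllogism)}
\end{align*}
```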

That was quite popular in the 1980s; you don’t hear it anymore. Here’s one I’ve encountered several times recently:

The Physical Church-Turing Thesis, an absolute truth, says that all possible computations can be specified by a set of rational, formal, mathematical rules. Therefore, humans, who are bound by the laws of physics, cannot be anything other than rational. In particular, there can be no such thing as “meta-rationality” beyond rationality; that’s physically impossible. And “reasonableness” cannot be anything other than a defective approximation to rationality.

Oddly, people who say this are usually also vociferous proponents of the superiority of rationality to irrationality, and quick to condemn opponents as irrational. The logical contradiction between “it’s impossible for brains to do anything other than rationality” and “most people are hopelessly irrational” appears not to bother them. The confusion here seems to be between “all rational systems are formal” and “all formal systems are rational.” A rational system must have some “distinctive virtue”; most formal systems do not. A randomly-generated computer program is formal, but will almost certainly do nothing useful.

Once one grants that people can be non-rational in one way—specifically, irrational—the possibility of being non-rational in other ways (such as reasonable or meta-rational) cannot be ruled out by this argument. In particular, meta-rationality does not involve “hypercomputation,” meaning computing the mathematically uncomputable.

If rationality is defined as “any sort of thinking or acting that works well,” then reasonableness and meta-rationality, which often work well, would indeed have to be rational. This is not usually what rationalism means by “rational,” though; it’s too vague to make a theory of. Usually the definition is something more specific, about which optimality might be proven.

The sorts of thinking and acting that rationalisms have usually theorized about are the same ones I will term “rational.” Rationalisms have generally not made theories of reasonable or meta-rational sorts of thinking and acting. Drawing these distinctions is just a terminological choice, and terminological choices are never correct or incorrect. Perhaps rationalist theories could be extended to cover reasonable and meta-rational activities; introducing these additional terms points out that so far they have not been. It suggests that these are important, mostly-overlooked aspects of technical practice, which might need a different sort of explanation.

This is not about folk theories

If you ask non-philosophers about philosophical topics, such as how rationality or science work, they frequently give confident explanations. Philosophers call such answers “folk theories.” They treat folk theories as useful sources of intuition, and starting points for analysis, but consider them confused and inadequately precise. Many philosophers think that their job is to fix folk theories. Often their “fixes” replace vague, complex, mostly-accurate understandings with formal, simplistic, blatantly-wrong theories.

Recognizing this, other theorists may champion folk understandings against academic ones. In that approach, to develop a better theory of rationality than rationalism’s, one might try interviewing lots of technical professionals, asking them how they think rationality works, and hope to summarize and synthesize their answers. As far as I know, no one has seriously attempted this. It might be worth trying, but my guess is that it would not go well.

If you ask scientists how science works, they typically recite a theory of The Scientific Method they learned in high school. That theory entered the science curriculum in the mid-twentieth century and hasn’t been revised since. It’s a simplified version of the mid-century state of the art in the philosophy of science: a bit of late logical positivism mixed up with a bit of Popper’s falsificationism. This theory is incoherent and false, but it doesn’t do too much harm because it’s also irrelevant to what scientists actually do. If you ask a scientist how their science works—about the problem they are working on today—they will give a detailed, accurate explanation, which has nothing to do with their regurgitated version of The Scientific Method.

It seems folk theories of rationality are derived from obsolete rationalisms; collecting and synthesizing them would probably just give you back a muddled version of 1950s philosophy of science.

So that is not the method of The Eggplant. Instead, its understanding derives from observational studies of how people actually do technical work—not how they believe they do it, or what they say about how they do it. This understanding is as dissimilar to folk theories as it is to rationalism.

  1. For a useful overview of interactionism, see Philip E. Agre, “Computational research on interaction and agency,” Artificial Intelligence 72 (1995), pp. 1–52. Although this introduced a collection of AI papers, Agre surveys interactionist approaches in diverse fields, including neuroscience, dynamical systems theory, evolutionary theory, activity theory, developmental psychology, phenomenology, sociology, and anthropology. The sources he references have also been critical in forming my understanding in The Eggplant.
  2. Cognitive scientists have increasingly recognized the deficiencies of cognitivism, and are increasingly taking interactionism on board as the alternative. One term for the movement is “4E,” standing for “Embodied, Embedded, Extended, Enactive.” For a recent survey, see The Oxford Handbook of 4E Cognition.
  3. My view that cognitive science is mostly unhelpful is not universal; many books do aim to help you think and act better by drawing on cognitive science. Their approaches are quite different from that of The Eggplant; they may offer complementary value.
  4. Decision theory and model theory are, however, fine as mathematical theories, a technical term that means “all the statements that can be deduced from a set of axioms.” (Model theory is a formal version of the correspondence theory of truth.) Mathematical theories are never true or false, and are not “theories” in the sense I’m using here.
  5. In economics and financial theory, money is routinely treated as if it were utility. This is not even approximately true, but often useful.
  6. Some rationalisms hold that being rational, according to their normative standard, is impossible. In that case, they may be “prescriptive,” as well as “normative,” by telling you what you should do to be more nearly rational.
  7. The “free energy principle” of Karl Friston is perhaps the most popular descriptive probabilistic rationalism currently.
  8. Interestingly, the System 1/2 terminology originated with Keith Stanovich. He subsequently made the point that “System 1” is misleadingly heterogeneous. He also introduced a “tri-process theory” in which one of the three is explicitly meta-rational. In cognitive science, meta-rational operations are often described as “reflective,” and Stanovich’s third process is a “reflection” that judges when it’s worth applying “algorithmic” rationality. “Distinguishing the reflective, algorithmic and autonomous minds: Is it time for a tri-process theory?” In J. St. B. T. Evans & K. Frankish (Eds.), In Two Minds: Dual Processes and Beyond, 2009, pp. 55–88.
  9. Systematic rationality also developed to some extent in other great civilizations during the past 2500 years or so: Ancient Greece, Rome, India, China, and others. Some parts of the medieval Tibetan philosophical literature are startlingly systematic, as another instance.
  10. This is not intended as a specific cognitive model, I don’t know of any specific evidence for it, and nothing in the book turns on it. It’s intended as a vague possible a priori explanation for how the brain may deal with the unenumerability of potential considerations.