It’s the perfect postmodern nightmare. You wake up to discover that you are the anti-hero character in a novel. Worse, it is a famously badly written novel. It is, in fact, an endlessly long philosophical diatribe pretending to be a novel. And it uses all the tiresome technical tricks of postmodern fiction. It is convolutedly self-referential; a novel about a novel that is an endlessly long philosophical diatribe pretending to be a novel about a novel about…
I’ve just read Ken Wilber’s Boomeritis. It’s all that.1
And it seems to be about me. I mean, me personally.
The book diagnoses the psychology of a generation. Many readers have said it is about them, in the sense that they are of that generation, and they discovered ruefully that Boomeritis painted an accurate portrait.
But the central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints.
That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.
Do you know about “delusions of reference”? They are a form of “patternicity”—seeing meaning where there is none. You believe that communications that have nothing to do with you are actually about you. It’s a typical symptom of schizophrenic psychosis.
Is Boomeritis actually about me? Or is my suspicion that I am the model for its central character a sign of psychosis? A question probably only of interest to me—but I’ll return to it at the end of this page.
Mostly, instead, I’ll explain ways in which the novel, and my work at the MIT AI Lab, are relevant to Meaningness. All three concern the problem of the relationship of self and other. They address the stalemate that has resulted from confused, polarized ways of understanding separation and connection.
Orange, green, yellow
Boomeritis is an overview of “Spiral Dynamics”, which is a big fat hairy Theory of Life, The Universe, and Everything. I don’t like those, but the book gradually persuaded me that this one can be useful.
Spiral Dynamics contrasts three worldviews, which for some awful reason are called “orange,” “green,” and “yellow.”2 You’ll find the first two, at least, familiar:
Orange
- Rationality, science, technology, objectivity
- Materialist, capitalist, pragmatic, utilitarian
- Autonomy, independence, competition, results
- Planning, controls, contracts, procedural justice
- Detached, abstract, reductionistic, alienated
- European (intellectual) Enlightenment; modernism
Orange tends toward dualism: the wrong idea that the self is totally distinct from the world.
Green
- Spiritual, emotional, intuitive, subjective
- Relativist, pluralist; diversity, multi-culti, “political correctness”
- Consensus, dialog, community, process
- Harmony, healing, self-realization, social justice
- Connecting, supporting, sharing, togetherness
- Eastern (spiritual) Enlightenment; postmodernism
Green tends toward monism: the wrong idea that self and other are totally connected.
The war of orange and green
You can figure this part out, right? These views hate each other; each thinks the other is the fast road to hell.
Yellow
The thing is, orange and green are both right.
They are also both wrong. Their virulent criticisms of each other are both correct. But their own central values are also both correct. We need the right parts of both, without the wrong parts.
That combination, supposedly, is yellow:
- Big picture, open systems, networks, global flows
- Flexible, simultaneous consideration of multiple perspectives
- Tolerance for chaos, change, and uncertainty
- Integration of ranking (hierarchy) and linking (community)
- Caring combined with freedom
- Voluntary, spontaneous cooperation rather than either win/lose competition or compulsory consensus processing
- Capacity to act in both orange and green modes as appropriate
If this sounds less specific than the other two, it might be because “yellow” is a work in progress. I do think it’s pointing in the right direction, toward what I call participation. That is the way to avoid the false alternatives of monism and dualism.
Boomeritis
The Baby Boom generation—people born roughly 1946 through 1960—was the first to include many with a green worldview. That is a great accomplishment; green is a partly-right response to errors in orange.
“Boomeritis” is the syndrome of getting stuck at green, fighting fruitlessly against orange, and failing to move on to yellow. When Boomeritis looks at yellow, it sees orange—because yellow incorporates aspects of orange that green rejects.
The cause of Boomeritis is narcissism. It is based on “nobody can tell me what to do.” Hierarchy is unacceptable. I take direction from no one. Objective facts limit my fantasies, so science must be a patriarchal, oppressive myth. Reasoning often points out that I am wrong, so I ditch rationality for “intuition,” which somehow always tells me what I want to hear. In any competition, I might lose, so everyone must be awarded gold stars, because everyone is equal. Nothing, and no one, can be better or worse than anything or anyone else. That wouldn’t be nice—because then I might not be the most special thing in the universe.
This disease is not restricted to Boomers. Not all Boomers have it, and many who are younger or older do.
After postmodernism
“Postmodernism” is the academic version of the green worldview. Boomeritis was originally written as an academic/philosophical criticism of postmodernism. Wilber says it was pretty unreadable when he’d finished, so at the last minute he turned it into a novel instead.
Postmodernism, although obscure and abstruse, is important because it is the dominant orthodoxy in academia, and university indoctrination is one of the main ways Boomeritis is transmitted to younger generations.
It is also important because, beneath its billowing briny blather, postmodernism’s green critique of orange is right.
The problem is that, on its own, green leads inexorably to nihilism. That is not obvious; Boomeritis spends most of its 456 pages explaining it. Here’s a super-condensed version:
- If meaning is purely subjective, and you embrace all perspectives as equally valid, then at points of disagreement meaning completely disintegrates.
- If ethics is merely cultural convention, there is no way to condemn evils such as the “honor killing” of women who have been raped.
- If everyone is automatically equal, there is no call to be any better than you are. There is no possibility of nobility.
- If everyone is supposed to be equal, all differences must be due to evil oppressors. Anyone who is not an oppressor is an all-good victim. Since we are victims, the oppressors must be them. We ought to rebel against the oppressors (and probably kill them all). But this is automatically doomed to failure, because (by definition) the oppressors have all the power (or else we might not be victims, just lazy). So we’d better not actually try to improve anything; instead, we’ll demonstrate sincerity with the vehemence of our denunciations.
After thirty years of chewing on such contradictions, it’s widely understood that postmodernism is unworkable. There is no way forward within the green worldview.
So now what?
What comes after postmodernism?
Shockingly few people seem to be working on that question. It’s hard, because green’s logic, its critique of orange, seems unassailable; yet it leads to a bleak dead end.
Somehow we need to integrate what is right in both the orange and green worldviews to produce some sort of “yellow.” This web site—Meaningness—could be seen as one attempt at that.
Boomeritis does a fine job of exposing the contradictions in green, and has a decent sketch of what yellow might look like. But then…
Whoa! Ken, WTF?
Although I admire Boomeritis, I oppose much of Wilber’s other work. Mainly he advocates monist eternalism, which I think is disastrously wrong.
In fact, Wilber (together with Eckhart Tolle) seems to be the main source for a new form of pop spirituality.3 This movement repackages the German Idealist philosophy Wilber loves, in a glossy new “spiritual but not religious” form that particularly appeals to younger generations.
The key ideas here are eternalism and monism:
- Eternalism: there is a God (but sometimes we’ll call it something else, like “The Absolute,” to deflect the arguments for atheism).
- Monism: you, God, and The Entire Universe are All One.4
Just at the end of Boomeritis, something really bad happens.5 [Spoiler warning:] The anti-hero (who may be me) becomes God.
Wilber proposes that becoming God is what comes after yellow—and the main reason to get to yellow is to go on to become God.
This quest to become God is a central theme in his other work, so I shouldn’t be surprised; but I am appalled.
It’s not just that I think it’s wrong. It’s that his own critique of the green worldview—its monism and its narcissism—seems to apply directly.
He recognizes the contradiction, and dismisses it. He makes the usual monist-eternalist move, which goes something like this:
When we say ‘God,’ we don’t mean God, we mean The Absolute, which is ineffable, and is the same as The Entire Universe. You have to admit that the universe exists. And when we say ‘you,’ we don’t mean your ordinary ego, we mean your true self, which is divine and pure, so there’s no narcissism involved. See? No problem.
This is hokum. There is no Absolute, you are not the entire universe, and there is no “true self.” This stuff is simple wish-fulfillment; a fantasy of personal omnipotence and immortality. (As I will explain in plodding detail in the book.)
Artificial intelligence
The interesting part of AI research is the attempt to create minds, people, selves.6 Besides the fun of playing Dr. Frankenstein, AI calls orange’s bluff.
Orange says that rationality is what is essential to being human. If that’s right, we ought to be able to program rationality into a computer, and thereby create something that is also essentially human—an intelligent self—although it would not be of our species.
This project seemed to be going very well up until about 1980, when progress ground to a halt. Perhaps it was a temporary lull? Ironically, by 1985, hype about AI in the press reached its all-time peak. Human-level intelligence was supposed to be just around the corner. Huge amounts of money poured into the field. For those of us on the inside, the contrast between image and reality was getting embarrassing. What had gone wrong?
An annoying philosopher named Hubert Dreyfus had been arguing for years that AI was impossible. He wrote a book about this called What Computers Can’t Do.7 We had all read it, and it was silly. He claimed that a dead German philosopher named Martin Heidegger proved that AI couldn’t work. Heidegger is famous for being the most obscure, voluminous, and anti-intellectual philosopher of all time.
I found a more sensible diagnosis. Rationality requires reasoning about the effects of actions. This turned out to be surprisingly difficult, and came to be called the “frame problem.” In 1985, I proved a series of mathematical theorems that showed that the frame problem was probably inherently unsolvable.8
This was a jarring result. Rational action requires a solution to the frame problem; but rationality (a mathematical proof) appeared to show that no solution was possible.9
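If you want a feel for why this is hard, here is a toy sketch of the classic version of the problem. The domain, the names, and the code are all invented for illustration; this is not the formalism of the 1987 paper (footnotes 8 and 9 have the technical statements). The point is that a logical reasoner must be told not only what each action changes, but also, explicitly, everything it leaves alone:

```python
# Toy illustration of the frame problem (an invented example, not the
# formalism of the 1987 paper). Each action description says what it
# changes; a logical reasoner also needs "frame axioms" saying what it
# does NOT change, and those multiply with the size of the domain.

fluents = ["door_open", "light_on", "cat_fed", "kettle_hot"]

actions = {
    "open_door":   {"door_open": True},
    "flip_switch": {"light_on": True},
    "feed_cat":    {"cat_fed": True},
    "boil_kettle": {"kettle_hot": True},
}

# One frame axiom per (action, untouched fluent) pair:
frame_axioms = [
    f"{act} does not change {fluent}"
    for act, effects in actions.items()
    for fluent in fluents
    if fluent not in effects
]

print(len(frame_axioms))  # 12 here; realistic domains have thousands
                          # of fluents and actions
```

Enumerating the axioms is merely tedious; the real trouble, which the theorems addressed, is the search a planner must do over chains of such action descriptions.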
Orange had turned against itself, and cut off the tree-limb it was standing on. Still, as we hurtled to the ground, we figured that we’d somehow find a way out. There had to be a solution, because of course we do all act rationally.
At this point, Phil Agre came back from a gig in California with a shocking announcement:
Dreyfus was right.
What?? Had Phil gone over to the Dark Side?
But with the announcement, he brought the secret key: a pre-publication draft of Dreyfus’ next book, Being-in-the-World, which for the first time made Heidegger’s magnum opus, Being and Time, comprehensible.
Being and Time demolishes the whole orange framework. Human being is not a matter of calculation. People are not isolated individuals, living in a world of dead material objects, strategizing to manipulate them to achieve utilitarian goals. We are always already embedded in a web of connections with living nature and with other people. Our actions are called forth spontaneously by the situation we find ourselves in—not rationally planned in advance.
If you have a green worldview, you’re thinking “duh, everyone knows all that—we don’t need a dead German philosopher to tell us.” But it is only because of Heidegger that you can be green. More than anyone else, he invented that worldview.
Being-in-the-World showed us why the frame problem was insoluble. But it also provided an alternative understanding of activity. Most of the time, you simply see what to do. Action is driven by perception, not plans.
Now, seeing was something we AI guys knew about. Computer vision research had been about identifying manufactured objects in a scene. But could it be redirected into seeing what to do?
Yes, it could.10 In a feverish few months, Agre and I developed a completely new, non-orange approach to AI.11 We found that bypassing the frame problem eliminated a host of other intractable technical difficulties that had bedeviled the field.12
In 1987, we wrote a computer program called Pengi that illustrated some of what we had learned from Dreyfus, Heidegger, and the Continental philosophical tradition.13 Pengi participated in a life-world. It did not have to mentally represent and reason about its circumstances, because it was embedded in them, causally coupled with them in a purposive dance. Its skill came from spontaneous improvisation, not strategic planning. Its apparently intelligent activity derived from interactive dynamics that—continually involving both its self and others—were neither subjective nor objective.
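For flavor, here is a minimal sketch of that kind of control loop. Everything in it (the game interface, the rules, the entity names) is invented for this page; the real Pengi played the arcade game Pengo and was far more elaborate. The hyphenated names echo Pengi’s “deictic” entities: things picked out by their relationship to the agent, not by objective identity.

```python
# A minimal sketch of perception-driven action, in the spirit of Pengi
# (all names invented for illustration). Note what is absent: no world
# model, no plan, no memory of past states.

def perceive(screen):
    """Register deictic entities: things defined by their relationship
    to the agent ("the-bee-chasing-me"), not by objective identity
    ("bee #7")."""
    return {
        "the-bee-chasing-me": screen.nearest_threat(),
        "the-block-beside-me": screen.adjacent_block(),
    }

def decide(percept):
    """Combinational rules mapping the current percept to an action."""
    if percept["the-bee-chasing-me"]:
        if percept["the-block-beside-me"]:
            return "kick-the-block-beside-me"
        return "run-away"
    return "wander"

def run(game):
    """The whole agent: a perception-action loop, coupled to the
    situation one step at a time."""
    while not game.over():
        game.step(decide(perceive(game.screen())))
```

There is no representation of the future anywhere in the loop; whatever foresight the agent displays is carried by the interaction between its rules and the world.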
Pengi was a triumph: it could do things that the old paradigm clearly couldn’t, and (although quite crude) seemed to point to a wide-open new paradigm for further research. AI was unstuck again! And, in fact, Pengi was highly influential for a few years.
Although arguably non-orange, Pengi was hardly green. Particularly, it was in no sense social. The next program I wrote, Sonja, illustrated certain aspects of what it might mean for an AI to be socially embedded.14 I will have more to say about this elsewhere when I explain participation, the nebulosity of the self/other boundary, and the fact that meaningness is neither subjective nor objective. This work is arguably “yellow,” in offering orange-language explanations for green facts of existence.
There was another problem. Pengi’s job was to play a particular video game. Its ability to do that had to be meticulously programmed in by hand. We found that programming more complicated abilities was difficult (although there seemed to be no obstacle in principle). Also, whereas ant brains perhaps come wired up by evolution to do everything they will ever do, people are flexible and adaptable. We pick up new capabilities in new circumstances.
The way forward seemed to be machine learning, an existing technical field. Working with Leslie Kaelbling, I tried to find ways an AI could develop skills with experience.15 The more I thought about this, though, the harder it seemed. “Machine learning” is a fancy word for “statistics,” and statistics take an awful lot of data to reach any conclusions. People frequently learn all they need from a single event, because we understand what is going on.
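To make the point concrete, here is a schematic of tabular Q-learning, a standard reinforcement-learning algorithm of that era. This is a generic sketch, not the algorithm of the 1991 paper (which was about input generalization). Notice that each experience nudges an estimate by only a small step, so the same situation must recur many times before the numbers mean anything:

```python
# Schematic tabular Q-learning (a generic sketch, not the algorithm of
# the 1991 paper). Each experience moves an estimate a fraction ALPHA
# toward the observed return, so reliable values need many visits to
# every (state, action) pair.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state, actions):
    """Mostly exploit current estimates; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state, actions):
    """Nudge one estimate toward reward plus discounted future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```

A person who understands what is going on can often skip all that repetition: one event, correctly interpreted, settles the matter.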
In 1992, I concluded that, although AI is probably possible in principle, no one has any clue where to start. So I lost interest and went off to do other things.16
In Boomeritis, the anti-hero—who may be me—says:
I know, the computer part sounds far out, but that’s only because you don’t know what’s actually happening in AI. I’m telling you, it’s moving faster than you can imagine. (p. 306)
The reality, though, is that AI is moving slower than you can imagine. There’s been no noticeable progress in the past twenty years. And a few pages later “I” explain why:
There are some real stumbling blocks, things having to do mostly with background contexts and billions of everyday details that just cannot all be programmed. (p. 331)
Delusions of reference
In Boomeritis, the AI plot is a paper-thin “frame story” around the long philosophy lecture.17 There’s just enough detail to make me think Ken Wilber did visit the MIT AI Lab, though.
I suspect that he read a draft of Flesh and Machines: How Robots Will Change Us, by Rodney Brooks, which came out the same year as Boomeritis. Rod was head dude at the AI Lab then—and was my PhD supervisor. Here’s an excerpt from his book:
The body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules… We are machines, as are our spouses, our children, and our dogs… I believe myself and my children all to be mere machines. But this is not how I treat them. I treat them in a very special way, and I interact with them on an entirely different level. They have my unconditional love, the furthest one might be able to get from rational analysis. Like a religious scientist, I maintain two sets of inconsistent beliefs and act on each of them in different circumstances. It is this transcendence between belief systems that I think will be what enables mankind to ultimately accept robots as emotional machines, and thereafter start to empathize with them and attribute free will, respect, and ultimately rights to them… When our robots improve enough, beyond their current limitations, and when we humans look at them with the same lack of prejudice that we credit humans, then too we will break our mental barrier, our need, our desire, to retain tribal specialness, differentiating ourselves from them.
If you have read Boomeritis, you will find this sounds familiar.
So, was I the model for the book’s anti-hero? My guess is that Wilber had a conversation with Rod, who asked him what he did. Wilber mentioned German philosophy, and Rod said “hmm, that sounds like the stuff David Chapman used to go on about.”
“Who?”
“David Chapman. He was a student here a while back. After doing some nice mathematical work, he and another guy, Phil Agre, suddenly started ranting about existential phenomenology and hermeneutics and ethnomethodology. No one could understand a word of it. We figured they were taking too much LSD.
“But then they started writing programs, and the story gradually came into focus. Intelligence depends on the body; AI systems have to be situated in an interpretable social world; understanding is not dependent on rules and representations; skillful action doesn’t usually come from planning.”
“Whoa, that sounds like the green meme in Spiral Dynamics!”
“Well, whatever. Spare me the gobbledegook. Anyway, I was thinking along pretty similar lines at the same time, because I was building robots, and it turns out that if you want to make a robot that actually works, the whole abstract/cognitive/logical paradigm is useless. It’s a matter of connecting perception with action. I never got into all that German stuff, though.”
“So what happened to Chapman? It sounds like I should talk to him.”
“I haven’t a clue. He disappeared a long time ago.”
“What a bizarre story! You know, I’ve just finished writing a long boring book critiquing postmodernism, but suddenly I’m thinking it might work better as a novel…”
If that’s not what happened, the coincidental similarity of Wilber’s anti-hero to me (and/or Agre) would be almost as odd.
Perhaps, though, I am a historical inevitability. If I had not existed, it would have been necessary to invent me—and Wilber did.
Of course, I could just ask him. But uncertainty is more fun. “Yes” or “no” would remove the mystery, and the surreal groundlessness of not knowing whether I am a character in a novel.
Besides, it allows for retaliation…
Retaliation
It just so happens that I am writing a novel myself. Actually, it is an endlessly long philosophical diatribe, thinly disguised as a web-serial vampire romance. Already it is showing worrying signs of postmodern literary gimmicks.
Naturally, as a sword-and-sorcery novel, it has a Dark Lord; a lich king, who seeks to unite himself with God to obtain unlimited power.
I think you can guess where I am going with this…
- 1. It’s also brilliant, inspiring, funny, and (in the end) touching. Two thumbs up.
- 2. This is a gross simplification, and probably Spiral Dynamics geeks would object that I’m distorting their story out of recognition. For one thing, there are many more than three worldviews in the system.
- 3. I wrote about that on this page on Vividness.
- 4. Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?
- 5. Philosophically bad. It works quite well as fiction.
- 6. “Artificial intelligence” is also used to mean “writing programs to do things that are hard, like playing chess.” This is interesting if you are an engineer, but has no broader implications.
- 7. Now out of print, but his revised edition, What Computers Still Can’t Do: A Critique of Artificial Reason, is still available.
- 8. The canonical citation for this would be “Planning for conjunctive goals,” Artificial Intelligence, Volume 32, Issue 3, July 1987, pp. 333-377. But that costs money, and it’s a shortened version of MIT AI Technical Report 802, which you can download for free if you want to geek out. The important bit is the intractability theorem (page 23, proof pp. 45-46). The undecidability theorems are also cute, but less philosophically relevant.
- 9. Technically, what I proved was the NP-completeness of the frame problem. Roughly, this means that there is no solution that is both practical and general. There are general solutions that are “exponential time” (meaning inherently impractical), and non-general solutions that can solve particular classes of problems. Neither of these is philosophically interesting, in my opinion.
- 10. See my “Intermediate Vision: Architecture, Implementation, and Use,” Cognitive Science 16(4) (1992), pp. 491-537.
- 11. The best summary of this is in Phil’s Computation and Human Experience (Cambridge University Press, 1997). The full text of his introduction is online. My take is in Vision, Instruction, and Action (MIT Press, 1991), which is more technical and less philosophical.
- 12. Probably the clearest explanation of this is in my “Penguins Can Make Cake,” AI Magazine 10(4), 1989. Interestingly, two other groups came to similar conclusions independently, at just about the same time Agre and I did, although based on purely technical rather than philosophical considerations. These were Rod Brooks and the team of Leslie Kaelbling and Stanley Rosenschein.
- 13. “Pengi: An Implementation of a Theory of Activity,” Proceedings of the National Conference on Artificial Intelligence, 1987, pp. 268-272. Reprinted in George F. Luger, ed., Computation and Intelligence: Collected Readings, MIT Press, 1995, pp. 635-644.
- 14. Described in Vision, Instruction, and Action. See also my “Computer rules, conversational rules,” Computational Linguistics 18(4) (December 1992), pp. 531-536.
- 15. “Input generalization in delayed reinforcement learning: An algorithm and performance comparisons,” Proceedings of the 12th International Joint Conference on Artificial Intelligence, 1991.
- 16. Recently, Dreyfus published an analysis of why Phil and I failed (Artificial Intelligence 171(18), 2007).
- 17. Wilber says he wrote it in ten days, after the lecture was finished.