Comments on “Interlude: Ontological remodeling”
This is excellent.
The “Gerrymandering the solar system” section in particular knocks it out of the park. It’s a good example of nebulosity and of how scientists don’t follow the “scientific method” in practice; it’s very accessible, and it’s entertaining. Thanks for writing this.
Also, all this talk of ‘rationality’ makes me think of a potential failure mode (although I think you’ve covered this one elsewhere): I’m guessing the (LessWrong) rationalists whose first solid exposure to a systematic way of thinking is Bayes’ theorem/probability theory will probably jump to thinking something along the lines of, “Oh, well, of course the categories aren’t clear: you haven’t put a probability distribution over satellite sizes. Once you have that, you can say that Pluto has a low probability of being a planet, but something like Jupiter or Earth or even Mercury is very likely a planet…”
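Concretely, the move I’m imagining might look something like this minimal sketch (Python; the threshold and spread parameters are invented for illustration, not anything the IAU uses):

```python
import math

def p_planet(mass_kg, threshold=1e23, spread=2.0):
    """Toy 'probability of planethood': a logistic curve over log-mass.

    threshold and spread are made-up parameters, purely for illustration.
    """
    z = (math.log10(mass_kg) - math.log10(threshold)) / spread
    return 1 / (1 + math.exp(-z))

for name, mass in [("Jupiter", 1.9e27), ("Earth", 6.0e24),
                   ("Mercury", 3.3e23), ("Pluto", 1.3e22)]:
    print(f"{name}: P(planet) = {p_planet(mass):.2f}")
```

Which, of course, just relocates the arbitrariness into the choice of threshold and spread; that’s the failure mode.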
As you point out, bootstrapping the concept of ‘ontological remodeling’ is especially difficult. I like what I’ve read so far of “In the Cells of the Eggplant”; I’ll probably buy it, and I think it’s a good thing to have been written. But I suspect that the rational-to-meta-rational shift requires something a little more ‘adversarial’ than a book. I’m not quite sure what that is, but I have a vague sense of needing to be repeatedly “kicked out of my eternalism comfort zone”. People with an especially strong sense of curiosity will do that to themselves, but I wonder if there’s a way to give everyone else a bit more of a push.
Heading off "so what?"
This is great, and I’m really looking forward to reading the rest of it!
I have the same concern as Josh Brule above (though of course I don’t know what you talk about outside the extract).
I think the obvious response to ‘things don’t always have sharp defined borders’ is ‘mate I already knew that, so what?’ Particularly from the rationalist community — I feel like Yudkowsky wrote various things along those lines, and there’s Scott Alexander’s ‘The categories were made for man…’ post.
And yet. You actually are pointing at something subtle and important, and the ‘things don’t have sharp defined borders’ point is really relevant to it (in the light of what you’re talking about, rather than as an earth-shattering insight in itself).
I don’t know, though, it’s not like I know a better way of going about this. And I guess you hammer the difference between epistemic uncertainty and ontological nebulosity elsewhere. It might be worth putting effort into deliberately warding off the “this guy’s just spouting platitudes about things being a bit fuzzy at the edges” reaction, if you aren’t already. (That was my first impression of the site a couple of years back, fwiw.)
Feyerabend
Also, just out of interest, have you read Against Method? I remember it as being a defence of, as the title suggests, complete methodological anarchy — which would be unsatisfying in the same ‘everything’s as good as everything else’ way that cultural relativism is unsatisfying.
But I haven’t read it since undergrad, so maybe it’s more interesting than that. It’d be good to hear your thoughts on it.
Fair point
Hooray for the people who do already know! Not everyone does.
Oops, fair point. For some reason I’d got it into my head that you were aiming squarely at the LW audience.
I’m a popularizer; nearly everything I have to say has been said by someone else decades ago. In Heidegger-speak or Garfinkel-speak or Dzogchen-speak, so it needs translation.
Yep, those sources definitely require translation for me! It might be popularisation, but of a rare sort, so I’m finding it really helpful.
And thanks for the Feyerabend comments. ‘Entertaining trolling’ is what I remember, but if he’d been doing something else I probably wouldn’t have been able to tell the difference. I plan to read Kuhn properly, based on your recommendation. Maybe I’ll do the same with Feyerabend.
That said, there are things
That said, there are things to say about how you work with ontological nebulosity to create better models, and they are not well-known.
This. I eagerly anticipate hearing more about these things! How do we develop patterns for dealing with ontological nebulosity?
The practicality of a given ontology for a given discipline is a feature not entirely of the ontology. It’s a feature of the relation between the two.
So, yeah - different disciplines can, and should, use different ontologies for the same domain when there are practicality gains in those disciplines. Makes sense; that is definitely better.
But, it seems that a better set of ontologies is a set that has as much uniformity as possible, to facilitate communication and collaboration across disciplines in the domain. Also, since generating an entirely new ontology for any new approach seems absurd, a better set should also exclude sufficiently redundant ontologies.
So there are a set of concerns:
1) balancing the virtues of the set with the virtues of the individual ontologies,
2) determining when to individuate and when to merge, and
3) determining whether the cost/effort of remodeling has exceeded, or may exceed, the potential benefit to tractability.
This reminds me of so many things! I employ heuristics on this front constantly. But they’re just that - internal notions of feasibility that I would struggle to distill into language. I’m not even sure that a given strategy is universal (enough) to use as a technique. But - maybe there is a set of techniques, each with their own relative practicality? ;)
Looking forward to more. This article was awesome, can’t wait for the full book!
Ontological Remodeling of Rationalists / Rationality
First of all, thanks for the new posts. Not to get too personal, but I’ve massively benefited from the writing on this site (it brought me out of a fit of crushing, nihilism-induced depression) and am looking forward to the ebook. I’ll definitely pick up “Understanding Computers and Cognition” to tide me over in the meantime.
On to my point…
In reading some of the writing on meta-rationality where you discuss rationality (such as this post), I’ve found that the writing doesn’t seem to make much sense unless I give up my intuitive understanding / definition of what “rationality” means and who a “rationalist” is. This was a big stumbling block when I first encountered your usage of the term, but gradually it became natural for me to just drop my definition whenever I’m reading your writing vs. reading any of the LW-affiliated writing. If there’s one thing that I suspect puts your target audience off of this writing (other than mistakenly seeing it as a criticism of a massively useful thing, as I was disappointed to see happening with slatestarcodex), it might be that they bounce off when they see how your usage of those words doesn’t match their own sense of them.
And, even if I drop my pre-existing definition, what I’m left with is a barely-workable, very fuzzy understanding of what you mean by those words. I’m also left in a state where I feel that I cannot point to anything in reality and use it as an example of your “rationality”, nor can I point to any specific person as an example of your “rationalist”.
I see this as an issue because I think it’s good to avoid non-intuitive / non-standard usages of a word. Maybe a different word or qualifier should be used, or maybe an explanation is in order which explicitly calls out how your definition differs from how the average LW-diaspora member would define it. As far as I know, you don’t ever acknowledge that your definition is different. Maybe you don’t think it IS different?
For example, take this post. As mentioned by drossbucket, it seems like what you’ve described is something that is obvious to the average rationalist community member (i.e. the people on LW and related sites). I would extend that remark to suggest that, according to my own understanding of rationality / rationalists, what you’ve suggested already IS a rationality technique. In fact, when I read the LW post several years ago called “37 ways words can be wrong”, it got me to understand the concept you’re explaining here and had a very positive impact on my way of thinking. It taught me to view words as mere tools for discussion and mental modeling. For example, I regularly use ontological remodeling as part of my “bag of tricks” for problem solving.
Another way of saying it is that I cannot imagine myself going up to someone who considers themselves to be a rationalist, explaining this concept to them, and getting a response other than “yeah I already knew that” or “oh, that’s quite useful, I’ll use that idea in the future”. As I understand it, by your definition of rationality, I shouldn’t get any response other than confusion, disagreement, or a new convert to meta-rationality! I just can’t imagine that anyone I consider to be a rationalist would have such a reaction. I don’t even think I can imagine ANYONE, regardless of whether they’re a rationalist, not being able to understand this idea or its usefulness. If meta-rationality is truly a stage 5-exclusive perk, this hurts the usefulness of the “5 stages” model. The stages are supposed to be competencies, which means people can use stages below the one they’re in, but not above. I guess that means that everyone I know is a stage 5 person but hasn’t realized it yet, since they can understand this idea!
Obviously, my preconceived definition of rationality / rationalist (which I expect is shared with most of the LW diaspora) differs from your definition. If that’s the case, then I think your usage of those words is causing problems in trying to reach your target audience. I see this happening in many of the various comment threads involving discussions of meta-rationality by people in the rationality community, where it seems like people decide to ignore the writing because they don’t see the discrepancy between their definition and yours. It’s quite frustrating to see this close-mindedness, but I can’t blame them - all things considered, why shouldn’t they assume you mean the same thing as them, considering that you seem to have spent time reading LW-style rationality writing and that they have no reason to suspect you mean something different?
So, I’d like to know - I won’t expect you to try to explicitly define your “rationality”, but at the very least, ARE there actually people who you can point at as specific examples of the “rationalist” you are talking about? Are they really typical of the rationality community? I’d also like to suggest that you (or someone) take care to call out this discrepancy or find a different word before it’s too late (and a bunch of potential readers bounce off the material).
…or perhaps I am very out of touch with the LW-style rationalist community and your definition actually DOESN’T differ from theirs. I may have become so accustomed to your writing that I can’t even conceive of what LW-style rationality actually is anymore! Honestly, when I think back to the me of 5 years ago who decided to abandon the HCI field because of its lack of formal, robust, scientific, evidence-based practices (a quote I remember being said to me to convince me to stay was “you can’t use science to create the iPhone”, suggesting that there was no formal method that could be followed to design great things or to do useful HCI research, despite my LW-inspired conviction that there was), perhaps I was the “rationalist” you are trying to talk about. Coupled with my LW-inspired utilitarianism, it eventually led to my previously-described disillusionment and nihilistic depression.
Shall I take your comment as
Shall I take your comment as a vote to up its priority, relative to the other two?
The nihilism section already had enough to be useful, for me at least. I’d say I’m much more excited about the meta-rationality project right now.
“Rationalism” is reasonably well-defined and well-understood within philosophy and in the history of ideas. I think I’m using the word in a way consistent with that mainstream tradition.
That certainly clears it up!
Re “37 ways a word can be wrong,”…
I see the distinction now.
Just about everyone who has well-developed rational abilities also has some ability to think meta-rationally.
This explanation of the 5 stages model suddenly makes it seem much more useful. For whatever reason, I had been thinking that the model specifies that a person at each stage cannot use ANY skills from the stages above it, which obviously doesn’t hold true.
a great description of the 4->4.5 STEM nihilism arc.
I also found myself transitioning from nihilism to materialism in the same way as you described on “190-proof vs lite nihilism”.
Thanks for clearing all that up!
Getting it through our thick eggplant skulls
Over the past six months, I’ve been torn between three writing projects: on meta-rationality, nihilism, and the remainder of “How Meaning Fell Apart” (subcultures and atomization).
I believe a Twitter poll would be appropriate here. I would vote enthusiastically. Even if you don’t listen to the results, I’m personally curious about them, at least.
Unrelated: I often read comments about your articles being vague. You often reply to the effect that there’s more coming that will complete the thoughts, or that the topic is inherently vague. People also comment that the ideas seem sort of obvious in retrospect (hey, I knew that, did I really learn anything?). I also personally suspect (in addition to the aforementioned) that I read your articles quickly and excitedly when they come out, then mostly forget them.
In math or CS I have often had the experience that I read a chapter, think it’s straightforward, try some exercises, then say “Oh what was that definition again?”, “Oh wait these ideas are distinct for a reason”, “Uhh how does this go again”, etc.
This is all to ask: do you think exercises of some sort could be appropriate for your format? I did enjoy the Bongard puzzles; I believe I understood that article better because of them.
You say this whole project is meant to be practical not theoretical. I personally don’t have much faith that most people can read an online book and benefit much. At the least, I have more faith in benefits coming from some sort of work or experience. I’m willing to bet research would/does back this up.
I know this sort of thing is associated with cheesy self-help books. I don’t think it can be helped.
I’m not really sure what form the exercises would take, but a few occur to me: “Find another example in the history of science of ontological remodeling”, “Find two examples for each of the three types of ontological remodeling”, “Find an online argument where someone is confused about ontological remodeling”, “Write a convincing argument that the new official definition of ‘planet’ is sensible. Write a convincing argument that it’s idiotic.”, etc.
If anyone agrees with me here about exercises being helpful, please say so.
Bartley's Retreat to Commitment
Kind of a shot in the dark. Have you ever read Wm Bartley III’s The Retreat to Commitment? Kevin Simler is the only other person I know who’s read it. Look Bartley up on Wikipedia if you draw a complete blank. Retreat was his first book, about how “rational Christianity” fell apart, before his later phases: biographer of Wittgenstein and of Werner Erhard, editor of the complete works of Hayek (and sometimes accused of having written one of the works, or of having changed it excessively - and I think Bartley died before the “works” was completed), and having a sort of mutual love-hate relationship with Karl Popper (something of an exaggeration, but they seem to have had an interesting and important relationship).
Not really convinced by the Pluto example
I’m not sure I understand how to reconcile:
“Meta-rationality treats all category boundaries as inherently nebulous and malleable. It recognizes that there are always marginal cases. Those have to be dealt with pragmatically—taking into account context and purpose—because there is no rational standard.”
With:
“It’s widely understood that the 2006 definition is stupid and useless.”
This statement seems exaggerated, overly emotional, and unsupported. (I would guess that there would be agreement that the 2006 definition is a bit kludgy and inelegant, but I’m not sure how many people would agree that it’s “stupid and useless.”)
It seems like the goals were mostly preserving backward compatibility for the word “planet” while coming up with a naming scheme for new objects in the solar system. It was well-understood that it was kind of arbitrary, so they just picked something. That sounds very pragmatic to me, much like you describe meta-rationality. It doesn’t seem like a particularly good example of the rational mindset?
If everyone agrees that category boundaries are somewhat arbitrary, why get worked up about it? Perhaps there is a better example to demonstrate the difference between rational and meta-rational?
precision and boundary setting
It might be useful to compare formal definitions with legal boundaries (property lines, etc). Legal boundaries can be quite arbitrary at times. For many purposes, the precision provided by surveying property accurately is simply unnecessary; nobody cares exactly where the boundary is. But sometimes, arbitrary precision is useful as a way of solving disputes.
Similarly, most words have no formal definitions; their meanings drift based on usage. For most purposes this is fine; everyone is free to classify astronomical objects using their own categories whenever it suits their purpose.
Apparently, the people who catalog new astronomical objects wanted to decide in advance what upper limit they’ll use for “dwarf planet” in an official standard. If we think of “clearing the neighborhood” as just a Schelling point for setting an arbitrary upper limit on how to classify new objects, this seems like a pretty pragmatic decision as well. (A Schelling point was needed to end debate, but needn’t have any further significance.)
Carving reality at its joints can be important when we want to think clearly about some issue. But for legal purposes, just being precise often suffices.
What is Rationalism(s) and Are Hunter-Gatherers Rational?
The Eggplant essays and other Meaningness writings seem to alternate between references to “rationality” as if it is one thing, and references to “systems” - and Kegan’s Stage 4. Let’s name the subject of this (my) mini-essay “rational systems”. We could use some explicit examples, with comments on how well or poorly they fit the idea (to be developed) of a “rational system”. Would the following be examples: Randian “Objectivism”, Ptolemaic astronomy, Newtonian mechanics, quantum mechanics, general relativity, Scientology, the logic of “Flatland”, Kantian philosophy, Hegelian philosophy, psychology or sociology and their bodies of research/publications? I believe the Ptolemaic system with Aristotelian physics was as logic-based as modern systems, but one can’t say that it delivered the benefits of our modern technological world.

To what degree was the Enlightenment, say, a change in a “way of thinking”? Could I make the case that it was more a matter of discarding certain beliefs, and having a program of testing folk- and other dubious beliefs empirically so they got weeded out? How much is the modern world a result of a succession of discrete “facts” or “discoveries”? How much a matter of accumulation of methods/instruments? How much a matter of progress in storage capacity and bandwidth of media (i.e. starting with Gutenberg)? How much a matter of institutions, starting with the Royal Society, and successive institutions and their development and tuning: other scientific journals, the German and Johns Hopkins model of the research university, the refinement of the idea of “scientific fact”?
Do scientific paradigms and science-oriented epistemology shed much light on how to know ephemeral or simply non-general facts? Like “Barack Obama was born in ___”, or “Russians under the direction of Putin interfered in the US election/interfered enough to change the result”? Or “Antifa is planning a revolution on 11/4”? Could science be characterized as a protocol under which “true statements” (however we define that) come to the foreground and false ones fade into the background? If so, could that be applied to ephemeral or non-general facts?
Are rational systems characterized to any degree by certainty that every question (or well-formed question, or whatever) has an answer? My own reading and thinking leads me to think societies at a hunter-gatherer level of organization (the Yanomamo have agriculture, but I don’t think it has affected the social organization much, if at all) more or less don’t know that there’s anything (of importance) that they don’t know; i.e. they have answers to any question that could be communicated to them, or a procedure for getting the answer (Why is my boy sick? The shaman takes hallucinogens, goes into a trance, etc.). If nearly everything they “know” is “wrong”, that’s not the point.
Eggplants are getting big
Ontology&Concepts+NLP is the key to general AI
I think you’ve really nailed it here David. I think this is really THE key problem that needs to be solved for AGI, and current statistical approaches are light-years away from tackling this. In the current machine learning/neural network craze, most researchers are looking in all the wrong places.
I think the key to progress is understanding how to connect many concepts together into a coherent whole. It’s the art of being able to look at things from many angles. All models have limitations, so the key is the ability to identify the correct context and limitations of any given concept, and understand its relationships with other concepts.
My understanding has really taken a huge quantum leap over the past year, after months of me loading up on Wikipedia articles and connecting multiple concepts together.
I’ve got an approach to the AI control problem that’s light-years ahead of MIRI and co. :D Yudkowsky and co. ain’t even close. To give an aircraft analogy, my plane is roaring down the runway towards takeoff speed; the MIRI guys literally never even left the hangar!
It’s natural language processing and concept learning! That’s the key to machine psychology and the control problem! The statistical approach is a distraction, and we must get back to symbolic AI!
Look at my latest wiki-book here, the answer to the control problem and AGI is here!
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Psychology%26NLP
I’d also recommend an older wiki-book I did here, on the central concepts in the field of knowledge representation and ontology:
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Ontology%26Concepts
Modal & Non-Monotonic Logics
David,
It seems that what you’re looking for are logics that don’t assign definite truth values (T/F), but rather allow for degrees of truth. Well, there are logics for that, so I think what you call ‘meta-rationality’ is still ‘rationality’, just not the usual deductive or inductive kind.
I think modal and non-monotonic logics are what we’re looking for here! This includes fuzzy logic and imprecise probabilities. Modal logic allows you to perform reasoning about dynamic (temporal) systems, and deals with the notions of counterfactuals (possible worlds).
In my view these modal and non-monotonic logics are the key to understanding concept learning, natural language processing and machine psychology, including the AI control problem! This is because they’re specifically designed to deal with fuzzy boundaries and ambiguities.
Read through my wiki-book entries on these topics here:
https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Modal%26Non-MonotonicLogic
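To illustrate what ‘non-monotonic’ buys you, here’s a minimal default-reasoning sketch (Python; the bird/penguin rules are the standard textbook example, not code from any real system). A conclusion drawn by default gets retracted when more specific information arrives, which ordinary deductive logic can never do:

```python
def flies(facts):
    """Default rule: birds fly, unless more specific knowledge defeats it."""
    if "penguin" in facts:   # the exception overrides the default
        return False
    return "bird" in facts

print(flies({"bird"}))             # True  -- concluded by default
print(flies({"bird", "penguin"}))  # False -- adding a fact retracted a
                                   # conclusion: that's non-monotonicity
```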
The continuum gambit
@mjgeddes — I asked a related question a while back; in David’s view, none of those things solve the problem.
Linear Temporal Logic (LTL) shows promise
I’ve got respect for David’s view that ‘nebulosity’ can’t be systematized, but even if that’s true, the brain is a natural system (nothing supernatural!) and clearly humans can deal with nebulosity, so there must be some mathematical definition of a method for handling it, even if it’s a non-computable one.
Modal logic, and in particular Linear Temporal Logic (LTL), is showing promise at dealing with nebulous natural language problems.
“MIT researchers have improved award winning automatic planning software by adding in code that mimics human intuition. “
“The strategies used by the human problem solvers were described in simple statements that could be understood by machines, in a formal language known as linear temporal logic.”
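To give a flavor of what such statements look like, a typical LTL formula (my own illustration, not one from the MIT work) might encode the strategy “whenever the robot picks something up, it eventually delivers it”:

$$\mathbf{G}\,(\mathit{pickup} \rightarrow \mathbf{F}\,\mathit{deliver})$$

where $\mathbf{G}$ means “at every future time step” and $\mathbf{F}$ means “at some future time step”.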
Absolute stages, learning methods
Stages as absolute — I’ve tended to read you as saying stages are absolute, too. Encouraged, for example, by passages like this:
“People who are at a particular stage cannot think or feel in the ways characteristic of later stages. They actually cannot understand explanations given in a later stage’s framework.” (from https://meaningness.com/political-understanding-stages)
Learning methods — my understanding is that effortful methods are the ones that work. E.g. reading and re-reading are very poor; retrieval practice and self-testing are much better, especially when topics are interleaved and optimally spaced out. (My main source on this is the book Make it Stick — https://makeitstick.net/)
In a similar vein, it’s very easy to read something like your commentary on the Pluto debacle and nod along. It’s a lot harder to do a novel-but-fruitful restructuring yourself.
Wording
Perhaps equally important - avoid ever using wording that implies hard boundaries, especially if other sources (e.g. Kegan himself) do imply this.
I may also have been influenced by analogies (here or elsewhere) to e.g. “the shift when you see the boxing kangaroo instead of the noise pattern” which refer to largely atomic, irreversible changes.
Analogously, meta-rationality
Analogously, meta-rationality has been around for more than half a century,7 and we can at least say that it’s not known to be unworkable.
It seems to me there is a case to be made that Pyrrho and Nagarjuna both hit upon the ideas first, but in contexts that didn’t carry their ideas forward to the modern West well. That is, both were in ways too advanced, because there wasn’t enough rationality to also support meta-rationality, although both also found ways to thrive for a while anyway: Pyrrho until the Stoics and Epicureans evaporated during the European dark age, and Nagarjuna as prajnaparamita melted into a mystical state rather than being understood as a philosophical method (though, as I’m sure you well know, some lineages managed to keep alive, or more likely rediscover, what Nagarjuna had meant). But in fairness I wouldn’t know this if it weren’t for the Kant, Hegel, Husserl, Heidegger, Sartre, Hofstadter, Kegan line of reasoning that got us to meta-rationality as we think of it now.
Really really great!
This is great! This is a wonderful and accessible explanation of what you mean by ‘meta-rationality’ and why it’s useful and important.
I think now, even more strongly, that the ongoing disagreement/confusion about rationality and meta-rationality between you and LW+-ers is due to us, the LW+-ers, automatically including all of what you consider to be ‘meta-rationality’ under the category ‘rationality’.
Even the LW-EY-2008 post to which I linked yesterday – Taboo Your Words – makes the exact same point you do in this post – words, and more importantly the ontological categories to which they refer, must (sometimes) be carefully interpreted according to the relevant context and the relevant purposes of their users. (I think this post better emphasizes the importance of context and purpose than the LW post.)
Meta-rationality is all about navigating and creating – and remodeling – ontologies.
Rationality assumes (implicitly, or, often, explicitly) that statements are either true, false, or meaningless. That is, ontologically—in the world. Epistemologically, we may be unsure; we may have a degree of belief; but that is a separate issue.
This is a good statement about (straw) ‘rationalism’ and it makes me wonder if there’s not some kind of ‘rational meta-rationality’. I’m now much more sympathetic to your frequently repeated claim that ‘rationalism isn’t sufficient!’. I’m much less sure now that it’s ‘false’ (ha!), in that I’m less sure that there’s any possible mind that isn’t effectively using a hodge-podge of (rational) tools with no overall ‘rational’ architecture.
Fuzzy logic is more general than probability theory
Hi Kenny,
With probabilities, what the numbers are quantifying is subjective confidence in some hypothesis about the world.
Fuzzy logic also deals with reasoning under uncertainty, but what it’s talking about is something a little different from probabilities. In the case of fuzzy logic, the numbers represent the degree of precision of some concept - that is to say, how well-defined that concept is. And that, I think, is precisely what we need for dealing with ontological remodelling.
Fuzzy logic is more general than probability theory, because it can deal with uncertainty in the underlying models or concepts that we are using to talk about reality, whereas probability theory simply assumes a precisely defined underlying model or set of fixed concepts that we use to formulate hypotheses. So fuzzy logic lets you handle an extra layer of uncertainty (what Scott Aaronson calls ‘Knightian uncertainty’, or uncertainty in the underlying models themselves).
A guy called Bart Kosko wrote a paper suggesting that probability is just a special case of fuzzy logic way back in 1990! And basically, I think he’s right.
https://en.wikipedia.org/wiki/Fuzzy_logic
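To make the contrast concrete, here’s a minimal sketch (Python; the membership function and all the numbers are invented for illustration):

```python
# Probability: my confidence that a CRISP statement is true.
# "This coin lands heads" is definitely true or false;
# 0.5 quantifies only my uncertainty about which.
p_heads = 0.5

# Fuzzy membership: the DEGREE to which a vague concept applies.
# "Pluto is a planet" isn't crisply true or false; a membership value
# grades how well Pluto fits the concept.
def planet_membership(mass_ratio_to_mercury):
    """Toy membership function: grade planethood by mass relative to
    Mercury, the smallest uncontroversial planet. Invented, of course."""
    return min(1.0, mass_ratio_to_mercury)

print(planet_membership(0.04))   # Pluto: ~4% of Mercury's mass -> 0.04
print(planet_membership(5800))   # Jupiter -> capped at 1.0
```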
Type theory is a form of computational logic that’s an offshoot of a more abstract mathematical logic (category theory). It’s especially suited for dealing with concepts, because it’s basically a set of language rules (it can be treated as a programming language).
https://en.wikipedia.org/wiki/Type_theory
So if we combine type theory with fuzzy logic, I think we have the tools we need for dealing with ontological remodelling.
To some extent I agree with David though; it’s likely that no strictly formal set of procedures can fully capture ontological remodelling.
To me, meta-rationality is related to a notion of time and the fact that everything ultimately is ephemeral. The thing is that the world is a dynamical system - it evolves in time, and this is the reason that no fixed set of rules can ever entirely capture reality. It’s the dynamical (time-evolution) nature of reality that ultimately defeats any purely formal methods.
That said, I think type theory and fuzzy logic are essential components of ontological remodelling.
I'm building a rationalist qabbalah.
“That said, I think type theory and fuzzy logic are essential components of ontological remodelling.”
mjgeddes, how would type theory and fuzzy logic be applied when remodelling an ontology wherein the earth is flat?
To provide some context, suppose that I believe the earth is flat. I don’t trust information that I can’t verify either with my senses or according to the opinion of someone I’ve known for years, who’s proven that they will be honest with me, and who says that they’ve verified it personally or else knows someone they trust who has verified it for themselves, and so on. I’ve gone to different fields and lakes and held up a flat-edge to the horizon from various high vantage points, and it’s always looked to me as though the horizon were level. Now, someone I trust says that they know someone they trust who told them on the phone that they went to the ocean, climbed a tree with a ruler, and held it up against the horizon, and they said that it did appear to dip at the outside edges. Neither I nor my friend can replicate this experiment ourselves, because we don’t have any means of getting to the ocean, but I now have reason to believe the earth isn’t flat, which makes me uncertain about whether or not I should be open to information which I would normally consider unverifiable, such as might be found in a physics textbook or on the internet.
In this thought experiment, I’m questioning the validity of my current flat-earth model and wondering whether to change it and if so how I might go about realizing a different model that I can accept. Would you accept that this is a particular instance of ontological remodelling? If so, how will type theory and fuzzy logic be essential components of the process whereby I convince myself that the earth is round?
I hope you can trust, despite the notoriety of my chosen example, that I’m posing this problem to you in good faith. I don’t know the answer, which may be because I don’t have anything more than an acquaintance with these theories beyond their respective Wikipedia articles, and knowledge of Russell’s paradox and his proposed solution, the theory of types, from his book The Problems of Philosophy. However, to show my hand early, I suspect that the real reason is that, as far as the example goes, neither of these theories is relevant. They may be useful in some other cases of ontological remodelling, particularly instances concerned with theories of logic and mathematics at the object level, but I believe that they will not play a meaningful role in changing the mind of a flat-earther, or in getting a totalitarian to think like a democrat, etc.
Do you mean to say that these theories are useful for helping rationalists understand why it might sometimes be necessary to adjust their decision frameworks for what is or isn’t significant in order to renegotiate the boundaries of their objects and accomplish different purposes?
Infinitesimals
“As always when something is a prerequisite for itself, you have to proceed in a spiral. An approximate understanding of a small part of the subject makes it possible to grasp more of it, and thereby to revise your understanding of the initial beachhead. You need repeated passes over the topic, in increasing breadth and depth, to master it.”
This is correct, but the denial that outward spirals work is fundamental to rational thought. It’s in the name: “rational”. The irrationals were so named because they were “against reason”; the non-irrationals were thus called the rationals, meaning in accordance with reason, meaning rationalism.
Rational thought allows only integers and ratios, and hence disallows use of physical measurements. This has been consistent from ancient Greek geometry to the present day, although everyone today is unaware of the distinction, because people in the sciences are unaware that “rational thought” disallows irrationals, and people in the humanities are oblivious to the uses of irrationals and continuums. People in the humanities have successfully fooled people into using the phrase “rational thought” as if it were a synonym for proper or even scientific thought, and you will frequently see the phrase “scientific rationalism” in print, as if such a thing were possible.
Aristotle argued that infinitesimals can’t exist, and his physics assumes that they can’t exist, and this is the basis of a wide variety of ancient, medieval, and post-modern arguments. All such arguments begin by saying that everything must be explained by a causal chain which can be traced back to a single event which began the chain. Similar arguments apply to the construction of meaning in language (Derrida’s argument in /Of Grammatology/ that words can’t signify is exactly that).
So if you want to win over anyone in the rationalist camp, you have to begin by explaining infinitesimals, calculus, and differential equations, and show how the “spiral” works. Otherwise it just sounds like crazy talk to them.
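A tiny concrete instance of the “spiral” (my example): successive approximation, where each pass uses the current rough answer to produce a better one.

```python
def newton_sqrt(x, guess=1.0, passes=6):
    """Heron's/Newton's method: refine a crude estimate of sqrt(x)
    by repeated passes, each one revising the previous beachhead."""
    for i in range(passes):
        guess = (guess + x / guess) / 2   # use the rough answer to improve it
        print(f"pass {i + 1}: {guess:.10f}")
    return guess

newton_sqrt(2.0)   # spirals in on sqrt(2)
```

Each pass is “wrong”, yet the sequence of wrong answers converges on a number that integers and ratios can never write down exactly.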
Notes
Other thoughts:
- Doug Lenat's Cyc has an extensive theory and method for ontological remodeling on the fly for an AI. See Guha 1993, "Contexts". (Sorry; I don't know the ref. I only have a prepub draft of it.) Also Lenat & Guha 1990, Building Large Knowledge-Based Systems.
- Similar to my comment on infinitesimals, the very notion of "ontological remodeling" is nonsense to a Platonist or Aristotelian, and there are still a shocking number of Platonists and Aristotelians. I know plenty of people (all Catholic) who explicitly believe objects and people have Aristotelian essences, and I'm told there are contemporary journals of medieval scholastics, where people still publish "research" into questions about essences and the "proofs" of Aquinas and Duns Scotus. I met a semiotician who told me that the entire discipline of semiotics is based on the Aristotelian ontology of Duns Scotus.
- And post-modern philosophy takes the Aristotelian account of essences for granted. A post-modernist might nod and say he believes in ontological remodeling, but he would mean that he believes society continually socially reconstructs its ontology through dialectic--and that it is reconstructing reality by doing so.
- None of these people I'm talking about are nominalists. They don't believe words are socially-constructed concepts which correspond to structures in reality. Nominalism, the understanding of language that ended the Middle Ages, is increasingly out of fashion in the humanities. So they won't grok what you're saying; it will sound like crazy talk.
- Phlogiston is not "wrong". Phlogiston is a theory that works quite well at predicting some things, and therefore provides information, and therefore is a good theory. Phlogiston is simply "negative oxygen". Oxygen is a mass which begins *outside* the fuel to be burned, then is combined *into* the products of combustion. Phlogiston theory assumed that the thing-which-must-be-present-for-burning must begin *in* the fuel, and was *eliminated* from the combustion products. So a phlogiston-based chemical reaction would have 1 atom of "phlogiston" in the fuel for every atom of oxygen that was in the air, and one atom of phlogiston dispersed into the air for every atom of O incorporated into the combustion products. The problem was this required phlogiston to have negative mass, and also this didn't explain the need for air in burning. (A schematic rendering of this bookkeeping follows these notes.)
- Gravity has no meaning at all in Newtonian mechanics. It's just a name. It's important to remember that. I don't know quantum mechanics, but it may be that this "wrinkle in spacetime" is an analogy or description of effects, not a meaning. In nominalist physics, there's typically a bottom level of names that should not be assumed to have referential meanings.
- Comets were believed to be in the heavens until 1577, when Tycho Brahe was able to measure the distance to one. It caused quite a stir, even in the 16th century, as most intellectuals still believed the heavens were perfect and unchangeable.
- It's important in discussions of heliocentrism to point out that, without an ether, there is no fact of the matter as to what moves about what.
- Re."But we can’t have any general theory of truths, because they don’t have anything meaningfully in common."--But we do: information theory.
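The phlogiston bookkeeping above, rendered schematically (my notation, using the calcination of a metal as the example):

$$\text{Oxygen theory:}\quad \mathrm{metal} + \mathrm{O} \longrightarrow \mathrm{calx}$$

$$\text{Phlogiston theory:}\quad \mathrm{metal} \longrightarrow \mathrm{calx} + \phi$$

Comparing the two accounts gives $\phi \approx -\mathrm{O}$: phlogiston behaves as "negative oxygen", and since the calx outweighs the metal, $\phi$ must carry negative mass, which is exactly the problem noted above.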
Suggestions for stabilizing the complete stance?
So when, over a long period of time, there is introspection which learns to recognize how the mind continually alternates between trying to solidify things or discount them as without meaning, then one arrives at prolonged experiences of groundlessness (the complete stance, correct?). By groundlessness I mean a mind of spaciousness where meanings are flowing in and out (sometimes rather quickly) and shifting and changing depending on the circumstances. At this point, the subjective experience is one of awe and, quite frankly, profound relief (fixation sucks).
But then there seems to be this intense peer pressure to conform to meaning-making in the context of either of the extremes. In some sense there might only be one extreme which comes down to fixation, or an appeal to PIN IT DOWN, by all means (including under the guise of there being nothing to pin down). When confidence in the complete stance is still shaky, it is extremely irritating to face this pressure to pin things down over and over again, at least it was for me personally. In my interactions I sometimes got feedback that I seemed spacey and aimless. In retrospect I think this criticism was accurate: there was some combination of being traumatized and exhausted by fixation and so just not wanting to go there, and also murderous rage at a perception of pervasive oppression, as well as simple confusion about how to proceed.
Luckily, after many bouts of nihilistic depression/rage/anxiety a flip switched and I stopped trying to protect myself from other people. (Previously, I was terrified of getting enmeshed, probably due to a sense of powerlessness). OK, ok, a flip didn’t switch–it’s an ongoing thing having to do with lots and lots of hideous shadow stuff and just snacking on that all the time even though it often tastes like pork rinds.
I am writing because I am interested in others’ experiences with taking a stand in the complete stance (to use David’s terminology) and how you have navigated that. I’m sure it’s different for different folks due to varying strengths and weaknesses, but maybe there are also common themes? Or suggestions?
0-4
(0) Your work is fantastic, it helps this meditator and STEM worker greatly! Keep it up!
(1) I have a funny math story for you, since I know you enjoy those.
(2) A book recommendation.
(3) Thought on Pluto reaction.
(0) Venkat led me here, years ago, and you led me to Kegan, and now I wander these wilds gathering insight. Thanks very much.
(1) A fun math story, of sorts.
All finite piles of sand are small. Proof by induction.
Base case: a pile of 1 grain of sand is small.
Inductive case: if a pile of N grains of sand is small, then so is a pile of N+1 grains of sand.
So all finite piles of sand are small.
The contrapositive holds: all large piles of sand are infinite!
If you believe the reverse direction - that all small piles of sand are finite - well then we have a fun correspondence. Now it’s funny to think this is true, but I enjoy it more as a statement about functional infinity. Small piles of sand are functionally finite (they can be counted), large piles of sand are functionally infinite, and beaches are uncountable!
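Written out, the induction and its contrapositive are:

$$\mathrm{Small}(1), \qquad \forall n\,\bigl(\mathrm{Small}(n) \rightarrow \mathrm{Small}(n+1)\bigr) \;\vdash\; \forall n\,\mathrm{Small}(n)$$

$$\neg\,\mathrm{Small}(p) \rightarrow \neg\,\mathrm{Finite}(p)$$

reading “large” as “not small”. The whole joke lives in the inductive step, which is only plausible because “small” has no sharp boundary for a single grain to cross.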
moving on …
(2) Have you read Systemantics? It’s a fantastic and funny read lampooning the popular understanding of systems, highlighting our difficulty understanding them, their defiance of reason, and the Gödel-sized holes within. If you haven’t, it might be a rich vein for you:
https://www.amazon.com/gp/product/B00AK1BIDM/
(3) As an irrelevant aside, I think much of the uproar around Pluto’s reclassification amounts to basic science functioning as a kind of religion. It’s foundational to people’s understanding of their world that Pluto is a planet, and they are attached to that. I have my attachments as well, so I mean no disparagement here. And so reclassifying it is an epistemological threat to many, and you get fiery resistance. I’m surprised that some people are surprised by this. I don’t know if this thought fits in exactly with this essay, but I wanted to put it down on paper, mainly for my own benefit.
Alex
Worrisome blurring of descriptive and prescriptive science
You critique scientific epistemology done fully a priori, divorced from the practice itself. I see where you are coming from in your suggestion that we actually study empirically how science is done. However, this seems to tread in dangerous waters:
- Why should we assume that the scientists are actually practicing reasonably? Using this as the sole metric of good science seems analogous to using the decision-making process of the average person as a guide to making good decisions. It seems to confuse descriptive and prescriptive rationality.
- If we are trying to do a science of science, this seems somewhat circular. It seems that one needs prior reasons to regard certain conclusions as more reasonable.
Nevertheless I can see how this approach could evade these problems. It seems that it requires a mix of a priori and a posteriori approaches. I think there are a priori reasons to think something like falsification is necessary (we don’t see causality directly, so how else can we check our models?). Additionally, this post seems to present a priori reasons to admit of ontological nebulosity, which also lends itself to prescription. But perhaps we could look at scientists to see also empirically which practices are more fruitful.
What say you, David?
Anthropology of science
One group of academics who have embarked on careful, observational studies of scientists at work are the social anthropologists. A good friend’s PhD supervisor has some funny stories about doing some of this field work. As a place to dive into this literature, my friend recommends anything by Emily Martin, or ‘The Pasteurization of France’ by Bruno Latour.
Typo
“As soon you as”
a simple solution to the planets thing: YOU get a number
what if we just decided that every astronomical body, including planets or things-that-might-be-planets or I-Can’t-Believe-It’s-Not-A-Planet… got an IAU number? like there are currently only 8 things that don’t have one, unless I’m misunderstanding, and they already gave Pluto a hilariously large one given how long we’ve known about it.
…and then the IAU wouldn’t have to sweat about what gets a number, because everything gets a number, and they could rest because THEY can do their job without THEM needing to be the ones who are responsible for coming up with a reasonable definition of “planet”.
and folk classifications could continue to treat Pluto as a planet for cultural reasons or whatever (I still call Venus “the evening star” sometimes even though that name is listed on wikipedia’s page “List of former planets”)
(more precisely, it seems like comets still wouldn’t get numbers?)
Superseding truth?
So, if “true” as a concept scores only 2.2, what’s the meta-rational “workhorse” replacement? Something like “adequate”?
Randomization
I think it’s probably very significant! As you’ve explained, Ptolemy+Aristotle was a strong local maximum in theory-space. If you practice the “half-baked speculations I came up with after reading the Timaeus too many times” method of astronomy, you’re more likely to random-walk yourself out of it.
I wonder if that’s a general pattern: given an accepted theory that hasn’t been meaningfully improved upon for centuries, if a better theory is discovered it will always be discovered by crackpots investigating really terrible ideas.
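A toy version of the pattern (Python; the two-peaked “theory-space” landscape is invented): purely greedy search sticks at the strong local maximum, while occasionally accepting a wild jump can escape to the better peak.

```python
import random

def quality(x):
    """Invented two-peaked landscape: a strong local maximum at x = 2,
    a better theory at x = 8."""
    return max(10 - (x - 2) ** 2, 14 - (x - 8) ** 2)

random.seed(0)
x = 0.0
for _ in range(2000):
    # mostly careful tweaks; 5% of the time, a wild speculative jump
    step = (random.uniform(-8, 8) if random.random() < 0.05
            else random.uniform(-0.2, 0.2))
    if quality(x + step) > quality(x):   # keep only improvements
        x += step

print(round(x, 2), round(quality(x), 2))   # usually ends near x = 8
```

With only the small careful steps, the search settles at x = 2 and stays there; the rare wild jumps are what let it random-walk out.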