Comments on “Part One: Taking rationalism seriously”

Making it work, and how it doesn't

James 2020-01-28

I love this line:

Why does rationality work? In large part, because we do practical work to make it work.

In addition to making the world less nebulous, we do this by applying our systems in nebulous ways. We fudge, applying tacit “know-how” to the way we implement the system. Mostly, this is a good thing, because it keeps everything going.

(Forgive me if what I’m about to rant about is covered in a later part; I’ve only read Part I so far.)

However, there’s a risk that an incoherent understanding of this fact will be marshalled as an excuse for dysfunctional systems. For example, a couple of years ago Microsoft changed their terms of service to say:

Don’t publicly display or use the Services to share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, or criminal activity).

Seems reasonable at first, right? Except it’s for all their services, and does not distinguish between private Skype video calls to your significant other, and posting something publicly on, e.g., Xbox Live. Speaking of posting stuff on Xbox Live, Microsoft encourages you to post clips from the games you play, many of which are quite graphically violent. And does “shar[ing]… criminal activity” include using your Hotmail account to discuss a crime that someone else committed?

Microsoft’s response was, bizarrely, to insist that they hadn’t actually changed the policy, only clarified it. In essence, though, their response to the criticisms was, “That’s not what we meant,” which is the usual response to someone pointing out that a policy is overly broad and has unreasonable consequences. As if “Don’t worry, we intend to enforce the policy selectively,” is supposed to be comforting.

On "ratholes"

James 2020-01-28

On a related note, I’ve recently been trying to delineate a concept I call a “rathole.” The core intuition: a rathole forms around an idea that is obviously wrong but that people adopt because it makes them feel smart.

I think there are basically two kinds: positive ratholes and negative ratholes. A positive rathole is based on a systematic account of some phenomenon. A negative rathole is based on the denial of some idea for lack of a systematic account of it. Of course, a rathole can have both a positive component and a negative component.

Conspiracy theories are generally ratholes in this sense, but so were and are a lot of intellectually respectable theories.

I wouldn’t actually want to make it essential to the concept that the idea a rathole is based around is wrong: I think it’s more essential that the rathole-inhabitant(s) think of themselves as the genius possessors of some brilliant truth which others are either too stupid, too cowardly, or too dishonest to accept. I don’t want to give someone an out to say, “OK, I display all the traits of being in a rathole — but I’m right!”

"Not Philosophy", Godel stuff and the limits of rationality?

Alexander 2020-01-29

I find it funny that you keep declaring this whole project “not philosophy”. I agree that it does not entirely fit (empirical psychology, Buddhism, history, etc.), but nevertheless I can’t find a better word than “philosophical” to describe it. I think you really ARE doing philosophy at many points on this blog (mostly epistemology, imo); what makes it different from most of today’s philosophy is that it is done for practical and public purposes. I find it a refreshing approach.

On another note, I am only somewhat familiar with Gödel’s Incompleteness Theorems, yet when I read your blog I often feel that they have some relevance. Namely, the idea that a formal system cannot represent all truths without allowing the derivation of contradictions. This seems to suggest that a single system of rationality isn’t enough to understand the world, which seems quite meta-rational. Then again, I’ve only looked into the consequences of this theorem without understanding the details, and thus this characterization could be wrong.

Similarly, I’ve thought of another relation that seems relevant. It seems that any proper subset B of a set A cannot be a model of the entirety of A, because B would be incapable of modeling itself without infinite regress. If this is true, then some interesting things seem to follow:
1. If mind-body dualism is false, then the mind is a subset of the one world. It wouldn’t be able to model itself, and therefore couldn’t model all of the world. In this case not all facts about the world would be knowable, and the mind would not be completely knowable either.
2. If mind-body dualism is true, then there are two distinct worlds and the mind inhabits one of them. Since the mind would be separate from the other world, it seems in principle possible that it could model that world. Nevertheless, in this case too the mind still can’t completely understand itself, because it would have to possess a complete model of itself, which we have seen is impossible.

As such, we will never completely understand ourselves :)

Making rationality work

David Chapman 2020-01-29

we do this by applying our systems in nebulous ways. We fudge, applying tacit “know-how” to the way we implement the system.

Yes, this is the essence of Part III! Which is mostly just unpublished notes so far, but “The Parable of the Pebbles” is a summary.

an incoherent understanding of this fact will be marshalled as an excuse for dysfunctional systems

Yes, indeed! And that’s a nice example.

For much more on this, you might find interesting Bowker and Star’s Sorting Things Out: Classification and Its Consequences.

Ratholes

David Chapman 2020-01-29

I think you’ve found a significant phenomenon there!

I have a related concept-in-formation I called “rationalist pit-traps” in a thread starting here: https://twitter.com/Meaningness/status/1083805073950400514?s=20

Philosophy and Gödel

David Chapman 2020-01-29

“Philosophy” has, like all concepts, nebulous boundaries, so whether or not I’m doing philosophy isn’t well defined. I have specific reasons for saying that I’m not: philosophy is often taken to be irrelevant to practical concerns, and my goals are practical. I’ll probably write about this at great length sometime.

You are right that Gödel is relevant. His incompleteness result was one of the major wounds that, together, killed off logical positivism, which was the last serious attempt to make rationalism work.
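
Roughly, and glossing over the precise technical conditions, the first theorem says:

\[
T \ \text{consistent, effectively axiomatized, and containing basic arithmetic}
\;\Longrightarrow\;
\exists\, G_T \ \text{such that} \ T \nvdash G_T \ \text{and} \ T \nvdash \lnot G_T
\]

That is, any such system leaves some arithmetical sentence undecided, so it cannot prove all the truths even of arithmetic.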

Spherical-cow-holes

James 2020-01-31

Thanks for the link. This line from the article linked in your tweet is definitely a keeper:

vast and beautiful grand unified theories based on spherical-cow thinking

That’s definitely the kind of thing that a rathole would form around.

The main difference I see between “rationalist pit-trap” and “rathole” (although having put those two terms side by side, I’m now tempted to synthesize them as “rat-trap”) is that your concept seems to focus on the intellectual content of the trap, and mine on the affective aspect.

Wrong link?

Lucy Keer 2020-07-02

One more :)

The “Bypassing post-rationalist nihilism” link seems to go to the title page of the book again; I assume this should go somewhere else?

Links to the future

David Chapman 2020-07-02

Ugh, yes, sorry. Thanks for reporting this!

The link is to the URL of a page that is in the outline, but that I haven’t finished writing. Informally I call those “links to the future” (sounds much better than “broken link,” doesn’t it?). For years I’ve wanted a better way of handling those, but (1) I don’t know what the optimal UX would be, and (2) there’s no established tech for this, so I’d have to write nontrivial code.

What I had done was to make broken links go to a custom 404 page with text that explains the situation (and that also does a search on the URL text, in hopes that it might turn up something else useful). I did this with a Drupal contrib module that isn’t quite the right thing, and customized it a bit. It’s still not the right thing, but maybe better than just “Page Not Found.”

Now it turns out that there’s a long-standing Drupal core bug that makes example.com/foo/bar just go to example.com/foo if bar doesn’t exist. So, for the Eggplant URLs, the 404 hack didn’t trigger.

Because you reported this, I’ve installed another Drupal contrib module that works around the core bug.

Unfortunately, the two contrib modules don’t cooperate, so for links to the future of The Eggplant, the 404 page you get is a mess.

Drupal is a mess. It was the least-bad option when I set up this site ten years ago, but it’s gone in a very… different… direction.

I’m planning to rebuild the site in some other technology, and have been working toward this fairly actively in the past couple of months. Django is the current most plausible competitor, but it’s not lovable, and I fantasize that I’ll find something better.

What is attractive about Django is that you can just write code, instead of installing 6,237 contrib modules that don’t quite work and fighting their bugs and admin UIs and half-assed config options. I mean, in Drupal you can (and must) also write code, but the fundamental framework was designed originally not to expect you to, and everything about it gets in the way.
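
To give a sense of what “just write code” means here: a links-to-the-future handler could be a few lines in the root urls.py, something like the sketch below. (The paths, template names, and details are invented for illustration; this isn’t what’s actually running on this site.)

# Root urls.py (sketch). The planned paths and template names below
# are made up for illustration.
from django.shortcuts import render

urlpatterns = [
    # ... the site's real routes would go here ...
]

# Pages that are in the book's outline but not written yet.
PLANNED_PATHS = {
    "/eggplant/bypassing-nihilism",   # hypothetical example
}

def custom_404(request, exception=None):
    path = "/" + request.path.strip("/")
    if path in PLANNED_PATHS:
        # A "link to the future": the page is planned, just not written yet.
        return render(request, "link_to_the_future.html",
                      {"path": path}, status=404)
    # Otherwise show a generic not-found page, seeded with a search query
    # built from the URL text.
    query = path.strip("/").replace("/", " ").replace("-", " ")
    return render(request, "not_found.html", {"query": query}, status=404)

handler404 = custom_404  # Django only consults this when DEBUG is False

The appeal is that the special case is ordinary code you can read and test, instead of living in some module’s admin UI and config options.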

(Nobody wanted to know this, but I’m in that particular mental state in which the coffee has kicked in, but I am not yet emotionally prepared to tackle my mother’s insurance paperwork!)

Re: links to the future

Lucy Keer 2020-07-02

Haha, I’m just getting to the end of my day of fighting various bugs here, so can sympathise. Good luck! I’d seen your “links to the future” before, and wondered if something had gone wrong in that area, but wasn’t sure what.

Bypassing "Bypassing post-rationalist nihilism"

Jed Harris 2020-07-25

I’m sympathetic to your plight, trying to find a good exit from a dying ecosystem – with a massive burden of structured content.

In the interim maybe you could just change the text around that link so those of us who try to follow it won’t fall into a dark gray hole searching for the missing page.

Relatedly, in the search I came across “Lovecraft, Speculative Realism, and silly nihilism,” which seems to imply that we generally won’t experience post-rationalist vertigo since nihilism is silly – but maybe you’ve had second thoughts?