Comments on “Part One: Taking rationalism seriously”
On "ratholes"
On a related note, I’ve recently been trying to delineate a concept I call a “rathole.” The core intuition is an idea that is obviously wrong but that people adopt because it makes them feel smart.
I think there are basically two kinds: positive ratholes and negative ratholes. A positive rathole is built around a systematic account of some phenomenon; a negative rathole is built around the denial of some idea for lack of a systematic account of it. Of course, a rathole can have both a positive and a negative component.
Conspiracy theories are generally ratholes in this sense, but so were, and are, a lot of intellectually respectable theories.
I wouldn’t want to actually make it essential to the concept that the idea a rathole is based around is wrong: I think it’s more essential that the rathole-inhabitant(s) think of themselves as the genius possessors of some brilliant truth which others are either too stupid, too cowardly, or too dishonest to accept. I don’t want to give an out to someone to say, “OK, I display all the traits of being in a rathole — but I’m right!”
"Not Philosophy", Gödel stuff, and the limits of rationality?
I find it funny that you keep declaring this whole project “not philosophy”. I agree that it does not entirely fit (empirical psychology, Buddhism, history, etc.), but I nevertheless can’t find a better word than “philosophical” to describe it. I think you really ARE doing philosophy at many points on this blog (epistemology mostly, imo); what makes it different from most of today’s philosophy is that it is done for practical and public purposes. I find it a refreshing approach.
On another note, I am only somewhat familiar with Gödel’s incompleteness theorems, yet when I read your blog I often feel that they have some relevance. Namely, the idea that a formal system cannot capture all truths without also allowing the derivation of contradictions. This seems to suggest that a single system of rationality isn’t enough to understand the world, which seems quite meta-rational. Then again, I’ve only looked into the consequences of the theorems without understanding the details, so this characterization could be wrong.
Similarly, I’ve thought of another relevant relation. It seems that any proper subset B of a set A cannot be a model of the entirety of A, because B would be incapable of modeling itself without infinite regress. If this is true, then some interesting things seem to follow:
1. If mind-body dualism is false, then the mind is a subset of the one world. It wouldn’t be able to model itself, and therefore couldn’t model all of the world. In that case, not all facts about the world would be knowable, and the mind would not be completely knowable either.
2. If mind-body dualism is true, then there are two distinct worlds and the mind inhabits one. Since the mind would be separate from the other world, it seems in principle possible that it could model that world. Nevertheless, in this case too the mind still couldn’t completely understand itself, because it would have to possess a complete model of itself, which we have seen is impossible.
As such, we will never completely understand ourselves :)
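The regress in the argument above can be sketched loosely in symbols. This is my own informal notation, assuming a modeling operation m that takes a thing to its complete representation inside the mind M:

```latex
% Suppose the mind M is a proper part of the world W, and that any model
% M holds is itself contained in M. To model all of W, M must contain a
% complete model m(M) of itself; but a complete model of M must include
% M's model of itself, and so on without end:
\[
  M \supseteq m(M) \supseteq m(m(M)) \supseteq m(m(m(M))) \supseteq \cdots
\]
% Each level must fully represent the level before it, so a finite mind
% could never hold a complete model of itself.
```

This is only a sketch of the intuition, not a proof; whether the regress is genuinely vicious (rather than just infinite, like a quine that prints itself) is exactly what the argument would need to establish.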
Spherical-cow-holes
Thanks for the link. This line from the article linked in your tweet is definitely a keeper:
vast and beautiful grand unified theories based on spherical-cow thinking
That’s definitely the kind of thing that a rathole would form around.
The main difference I see between “rational pit-trap” and “rathole” (although, having put those two terms side by side, I’m now tempted to synthesize them as “rat-trap”) is that your concept seems to focus on the intellectual content of the trap, and mine on the affective aspect.
Wrong link?
One more :)
The “Bypassing post-rationalist nihilism” link seems to go to the title page of the book again; I assume this should go somewhere else?
Re: links to the future
Haha, I’m just getting to the end of my day of fighting various bugs here, so I can sympathise. Good luck! I’d seen your “links to the future” before and wondered whether something had gone wrong in that area, but wasn’t sure what.
Bypassing "Bypassing post-rationalist nihilism"
I’m sympathetic to your plight, trying to find a good exit from a dying ecosystem – with a massive burden of structured content.
In the interim, maybe you could just change the text around that link so those of us who try to follow it won’t fall into a dark gray hole searching for the missing page.
Relatedly, in the search I came across “Lovecraft, Speculative Realism, and silly nihilism” which seems to imply that we generally won’t experience post-rationalist vertigo since nihilism is silly – but maybe you’ve had second thoughts?
Making it work, and how it doesn't
I love this line:
In addition to making the world less nebulous, we do this by applying our systems in nebulous ways. We fudge, applying tacit “know-how” to the way we implement the system. Mostly, this is a good thing, because it keeps everything going.
(Forgive me if what I’m about to rant about is covered in a later part; I’ve only read Part One so far.)
However, there’s a risk that an incoherent understanding of this fact will be marshalled as an excuse for dysfunctional systems. For example, Microsoft a couple of years ago changed their terms of service to say
Seems reasonable at first, right? Except it’s for all their services, and does not distinguish between private Skype video calls to your significant other, and posting something publicly on, e.g., Xbox Live. Speaking of posting stuff on Xbox Live, Microsoft encourages you to post clips from the games you play, many of which are quite graphically violent. And does “shar[ing]… criminal activity” include using your Hotmail account to discuss a crime that someone else committed?
Microsoft’s response was, bizarrely, to insist that they hadn’t actually changed the policy, only clarified it. In essence, though, their response to the criticisms was “That’s not what we meant,” which is the usual response to someone pointing out that a policy is overly broad and has unreasonable consequences. As if “Don’t worry, we intend to enforce the policy selectively” is supposed to be comforting.