Comments on “Judging whether a system applies”

I didn't know that lesswrong

anders horn 2016-06-07

I didn’t know that lesswrong imploded. Where can I read the gossip?
On a related subject to judging when a system applies: since I was young, I have tried to identify what kinds of arguments and rhetorical tactics are used when someone is correct, and what kinds are used illegitimately or for incorrect conclusions, so that when I find people arguing about something I don't know much about, I can orient myself towards the better truth.

I also was more attracted to subjects where I could argue when the teacher was wrong, like math.
One of my sharpest tools for dividing good arguments from pointless ones was asking what purpose I needed the distinction for.
So I only had to model the minimum nebulosity necessary.

Kinds of rhetorical tactics

David Chapman 2016-06-07

I didn't know that lesswrong imploded.

Maybe “imploded” is the wrong word. I don’t think there was any scandal or significant schism; people just lost interest. About two years ago it got much less active. Yudkowsky switching to working mainly on HPMoR was probably a major factor.

Most of the community seems to have migrated to Slate Star Codex. (And Scott’s stuff there is generally awesome.)

I have tried to identify what kinds of arguments and rhetorical tactics are used when someone is correct and what kind are used illegitimately or for incorrect conclusions

That’s very interesting! Anything you’d like to share about what you’ve discovered?

I also was more attracted to subjects where I could argue when the teacher was wrong, like math.

Someone (I wish I remembered who) pointed out that you can get a degree—a PhD even—in many humanities departments without ever having had the experience of being unambiguously wrong. This explains a great deal. Many humanities graduates simply don’t understand that they are talking nonsense when they do so. They literally don’t know what it means to be wrong, and are incapable of ever admitting it.

Whereas, of course, if you do a STEM degree, you experience being unambiguously wrong about fifty times a day.

If you don’t know what it means to be wrong, you don’t know what it means to be right, either.

Great post

Nadia R. 2016-06-07

Great post, David; you made a good introduction to how selecting systems is itself an art. However, I’ve always thought of Kegan’s Stage 5 as being less about knowing when a system applies and when it does not (itself a stark, systematic distinction) and more about integrating the results of multiple systems, each of which kind-of applies. Sometimes integrating the results doesn’t mean figuring out which system got it “right” but instead combining their different reasons into a coherent whole.

Degrees of fit, and dialectic

David Chapman 2016-06-08

Hi Nadia, thanks for the comment! It gives me an opportunity to clarify some points.

I don’t remember whether Kegan addressed the question of judging whether a system applies. My recollection is that he’s frustratingly vague about the specifically cognitive aspects of stage 5. He mostly talks vaguely about “dialectic,” which generally means combining aspects of two systems (as you say).

So, part of what I’m doing here is trying to fill in the details of the mostly-missing cognitive aspect of stage 5 in his story. But, obviously, I can’t speak for him, and I’m not relying on psychological research data, so this is conjectural and dubious. Quite possibly I should not have mentioned him at all!

when a system applies and when it does not [is] a stark, systematic distinction

I didn’t mean to suggest that. In general, a system fits a concrete situation more or less well, and never quite perfectly. I’ve added a couple of sentences to the post to explain this; thanks for pointing out the problem!

I wrote this page mainly for people who are just barely ready to go beyond systematic thinking. I wanted to make the concept of meta-systematicity as easy for them to begin to understand as possible, by explaining the simplest and most clear-cut form. It’s atypical, but I hope it is more accessible than a discussion of dialectical cognition.

I don’t know of any really clear explanation of that, so I may have to write one myself. But I haven’t gone very far into the literature, either. If you, or anyone, can recommend sources, I’d appreciate it! Michael Basseches’ work seems like a good starting point, but I haven’t read much of it yet.

We are in agreement

Nadia R. 2016-06-09

I’m looking forward to seeing how you lay out a “path beyond modernism”. Talking about the limitations of systems seems like it may work.

Kegan is quite vague, though not any more so than most of the ed literature I’ve read. I think of his stage 5 not as “combining” the judgements of different systems so much as making each system provide not just a black-and-white judgement with an argument supporting it, but a richer description of how and why and when that argument applies. So while two systems may yield conflicting judgements, they provide models of the world that can be reconciled, and then a new judgement can be made in this new, richer model of the world.

That said, I would not say I have a fluent grasp of Stage 5, whatever it is, and anyway by profession have focused more on the transition from 2→3 and 3→4.

Ooh, I have one

John S Costello 2016-06-10

It’s not so much incorrect to say that George Washington was the first President of the United States as it is trivial. This is because what constitutes the office of President is nebulous and ever-changing. As it exists now, it is so substantially different from the way it was in Washington’s time that it is simply wrong to consider the two positions “the same” even though they share the same name.

Some non-exhaustive examples:

  • The groups of people eligible to select the President are different now than they were then — across practically every substantial demographic category: geography, race, gender, and age.
  • The procedure for electing a President in the modern two-party state is unrecognizably different from that of Washington's time.
  • The legal powers of the modern President are much broader than those available to Washington, both de jure (because Congress has ceded much of its rule-making authority to the Executive) and de facto (because of extra-constitutional authority the office has arrogated to itself, e.g., the power of the Executive Order).
  • The military powers the modern President commands are vastly different from those of Washington's time, both de jure (we now have four branches of the military, all of which are standing forces) and de facto (Presidents can now wage war without a declaration of war), as well as pragmatically (Presidents with nuclear weapons now literally have the power to destroy human civilization).
  • The modern President has police powers (via the FBI, BATF, Secret Service, federal marshals, …), which Washington didn't have.
  • … (I'm sure I could list many more examples like this, without leaving the domain of differences in roles, responsibilities, powers, procedures, and prerogatives of the office, which are the defining characteristics of what it means to exist *as a political office*).
  • Finally, and ironically, were George Washington to appear now, he would be *ineligible* to be President, and were Barack Obama (or Hillary Clinton!) to be transported back to 1789, they would be *ineligible* for the office as well. (In Obama's case, possibly only de facto.)

So, yes, George Washington was the first “President” — because every person elected “President” comes to a substantially different office, and each is the “first”.

TL;DR

John S Costello 2016-06-10

If when you say “Washington was the first President” you mean it in the same sense that you would mean when you say “Obama is the current President”, then that’s just not true, because in that sense, Washington was never President.

tcejbuS

Alrenous 2016-06-11

It’s true there’s not really any such thing as a ‘president.’ However, it is a good shorthand. GW being the first president is true enough to do work, though really there’s only atoms and the void.

Reasoning systems are functions. Functions have domains. E.g. f(x)=1/x doesn’t have zero in the domain. The domains of reasoning systems are defined by where their axioms hold. It doesn’t actually require 5th level reasoning to work out where a domain is. Though admittedly it occasionally takes 5th level to remember that you have to work out where the domains are.
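A minimal sketch of that picture, with a toy "system" and an invented domain test:

```python
# Toy illustration: a reasoning system modeled as a partial function with an
# explicit domain, like f(x) = 1/x excluding zero. Names are invented.

def in_domain(x: float) -> bool:
    """The axioms of 'reciprocal reasoning' hold only for nonzero x."""
    return x != 0

def f(x: float) -> float:
    """Apply the system only where its axioms hold; refuse elsewhere."""
    if not in_domain(x):
        raise ValueError("outside the domain: this system does not apply here")
    return 1 / x

print(f(4))    # 0.25 -- inside the domain, the system works
# f(0)         # would raise: the question is simply not one this system answers
```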

Definitions are also exactly mathematical. They are coordinate systems. The actual triangle or whatever doesn’t change, but we have to use coordinates to describe it, which we can change. GW has a set of indisputable properties, such as being unanimously elected and making president-y decisions, but we have to use definitions to describe them, and we can change whether our definitions call him ‘president’ or not.

That the probability is incomputable doesn’t mean it doesn’t exist. (Or that it has error bars in the +/- 30% range.) It’s a good idea to at least have a ballpark, and compare it to some standard. In physics the standard is 99.999 etc. Expecting this from sociology or politics is just dumb, but 70% is often good enough. It helps keep the categories from muddying each other, and keeps bad/incompetent actors from intentionally muddying them.

Thinking about how nothing could be 100% certain allowed me to find the thing that does have 100% certainty.

Gender exists no more than presidents do. Again, it’s a relatively-arbitrary definition, a handle we use because we can’t grasp reality directly. It’s a good definition, so it’s useful most of the time, but like systems, it has a domain, based on its original axioms. Transsexuals are outside this domain.

Though, like addition can be generalized to multiplication, the definition of gender could be generalized. How it generalizes depends on how exactly you want to construct the original definition.

I personally define gender such that transsexuals are very definitely neither male nor female. (This is of course a problem for trans, since they change precisely because they want to partake of the properties of the other side.)

The president objectively matches the definition we happen to assign to ‘president.’ Of course when you dissect the president, you’re not going to find how you or I hold the definition of president. It’s in us, not him.

I have to think about whether ‘president’ is local or not.

Pretty sure understanding of correspondence broke down, rather than the theory.

There’s not any logical problem with defining ‘George Washington’ as a time-sensitive function. It’s merely inconvenient. Though if you’re going to do that, should really say you can’t elect a man president even once, because neither men nor presidents actually exist. Problem is you can’t grasp reality directly, so that project isn’t going to get very far…

There’s not really any problem of transtemporal identity. It confuses the definition for the reality. Theseus never had a ship, so he can’t lose it. (Nor was there a Theseus, for that matter. Or a ‘had.’) But he had something defined as a ship. Whether it’s replaced depends on whether you want to define ‘ship’ as a particular set of boards or as the ability to cross water. Theseus loses all his original boards but he can still cross water. There’s your general, systemic answer.

Depends on what you mean by ‘exist.’ There are artifacts that currently exist that can only have existed (modulo error) if GW had a corporeal body and matched the definition ‘president’ in the late 1700s.

If you want you can say GW doesn’t exist and is thus metaphysical, but you then have to account for these artifacts, and the model thus evolved will add up to the same thing as what we’ve got, but be more complicated.

I would say we do indeed measure GW, via his spoor. We must measure (almost) everything through spoor like this, so doing so over a greater span seems hardly problematic.

Of course the thing itself changes without the definition necessarily changing. That’s, like, the essence of locality and identity. The question is not whether they change, but whether they change too much, and it’s clear that humans find the changes in ‘America’ to be immaterial to the continuity of the presidency.

The transtemporal existence of countries is not an issue because the country never really existed in the first place, so it continues to not exist. Studying the transtemporal definition is easy. Those two considerations exhaust the topic.

Countries are kind of interesting in that the definition is part of their causal chains.

Everyone needs to study math. If you want to make a fixed statement about an interesting function f(t), the statement just has to take that variable too, g(t). Even as the object described by f(t) moves, g(t) will remain true. It will endure, and be definite.
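One concrete way to read that, as a toy sketch (the data and names are invented):

```python
# Toy data: f(t) is the time-varying fact, g(t) a statement that fixes the
# time variable inside itself, so its truth value never moves.
OFFICE = {1789: "Washington", 1797: "Adams", 1801: "Jefferson"}

def f(t: int) -> str:
    """Who holds the office as of year t (toy lookup)."""
    return OFFICE[max(y for y in OFFICE if y <= t)]

def g(t: int) -> bool:
    """Takes t, but does not depend on it: the time is fixed internally."""
    return f(1789) == "Washington"

print([f(t) for t in (1790, 1850, 2016)])  # changes: Washington, Jefferson, Jefferson
print([g(t) for t in (1790, 1850, 2016)])  # durable: True, True, True
```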

“Washington was the first American President” is a durable claim

It’s a definition. The entities ‘America’ and ‘President’ are defined, for the purposes of this interlocution, such that ‘Washington’ was the first one. It tells you what the sayer thinks ‘America’ and ‘Presidents’ are.

E.g. perhaps America ceases for a while, then America II appears. Was GW the first president, or GW_AII? Depends on how you feel like defining ‘America’ today. If you want to define it such that it will always be true, you can. But you don’t have to. The only thing you’re obliged to do is make the definitions consistent with your other definitions.

Whether technical glitches matter depends on whether you define ‘president’ legally or pragmatically. The technicalities probably make it illegal. But he was allowed to do presidenty things anyway? So....

Someone cares about the de facto first president generally because they want presidential high status to apply to a non-de-jure president. If so, they should argue that directly, rather than trying to re-define ‘president’ to do it, and dragging along a bunch of extra considerations.

The British Crown doesn’t exist. Also nothing is ‘legitimate’ or ‘illegitimate.’

As a matter of fact, control probably goes in the opposite direction as claimed, these days. Whoever runs America also runs significant parts of England. Even if I accept the definition of ‘genuine state’ or whatever, so what?

“Pop Bayesianism” is a

Kaj Sotala 2016-06-19

“Pop Bayesianism” is a religious movement whose central dogma is that you should always reason probabilistically. This is silly, of course. As with other religions, none of its adherents meaningfully believe their Holy Truth, but pretending they do makes them feel better in the face of uncertainty and helplessness.

I’m getting a little frustrated. I thought that you’d have given up with these kinds of strawmen after all the previous discussions in the comments of the original post and elsewhere. Why do you keep making characterizations like “central dogma is … that you should always reason probabilistically”, when you should know very well by now that they’re patently untrue? Or at least, describing a version of “pop Bayesianism” that nobody so far has admitted to endorsing?

Is it a strawman?

David Chapman 2016-06-20

Hi, Kaj, thank you for your (nearly perfect) patience!

I don’t remember our having discussed the specific question of whether “you should always reason probabilistically” is a tenet of pop Bayesianism. I confess that I don’t remember clearly all our discussions from three years ago. I started re-reading them, but there’s an awful lot there! So far as I can recall, that’s not one of the issues we discussed, but I may well be wrong. Maybe you are patient enough to refresh my memory?

It doesn’t seem like a strawman. Here’s E. T. Jaynes, who is the main source of recent pop Bayesianism:

The mathematical rules of probability theory are not merely rules for calculating frequencies of “random variables”; they are also the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind.

(I explain patiently in “Probability theory does not extend logic” that this is unambiguously, mathematically, utterly, disastrously false. He just got the math outright wrong. Have you read that? It might be interesting, if not.)

And Yudkowsky, in “Searching for Bayes structure”:

if a mind is arriving at true beliefs, that mind must be doing something at least vaguely Bayesian or it couldn’t possibly work.

In fact, any part of a cognitive process that contributes usefully to truth-finding must have at least a little Bayesian structure

How philosophers pondered the nature of words! All the ink spent on the true definitions of words, and the true meaning of definitions, and the true meaning of meaning! What collections of gears and wheels they built, in their explanations! And all along, it was a disguised form of Bayesian inference!

The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful… And this cognitive phenomenon didn’t look anything like Bayesian on the surface, but there’s this non-obvious underlying structure that has a Bayesian interpretation - but wait, there’s still some useful work getting done that can’t be explained in Bayesian terms - no wait, that’s Bayesian too - OH MY GOD this completely different cognitive process, that also didn’t look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE - hold on, are these non-Bayesian parts even doing anything?

Yes: Wow, those are Bayesian too!
No: Dear heavens, what a stupid design. I could eat a bucket of amino acids and puke a better brain architecture than that.

(Lightly edited for concision; emphasis in original.)

The overall point of his post seems to be exactly that “you should always reason probabilistically,” that nothing else works, and that anything that does work has to be Bayes in disguise.

I think he got this from Jaynes. Who was, as I patiently explained, flat-out wrong. And pig-headed about it, actually, too. It appears that people pointed out that he was wrong, and he refused to listen.

I don't honestly remember

Kaj Sotala 2016-06-20

I don’t honestly remember whether we happened to discuss that particular claim, either. But I would regardless have presumed all those discussions to already have taught you the heuristic, ‘if you claim that the Bayesians believe something totally crazy, then you’re probably being too uncharitable’. :P

I’ll grant that Jaynes might have made such a claim - I’m not as familiar with him. But Eliezer explicitly disclaimed such an interpretation here: http://lesswrong.com/lw/sg/when_not_to_use_probabilities/

OK, updated accordingly

David Chapman 2016-06-20

Hmm. That’s a sensible post!

It seems to contradict other things he, and others, have written. But, on the theory that maybe I’m missing something, I’ve deleted the paragraph from my post.

Thank you!

Kaj Sotala 2016-06-21

Thank you!

What do you feel it contradicts? I think it’s perfectly consistent with both the earlier quotes you gave (from Jaynes and Eliezer), for instance. You can consistently hold that any inferential process that produces useful information has “Bayes structure” at the core, while also holding that your mind contains specialized mechanisms for doing inference that will do the job better than trying to verbally reason about probabilities.

(I forgot to answer your question of whether I’ve read “Probability theory does not extend logic”. I started reading it at some point, then got distracted by something, then never remembered to go back. But it does seem interesting, so I appreciate the reminder. Maybe I’ll get around to finishing it eventually. :))

Attempting to steelman the contradiction

David Chapman 2016-06-21

Well, both these posts are highly informal, and confused, and internally self-contradictory. It would take a lot of steelmanning to make them coherent enough to point out all the contradictions, flat-out errors, and falsehoods.

He starts out with:

I do not always advocate using probabilities,

which would (presumably) contradict “you should always reason probabilistically” outright. That’s good, because you shouldn’t. But then he corrects himself to:

Or rather, I don’t always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

This is also sensible, but it’s an entirely different claim. He’s flailing around here because he is trying to preserve “you should always use Bayes” in face of multiple reasons you can’t and shouldn’t.

He reformulates several times, and each is a logically distinct claim. Overall, he’s out of his depth and confused, and writing unclearly because he doesn’t know what he’s talking about, and has a quasi-religious belief in something that is demonstrably false.

A simplified steelmanning would be something like:

  1. Bayes is the complete, correct account of reasoning, and the only one possible.
  2. So you should always reason using it.
  3. Our brains evolved to reason with Bayes, and they contain special-purpose hardware accelerators for that.
  4. Bayesian inference is computationally intractable, so officially we can’t do it at all, but the hardware [somehow] gives good approximations in practice.
  5. Sometimes you can’t use an accelerator, so you have to reason verbally and explicitly. This is slow and error-prone, so do it only as a last resort.
  6. Any time you get a right answer, but appear not to be using Bayes, it’s because you are unconsciously using an accelerator.
  7. That has to be true, because we know Bayes is the only method of reasoning that gets right answers; see #1.

This is all quasi-religious (rationalist-eternalist) reasoning starting from what must be, rather than from what evidence says is the case.

Yudkowsky gets claim #1 from Jaynes, and it is unambiguously, mathematically false. Jaynes thought it was a consequence of Cox’s Theorem, but that is just completely wrong (as I explain patiently in “Probability theory does not extend logic”). Yudkowsky didn’t have enough mathematical background to recognize Jaynes’ error, and took it on faith.

Claims #3 and #4 are wistful certainty: Bayesian inference is impossible, but we must be doing it, so there must be something that makes it possible.

Claim #6 is a blanket strategy to explain away all evidence to the contrary. It’s untestable given the current state of neuroscience. (But we know it is false a priori because #1 is false a priori.)

If “Probability theory does not extend logic” is too long, you can read just the appendix about how Jaynes got confused.

And then, if you don’t believe that, you can go back and read the mathematical details. It’s just fopc (first-order predicate calculus), which is basic stuff. Not at all difficult, and everyone should know it—especially if they are interested in rationality!

Thanks! Clearly I ought to

Kaj Sotala 2016-06-21

Thanks! Clearly I ought to read that if I want to keep arguing these things. Though to be honest, I don’t really care that much (anymore?) about whether the Jaynes/Eliezer claim of Bayesianism is true or not. I just showed up to correct what seemed like a misrepresentation, and don’t have a major stake in criticisms of the correct representation. :)

As you’ve pointed out, this stuff forms only a part of rationality, and these days my attention has shifted a lot more towards aspects of rationality for which the Bayesianist debate is irrelevant. I’m now more interested in stuff like accepting painful emotions and being able to see the information that they’re trying to convey, building up reliable habits that take you towards your goals, creating social environments that encourage regular reflection on what you’re doing while also getting you to actually do stuff on top of just reflecting, etc.

invalid arguments

anders horn 2016-06-22

Explaining what I’ve discovered about distinguishing good arguments from bad will be difficult, because it’s a system of resemblances and “that’s funny…”s rather than clear rules, and also because my project failed to find any (non-obvious) universally valid or invalid patterns. Whether someone is right or not depends on the content of what they say and not just the form. So I will attempt to reconstruct it in a more intelligible form.

The basic process I use to figure out whether an argument technique is good or not is: when I see an argument used to support a conclusion that I know is wrong, it loses credibility. When an argument convinces me of something that I didn’t know before, then that argument gains credibility. (Now that I think about it, this sounds like Hebbian learning.)
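A rough sketch of that process, just to make the Hebbian comparison concrete (the techniques, numbers, and update rule are invented):

```python
# Credibility scores for argument techniques, nudged up when a technique
# accompanies a conclusion that turns out right, down when it accompanies
# one known to be wrong -- loosely Hebbian "fire together, wire together".
credibility = {"cites checkable facts": 0.5, "calls everyone else a fool": 0.5}

def update(technique: str, conclusion_was_correct: bool, rate: float = 0.1):
    target = 1.0 if conclusion_was_correct else 0.0
    credibility[technique] += rate * (target - credibility[technique])

update("cites checkable facts", conclusion_was_correct=True)
update("calls everyone else a fool", conclusion_was_correct=False)
print(credibility)  # facts-citing drifts up, fool-calling drifts down
```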

Judging the validity of arguments that I don’t completely understand is an extension of information scent. Valuable information is true, novel, and important. I will focus less on novelty and importance because they are less damaging to get wrong and are strongly selected for by other people.
Someone could be wrong because they are uninformed, pigheaded, or lying. (All of these are cured if they acknowledge opposing arguments.)

Things I have seen in informative arguments are: citation of facts (because they could be refuted by refuting the facts), acknowledgement of opposing arguments, agreeing that the other person is right but that there is a bigger picture, wordplay, and that I already trust the person.
Things I have seen in suspicious arguments are: ignoring other people’s points, sensational titles that aren’t lived up to, traditional examples (like the cargo cult example), claims that everyone else is a fool, visible misunderstandings of facts, disproportionate responses (emotionally or otherwise), incoherent responses, and responses that don’t make sense given the context.

Actually do stuff

David Chapman 2016-06-22

Kaj, that’s very interesting… It appears, from a distance, that the LW-centered rationalist community has made a similar transition. There’s a recent LW post about CFAR’s current teaching that seems to reflect the same concerns. I’ve also read (in the popular press) that they’ve stopped teaching Bayes entirely, because people didn’t find it useful.

Generally, this shift seems consistent with the beginnings of a move from “stage 4” (rationalist eternalism) towards “stage 5” (meta-systematic cognition), which I wrote about in the previous post in this series.

I intend the current post to support a first step in that direction. I hope it is helpful for some!

Resemblances rather than clear rules

David Chapman 2016-06-22

Anders, thanks for that! “Resemblances rather than clear rules” sounds rather like “meta-systematic cognition”!

5 more reasons why GW wasn't President

Malcolm Ocean 2016-06-28

11: Somewhat pedantic: the real truth is that “George Washington IS the first President of the United States.” That title has not since fallen to someone else, but exists for all eternity, and thus to be expressed accurately needs to be in the present tense.

12: Others had already held roles equivalent to President, within the area now known as the United States, before George Washington. The fact that people renamed the area and the role doesn’t change the essence of him not having been the first to play this role.

13: Since history repeats itself, the notion of anything being the first instance of something is absurd. The structure “President of the United States” is merely one instantiation of a much larger pattern that transcends time.

14: There are many people named “George Washington”, making that sentence true only for some more specific designation of the person in question. [∃ “George Washington” such that “George Washington was the first President of the United States”] does not imply [“George Washington was the first President of the United States” ∀ “George Washington”]

15: Given the multiple branches of history that could have occurred, and the fact that they did occur in other Everett branches, it’s not accurate to say George Washington was the first President, but merely a first President. Which is a meaningful statement! It’s not true of anyone who in all branches died before the United States was founded, or was born well after, or was born into a universe that never contained a United States.

Mine were created before I really engaged with the rest of the post, including the idea that the systems be applicable to something else. That said, I think that my #12 is somewhat systematic and related to your #9 and #10, and that my #14 and #15 also come from useful systems of thought, if misunderstood ones.

Bayesianism's relevance to subjective probability

Malcolm Ocean 2016-06-28

Regarding the Bayesianism instance, you seem to be focusing on it as a kind of objective probability, whereas the notion of subjective probability, i.e. credence, remains very valuable. With access to any trusted sources, you can easily become basically as certain that Benjamin Harrison was President #23 as you are that Washington was President #1. But without access to sources (or the ability to trust them) you might prefer to bet on a coin flip, because you have no idea.
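A toy illustration of that credence difference, with invented numbers:

```python
# Credence in "Benjamin Harrison was President #23", before and after
# consulting a trusted source, via a simple Bayes update (made-up numbers).
prior = 1 / 44               # roughly "no idea which number he was"
p_confirm_if_true = 0.99     # a trusted source almost surely confirms a truth
p_confirm_if_false = 0.01    # and almost never confirms a falsehood

posterior = (p_confirm_if_true * prior) / (
    p_confirm_if_true * prior + p_confirm_if_false * (1 - prior)
)
print(round(prior, 3), round(posterior, 3))  # ~0.023 -> ~0.697
```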

But this is perhaps not relevant to your post.

George Washington Bayes

David Chapman 2016-06-28

Thanks, those five more reasons are great!

Re Bayes… How does what I said relate to subjective vs. objective probability? I don’t see a connection.

Is there a motte and bailey defense happening here?

The motte is “you should be more confident of something if you get more evidence for it,” which is obviously true. (Some Bayesians claim that some unenlightened people don’t get this. Possibly they are right.)

The bailey is “probability theory is the One True Way to become more confident based on evidence.” This is obviously false, because you can’t do probability theory without assigning real numbers to hypotheses, and you can hardly ever do that.
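Concretely, the rule the bailey relies on, in its standard form, cannot be turned without actual numerical values for the prior and the likelihood:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
```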

When pushed, Bayesians try to defend the bailey with the Dutch book argument (which has almost nothing to do with reality) and by invoking Cox’s Theorem (which says something completely different from what they want it to).

When that doesn’t work, the next step is to say “well, assigning arbitrary numbers is better than nothing,” but they don’t seem to have any empirical evidence for that.

So then they fall back to the motte: “All we’re really saying is that you should have more subjective confidence in a hypothesis if you have better evidence for it. Therefore, Bayes!”

There is some evidence for it

Romeo Stevens 2016-06-28

There is some evidence for it. The Good Judgment Project, and the forecasting literature in general, supports the notion that ad hoc quantification of credence beats tacit models (expert intuition) quite handily. Turning expert judgement into simple linear models, even when not done very rigorously, also beats informal reasoning. This is not to say that these reasoning methods are the be-all and end-all; they are domain-limited, of course. But I think they contradict your claim.
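A toy version of the kind of simple linear model meant here (an equal-weight sum of standardized cues, in the spirit of Dawes's "improper linear models"); the cue names and scores are invented:

```python
# Improper linear model: skip clever weighting, standardize each cue and
# add them up. Cue names and data are invented for illustration.
from statistics import mean, stdev

candidates = {
    "Forecaster A": {"track_record": 7, "uses_base_rates": 3, "updates_on_news": 5},
    "Forecaster B": {"track_record": 4, "uses_base_rates": 8, "updates_on_news": 6},
}

cues = list(next(iter(candidates.values())))
stats = {c: (mean(p[c] for p in candidates.values()),
             stdev(p[c] for p in candidates.values())) for c in cues}

def score(person: dict) -> float:
    """Equal-weight sum of z-scored cues."""
    return sum((person[c] - stats[c][0]) / stats[c][1] for c in cues)

print({name: round(score(p), 2) for name, p in candidates.items()})
# {'Forecaster A': -0.71, 'Forecaster B': 0.71}
```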

Good judgement

David Chapman 2016-06-28

Thanks, good point!

I don’t know that literature—I’ve read only brief summaries.

"We cannot say that George

Joshua Holmes 2016-06-28

“We cannot say that George Washington was the first President because we cannot tell if our senses provide objective information about an actually-existing reality. I could be a brain in a vat or a hallucinating mental patient, in which case, much of my knowledge would actually be false.” Although I may actually be a brain in a vat somewhere, it’s irrelevant to the question, since I can distinguish true parts of my experiential reality (“George Washington was the first President.”) from false ones (“My mother is the Crab Nebula.”)

“We cannot say whether George Washington was the first President because there is no such thing as objective reality, only subjective experience. And this is obviously true because look at quantum mechanics: observation makes reality!” Besides misunderstanding the “observer” of quantum mechanics, this is self-refuting because it is, itself, an objective fact.

W. W. Bartley

Chris Hibbert 2016-07-31

Nicely written. Thanks. I’ve read about ten of your posts here today, and feel like I’m starting to understand what you’re saying. I’ve always doubted the depth of LWers’ claims that Bayesianism is all there is. OTOH, I don’t treat Bayesianism as just a matter of believing that you should apply Bayes’ rule in order to determine what’s true. At least as promoted on LW, it also means things like “you must update after sharing information with someone you disagree with.”

Anyway, (reference my domain, pancrit.org) I’ve always felt that Bartley’s Pan Critical Rationalism was a deeper answer to the problem of epistemology. PCR starts by recasting the nature of epistemology from “the problem of knowledge” to “How do I decide what to believe”. Bartley’s big invention was to fix Popper’s Critical Rationalism. Popper basically said “this you must believe: the secret to epistemology is attacking every question from many angles.” Bartley’s correction was to say that even Critical Rationalism has to be criticized and defended.

Your approach to the ten denials of George Washington is very much in the spirit of Bartley. It’s not enough to calculate or deny or investigate. You’re not done until you’ve used judgement to figure out what you should believe.

Thanks.

W. W. Bartley

David Chapman 2016-07-31

Thanks, glad you liked it!

In an odd circumstance, I once took a class with Bartley, long ago. Have been reading references to him recently, and he’s on the list of “ought to investigate further and probably actually read his stuff” now (but haven’t done that yet). Thanks for the recommendation; I am moving him forward in the queue!

Bartley

Chris Hibbert 2016-07-31

I should have given the reference. The work I refer to is “The Retreat to Commitment”. I have extra copies to give away. :-)

The Black Box from Probability and radical uncertainty

Malenkiy Scot 2016-11-29

I think the prudent approach would be to ask you what you know about the box and how you know it (did you disassemble it, for example?). Get a feel for whether you are lying or trying to pull a prank, etc. Proceed accordingly. (The runes and eldritch figures are a red herring, of course, so that’s somewhat of a prior that you are trying to pull a prank.)

Get a feel

David Chapman 2016-11-29

Wonderful! Thank you!

“Talk to the guy and see if you can figure out what he is up to” is the answer I was looking for… and no one on LW suggested it. Whereas all the non-rationalists I tried the “puzzle” on came up with that immediately. I thought that was quite telling!

I intended to write several more posts in that sequence, and some are nearly complete in draft form, but I got extra-busy just then, so I didn’t post them immediately, and then LW imploded for unrelated reasons, and I never got around to it.

I’ve vaguely intended to publish them here instead… but some good people are trying to revive LW, so maybe I should put them over there!

Runes and eldritch figures

David Chapman 2016-11-29

The runes and eldritch figures are a red herring, of course, so that’s somewhat of a prior

Oh, yes, and this also was part of the answer I was looking for. It was weird how seriously many of the LW respondents took that. I thought it was obviously a joke… if someone wasn’t sure, generally it should be easy to find out by talking to “me” and checking facial expressions and so on.

More generally, the thought experiment is a social situation, and there are social resources available. (You could pull out your phone and google search me and quickly find out what sort of person I am even if you don’t want to engage in conversation.)

The LW mindset goes straight to “how do we apply probability theory to a physical mechanism” which, in this case, is impossible. That was the point I was trying to make.

The LW mindset goes straight

Malenkiy Scot 2016-11-30

The LW mindset goes straight to "how do we apply probability theory to a physical mechanism" which, in this case, is impossible. That was the point I was trying to make.

Probably one of the issues here is framing: in real life most people would handle it correctly without much thinking, but when the problem is presented in written form in a semi-formal setting, some other mental mechanism kicks in. It’s like when you are solving problems from a textbook: your approach is not to come up with creative solutions, but more or less to find ways to apply what you’ve just learnt.

Down the garden path

David Chapman 2016-11-30

Yes, that’s a very good point! Maybe I was being sinister and malicious after all, by leading LW readers down a garden path. I set up a context in which a wrong answer would be the obvious one, and the normally-obviously-right answer was hidden behind a hedge.

Just to note that the “This

Simon Grant 2020-11-15

Just to note that the “This must mean something” link to apollospeaks has disappeared… Ah, linkrot…