Recent comments

a s 2021-12-27

Commenting on: I seem to be a fiction

I thought it was amusing that this page was written at the end of an AI winter, and now we’re back in an AI boom which has produced some decent imitation visual cortexes - but still nothing in the direction of AGI. Though some rationalists seem to be trying to worship GPT3 just in case it turns out to be Roko’s Basilisk.

  1. Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?

Maybe they get it from Dr. Bronner’s soap bottles. Although those are translations from German, I wouldn’t want to insult the man by calling them bad, whatever they are.

On "unique cognitive styles"

Michael Nielsen 2021-12-27

Commenting on: How To Think Real Good

You write: “I’ve found that pretty smart people are all smart in pretty much the same way, but extremely smart people have unique cognitive styles, which are their special “edge.””

One possible hypothesis for why this is the case is that pretty smart people are essentially products of institutions. 20 people whose intellectual development is dominated by being at MIT will tend to end up rather similar, simply because they’ve been put through the same sausage grinder.

Someone sufficiently smart will find that relatively easy, and may have time to develop in other ways, very different from what is produced by the standard sausage grinder.

It is, of course, impolite to consider specific people in a context such as this. Still, it checks out with many specific examples. Though I wonder to what extent I am confusing correlation and causation here.

Country-level Stage 5

Esn 2021-12-20

Commenting on: A bridge to meta-rationality vs. civilizational collapse

I have a tentative hypothesis to contribute here: perhaps the elite of Russia is currently, tenuously, in Stage 5?

Hear me out here…

I am thinking in particular of this quote from the essay:

“Stage 5 entertains multiple systems, and is comfortable with contradictions between them, because systems are not absolute truths, only ways-of-seeing that are useful in different circumstances. Stage 5 is uniquely comfortable with value conflicts, since (unlike both 3 and 4) it does not take any value as ultimate.”

The country of Russia is, due to its tumultuous ideological history, currently chock-full of people holding irreconcilable values and historical interpretations at all levels of society. Despite this, it’s going through a period of relative peace and prosperity. The approach of the government for the past 20 years has been to carefully avoid provoking any of them needlessly, even the ones that seem to have little power. The INTERNATIONAL approach is similar. Through careful diplomacy, Russia manages to be on good terms with every country in the conflict-riven Middle East, despite being itself involved in many of the conflicts.

Listen to many of the major statements that the top officials make, from Putin on down, and it’s variations of “there are many ways to develop, there is no single model that is a good fit for all societies - we don’t agree with yours but we’ll leave you to it, but don’t try imposing your system on us or on anyone else who doesn’t want it.”

And while many countries respond well to that, the response from American and EU officials is typically something like “no, there ARE universal values that are good for everyone, WE represent them and spread them, YOU are standing against them, and we WILL keep trying to make you and everyone else behave in the one true, righteous way.”

So from this general impression, I’m getting the sense that at the state level, the Russian state is at Stage 5 (perhaps the Chinese state as well, though they’re more risk-averse so it’s harder to tell), while the West is at one of the lower stages, either Stage 3 or Stage 4.

If the above conversation was happening between two people, is that not the conclusion we would come to?

LW as of June 5th, 2013

David Chapman 2021-11-21

Commenting on: Pop Bayesianism: cruder than I thought?

This post was from June 5th, 2013 (as it says in small print in the sidebar). I believe that it is an accurate assessment of where LW was at then.

Since then, the Berkeley rationalist community has grown up in many ways, which I applaud and admire. One is that the leadership explicitly recognized that Bayes does not have the magical powers they once attributed to it.

My explanation

Dave Kunard 2021-11-21

Commenting on: Pop Bayesianism: cruder than I thought?

As someone who is part of this “community”, I’d note there is another term that is sometimes used, I believe coined by Scott Alexander: “x-rationality.” I like this better. You are focusing on the Bayesian part, which is admittedly a particular obsession of Yudkowsky’s. Yudkowsky is eccentric, and the cult issues have been thoroughly discussed. LW has changed over time as more diverse thinkers have become influential. While LW has been incredibly influential, it is not the whole of “x-rationalism”; it is a sort of intellectual sub-culture that includes things like Scott Alexander’s blogs.

I would say the main unifying principle is the desire to be “less wrong.”

I distinguish this from “vulgar rationality” in the sense that the idea is to improve one’s own rationality, not to “shoot fish in a barrel”, which is common to many online groups that define themselves as rationalist. There are common interests, but there need not be universal agreement: good-faith conversations, understanding heuristics, and a tendency to prefer choice utilitarianism (which I don’t share). There are specific interests in AI, existential risk, and how to update and calibrate one’s beliefs.

I feel like you are taking a very superficial view of this, literally looking at bayes equation, and seeing that as the whole of what it’s about.

I am a member of two x-rationalist groups, and I literally can’t remember the last time someone discussed Bayes or Bayesian probability. For EY, though, it was a key insight that did produce a “religious” transformational experience.

What I think you also miss is that you look through the lens of your own model and see these things in primarily philosophical terms, and perhaps don’t see the “what’s it like inside the equation” subjective power of what you would call “eternalism”, which doesn’t necessarily have the philosophical content you ascribe to it.

I would recommend looking into the phenomenon of temporal lobe epileptic and sub-epileptic experiences and how they can induce “religious conversions”, that is, “religion” which fits neatly into neither the “choiceless” nor the “choice” mode. Its nature is more nebulous.

I think you have a lot of great insights, but you should perhaps apply the concept of “nebulosity” to your own models. They are useful modalities, but as you point out, not absolute truth nor devoid of meaning. :)

I HAVE THE MAGIC. THE CHAOS MAGIC YOU DESIRE. THE FRONTIER OF THE EVOLUTION OF THE SOUL

Martijn Struijs 2021-11-15

Commenting on: A bridge to meta-rationality vs. civilizational collapse

Let us do some magic. I mean, MTG: Pick a card. ANY CARD. Now.. https://scryfall.com/card/aer/47/take-into-custody Look at the back. This is hard online, so let me help https://www.wayfair.com/decor-pillows/pdp/vandor-llc-magic-the-gathering-card-back-throw-ubdv1336.html .

Now. 5 colors. 5 elements of personality. Corresponding to the 5 levels of Kegan. Let’s start. I’m level 5:

Stage 1 — Impulsive mind (early childhood)
Stage 2 — Imperial mind (adolescence, 6% of adult population)
Stage 3 — Socialized mind (58% of the adult population)
Stage 4 — Self-Authoring mind (35% of the adult population)
Stage 5 — Self-Transforming mind (1% of the adult population)

I’m not even 1%. I’m 0.01%… I’m not that special… there’s just too many humans. Why am I top percentile? I am a student, cum laude in high school with 2 extra subjects, cum laude in 2 EU bachelors (math + CS) in 3 years, I saw that AI was silly before I began doing a PhD in it, I had a masters in CS with cum laude and a free SODA paper, I’ve published a bit, and made basically all the mistakes, yet recently I saw the light and did 15 years of lost development (I blame the high modernists. They ruined not just me, but the entirety of youth. My education… is not forced. It is nebulous. It makes people think with their body, not just their mind (If you think this is silly, what do you believe about homunculi? The myth of the brain as the only part of your body that thinks is just the homunculus all over again....), it is scalable, it actually has existed for 30 years without anyone finding it, and is mostly used to make the inventors rich. In every game of chance, it is the house that wins. Seeing the potential. I… am not the start. I’m simply the bleeding edge in the quest for the soul. I… had the STEM nihilistic depression in high school. The doctors diagnosed me, correctly, with being a nerd. However, they didn’t see nor think that I should understand myself. And my creativity. The latter is what saved me… From the aftermath. Of terrible decision. I… simply didn’t learn anything important at the university. I mean, in class. I.... simply couldn’t live an unascended life. So I had to keep banging on the door even if it didn’t budge. And eventually, it opened, after a very primal and personal emotional experience. The part where it all began…) Perhaps there is a level 6? I mean, what would that even be? Perhaps it is someone who is not only fluid in the mind, but also in the stages. I say that my naiveté, tribalism, rationality, social mind, societal mind, transforming mind… are all useful aspects, instrumentally at least. Why limit your mind? I’m level 6.
The level people couldn’t even imagine. Fluid over all 5 levels. Impossible to perceive, unless you look very carefully....

Of course, the master of 6 is the best teacher for 5. And like any good teacher, I have a scaffold. The 5 colours. https://eu-browse.startpage.com/av/anon-image?piurl=https%3A%2F%2Fmiro.medium.com%2Fmax%2F2000%2F1%2AsQcpHoGupN6w7l9NzthoQg.png&sp=1636971235T170c5b29ff9169160e84b257103bbd969ff4876588502b0089dbd1f9322e97e3 Fucking magic

This will be all. The rest… Is an exercise for the reader. No need to thank me. Your development is thanks enough. Good bye. Don’t let me tell you about magic. Ask me about magic.

Yudkowsky and the mythical OTR

Daniel 2021-11-13

Commenting on: Rationality, rationalism, and alternatives

I don’t think Yudkowsky necessarily believes in a One True Rationality.

However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow. If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately.

But Yudkowsky is primarily an AGI researcher, focusing on GOFAI presumably due to the reasonable assumption that one can hardly align an implicit consequentialist if one doesn’t know how to align an explicit consequentialist.

On the other hand, maybe Yudkowsky occasionally believes in a One True Rationality, as long as you qualify it by “but it doesn’t exist yet,” due to failure to keep apart his FAI research and “the art of human rationality”.

Yudkowskyan rationalism cannot claim that meta-rationality is important, though, since it is mostly unable to distinguish it from rationality.
But what I would most want is discussion of whether 5: “Conform to the criterion!” is true, while agreeing that 1&2 are true and that 4,6&7 are false.

To that end I will quote 12 virtues of rationality:

As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.

The bolded text I take to mean that you should conform to the criterion (i.e. 5), with the criterion being probability theory, which you admitted is true as far as it goes.
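The “exactly the right amount” in that passage is ordinary Bayes’ rule. As a minimal sketch (the numbers here are hypothetical, not anything from the Sequences):

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E).
# Hypothetical numbers: prior belief 0.3; the evidence is four
# times as likely if the hypothesis is true as if it is false.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Shift a belief by 'exactly the right amount' for one piece of evidence."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

posterior = update(0.3, 0.8, 0.2)
print(round(posterior, 3))  # 0.632
```

Whether studying this equation actually confers the precision the passage promises is, of course, exactly what is in dispute.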

(This is still half a draft, but I decided to post it before I ran out of steam and left it as a draft, in the end leaving what I’d written to disappear when I shut off my computer. Sorry if I’m rambly.)

"Jam?" and an attempt at a rational language

Daniel 2021-11-13

Commenting on: The purpose of meaning

In lojban “Jam?” would be translated as <rutpesxu au pei>, meaning
“(something) is-a-jam-made-from-fruit(s) (unspecified_fruit(s)) -of-species (unspecified) [desire] [question]”, or, more formally, when something is clearly requested, <rutpesxu e'o pei> (e’o means [request]).
But in the CEO case, it would instead be translated as maybe <lo rutpesxu cu mo>, meaning “some jam-made-from-fruit(s)-belonging-to-a-species [main-verb-delimiter] [is-what?]”.

Point is, lojban seems to handle informal language games in a pretty formal and functional way (in the hands of a competent speaker), using, well, all of lojban, but most relevantly (1) modals and stuff from chapter 13 of CLL, and (2) speech act theory applied to logic.

What does that mean for your conclusions on this page? :-)

Decision theory, maybe

David Chapman 2021-10-22

Commenting on: Rationality, rationalism, and alternatives

I’ve found Yudkowsky’s writing vague and inconsistent, so it’s hard to know what his position would be. However, he seems to hold that decision theory† is the One True Rationality, in which case he’d agree with 5-7 as well.

In case this wasn’t clear, the point of “meta-rationalism” is “meta-rationality is important.” Denial of 4-7 is not a definition of meta-rationalism, nor a significant part of it.

† Some version of decision theory that doesn’t exist yet. He’s tried to fix bugs in the standard version(s) but thinks there are unsolved problems still.

Yudkowskyan rationalism

Daniel 2021-10-21

Commenting on: Rationality, rationalism, and alternatives

As I understand it, Yudkowskyan rationalism disagrees with meta-rationalism in claiming that
4. You should conform to the criteria as neatly as you can.
is true, and of the 7 claims, it only disagrees with meta-rationalism about that claim.
(By Yudkowskyan rationalism I mean the “LW rationalism” expounded in the Sequences.)

Further reading

David Chapman 2021-09-27

Commenting on: Positive and logical

I’m glad it has been helpful!

There’s some rationality/meta-rationality readings recommended in a section of the “Further reading” page on Meaningness.

Probably this site should have its own equivalent!

Meta-Rationality reading list

Muhammad 2021-09-27

Commenting on: Positive and logical

Hello Dr. Chapman,

Thanks for writing this fantastic book. I really needed this as I was going through a phase of post-rationalist nihilism after leaving LessWrong. I was wondering if you have a meta-rationality reading list for exploring these topics further?

Regards,
Muhammad

Awakening from the Meaning Crisis

Valeria 2021-09-18

Commenting on: Ignorant, irrelevant, and inscrutable

“Meta-rationalists have been promising a coherent account of meaning for nearly a century. Somehow, we’ve never delivered.”

Besides Chapman’s wonderful work, the work of Prof. John Vervaeke, including how he presented it in his “Awakening from the Meaning Crisis” series, seems to be a huge step in that direction (especially the second half of the series).

Preview here: https://www.youtube.com/watch?v=ncd6q9uIEdw

Playlist here:
https://www.youtube.com/playlist?list=PLND1JCRq8Vuh3f0P5qjrSdb5eC1ZfZwWJ

Building the bridge

David Chapman 2021-08-16

Commenting on: A bridge to meta-rationality vs. civilizational collapse

Brent, these are excellent comments/questions.

at least if you drop into an article without reading in order.

Yes, this material, much more than anything else I write, needs to be read sequentially. I’m not sure how to convey that in a web presentation.

it is easy to mistakenly think that the problem is rationality, as opposed to rationalism.

Yes, this essay was written when I was first thinking about this material, before I began writing the book-length version, and I wasn’t as clear about it as I hope my more recent treatments are. Maybe they still don’t emphasize it enough.

have fluid thinkers established sufficient networks to recognize one another? Do they have a common language for communicating? Are they sufficiently organized?

No to all of these; and this is something I hope to help bring about. (But I have limited bandwidth and too many projects, and so on.) Writing here is all I’ve had time for so far, but I do want to develop community, trainings, and so on.

Stage 4 and 4.5 people (I think I am the latter) need mentoring. That would be the bridge you are talking about, wouldn’t it?

Yes; this stuff is best transmitted by apprenticeship. That’s inherently difficult to scale quickly, though. So also written material, video lectures, group courses, etc., which may be less effective but do scale better.

Despite its failings, rationalism—as a colloquial, unrigorous idea that system thinking is essential to solving problems, and even that “problem” is the best way to conceptualize a desire for change—at least offers a positive, tangible, realistic reward. If civilization is in danger, most rationalists still believe that rationalism is the means to avert that danger

I agree with that… getting more people from 3 to 4 is probably more important, and much easier, than getting people from 4 to 5. On the other hand, there’s lots of existing institutions and resources for the former task (it’s what universities are supposed to do, in part, in theory), and nearly no one working on the latter. So I’m taking the high-risk, potentially high-payoff option here.

it has to support a high traffic flow, sufficient to make a material difference to the world. It’s really difficult to see how it scales, especially in light of the war for attention and the finite human lifespan.

Yes… but we can only do as much as we can, and hope it helps enough!

Is the difficulty of building this bridge the Great Filter?

I had not thought of that! I am not sure whether to be amused or horrified…

Rationalism/rationality; bridge-building

Brent 2021-08-16

Commenting on: A bridge to meta-rationality vs. civilizational collapse

I briefly got confused when you wrote “One needs to become disillusioned and disappointed with rationalism”, even though it was introduced as a specific concept only one paragraph earlier. There is some amount of disorienting recursion in the design of this meta-book, at least if you drop into an article without reading in order.

The confusion was that it is easy to mistakenly think that the problem is rationality, as opposed to rationalism. By the first I mean just the technique of reasoned argument, in contrast to the promise that rational thinking is the magic solution to all of life’s problems.

Anyway, maybe the problem is with the reader, not the text, but there are places where the text seems to assume that the reader already knows everything in the text, which, again, is a bit disorienting.

Hyper-text is great, but sometimes, you have to encounter ideas in the right order.

The call for more help, for those ready to transition, makes me wonder: have fluid thinkers established sufficient networks to recognize one another? Do they have a common language for communicating? Are they sufficiently organized? It seems to be a critical super-power when trying to achieve collective goals. Not to mention actually choosing and defining those goals. But definitely for growing.

Stage 4 and 4.5 people (I think I am the latter) need mentoring. That would be the bridge you are talking about, wouldn’t it? But most of us are not looking for ethical mentoring. Most are unsure how to get technical mentoring, outside academia. We mostly read a lot.

The chance of stumbling on insights seems low, and highly accidental. How many people in STEM decide they need to learn more philosophy because this rationalism thing isn’t reliably making the world better? There are a lot of impediments, not least of which the general opinion of philosophy as being a waste of time.

This brings me back to rationalism vs rationality. Despite its failings, rationalism—as a colloquial, unrigorous idea that system thinking is essential to solving problems, and even that “problem” is the best way to conceptualize a desire for change—at least offers a positive, tangible, realistic reward. If civilization is in danger, most rationalists still believe that rationalism is the means to avert that danger (despite the fact that rationalism mostly created that danger while trying to deal with other problems created by systems 2 and 3).

The bridge to stage 5 not only has to exist: it has to support a high traffic flow, sufficient to make a material difference to the world. It’s really difficult to see how it scales, especially in light of the war for attention and the finite human lifespan. The adoption rate is too low, and the attrition rate is too high.

Is the difficulty of building this bridge the Great Filter?

Sorry if any of this is redundant, meandering, or otherwise time-wasting. Thanks for your efforts, as always.

More on this subject, please

SJK 2021-07-18

Commenting on: Pop Bayesianism: cruder than I thought?

Excellent summary. Another perspective on the potential harm of this meme: religious Bayesians are easy to manipulate by massaging their first impression of a subject, since the practice enshrines initial bias by allowing only a simplistic and predictable family of recalibrations.
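To make that manipulation mechanism concrete, here is a hedged sketch (numbers invented for illustration): two Bayesian agents see the same five pieces of evidence, each with likelihood ratio 2 in favor of the hypothesis, but one agent had its first impression massaged down to a prior of 0.01:

```python
# Posterior after n observations, each carrying the same likelihood ratio.
def posterior_after(prior, likelihood_ratio, n_observations):
    odds = prior / (1 - prior) * likelihood_ratio ** n_observations
    return odds / (1 + odds)

neutral = posterior_after(0.5, 2, 5)   # unbiased first impression
seeded = posterior_after(0.01, 2, 5)   # massaged first impression
print(round(neutral, 2), round(seeded, 2))  # 0.97 0.24
```

Both agents recalibrate in the same simple, predictable way, so the seeded bias persists through all five updates.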

Thrilled that you're starting to use Orbit

Alejandro 2021-07-18

Commenting on: Bring meta-rationality into your Orbit

I’ve regularly used Anki for the last year inspired by one of Michael’s essays on the topic and despite how difficult it is to identify the effects of spaced repetition for complex topics, I think there are a few notable benefits of the practice. One thing I’ve observed is that the prompts help you interact with the concepts with much more ease, so that you can easily import them in your own reasoning (instead of burdening your memory or failing to come up at all).
Thank you for doing the work of trying to engage further with the reader, and I hope that you consider using spaced repetition for more of your essays.

Stronger claim

FS 2021-07-06

Commenting on: Probabilism

His claim was even stronger than that.

It was not just that science doesn’t use induction, leading him to propose another theory of how science works; his claim was that induction doesn’t happen in any domain at all.

The most elaborate treatment of this view is given in his “Realism and The Aim of Science.”

It is a view that I am wrestling with; if you combine it with Donald Campbell’s evolutionary epistemology, I think we find a strong case for it.

It is a different case against induction from the one David Deutsch makes in his books, which is another interesting response to the problem.

However, I still can’t shake the feeling that some sleight of hand has happened here.

Non-black non-pigments

David Chapman 2021-06-07

Commenting on: When will you go bald?

Thank you very much for pointing this out! Fixed now.

Fixed

David Chapman 2021-06-07

Commenting on: Reasonableness is meaningful activity

Broken link fixed—thank you very much!

very minor nitpick

Juraj 2021-06-07

Commenting on: When will you go bald?

Dichroic coating or holograms don’t use pigments.

Yes, it can be said that the nebulosity argument might apply to the definition of “pigment” here too… but between us engineers, please don’t.

Why is it always either Rota or Feynman that says one of these things?

tekopaul 2021-06-06

Commenting on: How To Think Real Good

I believe Feynman also said something very much along these lines about his own tricks with geometry in Surely You’re Joking, Mr. Feynman.

Popper rejected induction

David Chapman 2021-06-05

Commenting on: Probabilism

Popper explicitly didn’t have a solution to the problem of induction. He said it was insoluble, and that science doesn’t use induction.

I’m guessing that you are thinking of his falsificationism? That is explicitly not a solution to the problem of induction; it’s an alternative theory of how science works.

alternative to induction

FS 2021-06-05

Commenting on: Probabilism

what do you think of Popper’s solution to the problem of induction?

Broken link

David McFadzean 2021-06-05

Commenting on: Reasonableness is meaningful activity

The “chapter on accountability” link needs updating

Narrative Coaching

Danielle Roesmann 2021-05-22

Commenting on: A bridge to meta-rationality vs. civilizational collapse

@C’mon - I’m currently reading Narrative Coaching by David B. Drake, PhD and see some interesting parallels in thinking in meta-rationality. I have a sense that a Narrative Coach could act as a powerful guide if you’re unable to find a mentor.

Chapter 6, How We Change and Transition talks a lot about this “nebulous” state of transition. Perhaps coaching could be the scaffolding between 4.5 and 5?

Mo' betta citings

David Chapman 2021-05-08

Commenting on: Abstract Reasoning as Emergent from Concrete Activity

Thank you! You have increased my subjective well-being by 153+3-2 = 154 utilons!

More seriously, it did seem a bit anomalous that there were so few, and it’s interesting that it’s due to a data bug of some sort.

Slightly better news

Kaj Sotala 2021-05-08

Commenting on: Abstract Reasoning as Emergent from Concrete Activity

All our other papers are available on the web, and have been cited several thousand times. This one seems to have been cited only twice.

If I search Google Scholar with the name of your paper, it gives me two items with that name; one with three citations, but the other with 153 citations. So not quite as good as thousands of cites, but not quite as bad as only two (or three), either!

Propositional to procedural

Kate A 2021-04-15

Commenting on: Acting on the truth

My intuition would actually be to try to reduce propositional to procedural, rather than the other way around. So, more mechanical basic knowledge is the low level of abstraction, and the reasoning is the high level of abstraction.

Probably true isn't much better than absolutely true

Kate A 2021-04-15

Commenting on: How To Think Real Good

I wrote about this on my blog. The thing is that probability tricks you into thinking that you can still change your mind if you find a better theory. But then your evaluation is also performed in terms of probability theory, and your theory of mind is also Bayesian, and now it’s turtles all the way down.

“This is how it goes: the Bayes theorem (and probability theory in general) might not be the one true model, but it probably is because I think so and my mind must be Bayesian so because I think that the Bayes theorem is probably the one true model, because…. Gödel what? As long as you think that the probability theory has the highest probability of all other models you are stuck. Your only chance to get out is the one that is irrational in this framework, probably when you stumble on something that just feels clearly wrong.

The annoying thing is not so much the unprovability; the annoying thing is that it blinds you to everything that doesn’t conform to your theory. It is not a confirmation bias, it is a confirmation loop. Your sources are self-selecting for the ones using Bayesian reasoning. Confirmation bias can be corrected for; this cannot.”
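The “confirmation loop” can be illustrated with a small simulation (an invented setup, not taken from the quoted blog post): a Bayesian agent entertains only two coin hypotheses, neither of which matches the true bias, yet its confidence in the least-wrong one climbs toward certainty, and nothing in the update ever signals “none of the above”:

```python
import random

random.seed(0)
hypotheses = {"fair": 0.5, "biased": 0.9}  # candidate P(heads); neither is true
posterior = {"fair": 0.5, "biased": 0.5}   # even prior over the two models
true_p = 0.6                               # the actual coin, outside the model class

for _ in range(500):
    heads = random.random() < true_p
    for name, p in hypotheses.items():
        posterior[name] *= p if heads else 1 - p
    total = sum(posterior.values())        # renormalize at each step
    posterior = {name: v / total for name, v in posterior.items()}

# The agent ends up nearly certain the coin is fair: confidently wrong,
# with no internal sign that its whole hypothesis space missed the truth.
print(posterior["fair"] > 0.99)
```

The failure only becomes visible from outside the framework, e.g. by noticing that the observed frequency fits neither hypothesis, which is the kind of “feels clearly wrong” exit the quote describes.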


Professor/mentors

C'mon 2021-04-02

Commenting on: A bridge to meta-rationality vs. civilizational collapse

Context for reaching it has been created only rarely, idiosyncratically, by exceptional individual mentors

STEM departments do not explicitly go beyond that. However, at least some professors understand the limitations of formal methods and the inherent nebulosity of their subject matter, and may teach that informally. They may also teach some stage 5 cognitive skills informally, implicitly, or by example.

Could you please point out some such mentors and/or professors?

Thank you

Ethnomethodology process in a way looks to me similar to ML

C’mon 2021-04-01

Commenting on: What they don’t teach you at STEM school

we can find the rules by video recording people eating breakfast, and watching carefully, over and over.

That sounds very much like how machine learning works.

No?

Too much!

Kenny 2021-03-21

Commenting on: Bring meta-rationality into your Orbit

I was taking the periodic ‘quizzes’ for several weeks and it was very interesting! I could definitely perceive that my memory, at least of what was being tested in the periodic “reviews”, was improving.

But I’m less sure how helpful this as a general technology. It’s certainly useful if it’s particularly important that one remember particular ‘passwords’. That is useful, sometimes incredibly so – lots of credentials, and concretely useful skills, are ‘gated’ behind an ability to recall specific facts or ideas easily and rapidly.

But I can’t imagine much use beyond that – the time costs of reviewing just one blog post are already way too expensive. I’m not sure how idiosyncratic this is, for me, or others generally. But I don’t think my ‘budget’ for the reviews would be even an order of magnitude larger, even under perfect conditions.

I think this is something that is very useful, for a specific, fairly narrow, and very focused purpose. (And maybe even just one such purpose could be maintained at a time, with a possible exception for (at least roughly) full-time students or scholars.)

Mentors

Angie France 2021-03-14

Commenting on: A bridge to meta-rationality vs. civilizational collapse

We must create mentors to help others make their way to Stage 5 on the bridge. Our world is greatly lacking in mentors in general.

Wrongly assuming Romanticism

David Chapman 2021-02-03

Commenting on: The ethnomethodological flip

Yes; unsurprisingly this piece is well-known to anyone doing ethnomethodology. There’s a discussion in Michael Lynch’s Scientific practice and ordinary action, for instance (pp. 26-28).

Gellner made the common mistake of assuming that any critique of rationalism must be the Romantic anti-rational critique: that rationalism neglects critical aspects of subjectivity, which Romanticism valorizes.

This was exactly wrong. If anything, ethnomethodology could be criticized for the opposite: rigorous refusal to deal with subjectivity at all (on the grounds that, as observers, we have no access to it). It is more similar to behaviorism than Romanticism in this respect.

I frequently run into the same misunderstanding of my own work (e.g. in twitter discussions with self-described rationalists).

“Ignorant, irrelevant, and inscrutable” discusses this. Ethno comes in the “inscrutable” category (i.e. meta-rational).

Gellner critique

joe 2021-02-02

Commenting on: The ethnomethodological flip

Ever seen this strange sociological critique of ethnomethodology by Ernest Gellner (in 1975)? He’s sniffing at it as a kind of Californian self-obsessed conformist hippie fad. I found it rather confusing (though funny!), but the last couple of pages also sound pretty prescient—a kind of negatively characterized fluid mode, “DIY subjectivity”, reductio ad solipsism. Quite disorienting. (Started reading Gellner via Cosma Shalizi fwiw, he of the famously impeccable taste… http://bactra.org/notebooks/gellner.html )

Here’s the piece:
http://tucnak.fsv.cuni.cz/~hajek/ModerniSgTeorie/literatura/etnometodologie/gellner-ethnomethodology.pdf

Feedback

Kenny 2021-02-01

Commenting on: Bring meta-rationality into your Orbit

It was overall a little jarring – I genuinely enjoy reading your writing and answering questions in between was noticeably different.

I’m not ‘sold’ on it, but I’m open to continue testing it!

Like all good games nowadays, I like the latitude it allows you – the person reading the prompts – in deciding whether to record a prompt ‘forgotten’ or ‘remembered’. I definitely played with different ‘personal interpretations’ of what those two possibilities might mean for a given prompt. I think the ambiguity (nebulosity) was rather delicious for your work in particular. I ended up grading myself as ‘forgetting’ any prompts that I couldn’t explain in my own words or with my own examples, and any for which I couldn’t recall your specific terms or phrases.

I’m definitely curious to play with the longer-term prompts.

It’d also be interesting to feed other prompts, or pre-made sets of them for various topics, into a kind of big ‘prompt stew’ with which I could regularly challenge myself.

Thanks!

Kenny 2021-02-01

Commenting on: Maps, the territory, and meta-rationality

Thanks – both for this post and the ‘novel ontologies’ you’ve provided everyone. I’ve had a lot of use of ‘reasonableness’ and ‘meta-rationality’ so far!

Understanding the issue but not having exactly the "right" answer

David Chapman 2021-01-31

Commenting on: Bring meta-rationality into your Orbit

Thank you very much for this feedback! Glad you enjoyed it overall.

I had similar difficulties sometimes when first using Orbit with another author’s prompts.

I hope that with more experience in writing prompts we’ll be able to minimize this sort of problem.