Comments on “Ignorant, irrelevant, and inscrutable”


Scrutinising Meta-rationality

Ssica3003 2017-07-24

I’d certainly enjoy helping with demystifying meta-rational thought.

Visit my site for details of a talky meetup I have at my house in London, UK to discuss these things.

Beyond rationalism

Mimetic Arbitrage 2017-07-24

Hey David,

(I think I submitted multiple times… due to too many typos from not being fully awake yet. Please delete the previous submissions.)
This is my first time commenting, but I’ve been reading your blogs (this one, Buddhism for Vampires, Vividness, and Approaching Aro) for nearly a year now. Thank you for helping me to get out of a purely rationalist mindset and deepen my practice of meditation. Since then, I have written extensively on meta-rationalism in a currently unpublished form, but I do hope to distill it down to a book. I can’t find your email, but I would like you to look at parts of it in private. This is one of the few blog posts related to the topic that I have decided to throw onto the internet.

Since starting to read your posts, I’ve delved deeper into empiricism and existentialism. I believe that Buddhism is existentialism in a different flavor, and that existentialism is a continuation of empiricism. This is most evident when you examine Lev Shestov, the philosopher whose view is most similar to that of Meaningness.
Start with this video. Gregory Sadler’s whole existentialist playlist is excellent. Another existentialist, Rainer Maria Rilke, is the favorite poet of Buddhist scholar Joanna Macy, who was also influenced by Gregory Bateson’s Steps to an Ecology of Mind, which you have also previously mentioned.

Shestov can be considered a “radical empiricist”, “existentialist”, and “anti-rationalist”. He is the dharma successor of Kierkegaard, and Nassim Taleb’s previous incarnation as a Russian Jew 100 years earlier. (Excuse my blasphemous humor, or not.) Both Kierkegaard and Nietzsche expressed an idea along these lines: that faith is the greatest passion, the willingness to make the same choice over and over again through infinite cycles of samsara.

Shestov, like Taleb, used skeptical empiricism to advocate for faith, saying that All Things Are Possible, which is the title of one of his books.
In All Things Are Possible, Part II, Aphorism 3, he explained the problem of induction, made famous by Hume. It aligns very well with the postmodernist and Zen critique of rationalism: that it is too restrictive and unimaginative, conveying a nihilist lack of hope.

His aphorism 23 in Part I dismantled metaphysics, which really has the same problem as rationalism. He says,

The first assumption of all metaphysics is, that by dialectic development of any concept a whole system can be evolved. Of course the initial concept, the a priori, is generally unsound, so there is no need to mention the deductions. But since it is very difficult in the realm of abstract thought to distinguish a lie from truth, metaphysical systems often have a very convincing appearance. The chief defect only appears incidentally, when the taste for dialectic play becomes blunted in man, as it did in Turgenev towards the end of his life, so that he realises the uselessness of philosophical systems. It is related that a famous mathematician, after hearing a musical symphony to the end, inquired, “What does it prove?” Of course, it proves nothing, except that the mathematician had no taste for music. And to him who has no taste for dialectics, metaphysics can prove nothing, either. Therefore, those who are interested in the success of metaphysics must always encourage the opinion that a taste for dialectics is a high distinction in a man, proving the loftiness of his soul.

This paragraph sums up nicely why, while systems are useful contextually, we must go meta-systematic to truly innovate.

My suggestion would be to expand upon what Hume, Kierkegaard, Shestov, Jordan Peterson, etc. have to say, rather than getting caught up in reacting against rationalism. As Deng Xiaoping said, “It doesn’t matter if it’s a black cat or a white cat; it’s a good cat if it can catch mice.” Rationalism is just a cat of an arbitrary color. Rationalists can have their black cat. That doesn’t stop us from getting an army of cats of all the colors of the rainbow.

Good luck with your quest; sometimes I dabble in it too.

Hewitt

anders 2017-07-24

Speaking of inscrutability: a couple of years ago, Carl Hewitt showed up on the Lambda the Ultimate blog, and the general consensus was that no one could understand what he was saying. It is unsettling, because he might be saying something really interesting, but I can’t be certain. http://lambda-the-ultimate.org/node/4784

Through the looking-glass

Dan 2017-07-24

(Apologies in advance if this is more grumpy than it needs to be.)

If it’s obvious, it seems not worth explaining

This cuts both ways, it seems.

For my part, as a STEM person who fits your “stage 4.5” pretty well, I thought all the problems you lay out in this post were totally obvious to everyone concerned, and have spent years wondering why you keep not getting around to explaining WTF you’re talking about. (“If”, that is, “you have anything valid to say—which we do not concede.” I still think you might, but I just can’t tell!) This seems to be the consensus of most SSC commenters as well, for instance.

So I’m glad you know these things now, but it’s annoying that it took this long, and it’s extra annoying that it never even occurred to me that you didn’t know. Perhaps we rationalist readers needed to do a better job explaining our difficulties. But, we did try:

As I mentioned, I’m not sure I have anything useful to say about defending rationalism against irrationalism or anti-rationalism. However, I want to suggest two ways to defend rationalism against meta-rational criticism.

These suggestions aren’t useful either, because everyone already uses them and they don’t work. In particular, they’re the same arguments the LessWrong crowd used to use on you all the time. They’re great for convincing rationalists to ignore meta-rationalism, but (until now!) not so great for convincing meta-rationalists to be more scrutable.

So… As part of your current core target audience, I really want to just tell you exactly what you need to say to get this stuff across to me and others like me—but since I still don’t actually know what “metarationality” is supposed to mean, I can’t! And it seems to me that you also don’t have a good feel for what needs to be explained to us non-meta rationalists. How can we bridge this gap?

(I’m reminded of Dan Ingram’s observation that “highly enlightened beings” by his standards are usually also highly terrible at explaining things to beginners, because they’ve totally forgotten what it’s even like for the basics to be non-obvious.)

Antirationality is not about reason

mtraven 2017-07-24

I have some sympathy for the hippie countercultural critique of rationalism, as long as it’s understood that the enemy there is not rational thought per se but malignant systems based on narrow ideas of rationality – like capitalism or the McNamara defense department.

Oddly enough this is completely compatible with the fact that the current-day rationalist movement also believes that malignant rationality is a problem, in fact they think it’s the world’s biggest problem (when embodied in some future AI).

Gregory Bateson probably had the most sensible articulation of the view that conscious rationality was dangerous. His solution was holistic, biosystems-level thinking. The rationalists on the other hand seem to think malign systems can be tamed with mathematical logic. Neither of these seems very promising to me.

More here.

exactly what "metarationality" is supposed to mean

David Chapman 2017-07-24

what you need to say to get this stuff across to me and others like me

Yeah… well, I’m working on it! Have you read “A first lesson in meta-rationality”? Was it helpful at all?

I wrote this post as a prelude to a much longer piece that tries to lay out some of the key ideas as clearly as I can. It’s 10k words so far and maybe half done. 20k words is half the length of a short book. Awkward: much too long for the web, much too short for print. Kindle maybe?

Anyway, it’s hard going, because (as I mentioned) I have only recently noticed that nobody has bothered to do this before, so I have to work out the presentation from scratch. The ideas are all very well-known, decades old, but there’s no good intro anywhere.

they’ve totally forgotten what it’s even like for the basics to be non-obvious

That’s a very interesting analogy… Probably teachers of anything have this experience. I certainly do on the rare occasions when I teach STEM stuff. Like, lexical scoping… I know I had to learn that once, but when someone gets confused by it, I have to do a lot of work to figure out how they could possibly be going wrong.
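For instance, a made-up minimal example (Haskell, just for concreteness) of the sort of thing that trips people up:

```haskell
x :: Int
x = 100

-- The lambda captures the *parameter* x, which lexically shadows
-- the top-level x above.
makeAdder :: Int -> (Int -> Int)
makeAdder x = \y -> x + y

addX :: Int -> Int
addX = makeAdder 1

main :: IO ()
main = print (addX 5)   -- prints 6, not 105
```

Obvious once you see it; weirdly hard to explain to someone who doesn’t.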

Carl Hewitt

David Chapman 2017-07-24

A couple years ago Carl Hewitt showed up on Lambda The Ultimate blog and the general consensus was that no one could understand what he was saying.

When I showed up at the MIT AI Lab in 1978, he had already been extensively incomprehensible there for more than a decade. It was some people’s opinion that he actually had nothing to say, but lots of important work sprang from other people trying to interpret what may or may not have been gibberish as profound.

Most of religion is like that… it’s funny when it happens in computer science too!

Hewitt

mtraven 2017-07-24

Hewitt is a pretty strange dude with a mix of good and bad intellectual instincts, IMO. I didn’t have much to do with him at MIT but I’ve spent some time with him now that we are both on the west coast, including participating in workshops on Inconsistency Robustness that he organized.

Actor theory was a great idea. Inconsistency Robustness was also a pretty good topic, I thought: it’s obviously true that people and machines have to deal with a variety of inconsistencies, and, due to the influence of mathematical logic, which does not tolerate inconsistency, CS/AI doesn’t have very good tools for doing that.

These workshops attracted an eclectic bunch, mostly very senior people of the same vintage as Carl. I presented a paper on the need for inconsistency in scientific ontologies. There were a couple of papers from the legal AI field, which interested me because legal reasoning works by applying possibly inconsistent cases, and so seemed a better model for human reasoning than logic.

But for some reason Carl, who organized the workshop, spent half of it presenting the stuff cited above on consistency and how Gödel was wrong, negating the point of the workshop (although, I realize now, maybe he was embodying it by being self-inconsistent).

First lessons

Dan 2017-07-24
Have you read "A first lesson in meta-rationality"? Was it helpful at all?

I have, and just read it again. No, it’s not very helpful. I struggle to explain why!

From the rest of your writing, it seems that there is supposed to be something called “meta-systematic thinking” which in some situations would give different results (better results!) than the “systematic thinking” that STEM people usually use. That sounds exciting, and I’d love to know what the difference is! I’d also like to decide for myself whether the results really are better.

But “A first lesson” doesn’t help with that: you’re using Bongard problems as a first example precisely because STEM people already know how to think in the necessary way. They don’t clarify why we need a meta/non-meta distinction.

Also, the single example isn’t enough for me to generalize. Raven’s Progressive Matrices are similar enough to Bongard problems that I feel confident calling them “meta-rational” as well. But once I venture out even as far as crossword puzzles and cryptograms I’m already unsure what counts.

It occurs to me that the very concept of “meta-rationality” might be meta-rational. That would be annoying.

I wrote this post as a prelude to a much longer piece that tries to lay out some of the key ideas... maybe half done.

Hooray! I look forward to it.

meta rationality

anders 2017-07-24

The distinction, I think, is that rationality gets you correct answers to well-defined problems, such as what route I should take to get to work most quickly or with the least gas consumption.
Meta-rationality is what you use to get a problem to that point. For example, when I am formulating the problem, I ask myself questions like: should I carpool with my coworker? It adds some time to the route but saves on gas money. He likes to smoke; is saving money worth some discomfort and a possible health risk? (What units should I quantify in?) Should I ask him not to smoke, and how much would that favor weigh on our relationship? Should I get a closer job instead? What are the tradeoffs between pay and distance that I’m willing to make? Etc.

Meta-rationality is what I use to decide which questions will be included in the problem I set up and which will be left out.
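To make that concrete, here’s a toy sketch (Haskell, with made-up names and numbers). Once the problem is pinned down, the rational part is a one-liner; every interesting decision is buried in what the cost function counts.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Which options exist, and which factors count (time, gas money),
-- were decided meta-rationally before this program could be written.
-- The smoke, the relationship, and switching careers got left out.
data Option = Drive | Carpool | CloserJob deriving Show

cost :: Option -> Double   -- made-up units: minutes plus gas dollars
cost Drive     = 40
cost Carpool   = 48
cost CloserJob = 25

best :: Option
best = minimumBy (comparing cost) [Drive, Carpool, CloserJob]

main :: IO ()
main = print best
```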

I think another thing I've

Peter 2017-07-25

I think another thing I’ve got from this site - all this talk about rationality and systems and so on has a lot to do with justification. With science: it’s not enough for the average working scientist to just discover something, they have to publish, which means getting past peer review. This means you can’t just try to publish something true, you have to publish something justifiable - even if your discovery was given to you by angels, you still need evidence, often conventionally-acceptable evidence such as p-values or whatever. Likewise the peer reviewers have to justify their conclusions, and there’s limits to what justifications they can use - a peer reviewer can’t say, “oh, this paper is obviously by so-and-so, and so-and-so is a bit of a laughing stock really, do not publish”, even if it’s true. Not if they want to be taken seriously as a peer reviewer ever again.

I think the “pre-systematic mode” means situations where if you have the right status you can just assert stuff that sounds truthy and it will be taken seriously. “Systematic” starts with starting to get serious about reasoning and evidence, as things go along more and more things come under the sway of the systems, systems get joined together, situations where you can’t use the systems look less and less like the default and more and more like anomalies that can and should be abolished. The ultimate dream: unite everything into one big system, covering everything, and get it accepted by everyone, and there will be no more interminable arguments, where disagreements about the case in point degenerate into disagreements about the meanings of words, rules of evidence, background principles, etc. and every attempt to resolve things gets worse.

At some point the realisation comes that this Ain’t Gonna Happen (4.5? Am I understanding this right?), that even if there is a fundamental truth to things, too much of that truth is not humanly accessible, and we’re left with a big complicated heap of systems which aren’t completely unifiable, not with our capabilities anyway. The “meta-rational turned irrational” thing is to dynamite the whole heap and go back to asserting truthy-sounding things.

Doing a chemistry degree is quite instructive. There’s this process that starts in school where they teach you deeper and deeper theories, until you get to second-year quantum chemistry and you think “Yes! This is The Answer, everything else was leading up to this” - and then you learn that the maths is intractable for anything other than toy systems, and in the third year you learn about a zillion and one dodgy-but-tractable approximations. Also, all of that stuff that was building up to quantum chem is still stuff you can argue from when trying to get something published… under the right circumstances. If you go into research, there’s a good chance you’ll end up in a subfield where there are conventional standards of what’s acceptable for publication, and little meta-rationality is required. But every now and again someone tries to do something novel - perhaps something that combines two areas - and the conventional standards don’t exist yet. Doing such work - and more importantly, publishing and reviewing such work - requires meta-rationality (also: fraught arguments with journal editors). At least if I understand it right? Do I understand it right?

Is scrutability possible?

Gordon Worley 2017-07-25

I might write a more detailed response along these lines depending on where my thinking takes me, but I’ve previously thought about this issue and after thinking about it more since reading this yesterday it still seems to me that meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.

In fairness, this is a problem for rationality too, because it can’t really explain itself in terms of pre-rationality, and from what I can tell we don’t actually know that well how to teach rationality either. STEM education mostly teaches some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress up their thoughts in the language of rationality, and this is specifically what projects like LessWrong are all about: getting people to at least be actually rational rather than pre-rational in rationalist garb.

But even with CFAR and other efforts, LW seems to be only marginally more successful than most. I know a lot of LW/CFAR folks who have read, written, and thought about rationality a lot, and they still struggle with many of the basics: not only adopting the rationalist worldview, but even just ceasing to use the pre-rationalist worldview and noticing when they don’t understand something. To be fair, marginal success is all LW needed to achieve to satisfy its goal of producing a supply of people capable of doing AI safety research, but I think it’s telling that even a project so directed at making rationality learnable has been only marginally successful, and, from what I can tell, not by making rationality scrutable but by creating lots of opportunities for enlightenment.

Given that we don’t even have a good model of how to make rationality truly scrutable, I’m not sure we can really hope to make meta-rationality scrutable. What seems to me more likely is that we can work to find ways of not explaining meta-rationality but training people into it. Of course this is already what you’re doing with Meaningness, but it’s also for this reason I’m not sure we can do more than what Meaningness has so far been working to accomplish.

Teaching rationality

anders 2017-07-25

Well, the basic problem with applying rationalism to everyday life is that non-rationality is very inconsistency-robust, while rationalism very much… isn’t. Non-rationality is perfect for acting in a world with illegible and unknowable constraints, because tradition doesn’t require knowable reasons (hence the joke in Fiddler on the Roof). Rationalism is perfect for acting in a world with enumerable and quantifiable constraints. Rationalism is an acid that dissolves every support except explicit reasons.

Most people react to the suggestion that they rationally regulate their lives by recoiling in horror, or by quietly not complying. That is because people tacitly know that they can’t even enumerate the constraints that go into making a bowl of cereal, so how the hell could they expect to correctly run the rest of their lives?

Rationality is very useful because, as an acid, it can dissolve things that shouldn’t be there, leaving room for something new. A lot of therapy is rationalizing patterns of behavior and thought, and removing or replacing things whose reasons are outdated. So whenever you CAN apply rationality, you should, because it generalizes far better: it allows systematic planning with the confidence that the plan depends only on the premises and will not mystically fail due to the position of Mars.

Basically, I don’t think people have any particular difficulty learning to be instrumentally rational when the need arises. But systematic rationalism is a time-intensive hobby (that happens to be MY hobby) that doesn’t satisfy an immediate need for most people, and so they are understandably reluctant to convert.

Most LW AI safety stuff I have read misses the mark: it correctly diagnoses problems that rationalism has, but misses that you shouldn’t (and can’t) build a general AI based on rationalism. (Confession: I haven’t read very much AI safety stuff.)

This is probably an illusion

Kaj Sotala 2017-07-26

This is probably an illusion of memory. The transition occurs only after years of accumulating bits of insight into the relationship between pattern and nebulosity, language and reality, math and science, rationality and thought.

Reminds me of the Burrito Tutorial Fallacy: https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/

The heart of the matter is that people begin with the concrete, and move to the abstract. Humans are very good at pattern recognition, so this is a natural progression. By examining concrete objects in detail, one begins to notice similarities and patterns, until one comes to understand on a more abstract, intuitive level. This is why it’s such good pedagogical practice to demonstrate examples of concepts you are trying to teach. It’s particularly important to note that this process doesn’t change even when one is presented with the abstraction up front! For example, when presented with a mathematical definition for the first time, most people (me included) don’t “get it” immediately: it is only after examining some specific instances of the definition, and working through the implications of the definition in detail, that one begins to appreciate the definition and gain an understanding of what it “really says.”

Unfortunately, there is a whole cottage industry of monad tutorials that get this wrong. To see what I mean, imagine the following scenario: Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.

But now Joe goes and writes a monad tutorial called “Monads are Burritos,” under the well-intentioned but mistaken assumption that if other people read his magical insight, learning about monads will be a snap for them. “Monads are easy,” Joe writes. “Think of them as burritos.” Joe hides all the actual details about types and such because those are scary, and people will learn better if they can avoid all that difficult and confusing stuff. Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads, because now they have to spend a week thinking that monads are burritos and getting utterly confused, and then a week trying to forget about the burrito analogy, before they can actually get down to the business of learning about monads.

Monads are burritos

David Chapman 2017-07-26

That’s a really good (and funny) explanation! Amusingly, it exemplifies the principle it advocates (give both a concrete example and the general definition).

Also, monads are just like burritos…

A related story. Some guy practices intensive meditation for decades in a cave. He gets nowhere. Finally gives up in disgust. He wanders into town with no money. Pulls an overripe mango out of the garbage. He sits on a rock and bites into the mango and immediately attains enlightenment.

Students gather.

“What sort of rock?” they ask. “Exactly how overripe was the mango?”

Rationality Critics Remorse is Running Rampant

Hal Morris 2017-07-27

We may be seeing a bigger wave of Rationality Critics’ Remorse than when Bruno Latour “got (a bit of rationality) religion” due to alarm over creationists and climate change deniers.

It is showing up now on social-epistemology.com, the Social Epistemology Review and Reply Collective, the forum of the Steve Fuller-influenced PoMo-ish incarnation of Social Epistemology (see “Two Kinds of Social Epistemology”, https://social-epistemology.com/2013/07/22/two-kinds-of-social-epistemology-finn-collin/), which has been getting broader input from former critics like Massimo Pigliucci, and an article coauthored by the staunchly pro-science Naomi Oreskes.

The latest example, which reminds me somewhat of this article (maybe an illusion due to near-simultaneity), is “The Bad News About Fake News” by Neil Levy, at https://social-epistemology.com/2017/07/24/the-bad-news-about-fake-news-neil-levy/, which quotes a conversation between John Searle and his friend(!?) Michel Foucault, noting that Foucault talked clearly in their conversations (I presume), so why couldn’t he write that way?

Monads *are* burritos

drossbucket 2017-07-27

Yes, that article is great. You should also read the reply by Mark Dominus where he elaborates on exactly why monads are like burritos.

(note: I know nothing about monads beyond what I learnt there)

Look forward to the 20k word monster post btw :)

Catching up some

David Chapman 2017-07-27

Sorry, I’ve gotten rather behind on replies.

Rationality gets you correct answers to well defined problems… Meta-rationality is what you use to get a problem to that point.

Well put!

the realisation comes that this Ain’t Gonna Happen (4.5?)

Yup… I’d like to understand better how many people reach that point, and of them, how many are thrown into a black depression, versus how many say “well, that’s interesting, I wonder what happens next?”, and how to avoid the former.

They teach you deeper and deeper theories and they get to second year quantum chemistry and you think “Yes! This is The Answer, everything else was leading up to this” and then you learn that the maths is intractable for anything other than toy systems and in the third year you learn about a zillion and one dodgy-but-tractable approximations, also all of that stuff that was building up to quantum chem is still stuff you can argue from when trying to get something published… under the right circumstances.

Funny you mention this—the draft of the much-too-big overview includes exactly this (taking the water bond length as an example). Maybe that’s too complicated and I’ll drop it.

conventional standards don’t exist yet. such work requires meta-rationality

Yes. Just in the past few days, I’ve been thinking about presenting everything in terms of “what do you do when you reach the edge of the map?” When the system runs out, stops being applicable, or at least frays and fuzzes?

the very concept of “meta-rationality” might be meta-rational

meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.

Yes! To actually explain it in general. On the other hand, you can start the presentation with examples (such as the Bongard problems) and, as those become comprehensible, talk increasingly abstractly, generally, and in meta-rational terms.

Or, anyway, that’s my plan, and I hope it works :)

STEM education … tends to do so in a way that ends up domain restricted.

Yes… some university educators understand this problem and try to overcome it, but yeah. I’ve thought a lot about how to teach STEM better, although I’ve deliberately not gone into university teaching. I need clones or parallel lifetimes or something.

can’t even enumerate the constraints that go into making a bowl of cereal

Funny that you use that example—it was my standard one when I was a graduate student. Phil Agre and I had a giant draft paper called “A Computational Theory of Breakfast” which never got close to completion.

Getting the wrong end of the burrito

David Chapman 2017-07-27

I had a completely different mental image for how monads are like burritos. Typically a monad (as used in Haskell, according to my understanding of it, which might be totally wrong because I’ve spent approximately fifteen minutes studying Haskell) is a plain wrapper for some messy stuff, and it’s sort of a tubular channel, because you pass it from one computation that puts the stuff in it to another that eats it, via intermediaries that don’t want to get sauce on their fingers.
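In (quite possibly bogus) code, the picture in my head was something like this, with Maybe as the tortilla:

```haskell
-- The messy stuff (failure) stays wrapped; halve and double chain
-- through >>= without ever getting sauce on their fingers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

double :: Int -> Maybe Int
double n = Just (n * 2)

pipeline :: Int -> Maybe Int
pipeline n = Just n >>= halve >>= double >>= halve

main :: IO ()
main = print (pipeline 12)   -- Just 6
```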

Burritos aren't tubes

David Chapman 2017-07-27

… but burritos aren’t actually tubular channels. They obviously should be, though.

Jeff Shrager and I used to eat burritos together and get sauce on our fingers and talk about how there has to be better tech. What you want is a tube with a piston on the bottom on a screw so you twist the base of the burrito to shove more of the filling up toward the top.

This get-rich-quick plan didn’t work out, but to be fair we never actually tested a prototype even.

So monads should probably come with twistable bases.

That’s my contribution to computer science for the day.

Monads, the Universe, and Everything

Sytse Wielinga 2017-07-29

I’m (overly) conscious that this may sound like a rant, but hopefully there’s some interesting or funny pieces in it, so here goes:

Typically a monad (as used in Haskell, according to my understanding of it, which might be totally wrong because I’ve spent approximately fifteen minutes studying Haskell) is a plain wrapper for some messy stuff, and it’s sort of a tubular channel, because you pass it from one computation that puts the stuff in it to another that eats it, via intermediaries that don’t want to get sauce on their fingers.

Unfortunately, this does seem to be totally wrong (in Haskell) ;-)

To correctly understand monads as wrappers, you have to distinguish monadic types from monadic values. The types are wrappers, but the values don’t pass stuff from computation to computation; rather, they themselves represent a sequence of computations that return a value of the wrapped type (in the case of lazy monads).

And here’s the funny part: the messy stuff they hide away is the wrapper, which describes what the word ‘computation’ means, in such a way that you can build computations out of other computations. So the tortilla is imaginary, meta-systematic, and somewhat complicated and difficult to get right, as one might imagine when trying to eat a burrito without a physical tortilla (or a plate, for that matter). So I guess monads aren’t burritos; instead, the monad itself is the marketing material for the design of burrito-shooting machines that will deliver the filling parabolically (and the Haskell compiler loves the marketing!).
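A minimal sketch of what I mean, using the standard State monad (this assumes the mtl package): the monadic value tick is not data flowing down a tube; it is a description of a computation, composed from smaller computations and only run on demand.

```haskell
import Control.Monad.State

-- A value of type State Int Int: a computation over a counter,
-- not a wrapped piece of data.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

-- Composing computations out of computations.
threeTicks :: State Int [Int]
threeTicks = sequence [tick, tick, tick]

main :: IO ()
main = print (runState threeTicks 0)   -- ([0,1,2],3)
```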

A couple years ago Carl Hewitt showed up on Lambda The Ultimate blog and the general consensus was that no one could understand what he was saying.

I happened to read some of his papers (his ‘direct logic’ is, I think, a unique solution to an important problem, but sadly, I haven’t gotten around to figuring out how to build formal logic systems yet), and I think the summary I built in my head at the time might be correct:

The criticism of Gödel is that taking a system of formal logic, and saying ‘here’s a proof that’s infinitely long and it proves something that’s wrong’ is just silly, not some kind of mystical proof that mathematics is doomed; Carl Hewitt solves it by just stipulating that proofs (which, in formal logic, are just strings of letters and symbols) have finite length, and if I recall correctly, that’s all that’s needed to make Gödel’s problem go away.

Hewitt’s other idea, actors, is interesting as well, because it’s (in a mathematical way) more general than any other description of computation. (A ‘description of computation’ is just what source code is; you can think of actors as belonging to the same class of concepts as ‘(lambda) function’ and ‘formal grammar’.) But you can’t see that it’s more general unless you include things outside the one CPU (core) that programs are thought to be ‘run on’, which means that rationalists won’t see any value in it, because all CPUs are designed to give the appearance of the simpler world those rationalists imagine themselves to be in (which, as Hewitt has been pointing out for decades, is the only reason we’re having so much trouble scaling).
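A toy rendering of the actor idea (just a sketch; real actor semantics also cover creating new actors and changing behavior in response to messages):

```haskell
import Control.Concurrent

-- A minimal "actor": private state, a mailbox, one message at a time.
counter :: Chan Int -> IO ()
counter mailbox = loop 0
  where
    loop total = do
      n <- readChan mailbox
      putStrLn ("running total: " ++ show (total + n))
      loop (total + n)

main :: IO ()
main = do
  mailbox <- newChan
  _ <- forkIO (counter mailbox)
  mapM_ (writeChan mailbox) [1, 2, 3]
  threadDelay 100000   -- crude: give the actor time to print
```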

It was only when I learned to set aside the pious “structural oppression” nonsense mouthed by campus activists that I came to understand the underlying, genuine basis of their seemingly-irrational “moral” concerns—and to sympathize with them.

Too bad you didn’t provide a link here! I would’ve so liked to sympathize with campus activists…

And now a final comment I had to drop: I’ve seen some comments about Jordan Peterson around here somewhere, whom I’ve been following with a lot of interest. I don’t think describing him as ‘an eternalist’ is accurate in the least (whatever that means; I still can’t figure out who could be an eternalist, I’d think only habits of thought, in the Dzogchen sense of the word, can be). His belief system does center around the idea that human biology (the structure of the brain) has some universals that make certain ways of ‘orienting yourself in the world’ through narrative (and orienting yourself to the world always happens through narrative as an intermediary, for the appropriate definition of the word ‘narrative’) into bad hypotheses about how to create a good society (from a non-dual perspective). This seems to me very similar to what David Chapman says: you can be factually incorrect about meaning.

By means of an introduction to Jordan Peterson, I think you (David) will like this one, perhaps a lot: https://www.youtube.com/watch?v=f3vqXCLhJLE

As for his lectures, it’s obviously always best to just drop in halfway through a lecture halfway into the course: https://www.youtube.com/watch?v=3nAIAPYuD7c

There it immediately becomes clear that in his ‘maps of meaning’ lectures, he’s trying to teach everything he knows about how to do and legitimize meta-systematic thinking about how to be a human being, grounded in both tradition (the closest thing to proof of hypotheses that we have, which is not very close, but still) and scientific/empiricist models of biology/psychology (and cybernetics, apparently).

Anyway, thanks so much for all of the insightful writing. I’m looking forward to your pending discoveries about how meta-rationality works!

Awakening from the Meaning Crisis

Valeria 2021-09-18

Meta-rationalists have been promising a coherent account of meaning for nearly a century. Somehow, we’ve never delivered

Besides Chapman’s wonderful work, the work of Prof. John Vervaeke, including how he presented it in his “Awakening from the Meaning Crisis” series, seems to be a huge step in that direction (especially the second half of the series).

Preview here: https://www.youtube.com/watch?v=ncd6q9uIEdw

Playlist here:
https://www.youtube.com/playlist?list=PLND1JCRq8Vuh3f0P5qjrSdb5eC1ZfZwWJ
