Comments on “Your questions about meta-rationality?”
tl;dr
Too many words. I’ve been following meaningness.com for like a year now, because you identified the phenomenon of the nihilistic trap people get stuck in post-rationality. Great job naming a real problem! But I’ve actively and passively kept an eye on the site since then and never felt like you had a clear answer about how to get out of it. I think whatever your answer is, it won’t work unless it’s something I can intuitively grasp, and that means it probably needs to be expressible in a thousand words. Perhaps also some contrived stories of people successfully making that transition (that is, explicit examples of how it could play out). But I don’t think an expansive philosophical tome surveying historical approaches is something I’ll be able to get through.
Uncertainty vs. fuzziness vs. illegibility vs. nebulosity vs...
“Nebulosity” is, um, nebulous.
Consider a Strawman!Scientist: they’ll come up with a theory, defend it against objections until the weight of counterexamples finally piles up high enough that they can’t write it off as “weird errors”, then jump straight to a new theory, which they defend against all objections until…
The idea of probability theory / uncertainty is nebulous to them. They might even force their way through a little bit of the math, but they don’t grok it.
Or, consider someone who’s studied probability theory, but never seen fuzzy set theory. They put a probability distribution over everything - they’re a “level” more sophisticated than a hardcore dualist, but they still believe that all categories are distinct and anything to the contrary is merely a statement of your own ignorance. The idea that a statement itself could have a degree of truth is nebulous to them.
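To make that contrast concrete, here’s a minimal sketch in Python (the membership function and all numbers are invented, purely for illustration). Probability expresses uncertainty about a crisp proposition; a fuzzy membership function instead gives the proposition itself a degree of truth:

```python
# Probability vs. fuzzy truth for the claim "the water is hot".
#
# Probabilist: the water is either hot or not; I'm merely unsure which.
# Fuzzy theorist: "hot" itself admits degrees; 45 C water is *somewhat* hot.

def p_hot_given_evidence() -> float:
    """Subjective probability that the crisp proposition 'hot' is true."""
    return 0.7  # made-up credence: 70% sure it's hot

def hot_membership(temp_c: float) -> float:
    """Degree to which temp_c counts as 'hot': an invented piecewise-linear
    membership function (0 below 30 C, 1 above 60 C, linear in between)."""
    return min(1.0, max(0.0, (temp_c - 30.0) / 30.0))

print(p_hot_given_evidence())  # 0.7 -- uncertainty about a crisp fact
print(hot_membership(45.0))    # 0.5 -- a degree of truth, with no uncertainty
```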
I suppose my point is that “nebulosity” seems to cover a lot of different things and would benefit from more discussion/description. Of course, the nature of nebulosity makes that a difficult project.
I guess I’m concerned that rationalists attempting to make the jump to meta-rationality will keep trying to “formalize away” anything that is nebulous. It’s not exactly what I have in mind, but I’m reminded of “What the Tortoise Said to Achilles”. Sometimes, I want to tell people, “Don’t be the tortoise!”
Instrumental meta-rationality
I love learning about systems and working inside systems, so it’s very easy for me to point at ways that stage 4 rationality is useful to me. Even when I can see that it makes sense to apply multiple systems to something (e.g. when there is both a reductionist view of it and a high-level abstract view), I usually stick to thinking in terms of the union of all such systems that I understand. It would be interesting to me to read what kinds of situations trigger you to think in a meta-rational way.
Meaning
I frequently feel confused by how you use the word “meaning.” I understand the idea of causality, and I understand the idea of taking actions pursuant to a goal. I understand that people have meta-preferences about what kinds of goals are important or worthwhile. I also understand the idea of communicating an understanding to someone (or yourself) by taking an action that “means” something to them; linguistic “meaning.” But some of your writing makes it sound a lot more confusing than that. For example, regarding the different varieties of nihilism, you write about “higher” and “lower” meanings, you talk about rejecting the idea that anything has a meaning, and so on. I don’t think I understand these at all. And I don’t mean merely that I have no formal understanding but nebulously know what you are getting at – I really have no idea whatsoever.
It might be a lot to ask for the book to clear this up, but it’s certainly confusing to me!
Reasonableness and random thoughts
I’m most interested in the explanation of reasonableness. I’d also be interested in a “manual of meta-rationality”, but I guess that has to wait. +1 for Marshall’s point about showing meta-rationality in a practical context.
I think “function and structure” should be excised from the finished book. It seems useful for (1) having conversations like this while the book is in progress, (2) gesturing at further reading in case you don’t finish writing the book, or (3) introducing an extremely large work of thousands of pages. But since Eggplant is shortish, I doubt it will add any value to announce what you’re going to say instead of just saying it. The one thing to save would be the recommendation about who should skip part 1.
It applies to me: I’m Gen Y, and my friends and I are now agreed that we adopted a “meta-rational” approach to truth (though not necessarily to other things) at some point in high school. So the entire first half of the book is superfluous for me personally, and maybe for a lot of younger readers. Also it sounds like it’s going to be all stuff you’ve already covered in Meaningness. (Obviously it needs to be there for Eggplant to work as a stand-alone thing; I’m just voting to make it relatively low priority.)
“The aim is not to refute these theories in detail—because it is uncontroversial that each did fail.” Judging partly from Meaningness comments over the years, I’d guess that many of these things are uncontroversial for students of the history of philosophy, but not for your target audience. It might be a good idea to recommend books that spell out any marvellous refutations which this ebook is too small to contain. :)
“It uses mainly STEM examples, and occasionally gets quite technical. However, most of it can be understood without any particular STEM knowledge.” My friends and I have a running joke/pet peeve that math textbooks always say things like this and the second sentence is always a lie—I suspect you shouldn’t make that claim unless you’ve field-tested it on real live non-STEM people.
Overall, the book looks really promising and I’m excited to read it. Cheers!
To structure the book in
Structuring the book in terms of its uses could make sense, I think, since it won’t distort anything about the content itself. Although meta-rationality can’t be broken into pieces, the book is trying to capture its essence and weld it into a tool for humans. That tool can be used in lots of different areas, so those areas can acceptably serve as the structure: they are practical, not epistemological, and are already implicitly part of the work itself – if you catch my drift.
Also I know you’ve mentioned that it’s intended for STEM people, but the scope of it is realistically going to be larger and more diverse than just that, so it might be useful to clearly state how it could be of use in humanities subjects, general public discourse, in people’s daily lives etc.
Heidegger, AI, and rationality
I know you worked on “Heideggerian AI” decades ago, and that’s now basically the dominant mode of AI in the form of machine learning.
Sorry if I’m somewhat abusing the term, but it’s so evocative.
We need even more embodiment, as Dreyfus argues, but my contrarianism—combined with a kind of ressentiment against Google’s computational power—leads me to want to reappreciate the cognitivist take on intelligence and experience.
Heideggerianism seems anti-rational, or at least anti-rationalistic. It places primacy on deep embodied adaptation rather than conceptual systems. There’s real truth to this, but it also seems like a source of bias.
I don’t want to write a whole essay in a comment, and I’m not sure I have enough clarity to do so anyway, but I wonder if you kind of see where the argument could lead, and whether the metarationality paradigm could help negotiate between the Heideggerian stance and the cognitivist stance…
I understand it's not the
I understand it’s not the goal of the book, but I’m mostly interested in a “manual of meta-rationality”, or at least writing which helps one to gain competence in meta-rationality. Perhaps by following a narrow path which leads from rationality to meta-rationality, I could come away with some useful, practical insights towards that goal or at least a better way to generate those insights.
Use of Wep- I mean Instrumental Metarationality
The thing I’m curious about is this: am I to understand metarationality as a distinct skillset? Or is it just a generalized fuzzy-wuzzy intuition looking for serendipitous coincidences/convenience?
In the subject line I brought up Use of Weapons (recommended – a blast (hehe) of a novel) because, well, it is concerned with this kind of cross-cutting cleverness/capability.
So is Karate Kid.
So is Django Unchained.
Matter of fact, it seems to my memory that this is a common element in Hero’s Journey type stories, or really, any story concerned with showing a character’s growth when it isn’t (only) about their morality.
From personal experience, however, I do not believe that actively trying to apply such cleverness is a valid approach to anything, barring certain specific types of structured tests (the name eludes me). Mastery, in my view, is only possible to attain in very, very specific contexts; a skill is just a skill, nothing more. If you have more of them, they’re bound to overlap more often – but is that metarationality?
In fact, is my relentlessly instrumental approach at all valid here? Or is metarationality something, well, less literal and concrete than that? I think I might be missing some key part here. Or missing the key for the part. Trying to grab hold of a cloud, maybe.
Jam tomorrow, but never jam today
My response to all of your writing on these subjects is similar to that described in the “tl;dr” comment, extended over several years. There is an endless process of deferral, of jam promised for a tomorrow that never arrives.
I have yet to see anything to persuade me against Scott Alexander’s criticism, that what you are calling “rationality” is what is known on LessWrong as “straw Vulcanism”, and what you are calling “meta-rationality” is no more – in fact, a lot less – than what LessWrong and its successor LesserWrong are actually about and contain a large amount of material on. And now you promise “a gradual introduction”, which will “lead gradually from rationality into meta-rationality”, which is “not going to look in detail at the main concern of meta-rationality” and “also not address the main concerns of rationality”, instead limiting itself to “the questions “what are truth and belief anyway?””
It is as if the imaginary reader in your head is always someone stuck in “rationalism”, and however many words you write, you are still addressing that unchanging imaginary figure.
So, what I would like to see is the jam. Not “a few waypoints sketching a narrow path through the jungle towards a beachhead”, but the promised second part, “the meta-rational alternative: a different account of how rationality works and how best to use it.” Because if not now, when?
Empty words are not enough
Addressed to the commenters of ‘tl;dr’ and ‘Jam tomorrow…’, from my own personal experience rather than David’s: I don’t think you’ve read the rest of David’s work right. Fixation and denial, nebulosity, the continuum gambit, eternalism and nihilism – once you understand why these things fail, and understand where David’s criticisms of them are coming from, the method falls into your hands fully formed. I’m not convinced it’s possible to explain it in mere words; meta-rationality is inherently very practical. That’s what the article about ontological remodelling is getting at. The answer is there, but it’s a catch-22: you need the answer to understand the answer, and that’s tricky to get.
As for less{,er}wrong, I’ll believe it when I see it. All I’ve seen from them so far has been posturing, or useless systematic stuff that won’t work outside of its limited domain.
Re: Jam Tomorrow and Re: "useless systematic stuff"
Re: Richard
I have yet to see anything to persuade me against Scott Alexander’s criticism, that what you are calling “rationality” is what is known on LessWrong as “straw Vulcanism”, and what you are calling “meta-rationality” is no more – in fact, a lot less – than what LessWrong and its successor LesserWrong are actually about and contain a large amount of material on.
I brought something like this up recently. David has stated before that his definition of “rationality” is NOT the same thing as what LW calls “rationality”, and that he isn’t really specifically targeting LWers with his writing and aims for a wider audience. I don’t think he’s ever claimed that meta-rationality is supposed to somehow compete with the writing on LW. The term “rationality” has a long and well-defined history outside of LW.
Re: anonymous
I don’t know if the phrase “useless systematic stuff” makes much sense from the perspective of the meta-systematic mode. Someone in the meta-systematic mode uses systems, after all, and LW certainly has systems by the bucketful.
"useless systematic stuff"
Perhaps I could’ve phrased that better. I meant something more like “misapplication of systematic methods to domains in which they do not function”; see David’s articles on Bayes for further reading on that topic.
Accusations of metarationality appear non-transitive
David Chapman says Scott Alexander is metarational. Scott Alexander keeps saying he learned his alleged metarationality from Eliezer Yudkowsky’s Sequences. David Chapman keeps saying the Sequences are not, in fact, metarational.
This puzzles me. Shouldn’t the difference between meta and non-meta be pretty dramatic and agree-upon-able?
The meta-rationalist allows for Knightian uncertainty?
Standard LW rationalist: Uses standard Bayesian probability theory
Meta-rationalist: Extends probability theory by allowing for Knightian uncertainty – see Aaronson. Examples: imprecise probability, Dempster–Shafer theory, etc.
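For anyone who hasn’t met Dempster–Shafer theory: here’s a minimal, hedged sketch (the frame, mass assignments, and numbers are all invented for illustration; this is just Dempster’s rule of combination, not anything from Aaronson). The point is that mass can sit on a *set* of outcomes, encoding Knightian-style “I don’t know” directly:

```python
from itertools import product

# Dempster's rule of combination over the frame {'rain', 'dry'}.
# Mass assigned to the whole frame encodes "no idea" -- something a
# single Bayesian distribution cannot express directly.

FRAME = frozenset({'rain', 'dry'})

m1 = {frozenset({'rain'}): 0.5, FRAME: 0.5}  # half committed, half ignorant
m2 = {frozenset({'dry'}): 0.3, FRAME: 0.7}

def combine(ma, mb):
    """Multiply masses of intersecting focal sets, discard conflicting
    mass, renormalize. (Assumes the sources are not in total conflict.)"""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(ma.items(), mb.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

for focal, mass in combine(m1, m2).items():
    print(set(focal), round(mass, 3))
```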
Knightian uncertainty: still rational
Meta-rationality comes with the realization that if you perceive something without a proper method for looking (one that gives you an ontological view appropriate to what you’re looking for), no amount of rational thinking will get you anywhere (though if you think hard enough, you may fool yourself into believing that you got somewhere).
Luckily, a “proper method” for looking is kind of built in to humans, which is why “common sense” works quite well for people with skin in the game, who experiment with things for their own purposes.
Knightian uncertainty is an example of what partial breakdown of rational systems looks like to a rationalist. To a meta-rationalist, it’s a sign that you’re not looking in the right place (if what you’re trying to do is create a functional rational system); alternatively, it’s a sign that you’re looking in exactly the right place if you want to be somewhere exciting (for living organisms, uncertainty leads to growth, partly because it leads to the death of dysfunctional parts of you).
Some examples of cryptic meta-rationalist advice, to illustrate the above:
If your thoughts don’t make sense, look again. If your thoughts do make sense, look again.
First be, then look, then act, then think.
David: does any of this correspond to the idea you have about meta-rationality, or is that something else? Does any of this make sense at all, in the context of meta-rationality?
Do "meta-rational thoughts" exist?
It seems to me that the word “meta-rationality” describes something that has always been a fundamental part of human nature.
I wonder if “meta-rationality” might even have preceded rationality itself?
The existence and necessity of a logic-free process of revising your view of the world doesn’t depend on rational thought or logic, but only on perception and concept.
On the other hand, maybe you could say that until recently (the last one or two hundred years or so, maybe a bit longer), the only meta-rational process available worked over fairly long periods of time, with dreaming (during sleep) playing a central role, and that it had little capacity to deal with abstraction; but that now, some people have developed the ability to look at the process of thinking rational thoughts with sufficient clarity and complexity that they can deploy a rational thought process against some problem and, in the moment, work to modify and fine-tune it?
This would provide an answer to Dan’s question, “Shouldn’t the difference between meta and non-meta be pretty dramatic and agree-upon-able?”: it is pretty dramatic, but it’s not an aspect of linguistic thoughts and pieces of writing (those are always either rational or pre-rational), but an aspect of how a person uses his brain to arrive at a certain rational thought process.
Its agree-upon-ableness is a big problem, because when somebody derives a wrong conclusion, it may be very difficult or even nearly impossible to find out what caused him to derive it: did he fail to use proper meta-rational methods, or fail to use them at all? Or is it the case that he used some of the best meta-rational thinking that anybody could do, but fell prey to cognitive dissonance, or a character flaw such as stubbornness, or a lack of knowledge? Or is his view perfectly appropriate, but inadequately explained?
And if what he says does make sense, did he just learn and correctly use somebody else’s rational system by accident, or by being embedded in a social system where this type of thinking still works very well, or did he use meta-rational thinking in order to derive his conclusions?
My hypothesis about the subject question: “meta-rational thoughts” do not exist, but “meta-rational thinking” does.
The rhythm of rationalism
David Chapman
“My understanding is that “straw Vulcanism” is the Romantic criticism: “But poetry! But Love! But Awe! You can’t explain that with your ‘rationality’ can you?” Whatever I’m doing, it’s not that.”
The use of the term on LessWrong (where it was created) is wider than that. Spock (the Hollywood rationalist for whom the phrase is named) regularly states impossibly precise probabilities (usually low probabilities for outcomes that Kirk as regularly achieves). The straw Vulcan has exact numbers calculated by exact reasoning for everything all the time. This is well recognised, e.g. by Eliezer, to be impractical and a caricature. But at the same time, the Bayesian rules apply and give guidance even when you don’t and can’t have the numbers. As he says, you have to sense the rhythm of it.
What is metarationality?
You’ve quoted my writing as metarational, which surprises me a bit; I thought of what I was doing as “just thinking”, at the level of fuzziness which seems appropriate to the amount I know (or, rather, don’t know). I still can’t reliably predict what things you’re going to refer to as “metarational.”
There’s a thing about “the smarty-pants know-it-alls don’t actually know it all” that I see as a “post-rational” political position, and it’s not generally a place I choose to put myself. So if that’s meta-rationality, then it’s not my philosophy.
If meta-rationality is a claim about what minds do – for example, that we do not generally reason from axioms or “raw sense-data” to derived conclusions according to a finite set of legal moves – then I might be a meta-rationalist.
Meta-rationality is abduction?
Sarah,
Look at this informal equation:
Deduction x Induction = Abduction!
If we take deductive logic (methods for reasoning from axioms to certain conclusions) and combine it with inductive logic (methods for reasoning under uncertainty by generalizing from specific observed examples), is the end result abduction (a method for working backwards from observations to the best explanations)?
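One common (and admittedly still rationalist) way to cash that out is “inference to the best explanation”: score each candidate hypothesis by prior plausibility times how well it predicts the observation, then pick the best. A toy sketch, with entirely made-up numbers:

```python
# Toy abduction as inference to the best explanation: observing a wet
# lawn, work backwards to the hypothesis with the highest score
# (prior * likelihood). All numbers are invented.

hypotheses = {
    'it rained':          {'prior': 0.30, 'likelihood': 0.9},
    'sprinkler ran':      {'prior': 0.20, 'likelihood': 0.8},
    'neighbour hosed it': {'prior': 0.05, 'likelihood': 0.7},
}

def best_explanation(hyps):
    return max(hyps, key=lambda h: hyps[h]['prior'] * hyps[h]['likelihood'])

print(best_explanation(hypotheses))  # 'it rained' (score 0.27)
```

Of course, this sketch presupposes that the hypothesis space and the numbers are already given – which is arguably exactly the part that isn’t formal.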
Induction and abduction
mjgeddes,
Aren’t induction and abduction both purely epistemic rationalist constructions?
I would define induction as ‘the ability to predict with high likelihood of success whether something is true’ and abduction as ‘the ability to find the One True Cause based on observations’.
Both depend on the problem statement: they will give different results depending on what you’re looking for.
Neither concept is meaningful outside of a rational system of logic, with a fixed (or semi-fixed) ontology.
Problems related to induction and abduction that you will need meta-rationality to solve include:
- Deciding what kind of ontology you’re going to apply to the problem (in a “who killed Jane?” type of problem, you may have to decide on motives; but that depends on how you analyze human minds – more sophisticated analyses don’t even have a simple concept we can call “motive”)
- More crucially, deciding what kind of system provides the justification and context for solving the problem in the first place, which is necessary to justify that the problem statement is appropriate (after solving the problem of Jane’s death/murder, should someone go to jail? Do you trust the legal system? What are the boundary conditions for “guilt” – when “is” someone a murderer? Is jailing someone justified at all?)
And none of this ‘logic’ thing, including abduction, is a good way of thinking about how to design ontologies/knowledge bases/rhetoric/practices in order for them to be useful to others.
Or to put it another way: if you model other people as engines that apply ‘induction’ and ‘abduction’ based on what they ‘know’, and who can only be in trouble if they either 1) don’t ‘know’ something such that induction and abduction will fail, or 2) use the ‘wrong’ method for applying induction or abduction, then you’re going to be very bad at predicting whether you are being helpful to them or not.
Actually, ‘thinking’ alone isn’t even a good way to get there: experience teaching others, or seeing others try to apply for themselves whatever they gleaned from what you said/wrote/did, is necessary to get really good at doing the things that actually help others solve the problems that are meaningful to them.
Abduction isn't a strictly formal system
Hi Sytse,
No, abduction isn’t about looking for ‘one true cause’; it’s about applying creativity to generate multiple hypotheses. So it’s not strictly a formal logical system.
https://en.wikipedia.org/wiki/Abductive_reasoning
Deduction and Induction are the formal systems of logic (I think your definition of induction is OK).
Any meta-rationality still has to relate back to the formal logic systems. Induction and Deduction can still be elements of meta-rationality, even though meta-rationality might transcend them both. I think it’s using creativity to combine Induction and Deduction that isn’t strictly a formal logical process.
If, then, we say that the ‘x’ (multiplication) sign represents using creativity to combine Induction and Deduction, thus obtaining a meta-rational system (abduction) that transcends formal methods, then the secret to meta-rationality is in that ‘x’ sign!
Deduction x Induction = Abduction!
I think what the book really
I think what the book really needs is solid examples of what meta-rational thinking actually is, as compared to rational and irrational thinking. As in Bob (stage 3 tribal irrational thinker) and Jane (stage 4 systematic rational thinker) and Clarence (stage 5 fluid meta-rational thinker) are all confronted with the same problem. What are they thinking? What are their solutions to the problem? How are their solutions workable and not? How is this typical of this sort of thinking?
Understandably your target audience is STEM folks, but by demonstrating how this all works across technical, interpersonal, and personal issues you can make it relevant to anybody and show them the objective. You’re aware there isn’t currently a good path to meta-rationality, but by concretely showing the destination, people who get it will find that path and help spread knowledge of it to others.
At some point this all needs to come down to earth, where normal people live, and not just technically oriented, typically poorly socialized engineers and mathematicians.
"The map is not the territory"
I think there’s fertile ground for you to till in the persistent confusion of Scott Alexander, Sarah Constantin, and myself – all LW-ers. I was going to check what years the LW sequences posts spanned and before I could even get to the main LW page my browser auto-completed this post for me:
I think that post is actually a great, concrete example of something I’d expect to be confused about in terms of what you deem meta-rationality. To me at least, that post is ‘meta-rationality’. It’s all about the invisible traps that words can be, preventing us from escaping from some particular fixed (and probably at least partially private) meaning they have in our minds.
There’s also a common LessWrong phrase (not coined by EY or any other LW contributor tho) that’s used like a mantra to remind oneself that ontologies aren’t fixed – “The map is not the territory”. Isn’t that phrase, and the concepts it’s referencing (or hinting at), meta-rationality? Rationality, as you use the term, seems to be pretty much ‘using maps’ – ‘map’ seems to be what you usually describe as a “system”. Meta-rationality is ‘picking a map to use’.
I feel like the confusion is a result of a kind of implied assumption on ‘our’ part – rationality is whatever works to (generally) form true beliefs and act effectively. So whatever ‘meta-rationality’ might be – if it definitely does contain methods, heuristics, ideas, etc. that work better than ‘regular rationality’ – it’s kind of automatically subsumed under ‘rationality’.
One area tho where I suspect whatever-it-is-exactly-you’re-doing might be helpful is in thinking about the ‘meaning’ of things, in the sense of ‘how should I feel about this?’. Tho I worry that the ultimate answer is one of (a) there is no ultimate answer! [which seems obviously and trivially true]; or (b) the ultimate answer is whatever answer you want it to be! [but you should really try something like the good parts of tantra, because it’s so much fun!].
The 'rationality' of existence
Something I was thinking about while reading this post is the ‘rationality’ of existence.
In other words, assume that the universe is rational, i.e. it not only can be described as, but is, a simple ‘formal’ system. A concrete example would be the universe being (or being equivalent to) a simple cellular automaton.
Obviously, this wouldn’t obviate needing meta-rationality, for practical reasons alone. But, ‘ultimately’, everything would be ‘rational’ and ‘fixed’, even if it wouldn’t appear so to us or anyone or anything similar-ish.
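To make “simple formal system” concrete, here’s an elementary cellular automaton (Rule 110) in a few lines of Python – purely as an illustration of how a fully ‘rational’, fixed rule can still be something you can only follow by running it, not a claim about what the universe is:

```python
# Elementary cellular automaton, Rule 110: each cell's next state is a
# fixed function of itself and its two neighbours. The rule is tiny and
# completely 'rational', yet in practice you learn where it goes only
# by running it.

RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31  # start from a single live cell
for _ in range(16):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```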
Existence isn't any-'thing'
Why should I assume that the universe is rational, or can be described as such, or can ‘ultimately’ be described at all? How would this assumption guide my activity, particularly further investigations into the matter at hand?
I think David’s whole point about the use of words like ‘ultimately’ and ‘really’ is that they’re used to hand-wave one’s way past the invocation of quasi-religious beliefs for the sake of putting an end to further deliberation. Eliezer raises the idea of “thought-terminating cliches”, which I think applies here. “The universe is rational” doesn’t so much describe a potential state of affairs as enable you to stop thinking about ‘the universe’ – since the ‘real’ answer is that the thing you’re trying to predicate rationality of isn’t a ‘thing’ at all, i.e. something you can grasp or observe in its entirety, but some nebulous and undefined ‘stuff’ which you are a part of and interact with, that changes on account of your interaction with it and which changes you as well.
What’s the purpose of believing that the universe is or is equivalent to a simple cellular automaton? Even if I knew that the universe had and would continue to evolve according to a single basic rule, hasn’t it already developed far past the point where knowing what the rule is would enable me to predict where it’s going to end up next?
This seems to me to be what David has in mind by “rationalism is an eternalism”. My feeling about your example, which matches the sense I get from the tone of the writing by authors and their commenters in the rationalist-adjacent part of the internet, is that you believe that fuzzy heuristics, hard-to-describe intuitions, and experiential know-how are all just necessary evils on the way toward getting at the simple, pristine truths (ideally one single Truth, or maybe one nice book-sized Theory that could then ideally be summarized in one or two longform thinkpieces online for me and my peers to comment on) regarding life, the universe and everything. That is, such methods are supposed to form a scaffolding, and once the Truth is there in its midst, the scaffolding can be removed and we can act as if it was never there.
Whereas the argument from meta-rationality is more like: those rule-of-thumb heuristics, the ugly coarse-grained tools that I rely on – purely ‘for practical reasons’ – are actually all I’m ever going to get. Which isn’t actually a problem, because they’re really all I’ve ever been using anyway, it’s always been like this, and there is far more to be found there that is beautiful and meaningful, that opens up a path for us toward noble action, than you may be giving them credit for. While the structure you were looking for was never going to be enclosed within its purported scaffolding, at the heights you’ve built up to there is an excellent view to be found.
Following the argument, an eternal principle, such as faith in the ultimate rationality of everything, is one more such tool. Its purpose is to bring to an end the ceaseless rumination I might undergo in reaction to the experience of anxiety over the absence of eternal principles, ultimate answers, or whether or not I can ever be sure that everything that happens in the universe will eventually make sense - if not to me, then at least to someone else out there. The alternative to this principle would be for me to tolerate the nebulosity of the big unknowns while looking for useful, enjoyable ways to interact with, change and be changed by them.
Misattribution
The notion of thought-terminating cliches is Robert J. Lifton’s, not Eliezer Yudkowsky’s as I mistakenly supposed, and I got it from Meaningness, here:
Not a question but a
Not a question but a compliment for your work. Also, I think the discussion “Jordan Peterson & John Vervaeke discuss the Meaning of Life” (https://www.youtube.com/watch?v=-RCtSsxhb2Q) could be very relevant to your blog/book.
Please include this example in the book
I like this suggestion from unkempt. Please include something of this kind in the book.
As in Bob (stage 3 tribal irrational thinker) and Jane (stage 4 systematic rational thinker) and Clarence (stage 5 fluid meta-rational thinker) are all confronted with the same problem. What are they thinking? What are their solutions to the problem? How are their solutions workable and not? How is this typical of this sort of thinking?
Can you add more elaboration about the differences between meta-rationality and meta-rationalism?
I am thinking they are distinctions without a (significant) difference.
I'd like to see the stages of
I’d like to see the stages of psychosocial development covered. And maybe some rationale for why meta-rationality is something different from common sense, considering that choosing the most appropriate rational system for the problem at hand is something regular people do all the time, if not always well.
And of course ideally some coverage of tantra and how it’s meant to provide a meta-rational tool set, but I guess that’s entirely out of the scope of this book?