Recent comments

Professor/mentors

C’mon 2021-04-02

Commenting on: A bridge to meta-rationality vs. civilizational collapse

Context for reaching it has been created only rarely, idiosyncratically, by exceptional individual mentors

STEM departments do not explicitly go beyond that. However, at least some professors understand the limitations of formal methods and the inherent nebulosity of their subject matter, and may teach that informally. They may also teach some stage 5 cognitive skills informally, implicitly, or by example.

Could you please point out some such mentors and/or professors?

Thank you

Ethnomethodology’s process, in a way, looks to me similar to ML

C’mon 2021-04-01

Commenting on: What they don’t teach you at STEM school

we can find the rules by video recording people eating breakfast, and watching carefully, over and over.

That sounds very much like how machine learning works.

No?
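To make the analogy concrete, here’s a minimal sketch in Python – with made-up breakfast data, and certainly not how ethnomethodologists actually work – of the machine-learning version of “watching carefully, over and over”: record action sequences, then count which action tends to follow which.

    from collections import Counter, defaultdict

    # Hypothetical "recordings": each breakfast as a sequence of actions.
    episodes = [
        ["sit", "pour_coffee", "butter_toast", "eat", "clear_table"],
        ["sit", "butter_toast", "pour_coffee", "eat", "clear_table"],
        ["sit", "pour_coffee", "eat", "clear_table"],
    ]

    # "Watching over and over": count which action follows which.
    transitions = defaultdict(Counter)
    for ep in episodes:
        for a, b in zip(ep, ep[1:]):
            transitions[a][b] += 1

    # The learned "rules" are just the most frequent observed successors.
    for action, nexts in transitions.items():
        rule, n = nexts.most_common(1)[0]
        print(f"after {action!r}, usually {rule!r} ({n}/{sum(nexts.values())})")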

Too much!

Kenny 2021-03-21

Commenting on: Bring meta-rationality into your Orbit

I took the periodic ‘quizzes’ for several weeks and it was very interesting! I could definitely perceive that my memory, at least of what was being tested in the periodic “reviews”, was improving.

But I’m less sure how helpful this is as a general technology. It’s certainly useful if it’s particularly important that one remember particular ‘passwords’. That is useful, sometimes incredibly so – lots of credentials, and concretely useful skills, are ‘gated’ behind an ability to recall specific facts or ideas easily and rapidly.

But I can’t imagine much use beyond that – the time cost of reviewing just one blog post is already far too high. I’m not sure how idiosyncratic this is, for me or for others generally. But I don’t think I’d have even an order of magnitude larger ‘budget’ for the reviews, even under perfect conditions.

I think this is something that is very useful for a specific, fairly narrow, and very focused purpose. (And maybe even just one such purpose could be maintained at a time, with a possible exception for (at least roughly) full-time students or scholars.)
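For a rough sense of that ‘budget’, here’s a back-of-envelope sketch. It assumes a simple doubling schedule and ten seconds per review; Orbit’s actual scheduler may well differ, so treat the numbers as illustrative only.

    import math

    # Assumed doubling schedule: a prompt is reviewed 1, 2, 4, 8, ... days
    # after collection. (An assumption, not Orbit's published algorithm.)
    def reviews_per_prompt(days: int) -> int:
        return int(math.log2(days)) + 1

    def budget_minutes(num_prompts: int, days: int, secs_per_review: int = 10) -> float:
        return num_prompts * reviews_per_prompt(days) * secs_per_review / 60

    # One post's worth of prompts (say 25) maintained for a year:
    print(budget_minutes(25, 365))   # ~37.5 minutes, spread over the year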

Mentors

Angie France 2021-03-14

Commenting on: A bridge to meta-rationality vs. civilizational collapse

We must create mentors to help others make their way to step 5 on the bridge. Our world is greatly lacking in mentors in general.

Wrongly assuming Romanticism

David Chapman 2021-02-03

Commenting on: The ethnomethodological flip

Yes; unsurprisingly this piece is well-known to anyone doing ethnomethodology. There’s a discussion in Michael Lynch’s Scientific practice and ordinary action, for instance (pp. 26-28).

Gellner made the common mistake of assuming that any critique of rationalism must be the Romantic anti-rational critique: that rationalism neglects critical aspects of subjectivity, which Romanticism valorizes.

This was exactly wrong. If anything, ethnomethodology could be criticized for the opposite: rigorous refusal to deal with subjectivity at all (on the grounds that, as observers, we have no access to it). It is more similar to behaviorism than Romanticism in this respect.

I frequently run into the same misunderstanding of my own work (e.g. in twitter discussions with self-described rationalists).

“Ignorant, irrelevant, and inscrutable” discusses this. Ethno comes in the “inscrutable” category (i.e. meta-rational).

Gellner critique

joe 2021-02-02

Commenting on: The ethnomethodological flip

Ever seen this strange sociological critique of ethnomethodology by Ernest Gellner (in 1975)? He’s sniffing at it as a kind of Californian self-obsessed conformist hippie fad. I found it rather confusing (though funny!), but the last couple of pages also sound pretty prescient—a kind of negatively characterized fluid mode, “DIY subjectivity”, reductio ad solipsism. Quite disorienting. (Started reading Gellner via Cosma Shalizi fwiw, he of the famously impeccable taste… http://bactra.org/notebooks/gellner.html )

Here’s the piece:
http://tucnak.fsv.cuni.cz/~hajek/ModerniSgTeorie/literatura/etnometodologie/gellner-ethnomethodology.pdf

Feedback

Kenny 2021-02-01

Commenting on: Bring meta-rationality into your Orbit

It was overall a little jarring – I genuinely enjoy reading your writing and answering questions in between was noticeably different.

I’m not ‘sold’ on it, but I’m open to continue testing it!

Like all good games nowadays, I like the latitude it allows you – the person reading the prompts – in deciding whether to record a prompt as ‘forgotten’ or ‘remembered’. I definitely played with different ‘personal interpretations’ of what those two possibilities might mean for a given prompt. I think the ambiguity (nebulosity) was rather delicious for your work in particular. I ended up grading myself as ‘forgetting’ any prompts that I couldn’t explain in my own words or with my own examples, and any for which I couldn’t recall your specific terms or phrases.

I’m definitely curious to play with the longer-term prompts.

It’d also be interesting to feed other prompts, or pre-made sets of them for various topics, into a kind of big ‘prompt stew’ with which I could regularly challenge myself.

Thanks!

Kenny 2021-02-01

Commenting on: Maps, the territory, and meta-rationality

Thanks – both for this post and the ‘novel ontologies’ you’ve provided everyone. I’ve had a lot of use of ‘reasonableness’ and ‘meta-rationality’ so far!

Understanding the issue but not having exactly the "right" answer

David Chapman 2021-01-31

Commenting on: Bring meta-rationality into your Orbit

Thank you very much for this feedback! Glad you enjoyed it overall.

I had similar difficulties sometimes when first using Orbit with another author’s prompts.

I hope that with more experience in writing prompts we’ll be able to minimize this sort of problem.

Pathways and typos

David Chapman 2021-01-31

Commenting on: Maps, the territory, and meta-rationality

Ed — Thank you very much! Helpful examples, highly pertinent.

Steve — Both fixed now. Error reports always appreciated!

Typo in essay

Steve Alexander 2021-01-30

Commenting on: Maps, the territory, and meta-rationality

“If create an Orbit account, or link to your existing one“ — missing “you”

Also, there’s a repeated “a a” in footnote 12.

RE: Orbit

Orion J Anderson 2021-01-29

Commenting on: Bring meta-rationality into your Orbit

I read your maps and meta-maps article and did all the orbit prompts. Overall, I enjoyed engaging with them and I think the technology has great potential.

An experience I had with a lot of the questions was that I would read your prompt and be uncertain what answer you were expecting. I would have a concept or quote from that segment in my mind, though. When I clicked to reveal the answer, I would see that the phrase that I had recalled was indeed the answer you had in mind, I simply hadn’t understood how or why it was the answer. In those cases I wasn’t sure whether I should click “remembered” or “forgot.”

In a couple of cases, I thought about it and concluded that I hadn’t been able to answer the question because I hadn’t fully understood the passage; I was able to recall the quote, but not to see how it applied. In most cases, though, I ended up feeling like the problem was that the prompt was confusing or vague; I had understood what I read, but not understood the question you asked about it.

response to query for non-cartographic examples of maps

Ed Giniger 2021-01-29

Commenting on: Maps, the territory, and meta-rationality

“Genetic pathways” and “signaling pathways” (in cell biology) are excellent examples of maps that aren’t maps. You may have seen them – biologists love to show big wall charts enumerating all the genes in the genome or all the proteins in the cytoplasm, with little arrows showing which ones regulate which other ones. The reductionists then define “pathways”, by which A controls B, which controls C, which controls D, etc., whereas the anti-reductionists say that the whole mess is irreducibly a network, in which everything controls everything else, so trying to define “pathways” is arrogant, oversimplifying nonsense that tells you nothing real. Both are wrong.

Pathways ARE illusory, in the sense that if you try to connect more than two links by using the map you will find that you can link anything to anything else (much like six degrees of separation in a human population), and if you try to use that connection you “discovered” you’ll find that it predicts nothing. On the other hand, pathways are REAL, in the sense that some things really do work together, or work in opposition, reliably, under a host of different environmental conditions, and across a vast range of species. So sometimes there really is something there.

The usual meta-rational justification is to invoke the relative weights of the connections between elements; if only we knew these, then the map would be predictive. Again, not true. The weights of connections are not fixed; they vary as a high power of the number of links in the overall map. That makes prediction, well, let’s call it problematic. BUT: to reiterate, it remains true that biology behaves as if some of these pathways really are there and really do act reliably. Which tells us that there are parts of the marsh that are almost always less liquid, others that are gooey all the time, and parts that can be drier or messier, depending on whether it has been more evolutionarily successful for the system to err on the side of variability or toward reliability, and under which conditions.

All of this is particularly relevant at present because the world of neuroscience is in the thick of recapitulating exactly this same process in the description of neural pathways, which are certain to have the same strengths as genetic and signaling pathways, and the same limitations. All of which is fine, and practical and interesting and incredibly informative - as long as one recognizes the inherent nebulosity.
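To make the reachability point concrete, here’s a toy sketch – a random graph with made-up numbers, not real biology. Give each of 200 “genes” arrows to three random others, and within a few links almost everything becomes connectable to almost everything, which is why long chains read off the map predict nothing.

    import random
    from collections import deque

    random.seed(0)
    N = 200
    # A toy "wall chart": each gene gets regulatory arrows to 3 random others.
    controls = {g: random.sample([h for h in range(N) if h != g], 3)
                for g in range(N)}

    def reachable_within(start, hops):
        # Breadth-first search out to a fixed number of links.
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth < hops:
                for nxt in controls[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))
        return len(seen)

    # With out-degree 3, five links already reach most of the "genome".
    print(reachable_within(0, 5), "of", N, "genes within 5 links")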

Funding

David Pinto 2021-01-29

Commenting on: Bring meta-rationality into your Orbit

Check Sqale.co

I started a PhD and found it useless. Happy to pursue other paths. Funding is an important piece in the puzzle.

Like your writing and transparency. Will check orbit.

Is this what I have been looking for?

Michael Nieman 2021-01-29

Commenting on: Meta-rationality: An introduction

I am interested in this. I’ll tell you why. I have thought for years that the defining problem of this stage of human evolution is: how do we proceed now that we “know” that rationality is, and always will be, by definition partial and limited? And yet we cannot return to a prerational state. We are stuck here. Clearly, per Wilber and others, we needed to free ourselves from a prerational condition, and many unquestioned advances have come from that. But we seem to have reached the limits of that. Also, many social problems, and much of the incomprehension between human groups, are caused by our mistaken belief that we are mostly rational, most of the time. For one thing, access to many perfectly good sources of wisdom and guidance is mostly cut off. Religion itself, wrongly understood as an alternative materialistic explanation of reality, has become a source of madness.

However, I am not an expert in anything. I am a retired car salesman with an interest in philosophy and theology, attempting to learn to play the 🪕 and bring a lifetime of efforts at writing poetry to fruition. So perhaps, according to your “who is this for?”, it would not be for me?

Thoughts on new format

Nick Hay 2021-01-28

Commenting on: Bring meta-rationality into your Orbit

I enjoyed this new format a lot. A couple of thoughts so far.

The reviews gave good places to pause and pick things up again later. Perhaps there’s a way to facilitate that even more by e.g. being able to press a button to re-review questions collected in the most recently read section?

The starburst, and the ‘collecting x of N prompts’ text, gave a satisfying sense of progress.

The system supports a freedom to fail at remembering or even knowing a satisfactory answer to a prompt, trusting that you’ll get another chance later.

Non-cartographic map examples?

Chris Butler 2021-01-26

Commenting on: Maps, the territory, and meta-rationality

Really enjoyed this piece! I hear a “map is not the territory or whatever” at least weekly.

One question that might expand this further: are there non-cartographic maps that speak to the same thing?

I’ve found that a lot of the map discourse (if you want to call it that) tends to avoid mental maps, system diagrams, and the like, for some reason. These tend to be the maps that people deploy to explain things that aren’t about physical locations. They end up being the way people frame the problems they’re solving and the solutions they’re recommending.

Great article

tr4nsmute 2021-01-20

Commenting on: Reasonable believings

Excellent article!

"The tarot works but it's evil"

David Chapman 2021-01-11

Commenting on: Reasonable believings

That’s partly a casual joke. But:

Divination methods can “work” by giving something for your unconscious or imagination or feelings to work with. Useful insights do come out of practices like the tarot. They can also generate delusion.

Part of the reason the tarot in particular works seems to be that it’s a collection of “archetypes” or generic myth fragments that are deeply embedded in the “cognitive grammar” of our culture. So they are a fast, effective way of shifting yourself into the “mythic mode” which I wrote about very briefly here.

The “evil” part is that its archetypes are partly derived from the Neo-Platonic occult tradition, which has real value but which is also (in my opinion) highly distorted and distorting in some ways. It will tend to guide your unconscious/imagination/feelings along particular lines that may be harmful. And there’s some of the Medieval worldview in there, and a bunch of 19th Century Romanticism. These also are problematic, in my opinion.

tarot

sidd 2021-01-11

Commenting on: Reasonable believings

My current belief is that the tarot works, but it’s evil. It’s a pack of Platonic Ideals. If you have one, I strongly advise you to stab it, burn it, mix the ash with salt, and scatter it in running water.

Could you elaborate on this? On the tarot working, and on it being evil?

Great post!

Kenny 2021-01-11

Commenting on: Reasonable believings

I would love to read the output of a “book, or an extensive research program” about this (written or summarized by you)!

I particularly liked the ‘fact’ versus ‘belief’ distinction.

Grounding all of the ‘believings’ in ‘reasonableness’, and especially in the sense of ‘accounting’ for those ‘believings’, seems very insightful.

Thanks!

David Chapman 2021-01-07

Commenting on: Now with Django!

Thank you very much for this—I haven’t had a chance yet, but I will follow up both these points.

Tufte CSS sidenotes don't seem to crash

Kenny 2021-01-04

Commenting on: Now with Django!

I just tested adding a sidenote to the demo page for Tufte CSS (to which I linked before), such that the new sidenote is referenced on the same line as an existing sidenote. They were formatted pretty nicely in the margin.

I also tested adding a LOT of text to the first (existing) sidenote and it still didn’t crash into the extra sidenote in the margin.

Something you still might not be satisfied with is that the sidenotes and margin notes use a span element, i.e. you can’t break the note text into separate p elements. It might be possible to alter the CSS to also work with div elements for the note text tho.

Also, using ` in comments (for inline code) seems to cause something to { fail / throw an error } in your site code. Your help page does mention using code elements tho so maybe that’s okay. But I had to escape the backtick character at the beginning of this paragraph too to seemingly avoid the same failure/error.

CSS for sidenotes

David Chapman 2021-01-03

Commenting on: Now with Django!

Thank you very much for both suggestions!

My understanding is that CSS-based sidenote approaches can’t prevent notes from crashing into each other? So you have to be careful, as a writer, to not write ones that are too long and too close together.

In places, I write dense, long footnotes. I don’t want to give that up…

Congratulations on a successful renovation!

Kenny 2021-01-03

Commenting on: Now with Django!

I’m using Tufte CSS for some of my personal projects. It has nice sidenotes and doesn’t seem to use JavaScript. I’m guessing you’d rather ignore plumbing for awhile tho!

For dead links, or even for living links, it’s pretty common to use The Internet Archive to avoid or mitigate link-rot. The site you mentioned in a previous comment seems preserved there:

woohoo

J 2021-01-02

Commenting on: Now with Django!

Looks/feels great. I love the model of hypertextbook upon which this is based. Looking forward to reading more!

Interestingly broken link

David Chapman 2021-01-02

Commenting on: Now with Django!

Thank you!—I’ve replaced the page it linked to with another at a different site.

It used to link to nyingma.com, which is apparently defunct. That’s potentially interesting… I remember wondering who was behind that site, because they didn’t make it obvious. Some sort of religious politics is probably involved.

Most of my outbound links nowadays are to the Wikipedia or the Stanford Encyclopedia of Philosophy, partly because they’re usually quite good, but also because they will probably still exist in ten years.

In this case I linked to the RigpaWiki, which is generally excellent. OTOH, it is produced by the former students of Soggy Alibi, a disgraced and now dead fake lama, so I’m not sure how much longer until it too is claimed by impermanence.

Nine Yanas

Andreas Keller 2021-01-02

Commenting on: Now with Django!

On the page https://vividness.live/yanas-are-not-buddhist-sects, the “nine yanas” link does not work. Gone the fast way into emptiness :-)

end goal of Integral/ Ken Wilber

ross 2021-01-01

Commenting on: I seem to be a fiction

Hi David. Very interesting! I have looked into some of Ken Wilber’s work a bit, and I don’t think the purpose is to become omnipotent.

I tend to think that becoming one with God or whatever else they want to call the highest stage of mental evolution is something that I would have to experience to verify if it is possible. If someone doesn’t do the experiment of trying to achieve this state, it would be very tricky to be sure it is not possible based on logic alone. It is a hard experiment to do, so I understand if people want to use logic to see if it is worth trying out first, though.

The next best thing to doing the experiment yourself is to study people who have done the experiment, and another book by Ken Wilber with a few other authors, Transformations of Consciousness: Conventional and Contemplative Perspectives on Development, is an attempt to do that.

Cheers

Embodied Artificial Intelligence

Stephen 2020-12-29

Commenting on: The ethnomethodological flip

I enjoyed reading Computation and Human Experience, which you have recommended many times and to whose main research you contributed. I apologize if there is a better place to comment about the book, which this comment is entirely about; if so, please point me in the right direction (or move the discussion there).

As I haven’t fully absorbed your latest writing, the book may coincide only superficially with the main focus of this site, though it touches on ethnomethodology, grounded meaning, reasonableness, and models of the world. This may especially be true if AI no longer interests you. It is my foremost hope that this is not the case, and that you will write more about AI and AI forecasting.

Also, if I happen to have any wrong or confused thoughts about the book’s arguments – which I cannot anticipate, this being a stream-of-consciousness reflection on what I learned – I apologize in advance, and you may edit things out, or retract the entire comment, whenever you wish.

Computation and Human Experience recapitulated my experiences as a software engineer working on large systems for several decades, and as an AI researcher and engineer. It also gave a perspective that remains too rarely found in technical texts; not in the sense of being interdisciplinary, which would be a prosaic and meaningless characterization, but in justifying certain approaches through its narratives and drawing useful generalizations about intelligent system design. It foreshadowed the rise of use-case design, embodied systems, the importance of distributed, subsymbolic representations, the decline of the waterfall model in favor of more continuous integration, and models of intelligent systems in which time, efficiency, and other engineering constraints play an important role – in the sense that practical application should feed back into theory in order to make the system actually useful, which often calls for major revisions to the abstractions themselves. I find that this pattern applies not only to the efficient execution of a system’s implementation, but also comes up when designing an efficient interface between the system and the user interacting with it.

I retained a sense of how “symbolic grounding” is not usefully about consciousness, but can relate to an embodiment problem of intelligence. “Meaning” doesn’t accord with an idealistic mental representation; rather, it emerges from the realization of a model and its interplay with its environment.

The idea of “metaphors” as invariant patterns across otherwise incompatible epistemologies resonated with reflections I had arrived at independently. But I now see the importance of making sound presuppositions about the nature of human cognition, and of how they should change our model of a generalized AI system, despite these universals (metaphors) in the model space being suggestive of multiple viable paths.

I wanted to write the above to reflect on how best to draw (epistemological, practical) lessons from the book, in light of AI recently finding itself on a more robust path. A fully intelligent system, however, may or may not emerge without some degree of top-down, more systematic approaches, as the book concedes on some level. We might therefore continue to learn from failures tracing back to the implicit (yet course-altering) presuppositions of dualism, idealism, and well-defined mental states, among other misleading motifs of representationalism that persist in predominant philosophies.

If it is the case that bottom-up models are primary, while structured models are necessarily secondary, I would nonetheless conclude that it might be important to draw from both symbolic and subsymbolic models of cognition in order to realize a contemporary (neural network) architecture and control system that, together, dispense with any misleading epistemologies. In this way, a robust system might be implemented efficiently because it finds a useful “middle ground”.

On the other hand, if bottom-up approaches suffice to scale up to a general AI system, it would be fascinating to see how abstraction emerges from the bottom-up calculus. Abstraction is clearly more than an epiphenomenon of the mind: it contributes to a model’s intelligence, since language can be used to think about a great many things, and since we notice stereotypes (templates) of problems, arising seamlessly, that can be reused to solve other problems more efficiently.

If you believe that meaning is fundamentally subsymbolic, embodied, implicit, and nebulous, then you might also believe that the latter approach is necessary and sufficient. I only have a hunch that this is the case, not (yet) any principled explanation. But I think it’s important to answer this, for reasons of safety and of understanding (explaining) the nature of intelligence and AI. I would be interested to hear from you (as a contributor to the work) regarding:

  1. Whether general intelligence might emerge without implementing any explicit models of the environment.

  2. How distributed representations of implicit kinds might give rise to general intelligence, and how we would know that they are reliable as planning systems, given that planning is still conceptualized in terms of predetermined, static models over actions.

  3. The main reason, however, for my bringing up AI with you is my conviction that it’s of critical importance to achieve robust AI in order to set the stage for more meaningful, enjoyably useful ways of life. The changes that come from AI may destabilize society in ways that seem to imply a tragic future; in my opinion, however, such changes can generally be handled well enough, so long as safe systems are the ones that society selects for. If you have responses about any of this, including necessary constraints on the model space in general, AI safety, or the societal implications of AI, I would welcome a reply stating your prior expectations. More importantly, I request an eventual series of posts exploring (much further) the dynamics of atomized societies in the context of transformative AI.

Circumrationality vs reasonableness

David Chapman 2020-12-05

Commenting on: Part Three: Taking rationality seriously

Circumrationality is a special sort of reasonableness—it’s the reasonable work we do around the margins of a formal system in order to get it to relate to the real world.

Doing circumrationality requires some basic understanding of the rational system, so it’s sort of intermediate between reasonableness and rationality. But it is defined by its role rather than its nature: namely, relating the formal system to concrete reality.

Circumrationality?

Andrew 2020-12-05

Commenting on: Part Three: Taking rationality seriously

Has circumrationality replaced “reasonableness” used in the earlier parts? Or are they distinct concepts?

"Change in explanatory priority"

David Chapman 2020-12-02

Commenting on: The ethnomethodological flip

Ah, no, both sorts of thinking are good in different circumstances; neither is better overall.

The “flip” is at the theoretical level: the “change in explanatory priority.” That is, changing from explaining reasonableness in terms of rationality to explaining rationality in terms of reasonableness.

Definition

Rob Alexander 2020-12-01

Commenting on: The ethnomethodological flip

One thing I don’t see in that text is a clear definition of exactly what you mean by “the ethnomethodological flip”. Would a good candidate be:

“Changing your prototype of ‘good thinking’ from rationality to everyday reasonableness”?

The meaning of meaning

James 2020-11-29

Commenting on: Reasonableness is meaningful activity

This fits what I’ve been thinking about in terms of the meaning of “meaning.” Meaning is a connection between things: between words and what they refer to, between actions and intentions, etc. Reasonableness preserves these connections, even if it doesn’t always attend to all of them (e.g. butterfly effects).

Rationality severs most of these connections through abstraction. That has the paradoxical effect of making a rational model applicable to a larger number of situations at the cost of being less applicable to any particular one of them. And it’s more than just not attending to connections that don’t matter in the context — some of them may be supremely important in the actual context, but the model can’t capture them and would break down if it tried.

A classic example is that utilitarianism has trouble incorporating “innocence” into its calculations of whether to kill one person to save others, but most people would consider that an extremely important consideration!

The scope of AI / opening remarks.

David G Horsman 2020-11-19

Commenting on: How should we evaluate progress in AI?

“Artificial intelligence is an exception. It has always borrowed criteria, approaches, and specific methods from at least six fields:
1. Science 2. Engineering 3. Mathematics 4. Philosophy 5. Design 6. Spectacle.”
Not borrowed. That is the unique scope of AI. It’s cross-domain by its nature. Our unique blindness is that Biology is #1. It determines and trumps the rest. Constrains them.
It’s notable that it was filed under “other” here, although I was watching for it.

I'd love a (more) "comprehensive account of reasonableness"

Kenny 2020-11-15

Commenting on: Part Two: Taking reasonableness seriously

Maybe that can be a future project!

I’m a little puzzled by your agreement with this, from another commenter on this post:

Reality itself does not seem to be a system.

Given the incredible precision of, e.g. some parts of quantum mechanics, it sure seems like ‘reality’ is very likely a (relatively) simple system – in terms of its rules and its fundamental constituents. In that sense, it does very much seem to be a ‘system’, even in your specific usage of that term.

Where I would very much agree with you is that reality, in the sense of our experience of the “eggplant-sized world”, is NOT a system – in your sense.

And, to me, that’s one of the wonderful things about this site/book and your writing generally – you’re helping dissolve the confusion between these drastically different scales of understanding.

It now seems obvious to me that there’s no ‘system’ that can be cognitively internalized or even published as a book (let alone read and understood) whereby something “eggplant-sized” can be understood in terms of, e.g. quantum mechanics; let alone any more fundamental theory. But I think it only seems obvious because I’ve been reading your work for so long!

But I also ‘can’t help’ but try to spin new rationalisms out of some of these thoughts! It seems to me like there’s a connection between your points and the ‘impossibility’ of rational systems in any universe, from a mathematical or computational perspective.

Something that I recently found extremely insightful is this from Scott Aaronson’s paper Why Philosophers Should Care About Computational Complexity:

My last example of the philosophical relevance of the polynomial/exponential distinction concerns the concept of “knowledge” in mathematics.[14] As of 2011, the “largest known prime number,” as reported by GIMPS (the Great Internet Mersenne Prime Search),[15] is p := 2^43112609 − 1. But on reflection, what do we mean by saying that p is “known”? Do we mean that, if we desired, we could literally print out its decimal digits (using about 30,000 pages)? That seems like too restrictive a criterion. For, given a positive integer k together with a proof that q = 2^k − 1 was prime, I doubt most mathematicians would hesitate to call q a “known” prime,[16] even if k were so large that printing out its decimal digits (or storing them in a computer memory) were beyond the Earth’s capacity. Should we call 2^2^1000 an “unknown power of 2,” just because it has too many decimal digits to list before the Sun goes cold?

All that should really matter, one feels, is that

(a) the expression ‘2^43112609 − 1’ picks out a unique positive integer, and
(b) that integer has been proven (in this case, via computer, of course) to be prime.

But wait! If those are the criteria, then why can’t we immediately beat the largest-known-prime record, like so?

p0 = The first prime larger than 2^43112609 − 1.

Clearly p0 exists, it is unambiguously defined, and it is prime. If we want, we can even write a program that is guaranteed to find p0 and output its decimal digits, using a number of steps that can be upper-bounded a priori.[17] Yet our intuition stubbornly insists that 2^43112609 − 1 is a “known” prime in a sense that p0 is not. Is there any principled basis for such a distinction?

The clearest basis that I can suggest is the following. We know an algorithm that takes as input a positive integer k, and that outputs the decimal digits of p = 2^k − 1 using a number of steps that is polynomial—indeed, linear—in the number of digits of p. But we do not know any similarly-efficient algorithm that provably outputs the first prime larger than 2^k − 1.[18]
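To put the same polynomial/exponential contrast in code, here’s a sketch. It uses a smaller Mersenne exponent than Aaronson’s so it runs in under a second; sympy.nextprime is a real function, mentioned only to mark the contrast, since no similarly efficient, provably correct way to find the next prime is known.

    import sys

    # CPython 3.11+ caps int->str conversion at 4300 digits by default.
    if hasattr(sys, "set_int_max_str_digits"):
        sys.set_int_max_str_digits(100_000)

    # "Known" in the efficient sense: the digits of p = 2**k - 1 come out
    # in time polynomial in the number of digits.
    k = 132049                      # a smaller Mersenne-prime exponent
    p = 2**k - 1
    print(len(str(p)), "decimal digits")   # 39751

    # The first prime larger than p is equally well-defined -- e.g.
    # sympy.nextprime(p) -- but no known algorithm provably finds it in
    # time polynomial in the number of digits: Aaronson's distinction.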

Just to note that the “This

Simon Grant 2020-11-15

Commenting on: Judging whether a system applies

Just to note that the “This must mean something” link to apollospeaks has disappeared… Ah, linkrot…

Turing and Description Length

Bunthut 2020-11-08

Commenting on: A first lesson in meta-rationality

However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.

I don’t think that definition does what you want it to do. For example, you often say that not even some simple task (like the Bongard problems here) can be solved systematically, with the implication that intelligence in general can’t be either. However, minimum description length isn’t compositional in that way: it can well be that a task is embedded in another such that the bigger task can be described more briefly.

If we look at any task in total isolation, we will often find that it requires vast background knowledge, and an explanation of how to do the task in isolation will require spelling out that background knowledge – but if we think of the task as situated in a broader environment, it can often be possible to learn that background knowledge, and describing how to do that might be a lot shorter.

For example, describing how to make an omelette in enough detail that someone with no cooking experience can get it right on the first try might be quite long; describing how to train the relevant steps until success is much shorter.

Reality does not come divided up into bits; we have to do that.

You’ve asked what import the Turing thesis really has here. Well, it’s sentences like this that make me reach for it. If we imagine a computer program to solve Bongard problems, of course reality comes to it divided into bits, literally. We will have to give our Bongard-solver some input, and it will be in bits, most likely some sort of pixel graphic. All the distinctions which it will later end up making can be described in the language of which pixels are on or off. So all the divisions are already there: the question is which ones to ignore.

And it certainly seems that much the same is true of our brain: when our retina sends signals to the nervous system, these are more or less in pixel format too (it’s not essential that it’s pixels, just some fixed format). Now, from the perspective of our conscious thought it may seem like we are learning to make a distinction; what actually happens is that we learn to pay attention to it.
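To make the “divisions are already there” point concrete, here’s a toy sketch: two made-up 5×5 panels, and two of the indefinitely many candidate distinctions definable over their bits. The solver’s work is not creating divisions but choosing which to attend to.

    # Two toy 5x5 binary "panels", given as bits from the start.
    left = ["00000",
            "01110",
            "01110",
            "01110",
            "00000"]
    right = ["00000",
             "01110",
             "01010",
             "01110",
             "00000"]

    def area(img):    # one candidate distinction: count of on-pixels
        return sum(row.count("1") for row in img)

    def width(img):   # another: width of the bounding box
        cols = [c for row in img for c, ch in enumerate(row) if ch == "1"]
        return max(cols) - min(cols) + 1

    # Both features are functions of the given bits; the work is noticing
    # that area separates the panels (9 vs 8) while width (3 vs 3) does not.
    print(area(left), area(right), width(left), width(right))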

Broken link

David Chapman 2020-11-06

Commenting on: The parable of the pebbles

Er, um, … The part of the outline that it points into is nebulous enough that I can’t even attach a page name to it, so unfortunately I can’t fix it yet.

Thank you very much for pointing it out, nonetheless!