Recent comments

Great article

tr4nsmute 2021-01-20

Commenting on: Reasonable believings

Excellent article!

"The tarot works but it's evil"

David Chapman 2021-01-11

Commenting on: Reasonable believings

That’s partly a casual joke. But:

Divination methods can “work” by giving something for your unconscious or imagination or feelings to work with. Useful insights do come out of practices like the tarot. They can also generate delusion.

Part of the reason the tarot in particular works seems to be that it’s a collection of “archetypes” or generic myth fragments that are deeply embedded in the “cognitive grammar” of our culture. So they are a fast, effective way of shifting yourself into the “mythic mode” which I wrote about very briefly here.

The “evil” part is that its archetypes are partly derived from the Neo-Platonic occult tradition, which has real value but which is also (in my opinion) highly distorted and distorting in some ways. It will tend to guide your unconscious/imagination/feelings along particular lines that may be harmful. And there’s some of the Medieval worldview in there, and a bunch of 19th Century Romanticism. These also are problematic, in my opinion.

tarot

sidd 2021-01-11

Commenting on: Reasonable believings

My current belief is that the tarot works, but it’s evil. It’s a pack of Platonic Ideals. If you have one, I strongly advise you to stab it, burn it, mix the ash with salt, and scatter it in running water.

Could you elaborate on this? On the tarot working, and on it being evil?

Great post!

Kenny 2021-01-11

Commenting on: Reasonable believings

I would love to read the output of a “book, or an extensive research program” about this (written or summarized by you)!

I particularly liked the ‘fact’ versus ‘belief’ distinction.

Grounding all of the ‘believings’ in ‘reasonableness’, and especially in the sense of ‘accounting’ for those ‘believings’, seems very insightful.

Thanks!

David Chapman 2021-01-07

Commenting on: Now with Django!

Thank you very much for this—I haven’t had a chance yet, but I will follow up on both these points.

Tufte CSS sidenotes don't seem to crash

Kenny 2021-01-04

Commenting on: Now with Django!

I just tested adding a sidenote to the demo page for Tufte CSS (to which I linked before), with its reference on the same line as an existing sidenote’s. They were formatted pretty nicely in the margin.

I also tested adding a LOT of text to the first (existing) sidenote and it still didn’t crash into the extra sidenote in the margin.

Something you still might not be satisfied with is that the sidenotes and margin notes use a span element, i.e. you can’t break the note text into separate p elements. It might be possible to alter the CSS to also work with div elements for the note text tho.
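
For reference, the span-based sidenote markup in the Tufte CSS documentation looks roughly like this (the sn-example id/for value is just a placeholder):

    <!-- superscript number in the running text, a hidden checkbox toggle, then the note itself -->
    <label for="sn-example" class="margin-toggle sidenote-number"></label>
    <input type="checkbox" id="sn-example" class="margin-toggle"/>
    <span class="sidenote">The whole note lives inside this single span.</span>

Since everything sits in that one span, multi-paragraph notes would indeed need the kind of CSS change (or div-based variant) described above.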

Also, using ` in comments (for inline code) seems to cause something to { fail / throw an error } in your site code. Your help page does mention using code elements tho so maybe that’s okay. But I had to escape the backtick character at the beginning of this paragraph too to seemingly avoid the same failure/error.

CSS for sidenotes

David Chapman 2021-01-03

Commenting on: Now with Django!

Thank you very much for both suggestions!

My understanding is that CSS-based sidenote approaches can’t prevent notes from crashing into each other? So you have to be careful, as a writer, to not write ones that are too long and too close together.

In places, I write dense, long footnotes. I don’t want to give that up…

Congratulations on a successful renovation!

Kenny 2021-01-03

Commenting on: Now with Django!

I’m using Tufte CSS for some of my personal projects. It has nice sidenotes and doesn’t seem to use JavaScript. I’m guessing you’d rather ignore plumbing for a while tho!

For dead links, or even for living links, it’s pretty common to use The Internet Archive to avoid or mitigate link-rot. The site you mentioned in a previous comment seems preserved there:

woohoo

J 2021-01-02

Commenting on: Now with Django!

Looks/feels great. I love the model of hypertextbook upon which this is based. Looking forward to reading more!

Interestingly broken link

David Chapman 2021-01-02

Commenting on: Now with Django!

Thank you!—I’ve replaced the page it linked to with another at a different site.

It used to link to nyingma.com, which is apparently defunct. That’s potentially interesting… I remember wondering who was behind that site, because they didn’t make it obvious. Some sort of religious politics is probably involved.

Most of my outbound links nowadays are to the Wikipedia or the Stanford Encyclopedia of Philosophy, partly because they’re usually quite good, but also because they will probably still exist in ten years.

In this case I linked to the RigpaWiki, which is generally excellent. OTOH, it is produced by the former students of Soggy Alibi, a disgraced and now dead fake lama, so I’m not sure how much longer until it too is claimed by impermanence.

Nine Yanas

Andreas Keller 2021-01-02

Commenting on: Now with Django!

On the page https://vividness.live/yanas-are-not-buddhist-sects, the “nine yanas” link does not work. Gone the fast way into emptiness :-)

end goal of Integral/ Ken Wilber

ross 2021-01-01

Commenting on: I seem to be a fiction

Hi David. Very interesting! I have looked into some of Ken Wilber’s work a bit, and I don’t think the purpose is to become omnipotent.

I tend to think that becoming one with God or whatever else they want to call the highest stage of mental evolution is something that I would have to experience to verify if it is possible. If someone doesn’t do the experiment of trying to achieve this state, it would be very tricky to be sure it is not possible based on logic alone. It is a hard experiment to do, so I understand if people want to use logic to see if it is worth trying out first, though.

The next best thing to doing the experiment yourself is to study people who have done it, and another book by Ken Wilber with a few other authors, Transformations of Consciousness: Conventional and Contemplative Perspectives on Development, is an attempt to do that.

Cheers

Embodied Artificial Intelligence

Stephen 2020-12-29

Commenting on: The ethnomethodological flip

I enjoyed reading Computation and Human Experience, which you have recommended many times and to whose main research you contributed. I apologize if this isn’t the best place to comment on the book, which is what this comment is entirely about; if there is a better one, please point me in the right direction (or move the discussion there).

As I haven’t fully absorbed your latest writing, this comment may coincide only superficially with the main focus of this site, in that it touches on ethnomethodology, grounded meaning, reasonableness, and models of the world. That may especially be true if AI no longer interests you. My foremost hope is that it still does, and that you will write more about AI and AI forecasting.

Also, if I have any wrong or confused thoughts about the book’s arguments (which I can’t anticipate, since this is a stream-of-consciousness reflection on what I learned), I apologize in advance; feel free to edit things out, or remove the entire comment, whenever you wish.

Computation and Human Experience recapitulated my experiences as a software engineer working on large systems for several decades, and as an AI researcher and engineer. It also gave a perspective that remains too rarely found in technical texts; not in the sense of being interdisciplinary, which would be a prosaic and meaningless characterization, but in justifying certain approaches through its narratives and drawing useful generalizations about intelligent system design. It foreshadowed the rise of use-case design, embodied systems, the importance of distributed, subsymbolic representations, the decline of the waterfall model in favor of more continuous integration, and models of intelligent systems in which time, efficiency, and other engineering constraints play an important role, in the sense that practical application should feed back into the model and the theory in order to make the system actually useful, which often calls for major revisions to the abstractions themselves. I find that this pattern applies not only to the efficient execution of a system’s implementation, but also to designing an efficient interface between the system and the user interacting with it.

I retained a sense of how “symbolic grounding” is not usefully about consciousness, but can relate to an embodiment problem of intelligence. “Meaning” doesn’t accord with an idealistic mental representation; rather, it emerges from the realization of a model and its interplay with its environment.

The idea of “metaphors” as invariant patterns across otherwise incompatible epistemologies resonated with my own independent reflections. But I now see the importance of making sound presuppositions about the nature of human cognition, and of how they should change our model of a generalized AI system, even though these universals (metaphors) in the model space suggest multiple viable paths.

I wanted to write the above to reflect on how best to draw (epistemological, practical) lessons from the book, in light of AI recently finding itself on a more robust path. A fully intelligent system, however, may or may not emerge without some degree of top-down, more systematic approaches, as the book acknowledges on some level. We might therefore continue to learn from failures tracing back to the implicit (yet course-altering) presuppositions of dualism, idealism, and well-defined mental states, among other misleading motifs of representationalism that persist in predominant philosophies.

If bottom-up models are primary, and structured models necessarily secondary, I would still conclude that it might be important to draw from both symbolic and sub-symbolic models of cognition in order to realize a contemporary (neural network) architecture and control system that, together, dispense with misleading epistemologies. In this way, a robust system might be implemented efficiently because it finds a useful “middle ground”.

On the other hand, if bottom-up approaches suffice to scale up to a general AI system, it would be fascinating to see how abstraction emerges from its bottom-up calculus. Abstraction is obviously more than an epiphenomenon of the mind; it contributes to a model’s intelligence, since language can be used to think about a great many things, and since we notice stereotypes (templates) of problems, arising seamlessly, that can be reused to solve other problems more efficiently.

If you believe that meaning is fundamentally subsymbolic, embodied, implicit, and nebulous, then you might also believe that the latter approach is necessary and sufficient. I only have a hunch that this is the case, not (yet) any principled explanation. But I think it’s important to answer this, both for safety and for understanding (explaining) the nature of intelligence/AI. I would be interested to hear from you (as a contributor to the work) regarding:

  1. Whether general intelligence might emerge without implementing any explicit models of the environment.

  2. How distributed representations of implicit kinds might give rise to general intelligence, and if they do, how we would know that they are reliable planning systems, given that planning is still conceptualized in terms of predetermined, static models over actions.

  3. The main reason, however, for my bringing AI up with you is the conviction that it’s of critical importance to achieve robust AI in order to set the stage for more meaningful, enjoyably useful ways of life. The changes that occur from AI may destabilize society in ways that seem to imply a tragic future; however, in my opinion, changes can generally be handled well enough, so long as safe systems are the ones that society selects for. If you have responses about any of this, including necessary constraints to the model space in general, AI safety, or the societal implications of AI, I request a reply that states your prior expectations. More importantly, I request an eventual series of posts exploring (much further) the dynamics of atomized societies in the context of transformative AI.

Circumrationality vs reasonableness

David Chapman 2020-12-05

Commenting on: Part Three: Taking rationality seriously

Circumrationality is a special sort of reasonableness—it’s the reasonable work we do around the margins of a formal system in order to get it to relate to the real world.

Doing circumrationality requires some basic understanding of the rational system, so it’s sort of intermediate between reasonableness and rationality. But it is defined by its role rather than its nature: namely, relating the formal system to concrete reality.

Circumrationality?

Andrew 2020-12-05

Commenting on: Part Three: Taking rationality seriously

Has circumrationality replaced “reasonableness” used in the earlier parts? Or are they distinct concepts?

"Change in explanatory priority"

David Chapman 2020-12-02

Commenting on: The ethnomethodological flip

Ah, no, both sorts of thinking are good in different circumstances; neither is better overall.

The “flip” is at the theoretical level: the “change in explanatory priority.” That is, changing from explaining reasonableness in terms of rationality to explaining rationality in terms of reasonableness.

Definition

Rob Alexander 2020-12-01

Commenting on: The ethnomethodological flip

One thing I don’t see in that text is a clear definition of exactly what you mean by “the ethnomethodological flip”. Would a good candidate be:

“Changing your prototype of ‘good thinking’ from rationality to everyday reasonableness”?

The meaning of meaning

James 2020-11-29

Commenting on: Reasonableness is meaningful activity

This fits what I’ve been thinking about in terms of the meaning of “meaning.” Meaning is a connection between things: between words and what they refer to, between actions and intentions, etc. Reasonableness preserves these connections, even if it doesn’t always attend to all of them (e.g. butterfly effects).

Rationality severs most of these connections through abstraction. That has the paradoxical effect of making a rational model applicable to a larger number of situations at the cost of being less applicable to any particular one of them. And it’s more than just not attending to connections that don’t matter in the context — some of them may be supremely important in the actual context, but the model can’t capture them and would break down if it tried.

A classic example is that utilitarianism has trouble incorporating “innocence” into its calculations of whether to kill one person to save others, but most people would consider that an extremely important consideration!

The scope of AI / opening remarks.

David G Horsman 2020-11-19

Commenting on: How should we evaluate progress in AI?

“Artificial intelligence is an exception. It has always borrowed criteria, approaches, and specific methods from at least six fields:
1. Science 2. Engineering 3. Mathematics 4. Philosophy 5. Design 6. Spectacle.”
Not borrowed. That is the unique scope of AI. It’s cross-domain by its nature. Our unique blindness is that Biology is #1. It determines and trumps the rest. Constrains them.
It’s notable that it was filed under “other” here, although I was watching for it.

I'd love a (more) "comprehensive account of reasonableness"

Kenny 2020-11-15

Commenting on: Part Two: Taking reasonableness seriously

Maybe that can be a future project!

I’m a little puzzled by your agreement with this, from another commenter on this post:

Reality itself does not seem to be a system.

Given the incredible precision of, e.g. some parts of quantum mechanics, it sure seems like ‘reality’ is very likely a (relatively) simple system – in terms of its rules and its fundamental constituents. In that sense, it does very much seem to be a ‘system’, even in your specific usage of that term.

Where I would very much agree with you is reality, in the sense of our experience in the “eggplant-sized world”, NOT being a system – in your sense.

And, to me, that’s one of the wonderful things about this site/book and your writing generally – you’re helping dissolve the confusion between these drastically different scales of understanding.

It now seems obvious to me that there’s no ‘system’ that can be cognitively internalized or even published as a book (let alone read and understood) whereby something “eggplant-sized” can be understood in terms of, e.g. quantum mechanics; let alone any more fundamental theory. But I think it only seems obvious because I’ve been reading your work for so long!

But I also ‘can’t help’ but try to spin new rationalisms out of some of these thoughts! It seems to me like there’s a connection between your points and the ‘impossibility’ of rational systems in any universe, from a mathematical or computational perspective.

Something that I recently found extremely insightful is this from Scott Aaronson’s paper Why Philosophers Should Care About Computational Complexity:

My last example of the philosophical relevance of the polynomial/exponential distinction concerns the concept of “knowledge” in mathematics. As of 2011, the “largest known prime number,” as reported by GIMPS (the Great Internet Mersenne Prime Search), is p := 2^43112609 − 1. But on reflection, what do we mean by saying that p is “known”? Do we mean that, if we desired, we could literally print out its decimal digits (using about 30,000 pages)? That seems like too restrictive a criterion. For, given a positive integer k together with a proof that q = 2^k − 1 was prime, I doubt most mathematicians would hesitate to call q a “known” prime, even if k were so large that printing out its decimal digits (or storing them in a computer memory) were beyond the Earth’s capacity. Should we call 2^2^1000 an “unknown power of 2,” just because it has too many decimal digits to list before the Sun goes cold?

All that should really matter, one feels, is that

(a) the expression ‘2^43112609 − 1’ picks out a unique positive integer, and
(b) that integer has been proven (in this case, via computer, of course) to be prime.

But wait! If those are the criteria, then why can’t we immediately beat the largest-known-prime record, like so?

p0 = The first prime larger than 2^43112609 − 1.

Clearly p0 exists, it is unambiguously defined, and it is prime. If we want, we can even write a program that is guaranteed to find p0 and output its decimal digits, using a number of steps that can be upper-bounded a priori. Yet our intuition stubbornly insists that 2^43112609 − 1 is a “known” prime in a sense that p0 is not. Is there any principled basis for such a distinction?

The clearest basis that I can suggest is the following. We know an algorithm that takes as input a positive integer k, and that outputs the decimal digits of p = 2^k − 1 using a number of steps that is polynomial—indeed, linear—in the number of digits of p. But we do not know any similarly-efficient algorithm that provably outputs the first prime larger than 2^k − 1.
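
(As a sanity check on the “about 30,000 pages” figure: the number of decimal digits of 2^k − 1 is ⌊k · log₁₀ 2⌋ + 1, which for k = 43112609 works out to 12,978,189 digits, a bit over 400 digits per page across 30,000 pages.)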

Just to note that the “This

Simon Grant 2020-11-15

Commenting on: Judging whether a system applies

Just to note that the “This must mean something” link to apollospeaks has disappeared… Ah, linkrot…

Turing and Description Length

Bunthut 2020-11-08

Commenting on: A first lesson in meta-rationality

However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.

I don’t think that definition does what you want it to do. For example, you often say that not even some simple task (like the Bongard problems here) can be solved systematically, with the implication that intelligence in general can’t be either. However, minimum description length isn’t compositional that way: it can well be that one task is embedded in another, such that the bigger task has a shorter description.

If we look at any task in total isolation, we will often find that it requires vast background knowledge, and an explanation of how to do the task in isolation will require spelling out that background knowledge - but if we think of the task as embedded in a broader environment, it can often be possible to learn that background knowledge, and describing how to do that might be a lot shorter.

For example, describing how to make an omelette in enough detail that someone with no cooking experience can get it right on the first try might be quite long; describing how to train the relevant steps until success is much shorter.

Reality does not come divided up into bits; we have to do that.

You’ve asked what import the Turing thesis really has here. Well, it’s sentences like this that make me reach for it. If we imagine a computer program to solve Bongard problems, of course reality comes to it divided into bits, literally. We will have to give our Bongard-solver some input, and it will be in bits. Most likely some sort of pixel graphic. All distinctions which it will later end up making can be described in the language of which pixels are on or off. So all the divisions are already there: the question is which ones to ignore.

And it certainly seems that much the same is true of our brain: when our retina sends signals to the nervous system, these are more or less in pixel format, too (it’s not essential that it’s pixels, just some fixed format). Now, from the perspective of our conscious thought it may seem like we are learning to make a distinction; what actually happens is that we learn to pay attention to it.

Broken link

David Chapman 2020-11-06

Commenting on: The parable of the pebbles

Er, um, … The part of the outline that it points into is nebulous enough that I can’t even attach a page name to it, so unfortunately I can’t fix it yet.

Thank you very much for pointing it out, nonetheless!

Link to “Using a rational framework" goes to

Benjamin Taylor 2020-11-03

Commenting on: The parable of the pebbles

Stephen Harrod Buhner and Nonlinear Knowing - Beyond Rationalism

Taylor Horne 2020-11-03

Commenting on: Acting on the truth

Hi there!
I’m curious if you’ve read any of Stephen Harrod Buhner’s works. He has written extensively on subjects that may interest you.
I suspect his work may be quite synchronous and supportive of what you’re doing here at Meaningness.
Happy trails,
Taylor

HTML bug: fixed

David Chapman 2020-11-01

Commenting on: I seem to be a fiction

Wow, thank you for letting me know!

I’ve fixed it, I think.

HTML bug

Stig 2020-10-30

Commenting on: I seem to be a fiction

I have been reading your books and metablog for a few months now, and really appreciate your writing!

This page appears to have some style errors. The sidebar is displayed in a much too large font size, wrapping many words across more than one line. The sidebar also disappears behind an embedded youtube video.

It also looks like a missing closing quote in the id attribute of an h2 element messes with the document. The offending line in the source begins like this:

<h2 id="boomeritis&gt;Boomeritis&lt;/h2&gt;&#10;&#10;&lt;p&gt;The Baby Boom generation—people born ro
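
Presumably the fix is just the missing closing quote on the id attribute, so the opening tag would read something like:

    <h2 id="boomeritis">Boomeritis</h2>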

Maybe this is the cause of the sidebar issue; the sidebar is in approximately the same location on the page.

Nate's priors

Paté 2020-10-25

Commenting on: The probability of green cheese

I read this post on the same day I saw Nate Silver, in all seriousness, tweeting out an explanation as to why Donald Trump has a 1/8 chance of winning: “if the polls move toward Trump in the closing days rather than Biden (50/50 chance) and there’s a polling error in Trump’s favor (50/50 chance) then he’s 50/50 to win. That gets you to his 1 in 8 odds in our current forecast.”
https://twitter.com/NateSilver538/status/1319787685758861313
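
The implied arithmetic just multiplies three 50/50 steps, treated as independent:

1/2 × 1/2 × 1/2 = 1/8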

being helpful

David Chapman 2020-10-24

Commenting on: What probability can’t do

Fixed—thank you very much indeed!

Typo

Ein 2020-10-24

Commenting on: What probability can’t do

A comprehensive collection of reasons probabilism can’t work, together with the proposed work-arounds, would BE helpful.

A few issues

DickHurts 2020-10-08

Commenting on: Positive and logical

Logical positivism (resp. logical empiricism) is concerned with statements that can be verified logically (resp. empirically). It does not say all statements can be verified. So Goedel’s Incompleteness results just show there are certain statements which cannot be verified, i.e. which are not subject to logical positivism and/or empiricism. Also, Incompleteness is not a flaw in classical logic; rather, it is just an aspect of that logical system. So I don’t see how you can claim that classical logic is “inherently somewhat broken”.

If something is mathematically true, you can be absolutely sure of it, because a mathematical proof is unarguably correct.

This is true.

There are mathematical truths that can’t be proven.

This is false. If a mathematical statement cannot be proven, then it is not a mathematical truth. Mathematical truths are, tautologically, mathematical statements for which a proof has been given.

The promise of rationalism was that, by rational methods, we can eventually come to know anything that is true.

Rationalism doesn’t assert that all claims are subject to verification by logical reasoning, but that only statements that are subject to verification by logical reasoning can be said to be true.

proving mathematics correct—also failed.

I’m not sure what this is trying to say. Mathematics certainly has issues, namely the prevalence of classical logic as opposed to intuitionism, which limits its practical applicability, as does the use of set-theoretic foundations over type-theoretic ones. But if you fix your base logic and axioms, then everything derived therefrom is correct. You obviously can’t prove the axioms, or they would be theorems rather than axioms.

The idea that logical positivism “failed” is only possible in fields like philosophy where there is no expectation of logical rigor. It was merely generalized. Science today is concerned with statements subject to refutability. Logical positivism/empiricism is concerned with statements subject to verifiability, which are a fortiori subject to refutability.

Of course rationalism “failed” if you define it to mean a system that claims everything is subject to understanding via reason. But that’s disingenuous. Rationalism is just the view that knowledge of truth derives from reason. Philosophers seem not to like this because it points out that topics like metaphysics are inherently worthless. Obviously knowledge can be derived from empiricism as well, but such knowledge need only approximate truth.

Surprising amount of detail

David Chapman 2020-09-30

Commenting on: Where did you get that idea in the first place?

Thanks! I love that essay. I’ll probably cite it somewhere in the book.

An article that I think is relevant

Sean 2020-09-30

Commenting on: Where did you get that idea in the first place?

I found this article recently and immediately thought of your refrain “we make rationalism work”. It’s about the surprising amount of detail that reality has.

http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail

Practice?

Kenny 2020-09-25

Commenting on: Where did you get that idea in the first place?

I wonder if modern AI, i.e. machine learning, will eventually stumble on meta-rationality.

Having read you for a while, I was struck by something after reading this post about historical iron manufacturing:

Without some rational system (that’s good enough), all reasoning is ‘reasonableness’, even thinking ore deposits would replenish themselves (which is true for ‘bog iron’) after enough time:

As with many ancient technologies, there is a triumph of practice over understanding in all of this; the workers have mastered the how but not the why. Lacking an understanding of geology, for instance, meant that pre-modern miners, if the ore vein hit a fault line (which might displace the vein, making it impossible to follow directly), had to resort to sinking shafts and exploratory mining in an effort to ‘find’ it again. In many cases ancient miners seem to have simply abandoned the works when the vein had moved only a short distance because they couldn’t manage to find it again. Likewise, there was a common belief (e.g. Plin. 34.49) that ore deposits, if just left alone for a period of years (often thirty) would replenish themselves, a belief that continues to appear in works on mining as late as the 18th century (and lest anyone be confused, they clearly believe this about underground deposits; they don’t mean bog iron). And so like many pre-modern industries, this was often a matter of knowing how without knowing why.

Is a (good enough) rational system of reasonableness possible? I’d guess your answer is ‘no’ and I’m inclined to agree, but I’m not that confident about it. Given that AI is (reasonably) targeting human-like intelligence, maybe we can (someday) cobble together a ‘reasonably rational’ artificial but human-like and human-level intelligence. (That seems possibly reasonable to me!)

(I think it’s reasonable that, in practice, AI people are mostly trying to mimic human behavior, because I’m relatively convinced that it’s not rationally possible to define or recognize ‘intelligence’ in general.)

One thing that I think is interesting about machine learning is that it’s not entirely rational, or even that much at all. Practice seems to have far outstripped the capacity for rational theories to explain it fully. Like historical mining (and probably modern mining too still!), machine learning is mostly reasonable. Machine learning products, e.g. trained models, seem to be obviously reasonable too, which is fascinating because of how capable they are anyways. (Tho, and I think you’d agree, we shouldn’t be surprised about this!)

AI for math

David Chapman 2020-09-23

Commenting on: Where did you get that idea in the first place?

I don’t know enough about the Olympiad to have much of an opinion. It sounds similar to the Putnam Competition, which I know a little about. The problems there usually depend on recognizing some trivial trick, such as an algebraic simplification based on a coincidence of arithmetic, that makes the solution simple.

It’s plausible that current-generation theorem provers, plus maybe some knowledge base of typical competition problem trick patterns, would do well at that.

Generally, the pattern over the past 70 years has been that computers are good at things people are bad at (like math) and can’t do things people find easy (like making breakfast).

AI for solving IMO problems

Timothy Johnson 2020-09-23

Commenting on: Where did you get that idea in the first place?

I learned this week that some AI researchers are now attempting to use theorem provers to solve math problems from the IMO: https://www.quantamagazine.org/at-the-international-mathematical-olympiad-artificial-intelligence-prepares-to-go-for-the-gold-20200921/

As you suggest in this post, it sounds like the hardest step is to even get started, since so many problems require a clever construction. What do you think their chances are of making any headway?

I distinctly remember my own

Olga 2020-09-09

Commenting on: Are eggplants fruits?

I distinctly remember my own eggplant-like a-ha moment. At the peak of animistic magical thinking in my childhood, I pretended that all objects have souls. That worked pretty well until one day, while eating a watermelon, I got stuck wondering whether the cut-up parts of the watermelon were new objects or the same one, and what about the seeds that were left after the flesh was eaten?

Whenever I came across Plato’s world of ideal forms afterwards, I thought about that watermelon and considered his theory a failed idealistic attempt at defining separate objects.

That makes me wonder about the relationship between the explicit use of metarational thinking (considered more characteristic of 30+, PhD folks) and the intuitive use that is always there in practical life but not exposed. Stages of development suggest that a person should first learn systems and then understand their limitations. It would be interesting to know whether early exposure to the concept of nebulosity would help progress to metarational thinking more easily, or hinder the mastery of rational systems.

Great examples

Olga 2020-09-09

Commenting on: Reductio ad reductionem

Thank you very much for the examples; I finally understood how postmodern critique also applies to the natural sciences. Before that, I assumed it mostly applied to the social sciences.

Rationalists are mostly physicalists

David Chapman 2020-09-07

Commenting on: Reference: rationalism’s reality problem

Starting with the logical positivists, nearly all rationalists have been committed to physicalism. So the non-physicality of their theories of reference is a problem for them, at least!

For someone who isn’t committed to physicalism, this may not be a problem. However, there aren’t any good non-physical theories of reference either, as far as I know.

The stuff of reference

Julia 2020-09-07

Commenting on: Reference: rationalism’s reality problem

Though I have some intuitions about the deficits of the correspondence theory of truth, I have a hard time understanding your points of criticism, e.g. “How can an abstraction interact with a dog?”
Why do you need those things to interact or “physically connect”? Do other things do that? Maybe I don’t understand traditional theories of reference enough to understand what doesn’t work about them.

I guess I don’t see why a concept/a sentence/an abstraction on the one hand, and a thing or a situation in the real world that they refer to on the other hand, would have to be from the same “stuff” (i.e. ideal vs. material). In fact, I don’t see why you would have to invoke this classification at all, since even the things that sentences can refer to (e.g. situations) aren’t strictly material but packed with “ideas” (for lack of a better term). Or am I completely missing the point here?

I would love if you could elaborate on that.