Comments on “A first lesson in meta-rationality”

Thanks, had some enjoyable

Dunkelheit 2016-07-23

Thanks, had some enjoyable time solving them. I think the defining feature of these puzzles is that you cannot be totally sure that you have solved them! Disagreements about what constitutes a solution are entirely possible (just like for an ethical problem for example). I am not even sure how to prove that a solution like “objects that are exactly like the ones on the left go to the left and all others go to the right” is bogus. So yes, seems like a pretty good example of what you call “nebulosity”.

how to prove that a solution

Dunkelheit 2016-07-23

how to prove that a solution like “objects that are exactly like the ones on the left go to the left and all others go to the right” is bogus.

One way to do it would be to reformulate the puzzle more rigorously: suppose Alice supplies Bob with a sequence of objects and then tells him on which side the objects go (one by one). Bob’s task is to find the algorithm of minimal length that decides the side of the objects revealed so far. Intuitively, all fun-spoiling algorithms will have a length proportional to the length of the sequence revealed so far, while the minimal length of the algorithm will converge towards some constant value. The algorithm that achieves this value is the true answer to the puzzle. At this point the problem looks suspiciously like calculating Kolmogorov-Chaitin complexity.
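A rough sketch of what I mean, with invented objects, candidate rules, and “description lengths” standing in for real program lengths:

```python
# Sketch of the reformulation: among candidate rules that classify all the
# objects revealed so far correctly, prefer the one with the shortest
# description. Rules, "lengths", and objects are invented for illustration.

candidate_rules = [
    # (description length, name, predicate mapping an object to "left"/"right")
    (3, "triangles left",     lambda o: "left" if o["sides"] == 3 else "right"),
    (5, "big things left",    lambda o: "left" if o["area"] > 10 else "right"),
    (50, "memorise the list", lambda o: o.get("memorised", "right")),  # fun-spoiling lookup table
]

def best_rule(revealed):
    """Shortest rule consistent with the (object, side) pairs revealed so far."""
    consistent = [(length, name) for length, name, rule in candidate_rules
                  if all(rule(obj) == side for obj, side in revealed)]
    return min(consistent) if consistent else None

revealed = [({"sides": 3, "area": 12}, "left"), ({"sides": 4, "area": 2}, "right")]
print(best_rule(revealed))  # (3, 'triangles left'): the short rule wins over the lookup table
```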

The One True Rule

David Chapman 2016-07-23

Yes, given six instances, or any finite number of instances, there will always be multiple valid rules. (This is the general Problem of Induction.)

However, it’s a principle of the construction of Bongard problems that one rule should be obviously better than any alternatives. Also, “is one of these six things” is explicitly considered an invalid rule. I didn’t include all such details, because the general form of the problems is meant to be intuitive by induction from the easy examples :-)

As far as I know, no one has ever tried to prove that the intended solution is best (according to a minimum-program-length criterion, or any other).

However, I’d be moderately surprised if anyone could come up with a plausible alternative rule for any of the standard problems.

Analogies all the way down

mtraven 2016-07-23

Have you dived into the rest of Hofstadter’s later work (like Fluid Concepts and Creative Analogies) and that of his students? He’s been chasing analogy-making as the master key to AI for decades; always sounded promising but I don’t have a great sense of how much progress has been made.

Later work

David Chapman 2016-07-23

I did read his Copycat stuff when he did a sabbatical year at MIT-AI, and I think read at least abstracts of subsequent work until I left the field. I found it unimpressive. I think he’s a very good philosopher and an astonishing writer, but not so great as a hacker. And you do have to be a great hacker to actually do AI.

I only learned of Fluid Concepts and Creative Analogies when researching this post. Weird factoid: a copy of it was the very first thing ever sold by Amazon.

Doing a bit of casual looking-around, it seemed that Foundalis’ thesis is the best (and pretty much last) thing to have come out of his lab.

Warning: comment contains Bongard solution spoilers...

lk 2016-07-24

These Bongard problems are fascinating, and have been fun to solve! I’ll have to follow up your links - it would be interesting to me to know a little more about how people go about doing this. I haven’t seen them before and from my experience at trying the ones you show here, the Feynman-style ‘bag of tricks’ idea you mention in How To Think Real Good is very relevant - they are much easier if you have the right cached tool.

I found the stability one very easy, and the one with the three dots didn’t take long either. For both of them I have a good cached tool - from my physics-maths background I’m used to projecting stuff against the axes, or seeing if something is stable against a small perturbation. Those ideas obviously appear somewhere in my ‘list of things to try’, and something about the situation prompted me to try them. (For the stability one it was a very rapid thought, connecting that ‘broken chair’ with the idea of instability; I’m not so sure what it was about the dots that prompted the connection.)

On the other hand, the circle-midpoint one took me ages. After lots of false starts I had some success from just sort of eyeballing the whole lot for a while, and noticing that some of the pictures on the left were kind of neater than the similar ones on the right. And then eventually I connected that vague judgment with the midpoint thing. I don’t think there’s anything in my personal box of tools that’s relevant to that situation!

I wrote about having a similar experience with the Cognitive Reflection Test.

Newbigin, Polanyi, & intuition in the scientific process

Julia K. 2016-07-24

20th century Hungarian scientist Michael Polanyi wrote about the role intuition plays in formulating scientific hypotheses.

Lesslie Newbigin’s very readable little book on epistemology, Proper Confidence (1995), paraphrases Polanyi’s work The Tacit Dimension (1966):

“Obviously, there are no rules for making new discoveries. Discovery means learning something new which was not known before. It involves a venture beyond what is known. How does it come about that discoveries in science are made?…”

“Advances in scientific knowledge are made by recognizing a problem and seeking a solution. It may be a problem which no one has recognized before. But what exactly is a problem? Is it something known or something unknown? If it is known, why is there a problem? If it is unknown, how would we recognize a solution when we found it? The answer that Polanyi proposes to this old conundrum is as follows: Recognition of a problem is an awareness, an intuition, that there is something - a pattern or a harmony waiting to be found - hidden in the apparent haphazardness of empirical reality.

“This cannot be more than an intuition. And it may prove to have been an illusion. There have been scientists who have spent years seeking solutions to problems which were illusory. One might refer to the centuries of effort devoted to the discovery of perpetual motion or of the ‘philosopher’s stone,’ but there are plenty of modern examples. Scientific discovery involves such gifts as intuition, imagination to project possible patterns, prudence coupled with a willingness to take risks, and courage and patience in pursuing a long and arduous course of investigation. At every point along this course, there is need of personal judgment in deciding whether a pattern is significant or merely random. None of these things can be covered by formal rules. They all involve the personal commitment of the scientist.”

And, regarding “mushy” things like the not-technically triangles vs. circles Bongard problem, where we rely on a “fluidity of descriptions” when modeling:

“Polanyi points out that knowing is always part of a tradition. The mental activity involved in trying to make reliable contact with reality can function only by indwelling a tradition of language, concepts, models, images, and assumptions of many kinds which function as the lenses through which we try to find out what is really there.

“The critical movement initiated by Descartes sought to subject all tradition to question and to build a structure of knowledge which would be accredited by pure unaided reason, having the precision and the certainty of mathematics… No one can deny the achievements of the critical period. But it was a mistake to suppose that the enterprise of knowing the reality of which we are a part can begin, so to speak, with an empty page.”

Zendo

Julia K. 2016-07-24

The game Zendo, which uses Icehouse Pieces, is basically Bongard problems. Lots of fun for 2+ players!

Tacit Knowledge

David Chapman 2016-07-24

Thank you!

I read The Tacit Dimension decades ago, and I agree: it’s very good, and relevant. I should probably go back and read it again sometime.

Footnote 15 [Spoiler]

Alex 2016-07-25

I came up with a different solution than the footnote:

“Objects where the width of the base is at least that of the entire object”

Is this equivalent to the gravity solution? I think not. Why?

1) With the possible exception of the Meta-Bongard (of which Foundalis himself says on his homepage that it does not strictly follow the rules) all the problems presented are about the geometry of the objects. Maybe I lack some fundamental insight into physics, but gravity is not a geometric property?!

2) In my understanding, once you go down the rabbit hole of allowing high-level arguments such as physics, you have to account for distribution of mass. It is obviously perfectly possible to have P-shaped objects that do stand upright (I’m looking at a desk lamp right now as proof-of-concept) and within the rules of the game I see no a priori reason why equal distribution of mass should be assumed. I also do not think that Occam can solve this for us. For sure gravity is a more complex model than width, and if I took more words to describe my solution based on width that is only because the word “gravity” implies so much more, which is kind of my point.
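To make the difference concrete, here is a rough sketch (invented numbers, crude rectangle model) of how the geometric rule and the physical rule can disagree on a desk-lamp-like object:

```python
# Rough sketch of why the two rules can disagree. An object is modelled as
# axis-aligned rectangles: (x_left, width, y_bottom, height, mass). Numbers invented.

def base_width_rule(rects):
    """Geometric rule: stable iff the base is at least as wide as the whole object."""
    base = min(rects, key=lambda r: r[2])                 # lowest rectangle
    left = min(r[0] for r in rects)
    right = max(r[0] + r[1] for r in rects)
    return base[0] <= left and base[0] + base[1] >= right

def gravity_rule(rects):
    """Physical rule: stable iff the centre of mass sits over the base."""
    base = min(rects, key=lambda r: r[2])
    total_mass = sum(r[4] for r in rects)
    com_x = sum((r[0] + r[1] / 2) * r[4] for r in rects) / total_mass
    return base[0] <= com_x <= base[0] + base[1]

# A desk-lamp-like "P": heavy wide foot, thin stem, light head overhanging the foot.
lamp = [(0, 4, 0, 1, 10),   # foot
        (1, 1, 1, 6, 1),    # stem
        (1, 5, 7, 1, 1)]    # head, sticking out to x = 6

print(base_width_rule(lamp))  # False: the object is wider than its base
print(gravity_rule(lamp))     # True: nearly all the mass sits over the foot
```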

I would have made this comment anyways because I’m just that much of a nitpicker. However, in the light of David: “I’d be moderately surprised if anyone could come up with a plausible alternative rule for any of the standard problems.” [Is this a standard problem?] and lk: “broken chair” [But this is not how you are supposed to look at these things?!] I find my observation even more interesting.

I do realize that the notion that there actually are rules to follow when constructing a Bongard-Problem (cf. http://www.foundalis.com/res/invalBP.html) is a very Stage 4 one. But I do not think that this is me “just not getting it”. Rather, I am not convinced by the proposition that Bongard-Problems might be human-complete in the sense that every human-solvable problem can be turned into a Bongard-Problem (presumably in polynomial time?). My impression is that the human mind is able to solve Bongard-Problems only because they are a somehow defined class of problems and we have meta-knowledge of how that class of problems works. This is not that far from the Sudoku-example in my opinion. The same is true for standardized intelligence tests. Once you are really serious about allowing for any solution without meta-rules, they break down. This can be trivially seen in the verbal branches which would not survive an encounter with the later Wittgenstein.

Footnote 15 [Spoiler]

David Chapman 2016-07-25

Thanks! Yes, I’m moderately surprised :-)

It’s easily “fixable” (a trapezoid with a base not much smaller than the top would be stable), but probably that’s not really relevant to your point here, which is about whether Bongard problems are human-complete.

That question is somewhat ill-defined because the “class of Bongard problems” is also ill-defined (the page by Foundalis notwithstanding). I gather that Bongard himself was vague about this (but haven’t read his book). Various authors have extended the set in various directions, and there’s no Rules Committee to say which are valid.

If we did restrict them to ones that are “strictly geometrical,” it’s much more plausible that one could write an AI program to solve them. It would have to have a great deal of geometrical knowledge, but probably that’s bounded in a way that general human knowledge isn’t. Writing it could be a lot of fun! It might even be useful too; I don’t know.
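A sketch of what the outer loop of such a program might look like; the feature extractors and panels below are invented stand-ins, and the geometrical knowledge is where all the real work would go:

```python
# Skeleton of a solver for "strictly geometrical" problems: compute numeric
# features of each panel and look for one feature whose values cleanly separate
# the six left panels from the six right ones. Everything here is a toy stand-in.

FEATURES = {
    "object count": lambda panel: len(panel["objects"]),
    "total area":   lambda panel: sum(o["area"] for o in panel["objects"]),
    "max vertices": lambda panel: max(o["vertices"] for o in panel["objects"]),
}

def separating_feature(left_panels, right_panels):
    """Return (feature name, threshold) that cleanly splits left from right, if any."""
    for name, f in FEATURES.items():
        lv = [f(p) for p in left_panels]
        rv = [f(p) for p in right_panels]
        if max(lv) < min(rv):
            return name, (max(lv) + min(rv)) / 2   # left side has smaller values
        if max(rv) < min(lv):
            return name, (max(rv) + min(lv)) / 2   # right side has smaller values
    return None

left = [{"objects": [{"area": 5, "vertices": 3}]} for _ in range(6)]
right = [{"objects": [{"area": 5, "vertices": 4}, {"area": 2, "vertices": 4}]} for _ in range(6)]
print(separating_feature(left, right))  # ('object count', 1.5)
```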

Problem Choosing

ssica3003 2016-07-26

Enjoyed this post and excited about 4.2… etc. Thanks!

RE: footnote 9: problem choosing. Would love to see this. I have always tried to explain to others that the first step of scientific method (have a hypothesis) is where philosophy lives, and is where much of the hard work is, to no avail. Would love to see this explained.

DL might be able to solve these problems

mjgeddes 2016-07-28

Hi,

Deep learning is definitely not ‘mostly brute force’. The most common definition of deep learning is exactly that of solving a Bongard-type problem - DL involves taking some data, then working backwards to construct the model (‘rule’ or ‘program’) that generates that data.

3 types of inference

mjgeddes 2016-07-28

There are basically 3 types of inference - deduction (symbolic), induction (probability), and abduction (concept learning and inference-to-best-explanation).

I think it’s that 3rd one (concept learning and IBE) that’s still not well understood, and perhaps this is what you mean when you talk about ‘meta-rationality’.

I think all 3 types of inference need to work together to get a full account of rationality.

Abduction and deep learning

David Chapman 2016-07-28

Hi Marc,

I think you are right about abduction! I actually intended to mention it, in an early version of this post, but decided to leave it out just because it was an extra complication.

It’s probably best that we agree to disagree about deep learning. We could argue the technical issues, and one of us might convince the other, or not; but this doesn’t seem the place for that. Anyway, ultimately, the only proof would be for someone to actually succeed. If you do that, I will be very impressed!

David

Alternative method for "the dreaded metaphysics" problems

Dirdle 2016-07-29

I found the second one hard and the third easy. You don’t have to realise that line segments can include angles to solve the second - instead you can say the ones on the left have a vertex with three lines, and the ones on the right a vertex with five, and the number of segments is irrelevant. Then the third problem is very straightforward, because you’re already thinking in terms of vertices being anywhere that a line branches, and just go back to counting segments.

This could be ‘fixed’ by adding a collection with two connected three-line vertices on the right (total of five lines).

The number of lines radiating from a vertex

David Chapman 2016-07-29

Yes, interestingly, I solved the second one by counting the number of lines radiating from a vertex too. (Apparently this is not the “usual” way.) I still didn’t find the third one easy. I’m not very good at puzzles, really!

Arguably the lines vs vertices formulations of the solution to the second one support Alex’s suggestion above that some of these have more than one correct solution. In this case, the two alternatives might be considered “the same rule, expressed different ways,” but that runs into the nebulosity of what “same” and “different” mean!

Lines radiating from a vertex

Gerry Quinn 2016-07-29

That’s how I solved it too. I found the third one a bit harder, but not terribly much so, though I did initially fall into the ‘trap’ set by the previous ones.

One of the problems (“Initially, you probably see a variety of shapes, each with a tiny blob attached.”) I solved without looking at the objects on the right. I quickly saw the common factor, and just had to look at the other side to confirm it.

I guess some Bongard problems have better-determined solutions than others.

"Church-Turing Thesis" and "Human-Complete problems"

Andreas Keller 2016-07-30

The statement you are citing, “We know humans can’t do anything more than a computer can,” is just a hypothesis, and I think it is simply wrong. This is the (often implicit) presupposition of AI that is the reason for its failure.

In mathematics, mathematical objects are known for which there is no complete formal description. For example, the set of computable total functions is not enumerable (as can be shown with a rather simple diagonalization proof). So any formal system producing the algorithms producing such functions is incomplete. This can be interpreted in such a way that any formal system can only produce a limited range of patterns.
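The diagonalization can be illustrated with a toy finite list standing in for the claimed enumeration (a genuine enumeration of all total computable functions is exactly what cannot exist):

```python
# Toy illustration of the diagonalization. Pretend this (finite) list were an
# enumeration of all total computable functions; the diagonal function
# g(n) = f_n(n) + 1 is itself total and computable, yet differs from every
# listed function at its own index, so the list was necessarily incomplete.

claimed_enumeration = [
    lambda n: 0,
    lambda n: n,
    lambda n: n * n,
]

def g(n):
    return claimed_enumeration[n](n) + 1

for i, f in enumerate(claimed_enumeration):
    assert g(i) != f(i)        # g escapes the enumeration at index i

print([g(i) for i in range(len(claimed_enumeration))])   # [1, 2, 5]
```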

However, you can always produce another algorithm (or formalism, or systematic piece of knowledge) not covered by your previous system. The current system can always be extended. The process of this extension is not some mysterious woo thing but can be described even by a computable function (the term “productive function” has been coined here). If you try to build this extension mechanism into the formal system itself, the result is again a limited formal system, not a generally intelligent system. But it looks like humans have no problem doing this. So the Church-Turing-Thesis only applies to formal systems. Humans are physical systems and it looks like we are not describable completely in terms of a single formal theory. While any process going on in our cognition might be described formally in hindsight, it is not possible to create a single formal theory describing them all.

What is missing from formal systems is a complete reference to themselves. You can apply the productive function from the outside and get a new, more powerful or different formal system, but if you build this external reference into the formal theory, you get a new formal system whose external point of reference has shifted elsewhere. Formal systems cannot refer to themselves in their totality. Humans seem able to do this or at least to always generate an external (meta-)viewpoint to any piece of formal knowledge they are using. The result is that we are historically or biographically developing in physical time. Formal systems cannot do that. Anything that is derivable inside a formalism is derivable right from the beginning. A formal system cannot describe its own historical evolution, but human cognition is evolving.

A result of this is that there is no complete formal system describing human cognition. This also means that every piece of knowledge is special. Therefore, the concept of a “human-complete problem” is, in my view, nonsensical. There is no single problem such that knowing how to solve it would give you the means of solving every problem humans can solve. If such a problem (and the knowledge for its solution) existed, this would mean that there would be a complete formal theory (a complete description in terms of a finite amount of knowledge) of how human cognition works. But this does not exist. Instead, human cognition develops special knowledge for each special type of problem. There is no architecture for general intelligence. Cognition develops historically and the total process cannot be described in terms of any single algorithm or formalism. For any theory of human cognition, there is the possibility of a process that moves it out of the scope of that theory. You can think of this as applying a productive function to the formal theory describing the current stage of development.

As long as AI and “Cognitive Science” continue looking for the generally intelligent algorithm or formalism, nothing will come out of it. Since cognition is developing historically, there are no fixed laws of thought, so there cannot even be a “cognitive science” in the strict sense of the term “science”. Instead, cognitive psychology should be viewed as a historical discipline, belonging in the humanities. This does not mean that there is any woo factor here, just that exact descriptions are only possible in hindsight, by extending the formal theories used to describe thought processes. There is no woo factor involved in the application of a productive function to a formalism describing an enumerable subset of a non-enumerable productive set: you just apply that function and get a new element and an extended formalism. You can always do this, although there is no complete formal description of it.

So it looks like the basic assumption of scientism that it is possible to describe all of reality in a single formalism is wrong. You always need a multitude of systems and you have to switch between them and generate new ones. It is not possible to unify all of them into a single system without leaving gaps. The incompleteness will come in the form of gaps or in the form of vagueness (which means using concepts with incomplete definitions that might have to be extended, i.e. here the gaps are in the language used). Every formalism contains a finite amount of information and can only describe a finite range of patterns or regularity.

So I think what you describe by the terms “patterned” and “nebulous” is a necessary result of this limitedness of all formal systems.

The "equidistant points"

S. T. Silva 2016-07-30

The “equidistant points” still doesn’t seem true to me. (I actually tried to check and thought the fourth right one disproved it.)
I intend to buy TTAoLaB.
The “circle next to concavity” seems to me easier than you want it, with the second left and first right side by side - I saw those at a glance, checked one more, then checked all for “if the concavity were upward, the circle would be to its left/right”.
I got the meta-problem right in everything except the problem with Greek letters, which I couldn’t solve thus far …
From a sample of one: the mushy problem looked persuasive for someone having the “this is normal rationality” reaction …
So, thanks!

27 modes of reasoning are enough for AGI

mjgeddes 2016-07-30

Andreas,

The Godel limitations apply only to formal (deductive) reasoning. Computers can easily escape these limitations by supplementing deductive reasoning with inductive and abductive reasoning, which are based on the use of empirical data. For example, our ability to ‘see’ the truth of Godel statements actually relies on the empirical fact that we’ve never in practice found any inconsistencies in the basic system of arithmetic we use - adding this extra empirical fact lets us escape Godel at the cost of a slight degree of uncertainty. And computers can do the same.

I do agree that there is not likely to be one unified algorithm that can capture general intelligence - indeed the very essence of intelligence is exactly what you say - it’s the ability to switch between different modes of explanation.

However, there is no reason why a small finite number of different algorithms working together can’t do the job. Not perfectly of course, but if you have different algorithms working together, each of which can give you a partial coverage of cognition in general, the idea is that all the different methods can complement each other and cover for each other’s weaknesses, such that the whole system can still be a very good approximation to cognition in general.

I have my own theory of AGI and it says that you only need a total of 27 different types of algorithms (based on reasoning systems for 27 ‘core knowledge domains’) to do the job.

Mushy problems [contains spoilers]

David Chapman 2016-07-31

From a sample of one: the mushy problem looked persuasive for someone having the “this is normal rationality” reaction …

Oh, great, I’m glad to hear that!

The “equidistant points” still doesn’t seem true to me. (I actually tried to check and thought the fourth right one disproved it.)

I’m not sure how you are counting, but either way the points are equally spaced along the vertical axis. They aren’t equidistant in the plane.

(Or, they are close to equidistant; these are scans of hand-drawn diagrams, so the distances are not precise. That’s part of the “mushiness” of the whole thing. The pictures aren’t meant to be imprecise representations of the problem; they are the problem. Generally, the “triangles” aren’t quite triangular, there are tiny gaps in “lines,” and so on.)
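Here’s a tiny sketch of the check I mean, with made-up coordinates and an arbitrary tolerance for the hand-drawn imprecision:

```python
# Project the points onto the vertical axis and ask whether the gaps between
# successive y-coordinates are equal, up to a tolerance that allows for the
# hand-drawn imprecision. Coordinates and tolerance are invented.

def equally_spaced_vertically(points, tolerance=0.15):
    ys = sorted(y for _, y in points)
    gaps = [b - a for a, b in zip(ys, ys[1:])]
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= tolerance * mean for g in gaps)

print(equally_spaced_vertically([(0.2, 1.0), (0.9, 2.05), (0.4, 3.0)]))  # True
print(equally_spaced_vertically([(0.2, 1.0), (0.9, 1.3), (0.4, 3.0)]))   # False
```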

What does “TTAoLaB” stand for?

the problem with Greek letters

Yeah, I was stuck on that one for a long time, too. The ones on the left have four “bare endpoints”; the ones on the right have two.

TTAoLaB = The Thrilling

S. T. Silva 2016-07-31

TTAoLaB = The Thrilling Adventures of Lovelace and Babbage. Thanks.

Thrilling Adventures

David Chapman 2016-07-31

Oh! Yes, that’s a wonderful book! Sidney Padua, the author, did a huge amount of historical research, and turned up some fascinating discoveries. The comic is funny and charming, but also teaches a surprising amount of computer science as well as history. Miraculous!

Map of knowledge - the 27 core knowledge domains

mjgeddes 2016-07-31

Here are my ‘27 core knowledge domains’ that represent all knowledge. I use the archaic letters of the Greek alphabet to symbolize each domain. I believe that any AGI system must have a modelling system for each of these 27 domains, and all of these together are sufficient.

The left-hand column is the mathematical domains. The middle column is the physics domains. And the right-hand column is the teleological (mental) domains. The domains are ordered in terms of decreasing abstraction, along two different dimensions: left-to-right (most abstract to least abstract) and top-to-bottom (most abstract to least abstract). The positions of the symbols on the page are intended to provide clues as to the relationships between the knowledge domains.

The beginning of infinity awaits....
http://www.zarzuelazen.com/CoreKnowledgeDomains2.html

Church-Turing thesis

Andreas Keller 2016-08-20

Dr. Kurt Ammon has just published his new paper here: http://arxiv.org/abs/1608.04672. I regard it as highly relevant with respect to the discussion on the Church-Turing thesis that was going on here. Kurt Ammon is a mathematician working on topics related to this problem. See also https://creativisticphilosophy.wordpress.com/2016/08/20/kurt-ammon-informal-physical-reasoning-processes/ for further information on Kurt Ammon’s previous publications.

Comment

David Chapman 2016-08-21

Andreas, thank you for this!

Sorry you had trouble posting your comment. The spam filter wasn’t sure about it, so it decided I had to approve it manually.

Great read. Thank you. Question...

Michael 2016-09-08

So it sounds like the point we’re about to broach here is the harmony between the relationship of perception and this nebulous, ever changing haze that is reality, full of patterns.

One point you made was really responsible for this thought/question, and that was the fact that reality doesn’t impose the bounds necessary to comprehend it or chop up the information being perceived by us into something comprehensible. We do that.

My question is: Could intelligence be solely dependent on biological mechanisms, or the harmonious interplay between two extremely complex systems? With the biological (genetic, coded, whatever) mechanisms of an object, and perhaps even the very substance of an object being informed by the environment (reality), thus attuning the object’s capacity to parse reality within the creation of implicit bounds being created as the systems “play” and interact with each other.

This is a tough thought to get out right now for some reason. Basically I’m trying to hash out whether a constantly self-modifying program (meta-programming) utilizing an array of sensors that detect various aspects of reality to create an analog of the nebulousness of reality (a long way of saying introduce environmental randomness), interacting with another similar program or community of similar programs could, with a basic set of initial conditions, eventually develop stronger AI?

I think we get so hung up on storage/memory requirements, processing power, exponential growth and so on that evolving the state of an object, and reducing the operations using simple logic like “the only thing that matters is the previous state of the object, the current state, and the future state based on the current and preceding stimulus” is largely left unexplored.

Brains naturally prune unused connections. In the same way, the more an object can self-modify its state and heuristics by learning from the inputs, outputs, and by being governed using some basic system imposed by the programmer, other objects or environment, the faster the outputs would be determined because incorrect answers or undesirable outcomes should be used less and less.

I’d never heard of most of the things you discussed before now, so please excuse the poorly worded comment/question. I simply have a passion for AI and had to ask myself what the hell was happening inside me to make answering those puzzles so damn easy for me. FWIW I couldn’t figure it out. Yet. ;)

Peace

Read as philosophy, this is mind blowing stuff - thank you

Benjamin Taylor 2016-11-18

Thanks for curating and presenting this. I have been exploring, as a practitioner with a slight romantic yen for academia, ‘systems thinking’ for some time. In that universe, like the Terry Pratchett tree frog, the moment you think you have got it sussed, you see a new ring of leaf-edges on the horizon. This has now opened up huge horizons for me, and as a meta model for meta cognition, explaining for example why consultants shouldn’t codify method (and, perhaps, why method /can’t/ be codified). As a teaching tool I think it could be one of the best.

It was fun and slightly punctured my bubble to read some of the comments, with the technical discussions over the rule base for the game and the possibilities of human-created non-human ‘intelligence’. I think, though worthy and meaningful, they miss the point. The point, for me, is that the rules are contestable. This post is like the inflection point between the early Wittgenstein (‘the world is everything that is the case’ and the bit about the ladder we climb and pull up after ourselves, and ‘whereof we cannot speak, thereof we must pass over in silence’ - the logical absolutist sucked into dualism by the mystical) and the later Wittgenstein - word games and language acts (basically a meaningful reassertion of ‘it’s tortoises all the way down’). Cool.

Philosophy & management consulting

David Chapman 2016-11-19

Thank you, glad you liked it! I expect to write much more about meta-systematic cognition in coming months. If you haven’t already seen it, this post is an abstract overview, and most of the other recent posts in the metablog are also relevant.

I didn’t mention in this post that I am drawing heavily on Robert Kegan’s adult developmental theory, which makes meta-systematicity “stage 5” of cognitive development.

Kegan was an academic experimental psychologist at Harvard, but for the past 20 years seems to have put most of his energy into management consulting for executive development. I know little about that work (although I intend to learn more soon). You might want to look into it.

why consultants shouldn’t codify method (and, perhaps, why method /can’t/ be codified)

Right. This is a key to “stage 5” in Kegan’s scheme. Everything in reality is both nebulous (vague, ambiguous, constantly-changing) and patterned. Systems and methods rely on patterns, and tend to break down in the face of nebulosity. Skill in working with nebulosity is meta-systematic “fluidity.”

the inflection point between the early and later Wittgenstein

Right, exactly. Wittgenstein was one of the first people to begin to understand these issues (in Philosophical Investigations). Heidegger did so also, a bit earlier, and perhaps in greater depth, although considerably less clearly.

I watched the Brian Cantwell

Duckland 2016-11-29

I watched the Brian Cantwell Smith talk and was curious for more. Tracked down this intro for an upcoming book of his – https://web.archive.org/web/20140311060517/http://www.ageofsignificance.org/aos/aos-v1c0.xhtml

Looks good, more detailed than his talk

BCS upcoming

David Chapman 2016-11-29

Yes… that book (with various titles) has been upcoming since about 1987, unfortunately. He’s famous for writer’s block. It’s frustrating; he has a depth of understanding that’s pretty well unmatched, but most of it will probably never appear in public.

The Age of Significance

David Chapman 2016-11-29

That preface is excellent, though, and I would recommend it to anyone who is interested!

Writing grand prolegomena to books that never come into being—because they are too gigantic to actually write—is, of course, a fault I share with him. However, the decision to put my books in web-only form has resulted in releasing little bits, which is maybe better than publishing nothing more than the prefaces.

There's a quite good YouTube

Armot 2017-01-29

There’s a quite good YouTube channel about math explained from a stage 5 point of view: 3Blue1Brown.

Anyone who likes recreational math or understanding it should take a look, the content is worth gold.

Fleas in the Jar

David Chapman 2017-01-30

[Benjamin Taylor left the interesting comment below, but unfortunately the spam filter mistakenly rejected it. I’m reposting it at his request.]

I commented on this ‘flea jar’ paradigm post from What’s the Pont:
https://whatsthepont.com/2017/01/29/the-fleas-in-the-jar-experiment-who-kills-innovation-the-jar-the-fleas-or-both/#comment-21554

There’s something in here about the (in)ability to switch contexts. Inter-contextuality, meta-contextuality, whatever it is or whatever you call it is how we get creative in the first place – but somehow we seem programmed to (quite often intentionally) limit ourselves to a single context really easily.

If you put your post here together with the excellent post at Meaningness:
https://meaningness.com/bongard-meta-rationality
(which, for me, shows why any method/toolset/worldview is necessarily limited in a world with infinite potential rulesets), I think you get something interesting.

Systems/cybernetics/complexity teaches multiple contextual views but seems to do so only at ‘peak moments’ – before subsiding back into restricted contextuality as it is ‘applied’ to ‘specialisms’.

This begs questions I don’t have time to explore here, but Bongard games are the best example for me of how you can teach this point – perhaps experientially – and which might have wider application.

enjoyed this

jz 2017-04-17

enjoyed this

This post inspired a presentation at SCiO which is linked below

Benjamin Taylor 2017-10-16

Very interesting presentation

David Chapman 2017-10-16

Thank you for the link—really interesting to see these ideas being put into practical use!

Rationality from a planning perspective

Thorbjoern Mann 2017-10-16

For all the challenging and entertaining value of these explorations into the various niches of thinking, there is something bugging me about the notion of ‘rationality’ as the file drawer into which the discussion has been categorized. From the point of view of design and planning, from which I have tried to explore issues such as the problem of predicting how a proposed plan will perform over time, there is one aspect that explains why such prediction is so ‘hard’ – especially for social issues – and why the very effort to do it with greater certainty (as an argument for acceptance of the plan) may actually be counterproductive. That aspect may be crudely described as the desire of people to ‘make a difference’ in their lives, at least once they go beyond the basic problems of survival and of being accepted in their social context.
‘Making a difference’ can then be tracked through many of the stages described in the article (I haven’t read the book): getting better at recognizing, understanding, then using the existing rules; but then going on to breaking some of them (breaking the rules is meaningless unless one has recognized the current rules) and making one’s own rules. The new rules are meaningless if they are just ‘new’ but can’t be assessed against some principles, criteria, standards (also current or ‘new’) of plausibility, functionality, ethics, aesthetics: are they appealing to those others from whom you wish to be recognized as ‘different’?

Sorry, the comment went to

Thorbjoern Mann 2017-10-16

Sorry, the comment went to the wrong page, my mistake.

I’m curious what you make of

Aysja 2020-06-22

I’m curious what you make of this paper (wrt deep learning not being able to tackle Bongard problems): https://www.researchgate.net/profile/Tanner_Bohn/publication/341166652_A_Deeper_Look_at_Bongard_Problems/links/5ebb7252a6fdcc90d6723d84/A-Deeper-Look-at-Bongard-Problems.pdf

While I think the low robustness of the model (unable to generalize past the 12 tiles) is concerning, it’s impressive to me that they’re able to achieve near 100% accuracy on 200 BPs under some conditions. Accuracy here means the ability to separate the 12 tiles correctly, in a feature space defined using a linear combination of feature maps from a CNN. The ability of the model to use different levels of abstraction from the features in the network reminds me of your comment on different levels of description.

I think this model is certainly not an accurate portrayal of the human mind solving a BP (in some cases it clearly picks up on features that humans wouldn’t use), but I do find it interesting that it’s able to perform so well, especially given the abstract nature of many BPs.

Anyways, thoughts?

CNNs applied to Bongard problems

David Chapman 2020-06-22

Thank you for pointing this work out to me!

I’ve only done a quick skim, because it didn’t seem there was enough detail to fully understand what they did, even if one did a close reading.

However… Generally speaking, CNNs are extraordinarily good at finding predictive local features, and lousy at understanding large-scale spatial relationships. And, in cases in which they appear to be using large-scale spatial relationships, it usually turns out that they are “cheating” somehow.

What the authors say is that they pretrained on tasks that already involve features similar to those sufficient to easily solve the specific problems they posed. (“Easily” = logistic regression of extracted features.) So, this doesn’t seem to address the issues they rightly point out make Bongard problems difficult in general.
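For concreteness, here is roughly what “logistic regression of extracted features” amounts to. This is my paraphrase, not the authors’ code; it assumes a recent torchvision and scikit-learn, and the tile filenames are hypothetical.

```python
# Embed each of the 12 tiles with a pretrained CNN, then fit a linear
# classifier to separate the six left tiles from the six right ones.
# A sketch of the general setup, not the paper's code; filenames are hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()      # keep the 512-dim feature vector, drop the classifier head
cnn.eval()

preprocess = T.Compose([T.Resize((224, 224)),
                        T.Grayscale(num_output_channels=3),
                        T.ToTensor()])

def embed(path):
    with torch.no_grad():
        return cnn(preprocess(Image.open(path)).unsqueeze(0)).squeeze(0).numpy()

tile_paths = [f"bp_tile_{i}.png" for i in range(12)]   # hypothetical filenames
labels = [0] * 6 + [1] * 6                             # left side vs right side
features = [embed(p) for p in tile_paths]
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))   # "accuracy" here means separating these 12 tiles
```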

Fig 5 is sort of interesting.

  • a-c are straightforward local-feature problems.
  • In d, the network cheated, as they say.
  • I’m not sure why they say e is uninterpretable; a unit that takes input from the whole field can easily discriminate “left of picture” vs. “right of picture,” and that seems to be what it’s doing.
  • f and g are easy local feature problems; the only mystery is the activation map. As they say, “a standard method of visualizing what a CNN has learned is frequently not well-suited for Bongard problem”; it would take some digging in to see just why not, but I don’t expect any revelations from that.
  • I suspect in h the network is “cheating” by computing the average local feature density rather than counting objects.

thank you for the puzzles!

Rosa Zubizarreta 2020-08-12

And, particularly resonating with your point about “Reality does not come divided up into bits; we have to do that. But we can’t do it arbitrarily, and just impose whatever boundaries we like, because reality is also patterned.” Reminds me very much of Eugene Gendlin’s “Experiencing and the Creation of Meaning”....

Turing and Description Length

Bunthut 2020-11-08

However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.

I don’t think that definition does what you want it to do. For example, you often say that not even some simple task (like the Bongard problems here) can be solved systematically, with the implication that intelligence in general can’t be either. However, minimum description length isn’t compositional that way: it can well be that a task is embedded in another, such that the bigger task has a shorter description.

If we look at any task in total isolation, we will often find that it requires vast background knowledge, and an explanation of how to do the task in isolation will require spelling out that background knowledge - but if we think of the task as in a broader environment, it can often be possible to learn that background knowledge, and describing how to do that might be a lot shorter.

For example, describing how to make an omelette in enough detail that someone with no experience cooking can get it right first try might be quite long - describing how to train the relevant steps until success is much shorter.

Reality does not come divided up into bits; we have to do that.

You’ve asked what import the Turing thesis really has here. Well, it’s sentences like this that make me reach for it. If we imagine a computer program to solve Bongard problems, of course reality comes to it divided into bits, literally. We will have to give our Bongard-solver some input, and it will be in bits. Most likely some sort of pixel graphic. All distinctions which it will later end up making can be described in the language of which pixels are on or off. So all the divisions are already there: the question is which ones to ignore.

And it certainly seems that much the same is true of our brain: when our retina sends signals to the nervous system, these are more or less in pixel format, too (it’s not essential that it’s pixels, just some fixed format). Now from the perspective of our conscious thought it may seem like we are learning to make a distinction: what actually happens is that we learn to pay attention to it.

Objective Reality

The fuzziness of objects, along with subjects, has long interested me. That comes up in discussions about hyper-objects, which are those that are so large and complex that we tend not to easily perceive them as coherent and unified things. And we can consider hyper-subjects of the bundled mind (4E cognition, communal, animistic, bicameral, etc.).

But we don’t need to turn to unusual examples and speculations. Such vagueness can be observed directly in our own experience. It’s similar to how, in turning one’s awareness onto the mind, one will never discover a soul, ego, or willpower. All that one can find is a stream of experience that has no clear boundary or stopping point. And that supposedly ‘inner’ experience is continuous with external perception. The metaphor of the body-mind as a container is a cultural bias.

(Functional) fixation

Nick Gall 2024-05-09

Hi David,

I’m writing up my thoughts on creative improvisation, which I believe is founded on the ability to recontextualize, reframe, and repurpose, and I was wondering if you discussed the concept of “functional fixation” in your writings on meta-rationality, since you discuss fixation in Meaningness.

One obstacle to the ability to reframe is a behavior known as “functional fixation” or “functional fixedness”: https://en.wikipedia.org/wiki/Functional_fixedness . A paradigm of this cognitive bias is demonstrated by the “candle problem”: “give participants a candle, a box of thumbtacks, and a book of matches, and ask them to attach the candle to the wall so that it does not drip onto the table below”. There is an extensive psychological literature on the phenomenon and factors (including exercises) that can reduce it, i.e., increase one’s ability to appreciate the nebulosity of a situation. Some reference to it might be a useful addition to meta-rationality.

Fixation

David Chapman 2024-05-09

Thanks; fixation is a major theme, but I’m not intending to cover the particular psychological literature you refer to. (I’m reasonably familiar with it, and it’s relevant, but not enough so to go into.)

Functional Fixation

@Nick Gall - I have no grand thoughts to add. But functional fixation sounds like the opposite of what social scientists call cognitive flexibility. The latter is a key component of fluid intelligence, the main factor of increasing average IQ over recent generations (i.e., Flynn effect).

Other correlated facets are cognitive complexity, pattern recognition, perspective shifting, aesthetic appreciation, abstract thought, negative capability, etc. Also, as relevant to your inquiry, cognitive flexibility and fluid intelligence are strongly linked to creativity and divergent thinking, of course along with perspective shifting.

You might look into the extensive social science research that touches upon personality theory and has direct relationship to ideological differences. Anyway, I’m willing to bet that all of this overlaps with meta-rationality, as part of general neurocognitive development.

Creativity et al

Nick Gall 2024-05-09

David, I suspected the omission was a conscious choice, but I wanted to give you a heads up just in case.

Benjamin, Thanks for the additional pointers to literature. I’m always interested in research into the supposedly mysterious processes at play in meta-rationality such as creativity, intuition, imagination, improvisation, etc. I just started reading some papers in a fascinating book, “Creativity and the Wandering Mind: Spontaneous and Controlled Cognition - A volume in Explorations in Creativity Research” https://www.sciencedirect.com/book/9780128164006/creativity-and-the-wandering-mind :
‘The purpose of this book is to provide readers with a state-of-the-art collection of papers about the impact that mind wandering and cognitive control have on several manifestations of creativity, including imagination, fantasy, and play.’

Just another example of the myriad aspects affecting creativity et al!

Interesting

@Nick - I’m sure that book would be a good read. Mind wandering is definitely an important component. Related to this topic is also fantasy-proneness and lateral thinking, all being various expressions of ‘openness’.

Also, the ability to shift perspectives is closely linked to the ability to take on other people’s perspectives. That involves cognitive empathy, mind-reading, and theory of mind. This goes hand in hand with inclusivity and a larger circle of moral concern, even to the point of an out-group bias.

As the opposite side of the equation, I was thinking that the direct personality correlate of functional fixation would be what is called cognitive closure or ‘need for closure’, along with need for certainty and low tolerance for ambiguity. It’s a tendency toward control and constraint.