Comments on “Part Two: Taking reasonableness seriously”
Reasonableness: a comprehensive account & its completeness
What topics would need to be treated in a comprehensive account of reasonableness but are out of scope for The Eggplant?
Also, is there a sense in which reasonableness is complete, in that any system that could be comprehensively reasonable could also (be trained to be) rational and meta-rational? One intuition for this: developmentally we are reasonable for a long time before we’re rational, and rationality itself took historical time to be built up out of reasonableness, but in both cases we’re using the whole of our brains and embodiment the entire time. Another: the way we humans implement rational systems is always embodied, along with all the other E’s (e.g. in Lakoff and Núñez’s account of the conceptual metaphors of mathematics, and in empirical accounts of our use of notation systems and computing hardware).
I guess I did already answer my questions! :) Your answers make good sense to me, although for the first I was hoping you might have a list of out-of-scope-of-The-Eggplant features of reasonableness handy....
I'd love a (more) "comprehensive account of reasonableness"
Maybe that can be a future project!
I’m a little puzzled by your agreement with this, from another commenter on this post:
Reality itself does not seem to be a system.
Given the incredible precision of, e.g. some parts of quantum mechanics, it sure seems like ‘reality’ is very likely a (relatively) simple system – in terms of its rules and its fundamental constituents. In that sense, it does very much seem to be a ‘system’, even in your specific usage of that term.
Where I would very much agree with you is reality, in the sense of our experience in the “eggplant-sized world”, NOT being a system – in your sense.
And, to me, that’s one of the wonderful things about this site/book and your writing generally – you’re helping dissolve the confusion between these drastically different scales of understanding.
It now seems obvious to me that there’s no ‘system’ that can be cognitively internalized or even published as a book (let alone read and understood) whereby something “eggplant-sized” can be understood in terms of, e.g. quantum mechanics; let alone any more fundamental theory. But I think it only seems obvious because I’ve been reading your work for so long!
But I also ‘can’t help’ but try to spin new rationalisms out of some of these thoughts! It seems to me like there’s a connection between your points and the ‘impossibility’ of rational systems in any universe, from a mathematical or computational perspective.
Something that I recently found extremely insightful is this from Scott Aaronson’s paper Why Philosophers Should Care About Computational Complexity:
My last example of the philosophical relevance of the polynomial/exponential distinction concerns the concept of “knowledge” in mathematics. As of 2011, the “largest known prime number,” as reported by GIMPS (the Great Internet Mersenne Prime Search), is p := 2^43112609 − 1. But on reflection, what do we mean by saying that p is “known”? Do we mean that, if we desired, we could literally print out its decimal digits (using about 30,000 pages)? That seems like too restrictive a criterion. For, given a positive integer k together with a proof that q = 2^k − 1 was prime, I doubt most mathematicians would hesitate to call q a “known” prime, even if k were so large that printing out its decimal digits (or storing them in a computer memory) were beyond the Earth’s capacity. Should we call 2^2^1000 an “unknown power of 2,” just because it has too many decimal digits to list before the Sun goes cold?

All that should really matter, one feels, is that

(a) the expression ‘2^43112609 − 1’ picks out a unique positive integer, and

(b) that integer has been proven (in this case, via computer, of course) to be prime.

But wait! If those are the criteria, then why can’t we immediately beat the largest-known-prime record, like so?

p0 = The first prime larger than 2^43112609 − 1.

Clearly p0 exists, it is unambiguously defined, and it is prime. If we want, we can even write a program that is guaranteed to find p0 and output its decimal digits, using a number of steps that can be upper-bounded a priori. Yet our intuition stubbornly insists that 2^43112609 − 1 is a “known” prime in a sense that p0 is not. Is there any principled basis for such a distinction?

The clearest basis that I can suggest is the following. We know an algorithm that takes as input a positive integer k, and that outputs the decimal digits of p = 2^k − 1 using a number of steps that is polynomial—indeed, linear—in the number of digits of p. But we do not know any similarly-efficient algorithm that provably outputs the first prime larger than 2^k − 1.
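The asymmetry Aaronson points at can be sketched at toy scale in a few lines of Python (the helper names here are my own, and naive trial division stands in for the serious primality tests used at GIMPS scale):

```python
def mersenne_digits(k: int) -> str:
    """Digits of 2^k - 1: one arithmetic expression, cost roughly linear in the output size."""
    return str(2**k - 1)

def is_prime(n: int) -> bool:
    """Naive trial division -- fine for toy sizes, hopeless for record-sized primes."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_after(n: int) -> int:
    """Search upward candidate by candidate; no similarly efficient shortcut is known."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

# Toy example with k = 7: 2^7 - 1 = 127 is a Mersenne prime.
print(mersenne_digits(7))     # "127"
print(next_prime_after(127))  # 131, found only by testing 128, 129, 130, 131
```

The point is the shape of the two functions: `mersenne_digits` does a fixed computation whose cost tracks the output length, while `next_prime_after` must test an open-ended run of candidates, each test itself expensive at large sizes.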
Outside all systems?
If what you mean by “systems” is something like “formal systems of thought devised by humans”, then I’ve got no objection.
But if you mean “systems” in a much broader sense, then I do object. Meta-rationality doesn’t stand outside the ecosystem. It doesn’t stand outside the central nervous system of the person doing it. Like any human activity, meta-rationality is contained in and constrained by the limits of the systems it is embedded in.
We can use a rationalist system of thought called “biology” to study the ecosystem. And the formal methodologies used by biologists can be supervised by meta-rational reasonableness. But that is still taking place inside the brain of an animal. That animal needs to eat or it will die. Eventually it will die anyway. These are biological facts. The human is bounded and constrained by the ecosystem.
At the same time, the ecosystem is bounded by the products of human thought. If a human gets the idea in his head to build a bunch of nuclear weapons, put them in rockets, and give one man the choice to launch those rockets, then maybe it has the deterrent effect they were looking for, and maybe it wipes out life on earth.
Systems of all kinds have hierarchical relationships with each other. They nest inside each other like Russian dolls. But if you pursue these relationships far enough, things often get weird. Peano arithmetic (PA) is a formal system which nests inside Zermelo–Fraenkel set theory (ZFC). It nests in the sense that statements and objects and proofs inside PA can be translated into their equivalents in ZFC. Anything you can say or prove in PA you can do in ZFC, but not the other way around. But ZFC is also limited. For example, Cohen proved that ZFC can’t decide the continuum hypothesis. What axioms do you need to prove Cohen’s theorem? All you need is PA. Weird.
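The “weird” step can be stated a little more precisely. Cohen’s result (together with Gödel’s earlier half) amounts to two relative-consistency implications, and consistency statements are arithmetical, so the implications are claims of arithmetic rather than of set theory (I’m glossing over the formalization details here):

```latex
\mathrm{Con}(\mathrm{ZFC}) \rightarrow \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH})
\qquad\text{and}\qquad
\mathrm{Con}(\mathrm{ZFC}) \rightarrow \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH})
```

Because forcing (and, for the CH direction, the constructible universe) can be unwound into an explicit syntactic translation of proofs, both implications are provable already in PA: the “small” system proves a deep fact about the “large” one that contains it.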
Every system exists in a context that bounds and constrains its operation. Often that context consists of other systems: systems that supervise it, or physically contain it, or create the preconditions for its existence. But these hierarchical relationships do not fit together into some great chain of being with an uber-system at the top. Instead, because each of these hierarchical relationships can be of a different character than the others, the systems form an interconnected web.
But even calling it a web exaggerates the unity of the thing. It tempts you to say “maybe the uber-system is the network”. That kind of systematization has its place. An operating system is an interconnected web of programs, which are each themselves systems of instructions. An ecosystem is an interconnected web of organisms, which are all independent biological systems. Mathematical logic is an interconnected web of formal systems which can embed in, describe, and prove theorems about each other.
But however well systematization-as-network works, it can’t systematize everything. It can’t grow to encompass everything. Reality itself does not seem to be a system. Every system exists in some kind of environment and is limited by that environment. There is no one system to rule them all.