Comments on “When will you go bald?”
very minor nitpick
Dichroic coatings and holograms don’t use pigments.
Yes, it could be said that the nebulosity argument might apply to the definition of “pigment” here too… but between us engineers, please don’t.
Qualia… they’re out there
An interesting failure, even in pop-science writing (as opposed to serious rationalist writing), is “structural color”.
I was looking up bluebirds and noticed there are a lot of websites confidently declaring things like “they’re not really blue, they just look blue”. What they apparently mean is that if you took a bluebird apart, there wouldn’t be a piece of it that’s still blue.
But of course it really is blue! Looking blue to humans and cameras is the reasonable use of “is blue”.
Ontological Nebulosity
It may be basically the engineer’s response from the end of the sequence, but I disagree with the characterization of nebulosity as “ontological”. In the baldness example (in the broad domain where humans and hairs and their positions are reasonably well defined), there is a fact of the matter about the distribution of hairs on my head, and on that of any other person. There is no relevant problem with the territory’s ontology. We want to define baldness for our own use, and we are not very successful, because the territory is continuous and does not cluster well. That wouldn’t be a problem if we didn’t use discrete language - so I still see it as a linguistic problem, though not a solvable one.
Ha!
I’ve been loving the recent posts (though, as I write this, I’m several weeks behind). I’m enjoying the riffing.
One thing I found amusing is the thought that, in practice, ‘baldness’ really could be ‘sharpened’. IIRC, there have been studies about the precise ‘distribution’ of ‘redness’ among peoples. If there were a sufficiently motivating purpose, I could readily imagine “Ahh, yes, he has {0.13, 0.57} hair distribution; very much what would previously have been considered ‘bald’.”
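Purely for fun, here’s a toy sketch of what such a sharpened descriptor could look like, assuming (entirely hypothetically) that the two numbers stand for hair-coverage fractions over two scalp regions; the class name, region names, and threshold below are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class HairDistribution:
    """A 'sharpened' scalp description: two coverage fractions instead of
    the binary word 'bald'. Regions and numbers are illustrative only."""
    crown_coverage: float   # fraction of crown follicles bearing hair, 0.0-1.0
    temple_coverage: float  # fraction of temple follicles bearing hair, 0.0-1.0

    def counts_as_bald(self, threshold: float = 0.4) -> bool:
        # Collapse the continuous description back onto the old binary word.
        # The threshold is arbitrary; it only makes sense relative to a purpose.
        return (self.crown_coverage + self.temple_coverage) / 2 < threshold


# The hypothetical "{0.13, 0.57} hair distribution" from above:
person = HairDistribution(crown_coverage=0.13, temple_coverage=0.57)
print(person.counts_as_bald())  # True: mean coverage 0.35 is below the 0.4 cutoff
```

The binary word only reappears once you pick an arbitrary cutoff for some purpose, which is rather the point.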
There’s a kind of [ha!] shadow corollary to your meta-rationalism thesis (and you’ve definitely mentioned it, if somewhat indirectly) [emphasis mine]:
While there is no eternal meaning, absolute truth, or perfect rational algorithm for either forming (absolutely) true beliefs or making optimal decisions, we could get arbitrarily close – if we wanted to.
I’m less skeptical, though: I suspect it has already been done, and that it is meaningful too (to someone, for some purpose).
[I realized, after typing the above, that this is ‘just’ a restatement of things you’ve already written about, extensively!]
Existence/reality, viewed as a ‘charnel ground’ (or hell): there is no eternal meaning, absolute truth, or perfect rational algorithm for either forming (absolutely) true beliefs or making optimal decisions.
But there’s lots we can do instead of lamenting the impossibility of fixed, eternal, absolute ‘perfection’.
Indeed, biological evolution has done a pretty good job of producing ‘biotech’ for the creation of meaning and truth via ‘good enough’ (tho also very sophisticated!) algorithms, as you point out in footnote [5], where you mention the color constancy that our eyes and brains calculate.
And, after giving up on ‘escaping the charnel ground’, we can notice that the fact that there is any stable meaningness, that any kind or degree of truth is possible, or that any algorithms are even ‘good enough’ (for often fantastic purposes), is amazing and, in some sense, a miracle. (Existence is also a ‘pure land’!)