Comments on “Where did you get that idea in the first place?”
Practice?
I wonder if modern AI, i.e. machine learning, will eventually stumble on meta-rationality.
Having read you for a while, I was struck by something after reading this post about historical iron manufacturing:
Without some rational system (one that’s good enough), all reasoning is just ‘reasonableness’, even believing that ore deposits replenish themselves after enough time (which, for ‘bog iron’, is actually true):
As with many ancient technologies, there is a triumph of practice over understanding in all of this; the workers have mastered the how but not the why. Lacking an understanding of geology, for instance, meant that pre-modern miners, if the ore vein hit a fault line (which might displace the vein, making it impossible to follow directly), had to resort to sinking shafts and exploratory mining in an effort to ‘find’ it again. In many cases ancient miners seem to have simply abandoned the works when the vein had moved only a short distance because they couldn’t manage to find it again. Likewise, there was a common belief (e.g. Plin. 34.49) that ore deposits, if just left alone for a period of years (often thirty), would replenish themselves, a belief that continues to appear in works on mining as late as the 18th century (and lest anyone be confused, they clearly believe this about underground deposits; they don’t mean bog iron). And so like many pre-modern industries, this was often a matter of knowing how without knowing why.
Is a (good enough) rational system of reasonableness possible? I’d guess your answer is ‘no’ and I’m inclined to agree, but I’m not that confident about it. Given that AI is (reasonably) targeting human-like intelligence, maybe we can (someday) cobble together a ‘reasonably rational’ artificial intelligence that is both human-like and human-level. (That seems possibly reasonable to me!)
(I think it’s reasonable that, in practice, AI people are mostly trying to mimic human behavior, because I’m relatively convinced that it’s not rationally possible to define or recognize ‘intelligence’ in general.)
One thing that I think is interesting about machine learning is that it’s not entirely rational, or even mostly so. Practice seems to have far outstripped the capacity of rational theories to explain it fully. Like historical mining (and probably modern mining still!), machine learning is mostly reasonable. Machine learning products, e.g. trained models, seem obviously reasonable too, which is fascinating given how capable they are anyway. (Though, and I think you’d agree, we shouldn’t be surprised by this!)
An article that I think is relevant
I found this article recently and immediately thought of your refrain “we make rationalism work”. It’s about the surprising amount of detail that reality has.
http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail
AI for solving IMO problems
I learned this week that some AI researchers are now attempting to use theorem provers to solve math problems from the International Mathematical Olympiad (IMO): https://www.quantamagazine.org/at-the-international-mathematical-olympiad-artificial-intelligence-prepares-to-go-for-the-gold-20200921/
As you suggest in this post, it sounds like the hardest step is to even get started, since so many problems require a clever construction. What do you think their chances are of making any headway?
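To make the ‘getting started’ problem concrete, here’s a toy Lean 4 sketch (my own illustration, not from the article). In a theorem prover, proving an existential statement means first supplying a concrete witness and only then checking that it works. Here the witness is trivial; in IMO problems, finding the right construction is exactly the creative leap:

```lean
-- Toy example: to prove "for every n there is a larger m",
-- the prover must name a witness before it can verify anything.
-- Witness: n + 1. Proof that it works: Nat.lt_succ_self.
example (n : Nat) : ∃ m, n < m :=
  ⟨n + 1, Nat.lt_succ_self n⟩
```

Verifying the second component is mechanical; conjuring the first component for a genuinely hard problem is the ‘clever construction’ step mentioned above.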