Comments on “I seem to be a fiction”
made of awesome
Hi,
Got here through Sab’s blog.
I always figured existentialism would come back and properly bite the world in the ass. Just didn’t expect it would leave its teeth so deep in the flesh.
Awesome progression from orange and green to yellow in this. My mind burst in ten different directions because of this post. What would happen now if you and fiction-you ever met in a virtual reality or something?
I’m very tempted to borrow from another old German and ask if this is all some dialectic at play – thesis+antithesis ==> synthesis?
Henry Markram
Dear David,
Thanks a lot. This is brilliant, as always. I just wanted to mention that, at least in Europe, a huge renaissance of AI seems to be on the way. Have you heard about Henry Markram's project?
http://bluebrain.epfl.ch/cms/lang/en/pid/56882
Most likely the European Community is going to spend huge amounts of money on that project, so I guess we will hear quite a bit more about all that over the next few years.
All the best,
Florian
Alien Anthropologists
I once heard something like this:
Imagine an anthropologist from an advanced alien race visiting Earth from light years away, then returning to his planet and telling his colleagues in a joking tone: "Earthlings are hilarious. They have finally developed computers. And guess what they are trying to get their computers to do — think like themselves!"
Spot on
Yes, I can guess where you are going with this…
What comes after postmodernism...
I came across this article you might find interesting:
METAMERICANA: On Crispin Glover's Epic Performance Art (Google it, as I can't add a link due to the spam filter).
Their answer is "metamodernism":
“…Over the next twenty years, we’ll hear a lot about art that is simultaneously sincere and ironic and neither, naive and knowing and neither, optimistic and cynical and neither. While two of the theorists presently associated with metamodernism, Tim Vermeulen and Robin van den Akker, originally described such art as “oscillating” between conditions traditionally associated with Modernism and postmodernism–for instance, sincerity and irony, respectively–their view has changed in recent months. Now, they, and other metamodernists, are more likely to note that metamodern art permits us to inhabit a “both/and” space rather than merely the “either/or” spaces deeded us by the “dialectics” of postmodernism…”
That was fun - thanks for
That was fun - thanks for writing this.
Shortly after reading, I felt there was a strong similarity to what happened in mathematics around the turn of the century. As I started writing my thoughts down, the connection turned out to be tenuous at best. Perhaps still worth sharing, though:
The way I think of it, the Old Math, perhaps best embodied by Euclid, was about picking the right axioms and working forward to prove theorems. I suspect this is why there was so much interest in proving the parallel postulate from the other four; the parallel postulate isn't obviously correct the way the others are.
Non-Euclidean geometry might have come as a bit of a shock, but it didn't sink the Right Axioms program by itself. It just showed that we couldn't figure out the correct axioms by reason alone - we had to go out and check which geometry the universe was actually using. If my understanding of cosmology is correct (I wouldn't bet on it), the current thinking is that the universe is actually flat, although for a while there was good reason to think it was hyperbolic.
The real trouble came with Cantor, infinite cardinalities, and the continuum hypothesis. Infinite cardinalities put Cantor at odds with Kronecker ("God made the integers, all else is the work of man."), appear to have caused him a great deal of distress, and prompted him to speculate on the existence of an Absolute Infinite above all the other infinities (i.e. God). And, of course, no one could seem to prove or disprove the continuum hypothesis (although showing it was independent of ZFC didn't happen until quite a bit later, which somewhat wrecks my narrative). That was disturbing because it seems like a natural question, it seems far too complicated to be obviously correct or incorrect, and determining its truth was forever beyond the reach of any empirical method.
There was some feuding between mathematicians at the time, rivaling the intensity of the Newton/Leibniz and Neyman-Pearson/Fisher feuds, but the field sorted itself out, and the next generation of mathematicians pretty much all accepted the idea that you pick your assumptions for the problem at hand and don't worry too much about their metaphysical significance. That's how we end up with things like the definition of compactness: no one would ever mistake it for a universal truth; it's a definition that's particularly useful for proving certain things, and, hey, a lot of spaces of interest happen to be compact!
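For anyone who hasn't met it, the standard open-cover definition makes the point nicely - it reads like an engineered tool, not a self-evident truth:

```latex
% Standard definition of compactness (open-cover form).
\textbf{Definition.} A topological space $X$ is \emph{compact} if every
open cover of $X$ has a finite subcover: whenever
$X = \bigcup_{i \in I} U_i$ with each $U_i$ open, there is a finite
subset $J \subseteq I$ such that $X = \bigcup_{i \in J} U_i$.
```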
As I understand it, there was quite a bit of anxiety about things like infinite cardinalities, but no one in the mathematics community got too upset about Gödel's incompleteness theorem. In fact, I believe at least one member of Gödel's committee didn't want to pass him because it didn't seem like much of an improvement over the completeness theorem he had presented for his doctorate a few years earlier. The incompleteness theorem might have been the final nail in the coffin for Mathematics as Eternal Truth, but that idea had really been dead for decades. Mathematics had shifted from eternalism to being 'merely' a particularly useful tool.
I bring this up because it seems like attempt after attempt to create Strong AI gets carved up that way. Good old-fashioned AI turned into research on theorem proving and model checking for software systems. Beating humans at chess ended up relying on the minimax algorithm and some clever hacks. Machine learning is just souped-up statistics. Neural networks appear to be very good at problems where the input has a natural notion of locality, but not so great elsewhere.
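To underline how un-brain-like the chess line of work was, here is a minimal minimax sketch in Python. The `game` interface (`moves`, `apply`, `is_over`, `score`) is a hypothetical stand-in, not any real engine's API:

```python
# Minimal minimax sketch. `game` is a hypothetical object exposing
# moves/apply/is_over/score; it assumes at least one legal move
# whenever the game isn't over.

def minimax(state, depth, maximizing, game):
    """Best achievable score from `state`, searching `depth` plies
    ahead by exhaustive minimax."""
    if depth == 0 or game.is_over(state):
        return game.score(state)  # static evaluation at the search frontier
    child_scores = [
        minimax(game.apply(state, move), depth - 1, not maximizing, game)
        for move in game.moves(state)
    ]
    # The player to move picks the best child; the opponent, the worst.
    return max(child_scores) if maximizing else min(child_scores)
```

Roughly speaking, everything that made real chess programs strong lived in the "clever hacks" around this skeleton: alpha-beta pruning, move ordering, opening books, and hand-tuned evaluation functions.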
I think it’d be more exciting if there was a clean, relatively simple theory that gave us Strong AI - and I haven’t ruled out the possibility - but I’m not so convinced there is one. I have the vague impression that most of the people in AI are embarrassed to talk about Strong AI because they can’t find that theory and most of the rest are trying to push ahead without having any real theories at all. (I am guilty of this and spent a nontrivial amount of time tinkering with genetic programming before realizing I wasn’t even trying to build a theory.) The whole thing still feels like it’s in the ‘alchemy stage’.
Great stuff, resurrected!
I have been fascinated reading this – a great story – as well as much of your other material, particularly as I was doing my cognitive science PhD from 1987 to 1990. I don't recall your name from then, but I do recall Phil Agre's.
You gave up on AI in 1992, you say? Back in 1990, I was trying to suggest to people that a plausible and interesting way forward was to look at how people actually perform complex tasks. I wasn’t into situated action stuff (as e.g. Suchman), but still. I think I had some tiny little insights on that, but to my knowledge no one has taken them up or used them in any way.
These days, I love applying this background awareness of human information processing to the real-life complex world of trying to live in new patterns. And I’m a long-term (10 years+) fan of Robert Kegan’s insights, by the way.
Maybe it would be great to talk one day? I'm open. Or perhaps you know researchers younger than us who want a new line of work, different from AI's current manifestation.
Go well!
Simon
How NASA builds teams
You might enjoy checking out this book on how NASA builds teams: https://www.amazon.com/How-NASA-Builds-Teams-Scientists/dp/0470456485
The Orange and Green framing seems very much aligned. In addition to Yellow, there's also a Blue visionary perspective, which yields a full 2×2.
Delusions of reference
In case you’d like support in your worries about “delusions of reference” - I agree that it has to be either you or Phil Agre, and no one else in the world. What fun. (I was a graduate student when you and Phil were active in AI, and I studied with Phil some at UChicago. I think I met you once at a conference.)
Just curious: is there something that makes you think that it’s you rather than Phil?
HTML bug
I have been reading your books and metablog for a few months now, and really appreciate your writing!
This page appears to have some style errors. The sidebar is displayed in much too large a font size, so many words wrap across more than one line. The sidebar also disappears behind an embedded YouTube video.
It also looks like a missing closing quote in the id attribute of an h2 element messes with the document. The offending line in the source begins like this:
```html
<h2 id="boomeritis>Boomeritis</h2> <p>The Baby Boom generation—people born ro
```
Maybe this is the cause of the sidebar issue; the sidebar sits in approximately the same location on the page. (Presumably restoring the quote, so the tag reads `<h2 id="boomeritis">`, would fix it.)
end goal of Integral/ Ken Wilber
Hi David. Very interesting! I have looked into Ken Wilber's work a bit, and I don't think the purpose is to become omnipotent.
I tend to think that becoming one with God, or whatever else they want to call the highest stage of mental evolution, is something I would have to experience in order to verify whether it is possible. Without doing the experiment of trying to achieve that state, it would be very tricky to be sure it is impossible on the basis of logic alone. It is a hard experiment to do, though, so I understand if people want to use logic first to see whether it is worth trying.
The next best thing to doing the experiment yourself is to study people who have done it. Another book by Ken Wilber and a few other authors, Transformations of Consciousness: Conventional and Contemplative Perspectives on Development, is an attempt to do exactly that.
Cheers
2011->2021
I thought it was amusing that this page was written at the end of an AI winter, and now we're back in an AI boom which has produced some decent imitation visual cortexes - but still nothing in the direction of AGI. Though some rationalists seem to be trying to worship GPT-3 just in case it turns out to be Roko's Basilisk.
- Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?
Maybe they get it from Dr. Bronner’s soap bottles. Although those are translations from German, I wouldn’t want to insult the man by calling them bad, whatever they are.
They Ain't Lookin' Too Hard (all caps!)
That was a fun trip – who dropped the acid in my chilli?
Boy, they must not have been looking too hard when they wrote, “David, where are you?”
But seriously, where the hell are you?
Personally, if your grokking continues, perhaps you'll end up a God, or your novel's anti-hero will – is his name “Ken Wilber”?
Thanks for the tour!