Comments on “I seem to be a fiction”


They Ain't Lookin' Too Hard (all caps!)

Sabio 2011-03-26

That was a fun trip – who dropped the acid in my chilli?
Boy, they must not be looking too hard when they wrote, “David, where are you?”
But seriously, where the hell are you?
Personally, if your grokking continues, perhaps you’ll end up a God, or your novel’s anti-hero will – is his name “Ken Wilbur”?
Thanks for the tour!

made of awesome

Andrew 2011-03-28

Hi,
Got here through Sab’s blog.

I always figured existentialism would come back and properly bite the world in the ass. Just didn’t expect it would leave its teeth so deep in the flesh.

Awesome progression from orange and green to yellow in this. My mind burst in ten different directions because of this post. What would happen now if you and fiction-you ever met in a virtual reality or something?

I’m very tempted to borrow from another old German and ask if this is all some dialectic at play – thesis+antithesis ==> synthesis?

Existentialism & dialectic

David Chapman 2011-03-29

Hi, thanks for visiting!

Yes, post-modernism is basically Existentialism Lite(TM); it derives from mostly the same philosophical sources. A main difference is that post-modernism lacks existentialism’s angst. That would seem a good thing, except that it is only because post-modernism refuses to deal with its own nihilism. Existentialism is largely about its own nihilism, which it failed to overcome. Post-modernism knows it hasn’t got a workable response, so it just tries to ignore the question.

What little I know about Spiral Dynamics comes almost entirely from Boomeritis, so my interpretation isn’t worth much, but yes, my understanding is that “yellow” is meant to be a dialectical synthesis of “green” and “orange.”

Ken Wilber is hugely influenced by Hegel; I’m not. I’m trying to combine what I think are accurate insights of opposing world-views, which might count as dialectics, but if Hegel has a useful method for doing that, I’m ignorant of it!

Cheers,

David

Henry Markram

Florian 2011-06-10

Dear David,

Thanks a lot. This is brilliant as always. Just wanted to mention that, at least in Europe, a huge renaissance of AI seems to be expected. Have you heard about Henry Markram’s project?

http://bluebrain.epfl.ch/cms/lang/en/pid/56882

Most likely the European Community is going to spend huge amounts of money on that project, and so I guess we will hear slightly more about all that over the next few years.

All the best,
Florian

AI in Europe

David Chapman 2011-06-10

Hi, Florian,

Thanks for this—I had vaguely heard of Blue Brain, but didn’t know it was going to get big funding.

The “whole brain emulation” approach to AI is an admission of failure in what came before, namely trying to understand what the mind does, and simulating that. The new approach is to say “We don’t need to understand how the mind works, we just need to simulate the brain. It’s made of neurons connected together, so we can simulate those.”

If Blue Brain hires lots of smart people, probably some interesting things will come out of it. I’m skeptical that human-like intelligence will be one of them. The problem is that we also don’t know (in enough detail) what neurons do.

The project page says that they are replicating wet-lab results. If they can get that to be accurate enough, this approach could work. I think that’s unlikely—but if they make significant progress, I will get very interested!

David

Alien Anthropologists

Sabio 2011-06-10

I once heard something like this:

Imagine an anthropologist from an advanced alien race visiting earth from light years away and returning to his planet and telling his colleagues in a joking tone: “Earthlings are hilarious. They have finally developed computers. And guess what they are trying to get their computers to do — think like themselves!”

Spot on

Sergio DuBois 2013-01-16

Yes, I can guess where you are going with this . . .

What comes after postmodernism...

Shane 2015-10-24

I came across this article you might find interesting:
METAMERICANA: On Crispin Glover’s Epic Performance Art (google it as I can’t add a link due to spam filter).

Their answer is “metamodernism”:

“…Over the next twenty years, we’ll hear a lot about art that is simultaneously sincere and ironic and neither, naive and knowing and neither, optimistic and cynical and neither. While two of the theorists presently associated with metamodernism, Tim Vermeulen and Robin van den Akker, originally described such art as “oscillating” between conditions traditionally associated with Modernism and postmodernism–for instance, sincerity and irony, respectively–their view has changed in recent months. Now, they, and other metamodernists, are more likely to note that metamodern art permits us to inhabit a “both/and” space rather than merely the “either/or” spaces deeded us by the “dialectics” of postmodernism…”

Metamodernism

David Chapman 2015-10-28

Thank you very much! That was interesting.

That was fun - thanks for

Joshua Brule 2017-04-18

That was fun - thanks for writing this.

Shortly after reading, I felt there was a strong similarity with what happened with mathematics around the turn of the century. As I started writing my thoughts down, the connection turned out to be tenuous at best. Perhaps still worth sharing, though:

The way I think of it, the Old Math, perhaps best embodied by Euclid, was about picking the right axioms and working forward to prove theorems. I suspect this is why there was so much interest in proving the parallel postulate from the other 4; the parallel postulate isn’t obviously correct, the way the others are.

Non-Euclidean geometry might have come as a bit of a shock, but it didn’t sink the Right Axioms program by itself. It just shows that we couldn’t figure out the correct axioms from reason alone - we had to go out and check what the universe was actually using. If my understanding of cosmology is correct (I wouldn’t bet on it), then the current thinking is that the universe is actually flat, although there was good reason to think that it was hyperbolic for a while.

The real trouble happened with Cantor, infinite cardinalities, and the continuum hypothesis. Infinite cardinalities put him at odds with Kronecker (“God made the integers, all else is the work of man.”), appeared to cause Cantor a great deal of distress, and prompted him to speculate on the existence of an Absolute Infinite above all of the other infinities (i.e. God). And, of course, no one could seem to prove or disprove the continuum hypothesis (although showing it was independent of ZFC didn’t happen until quite a bit later, kind of wrecking my narrative), which was disturbing because it seems like a natural question, seems way too complicated to be obviously correct or incorrect, and determining its truth was forever beyond the reach of any empirical method.
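
For reference, the hypothesis itself is short to state: there is no set whose cardinality falls strictly between that of the integers and that of the reals.

```latex
% Continuum hypothesis: no cardinality strictly between |N| and |R|
\neg\, \exists\, S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
```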

There was some feuding between mathematicians at the time, rivaling the intensity of the Newton/Leibniz and Neyman-Pearson/Fisher feuds, but the field sorted itself out, and the next generation of mathematicians pretty much all accepted the idea that you pick your assumptions for the problem at hand and don’t worry too much about the metaphysical significance. It’s how we end up with things like the definition of compactness - there’s no way anyone would ever mistake that for a universal truth; it’s a definition that’s particularly useful for proving certain things, and, hey, a lot of spaces of interest happen to be compact!
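
For anyone who hasn’t seen it, the standard open-cover formulation shows what I mean - it reads like a tool designed for a job, not a self-evident truth:

```latex
% Compactness of a topological space X: every open cover admits a finite subcover.
X \;\text{compact} \iff
\forall\, \{U_i\}_{i \in I} \text{ open with } X = \bigcup_{i \in I} U_i,\;
\exists\, \text{finite } F \subseteq I \text{ with } X = \bigcup_{i \in F} U_i
```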

As I understand it, there was quite a bit of anxiety about things like infinite cardinalities, but no one in the mathematics community got too upset about Gödel’s incompleteness theorem. In fact, I believe at least one member of Gödel’s committee didn’t want to pass him because it didn’t seem like much of an improvement over the completeness theorem that he had presented for his doctorate a few years earlier. The incompleteness theorem might have been the final nail in the coffin for Mathematics as Eternal Truth, but that idea had really been dead for decades. Mathematics had shifted from eternalism to ‘merely’ a particularly useful tool.

I bring this up because it seems like attempt after attempt to create Strong AI gets carved out that way. Good old-fashioned AI turned into research on theorem proving and model checking for software systems. Beating humans at chess ended up relying on the minimax algorithm and some clever hacks. Machine learning is just souped-up statistics. Neural networks appear to be very good at problems where the input has a natural notion of locality, but not so great elsewhere.
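
For what it’s worth, the minimax core really is tiny; here’s a minimal sketch in Python (plain minimax, without the alpha-beta pruning or hand-tuned evaluation functions real engines depend on; the game-specific helper names are just placeholders):

```python
def minimax(state, depth, maximizing, evaluate, get_moves, apply_move):
    """Plain minimax search. `evaluate`, `get_moves`, and `apply_move`
    are game-specific helpers supplied by the caller (illustrative names)."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # heuristic score at the search horizon
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, get_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)
```

The “clever hacks” - pruning, opening books, endgame tables, evaluation tuning - are where most of the real engineering went.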

I think it’d be more exciting if there was a clean, relatively simple theory that gave us Strong AI - and I haven’t ruled out the possibility - but I’m not so convinced there is one. I have the vague impression that most of the people in AI are embarrassed to talk about Strong AI because they can’t find that theory and most of the rest are trying to push ahead without having any real theories at all. (I am guilty of this and spent a nontrivial amount of time tinkering with genetic programming before realizing I wasn’t even trying to build a theory.) The whole thing still feels like it’s in the ‘alchemy stage’.

Alchemy and artificial intelligence

David Chapman 2017-04-18

Funny you should draw that analogy… Dreyfus’s first paper on AI was titled “Alchemy and Artificial Intelligence.”

1965

David Chapman 2017-04-18

… and “Alchemy and Artificial Intelligence” was published in 1965, which I just realized is more than half a century ago.

We don’t have much to show for that half-century of work.

Great stuff, resurrected!

Simon Grant 2019-06-24

I have been fascinated reading this – a great story – as well as much of your other materials. Particularly as I was doing my cognitive science PhD from 1987 to 1990. I don’t recall your name from then, but I do recall Phil Agre’s.

You gave up on AI in 1992, you say? Back in 1990, I was trying to suggest to people that a plausible and interesting way forward was to look at how people actually perform complex tasks. I wasn’t into situated action stuff (as e.g. Suchman), but still. I think I had some tiny little insights on that, but to my knowledge no one has taken them up or used them in any way.

These days, I love applying this background awareness of human information processing to the real-life complex world of trying to live in new patterns. And I’m a long-term (10 years+) fan of Robert Kegan’s insights, by the way.

Maybe it would be great to talk one day? I’m open. Or perhaps you know researchers younger than us who want a new line, different from AI’s current manifestation.

Go well!

Simon

How NASA builds teams

Patrick Atwater 2019-07-19

You might enjoy checking out this book on how NASA builds teams: https://www.amazon.com/How-NASA-Builds-Teams-Scientists/dp/0470456485

The Orange and Green framing seems very much aligned. In addition to yellow, there’s also a blue visionary perspective, which yields a full 2x2.

Delusions of reference

Tim Converse 2020-04-21

In case you’d like support in your worries about “delusions of reference” - I agree that it has to be either you or Phil Agre, and no one else in the world. What fun. (I was a graduate student when you and Phil were active in AI, and I studied with Phil some at UChicago. I think I met you once at a conference.)

Just curious: is there something that makes you think that it’s you rather than Phil?

Confirmations of reference

David Chapman 2020-04-21

Hi, Tim! I definitely remember your name, and sort of think I remember your face; pretty sure we did meet. Nice to hear from you!

Yes, it’s as likely, or more, to be Phil rather than me (or to be a composite of the two of us). I just didn’t want to drag him into a dubious story any more than necessary.

HTML bug

Stig 2020-10-30

I have been reading your books and metablog for a few months now, and really appreciate your writing!

This page appears to have some style errors. The sidebar is displayed in a font size that is much too large, wrapping many words across more than one line. The sidebar also disappears behind an embedded YouTube video.

It also looks like a missing closing quote in the id attribute of an h2 element messes with the document. The offending line in the source begins like this:

```html
<h2 id="boomeritis>Boomeritis</h2>

<p>The Baby Boom generation—people born ro
```

Maybe this is the cause of the sidebar issue; the sidebar is in approximately the same location on the page.

HTML bug: fixed

David Chapman 2020-11-01

Wow, thank you for letting me know!

I’ve fixed it, I think.

end goal of Integral/ Ken Wilber

ross 2021-01-01

Hi David. Very interesting! I have looked into some of Ken Wilber’s work a bit, and I don’t think the purpose is to become omnipotent.

I tend to think that becoming one with God or whatever else they want to call the highest stage of mental evolution is something that I would have to experience to verify if it is possible. If someone doesn’t do the experiment of trying to achieve this state, it would be very tricky to be sure it is not possible based on logic alone. It is a hard experiment to do, so I understand if people want to use logic to see if it is worth trying out first, though.

The next best thing to doing the experiment yourself is to study people who have done the experiment, and another book by Ken Wilber with a few other authors, *Transformations of Consciousness: Conventional and Contemplative Perspectives on Development*, is an attempt to do that.

Cheers

2011->2021

a s 2021-12-27

I thought it was amusing that this page was written at the end of an AI winter, and now we’re back in an AI boom which has produced some decent imitation visual cortexes - but still nothing in the direction of AGI. Though some rationalists seem to be trying to worship GPT3 just in case it turns out to be Roko’s Basilisk.

  1. Monists love capital letters. Is that because they think capitals look impressive, or is it the result of bad translations from German?

Maybe they get it from Dr. Bronner’s soap bottles. Although those are translations from German, I wouldn’t want to insult the man by calling them bad, whatever they are.
