[extropy-chat] "The Singularity Myth"
lcorbin at tsoft.com
Sun Mar 19 08:34:10 UTC 2006
Damien Sullivan writes
> > it's always necessary to state which flavor of it [the singularity] you
> > think will occur.
> Good caveat. I wrote a Usenet post on various types:
Very nice. I have a question about it (below).
> > a big AI breakthrough as well. Even though neither I nor anyone else used
> > the term "singularity" (this was still pre-Vinge's use of the term)
> Vinge used it in _True Names_ in 1987.
Ah, thanks so much for that correction/reminder! My time-line concerning
all this has been out of whack for a while.
> Wikipedia says he first hit print with it in Omni in 1983.
> [Lee wrote]
> > Calling the singularity... a myth seems unfounded. It's hard to
> > imagine any alternative (short of civilization collapse) over
> > the next couple of hundred years.
> AI never happens, or never becomes cheap enough to compete with humans in most
But isn't "never" an awfully long time?
> Or complexity turns out to rise faster than intelligence past a
> point, so it in fact takes longer for a superhuman to become even smarter than
> it did for a human. Safe genetic engineering of humans is hard, and genetic
> selection is either hard, or easy but limited in effect, boosting average
> human intelligence by a few standard deviations but not increasing the
> maximum much.
Well, in another post tonight, "New Path for Evolutionary Psychology," I
copied an essay arguing that genetic selection (including mutation) over
just an eight-hundred-year period may account for the 17-point IQ
difference between Jews and gentile Europeans. Yes, the mutations are as
yet unproven, but the idea seems reasonable when one looks at what breeders
do with dogs even more quickly than that. The Doberman pinscher exists
because Herr Dobermann wanted a guard dog with different capabilities,
whether or not that involved actual mutations.
As for us, the birth canal is no longer the strict limitation that it was!
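As a rough sanity check on the selection claim above, here is a
back-of-envelope sketch using the standard breeder's equation,
R = h^2 * S (per-generation response equals heritability times the
selection differential). All the parameter values are my illustrative
assumptions, not figures from the essay:

```python
# Back-of-envelope: could ~800 years of selection plausibly account
# for a 17-point IQ gap?  Breeder's equation: R = h^2 * S.
# Every number below is an assumption chosen for illustration.

GENERATION_YEARS = 25    # assumed generation length
YEARS = 800
H2 = 0.5                 # assumed narrow-sense heritability of IQ
TARGET_GAIN = 17.0       # IQ points to be explained
SD_IQ = 15.0             # IQ standard deviation

generations = YEARS / GENERATION_YEARS        # 32 generations
gain_per_gen = TARGET_GAIN / generations      # response needed per generation
required_S = gain_per_gen / H2                # selection differential needed

print(f"generations:           {generations:.0f}")
print(f"gain per generation:   {gain_per_gen:.2f} IQ points")
print(f"required differential: {required_S:.2f} points "
      f"({required_S / SD_IQ:.3f} SD) per generation")
```

Under these assumptions the required selection differential is only about
one IQ point (roughly 0.07 standard deviations) per generation, which is
why sustained selection over such a period does not seem outlandish.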
In your Usenet post you write
> Loose analogies fly around at this point; some say "they'll be
> to us as we are to dogs", I invoke Turing-completeness and say
> dogs just aren't that good at understanding each other, in the
> sense we mean it. It's not that we're too complex for dogs, but
> dogs are complex enough to understand anything.
I have often spoken of a computerish boundary that humans seem to
have crossed; I associated it with von Neumann---I think he referred
to a kind of complexity barrier.
Anyway, you appear to be taking the opposite tack. Unless you mean
"but dogs are *not* complex enough to understand anything"? Is
that what you actually meant to write?