[extropy-chat] "The Singularity Myth"
phoenix at ugcs.caltech.edu
Sun Mar 19 08:59:32 UTC 2006
On Sun, Mar 19, 2006 at 12:34:10AM -0800, Lee Corbin wrote:
> Damien Sullivan writes
> > Good caveat. I wrote a Usenet post on various types:
> > http://www.mindstalk.net/typesofsing.html
> > > Calling the singularity... a myth seems unfounded. It's hard to
> > > imagine any alternative (short of civilization collapse) over
> > > the next couple of hundred years.
> > AI never happens, or never becomes cheap enough to compete with humans in
> > most applications.
> But isn't "never" an awfully long time?
Hmm, maybe, for getting intelligent behavior in machines. Getting *cheap*
intelligent behavior... maybe, for that level of efficient processing power,
brains are really really good.
Also, "doesn't happen in the next couple of hundred years" doesn't have to be
"never".
> Well, in another post tonight "New Path for Evolutionary Psychology" I
> copied an essay that speaks of genetic selection (including mutation)
> in just an eight-hundred-year period that may account for the 17-point [...]
Yeah, genetics is my usual argument for "something vaguely Singularityish
should happen". OTOH, for fast selection I need a few unproven technologies:
I envision fertilizing 100+ embryos, letting them develop, then gene-testing
all of them, e.g. with a microarray, against a large gene-phenotype database,
then being able to reliably implant the embryo you've selected back into the
mother, or an artificial womb. The mass fertilizing is probably easy, though
the ovary extraction could probably use work; the other two pieces certainly
don't seem impossible, but they are a bit over the horizon.
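As a back-of-envelope sketch of what selecting from 100 embryos buys you (the numbers and the perfect-prediction assumption are mine, not anything the pipeline above promises), the gain from picking the best of n draws on a normally distributed polygenic score is just the expected maximum of n standard normals, which simulation puts near +2.5 standard deviations for n = 100:

```python
# Hedged sketch: expected gain from best-of-n selection on a trait that
# is standard-normal across embryos.  Assumes you can score each embryo
# perfectly; real gene-phenotype prediction would shrink the gain.
import random

def best_of(n, trials=20000):
    """Average of the max of n standard-normal draws, by simulation."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0, 1) for _ in range(n))
    return total / trials

print(round(best_of(100), 1))  # typically prints about 2.5
```

The gain grows only logarithmically in n, which is why 100 embryos rather than 10 buys you roughly one extra standard deviation, not ten.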
> > Loose analogies fly around at this point; some say "they'll be
> > to us as we are to dogs", I invoke Turing-completeness and say
> > dogs just aren't that good at understanding each other, in the
> > sense we mean it. It's not that we're too complex for dogs, but
> > dogs are complex enough to understand anything.
Yeah, that should read "dogs are not complex enough". I saw the typo even
before I hit your text. Thanks, and fixed.
> I have often spoken of a computerish boundary that humans seem to
> have crossed; I associated it with Von Neumann---I think he referred
> to a kind of complexity barrier.
I should probably try to update my anti-Singularity page,
though I shouldn't do it this week (oral quals coming up). One thing to
update it with might be the emphasis in modern cognitive science on
evolutionary continuity between us and other animals. Very little of our
mental abilities seems qualitatively unique to us. Full-blown recursive
language might be... which does correspond to a big natural jump in computer
theory. There's even room in CS for a simple evolutionary change leading to
such a jump: adding a push-down stack to a finite automaton might take a big
development, with some increase in computational power. *Duplicating* the
push-down stack would be relatively easy -- segmentation, plus expanding the
automaton rules -- and gets you a Turing machine. Triplicating the stack
doesn't get you anything more, except some speed.
So most of cog sci is saying "no big jumps to us, no big jumps from us" except
for the computer theory side, which says "big jump to us, but no more
conceivable jump from us without invoking infinite computation or precision".
-xx- Damien X-)