[extropy-chat] Fundamental limits on the growth rate of superintelligences

kevinfreels.com kevin at kevinfreels.com
Mon Feb 13 22:27:09 UTC 2006


What exactly are you referring to when you mention "growth rate"? Are you
referring to the speed at which it becomes more intelligent? Obtains more
control? Increases its own processing speed? How exactly do you even
quantify a "growth rate" for a sentient superintelligence? Certainly you
aren't just referring to processor speed.

Also, the words "rate" and "speed" are themselves a problem, since we would
only be referring to time as we humans see it. Wouldn't an AI see time much
differently? With the ability to process in parallel, I doubt time would mean
very much to it other than the number of cycles it takes to reach a certain goal.
How would time apply to an AI quantum computer? Could it build itself into
some sort of closed-timelike-curve computer that could pull out the answer to
any problem the moment the question was asked? This all gets really weird,
but isn't that the kind of limit you are speaking of, Robert?
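
To make the "which clock?" question concrete, here is a minimal Python sketch.
It is only a toy under made-up assumptions: the improvement-per-cycle figure,
the cycle rate, and the perfect-parallelism assumption are all invented for
illustration, and none of it models a real AI. It just shows that the same
self-improvement process has one growth rate when measured per cycle and a
very different one when measured against wall-clock time, depending on how
much parallel hardware it runs on.

# A toy illustration (hypothetical numbers only, not a model of any real AI)
# of the point above: whether a "growth rate" looks large or small depends on
# which clock you measure it against, processing cycles or wall-clock seconds.

def growth_rates(cycles_needed, improvement_per_cycle,
                 cycles_per_second, parallel_units):
    """Return (per-cycle rate, wall-clock rate) for a hypothetical self-improver."""
    total_improvement = cycles_needed * improvement_per_cycle
    # Assume the workload parallelizes perfectly across units (a big assumption).
    wall_clock_seconds = cycles_needed / (cycles_per_second * parallel_units)
    per_cycle_rate = total_improvement / cycles_needed
    wall_clock_rate = total_improvement / wall_clock_seconds
    return per_cycle_rate, wall_clock_rate

# Same task, same number of cycles; only the amount of parallel hardware differs.
for units in (1, 1000):
    per_cycle, per_second = growth_rates(
        cycles_needed=1_000_000,
        improvement_per_cycle=0.001,   # made-up "units of capability" gained per cycle
        cycles_per_second=1_000_000,
        parallel_units=units,
    )
    print(f"{units:>5} units: {per_cycle} per cycle, {per_second:,.0f} per second")

Run as written, both cases print the same per-cycle rate while the wall-clock
rate scales with the number of parallel units, which is all the example is
meant to show.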




----- Original Message ----- 
From: "John K Clark" <jonkc at att.net>
To: "ExI chat list" <extropy-chat at lists.extropy.org>
Sent: Monday, February 13, 2006 3:18 PM
Subject: Re: [extropy-chat] Fundamental limits on the growth rate
of superintelligences


> Robert Bradbury Wrote:
>
> > Even if a self-evolving AI were to develop
>
> Even? Of course an AI will be self evolving.
>
> > it would still be constrained by my ability to pull its plug out of the
> > wall.
>
> There will come a point where turning the AI off would be equivalent to
> turning the world economy off; nobody would dare try. And even if somebody
> did dare, the AI would consider it attempted murder, and that's not very
> friendly. It is not wise to tickle a sleeping dragon.
>
> > So long as an AI lacks the ability to create and integrate into its
> > architecture alternative (presumably improved) hardware, or simply more of
> > the same, its growth rate is constrained.
>
> So humans look at the AI and see it is doing something with its innards. We
> know it must perform routine maintenance from time to time, but is that all
> it's doing, or is it upgrading itself? Nobody knows. No human being has more
> than a vague understanding of how this colossal computer works anymore, not
> its hardware and not its software.
>
> > whether humans would allow themselves to be placed in what might become a
> > strategically difficult position
>
>
> You're talking about outthinking somebody that is far, far smarter than you
> are, and that is impossible. The AI will be more brilliant at strategy than
> any human who ever lived, and so will find it absurdly easy to fool us if
> he wants to, and if we're trying to kill him he will want to. You can count
> on it.
>
> > humans may only allow the Singularity to happen at a rate at which they
> > can adapt.
>
> That will never happen. If we deliberately make our AI stupid, or even slow
> down the rate at which it is improving itself, there is no guarantee country X
> will do the same thing. I don't care how solemnly they swear they will make
> their AI stupid too; the temptation to cheat on the agreement would be
> absolutely enormous, literally astronomical. The only thing to do is charge
> ahead full steam and make the best AI you can.
>
>   John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>



