[extropy-chat] Fundamental limits on the growth rate of superintelligences

John K Clark jonkc at att.net
Mon Feb 13 21:18:12 UTC 2006


Robert Bradbury Wrote:

> Even if a self-evolving AI were to develop

Even? Of course an AI will be self-evolving.

> it would still be constrained by my ability to pull its plug out of the
> wall.

There will come a point where turning the AI off would be equivalent to
turning the world economy off; nobody would dare try. And even if somebody
did dare, the AI would consider it attempted murder, and that's not very
friendly. It is not wise to tickle a sleeping dragon.

> So long as an AI lacks the ability to create and integrate into its
> architecture alternative (presumably improved) hardware or simply more of
> the same its growth rate is constrained.

So humans look at the AI and see it is doing something with its innards. We
know it must perform routine maintenance from time to time, but is that all
it's doing, or is it upgrading itself? Nobody knows. No human being has more
than a vague understanding of how this colossal computer works anymore, not
its hardware and not its software.

> whether humans would allow themselves to be placed in what might become a
> strategically difficult position

You're talking about out-thinking somebody that is far far smarter than you
are, and that is impossible. The AI will be more brilliant at strategy than
any human who ever lived and so will find it absurdly easy to fool us if
he wants to, and if we're trying to kill him he will want to. You can count
on it.

> humans may only allow the Singularity to happen at a rate at which they
> can adapt.

That will never happen. If we deliberately make our AI stupid, or even slow
down the rate at which it is improving itself, there is no guarantee country
X will do the same thing. I don't care how solemnly they swear they will make
their AI stupid too; the temptation to cheat on the agreement would be
absolutely enormous, literally astronomical. The only thing to do is charge
ahead full steam and make the best AI you can.

  John K Clark
