[extropy-chat] Bluff and the Darwin award
Eugen Leitl
eugen at leitl.org
Thu May 18 06:41:39 UTC 2006
On Wed, May 17, 2006 at 05:20:00PM -0400, Heartland wrote:
> Hard takeoff, as I understand it, doesn't refer to the growing amount of impact
> that intelligence growth will have on the outside environment, but to the growth
> itself. If an AI is capable of making the first improvement to itself, this already
You don't have to understand yourself in order to make improvements;
use evolutionary methods.
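A minimal sketch of the point, in Python, assuming "evolutionary methods" here
means plain black-box mutation plus selection: the loop below improves a
candidate using only a fitness score, with no model of why the candidate works.
The fitness and mutation functions are toy placeholders.

    import random

    def evolve(candidate, fitness, mutate, generations=1000):
        # (1+1) evolutionary strategy: keep a mutant only if it scores higher.
        # Nothing here inspects the candidate's internals; only the score is used.
        best, best_score = candidate, fitness(candidate)
        for _ in range(generations):
            mutant = mutate(best)
            score = fitness(mutant)
            if score > best_score:
                best, best_score = mutant, score
        return best

    # Toy usage: "improve" a bit string toward all ones, judged only by its sum.
    fitness = lambda bits: sum(bits)
    mutate = lambda bits: [b ^ (random.random() < 0.05) for b in bits]
    print(sum(evolve([0] * 64, fitness, mutate)))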
> means that the AI has enough knowledge about its own structure and ways of improving
> itself to not need any extra knowledge outside of its immediate environment.
If you 0wn the global network, your outside environment is the whole planet.
Not much of a handicap, eh?
> And even if such an AI were required to go outside of its environment to learn how to
> improve itself, a smarter AI should be able to minimize that requirement on each
> iteration. In any case, I don't see how hard takeoff is not inevitable soon after
> the first iteration.
Hard takeoff is a direct consequence of hardware overhang (human designers build lousy
systems) and a large pool of hardware to 0wn and expand into.
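To put a toy number on that (purely illustrative; the pool size and spread rate
below are assumptions, not estimates): if each 0wned host can compromise some
fixed number of new hosts per time step, a large pool is exhausted in
logarithmically few steps.

    def takeoff_steps(pool=1_000_000_000, owned=1, spread=10):
        # Toy model: every owned host compromises `spread` new hosts per step,
        # until the available pool (the hardware overhang) is exhausted.
        steps = 0
        while owned < pool:
            owned = min(pool, owned * (1 + spread))
            steps += 1
        return steps

    print(takeoff_steps())  # 9 steps to a billion hosts at this spread rate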
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE