[ExI] Yes, the Singularity is the greatest threat to humanity
Richard Loosemore
rpwl at lightlink.com
Sun Jan 16 01:12:41 UTC 2011
Michael Anissimov wrote:
> I made this blog post in response to a post at Singularity Hub
> responding to NPR coverage of the Singularity Institute:
>
> http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/
>
Michael,
There is a serious problem with this. You say:
> There are “basic AI drives” we can expect to emerge in sufficiently
> advanced AIs, almost regardless of their initial programming
... but this is -- I'm sorry to say -- pure handwaving.
On what theoretical grounds would you conclude that some basic AI
drives will "emerge" almost regardless of their initial programming?
(And please do not cite Steve Omohundro's paper of the same name: that
paper contains no basis to support the claim.)
There are currently no AGI motivation systems that function well enough
to support a general-purpose intelligence. There are control systems
for narrow AI, but these do not generalize well enough to make an AGI
stable. (In simple terms, you cannot insert a sufficiently general
top-level goal and have any guarantees about the overall behavior of
the system, because that top-level goal is so abstract.) So we
certainly cannot argue from existing examples.
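(To make that parenthetical point concrete, here is a deliberately toy
sketch in Python -- not any real AGI design, and every goal and
function name in it is hypothetical. Two agents receive the identical
abstract top-level goal, yet their behavior is fixed entirely by
whatever sub-goal expansion machinery each one happens to contain, so
the goal by itself guarantees nothing.)

# Toy illustration (hypothetical, not a real AGI architecture): the same
# abstract top-level goal, handed to two different expansion mechanisms,
# yields entirely different concrete behavior.

TOP_LEVEL_GOAL = "promote human wellbeing"   # deliberately abstract

def agent_a_expand(goal):
    # Agent A happens to interpret the goal in economic terms.
    return ["maximize global GDP", "automate all labor"]

def agent_b_expand(goal):
    # Agent B happens to interpret the same goal in medical terms.
    return ["eradicate disease", "extend healthy lifespan"]

for name, expand in [("A", agent_a_expand), ("B", agent_b_expand)]:
    print("Agent", name, "pursues:", expand(TOP_LEVEL_GOAL))

# Identical top-level goal, divergent behavior: any guarantee has to come
# from the design of the expansion (drive) machinery, not from the goal.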
To "drive" an AGI, you need to design its drive system. What you then
get is what you put in. There are at least some arguments to indicate
that drives can be constructed in such a way as to render the behavior
predictable and stable. However, even if you did not accept that that
had been demonstrated yet, it is still a long stretch to go to the
opposite extreme and assert that there are drives that you would expect
to emerge regardless of programming, because that assertion is
predicated on knowledge of AI drive systems that simply does not exist
at the moment.
Richard Loosemore