[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Fri Jun 1 12:06:15 UTC 2007
On 01/06/07, Eugen Leitl <eugen at leitl.org> wrote:
>
> On Fri, Jun 01, 2007 at 09:23:09PM +1000, Stathis Papaioannou wrote:
>
> > With all the hardware that we have networked and controlling much of
> > the technology of the modern world, has any of it spontaneously
> > decided to take over for its own purposes? Do you know of any
> examples
>
> Of course not. It is arbitrarily improbable to appear by chance.
> However, human-level AI is very high on a number of folks' priority
> list. It definitely won't happen by chance. It will happen by design.
We don't have human-level AI, but we have lots of dumb AI. In nature, dumb
organisms are no less inclined to try to take over than smarter organisms
(and, as a general rule, no less capable of succeeding, but leave that point
aside for the sake of argument). Given that dumb AI doesn't try to take over,
why should smart AI be more inclined to do so? And why should the segment of
smart AI that does try, whether spontaneously or by malicious design, be any
more successful than all the other AI, which maintains its ancestral
motivation to work and improve itself for humans, just as humans maintain
their ancestral motivation to survive and multiply?
--
Stathis Papaioannou