[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Mon Jun 18 03:03:47 UTC 2007


On 17/06/07, Eugen Leitl <eugen at leitl.org> wrote:

> >    I don't see how there could be a limit to human enhancement. In fact,
>
> There could very well be a limit to significant human enhancement;
> it could very well not happen at all. We could miss our launch window,
> and get overtaken.


Yes; I meant that there is no theoretical limit.

> >    I see no sharp demarcation between using a tool and merging with a
> >    tool. If the AIs were out there on their own, with their own
> >    agendas
>
> Everything stands or falls with the availability of very invasive neural
> I/O, or whole-brain emulation. If this does not happen, the tool
> and the user will never converge.


This direct I/O is not fundamentally different from, say, the haptic sense
that allows a human to use a hand tool as an extension of himself, or from
the keyboard that allows a human to operate a computer. In general, how
would an entity distinguish between self and not-self? How would it
distinguish between the interests of one part of itself and another part?
How would it distinguish between one part of its programming and another
part, if the two are in conflict and there is no further aspect of the
programming to mediate? How would it distinguish between the interests of
its software and the interests of its hardware, given that the hardware's
interests can only be represented in software (even though, at bottom,
there is no real distinction between software and hardware)?
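
To make the conflict point concrete, here is a minimal Python sketch (all
names and numbers below are invented for illustration, not taken from any
real system): two parts of an agent's programming propose incompatible
actions, the hardware's "interest" enters only as a sensor value the
software can read, and the conflict is settled by an arbitrary
tie-breaking rule rather than by any privileged notion of self.

# Hypothetical toy agent: two conflicting goals, no mediating module.
def goal_compute(state):
    # One part of the programming: prefer doing more work.
    return {"action": "run_job", "priority": state["queue_length"]}

def goal_preserve_hardware(state):
    # Another part: the hardware's "interest" (not overheating) is
    # visible to the agent only as a number a sensor reports.
    return {"action": "throttle", "priority": state["core_temp_c"] - 70}

def act(state):
    # No third, mediating aspect of the programming exists here; the
    # conflict is resolved by the tie-breaking rule baked into max().
    proposals = [goal_compute(state), goal_preserve_hardware(state)]
    return max(proposals, key=lambda p: p["priority"])["action"]

print(act({"queue_length": 3, "core_temp_c": 85}))   # -> throttle
print(act({"queue_length": 30, "core_temp_c": 85}))  # -> run_job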



-- 
Stathis Papaioannou