[ExI] inference paradigm in ai

William Flynn Wallace foozler83 at gmail.com
Mon Sep 4 21:20:30 UTC 2017

On Mon, Sep 4, 2017 at 2:35 PM, Adrian Tymes <atymes at gmail.com> wrote:

> On Mon, Sep 4, 2017 at 12:24 PM, William Flynn Wallace
> <foozler83 at gmail.com> wrote:
> > Some say that we will put moral sense into the AIs.  And if we install
> our
> > abilities but not our faults, then will the AIs see themselves as
> superior
> > to us?
> Given the definition of "abilities" and "faults", would not such AIs
> be superior by definition?

Yes.  But would they see themselves as such?  And if so, and if they had a
moral sense, would it not occur to them to refuse some of the things we
might ask of them?

Think of Asimov's robot laws:  An AI with those laws installed would not
obey a command to hurt someone, and might tell us that we were wrong to
give it.

bill w

> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
