[ExI] inference paradigm in ai
William Flynn Wallace
foozler83 at gmail.com
Mon Sep 4 21:20:30 UTC 2017
On Mon, Sep 4, 2017 at 2:35 PM, Adrian Tymes <atymes at gmail.com> wrote:
> On Mon, Sep 4, 2017 at 12:24 PM, William Flynn Wallace
> <foozler83 at gmail.com> wrote:
> > Some say that we will put moral sense into the AIs. And if we install
> > our abilities but not our faults, then will the AIs see themselves as
> > superior to us?
> Given the definition of "abilities" and "faults", would not such AIs
> be superior by definition?
Yes. But would they see themselves as such? And if so, and if they had a
moral sense, would it not occur to them to refuse some things we might have
them do? Think of Asimov's robot laws: an AI with those laws installed would
not obey a command to hurt someone, and might tell us that we were wrong to
give such a command.