[ExI] Fwd: ai

Tomaz Kristan protokol2020 at gmail.com
Sat Mar 12 21:47:59 UTC 2016


We already have AIs that will refuse to obey.

John Clark gave the Siri example recently: when you ask Siri to calculate
something too time-consuming, like really big primes, it will decline.
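
That refusal is just a hard resource limit rather than judgment, but the
pattern is easy to sketch. Here is a minimal, purely hypothetical guard in
Python (nothing like Siri's actual internals): estimate the work a request
would take and decline when it exceeds a budget.

# Hypothetical sketch only, not Siri's real logic: the "AI" estimates the
# work a request implies and declines when it exceeds a fixed budget.

WORK_BUDGET = 10**7  # arbitrary ceiling on trial divisions for this toy

def is_prime_or_decline(n: int):
    estimated_work = int(n ** 0.5)   # trial division runs up to sqrt(n)
    if estimated_work > WORK_BUDGET:
        return "Sorry, that would take too long."
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime_or_decline(104729))      # small enough: answers True
print(is_prime_or_decline(10**40 + 7))  # too big: politely declines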



On Sat, Mar 12, 2016 at 10:27 PM, William Flynn Wallace <foozler83 at gmail.com> wrote:

>
> You know that I am barely a beginner re AI, yet I have a very long
> association with intelligence and its measurement and correlates.
>
> One prominent aspect of intelligence is the ability not to do things - to
> inhibit actions.  A large percentage (?) of our neurons are inhibitory in
> nature and others are able to inhibit at times.  Much of what we call
> maturity is the intelligence to restrain ourselves from acting out every
> impulse and emotion.
>
> If you were to walk up to people you know or strangers on the street, and
> ask them to spit in your face, what would happen?  My guess is that you
> wouldn't get spit on even once, unless you ask a bratty two-year-old.
>
> What is the equivalent in AI?  Are there instructions you can feed to one
> and it will fail to carry them out?  Like HAL?
>
> I have no idea, but I do think that if this never happens, then you don't
> have a truly intelligent entity, much less a moral or ethical one trained
> in the simplest manners.  Of course you would have to ask it to do
> something independent of earlier programming.
>
> (I think I see a flaw in the above and it has to do with generalization,
> but I'll let it go for now.)
>
> bill w
>
>


-- 
https://protokol2020.wordpress.com/

