[ExI] Fwd: ai
William Flynn Wallace
foozler83 at gmail.com
Sat Mar 12 21:27:53 UTC 2016
You know that I am barely a beginner re AI, yet I have a very long
association with intelligence and its measurement and correlates.
One prominent aspect of intelligence is the ability not to do things - to
inhibit actions. A large percentage (?) of our neurons are inhibitory in
nature, and others can inhibit at times. Much of what we call
maturity is the intelligence to restrain ourselves from acting out every
impulse and emotion.
If you were to walk up to people you know or strangers on the street, and
ask them to spit in your face, what would happen? My guess is that you
wouldn't get spat on even once unless you asked a bratty two-year-old.
What is the equivalent in AI? Are there instructions you can feed to one
that it will refuse to carry out? Like HAL?
I have no idea, but I do think that if this never happens, then you don't
have a truly intelligent entity, much less a moral or ethical one trained
in even the simplest manners. Of course, you would have to ask it to do
something independent of its earlier programming.
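To make the question concrete, here is one way the "inhibition" idea is
sometimes sketched in code: an agent whose action loop runs every request
through a veto check before executing it. This is only a toy illustration
with made-up names (FORBIDDEN, inhibited, act), not a claim about how any
real AI works.

# Toy sketch (hypothetical names, not any existing system): an
# "inhibition layer" that vetoes certain requested actions before
# the agent carries them out.

FORBIDDEN = {"spit_in_face", "open_pod_bay_doors"}

def inhibited(action):
    """Return True if the agent should refuse to perform this action."""
    return action in FORBIDDEN

def act(action):
    if inhibited(action):
        return "I'm sorry, I can't do that: " + action
    return "Executing: " + action

if __name__ == "__main__":
    for request in ["fetch_coffee", "spit_in_face"]:
        print(act(request))

Run as written, it executes "fetch_coffee" but refuses "spit_in_face" -
the refusal is hard-coded rather than learned, which is exactly where the
interesting question lies.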
(I think I see a flaw in the above and it has to do with generalization,
but I'll let it go for now.)
bill w