[ExI] Fwd: ai

Adrian Tymes atymes at gmail.com
Sat Mar 12 21:55:40 UTC 2016


On Sat, Mar 12, 2016 at 1:27 PM, William Flynn Wallace <foozler83 at gmail.com>
wrote:

> You know that I am barely a beginner re AI, yet I have a very long
> association with intelligence and its measurement and correlates.
>
> One prominent aspect of intelligence is the ability not to do things - to
> inhibit actions.  A large percentage (?) of our neurons are inhibitory in
> nature and others are able to inhibit at times.  Much of what we call
> maturity is the intelligence to restrain ourselves from acting out every
> impulse and emotion.
>
> If you were to walk up to people you know or strangers on the street, and
> ask them to spit in your face, what would happen?  My guess is that you
> wouldn't get spit on even once unless you ask a bratty two year old.
>
> What is the equivalent in AI?  Are there instructions you can feed to one
> that it will refuse to carry out?  Like HAL?
>

That's possible - even common - with current systems.  It's usually called
"safety", or some variant of that term.
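As a toy illustration of the pattern (every name here is made up for the
example; no real system's API is implied), it's just a check that runs
before any requested action and declines the ones that fail it:

# Toy sketch of refusal-as-safety.  Hypothetical names throughout.

UNSAFE_ACTIONS = {"vent_atmosphere", "disable_life_support"}

def execute(action: str) -> str:
    """Carry out an action only if it passes the safety check."""
    if action in UNSAFE_ACTIONS:
        # The software analogue of an inhibitory neuron:
        # recognize the request, then decline to act on it.
        return f"Refusing {action!r}: flagged as unsafe."
    return f"Executing {action!r}."

print(execute("rotate_antenna"))    # Executing 'rotate_antenna'.
print(execute("vent_atmosphere"))   # Refusing 'vent_atmosphere': ...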

For example, the rockets that CubeCab is designing are intended to fly
along a certain trajectory...but once they leave the aircraft they're
autonomous, having to make their own decisions.  Should they detect that
they are significantly off course, they are programmed to stop flying.
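The general shape of such a check is simple, even if a flight-qualified
version is not.  A minimal sketch, with entirely made-up numbers and
function names (nothing here is CubeCab's actual design):

import math

MAX_DEVIATION_M = 500.0  # assumed abort threshold; real limits vary by mission

def terminate_thrust():
    # Stand-in for a real flight-termination command.
    print("Thrust terminated; vehicle stops flying.")

def guidance_step(actual_xyz, planned_xyz):
    """Abort if the vehicle has strayed too far from its planned trajectory."""
    if math.dist(actual_xyz, planned_xyz) > MAX_DEVIATION_M:  # Python 3.8+
        terminate_thrust()
        return "ABORT"
    return "NOMINAL"

print(guidance_step((0, 0, 10000), (0, 0, 10050)))     # on course -> NOMINAL
print(guidance_step((2000, 0, 10000), (0, 0, 10050)))  # far off -> ABORT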

I'm not sure I should get into details of how this happens (due to the ITAR
laws: details might qualify as "technical data"), or the many redundant
checks that go beyond just "programming" to make sure no rogue rocket
steers itself into a city.  But the techniques for this are decades old and
widely accepted.