[ExI] uploads again

Mike Dougherty msd001 at gmail.com
Tue Dec 25 14:25:17 UTC 2012


On Tue, Dec 25, 2012 at 5:30 AM, Anders Sandberg <anders at aleph.se> wrote:
> The big problem for AI safety is that the good arguments against safety are
> all theoretical: very strong logic, but people don't see the connection to
> the actual practice of coding AI. Meanwhile the arguments for safety are all
> terribly informal: nobody would accept them as safety arguments for some
> cryptographic system.

Is there a good definition of safety?  My own thoughts are terribly
informal too, but is any measure of safety (even outside AI
discussion) rigorously defined, or is it a marketing concept with a
subjective threshold for each participant?

For example, to sell home security systems it helps to show the
living room ransacked, then cut to "mom" holding her scared daughter
while the voiceover asks something to the effect of, "Can you imagine
what _might_ have happened if they were home when this happened?"
*I* imagine a petty burglar would simply have skipped an occupied
house, but they want you to imagine a much worse scenario, because
fear sells the idea of security.  Somehow that product makes you feel
safe.  So once the motion-detection system, searchlights, and razor
wire are installed, the only remaining threats to your property are
fire, flood, hurricane winds, meteor strike, nuclear war, etc., etc.

So where is the threshold for reasonable preventive steps against
reasonably likely threats, especially with respect to AI?  My second
semi-rhetorical question: who is qualified to assert what is
reasonable for AI?
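
One way to make "reasonable" slightly less subjective is bare
expected-loss arithmetic: a precaution is defensible when its cost is
below the probability of the bad event, times the harm it would
cause, times the fraction of that risk the precaution actually
removes.  Here is a toy sketch in Python; every number and name below
is invented for illustration, not a claim about real burglary rates:

def precaution_is_reasonable(p_event, harm, cost, risk_reduction):
    # True if the precaution costs less than the expected harm it avoids.
    expected_harm_avoided = p_event * harm * risk_reduction
    return cost < expected_harm_avoided

# Home-security example: assume a 1% annual burglary chance, a $10,000
# loss, and a system that removes 80% of the risk for $500/year.
print(precaution_is_reasonable(0.01, 10_000, 500, 0.8))
# -> False: $80 of expected harm avoided doesn't justify $500/year.

The catch, and the reason this stays informal for AI, is that p_event
and harm are exactly the numbers nobody agrees on, which is the
subjective threshold all over again.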


