[ExI] Wired article on AI risk

Mike Dougherty msd001 at gmail.com
Wed May 23 02:49:17 UTC 2012

On Tue, May 22, 2012 at 12:23 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> As a computer scientist I would have to disagree slightly with this,
> at least the way we use "stupid" computers today. The issue with AI
> isn't that it is dangerous, but rather by its very nature it is not as
> predictable as a programmed computer. Yes, programmed computers with
> bugs can cause airplanes to crash, but it is unlikely that a stupid
> computer of today is going to rise up and take over the world. Yet
> just such things are possible with AGI. If you can counter this
> argument, I'm certainly interested in what you would have to say.

Though stupid computers, as tools, can be instructed by an AGI to
"act" before mere humans can counter that action.  Clearly the only way to
be "safe" is to turn off your computer right now and be "friendly" to
all the other newly displaced former computer users.  Of course this
safety will only last until the largest neighbor decides he wants what
you have.

j/k I'm sure nothing we build can ever be used against us even by accident.

j/k still, I'm actually pretty sure the future is somewhere between
the two extremes.  I hope to benefit more than suffer from whatever
the future brings, but that might be optimism speaking.
