[ExI] Wired article on AI risk

Kelly Anderson kellycoinguy at gmail.com
Wed May 23 19:14:38 UTC 2012


On Tue, May 22, 2012 at 8:49 PM, Mike Dougherty <msd001 at gmail.com> wrote:
> On Tue, May 22, 2012 at 12:23 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> As a computer scientist I would have to disagree slightly with this,
>> at least given the way we use "stupid" computers today. The issue
>> with AI isn't that it is dangerous, but rather that, by its very
>> nature, it is not as predictable as a programmed computer. Yes,
>> programmed computers with bugs can cause airplanes to crash, but it
>> is unlikely that a stupid computer of today is going to rise up and
>> take over the world. Yet just such things are possible with AGI. If
>> you can counter this argument, I'm certainly interested in what you
>> would have to say.
>
> Though stupid computers, as tools, can be instructed to "act" by AGI
> before mere humans can counter that action.

Yes, and an AGI can also instruct a "stupid" robot to pick up a shovel
and bash you in the head... so that argument is weak. The danger
originates with the AGI, not with whatever tools it happens to use.

> Clearly the only way to
> be "safe" is to turn off your computer right now and be "friendly" to
> all the other newly displaced former computer users.  Of course this
> safety will only last until the largest neighbor decides he wants what
> you have.

Total avoidance of risk ensures starvation.

> j/k I'm sure nothing we build can ever be used against us even by accident.
>
> j/k still, I'm actually pretty sure the future is somewhere between
> the two extremes.  I hope to benefit from, more than suffer because
> of, whatever the future brings, but that might be optimism speaking.

If it's 51% better and 49% worse, that's still net progress, or so
says Kevin Kelly; even a small margin of gain compounds over time.

-Kelly
