[ExI] singularity summit on foxnews

Stefano Vaj stefano.vaj at gmail.com
Sat Sep 15 20:16:07 UTC 2007


On 9/15/07, Robert Picone <rpicone at gmail.com> wrote:
> We do however see no problem with killing those things that we regard
> as alien when they are in our advantage to kill, or when it is a
> consequence of something that is in our advantage....  It of course
> would almost never be in an AI's advantage to wipe out the human race,
> just as it isn't in our advantage to wipe out geese, but there would
> definitely be circumstances when it would be in an AI's advantage to
> do something which might kill a few humans...  It's a mistaken idea
> that unfriendly AI is about AI wanting to wipe out the human race;
> killing without scruples whenever they can get away with it and it
> benefits them is more than enough.

I do not see how this would be so radically different from what human
beings have always done, not only to other species, as you say, but
*to one another*. Do you suggest preventing the further biological
reproduction, enhanced or un-enhanced, of the members of our species
in order to avoid this risk?

Accordingly, either you feel personally and directly threatened by a
specific AI rather than by, say, a specific human being; or it is
unclear why the level of your concern about murders or genocides not
involving you personally should depend on whether their perpetrators
are biological or non-biological.

> For the most part Americans don't care if our actions indirectly
> caused the deaths of a good 400,000+ people on the other side of the
> world so long as we don't have any direct connections to these
> people...  An AI would be likely to have a lot more influence over
> these things than an individual human would, but why would an AI
> bother to avoid these sorts of situations unless something in its
> design made it want to avoid these events even more than a human
> would?

So the difference would be that "individual" AIs would find
themselves with more influence over life-or-death issues than
"individual" humans...

Even if this were actually the case, the problem is
- that I do not really see that "collective" humans are any safer for
other humans than individual humans are (historical experience would
rather suggest the opposite);
- and that the workings of human societies already put individual
humans in a position to unleash collective - and/or automatic, albeit
"non-intelligent" - processes not so very different from those that
could be generated by your unfriendly AI.

Stefano Vaj
