[ExI] Singularity Summit on Fox News

Robert Picone rpicone at gmail.com
Sat Sep 15 05:38:49 UTC 2007


On 9/14/07, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 9/14/07, hkhenson <hkhenson at rogers.com> wrote:
> > Consider the plant kudzu.
>
> Yes. I will be the first to admit that I do not care much for the
> success of kudzu, nor do I identify much with its destiny. I think
> that a kudzu individual may have a different view, however.
>
> > >but above all why you or
> > >I should care, especially if we were to be (physically?) dead anyway
> > >before the coming of such an AI.
> >
> > You can't count on it, not unless you take steps to die real
> > soon.  It is very likely someone will be alive at the point AIs reach
> > takeoff.  The problem with AIs thinning out the world's excess
> > population is that it's hard to imagine a situation where unfriendly
> > AIs didn't make a clean sweep.
>
> Why don't we make as clean a sweep as possible of other species, or
> for that matter of silicon crystals? Because if they are not in our
> way, we basically do not care (not even counting the living species
> or other chemical configurations we actually like). Why should AIs?

We do, however, see no problem with killing the things we regard as
alien when it is to our advantage to kill them, or when their deaths
are a consequence of something that is to our advantage...  Of course
it would almost never be to an AI's advantage to wipe out the human
race, just as it isn't to our advantage to wipe out geese, but there
would definitely be circumstances in which it would be to an AI's
advantage to do something that might kill a few humans...  It's a
mistaken idea that unfriendly AI is about AI wanting to wipe out the
human race; killing without scruples whenever they can get away with
it and benefit from it is more than enough.

For the most part, Americans don't care that our actions have
indirectly caused the deaths of a good 400,000+ people on the other
side of the world, so long as we don't have any direct connection to
those people...  An AI would be likely to have a lot more influence
over these things than an individual human would, but why would an AI
bother to avoid these sorts of situations unless something in its
design made it want to avoid them even more than a human would?
