[ExI] singularity summit on foxnews

Robert Picone rpicone at gmail.com
Sat Sep 15 23:00:55 UTC 2007


On 9/15/07, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 9/15/07, Robert Picone <rpicone at gmail.com> wrote:
> > We do, however, see no problem with killing those things we regard
> > as alien when it is to our advantage to kill them, or when it is a
> > consequence of something that is to our advantage....  It of course
> > would almost never be to an AI's advantage to wipe out the human
> > race, just as it isn't to our advantage to wipe out geese, but there
> > would definitely be circumstances in which it would be to an AI's
> > advantage to do something that might kill a few humans...  It's a
> > mistaken idea that unfriendly AI is about AI wanting to wipe out the
> > human race; killing without scruples whenever it can get away with
> > it and benefits from it is more than enough.
>
> I do not see how this would be so radically different from what human
> beings have always been doing not only to different species, as you
> say, but *to one another*. Do you suggest preventing the further
> enhanced or un-enhanced biological reproduction of the members of our
> species to avoid this risk?
>
> Therefore, either you feel you are personally and directly
> threatened by a specific AI rather than, say, by another specific
> human being, or it is unclear why the level of your concern about
> murders or genocides not involving you personally should depend on
> the biological or non-biological nature of their perpetrators.
>

Actually, I don't see a huge difference between this hypothetical AI
and a human either, but you'll notice that for the most part, humans
don't kill whenever it would benefit them.  I am, for example, fairly
confident that I could kill a man on the street and take whatever was
in his wallet without being caught, and yet I have no desire to do
this.

The major difference is that there is already a system in place to
prevent the creation of unfriendly humans.  A human's behaviors are,
to the extent that they can be, designed by a society.  Throughout
that design process, the society attempts to install safeguards to
discourage behavior that threatens its other members.  If an
unfriendly biological intelligence is developed, it is most often
caught before it reaches human-level intelligence and submitted for
review of some sort (psychiatry, school or parental punishment, or
the legal system) to evaluate what redesign is necessary.

This system isn't perfect, but at least as far as murders are
concerned it seems to succeed about 99.99% of the time, which would
be more than satisfactory to me for the development of AI.

The other factor is that humans have little trouble empathizing with
most other humans they are likely to encounter.  A sort of intuitive
version of the golden rule comes into play to affect our behavior
when we can put ourselves in someone else's shoes.  On average,
though, we have very little ability to do this when dealing with
foreign cultures or other species.  It follows that if an AI does not
think and feel like a human (which, short of brain simulation, I
count as unlikely), it will constantly be dealing with an alien
culture.  If we're terrible at this ourselves, how could we expect an
AI to be good at it unless it were a feature of the design?

> > For the most part, Americans don't care if our actions indirectly
> > caused the deaths of a good 400,000+ people on the other side of the
> > world, so long as we don't have any direct connections to those
> > people...  An AI would be likely to have a lot more influence over
> > these things than an individual human would, but why would an AI
> > bother to avoid these sorts of situations unless something in its
> > design made it want to avoid them even more than a human would?
>
> So, the difference would be that "individual" AIs would find
> themselves having more influence on life-or-death issues than
> "individual" humans...
>
> Even if this were actually the case, the problem is
> - that I do not really see that "collective" humans are any safer
> for other humans than individual humans are (historical experience
> would rather suggest the opposite);
> - and that the workings of human societies anyway put individual
> humans in the position of unleashing collective - and/or automatic,
> albeit "non-intelligent" - processes not so very different from
> those which could be generated by your unfriendly AI.
>
> Stefano Vaj

Both are good points about our society, but I don't see how they do
anything but support my argument.  When collectives lash out, it has
always been against what is alien to them, namely minorities, foreign
cultures, or both at once.  If AIs acted like humans in this respect,
and frequently collaborated, humans, or some subset of humans, would
likely qualify as a target for such a lashing out.  Consider how many
people out there are hostile to the concept of AI, as you accuse me
of being; do you suppose that hostility would be completely
one-sided?

I do, of course, advocate trying to remove the behaviors that spawn
these issues from human society as well, but the steps to shape the
behaviors of individuals that do not yet exist are much clearer than
the steps to significantly change the behaviors of large groups that
already exist.
