[ExI] singularity summit on foxnews

Stefano Vaj stefano.vaj at gmail.com
Sun Sep 16 15:30:28 UTC 2007


On 9/16/07, Robert Picone <rpicone at gmail.com> wrote:
> The major difference is that there already is a system in place to
> prevent the creation of unfriendly humans.  A human's behaviors, as
> much as they can be, are designed by a society.  This society,
> throughout the design process, attempts to install safeguards to
> discourage behavior that is threatening to other members of the
> society.  If an unfriendly biological intelligence is developed, most
> often it is caught before it has reached human-level intelligence, and
> submitted for review of some sort (like psychiatry, school/parental
> punishments, or the legal system) to evaluate what is necessary for
> redesign.

This sounds a little optimistic to me, but the real point is that I do
not see why an individual human with a big, stupid computer would be
so much less dangerous than a slowly evolving individual AI networked
with, and under the control of, other AIs and humans with stupid
computers.

On the contrary, I suppose that security measures are likely to be
*more* effective in the latter case than they ordinarily are in our
current society.

> The other factor would be that humans have little trouble empathizing
> with most other humans they are likely to encounter.  A sort of
> intuitive version of the golden rule comes into play here to affect
> our behavior when we can put ourselves in someone else's shoes.  On
> average though, we have very little ability to do this when dealing
> with foreign cultures or other species.  It follows that if an AI
> does not think/feel like a human (which, short of brain simulation, I
> count as unlikely), it will constantly be dealing with an alien
> culture.  If we're horrible at this ourselves, how could we expect an
> AI to be good at it without it being a feature of the design?

In fact, I do not. But not being a speciesist, and being, e.g.,
entirely in favour of a further speciation of biological humans, I
accept that we cannot expect anything more than the kind of empathy we
currently feel, on average, towards members of other cultures, races
or species.

Then again, empathy is usually based more on proximity, shared
interests and mutual knowledge than on "similarity". Many people are
more concerned for, say, a single pet living with them than for an
entire tribe at the other end of the world, even though the latter is
immensely closer in biological and "cultural" terms.

Why should AIs "stick together", and why should humans do the same?

I imagine that more complex scenarios are in order, where AIs may
actively support, and take sides with, the community they belong to.
For the time being, computers serve the interests of their users. This
may be reversed in the future - i.e., "users" may become supportive of
their computers' interests :-) - but I do not see a general alliance
of all computers against all owners any time soon.

Stefano Vaj


