[ExI] singularity summit on foxnews

Robert Picone rpicone at gmail.com
Tue Sep 18 17:06:34 UTC 2007


On 9/16/07, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On 9/16/07, Robert Picone <rpicone at gmail.com> wrote:
> > The major difference is that there already is a system in place to
> > prevent the creation of unfriendly humans.  A human's behaviors, as
> > much as they can be, are designed by a society.  This society,
> > throughout the design process, attempts to install safeguards to
> > discourage behavior that is threatening to other members of the
> > society.  If an unfriendly biological intelligence is developed, most
> > often it is caught before it has reached human-level intelligence, and
> > submitted for review of some sort (like psychiatry, school/parental
> > punishments, or the legal system) to evaluate what is necessary for
> > redesign.
>
> This sounds a little optimistic to me, but the real point is that I do
> not see why an individual human with a big, stupid computer would be
> so much less dangerous than a slowly evolving individual AI networked
> with, and under the control of, other AIs, and humans with stupid
> computers.
>
> On the contrary, I suppose that security measures are likely to be
> *more* effective in the latter case than they ordinarily are in our
> current society.
>

Well, Keith offered the danger of speed/ability, but I don't subscribe
to that school of thought.  While AIs will doubtless outpace humans
for quite a while, it seems unlikely that a collection of humans aided
by computers would be unable to compete in terms of potential danger.

My problem is more that, from the 2nd to the 5th year of the
first AI's public existence, it, or some other AI created to emulate
it, will be the most influential being on the planet.  It seems
worthwhile to keep something that could quite possibly be a sociopath
from holding that position.

It also seems rather likely that it will be influential in the sense
that subsequent AIs will be created through little more than slight
modifications of its design.  If the first AI is unfriendly, it seems
rather likely that there will be multiple other unfriendly AIs around
before anyone realizes the mistake.

> > The other factor would be that humans have little trouble empathizing
> > with most other humans they are likely to encounter.  A sort of
> > intuitive version of the golden rule comes into play here to affect
> > our behavior when we can put ourselves in someone else's shoes.  On
> > average though, we have very little ability to do this when dealing
> > with foreign cultures or other species.  It follows that if an AI
> > does not think/feel like a human (which, short of brain simulation, I
> > count as unlikely), it will constantly be dealing with an alien
> > culture; if we're horrible at this ourselves, how could we expect an
> > AI to be good at it without it being a feature of the design?
>
> In fact, I do not. But not being a speciesist, and being, e.g.,
> entirely in favour of a further speciation of biological humans, I
> accept that we cannot expect anything more than the kind of empathy we
> currently feel, on average, towards members of other cultures, races
> or species.
>
> There again, empathy is usually more based on proximity and shared
> interests and reciprocal knowledge than it is on "similarity". Many
> people are more concerned for, say, a single pet living with them than
> they are for an entire tribe at the other end of the world, even
> though the latter is immensely closer in biological and "cultural"
> terms.
>

I doubt that this is the usual case; pets seem to be the exception to
the rule because to many people they qualify as family, and as such are
members of someone's human sub-groupings.  A zoo-keeper is more likely
to empathize with a cousin they have never met than with any one animal.
A Southern Baptist who lives next to and shares political interests with
an atheist is more likely to empathize with another Southern Baptist who
was raised in a similar environment but has lived on the other side of
the world their entire life and is their political opposite.  And I am
more likely to empathize with a childhood acquaintance than with the
Mexican immigrant next door.  I'd say perceived similarity/common
experience plays as much of a role as anything else.

> Why should AIs "stick together", and why should humans do the same?
>

I never meant to imply they would.

> I imagine that more complex scenarios are in order, where AIs may
> actively support, and take sides with, the community they belong to.
> For the time being, computers are supportive of the interests of their
> users. This may be reversed in the future - i.e., "users" may become
> supportive of their computer's interests :-) - but I do not see a
> general alliance of all computers against all owners any time soon.
>
> Stefano Vaj

Neither do I, which is why I made the point that I don't expect them
to ever want to annihilate the human race.  But I do expect that a
great many of them will empathize a great deal more with any other AI
than with the subset of humans they have never and will never come
into contact with, and there will be humans they never have contact
with unless you expect a quick end to all poverty.


