[ExI] singularity summit on foxnews

Eugen Leitl eugen at leitl.org
Tue Sep 18 18:38:18 UTC 2007


On Tue, Sep 18, 2007 at 10:06:34AM -0700, Robert Picone wrote:

> Well, Keith offered the danger of speed/ability, but I don't subscribe
> to that school of thought.  While AIs will doubtlessly outpace humans
> for quite a while, it seems unlikely that a collection of humans aided
> by computers wouldn't be able to compete in terms of potential danger.

I'm baffled by that paragraph. Can you explain to us the thinking behind
your conclusions?
 
> My problem would be more that from the 2nd to the 5th year of the

Why not 2nd to 5th day, or 2nd to 5th week, or 2nd to 5th month, 
or 2nd to 5th century? There must be estimates behind your numbers,
but you don't list them.

> first AI's public existence, it, or some other AI created to emulate

Why only one AI, and not many of them?

> it, will be the most influential being on the planet.  It seems

Singular again. Why? 

> worthwhile to keep something that could quite possibly be a sociopath
> from holding this position.

How could it be anything other than rather nonhuman,
unless carefully constructed to be a model of a human baby,
and raised by loving human parents?
 
> it also seems rather likely that it will be influential in that
> subsequent AIs will be created through little more than slight
> modification of the design.  If the first AI was unfriendly, it seems
> rather likely that there will be multiple other unfriendly AIs around
> before anyone realizes a mistake.

If people are frozen in time, and a dynamic machine culture suddenly surges
forward, how can you expect it to always nimbly dance around us
static statues, for its entire subjective eternity? How can you even ask
such a thing? It would be quite horrible, were it not so ridiculously
anthropocentric. We're but a passing phase. Get used to that notion.
If we're lucky, we will become our own successors. If we're less lucky,
we will give rise to our successors, and then get left behind. That
would suck, but it wouldn't be exactly the first time in Earth's
history.
 
> Neither do I, which is why I made the point that I don't expect them
> to ever want to annihilate the human race.  But I do expect that a

That's not nearly enough. 

> great many of them will empathize a great deal more with any other AI
> than a subset of humans they have never and will never come into
> contact with, and there will be those that they will never have
> contact with unless you expect a quick end to all poverty.

I'm again baffled. What does poverty have to do with autonomous AI?

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


