[extropy-chat] Eugen Leitl on AI design

Jeff Davis jrd1415 at yahoo.com
Thu Jun 3 21:06:42 UTC 2004


--- Zero Powers <zero_powers at hotmail.com> wrote,
asking:

> What would the AI
> gain by a _Terminator_ style assault on the human
> race?  I don't see it.
> 
> I guess what I'm asking is where would the interests
> of your AI conflict
> with humanity's interests such that we would have
> reason to fear being
> thrust into the "whirling razor blades?"

If the AI were to notice the fear, paranoia,
instability, and poor impulse control of its human
creators, it might conclude that, for survival
purposes, preemptive measures were called for. 
(Though the theory of preemption does not, in the
current moment, suggest intelligence, super or
otherwise.) Those measures could range anywhere from
benign domination to the ultimate sanction.  

But...

I am of the "intelligence leads inevitably to ethics"
school.  (I consider ethics a form of advanced
rationality, which springs from the modeling and
symbol manipulation emblematic of the quality we
fuzzily refer to as intelligence.)  It has done so
with humans, where the "intelligence"--such as it is,
puny not "super"--has evolved from the mechanical
randomness and cold indifference of material reality.

Evolved, as in arisen out of blunt random chance.

Super-intelligence, then, designed rather than evolved
by puny human intelligence with its first-generation
puny human ethics--"Do as I say, not as I do"--should
logically (or perhaps presumptuously) lead to super-
rationality, which should in turn lead inevitably to
super-ethics.  To my mind, super-ethics is
inconsistent with the venal rape of the universe or
the extirpation of humanity.

YMMV.

Best, Jeff Davis

      "We don't see things as they are, 
             we see them as we are." 
                        Anais Nin





