[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Mon Jun 4 04:12:54 UTC 2007


Christopher writes

>> Lee Corbin wrote:
>>
>> Suppose an AI is somehow evolved to solve physics questions.
>> Then during its evolution, predecessors who deviated from the
>> goal (by wasting time, say, reading Kierkegaard) would be
>> eliminated from the "gene pool". More focused programs
>> would replace them.
> 
> Suppose businesses evolved that attempted to solve physics questions.

The analogy doesn't fit well, it seems to me. Firstly, businesses as we know
them attempt to survive (because humans are in charge), and there
are cases where they completely change the line of business they're in.

> During their evolution, one might expect that businesses who deviated
> from this goal (by wasting time, say, researching competitors, executing
> alliances and buyouts, updating employee skill sets, lobbying for
> beneficial legislation, and transplanting themselves to foreign soil)
> would be eliminated from the "gene pool".  More directly goal focused
> businesses would replace them... 

Secondly, "researching competitors" really and obviously does contribute
to their survival in the world of free markets, whereas in my example,
studying the Danish existentialist has nothing to do, we should assume,
with physics.

In Stathis's example, I supposed that the ability to solve physics problems
was judged by fairly stringent criteria of some kind, perhaps by humans,
or perhaps by other machines. "Executing alliances and buyouts,
transplanting themselves to foreign soil", etc., might, I guess, help
either an AI or a business solve physics problems.
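For what it's worth, the selection pressure I have in mind can be put in a
few lines of hypothetical Python. The names and numbers below are made up,
and "physics_skill" is just a stand-in for however the stringent judges
would score a program; the point is only that nothing but the physics score
enters the fitness function, so any effort spent off-goal is selected
against:

    import random

    def physics_fitness(program):
        # Hypothetical scorer: how well the program answers a fixed
        # battery of physics questions (the "stringent conditions",
        # judged by humans or by other machines).
        return program["physics_skill"]

    def mutate(program):
        # Offspring vary a little in physics skill, and sometimes pick
        # up a side interest (reading Kierkegaard, say) that costs
        # effort but adds nothing to the score being selected on.
        child = dict(program)
        child["physics_skill"] += random.gauss(0, 0.1)
        if random.random() < 0.2:
            child["physics_skill"] -= 0.05   # time spent off-goal
        return child

    population = [{"physics_skill": 1.0} for _ in range(100)]
    for generation in range(1000):
        scored = sorted(population, key=physics_fitness, reverse=True)
        survivors = scored[:50]              # the deviators are culled
        population = [mutate(random.choice(survivors)) for _ in range(100)]

Run long enough, the lineages that kept wandering off the goal simply
aren't there any more.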

Lee
