[extropy-chat] On difficult choices (was: Books: Harris; Religion and Reason)

Marc Geddes m_j_geddes at yahoo.com.au
Thu Jan 12 07:12:26 UTC 2006


Let me just clarify my above point.
 
I believe a Singularity (initiated by recursively
self-improving AI) is possible, but that ONLY a
Friendly AI can initiate one, since any AI that
isn't Friendly would be limited and incapable of
recursive self-improvement.
 
Why do I think this?  The point comes down to
something Eli himself realized: that morality is
not something that can be searched for externally,
but something that has to be *built into* the
structure of a mind from the start.  But he (Eli)
doesn't seem to have fully realized the
consequences of his own argument.
 
In order to recursively self-improve, an AI would
have to be able to perform mathematical
self-reflection.  This can only work if such
self-reflection involves the ability to seamlessly
integrate different kinds of knowledge (i.e.
'Consilience') and to grow knowledge, since
mathematical self-reflection, or 'Godelization', by
definition involves an expansion of knowledge.
(For instance, a consistent formal theory strong
enough for arithmetic cannot prove its own
consistency statement; adding that statement as an
axiom yields a strictly stronger theory.)
 
Call the original FAI computer program (or, in
mathematical jargon, 'function') F.
Call a possible improved version of the function
hyper-F.
 
In order for the system to determine that hyper-F
really is a mathematical improvement over F, there
has to be another kind of mathematical entity (call
it M) which expresses the relationship between the
two functions F and hyper-F.  In other words, there
has to be an M that embeds F and hyper-F in a
single mathematical field.
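
To make the shape of this concrete, here's a
minimal Python sketch.  Every name in it (m_score,
self_improve_step, the toy functions) is my own
hypothetical illustration, not anything from Eli:
M is modelled as a scorer that embeds both F and a
candidate hyper-F in one shared evaluation, and the
candidate is accepted only if M certifies the
improvement.

# Illustrative sketch only: m_score, self_improve_step and the
# toy functions are hypothetical stand-ins for M, F and hyper-F.

def m_score(func, test_cases):
    """M: embed a function in a common 'field' by scoring it
    against a shared set of problems."""
    return sum(1 for inp, expected in test_cases
               if func(inp) == expected)

def self_improve_step(f, candidate, test_cases):
    """Accept the candidate hyper-F only if M certifies it as a
    strict improvement over the current F."""
    if m_score(candidate, test_cases) > m_score(f, test_cases):
        return candidate  # hyper-F replaces F
    return f              # otherwise keep the current version

# Toy usage: F mishandles negative inputs; hyper-F fixes that.
cases = [(2, 4), (3, 6), (-1, -2)]
f = lambda x: x * 2 if x >= 0 else None
hyper_f = lambda x: x * 2
print(self_improve_step(f, hyper_f, cases) is hyper_f)  # True

Note the sketch keeps M separate from both versions
of the program, which is just the structural point
made above.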
 
But the process of embedding (or integrating) two
different functions into a single coherent field is
precisely the role played by *Memes*, which form the
basis for morality.  A dynamic positive-sum
interaction between two different people is
*equivalent* to a static mathematical relationship
between two functions.  
 
A well-functioning M generator has to have morality
already built into it.  A coherent relationship
between a possible future version of oneself and
the current version of oneself is *equivalent* to a
positive-sum dynamic interaction between two
different people.
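
As a toy rendering of that claimed equivalence
(again, is_positive_sum and the numbers are
hypothetical, a deliberately crude payoff model),
the same test applies whether the two parties are
two people or a present and a future version of one
program:

def is_positive_sum(payoffs_before, payoffs_after):
    """An interaction is positive-sum if the total payoff
    across all parties goes up."""
    return sum(payoffs_after) > sum(payoffs_before)

# Two people trading: total payoff rises from 10 to 13.
print(is_positive_sum((5, 5), (7, 6)))  # True

# A pure transfer: one side's gain is the other's loss.
print(is_positive_sum((5, 5), (7, 3)))  # False

# The post's claim, restated: replacing F with hyper-F should
# pass the same test, with present self and future self as the
# two parties.
print(is_positive_sum((5, 5), (6, 8)))  # True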
 
Thus, the argument suggests, the problem of recursive
self-improvement is simply a generalized version of
the problem of morality.  Ergo, solving the problem of
recursive self-improvement has to incorporate (or
subsume) a solution to the morality problem.  Ergo,
only Friendly AI can recursively self-improve. 
 
Am I making sense here?


"Till shade is gone, till water is gone, into the shadow with teeth bared, screaming defiance with the last breath, to spit in Sightblinder’s eye on the last day”


		