[ExI] Echoes of the Invincible

Samantha Atkins sjatkins at mac.com
Wed Jun 26 03:39:48 UTC 2013





On Tuesday, June 25, 2013 at 2:20 PM, Rafal Smigrodzki wrote:

> On Tue, Jun 25, 2013 at 4:11 AM, Eugen Leitl <eugen at leitl.org> wrote:
> > On Tue, Jun 25, 2013 at 12:19:40AM -0400, Rafal Smigrodzki wrote:
> > 
> > > Imagine we actually are the firstborn. An AI is created with a stable
> > > goal system, takes over the world, possibly eradicating us. The AI is
> > > 
> > 
> > 
> > You just described something which can't be built by us, a
> > Darwinian system. Have you noticed how explicit-codifying
> > AI never went anywhere, and most people continued to plod
> > along as if nothing happened? The task is much too hard
> > for anything less than a god (which could have arisen by
> > Darwinian design, and could plunk a hyperfit brittle
> > system down in our midst which would flatten us, but which would be
> > a pitiful toy in the original context it arose in, a context which
> > *will* be expansive, so there is no chance of that hyperfit
> > oddness cropping up in our midst).
> > 

Why conflate explicit codifying with being able to build an AGI at all?  An AGI is much more likely to be a hybrid of symbolic and sub-symbolic techniques.  Significant parts of an AGI need to self-unfold/develop from the "seed", which is about the best we can likely come up with as human programmers and system designers.  I don't think any godhood is required.


 
> > 
> > This is what I mean when I say there is no mechanism. People
> > armwave a lot, but that's unfortunately not enough.
> > 
> 
> 
> ### I am assuming you expect that superhuman AI will be produced, just
> not using explicit programming techniques.
> 
> Leaving aside the question of what programming techniques are employed
> in creating it, do you disagree that a superhuman AI with a stable
> goal system could exist? Maybe it could only be evolved using
> evolutionary programming techniques, maybe by something more directly
> controlled by programmers; but an AI with a stable goal system, and
> capable of cooperating with its copies, would create a stably
> non-evolving society, as long as the AI were smart enough to suppress
> the emergence of competing, evolving replicators.
> 
> 


Why is a stable goal system being conflated with a stably non-evolving society?  They are not at all the same thing.  Do humans have more or less stable goal systems?  Broadly speaking, I would say yes: there is a common set of root goals most people share, divergence among individuals as to the particular values sufficient to satisfy those goals, and then more individualized specific goals.  But even these last are somewhat range-bound, not unstable per se.

Has this led to a non-evolving society?  Nope. 
> 
> I agree that the stable-goal SAI is likely to perish if pitted against
> an ecosystem of less stable, evolving SAIs, but this is not the
> situation I am considering here.
> 
> 

I don't see that a relatively stable goal structure implies any lack of considerable flexibility, so I challenge what seems to be the root premise.

 
> 
> Please note, the reason for the seemingly progressive nature of
> evolution is a combination of the lack of a goal system for the process
> as a whole and the existence of progressively more complex ecological
> niches that can be reached by progressively more complex beings. As
> long as something can evolve (i.e. there is an ecological niche for
> it), it will evolve, since there is no designer with a god's-eye view
> of the process to stop it at some pre-determined point.
> 
> The ecology dominated by SAI with stable goals would be different - if
> the SAI's goals involved prevention of competitor evolution, the SAI
> could nip the competition in the bud, before it became intelligent
> enough to pose too much of a challenge. The stable SAIs could keep
> eradicating upstart microbes forever, without ever having to deal with
> a real opponent.
> 
> 


Why would it bother to prevent all competitors from arising?  Why would it not prize diversity and new views and minds and capabilities at least as much as we do?

- samantha

