[ExI] Echoes of the Invincible

Eugen Leitl eugen at leitl.org
Wed Jun 26 08:59:09 UTC 2013


On Tue, Jun 25, 2013 at 08:39:48PM -0700, Samantha Atkins wrote:

> Why conflate explicit codifying with being able to build an AGI at all?  It is much more likely to be a hybrid of symbolic and sub-symbolic techniques.  Significant parts of an AGI need to self-unfold/develop from the "seed", which is about the best we can likely come up with as human programmers and system designers.  I don't think that any godhood is required.  

I'm willing to bet that we're barely smart enough to crib
off biology, or to create boundary conditions for emergence.
Both approaches result in evolutionary systems.

I think the naturally inspired and especially naturally
derived intelligences have the best story and track
record so far.
 
> > > This is what I mean when I say there is no mechanism. People
> > > armwave a lot, but that's unfortunately not enough.
> > > 
> > 
> > 
> > ### I am assuming you expect that superhuman AI will be produced, just
> > not using explicit programming techniques.
> > 
> > Leaving aside the question of what programming techniques are employed
> > in creating it, do you disagree that a superhuman AI with a stable
> > goal system could exist? Maybe it could only be evolved using
> > evolutionary programming techniques, maybe built by something more
> > directly controlled by programmers, but an AI with a stable goal
> > system, capable of cooperating with its copies, would create a stably
> > non-evolving society, as long as the AI were smart enough to suppress
> > the emergence of competing, evolving replicators.
> > 
> > 
> 
> 
> Why is a stable goal system being mixed up with a stably non-evolving society?  

How would you implement a stable goal system in an animal?
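
In software, at least, the nearest analogue to the quoted "evolved
using evolutionary programming techniques" with a stable goal is
selection against a fitness function that never changes during the
run. A minimal toy sketch in Python (names are illustrative, not
taken from any real AGI system; the population keeps evolving, only
the goal is frozen):

# Toy (mu+lambda)-style evolutionary loop whose "goal" (the fitness
# function) is fixed for the entire run: the selection pressure never
# changes, even though the population keeps evolving.
import random

TARGET = [1] * 32          # the unchanging goal: an all-ones bitstring

def fitness(genome):
    # Stable goal system: this function is never modified during the run.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]    # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in parents]
    return max(population, key=fitness)

print(fitness(evolve()), "of", len(TARGET))

The toy only shows what "stable goal" means mechanically (the
selection criterion is held fixed), not whether an animal, or a
spatially distributed SAI, could actually hold one fixed.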

> They are not at all the same thing.   Do humans have more or less stable goal systems?  

No, and it's a very good thing.

> I would say yes, broadly speaking, as there is a common set of root 

Extremely broadly speaking. Speaking so broadly, specific
directions become meaningless. Evolution has no specific purpose;
even if it pushes up complexity, that doesn't mean it abandons the
lower tiers.

> goals most people have and divergence among individuals as to the 
> particulars of values sufficient to satisfy those and then more 
> individualized specific goals.  But even these last are a bit range-bound and not unstable per se. 
> 
> Has this led to a non-evolving society?  Nope. 
> > 
> > I agree that the stable-goal SAI is likely to perish if pitted against
> > an ecosystem of less stable, evolving SAIs but this is not the
> > situation I am considering here.
> > 
> > 
> 
> I don't see that a relatively stable goal structure at all implies a lack of considerable flexibility, so I challenge what seems to be the root premise.

A stable goal system in a spatially distributed system provably
results in a useless, brittle system.
 
> Why would it bother to avoid all competitors arising?  Why would it not prize diversity and new views and minds and capabilities at least as much as we do? 

Biological diversity is nice. It can also kill you. 


