[extropy-chat] AI design

Damien Broderick thespike at satx.rr.com
Thu Jun 3 17:44:33 UTC 2004


At 09:14 AM 6/3/2004 -0700, Adrian mentioned:

>the capacity for SIs to overcome their optimization
>functions and decide on new ones - for example, the
>paperclip maximizer who would realize that paperclips
>only have meaning if there's something for them to
>clip, and other sentient units whose convenience a
>clip can serve.  (Unless you propose that an SI would
>not strive to understand why it does what it does,
>which would seem to strongly interfere with any
>capability for self-improvement.)

Exactly. That was my point, too, or part of it. As I mentioned the other 
day (a point I think was dismissed as some sort of namby-pamby woolly hippie 
love-in comforting delusion), semiosis is social at its core. Unless you 
set out deliberately and with great difficulty to make a psychotically 
one-note uber-'optimizer', the shortest path to AGI must be through two- or 
n-way communication; it has to learn to be a person, or at least to operate 
inside a domain of other communicating persons. It mightn't be made of 
meat, but it is made of lexemes. And that means providing something like 
the inherited templates we have for universal grammar, Gricean implicature, 
etc. It seems Eliezer is making the strong claim that in the absence of 
black box legacy code the *only* kind of AGI we can make *must* fall into a 
one-note attractor and lack any capacity to reason its way free. Even my 
water boiler has a feedback switch that tells it not to keep heating the 
water once it's boiled. Why would a smarter water boiler suddenly become 
prey to stupidity? Why wouldn't it pay attention when I started to yelp?

Damien Broderick 



