[ExI] Universally versus 'locally' Friendly AI

Richard Loosemore rpwl at lightlink.com
Sun Mar 13 15:42:24 UTC 2011


spike wrote:
> Kelly Anderson wrote:
>> ...I think we're only going to get one chance at this. I think that's why
> it's so important that we select really good parents to raise these first
> AGIs...
> 
> Indeed?  We select?  Agreed it is *important* we select, but we do not and
> cannot select.  Whoever is successful in figuring out how to create AGI
> selects themselves.

First of all, the "parenting" of the first AGI will not be a Mom and Pop 
operation (so to speak), because the process of development will involve 
multiple trials during which the dynamics will be meticulously observed. 
  My own current plan involves running very large numbers of (contained) 
child development experiments to see how the dynamics of the system's 
motivation mechanism actually work in practice.  During these 
experiments there will be automatic systems looking for "errant" 
patterns of thought: if the system starts to dwell on ideas that 
involve negativity (of various kinds), we will want to know about it, 
and be able to do a trace to find out how it got into that state.
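To make that concrete, here is a minimal sketch in Python of what such 
an automatic monitor might look like.  Everything in it is hypothetical 
(the names, the idea of a "negativity scorer", the alarm threshold); it 
is meant only to illustrate the shape of the mechanism: score each 
thought-pattern, keep a rolling trace, and dump the trace when a score 
crosses the alarm level.

    # Hypothetical sketch of an "errant thought" monitor.  Assumes the
    # contained AGI exposes a stream of thought records and that we have
    # some scorer mapping a thought to a negativity value in [0, 1].
    from collections import deque

    TRACE_DEPTH = 1000          # how many recent thoughts to keep for tracing
    NEGATIVITY_THRESHOLD = 0.8  # alarm level; purely illustrative

    class ThoughtMonitor:
        def __init__(self, scorer):
            self.scorer = scorer              # callable: thought -> [0, 1]
            self.trace = deque(maxlen=TRACE_DEPTH)

        def observe(self, thought):
            """Record every thought; raise an alarm on errant patterns."""
            score = self.scorer(thought)
            self.trace.append((thought, score))
            if score > NEGATIVITY_THRESHOLD:
                self.alarm(thought, score)

        def alarm(self, thought, score):
            # In a real experiment this would halt the run and hand the
            # full trace to the experimenters for analysis.
            print(f"ALARM: negativity {score:.2f} on thought {thought!r}")
            for i, (t, s) in enumerate(self.trace):
                print(f"{i:5d}  {s:.2f}  {t}")

The point of keeping the trace is exactly the "find out how it got into 
that state" step above: the alarm is useless without the causal history 
that led up to it.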

Only after sorting through different types of motivation mechanism (or, 
more likely, different balances of parameters within the main MM) will 
some AGIs be allowed to go through longer periods of development, toward 
full maturity.  And even then the thoughts inside will be monitored 
continually, with automatic alarms set to go off if the system begins to 
think about breaking free of its motivation mechanism and experimenting 
with violent motivations.
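Again as a sketch only (the parameter names are invented for 
illustration), the "different balances of parameters" stage might look 
like a sweep over MM configurations, where each candidate is run through 
the contained child-development experiments and only configurations with 
clean monitoring records go forward to longer development:

    from itertools import product

    # Hypothetical parameter grid for the motivation mechanism (MM).
    PARAM_GRID = {
        "empathy_weight":   [0.5, 0.7, 0.9],
        "curiosity_weight": [0.3, 0.5, 0.7],
    }

    def run_contained_experiment(params):
        """Stand-in: a real run would execute one contained
        child-development experiment with these MM parameters and
        return the number of alarms the thought monitor raised."""
        return 0

    def select_candidates():
        survivors = []
        for values in product(*PARAM_GRID.values()):
            params = dict(zip(PARAM_GRID.keys(), values))
            alarms = run_contained_experiment(params)
            if alarms == 0:   # only clean runs proceed to maturity
                survivors.append(params)
        return survivors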

Finally, I anticipate that this process will take place under the 
scrutiny of a large organization dedicated to safety.  No "lone 
inventor" is going to have the resources to do this, so when you talk 
about the selection of parents being somehow out of your control, I 
think you are imagining a situation that is unlikely to occur.

At least, that is the goal of organizations like IEET and FHI:  to 
ensure that the process is transparent.

(It was supposed to be the goal of Lifeboat Foundation as well, but 
that, as they say, is another story).



Richard Loosemore
