[ExI] Brain emulation, regions and AGI [WAS Re: Kelly's future]

Kelly Anderson kellycoinguy at gmail.com
Sat Jun 4 16:10:00 UTC 2011

On Sat, Jun 4, 2011 at 9:47 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
>> On Sat, Jun 4, 2011 at 8:49 AM, Richard Loosemore <rpwl at lightlink.com>
>> wrote:
>> I think this is all more a matter of training or upbringing if you
>> will, rather than leaving some magickal module out. There are people
>> who are wonderfully non-violent, but it's not because they don't have
>> some physical module, but because they were taught that violence is
>> bad, and it sank in.
>> Admittedly Richard did not say if this module was a software or
>> hardware module. And we probably differ a bit on how much is
>> "programming" vs. "training".
> There are two things going on:  (a) the presence of a module, and (b) the
> development of that module.
> That distinction is, I believe, very important.  It is all too easy to start
> thinking that motivations or drives can just emerge from the functioning of
> a system (this assumption is rampant in all the discussions I have ever seen
> of AI motivation), but in truth that appears to be very unlikely.  No
> module, no drive.

I'm not talking about the presence or absence of a particular module
in the original design, nor of a module sneaking its way into
existence. Rather, I am talking about the more mundane unintended
consequences. Whenever I try to come up with an optimization algorithm
for AGI goals, I keep running into the wall (i.e. human extinction)
because I don't think we can come up with such a function that doesn't
have the unintended consequence of making humanity rather irrelevant
and unimportant.
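To make that worry concrete, here is a toy sketch (my own illustration, not anything proposed in this thread, with invented names and a deliberately simplistic greedy rule): an optimizer that values only what appears in its objective will consume everything it is indifferent to first, because nothing in the function tells it not to.

```python
def optimize(resources, objective_weights, steps=7):
    """Greedily convert resources into 'output', guided only by the weights.

    Any resource with no weight (i.e. not valued by the objective) is
    consumed first -- the optimizer is simply indifferent to it.
    """
    output = 0
    for _ in range(steps):
        # Prefer consuming resources the objective assigns the least value to.
        candidates = sorted(resources, key=lambda r: objective_weights.get(r, 0))
        for r in candidates:
            if resources[r] > 0:
                resources[r] -= 1
                output += 1
                break
    return output, resources

# 'habitat' is not mentioned in the objective, so it is drained first,
# even though nothing "hostile" was programmed in.
out, left = optimize({"ore": 5, "habitat": 5}, {"ore": 1})
# -> out == 7, left == {"ore": 3, "habitat": 0}
```

The point of the sketch is only that the harm comes from omission, not from any added module: humanity being "irrelevant" to the objective is enough.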

> So, in the case of humans what that means is that even someone who happens
> to be a non-violent person, as a result of upbringing or conscious decision,
> will almost certainly have that module in there, but it will be very weak.
> When building an AGI, it is not my plan to include modules like that and
> then try to ensure that they stayed weak when the system was growing up ....

Yes, we should absolutely try. It might buy us a few years. ;-)

> that is not my idea of trying to guarantee friendliness!  Instead, we would
> leave the module out entirely.  As a result the system might be unable to
> understand the concept of "losing it" (i.e. outburst of anger), but that
> would not be much of a barrier to its understanding of us.

Anger is a useful emotion in humans. It signals what is important
enough to push back against. If an AGI doesn't have the feeling of
anger, I don't know how it will really understand us. Again, these
differences seem as dangerous as the similarities, just in different
ways.

> Bear in mind that part of my reason for talking in terms of modules is that
> I have in mind a specific way to implement motivation and drives in an AGI,
> and that particular approach is radically different than the "Goal Stack"
> approach that is assumed by most people to be the only way to do it.  One
> feature of that alternate approach is that it is relatively easy to have
> such modules.  (Although, having said that, it is still one of the most
> difficult aspects to implement).

You still have to have some mechanism for determining which module
"wins"... call it what you will.
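As a minimal illustration of what I mean by a "winning" mechanism (my own hypothetical sketch, with invented names; the thread doesn't specify any particular scheme), even the simplest arbitration rule, winner-take-all on current activation, is itself a design decision with consequences:

```python
def arbitrate(modules):
    """Return the name of the drive module with the highest activation.

    `modules` maps module name -> activation strength in [0, 1].
    Ties are broken by alphabetical order, for determinism.
    """
    return max(sorted(modules), key=lambda name: modules[name])

# Whichever module happens to be most activated dictates behavior --
# so the arbitration rule matters as much as the module list itself.
winner = arbitrate({"curiosity": 0.6, "self_preservation": 0.9, "empathy": 0.4})
# -> "self_preservation"
```

Whether you blend drives, pick a single winner, or veto some modules outright, some such rule has to exist, whatever you call it.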
