[ExI] Brain emulation, regions and AGI [WAS Re: Kelly's future]

Kelly Anderson kellycoinguy at gmail.com
Sat Jun 4 17:10:19 UTC 2011


On Sat, Jun 4, 2011 at 10:41 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
>>
> That is (almost certainly) because you are thinking in terms of the AGI's
> motivation system as "algorithm" driven (in the sense of it having what I
> have called a "goal stack").  This was what the last para of my message was
> all about.  In terms of goal-stack motivation mechanisms, yes, there is no
> way to make the system stable enough and safe enough to guarantee
> friendliness.  Indeed, I think it is much worse than that: there is in fact
> no way to ensure that such a motivation mechanism will make an AGI stably
> *intelligent*, never mind friendly.

Intelligent beings with no emotional core (brain-damaged humans, for
example) can't make decisions very well. Oliver Sacks's books and
articles are very good at pointing this out. They are also a horror
show of what can go wrong with the brain! So I would say that without
a motivation mechanism you are almost certain not to get stable
intelligence. The stability of the intelligence state is not something
I've ever considered before, though, and it is probably very
important... I will try to keep it in mind as I think about this sort
of thing in the future. Very interesting concept.

>> Anger is a useful emotion in humans. It helps you know what's
>> important to work against. If AGI doesn't have the feeling of anger, I
>> don't know how it will really understand us. Again, these differences
>> seem as dangerous as the similarities, just in different ways.
>
> No, no, no, it is not!  I mean, respectfully but firmly disagree! :-) It can
> be made in such a way that it feels some mild frustration in some
> circumstances.  From that it can understand what anger is, by analogy. But
> there is no reason whatever to suppose that it needs to experience real
> anger in order to empathize with us.  If you think this is not true, can you
> explain your reasoning?

So it can get angry, but not REAL angry? :-)

I can't imagine the level of fear experienced by a gazelle being
chased by a lion. I have never experienced that level of fear. I can
imagine that I can imagine it, but that's not the same. So, can I
truly know what it is like to be a gazelle? I think that I cannot,
not really. So my feeling of empathy for the gazelle is limited.

I don't want AGI to have merely that kind of limited empathy for
humans; I want it to have empathy at the level we are able to have
for each other. This may necessitate limited life spans for AGIs,
according to some people...

>> You still have to have some mechanism for determining which module
>> "wins"... call it what you will.
>
> Yes, but that is not a problem.  If the competition is between modules that
> are all fairly innocuous, why would this present a difficulty?

Well, I agree it is not a problem in the sense of being dangerous.
But what I'm saying is that you still need a goal stack (in some
form) to determine what wins; otherwise, you just have a mental
disease that prevents you from making decisions.
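
Just to make concrete what I mean by "a goal stack in some form",
here is a toy sketch in Python. Everything in it (the module names,
the goals, the scoring rule) is invented for illustration, not a
claim about how Richard's architecture actually works: competing
modules each propose an action with an urgency, and a small
prioritized stack of goals breaks the ties.

    # Toy sketch: module competition arbitrated by a simple goal stack.
    # All names and numbers here are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        module: str      # which module is proposing
        action: str      # what it wants to do
        goal: str        # which goal the action serves
        urgency: float   # how strongly the module wants it (0..1)

    # The "goal stack in some form": higher-priority goals come first.
    goal_stack = ["stay safe", "help the human", "satisfy curiosity"]

    def goal_weight(goal: str) -> float:
        """Weight an action by how high its goal sits on the stack."""
        if goal not in goal_stack:
            return 0.0
        return 1.0 / (goal_stack.index(goal) + 1)

    def arbitrate(proposals: list[Proposal]) -> Proposal:
        """Pick the winner: urgency scaled by goal priority."""
        return max(proposals, key=lambda p: p.urgency * goal_weight(p.goal))

    proposals = [
        Proposal("curiosity-module", "examine the strange noise",
                 "satisfy curiosity", 0.9),
        Proposal("caution-module", "back away from the strange noise",
                 "stay safe", 0.6),
    ]
    print(arbitrate(proposals).action)  # caution wins despite lower urgency

The point of the toy is only that *some* ordering has to exist; without
it the modules deadlock, which is the "mental disease" I'm describing.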

-Kelly



