[ExI] Chain of causes

Richard Loosemore rpwl at lightlink.com
Fri Mar 18 14:20:51 UTC 2011

Eugen Leitl wrote:
> On Fri, Mar 18, 2011 at 09:28:11AM -0400, Richard Loosemore wrote:
>> Eugen Leitl wrote:
>>> On Fri, Mar 18, 2011 at 09:03:35AM -0400, Richard Loosemore wrote: 
>>>> desire it, but then it does as it pleases.  The reverse is that the 
>>>> only  design that actually does do what we want it to do, has a 
>>>> mechanism that  does not look as though it should really work.
>>> Did I get that right? Did you just color large swathes of approach
>>> space sterile?
>> Clarification please...?
> I was actually asking for clarifying what you meant. To me
> it looks like the only controllable/predictable systems are
> sterile (crystalline order), while the only fertile region
> is boundary of crystalline order and chaos (EoC), which
> however is incomputable, and hence out of control.
> (This is all just opinion and handwaving, of course, and difficult to
> nail numbers upon).

Well, I agree with you.  It is indeed EoC.

Except that what I try to do in my paper is argue that it is not a simple 
complex system [!], in which the entire system is just an EoC mess, but 
rather that we should expect certain components of the system (in essence, 
the hot core of the concept-learning and concept-deployment mechanisms) 
to contain the worst of the complexity.

In that case, if we treat concepts like "atoms" and analyze their 
interactions as if they had some peculiar bond-formation (and other) 
properties, we should expect (a) some badly disconnected relationships 
between cause and effect, BUT (b) some hope of discovering the nature of 
those mechanisms through a combination of experimental psychology and 
very extensive computer simulation of large numbers of candidate 
mechanisms.
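To make the "simulate large numbers of candidate mechanisms" idea concrete, here is a minimal toy sketch (all names and the bond rule itself are my own hypothetical choices, not anything from the paper): concept "atoms" carry feature vectors, a candidate mechanism is a local bond-formation rule, and we sweep a family of such rules while measuring an emergent global statistic that is not obvious from the local rule.

```python
import random

def make_atoms(n, dims, rng):
    """Hypothetical concept 'atoms': each is just a random feature vector."""
    return [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(n)]

def bonded(a, b, threshold):
    # Candidate bond-formation rule (an assumption for illustration):
    # two atoms bond when their feature vectors are sufficiently
    # aligned, i.e. dot product above a threshold.
    return sum(x * y for x, y in zip(a, b)) > threshold

def largest_cluster(atoms, threshold):
    """Emergent global statistic: size of the largest bonded cluster,
    computed with a simple union-find over all atom pairs."""
    n = len(atoms)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if bonded(atoms[i], atoms[j], threshold):
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

rng = random.Random(0)
atoms = make_atoms(60, 5, rng)
# Sweep a family of candidate mechanisms (here: bond thresholds) and
# record how the emergent cluster structure responds. The point is
# methodological: the cause-effect link between the local rule and the
# global behavior is discovered by simulation, not designed by hand.
results = {t: largest_cluster(atoms, t) for t in (0.0, 0.5, 1.0, 1.5)}
```

Raising the threshold can only remove bonds, so the largest cluster shrinks (or stays the same) as the candidate rule becomes stricter; even in this trivial setting, where the crossover happens is something you find by running the sweep rather than by inspecting the rule.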

The bottom line is that it is difficult, but doable.

And, most importantly, it looks nothing like conventional AI/AGI, because 
the attempt to design the mechanisms by hand (as if they could be relied 
on to do what we craft them to do) has to be abandoned in favor of the 
psychology-plus-simulation approach.

And, as a side effect, this way of looking at the dynamics of thought 
leads to an interesting mechanism for controlling the *motivation* (i.e., 
friendliness, aggression, etc.) of the system.  The motivation mechanisms 
can be decoupled neatly from the regular concept dynamics by making those 
dynamics (all the atom bond formation I mentioned just now) happen in a 
landscape containing external gradients that push the dynamics in this or 
that direction, much as atomic interactions can happen in an external 
electric or magnetic field.  In this case, what the system feels 
compelled to do (e.g., be empathic to humans) is NOT directly modifiable 
by its thought processes, which makes the system stable, and permanently 
motivated to do empathic things rather than aggressive things.
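A toy numerical sketch of that decoupling (the functions and constants here are hypothetical, chosen only to illustrate the structure): the concept dynamics is a gradient descent over an internal energy landscape, and the motivational bias enters as a fixed external gradient added on top. The update rule reads the external field but has no way to modify it, mirroring the claim that motivation is not directly accessible to thought.

```python
def internal_grad(x):
    # Internal concept dynamics alone would settle at x = 1.0
    # (minimum of the hypothetical energy (x - 1)^2).
    return 2 * (x - 1.0)

def external_grad(x):
    # Fixed external "motivation" gradient pulling toward x = -0.5
    # (minimum of 2 * (x + 0.5)^2). Nothing in the dynamics below
    # can rewrite this function: it is read-only, like an applied field.
    return 4 * (x + 0.5)

def step(x, lr=0.1):
    # The dynamics only ever sees the *sum* of the two gradients;
    # the external term is superimposed from outside.
    return x - lr * (internal_grad(x) + external_grad(x))

x = 2.0
for _ in range(200):
    x = step(x)
# Combined equilibrium solves 2(x - 1) + 4(x + 0.5) = 0, i.e. 6x = 0,
# so the system settles at x = 0.0 rather than at its internal
# preference x = 1.0: the external field has steered the outcome
# without the internal dynamics being able to touch the field itself.
```

The design point is that stability comes from the one-way coupling: the field shapes where the dynamics settles, but the settling process has no write access to the field.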

Richard Loosemore
