[ExI] Chain of causes
rpwl at lightlink.com
Fri Mar 18 13:03:35 UTC 2011
Anders Sandberg wrote:
> It is an interesting situation. In most parts of our environment, if we
> interact with it a bit we will cause only very brief chains of events.
> However, when I press a key on my keyboard there is a long chain of
> events in the keyboard, operating system and (in this case) email
> editor, potentially including an even vaster chain where this email gets
> sent to a large number of servers worldwide, possibly read and possibly
> responded to. It is a lot more like the music video.
> The real problem is of course that our evolved understanding of chains
> of events likely is limited. We have a pretty good folk physics
> understanding, but most of our current tech runs in different domains.
> So we should expect our intuitions of cause and effect to be weak when
> dealing with our new technologies. We are certainly trying to design
> them to work according to easy cause-effect relations, but many of the
> truly important ones cannot be designed that way. They are systems,
> often adaptive and autonomous. A modified organism, an artificial
> intelligence or a company will do as they damn well please. Designing
> such systems requires some other abilities, abilities we might not even
> have evolved as a species.
This is part of the claim that I made in my 2007 paper on complex
systems and AGI (originally given at the 2006 AGIRI workshop).
My point, then, was to argue that the unavoidable complex-systems nature
of AGI requires us to take a different attitude to it -- the tangled
nature of the interactions inside the system makes it less likely that
there is an "understandable" relationship between causal mechanism and
overall behavior.

If this is an accurate picture of where we are, then it could be that
much of the work going on in AGI will be wasted, because it is deeply
entwined with the understandability of AGI mechanisms. People design
such mechanisms because the mechanisms look like they will do the
required job, but it could be that the only mechanisms that really work
are the ones that do *not* look like they will do the job.
This is just the reverse of the "they will do as they damn well please"
problem, which happens when we design a system as if it ought to work the
way we desire, but then it does as it pleases. The reverse is that the
only design that actually does what we want has a mechanism that does not
look as though it should really work.