[ExI] AI motivations

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Fri Dec 28 05:11:44 UTC 2012

On Wed, Dec 26, 2012 at 5:09 AM, Anders Sandberg <anders at aleph.se> wrote:

> This suggests that a hierarchical system might function even if it is very
> large: the top-level strategy is decided on a slow timescale, with local
> systems doing tactical decisions faster, even more local systems figuring
> out optimal implementations faster than that, subsystems implementing them
> even faster, and with low-level reflexes, perception and action loops
> running at tremendous speed. It just requires a somewhat non-human
> architecture.

### I would think this architecture would actually be rather
human-like: our thinking is also strongly hierarchical, with multiple
levels of abstraction separating the basic input and output links
(e.g. the occipital V1 cortex, or the motor strip) from the top-level
pattern recognizers involved in moral reasoning (the frontal pole and
ventromedial prefrontal cortex). I absolutely agree that a
hierarchical system can be much larger than one would expect from the
speed of its fastest responses. In a changing and complex environment
where fitness demands both speed and sophistication, all
well-functioning systems are likely to require both fast-and-simple
and slow-and-complex subsystems.
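The multi-timescale layering described above can be sketched in a few
lines of code. This is a hypothetical illustration, not anything from
the original discussion: each layer updates at its own period on a
shared clock, so a slow "strategy" layer, a faster "tactics" layer,
and a tick-rate "reflex" layer coexist in one system. The layer names
and periods are invented for the example.

```python
# Hypothetical sketch: a hierarchy of control layers, each running on
# its own timescale against a single global clock.

from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    period: int       # ticks between updates (1 = every tick)
    updates: int = 0  # how many times this layer has actually run

    def maybe_update(self, tick: int) -> bool:
        # Run only when the global tick aligns with this layer's period,
        # so slower layers fire less often than faster ones.
        if tick % self.period == 0:
            self.updates += 1
            return True
        return False


def run(layers, ticks):
    # Advance the global clock; every layer checks whether it is due.
    for t in range(ticks):
        for layer in layers:
            layer.maybe_update(t)
    return {layer.name: layer.updates for layer in layers}


# Three nested timescales, slowest (strategy) to fastest (reflex).
hierarchy = [
    Layer("strategy", period=100),
    Layer("tactics", period=10),
    Layer("reflex", period=1),
]

counts = run(hierarchy, ticks=1000)
# counts == {"strategy": 10, "tactics": 100, "reflex": 1000}
```

The point of the sketch is the one made in the thread: the system's
overall response speed is set by the reflex layer, while the strategy
layer can be arbitrarily slow and deliberate without holding anything
up.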
