[ExI] Unsolved problems
Jef Allbright
jef at jefallbright.net
Wed May 28 19:09:29 UTC 2008
On Tue, May 27, 2008 at 6:56 PM, Keith Henson <hkeithhenson at gmail.com> wrote:
> One problem I can express, but have no idea how to solve, is the
> localization problem.
>
> There are a number of ways it can be expressed; the most general is
> that computation goes up at most as the cube of the linear dimensions,
> while propagation delays (at best, at the speed of light) go up with
> the linear dimension. Human smartness may be dependent on two
> dimensions, the area of the cortex.
>
> So a large chunk of computronium, if it is to be of "one mind", has to
> think slower than a smaller piece.
I would suggest that "one mind" is always only a rough approximation
based on the perception of net agency (even as perceived by the agent
itself), with little basis in the actual dynamics of the system, and
thus a poor foundation for extrapolation to hypothetical discrete minds
of ever larger scale.
> This leads to a human being able to think rings around a "Jupiter
> brain" because of speed of light delays.
Non sequitur. A single human would be terribly outmatched by a single
(hypothetical) "Jupiter brain" thinking at photonic speed with vastly
greater capacity. However, you do present a valid case for constraints
on the ultimate radius of effective growth of a singleton, thereby
shifting the emphasis from size to topology.
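To put rough numbers on the latency constraint, here is a
back-of-envelope sketch in Python, using round figures of my own
(Jupiter's radius taken as about 7.0e7 m, a human brain as about
0.15 m across, fast myelinated axons at about 100 m/s):

C = 3.0e8               # speed of light, m/s
JUPITER_RADIUS = 7.0e7  # m, rough radius of a "Jupiter brain"
BRAIN_WIDTH = 0.15      # m, rough width of a human brain
AXON_SPEED = 100.0      # m/s, fast myelinated axons (slow fibers run ~1 m/s)

def one_way_delay(distance_m, speed_m_s):
    """One-way signal delay across a structure of the given size."""
    return distance_m / speed_m_s

jupiter_delay = one_way_delay(2 * JUPITER_RADIUS, C)  # ~0.47 s
human_delay = one_way_delay(BRAIN_WIDTH, AXON_SPEED)  # ~1.5 ms

print("Jupiter brain, edge to edge at light speed: %.2f s" % jupiter_delay)
print("Human brain, edge to edge along fast axons: %.1f ms" % (human_delay * 1e3))

So the globally coherent "clock" of the larger structure is indeed
slower, but only by a couple of orders of magnitude (and comparable to
human timescales if you take the slow fibers), while its capacity,
scaling roughly with volume, is larger beyond comparison.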
> It leads to multiple AIs rather than just one, since local thinking
> will have a tendency to pinch off when results from the rest of an
> AI's brain will not report in for hours.
Yes, and it means giving up the illusion that there ever was (or could
be) a discrete core mind.
> It leads to fundamental economics in that nearby resources are much
> more valuable than far away ones.
Yes, locality is a fundamental law of economics, and of causal
effectiveness in any dimension.
> Much of this is very familiar from biology in such terms as territoriality.
>
> And the problem is always cropping up in distributed computing.
Yes, relevant but not yet well-understood models abound in the
biological and ecological sciences, for example in the mathematics of
vascular systems and the like.
You might consider that although the volume increases as the cube
while the surface of interaction increases only as the square, this is
hardly a limit on growth to the extent that the structure is fractal.
A three-dimensional fractal structure (although this applies equally
well to higher dimensions) would optimize for growth of "self" while
maintaining "impedance matching" with the adjacent possible. The
branches would be well-matched with the local environment of
interaction, while the "core" would literally and realistically have
nothing at all to say.
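A toy calculation of the same point (my own illustration, treating D
as a hypothetical fractal dimension of the boundary of interaction,
where D = 2 is a smooth surface and D approaching 3 is a boundary that
fills the volume):

def surface_to_volume(radius, boundary_dimension):
    """Ratio of effective interaction surface (~R^D) to bulk volume (~R^3)."""
    return radius ** boundary_dimension / radius ** 3

for radius in (1.0, 10.0, 100.0, 1000.0):
    smooth = surface_to_volume(radius, 2.0)   # falls off as 1/R
    fractal = surface_to_volume(radius, 2.9)  # falls off only as R^-0.1
    print("R=%7.1f  smooth D=2: %.4f   fractal D=2.9: %.4f"
          % (radius, smooth, fractal))

The smooth surface-to-volume ratio falls off as 1/R, while the fractal
ratio falls off as R^(D-3), vanishing as a constraint as D approaches
3: growth of "self" and interface with the environment can keep pace
with each other.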
As I've mentioned several times here and elsewhere, this structure
appears highly applicable to thinking on the Fermi "paradox" (along
with its derivative point that increasing intelligence can be expected
to correlate with maximization of intended consequences and
minimization of unintended, and unforeseen, consequences).
Indeed, it seems to me of fundamental interest that this "law",
applying at all scales, is crucial to self-organization: the universal
bias favoring increasing synergies over increasing degrees of freedom,
or, as we like to say around here, "extropy."
- Jef