[ExI] An old skeleton tumbles out of the list closet

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Tue Nov 13 05:52:59 UTC 2012


On Mon, Nov 12, 2012 at 9:48 PM, Dan <dan_ust at yahoo.com> wrote:
>
>
> I imagine some of the free rider stuff might fall by the wayside simply
> because changing the neurotech will change the incentives for it. The thing
> I would fear, of course, is caring resulting in a total loss of autonomy,
> but I reckon that's the horror scenario and not the most likely outcome.
>

### I have been fascinated with the technical details of future goal
systems for years. The first time I wrote about "autopsychoengineering" on
this list must have been sometime in the last millennium. Of course, it
doesn't matter what we might want or desire in this respect - the tautology
of evolution means that what survives, survives, and what dies, dies.
Still, inquiring minds want to know.

Currently I suspect that the entities that are going to replace us soon
will have the following features:

1) A programmatic way of defining in-groups (for example, instantly
recognizing a mind that provides the appropriate credentials as self; see
the sketch after this list), instead of the evolved trickery we have

2) Completely altruistic and reliably non-defecting behavior in-group

3) A common set of moral presets enabling structured, stable interactions
in-group, for example a non-adversarial dominance mechanism producing many
levels of "master" and "slave" roles without in-group conflict

4) Lack of individual ability or drive to replicate, which would be
supplanted by non-individual design and validation protocols to produce new
minds

5) A large variety of individual cognitive styles operating under a common
general protocol for the exchange of information and the establishment of
trust, to assure the ability to explore large solution spaces instead of
clustering on a few solutions due to group-think
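
For concreteness, here is a minimal sketch of what (1) might look like,
written in Python and assuming membership is proven by an HMAC over a
random challenge with a shared group key. Every name in it (GROUP_KEY,
respond_to_challenge, is_ingroup) is made up for illustration; real minds
would presumably use something far more elaborate.

import hashlib
import hmac
import os

GROUP_KEY = b"secret-distributed-to-all-in-group-minds"  # hypothetical

def respond_to_challenge(challenge: bytes, key: bytes = GROUP_KEY) -> bytes:
    # A mind proves membership by keying the verifier's challenge with
    # the shared group secret.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def is_ingroup(challenge: bytes, response: bytes, key: bytes = GROUP_KEY) -> bool:
    # Programmatic recognition of "self": a constant-time credential
    # check instead of evolved kin-recognition heuristics.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The verifier issues a fresh random challenge; any mind holding the
# group key is instantly classed as in-group, everything else as out.
challenge = os.urandom(32)
assert is_ingroup(challenge, respond_to_challenge(challenge))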

You may note this sounds suspiciously like a treatise on eusocial insects
couched in sociologist-speak. Well, thinking about these issues is hard, so
I fall back on knowledge about solutions that have been around for some
time.

One problem that I find rather opaque is how this new society/superorganism
would assure that new designs of minds do not start exerting an
inappropriate positive feedback on their own creation (i.e. cancerous
growth), to the detriment of the society as a whole. Having a single stable
entity in charge of new mind design evaluation and old design elimination
would be a possible solution, as in one of Greg Egan's polises. Another solution
may have many moving parts - specialized groups of minds, following
structured interaction protocols to gestate new minds, with multiple levels
of redundant cross-checking of mind performance before a new design is
allowed to be produced in larger numbers, or more importantly, to
participate in the design of yet newer generations of minds.
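
To make the many-moving-parts option concrete, here is a sketch in the
same spirit, assuming a candidate design needs sign-off from a quorum of
independent validator groups before replication, and unanimous sign-off
before it may help design successors. All names (MindDesign,
ValidatorGroup, may_replicate) are invented for illustration, and the
actual performance tests are left abstract.

from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class MindDesign:
    name: str
    approvals: Set[str] = field(default_factory=set)

@dataclass
class ValidatorGroup:
    name: str
    check: Callable[[MindDesign], bool]  # abstract performance test

    def review(self, design: MindDesign) -> None:
        # Each group tests the candidate independently and signs off
        # only if it passes.
        if self.check(design):
            design.approvals.add(self.name)

def may_replicate(design: MindDesign, validators: List[ValidatorGroup],
                  quorum: int) -> bool:
    # Redundant cross-checking: replication in larger numbers requires
    # sign-off from at least `quorum` distinct validator groups.
    return len(design.approvals & {v.name for v in validators}) >= quorum

def may_design_successors(design: MindDesign,
                          validators: List[ValidatorGroup]) -> bool:
    # Participation in designing yet newer minds is gated harder still:
    # unanimous approval, damping the positive feedback loop behind
    # "cancerous growth".
    return may_replicate(design, validators, quorum=len(validators))

# Five independent validator groups review a candidate design.
validators = [ValidatorGroup("group-%d" % i, lambda d: True) for i in range(5)]
candidate = MindDesign("candidate-mind")
for v in validators:
    v.review(candidate)
assert may_replicate(candidate, validators, quorum=3)
assert may_design_successors(candidate, validators)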

Are y'all eager to upload and start tinkering?

Rafal