[extropy-chat] Will we all choose to become one mind only?

Jef Allbright jef at jefallbright.net
Thu Apr 26 16:41:36 UTC 2007


On 4/26/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>
> On 4/26/07, TheMan <mabranu at yahoo.com> wrote:
>
> > Are we all going to become one, when singularity
> > comes, so that there will be no political and moral
> > issues anymore, just one mind, a mind that always
> > knows what it wants, goes for what it wants and
> > nothing else, and never fights with itself?
>
> Individuals are always "fighting" with themselves. Every single decision
> that is made involves a weighing up of multiple alternatives, multiple
> outcomes, multiple utilities for each outcome. If multiple minds were
> integrated into one person the behaviour of that person would reflect some
> sort of average of the individual minds.

Stathis, would you agree that "composite" would be a better word than
"average" here, since "average" entails a reduction of information? I
think this question is key because it appears to highlight that you
and I are looking at the same scenario but working in opposite
directions.
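
To make the information-loss point concrete, here is a minimal
Python sketch (the "caution" scale and the particular values are
invented purely for illustration):

    # Three minds rated on a caution scale from 0 (reckless) to 1
    # (careful); the numbers are invented for the example.
    minds = [0.9, 0.9, 0.1]  # two careful conservatives, one radical

    # "Average": a single summary statistic; the bimodal structure
    # of the group is gone.
    average = sum(minds) / len(minds)   # 0.6333...

    # "Composite": the full distribution is retained, so
    # context-dependent behaviour (mostly careful, occasionally
    # radical) remains expressible.
    composite = sorted(minds)           # [0.1, 0.9, 0.9]

    print(average)    # many distinct populations collapse to this
    print(composite)  # structure preserved

Many distinct populations share the same average, but not the same
composite.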

I see "group minds" emerging due to the adaptive benefits of
increasing degrees of freedom enabled by a more complexly effective
organizational structure operating within an increasingly complex
environment.  The subjective experience of the composite would be a
high level expression of salient features of its internal state over
time, fundamentally unavailable to its members. The subjective
experience of each member, while subjectively "complete", would
reflect a necessarily lower-level description of interactions with the
greater "reality."

It seems that you are working in the opposite direction, assuming the
primacy of subjective experience and imagining how to combine
multiple subjective experiences into one, with this combined, averaged
subjective agent then interacting with its world.


> Two careful conservatives + one
> reckless radical = one mostly careful, sometimes radical joined person. The
> difference would be that this person could not harm, punish or reward some
> selected part of himself because all the parts experience what the whole
> experiences.

It's not completely clear here, but it appears that you're claiming
that each of the parts would experience what the whole experiences.
>From a systems theoretical point of view, that claim is clearly
unsupportable.  It seems to be another example of your assumption of
subjective experience as primary.


> > I'm thinking we might all choose to become something
> > like one of those clusters of human minds called "the
> > joined", described in Clarke/Baxter: "The light of
> > other days", joined also with AI of course (and why
> > not with the minds of all the animals as well!).
> >
> > Is there a danger in all individuals becoming one? Can
> > there be a survival value, for the human species, in
> > such diversity of opinions that exists today, where
> > people can't accept each other's ways of thinking,
> > where people even kill each other because they have
> > different beliefs etc?
>
> The collective decisions of the joined mind would, over time, resemble the
> collective decisions of the individuals making up the collective.

It seems clear to me that the behavior of the collective would display
characteristics *not* present in any of its parts.  This is
fundamental complexity theory.
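
A toy model makes the point.  In this Condorcet-jury-style Python
sketch (group size, member accuracy, and trial count are all invented
for illustration), majority voting gives the collective an accuracy
that no individual member possesses:

    import random

    # Each member independently answers a binary question correctly
    # with probability P_CORRECT; the collective answers by majority
    # vote over all members.
    P_CORRECT = 0.6   # accuracy of each individual member
    MEMBERS = 11
    TRIALS = 100_000

    random.seed(0)
    collective_correct = 0
    for _ in range(TRIALS):
        votes = sum(random.random() < P_CORRECT for _ in range(MEMBERS))
        if votes > MEMBERS // 2:   # majority answered correctly
            collective_correct += 1

    print(f"individual accuracy: {P_CORRECT}")
    print(f"collective accuracy: {collective_correct / TRIALS:.3f}")
    # ~0.75 -- higher than any part

The collective's ~0.75 accuracy is a characteristic of the
organization, not of any of its parts.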


> The
> equivalent of killing each other might be a decision to edit out some
> undesirable aspect of the collective personality, which has the advantage
> that no-one actually gets hurt.

This sounds nice, but it's not clear to me what model it describes.

- Jef


