[extropy-chat] Will we all choose to become one mind only?
jef at jefallbright.net
Fri Apr 27 17:23:05 UTC 2007
On 4/26/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
> On 4/27/07, Jef Allbright <jef at jefallbright.net> wrote:
> > Stathis, would you agree that "composite" would be a better word than
> > "average" here, since "average" entails a reduction of information? I
> > think this question is key because it appears to highlight that you
> > and I are looking at the same scenario but working in opposite
> > directions.
> Yes, "composite" is a better word.
> > I see "group minds" emerging due to the adaptive benefits of
> > increasing degrees of freedom enabled by a more complexly effective
> > organizational structure operating within an increasingly complex
> > environment. The subjective experience of the composite would be a
> > high level expression of salient features of its internal state over
> > time, fundamentally unavailable to its members. The subjective
> > experience of each member, while subjectively "complete", would
> > reflect a necessarily lower-level description of interactions with the
> > greater "reality."
> I don't see how you could distinguish the experiences of each member from
> the experiences of the composite, or each other, if they were truly joined.
> It would be like separating the part of you that likes chocolate from the
> part that doesn't want to put on weight. I don't envision the composite as
> someone with multiple personality disorder (which probably doesn't exist,
> BTW) but as a completely integrated single person.
I don't think multiple personality disorder exists as popularly
conceived, as essentially discrete personalities inhabiting one
physical body. I do think multiple personality disorder makes sense in the sense of
multiple systems of behavior (with associated differences in accessing
subjective "resources") emerging at various times to dominate the
observed behavior of the larger system, like multiple attractors
within a chaotic system.
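The attractor analogy can be made concrete with a minimal sketch (the function name and parameters here are illustrative, not anything from the discussion): a bistable system with two stable attractors, where which one comes to dominate the long-run behavior depends entirely on the state the system starts from, much as different behavioral modes can dominate the larger system at different times.

```python
# Illustrative sketch only: the double-well system dx/dt = x - x**3
# has two stable attractors, at x = +1 and x = -1. The same dynamics
# settle into different "dominant modes" depending on initial state.

def settle(x, dt=0.01, steps=5000):
    """Integrate dx/dt = x - x**3 by Euler's method; return the endpoint."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

print(round(settle(0.2)))   # starts slightly positive -> settles at 1
print(round(settle(-0.2)))  # starts slightly negative -> settles at -1
```

The point of the sketch is only that one fixed set of dynamics supports several distinct stable regimes, which is the sense in which "multiple personalities" as multiple attractors is coherent.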
It seems that our disagreement comes down to a difference in how we
understand the nature of subjective experience. You seem to
believe that subjective experience is fundamental or primary in some
important way (it is, but to apply it to "objective" descriptions of
the world entails a category error), while I see "subjective
experience" very simply as a description of the perceived internal
state of any system, as perceived by that system. The recursive
nature of this model tends to throw people off.
> > It seems that you are working in the opposite direction, assuming the
> > primality of subjective experience, and imagining how to combine
> > multiple subjective experiences into one, with this combined average
> > subjective agent then interacting with its world.
> Yes, although you don't need to call it an average, as you said above.
> > > Two careful conservatives + one
> > > reckless radical = one mostly careful, sometimes radical joined person.
> > > The difference would be that this person could not harm, punish or reward
> > > a selected part of himself because all the parts experience what the whole
> > > experiences.
> > It's not completely clear here, but it appears that you're claiming
> > that each of the parts would experience what the whole experiences.
> > From a systems-theoretical point of view, that claim is clearly
> > unsupportable. It seems to be another example of your assumption of
> > subjective experience as primary.
> Would you say that the two hemispheres of the brain have separate
> experiences, despite the thick cable connecting them?
No doubt you're aware of the famous split-brain experiments showing
that if the corpus callosum is cut, then the existence of separate
experiences is clearly shown. With the corpus callosum intact and
feedback loops in effect, then the "subjective reality" of various
functional modules of the brain is driven in the direction of a
coherent whole, but if one could interrogate individual brain modules,
one would observe that each module necessarily reports
its own internal state ("subjective experience") in terms relevant to
its own functioning.
It might be informative to consider the distinction between
"subjective reality" and "subjective experience" above.
> > > > I'm thinking we might all choose to become something
> > > > like one of those clusters of human minds called "the
> > > > joined", described in Clarke/Baxter: "The light of
> > > > other days", joined also with AI of course (and why
> > > > not with the minds of all the animals as well!).
> > > >
> > > > Is there a danger in all individuals becoming one? Can
> > > > there be a survival value, for the human species, in
> > > > such diversity of opinions that exists today, where
> > > > people can't accept each other's ways of thinking,
> > > > where people even kill each other because they have
> > > > different beliefs etc?
> > >
> > > The collective decisions of the joined mind would, over time, resemble
> > > collective decisions of the individuals making up the collective.
> > It seems clear to me that the behavior of the collective would display
> > characteristics *not* present in any of its parts. This is
> > fundamental complexity theory.
> Yes, I suppose that's true and the fact that the parts are in communication
> would alter the behaviour of the collective. However, even the disconnected
> parts would display emergent behaviour in their interactions.
Stathis, I repeatedly detect either unfamiliarity or discomfort with
systems thinking in your world view. What could it possibly mean to
say that "...disconnected parts would display emergent behavior..."?
Emergent behavior is meaningless in regard to parts; it can refer only
to systems of parts.
I'm sure you had a point, but I'm not sure it was coherent. I would be
interested in knowing what you intended.
> > > The
> > > equivalent of killing each other might be a decision to edit out some
> > > undesirable aspect of the collective personality, which has the
> > > advantage that no-one actually gets hurt.
> > This sounds nice, but it's not clear to me what model it describes.
> In a society with multiple individuals, the Christians might decide to
> persecute the Muslims. But if a single individual is struggling with the
> idea of whether to follow Christianity or Islam, he is hardly in a position
> to persecute one or other aspect of himself. The internal conflict may lead
> to distress, but that isn't the same thing.
As I see it, clearly one of those conflicting systems of thought is
going to lose representation, corresponding to "dying" within the mind
of the person hosting the struggle.
Maybe here again we see the same fundamental difference in our views.
In your view (I'm guessing) the difference is that no one died, no
unique personal consciousness was extinguished. In my view, a person
exists to the extent that they have an observable effect (no matter
how indirect); there is no additional ontological entity representing
the unique core of their being, or subjective experience, or whatever
it has been called by various peoples in the thousands of years since
people became aware of their awareness.
You will of course recognize the implication of an unfounded belief in
a soul in the above, and most likely reject it out of hand since you
are a modern man, well-read and trained in science and most certainly
do not believe in a soul. Obviously Jef doesn't really know who he's
dealing with (thus this paragraph).
But my point is that despite any amount of evidence or debate, even
with belief in the heuristic power of Occam's Razor, the subjective
experience of subjective experience tends to hold sway. As I
mentioned above, it has the advantage of being (subjectively) complete.
BTW, I just received Ayer's Language, Truth, and Logic from the used
book store, and in skimming through it I find his precise use of
language a real treat, but his actual thinking on Positivism may
elicit a different reaction.
I'll get to it right after I finish Taleb's The Black Swan. His
Fooled by Randomness was much better in my opinion.