[extropy-chat] Will we all choose to become one mind only?

Jef Allbright jef at jefallbright.net
Sun Apr 29 18:37:54 UTC 2007

On 4/29/07, Stathis Papaioannou <stathisp at gmail.com> wrote:
> On 4/28/07, Jef Allbright <jef at jefallbright.net> wrote:

<snipping all the agreeable stuff>

> OK, but the important difference between this and MPD or separate
> individuals is that neither the systems of behaviour dominant at different
> times nor the larger system consider the parts as "other", mainly because
> thoughts and feelings are shared.

I would agree that none are considered "other" in the sense of
intrinsic identity, but I would argue that there is no such thing as
intrinsic identity and all we have to work with is recognition of
patterns that we tend to identify, with varying degrees of
effectiveness, as individual personalities.

It appears that you disagree with the standard notion of MPD, seeing
only a single intrinsic identity exhibiting multiple patterns of
behavior.  I also disagree with MPD, but because I see no intrinsic
identities at all, only various patterns of behavior.

> > It seems that our difference comes down to our difference in
> > understanding the nature of subjective experience.  You seem to
> > believe that subjective experience is fundamental or primary in some
> > important way (it is, but to apply it to "objective" descriptions of
> > the world entails a category error), while I see "subjective
> > experience" very simply as a description of the perceived internal
> > state of any system, as perceived by that system.
> I'm not sure how to respond to this, because I don't see the disagreement.

It seems to me that the essential difference is that your view seems
to assume the existence of an additional ontic entity representing the
subjective self.  Thus your descriptions seem to me to include
distracting Ptolemaic epicycles.

> That there is some subjective experience associated with certain physical
> processes is surely undeniable. For all I know, there may also be some
> tiny subjective experience when a thermostat responds to a temperature
> change, or even when the thermostat just sits there. The "perceived internal
> state of any system as perceived by the system" is as good a description of
> this as any, and captures the fact that there is no separate consciousness
> juice at work here.

Would "as good as any, and..." imply "better than many, because..."?

> As to whether the subjective experience is part of an
> objective description of the world, that might just boil down to a matter
> of linguistic taste. Did I just write that sentence because I wanted to, or
> did I *really* write that sentence because the matter in my brain, a slave
> to the laws of physics, made me do so?

Either can be equally valid, within context.  But one of these is
preferable per the heuristic of Ockham's principle of parsimony.

> > The recursive
> > nature of this model tends to throw people off.  [Any progress yet on
> > IAASL?]
> I'm only about a third of the way through. The first few chapters do deal
> with the notions of reductionist versus higher order explanation, and of a
> hierarchy of conscious experience depending on the system's complexity. It
> all makes perfect sense so far.

I'll have to get back to it soon.  I got about two thirds through, and
found nothing intellectually stimulating other than Hofstadter's
enjoyable multilayered word play.

> > > > It's not completely clear here, but it appears that you're claiming
> > > > that each of the parts would experience what the whole experiences.
> > > > From a systems theoretical point of view, that claim is clearly
> > > > unsupportable.  It seems to be another example of your assumption of
> > > > subjective experience as primary.
> > >
> > >  Would you say that the two hemispheres of the brain have separate
> > > experiences, despite the thick cable connecting them?
> >
> > No doubt you're aware of very famous split-brain experiments showing
> > that if the corpus callosum is cut, then the existence of separate
> > experiences is clearly shown.  With the corpus callosum intact and
> > feedback loops in effect, then the "subjective reality" of various
> > functional modules of the brain is driven in the direction of a
> > coherent whole, but (if one could interrogate individual brain modules
> > individually), one would observe that each module necessarily reports
> > its own internal state ("subjective experience") in terms relevant to
> > its own functioning.
> But the crucial point is that all the functional subsystems in the brain are
> normally in communication, creating an integrated whole, or the illusion of
> an integrated whole if you prefer that qualifier.

Okay so far, with the understanding that the "illusion of an
integrated whole" is "experienced" at the level of the whole.

> If I am linked to another
> person so that I experience his thoughts, feelings, memories and he mine,

How could two individual components of a system have the same
experience?  I know there's plenty of science fiction providing such
scenarios, but in terms of actual systems theory it's incoherent.  I'm
not sure how to frame this concept effectively for you; I think your
background is more in the "softer" humanities side than the "harder"
sciences and engineering side of C.P. Snow's Great Divide.  My
preference would be to refer to efficiencies of information flow, or
dynamics of feedback loops, but such analogies don't carry over very
well here.

You know those science fiction stories where there's a "ripple in
time" or a "glitch in the matrix" and the person embedded in that
reality claims to have experienced some strange feeling as if his Self
was something not completely embedded in that reality, thus providing
some invariant reference for the experience?  Does that bother you?
It sure bothers me. ;-)

> then the he/me distinction will vanish even though in reality there are
> still two distinct brains and bodies. The original "me" will not have a
> preference that the other "me" experiences pain because the original "me"
> will experience that pain as well.

Experience is necessarily in terms of the system doing the
experiencing.  If multiple brains and bodies were interconnected
effectively as a hive mind, then the higher-level hive mind would
experience richer sensory input than any of the individuals, would
interact with its larger environment in richer ways than any
individual, and would process more abstract models of its "reality"
than any of the individuals.

> The original "me" will not be able to
> even consider himself as separate as a mental exercise, because the other "me"
> will inevitably have the same thought. It would be like trying to think of
> your left hand as alien even though you are neurologically intact.

This is key.  System-level thoughts are not spread throughout the
elements of the system, they are "emergent" as the higher level
behavior of the system.  From an engineering perspective this is just
so obvious I don't know what else to say within the limitations of
this medium of email.  But I remain motivated to work together toward
a mutual understanding.
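
To make the point concrete, here is a toy sketch (my own illustration,
in Python; nothing here is from the earlier thread): in Conway's Game
of Life, a glider is a pattern that travels across the grid, yet no
individual cell moves or encodes anything about gliders. The "motion"
is a property only of the system as a whole.

```python
# Illustrative only: a glider in Conway's Game of Life. No cell moves
# and no cell "knows" about gliders; the travelling pattern exists
# only at the level of the whole system, i.e. it is emergent.
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one generation."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already live.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears shifted by (1, 1).
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The design point: each cell follows purely local rules, and the glider
exists only for an observer describing the system at a higher level.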

> > It might be informative to consider the distinction between
> > "subjective reality" and "subjective experience" above.
> ?

In an effective hive mind, as with a human mind, the functional
components couldn't possibly, even in principle, work with the
infinite complexity of unfiltered reality.  The very process of
"making sense" (extracting, selecting, and encoding regularities) of
the environment creates a "subjective reality" that is "subjectively
experienced" by that system.

> > > > > The collective decisions of the joined mind would, over time,
> > > > > resemble the collective decisions of the individuals making up
> > > > > the collective.
> > > >
> > > > It seems clear to me that the behavior of the collective would display
> > > > characteristics *not* present in any of its parts.  This is
> > > > fundamental complexity theory.
> > >
> > >  Yes, I suppose that's true and the fact that the parts are in
> > > communication would alter the behaviour of the collective. However,
> > > even the disconnected parts would display emergent behaviour in
> > > their interactions.
> >
> > Stathis, I repeatedly detect either unfamiliarity or discomfort with
> > systems thinking in your world view.  What could it possibly mean to
> > say that "...disconnected parts would display emergent behavior..."?
> > Emergent behavior is meaningless in regard to parts, it can refer only
> > to systems of parts.
> I like to think of reductionism as the "true" theory explaining the world. A
> hydrogen atom behaves differently from a proton and electron, but really, it
> is *no more* than a proton and electron; it's just that we're not smart
> enough for the behaviour of a hydrogen atom to be immediately and
> intuitively obvious when we contemplate its components, so we call it
> emergent behaviour.

I used to strongly believe the same way.  At some point I realized
that reductionism is an idealization that ultimately fails due to the
utter inability of any system to form an objective model of its world.
Consider the three-body problem as part of the explanation. (As a
self-referential aside, a complete explanation is of course impossible
here or in any other context.)
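
To unpack the three-body reference a little, here is a rough numerical
sketch (my own; the units, masses, and initial conditions are arbitrary
choices, not anything canonical): two otherwise identical simulations
whose initial states differ by one part in a billion diverge, which is
part of why "in principle" reductionist prediction fails in practice
even when the micro-laws are exactly known.

```python
# Rough sketch, arbitrary units: two softened three-body simulations
# that differ by 1e-9 in a single coordinate. Sensitive dependence on
# initial conditions swamps any finite-precision bottom-up model.

def simulate(positions, velocities, steps=20000, dt=1e-3, G=1.0, soft=1e-2):
    """Equal-mass three-body integration, semi-implicit Euler, softened."""
    pos = [list(p) for p in positions]
    vel = [list(v) for v in velocities]
    for _ in range(steps):
        for i in range(3):
            ax = ay = 0.0
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                # Softening term keeps close encounters from blowing up.
                r3 = (dx * dx + dy * dy + soft) ** 1.5
                ax += G * dx / r3
                ay += G * dy / r3
            vel[i][0] += ax * dt
            vel[i][1] += ay * dt
        for i in range(3):
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
v0 = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]
a = simulate(p0, v0)

p1 = [list(p) for p in p0]
p1[2][0] += 1e-9          # nudge one coordinate by one part in a billion
b = simulate(p1, v0)

drift = max(abs(a[i][k] - b[i][k]) for i in range(3) for k in range(2))
print(drift)
```

The printed drift is typically many orders of magnitude larger than
the 1e-9 nudge; the exact value depends on the arbitrary parameters.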

> To give another example, we normally don't consider that
> two marbles sitting next to each other are any more than, well, two marbles
> sitting next to each other; and yet it is possible to assert that they
> actually form a system, namely a pair of marbles, which is somehow different
> to, or greater than, either marble individually.

I would point out that your system of marbles is meaningless outside
the context of an observer, and that a fundamental definition of
"system" might be based on the notion of the emergent property of
dividing the universe into an inside and an outside, thus requiring at
least three elements forming a conceptual tetrahedron. (Credit to
Buckminster Fuller, and possibly C.S. Peirce.)

>  It would be crazy to go
> around thinking of every object and interaction in the world in terms of
> subatomic particles, but that's what it actually is.

Probably not in any way that matters at the level of abstraction of
our present discussion.

> With regard to my point, I originally asserted that multiple individuals who
> are joined might end up making similar decisions to the same collection of
> separate individuals voting or trying to arrive at a consensus. This is
> probably an unwarranted assumption, as being joined is a new factor which
> might change the net behaviour. In other words, there are more subatomic
> particles in the joined than in the individualist collective, namely those
> particles forming connections between individuals, so the two collections
> would be expected to behave differently.

An interesting observation would be that any system is defined
*entirely* in terms of its "connections", but while profound, this may
be leading us off track.

> When I said that disconnected parts would also display emergent behaviour, I
> meant that the disconnected parts when interacting would display emergent
> behaviour as compared to disconnected parts on their own. This is by analogy
> with the hydrogen atom: the proton and electron together will display
> emergent behaviour that was not evident when they were widely
> separated. However, with both people and subatomic particles, the
> foundations for the emergent behaviour were already clearly present in the
> parts, it's just that we weren't knowledgeable and bright enough to
> immediately see it.
> > > > > The equivalent of killing each other might be a decision to
> > > > > edit out some undesirable aspect of the collective personality,
> > > > > which has the advantage that no-one actually gets hurt.
> > > >
> > > > This sounds nice, but it's not clear to me what model it describes.
> > >
> > >  In a society with multiple individuals, the Christians might decide
> > > to persecute the Muslims. But if a single individual is struggling
> > > with the idea of whether to follow Christianity or Islam, he is hardly
> > > in a position to persecute one or other aspect of himself. The
> > > internal conflict may lead to distress, but that isn't the same thing.
> >
> > As I see it, clearly one of those conflicting systems of thought is
> > going to lose representation, corresponding to "dying" within the mind
> > of the person hosting the struggle.
> >
> > Maybe here again we see the same fundamental difference in our views.
> > In your view (I'm guessing) the difference is that no one died, no
> > unique personal consciousness was extinguished.
> Yes; the part that "loses" the battle lives on in the consciousness of the
> whole, and might even reassert itself at some future point. It's a matter of
> who gets hurt or upset, and how complete and irreversible the process is.

I strongly disagree.  The configuration of the system necessarily
changes to reflect the outcome of the conceptual battle.  Although it
sounds nice in humanistic terms to think that each conceptual entity
is somehow fully preserved, I don't see how this thinking can be
warranted.  If the human brain or hive-mind were a closed system, then
this could be argued on the basis of conservation of information, but
that is far from the case.

> > In my view, a person
> > exists to the extent that they have an observable effect (no matter
> > how indirect); there is no additional ontological entity representing
> > the unique core of their being, or subjective experience, or whatever
> > it has been called by various peoples over the thousands of years since
> > people became aware of their awareness.
> I would agree that a person cannot exist without having an observable
> effect, and that this observable effect is necessary and sufficient for the
> existence of that person. However, the observable effect is only important
> to the person themselves insofar as it does give rise to this feeling of
> personhood or consciousness. That is, if the same effect could be reproduced
> using computer hardware, or by God in heaven, or whatever, that would be
> fine with me.

I am somewhat hopeful that the thinking in Hofstadter's _I Am A
Strange Loop_ will help clarify this.

> > You will of course recognize the implication of an unfounded belief in
> > a soul in the above, and most likely reject it out of hand since you
> > are a modern man, well-read and trained in science and most certainly
> > do not believe in a soul.  Obviously Jef doesn't really know who he's
> > dealing with (thus this paragraph.)
> >
> > But my point is that despite any amount of evidence or debate, even
> > with belief in the heuristic power of Occam's Razor, the subjective
> > experience of subjective experience tends to hold sway.  As I
> > mentioned above, it has the advantage of being (subjectively)
> > complete.
> Computationalism contains the idea of multiple realizability, which means
> that the mind can survive destruction of the substrate on which it is being
> run.

Okay so far.

> This has striking similarities with the concept of an immaterial soul
> and the possibility of resurrection after the death of the body.

Yes, but thinking along those lines leads to a conceptual cul-de-sac.

> The modern
> advance over dualism is that the need for a separate soul-substance is
> obviated.

Yes, but belief in mind-matter dualism is quite distinct from
substratism. On this discussion list nearly all posters express a
belief in substrate independence, but most continue to bump into the
"paradoxes" of a worldview that struggles with overcoming mind-matter
dualism while preserving belief in a discrete Self.

- Jef
