[extropy-chat] Will we all choose to become one mind only?
stathisp at gmail.com
Mon Apr 30 13:24:39 UTC 2007
On 4/30/07, Jef Allbright <jef at jefallbright.net> wrote:
> It appears that you disagree with the standard notion of MPD, seeing
> only a single intrinsic identity exhibiting multiple patterns of
> behavior. I disagree with MPD, seeing no intrinsic identities at all,
> and only various patterns of behavior.
It's the difference between the multiple patterns of behaviour being aware
of each other and feeling they "own" each other's actions, or else time
sharing the same body as separate patterns of behaviour, with some of the
patterns of behaviour being completely unaware of the presence or actions of
the others. (Although the latter makes it into DSM-IV as Dissociative
Identity Disorder, its existence as a real clinical entity is controversial;
we can nevertheless consider it at least a theoretical possibility.)
You can describe the difference as I just have, or you can describe the
"normal" case as a single integrated identity and the MPD/DID case as
multiple identities in the one body.
> > > It seems that our difference comes down to our difference in
> > > understanding the nature of subjective experience. You seem to
> > > believe that subjective experience is fundamental or primary in some
> > > important way (it is, but to apply it to "objective" descriptions of
> > > the world entails a category error), while I see "subjective
> > > experience" very simply as a description of the perceived internal
> > > state of any system, as perceived by that system.
> > I'm not sure how to respond to this, because I don't see the
> It seems to me that the essential difference is that your view seems
> to assume the existence of an additional ontic entity representing the
> subjective self. Thus your descriptions seem to me to include
> distracting Ptolemaic epicycles.
I must keep missing the mark with my terminology here, because I don't
actually believe in any such additional ontic entity. It's just often easier
to refer to person, identity, self, etc. as a collective term, like
"football team" is a collective term for individuals who play football and
tend to cooperate in particular loosely-defined ways during a football
match. The players, coach, team colours etc. can change over time but there
is still a sense in which it is the "same" team, mainly because the
changes are gradual and the players and supporters consider it to be the
same team. But I don't think this entails that there is a separate ontic
entity representing the team.
Then there is the quite separate issue: what almost everyone naively means
when they refer to "themselves" is *important*. This has nothing to do with
philosophical considerations but is just a statement of value. The sun still
feels warm on your face irrespective of whether it is a nuclear reactor or a
ball of fire dragged across the sky by Apollo in his chariot.
> > That there is some subjective experience associated with certain physical
> > processes is surely undeniable. For all I know, there may also be some
> > tiny subjective experience when a thermostat responds to a temperature
> > change, or even when the thermostat just sits there. The "perceived
> > state of any system as perceived by the system" is as good a description
> > of this as any, and captures the fact that there is no separate
> > juice at work here.
> Would "as good as any, and..." imply "better than many, because..."?
It's better than any description postulating a separate ontic entity, for a start.
> > As to whether the subjective experience is part of an
> > objective description of the world, that might just boil down to a matter
> > of linguistic taste. Did I just write that sentence because I wanted to, or
> > did I *really* write that sentence because the matter in my brain, acting
> > according to the laws of physics, made me do so?
> Either can be equally valid, within context. But one of these is
> preferable per the heuristic of Ockham's principle of parsimony.
I don't think that Ockham's razor says anything about how verbose you can
be in your explanation. Two explanations can be equivalent even though one
is longer than the other, and both might be preferable to a shorter, but
wildly improbable explanation.
> > But the crucial point is that all the functional subsystems in the brain are
> > normally in communication, creating an integrated whole, or the illusion of
> > an integrated whole if you prefer that qualifier.
> Okay so far, with the understanding that the "illusion of an
> integrated whole" is "experienced" at the level of the whole.
> > If I am linked to another
> > person so that I experience his thoughts, feelings, memories and he mine,
> How could two individual components of a system have the same
> experience? I know there's plenty of science fiction providing such
> scenarios, but in terms of actual systems theory it's incoherent.
Imagine doing the actual experiment. You walk up to someone and effect a
connection between your brain and his brain. Suddenly, all your sensory
inputs double, you seem to remember stuff that you know you always knew but
somehow couldn't quite access until a moment ago, and you seem to understand
and conceptualise things which you couldn't quite grasp before (OK, it
probably wouldn't work at all because the two brains' internal wiring will
be completely incompatible or something, but it's a SF scenario). So there
wouldn't be two entities with shared experiences, there would be one entity
with the shared experience of the original two entities. On the other hand,
if the connection worked but was low bandwidth, I think there would be a
sense in which individuality could be maintained. You would observe your
counterpart wincing in response to touching a hotplate with his right hand,
and then a few moments later you would see your own #2 right hand touching
the hotplate and experience the pain. You might even consider severing the
connection if you thought the pain was going to be bad enough, although that
might be like having a stroke affecting one side of your body.
> I'm not sure how to frame this concept effectively for you; I think your
> background is more in the "softer" humanities side than the "harder"
> sciences and engineering side of C.P. Snow's Great Divide.
No actually, I grew up playing with electric circuits and making explosives.
It's a wonder I survived to adulthood.
> System-level thoughts are not spread throughout the
> elements of the system, they are "emergent" as the higher level
> behavior of the system. From an engineering perspective this is just
> so obvious I don't know what else to say within the limitations of
> this medium of email. But I remain motivated to work together toward
> a mutual understanding.
Sure, I understand that. I don't "see" in the neurons of my visual cortex,
even though those neurons fire when I look at something. Only in certain
neurological diseases do the various functional subsystems become dissociated.
> > > It might be informative to consider the distinction between
> > > "subjective reality" and "subjective experience" above.
> > ?
> In an effective hive mind, as with a human mind, the functional
> components couldn't possibly, even in principle, work with the
> infinite complexity of unfiltered reality. The very process of
> "making sense" (extracting, selecting, and encoding regularities) of
> the environment creates a "subjective reality" that is "subjectively real".
OK, but it's sort of strange to think of subjective reality without also
automatically thinking of it as subjectively experienced.
> > I like to think of reductionism as the "true" theory explaining the world. A
> > hydrogen atom behaves differently from a proton and electron, but really, it
> > is *no more* than a proton and electron; it's just that we're not smart
> > enough for the behaviour of a hydrogen atom to be immediately and
> > intuitively obvious when we contemplate its components, so we call it
> > emergent behaviour.
> I used to strongly believe the same way. At some point I realized
> that reductionism is an idealization that ultimately fails due to the
> utter inability of any system to form an objective model of its world.
> Consider the three-body problem as part of the explanation. (As a
> self-referential aside, a complete explanation is of course impossible
> here or in any other context.)
The three-body problem can be solved by computer simulation using nothing
more than the laws of classical mechanics. It's just luck that the single-body
problem has the solution x=vt, which provides us with a computational
shortcut. I don't see why one explanation is called reductionist and the
other not. In any case, reductionism does involve an idealization because it
assumes the observer is outside of the system under consideration; but this
is just a practicality, like getting sufficiently accurate starting
parameters to predict the behaviour of a chaotic system.
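To make the point concrete, here is a minimal Python sketch of what "solved by computer simulation" means for three bodies: direct numerical integration of Newton's laws with a leapfrog (velocity Verlet) scheme. The units (G = 1), masses, and starting configuration are illustrative choices of mine, not anything canonical; no closed-form solution is needed, only the force law stepped forward in time.

```python
import math

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational accelerations in 2D, with G = 1."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            f = masses[j] / r**3  # a_i = G * m_j * r_vec / |r|^3
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

def step(positions, velocities, masses, dt):
    """One leapfrog (velocity Verlet) step: half-kick, drift, half-kick."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += 0.5 * dt * acc[i][0]
        velocities[i][1] += 0.5 * dt * acc[i][1]
        positions[i][0] += dt * velocities[i][0]
        positions[i][1] += dt * velocities[i][1]
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += 0.5 * dt * acc[i][0]
        velocities[i][1] += 0.5 * dt * acc[i][1]

# Three equal masses in an arbitrary configuration with zero net momentum.
masses = [1.0, 1.0, 1.0]
pos = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vel = [[0.0, 0.5], [-0.4, -0.25], [0.4, -0.25]]

for _ in range(1000):
    step(pos, vel, masses, dt=0.001)

print(pos)  # the trajectory exists numerically even though no general closed form does
```

The leapfrog scheme is chosen because it is symplectic, so energy errors stay bounded over long runs; the deeper point stands regardless of integrator: nothing beyond classical mechanics is invoked.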
> > > > In a society with multiple individuals, the Christians might decide to
> > > > persecute the Muslims. But if a single individual is struggling with the
> > > > idea of whether to follow Christianity or Islam, he is hardly in a
> > > > position to persecute one or other aspect of himself. The internal
> > > > conflict may lead to distress, but that isn't the same thing.
> > >
> > > As I see it, clearly one of those conflicting systems of thought is
> > > going to lose representation, corresponding to "dying" within the mind
> > > of the person hosting the struggle.
> > >
> > > Maybe here again we see the same fundamental difference in our views.
> > > In your view (I'm guessing) the difference is that no one died, no
> > > unique personal consciousness was extinguished.
> > Yes; the part that "loses" the battle lives on in the consciousness of the
> > whole, and might even reassert itself at some future point. It's a matter of
> > who gets hurt or upset, and how complete and irreversible the process is.
> I strongly disagree. The configuration of the system necessarily
> changes to reflect the outcome of the conceptual battle. Although it
> sounds nice in humanistic terms to think that each conceptual entity
> is somehow fully preserved, I don't see how this thinking can be
> warranted. If the human brain or hive-mind were a closed system, then
> this could be argued on the basis of conservation of information, but
> that is far from the case.
Then I would return to the naive, non-philosophical stance and point out
that, as a matter of fact, if I have an opinion and change my mind about it,
I am at worst only a little upset, whereas faced with the prospect that my
opinion will be wiped from the collection of individuals by means of
homicide, I will be very upset.
> > > In my view, a person
> > > exists to the extent that they have an observable effect (no matter
> > > how indirect); there is no additional ontological entity representing
> > > the unique core of their being, or subjective experience, or whatever
> > > it is called by various peoples for the thousands of years since
> > > people became aware of their awareness.
> > I would agree that a person cannot exist without having an observable
> > effect, and that this observable effect is necessary and sufficient for the
> > existence of that person. However, the observable effect is only important
> > to the person themselves insofar as it does give rise to this feeling of
> > personhood or consciousness. That is, if the same effect could be achieved
> > using computer hardware, or by God in heaven, or whatever, that would be
> > fine with me.
> I am somewhat hopeful that the thinking in Hofstadter's _I Am A
> Strange Loop_ will help clarify this.
Could you specify a chapter?
Thank-you, as ever, for your careful consideration of my posts.