[ExI] Quantum consciousness, quantum mysticism, and transhumanist engineering

Stathis Papaioannou stathisp at gmail.com
Wed Mar 22 12:12:06 UTC 2017


On Wed., 22 Mar. 2017 at 8:22 pm, Brent Allsop <brent.allsop at gmail.com>
wrote:

>
>
> On Mon, Mar 13, 2017 at 11:50 PM, Stathis Papaioannou <stathisp at gmail.com>
> wrote:
>
>
> But the comparison of redness and greenness, or anything else whatsoever
> that the system does, will necessarily occur provided only that the
> substituted part is behaviorally identical. "Behaviorally identical" means
> that it interacts with its neighbors in the same way - nothing else.
>
>
> Well, there you have it.  I'm guessing you still can't see that this
> is what I've been trying to say all along.  You must include this
> comparison behavior when you do any type of neural substitution correctly.
> Not preserving this functionality in your theory is what makes it
> fallacious.  Can you not see that, up until now, you've always neuro
> substituted out any theory I provided that included this ability?  You
> always twisted any theoretical system I proposed that preserves this
> ability to compare during the neural substitution, in a way that
> completely removed this comparison ability.  Go ahead: propose any
> qualitative theory that preserves this, then try a neural substitution with
> it.
>
> If you provide a qualitative theory that includes the necessary ability to
> compare red and green in your neural substitution, you will be able to do a
> neural substitution from redness/greenness to purpleness/yellowness in
> such a way that both of them behave the same, honestly and accurately
> saying: "I know what red and green are like."  You will be able to do this
> again with blackness and whiteness, and again with oneness and zeroness,
> all of them still correctly proclaiming: "I know what red and green are
> like for me."
>
> But the only way to keep them "behaviorally identical" is to keep each of
> these neurally substituted conscious entities qualia blind and
> qualitatively isolated from each other - the way all of you still are.  If
> you do the neural substitution in any way such that the qualitative
> isolation is not preserved, the behavior will be different, with the system
> saying things like: "Redness and greenness sure are different from
> purpleness and yellowness."  For example, you could add a qualitative
> memory system, so that the being could remember and compare what redness
> and greenness were like before the qualitative substitution.
>
> It is also important to remember that we are talking about a simple
> two-pixel qualitative comparison system.  It's easy to preserve
> isolation with such a simple system, especially when the system you
> are substituting only interacts with a few of its neighbors.  If you
> have a qualitative system like ours, where you can compare any of the
> tens of thousands of qualitative pixel or voxel elements with all of the
> others at the same time, preserving the isolation is much more difficult,
> but not impossible.  All of the tens of thousands of voxel elements must
> interact with all the others in some comparison-enabling way - allowing the
> qualitative comparison of them all at the same time.
>
> There is a scene in season 2 of the British TV series "Humans" where one
> of the "Synths" that has become "conscious" recollects that life was very
> different before he became "conscious".  Once we are no longer qualia
> blind, we'll all demand that our TV shows be much less qualia blind,
> having characters say things more like: "My oneness and zeroness
> representations of red and green were sure qualitatively less than my new
> redness and greenness representations."  At least in Humans, you can see
> this qualitative recognition on their faces when they become conscious and
> walk outside for the first time.  If they were "behaving the same", they
> wouldn't have that astonished look on their faces after they walk outside,
> once they become qualitatively "conscious".
>

So, do you agree that if the substituted component interacts with its
neighbours normally, then the whole system will be able to distinguish red
from green normally? Or can you imagine a situation where the
substituted component interacts with its neighbours normally but the system
does *not* behave normally? If the latter, please explain, preferably with
an example not related to consciousness.
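The sense of "behaviorally identical" at issue here can be sketched as a toy program (an illustration added for clarity, with hypothetical function names, not anything from the original thread): two components with different internal implementations but the same input/output behavior toward their "neighbors" leave the whole system's behavior unchanged.

```python
# Toy sketch of the neural substitution argument. Two comparator
# "components" differ internally but expose identical behavior to
# the rest of the system.

def biological_comparator(red_signal, green_signal):
    # "Original" component: compares the signals directly.
    return "different" if red_signal != green_signal else "same"

def silicon_comparator(red_signal, green_signal):
    # "Substituted" component: same interface, different internals
    # (a precomputed lookup table instead of a direct comparison).
    table = {(r, g): ("same" if r == g else "different")
             for r in range(256) for g in range(256)}
    return table[(red_signal, green_signal)]

def whole_system(comparator, red_signal, green_signal):
    # The rest of the "brain" only sees the comparator's output,
    # i.e. how it interacts with its neighbors.
    verdict = comparator(red_signal, green_signal)
    return f"I can see these are {verdict}"

# Because both components interact with their caller identically,
# the whole system's reports are indistinguishable:
assert (whole_system(biological_comparator, 200, 30)
        == whole_system(silicon_comparator, 200, 30))
```

On this picture, any further question about what the comparison is "like" from the inside is invisible at the interface, which is the point of contention in the thread.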

-- 
Stathis Papaioannou

