[ExI] Digital Consciousness

Kelly Anderson kellycoinguy at gmail.com
Thu Apr 25 16:54:05 UTC 2013


On Thu, Apr 25, 2013 at 9:15 AM, Brent Allsop <brent.allsop at canonizer.com> wrote:

> Hi Stathis,
>
> (And Kelly Anderson, tell me if given what we've covered, does the below
> make sense to you?)
>
> It is not a 'proof' that abstracted computers can be conscious.  It
> completely ignores many theoretically possible realities. For example,
> Material Property Dualism is one of many possible theories showing this
> is not a 'proof'.
>
> There is now an "idealized effing theory" world described in the Macro
> Material Property Dualism camp: http://canonizer.com/topic.asp/88/36 .
>

I would fall in the camp of Functional Property Dualism, I think. Unless
there is some other camp that is even more explicit.
>
> In that theoretically possible world, it is the neurotransmitter glutamate
> that has the element redness quality.  In this theoretical world Glutamate
> causally behaves the way it does, because of its redness quality.  Yet
> this causal behavior reflects 'white' light, and this is why we think of it
> as having a 'whiteness' quality.  But of course, that is the classic
> example of the quale interpretation problem (see:
> http://canonizer.com/topic.asp/88/28 ).  If we interpret the causal
> properties of something with a redness quality to it, and represent our
> knowledge of such with something that is qualitatively very different, we
> are missing and blind to what is important about the qualitative nature of
> glutamate, and why it behaves the way it does.
>

I don't believe that. I believe "redness" is an emergent illusion
constructed by the brain in software; it has as much to do with
glutamate as Word has to do with accumulators and assembly language.
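
To make the analogy concrete, here is a toy Python sketch (the names
are my own invention, purely illustrative, not anyone's model of the
brain): the top layer builds its "red" label without ever referencing
what the bottom layer is made of, just as Word never references the
CPU's registers.

    # Hypothetical illustration: the high-level layer constructs a
    # label from signals and never depends on what implements them.

    class Substrate:
        """Low-level layer: neurons and glutamate, or silicon."""
        def signal(self):
            return 0.9  # some raw activity level

    class PerceptualModel:
        """High-level layer: constructs 'redness' in software."""
        def __init__(self, substrate):
            self.substrate = substrate

        def perceive(self):
            # "red" is a construct of this layer; swapping in any
            # substrate with the same signal() leaves it unchanged.
            return "red" if self.substrate.signal() > 0.5 else "not red"

    print(PerceptualModel(Substrate()).perceive())  # -> red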


> So, let's just forget about the redness quality for a bit, and just talk
> about the real fundamental causal properties of glutamate in this
> theoretical idealized effing world.  In this world, the brain is
> essentially a high fidelity detector of real glutamate.  The only time the
> brain will say: "Yes, that is my redness quality" is when real glutamate,
> with its real causal properties, is detected.  Nothing else will produce
> that answer, except real fundamental glutamate.
>

I totally disagree, but I do understand your position better.


> Of course, as described in Chalmers' paper, you can also replace the
> system that is detecting the real glutamate, with an abstracted system that
> has appropriate hardware translation levels for everything that is being
> interpreted as the real causal properties of real glutamate, so once you
> do this, this system, no matter what hardware it is running on, can be
> thought of, or interpreted as acting like it is detecting real glutamate.
>

Correct. And that's as "real" as the other in my mind.
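
A minimal sketch of how I read "hardware translation level" (Python;
the class and method names are my own invention): the detector is
keyed to causal behavior, so any implementation that reproduces the
causal profile gets the same answer.

    # Hypothetical sketch: a detector keyed to causal behavior cannot
    # tell a functional substitute from the real molecule.

    class RealGlutamate:
        def binds_receptor(self):
            return True  # actual molecular behavior

    class SimulatedGlutamate:
        def binds_receptor(self):
            return True  # translation layer reproducing the behavior

    def brain_detects(molecule):
        if molecule.binds_receptor():
            return "Yes, that is my redness quality"
        return "No"

    # The detector gives the same answer for both:
    print(brain_detects(RealGlutamate()) ==
          brain_detects(SimulatedGlutamate()))  # True

On this reading, nothing inside brain_detects can distinguish the two,
which is exactly why I call them equally "real".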


> But, of course, that is precisely the problem, and how this idea is
> completely missing what is important.  And this theory is falsifiably
> predicting the alternate possibility he describes in that paper.  It is
> predicting you'll have some type of 'fading qualia', at least until you
> replace all of what is required to interpret something very different from
> real consciousness as consciousness.
>

If it walks like a duck, and quacks like a duck, who is to say that it is
not a duck?
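
Programmers even have a name for exactly this stance: duck typing. A
toy Python example (my own, purely illustrative):

    class Duck:
        def quack(self):
            return "quack"

    class RobotDuck:
        def quack(self):
            return "quack"  # different substrate, same behavior

    def listen(bird):
        # No check of what 'bird' really is; anything that quacks
        # counts as a duck for this caller's purposes.
        return bird.quack()

    print(listen(Duck()) == listen(RobotDuck()))  # True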


> It is certainly theoretically possible, that the real causal properties of
> glutamate are behaving the way they do, because of its redness quality.
> And that anything else that is being interpreted as the same, can be
> interpreted as such - but that's all it will be.  An interpretation of
> something that is fundamentally, and possibly qualitatively, very different
> from real glutamate.
>

As Spike says, if it is qualitatively different, but still delivers me a
Big Mac when I order it, I'm good with that for many purposes.


>
> This one theoretical possibility thereby shows that Chalmers' idea isn't a
> proof that abstracted computers have these phenomenal qualities, only that
> they can be thought of, or interpreted, as having them.
>

I'm of the camp that will believe something has consciousness when it says
it does, and when it acts in every way as if it does. Good enough for me. I
guess that puts me in a different canon?

-Kelly