[ExI] Digital Consciousness

Stathis Papaioannou stathisp at gmail.com
Thu Apr 25 23:51:52 UTC 2013


On Fri, Apr 26, 2013 at 1:15 AM, Brent Allsop
<brent.allsop at canonizer.com> wrote:
>
> Hi Stathis,
>
> (And Kelly Anderson, tell me if given what we've covered, does the below
> make sense to you?)
>
> It is not a 'proof' that abstracted computers can be conscious.  It
> completely ignores many theoretically possible realities. For example, Material
> Property Dualism is one of many possible theories that shows this is not a
> 'proof'.

The argument does not assume any theory of consciousness. Of course,
if the argument is valid and a theory predicts that computers cannot
be conscious then that theory is wrong. What you have to do is show
that either the premises of the argument are wrong or the reasoning is
invalid.

> There is now an "idealized effing theory" world described in the Macro
> Material Property Dualism camp: http://canonizer.com/topic.asp/88/36 .
>
> In that theoretically possible world, it is the neurotransmitter glutamate
> that has the elemental redness quality.  In this theoretical world, glutamate
> causally behaves the way it does because of its redness quality.  Yet this
> causal behavior reflects 'white' light, and this is why we think of it as
> having a 'whiteness' quality.  But of course, that is the classic example of
> the quale interpretation problem (see: http://canonizer.com/topic.asp/88/28
> ).  If we interpret the causal properties of something with a redness
> quality to it, and represent our knowledge of such with something that is
> qualitatively very different, we are missing and blind to what is important
> about the qualitative nature of glutamate, and why it behaves the way it
> does.
>
> So, let's just forget about the redness quality for a bit, and just talk
> about the real fundamental causal properties of glutamate in this
> theoretical idealized effing world.  In this world, the brain is
> essentially a high-fidelity detector of real glutamate.  The only time the
> brain will say "Yes, that is my redness quality" is when real glutamate,
> with its real causal properties, is detected.  Nothing else will produce
> that answer, except real fundamental glutamate.
>
> Of course, as described in Chalmers' paper, you can also replace the
> system that is detecting the real glutamate with an abstracted system that
> has appropriate hardware translation levels for everything that is being
> interpreted as being real causal properties of real glutamate.  Once you
> do this, this system, no matter what hardware it is running on, can be
> thought of, or interpreted, as acting like it is detecting real glutamate.
> But, of course, that is precisely the problem, and how this idea
> completely misses what is important.  And this theory falsifiably
> predicts the alternate possibility he describes in that paper: it
> predicts you'll have some type of 'fading quale', at least until you
> replace all of what is required to interpret something very different from
> real consciousness as consciousness.
>
> It is certainly theoretically possible that the real causal properties of
> glutamate behave the way they do because of its redness quality,
> and that anything else that is being interpreted as the same can be
> interpreted as such - but that's all it will be: an interpretation of
> something that is fundamentally, and possibly qualitatively, very different
> from real glutamate.
>
> This one theoretical possibility thereby shows that Chalmers' idea isn't a
> proof that abstracted computers have these phenomenal qualities, only that
> they can be thought of, or interpreted, as having them.

If it is true that real glutamate is needed for redness, then the
redness qualia will fade and eventually disappear if the glutamate
detecting system is replaced with alternative hardware. This is not in
itself problematic: after all, visual qualia will fade and eventually
disappear with progressive brain damage. But if you accept that the
alternative hardware is just as good at detecting the glutamate and
stimulating the neighbouring neurons accordingly, but without the
relevant qualia, then you have a situation where the qualia fade and
may eventually disappear BUT THE SUBJECT BEHAVES NORMALLY AND NOTICES
NO DIFFERENCE. And that is the problem.


--
Stathis Papaioannou
