[ExI] Digital Consciousness

Brent Allsop brent.allsop at canonizer.com
Thu Apr 25 15:15:51 UTC 2013


Hi Stathis,

(And Kelly Anderson, given what we've covered, tell me whether the below
makes sense to you.)

Chalmers' argument is not a 'proof' that abstracted computers can be
conscious.  It completely ignores many theoretically possible realities.
Material Property Dualism, for example, is one of many possible theories
showing that it is not a 'proof'.

There is now an "idealized effing theory" world described in the Macro
Material Property Dualism camp: http://canonizer.com/topic.asp/88/36 .

In that theoretically possible world, it is the neurotransmitter glutamate
that has the elemental redness quality.  In this theoretical world,
glutamate causally behaves the way it does because of its redness quality.
Yet this causal behavior reflects 'white' light, and this is why we think
of it as having a 'whiteness' quality.  But of course, that is the classic
example of the quale interpretation problem (see:
http://canonizer.com/topic.asp/88/28 ).  If we interpret the causal
properties of something that has a redness quality, and represent our
knowledge of it with something that is qualitatively very different, we are
missing, and blind to, what is important about the qualitative nature of
glutamate and why it behaves the way it does.

So, let's forget about the redness quality for a bit, and just talk about
the real fundamental causal properties of glutamate in this theoretical
idealized effing world.  In this world, the brain is essentially a high
fidelity detector of real glutamate.  The only time the brain will say
"Yes, that is my redness quality" is when real glutamate, with its real
causal properties, is detected.  Nothing else will produce that answer
except real fundamental glutamate.

Of course, as described in Chalmers' paper, you can also replace the
system that is detecting the real glutamate with an abstracted system that
has appropriate hardware translation levels for everything being
interpreted as the real causal properties of real glutamate.  Once you do
this, the system, no matter what hardware it is running on, can be thought
of, or interpreted, as acting like it is detecting real glutamate.  But, of
course, that is precisely the problem, and how this idea completely misses
what is important.  This theory falsifiably predicts the alternate
possibility he describes in that paper: it predicts you'll have some type
of 'fading qualia', at least until you replace all of what is required to
interpret something very different from real consciousness as
consciousness.
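
To make this substitution point concrete, here is a minimal sketch in
Python (all the names are hypothetical illustrations of mine, not anything
from Chalmers' paper).  Two detectors give the same answer to the "is that
my redness?" question: one by detecting the real molecule, the other by
consulting an abstracted stand-in that is merely interpreted as glutamate.

    # Minimal illustrative sketch; every name here is hypothetical.

    class Glutamate:
        """Stands in for the real molecule, with its real causal properties."""

    class AbstractToken:
        """A representation that is merely *interpreted* as glutamate."""
        interpreted_as = "glutamate"

    class RealDetector:
        """Like the brain in this thought experiment: fires only for the
        real thing."""
        def is_my_redness(self, stimulus):
            return isinstance(stimulus, Glutamate)

    class AbstractedDetector:
        """A hardware translation layer maps any representation that is
        interpreted as glutamate onto the same answer."""
        def is_my_redness(self, stimulus):
            return getattr(stimulus, "interpreted_as", None) == "glutamate"

    real, abstracted = RealDetector(), AbstractedDetector()
    print(real.is_my_redness(Glutamate()))            # True
    print(abstracted.is_my_redness(AbstractToken()))  # True: same outward answer
    print(real.is_my_redness(AbstractToken()))        # False: the real detector
                                                      # is not fooled by the token

The identical outward answers are the whole point: behavior alone cannot
distinguish real detection from an interpretation of something very
different.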

It is certainly theoretically possible that the real causal properties of
glutamate behave the way they do because of its redness quality, and that
anything else being interpreted as the same can be interpreted as such -
but that's all it will be: an interpretation of something that is
fundamentally, and possibly qualitatively, very different from real
glutamate.

This one theoretical possibility thereby shows that Chalmers' idea isn't a
proof that abstracted computers have these phenomenal qualities, only that
they can be thought of, or interpreted, as having them.


On Wed, Apr 24, 2013 at 9:45 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:

> On Thu, Apr 25, 2013 at 2:11 AM, Brent Allsop
> <brent.allsop at canonizer.com> wrote:
>
> > Oh and Stathis said:
> >
> > <<<<
> > There is an argument from David Chalmers which proves that computers
> > can be conscious assuming only that (a) consciousness is due to the
> > brain and (b) the observable behaviour of the brain is computable.
> > Consciousness need not be defined for the purpose of the argument
> > other than vaguely: you know it if you have it. This makes the
> > argument robust, not dependent on any particular philosophy of mind or
> > other assumptions. The argument assumes that consciousness is NOT
> > reproducible by a computer and shows that this leads to absurdity. As
> > far as I am aware no-one has successfully challenged the validity of
> > the argument.
> >
> > http://consc.net/papers/qualia.html
> >>>>
> >
> > Stathis, I think we've made much progress on this issue since you were
> > last involved in the conversation.  I'm looking forward to seeing if we
> > can now convince you that we have, and that there is a real possibility
> > of a solution to the Chalmers conundrum you believe still exists.
> >
> > I, for one, am in the camp that believes there is an obvious problem
> > with Chalmers' 'fading/dancing qualia' argument, and that once you
> > understand this, the 'hard problem' goes away - objectively,
> > simulatably, sharably (as in effing of the ineffable), no fuss, no
> > muss, all falsifiably or scientifically demonstrably (to the convincing
> > of all experts) so.  Having an experience like "oh THAT is what your
> > redness is like - I've never experienced anything like that before in
> > my life, and was that kind of 'redness zombie' before now" will
> > certainly falsify most bad theories out there today, especially all the
> > ones that predict that kind of sharing or effing of the ineffable will
> > never be possible.
>
> The paper cited has nothing to do with the "Hard Problem" or the
> possibility of sharing experiences. It is just a proof that computers
> can be conscious.
>
>
> --
> Stathis Papaioannou
>

