<div dir="ltr">On Thu, Apr 25, 2013 at 9:15 AM, Brent Allsop <span dir="ltr"><<a href="mailto:brent.allsop@canonizer.com" target="_blank">brent.allsop@canonizer.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><img alt=""></div><div>Hi Stathis,<br>
<br></div><div>(And Kelly Anderson, given what we've covered, tell me if the below makes sense to you.)<br>
</div><div><br>It is not a 'proof' that abstracted computers can be conscious. It completely ignores many theoretically possible realities. For example, Material Property Dualism is one of many possible theories that show this is not a 'proof'.<br>
<br></div>There is now an "idealized effing theory" world described in the Macro Material Property Dualism camp: <a href="http://canonizer.com/topic.asp/88/36" target="_blank">http://canonizer.com/topic.asp/88/36</a> .<br>
</div></blockquote><div><br></div><div style>I would fall in the camp of Functional Property Dualism, I think. Unless there is some other camp that is even more explicit.</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">
<div>In that theoretically possible world, it is the neurotransmitter glutamate that has the elemental redness quality. In this theoretical world, glutamate causally behaves the way it does because of its redness quality. Yet this causal behavior reflects 'white' light, and this is why we think of it as having a 'whiteness' quality. But of course, that is the classic example of the quale interpretation problem (see: <a href="http://canonizer.com/topic.asp/88/28" target="_blank">http://canonizer.com/topic.asp/88/28</a>). If we interpret the causal properties of something with a redness quality to it, and represent our knowledge of it with something that is qualitatively very different, we are missing, and blind to, what is important about the qualitative nature of glutamate and why it behaves the way it does.<br></div></div></blockquote><div><br></div><div style>I don't believe that. I believe "redness" is an emergent illusion constructed by the brain in software, and has as much to do with glutamate as Word has to do with accumulators and assembly language.</div>
<div style> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div><div>So, let's just forget about the redness quality for a bit, and just talk about the real fundamental causal properties of glutamate in this theoretical idealized effing world. In this world, the brain is essentially a high-fidelity detector of real glutamate. The only time the brain will say, "Yes, that is my redness quality," is when real glutamate, with its real causal properties, is detected. Nothing else will produce that answer except real, fundamental
glutamate.<br></div></div></blockquote><div><br></div><div style>I totally disagree, but I do understand your position better.</div><div style> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div></div><div>Of course, as described in Chalmers' paper, you can also replace the
system that is detecting the real glutamate, with an abstracted system
that has appropriate hardware translation levels for everything that is
being interpreted as being real causal properties of real glutamate, so
once you do this, this system, no matter what hardware it is running on,
can be thought of, or interpreted as acting like it is detecting real
glutamate. </div></div></blockquote><div><br></div><div style>Correct. And that's as "real" as the other in my mind.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div>But, of course, that is precisely the problem, and how this
idea is completely missing what is important. And this theory is
falsifiably predicting the alternate possibility he describes in that
paper. it is predicting you'll have some type of 'fading quale', at
least until you replace all of what is required, to interpret something
very different than real consciousness, as consciousness.<br></div></div></blockquote><div><br></div><div style>If it walks like a duck, and quacks like a duck, who is to say that it is not a duck?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div>It is certainly theoretically possible, that the real
causal properties of glutamate are behaving the way they do, because of
it's redness quality. And that anything else that is being interpreted
as the same, can be interpreted as such - but that's all it will be. An
interpretation of something that is fundamentally, and possibly
qualitatively, very different than real glutamate.<br></div></div></blockquote><div><br></div><div style>As Spike says, if it is qualitatively different, but still delivers me a Big Mac when I order it, I'm good with that for many purposes.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div></div><div>
<br></div>This one theoretical possibility thereby proves that Chalmers' idea isn't a proof that abstracted computers have these phenomenal qualities, only that they can be thought of, or interpreted, as having them.</div></blockquote><div><br></div><div style>I'm of the camp that will believe something has consciousness when it says it does, and when it acts in every way as if it does. Good enough for me. I guess that puts me in a different canon?</div>
<div style><br></div><div style>-Kelly</div><div style> </div></div></div></div>