[ExI] Digital Consciousness
Brent Allsop
brent.allsop at canonizer.com
Sat Apr 27 20:03:16 UTC 2013
On 4/27/2013 2:02 AM, Stathis Papaioannou wrote:
>
> On 27/04/2013, at 3:46 AM, Brent Allsop <brent.allsop at canonizer.com
> <mailto:brent.allsop at canonizer.com>> wrote:
>
>> Hi Stathis,
>>
>> <<<
>> The argument does not assume any theory of consciousness. Of course,
>> if the argument is valid and a theory predicts that computers cannot
>> be conscious then that theory is wrong. What you have to do is show
>> that either the premises of the argument are wrong or the reasoning is
>> invalid.
>> >>>
>>
>> It's frustrating that you can't see any more than this from what I'm
>> trying to say. I have shown exactly how the argument is wrong and
>> how the reasoning is invalid, in that the argument is completely
>> missing a set of very real theoretical possibilities.
>
> An argument has premises, or assumptions, and a conclusion. If you
> challenge the argument you can challenge the premises or you can
> challenge the logical process by which the conclusion is reached. If
> the conclusion follows logically from the premises then the argument
> is VALID, whether or not the premises are true. If the argument is
> valid and the premises are true then the argument is said to be SOUND.
>
> It would help if you could follow this and specify exactly where you
> see the problem, but it seems that you're not challenging the validity
> of the argument, but the truth of the premises. And the only premise
> is that the externally observable behaviour of the brain is
> computable. So, you must believe that the observable behaviour of the
> brain is NOT computable. In other words, there is something about the
> chemistry in the brain that cannot be modelled by a computer, no
> matter how good the model and no matter how powerful the computer. Is
> that what you believe?
I still fail to understand why you think Chalmers' fading / dancing
qualia paper is any kind of 'proof' that computers can be conscious.
Chalmers points out in that paper that there are two predicted
possibilities for the neural substitution experiment. One is that there
will be some kind of unavoidable fading of qualia as you replace the
neurons one at a time; the other is that you will be able to replace all
the neurons with abstracted representations of them and still experience
all of the same phenomenal consciousness throughout, with no fading
qualia of any kind. If you assume the latter as one of your premises, as
Chalmers argues is merely the most likely case in his mind, then yes,
one might consider the rest a proof, and I would agree with that. But
even Chalmers admits there is, in his mind, a 25% chance that there will
be some kind of fading qualia, or that Material Property Dualism (he
calls it "type-F monism") will be demonstrated to be true by science. He
more or less states this in that paper, and he gave me the specific 25%
figure personally.
Obviously, if science demonstrates fading qualia, as predicted, that
result will falsify any claim that such a thing has been proven
impossible. Do you not agree?
Brent Allsop