[ExI] Consciousness and paracrap

Stathis Papaioannou stathisp at gmail.com
Thu Feb 18 01:49:01 UTC 2010


On 18 February 2010 07:38, Spencer Campbell <lacertilian at gmail.com> wrote:

> Stathis Papaioannou <stathisp at gmail.com>:
>> Several people have commented that we need a definition of
>> consciousness to proceed, but I disagree. I think everyone knows what
>> is meant by the word and so we can have a complete discussion without
>> at any point defining it.
>
> Dude, I barely know what I mean by the word when I use it in my own
> head. Are you talking about access consciousness? Phenomenal
> consciousness? Reflexive consciousness? All of the above?
>
> http://www.def-logic.com/articles/silby011.html
>
> The reason I haven't supplied a rigorous definition for consciousness,
> as I have for intelligence, is that I can't articulate the meaning
> of it for myself. This, to me, does not seem ancillary to the
> discussion; it seems to be the very root of the discussion, namely the
> question, "what is consciousness?".
>
> Stathis Papaioannou <stathisp at gmail.com>:
>> For those who say that consciousness does
>> not really exist: consciousness is that thing you are referring to
>> when you say that consciousness does not really exist.
>
> That's fair. There isn't any question of what I'm talking about when I
> refer to the Flying Spaghetti Monster.
>
> I can describe the FSM to you in great detail, however. I can't do the
> same with consciousness, except perhaps to say that, if it exists, it
> occasionally compels normally sane people to begin a sentence with
> "dude".

You can't define it, but when I ask you if you are conscious now, do
you have to stop and think? It is this immediately understood sense
that I am referring to. This is not to say that further elaboration is
useless, but you can go a long way discussing it without an explicit
definition.

> Stathis Papaioannou <stathisp at gmail.com>:
>> The purpose of the above is to show that it is impossible (logically
>> impossible, not just physically impossible) to make a brain part, and
>> hence a whole brain, that behaves exactly like a biological brain but
>> lacks consciousness. Either it isn't possible to make such an
>> artificial component at all, or else it is possible to make such a
>> component but it will necessarily also have consciousness. The
>> alternative is to say that you're happy with the idea that you may be
>> blind, deaf, unable to understand English, etc., but neither you nor
>> anyone else has noticed.
>>
>> Gordon Swobe's response is that this thought experiment is ridiculous
>> and I should come up with another one that doesn't challenge the
>> self-evident fact that digital computers cannot be conscious.
>
> Gordon doesn't disagree with that proposition as stated, even if he
> sometimes claims that he does (for some reason). He's consistently
> said that we should be able to engineer artificial consciousness, but
> that to do so requires more than a clever piece of software in a
> digital computer.
>
> So, I suggest that you rephrase the experiment so that it explicitly
> involves replacing neurons, cortices, or whole brains with
> microprocessor-driven prosthetics. We know that he believes the
> whole-brain version will be a zombie, but I haven't been able to
> discern any clear conclusions from him on the other two.

The thought experiment involves replacing brain components with
artificial components that perfectly reproduce the I/O behaviour of
the original components, but not the consciousness. Gordon agrees that
this is possible. However, he then either claims that the artificial
components will not behave the same as the biological components (even
though it is an assumption of the experiment that they will) or else
says the experiment is ridiculous.

> He has said before that partial replacement only confuses the matter,
> implying that it's a useless thought experiment. I do not see why he
> would think that, though.

Perhaps because he can see that it shows his thesis, that it is
possible to separate consciousness from behaviour, to be false. Either
that, or he must accept the possibility of partial zombies.

> The only coherent answer of his I remember goes something like this: a
> man has a damaged language center, and a surgeon replaces neurons with
> artificial substitutes one by one. This works so poorly that the
> surgeon must replace the entire brain before language function is
> restored, at which point the man is a philosophical zombie.
>
> But we always start with the assumption that computerized neurons do
> not work poorly, indeed that they "depict" ordinary neurons perfectly
> (using that depiction as a guide to manipulate their synthetic axons
> and such), and I've never seen him explain why he considers this
> assumption to be inherently false.

That's the problem: he could say that they can't work properly on the
grounds that there is something non-computable about neuronal
behaviour, but he does not. Instead, he agrees that they will work
properly, then in the next breath says they will not work properly.


-- 
Stathis Papaioannou
