[ExI] [Extropolis] Woo Hoo, I convinced GPT-3 it Isn't Conscious.
brent.allsop at gmail.com
Mon Aug 30 01:31:20 UTC 2021
On Sun, Aug 29, 2021 at 5:58 PM Stathis Papaioannou <stathisp at gmail.com>
> On Mon, 30 Aug 2021 at 07:45, Brent Allsop <brent.allsop at gmail.com> wrote:
>> Hi Stathis,
>> We've gone over this many times, but your model seems to be missing
>> representations of redness and greenness, as different than red and green.
>> So it appears that all I say gets mapped into your model, leaving it
>> absent of what I'm trying to say. Here you are talking about only the
>> 3rd, strongest form of effing the ineffable, where you directly
>> computationally bind another's phenomenal qualities into your own
>> consciousness.
>> Both the 3rd, strongest form and the 2nd, stronger form, where you
>> computationally bind something you have never experienced before into
>> your consciousness, require brain hacking.
>> The 1st, weakest form of effing the ineffable, which I was using with
>> Emerson, is different. It does not require brain hacking. All it
>> requires is objective observation and communication in a way that
>> distinguishes between red and redness, and that can model differences
>> in specific intrinsic qualities. If one is using only one abstract word
>> "red" for all things representing red knowledge, you can't model
>> differences in the different intrinsic qualities which may be
>> representing red. For the weakest form of
>> effing the ineffable, all you need is a phenomenal definition for
>> subjective terms like "redness", enabling you to communicate things with
>> well defined terms like this example effing statement: "My redness is like
>> your greenness, both of which we call red."
>> Also, thanks to all your endless help, I think I have a better
>> understanding of our differences. I would like to get these differences
>> between your "Functional Property Dualist"
>> camp, and the "Qualia are Material Qualities"
>> camp canonized. Let me see if you agree that this is a good way to
>> concisely describe our differences?
>> Functionalists like James Carroll and yourself, using the
>> neuro-substitution argument, assume that a neuron functions similarly
>> to the discrete logic gates in an abstract CPU.
>> You also assume ALL computation operates this way, which is why you
>> think you can claim that the neuro-substitution argument applies to all
>> possible computational cases, justifying your belief that this argument
>> is a "proof" that qualia must be functional in all possible
>> computational instances.
>> Whereas Materialists, like Steven Lehar and me, think this way of
>> thinking about consciousness, and this assumption, is WRONG.
>> We believe that within any such abstract, discrete-logic-only
>> functional system, there can be nothing that is an intrinsic quality
>> representing information, like redness or greenness.
>> There is no way to perform the necessary "computational binding" of
>> such intrinsic qualities. As you so aptly point out, discrete logic
>> gates can't do this kind of computational binding.
>> Both of these are required so one can be aware of 2 or more
>> intrinsic qualities at the same time, the very definition of consciousness
>> for me.
>> Even if there were some "function" from which redness emerged, you
>> could use the same neuro-substitution argument to "prove" that redness
>> can't be functional either.
>> Since you completely leave intrinsic qualities like redness out of your
>> way of thinking, you don't seem to be able to model this all-important
>> difference, which is so critical for me.
> I don’t know why you keep insisting that I don’t believe in “intrinsic
> redness”. I do believe that there is “intrinsic redness”, I just don’t
> think it can possibly be attached to a substance. I don’t think that even a
> miracle from God can attach intrinsic redness to a substance. Also, I
> actually do think that neurons essentially function like computer circuits,
> but I could be wrong about this, they might function fundamentally
> differently, they might even be a miracle from God: but even in that case,
> intrinsic redness cannot be attached to a substance!
Before we jump down this rat hole, for the gazillionth time, could you
indicate if I am getting anywhere close to describing the differences
between our two camps, so we can get this canonized, or do I just need to
state this unilaterally in our materialist camp?
So you do claim you believe in "intrinsic redness." That is a HUGE step
forward. Then you must also agree that consciousness is dependent on
whatever these intrinsic qualities you claim to believe in are like? That
if their quality changes (i.e. redness -> greenness), the resulting
consciousness is qualitatively different, though the system may remain
functionally equivalent? Do you also believe that intrinsic redness can be
computationally bound to intrinsic greenness, so we can be aware of both of
them, and their differences, at the same time, and that only if you do
this can the system be considered conscious?
I didn't mean to say you don't believe in "intrinsic redness", just that
you are leaving it out of your neural substitution argument.
I continually ask you to specify some way that something COULD (even if
it is a functional way, or even a miracle performed by God) be
responsible for this "intrinsic redness" that you claim you do believe
in, and to describe some way for 2 or more of these intrinsic qualities
to be computationally bound, achieving what I define consciousness to
be: two or more computationally bound elemental intrinsic qualities like
redness and greenness.
These two things are what I'm trying to point out you leave out of your
substitution argument. The discrete logic circuits you believe neurons
function like simply can't, on their own, do the computational binding
required for 2 or more elemental qualities like redness and greenness to
be computationally bound. Notice that I am making a falsifiable claim
here. IF you can provide any example where discrete logic circuitry can
do computational binding of two or more "intrinsic qualities like
redness and greenness," I predict the problem will be resolved. But you
never do this, you just continue to ignore this
computational binding issue, while you do your substitution, even though
you claim you do believe in intrinsic redness.