[ExI] Possible seat of consciousness found

Brent Allsop brent.allsop at gmail.com
Thu Mar 5 04:16:47 UTC 2020


Hi Stathis,

With this, it still appears that you are just mapping what I am saying into
your model.  There is still no evidence that you understand my model.  In
your simpler model, only the external behavior (picking the strawberries)
matters.  But my model is more complex and includes additional behavior you
seem to ignore (are blind to?), such as responses to: “What is red knowledge
like for you?”  In other words, when what I say is parsed into your model, it
becomes a contradiction: I appear to be saying both that a change in qualia
results in a change in behavior AND that a change in qualia doesn’t result in
a change in behavior (as in the example of the three strawberry-picking robots).


But when what I say is correctly parsed into my more complex model, it means
something different, and there is no contradiction.


And even if the 'hard problem' is a separate issue, no such 'hard problem'
exists in my more complex model the way it does in your simpler model.
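
As an aside, the substitution principle in the quoted exchange below (that
components with identical input-to-output mappings are externally
interchangeable) can be put in a few lines of code.  This is only a minimal
sketch in Python with hypothetical names, not anyone's actual model:

    # Two components with very different internals but the same
    # input-to-output mapping (toy example, hypothetical names).
    def component_a(x: int) -> int:
        return x * 2            # one implementation strategy

    def component_b(x: int) -> int:
        total = 0               # a different internal strategy
        for _ in range(2):
            total += x
        return total

    # Since every possible input maps to the same output, nothing that
    # observes only inputs and outputs can tell the two apart.
    assert all(component_a(x) == component_b(x) for x in range(-1000, 1000))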





On Tue, Mar 3, 2020 at 6:30 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Wed, 4 Mar 2020 at 11:05, Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Hi Stathis,
>> So I still haven't convinced you I fully understand absolutely everything
>> about your beliefs, model, and thinking.
>> OK, let me try again, yet again.
>>
>> I know that all of today's computational systems are made of discrete
>> components, where each component has inputs that result in outputs.
>> I know that any of these individual discrete components can be replaced
>> with myriad different physical instantiations, and that as long as all
>> possible inputs map to the same outputs of that component, these sets of
>> discrete systems must function the same in all aspects, both internal and
>> external, no matter what physics is used to implement them.  If any
>> physical change to any of these discrete components, one that didn't
>> change the mapping of the inputs to the outputs, nevertheless changed the
>> qualia, that would render the idea of qualia absurd or contradictory; hence
>> there is a "hard problem".
>>
>> So, did I miss anything?
>>
>
> The last phrase, ‘hence there is a “hard problem”’, should not be
> included. The “hard problem” is quite a separate issue.
>
>> Now, can you describe to me any of the significant problems I see with any
>> of that?
>>
>
> I may have this wrong, and if so please forgive me, but you have said that
> the qualia affect behaviour, so the behaviour would change if the qualia
> change, and you have also said that behaviour could be the same even though
> the qualia are different, as in the example of the three strawberry-picking
> robots. I don’t think these are valid objections.
>
>> --
> Stathis Papaioannou