[ExI] Possible seat of consciousness found

Brent Allsop brent.allsop at gmail.com
Mon Feb 17 22:59:09 UTC 2020


I don't think so.


You represent red things with knowledge that has your redness quality, like
robot number one in this "3 robots that are functionally the same but
qualitatively different"
<https://docs.google.com/document/d/1YnTMoU2LKER78bjVJsGkxMsSwvhpPBJZvp9e2oJX9GA/edit?usp=sharing>
paper.  Robot number 2 is engineered to be red/green qualia inverted: it
represents red things with knowledge that has your greenness quality.



Stathis also claimed that robot 3 has qualia, and that this “is the best
conjecture” since it is equally functional: it makes decisions about which
strawberries to pick identically to the first 2.


So, my question to anyone who thinks robot number 3 has qualia is: what is
robot number 3’s knowledge qualitatively like?  (After all, the word 'red'
isn't itself physically red or green, right?)




On Mon, Feb 17, 2020 at 1:11 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Tue, 18 Feb 2020 at 06:47, William Flynn Wallace via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Since no one has any idea,but many theories, what is wrong with the idea
>> that any creature who makes a decision, an intelligent act, is conscious?
>> Or capable of learning, if you want to put it that way.
>>
>
> That would seem to be the best conjecture.
>
>> --
> Stathis Papaioannou
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>