<div dir="auto">Interesting thanks for sharing. I have to say I disagree with their strategy of using neuroscience to find an answer to the question of machine consciousness. All that strategy can tell us is how close it's structures are to those of the human brain. A similar architecture might provide a further argument for their being consciousness, but a dissimilar structure cannot be taken as evidence against their consciousness.<div dir="auto"><br></div><div dir="auto">I think the best way forward is to define behaviors for which consciousness is logically necessary, and then look for evidence of those behaviors. For anyone who claims zombies are logically impossible, there must exist behaviors for which consciousnss is logically necessary. </div><div dir="auto"><br></div><div dir="auto">Personally I think anything evidencing a knowledge state can be considered conscious, but it happens there's a wide range (likely an infinite range) of possible states of consciousness). So consciousness is easy to establish, the bigger question is "what is that being conscious of?"<br><div dir="auto"><br></div><div dir="auto">Jason </div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 23, 2023, 5:50 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">If AI becomes conscious, how will we know?<br>
Scientists and philosophers are proposing a checklist based on
theories of human consciousness

22 Aug 2023  By Elizabeth Finkel

<https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know>
Quote:
Now, a group of 19 computer scientists, neuroscientists, and
philosophers has come up with an approach: not a single definitive
test, but a lengthy checklist of attributes that, together, could
suggest but not prove an AI is conscious. In a 120-page discussion
paper posted as a preprint this week, the researchers draw on theories
of human consciousness to propose 14 criteria, and then apply them to
existing AI architectures, including the type of model that powers
ChatGPT.

The problem for all such projects, Razi says, is that current theories
are based on our understanding of human consciousness. Yet
consciousness may take other forms, even in our fellow mammals. “We
really have no idea what it’s like to be a bat,” he says. “It’s a
limitation we cannot get rid of.”
-------------------

As the article says, the big issue is how to define consciousness.

BillK
