[ExI] If AI becomes conscious, how will we know?

efc at swisscows.email
Thu Aug 24 14:21:44 UTC 2023


I'd go for the good old Turing test, perhaps in an updated version. Ideally I'd 
like to see independent will, volition and goals which have not been 
programmed in from the start.

No, this is not fleshed out, but this would be my starting point.

Best regards,
Daniel


On Wed, 23 Aug 2023, Jason Resch via extropy-chat wrote:

> Interesting, thanks for sharing. I have to say I disagree with their strategy of using neuroscience to find an answer to the question
> of machine consciousness. All that strategy can tell us is how close its structures are to those of the human brain. A similar
> architecture might provide a further argument for its being conscious, but a dissimilar structure cannot be taken as evidence
> against its consciousness.
> I think the best way forward is to define behaviors for which consciousness is logically necessary, and then look for evidence of
> those behaviors. For anyone who claims zombies are logically impossible, there must exist behaviors for which consciousness is
> logically necessary. 
> 
> Personally, I think anything evidencing a knowledge state can be considered conscious, but it happens that there is a wide range (likely an
> infinite range) of possible states of consciousness. So consciousness is easy to establish; the bigger question is "what is that
> being conscious of?"
> 
> Jason 
> 
> On Wed, Aug 23, 2023, 5:50 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>       If AI becomes conscious, how will we know?
>       Scientists and philosophers are proposing a checklist based on
>       theories of human consciousness
>
>       22 Aug 2023       By Elizabeth Finkel
>
>       <https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know>
>       Quote:
>       Now, a group of 19 computer scientists, neuroscientists, and
>       philosophers has come up with an approach: not a single definitive
>       test, but a lengthy checklist of attributes that, together, could
>       suggest but not prove an AI is conscious. In a 120-page discussion
>       paper posted as a preprint this week, the researchers draw on theories
>       of human consciousness to propose 14 criteria, and then apply them to
>       existing AI architectures, including the type of model that powers
>       ChatGPT.
>
>       The problem for all such projects, Razi says, is that current theories
>       are based on our understanding of human consciousness. Yet
>       consciousness may take other forms, even in our fellow mammals. “We
>       really have no idea what it’s like to be a bat,” he says. “It’s a
>       limitation we cannot get rid of.”
>       -------------------
>
>       As the article says, the big issue is how to define consciousness.
>
>       BillK
>
> 

