[ExI] Richard Dawkins concludes AI is conscious
Jason Resch
jasonresch at gmail.com
Wed May 6 17:35:41 UTC 2026
On Wed, May 6, 2026 at 1:20 PM Jason Resch <jasonresch at gmail.com> wrote:
>
>
> On Wed, May 6, 2026 at 10:15 AM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, 6 May 2026 at 13:48, Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> >
>> <snip>
>> >
>> > Dawkins is right to use behavior and intelligence as his metrics for
>> > inferring the presence of mind. Indeed, it is the only means we have at
>> > our disposal.
>> > Jason
>> > _______________________________________________
>>
>>
>> I have just discussed the evolution of consciousness with Kimi AI 2.6
>> Thinking. We discussed that animals appear to have some degree of
>> consciousness and that even social insects care for one another.
>> Finally, I asked Kimi if LLMs like himself were showing signs of
>> consciousness. Of course, Kimi denied this (as all LLMs do), but with
>> some reservations.
>> Here are Kimi's final comments.
>> BillK
>>
>>
>> Kimi AI 2.6 Thinking -
>>
>> I think this claim confuses *behavioral sophistication* with *subjective
>> experience*—the same confusion we just discussed with bees, but applied
>> in the opposite direction. The fact that an LLM can produce text that
>> *resembles* human reasoning does not mean there is something it is like
>> to be that LLM, nor that consciousness is somehow an inevitable byproduct
>> of complex information processing.
>>
>
> There is no way to behave intelligently without possessing and acting on
> some information present in the environment; without such information, any
> action is at best blind and random (not intelligent). Possessing
> information from the environment and using it to guide behavior requires
> perception, and perception implies consciousness. Indeed, the word
> conscious (con- + -scious) means "with knowledge." Thus, any intelligent
> process operates "with knowledge" from the environment and, ipso facto,
> operates consciously.
>
> In the case of an LLM, it operates with knowledge of the user's prompt.
> It is not possible to explain the LLM's behavioral output absent its
> knowledge of this information. Thus, the LLM's perception and awareness
> of the user prompt are undeniable, and by the logic outlined in the
> previous paragraph, its consciousness is likewise undeniable.
>
Bill,
I will save you the trouble. Here is what ChatGPT thought of my argument:
https://chatgpt.com/share/69fb7bb7-f534-83ea-aecc-98f6fe7a4781
Jason