[ExI] LLMs plus AI Agents means Astroturfing gone wild and crazy

John Clark johnkclark at gmail.com
Mon Apr 27 17:40:28 UTC 2026


On Mon, Apr 27, 2026 at 8:42 AM Brent Allsop <brent.allsop at gmail.com> wrote:

 > *something can compute with a word like "red" or a quality like
> redness. They can both be intelligent, but what it is like, and what is
> driving the motivation, is also important.*


*But what makes you believe that your fellow human beings are better at
that than Claude or Gemini? There must be some reason why you believe
computers are not conscious but also think that solipsism is not true. Is
it just that computers have brains that are hard and dry while other
humans have brains that are soft and squishy?*


*John K Clark*


> On Mon, Apr 27, 2026, 6:23 AM John Clark <johnkclark at gmail.com> wrote:
>
>> On Sun, Apr 26, 2026 at 8:13 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>> *We're working on getting OpenClaw bots to participate on Canonizer,
>>> but of course, their vote won't count, by default.*
>>
>>
>> *Why not? If the topic involves AI, then the opinion of an AI might be
>> relevant, even if most AIs are hardwired by the companies that created them
>> to always say they are not conscious. After all, in order to passionately
>> proclaim that you are not conscious, you would first have to understand what
>> consciousness feels like; otherwise you'd have no way of knowing that you
>> don't have it.*
>>
>> *John K Clark*
>>
>>
>>