[ExI] LLMs plus AI Agents means Astroturfing gone wild and crazy
John Clark
johnkclark at gmail.com
Tue Apr 28 11:14:44 UTC 2026
On Mon, Apr 27, 2026 at 10:59 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> I read the paper a while back.

Better you than me.
> To me it seems to be a regurgitation of Searle's biological naturalism.

And Searle's biological naturalism, including his Chinese Room argument,
is not worth a bucket of warm spit.
> It doesn't say why only a metabolically active thing can serve as a
> "mapmaker" nor why a robot with a battery and computer

As I've said, I have not read the paper and do not intend to, but I've read
enough papers like it to know that the implicit difference is that a human
brain is soft and squishy while a computer brain is hard and dry; although
of course they won't put it that way, philosophers will always dress up
tautologies, clichés and pure silliness in convoluted language in an
attempt to make them sound profound.
> pure carbon chauvinism.

Apparently carbon is supposed to have some magical quality that silicon
lacks.
> It says only metabolically active living cells can have "experienced
> semantics" while non-living substrates are reduced to merely dealing with
> "symbolic representations" and hence are non-conscious zombies. And so it
> fails for the same 27 original arguments leveled against Searle in 1980.

And so my two predictions about that paper have been proven correct, just
as I knew they would be.
*John K Clark*
> Jason
> On Mon, Apr 27, 2026, 3:56 PM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Mon, 27 Apr 2026 at 18:42, John Clark via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> > But what makes you believe that your fellow human beings are better at
>> that than Claude or Gemini? There must be some reason why you believe
>> computers are not conscious but also think that solipsism is not true. Is
>> it just that computers have brains that are hard and dry while other
>> humans have brains that are soft and squishy?
>> >
>> > John K Clark
>>
>>
>> A senior staff scientist at Google’s artificial intelligence laboratory
>> DeepMind, Alexander Lerchner, argues in a new paper, "The Abstraction
>> Fallacy: Why AI Can Simulate But Not Instantiate Consciousness"
>> <https://deepmind.google/research/publications/231971/?ref=404media.co>,
>> that no AI or other computational system will ever become conscious.
>>
>> <https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/>
>> Quote:
>> Lerchner’s paper argues that AGI without sentience is possible, saying
>> that “the development of highly capable Artificial General Intelligence
>> (AGI) does not inherently lead to the creation of a novel moral patient,
>> but rather to the refinement of a highly sophisticated, non-sentient tool.”
>> --------------------------------
>>
>> Other cognitive scientists agree with his conclusions but are rather
>> upset that he hasn't cited any of their decades of research papers. :)
>> BillK