[ExI] LLMs plus AI Agents means Astroturfing gone wild and crazy

Jason Resch jasonresch at gmail.com
Tue Apr 28 02:58:12 UTC 2026


I read the paper a while back. These were my notes / objections from
reading it:

From the paper:
"A constituted mental state is one whose semantic reality is physically
made of, and fundamentally un-abstractable from, the specific thermodynamic
and metabolic dynamics of the experiencing organism."

My response:
To me this is a regurgitation of Searle's biological naturalism. It
doesn't say why only a metabolically active thing can serve as a
"mapmaker," nor why a robot with a computer draining its battery is not a
thermodynamic system or metabolically active. Does food have to be
digested to charge the robot's battery, or else it is a zombie?


From the paper:
"However, a silicon replacement that perfectly mimics only the electrical
firing profile preserves nothing but an extrinsic computational map, one
defined entirely by an external mapmaker's chosen abstractions. It
systematically obliterates the intrinsic thermodynamic territory required
for life, substituting a constitutive physical reality with a causally
inert, syntactic simulation. The qualia do not mysteriously “fade”; the
foundational metabolic substrate required to instantiate them is simply
removed."

My response:
He is dodging Chalmers's fading-qualia argument rather than answering it.
If the qualia don't fade, what does he think happens? Is he advocating
suddenly disappearing qualia the moment a single neuron is substituted?

Searle at least answered Chalmers head-on. Searle thought your conscious
states would disconnect from the physical states your brain was realizing
(revealing a kind of assumed dualism in Searle's thinking).

From the paper:
"The physical state of the system alone determines its evolution. The
semantic content of the symbol plays no causal role, since the machine
would perform the same physical operations even if the symbol referred
to nothing at all."

My response:
This constraint would make self-contained brain states, such as dreams,
impossible, for then the brain becomes exactly the kind of self-evolving
system devoid of external references that he claims cannot be conscious.

From the paper:
"The same principle applies to embodied AI systems. Sensors and actuators
allow the system to interact with the physical world, but they do not
automatically transform symbolic representations into intrinsic,
experienced semantics. The system may build increasingly detailed maps of
its environment, but interacting with the territory does not by itself turn
the map into the territory of experience."

My response:
A rather hand-wavy denial that doesn't actually explain or argue anything.

From the paper:
"It operates only on symbols (e.g., floating point numbers manipulated by
matrix multiplications) that have been discretized and alphabetized for
computation by an external mapmaker."

My response:
He says this makes consciousness impossible, yet it is a perfect
description of the brain's visual cortex, which operates on discretized
neural spikes computed at the retina and sent down the wire of the optic
nerve.

From the paper:
"To fully understand the difference between the embodied robot running an
algorithm on a chip and the biological mapmaker, we need to remember that
for the latter, subjective experience is a given, not because of abstract
information processing, but because of a specific, metabolically
constituted physical reality."

My response:
This is just pure carbon chauvinism.


In summary, the paper is just another version of Searle's biological
naturalism: it holds that only metabolically active living cells can have
"experienced semantics," while non-living substrates are reduced to merely
manipulating "symbolic representations" and hence are non-conscious
zombies. And so it fails to the same 27 original objections leveled
against Searle in 1980.


Jason


On Mon, Apr 27, 2026, 3:56 PM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Mon, 27 Apr 2026 at 18:42, John Clark via extropy-chat <
> extropy-chat at lists.extropy.org wrote:
> > But what makes you believe that your fellow human beings are better at
> that than Claude or Gemini? There must be some reason why you believe
> computers are not conscious but also think that solipsism is not true. Is
> it just that computers have brains that are soft and squishy while other
> humans have brains that are hard and dry?
> >
> > John K Clark
> > _______________________________________________
>
>
> A senior staff scientist at Google’s artificial intelligence laboratory
> DeepMind, Alexander Lerchner, argues *in a new paper*
> <https://deepmind.google/research/publications/231971/?ref=404media.co>
> that no AI or other computational system will ever become conscious.
> "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate
> Consciousness".
>
> <
> https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/
> >
> Quote:
> Lerchner’s paper argues that AGI without sentience is possible, saying
> that “the development of highly capable Artificial General Intelligence
> (AGI) does not inherently lead to the creation of a novel moral patient,
> but rather to the refinement of a highly sophisticated, non-sentient tool.”
> --------------------------------
>
> Other cognitive scientists agree with his conclusions but are rather upset
> that he hasn't cited any of their decades of research papers.  :)
> BillK
>
>
>