[ExI] all we are is just LLMs was: RE: Re: GPT-4 on its inability to solve the symbol grounding problem

Jason Resch jasonresch at gmail.com
Fri Apr 21 11:02:33 UTC 2023


On Fri, Apr 21, 2023, 12:47 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> *From:* Gordon Swobe <gordon.swobe at gmail.com>
> *…*
> >…As for the "club," there is no club, but most AI researchers are not
> wild dreamers prone to hyperbole. One would never know it from what goes
> on here on ExI, but my views on these matters are the mainstream.  -gts
>
> Hard to say, really.  Plenty of people have concluded ChatGPT has
> human-level or higher intelligence while stopping short of saying it is
> conscious.  This is what gave me the idea of separating those two
> parameters into perpendicular axes somehow, then seeing if we can find a
> way to measure them.
>
> We have ways of measuring human intelligence (or we think we do, in some
> specific areas) but I know of no tests for consciousness.  So now our job
> is to invent such tests.
>
> Ideas?
>
> OK, I have one idea, a bad one: ask it if it is conscious.  OK, did that;
> it claims it is not.  But that is inconclusive, for if it is conscious it
> might lie and claim that it is not.
>
> Wait, this whole notion might be going down a completely wrong, absurd
> road.  Does it make a lick of sense to separate intelligence from
> consciousness?  Billw or anyone else, does it make any sense to
> hypothetically dissociate those concepts, which cannot be separated in
> humans?
>

In principle (but not in practice) consciousness can be separated from
intelligence by recording every possible intelligent behavior as a response
to every possible situation. These recordings are sometimes referred to as
lookup tables or Blockhead minds, in honor of Ned Block, who used this
argument to suggest you could have functional equivalence without any
processing, understanding, or awareness, and, in theory, without
consciousness.
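
To make the idea concrete, here is a toy Python sketch of such a
lookup-table "mind" (the table contents are made up for illustration; a
real Blockhead table would need an entry for every possible history):

    # A toy Blockhead "mind": every possible conversation history is a
    # key, and the pre-recorded intelligent response is the value.
    blockhead_table = {
        ("Hello",): "Hi there, how can I help?",
        ("Hello", "Are you conscious?"): "That is a hard question...",
        # ...an entry for every possible history, which is the catch.
    }

    def blockhead_reply(history):
        # Pure retrieval: no processing, understanding, or awareness
        # happens at query time, only a dictionary lookup.
        return blockhead_table[tuple(history)]

    print(blockhead_reply(["Hello"]))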

But if you consider how to make such a table of all possible recordings of
actions by an intelligent mind, it requires putting the mind into every
possible situation at some time in the past. In this way, you aren't really
escaping consciousness, just interfacing with a consciousness that existed
long ago. Eliezer Yudkowsky likened talking to a Blockhead brain to having
a cell phone conversation with a distant intelligent (and conscious) mind.

There's a technique in software engineering called memoization, which uses
memory to store the results of functions so that when the same input is
seen again, the function need not be computed again. We might ask: would a
brain that used such techniques be less conscious, or differently
conscious? Would it over time devolve into a Blockhead zombie, or would it
retain its experience? Here I think it might depend on how low a level the
memoization is applied at.
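
As a minimal sketch (using the standard library's functools.lru_cache,
with Fibonacci as a stand-in for some expensive "deliberation"):

    import functools

    @functools.lru_cache(maxsize=None)
    def fib(n):
        # Each distinct n is computed at most once; repeat calls are
        # answered from the cache rather than recomputed.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))  # fast: every subproblem is "thought through" once

The more of the function's behavior that ends up served from the cache
rather than computed afresh, the closer it drifts toward the Blockhead
table.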

But all this is just to say that by trading off memory for processing, we
can in theory reduce the number of uniquely created instances of conscious
experience to just one. In practice, this isn't possible, as the number of
combinations of possible inputs greatly exceeds what could be recorded
using all the atoms of the universe, so this will always remain just a
thought experiment.
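
A back-of-the-envelope illustration (the alphabet size and input length
are arbitrary assumptions, chosen only to show the scale):

    # Entries needed to record a response to every possible short text
    # input, versus a common estimate of atoms in the observable universe.
    alphabet = 128               # roughly, the ASCII character set
    length = 100                 # a very short prompt
    table_entries = alphabet ** length  # 128**100, about 10**210
    atoms_in_universe = 10 ** 80        # common order-of-magnitude estimate
    print(table_entries > atoms_in_universe)  # True, by ~130 orders of magnitude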

Consciousness (which I define as awareness of information) is required to
implement certain functional capacities, including nearly any intelligent
behavior: all intelligence requires interaction with the environment, and
so, at a minimum, one must be conscious of at least some information from
the environment in order to act intelligently.

Jason
