[ExI] GPT-4 on its inability to solve the symbol grounding problem
jasonresch at gmail.com
Mon Apr 10 23:33:59 UTC 2023
On Mon, Apr 10, 2023, 6:55 PM Gordon Swobe <gordon.swobe at gmail.com> wrote:
> On Mon, Apr 10, 2023 at 4:17 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> If you ask me, I think the atom of consciousness is the If-then-else
>> construct. The simplest binary discrimination of some statement or input
>> that can put a system in more than one distinct state.
> I think that computationalist view is merely a description in *your* mind
> that *you* assign to the physics/biology of the brain.
No, see below.
> The brain is not *intrinsically* a digital computer running software.
I agree!
> The computational model is merely a handy metaphor, one that became
> popular as people became enamoured of computers starting around 1950-1960
> and increasingly so after about 1980 as the computer revolution
> accelerated. You create that map in your mind but the map is not the
> territory.
I think this explains a lot. You have a mistaken impression of what
computationalism is if you think computationalism is the same thing as the
computer metaphor.
The computer metaphor is the idea that the brain works like a computer. I
agree with you that the brain works nothing like a computer. The brain is
not a device with logic gates, or instructions, or addressable memory. It's
not a Turing machine.
But that is not what computationalism says or implies. Computationalism is
not the computer metaphor; in fact, it is almost the opposite of it.
Computationalism stems from the idea that computers are flexible enough
that they can mimic the behavior of any finitely describable system.
Therefore, computationalists believe that if a computer were programmed to
mimic the brain's operation with the right level of detail and fidelity,
then this reproduction of the brain's operation would be conscious in the
same way as the original.
In summary, computationalism is *not* the idea that the human brain
operates like a computer, but rather, that a computer can be made to
operate *like a human brain*.
We know from the Church-Turing thesis that computers can replicate the
operations of any finitely describable system. All it takes to get from
this to computationalism is to reject the possibility of zombies or fading
qualia, for then the program that perfectly mimics the human brain will
necessarily be conscious in the same way as the original human brain whose
operations it mimics.
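The claim that a program can mimic any finitely describable system can be
illustrated with a toy sketch (not from the original thread; the state names
and inputs below are invented for the example). The "system" is fully
described by a finite transition table, and a short program reproduces its
behavior exactly; the table lookup is itself just the kind of binary
if-then-else discrimination mentioned earlier:

```python
# Toy illustration: a finitely described system (a two-state machine given
# entirely by its transition table) mimicked exactly by a short program.
# Purely a hypothetical sketch; "off"/"on" and the bit inputs are made up.

# Finite description of the system: (state, input) -> next state.
transition = {
    ("off", 1): "on",
    ("off", 0): "off",
    ("on", 1): "on",
    ("on", 0): "off",
}

def simulate(start, inputs):
    """Step the described system through a sequence of binary inputs,
    returning the full state history."""
    state = start
    history = [state]
    for bit in inputs:
        # The simplest binary discrimination on the input, realized here
        # as a lookup in the finite description.
        state = transition[(state, bit)]
        history.append(state)
    return history

print(simulate("off", [1, 1, 0, 1]))  # ['off', 'on', 'on', 'off', 'on']
```

Anything whose behavior admits such a finite description can, in the same
way, be stepped forward by a program; the computationalist wager is that a
sufficiently detailed table (or equations) for a brain would be no different
in kind.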
>>> Yes I can infer that a mouse probably also feels pain, but now I am
>>> beginning to tread outside of the first person and my thoughts start
>>> turning into conjectures.
>> We tread those waters when we suppose other humans are conscious. As I
>> asked before, how do you know you aren't the first person with a gene
>> mutation on earth that makes you conscious? Our choice is then between
>> solipsism or conjecturing that other minds besides our own are conscious.
> We've discussed this before and yes, it is a bit of leap to infer even
> that other humans are conscious. For all I know I am the first, though I
> highly doubt it. I consider it reasonable to infer consciousness in other
> people and in higher mammals and to dogs and cats and so on, as we all have
> very similar nervous systems and sense organs and lives and behaviors, but
> it becomes increasingly speculative as we look down the food chain and at
> non-mammals, to say nothing of these hunks of plastic and metal we call
> computers.
Ignoring present GPTs, do you believe it is possible in principle to build
an AI superintelligence? One able to reason independently to such a high
degree that it's able to invent new technologies and conceive of new
scientific discoveries entirely on its own? Or do you think artificial
intelligences will never be able to surpass humans in their creative and
intellectual capacities?