[ExI] The symbol grounding problem in strong AI
Gordon Swobe
gts_2000 at yahoo.com
Fri Jan 1 17:13:23 UTC 2010
--- On Fri, 1/1/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
> You destroy a person and make a copy, and you have the
> "same" person again even if the original has been dead a million years.
> The physical object doesn't survive, but the mind does
Okay, but you'll agree, I assume, that the person's intentionality goes away completely for that million years? He went away to become food for worms (or to cryo, whatever). We can rightly consider anyone who, during that million-year period, claims his mind still exists to be a loon who believes in ghosts. Yes?
> > No, but you on the other hand should describe yourself
> > as such given that you believe we can get intentional
> > entities from running programs. The conventional strong AI
> > research program is based on that same false premise, where
> > software = mind and hardware = brain, and it won't work for
> > exactly that reason.
>
> I was referring to ordinary programs that aren't considered
> conscious. The program is not identical with the computer, since the
> same program can be instantiated on different hardware. If you want to
> call that dualism, you can.
But I think you would expect the same for a program that had somehow caused strong AI. That is the dualistic approach to strong AI that Searle takes issue with. For strong AI to work (as it does in humans, who have the same capability), we need to re-create the substance of it (not merely the form of it, as in a program), much as nature did and exactly as you did in your thought experiment above about recreating a copy of the brain.
> As I said, I agree with that paper. I just think he's wrong
> about computers and their potential for consciousness, which
> in that paper he only alludes to in passing.
I pointed you to that paper to show you his conception of consciousness/intentionality, and because, if I remember correctly, he also discusses the problem with dualism.
> > If you expect to find consciousness in or stemming
> > from a computer simulation of a brain then I would suppose
> > you might also expect to eat a photo of a ham sandwich off a
> > lunch menu and find that it tastes like the ham sandwich it
> > simulates. After all, on your logic the simulation of the
> > ham sandwich is implemented in the substrate of the menu.
> > But that piece of paper won't taste much like a ham
> > sandwich, now will it? And why not? Because, as I keep
> > trying to communicate to you, simulations of things do not
> > equal the things they simulate. Descriptions of things do
> > not equal the things they describe.
>
> You keep repeating this, but I have shown that a device
> which reproduces the behaviour of a biological brain will also
> reproduce the consciousness.
You didn't show it to me. If you showed me anything, you showed me that an artificial brain that behaves like a real brain but does not have the material substance of a real brain will result in a mindless cartoon character that merely acts like he has intentionality, i.e., weak AI.
You'll find it easier to see if you replace his entire brain with a formal programmatic description of it. Programs merely describe the real or supposed things that they're about. They're the depiction of food on a lunch menu, not the food itself.
-gts