[extropy-chat] Creating software with qualia
Hal Finney
hal at finney.org
Thu Dec 1 00:11:56 UTC 2005
Thanks for the comments on my posting. A few replies:
David McFadzean writes:
> Question for Hal, in your thought experiment do the robots necessarily
> have qualia (in other words, are you saying philosophical zombies are
> logically impossible), or are you stipulating that they do have qualia
> as part of the thought experiment, or do we merely assume that they
> likely do have qualia because of how they behave?
My point is to show a model for how agents that have various
problem-solving capabilities would naturally come to speak and act as
if there were something mysterious about their consciousness and qualia.
But actually there is nothing mysterious going on. Therefore there is no
need to invoke mysterious physics in order to explain why problem-solving
entities perceive a mysterious gap between physicality and mentality.
In terms of your questions, probably the third choice would be closest.
As I put it, an alien being would be as justified in concluding that the
robots have qualia as that humans have qualia. And in fact we would
expect all intelligent entities that evolve (or are designed with)
human-like planning capabilities to behave as if they have qualia, and
express puzzlement at the many paradoxes this raises, just as we have
been doing here on this list.
As far as zombies go, I know that philosophers distinguish between logical
impossibility, metaphysical impossibility, and many other flavors as well,
until it makes my head hurt. I could not work out which type of
impossibility it is that zombies are supposed to have.
Jef Allbright writes:
> One suggestion: I kept stumbling over your use of "computational" to
> describe the more subjective model. It seems to me that the other
> model was just as computational, but within the domain of physics.
> Might it be more useful to refer to them as the "physical" model and
> the "intentional" model?
Maybe so. I was trying to avoid loaded language about mental states
that would suggest that I was trying to smuggle in my conclusion.
I tried to stay as neutral as possible (although I slipped a few times
and spoke of the robot "imagining" things rather than modeling them).
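To make the two models a bit more concrete, here is a rough Python sketch
of the sort of agent I have in mind. It is only an illustration; the class
and field names (PhysicalModel, IntentionalModel, and so on) are invented
for this example and are not meant as a serious design.

class PhysicalModel:
    """Third-person description of the world, including the robot's own body."""
    def __init__(self):
        self.objects = {}    # name -> properties, e.g. {"arm_1": {"angle": 90}}

class IntentionalModel:
    """First-person bookkeeping of the robot's own percepts and goals."""
    def __init__(self):
        self.percepts = []   # e.g. ["a red patch at the left of the visual field"]
        self.goals = []      # e.g. ["stack block_1 on block_2"]

class Robot:
    def __init__(self):
        self.physical = PhysicalModel()
        self.intentional = IntentionalModel()

    def sense(self, name, properties, appearance):
        # One sensory event updates both models, but in different vocabularies:
        # object-and-property talk on one side, how-it-seems talk on the other.
        self.physical.objects[name] = properties
        self.intentional.percepts.append(appearance)

robot = Robot()
robot.sense("block_1", {"wavelength_nm": 650, "position": (2, 3)},
            "a red block just left of center")

The only point of the sketch is that the same sensory event gets recorded
twice, in two vocabularies that do not translate cleanly into one another.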
Brent Allsop writes:
> No, you're categorically talking about something completely different here
> that has nothing to do with qualia.
>
> When you talk about the knowledge these robots have - whether it is of the
> "physical" or the "mental" - it is still represented by abstract information,
> fundamentally based on only arbitrary causal representations.
>
> We do very similar kinds of thinking, with similarly different kinds of
> models, to what you describe these software robots doing. The critical difference is -
> all of our conscious knowledge or models are represented with qualia -
> rather than abstract information represented by arbitrary causal properties.
Yes, the robots are purely mechanical and work with abstract information
that represents the outside world. The point is that despite that,
the robots can speak and act as if they are puzzled by some of the same
paradoxes of consciousness vs physicality that we have been discussing
here.
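To continue the toy sketch from above (again, the function and field names
are invented purely for illustration), here is how a report generator that
works only on those two models as plain data can end up saying things that
sound like our puzzlement:

def report_on_experience(physical_objects, percepts, percept):
    # The percept is present in the first-person (intentional) record...
    if percept not in percepts:
        return "No such percept recorded."
    # ...but a scan of the third-person (physical) self-description turns up
    # only parts and their states; nothing there is listed *as* the percept.
    matches = [name for name, props in physical_objects.items()
               if props.get("registers_percept") == percept]
    if not matches:
        # A purely mechanical lookup failure, yet the report it produces
        # sounds just like our own puzzlement about qualia:
        return ("My physical description seems complete, yet I cannot find "
                "'%s' anywhere in it." % percept)
    return "That percept is realized by %s." % matches[0]

# The physical model simply does not index its parts by percepts, so:
print(report_on_experience(
    {"camera_3": {"state": "active"}, "cpu_0": {"state": "busy"}},
    ["a red patch at the left of the visual field"],
    "a red patch at the left of the visual field"))

There is no extra ingredient in that function; the "gap" it reports is
nothing more than a failed lookup between two vocabularies.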
Suppose you met a race of aliens. You discuss consciousness and qualia
with them and from what they say, they experience these pretty much
the same as humans. Which would you predict: that they are like the
robots, fully physical and natural, acting and talking as if they have
consciousness when they actually don't? Or that they are like you think
humans are, with some extra physics or something going on, so that when
they speak of having consciousness, it is really true?
What evolutionary forces would act on the aliens to make one solution more
likely than the other? If purely physical/mechanical aliens (like the
robots) are able to act conscious as well as ones with the extra "effing"
ability, why would evolution actually select for and create that ability?
Hal