[ExI] A paper that actually does solve the problem of consciousness

gts gts_2000 at yahoo.com
Sun Nov 16 01:11:35 UTC 2008


> ... and if we develop the ability to map and eff such
> to each other's minds (as in oh THAT is what red is like
> for you), it will prove this theory correct, and at least
> some of your theories' predictions wrong.

Let us say for the sake of argument that you someday develop effing technology of the sort you suppose. You place an effing helmet on your head (or whatever) and I do the same. The machine connects our brains somehow such that we can share qualia. Our hope is that I will see the same red as you when you see red, such that I will say, "oh THAT is what red is like for Brent."

We turn the machine on and you peer at something red. I then suddenly see something unexpected in my field of vision.

The experiment looks interesting at this point. But I must stop and ask myself: how do I really know that my experience corresponds to Brent's? Instead of saying, "Oh so THAT is what red is like for Brent," why should I not say instead, "Oh so THAT is what it's like for me to be connected to Brent via some machine while he looks at something that looks red to him"?

In other words, no matter how sophisticated your proposed effing technology, and no matter how convincing its underlying theory, it seems to me that I must take an unjustified leap of faith before I can accept that I have really experienced your qualia.
