[ExI] Digital Consciousness

Stathis Papaioannou stathisp at gmail.com
Sat Apr 27 08:02:36 UTC 2013



On 27/04/2013, at 3:46 AM, Brent Allsop <brent.allsop at canonizer.com> wrote:

> 
> Hi Stathis,
> 
> <<<
> The argument does not assume any theory of consciousness. Of course,
> if the argument is valid and a theory predicts that computers cannot
> be conscious then that theory is wrong. What you have to do is show
> that either the premises of the argument are wrong or the reasoning is
> invalid.
> >>>
> 
> It’s frustrating that you can’t see any more than this from what I’m trying to say.  I have shown exactly how the argument is wrong and how the reasoning is invalid, in that the argument is completely missing a set of very real theoretical possibilities.

An argument has premises, or assumptions, and a conclusion. If you challenge the argument you can challenge the premises or you can challenge the logical process by which the conclusion is reached. If the conclusion follows logically from the premises then the argument is VALID, whether or not the premises are true. If the argument is valid and the premises are true then the argument is said to be SOUND.
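
To make the distinction concrete, here is a minimal sketch, purely my own illustration and not part of the argument itself, of checking validity mechanically for a propositional argument. The letters C and R are shorthand I am introducing here (C: the brain's observable behaviour is computable; R: a computer can reproduce it); an argument is valid if no assignment of truth values makes all the premises true and the conclusion false, and sound only if, in addition, the premises are actually true.

    from itertools import product

    # Illustrative sketch only: validity is a matter of form, not of fact.
    def is_valid(premises, conclusion, variables):
        """Valid iff no truth assignment makes every premise true and the conclusion false."""
        for values in product([True, False], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return False
        return True

    variables = ["C", "R"]
    premises = [lambda e: (not e["C"]) or e["R"],   # C -> R
                lambda e: e["C"]]                   # C
    conclusion = lambda e: e["R"]                   # R

    print(is_valid(premises, conclusion, variables))  # True: the form is valid
    # Soundness is the separate question of whether the premises are true in fact.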

It would help if you could follow this and specify exactly where you see the problem, but it seems that you're not challenging the validity of the argument, but the truth of the premises. And the only premise is that the externally observable behaviour of the brain is computable. So, you must believe that the observable behaviour of the brain is NOT computable. In other words, there is something about the chemistry in the brain that cannot be modelled by a computer, no matter how good the model and no matter how powerful the computer. Is that what you believe?
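
As a purely illustrative sketch of what that premise amounts to (the threshold model and the weights below are invented for the example, not a claim about real neurophysiology): if a replacement computes the same input-output mapping as the part it replaces, nothing downstream can distinguish them, and so the externally observable behaviour is unchanged.

    # Toy illustration of the substitution premise, with made-up numbers.
    def biological_neuron(inputs, weights, threshold=1.0):
        """Stand-in for the 'real' component: fires iff weighted input exceeds threshold."""
        return sum(w * x for w, x in zip(weights, inputs)) > threshold

    def silicon_replacement(inputs, weights, threshold=1.0):
        """Different 'hardware' (a different code path) computing the same function."""
        total = 0.0
        for w, x in zip(weights, inputs):
            total += w * x
        return total > threshold

    weights = [0.5, 0.8, -0.3]
    test_inputs = [[1, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 0]]
    # If the two agree on every input, the neighbouring neurons -- and the
    # subject's behaviour -- cannot tell them apart.
    assert all(biological_neuron(i, weights) == silicon_replacement(i, weights)
               for i in test_inputs)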

> You must admit that the real causal properties of glutamate are very different from the causal properties of real silicon and wires that are only configured in a way such that their very different properties can be interpreted as the real thing.  As everyone here has unanimously agreed, the map is very different from the territory.
> 
> 
> <<<
> If it is true that real glutamate is needed for redness then the
> redness qualia will fade and eventually disappear if the glutamate
> detecting system is replaced with alternative hardware. This is not in
> itself problematic: after all, visual qualia will fade and eventually
> disappear with progressive brain damage. But the problem arises here: if you
> accept that the alternative hardware is just as good at detecting the
> glutamate and stimulating the neighbouring neurons accordingly, but
> without the relevant qualia, then you have a situation where the
> qualia fade and may eventually disappear BUT THE SUBJECT BEHAVES
> NORMALLY AND NOTICES NO DIFFERENCE. And that is the problem.
> >>>
> 
> Again, you are completely missing the significance of what I’m trying to say here.  The behavior will be extremely different and problematic as you attempt the neural substitution.  It will not be anywhere near as simple as the argument claims it will be.  The prediction is that you will not be able to replace any single neuron, or even large sets of neurons that are the neural correlates of a redness quality, without also replacing significant portions of the rest of the system that is aware of what that redness experience is like.  The entire system must be configured in a very contrived way, so it can lie about having real phenomenal qualities, when it really just has large sets of ones and zeros that are being interpreted as such.
> 
> Secondly, it will be extremely difficult to emulate what happens when we introspect about what a redness quality is like, and how it is different from greenness, and when we reason about the fact that we are picking the glutamate, I mean the redness quality, because of its quality (and associated neural correlate properties), and not because some random medium is being interpreted as such, with no such qualities associated with it.  The only way you will be able to get a system to ‘behave’ the same is with some extremely complicated abstracted system capable of lying about having redness qualities, when it in fact has no redness qualities.  The design of the hardware that is finally able to ‘behave’ as if it is really experiencing a redness quality will be so extreme that it will be obvious to the hardware engineers that it isn’t really aware of real redness.  Only that an extremely complex mechanism is set up, so that it can lie about being aware of the qualitative nature of redness, and about knowing how it is different from the qualitative nature of greenness.
> 
> Also, another thing you are completely missing is the significance of being able to connect minds with real qualia together.  Obviously half of our conscious world is represented with stuff in the right hemisphere, and half of our conscious world is represented with stuff in the left.  Clearly the corpus callosum is able to merge these together, so that the right hemisphere knows that the redness quale in the other hemisphere is very different from the greenness quality it has in its own hemisphere.  It knows this difference more absolutely than it knows that the world beyond its senses exists.
> 
> In other words, the prediction is that we’ll be able to configure a merged system that can experience redness and greenness at the same time, and know absolutely how they are qualitatively different.  This is true regardless of whether you are a functionalist, a materialist, or any other type of theorist.
> 
> Once we can do this, we’ll be able both to connect multiple conscious worlds together, the way multiple hemispheres are connected together, and to significantly expand them all.  We’ll be able to endow them with hundreds of thousands of phenomenal qualities nobody has experienced before, using this to represent much more knowledge than our limited minds can comprehend now.  This is all that matters.  You will never be able to do any of that between a mind with real qualia and a mind with abstracted information that is only being interpreted as if it were the real thing.
> 
> When you have an expanded phenomenally conscious mind merged with your very limited conscious mind (the way your right and left hemispheres are connected), you will be able to have an ‘out of body’ experience, where your knowledge of yourself moves from one to the other (just like when regular people have an “out of body experience” and their knowledge of their spirit travels around from their right field of awareness to their left, and so on).  None of that will be possible between a real brain and an abstracted brain, because there is nothing in an abstracted brain other than a bunch of stuff that is configured in a way so that it can lie about what it is really like, and is only believable as long as nobody is looking at the actual hardware in any kind of an effing way.
> 
> Brent Allsop
> 
> 
> 
> 
> On Thu, Apr 25, 2013 at 5:51 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> On Fri, Apr 26, 2013 at 1:15 AM, Brent Allsop
>> <brent.allsop at canonizer.com> wrote:
>> >
>> > Hi Stathis,
>> >
>> > (And Kelly Anderson, tell me: given what we've covered, does the below
>> > make sense to you?)
>> >
>> > It is not a 'proof' that abstracted computers can be conscious.  It
>> > completely ignores many theoretically possible realities. For example,
>> > Material Property Dualism is one of many possible theories that proves
>> > this is not a 'proof'.
>> 
>> The argument does not assume any theory of consciousness. Of course,
>> if the argument is valid and a theory predicts that computers cannot
>> be conscious then that theory is wrong. What you have to do is show
>> that either the premises of the argument are wrong or the reasoning is
>> invalid.
>> 
>> > There is now an "idealized effing theory" world described in the Macro
>> > Material Property Dualism camp: http://canonizer.com/topic.asp/88/36 .
>> >
>> > In that theoretically possible world, it is the neurotransmitter glutamate
>> > that has the element redness quality.  In this theoretical world Glutamate
>> > causally behaves the way it does, because of its redness quality.  Yet this
>> > causal behavior reflects 'white' light, and this is why we think of it as
>> > having a 'whiteness' quality.  But of course, that is the classic example of
>> > the quale interpretation problem (see: http://canonizer.com/topic.asp/88/28
>> > ).  If we interpret the causal properties of something with a redness
>> > quality to it, and represent our knowledge of such with something that is
>> > qualitatively very different, we are missing and blind to what is important
>> > about the qualitative nature of glutamate, and why it behaves the way it
>> > does.
>> >
>> > So, let's just forget about the redness quality for a bit, and just talk
>> > about the real fundamental causal properties of glutamate in this
>> > theoretical idealized effing world.  In this world, the brain is
>> > essentially a high fidelity detector of real glutamate.  The only time the
>> > brain will say: "Yes, that is my redness quality" is when real glutamate,
>> > with its real causal properties, is detected.  Nothing else will produce
>> > that answer, except real fundamental glutamate.
>> >
>> > Of course, as described in Chalmers' paper, you can also replace the
>> > system that is detecting the real glutamate, with an abstracted system that
>> > has appropriate hardware translation levels for everything that is being
>> > interpreted as being real causal properties of real glutamate, so once you
>> > do this, this system, no matter what hardware it is running on, can be
>> > thought of, or interpreted as acting like it is detecting real glutamate.
>> > But, of course, that is precisely the problem, and how this idea is
>> > completely missing what is important.  And this theory is falsifiably
>> > predicting the alternate possibility he describes in that paper.  It is
>> > predicting you'll have some type of 'fading quale', at least until you
>> > replace all of what is required, to interpret something very different than
>> > real consciousness, as consciousness.
>> >
>> > It is certainly theoretically possible that the real causal properties of
>> > glutamate are behaving the way they do, because of its redness quality.
>> > And that anything else that is being interpreted as the same, can be
>> > interpreted as such - but that's all it will be.  An interpretation of
>> > something that is fundamentally, and possibly qualitatively, very different
>> > than real glutamate.
>> >
>> > This one theoretical possibility, thereby, proves Chalmers' idea isn't a
>> > proof that abstracted computers have these phenomenal qualities, only that
>> > they can be thought of, or interpreted as having them.
>> 
>> If it is true that real glutamate is needed for redness then the
>> redness qualia will fade and eventually disappear if the glutamate
>> detecting system is replaced with alternative hardware. This is not in
>> itself problematic: after all, visual qualia will fade and eventually
>> disappear with progressive brain damage. But the problem arises here: if you
>> accept that the alternative hardware is just as good at detecting the
>> glutamate and stimulating the neighbouring neurons accordingly, but
>> without the relevant qualia, then you have a situation where the
>> qualia fade and may eventually disappear BUT THE SUBJECT BEHAVES
>> NORMALLY AND NOTICES NO DIFFERENCE. And that is the problem.
>> 
>> 
>> --
>> Stathis Papaioannou
> 