[ExI] ai emotions

Stuart LaForge avant at sollegro.com
Fri Jul 12 03:24:24 UTC 2019


Quoting Brent Allsop:

> A few questions.  I'm assuming that in your model, all this: "Strawberry ->
> Red light -> Eye ->" are identical in both cases.

Yes.

> But what do you mean by
> this?
>
> "red + your brain = redness. Glutamate exists
> with or without brains, but redness does not."
> I'm assuming that both of these are *different* in your model in the non-
> inverted and inverted set: "Brain -> Redness"?

Yes, that particular expression would be different for somebody with  
inverted qualia such that red + their brain = greenness.

>
> You've indicated that the downstream "redness" does not exist without the
> upstream "brain"?

Yes.

> If there is one pixel on the surface of the strawberry that is switching
> between red and green, what is the physical change in the physics of the
> "brain" in your model?

It should not change that much. In fact, you might not even notice it  
unless you were really up close and looking for it. For example, if you  
look closely at the flesh tones in portraits painted by classically  
trained artists, you can see small regions of red, green, blue, and  
other seemingly unrelated colors making up what appears to be a single  
homogeneous skin tone under various conditions of simulated light and  
shadow.
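
As a loose illustration of that blending (the RGB patch values below  
are made up for the example, not taken from any particular painting,  
and a simple linear average is only a crude stand-in for how the eye  
mixes small regions it can no longer resolve individually):

# Illustrative sketch: a few quite different color patches
# average out to a single skin-like composite tone.
patches = [
    (200, 120, 110),   # reddish patch
    (150, 170, 120),   # greenish patch
    (170, 150, 190),   # bluish patch
    (230, 190, 160),   # warm highlight
]

blended = tuple(round(sum(channel) / len(patches))
                for channel in zip(*patches))
print("Blended tone (R, G, B):", blended)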

>  And is the difference between "Redness" and
> "Greenness" physically or objectively detectable, without cheating by
> observing anything upstream from your "Redness" and "Greenness"?

No, I don't think so. Your question is a little like asking whether it  
is possible to crack a code without any access to cleartext or the  
cypher key. And the answer is: no, not in the lifetime of the universe  
for all but the simplest of cyphers.
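
To put a rough number on "not in the lifetime of the universe", here  
is a back-of-the-envelope sketch; the 128-bit key size and the 10^12  
keys-per-second search rate are illustrative assumptions, not figures  
anyone quoted above:

# Back-of-the-envelope: exhaustively searching a 128-bit keyspace
# at an assumed rate of 10^12 keys per second.
AGE_OF_UNIVERSE_YEARS = 1.38e10      # ~13.8 billion years
SECONDS_PER_YEAR = 3.156e7

keyspace = 2 ** 128                  # number of possible keys
rate = 1e12                          # assumed keys tested per second

years_needed = keyspace / rate / SECONDS_PER_YEAR
print("Years to exhaust the keyspace: %.2e" % years_needed)
print("Multiples of the universe's age: %.2e"
      % (years_needed / AGE_OF_UNIVERSE_YEARS))

That works out to roughly 10^19 years, hundreds of millions of times  
the current age of the universe.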

Stuart LaForge
