[ExI] Why stop at glutamate?
brent.allsop at gmail.com
Fri Apr 14 12:25:35 UTC 2023
Even though I figured it was too good to be true (Get a Lucid Air for
$29,999!), I got very excited in anticipation of seeing you try:
"Okay I attempt to show that below."
But then, there was nothing. Functionalists, it seems to me, never
demonstrate any understanding of what a redness quality is. It is almost
impossible to get them to talk about it. I'm finally forcing you to give
something like the following, but you seem very reluctant to do even this.
Examples of mathematical properties:
- evenness (in reference to numbers)
- squareness (in reference to triangles)
- dimensionality (in reference to spaces)
I'm right there with you on these 3.
- charge (in reference to charged particles in our universe)
And even with this one, you can provide an abstract description of anything
like it, and you can then fully simulate all of it with any Turing-complete
abstract system.
- redness (in reference to visual experiences in normally sighted humans in
our universe)
In my opinion, you seem to not see the problem with this one. In order for
this to be true, you'd need to be able to communicate to someone who has
never experienced redness before, using only text, what redness is like.
Giovanni seems to think you could do this. Do you think this also? It is
just blatantly, obviously, even logically (platonically?) wrong. Even chat
bots can understand this. Chatbots know the word "redness" can't be
grounded unless someone can experience subjective redness.
All you seem to be saying, to me, is that all 3 of these systems can tell
you the strawberry is red.
But when you ask them "What is redness like for you?" they must give you
very different answers, even if they are going to be mathematically correct.
They are substrate dependent on the qualities of their knowledge. If you
change the first one to the second one, they are made of different
subjective (and necessarily objective) properties, even though they can
function the same, as far as telling you the strawberry is red.
You've told me how you can get any system to tell you the strawberry is
red, but you haven't told me how you can get the first one to substitute
one of its pixels of redness with anything but P1, and still say that
pixel, which is actually, objectively, made of something different than P1,
is subjectively the same as all the other P1 pixels making up its
conscious knowledge of the strawberry.
I have a question for functionalists. Do you guys agree with Steven Lehar
<https://canonizer.com/topic/81-Mind-Experts/4-Steven-Lehar>'s (current top
peer-ranked expert at Canonizer in this field) pointing out that our
conscious knowledge is a bubble world in our head,
composed of pixels of something that has subjective (and, I believe,
necessarily objectively observable) qualities or properties? Giovanni's
idea of conscious knowledge seems to not be anything explicit like this;
he seems to think it is all just complex recursive algorithms.
On Fri, Apr 14, 2023 at 3:19 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Thu, Apr 13, 2023, 10:52 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> On Thu, Apr 13, 2023 at 8:20 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>> On Thu, Apr 13, 2023, 10:04 PM Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>> Hi Jason,
>>>> On Thu, Apr 13, 2023 at 5:56 PM Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>> On Thu, Apr 13, 2023 at 4:17 PM Brent Allsop via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>> Hi Gadersd,
>>>>>> On Thu, Apr 13, 2023 at 2:35 PM Gadersd via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>> Brent, where is the glutamate quality of electrons, neutrons, and
>>>>>>> protons? Which electron has the redness quality?
>>>>>> Electrons behave the way they do, because they have a quality you
>>>>>> have never experienced before. (Note: I'm a pan qualityist. a
>>>>>> panpsychist minus the pan computational binding ;)
>>>>>>> There exists higher order structure that doesn’t exist in the
>>>>>>> component parts, hence the phrase “more than the sum of the parts."
>>>>>> I guess that would be a hypothetical possibility. I try to
>>>>>> always point out that some day, someone will experience redness without
>>>>>> glutamate, falsifying the prediction that it is glutamate that behaves the
>>>>>> way it does, because of its redness quality. Once glutamate is falsified,
>>>>>> they will try something else, possibly including something that is the sum
>>>>>> of some configuration of parts, or ANYTHING. The reason we use glutamate
>>>>>> is because it is so easily falsifiable. Falsifiability is what we are
>>>>>> missing with the qualitative nature of consciousness, and ease of
>>>>>> falsifiability is the reason we are using glutamate as an easy stand-in for
>>>>>> whatever redness turns out to be.
>>>>>> I just wish people with these kinds of "qualities arise from
>>>>>> <whatever>" theories would explicitly acknowledge (instead of ignoring)
>>>>>> what everyone knows absolutely: that color qualities are real. Then
>>>>>> they could provide some example of some kind of "function," or some
>>>>>> configuration of parts, the sum total of which could be pointed to with
>>>>>> "THAT is redness," at least in a way that would pass the laugh test.
>>>>> You ask of functionalism more than you have achieved for your own
>>>>> theory: you have yet to name what molecule is responsible for redness which
>>>>> won't be falsified.
>>>>> The function for redness is a function that is found in the neural
>>>>> network of a normally sighted person's brain (likely within but perhaps not
>>>>> limited to the https://en.wikipedia.org/wiki/Colour_centre). It is
>>>>> likely not anything rudimentary like a square root function, it will be a
>>>>> function able to (at a minimum) discriminate among millions of possible
>>>>> color values.
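The minimum requirement Jason states above, a function able to discriminate among millions of possible color values, can be illustrated with a deliberately trivial Python sketch. The function name and encoding here are invented for the example, and nothing about it is claimed to have a redness quality; it only shows what "discriminate among millions of values" means functionally:

```python
def discriminate_color(r, g, b):
    # Map each 8-bit (r, g, b) triple to a unique internal state.
    # A function this simple already distinguishes among
    # 256**3 (about 16.7 million) possible color values.
    return (r << 16) | (g << 8) | b

# Two nearly identical colors still map to distinguishable states.
assert discriminate_color(255, 0, 0) != discriminate_color(254, 0, 0)
```

Of course, a function in a brain's neural network would be vastly more complex; the point is only that discrimination among millions of values is a cheap functional property.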
>>>> Not sure what you mean by "won't be falsified", as I have tried to say
>>>> I fully expect the prediction that it is glutamate that has the
>>>> redness quality to be falsified.
>>>> But I expect that something else will be reliably demonstrated to always
>>>> have the same redness quality, and when it does, just substitute glutamate
>>>> for whatever that is.
>>> Yes, that thing, whatever it is, is still undefined/unknown to your
>>> theory. Why then do you require functionalists to give an answer when your
>>> theory, at present, doesn't have one?
>> Oh OK. Yes. I agree with this. I'm just trying to say that physical
>> stuff has color qualities. For example, it is intuitive to us to think of
>> the strawberry as having a red quality, and because of that quality, it
>> reflects 700 nm light. I'm saying that is the right way to think about it,
>> it is just a different set of objectively observable properties, which is
>> the redness quality. Whereas, if someone is making the same claim about
>> some function, then give me any example of any function which would result
>> in someone having a redness experience, that isn't laughable.
> Okay I attempt to show that below.
>>>> And are you saying that physical stuff doesn't have color properties?
>>>> And that functions do?
>>> I believe the property of color is a mathematical property, not a
>>> physical one. Math subsumes all of physics. For any physical property you
>>> can think of, there is a mathematical object with that property. Functions,
>>> like mathematics, are sufficiently general that they can define any
>>> describable relation between any set of mathematical objects. And as I said
>>> before, properties are nothing other than relations. A function then, is a
>>> near universal tool to realize any imaginable/definable property: be they
>>> physical properties, mathematical properties, and yes, even color
>>> properties.
>>>> If a function can discriminate among millions of possible color values,
>>>> it would achieve that by representing them with millions of distinguishable
>>>> physical properties, right?
>>> It hardly matters what they are, so long as they're distinguishable, and
>>> related to each other in the same ways colors are to each other.
>>>> i.e. the function would arise from, or be implemented on, the physical
>>>> properties, you seem to be saying that the physical properties would arise
>>>> from the function?
>>> Functional properties exist on a level that's separate from and
>>> independent of physical properties. Think of the properties of some
>>> function written in Python. The properties of that function are not
>>> physical properties, nor do the properties of that function depend on
>>> physical properties. So long as you had a Python interpreter there, you
>>> could run that Python code in any universe, even ones with alien physics.
>>> Physical properties never enter the picture.
>> OK, yeah. You're talking about logical (non-physical) platonic facts.
> We could call them that. I think "mathematical properties" is the most
> general term though, as they cover not just logical properties, but any
> conceivable physical ones too.
> Examples of mathematical properties:
> - evenness (in reference to numbers)
> - squareness (in reference to triangles)
> - dimensionality (in reference to spaces)
> - charge (in reference to charged particles in our universe)
> - redness (in reference to visual experiences in normally sighted humans
> in our universe)
> Mathematical objects and their properties can be as simple or complex as
> we need them to be. There is a mathematical object that is
> indistinguishable from our physical universe. It has all the same
> properties our physical universe has. If redness is a property of glutamate
> then the "mathematical glutamate" found in the mathematical object that's
> identical with our universe has the redness property too.
>> What I'm talking about is, you are doing a neural substitution, and you get
>> to that first pixel of subjective knowledge that has a redness property.
>> Let's even assume it is a particular complex neural pattern (call it P1),
>> not glutamate, which you can point to, and say: "THAT" is the subjective
>> redness quality of that pixel.
>> You seem to be arguing that consciousness would not be substrate
>> dependent on that P1 quality, and that you could substitute that with
>> glutamate, P29, or anything else, and it would still result in a redness
>> experience.
> Functionalism in the most basic terms, is the idea that minds are defined
> by what the brain does, not by what it is. Think of this analogy for a car:
> let's say we replace the brake fluid in a car with an alternate liquid that
> functions similarly enough that the brakes work as well before as after the
> replacement. Since the brake fluid still serves its functional role, we can
> still call it a brake fluid even though it may be of an entirely different
> chemical composition. The composition of the parts, is not relevant so long
> as they preserve the relationships among all the parts. Overall behavior of
> the system remains unchanged.
> So your question of whether we can replace P1 with glutamate or P29
> depends on whether glutamate and P29 play the same role and have the same
> relations as P1 has. If not, they aren't valid candidates for substitution.
> That said, they might work if we replace more parts of the brain. For
> example, let's say we arrange a bunch of objects such that their position
> in a machine determines their relations to all the other pieces, so long as
> every object has the same mass. Then we can make this machine work by
> putting identically sized glass marbles throughout the machine. We could
> not then replace one marble with a lighter plastic bottle cap. However, if
> we strip out all the marbles and replace them all with plastic bottle caps,
> this will restore the relations within the machine and preserve its
> function.
>> How could any platonic or mathematical fact produce an experience with
>> a redness quality, in a way that you could replace it with P1, and the
>> person would still say it was the same quality as P1, even though it wasn't
>> P1?
> By replacing P1 with another function, call it "P1a," which, though it
> has a different internal implementation, "hides" those details by virtue
> of those fine-grained details not being relevant at the level where P1
> relates to other parts of the system.
> For example, let's say we're dealing with NAND memory storing a bit, which
> it does so by holding some charge of electrons together. From a functional
> point of view, it makes no difference if the electrons are spin up or spin
> down in the x axis. Thus we might substitute a spin up electron with a spin
> down one, and the memory state of the NAND chip will remain unchanged. The
> system doesn't care about the spin state of the electrons, only how many
> electrons are there.
> From a functional/logical point of view you can consider different
> possible sorting algorithms. Quick sort and Merge sort are two of the most
> commonly used sorting algorithms (or sorting functions). They have similar
> performance properties and perform an identical task, but they use
> very different internal processes to accomplish their sorting. If these
> internal properties are not important to how other parts of the system use
> the sort function, then quick sort and merge sort are examples of two
> different, but interchangeable functions.
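The interchangeability described above can be made concrete with a short Python sketch (illustrative code, not from the original message): two sorting functions with entirely different internal processes that are indistinguishable by their input/output behavior.

```python
def quick_sort(xs):
    # Quicksort: partition around a pivot, recurse on each side.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot])
            + [pivot]
            + quick_sort([x for x in rest if x >= pivot]))

def merge_sort(xs):
    # Mergesort: split in half, sort each half, merge the results.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Internally very different; externally identical in behavior.
data = [5, 3, 8, 1, 9, 2]
assert quick_sort(data) == merge_sort(data)
```

Any caller that only uses the sorted output cannot tell which of the two was substituted in; that is the sense in which the two functions are interchangeable.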
> Whether or not the fine-grained details of some internal function are
> relevant to a particular state of consciousness is, as I mentioned before,
> unknowable, as no program can determine its own code or implementation
> based on how it perceives itself. This follows from the Church-Turing
> thesis. A clear example is with virtual machines: an Atari game, from
> its point of view, has no ability to tell if it's running on an original
> Atari system or some emulator on a modern PC.
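The virtual-machine point can be sketched minimally in Python (a hypothetical illustration; the function names are invented): a program produces the same result whether called directly or through an extra interpreting layer, so nothing in its output reveals which way it ran.

```python
def add(a, b):
    # The "program": it only sees its inputs and produces an output.
    return a + b

def interpret(fn, *args):
    # The "emulator": an extra layer of indirection that could log calls,
    # translate instructions, or run on entirely different hardware.
    # The program's result gives it no way to detect this layer.
    return fn(*args)

native = add(2, 3)               # run "directly"
emulated = interpret(add, 2, 3)  # run under the interpreting layer
assert native == emulated        # indistinguishable from the inside
```

A real emulator does vastly more than this pass-through, but the structural point is the same: as long as the layer preserves the program's input/output relations, the program has no vantage point from which to observe it.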
> Thus it will always require some degree of faith whether you could take a
> particular functional substitution of some part (or whole) of your brain
> and remain unchanged subjectively. The finer-grained the details you
> include, the more likely it is to succeed, but we don't necessarily know
> how deep to go, or when it becomes safe to abstract or ignore details
> below a certain level.
> extropy-chat mailing list
> extropy-chat at lists.extropy.org