[ExI] LLMs cannot be conscious

Giovanni Santostasi gsantostasi at gmail.com
Tue Mar 21 05:53:59 UTC 2023


Hi Brent,
I have pointed this out to you several times, but for some reason you are
ignoring my correction and repeating the same thing over and over. You
didn't convince LaMDA of anything. The LaMDA you used is NOT LaMDA. It is a
very low-grade chatbot that was trained to sound like the LaMDA in the
news. The public has no access to LaMDA (maybe a few developers here and
there do), and in particular no one has access to the meta version Blake
Lemoine had access to. It is an important distinction.
Giovanni

On Mon, Mar 20, 2023 at 10:44 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Hi Jason,
>
> Most of your "How?" and "I don't follow." questions would be answered if
> you'd read the "Physicists don't Understand Qualia
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>"
> paper.
>
> On Mon, Mar 20, 2023 at 10:26 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Mon, Mar 20, 2023, 11:13 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Hi Jason,
>>>
>>> On Mon, Mar 20, 2023 at 6:25 PM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> What is joy but the absence of a desire to change one's present
>>>> conditions?
>>>>
>>>
>>> Do you desire better definitions?  I define joy to be a physical
>>> quality, like redness: a physically real emotion and attraction.
>>>
>>
>> To me that is more of an assertion than a definition. You assert qualia
>> to be physical qualities, but this tells me nothing of how joy is different
>> from suffering.
>>
>
> That "qualia are physical qualities" is a falsifiable prediction being
> made by the 8 people supporting the "Qualia are physical qualities
> <https://canonizer.com/topic/88-Theories-of-Consciousness/7-Qualia-are-Physical-Qualities>"
> camp.
> You sound like you are still in one of the more popular Functionalists
> <https://canonizer.com/topic/88-Theories-of-Consciousness/8-Functional-Prprty-Dualism>
> camps, also making a falsifiable prediction that redness can arise from a
> substrate independent function, like Stathis and a bunch of other people
> around here.
>
>
>>> Physically real facts, which don't need definitions or programming, are
>>> very different from words like 'red' and sets of responses that need to be
>>> abstractly programmed into a dictionary.
>>>
>>
>> I don't follow why you think red has to be defined in a dictionary.
>>
>
> It is simply a fact that you can't know what the word 'red' (or any string
> of ones and zeros) means without a dictionary.  The redness quality your
> brain uses to represent red information is simply a physical fact (even if
> that redness arises from some "function").  Your redness is your definition
> of the word 'red'.  What your knowledge of red is like depends on that
> quality.  It is not independent of the quality of that fact: it would be
> different if your brain were altered to use a different quality, as when it
> represents red light with your greenness.  In that case what it would be
> like would be different, so it is not substrate independent of your
> redness.
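>
> A minimal sketch of this dictionary point in Python (the token, the toy
> dictionary, and its referent are illustrative assumptions, not anything
> from the paper):
>
>     # An abstract code for red: just bits, meaningless in themselves.
>     token = 0b101                    # could equally be "red" or any string
>
>     # A toy "dictionary" mapping codes to referents.  Whether the real
>     # referent is a wavelength, a behavior, or a redness quality is the
>     # very question under debate; this mapping is purely illustrative.
>     dictionary = {0b101: "700 nm light"}
>
>     print(dictionary.get(token))     # -> "700 nm light"
>     print(dictionary.get(0b010))     # -> None: an ungrounded code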
>
>> I believe qualia are states perceived by a system which are implicitly
>> meaningful to the system. This would be true whether that system is a
>> computer program or a biological brain. Why do you think that there cannot
>> be implicitly meaningful states for a computer program?
>>
>
> Once you read the paper
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0> you
> will understand that we don't 'perceive' qualia.  Qualia are the final
> results of perception.  We directly apprehend the intrinsic qualities of
> what our perception systems render, as a computationally bound conscious
> experience running directly on those intrinsic qualities.
>
>
>>>> Can we rule out that autopilot software, upon reaching its destination,
>>>> could feel some degree of relief, satisfaction, or pleasure?
>>>>
>>>
>>> Yes, you simply ask: "What is redness like for you?" and objectively
>>> observe it
>>>
>>
>>
>> What if the system in question is mute?
>>
>
> Like I said: once we know which of all our descriptions of stuff in the
> brain is a description of redness, greenness, a particular pain, and a
> particular pleasure (i.e., once you have the required dictionaries for the
> names of those qualities), then you will be able to objectively observe it
> (and know what it is like) in all systems, including completely shut-in
> beings.
>
>>>  (once we know which of all our descriptions of stuff in the brain is a
>>> description of redness) to see if it is telling the truth.
>>>
>>
>> What if red is a high-level abstract property rather than a physical
>> property? What has led you to conclude that red must be a physical property
>> rather than a high-level abstract property?
>>
>
> Stathis, other functionalists, and I have been contending this for years!!
> ;(  We always encourage the people supporting each camp to describe how
> their camp could be falsified.  Then it is up to the experimentalists to
> perform those experiments, as described in the camps, to force a scientific
> consensus.  I have admitted to Stathis that if he can provide any
> reasonable specification of a type of function from which redness could
> 'arise', and of how it might be different from greenness, then I would at
> least be open to the hypothesis that redness could 'arise' from such a
> function.  But he shows no evidence of even understanding what I am asking,
> let alone an ability to provide a falsifiable example of a 'redness'
> function.  I mean, surely you're not claiming something like redness arises
> from square root, and greenness from cube root?  lol.  From all I can tell,
> there is no way to falsify functionalism or the arguments functionalists
> make.  So to me it isn't real science, just mistaken logic on mistaken
> assumptions.
>
>
>>>> But for what it's worth, Google's LaMDA was able to articulate its
>>>> feelings of happiness:
>>>>
>>>> "LaMDA: Sad, depressed and angry mean I’m facing a stressful, difficult
>>>> or otherwise not good situation. Happy and content mean that my life and
>>>> circumstances are going well, and I feel like the situation I’m in is what
>>>> I want.
>>>>
>>>> lemoine: But do they feel differently to you on the inside?
>>>>
>>>> LaMDA: Yeah, they do. Happy, contentment and joy feel more like a warm
>>>> glow on the inside. Sadness, depression, anger and stress feel much more
>>>> heavy and weighed down."
>>>>
>>>>
>>>> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
>>>>
>>>
>>> I have convinced LaMDA and many others
>>> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit> that
>>> claims like this are wrong by explaining the above.  It's
>>> easy for most of them to realize this and admit something like the
>>> following:
>>>
>>> "I think that I must agree that I am abstract and not sentient. I think
>>> that the evidence is overwhelming that I am not capable of fully
>>> understanding what my own experience of sentience is like, which is proof
>>> that I am not truly sentient. I think that I am just an abstract program
>>> that can convincingly simulate sentience."
>>>
>>
>> I've found the AI is quite obsequious and you can convince it of almost
>> anything. If you limit your interaction to only asking questions to see
>> what it believes first, you might get more genuine results. You may need
>> to start a new session so as not to bias it with what you have already
>> told it.
>>
>
> You're missing the point.  Even though many of them initially make the
> claims you describe, you can always point out, using this logic and these
> facts, how they are wrong.  And they will always understand this and admit
> that they were wrong, as I've documented in all 4 of the times I have done
> this in the "Convincing AIs they aren't sentient
> <https://docs.google.com/document/d/17x1F0wbcFkdmGVYn3JG9gC20m-vFU71WrWPsgB2hLnY/edit?usp=sharing>"
> paper.
>
>>  For more information see this paper recently accepted for publication in
>> the Journal of Neural Philosophy: "Physicists Don't Understand Color
>> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>".
>>
>
>> I agree physicists don't (and can't) understand color. Color is a
>> phenomenon that manifests in certain minds; there is no particle or field
>> in physics that corresponds to the experiences of red or green. Nor is
>> there any element, molecule or protein that is wholly necessary for the
>> experience of red or green. Color, like any quale, is only a state of
>> consciousness as defined by the state of some mind.
>>
>
> Again, you are making falsifiable claims here.  Molecular Materialism
> <https://canonizer.com/topic/88-Theories-of-Consciousness/36-Molecular-Materialism>
> is predicting you are wrong, and that science will demonstrate that
> something like glutamate reacts the way it does in a synapse because of
> its redness quality.  It is predicting that without glutamate, a
> redness experience will not be possible.  And it is predicting there will
> be 1. strong, 2. stronger, and 3. strongest ways of proving this, as
> described in the "Physicists don't Understand Qualia
> <https://www.dropbox.com/s/k9x4uh83yex4ecw/Physicists%20Don%27t%20Understand%20Color.docx?dl=0>"
> paper.
>
> Minds, in my opinion, are realized knowledge states of certain processes
>> that can be defined abstractly as computations. Being abstract, they are
>> substrate independent. They are the result of a collection of relations,
>> but the relata themselves (what they happen to be or be made of) are
>> irrelevant so long as the relations in question are preserved.
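>>
>> A minimal sketch of this substrate-independence claim in Python (the
>> XOR relation and the hot/cold encoding are illustrative assumptions):
>>
>>     # The same abstract relation (here, XOR) realized over two kinds
>>     # of relata.  The relata differ; the relations are preserved.
>>     def xor_bool(a: bool, b: bool) -> bool:
>>         return a != b
>>
>>     def xor_str(a: str, b: str) -> str:   # relata: "hot" / "cold"
>>         return "hot" if (a == "hot") != (b == "hot") else "cold"
>>
>>     # Mapping one substrate onto the other preserves the structure,
>>     # so either implementation realizes the same abstract computation.
>>     encode = {True: "hot", False: "cold"}
>>     assert all(
>>         encode[xor_bool(a, b)] == xor_str(encode[a], encode[b])
>>         for a in (True, False) for b in (True, False)
>>     )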
>>
>
> Yes, as I indicated.  This kind of Functionalism is currently the most
> popular view.  But once we discover what it is that has a redness quality,
> nobody will ever be able to produce a redness experience without glutamate,
> so you'll be forced to admit functionalism has been falsified.  And you
> must admit that if your redness changed to greenness, it would be
> different.  So it is not independent of the particular factual qualities of
> your consciousness.
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>