[ExI] all we are is just llms was

Giovanni Santostasi gsantostasi at gmail.com
Mon Apr 24 06:55:46 UTC 2023


Brent,
I hope we are done talking about this redness quality business once and for
all. Watch this; it should be enough to say "we rest our case".
https://www.youtube.com/watch?v=MJBfn07gZ30

On Sun, Apr 23, 2023 at 11:51 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:

> Brent,
> Watch this and tell me what you think, and its relevance to your
> understanding of yellowness.
> https://www.youtube.com/watch?v=7GInwvIsH-I
>
> Giovanni
>
> On Sun, Apr 23, 2023 at 11:48 PM Giovanni Santostasi <
> gsantostasi at gmail.com> wrote:
>
>> How language influences the color we see:
>> https://www.youtube.com/watch?v=cGZJflerLZ4
>>
>> On Sun, Apr 23, 2023 at 11:01 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>
>>> Let me say something provocative, because I really want to drive the
>>> point home. It is childish to think that
>>> [image: image.png] is not a symbol or a "word" that the brain invented
>>> for itself. It is a nonverbal symbol, but it is a symbol, a "word". This
>>> is so obvious to me that I am not sure why it is not obvious to everybody
>>> else. Would it be any less mysterious if we heard a melody when we see a
>>> strawberry (as we hear a pitch when we strike a glass with a fork), or if
>>> we heard a little voice in our head that says "red"? In fact we do, once
>>> we learn to associate [image: image.png] with "red". Some neuroscientists
>>> have built a vest with actuators that respond when a magnetic field is
>>> present. It is interesting, but not something that should cause endless
>>> debate about the incommunicability of qualia. What is really interesting
>>> in an experiment like that is how the brain rewires to adapt to this new
>>> sensory information.
>>> The brain had to invent a way to alert us to the presence of objects
>>> that reflect a certain range of light frequencies, and it came up with [image:
>>> image.png]. Great, so what is the fuss about?
>>> The communication issue is not an issue. Here I tell you what red means
>>> to me, this: [image: image.png]. Do you agree that this is what you
>>> "mainly" see when you see a strawberry or a firetruck? Yes? Great, time to
>>> move on. Can a robot learn what color a firetruck is? Yes, it has already
>>> been done; the word "red" suffices for all the purposes a
>>> conversational AI needs.
>>> It is a different matter for an AI that needs to move in the real
>>> world, but even then it is trivial to teach an AI to recognize
>>> [image: image.png] if it is given optical sensors.
>>> Nothing else about this is interesting or fascinating, at least not from
>>> a scientific perspective. If silly philosophers want to debate it, let
>>> them; this is why they are irrelevant in the modern world.
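
As a minimal sketch of that last claim (the RGB readings and labels below are
toy values invented for illustration, not real sensor data): given labeled
readings from an "optical sensor", a small off-the-shelf classifier learns to
recognize red.

    # Toy illustration: learn "red" from labeled RGB sensor readings.
    from sklearn.linear_model import LogisticRegression

    rgb_samples = [(250, 12, 8), (230, 40, 30), (20, 200, 30),
                   (15, 25, 220), (245, 5, 20), (10, 10, 10)]
    is_red      = [1, 1, 0, 0, 1, 0]   # invented labels for the toy samples

    clf = LogisticRegression(max_iter=1000).fit(rgb_samples, is_red)
    print(clf.predict([(240, 15, 10)]))   # expected: [1], i.e. "red"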
>>>
>>> Giovanni
>>>
>>>
>>> On Sun, Apr 23, 2023 at 10:42 PM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Apr 23, 2023 at 11:16 PM Gordon Swobe <gordon.swobe at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sat, Apr 22, 2023 at 4:17 AM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Apr 22, 2023, 3:06 AM Gordon Swobe via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>> On Fri, Apr 21, 2023 at 5:44 AM Ben Zaiboc via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>> On 21/04/2023 12:18, Gordon Swobe wrote:
>>>>>>>
>>>>>>> > Yes, still, and sorry no, I haven't watched that video yet, but I
>>>>>>>> will
>>>>>>>> > if you send me the link again.
>>>>>>>>
>>>>>>>>
>>>>>>>> https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s
>>>>>>>>
>>>>>>>>
>>>>>>> Thank you to you and Keith. I watched the entire presentation. I
>>>>>>> think the Center for Humane Technology is behind the movement to pause AI
>>>>>>> development. Yes? In any case, I found it interesting.
>>>>>>>
>>>>>>> The thing (one of the things!) that struck me particularly was the
>>>>>>>> remark about what constitutes 'language' for these systems, and
>>>>>>>> that
>>>>>>>> made me realise we've been arguing based on a false premise.
>>>>>>>
>>>>>>>
>>>>>>> Near the beginning of the presentation, they talk of how, for
>>>>>>> example, digital images can be converted into language and then processed
>>>>>>> by the language model like any other language. Is that what you mean?
>>>>>>>
>>>>>>> Converting digital images into language is exactly how I might also
>>>>>>> describe it to someone unfamiliar with computer programming. The LLM is
>>>>>>> then only processing more text similar in principle to English text that
>>>>>>> describes the colors and shapes in the image. Each pixel in the image is
>>>>>>> described in symbolic language as "red" or "blue" and so on. The LLM then
>>>>>>> goes on to do what might be amazing things with that symbolic information,
>>>>>>> but the problem remains that these language models have no access to the
>>>>>>> referents. In the case of colors, it can process whatever
>>>>>>> symbolic representation it uses for "red" in whatever programming language
>>>>>>> in which it is written, but it cannot actually see the color red to ground
>>>>>>> the symbol "red."
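
A minimal sketch of the kind of pixel-to-symbol conversion described above
(the palette, distance rule, and toy image are invented for illustration, not
any real model's preprocessing): every pixel is reduced to a color word, so
the downstream language model only ever receives text, never light.

    # Toy illustration: reduce pixels to color words the LLM can process.
    def nearest_color_name(pixel):
        """Map an (R, G, B) tuple to the closest of a few named colors."""
        palette = {
            "red":   (255, 0, 0),
            "green": (0, 255, 0),
            "blue":  (0, 0, 255),
            "black": (0, 0, 0),
            "white": (255, 255, 255),
        }
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(palette, key=lambda name: dist(pixel, palette[name]))

    image = [(250, 10, 5), (240, 20, 30), (12, 10, 200)]   # toy 3-pixel "image"
    tokens = [nearest_color_name(p) for p in image]
    print(tokens)   # ['red', 'red', 'blue'] -- this text is all the LLM sees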
>>>>>>>
>>>>>>
>>>>>> That was not my interpretation of his description. It isn't the LLM
>>>>>> itself that is used to process other types of signals (sound, video,
>>>>>> etc.); it is the underlying "transformer model", i.e. the 'T' in GPT.
>>>>>>
>>>>>> The transformer model is a recent discovery (2017) found to be adept
>>>>>> at learning any stream of data containing discernible patterns: video,
>>>>>> pictures, sounds, music, text, etc. This is why it has all these broad
>>>>>> applications across various fields of machine learning.
>>>>>>
>>>>>> When the transformer model is applied to text (e.g., human language)
>>>>>> you get an LLM like ChatGPT. When you give it images and text you get
>>>>>> something not quite a pure LLM, but a hybrid model like GPT-4. If you give
>>>>>> it just music audio files, you get something able to generate music. If you
>>>>>> give it speech-text pairs you get something able to generate and clone
>>>>>> speech (has anyone here checked out ElevenLabs?).
>>>>>>
>>>>>> This is the magic that AI researchers don't fully understand. It is a
>>>>>> general-purpose learning algorithm that manifests all kinds of emergent
>>>>>> properties. It is able to extract and learn temporal or positional
>>>>>> patterns all on its own, and it can then take a short sample of input
>>>>>> and continue generating from that point arbitrarily onward.
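
A rough sketch of that generality, using PyTorch. The vocabulary sizes and
hyperparameters are arbitrary placeholders, and positional encodings and
causal masking are omitted for brevity; the point is only that the same
transformer stack is reused unchanged, and just the tokenizer and vocabulary
differ between text, audio, and images.

    # Same architecture, different token streams.
    import torch.nn as nn

    class SequenceModel(nn.Module):
        def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            self.head = nn.Linear(d_model, vocab_size)   # next-token prediction

        def forward(self, tokens):               # tokens: (batch, seq_len) ints
            return self.head(self.encoder(self.embed(tokens)))

    text_model  = SequenceModel(vocab_size=50_000)  # e.g. word/BPE text tokens
    audio_model = SequenceModel(vocab_size=1_024)   # e.g. quantized audio codes
    image_model = SequenceModel(vocab_size=8_192)   # e.g. discrete image patches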
>>>>>>
>>>>>> I think when the Google CEO said it learned translation despite not
>>>>>> being trained for that purpose, this is what he was referring to: the
>>>>>> unexpected emergent capacity of the model to translate Bengali text when
>>>>>> prompted to do so. This is quite unlike how Google Translate (GNMT) was
>>>>>> trained, which required giving it many samples of explicit translations
>>>>>> between one language and another (much of the data was taken from U.N.
>>>>>> records).
>>>>>>
>>>>>
>>>>> That is all fine and good, but nowhere do I see any reason to think
>>>>> the AI has any conscious understanding of its inputs or outputs.
>>>>>
>>>>
>>>> Nor would I expect that you would when you define conscious
>>>> understanding as "the kind of understanding that only human and some animal
>>>> brains are capable of."
>>>> It all comes down to definitions. If we can't agree on those, we will
>>>> reach different conclusions.
>>>>
>>>>
>>>>> You write in terms of the transformer, but to me all this is covered
>>>>> in my phrase "the LLM then goes on to do what might be amazing things with
>>>>> that symbolic information, but..."
>>>>>
>>>>
>>>> Is there any information which isn't at its core "symbolic"? Or do you,
>>>> like Brent, believe the brain communicates with other parts of itself using
>>>> direct meaning, like with 🟥, such that no interpretation is needed?
>>>>
>>>>
>>>>>
>>>>> >  (has anyone here checked out ElevenLabs?).
>>>>>
>>>>> Yes. About a week ago, I used GPT-4, ElevenLabs, and D-ID.com in
>>>>> combination. I asked GPT-4 to write a short speech about AI, then converted
>>>>> it to speech, then created an animated version of my mugshot giving the
>>>>> speech, then uploaded the resulting video to Facebook, where it amazed my
>>>>> friends.
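
For concreteness, a purely hypothetical sketch of that pipeline. The three
helper functions are illustrative stand-ins, not the real GPT-4, ElevenLabs,
or D-ID APIs, whose actual client libraries and parameters differ.

    # Hypothetical text -> speech -> animated-portrait pipeline.
    def write_speech(prompt: str) -> str:
        """Stand-in for a call to a text model such as GPT-4."""
        return "placeholder speech text"

    def synthesize_voice(text: str) -> bytes:
        """Stand-in for a call to a text-to-speech service such as ElevenLabs."""
        return b"placeholder audio bytes"

    def animate_portrait(photo_path: str, audio: bytes) -> str:
        """Stand-in for an avatar/animation service such as D-ID; returns a video path."""
        return "speech_video.mp4"

    speech = write_speech("Write a short speech about AI.")
    audio = synthesize_voice(speech)
    video_path = animate_portrait("mugshot.jpg", audio)   # then upload the video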
>>>>>
>>>>
>>>> Nice.
>>>>
>>>>
>>>>>
>>>>>
>>>>> These are impressive feats of software engineering, interesting and
>>>>> amazing to be sure, but it's just code.
>>>>>
>>>>
>>>> "Just code."
>>>> You and I also do amazing things, and we're "just atoms."
>>>>
>>>> Do you see the problem with this sentence? Cannot everything be reduced
>>>> in this way (in a manner that dismisses, trivializes, or ignores the
>>>> emergent properties)?
>>>>
>>>> Jason
>>>>
>>>