<div dir="ltr"><a href="https://mashable.com/article/strawberry-optical-illusion">https://mashable.com/article/strawberry-optical-illusion</a><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Apr 24, 2023 at 12:05 AM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">There is not a single red pixel in this red strawberry picture.... yeah, it was an illusion after all, as old Giovanni said...</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 11:55 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com" target="_blank">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Brent, <br>I hope we are done talking about this redness quality business once for all. Watch this and it should be enough to say "we rest our case".<br><a href="https://www.youtube.com/watch?v=MJBfn07gZ30" target="_blank">https://www.youtube.com/watch?v=MJBfn07gZ30</a><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 11:51 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com" target="_blank">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Brent,<br>Watch this is and tell me what you think and the relevance to your understanding of yellowness.<br><a href="https://www.youtube.com/watch?v=7GInwvIsH-I" target="_blank">https://www.youtube.com/watch?v=7GInwvIsH-I</a><br><br>Giovanni </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 11:48 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com" target="_blank">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">How language influences the color we see:<br><a href="https://www.youtube.com/watch?v=cGZJflerLZ4" target="_blank">https://www.youtube.com/watch?v=cGZJflerLZ4</a><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 11:01 PM Giovanni Santostasi <<a href="mailto:gsantostasi@gmail.com" target="_blank">gsantostasi@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Let say something provocatory, but I want really to drive the point. It is childish to think that <div><img src="cid:ii_lguf03er0" alt="image.png" width="43" height="43" style="margin-right: 0px;"> is not a symbol or a "word" that the brain invented for itself. It is a nonverbal symbol but it is a symbol, it is a "word". It is so obvious to me, not sure why it is not obvious to everybody else. Would it be less mysterious if we heard a melody when we see a strawberry (we hear a pitch when we hit a glass with a fork), if we heard a little voice in our head that says "red", in fact we do when we learn to associate <img src="cid:ii_lguf48ge1" alt="image.png" width="41" height="41" style="margin-right: 0px;"> with "red". There are neuroscientists who invented a vest with actuators that react when a magnetic field is present. It is interesting but not something that should case endless debate about the incommunicability of qualia. What is really interesting in an experiment like that is how the brain rewires to adapt to this new sensory information. <br> The brain had to invent a way to alert us of the presence of objects that reflect a certain range of light frequencies and it came up with <img src="cid:ii_lguf563p2" alt="image.png" width="38" height="38" style="margin-right: 0px;">. Great, what is the fuss about? <br>The communication issue is not an issue. Here I tell you what red means to me, this: <img src="cid:ii_lguf6gi53" alt="image.png" width="38" height="38" style="margin-right: 0px;">. Do you agree that this is what you "mainly" see when you see a strawberry or a firetruck? Yes, great, time to move on. Can I robot learn what color a firetruck is? Yes, it is already done, the word red suffices for all purposes necessary in terms of what a conversational AI needs.<br>It is a different business for an AI that needs to move in the real world and it is trivial to teach an AI how to recognize </div><img src="cid:ii_lgufjap45" alt="image.png" width="25" height="25" style="margin-right: 0px;"> if given optical sensors. <br><div>Nothing else is interesting or fascinating about this, not at least from a scientific perspective. If silly philosophers want to debate this let them, this why they are irrelevant in the modern world. <br><br>Giovanni <br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 10:42 PM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 23, 2023 at 11:16 PM Gordon Swobe <<a href="mailto:gordon.swobe@gmail.com" target="_blank">gordon.swobe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 22, 2023 at 4:17 AM Jason Resch via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 22, 2023, 3:06 AM Gordon Swobe via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Fri, Apr 21, 2023 at 5:44 AM Ben Zaiboc via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" rel="noreferrer" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 21/04/2023 12:18, Gordon Swobe wrote: </blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> Yes, still, and sorry no, I haven't watched that video yet, but I will <br>
> if you send me the link again. <br>
<br>
<br>
<a href="https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s" rel="noreferrer noreferrer" target="_blank">https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s</a><br>
<br></blockquote><div><br>Thank you to you and Keith. I watched the entire presentation. I think the Center for Human Technology is behind the movement to pause AI development. Yes? In any case, I found it interesting.<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
The thing (one of the things!) that struck me particularly was the <br>
remark about what constitutes 'language' for these systems, and that <br>
make me realise we've been arguing based on a false premise.</blockquote><div><br>Near the beginning of the presentation, they talk of how, for example, digital images can be converted into language and then processed by the language model like any other language. Is that what you mean?<br><br>Converting digital images into language is exactly how I might also describe it to someone unfamiliar with computer programming. The LLM is then only processing more text similar in principle to English text that describes the colors and shapes in the image. Each pixel in the image is described in symbolic language as "red" or "blue" and so on. The LLM then goes on to do what might be amazing things with that symbolic information, but the problem remains that these language models have no access to the referents. In the case of colors, it can process whatever symbolic representation it uses for "red" in whatever programming language in which it is written, but it cannot actually see the color red to ground the symbol "red."</div></div></div></blockquote></div></div><div dir="auto"><br></div><div dir="auto">That was not my interpretation of his description. LLMs aren't used to process other types of signals (sound, video, etc.), it's the "transformer model" i.e. the 'T' in GPT.</div><div dir="auto"><br></div><div dir="auto">The transformer model is a recent discovery (2017) found to be adept at learning any stream of data containing discernable patterns: video, pictures, sounds, music, text, etc. This is why it has all these broad applications across various fields of machine learning.</div><div dir="auto"><br></div><div dir="auto">When the transformer model is applied to text (e.g., human language) you get a LLM like ChatGPT. When you give it images and text you get something not quite a pure LLM, but a hybrid model like GPT-4. If you give it just music audio files, you get something able to generate music. If you give it speech-text pairs you get something able to generate and clone speech (has anyone here checked out ElevenLabs?).</div><div dir="auto"><br></div><div dir="auto">This is the magic that AI researchers don't quite fully understand. It is a general purpose learning algorithm that manifests all kinds of emergent properties. It's able to extract and learn temporal or positional patterns all on its own, and then it can be used to take a short sample of input, and continue generation from that point arbitrarily onward.</div><div dir="auto"><br></div><div dir="auto">I think when the Google CEO said it learned translation despite not being trained for that purpose, this is what he was referring to: the unexpected emergent capacity of the model to translate Bengali text when promoted to do so. This is quite unlike how Google translate (GNMT) was trained, which required giving it many samples of explicit language translations between one language and another (much of the data was taken from the U.N. records).</div></div></blockquote><div><br>That is all fine and good, but nowhere do I see any reason to think the AI has any conscious understanding of its inputs or outputs.</div></div></div></blockquote><div><br></div><div>Nor would I expect that you would when you define conscious understanding as "the kind of understanding that only human and some animal brains are capable of."</div><div>It all comes down to definitions. If we can't agree on those, we will reach different conclusions.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> You write in terms of the transformer, but to me all this is covered in my phrase "the LLM then goes on to do what might be amazing things with that symbolic information, but..."<br></div></div></div></blockquote><div><br></div><div>Is there any information which isn't at its core "symbolic"? Or do you, like Brent, believe the brain communicates with other parts of itself using direct meaning, like with "🟥" such that no interpretation is needed?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br>> (has anyone here checked out ElevenLabs?).<br><br>Yes. About a week ago, I used GPT-4, ElevenLabs and D-ID.com in combination. I asked GPT-4 to write a short speech about AI, then converted it to speech, then created an animated version of my mugshot giving the speech, then uploaded the resulting video to facebook where it amazed my friends.</div></div></div></blockquote><div><br></div><div>Nice.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"> <br><br>These are impressive feats in software engineering, interesting and amazing to be sure, but it's just code.<br></div></div></blockquote><div><br></div><div>"Just code."</div><div>You and I also do amazing things, and we're "just atoms." </div><div><br></div><div>Do you see the problem with this sentence? Cannot everything be reduced in this way (in a manner that dismisses, trivializes, or ignores the emergent properties)?</div><div><br></div><div>Jason </div></div></div>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>
</blockquote></div>