[ExI] all we are is just llms was

Gordon Swobe gordon.swobe at gmail.com
Sat Apr 22 07:04:44 UTC 2023

On Fri, Apr 21, 2023 at 5:44 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 21/04/2023 12:18, Gordon Swobe wrote:

> > Yes, still, and sorry no, I haven't watched that video yet, but I will
> > if you send me the link again.
> https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s
Thank you to you and Keith. I watched the entire presentation. I believe
the Center for Humane Technology is behind the movement to pause AI
development. Yes? In any case, I found it interesting.

> The thing (one of the things!) that struck me particularly was the
> remark about what constitutes 'language' for these systems, and that
> made me realise we've been arguing based on a false premise.

Near the beginning of the presentation, they talk of how, for example,
digital images can be converted into language and then processed by the
language model like any other language. Is that what you mean?

Converting digital images into language is exactly how I might also
describe it to someone unfamiliar with computer programming. The LLM is
then only processing more text similar in principle to English text that
describes the colors and shapes in the image. Each pixel in the image is
described in symbolic language as "red" or "blue" and so on. The LLM then
goes on to do what might be amazing things with that symbolic information,
but the problem remains that these language models have no access to the
referents. In the case of colors, it can process whatever symbolic
representation for "red" is used in the programming language in which it is
written, but it cannot actually see the color red to ground the symbol
"red."
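The conversion described above can be illustrated with a toy sketch. This is
not how any real multimodal model ingests images (those typically use learned
numeric embeddings rather than color words); the palette, thresholds, and
function names here are invented purely to show the idea of a pixel grid
reduced to symbolic text:

```python
# Toy illustration: reduce raw RGB pixels to symbolic color words,
# producing the kind of text-only representation a pure language
# model could consume. Palette and names are hypothetical.

def nearest_color_name(rgb):
    """Map an (r, g, b) tuple to the closest of a few named colors."""
    palette = {
        "red": (255, 0, 0),
        "green": (0, 255, 0),
        "blue": (0, 0, 255),
        "black": (0, 0, 0),
        "white": (255, 255, 255),
    }
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda name: sq_dist(rgb, palette[name]))

def image_to_symbols(pixels):
    """Turn a 2D grid of RGB pixels into a grid of color words."""
    return [[nearest_color_name(p) for p in row] for row in pixels]

image = [
    [(250, 10, 5), (10, 240, 20)],
    [(5, 5, 250), (255, 255, 255)],
]
print(image_to_symbols(image))
# prints [['red', 'green'], ['blue', 'white']]
```

The point of the sketch: once the conversion is done, the model sees only
the words "red", "green", "blue", and "white", never the colors themselves,
which is exactly the grounding gap at issue.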
