[ExI] Language models are like mirrors

Gordon Swobe gordon.swobe at gmail.com
Sun Apr 2 07:47:18 UTC 2023


On Sat, Apr 1, 2023 at 4:19 PM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On 01/04/2023 21:08, Gordon Swobe wrote:
>
> On Sat, Apr 1, 2023 at 7:36 AM Ben Zaiboc via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On 01/04/2023 13:43, Gordon Swobe wrote:
>>
>> Unlike these virtual LLMs, we have access also to the referents in the
>> world that give the words in language meaning.
>>
>>
>>
>> I don't understand why this argument keeps recurring, despite having been
>> demolished more than once.
>>
>
> It has not been demolished, in my opinion. Incidentally, as I’ve
> mentioned, my view is shared by the faculty director of the master’s
> program in computational linguistics at the University of Washington. It is
> what she and her fellow professors teach, and many others understand things
> the same way. Brent points out that the majority of those who participate
> in his canonizer share similar views, including many experts in the field.
>
>
>
> Ah, your opinion. You know what they say, "You're entitled to your own
> opinions..."
>
> And you're using 'argument from authority' again.
>

I am merely refuting your claim that my argument has been “demolished.” Far
from demolished, it remains quite widely accepted, among other views.

The idea that language models are in some real sense “conscious people” is
probably a tiny minority view, even if it is enticing to us as extropians.
Here on ExI, we live with one foot in reality and one foot in science
fiction, which is what I both like and dislike about it.

-gts