[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Sat Mar 25 01:49:07 UTC 2023


On Fri, Mar 24, 2023 at 2:46 AM Gordon Swobe via extropy-chat
<extropy-chat at lists.extropy.org> wrote:

> I can already hear someone saying "but we will include photographs of
> objects in the training so they have referents," but this still does not do
> the trick. These digital photographs can be displayed to the human operator
> of the chatbot, but the bot itself sees only 1s and 0s, ons and offs. It
> can detect colors by wavelength, but still this is only digital data. It
> does not see the colors. Likewise with shapes. It is turtles (ones and
> zeros) all the way down with no referents.
>

### Have you ever seen any colors? You know that your optic nerve does not
pump colors into your brain; it delivers strings of action potentials,
which are digitized, compressed information streams about the photons
impinging on your eyes. Your brain creates colors as a way of coding
surfaces according to their reflectances, which is useful in object
recognition, since the reflectance of most objects is a stable property of
those objects. Your brain uses advanced algorithms to extract patterns from
digitized data, and you, the spirit in the brain, have a subjective
experience while that digital work proceeds... does it ring any bells in
the context of GPT-4?
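
To make the reflectance point concrete, here is a minimal sketch, assuming
a simple gray-world color-constancy model; it is an illustration only, not
a claim about what the visual cortex or GPT-4 literally computes. The
function name estimate_reflectance and the toy illuminants are made up for
the example. The point is that a stable "color" can be recovered from
nothing but digitized pixel values.

import numpy as np

def estimate_reflectance(raw_image: np.ndarray) -> np.ndarray:
    """Divide out an estimated illuminant so surface reflectance dominates."""
    # Gray-world assumption: the average scene reflectance is achromatic,
    # so the per-channel mean of the raw values approximates the illuminant.
    illuminant = raw_image.reshape(-1, 3).mean(axis=0)      # shape (3,)
    reflectance = raw_image / np.maximum(illuminant, 1e-8)  # discount illuminant
    return reflectance / reflectance.max()                  # normalize to [0, 1]

# The same surfaces lit by two different illuminants yield nearly the same
# reflectance estimates even though the raw numbers differ; that stability
# is the property being coded as "color".
surfaces = np.random.default_rng(0).uniform(0.1, 1.0, size=(4, 4, 3))
daylight = surfaces * np.array([1.0, 0.95, 0.9])
tungsten = surfaces * np.array([1.0, 0.7, 0.4])
print(np.allclose(estimate_reflectance(daylight),
                  estimate_reflectance(tungsten)))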

Rafal
