[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Dave S snapbag at proton.me
Fri Mar 24 12:33:26 UTC 2023


On Friday, March 24th, 2023 at 2:43 AM, Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org> wrote:

> I can already hear someone saying "but we will include photographs of objects in the training so they have referents," but this still does not do the trick.

I'm with you so far.

> These digital photographs can be displayed to the human operator of the chatbot, but the bot itself sees only 1s and 0s, ons and offs. It can detect colors by wavelength, but still this is only digital data. It does not see the colors. Likewise with shapes. It is turtles (ones and zeros) all the way down, with no referents.

Now you seem to be saying that a digital machine can never understand the world the way an analog machine can. That makes no sense to me. We detect colors by wavelength too; our eyes just transduce that wavelength information into neural signals rather than bits. Our understanding of the world is limited by our senses, and digital machines can have far more and far better senses than analog machines: they could see IR and UV, they could have a sense of magnetism, they could smell better than a bear, etc.
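
To make that concrete, here is a minimal Python sketch (my own illustration; the channel names and values are invented) of what a sense reading looks like from inside any perceiver, digital or biological: an array of intensity measurements, some with no human counterpart at all.

    # A "sense reading" is just an array of measurements, whatever the modality.
    # These channels are invented for illustration; a digital sensor could
    # cover IR, UV, magnetic field strength, chemical traces, anything measurable.
    sensor_reading = {
        "uv_380nm": 0.12,     # below the human visual range
        "blue_470nm": 0.55,
        "green_530nm": 0.80,
        "red_700nm": 0.33,
        "ir_950nm": 0.91,     # above the human visual range
        "magnetic_uT": 48.7,  # no human counterpart at all
    }

    # Whether this counts as "seeing" can't hinge on the data being numeric:
    # the retina's output is also just a pattern of signal intensities.
    for channel, intensity in sensor_reading.items():
        print(f"{channel}: {intensity}")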

Current LLMs probably don't understand things, but that's because they only look for statistical patterns in their training data. There's no fundamental reason, though, that more advanced AIs won't be able to understand reality as well as we can...and likely much better than we can.
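
As an aside, here is a toy illustration (my own, and nothing like the scale or architecture of a real LLM) of what "only looking for patterns" means: a bigram model that predicts the next word purely from co-occurrence counts, with no referents anywhere in the pipeline.

    from collections import Counter, defaultdict

    # Toy bigram "language model": pure pattern-matching over text,
    # with no connection to anything the words refer to.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def predict(word):
        # Return the statistically most common successor of `word`.
        successors = follows.get(word)
        return successors.most_common(1)[0][0] if successors else None

    print(predict("the"))  # 'cat' (follows 'the' most often in the corpus)
    print(predict("cat"))  # 'sat' ('sat' and 'ate' are tied; first seen wins)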

-Dave
