[ExI] GPT-4 on its inability to solve the symbol grounding problem
spike at rainier66.com
Thu Apr 6 18:23:02 UTC 2023
…> On Behalf Of William Flynn Wallace via extropy-chat
Subject: Re: [ExI] GPT-4 on its inability to solve the symbol grounding problem
>…Simple??? You think humans are simple? Our brains are the most complex thing in the universe and the more psychology tries to understand it, the more complex it becomes. "The more you know the more you realize just how much you don't know." I dunno who said that. bill w
Billw, it is good that you don’t know who said that, for if you did know who said that, you would know less than you know now.
We have long required intelligence to include some mysterious process we don’t understand. We think we mostly understand how a neuron works: model it as a device with a bunch of input wires and one output wire, where the output stays low unless enough inputs go high, at which point the output goes high. That kinda works, but axons and dendrites are not really wires so much as branchy things with something analogous to amplifiers all along their length: if a path transmits a signal, it gets easier for it to transmit a stronger signal the next time.
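To make that toy model concrete, here is a minimal sketch in Python. The weights, threshold, and the Hebbian-style strengthening rule are all made-up illustration numbers, not anybody’s actual neuron model:

# A toy threshold neuron: many input wires, one output wire.
# Output is low (0) unless enough weighted inputs go high.
def toy_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three input wires, equal weights: fires when at least 2 of 3 inputs are high.
print(toy_neuron([1, 1, 0], [1.0, 1.0, 1.0], 2.0))  # -> 1
print(toy_neuron([1, 0, 0], [1.0, 1.0, 1.0], 2.0))  # -> 0

# The amplifier effect: each time a wire carries a signal, nudge its
# weight up a little, so the same signal transmits more strongly next time.
def strengthen(weights, inputs, rate=0.1):
    return [w + rate * x for w, x in zip(weights, inputs)]

Nothing mysterious in any single one of those. The mystery is supposed to show up later, in the aggregate.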
We imagine a system built of a big number of devices like that, then we imagine adding more dendrites and more neurons and more axons and more yakkity yaks and bla blas, and eventually, once there are enough of them, some mysterious magic somehow occurs, and that… is… human intelligence.
That theory allows us to maintain the mystery of us, for we don’t understand how adding sufficient numbers of something we do understand results in something we don’t understand.
OK then, we disqualified Eliza from the intelligence club because we understand how it works. It fooled a lot of people into thinking it was thinking, but we have previously decided that intelligence must be mysterious, and we can take this Eliza code apart and see exactly how it works. Likewise you can make an enormous lookup table derived from a teen chat room. It sounds just like a human teen, but we see how that works, so no intelligence points, even if we demonstrate that it damn well does fool a lot of teens, and grownups as well. Result: we now have no way of knowing what fraction of teen-chat archives was generated by jokers who set up giant lookup tables in Excel. Some of that material probably is just fake intell... eh… like… basically… like… in other words… like… fake teen speak.
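The lookup-table gag is easy enough to demonstrate. A minimal sketch in Python, with the table entries invented purely for illustration:

import random

# A crude keyword-to-canned-response lookup table. Completely transparent,
# no mystery anywhere, yet plausibly mistakable for a distracted teen.
TEEN_TABLE = {
    "school": ["ugh school is so boring", "like whatever lol"],
    "music": ["omg have you heard the new album", "so good right"],
}
DEFAULT = ["lol", "idk", "like... whatever"]

def teen_bot(message):
    for keyword, replies in TEEN_TABLE.items():
        if keyword in message.lower():
            return random.choice(replies)
    return random.choice(DEFAULT)

print(teen_bot("how was school today?"))  # e.g. "ugh school is so boring"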
But you know the rules: no mystery, no intelligence. Result: we have had to move the goalposts repeatedly, which is what prompted my own snarky comments about moving goalposts.
ChatGPT has a language model (analogous in some ways to the huge lookup table derived from the teen chat room) but it also has something mysterious that we don’t all understand (or I sure don’t): the transformer.
I am learning about transformers (paradoxically, being taught about them by… ChatGPT) but currently I don’t see why stacking up a bunch of transformer layers enables ChatGPT to do what it does, any more than I understand the mystery of why adding sufficient numbers of neurons results in human-level intelligence.
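For what it is worth, the non-mysterious core of a transformer layer is small enough to write down. Here is a sketch of scaled dot-product attention in Python with NumPy, following the textbook formulation from the “Attention Is All You Need” paper; nothing here is specific to ChatGPT’s internals, which are not public:

import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each position mixes in information
    # from every other position, weighted by query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted blend of values

# Toy example: 4 tokens, 8-dimensional embeddings, random numbers.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)   # self-attention: tokens attend to each other
print(out.shape)           # (4, 8)

Like the neuron, each piece is plain arithmetic. Stack enough of these, train on enough text, and the interesting behavior shows up somewhere I cannot point to.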
Just as some kind of mysterious magic happens with enough neurons, one must suppose the same thing happens with enough transformer layers. Given our lack of complete understanding of how sufficient numbers of those things can do the kinds of stuff we have seen ChatGPT do (such as writing novels with 8 simple graphics), well, OK then: I would hafta say it is (in a sense) a form of intelligence.
ChatGPT isn’t bad at writing novels either. That was a fun romp with the first two parts of its ABC trilogy.
spike