[ExI] Is the GPT-3 statistical language model conscious?
Dave Sill
sparge at gmail.com
Tue Oct 13 16:44:31 UTC 2020
On Tue, Oct 13, 2020 at 12:27 PM Jalil Farid via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
> I think one question to ask is "what is consciousness?"
>
That's a rabbit hole we've been down many times to no avail. In the case of
GPT-3, though, we don't really need to ask it because it's clear from its
design that it's incapable of consciousness by almost any definition.
> After hearing the remarks, it appears a program is probably on track within
> the next 10 years to at least statistically answer some basic questions and
> pass a Turing Test. We probably will see some commercial applications for
> weak AIs, but within my lifetime it's very likely that GPT-10 is mostly
> impossible to differentiate from a real human.
>
Weak AIs are already being used for chatbots, customer service call
centers, and many other things. GPT-N alone will never pass a Turing
test--it's a language generator with no understanding of anything.
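To see why "language generator with no understanding" is a fair description, here's a minimal sketch of statistical text generation. This is a toy bigram model, not GPT-3's actual transformer architecture (the corpus, function names, and sampling loop are all illustrative), but the generation loop is analogous: the model only ever picks a plausible next token given what came before.

```python
# Toy bigram language model: generates fluent-looking text purely from
# next-token frequencies, with no model of meaning. (Illustrative only;
# GPT-3 replaces the frequency table with a large neural network.)
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ran to the mat".split()

# Record which tokens follow each token in the training text.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, n_tokens=6, seed=0):
    """Sample a continuation one token at a time."""
    random.seed(seed)
    out = [start]
    for _ in range(n_tokens):
        choices = followers.get(out[-1])
        if not choices:  # dead end: token never seen mid-corpus
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

The output is locally coherent because each transition was seen in training, but nothing in the process involves knowing what a cat or a mat is.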
> Sure you can ask, "Is it conscious?" But who are we to decide what
> consciousness is and isn't?
>
Right. It's a waste of time.
-Dave