[ExI] Is the GPT-3 statistical language model conscious?
sparge at gmail.com
Tue Oct 13 13:33:52 UTC 2020
On Tue, Oct 13, 2020 at 9:14 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> Quoting Dave Sill <sparge at gmail.com>:
> > The test is a demonstration of what GPT-3 is, and isn't. It is good at
> > generating reasonable text. It isn't smart.
> From what I have been able to see of its output, it actually is
> pretty smart when it comes to writing stuff. It just seems to lack common
> sense which is understandable since GPT-3 has no sensory inputs except
> for text. This could cause it to underperform on tasks that would
> require it to associate text with sensory and motor experiences just
> as Bill Hibbard observed earlier.
GPT-3 is a statistical language model: it deep-learns from a massive amount
of written text. It has no mechanism whatsoever for understanding the text,
or for reasoning, thinking, planning, problem solving, or self-awareness. Any
intelligence you perceive in its output comes from the authors of the text it
was trained on.
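To make the distinction concrete: a statistical language model, at whatever scale, learns which tokens tend to follow which contexts and predicts accordingly. A toy bigram version (a minimal sketch for illustration only, nothing like GPT-3's scale, tokenization, or neural architecture) looks like:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each word, how often each other word follows it.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    # Predict the statistically most frequent continuation.
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(most_likely_next(model, "the"))  # "cat": follows "the" twice, vs. "mat" once
```

The model emits plausible continuations purely from co-occurrence statistics; nothing in it represents what a cat or a mat is, which is the sense in which "understanding" is absent.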
> > The original question of the thread was: is GPT-3 conscious. I think it's
> > clearly not.
> You have made that quite obvious. And while I do value your opinion, I
> am agnostic at this point barring further reliable data but very
> curious. Therefore, I have joined the waitlist to beta test GPT-3
> through an API for research purposes. If my request is approved, I
> think it would be an interesting experiment to have GPT-3 set up to post
> to ExI's mailserver although I would need assistance with that from
> John Klos and perhaps you or one of the other tech gurus on the list.
> Then we could generate our own reliable data.
> Are you interested?
Sure, that would be interesting. It's a shame that OpenAI isn't really open
and that Microsoft "owns" GPT-3.
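For what it's worth, the plumbing for such an experiment is mostly the easy part. A rough sketch of composing a list post from a model's output (the `generate_reply` stub is a hypothetical placeholder for the actual GPT-3 API call, whose endpoint and parameters would come from OpenAI's beta documentation; the addresses are made up, and the list server would of course have to accept posts from the bot's address):

```python
import smtplib
from email.message import EmailMessage

def generate_reply(prompt):
    # Hypothetical stand-in for a call to the GPT-3 beta API.
    return "Generated text would go here."

def compose_post(prompt, sender, list_addr):
    # Wrap the model's output in a plain-text message addressed to the list.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = list_addr
    msg["Subject"] = "Re: [ExI] Is the GPT-3 statistical language model conscious?"
    msg.set_content(generate_reply(prompt))
    return msg

msg = compose_post("Is GPT-3 conscious?",
                   "gpt3-bot@example.org",              # hypothetical sender
                   "extropy-chat@lists.extropy.org")
# Actually sending would be one more call, e.g.
# smtplib.SMTP(host).send_message(msg) -- omitted here.
```

The interesting part of the experiment is upstream of this: deciding what prompt context to feed the model from the thread, and whether readers can tell the difference.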