[ExI] Is the GPT-3 statistical language model conscious?

Stuart LaForge avant at sollegro.com
Tue Oct 13 13:12:14 UTC 2020

Quoting Dave Sill <sparge at gmail.com>:

> The story is that GPT-3 was posting on reddit. The postings I've read look
> to me like they're a combination of GPT-3 and human efforts.

I can't refute that. I wasn't the one who performed that experiment.  
What you suggest is certainly possible.

>> *Human: How many eyes does a blade of grass have?*
>>
>> *GPT-3: A blade of grass has one eye.*
>> Yikes! Do you not see how biased this test is? This test is like
>> expecting a child of color who grew up in the poor part of town to
>> know that a yacht is to a regatta what a pony is to a stable on an
>> I.Q. test. Or asking Mary the color scientist how fire-engine red
>> differs from carnelian. The test cited above neglected to ask the
>> most important question of all as a control: "How many eyes do you
>> have?" If it had answered "none", wouldn't that have freaked you out?
> The test is a demonstration of what GPT-3 is, and isn't. It is good at
> generating reasonable text. It isn't smart.

From what I have been able to see of its output, it actually is  
pretty smart when it comes to writing. It just seems to lack common  
sense, which is understandable since GPT-3 has no sensory inputs  
except text. This could cause it to underperform on tasks that  
require it to associate text with sensory and motor experiences,  
just as Bill Hibbard observed earlier.

But as you note, the data where GPT-3 is writing about more abstract  
concepts might have been manipulated or cherry-picked for marketing  
purposes.

> The original question of the thread was: is GPT-3 conscious. I think it's
> clearly not.

You have made that quite obvious. And while I do value your opinion, I  
am agnostic at this point, barring further reliable data, but very  
curious. Therefore, I have joined the waitlist to beta test GPT-3  
through an API for research purposes. If my request is approved, I  
think it would be an interesting experiment to have GPT-3 set up to  
post to ExI's mailserver, although I would need assistance with that  
from John Klos and perhaps you or one of the other tech gurus on the  
list. Then we could generate our own reliable data.
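Mechanically, the experiment amounts to wrapping model-generated text in an ordinary email and handing it to the list's posting address. A minimal sketch in Python, using only the standard library; the addresses, subject line, and sample text below are illustrative placeholders, not ExI's real configuration, and the GPT-3 call itself (via the beta API) is left as a stub:

```python
# Sketch: package GPT-3 output as a mailing-list post.
# All addresses here are placeholders, not real list configuration.
from email.message import EmailMessage

def build_list_post(generated_text: str, subject: str,
                    sender: str, list_addr: str) -> EmailMessage:
    """Wrap model-generated text in an email suitable for a list."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = list_addr
    msg.set_content(generated_text)
    return msg

# In the real experiment the text would come from the GPT-3 beta API,
# and the finished message would be handed to smtplib.SMTP for delivery.
post = build_list_post("A blade of grass has no eyes.",
                       "Re: Is GPT-3 conscious?",
                       "gpt3-bot@example.com",
                       "list@example.org")
print(post["Subject"])
```

The moderation question is the interesting part: a human would still approve each post before it reached subscribers, which keeps the experiment honest about how much of the output is the model's alone.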

Are you interested?

Stuart LaForge
