[ExI] Is the GPT-3 statistical language model conscious?

Dylan Distasio interzone at gmail.com
Tue Oct 13 13:31:55 UTC 2020


On Tue, Oct 13, 2020 at 9:13 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>  From what I have been able to see of its output, it actually is
> pretty smart when it comes to writing stuff. It just seems to lack
> common sense, which is understandable since GPT-3 has no sensory
> inputs except for text. This could cause it to underperform on tasks
> that would require it to associate text with sensory and motor
> experiences, just as Bill Hibbard observed earlier.
>

It's able to string together words in an order that typically makes
grammatical sense and usually bears some contextual relationship to the
seed prompt. It's been trained on a truly massive amount of text with a
greatly scaled-up version of the GPT-2 architecture (roughly 175 billion
parameters versus GPT-2's 1.5 billion). It's not surprising that it has
uncovered enough statistical relationships in text to fool us for a
paragraph.
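If you want to poke at seeded generation yourself, here is a minimal
sketch using the smaller, publicly released GPT-2 through the Hugging
Face transformers library (GPT-3 itself is only reachable through
OpenAI's API); the seed string and generation settings below are just
illustrative, not anything canonical:

    # Seeded text generation with GPT-2 (a stand-in for GPT-3,
    # whose weights are not public) via Hugging Face transformers.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    seed = "The question of machine consciousness"  # arbitrary prompt
    # Continue the seed for up to 50 tokens, returning one sample.
    outputs = generator(seed, max_length=50, num_return_sequences=1)
    print(outputs[0]["generated_text"])

Run it a few times and you get different continuations of the same
seed, which is exactly the "fool us for a paragraph" behavior.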

For me, while the tech under the hood is different, this is just the
latest iteration (granted, a highly advanced one) in a line of neural
sequence models that began with RNNs, moved on to LSTMs and CNN-based
models, and led to the Transformer-based GPT series.

Do you consider an image recognition system conscious? IMO, this isn't
much different. It finds patterns in text and, given a seed prompt,
guesses at what would make sense as output, with some level of novelty
introduced by the sampling step.
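To make that "guess with some novelty" step concrete, here is a toy
sketch of temperature sampling over next-token scores. The scores and
token names are made up for illustration; a real model computes logits
over its whole vocabulary, but the sampling mechanics are the same:

    # Toy temperature sampling: the knob that trades "safe" guesses
    # for novel ones. Scores here are hypothetical, not GPT-3's.
    import math
    import random

    def sample_next(token_scores, temperature=0.8):
        # Lower temperature sharpens the distribution (safer picks);
        # higher temperature flattens it (more novelty).
        scaled = {t: s / temperature for t, s in token_scores.items()}
        z = sum(math.exp(s) for s in scaled.values())
        probs = {t: math.exp(s) / z for t, s in scaled.items()}
        # Draw one token in proportion to its probability.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    scores = {"sense": 3.1, "noise": 1.2, "music": 0.7}  # made-up logits
    print(sample_next(scores))

Nothing in that loop looks for meaning; it just follows the learned
statistics, which is why I put it in the same bucket as an image
classifier.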