[ExI] Is the GPT-3 statistical language model conscious?

BillK pharos at gmail.com
Fri Oct 9 21:10:16 UTC 2020


On Fri, 9 Oct 2020 at 21:50, Will Steinberg via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> I don't understand how this works that it could do something like that.  Anyone wanna explain?
>


This article looks like a good explanation.
<https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/>

Quote:
In short, this means that it generates text using algorithms that are
pre-trained – they’ve already been fed all of the data they need to
carry out their task. Specifically, they’ve been fed around 570GB of
text information gathered by crawling the internet (a publicly
available dataset known as CommonCrawl) along with other texts
selected by OpenAI, including the text of Wikipedia.

If you ask it a question, you would expect that the most useful
response would be an answer. If you ask it to carry out a task such as creating
a summary or writing a poem, you will get a summary or a poem.

More technically, it has also been described as the largest artificial
neural network ever created.
------------------
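
For anyone wondering what "asking it" looks like in practice, it really is
just a prompt-and-completion call. A rough sketch using the OpenAI Python
client (the "davinci" engine name and the placeholder API key are my
assumptions, not from the article):

import openai

# placeholder key - replace with your own
openai.api_key = "sk-..."

# send a prompt; GPT-3 returns a continuation of the text
response = openai.Completion.create(
    engine="davinci",          # assumed GPT-3 engine name
    prompt="Write a short poem about the sea.",
    max_tokens=64,
)

print(response.choices[0].text)

The same call works for questions, summaries, poems, etc. - the model just
continues whatever text you give it.
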

BillK


