[ExI] Is the GPT-3 statistical language model conscious?
Jalil Farid
monteluna at protonmail.com
Tue Oct 13 16:20:05 UTC 2020
I think one question to ask is "what is consciousness?"
After hearing the remarks, I think a program is probably on track within the next 10 years to at least statistically answer some basic questions and pass a Turing test. We will probably see some commercial applications for weak AIs first, but within my lifetime it's very likely that a GPT-10 will be nearly impossible to differentiate from a real human.
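(For anyone unfamiliar with what "statistical" means in the subject line, here's a toy sketch in Python: a bigram model that picks each next word by sampling from frequencies observed in a training text. This is my own illustration, not how GPT-3 works internally; GPT-3 is a transformer over subword tokens, but the basic idea of sampling the next token from learned statistics is the same.)

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count which words follow each word in the training text.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            choices = follows.get(word)
            if not choices:
                break
            # Sampling from the list weights each word by observed frequency.
            word = random.choice(choices)
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat and the cat"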
Sure, you can ask, "Is it conscious?" But who are we to decide what consciousness is and isn't? We're fairly certain it's an emergent phenomenon, so we have no real way to recreate it directly. Even if we did, I imagine the engineering would mostly use some form of self-organizing soft/wet/hard-ware, and it would look far less like standard engineering, where we fully understand the process from a reductionist approach.
Maybe asking whether an AI is conscious is a futile question. If you can't explain what properties differentiate an original from a copy to begin with, then something that mimics the original well enough that you can't tell the difference is, practically speaking, an original in its own right.
-------- Original Message --------
On Oct 13, 2020, 10:12 AM, Dylan Distasio via extropy-chat wrote:
> On Tue, Oct 13, 2020 at 9:39 AM Dave Sill via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>> It's a shame that OpenAI isn't really open and that Microsoft "owns" GPT-3.
>
> I agree with this sentiment completely. It's extremely disappointing, and it goes against what we were initially told about OpenAI.