[ExI] Is the GPT-3 statistical language model conscious?

Dave Sill sparge at gmail.com
Mon Oct 12 16:56:43 UTC 2020


On Mon, Oct 12, 2020 at 12:04 PM Dylan Distasio via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> The ability to construct meaningful models of reality and understand cause
> and effect seem to be two of the largest missing ingredients in getting to
> some kind of generalized/strong AI.   There is clearly something missing in
> the current approaches in terms of how human beings learn and model
> reality.  I think unraveling how memory actually works in a brain would go
> a long way towards figuring out these other two issues.
>

As a programmer, I couldn't help thinking about how I'd attempt to
implement AI, even though I've never formally studied it. Most or all of
the attempts I've seen lack an overarching system of motivation. As
infants, we're motivated by hunger, comfort, familiarity, understanding,
etc. Communication consists primarily of crying when a need isn't being
met. Over time, we learn that understanding things and communicating our
desires satisfies our needs more effectively.

I think we need a basic learning mechanism coupled with a motivational
system that rewards learning.
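To make "rewards learning" concrete, here's a toy Python sketch. Nothing
in it comes from any real AI framework; it just shows one way the idea
could work: the agent's reward is the reduction in its own prediction
error, so it gets paid for getting better at predicting its world.

    class CuriousLearner:
        def __init__(self):
            self.estimate = 0.0     # crude internal model: a running mean
            self.prev_error = None

        def step(self, observation):
            error = abs(observation - self.estimate)              # surprise
            self.estimate += 0.1 * (observation - self.estimate)  # learn

            # Intrinsic reward = learning progress (error shrinking).
            reward = 0.0 if self.prev_error is None else self.prev_error - error
            self.prev_error = error
            return reward

    learner = CuriousLearner()
    for obs in [5.0, 5.0, 5.0, 5.0]:        # a boring, predictable world
        print(round(learner.step(obs), 3))  # reward fades as the world
                                            # becomes predictable

Note the side effect: once the world holds no more surprises, the reward
dries up, which is roughly what you'd want from a motivation to learn.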

But then, maybe motivation can be hardcoded.

At any rate, the basic learning mechanism is the key: taking audiovisual
input and building an understanding of it that can be used to predict
outcomes and achieve desired results.
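Here's the flip side, equally hypothetical: once the agent has a
predictive model, it can pursue a desired result by simulating each
available action and picking the one whose predicted outcome lands
closest to the goal. The model and actions below are toy placeholders.

    def choose_action(model, state, actions, goal):
        # Simulate each action with the learned model; pick the one
        # whose predicted next state is closest to the goal.
        return min(actions, key=lambda a: abs(model(state, a) - goal))

    # Toy world model: an action nudges the state by its own value.
    toy_model = lambda state, action: state + action

    print(choose_action(toy_model, state=2.0,
                        actions=[-1.0, 0.0, 1.0], goal=3.0))  # prints 1.0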

-Dave