[ExI] LLMs cannot be conscious

Jason Resch jasonresch at gmail.com
Mon Mar 20 13:44:36 UTC 2023


On Sun, Mar 19, 2023, 4:10 PM Dave S via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sunday, March 19th, 2023 at 2:01 PM, Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
> [...] But I also think we cannot rule out at this time the possibility
> that we have already engineered conscious machines. Without an established
> and agreed upon theory of consciousness or philosophy of mind, we cannot
> even agree on whether or not a thermostat is conscious.
>
>
> I think that rabbit hole isn't going to yield much of use, since there's
> no way an entity can determine whether or not another entity is
> conscious,
>
> Where does our own volition and initiative come from? Is it not already
> programmed into us by our DNA?
>
>
> The mechanisms are in our DNA. Some of it is hormone-driven like hunger,
> sex drive, etc. Some of it comes from our thoughts and experiences. We try
> a food we like a lot and we'll seek it out again.
>
> And is our own DNA programming that different in principle from the
> programming of a self-driving car to seek to drive to a particular
> destination?
>
>
> Yes. We decide when and where to go. Self-driving cars don't just go on
> random joy rides. They don't have initiative and they don't experience joy.
>


I believe there may be an inconsistency between these two claims:

1. "there's no way an entity can determine whether or not another entity is
conscious"

And

2. "they don't experience joy."

If it were possible to know whether another entity experienced joy, then
wouldn't it be possible to determine that another entity is conscious?


I believe we can, to some degree of confidence, determine that another
entity is conscious when, by its externally visible behavior, it
demonstrates possession of knowledge for which the observed behavior would
be exceedingly improbable if the entity did not possess that knowledge.

For example, if AlphaZero makes a series of brilliant chess moves, those
moves would be exceedingly unlikely if it did not possess knowledge of the
evolving state of the chess board. Thus we can conclude that something
within AlphaZero contains knowledge of the chess board, and states of
knowledge are states of consciousness.

It is much harder, however, to use this method to rule out the presence of
certain knowledge states, since not all states will necessarily manifest in
outwardly detectable behavior. So it is harder to say that Tesla's
Autopilot does not experience joy than it is to say that it is conscious of
the road sign up ahead.

Jason
