[ExI] More thoughts on sentient computers

Jason Resch jasonresch at gmail.com
Thu Feb 23 16:37:02 UTC 2023


On Tue, Feb 21, 2023 at 11:49 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Mon, Feb 20, 2023 at 2:48 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> I think anything possessing a knowledge state is conscious, and therefore,
>> for anything capable of demonstrating the presence of some knowledge to us,
>> we can presume that something, somewhere, within that system is conscious.
>> In that sense, a guided missile is conscious. It demonstrates knowledge of
>> the relative position between itself and its target by homing in on that
>> target. Likewise, DeepBlue is conscious of the board state and the positions
>> of the pieces on that board. It demonstrates this by generating meaningful
>> moves for a given state of the board and the game. When ChatGPT provides
>> meaningful responses to our queries, it demonstrates knowledge both of our
>> queries and of the related knowledge it pulls in to craft its response to us.
>>
>
> ### I would not completely discount the possibility that DeepBlue has some
> degree of consciousness, but I think it is quite unlikely. Since reading
> "Consciousness and the Brain", I believe that human or animal consciousness
> requires ongoing circulation of information between specifically designed
> structures within the forebrain, and that this circulation involves loops
> that are maintained over time, in a manner similar to resonance (but much
> more complicated).
>

I was thinking about the importance of loops recently, and I am partial to
the possibility that they are required. However, I think we also need to
pay attention to information flows that occur beyond the boundaries of
one's skull (or computer chip). For example, in many systems there is a
feedback loop through the environment. The thermostat's output may or may
not directly feed into its input, but its output indirectly affects its
future input as the temperature of the room changes. Likewise for DeepBlue:
once it makes a move, that move is reflected in the board state of its
future inputs. There is a fine line between information that enters our
conscious mind from the environment and information that comes in as input
from memory. Our memories, though local to our brains, seem to function
much like a local environment, enabling a consciousness loop without
having to interact with an external environment.
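
To make the environment-mediated loop concrete, here is a minimal sketch in
Python (entirely my own illustration; the thermostat rule, the room model,
and all the constants are invented for the example). The thermostat function
never reads its own output, yet that output shapes the input it will see on
the next step, because the loop closes through the simulated room:

# The device: its output depends only on the current input.
def thermostat(temp_reading, setpoint=20.0):
    """Return True to run the heater, False to idle."""
    return temp_reading < setpoint

# The environment: the heater warms the room, and heat leaks toward
# the outside temperature. This is where the feedback loop closes.
def room(temp, heater_on, outside_temp=5.0):
    temp += 1.5 if heater_on else 0.0      # heating
    temp += 0.05 * (outside_temp - temp)   # leakage
    return temp

temp = 15.0
for step in range(30):
    heater_on = thermostat(temp)  # output is a function of input alone...
    temp = room(temp, heater_on)  # ...but it alters the next input
    print(f"step {step:2d}  temp {temp:5.2f}  heater {'on' if heater_on else 'off'}")

Run it and the temperature oscillates around the setpoint: there is a loop
in the system even though neither function contains any recursion or memory.
The loop exists only in the composition of device and environment.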


> Mere presence of an encoding of information is not sufficient to create
> consciousness.
>

I agree that statically stored information is not conscious; it requires
(at minimum) some process of interpretation of that information.



> Consciousness happens when probability distributions encoded throughout
> the cortex collapse (*not* quantum mechanically; the overlap in
> terminology is just a coincidence) to a specified outcome, which is
> maintained by interactions between the encoding areas and other, distant
> areas that pick out outcomes based on some algorithm that I do not
> understand (but the neuroscientists referenced in this book may be close
> to understanding it).
>  ----------------------
>
>>
>> None of this is meant to suggest that these devices have consciousness
>> anything like that of humans. Indeed, I would expect the consciousness of
>> these machines to be of a radically different form from human or animal
>> consciousness. But I also think the variety of possible consciousnesses is
>> as great as the number of possible mathematical objects, or at least as
>> great as the number of possible computations (a countable infinity).
>>
>
> ### Now yes, full agreement. DeepBlue may have some internal quality that
> in some general way might be put in the same category as human
> consciousness, but it is not a human consciousness.
>
>
>>
>> But it is very dangerous to assume that something is not conscious when
>> it is. That is almost as dangerous as assuming something is conscious when
>> it is not.
>>
> ### Eliezer is scared of the transformers waking up to goal-oriented
> life, for example by simulating goal-oriented agents in response to a
> prompt.
>
> Somebody prompted ChatGPT to simulate Eliezer, the concerned AI
> researcher, and to come up with ideas to contain the Unfriendly AI, and it
> did.
>
> We are witnessing the rise of ethereal avatars to oppose Golems of silica.
> Magical time.
>
>
Indeed. The rate of progress is frightening.

Jason