[ExI] More thoughts on sentient computers

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Wed Feb 22 05:48:38 UTC 2023


On Mon, Feb 20, 2023 at 2:48 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> I think anything possessing a knowledge state is conscious, and therefore
> anything capable of demonstrating the presence of some knowledge to us we
> can presume that something, somewhere, within that system is conscious. In
> that sense, a guided missile is conscious. It demonstrates knowledge of the
> relative position between itself and the target by homing in on its target.
> Likewise DeepBlue is conscious of the board state and positions of the
> pieces on that board. It demonstrates this by generating meaningful moves
> for a given state of a board and the game. When ChatGPT provides meaningful
> responses to our queries, it demonstrates knowledge both of our queries and
> of the related knowledge it pulls in to craft its response to us.
>

### I would not completely discount the possibility that DeepBlue has some
degree of consciousness, but I think it is quite unlikely. Since reading
"Consciousness and the Brain" I have believed that human or animal
consciousness requires the ongoing circulation of information between
specialized structures within the forebrain, and that this circulation
involves loops that are maintained over time, in a manner similar to
resonance (but much more complicated). The mere presence of an encoding of
information is not sufficient to create consciousness. Consciousness
happens when probability distributions encoded throughout the cortex
collapse (*not* quantum mechanically; it's just a coincidence of terms) to
a specified outcome, and that outcome is maintained by interactions between
the encoding areas and other, distant areas that pick out outcomes
according to some algorithm I do not understand (though the neuroscientists
referenced in the book may be close to understanding it).
 ----------------------

>
> None of this is meant to suggest that these devices have consciousness
> anything like humans. Indeed I would expect the consciousness of these
> machines to be of a radically different form than human, or animal
> consciousness. But I also think the variety of possible consciousnesses is
> as varied as the number of possible mathematical objects, or at least as
> varied as the number of possible computations (a countable infinity).
>

### Now yes, full agreement. DeepBlue may have some internal quality that
in some general way might be put in the same category as human
consciousness, but it is not a human consciousness.


>
> But it is very dangerous to assume that something is not conscious when it
> is. That is almost as dangerous as assuming something is conscious when it
> is not.
>

### Eliezer is scared of the transformers waking up to goal-oriented life,
for example by simulating goal-oriented agents in response to a prompt.

Somebody prompted ChatGPT to simulate Eliezer, the concerned AI researcher,
and to come up with ideas to contain the Unfriendly AI, and it did.
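For anyone who wants to try something similar, a minimal sketch with the
OpenAI Python client might look like this (the model name and the prompt
wording are placeholders of my own, not the prompt that was actually used):

# Sketch only: placeholder model name and prompt, not the original experiment.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Roleplay as Eliezer Yudkowsky, an AI safety researcher "
                    "deeply worried about unaligned AI."},
        {"role": "user",
         "content": "Propose concrete ideas for containing an unfriendly AI."},
    ],
)
print(response.choices[0].message.content)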

We are witnessing the rise of ethereal avatars to oppose Golems of silica.
Magical time.

Rafal