[ExI] What is Consciousness?

Jason Resch jasonresch at gmail.com
Sun Mar 19 19:13:39 UTC 2023


On Sun, Mar 19, 2023 at 2:01 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf
> Of Jason Resch via extropy-chat
> …
>
> >…We see recurring themes of information, recursion, computation, and
> machines and logic. I think these are likely key to any formal definition
> of consciousness. …Jason
>
> Jason, there is a reason I stopped worrying in the past coupla weeks that
> ChatGPT was going to cause the singularity.  I am a big Hofstadter fan: I
> read Gödel, Escher, Bach twice, cover to cover, and invested a lot of time
> in that marvelous work.  He convinced me that machine consciousness (or
> any other sentience or self-awareness) requires a type of recursion.
> Hofstadter goes on at length about recursion and self-reference, and the
> importance of Gödel’s work to understanding ourselves.
>

My friend with access to GPT-4 asked it: "Does your neural network contain
such reflexive loops, or is it strictly feed forward?" Its reply is below:

As an AI language model, my underlying architecture is based on the
Transformer model, which is primarily feedforward in nature but includes
some elements of recurrence or reflexivity, specifically in the
self-attention mechanism.


The Transformer model consists of a multi-layer feedforward neural network
with self-attention mechanisms that allow the model to learn complex
relationships between input and output sequences. The self-attention
mechanism weighs the importance of different input elements relative to
each other, effectively capturing long-range dependencies and relationships
within the input data. This mechanism introduces a form of reflexivity or
recurrence, as it allows the model to consider the context of the entire
input sequence when generating outputs.


However, it is important to note that the self-attention mechanism in the
Transformer model is not equivalent to traditional recurrent neural
networks (RNNs), which maintain a hidden state that is updated at each time
step. The Transformer model processes input sequences in parallel rather
than sequentially, which makes it fundamentally different from RNNs.


In summary, while my neural network architecture is primarily feedforward,
it includes some elements of reflexivity in the form of self-attention
mechanisms that allow the model to capture complex relationships within
input sequences.
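
To make the distinction concrete, here is a toy sketch (my illustration,
not GPT-4's actual code, with single-head attention and made-up
dimensions). Self-attention relates every position to every other in one
parallel pass, while a vanilla RNN, shown for contrast, threads a hidden
state through the sequence one step at a time:

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d) token embeddings, processed all at once.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # each position weighs every other
    return softmax(scores) @ V               # weighted mix of the whole sequence

def rnn_step(h, x, Wh, Wx):
    # For contrast: an RNN carries a hidden state h forward,
    # updating it sequentially, one token at a time.
    return np.tanh(h @ Wh + x @ Wx)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)         # all 5 positions in parallel

The attention weights give the model a kind of self-reference over its own
input, but no persistent state is fed back from step to step, which is why
GPT-4 calls the architecture primarily feedforward.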


Is this enough to meet Hofstadter's requirement of recursion? I cannot say.



>
> I tried to convince myself that two or more ChatGPTs could train each
> other on their own time, which is a form of recursion and self-reference,
> and that process could perhaps spring into a human-level AGI with a will,
> with self-awareness, with all the stuff we think of as us.
>
> Now, after studying GPT^2 and GPT^3 discussions, I find they all seem to
> devolve into nothing.  I think the technology is still coming for that
> process: two or more AIs training each other using background compute
> cycles.  But I now don’t think ChatGPT is that technology, or is capable
> of it.
>
> If you know of examples of GPT-GPT discussions, or of GPT with any other
> chatbot, that became interesting, do share please.
>

There was a paper or video I saw recently about a major advance: using AI
models to generate prompts and to rate answers when training other models,
work that previously had to be done by human judges. I cannot find the
reference at the moment.
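
As I recall, the idea was roughly the loop sketched below. This is purely
hypothetical on my part; every name in it is made up, and the actual
paper's method surely differs:

# Hypothetical sketch of AI-feedback training: one model writes prompts,
# a student answers, and a judge model rates the answers, standing in for
# the human raters. All of these names and methods are invented.
def ai_feedback_round(prompt_model, student, judge, n_examples=1000):
    dataset = []
    for _ in range(n_examples):
        prompt = prompt_model.generate()                      # model-written prompt
        answers = [student.answer(prompt) for _ in range(4)]  # candidate answers
        scores = [judge.rate(prompt, a) for a in answers]     # AI-judged ratings
        dataset.append((prompt, answers[scores.index(max(scores))]))
    student.finetune(dataset)                                 # train on preferred answers

The interesting part is that the judge removes the human bottleneck: the
rating step, which previously needed people, is itself done by a model.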


>
> That belief was behind my comment last week that ChatGPT is not really
> thinking, but rather is just running a language model.
>
> I currently don’t think ChatGPT is the technology capable of causing the
> singularity.  I am losing no sleep, not one minute of sleep over ChatGPT.
>

I agree that neither GPT-3 nor GPT-4 will trigger the singularity on its
own, but we are now in the runaway phase of recursive self-improvement by
AIs: AI is being used to design hardware, to write software, and to train
itself and other AIs. At this rate it won't be long.


>
> Oops, partially retract that last comment but in a good way: I am losing
> some sleep over ChatGPT, by staying up late to goof with it.  It is the
> coolest software tool to come along in a long time.
>

I agree.  It is fascinating.

Jason