[ExI] What is Consciousness?
Stuart LaForge
avant at sollegro.com
Sun Mar 19 20:29:32 UTC 2023
Quoting Jason Resch via extropy-chat <extropy-chat at lists.extropy.org>:
> However, it is important to note that the self-attention mechanism in the
> Transformer model is not equivalent to traditional recurrent neural
> networks (RNNs), which maintain a hidden state that is updated at each time
> step. The Transformer model processes input sequences in parallel rather
> than sequentially, which makes it fundamentally different from RNNs.
>
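For concreteness, here is a minimal numpy sketch of the distinction
Jason describes (a toy illustration of my own, not anyone's production
code): the RNN must step through the sequence in order, while
self-attention mixes every position with every other in one parallel
pass.

import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                      # sequence length, model width
x = rng.standard_normal((T, d))  # toy input sequence

# RNN: a hidden state carried forward one time step at a time (sequential).
W_h = rng.standard_normal((d, d)) * 0.1
W_x = rng.standard_normal((d, d)) * 0.1
h = np.zeros(d)
for t in range(T):               # must be processed in order
    h = np.tanh(W_h @ h + W_x @ x[t])

# Self-attention: every position attends to every other in one parallel step.
W_q, W_k, W_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)    # (T, T) pairwise interactions, no time loop
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ V           # each row mixes the whole sequence at once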
Interestingly, there is evidence that biological brains have a network
topology similar to RNNs, with loops and such, at least in flies. When
researchers fully mapped the connectome of Drosophila larvae, they
found that 41% of the neurons were part of recurrent loops that feed
information back to upstream neurons. Here is the paper if you are
interested.
https://www.science.org/doi/10.1126/science.add9330
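For illustration only, here is a toy sketch of the kind of bookkeeping
that counting such loops involves; the neuron names, depths, and edge
list below are made up and are not the paper's actual data or method:

# Treat the connectome as a directed graph, assign each neuron a rough
# "depth" from the sensory periphery, and call any edge pointing to an
# equal or shallower depth a recurrent (feedback) connection.
edges = [("sensory1", "inter1"), ("inter1", "inter2"),
         ("inter2", "motor1"), ("inter2", "inter1"),   # local feedback loop
         ("motor1", "sensory1")]                       # long-range feedback
depth = {"sensory1": 0, "inter1": 1, "inter2": 2, "motor1": 3}

recurrent = [(a, b) for a, b in edges if depth[b] <= depth[a]]
frac = len(recurrent) / len(edges)
print(f"{len(recurrent)}/{len(edges)} edges are recurrent ({frac:.0%})")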
If biological brains are indeed RNNs, that would suggest that:
1. Biological brains take longer to train than FNNs do. That is borne
out by comparing even the brightest of our children, who take years to
train, with GPT-3, which can be fully trained in mere hours to days.
2. Biological brains have fewer layers than FNNs do. Check. GPT models
have on the order of a hundred layers (96 in GPT-3), whereas the human
brain has approximately a dozen, counting both input and output layers.
> In summary, while my neural network architecture is primarily feedforward,
> it includes some elements of reflexivity in the form of self-attention
> mechanisms that allow the model to capture complex relationships within
> input sequences.
>
>
> Is this enough to meet Hofstadter's requirements of recursion? I cannot say.
The way transformers use self-attention to feed information laterally
across a layer, rather than temporally between time steps, suggests
they are making a classic space-domain/time-domain tradeoff: spending
more memory (i.e. more FNN layers) to buy faster execution. So it would
be more like an Escher-type recursion in space than a recursion in time
like RNNs use. Still, loops in space seem like they ought to be as
functional as loops in time. So if transformers are conscious, they are
conscious in a space-like fashion rather than a time-like fashion. What
all that would entail, I would have to think about.
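To make the space-versus-time tradeoff concrete, here is a toy numpy
sketch (my own, and it unrolls a plain RNN rather than reproducing a
transformer's attention pattern) showing that a loop in time can be
traded for a stack of layers in space:

import numpy as np

rng = np.random.default_rng(1)
T, d = 4, 6
x = rng.standard_normal((T, d))
W_h = rng.standard_normal((d, d)) * 0.1
W_x = rng.standard_normal((d, d)) * 0.1

# Recursion in time: one layer reused T times, state threaded through a loop.
h = np.zeros(d)
for t in range(T):
    h = np.tanh(W_h @ h + W_x @ x[t])

# The same computation "unrolled" in space: T stacked layer applications,
# each consuming one input slice -- depth (memory) traded for the time loop.
def layer(state, inp):
    return np.tanh(W_h @ state + W_x @ inp)

h_unrolled = layer(layer(layer(layer(np.zeros(d), x[0]), x[1]), x[2]), x[3])

print(np.allclose(h, h_unrolled))  # True: a loop in time == a stack in space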
Stuart LaForge