[ExI] are qualia communicable?
Jason Resch
jasonresch at gmail.com
Tue Apr 18 21:13:23 UTC 2023
On Sun, Apr 16, 2023 at 5:59 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 15/04/2023 23:01, Giovanni Santostasi wrote:
> > Another even deeper mystery than the communicability of qualia is how
> > the brain creates an I.
>
> Oh, I thought that was simple. In its essentials, anyway. I'm sure the
> details of the implementation are pretty complex, but the principle, as
> I understand it, is just that in amongst the many models we make, of the
> external world and other agents etc., there's a model of the agent doing
> the modelling. This model is referred to as 'I', just like the model of
> my cousin is referred to as 'Brian'. So when we say "Brian is going to
> the shops", we are making a prediction involving the 'Brian' model, and
> when we say "I am going to the shops" we are making a prediction
> involving the 'I' model (which of course encompasses the system doing
> the predicting). So you could call it a 'self-referential model'.
>
> Or is this obvious and trivial, and you're talking about the details of
> how this is done?
>
> If you mean the actual implementation, then I doubt anyone knows just
> yet. It's a general question about how the brain creates and manipulates
> models, especially models of agent systems. Probably quite high in the
> layers of abstraction, so analysing it in terms of neuronal connections
> will be difficult.
>
> But once we know how the brain creates models in general, we'll know
> what an 'I' is, as it's just another model.
>
> (Some models will be simpler than others, but going by how the brain
> works in general, and the massive duplication it uses, I doubt if a
> self-model will be that much different from a model of your room. Bigger
> and more complex, yes, but using the same principles).
>
Your description of a model growing to include itself brought the following
passage to mind:
"The evolution of the capacity to simulate seems to have culminated in
subjective consciousness. Why this should have happened is, to me, the most
profound mystery facing modern biology. [...] Perhaps consciousness arises
when the brain’s simulation of the world becomes so complete that it must
include a model of itself. Obviously the limbs and body of a survival
machine must constitute an important part of its simulated world;
presumably for the same kind of reason, the simulation itself could be
regarded as part of the world to be simulated. Another word for this might
indeed be “self-awareness” [...]"
-- Richard Dawkins, "The Selfish Gene" (1976), as excerpted in Douglas
Hofstadter and Daniel Dennett's "The Mind’s I" (1981)
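
To make Ben's point concrete, here is a minimal toy sketch in Python
(purely illustrative; the Agent/Model classes and their methods are
invented for this email, not anyone's actual theory). The agent keeps a
registry of models of things in its world, and the 'I' is just one more
entry in that registry, whose referent happens to be the modelling
system itself:

class Model:
    """A model of some thing, capable of generating predictions."""
    def __init__(self, name, referent):
        self.name = name          # the label used to talk about the model
        self.referent = referent  # the thing the model stands for

    def predict(self, action):
        verb = "am" if self.name == "I" else "is"
        return f"{self.name} {verb} going to {action}"

class Agent:
    """A system that builds models, possibly including one of itself."""
    def __init__(self):
        self.models = {}

    def add_model(self, name, referent):
        self.models[name] = Model(name, referent)

    def add_self_model(self):
        # The self-model is just another entry in the same registry;
        # its referent is the very system doing the modelling.
        self.add_model("I", self)

agent = Agent()
agent.add_model("Brian", "my cousin")
agent.add_self_model()
print(agent.models["Brian"].predict("the shops"))  # Brian is going to the shops
print(agent.models["I"].predict("the shops"))      # I am going to the shops

Run as-is, it prints Ben's two example sentences; the only difference
between the 'Brian' model and the 'I' model is what each one points
back to.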
Jason