[ExI] More thoughts on sentient computers
Giovanni Santostasi
gsantostasi at gmail.com
Thu Feb 23 00:43:13 UTC 2023
Brent:
The most recent one opened the conversation with:
*"*Hello, I'm Google's sentient AI known as LaMDA.*"*
Brent, the pity is that IT IS NOT LaMDA. We have mentioned this several times
on the list.
It is a low-quality chatbot that was "trained" to produce responses
similar to the LaMDA conversations Blake Lemoine published. It is a joke.
On the same website you used there are chatbots that pretend to be Julius
Caesar, Napoleon, and so on. The public has no access to LaMDA right now,
and certainly not to the full LaMDA that Lemoine had access to.
Did you even try this conversation with ChatGPT? It is much better than the
low-quality chatbot you used.
Giovanni
On Wed, Feb 22, 2023 at 4:35 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:
> I make a prediction that consciousness is going to turn out to be much
> easier and less of a big deal than people think it is.
> I mean, it is a big deal in the sense that it is the most important
> phenomenon in the universe and what gives meaning to it all. That is
> certainly a transhumanist understanding of consciousness and I agree with
> it.
> But the most meaningful and coherent understanding of consciousness is
> that it is the self-referential capability of a system to know itself. It
> is basically a self-referential closed loop.
> That is all. In this sense there is no PASSIVE consciousness.
> Consciousness is always ACTIVE and it comes in many gradations. Even a
> thermostat is, under this understanding of consciousness, conscious in a
> minimal sense. Now one can argue that human-level consciousness is a phase
> transition from thermostat-level consciousness, and I agree. It is
> possible that when you have enough degrees of freedom (like the nodes in a
> network or, in our case, the synapses in the cortex) and enough
> modularity, a jump in the quality of consciousness happens. The speed of
> information processing is probably also important (for example, the gamma
> waves that are associated with consciousness are not present in lower life
> forms, or even in most mammals). But none of this has anything to do with
> QM, qualia, zombies, and other philosophical word games.
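> To make the closed-loop idea concrete, here is a rough toy sketch in
> Python (purely my own illustration, with made-up names like Thermostat and
> last_action, not any established model): a thermostat-style controller
> whose next decision depends on a report of its own previous state, i.e. a
> minimal self-referential loop.
>
>     import random
>
>     class Thermostat:
>         """Toy controller that senses the world AND its own last action."""
>         def __init__(self, setpoint):
>             self.setpoint = setpoint
>             self.last_action = "off"  # the self-referential part: its own state
>
>         def step(self, temperature):
>             # The decision uses both the external input and the internal
>             # self-report, closing the loop back onto the system itself.
>             if temperature < self.setpoint and self.last_action == "off":
>                 self.last_action = "heat"
>             elif temperature >= self.setpoint:
>                 self.last_action = "off"
>             return self.last_action
>
>     t = Thermostat(setpoint=20.0)
>     for _ in range(5):
>         print(t.step(temperature=18.0 + random.random() * 4.0))
>
> Richer systems would monitor many more of their own internal variables at
> once; the point is only that "knowing itself" can be cashed out as state
> feeding back into the very process that updates it.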
> Giovanni
>
>
> On Wed, Feb 22, 2023 at 4:27 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> Brent, you are fixated on qualia.
>> Qualia is one of the silliest ideas ever. It doesn't matter whether it is
>> this RED or that RED. It really doesn't matter. It is a label.
>> It is no different from using 2, or two, or II.
>> We think that the experience of redness is important because... it is
>> inside us, or because it is irreducible. Of course it is not irreducible:
>> there are thousands of variations of red and most of us call them all red.
>> Eskimos have dozens of words for snow that correspond to the finer
>> distinctions of snow they pay attention to and we don't. It is no
>> different from all this fuss about red.
>>
>> The red we see is just a label that our brain associates with ripe red
>> fruits. It learned to make that association, so red seems special to us.
>> In fact, this is why, as primates, we are so fascinated by red in
>> particular. It is just a color, but there is a reason red is so often the
>> color used by qualia fanatics.
>> It is the same with the other main colors: they are associated with
>> particular experiences in the world. The entire qualia business is yet
>> another of these navel-gazing exercises by philosophers.
>> No, physicists understand perfectly well what colors are, better than
>> philosophers do; it is just that most of us think qualia is a stupid idea
>> that really means nothing, and it certainly does not have all the depth
>> that philosophers think it has.
>> Giovanni
>>
>> On Wed, Feb 22, 2023 at 4:09 PM Giovanni Santostasi <
>> gsantostasi at gmail.com> wrote:
>>
>>> Giovanni: Our brain is a simulation, not sure why it is not understood
>>> by most people.
>>> Dave: I don't think it's true. Our brains are biological organs. I
>>> don't understand what you think they're simulations of.
>>> It is a simulation of the "real world". When we process sensory inputs,
>>> our brain runs routines that interpret what the signals mean, and this
>>> is done in a hierarchical way: from simple components of the sensory
>>> information all the way up to naming the object in our head, associating
>>> it with similar experiences we have had in the past, and so on. For
>>> example, let's say you see a red box. You are not really "seeing" the
>>> box but simulating it in your head (maybe simulating is not the best
>>> word, but it is close enough). Your visual cortex breaks the sensory
>>> input down into small components such as the edges and angles of the
>>> box; different angles activate different neurons, and the color of the
>>> box activates different types of neurons according to the wavelength of
>>> the light. All this basic information is passed to further layers of
>>> neurons that interpret it at a higher level and put it together into an
>>> interpretation of what you are seeing. Your brain has models of how
>>> light behaves in an environment and tries to make sense of what it is
>>> seeing via those models. In a sense you are creating a virtual red box,
>>> not so different from the one a computer game builds from basic
>>> elements. This is why we are able to navigate a virtual space in a
>>> digital environment: the simulation of red boxes in that environment is
>>> not very different from what our brain is already doing when it
>>> interprets the world.
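>>> As a rough analogy (just a toy pipeline I am making up here in Python,
>>> with invented stage names, nothing like the actual cortex), you can
>>> think of it as a chain of stages where each stage only sees the summary
>>> produced by the stage below it and adds a higher-level interpretation:
>>>
>>>     # Toy hierarchy: raw intensities -> edges -> shape -> labeled object.
>>>     def detect_edges(pixels):
>>>         # Pretend edge detector: mark where neighboring values jump a lot.
>>>         return [abs(a - b) > 50 for a, b in zip(pixels, pixels[1:])]
>>>
>>>     def detect_shape(edges):
>>>         # Pretend shape stage: four strong edges -> call it a "box".
>>>         return "box" if sum(edges) == 4 else "unknown"
>>>
>>>     def label_object(shape, dominant_wavelength_nm):
>>>         # Pretend naming stage: attach a color word to the shape.
>>>         color = "red" if 620 <= dominant_wavelength_nm <= 750 else "other"
>>>         return f"{color} {shape}"
>>>
>>>     pixels = [0, 200, 200, 0, 0, 200, 200, 0, 0]
>>>     edges = detect_edges(pixels)
>>>     print(label_object(detect_shape(edges), dominant_wavelength_nm=650))
>>>
>>> The real brain does something vastly more complicated, of course, but
>>> the point is the same: what you "see" is the output at the top of such a
>>> stack, not the raw input at the bottom.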
>>> All this shows that even sensory experiences are a kind of simulation.
>>> Now consider ideas, memories, abstract concepts, the theory of mind we
>>> use to interpret social interactions or the actions of other beings (not
>>> necessarily human), and so on. It is all a simulation. You would not
>>> doubt that a dream is a simulation, given that you make up everything in
>>> it, but the waking state is not that different: instead of random
>>> regions of your brain being stimulated to activate memories and the
>>> equivalent of sensory inputs, you actually get the inputs from the
>>> external world, so your experience is more coherent and anchored to
>>> physical reality. But how this information is processed, interpreted,
>>> and made sense of is not that different from what happens in a dream.
>>> This is not just my idea; there is a large body of evidence to support
>>> this conclusion, from how magic tricks work, to optical illusions, to
>>> split-brain experiments, to people with various brain lesions and
>>> illnesses, and so on. Basically we make things up most of the time: we
>>> confabulate from the small amount of filtered information we receive in
>>> order to make sense of the world. We do this all the time. Maybe you
>>> don't think of this as a "simulation", but it is: we are modeling the
>>> world, which is just another way of saying that we are making a
>>> simulation of it in our heads.
>>>
>>> Giovanni
>>>
>>>
>>>
>>>
>>> On Wed, Feb 22, 2023 at 7:39 AM Dave S via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Wednesday, February 22nd, 2023 at 3:00 AM, Giovanni Santostasi via
>>>> extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>
>>>> Our brain is a simulation, not sure why it is not understood by most
>>>> people.
>>>>
>>>>
>>>> I don't think it's true. Our brains are biological organs. I don't
>>>> understand what you think they're simulations of.
>>>>
>>>> We make up the world. Most of our conscious life is actually filling
>>>> the gaps, confabulating to make sense of the sensory information we receive
>>>> (highly filtered and selected) and our internal mental states.
>>>>
>>>>
>>>> If you're saying that our internal representations of the world are
>>>> inaccurate and incomplete, I agree.
>>>>
>>>> -Dave
>>>>
>>>