[ExI] Conscious AI or a Zombie?

Brent Allsop brent.allsop at gmail.com
Mon Jul 3 01:25:11 UTC 2023


Good point; we certainly compute both abstractly and phenomenally.
I changed the name of the camp statement to:
"No Rights for Abstract Only Systems"

Whatever represents my thoughts and memories of something like the
number 1, and the way those representations are computationally bound
into the meaningful subjective relationships we are aware of, is still
like something.  In an abstract computer, by contrast, it doesn't
matter what represents the number 1, by design.

So I'm trying to figure out how to state how your beliefs differ from mine.





On Sun, Jul 2, 2023 at 5:43 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Jul 2, 2023, 7:07 PM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Hi Jason,
>> This Jeremy Bentham statement could be saying the same thing we are
>> trying to say in the "No Rights for Abstract Systems
>> <https://canonizer.com/topic/849-Robot-or-AI-Rights/2-No-Rights-for-Abstract-Systems>"
>> camp.
>>
>> But I suspect you disagree with the camp statement, and are defining
>> terms differently than I am?
>>
>
> I don't see that there's any way one can define an "abstract system" in a
> way that doesn't include ourselves.
>
> Jason
>
>
>
>>
>>
>> On Sun, Jul 2, 2023 at 4:54 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Sun, Jul 2, 2023, 5:32 PM Brent Allsop via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>> It's interesting how much diversity of thinking there is here on the
>>>> rights of AI systems.
>>>> I'm very interested in what everyone thinks, and would love to have a
>>>> more formal and dynamic representation of what everyone
>>>> currently believes on this.
>>>> I've created a topic for this: "Robot or AI Rights
>>>> <https://canonizer.com/topic/849-Robot-or-AI-Rights/1-Agreement>".
>>>> And I've started the "No Rights for Abstract Systems
>>>> <https://canonizer.com/topic/849-Robot-or-AI-Rights/2-No-Rights-for-Abstract-Systems>"
>>>> camp.
>>>>
>>>
>>> “The question is not, Can they reason? nor, Can they talk? but, Can they
>>> suffer? Why should the law refuse its protection to any sensitive being?”
>>> -- Jeremy Bentham
>>>
>>>> It'd be great to have camps for people who currently think differently,
>>>> and to see which views have the most consensus, and to track this over time
>>>> as science is able to demonstrate more about what consciousness is and
>>>> isn't.
>>>>
>>>
>>>>
>>>> On Sun, Jul 2, 2023 at 12:39 PM William Flynn Wallace via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Except for smell, sensations come to us through our unconscious mind.
>>>>> There they are routed to various brain areas and then, only perhaps, to
>>>>> upper areas (they could be shut down and never reach the forebrain).
>>>>> These processes are aware of the nature of the stimuli and know where to
>>>>> send them, but they are not available to our conscious mind.
>>>>>
>>>>> Or you could say that these unconscious processes are really conscious
>>>>> but simply not available to what we call our conscious mind.  Thus you can
>>>>> have awareness or consciousness in part of the brain but not the part we
>>>>> think of as our consciousness.
>>>>>
>>>>> If an organism deals in some appropriate way with incoming stimuli, you
>>>>> could call it aware.  Amoebas do that.
>>>>>
>>>>> bill w
>>>>>
>>>>> On Sun, Jul 2, 2023 at 1:02 PM Jason Resch via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>> Stuart,
>>>>>>
>>>>>> Thank you for putting your argument into such clear words. I agree
>>>>>> completely with your definition of CAP.
>>>>>>
>>>>>> Also, those videos of the AI and robotic soccer players are great. I
>>>>>> agree it is not logically possible to say these entities are not aware
>>>>>> of the ball, and what else is consciousness, beyond "awareness"? If the
>>>>>> bots are aware of the ball (which they plainly are), then they're
>>>>>> conscious of it.
>>>>>>
>>>>>> Jason
>>>>>>
>>>>>> On Sat, Jul 1, 2023 at 6:16 PM Stuart LaForge via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>>
>>>>>>> Quoting Jason Resch via extropy-chat <extropy-chat at lists.extropy.org>:
>>>>>>>
>>>>>>> > If you believe that philosophical zombies are logically impossible,
>>>>>>> > then consciousness is logically necessary (in the presence of
>>>>>>> > certain behavioral, sensory, and introspective capacities).
>>>>>>>
>>>>>>> Actually, I would more rigorously say that philosophical zombies are
>>>>>>> impossible precisely BECAUSE they violate what I call the causal
>>>>>>> awareness principle, or CAP. Causal awareness is hereby defined as the
>>>>>>> ability to sense, act on, and react to one's environment, as well as
>>>>>>> one's own effect on it. Anything that is causally aware is able to act
>>>>>>> rationally. Consciousness is necessary, but not sufficient, for causal
>>>>>>> awareness, and therefore for rational action. However, something
>>>>>>> cannot act rationally without being conscious too. Acting in one's own
>>>>>>> best interests requires consciousness. A sentient being must be
>>>>>>> conscious of those interests and the means to achieve them in the
>>>>>>> context of a changing environment.
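>>>>>>>
>>>>>>> To make that concrete, here is a minimal toy sketch in Python,
>>>>>>> purely my own illustration (no real agent framework, all names
>>>>>>> invented), of a loop that senses its environment, acts toward a
>>>>>>> goal, and reacts to the effects of its own actions:
>>>>>>>
>>>>>>> class World:
>>>>>>>     def __init__(self, state=0):
>>>>>>>         self.state = state
>>>>>>>     def sense(self):
>>>>>>>         return self.state
>>>>>>>     def apply(self, action):
>>>>>>>         self.state += action
>>>>>>>
>>>>>>> class Agent:
>>>>>>>     def __init__(self, goal):
>>>>>>>         self.goal = goal  # it must represent its own goal
>>>>>>>     def step(self, world):
>>>>>>>         # sense the environment, including effects of past actions
>>>>>>>         state = world.sense()
>>>>>>>         if state == self.goal:  # aware the goal is achieved
>>>>>>>             return True
>>>>>>>         action = 1 if state < self.goal else -1  # act toward goal
>>>>>>>         world.apply(action)  # its own effect on the environment
>>>>>>>         return False
>>>>>>>
>>>>>>> world, agent = World(), Agent(goal=5)
>>>>>>> while not agent.step(world):  # the closed sense-act-react loop
>>>>>>>     pass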
>>>>>>>
>>>>>>> Here rational action is used as a proxy for intelligent behavior.
>>>>>>> That is to say that philosophical zombies cannot exist because they
>>>>>>> would have no way to display intelligent goal-seeking behavior: they
>>>>>>> would not be conscious of any goal; therefore, they could not be
>>>>>>> conscious of how to navigate the environment to achieve a goal; nor
>>>>>>> would they be conscious of whether they had achieved the goal or not.
>>>>>>> That is to say that, logically, a philosophical zombie does not have
>>>>>>> the properties necessary to fit its own definition. Philosophical
>>>>>>> zombies are an unwarranted assumption, made to make the Hard Problem
>>>>>>> of consciousness seem like a different question from the so-called
>>>>>>> Easy Problem, whereas CAP says that the complete solution of the Easy
>>>>>>> Problem is also the solution to the Hard Problem.
>>>>>>>
>>>>>>> While pre-programmed behavior could mimic rational action in the
>>>>>>> short term, any sufficient change in the p-zombie's environment, like
>>>>>>> a new obstacle, would thwart it and expose it as a zombie. In the
>>>>>>> harsh world of nature, philosophical zombies, even if they came to
>>>>>>> exist by some extraordinary chance, would quickly go extinct.
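>>>>>>>
>>>>>>> Again as a toy sketch of my own (illustrative names only): the
>>>>>>> "zombie" below replays a fixed script and never senses the world,
>>>>>>> so a new obstacle strands it, while the causally aware agent
>>>>>>> re-senses the world at each step and routes around the obstacle:
>>>>>>>
>>>>>>> def run_zombie(pos, goal, obstacle):
>>>>>>>     script = [1] * (goal - pos)     # plan fixed in advance
>>>>>>>     for step in script:
>>>>>>>         if pos + step == obstacle:  # blocked, but it never senses
>>>>>>>             continue                # the move is wasted; it stalls
>>>>>>>         pos += step
>>>>>>>     return pos
>>>>>>>
>>>>>>> def run_aware(pos, goal, obstacle):
>>>>>>>     while pos != goal:
>>>>>>>         step = 1 if pos < goal else -1
>>>>>>>         if pos + step == obstacle:  # senses the new obstacle...
>>>>>>>             step *= 2               # ...and steps over it
>>>>>>>         pos += step
>>>>>>>     return pos
>>>>>>>
>>>>>>> print(run_zombie(0, 5, 3))  # 2: stalls, exposed as a zombie
>>>>>>> print(run_aware(0, 5, 3))   # 5: reaches the goal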
>>>>>>>
>>>>>>> Therefore philosophical zombies, as opposed to fungal zombies, are
>>>>>>> both logically and physically impossible. It is, in short, impossible
>>>>>>> to do what humans evolved from worms to be able to do without being,
>>>>>>> in some measure, more conscious than a worm.
>>>>>>>
>>>>>>> > Accordingly, I believe that consciousness was an unavoidable
>>>>>>> > consequence during the course of evolution of life on Earth, as once
>>>>>>> > nature created creatures having certain capacities/abilities,
>>>>>>> > consciousness had no other choice but to exist; it became logically
>>>>>>> > necessary.
>>>>>>>
>>>>>>> Yes, there is certainly a natural history of consciousness. You can
>>>>>>> look at the evolution of the nervous system of vertebrates through
>>>>>>> the processes of encephalization and decussation and see a causal
>>>>>>> narrative. The moment life decided to detach itself from the ocean
>>>>>>> bottom and move around looking for food, a complete, or open,
>>>>>>> digestive system with a separate mouth and anus became advantageous.
>>>>>>> Once it started moving mouth first through the world, sensory organs
>>>>>>> like eyes or antennae became advantageous; moreover, those creatures
>>>>>>> which evolved sensory organs like taste buds and eyes near their
>>>>>>> mouths, as opposed to near their anus, had a survival advantage.
>>>>>>>
>>>>>>> Once an organism had a concentration of senses on its rostral, or
>>>>>>> front, end, that end became differentiated from the caudal, or rear,
>>>>>>> end of the organism. The cluster of sensory organs became a
>>>>>>> rudimentary head. As part of that differentiation, called
>>>>>>> encephalization, it became advantageous for the organism to develop
>>>>>>> an organ to process the sensory information and to locate it near the
>>>>>>> sense organs. The organ I speak of began as a small nerve cluster. As
>>>>>>> successive generations of the organism started moving through the
>>>>>>> environment faster, sensing and avoiding danger, finding food and
>>>>>>> resources, weathering natural disasters and extinction events, and
>>>>>>> generally leading more complicated lives, it finally evolved into the
>>>>>>> conscious brain.
>>>>>>>
>>>>>>> > The same, I believe, is true for AI. It is unavoidable when we
>>>>>>> > create machines of certain abilities, and I believe existing
>>>>>>> > software is already conscious. For example, my open source project
>>>>>>> > on artificial sentience could be such an example:
>>>>>>> > https://github.com/jasonkresch/bots
>>>>>>>
>>>>>>> Yes, I agree. These AI learned how to play soccer/football on their
>>>>>>> own. They are certainly conscious of one another, the ball, and their
>>>>>>> goals, and that consciousness allows some very complex goal-seeking
>>>>>>> behavior to emerge. By the CAP, the AI agents in the following video
>>>>>>> presentation of a recent Science paper by Liu et al. are conscious:
>>>>>>>
>>>>>>> https://www.youtube.com/watch?v=KHMwq9pv7mg&t=10s
>>>>>>>
>>>>>>> Some might object because the consciousness is expressed within a
>>>>>>> virtual setting. But that's all right, because Google built bodies
>>>>>>> for the little guys:
>>>>>>>
>>>>>>> https://www.youtube.com/watch?v=RbyQcCT6890
>>>>>>>
>>>>>>> If you push them over, they stand right back up. So yeah, the CAP
>>>>>>> says they are rudimentarily conscious because they display causal
>>>>>>> awareness and rational action. They lie somewhere between thermostats
>>>>>>> and humans on the consciousness scale. Someday, their consciousness
>>>>>>> may far surpass ours.
>>>>>>>
>>>>>>> Stuart LaForge
>>>>>>>
>>>>>>>
>>>>>>>