[ExI] Zombies

Giovanni Santostasi gsantostasi at gmail.com
Mon May 1 02:21:54 UTC 2023


*The important question is: Is the functionality abstracted away from those
properties?*
The abstraction is that the functionality is the property. Abstraction
comes into play in pointing to what is "essential": the function. A toy
sketch below makes this concrete.
Another interesting video (notice it is all about function):
https://www.youtube.com/watch?v=0QczhVg5HaI
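
To make the point concrete, here is a toy sketch in Python (the names and
values are made up, purely for illustration): two "substrates" with
completely different physical states can implement the same function once
a dictionary maps those states to abstract 1s and 0s.

    # Substrate A: states labelled by a physical property.
    substrate_a_states = ["redness", "greenness", "redness", "redness"]
    dictionary_a = {"redness": 1, "greenness": 0}

    # Substrate B: states are voltage levels on a chip.
    substrate_b_states = [5.0, 0.0, 5.0, 5.0]
    dictionary_b = {5.0: 1, 0.0: 0}

    def count_ones(states, dictionary):
        # The abstract function only ever sees 1s and 0s,
        # never the underlying property itself.
        return sum(dictionary[s] for s in states)

    print(count_ones(substrate_a_states, dictionary_a))  # 3
    print(count_ones(substrate_b_states, dictionary_b))  # 3, same function

The function is indifferent to whether the 1 is "redness" or 5 volts;
that indifference is exactly what I mean when I say the function is what
is essential.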


On Sun, Apr 30, 2023 at 7:13 PM Brent Allsop <brent.allsop at gmail.com> wrote:

>
> Right.  The stuff chips are made of has properties.  I'm not one of those
> people who think consciousness must be meat.  The important question is:
> Is the functionality abstracted away from those properties?  As in, it
> doesn't matter what property is representing a 1, as long as you have an
> abstract dictionary which tells you which property it is that is the 1.
> Kind of like a piece of software running on naked hardware, vs running on
> top of a virtual machine, with a dictionary interface between the two.
> If you have a chip that is running directly on properties like redness,
> that is very different from a chip running on 1s and 0s, abstracted away
> from the properties.
> One chip knows what redness is like; the other is just 1s and 0s, where
> everything needs a dictionary.
>
>
> On Sun, Apr 30, 2023 at 5:03 PM Giovanni Santostasi <gsantostasi at gmail.com>
> wrote:
>
>> Hi Brent,
>> It was a chip so it had no glutamate in it but just code. Hint, hint,
>> hint....
>>
>> On Sun, Apr 30, 2023 at 4:02 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Yeah, that is exactly what we, and pretty much everyone in the world, are
>>> trying to iron out.
>>> I liked it when Commander Data wanted to know what emotions were like,
>>> so he sought out an emotion chip.
>>> https://www.youtube.com/watch?v=BLDsDcsGuRg
>>> I just wish he had said something like: "Oh, THAT is what redness
>>> is like."
>>>
>>>
>>>
>>>
>>> On Sun, Apr 30, 2023 at 4:45 PM Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> This is reminiscent of our recent debate:
>>>>
>>>> https://youtu.be/vjuQRCG_sUw
>>>>
>>>> Jason
>>>>
>>>> On Sun, Apr 30, 2023, 6:37 PM Jason Resch <jasonresch at gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sun, Apr 30, 2023, 5:11 PM Gordon Swobe <gordon.swobe at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The mere fact that an LLM can be programmed/conditioned by its
>>>>>> developers to say it is or is not conscious should be evidence that it is
>>>>>> not.
>>>>>>
>>>>>
>>>>> Should we take the ability of humans or animals to act or be trained
>>>>> as evidence they are not conscious?
>>>>>
>>>>>
>>>>>> Nobody wants to face the fact that the founders of OpenAI themselves
>>>>>> insist that the only proper test of consciousness in an LLM would require
>>>>>> that it be trained on material devoid of references to first person
>>>>>> experience.
>>>>>>
>>>>>
>>>>> Their qualifications are as computer scientists, not philosophers of
>>>>> mind. Neither linguists nor AI researchers are experts in the field of
>>>>> consciousness. What does David Chalmers say about them? Have you looked?
>>>>>
>>>>> The test OpenAI proposes, if passed, would be strong evidence of
>>>>> human-level reflexive consciousness. But failure to pass such a test is not
>>>>> evidence against consciousness.
>>>>>
>>>>> Also: Have you taken a few moments to consider how impossible the test
>>>>> they propose would be to implement in practice? Can they not think of an
>>>>> easier test? What is their definition of consciousness?
>>>>>
>>>>>
>>>>>> It is only because of that material in the training corpus that LLMs can
>>>>>> write so convincingly in the first person that they appear as conscious
>>>>>> individuals and not merely as very capable calculators and language
>>>>>> processors.
>>>>>>
>>>>>
>>>>> How do you define consciousness?
>>>>>
>>>>> Jason
>>>>>
>>>>>
>>>>>
>>>>>> -gts
>>>>>>
>>>>>> On Sun, Apr 30, 2023 at 7:30 AM Jason Resch via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Apr 30, 2023, 5:23 AM Ben Zaiboc via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>> On 29/04/2023 23:35, Gordon Swobe wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Apr 29, 2023 at 3:31 PM Ben Zaiboc via extropy-chat <
>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>
>>>>>>>>> So you believe them when they claim not to be conscious, but don't
>>>>>>>>> believe them when they don't.
>>>>>>>>>
>>>>>>>>> And you expect us to take your reports of what they say as
>>>>>>>>> evidence for whether they are conscious or not.
>>>>>>>>>
>>>>>>>>> Can you see a problem with that?
>>>>>>>>>
>>>>>>>>
>>>>>>>> As I explained in another message, (to you, I think), I first
>>>>>>>> entered these discussions a couple of months ago prepared to argue that
>>>>>>>> people were being deceived by the LLMs; that ChatGPT is lying when it says
>>>>>>>> it has consciousness and genuine emotions and so on.
>>>>>>>>
>>>>>>>> I had no personal experience with LLMs but a friend had literally
>>>>>>>> fallen in love with one, which I found more than a little alarming.
>>>>>>>>
>>>>>>>> As it turns out, GPT-4 is saying everything I have always believed
>>>>>>>> would be true of such applications as LLMs. I’ve been saying it for decades.
>>>>>>>>
>>>>>>>>
>>>>>>>> Good grief, man, are you incapable of just answering a question?
>>>>>>>>
>>>>>>>> I suppose I'd better take your reply as a "No": you don't see a
>>>>>>>> problem with your double-standard approach to this issue.
>>>>>>>>
>>>>>>>> Please feel free to correct me, and change your (implied) answer to
>>>>>>>> "Yes".
>>>>>>>>
>>>>>>>> And when you say "prepared to argue...", I think you mean
>>>>>>>> "determined to argue...". But predetermined prejudicial opinions are no
>>>>>>>> basis for a rational argument; they are a good basis for a food-fight,
>>>>>>>> though, which is what we have here. One which you started, and seem
>>>>>>>> determined to finish.
>>>>>>>>
>>>>>>>> You may not have noticed (I suspect not), but most of us here
>>>>>>>> (myself included) have no dogmatic insistence on whether or not these AI
>>>>>>>> systems can or can't have consciousness, or understand what they are
>>>>>>>> saying. We are willing to listen to, and be guided by, the available
>>>>>>>> evidence, and change our minds accordingly. It's an attitude that underlies
>>>>>>>> something called the scientific method. Give it a try, you might be
>>>>>>>> surprised by how effective it is. But it comes with a warning: It may take
>>>>>>>> you out of your comfort zone, which can be, well, uncomfortable. I suspect
>>>>>>>> this is why it's not more popular, despite how very effective it is.
>>>>>>>>
>>>>>>>> Personally, I think a little discomfort is worth it for the better
>>>>>>>> results, when trying to figure out how the world works, but that's just me.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Well said Ben. Your advice brought to mind this quote:
>>>>>>>
>>>>>>> "If a man will begin with certainties, he shall end with doubts, but
>>>>>>> if he will be content to begin with doubts he shall end in certainties."
>>>>>>> -- Francis Bacon
>>>>>>>
>>>>>>> Jason
>>>>>>>