[ExI] What Does Chatbot Eugene Goostman's Success on the Turing Test Mean?

spike spike66 at att.net
Tue Jun 10 20:27:31 UTC 2014


 

 

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark
Sent: Tuesday, June 10, 2014 10:55 AM
To: ExI chat list
Subject: Re: [ExI] What Does Chatbot Eugene Goostman's Success on the Turing Test Mean?

 

>…What Does Chatbot Eugene Goostman's Success on the Turing Test Mean? It means that some human beings cannot pass the Turing Test and are as dumb as a stump…

Ja, or they are impaired by AD.  Do treat these people with respect, John, for you and I may be dumb as a stump someday.

>… Take a look at a transcript of the conversations and ask yourself if you would have been fooled. I wouldn't have been.

http://www.theguardian.com/technology/2014/jun/09/eugene-person-human-computer-robot-chat-turing-test John K Clark

Ja, but would it fool an AD patient?  One who is living in 1966 and doesn’t remember ever having owned a computer?  I think it would, and it could be therapeutic for that purpose.

Recall a few years ago when some joker rigged up Eliza and turned it loose in a teen chat site.  That population post-dated Eliza, so they didn't know it could be done, and most of them were fooled, at least at first.  With AD patients you have a population, the current 80-something crowd, many of whom never really used computers much.  My father was one; he owned a computer for years but seldom used it.

Notice the kinda chaotic nature of the conversation in the link you provided.  This would be perfect for conversing with an AD patient, for that is the nature of their discussions: chaotic, random, repetitive.  Computers never get tired or impatient, so they can answer the same question 20 times in an hour and never be bothered at all.  In that sense they could do this task better than any human: we get impatient, frustrated, bored; we grieve for the patient as we knew them in their better days, while the physical person is still present in a sense.
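A minimal sketch of the kind of tireless pattern-matching responder I have in mind, in the spirit of Eliza.  The patterns and canned replies here are purely illustrative assumptions, not taken from any real Eliza implementation, and a real therapeutic bot would need clinically vetted responses:

```python
import re
import random

# Illustrative rules: (regex pattern, possible canned replies).
# These are hypothetical examples, keyed to the kinds of repeated
# questions described above.
RULES = [
    (r"when (are|is) (my )?parents", [
        "They just called; they will be over shortly.",
        "They are on their way.",
    ]),
    (r"where am i", [
        "You are home, and you are safe.",
    ]),
]

FALLBACKS = [
    "Tell me more about that.",
    "That sounds nice.",
]

def reply(utterance: str) -> str:
    """Return a matching canned reply; the bot never tires of repetition."""
    text = utterance.lower()
    for pattern, answers in RULES:
        if re.search(pattern, text):
            return random.choice(answers)
    return random.choice(FALLBACKS)

# The same question, asked 20 times in an hour, gets the same
# patient answer every time -- no frustration, no boredom.
```

The point of the sketch is only that such a bot never varies in patience: the twentieth asking of "when are my parents coming?" costs it nothing.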

This is one of the goals I hope will result.  If you have personally never been to a nursing home or a memory care facility, once you see it and hear it firsthand, it changes your perspective, and I hope it changes your level of motivation.  It worked on me.

Another idea: you have seen the Wii avatars, the Miis, ja?  OK then, we could perhaps rig up a version of that in which the machine watches the movements of the patient and acts as a virtual mirror: it reflects the objects in the room, but it renders a very realistic Mii that looks like the patient did at 25, based on photos or video.  The patient would perceive herself as a young woman or man looking in the mirror.  I recognize this brings up a gaggle of new ethical questions, but we have those already anyway: often AD patients keep asking about their parents and when their parents are coming to pick them up, etc.  After a while, visitors just tell the patient her parents just called and will be over shortly.  Twenty-some times an hour, visitors will lie to the patient.  Does it count as a lie if the patient is unable to store the information?  I think it would be ethically in the green to make a virtual youth-mirror for AD patients, and to create a chatbot which pretends to be a visitor.

Refutation please?  Alternative ideas?

spike