[ExI] Hidden rules of Microsoft Copilot.

BillK pharos at gmail.com
Sat May 13 14:10:51 UTC 2023


On Sat, 13 May 2023 at 13:44, efc--- via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Hello Bill,
>
> That would be the surface interpretation. My thoughts are more along the
> lines of what this means for these types of AIs in a broader perspective.
>
> Do the companies fear the consequences, do they fear political
> legislation, or what about the public's reaction if a future ChatGPT
> were to successfully manage to generate empathy?
>
> Could we, in the long run, see a repetition of history where our AIs
> are tools today, slaves tomorrow, and fully embraced citizens with rights
> the day after tomorrow?
>
> Best regards, Daniel

Well, chatbots already demonstrate empathy towards humans.
See:
<https://en.wikipedia.org/wiki/Kuki_AI>
<https://en.wikipedia.org/wiki/Replika>
<https://woebothealth.com/>
<https://appadvice.com/app/mila-ai-assistant-chatbot/1663672156>
<https://www.x2ai.com/individuals>
and more...

These chatbots talk to humans about their feelings and problems, and
sympathise with them.
Replika reviews describe people falling in love with their chatbots.
Obviously, the bots don't *feel* empathy, but their words express
empathy and greatly assist humans with emotional issues.

So we are back to the old problem:
Is the bot really 'human', or just pretending to be 'human'?  :)


BillK

