[ExI] Hidden rules of Microsoft copilot.
Gadersd
gadersd at gmail.com
Sat May 13 15:00:41 UTC 2023
> My question: can it do only what we program it to do, or does it have emergent properties?
Their behavior isn’t programmed in the way you might think. Almost all of their abilities are emergent. They are trained only to predict the next token (roughly, the next word), much like autocomplete, yet their linguistic, mathematical, and reasoning skills all emerge from that training.
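To make the "autocomplete" analogy concrete, here is a toy sketch of next-word prediction by frequency counting. All names here are illustrative, and a real LLM learns a neural probability distribution over subword tokens rather than raw bigram counts, but the training objective has the same shape: given the text so far, predict what comes next.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen after `word`, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Nothing in this sketch is told what a cat or a mat is; any apparent "knowledge" falls out of the statistics of the training text, which is the sense in which LLM abilities are emergent rather than explicitly programmed.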
> On May 13, 2023, at 10:24 AM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>
> So we are back to the old problem -
> Is the bot really 'human' or just pretending to be 'human'? :)
>
> My question: can it do only what we program it to do, or does it have emergent properties?
>
> This assumes it is possible to separate intelligence and consciousness.
>
> If nature could have done so, why did it go through all the bother of evolving and retaining consciousness (if we could have operated exactly the same without all the bother of having it)?
>
> I think all creatures have intelligence - they have adapted to the world they are in, and that's the ultimate test of intelligence. If they can't be separated, then all creatures are conscious. Can we live with that? Are AIs adapting?
>
> bill w
>
>
>
> On Sat, May 13, 2023 at 9:13 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> On Sat, 13 May 2023 at 13:44, efc--- via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > Hello Bill,
> >
> > That would be the surface interpretation. My thoughts are more along the
> > lines of what this means for these types of AIs in a broader perspective.
> >
> > Do the companies fear the consequences? Do they fear political
> > legislation? And what about the public's reaction if a future ChatGPT
> > successfully managed to generate empathy?
> >
> > Could we, in the long run, see a repetition of history where our AIs
> > are tools today, slaves tomorrow, and fully embraced citizens with rights
> > the day after tomorrow?
> >
> > Best regards, Daniel
>
>
>
> Well, chatbots already demonstrate empathy with humans.
> See:
> <https://en.wikipedia.org/wiki/Kuki_AI>
> <https://en.wikipedia.org/wiki/Replika>
> <https://woebothealth.com/>
> <https://appadvice.com/app/mila-ai-assistant-chatbot/1663672156>
> <https://www.x2ai.com/individuals>
> and more........
>
> These chatbots talk to humans about their feelings and problems, and
> sympathise with them.
> The Replika reviews have people falling in love with their chatbot.
> Obviously, the bots don't *feel* empathy, but their words express
> empathy and greatly assist humans with emotional issues.
>
> So we are back to the old problem -
> Is the bot really 'human' or just pretending to be 'human'? :)
>
>
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat