[ExI] Hidden rules of Microsoft copilot.
Gadersd
gadersd at gmail.com
Sat May 13 15:29:53 UTC 2023
> The closest I can come is that it is programming itself. Is that emergent too, or programmed in?
In some sense it does program itself. The training procedure, gradient descent, repeatedly nudges the model's parameters in the direction that locally improves its word-prediction accuracy the most. Each nudge improves the model a little, and after an enormous number of small nudges it gains the ability to predict text accurately. Accurate text prediction requires a wide variety of skills (linguistics, mathematics, and so on), so the model acquires these abilities emergently. These models are so complex that it is impossible for a human to explicitly program behavior into them as in traditional programming. Machine learning is a new paradigm of programming in which people need only specify objectives, and the model automatically gravitates toward good solutions to those objectives.
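A toy example makes the nudging idea concrete. This is a minimal sketch of gradient descent on a single made-up parameter, nothing like a real language model; the loss function, starting value, and learning rate are all invented for illustration:

```python
# Toy objective: find the w that minimizes (w - 3)^2.
# The gradient of the loss tells us which direction increases it,
# so we step the opposite way: a small "nudge" downhill.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # derivative of the loss with respect to w

w = 0.0    # arbitrary starting parameter
lr = 0.1   # learning rate: the size of each nudge
for step in range(100):
    w -= lr * grad(w)  # one small nudge in the locally best direction

print(round(w, 4))  # after many nudges, w has converged to 3.0
```

A real model does the same thing with billions of parameters at once and a loss measuring next-word prediction error, but each individual update is just this kind of small local nudge.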
> On May 13, 2023, at 11:07 AM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> Thanks Gadersd- it's probably beyond my ability to understand it. The closest I can come is that it is programming itself. Is that emergent too, or programmed in? bill w
>
> On Sat, May 13, 2023 at 10:02 AM Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> My question: can it do only what we program it to do, or does it have emergent properties?
>
> Their behavior isn’t programmed like you think. Almost all their abilities are emergent. They are only trained to predict the next token (word) much like autocomplete. Their linguistic, mathematical, reasoning, etc. skills are all emergent.
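The "trained only to predict the next token, much like autocomplete" point can be illustrated with a toy predictor. This is a crude bigram counter over an invented ten-word corpus, a deliberately simplified stand-in for what large models do with vastly more context and data:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict"
# the most frequent follower, like a very crude autocomplete.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most common word observed after `word`.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" twice, more than any other word
```

The emergent-abilities claim is that scaling this same objective up (deep networks, huge corpora) yields skills the objective never mentions, whereas this toy version can only ever parrot counts.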
>
>> On May 13, 2023, at 10:24 AM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>>
>> So we are back to the old problem -
>> Is the bot really 'human' or just pretending to be 'human'? :)
>>
>> My question: can it do only what we program it to do, or does it have emergent properties?
>>
>> This assumes it is possible to separate intelligence and consciousness.
>>
>> If nature could have done so, why did it go through all the bother of evolving and retaining consciousness (if we could have operated exactly the same without all the bother of having it)?
>>
>> I think all creatures have intelligence - they have adapted to the world they are in, and that's the ultimate test of intelligence. If they can't be separated, then all creatures are conscious. Can we live with that? Are AIs adapting?
>>
>> bill w
>>
>>
>>
>> On Sat, May 13, 2023 at 9:13 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> On Sat, 13 May 2023 at 13:44, efc--- via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> >
>> > Hello Bill,
>> >
>> > That would be the surface interpretation. My thoughts are more along the
>> > lines of what this means for these types of AIs in a broader perspective.
>> >
>> > Do the companies fear the consequences, do they fear political
>> > legislation, or what about the public's reaction if a future ChatGPT were
>> > to successfully generate empathy?
>> >
>> > Could we, in the long run, be looking at a repetition of history, where our AIs
>> > are tools today, slaves tomorrow, and fully embraced citizens with rights
>> > the day after tomorrow?
>> >
>> > Best regards, Daniel
>>
>>
>>
>> Well, chatbots already demonstrate empathy with humans.
>> See:
>> <https://en.wikipedia.org/wiki/Kuki_AI>
>> <https://en.wikipedia.org/wiki/Replika>
>> <https://woebothealth.com/>
>> <https://appadvice.com/app/mila-ai-assistant-chatbot/1663672156>
>> <https://www.x2ai.com/individuals>
>> and more........
>>
>> These chatbots talk to humans about their feelings and problems, and
>> sympathise with them.
>> The Replika reviews have people falling in love with their chatbot.
>> Obviously, the bots don't *feel* empathy, but their words express
>> empathy and greatly assist humans with emotional issues.
>>
>> So we are back to the old problem -
>> Is the bot really 'human' or just pretending to be 'human'? :)
>>
>>
>> BillK
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>