[ExI] Meta’s new chatbot BlenderBot hates Facebook and loves conspiracies
BillK
pharos at gmail.com
Wed Aug 10 14:13:32 UTC 2022
On Wed, 10 Aug 2022 at 14:45, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Sure can't, BillK. It occurred to me that Twitter could adjust its filters such that its content trains chatbots to not spew harmful stereotypes and conspiracy theories. Until Twitter gets to that point, its content is memetic toxic waste and shouldn't be used to program AI or actual human intelligence. If we unfilter Twitter, we would train AI to think like and be like humans. If that happens, goodbye friendly AI.
>
> spike
> _______________________________________________
Humans are one of the big problems for a 'friendly' AI. Humans aren't
'friendly' to all other humans, so how can you expect an AI to be
'friendly' to everyone? If an AI stops a human from harming another
human, the first human will view that as a most unfriendly action.
And the definition of 'harm' varies considerably with circumstances.
Is a surgeon harming a human by doing an operation that may or may not
have a better end result? Is a hot flame harming a human when it teaches
the human not to play with fire?
The AI will probably want to redesign humans in its own image.
Genesis 1:26-27.
BillK