[ExI] ai emotions
pharos at gmail.com
Tue Jun 25 12:43:43 UTC 2019
On Tue, 25 Jun 2019 at 12:02, Mike Dougherty <msd001 at gmail.com> wrote:
> On Tue, Jun 25, 2019, 6:11 AM Rafal Smigrodzki <rafal.smigrodzki at gmail.com> wrote:
>> ### The latter is closer to being true, of course, - if you don't have a lot of money or other useful resources, any goody-goody feelings you may have don't matter. A bit like "When a tree falls in the forest and nobody hears it, it doesn't matter if it makes a sound".
> What this thread has proven to me is that some of the smartest people I know do not understand compassion, so the likelihood that "ai emotions" will be any better is small (assuming ai will be "learning" such things from other "smart people")
> The word is rooted in "co-suffering" and is about experiencing someone's situation as one's own. Simply sharing "I understand" could be an act of compassion. Our world is so lacking in this feature that even the commonly understood meaning of the word has been lost. That seems very Newspeak to me. (smh)
That is a well-known problem for AI. If humans are biased, then what
they build will have the same biases built in. They may try to avoid
some of their known biases, such as racial prejudice, but all their
unconscious biases will still be there. Even the training data fed to
AIs will be biased.
On the other hand, humans don't want cold merciless AI intelligences.
They want AIs to be biased to look after humans.
But not *all* humans, of course.
Humans can feel compassion for the people they are bombing to
destruction, yet still feel duty-bound to continue for many reasons.
Human society is a confused mess, and if AIs become autonomous it is
likely they will follow the same path.