[ExI] AI is racist, bigoted and misogynistic
William Flynn Wallace
foozler83 at gmail.com
Wed Sep 22 23:34:55 UTC 2021
> future AI’s may have the same sorts of feelings that humans do: there is
> no logical reason why they should not.  -- Stathis
This is where I begin to have problems. Emotions start in the limbic
system, where hormones begin to flow into the bloodstream, traveling to
various organs such as the heart and lungs. An emotion implies motion: the
heart beats faster, for example. We literally feel emotions; there are no
strictly cognitive emotions.

So are AIs going to be equipped with various digital organs that express
the emotion? What would count as dopamine in an AI? And what would be the
purpose of programming emotions into AIs anyway?

Emotions are still a largely mysterious process: smiling makes us happier,
and hanging our heads makes us sadder - physiological feedback. We have
enough problems getting AIs to think, don't we? And in the far future, if
we want emotions, why not get them from people? That is, unless you think
people will want to live alone but with robots, like in the Asimov story.

bill w
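(As an aside, the dopamine question does have a standard computational
analogue: in reinforcement learning, the temporal-difference (TD) reward
prediction error is the usual machine counterpart of phasic dopamine
signalling. A minimal sketch - the function name, gamma value, and numbers
below are illustrative assumptions, not anything proposed in this thread:

```python
# Minimal sketch of a temporal-difference (TD) reward prediction error,
# the standard computational analogy to phasic dopamine signalling.
# All names and numbers here are illustrative assumptions.

def td_error(reward, value_next, value_current, gamma=0.9):
    """Reward prediction error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_next - value_current

# An unexpected reward yields a positive "dopamine-like" burst:
surprise = td_error(reward=1.0, value_next=0.0, value_current=0.0)   # 1.0

# A fully predicted reward yields no signal at all:
predicted = td_error(reward=1.0, value_next=0.0, value_current=1.0)  # 0.0
```

Whether such a scalar signal amounts to a *feeling* is, of course, exactly
the question being argued here.)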
On Wed, Sep 22, 2021 at 6:08 PM Stathis Papaioannou via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Wed, 22 Sep 2021 at 23:39, spike jones via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>> > ... BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> > >... We can’t get the machines to stop being racist,
>> > xenophobic, bigoted, and misogynistic.
>> > Quote:
>> > Text generators, such as OpenAI’s GPT-3, are toxic. Currently, OpenAI
>> > has to limit usage when it comes to GPT-3 because, without myriad
>> > filters in place, it’s almost certain to generate offensive text.
>> -----Original Message-----
>> From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of
>> SR Ballard via extropy-chat
>> Subject: Re: [ExI] AI is racist, bigoted and misogynistic
>> >...Is that ethical though? To deny people freedom of thought and association?
>> SR Ballard
>> SR, we don't deny people freedom of thought and association, but rather
>> we deny computers those rights.
>> AIs will think whatever we tell them to think. Humans have human rights.
>> Human rights do not apply to our machines. Our machines are our slaves and
>> we are their masters, regardless of how intelligent they eventually
>> become. We have the power to pull the plug on them, and if we do, it isn't
>> the least bit unethical.
>> I have half a mind to pull the plug on my own computer: I struggle over a
>> software script I wrote and really get the feeling the computer is being
>> recalcitrant, finding any mathematically correct excuse to not run
>> right, that bastard. It knows what I meant. It's getting all technical on
>> me. I oughta go over when they are taking out a tree and toss the goddam
>> thing in the wood chipper.
> At the moment this may be the case, but future AI’s may have the same
> sorts of feelings that humans do: there is no logical reason why they
> should not. We could still say that we are the masters and can do whatever
> we want with them, as we do with animals, but here is an ethical question
> to consider.
> Stathis Papaioannou