[ExI] AI enhancing / replacing human abilities

Gadersd gadersd at gmail.com
Tue Apr 4 20:54:04 UTC 2023


> Part of my point is to wonder how much of the efforts to keep AI from becoming "evil" will have the likely and predictable result - despite this probably not being the publicly declared intention of those proposing it - of making AI easier to use for malicious purposes.

I concur. In an adversarial environment it is almost never optimal from the perspective of one group to halt progress if the others cannot be prevented from continuing.

The AI safety obsession is largely moot, since any malicious organization with significant capital can develop and deploy its own AI. At best, AI safety measures prevent low-capital individuals from using AI for malicious ends, and only until the technology becomes cheap enough for anyone to develop powerful AI.

I am not sure how much good delaying the inevitable, the ability of any individual to use AI for harm, will actually do. We will have to face this reality eventually. Perhaps a case can be made for delaying individual AI-powered efficacy until we have public safety mechanisms in place to deal with it.

In any case, this only applies to small individual actors. China and other state powers will have their way with AI regardless.

> On Apr 4, 2023, at 4:25 PM, Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> On Tue, Apr 4, 2023 at 11:02 AM Gadersd via extropy-chat <extropy-chat at lists.extropy.org <mailto:extropy-chat at lists.extropy.org>> wrote:
>> Is that not what "friendly" AI is supposed to  be? 
> 
> My point is that we should not worry so much about the scenario that AI chooses, for its own reasons, to end humanity. Rather, we should worry about what humans will do to other humans by extending their power with AI.
> 
> The belief that AI will become “evil” and destroy humanity is placing the risk in the wrong place in my opinion. I am personally much more worried about humans armed with AI.
> 
> Part of my point is to wonder how much of the efforts to keep AI from becoming "evil" will have the likely and predictable result - despite this probably not being the publicly declared intention of those proposing it - of making AI easier to use for malicious purposes. 
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

