[ExI] AI enhancing / replacing human abilities

Gadersd gadersd at gmail.com
Tue Apr 4 17:59:26 UTC 2023


> Is that not what "friendly" AI is supposed to be?

My point is that we should not worry so much about the scenario in which AI chooses, for its own reasons, to end humanity. Rather, we should worry about what humans will do to other humans by extending their own power with AI.

The belief that AI will become “evil” and destroy humanity places the risk in the wrong place, in my opinion. I am personally much more worried about humans armed with AI.

> On Apr 4, 2023, at 12:50 PM, Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> On Tue, Apr 4, 2023 at 9:29 AM Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> >  But what about AI enhancing all
> > the worst features of humans?
> 
> This is the real threat that AI poses. AI as an extension of human will is much more likely to be exceptionally dangerous than a fully self-motivated autonomous agent. Beware the superintelligence that obediently follows human instructions.
> 
> Is that not what "friendly" AI is supposed to be?
