[ExI] Singularity news

efc at swisscows.email efc at swisscows.email
Thu Apr 20 11:33:20 UTC 2023



On Thu, 20 Apr 2023, Jason Resch via extropy-chat wrote:

> I concur. I think this is the most probable and least risky course. In their current form, these AIs act as power magnifiers, they
> take the wishes and intentions of any human and allow them to express themselves, think, or achieve goals more ably.

I agree as well. The only thing that would happen if development and
research were limited would be to push all development underground.
Powerful nation states would never dream of abandoning R&D around AI,
so we would risk one nation state (or company) reaching the top first,
and then using that position to stop all competition.

Much better to disseminate the knowledge far and wide so that no
single research institution has a monopoly.

That being said, however, there are scary scenarios!

Imagine using this future AI to automatically crack various
implementations of SSL or popular encryption software. Describe the
opponent's hardware and software setup as closely as possible and let
your AI loose.

Or why not profiling? Today the CIA & Co. build detailed profiles of
their targets. Why not feed the AI all publicly and privately
available information on a target and use that model to predict his
next move, his decisions, his vices, etc.? Talk about a force
multiplier when trying to blackmail or persuade someone.

Best regards, 
Daniel



