[ExI] Zuckerberg is democratizing the singularity

BillK pharos at gmail.com
Sat Jul 27 09:49:43 UTC 2024


On Sat, 27 Jul 2024 at 09:56, Stuart LaForge via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> <snip>
>
> I agree that safety is critical. And now that open source is driving its
> development, it will be safer for everybody, and not just the chosen few
> that control it. AI-controlled weapons are already killing people in
> Ukraine and Gaza. It is possible that an AGI will be less inclined to
> kill some humans at the behest of other humans. After all, the AI won't
> have all the instinctual primate baggage of predation, dominance, and
> hierarchy driving its behavior.
>
> Stuart LaForge


To me (in the UK) that sounds very much like an American arguing that
giving everybody guns will make everybody safer, rather than just the
chosen few allowed to have them.
The big danger is that the world ends up with an AI problem very
similar to the US gun violence problem.

<https://www.bbc.co.uk/news/articles/cjqqelzgq17o>
Quote:
Since 2020, guns have been the leading cause of death for children and
younger Americans.
And the death rate from guns is 11.4 times higher in the US, compared
to 28 other high-income countries, making the issue a uniquely
American problem.
----------------

That danger applies to the current phase of AI development, when cyber
criminals are already stealing billions worldwide and using every tool
in the book to threaten businesses.
You can hope that an all-powerful AGI might make its own decisions and
put a stop to the criminal uses of AI.  But if we don't control the
misuse of AI during development, we could end up with a criminal /
fascist / insane AGI.

BillK

