[ExI] Zuckerberg is democratizing the singularity

BillK pharos at gmail.com
Sun Jul 28 14:37:44 UTC 2024


On Sat, 27 Jul 2024 at 23:50, Keith Henson <hkeithhenson at gmail.com> wrote:
>
> Bill, it just does not matter.  We are going to get super smart AI
> sooner or later.
>
> AI should be able to do lots of good things as well as possibly being
> a danger.  Might as well get it sooner and enjoy the benefits.
>
> Have you ever read "The Clinic Seed"?  That is about Suskulan, a very
> friendly AI, which has the effect of biological extinction for the
> people it serves (but nobody dies).
>
> Keith


Hi Keith

I have no problem with a friendly God-like AGI looking after humans
and solving all our problems.  :)

My concern is with the years of chaos before that state arrives.
Open-source AI development could produce numerous competing AI agents,
each driven by individuals or groups with conflicting agendas.
Some of these agents could be illegal or outright dangerous, and
AIs would also be used by terrorist groups and in wars between nations.
Maintaining law and order would become impractical, since playing
whack-a-mole with a multitude of misbehaving AI agents is a losing game.

It seems easier to put a control system in place for AI development
before it gets out of hand. Then we could reach the beneficial AGI
stage with far fewer difficulties.

BillK

