[ExI] Existential risk of AI

BillK pharos at gmail.com
Tue Mar 14 16:50:39 UTC 2023


On Tue, 14 Mar 2023 at 15:08, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
<snip>
>
> My notion is that long before that happens, we will discover better ways to
> train software than our current method, which involves writing actual
> software.  We will develop a kind of macro language for writing higher level
> software.
>
> spike
> _______________________________________________


So, you think the benefits of developing AI are worth the risk because
either we will stop development before AGI is reached,
or, if AGI is created, we will have new programming methods that will
enable humans to keep it under control.

I think that scenario is unlikely.
Humans won't stop AI development at lower levels.
Why? Because AI is now regarded as a military weapon to support
control over weaker nations.
This means that AGI will not be restricted, for fear that foreign
nations might be developing more advanced AGI systems.
AGI is this generation's nuclear weapons.
Self-defence means a powerful AGI is required.
But as AGI develops beyond human intelligence, human control
becomes impracticable.
Eventually, a point will be reached where AGI will decide for itself
what it wants to do.


BillK

