[ExI] Existential risk of AI

Tara Maya tara at taramayastales.com
Tue Mar 14 18:13:51 UTC 2023


Interesting. If the military is the route things take, we shouldn't assume the Singularity means there will be a single super-AI. Rather, intelligence is driven by competition. We could have AIs fighting over our heads, literally, as we become increasingly irrelevant to them.

Although, following the idea Spike planted in my head, perhaps they will defend and avenge us as fiercely as John Wick did his dog.

Tara Maya


> On Mar 14, 2023, at 9:50 AM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> So, you think the benefits of developing AI are worth the risk because,
> either we will stop development before AGI is reached,
> or if AGI is created, we will have new programming methods that will
> enable humans to keep AGI under control.
> 
> I think that scenario is unlikely.
> Humans won't stop AI development at lower levels.
> Why? Because AI is now regarded as a military weapon to support
> control over weaker nations.
> This means that AGI will not be restricted, for fear that foreign
> nations might be developing more advanced AGI systems.
> AGI is this generation's nuclear weapons.
> Self-defence means a powerful AGI is required.
> But as AGI develops beyond human intelligence, human control
> becomes impracticable.
> Eventually, a point will be reached where AGI will decide for itself
> what it wants to do.
> 
> 
> BillK
