[ExI] Existential risk of AI

spike at rainier66.com
Tue Mar 14 14:55:36 UTC 2023



-----Original Message-----
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of
BillK via extropy-chat
...

On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
< big doggy snip>
>
>>... OK then, how do we deal with a dog-level intelligence which can be trained to do good or do harm?
>
> spike
> _______________________________________________


>...Or rather, how do we deal with an AGI intelligence that looks on humans as dog-level intelligences?

BillK
_______________________________________________

Ja, BillK, there is an in-between stage here.  Currently our proto-AIs don't
have a will of their own, but dogs do, and we guide their will to do what we
want.  Long before we get to AGI superior to humans, we will be training
sub-AIs, dog-level AIs.

Then... as the software gets smarter, so do we.

spike




