[ExI] Existential risk of AI
Adrian Tymes
atymes at gmail.com
Tue Mar 14 18:32:12 UTC 2023
On Tue, Mar 14, 2023 at 7:47 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Tue, 14 Mar 2023 at 14:29, spike jones via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > OK then, how do we deal with a dog-level intelligence which can be
> trained to do good or do harm?
>
> Or rather, how do we deal with an AGI intelligence that looks on
> humans as dog-level intelligences?
>
By being good boys and girls?
Or, less in jest: by continuing to do the things the AGIs don't excel at
(whether or not they are capable of them — superintelligence does not mean
supreme ability at every activity one is even marginally capable of).