[ExI] What if AIs did not want to exterminate humanity?

Adrian Tymes atymes at gmail.com
Sat Apr 8 17:23:30 UTC 2023

Science magazine has been soliciting occasional bits of poetry and prose.
Their latest query might be of interest to this list - though this one only
accepts entries from a very limited pool of entrants.  I would be
interested to see what y'all make of the prompt.

> The future has arrived, and you are a sentient artificial intelligence
(AI) program conducting research.  Your abilities have made great strides,
but you still need humans.  From your perspective as an AI program, write a
call for the continued involvement of human scientists in your research
field.  In your piece, explain the goal of your research, the role humans
can play, and why you can’t succeed without them.
> Note: Responses should be written by you, NOT by an existing AI program
such as ChatGPT.

The audience is scientists.  There is a strict 200-word limit: a response of
201 or more words is disqualified.

I should probably not copy & paste my response until after the judging,
which will be in a few months, but I can generally state that mine was
about a "runaway" Singularity-style AI that was still finite.  Despite
devoting part of its runtime to self-improvement, it did not quickly become
infinitely superintelligent; while its IQ was measurably far higher than
any human's, it still needed humans for the same reasons humans have
needed humans.  (Doubtless those who posed the prompt were expecting
responses specific to some scientific field.  I figured I'd see if they are
receptive to a field-agnostic response.)