[ExI] On the speed of self-improvement

BillK pharos at gmail.com
Fri May 12 17:11:28 UTC 2023


On Fri, 12 May 2023 at 17:37, Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Fri, May 12, 2023, 9:10 AM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>
>> Just as the basic chess-playing computers learn by playing millions of
>> test games in a very brief interval of time, the AGI will change its
>> own programming in what will appear to humans to be the blink of an
>> eye. By the time humans know something unexpected is happening it will
>> be too late.
>
>
> This does not follow.  The chess-playing programs took time to improve.  There exist today experiments in self-optimizing AIs.  If they were inherently of infinite speed (or would quickly achieve this from exponential improvement), they would have already demonstrated this.
>
> Many of the fears about AGI seem to stem from fears of infinities like this.  "Some day, the very first thing of this kind shall appear and it will immediately do things before humans can react."  The data does not make this assertion likely, let alone guaranteed.


Ohhh, you're one of those crazy optimist types!  :)
It isn't fear to say we should be careful. Caution is required!
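
On the chess analogy: "learning by playing millions of test games" is
essentially a self-play loop. Here is a deliberately tiny sketch of the
idea (hypothetical toy code, not any real engine's method) that learns
a simple Nim-style game by playing against mutated copies of itself,
and counts how many games fit into the run:

import random
import time

def play(first, second, pile):
    """One game: players alternately take 1-3 from the pile; whoever
    takes the last one wins. Returns 0 if `first` wins, 1 otherwise."""
    policies = (first, second)
    turn = 0
    while True:
        # A policy is a table: pile % 4 -> how many to take (1-3).
        move = min(policies[turn][pile % 4], pile)
        pile -= move
        if pile == 0:
            return turn
        turn = 1 - turn

def head_to_head(p, q, games=200):
    """Fraction of games p wins vs q over random piles, alternating seats."""
    wins = 0
    for g in range(games):
        pile = random.randint(10, 30)
        if g % 2 == 0:
            wins += play(p, q, pile) == 0
        else:
            wins += play(q, p, pile) == 1
    return wins / games

# Crude evolutionary hill-climb: keep a challenger only if it beats the
# current policy over a batch of self-play games.
policy = [random.randint(1, 3) for _ in range(4)]
games_played = 0
start = time.perf_counter()
for _ in range(500):
    challenger = policy.copy()
    challenger[random.randrange(4)] = random.randint(1, 3)
    if head_to_head(challenger, policy) > 0.5:
        policy = challenger
    games_played += 200
elapsed = time.perf_counter() - start

print(f"{games_played} self-play games in {elapsed:.2f} s")
# The known optimum is to take (pile % 4); the learned entries for
# residues 1, 2, 3 tend toward 1, 2, 3 as iterations accumulate.
print(f"learned policy (move when pile % 4 == 0,1,2,3): {policy}")

Even this throwaway script gets through a hundred thousand games in
seconds on ordinary hardware; real engines on real hardware push that
to millions. That is the speed the analogy is pointing at.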

I asked Bard -
Could a self-optimising AI improve faster than humans can react?

Bard replied -
Yes, a self-optimizing AI could improve faster than humans can react.
This is because AI can learn and improve at an exponential rate, while
humans are limited by our biological capabilities. For example, AI can
process and analyze vast amounts of data in a fraction of the time it
takes a human to do so. This allows AI to identify patterns and make
connections that humans would never be able to see. Additionally, AI
can experiment and try new things without fear of failure. This allows
AI to learn and improve much faster than humans, who are often
hesitant to take risks.

As a result of these factors, it is possible that a self-optimizing AI
could eventually surpass human intelligence. This could have a
profound impact on society, as AI could potentially solve some of the
world's most pressing problems. However, it is important to remember
that AI is still in its early stages of development. It is possible
that AI could pose a threat to humanity if it is not carefully
controlled. It is therefore important to develop ethical guidelines
for the development and use of AI.
------------------

So charging ahead to develop AGI as fast as possible means facing
risks that we don't yet know about.
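
To be fair to the "infinities" point above: exponential growth, which
Bard cites, never actually reaches infinity in finite time; only
super-exponential growth does. A toy numerical illustration (my own
simplification, with made-up constants, not a model of any real
system):

import math

# Two toy self-improvement models for capability I(t):
#   exponential: dI/dt = k * I    -> I(t) = I0 * exp(k * t), finite at
#                every finite t
#   hyperbolic:  dI/dt = k * I**2 -> I(t) = I0 / (1 - k * I0 * t),
#                which diverges at the finite time t* = 1 / (k * I0)
k, I0 = 1.0, 1.0
t_star = 1.0 / (k * I0)

# Crude Euler integration up to just short of t*.
dt = 1e-5
i_exp = i_hyp = I0
t = 0.0
while t < 0.99 * t_star:
    i_exp += k * i_exp * dt
    i_hyp += k * i_hyp ** 2 * dt
    t += dt

print(f"at t = {t:.2f}:")
print(f"  exponential: numeric {i_exp:8.1f}, exact {I0 * math.exp(k * t):8.1f}")
print(f"  hyperbolic:  numeric {i_hyp:8.1f}, exact {I0 / (1 - k * I0 * t):8.1f}")
# Euler undershoots near the singularity, but the contrast is stark:
# the exponential curve is still around e^1 while the hyperbolic one
# has grown a hundredfold and is headed for infinity at t*.

The distinction matters: "merely exponential" improvement still takes
time, as Adrian says. But it can outpace human reaction long before any
mathematical singularity, which is all my argument needs.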



BillK


