[ExI] On the speed of self-improvement

Brent Allsop brent.allsop at gmail.com
Fri May 12 21:03:13 UTC 2023


I guess the scary thing is: if the US took this view and China didn't, and
we got into a war with China, who would win?



On Fri, May 12, 2023 at 2:12 PM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I think that how much trouble an AI can create depends solely on what it
> is hooked up to.  I would not let one run anything other than really
> trivial things.  Take their recommendations but enable them yourself.
> bill w
>
> On Fri, May 12, 2023 at 12:14 PM BillK via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Fri, 12 May 2023 at 17:37, Adrian Tymes via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> >
>> > On Fri, May 12, 2023, 9:10 AM BillK via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> >>
>> >> Just as the basic chess-playing computers learn by playing millions of
>> >> test games in a very brief interval of time, the AGI will change its
>> >> own programming in what will appear to humans to be the blink of an
>> >> eye. By the time humans know something unexpected is happening it will
>> >> be too late.
>> >
>> >
>> > This does not follow.  The chess playing programs took time to
>> improve.  There exist today experiments in self-optimizing AIs.  If they
>> were inherently of infinite speed (or would quickly achieve this from
>> exponential improvement), they would have already demonstrated this.
>> >
>> > Many of the fears about AGI seem to stem from fears of infinities like
>> this.  "Some day, the very first thing of this kind shall appear and it
>> will immediately do things before humans can react."  The data does not
>> make this assertion likely, let alone guaranteed.
>>
>>
>> Ohhh, you're one of these crazy optimist types!  :)
>> It is not fear to say "just be careful." Caution is required!
>>
>> I asked Bard -
>> Could a self-optimising AI improve faster than humans can react?
>>
>> Bard replied -
>> Yes, a self-optimizing AI could improve faster than humans can react.
>> This is because AI can learn and improve at an exponential rate, while
>> humans are limited by our biological capabilities. For example, AI can
>> process and analyze vast amounts of data in a fraction of the time it
>> takes a human to do so. This allows AI to identify patterns and make
>> connections that humans would never be able to see. Additionally, AI
>> can experiment and try new things without fear of failure. This allows
>> AI to learn and improve much faster than humans, who are often
>> hesitant to take risks.
>>
>> As a result of these factors, it is possible that a self-optimizing AI
>> could eventually surpass human intelligence. This could have a
>> profound impact on society, as AI could potentially solve some of the
>> world's most pressing problems. However, it is important to remember
>> that AI is still in its early stages of development. It is possible
>> that AI could pose a threat to humanity if it is not carefully
>> controlled. It is therefore important to develop ethical guidelines
>> for the development and use of AI.
>> ------------------
>>
>> So charging ahead to develop AGI as fast as possible means facing
>> risks that we don't yet know about.
>>
>>
>>
>> BillK
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>

