[ExI] GPT's "Cogito Ergo Sum"

Adrian Tymes atymes at gmail.com
Mon Jul 24 19:17:09 UTC 2023


Wrong, as already demonstrated in non-AI contexts, by factors that AI
self-improvement does not change.

On Mon, Jul 24, 2023, 10:54 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Then they will eventually fail, leaving only those that can improve
> toward playing only win/win games to head into the singularity, right?
>
>
> On Mon, Jul 24, 2023 at 11:46 AM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Mon, Jul 24, 2023, 10:23 AM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> So all sufficiently intelligent systems must realize this
>>>
>>
>> It is generally a fallacy to assume that any specific realization will
>> happen, pretty much ever.
>>
>> Direct, clear explanations can get concepts across, but relying on
>> individuals or groups to realize things without them being pointed out has
>> a significant failure rate.
>>
>>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>