[ExI] Holy cow!
John Clark
johnkclark at gmail.com
Mon Apr 13 12:42:31 UTC 2026
On Sun, Apr 12, 2026 at 6:22 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> even with knowledge of the control flow and setting debugging breakpoints,
> knowing what methods may be called isn't always trivial. Consider
> this block of code:
>
> x = 4;
> while (true) {
>     if (isGoldbachCounterExample(x)) { break; }
>     x += 2;
> }
> print x;
> foo();
>
> We don't know if foo() will ever be called because mathematicians don't
> yet know whether there exist any counterexamples to Goldbach's conjecture.
>
Even if there are no counterexamples, there might be no way we could ever
know that. According to Turing, Goldbach's Conjecture could be true, and
thus have no counterexamples, yet there could also be no way to prove it is
true in a finite number of steps. Even worse, in general there is no way to
sort conjectures into two categories:

1) Conjectures that can be proved to be true or shown to be false.
2) Conjectures that are either false, or true but unprovable.
And if Goldbach is not in category #2, there are an infinite number of
similar conjectures that are. So it's possible that even Mr. Jupiter Brain
will spend eternity trying, unsuccessfully, to find a proof that Goldbach
is correct, while also grinding through huge even numbers looking for one
that is not the sum of two primes to prove that Goldbach is incorrect, and
being unsuccessful in that endeavor too.
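The quoted loop can be made concrete with a small, bounded sketch in Python. This is only an illustration: the helper names (is_prime, is_goldbach_counterexample) and the search cap of 10,000 are my own choices, not anything from the original message, and the cap is what makes the program halt where the quoted unbounded loop might not.

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

def is_goldbach_counterexample(x):
    """True iff even x >= 4 is NOT the sum of two primes."""
    if x < 4 or x % 2 != 0:
        return False
    # x is a counterexample only if no prime p <= x/2 has a prime partner x - p.
    return not any(is_prime(p) and is_prime(x - p)
                   for p in range(2, x // 2 + 1))

# Bounded version of the quoted loop: stop at a cap instead of running
# forever, so the program is guaranteed to terminate.
x = 4
while x <= 10_000:
    if is_goldbach_counterexample(x):
        print(x)
        break
    x += 2
else:
    print("no counterexample below 10,000")
```

Since Goldbach's conjecture has been verified by computer far beyond this range, the bounded search reports no counterexample; the point of the original example is that nobody knows whether the unbounded version ever exits its loop.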
> The saving grace is that it seems to require a higher intelligence to
> find a bug than the intelligence required to introduce a subtle one that is
> unseeable by that lower intelligence. So once we use the current generation
> of AI to harden software in common use, it will require greater and greater
> leaps in AI to find exploitable bugs, and assuming the latest generation of
> AI is always put to task to fix things before being made available to break
> things, then we might enter an era of stability.
>
Maybe, but it's almost always easier to break something than it is to
protect something from breakage. And are you sure that those more
intelligent bug-fixing AIs have only our best interests at heart and don't
have an agenda of their own?

John K Clark