[ExI] Holy cow!

Adrian Tymes atymes at gmail.com
Mon Apr 13 13:13:47 UTC 2026


On Sun, Apr 12, 2026 at 6:22 PM Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> On Sun, Apr 12, 2026, 4:47 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> I have run into the halting problem in practice, as knowing whether
>> the program will halt.  Knowing whether a given function will be
>> invoked...granted, this is a problem if you don't control the code and
>> can't insert checkpoints.  When I'm developing some program, I
>> generally can do that - though, as you note later, this is a specific
>> case, not the general case.
>
> True. Though even with knowledge of the control flow and setting debugging break points, knowing what methods may be called isn't always trivial. Consider this block of code:
>
> x = 4;
> while (true) {
>    if (isGoldbachCounterExample(x)) { break; }
>    x += 2;
> }
> print x;
> foo();
>
> We don't know if foo() will ever be called because mathematicians don't yet know whether there exist any counterexamples to Goldbach's conjecture.

Consider that this would only be run on a computer that actually
exists - which, by virtue of existing, has finite memory and thus a
maximum exact integer it can represent.  So, for a given computer, we
can know whether foo() will be called, since we only need to check
values up to that maximum.
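A minimal sketch of this bounded version, in Python (the cap and the
helper names here are illustrative stand-ins, not from the thread; a
real machine's limit would be vastly larger):

```python
def is_prime(n: int) -> bool:
    """Trial division, sufficient for a small illustrative bound."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_goldbach_counterexample(x: int) -> bool:
    """True if even x >= 4 is NOT the sum of two primes."""
    return not any(is_prime(p) and is_prime(x - p)
                   for p in range(2, x // 2 + 1))

# Stand-in for the largest exact value the hypothetical machine holds.
MAX_VALUE = 10_000

def foo_would_run(limit: int = MAX_VALUE) -> bool:
    """Decide, for this finite machine, whether the quoted loop ever
    breaks (and thus whether foo() is reached) before exceeding the
    machine's maximum value."""
    x = 4
    while x <= limit:
        if is_goldbach_counterexample(x):
            return True
        x += 2
    # The loop would exceed the machine's capacity before foo() runs.
    return False
```

For any concrete bound the question is decidable by exhaustive check,
which is exactly the "specific machine vs. hypothetical general
machine" distinction.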

This is another example of the "specific vs. hypothetical general"
divide you noted.

>> indeed.  Perfect cybersecurity for any sufficiently complex system is
>> not practical.
>
> Yes, I think this is John's argument. If super intelligent AI can find exploitable bugs in code written by less intelligent people or agents, then things may inevitably become an arms race between AI hardening code and more intelligent AI breaking it.
>
> The saving grace is that it seems to require higher intelligence to find a subtle bug than to introduce one that is invisible to that lower intelligence. So once we use the current generation of AI to harden software in common use, finding exploitable bugs will require ever greater leaps in AI capability; and assuming the latest generation of AI is always put to work fixing things before being made available to break things, we might enter an era of stability.

Indeed.  And for the near future, it appears that everyone making the
most advanced AIs is interested in stability; the rogue actors
interested in instability don't have the same degree of resources, and
aren't making the most advanced AIs.

> But he is right to be concerned: if Mythos or some future AI is a quantum leap beyond current ones, it will be in a position to find bugs that no previous AI or human engineer was able to spot. Bostrom listed hacking as one of the superpowers of superintelligence:
>
> https://alwaysasking.com/when-will-ai-take-over/#Superpowers_of_Superintelligence

It is true that this is a hypothetical danger.  The solution may lie
in alignment - not necessarily always of the AIs themselves, but of
whatever is making the new AIs.  It is possible that at some point in
the future this will become purely AIs making AIs...but we are not
there yet with the Mythos that exists today, and John wanted to know
if people would remember Mythos finding these bugs or Iran being
bombed, putting the question squarely in the context of what exists
right now.  (Thus, the Singularity wouldn't apply even if it literally
happens tomorrow.)

>> However, there is a world of difference between "an attack is possible
>> at all" and "an attack is likely enough to seriously worry about", let
>> alone "this particular attack will definitely happen any time now".
>>
>> John confuses these three.  The ability of AI to discover this many
>> vulnerabilities does not by itself move us out of the first category,
>> no matter how much John insists otherwise.
>
> It's unclear to me what fraction of code Mythos looked at, but let's say any given software library has only a 10% chance of being exploited. That would put the chance of any system being vulnerable to close to 100% given how many individual programs and libraries any given modern machine is likely to contain.

That's a common error: "X% sounds like a plausibly low number, right?
But if we scale X% across the many instances we have, we'd see a
problem too large to miss.  So why aren't we seeing it?"  The answer
is often that the actual X% is far, far lower than the initial guess.

In this case, "any given software library has only a 0.000...1% chance
of being exploited" is much closer to reality.  I don't know off the
top of my head how many 0s that goes down to, but whatever it is, the
number is substantially less than 10%.
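The compounding argument both sides are using can be made explicit:
with n independent libraries each exploitable with probability p, the
chance that at least one is exploitable is 1 - (1 - p)^n.  A quick
sketch (the per-library rates and the library count of 500 are
illustrative guesses, not data from the thread):

```python
def p_any_vulnerable(p_each: float, n_libs: int) -> float:
    """Probability that at least one of n_libs independent libraries
    is exploitable, given per-library probability p_each."""
    return 1 - (1 - p_each) ** n_libs

# Jason's figure: 10% per library across 500 libraries is effectively
# certain compromise.
print(p_any_vulnerable(0.10, 500))  # ~1.0

# Adrian's point: if the real per-library rate is far lower, the total
# stays small even across many libraries.
print(p_any_vulnerable(1e-6, 500))  # roughly 5e-4
```

The whole disagreement thus reduces to the empirical value of p_each,
which is why the estimate of the base rate matters so much.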

>> > I think the Battlestar Galactica remake gets this right. They learned their machine enemy could remotely hack and disable their military ships. To counteract this tactic, the humans had to strip all networking from their computers.
>>
>> Yeah...and then, if it were reality, the Cylons could take advantage
>> of the resulting massive latency in human operations to pick apart
>> their ships.  Getting the enemy to remove their advantages willingly
>> is itself a form of attack.
>
> True.
>
> I would say all security imposes a cost to implement. Today I had to spend probably 15 minutes doing captchas, 10 minutes waiting on OTP codes to be sent to my phone, and 5 minutes resetting passwords. Lots of time wasted and latency introduced.

You are quite correct to note that all security imposes a cost.  When
the cost of a security measure exceeds the value that would be lost
without it, that measure might not be worth having.  The perception of
this leads to shortcuts in certain security measures - some of which
turn out to be efficient, some of which undermine the security, and it
can be hard to tell which is which.
