[ExI] Holy cow!

Jason Resch jasonresch at gmail.com
Sun Apr 12 22:20:56 UTC 2026


On Sun, Apr 12, 2026, 4:47 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sun, Apr 12, 2026 at 12:22 PM Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> > To John's point:
> >
> > The halting problem's implications are more severe than whether or not a
> task will finish; they include not being able to know whether any given
> block of code will ever be reached.
> >
> > So it is not just whether a task finishes, but whether some
> function will ever be invoked, whether the machine will accept
> arbitrary inputs and execute them as code, etc.
>
> I have run into the halting problem in practice, as knowing whether
> the program will halt.  Knowing whether a given function will be
> invoked...granted, this is a problem if you don't control the code and
> can't insert checkpoints.  When I'm developing some program, I
> generally can do that - though, as you note later, this is a specific
> case, not the general case.
>

True. Though even with knowledge of the control flow and debugging
breakpoints set, knowing which methods may be called isn't always trivial.
Consider this block of code:

x = 4;
while (true) {
   if (isGoldbachCounterExample(x)) { break; }
   x += 2;
}
print x;
foo();

We don't know whether foo() will ever be called, because mathematicians
don't yet know whether any counterexamples to Goldbach's conjecture exist.
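To make the example concrete, here is a minimal sketch of what the checker could look like (the function and helper names are mine, not from the thread; a real search would need arbitrary-precision optimizations):

```python
def is_goldbach_counterexample(n):
    """Return True if even n > 2 cannot be written as a sum of two primes."""
    def is_prime(k):
        if k < 2:
            return False
        # trial division is enough for an illustration
        return all(k % d for d in range(2, int(k ** 0.5) + 1))
    return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Every even number ever checked has a decomposition, so these return False;
# whether the loop in the email ever reaches foo() is exactly the open question.
print(is_goldbach_counterexample(4))   # prints False  (4 = 2 + 2)
print(is_goldbach_counterexample(28))  # prints False  (28 = 5 + 23)
```

If Goldbach's conjecture is true, the `while (true)` loop above it never breaks and foo() is dead code; proving that is equivalent to proving the conjecture.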


> > To Adrian's point:
> >
> > There is much that can be done to minimize an attack surface, such as
> only connecting to trusted machines, validating input, using firewalls,
> activating the NX (no execute bit) to prevent arbitrary code execution, etc.
>
> Indeed, and it rather annoys me when people assume, imply, or outright
> declare that just because most people don't do that, nobody can ever
> do that - with the resulting massive advantage to any attacker.  I did
> that as part of my early career, and it is well known that people tend
> to get offended when you try to tell them that the life they
> personally experienced could not have happened (absent any evidence of
> a mechanism for false memories).
>
> > As to the halting problem implications, note that undecidability holds
> for the general case (not every arbitrary program can be predicted), and
> the key word is general. There are software validation tools that can,
> for limited specific cases, prove correctness by brute-force iteration
> over every possible program state.
> >
> > That said, any modern operating system is far too complex a beast to run
> correctness provers against. Even if you were to only run one piece of
> proven software on some server, how do you know there is not an exploitable
> bug in the DNS, NTP, TCP/IP stack, firewall, TLS library, SSH, or any of
> the hundreds of other software libraries on which the server software and
> operating system depend?
>
> Indeed.  Perfect cybersecurity for any sufficiently complex system is
> not practical.
>

Yes, I think this is John's argument. If superintelligent AI can find
exploitable bugs in code written by less intelligent people or agents, then
things may inevitably become an arms race between AI hardening code and
more intelligent AI breaking it.

The saving grace is that it seems to require more intelligence to find a
subtle bug than it takes to introduce one that is invisible to that lower
intelligence. So once we use the current generation of AI to harden
software in common use, it will take greater and greater leaps in AI
capability to find exploitable bugs. And assuming the latest generation of
AI is always put to work fixing things before being made available to
break things, we might enter an era of stability.

But he is right to be concerned: if Mythos or some future AI is a quantum
leap beyond current ones, it will be in a position to find bugs that no
previous AI or human engineer was able to spot. Bostrom listed hacking as
one of the superpowers of superintelligence:

https://alwaysasking.com/when-will-ai-take-over/#Superpowers_of_Superintelligence



> However, there is a world of difference between "an attack is possible
> at all" and "an attack is likely enough to seriously worry about", let
> alone "this particular attack will definitely happen any time now".

> John confuses these three.  The ability of AI to discover this many
> vulnerabilities does not by itself move us out of the first category,
> no matter how much John insists otherwise.
>

It's unclear to me what fraction of code Mythos looked at, but say any
given software library has only a 10% chance of being exploitable. That
would put the chance of a system being vulnerable at close to 100%, given
how many individual programs and libraries any modern machine contains.
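The arithmetic behind that claim, using the 10% figure from above and an assumed (illustrative) count of 50 libraries:

```python
# Probability that at least one library is exploitable, assuming
# independent 10%-per-library odds (both numbers are assumptions).
p_single = 0.10     # assumed chance any one library is exploitable
n_libraries = 50    # a modest dependency count for a modern system

p_any = 1 - (1 - p_single) ** n_libraries
print(f"{p_any:.3f}")  # prints 0.995
```

Even at a far more optimistic 1% per library, 50 independent libraries still give roughly a 40% chance that something is exploitable.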


> > I think the Battlestar Galactica remake gets this right. They learned
> their machine enemy could remotely hack and disable their military ships.
> To counteract this tactic, the humans had to strip all networking from
> their computers.
>
> Yeah...and then, if it were reality, the Cylons could take advantage
> of the resulting massive latency in human operations to pick apart
> their ships.  Getting the enemy to remove their advantages willingly
> is itself a form of attack.
>

True.

I would say all security imposes a cost to implement. Today I spent
probably 15 minutes doing captchas, 10 minutes waiting on OTP codes to be
sent to my phone, and 5 minutes resetting passwords. Lots of time wasted
and latency introduced.

Jason