[ExI] Holy cow!

Adrian Tymes atymes at gmail.com
Sun Apr 12 17:42:38 UTC 2026


On Sun, Apr 12, 2026 at 12:22 PM Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> To John's point:
>
> The halting problem implications are more severe than whether or not a task will finish; they include not being able to know whether a given block of code will ever be reached.
>
> So it is not just whether a task finishes, but whether some function will ever be invoked, whether the machine will accept arbitrary inputs and run them as code, etc.

I have run into the halting problem in practice, in the form of not
knowing whether a program will halt.  Knowing whether a given function
will be invoked...granted, that is a problem if you don't control the
code and can't insert checkpoints.  When I'm developing a program, I
generally can do that - though, as you note later, this is a specific
case, not the general case.
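The checkpoint approach above can be sketched in a few lines of Python
(the names here are mine, purely for illustration): when you control
the source, you can instrument each branch and observe directly which
paths were actually taken, sidestepping the general undecidability.

```python
# Record which instrumented code paths are reached during a run.
calls = set()

def checkpoint(name):
    # Mark this code path as having been executed.
    calls.add(name)

def mystery(n):
    # A toy function with two branches we want to observe.
    if n % 2 == 0:
        checkpoint("even-branch")
        return n // 2
    checkpoint("odd-branch")
    return 3 * n + 1

mystery(4)
# After this run, only the even branch was reached.
```

This tells you what happened on the inputs you tried, of course, not
what could happen on all inputs - which is exactly the specific-case
versus general-case distinction.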

> To Adrian's point:
>
> There is much that can be done to minimize an attack surface, such as only connecting to trusted machines, validating input, using firewalls, activating the NX (no execute bit) to prevent arbitrary code execution, etc.

Indeed, and it rather annoys me when people assume, imply, or outright
declare that just because most people don't do that, nobody can ever
do that - with the resulting massive advantage to any attacker.  I did
exactly that work in my early career, and it is well known that people
tend to get offended when you try to tell them that the life they
personally experienced could not have happened (absent any evidence of
a mechanism for false memories).
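One of the measures Jason lists, input validation, is worth a concrete
sketch.  A hypothetical example (not from this thread): validate
against a strict allowlist pattern, so anything unexpected is rejected
before it reaches the rest of the program.

```python
import re

# Allowlist validation: accept only lowercase hostname labels
# (letters, digits, internal hyphens, at most 63 characters).
HOSTNAME_RE = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def validate_hostname(s):
    # fullmatch ensures the entire input matches, not just a prefix.
    return bool(HOSTNAME_RE.fullmatch(s))

validate_hostname("example")    # accepted
validate_hostname("bad host!")  # rejected
```

The design point is allowlisting (describe what is permitted) rather
than blocklisting (enumerate what is forbidden), since attackers are
better at finding inputs you forgot to forbid.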

> As to the halting problem implications, note that this is not the general case (not every arbitrary program can be predicted) - the key word is general. There are software validation tools that can, for limited specific cases, prove correctness by brute-force iterating over every possible program state.
>
> That said, any modern operating system is far too complex a beast to run correctness provers against. Even if you were to run only one piece of proven software on some server, how do you know there is not an exploitable bug in the DNS, NTP, TCP/IP stack, firewall, TLS library, SSH, or any of the hundreds of other software libraries on which the server software and operating system depend?

Indeed.  Perfect cybersecurity for any sufficiently complex system is
not practical.
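The brute-force approach Jason describes - iterating over every
possible program state - can be sketched as a toy explicit-state
reachability check.  The model and names below are mine, not from any
particular verification tool: a 2-bit counter whose claimed invariant
("the counter never reaches 3") the checker disproves by exhaustive
enumeration.

```python
def step(state):
    # Successor states of the toy model: a counter modulo 4.
    return {(state + 1) % 4}

def check_invariant(initial, invariant):
    # Breadth over all reachable states; finite models terminate.
    seen, frontier = set(), {initial}
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        if not invariant(state):
            return False, state  # counterexample found
        seen.add(state)
        frontier |= step(state) - seen
    return True, None

ok, cex = check_invariant(0, lambda s: s != 3)
# ok is False and cex == 3: state 3 is reachable, so the invariant fails.
```

This works only because the state space is tiny and finite; the state
space of a modern operating system is astronomically large, which is
why the general technique does not scale to whole systems.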

However, there is a world of difference between "an attack is possible
at all" and "an attack is likely enough to seriously worry about", let
alone "this particular attack will definitely happen any time now".
John confuses these three.  The ability of AI to discover this many
vulnerabilities does not by itself move us out of the first category,
no matter how much John insists otherwise.

> I think the Battlestar Galactica remake gets this right. They learned their machine enemy could remotely hack and disable their military ships. To counteract this tactic, the humans had to strip all networking from their computers.

Yeah...and then, if it were reality, the Cylons could take advantage
of the resulting massive latency in human operations to pick apart
their ships.  Getting the enemy to remove their advantages willingly
is itself a form of attack.
