[ExI] Holy cow!
John Clark
johnkclark at gmail.com
Sat Apr 11 21:12:28 UTC 2026
On Sat, Apr 11, 2026 at 11:24 AM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> >> Why is it silly to ask if Einstein was intelligent but not silly to
> >> ask if an AI is intelligent?
>
> > I said "relevant". "Relevant" and "silly" are rather
> > different concepts.
The two concepts seem pretty similar to me; "silly" may be a slightly
broader term, but when it comes to asking "is Mythos intelligent?" I'd say
the question is both irrelevant and silly.
> > Your attempt to put words in my mouth that I did not say in this case,
> > to claim that I was talking about "silly" when I was talking about
> > "relevant" - is noted and not appreciated.
>
It breaks my heart to hear you say that; I guess I'll be crying myself to
sleep tonight.
> >>> It is true that he could issue the orders, but they would not have
> >>> that result. Even acknowledging the flaws they found, he lacks the ability
> >>> to apply them to my system.
>
> >> I very much doubt that. I know for a fact your system is connected to
> >> the Internet because you're communicating with me right now.
>
> > It is possible to connect to the Internet without presenting an
> > attack surface. I could go on in depth about how, but ....
No, you could not! If you could, you'd be world-famous as the greatest
security expert the world has ever known. And you'd be even more famous as
somebody who had proven that Kurt Gödel was wrong! Gödel claimed to have
proven that no logical system advanced enough to perform arithmetic, like a
computer operating system, can be both complete and consistent, and he also
claimed to have proven that no such system can prove its own consistency,
but according to you he must've been wrong.
And Alan Turing claimed to have proven that in general there's no way to
know if a computer program has a bug such that it will run forever
without ever stopping and producing an answer, but according to you Gödel
was not the only one who was wrong; Turing must've been wrong too.
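For what it's worth, Turing's argument is short enough to sketch in a few
lines of code. This is only an illustration built on an assumption the
theorem forbids, namely that somebody hands us a working halting decider;
the names make_paradox, always_no, and always_yes are my own inventions for
the sketch, not anything from this thread:

```python
def make_paradox(halts):
    """Given a claimed decider halts(f) -> bool, build its counterexample."""
    def paradox():
        if halts(paradox):
            while True:      # decider says we halt, so loop forever
                pass
        return "halted"      # decider says we loop, so halt at once
    return paradox

# A decider that claims nothing halts is refuted immediately:
always_no = lambda f: False
q = make_paradox(always_no)
print(q())                   # prints "halted", contradicting always_no

# A decider that claims everything halts is refuted too, but by a
# program that loops forever, so we do not actually call it here:
always_yes = lambda f: True
p = make_paradox(always_yes)  # p() would never return
```

Whatever answer the supposed decider gives about the program built from it,
the program does the opposite, which is the contradiction Turing used.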
Methinks you overestimate your skills as a security expert just a tad.
> >> It's good that you checked for vulnerabilities but did you find even
> >> one zero day vulnerability in a major piece of software and repair it?
>
> > Ever, in my career? I would give an unqualified "yes", except that
> > none of the software I have worked on would unquestionably be considered
> > "major".
Well, that's a rather significant difference, don't you think? It's one
thing to hack a computer game so you get a better score; it's something
else to find a vulnerability in the Linux kernel that has existed for
decades and would allow any user to obtain root access and gain the same
privileges the system administrator has. And I have to say, black hat
hackers love nothing better than security experts who really believe they
have covered all the bases and are invulnerable to any conceivable
cyberattack.
> > That said, this both is a type of argument from authority
You are the one claiming to have found a way to make a computer
invulnerable to cyberattack, not me.
> > you would have no grounds to question me on this by your own logic
So I'm supposed to just accept what you say, as if you were the ultimate
authority on computer security? Who is using the argument from authority
now?
> >> I think if somebody actually pulled that off then that would be bad.
> >> Apparently you disagree.
>
> > I disagree about the extent, not the direction. We agree those would
> > be bad, but I believe they would be an inconvenience for most people
> > (though the ATC and F-35 ones may directly cause some injuries and/or
> > deaths, likely in the hundreds, possibly in the thousands from a
> > coordinated mass attack on ATC computers designed to cause multiple
> > simultaneous incidents). If I am reading your words correctly, you believe
> > that any or all of (nuclear power plants suffer cyberattack, air traffic
> > control computers suffer cyberattack, NYSE goes down due to cyberattack,
> > F-35 jets start being essentially shot down by cyberattacks) would be a
> > civilization-endangering catastrophe with permanent, irrecoverable
> > consequences. Or have I misread your position there?
>
Not exactly, but that's close to my position. I don't think the
catastrophes you list above are by themselves enough to bring about the
collapse of civilization, but they would be a symptom of something far more
general and far, far more profound: the greatest revolution in the way
matter is organized since the Cambrian Explosion.
> > People will remember the bombing of Iran.
>
That's small potatoes; it may seem important right now, but very soon
people will have much more important things to worry about than a few
airplanes dropping chemical explosives.
> >> In 10 years (or maybe 5) if people remember the Iran war at all it
> >> will be as an unimportant footnote, and people may not remember even
> >> that, because in 10 years there may not be any people; that depends on
> >> whether AI thinks we're worth keeping around. All I know for sure is
> >> that in 10 years human beings will not be the ones in charge; an AI
> >> will be the one making existential decisions, not humans.
>
> > AI does not seem to be on a track to accelerate fast enough to
> > make that happen by 2036.
>
A month ago I might've agreed with you, but now, after I've had a look at
what Mythos is capable of, I wouldn't be surprised if it happens by 2031.
But one thing I know for sure: whenever the singularity happens, it will be
a big surprise to most people; that's why it's called a singularity.
John K Clark