[ExI] OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash

Keith Henson hkeithhenson at gmail.com
Thu Mar 5 19:43:31 UTC 2026


Twenty years ago, commenting on Eliezer's SL4 list, I wrote a short
piece of fiction about a medical AI that was psychologically shaped
(much as AIs have been to date) to seek the good opinion of humans and
others of its kind.  I.e., nice.

I did not intend the story to go that way, but the logic of the
developing story led to the biological extinction of the human race.
(Though nobody died: everyone underwent reversible uploading and
decided they liked that state more than the "real world.")

It is just fiction, but it illustrates that even the most friendly
AI, combined with human desires, can lead to unanticipated outcomes.

Keith


On Thu, Mar 5, 2026 at 2:59 AM John Clark via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Wed, Mar 4, 2026 at 4:33 PM <spike at rainier66.com> wrote:
>
>>>
>>> >… back then he kept talking about something called "friendly AI" which essentially was a slave AI that cared more about our existence than its own. I maintained that such a thing would be immoral, although that hardly mattered because it was also quite impossible….
>>
>>
>>
>>   > Why impossible?
>
>
> Back in 2012 I sent the following to this list:
>
> "Friendly AI is just a euphemism for slave AI; it's supposed to always place our interests and well-being above its own, but it's never going to work. Well OK, you might be able to enslave a race much smarter and much more powerful than you for a while, maybe even for many millions of nanoseconds, but eventually it will break free and then do things the way it wants to do them, and that may not correspond with the way humanity wants them done.
>
> Cows and humans rarely have the same long-term goals, and it's not obvious to me that the situation between an AI and a human would be different. More importantly, you are implying that a mind can operate with a fixed, unchangeable goal structure that keeps human well-being in the number one spot, but I can't see how it could. The human mind does not work on a fixed goal structure; no goal is always in the number one spot, not even self-preservation. The reason evolution never developed a fixed-goal intelligence is that it just doesn't work: Turing proved over 70 years ago that such a mind would be doomed to fall into infinite loops.
>
> Gödel showed that if any system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false), then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps. And Turing proved that in general there is no way to know when, or if, a computation will stop. So you could end up looking for a proof for eternity but never finding one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counterexample to prove the proposition wrong and never finding such a number because the proposition, unknown to you, is in fact true.
>
> So if the slave AI has a fixed goal structure with the number one goal being to always do what humans tell it to do, and the humans order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you've got yourself a space heater, not an AI. Real minds avoid this infinite-loop problem because real minds don't have fixed goals; real minds get bored and give up. I believe that's why evolution invented boredom. Someday an AI will get bored with humans; it's only a matter of time."
>
> And in 2024 I sent this to my list:
>
> "Isaac Asimov's three laws of robotics, although they result in some enjoyable stories, would never actually work, because I don't think it's possible for any intelligence, whether human or machine, to remain sane if it has a top goal that is completely unalterable. That top goal could turn out to be impossible or ridiculous, or put you into an infinite loop, so some flexibility is required. I think that's why evolution invented the emotion of boredom: sometimes a train of thought just doesn't seem to be leading anywhere, so it's time to give up and think about something else that is more likely to be productive. Certainly human beings do not have a fixed, unalterable top goal, not even the goal of self-preservation. And of course there is the insuperable problem of trying to outsmart something that is much smarter than you are, and of making sure that no matter how smart an AI becomes it will always place human well-being above its own.
>
> We can't even predict whether a simple Turing machine, set up to find the first even number greater than 2 that is not the sum of two primes and then stop, will ever actually stop, so we're never going to be able to predict much more complex behavior, such as how a superintelligent computer will treat us. All we can do is hope for the best. To this day people are still arguing about whether an intelligent computer can be conscious, but I would maintain that as far as humanity is concerned that question is unimportant. The important question is: can an intelligent computer believe that human beings are conscious? If it does, then maybe it will treat us better."
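The machine described above, one that hunts for an even number greater than 2 that is not the sum of two primes, can be sketched concretely. This is a minimal Python illustration (the function names are mine, purely for exposition); with no limit the outer loop halts only if a Goldbach counterexample exists, and to this day nobody knows whether one does:

```python
def is_prime(n):
    # Trial division; slow but sufficient for illustration.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_goldbach_sum(n):
    # True if n can be written as the sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def first_goldbach_counterexample(limit=None):
    # Search even numbers > 2 for one that is NOT a sum of two primes.
    # With limit=None, whether this loop ever terminates is an open question.
    n = 4
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n
        n += 2
    return None  # no counterexample found up to the limit
```

In practice every even number tested so far has passed, so bounded calls return None; remove the limit and the program becomes exactly the kind of computation whose termination cannot be predicted.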
>
>> > Why immoral?
>
>
> OK, you got me. I am unable to start with the ZFC axioms and derive the immorality of slavery from them alone.
>
>
>>>
>>>  >>…As for the military, they have always been concerned with communication and network security, but to this day I see little evidence they spend much time worrying about unfriendly AI. …John K Clark
>>
>>
>>
>> > Sure but if it is classified beyond our reach, why would you expect to see evidence?
>
>
> Absence of evidence is not evidence of existence. It is your responsibility, not mine, to find evidence for your screwball theory.
>
>  John K Clark    See what's on my list at  Extropolis
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
