[ExI] ai in education

John Clark johnkclark at gmail.com
Sat Mar 7 19:04:07 UTC 2026


On Sat, Mar 7, 2026 at 10:46 AM <spike at rainier66.com> wrote:

> >>> The AI’s version of safety might mean turning around and destroying
> the guy who fired the weapon.
>
> >> …You cannot be certain if that would be a good thing or a bad thing,
> but to make the best decision you are capable of you would need to take
> into consideration who ordered the guy to fire the weapon, and who designed
> the safety features on the AI, and figure out which one was more
> trustworthy…
>
> > John, with that answer, I completely understand why the military will
> go nowhere near any company you own or have any influence over. A soldier
> does not want the guy in the foxhole next to him pondering values and
> making nuanced decisions on whether or not to defend him. He doesn’t want
> his own weapons doing that either.

*OK, I can understand why the military doesn't like that, but you're not in
the military, so why do you dislike it? I hope you're not one of those "my
country right or wrong" people. If the military doesn't like Anthropic,
they don't have to do business with them; I have no problem with that. But
they went far further: they designated the company a supply chain risk!
The government is attempting to assassinate one of the most successful
and innovative companies in the country. Do you really believe that is the
way to beat China?*

*And you never answered my question: who do you believe has a history of
telling fewer lies, the scientist Dario Amodei, who is the head of
Anthropic, or the most famous twice-divorced TV game show host in America?*


*John K Clark*
