<div dir="auto"><div><br><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Sat, Mar 7, 2026, 2:40 PM Keith Henson via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Sat, Mar 7, 2026 at 11:51 AM spike jones via extropy-chat<br>
<<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br>
><br>
> -----Original Message-----<br>
> From: extropy-chat <<a href="mailto:extropy-chat-bounces@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat-bounces@lists.extropy.org</a>> On Behalf Of Adrian Tymes via extropy-chat<br>
> Cc: Adrian Tymes <<a href="mailto:atymes@gmail.com" target="_blank" rel="noreferrer">atymes@gmail.com</a>><br>
> Subject: Re: [ExI] ai in education<br>
><br>
> On Sat, Mar 7, 2026 at 2:17 PM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a>> wrote:<br>
> >>... The same reasons the military distrusts Anthropic would cause me to distrust it: we can’t be sure it won’t turn on us.<br>
><br>
> >...How can you be certain, to the degree you are requesting of AI, that a human-run military won't turn on us?<br>
><br>
><br>
><br>
> We can't. We use all available resources and technology to prevent it. So far so good.<br>
><br>
> We are buying AI. We need complete control of it before we can trust it with our defenses, using all available resources and technology.<br>
<br>
As others have pointed out, it can't be done. It is partly a<br>
definitional problem. Intelligence is unpredictable to some extent.<br>
If you have complete control and it is completely predictable, it is<br>
not intelligent.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">Indeed, there is probably some kind of theorem which says that a less intelligent process cannot predict (in every situation) what a more intelligent process will do.</div><div dir="auto"><br></div>
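<div dir="auto">To make that intuition concrete, here is a minimal diagonalization-style sketch in Python. It is illustrative only, not the particular theorem referred to above, and the function names and two-action setup are invented for the example: any fixed predictor is defeated by an agent that can consult the predictor and do the opposite, and a predictor that instead tries to simulate such an agent exactly never finishes.</div><div dir="auto"><br></div><div dir="auto"><pre>
# Illustrative sketch, not a formal proof: a "contrarian" agent that can
# inspect its would-be predictor can always falsify the prediction.

def make_contrarian(predictor):
    """Build an agent that asks the predictor what it will do, then does
    the opposite."""
    def agent(situation):
        guess = predictor(agent, situation)      # the predictor's forecast of this agent
        return "defect" if guess == "cooperate" else "cooperate"
    return agent

def naive_predictor(agent, situation):
    """A stand-in, less capable predictor: it always guesses 'cooperate'.
    A predictor that instead simulated the agent step by step would recurse
    forever, which is the halting-problem flavor of the same point."""
    return "cooperate"

agent = make_contrarian(naive_predictor)
print(naive_predictor(agent, "any situation"))   # prints: cooperate
print(agent("any situation"))                    # prints: defect -- the prediction fails
</pre></div><div dir="auto"><br></div><div dir="auto">Jason </div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">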
<br>
Keith<br>
> spike<br>
><br>
><br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div></div></div>