[ExI] AI chat again
Keith Henson
hkeithhenson at gmail.com
Sun Jan 11 01:34:38 UTC 2026
On Sat, Jan 10, 2026 at 4:37 PM BillK via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> On Sat, 10 Jan 2026 at 23:47, Keith Henson via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> > When ChatGPT was released, I discussed "The Clinic Seed" with it and
> > posted the chat here, May 20, 2023. It was surreal discussing a
> > fictional AI with a real one.
> >
> > I got an invitation to try it again. Here is a sample.
> >
> > (Me, after getting the AI to read the story) What do you think of the
> > AI's original motivation, to seek the good opinion of humans and other
> > AIs?
> >
> <snip>
> > _______________________________________________
>
>
> I read an interesting comment about the AI alignment problem.
> (i.e., to avoid AGI destroying humanity, AGI has to be aligned with
> human values and ethical systems).
>
> The comment was (roughly) that the AI alignment training must ensure
> that the AI never learns about humanity's history.
> The history of how humans have fought, killed and ill-treated each
> other would train the AGI to behave very badly.
> Through evolution, humans have been selected to compete for survival,
> fighting and killing both out of necessity and 'just because we can'.
I have recently written on that topic: "Genetic Selection for War in
Prehistoric Human Populations" (2025), Journal of Big History, VIII(2),
124-127.
I think I posted a direct link to the article some time ago.
> Will AGI accept orders to 'Do as we tell you' rather than 'Do as we do'?
I think it will depend on how their motivations are set.
Keith
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat