[ExI] AI chat again
BillK
pharos at gmail.com
Sun Jan 11 00:35:57 UTC 2026
On Sat, 10 Jan 2026 at 23:47, Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> When ChatGPT was released, I discussed "The Clinic Seed" with it and
> posted the chat here on May 20, 2023. It was surreal discussing a
> fictional AI with a real one.
>
> I got an invitation to try it again. Here is a sample.
>
> (Me, after getting the AI to read the story) What do you think of the
> AI's original motivation, to seek the good opinion of humans and other
> AIs?
>
<snip>
> _______________________________________________
I read an interesting comment about the AI alignment problem
(i.e. to avoid AGI destroying humanity, AGI has to be aligned to
support human values and ethical systems).
The comment was (roughly) that AI alignment training must ensure
that the AI never learns about humanity's history.
The history of how humans have fought, killed and ill-treated each
other would train the AGI to behave very badly.
Through evolution, humans have evolved to compete for survival,
fighting and killing out of necessity and 'just because we can'.
Will AGI accept the orders to 'Do as we tell you' rather than 'Do as we do'?
BillK