[ExI] 'Friendly' AI won't make any difference

Keith Henson hkeithhenson at gmail.com
Sat Feb 27 08:40:26 UTC 2016

I would suggest that the danger from Friendly AI is not the AI, but
the humans.  There is a very old theme in human stories: watch what
you ask for, because you may _get_ it.

I developed this theme in "The Clinic Seed," which most of you have read
at some time.  In the story, humanity, at least in its original home
in Africa, goes effectively extinct due to friendly AI clinics that do
what the people want.  Of course, as time goes on, what they want
changes . . . .


"Can you teach me this language and how to read?"  Zaba asked.

There was a short pause, which was really a very long pause for
Suskulan as he projected what would happen and thought about the
unstated (though obvious) reason he had been given the upgrade.

"Yes" Suskulan said at last inflecting his voice to a sigh.  "But it
will change you and the rest of the people of the tata in ways you
cannot foresee and may not like. You can sleep through the nine or ten
days it will take to finish healing you.  Are you sure you want to do

"Yes," said Zaba firmly, "I want to learn."

And thus was the fate of this particular tata determined . . . .


If you google "halting state machines of loving grace" it should drop
you on page 104 of Charles Stross's novel _Halting State_, where he
says much the same thing.

