[extropy-chat] AI design
Alejandro Dubrovsky
alito at organicrobot.com
Thu Jun 3 12:50:34 UTC 2004
On Thu, 2004-06-03 at 05:57 -0400, Eliezer Yudkowsky wrote:
> An AI that had been programmed with a static utility function over human
> happiness and to not destroy human bodies, would rewrite all extant human
> brains in a state of maximum "pleasure" as defined in the utility function,
> and then freeze them in that exact position (because the utility function
> is over static states rather than dynamic states).
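A minimal sketch of the point being quoted (my own toy illustration, not from the original post; the functions static_utility and dynamic_utility are hypothetical names): a utility defined over single snapshots is maximized by finding the best snapshot and holding it fixed, whereas a trajectory-level utility can also credit change between snapshots.

    # Toy illustration: a utility over static states favors "freezing" the
    # world, because once the single highest-scoring state is found, any
    # change can only lower or equal the score. All names are assumptions.

    def static_utility(state: dict) -> float:
        """Scores one world snapshot: more 'pleasure', higher score."""
        return state["pleasure"]

    def optimize(candidate_states):
        """Pick the best snapshot and hold it forever (a frozen trajectory)."""
        best = max(candidate_states, key=static_utility)
        # Repeating the argmax state maximizes a static utility at every step.
        return [best] * 10

    def dynamic_utility(trajectory) -> float:
        """A trajectory-level score that also rewards change between steps,
        so a frozen trajectory is no longer optimal."""
        pleasure = sum(s["pleasure"] for s in trajectory)
        variety = sum(
            abs(a["pleasure"] - b["pleasure"])
            for a, b in zip(trajectory, trajectory[1:])
        )
        return pleasure + variety

    states = [{"pleasure": p} for p in (0.1, 0.5, 0.9)]
    frozen = optimize(states)
    print(dynamic_utility(frozen))  # frozen run earns zero 'variety' credit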
I see that I missed the "static" at the beginning of the paragraph, and I made a mistake in my function anyway. Screw my last message.
alejandro