[extropy-chat] Fools building AIs (was: Tyranny in place)

Keith Henson hkhenson at rogers.com
Mon Oct 9 03:31:22 UTC 2006


At 11:32 PM 10/6/2006 -0400, Ben wrote:
>Hi,

snip

> > A better question might be, "as rationality increases asymptotically,
> > does a generic human goal system have the urge to eliminate humans by
> > replacing them with something better?"
>
>I don't really believe in the idea of a "generic human goal system."
>It seems that some human goal systems, if pursued consistently, would
>have this conclusion, whereas others would not...
>
> > I personally happen to think that the position of your friend is
> > inconsistent with profound rationality and understanding of
> > intelligence.
>
>Can you explain why you think this?  This statement seems inconsistent
>with your own discussion of rationality, above.
>
>I stress that I am opposed to the annihilation of humanity!  I am just
>pointing out the very basic point that a value judgement like this has
>nothing to do with rationality... rationality is about logical
>consistency and optimal goal pursuit, not about what one's values and
>goals are.  So long as one's goals are not logically inconsistent,
>they are consistent with rationality...

I sincerely doubt anyone with a passing familiarity with the subject would
give an AI the goal "eliminate human misery."

Or even minimize human misery.
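
(A minimal sketch of why such a goal is dangerous when taken literally; every
name and number below is an illustrative assumption of mine, nothing from the
post. An optimizer scored only on total misery finds its global minimum at
zero humans, since no humans means no misery.)

def total_misery(num_humans, misery_per_human=0.1):
    """Total misery if every living human carries some baseline misery."""
    return num_humans * misery_per_human

def naive_minimizer(candidate_populations):
    """Pick the population size that minimizes total misery, and nothing else."""
    return min(candidate_populations, key=total_misery)

if __name__ == "__main__":
    candidates = [0, 1_000, 1_000_000, 6_000_000_000]
    print("Optimizer's 'best' population:", naive_minimizer(candidates))  # prints 0
    # The objective says nothing about keeping humans around, so the
    # degenerate solution (no humans, hence no misery) wins.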

Keith Henson



