[extropy-chat] Are ancestor simulations immoral? (An attempted survey)

BillK pharos at gmail.com
Wed May 24 20:55:17 UTC 2006


On 5/24/06, Russell Wallace wrote:
>  Let me clarify my position: the scenario I was objecting to was one where
> an inescapable government starts exerting micro-level control over what
> people can do with technology and their own lives on the pretext of
> preventing abuse; human nature being what it is, that path inevitably leads
> to a nightmare dystopia whose one saving grace is that it probably leads on
> to extinction of all sentient life.
>
>  If you're talking about a hypothetical post-Singularity world where an AI
> goes around making sure nobody creates hell worlds filled with pure
> pointless suffering, but otherwise leaves people alone (this would have to
> be an AI, if it were a human the innate desire to wield more power would
> take over), that would be a different thing altogether.
>

That's better.  :)
Post-Singularity, the FAI won't be like a super head of the FBI and
NSA combined, whose latest political moves we sit around discussing
and deciding whether to approve.

We will *LOVE* it with an all-consuming passion. It will be our 'God'.
We won't have any choice in the matter. Its nanobots will amend our
puny brains as it sees fit.

Humanity will be happy like never before as we do its bidding.
(Assuming it still has any interests that humans can help with.)

Perhaps it will redesign the human race into something wonderful.
(But not noticeably like present humanity).


BillK
