[ExI] The atoms red herring. =|

Samantha Atkins sjatkins at mac.com
Thu Nov 18 06:22:23 UTC 2010


On Nov 17, 2010, at 5:41 PM, Alan Grimes wrote:

>> Look at those last two sentences.  He "presumes"?!!  Well, of course
>> he "presumes".  That's the basis of his "knowledge".  But there's no
>> knowledge in it, just pure ego.
> 
> ;)
> 
> I don't have 1/10^12th the ego required to assume that the world should
> be converted to computronium, tomorrow for all practical purposes. I am
> not so arrogant as to assume that computronium will be the most prized
> substance in the universe. And I am not so self-righteous that I can
> claim that it would be benevolent to forcibly upload anyone. All of
> these positions have been expressed on this list within the last two weeks.
> 

If you could upload people to an environment with even more opportunity, richer experience, and higher quality of life than we have now, along with far greater longevity and the prospect of open-ended growth and becoming, then how would refusing to do so be more 'friendly' than doing so? How would it be more moral? What if you could see the blockages and misapprehensions that would lead many people to refuse if asked, as an advanced AGI probably could? Would it then still be moral to let people suffer and die a final death here in slow time, just to accede to their possibly irrational wishes? I think an actual Friendly AI might ponder this for a while.

- s
