[ExI] Rights without selves (was: Nolopsism)
nanite1018 at gmail.com
Thu Feb 11 01:54:13 UTC 2010
On Feb 10, 2010, at 8:31 PM, Spencer Campbell wrote:
> Yes: this is the best description of your point yet, I think.
> Nevertheless I still disagree. To say that judgements made by
> rational, self-aware entities are more valid than those made by
> mechanistic automatons is entirely arbitrary. I suppose it derives
> from the claim that an internal experience of deeming something right
> or wrong is somehow important, but that experience is fleeting and
> currently inaccessible to all but the mind of origin.
> Morality and ethics are methods of categorizing actions in the world.
> Nothing more, nothing less. There isn't anything especially conscious
> about categorization.
Well, while morality and ethics are categorizations of actions, I think they are a special type: they say "this is good" or "this is bad." That immediately raises the question, "good or bad for whom, or what?" I don't see any answer to that question besides "for 'people'" (or, perhaps, living things). I don't see how something can be good or bad for something that has no awareness of good or bad, or even of an alternative that changes anything.

A car, without a person, is useless; it has no meaning, no value. It certainly doesn't value itself, since "it" is incapable of generating evaluations and doesn't employ de se operators (which I think follows from the paper that started this all off). So it doesn't matter to the car, or to anything else (because nothing can "matter" without an evaluator), whether it gets blown up, rusts, or runs out of gas. To a living thing, by contrast, such events are really important: they determine whether it continues to exist or not. A car without evaluating entities is just a piece of matter, and matter merely changes forms; it doesn't wink out of existence (I'm including energy as a form of matter).
Without something that can say "this is good or bad for/to 'me,'" I'm not sure how you could build something that can call a thing good or bad in a non-arbitrary fashion. I say "non-arbitrary" because, while you can build something that says "it is wrong for jellyfish to vomit," it makes no difference to anything whether a jellyfish vomits. But if you say "going around killing people is bad because it hurts everyone's ability to live, including mine," then you have an objective basis for that statement: your existence is threatened by people murdering each other. It actually makes a difference whether you live or die, because you can live or die; you can wink out of existence, even if the matter you are composed of is never destroyed.
I can't wait to read the Napoleon argument. I'm sure it will, at least, be interesting.