[extropy-chat] Prime Directive
jef at jefallbright.net
Fri Oct 27 22:34:41 UTC 2006
> line of thought leads inexorably to the absurdity of being
> dead in order to avoid suffering."
> I'm having a hard time seeing the absurdity here and how the
> last sentence follows from the rest of the paragraph. Is
> there something wrong with choosing death in order to avoid suffering?
Correct, that was poorly worded. Should have said something like "the
absurdity of choosing death to avoid all suffering."
The point is that pleasure and suffering don't really act as
goals, but as feedback signals to keep the system operating in
homeostasis. Lock the feedback at some fixed state and the system will
tend towards failure. On the other hand, modify the system itself to
use the feedback differently or use other types of feedback and open up
a world of new transhuman possibilities.
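To make the feedback-versus-goal distinction concrete, here is a minimal toy sketch of my own (not from the original exchange; the names, gains, and numbers are all illustrative):

```python
def regulate(signal_locked=False, steps=200, push=0.3, gain=0.5):
    """Toy homeostat: 'suffering' is an error signal, not a goal.

    Each step the environment pushes the state away from its
    setpoint (0.0); the controller corrects using the felt error.
    Locking the signal at a fixed 'no suffering' value disables
    correction, and the state drifts toward failure.
    """
    state = 0.0
    for _ in range(steps):
        state += push                              # environmental perturbation
        error = 0.0 if signal_locked else state    # the feedback signal
        state -= gain * error                      # corrective response
    return abs(state)

drift_with_feedback = regulate()               # settles near the setpoint
drift_locked = regulate(signal_locked=True)    # perturbations accumulate unchecked
```

With the signal live, the state settles close to its setpoint; with it frozen at "no suffering," the same perturbations accumulate without bound.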
> More importantly, the above implies that promotion of values
> is somehow more important than ability to experience them. It
> suggests that values could exist in absence of experience,
> that is, they could still exist even with all the humans in
> the world wiped out.
Yes, my point is that values are what influence rational
decision-making, and one can (and we often do) choose to promote
values extending beyond any expectation of experience. For example:
parents sacrificing for children, philanthropy, heroic sacrifice.
There's a meta-understanding of this as well--the superrational idea
that if we each were to act altruistically, in such a way as to bring
about the kind of world we would prefer but without the requirement of
direct payoff, then we *would* in fact each enjoy living in a better
world. However, this doesn't work within the context of our current
society, since unconditional cooperators are exploitable by defectors.
> Ability to experience values has priority over values as
> values cannot exist without the ability to experience them.
I think the difficulty here begins with semantics. Let me know whether
the preceding explanation clarifies.
> Jef Allbright:
> "Regarding consciousness, you have no way of measuring
> whether anyone is
> conscious. We all could be zombies but behave exactly the same.
> Occam's razor implies that all have consciousness similar to your own,
> but in any case you have absolutely no access to others' subjective
> experience. So how is consciousness relevant to your decision-making?"
> How is decision-making relevant to consciousness? Anyway,
> "consciousness" is a very
> messy word that could mean a lot of things. In order to have
> a discussion people
> have to synch their referents for all the terms used
> during a discussion first.
> Otherwise, they will talk past each other.
> So, what you call "consciousness" I'd rather replace with
> "ability to process
> reality/information." This should bring much more clarity to
> this discussion.
I agree that the term is grossly misused, and I prefer a systems view as well.
> I interpret Descartes' "I think therefore I am" strictly as,
> "I'm able to process
> reality/information therefore I am." That recursion process
> has plenty of base
> cases to make us feel something instead of nothing. If what
> you're saying were
> true, you would not be able to understand this sentence
> because infinite recursion
> would prevent you from *processing* characters on the screen.
In an algorithmic universe, round wheels still turn smoothly even though
the universe doesn't have time to compute the infinite series of digits
of pi. Approximate models work quite well within a suitable context. But
my point was about recursion in the modeling of the universe by a system
within the universe.
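The point that a finite truncation is good enough within context can be illustrated with a small sketch of my own (not from the original post):

```python
import math

def leibniz_pi(terms):
    """Truncated Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ...

    No finite machine ever holds all of pi's digits, yet a finite
    prefix of the series is as accurate as the context requires;
    the alternating-series remainder is below the first omitted term.
    """
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

approx_pi = leibniz_pi(100_000)   # error on the order of 1e-5
```

A hundred thousand terms already agree with pi to four decimal places, which is far more precision than any physical wheel needs.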