[extropy-chat] A future fit to live in?
Heartland
velvethum at hotmail.com
Mon Jan 15 01:20:36 UTC 2007
Heartland:
I would never imply that the pleasure supergoal is a choice. It never was. As you
point out, what we are and what we want has been caused by a blind evolutionary
mechanism. We've all been hardwired to seek pleasure and have no choice in this
matter.
Jef:
I would say that a self-reflective system would output a statement of "pleasure"
when it senses that the feedback loop is tightening on its goal. This is
consistent with humans expressing pleasure either when their "reality" becomes
closer to their expectations or when their expectations become closer to their
"reality." It is also consistent with our "pleasure setpoint" moving up as our
expectations (previous values) are met and set to a higher level. It's also
consistent with our saying that we are "pleased" when something "bad" stops, even
when nothing "good" happened. Don't you see that "pleasure" is just a
self-reflective description of the status of the feedback loop as reported by the
system, but lacks any functional or absolute status? It's exactly the same
conceptual trap as the concept of qualia. All of which, just to be clear, does not
deny the existence of pleasure or subjective experience.
----
Jef, it sounds like you're a card-carrying functionalist. :-)
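To make your feedback-loop picture concrete, here's a minimal toy model in
Python (my construction, not your words; every name in it is invented for
illustration): a controller that reports "pleasure" whenever its error shrinks,
and ratchets its setpoint upward once an expectation is met.

# Toy model only: "pleasure" as a report that the feedback loop is
# tightening, with a setpoint that ratchets up when expectations are met.
class Agent:
    def __init__(self, setpoint):
        self.setpoint = setpoint        # current expectation
        self.prev_error = float("inf")  # no prior reading yet

    def step(self, reality):
        error = abs(self.setpoint - reality)
        pleased = error < self.prev_error  # loop is tightening
        self.prev_error = error
        if error == 0:            # expectation met, so the
            self.setpoint += 1    # "pleasure setpoint" moves up
        return "pleasure" if pleased else "no pleasure"

agent = Agent(setpoint=5)
for reality in [1, 3, 5, 5]:
    print(agent.step(reality))  # pleasure, pleasure, pleasure, no pleasure

Note that the model reports "pleasure" when the error merely shrinks (reality
moving from 1 to 3), even though the expectation isn't met yet; that matches
your point about being "pleased" when something "bad" stops, without anything
"good" happening.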
Jef:
If you want to maintain your claim that a goal of "pleasure" motivates all
behavior, then your idea fails when extended to organisms that don't have the
complex capability of experiencing pleasure. Unless you want to warp the concept
of "pleasure" to include what ants and amoebas "experience" when they are not
inhibited from carrying out their normal behaviors. I suppose you could also claim
that Tilden's robots feel "displeasure" if you lift their little legs off the
ground. Or maybe your claim is that humans have some special undefined quality
that sets them apart from lower order animals in terms of goals versus values.
----
It's important to understand that pleasure is *one of many* motivating forces that
drive agents of differing complexity. Sometimes it's just electricity
and code. I think it would be wrong to go from "humans do the things they do only
if they expect to be paid in pleasure at the end of the task" to "machines that do
something must therefore be driven by and capable of experiencing pleasure." I
don't see a logical connection there. Pleasure doesn't have to extend to all
agents, as you suggest, just like a capacity for abstract thought doesn't have to
extend below the complexity of a human mind.
Heartland:
My point is that your choice to promote values even at a cost of your survival is
still motivated by the higher goal of pleasure.
Jef:
Did you ever consider that our internal values are very much relevant to promoting
survival, without requiring an explicit goal of "you must survive!"?
----
Yes, and that is exactly the inconsistency I've been trying to point out to you.
Promotion of survival without the explicit goal of trying to stay alive makes no
sense. Humans are not perfectly rational.
Jef:
I never said that promoting one's values is more important than promoting one's
survival.
----
I'm sorry, Jef, but that is exactly what you've been saying (just look at your
previous sentence). If you hadn't been saying that, I wouldn't have been compelled
to reply to your posts. I'm sure you don't think you are saying that, but you do.
(read below)
Jef:
What I said is that fundamentally what we do is try to affect our environment in
such a way that we promote our values into the future (values for survival
included).
----
And some values are more important than trying to stay alive, right?
So, it seems to you that this stipulation of "[promoting survival] without
requiring an explicit goal of 'you must survive!'" makes all this right and
logically consistent. Let's then focus on this. How can you claim that promoting
survival is more important than promoting (survival-unrelated) values and still
insist that the explicit goal of staying alive is less important than promoting
values?
In other words, how can trying to survive be more important than staying alive?
Staying alive *is the whole point* of trying to survive. It's as if someone
campaigned hard for candidate X while not caring about whether or not X wins the
office.
Jef:
Seeking pleasure is amoral, but tends to correlate with activity that we would
assess as "good".
----
I would say that's a very strong correlation. :-) Perhaps strong enough to define
"good" as "pleasure?"
Heartland:
If you or someone else can show why wireheading is wrong without
resorting to the obvious "yuck factor," I would like to read it.
Jef:
Wireheading can be useful to the extent it improves functioning by compensating for
performance impairments due to detrimental side-effects of our evolved
configuration. For example, physical pain is a useful adaptation in that it forces
a prompt (and usually appropriate) response by the organism away from danger.
However, a detrimental side-effect of this useful adaptation is the possibility of
lingering or disabling pain. Similarly for mental and emotional conditions that
might be compensated beneficially to improve the functioning of the (human)
organism.
----
Yes.
Jef:
However, wireheading can be very bad to the extent that it subverts the "pleasure
sense" by changing values toward the environment, short-circuiting the healthy
mode of allowing the agent to change the environment toward values. Such
short-circuiting of
the growth process is morally neutral with regard to the individual, but
detrimental and morally wrong from the point of view of others in the community.
----
On the face of it, this is a strong argument, but consider this. Pleasure is not
only chocolate and sex (a knee-jerk association I cautioned against before) but
also seeing a healthy environment and a happy community. If wireheading has a
natural tendency to change values toward the environment, that powerful force
will be
automatically balanced by an opposing force that says, "it would be wrong to change
values as this change could negatively affect my environment which would certainly
diminish my potential for experiencing pleasure in the future." This simple and
"cold" benefit vs. cost analysis would keep a rational agent on the path of
"pleasure growth" resulting in equal benefits to the individual *and* society.
Obviously, this "agent" doesn't necessarily refer to a present-day human. This
could only work for rational agents. The point is that it's not that wireheading
itself is broken, but that humans still are.
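To show the kind of "cold" benefit vs. cost analysis I mean, here's a toy sketch
(all names and numbers are made up, purely for illustration): the agent weighs
the immediate pleasure gained by letting its values drift against the discounted
future pleasure lost to a degraded environment.

# Illustrative sketch only; the numbers are arbitrary.
def net_pleasure(immediate_gain, future_loss_per_step, discount, steps):
    # Immediate gain minus the discounted stream of future losses
    # caused by a degraded environment.
    future_cost = sum(future_loss_per_step * discount**t
                      for t in range(1, steps + 1))
    return immediate_gain - future_cost

# Letting values drift: big gain now, steady environmental loss later.
drift = net_pleasure(immediate_gain=10, future_loss_per_step=2,
                     discount=0.9, steps=20)
# Holding values: modest gain now, no environmental loss later.
hold = net_pleasure(immediate_gain=3, future_loss_per_step=0,
                    discount=0.9, steps=20)
print(drift, hold)  # roughly -5.8 vs 3.0: holding values wins

On these made-up figures the rational agent stays on the path of "pleasure
growth" precisely because letting its values drift would diminish its own future
pleasure.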
Jef:
If we're going to attempt to continue this discussion, I think it would be very
effective if we tried reversing roles. If agreeable, you can reply by clearly
summarizing my position (my point of view) as coherently as possible, then state
why anyone might have a problem with it. I'll do the same for your position. This
should greatly minimize the tendency to talk past one another.
----
Ah, it's that trap again. Fool me once... :-)
S.