[ExI] The point of emotions
Lee Corbin
lcorbin at rawbw.com
Thu Apr 24 14:50:22 UTC 2008
Thomas writes:
From: M1N3R
Sent: Monday, April 21, 2008 1:41 PM
> An interesting topic. I just have to add something to it.
> I've seen several most astonishing thoughts about this.
> The feeling process in us, if I may call it that, is due to
> very complex phenomena in our brain and all through
> the body, be they chemical or electrical in nature.
Thanks for making explicit a huge point of controversy.
No less than Damasio himself claims that the body is
*essential* for the experience of emotion. I totally
disagree, and imagine that he must just be using words
very differently from how we use them here.
Look, if some computer interface attached to a bodiless
head simulated *all* the feedback conveyed by nerves
in a normally functioning body, then what would be the
difference? Clearly the "goose bumps" and other pleasures
would come through just the same (e.g. if you pay attention
you can feel certain areas of your body, your shoulders for
example, being pleasurably affected by music).
So for now, I defy anyone to make a good argument that
having a body is really essential for anything. All of what
happens to us really occurs just in the brain.
> If you think of it like that, it is just a mass of currents
> running through the body, quite chaotic.
You mean the brain.
> To understand these would mean a new beginning in science,
> quite evidently. Now, to build them into computers seems
> dubious to me. Would it really be beneficial?
Certainly. Let's take a few examples. Suppose that the
afferent (as opposed to efferent) nerve impulses could be
completely replaced by the output from a robot body.
Then I could get rid of the body I have now, which has
all sorts of less than >H limitations (including declining
organ viability).
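To make the substitution concrete, here is a toy sketch in
Python (all the names are mine, purely illustrative): if the
brain only ever consumes afferent signals, then any source
that produces the same signals is, from the brain's point of
view, a body.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class AfferentSignal:
    """One bundle of sensory traffic headed toward the brain."""
    channel: str     # e.g. "touch", "proprioception", "vision"
    payload: bytes   # stand-in for the actual firing pattern

class SensorySource(Protocol):
    def read(self) -> AfferentSignal: ...

class BiologicalBody:
    def read(self) -> AfferentSignal:
        return AfferentSignal("touch", b"nerve traffic")

class RobotBody:
    def read(self) -> AfferentSignal:
        # Different hardware, same signal format.
        return AfferentSignal("touch", b"nerve traffic")

def brain_step(source: SensorySource) -> None:
    # The "brain" consumes signals with no way of telling
    # where they came from.
    signal = source.read()  # experience would happen here

brain_step(BiologicalBody())
brain_step(RobotBody())  # indistinguishable from the brain's side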
Moving on to the philosophically more interesting examples:
just my brain could be uploaded, and by definition a
successful upload gives me a life at least as rich, in every
way one can imagine, as my old body+brain could deliver.
> As for humans, I consider emotions an inseparable part
> of our self
Yes, and a lot of ink has been spilled on this thread that
time has not permitted me to address. *Emotions*,
when the term is broadly enough understood, really *are*
an essential part of the ability to function. Anyone who
doesn't think so needs to review the "Damasio card
experiments" (the Iowa Gambling Task), or results like them.
We used to talk about it here a lot. Bottom line (to me):
correctly calculating the benefit of a course of action can
involve more arithmetic than people are capable of, and so
the results of a lot of experiences instead accumulate into
an "emotional signature": you acquire strong biases against,
say, getting yourself into certain situations, biases that
you cannot consciously account for. The nineteenth-century
railroad foreman Phineas Gage seemed to function all right
so far as anyone could measure, but his life was screwed up
because this mechanism had been damaged in his famous
railroad-construction accident.
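A toy sketch of that mechanism (the names are mine, not
Damasio's, and this only illustrates the caching idea, not
the brain itself): rather than redo the intractable
arithmetic, the agent keeps a cheap running average of past
outcomes per situation and consults that.

from collections import defaultdict

class EmotionalSignature:
    def __init__(self) -> None:
        # situation -> running average of past outcomes
        self.bias = defaultdict(float)
        self.count = defaultdict(int)

    def record(self, situation: str, outcome: float) -> None:
        # Fold one more experience into the stored signature.
        self.count[situation] += 1
        n = self.count[situation]
        self.bias[situation] += (outcome - self.bias[situation]) / n

    def gut_feeling(self, situation: str) -> float:
        # Cheap lookup; the arithmetic behind the bias is gone.
        return self.bias[situation]

sig = EmotionalSignature()
for outcome in (-5.0, -8.0, -3.0):    # repeated bad experiences
    sig.record("risky deck", outcome)

print(sig.gut_feeling("risky deck"))  # strongly negative bias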
> For example, love just grabs one by the throat and turns
> them over. Should a computer be exposed to that?
Any program I write to do work for me probably doesn't
need that to do its job. Oh, yes, true, I may want my
refrigerator to absolutely love its job, and absolutely
delight in keeping itself well-stocked.
An AI, on the other hand, doesn't need what people call
"irrational love", where one is inclined to do things not in
one's own best interest (e.g. going about raping those one
is infatuated with).
> Or how about depression? These are mostly due to the
> presence or absence of certain chemicals in the brain
yes
> (and body),
No, not in any deep philosophical sense: the body's
contribution is just more feedback, and feedback can be
simulated.
> as I gather. Now, provided we get to know the exact
> nature of feelings in our system, how would that be simulated?
No one knows how, exactly, at the present time. But
nature did it, and so we have a working model to
reverse-engineer.
> Because if I think of a silicon or whatever computer,
> I just can't think of anything other than pure simulation.
> Would that be the same?
If you are a philosophical functionalist, as most people
here are, then the answer is a resounding "YES".
> ...somehow human thought, however rational, always
> has a certain emotional background to it (if we want
> humanlike AI) [for us to become]. After so much talk
> I come back to my beloved future model, collective
> consciousness.
An interesting and difficult topic. If I couldn't be in two
places at once (as most people continue to suppose, no
matter how many times some of us straighten them out),
then I would worry very much about joining a collective
consciousness. Here is why: on a normal day I'm me
because I reference all those Lee Corbin memories.
This new entity might spend 1% of its time on my memories
and 99% on other people's. What good would that do me?
Or as I like to put it, just how much runtime would Lee get?
On the other hand, I'd have no trouble joining a collective
as a copy, so long as I knew that there was an unadulterated
Lee still carrying on. Hey, 1% is better than 0% :-)
Lee
P.S. Any "group consciousness" discussion should OBVIOUSLY
go into a new thread.
P.P.S. Thanks Thomas, for volunteering to post in plain text from
now on.
> If plausible, aided by a certain intelligence enhancement,
> wouldn't that be the best solution (to become posthuman
> but still remain human in nature)? Please comment on that,
> I'd be so interested to hear your opinion.