[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Thu May 24 15:13:43 UTC 2007


Stathis writes

> On 24/05/07, Lee Corbin <lcorbin at rawbw.com> wrote:
>
> > Now remember that even anything as primitive as an original human
> > being (uploaded, no doubt) will still have total formal control over
> > his emotions. I guess you want to be a happy cat. Well, why not be
> > an even happier advanced human and be able to appreciate it more? 
> 
> Why not just have the happiness at no cost? You might say,
> because being an intelligent being has a certain je ne sais quoi,
> adding richness to the raw emotion of happiness. However,
> if you have complete access to your mind you will be able to
> pin down this elusive quality and then give it to yourself directly...

Yes, heh, heh.  But that seems a good strategy only for the very
short term.  Unless you mix in a tremendous urge to advance (which,
not coincidentally, most of us already possess), you ultimately fail,
by incredible orders of magnitude, to obtain vastly greater satisfaction.

Isn't that really what repels us about the image of a wirehead? No
progress?

> And if that won't quite do either, because it isn't the real thing,

"Real" thing?  Tut, tut.  I am beyond such pedestrian sentiments  :-)

> well, you can just work out what extra positive feeling the real
> thing would have provided and give *that* to yourself directly. 

Quite right.  But again, it would be unwise in the long run.

Besides, I happen to have a very strong *predilection*
for learning and finding truth. "To delight in understanding"
has long been my maxim for what I ultimately wish for. So,
even though you're right and I've been motivated to feel that
way by genetic systems out of my direct control (so far), I
would still choose to go on getting my raw pleasure
indirectly.

Lee
