[ExI] Can and should happiness be completely liberated from its task of motivating?
stathisp at gmail.com
Tue Jun 26 11:50:35 UTC 2007
On 26/06/07, TheMan <mabranu at yahoo.com> wrote:
> Or can we, in the future, as posthumans, become
> "robots" in the sense that we will, unlike now, never
> be driven to the slightest extent by any emotions,
> feelings or anything like that, but by pure
> intelligence, and still be at least as good at staying
> alive and developing as we are today (maybe infinitely
> better?), and at the same time be always enormously
> happier than today - and always exactly _equally_
> happy no matter what happens to us and whatever we do
> and think?
> I suppose our happiness level will in the best case
> scenario go on increasing for ever, as our continuous
> development will constantly push the limits of what is
> "maximum possible happiness" for us, by changing our
> very design again and again. But can
> [happiness,wellbeing,pleasure,euphoria,bliss], also in
> that kind of scenario, successfully be completely
> liberated from its so far essential task of motivating
> us to act as wisely as possible? Or will a preserved
> connection between happiness and motivation always
> make us more fit for survival and further development
> than a disconnection would?
Since there is no necessary connection between intelligence and
emotion (at the very least, no connection between intelligence and a
particular quality or quantity of emotion), I see no reason why such a
being could not spin off subprocesses to take care of the goals it
considers important while its happiness centres are maximally stimulated.
This is in direct analogy with the idea that humans can create an AI
to serve them, while they sit around enjoying themselves. The
counterargument is that this slave AI, or the housekeeping and
research branch of an integrated AI, would break off on its own, and
perhaps kill the useless freeloaders.