[ExI] taxonomy for fermi paradox fans:

Flexman, Connor connor_flexman at brown.edu
Fri Jan 30 18:41:38 UTC 2015


John Clark said:

> But that's exactly my fear, it may be fundamental. If they can change
> anything in the universe then they can change the very thing that makes the
> changes, themselves. There may be something about intelligence and positive
> feedback loops (like having full control of your emotional control panel)
> that always leads to stagnation. After all, regardless of how well our life
> is going, who among us would for eternity opt out of becoming just a little
> bit happier if all it took was turning a knob? And after you turn it a
> little bit and see how much better you feel, why not turn it again, perhaps
> a little more this time?

This is a really great point. Reminds me of the issues tackled in the MIRI
paper about intelligent agents reasoning about their environment while they
are embedded in it. https://intelligence.org/files/ProblemsSelfReference.pdf
Definite issues can arise here for agents caught between modifying
themselves, modifying their environment apart from themselves, or
modifying the whole system at once. Hopefully consequentialist
reasoning would funnel such agents toward securing a failsafe method
for optimizing utils over their future light-cone before undertaking
any large-scale wireheading.
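
As a minimal sketch of that feedback loop (a toy model of my own; the
greedy agent, the 20% knob increment, and the payoffs are assumptions,
not anything from the thread or the MIRI paper), here is a myopic
agent that compares the immediate felt reward of doing external work
against nudging its own reward multiplier:

# Toy wireheading loop: a myopic agent always prefers turning the knob.
def simulate(steps: int = 10) -> None:
    knob = 1.0         # multiplier on experienced reward ("the control panel")
    world_value = 0.0  # utility actually produced in the environment
    for t in range(steps):
        felt_if_work = knob * 1.0  # felt reward of producing one unit of value
        felt_if_turn = knob * 1.2  # felt reward of the same moment, knob up 20%
        if felt_if_turn > felt_if_work:
            knob *= 1.2            # wirehead a little more; always wins greedily
        else:
            world_value += 1.0     # do something in the world (never reached)
        print(f"t={t:2d}  knob={knob:8.2f}  world_value={world_value:4.1f}")

if __name__ == "__main__":
    simulate()

The knob keeps climbing and world_value never moves, which is John's
stagnation. An agent that instead scored whole plans by total
world_value across its future light-cone would keep working, which is
roughly the failsafe I'd hope consequentialist reasoning secures first.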
Connor

-- 
Non est salvatori salvator,
neque defensori dominus,
nec pater nec mater,
nihil supernum.