[ExI] taxonomy for fermi paradox fans:

BillK pharos at gmail.com
Fri Jan 30 18:56:35 UTC 2015

On 30 January 2015 at 18:19, John Clark wrote:
> But that's exactly my fear, it may be fundamental. If they can change
> anything in the universe then they can change the very thing that makes the
> changes, themselves. There may be something about intelligence and positive
> feedback loops (like having full control of your emotional control panel)
> that always leads to stagnation. After all, regardless of how well our life
> is going who among us would for eternity opt out of becoming just a little
> bit happier if all it took was turning a knob? And after you turn it a
> little bit and see how much better you feel, why not turn it again,
> perhaps a little more this time?

Yup, I agree this is a dangerous possibility. But will AIs 100 times
more intelligent than us have a better chance of controlling it? They
might not be driven by emotion as much as humans are.

I also like Keith's suggestion that fast-thinking AI civilisations
might reach the end-point of their civilisation within a short span of
real time. But, again, would much greater intelligence protect them?

Unfortunately, in humans, very high intelligence doesn't seem to
confer much evolutionary benefit.
