[ExI] What would an IQ of 500 or 1000 look like?

Anders Sandberg anders at aleph.se
Mon May 25 21:31:37 UTC 2015


David Lubkin <lubkin at unreasonable.com>:
An interesting question falls out: Why do we describe them as superintelligences? Any alien species might be incomprehensible or seem arbitrary in its actions to us. Super- is a judgment.


Yes. We need to see that their ability to reach their goals is vastly superior to ours in order to ascribe superintelligence to them. This is context- and goal-dependent. An alien flopping around in gas giant hurricanes might actually be achieving very deep fluid-dynamic goals, but since we cannot understand those goals we might just as well label its behaviour incomprehensible or arbitrary. The transhuman who habitually taps her fingers and clacks her teeth (vide Peter Watts' "Echopraxia") may just look like she has an annoying personality quirk... until much later, when the full extent of the plan becomes visible and we suddenly see a very smart mind. 


The problem is that it is sometimes impossible to tell what is merely weird and what is smart (e.g. the Bicamerals in Echopraxia: puppet masters or puppets?). Again, I think one can prove this problem undecidable: some code can generate smart solutions that only become visible arbitrarily far into the future. 
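
A minimal sketch of that undecidability argument (the names looks_smart and would_halt are illustrative assumptions, not from the post): if we had a total decider that tells us whether running some code will eventually produce a demonstrably smart solution, we could use it to decide the halting problem, which is impossible, so no such decider exists.

# Sketch only: a hypothetical "smartness" decider would yield a halting decider.

def looks_smart(program_source: str) -> bool:
    """Hypothetical oracle: True iff running `program_source` eventually emits
    a demonstrably smart solution. Assumed to exist only for contradiction;
    no total implementation is possible."""
    raise NotImplementedError("no such total decider can exist")


def would_halt(program_source: str) -> bool:
    """Reduction: decide halting via the hypothetical smartness oracle.
    The wrapper emits its 'smart' output only after the embedded program
    finishes, so smart behaviour becomes visible iff that program halts."""
    wrapper = (
        f"exec({program_source!r})\n"        # runs forever iff the original program loops
        "print('provably optimal plan')\n"   # the 'smart' output appears only afterwards
    )
    return looks_smart(wrapper)

Because the wrapper's smart-looking output appears only after the embedded program terminates, any total smartness decider would double as a halting decider; the contradiction is carried entirely by that wrapping step.
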



In an extropian context, I think we use superintelligence to denote those incomprehensible minds that we hope or fear can see and leverage what we cannot, in ways that might have rapid, extreme consequences for us. It's the impact on us that draws our attention, that distinguishes them from merely weird.


No, I think you are describing super-powerful minds. Power is the ability to effect change, but something does not have to be smart in order to do a lot: a supernova or a financial crash has extreme consequences, yet the nonlinearities driving them are not smart in any sense. Goal-directed smart entities can effect change in ways that are far harder to avoid because they are subtle, leveraging subtle interventions into unexpectedly large effects at the end of long causal chains or through apparently low-probability events. 


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University