[ExI] Are we too stupid? (Was: Cold fusion)

Anders Sandberg anders at aleph.se
Mon May 27 16:45:46 UTC 2013


On 2013-05-27 15:25, spike wrote:
>
> Look at us; how long have we been debating the singularity right 
> here?  Twenty years now?  We aren't any closer than we were then, 
> other than computers are faster and better connected.  It is possible 
> that humans are just slightly too dumb to discover how to cause a 
> singularity.

This is actually a really good question. How smart do you have to be in 
order to trigger a singularity?

Toby Ord pointed out that there is a curious coincidence in much of our 
thinking. We (handwavily meaning those of us in the community who think 
intelligence explosions are real possibilities) tend to think there is 
some limit - dogs will not invent singularity tech no matter how much 
time [*] we give them - yet many of us think there is some takeoff point 
near current human mental and technical capacities. This limit is 
presumably set by the laws of nature (in particular, the laws of 
computational complexity). Yet our current state is totally contingent: 
it exists right now, did not exist in the past, and will not persist 
into the future unless we manage to stagnate. So why are we assuming the 
takeoff point is near the tiny window of capacity we occupy right now? 
One could imagine Greek philosophers, discussing Aristotle's "talking 
tools" and the progress over in Alexandria, coming up with the concept 
of an intelligence explosion while clearly being far away from any 
takeoff point.

Some possible answers might be:

(1) The takeoff point is either far below or far above us (see the 
footnote below).
(2) The question is badly posed, or the concepts used are mere verbiage.
(3) There is an anthropic factor, where beings who talk about 
singularities tend to be found just before them.
(4) There is no such thing as a takeoff point.
(5) We are living in an unusual era.


[*] Give simple animals enough time to evolve in the right environment, 
and they may of course become intelligent, develop technology, and have 
a singularity. So framing the question right turns out to be really 
hard: how do we distinguish between waiting for natural evolution (and 
the individual efforts it eventually produces), starting with some 
resources and intelligence and deliberately using them, and other 
methods such as random trial and error? One possible answer might simply 
be that the question is wrongly posed: give hydrogen enough time, and it 
will turn into singularity-creating creatures. I suspect reframing the 
question so that it becomes well posed will be rather useful for 
improving our thinking about the topic.


-- 
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
