<div dir="ltr">Here is my reply to spike as well.<div><br></div><div>I think we are closer to the techno-Singularity than almost anybody thinks. We have this one tool: (super)computer simulation. Except for chaotic processes, such simulations are already quite good - and getting better every day. In the not so distant future, Apple will be able to afford simulating 1000 slightly changed versions of its top product, together with their production lines, or parts of both. </div>
<div><br></div><div>Somehow I don't think it will be Apple that leads, though. It will be thousands of firms that will have to join this high-speed rat race of computerized re-innovation on a monthly, weekly, and then daily basis. We will no longer need to wait months for the next car model. The metamorphosis of the artifacts we know will be dramatically accelerated. This goes for chairs, phones, planes, 3D printers, blocks of code ... </div>
<div><br></div><div>This is the way in which the Singularity is quite near. Maybe not the Singularity we've expected for so long, but something that will swiftly become "the real Singularity".</div><div>
<br></div><div><a href="http://www.worldtribune.com/2013/05/27/woe-is-not-us-from-one-new-energy-revolution-shale-gas-to-another-fire-ice/">http://www.worldtribune.com/2013/05/27/woe-is-not-us-from-one-new-energy-revolution-shale-gas-to-another-fire-ice/</a><br>
</div><div><br></div><div style>(This link contains the essence. One revolution (shale gas) just surprised us; the next is on the horizon.)</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, May 27, 2013 at 6:45 PM, Anders Sandberg <span dir="ltr"><<a href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>On 2013-05-27 15:25, spike wrote:<br>
</div>
<blockquote type="cite">
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">
Look at us; how long have we been debating the singularity
right here? Twenty years now? We aren’t any closer than we
were then, other than computers are faster and better
connected. It is possible that humans are just slightly too
dumb to discover how to cause a singularity.</span></p>
<br>
</div>
</blockquote>
<br>
This is actually a really good question. How smart do you have to be
in order to trigger a singularity?<br>
<br>
Toby Ord pointed out that there is a curious coincidence in much of
our thinking. We (a handwavy "us": those in the community who think
that intelligence explosions are real possibilities) tend to think
there is some limit - dogs will not invent singularity tech no
matter how much time [*] we give them - yet many of us think there
is some takeoff point near current human mental and technical capacities.
This limit is presumably set by the laws of nature (in particular,
the laws of computational complexity). Yet our current state is
totally contingent - it is happening right now, and was not around
in the past nor will it be in the future unless we manage to
stagnate. So why are we assuming the takeoff point is near this tiny
little window of capacity we have right now? One could imagine
Greek philosophers talking about Aristotle's "talking tools" and the
progress over in Alexandria coming up with an intelligence explosion
concept, yet clearly being far away from any takeoff points.<br>
<br>
Some possible answers might be that (1) the takeoff point is either
far below or above us (see footnote below); (2) the question is
badly posed, or the concepts used are mere verbiage; (3) there is an
anthropic factor where beings who talk about singularities tend to
be found just before them; (4) there is no such thing as a takeoff
point; (5) we are living in an unusual era. <br>
<br>
<br>
[*] Give a simple animal enough time to evolve in the right
environment, and it may of course become intelligent, develop
tech, and have a singularity. So framing the question right turns
out to be really hard: how do we distinguish between waiting for
natural evolution (plus the individual efforts that result from
it), having some resources and intelligence and using those, and
other methods like random trial and error? One possible answer to
the question might simply be that it is wrongly posed: give enough
hydrogen some time, and it will turn into singularity creatures. I
suspect re-framing the question so that it becomes well posed will be
rather useful for improving our thinking about the topic.<span class="HOEnZb"><font color="#888888"><br>
<br>
<br>
<pre cols="72">--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University
</pre>
</font></span></div>
<br>_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
<br></blockquote></div><br></div>