dgc at cox.net
Fri Apr 1 01:59:36 UTC 2005
Eugen Leitl wrote:
>On Wed, Mar 30, 2005 at 10:09:56PM -0500, Dan Clemmensen wrote:
>>Why is this Extropian? The Taiwanese fabs are at the leading edge of [...]
>>Even if we don't make a nanotech breakthrough, Moore's law will take us to
>>the singularity within 15 years (a factor of 1000.) Yes, Moore's Law is [...]
>Do you have a specific scenario, suggesting how exactly Moore (assuming, it
>will hold up for the next 15 years) will result in the Singularity?
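(As an aside, the factor of 1000 is just the usual doubling-every-18-months reading of Moore's law compounded over 15 years — a quick sketch, with the 18-month doubling period taken as an assumption:)

```python
# Moore's "law" read as a rule of thumb: capacity doubles roughly
# every 18 months (assumed here; Moore's own figures varied).
doubling_period_years = 1.5
horizon_years = 15

doublings = horizon_years / doubling_period_years  # 10 doublings
factor = 2 ** doublings                            # 2**10 = 1024

print(f"{doublings:.0f} doublings -> roughly a factor of {factor:.0f}")
# -> 10 doublings -> roughly a factor of 1024
```

So "a factor of 1000" is 2^10 rounded down, i.e. ten doublings in fifteen years.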
Nope. I'm still operating at the meta level here. Moore's "law" (really a
rule of thumb) synergizes with the old cliche:
"an order-of-magnitude quantitative change is a qualitative change."
I look at developments like Google, grid computing, and the inexorable
improvement in connectivity and bandwidth, and I see multiple opportunities
for emergent behavior. I cannot specify which scenario is most likely. I
suspect it is this growing richness of opportunity, and not any specific
and readily-predictable scenario, that is important.
The environment in which an SI can "spontaneously" emerge is becoming
richer at an exponentially increasing rate. Nine years ago I said that my
gut feeling was that the SI would emerge within ten years. With only 14
months to go, this is looking like a bad call on my part, but I will not
retract the prediction, because the infrastructure is increasingly rich.
My gut feeling is that we are now only one clever insight away from the
SI, and that the amount of cleverness that is needed decreases as the
infrastructure becomes richer.
You are free to view this as a belief unsupported by facts: a "religion"
if you will. I do not think so: I think that any rational analysis will
conclude that there will be a point in time when we can construct an
intelligence smarter than ourselves, so all we are arguing about is the
time when this will occur.