[extropy-chat] Singularity economic tradeoffs (was: MARS: Because it is hard)

Eugen Leitl eugen at leitl.org
Fri Apr 16 17:38:15 UTC 2004


On Fri, Apr 16, 2004 at 07:54:47AM -0400, Dan Clemmensen wrote:

> A singularity driven by computer power and software does not depend on any
> particular hardware improvement such as molecular circuitry, or any 
> particular
> software technology such as AI (except in the broadest sense.)

SI is driven by superhuman, super-realtime agents. Augmenting people has
a high threshold, and hence will come late. Way too late. The same technology
will make AI-capable hardware available much earlier. Software doesn't figure
prominently, because humans write software, so it has a ceiling. Methods
are getting better, and there are synergies, but there is a distinct limit.
De novo AI has a bootstrap threshold, which means that while the hardware for
human-level AI might already be available, or will be shortly, it won't get
used until bootstrap succeeds. Bootstrap of de novo AI is computationally
very expensive, and hence will definitely require molecular circuits. Metric
moles of them. 
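
A rough back-of-envelope sketch of what "metric moles" of switches buys you,
with assumed, commonly cited figures for scale only:

    # Illustrative sketch; all numbers are rough assumptions, not claims.
    AVOGADRO = 6.022e23   # molecular switches per mole
    SYNAPSES = 1e15       # upper-end estimate of synapses in a human brain

    switches = 1.0 * AVOGADRO   # one mole of molecular circuit elements
    print(f"One mole of switches: {switches:.2e}")
    print(f"Switches per synapse-equivalent: {switches / SYNAPSES:.0e}")
    # ~1e9 switches per synapse-equivalent: the kind of brute headroom a
    # computationally expensive bootstrap search would need.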

Anything else only becomes relevant if, for whatever reason, AI fails to
materialize. This could happen, but it would be genuinely surprising.
 
> I cannot make legitimate definite predictions, but I (and many others) 
> can try to
> make educated guesses based on trends. There is an aphorism in marketing:

The short history so far suggests our guesses are not as educated as we
would like them to be. 

> "the only trend you can count on is demographics." Well guess what? Moore's
> law has been as consistent as demographics over at least the last 50 years.

Moore's law is about integration density, not computer performance. Like any
exponential process in a finite-resource universe, it is bound to suddenly
start deviating from reality at some point. 
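
A crude sketch of how soon the density exponential runs into a hard wall,
using assumed, illustrative numbers (a ~90 nm process in 2004, a ~2-year
density doubling, ~0.3 nm silicon atomic spacing as a floor):

    import math

    feature_nm = 90.0         # assumed leading-edge half-pitch, 2004
    atom_nm = 0.3             # rough atomic spacing in silicon: a hard floor
    years_per_doubling = 2.0  # classic density-doubling period

    # Each density doubling shrinks linear feature size by sqrt(2).
    doublings = 2 * math.log2(feature_nm / atom_nm)
    print(f"~{doublings:.0f} density doublings left before atomic scale")
    print(f"~{doublings * years_per_doubling:.0f} years at the historical rate")
    # The curve has to bend well before that point.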
 
> Incidentally, I can and have made definite predictions. I agree that I 
> cannot
> make legitimate predictions :-)  Eight years ago I predicted the singularity
> within ten years.

Many people have predicted a singularity at 2000+X. Your X was unusually short,
but many others placed their bets not long after. I put mine somewhere in the
2030..2050 range, IIRC. While that is still comfortably remote, I would now
rather push the date further out. I would still put it under a century, though. 
 
> >Of course, if any success is high probability of "bad", and Singularity
> >research increases probability, the payback might be not that good after 
> >all.
> 
> Perhaps I misunderstand you or I was unclear.
> 
> Other unrelated research that will lead to faster and cheaper computers 
> will be
> undertaken anyway, as will software research and deployment that will 

Computers nowadays are not all-purpose, so AI will take special
architectures, very unlike what they teach in CS classes. It will further
take AI code, which is otherwise useless except for adaptive robotics and
gaming.

It is a pretty specific field, with not much drive behind it.

> unintentionally
> improve the environment that may spontaneously create an SI or that may 

There is very little spontaneity about building a supercritical AI. It is a
deliberate project, with a very specific goal. Google is not going to
suddenly awaken, and start commenting on your queries.

> dramatically
> simplify the work of an intentional SI developer.

I see SI come from computational neuroscience, not CS. 
 
> I think Eliezer and whoever wants to help him, or whoever wants to 
> start a parallel project
> with similar goals, should be funded. If I understand your statement, 
> you object to
> funding such project because they may awaken the demon. By contrast, I 

The golem, rather. Yes, I don't think funding supercritical AI seeds is a
good idea. I think we should regulate any research targeting an AI smarter
than a chimp (given that nobody has produced anything even remotely
naturally intelligent, this currently translates into "no regulation at all").
I agree that funding general AI for autonomous systems and adaptive robotics
is worthwhile, and potentially very high-ROI.

> think the god or
> demon will wake anyway, so research to awaken the god rather than the 
> demon is a
> good idea.

Research to prove that self-consistent, sustainable friendliness is possible
is worthwhile. Unless that research yields positive results, research aimed
at waking the thing up is extremely dangerous, and hence irresponsible.
 
> Being of a sunny and carefree disposition, and having a "belief" that 
> reason tends to
> "good," I think that the SI will rapidly create a morality for itself 
> that I will consider
> "good." Therefore, I'm in favor of actively accelerating the advent of 
> the SI if possible.

If you believe that, I have a sweet deal on some real estate in Brooklyn.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net

