[ExI] Bootstrapping a singularity (not-essay) (was Re: MAX MORE in Second Life yesterday)

Bryan Bishop kanzure at gmail.com
Fri Jun 13 22:26:55 UTC 2008


On Friday 13 June 2008, Michael Anissimov wrote:
> Natasha, the possibility of a hard-takeoff AI operating largely
> independent of humans is a real possibility, not single-track
> thinking.

Grounding problem. 

Instrumentation is still largely grounded in human society.

> If someone created a smarter-than-human AI, it might be easier for
> the AI to initially improve upon itself recursively, rather than
> enhance the intelligence of other humans.  This would be for various
> reasons: it would have complete access to its own source code, its
> cognitive elements would be operating much faster, it could extend
> itself onto adjacent hardware, etc.

What adjacent hardware? That's only the case if the seed hardware was 
wired up to extra capacity in the first place. So far you've only said 
superintelligence, not anything about implementation like hardware and 
so on. Let's say we have a self-recursive seed AI implemented; then, if 
it's going to keep accelerating, it has to have accelerating 
computational capacity, which necessitates physical implementation.
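
To make that concrete (a minimal toy model with invented numbers, not a 
claim about any actual system): suppose the AI's effective speed is its 
hardware capacity times a software-efficiency factor it can improve 
recursively. On fixed hardware the self-improvement saturates instead 
of accelerating:

    # Toy model: effective capability is capped by physical hardware.
    # Software self-improvement recovers wasted capacity, but on a
    # fixed machine the effective speed levels off rather than
    # accelerating.
    hardware_ops = 1e15   # fixed physical capacity (made-up number)
    efficiency = 0.01     # fraction of capacity usefully exploited

    for cycle in range(10):
        # each cycle removes half of the remaining inefficiency
        efficiency += (1.0 - efficiency) * 0.5
        print(cycle, round(efficiency * hardware_ops))

Accelerating past that ceiling means acquiring more physical hardware, 
which is exactly where the grounding issue comes in.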

> Are you familiar with the general arguments for hard takeoff AI?
>  Quite a few are found here:
> http://www.singinst.org/upload/LOGI//seedAI.html.  If you assign a
> hard takeoff a very low probability, then I would at least figure
> that you've read the arguments in favor of the possibility and have
> refutations of them.

Read it again, Michael:
> The ability to add and absorb new hardware.  The human brain is
> instantiated with a species-typical upper limit on computing power
> and loses neurons as it ages.  In the computer industry, computing
> power continually becomes exponentially cheaper, and serial speeds
> exponentially faster, with sufficient regularity that "Moore's Law"
> [Moore97] is said to govern its progress.  Nor is an AI project
> limited to waiting for Moore's Law; an AI project that displays an
> important result may conceivably receive new funding which enables
> the project to buy a much larger clustered system (or rent a larger
> computing grid), perhaps allowing the AI to absorb hundreds of times
> as much computing power.  By comparison, the 5-million-year
> transition from Australopithecus to Homo sapiens sapiens involved a
> tripling of cranial capacity relative to body size, and a further
> doubling of prefrontal volume relative to the expected prefrontal
> volume for a primate with a brain our size, for a total sixfold
> increase in prefrontal capacity relative to primates [Deacon90].  At
> 18 months per doubling, it requires 3.9 years for Moore's Law to
> cover this much ground.  Even granted that intelligence is more
> software than hardware, this is still impressive.
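
(For what it's worth, the arithmetic there checks out; a quick 
back-of-the-envelope, assuming a clean sixfold increase and a constant 
18-month doubling time:

    import math
    doublings = math.log(6, 2)  # ~2.58 doublings for a sixfold increase
    print(doublings * 1.5)      # ~3.9 years at 18 months per doubling

The problem isn't the arithmetic.)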

*cough* It's funding-limited, instead of self-manufacturing and truly 
self-replicating and so on. Although the virtual bitspace of the 
internet may seem to expand effortlessly, it rests on the physical 
implementation of the transistors, which build up the flip-flops in our 
RAM modules, which make up the memory that we have so much of (well, it 
never really seems like enough, but 2003 estimates put it at around 161 
exabytes, and by now that's a very, very low estimate).

So, let's think about the hardware manufacturing behind the AI 
machinery. Consider the Si fab. That's a linear production facility. 
You could build a fab that builds Si fabs, I guess, but that's still 
linear. You need a Si fab that builds the silicon componentry *plus* 
more Si fabs, i.e. a self-replicating machine; otherwise you're still 
limited by the output of the fab, and in the case of scarcity-centric 
economics and money (getting donations to add hardware), you're limited 
by the amount of money and all sorts of weird economics, which I'm just 
about ready to refuse to touch at all (simply because it's ridiculous). 
The goal is acceleration, not limitation by old, silly systems. ;-) 
That's why the manufacturing processes are needed, and that's why the 
grounding problem is important. Not just symbolic grounding problem 
nonsense; I'm talking about cybernetic interfaces, I guess. Feedback, 
etc.
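
Rough numbers make the linear-vs-self-replicating point obvious (a toy 
sketch with arbitrary figures, not real fab economics): a fixed set of 
fabs gives linearly growing cumulative output, while fabs that also 
build fabs give exponential output.

    # Toy comparison: fixed fabs vs. fabs that also replicate themselves.
    CHIPS_PER_FAB = 1000   # output per fab per period (arbitrary)
    PERIODS = 10

    def fixed_fabs(fabs=1):
        # cumulative output grows linearly with time
        return fabs * CHIPS_PER_FAB * PERIODS

    def self_replicating_fabs(fabs=1):
        # each period every fab makes chips *and* one new fab,
        # so fab count and cumulative output grow exponentially
        total = 0
        for _ in range(PERIODS):
            total += fabs * CHIPS_PER_FAB
            fabs *= 2
        return total

    print(fixed_fabs())             # 10000
    print(self_replicating_fabs())  # 1023000

Donation-driven hardware purchases live in the first regime; 
acceleration needs the second.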

Sorry for the hijack.

- Bryan
________________________________________
http://heybryan.org/
