[ExI] Survival (was: elections again)

Bryan Bishop kanzure at gmail.com
Wed Jan 2 04:48:06 UTC 2008


On Tuesday 01 January 2008, Eugen Leitl wrote:
> On Tue, Jan 01, 2008 at 12:30:52PM -0800, Jef Allbright wrote:
> > This touches on a key point that seems to elude the most outspoken
> > proponents of hard take-off singularity scenarios:  So-called
> > "recursively self-improving" intelligence is relevant only to the
>
> I never understood why people said recursive in that context.
> It's simply a positive-feedback enhancement process. It doesn't use
> a stack or tail recursion, and it's certainly not a simple
> algorithm, like (iterate over all elements; enhance each; stop when
> you're done). Exponential runaway self-enhancement, or explosive

Instead of that simple algorithm, perhaps talking about lateral thought 
or lateral integration would be more appropriate?
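
(A toy sketch of the contrast, purely illustrative; the "enhance" step 
and the numbers are placeholders, nothing real. The "simple algorithm" 
makes one fixed pass over its elements, while the positive-feedback 
loop feeds its own output back in as its next input.)

def simple_pass(elements):
    # "iterate over all elements; enhance each; stop when you're done"
    return [e * 1.1 for e in elements]  # one fixed sweep, then it halts

def positive_feedback(capability, rounds=10):
    # each round's gain scales with the capability already reached, so
    # the output of one step becomes the input of the next
    for _ in range(rounds):
        capability += 0.1 * capability
    return capability  # exponential runaway; no stack, no recursion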

> enhancement, or bloody transcension (in the sense of Daleish Robot
> God goodness) is pretty descriptive in comparison.
>
> > extent it improves via selective interaction with its environment. 
> > If
>
> The environment doesn't have to be embodied. Unlike simpler darwinian
> systems, human designs don't need to be embodied in order to be

Embodiment brings along loads more information, while an abstraction 
only provides limited (human-selected) information. That funnels what 
humans think is relevant into what we feed the system, and how would 
that be useful? AI isn't going to come about by giving it less 
information than we get (and I mean neural information, not 
necessarily bits and bytes from the net).

> evaluated, making progress both much faster and also allowing them to
> leap across bad-fitness chasms. (The underlying process is still
> darwin-driven, but most people don't see it that way).

I suppose it could be darwinian, especially if you have humans 
filtering the information, but I still don't see how that's useful.

> > the environment lacks requisite variety, then the "recursively
>
> Most of the environment is other individuals. That's where the
> complexity is.

That's locally accessible complexity, but have you ever tried asking 
your neighbor for their brain? Not so accessible, is it? :)

> > self-improving" system certainly can go "vwhooom" as it explores
> > possibility space, but the probability of such explorations having
> > relevance to our world becomes minuscule, leaving such a system
> > hardly
>
> Most of what engineers do in simulation rigs today is highly relevant
> to our world. Look at machine-phase chemistry; the science is all
> known, but it is currently not computationally tractable, mostly

I would never have expected to see the statement "the science is all 
known" coming from you. 

> because our infoprocessing prowess is puny. I could easily see
> bootstrap of machine-phase self-rep which happens 99% in machina, 1%
> in vitro. In fact, this is almost certainly how we meek monkeys are
> going to pull it off.

How so? Biocellular life doesn't have to do that much computation ... 
but it also has a few billion years of precomputation to back it up. 

> > This suggests a ceiling on the growth of **relevant** intelligence
> > of a singleton machine intelligence to only slightly above the
> > level
>
> Why singleton? That's yet another sterile assumption. Single
> anything ain't going to happen either. Humanity is not a huge pink
> worm torso with billions of limbs, which started growing in Africa,
> then spreading all over the planet as a huge single individual.

Unfortunately, the stream-of-consciousness model has fooled enough 
people (even amongst us here) into believing that future enhancements 
will be linear permutations and combinations, or simple plays on the 
old stuff, even though they talk of exponential growth (which, for 
them, all runs down the same path).

> You'll notice ecosystems don't do huge individuals, and that's not a
> coincidence.

Google.

> > supported by all available knowledge and its latent connections,
> > therefore remaining vulnerable to the threat of asymmetric
> > competition with a broad-based system of cooperating
> > technologically augmented specialists.
>
> Do you see much technological augmentation right now? I don't.

Not direct augmentation, but I think Jef was trying to point out that 
even a well-organized set of technologists sitting behind computers can 
get lots of stuff done. And already these guys can do much more than, 
say, the Novamente AI system.

> Getting a lot of bits out, and especially in, in a relevant fashion,
> that's medical nanotechnology level of technology. Whereas, building

Off-topic: have we ever done some quick calculations on bit/unit density 
for nanotech scenarios? Given our current pathetic nanotech setups, 
it's a few hundred units to a bit or to an operation, but with progress 
this ratio can be reversed.
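
(Purely a back-of-envelope sketch; every figure below is a placeholder 
assumption, not a measurement.)

units_per_bit_today = 300        # assumed: a few hundred structural units per bit
units_per_bit_mature = 1 / 300   # assumed: the ratio reversed, ~300 bits per unit

density_gain = units_per_bit_today / units_per_bit_mature
print(f"reversing the ratio implies roughly a {density_gain:,.0f}x bit-density gain")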

> biologically-inspired infoprocessing systems is much more tractable,
> and in fact we're doing quite well in that area, even given our
> abovementioned puny computers.

- Bryan
________________________________________
Bryan Bishop
http://heybryan.org/


