[ExI] Survival (was: elections again)

Eugen Leitl eugen at leitl.org
Wed Jan 2 13:37:35 UTC 2008


On Tue, Jan 01, 2008 at 10:48:06PM -0600, Bryan Bishop wrote:

> Instead of that simple algorithm, perhaps talking about lateral thought 
> or lateral integration would be more appropriate?

I must admit that term doesn't ring a bell. Can you expand?
 
> > The environment doesn't have to be embodied. Unlike simpler darwinian
> > systems, human designs don't need to be embodied in order to be
> 
> Embodiment brings along loads more information, while an abstraction 

Embodiment is good for two things: first, if your reality model is
inaccurate, and second, if your computers are so ridiculously weak that
a physical experiment is cheaper than a numerical experiment.
Both are mostly true for today's computers and models, though some
very slow and/or very expensive experiments and/or simple
problems are already being run in the virtual drydock.

> only provides limited (human-selected) information, so there's a 

Numerical models in principle let you sample very large regions
of problem space very rapidly, by moving electrons and photons and
flipping spins instead of moving atoms. People are impressed
with what a mole of DNA bases as oligos in solution can do, but
they would be far more impressed with what a mole of spin valves
can do with about the same number of atoms and joules.
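
A back-of-the-envelope sketch of that comparison, in Python. The event
rates and the one-op-per-event accounting are order-of-magnitude
assumptions for illustration, not measured figures:

# Rough BOTEC: a mole of DNA oligos hybridizing in solution vs. a mole
# of spin valves switching. Rates below are order-of-magnitude guesses.
AVOGADRO = 6.022e23

# Assumption: ~one useful hybridization event per oligo per 100 seconds.
dna_ops_per_s = AVOGADRO * 1e-2

# Assumption: each spin valve switches at ~1 GHz.
spin_ops_per_s = AVOGADRO * 1e9

print(f"DNA soup:    ~{dna_ops_per_s:.1e} ops/s")
print(f"spin valves: ~{spin_ops_per_s:.1e} ops/s")
print(f"ratio:       ~{spin_ops_per_s / dna_ops_per_s:.0e}x")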

> funneling of what humans think to be relevant into what we are 
> inputting, and how would that be useful? Ai isn't going to come about 
> by giving less information than we get (and I mean neural information, 
> not necessarily bits and bytes from the net).

The network isn't blind and dumb. Even today, there are sensors and
actuators online you wouldn't believe. But of course we're not talking
about today's networks and automation, but about resources which are many
decades away. I would be very surprised if, 20 or 30 years from now, I
didn't have a 3D prototyping machine in my cellar with a self-rep
closure of almost unity, or even above.
 
> > evaluated, making progress both much faster, and also allowing to
> > leap across bad-fitness chasms. (The underlying process is still
> > darwin-driven, but most people don't see it that way).
> 
> I suppose it could be darwinian esp. if you have humans filtering the 
> information, but I still don't see how that's useful.

I meant the (as yet unvalidated) hypothesis that thought is driven by
darwinian dynamics running on a substrate of cortical columns.
 
> > > the environment lacks requisite variety, then the "recursively
> >
> > Most of the environment is other individuals. That's where the
> > complexity is.
> 
> That's locally accessible complexity, but have you ever tried asking 
> your neighbor for their brain? Not so accessible, is it? :)

I meant the component of the fitness function that is neither you nor
the habitat, but the others you share the habitat with.
 
> > Most of what engineers do in simulation rigs today is highly relevant
> > to our world. Look at machine-phase chemistry; the science is all
> > known, but it is currently not computationally tractable, mostly
> 
> I would never have expected to see the statement "the science is all 
> known" coming from you. 

For chemistry, the science has been completely known since about 1930.
The fly in the ointment is that the equations are useless without
computers, and with current codes the computers make truly lousy models,
and really slow ones at that.

If you don't believe me, look at computational chemistry. It does
very well for small systems, especially static ones. Increase the number
of particles and the system size, and you have to start
stacking one approximation upon another, until the result looks more
like Toon Town than your city.
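
To make the "one approximation upon another" point concrete, here is a
minimal sketch of how the standard formal cost scalings of common
electronic-structure methods blow up with system size. The prefactor is
arbitrary; only the relative growth matters:

# Formal cost scaling of common electronic-structure methods with the
# number of basis functions N. Arbitrary prefactor; relative growth only.
methods = {
    "DFT (~N^3)":          lambda n: n**3,
    "Hartree-Fock (~N^4)": lambda n: n**4,
    "CCSD(T) (~N^7)":      lambda n: n**7,
    "Full CI (~2^N)":      lambda n: 2.0**n,
}

for n in (10, 100, 1000):
    print(f"N = {n}")
    for name, cost in methods.items():
        print(f"  {name:20s} relative cost ~ {cost(n):.1e}")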
 
> > because our infoprocessing prowess is puny. I could easily see
> > bootstrap of machine-phase self-rep which happens 99% in machina, 1%
> > in vitro. In fact, this is almost certainly how we meek monkeys are
> > going to pull it off.
> 
> How so? Biocellular life doesn't have to do that much computation ... 

I'm still talking about the bootstrap of machine-phase chemistry, aka
mechanosynthesis, aka building and breaking individual bonds with
numerically controlled nanorobots. To do this, you must build up
a library of deposition and abstraction reactions covering a
repertoire of structures rich enough to deposit the depositing structure
itself. The amount of control required in those reactions is so high that
your experiments resemble argon-matrix isolation, monomer crystal
polymerization, 2D physi/chemisorbed systems in cryogenic UHV, and
what happens in enzymatic reaction centers -- only more so, not as a
stochastic ensemble of systems, and observed with the resolution of a
proximal probe. Obviously, most physical experiments won't be very
helpful except as a loose source of constraints on the problem space.
You won't be able to build up a full set of reactions, only candidates
for such.

> but it also has a few billion years of precomputation to back it up. 

Most of that computation was used to bootstrap robustly evolving systems,
and its efficiency was low (brownian motion of linear biopolymers
in a solvated soup is Monte Carlo in slow motion). Compared to the
theoretical limits of computation on a self-replicating substrate, the
entire computation done by the molecular soup is negligible.
 
> Unfortunately, the stream-of-consciousness model has fooled enough 
> people (even amongst us here) into believing that future enhancements 

Yes, people are so extremely fond of purely sequential processes
that they never do the sequential process of a back-of-the-envelope
calculation (BOTEC) and realize that purely sequential anything is
useless in practice.
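
Here is that BOTEC, done once, via Amdahl's law. The 10,000-core count
is just an illustrative assumption:

# Amdahl's law: speedup on p cores when a fraction s of the work is
# irreducibly sequential. The serial part caps the speedup at 1/s.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.5, 0.1, 0.01):
    print(f"serial fraction {s:4.0%}: "
          f"speedup on 10,000 cores ~ {amdahl_speedup(s, 10_000):.1f}x "
          f"(hard ceiling {1.0 / s:.0f}x)")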

> are going to be linear permutations and combinations or simple plays on 
> the old stuff, yes even though they talk of exponential growth (which 
> is all down the same path for them).

Very true.
 
> > You'll notice ecosystems don't do huge individuals, and that's not a
> > coincidence.
> 
> Google.

Even though corporations are not a good model of ecosystems, Google is not
the only search engine vendor. All scenarios involving singletons must
account not only for the narrowest possible population bottleneck, but also
for the mechanisms that maintain a zero-diversity monoclone from then on.
Notice that you can't synchronize monoclones in a relativistic universe,
even across very small spatial dimensions: the bigger you get, the slower
you get. Such a scenario also assumes the environment has zero diversity,
so the system must apply diversity forcing to the environment. If that
sounds like a nice horror novel, that's because it has great potential
to be one.
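
A quick sketch of the relativistic point: even at light speed, the
round-trip delay needed to keep a spatially extended monoclone in sync
grows linearly with its size. The example distances are illustrative:

# Light-lag BOTEC: minimum round-trip synchronization delay across a
# spatially extended system, ignoring all processing overhead.
C = 299_792_458.0  # speed of light, m/s

examples = {
    "one city (50 km)":    5.0e4,
    "Earth diameter":      1.2742e7,
    "Earth-Moon distance": 3.844e8,
    "1 AU":                1.496e11,
}

for name, meters in examples.items():
    rtt = 2.0 * meters / C
    print(f"{name:20s} round trip >= {rtt:.3g} s")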
 
> > > supported by all available knowledge and its latent connections,
> > > therefore remaining vulnerable to the threat of asymmetric
> > > competition with a broad-based system of cooperating
> > > technologically augmented specialists.
> >
> > Do you see much technological augmentation right now? I don't.
> 
> Not direct augmentation, but I think Jef was trying to point out that 
> even a well-organized set of technologists sitting behind computers can 

People should read "The Mythical Man-Month". Nothing has changed since
then. In fact, the situation today is much more complicated because
suddenly(!) everyone realizes they must go parallel, and debugging parallel
asynchronous code is a nightmare even if it's homogeneous.

> get lots of stuff done. And already these guys can do much more than, 
> say, the Novamente ai system.

There is no Novamente AI system. There isn't anything like AI at all, so
far. AI is something I could send out to buy me groceries, and it would be
back with roughly the right stuff in an hour.
 
> > Getting a lot of bits out and especially in in a relevant fashion,
> > that's medical nanotechnology level of technology. Whereas, building
> 
> Off-topic: have we ever done some quick calculations on bit/unit density 
> for nanotech scenarios? Given our current pathetic nanotech setups, 

Tons of it. Drexler, Merkle, Freitas, Sandberg & Co have published a
lot on it. http://nanomedicine.com/ is a good place to start, but I would
expect you already know all about that site.

> it's a few hundred units to a bit or to an operation, but with progress 
> this ratio can be reversed.
> 
> > biologically-inspired infoprocessing systems is much more tractable,
> > and in fact we're doing quite well in that area, even given our
> > abovementioned puny computers.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


