[ExI] cyprus banks

Eugen Leitl eugen at leitl.org
Fri Mar 22 10:33:22 UTC 2013


On Fri, Mar 22, 2013 at 03:38:37AM +0100, Tomasz Rola wrote:

> [...]
> > The Parallella's dual ARM cores are just vestigial appendices on the DSP 
> > array, and the FPGA (Zynq 7020). They're auxiliary, all the heavy 
> > lifting is done elsewhere.
> 
> Ok, this is interesting even if still infant. Thanks.

The Parallella (assuming they're ever going to ship the dev kit)
is testing some new waters, but nothing too risky, I would hope.

The DSP cores are basically an Analog Devices DSP cluster on
a chip, with each core optimised for 32-bit (single-precision)
float performance and carrying 32 kByte of embedded memory.

The leftover FPGA (especially if you chuck out unneeded
functionality like driving the HDMI video output) is probably 
sufficient to implement a message-passing interface with 
6 links (8 bit each), which allows a 3D torus and can 
potentially achieve up to ~1 GByte/s of throughput per link. 
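
To make the topology concrete, here's a quick Python sketch (my
own illustration, not anything from the Parallella toolchain; the
8x8x8 grid is a made-up example) of how six links per node map
onto a 3D torus:

    # Six links per node: +/-X, +/-Y, +/-Z, with wraparound at the
    # grid edges -- the wraparound is what makes it a torus.
    NX, NY, NZ = 8, 8, 8  # hypothetical grid dimensions

    def torus_neighbours(x, y, z):
        """Return the six wraparound neighbours of node (x, y, z)."""
        return [
            ((x + 1) % NX, y, z), ((x - 1) % NX, y, z),  # +X / -X
            (x, (y + 1) % NY, z), (x, (y - 1) % NY, z),  # +Y / -Y
            (x, y, (z + 1) % NZ), (x, y, (z - 1) % NZ),  # +Z / -Z
        ]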

The energy efficiency is roughly 8x that of a Blue Gene/Q, 
so such a system would be within touching distance of the 
energy efficiency required for an EFlop system.
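
A back-of-envelope check, assuming Blue Gene/Q at roughly
2 GFLOPS/W (its approximate Green500 figure; my number, not
anything measured on this hardware):

    # 8x Blue Gene/Q efficiency, and what an EFlop costs in power.
    bgq_gflops_per_w = 2.0               # ~Green500 figure for BG/Q
    efficiency = 8 * bgq_gflops_per_w    # ~16 GFLOPS/W
    eflops_in_gflops = 1e9               # 1 EFLOPS = 10^9 GFLOPS
    watts = eflops_in_gflops / efficiency
    print(watts / 1e6, "MW")             # ~62.5 MW for an EFlop system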
 
> > For instance, one of our projects is converting images to
> > chemical structures. What strikes you about that problem?
> 
> Nothing yet. Out of curiosity, what images? By "converting to chemical 

Digitized documents, many millions of them. From various ages,
with varying quality, using different representations, some
of which would cause a human expert analyst to at least pause
briefly to eliminate impossibilities when faced with ambiguous
representations (which are the rule rather than the exception).

> structures", do you mean creating unambiguous description of such 
> structures?

Yes, translating them into a standard representation of chemical
structures and reactions, with an accuracy of 99% or better and
a known error rate (meaning that you need to know when you've
made a mistake, since manual examination of each document is
right out: it would take so much time that you could just do
the whole job by hand, which you can't, for cost reasons).

As I said, that task is Turing-complete. You need a machine
vision package with chemical common sense, which is not feasible 
with an expert system approach. The only alternative is to
have a huge training set and use machine learning, which is
precisely about not coding things down explicitly, and which
requires very serious hardware (both in terms of memory per node
and batching across many nodes) to deliver the required performance.

This is way out of the league of a small shop. It's something
for the likes of Google, but they don't have the skills and
the drive to do it, as it would be a tiny niche relative to
their core business. 
 
> > We're not disputing, we're trying to figure out what each of us means. I 
> > think we're making progress.
> 
> Some progress. Yep.
> 
> I think I now understand our disagreement better, and it is not a 
> disagreement actually. More like, in a multidimensional space (let's not 
> define it too well) the meaning of "high level" is two rather different 
> points. Yours is more about raw computing power, right? Mine is, well, I 
> guess I have been infected by the Lisp bug and it has already started to convert 

I've been on a Lisp/AI track since the early 1980s. The Lisp
environment (and dynamic language environments in general) is
obviously a near-optimal tool for a human programmer, but it's
rather useless for AI, as AI is not about coding things down
explicitly. That approach has a complexity-ceiling issue, and the
issue of externalizing internal language, which is a dead end
(experts can't tell you exactly how to be an expert just by
using language).

The problem with Lisp is that it comes from the lambda calculus,
and mathematicians never saw the need to formulate a massively
parallel branch of reasoning, both because the top-level reasoning
of humans is sequential, and because they're physics-agnostic,
proudly so.

This also applies to CS people, and to most developers, which is
why they get continual surprises of the bad kind. (It took about
an hour of my time to convince a developer that the
threads-in-a-global-memory model is on its last legs, and that
99% of all developers have no clue how to solve things in a
shared-nothing asynchronous model with 10^3 to 10^9 independent
systems and no sequential sections. Such a model is
nondeterministic until you reach back in time to make it
deterministic, which must remain an absolute exception, or else
you bring things to a screeching halt, and it all has to be
done manually.)
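
For the flavour of it, a minimal Python sketch of the
shared-nothing model (my own toy illustration, nothing more):
each worker owns its state outright and coordinates only by
messages, never through shared memory.

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        state = 0                          # private; no other process sees it
        for msg in iter(inbox.get, None):  # run until a None sentinel arrives
            state += msg                   # all coordination is via messages
        outbox.put(state)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        for i in range(10):
            inbox.put(i)       # asynchronous sends; ordering only per channel
        inbox.put(None)        # sentinel: tell the worker to finish
        print(outbox.get())    # 45
        p.join()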

> my mind (despite a clear similarity to some venereal disease, this one has a 
> strong promise of a happy ending, although promise and delivery may differ in 
> case of each patient). So I am more interested in algorithms, sometimes 
> algorithms creating algorithms (this can be upped as many levels as one 
> wishes but I am not this high on evolutionary ladder...) and so on. Speed 
> is important later, when (if) the program starts doing something 
> noticeable.

Here's a deceptively simple problem: make a GA learn evolvability.
There are not nearly enough transistors on this whole planet at
the moment to make that happen, never mind in a single,
tightly-coupled installation.
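
The nearest conventional stab at it is self-adaptive mutation,
where each genome carries its own mutation step size and that
step size itself evolves. A toy Python sketch of my own, and
emphatically not a solution to the problem as stated:

    import random

    def fitness(genome):
        return -sum(g * g for g in genome)      # maximise: drive genes to 0

    def make_individual(n):
        return {"genes": [random.uniform(-5, 5) for _ in range(n)],
                "sigma": random.uniform(0.01, 1.0)}  # evolvable step size

    def mutate(ind):
        # sigma mutates first, then mutates the genes: lineages with a
        # well-tuned sigma out-evolve the rest, so evolvability is selected.
        sigma = max(1e-4, ind["sigma"] * random.lognormvariate(0, 0.2))
        return {"genes": [g + random.gauss(0, sigma) for g in ind["genes"]],
                "sigma": sigma}

    pop = [make_individual(10) for _ in range(50)]
    for _ in range(200):
        pop.sort(key=lambda i: fitness(i["genes"]), reverse=True)
        pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]
    print(fitness(pop[0]["genes"]), pop[0]["sigma"])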
 
> Wrt to software domination, I think I need to retract my previous 
> statements a bit. Long ago, one bud showed me his "pocket clock" stuffed 
> in a soap box, built on integrated circuits, diodes and other such stuff. 
> I was in awe. It had no buttons, re/setting was done by short-circuiting 
> proper pair of wires. Nowadays, if I was to do such stunt, I'd go with 
> some small 8-bit CPU, a ready-made display module and some glue code in 
> assembly to drive it. It would've been easier to design and test, and 
> change the code until it does what I wanted, rather than solder ICs and 
> later sit on the mess of wires, debugging it with multimeter. So there are 
> situations where I would love to stay with software even if it was 
> overkill in terms of hardware used, power drawn and overall elegance. On 
> the other hand, there is a charm of constructing things out of carefully 
> counted number of gates (which I never did, just to be clear).

The reason for keeping the number of logic blocks low is that
each block incurs a delay, and a signal can cover at most about
300 um per picosecond (the speed of light), or 300 nm per
femtosecond; a 1 fs period is equivalent to a 1 PHz refresh rate,
so the delay budget directly limits the spacing between adjacent
blocks.
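
The arithmetic behind those figures is just the speed of light:

    c = 3.0e8           # speed of light, m/s
    print(c * 1e-12)    # 3e-4 m = 300 um covered per picosecond
    print(c * 1e-15)    # 3e-7 m = 300 nm covered per femtosecond
    print(1 / 1e-15)    # 1e15 Hz = 1 PHz for a 1 fs period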
 
> Actually, given a fact that I am a theoretical solderer, I'd love to stay 
> with software every time.
> 
> Now, back to original quotation that started this "nondispute":
> 
> -  http://www-rohan.sdsu.edu/faculty/vinge/misc/WER2.html
>  
> - "Progress in hardware has followed an amazingly steady curve in the last 
>   few decades. Based on this trend, I believe that the creation of 

Actually, people who've been watching the benchmark space in CPUs
and GPUs over the last couple of years would disagree. Transistor
count never translated directly into performance for most
applications, but the last couple of years have been especially
disappointing. As we go off-Moore (the doublings are no longer at
a constant rate, since each one is associated with a lot of money
spent that won't see ROI anytime soon), things will become
progressively more disappointing.

>   greater-than-human intelligence will occur during the next thirty 
>   years."
> 
> I have heard/read this quotation a few times over the years, and it made me 
> increasingly unhappy. It may be true that hardware eventually goes to 

Not just you.

> "point S" or however we call it, but I'm afraid it is not going there fast 
> enough. I would have been much happier if someone had said something 
> like:
> 
>  "Progress in _software_ has followed an amazingly steady curve..."
> 
> or 
> 
>  "Progress in _hardware_ _and_ _software_ has followed an amazingly steady 
> curve..."
> 
> It would have felt like we were going somewhere. Software can be 
> evolved much faster.
> 
> I have looked at the essay and I noticed there were some annotations 
> added, so maybe I will be a bit happier when I read them.
> 
> Now, I'd rather not go into another iteration of our nondispute on hw vs 
> sw. It feels more and more irrelevant, bifurcating and more irrelevant. 
> There is a melt of hw and sw and what acts as piece of hw may be actually 
> a melt. But for my own use, I will retain the notion that hw != sw. To me, 

If I wanted to implement a subset of MPI on the Zynq 7020, I
would need to formulate it in an HDL. That same description
could equally be cast into an ASIC.

> hw is bought in a shop and it changes its ways only if this had been 
> designed in. Sw is something I myself write in emacs or vim or cat. By 

You can write VHDL in an editor just fine. You'll find that ARM is a
fabless operation. 

> this definition, Windows is hw but I don't actually care. Well, ok, to be 
> exact it is intended to be hw but someone could change its ways if she sat 
> on it for long enough. But OTOH, the same could have been said about 
> Pentium. Frak it. It sounds idiotic but ok, Windows is hw. One more reason 
> to frak it. All the way down to hell.
> 
> [...]
> > What we need is Avogadro scale computing, which is 3d integrated 
> > molecular electronics. Such things will be COTS sometime, but that time 
> > is several decades removed yet.
> 
> I am not sure what kind of problem you want to solve with it, but even 

Near-brute-force search in parameter space. E.g. if you want to
recapitulate what a few GYrs of a planetful of chemicals did. 

> such a thing moves the limit of possible computation only somewhat further. 
> But I would like to have it, too. Actually, I need it too. Even though I 
> don't yet have the software to keep it warm.

You could. It would be a lot like using a silicon compiler.
Instead of memory, your code would occupy a volume of logic,
and you would lay out your data flow across that circuit.
Not manually, of course; your toolchain would do that for you,
and it would be quite grateful for explicit hints (a la
"unroll that inner loop using paths all evaluated in parallel").
 
> > > unprecedented hardware. We could as well boast about unprecedented 
> > > colors of computer chassis. Irrelevant, without software, which nobody 
> > > seems to say or acknowledge, AFAIK.
> > 
> > Modelling physical problems is not particularly demanding, in terms of 
> > software complexity. Ideally, it's a direct physical implementation of a 
> > particular kernel, as a ring of gates biting their own tails, and only 
> > directly talking to similar ouroboros loops packed in a closest packing 
> > on a 3d lattice. Because it's the only game in town, relativistically.
> 
> Doesn't sound optimistic and wasn't meant to be optimistic, I guess.

It is very optimistic, because it's a generic blueprint, and it's
a low-complexity system to boot. It's small enough that you could
find things in that search space, assuming you have enough
computation at your disposal. The system also has positive
feedback, in that it assists in its own construction. You could
start working on it right away, if you had access to a meganode box.


