[ExI] cyprus banks

Eugen Leitl eugen at leitl.org
Thu Mar 21 16:21:48 UTC 2013


On Wed, Mar 20, 2013 at 10:46:49PM +0100, Tomasz Rola wrote:

> The top500 is all about (clockspeed * cores) and MIPS/FLOPS, not new 
> functionality. Some parts of software may be helped by functionality 

The availability of computational resources constrains new functionality
that can be implemented in terms of their primitives. For many problem
classes, all things being equal, kilonode < meganode < giganode < teranode.

Again, I'm not very happy with the Top500 list since LINPACK is not a
particularly good metric for problems I care about.

> encoded in hardware (plus recompilation/patching), but those parts of 
> hardware become obsoleted by introduction of another software - like new 
> a/v codecs.

Codecs in particular are a good example of functionality that is
hardware-limited. What is possible is limited by the computational
resources available. Look at the realtime demos on
http://www.heise.de/newsticker/meldung/GTC-2013-Techdemos-zeigen-Zukunft-der-Echtzeit-Computergrafik-1826822.html
-- the only reason these run in realtime is that the new hardware
can render the basic physics quickly enough.
 
> Software rules. Software dictates what functionality will be introduced in 
> new hardware. Windows is implemented in software. Not even in rom. (Patent 

The core functionality I care about is system size and execution speed,
and more advanced software makes this *slower*. At the high end,
where we're looking at the refresh rate of a computational volume,
there is no software *at all*. Provably so. It's all state, evolving
via nearest-neighbor interactions of the hardware primitives, as
represented by the logic occupying the volume's voxels.
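
Purely as an illustration of what I mean by state evolving through
nearest-neighbor interaction, here is a toy cellular-automaton update
on a 3d voxel volume (the rule itself is arbitrary, not any real
hardware primitive):

    import numpy as np

    def step(state):
        # One synchronous update of the whole volume: each voxel sums
        # its six face neighbors (periodic boundaries) and applies a
        # toy threshold rule. No program counter, no software -- just
        # local state and a fixed local rule.
        neighbors = sum(np.roll(state, shift, axis=axis)
                        for axis in range(3) for shift in (-1, 1))
        return ((neighbors >= 2) & (neighbors <= 4)).astype(np.uint8)

    volume = np.random.randint(0, 2, size=(64, 64, 64), dtype=np.uint8)
    for _ in range(100):
        volume = step(volume)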

> hyenas - there is prior art, new os could have been introduced to Amiga 
> computers by changing rom chips on a mainboard).
> 
> Even when there are changes introduced into hardware, they are always 
> trying to minimize negative (like noncompatibility) impact on software.

We're not communicating very well here, I think.
 
> > > king. I mean, doing new functionality was relegated to software, and 
> > > hardware was more and more expected to just execute software fast(er) and 
> > > reliable(r).
> > 
> > This might have happened in some alternative universe, but not in
> > this universe.
> 
> Actually, the way I see it, exactly this happens in our universe.
> 
> As far as I would like, we don't live in my version of alternative 
> universe. This one is acceptable (barely, at least if we stick to 
> technical side). But if we were in my version, there would have been 
> software-defined cpus and other elements, gpu would have been a piece of 

Yes, we agree. Only you're calling state software here, which is not
a good description. An optimal system does not have a static mapping,
where you compile state into a static blob; it has a blob of state
that is evolving onboard.

> fpga or something similar. Things like this are, perhaps, in the labs. But 
> I don't see them in my shops. Yes I know current fpgas are slow and so on, 

The Parallella's dual ARM cores are just vestigial appendices on the
DSP array and the FPGA (Zynq 7020). They're auxiliary; all the heavy
lifting is done elsewhere.

> but certainly they could have been sped up a little and made more 
> practical, if there was enough demand. A computer built on 
> software-definable elements is not technical impossibility, but the state 
> of business is such that I may not see it before I am all grey and 
> indifferent. Alternatively, I may ecke out some money and build it myself.

Well, I'm already all grey and indifferent, or nearly there ;)
 
> (So vote for Rola Universe :-), or even better, give me loads of money to 
> build universal reconstructor and I promise to fraud as much as I can).
> 
> > > Therefore exciting oneself with great developments in hardware domain 
> > > (true, there is amazing amount of Nobel-level thinking, I admit) is kind 
> > > of like boasting about faster and faster cars, not telling there is not so 
> > > many places to drive them to.
> > 
> > If you want to reach the stars in nogeological times you need
> > to do better than chemical rockets.
> 
> We are not going to have even this, I'm afraid. But this is a different 
> subject from computing hardware. And I agree on this one.

Not a different subject, since it was a metaphor.
 
> > > When one looks at software, I might be biased but if there is any kind of 
> > > amazingly steady curve, I wouldn't say it is upwards.
> > 
> > There is no fundamental progress in software. The progress in hardware has
> > recently been limited, especially since Moore has ended.
> 
> Exactly. And it should be progress in software, not in hardware, that 

I see no progress in software, as long as developing means human
primates manually massaging data and jabbering in meetings.
For instance, one of our projects is converting images to
chemical structures. What strikes you about that problem?
Why, it's Turing-complete. If you want to do it without
human intervention (with it, the whole idea is moot), you
need to imbue the system with chemical common sense. Can
a developer encode that explicitly? Hell, no.

> should receive some positive stimulation, rather than promotion of trash 
> programming languages whose names start on "J", team work and other BS.

I'm happy I don't have to deal with that BS.
 
> > > So unless you, predictators reading my words :-), want to build 
> > > Singularity out of transistors (doomed by design, too many elements, 
> > 
> > Transistors would do fine, but we've ran into scale limits.
> 
> One can make a cpu with some transistors and some microcode. Or one can 

No, because executing microcode bloats your circuitry, and it only
executes macros of primitives. If you're looking at solving problems
which must be done in 5-10 gate delays, tops, then software
is not a solution. It's a problem. A ns might be 30 cm long,
but we're dealing with ps and lower now.
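
To put rough numbers on that (back-of-the-envelope, taking the vacuum
speed of light; on-chip signals are slower still):

    c = 3.0e8           # m/s, speed of light
    print(c * 1e-9)     # ~0.3 m: a signal travels ~30 cm per ns
    print(c * 1e-12)    # ~3e-4 m: only ~0.3 mm per ps

So at picosecond timescales a signal crosses a fraction of a
millimetre per gate delay, which is why everything has to be decided
locally, within a handful of gate delays.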

> make a cpu with many more transistors, and nonflexible. Building 
> Singularity out of transistors only is to me not wiser than building 
> all-mechanical 64-bit Pentium with all-mechanical gigabyte of ram.

Of course there are far more optimal ways to do computation. 
My point is that 2d or 3d arrays of transistors would be 
sufficient, if we could continue to scale them. Which we
can't, so the issue is moot. We'll buy ourselves some time with
3d stacking and architectural redesigns, but that is no
longer a huge playground.
 
> Truly, there is problem of scaling.
> 
> > There is no fundamental difference between hardware and software.
> > At the hight end there's no software, only hardware, and its state.
> 
> Hardware is fixed, AFAIK. Changing state can help a bit, to some degree. 

No, hardware is not fixed, because people design hardware. To them,
it is not a constant. 

> The more software, the more flexibility. I am not so much concerned about 

Of course once you've reached an optimal configuration (in truth,
hardware and representation must co-evolve), hardware improvement
stops.

> "height" and the like - I am just happy if I can change ways of my machine 
> by typing in some text and calling gcc or sbcl on it. I could have 
> experimented a bit, like building PDP-10/Lisp Machine (they are two _very_ 
> different and not connected designs) clone out of soft-computer, if I had 
> one. And it would do me safer browsing and safer banking, with attention 
> of script kiddas turned toward screwing Pentium owners (soon to be ARM 
> owners). As I said, this universe can do, I can emulate whatever machine I 
> can fit on my box.

Unfortunately I need something which makes national facilities look
like pocket calculators, so software buys me nothing.
 
> I am not sure if we should dispute much about it all. I wanted just to 

We're not disputing, we're trying to figure out what each of us
means. I think we're making progress.

> point out, that some comments about incredible hardware progress, while 
> true, do not touch a clue of the problem at all. What is great hardware 
> good for, if all we run on it is Windows, sometimes Linux? The Singularity 
> is not going to be hardware based, it will be software run on some off the 
> shelf cpu(s). So, there is no such software, we only boast about 

What we need is Avogadro-scale computing, which means 3d-integrated
molecular electronics. Such things will be COTS eventually, but that
time is still several decades away.

> unprecedented hardware. We could as well boast about unprecedented colors 
> of computer chassis. Irrelevant, without software, which nobody seems to 
> say or aknowledge, AFAIK.

Modelling physical problems is not particularly demanding in terms
of software complexity. Ideally, it's a direct physical implementation
of a particular kernel, as a ring of gates biting their own tails,
each talking directly only to similar ouroboros loops in a closest
packing on a 3d lattice. Because, relativistically, it's the only
game in town.
 
> It doesn't matter if hardware is 10e3 or 10e6 times faster than x years 

It is not nearly as fast as some people think.

> ago. Building any prophecies on such facts is useless. And clueless. 
> Singularity will be software. It may, some time later, choose to make it's 
> own hardware, possibly soft-definable. But it will not come to existence 

I disagree. You need about a human equivalent, and that's not
something current hardware can touch.

> just because someone pushes the clock up to petahertz and memory to 

Clocks don't work at that scale. You don't need them anyway, since
you can synchronize arrays of free-running oscillators at whatever
scale you like.
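
A minimal sketch of that idea (the textbook Kuramoto model on a ring,
purely illustrative): free-running oscillators with slightly different
natural frequencies, each coupled only to its two nearest neighbors,
phase-lock without any global clock distribution.

    import numpy as np

    N, K, dt = 256, 4.0, 0.01
    omega = 1.0 + 0.02 * np.random.randn(N)   # natural frequencies
    theta = 2 * np.pi * np.random.rand(N)     # random initial phases

    for _ in range(20000):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        coupling = np.sin(left - theta) + np.sin(right - theta)
        theta = theta + dt * (omega + K * coupling)

    # Order parameter close to 1 means the array has phase-locked.
    r = abs(np.exp(1j * theta).mean())
    print(f"phase coherence r = {r:.3f}")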

> exabyte.

There is no difference between CPU and memory, because your data
already has to be where you operate on it if you want to do so as
quickly as possible. Neurons have no core. Neurons have no software.
Neurons have state, and that's all they need.
 
> Now, I imagine someone arguing that with such great hardware, certainly 
> somebody will write a software in no blink time. However, I disagree.

Nobody will write software for it, because humans have limits.
Try coordinating 10^9 things happening at the same time. People
crap out below 10^1. It's their own hardware limitation, at the
top level.


