[ExI] homebrew cold freon bath super computer

Tomasz Rola rtomek at ceti.pl
Sat Mar 10 18:02:30 UTC 2012


On Fri, 9 Mar 2012, spike wrote:

> 
> 
> >... On Behalf Of Tomasz Rola
> Subject: Re: [ExI] homebrew cold freon bath super computer
> 
> On Fri, 9 Mar 2012, spike wrote:
> 
> >> We have a tool at work which is a cold Freon bath... Would that be a cool
> science fair project or what?
> 
> >...Or what, I think. I mean, if you'd like some results out of this, better
> aim for as much practicality as possible...
> 
> Ja, I think you are right.  The Freon bath coming available caused me to
> think of the idea of a stack of low-end processors, but it is waay overkill
> for that application, a solution in search of a problem.  Now I need to
> think of something to use that cold Freon.

Can't help the mania. :-)

> >...With some modifications, your plan could be doable:
> 
> >...4a. For the rest of the cash buy yourself some cheap Linux based micros
> with ARM cpu and 200-something megs of ram, like BeagleBone or Raspberry Pi
> 
> This I find interesting.  I might try that.  Those Pis are supposed to be
> powerful and easily interfaced.  I can imagine setting a cluster of about
> 100 of those in an ordinary water bath cooling system dumping the heat into
> a refrigerator, running something that is useful enough to take up room in
> my fridge.  I think I can wire them together and figure out a way to bring
> the signal out to a switch on the kitchen counter, perhaps by cutting a
> notch in the door seal, or drilling a hole in the side of the refrigerator.
> Good chance my bride will be less enthusiastic about this whole notion than
> I am.
> 
> >...4b. You may consider buying and dismantling old broken laptops - ...
> 
> Possibly better performance than phones, but too much stuff there I don't
> need.  Your Raspberry idea is better.
> 
> >...5. Install your soft (Beowulf, maybe?), connect the plugs, run run run.
> 
> Ja, but run what what what?  Folding at home?  Codebreakers?

Okay. This question should have been asked at the very beginning. Spike, do 
you want to build a cluster for some specific computing task, or do you 
want to have it as an interesting hobby project?

I think my previous answer should be scrapped; remember only the "frak the 
phones" part. I really stand by that part. I might be wrong, but I spent 
some time thinking/reading and came to the conclusion that once a 
manufacturer stops supporting a phone (or any other proprietary embedded 
device), it quickly becomes dead weight, mostly good only as a doorstop or 
a fancy ninja weapon.

So, back to the drawing board.

If all you want is to take part in an existing distributed project, you 
might be better off with Intel-compatible hardware. I think those projects 
have precompiled clients ready for download.
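
For instance, joining a BOINC-based project on a Debian-flavoured box goes 
roughly like this (a sketch from memory - the project URL and account key 
below are placeholders, of course, not real ones):

  # install the client from the distro's repository
  sudo apt-get install boinc-client
  # attach this machine to a project of your choice
  # (take the URL and key from your project's account page)
  boinccmd --project_attach http://some.project.example/ YOUR_ACCOUNT_KEY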

You can use some old hardware if you have it. If it is just one piece of 
hardware, there is no need to buy a switch.

In case you want more hardware, you want a switch, too. There is plenty of 
cheap SOHO-aimed network gear. Wifi might work in such a case (a small 
cluster, <10 pieces). Or a router. Actually, it depends on how you connect 
to the internet. But I think you want to keep your home network unexposed 
to the rest of the cruel world.

If you want many tens of machines, you want to build a skeletal network, 
too. I don't know much about this, but I guess you need a gigabit switch 
for the backbone and 100Mbps switches for the subnets. SOHO-class switches 
might or might not be good enough (costs++). Wifi will most probably be 
unable to perform in such a situation (i.e. I guess you would face strange 
blockages in the cluster's work, helped only by a reboot).

Now, after you perform the manual labour and get all the easy stuff done, 
there are issues related to software and maintenance. The more you know 
from this list, the better:

1. Installing Linux, basic configuration tasks. There are some Linux 
distros tuned for Beowulf clusters; I guess this can make a few things 
easier.

2. Moving around the installed system, reading docs (man pages, 
/usr/share/doc) - the best browser for this that I know of is the emacs 
editor. Using the command line for simple things.

3a. Installing and removing software with the package manager.

3b. Configuring the network with static addresses.

3c. Configuring the network with DHCP.

4. Grabbing some source from sourceforge or savannah.gnu or 
savannah.nongnu, unpacking it (tar, gzip/bzip2, unzip), finding your way 
inside (emacs), reading the README, using configure and make to compile 
it, installing into /usr/local (I use stow) - see the first sketch after 
this list.

5. Writing nontrivial shell scripts (emacs, bash) using various command 
line tools (ls, cut, awk, sed and more) - there is a second sketch below.

6. (optional) Writing some (working) code in Python

7. (optional) Writing some (working) code in C
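
To illustrate point 4, the classic source-tarball dance goes more or less 
like this (a sketch: "foo-1.0" is a made-up package name, and the stow 
step assumes you configured with a matching prefix):

  # unpack and enter the source tree
  tar xzf foo-1.0.tar.gz
  cd foo-1.0
  # build with a private prefix, so stow can manage it later
  ./configure --prefix=/usr/local/stow/foo-1.0
  make
  sudo make install
  # symlink everything into /usr/local
  cd /usr/local/stow && sudo stow foo-1.0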

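And for point 5, a taste of what such scripts look like - this one prints 
the 1-minute load average of every node, busiest first (the node names are 
made up, and it assumes passwordless ssh between the nodes):

  #!/bin/bash
  # print the 1-minute load of each node, sorted, busiest first
  for node in node01 node02 node03; do
      # /proc/loadavg looks like "0.15 0.10 0.05 1/123 4567";
      # field 1 is the 1-minute load average
      load=$(ssh $node cat /proc/loadavg | cut -d' ' -f1)
      echo "$node $load"
  done | sort -k2 -rn
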
If not much of the above sounds familiar, you might want to learn a bit 
first, before you start fiddling with hardware. I mean, you can also count 
on learning on the run, but I'm afraid that would end with problems piling 
up, cursing all day long, and heavy binging in the later stages.

For starting with Linux, you can try VirtualBox. I assume you are a 
Windows user, right? VirtualBox will give you a virtual computer, with 
which you can do every bad thing you ever dreamt of. After an irreversible 
error, you simply scratch it and start over. I would choose Debian Linux, 
for practical reasons. Usually I recommend Ubuntu, but it is actually 
aimed at Windows refugees and packed with all kinds of graphical wizards. 
In a cluster situation, you should count only on the console (a vt100 
terminal, if you like).
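
If you prefer the command line even there, VirtualBox ships with a 
VBoxManage tool, and setting up a scratch Debian VM goes roughly like this 
(a sketch from memory - the machine name, sizes and iso filename are mine, 
check "VBoxManage --help" for your version):

  # create and register an empty machine, give it ram and NAT networking
  VBoxManage createvm --name debtest --ostype Debian --register
  VBoxManage modifyvm debtest --memory 512 --nic1 nat
  # give it a disk and the Debian installer iso
  VBoxManage createhd --filename debtest.vdi --size 8192
  VBoxManage storagectl debtest --name sata --add sata
  VBoxManage storageattach debtest --storagectl sata --port 0 \
      --device 0 --type hdd --medium debtest.vdi
  VBoxManage storageattach debtest --storagectl sata --port 1 \
      --device 0 --type dvddrive --medium debian-netinst.iso
  # boot it and install as usual
  VBoxManage startvm debtest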

Oh, there might be some nice semi-automated Linux distros for painless 
Beowulf; I'm not really sure, because I am not that deep into the subject. 
I assume that if I can manage with a console, I can also manage with 
browser-based administration. The other way around is not so sure.

Now, some practical issues with Raspberries. AFAIK they are not selling 
yet - and I am not sure if you could buy more than 2 when they start. 
Also, while all these micros with ARMs are very cool and some folks are 
doing nice things with them (like DecBox: 
[ http://retrocmp.com/projects/decbox ]), they are also rather new and 
unsupported, so once you decide to build on them, you are very much on 
your own. Hence the need to know what to do with your hands in front of a 
Linux console (hint: keep yer hands on top of the desk).

These micros come with GPUs, which gives them nice theoretical 
performance. However, I am not sure if their GPUs are supported by the 
current software incarnation. So you might end up with 700MHz nodes that 
maybe will one day outmatch a 2-year-old PC cpu, one on one.

I think big-scale usage of ARM cpus is in the future (which means it is 
not quite in the present).

BTW, the current PC market is aimed at least partly at gamers, which means 
performance rulez. With micros, what rulez is energy saving. So this is a 
bit different. Just MHO.

Apart from Limulus, mentioned by Eugen, I found this (a bit aged but 
interesting):

http://www.calvin.edu/~adams/research/microwulf/

You can also google around for hints; there are some project descriptions 
on the net (hardware, software).

So, to sum this all up in one sentence: I myself would aim at a small 
cluster and a cheapish 100Mbps switch (though 16 ports would be nice to 
have). For the actual hardware - rather PCs/laptops than Raspberries, if 
you count on starting soon. If you go with micros, it will take you more 
time, I think. Rather than pushing for more nodes, I would push for more 
power per node. What you would not like is time spent on maintaining your 
cluster - the more exotic it gets, the more you drink and the more 
divorces you have. So I would try to minimize this one thing and make that 
aim a general rule, overriding any other rule.

Whatever you choose, don't blow it up. It's better to have a small cluster 
that you are able to operate than a huge unusable one. I would start 
small, adding units in small steps.

> >...6. Don't worry about temperature, if you don't overclock you should be
> fine with air cooling (and maybe custom made big box with few grand voltage
> regulated 120+mm propellers and many holes) - but I would measure temps all
> the time, just in case...
> 
> Hard to say.  If I invest in 100 of these and want them in close quarters, I
> need a major power supply.  I would think it would require a home
> refrigerator scale cooling system.

Actually, they say a Raspberry Pi needs about 3.5W to operate. Even if all 
of this came out as heat, 100 pieces would be about 350W - the rough 
equivalent of a chandelier with four 100W lightbulbs.

If you want to put them all into a small volume, temperature measurement 
is your friend (a few temperature sensors placed strategically). 
Ventilation is another friend. You may regulate it all manually, or you 
may want to create some form of automation, increasing the air pumping or 
turning off some nodes if the temperature goes too high. Remember to 
account for measurement inaccuracies. But automation is, uh, tricky - see 
the sketch below.
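
A very dumb sketch of such automation, for one node (the sysfs path is the 
usual Linux one but differs between boards, and the 70C limit is just a 
number I picked, not a spec - run it as root, from an init script or so):

  #!/bin/bash
  # halt the node when the cpu gets too hot
  LIMIT=70000    # millidegrees Celsius, i.e. 70 C
  while sleep 30; do
      temp=$(cat /sys/class/thermal/thermal_zone0/temp)
      if [ "$temp" -gt "$LIMIT" ]; then
          logger "too hot ($temp mC), halting"
          shutdown -h now
      fi
  done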

You can also consider gluing small radiators onto your cpus - the kind 
that overclockers glue to the mosfets on their motherboards. AFAIR they 
come with thermoconductive gluing tape, so no glue is needed. They should 
be cheap.

WRT wifi as an interconnect - apart from performance, I wouldn't do it 
because of security. Every neighbor, every stranger driving by, can break 
into such a cluster for fun or maybe even for profit. Yes, there are 
"steps for securing" it, but it doesn't hurt to close the doors too. With 
wifi, it's like making big holes in the walls.

> >...Sorry if after modifications the plan is not exotic enough for you.
> Seems to me, exotic is what makes you tick :-)... Tomasz Rola
> 
> Ja it is an aerospace engineer's thing.  Why make something simple and
> straightforward, when it can be made complex and wonderful?

Ja, there is an exotic hunter in there. How about a cluster of calculators?

http://www.cemetech.net/projects/item.php?id=33

Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:tomasz_rola at bigfoot.com             **


