[ExI] Weird new way to do physics

Tomasz Rola rtomek at ceti.pl
Sat Nov 5 18:09:15 UTC 2011


On Sat, 5 Nov 2011, The Avantguardian wrote:

> > ----- Original Message -----
> > From: Tomasz Rola <rtomek at ceti.pl>
> > To: The Avantguardian <avantguardian2020 at yahoo.com>; ExI chat list 
> <extropy-chat at lists.extropy.org>; Tomasz Rola <rtomek at ceti.pl>
> > Cc: 
> > Sent: Thursday, November 3, 2011 10:21 PM
> > Subject: Re: [ExI] Weird new way to do physics
> > 
> > On Thu, 3 Nov 2011, The Avantguardian wrote:

> [snip a bunch of suggested scripting languages I have heard of but never 
> used]

> How about Free BASIC Ide? In any case Python has some *impressive* 
> strengths like its ability to handle integers as large as you have 
> memory and time to handle.

Uhm, Basic is cool if one likes it, I guess. I don't like it very much, 
so I would rather stay away. BTW, if the authors don't plan to port it to 
64-bit, you may be stuck with the 32-bit version, with some performance 
penalty compared to 64-bit. The penalty is small nowadays, but the gap 
could grow over time as the 64-bit world keeps improving while the 32-bit 
version remains frozen legacy. It's somewhat like running a DOS app in 
the Windows world.

On Linux it is possible to run 32-bit code on a 64-bit kernel/OS; I am not 
sure about other OSes - so in some cases you might even be trapped in a 
32-bit OS...

Big Integers - I think most if not all languages I mentioned are on par 
with Python.

In CL there is also a rational number data type which, if I understand 
what I read, gives exact arithmetic on fractions (i.e. better than 
float/double arithmetic). Something like this:

[1]> (/ 1.0 10) ;; this gives float
0.1

[3]> (/ 1 10) ;; here's rational
1/10

One day I would like to push CL rationals to their limits - something like 
computing digits of Pi or similar. Simple arithmetic works on rationals, 
but the common math routines return floating point:

[4]> (log (/ 1 10))
-2.3025851

[5]> (+ (/ 1 10) (/ 1 3))
13/30

So working with rationals requires building the whole toolkit from the 
ground up (the "ground" is already built, fortunately) - I mean 
logarithms, trigonometric functions and the like. This is straightforward 
with Taylor series, but since they are infinite sums, one needs to pass 
the desired precision (number of terms) into the function call.
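
Roughly what I mean, sketched in Python with fractions.Fraction (the 
same approach works with CL rationals; exp_rational is just my name for 
an illustrative helper, nothing standard):

from fractions import Fraction

def exp_rational(x, terms=30):
    """Approximate e**x as an exact Fraction, summing `terms` Taylor terms."""
    x = Fraction(x)
    total = Fraction(0)
    term = Fraction(1)              # x**0 / 0! = 1
    for n in range(terms):
        total += term
        term = term * x / (n + 1)   # next term: x**(n+1) / (n+1)!
    return total

print(exp_rational(Fraction(1, 10), terms=10))    # an exact rational
print(float(exp_rational(Fraction(1, 10), 10)))   # about 1.10517, for comparison

The truncation error is then under your control (the number of terms), 
instead of being whatever the float format silently decides for you.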

But I digress.

In Python, there is Decimal:

http://docs.python.org/library/decimal.html

It is not rational arithmetic, but it is comparable in spirit, I think.
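
A small example of what Decimal buys you (the precision value below is 
an arbitrary pick of mine):

from decimal import Decimal, getcontext

getcontext().prec = 50             # 50 significant digits

print(Decimal(1) / Decimal(10))    # 0.1, exactly
print(Decimal(1) / Decimal(3))     # 0.333... to 50 digits - still rounded
print(1.0 / 10)                    # a binary float that only looks like 0.1

Unlike rationals, 1/3 still gets rounded - but at a precision you 
choose, not at whatever the hardware float format dictates.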

Why I even mention this numeric stuff - see below.

BTW, Common Lisp, Haskell and OCaml are not quite scripting languages. 
Maybe CL started out interpreted - though I believe CL actually started 
as a library that could be loaded into older, preexisting LISP 
implementations, which themselves may or may not have been interpreted.

But I think nowadays there are implementations (of all the 
above-mentioned languages) that focus on compilation (both ahead-of-time 
and JIT) and offer an interpreter as an option (to ease development, for 
example to interact with a proper editor: 
http://en.wikipedia.org/wiki/SLIME ). They let one use all the goodies a 
dynamic language can give - and then compile the code and get a speed 
boost for free. In many cases performance should be comparable to C: 

http://shootout.alioth.debian.org/u64q/which-programming-languages-are-fastest.php

Those are somewhat synthetic programmers' benchmarks. My favourites fare 
quite well :-) - look at the median column. Of course those are not 
real-life programs.

There are some consequences of this switch of focus. One is that 
performance is measured with respect to compiled-code speed; improvements 
to the interpreter become secondary.

The other is that some languages are easier to evolve. Once they evolve, 
they should let you use new idioms (be it a library or a new language 
construct) on an existing code base - like adding OO programming to your 
favourite language and compiling it all with the old compiler, which 
makes adding OO (and other ideas) so much easier. While this kind of 
thing can be done even in C (one C++ compiler either was or still is 
based on a special-purpose preprocessor that emits C), I would rather use 
CL for such experiments because IMHO it shines here. Another example is 
parallelization of your code. In Haskell this is becoming quite trivial:

http://stackoverflow.com/questions/3011668/how-difficult-is-haskell-multi-threading

http://www.haskell.org/haskellwiki/GHC/Concurrency

It seems you add a line here or there, recompile with special options and 
voila! - it runs on multiple cores. Of course some cases are trivial and 
some are not, and in real life there is no free lunch.

Overall, I think the most potential probably lies in Haskell, both as a 
language and as a compiler. I wouldn't be surprised if one day its 
compiler-generated code routinely outperformed C. And its close 
connection to the mathematical theory of computation could yield some 
unexpected bonuses.

However, learning Haskell seems to be a royal PITA. Or maybe I should 
have started 10 years ago rather than learning Python - I guess I would 
be done by now :-). OTOH, I plan to keep programming for quite some time 
from now on. Whatever I learn, no matter how much my arse suffers, will 
get used. So the effort is worth it, as long as you plan to write 
non-trivial programs and want to both maximize their execution speed and 
minimize your development time (it is a function of two arguments).

> > AFAIK Python as it is today is unfit for calculations, especially on 
> > multicore machines (because it is unicore). Unless you do it in Sage 
> > (which relies on libraries written in C and optimised a lot).

> Ahh. That explains why, no matter how high a priority I give its 
> thread, it never uses more than 50% of my processing power. I have been 
> exploiting this as a feature rather than a bug but hey, whether it's 
> dawn or dusk depends upon one's literal perspective on the world, no?

Well, if you don't mind waiting twice as long... or to be exact, about 
1.000000001-1.999 times as long, depending on what exactly you compute and 
how. Some code is simply unable to go parallel.

> In any case you are right, it is painfully slow. It is still running and 
> I have thought of half a dozen improvements to make to the code, but it 
> *is* making progress so I am afraid of terminating it to upgrade. Damn 
> my shortsightedness. I should have coded a way to stop the program and 
> save its progress to disk. Sigh. Still the progress is encouraging. It 
> is still a little over 10% through its search space but its last few 
> local minima have been the scalar in the far right column:

> [3.1666666666666576, -4.1333333333333364, -4.1666666666666696, 
> -3.3333333333333393, -4.9666666666666668] -3.33143361786e-08
> [-0.73333333333334094, -4.06666666666667, -2.5666666666666753, 
> 3.4666666666666566, -4.9666666666666668] -2.90099819722e-08
> [-4.7666666666666675, -0.90000000000000757, -3.600000000000005, 
> 4.0999999999999881, -4.9666666666666668] 2.29044871958e-08
> [1.2999999999999929, -3.7333333333333378, -4.533333333333335, 
> -0.36666666666667436, -4.9000000000000004] 2.1769437808e-08
> [-4.2000000000000028, -0.1666666666666744, 1.1333333333333258, 
> -0.66666666666667429, -4.8666666666666671] 1.18882326205e-08
> [-4.3000000000000025, 0.23333333333332554, -0.76666666666667427, 
> 0.099999999999992234, -4.4666666666666686] -7.44793737795e-09
> [-2.1333333333333435, -2.5333333333333421, -0.10000000000000775, 
> 2.4333333333333269, -4.4000000000000021] -4.2784336074e-09

> If the output in the far right column ever reaches zero from either the 
> positive or negative side, then that will indicate the existence of a 
> natural law so unintuitive that it's bound to be novel.

Aha! Do you use floats up there, in those number lists?

Oops. Floats are tricky because they cannot be implemented exactly - I 
mean, they could be, if a computer could have infinite memory. AFAIK the 
current standard allows exact representation of 0.5 (or any other 
negative power of 2) but not of 0.1. There are infinitely many reals that 
cannot be represented _exactly_ by a computer float. I really mean it 
when I say "infinite"; this is not a metaphor.

This means your computation may come out as zero simply because there is 
no better float to represent the result, and not necessarily because it 
really is zero.
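
You can check this from inside Python itself (assuming Python 2.7 or 
newer, which accepts floats in the Decimal and Fraction constructors):

from decimal import Decimal
from fractions import Fraction

print(0.1 + 0.2 == 0.3)   # False - every operand is a rounded binary float
print(Decimal(0.1))       # what the literal 0.1 is really stored as
print(Fraction(0.5))      # 1/2 - a negative power of 2, so it is exact
print(Fraction(0.1))      # a huge numerator/denominator, not 1/10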

This is why I mentioned rationals and Decimal.

If you use floats in your code, you can turn it off now. Sorry. Or you 
can leave it running, but even if you get a zero, you will still need to 
tell whether that zero is real or just an artifact of rounding.

Floats are good for engineering. For results in maths or physics they are 
sometimes good - it depends on what is being computed.

In other words, you may consider rethinking your computation so it can 
check for zero with more precision than float/double can give.
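
A rough sketch of what I mean - target_function() below is made up by 
me, a stand-in for whatever your program really computes:

from fractions import Fraction

def target_function(params):
    # hypothetical stand-in for the quantity being minimised
    a, b, c = params
    return a * b - c * c

candidate = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 2)]
result = target_function(candidate)

print(result)         # an exact rational
print(result == 0)    # True only if it really is zero - no rounding involved

If the parameters and every operation along the way stay rational, 
"== 0" is an exact statement rather than "the nearest double happens to 
be 0.0". The price is speed, of course.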

Science cannot depend on whether you compute results with 20 digits or 
200.

Just my holy opinion(s).

At the same time, I admit that I sucked pathetically (yea, try to imagine 
this) at numerical methods.

> First natural law 
> discovered by a computer. Heh. Maybe I'll get some credit too. ;-)

Man, you are racing against the robots:

http://en.wikipedia.org/wiki/Adam_(robot)

What is worse, on this list there are folks who will root for them rather 
than for you.

>> I will let you all know what my algorithm comes up with. 
> 
> Sure, please do.

> Not like I have the cred these days to publish anywhere else. At least 
> it will have time and date stamp and tons of critical review. ;-)

Creds are fine. But... sometimes one reads about articles suppressed 
because editors/reviewers thought otherwise. Or about industry literally 
funding a whole branch of research that presents some aspect of that 
industry in a very good light.

So, as long as you can freely publish in alternative places, don't worry 
about creds.

Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                 **
** Tomasz Rola          mailto:tomasz_rola at bigfoot.com             **

