<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 2016-09-05 00:06, William Flynn Wallace wrote:<br>
<blockquote
cite="mid:CAO+xQEYE2822v70RZ2tiSKYjAxJ4Kk5aOaJS+kYkxGczrKyXeQ@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div class="gmail_default"><span style="font-size:19.2px">This
may be too complicated to answer: what, if any, ways are
there to validate simulations? Well, let the world go by
and see what really happens, I suppose. What else? Do real
world experiments? In short, why trust simulations? We
should prima facie distrust them (like the null
hypothesis). At least two problems arise: GIGO for one.
Not putting in crucial variables (because you don't know
that they are crucial) is another.</span></div>
</div>
</blockquote>
<br>
This is a big thing in the world of simulating stuff. It depends on
what kind of system you are modelling.<br>
<br>
Many physics problems have conserved quantities like energy, so you
can validate your code by checking that the energy stays constant. Or you
run it on systems with known behavior and compare. But often the
right approach is to first write down the maths and work out an
approximation scheme that has guaranteed error bounds: a lot of
awesome numerical analysis exists, including stuff like symplectic
methods that allow virtually error-free prediction of planetary
orbits over very long timespans. <br>
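<br>
For a concrete toy illustration in Python (entirely made up, not from any
real code base): an explicit Euler integrator steadily pumps energy into a
harmonic oscillator, while a symplectic leapfrog step keeps the energy
error small and bounded - exactly the kind of check you can automate:<br>
<pre>
# Toy sketch: compare energy conservation of explicit Euler and
# symplectic leapfrog on a unit-mass, unit-spring harmonic oscillator.

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x       # kinetic + potential energy

def euler_step(x, v, dt):                   # explicit Euler: gains energy each step
    return x + dt * v, v - dt * x

def leapfrog_step(x, v, dt):                # symplectic "kick-drift-kick" leapfrog
    v_half = v - 0.5 * dt * x
    x_new = x + dt * v_half
    return x_new, v_half - 0.5 * dt * x_new

def relative_energy_drift(step, steps=10000, dt=0.01):
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in range(steps):
        x, v = step(x, v, dt)
    return abs(energy(x, v) - e0) / e0

print("Euler drift:   ", relative_energy_drift(euler_step))     # grows steadily
print("leapfrog drift:", relative_energy_drift(leapfrog_step))  # stays tiny
</pre>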
<br>
You can also empirically change the resolution or stepsize of the
simulation to see how it responds: if there is a noticeable change
in the output, you better increase the resolution or do things another
way. The same goes for parameters: if everything changes when you twiddle
the knobs of the model slightly, you should be skeptical. <br>
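<br>
In code this is just "run it again with the knobs nudged and compare" - a
minimal sketch, where the model below is a made-up stand-in for whatever
you are actually simulating:<br>
<pre>
# Toy sketch: rerun a simulation with a halved step size and a slightly
# perturbed parameter, and see how much the answer moves.

def simulate(dt, k):
    # stand-in for a real model: a lightly damped oscillator, followed
    # for 10 time units with a crude explicit Euler scheme
    x, v = 1.0, 0.0
    for _ in range(int(round(10.0 / dt))):
        x, v = x + dt * v, v - dt * (k * x + 0.1 * v)
    return x

base = simulate(0.01, 1.0)
print("halving the step size shifts the answer by:", abs(simulate(0.005, 1.0) - base))
print("a 1% tweak of k shifts the answer by:      ", abs(simulate(0.01, 1.01) - base))
</pre>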
<br>
In neuroscience things are harder, since the systems are more
complex and not everything is known. You can still build a
simulation and compare it to reality, though: if it doesn't fit, your
model (either the theory or the code) is not right. You can also
validate things by running virtual experiments to get predictions and
then checking them in the lab - this has produced some very solid
results. <br>
<br>
But many neuroscience models do not aim at a perfect fit, but rather
at seeing whether our theory produces the right behavior. That can sometimes
involve making very simple models rather than complex ones: my rule
of thumb is that you better get more results out of the model than
you have free parameters, otherwise it is suspect. This is why many
computational neuroscientists are somewhat skeptical of large scale
computer simulations: we might not learn much from them.<br>
<br>
Now, in systems like climate you have a bit of the physics side - we
know pretty well how air, heat and water move - but also a bit of
the neuroscience mess - clouds are hard to model, vegetation changes
in complex ways. So there is a fair bit of uncertainty (many climate
modellers are *really* good at statistics and the theory of
uncertainty). That is not a major problem; one can handle it. The
thing that worries me most is that many of the scientific codes are
vast, messy systems with subroutines written by a postdoc who left
years ago - I think many parts of science ought to have a code
review, but nobody will like the answers. This is likely true for
the simulations underlying much of our economy too: I know enough
about how insurance risk models work to not want to look too much
under the hood. Often the bugs and errors get averaged out by the
complex dynamics so that they do not matter as much as they would in
simple models, which I guess is a kind of relief. <br>
<br>
Many models are trusted just because they fit what people believe,
which is often based on running models. In science people actually
do perform comparisons with data, experiments or even mathematical
analysis to keep the models in the vicinity of reality. <br>
<br>
The key thing to remember is that "all models are wrong, but some
are useful". You should not select a model because it promises
perfect answers (that is frequently dangerous) but rather because it
gives you the information that matters with high probability. <br>
<br>
<pre class="moz-signature" cols="72">--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University</pre>
</body>
</html>