<div dir="ltr"><div class="gmail_default" style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px">This is a big thing in the world of simulating stuff. It depends on what kind of system you are modelling. anders</span><br style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px"></div><div class="gmail_default" style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px"><br></span></div><div class="gmail_default" style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px">Ah yes, landed the big fish. Thought I could lure Anders into this, and I actually understand most of it at my low level. Thanks!</span></div><div class="gmail_default" style="font-family:"comic sans ms",sans-serif;font-size:small;color:rgb(0,0,0)"><span style="color:rgb(34,34,34);font-family:arial,sans-serif;font-size:19.2px"><br></span></div><div class="gmail_default"><span style="font-size:19.2px">So I won't worry - the modelers are on the job. Biggest worry - economic models based on faulty theories of human behavior.</span></div><div class="gmail_default"><span style="font-size:19.2px">Now the question is: is there any other kind?</span></div><div class="gmail_default"><span style="font-size:19.2px"><br></span></div><div class="gmail_default"><span style="font-size:19.2px">My answer is no. Psych gave up on big models long ago and is content to try to understand small niches of behavior, while often tossing away what was learned from the big models. We're young, and sometimes too stupid and vain to let our forefathers teach us something about people. We are a long way from synthesis. 
Long.</span></div><div class="gmail_default"><span style="font-size:19.2px"><br></span></div><div class="gmail_default"><span style="font-size:19.2px">bill w</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 5, 2016 at 4:35 AM, Anders <span dir="ltr"><<a href="mailto:anders@aleph.se" target="_blank">anders@aleph.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
On 2016-09-05 00:06, William Flynn Wallace wrote:<br>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_default"><span style="font-size:19.2px">This
may be too complicated to answer: what, if any, ways are
there to validate simulations? Well, let the world go by
and see what really happens, I suppose. What else? Do real
world experiments? In short, why trust simulations? We
should prima facie distrust them (like the null
hypothesis). At least two problems arise: GIGO for one.
Not putting in crucial variables (because you don't know
that they are crucial) is another.</span></div>
</div>
</blockquote>
<br></span>
This is a big thing in the world of simulating stuff. It depends on
what kind of system you are modelling.<br>
<br>
Many physics problems have conserved quantities like energy, so you
can validate your code by checking that the energy stays constant. Or
you run it on systems with known behavior and compare. But often the
right approach is to first write down the maths and figure out an
approximation scheme that has guaranteed error bounds: a lot of
awesome numerical analysis exists, including stuff like symplectic
methods that allow virtually error-free prediction of planetary
orbits over very long timespans. <br>
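As a toy illustration of those two checks, here is a sketch (hypothetical example code, not from any real simulation package): a 1D harmonic oscillator integrated with plain explicit Euler versus semi-implicit (symplectic) Euler, using drift in the conserved energy as the validation signal.

```python
# Toy validation via a conserved quantity: 1D harmonic oscillator,
# H = (p^2 + q^2) / 2. Explicit Euler pumps energy into the system,
# while symplectic (semi-implicit) Euler keeps it bounded.

def energy(q, p):
    return 0.5 * (p * p + q * q)

def explicit_euler(q, p, dt):
    # Both updates use the old state: energy grows a little every step.
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    # Kick the momentum first, then drift with the *updated* momentum.
    p = p - dt * q
    return q + dt * p, p

def max_energy_drift(step, dt=0.01, n=100_000):
    q, p = 1.0, 0.0
    e0 = energy(q, p)
    worst = 0.0
    for _ in range(n):
        q, p = step(q, p, dt)
        worst = max(worst, abs(energy(q, p) - e0))
    return worst

print(max_energy_drift(explicit_euler))    # huge: this integrator fails the check
print(max_energy_drift(symplectic_euler))  # tiny: energy stays near 0.5
```

The same idea scales up: any conserved quantity of the real system (energy, momentum, mass) gives you a free regression test on the code.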
<br>
You can also empirically change the resolution or stepsize in the
simulation to see how it responds: if there is a noticeable change
in output, you better increase the resolution or do things another
way. Same thing for parameters: if everything changes when you twiddle
the knobs of the model slightly, you should be skeptical. <br>
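That stepsize check can be sketched the same way (again a hypothetical toy, not any particular code): run the simulation at stepsize dt and at dt/2 and compare; if the outputs differ noticeably, the resolution is too coarse to trust.

```python
# Resolution check for a toy simulation of dq/dt = -q (exact answer: e^-t).
# If halving the stepsize changes the result noticeably, refine or rethink.

def simulate(dt, t_end=5.0):
    q = 1.0
    for _ in range(round(t_end / dt)):
        q += dt * (-q)  # explicit Euler step
    return q

def resolution_check(dt, tol=1e-3):
    coarse, fine = simulate(dt), simulate(dt / 2)
    diff = abs(coarse - fine)
    return diff, diff < tol

print(resolution_check(0.5))    # big difference: dt = 0.5 is not trustworthy
print(resolution_check(0.001))  # tiny difference: resolution looks adequate
```

In a real code the "output" would be whatever summary statistic you care about, and the same twiddle-and-compare loop works for model parameters as well as for the stepsize.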
<br>
In neuroscience things are harder, since the systems are more
complex and not everything is known. You can still build a
simulation and compare to reality though: if it doesn't fit, your
model (either the theory or the code) is not right. You can also
validate things by making virtual experiments to get predictions and
then check them in the lab - this has produced some very solid
results. <br>
<br>
But many neuroscience models do not aim at a perfect fit, but rather
at seeing whether our theory produces the right behavior. That can sometimes
involve making very simple models rather than complex ones: my rule
of thumb is that you better get more results out of the model than
you have free parameters, otherwise it is suspect. This is why many
computational neuroscientists are somewhat skeptical of large scale
computer simulations: we might not learn much from them.<br>
<br>
Now, in systems like climate you have a bit of the physics side - we
know pretty well how air, heat and water move - but also a bit of
the neuroscience mess - clouds are hard to model, vegetation changes
in complex ways. So there is a fair bit of uncertainty (many climate
modellers are *really* good at statistics and the theory of
uncertainty). That is not a major problem; one can handle it. The
thing that worries me most is that many of the scientific codes are
vast, messy systems with subroutines written by a postdoc who left
years ago - I think many parts of science ought to have a code
review, but nobody will like the answers. This is likely true for
the simulations underlying much of our economy too: I know enough
about how insurance risk models work to not want to look too much
under the hood. Often the bugs and errors get averaged out by the
complex dynamics so that they do not matter as much as they would in
simple models, which I guess is a kind of relief. <br>
<br>
Many models are trusted just because they fit what people believe,
which is often based on running models. In science people actually
do perform comparisons with data, experiments or even mathematical
analysis to keep the models in the vicinity of reality. <br>
<br>
The key thing to remember is that "all models are wrong, but some
are useful". You should not select a model because it promises
perfect answers (that is frequently dangerous), but rather because it
gives you the information that matters with high probability. <br><span class="">
<br>
<pre cols="72">--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University</pre>
</span></div>
<br>______________________________<wbr>_________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/<wbr>mailman/listinfo.cgi/extropy-<wbr>chat</a><br>
<br></blockquote></div><br></div>