[ExI] A Working Quantum Computer by 2017?

William Flynn Wallace foozler83 at gmail.com
Tue Sep 6 22:18:13 UTC 2016


This is a big thing in the world of simulating stuff. It depends on what
kind of system you are modelling.  Anders

Ah yes, landed the big fish.  Thought I could lure Anders into this, and I
actually understand most of it at my low level.  Thanks!

So I won't worry - the modelers are on the job.  Biggest worry - economic
models based on faulty theories of human behavior.
Now the question is:  is there any other kind?

My answer is no.  Psych gave up on big models long ago and is content to
try to understand small niches of behavior, while often tossing away what
was learned from the big models. We're young, and sometimes too stupid and
vain to let our forefathers teach us something about people.  We are a long
way from synthesis.  Long.

bill w

On Mon, Sep 5, 2016 at 4:35 AM, Anders <anders at aleph.se> wrote:

> On 2016-09-05 00:06, William Flynn Wallace wrote:
>
> This may be too complicated to answer:  what ways, if any, are there to
> validate simulations?  Well, let the world go by and see what really
> happens, I suppose.  What else?  Do real-world experiments?  In short, why
> trust simulations?  We should prima facie distrust them (like the null
> hypothesis).  At least two problems arise:  GIGO, for one.  Not putting in
> crucial variables (because you don't know that they are crucial) is another.
>
>
> This is a big thing in the world of simulating stuff. It depends on what
> kind of system you are modelling.
>
> Many physics problems have conserved quantities like energy, so you can
> validate your code by checking that the energy stays constant. Or you run it
> for systems with known behavior and compare. But often the right approach is
> to first write down the maths and figure out an approximation scheme that has
> guaranteed error bounds: a lot of awesome numerical analysis exists,
> including stuff like symplectic methods that allow virtually error-free
> prediction of planetary orbits over very long timespans.
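
[A minimal sketch of the conserved-quantity check, not from the original post:
a unit harmonic oscillator stepped with a plain Euler method and with a
symplectic semi-implicit Euler method, comparing how far each lets the energy
drift. The oscillator, step size and step count are illustrative assumptions.]

    def explicit_euler(x, v, dt):
        # Non-symplectic step for x'' = -x: the energy drifts steadily.
        return x + dt * v, v - dt * x

    def semi_implicit_euler(x, v, dt):
        # Symplectic step: the energy error stays bounded over long runs.
        v_new = v - dt * x
        return x + dt * v_new, v_new

    def energy(x, v):
        # Conserved quantity of the unit-mass, unit-frequency oscillator.
        return 0.5 * (x * x + v * v)

    dt, steps = 0.01, 100_000
    for step in (explicit_euler, semi_implicit_euler):
        x, v = 1.0, 0.0
        e0 = energy(x, v)
        for _ in range(steps):
            x, v = step(x, v, dt)
        print(step.__name__, "relative energy drift:",
              abs(energy(x, v) - e0) / e0)
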
>
> You can also empirically change the resolution or stepsize in the
> simulation to see how it responds: if there is a noticeable change in
> output, you had better increase the resolution or do things another way.
> Same thing for parameters: if everything changes when you twiddle the knobs
> of the model slightly, you should be skeptical.
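
[A minimal sketch of those two checks, again not from the original post:
rerun a toy decay model with a halved step size and with a slightly perturbed
parameter, and see how much the output moves. The decay model, step sizes and
1% perturbation are illustrative assumptions.]

    def simulate(decay_rate, dt, t_end=10.0):
        # Forward-Euler integration of dx/dt = -decay_rate * x, with x(0) = 1.
        x = 1.0
        for _ in range(int(t_end / dt)):
            x += dt * (-decay_rate * x)
        return x

    baseline = simulate(decay_rate=0.5, dt=0.01)

    # Resolution check: halve the step size and see whether the answer moves.
    refined = simulate(decay_rate=0.5, dt=0.005)
    print("step-size sensitivity:", abs(refined - baseline))

    # Parameter check: nudge the knob by 1% and see how much the output shifts.
    perturbed = simulate(decay_rate=0.505, dt=0.01)
    print("parameter sensitivity:", abs(perturbed - baseline))
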
>
> In neuroscience things are harder, since the systems are more complex and
> not everything is known. You can still build a simulation and compare it to
> reality, though: if it doesn't fit, your model (either the theory or the
> code) is not right. You can also validate things by making virtual
> experiments to get predictions and then checking them in the lab - this has
> produced some very solid results.
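
[Again a sketch rather than anything from the post: compare a model's
predicted firing rates against lab measurements and flag the model as suspect
if the discrepancy clearly exceeds the measurement noise. The numbers and the
tolerance are purely illustrative assumptions.]

    # Hypothetical model predictions and lab measurements for the same
    # stimuli (illustrative numbers only, in spikes/s).
    predicted = [12.0, 30.5, 48.0, 61.0]
    measured  = [11.2, 33.1, 45.9, 70.4]
    noise_sd  = 3.0   # assumed measurement noise

    # Root-mean-square discrepancy between model and experiment.
    rmse = (sum((p - m) ** 2 for p, m in zip(predicted, measured))
            / len(measured)) ** 0.5

    # If the model misses by much more than the noise, the theory or the
    # code is off.
    verdict = "suspect" if rmse > 2 * noise_sd else "consistent so far"
    print("RMSE:", round(rmse, 2), "-", verdict)
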
>
> But many neuroscience models do not aim at perfect fits, but rather at
> seeing whether our theory produces the right behavior. That can sometimes
> mean making very simple models rather than complex ones: my rule of thumb is
> that you had better get more results out of the model than you have free
> parameters, otherwise it is suspect. This is why many computational
> neuroscientists are somewhat skeptical of large-scale computer simulations:
> we might not learn much from them.
>
> Now, in systems like climate you have a bit of the physics side - we know
> pretty well how air, heat and water move - but also a bit of the
> neuroscience mess - clouds are hard to model, and vegetation changes in
> complex ways. So there is a fair bit of uncertainty (many climate modellers
> are *really* good at statistics and the theory of uncertainty). That is not
> a major problem; one can handle it. The thing that worries me most is that
> many of the scientific codes are vast, messy systems with subroutines
> written by a postdoc who left years ago - I think many parts of science
> ought to have a code review, but nobody will like the answers. This is
> likely true for the simulations underlying much of our economy too: I know
> enough about how insurance risk models work to not want to look too closely
> under the hood. Often the bugs and errors get averaged out by the complex
> dynamics, so they do not matter as much as they would in simple models,
> which I guess is a kind of relief.
>
> Many models are trusted just because they fit what people believe, which
> is itself often based on running models. In science people actually do
> perform comparisons with data, experiments or even mathematical analysis to
> keep the models in the vicinity of reality.
>
> The key thing to remember is that "all models are wrong, but some are
> useful". You should not select a model because it promises perfect answers
> (that is frequently dangerous) but rather because it gives you the
> information that matters with high probability.
>
> --
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University
>
>