[ExI] Model uncertainty (Was: LIGO)
hkeithhenson at gmail.com
Thu Nov 10 03:35:40 UTC 2016
As I have explained a number of times in the past on this list, people
who have a bleak view of the future will go for an irrational leader,
one who will lead them into war. From the genes' long experience in
the Stone Age, this always worked to correct the problem, which was
too many people for the environment. Wars come in various flavors,
internal and external to the group, and sometimes both. You have to
wonder which group will be exterminated. It was the Jews in Germany,
but don't forget that under Pol Pot the Cambodians exterminated the
intellectuals: anyone with a degree, or anyone who wore glasses.
If Trump really wants the Mexicans to build a wall, it will probably
be to keep the miserable poor gringos out of Mexico.
On Wed, Nov 9, 2016 at 4:38 PM, Anders <anders at aleph.se> wrote:
> On 2016-11-09 19:51, William Flynn Wallace wrote:
> I am still trying to get my head around that: how could we have seen two
> crazy-unlikely events in just a few weeks? My view of the cosmos must be
> seriously flawed. COOL!
> This expresses my feeling exactly. I think somewhere along the line I lost
> contact with the human race and am seriously out of contact with reality. I
> am still stunned.
> In fact, this is an interesting development. Pollsters and information
> markets missed the UK election, Brexit and the US election. The models are
> clearly wrong. Even if one accepts that the 15% chance of a Trump win on
> Monday evening was accurate and we simply saw a 15% probability event, the
> swathe of other recent polling failures demonstrates that something
> important has changed.
> Generally, I think there is both an epistemic uncertainty about how to poll
> current people properly, and a meatier uncertainty about what is going on
> politically. I have recently shifted away from my previous model that people
> had a broken epistemology because of networked media to a model that what we
> are seeing is more a tribal defense of core values (Haidt's explanation).
> Now, realizing that one's model is not correct and trying to fix it is an
> uneasy but exciting place. Especially since it might mean one should change
> strategy about a lot of things.
> Dr Anders Sandberg
> Future of Humanity Institute
> Oxford Martin School
> Oxford University