[ExI] evolution and crazy thinking

Colin Hales col.hales at gmail.com
Thu Jul 19 02:44:02 UTC 2018


On Thu., 19 Jul. 2018, 1:01 am William Flynn Wallace, <foozler83 at gmail.com>
wrote:

> Job done (I am right) :-) ..... or am I? So far the evidence is consistent
> with my hypothesis. It predicts exactly what your post is about: it
> predicts humans screwing up badly as a primary cognitive necessity to deal
> with the unknown.
>
> cheers
> colin
> I can't see some of these biases as leading to anything but further bias,
> and maybe worse at that.  Use the self-serving bias along with the
> fundamental attribution error too much and you never understand the other
> people in the world.
>
> And it is unclear to me just how one gets out of the original biased
> decision/action.  Some will stop at the first approximation - I call this
> 'good enough for who it's for' - and will be consistently wrong.  Add the
> self-serving bias to this and you get a person who cannot admit he is wrong
> - who maybe knows it but has no way to get his ox out of his ditch.
>
> If one uses unbiased methods one can be wrong many times before one is
> right, but, like the scientists, he doggedly keeps with his methods and
> finally gets to the truth of things.
>
> In short, I don't see making biased decisions as a necessary first step.
>
> bill w
>

Ok. Think of it as a general requirement for cognitive agents where
individual- or species-fatal unknowns can happen. If the agent is rigidly
defined, its choices are hard-coded and never change. The behaviours may be
complex, yet there can be no bias (in the sense meant here), except that the
agent may respond erroneously to deep novelty.

Now imagine an agent that can adapt to novelty. Initially it is 'wrong' in
some sense. Later it becomes 'right' in the sense of predictive.

I am not claiming being wrong as a necessary first step. I am claiming that
being wrong is a natural result of an encounter with the unknown. By
definition.

The problem happens when the agent fails to adapt to, or even acknowledge,
evidence shedding light on the rectitude of the choices it makes.

If an agent is equipped to encounter and prevail over arbitrarily deep
novelty, then it automatically inherits a potential for state-trajectories
into the BS-weeds and, in the absence of contrary evidence, may never
escape. For example, being caught in a Facebook bubble, a religious sect, a
conspiracy theory, and so on.
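A toy sketch of that distinction (entirely illustrative; the agents, the
threshold, and the numbers are my own invention, not anything formal from
the thread): one estimator updates on every observation and converges on
the unknown, while the other discards any observation that clashes too
strongly with its current belief, and so never escapes its starting state.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0  # the 'unknown' both agents must come to grips with


def observe():
    """Noisy evidence about the unknown."""
    return TRUE_VALUE + random.gauss(0, 1)


def run_agent(accepts_contrary_evidence, update_rate=0.1, steps=500):
    estimate = 0.0  # initial hypothesis: 'wrong' by construction
    for _ in range(steps):
        error = observe() - estimate
        # The stuck agent throws away evidence that contradicts its belief.
        if not accepts_contrary_evidence and abs(error) > 1.0:
            continue
        estimate += update_rate * error  # incremental correction
    return estimate


adaptive = run_agent(accepts_contrary_evidence=True)   # converges near 10
stuck = run_agent(accepts_contrary_evidence=False)     # never leaves 0
```

The point of the sketch is only that the same update rule that lets the
first agent be wrong and then correct is disabled in the second by its
refusal to acknowledge contrary evidence.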

This predisposition for being wrong first, and then correcting, makes
novelty less of a threat to survival. A happy side effect is an ability to
do science (where novelty is the food) and an ability to problem-solve,
because 'problem' is another word for a kind of novelty.

That's the sense I mean.

If you 'gift' an agent absolute truth and all possible truths, then
somewhere at the novelty boundary, your gift ends and your agent is
clueless and powerless.

Cheers

Colin

> On Tue, Jul 17, 2018 at 7:18 PM, Colin Hales <col.hales at gmail.com> wrote:
>
>> I wrote about this in my book.
>>
>> When you don't know something then to understand it you have to make an
>> explanatory hypothesis which, at the moment of its creation, is formally
>> 'wrong' in the sense that until evidence confirms it, it's not predictive
>> yet. Over time your hypothesis acquires a body of evidence and you get to
>> be 'right' in the sense of 'predictive'. That is, in order for a human to
>> make sense of the world, you have to be able to be 'wrong'. Making wrong
>> hypotheses is a double-edged sword.
>>
>> 1) You get to be right post-hoc.
>> but
>> 2) On the down side, if you're an idiot who has a broken sense of what
>> evidence is (...in the 1000 cognitive biases in Wikipedia and in the
>> attached 'codex') then you get stuck with your own, pardon me, bullshit.
>> Like religion, for example.
>>
>> In the brutal evolutionary 'get it right or die' process, evolution has
>> favoured a creature like us that can be very wrong and use that exact
>> ability to then get at the true nature of things. Later you become 'right'.
>> I used this to great effect in a formal scientific account of scientific
>> behaviour.
>>
>> Job done (I am right) :-) ..... or am I? So far the evidence is
>> consistent with my hypothesis. It predicts exactly what your post is about:
>> it predicts humans screwing up badly as a primary cognitive necessity to
>> deal with the unknown.
>>
>> cheers
>> colin
>>
>>
>>
>>
>>
>> On Wed, Jul 18, 2018 at 5:51 AM, Spike Jones <spike at rainier66.com> wrote:
>>
>>>
>>>
>>>
>>>
>>> *From:* extropy-chat <extropy-chat-bounces at lists.extropy.org> *On
>>> Behalf Of *William Flynn Wallace
>>>
>>>
>>>
>>> >…Evolution did a great job but it has a long way to go.  I hope it
>>> gets the chance.  'Survival of the fittest' does not seem to describe the
>>> current state of world affairs in the evolutionary sense.
>>>
>>> ​Are we, in fact, not losing the unfit?  bill w​
>>>
>>>
>>>
>>>
>>>
>>> This observation about survival of the fittest should have been stated
>>> survival of the best adapted.
>>>
>>>
>>>
>>> If we are discussing humans, fitness in the traditional sense is nearly
>>> irrelevant.  The unfit prosper in the right environment, such as our
>>> technically advanced world.  People who are dependent on modern medical
>>> technology for instance are likely to reside near a hospital, which implies
>>> a big city, where reproductive opportunities are relatively plentiful.  In
>>> that sense, the unfit are better adapted to our world than the fittest.
>>>
>>>
>>>
>>> spike
>>>
>>>
>>>
>>> _______________________________________________
>>> extropy-chat mailing list
>>> extropy-chat at lists.extropy.org
>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>
>>>
>>
>