[ExI] cog bias again
Ben Zaiboc
ben at zaiboc.net
Fri Sep 25 08:00:03 UTC 2020
On 25/09/2020 02:03, Spike wrote:
> Oops. I am certain I posted the incorrect link. Here's the right one, thanks Dan:
>
> https://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds
No surprises there.
It also fits in with a book I'm currently reading (quite possibly on a
recommendation from this list; I don't remember): "Why Everyone (Else)
Is a Hypocrite" by Robert Kurzban. If someone here did recommend it, I'd
like to thank them, and re-recommend it to everyone.
It's about how our minds are made up of thousands of functional modules,
specialised for all kinds of different things, and why they don't always
agree. That disagreement leads to the kind of results discussed in Spike's
article, among other things: scientists being religious, hypocrisy in
general, and people putting locks on their refrigerator doors (does that
/really/ happen? I don't know whether to believe it or not, but it
illustrates the point well).
I was thinking that this view of the human mind could well be useful to
AI researchers, and that not taking it into account could even partly
explain our dismal progress to date at creating AGI. I remember someone
years and years ago talking about a mind being a 'loosely-bound bundle
of parallel processes', which is not too far from this idea of modules
doing their own thing, and not necessarily co-operating perfectly with
each other (by design, or rather, evolution).
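To make that concrete, here's a toy sketch of the 'loosely-bound bundle'
idea. The module names and the crude pick-the-most-confident arbitration
rule are my own inventions for illustration, not anything from Kurzban's
book or any real AGI architecture:

# Toy sketch of a mind as a loosely-bound bundle of modules.
# Each "module" independently offers an opinion on a question; the
# "press secretary" just reports whichever module is most confident in
# the current context, so the system as a whole can happily hold
# contradictory positions in different contexts.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    name: str
    belief: Callable[[str], tuple[str, float]]  # context -> (answer, confidence)

def press_secretary(modules: list[Module], context: str) -> str:
    """Report the answer of the most confident module for this context."""
    answers = [(m.name, *m.belief(context)) for m in modules]
    name, answer, conf = max(answers, key=lambda a: a[2])
    return f"{answer}  (from the '{name}' module, confidence {conf:.1f})"

# Two modules that disagree about the same question in different contexts.
diet_planner = Module("long-term planner",
                      lambda ctx: ("skip the cake", 0.9 if ctx == "morning" else 0.3))
reward_seeker = Module("reward seeker",
                       lambda ctx: ("eat the cake", 0.2 if ctx == "morning" else 0.8))

mind = [diet_planner, reward_seeker]
print(press_secretary(mind, "morning"))     # skip the cake ...
print(press_secretary(mind, "late night"))  # eat the cake ...

The point of the sketch is only that nothing forces the modules to be
globally consistent: each answers from its own narrow perspective, and
whichever one wins in a given context ends up speaking for 'the whole
person'.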
--
Ben Zaiboc