[ExI] Searle's review of Bostrom's book
pharos at gmail.com
Sun Sep 21 11:52:20 UTC 2014
On Sun, Sep 21, 2014 at 10:26 AM, Anders Sandberg wrote:
> I suspect his mistake is that he thinks consciousness is essential for
> intelligence, and hence, given his philosophical commitments, there is no AI
> problem. But this is a risky strategy for arguing a risk is zero, since even
> if one has good reasons to think one is right one can still be wrong.
> Especially about philosophy of mind and future technology.
Strange, when we now know that humans operate most of the time without
conscious thought. Conscious thinking is really hard work.
Kahneman introduces two mental systems, one that is fast and the other
slow. Together they shape our impressions of the world around us and
help us make choices. System 1 is largely unconscious and it makes
snap judgements based upon our memory of similar events and our
emotions. System 2 is painfully slow, and is the process by which we
consciously check the facts and think carefully and rationally.
The problem is that System 2 is easily distracted and hard to engage,
while System 1, easily swayed by our emotions, is wrong as often as it
is right.