[ExI] unconscious

Anders Sandberg anders at aleph.se
Wed Jul 23 10:25:53 UTC 2014

William Flynn Wallace <foozler83 at gmail.com>, 22/7/2014 8:56 PM:
I propose that what we could work on is getting deeper and deeper into our unconscious by making more and more of it conscious. 
No. Your computer *would* be more powerful if you had a display of all the contents of its memory and could change any of them - except that the sheer complexity and volume of trivial low-level operations is overwhelming, and of course much of it moves too fast to be seen without slowing down the whole thing. The same is true for the unconscious: having access to the state of your brainstem or early visual system is not particularly useful for thinking. And your mental representations, spread out across your cortex, even if they could be visualized (on a screen or in your mind's eye), form a huge, messy network that keeps changing while you are trying to understand some relevant detail of it. 
Being conscious of something is like sending a memo to the CEO. Would a company do better if the CEO was told *everything* going on, from the orders at the production line to the gossip in the cafeteria to the interminable legal meeting in room 1203? And would the CEO intervening in the 1203 meeting actually help? 
In computer science we drum into students that levels of abstraction are very, very useful: your software (the conscious) should not concern itself with what kind of hardware or operating system it is running on (the unconscious). The reason is practical, and has been demonstrated - often painfully - far too many times to count: when you do not have a clean separation of levels, software becomes buggy, programmers are tempted to use low-level functions that *will* cause trouble, security and portability go out the window, and the whole thing becomes very hard to maintain and understand. Brains were not designed, so the level separation is not as neat (and yes, this leads to a lot of the above problems). But it makes sense to separate low-level, broadly parallel processing from high-level serial processing. 
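The clean separation described above can be sketched in a few lines of Python. This is my own minimal illustration, not anything from the post; the `Storage` interface and class names are hypothetical. The high-level function only sees the abstract interface, never the low-level representation:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Low-level layer: how the data is actually kept is hidden behind this interface."""
    @abstractmethod
    def read(self, key: str) -> str: ...
    @abstractmethod
    def write(self, key: str, value: str) -> None: ...

class InMemoryStorage(Storage):
    """One concrete low-level implementation; could be swapped for disk, network, etc."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def read(self, key: str) -> str:
        return self._data[key]
    def write(self, key: str, value: str) -> None:
        self._data[key] = value

def remember(store: Storage, fact: str) -> None:
    # High-level layer: works with *any* Storage and never touches its internals.
    store.write("note", fact)

store = InMemoryStorage()
remember(store, "levels of abstraction are useful")
print(store.read("note"))
```

If `remember` were allowed to poke directly at `_data`, every change to the low level would risk breaking the high level - which is exactly the maintenance trouble the paragraph above warns about.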

We know that the conscious, the rider, often jumps to conclusions, stereotypes and more, all to get as quick an answer as possible, and thus makes a lot of simple mistakes that presumably we would not make if we tapped into the unconscious, such as judging people by a first and short impression.
Unfortunately these intuitive jumps only work in some domains. Leon Kass (of the President's Council on Bioethics) famously argued for the "wisdom of repugnance": that intuitive moral disgust was a useful tool for figuring out that cloning, genetic engineering, human enhancement and eating ice-cream in public (sic!) were likely immoral. His mistake was to assume that his intuitions were universal and reliable outside his own social domain; critics quickly pointed out that racists, homophobes and opponents of mixed-race couples have similar intuitions, yet few or no ethicists think their reasoning or views are correct. At best, intuitions tell us what we and the people around us feel, but not why, or how reliable that feeling is. 
(Being able to trace intuitions would also be impractical: my views of ice-cream or sex have been shaped by millions of experiences that subtly changed the weights of my neural networks. Even if I could view those source experiences rather than just the resulting network, it would be impossible to draw good conscious conclusions from them - the network is, in a sense, the conclusion, reached unconsciously.)
We have decent intuitions for people - but these intuitions are often biased by unconscious racism too. We are strongly overconfident in the reliability of our lie-detection intuitions, but experiments show that they are not much better than chance. Our intuitions for physics are based on everyday life; at least when doing real engineering and science it would be better to throw them out altogether. 
So I think making the unconscious conscious is not the way to higher intelligence. Quite the opposite. We might want more layers!

Anders Sandberg, Future of Humanity Institute, Faculty of Philosophy, Oxford University