[ExI] Semiotics and Computability
bbenzai at yahoo.com
Tue Feb 16 20:36:57 UTC 2010
> The logic of the CRA is correct. But it reasons from a flawed
> premise: That the human organism has this somehow ontologically
> special thing called "consciousness."
> So restart the music, and the merry-go-round. I'm surprised no one's
> mentioned the Giant Look Up Table yet.
I was thinking about this very thing (even though I said I'd no longer discuss the dread CRA), and I disagree that the logic is correct. It suffers from a fundamental flaw, as I see it: the (usually unquestioned) assumption that it's possible, even in principle, to have a set of rules that can answer any possible question about a set of data (the 'story'), in a consistently sensible fashion, without having any 'understanding'.
Searle just casually tosses this assertion out as though it were obviously possible, when it seems to me so unlikely as to be an outrageous assumption. Before anyone uses it in an argument, they need to demonstrate that it's possible; without that demonstration, any argument built on it is invalid.
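To make the assumption concrete, here is a minimal sketch of the Giant Look-Up Table idea (all names and the example story are hypothetical, just for illustration): a "rule book" reduced to a literal table of anticipated questions. Anything outside the table immediately exposes that nothing is being understood.

```python
# Toy lookup-table "answerer": canned answers for anticipated questions
# about a story, with no parsing, inference, or understanding at all.

story = "The man ordered a hamburger. It arrived burnt. He stormed out."

lookup_table = {
    "What did the man order?": "A hamburger.",
    "Did the man eat the hamburger?": "No, it was burnt and he left.",
}

def answer(question):
    # A table hit, or an admission that the question wasn't anticipated.
    return lookup_table.get(question, "<no entry: question not anticipated>")

print(answer("What did the man order?"))             # anticipated: table hit
print(answer("Was the man pleased with his meal?"))  # unanticipated: no entry
```

The point of the sketch is the gap it makes visible: to answer *any* possible question sensibly, the table would need an entry for every question anyone could ever pose, which is exactly the possibility the argument takes for granted.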
If you think about it, this principle is exactly what we use to test for understanding of a subject. We put people through exams where they're supposed to demonstrate their understanding by answering questions. If the questions are good ones (difficult to anticipate, posing a variety of different problems, etc.), and the answers are good ones (clearly stating how to solve the problems posed), we conclude that the person has demonstrated understanding of the subject. Our education system pretty much depends on this. So why on earth would anyone suggest that exactly the same setup - asking questions about a set of data, and checking that the answers are correct and consistent - could be used in an argument to claim that understanding is absent?