<p>On Tue, May 7, 2013 Gordon <<a href="mailto:gts_2000@yahoo.com">gts_2000@yahoo.com</a>> wrote:</p>
<blockquote>
<p>>> Gordon, suppose the fMRI said that you were in great pain, absolute agony, but you felt just fine, very happy and healthy; who are you going to believe, the machine or your own direct experience?</p>
<p>> I would believe my direct experience.</p>
</blockquote>
<p>Smart move; therefore we can never be as certain of the qualia experienced by other people, or by other computers, as we are of our own, because we have direct experience only of our own qualia. Raymond Smullyan made that clear in his 1982 dialog "An Epistemological Nightmare". By the way, Smullyan is something of a mystic, and before I started reading his wonderful and beautiful books I thought all mystics were fools, but now I just think that most mystics are fools:</p>
<br>
<h1>An Epistemological Nightmare</h1>
<p align="RIGHT"><br></p>
<h2>Scene 1</h2>
<p>Frank is in the office of an eye doctor. The doctor holds up a book
and asks "What color is it?" Frank answers, "Red." The doctor says,
"Aha, just as I thought! Your whole color mechanism has gone out of
kilter. But fortunately your condition is curable, and I will have you
in perfect shape in a couple of weeks."
</p><h2>Scene 2</h2>
<p>(A few weeks later.) Frank is in a laboratory in the home of an
experimental epistemologist. (You will soon find out what that means!)
The epistemologist holds up a book and also asks, "What color is this
book?" Now, Frank has been earlier dismissed by the eye doctor as
"cured." However, he is now of a very analytical and cautious
temperament, and will not make any statement that can possibly be
refuted. So Frank answers, "It seems red to me."
</p><p><b>Epistemologist:</b><br> Wrong!
</p><p><b>Frank:</b><br> I don't think you heard what I said. I merely said that it seems red to me.
</p><p><b>Epistemologist:</b><br> I heard you, and you were wrong.
</p><p><b>Frank:</b><br> Let me get this clear; did you mean that I was wrong that this book is red, or that I was wrong that it seems red to me?
</p><p><b>Epistemologist:</b><br> I obviously couldn't have meant
that you were wrong in that it is red, since you did not say that it is
red. All you said was that it seems red to you, and it is this statement
which is wrong.
</p><p><b>Frank:</b><br> But you can't say that the statement "It seems red to me" is wrong.
</p><p><b>Epistemologist:</b><br> If I can't say it, how come I did?
</p><p><b>Frank:</b><br> I mean you can't mean it.
</p><p><b>Epistemologist:</b><br> Why not?
</p><p><b>Frank:</b><br> But surely I know what color the book seems to me!
</p><p><b>Epistemologist:</b><br> Again you are wrong.
</p><p><b>Frank:</b><br> But nobody knows better than I how things seem to me.
</p><p><b>Epistemologist:</b><br> I am sorry, but again you are wrong.
</p><p><b>Frank:</b><br> But who knows better than I?
</p><p><b>Epistemologist:</b><br> I do.
</p><p><b>Frank:</b><br> But how could you have access to my private mental states?
</p><p><b>Epistemologist:</b><br> Private mental states! Metaphysical
hogwash! Look, I am a practical epistemologist. Metaphysical problems
about "mind" versus "matter" arise only from epistemological confusions.
Epistemology is the true foundation of philosophy. But the trouble with
all past epistemologists is that they have been using wholly
theoretical methods, and much of their discussion degenerates into mere
word games. While other epistemologists have been solemnly arguing such
questions as whether a man can be wrong when he asserts that he believes
such and such, I have discovered how to settle such questions
experimentally.
</p><p><b>Frank:</b><br> How could you possibly decide such things empirically?
</p><p><b>Epistemologist:</b><br> By reading a person's thoughts directly.
</p><p><b>Frank:</b><br> You mean you are telepathic?
</p><p><b>Epistemologist:</b><br> Of course not. I simply did the one
obvious thing which should be done, viz. I have constructed a
brain-reading machine--known technically as a cerebroscope--that is
operative right now in this room and is scanning every nerve cell in
your brain. I thus can read your every sensation and thought, and it is a
simple objective truth that this book does not seem red to you.
</p><p><b>Frank (thoroughly subdued):</b><br> Goodness gracious, I really could have sworn that the book seemed red to me; it sure seems that it seems red to me!
</p><p><b>Epistemologist:</b><br> I'm sorry, but you are wrong again.
</p><p><b>Frank:</b><br> Really? It doesn't even seem that it seems red to me? It sure seems like it seems like it seems red to me!
</p><p><b>Epistemologist:</b><br> Wrong again! And no matter how many
times you reiterate the phrase "it seems like" and follow it by "the
book is red" you will be wrong.
</p><p><b>Frank:</b><br> This is fantastic! Suppose instead of the
phrase "it seems like" I would say "I believe that." So let us start
again at ground level. I retract the statement "It seems red to me" and
instead I assert "I believe that this book is red." Is this statement
true or false?
</p><p><b>Epistemologist:</b><br> Just a moment while I scan the dials of the brain-reading machine--no, the statement is false.
</p><p><b>Frank:</b><br> And what about "I believe that I believe that the book is red"?
</p><p><b>Epistemologist (consulting his dials):</b><br> Also false. And again, no matter how many times you iterate "I believe," all these belief sentences are false.
</p><p><b>Frank:</b><br> Well, this has been a most enlightening
experience. However, you must admit that it is a little hard on me to
realize that I am entertaining infinitely many erroneous beliefs!
</p><p><b>Epistemologist:</b><br> Why do you say that your beliefs are erroneous?
</p><p><b>Frank:</b><br> But you have been telling me this all the while!
</p><p><b>Epistemologist:</b><br> I most certainly have not!
</p><p><b>Frank:</b><br> Good God, I was prepared to admit all my
errors, and now you tell me that my beliefs are not errors; what are you
trying to do, drive me crazy?
</p><p><b>Epistemologist:</b><br> Hey, take it easy! Please try to recall: When did I say or imply that any of your beliefs are erroneous?
</p><p><b>Frank:</b><br> Just simply recall the infinite sequence of
sentences: (1) I believe this book is red; (2) I believe that I believe
this book is red; and so forth. You told me that every one of those
statements is false.
</p><p><b>Epistemologist:</b><br> True.
</p><p><b>Frank:</b><br> Then how can you consistently maintain that my beliefs in all these false statements are not erroneous?
</p><p><b>Epistemologist:</b><br> Because, as I told you, you don't believe any of them.
</p><p><b>Frank:</b><br> I think I see, yet I am not absolutely sure.
</p><p><b>Epistemologist:</b><br> Look, let me put it another way.
Don't you see that the very falsity of each of the statements that you
assert saves you from an erroneous belief in the preceding one? The
first statement is, as I told you, false. Very well! Now the second
statement is simply to the effect that you believe the first statement.
If the second statement were true, then you would believe the first
statement, and hence your belief about the first statement would indeed
be in error. But fortunately the second statement is false, hence you
don't really believe the first statement, so your belief in the first
statement is not in error. Thus the falsity of the second statement
implies you do not have an erroneous belief about the first; the falsity
of the third likewise saves you from an erroneous belief about the
second, etc.
</p><p><b>Frank:</b><br> Now I see perfectly! So none of my beliefs were erroneous, only the statements were erroneous.
</p><p><b>Epistemologist:</b><br> Exactly.
</p><p><b>Frank:</b><br> Most remarkable! Incidentally, what color is the book really?
</p><p><b>Epistemologist:</b><br> It is red.
</p><p><b>Frank:</b><br> What!
</p><p><b>Epistemologist:</b><br> Exactly! Of course the book is red. What's the matter with you, don't you have eyes?
</p><p><b>Frank:</b><br> But didn't I in effect keep saying that the book is red all along?
</p><p><b>Epistemologist:</b><br> Of course not! You kept saying it
seems red to you, it seems like it seems red to you, you believe it is
red, you believe that you believe it is red, and so forth. Not once did
you say that it is red. When I originally asked you "What color is the
book?" if you had simply answered "red," this whole painful discussion
would have been avoided.
</p><h2>Scene 3</h2>
<p>Frank comes back several months later to the home of the epistemologist.
</p><p><b>Epistemologist:</b><br> How delightful to see you! Please sit down.
</p><p><b>Frank (seated):</b><br> I have been thinking of our last
discussion, and there is much I wish to clear up. To begin with, I
discovered an inconsistency in some of the things you said.
</p><p><b>Epistemologist:</b><br> Delightful! I love inconsistencies. Pray tell!
</p><p><b>Frank:</b><br> Well, you claimed that although my belief
sentences were false, I did not have any actual beliefs that are false.
If you had not admitted that the book actually is red, you would have
been consistent. But your very admission that the book is red leads to
an inconsistency.
</p><p><b>Epistemologist:</b><br> How so?
</p><p><b>Frank:</b><br> Look, as you correctly pointed out, in each
of my belief sentences "I believe it is red," "I believe that I believe
it is red," the falsity of each one other than the first saves me from
an erroneous belief in the preceding one. However, you neglected to
take into consideration the first sentence itself. The falsity of the
first sentence "I believe it is red," in conjunction with the fact that
it is red, does imply that I do have a false belief.
</p><p><b>Epistemologist:</b><br> I don't see why.
</p><p><b>Frank:</b><br> It is obvious! Since the sentence "I believe
it is red" is false, then I in fact believe it is not red, and since it
really is red, then I do have a false belief. So there!
</p><p><b>Epistemologist (disappointed):</b><br> I am sorry, but your
proof obviously fails. Of course the falsity of the fact that you
believe it is red implies that you don't believe it is red. But this
does not mean that you believe it is not red!
</p><p><b>Frank:</b><br> But obviously I know that it either is red or it isn't, so if I don't believe it is, then I must believe that it isn't.
</p><p><b>Epistemologist:</b><br> Not at all. I believe that either
Jupiter has life or it doesn't. But I neither believe that it does, nor
do I believe that it doesn't. I have no evidence one way or the other.
</p><p><b>Frank:</b><br> Oh well, I guess you are right. But let us
come to more important matters. I honestly find it impossible that I can
be in error concerning my own beliefs.
</p><p><b>Epistemologist:</b><br> Must we go through this again? I
have already patiently explained to you that you (in the sense of your
beliefs, not your statements) are not in error.
</p><p><b>Frank:</b><br> Oh, all right then, I simply do not believe
that even the statements are in error. Yes, according to the machine
they are in error, but why should I trust the machine?
</p><p><b>Epistemologist:</b><br> Whoever said you should trust the machine?
</p><p><b>Frank:</b><br> Well, should I trust the machine?
</p><p><b>Epistemologist:</b><br> That question involving the word
"should" is out of my domain. However, if you like, I can refer you to a
colleague who is an excellent moralist--he may be able to answer this
for you.
</p><p><b>Frank:</b><br> Oh come on now, I obviously didn't mean
"should" in a moralistic sense. I simply meant "Do I have any evidence
that this machine is reliable?"
</p><p><b>Epistemologist:</b><br> Well, do you?
</p><p><b>Frank:</b><br> Don't ask me! What I mean is should you trust the machine?
</p><p><b>Epistemologist:</b><br> Should I trust it? I have no idea, and I couldn't care less what I should do.
</p><p><b>Frank:</b><br> Oh, your moralistic hangup again. I mean, do you have evidence that the machine is reliable?
</p><p><b>Epistemologist:</b><br> Well of course!
</p><p><b>Frank:</b><br> Then let's get down to brass tacks. What is your evidence?
</p><p><b>Epistemologist:</b><br> You hardly can expect that I can
answer this for you in an hour, a day, or a week. If you wish to study
this machine with me, we can do so, but I assure you this is a matter of
several years. At the end of that time, however, you would certainly
not have the slightest doubts about the reliability of the machine.
</p><p><b>Frank:</b><br> Well, possibly I could believe that it is
reliable in the sense that its measurements are accurate, but then I
would doubt that what it actually measures is very significant. It seems
that all it measures is one's physiological states and activities.
</p><p><b>Epistemologist:</b><br> But of course, what else would you expect it to measure?
</p><p><b>Frank:</b><br> I doubt that it measures my psychological states, my actual beliefs.
</p><p><b>Epistemologist:</b><br> Are we back to that again? The
machine does measure those physiological states and processes that you
call psychological states, beliefs, sensations, and so forth.
</p><p><b>Frank:</b><br> At this point I am becoming convinced that
our entire difference is purely semantical. All right, I will grant that
your machine does correctly measure beliefs in your sense of the word
"belief," but I don't believe that it has any possibility of measuring
beliefs in my sense of the word "believe." In other words I claim that
our entire deadlock is simply due to the fact that you and I mean
different things by the word "belief."
</p><p><b>Epistemologist:</b><br> Fortunately, the correctness of
your claim can be decided experimentally. It so happens that I now have
two brain-reading machines in my office, so I now direct one to your
brain to find out what you mean by "believe" and now I direct the other
to my own brain to find out what I mean by "believe," and now I shall
compare the two readings. Nope, I'm sorry, but it turns out that we mean
exactly the same thing by the word "believe."
</p><p><b>Frank:</b><br> Oh, hang your machine! Do you believe we mean the same thing by the word "believe"?
</p><p><b>Epistemologist:</b><br> Do I believe it? Just a moment while I check with the machine. Yes, it turns out I do believe it.
</p><p><b>Frank:</b><br> My goodness, do you mean to say that you can't even tell me what you believe without consulting the machine?
</p><p><b>Epistemologist:</b><br> Of course not.
</p><p><b>Frank:</b><br> But most people when asked what they believe
simply tell you. Why do you, in order to find out your beliefs, go
through the fantastically roundabout process of directing a
thought-reading machine to your own brain and then finding out what you
believe on the basis of the machine readings?
</p><p><b>Epistemologist:</b><br> What other scientific, objective way is there of finding out what I believe?
</p><p><b>Frank:</b><br> Oh, come now, why don't you just ask yourself?
</p><p><b>Epistemologist (sadly):</b><br> It doesn't work. Whenever I ask myself what I believe, I never get any answer!
</p><p><b>Frank:</b><br> Well, why don't you just state what you believe?
</p><p><b>Epistemologist:</b><br> How can I state what I believe before I know what I believe?
</p><p><b>Frank:</b><br> Oh, to hell with your knowledge of what you believe; surely you have some idea or belief as to what you believe, don't you?
</p><p><b>Epistemologist:</b><br> Of course I have such a belief. But how do I find out what this belief is?
</p><p><b>Frank:</b><br> I am afraid we are getting into another
infinite regress. Look, at this point I am honestly beginning to wonder
whether you may be going crazy.
</p><p><b>Epistemologist:</b><br> Let me consult the machine. Yes, it turns out that I may be going crazy.
</p><p><b>Frank:</b><br> Good God, man, doesn't this frighten you?
</p><p><b>Epistemologist:</b><br> Let me check! Yes, it turns out that it does frighten me.
</p><p><b>Frank:</b><br> Oh please, can't you forget this damned machine and just tell me whether you are frightened or not?
</p><p><b>Epistemologist:</b><br> I just told you that I am. However, I only learned of this from the machine.
</p><p><b>Frank:</b><br> I can see that it is utterly hopeless to
wean you away from the machine. Very well, then, let us play along with
the machine some more. Why don't you ask the machine whether your sanity
can be saved?
</p><p><b>Epistemologist:</b><br> Good idea! Yes, it turns out that it can be saved.
</p><p><b>Frank:</b><br> And how can it be saved?
</p><p><b>Epistemologist:</b><br> I don't know, I haven't asked the machine.
</p><p><b>Frank:</b><br> Well, for God's sake, ask it!
</p><p><b>Epistemologist:</b><br> Good idea. It turns out that...
</p><p><b>Frank:</b><br> It turns out what?
</p><p><b>Epistemologist:</b><br> It turns out that...
</p><p><b>Frank:</b><br> Come on now, it turns out what?
</p><p><b>Epistemologist:</b><br> This is the most fantastic thing I
have ever come across! According to the machine the best thing I can do
is to cease to trust the machine!
</p><p><b>Frank:</b><br> Good! What will you do about it?
</p><p><b>Epistemologist:</b><br> How do I know what I will do about it? I can't read the future.
</p><p><b>Frank:</b><br> I mean, what do you presently intend to do about it?
</p><p><b>Epistemologist:</b><br> Good question, let me consult the
machine. According to the machine, my current intentions are in complete
conflict. And I can see why! I am caught in a terrible paradox! If the
machine is trustworthy, then I had better accept its suggestion to
distrust it. But if I distrust it, then I also distrust its suggestion
to distrust it, so I am really in a total quandary.
</p><p><b>Frank:</b><br> Look, I know of someone who I think might be
really of help in this problem. I'll leave you for a while to consult
him. Au revoir!
</p><h2>Scene 4</h2>
<p>(Later in the day at a psychiatrist's office.)
</p><p><b>Frank:</b><br> Doctor, I am terribly worried about a friend of mine. He calls himself an "experimental epistemologist."
</p><p><b>Doctor:</b><br> Oh, the experimental epistemologist. There is only one in the world. I know him well!
</p><p><b>Frank:</b><br> That is a relief. But do you realize that he
has constructed a mind-reading device that he now directs to his own
brain, and whenever one asks him what he thinks, believes, feels, is
afraid of, and so on, he has to consult the machine first before
answering? Don't you think this is pretty serious?
</p><p><b>Doctor:</b><br> Not as serious as it might seem. My prognosis for him is actually quite good.
</p><p><b>Frank:</b><br> Well, if you are a friend of his, couldn't you sort of keep an eye on him?
</p><p><b>Doctor:</b><br> I do see him quite frequently, and I do
observe him much. However, I don't think he can be helped by so-called
"psychiatric treatment." His problem is an unusual one, the sort that
has to work itself out. And I believe it will.
</p><p><b>Frank:</b><br> Well, I hope your optimism is justified. At any rate I sure think I need some help at this point!
</p><p><b>Doctor:</b><br> How so?
</p><p><b>Frank:</b><br> My experiences with the epistemologist have
been thoroughly unnerving! At this point I wonder if I may be going
crazy; I can't even have confidence in how things appear to me. I think
maybe you could be helpful here.
</p><p><b>Doctor:</b><br> I would be happy to but cannot for a while.
For the next three months I am unbelievably overloaded with work. After
that, unfortunately, I must go on a three-month vacation. So in six
months come back and we can talk this over.
</p><h2>Scene 5</h2>
<p>(Same office, six months later.)
</p><p><b>Doctor:</b><br> Before we go into your problems, you will be happy to hear that your friend the epistemologist is now completely recovered.
</p><p><b>Frank:</b><br> Marvelous, how did it happen?
</p><p><b>Doctor:</b><br> Almost, as it were, by a stroke of
fate--and yet his very mental activities were, so to speak, part of the
"fate." What happened was this: For months after you last saw him, he
went around worrying "should I trust the machine, shouldn't I trust the
machine, should I, shouldn't I, should I, shouldn't I." (He decided to
use the word "should" in your empirical sense.) He got nowhere! So he
then decided to "formalize" the whole argument. He reviewed his study of
symbolic logic, took the axioms of first-order logic, and added as
nonlogical axioms certain relevant facts about the machine. Of course
the resulting system was inconsistent--he formally proved that he should
trust the machine if and only if he shouldn't, and hence that he both
should and should not trust the machine. Now, as you may know, in a
system based on classical logic (which is the logic he used), if one can
prove so much as a single contradictory proposition, then one can prove
any proposition, hence the whole system breaks down. So he decided to
use a logic weaker than classical logic--a logic close to what is known
as "minimal logic"--in which the proof of one contradiction does not
necessarily entail the proof of every proposition. However, this system
turned out too weak to decide the question of whether or not he should
trust the machine. Then he had the following bright idea. Why not use
classical logic in his system even though the resulting system is
inconsistent? Is an inconsistent system necessarily useless? Not at all!
Even though given any proposition, there exists a proof that it is true
and another proof that it is false, it may be the case that for any
such pair of proofs, one of them is simply more psychologically
convincing than the other, so simply pick the proof you actually
believe! Theoretically the idea turned out very well--the actual system
he obtained really did have the property that given any such pair of
proofs, one of them was always psychologically far more convincing than
the other. Better yet, given any pair of contradictory propositions, all
proofs of one were more convincing than any proof of the other. Indeed,
anyone except the epistemologist could have used the system to decide
whether the machine could be trusted. But with the epistemologist, what
happened was this: He obtained one proof that he should trust the
machine and another proof that he should not. Which proof was more
convincing to him, which proof did he really "believe"? The only way he
could find out was to consult the machine! But he realized that this
would be begging the question, since his consulting the machine would be
a tacit admission that he did in fact trust the machine. So he still
remained in a quandary.
</p><p><b>Frank:</b><br> So how did he get out of it?
</p><p><b>Doctor:</b><br> Well, here is where fate kindly interceded.
Due to his absolute absorption in the theory of this problem, which
consumed about his every waking hour, he became for the first time in
his life experimentally negligent. As a result, quite unknown to him, a
few minor units of his machine blew out! Then, for the first time, the
machine started giving contradictory information--not merely subtle
paradoxes, but blatant contradictions. In particular, the machine one
day claimed that the epistemologist believed a certain proposition and a
few days later claimed he did not believe that proposition. And to add
insult to injury, the machine claimed that he had not changed his belief
in the last few days. This was enough to simply make him totally
distrust the machine. Now he is fit as a fiddle.
</p><p><b>Frank:</b><br> This is certainly the most amazing thing I
have ever heard! I guess the machine was really dangerous and unreliable
all along.
</p><p><b>Doctor:</b><br> Oh, not at all; the machine used to be excellent before the epistemologist's experimental carelessness put it out of whack.
</p><p><b>Frank:</b><br> Well, surely when I knew it, it couldn't have been very reliable.
</p><p><b>Doctor:</b><br> Not so, Frank, and this brings us to your
problem. I know about your entire conversation with the
epistemologist--it was all tape-recorded.
</p><p><b>Frank:</b><br> Then surely you realize the machine could not have been right when it denied that I believed the book was red.
</p><p><b>Doctor:</b><br> Why not?
</p><p><b>Frank:</b><br> Good God, do I have to go through all this
nightmare again? I can understand that a person can be wrong if he
claims that a certain physical object has a certain property, but have
you ever known a single case when a person can be mistaken when he
claims to have or not have a certain sensation?
</p><p><b>Doctor:</b><br> Why, certainly! I once knew a Christian
Scientist who had a raging toothache; he was frantically groaning and
moaning all over the place. When asked whether a dentist might not cure
him, he replied that there was nothing to be cured. Then he was asked,
"But do you not feel pain?" He replied, "No, I do not feel pain; nobody
feels pain, there is no such thing as pain, pain is only an illusion."
So here is a case of a man who claimed not to feel pain, yet everyone
present knew perfectly well that he did feel pain. I certainly don't
believe he was lying, he was just simply mistaken.
</p><p><b>Frank:</b><br> Well, all right, in a case like that. But how can one be mistaken if one asserts his belief about the color of a book?
</p><p><b>Doctor:</b><br> I can assure you that without access to any
machine, if I asked someone what color is this book, and he answered,
"I believe it is red," I would be very doubtful that he really believed
it. It seems to me that if he really believed it, he would answer, "It
is red" and not "I believe it is red" or "It seems red to me." The very
timidity of his response would be indicative of his doubts.
</p><p><b>Frank:</b><br> But why on earth should I have doubted that it was red?
</p><p><b>Doctor:</b><br> You should know that better than I. Let us
see now, have you ever in the past had reason to doubt the accuracy of
your sense perception?
</p><p><b>Frank:</b><br> Why, yes. A few weeks before visiting the
epistemologist, I suffered from an eye disease, which did make me see
colors falsely. But I was cured before my visit.
</p><p><b>Doctor:</b><br> Oh, so no wonder you doubted it was red!
True enough, your eyes perceived the correct color of the book, but your
earlier experience lingered in your mind and made it impossible for you
to really believe it was red. So the machine was right!
</p><p><b>Frank:</b><br> Well, all right, but then why did I doubt that I believed it was true?
</p><p><b>Doctor:</b><br> Because you didn't believe it was true, and
unconsciously you were smart enough to realize the fact. Besides, when
one starts doubting one's own sense perceptions, the doubt spreads like
an infection to higher and higher levels of abstraction until finally
the whole belief system becomes one doubting mass of insecurity. I bet
that if you went to the epistemologist's office now, and if the machine
were repaired, and you now claimed that you believe the book is red, the
machine would concur.
</p><p>No, Frank, the machine is--or, rather, was--a good one. The
epistemologist learned much from it, but misused it when he applied it
to his own brain. He really should have known better than to create such
an unstable situation. The combination of his brain and the machine
each scrutinizing and influencing the behavior of the other led to
serious problems in feedback. Finally the whole system went into a
cybernetic wobble. Something was bound to give sooner or later.
Fortunately, it was the machine.
</p><p><b>Frank:</b><br> I see. One last question, though. How could the machine be trustworthy when it claimed to be untrustworthy?
</p><p><b>Doctor:</b><br> The machine never claimed to be
untrustworthy, it only claimed that the epistemologist would be better
off not trusting it. And the machine was right.</p><p>John K Clark</p>