[extropy-chat] Possible Worlds Semantics
Ian Goddard
iamgoddard at yahoo.com
Mon Apr 17 06:12:03 UTC 2006
I bemoaned:
> Trying to explain my answer runs into describing a
> thicket of nested possible worlds ...
But it's an attractive challenge! Here's an effort to
explain my answer [1] to the puzzle Hal posted [2].
The following models the range of possible envelopes
from $10^1 to $10^7. Below the envelope values are two
lines: the first shows the position of the second son
(let's call him 'B', and the first son 'A'), who got
$10^3, so he's placed directly under $10^3. From there
B wonders whether, along the second line, A's position
falls under $10^2 or $10^4 (fixed-pitch font necessary):
$10^1   $10^2   $10^3   $10^4   $10^5   $10^6   $10^7
B *-------*------(B)------*-------*-------*-------*
A *-------A?------*-------A?------*-------*-------*
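In code, the same picture (a minimal Python sketch; the
factor-of-ten spacing is read off the diagram above, and
the names are my own, not anything from Hal's post):

  # The range of possible envelope values: $10^1 through $10^7.
  values = [10**n for n in range(1, 8)]

  def possible_positions(v):
      # A's candidate positions given that B holds v: one step
      # below or one step above on the $10^n scale, clipped at
      # the ends of the range.
      i = values.index(v)
      return [values[j] for j in (i - 1, i + 1) if 0 <= j < len(values)]

  print(possible_positions(10**3))   # [100, 10000], i.e. $10^2 or $10^4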
B has $10^3 and wonders: "Does A have $10^2 or $10^4?"
All B really needs to know is whether A has $10^2; thus
B constructs a possible world wherein A has $10^2 and
deduces its probable consequences:
----------------------------------------------------
B's mentally modeled possible world (where 'I' = B):
----------------------------------------------------
1. I have $10^3 and so I want to know if A has $10^2.
2. If A has $10^2, he wants to know if I have $10^1.
3. If I had $10^1 (the bottom of the range, where the
   other envelope can only be larger), then I'd ONLY
   want to switch.
Ergo: if A has $10^2, then he wants to know whether I
definitely want to switch. Therefore, if I learn
that A wants to know only that, then I can
infer that A likely has only $10^2, and if A
has that, then I do NOT want to switch.
----------------------------------------------------
<end of B's modeled possible world>
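The same deduction as a toy decision procedure (a hedged
Python sketch of my reading of the steps above;
'must_switch' and the strict bottom-of-range rule are my
framing, not part of Hal's puzzle):

  values = [10**n for n in range(1, 8)]

  def must_switch(v):
      # At the bottom of the range the other envelope can only
      # be larger, so switching is the ONLY rational move.
      return v == min(values)

  # B's modeled possible world, where A holds $10^2:
  # step 2: A with $10^2 wonders whether I (B) hold $10^1;
  # step 3: a holder of $10^1 would certainly switch.
  assert must_switch(10**1)
  # Ergo: if I learn that A only wants to know whether I
  # definitely want to switch, I infer A likely holds $10^2,
  # in which case I do NOT want to switch.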
Of course B can also look at BOTH of A's possible
states. Here's what B wants to know about A if (1) A
has $10^2 (as in the example above) or (2) A has $10^4:
1. B(A(B))
2. B(A(B(A(B))))
1. B: "What's A's view about me?"
2. B: "What's A's view about my view about A's view
about me?"
For the sake of exploration, let's just keep looking
up the $10^n scale from B's fixed position to higher
positions along A's possible-position line:
3. B(A(B(A(B(A(B))))))
3. B: "What's A's view about my view of A's view about
my view about A's view about me?"
4. B(A(B(A(B(A(B(A(B))))))))
4. B: "What's A's view about my view of A's view about
my view of A's view about my view about A's view about
me?"
Whoa, I'm getting dizzy!
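No need to get dizzy by hand: the pattern is easy to
generate mechanically. Here's a small Python sketch (my
own construction, not anything from Hal's post) that
emits the depth-n expression and its English reading:

  def nested(n):
      # Depth-n expression: nested(1) == 'B(A(B))',
      # nested(2) == 'B(A(B(A(B))))', and so on.
      expr = "B"
      for _ in range(n):
          expr = "B(A(%s))" % expr
      return expr

  def gloss(n):
      # English reading of the depth-n expression.
      middle = "A's view about my view of " * (n - 1)
      return "What's " + middle + "A's view about me?"

  for n in range(1, 5):
      print(nested(n))
      print(gloss(n))

Running it reproduces items 1 through 4 above (modulo
some 'of'/'about' variation in the glosses).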
Seems like it can just keep going without limit... And
note that the sentences seem to be grammatical, making
sense both syntactically and semantically even as they
grow more "ridiculous" with each iteration. Assuming a
model with no upper bound, it looks like a recursively
defined generative language involving an immediate
language-model isomorphism, such that the language and
the model share a common structure; almost as if, at a
higher level of abstraction, there's a structure of
which both the language and the model are individual
instantiations. Maybe that higher level is 'natural
logic' (some underlying ontic logic that our formal
systems merely try to model). Well, maybe that's just
pipe dreaming; whatever the case, I sense a potent
avenue of inquiry in this topic of nested models of
knowledge states that Hal brings to our attention! ~Ian
_____________________________________________________
[1] My answer:
http://lists.extropy.org/pipermail/extropy-chat/2006-April/026273.html
[2] The puzzle Hal posted:
http://lists.extropy.org/pipermail/extropy-chat/2006-April/026233.html
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com