[extropy-chat] what is probability?
jef at jefallbright.net
Sun Jan 14 02:20:18 UTC 2007
From: extropy-chat-bounces at lists.extropy.org on behalf of gts
> On Sat, 13 Jan 2007 12:08:09 -0500, Benjamin Goertzel <ben at goertzel.org>
>> All the Bertrand paradox shows is that the natural language concept
>> "select at random" is ambiguous, and can be disambiguated to yield
>> multiple meanings.
> What then is the one true disambiguated meaning of "select at random"? Or
> is there no such thing?
There is no one true meaning here. The meaning of "random" is, like all meaning, dependent on context. Sheeesh.
> I mentioned Jaynes' proposed solution. Apparently Jaynes went to great
> lengths to derive what he hoped would be the one true meaning of
> "selecting a random chord". I've seen a summary of his argument and it
> seems quite plausible, but even he stopped short of saying he had proved
> his case in any formal logical sense.
He highlighted that the problem is what we mean by "random", then suggested that we could choose to apply a common-sense meaning of random, and could likewise choose a definition of random that works over as wide a context as possible; for example: scale invariance, translational invariance, rotational invariance... pick as many as you like, the more the better. No one mentioned time invariance, but that would be equally valid. This is where the maximum entropy principle I mentioned earlier can guide us.
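The ambiguity is easy to see numerically. Here's a minimal Python sketch (my own illustration, not Jaynes's derivation) simulating three common readings of "select a random chord" of a unit circle, and estimating the probability that the chord is longer than the side of the inscribed equilateral triangle:

```python
import math
import random

random.seed(0)
N = 100_000
SIDE = math.sqrt(3)  # side length of equilateral triangle inscribed in a unit circle


def chord_endpoints():
    # Reading 1: pick two uniform random endpoints on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)


def chord_radial():
    # Reading 2: pick a uniform random distance from the centre along a radius.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)


def chord_midpoint():
    # Reading 3: pick a uniform random midpoint inside the disk (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        d2 = x * x + y * y
        if d2 <= 1:
            return 2 * math.sqrt(1 - d2)


results = {}
for name, f in [("endpoints", chord_endpoints),
                ("radial", chord_radial),
                ("midpoint", chord_midpoint)]:
    results[name] = sum(f() > SIDE for _ in range(N)) / N
    print(f"{name}: P(chord > side) ~= {results[name]:.3f}")
```

The three readings give three different answers (about 1/3, 1/2, and 1/4). Jaynes's invariance argument singles out the 1/2 answer as the one consistent with translational and scale invariance.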
> And even assuming his argument is correct for the random chord paradox,
> how is it in any way translatable to the other paradoxes?
> If the Bertrand paradox is fundamentally unsolvable
Like all paradoxes (in principle), it's perfectly solvable once you define it within sufficient context that its meaning is unambiguous.
> then it seems to me
> the principle of indifference is toast as a logical principle,
Huh? The principle of indifference is not some trick or even an algorithm. It's a very fundamental principle describing our interaction with observed reality. There's no known exception, only misapplication.
> and if so
> then it seems two rational agents would be free in certain cases to use
> different bayesian priors.
There's absolutely nothing wrong with using different priors, though it's true you should use the best prior available.
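A toy sketch of why different priors are tolerable (my own illustration; the Beta priors and the 0.7 coin bias are assumptions, not anything from the thread): two Bayesian agents start with different priors over a coin's bias, observe the same flips, and their posterior estimates converge as evidence accumulates.

```python
import random

random.seed(1)
# A biased coin with true heads probability 0.7 (an assumed value for illustration).
flips = [random.random() < 0.7 for _ in range(1000)]


def posterior_mean(alpha, beta, data):
    # Beta(alpha, beta) prior updated on Bernoulli data has posterior mean
    # (alpha + heads) / (alpha + beta + n).
    heads = sum(data)
    return (alpha + heads) / (alpha + beta + len(data))


a_prior = (1, 1)    # agent A: uniform (indifferent) prior
b_prior = (10, 2)   # agent B: strong prior belief the coin favours heads

for n in (0, 10, 100, 1000):
    pa = posterior_mean(*a_prior, flips[:n])
    pb = posterior_mean(*b_prior, flips[:n])
    print(f"n={n:4d}  A={pa:.3f}  B={pb:.3f}  |A-B|={abs(pa - pb):.3f}")
```

With no data the agents disagree substantially; after a thousand shared observations the gap between them is negligible, and both sit near the true bias.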
> I would guess that thought is probably anathema to AI researchers; we want
> to know all robots of a kind will think and act identically under
> identical circumstances, yes? Or do we? Real humans seem not to.
To the extent that identical robots are in identical circumstances, of course they will act identically. This is as fundamentally true as the principle of indifference that you're having trouble with.
Show us two identical humans in identical circumstances.