[ExI] More thoughts on sentient computers

Jason Resch jasonresch at gmail.com
Thu Feb 23 16:55:02 UTC 2023


On Wed, Feb 22, 2023 at 8:00 PM Giovanni Santostasi via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Jason,
> The Newcomb paradox is mildly interesting. But the perceived depth of it
> is all in the word games that, AGAIN, philosophers are so good at. I'm so
> glad I'm a physicist and not a philosopher (we are better philosophers than
> philosophers, but we stopped calling ourselves that given the bad name
> philosophers gave to philosophy). The false depth of this so-called paradox
> comes from a sophistry that is the special case of the predictor being
> infallible. In that case all kinds of paradoxes come up, and "deep"
> conversations about free will, time machines and so on ensue.
>

I agree there is no real paradox here, but what is interesting is that it
shows a conflict between two commonly used decision theories: one based on
empirical observation and the other based on expected value. Note that
perfect prediction is not required for this conflict to arise; it happens
even with imperfect predictors, say a psychologist who is 75% accurate:

Empiricist thinker: Those who take only one box walk away with a million
dollars 75% of the time. Those who take both boxes walk away with $1,000
75% of the time and with $1,001,000 25% of the time. So I am better off
trying my luck with one box, as so many others before me did, and they made
out well.

Expected-value thinker: The psychologist's guess about my behavior has
already been made. Box A is already empty or not. Taking box B as well
gains me $1,000 no matter what is in box A. So I am better off taking both
boxes.

On analysis, the empiricist tends to do better. So is expected-value
thinking wrong? If so, where is its error?
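
To make this concrete (assuming the standard Newcomb payoffs, with
$1,000,000 in box A when the predictor expects one-boxing and $1,000 always
in box B), the expected values with the 75%-accurate psychologist are:

  EV(one box)   = 0.75 x $1,000,000            = $750,000
  EV(two boxes) = 0.25 x $1,000,000 + $1,000   = $251,000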



> In all the other cases one can actually write code to determine, given
> the predictor's success rate, what the best choice is from a statistical
> point of view.
>

True.
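
Here is a quick sketch of that calculation in Python (my own illustration,
assuming the standard $1,000,000 / $1,000 payoffs):

# Expected value of one-boxing vs. two-boxing in Newcomb's problem,
# as a function of the predictor's accuracy p.

MILLION = 1_000_000   # in box A when the predictor expects one-boxing
THOUSAND = 1_000      # always in box B

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing, so box A is full.
    return p * MILLION

def ev_two_box(p):
    # With probability (1 - p) the predictor wrongly foresaw one-boxing,
    # so box A is full anyway; the $1,000 in box B is collected either way.
    return (1 - p) * MILLION + THOUSAND

for p in (0.5, 0.51, 0.75, 1.0):
    best = "one box" if ev_one_box(p) > ev_two_box(p) else "two boxes"
    print(f"p = {p}: one box ${ev_one_box(p):,.0f}, "
          f"two boxes ${ev_two_box(p):,.0f} -> {best}")

Setting the two expressions equal puts the crossover at p = 0.5005, so a
predictor even slightly better than chance already favors one-boxing.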


> Nothing deep there.
>

The depth arises in the conversations and debates between one who genuinely
believes in one-boxing and one who genuinely believes in two-boxing. I have
had months-long debates with co-workers about this in the past. The
conversations led in very interesting directions.


> So the only issue is whether we can have an infallible predictor, and the
> answer is no. It is not even necessary to invoke QM for that, because just
> the idea of propagation of errors from finite information is enough. Even
> to predict the stability of the solar system many millions of years from
> now, we would need to know the current positions of the planets to
> basically an infinite level of precision, given all the nonlinear
> interactions in the system. If one has the discipline to do without these
> absolute abstractions (basically creationist ideas based on concepts like
> a perfect god) of perfect knowledge and perfect understanding, then one
> realizes that these philosophical riddles are not deep but bs (same thing
> with qualia, philosophical zombies and so on). No wonder this paradox has
> attracted William Lane Craig's attention.
>
>
Perhaps it's not physically possible for biological minds, but we can
imagine achieving 100% prediction accuracy in the case of an uploaded brain
or an AI, where all environmental inputs can be controlled.

Jason