[extropy-chat] Re: Structure of AI

Adrian Tymes wingcat at pacbell.net
Tue Nov 23 16:57:57 UTC 2004


--- Eliezer Yudkowsky <sentience at pobox.com> wrote:
> I'm not talking about a problem where some silly
> human is trying to come up 
> with hypotheses and then protect them from
> falsification.  I am talking 
> about a case where you are *genuinely unsure* which
> universe you live in, 
> and Occam's Razor won't always save you.  Suppose
> that you're in the 
> eighteenth century, weighing Newton's gravitation
> and Einstein's relativity 
> as mutually exclusive alternatives.  Occam's Razor,
> historically, would 
> have given you the wrong answer because they
> couldn't perform measurements 
> precise enough to see the superiority of Einstein's
> predictions.  That you 
> could test the theories *eventually* would not
> change the fact that, right 
> *now*, the now of this hypothetical eighteenth
> century, you would either be 
> uncertain which universe you lived in, or wrong
> about it.  For this reason 
> do we need to entertain alternatives.  Also, note my
> use of a hypothetical, 
> alternative eighteenth century in this explanation.

This is one of the reasons why claims that cannot yet
be tested are simply acknowledged as untestable for
now.  Testing capability advances all the time,
especially these days...

> Nick Bostrom would probably say, "What if we have
> the experimentally 
> testable prediction that building a
> superintelligence wastes so much 
> sim-computing-power that the sim gets shut down
> shortly thereafter?"  Now 
> you have an alarming prediction, and you need an
> advance expectation on it.

One can make an analogy to other predictions that the
universe will suddenly behave radically differently, on
a grand scale, because of some simple action.  These
have always turned out false (or almost always, if one
stretches the terms and looks hard enough).  For
example, people have predicted that planetary
alignments would signal the end of the Earth (say, it
was the Y2K signal for our sim or something).  Yet the
Earth has survived previous planetary alignments just
fine.  Likewise, building a superintelligence is
something one ramps up to; if we are in a sim, there
would likely be warning signs we could detect and
react to before the sim shut down.  (Normal
computer administrators have an array of tools to use
short of rebooting the machine if a single rogue
process starts eating up a lot of computing power.)
Plus, the computing power is ultimately expressed in
terms of atoms, which the sim would already simulate -
so would even a superintelligence necessarily use more
computing power in the sim?
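
To make the sysadmin analogy concrete, here is a
minimal sketch (in Python, assuming the third-party
psutil library and Unix-style nice values; the 80%
threshold is arbitrary) of the kind of throttling an
administrator can apply to a runaway process without
shutting the whole machine down:

  # Sketch: throttle a CPU-hogging process instead of rebooting.
  # Assumes the third-party psutil library and Unix-style priorities.
  import psutil

  CPU_THRESHOLD = 80.0  # percent of one core; arbitrary cutoff

  for proc in psutil.process_iter(['pid', 'name']):
      try:
          usage = proc.cpu_percent(interval=0.5)  # sample CPU use
          if usage > CPU_THRESHOLD:
              proc.nice(19)  # drop to the lowest scheduling priority
              print("Throttled %s (pid %d) at %.0f%% CPU"
                    % (proc.info['name'], proc.info['pid'], usage))
      except (psutil.NoSuchProcess, psutil.AccessDenied):
          pass  # process exited or we lack permission; move on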

> Wei Dai would probably say that all the different
> contexts simulating 
> Adrian Tymes exist in superposition from the
> perspective of the agglomerate 
> "Adrian" while he has not yet performed any test
> that distinguishes them, 
> and then diverges as soon as the test is performed,
> so that here, now, you 
> should anticipate all those futures; that is, the
> superposition of possible 
> substrates is analogous to the superposition of
> quantum states in 
> many-worlds theory.

Why anticipate all futures?  I can act now without yet
knowing exactly what I'll have for dinner tomorrow (or
if I'll skip dinner tomorrow in favor of being hungry
come Thanksgiving dinner).

> I agree.  Wisely restricting ourselves to this mode
> of thinking, we still 
> find that we are unsure of exactly which universe we
> live in.  That is what 
> probability theory is for.

This is correct.  But an AI would not necessarily need
to determine all aspects of the universe beyond its
ability to test.
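
As a toy illustration of that use of probability
theory (a sketch with made-up numbers, not anyone's
actual estimates): an agent unsure which of two
"universes" it lives in can simply carry a probability
over both and update on whatever evidence it can get,
without having to settle everything beyond its ability
to test:

  # Toy Bayesian update over two mutually exclusive hypotheses
  # about which "universe" we live in.  Priors and likelihoods
  # are invented purely for illustration.
  prior = {'newton': 0.5, 'einstein': 0.5}

  # Probability of some observation (say, a perihelion-precession
  # reading) under each hypothesis; made-up numbers.
  likelihood = {'newton': 0.02, 'einstein': 0.10}

  unnormalized = dict((h, prior[h] * likelihood[h]) for h in prior)
  total = sum(unnormalized.values())
  posterior = dict((h, p / total) for h, p in unnormalized.items())

  print(posterior)  # roughly {'newton': 0.17, 'einstein': 0.83}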

> I see.  Well, in that case, it is clearly no service
> to humanity for me to 
> waste my time talking to someone who will make no
> difference to the outcome 
> of the Singularity.  Goodbye, discussion over.

That only applies if you think yours is the only
possible effort that can make a difference - which
leads you to ignore other efforts, which ultimately
impairs your own effort's effectiveness.  (I've run
into this exact problem with my Casimir work: there
have been and still are a lot of failed efforts; only
by studying them, and acknowledging that those that
have not yet failed have, indeed, not yet failed,
could I come up with something that might succeed.  I
also plan for what happens should I fail too, such
that others might find my error and succeed at my
ultimate aim.)

(I know, you said "discussion over".  But I suspect
you'll still read this - and even if I don't
contribute money, I can still contribute advice.
Yours to follow or not, as you choose.  But keep in
mind, if you wonder why not everyone on the ExI list
is funding your effort yet, that my thinking is shared
by certain others.)


