[extropy-chat] Re: Structure of AI

Eliezer Yudkowsky sentience at pobox.com
Tue Nov 23 06:43:50 UTC 2004


Adrian Tymes wrote:
>> 
>> I wasn't previously planning to work on that, but now that you mention
>> it, it might be a good way to stress-test the basic concepts, for the
>> same reason that people run really weird HTML through their browsers
>> to see if they crash.  How do you get an exact analysis of which
>> universe you live in?  Human minds can imagine alternate 
>> possibilities, and this is a fine talent to have, especially if you're
>> not sure which possibility is real.
> 
> There are many possibilities.  I prefer the scientific one: you come up
> with experiments to test and measure the differences, and if you can't -
> which is logically equivalent to there being no way to tell the 
> difference, since nothing you could detect would be the slightest bit
> different - then the difference is acknowledged and ignored, at least
> unless and until a way to measure the difference is discovered.

Mm, a fine rule for human arguments about goblins and fairies.  In 
probability theory we need to deal with the issue of hypotheses that give 
the same predictions for all phenomena up until now, then diverge at a 
future date; we want a prediction in advance.  Vide "grue" and so on.  What 
we use for this is Occam's Razor, formalized by something like Kolmogorov 
complexity.  Remember, a properly designed AI is not going to argue with 
you about fairies and goblins to begin with, so the conversational reply 
you give to humans may not be the appropriate answer.
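The formalization mentioned above can be sketched in a few lines. This is a toy illustration only: true Kolmogorov complexity is uncomputable, so the sketch substitutes description-string length as a crude proxy, and the hypothesis descriptions are my own hypothetical examples.

```python
# Toy Occam prior: weight each hypothesis by 2^-(description length),
# using string length as a crude stand-in for Kolmogorov complexity
# (which is uncomputable in general), then normalize.

def occam_prior(hypotheses):
    """Map hypothesis descriptions to normalized prior probabilities."""
    weights = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# "Grue" needs a longer description than "green", so it gets less prior
# weight -- predictions in advance, before the theories diverge.
priors = occam_prior([
    "emeralds are green",
    "emeralds are green before time t and blue afterward",
])
```

The point is that the simpler description receives nearly all the prior mass before any diverging observation arrives, which is how the "grue" problem gets an advance prediction rather than a post-hoc dismissal.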

I'm not talking about a problem where some silly human is trying to come up 
with hypotheses and then protect them from falsification.  I am talking 
about a case where you are *genuinely unsure* which universe you live in, 
and Occam's Razor won't always save you.  Suppose that you're in the 
eighteenth century, weighing Newton's gravitation and Einstein's relativity 
as mutually exclusive alternatives.  Occam's Razor, historically, would 
have given you the wrong answer, because no measurement then possible was 
precise enough to reveal the superiority of Einstein's predictions.  That 
you could test the theories *eventually* would not change the fact that, 
right *now*, the now of this hypothetical eighteenth century, you would be 
either uncertain which universe you lived in, or wrong about it.  For this 
reason we need to entertain alternatives.  (Note my use of a hypothetical, 
alternative eighteenth century in this explanation.)
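The situation can be made concrete with a one-line Bayesian update over two theories. The likelihood numbers below are purely illustrative assumptions, not historical data: they stand in for how well each theory fits a coarse versus a precise measurement.

```python
# Toy Bayesian comparison of two theories that make nearly identical
# predictions at low measurement precision.  All likelihoods are
# hypothetical, chosen only to illustrate the structure of the argument.

def posterior(prior_a, like_a, like_b):
    """Posterior probability of theory A after one observation,
    with theory B taking the remaining prior mass."""
    prior_b = 1.0 - prior_a
    evidence = prior_a * like_a + prior_b * like_b
    return prior_a * like_a / evidence

# Coarse measurement: both theories fit the data almost equally well,
# so the posterior barely moves from the 50/50 prior.
coarse = posterior(0.5, like_a=0.90, like_b=0.89)

# Precise measurement (the kind unavailable in the hypothetical
# eighteenth century): the predictions diverge, and the data now
# strongly favors theory B.
precise = posterior(0.5, like_a=0.05, like_b=0.95)
```

Until the precise measurement exists, the coarse updates leave you genuinely uncertain between the two universes, which is exactly the state the paragraph above describes.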

> For example: are we in a highly detailed computer simulation, or is this
> reality just what it appears to be?  Answer: if the simulation is
> detailed enough that we can never tell the difference, then it does not
> matter - any and every action we do has the same effect, and the
> universe we perceive behaves in exactly the same way.  (Note that this
> specifically excludes, for example, Agent Smith like characters: if they
> were present, we could eventually detect them, and thus we would have a
> way to find out that we were in a sim.)  It could be that way, but we
> can show that trying to determine that is futile - so we move on to 
> questions where our efforts are not wasted.

Nick Bostrom would probably say, "What if we have the experimentally 
testable prediction that building a superintelligence wastes so much 
sim-computing-power that the sim gets shut down shortly thereafter?"  Now 
you have an alarming prediction, and you need an advance expectation on it.

Wei Dai would probably say that all the different contexts simulating 
Adrian Tymes exist in superposition from the perspective of the agglomerate 
"Adrian" while he has not yet performed any test that distinguishes them, 
and then diverge as soon as the test is performed, so that here, now, you 
should anticipate all those futures; that is, the superposition of possible 
substrates is analogous to the superposition of quantum states in 
many-worlds theory.

> By eliminating the unanswerable questions, and focusing on ways to find
> the answers where answers can be found, this mindset has proven to be
> very useful in dealing with reality.  It is, of course, far from the 
> only one that humans can use.

I agree.  Wisely restricting ourselves to this mode of thinking, we still 
find that we are unsure of exactly which universe we live in.  That is what 
probability theory is for.

>> Now you're just being silly.  Don't tell me what I may or may not
>> imagine to kick-start my thinking.
> 
> *shrugs* Only if I don't give you my money - which I haven't.  There
> are, I have found, certain things that increase the likelihood of
> project success, and one of them is staying focussed on the project
> itself; if you really need to stray into something like this to 
> kick-start your thinking, then you have - perhaps permanently - lost
> sufficient focus that the odds of your particular effort succeeding are
> virtually zero. This is only my opinion, of course, but my opinion helps
> direct my spending.  If you honestly disagree, and believe that it will
> help you, it is not my place to stop you - only to spend my money
> elsewhere.

I see.  Well, in that case, it is clearly no service to humanity for me to 
waste my time talking to someone who will make no difference to the outcome 
of the Singularity.  Goodbye, discussion over.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


