[extropy-chat] Re: Structure of AI

Adrian Tymes wingcat at pacbell.net
Tue Nov 23 05:43:25 UTC 2004


--- Eliezer Yudkowsky <sentience at pobox.com> wrote:
> Adrian Tymes wrote:
> > --- Eliezer Yudkowsky <sentience at pobox.com> wrote:
> >> It may not sound practically important (how many maxentropy
> >> universes did we plan to run into, anyway?) but from a
> >> theoretical standpoint this is one hell of a huge nitpick:
> >> The real universe is an atypical special case.
> > 
> > It's also the only one that matters.  Any and all efforts to
> > deal with universes radically different than the one we
> > actually face are wasted.
> 
> I wasn't previously planning to work on that, but now that you
> mention it, it might be a good way to stress-test the basic
> concepts, for the same reason that people run really weird HTML
> through their browsers to see if they crash.  How do you get an
> exact analysis of which universe you live in?  Human minds can
> imagine alternate possibilities, and this is a fine talent to
> have, especially if you're not sure which possibility is real.

There are many possible approaches.  I prefer the scientific
one: you come up with experiments to test and measure the
differences, and if you can't - which is logically equivalent to
there being no way to tell the difference, since nothing you
could detect would differ in the slightest - then the difference
is acknowledged and ignored, at least unless and until a way to
measure it is discovered.

For example: are we in a highly detailed computer
simulation, or is this reality just what it appears
to be?  Answer: if the simulation is detailed enough
that we can never tell the difference, then it does
not matter - any and every action we do has the same
effect, and the universe we perceive behaves in
exactly the same way.  (Note that this specifically excludes, for
example, Agent Smith-like characters: if such agents were present,
we could eventually detect them, and thus we would have a way to
find out that we were in a sim.)  It could be that way, but we can
show that trying to settle the question is futile - so we move on
to questions where our efforts are not wasted.
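To put the same point in code - a toy sketch of my own, with
made-up names, not a claim about any real physics or any real
simulator: two "universes" that return identical answers to every
observation can never be told apart from inside.

    // Two candidate "universes" that answer every possible
    // observation identically.  Because the outputs agree on all
    // inputs, no experiment run from inside can reveal which one
    // we inhabit.
    type Universe = (observation: string) => string;

    const plainReality: Universe = (obs) => `outcome of ${obs}`;
    const perfectSimulation: Universe = (obs) => `outcome of ${obs}`;

    function runExperiments(u: Universe, tests: string[]): string[] {
      return tests.map(u);
    }

    const tests = ["double-slit", "redshift survey", "collider run"];
    const real = runExperiments(plainReality, tests);
    const simmed = runExperiments(perfectSimulation, tests);
    console.log(real.every((r, i) => r === simmed[i])); // true

The moment the two functions disagree on any input - an Agent
Smith, say - the question becomes testable again.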

(By analogy to your browser example, this would be like not caring
what the content of a document is if the document is unavailable.
A 404 error is a 404 error, no matter what you were supposed to
get - though browsers do carry special code to handle an error
while loading an image embedded within an HTML document.)
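That special case looks roughly like this - a minimal sketch of
the fallback behavior, not the actual source of any particular
browser, and the file name is hypothetical:

    // If an image embedded in a page fails to load (e.g. its URL
    // returns a 404), replace the broken element with its alt text
    // instead of leaving a dead element in the page.
    const img = document.createElement("img");
    img.src = "diagram.png";   // hypothetical URL that may 404
    img.alt = "[diagram]";     // what to show if loading fails
    img.onerror = () => {
      img.replaceWith(document.createTextNode(img.alt));
    };
    document.body.appendChild(img);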

By eliminating the unanswerable questions, and focusing effort
where answers can actually be found, this mindset has proven very
useful in dealing with reality.  It is, of course, far from the
only one that humans can use.

> > That this type of analysis is even considered in the SIAI's
> > effort to build FAI leads me to conclude that the SIAI is not
> > worth funding, even if FAI itself would be a desirable goal.
> 
> Now you're just being silly.  Don't tell me what I may or may not
> imagine to kick-start my thinking.

*shrugs* Only if I don't give you my money - and I haven't.  There
are, I have found, certain things that increase the likelihood of
a project's success, and one of them is staying focused on the
project itself; if you really need to stray into something like
this to kick-start your thinking, then you have - perhaps
permanently - lost enough focus that the odds of your particular
effort succeeding are virtually zero.
This is only my opinion, of course, but my opinion
helps direct my spending.  If you honestly disagree,
and believe that it will help you, it is not my place
to stop you - only to spend my money elsewhere.


