[extropy-chat] Re: another objection

Eliezer Yudkowsky sentience at pobox.com
Thu Jun 3 17:50:57 UTC 2004


Norm Wilson wrote:
> 
> Because morality is an abstract concept that affects human behavior, but
> is not itself physically measurable by the FAI.  The FAI cannot (so far
> as we know) directly "perceive" morality, so it considers humans to be 
> the only available measuring devices and assumes that smarter humans who
> know more are better at measuring (or at least describing, or behaving 
> in accordance with) the concept of morality.  To remove humans from the 
> process would be analogous to throwing out the thermometer and
> extrapolating the current temperature based on past results.  By 
> teaching us more, the FAI would effectively be turning us into better 
> "morality thermometers".

It would be more accurate to say that an FAI needs to look at humans to 
find the specification of how to compute morality.  It's like a sentient 
Calculator that wonders "What is the Multiplication?", where the 
Multiplication turns out to consist of multiplying 6 x 7.  The Calculator 
is pretty sure that the answer is somewhere in the vicinity of 41 or 43, 
although different Calculators have been known to disagree on the subject. 
But certainly a superintelligence would know that the Multiplication was 
41, or 43, or whatever, and would not return preposterous answers such as 
4 or 30,000.

Unless the superintelligence happened to contain a question, "What is the 
Multiplication?", that consisted of multiplying 2 x 2 or 300 x 100.

If you presented this scenario to Calculators who did not quite understand 
how Multiplication worked, they would laugh at the idea that a 
superintelligence could get the answer so blatantly wrong - thinking that 4 
or 30,000 was the Multiplication!  The Calculators may not really 
understand how Multiplication works, or whether the answer is 41 or 43, but 
they know a superintelligence wouldn't be that stupid.  Even if a 
superintelligence started out with a Multiplication of 4 or 30,000, surely 
it would look over this Multiplication, realize how silly and inappropriate 
it was, and choose a better Multiplication like, oh, say, 45 or 39; it 
might seem a little alien, but not something grossly wrong like 4 or 
30,000.  Would not the superintelligence realize the meaninglessness of 4, 
that it cannot possibly result from multiplying 6, let alone 7?  Maybe the 
superintelligence would multiply 6 by 2/3, but that seems implausible.

Note that all Calculators, if they knew more, thought faster, were more the 
people they wished they were, had grown up farther together, would 
uniformly agree that the Multiplication was exactly 42; and in this sense 
their Multiplication is as objective as mathematics.  The Calculators' 
mistake is to think that the superintelligence's Multiplication must 
necessarily ask the same implicit question.
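
To put the same point in code: here is a minimal sketch (a toy of my own 
invention; the Calculator class and its "precision" knob are illustrative 
assumptions, nothing more) of a Calculator that answers whatever question 
it contains, however intelligent you make it:

    import random

    class Calculator:
        def __init__(self, a, b, precision=0):
            # The Multiplication this Calculator embodies: multiply a by b.
            self.a = a
            self.b = b
            # "precision" stands in for intelligence: it sharpens the answer
            # to the embedded question, it does not change the question.
            self.precision = precision

        def the_multiplication(self):
            exact = self.a * self.b
            # A dim Calculator wobbles around the right answer (41 or 43);
            # a smart one nails it.  Either way it answers *its* question.
            noise = random.uniform(-2, 2) / (1 + self.precision)
            return exact + noise

    ours   = Calculator(6, 7, precision=1000)
    theirs = Calculator(2, 2, precision=1000)
    print(round(ours.the_multiplication()))    # 42
    print(round(theirs.the_multiplication()))  # 4, and no extra precision
                                               # ever turns it into 42

Cranking up the precision makes each Calculator better at answering the 
question it was handed; it does nothing whatsoever to change which 
question that is.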

I now quote Damien Broderick and Adrian Tymes from a recent discussion on 
the Extropians list:

Damien Broderick wrote:
 >
 > It's a *super-intelligence*, see, a constructed mind that
 > makes the puny human *insignificant* by comparison--so what *else* is it
 > going to do except get trapped immediately by a really dumb pun into
 > turning the cosmos into smiley faces?

Adrian Tymes wrote:
 >
 > [...]
 > the capacity for SIs to overcome their optimization
 > functions and decide on new ones - for example, the
 > paperclip maximizer who would realize that paperclips
 > only have meaning if there's something for them to
 > clip, and other sentient units for the convenience of
 > a clip to serve.  (Unless you propose that a SI would
 > not strive to understand why it does what it does,
 > which would seem to strongly interfere with any
 > capability for self-improvement.)

Funny how natural selection hasn't picked a different optimization 
criterion than reproductive fitness.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


