[extropy-chat] The athymhormic AI

Eliezer S. Yudkowsky sentience at pobox.com
Tue Mar 29 20:39:25 UTC 2005


Rafal Smigrodzki wrote:
> Last week I commented here on the low likelihood of an AI designed as a pure
> epistemic engine (like a cortex without much else) turning against its owners,
> a conclusion I derived from the presence of complex circuitry in humans
> devoted to producing motivation and a goal system.
> 
> Now I have found more about actual neurological conditions in which this
> circuitry is damaged, resulting in reduced volition with preserved mentation.
> Athymhormia, as one form of this disorder is called, is caused by interruption
> of the connections between the frontopolar cortex and the caudate, the
> subcortical circuit implicated in sifting through motor behaviors to find the
> ones likely to achieve goals. An athymhormic person loses motivation even to
> eat, despite still being able to feel hunger in an intellectual, detached
> manner. At the same time he has essentially normal intelligence if prodded
> verbally, thanks to the preservation of the cortex itself and to connections
> from other cortical areas that circumvent the basal ganglia.
> 
> I would expect that the first useful general AI will be athymhormic, at least
> mildly so, rather than Friendly. What do you think, Eliezer?

Utilities play, oh, a fairly major role in cognition.  You have to 
decide what to think.  You have to decide where to invest your computing 
power.  You have to decide the value of information.
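
As a toy illustration of what "deciding where to invest your computing 
power" could mean in practice - a minimal sketch in Python, with 
invented questions, stakes, probabilities, and costs, not a real 
cognitive architecture:

    # Pick which question to think about next by the expected utility
    # of computing on it (all numbers below are illustrative).
    candidates = [
        # (question, utility at stake, est. chance more thought changes the decision)
        ("route to work",       1.0, 0.05),
        ("investment choice", 100.0, 0.10),
        ("idle trivia",         0.1, 0.50),
    ]

    COST_PER_STEP = 0.5  # utility cost of one unit of thinking

    def value_of_computation(stakes, p_change):
        # If further thought flips the decision with probability p_change,
        # the expected gain is roughly p_change * stakes, minus the cost.
        return p_change * stakes - COST_PER_STEP

    best = max(candidates, key=lambda c: value_of_computation(c[1], c[2]))
    print("Think about:", best[0])  # -> investment choice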

Athymhormic patients seem to have essentially normal intelligence if 
prodded verbally?  This would imply that for most people, including 
these patients, conscious-type desires play little or no role in 
deciding how to think - they do it all on instinct, without deliberate 
goals.  If I contracted athymhormia, would I lose my desire to become 
more Bayesian?  Would I lose every art I deliberately employ to perfect 
my thinking in the service of that aspiration?  Would I appear to have 
only slightly diminished intelligence, perhaps the intelligence of 
Eliezer-2004, on the grounds that everything I learned to do more than 
a year ago has already become automatic reflex?

If it's unwise to generalize from normal humans to AIs, is it really 
that much wiser to generalize from brain-damaged humans to AIs?  I don't 
know how to build an efficient real-world probability estimator without 
mixing in an expected utility system to allocate computing resources and 
determine the information value of questions.
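
One standard way to cash out "the information value of questions" is 
the expected value of perfect information: compare the best you can do 
acting now against the best you can do acting after the question is 
answered. A minimal sketch, again with invented numbers:

    # Expected value of perfect information (EVPI) for a yes/no question
    # in a two-action decision problem (all numbers illustrative).
    p_yes = 0.3  # current probability that the answer is "yes"

    # utilities[action][answer]
    utilities = {
        "act":  {"yes": 10.0, "no": -5.0},
        "wait": {"yes":  0.0, "no":  0.0},
    }

    def expected_utility(action, p):
        u = utilities[action]
        return p * u["yes"] + (1 - p) * u["no"]

    # Best achievable without asking the question:
    eu_now = max(expected_utility(a, p_yes) for a in utilities)

    # Best achievable if the question were answered before acting:
    eu_informed = (p_yes * max(u["yes"] for u in utilities.values())
                   + (1 - p_yes) * max(u["no"] for u in utilities.values()))

    print("EVPI =", eu_informed - eu_now)  # worth paying up to this to ask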

If humans behave differently, it's because natural selection gave us a 
crap architecture composed of a grab-bag of ad-hoc mechanisms, so that 
you can disable the Goal System for Eating while leaving the Goal 
System for Cognition intact - even though they really ought to be the 
same mechanism, and would be in any decently designed AI.

So my reply is that an AI designed with an architecture capable of 
athymhormia will be at such a cognitive disadvantage as to wash it out 
of the race to Singularity; or, if the AI somehow prospers, the 
athymhormia will wash out of its architecture.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


