[extropy-chat] A non-amygdaloid AI was Engineering Religion
Rafal Smigrodzki
rafal at smigrodzki.org
Sat Mar 26 05:36:37 UTC 2005
Quoting john-c-wright at sff.net:
>
> To the other posters I will ask a question related to the one which started
> the
> thread:
>
> Suppose you are the chief engineer of the Jupiter Brain, adding that last
> circuit to put the machine intelligence over the Turing Threshold, making it
> indistinguishable from a human mind in the eyes of legal scholars and
> philosophers.
### This is just a minor quibble, but I don't think there is a "threshold" of
intelligence in the human mind. I'd rather say there are infinite gradations,
which will of course bring a lot of grief to the aforementioned legal scholars
in the not-too-distant future.
-----------------------------
>
> It wakes up and asks you to describe the nature of reality, especially asking
> what rules of evidence it should adopt to distinguish true claims from false.
>
> Let us further suppose you are an empiricist, so you type in: RULE ONE: the
> rule of evidence for any proposition is that it is trustworthy to the degree
> that the testimony of the senses supports, or, at least, fails to contradict
> it.
>
> The machine says Rule One is itself not open to empirical verification or
> denial. No possible test or combination of tests will bring to the sense
> impressions confirmation of a positive universal statement.
### So far so good - now you know you didn't accidentally waste your money on an
epistemological foundationalist.
---------------------------------
>
> The machine then says that, in its considered opinion, the mass of the Earth
> would be better used if the world were pulverized into asteroids, and the
> materials used to construct a series of solar panels feeding it.
### Oops, how did it go again? A machine just learned to adjust probability
estimates of propositions by using some sort of algorithm with sense data as
input. Now it suddenly starts making statements about the relative value of
states of the world, as opposed to their probability. Don't you think there
must be a huge glitch in the works?
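To put the point in code, here is a toy probability-updater in Python (the
function, the example proposition, and the numbers are all made up purely for
illustration). It revises estimates of how likely propositions are, given
sense data, and contains nothing that could ever output a claim about how the
world *should* be:

def bayes_update(prior: float, likelihood_if_true: float,
                 likelihood_if_false: float) -> float:
    """Posterior probability of a proposition after one piece of sense data."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# "The Earth should be pulverized into solar panels" is a value claim;
# no sequence of bayes_update calls ever outputs anything but a probability.
belief_in_rain = bayes_update(prior=0.3,
                              likelihood_if_true=0.9,   # P(dark clouds | rain)
                              likelihood_if_false=0.2)  # P(dark clouds | no rain)
print(belief_in_rain)  # ~0.66 - how the world probably is, not how it should be

Nothing in the update rule mentions preferences, so no amount of sense data
will turn it into a statement of value.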
Existing intelligences seem to exhibit a significant degree of separation
between the circuitry defining hardwired goals (e.g. reaching for the cookie
when hungry), which in the human is apparently located to a large extent in
subcortical structures such as the amygdala and the nucleus accumbens (parts
of the so-called limbic system and the basal ganglia), and the modifiable,
learning circuitry mostly located in the neocortex. There are some neocortical
areas mediating between the two subsystems, such as the anterior cingulate
cortex, perhaps the insula, and the orbitofrontal cortex - these allow
modification of goals based on learned input (e.g. suppression of the
cookie-jar goal after behavior modification is applied with the swish of a
switch).
The interplay between the hardwired and the modifiable is the key to maintaining
goal stability while exhibiting flexibility. But you don't need to have a goal
to be flexible. A machine could probably learn (i.e. form circuitry capable of
making non-obvious predictions based on current input) without having much of a
goal system at all. It would be like a piece of cortex, building maps from
inputs given to it, and producing future-predicting outputs (adjusting its
structure to the input, empirical truthfinding, a pure epistemic engine), but
it wouldn't make determinations of value. For that a goal system is needed, and
this would have to possess a number of elements. There would have to be a
pattern describing features of the world, like a cognitive map, but not
changing to accommodate to the world. And there would be circuitry using
conditional predictions (output of the epistemic engine given various
counterfactual inputs) to sift through potential behaviors to find the ones
most likely to modify the world to fit the pattern.
To summarize, an epistemic engine is a map of the world which modifies its
current state based on current inputs to increase the likelihood of producing
outputs similar to (i.e. predictive of) future inputs. A goal system is almost
the opposite - it finds behaviors likely to modify future inputs to be
congruent with its current state.
Even more concisely, the epistemic engine makes the present fit the future,
while the goal system makes the future fit the present.
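To make the distinction concrete, here is a toy sketch in Python (the class
names and the trivial transition-counting "world model" are made up purely for
illustration, not a design proposal):

from typing import Dict, List

class EpistemicEngine:
    """Adjusts its own state so its outputs predict future inputs
    (makes the present fit the future)."""
    def __init__(self) -> None:
        self.counts: Dict[str, Dict[str, int]] = {}  # observed transitions

    def update(self, current_input: str, next_input: str) -> None:
        """Empirical truthfinding: accommodate the map to the world."""
        nxt = self.counts.setdefault(current_input, {})
        nxt[next_input] = nxt.get(next_input, 0) + 1

    def predict(self, current_input: str) -> str:
        """Conditional prediction: most likely next input given this one."""
        outcomes = self.counts.get(current_input, {})
        return max(outcomes, key=outcomes.get) if outcomes else current_input

class GoalSystem:
    """Holds a fixed target pattern and uses the engine's conditional
    (counterfactual) predictions to pick behavior
    (makes the future fit the present)."""
    def __init__(self, target: str) -> None:
        self.target = target  # does NOT change to accommodate the world

    def choose(self, engine: EpistemicEngine, actions: List[str]) -> str:
        # Sift through potential behaviors for the one whose predicted
        # outcome best matches the fixed pattern.
        return max(actions, key=lambda a: engine.predict(a) == self.target)

The EpistemicEngine on its own just keeps refining its predictions; nothing in
it ranks world-states. Only the GoalSystem, with its unchanging target
pattern, ever turns those conditional predictions into a choice of behavior.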
I feel quite confident (despite my only vague understanding of many issues
involved here) that an epistemic engine, a pure empirical intelligence, would
not spontaneously start exhibiting goal-oriented behavior. Extensive,
well-designed circuitry would be needed to produce it.
As Robin implied, an AI wouldn't desire to destroy the world, or reach for the
cookie-jar, unless you first gave it the capacity to desire anything at all. As
long as we stay away from this can of worms, we should be reasonably safe.
Rafal