[ExI] Some new angle about AI
aware at awareresearch.com
Sat Jan 2 20:09:13 UTC 2010
On Sat, Jan 2, 2010 at 11:22 AM, Lee Corbin <lcorbin at rawbw.com> wrote:
> Let's suppose for a moment that Gordon is right. In other
> words, internal mechanisms of the neuron must also be ...
Argh, "turtles all the way down", indeed. Then must nature also
compute the infinite expansion of the digits of pi for every soap
bubble as well?
> I want to step back and reexamine the reason that all of this
> is important, and how our reasoning about it must be founded
> on one axiom that is quite different from the other scientific ...
> And that axiom is moral: if presented with two simulations
> only one of which is a true emulation, and they're both
> exhibiting behavior indicating extreme pain, we want to
> focus all relief efforts only on the one. We really do
> *not* care a bit about the other.
This way too leads to contradiction, for example in the case of a
person tortured, then with memory erased, within a black box. The
morality of any act depends not on the **subjective** state of
another, which by definition one could never know, but on our
assessment of the rightness, in principle, of the action, in terms of ...
> For those of us who are functionalists (or, in my case, almost
> 100% functionalists), it seems almost inconceivable that the causal
> components of an entity's having an experience require anything
> beneath the neuron level. In fact, it's very likely that the
> simulation of whole neuron tracts or bundles would suffice.
Let go of the assumption of an **essential** consciousness, and you'll
see that your functionalist perspective is entirely correct, but that
it requires only enough detail, within context, to evoke the
appropriate responses in the observer. To paraphrase John Clark,
"swiftness" is not in the essence of a car, and the closer one looks
the less apt one is to find it. Furthermore (and I realize that John
didn't say /this/), a car displays "swiftness" only within an
appropriate context. But the key is understanding that this
"swiftness" (separate from formal descriptions of rotational velocity,
power, torque, etc.) is a function of the observer.
> But I have no way of going forward to address Gordon's
> question. Logically, we have no way of knowing
(and this is an example where logic fails but reason still prevails)
> that in
> order to emulate experience, you have to simulate every
> single gluon, muon, quark, and electron. However, we
> can *never* in principle (so far as I can see) begin to
> answer that question, because ultimately, all we'll
> finally have to go on is behavior (with only a slight
> glance at the insides).
> I merely claim that if Gordon or anyone else who doubts
> were to live 24/7 for years with an entity that acted
> wholly and completely human, yet who was a known simulation
> at, say, the neuron level, entirely composed of transistors
> whose activity could be single-stepped through, then Gordon
> or anyone else would soon apply the compassionate axiom,
> and find himself or herself incapable of betraying or
> inflicting pain on his or her new friend any more than
> upon a regular human.
And here, despite a ripple (more accurately a fold, or
non-monotonicity) and a veering off to infinity on one side of your
map of reality, you and I can agree on your conclusion.
Happy New Year, Lee.