[ExI] Some new angle about AI

Lee Corbin lcorbin at rawbw.com
Tue Jan 5 05:52:44 UTC 2010


Jef wrote at 1/2/2010 12:09 PM:

> [Lee wrote]
> 
>> Let's suppose for a moment that [the skeptical view] is right.
>> In other words, internal mechanisms of the neuron must also be
>> simulated.
> 
> Argh,"turtles all the way down", indeed.  Then must nature also
> compute the infinite expansion of the digits of pi for every soap
> bubble as well?

Well, as you know, in one sense nature does compute
infinite expansions---but not in a very useful sense.
It's annoying that nature exactly solves the Schrödinger
differential equation for the helium atom whereas
we cannot.
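
For concreteness (this is just the standard textbook
Hamiltonian for helium, in atomic units with a fixed nucleus):

    \hat{H} = -\tfrac{1}{2}\nabla_1^2 - \tfrac{1}{2}\nabla_2^2
              - \frac{2}{r_1} - \frac{2}{r_2} + \frac{1}{r_{12}}

The electron-electron term 1/r_{12} is what blocks any
closed-form solution; drop it and the equation separates
into two hydrogen-like problems we can solve exactly.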

>> ...if presented with two simulations
>> only one of which is a true emulation, and they're both
>> exhibiting behavior indicating extreme pain, we want to
>> focus all relief efforts only on the one. We really do
>> *not* care a bit about the other.
> 
> This way too leads to contradiction, for example in the case of a
> person tortured, then with memory erased, within a black box.

I do not see any contradiction here. I definitely do not
want that experience whether or not memories are erased,
nor, in my opinion, would it be moral for me to sanction
it happening to someone else. I consider the addition or
deletion of memories as not, per se, affecting the total
benefit to an entity over some interval. Yes, sometimes
memory erasure might make certain conditions livable,
and certain other memory additions might even produce
fond reminiscences.

> The morality of any act depends not on the **subjective** state of
> another, which by definition one could never know, but on our
> assessment of the rightness, in principle, of the action, in terms of
> our values.

Yes, we're always guessing (though with pretty good guesses,
in my opinion) about what others experience.

>> For those of us who are functionalists (or, in my case, almost
>> 100% functionalists), it seems almost inconceivable that the causal
>> components of an entity's having an experience require anything
>> beneath the neuron level. In fact, it's very likely that the
>> simulation of whole neuron tracts or bundles suffices.
> 
> Let go of the assumption of an **essential** consciousness, and you'll
> see that your functionalist perspective is entirely correct, but it
> needs only the level of detail, within context, to evoke the
> appropriate responses of the observer.  To paraphrase John Clark,
> "swiftness" is not in the essence of a car, and the closer one looks
> the less apt one is to find it.  Furthermore (and I realize that John
> didn't say /this/), a car displays "swiftness" only within an
> appropriate context.  But the key understanding is that this
> "swiftness" (separate from formal descriptions of rotational velocity,
> power, torque, etc.) is a function of the observer.

But this makes it sound to me as though you're going right
back to a "subjective" consideration, this time located
in the mind of an observer. So if A and B are
your observers, then whether or not true suffering is
occurring to C is a function of A or B?

> Happy New Year, Lee.

Thanks. Happy New Year to you too!

Lee



