[ExI] Do digital computers feel?

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Mon Jan 2 02:13:50 UTC 2017


On Fri, Dec 30, 2016 at 10:01 AM, Jason Resch <jasonresch at gmail.com> wrote:

>
>
> On Fri, Dec 30, 2016 at 1:14 AM, Rafal Smigrodzki <
> rafal.smigrodzki at gmail.com> wrote:
>
>>
>>
>> On Tue, Dec 27, 2016 at 12:03 PM, Jason Resch <jasonresch at gmail.com>
>> wrote:
>>
>>>
>>> If infinities are relevant to mental states, they must be irrelevant to
>>> any external behavior that can be tested in any way. This is because the
>>> holographic principle places discrete and finite bounds on the amount of
>>> information that can be stored in a region of space of finite volume.
>>> Even if there is infinite information in your head, no physical process
>>> (including you) can access it.
>>>
>>
>> ### Indeed, this is a valuable insight. But you could still have
>> qualitative but inaccessible (to other observers) differences between the
>> mental states realized on finite machines vs. ones implemented in
>> (putatively) infinite physics.
>>
>> ---------------------------------------------------
>>
>
> What would be accessing this information and having these perceptions
> then? It seems to me you would need some "raw perceiver" which itself is
> divorced entirely from the physical universe. Can there be perceptions that
> in theory can have no effect on behavior whatsoever? Not even in detectable
> differences in neuronal behavior or positions of particles in the brain?
>

### Yes, precisely. I, the analog-implemented Rafal, have qualia and say
so, but an almost perfectly copied digital Rafal might be a P-zombie: he
might have no qualia and yet say he does, without even lying, being unable
to perceive the presence or absence of qualia. If qualia are a correlate of
information processing, without causal involvement in the process, then one
could imagine pairs of objects that perform equivalent operations but
differ in the presence of qualia.
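For scale, the holographic bound quoted above can be made numeric. A
back-of-the-envelope sketch in Python, assuming the standard holographic
entropy formula S = A / (4 * l_p^2) nats and an illustrative 10 cm radius
for a head-sized region:

    import math

    # Holographic bound: the entropy inside a region is at most
    # A / (4 * l_p^2) nats, where A is the area of the enclosing
    # surface and l_p is the Planck length.
    l_p = 1.616e-35              # Planck length, meters
    r = 0.1                      # illustrative head-sized radius, meters

    area = 4 * math.pi * r**2    # enclosing surface area, m^2
    nats = area / (4 * l_p**2)   # maximum entropy, nats
    bits = nats / math.log(2)    # converted to bits

    print(f"{bits:.2e} bits")    # ~1.74e68 bits - enormous, but finite

So whatever a brain is doing, only a finite (if enormous) number of bits of
it can ever be read out by any physical process.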

I am not saying that digital minds and analog minds definitely differ in
their qualia. I am merely confused about how the identity of indiscernibles
applies to counting subjective experiences across multiple runs of the same
digital simulation - is it one experience per run, or one experience for
all possible runs of that simulation?
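
Part of what makes the question sharp: a digital simulation is
deterministic, so two runs of it are bit-for-bit the same mathematical
object. A minimal sketch (the hash-chain "mind" and the seed are toy
stand-ins, not a claim about real minds):

    import hashlib

    def simulate(seed: bytes, steps: int) -> list:
        """Toy stand-in for a digital mind: a deterministic state sequence."""
        state = seed
        trace = []
        for _ in range(steps):
            state = hashlib.sha256(state).digest()  # deterministic transition
            trace.append(state)
        return trace

    run1 = simulate(b"rafal", 1000)
    run2 = simulate(b"rafal", 1000)
    print(run1 == run2)  # True: the two runs are indiscernible bitstrings

The two runs differ in nothing that the identity of indiscernibles could
grab onto, which is exactly what makes the counting question above hard.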

It's a riddle, and I invite you to give me answers - as I said above, I am
genuinely confused.
 -------------------------

>
>> ### I have always considered myself a computationalist, but recently,
>> thinking about the identity of indiscernibles as applied to finite
>> mathematical objects simulating mental processes, I became confused. I
>> think I am still a computationalist, but a mildly uneasy one. At least,
>> if digitally simulated human minds are P-zombies, it won't hurt to be
>> one, so I still intend to get uploaded ASAP.
>>
>>
> What does your unease come from? Is it the uncertainty over whether the
> brain is infinite or finite? I think even if it is finite there is reason
> to be uneasy over uploading: the question of whether the functional
> substitution captures the necessary level. The concept of a substitution
> level is defined and explored in this paper:
> http://iridia.ulb.ac.be/~marchal/publications/CiE2007/SIENA.pdf
>
>
### I am uneasy because I imagine simple mathematical objects (i.e. things
that can be computed and manipulated by finite digital computers) as
existing in a part of the mathematical realm that is separate from our
world. There is nothing breathing fire into the equations of that realm,
and digital simulations are reducible to objects in that realm. Our realm,
which I believe is also a form of mathematics, differs in a way that I find
difficult to describe, but it does feel qualitatively different from
anything I could ascribe to mere digital objects.

-----------------------


> I think the matter of the substitution level and the importance of it is
> what Ned Block captured in his Blockhead thought experiment (
> https://en.wikipedia.org/wiki/Blockhead_(computer_system) ), where his
> brain was replaced with a lookup table. This can replicate external
> behaviors, but it is an entirely different function from one that actually
> implements his mind, and thus it may be a zombie or zombie-like.
>
>
### Even here we get into baffling issues. To generate that lookup table
you actually have to run googolplexes of minds through googolplexes of
conversations and write down the bitstrings they generate. You can't avoid
that - the "sensible" responses are only sensible because a mind does some
thinking, so you have to have minds of some sort, digital or analog, go
through all possible conversations to separate the sensible bitstrings from
the googolplex-to-the-googolplexth-power stack of all possible bitstrings.
In other words, to make the lookup table you need to precompute all
possible conversations.
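
The structure of that argument, as a minimal sketch (mind_reply is a toy
stand-in for whatever actually does the thinking; a real Blockhead table
would be astronomically larger):

    from itertools import product

    ALPHABET = "01"  # toy "conversations" are short bitstrings
    MAX_LEN = 8      # kept tiny; Blockhead needs every possible conversation

    def mind_reply(prompt: str) -> str:
        """Toy stand-in for a mind thinking about a prompt."""
        return prompt[::-1]  # the thinking, however trivial, happens here

    # Precomputation stage: every possible prompt is pushed through a mind.
    # If thinking produces qualia, this is the stage where they would occur.
    table = {}
    for n in range(1, MAX_LEN + 1):
        for prompt in map("".join, product(ALPHABET, repeat=n)):
            table[prompt] = mind_reply(prompt)

    # Lookup stage: behaviorally identical, but no thinking happens.
    print(table["0011"])  # "1100", same as mind_reply("0011"), by construction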

So, where do the conversation-related qualia occur? During the
precomputation stage? Or during lookup? Or both? Or neither?

I am pretty sure that if you make all possible physically existing humans
and have them hold all possible conversations, there will be a lot of
qualia happening in the precomputation stage, and none in the lookup stage.
What qualia are generated by using digital simulations of all possible
humans, I don't know. As I mentioned above, I am confused.

I am still a computationalist. I think digital simulations of appropriate
quality should feel qualia, identity of indiscernibles be damned. But I am
not sure.

Rafal