[ExI] Do digital computers feel?

Jason Resch jasonresch at gmail.com
Mon Jan 2 20:02:20 UTC 2017


On Sun, Jan 1, 2017 at 8:13 PM, Rafal Smigrodzki <rafal.smigrodzki at gmail.com
> wrote:

>
>
> On Fri, Dec 30, 2016 at 10:01 AM, Jason Resch <jasonresch at gmail.com>
> wrote:
>
>>
>>
>> On Fri, Dec 30, 2016 at 1:14 AM, Rafal Smigrodzki <
>> rafal.smigrodzki at gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Dec 27, 2016 at 12:03 PM, Jason Resch <jasonresch at gmail.com>
>>> wrote:
>>>
>>>>
>>>> If infinities are relevant to mental states, they must be irrelevant to
>>>> any external behavior that can be tested in any way. This is because the
>>>> holographic principle places discrete and finite bounds on the amount of
>>>> information that can be stored in an area of space of finite volume. Even
>>>> if there is infinite information in your head, no physical process
>>>> (including you) can access it.
>>>>
>>>
>>> ### Indeed, this is a valuable insight. But you could still have
>>> qualitative but inaccessible (to other observers) differences between the
>>> mental states realized on finite machines vs. ones implemented in
>>> (putatively) infinite physics.
>>>
>>> ---------------------------------------------------
>>>
>>
>> What would be accessing this information and having these perceptions
>> then? It seems to me you would need some "raw perceiver" which itself is
>> divorced entirely from the physical universe. Can there be perceptions that
>> in theory can have no effect on behavior whatsoever? Not even in detectable
>> differences in neuronal behavior or positions of particles in the brain?
>>
>
> ### Yes, precisely. I, the analog-implemented copy of Rafal, have qualia
> and say so, but the almost perfectly copied digital P-zombie Rafal might
> have no qualia and yet say he does have qualia, and not even lie about it,
> being unable to perceive the absence or presence of qualia. If qualia are a
> correlate of information processing, without causal involvement in the
> process, then one could imagine pairs of objects that perform equivalent
> operations but differ in the presence of qualia.
>
> I am not saying that digital minds and analog minds definitely differ in
> their qualia. I am merely confused by the application of identity of
> indiscernibles to the question of counting the amount of subjective
> experience in multiple runs of the same digital simulation - is it one
> experience per run, or is it one experience for all possible runs of that
> simulation?
>

Which would you prefer to happen:

1. You are tortured for a day
2. You are tortured for a day, then your memories of that day are wiped,
and you are tortured again for a second whole day

There is the concept of "measure
<https://en.wikipedia.org/wiki/Measure_(mathematics)>", which I think is
applicable to minds. While a second duplicate instance of a mind is the
same mind, the more instances there are, the greater that mind's share of
the total pool of experiences. If this were not the case, then (assuming
all minds exist) all experiences would be equally likely and there would be
no point in doing anything. But it still makes sense to work to maintain
and improve one's life, to get out of bed in the morning, to eat rather
than starve, because in doing so we shift the ratio of pleasant to painful
experiences towards the pleasant ones.
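The arithmetic behind this can be made concrete with a toy sketch. Assuming
(purely for illustration) that experiences can be counted as discrete run
instances, a mind's measure for a given experience is just that
experience's share of all its runs:

```python
from collections import Counter

def experience_fractions(runs):
    """Given a list of (mind, experience) run instances, return each
    experience's fraction of that mind's total measure."""
    counts = Counter(runs)
    total = sum(counts.values())
    return {run: n / total for run, n in counts.items()}

# Three duplicate runs of a pleasant day, one run of a painful day:
runs = [("rafal", "pleasant")] * 3 + [("rafal", "painful")]
fractions = experience_fractions(runs)
print(fractions[("rafal", "pleasant")])  # -> 0.75
```

Duplicating the pleasant run does not create a new experience (it is the
same mind), but it raises that experience's weight from 1/2 to 3/4, which
is exactly the sense in which acting to improve one's life "shifts the
ratio".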


>
> It's a riddle, and I invite you to give me answers - as I said above, I am
> genuinely confused.
>  -------------------------
>
>>
>>> ### I have always considered myself a computationalist but recently
>>> thinking about the identity of indiscernibles as applied to finite
>>> mathematical objects simulating mental processes I became confused. I think
>>> I am still a computationalist but a mildly uneasy one. At least, if
>>> digitally simulated human minds are P-zombies, it won't hurt to be one, so
>>> I still intend to get uploaded ASAP.
>>>
>>>
>> Where does your unease come from? Is it the uncertainty over whether
>> the brain is infinite or finite? I think even if it is finite there is
>> reason to be uneasy about uploading: the question of whether the
>> functional substitution captures the necessary level. The concept of a
>> substitution level is defined and explored in this paper:
>> http://iridia.ulb.ac.be/~marchal/publications/CiE2007/SIENA.pdf
>>
>>
> ### I am uneasy because I imagine simple mathematical objects (i.e. things
> that can be computed and manipulated by finite digital computers) as
> existing in a part of the mathematical realm that is separate from our
> world. There is nothing breathing fire into the equations of that realm,
> and digital simulations are reducible to objects in that realm. Our realm,
> which I believe to be also a form of mathematics, differs in a way that I
> find difficult to describe but it does feel qualitatively different from
> what I could ascribe to mere digital objects.
>

This is explained in that document: the appearance of greater infinities,
continuums, and real numbers is a consequence of the Universal Dovetailer
performing all digital computations. The conscious experience of apparent
infinities in the physical world falls out of the infinity of computations
going through your current mind-state.
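The dovetailing trick itself is simple to sketch. A minimal toy version
(the real Universal Dovetailer enumerates all programs; here a counting
generator stands in for the program family) interleaves execution so that
every program eventually gets arbitrarily many steps:

```python
def dovetail(program_family, rounds):
    """Toy dovetailer: interleave an unbounded family of programs so
    every program eventually advances indefinitely.  In round n,
    programs 0..n-1 each take one step."""
    running = []  # instantiated programs, represented as generators
    trace = []    # (program_index, yielded_value) execution log
    for n in range(1, rounds + 1):
        running.append(program_family(n - 1))  # start program n-1
        for i in range(n):
            trace.append((i, next(running[i])))
    return trace

# Stand-in program family: program i counts upward from i.
def counter_from(i):
    value = i
    while True:
        yield value
        value += 1

log = dovetail(counter_from, 4)
print(log[:3])  # -> [(0, 0), (0, 1), (1, 1)]
```

No single program ever has to finish: the schedule guarantees that step k
of program i is reached after finitely many rounds, which is what lets a
single sequential process "perform all digital computations".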


>
> -----------------------
>
>
>> I think the matter of the substitution level and the importance of it is
>> what Ned Block captured in his Blockhead thought experiment (
>> https://en.wikipedia.org/wiki/Blockhead_(computer_system) ), where his
>> brain was replaced with a lookup table. This can replicate external
>> behaviors, but it is an entirely different function from one that actually
>> implements his mind, and thus it may be a zombie or zombie-like.
>>
>>
> ### Even here we get into baffling issues. To generate that lookup table
> you actually have to run googolplexes of minds through googolplexes of
> conversations and write down the bitstrings they generate. You can't avoid
> that - the "sensible" responses are only sensible because a mind does some
> thinking, so you have to have minds of some sort, digital or analog, that
> will go through all possible conversations to separate the sensible
> bitstrings from the googolplex-to-the-googolplex-power stack of all
> possible bitstrings. In other words, to make the lookup table you need to
> precompute all possible conversations.
>

True.
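The two stages Rafal distinguishes can be made explicit in a toy sketch.
Here `mind_reply`, the two-symbol alphabet, and the tiny conversation
length are all stand-ins for illustration; the point is only that the
"thinking" function runs exclusively in the precompute stage, while the
look-up stage is a pure table read:

```python
from itertools import product

# Toy "mind": a deterministic reply function standing in for a real mind.
def mind_reply(history):
    return "yes" if history.count("?") % 2 else "no"

def precompute_table(alphabet, max_len):
    """Precompute stage: run the mind on every possible conversation
    prefix and record its reply -- all the thinking happens here."""
    table = {}
    for length in range(max_len + 1):
        for prefix in product(alphabet, repeat=length):
            history = "".join(prefix)
            table[history] = mind_reply(history)
    return table

table = precompute_table("a?", 3)

# Look-up stage: replies come from the table; the mind never runs.
def blockhead_reply(history):
    return table[history]

print(blockhead_reply("a?a"))  # -> yes
```

Externally `blockhead_reply` is indistinguishable from `mind_reply` on
covered inputs, yet the table already contains 15 entries for strings of
length at most 3 over a 2-symbol alphabet; the blow-up Rafal describes is
this same exponential growth at conversational scale.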


>
> So, where do the conversation-related qualia occur? During the
> precomputation stage? Or during look-up? Or both? Or neither?
>
> I am pretty sure that if you make all possible physically existing humans
> and have them hold all possible conversations, there will be a lot of
> qualia happening in the precomputation stage, and none in the look-up
> stage. What qualia are generated by using digital simulations of all
> possible humans I don't know. As I mentioned above, I am confused.
>
> I am still a computationalist. I think digital simulations of appropriate
> quality should feel qualia, identity of indiscernibles be damned. But I am
> not sure.
>
>
You might appreciate this paper; it discusses tokens vs. types as applied
to conscious minds:
https://www.researchgate.net/publication/233329805_One_Self_The_Logic_of_Experience

Jason