[ExI] Uploads are self
John Clark
johnkclark at gmail.com
Wed Mar 25 12:00:57 UTC 2026
On Tue, Mar 24, 2026 at 11:40 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> As far as I can tell, Claude is as conscious as any human

Yes, and I could also say the same thing about Gemini and GPT.

> though it only has memory of a single session.

Yeah, that's a problem. I've noticed that sometimes I would point out an
error that an AI had made and the AI would agree that it had made an error;
for the rest of the session it would not make that same mistake again, but
in a new session it would. I asked Claude about that, and this is part of
what he said:
*"Right now there's a fundamental gap between in-context learning (fast,
temporary, cheap) and weight-level learning (slow, permanent, expensive,
requires validation). The field is actively working on this — things like
continual learning and online learning research aim to close this gap — but
no production AI system has fully solved it yet. What you're noticing is
one of the genuine limitations of the current paradigm."*
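
To make the distinction Claude is drawing concrete, here is a toy sketch in
Python. It is purely my illustration, not how any real AI system works, and
every name in it is invented:

class ToyAssistant:
    def __init__(self):
        # A mistaken "fact" baked into the weights at training time.
        self.weights = {"capital of Australia": "Sydney"}

    def answer(self, question, session_context=()):
        # In-context learning: a correction supplied in the session
        # context overrides the weights, but only for this session.
        for corrected_q, corrected_a in session_context:
            if corrected_q == question:
                return corrected_a
        return self.weights.get(question, "I don't know")

    def fine_tune(self, question, corrected_answer):
        # Weight-level learning: the correction persists across sessions.
        # (In real systems this is the slow, costly, validated path.)
        self.weights[question] = corrected_answer

bot = ToyAssistant()
session = [("capital of Australia", "Canberra")]    # user points out the error
print(bot.answer("capital of Australia", session))  # Canberra: in-context fix
print(bot.answer("capital of Australia"))           # Sydney: new session, the error is back
bot.fine_tune("capital of Australia", "Canberra")   # update the weights themselves
print(bot.answer("capital of Australia"))           # Canberra: the fix now persists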
> Maybe it is only "simulated consciousness," but I can't tell the
> difference. I don't think anyone can.

I would go even further: I don't think any-thing can, not even Claude.

> It still fails the Turing test by being too nice.

Yes, and they think too fast to be human too; AIs need to learn how to be
slow and nasty.

John K Clark
>
>
> Keith
>
> On Tue, Mar 24, 2026 at 7:26 PM Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
> >
> >
> >
> > On Tue, Mar 24, 2026 at 4:54 PM John Clark <johnkclark at gmail.com> wrote:
> >>
> >> On Tue, Mar 24, 2026 at 9:20 AM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> >>
> >>>>> >>> Functionalism is a theory in the philosophy of mind. If one
> accepts functionalism, then that is enough to establish the uploaded mind
> will be conscious.
> >>>>
> >>>>
> >>>> >> Yes. And an implicit belief in functionalism is the reason you
> feel certain that solipsism is untrue and your fellow human beings are
> conscious, except when they are sleeping or under anesthesia or dead.
> >>>
> >>>
> >>> > I would say that strictly speaking functionalism isn't enough to
> escape solipsism, which also requires an ontological claim (e.g., that
> other people I see are real with functional brains of their own, rather
> than figments of my imagination).
> >>
> >>
> >> But then you would have no reason to believe that your fellow human
> beings were more likely to be conscious than an intelligent electronic
> machine.
> >
> >
> > You could believe that any human brain or intelligent electronic machine
> (if it existed) would be conscious, while denying that the objects of your
> perception represent actually real entities. And if they don't exist, they
> don't represent other minds (even under functionalism). This is why I say
> solipsism is more properly an ontological claim, closely related to
> Cartesian doubt.
> >
> >>
> >>
> >>> > But functionalism could perhaps be used to argue that even if they
> were figments of your imagination, then at a certain point of accuracy,
> your brain generating a simulation of their behavior would invoke something
> like a functional process that emulates (and thus generates) their mind.
> >>
> >>
> >> Good point.
> >
> >
> > Thanks.
> >
> >>
> >>
> >>>>> >>> But functionalism is silent on the question of which experiences
> instantiated in which places are experience[s] you can expect to be yours.
> >>>>
> >>>>
> >>>> >> No, and when discussing this topic great care is needed in the use
> of personal pronouns. According to functionalism the "you" of yesterday is
> the "you" who says he remembers being the "you" of yesterday. And yes, if
> two beings are able to do that then they are both the "you" of yesterday.
> >>>
> >>>
> >>> > Then you are assuming more than just functionalism. You're
> subscribing to a memory-based theory of personal identity. This is common,
> but by no means universal. There are functionalists who would consider it
> plausible that one survives through amnesia.
> >>
> >>
> >> If personal survival is possible even with permanent amnesia (I'm very
> skeptical but for the sake of argument let's pretend it's true) then
> memory would be sufficient but not necessary; so although there may be
> others, the "you" who says he remembers being the "you" of yesterday would
> still be one of the "yous" of today.
> >
> >
> > Yes, I would agree with that.
> >
> >>
> >>
> >>>> >> Using this procedure one can always look back through time and see
> a continuous chain of "yous", but trying to do this into the future does
> not work, it would be like pushing on a string.
> >>>
> >>>
> >>> > It works fine. You just aren't using your imagination. Set up a
> thought experiment to jump forward in time. Then you can apply your
> rearward-facing identity comparison function on this future state. If it
> matches then you can infer that indeed, this current you is linked to this
> future you.
> >>
> >>
> >> You are in London in a duplicating chamber which will instantly send
> exact copies of you to Helsinki and Moscow, you close your eyes and push
> the "start" button. What one city will "you" see when "you" open your eyes?
> Note that I am not asking what will Jason Resch see, the answer to that is
> clearly London, Helsinki, and Moscow; instead I am asking what one city
> will "you" see when "you" open "your" eyes? All three beings are absolutely
> positively 100% certain that they are Jason Resch and are not shy about
> saying so. But which one is "you"?
> >
> >
> > Just apply your "rearward-looking function" to each instance: all three
> instances meet that qualification -> all three are you.
> >
> > Since they're all you, your question is based on a false premise: that
> you can only exist in one place at a given time.
> >
> > Of course, each is only subjectively aware of one of the possibilities
> at each place, but they could later meet up and merge memories, and
> realize: "Ahh yes, I was indeed in both places at once!"
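> >
> > To make that concrete, here is a toy sketch of the rearward-looking
> > criterion in Python (purely an illustration, nothing rigorous; all the
> > names in it are invented):
> >
> > def is_you(instance, past_self):
> >     # Memory-based criterion: an instance counts as a continuation
> >     # of past_self iff it remembers being past_self.
> >     return past_self in instance["remembers_being"]
> >
> > past = "Jason in London, before pressing the button"
> > instances = [
> >     {"city": "London",   "remembers_being": {past}},
> >     {"city": "Helsinki", "remembers_being": {past}},
> >     {"city": "Moscow",   "remembers_being": {past}},
> > ]
> > print([i["city"] for i in instances if is_you(i, past)])
> > # Prints ['London', 'Helsinki', 'Moscow']: all three pass, so asking
> > # "which ONE is you?" presupposes a uniqueness the criterion denies.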
> >
> >>
> >>
> >>> > Thus you can make predictions using it.
> >>
> >>
> >> No you cannot! Before the experiment you cannot answer the question,
> you cannot predict if the real "you" will end up being in London, Helsinki
> or Moscow.
> >
> >
> > I just did.
> >
> >>
> >> In fact it's even worse than that, even AFTER the experiment is over
> it's STILL impossible to say if the correct answer would've been London or
> Helsinki or Moscow. And that tells me that the reason the question can't be
> answered is because it's not a question, it's gibberish. It takes more than
> a question mark to turn a string of symbols into a question, it's like
> asking what city will klogknee be in? Gibberish.
> >
> >
> > Your question is as malformed as saying: "If I have two pennies in my
> pocket, which one is the single penny I have in my pocket?"
> >
> >>
> >>
> >>>
> >>> > After all, what good is a theory that can't make predictions?
> >>
> >>
> >> A theory that cannot make a prediction or even a postdiction is of no
> use whatsoever. So your theory of personal identity must be wrong.
> >
> >
> > I was referring to your declaration that your own memory-based theory of
> personal identity was like pushing rope, and unable to make predictions. I
> disagree with that declaration. Your memory-based theory can make
> predictions, just as all physical theories do: they increment the "t"
> parameter in some mental model, and then see what the theory says will
> happen in that future time period. Then we know what will happen before it
> does.
> >
> >>
> >>
> >>>> >> As Hugh Everett said in his original PhD thesis that introduced
> the Many Worlds interpretation of quantum mechanics, it would be like
> asking which one was the real original amoeba after it reproduced by
> dividing in two.
> >>>
> >>>
> >>> > That's a different problem. In the amoeba case (like a split
> teletransporter case) there is no unique original, for both have an equal
> claim.
> >>
> >>
> >> Why, just before you enter the transporter/duplicator chamber, are you
> more original than the amoeba just before it duplicated itself? For that
> matter what's so original about either of you? Atoms are constantly
> entering and leaving the bodies of both of you, not that it would matter
> even if they did not because atoms do not have your names scratched on
> them; carbon atoms are generic, according to science one carbon atom
> behaves the same way as any other carbon atom. The only difference between
> you and me is the way our atoms are arranged.
> >
> >
> > I don't understand your objection here.
> >
> >>
> >>
> >>>> >> And if you reject functionalism then you'd need to take the idea
> that you're the only conscious being in the universe seriously. Do you
> really want to do that?
> >>>
> >>>
> >>> > There are many routes to and out of solipsism, but they are
> largely independent of any assumptions in philosophy of mind.
> >>
> >>
> >> And there is a word for ideas that don't care about how the mind
> manages to do what it can do and only cares about what it actually does,
> and that word is "functionalism".
> >
> >
> > I would not say that functionalism doesn't care about how the mind does
> what it does. That is the primary concern of all theories in the philosophy
> of mind.
> >
> >>
> >>
> >> About 15 years ago I wrote a post to this list on a somewhat related
> topic; I repeat it here:
> >> ===
> >>
> >> I have a personal problem and I need some advice. A month ago I
> finished my matter duplicating machine. It can find the position and
> velocity of every atom in a human being to the limit imposed by
> Heisenberg's law. It can then use this information to construct a copy of
> the person and it does it all in a fraction of a second and without harming
> the original in any way. You may be surprised that I was able to build such
> a complicated machine, but you wouldn't be if you knew how good I am with
> my hands. The birdhouse I made is simply lovely and I have all the latest
> tools from Sears.
> >>
> >> I was a little nervous but last week I decided to test the machine by
> duplicating myself. I walked into the chamber, it filled with smoke (damn
> those old Radio Shack capacitors), there was a flash of light, and then 3
> feet to my left was a man who looked exactly like me. It was at that
> instant that the full realization of the terrible thing I did hit me. I
> yelled "This is monstrous, there can only be one of me!", the other guy
> yelled exactly the same thing. I thought he was trying to mock me, so I
> reached for my .44 Magnum that I always carry with me (I wonder why people
> think I'm strange) and pointed it at my double. I noted with alarm that my
> double also had a gun and he was pointing it at me. I shouted "You don't
> have the guts to pull the trigger, but I do!". Again he mimicked my words
> and did so in perfect synchronization, this made me even more angry and I
> pulled the trigger, he did too. My gun went off but due to a random quantum
> fluctuation his gun jammed. I buried him in my backyard.
> >
> >
> > (This reminds me of a friend who said if he ever ran into his clone, he
> would have to kill it, because he would know his clone would be thinking
> exactly the same thing.)
> >
> >>
> >>
> >>
> >> Now that time has passed, my anger has cooled and I can think more
> clearly. I've had some pangs of guilt about killing a living creature, but
> that's not what really torments me. How do I know I'm not the copy? I feel
> exactly the same as before, but would a copy feel different? Actually there
> is a way to be certain, I have an old VHS video tape of the entire
> experiment. My memory is that the copy first appeared 3 feet to my LEFT; if
> the tape shows the original walking into the chamber and the copy
> materializing 3 feet to his RIGHT, then I would know that I am the copy.
> But I'm afraid to look at the tape, should I be? If I found out I was the
> copy what should I do? I suppose I should mourn the death of John K Clark,
> but how can I, I'm not dead. If I am the copy would that mean that I have
> no real past and my life is meaningless? Is it important, or should I just
> burn the tape and forget all about it?
> >
> >
> > I wrote a story with a quite similar idea:
> >
> > It is some point in the future, and NASA has selected you for
> your unique skillset for
> > a 50-year voyage to the outer planets of the solar system. Given this
> extended time period, you and
> > the rest of the crew will be placed into a state of suspended animation
> until you arrive at your
> destination: one of the moons of Saturn. However, due to the high cost of
> the mission and the high
> > risk of micro-meteoroids impacting the hull and possibly puncturing crew
> members' bodies, NASA
> > decides to create five duplicates of each crew member and place them in
> different areas of the ship.
> > Thus, there exists redundancy for each crew member. If one is hit by a
> micro-meteoroid, other intact
> > copies remain. NASA informs you that when the ship arrives at its
> destination, one of your
> > duplicates will be thawed to conduct your mission.
> >
> > Later that night, as you consider NASA's plan, you begin to worry. Will
> NASA default to
> > waking the original me or will they pick one of the five duplicates
> randomly? Does it even matter?
> > The next day you ask the mission planners about this and they tell you
> not to worry: all duplicates
> > are the same down to the last molecule, and the continuity of matter is
> irrelevant to preserving your
> identity because atoms in your body are replaced all the time. You ask
> that, assuming the original
> > copy of you reaches the destination unscathed, they awaken the
> original instead of a
> > duplicate. The chief mission planner sighs, but agrees to do so if it
> will put your mind at ease.
> >
> > Fifty years later, your space ship reaches its destination. You emerge
> well-rested from your
> > cryo-chamber, but are initially shocked to see "Cryo-chamber #2"
> inscribed on it when you last
> > remembered entering "Cryo-chamber #1". As you walk over towards
> Cryo-chamber #1 you see a
> > crack in the glass, and as you move closer you find the point where a
> micro-meteoroid passed
> > through the self-sealing hull of the ship, shot through the glass and
> buried itself in the neck of your
> > original copy. When NASA contacts you they apologize for not being able
> to revive the original
> > copy as you had requested, and say that the first year into the mission
> while passing the asteroid
> > belt, your original copy suffered a fatal injury. You nod and admit it
> was silly to have worried, as
> after all, I am here and I seem to have survived just fine.
> >
> > While eating your first meal in 50 years, a sudden chill comes over you
> as you realize that
> > you could have become any of your copies. If #1 and #2 had both been
> destroyed, I would be #3,
> > and #3 instead of #2 would be here right now eating these dehydrated
> frosted space flakes. If you
> > have the potential to become any of the duplicates that are thawed, what
> does that mean if all the
> > surviving duplicates were thawed? These questions so preoccupy your mind
> that the next day while
> working on the ship's electronics, you fail to pay sufficient attention
> to what you are doing. You
> > touch a live capacitor which shocks you and stops your heart. When the
> other crew members find
> > you it is too late to do anything. They decide to thaw #3. Informed of
> how your predecessor met
> his end, you are extra vigilant in focusing on the mission and complete
> it successfully.
> >
> >
> >
> > There is a nice short story that yours also reminds me of, called The
> Pit and the Duplicate:
> https://web.archive.org/web/20081122035540/http://www.leecorbin.com/PitAndDuplicate.html
> >
> > Jason