[ExI] Fwd: Strong AI Hypothesis: logically flawed
Dan
danust2012 at gmail.com
Tue Sep 30 23:37:47 UTC 2014
> On Tuesday, September 30, 2014 2:53 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> The problem is that it is impossible to meaningfully
> distinguish between a copy that is really you and a
> copy that only has the delusional belief that it is
> really you.
This is not necessarily the case, though, if one takes possible worlds as real. The you in another possible world is distinguishable from the you in this world because that you is in another possible world and not this one. Whether there's some sense in which you're both part of the same overall you is the point of contention, but the yous of other possible worlds are not deluded copies of you, nor you of them. They'd also have slightly (in some cases infinitesimally slightly) different relations to their worlds. The you that ends up, say, as prime minister of France, or the you who doesn't believe in other possible worlds, wouldn't confuse himself with the you I'm discussing this with, would he?
> Suppose you are informed that you have a disease that
> causes you to die whenever you fall asleep at night so
> that the person who wakes up in the morning is a
> completely different Dan Ust who shares your memories.
> This has been happening every day of your life, but you
> have only just found out about it. Would this information
> worry you or make any difference to how you live your life?
The issue, though, is that cooking up cases where worrying would seem to offer no consolation -- and I think many people would worry every single night, though they'd merely get used to it, just as they'd get used to being blinded or having their faces disfigured; but that's not an argument for blinding or disfiguring people, is it? -- is not the same as proving that the you who happens to live in another universe really picks up where you left off, or that your extinction here should be of no concern to you. All of this merely postulates ways to get around a very real concern.
Also, these are epistemic issues that don't really settle what is the case. You might not know (or might not yet know, given that the problem might be tackled in the future) how to resolve these issues, but lacking a resolution doesn't erase the problem. Nor does merely adopting a resolution that seems uber-optimistic: that no one really dies or needs to worry.
This also doesn't really resolve the issue of whether strong AI is possible. I doubt it, but one can conceive of strong AIs being necessarily ruled out (in our world or, if you please, in all possible worlds). Thus, fantasizing that things might be different elsewhere doesn't guarantee just how they're different -- regardless of our ability to know.
(Finally, the usual treatment of modality is really to figure out just what is possible -- whether or not one accepts possible worlds as real -- and what's entailed by this. Roderick Long, in a similar discussion, clarified the different relations between the possible in various domains, and he believes some of the problem here is that people confuse metaphysical entailment with epistemic entailment, physical entailment, and so forth. I think it's worth considering.)
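(A rough sketch of that distinction in standard modal notation -- my gloss here, not Long's own formulation: let \Box_m p read "p is metaphysically necessary" and K p read "we know that p". Then:

    K p \to p                (knowledge is factive)
    K p \not\to \Box_m p     (knowing p doesn't make p metaphysically necessary)
    \Box_m p \not\to K p     (p can be necessary without our knowing it)

Running together the second and third is exactly the slide between epistemic and metaphysical entailment.)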
Regards,
Dan
Preview my latest Kindle book, "Born With Teeth," at:
http://www.amazon.com/gp/product/B00N72FBA2