[ExI] samantha vs singularity debate
msd001 at gmail.com
Thu Sep 16 01:34:19 UTC 2010
On Wed, Sep 15, 2010 at 4:53 PM, Gregory Jones <spike66 at att.net> wrote:
> Watching the interchange between Singularity Utopia and Samantha reminds me
> of the struggle between Achilles and Mr. Tortoise in Douglas Hofstadter's
> Gödel, Escher, Bach: An Eternal Golden Braid. Singularity Utopia has
> confidently proposed that we will have
> infinite this and that, do the impossible, etc. (no need for me to
> elaborate, you get the picture).
> Clearly the paradox I mentioned a few days ago cannot be avoided in that
> model. A post singularity infinite AI would master time travel, go back in
> time to before the horrifying wars of the 20th century (and every other
> century for that matter), explain to the people living then how to bring
> about the singularity, which causes the singularity to happen sooner,
> sparing all that unnecessary suffering.
> If that can happen, why didn't it? Or did it and we just don't know we are
> post singularity sims? If so, all our simulated suffering is intentional
> (or at least voluntary) on the part of some super AI, in which case the post
> singularity AI is at least partly evil, which breaks SU's model.
This iteration of the optimization process is better than all previous
attempts, but it is still bad enough that the exit condition has not been
met. The universe in which you currently find yourself may sit somewhere in
a sort of tree or recursion stack, and there is no guarantee that it is the
leading edge of any particular process. There are likely multiple
simultaneous processes. An earlier observation of the process optimizer
noted that subroutines inside miserable-universe calculations are
particularly good at evaluating utopian-universe situations (and vice
versa), so it becomes difficult to trace one's state in the universal
machine. "Now" seems to be the only point upon which we can all agree.
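For whatever it's worth, the metaphor above can be sketched as actual code. This is a toy illustration only, not a claim about how any such process would work; the function name, quality scale, improvement rule, and exit threshold are all invented for the example:

```python
# Toy sketch of the "recursion stack of universes" metaphor: each
# iteration spawns a somewhat better universe until an arbitrary exit
# condition (quality >= threshold) is finally met. Any universe on the
# stack below the final one is "better than all previous attempts" yet
# still not the leading edge of the process.

def optimize_universe(quality, depth=0, threshold=0.9):
    """Recursively spawn improved universes until quality >= threshold.

    Returns (final_quality, recursion_depth) so a caller can see how
    deep the stack grew before the exit condition held.
    """
    if quality >= threshold:  # exit condition met; recursion unwinds
        return quality, depth
    # Each attempt closes half the remaining gap to a perfect 1.0 --
    # better than the last, but possibly still "bad enough".
    improved = quality + (1.0 - quality) * 0.5
    return optimize_universe(improved, depth + 1, threshold)

final_quality, stack_depth = optimize_universe(0.1)
print(final_quality, stack_depth)
```

A universe partway down that stack (say depth 2 of 4) has no local way to tell it isn't the final one, which is roughly the untraceability point above.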