[ExI] samantha vs singularity debate
ablainey at aol.com
Wed Sep 15 23:28:11 UTC 2010
Of course I agree with you, but one point regarding the finite nature of the AI. If either an infinite or a finite AI were to develop in the future, I cannot see how it could time travel in its entirety. As such, only a finite part of it could come back to bring about the singularity, so this fragment may not be clever enough to cause every human to accept its direction, or to skip humans as you suggest. Perhaps the future AI is smart enough to know that all that is needed is a seed AI that will ultimately lead to itself. From an energy point of view this would be most efficient: why use more power to punch a hole in time, or send more of yourself back, than is needed? That would only decrease its future omnipotence.
From that starting point of an absolutely minimal seed AI, it would also make sense to send it to the point in time where society, memetics and technology were most ripe for the seed to take hold, which would again reduce the loss to the future AI.
From: Gregory Jones <spike66 at att.net>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Wed, 15 Sep 2010 22:48
Subject: Re: [ExI] samantha vs singularity debate
--- On Wed, 9/15/10, ablainey at aol.com <ablainey at aol.com> wrote:
>...This could tie in with the memes thread. Mankind would need to be at a
developmental state in which it would accept the guidance of the AI...
On the contrary sir. The AI is so clever it could cause any or every human to
accept its direction. Or skip the humans all together and just get online
(inline?) itself after everyone has gone to bed, and program up its own
embryonic or emergent self.
>... Perhaps the social catalyst was the wars?
No. See above.
>...Do you not feel that mankind is being memetically guided toward something?
I do not. For the past 9 yrs or so, it has felt to me more like we are
memetically wandering, groping in the darkness, vaguely understanding that
something fundamental is missing but knowing not what. I am thankful for fresh
and apparently young optimists like Singularity Utopia, but I do wish to point
out that overuse of the notion of infinite is riddled with paradox.
>...I do and it would appear that it has been going on for quite a long time.
Some say NWO, others the reds. Why not a post-singularity AI?
Al, the notion of an infinitely capable, infinitely benevolent AI is as
paradoxical as the notion of a god with the same characteristics, which caused me
eventually to stop believing in such a being. I never found a way out of that
paradox, and see no way out still.
I would counter-suggest a version of the Adkinsian model, whereby a future AI is
really really smart and uses most or all the available matter and energy, but is
not infinite. It would be really big and really good, perhaps appearing
infinite from our current point of view, but not infinite. It would be
incapable of some of our fondest desires, such as resurrecting long-dead loved
ones, for instance. But it could perhaps make a reasonable sim of those departed.
Such a being, compared to actual infinity (even aleph-naught infinity), would
still be zero.