[ExI] invisible singularity and SU
Gregory Jones
spike66 at att.net
Fri Sep 17 15:17:45 UTC 2010
--- On Fri, 9/17/10, BillK <pharos at gmail.com> wrote:
> ...
>
> Now, don't start believing the stuff you read on MySpace
> and Facebook! :)
> Remember that most of the dolly young blonds that have 'Friended'
> you there are really FBI agents...
Sure, but if one is an FBI agent oneself, then that feature could be a real turn-on. BillK, send your friends over to me. {8^D
> I'd guess that the 47 claim might be to avoid harassment from
> males...
But it would encourage harassment from the over-47 male crowd. {8-]
> ...But if I was 47, I don't think I would get wildly exuberant
> about a Singularity in 2045 - when I would be 82 - if I made it
> that far... BillK
Ja, but I might go the opposite direction with that. If someone is 47, that might explain the insistence on specifying the 2045 date: reasoning wishfully that if the singularity comes much later than that, it matters little. Singularity predictions are heavily influenced by wishful thinking.
Regarding picking dates for when the singularity will occur: this is an example of something Eliezer patiently tried to explain here, mostly unsuccessfully I fear, or if he did succeed, we have since dropped the ball on it. His notion, which I find compelling, is that we are not waiting on faster computers, more extensive networks, or some secret team of DARPA researchers writing bigger and better expert systems.
Rather, the hard-takeoff AI scenario depends on an entirely different software technology, one that only a few unknown, isolated researchers are working on, and they have no guarantee of ever discovering the magic ingredient. AI research could be a scientific blind alley, like alchemy. We just don't know.
Note that this commentary comes from one who made a similar mistake, immortalized in Damien Broderick's book The Spike (Feb 2001 version, top of page 87). There I gave a probability that the next record prime would be discovered by a certain date (November 2001). That prediction turned out to be true, as did two subsequent ones, which puzzled and delighted the GIMPS crowd to no end. But the reasoning behind the predictions was flawed. Primes do not follow any such pattern; I got lucky thrice. I was using a superposition of probability distribution functions, which does not legitimately apply in that case, nor in predicting when the singularity will occur.
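For the curious, here is a minimal sketch in Python of that kind of flawed extrapolation. It is not my original calculation, and the per-effort numbers below are invented placeholders, not real GIMPS data: it superposes guessed completion-time distributions for several independent search efforts and reads off the chance of a new record by a chosen date, which is exactly the step that has no legitimate basis for primes.

    import math

    def normal_cdf(x, mu, sigma):
        # P(X <= x) under a normal distribution -- itself a shaky assumption.
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Pretend each independent search effort has a guessed (mean, std dev),
    # in days, for its time until it finds a record prime. Placeholder values.
    efforts = [(500.0, 120.0), (700.0, 200.0), (650.0, 150.0)]

    def p_record_by(t_days):
        # Superpose the per-effort distributions: P(at least one discovery
        # by day t), assuming independence -- the illegitimate step, since
        # record-prime discoveries do not actually follow such a pattern.
        p_none = 1.0
        for mu, sigma in efforts:
            p_none *= 1.0 - normal_cdf(t_days, mu, sigma)
        return 1.0 - p_none

    print("P(new record prime within 400 days) ~= %.2f" % p_record_by(400))

The arithmetic is tidy and the output looks authoritative, which is precisely the trap.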
The more worrisome part about the singularity is not the question of when it will occur, but rather what will occur. I argue we have no damn way of knowing. We have those who predict that all will be fine, utopia etc.; we have those who argue we must take definite steps to the contrary, lest the emergent AI be unfriendly. My argument is that we cannot predict what will happen when or if an AI emerges, regardless of what actions we take. We don't know, and we cannot know.
spike