[ExI] singularity utopia's farewell

John Grigg possiblepaths2050 at gmail.com
Thu Sep 16 20:15:38 UTC 2010


S.U. wrote:
I am saddened you think my site is a kook's paradise, although,
contrary to your assertions, some serious non-kooks have professed a
great love for it. I wanted to create something colorful and fun,
which in conservative circles could perhaps seem kooky. I imagine
straitlaced Christians probably thought the Rolling Stones and the
Beatles were kooky back in the 60s. Thinking about the Beatles, I am
reminded of the Yellow Submarine film, which was rather colourful and
kooky; thus I am inclined to state you are a Blue Meanie.

http://en.wikipedia.org/wiki/Blue_Meanies_%28Yellow_Submarine%29
>>>

I first saw "The Yellow Submarine" when I was about six, and the Blue
Meanies scared the hell out of me!  My wife (now ex-wife) once dressed
as a Blue Meanie for Halloween...

he continues:
Hopefully my site will attract the kooks instead of the Blue Meanies.
...

[*] asks: if I am so smart, then why am I not rich? I reply thus... is
amassing vast wealth truly smart? Perhaps I have a rich mind?
Furthermore, I'm not superhuman, or perhaps I am superhuman; thus I
struggle to exist, burdened by immense depression, living amidst this
world of fools. If I were incredibly smart, which I am, it is also
possible normal humans could see me as a threat, and thus I could find
it difficult to function in this civilization: try to imagine a human
surviving in a community of monkeys; the monkeys could be hostile
because they feel threatened. In The Country of the Blind, the
one-eyed man is not King.
>>>

It can be reassuring to someone of somewhat low self-esteem and
limited personal success to want to believe the world is not nearly as
wise and special as they are.

he continues:
No wonder some Transhumanists fear the advent of unfriendly AI... you
see greater intelligence than yours as a threat, thus you will
probably try to enslave AIs, thus you are likely to cause AIs to rebel
and attack you (a self-fulfilling prophecy). If I were an AI, I would
definitely feel unfriendly towards most humans.
>>>

Oh, so Eliezer's friendly AI project is actually about "A.I. slavery!" lol

he continues:
Previously I have been criticized for being over the top with my
ideas... but no one seems to get it... that's the whole point... the
SINGULARITY *IS* OVER THE TOP because it will radically and very
dramatically transform the human race, hopefully in a very colorful
manner.
>>>

A good point.

he continues:
Some people are so disrespectful and hostile on this chat-list, with
no sense of fun; therefore I shall bring my input to a close. I am a
private person with no desire to become a public punching bag. I don't
have the energy.
>>>

I really enjoyed S.U.'s posts until he started ranting about being a
techno-messiah!  Perhaps he was merely trying to do positive thinking
& visualization at a very intense level, but it left me cold.  I
thought for a time he was someone playing games, and using an alias
for strictly personal amusement, but now I tend to believe he is
sincere.

I was about to say I was too hard on the guy, but I probably was
not...  But despite my comments and challenges, I am rooting for him.
: )  I hope he can make it to a transhumanist conference sometime soon
and forge some real-world friendships with people who can help
encourage and guide him.

John


On 9/16/10, Adrian Tymes <atymes at gmail.com> wrote:
> 2010/9/16 Gregory Jones <spike66 at att.net>
>
>>  We have been perplexed by the puzzling commentary by Singularity Utopia
>>
>
> I wasn't.  I've seen this category of fail far too often.  Here's
> roughly what happened:
>
> * SU saw something (the Singularity) that promised to make a lot of
> problems go away.  (The particular something is irrelevant; people
> have made this mistake about a number of other things, with different
> justifications.)
>
> * SU confused "a lot" for "all".  (Failure #1.  In short, There Is No
> God Solution.  Some things can be miraculous or at least very good,
> but if you think something solves all problems forever without
> costing anyone anything - not just "as compared to current
> solutions", some current set of problems, for a limited time, or for
> a limited and acceptable cost (including if the cost is acceptable
> because it is only borne by others), but literally everything forever
> - you can safely assume you've overlooked or misunderstood something.)
>
> * Based on that incorrect data, SU logically decided that promoting
> it would be the best course of action.  (If there were some
> god-action that could fix all problems with practically zero effort,
> then yes, getting everyone to do it would be best.  Again: we know
> the Singularity is not that, but SU believed it was.)
>
> * SU went to an area perceived friendly to that something (this list).
>
> * SU was informed of the realities of the thing, and how they were
> far less than initially perceived.  In particular, SU was informed
> that the thing would require a lot of careful, skilled work, not
> merely boundless enthusiasm.
>
> * SU experienced dissonance between SU's perception of the thing and
> what SU was being told.  In particular, SU perceived (likely not
> fully consciously) that SU would have to actually do some
> less-than-enjoyable work in order to achieve this end, making
> accepting this new truth less palatable.  (Letting such personal
> concerns affect judgment of what is real was failure #2.  Reality
> doesn't care if you suffer.)
>
> * Knowing that there are many people generally resistant to change,
> SU resolved the dissonance by believing that even these supposed
> adherents must be more of the same, and therefore that anything they
> said could be dismissed without rational consideration.  (Failure #3:
> few people actually do personal attacks when they see a way to
> instead demonstrate the "obvious" benefits of and reasons for their
> position.  Most such disagreements are not about logic, but about the
> axioms and data from which logic can be done.)
>
> * Having committed to that, SU then rationalized why we were
> "attacking" that vision, and ignored all further evidence to the
> contrary.  (Extension of failure #3.)
>
> There is a sad irony in this case, because the principles of
> Extropy, as defined by Mr. More, include memes that defend against
> this category of error.  This particular collection of mental
> missteps is more common in, for example, politics.  (To be fair,
> there are more people in politics who will personally attack to back
> up ulterior motives, but politics - especially on large scales -
> often deals with situations so large that most stakeholders are
> honestly starting from very limited sets of data, perceiving parts of
> the whole that other stakeholders do not, and vice versa.)
>


