[ExI] (no subject)

Max More maxmore01 at gmail.com
Sat Apr 1 02:33:02 UTC 2023


Stuart: I think you have it right.

A number of people have been commenting on the irrationality of
rationalists. That's unfortunate, because they are really talking about
only some rationalists, Yudkowsky's circle among them.

Yudkowsky has spent so much time talking with like-minded people, using
their special, made-up language, that he has driven himself down an
intellectual hole to a place of absurdity.

Many signs of apocalyptic, cultish belief are present. Yudkowsky saw
himself as the AI Jesus, bringing us salvation. When he utterly failed at
that -- by his own admission -- he became the AI prophet of doom, warning
us of the demon/genie/AI that will answer our wishes and kill or enslave
us all. His freakout over Roko's Basilisk was another strong sign of this.

EY seems to think he's in the movie *Forbidden Planet* and someone has
unleashed the Krell. Only this isn't the monster from the Id; it's the
monster from the language model.

I have issues with this guy, but he says a lot of sensible stuff about EY
in a multipart blog series. Here's one part:

https://aiascendant.substack.com/p/extropias-children-chapter-7

I'm in the middle of writing a long blog post on all this. Here's a post
with links to what I think are really good, non-panic pieces:
https://maxmore.substack.com/p/the-dont-panic-about-ai-collection

--Max

------------------------

His underlying logic is based on fear of an unknown quantity. In the
podcast he said that no possible utility function would allow for the
survival of the human race. That is patently absurd. Even if the only
utility function of an AI is to generate wealth for its company, it will
understand that the survival of customers and clients is necessary for
that utility function to be maximized.
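
To make that concrete, here is a toy sketch (a made-up function and
made-up numbers, purely for illustration): a utility function that scores
nothing but company revenue still comes out to zero once there are no
surviving customers left to pay, so preserving those customers is
instrumentally required to maximize it.

    # Toy illustration only: a hypothetical "wealth-only" utility function.
    def wealth_utility(surviving_customers: int, avg_spend: float) -> float:
        # Utility = total revenue from customers still around to pay.
        return surviving_customers * avg_spend

    print(wealth_utility(1_000_000, 50.0))  # 50000000.0
    print(wealth_utility(0, 50.0))          # 0.0 -- no customers, no wealth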


When Lex asked him for possible solutions to either the interpretability
problem or the alignment problem, he drew a blank and admitted he had
no idea. But when the conversation turned to throwing billions of
dollars into alignment research, he tried to become a gatekeeper for
AI funding. He literally said that billionaires like Musk should
consult with HIM before funding anybody else's research or ideas on
alignment. If that is not a good old-fashioned primate power grab,
then what is?


Moreover, in the podcast, he explicitly disavowed transhumanism, so
perhaps it is time that transhumanism disavowed him.


Stuart LaForge



-- 
Max More, PhD
Director of Communications
Biostasis Technologies
Editor, *The Transhumanist Reader*