[ExI] Against AI Doomerism, For AI Progress

Brent Allsop brent.allsop at gmail.com
Mon Apr 3 18:33:15 UTC 2023


Max, Giulio, and everyone: yet more bleating of your lonely opinions will
not stop all the doom-and-gloom bleating and tweeting.
How do you think Trump got elected?  Bleating and tweeting like this, even
if it is peer reviewed and published, will only make the problem far worse.

Instead of just more bleating and tweeting, which only drives everyone
apart and into their own bubble, we need to build and track consensus
around the morally right camp
<https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken>.
Once we get started, even if the competition tries to catch up, we will be
able to track which arguments really work to convert people to a trusted
morally right camp, and amplify the moral wisdom of the crowd
<https://canonizer.com/files/2012_amplifying_final.pdf>.

As of this writing, they have 3789 signatures
<https://futureoflife.org/open-letter/pause-giant-ai-experiments/>, and
only ONE button for those who agree.  THAT is the problem: there is no
room for any other POV to show the errors contained therein.
I bet if we all worked at it, we could build a consensus with tens of
thousands of signatures, for a start, for a morally superior camp
<https://canonizer.com/topic/16-Friendly-AI-Importance/2-AI-can-only-be-friendly>,
and continue extending the trusted consensus lead of peer-ranked experts
in this field
<https://canonizer.com/topic/53-Canonizer-Algorithms/19-Peer-Ranking-Algorithms>
over the competing camp
<https://canonizer.com/topic/16-Friendly-AI-Importance/9-FriendlyAIisSensible>,
which is falling further behind.
I bet if we created a peer-ranked expert canonizer algorithm
<https://canonizer.com/topic/53-Canonizer-Algorithms/11-Mind-Experts> for
this, people like Max, Zuckerberg, and Kurzweil might even rank above Elon.

We could take everything we agree on in that letter and put it in a super
camp, then force them to put all the bad parts in a camp competing with the
morally superior camp, show how bad that view really is, and stop this kind
of bleating-and-tweeting madness that is standing in the way of the
singularity.  Let's finally build a trusted source of moral truth that can
change the world.  All you need to do to get started is support this camp
<https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken>
or one of its sub camps.  Then, if you have time, help us wiki-improve
everything.


On Mon, Apr 3, 2023 at 12:11 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Quoting Max More via extropy-chat <extropy-chat at lists.extropy.org>:
> > My (long) take on fears of AI and the recent petition for a pause,
> featuring
> > Clippy the supervillain! AI apocalypse prophets and cultists!
> > The drama of AI regulation! Progress not pause!
> > https://maxmore.substack.com/p/against-ai-doomerism-for-ai-progress
>
> Great blog post, Max. I think you hit all the major talking points.
> LOL:) "I want to paperclip you! Let me out!"- Clippy.
>
> Stuart LaForge
>
>
>
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>