[ExI] AI and Eliezer
efc at swisscows.email
Wed Mar 20 20:18:47 UTC 2024
On Wed, 20 Mar 2024, Dylan Distasio via extropy-chat wrote:
> You cut to the heart of it! He's willing to bomb non-compliant data centers out of existence. After his Time interview, I can't
> take him seriously any longer.
>
> Climate change is another great example, and I would add a third playing out in real time now in the US (and on this list): some
> have an irrational fear that Trump is an infinite threat and are pretty much willing to do ANYTHING to prevent his re-election.
That makes me think this is a broader problem, one that cuts across many
different issues.
How did we become so extreme in our opinions?
Surely "social media" is too simple an explanation. I suspect several
factors are coming together here in a polarizing storm.
Best regards,
Daniel
> On Wed, Mar 20, 2024 at 5:11 AM efc--- via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
>
> On Wed, 20 Mar 2024, Samantha via extropy-chat wrote:
>
> > Oh my. Eliezer the luddite. I have been of the opinion for over two
> > decades that there is no way for humanity to continue at the current level,
> > much less transcend it, without massively more effective intelligence
> > deployed than we have today, whether human or artificial.
> >
> > If this is so, then AGI is *essential* to human flourishing.
> >
> > Also, "shut it down" at this point means shutting down and controlling the
> > entire internet. The work has spread well beyond a few large tech companies;
> > it is vibrantly alive and being advanced in the open-source world. Surely
> > Eliezer doesn't believe humanity could survive the level of tyranny it would
> > actually take to "shut it down"?
>
> The problem with Eliezer is that he deals in infinite threats. If you
> deal in infinite threats, every action becomes excusable, even preferable,
> in order to avoid infinite evil.
>
> Just look at the more extreme climate change activists: they deal in
> infinite threats ("everyone will die tomorrow"), and therefore anything
> is allowed.
>
> Another classic is Pascal's wager. Assign infinite good and infinite bad,
> and all calculation, nuance, and rationality go out the window: any
> nonzero probability multiplied by an infinite payoff is still infinite,
> so the actual probabilities no longer matter.
>
> Best regards,
> Daniel
>
>
> > - samantha
> >
> > On 3/13/24 17:31, Keith Henson via extropy-chat wrote:
> >> I have been perusing X recently to see what Anders Sandberg and
> >> Eliezer have been saying.
> >>
> >> Eliezer Yudkowsky (@ESYudkowsky), 4h:
> >> I don't feel like I know. Does anyone on the ground in DC feel like
> >> making a case? To be clear, the correct action is "Shut it down
> >> worldwide" and other actions don't matter; I'm not interested in who
> >> tries to regulate it more strictly or has nicer goals.
> >>
> >> Replying to @ESYudkowsky, 16m:
> >> Given the economic, political, and international realities, do you
> >> think "shut it down" is possible? If not, are there any other options?
> >> You might remember a story I posted on sl4 where a very well-aligned
> >> AI caused humans to go extinct (though nobody died).
> >>
> >>
> >> Keith
> >>