[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023
BillK
pharos at gmail.com
Sun Feb 26 12:35:55 UTC 2023
Eliezer has done a long interview (1 hr 49 mins!) explaining his
reasoning about the dangers of AI. The video has over 800 comments.
<https://www.youtube.com/watch?v=gA1sNLL6yg4>
Quotes:
We wanted to do an episode on AI… and we went deep down the rabbit
hole. As we went down, we discussed ChatGPT and the new generation of
AI, digital superintelligence, the end of humanity, and if there’s
anything we can do to survive.
This conversation with Eliezer Yudkowsky sent us into an existential
crisis, with the primary claim that we are on the cusp of developing
AI that will destroy humanity.
Be warned before diving into this episode, dear listener.
Once you dive in, there’s no going back.
---------------
One comment -
Mikhail Samin:
Thank you for doing this episode!
Eliezer saying that he cried all his tears for humanity back in 2015,
that he has been trying to do something all these years, and that
humanity has failed itself, is possibly the most impactful podcast
moment I’ve ever experienced.
He’s actually better than the guy from Don’t Look Up: he is still
trying to fight.
I agree there’s only a very small chance, but something literally
astronomically large is at stake, and it is better to die with
dignity, trying to increase the chance of having a future by even the
smallest amount.
The raw honesty and emotion of a scientist who, for good reasons,
doesn't expect humanity to survive despite all his attempts is
something you rarely get to see.
--------------------
BillK