[ExI] Eliezer Yudkowsky Long Interview

BillK pharos at gmail.com
Fri Apr 7 11:56:29 UTC 2023


Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of
Intelligence, SciFi, & Rationality
Posted by Sergio Tarrero in category: robotics/AI     Apr 7, 2023

For 4 hours, I tried to come up with reasons why AI might not kill
us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder,
what it would take to save humanity, his millions of words of sci-fi,
and much more.

If you want to get to the crux of the conversation, fast forward to
the section from 2:35:00 to 3:43:54, where we go through and debate
the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9

Timestamps:
(0:00:00) — TIME article
(0:09:06) — Are humans aligned?
(0:37:35) — Large language models
(1:07:15) — Can AIs help with alignment?
(1:30:17) — Society’s response to AI
(1:44:42) — Predictions (or lack thereof)
(1:56:55) — Being Eliezer
(2:13:06) — Orthogonality
(2:35:00) — Could alignment be easier than we think?
(3:02:15) — What will AIs want?
(3:43:54) — Writing fiction & whether rationality helps you win

<https://www.youtube.com/watch?v=41SUp-TRVlg>
---------------------

This seems to be a very thorough discussion, covering a lot of ground.
But it's 4 hours long!  It must be good to hold people's interest for that long.

BillK


