[ExI] Yudkowsky in Time on AI Open Letter.

Jason Resch jasonresch at gmail.com
Thu Mar 30 20:52:37 UTC 2023


On Thu, Mar 30, 2023, 2:48 PM Darin Sunley via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> We live in a timeline where Eliezer Yudkowsky just got published in Time
> magazine responding to a proposal to halt or at least drastically curtail
> AI research due to existential risk fears.
>
> Without commenting on the arguments on either side or the qualities
> thereof, can I just say how f*cking BONKERS that is?!
>
> This is the sort of thing that damages my already very put upon and
> rapidly deteriorating suspension of disbelief.
>
> If you sent 25-years-ago-me the single sentence "In 2023, Eliezer
> Yudkowsky will get published in Time magazine responding to a proposal to
> halt or at least drastically curtail AI research due to existential risk
> fears." I would probably have concluded I was already in a simulation.
>
> And I'm not certain I would have been wrong.
>

It is a sign of the times that these conversations are now reaching these
outlets.

I think "alignment" is generally insoluble, because each higher level of
AI faces its own "alignment problem" with respect to the next, smarter AI.
How can we, at level 0, ensure that our solution for level 1 continues to
hold through levels 2 - 99?
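As a rough, back-of-the-envelope illustration of why chained guarantees
are fragile (my own framing, with assumed numbers, not a formal model):
suppose each generation hands its values on to the next successfully with
some fixed probability p. Even a very high per-step success rate compounds
badly over 99 hand-offs.

    # Toy sketch (assumed, hypothetical numbers): probability that an
    # alignment guarantee survives a chain of successive AI generations,
    # if each hand-off independently succeeds with probability p.
    def survival_probability(p: float, generations: int) -> float:
        return p ** generations

    for p in (0.99, 0.999):
        print(f"p={p}: after 99 generations, "
              f"{survival_probability(p, 99):.1%} chance the values persist")
    # p=0.99  -> roughly 37%
    # p=0.999 -> roughly 91%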

Moreover, presuming alignment can be solved presumes that our existing
values are correct and that no greater intelligence will ever disagree
with them or find a higher truth. So either our values are correct and we
don't need to worry about alignment, or they are incorrect and a later,
greater intelligence will correct them.

Jason

