[ExI] Yudkowsky in Time on AI Open Letter.

BillK pharos at gmail.com
Thu Mar 30 21:08:06 UTC 2023


On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> It is a sign of the times that these conversations are now reaching these outlets.
>
> I think "alignment" is generally insoluble, because each higher level of AI faces its own "alignment problem" with respect to the next, smarter AI. How can we at level 0 ensure that our solution for level 1 carries through levels 2 - 99?
>
> Moreover, presuming alignment can be solved presumes our existing values are correct and that no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment, or they are incorrect and a later, greater intelligence will correct them.
>
> Jason


"Our" values??   I doubt that China thinks our values are correct.
The fundamental values problem is that nations, races, religions, etc.
will never agree what values are correct.
The AGIs will be just as confused as humans on which values are preferable.


BillK


