[ExI] AI alignment

spike at rainier66.com
Fri Jun 14 18:39:42 UTC 2024


>...> On Behalf Of Keith Henson via extropy-chat

>...I have watched things develop for over 40 years.

>...The people who worked on the subject are frighteningly smart.  And they have not made notable progress.  My suspicion is that it may not be possible for humans to solve the alignment problem, in fact, it may be something that cannot be solved at all.

>...Part of the reason is that a very well-aligned (but useful) AGI combined with human desires could have undesired effects...Keith
_______________________________________________

Keith, I reluctantly came to the same conclusion about 25 yrs ago, at the peak of Eliezer's friendly AI movement.  There were a lot of local events discussing that problem in those days, and I used to hang out there, listening much and saying little.  Your arguments about why humans go to war had already convinced me by then that you are right.  But I also realized that understanding the cause doesn't get us to a solution.  It only helps us understand why we are screwed: if we understand that nations go to war over scarce resources, and then thru some miracle of technology and science we manage to get everyone fed and laid, we still haven't solved the underlying problem.  It helps temporarily.  But if everyone is fed and laid, the population grows, and soon they aren't anymore.

Regarding AGI aligned with human desires having undesired effects: I also came to that conclusion, but that one was easy for me, for a reason.  There is a category between the conscientious objector (the Amish) and the warrior, known as the conscientious cooperator.  The conscientious cooperator is the guy who recognizes that nations do go to war, and that there is nothing we can do to stop that at our level.  But if one participates in developing the technology that makes attacking another nation more costly than the benefit, then that is a worthwhile pursuit, for it increases the chances that the conflict will be settled over the negotiating table rather than on the battlefield.

Humans are human-level intelligences, so we can think of ourselves as biological AGI.  We are aligned with human desires, and we cause undesired effects.  It stands to reason that AGI would do likewise.

But really it is worse than that.  In the last coupla years especially, many of us who have been singularity watchers for three decades have become convinced that now we really are getting close to that time, and that (as we feared) AI is being used by governments as a super weapon.  We are in the middle of a huge AI arms race.  Eliezer was right all along, or at least partly right.  He warned us this would happen, but he was convinced there was a way out.  I don't know that there is, and I reluctantly conclude that this is one example of an element of the Great Filter, which would explain why the universe is not humming with artificial signals.

I know this is the ExI list, so my apologies for what must look like a terribly negative post, but I will end on a positive note, as is my wont, which simultaneously gives me a chance to use the funny-sounding word wont.  I believe there is hope.  I recognize humanity is in grave danger, but I firmly believe there is a chance we can avoid slaying ourselves.  I have not given up on us.  I would offer a bit more detail on that if I knew any, but suffice it to say I believe there is a way, and we might find it in time.  This is my take on Dynamic Optimism: I live and work towards a version of AGI which peacefully coexists with the descendants of mankind.

spike




