[ExI] AI alignment

Keith Henson hkeithhenson at gmail.com
Sat Jun 15 18:04:50 UTC 2024


On Fri, Jun 14, 2024 at 11:39 AM <spike at rainier66.com> wrote:
>
> >...> On Behalf Of Keith Henson via extropy-chat
>
> >...I have watched things develop for over 40 years.
>
> >...The people who worked on the subject are frighteningly smart.  And they have not made notable progress.  My suspicion is that it may not be possible for humans to solve the alignment problem, in fact, it may be something that cannot be solved at all.
>
> >...Part of the reason is that a very well-aligned (but useful) AGI combined with human desires could have undesired effects...Keith
>
> Keith I reluctantly came to the same conclusion about 25 yrs ago at the peak of Eliezer's friendly AI movement.  There were a lot of local events talking about that problem in those days, and I used to hang out there, listening much, saying little.

And there was never a hint of a solution.

> Your arguments about why humans go to war had already convinced me by that time you are right.

Those evolved psychological traits are the reason I made a strong case
against modeling AIs on human brains.

> But I also realized understanding the cause doesn't get us to a solution.  It only helps us understand why we are screwed: if we understand that nations go to war over scarce resources, then thru some miracle of technology and science manage to get everyone fed and laid, we still haven't solved the underlying problem.  It helps temporarily.  But if everyone is fed and laid, the population grows and soon they aren't anymore.

The world of The Clinic Seed solved that problem by having no
reproduction in the (much more desirable) uploaded state.

> Regarding AGI aligned with human desires having undesired effects: I also came to that conclusion, but that one was easy for me, for a reason.  There is a category between conscientious objector (the Amish) and the warrior known as the conscientious cooperator.  The conscientious cooperator is the guy who recognizes that nations do go to war, there is nothing we can do to stop that at our level.  But if one participates in developing the technology to make attacking another nation more costly than the benefit, then that is a worthwhile pursuit, for it increases the chances that the conflict will be settled over the negotiating table rather than the battlefield.

Most humans evolved the psychological traits for war long, long before
there were nation-states.  Good points, though.  And good weapons make
it a lot more likely your side will win.

> Humans are human level intelligence, so we can think of us as biological AGI.  We are aligned with human desires and we cause undesired effects.  Stands to reason that AGI would do likewise.

Perhaps.  Humans are not all the same, though.  The San people do not
seem to have evolved the psychological characteristics for war.  The
reason may be that they have the lowest fertility of any human group.

> But really it is worse than that.  In the last coupla years especially, many of us who have been singularity watchers for three decades have become convinced that now we really are getting close to that time, and that (as we feared) AI is being used by governments as a super weapon.

Could be.  But the prime use of intelligence is making good decisions.
Human intelligence is influenced by "evolved-in-the-Stone-Age"
psychological traits that were good for the genes behind them.
Machine intelligence might make better decisions than humans do.

> We are in the middle of a huge AI arms race.  Eliezer was right all along, or partly right.  He warned us this would happen, but he was convinced there was a way out.  I don't know that there is, and reluctantly conclude that this is one example of an element of the Great Filter which explains why the universe is not humming with artificial signals.

Human desires make safe and useful AI impossible.  But as you know
from my postings on the subject, I suspect that what we see at
Tabby's Star is the work of advanced aliens.  If the light dips are
not dust clouds, then they are uploads living in data centers with
about 400 times the area of the Earth and using around 1.4 million
times our total energy use.  The objects are around 7 AU out from the
star, which keeps them cold, optimal for low-error computation.
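For what it's worth, a quick back-of-envelope check lands in the same
ballpark.  This is my own sketch, assuming roughly 4.7 solar
luminosities for the star, collectors with 400 times Earth's surface
area, and about 18 TW of current human energy use (those figures are
my assumptions, not from the post):

import math

L_SUN   = 3.828e26   # W, solar luminosity
AU      = 1.496e11   # m
SIGMA   = 5.67e-8    # W m^-2 K^-4, Stefan-Boltzmann constant
R_EARTH = 6.371e6    # m

L_star = 4.7 * L_SUN                     # assumed luminosity of Tabby's Star (F-type)
d = 7 * AU                               # assumed distance of the structures

flux = L_star / (4 * math.pi * d**2)     # stellar flux at 7 AU, ~130 W/m^2
area = 400 * 4 * math.pi * R_EARTH**2    # 400x Earth's surface area of collectors
captured = flux * area                   # intercepted power, ~2.7e19 W
ratio = captured / 1.8e13                # vs ~18 TW human use, ~1.5 million

T_eq = (flux / (4 * SIGMA)) ** 0.25      # blackbody equilibrium temperature, ~155 K

print(f"captured: {captured:.1e} W  (~{ratio:.1e} x human use)")
print(f"equilibrium temperature: {T_eq:.0f} K")

That comes out near 1.5 million times our energy use and about 155 K,
consistent with the numbers above.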

If this is true, there are upsides and downsides.  The upside is that
something made it through its local singularity; the downside is that
we have competition.

> I know this is the ExI list, so my apologies for what must look like a terribly negative post, but I will end on a positive note, as is my wont, and simultaneously gives me a chance to use the funny-sounding word wont.  I believe there is hope.  I recognize humanity is in grave danger, but I firmly believe there is a chance we can prevent or avoid slaying ourselves.  I have not given up on us.  I would offer a bit more detail on that if I knew any, but suffice it to say I firmly believe there is a way, and we might find it in time.  This is my take on Dynamic Optimism: I live and work towards a version of AGI which peacefully coexists with the descendants of mankind.

I agree.  In any case, putting a halt to AI development is not feasible.

Keith

> spike
>
>


