[ExI] AI alignment

William Flynn Wallace foozler83 at gmail.com
Fri Jun 14 19:44:56 UTC 2024


uploads etc.

If an upload can program itself to enjoy food, sex, travel -- or anything
at any time -- where is the contrast?  Studies support the idea that we need
some average or worse times to enjoy the really good times.

Don't you think that a person who gets all the rewarding things and
experiences anytime he wants will get bored?  And everybody else can do the
same thing - we will all be equal.  Can we stand it?

The more we eat at one sitting, the less good it tastes, and so we quit.
Wired into us.  The only thing I know of that is not susceptible to this
effect is stimulating the reward centers - like the rats who kept
stimulating themselves, never stopped to eat, and died of starvation.

bill w

On Fri, Jun 14, 2024 at 1:41 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> >...> On Behalf Of Keith Henson via extropy-chat
>
> >...I have watched things develop for over 40 years.
>
> >...The people who worked on the subject are frighteningly smart.  And
> they have not made notable progress.  My suspicion is that it may not be
> possible for humans to solve the alignment problem; in fact, it may be
> something that cannot be solved at all.
>
> >...Part of the reason is that a very well-aligned (but useful) AGI
> combined with human desires could have undesired effects...Keith
>
> Keith, I reluctantly came to the same conclusion about 25 yrs ago at the
> peak of Eliezer's friendly AI movement.  There were a lot of local events
> talking about that problem in those days, and I used to hang out there,
> listening much, saying little.  Your arguments about why humans go to war
> had already convinced me by that time that you were right.  But I also
> realized that understanding the cause doesn't get us to a solution.  It
> only helps us understand why we are screwed: if we understand that nations
> go to war over scarce resources, and then thru some miracle of technology
> and science we manage to get everyone fed and laid, we still haven't
> solved the underlying
> problem.  It helps temporarily.  But if everyone is fed and laid, the
> population grows and soon they aren't anymore.
>
> Regarding AGI aligned with human desires having undesired effects: I also
> came to that conclusion, but that one was easy for me, for a reason.  There
> is a category between the conscientious objector (the Amish) and the
> warrior, known as the conscientious cooperator.  The conscientious
> cooperator is the guy who recognizes that nations do go to war and that
> there is nothing we can do to stop that at our level.  But if one
> participates in developing the
> technology to make attacking another nation more costly than the benefit,
> then that is a worthwhile pursuit, for it increases the chances that the
> conflict will be settled over the negotiating table rather than the
> battlefield.
>
> Humans are human-level intelligences, so we can think of ourselves as
> biological AGI.  We are aligned with human desires, and we cause undesired
> effects.  It stands to reason that AGI would do likewise.
>
> But really it is worse than that.  In the last coupla years especially,
> many of us who have been singularity watchers for three decades have become
> convinced that now we really are getting close to that time, and that (as
> we feared) AI is being used by governments as a super weapon.  We are in
> the middle of a huge AI arms race.  Eliezer was right all along, or partly
> right.  He warned us this would happen, but he was convinced there was a
> way out.  I don't know that there is, and reluctantly conclude that this is
> one example of an element of the Great Filter which explains why the
> universe is not humming with artificial signals.
>
> I know this is the ExI list, so my apologies for what must look like a
> terribly negative post, but I will end on a positive note, as is my wont,
> which simultaneously gives me a chance to use the funny-sounding word wont.
> I believe there is hope.  I recognize humanity is in grave danger, but I
> firmly believe there is a chance we can prevent or avoid slaying
> ourselves.  I have not given up on us.  I would offer a bit more detail on
> that if I knew any, but suffice it to say I firmly believe there is a way,
> and we might find it in time.  This is my take on Dynamic Optimism: I live
> and work towards a version of AGI which peacefully coexists with the
> descendants of mankind.
>
> spike