[ExI] AI alignment

efc at swisscows.email
Sat Jun 15 10:15:07 UTC 2024



On Fri, 14 Jun 2024, spike jones via extropy-chat wrote:

> laid, we still haven't solved the underlying problem.  It helps
> temporarily.  But if everyone is fed and laid, the population grows
> and soon they aren't anymore.

Then we'd have to expand to another planet, and another, to the bottom of
the sea and the tops of the mountains. But true, even adding Mars,
terraforming, space stations à la The Expanse, etc., we'll eventually
outgrow our own solar system. The interesting thing is how long that will
take, and what we'll invent along the way. Also, of course, how much time
we have on this planet before having to move to another one or risk
negative outcomes.

Usually people laugh at me when I say this and call me unrealistic, but
I'm a long-term optimist, and given how far we've come from the savannah
already, I don't see any limits except time.

> Humans are human level intelligence, so we can think of us as
> biological AGI.  We are aligned with human desires and we cause
> undesired effects.  Stands to reason that AGI would do likewise.

Well, I'm not so sure. We don't know what intelligence is, and we don't
know what AGI will look like (yet). It could very well be that there is a
fundamental difference. Or, as you say, it could be one and the same
principle implemented in different mediums, with similar pros and cons.

> But really it is worse than that.  In the last coupla years
> especially, many of us who have been singularity watchers for three
> decades have become convinced that now we really are getting close to
> that time, and that (as we feared) AI is being used by governments as
> a super weapon.  We are in the middle of a huge AI arms race.  Eliezer

I treat it like the classic nuclear arms race. Of course everyone wants
everyone else to ban it, while secretly pursuing it themselves, à la the
prisoner's dilemma. So bans will never work: the more authoritarian
countries on the planet will continue R&D in secret and then spring the
surprise on the "naive" Western democratic world.
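
Just to make the game-theoretic point concrete, here is a tiny Python
sketch of the arms race as a prisoner's dilemma. The payoff numbers are
purely illustrative assumptions of mine, not from any real analysis; the
point is only that "develop" beats "honour the ban" whatever the other
side does, so mutual development is the only stable outcome even though
mutual restraint would leave everyone better off.

    # Toy arms-race payoff matrix (prisoner's dilemma form).
    # Payoff numbers are illustrative assumptions only.
    # Strategies: "ban" = honour an AI development ban,
    #             "develop" = pursue AI in secret.
    PAYOFFS = {
        # (my move, their move): (my payoff, their payoff)
        ("ban", "ban"):         (3, 3),  # mutual restraint
        ("ban", "develop"):     (0, 5),  # I'm naive, they spring the surprise
        ("develop", "ban"):     (5, 0),  # I get the strategic edge
        ("develop", "develop"): (1, 1),  # full arms race, costly for both
    }

    def best_response(their_move):
        """Return the move that maximises my payoff given their move."""
        return max(("ban", "develop"),
                   key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

    if __name__ == "__main__":
        for their_move in ("ban", "develop"):
            print(f"If they {their_move}, my best response is to "
                  f"{best_response(their_move)}")
        # Prints "develop" in both cases: defection dominates, so
        # (develop, develop) is the only equilibrium, even though
        # (ban, ban) pays better for both sides.

With those (assumed) payoffs, no side can trust the other to honour a
ban, which is exactly why I think bans won't hold.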

So I think we'll see a classic nuclear arms race, where perhaps a few
superpowers have the resources to reach the goal first. But what will be
interesting is to see whether the technology and resources required will
spread, as technology usually does.

We might have a silent chess game going on between the AIs of the world,
one that we may not even be aware of. We'll see shifts in policy, small
statements here and there, and in the end our wise politicians will just
shrug their shoulders and follow along.

> I know this is the ExI list, so my apologies for what must look like a
> terribly negative post, but I will end on a positive note, as is my
> wont, and simultaneously gives me a chance to use the funny-sounding
> word wont.  I believe there is hope.  I recognize humanity is in grave
> danger, but I firmly believe there is a chance we can prevent or avoid
> slaying ourselves.  I have not given up on us.  I would offer a bit

I'm 100% convinced there is hope. We've had the capability to blow
ourselves to pieces with nuclear weapons for 70 years or so, and yet,
despite all the madmen alive, some with enormous power, we've not done so.

That gives me great hope and confidence that we'll handle this challenge
as well. =)

Best regards, 
Daniel

