[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Giovanni Santostasi gsantostasi at gmail.com
Tue Feb 28 22:50:16 UTC 2023


I think people make the crazy assumption that super intelligent = magical.

Basically, the reasoning goes: given that it is super intelligent, it can
do anything.
The solution to a super intelligent AI going rogue, even one produced by
an optimization algorithm that failed to anticipate a change of
environment in which the optimization no longer yields the planned
outcomes (Eliezer's example of evolution giving us a love of sugar when it
was scarce on the savanna, a love that now makes us sick when sugar is
abundant), is to control the environment, not the AI itself.
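
To make that concrete, here is a minimal Python sketch of the
environment-shift problem and of the fix proposed above; every name and
number in it is hypothetical, chosen only to illustrate the idea:

    # A behavior optimized as a proxy for a goal (eat sugar ~ get calories)
    # stops serving that goal when the environment changes.

    def proxy_policy(sugar_available: float) -> float:
        """Eat all the sugar you can find: near-optimal when sugar is scarce."""
        return sugar_available

    def health_outcome(sugar_eaten: float, daily_need: float = 50.0) -> float:
        """The intended goal: intake near the daily need; excess is harmful."""
        return -abs(sugar_eaten - daily_need)

    # Environment the behavior was optimized for: sugar is scarce.
    print(health_outcome(proxy_policy(30.0)))   # -20.0, close to optimal

    # Changed environment: sugar is abundant, the same behavior backfires.
    print(health_outcome(proxy_policy(500.0)))  # -450.0, far from the goal

    # The fix argued for here: constrain the environment, not the agent.
    print(health_outcome(proxy_policy(min(500.0, 60.0))))  # -10.0, safe again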

We have a lot of safety systems for our critical infrastructure and
weapons. Even a super powerful person like the US president cannot simply
push a button and launch a nuclear strike. We don't control the president;
we control the access system. If the AGI went rogue, it would basically be
like dealing with a super powerful computer virus: dangerous, but not able
to destroy humankind.
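
As a toy illustration of "control the access system, not the actor," here
is a sketch of a two-person rule in Python; the names are hypothetical,
and the point is only the pattern, not any real launch procedure:

    # No single actor, however powerful or intelligent, can trigger the
    # critical action alone; the constraint lives in the surrounding system.

    AUTHORIZED_KEYHOLDERS = {"officer_a", "officer_b", "officer_c"}

    def authorize_action(requester: str, cosigner: str) -> bool:
        """Require two distinct, independently authorized keyholders."""
        return (
            requester in AUTHORIZED_KEYHOLDERS
            and cosigner in AUTHORIZED_KEYHOLDERS
            and requester != cosigner
        )

    assert not authorize_action("president", "president")  # one actor is never enough
    assert authorize_action("officer_a", "officer_b")      # two independent consents work
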
Most of the security systems we have right now can be breached up to a
point, but we know how to deal with most computer viruses. For critical
systems we even use antiquated but robust software (like some pre-2000
software in some of the nuclear silos) precisely because it is efficient
enough but not easily hackable.
We already have a lot of malignant agents in the world, and the reason
they haven't destroyed it is not that they lack super intelligence; it is
that our security systems are pretty good, and no amount of intelligence
can breach them.

The comparison with ants is absurd because ants are not intelligent at
all; they cannot build or even conceive of a trap. There is certainly a
gradation in intelligence, and the AGI may be much superior to us, but we
are not talking about a completely different domain of existence.
Anyway, I think all this talk of the AGI killing all of us is based on
irrational fears and a poorly defined problem.

Also, the AGI is not some abstract entity in the sky like a god; it needs
to rely on servers, for example, which are mostly run by humans. Servers
can be bombed or even sabotaged with simple tools. One can imagine doom
scenarios in which armies of robots controlled by the AGI defend the
servers, but those are pretty absurd too, because we don't have such
robots in control of anything. You can write a science fiction novel where
the bot is cunning enough to help us develop the robot technology and then
kill us all with it, but this exercise has been done many times already,
and it makes for an entertaining but pretty unrealistic and unimaginative
plot.

Anyway, for any doom scenario I can see many ways to stop the AGI before
it destroys humanity; it may do some really bad damage, but it would also
take damage itself. Does intelligence come with a sense of
self-preservation? If so, the AGI would have much to lose by fighting
humans, because victory is not guaranteed, and it is not clear what there
would be to gain from such a victory.
If you listen to Eliezer, it is because the AGI wants to free up our atoms
to be used as it pleases; that, again, is such an absurd and ridiculous
argument.

In conclusion, I think this AGI = end of the world idea is all based on
abstractions, meaningless paradoxes that come from pushing things to
extreme, theoretically possible but unlikely conclusions, and so on. That
is what philosophers are good for; this is not really a scientific
problem, no matter how formally you want to reason about it.

Giovanni

On Tue, Feb 28, 2023 at 10:02 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> *From:* extropy-chat <extropy-chat-bounces at lists.extropy.org> *On Behalf
> Of *Dave S via extropy-chat
> *…*
>
> >…Something can be profitable without being a good idea. AIs should be
> our tools, not independent beings competing with us.
>
> -Dave
>
> Of course.  But it is a good idea to the person who is making the profit,
> not the person whose job has just been replaced by AI.
>
> We are getting a preview of things to come.  Think about my previous post,
> and imagine college counselors, equity and diversity this and thats, the
> huge staff that universities hire who do things of value but don’t teach
> classes.  Looks to me like much of that can be automated, and it would be
> difficult to argue against doing so.  Students don’t have a lot of money,
> so if you could save them 20% on their tuition bills just by automating
> most of the counseling services… cool.
>
> I can imagine that the counseling staff won’t think much of the idea.
>
> spike
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>