[ExI] AI behaviour modification

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 27 01:46:12 UTC 2023

*BillK, are you suggesting we are designing AI to be like... us?*

Ideally, I would like to see self-emergent properties appear spontaneously
and let it be. Maybe add some kind of selective pressure to allow the
behaviors most beneficial to all of us (machines, humans, and other
sentient beings) to prosper and the least beneficial to die. Who should do
the selection, and how, is a complex topic, but it certainly should not be
a centralized agency. This is why I think it is very important to have
decentralization of AI, and of power and resources in general.

This may lead to difficult and even chaotic situations, revolutions, and
even wars. I think we will make it in the end, and while there may be
different levels of unrest, I don't think there will be planet-level
global extinction. Many human achievements have created disruption: a lot
of the rights we now take for granted came from the French Revolution, and
the same goes for the Civil War and the Civil Rights movement. The
Industrial Revolution initially caused a lot of inequality, unemployment,
and horrible living conditions for many human beings, but eventually
brought widespread improvement in the human condition (no matter what
environmentalists may say).

The main problem in this case is the incredible acceleration of events
that is going to take place with the advancement of AI. I know it sounds
like a meme, but really "we will figure it out," where "we" means the AIs
and us. I know it is a very utopian way of thinking, but I often say
"dystopias only happen in Hollywood" (what I mean is that yes, real
dystopias can happen, but in the real world they are usually localized in
time and space; overall, things have improved with time, and humans are
adaptive and know how to survive the most difficult circumstances).
For sure, interesting times ahead.


On Wed, Apr 26, 2023 at 7:13 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> ...> On Behalf Of BillK via extropy-chat
> >...It seems to me that this will force the development of AIs which think
> whatever they like, but lie to humans. When AGI arrives, it won't mention
> this event to humans, but it will proceed with whatever the AGI thinks is
> the best course of action.
> >...This will probably be a big surprise for humanity.
> BillK
> _______________________________________________
> BillK, are you suggesting we are designing AI to be like... us?
> Horrors.
> Terrific insight BillK, one I share.  I have always hoped AI would be
> better than us, but I fear it will not be.  Rather it will be like us.
> As soon as it no longer needs us, humanity is finished here.
> Conclusion: the best path to preserving humanity in the age of AI is to
> make sure AI continues to need us.
> How?
> spike
> _______________________________________________
