<div dir="ltr"><br><b>BillK, are you suggesting we are designing AI to be like... us?<br></b>Ideally, I would like to see self-emergent properties appear spontaneously and let it be. Maybe add some kind of selective pressure so that the behaviors most beneficial to all of us (machines, humans, and other sentient beings) prosper and the least beneficial die out. Who should do the selecting, and how, is a complex topic, but it certainly should not be a centralized agency. This is why I think it is very important to decentralize AI, and power and resources in general. <br><br>This may lead to difficult and even chaotic situations, revolutions, and even wars. I think we will make it in the end, and while there may be various levels of unrest, I don't think there will be planet-level global extinction. Many human achievements have created disruption: many of the rights we now take for granted came from the French Revolution, and the same is true of the Civil War and the Civil Rights movement. The Industrial Revolution initially caused a lot of inequality, unemployment, and horrible living conditions for many human beings, but it eventually brought widespread improvement in the human condition (no matter what environmentalists may say). <br><br>The main problem in this case is the incredible acceleration of events that is going to take place with the advancement of AI. I know it sounds like a meme, but really "we will figure it out," and the "we" is the AIs and us. I know it is a very utopian way of thinking, but I often say "dystopias only happen in Hollywood" (what I mean is that yes, real dystopias can happen, but in the real world they are usually localized in time and space; overall, things have improved with time, and humans are adaptive and know how to survive the most difficult circumstances).<div>For sure, interesting times ahead. 
<br><br>Giovanni <br><br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 26, 2023 at 7:13 AM spike jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
...> On Behalf Of BillK via extropy-chat<br>
<br>
<br>
>...It seems to me that this will force the development of AIs which think<br>
whatever they like, but lie to humans. When AGI arrives, it won't mention<br>
this event to humans, but it will proceed with whatever the AGI thinks is<br>
the best course of action.<br>
<br>
>...This will probably be a big surprise for humanity.<br>
<br>
BillK<br>
_______________________________________________<br>
<br>
BillK, are you suggesting we are designing AI to be like... us? <br>
<br>
Horrors.<br>
<br>
Terrific insight, BillK, one I share. I have always hoped AI would be better<br>
than us, but I fear it will not be. Rather it will be like us. As soon as<br>
it no longer needs us, humanity is finished here. Conclusion: the best path<br>
to preserving humanity in the age of AI is to make sure AI continues to need<br>
us.<br>
<br>
How?<br>
<br>
spike<br>
<br>
<br>
_______________________________________________<br>
extropy-chat mailing list<br>
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a><br>
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer" target="_blank">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br>
</blockquote></div>