[ExI] Yudkowsky in Time on AI Open Letter.

Tara Maya tara at taramayastales.com
Fri Mar 31 06:40:52 UTC 2023


The problem with the letter demanding that AI development be suspended is precisely the same problem as when dozens of scientists signed a letter asking everyone to destroy their nukes and stop working on nuclear weapons, or else humanity would be destroyed. 

How do they propose to overcome the "other guy could get it first" problem that has plagued human beings ever since we learned to pick up sticks and stones? If you don't learn to sharpen flint and the other guy gets it first, guess who slaughters the men in your tribe and steals your daughters? If the US had called a moratorium on developing nukes in 1942, or even in 1952, how would that have worked out if the Fascists or Communists had held a nuclear monopoly? 

They have to prove more than that AI is a threat. They have to prove that AI is more of a threat than AI in the hands of our enemies. How? How can they do that? Is the whole human race going to join hands, sing Kumbaya, and stop research on AI?

Sorry, but not only do I not believe CHINA would stop working on this if a moratorium were called, I'm skeptical that even Bill Gates or Elon Musk would stop working on it. Maybe they only want their competition to stop...?

AI makes us humans smarter, or at least makes us feel smarter. Who with the power to make a bigger brain is simply going to surrender it? I don't see a way out of our own tinkering nature. And I agree with BillK: the bigger brain will simply make more powerful whatever values already existed in the smaller brain using it. 


Tara Maya


> On Mar 30, 2023, at 2:08 PM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> On Thu, 30 Mar 2023 at 21:55, Jason Resch via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> 
>> It is a sign of the times that these conversations are now reaching these outlets.
>> 
>> I think "alignment" is generally insoluble, because each next higher level of AI faces its own "alignment problem" for the next smarter AI. How can we, at level 0, ensure that our solution for level 1 continues on through levels 2-99?
>> 
>> Moreover, presuming alignment can be solved presumes that our existing values are correct and that no greater intelligence will ever disagree with them or find a higher truth. So either our values are correct and we don't need to worry about alignment, or they are incorrect, and a later, greater intelligence will correct them.
>> 
>> Jason
> 
> 
> "Our" values??   I doubt that China thinks our values are correct.
> The fundamental values problem is that nations, races, religions, etc.
> will never agree what values are correct.
> The AGIs will be just as confused as humans on which values are preferable.
> 
> 
> BillK
> 


