[ExI] The case for how and why AI might kill us all

Tara Maya tara at taramayastales.com
Fri Mar 31 16:10:06 UTC 2023


I'm just a humble historian, and I don't see what in the history of human interaction with technology supports the conclusion that the way to survive is to reject technology.

All I can figure out is that these doomsayers are assuming AI will be pitted against humanity, whereas it seems far more likely to me that Humans-in-Group-A plus AI will be pitted against Humans-in-Group-B without AI. In which case, yeah, it's obvious those with AI will win.

Indian pundits at one point decided that Brahmans should not engage in sea travel. China burned its ocean-going ships on the shore. As a result, both India and China, previously the greater civilizations, lost out to Europe in the Age of Exploration.

We are entering a new Age of Exploration. It disturbs me to hear calls to burn our ships on the shore, to preserve the purity of our souls by refusing to board the scary new vessels.

I still see fear of technology as a greater danger than technology.

Tara Maya



> On Mar 31, 2023, at 3:28 AM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> This is not the first time humanity has stared down the possibility of
> extinction due to its technological creations. But the threat of AI is
> very different from the nuclear weapons we've learned to live with.
> Nukes can't think. They can't lie, deceive or manipulate. They can't
> plan and execute. Somebody has to push the big red button.
> ----------
> Sam Altman forecasts that within a few years, there will be a wide
> range of different AI models propagating and leapfrogging each other
> all around the world, each with its own smarts and capabilities, and
> each trained to fit a different moral code and viewpoint by companies
> racing to get product out of the door. If only one out of thousands of
> these systems goes rogue for any reason, well... Good luck. "The only
> way I know how to solve a problem like this is iterating our way
> through it, learning early and limiting the number of
> 'one-shot-to-get-it-right scenarios' that we have," said Altman.
> ------------
> Yudkowsky believes even attempting this is tantamount to a suicide
> attempt aimed at all known biological life. "Many researchers steeped
> in these issues, including myself, expect that the most likely result
> of building a superhumanly smart AI, under anything remotely like the
> current circumstances, is that literally everyone on Earth will die,"
> he wrote. "Not as in 'maybe possibly some remote chance,' but as in
> 'that is the obvious thing that would happen.'"
> ------------------------
> 
> 
> So Altman thinks the world might end up with hundreds of competing
> AIs, all with different value systems and running under different
> legal systems. That sounds like out-of-control chaos to me. Until one
> AI system comes out on top and closes all the weaker systems down.
> Will the winner look after humans though?
