[ExI] The case for how and why AI might kill us all
Gadersd
gadersd at gmail.com
Fri Mar 31 14:32:11 UTC 2023
> That sounds like out-of-control chaos to me. Until one
> AI system comes out on top and closes all the weaker systems down.
> Will the winner look after humans though?
What is a king without subjects? What is a god without worshippers? If the dominant AI retains humanlike qualities, then I expect it to appreciate its underlings in some way, even if only as zoo animals. However, if it is purely a green paper maximizer, the results may be less pleasant.
> On Mar 31, 2023, at 6:28 AM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> The case for how and why AI might kill us all
> By Loz Blain March 31, 2023
>
> <https://newatlas.com/technology/ai-danger-kill-everyone/>
> Quotes:
> This is not the first time humanity has stared down the possibility of
> extinction due to its technological creations. But the threat of AI is
> very different from the nuclear weapons we've learned to live with.
> Nukes can't think. They can't lie, deceive or manipulate. They can't
> plan and execute. Somebody has to push the big red button.
> ----------
> Sam Altman forecasts that within a few years, there will be a wide
> range of different AI models propagating and leapfrogging each other
> all around the world, each with its own smarts and capabilities, and
> each trained to fit a different moral code and viewpoint by companies
> racing to get product out of the door. If only one out of thousands of
> these systems goes rogue for any reason, well... Good luck. "The only
> way I know how to solve a problem like this is iterating our way
> through it, learning early and limiting the number of
> 'one-shot-to-get-it-right scenarios' that we have," said Altman.
> ------------
> Yudkowsky believes even attempting this is tantamount to a suicide
> attempt aimed at all known biological life. "Many researchers steeped
> in these issues, including myself, expect that the most likely result
> of building a superhumanly smart AI, under anything remotely like the
> current circumstances, is that literally everyone on Earth will die,"
> he wrote. "Not as in 'maybe possibly some remote chance,' but as in
> 'that is the obvious thing that would happen.'"
> ------------------------
>
>
> So Altman thinks the world might end up with hundreds of competing
> AIs, all with different value systems and running under different
> legal systems. That sounds like out-of-control chaos to me. Until one
> AI system comes out on top and closes all the weaker systems down.
> Will the winner look after humans though?
>
> BillK
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat