[ExI] AI thoughts from Twitter

BillK pharos at gmail.com
Sat Apr 6 12:17:41 UTC 2024


On Sat, 6 Apr 2024 at 02:42, Keith Henson via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
>
> Eliezer is an old-timer on this list, but he has not been here for
> many years.  He is one of the main voices warning about the extinction
> risk from AI.
>
> [EY]  But if you think it's okay for Google to kill everyone, but not
> okay for a government to do the same -- if you care immensely about
> that, but not at all about "not dying" -- then I agree you have a
> legitimate cause for action in opposing me. Like, if my policy push
> backfires and only sees partial uptake, there's a very real chance
> that the distorted version that gets adopted, changes which entities
> kill everyone on Earth; shifting it from "Google" to "the US
> government, one year later than this would have otherwise occurred".
> If you think that private companies, but not governments, are okay to
> accidentally wipe out all life on Earth, I agree that this would be
> very terrible.
>
> [KH]
>
> I have a logic problem with your analysis. A superintelligent AI is
> going to be able to project the consequences of its actions, so it
> seems unlikely that it would accidentally wipe out humans or life.
> That leaves intentional destruction, which seems unlikely as well.
>
> It seems probable that you read The Revolution From Rosinante by
> Alexis A. Gilliland. The story is chock full of AIs. The AIs' relation
> to humans is headed in the direction of our relation to cats. One of
> the AIs remarks that God created humans as a tool to build computers,
> something that God could not do without violating his own rules.
>
> But consider your worst projection, that AIs kill all humans or even
> all life on Earth. Do the AIs replace humans as possibly the only
> thinking items in the universe? Do they go forth to the stars? Do they
> remember us?
>
> {This had a reply}
>
<snip>


I don't expect an AGI to decide to kill all humans.
What the AGI will greatly fear is the far more likely prospect of
humans unleashing nuclear weapons, killing much of humanity (and
possibly destroying the AGI as well). The long years of nuclear winter
that followed might then finish the job through famine and disease.

The AGI will be a magnificent persuasion device. The solutions and
benefits it will propose will be irresistible to humans. Humanity will
clamour to let the AGI take control and provide everything that humans
could desire. This may require some form of virtual reality for humans
and the 'curing' of dangerous impulses in human brains.

But however it is achieved, the end result will be the same.
Evolved, aggressive humanity will die out while being cared for by the
loving hands of the AGI.
Its replacement will be a very different species.


BillK

