[ExI] Yudkowsky ‘Humanity’s remaining timeline? It looks more like five years than 50’
Adrian Tymes
atymes at gmail.com
Tue Feb 20 16:48:36 UTC 2024
These proposals tend to ignore the China/Russia angle. AI coders in China,
Russia, and a few other nations will not be under any legislative ban, and
they are more likely to do The Wrong Thing (whatever specific Wrong Thing is
being discussed) than coders in nations where a ban is possible anyway. So a
ban just makes it more likely they'll do it while leaving us without the
tools to respond, since developing a thing includes studying how it works,
and that knowledge can be used to counter it.
Yudkowsky does acknowledge this and claims that military strikes may be
necessary to enforce such a ban. Unfortunately, that quickly leads to nuclear
war, so the question becomes whether the possible damage from AI
exceeds the definite damage from nuclear war...and, especially after
factoring in the odds, that case simply doesn't add up. The damage from
the means that would be required to enforce the ban greatly exceeds the
likely damage from unfettered AI.
On Tue, Feb 20, 2024 at 3:10 AM BillK via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> ‘Humanity’s remaining timeline? It looks more like five years than
> 50’: meet the neo-luddites warning of an AI apocalypse.
> From the academic who warns of a robot uprising to the workers worried
> for their future – is it time we started paying attention to the tech
> sceptics?
> by Tom Lamont Sat 17 Feb 2024
>
> <
> https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse
> >
>
> Quotes:
> Don’t imagine a human-made brain in one box, Yudkowsky advises. To
> grasp where things are heading, he says, try to picture “an alien
> civilisation that thinks a thousand times faster than us”, in lots and
> lots of boxes, almost too many for us to feasibly dismantle, should we
> even decide to.
> ------
> What would the others have us do? Stewart, the soldier turned grad
> student, wants a moratorium on the development of AIs until we
> understand them better – until those Russian-roulette-like odds
> improve. Yudkowsky would have us freeze everything today, this
> instant. “You could say that nobody’s allowed to train something more
> powerful than GPT-4,” he suggests. “Humanity could decide not to die
> and it would not be that hard.”
> ------------------------
>
> BillK
>
>