[ExI] Eliezer at SXSW March 2025
BillK
pharos at gmail.com
Wed May 28 16:16:39 UTC 2025
On Wed, 28 May 2025 at 13:55, Adrian Tymes via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> These systems will be developed, and soon - if not by us, then by the bad guys, who are defined in their badness in this case by intentionally designing the systems toward malicious ends (biasing their service in favor of authoritarian regimes, for instance). "We" halting development will only empower them.
>
> He has called for all-out war on them. Even that would probably not suffice, and in any case, it's not happening. Given this reality, calls for "we" to stop are counterproductive.
Oh, I think Eliezer is well aware that nobody is going to stop AI
research. Vance has said that the USA is in an AI arms race with
China. Of course, if Eliezer is correct that a runaway AGI
would end the human race, then it makes no difference whether it is
developed by the USA or China. He is just trying to persuade all AI
researchers to be really, really careful.
As Jason said, the AI problem will probably be resolved in the near
future, for good or ill.
BillK