[ExI] Existential risk of AI
sjatkins
sjatkins at protonmail.com
Tue Mar 14 23:52:04 UTC 2023
------- Original Message -------
On Tuesday, March 14th, 2023 at 7:15 AM, Stuart LaForge via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>
> I have over the years been a critic of Eliezer's doom and gloom. Not
> because I think his extinction scenarios are outlandish, but because
> the technology has enough upside to be worth the risk. That being
> said, I believe that we cannot give in to the animal spirits of
> unfounded optimism and must tread carefully with this technology.
>
> It is true that the current generation of AIs, which use massive
> inscrutable tensors to simulate sparse neural networks, are black
> boxes. But so are the biological brains that they are
> reverse-engineered from. We don't know any more about how the brain
> gives rise to intelligent goal-seeking behavior than we do about how
> ChatGPT writes poetry. Therefore, I agree that there are landmines
> ahead that we must be wary of.
It has long been my belief that the lack of significantly more effective intelligence on this planet is a much greater x-risk than the chance that AGI will go full Terminator. I am pretty sure the "Great Filter" that answers the Fermi Paradox is the complexity of accelerating technology exceeding the intelligence and decision-making speed of the technological species. I think we are stewing in that.
I think Eliezer's greatest failing was becoming thoroughly infected with the Precautionary Principle, spreading it to an absurd degree, and thus slowing the development of more intelligence on this planet. The very notion that we should not work to develop intelligence higher than our own until we can guarantee we have bound its development is amazingly arrogant and self-defeating.