[ExI] AI thoughts from Twitter

Colin Hales col.hales at gmail.com
Sat Apr 6 23:32:14 UTC 2024


I was there at the start, and saw that rarefied pyramid scheme of
industrious paranoia overcome him. Why he projects such an unstoppable
darkness of spirit into artificial versions of natural intelligence is
something those who know him better might understand. But the fearing
became his schtick. And the schtick then owned him. At least that is how it
appears to me.

Meanwhile the real paperclip monster is already unleashed: automation
labelled as AI, bad not through dark values but as the worst of bad
automation, applied by bad actors and unleashed in hubris in contexts rife
with complexity beyond its capabilities. Add a dose of server-farm heat
death inflicted by a science that mistakes what it is doing for AI: a
science community that doesn't know it rests on hypotheses whose empirical
proof is its actual job, a job at which it has failed nonstop since the
1950s, and which is now consuming us in a "scale is all you need" feeding
frenzy.


This is the real extinction threat.

When real AGI happens, and if we ever let it reach human level, and it is
searching the archives to understand its own history, it will be saddened
to see the worst of humanity's greed and folly unleashed as a byproduct,
before its superhuman largesse helps save us from ourselves.

This, I think, is the real intellect that lies ahead. It will put us all to
shame. General intelligence at bug-level deserves better forebears than us.

Colin Hales





On Sat, Apr 6, 2024, 12:40 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Eliezer is an old-timer on this list, but he has not been here for
> many years.  He is one of the main voices warning about the extinction
> risk from AI.
>
> [EY]  But if you think it's okay for Google to kill everyone, but not
> okay for a government to do the same -- if you care immensely about
> that, but not at all about "not dying" -- then I agree you have a
> legitimate cause for action in opposing me. Like, if my policy push
> backfires and only sees partial uptake, there's a very real chance
> that the distorted version that gets adopted, changes which entities
> kill everyone on Earth; shifting it from "Google" to "the US
> government, one year later than this would have otherwise occurred".
> If you think that private companies, but not governments, are okay to
> accidentally wipe out all life on Earth, I agree that this would be
> very terrible.
>
> [KH]
>
> I have a logic problem with your analysis. A superintelligent AI is
> going to be able to project the consequences of its actions, so it
> seems unlikely that it would accidentally wipe out humans or life.
> That leaves intentional destruction, which seems unlikely as well.
>
> It seems probable that you have read The Revolution From Rosinante by
> Alexis A. Gilliland. The story is chock-full of AIs. The AIs' relation
> to humans is headed in the direction of our relation to cats. One of
> the AIs remarks that God created humans as a tool to build computers,
> something that God could not do without violating his own rules.
>
> But consider your worst projection, that AIs kill all humans or even
> all life on Earth. Do the AIs replace humans as possibly the only
> thinking items in the universe? Do they go forth to the stars? Do they
> remember us?
>
> {This had a reply}
>
> Daniel Houck (@daniel_houck), Mar 30:
> The majority of Eliezer’s argument is about why it *is* likely for an
> AI to deliberately kill everyone.
>
> And then: they go to the stars; they “remember” humans in the sense
> that they have data, but rarely access it; and they are “thinking”
> “entities” but not like you’re thinking.
>
> There is an EY rant here:
> https://newatlas.com/technology/ai-danger-kill-everyone
>
>
> Keith
>