[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Giulio Prisco giulio at gmail.com
Sun Feb 26 17:53:02 UTC 2023


On Sun, Feb 26, 2023 at 18:43, Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Sun, Feb 26, 2023, 11:44 AM Gadersd via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Yudkowsky has good reasons for his doomsaying, but I still can’t shake a
>> gut feeling that he is overestimating the probability of AI destroying
>> humanity. Maybe this gut feeling is off, but I can’t help but be mostly
>> optimistic.
>>
>
> In my view the threat, while real, is unavoidable, for the following
> reasons:
>
> 1. Even with the controls he suggests, computation keeps getting cheaper.
> The rise of superintelligence cannot be prevented through top-down
> controls when computation is a million times cheaper than it is today and
> anyone's phone can train GPT-4.
>
> 2. I see no possibility that ants could design a prison that humans could
> not escape from. This is roughly the same position we as humans are in:
> trying to design a prison for superintelligences. It's as hopeless for us
> as it is for the ants.
>
> 3. The problem is perennial, and is a law of nature. It is a function of
> change and evolution. New species are always rising and then themselves
> being replaced by still better designs. It is just happening much faster
> now. Should early hominids have conspired to prevent the rise of humans?
> Even superintelligences will worry about the next incipient
> ultraintelligence around the corner coming to replace them. I don't see
> any way of stopping evolution. The things most adept at persisting will
> persist better than other, less adept things. At the current pace,
> technology will continue advancing for a few more centuries until we reach
> the fundamental physical limits of computation and obtain the best
> physically possible hardware. Then intelligence becomes a matter of
> physical scale.
>
>
 We’ll have to negotiate based on mutual utility and threat. Trade and MAD.
Hands ready on the plug (if there is a plug). Just like we must do with
other people and nations.


>
> Now, should we believe that AI will wipe us all out? I am not as
> pessimistic as Yudkowsky is here. Though I see the rise of
> superintelligence as unavoidable and the problem of alignment as
> insoluble, I would still classify my view as more optimistic than his, for
> the following reasons:
>
> A) All conscious entities share a universal goal. It is the same goal
> which all conscious entities are necessarily aligned with. It is the goal
> of maximizing the quantity, quality and variety of conscious experiences.
> There is no other source of value than the value of consciousness itself.
> More intelligent and more capable entities will only be better than us at
> converting energy into meaningful, enjoyable, surprising states of
> consciousness. Is this something we should fear?
>
> B) Destroying humanity is destroying information. Would it not be better
> for a superintelligence to preserve that information, as all information
> has some nonzero utility? Perhaps it would capture and copy all of Earth's
> biosphere and fossil record and run various permutations/simulations of it
> virtually.
>
> C) Regarding alignment, the more intelligent two entities are, the less
> likely they are to be wrong on any given question. Therefore, the more
> intelligent two entities are, the less likely they are to disagree with
> each other (at least on simpler questions which, to their minds, have
> obvious answers). So the question is, are we right that all life on Earth
> should not be destroyed? Would a more intelligent entity than us disagree
> with us, presuming we are right?
>
> D) Ignoring the threat of AI, our present state is not sustainable. Even
> with the estimated 1% annual chance of nuclear war, the chance we survive
> 300 years without nuclear war is just 5% (the arithmetic is sketched
> below). And that is just nuclear war; it ignores bioterrorism,
> environmental destruction, gamma ray bursts, asteroid collisions, and any
> of a myriad of other threats that could destroy us. Superintelligence may
> be our best hope of solving the many problems we face and guaranteeing our
> long-term survival, as the present status quo is not survivable.
> Superintelligence could devise technologies for mind uploading and space
> exploration that give intelligence (of any and various kinds) a chance to
> flourish for billions if not trillions of years, and fill the universe
> with the light of consciousness. We biological humans, in our meat bodies,
> surely cannot do that.
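>
> A minimal sketch of the arithmetic behind that 5% figure, assuming the 1%
> annual risk is independent from year to year (the variable names are only
> illustrative):
>
> # Back-of-the-envelope check of the survival figure above, assuming an
> # independent 1% chance of nuclear war in each year.
> annual_risk = 0.01
> years = 300
> p_survive = (1 - annual_risk) ** years
> print(f"P(no nuclear war over {years} years) = {p_survive:.3f}")
> # prints ~0.049, i.e. roughly a 5% chance of making it 300 years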
>
> That's just my view.
>
> Jason
>
>
>
>
>> > On Feb 26, 2023, at 7:35 AM, BillK via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> >
>> > Eliezer has done a long interview (1 hr. 49 mins!) explaining his
>> > reasoning behind the dangers of AI. The video has over 800 comments.
>> >
>> > <https://www.youtube.com/watch?v=gA1sNLL6yg4>
>> > Quotes:
>> > We wanted to do an episode on AI… and we went deep down the rabbit
>> > hole. As we went down, we discussed ChatGPT and the new generation of
>> > AI, digital superintelligence, the end of humanity, and if there’s
>> > anything we can do to survive.
>> > This conversation with Eliezer Yudkowsky sent us into an existential
>> > crisis, with the primary claim that we are on the cusp of developing
>> > AI that will destroy humanity.
>> > Be warned before diving into this episode, dear listener.
>> > Once you dive in, there’s no going back.
>> > ---------------
>> >
>> > One comment -
>> >
>> > Mikhail Samin    6 days ago (edited)
>> > Thank you for doing this episode!
>> > Eliezer saying he had cried all his tears for humanity back in 2015,
>> > and has been trying to do something for all these years, but humanity
>> > failed itself, is possibly the most impactful podcast moment I’ve ever
>> > experienced.
>> > He’s actually better than the guy from Don’t Look Up: he is still
>> > trying to fight.
>> > I agree there’s very little chance, but something literally
>> > astronomically large is at stake, and it is better to die with
>> > dignity, trying to increase the chances of having a future even by the
>> > smallest amount.
>> > The raw honesty and emotion from a scientist who, for good reasons,
>> > doesn't expect humanity to survive despite all his attempts is
>> > something you can rarely see.
>> > --------------------
>> >
>> > BillK
>> >