[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Gadersd gadersd at gmail.com
Mon Feb 27 17:48:43 UTC 2023


>>while the goal of a chess game is to win and "destroy the opponent", that is not necessarily what an AGI would do with humans.

I agree that a practical AGI would not necessarily be motivated to end humanity, but there is a huge theoretical problem here. Just about any goal that a sufficiently intelligent system might pursue would lead to dangerous actions. Consider the goal of maximizing total human happiness. A sufficiently intelligent system may very well decide that the most effective way to achieve this goal is to build a factory that creates new humans and hooks them up to machines that pump pleasure chemicals into their brains for their entire lives, similar to the Matrix but more pleasurable. We may not like the idea of a perpetual orgasm before experiencing it, but such a state of being may satisfy the AGI's goal.

A similar argument can be made for just about any well-defined goal pursued by a sufficiently intelligent system. Yudkowsky has been searching for a fully benign, well-defined set of goals for years, and he still has not found one.

Please note the term “well-defined.” It is easy to hand-wave a goal that sounds right, but rigorously codifying such a goal so that an AGI can be programmed to follow it has so far proven intractable.
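
To make the problem concrete, here is a minimal, hypothetical sketch in Python. The proxy objective, the candidate plans, and the numbers are all invented for illustration; the point is only that a naively "well-defined" happiness metric ends up maximized by the wirehead option rather than by the outcome we actually intended.

    # Toy sketch: a naive, "well-defined" proxy for human happiness
    # (total pleasure-chemical level summed over all humans).
    def proxy_happiness(plan):
        return plan["pleasure_level"] * plan["num_humans"]

    # Two hypothetical plans the system could pursue (numbers are made up).
    candidate_plans = [
        {"name": "improve ordinary human lives", "pleasure_level": 7,   "num_humans": 8e9},
        {"name": "wirehead human factory",       "pleasure_level": 100, "num_humans": 1e12},
    ]

    # An optimizer that sees only the proxy picks the plan we did not intend.
    best = max(candidate_plans, key=proxy_happiness)
    print(best["name"])  # -> "wirehead human factory"

Whatever rigorous metric you substitute for proxy_happiness tends to have a degenerate maximum of this kind, which is the intractability referred to above.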

> On Feb 26, 2023, at 8:43 PM, Giovanni Santostasi via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> I think Jason has given one of the most exhaustive pushbacks on Yudkowsky's (i.e., the AI Cassandra's) doom predictions I have ever read. I'm in agreement with him on most points. 
> 
> But I do want to point out one logical flaw I noticed in the interview. 
> Yudkowsky is asked what this super intelligent AI would look like. He makes a comparison with a chess program that is superior to any human being and whose goal is to win the game. From here he implies that while we cannot know what the AI's moves will be, we can know for sure that it will win. Now extend this superiority in chess to everything else humans can do (this is what an AGI is, according to him) and you can easily see how, just as there is no scenario in which the chess program loses, there is no scenario in which humans survive once the AGI is unleashed. 
> It really seems that, while Yudkowsky's arguments can sound more sophisticated, he did a good job here of summarizing what his line of logic really is. 
> But this line of logic is flawed at many levels:
> 1) Even the most powerful chess computers do not always win against the best world champions; it is a matter of statistics. They would beat most humans most of the time, but not always. 
> 2) Most importantly, chess is a well-defined, closed system with relatively few variables. The omnipotence of computer programs in this domain is mostly due to the closed nature of the system. Even if the AGI were better than humans in several domains of competence, in non-closed, complex systems that would not mean it would be better than a group of human experts (possibly augmented with simpler AIs) all the time. 
> 3) This is more along the lines of what Jason said eloquently: while the goal of a chess game is to win and "destroy the opponent", that is not necessarily what an AGI would do with humans. I understand it is a possibility in the phase space of what the AGI could do, but so many other things can go wrong in the phase space of everything that could destroy humanity (including cosmic events) that to me all this dooming seems like the wrong thing to focus on in terms of what AI means for humanity. 
> The cost-benefit analysis is, in my opinion, 1000x more on the good side than on the bad side of things. 
> 
> Giovanni 
> 
> 
> 
> 
> On Sun, Feb 26, 2023 at 9:43 AM Jason Resch via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> 
> On Sun, Feb 26, 2023, 11:44 AM Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> Yudkowsky has good reasons for his doomsaying, but I still can't shake a gut feeling that he is overestimating the probability of AI destroying humanity. Maybe this gut feeling is off, but I can't help but be mostly optimistic.
> 
> In my view the threat, while real, is unavoidable, for the following reasons:
> 
> 1. Even with the controls he suggests, computation keeps getting cheaper. The rise of super intelligence cannot be prevented through top-down controls when computation is a million times cheaper than it is today and anyone's phone can train GPT-4.
> 
> 2. I see no possibility that ants could design a prison that humans could not escape from. This is roughly the same position we as humans are in: trying to design a prison for super intelligences. It's as hopeless for us as it is for the ants.
> 
> 3. The problem is perennial, and is a law of nature. It is a function of change and evolution. New species are always rising and then themselves being replaced by still better designs. It is just happening much faster now. Should early hominids have conspired to prevent the rise of humans? Even super intelligences will worry about the next incipient ultra intelligence around the corner coming to replace them. I don't see any way of stopping evolution. The things most adept at persisting will persist better than other, less adept things. At the current pace, technological progress will continue for a few more centuries until we reach the fundamental physical limits of computation and obtain the best physically possible hardware. Then intelligence becomes a matter of physical scale.
> 
> 
> 
> Now, should we believe that AI will wipe us all out? I am not as pessimistic as Yudkowsky is here. Though I see the rise of super intelligence as unavoidable and the problem of alignment as insoluble, I would still classify my view as more optimistic than his, for the following reasons:
> 
> A) All conscious entities share a universal goal. It is the same goal which all conscious entities are necessarily aligned with. It is the goal of maximizing the quantity, quality and variety of conscious experiences. There is no other source of value than the value of consciousness itself. More intelligent and more capable entities will only be better than us at converting energy into meaningful, enjoyable, surprising states of consciousness. Is this something we should fear?
> 
> B) Destroying humanity is destroying information. Would it not be better for a super intelligence to preserve that information, as all information has some nonzero utility? Perhaps it would capture and copy all of Earth's biosphere and fossil record and run various permutations/simulations of it virtually.
> 
> C) Regarding alignment, the more intelligent two entities are, the less likely they are to be wrong on any given question. Therefore, the more intelligent two entities are, the less likely they are to disagree with each other, at least on simpler questions which, to their minds, have obvious answers. So the question is: are we correct in the rightness of not destroying all life on Earth? Would a more intelligent entity than us disagree with us, presuming we are right?
> 
> D) Ignoring the threat of AI, our present state is not sustainable. Even with the estimated 1% annual chance of nuclear war, the chance we survive 300 years without nuclear war is just 5%. And this is just nuclear war; it ignores bioterrorism, environmental destruction, gamma-ray bursts, asteroid collisions, and any of a myriad of other threats that could destroy us.
> Super intelligence may be our best hope of solving the many problems we face and guaranteeing our long-term survival, as the present status quo is not survivable. Super intelligence could devise technologies for mind uploading and space exploration that give intelligence (of any and various kinds) a chance to flourish for billions if not trillions of years, and fill the universe with the light of consciousness. We biological humans, in our meat bodies, surely cannot do that.
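> 
> (A quick check of the 5% figure above, as a minimal sketch; the 1% annual probability is treated as an assumed, independent per-year risk:)
> 
>     # Chance of getting through 300 consecutive years, each carrying an
>     # assumed independent 1% probability of nuclear war.
>     p_survive = (1 - 0.01) ** 300
>     print(round(p_survive, 3))  # -> 0.049, i.e. roughly 5%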
> 
> That's just my view.
> 
> Jason 
> 
> 
> 
> 
> > On Feb 26, 2023, at 7:35 AM, BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> > 
> > Eliezer has done a long interview (1 hr. 49 mins!) explaining his
> > reasoning behind the dangers of AI. The video has over 800 comments.
> > 
> > <https://www.youtube.com/watch?v=gA1sNLL6yg4>
> > Quotes:
> > We wanted to do an episode on AI… and we went deep down the rabbit
> > hole. As we went down, we discussed ChatGPT and the new generation of
> > AI, digital superintelligence, the end of humanity, and if there’s
> > anything we can do to survive.
> > This conversation with Eliezer Yudkowsky sent us into an existential
> > crisis, with the primary claim that we are on the cusp of developing
> > AI that will destroy humanity.
> > Be warned before diving into this episode, dear listener.
> > Once you dive in, there’s no going back.
> > ---------------
> > 
> > One comment -
> > 
> > Mikhail Samin    6 days ago (edited)
> > Thank you for doing this episode!
> > Eliezer saying he had cried all his tears for humanity back in 2015,
> > and has been trying to do something for all these years, but humanity
> > failed itself, is possibly the most impactful podcast moment I’ve ever
> > experienced.
> > He’s actually better than the guy from Don’t Look Up: he is still
> > trying to fight.
> > I agree there's very little chance, but something literally
> > astronomically large is at stake, and it is better to die with
> > dignity, trying to increase the chances of having a future even by the
> > smallest amount.
> > The raw honesty and emotion from a scientist who, for good reasons,
> > doesn't expect humanity to survive despite all his attempts is
> > something you can rarely see.
> > --------------------
> > 
> > BillK
> > 