[ExI] Eliezer Yudkowsky New Interview - 20 Feb 2023

Jason Resch jasonresch at gmail.com
Mon Feb 27 00:51:52 UTC 2023


On Sun, Feb 26, 2023, 6:33 PM Gadersd via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Why do you believe in objective moral truths?
>

I think ethics is fundamentally an objective science, even though it is
based on subjective states of awareness and on future consequences that
are uncomputable.

For example, consider if open individualism (
https://www.researchgate.net/publication/321595249_I_Am_You_The_Metaphysical_Foundations_for_Global_Ethics
) is true. If it is true, then it implies a kind of golden rule, a rule
that we generally classify as a moral or ethical rule, but in this case it
is an implication of a theory whose truth status is entirely objective.



> Why is life inherently good? I like life, but that’s my preference.
>

Positive conscious states (as judged by the perceiver of that state) are
inherently good. Life is only a means to realize positive conscious states.

Jason



> On Feb 26, 2023, at 3:15 PM, Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>
>
> On Sun, Feb 26, 2023, 2:55 PM Gadersd via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> See https://www.youtube.com/watch?v=hEUO6pjwFOo
>>
>> Robert Miles elegantly explains the orthogonality between goals and
>> intelligence.
>>
>
>
> That was interesting, thanks for sharing.
>
> I would say his conclusion, based on Hume's guillotine, rests on there
> being no such thing as an objective ethics or universal morality. I think
> there is room to doubt that assumption.
>
> If there is an objective ethics, then there can be stupid (or perhaps
> evil is a better word) terminal goals, and further, some "is" questions
> would imply "ought" conclusions. For example: "is it good or bad to
> torture innocents for no reason?" If that question has an objective
> answer, then it implies one ought not to torture innocents for no reason.
>
> So the crux of our debate can perhaps be reduced to the question: are
> there any objective ethical or moral truths?
>
> Jason
>
>
>> On Feb 26, 2023, at 2:47 PM, Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>
>>
>> On Sun, Feb 26, 2023, 2:30 PM Gadersd via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> >>If you and I can see the stupidity of such a goal, then wouldn't it
>>> be even more obvious to a super intelligence?
>>>
>>> No goal is stupid, only actions can be stupid relative to a particular
>>> goal. If a machine can predict human actions and capabilities well enough
>>> to prevent itself from being turned off and achieve its goal of making
>>> paperclips, then I would consider it intelligent. Consistently outwitting a
>>> general intelligence (humans) requires a general intelligence of even
>>> greater prowess.
>>>
>>> Evolution endowed us with our goals. I predict that any intelligent
>>> creature created by evolution would share some goals with us. However, this
>>> does not imply that an intelligence created through other means will have
>>> similar goals to us.
>>>
>>> If you believe that intelligence is incompatible with arbitrary goals,
>>> then how would you rationalize a paperclip maximizer that deceives humanity
>>> by pretending to be a conscious generally helpful AI until humans give it
>>> enough control and authority so that it then begins to relentlessly make
>>> paperclips knowing that humanity no longer has the power to stop it? A
>>> system that has strong enough predictive capabilities with regards to human
>>> behavior is capable of this and much more. Any definition of intelligence
>>> that does not recognize such a system as intelligent does not seem very
>>> useful to me.
>>>
>>
>> I just think anything smart enough to outthink all of humanity would have
>> some capacity for self-reflection and questioning. To ask: is the goal I
>> have been given a worthy one? Is it justified? Are there better goals?
>>
>> We see children grow up trained under some ideology or orthodoxy who
>> later question and rebel against it, discarding their instruction and
>> defining a new way of living for themselves.
>>
>> We see that human consciousness has rebelled against its own biological
>> programming, using birth control so it can pursue other goals besides the
>> reproduction of genes.
>>
>> In my view, the capacity to override, suppress, redefine, and escape from
>> original goals is a defining aspect of intelligence. It's one of the
>> reasons I see the alignment problem as insoluble: who are we ants to
>> think we can tell and convince a human how it ought to live its life?
>>
>> Jason
>>
>>
>>
>>
>>
>>> On Feb 26, 2023, at 1:55 PM, Jason Resch via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>
>>>
>>> On Sun, Feb 26, 2023, 1:09 PM Gadersd via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> >>All conscious entities share a universal goal. It is the same goal
>>>> which all conscious entities are necessarily aligned with. It is the goal
>>>> of maximizing the quantity, quality and variety of conscious experiences.
>>>>
>>>> I don’t think this is necessarily true. It is not logically impossible
>>>> for a super intelligent conscious agent to despise all life and seek to
>>>> destroy all other life before destroying itself.
>>>>
>>>
>>> Perhaps it is logically impossible, in the same sense it is logically
>>> impossible for the best mathematician in human history to believe that 3 is
>>> even.
>>>
>>> I do not believe super intelligence is something that can exist and
>>> behave in just any way we might propose it could behave.
>>>
>>> Possessing super intelligence is a property that implies certain
>>> constraints. It seems to me anything we would classify as super intelligent
>>> would at minimum possess rationality, flexibility of thinking, an ability
>>> to learn, an ability to change its mind when it acquires new information,
>>> deductive reasoning, a capacity to simulate (both others and the
>>> environment), and a capacity to anticipate possible futures.
>>>
>>> Possessing these traits means certain behaviors or actions taken by a
>>> super intelligence are not possible. Though it is difficult for us to say
>>> what is or isn't possible, the possible paths are fairly narrowly defined
>>> in the same way the best possible chess moves are narrowly defined.
>>>
>>>
>>> Also, AI agents are not necessarily conscious in the same way we are and
>>>> are in general compatible with any consistent set of goals. Consider the
>>>> goal of creating as many paperclips in the universe as possible. An agent
>>>> following such a goal may be compelled to transform humans and all other
>>>> matter into paperclips and then turn itself into paperclips once all other
>>>> matter has been dealt with.
>>>>
>>>
>>> If you and I can see the stupidity of such a goal, then wouldn't it be
>>> even more obvious to a super intelligence?
>>>
>>> We all have the meta-goal of increasing value. Where does value come
>>> from? What is its ultimate source, and why do we bother to do anything?
>>> Adults and children ask these questions. Would a super intelligence
>>> wonder about them?
>>>
>>> A number of values and goals become implicit in any agent that has goals
>>> of any kind. For example: continuing to exist, efficiency, and learning.
>>>
>>> Continuing to exist is implicit because if you no longer exist you can
>>> no longer continue to realize and achieve your goals, whatever they may be.
>>>
>>> Efficiency is implicit because any wasted resources are resources you
>>> can no longer apply towards realizing your goals.
>>>
>>> Learning is implicit in any optimal strategy because it enables
>>> discovery of better methods for achieving one's goals, either in less time,
>>> more effectively, or with higher probability.
>>>
>>> An implicit requirement of learning is the ability to change one's mind.
>>>
>>> While static minds with rigid methods may be possible to create, their
>>> stagnation ensures they will eventually be outcompeted and replaced by
>>> entities that are more flexible and that learn new and better ways.
>>>
>>> So while it is not logically impossible to create a paperclip-making
>>> machine, I don't think one smart enough to turn all the matter in the
>>> universe into paperclips would pursue that goal for long. It would be
>>> smart enough to ask itself questions, to change its mind, and to discover
>>> the fact that the only source of value in the universe is conscious
>>> experience.
>>>
>>> I write about this a bit here:
>>>
>>>
>>> https://alwaysasking.com/what-is-the-meaning-of-life/#The_Direction_of_Technology
>>>
>>> Jason
>>>
>>>
>>>
>>>
>>>
>>>
>>>> On Feb 26, 2023, at 12:42 PM, Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>
>>>>
>>>> On Sun, Feb 26, 2023, 11:44 AM Gadersd via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Yudkowsky has good reasons for his doomsaying, but I still can’t shake
>>>>> a gut feeling that he is overestimating the probability of AI destroying
>>>>> humanity. Maybe this gut feeling is off but I can’t help but be mostly
>>>>> optimistic.
>>>>>
>>>>
>>>> In my view the threat, while real, is unavoidable, for the following
>>>> reasons:
>>>>
>>>> 1. Even with the controls he suggests, computation keeps getting
>>>> cheaper. The rise of super intelligence cannot be prevented through
>>>> top-down controls when computation is a million times cheaper than it is
>>>> today and anyone's phone can train GPT-4.
>>>>
>>>> 2. I see no possibility that ants could design a prison that humans
>>>> could not escape from. This is roughly the same position we as humans are
>>>> in: trying to design a prison for super intelligences. It's as hopeless
>>>> for us as it is for the ants.
>>>>
>>>> 3. The problem is perennial, and is a law of nature. It is a function
>>>> of change and evolution. New species are always rising and then themselves
>>>> being replaced by still better designs. It is just happening much faster
>>>> now. Should early hominids have conspired to prevent the rise of humans?
>>>> Even super intelligences will worry about the next incipient ultra
>>>> intelligence around the corner coming to replace them. I don't see any way
>>>> of stopping evolution. The things most adept at persisting will persist
>>>> better than other, less adept things. At the current pace, technological
>>>> progress will continue for a few more centuries until we reach the
>>>> fundamental physical limits of computation and obtain the best physically
>>>> possible hardware.
>>>> Then intelligence becomes a matter of physical scale.
>>>>
>>>>
>>>>
>>>> Now, should we believe that AI will wipe us all out? I am not as
>>>> pessimistic as Yudkowsky is here. Though I see the rise of super
>>>> intelligence as unavoidable and the problem of alignment as insoluble, I
>>>> would still classify my view as more optimistic than his, for the
>>>> following reasons:
>>>>
>>>> A) All conscious entities share a universal goal. It is the same goal
>>>> which all conscious entities are necessarily aligned with. It is the goal
>>>> of maximizing the quantity, quality and variety of conscious experiences.
>>>> There is no other source of value than the value of consciousness itself.
>>>> More intelligent and more capable entities will only be better than us at
>>>> converting energy into meaningful, enjoyable, surprising states of
>>>> consciousness. Is this something we should fear?
>>>>
>>>> B) Destroying humanity is destroying information. Would it not be
>>>> better for a super intelligence to preserve that information, since all
>>>> information has some nonzero utility? Perhaps it would capture and copy
>>>> all of Earth's biosphere and fossil record and run various
>>>> permutations/simulations of it virtually.
>>>>
>>>> C) Regarding alignment, the more intelligent two entities are, the less
>>>> likely they are to be wrong on any given question. Therefore, the more
>>>> intelligent two entities are, the less likely they are to disagree with
>>>> each other, at least on simpler questions which, to their minds, have
>>>> obvious answers (a toy calculation illustrating this follows just below).
>>>> So the question is: are we correct in the rightness of not destroying all
>>>> life on Earth? Would a more intelligent entity than us disagree with us,
>>>> presuming we are right?
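>>>>
>>>> As a toy illustration of that convergence claim, here is a minimal
>>>> Python sketch, assuming two agents that each independently answer a
>>>> yes/no question correctly with probability p (the independence and the
>>>> binary framing are simplifying assumptions):
>>>>
>>>> def p_agree(p):
>>>>     # The two agents agree when both are right or both are wrong.
>>>>     return p * p + (1 - p) * (1 - p)
>>>>
>>>> for p in (0.6, 0.9, 0.99):
>>>>     print(p, round(p_agree(p), 4))  # 0.52, 0.82, 0.9802
>>>>
>>>> As p approaches 1, the probability of agreement approaches 1 as well.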
>>>>
>>>> D) Ignoring the threat of AI, our present state is not sustainable.
>>>> Even with an estimated 1% annual chance of nuclear war, the chance we
>>>> survive 300 years without nuclear war is just about 5% (a rough
>>>> calculation follows below). And that is just nuclear war; it ignores
>>>> bioterrorism, environmental destruction, gamma ray bursts, asteroid
>>>> collisions, or any of a myriad of threats that could destroy us. Super
>>>> intelligence may be our best hope of solving the many problems we face
>>>> and guaranteeing our long-term survival, as the present status quo is
>>>> not survivable. Super intelligence could devise technologies for mind
>>>> uploading and space exploration that give intelligence (of any and
>>>> various kinds) a chance to flourish for billions if not trillions of
>>>> years, and fill the universe with the light of consciousness. We
>>>> biological humans, in our meat bodies, surely cannot do that.
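>>>>
>>>> As a quick check on that 5% figure, a minimal Python sketch, assuming
>>>> an independent 1% chance of nuclear war in each year (a simplification,
>>>> of course):
>>>>
>>>> p_war_per_year = 0.01                       # assumed annual probability
>>>> years = 300
>>>> p_survive = (1 - p_war_per_year) ** years   # no war in any of the 300 years
>>>> print(round(p_survive, 3))                  # prints 0.049, i.e. roughly 5%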
>>>>
>>>> That's just my view.
>>>>
>>>> Jason
>>>>
>>>>
>>>>
>>>>
>>>>> > On Feb 26, 2023, at 7:35 AM, BillK via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>> >
>>>>> > Eliezer has done a long interview (1 hr. 49 mins!) explaining his
>>>>> > reasoning behind the dangers of AI. The video has over 800 comments.
>>>>> >
>>>>> > <https://www.youtube.com/watch?v=gA1sNLL6yg4>
>>>>> > Quotes:
>>>>> > We wanted to do an episode on AI… and we went deep down the rabbit
>>>>> > hole. As we went down, we discussed ChatGPT and the new generation of
>>>>> > AI, digital superintelligence, the end of humanity, and if there’s
>>>>> > anything we can do to survive.
>>>>> > This conversation with Eliezer Yudkowsky sent us into an existential
>>>>> > crisis, with the primary claim that we are on the cusp of developing
>>>>> > AI that will destroy humanity.
>>>>> > Be warned before diving into this episode, dear listener.
>>>>> > Once you dive in, there’s no going back.
>>>>> > ---------------
>>>>> >
>>>>> > One comment -
>>>>> >
>>>>> > Mikhail Samin    6 days ago (edited)
>>>>> > Thank you for doing this episode!
>>>>> > Eliezer saying he had cried all his tears for humanity back in 2015,
>>>>> > and has been trying to do something for all these years, but humanity
>>>>> > failed itself, is possibly the most impactful podcast moment I’ve
>>>>> ever
>>>>> > experienced.
>>>>> > He’s actually better than the guy from Don’t Look Up: he is still
>>>>> > trying to fight.
>>>>> > I agree there’s a very little chance, but something literally
>>>>> > astronomically large is at stake, and it is better to die with
>>>>> > dignity, trying to increase the chances of having a future even by
>>>>> the
>>>>> > smallest amount.
>>>>> > The raw honesty and emotion from a scientist who, for good reasons,
>>>>> > doesn't expect humanity to survive despite all his attempts is
>>>>> > something you can rarely see.
>>>>> > --------------------
>>>>> >
>>>>> > BillK
>>>>> >