[ExI] Against the paperclip maximizer or why I am cautiously optimistic
jasonresch at gmail.com
Mon Apr 3 15:02:52 UTC 2023
On Mon, Apr 3, 2023, 9:54 AM Tara Maya via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On Apr 3, 2023, at 2:52 AM, Rafal Smigrodzki via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
> The AI would not make any trivial mistakes, ever, including mistakes in
> ethical reasoning.
> I can agree with what you said except this. I believe that the more
> intelligent a species the more profound mistakes it can make. I think this
> is simply because the more intelligent a mind is the more choices open to
> it and the greater the possibility that some of those choices will be
> wrong, even by its own moral code.
> I'm not a doomsayer about AI. This applies to any sentient beings, human,
> animal, machine or alien.
> This is simply, to me, part of any definition of intelligence, that it
> evolves to guide "free will," which is the ability to make choices among
> many possible actions, according to values that have shorter or longer term
> pay-offs, and includes the possibility of being unable to always calculate
> the best long-term payoff for itself and others.
Building on this, any system of ethics based on consequences (i.e.
consequentialism/utilitarianism) is uncomputable in the long term, as the
future can never be predicted with complete accuracy.
Even a superhuman intelligence guided by the principle of doing the best
for itself and others will still make errors in calculation, and can never
provide optimal decisions in all cases or over all timeframes.
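The point about long-term unpredictability can be made concrete with a toy simulation (my example, not from the thread): even a perfect model of a simple deterministic system loses predictive power if its knowledge of the initial state is off by a tiny amount. The logistic map is a standard illustration of such sensitive dependence on initial conditions.

```python
def logistic(x, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for `steps` steps."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# The "true" world and the predictor's model differ by one part in a billion.
true_state = 0.400000000
model_state = 0.400000001

for horizon in (10, 30, 50):
    err = abs(logistic(true_state, horizon) - logistic(model_state, horizon))
    print(f"after {horizon} steps, prediction error = {err:.6f}")
```

Over short horizons the forecast is nearly exact; over longer ones the error grows until the prediction is no better than a guess. A consequentialist reasoner faces the same wall: the quality of its forecasts, and hence of its moral calculations, decays with the time horizon no matter how much intelligence it applies.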
The best we can achieve, I think, will reduce to some kind of learned
heuristic.
Smullyan, Bennett, and Chaitin seem to have reached a similar conclusion:
"In the dialog, Smullyan comes up with a wonderful definition of the Devil:
the unfortunate length of time it takes for sentient beings as a while to
come to be enlightened. This idea of the necessary time it takes for a
complex state to come about has been explored mathematically in a
provocative way by Charles Bennett and Gregory Chaitin. They theorize that
it may be possible to prove, by arguments similar to those underlying
Gödel's Incompleteness Theorem, that there is no shortcut to the
development of higher and higher intelligences (or, if you prefer, more and
more "enlightened" states); in short, that "the Devil" must get his due."
Pages 342-343, in "The Mind's I"
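The "no shortcut" idea Bennett formalized as "logical depth" has a rough everyday analogue in inherently sequential computation. A hedged sketch (my analogy, not Bennett's or Chaitin's): iterated hashing, where each step consumes the previous step's output, has no known way to jump ahead to step N without doing all N steps. This is the idea behind verifiable delay functions; it is an illustration of the intuition, not a proof of the theorized result.

```python
import hashlib

def deep_value(seed: bytes, steps: int) -> bytes:
    """Apply SHA-256 `steps` times; each iteration depends on the last,
    so the computation is (as far as anyone knows) inherently sequential."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# Reaching the state after 10,000 steps requires performing 10,000 steps;
# there is no known closed-form shortcut.
print(deep_value(b"seed", 10_000).hex()[:16])
```

On this analogy, an "enlightened" or deeply organized state is like the final hash: its value is cheap to verify once you have it, but the time spent getting there cannot, so far as we know, be compressed away.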