[ExI] Self improvement
Stefano Vaj
stefano.vaj at gmail.com
Tue Apr 26 10:49:28 UTC 2011
On 25 April 2011 04:47, The Avantguardian <avantguardian2020 at yahoo.com> wrote:
> I think what Eugen is getting at is that your dilemma is actually eerily similar
> to the one faced by God in the book of Genesis. In order to even qualify as an
> intelligent agent, the AI must always have a choice as to the actions it
> performs at any given time because decision-making is tied up somewhere in the
> very definitions of agency, morality, and intelligence. If you don't give it the
> ability to make decisions, you are simply programming an "app" or automaton.
Even though it is put in slightly more moralistic terms, this has been a Leitmotif of mine for quite a long time on this list.
1) We can have dangerous or non-dangerous machines which may be arbitrarily stupid or "intelligent" (at given tasks) without exhibiting anything closer to the "motivation" of biological entities than the average contemporary car, washing machine, or abacus does.
2) We can have, for the fun of it or as a way to perpetuate ourselves, machines emulating specific human beings (or animals) as well as generic ones, and/or a pseudo-Darwinian evolution thereof. In that case, they are as good or evil as any other Darwinian-driven mechanism, and the danger they represent cannot really be distinguished from that posed by the existence of a human being with equivalent computational resources at his or her fingertips (and, of course, an appropriately high-level interface thereto).
3) If we try to avoid the perceived threat of scenario (2), we simply end up in scenario (1), and in any event we are not avoiding anything in particular, since motivation is easily supplied by the human beings controlling the type (1) machines.
4) One fails to see, however, why we should be more concerned by the risk of "us" being taken over by "them" ("us" who?) than by any human generation being taken over by the subsequent one, or the genetic makeup of a species by that species' genetic makeup 10,000 years hence, or an old species by its successors.
5) As for personal, physical threats to our individual survival, the risk of being killed by a type (2) machine really appears infinitesimal in comparison with that of dying of old age, of accidents, or of being killed by type (1) machines and/or fellow biological entities (from bacteria to other human beings). Even if it were to happen, I suspect you would be more likely to end up murdered because you used to be the lover of your opponent's wife when he was still flesh and blood than because of his having been uploaded to a different platform.
But, hey, dreaming of reprogramming human beings (or emulations thereof) to make them "naturally" good while still having them remain behaviourally human rather than mechanism-like has been a time-honoured pastime for centuries; who am I to spoil the sport?
--
Stefano Vaj