[ExI] uploads again

Stefano Vaj stefano.vaj at gmail.com
Tue Jan 8 14:48:19 UTC 2013


On 26 December 2012 11:30, Anders Sandberg <anders at aleph.se> wrote:

> Setting values of minds is a very weighty moral action, and should not be
> done willy-nilly. (Tell that to parents!)
>

Of course, very intelligent computers would not have "values" per se, any
more than very powerful power plants do.

OTOH, we can of course play with anthropomorphic or zoomorphic emulations
(of existing, past, or patchwork/artificial individuals) which *would*
exhibit a facsimile of agency, motivations, etc., to arbitrary degrees of
persuasiveness, and in that event would behave like your everyday Darwinian
agents. In that case, the idea of making them "structurally" and "eternally"
slaves to and/or sympathetic with a given moral system or set of goals
sounds ludicrous, for the same reasons that this appears neither possible
nor desirable for biological agents.

I have yet to hear arguments, however, showing that doing so would entail
any special "danger" not already present in a heterogeneous system such as
a mind upload, a biological brain in a vat connected to the appropriate
peripherals and co-processors, or a full-fledged human with equivalent
computing power at his or her fingertips.

I suspect that the fears in this respect can essentially be deconstructed
as a secularisation of the humanist Golem myth, where concepts such as
"us", humanity, humankind, and friendliness are taken uncritically and
never consistently and explicitly clarified as to their scope and/or
assumed importance.

--
Stefano Vaj