[extropy-chat] "3 Laws Unsafe" by the Singularity Institute
Adrian Tymes
wingcat at pacbell.net
Tue May 11 17:18:59 UTC 2004
--- Eugen Leitl <eugen at leitl.org> wrote:
> On Tue, May 11, 2004 at 12:07:45AM -0700, Adrian
> Tymes wrote:
> > Seriously, though, giving human-equivalent AIs the
> > same rights as humans seems to be at least a
> start.
>
> How do you propose to *prevent* them from *taking*
> the rights, I wonder?
> Solitary AI running in a sandbox on airgapped
> hardware is a pretty synthetic
> scenario.
>
> Everything else is uncontainable.
You have a point. But see my other post: even
solitary human intelligences running on airgapped
wetware have a history of taking rights, too. Giving
them that which they would get in the end increases
their chances of being friendly - and, I dare say,
even Friendly. It's not an absolute thing (one
cannot get 100% by this method), but in case the
absolute measures fail (that is, in case there turns
out to be no way to guarantee 100%), I'd at least
prefer to load the dice in our favor before rolling them.