[ExI] Kelly's future

Stefano Vaj stefano.vaj at gmail.com
Tue May 24 10:52:31 UTC 2011


On 23 May 2011 23:22, Kelly Anderson <kellycoinguy at gmail.com> wrote:

> Once achieved, an AGI is easily replicated. That much I will grant
> you. But mixing explicit programming with a training process is very
> difficult.
>

My philosophical point, not a very important one for that matter, is that a
plant which has been grown in a garden is indistinguishable from an
identical plant which has been built from an explicit blueprint by, say,
the receiving end of a teleporter.

The same goes for the software end product of either a training mechanism or
an explicit programming effort, even though the latter may well face
impractical or intractable difficulties.

Accordingly, what makes me doubt very much the idea that we are ourselves
AGIs produced by some Intelligent Designer is Occam's razor rather than any
mystical quality that would distinguish us from such a product.


> > I suspect "consciousness" to be just an evolutionary artifact that,
> > albeit being fully emulatable like anything else, has little to do with
> > intelligence. As for qualia, they are a dubious linguistic and
> > philosophical artifact of little use at all.
>
> I dunno... "redness" seems useful for communicating between sentient
> beings. So I'm not sure how useless it is. Please elaborate.
>

Redness is a useful label, one that can be used in communicating with any
entity, "sentient" or not, that can discriminate and handle the relevant
feature of red objects. As to what it "really" means, if anything at all, to
a PC, to an eagle, or to a fellow human being who might well be a
philosophical zombie for all I know, I am inclined to contend that the
question is both undecidable and irrelevant.


> I agree that much of what we think we observe is a kind of
> hallucination. Our eyes simply aren't good enough optically to produce
> the model that is in my mind of the world.
>

No, what I mean is that we project our own feelings and experience onto
other things. According to the NLP (neuro-linguistic programming) approach,
this may sometimes be empirically convenient, but it is not only
philosophically unwarranted and useless; it can also entangle us in
ineffective behaviours and paradoxes.

> All right, I guess I see your point. It isn't rape unless it has the
> psychological component of doing damage to the other being. So we are
> going to be stuck with assholes who won't be happy with their sexbot,
> no matter what. Perhaps they will rape my sexbot... and I'll probably
> be none too happy about it. ;-)
>

Yes, this is also an interesting point I had not thought of (consensual rape
may not qualify as rape for the rapist in the first place).

But I was seeing things more from the side of the victim, suggesting that
the victims themselves cannot really be said to be raped from their own POV
unless their dislike and refusal are sincere...

Accordingly, those who might like to suffer an *actual* rape, as opposed to
merely having it mimicked, are bound never to have the experience they
crave... :-)


-- 
Stefano Vaj

