[ExI] Kelly's future
Kelly Anderson
kellycoinguy at gmail.com
Thu May 26 20:58:09 UTC 2011
2011/5/24 Stefano Vaj <stefano.vaj at gmail.com>:
> On 23 May 2011 23:22, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>>
> My philosophical point, not a very important one for that matter, is that a
> plant which has been grown in a garden is indistinguishable from an
> identical plant which has been built on the basis of an explicit blueprint
> by, say, the end terminal of a teleporter.
Granted.
> The same goes for the software end product of either a training mechanism or
> an explicit programming effort, even though the second may well be faced
> with unpractical or intractable difficulties.
Except that mixing modes is very difficult. You pretty much have to
pick either the programming model or the training model; combining the
two is exceptionally hard, just as partially growing a plant while
also synthesizing the rest of it would be.
> Accordingly, what makes me doubt very much the idea that we are ourselves
> AGI produced by some Intelligent Designer is more Occam's razor than any
> mystical quality which would distinguish ourselves from such a product.
I don't go for intelligent design either.
>> > being fully emulatable like anything else has little to do with
>> > intelligence. As for qualia, they are a dubious linguistic and
>> > philosophical
> artifact of little use at all.
>>
>> > I suspect "consciousness" to be just an evolutionary artifact that
>> > albeit
>> I dunno... "redness" seems useful for communicating between sentient
>> beings. So I'm not sure how useless it is. Please elaborate.
>
> Redness is a useful label, which can be made use of in communicating with
> any entity, "sentient" or not, that can discriminate and handle the relevant
> feature of red objects. As to what it "really" means, if anything at all, to
> a PC, to an eagle or to a fellow human being who might well be a
> philosophical zombie for all I know, I am inclined to contend that the
> question is undecidable and irrelevant.
When you get down to the raw philosophy of it, you are correct. I
think most of us, though, tend to be a bit more practical than
philosophers allow for.
>> I agree that much of what we think we observe is a kind of
>> hallucination. Our eyes simply aren't good enough optically to produce
>> the model that is in my mind of the world.
>
> No, what I mean is that we project our own feelings and experience on other
> things. According to the PNL approach, this may be empirically convenient
> sometimes, but not only is it philosophically unwarranted and useless, it can
> also entangle us in ineffective behaviours and paradoxes.
Yes, I see that point.
>> All right, I guess I see your point. It isn't rape unless it has the
>> psychological component of doing damage to the other being. So we are
>> going to be stuck with assholes who won't be happy with their sexbot,
>> no matter what. Perhaps they will rape my sexbot... and I'll probably
>> be none too happy about it. ;-)
>
> Yes, this is also an interesting point I had not thought of (consensual rape
> may not qualify for the rapist in the first place).
ya.
> But I was seeing things more from the side of the victim, suggesting that
> the victims themselves cannot really claim to have been raped from their own POV
> unless their dislike and refusal are sincere...
Being a capitalist entrepreneur, I tend to look at things from the POV
of the customer. In this case, the rapist.
> Accordingly, those who might like to suffer an *actual* rape, as opposed to
> just seeing it mimicked, are bound never to have the experience they
> crave... :-)
That's pretty easy, just put on the right dress and walk around the
wrong neighborhood at the wrong time. :-) No sexbot required. Your
point is well made though.
-Kelly