[ExI] The point of emotions
stathisp at gmail.com
Mon Apr 21 12:54:25 UTC 2008
On 21/04/2008, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> On Mon, Apr 21, 2008 at 1:43 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> > My question is, would an
> > AI behaving in this way ipso facto have emotions: real likes and
> > dislikes, the way we experience them?
> The same old story. How would we know? How would you know that I
> personally do, for instance, rather than merely "mimicking" your
> emotions? For all you know, you might be the only entity in the
> universe really to feel.
> I think that with regard both to AIs *and* to other human beings, the
> healthier and more sensible approach is that of NLP (neuro-linguistic
> programming), that is: we do not know, we will never know; what we are
> doing is projecting.
I am not asking a philosophical question but, if you will, a practical
question about what we actually think is likely to be the case. We
cannot be absolutely sure that the Earth is spherical rather than flat
any more than we can be absolutely sure that other people have minds
like our own. However, it would be very surprising if the Earth really
were flat despite all the evidence and it would be at least as
surprising if other people didn't have minds despite all the evidence.
Operationalism may be fine as a game scientists play, but it isn't what
they actually *believe*, if they are honest with themselves.