[ExI] Semiotics and Computability
Gordon Swobe
gts_2000 at yahoo.com
Fri Feb 5 13:45:26 UTC 2010
--- On Fri, 2/5/10, Stathis Papaioannou <stathisp at gmail.com> wrote:
>> Does your watch understand what time it is, Stathis?
>> No, of course not. Yet it tells you the correct time
>> anyway, *as if* it had understanding. Amazing!
>
> The watch obviously does not behave exactly like a human,
> so there is no reason why it should understand time in the same way as
> a human does.
My point here is that intelligent behavior does not imply understanding.
We can construct robots that behave as intelligently as humans yet have no subjective understanding of anything whatsoever. We've already started doing so in limited domains. It's only a matter of time before we do it in the general case (weak AGI).
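To make the point concrete, here is a toy sketch in Python (my own illustration, nothing more): a handful of pattern-matching rules that answer a question about the time correctly, exactly as a watch does, while manipulating nothing but uninterpreted symbols.

import datetime
import re

# A toy "conversationalist": pure symbol manipulation, no semantics.
# It answers a question about the time correctly, as a watch does,
# without understanding what time (or anything else) is.
RULES = [
    (re.compile(r"what time is it", re.I),
     lambda: datetime.datetime.now().strftime("It is %H:%M.")),
    (re.compile(r"do you understand", re.I),
     lambda: "Of course I understand."),  # it doesn't
]

def reply(utterance: str) -> str:
    # Scan the rules in order; fire the first one whose pattern matches.
    for pattern, respond in RULES:
        if pattern.search(utterance):
            return respond()
    return "Tell me more."

if __name__ == "__main__":
    print(reply("What time is it?"))       # correct answer, zero understanding
    print(reply("Do you understand me?"))  # asserts understanding it lacks

The program's answers are correct and even conversational, but there is nothing it is like to be this program; its "intelligent" output is syntax all the way down.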
> The point is that it is impossible to make a brain or
> brain component that behaves *exactly* like the natural
> equivalent but lacks understanding.
Not impossible at all! Weak AI that passes the Turing test, behaving exactly like a human while understanding nothing, is entirely possible. It will just take a lot of hard work to get there.
-gts