[ExI] AI Motivation revisited
stefano.vaj at gmail.com
Thu Jun 30 16:57:13 UTC 2011
2011/6/29 Richard Loosemore <rloosemore at susaro.com>
> Stefano, your argument is fine .... except that you have neglected to notice that I was talking about whether a PC could simulate a mind "in real time". In other words, from the very beginning I have been talking about anything EXCEPT the universal computation issue!
> I never disputed whether a tinkertoy or a bunch of marbles running in a maze (or a Searlean idiot locked up in a room with pieces of paper being passed under the door) could simulate a mind .... hey, no problem: all of these things could simulate a mind if programmed correctly.
> All I cared about was whether a PC could do it in real time. In other words, fast enough to keep up with a human.
This is crystal clear. The relevance of my remark remains, however:
- Firstly, AGIs can by definition be implemented, and in fact no
special or very powerful hardware is required to do so; I would even
submit that the "intelligence" (in a rigorous sense) of a system is
irrelevant to its ability to exhibit AGI traits.
- Secondly, and conversely, we have no special reason to believe
that AGIs running on silicon will at some point reach a level of
performance vastly exceeding, or even comparable to, biological
brains *in what may be specific to the latter*, unless of course
truly disproportionate computing resources are thrown at the task
(and even then, limiting factors may exist for traditional
computers, at least from an engineering POV).
This is why all the debate about their "motivations" sounds moot to
me at this stage, especially since a very powerful computer not
running an AGI, but endowed with human-like motivations simply by
integrating an actual human being into the system, would be
indistinguishable from a runaway AGI for all practical purposes.