<div class="gmail_quote">On 18 April 2011 15:39, Ben Zaiboc <span dir="ltr"><<a href="mailto:bbenzai@yahoo.com">bbenzai@yahoo.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
I was disappointed to read this:<br>
<br>
"Ask yourself how it’s possible for a creature of a given intelligence level to be able to design a creature of greater intelligence. Designing a creature of superior intelligence requires a level of intelligence that the designer simply does not have. Therefore, it is logically impossible to use the traditional blueprint-design approach to create a creature of superior intelligence"<br>
</blockquote><div><br>Then again, it sounds as if there is some fundamental philosophical flaw in this line of reasoning.<br><br>I am more and more inclined to define "intelligence" simply as the ability to perform universal computation - something which, as Wolfram has shown, is indeed a very low threshold. I suspect, in fact, that "human" or "animal" intelligence is nothing other than a universal computation device running a very peculiar program. <br>
<br>Accordingly, something more intelligent than something else is simply something performing better at a given task. <br></div></div><br>Now, we already know that we are able to design devices offering better performance than human brains at given tasks (say, adding integers).<br>
<br>Why should there be tasks where we would be prevented from doing just the same? <br><br>For instance, I do not see any deep conceptual obstacle to designing devices that perform even better than humans in Turing tests. An entirely different story is the effort required to do so, and whether we should consider such an achievement a top priority.<br>
<br>-- <br>Stefano Vaj<br>