[ExI] Did Hugo de Garis leave the field?

Stefano Vaj stefano.vaj at gmail.com
Thu Apr 21 15:47:19 UTC 2011

On 18 April 2011 15:39, Ben Zaiboc <bbenzai at yahoo.com> wrote:

> I was disappointed to read this:
> "Ask yourself how it’s possible for a creature of a given intelligence
> level to be able to design a creature of greater intelligence. Designing a
> creature of superior intelligence requires a level of intelligence that the
> designer simply does not have. Therefore, it is logically impossible to use
> the traditional blueprint-design approach to create a creature of superior
> intelligence"

Here again, it sounds as if there is some fundamental philosophical flaw
in this line of reasoning.

I am more and more inclined to define "intelligence" simply as the ability
to perform universal computation - something which, as Wolfram has shown,
is indeed a very low threshold. I suspect in fact that "human" or "animal"
intelligence is nothing other than a universal computation device running a
very peculiar program.
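To make the "very low threshold" concrete: the canonical example behind Wolfram's claim is Rule 110, an elementary cellular automaton in which each cell's next state depends only on itself and its two neighbours, and which Cook proved Turing-complete. A minimal sketch (the rule number, encoding, and wrapping boundary are standard, but this particular implementation is just illustrative):

```python
# Rule 110: an elementary cellular automaton proven Turing-complete.
# Each cell's next state is a function of only three bits (left
# neighbour, self, right neighbour), yet the system can perform
# universal computation - an extremely simple device crossing the
# universality threshold.

RULE = 110  # the 8-entry update table, encoded as the bits of 110


def step(cells):
    """Apply one Rule 110 update to a tuple of 0/1 cells (edges wrap)."""
    n = len(cells)
    return tuple(
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )


# Evolve a single live cell and print a few generations.
row = tuple(1 if i == 30 else 0 for i in range(31))
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The point is not the output pattern but the size of the program: the entire dynamics fits in one eight-bit lookup table.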

Accordingly, something more intelligent than something else is simply
something that performs better at a given task.

Now, we already know that we are able to design devices offering better
performance than human brains at given tasks (say, adding integers).

Why should there be tasks at which we would be prevented from doing just the same?

For instance, I do not see any deep conceptual obstacle to designing devices
that perform even better than humans in Turing tests. An entirely different
question is the effort required to do so, and whether we should consider such
an achievement a top priority.

Stefano Vaj