[extropy-chat] Eugen Leitl on AI design
Eliezer Yudkowsky
sentience at pobox.com
Wed Jun 2 13:40:07 UTC 2004
Eugen Leitl wrote:
> On Wed, Jun 02, 2004 at 08:21:29AM -0400, Eliezer Yudkowsky wrote:
>
>>wondering why you think you can give hardware estimates for intelligence
>>when you claim not to know how it works. I used to do that too, convert
>>synaptic spikes to floating-point ops and so on. Later I looked back on my
>>calculations of human-equivalent hardware and saw complete gibberish,
>>blatantly invalid analogies such as Greek philosophers might have used for
>>lack of any grasp whatsoever on the domain. People throw hardware at AI
>>because they have absolutely no clue how to solve it, like Egyptian
>>pharaohs using mummification for the cryonics problem.
>
> Many orders of magnitude more performance is a poor man's substitute for
> cleverness: it lets you sample the search space thoroughly enough to get
> lucky.
Right. But an AI found by brute-force search, rather than built through
understanding, automatically kills you. Worse, you have to be clever to
realize this. That is an urgent problem for the human species, but at
least I am not personally walking directly into the whirling razor
blades, now that I know better.
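
For concreteness, the synapse-to-FLOP conversion dismissed above usually
runs like the sketch below. Every figure in it is a commonly cited round
number, not a measurement or anything established in this thread, which
is exactly the objection:

    # The classic back-of-envelope "human-equivalent hardware" estimate.
    # All four inputs are assumptions; none is established in this thread.
    neurons             = 1e11  # assumed neuron count for a human brain
    synapses_per_neuron = 1e4   # assumed average synapses per neuron
    mean_firing_rate_hz = 1e2   # assumed average spike rate
    flops_per_event     = 1.0   # assume one FLOP per synaptic spike event

    brain_flops = (neurons * synapses_per_neuron
                   * mean_firing_rate_hz * flops_per_event)
    print(f"estimated throughput: {brain_flops:.0e} FLOP/s")  # -> 1e+17

Note also how little brute-force search the extra hardware buys: each
factor-of-ten speedup covers only about 3.3 additional bits of an
exhaustively sampled search space, since 10 is roughly 2^3.3.
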
> I think (based on my layman's understanding of the current state of the
> art) that a connectionist architecture, very probably a spiking one, is
> vital.
What on Earth does that have to do with intelligence? That is cargo-cult
engineering: building a wooden airfield and hoping that someone brings
cargo.
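
To make "spiking architecture" concrete: its usual building block is a
leaky integrate-and-fire unit, which accumulates input current, decays
toward rest, and emits a discrete spike when a threshold is crossed. A
minimal sketch, with all constants chosen purely for illustration:

    # Minimal leaky integrate-and-fire (LIF) neuron; constants are
    # illustrative assumptions, not values from this thread.
    def simulate_lif(inputs, tau=20.0, threshold=1.0, dt=1.0):
        """Return the step indices at which the unit spikes."""
        v, spikes = 0.0, []
        for t, drive in enumerate(inputs):
            v += dt * (-v / tau + drive)  # leak toward rest, integrate input
            if v >= threshold:            # threshold crossing: fire and reset
                spikes.append(t)
                v = 0.0
        return spikes

    print(simulate_lif([0.08] * 200))  # constant drive -> regular spiking

Whether stacking such units gets you intelligence is, of course, exactly
what is disputed above.
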
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence