[ExI] Did Hugo de Garis leave the field?
Kelly Anderson
kellycoinguy at gmail.com
Mon Apr 25 06:46:17 UTC 2011
On Sun, Apr 24, 2011 at 5:53 AM, Eugen Leitl <eugen at leitl.org> wrote:
> On Wed, Apr 20, 2011 at 11:02:17PM -0600, Kelly Anderson wrote:
>
>> There seems to be a philosophical position on the part of some that
>> you can't design intelligence that is more intelligent than yourself.
>
> Artificial intelligence is hard. We know that darwinian design
> can do it, and we have plenty of educated guesses to prime
> the evolutionary pump: animals.
Ok, I agree AI is hard. That being said, doing AI without human-level
computational power is probably even harder. That is, when your CPU
has as much computational power as a real human brain, it should be
SLIGHTLY easier to get results that are closer to human level. Trying
to do human-level AI with 1/100th of the power is definitely harder.
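(Back-of-envelope, to put a rough number on that: the brain figure below
is a Moravec-style estimate of ~10^14 ops/sec, and the ~10^11 FLOPS
desktop figure is a round 2011-era ballpark. Both are order-of-magnitude
guesses from the literature, not measurements, so treat the ratio as
illustrative only.)

    # Rough sketch: estimated brain compute vs. a 2011-era desktop CPU.
    # Both figures are order-of-magnitude assumptions, not measurements.
    brain_ops_per_sec = 1e14    # Moravec-style estimate; others cite 1e15-1e18
    desktop_flops_2011 = 1e11   # ~100 GFLOPS, a round number for a multi-core CPU
    print("Estimated shortfall: ~%.0fx" % (brain_ops_per_sec / desktop_flops_2011))

On those assumptions a 2011 desktop is short by roughly three orders of
magnitude, which is the kind of gap I mean.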
> It's just the easiest approach, particularly given the
> chronic hubris in the AI camp. I never understood the
> arrogance of early AI people, it was just not obvious that
> it was easy. But after getting so many bloody noses they
> still seem to think it's easy. Weird.
The results of the recent survey posted here seem to be that the AI
people are a lot more conservative than they used to be. The early AI
people can be forgiven, because back then nobody understood how hard
the things that are easy for us actually are.
>> I think that is just a ridiculous position. Just having an
>> intelligence with the same structure as ours, but on a better
>> substrate, or larger than a physical skull would result in higher
>
> It does seem that we're metabolically constrained, so a larger
> cortex should result in more processing capacity.
Seems logical.
> But the most
> interesting thing is that we've got some pretty extreme outliers
> which manage extremely impressive feats within the same metabolic
> or more or less genetic envelope, so it's worthwhile to look
> at how they differ from us bog humans. It might be the synaptic
> density and the fibre connectivity as well as molecular-scale
> features, but right now nobody really knows. Both postmortem
> and in vivo instrumentation are difficult.
Agreed. We may even stumble into a model for intelligence that is
superior to human processing in every way... it COULD happen...
>> intelligence rather trivially. Add a computer type memory without the
>> fuzziness of human memory and you would get better processing. I have
>
> Ah, this is not obvious. There are some extreme cases where people
> are cursed with a real photographic memory to the point of their
> processing ability being completely overwhelmed. Have you ever
> had the problem of seeing too many options simultaneously, and
> been unable to pick the right path?
Yes, but Google helps us get more done in less time. If future AIs
went with a model of precisely "looking up" the things they need to
know, that isn't the same as the idiot savant whose processing is poor
because of a too-good memory.
As an aside, studying the brains of these savants could be helpful in
figuring it all out...
>> never understood the argument that the brain is insufficiently bright
>> to understand itself, since we work in groups... it just confuses me
>
> I presume you're familiar with "The Mythical Man-month" and its more
> recent ilk.
Of course. And yet we built the Hoover Dam and went to the moon as a
group. Studying the brain is very much a group effort.
>> that anyone would take this position. It seems like saying one person
>> could never design a mission to land men on the moon. That may be
>> true, but it is also entirely irrelevant to whether we can accomplish
>> it as a species.
>
> Artificial intelligence is a lot harder than putting men on the moon.
> And will have a *slightly* higher impact than that.
Agreed, that's why we didn't do it first... :-)
-Kelly