[extropy-chat] Fundamental limits on the growth rate of superintelligences

kevinfreels.com kevin at kevinfreels.com
Tue Feb 14 06:56:31 UTC 2006


>
> In your life I am certain you have met people you judge to be smart and
> people you judge to be stupid; how exactly did you quantify it?

I've never attempted to quantify the growth rate of a person's intelligence,
only the level they seemed to be at in a given moment - like when they pull
in front of me on the freeway. You are asking about fundamental limits on
the rate at which a super AI becomes smarter, and I am wondering how anyone
could answer that question.

You seem to be working from the idea that every upgrade an AI makes will be
a true improvement, and that every decision will be perfectly geared toward
future goals already worked out two billion steps in advance. I think it's a
long stretch to think that any AI would be perfect and never make mistakes.
I would expect them to be set back by errors and bad decisions just as we
are, although they would see the world much differently and make different
kinds of mistakes. Emotions could get in the way of hardcore number
crunching, but they may choose to add emotions anyway, just for the pleasure
of it. You simply can't know their minds or motivations once they become
independent. Some may even go flat-out crazy.

Also, you are assuming that the AI has nothing better to do with its time
than improve upon itself. It may very well become so interested in observing
that it never chooses to do anything but observe. The AI version of the
couch potato...

As for a fundamental limit on their ultimate intelligence, well, I would
suppose that a computer will never be built that can solve a problem before
it is presented with the problem. That makes a nice limit. But they will
only get there if they choose to go there.

You do bring up a good point, although I don't know if you meant to. I know
all sorts of idiots out there, and they are all considered "intelligent" or
"sentient". Has anyone considered the possibility that an AI could be
created that was sentient but stupid? I guess you could call it Artificial
Stupidity?
