[ExI] Self improvement
Anders Sandberg
anders at aleph.se
Thu Apr 21 22:38:20 UTC 2011
Richard Loosemore wrote:
>
> I disagree with the statement that "the results in theoretical AI have
> put some constraints on it". These theoretical results exploit the
> fact that there is no objective measure of "intelligence" to get their
> seemingly useful results, when in fact the results say nothing of
> importance.
So you think Shane Legg's definition doesn't have any useful content or
similarity to what we usually talk about?
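(For reference, and paraphrasing from memory: Legg and Hutter define
universal intelligence as an agent's expected performance across all
computable environments, weighted towards the simple ones,

  \Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of environment \mu and
V^{\pi}_{\mu} is the expected cumulative reward agent \pi earns in \mu.
It is uncomputable, but it does seem to capture something of what we
ordinarily mean by the word.)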
I think the real problem is that we do not have a good methodology at
all for approaching the pertinent question. Theory might indeed be too
abstracted from reality to say anything about it (a common complaint
from real programmers); empirical programming experience doesn't give
much data (we have failed at getting recursive self-improvement for 50+
years, and heuristic systems seem to level off rather than take off, but
this doesn't tell us much); and analogies from other domains (economics,
evolution) are of doubtful use. So how can we come up with a way of
getting information about the question?
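To illustrate what I mean by "level off rather than take off", here is
a deliberately crude toy model in Python (all the numbers are made up;
this is a sketch of the distinction, not a claim about real systems):

def take_off(capability, steps, gain=0.1):
    # Gains proportional to current capability: improvements compound,
    # so the curve diverges.
    history = [capability]
    for _ in range(steps):
        capability += gain * capability
        history.append(capability)
    return history

def level_off(capability, steps, gain=0.1, decay=0.7):
    # Each improvement step yields a smaller absolute gain than the
    # last: the curve converges to a finite ceiling.
    history = [capability]
    for _ in range(steps):
        capability += gain
        gain *= decay
        history.append(capability)
    return history

if __name__ == "__main__":
    print("take-off: ", [round(x, 2) for x in take_off(1.0, 10)])
    print("level-off:", [round(x, 2) for x in level_off(1.0, 10)])

The second curve tops out near 1.33 (1 + 0.1/(1-0.7)) no matter how
many steps you run; our heuristic systems so far have behaved like
that. The open question is which regime an actual self-improving system
would be in, and neither toy models nor 50 years of practice settle it.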
--
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University