[ExI] Limiting factors of intelligence explosion speeds
Richard Loosemore
rpwl at lightlink.com
Mon Jan 24 18:27:32 UTC 2011
Stefano Vaj wrote:
> On 21 January 2011 15:30, Richard Loosemore <rpwl at lightlink.com> wrote:
>
> Well, remember that the hypothesis under consideration here is a
> system that is capable of redesigning itself.
>
>
> In principle, a cellular automaton, a Turing machine or a personal
> computer should be able to design themselves if we can do it ourselves.
> You just have to feed them the right program and be ready to wait for a
> long time...
This is meaningless in the present context, surely?
Lots of things are capable of designing themselves in principle. I
don't give a fig whether some cellular automaton might do so over the next 10
gigayears; I am only considering the question of intelligence explosions
happening as a result of building AGI systems.
> "Human-level" does not mean identical to a human in every respect,
> it means smart enough to understand everything that we understand.
>
>
> Mmhhh. Most humans do not "understand" (in any practical sense) anything
> about the workings of any computational device, let alone their own
> brain. Does that qualify them as non-intelligent? :-/
Well, people who deliberately play semantic tricks with my sentences,
THEM I am not so sure about ..... ;-)
> The main idea of building an AGI would be to do it in such a way
> that we understood how it worked, and therefore could (almost
> certainly) think of ways to improve it.
>
>
> We are already able to design (or profit from) devices that exhibit
> intelligence. The real engineering feat would be a Turing-passing
> system, which by definition probably requires a better reverse-engineering
> of human abilities in order to pass. But many non-Turing-passing
> systems may be more powerful and "intelligent", not to mention useful
> and/or dangerous, in other senses.
So...? That does not relate to the point!
Richard Loosemore