<div class="gmail_quote">On 21 January 2011 15:30, Richard Loosemore <span dir="ltr"><<a href="mailto:rpwl@lightlink.com">rpwl@lightlink.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="im">Stefano Vaj wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
On 20 January 2011 20:27, Richard Loosemore <<a href="mailto:rpwl@lightlink.com" target="_blank">rpwl@lightlink.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Anders Sandberg wrote:<br>
E) Most importantly, the invention of a human-level, self-understanding<br>
AGI would not lead to a *subsequent* period (we can call it the<br>
"explosion period") in which the invention just sits on a shelf with<br>
nobody bothering to pick it up.<br>
</blockquote>
<br>
Mmhhh. Aren't we already there? A few basic questions:<br>
<br>
1) Computers are vastly inferior to humans in some specific tasks, yet<br>
vastly superior in others. Why human-like features would be so much<br>
more crucial in defining the computer "intelligence" than, say, faster<br>
integer factorisation?<br>
</blockquote>
<br></div>
Well, remember that the hypothesis under consideration here is a system that is capable of redesigning itself.<br></blockquote><div><br>In principle, a cellular automaton, a Turing machine or a personal computer should be able to design themselves if we can do it ourselves. You just have to feed them the right program and be ready to wait for a long time...<br>
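
To make the self-reference point concrete: a quine, a program that prints
its own source code, is a five-minute exercise. It reproduces rather than
redesigns itself, but it shows that a program manipulating its own
description is nothing exotic. A toy Python version:

    # A classic quine: running this program prints its own source code.
    s = '# A classic quine: running this program prints its own source code.\ns = %r\nprint(s %% s)'
    print(s % s)

Redesign rather than mere reproduction is then "only" a matter of feeding
in a vastly more sophisticated program.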

> "Human-level" does not mean identical to a human in every respect, it
> means smart enough to understand everything that we understand.

Mmhhh. Most humans do not "understand" (in any practical sense) anything
about the workings of any computational device, let alone of their own
brain. Does that qualify them as non-intelligent? :-/
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="im"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
2) If the Principle of Computational Equivalence is true, what are we<br>
really all if not "computers" optimised for, and of course executing,<br>
different programs? Is AGI ultimately anything else than a very<br>
complex (and, on contemporary silicon processor, much slower and very<br>
inefficient) emulation of typical carbo-based units' data processing?<br>
</blockquote>
<br></div>
The main idea of building an AGI would be to do it in such a way that we understood how it worked, and therefore could (almost certainly) think of ways to improve it.<br></blockquote><div><br>We are already able to design (or profit from) devices that exhibit intelligence. The real engineering feat would be a Turing-passing system, which in turn probably requires a better reverse-engineering of human ability to pass it by definition. But many non-Turing passing systems may be more powerful and "intelligent", not to mention useful and/or dangerous, in other senses.<br>
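
Incidentally, Wolfram's own poster child for the Principle of
Computational Equivalence makes the point concrete: Rule 110, a
one-dimensional cellular automaton whose entire transition table fits in
a single byte, is provably Turing-complete. A minimal sketch (Python,
purely illustrative):

    # Rule 110: each cell's next state depends on itself and its two
    # neighbours; the eight possible outcomes are the bits of the number 110.
    RULE = 110

    def step(cells):
        """One synchronous update, with fixed dead cells at the boundaries."""
        padded = [0] + cells + [0]
        return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    # Start from a single live cell and watch structured complexity unfold.
    cells = [0] * 60 + [1]
    for _ in range(30):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

If a byte-sized rule is already computationally universal, what carbon
adds over silicon looks like a question about programs and performance,
not about substrate.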

> Also, if we had a working AGI we could do something that we cannot do
> with human brains: we could inspect and learn about any aspect of its
> function in real time.

Perhaps. Or perhaps we will first be able to do that with biological
brains. Who knows? Ultimately, we might even discover that bio or
bio-like brains are a decently optimised platform for what they do best,
and that silicon really shines in a "co-processor" role, much as GPUs do
alongside CPUs. But of course this would not prevent us from implementing
AGIs entirely in silicon, if we accept the performance hit.
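
A minimal sketch of the division of labour I have in mind, assuming the
CuPy library as the GPU-side stand-in (it mirrors the NumPy API, so the
same code falls back to the CPU when no GPU is present):

    # Host/co-processor split: hand the massively parallel workload to
    # whichever substrate is optimised for it.
    import numpy as np

    try:
        import cupy as xp  # GPU-backed, NumPy-compatible arrays
    except ImportError:
        xp = np            # no GPU available: run everything on the CPU

    a = xp.random.random((2000, 2000))
    b = xp.random.random((2000, 2000))
    c = a @ b  # the heavy matrix product runs on the co-processor if present
    print(float(c.sum()))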

> There are other factors that would add to these. One concerns the AGI's
> ability to duplicate itself, after acquiring some knowledge. In the
> case of a human, a single, world-leading expert in some field would be
> nothing more than one expert. But if an AGI became a world expert, she
> could then duplicate herself a thousand times over and work with her
> sisters as a team (assuming that the problem under attack would benefit
> from a big team).

In principle, I do not see any specific reason why duplicating a
bio-based brain should be any less possible than duplicating the same
data, features and processes on another platform...
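
Digitally, at least, "duplicating the expert" is nothing deeper than
copying state. A deliberately trivial Python sketch (the Expert class is
purely hypothetical):

    import copy

    class Expert:
        """Toy stand-in for an agent whose acquired expertise is plain data."""
        def __init__(self, knowledge):
            self.knowledge = knowledge

    # Once one "world expert" exists, a thousand identical sisters are a
    # one-liner; the hard part is acquiring the knowledge, not copying it.
    prototype = Expert({"field": "processor design", "experience_years": 20})
    team = [copy.deepcopy(prototype) for _ in range(1000)]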

> Lastly, there is the fact that an AGI could communicate with its
> sisters on high-bandwidth channels, as I mentioned in my essay. We
> cannot do that. It would make a difference.

Really, can't a fyborg do that? Aren't we already doing that? :-/

> A workstation that is used to design the next Intel processor has zero
> self-understanding, because it cannot autonomously start and complete a
> project to redesign itself.

To form an opinion on the above, I would require more precise definitions
of "autonomously", "understanding", "self", etc.

In the meantime, I suspect that the difference essentially lies in the
execution of different programs, or in the hallucination of supposed
"bio-specific" gifts that do not really bear close inspection. The
behavioural features and range of simpler animals, and the end results of
contemporary, ad hoc, sophisticated computer emulations, illustrate this
point well, I believe.

-- 
Stefano Vaj