<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 2016-04-07 06:33, Rafal Smigrodzki wrote:<br>
<blockquote
cite="mid:CAAc1gFiZoG_CdF3bVWpdXg3xGJAcNWq6s-pVGHr+xrkjhMEUHg@mail.gmail.com"
type="cite">
<div dir="ltr"><br>
<div class="gmail_extra">
<div class="gmail_quote">On Wed, Apr 6, 2016 at 10:53 PM,
spike <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:spike66@att.net" target="_blank">spike66@att.net</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
I can think of one definition of AI which would never
move. If we do accomplish true AGI, it will self-improve
recursively. This is the singularity. When or if that
happens, the debate is over.<br>
</blockquote>
<div><br>
</div>
<div>### AlphaGo performed domain-specific recursive
self-improvement. It played against itself, using the results
of each game to play the next one better. At first it must
have played like a small child that knows only the rules and
some moves; then, by reapplying the same cycle of move
generation, play, result evaluation, and modification of the
move-generation rules, it very quickly went superhuman.</div>
</div>
</div>
</div>
</blockquote>
<br>
I think Spike was thinking about the architecture being
self-improving rather than the content of the architecture. But
there are, at least in principle, models that improve their own
architecture, such as Schmidhuber's Gödel machine. That one is
implementable, but I have never seen any evidence that it improves
in an accelerating manner. <br>
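The domain-specific cycle Rafal describes (generate moves, play, evaluate the result, update the move generator) can be sketched in miniature. This is a toy stand-in, not AlphaGo's actual method (which used deep networks and Monte Carlo tree search); the game, move strengths, and update rule here are all invented for illustration:

```python
import random

# Hypothetical toy game: each move has a fixed strength; the stronger move wins.
STRENGTH = {"a": 1, "b": 2, "c": 3}
MOVES = list(STRENGTH)

def play(weights):
    """One self-play game: both sides sample moves from the same policy."""
    w = [weights[m] for m in MOVES]
    m1 = random.choices(MOVES, weights=w)[0]
    m2 = random.choices(MOVES, weights=w)[0]
    if STRENGTH[m1] > STRENGTH[m2]:
        return m1, m2          # (winner, loser)
    if STRENGTH[m2] > STRENGTH[m1]:
        return m2, m1
    return None, None          # draw

def self_improve(generations=2000, seed=0):
    """Repeatedly self-play and reinforce whichever move won each game."""
    random.seed(seed)
    weights = {m: 1.0 for m in MOVES}   # "small child" start: uniform policy
    for _ in range(generations):
        winner, _ = play(weights)
        if winner is not None:
            weights[winner] += 0.1      # modify the move-generation rule
    return max(weights, key=weights.get)

print(self_improve())
```

The key feature is the feedback loop: a reinforced move is sampled more often, wins more often, and is reinforced further, so the policy improves without any external teacher. Of course, the loop only improves the policy's content, not its own architecture, which is exactly the distinction being made above.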
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University</pre>
</body>
</html>