In a message dated 07/03/2008 02:57:47 GMT Standard Time, rpwl@lightlink.com writes:

> Unfortunately, I think I was not clear enough, and as a result you have
> misunderstood what I said in rather a substantial way.

More than likely it was late and my view of your post was clouded by having read the previous posts. On reading it again, I agree that my response wasn't quite in line with it, but I think we still disagree on some issues.

> When you build an AGI, you *must* sort out the motivation mechanism
> ahead of time, or the machine will simply not work at all. You don't
> build an AGI and *then* discover what its motivation is.

Agreed in principle. However, I still hold that the resulting AGI will have unpredictable motivations regardless of its starting point. I'll say again that the development is a stochastic process unless we code every single line by hand, spoon-feed its knowledge base, and fully understand every possible outcome of the system to the Nth degree. That is unrealistic even in today's non-AI software, let alone in an AGI. Imagine a self-learning Windows!

> If you do not understand the motivation system before you build it, then
> it will not work, as simple as that.

Agreed. But as above, we cannot know in advance what the AGI will decide to do even if we *can* control its motivations.

> The reason why many people do talk as if a future AGI will have a
> "surprise" motivation system is that today's AI systems are driven by
> extremely crude and non-scalable "goal-stack" control systems, which are
> great for narrow-AI planning tasks, but which become extremely unstable
> when we imagine using them in a full-blown AGI.
>
> But when people imagine an extended form of goal-stack drive system
> controlling a future AGI, they fail to realise that the very same
> instability that makes the AGI seem so threatening will also make it so
> unstable that it will never actually become generally intelligent.

The methodology used to build the AI/AGI in the first place is irrelevant to the finished AGI. There are infinite ways to get mathematically from 1 to 100, and likewise infinite ways in which an AGI could rewrite its motivational code. Just because our ability to code AI is limited by preconceptions, personal aptitude, knowledge of coding, and accepted methodology such as "goal-stack" control systems does not mean the AGI will follow our limited rules. So we cannot predict the outcome.
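
For anyone who hasn't met the term: a "goal-stack" controller is, stripped down, just a loop over a stack of goals. Here's a minimal sketch in Python; the goals and expansion rules are invented for illustration, not taken from any real system:

    # Minimal sketch of a goal-stack controller; illustrative only.
    def goal_stack_controller(top_goal, expand, achievable, execute):
        """Pop a goal; act on it if primitive, else push its subgoals."""
        stack = [top_goal]
        while stack:
            goal = stack.pop()
            if achievable(goal):
                execute(goal)  # primitive action: just do it
            else:
                # Replace the goal with its subgoals (last pushed, first tried).
                for subgoal in reversed(expand(goal)):
                    stack.append(subgoal)

    # Toy usage: "make tea" expands into three primitive steps.
    rules = {"make tea": ["boil water", "add teabag", "pour water"]}
    goal_stack_controller(
        "make tea",
        expand=lambda g: rules.get(g, []),
        achievable=lambda g: g not in rules,
        execute=lambda g: print("doing:", g),
    )

Everything such a system does comes from how expand() rewrites goals, and that is exactly where the instability Richard describes creeps in once the goals stop being as tame as making tea.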

> The bottom line: you cannot make statements like "An ...[AGI]... could
> and probably will do major damage", because there is no "probably" about
> it. You either set out to make it do damage and be intelligent at the
> same time (an extremely difficult combination, in practice, for reasons
> I have explained elsewhere), or you don't! There is no surprise.

The full quote ended with '*probably not through design or desire, but just through exploration of ability or pure accident*,' which is the important bit.
If my car is fitted with autobrakes that apply when closing on a stationary vehicle, they might not stop the car from running over a dog. The point is that any AGI must have explicit rules, or the ability to stop itself doing something, in order to be safe.
If we give the AGI a basic motivation to 'learn all it can,' we must add an exception that it cannot learn what happens to humans when they are dropped into a vat of acid.
We can overcome this to an extent with blanket rules, but the basic premise is still the same: if an AGI can cause damage, it will.
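
To put the same point in code, a safety layer built from enumerated exceptions looks something like this sketch (every name here is invented for illustration; no real system is this literal), and it only ever covers the cases someone thought to list:

    # Illustrative sketch: a "safety" filter made of enumerated exceptions.
    FORBIDDEN = {
        "drop human in acid",  # the one case we thought to write down
        "drive over dog",
    }

    def safe_to_execute(action):
        """An action passes only because nobody listed it as forbidden.

        The weakness is structural: any damaging action *not* on the
        list sails straight through.
        """
        return action not in FORBIDDEN

    for action in ["boil water", "drop human in acid", "throw steel bar"]:
        print(action, "->", "allowed" if safe_to_execute(action) else "blocked")

Note that "throw steel bar" is allowed straight through, which brings me to a real example.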
I have seen first-hand, many times, what happens when AI systems come across situations for which they have no explicit rules. I once watched (from a distance) a CNC machine throwing 3-metre steel bars across a workshop, simply because a part-off tool broke. It was a very dangerous example of a very simple AI following an equally simple rule and nearly killing someone.
As with an AGI, all I could do was stop and stare, then wait for the dust to settle.

> For someone to talk about discovering an AGI's motivation after the fact
> would be like a company building a supersonic passenger jet and then
> speculating about whether the best way to fly it is nose-forward or
> nose-backward.
>
> Richard Loosemore

The jet could not reconfigure its aerodynamics after assembly; if it could, perhaps it *would* fly better backwards ;o) What's more, many planes are already aerodynamically capable of flying backwards, but that's a whole different kettle of fish. Ask a group of well-educated pilots what makes a plane fly and you would be surprised at the answers LOL.

Alex