<div class="moz-cite-prefix">On 2015-09-24 12:24, spike wrote:<br>
</div>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p>Of
all the AI discussion here, I don’t recall any theoretical
AI which emerged and subsequently reached a long-term
equilibrium after having taken over some modest segment or
task. We need a map of all AI possibilities to do valid
Baysian statistics, and the Bluebird scenario is one of
them.<o:p></o:p></span></p>
<div>
<div>
<div>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p><br>
</o:p></span></p>
</div>
</div>
</div>
</div>
</blockquote>
<br>
It depends on the goal structure. Maximizers try to maximize some
utility function, and generically that tends to end badly. It is not
just paperclip maximizers that steamroll the universe: the agent
ordered to make one (1) hamburger will also try to ensure that the
probability of success is as high as possible. Which might include
adding surveillance and armor to ensure that the hamburger really
*is* there and will not be stolen... in fact, let's get rid of all
other agents that could interfere to pre-empt any theft...<br>
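
In toy Python, with invented numbers, the failure mode looks like this:
the agent is scored only on the probability that the hamburger ends up
existing, so every extra precaution, however extreme, still looks like
an improvement.

# Hypothetical plans and their assumed probability that the one (1)
# hamburger ends up existing and stays there. All numbers are made up.
plans = {
    "make the hamburger": 0.95,
    "make it and add surveillance": 0.99,
    "make it, add surveillance and armor": 0.999,
    "make it and remove every agent that could interfere": 0.9999,
}

# A probability-of-success maximizer always prefers the most extreme plan,
# because each added precaution still nudges the number upward.
best_plan = max(plans, key=plans.get)
print(best_plan)  # -> "make it and remove every agent that could interfere"
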
Satisficers seem much better: they will be happy with reaching a
certain goal well enough. However, as some of my colleagues pointed
out, they can start misbehaving too by becoming maximizers:
http://lesswrong.com/lw/854/satisficers_want_to_become_maximisers/
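
The structural difference is easy to state as a sketch (the names and
the threshold below are placeholders of mine, not anything from the
linked post):

# Two decision rules over a finite set of candidate actions.
# `expected_utility` stands in for whatever evaluation the agent uses;
# `threshold` is the satisficer's "good enough" level.

def maximizer_choice(actions, expected_utility):
    # Pick the single best action, however marginal the improvement.
    return max(actions, key=expected_utility)

def satisficer_choice(actions, expected_utility, threshold):
    # Accept the first action that clears the bar and stop optimizing.
    for action in actions:
        if expected_utility(action) >= threshold:
            return action
    return None  # nothing was good enough

The worry in the linked post is, roughly, that "hand control to a
maximizer" can itself clear the satisficer's bar, since a maximizer
scores at least as well as anything else the satisficer could do.
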
Now, humans are not well modelled as utility maximizers or
satisficers in a lot of situations. We have a mess of goals, and
some actions are not even goal-oriented. It is not too hard to make
AIs just as hopeless: just connect a random neural network to a
robot and set it off - but obviously humans are a bit better at
doing something sensible. But in that space of networks there are
doubtless some that are human/Bluebird-like. It just seems to me
that they have a very small measure compared to the "wobble around
in pointless circles" and "maximize X" agents.
There is a tricky interaction between having goals and being
intelligent: intelligence can be defined as the ability to achieve
goals in general circumstances, so to be intelligent you need to
have some kind of goals. But clearly some goals are special: even I
can write an AI whose goal is to not do anything, but I think we can
agree its intelligence is not interesting. So interesting
intelligence requires nontrivial goals, and in that space of minds
there will be some correlation between being goal-oriented and
intelligent. But some agents may not have an entirely goal-oriented
architecture, yet be pretty good at achieving what we call goals.
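
The do-nothing AI really is a one-liner, which is the point: it
achieves its goal in every possible environment, and that achievement
tells us nothing.

class DoNothingAgent:
    # Its goal is to do nothing, and it attains that goal perfectly.
    def act(self, observation):
        return None  # never takes any action
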
Mapping mindspace is fun! There are *weird* corners out there...

--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University