<div class="moz-cite-prefix">On 2016-02-25 22:43, Colin Hales wrote:<br>
</div>
<blockquote
cite="mid:CAGHyZJm3kCc7M=eaiZ-p2AJ1umfRzVr3Mzs6_uDAf7T-dVELqw@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div>Evaluations of the AI risk landscape are, so far,
completely and utterly vacuous and misguided. It completely
misses an entire technological outcome that totally changes
everything. </div>
</div>
</blockquote>
<br>
OK, let me see if I understand what you are saying. (1) Most people
doing AI and AI risk are wrong about content and strategy. (2) Real AGI
is model-less, something that just behaves. (3) The current risk
conversation is about model-based AI, and (4) you think that approach
is totally flawed. (5) You are building a self-adapting hierarchical
control system which you think will be the real thing.

Assuming this reading is not too far off:

I agree with (1). I think there are a fair number of people who have
correct ideas... but we may not know who they are. There are good
theoretical reasons to think most talk about the future of AI is
unreliable (the Armstrong and Sotala paper). There are also good
theoretical reasons to think there is great value in getting better at
this (essentially the argument in Bostrom's Superintelligence),
although we do not know how much it can be improved.

I disagree with (2), in the sense that while we know model-less systems
like animals do implement AGI of a kind, that does not imply that a
model-based approximation to them cannot implement it too. Since
designing model-less systems, especially ones with desired properties,
is very hard, it is often more feasible to build a model-based system.
Kidneys are really just physical structures, but when trying to make an
artificial kidney it makes sense to regard the organ as a filtering
system with certain properties.

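To make the kidney point concrete, here is a toy sketch of my own (not
a claim about any real device): treating the organ purely as a
first-order clearance filter, specified by nothing but a clearance rate
and a distribution volume, already gives you something you can engineer
against. The numbers below are illustrative, not physiological data.

    import math

    # Toy one-compartment clearance model: the "kidney" is treated
    # purely as a filter that removes a solute at a rate proportional
    # to its concentration, dC/dt = -(clearance / volume) * C.
    def solute_concentration(c0, clearance, volume, t):
        """Concentration after t hours, given clearance (L/h) and
        distribution volume (L); the parameters here are made up."""
        k = clearance / volume  # elimination rate constant (1/h)
        return c0 * math.exp(-k * t)

    # A hypothetical artificial kidney specified only by its clearance.
    print(solute_concentration(c0=10.0, clearance=8.0, volume=40.0, t=6.0))
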
I agree with (3) strongly, and think this is a problem! Overall, the
range of architectures about which you can say sensible things about
risk is rather limited: neuromorphic or emergent systems are opaque and
do not admit neat safety proofs. We need to develop better ways of
thinking about complex adaptive technological systems and how to handle
them.

However, as per the above, I do not think model-based systems are
necessarily flawed, so I disagree with (4). It may well be that less
model-based systems like brain emulations are the ticket, but that
remains to be seen.

(5): I am not entirely certain that counts as being model-less. Sure,
you are not basing it on some GOFAI logic system or elaborate theory (I
assume), just on the right kind of adaptation. But even the concept of
control is a model.

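To illustrate that last point, here is a deliberately "model-less" toy
controller of my own (a sketch, not a description of your
architecture): it never represents the plant explicitly, yet its
adaptation rule quietly assumes the plant's output moves in the same
direction as the control signal, and that assumption is already a
model.

    # Toy "model-less" adaptive controller: it only adapts a gain from
    # the observed error; the plant is a black box to it.
    def run(plant, setpoint=1.0, steps=1000, rate=0.1):
        x, u, gain = 0.0, 0.0, 0.0
        for k in range(steps):
            x = plant(x, u)
            error = setpoint - x
            if abs(error) > 1e6:        # bail out if the loop blows up
                return f"diverged at step {k}"
            # Hidden model: this update assumes pushing u up pushes x
            # up (positive plant gain). Nothing in the code checks that.
            gain += rate * error
            u = gain * error
        return f"final error {error:.3f}"

    print(run(lambda x, u: 0.9 * x + 0.1 * u))  # matches the assumption
    print(run(lambda x, u: 0.9 * x - 0.1 * u))  # sign flipped: diverges
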
If you think your system will be the real deal, ask yourself: why would
it be a good thing? Why would it be safe (or possible to make safe)?

[ Most AGI people I have talked with tend to answer the first question
either with scientific/engineering curiosity or with the claim that
getting more intelligence into the world is useful. I buy the second
answer; the first one is pretty bad if there is no good answer to the
second question. The answers I tend to get to the second question are
typically (A) it will not be so powerful that it is dangerous, (B) it
will be smart enough to be safe, (C) during development I will nip
misbehaviors in the bud, or (D) it does not matter. (A) is sometimes
expressed as "there are people and animals with general intelligence
and they are safe, so by analogy AGI will be safe". This is obviously
flawed (people are not safe, and do pose an xrisk). (A) is often based
on underestimating the power of intelligence, which undersells the
importance of AGI. (B) is wrong, since we have counter-examples (e.g.
AIXI): one actually needs to show that one's particular architecture
will somehow converge on niceness; it does not happen by default (a
surprising number of AGI people I have chatted with have very naive
ideas about metaethics). (C) assumes that an early lack of misbehavior
is evidence against later misbehavior, which looks doubtful. (D) is
fatalistic and downright dangerous; we would never accept that from a
vehicle engineer or somebody working with nuclear power. ]
{Now you know the answers I hope you will not give :-) }

<pre class="moz-signature" cols="72">--
Dr Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University