<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix"><br>
Hi Anders,<br>
<br>
Thanks for the feedback.<br>
<br>
"I think the take home message is, after looking at prediction
track records and cognitive bias, that experts and consensuses in
this domain are pretty useless."<br>
<br>
It sounds like you are expecting expert consensus (at least
anything that can be determined via a top down survey or any kind
of traditional prediction system) to be a reliable measure of
truth. And of course, I would completely agree that in this and
many other domains a simple popular vote, via a top down survey,
is useless: it builds no consensus, but destroys it.<br>
<br>
To me, that's not what an open, bottom up, consensus building
survey system like canonizer.com is about. This kind of system is
all about communication, and knowing, concisely and
quantitatively, what the competing camps are, and what it would
take to convert others from their current consensus camps. This
Friendly AI domain is much like consciousness in that the
literature has exploded: there are now more than 20,000 peer
reviewed publications in this domain. Most of that is childish,
endless yes / no / yes / no argument, with most positions held by
almost nobody but their authors. So attempting to jump into such
a morass at any point is a complete waste of time, and even if we
have made any consensus progress during the last 50 years, nobody
has been able to rigorously measure it, or whether such consensus
is valuable.<br>
<br>
I've spent many hours jumping into this friendly AI pool,
listening to gazillions of arguments on both sides. Obviously
there are far better and more experienced experts than I, but we
can't expect everyone to spend as much time as I've spent in
every single critically important domain like this. You've
mentioned a few arguments that I haven't yet heard of, like the
"Eliezer-Omohundro thesis that most of mindspace is unsafe",
which I should probably check out. But given how much I already
know of the kind of arguments Eliezer has used in the past, it's
hard for me to know whether such arguments are any better than
the many, many arguments I already feel I've completely wasted my
time with.<br>
<br>
If any such argument is really good (able to convince others),
someone should be able to concisely describe it, and we should
be able to rigorously measure how many people agree that it is a
good argument, relative to other arguments and other points of
view. And we should be able to track this in real time. That is:
is this a new and exploding argument, compared to others, or have
most other experts abandoned this thesis, leaving Eliezer and
Omohundro about the only ones left still believing in it?<br>
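<br>
For instance, here is a minimal sketch (in Python, with purely
hypothetical camp names and supporter counts; I'm not describing
Canonizer's actual data model or API) of the kind of real time
measurement I mean: each camp's current share of supporters, and
whether it is growing or fading:<br>
<pre>
# Minimal sketch of tracking relative camp support over time.
# Camp names and all numbers are hypothetical, for illustration only.
from datetime import date

# support[camp] is a list of (snapshot date, supporter count) pairs,
# oldest first.
support = {
    "such concern is mistaken": [(date(2011, 1, 1), 14), (date(2012, 12, 1), 30)],
    "most of mindspace is unsafe": [(date(2011, 1, 1), 10), (date(2012, 12, 1), 12)],
}

def share(camp):
    """Fraction of all currently recorded supporters who are in this camp."""
    total = sum(snapshots[-1][1] for snapshots in support.values())
    return support[camp][-1][1] / total

def growth(camp):
    """Ratio of latest to earliest supporter count.
    Above 1.0 the camp is growing; below 1.0 it is fading."""
    snapshots = support[camp]
    return snapshots[-1][1] / snapshots[0][1]

for camp in support:
    print(camp, "share:", round(share(camp), 2), "growth:", round(growth(camp), 2))
</pre>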
<br>
I in no way want to be premature in "dismissing the possibility
that the other camp is on to something". But I don't have time to
follow every mistaken idea/argument that at best only one, or a
few, people still agree with.<br>
<br>
You pointed out that I should "find a really good set of
arguments that shows that some human-level or beyond AI systems
actually are safe". In my opinion, the argument I'm presenting
does exactly that: any sufficiently intelligent system (around
human level) will realize that the best, most moral thing to do
is to cooperatively work with everyone to get everything for all,
or at least as much of it as possible. And, at least at
Canonizer.com, there are more people who think this same way than
are in any other camp, as is concisely and quantitatively
represented in a way that nobody can deny.<br>
<br>
If Eliezer, or anyone else, thinks we in our camp are wrong, they
need to know, concisely and quantitatively, why we think we have
met your challenge, and what we would accept as falsifying our
current working hypothesis. If there is a larger, more important
camp out there, they should focus on that camp first.<br>
<br>
If Eliezer and Omohundro are the only ones who think their "most
of mindspace is unsafe" hypothesis is valid, it's probably not
worth anybody's time, like all the other thousands of similar
ideas out there that only a few lonely people think are any
better. On the other hand, if there is a huge group of people
behind it, especially if any of them are experts I'd trust, and
most importantly, if this consensus is growing rapidly, then I
should probably continue to ignore all the other lonely / fading
arguments, and instead spend my time trying to understand, rather
than dismiss, that one.<br>
<br>
This kind of open survey system isn't about determining truth.
It's all about communicating in concise and quantitative ways, so
the best theories and arguments can quickly rise and be recognized
above all the mistaken and repetitive childish bleating noise.
It's about having a bottom up system with a focus on building
consensus, and finding out exactly what others are having problems
with, not simply destroying consensus, like all primitive top down
survey systems do. It's about communicating in a concise and
quantitative way that amplifies everyone's moral wisdom and
education on any such existentially important moral issues, in a
way whose progress, or lack thereof, can be measured. It's about
having a real time, concise representation of all that belief,
with definitive measurements of which arguments are the best and
improving, so that anyone can quickly and easily digest the best
ones, without everyone being required to read 20,000 peer
reviewed publications.<br>
<br>
Brent Allsop<br>
<br>
On 12/24/2012 3:46 PM, Anders Sandberg wrote:<br>
</div>
<blockquote cite="mid:50D8DB4C.2060105@aleph.se" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<div class="moz-cite-prefix">On 24/12/2012 20:22, Brent Allsop
wrote:<br>
</div>
<blockquote cite="mid:50D8AB67.10502@canonizer.com" type="cite">
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
<div class="moz-cite-prefix"><br>
But, all intelligence must eventually logically realize the
error in any such immoral, lonely, and ultimately losing
'slavish' thinking. Obviously what is morally right is to
co-operate with everyone, and seek to get the best for
everyone - the more diversity in desires the better.<br>
</div>
</blockquote>
<br>
This is anthropomorphising things a lot. Consider a
utility-maximizer that has some goal (like making maximal
paperclips). There are plenty of reasons to think that it would
not start behaving morally:<br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html">http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html</a><br>
<br>
Typically moral philosophers respond to this by claiming the AI is
not a moral agent, being bound by a simplistic value system it
will never want to change. That just moves the problem away from
ethics to safety: such a system would still be a danger to others
(and value in general). It would just not be a moral villain. <br>
<br>
Claims that systems with hardwired top-level goals will
necessarily be uncreative and unable to resist more flexible
"superior" systems had better be followed up by arguments. So far
the closest I have seen is David Deutsch's argument that they
would be uncreative, but as I argue in the link above this is
inconclusive, since we have a fairly detailed example of something
that is as creative as (or more creative than) any other software
and yet lends itself to hardwired goals (it has such a slowdown
that it is perfectly safe, though).<br>
<br>
<blockquote cite="mid:50D8AB67.10502@canonizer.com" type="cite">
<div class="moz-cite-prefix"> And I think that is why there is
an emerging consensus in this camp, that thinks fear of any
kind of superior intelligence is silly, whether artificial,
alien, or any kind of devils, whatever one may imagine them to
be in their ignorance of this necessary moral fact.<br>
</div>
</blockquote>
<br>
I'm not sure whether this emerging consensus is based on better
information, or whether a lot of the let's-worry-about-AI people
are simply busy over at SingInst/LessWrong/FHI working on AI
safety. I might not be a card-carrying member of either camp, but
I think dismissing the possibility that the other camp is on to
something is premature. <br>
<br>
The proactive thing to do would be for you to find a really good
set of arguments that shows that some human-level or beyond AI
systems actually are safe (or even better, disprove the
Eliezer-Omohundro thesis that most of mindspace is unsafe, or
prove that hard takeoffs are impossible/have some nice speed
bound). And the AI-worriers ought to try to prove that some
proposed AI architectures (like opencog) are unsafe. I did it for
Monte Carlo AIXI, but it is a bit like proving a snail to be
carnivorous - amble for your life! - it is merely an existence
proof.<br>
<br>
<blockquote cite="mid:50D8AB67.10502@canonizer.com" type="cite">
<div class="moz-cite-prefix"> So far, at least, there are more
people, I believe experts, willing to stand up and defend this
position, than there are willing to defend any fearful camps.<br>
</div>
</blockquote>
<br>
There have been some interesting surveys of AI experts and their
views on AI safety over at Less Wrong. I think the take home
message is, after looking at prediction track records and
cognitive bias, that experts and consensuses in this domain are
pretty useless. I strongly recommend Stuart Armstrong's work on
this:<br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI">http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI</a><br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="http://singularity.org/files/PredictingAI.pdf">http://singularity.org/files/PredictingAI.pdf</a><br>
<br>
Disaggregate your predictions/arguments and try to see if you can
boil them down to something concrete and testable. <br>
<br>
<pre class="moz-signature" cols="72">--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University </pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
extropy-chat mailing list
<a class="moz-txt-link-abbreviated" href="mailto:extropy-chat@lists.extropy.org">extropy-chat@lists.extropy.org</a>
<a class="moz-txt-link-freetext" href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a>
</pre>
</blockquote>
<br>
</body>
</html>