[ExI] Yes, the Singularity is the greatest threat to humanity
Michael Anissimov
michaelanissimov at gmail.com
Sat Jan 22 05:29:06 UTC 2011
On Mon, Jan 17, 2011 at 11:23 PM, Eugen Leitl <eugen at leitl.org> wrote:
> On Mon, Jan 17, 2011 at 03:11:30PM -0800, Michael Anissimov wrote:
>
> > This is the basis of Eugen's opposition to Friendly AI -- he sees it as a
>
> This is not the basis. This is one of the many ways I'm pointing
> out what you're trying to do is undefined. Trying to implement
> something undefined is going to produce an undefined outcome.
>
> That's a classical case of being not even wrong.
>
Defining it is our goal... Put yourself in my shoes: imagine that you think
that uploading is much, much harder than intelligence-explosion-initiating
AGI. What to do? What, what, what to do?
I thought string theory was the natural case of not even being wrong...
> Yes, I don't think something derived from a monkey's idle fart
> should have the power to constrain future evolution of the
> universe. I think that's pretty responsible.
>
Then what?
> Not one being, a population of beings. No singletons in this universe.
> Rapidly diversifying population. Same thing as before, only more so.
>
A very aggressive human nation could probably become a singleton with MNT
(molecular nanotechnology) if it wanted to. It could still happen:
aggressively distribute nukes, nuke every major city of the enemy, bwahahaha!
Academic reference: *Military Nanotechnology: Potential Applications and
Preventive Arms Control*. Read it? I can mail it to you to borrow if not.
If a human nation can do it, then couldn't a superintelligence...?
> > lot of responsibility whether or not we want it, and to maximize the
> > probability of a favorable outcome, we should aim for a nice agent.
>
> Favorable for *whom*? Measured in what? Nice, as relative to whom?
> Measured in which?
>
Guess who I'm going to quote now... I'll bet you can't guess.
...
...
...
>From "Wiki Interview with
Eliezer"<http://www.acceleratingfuture.com/wiki/Wiki_Interview_With_Eliezer/Ethics_And_Friendliness>
:
*Eugene Leitl has repeatedly expressed serious concern and opposition to
SIAI's proposed Friendliness architecture. Please summarize or reference his
arguments and your responses.*
Eugene Leitl believes that altruism is impossible *period* for a
superintelligence - any superintelligence, whether derived from humans or
AIs. Last time we argued this, which was long ago, and he may have changed
his opinions in the meantime, I recall that he was arguing for this
impossibility on the basis of "all minds necessarily want to survive as a
subgoal, therefore this subgoal can stomp on a supergoal" plus "in a
Darwinian scenario, any mind that does not want to survive, dies, therefore
all minds will evolve independent drives toward survival." I consider the
former to be flawed on grounds of Cognitive Science
<http://www.acceleratingfuture.com/wiki/Cognitive_Science>, and the latter to
be flawed on the grounds that post-Singularity, conscious redesign outweighs
the Design Pressures <http://www.acceleratingfuture.com/wiki/Design_Pressure>
evolution can exert. Moreover, there are scenarios in which the original Friendly seed
AI need not reproduce. Eugene believes that evolutionary design is the
strongest form of design, much like John Smart, although possibly for
different reasons, and hence discounts *intelligence* as a steering factor
in the distribution of future minds. I do wish to note that I may be
misrepresenting Eugene here. Anyway, what I have discussed with Eugene
recently is his plans for a Singularity *without* AI, which, as I recall,
requires uploading a substantial fraction of the entire human race, possibly
without their consent, and spreading them all over the Solar System
*before* running
them, before *any* upload is run, except for a small steering committee,
which is supposed to abstain from all intelligence enhancement, because
Eugene doesn't trust uploads either. I would rate the pragmatic
achievability of this scenario as zero, and possibly undesirable to boot, as
Nick Bostrom and Eugene have recently been arguing on wta-talk.
~~~
If you reckoned that altruistic superintelligence were at least possible in
theory, then you'd worry less about the specifics. To quote Nick Bostrom
this time <http://www.nickbostrom.com/ethics/ai.html>:
It seems that the best way to ensure that a superintelligence will have a
beneficial impact on the world is to endow it with philanthropic values. Its
top goal should be friendliness. How exactly friendliness should be
understood and how it should be implemented, and how the amity should be
apportioned between different people and nonhuman creatures is a matter that
merits further consideration. I would argue that at least all humans, and
probably many other sentient creatures on earth should get a significant
share in the superintelligence’s beneficence. If the benefits that the
superintelligence could bestow are enormously vast, then *it may be less
important to haggle over the detailed distribution pattern* and more
important to seek to ensure that everybody gets at least some significant
share, since on this supposition, even a tiny share would be enough to
guarantee a very long and very good life. One risk that must be guarded
against is that those who develop the superintelligence would not make it
generically philanthropic but would instead give it the more limited goal of
serving only some small group, such as its own creators or those who
commissioned it.
Emphasis mine.
> Thanks for suggesting my strategies, but I think I can manage on my own.
>
Your strategy is useless if a hard takeoff happens; that's my point.
--
Michael Anissimov
Singularity Institute
singinst.org/blog