[ExI] Hard Takeoff
Brent Allsop
brent.allsop at canonizer.com
Tue Nov 16 04:19:13 UTC 2010
Moral Experts,
(Opening note: For those who don't enjoy the religious / moral /
Mormonesque rhetoric below the way I do, I hope you can simply translate
it on the fly into something more to your liking. ;)
This is a very exciting topic, and I think a morally critical one. If
we do the wrong thing, or fail to do it right, I think we're all agreed
the costs could be extreme. Morality has to do with knowing what is
right and what is wrong, does it not? I sure desperately want to
"Choose The Right" (CTR), as Mormons like to say. But I feel I
desperately need more help to be more morally capable, especially in
this area. It is especially hard for me to understand, fully grasp,
and remember ways of thinking that are very different from my current
way of thinking about things. All this endless "yes it is, no it
isn't" certainly isn't doing me any good.
This is a much more complex issue than I've ever really fully thought
through, and I appreciate the help from people on both sides of the
issue. I may be the only one, but I would find it very valuable and
educational to have concise descriptions of all the best arguments and
issues, a quantitative ranking by the experts of their importance, and
a quantitative measure of who, and how many people, are in each camp.
In other words, I think the best way for all of us to approach this
problem is to have some concise, quantitative, and constantly improving
representation of the most important issues, according to the experts
on all sides, so we can all be better educated (with good references)
about what the most important arguments are, why, and which experts,
and how many, are in each camp - and how all that shifts going forward
as ever more scientific data and ever more improved reasoning come in.
We've started one survey topic on the general issue of the importance
of friendly AI (see: http://canonizer.com/topic.asp/16 ), which so far
shows a somewhat even distribution of experts on both sides. But this
is obviously just a start on what is required for all of us to be
better educated about the most important issues and arguments.
Through this discussion, I've realized that a critical sub-component of
the various ways of thinking about this issue is one's working
hypothesis about the possibility of a rapid isolated, hidden, or remote
'hard takeoff'. I'm betting that the more strongly one holds an
isolated hard takeoff as a real possibility in one's working
hypothesis, the more likely one is to fear, or want to be cautious
about, AI, and vice versa. So I think it would be very educational for
everyone to more rigorously and concisely develop and measure the most
important reasons on both sides of this particular sub-issue.
Towards this end, I'd like to create several new related survey topics
to get a more detailed map of what the experts believe in this space.
First would be a survey topic on the possibility of any kind of
isolated rapid hard takeoff. We could then create two related topics
to capture, concisely state, and quantitatively rank the importance and
persuasive value of the various arguments relative to each other: one
topic ranking reasons why an isolated hard takeoff might be possible,
and another ranking reasons why it might not be likely.
This way, the experts on both sides of the issue could collaboratively
develop the best and most concise description of each argument, and
help rank which are the most convincing for everyone and why. (It
would be interesting to see whether the rankings for each side change
when surveying those in the pro camp versus those in the con camp, and
so on.)
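For concreteness, here is a minimal sketch of the kind of structure I
have in mind - purely hypothetical, not Canonizer's actual
implementation; the names and the 0-10 scoring scale are made up for
illustration - showing argument topics, per-expert importance scores,
and how a ranking could be recomputed for just one camp's experts:

from dataclasses import dataclass, field

@dataclass
class Argument:
    statement: str                                 # concise statement of the argument
    scores: dict = field(default_factory=dict)     # expert name -> importance score (0-10)

@dataclass
class Topic:
    name: str                                      # e.g. a pro or con argument-ranking topic
    arguments: list = field(default_factory=list)

def ranking(topic, camp_of, camp=None):
    """Rank a topic's arguments by mean expert score.

    If `camp` is given, only count scores from experts in that camp,
    so pro-camp and con-camp rankings can be compared."""
    ranked = []
    for arg in topic.arguments:
        scores = [s for expert, s in arg.scores.items()
                  if camp is None or camp_of.get(expert) == camp]
        if scores:
            ranked.append((sum(scores) / len(scores), arg.statement))
    return sorted(ranked, reverse=True)

# Illustrative data only:
pro = Topic("Reasons an isolated hard takeoff is possible")
arg = Argument("Recursive self-improvement compounds faster than human oversight")
arg.scores = {"alice": 9, "bob": 3}
pro.arguments.append(arg)
camp_of = {"alice": "pro", "bob": "con"}

print(ranking(pro, camp_of))              # ranking over all surveyed experts
print(ranking(pro, camp_of, camp="pro"))  # ranking among the pro camp only

The point of the sketch is just that the same argument data supports
both an overall ranking and per-camp rankings, which is what would let
us see how the two camps weigh the arguments differently.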
As these two pro and con argument ranking topics developed, the members
of the pro and con camps could reference these arguments and develop
concise descriptions of why the pro or con arguments are more
convincing to them than the others, why they are in their particular
camp, and why they currently use a particular pro or con theory as
their working hypothesis. And of course, it would be very interesting
to see whether anyone jumps camps once things get more developed, or
when new scientific results or catastrophes come in, and so on.
Would anyone else find this kind of moral expert survey information
helpful in their effort to make the best possible decisions and
judgments on such important issues? Does anyone have better or
additional ways to develop or structure a survey of the critically
important information that everyone interested in this topic needs to
know about?
I'm going to continue developing this survey along these lines, using
what I've heard others say so far here. But there are surely better
ways to go about this that others can help find or point out (obviously
the more diversity the better), so I would love to have any other
ideas, input, or help with this process.
Looking forward to any and all feedback, pro or con. And it would be
great to at least get a more comprehensive survey of who is in these
camps, starting with the improvement of this one:
http://canonizer.com/topic.asp/16 .
And also, I hope we someday achieve perfect justice. Those that are
wrong are arguably doing great damage compared to the heroes that are
right - the ones helping us all to be morally better. It seems to me
that to achieve perfect justice, the mistaken or wicked ones will have
to make restitution to the heroes for the damage they continue to do
for as long as they continue to be wrong (to sin?). The more
rigorously we track all this, the sooner we can achieve better justice,
right?
The more help I get, from all sides, the more capable I'll be of being
in the right camp sooner, the more capable I'll be of helping others do
the same, the less restitution I'll owe for being mistaken longer, and
the more reward we will all reap, sooner, in a more just and perfect
heaven.
Brent Allsop
On 11/15/2010 7:33 PM, Michael Anissimov wrote:
> Hi John,
>
> On Sun, Nov 14, 2010 at 9:27 PM, John Grigg
> <possiblepaths2050 at gmail.com> wrote:
>
>
> I agree that self-improving AGI with access to advanced manufacturing
> and research facilities would probably be able to bootstrap itself at
> an exponential rate, rather than the speed at which humans created it
> in the first place. But the "classic scenario" where this happens
> within minutes, hours or even days and months seems very doubtful in
> my view.
>
> Am I missing something here?
>
>
> MNT and merely human-equivalent AI that can copy itself but not
> qualitatively enhance its intelligence beyond the human level is
> enough for a hard takeoff within a few weeks, most likely, if you take
> the assumptions in the Phoenix nanofactory paper.
>
> Add in the possibility of qualitative intelligence enhancement and you
> get somewhere even faster.
>
> Neocortex expanded in size by a factor of only about 4 from chimps to
> produce human intelligence. The basic underlying design is much the
> same. Imagine if expanding neocortex by a similar factor again led to
> a similar qualitative increase in intelligence. If that were so, then
> even a thousand AIs with so-expanded brains and a sophisticated
> manufacturing base would be like a group of 1000 humans with assault
> rifles and helicopters in a world of six billion chimps. If that were
> the case, then the Phoenix nanofactory + human-level AI-based estimate
> might be excessively conservative.
>
> --
> michael.anissimov at singinst.org
> Singularity Institute
> Media Director