[extropy-chat] Bluff and the Darwin award

Russell Wallace russell.wallace at gmail.com
Tue May 16 11:02:15 UTC 2006


On 5/16/06, Samantha Atkins <sjatkins at mac.com> wrote:
>
> Personally I was greatly inspired by Olaf Stapledon's "Star Maker" at
> a young age.


Me too!

> I don't think his scenario is unrealistic, at least not
> in the long run.


Me neither.

> What precisely are you calling make-believe?   You yourself are into
> the creation of strong AI.  It is a bit strange to devote your life
> to something you consider make-believe.


I work on it in the spirit in which Goddard worked on rocketry early in the
20th century, with the intent that it be used for space travel - he didn't
have to believe starships were just around the corner. It's not
technological progress I'm calling make-believe, but the imminent
apocalypse/nirvana meme.

> What is fantasy and what is forward vision?


Given that we can't (and never have been able to) make actual predictions
past the near future, forward vision more than a couple of decades out _is_
necessarily fantasy.

> There are many very good reasons not to let emotional, irrational
> monkeys seriously restrict that which the brightest of them only
> begin to have a glimmer of.  It is not necessary to claim that it is
> all make-believe and thus unregulatable.


Do you think we today can foresee the Singularity or whatever well enough to
have a reasonable chance of making good, detailed policy decisions and
passing appropriate, well-tuned regulations on it? I would guess from your
words above that you agree with me that we can't.

That is the substance of my claim. I'm not calling make-believe the idea
that the Singularity may someday happen. What I'm calling make-believe is
any specific prediction regarding it. Because we are not close enough to do
as well as random chance in making specific predictions, any attempt to
discuss Singularity policy right now is worse than useless - it encourages
the passing of regulations that reduce our chances of ever making
it at all. If it's a topic for policy discussion, then it's a political
problem, and that means laws and regulations.

What I advocate is acknowledging that relevant policy discussion is not yet
possible, and that it should be considered not a political problem but a
purely technical one, until the technology actually delivers something of
substance, thereby providing information that can feed a _meaningful_ policy
discussion. If regulations are to be proposed, then let them be proposed at
that later date when there's some prayer of them being based at least partly
on fact rather than fantasy.

> I very much disagree.   It would take very serious tech including
> nanotechnology to get a substantial amount of humanity far enough off
> planet to provide much safety of the kind you seem to be advocating.


Of course it will take such tech - that's exactly my point! What do you
disagree with?

Hmm, from your choice of words above perhaps you're thinking I want Diaspora
in the sense of "the world's about to blow up, I want a lifeboat so I can
get out of the blast radius"? My vision is a different one. I want it so life
can stop being about squabbling over who gets to control which corner of
this one little ball of rock. So we can have room to breathe.

> Only the nearest stars are reachable by biological creatures such as
> ourselves in a timeframe that will have us arrive before the
> post-biological offspring pass us and get there first.


*shrug* Maybe. Honestly, I think the actual future, if we make it, will end
up being stranger and more wonderful than either of us could have predicted.
But that's something for our descendants to figure out; maybe ourselves in
person if progress in life extension is fast enough; but at any rate, people
in the future. What we need to worry about right now is getting that far in
the first place.

> If we do not develop substantially greater effective intelligence and
> rationality then the likeliest scenario is that we do ourselves in.


Consider this my little attempt at improving our effective intelligence and
rationality :)

> Wireheading is not that far away.  Neither are various kinds of
> cyborg by some definitions, though they certainly need not be killers
> and will not be all-powerful.   It is possible although unlikely that
> something can be cooked up to eat the biosphere.  At some point
> breaking up planets will become possible.  Should we pretend this is
> not possible?


No, I make no claims about the ultimate impossibility of anything that
doesn't involve logical self-contradiction. But neither should we pretend
any of it is close enough for meaningful policy discussion.

> I really do not think this is called for.   As a "Singularity Summit"
> this was very tame and not very scary at all.


It provided a platform for Bill McKibben to speak. Bill McKibben, who
advocates an end to progress, the snuffing out of the entire future of
sentient life - _and is respected and taken seriously_. If that doesn't
scare you, what would?

> > Granted that everyone needs something to believe in, if you find
> > "Singularity in my lifetime" is what you personally need, then
> > believe in life extension or cryonics that'll let you stick around
> > to see it, and let go of the illusion that it's just around the
> > corner.
>
> It is either within this century or likely not at all for this
> species imho.


You may well be right. That's why I think it's so important to keep working
flat out on the technology, and not get snared in politics.