[ExI] Power sats as weapons

Keith Henson hkeithhenson at gmail.com
Wed Sep 12 19:36:57 UTC 2012


On Wed, Sep 12, 2012 at 4:56 AM, John Clark <johnkclark at gmail.com> wrote:
>
> On Tue, Sep 11, 2012  Keith Henson <hkeithhenson at gmail.com> wrote:
>
>> Granted that power satellites and platforms for propulsion lasers are
>> relatively delicate.  But in order to damage one, you have to deliver the
>> agent of damage.
>>
> A bucket of sand moving at several miles per second would play havoc with
> the optical elements of a huge laser, and you'd be unlikely to destroy all
> the grains before they hit, and even if you did get that lucky I'd just
> send in another bucket. Sand is a lot cheaper than gigawatt lasers and
> power satellites; you'll run out of lasers before I run out of sand.

Accelerating sand up to GEO is not cheap.  By spending over $100 B, I
think I can get the cost down to $100/kg, but it's not as if the power
sat construction company is going to loan out its transport system to
people who want to sandblast its expensive hardware.
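
For a rough sense of the asymmetry, a back-of-envelope sketch (the 100 kg
bucket and the ~$20,000/kg present-day GEO delivery figure are assumptions,
only meant for scale):

    # Rough cost of delivering a "bucket of sand" to GEO under two assumptions:
    # the optimistic post-investment transport price, and a present-day price.
    SAND_MASS_KG = 100.0              # assumed size of one "bucket"
    FUTURE_COST_PER_KG = 100.0        # the ~$100/kg figure above
    TODAY_COST_PER_KG = 20_000.0      # rough current cost to GEO, assumed

    print(f"bucket at $100/kg:    ${SAND_MASS_KG * FUTURE_COST_PER_KG:>12,.0f}")
    print(f"bucket at $20,000/kg: ${SAND_MASS_KG * TODAY_COST_PER_KG:>12,.0f}")

Cheap sand only stays cheap if someone sells the attacker the cheap ride up.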

>> Nukes require physical delivery.
>
> That's true, you do need to deliver thermonuclear bombs. If you want to
> send a package from Korea to New York City one way to do it is to strap the
> package to the top of a rocket and blast it on a 10,000 mile ballistic
> trajectory to that city; another, much cheaper way to deliver your package
> is to use UPS or Federal Express. There is another advantage: it's
> anonymous. If I launch a rocket with a nuclear warhead it's obvious that I
> sent it, but if New York were to just blow up one day, well who knows how
> it happened.

Given the lower limit on how light you can make a nuke, there are not
too many FedEx packages that would need to go through the neutron scanner.
Think in terms of a shipping container.

>> Launching one from the ground as an attack against a propulsion laser is
>> possible, though very expensive.
>>
> Using a nuke against a space laser would be the waste of a good nuke, sand
> is cheaper and would work better.
>
>> It will take hours to get there and the hostile intent will be obvious.
>
> Yes, so a terrorist organization wouldn't even bother attacking a space
> based laser, they'd attack big cities, and they can do that without warning
> and anonymously. And your big laser will be of no help whatsoever in
> preventing that.

That depends on the model.  If the driving force behind terrorism is
poor economic prospects, and propulsion lasers/power sats make the
world much better off economically, then perhaps they would prevent
such attacks.

>>> It would take months for a space laser to deliver as much energy to a
>> target as a H bomb could do in less than a millionth of a second, so I just
>> don't see the advantage from a military perspective.
>>
>> The trend for a long time has been precision rather than raw power.
>>
> But for MAD you need raw power, and terrorists aren't big on precision.

MAD depends on rational players.  Terrorists tend to be weak on that
front, and in any case are not much interested in a military perspective.
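
For what it's worth, the "months" figure above checks out under round-number
assumptions (a one-megaton warhead, a 1 GW continuous beam):

    # How long a 1 GW laser takes to deliver the energy of a 1 Mt warhead.
    MEGATON_J = 4.18e15      # joules in one megaton of TNT
    LASER_POWER_W = 1e9      # 1 GW continuous beam

    seconds = MEGATON_J / LASER_POWER_W
    print(f"{seconds:.2e} s, about {seconds / 86_400:.0f} days")

About a month and a half.  The real question is whether raw energy delivered
is the relevant metric, which is the precision-versus-power point.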

Keith

>   John K Clark
>
> ------------------------------
>
> Message: 7
> Date: Tue, 11 Sep 2012 10:11:43 -0700
> From: "spike" <spike66 at att.net>
> To: "'ExI chat list'" <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID: <00bb01cd9040$85334990$8f99dcb0$@att.net>
> Content-Type: text/plain;       charset="us-ascii"
>
>
>
> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org
> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Keith Henson
>
>
>>>...  If it is absorbing a significant portion of the energy of the star,
> it must either jet off somewhere or cook.
>
>>...That I really don't understand.  If you were to enclose the sun in a one
> AU Dyson shell, the equilibrium temperature on the outside would be around
> 127 deg C.
>
> Ja, but portions of the inner layers of an MBrain get warmer than that.  I
> have tried to model this a few different ways, and I end up with a
> surprising result: an MBrain cannot collect a large portion of the energy
> from a star.  Otherwise the inner nodes cannot reject sufficient heat to
> stay in the temperature range in which electronic devices we know today
> would work long term.  The problem is that I don't trust any of my models.
> I need to create a digital model with actual numbers on attitude
> determination and control, and see what the worst cases are.
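
The figure quoted above is easy to sanity-check with a one-sided blackbody
balance (a minimal sketch; it assumes the shell absorbs the full solar output
and radiates only outward):

    import math

    L_SUN = 3.846e26    # solar luminosity, W
    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    AU = 1.496e11       # meters

    def shell_temp_K(radius_m):
        # outward flux needed to carry away the whole solar output
        flux = L_SUN / (4 * math.pi * radius_m ** 2)
        return (flux / SIGMA) ** 0.25

    for r_au in (0.5, 1.0, 2.0):
        t = shell_temp_K(r_au * AU)
        print(f"{r_au:>3} AU: {t:5.0f} K ({t - 273.15:4.0f} C)")

At 1 AU this gives roughly 390-400 K, in the neighborhood of the 127 deg C
figure, and it falls off only as the inverse square root of the radius.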
>
>>>... I don't know of an online explanation of that concept.  I might need
> to write one.
>
>>...I would like to see that.
>
> Me too.  I need to finish writing up proposals (Three down, 16 to go.)
>
>>...If you are trying to think fast, your physical layer needs to be as
> small as you can get it... Keith
>
> Ja, and its equilibrium temperature goes up as the orbit radius shrinks
> (roughly as the inverse square root of the radius), all else being equal.
> The problem is that all else is not equal when you start moving inboard
> closer to the star.  I am surprised at how
> complicated this question becomes, but it explains why there aren't already
> a jillion thermal models out there on the web.  Rather than materials
> availability, manufacturing or lifting the finished nodes to interplanetary
> orbit, heat management may be the biggest technical hurdle for an MBrain.
> Either that or I am missing something fundamental.
>
> Parting shot: if an MBrain is mostly transparent and relies on a mostly
> unobstructed view of cold space for heat rejection, such that we could see
> through an MBrain without much loss of light, and if it is fundamentally
> necessary that all MBrains must be constructed this way, this would explain
> why we haven't seen the signature of one anywhere.  We would be looking for
> a large cool object, when in fact an MBrain would be a dense hot object with
> a nearly invisible misty haze around it that would look a lot like a dust
> ring.  If we go with the Outback Postcards explanation for why we don't get
> signals from the MBrains (because the interesting stuff is all happening
> right there and they don't care about anything out here, and don't bother
> sending postcards to Aborigines during wicked cool technical talks) and
> MBrains must be mostly transparent, the universe could be filled with
> MBrains and we wouldn't know it.
>
> spike
>
>
>
>
>
> ------------------------------
>
> Message: 8
> Date: Tue, 11 Sep 2012 19:40:44 +0200
> From: Stefano Vaj <stefano.vaj at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID:
>         <CAPoR7a7-OOdGkASPX1y-+kbReJNm8qyjtthi0=2kowQyjVta2g at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On 11 September 2012 16:07, BillK <pharos at gmail.com> wrote:
>> On Tue, Sep 11, 2012 at 2:32 PM, Stefano Vaj wrote:
>>> But yes, I am inclined to concede that as a stupid human is less likely to
>>> see a blatant contradiction than a clever one, a more-than-human entity
>>> might be even quicker in weighing the logical aspects involved.
>>
>> I doubt that. A stupid human is more likely to see certain actions as
>> just plain 'wrong'. Think of simple folk wisdom, or the 'Being There'
>> film.
>> Clever humans, on the other hand, can devise magnificent
>> justifications for any wrong act that they want to do.
>>
>> Intelligence level is not linked to morality.
>
> Sure.
>
> But, if I may insist once more on the distinction, I am referring to
> *moral reasoning* here, not to morality.
>
> I can adopt an excellent moral system, justify it with a horribly
> flawed philosophy, and be a terrible sinner.
>
> Or I can be a very good man, in principle adhering to a very bad moral
> code which I infringe, but which is supported by very persuasive
> arguments.
>
> Or any other mix thereof.
>
> --
> Stefano Vaj
>
>
> ------------------------------
>
> Message: 9
> Date: Tue, 11 Sep 2012 19:20:13 +0100
> From: BillK <pharos at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID:
>         <CAL_armgq_rKWFHPR1ZSW9dJtnOayZDgBK-jgJ9_vd6fypbtr=Q at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, Sep 11, 2012 at 6:11 PM, spike wrote:
> <snip>
>> Parting shot: if an MBrain is mostly transparent and relies on a mostly
>> unobstructed view of cold space for heat rejection, such that we could see
>> through an MBrain without much loss of light, and if it is fundamentally
>> necessary that all MBrains must be constructed this way, this would explain
>> why we haven't seen the signature of one anywhere.  We would be looking for
>> a large cool object, when in fact an MBrain would be a dense hot object with
>> a nearly invisible misty haze around it that would look a lot like a dust
>> ring.  If we go with the Outback Postcards explanation for why we don't get
>> signals from the MBrains (because the interesting stuff is all happening
>> right there and they don't care about anything out here, and don't bother
>> sending postcards to Aborigines during wicked cool technical talks) and
>> MBrains must be mostly transparent, the universe could be filled with
>> MBrains and we wouldn't know it.
>>
>>
>
> Why not go to the source? The much-missed Robert Bradbury invented MBrains.
> In 'Year Million', edited by Damien Broderick, Robert has a chapter
> "Under Construction: Redesigning the Solar System."
> I don't have a copy, but a review quotes:
> MBrains, comprised of swarm-like, concentric, orbiting computronium
> shells that use solar sail-type materials to funnel and reflect the
> largest possible quantity of stellar energy.
> -----------------
>
> Looking back through Exi posts I find:
> Robert J. Bradbury Thu, 2 Dec 1999
>  The standard M-Brain architecture I designed, radiates heat only in
> one direction (outward, away from the star). Each layer's waste heat
> becomes the power source for each subsequent (further out) layer. To
> satisfy the laws of thermodynamics and physics, you have to get cooler
> and cooler but require more and more radiator material. At the final
> layer you would radiate at the cosmic microwave background (or
> somewhat above that if you live in a "hot" region of space due to lots
> of stars or hot gas).
>  Each shell layer orbits at the minimal distance from the star (to
> reduce inter-node propagation delays) while not melting from too much
> heat. [That makes the best use of the computronium in the solar system
> since the different materials from which computers may be constructed
> (TiC, Al2O3, Diamond, SiC, GaAs, Si, organic,
> high-temp-superconductor, etc.) each has different "limits" on
> operating temperature.] I suspect that some layers may be element
> constrained (e.g. GaAs) and assume that diamondoid rod-logic computers
> are not "best" for every operating temperature -- single-electron
> Si-based computers, or high-temperature copper oxide superconducting
> computers may be better in specific environments.
>
> However it is important to keep in mind that the mass of the computers
> in a node is probably very small compared to the mass of the radiators
> and cooling fluid (this is the part that needs to be worked out in
> detail).
> -------------------
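
A minimal steady-state sketch of that nesting constraint (assuming one-sided
blackbody shells that each have to pass the sun's full output outward; round
numbers, only to show the trend):

    import math

    L_SUN = 3.846e26    # W
    SIGMA = 5.670e-8    # W m^-2 K^-4
    AU = 1.496e11       # m

    def radius_for_temp_AU(T_kelvin):
        # radius at which a shell radiating outward at T can dump L_SUN
        r_m = math.sqrt(L_SUN / (4 * math.pi * SIGMA * T_kelvin ** 4))
        return r_m / AU

    for T in (1000, 400, 100, 30, 3):
        print(f"{T:5d} K layer needs radius ~{radius_for_temp_AU(T):,.1f} AU")

Cooler layers need quadratically larger radii, which is where the "more and
more radiator material" comes from; getting the outermost layer anywhere near
the microwave background pushes it out to tens of thousands of AU.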
>
> BillK
>
>
> ------------------------------
>
> Message: 10
> Date: Tue, 11 Sep 2012 12:36:52 -0700
> From: Jeff Davis <jrd1415 at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID:
>         <CAHUTwkNHxRZXZkMLZRr-PuQEWYVfJ8BTcP7-+krGU-FUfUBBVw at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, Sep 11, 2012 at 7:07 AM, BillK <pharos at gmail.com> wrote:
>
>> Clever humans, on the other hand, can devise magnificent
>> justifications for any wrong act that they want to do.
>>
>> Intelligence level is not linked to morality.
>
> Absolutely.  (I take your meaning to be "More intelligent does not
> imply more moral.")
>
> The problem originates in the inherent conflict between the
> constraints on behavior imposed by an ethical system, and the
> pursuit of naked self-interest.  Social groupings in primates, herding
> animals, fish, birds, and others evolved because they enhance survival.
> Ethical behavior evolved within such groupings because it enhances the
> stability of the group.
>
> Dominance hierarchies based on power -- the "big dog" concept --
> clearly manifest in social groups.  These contribute to stability by
> forced acquiescence to the order of dominance.  Males (and perhaps
> females) challenge each other and thereby establish the order of
> dominance.  Recent studies, however, seem to confirm that social
> animals also have a genetically-based sense of equity -- justice,
> fairness, call it what you will -- which helps to maintain the stability
> of the force-built dominance hierarchy.  In humans this "fairness"
> sense would be the "built-in" source of ethical behavior/thinking.
>
> It seems to me that having and employing a "sense of fairness" would
> tend to reduce conflict within the group, thus enhancing group
> stability.
>
> In the case of an AI, one would -- at least initially -- have a
> designed, not an evolved, entity.  Consequently, unless designed in,
> it would not have any of the evolved drives -- survival instinct or
> (sexual) competitive impulse.  So it seems to me there would be no
> countervailing impulse-driven divergence from consistently
> ethics-based behavior.  The concept and adoption of ethics would, as I
> have suggested, be developed in the formative stage -- the
> "upbringing" -- of the ai, as it becomes acquainted with the history
> and nature of ethics, first at the human level of intelligence and
> then later at a greater-than-human level of intelligence.
>
> Others, substantially more dedicated to this subject, have pondered
> the friendly (in my view this is equivalent to "ethical") ai question,
> and reached no confident conclusion that it is possible.  So I'm
> sticking my neck way out here in suggesting, for the reasons I have
> laid out, that, absent "selfish" drives, a focus on ethics will
> logically lead to a super ethical (effectively "friendly") ai.
>
> Fire at will.
>
> Best, Jeff Davis
>                   "Everything's hard till you know how to do it."
>                                                        Ray Charles
>
>
> ------------------------------
>
> Message: 11
> Date: Tue, 11 Sep 2012 13:13:18 -0700
> From: "spike" <spike66 at att.net>
> To: "'ExI chat list'" <extropy-chat at lists.extropy.org>
> Subject: [ExI] ethics vs intelligence, RE:  Fermi Paradox and
>         Transcension
> Message-ID: <010f01cd9059$e2765f70$a7631e50$@att.net>
> Content-Type: text/plain;       charset="us-ascii"
>
>
>>> But yes, I am inclined to concede that as a stupid human is less
>>> likely to see a blatant contradiction than a clever one...
>
>> I doubt that. A stupid human is more likely to see certain actions as just
> plain 'wrong'. ...
>
>
> Heh.  All ethical dilemmas seem to pale in comparison to those presented to
> the families of Alzheimer's patients.
>
> For instance, imagine an AD patient who seems partially OK some mornings for
> the most part, but nearly every afternoon and evening tends to grow more and
> more agitated, confused, lost, terrified, angry, worried, combative, clearly
> not enjoying life.  But the patient sometimes has a good day, and on those
> occasions clearly states a preference to stay in their own home until there
> is nothing left of the brain.  When is it time to check the patient into
> elder care?
>
> Easy, right?  OK, what if the patient's spouse is doing something wrong with
> the medication, such as giving the patient large doses of useless vitamins,
> on pure faith since Paul Harvey said they are good for this or that?  What
> if you come to suspect the patient is receiving sleep aids in the middle of
> the day, and the rest of the family doesn't know?  What is the right thing
> to do there?  Ignore one's own suspicion and go along, knowing that if a
> patient is suffering, well hell, it isn't suffering to be asleep, ja?
> Apparently AD doesn't hurt in the sense that it causes pain, so it doesn't
> keep one awake as something like arthritis would, but the suffering is real.
> If a spouse decided the person is better off sleeping most of the time, is
> it appropriate to second-guess that spouse?  Come on extro-ethics hipsters,
> think hard, suggest the right answers, and while you are at it, do again
> make the case that ethical behavior and intelligence are related please?
> And if you answer that one, please try to convince me that a machine-based
> super-intelligence will be super ethical, and if you succeed at either of
> those, I will feel much better thanks.
>
> spike
>
>
>
> ------------------------------
>
> Message: 12
> Date: Tue, 11 Sep 2012 13:29:11 -0700
> From: "spike" <spike66 at att.net>
> To: "'ExI chat list'" <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID: <011601cd905c$1b0202c0$51060840$@att.net>
> Content-Type: text/plain;       charset="us-ascii"
>
>
>>... On Behalf Of BillK
> Subject: Re: [ExI] Fermi Paradox and Transcension
>
> On Tue, Sep 11, 2012 at 6:11 PM, spike wrote:
> <snip>
>>> ...Parting shot: if an MBrain is mostly transparent and relies on a
>> mostly unobstructed view of cold space for heat rejection...
>
>>...Why not go to the source? The much-missed Robert Bradbury invented
> MBrains.
> In 'Year Million', edited by Damien Broderick, Robert has a chapter "Under
> Construction: Redesigning the Solar System."
> I don't have a copy...
>
> I do have a copy, however... read on please.
>
>
>>... but a review quotes:
> MBrains, comprised of swarm-like, concentric, orbiting computronium shells
> that use solar sail-type materials to funnel and reflect the largest
> possible quantity of stellar energy.
> -----------------
>
> Of course, however...
>
>>... The standard M-Brain architecture I designed, radiates heat only in one
> direction (outward, away from the star). Each layer's waste heat becomes the
> power source for each subsequent (further out) layer...Robert J. Bradbury
> Thu, 2 Dec 1999>
>
> Hmmm, Robert and I did not agree on this.  He and I spent many hours at my
> home debating and deriving thermal models after this was written, most of
> the activity happening between 2001 and 2004.  After that he became
> distracted by another project, but my own feeling at the time and today is
> that his design does not close.  His contribution is valuable: the inner
> nodes have a different construct than the outer nodes, and must be able to
> operate at higher temperatures.
>
>>... To satisfy the laws of thermodynamics and physics, you have to get
> cooler and cooler but require more and more radiator material. At the final
> layer you would radiate at the cosmic microwave background (or somewhat
> above that if you live in a "hot" region of space due to lots of stars or
> hot gas)...  Robert J. Bradbury Thu, 2 Dec 1999
>
> Robert did not have, never did have, a detailed thermal model.  He had good
> ideas.  But there is a lot of blood, sweat and tears yet to be shed over a
> detailed thermal model, as well as some actual tests of hardware in space to
> measure their control parameters before I will trust my own models.
>
>>...However it is important to keep in mind that the mass of the computers
> in a node is probably very small compared to the mass of the radiators and
> cooling fluid (this is the part that needs to be worked out in detail).
> Robert J. Bradbury Thu, 2 Dec 1999 (?)
> -------------------
>
> BillK
> _______________________________________________
>
> BillK, I can't tell if this last sentence is part of Robert's commentary or
> yours, but Robert and I never did agree on the use of cooling fluid.  My
> MBrain nodes always relied on passive cooling only, for I did some calcs a
> long time ago which convinced me that cooling fluid doesn't help at all in
> the long run.  It can only help if you have available a low entropy cold
> space into which you dump waste heat.  But in Robert's vision, the inner
> nodes have only a view to a high entropy warm space, and the outer nodes
> which have a view to cold space do not need fluid of any kind.
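
A minimal sketch of why the view temperature matters so much (grey-body
radiator and a 350 K node; both numbers are assumptions, just to show the
shape of the problem):

    SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    EMISSIVITY = 0.9    # assumed
    NODE_T = 350.0      # K, an assumed electronics-friendly node temperature

    def net_rejection_w_per_m2(background_T):
        # net power a passive radiator sheds per square meter into its view
        return EMISSIVITY * SIGMA * (NODE_T ** 4 - background_T ** 4)

    for bg in (3, 100, 250, 340):
        print(f"view at {bg:3d} K: {net_rejection_w_per_m2(bg):7.1f} W/m^2")

With a cold view the node sheds hundreds of watts per square meter passively;
as the view warms toward the node's own temperature, the net rejection
collapses, and no amount of circulating fluid changes that bottom line.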
>
> There is a lot of work to do on this.  I worked out the orbit mechanics
> first, because orbit mechanics are easier and cleaner than the thermal
> models, and I know how to do those.  Now the hard work begins.
>
> Final shot: microprocessor technology has come a long way since Robert wrote
> the above passages.  He didn't live to see a cell phone win a chess
> tournament against several masters and two grandmasters, without a charger
> and without phoning a friend.
>
> spike
>
>
>
> ------------------------------
>
> Message: 13
> Date: Tue, 11 Sep 2012 13:57:47 -0700
> From: Jeff Davis <jrd1415 at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID:
>         <CAHUTwkMVCrH_8Z7Oqf=4p8_Eax2sb9SThAh5jaPUYEqBymAGDg at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Mon, Sep 10, 2012 at 6:02 AM, Ben Zaiboc <bbenzai at yahoo.com> wrote:
>> Jeff Davis <jrd1415 at gmail.com> wrote:
>
>>> An advanced ai would have no such problems, and would be far more likely to conform to a higher ethical standard.
>
>>> That's what I was saying.
>>
>>
>> OK, I get that.  Sort of.  With a reservation on the idea that "Humans know what constitutes ethical behaviour".  Do we?
>
> I had to pause and give the question some thought.  I realized that my
> assertion -- that "Humans know what constitutes ethical behavior" --
> was just my "legacy" assumption, unvetted and unexamined.  Upon
> examination, I see that it isn't something I know for a fact, but
> rather something I have come to believe, without having looked at it
> closely.
>
> So, "Do we?"
>
> I seem to.  At least I have a robust notion of the difference between
> right and wrong.  Does that qualify?  And I extend that knowledge of
> myself to the rest of humanity.  Am I wrong?  The test of culpability
> -- indeed, sanity -- in a (TV) court of law is found in the phrase
> "Did the defendant know the difference between right and wrong?"  This
> suggests that the courts at least think anyone not mentally defective
> can reasonably be assumed to know the difference between right and wrong.
>
> So I'm pretty confident that most folks understand at least their own
> ethical system, and acknowledge the obligatory nature of adherence to
> "right" behavior.  But I'm open to challenges.
>
>> If so, why can't we all agree on it?
>
> Different cultures have different values, and within cultures there
> are subcultures with different values.  Different values result in
> different ethical systems.  There's the source of the disagreement.
>
>> (which is a different question to "why don't we all act in accordance with it?")  When you look at this, there's very little that we can all agree on, even when it comes to things like murder, stealing, and granting others the right to decide things for themselves.
>
> I see it as a cultural conflict not an ethical one.  Independent of
> culture, if you ask someone if adherence to their values -- obedience
> to the law(?) -- is obligatory in order to remain in good standing
> within their society, won't they say "Yes"?
>
>> Religion causes the biggest conflicts here, of course, but even if you ignore religious 'morality', there are still some pretty big differences of opinion.  Murder is bad.  Yes, of course.  But is it always bad?  Opinions differ.  When is it not bad?  Opinions differ widely.  Thieving is bad.  Yes, of course.  But is it always bad?  Opinions differ.  Etc.
>
> Again, cross-cultural conflicts, all.  Within their own
> culturally-distinct ethical system, all will agree that right behavior
> is obligatory (though they will all retain the option to defect when
> survival is at stake.  A man's gotta do, etc).
>
>> The question still remains:  What would constitute ethical behaviour for a superintelligent being?  I suspect we have no idea.
>
> Not being able to know the meaning of "superintelligent" is the first
> problem.  We can project in that direction through our experience with
> exceptionally intelligent humans, but beyond that darkness begins to
> fall. And beyond that, where an advanced level of complexity predicts
> the emergence of the unpredictable, we're friggin' totally in the
> dark.  I would very much like to hear someone attempt to penetrate the
> first level -- the penetrable level -- of darkness.
>
> Intelligence: an iterative process of data collection and processing
> for pattern recognition.
>
> More intelligent than that: more of the same, a change in degree not kind.
>
> Super-intelligent: ?  A change in kind.
>
>> We can't assume it would just take our ideas as being correct (assuming it could even codify a 'universal human ethics' in the first place).  It would almost certainly create its own from scratch.
>
> If its "upbringing", training, intellectual development is similar to
> that of a human child, then it will gradually absorb human-provided
> information.  It will achieve intellectual maturity in stages.  But it
> will follow this developmental process with human knowledge as its
> seed.  Like a child it will at first accept everything as "true".
> Then later it seems certain that it will self-enhance, part of
> which would include a re-examination of all prior knowledge and
> revision as called for.  Even so, revision cannot erase the causal
> priors.  You have to have something to start from; an empty mind has
> emptied itself of the tools for revision.  So "starting from scratch"
> is not possible.  I will however grant you something close to it.
>
>> We simply can't predict what that would lead to.
>
> Not with any confidence, anyway.
>
> Best, Jeff Davis
>
> Aspiring Transhuman / Delusional Ape
>     (Take your pick)
>           Nicq MacDonald
>
>
>
> ------------------------------
>
> Message: 14
> Date: Tue, 11 Sep 2012 22:16:28 +0100
> From: BillK <pharos at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Fermi Paradox and Transcension
> Message-ID:
>         <CAL_armh=uSKmHV7fe92GP8B8-ZrD=HmjgcbVbwaLAM6_v9gY0Q at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, Sep 11, 2012 at 9:29 PM, spike wrote:
> <snip>
>> There is a lot of work to do on this.  I worked out the orbit mechanics
>> first, because orbit mechanics are easier and cleaner than the thermal
>> models, and I know how to do those.  Now the hard work begins.
>>
>> Final shot: microprocessor technology has come a long way since Robert wrote
>> the above passages.  He didn't live to see a cell phone win a chess
>> tournament against several masters and two grandmasters, without a charger
>> and without phoning a friend.
>>
>>
>
> I found a copy of Robert's paper (copyright 1997-2000) on the Wayback Machine.
> <http://web.archive.org/web/20080918090527/http://www.aeiveos.com:8080/~bradbury/MatrioshkaBrains/MatrioshkaBrainsPaper.html>
>
> Although, as you say, it probably doesn't include later revisions, you
> might like to store a copy.
>
> BillK
>
>
> ------------------------------
>
> Message: 15
> Date: Tue, 11 Sep 2012 17:22:12 -0400
> From: Will Steinberg <steinberg.will at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] ethics vs intelligence, RE: Fermi Paradox and
>         Transcension
> Message-ID:
>         <CAKrqSyEiSipVE93rwyRZJ_wk3O0GSyMroksXAWzzwikHO4kvXw at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> There are no ethics, the proof being Godel's: in any ethical framework,
> there exists a situation whose ethicity cannot be determined.  Thus there
> is no correct ethical system.  It's all up to you: decide what you believe
> and then do or don't institute it in your reality.
> On Sep 11, 2012 4:27 PM, "spike" <spike66 at att.net> wrote:
>
>>
>> >> But yes, I am inclined to concede that as a stupid human is less
>> >> likely to see a blatant contradiction than a clever one...
>>
>> > I doubt that. A stupid human is more likely to see certain actions as
>> just
>> plain 'wrong'. ...
>>
>>
>> Heh.  All ethical dilemmas seem to pale in comparison to those presented to
>> the families of Alzheimer's patients.
>>
>> For instance, imagine an AD patient who seems partially OK some mornings
>> for
>> the most part, but nearly every afternoon and evening tends to grow more
>> and
>> more agitated, confused, lost, terrified, angry, worried, combative,
>> clearly
>> not enjoying life.  But the patient sometimes has a good day, and on those
>> occasions clearly states a preference to stay in their own home until there
>> is nothing left of the brain.  When is it time to check the patient into
>> elder care?
>>
>> Easy, right?  OK, what if the patient's spouse is doing something wrong with
>> the medication, such as giving the patient large doses of useless vitamins,
>> on pure faith since Paul Harvey said they are good for this or that?  What
>> if you come to suspect the patient is receiving sleep aids in the middle of
>> the day, and the rest of the family doesn't know?  What is the right thing
>> to do there?  Ignore one's own suspicion and go along, knowing that if a
>> patient is suffering, well hell, it isn't suffering to be asleep, ja?
>> Apparently AD doesn't hurt in the sense that it causes pain, so it doesn't
>> keep one awake as something like arthritis would, but the suffering is
>> real.
>> If a spouse decided the person is better off sleeping most of the time, is
>> it appropriate to second-guess that spouse?  Come on extro-ethics hipsters,
>> think hard, suggest the right answers, and while you are at it, do again
>> make the case that ethical behavior and intelligence are related please?
>> And if you answer that one, please try to convince me that a machine-based
>> super-intelligence will be super ethical, and if you succeed at either of
>> those, I will feel much better thanks.
>>
>> spike
>>
>>
>
> ------------------------------
>
> Message: 16
> Date: Tue, 11 Sep 2012 22:59:11 +0100
> From: BillK <pharos at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] ethics vs intelligence, RE: Fermi Paradox and
>         Transcension
> Message-ID:
>         <CAL_armge3u+7Q5q4kjQZMshz8ZxrgQdy8hxTw7nDK1tMiDtWkQ at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, Sep 11, 2012 at 10:22 PM, Will Steinberg wrote:
>> There are no ethics, the proof being Godel's: in any ethical framework,
>> there exists a situation whose ethicity cannot be determined.  Thus there is
>> no correct ethical system.  It's all up to you: decide what you believe and
>> then do or don't institute it in your reality.
>>
>>
>
> Those are my principles, and if you don't like them... well, I have others.
>     Groucho Marx
>
>
> BillK
>
>
> ------------------------------
>
> Message: 17
> Date: Tue, 11 Sep 2012 15:28:36 -0700
> From: Keith Henson <hkeithhenson at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: [ExI] On the brink of food riots
> Message-ID:
>         <CAPiwVB4f=2-_O5mqW2tz8NhEr963yWmqZ6RoKYk-dMK2MHEfwQ at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> The US government policy of converting food (corn) to vehicle fuel has
> "interesting" consequences, though they probably only brought the
> crisis forward a few years.
>
> For almost a decade I have talked here about wars and related
> social disruptions as the consequence of a bleak future.  Some of
> those take years to develop, as the xenophobic memes build up.
>
> Food shortages (i.e., high prices) have much faster effects.  The
> thing that is particularly disturbing about this report is how close
> we seem to be to really widespread food riots.
>
> http://arxiv.org/pdf/1108.2455v1.pdf
>
> Given the number of people on food stamps, the US may not be
> immune to trouble from global food price rises.
>
> Keith
>
>
> ------------------------------
>
> Message: 18
> Date: Tue, 11 Sep 2012 23:00:41 -0400
> From: Mike Dougherty <msd001 at gmail.com>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] Power sats as weapons
> Message-ID:
>         <CAOJFdbK0TNopJvURtL0+PYH9nRJjfPxBR298JQKD9mMBpWYxLA at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Tue, Sep 11, 2012 at 11:36 AM, John Clark <johnkclark at gmail.com> wrote:
>> That's true, you do need to deliver thermonuclear bombs. If you want to send
>> a package from Korea to New York City one way to do it is to strap the
>> package to the top of a rocket and blast it on a 10,000 mile ballistic
>> trajectory to that city, another much cheaper way to deliver your package is
>> to use UPS or Federal Express. There is another advantage, its anonymous. If
>> I launch a rocket with a nuclear warhead it's obvious that I sent it, but if
>> New York were to just blow up one day, well who knows how it happened.
>
> I was curious to know if you were aware that you wrote this on 9/11.
>
> I don't know if it's irony or synchronicity, but I thought it was noteworthy.
>
>
> ------------------------------
>
> Message: 19
> Date: Wed, 12 Sep 2012 12:32:07 +0100
> From: Anders Sandberg <anders at aleph.se>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Subject: Re: [ExI] ethics vs intelligence
> Message-ID: <505072B7.5000202 at aleph.se>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 11/09/2012 22:22, Will Steinberg wrote:
>>
>> There are no ethics, the proof being Godel's: in any ethical
>> framework, there exists a situation whose ethicity cannot be
>> determined.  Thus there is no correct ethical system.  It's all up to
>> you: decide what you believe and then do or don't institute it in your
>> reality.
>>
>
> That is obviously false. Here is a consistent and complete moral system:
> "everything is permitted".
>
> It is worth distinguishing ethics and morality. A morality is a system
> of actions (or ways of figuring them out) that are considered to be
> right. Ethics is the study of moral systems, whether in the form of you
> thinking about what you think is right or wrong, or the academic pursuit
> where thick books get written. A lot of professional ethics is
> meta-ethics, thinking about ethics itself (what the heck is it? what it
> can and cannot achieve? how can we find out?), although practical
> ethicists do have their place.
>
> Now, I think Will is right in general: for typical moral systems there
> are situations that are undecidable as "right" or "wrong" (or have
> uncomputable values, if you like a more consequentialist approach). They
> don't even need to be tricky Gödel- or Turing-type situations, since
> finite minds with finite resources often find that they cannot analyse
> the full ramifications. Some systems are worse: Kant famously forces you
> to analyse *and understand* the full moral consequences of everybody
> adopting your action as a maxim, while rule utilitarianism just wants
> you to adopt the rules that given current evidence will maximize utility
> (please revise them as more evidence arrives or your brain becomes better).
>
> But this doesn't mean such systems are pointless. Unless you are a
> catatonic nihilist you will think that some things are better than
> others, and adopting a policy of action that produces more of the good
> is rational. This is already a moral system! (at least in some ethical
> theories) A lot of our world consists of other agents with similar (but
> possibly not identical) concerns. Coordinating policies often produces
> even better outcomes, so we have reasons to express policies succinctly
> to each other so we can try to coordinate (and compressed formulations
> of policies often make them easier to apply individually too: cached
> behaviors are much quicker than arduously calculating the right thing
> for every situation).
>
> [ The computational complexity of moral systems is an interesting topic
> that I would love to pursue. There are also cool links to statistical
> learning theory - what moral systems can be learned from examples, and
> do ethical and meta-ethical principles provide useful boundary
> conditions or other constraints on the models? ]
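
A throwaway illustration of that learning-from-examples framing (toy features,
toy labels, a nearest-neighbour "moral system"; none of it a serious proposal):

    # Situations as (harm, consent, benefit) vectors; the learned "morality"
    # is just a lookup of the nearest labelled example.
    EXAMPLES = [
        ((0.9, 0.0, 0.1), "forbidden"),
        ((0.1, 1.0, 0.8), "permitted"),
        ((0.5, 1.0, 0.9), "permitted"),
        ((0.7, 0.0, 0.9), "forbidden"),
    ]

    def judge(situation):
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, label = min(EXAMPLES, key=lambda ex: dist(ex[0], situation))
        return label

    print(judge((0.8, 0.0, 0.5)))   # -> forbidden
    print(judge((0.2, 1.0, 0.6)))   # -> permitted

The learning-theory questions are then the usual ones: how many examples pin
down the rule, and what prior constraints (meta-ethical principles?) shrink
the hypothesis space.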
>
> --
> Anders Sandberg,
> Future of Humanity Institute
> Philosophy Faculty of Oxford University
>
>
>
> ------------------------------
>