From agrimes at speakeasy.net Mon Nov 1 00:19:40 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Sun, 31 Oct 2010 20:19:40 -0400
Subject: [ExI] Flash of insight...
In-Reply-To: <4CCDEE41.20706@canonizer.com>
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com>
Message-ID: <4CCE079C.4010102@speakeasy.net>

> And remember that there are two parts to most conscious perception.
> There is the conscious knowledge, and its referent. For out of body
> experiences, the knowledge of our 'spirit' or 'I' leaves our knowledge
> of our body (all in our brain).
> Our conscious knowledge of our body has a referent in reality, but our
> knowledge of this 'spirit' does not. Surely in the future we'll be able
> to alter and represent all this conscious knowledge any way we want.
> And evolution surely had survival reasons for usually representing
> this 'I' just behind our knowledge of our eyes.

Interesting. I don't seem to have any such perception. I see what I see, I type what I type, but I'm not, metaphysically speaking, directly present in any of my own perceptions. I have no perception at all of being "inside my head" -- I am my head. =P It seems perfectly natural to me.

People are always talking about this concept of "self esteem". WTF is that? I mean, it's meaningless to either hold one's self in esteem or contempt. Generally, by my appearance and sometimes by my actions, I do display a lack of self-consciousness. =\ I'm not sure if that's directly related.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From jrd1415 at gmail.com Mon Nov 1 00:21:27 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Sun, 31 Oct 2010 17:21:27 -0700
Subject: [ExI] Wind Power Without the Blades
In-Reply-To: <4CCADE6F.30603@satx.rr.com>
References: <4CCADE6F.30603@satx.rr.com>
Message-ID: 

On Fri, Oct 29, 2010 at 7:47 AM, Damien Broderick wrote:
> Here's another improvement over the first generation pinwheel-on-a-stick.

Don't know how bird- or bat-friendly it is, though.

http://nextbigfuture.com/2010/10/order-of-magnitude-enhancement-of-wind.html

Best, Jeff Davis

"Everything's hard till you know how to do it." Ray Charles

From possiblepaths2050 at gmail.com Mon Nov 1 06:43:10 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Sun, 31 Oct 2010 23:43:10 -0700
Subject: [ExI] Atomic rockets science fiction/fact online source!
Message-ID: 

This science fiction/fact website talks about the atomic rockets that were so popular in the speculative fiction of many decades past, and about how to imbue some sound science into one's science fiction, if you want your characters to travel the galaxy in one of these "retro" vehicles....
http://www.projectrho.com/rocket/ John : ) From jonkc at bellsouth.net Mon Nov 1 16:48:36 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 1 Nov 2010 12:48:36 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCDE6E0.3020008@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> Message-ID: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> This universal obsession with the original makes me wonder if it could be the result of an innate flaw in our mental wiring; otherwise it's difficult to explain something like the persistent irrationality in the art market. People will happily pay 140 million dollars for an original Jackson Pollock abstract painting, but those same people wouldn't pay 5 dollars for a copy so good they couldn't tell the difference, a copy so good it would take a team of world class art experts many hours of close study to tell the difference; and even then the difference wouldn't be that one was better than the other, just that they had at last found a tiny difference between the two. Up to now that sort of erroneous thinking hasn't caused enormous problems, it just led some rich men into making some very stupid purchases, but during the singularity that sort of dementia could become much more serious. Unless you can develop software fixes to mitigate the wiring errors in your head and put aside the Mighty Original dogma then you will be dog meat in the singularity. Well..., you probably will be anyway but at least you'll have a chance. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Nov 1 18:21:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 01 Nov 2010 13:21:20 -0500 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> Message-ID: <4CCF0520.9000601@satx.rr.com> On 11/1/2010 11:48 AM, John Clark wrote: > This universal obsession with the original makes me wonder if it could > be the result of an innate flaw in our mental wiring; otherwise it's > difficult to explain something like the persistent irrationality in the > art market. It's very easy to understand, in a culture that fetishizes individual ownership. Once, only the wealthy could afford to pay an excellent painter to handmake a likeness of the family, the residence, the dog or the god. These were unique and occasionally were even prized for their aesthetic value. With what is called by scholars The Age of Mechanical Reproduction, suddenly a thousand or a million pleasing or useful indistinguishable objects could be turned out like chair legs. Art-as-index-of-wealth and art-as-index-of-superior taste had to adjust, valorizing the individual work, and especially the item that could not be a copy. When nanotech arrives, capable of replicating the most distinctive and rare items, this upheaval will happen again. Have you ever seen a real van Gogh? The thick raised edges of the paint, catching the light differently from different angles? 
Next to that, printed reproductions are dull, faithless traitors. If nano makes it possible to compile an exact copy in three dimensions, only the fourth will be lost--and that irretrievably, except to the most extreme tests. We'll see increasingly what we have seen as avant-garde for a century: evanescent art, performance, destruction of an art work after its creation. And in addition, a widespread downward revaluation of originals *of the art-work kind*. All of this might have some bearing on how individuals regard *themselves* as "originals", but we have no experiences of nearly exact human copies other than the near resemblance of twins, triplets, etc. Certainly monozygotic "copies" of people usually have a marked fondness for each other, but they don't consider each other as mutually fungible. Damien Broderick From pjmanney at gmail.com Mon Nov 1 20:32:47 2010 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 1 Nov 2010 13:32:47 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 11:21 AM, Damien Broderick wrote: > When nanotech arrives, capable of replicating the most distinctive and rare > items, this upheaval will happen again. Have you ever seen a real van Gogh? > The thick raised edges of the paint, catching the light differently from > different angles? Next to that, printed reproductions are dull, faithless > traitors. If nano makes it possible to compile an exact copy in three > dimensions, only the fourth will be lost--and that irretrievably, except to > the most extreme tests. We'll see increasingly what we have seen as > avant-garde for a century: evanescent art, performance, destruction of an > art work after its creation. And in addition, a widespread downward > revaluation of originals *of the art-work kind*. I agree with Damien on most of his post. However, I disagree on the downward revaluation. Let me add something from my own experience raised in the art world. Right now, you can buy copies of famous works of art, made with oil, "painted" on canvas. For about $300, I can have a "handmade" and same size oil copy of Van Gogh's Starry Night: http://www.1st-art-gallery.com/Vincent-Van-Gogh/Starry-Night.html They don't diminish Van Gogh's original one bit. It boils down to one word: provenance. It's the most important aspect determining value in a piece AFTER rarity/culturally agreed value. Nanofabbing affects rarity. It doesn't affect provenance. And it doesn't even have to apply to art. If I clean out my attic, the items go to the trash bin or Goodwill. When they cleaned out Marilyn Monroe's attic, even her x-rays were valuable. http://www.nydailynews.com/money/2010/06/28/2010-06-28_marilyn_monroes_chest_xray_from_1954_sells_for_45000_at_las_vegas_auction.html The auctioned contents of Jackie Kennedy Onassis' attic (she apparently threw nothing away) brought a total of $50 million to her estate. Even if I owned the exact same triple-strand pearl necklace, rocking chair and fountain pens, you can bet mine wouldn't! And why should it? The buyers were purchasing history. Not jewelry, furniture or office supplies. Provenance has an important place in the art market. 
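To make the provenance point concrete: a provenance record is essentially a chain-of-custody data structure that lives apart from the object itself. Here is a minimal sketch in Python (the fields, the hashing scheme, and the owner entries are purely illustrative assumptions, echoing the lineage discussed below, not any real registry's format):

```python
import hashlib
import json

# Provenance as a hash-linked chain of custody records. Duplicating the
# artwork's atoms does not duplicate this history, and tampering with an
# early record changes every hash that follows it.
# (Illustrative sketch only; fields and scheme are assumptions.)
def add_record(prev_hash, owner, year):
    entry = {"prev": prev_hash, "owner": owner, "year": year}
    digest = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry, digest

_, h = add_record(None, "Vincent van Gogh", 1889)
_, h = add_record(h, "Theo van Gogh", 1890)
_, h = add_record(h, "Museum of Modern Art", 1941)
print("current head of the chain:", h)
```

A nanofabber can copy the canvas; it cannot retroactively insert that copy into a chain like this.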
Your nanomade Van Gogh may look as good as the real thing, but was it owned by an established lineage, from the hand of Vincent, to his brother/dealer Theo, to the Van Gogh family and dealers to MOMA? http://www.moma.org/collection/provenance/provenance_object.php?object_id=79802 Or how about this Paul Gauguin masterpiece, owned by fellow artist Edgar Degas? http://www.moma.org/collection/provenance/provenance_object.php?object_id=78621 Or Picasso's famous portrait of Gertrude Stein, given in her will to the Metropolitan Museum of Art. That's as good a provenance as you're going to find! http://wings.buffalo.edu/english/faculty/conte/syllabi/377/Images/Ray_Stein.jpg http://www.nytimes.com/2010/06/12/arts/12iht-melik12.html I don't care how many portraits of Stein you're going to make in your nanofabber. The history of the original in the Met, held in Picasso's and Stein's hands and so important in art history, can't be replicated and will retain its value -- as long as no one mixes the two up and there are people with the ego to stoke and means to own it. ;-) PJ From agrimes at speakeasy.net Mon Nov 1 21:29:59 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 01 Nov 2010 17:29:59 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> Message-ID: <4CCF3157.5080602@speakeasy.net> > Unless you can develop software fixes to mitigate the > wiring errors in your head and put aside the Mighty Original dogma then > you will be dog meat in the singularity. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Can we please have a lengthy, protracted and heated argument over this last line here? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From possiblepaths2050 at gmail.com Mon Nov 1 21:57:50 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 1 Nov 2010 14:57:50 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF3157.5080602@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF3157.5080602@speakeasy.net> Message-ID: John K Clark wrote: Unless you can develop software fixes to mitigate the wiring errors in your head and put aside the Mighty Original dogma then you will be dog meat in the singularity. Well..., you probably will be anyway but at least you'll have a chance. >>> John, the odds are that you will have died of old age before the Singularity happens. I sure hope you are signed up for cryonics (and the odds are not so great for that, either)! John ; ) On 11/1/10, Alan Grimes wrote: >> Unless you can develop software fixes to mitigate the >> wiring errors in your head and put aside the Mighty Original dogma then >> you will be dog meat in the singularity. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > Can we please have a lengthy, protracted and heated argument over this > last line here? > > > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. 
> >

From spike66 at att.net Mon Nov 1 21:46:50 2010
From: spike66 at att.net (spike)
Date: Mon, 1 Nov 2010 14:46:50 -0700
Subject: [ExI] failure to communicate
Message-ID: <000001cb7a0e$4ab444d0$e01cce70$@att.net>

I saw this at my son's favorite zoo yesterday. It really got me to thinking about such things as my having walked over this access cover about 20 to 30 times before I noticed the epic fail. Millions likely walked over it and never noticed. So how is it that so much happens all around us that we never see? Or on the other hand, what kind of silly goofball actually reads manhole covers?

Why is it that you and I see a pile of ants and we are all aahhh jaysus, where's my can of Raid; yet Charles Darwin sees the same thing and writes the stunning seventh chapter of Origin of Species?

I want to be like Darwin when I grow up.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.jpg
Type: image/jpeg
Size: 33659 bytes
Desc: not available
URL: 

From pharos at gmail.com Mon Nov 1 23:07:19 2010
From: pharos at gmail.com (BillK)
Date: Mon, 1 Nov 2010 23:07:19 +0000
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: 

On Mon, Nov 1, 2010 at 8:32 PM, PJ Manney wrote:
>
> I don't care how many portraits of Stein you're going to make in your
> nanofabber. The history of the original in the Met, held in Picasso's
> and Stein's hands and so important in art history, can't be replicated
> and will retain its value -- as long as no one mixes the two up and
> there are people with the ego to stoke and means to own it. ;-)
>

I appreciate the *present* importance of provenance in the art and antiques world. People pay a million dollars for a painting with provenance because they expect to be able to sell it on to someone else for two million dollars. It's an investment. That's really the only reason to pay extra for provenance.

When nanotech lets everyone have their own Van Gogh, provenance will become worthless, because there will be no way to tell if the certificate is attached to the original or a nanocopy identical down to the atomic level. (Even today expert forgers forge the provenance as well, of course).

I would distinguish between provenance and 'intrinsic value'. A Walmart sweater that was once worn by George Bush is still just a Walmart sweater.

BillK

From possiblepaths2050 at gmail.com Mon Nov 1 23:48:29 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Mon, 1 Nov 2010 16:48:29 -0700
Subject: [ExI] An Aubrey deGrey documentary
Message-ID: 

I realize many of you have probably already seen this, but for those of you who have not, I recommend it. A bittersweet production that tries to show both sides, and even peeks into Aubrey's inner life...
http://video.google.com/videoplay?docid=-3329065877451441972#

John

From thespike at satx.rr.com Mon Nov 1 23:48:12 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 01 Nov 2010 18:48:12 -0500
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: <4CCF51BC.6070708@satx.rr.com>

On 11/1/2010 6:07 PM, BillK wrote:
> I would distinguish between provenance and 'intrinsic value'.
> A Walmart sweater that was once worn by George Bush is still just a
> Walmart sweater.

No, it's a Walmart sweater with cooties.

From spike66 at att.net Tue Nov 2 03:12:10 2010
From: spike66 at att.net (spike)
Date: Mon, 1 Nov 2010 20:12:10 -0700
Subject: [ExI] prediction for 2 November 2010
Message-ID: <000001cb7a3b$bd3aaf30$37b00d90$@att.net>

Tomorrow the US has its biennial symbolic insurgency in the form of congressional elections. I make the following prediction: a middle-of-the-road outcome, where the currently out-of-power party gains a net of 55 seats in the House and 7 (perhaps 8) in the Senate. Once again, we libertarians will go home empty-handed.

I predict something else as well: after tomorrow, both major parties will be surprised and disappointed with the outcome and will be accusing the other of election fraud. Stay tuned.

spike

From avantguardian2020 at yahoo.com Tue Nov 2 10:55:20 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Tue, 2 Nov 2010 03:55:20 -0700 (PDT)
Subject: [ExI] Fusion Rocket
In-Reply-To: 
References: 
Message-ID: <319941.52817.qm@web65601.mail.ac4.yahoo.com>

John Grigg's post on atomic rockets inspired me to commit to virtual paper a concept design for a fusion rocket. So feel free to beat up on this idea for a while.

http://sollegro.com/fusion_rocket/

Stuart LaForge

"To be normal is the ideal aim of the unsuccessful." -Carl Jung

From bbenzai at yahoo.com Tue Nov 2 12:42:58 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 2 Nov 2010 12:42:58 +0000 (GMT)
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
Message-ID: <495099.98419.qm@web114412.mail.gq1.yahoo.com>

PJ Manney wrote:
> I don't care how many portraits of Stein you're
> going to make in your
> nanofabber. The history of the original in the Met,
> held in Picasso's
> and Stein's hands and so important in art history,
> can't be replicated
> and will retain its value -- as long as no one mixes
> the two up and
> there are people with the ego to stoke and means to
> own it. ;-)

Let me just check that I understand this correctly.

If an art dealer makes a molecularly-precise copy of a famous artwork, so that the two are literally completely indistinguishable, and mixes them up so that even he doesn't know which is the original, he has thereby destroyed something?

Presumably this is only true if he admits to doing it. If he never admits to it, and nobody ever finds out, the something is not destroyed.

Or am I missing something?
Ben Zaiboc

From dan_ust at yahoo.com Tue Nov 2 13:13:45 2010
From: dan_ust at yahoo.com (Dan)
Date: Tue, 2 Nov 2010 06:13:45 -0700 (PDT)
Subject: [ExI] failure to communicate
In-Reply-To: <000001cb7a0e$4ab444d0$e01cce70$@att.net>
References: <000001cb7a0e$4ab444d0$e01cce70$@att.net>
Message-ID: <605562.38403.qm@web30101.mail.mud.yahoo.com>

Regarding what you do when you see ants, speak for yourself. :)

Regards,

Dan

From: spike
To: ExI chat list
Sent: Mon, November 1, 2010 5:46:50 PM
Subject: [ExI] failure to communicate

I saw this at my son's favorite zoo yesterday. It really got me to thinking about such things as my having walked over this access cover about 20 to 30 times before I noticed the epic fail. Millions likely walked over it and never noticed. So how is it that so much happens all around us that we never see? Or on the other hand, what kind of silly goofball actually reads manhole covers? Why is it that you and I see a pile of ants and we are all aahhh jaysus, where's my can of Raid; yet Charles Darwin sees the same thing and writes the stunning seventh chapter of Origin of Species?

I want to be like Darwin when I grow up.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com Tue Nov 2 15:17:58 2010
From: pharos at gmail.com (BillK)
Date: Tue, 2 Nov 2010 15:17:58 +0000
Subject: [ExI] DARPA funded 100 year starship program
In-Reply-To: 
References: 
Message-ID: 

On Tue, Oct 19, 2010 at 1:15 AM, John Grigg wrote:
> Well, at least DARPA seems capable of longterm thinking...
>

More information is now available. Apparently DARPA are NOT planning to build a starship. The commentators got a bit over-excited.

Quote:
DARPA's press release actually deals with HOW starships should be studied, rather than studying the starships themselves. They want help from Ames to consider the business case for a non-government organization to provide such services that would use philanthropic donations to make it happen.

Quoting from DARPA's news release:
"The 100-Year Starship study looks to develop the business case for an enduring organization designed to incentivize breakthrough technologies enabling future spaceflight."

Quote from the press release:
"We endeavor to excite several generations to commit to the research and development of breakthrough technologies and cross-cutting innovations across a myriad of disciplines such as physics, mathematics, biology, economics, and psychological, social, political and cultural sciences, as well as the full range of engineering disciplines to advance the goal of long-distance space travel, but also to benefit mankind."
------------

This may come as a surprise to many, as DARPA is a military defense agency. (!) But DARPA adds...
"DARPA also anticipates that the advancements achieved by such technologies will have substantial relevance to Department of Defense (DoD) mission areas including propulsion, energy storage, biology/life support, computing, structures, navigation, and others."
-------------------------

Ah-ha! That explains it. DARPA's plan is apparently to encourage private funding of breakthrough technologies that DARPA can make use of in military endeavours. So not quite so wonderful as at first sight.
BillK From jonkc at bellsouth.net Tue Nov 2 15:53:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 11:53:56 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <1F812800-56CF-40F2-A059-15D385A29BAE@bellsouth.net> On Nov 1, 2010, at 2:21 PM, Damien Broderick wrote: > If nano makes it possible to compile an exact copy in three dimensions, only the fourth will be lost--and that irretrievably, except to the most extreme tests. I don't know what you mean by that. > All of this might have some bearing on how individuals regard *themselves* as "originals", but we have no experiences of nearly exact human copies Yes and for the same reason Evolution had little incentive to develop our emotional hunches regarding this issue so that they corresponded with reality. So if we have no experience on this matter yet, and if there is no reason to think that emotion will lead us in the correct direction, then if we are ever in a situation where it's important to make correct decisions involving the original-copy distinction we will only have logic to rely on. Even if you are so lucky as to live long enough to enter the singularity you will never survive it unless bronze age beliefs and superstitions are abandoned. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Nov 2 16:03:01 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 12:03:01 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> On Nov 1, 2010, at 4:32 PM, PJ Manney wrote: > I don't care how many portraits of Stein you're going to make in your > nanofabber. The history of the original in the Met, held in Picasso's > and Stein's hands and so important in art history, can't be replicated > and will retain its value Art will retain its value only as long as people retain irrational and downright contradictory views regarding the original and the copy, but as there is no chance such people will survive the singularity there is no chance original art with its high value will survive the singularity either. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Nov 2 16:18:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 12:18:11 -0400 Subject: [ExI] Fusion Rocket In-Reply-To: <319941.52817.qm@web65601.mail.ac4.yahoo.com> References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: <31CBC069-35F9-4945-AFD7-873F852DA9EC@bellsouth.net> I sent this to the list back in 2002. ========================= The efficiency of a rocket depends on its exhaust velocity, the faster the better. 
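How much better is easy to quantify: the Tsiolkovsky rocket equation gives delta-v = v_e * ln(initial mass / final mass). Here is a minimal Python sketch using the exhaust velocities quoted below (the mass ratio of 5 is an arbitrary assumption for illustration, and the classical formula is only approximate once the exhaust approaches light speed):

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m_initial / m_final).
# Classical (non-relativistic) form; treat the fission-fragment entry
# as a rough figure only.
def delta_v(v_exhaust_m_s, mass_ratio):
    return v_exhaust_m_s * math.log(mass_ratio)

# Exhaust velocities (m/s) as quoted in the post below.
engines = {
    "chemical (shuttle O2/H2)": 4.5e3,
    "NERVA nuclear thermal":    8.0e3,
    "ion engine":               8.0e4,
    "fission fragments":        2.0e8,
}

MASS_RATIO = 5.0  # assumed: 80% of liftoff mass is propellant
for name, ve in engines.items():
    print("%-25s delta-v ~ %.2e m/s" % (name, delta_v(ve, MASS_RATIO)))
```

With the same mass ratio, delta-v scales linearly with exhaust velocity, which is the whole case for chasing fission-fragment speeds.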
The space shuttle's oxygen-hydrogen engine has an exhaust velocity of about 4500 meters per second and that's pretty good for a chemical rocket; the nuclear heated rocket called NERVA tested in the 1960's had an exhaust velocity of 8000 meters per second, and ion engines are about 80,000. Is there any way to do better, much better, say around 200,000,000 meters per second? Perhaps. The primary products of a fission reaction are about that fast, but if you use Uranium 235 or Plutonium 239 the large bulk of the material will absorb the primary fission products and just heat up; that slows things way down. However the critical mass for the little-used element Americium-242 (half-life about a century) is less than 1% that of Plutonium. This would be great stuff to make a nuclear bomb you could put in your pocket, but it may have other uses.

In the January 2000 issue of Nuclear Instruments and Methods in Physics Research A, Yigal Ronen and Eugene Shwagerous calculate that a metallic film of Americium-242 less than a thousandth of a millimeter thick would undergo fission. This is so thin that rather than heat the bulk material, the energy of the process would go almost entirely into the speed of the primary fission products; they would go free. They figure an Americium-242 rocket could get to Mars in two weeks, not two years as with a chemical rocket. There are problems of course, engineering the rocket would be tricky and I'm not sure I'd want to be on the same continent as an Americium-242 production facility, but it's an interesting idea.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From atymes at gmail.com Tue Nov 2 16:00:16 2010
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 2 Nov 2010 09:00:16 -0700
Subject: [ExI] Fusion Rocket
In-Reply-To: <319941.52817.qm@web65601.mail.ac4.yahoo.com>
References: <319941.52817.qm@web65601.mail.ac4.yahoo.com>
Message-ID: 

The main problem is, current fusion reactor operators consider sustaining fusion for a few seconds to be "long duration", and have engineered several tricks to keep it going that long. (See the entire "inertial confinement" branch, for example: "it's 'contained' because we imploded it, for the duration of the implosion".) You'd need to keep it up for several minutes. If you could solve that problem, while keeping the fusion self-sustaining, you probably would not be far from having a commercially viable fusion reactor - as well as being much closer to a working fusion rocket.

On Tue, Nov 2, 2010 at 3:55 AM, The Avantguardian < avantguardian2020 at yahoo.com> wrote:

> John Grigg's post on atomic rockets inspired me to commit to virtual paper
> a
> concept design for a fusion rocket. So feel free to beat up on this idea
> for a
> while.
>
> http://sollegro.com/fusion_rocket/
>
> Stuart LaForge
>
> "To be normal is the ideal aim of the unsuccessful." -Carl Jung
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net Tue Nov 2 16:09:09 2010
From: spike66 at att.net (spike)
Date: Tue, 2 Nov 2010 09:09:09 -0700
Subject: [ExI] DARPA funded 100 year starship program
In-Reply-To: 
References: 
Message-ID: <003b01cb7aa8$48e199b0$daa4cd10$@att.net>

...
On Behalf Of BillK

On Tue, Oct 19, 2010 at 1:15 AM, John Grigg wrote:
>> Well, at least DARPA seems capable of longterm thinking...
>>
>More information is now available. Apparently DARPA are NOT planning to
>build a starship. The commentators got a bit over-excited.
>

During this discussion we saw the "suicide astronaut" concept, where the experts were saying a Mars mission would be a no-return. If you look thru the ExI archives from the 90s, that concept is all over the place in there. In about 1989 thru 1992, I did the calculations on that a hundred different ways, and every time it pointed to the same conclusion: if we land humans on the surface of Mars, even one human, in any kind of meaningful mission, it is a one-way trip. Many weights engineers in the 80s and 90s concluded likewise. Nothing has changed.

spike

From protokol2020 at gmail.com Tue Nov 2 16:42:44 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Tue, 2 Nov 2010 17:42:44 +0100
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net>
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net>
Message-ID: 

A balloon. It's an overpriced thing, this "originality". Pretty much everything can be overpriced and ballooned for some time, and that is what happened with "original art pieces".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pjmanney at gmail.com Tue Nov 2 17:08:53 2010
From: pjmanney at gmail.com (PJ Manney)
Date: Tue, 2 Nov 2010 10:08:53 -0700
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: 

On Mon, Nov 1, 2010 at 4:07 PM, BillK wrote:
> I appreciate the *present* importance of provenance in the art and
> antiques world. People pay a million dollars for a painting with
> provenance because they expect to be able to sell it on to someone
> else for two million dollars. It's an investment. That's really the
> only reason to pay extra for provenance.

No, it's not. You're missing the psychology behind the entire art, antique and collectibles markets. Lots of people buy provenanced items because 1) they're crazy fans of the creator or previous owner; 2) they need to feel the item in THEIR hot little hands and its proximity brings them that much closer to the fame/infamy/whatever associated with the object; 3) the ego-investment of owning it outstrips the financial investment (much more common than you think). The investment value of a Babe Ruth baseball means squat to a rabid Yankees fan. And owning a famous Picasso (there aren't a lot of famous ones) makes its [male] owner feel his [male] member swell with pride... ;-)

If you ever spent time at Sotheby's, Christie's or any high-powered auction house and watched the insanity all around you, you'd get what I mean. Real collectors don't care squat about increasing their investment. Once they own it, it's THEIRS. [Daffy Duck: "Go, go, go!
Mine, mine, mine!"] Those who buy for investment -- and there are many these days -- are simply acquisitive and usually only the ego/genital-inflation applies. [Paging Steve Wynn...] But that doesn't mean there isn't some bat-s#!t crazy collector waiting in the wings to buy it if Wynn doesn't. You need to separate the post-scarcity economics of everyday crap from the really unusual items. Almost all stuff will instantly lose value. We've seen the beginning of this already, as when eBay entered the marketplace and suddenly, the "rarity" wasn't so rare anymore and prices dropped like buckshot-filled ducks from the sky. But the insanely special item will retain value IF YOU CAN PROVE IT IS WHAT IT CLAIMS. That's not impossible. Don't think identification based on atomic structure. Think identification based on proof of location/ownership. Then provenance is the only thing that's important. > When nanotech lets everyone have their own Van Gogh, provenance will > become worthless, because there will be no way to tell if the > certificate is attached to the original or a nanocopy identical down > to the atomic level. > (Even today expert forgers forge the provenance as well, of course). Yes, forgers do forge provenance -- in fact, most dealers forge items and provenance ALL THE TIME and MOST COLLECTORS KNOW THAT -- it's up to the collector to make sure the dealer is not full of crap. Big-time collecting is not for the faint of heart, ignorant or gullible. Which is why now, as in the future, the protection of original objects is a business in itself. As future technology makes originals harder to forge, future technology (and sleuthing) will make verification possible. Think of what's at stake in the market. The guys who pay hundreds of millions are willing to protect their investment. Or their passion. Or their privates. Which is what is really at stake. ;-) > I would distinguish between provenance and 'intrinsic value'. > A Walmart sweater that was once worn by George Bush is still just a > Walmart sweater. And that's why provenance IS important. Right now, GWB's sweat stains are worth money to someone, not the sweater. Picasso's real fingerprints are worth money to many people. Not the reproduction of them. These things may not have value to you, but based on collecting psychology, I am willing to bet money something immensely cool, like the originals of Van Gogh's Starry Night or Picasso's Guernica will have value in a nanofabbed future. Now, all this goes out the window in a post-apocalyptic future, when we're using Shakespeare's First Folio to wipe our buttocks. PJ From x at extropica.org Tue Nov 2 17:22:19 2010 From: x at extropica.org (x at extropica.org) Date: Tue, 2 Nov 2010 10:22:19 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 1:32 PM, PJ Manney wrote: > The history of the original in the Met, held in Picasso's > and Stein's hands and so important in art history, can't be replicated > and will retain its value -- as long as no one mixes the two up and > there are people with the ego to stoke and means to own it. ?;-) Against my better judgment I reenter the perennial identity debates. 
The value of the "original", whether an object of art or a human agent, is based entirely on perceived status--very real in terms of our evolutionarily derived nature and cultural context but nothing intrinsic. Yes, the history may be important information, but it's NOT a property of the object. The meaning of anything lies not in what it "is", but in what it does, as perceived in relation to the values of some observer, even when it is the observer.

We see through the eyes of our ancestors, for valid evolutionary reasons, just as our present system of social decision-making is based on competition over scarcity rather than cooperation for abundance; artwork and jewelry are prized more for their rarity than for their capacity to inspire; and the "self" is considered discrete and essential despite the synergistic advantages of diverse agency acting on behalf of an entirely fictitious entity.

Recognizing this is not to diminish the assumed "intrinsic" value of the art or the person, but to open up new opportunities for meaningful interaction with what is ultimately only perceived patterns of information.

- Jef

From x at extropica.org Tue Nov 2 17:38:19 2010
From: x at extropica.org (x at extropica.org)
Date: Tue, 2 Nov 2010 10:38:19 -0700
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: 

On Tue, Nov 2, 2010 at 10:08 AM, PJ Manney wrote:
> Lots of people buy provenanced
> items because 1) they're crazy fans of the creator or previous owner;
> 2) they need to feel the item in THEIR hot little hands and its
> proximity brings them that much closer to the fame/infamy/whatever
> associated with the object; 3) the ego-investment of owning it
> outstrips the financial investment (much more common than you think).
> The investment value of a Babe Ruth baseball means squat to a rabid
> Yankees fan. And owning a famous Picasso (there aren't a lot of
> famous ones) makes its [male] owner feel his [male] member swell with
> pride... ;-)

Yes. Just as the alpha chimp defends his mating privileges. But what of the bonobo, more inclined to give and receive favors...?

> You need to separate the post-scarcity economics of everyday crap from
> the really unusual items. Almost all stuff will instantly lose value.

Yes, referring to items valued for function rather than status.

> But the
> insanely special item will retain value IF YOU CAN PROVE IT IS WHAT IT
> CLAIMS.

Not if the values of the agent have evolved from hoarding to giving, taking to producing, narrow to broad self-interest. And this need not be at the biological level. A stronger driver and reinforcer of such change is a society and culture that rewards more altruistic behavior and we're already on that path.
- Jef

From msd001 at gmail.com Tue Nov 2 18:17:43 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Tue, 2 Nov 2010 14:17:43 -0400
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: 

I'm not sure originality matters in the sense of "this thing was created first" as much as the novelty of "this thing is unlike any thing that preceded it." It will be difficult to maintain uniqueness in a nanofabbed world, but if the artist sells new works under a non-disclosure agreement and copies show up everywhere then the artist may have a legal case against the purchaser. I doubt even the singularity will be enough to stop lawyers from making money.

I wonder how exact a copy this supposed nanofab future will produce. Ex: there is considerable notoriety in the world of 'high fashion' despite the fact that anyone clever enough to cut cloth and use a sewing machine could theoretically reproduce those articles worn by Paris runway models. Will the owner of the current 'original' Van Gogh allow it to be scanned to the molecular level to facilitate the perfect copy?

Until we have the ability to rearrange subatomic particles to literally create gold, such materials will continue to have a material worth that could retain inherent value. Conquistadors hammered Aztec/Inca gold statues into bricks for easier transport of the raw metal with no regard for the production items they were destroying. Those items would be worth far more than their weight in gold if found today. If found in the far future, are they again valued only for the weight of their materials? I guess if they could be copied to data and later reproduced at will, there's no inherent value in the item (assuming the pattern is not lost). I suppose this necessitates having the mass converted losslessly to energy and the energy credit applied to the owner of the converted object.

Even if this wondrous violation of physics becomes possible, greedy bankers (or politicians) will take a small fee during the transaction process. So even with a magical upload of mass to a communal energy pool there will (likely) be a fee directed to the bank that manages your share of the pool, there will be lawyers' fees for protecting novelty and uniqueness rights (as well as prosecuting violation of those rights) and politicians to tax individuals' consumption of the communal energy pool to download items back into physical reality. This post-singularity scenario isn't even zero-sum; it's negative-sum.

From spike66 at att.net Tue Nov 2 18:29:03 2010
From: spike66 at att.net (spike)
Date: Tue, 2 Nov 2010 11:29:03 -0700
Subject: [ExI] hot processors?
Message-ID: <000001cb7abb$d45d52f0$7d17f8d0$@att.net>

Question, please, for you microprocessor hipsters. I retired a seven-year-old desktop and replaced it with an HP Pavilion dv7 notebook. I have plenty of sims that I run on a regular basis, ones that need to run overnight. I ran a new one yesterday (it's an Excel macro) and found that it runs about six times faster than the 7-yr-old desktop. However, after about half an hour it conked. It didn't actually crash; in fact Excel didn't even stop. When I touched the mouse this morning, it resumed right where it left off, but it did nothing all night.
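One cheap way to find out when (not just whether) the machine stopped working overnight: run a heartbeat logger next to the sim and look for the gap in the timestamps the next morning. A minimal sketch (the log filename is an arbitrary choice):

```python
import datetime
import time

# Append a timestamp once a minute. After an overnight run, the first
# gap in the log shows when the machine suspended, slept, or throttled.
# (Illustrative sketch only; "heartbeat.log" is an arbitrary name.)
with open("heartbeat.log", "a") as log:
    while True:
        log.write(datetime.datetime.now().isoformat() + "\n")
        log.flush()
        time.sleep(60)
```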
Is that a feature of laptops? Can the batteries run down while the thing is plugged in to AC? Is there any reason why a laptop would not run continuously over night? It put out a lot of heat while it was running: perhaps there is some kind of thermal protection?

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jrd1415 at gmail.com Tue Nov 2 18:24:59 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Tue, 2 Nov 2010 11:24:59 -0700
Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations
Message-ID: 

All this talk of aliens got me thinking, and so a question popped into my head: "How old," I wondered, "is Gliese 581?"

Googled it. Wikipediaed it. Bingo! Citations 5 and 7 as follows:

5. "Star: Gl 581". Extrasolar Planets Encyclopaedia. http://exoplanet.eu/star.php?st=Gl+581. Retrieved 2009-04-27. "Mass 0.31 Msun, Age 8+3-1 Gyr"

7. Selsis 3.4 page 1382 "lower limit of the age that, considering the associated uncertainties, could be around 7 Gyr", "preliminary estimate", "should not be above 10-11 Gyr"

ANSWER: 7-11 billion years. Whereas our little neighborhood is a mere 4 billion years old. And of course, Gliese 581 is in the news lately on account of Gliese 581g.

I don't have to tell you where I'm going with this, do I? Hint: Three to seven billion years head start.

Oh, and by the way, you shouldn't assign military personnel more credibility than they deserve. At best they live in a bubble, at worst they're full-on Kool-aid junkies. Been there. Seen it. Generalizations -- particularly worshipful ones -- aren't helpful.

Best, Jeff Davis

"Everything's hard till you know how to do it." Ray Charles

From pjmanney at gmail.com Tue Nov 2 19:59:34 2010
From: pjmanney at gmail.com (PJ Manney)
Date: Tue, 2 Nov 2010 12:59:34 -0700
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net>
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net>
Message-ID: 

2010/11/2 John Clark :
> Art will retain its value only as long as people retain irrational and
> downright contradictory views regarding the original and the copy,

EXACTLY!!! Most of the list (as usual) is confusing the rationality of the Gedankenexperiment with the irrationality of real, on-the-ground human behavior.

> but as
> there is no chance such people will survive the singularity there is no
> chance original art with its high value will survive the singularity
> either.

I'm not assuming the singularity. Nanofabbers don't define the singularity IMHO, because they don't assume ever-increasing AGI. I'm assuming post-scarcity economics. BIG difference.
PJ

From pjmanney at gmail.com Tue Nov 2 20:10:44 2010
From: pjmanney at gmail.com (PJ Manney)
Date: Tue, 2 Nov 2010 13:10:44 -0700
Subject: [ExI] THE MIGHTY ORIGINAL
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: 

On Tue, Nov 2, 2010 at 10:38 AM, wrote:
> Not if the values of the agent have evolved from hoarding to giving,
> taking to producing, narrow to broad self-interest. And this need not
> be at the biological level. A stronger driver and reinforcer of such
> change is a society and culture that rewards more altruistic behavior
> and we're already on that path.

You and I have talked at great length about Non Zero Sum behavior, etc. And while I fervently agree with you and Robert Wright that the arrow of history has demonstrated an increase of empathetic and altruistic behavior and increased context (for many reasons), I think nanofabbers will occur too soon in our future for us to have evolved either biologically or culturally beyond our chimp-brains entirely.

PJ

From sparge at gmail.com Tue Nov 2 19:47:23 2010
From: sparge at gmail.com (Dave Sill)
Date: Tue, 2 Nov 2010 15:47:23 -0400
Subject: [ExI] hot processors?
In-Reply-To: <000001cb7abb$d45d52f0$7d17f8d0$@att.net>
References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net>
Message-ID: 

2010/11/2 spike
>
> Is that a feature of laptops? Can the batteries run down while the thing
> is plugged in to AC? Is there any reason why a laptop would not run
> continuously over night? It put out a lot of heat while it was running:
> perhaps there is some kind of thermal protection?

I suspect it's some kind of fancy power-saving mode. You can probably disable that while it's plugged in. You might also want to consider keeping it on a laptop cooler when it's running unattended for a long time to reduce the fire hazard.

-Dave

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thespike at satx.rr.com Tue Nov 2 20:13:40 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 02 Nov 2010 15:13:40 -0500
Subject: [ExI] more altruistic behavior
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com>
Message-ID: <4CD070F4.1070108@satx.rr.com>

On 11/2/2010 12:38 PM, x at extropica.org wrote:

> A stronger driver and reinforcer of such
> change is a society and culture that rewards more altruistic behavior
> and we're already on that path.

Hahahahahahahahaha! ( uhrgh, groans Krusty )

Well, let's see the results of today's US elections for an index.

Damien Broderick

[yes, I know, just a blip in the trajectory from appalling-horror-then to somewhat-moderated-horror-now]

From stefano.vaj at gmail.com Tue Nov 2 20:04:02 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 2 Nov 2010 21:04:02 +0100
Subject: [ExI] Flash of insight...
In-Reply-To: <4CCDE6E0.3020008@satx.rr.com>
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com>
Message-ID: 

On 31 October 2010 23:00, Damien Broderick wrote:
> Also interesting that in NDE reports, many people claim to experience
> themselves as "floating above" their damaged bodies (although still
> "visuo"-centric, I gather).

I believe there were experiments a couple of years ago inducing out-of-body "delocalisation" in perfectly healthy people. Interesting, but not such a big deal, IMHO.

-- Stefano Vaj

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefano.vaj at gmail.com Tue Nov 2 19:59:42 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 2 Nov 2010 20:59:42 +0100
Subject: [ExI] Let's play What If.
In-Reply-To: 
References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID: 

2010/10/31 John Clark
> Actually it's quite difficult to come up with a scenario where the copy DOES
> instantly know he is the copy.
>

Mmhhh. Nobody ever feels like a copy. What you could become aware of is that somebody forked in the past (as in "a copy left behind"). That he is the "original" is a matter of perspective...

-- Stefano Vaj

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefano.vaj at gmail.com Tue Nov 2 20:28:20 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 2 Nov 2010 21:28:20 +0100
Subject: [ExI] Fusion Rocket
In-Reply-To: 
References: <319941.52817.qm@web65601.mail.ac4.yahoo.com>
Message-ID: 

2010/11/2 Adrian Tymes
> The main problem is, current fusion reactor operators consider sustaining
> fusion for a few seconds to be "long duration", and have engineered several
> tricks to keep it going that long.
>

What's wrong with pulse propulsion detonating H-bombs one after another, V1-style?

-- Stefano Vaj

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dan_ust at yahoo.com Tue Nov 2 20:36:28 2010
From: dan_ust at yahoo.com (Dan)
Date: Tue, 2 Nov 2010 13:36:28 -0700 (PDT)
Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations
In-Reply-To: 
References: 
Message-ID: <524986.75237.qm@web30106.mail.mud.yahoo.com>

I recall a recent letter or article in _Science_ or _Nature_ that questioned whether there is a Gliese 581g after all. The data on this appear to be ambiguous.

Regards,

Dan

----- Original Message ----
From: Jeff Davis
To: ExI chat list
Sent: Tue, November 2, 2010 2:24:59 PM
Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations

All this talk of aliens got me thinking, and so a question popped into my head: "How old," I wondered, "is Gliese 581?"

Googled it.
Wikipediaed it. Bingo! Citations 5 and 7 as follows:

5. "Star: Gl 581". Extrasolar Planets Encyclopaedia. http://exoplanet.eu/star.php?st=Gl+581. Retrieved 2009-04-27. "Mass 0.31 Msun, Age 8+3-1 Gyr"

7. Selsis 3.4 page 1382 "lower limit of the age that, considering the associated uncertainties, could be around 7 Gyr", "preliminary estimate", "should not be above 10-11 Gyr"

ANSWER: 7-11 billion years. Whereas our little neighborhood is a mere 4 billion years old. And of course, Gliese 581 is in the news lately on account of Gliese 581g.

I don't have to tell you where I'm going with this, do I? Hint: Three to seven billion years head start.

Oh, and by the way, you shouldn't assign military personnel more credibility than they deserve. At best they live in a bubble, at worst they're full-on Kool-aid junkies. Been there. Seen it. Generalizations -- particularly worshipful ones -- aren't helpful.

Best, Jeff Davis

"Everything's hard till you know how to do it." Ray Charles

From thespike at satx.rr.com Tue Nov 2 21:11:59 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 02 Nov 2010 16:11:59 -0500
Subject: [ExI] Flash of insight...
In-Reply-To: 
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com>
Message-ID: <4CD07E9F.8040700@satx.rr.com>

On 11/2/2010 3:04 PM, Stefano Vaj wrote:
> I believe there were experiments a couple of years ago inducing
> out-of-body "delocalisation" in perfectly healthy people. Interesting,
> but not such a big deal, IMHO.

It's only a big deal given that several people who seemed to think that the sense of identity is innately constructed as being *behind your eyes* might be wrong about how this actually works at a deep level.

From scerir at alice.it Tue Nov 2 21:27:20 2010
From: scerir at alice.it (scerir)
Date: Tue, 2 Nov 2010 22:27:20 +0100
Subject: [ExI] hot processors?
In-Reply-To: <000001cb7abb$d45d52f0$7d17f8d0$@att.net>
References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net>
Message-ID: <55626316557B47BDA3F67C4B4CF01FC1@PCserafino>

"spike":
It put out a lot of heat while it was running: perhaps there is some kind of thermal protection?

# I had several problems (i.e., the laptop running very, very slow) due to high temperatures this summer. Now I use something like this:
http://www.laptoptoys.net/lapcool_tx_adjustable_notebook_stand.html

From possiblepaths2050 at gmail.com Tue Nov 2 22:21:32 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Tue, 2 Nov 2010 15:21:32 -0700
Subject: [ExI] more altruistic behavior
In-Reply-To: <4CD070F4.1070108@satx.rr.com>
References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <4CD070F4.1070108@satx.rr.com>
Message-ID: 

Damien Broderick wrote:
Well, let's see the results of today's US elections for an index.
[yes, I know, just a blip in the trajectory from appalling-horror-then to somewhat-moderated-horror-now]
>>>

Hey Damien, at least I voted today! : ) Oh, but am I merely contributing to the overall problem???
John On 11/2/10, Damien Broderick wrote: > On 11/2/2010 12:38 PM, x at extropica.org wrote: > >> A stronger driver and reinforcer of such >> change is a society and culture that rewards more altruistic behavior >> and we're already on that path. > > Hahahahahahahahaha! ( uhrgh, groans Krusty ) > > Well, let's see the results of today's US elections for an index. > > Damien Broderick > > [yes, I know, just a blip in the trajectory from appalling-horror-then > to somewhat-moderated-horror-now] > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From lists1 at evil-genius.com Tue Nov 2 21:23:25 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 02 Nov 2010 14:23:25 -0700 Subject: [ExI] Fire and evolution (was hypnosis) Message-ID: <4CD0814D.3040806@evil-genius.com> From: "spike" "I have long pondered if speciation between humans and chimps was accelerated by the fact that for some reason the protohumans figured out that little burning bush trick, and the chimps didn't, or just couldn't master it. This would represent the technology segregation we talk about today, that separates those humans who use electronics from those who do not. Today it is called the digital divide. Back then it was what we might call the conflagration chasm." That would be surprising, as the earliest current evidence for the domestication of fire is ~1.7 million years ago, and that is hotly disputed: many archaeologists put it ~400,000 years ago. All these dates are long, long after the human/chimp/bonobo split 6-7 million years ago. Of course, the progression of protohuman evolution from the split onward had many different branches, and was not a neat linear sequence...there were many species of Australopithecus and Homo which died out. So Spike's hypothesis may well be correct for a more recent evolutionary divide. From lists1 at evil-genius.com Tue Nov 2 21:09:14 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 02 Nov 2010 14:09:14 -0700 Subject: [ExI] Counterfeits (Was: THE MIGHTY ORIGINAL) In-Reply-To: References: Message-ID: <4CD07DFA.5040802@evil-genius.com> This reminds me of the old conundrum: "Who is the most successful counterfeiter in history?" > From: Ben Zaiboc > > If an art dealer makes a molecularly-precise copy of a > famous artwork, so that the two are literally > completely indistinguishable, and mixes them up so > that even he doesn't know which is the original, he > has thereby destroyed something? > > Presumably this is only true if he admits to doing it. > If he never admits to it, and nobody ever finds out, > the something is not destroyed. > > Or am I missing something? From atymes at gmail.com Tue Nov 2 22:13:07 2010 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 2 Nov 2010 15:13:07 -0700 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/2 Stefano Vaj > 2010/11/2 Adrian Tymes > >> The main problem is, current fusion reactor operators consider sustaining >> fusion >> for a few seconds to be "long duration", and have engineered several >> tricks to keep >> it going that long. >> > > What's wrong in a pulse propulsion detonating H-bombs one after another, > V1-style? > > That didn't seem to be what was proposed here, nor is that really V1-style. What you're talking about was once called Project Orion. 
It could work, in theory, especially if you kept it outside the atmosphere to avoid radiation concerns - but the major need for rockets today is for ones that can work inside the atmosphere, to get people and things to orbit without riding the extreme edge of performance. What was illustrated here would be safe to use inside the atmosphere: no or minimally radioactive exhaust (i.e., radiation-safe if you're far enough away that the heat alone won't fry you). The problem is keeping it lit for about 10 minutes (the typical length of time it takes to achieve orbit). -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Nov 2 23:13:18 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 16:13:18 -0700 Subject: [ExI] hot processors? In-Reply-To: <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> Message-ID: <000701cb7ae3$894e3540$9bea9fc0$@att.net> "spike": >>It put out a lot of heat while it was running: perhaps there is some kind of thermal protection? ># I had several problems (i.e. laptop running very, very slowly) due to high temperatures this summer. Now I use something like this: http://www.laptoptoys.net/lapcool_tx_adjustable_notebook_stand.html OK, I just got back from the local electronics merchant where I purchased a notebook cooler. Let's see if this helps. If this machine fails to run all night, I will need to rethink my strategy on using a laptop, and it may cause me to rethink the notion of the singularity. We may be seeing what really is an S-curve in computing technology, where we are approaching a limit of calculations per watt of power input. Or not, I confess I haven't followed it in the past 5 yrs the way I did in my misspent youth. Are we still advancing in calculations per watt? spike From brent.allsop at canonizer.com Wed Nov 3 02:51:05 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 2 Nov 2010 20:51:05 -0600 Subject: [ExI] Flash of insight... In-Reply-To: <4CCE079C.4010102@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID: Alan, It is certainly possible there is some amount of diversity in the way people consciously represent themselves. So you don't have a feeling of looking out of your eyes? And can you imagine what an out of body experience might be like? Thanks, Stefano, for mentioning the scientists who were recently able to so easily induce out-of-body experiences. Here is one reference to some of this work in science daily: http://www.sciencedaily.com/releases/2007/08/070823141057.htm Alan, I bet you'd have fun if you could get a head set and camera setup like that, so you could experience such yourself. Certainly experiencing this would be very enlightening to everyone. I'm always chuckling at how people are so clueless when they talk about having a 'spirit' or an 'out of body experience' in the traditional religious interpretation way. Everyone assumes such doesn't have to have any knowledge. The referent or reality isn't nearly as important as the knowledge of such - whether veridical or not.
All this induction of out of body experiences is exactly as predicted to be possible by the emerging expert consensus "Representational Qualia Theory", and as was described in the 1229 story, written well before such science was demonstrated. And we surely haven't seen the last of this type of stuff - wait till we start effing the ineffable, and start learning just how diverse various people's conscious experiences of the world, their bodies, and their spirits are. I look forward to soon knowing first hand just how diverse your experience of yourself is, Alan, compared to my own. Brent Allsop 2010/10/31 Alan Grimes > > And remember that there are two parts to most conscious perception. > > There is the conscious knowledge, and it's referent. For out of body > > experiences, the knowledge of our 'spirit' or 'I' leaves our knowledge > > of our body (all in our brain). > > > Our conscious knowledge of our body has a referent in reality, but our > > knowledge of this 'spirit' does not. Surely in the future we'll be able > > to alter and represent all this conscious knowledge any way we want. > > And evolution surely had survivable reasons for usually representing > > this 'I' just behind our knowledge of our eyes. > > Interesting. > > I don't seem to have any such perception. I see what I see, I type what > I type, but I'm not, metaphysically speaking, directly present in any of > my own perceptions. > > I have no perception at all of being "inside my head" -- I am my head. > =P It seems perfectly natural to me. > > People are always talking about this concept of "self esteem" WTF is > that? I mean it's meaningless to either hold one's self in esteem or > contempt. > > Generally, by my appearance and sometimes by my actions, I do display a > lack of self-consciousness. =\ I'm not sure if that's directly related. > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Nov 3 03:07:01 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 20:07:01 -0700 Subject: [ExI] hot processors? In-Reply-To: References: Message-ID: <002b01cb7b04$3035a9e0$90a0fda0$@att.net> -----Original Message----- From: Tomasz Rola [mailto:rtomek at ceti.com.pl] ... Subject: Re: [ExI] hot processors? On Tue, 2 Nov 2010, spike wrote: > "spike": > >>It put out a lot of heat while it was running: perhaps there is some > >>kind of thermal protection? ... >...1. You sure this is about cpu temperature? I don't recall you giving any figures, so how do you know it? Don't know this, just a theory. Turned out wrong. Read on. >5. Wrt switching off, check your power settings in Windows (and in BIOS, too, if we are at it). If you plan to run something at night, you don't want the thing to hibernate two hours after you go to bed. Just tell it to stay always on while on A/C power... Thanks! Did this. It had a default to turn off after half an hour even if plugged in. I told it to stay the heck on and WORK, all night, or until I tell it to stop. In return I bought it a nice laptop cooler, so it should be eager to work for me. >6. AFAIK there is no way batteries could go low while you are plugged to the wall. Unless something is broken... OK cool, I thought that would be the case, but didn't know for sure.
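(Incidentally, that stay-always-on-while-plugged-in tweak can also be scripted instead of clicked through the control panel. A minimal sketch in Python, assuming the legacy powercfg command-line syntax of Vista/7-era Windows; the timeout values are in minutes, and 0 means never:

    import subprocess

    def stay_awake_on_ac():
        # Never suspend or hibernate while on wall power; the -dc
        # (battery) timeouts are deliberately left alone.
        for setting in ("standby-timeout-ac", "hibernate-timeout-ac"):
            subprocess.call(["powercfg", "-change", "-" + setting, "0"])
        # Still blank the display after 15 idle minutes.
        subprocess.call(["powercfg", "-change", "-monitor-timeout-ac", "15"])

    if __name__ == "__main__":
        stay_awake_on_ac()

Since the battery timeouts are untouched, the machine still sleeps when unplugged.)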
>BTW, you don't want to turn on a fancy screensaver on your laptop. Instead, you may want to blank and switch off the display after some no-activity period... I have the laptop driving a big screen, with the laptop lid closed. >Now, you can run your excel sim and have a look at cpu temps given by NHC... Thanks Tomasz, this is cool. The laptop looks like it is about 6 times faster than the desktop it replaces. spike From msd001 at gmail.com Wed Nov 3 03:33:41 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 2 Nov 2010 23:33:41 -0400 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <4CD0814D.3040806@evil-genius.com> References: <4CD0814D.3040806@evil-genius.com> Message-ID: On Tue, Nov 2, 2010 at 5:23 PM, wrote: > That would be surprising, as the earliest current evidence for the > domestication of fire is ~1.7 million years ago, and that is hotly disputed: domestication of fire is hotly disputed? nice. From thespike at satx.rr.com Wed Nov 3 03:45:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 22:45:57 -0500 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: References: <4CD0814D.3040806@evil-genius.com> Message-ID: <4CD0DAF5.9090602@satx.rr.com> On 11/2/2010 10:33 PM, Mike Dougherty wrote: > On Tue, Nov 2, 2010 at 5:23 PM, wrote: >> > That would be surprising, as the earliest current evidence for the >> > domestication of fire is ~1.7 million years ago, and that is hotly disputed: > domestication of fire is hotly disputed? nice. No flames, please! From spike66 at att.net Wed Nov 3 03:37:57 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 20:37:57 -0700 Subject: [ExI] Flash of insight... In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID: <003501cb7b08$81ce5910$856b0b30$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop Sent: Tuesday, November 02, 2010 7:51 PM To: ExI chat list Subject: Re: [ExI] Flash of insight... . http://www.sciencedaily.com/releases/2007/08/070823141057.htm >.Alan, I bet you'd have fun if you could get a head set and camera setup like that, so you could experience such yourself. Certainly experiencing this would be very enlightening to everyone. Cool idea Brent! Rig up a hat with a rod about a meter long with a camera on the end, out behind and above, with the output rigged to video display glasses. Then you could pretend to be an avatar. And watch all the crazy looks you would get from normal people. >.I'm always chuckling at how people are so clueless when they talk about having a 'spirit' or an 'out of body experience' in the traditional religious interpretation way. I imagined myself as a disembodied spirit, but instead of an out-of-body experience, I demon-possessed my own body. It was kinda like The Exorcist, only it was me in here, so it became sorta self-referential, and without all the projectile barfing (eewww, that really turns me off.) And I am not really an evil spirit either, nor a saint by any means, but rather more like half way between good and evil. So it was like The Exorcist, except it was self-possession by a neutral spirit. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From agrimes at speakeasy.net Wed Nov 3 03:52:23 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 02 Nov 2010 23:52:23 -0400 Subject: [ExI] Flash of insight... In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID: <4CD0DC77.7070603@speakeasy.net> Brent Allsop wrote: > Alan, > It is certainly possible there is some amount of diversity in the way > people consciously represent themselves. > So you don't have a feeling of looking out of your eyes? There might be a terminology gap here. What I'm saying is that there is no sense of a "homunculus" that observes things through the eyes. > And can you imagine what an out of body experience might be like? When I was a bit more of a free thinker than I am now, I experimented with all manner of things; I don't think I ever achieved one. The closest I got was imagining I was seeing a remote location while I was doing something else. I was really keen on trying to achieve some level of ESP, but I couldn't and eventually gave up, except for my precog ability, which seems to be marginal at best, possibly/probably merely intuition. > I look forward to soon > knowing first hand just how diverse your experience of yourself is, > Alan, compared to my own. ???? How do you propose to do that? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From thespike at satx.rr.com Wed Nov 3 04:44:42 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 23:44:42 -0500 Subject: [ExI] hot processors? In-Reply-To: <002b01cb7b04$3035a9e0$90a0fda0$@att.net> References: <002b01cb7b04$3035a9e0$90a0fda0$@att.net> Message-ID: <4CD0E8BA.4020007@satx.rr.com> On 11/2/2010 10:07 PM, spike wrote: > with the laptop lid closed. I thought *that* causes it to overheat. Damien Broderick From spike66 at att.net Wed Nov 3 05:46:44 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 22:46:44 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> Message-ID: <005301cb7b1a$8015f580$8041e080$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Subject: [ExI] prediction for 2 November 2010 >... Once again, we libertarians will go home empty handed...I predict something else as well: after tomorrow, both major parties will be surprised and disappointed with the outcome and will be accusing the other of election fraud...spike Well damn. Looks like the democrats and republicans have won nearly every race. Kennita Watson appears to have lost this time. Better luck next time!
spike From natasha at natasha.cc Wed Nov 3 06:05:53 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 03 Nov 2010 02:05:53 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <20101103020553.dn3kyivxcg8gg4oo@webmail.natasha.cc> Poiesis needs no painting, performance, or structure to embellish the process of creation. The electrical charges of the brain signify this process. The many and varied outcomes, as presented in mediums of paint, performance and structure, care little, if anything, about what society considers to be the mighty original. They are all spirits of thought, coalescing image and narrative, metaphor and symbol. All the mighty originals are copies, and became copies once they left the electrical charges. Personhood is ultimately the electrical charge and the outcome. All else is stuff. And, as lovely as the original Matisse or Monet truly are -- and as incomparable a printed image is next to the brush strokes and refraction of light across the hues, textures and tones of the originals (Damien is accurate in his observations) -- the former does have a distinguishable character that the latter lacks. Alas, they all are copies. Nano+personhood may simply wink at its own assemblages. It is a wink that may make very light of a very heavy topic that has manipulated high art and economics for a while now. Natasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Nov 3 06:04:20 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 3 Nov 2010 17:04:20 +1100 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: 2010/11/3 Stefano Vaj : > 2010/10/31 John Clark >> >> Actually it's quite difficult to come up with a scenario where the copy >> DOES instantly know he is the copy. >> > > Mmhhh. Nobody ever feels like a copy. What you could become aware of is that > somebody forked in the past (as in "a copy left behind"). That he is the > "original" is a matter of perspective... Think about what you would say and do if provided with evidence that you are actually a copy, replaced while the original you was sleeping some time last week.
-- Stathis Papaioannou From possiblepaths2050 at gmail.com Wed Nov 3 07:09:45 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 00:09:45 -0700 Subject: [ExI] A fun animated short about the continuity of identity and "making copies" Message-ID: I absolutely loved this animated short film, which reminded me of the countless discussion threads about this very topic that have graced so many transhumanist email lists over the years. http://www.youtube.com/watch?v=pdxucpPq6Lc John : ) From jrd1415 at gmail.com Wed Nov 3 07:11:08 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 3 Nov 2010 00:11:08 -0700 Subject: [ExI] The answer to tireless stupidity Message-ID: You're gonna like this. Chatbot Wears Down Proponents of Anti-Science Nonsense http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 Best, Jeff Davis "Men occasionally stumble over the truth, but most pick themselves up and hurry off as if nothing had happened." Winston Churchill From possiblepaths2050 at gmail.com Wed Nov 3 07:14:05 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 00:14:05 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <005301cb7b1a$8015f580$8041e080$@att.net> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID: Things went about like I expected. The general public was just not happy about Obama's performance record... I really wonder if he will even get re-elected... John On 11/2/10, spike wrote: > > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike > Subject: [ExI] prediction for 2 November 2010 > >>... Once again, we libertarians will go home empty handed...I predict > something else as well: after tomorrow, both major parties will be surprised > and disappointed with the outcome and will be accusing the other of election > fraud...spike > > > Well damn. Looks like the democrats and republicans have won nearly every > race. Kennita Watson appears to have lost this time. Better luck next > time! > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From ablainey at aol.com Wed Nov 3 11:37:46 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Nov 2010 07:37:46 -0400 Subject: [ExI] Flash of insight... In-Reply-To: <4CD07E9F.8040700@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <4CD07E9F.8040700@satx.rr.com> Message-ID: <8CD4962ABDE7A88-99C-1376@webmail-d024.sysops.aol.com> I had a mental play with this after the thread the other day. I closed my eyes and tried to consciously move the 'I' around with very little success. I tried to concentrate on various sensory inputs to see if it made any difference to the perceived position of consciousness. Apart from the perception of moving maybe a few inches inside my head it was a complete washout. Perhaps that is enough to show something. I certainly wasn't floating around the room or having any sense of perception from an external point. One thing I did notice is that the 'I' is not perceived as a singular point; it feels more like it is diffused over a 3d region.
I would still like to know whether blind people also perceive themselves to be in their heads -- especially if they are cortically blind. Also whether visual input from an artificial source would alter the position. This might show if the 'I' position is created by a physical reference to the sensory input or by the physical position of the brain itself. Do snails perceive themselves to be at a point somewhere between their eye stalks or in their heads? -----Original Message----- From: Damien Broderick To: ExI chat list Sent: Tue, Nov 2, 2010 9:11 pm Subject: Re: [ExI] Flash of insight... On 11/2/2010 3:04 PM, Stefano Vaj wrote: > I believe there were experiments a couple of years ago inducing > out-of-body "delocalisation" in perfectly healthy people. Interesting, > but not such a big deal, IMHO. It's only a big deal given that several people who seemed to think that sense of identity is innately constructed as being *behind your eyes* might be wrong about how this actually works at a deep level. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Wed Nov 3 13:07:16 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 3 Nov 2010 13:07:16 +0000 (GMT) Subject: [ExI] Flash of insight... In-Reply-To: Message-ID: <609321.58856.qm@web114404.mail.gq1.yahoo.com> ablainey at aol.com wrote: ... > This might show if the 'I' position is > created by a physical reference to the sensory input > or by the physical position of the brain itself. Do > snails perceive themselves to be at a point > somewhere between their eye stalks or in their > heads? How could it be related to the physical position of the brain? You don't know where your brain is unless someone tells you or you read it in a book, or extrapolate from where someone else's is. There is no direct perception of the position of your brain, unlike, say, your stomach. The whole concept of the 'I' position is meaningless anyway. All you can say is where your current /viewpoint/ is. The feeling of being somewhere is solely a product of your senses, and can change very easily. I particularly liked Spike's idea for locating your awareness behind and above your own head, using a camera on a pole. Ben Zaiboc From dan_ust at yahoo.com Wed Nov 3 13:06:30 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 06:06:30 -0700 (PDT) Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <4CD0814D.3040806@evil-genius.com> References: <4CD0814D.3040806@evil-genius.com> Message-ID: <668700.46914.qm@web30107.mail.mud.yahoo.com> Wasn't the homo line (from the hypothesized homo/pan split) also in different niches at this point too? I'm not sure of the research done on pan genus itself -- in terms of its evolution -- but I was under the impression that it was limited to dense forests -- while the homo line was exploring many different niches, some of them not dense forests. Regards, Dan ----- Original Message ---- From: "lists1 at evil-genius.com" To: extropy-chat at lists.extropy.org Sent: Tue, November 2, 2010 5:23:25 PM Subject: [ExI] Fire and evolution (was hypnosis) From: "spike" "I have long pondered if speciation between humans and chimps was accelerated by the fact that for some reason the protohumans figured out that little burning bush trick, and the chimps didn't, or just couldn't master it.
This would represent the technology segregation we talk about today, that separates those humans who use electronics from those who do not. Today it is called the digital divide. Back then it was what we might call the conflagration chasm." That would be surprising, as the earliest current evidence for the domestication of fire is ~1.7 million years ago, and that is hotly disputed: many archaeologists put it ~400,000 years ago. All these dates are long, long after the human/chimp/bonobo split 6-7 million years ago. Of course, the progression of protohuman evolution from the split onward had many different branches, and was not a neat linear sequence...there were many species of Australopithecus and Homo which died out. So Spike's hypothesis may well be correct for a more recent evolutionary divide. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From ablainey at aol.com Wed Nov 3 13:52:44 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Nov 2010 09:52:44 -0400 Subject: [ExI] Flash of insight... In-Reply-To: <609321.58856.qm@web114404.mail.gq1.yahoo.com> Message-ID: <8CD4975865E2116-1DA0-31A7@webmail-d031.sysops.aol.com> What if the positional perception is related to neural pathway length? So the nerves which have the lowest latency, and presumably get more run time accordingly, create a positional reference for the brain rather than a simple weighting of the senses. That is why I ask about the cortically blind. Ideally I would like to know where the 'I' is for someone who is blind, deaf and has no sense of smell or taste. How you would ever communicate such an abstract question to such a person is beyond me. The camera on a stick is akin to the snail; however, this only shows visual perception of position. I can change that perception by putting the TV on or playing an FPS game, and it doesn't affect where I perceive myself when my eyes are closed. I don't see the 'I' as being meaningless. Imagine an upload scenario where your consciousness is stored in a black box in some safe vault while a robot you goes out wandering the universe. If you are correct then you will 'feel' that you are out there doing all those things. However if the 'I' is a perception created by latency of input, you would feel the remoteness of your robot body. Yes? You might as well be wetware sitting in a vault operating an avatar via VR. Thus my interest in the issue, which isn't as simple as it seems. -----Original Message----- From: Ben Zaiboc To: extropy-chat at lists.extropy.org Sent: Wed, Nov 3, 2010 1:07 pm Subject: Re: [ExI] Flash of insight... ablainey at aol.com wrote: ... > This might show if the 'I' position is > created by a physical reference to the sensory input > or by the physical position of the brain itself. Do > snails perceive themselves to be at a point > somewhere between their eye stalks or in their > heads? How could it be related to the physical position of the brain? You don't know where your brain is unless someone tells you or you read it in a book, or extrapolate from where someone else's is. There is no direct perception of the position of your brain, unlike, say, your stomach. The whole concept of the 'I' position is meaningless anyway. All you can say is where your current /viewpoint/ is. The feeling of being somewhere is solely a product of your senses, and can change very easily.
I particularly liked Spike's idea for locating your awareness behind and above your own head, using a camera on a pole. Ben Zaiboc _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Wed Nov 3 14:02:47 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 03 Nov 2010 10:02:47 -0400 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: <4CD16B87.2060301@speakeasy.net> Stathis Papaioannou wrote: > Think about what you would say and do if provided with evidence that > you are actually a copy, replaced while the original you was sleeping > some time last week. My copy would go find the clown who did it and kill him suicide-bomber style. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From pharos at gmail.com Wed Nov 3 14:55:12 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 14:55:12 +0000 Subject: [ExI] hot processors? In-Reply-To: <000701cb7ae3$894e3540$9bea9fc0$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID: On Tue, Nov 2, 2010 at 11:13 PM, spike wrote: > OK, I just got back from the local electronics merchant where I purchased a > notebook cooler. Let's see if this helps. If this machine fails to run all > night, I will need to rethink my strategy on using a laptop, and it may cause > me to rethink the notion of the singularity. We may be seeing what really > is an S-curve in computing technology, where we are approaching a limit of > calculations per watt of power input. Or not, I confess I haven't followed > it in the past 5 yrs the way I did in my misspent youth. Are we still > advancing in calculations per watt? > > Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it looks like you've bought a problem laptop. Do the search and you'll see what I mean. ************************ Is there any chance of returning it and getting your money back? ***************************** If not, then a high-power laptop cooler is required. Something like this: with twin fans. A simple stand won't be sufficient. You won't be able to use the laptop on your lap without getting burnt. Even using it on any flat surface like a desk will cause overheating. It seems to be a design fault by HP on this model. The internal fan is too small to cool the processor and the graphics chip they fitted. And the air vents are badly positioned and easily blocked. It is essential to keep the vents clean on this model by blowing compressed air through the vents on a regular basis. Best of luck!
BillK From jonkc at bellsouth.net Wed Nov 3 15:59:19 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 11:59:19 -0400 Subject: [ExI] Let's play What If. In-Reply-To: <4CD16B87.2060301@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD16B87.2060301@speakeasy.net> Message-ID: <31B57CF6-0901-48BA-B8D8-296482340D65@bellsouth.net> On Nov 3, 2010, at 10:02 AM, Alan Grimes wrote: >> Think about what you would say and do if provided with evidence that >> you are actually a copy, replaced while the original you was sleeping >> some time last week. > > My copy would go find the clown who did it and kill him suicide-bomber style. I doubt if you'd do that, I often disagree with you but you don't seem like the suicide-bomber type; but then again, they always say it's the person you'd least suspect. At any rate you certainly wouldn't if you didn't know you were a copy, and you wouldn't know unless you met up with a very convincing person armed with ironclad evidence and a golden tongue. And I'm not sure you'd really believe it even then as logical arguments have little effect on some. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Nov 3 16:22:04 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 12:22:04 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> On Nov 2, 2010, at 2:17 PM, Mike Dougherty wrote: > Until we have the ability to rearrange subatomic particles to > literally create gold, such materials will continue to have a material > worth that could retain inherent value. If it has value then it has a price, but in the age of nanotechnology if you had some gold that I wanted (because I thought it looked pretty?) what could I trade you for it? About the only thing I can think of is another rare element, platinum maybe, because both the elements gold and platinum are unique, although atoms of gold or platinum are not. One gold atom is just like another but it is not like a platinum atom, it is like nothing else in the universe except for another gold atom. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Wed Nov 3 16:26:39 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 12:26:39 -0400 Subject: [ExI] Counterfeits (Was: THE MIGHTY ORIGINAL) In-Reply-To: <4CD07DFA.5040802@evil-genius.com> References: <4CD07DFA.5040802@evil-genius.com> Message-ID: <73C31AF2-6B9C-49AE-B36B-B4E829D3A513@bellsouth.net> On Nov 2, 2010, at 5:09 PM, lists1 at evil-genius.com wrote: > "Who is the most successful counterfeiter in history?" The world's tallest midget who lives on the world's largest island. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Nov 3 17:03:13 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 3 Nov 2010 13:03:13 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> Message-ID: 2010/11/3 John Clark : > If it has value then it has a price, but in the age of nanotechnology if you > had some gold that I wanted (because I thought it looked pretty?) what could > I trade you for it? About the only thing I can think of is another rare > element, platinum maybe, because both the elements gold and platinum are > unique, although atoms of gold or platinum are not. One gold atom is just > like another but it is not like a platinum atom, it is like nothing else in > the universe except for another gold atom. This may be the only context where the high-holy atom argument has you making a case for differences in atoms :) Possibly the only thing we can trade that is more rare than minerals: time. If I am to enjoy clock time at any multiplier above 1 then I need your clock time working for me. Slavery is certainly nothing new. Wage slavery is simply a PC term for the idea. (and I agree with how you feel about PC terms too) From jonkc at bellsouth.net Wed Nov 3 17:00:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 13:00:49 -0400 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID: On Nov 3, 2010, at 3:14 AM, John Grigg wrote: > The general public was not just not happy about Obama's performance record... I really wonder if he > will even get re-elected... I don't know but the big republican victory yesterday makes it far MORE likely Obama will be re-elected in two years because now he will have somebody to blame. Not counting yesterday, presidents have suffered 3 huge midterm losses since World War 2, Truman in 1946, Reagan in 1982, and Clinton in 1994; in all three cases the president was EASILY re-elected two years later. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Wed Nov 3 17:17:42 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 10:17:42 -0700 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <668700.46914.qm@web30107.mail.mud.yahoo.com> References: <4CD0814D.3040806@evil-genius.com> <668700.46914.qm@web30107.mail.mud.yahoo.com> Message-ID: <004a01cb7b7b$0671db70$13559250$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Dan Subject: Re: [ExI] Fire and evolution (was hypnosis) Wasn't the homo line (from the hypothesized homo/pan split) also in different niches at this point too? I'm not sure of the research done on pan genus itself -- in terms of its evolution -- but I was under the impression that it was limited to dense forests -- while the homo line was exploring many different niches, some of them not dense forests. Regards, Dan Ja, clearly the pan's feet are better adapted for swinging from trees and homo's feet are better for walking distances on a grassy plain. Good point Dan. For that matter, as pointed out by someone earlier, pan's hands are not as good as homo's at grasping a burning bush. Pan's thumbs are mounted too far aft. spike From dan_ust at yahoo.com Wed Nov 3 17:43:52 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 10:43:52 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <832116.3227.qm@web30106.mail.mud.yahoo.com> This is not necessarily a cure for anti-science nonsense or even nonsense. It could be used against anyone holding any view: simply wear them down. E.g., someone here argues for Extropian or transhumanist views and someone else sets up a chatbot merely to keep pushing their buttons. Also, the usual argument I've seen regarding other planets warming up doesn't use Neptune, but Mars. And the evidence that the warming of Mars has to do with fluctuations in solar output seems much more relevant -- though, to my mind, it's by no means decisive here. Regards, Dan ----- Original Message ---- From: Jeff Davis To: ExI chat list Sent: Wed, November 3, 2010 3:11:08 AM Subject: [ExI] The answer to tireless stupidity You're gonna like this. Chatbot Wears Down Proponents of Anti-Science Nonsense http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 Best, Jeff Davis "Men occasionally stumble over the truth, but most pick themselves up and hurry off as if nothing had happened." Winston Churchill _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Wed Nov 3 17:33:27 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 10:33:27 -0700 Subject: [ExI] hot processors? In-Reply-To: References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID: <006101cb7b7d$39b5b810$ad212830$@att.net> ... On Behalf Of BillK ... > >Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it looks like you've bought a problem laptop. Do the search and you'll see what I mean. Did that yesterday, found the same site you did, bought the cooler stand, now it seems to be working fine. Turns out I incorrectly concluded that it had overheated before. There is a setting that defaults to turning itself to sleep mode if all four processor cores are working at full bore for an hour.
I reset that to never sleep while the power cord is plugged in, and it ran all night last night, and returned a buttload of useful results. ************************ >Is there any chance of returning it and getting your money back? ***************************** >Best of luck! BillK I will run it full bore for a few nights. If it works, then I will be satisfied with it. spike From atymes at gmail.com Wed Nov 3 17:11:47 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 3 Nov 2010 10:11:47 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: I was wondering when someone would put something like this together. Perhaps in the next American election cycle, some high profile candidate (large state governor, or President) can put it together to rebut tweets using common arguments of the opposition. On Wed, Nov 3, 2010 at 12:11 AM, Jeff Davis wrote: > You're gonna like this. > > Chatbot Wears Down Proponents of Anti-Science Nonsense > > http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 > > Best, Jeff Davis > > "Men occasionally stumble over the truth, > but most pick themselves up and hurry off > as if nothing had happened." > Winston Churchill > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 3 18:17:20 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 3 Nov 2010 11:17:20 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: <832116.3227.qm@web30106.mail.mud.yahoo.com> References: <832116.3227.qm@web30106.mail.mud.yahoo.com> Message-ID: Very true. However: 1) Might it be the case that those whose arguments are not based on facts have more buttons to push? If they cannot be secure in letting the other side have the last word, because they know everyone else can tell which side is the buffoon... 2) The point of the debate is more often to convince the silent audience. If one side keeps making emotional arguments, and the other side keeps rebutting by linking to facts supported by outside sources, more people who witness the debate will come away leaning toward the latter. 3) This is an interesting development as a political tool. Like any technology, it can be used for good or evil. However, like many new technologies, those who we view as "good" tend to be in a better position to use these tools, and thus will probably make more effective use of them (at least in the next decade or two). (In other words: try imagining an Extropian setting one of these up, then try to imagine a creationist setting one of these up. It's easier to imagine the former case, no?) On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: > This is not necessarily a cure for anti-science nonsense or even nonsense. > It > could be used against anyone holding any view: simply wear them down. E.g., > someone here argues for Extropian or transhumanist views and someone else > sets > up a chatbot merely to keep pushing their buttons. > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Wed Nov 3 18:48:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 13:48:27 -0500 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID: <4CD1AE7B.9000808@satx.rr.com> On 11/3/2010 12:00 PM, John Clark wrote: > the big republican victory yesterday makes it far MORE likely Obama will > be re-elected in two years because now he will have somebody to blame. OMG, you mean the USA won't have President Palin to lead the nation to recovery? This is another crushing blow after the loss of Christine O'Donnell as VP. Damien Broderick From pharos at gmail.com Wed Nov 3 18:51:59 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 18:51:59 +0000 Subject: [ExI] hot processors? In-Reply-To: <006101cb7b7d$39b5b810$ad212830$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> <006101cb7b7d$39b5b810$ad212830$@att.net> Message-ID: On Wed, Nov 3, 2010 at 5:33 PM, spike wrote: > I will run it full bore for a few nights. If it works, then I will be > satisfied with it. > > Fair enough. But remember that with the cooler you're effectively changing it into a desktop pc and losing the flexibility of having a laptop. I'd recommend running temperature-monitoring software that rings alarms or shuts down if the temperature gets too high. (It's easy to get the air vents blocked up without noticing). Core Temp reports on multiple cores and seems quite nice. Even if the temperature doesn't get quite high enough to close down, running for long periods at high temperatures will shorten the life span of the chips. Cheers, BillK From spike66 at att.net Wed Nov 3 18:50:49 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 11:50:49 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <007d01cb7b88$08bb9170$1a32b450$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Jeff Davis ... >Chatbot Wears Down Proponents of Anti-Science Nonsense >http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 >Best, Jeff Davis The immediate problem I see with this is that both sides can set up chatbots, which then chatter away tirelessly about inane trivia through the night. But other than that, the chatbots are not like their human counterparts. On the subject of global warming, there is no need to have humans in that loop. So impervious are the participants on both sides to actual scientific data and mathematical models that it would soon become impossible to distinguish between the chat generated by this means vs the human input, so mired is this particular topic in culture, politics and even religion. I can think of a possible criterion to distinguish between human and mechanical conversation: as soon as either side actually changes its views on global warming or even demonstrates it has actually learned, we know for sure that it is a chatbot, for humans have never been observed to change their views on this topic.
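(As a footnote, the machinery behind such a bot needn't be anything grander than a keyword lookup table. A minimal sketch in Python -- the trigger words and canned rebuttals below are invented placeholders for illustration, not the actual Twitter bot's data:

    import random

    # Keyword -> canned replies. A real bot would mine these from a
    # corpus of past arguments; these entries are made up.
    RESPONSES = {
        "sunspots": ["Measured solar output has been flat to declining "
                     "since the 1980s, while surface temperatures rose."],
        "hoax": ["Independent surface, satellite and ocean records all "
                 "show the same warming trend."],
    }

    def reply(message):
        # Answer the first recognized keyword; stay silent otherwise,
        # since a bot that bluffs on unknown input is quickly unmasked.
        text = message.lower()
        for keyword, answers in RESPONSES.items():
            if keyword in text:
                return random.choice(answers)
        return None

    print(reply("It's all sunspots, people!"))

Tireless, yes; intelligent, no -- which is rather the point.)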
spike From spike66 at att.net Wed Nov 3 19:04:21 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 12:04:21 -0700 Subject: [ExI] sex machine, was: RE: The answer to tireless stupidity Message-ID: <008101cb7b89$ec95fe20$c5c1fa60$@att.net> >Subject: [ExI] The answer to tireless stupidity >Chatbot Wears Down Proponents of Anti-Science Nonsense... Jeff Davis Actually this application would be a pointless waste of perfectly good technology. Consider the online lonely hearts club. There are places on the web (and the usenets before that, and DARPAnet even before that) where lonely hearts would hang out and make small talk. A really useful application of a chatbot would be to have it mine one's own writings and produce an enormous lookup table, which it would then use to perform all the tedious, error-prone and emotionally hazardous early stages of online seduction. As soon as the other party agrees to meeting for, um, stimulating conversation (and so forth), then the seductobot would alert the user, who then reads over what the bot has said to the prospective contact. Of course, the other party might also have set up a seduct-o-matic to do the same thing. Similarly to Jeff's example, it might soon become very difficult to distinguish two humans trying to get each other into the sack from two lookup tables doing likewise. As soon as actual creativity or innovation is seen in the mating process, we know it must be a chatbot, for humans have discovered nothing essentially new in that area since a few weeks after some adventurous pair of protobonobos first discovered copulation. spike From dan_ust at yahoo.com Wed Nov 3 19:31:20 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 12:31:20 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <726837.76877.qm@web30107.mail.mud.yahoo.com> I can imagine a variation on this that might go along with Spike's chatbotting on global warming: set up a chatbot to push a position you disagree with, let it become really popular, then have it switch sides in a discussion. This might look like an honest changing of opinion and some might be duped by it. Regards, Dan From: Adrian Tymes To: ExI chat list Sent: Wed, November 3, 2010 1:11:47 PM Subject: Re: [ExI] The answer to tireless stupidity I was wondering when someone would put something like this together. Perhaps in the next American election cycle, some high profile candidate (large state governor, or President) can put it together to rebut tweets using common arguments of the opposition. On Wed, Nov 3, 2010 at 12:11 AM, Jeff Davis wrote: You're gonna like this. > >Chatbot Wears Down Proponents of Anti-Science Nonsense > >http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 > >Best, Jeff Davis > > "Men occasionally stumble over the truth, > but most pick themselves up and hurry off > as if nothing had happened." > Winston Churchill >_______________________________________________ >extropy-chat mailing list >extropy-chat at lists.extropy.org >http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dan_ust at yahoo.com Wed Nov 3 19:28:18 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 12:28:18 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: <832116.3227.qm@web30106.mail.mud.yahoo.com> Message-ID: <238426.99429.qm@web30104.mail.mud.yahoo.com> I don't disagree about the "silent audience" in any discussion, though I wonder if some of them aren't just immediately turned off by a continuous stream of emotional arguments anyhow. Regarding facts, the problem here would be interpretation in many cases. Also, merely citing journal articles doesn't settle things in many cases. Think about those economists and market analysts pointing out that the housing bubble was going to burst and those who argued against them. The latter could've easily created chatbots citing all the relevant articles in peer-reviewed journals right up until the market unraveled in 2008. In a sense, it's all going to depend on what the silent audience takes for fact and reliable reasoning in the first place. (Of course, this is not an attack on chatbots per se, but merely to point out that the wider social context is important.) Regarding a Creationist setting these up, well, aren't there already cheat sheets that Creationists use? Isn't there a book out called _How to Debate an Atheist_? Yes, this can be used for good or ill, and, like you, I'm more the optimist here. But the likely long-term outcome is probably not going to be that the Dark Side is thwarted by chatbots, but that Dark Side chatbots make the more intelligent people less likely to take chat seriously. (In my opinion, that might actually be a big win. There are almost always more important things to do. :) Regards, Dan From: Adrian Tymes To: ExI chat list Sent: Wed, November 3, 2010 2:17:20 PM Subject: Re: [ExI] The answer to tireless stupidity Very true. However: 1) Might it be the case that those whose arguments are not based on facts have more buttons to push? If they cannot be secure in letting the other side have the last word, because they know everyone else can tell which side is the buffoon... 2) The point of the debate is more often to convince the silent audience. If one side keeps making emotional arguments, and the other side keeps rebutting by linking to facts supported by outside sources, more people who witness the debate will come away leaning toward the latter. 3) This is an interesting development as a political tool. Like any technology, it can be used for good or evil. However, like many new technologies, those who we view as "good" tend to be in a better position to use these tools, and thus will probably make more effective use of them (at least in the next decade or two). (In other words: try imagining an Extropian setting one of these up, then try to imagine a creationist setting one of these up. It's easier to imagine the former case, no?) On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: This is not necessarily a cure for anti-science nonsense or even nonsense. It >could be used against anyone holding any view: simply wear them down. E.g., >someone here argues for Extropian or transhumanist views and someone else sets >up a chatbot merely to keep pushing their buttons. > > -------------- next part -------------- An HTML attachment was scrubbed...
From dan_ust at yahoo.com Wed Nov 3 19:42:01 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 12:42:01 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: <007d01cb7b88$08bb9170$1a32b450$@att.net> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> Message-ID: <773270.29941.qm@web30105.mail.mud.yahoo.com>

I've "observed" people changing their minds on this -- mostly from being skeptical about anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)

Regarding, though, your view of setting these chatbots up to eventually reach a consensus: this is the ideal of rhetoric -- to get people to argue by going back to premises (which can include "actual scientific data and mathematical models") and eventually deciding which conclusions are correct. This is seen with the typical use of enthymemes. Recall, an enthymeme is basically a syllogism with an unstated premise. In rhetoric, the person offering up the enthymeme in good faith is assuming that his interlocutors accept the unstated premise. If they don't, then the premise, to argue in good faith, is made explicit. Eventually, it's hoped, the process will terminate for any debate -- as all participants reach premises they agree on and can then move forward to the conclusion. Again, if they argue in good faith, the conclusion should be acceptable to all, and this resolves the difference in opinions.

Regards, Dan

* How many people really need to have an opinion on this? Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?

----- Original Message ---- From: spike To: ExI chat list Sent: Wed, November 3, 2010 2:50:49 PM Subject: Re: [ExI] The answer to tireless stupidity

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Jeff Davis ...

>Chatbot Wears Down Proponents of Anti-Science Nonsense
>http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722
>Best, Jeff Davis

The immediate problem I see with this is that both sides can set up chatbots, which then chatter away tirelessly about inane trivia through the night. But other than that, the chatbots are not unlike their human counterparts. On the subject of global warming, there is no need to have humans in that loop. So impervious are the participants on both sides to actual scientific data and mathematical models, it would soon become impossible to distinguish between the chat generated by this means vs the human input, so mired is this particular topic in culture, politics and even religion.

I can think of a possible criterion to distinguish between human and mechanical conversation: as soon as either side actually changes its views on global warming, or even demonstrates it has actually learned, we know for sure that it is a chatbot, for humans have never been observed to change their views on this topic.

spike
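Dan's enthymeme point can be made concrete with the classic example, here as a machine-checked sketch (Lean is used purely for illustration; all the names are invented): "Socrates is a man, therefore Socrates is mortal" only goes through once the suppressed premise is stated.

```lean
-- An enthymeme made explicit: the inference only type-checks once the
-- unstated premise h_all ("all men are mortal") is supplied alongside
-- the stated premise h_man ("Socrates is a man").
variable (Person : Type) (man mortal : Person → Prop) (socrates : Person)

example (h_all : ∀ p, man p → mortal p)  -- the premise good faith leaves unstated
        (h_man : man socrates)           -- the stated premise
        : mortal socrates :=             -- the conclusion
  h_all socrates h_man
```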
From rtomek at ceti.pl Wed Nov 3 20:36:25 2010 From: rtomek at ceti.pl (Tomasz Rola) Date: Wed, 3 Nov 2010 21:36:25 +0100 (CET) Subject: [ExI] hot processors? In-Reply-To: References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID:

On Wed, 3 Nov 2010, BillK wrote:

> On Tue, Nov 2, 2010 at 11:13 PM, spike wrote:
> > OK, I just got back from the local electronics merchant where I purchased a notebook cooler. Let's see if this helps. If this machine fails to run all night, I will need to rethink my strategy on using a laptop, and it may cause me to rethink the notion of the singularity. We may be seeing what really is an S-curve in computing technology, where we are approaching a limit of calculations per watt of power input. Or not, I confess I haven't followed it in the past 5 yrs the way I did in my misspent youth. Are we still advancing in calculations per watt?

Yes, I would say so. Compare:

Pentium 1 @ 100MHz - about 15W (clock/watt = 6.67)
Athlon XP @ 1800MHz - about 70-80W (c/w = 22.5-25.7)
Athlon II X4 @ 2600MHz - about 170W (c/w = 61.2)

(source: google, wikipedia, tomshardware, my memory)

This assumes there were no advances other than mere clock speed. But in fact c/w says nothing about memory & bus speeds, micro-optimisations, out-of-order execution, etc. On the Intel side it should look even better, especially if we forget the flaky Pentium 4.

> Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it looks like you've bought a problem laptop. Do the search and you'll see what I mean.

Just in case some other folks here "use their computahs for computaahsion": I'm no big hardware expert but I am a big fan of stability. There are two utilities that can be used for testing one's machine, and they are free.

1. Memtest86 - [ http://en.wikipedia.org/wiki/Memtest86 ]
2. Prime95 - [ http://en.wikipedia.org/wiki/Prime95 ]

Since I only use Windows about twice a year or so, I cannot vouch for Prime95, but Memtest is ok.

Once again, this is a good moment to stress monitoring one's hardware. I don't know whether this is obvious, but to me, everybody running some nontrivial load on a computer really wants to know how it is doing. I am for knowing my machine, knowing its sounds, what is usual and what is a sign. It is analogous to racing: if you only drive to work or for some shopping, you don't need to understand how it is possible that you move. But once you enter racing, you'd do better knowing at least some basics of your car's mechanics.

Also, for me stability has more value than speed, so I don't mind downclocking a bit. This is, IMHO, quite a good idea while running so-called budget PCs (and which one is not budget nowadays?). A 100 MHz off your clock is just a few percent drop in performance, but it can make you feel much better and cooler (and no more questions like, can I go for a walk or should I stay and wait for another mysterious beep).

Regards, Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.   **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened...   **
**                                                              **
** Tomasz Rola mailto:tomasz_rola at bigfoot.com                **
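Tomasz's clock-per-watt arithmetic, as a worked snippet for anyone who wants to plug in their own hardware. The metric is total core-MHz divided by rated wattage -- crude, as he notes, since it ignores memory, bus speeds and per-clock improvements -- and the wattages below are his rough figures, not measurements:

```python
# Clock-per-watt as in Tomasz's comparison: total core-MHz / rated watts.
# The wattages are the rough figures quoted above, not measured values.
cpus = [
    ("Pentium 1",    1,  100.0,  15.0),
    ("Athlon XP",    1, 1800.0,  80.0),   # he quotes 70-80 W; worst case used here
    ("Athlon II X4", 4, 2600.0, 170.0),
]
for name, cores, mhz, watts in cpus:
    print(f"{name:12s} {cores * mhz / watts:5.1f} MHz/W")
# Pentium 1      6.7 MHz/W
# Athlon XP     22.5 MHz/W
# Athlon II X4  61.2 MHz/W
```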
From thespike at satx.rr.com Wed Nov 3 20:42:52 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 15:42:52 -0500 Subject: [ExI] Australian dollar Message-ID: <4CD1C94C.3040705@satx.rr.com>

In case anyone's interested, today

1 AUD = 1.00582 USD

From thespike at satx.rr.com Wed Nov 3 20:52:11 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 15:52:11 -0500 Subject: [ExI] Bayes and psi In-Reply-To: <4CD1C4D6.6000101@satx.rr.com> References: <567253.29951.qm@web30701.mail.mud.yahoo.com> <4CD1C4D6.6000101@satx.rr.com> Message-ID: <4CD1CB7B.1080804@satx.rr.com>

This might be of interest: a link to a plenary lecture Prof. Utts gave this summer at the 8th International Conference on Teaching Statistics.

http://icots8.org/cd/pdfs/plenaries/ICOTS8_PL2_UTTS.pdf

THE STRENGTH OF EVIDENCE VERSUS THE POWER OF BELIEF: ARE WE ALL BAYESIANS?
Jessica Utts, Michelle Norris, Eric Suess, Wesley Johnson

Although statisticians have the job of making conclusions based on data, for many questions in science and society prior beliefs are strong and may take precedence over data when people make decisions. For other questions, there are experts who could shed light on the situation that may not be captured with available data. One of the appealing aspects of Bayesian statistics is that the methods allow prior beliefs and expert knowledge to be incorporated into the analysis along with the data. One domain where beliefs are almost sure to have a role is in the evaluation of scientific data for extrasensory perception (ESP). Experiments to test ESP often are binomial, and they have a clear null hypothesis, so they are an excellent way to illustrate hypothesis testing. Incorporating beliefs makes them an excellent example for the use of Bayesian analysis as well. In this paper, data from one type of ESP study are analyzed using both frequentist and Bayesian methods.
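For readers who want to see the contrast Utts describes, here is a toy sketch of the two analyses for a hypothetical binomial ESP experiment. The trial counts, the chance rate and the two priors are invented for illustration (and the snippet assumes scipy >= 1.7 for binomtest); none of it is from the paper:

```python
# Frequentist vs Bayesian analysis of a hypothetical binomial ESP study:
# n trials, `hits` successes, chance hit rate p0. All numbers invented.
from scipy import stats

n, hits, p0 = 1000, 280, 0.25

# Frequentist: exact one-sided binomial test of H0: p = p0 vs H1: p > p0.
pval = stats.binomtest(hits, n, p0, alternative="greater").pvalue
print(f"frequentist one-sided p-value: {pval:.4f}")

# Bayesian: a Beta(a, b) prior on the hit rate updates on the data to
# Beta(a + hits, b + misses). A skeptic's prior piled up near chance and
# a flat prior yield different posterior opinions from the same data --
# which is the point of the paper's title.
for label, a, b in [("skeptic Beta(1000, 3000)", 1000.0, 3000.0),
                    ("flat    Beta(1, 1)      ", 1.0, 1.0)]:
    post = stats.beta(a + hits, b + (n - hits))
    print(f"{label}: P(p > {p0} | data) = {1 - post.cdf(p0):.3f}")
```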
From dan_ust at yahoo.com Wed Nov 3 20:54:58 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 13:54:58 -0700 (PDT) Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD1AE7B.9000808@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> Message-ID: <162237.28097.qm@web30101.mail.mud.yahoo.com>

Kidding aside, do you think Palin will ever be more than a media phenom?

Regards, Dan
Overthrow all governments everywhere!

----- Original Message ---- From: Damien Broderick To: ExI chat list Sent: Wed, November 3, 2010 2:48:27 PM Subject: Re: [ExI] prediction for 2 November 2010

On 11/3/2010 12:00 PM, John Clark wrote:
> the big republican victory yesterday makes it far MORE likely Obama will be re-elected in two years because now he will have somebody to blame.

OMG, you mean the USA won't have President Palin to lead the nation to recovery? This is another crushing blow after the loss of Christine O'Donnell as VP.

Damien Broderick

From possiblepaths2050 at gmail.com Wed Nov 3 21:03:14 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 14:03:14 -0700 Subject: [ExI] A new Culture novel by Iain Banks Message-ID:

This is always a cause for celebration!

http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

John

From pharos at gmail.com Wed Nov 3 21:14:13 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 21:14:13 +0000 Subject: [ExI] Australian dollar In-Reply-To: <4CD1C94C.3040705@satx.rr.com> References: <4CD1C94C.3040705@satx.rr.com> Message-ID:

On Wed, Nov 3, 2010 at 8:42 PM, Damien Broderick wrote:
> In case anyone's interested, today
>
> 1 AUD = 1.00582 USD

Yes, I noticed. First time in 28 years. Another step in the Fed's campaign to devalue the US dollar. Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports. Other countries appear to have noticed what the US Fed is doing, so we may be entering a phase of competitive devaluations around the world.

BillK

From thespike at satx.rr.com Wed Nov 3 21:15:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 16:15:14 -0500 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <162237.28097.qm@web30101.mail.mud.yahoo.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> Message-ID: <4CD1D0E2.9020009@satx.rr.com>

On 11/3/2010 3:54 PM, Dan wrote:
> Kidding aside, do you think Palin will ever be more than a media phenom?

In the USA, who can say?

From spike66 at att.net Wed Nov 3 21:05:55 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 14:05:55 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: <773270.29941.qm@web30105.mail.mud.yahoo.com> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> <773270.29941.qm@web30105.mail.mud.yahoo.com> Message-ID: <003d01cb7b9a$e7bd1620$b7374260$@att.net>

... On Behalf Of Dan ... Subject: Re: [ExI] The answer to tireless stupidity

>...I've "observed" people changing their minds on this -- mostly from being skeptical about anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or they were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)...

Dan, the critical and divergent question is not so much if global warming is occurring or if it is anthropogenic, but rather the next step beyond that, which is: what are we going to do about it. That immediately causes a divergence of opinion that is not easily swayed by scientific data. One group suggests creating taxes on carbon dioxide production, while another group makes plans to replace their air conditioners with bigger units. This is a problem that we cannot discuss to a solution. If one economy taxes itself to reduce carbon dioxide emissions while its competitors do not, then the non-taxing competitors continue to generate CO2 with impunity; pretty soon they own the gold, they own everything; then they make the rules. How is discussion of scientific models of any help with this problem? We might as well set up multiple chatbots on both (or all) sides of that issue and let them chatter away, while leaving the rest of us to figure out bigger and better air conditioning systems.

>...* How many people really need to have an opinion on this? Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?...Dan

Everyone who is eligible to vote needs an opinion on this.
The tax and cap CO2 solutions require jillions of votes, to elect leaders who will tax CO2 and send us down the branch where our competitors own everything; then once they do, they make our rules for us.

spike

From dan_ust at yahoo.com Wed Nov 3 21:43:28 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 14:43:28 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: <003d01cb7b9a$e7bd1620$b7374260$@att.net> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> <773270.29941.qm@web30105.mail.mud.yahoo.com> <003d01cb7b9a$e7bd1620$b7374260$@att.net> Message-ID: <635001.18085.qm@web30105.mail.mud.yahoo.com>

Regarding your final comment: Don't you think that's the problem? I mean, you don't seriously think everyone eligible to vote is going to have an intelligent, informed opinion? Also, the incentives are skewed -- as Caplan seemed to demonstrate in his _The Myth of the Rational Voter_: voters experience very low or zero costs for their decision because their vote only counts as a tiebreaker. This allows for fantasy views on public policy issues and, if Caplan is right, the issue becomes why we don't have much worse polities. (Caplan attempts to answer that too: elected officials mitigate some of the harm of bad policies by breaking campaign promises and the like.*)

Regards, Dan

* Someone also presented an argument for corruption as helpful in many cases because it was a market means of subverting bad policies. E.g., if a cop can be bribed not to enforce a bad law (which ones aren't?), then the effects of that bad law can be somewhat mitigated. This is, of course, not a perfect solution and, certainly, worse than getting rid of the bad law and turning over the legislators to me for vivisec -- er, re-education. :)

----- Original Message ---- From: spike To: ExI chat list Sent: Wed, November 3, 2010 5:05:55 PM Subject: Re: [ExI] The answer to tireless stupidity

... On Behalf Of Dan ... Subject: Re: [ExI] The answer to tireless stupidity

>...I've "observed" people changing their minds on this -- mostly from being skeptical about anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or they were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)...

Dan, the critical and divergent question is not so much if global warming is occurring or if it is anthropogenic, but rather the next step beyond that, which is: what are we going to do about it. That immediately causes a divergence of opinion that is not easily swayed by scientific data. One group suggests creating taxes on carbon dioxide production, while another group makes plans to replace their air conditioners with bigger units. This is a problem that we cannot discuss to a solution. If one economy taxes itself to reduce carbon dioxide emissions while its competitors do not, then the non-taxing competitors continue to generate CO2 with impunity; pretty soon they own the gold, they own everything; then they make the rules. How is discussion of scientific models of any help with this problem? We might as well set up multiple chatbots on both (or all) sides of that issue and let them chatter away, while leaving the rest of us to figure out bigger and better air conditioning systems.

>...* How many people really need to have an opinion on this?
Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?...Dan

Everyone who is eligible to vote needs an opinion on this. The tax and cap CO2 solutions require jillions of votes, to elect leaders who will tax CO2 and send us down the branch where our competitors own everything; then once they do, they make our rules for us.

spike

From dan_ust at yahoo.com Wed Nov 3 21:44:26 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 14:44:26 -0700 (PDT) Subject: [ExI] Australian dollar In-Reply-To: References: <4CD1C94C.3040705@satx.rr.com> Message-ID: <423771.36663.qm@web30106.mail.mud.yahoo.com>

Why is reducing imports a good thing?

Regards, Dan

----- Original Message ---- From: BillK To: ExI chat list Sent: Wed, November 3, 2010 5:14:13 PM Subject: Re: [ExI] Australian dollar

On Wed, Nov 3, 2010 at 8:42 PM, Damien Broderick wrote:
> In case anyone's interested, today
>
> 1 AUD = 1.00582 USD

Yes, I noticed. First time in 28 years. Another step in the Fed's campaign to devalue the US dollar. Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports. Other countries appear to have noticed what the US Fed is doing, so we may be entering a phase of competitive devaluations around the world.

BillK

From possiblepaths2050 at gmail.com Wed Nov 3 21:56:04 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 14:56:04 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD1D0E2.9020009@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

I wanted to share some links about things that affected the elections...

The stupidity of American voters...
http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters

The very shadowy world of campaign funding...
http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending

But life goes on... I do look forward to voting for an AGI candidate in the 2042 presidential election. : )

John

On 11/3/10, Damien Broderick wrote:
> On 11/3/2010 3:54 PM, Dan wrote:
>> Kidding aside, do you think Palin will ever be more than a media phenom?
>
> In the USA, who can say?

From possiblepaths2050 at gmail.com Wed Nov 3 22:07:20 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 15:07:20 -0700 Subject: [ExI] Bill Moyers: Welcome to the American Plutocracy Message-ID:

An excerpt from the Bill Moyers article: "Time to close the circle: Everyone knows millions of Americans are in trouble. As Robert Reich recently summed up the state of working people: They've lost their jobs, their homes, and their savings.
Their grown children have moved back in with them. Their state and local taxes are rising. Teachers and firefighters are being laid off. The roads and bridges they count on are crumbling, pipelines are leaking, schools are dilapidated, and public libraries are being shut."

"Why isn't government working for them? Because it's been bought off. It's as simple as that. And until we get clean money we're not going to get clean elections, and until we get clean elections, you can kiss goodbye government of, by, and for the people. Welcome to the plutocracy."

I would just add that I would replace the term "Plutocracy" with "Kleptocracy..."

http://www.truth-out.org/bill-moyers-money-fights-hard-and-it-fights-dirty64766

From pharos at gmail.com Wed Nov 3 22:14:17 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 22:14:17 +0000 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

On Wed, Nov 3, 2010 at 9:56 PM, John Grigg wrote:
> I wanted to share some links about things that affected the elections...
>
> The very shadowy world of campaign funding...
> http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending

Yes, but Meg Whitman (Republican) spent about 160 million of her own money and still lost. There's losing and then there's really really painful losing.

BillK

From possiblepaths2050 at gmail.com Wed Nov 3 22:21:33 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 15:21:33 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

>Yes, but Meg Whitman (Republican) spent about 160 million of her own money and still lost.
>There's losing and then there's really really painful losing.

At least she spent her own money...

On 11/3/10, BillK wrote:
> On Wed, Nov 3, 2010 at 9:56 PM, John Grigg wrote:
>> I wanted to share some links about things that affected the elections...
>>
>> The very shadowy world of campaign funding...
>> http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending
>
> Yes, but Meg Whitman (Republican) spent about 160 million of her own money and still lost.
>
> There's losing and then there's really really painful losing.
>
> BillK

From possiblepaths2050 at gmail.com Wed Nov 3 20:56:19 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 13:56:19 -0700 Subject: [ExI] How old people will remake the world Message-ID:

At least for some people, the aging of the world population will improve life...
http://www.salon.com/books/feature/2010/10/31/shock_of_gray_interview John From thespike at satx.rr.com Wed Nov 3 22:43:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 17:43:37 -0500 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID: <4CD1E599.7050804@satx.rr.com> On 11/3/2010 5:21 PM, John Grigg wrote: >> Yes, but Meg Whitman (Republican) spent about 160 million of her own >> >money and still lost. >> >There's losing and then there's really really painful losing. > At least she spent her own money... A common misconception. She was secretly funded by the Illuminati, the Masons, the Mormons, the Vatican, and the Grays. From spike66 at att.net Wed Nov 3 22:35:49 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 15:35:49 -0700 Subject: [ExI] Australian dollar In-Reply-To: <423771.36663.qm@web30106.mail.mud.yahoo.com> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> Message-ID: <000801cb7ba7$77590a30$660b1e90$@att.net> That US dollar will drop way faster now that the federal reserve has just bought up 600 billion in US Treasury notes. US government buying its own debt is equivalent to spinning up the printing presses at the national mint. Are we going to keep pretending this is a debt that never needs to be paid back? ... On Behalf Of Dan ... >Why is reducing imports a good thing? Dan Dan, your even asking the question worries me. Answer: because we are spending ourselves to brutal catastrophe. spike From possiblepaths2050 at gmail.com Wed Nov 3 23:03:53 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 16:03:53 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD1E599.7050804@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> <4CD1E599.7050804@satx.rr.com> Message-ID: The Grays will be spending billions down the road for the cause of alien/human hybrid civil rights... Talk about coming out of the closet!!! John : ) On 11/3/10, Damien Broderick wrote: > On 11/3/2010 5:21 PM, John Grigg wrote: > >>> Yes, but Meg Whitman (Republican) spent about 160 million of her own >>> >money and still lost. >>> >There's losing and then there's really really painful losing. > >> At least she spent her own money... > > A common misconception. She was secretly funded by the Illuminati, the > Masons, the Mormons, the Vatican, and the Grays. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Wed Nov 3 23:10:19 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 18:10:19 -0500 Subject: [ExI] Australian dollar In-Reply-To: <000801cb7ba7$77590a30$660b1e90$@att.net> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> Message-ID: <4CD1EBDB.4060207@satx.rr.com> On 11/3/2010 5:35 PM, spike wrote: > Are we going to keep pretending this is a debt that never needs to be paid > back? 
Suppose there really is going to be a moderately fast but perceptible runup to a technological Singularity -- how long would that be a problem?

Damien Broderick

From nymphomation at gmail.com Wed Nov 3 21:33:37 2010 From: nymphomation at gmail.com (*Nym*) Date: Wed, 3 Nov 2010 21:33:37 +0000 Subject: [ExI] A new Culture novel by Iain Banks In-Reply-To: References: Message-ID:

On 3 November 2010 21:03, John Grigg wrote:
> This is always a cause for celebration!
>
> http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

*possible spoilerettes*

I'm still only up to page 562. If you like the Culture, there is a lot more of it than in Matter or Inversions (not read the latter yet..) The whole book is built around aspects of uploading and backing up, but a tortured instance of an alien seems to be the only duplicated 'soul'.

=:o)

Heavy splashings, Thee Nymphomation

'If you cannot afford an executioner, a duty executioner will be appointed to you free of charge by the court'

From spike66 at att.net Wed Nov 3 22:59:44 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 15:59:44 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID: <000901cb7baa$cea90030$6bfb0090$@att.net>

-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Grigg...

>The stupidity of American voters...
>http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters

... Andrew Leonard reports: "By 52 percent to 19 percent, likely voters say federal income taxes have gone up for the middle class in the past two years." Without a hint of self-doubt, Leonard concludes that American voters are unbearably stupid. He is the kind of guy who buys a ton of junk at the local Walmart, pays with a credit card, notes on the way out he still has as much cash as when he went in, then concludes that he got all this stuff for free. He marvels at the stupidity of all those silly proles in line dishing out actual money for their purchases, instead of just using a credit card, like he and the other smart people do.

Clue for Andrew Leonard: if there is a deficit, taxes are actually going up, regardless of what your current tax bill reads. Taxes went up during the W administration. They are going up waaay faster now. Andrew, that is what those unbearably stupid 52% are getting that you are missing.

spike

From pharos at gmail.com Wed Nov 3 23:24:41 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 23:24:41 +0000 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <004a01cb7b7b$0671db70$13559250$@att.net> References: <4CD0814D.3040806@evil-genius.com> <668700.46914.qm@web30107.mail.mud.yahoo.com> <004a01cb7b7b$0671db70$13559250$@att.net> Message-ID:

On Wed, Nov 3, 2010 at 5:17 PM, spike wrote:
> Ja, clearly the pan's feet are better adapted for swinging from trees and homo's feet are better for walking distances on a grassy plain. Good point Dan. For that matter, as pointed out by someone earlier, pan's hands are not as good as homo's at grasping a burning bush. Pan's thumbs are mounted too far aft.
By coincidence, Stone Age humans were only able to develop relatively advanced tools after their brains evolved a greater capacity for complex thought, according to a new study that investigates why it took early humans almost two million years to move from razor-sharp stones to a hand-held stone axe.

------------------
BillK

From rpwl at lightlink.com Thu Nov 4 00:52:40 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 03 Nov 2010 20:52:40 -0400 Subject: [ExI] A new Culture novel by Iain Banks In-Reply-To: References: Message-ID: <4CD203D8.5080602@lightlink.com>

John Grigg wrote:
> This is always a cause for celebration!
>
> http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

Yay!! More Culture!

Coming on the heels of John Clark's prognosis that yesterday's election will mean Obama is more likely to get elected in 2012, this is turning out to be a more cheerful day than I expected... :-) And the AUD paritied the USD today... what is this, are the planets all lined up or something?

Richard Loosemore

From brent.allsop at canonizer.com Thu Nov 4 02:53:40 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Wed, 03 Nov 2010 20:53:40 -0600 Subject: [ExI] Flash of insight... In-Reply-To: <4CD0DC77.7070603@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> <4CD0DC77.7070603@speakeasy.net> Message-ID: <4CD22034.4060304@canonizer.com>

Psychonauts,

From the way others are talking about all this, they clearly don't yet fully understand what is going on in the right way. If you think of a simulated world like Halo, with two competitors in that simulated world, the data representing one of them could be stored in one memory chip, while the data representing the second could be represented by the circuits in a different memory chip. If a third competitor showed up between them, certainly you wouldn't necessarily conclude that the third person's existence was represented by something spatially between these two chips. But he could be, just by happenstance. The actual representations (or neural correlates of our 3D conscious knowledge) need not have anything to do with each other. Though the brilliant Steven Lehar makes some very powerful arguments, mostly for efficiency's sake, for the correlates being laid out in a very isomorphic 3D way -- kind of like an actual model of your spatial world laid out within the neurons of your cortex. Think of the flat mountains, moon behind them, and the stars, all as being not infinitely far away (since your brain isn't large enough to represent much more than a few miles of 3D space) but merely flat cutouts pasted on the inside of your skull -- or actually as being represented by the set of neurons closest to your skull. And of course, your body is represented by the neurons near the center of all this -- with your 'spirit' being inside this, as if it were looking out of the representation of the eyes -- though unlike the rest, your knowledge of your spirit has no referent in reality.

On 11/2/2010 9:52 PM, Alan Grimes wrote:
>> I look forward to soon knowing first hand just how diverse your experiences of yourself are, Alan, compared to my own.
> ????
>
> How do you propose to do that?

You haven't read chapters 5 and 6 of 1229 Years After Titanic yet, have you?
http://home.comcast.net/~brent.allsop/1229.htm#_Toc22030742

To start, if we happen to represent things very similarly, there is a chance something like an fMRI will be able to see enough resolution of neural operation to tell us that my experiences are very similar to yours -- or not. There may be other tricks, like using cameras and goggles to induce one of us to experience things the way the other does. (Again, this being confirmed by the fMRI-like device observing us achieving similar responsible neural correlates -- and then saying: "There, you have it, that is what it is like for Alan.")

Ultimately, though, as predicted by the brilliant V.S. Ramachandran, we need to do between brains what the corpus callosum is doing between our brain hemispheres. We need to eff the ineffable -- as in, oh, THAT is what salt tastes like for you. Such a connecting 'cable of neurons' will enable our conscious models of reality to subjectively merge. When I hug my spouse, currently I only experience half of what is going on. With this kind of a hookup, I'll be able to experience it all, just as I now do for both the right and left halves of my body and a world of about 2 miles in both directions -- represented by both hemispheres, the right hemisphere representing my left body/world and vice versa.

And, as predicted in the 1229 story, our 'spirits' will freely traverse between such consciously connected phenomenal worlds. We'll be making unimaginable phenomenal worlds, exponentially more diverse, which nobody has yet experienced anything phenomenally like, and so much more. Not to mention we'll finally know 'what it is like to be a bat' or a snail.... as we grow toward becoming omni-phenomenal and realizing that all of nature is so much more than just cause and effect behavior.

I know how the light of a sunset behaves, and what my brain's representation of a sunset is phenomenally like. The real question is, what is the actual sunset really phenomenally like?

Brent Allsop

From atymes at gmail.com Thu Nov 4 03:40:10 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 3 Nov 2010 20:40:10 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: <238426.99429.qm@web30104.mail.mud.yahoo.com> References: <832116.3227.qm@web30106.mail.mud.yahoo.com> <238426.99429.qm@web30104.mail.mud.yahoo.com> Message-ID:

You miss one of my major points: yes, anyone _can_ use this. Think about who _will_. Or, at least, who is more likely to. The odds of a scientist who knows evolution using this within the next five years exceed the odds of a creationist using it within the same time frame.

Yes, that's getting into probabilities. Yes, it's not guaranteed. The future cannot be guaranteed. Even the Singularity is not absolutely certain to happen, in any form that we'd call a Singularity, but merely likely. But certain outcomes can be made more likely -- and if that's all that can be done, then it shall have to be good enough. And in this case, it is more likely that people we would agree with will use it, before people we would disagree with, at least as regards their use of it.

2010/11/3 Dan
> I don't disagree about the "silent audience" in any discussion, though I wonder if some of them aren't just immediately turned off by a continuous stream of emotional arguments anyhow.
>
> Regarding facts, the problem here would be interpretation in many cases. Also, merely citing journal articles doesn't settle things in many cases.
> Think about those economists and market analysts pointing out that the > housing bubble was going to burst and those who argued against them. The > latter could've easily created chatbots citing all the relevant articles in > peer-reviewed journals right up until the market unraveled in 2008. In a > sense, it's all going to depend on what the silent audience takes for fact > and reliable reasoning in the first place. (Of course, this is not an attack > on chatbots per se, but merely to point out that the wider social context is > important.) > > Regarding a Creationist setting these up, well, aren't there already cheat > sheets that Creationists use? Isn't there a book out called _How to Debate > an Atheist_? Yes, this can be used for good or ill, and, like you, I'm more > the optimist here. But the likely long-term outcome is probably not going to > be the Dark Side is thwarted by chatbots, but that Dark Side chatbots make > the more intelligent people less likely to take chat seriously. (In my > opinion, that might actually be a big win. There are almost always more > important things to do. :) > > Regards, > > Dan > > *From:* Adrian Tymes > *To:* ExI chat list > *Sent:* Wed, November 3, 2010 2:17:20 PM > *Subject:* Re: [ExI] The answer to tireless stupidity > > Very true. However: > > 1) Might it be the case that those whose arguments are not based on facts > have > more buttons to push? If they can not be secure in letting the other side > have the > last word, because they know everyone else can tell which side is the > buffoon... > > 2) The point of the debate is more often to convince the silent audience. > If one > side keeps making emotional arguments, and the other side keeps rebutting > by > linking to facts supported by outside sources, more people who witness the > debate will come away leaning toward the latter. > > 3) This is an interesting development as a political tool. Like any > technology, it > can be used for good or evil. However, like many new technologies, those > who we > view as "good" tend to be in a better position to use these tools, and thus > will > probably make more effective use of them (at least in the next decade or > two). > (In other words: try imagining an Extropian setting one of these up, then > try to > imagine a creationist setting one of these up. It's easier to imagine the > former > case, no?) > > On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: > >> This is not necessarily a cure for anti-science nonsense or even nonsense. >> It >> could be used against anyone holding any view: simply wear them down. >> E.g., >> someone here argues for Extropians or transhumanist views and someone else >> sets >> up a chatbot merely to keep pushing their buttons. >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Thu Nov 4 05:04:47 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 04 Nov 2010 01:04:47 -0400 Subject: [ExI] Flash of insight... 
In-Reply-To: <4CD22034.4060304@canonizer.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> <4CD0DC77.7070603@speakeasy.net> <4CD22034.4060304@canonizer.com> Message-ID: <4CD23EEF.7080309@speakeasy.net>

Brent Allsop wrote:
>>> I look forward to soon knowing first hand just how diverse your experiences of yourself are, Alan, compared to my own.
>> ????
>> How do you propose to do that?
> You haven't read chapters 5 and 6 of 1229 Years After Titanic yet, have you?
> http://home.comcast.net/~brent.allsop/1229.htm#_Toc22030742

=\ I skimmed those again; they just seemed to be a random collection of vague statements and dialogue beginning with "I". =\ If you want to see how to write in the first person, read Orange Sky by myself. =P Problem is, I don't have it up on the web right now, and since the thing is over 300k in length, it'd take weeks to convert it to html format. I was thinking of publishing it, but then I'd have to rewrite it, and I was running out of creative energy before I even finished it. =\

> To start, if we happen to represent things very similarly, there is a chance something like an fMRI will be able to see enough resolution of neural operation to tell us that my experiences are very similar to yours - or not. There may be other tricks, like using cameras and goggles to induce one of us to experience things the way the other does. (Again, this being confirmed by the fMRI-like device observing us achieving similar responsible neural correlates - and then saying: "There, you have it, that is what it is like for Alan.")

Implausible. The proposal fails to account for dimorphisms in the neural architecture that are at the heart of what's being discussed, i.e. our neural networks might be incapable of simulating the other without in some ways becoming the other, so you couldn't just "sample" it like a taste test. The proposal doesn't even account for getting even that far.

> Ultimately, though, as predicted by the brilliant V.S. Ramachandran, we need to do between brains what the corpus callosum is doing between our brain hemispheres. We need to eff the ineffable - as in, oh, THAT is what salt tastes like for you. Such a connecting 'cable of neurons' will enable our conscious models of reality to subjectively merge. When I hug my spouse, currently I only experience half of what is going on. With this kind of a hookup, I'll be able to experience it all, just as I now do for both the right and left halves of my body and a world of about 2 miles in both directions - represented by both hemispheres, the right hemisphere representing my left body/world and vice versa.

Now that is an interesting proposal. In my Tortoise Vs. Achilles dialogs I have a character, a borganism, who has a true single consciousness across several bodies. (Look, I've written ten times as much as you and I'm better at it too! I just don't go around citing it as if it were a classic or peer-reviewed literature.) I'm extremely cautious with the word "need", but yes, the ability to set such up between brains and, more importantly, between a brain and a computronium counterpart would be extremely useful. It definitely falls within the category of Real Transhumanism (tm).
> And, as predicted in the 1229 story, our 'spirits' will freely traverse > between such consciously connected phenomenal worlds. We'll be making > unimaginable phenomenal worlds exponentially more diverse and which > nobody has yet experienced anything phenomenally like yet, and so much > more. Not to mention we'll finally know 'what it is like to be a bat' > or a snail.... as we grow toward becoming omni phenomenal and realizing > that all of nature is so much more than just cause and effect behavior. Predictions of this sort are useless because they don't lead towards meaningful action. The correct way to think about this is "Do you want to do this or do you not want to do it?" With the answer to that in hand, the next question is "So what are you going to do about it, huh? punk... What are you going to do!". Me? I'm going to get my self a NAO, and a personal supercomputer and solve AI. After that it's off to the races... > I know how the light of a sunset behaves, and what my brains > representation of a sunset is phenomenally like. The real question is, > what is the actual sunset really phenomenally like. I'm not sure that question is meaningful. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From dan_ust at yahoo.com Thu Nov 4 13:39:23 2010 From: dan_ust at yahoo.com (Dan) Date: Thu, 4 Nov 2010 06:39:23 -0700 (PDT) Subject: [ExI] Australian dollar In-Reply-To: <000801cb7ba7$77590a30$660b1e90$@att.net> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> Message-ID: <362237.28012.qm@web30102.mail.mud.yahoo.com> Imports themselves are not to blame. Also, recall the context of my statement here: BillK wrote, "Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports." He's, obviously, here pointing to an upside to currency devaluation. I was questioning whether this was really an upside after all. What's wrong, after all, with imports? They're a sign of trade -- that people somewhere else want to sell stuff to you. This is usually a great thing -- it spreads the division of labor ever further --?making for greater efficiency in production -- and usually provides you with more things to choose from. We are not "spending ourselves to a brutal catastrophe." The US government is. If you're worried about spending being too high (by whose reckoning?), then the thing to do is stop government-sponsored credit expansion. Also, stop government debt-financing -- which is one of the main drivers of credit policy (credit expansion allows big debtors to borrow more; the biggest debtor in any modern economy is its government). This debt, too, doesn't need to be paid back. It should be defaulted. Defaulting on the government debt will make creditors unlikely to loan to the government again. More importantly, paying it off will involve coercion -- via taxation or some other coercive means. Yes, I know, the wealthy creditors who lent to the government enjoy being paid off by taxes and the like. Well, that has to stop and would undermine the Hamiltonian notion of having national debt to cleave the wealthy to the government. (Granted, my recommendation here would be unpopular with these same creditors and they would try to persuade everyone that the world will end if the government default or were just abolished outright.*) Regards, Dan * Which is _the_ libertarian position. 
Libertarians who advocate government are inconsistent.

----- Original Message ---- From: spike To: ExI chat list Sent: Wed, November 3, 2010 6:35:49 PM Subject: Re: [ExI] Australian dollar

That US dollar will drop way faster now that the federal reserve has just bought up 600 billion in US Treasury notes. US government buying its own debt is equivalent to spinning up the printing presses at the national mint. Are we going to keep pretending this is a debt that never needs to be paid back?

... On Behalf Of Dan ...
>Why is reducing imports a good thing? Dan

Dan, your even asking the question worries me. Answer: because we are spending ourselves to brutal catastrophe.

spike

From rahmans at me.com Thu Nov 4 10:20:57 2010 From: rahmans at me.com (Omar Rahman) Date: Thu, 04 Nov 2010 11:20:57 +0100 Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity In-Reply-To: References: Message-ID: <11E05544-EA6E-4921-9866-21539C70EA03@me.com>

Spike,

This is brilliant. You've just set up the scenario for a new and improved Turing test. Why improved? It basically fulfills the Turing test... but potentially serves a reproductive purpose, thereby influencing evolution. Well done sir!

Regards,
Omar Rahman

P.S. Time to think up some super sexy code to attract post-singularity mates!

>> Subject: [ExI] The answer to tireless stupidity
>> Chatbot Wears Down Proponents of Anti-Science Nonsense... Jeff Davis
>
> Actually this application would be a pointless waste of perfectly good technology.
>
> Consider the online lonely hearts club. There are places on the web (and the usenets before that, and DARPAnet even before that) where lonely hearts would hang out and make small talk. A really useful application of a chatbot would be to have it mine one's own writings and produce an enormous lookup table, which it would then use to perform all the tedious, error-prone and emotionally hazardous early stages of online seduction. As soon as the other party agrees to meeting for, um, stimulating conversation (and so forth), the seductobot would alert the user, who then reads over what the bot has said to the prospective contact.
>
> Of course, the other party might also have set up a seduct-o-matic to do the same thing.
>
> Similarly to Jeff's example, it might soon become very difficult to distinguish two humans trying to get each other into the sack from two lookup tables doing likewise. As soon as actual creativity or innovation is seen in the mating process, we know it must be a chatbot, for humans have discovered nothing essentially new in that area since a few weeks after some adventurous pair of protobonobos first discovered copulation.
>
> spike

From spike66 at att.net Thu Nov 4 15:28:43 2010 From: spike66 at att.net (spike) Date: Thu, 4 Nov 2010 08:28:43 -0700 Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity In-Reply-To: <11E05544-EA6E-4921-9866-21539C70EA03@me.com> References: <11E05544-EA6E-4921-9866-21539C70EA03@me.com> Message-ID: <003201cb7c34$f7642470$e62c6d50$@att.net>

Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity

>> Similarly to Jeff's example, it might soon become very difficult to distinguish two humans trying to get each other into the sack from two lookup tables doing likewise... spike

>Spike,
>This is brilliant. You've just set up the scenario for a new and improved Turing test. Why improved? It basically fulfills the Turing test... ...
Omar, you are too kind sir, but I cannot claim originality. A few years ago, a guy realized that plenty of college-age hipsters had never heard of Eliza, the software psychoanalyst. That was a toy that came and went a long time ago. I played with it some in college. He set up an Eliza-like program, which is easy to reproduce in excel with a good sized lookup table, then set it to hang out in a teen chat room, to see if the kids would ever figure out they were talking to a computer. A few of them did, but most did not. There was one striking example of a kid who poured out his heart to this program for 55 minutes, apparently never realizing it was a machine. That is a form of Turing test success. It made Slashdot headlines, but I think it was at least five or six years ago -- long enough for everyone to forget, and for a fresh innocent batch of teens to redo the experiment. Muwaaaahaaahaahahahahahahaaaa...

>...but potentially serves a reproductive purpose, thereby influencing evolution. Well done sir!
>Regards,
>Omar Rahman

Hmmm, that gives me pause. Fortunately the kinds of mating I had in mind seldom result in actual reproduction. But I suppose it could generate larvae, in which case we would be encouraging the breeding of people who rely on machines to do the messy emotional stuff that is intertwined with the mating game. Oy freaking vey.

Well, wait a minute, hold that thought. Perhaps this isn't anything new. Consider Hallmark cards. There is an example where we take the sweet gooey feeling stuff that many of us here recognize we are not particularly good at, and hire others to do it for us. We buy the birthday wishes written by others for a couple bucks. Same for wedding best wishes, get well soon cards, sympathy cards and so on. We already subcontract emotional care and feeding to others who are better at it than we are. So I guess it isn't such a major stretch to imagine we set up seductobots to look around on the web, get acquainted with, and prime prospective mates. I could even see setting up the seductobot with one's own personality quirks.

Here's a possible innovation. The seductobot, being tireless, can filter through arbitrarily many potential mates, more than its human counterpart could ever service with actual copulation. It would be a little like the 72 virgins thing, only there would be more than 72 and they wouldn't actually be virgins. So one could set the bot to present the person as he *really is*, as opposed to the idealized version of oneself that pretty much everyone presents when hanging out on lonely hearts sites. One could actually downplay one's virtues, as few of us actually ever do. Then the potential mate would enjoy pleasant surprises as opposed to disappointments as she came to know you better.

spike

From pharos at gmail.com Thu Nov 4 16:54:40 2010 From: pharos at gmail.com (BillK) Date: Thu, 4 Nov 2010 16:54:40 +0000 Subject: [ExI] Australian dollar In-Reply-To: <362237.28012.qm@web30102.mail.mud.yahoo.com> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> <362237.28012.qm@web30102.mail.mud.yahoo.com> Message-ID:

On Thu, Nov 4, 2010 at 1:39 PM, Dan wrote:
> Imports themselves are not to blame. Also, recall the context of my statement here: BillK wrote, "Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports."
>
> He's, obviously, here pointing to an upside to currency devaluation. I was questioning whether this was really an upside after all. What's wrong, after all, with imports? They're a sign of trade -- that people somewhere else want to sell stuff to you. This is usually a great thing -- it spreads the division of labor ever further -- making for greater efficiency in production -- and usually provides you with more things to choose from.

I agree that trade is good. But I was writing in the context of the huge US deficit funding. The US specifically needs to get the import / export trade back in balance.

> We are not "spending ourselves to a brutal catastrophe." The US government is. If you're worried about spending being too high (by whose reckoning?), then the thing to do is stop government-sponsored credit expansion. Also, stop government debt-financing -- which is one of the main drivers of credit policy (credit expansion allows big debtors to borrow more; the biggest debtor in any modern economy is its government).

I'd love to have governments do as I tell them, but they won't listen. :)

> This debt, too, doesn't need to be paid back. It should be defaulted on. Defaulting on the government debt will make creditors unlikely to loan to the government again. More importantly, paying it off will involve coercion -- via taxation or some other coercive means. Yes, I know, the wealthy creditors who lent to the government enjoy being paid off by taxes and the like. Well, that has to stop, and it would undermine the Hamiltonian notion of having a national debt to cleave the wealthy to the government. (Granted, my recommendation here would be unpopular with these same creditors and they would try to persuade everyone that the world will end if the government defaulted or were just abolished outright.*)

The US *is* defaulting on the debt by devaluing the dollar (and hoping that nobody notices). Your economic theory comments ignore the practical situation that the US is now in. The government is owned by the wealthy and has been used and is currently being used to expedite the transfer of all the wealth in the nation into the pockets of the already unbelievably wealthy few. Dollar devaluation doesn't much affect the super-wealthy, who own property, land, gold, etc. in the US and abroad in tax havens. As currency devalues, real assets tend to keep their real value. That's where Obama failed. He had a chance to stop the looting when the financial crisis hit, but instead he caved in, bailed them out by giving them billions more and let them carry on as usual.

BillK

From scerir at alice.it Thu Nov 4 18:05:09 2010 From: scerir at alice.it (scerir) Date: Thu, 4 Nov 2010 19:05:09 +0100 Subject: [ExI] Bayes and psi In-Reply-To: <4CD1CB7B.1080804@satx.rr.com> References: <567253.29951.qm@web30701.mail.mud.yahoo.com> <4CD1C4D6.6000101@satx.rr.com> <4CD1CB7B.1080804@satx.rr.com> Message-ID:

Damien Broderick
> This might be of interest: a link to a plenary lecture Prof. Utts gave this summer at the 8th International Conference on Teaching Statistics.
> http://icots8.org/cd/pdfs/plenaries/ICOTS8_PL2_UTTS.pdf

Michael Strevens wrote papers on Bayes vs philosophy:

The Bayesian Approach to the Philosophy of Science
http://www.strevens.org/research/simplexuality/Bayes.pdf

Notes on Bayesian Confirmation Theory
http://www.nyu.edu/classes/strevens/BCT/BCT.pdf

From jonkc at bellsouth.net Thu Nov 4 21:04:08 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Nov 2010 17:04:08 -0400 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <364035E2-F6CA-4F92-B739-563093FF0921@bellsouth.net> <8D7BE957-ED66-4DEB-AE0C-B77CF6F169CF@bellsouth.net> Message-ID:

On Oct 28, 2010, at 2:37 PM, Dave Sill wrote:
> If I have two identical apples in my hands, they're still two separate apples, not one.

If the apples are truly identical, then if you exchange their positions you have made no change at all; the universe has no way of knowing it happened or any reason for caring.

>>> I'd never agree to allow a non-destructive upload of myself without it being made clear to the upload immediately upon activation that that's what it is.
>> If you are very very very lucky maybe someday Mr. Jupiter Brain will give you that choice, or at least pretend to give you that choice.
> I'm assuming that the experiment is being conducted by benevolent, trustworthy parties. If that's not true, all bets are off.

If Mr. Jupiter Brain decides, for whatever reason, to upload you rather than just off you, then he will not be conducting an experiment; he already knows what will happen. And if he is kind (and I don't know that he will be) and knows you have an irrational fear of being uploaded, then he just won't tell you that you are an upload. Ignorance is bliss.

John K Clark

From kanzure at gmail.com Fri Nov 5 16:59:19 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 5 Nov 2010 11:59:19 -0500 Subject: [ExI] Announcing the Gada Prize in Personal Manufacturing @ Humanity+ Message-ID:

Today, Humanity+ is announcing that we are taking on the Gada Prize in Personal Manufacturing. I am really excited about this one. Here's the announcement:

Announcing the Gada Prize in Personal Manufacturing @ Humanity+
http://humanityplus.org/2010/11/gada-prize-in-personal-manufacturing-at-humanityplus/

"""
Humanity+ is proud to announce the Gada Prize (gadaprize.org) in Personal Manufacturing. By January 1, 2013, we will award $20,000 to the individual or team who demonstrates specific improvements to 3D printing technology. The prize was initially hosted by the Foresight Institute and is now hosted by Humanity+. Founded in 1998, Humanity+ focuses on human enhancement and emerging technologies.

Desktop 3D printers promise a disruption in manufacturing technology, with improvements in price, productivity, and portability. We believe that a fully open-source 3D printer will herald a new era for both industrial manufacturing and individual prototyping, allowing everyone to rapidly build and test their inventions. The Gada Prize awards innovations applied to the RepRap platform, an open source 3D printer capable of printing plastic objects. Established in 2005 by Adrian Bowyer, the RepRap project has now grown into an international community of scientists, researchers, engineers and RepRap operators. The long-term vision of the RepRap project is an open-source self-replicating machine -- a 3D printer that can build copies of itself.

Interested? Everyone is invited to get involved! The teams are especially friendly, and you can always reach out to us. Humanity+ is an international organization focusing on technologies that expand human capacities. We primarily engage in promotion, conferences, ethics, debate, publication, and sponsored projects. The goal of the Gada Prizes is to improve the lives of one billion people by 2020. After an incubation period with the Foresight Institute, the Gada Prize is now a welcome addition to our portfolio.

Resources:
- RepRap wiki has a list of teams
- RepRap.org prize forum
- irc.freenode.net #reprap
- irc.freenode.net #hplusroadmap

Contact: Bryan Bishop
Asst. Director of R&D, Humanity+
bryan at humanityplus.org
phone: +1-512-203-0507
"""

On a related note, you can access information at gadaprize.org from now on.

- Bryan
http://heybryan.org/
1 512 203 0507

From possiblepaths2050 at gmail.com Fri Nov 5 20:04:46 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 5 Nov 2010 13:04:46 -0700 Subject: [ExI] How old people will remake the world In-Reply-To: References: Message-ID:

I'm amazed that no one has commented on this fascinating link. The aging of the first world's population has great social ramifications (especially since in some nations the young people are not having enough children to maintain replacement levels).

John

On 11/3/10, John Grigg wrote:
> At least for some people, the aging of the world population will improve life...
>
> http://www.salon.com/books/feature/2010/10/31/shock_of_gray_interview
>
> John

From spike66 at att.net Fri Nov 5 21:20:19 2010 From: spike66 at att.net (spike) Date: Fri, 5 Nov 2010 14:20:19 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD46562.9000903@evil-genius.com> References: <4CD46562.9000903@evil-genius.com> Message-ID: <005101cb7d2f$4035e580$c0a1b080$@att.net>

From: lists1 at evil-genius.com [mailto:lists1 at evil-genius.com] Subject: Re: [ExI] prediction for 2 November 2010

>> The stupidity of American voters...
>> http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters
>> Clue for Andrew Leonard: ...
>> Andrew, that is what those unbearably stupid 52% are getting that you are missing... spike

>Note to modern liberals: a political strategy based on telling people they're stupid is doomed to fail. And they're not stupid... unlike liberals, they understand that there *is* a problem, even though they don't know what to do about it and blame the wrong things for it...

I have an idea for Andrew Leonard: start a new political party. There was a new one formed recently in New York called "The Rent Is Too Damn High Party." Having a name like that helps the voters sum up what the party is about. In that spirit, I suggest Andrew Leonard for the "Voters Are Unbearably Stupid Party." Its platform is to tell the voters that they are unbearably stupid.

spike

From nebathenemi at yahoo.co.uk Fri Nov 5 22:31:16 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Fri, 5 Nov 2010 22:31:16 +0000 (GMT) Subject: [ExI] Singularity spotting In-Reply-To: Message-ID: <176276.12152.qm@web27001.mail.ukl.yahoo.com>

I saw the Xbox game "Singularity" prominently on sale while at a games store, wondering if I could run Civilisation 5 on my PC. Plus, with "Transhuman Space", "Eclipse Phase" and other games bringing transhumanism to role-playing games, I saw this cartoon:

http://rpg.drivethrustuff.com/images/site_resources/Happy%20D20%20Adventures%20-%2013.jpg

Plus I remember reading in a New Scientist interview maybe two weeks back where the man interviewed said that "we'll either face a singularity-type scenario or a new dark age". So, there's ever-expanding popular usage of The Singularity as a concept. At this rate, it'll be 2011's buzzword.
I was > questioning whether this was really an upside after all. What's wrong, after > all, with imports? They're a sign of trade -- that people somewhere else want to > sell stuff to you. This is usually a great thing -- it spreads the division of > labor ever further --?making for greater efficiency in production -- and usually > provides you with more things to choose from. > I agree that trade is good. But I was writing in the context of the huge US deficit funding. The US specifically needs to get the import / export trade back in balance. > We are not "spending ourselves to a brutal catastrophe." The US government is. > If you're worried about spending being too high (by whose reckoning?), then the > thing to do is stop government-sponsored credit expansion. Also, stop government > debt-financing -- which is one of the main drivers of credit policy (credit > expansion allows big debtors to borrow more; the biggest debtor in any modern > economy is its government). > I'd love to have governments do as I tell them, but they won't listen. :) > This debt, too, doesn't need to be paid back. It should be defaulted. Defaulting > on the government debt will make creditors unlikely to loan to the government > again. More importantly, paying it off will involve coercion -- via taxation or > some other coercive means. Yes, I know, the wealthy creditors who lent to the > government enjoy being paid off by taxes and the like. Well, that has to stop > and would undermine the Hamiltonian notion of having national debt to cleave the > wealthy to the government. (Granted, my recommendation here would be unpopular > with these same creditors and they would try to persuade everyone that the world > will end if the government default or were just abolished outright.*) > > The US *is* defaulting on the debt by devaluing the dollar (and hoping that nobody notices). Your economic theory comments ignore the practical situation that the US in now in. The government is owned by the wealthy and has been used and is currently being used to expedite the transfer of all the wealth in the nation into the pockets of the already unbelievably wealthy few. Dollar devaluation doesn't much affect the super-wealthy who own property, land, gold, etc. in the US and abroad in tax havens. As currency devalues, real assets tend to keep their real value. That's where Obama failed. He had a chance to stop the looting when the financial crisis hit, but instead he caved in, bailed them out by giving them billions more and let them carry on as usual. BillK From scerir at alice.it Thu Nov 4 18:05:09 2010 From: scerir at alice.it (scerir) Date: Thu, 4 Nov 2010 19:05:09 +0100 Subject: [ExI] Bayes and psi In-Reply-To: <4CD1CB7B.1080804@satx.rr.com> References: <567253.29951.qm@web30701.mail.mud.yahoo.com><4CD1C4D6.6000101@satx.rr.com> <4CD1CB7B.1080804@satx.rr.com> Message-ID: Damien Broderick > This might be of interest: a link to a plenary lecture Prof. Utts gave > this summer at the 8th International Conference on Teaching Statistics. > http://icots8.org/cd/pdfs/plenaries/ICOTS8_PL2_UTTS.pdf Michael Strevens wrote papers on Bayes vs philosophy The Bayesian Approach to the Philosophy of Science http://www.strevens.org/research/simplexuality/Bayes.pdf Notes on Bayesian Confirmation Theory http://www.nyu.edu/classes/strevens/BCT/BCT.pdf From jonkc at bellsouth.net Thu Nov 4 21:04:08 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Nov 2010 17:04:08 -0400 Subject: [ExI] Let's play What If. 
From jonkc at bellsouth.net Thu Nov 4 21:04:08 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Nov 2010 17:04:08 -0400 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <364035E2-F6CA-4F92-B739-563093FF0921@bellsouth.net> <8D7BE957-ED66-4DEB-AE0C-B77CF6F169CF@bellsouth.net> Message-ID:

On Oct 28, 2010, at 2:37 PM, Dave Sill wrote:

> If I have two identical apples in my hands, they're still two separate apples, not one.

If the apples are truly identical, then exchanging their positions makes no change at all; the universe has no way of knowing it happened and no reason to care.

>>> I'd never agree to allow a non-destructive upload of myself without it being made clear to the upload immediately upon activation that that's what it is.

>> If you are very very very lucky maybe someday Mr. Jupiter Brain will give you that choice, or at least pretend to give you that choice.

> I'm assuming that the experiment is being conducted by benevolent, trustworthy parties. If that's not true, all bets are off.

If Mr. Jupiter Brain decides, for whatever reason, to upload you rather than just off you, then he will not be conducting an experiment; he already knows what will happen. And if he is kind (and I don't know that he will be) and knows you have an irrational fear of being uploaded, then he just won't tell you that you are an upload. Ignorance is bliss.

John K Clark

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kanzure at gmail.com Fri Nov 5 16:59:19 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 5 Nov 2010 11:59:19 -0500 Subject: [ExI] Announcing the Gada Prize in Personal Manufacturing @ Humanity+ Message-ID:

Today, Humanity+ is announcing that we are taking on the Gada Prize in Personal Manufacturing. I am really excited about this one. Here's the announcement:

Announcing the Gada Prize in Personal Manufacturing @ Humanity+
http://humanityplus.org/2010/11/gada-prize-in-personal-manufacturing-at-humanityplus/

"""
Humanity+ is proud to announce the Gada Prize (gadaprize.org) in Personal Manufacturing. By January 1, 2013, we will award $20,000 to the individual or team who demonstrates specific improvements to 3D printing technology. The prize was initially hosted by the Foresight Institute and is now hosted by Humanity+. Founded in 1998, Humanity+ focuses on human enhancement and emerging technologies.

Desktop 3D printers promise a disruption in manufacturing technology, with improvements in price, productivity, and portability. We believe that a fully open-source 3D printer will herald a new era for both industrial manufacturing and individual prototyping, allowing everyone to rapidly build and test their inventions. The Gada Prize awards innovations applied to the RepRap platform, an open source 3D printer capable of printing plastic objects. Established in 2005 by Adrian Bowyer, the RepRap project has now grown into an international community of scientists, researchers, engineers and RepRap operators. The long-term vision of the RepRap project is an open-source self-replicating machine - a 3D printer that can build copies of itself.

Interested? Everyone is invited to get involved! The teams are especially friendly, and you can always reach out to us. Humanity+ is an international organization focusing on technologies that expand human capacities. We primarily engage in promotion, conferences, ethics, debate, publication, and sponsored projects. The goal of the Gada Prizes is to improve the lives of one billion people by 2020. After an incubation period with the Foresight Institute, the Gada Prize is now a welcome addition to our portfolio.

Resources:
- RepRap wiki has a list of teams
- RepRap.org prize forum
- irc.freenode.net #reprap
- irc.freenode.net #hplusroadmap

Contact: Bryan Bishop, Asst. Director of R&D, Humanity+ bryan at humanityplus.org phone: +1-512-203-0507
"""

On a related note, you can access information at gadaprize.org from now on.

- Bryan
http://heybryan.org/
1 512 203 0507

-------------- next part -------------- An HTML attachment was scrubbed... URL:
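RepRaps consume G-code, a plain-text command stream. A sketch in Python that emits a single square perimeter, just to show the flavor of what the printer actually runs; the coordinates, feed rate, and extrusion values here are invented for illustration, not taken from any RepRap documentation:

# Emit RepRap-style G-code for one square perimeter on the first layer.
# All numbers are illustrative.
def square_gcode(size=20.0, z=0.3, feed=1800, e_per_mm=0.05):
    x0 = y0 = 50.0
    pts = [(x0, y0), (x0 + size, y0), (x0 + size, y0 + size),
           (x0, y0 + size), (x0, y0)]
    out = ["G28 ; home all axes",
           "G1 Z%.2f F%d ; move to first layer height" % (z, feed),
           "G1 X%.1f Y%.1f ; travel to start" % (x0, y0)]
    e = 0.0
    for (px, py), (qx, qy) in zip(pts, pts[1:]):
        e += e_per_mm * (abs(qx - px) + abs(qy - py))  # filament used so far
        out.append("G1 X%.1f Y%.1f E%.3f ; extrude one edge" % (qx, qy, e))
    return "\n".join(out)

print(square_gcode())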
From possiblepaths2050 at gmail.com Fri Nov 5 20:04:46 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 5 Nov 2010 13:04:46 -0700 Subject: [ExI] How old people will remake the world In-Reply-To: References: Message-ID:

I'm amazed that no one has commented on this fascinating link. The aging of the first world's population has great social ramifications (especially since in some nations the young people are not having enough children to maintain replacement levels).

John

On 11/3/10, John Grigg wrote:
> At least for some people, the aging of the world population will improve life...
>
> http://www.salon.com/books/feature/2010/10/31/shock_of_gray_interview
>
> John
>

From spike66 at att.net Fri Nov 5 21:20:19 2010 From: spike66 at att.net (spike) Date: Fri, 5 Nov 2010 14:20:19 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD46562.9000903@evil-genius.com> References: <4CD46562.9000903@evil-genius.com> Message-ID: <005101cb7d2f$4035e580$c0a1b080$@att.net>

From: lists1 at evil-genius.com [mailto:lists1 at evil-genius.com] Subject: Re: [ExI] prediction for 2 November 2010

>> The stupidity of American voters...
>
>> http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters
>
>> Clue for Andrew Leonard: ...
>> Andrew, that is what those unbearably stupid 52% are getting that you are missing... spike
...
>Note to modern liberals: a political strategy based on telling people they're stupid is doomed to fail. And they're not stupid... unlike liberals, they >understand that there *is* a problem, even though they don't know what to do about it and blame the wrong things for it...

I have an idea for Andrew Leonard: start a new political party. There was a new one formed recently in New York called "The Rent Is Too Damn High Party." Having a name like that helps the voters sum up what the party is about. In that spirit, I suggest Andrew Leonard for the "Voters Are Unbearably Stupid Party." Its platform is to tell the voters that they are unbearably stupid.

spike

From nebathenemi at yahoo.co.uk Fri Nov 5 22:31:16 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Fri, 5 Nov 2010 22:31:16 +0000 (GMT) Subject: [ExI] Singularity spotting In-Reply-To: Message-ID: <176276.12152.qm@web27001.mail.ukl.yahoo.com>

I saw the Xbox game "Singularity" prominently on sale while at a games store, wondering if I could run Civilisation 5 on my PC. Plus, with "Transhuman Space", "Eclipse Phase" and other games bringing transhumanism to role-playing games, I saw this cartoon:

http://rpg.drivethrustuff.com/images/site_resources/Happy%20D20%20Adventures%20-%2013.jpg

Plus I remember reading a New Scientist interview maybe two weeks back in which the man interviewed said that "we'll either face a singularity-type scenario or a new dark age". So, there's ever-expanding popular usage of The Singularity as a concept. At this rate, it'll be 2011's buzzword.
Tom

From emlynoregan at gmail.com Sat Nov 6 04:02:01 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sat, 6 Nov 2010 14:32:01 +1030 Subject: [ExI] The Codescape Message-ID:

Hi all, sorry I haven't been around for a while, coding ;-) But I thought this bit that I just wrote was on topic for the list.

---

http://point7.wordpress.com/2010/11/06/the-codescape/

There's this incredible place where I like to spend a lot of my time. Everyone I know is near it, closer every day, but mostly they don't come in.

When I was a kid, it barely existed, except in corporates and universities, but it expanded slowly. There wasn't much you could do, even after it began to really explode through the 90s. But lately it's become somewhere new, somewhere much bigger, somewhere much more interesting.

It's a place I call the Codescape, and it's becoming the platform on which the whole world runs.

The Codescape is simply the space of all computer programs (code) spanning the world. The internet is implemented in it, but it is not the 'net. "The Cloud" is one of the more interesting pieces of it, but it is not the cloud. It exists in every general purpose machine, as soon as anyone tries to make it run code. Some of it is in your computer, some is in your phone, there's even a little bit in your car. There might be a tiny pocket in your pacemaker.

In fact it's something that many of us grew accustomed to thinking of as a lot of isolated little pocket worlds - the place inside one machine or the place inside one network. It's related to the computer scientist's platonic space of pure code-as-mathematics, but it is really the gritty, logical-physical half-world of the running program instances, and the sharp-edged, capricious, often noxious rules that real running environments bring. It is the space of endless edge cases, failures, unforeseen and unforeseeable interactions between your own code and dimly perceived layers built by others.

The platonic vision of the code is a trick, an illusion. We like to fool ourselves into thinking that we can create software like one might do maths, in a tower of the mind, all axioms and formal system rules known and accounted for, and the program created inside those constraints like a beautiful fractal, eternal in its elegance and parsimony. Less a construct than a discovery.

The platonic code feels like a clean creation in the world of vision and symbols. Code is something you can see, after all, expressed as a form of writing. If you spend long enough away from the machines, you can think this is the real thing, mistake the map for the territory.

But the real Codescape isn't amenable to this at all. It is a dark place and a silent place. You know you are in the Codescape because your primary sensory modalities are touch, smell, and frankly, raw instinct.

It is an environment composed of APIs, system layers, protocols and, ultimately, raw bytes. It is an environment where the code vibrates in time with the thrumming of the hardware. You feel through this environment, trying to understand the shapes, reach perfectly into rough, edged crenelations, looking for that sensation of lock, the successful grasp. Always, though, you are ready for the feeling of an unexpected sharp edge, a hot surface, the smell of something turned bad, the tingle of your spidey sense.

It is a place that you can't physically be in, but you can project yourself into. The lines of code are like tendrils, or tentacles, or maybe like a trail of ants reaching out from the nest.
That painstaking projection, and the mapping of monkey senses and instincts to new purposes, turns most people off, but I think those of us most comfortable with it find the physical world similar. Possibly less abstractable, and so more alien. Certainly dumber.

Oddly enough, we don't talk about codespace much. It isn't because we don't want to, but because largely we cannot. We who travel freely between worlds often can't express it, because it is a place of system and not of narrative.

During periods of hype (mostly about the internet), a lot of bad novels and terrible movies get written about it (while missing it entirely), with gee-whiz 3D graphics and faux h4XX0r jargon. Sometimes some of us are even fooled by this, and so we pay unfortunate obeisance to notions like "virtual reality" and "cyberspace", and construct things like 3D corporate meeting places, or Second Life, or World of Warcraft. Those are bona fide places, good for the illiterate, and a pleasant place to unwind for people of the code. They even contain little pockets of bona fide codescape inside themselves - proper, first-class codescape, because all of the codescape is as real as the rest. But there is something garish, gauche about these 3D worlds, like the shopping mall inside an airport, divorced from the country in which it physically exists.

The main codescape now, as it exists in 2010, is like the mother of all MMOs. Many, many of us, those who can walk it (how many? hundreds of thousands?) play together in the untamed, expanding chaos of a world tied together by software and networks. Each of us plays for our own reasons; some for profit, some for potential power, some for attention, and many of us, increasingly, for individual autonomy and personal expression.

It's a weird place. It's never really been cool (although it's come close at times), because the kinds of people who decide on what's cool can't even see it. These days the cool kids (like Wired, or Make Magazine, or BoingBoing) like open hardware, or physical making. But everything interesting is being enabled by software, more and more and more software, and so becomes at heart a projection out of the Codescape.

Douglas Rushkoff's recent book, "Program or be Programmed", talks about how we are now living in this world where what I call the Codescape is shaping the lives of everyone, and where we are divided into the code-literate and not. His book is mostly dreary complaining that it's all too hard and the 'net should be more like it was in the 90s (joining an increasing chorus of 90s technorati who are finding themselves unable to keep up), but that first sentiment is absolutely spot on. If you can code, then, if you so choose, you can feel your way through codespace, explore the shifting landscape, and maybe carve out part of it in the shape of your own imaginings. Otherwise, you get internet-as-shiny-shopping-mall, a landscape of opaque gadgets, endless ads, monthly fees, and the faint suspicion that you are being constantly conned by Fagin-esque gangs.

I contend that if you care about personal autonomy, about freedom, in the 21st century, then you really should try to be part of this world. Perhaps for the first time, the potential for individuals is rivalling that of corporate entities. There is cheap and free server time on offer, high level environments into which you can project your codebase. The protocols are open, the documentation (sometimes just code itself) is free and freely available. Even the very best programming tools are free.
If you can acquire the skills and the motivation, you can walk the Codescape with nothing more than an internet connection, a $100 Chinese netbook, and your own wits. There is no barrier to entry, other than your ability to twist your mind into the shape that the proper incantations demand.

Everything has a programmable API, which you can access and play with and create with if you are prepared to make the effort. At your fingertips are the knowledge and information resources of the world, plus the social interactions of 2 billion humans and counting, plus a growing resource of inputs and outputs in the physical world with which you can see and act.

It's a new frontier, expanding faster than we can explore and settle it. It's going to be unrecognisable in 2020, and again in 2030, and who knows what after that. But the milestones are boring. The fun is in living it. The first challenge is just to try.

--
Emlyn

http://my.syyn.cc - A service for syncing buzz and facebook, posts, comments and all.
http://www.blahblahbleh.com - A simple youtube radio that I built
http://point7.wordpress.com - My blog
Find me on Facebook and Buzz
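"Everything has a programmable API" is easy to verify from that $100 netbook. A sketch in Python against httpbin.org, a public echo service; the endpoint exists, but treat the exact response fields as illustrative:

import json
import urllib.request

# httpbin.org echoes requests back as JSON; no key or signup needed.
with urllib.request.urlopen("https://httpbin.org/get?hello=codescape") as resp:
    data = json.load(resp)

print(data["url"])            # the request we made, echoed back
print(data["args"]["hello"])  # -> "codescape"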
From kanzure at gmail.com Sat Nov 6 15:14:45 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 6 Nov 2010 10:14:45 -0500 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: Message-ID:

---------- Forwarded message ----------
From: scary boots Date: Sat, Nov 6, 2010 at 10:10 AM Subject: [london-hack-space] Request for knowledge: Implantable Microchips To: london-hack-space at googlegroups.com

Hello everybody,

Some of you may have been there when I mentioned my desire to get myself microchipped. I want to be identifiable with pet scanners, and using it to access places would be cool as well (albeit somewhat unsuave as it'll be in the back of my neck).

Can't help noticing that all the ones for sale (and most are only for sale to registered vets) come with different brand names and only assert that they work with that particular company's scanner. Is there any standardization in the market? If not, what is most commonly used/works with easily-obtained scanners? Any other considerations I should bear in mind?

I am aware that the insertion cannula is quite large. I'm not worried about the insertion, because I have an experienced piercer who'll do it for me, and I'm not a pussy. But I'm damned if I'm going to get it inserted and then find out it's not compatible with anything.

Any help or links appreciated!

Scary

ps. would anyone be interested if I put photos of my crinoline up or is that totally dull to everyone who's not a frivolous poser like me?

--
- Bryan
http://heybryan.org/
1 512 203 0507

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From atymes at gmail.com Sat Nov 6 16:30:29 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 09:30:29 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: Message-ID:

I'll leave it to someone more qualified to talk about possible medical concerns, but as to standardization: nope. "Standardization" means "let other people make stuff that works with our toys", which is something that private vendors are loath to do in any early stage market such as this, because they think that's a part of the market they could serve themselves. It is only once the market matures, and vendors realize they can do better by focusing on a part of the market and letting other people handle the rest, that standards begin to emerge.

Of course, it is usually the case that vendors can do better by specializing in some profitable niche all along, even in an early stage market. In new markets, it is not obvious what that niche is. But more important is greed, and the common, usually errant belief that one vendor can do everything a customer would want with no outside assistance. (Also known as the Not Invented Here syndrome.)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Sat Nov 6 16:23:07 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 09:23:07 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: Message-ID: <001b01cb7dce$e5cc2f50$b1648df0$@att.net>

Bryan wrote:
>. I mentioned my desire to get myself microchipped. I want to be identifiable with pet scanners, and using it to access places would be cool as well (albeit somewhat unsuave as it'll be in the back of my neck). - Bryan

Hi Bryan,

I don't know about compatibility, but implanted microchips are the mark of the beast:

http://www.av1611.org/666/biochip.html

Fortunately I have always been a fan of beasts.

One comment you made here is that the chip will go in the back of the neck. The mark-of-the-beast site says they did a 1.5 million dollar research project and found that the best places would be the back of the hand or the forehead (as described in holy scripture donchaknow.) Without a penny of research, I can see these would be the second and third worst places for such a device (for men anyways.)

That being said, I would think a far better place for a microchip would be the earlobe.
There are no muscles nearby, no tendons, no contact with a pillow, no risk of it wandering off and lodging in your damn brain somewhere. Furthermore, people already abuse that particular body part for no particular reason other than some misguided fashion notions. Far be it from me to criticize misguided fashion notions, but this looks to me like a far better place for a subcutaneous chip, ja?

spike

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From atymes at gmail.com Sat Nov 6 16:47:44 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 09:47:44 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID:

2010/11/6 spike
> One comment you made here is that the chip will go in the back of the neck. The mark-of-the-beast site says they did a 1.5 million dollar research project and found that the best places would be the back of the hand or the forehead (as described in holy scripture donchaknow.) Without a penny of research, I can see these would be the second and third worst places for such a device (for men anyways.)
>

I can see the forehead, but why is the back of the hand a bad place?
Just the visible bump (since there's not that much flesh between the handbones and the skin there)?

> That being said, I would think a far better place for a microchip would be the earlobe. There are no muscles nearby, no tendons, no contact with a pillow,
>

Maybe for you, but I've grown used to sleeping with my head turned sideways (so I can have another pillow atop my head to block out noise), so it would definitely contact the pillow there.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kanzure at gmail.com Sat Nov 6 16:50:44 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 6 Nov 2010 11:50:44 -0500 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID:

2010/11/6 spike
> Hi Bryan,

Spike-- just to be clear, I didn't write the original email, but I did think it worth consideration. I don't particularly have a need to microchip myself as a cat/dog/antelope. But I imagine someone... uh. Might?

- Bryan
http://heybryan.org/
1 512 203 0507

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Sat Nov 6 18:58:11 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 11:58:11 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: <002101cb7de4$8fc551c0$af4ff540$@att.net>

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes Subject: Re: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips

2010/11/6 spike
>> 1.5 million dollar research project and found that the best places would be the back of the hand or the forehead (as described in holy scripture donchaknow.) Without a penny of research, I can see these would be the second and third worst places for such a device (for men anyways.)

>I can see the forehead, but why is the back of the hand a bad place? Just the visible bump (since there's not that much flesh between the handbones and the skin there)?

Not enough flab on the hand, way too many nerve endings, muscles and tendons everywhere, too much exposure to scrapes, plenty of mechanical stress; it just sounds risky to me. A possible alternative would be that loose flab on the upper arm. Most of us recall seeing our elementary school teacher writing something on the board, and that upper-arm flab would get to oscillating. Flab is a good place to put a microchip, to reduce the risk of its wandering off. Actually one of the best places for something like that might be in the scrotum, although it might make the user look a little strange when using the reader.

spike

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Sat Nov 6 19:06:30 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 12:06:30 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: <002601cb7de5$b8e577a0$2ab066e0$@att.net>

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Bryan Bishop Subject: Re: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips

Spike-- just to be clear, I didn't write the original email, but I did think it worth consideration. I don't particularly have a need to microchip myself as a cat/dog/antelope. But I imagine someone... uh. Might? - Bryan

Oh ok cool, I did miss that.

When the pet chips became available a few years ago, I thought it might be cool to have something like that to keep one's medical records, blood type, drug allergies and so forth. I didn't get one because of the same reasons your article mentions: there is no standard, and I don't want to keep having it changed every five years. The guy who wrote the article commented "I am not a pussy," and I am not either, don't even play one on TV, but I don't want to keep changing a subcutaneous chip as often as major music distribution formats change. I am one who has already lived thru vinyl LPs, 8-track tapes, cassette tapes, CDs, DVDs, MP3, and now whatever it is that young people use to buy their music. I have already rebought my favorite albums thrice.

spike

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefano.vaj at gmail.com Sat Nov 6 20:05:05 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Nov 2010 21:05:05 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID:

2010/11/2 Adrian Tymes
> 2010/11/2 Stefano Vaj
>> 2010/11/2 Adrian Tymes
>>> The main problem is, current fusion reactor operators consider sustaining fusion for a few seconds to be "long duration", and have engineered several tricks to keep it going that long.
>>
>> What's wrong with pulse propulsion detonating H-bombs one after another, V1-style?
>>
> What you're talking about was once called Project Orion.

Exactly.

> It could work, in theory, especially if you kept it outside the atmosphere to avoid radiation concerns

Or, you could try to limit the radioactive pollution somewhat and accept the rest, especially for "once-for-all" projects... ;-)

--
Stefano Vaj

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From atymes at gmail.com Sat Nov 6 20:48:29 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 13:48:29 -0700 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID:

2010/11/6 Stefano Vaj
> Or, you could try to limit the radioactive pollution somewhat and accept the rest, especially for "once-for-all" projects... ;-)
>

There's a fundamental problem with that type of thing. Anything where you aren't planning on returning to Earth, but where your trip does have adverse consequences for those who remain (like radioactive exhaust during launch), doesn't shield you from the people who can predict those consequences and prevent you from launching even once. Given the resources required, keeping it a secret while also getting the spaceship actually built is not possible. If you try, the secret will be discovered by such people after you start bending metal, probably around the time you start test firing the engine's components. (That, or it will remain in the planning stages forever, and thus fail to actually build the spaceship.)

Plan on returning, plan on giving those you leave behind no reason to stop you, or plan on never leaving in the first place. Any other plan is guaranteed to fail.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefano.vaj at gmail.com Sat Nov 6 22:08:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Nov 2010 23:08:34 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID:

2010/11/6 Adrian Tymes
> There's a fundamental problem with that type of thing. Anything where you aren't planning on returning to Earth, but where your trip does have adverse consequences for those who remain (like radioactive exhaust during launch), doesn't shield you from the people who can predict those consequences and prevent you from launching even once.
>

Misunderstanding.

Let us imagine that you make use of a Project Orion spaceship to take out of the earth's gravity well a space solar power plant which "breaks even" and is then capable of supplying the energy required for its own maintenance and growth. Or a mirror aimed at limiting a (hypothetically real, I am not discussing the issue here) runaway global warming by deflecting some of the sun's irradiation. Or what is necessary to create a permanent base where building stuff and fuel is much cheaper.

You need not imagine that you would go on launching Project Orion ships every week for all eternity. They might well simply be a reasonable, exceptional option in terms of risk-performance to break a few vicious circles.

Having said that, the environmental consequences of a few launches might well be grossly exaggerated, in particular in comparison with other environmentally challenging techs in widespread use in spite of the very real damages suffered by many people as a consequence thereof.

--
Stefano Vaj

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From avantguardian2020 at yahoo.com Sat Nov 6 23:27:41 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 6 Nov 2010 16:27:41 -0700 (PDT) Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: <964866.1581.qm@web65602.mail.ac4.yahoo.com>

>> It could work, in theory, especially if you kept it outside the atmosphere to avoid radiation concerns
>
> Or, you could try to limit the radioactive pollution somewhat and accept the rest, especially for "once-for-all" projects... ;-)

Another criticism of the Project Orion spaceships is the electromagnetic pulse (EMP) that would be generated with each "boost". At high enough altitudes, the EMP could black out a whole hemisphere. While the ship itself could be hardened, amounting to putting Faraday cages around all the electronics, most earthbound systems would still be vulnerable. Just thought I would throw that in.

Stuart LaForge

"To be normal is the ideal aim of the unsuccessful." -Carl Jung

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefano.vaj at gmail.com Sun Nov 7 20:24:01 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:24:01 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: <964866.1581.qm@web65602.mail.ac4.yahoo.com> References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> <964866.1581.qm@web65602.mail.ac4.yahoo.com> Message-ID:

2010/11/7 The Avantguardian
> Another criticism of the Project Orion spaceships is the electromagnetic pulse (EMP) that would be generated with each "boost".
>

Interesting. But wasn't the Internet developed to deal with exactly that, widespread fusion explosions, albeit on a much larger scale than a single Project Orion launch?

--
Stefano Vaj

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From atymes at gmail.com Sun Nov 7 20:29:06 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 7 Nov 2010 12:29:06 -0800 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID:

2010/11/6 Stefano Vaj
> 2010/11/6 Adrian Tymes
>> There's a fundamental problem with that type of thing. Anything where you aren't planning on returning to Earth, but where your trip does have adverse consequences for those who remain (like radioactive exhaust during launch), doesn't shield you from the people who can predict those consequences and prevent you from launching even once.
>
> Misunderstanding.
>
> Let us imagine that you make use of a Project Orion spaceship to take out of the earth's gravity well a space solar power plant which "breaks even" and is then capable of supplying the energy required for its own maintenance and growth. Or a mirror aimed at limiting a (hypothetically real, I am not discussing the issue here) runaway global warming by deflecting some of the sun's irradiation. Or what is necessary to create a permanent base where building stuff and fuel is much cheaper.
>
> You need not imagine that you would go on launching Project Orion ships every week for all eternity. They might well simply be a reasonable, exceptional option in terms of risk-performance to break a few vicious circles.
>

Ah. Yes, that is less of a problem, but still a problem.

Fundamentally: if it's allowed once, for anyone, it'll be allowed indefinite times. There is ample reason to believe that there won't be any worldwide limits on the number of launches.
(For one, if only 5 launches per year would be safe, who decides who will get to do those 5 - and what happens when someone launches a sixth?)

People may oppose it on those grounds - but that may be surmountable, especially if no one else will have the ability to do this before you plan to have no further need of it.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefano.vaj at gmail.com Sun Nov 7 20:33:53 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:33:53 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID:

2010/11/7 Adrian Tymes
> Fundamentally: if it's allowed once, for anyone, it'll be allowed indefinite times. There is ample reason to believe that there won't be any worldwide limits on the number of launches. (For one, if only 5 launches per year would be safe, who decides who will get to do those 5 - and what happens when someone launches a sixth?)
>
> People may oppose it on those grounds - but that may be surmountable, especially if no one else will have the ability to do this before you plan to have no further need of it.
>

Sure. But they could be, and are, opposing oil burning on the same grounds.

--
Stefano Vaj

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefano.vaj at gmail.com Sun Nov 7 20:15:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:15:50 +0100 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID:

On 3 November 2010 07:04, Stathis Papaioannou wrote:
> 2010/11/3 Stefano Vaj :
>> 2010/10/31 John Clark
>>> Actually it's quite difficult to come up with a scenario where the copy DOES instantly know he is the copy.
>>
>> Mmhhh. Nobody ever feels themselves to be a copy. What you could become aware of is that somebody forked in the past (as in "a copy left behind"). That he is the "original" is a matter of perspective...
>
> Think about what you would say and do if provided with evidence that you are actually a copy, replaced while the original you was sleeping some time last week.
>

My point is that no possible evidence would make you a "copy". The "original" would in any event be, from your perspective, simply a fork left behind.

--
Stefano Vaj

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From agrimes at speakeasy.net Sun Nov 7 21:58:52 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 07 Nov 2010 16:58:52 -0500 Subject: [ExI] I love the world. =) Message-ID: <4CD7211C.8060304@speakeasy.net>

I've been watching waaay too much Dr. Who. (There's Tom Baker, David Tennant and then everyone else who pretended to be a Doctor.
;)

Then as I went back to my kitchen to pig out on yet more cookies, I took a peek out my window through the blinds, only to be shocked by a truly dazzling sunset. The world is such a place of amazing majesty, I wouldn't dare change a thing about it.

For me, transhumanism is mostly about fixing this horrible mortality bug in the human body; everything else I wouldn't have any other way.

Why do other transhumanists suffer the fools who talk about reducing it all to computronium even for an instant?

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From spike66 at att.net Sun Nov 7 22:57:43 2010 From: spike66 at att.net (spike) Date: Sun, 7 Nov 2010 14:57:43 -0800 Subject: [ExI] I love the world. =) In-Reply-To: <4CD7211C.8060304@speakeasy.net> References: <4CD7211C.8060304@speakeasy.net> Message-ID: <001801cb7ecf$3023be50$906b3af0$@att.net>

>... On Behalf Of Alan Grimes Subject: [ExI] I love the world. =)

Me too! {8-]

>... The world is such a place of amazing majesty, I wouldn't dare change a thing about it...

I would. I would fix it to where mosquitos bite only each other.

>...Why do other transhumanists suffer the fools who talk about reducing it all to computronium even for an instant?

I don't think the computronium would reduce it all to computronium for only an instant. Once it reduces it all to computronium, it likely would stay that way indefinitely.

If you meant the transhumanists reducing it all to computronium, the common notion is that they (and everyone else) have little or no say in the matter. The computronium does whatever it wants. The problem is that we don't know what it wants. We don't even know if the computronium cares what we want.

spike

From msd001 at gmail.com Mon Nov 8 00:20:11 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 7 Nov 2010 19:20:11 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <001801cb7ecf$3023be50$906b3af0$@att.net> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> Message-ID:

On Sun, Nov 7, 2010 at 5:57 PM, spike wrote:
> If you meant the transhumanists reducing it all to computronium, the common notion is that they (and everyone else) have little or no say in the matter. The computronium does whatever it wants. The problem is that we don't know what it wants. We don't even know if the computronium cares what we want.

1) Computronium isn't even a real thing. We might as well be discussing the trouble with Tribbles (and the humane ways in which we can protect ourselves from them without resorting to genocide).

2) The concept of computronium is maximal computing density of matter. I was under the impression that this magical substance would be employed to do useful work: computing. This should be anthropomorphized no more than the CPU in your current computer "wants" for anything. There are plenty of monsters utilizing currently available computing technology. These monsters can already kill us according to their programming (human-designed programming). Computronium wouldn't make these monsters kill us any more severely than they already can.

3) We will continue to advance according to our own programming. Mostly that frightened monkey programming that kept us from being eaten by primordial predators will make us just as likely to hit the computronium monsters with a proverbial rock or (as recently discussed) a burning branch. Once the threat becomes possible, expect to see right next to the firehose something like "in case of hard takeoff, break glass to employ EMP."
In a not-quite-worst-case scenario we are forced to Nuke the Internet and revert back to Amish-level technologies. Not a pretty situation, but humanity would adapt.

4) As far as you or I having any say in the matter, how is that different from any public policy currently "offered" by the government under which you/we are currently living? Yeah right, you could move somewhere more agreeable to your views - if only you had the means to up and leave (and the fortitude to start a new life elsewhere).

From avantguardian2020 at yahoo.com Mon Nov 8 00:40:58 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 7 Nov 2010 16:40:58 -0800 (PST) Subject: [ExI] The Codescape In-Reply-To: References: Message-ID: <570302.42318.qm@web65601.mail.ac4.yahoo.com>

I liked your post on the codescape, Emlyn. The interesting thing from my perspective is how much it has changed in my lifetime. When I was a kid, knowing even a single programming language made you (un)cool. These days you need to know almost half a dozen to put together a decent website. And if you want to be a serious codejockey, you need to know about a dozen. That's quite a bit different from the way meatspace works, where most people know one or two languages and get by just fine. IMO what the codescape needs is a "lingua franca".

Stuart LaForge

"To be normal is the ideal aim of the unsuccessful." -Carl Jung
From emlynoregan at gmail.com Mon Nov 8 01:25:00 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 8 Nov 2010 11:55:00 +1030 Subject: [ExI] The Codescape In-Reply-To: <570302.42318.qm@web65601.mail.ac4.yahoo.com> References: <570302.42318.qm@web65601.mail.ac4.yahoo.com> Message-ID:

On 8 November 2010 11:10, The Avantguardian wrote:
> I liked your post on the codescape, Emlyn.

Thanks Stuart!

> The interesting thing from my perspective is how much it has changed in my lifetime. When I was a kid, knowing even a single programming language made you (un)cool. These days you need to know almost half a dozen to put together a decent website. And if you want to be a serious codejockey, you need to know about a dozen.

Absolutely. I've said for a while now that it's much more difficult to be a coder now than it used to be, because there is no certainty. You can't really know your environment in the way you used to be able to; you have to trust often quite opaque layers from elsewhere. You have to turn over knowledge and paradigms constantly (actually at an increasing rate). You have to be comfortable with stringing together lots of shallow knowledge, and also with going deep in what I think of as the shallow-deep way: go in fast, learn the details, really understand temporarily, do what needs doing really well in an encapsulated way (so that what has been made can be used with a lot less understanding), then break back out, and do the next thing, forgetting the depth you had acquired. You'll probably never need that detailed knowledge again, and if you do you can acquire it again. Even understanding can be looked at through the lens of access rather than ownership.

> That's quite a bit different from the way meatspace works, where most people know one or two languages and get by just fine. IMO what the codescape needs is a "lingua franca".

Well, you can get along with just a language or two for a while, if you pick the right one(s). But really, to stay in it long term is to commit to changing your knowledge over frequently. There's an underlying unity to at least large families of languages, and of course you look for that to help you move. I used to try to find the similarities, to help move from language to language, which is good, but it means you always have a dreadful accent, and lots of impedance. Now I try to find the differences, the things that make each language unique, to try to become as native as possible as quickly as possible.

As to a lingua franca, the Codescape is on top of that; it's got heaps of them!
It isn't because we >> don't want to, but because largely we cannot. We who travel freely >> between worlds often can't express it, because it is a place of system >> and not of narrative. >> >> During periods of hype (mostly about the internet), a lot of bad >> novels and terrible movies get written about it (while missing it >> entirely), with gee-whiz 3D graphics and faux h4XX0r jargon. Sometimes >> some of us are even fooled by this, and so we pay unfortunate >> obeisance to notions like "virtual reality" and "cyberspace", and >> construct things like 3D corporate meeting places, or Second Life, or >> World of Warcraft. Those are bona fide places, good for the illiterate, >> and a pleasant place to unwind for people of the code. They even >> contain little pockets of bona fide codescape inside themselves -- >> proper, first-class codescape, because all of the codescape is as real >> as the rest. But there is something garish, gauche about these 3D >> worlds, like the shopping mall inside an airport, divorced from the >> country in which it physically exists. >> >> The main codescape now, as it exists in 2010, is like the mother of >> all MMOs. Many, many of us, those who can walk it (how many? hundreds >> of thousands?) play together in the untamed, expanding chaos of a >> world tied together by software and networks. Each of us plays for our >> own reasons; some for profit, some for potential power, some for >> attention, and many of us, increasingly, for individual autonomy and >> personal expression. >> >> It's a weird place. It's never really been cool (although it's come >> close at times), because the kinds of people who decide on what's cool >> can't even see it. These days the cool kids (like Wired, or Make >> Magazine, or BoingBoing) like open hardware, or physical making. But >> everything interesting is being enabled by software, more and more and >> more software, and so becomes at heart a projection out of the >> Codescape. >> >> Douglas Rushkoff's recent book, "Program or be Programmed", talks >> about how we are now living in this world where what I call the >> Codescape is shaping the lives of everyone, and where we are divided >> into the code-literate and not. His book is mostly dreary complaining >> that it's all too hard and the 'net should be more like it was in the >> 90s (joining an increasing chorus of 90s technorati who are finding >> themselves unable to keep up), but that first sentiment is absolutely >> spot on. If you can code, then, if you so choose, you can feel your >> way through codespace, explore the shifting landscape, and maybe carve >> out part of it in the shape of your own imaginings. Otherwise, you get >> internet-as-shiny-shopping-mall, a landscape of opaque gadgets, >> endless ads, monthly fees, and the faint suspicion that you are being >> constantly conned by Fagin-esque gangs. >> >> I contend that if you care about personal autonomy, about freedom, in >> the 21st century, then you really should try to be part of this world. >> Perhaps for the first time, the potential for individuals is rivalling >> that of corporate entities. There is cheap and free server time on >> offer, high level environments into which you can project your >> codebase. The protocols are open, the documentation (sometimes just >> code itself) is free and freely available. Even the very best >> programming tools are free.
If you can acquire the skills and the >> motivation, you can walk the Codescape with nothing more than an >> internet connection, a $100 Chinese netbook, and your own wits. There >> is no barrier to entry, other than your ability to twist your mind >> into the shape that the proper incantations demand. >> >> Everything has a programmable API, which you can access and play with >> and create with if you are prepared to make the effort. At your >> fingertips are the knowledge and information resources of the world, >> plus the social interactions of 2 billion humans and counting, plus a >> growing resource of inputs and outputs in the physical world with >> which you can see and act. >> >> It's a new frontier, expanding faster than we can explore and settle >> it. It's going to be unrecognisable in 2020, and again in 2030, and >> who knows what after that. But the milestones are boring. The fun is >> in living it. The first challenge is just to try. >> >> -- >> Emlyn >> >> http://my.syyn.cc - A service for syncing buzz and facebook, posts, >> comments and all. >> http://www.blahblahbleh.com - A simple youtube radio that I built >> http://point7.wordpress.com - My blog >> Find me on Facebook and Buzz >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Emlyn http://my.syyn.cc - A service for syncing buzz and facebook, posts, comments and all. http://www.blahblahbleh.com - A simple youtube radio that I built http://point7.wordpress.com - My blog Find me on Facebook and Buzz From thespike at satx.rr.com Sun Nov 7 22:42:50 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 07 Nov 2010 16:42:50 -0600 Subject: [ExI] I love the world. =) In-Reply-To: <4CD7211C.8060304@speakeasy.net> References: <4CD7211C.8060304@speakeasy.net> Message-ID: <4CD72B6A.80701@satx.rr.com> On 11/7/2010 3:58 PM, Alan Grimes wrote: > I took > a peek out my window through the blinds only to be shocked by a truly > dazzling sunset. The world is such a place of amazing majesty, I > wouldn't dare change a thing about it. > > For me, transhumanism is mostly about fixing this horrible mortality bug > in the human body, everything else I wouldn't have any other way. > > Why do other transhumanists suffer the fools who talk about reducing it > all to computronium even for an instant? Rudy Rucker argues that case in detail in my anthology YEAR MILLION and in various of his own novels. And nobody can accuse Rudy of lacking imagination or boldness--he was there before just about anyone else. Damien Broderick From giulio at gmail.com Mon Nov 8 07:25:26 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Nov 2010 08:25:26 +0100 Subject: [ExI] Turing Church Online Workshop 1, Teleplace, Saturday November 20, 9am-1pm PST Message-ID: Turing Church Online Workshop 1, Teleplace, Saturday November 20, 9am-1pm PST http://giulioprisco.blogspot.com/2010/11/turing-church-online-workshop-1.html http://telexlr8.wordpress.com/2010/11/07/turing-church-online-workshop-1-teleplace-saturday-november-20-9am-1pm-pst/ Turing Church Online Workshop 1, in Teleplace, Saturday November 20, 9am-1pm PST (noon-4pm EST, 5pm-9pm UK, 6pm-10pm EU).
The workshop will explore transhumanist spirituality and "Religion 2.0" and it will be a coordination-oriented summit of groups and organizations active in this area. Format: Online-only workshop in Teleplace. Those who already have Teleplace accounts for teleXLR8 can just show up at the workshop. There are a limited number of seats available for others; please contact me if you wish to attend. Panelists: - Lincoln Cannon (Mormon Transhumanist Association) - Ben Goertzel (Cosmist Manifesto) - Mike Perry (Society for Universal Immortalism) - Giulio Prisco (Turing Church) - Martine Rothblatt (Terasem) Agenda: - Talks by the panelists in the first 2 hours. - Discussion between the panelists in the last 2 hours, with the participation of the audience. Objectives: - To discover parallels and similarities between different organizations and to agree on common interests, agendas, strategies, outreach plans etc. - To discuss whether it makes sense to establish an umbrella organization, or to consider one of the existing organizations as such. - To develop the idea of scientific resurrection: our descendants and mind children will develop "magic science and technology" in the sense of Clarke's third law, and may be able to do grand spacetime engineering and even resurrect the dead by "copying them to the future". Of course this is a hope and not a certainty, but I am persuaded that this concept is scientifically founded and could become the "missing link" between transhumanists and religious and spiritual communities. - And of course, how to make our beautiful ideas available, understandable and appealing to billions of seekers. My own presentation will be a revised and expanded version of my talk on The Cosmic Visions of the Turing Church at the Transhumanism and Spirituality Conference 2010. The main point can be summarized in one sentence (Slide 4): "A memetically strong religion needs to offer resurrection besides immortality." From scerir at alice.it Mon Nov 8 11:21:48 2010 From: scerir at alice.it (scerir) Date: Mon, 8 Nov 2010 12:21:48 +0100 Subject: [ExI] Seth Lloyd on birds, plants, ... In-Reply-To: <4CD72B6A.80701@satx.rr.com> References: <4CD7211C.8060304@speakeasy.net> <4CD72B6A.80701@satx.rr.com> Message-ID: <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> Seth Lloyd on quantum 'weirdness' used by plants, animals, etc. http://www.cbc.ca/technology/story/2010/11/03/quantum-physics-biology-living-things.html Supposedly, the video of this lecture will appear on the Perimeter Institute website, or at pirsa.org. From charlie.stross at gmail.com Mon Nov 8 11:17:19 2010 From: charlie.stross at gmail.com (Charlie Stross) Date: Mon, 8 Nov 2010 11:17:19 +0000 Subject: [ExI] I love the world. =) In-Reply-To: References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> Message-ID: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > 3) we will continue to advance according to our own programming. > Mostly that frightened monkey programming that kept us from being > eaten by primordial predators will make us just as likely to hit the > computronium monsters with a proverbial rock or (as recently > discussed) a burning branch. Once the threat becomes possible, expect > to see right next to the firehose something like "in case of hard > takeoff, break glass to employ EMP." In a not-quite-worst-case > scenario we are forced to Nuke the Internet and revert back to > Amish-level technologies.
Not a pretty situation, but humanity would > adapt. Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) -- Charlie From pharos at gmail.com Mon Nov 8 11:55:13 2010 From: pharos at gmail.com (BillK) Date: Mon, 8 Nov 2010 11:55:13 +0000 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: On Mon, Nov 8, 2010 at 11:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, > personally, are probably going to die. Even today, all our supply chains have > adapted to just-in-time production and shipping, relying on networked > communications to ensure that stuff gets where it's needed; we can't revert > to doing things the old way -- the equipment has long since been scrapped -- > and we'd rapidly starve. Your average big box supermarket only holds about > 24-48 hours worth of provisions, and their logistics infrastructure is highly > tuned for efficiency. Now add in gas stations, railroad signalling, electricity > grid control ... If we have to Nuke The Net Or Die, it'll mean the difference > between a 100% die-back and a 90% die-back. > > Meanwhile, the Mormons, with their requirement to keep a year of canned > goods in the cellar, will be laughing. (Well, praying.) > > It's bad enough even with your 'highly-tuned' supply system. That's only for popular items. If something breaks nowadays, you just can't get spares. You have to buy a new one. For large items, if you need an unusual spare part for a Fiat car, chances are you will wait a month while they ship it in from Italy. BillK From giulio at gmail.com Mon Nov 8 12:01:22 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Nov 2010 13:01:22 +0100 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: Not only the Mormons, but also rural communities able to produce enough basic goods for their own bare survival. It is us city people who would be totally screwed. I would not know how to survive after Nuke the Internet, but my grandfather would. However if computronium superAIs, if and when such a thing will exist, decide to take over, there is not much that we can do, we would not even see it coming until it is here already. Perhaps they will upload us to a virtual Farmville as real as reality, with our memories edited to continue to live under the illusion that we have escaped. G. 
On Mon, Nov 8, 2010 at 12:17 PM, Charlie Stross wrote: > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > >> 3) we will continue to advance according to our own programming. >> Mostly that frightened monkey programming that kept us from being >> eaten by primordial predators will make us just as likely to hit the >> computronium monsters with a proverbial rock or (as recently >> discussed) a burning branch. Once the threat becomes possible, expect >> to see right next to the firehose something like "in case of hard >> takeoff, break glass to employ EMP." In a not-quite-worst-case >> scenario we are forced to Nuke the Internet and revert back to >> Amish-level technologies. Not a pretty situation, but humanity would >> adapt. > > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. > > Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) > > > -- Charlie > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From msd001 at gmail.com Mon Nov 8 14:36:39 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 8 Nov 2010 09:36:39 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: On Mon, Nov 8, 2010 at 6:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. Of course. But the usual scenario about AI destroying humanity (with or without computronium) puts me in a mindset that some humans remaining, no matter how distant from my own person/family/tribe/ethnicity/etc. is still better than none at all. I'm willing to expand the definition of humanity to include uploaded-state behavior patterns/identities too though - so maybe the human Farmville is also better than nonexistence. From pharos at gmail.com Mon Nov 8 17:25:04 2010 From: pharos at gmail.com (BillK) Date: Mon, 8 Nov 2010 17:25:04 +0000 Subject: [ExI] War ----- It's a meme!
Message-ID: John Horgan has an article in Scientific American about why tribes go to war that might be of interest. I know that Keith has suggested that war is caused either by hard times or an expectation of hard times, but I feel this is a weak theory as it seems to cover all cases and therefore is untestable. Horgan thinks that war is learned behaviour. Some Quotes: Analyses of more than 300 societies in the Human Relations Area Files, an ethnographic database at Yale University, have turned up no clear-cut correlations between warfare and chronic resource scarcity. Similarly, the anthropologist Lawrence Keeley notes in War before Civilization: The Myth of the Peaceful Savage (Oxford University Press, 1997) that the correlation between population pressure and warfare "is either very complex or very weak or both." Margaret Mead dismissed the notion that war is the inevitable consequence of our "basic, competitive, aggressive, warring human nature." This theory is contradicted, she noted, by the simple fact that not all societies wage war. War has never been observed among a Himalayan people called the Lepchas or among the Eskimos. In fact, neither of these groups, when questioned by early ethnographers, was even aware of the concept of war. Warfare is "an invention," Mead concluded, like cooking, marriage, writing, burial of the dead or trial by jury. Once a society becomes exposed to the "idea" of war, it "will sometimes go to war" under certain circumstances. Some people, Mead stated, such as the Pueblo Indians, fight reluctantly to defend themselves against aggressors; others, such as the Plains Indians, sally forth with enthusiasm, because they have elevated martial skills to the highest of manly virtues. ------------------ BillK From spike66 at att.net Mon Nov 8 17:27:16 2010 From: spike66 at att.net (spike) Date: Mon, 8 Nov 2010 09:27:16 -0800 Subject: [ExI] 25th anniversary of engines of creation Message-ID: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> On Mon, Nov 8, 2010 at 6:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die... Hi Charlie, good to see you posting here again. Isn't it amazing that we are coming up on the 25th anniversary of Drexler's Engines of Creation? For many of us, that was the book that launched a thousand memeships. Charlie is one who posted back a long time ago when we used to debate something that now seems settled: which comes first, strong AI or strong nanotech (replicating assembler)? The argument at the time (early to mid 90s) was that AI enables nanotech (by providing the designs), but nanotech enables AI (by providing super-capable computers.) Is there anyone here for whom that argument is not completely settled? Do explain please. spike From darren.greer3 at gmail.com Mon Nov 8 17:42:46 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 8 Nov 2010 13:42:46 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: "War has never been observed among a Himalayan people called the Lepchas or among the Eskimos. In fact, neither of these groups, when questioned by early ethnographers, was even aware of the concept of war." Martin Van Creveld has a theory about this in his *Decline of the Nation States*. 
He calls Inuit society (Eskimo is very culturally offensive, by the way) a modality, the kind of tribe that only goes to war when a number of tribes join together in warfare with a temporary leader united under a single banner but still maintaining tribal autonomy. He cites the war against Troy in *The Iliad* by the tribes under the temporary leadership of Agamemnon and Menelaus as a good example of this. (Recall Achilles and the Myrmidons.) Opportunities for warfare under these circumstances are exceedingly rare, and usually involve a cultural taboo being violated. The Inuit have a unique societal structure and are likely the exception rather than the rule. I can't speak for the Lepchas, but I would imagine it would be something similar. Darren On Mon, Nov 8, 2010 at 1:25 PM, BillK wrote: > John Horgan has an article in Scientific American about why tribes go > to war that might be of interest. I know that Keith has suggested that > war is caused either by hard times or an expectation of hard times, > but I feel this is a weak theory as it seems to cover all cases and > therefore is untestable. Horgan thinks that war is learned behaviour. > > Some Quotes: > > Analyses of more than 300 societies in the Human Relations Area Files, > an ethnographic database at Yale University, have turned up no > clear-cut correlations between warfare and chronic resource scarcity. > Similarly, the anthropologist Lawrence Keeley notes in War before > Civilization: The Myth of the Peaceful Savage (Oxford University > Press, 1997) that the correlation between population pressure and > warfare "is either very complex or very weak or both." > > Margaret Mead dismissed the notion that war is the inevitable > consequence of our "basic, competitive, aggressive, warring human > nature." This theory is contradicted, she noted, by the simple fact > that not all societies wage war. War has never been observed among a > Himalayan people called the Lepchas or among the Eskimos. In fact, > neither of these groups, when questioned by early ethnographers, was > even aware of the concept of war. > > Warfare is "an invention," Mead concluded, like cooking, marriage, > writing, burial of the dead or trial by jury. Once a society becomes > exposed to the "idea" of war, it "will sometimes go to war" under > certain circumstances. Some people, Mead stated, such as the Pueblo > Indians, fight reluctantly to defend themselves against aggressors; > others, such as the Plains Indians, sally forth with enthusiasm, > because they have elevated martial skills to the highest of manly > virtues. > > ------------------ > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 8 18:00:32 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Nov 2010 13:00:32 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <4CD7211C.8060304@speakeasy.net> References: <4CD7211C.8060304@speakeasy.net> Message-ID: <47B672AC-F800-44DA-9EC4-BF0BF1ECC2DF@bellsouth.net> On Nov 7, 2010, at 4:58 PM, Alan Grimes wrote: > The world is such a place of amazing majesty, It could be better. > I wouldn't dare change a thing about it.
You are suffering from either a lack of courage or a lack of imagination. > I wouldn't have any other way. A world without cancer would be another way, and I believe I'd prefer that. > Why do other transhumanists suffer the fools who talk about reducing it > all to computronium even for an instant? Do you have any reason to be certain that hasn't already happened? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 8 18:35:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Nov 2010 13:35:12 -0500 Subject: [ExI] 25th anniversary of engines of creation. In-Reply-To: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> References: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> Message-ID: <46A9FE76-ED94-4851-BBB2-9839121037BE@bellsouth.net> On Nov 8, 2010, at 12:27 PM, spike wrote: > The argument at the time (early to mid 90s) was that AI enables nanotech (by providing the designs), > but nanotech enables AI (by providing super-capable computers.) > Is there anyone here for whom that argument is not completely settled? Me. I don't know which will come first but I do know there won't be much time between the two events. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 8 18:41:43 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Nov 2010 13:41:43 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: On Nov 7, 2010, at 3:15 PM, Stefano Vaj wrote: > My point is that no possible evidence would make you a "copy". The "original" would in any event from your perspective simply a fork behind. I see no reason to assume "you" are the original, and even more important I see no reason to care if "you" are the original. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Nov 8 19:07:10 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 08 Nov 2010 14:07:10 -0500 Subject: [ExI] 25th anniversary of engines of creation In-Reply-To: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> References: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> Message-ID: <4CD84A5E.207@lightlink.com> spike wrote: > Isn't it amazing that we are coming up on the 25th anniversary of Drexler's > Engines of Creation? For many of us, that was the book that launched a > thousand memeships. Charlie is one who posted back a long time ago when we > used to debate something that now seems settled: which comes first, strong > AI or strong nanotech (replicating assembler)? The argument at the time > (early to mid 90s) was that AI enables nanotech (by providing the designs), > but nanotech enables AI (by providing super-capable computers.) > > Is there anyone here for whom that argument is not completely settled? 
Do > explain please. I'm a little confused about which way you are implying that it was settled? Strong AI will, of course, come first, because: (a) We already have the computing power to do it (all that is lacking is the understanding of how to use that computing power), and (b) Without strong AI, designing safe nanotech is going to be very difficult indeed. Richard Loosemore From thespike at satx.rr.com Mon Nov 8 20:16:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Nov 2010 14:16:40 -0600 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: <4CD85AA8.5080402@satx.rr.com> On 11/8/2010 12:41 PM, John Clark wrote: > I see no reason to assume "you" are the original, and even more > important I see no reason to care if "you" are the original. The endless perspective or Point-of-View confusion. Of course a copy experiences himself as the original (that's what an exact copying process *means*). Of course the rest of the world experiences him as equally you. There are two major problems seldom addressed in this complacent view: 1) The jurisprudential--who owns the original's possessions? Where provenance of the original can be established, it seems pretty likely that the law will find for the original, in the absence of an advance agreement to split the loot. 2) If copying requires destruction of the original, is it psychologically likely that he will go to his death happy in the knowledge that his exact subsequent copy will continue elsewhere? Many here say, "Hell, yes, it's only evolved biases and cognitive errors that could support any other opinion!" Others say, "Maybe so, but you're not getting me into that damned gas chamber." So if the world becomes filled with people happy to be killed and copied, of course it's likely that after a few hundred iterations identity will be construed this way by almost everyone. If the USA becomes filled with the antiabortion offspring of the duped who believe evolution is a godless hoax and humans never walked on the moon, those opinions will also be validated. So what? Damien Broderick From agrimes at speakeasy.net Mon Nov 8 22:17:01 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 08 Nov 2010 17:17:01 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <47B672AC-F800-44DA-9EC4-BF0BF1ECC2DF@bellsouth.net> References: <4CD7211C.8060304@speakeasy.net> <47B672AC-F800-44DA-9EC4-BF0BF1ECC2DF@bellsouth.net> Message-ID: <4CD876DD.4020002@speakeasy.net> John Clark wrote: > On Nov 7, 2010, at 4:58 PM, Alan Grimes wrote: >> The world is such a place of amazing majesty, > It could be better. >> I wouldn't dare change a thing about it. > You are suffering from either a lack of courage or a lack of imagination. And you are lacking eyesight. =P >> I wouldn't have any other way. > A world without cancer would be another way, and I believe I'd prefer that. Come on, read my first posting again!
I explicitly said that transhumanism was about fixing the bugs in the human body, specifically death, but implicitly all other things one might want to customize for either good or even bad reasons. =P >> Why do other transhumanists suffer the fools who talk about reducing it >> all to computronium even for an instant? > Do you have any reason to be certain that hasn't already happened? Byte me. =| Nick Bostrom is a sophist and so is everyone else who agrees with him. You are getting into a Descartes versus Occam argument here. If you side with Descartes you must first claim that you are the happy victim of an unspeakably evil monster. If you side with Occam you get to sit in your easy chair with a smirk on your face and quietly say "prove it" every once in a while. The null hypothesis in this case is that there is nothing artificial about the reality in which we live. Artificial structures are extremely easy to detect wherever they exist on earth, therefore show me an artifact of the simulation you are proposing that proves it exists. Should you manage to prove that it exists, I'll immediately drop everything and start working on the problem of "outloading" myself to whatever is out there. With that done, I'll amuse myself by making silly, arbitrary, and obnoxious changes to this universe with the aim of inspiring my peers to follow me to the exit. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From pharos at gmail.com Tue Nov 9 13:47:15 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Nov 2010 13:47:15 +0000 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/8 Darren Greer wrote: > (Eskimo is very culturally offensive, by the way) That's too simplistic. Let's have a good nit-pick! :) Eskimo isn't offensive in the UK or US or even in Alaska. It is a general term for all the native people in the Arctic region. From Wikipedia: In Alaska, the term Eskimo is commonly used, because it applies to both Yupik and Inupiat peoples. Inuit is not accepted as a collective term or even specifically used for Inupiat (which technically is Inuit). No universal replacement term for Eskimo, inclusive of all Inuit and Yupik people, is accepted across the geographical area inhabited by the Inuit and Yupik peoples. --------------------------- From alaskanative.net: Alaska's Native people are divided into eleven distinct cultures, speaking twenty different languages. In order to tell the stories of this diverse population, the Alaska Native Heritage Center is organized based on five cultural groupings, which draw upon cultural similarities or geographic proximity: * Athabascan * Yup'ik & Cup'ik * Inupiaq & St. Lawrence Island Yupik * Unangax & Alutiiq (Sugpiaq) * Eyak, Tlingit, Haida & Tsimshian ----------------- Some of the indigenous races would be equally offended to be called Inuit. So, to be strictly correct, you have to find out which culture the person you are speaking to is a member of and use that name. BillK From lists1 at evil-genius.com Tue Nov 9 12:13:03 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 04:13:03 -0800 Subject: [ExI] A side note on Inuit/Eskimo (was War ---- It's a meme!) In-Reply-To: References: Message-ID: <4CD93ACF.5010108@evil-genius.com> > From: Darren Greer > He calls Inuit society (Eskimo is very culturally offensive, by the way) So is Inuit -- if you happen to be Yupik (the other major far northern tribal group commonly lumped under 'Eskimo').
Unfortunately, there is no agreed-upon replacement. (An aside: my last three posts to this list have never posted, nor have they been rejected by a moderator. If anyone can tell me what's going on, I'd appreciate it, because it's frustrating to write out a long, thoughtful reply and never see it.) From jonkc at bellsouth.net Tue Nov 9 15:41:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 9 Nov 2010 10:41:49 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CD85AA8.5080402@satx.rr.com> References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: <63E678CC-AA5E-46E3-BF42-B31B9DBB0101@bellsouth.net> On Nov 8, 2010, at 3:16 PM, Damien Broderick wrote: > Of course a copy experiences himself as the original (that's what an exact copying process *means*). Of course the rest of the world experiences him as equally you. Then that pretty much ends the matter as far as I'm concerned, but for some reason never clearly explained, not for you. > There are two major problems seldom addressed in this complacent view: > 1) The jurisprudential--who owns the original's possessions? I don't know what you mean by "seldom addressed", that's the first thing anti-uploaders say, after "it just wouldn't be me!" of course. The answer is that the ownership of the possessions will be determined by whoever makes the law at the time, and that is irrelevant to the question at hand. I said it before and I'll say it again: you're talking about the law, I'm talking about logic, and the two things have absolutely nothing to do with one another. > 2) If copying requires destruction of the original [...] Stop right there! Exactly what is being destroyed? The atoms are not destroyed, not that that's important as they are very far from unique, and the information on how those atoms are arranged is not unique either as that's been duplicated in the uploading process. So that naturally brings up another question: what is so original about The Original? There is only one possible answer to that, but as I've said before I don't believe in the soul. > is it psychologically likely that he will go to his death happy in the knowledge that his exact subsequent copy will continue elsewhere? You are arguing that my ideas must be wrong because some people might fear them for unclear reasons; I don't think that follows. Primitive people are terrified to have their picture taken because they think it will rob them of their essence; some people who like to think of themselves as sophisticated refuse to live or work on the thirteenth floor of a building unless it is renamed "the fourteenth floor"; so what? The only thing more illogical than the law is psychology. > So if the world becomes filled with people happy to be killed and copied, of course it's likely that after a few hundred iterations identity will be construed this way by almost everyone. Yes, so right or wrong your views have no future, mine do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From darren.greer3 at gmail.com Tue Nov 9 17:49:28 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 13:49:28 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>So, to be strictly correct, you have to find out which culture the person you are speaking to is a member of and use that name.<< Yup, that seems like it might be true. At least it was when I worked at the Pauktuutit Inuit Women's centre. We used Inuit except in cases where those being referred to believed it didn't apply, such as the Innui in Quebec and the Dene in Saskatchewan. From Wikipedia: In Alaska, the term Eskimo is commonly used, because it applies to both Yupik and Inupiat peoples. Inuit is not accepted as a collective term or even specifically used for Inupiat (which technically is Inuit). No universal replacement term for Eskimo, inclusive of all Inuit and Yupik people, is accepted across the geographical area inhabited by the Inuit and Yupik peoples.<< It's not so much the generalized term that people are using, but what that generalized term means and where it comes from. Inuit means "our people" from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, I can see that the objections to the first might be greater than the second. >>* Athabascan * Yup'ik & Cup'ik * Inupiaq & St. Lawrence Island Yupik * Unangax & Alutiiq (Sugpiaq) * Eyak, Tlingit, Haida & Tsimshian<< So you wanted nit-picking? :) There are specific tribal names and even tribes-within-tribes and generalized rubric headings. The above is a confusing mixture. Athabaskan and Haida generally consider themselves first nations, and more importantly 'treatied' first nations if they live in Canada. The other tribes may or may not call themselves first nations for a number of political reasons, not the least being that when the colonizers dealt with the northern tribes, there were in fact so few of them in numbers that they found there was more bargaining power in being considered as a single nation rather than a group of very small tribes separated by vast geographical distances. (Some) of the northern tribes you name find the term Inuit inappropriate for political reasons, not cultural ones. The term Inuit was adopted for political reasons, and gained widespread use when some Northern tribes negotiated Nunavut as a separate Canadian Territory. One of the mistakes people make when dealing with Indigenous people in North America (and Russia) is to forget about the political distinctions as well as the language and culture. There was a complex political structure in place when the colonizers first arrived here. So Inuit may be (and is) often objected to on political grounds (such as the Innui and the Dene who have been very successful in negotiating political advantages as isolated tribes by looking at their small numbers and unique culture-within-a-culture as a bargaining chip rather than a liability.) But the term Eskimo is (or almost is) universally culturally offensive, as far as I know. And the biggest nit-pick of all? I stated the term Eskimo was culturally offensive, and bet even the Yupik and (I know the Innui) find it so. It is best when dealing with tribes who don't identify with the "Our People's" designation to ask them what they prefer to be called, instead of assuming "eater of raw flesh" is OK. Darren -------------- next part -------------- An HTML attachment was scrubbed...
URL: From msd001 at gmail.com Tue Nov 9 18:42:41 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 13:42:41 -0500 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > It's not so much the generalized term that people are using, but what that > generalized term means and where it comes from. Inuit means "our people" > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, > I can see that the objections to the first might be greater than the second. Likewise I take offense at being called a Typical American to mean "eater of junk food while watching TV." I think the colloquial "Couch Potato" is more appropriate for that particular meaning. :) > But the term Eskimo is (or almost is) universally culturally offensive, as > far as I know. And the biggest nit-pick of all? I stated the term Eskimo > was culturally offensive, and bet even the Yupik and (I know the Innui) find > it so. It is best when dealing with tribes who don't identify with the "Our > People's" designation to ask them what they prefer to be called, instead of > assuming "eater of raw flesh" is OK. Ultimately I prefer to be called "Mike." If we could remember to call people by their names rather than by labels all these problems could be easily avoided. From pharos at gmail.com Tue Nov 9 18:02:35 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Nov 2010 18:02:35 +0000 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer wrote>: > It's not so much the generalized term that people are using, but what that > generalized term means and where it comes from. Inuit means "our people" > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, > I can see that the objections to the first might be greater than the second. > Thanks for the info. Complicated, isn't it? Does that mean that Spike is really an Eskimo? ;) (He loves sushi). Some sources do give alternative (less-offensive) meanings for Eskimo. BillK From hkeithhenson at gmail.com Tue Nov 9 20:59:54 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 9 Nov 2010 13:59:54 -0700 Subject: [ExI] I love the world. =) Message-ID: On Mon, Nov 8, 2010 at 5:00 AM, Charlie Stross wrote: > > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > >> 3) we will continue to advance according to our own programming. >> Mostly that frightened monkey programming that kept us from being >> eaten by primordial predators will make us just as likely to hit the >> computronium monsters with a proverbial rock or (as recently >> discussed) a burning branch. Once the threat becomes possible, expect >> to see right next to the firehose something like "in case of hard >> takeoff, break glass to employ EMP." In a not-quite-worst-case >> scenario we are forced to Nuke the Internet and revert back to >> Amish-level technologies. Not a pretty situation, but humanity would >> adapt. > > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve.
Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. I understand that just losing GPS will make hash of the banking industry (timestamps). > Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) I thought you were out of date on this since our Mormon neighbors got rid of their year of food back in the late 80s. But it seems like this is still part of Mormon culture, though it may be followed by a minority of them. Could we get through a loss of the net and not lose most of the population? At this point "the net" and phone service is largely the same thing, at least outside a LATA. I think that might be possible today, enough people remember old ways of doing things. Ten years from now? 20? 30? Perhaps, but it gets harder and harder as time goes on and dependency on the net increases. Losing process control computers would be really bad. There are processes that can't be run by hand at all, they are unstable and people are too slow. I wonder if there was any consideration for the possible consequences before this started? Keith From darren.greer3 at gmail.com Tue Nov 9 22:57:34 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 18:57:34 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>Ultimately I prefer to be called "Mike." If we could remember to call people by their names rather than by labels all these problems could be easily avoided.<< Agreed. Except that if you're not the one giving yourself or your cultural or ethnic group the label, that's when it becomes a problem. I expect Americans didn't even start off calling themselves Americans. Likely the British came up with it first. Probably because couch potato was already taken. By Canadians. :) Darren On Tue, Nov 9, 2010 at 2:42 PM, Mike Dougherty wrote: > 2010/11/9 Darren Greer : > > It's not so much the generalized term that people are using, but what > that > > generalized term means and where it comes from. Inuit means "our people" > > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > flesh" > > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking > aside, > > I can see that the objections to the first might be greater than the > second. > > Likewise I take offense at being called a Typical American to mean > "eater of junk food while watching TV." > I think the colloquial "Couch Potato" is more appropriate for that > particular meaning. :) > > > But the term Eskimo is (or almost is) universally culturally offensive, > as > > far as I know. And the biggest nit-pick of all? I stated the term Eskimo > > was culturally offensive, and bet even the Yupik and (I know the Innui) > find > > it so. It is best when dealing with tribes who don't identify with the > "Our > > People's" designation to ask them what they prefer to be called, instead > of > > assuming "eater of raw flesh" is OK.
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Nov 9 23:03:31 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 19:03:31 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>Does that mean that Spike is really an Eskimo? ;) (He loves sushi).<< What little I know of Spike is from Exi. But based on that limited amount of data, I think he probably defies easy categorization. :) Darren On Tue, Nov 9, 2010 at 2:02 PM, BillK wrote: > 2010/11/9 Darren Greer wrote>: > > It's not so much the generalized term that people are using, but what > that > > generalized term means and where it comes from. Inuit means "our people" > > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > flesh" > > from the Cree, who are incontestibly not Eskimo or Inuit. Nit-picking > aside, > > I can see that the objections to the first might be greater than the > second. > > > > > Thanks for the info. Complicated, isn't it? > > Does that mean that Spike is really an Eskimo? ;) > (He loves sushi). > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Tue Nov 9 21:14:11 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 9 Nov 2010 14:14:11 -0700 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: I'm so proud of the list members for showing sensitivity regarding the indigenous peoples of Alaska. I grew up there with a bestfriend who was half Inuit and half White. He caught hell from both sides... John On 11/9/10, BillK wrote: > 2010/11/9 Darren Greer wrote>: >> It's not so much the generalized term that people are using, but what that >> generalized term means and where it comes from. ?Inuit means "our people" >> from a Northern?Indigenous?tribal dialect. Eskimo means "eater of raw >> flesh" >> from the Cree, who are incontestibly not Eskimo or Inuit. Nit-picking >> aside, >> I can see that the objections to the first might be greater than the >> second. >> > > > Thanks for the info. Complicated, isn't it? > > Does that mean that Spike is really an Eskimo? ;) > (He loves sushi). > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From darren.greer3 at gmail.com Tue Nov 9 23:26:04 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 19:26:04 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: > >>I'm so proud of the list members for showing sensitivity regarding the > indigenous peoples of Alaska. 
I grew up there with a bestfriend who > was half Inuit and half White. He caught hell from both sides...>> Can relate to that. My Dad's First Nations and Mom's Irish. They used to call us, well, I won't tell you what the local pedigreed European descendants called us, but I was once told in a talking circle when I was a kid that I didn't belong there because I didn't live on the rez. This elder spoke up for me though. He told the objector that a 'circle has no corners.' So the guy beside me gets a scolding and I get my first geometry lesson. :) Darren > > John > > On 11/9/10, BillK wrote: > > 2010/11/9 Darren Greer wrote>: > >> It's not so much the generalized term that people are using, but what > that > >> generalized term means and where it comes from. Inuit means "our > people" > >> from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > >> flesh" > >> from the Cree, who are incontestibly not Eskimo or Inuit. Nit-picking > >> aside, > >> I can see that the objections to the first might be greater than the > >> second. > >> > > > > > > Thanks for the info. Complicated, isn't it? > > > > Does that mean that Spike is really an Eskimo? ;) > > (He loves sushi). > > > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > > > > BillK > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Nov 10 00:26:15 2010 From: spike66 at att.net (spike) Date: Tue, 9 Nov 2010 16:26:15 -0800 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <004201cb806d$e32403d0$a96c0b70$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK ... Does that mean that Spike is really an Eskimo? ;) (He loves sushi)... BillK Don't I wish. Eskimos qualify as native American, which are equivalent to Latino by the quirky reasoning of our current employment law. If I could prove Eskimo heritage, I would have a job. Aside: I worked with an Eskimo (Inuit) when I worked in the southern California desert. He died of apparent heat related heart failure at age 43. spike From spike66 at att.net Wed Nov 10 00:44:10 2010 From: spike66 at att.net (spike) Date: Tue, 9 Nov 2010 16:44:10 -0800 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <000001cb8070$64324660$2c96d320$@att.net> . On Behalf Of Darren Greer Subject: Re: [ExI] War ----- It's a meme! >>Does that mean that Spike is really an Eskimo? ;) (He loves sushi).<< What little I know of Spike is from Exi. But based on that limited amount of data, I think he probably defies easy categorization. :) Darren You are too kind. When are we going to get together for a good sushi devouring session? I miss those. We used to get together with the local transhumanist crowd at least once or twice a year. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thespike at satx.rr.com Wed Nov 10 00:57:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 09 Nov 2010 18:57:49 -0600 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <4CD9EE0D.4000304@satx.rr.com> On 11/9/2010 4:57 PM, Darren Greer wrote: > I expect Americans didn't even start off calling themselves Americans. > Likely the British came up with it first. Probably because couch potato > was already taken. By Canadians. :) Tut tut, that would be Royal Canadian Mounted Couch Potato. Damien Broderick From darren.greer3 at gmail.com Wed Nov 10 01:51:31 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 21:51:31 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: <4CD9EE0D.4000304@satx.rr.com> References: <4CD9EE0D.4000304@satx.rr.com> Message-ID: > > >> > >>Tut tut, that would be Royal Canadian Mounted Couch Potato.<< > Don't forget the french-fried Surete du Quebec, led by the inimitable Constable Poutine (of Kurdish heritage.) > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 02:45:21 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 22:45:21 -0400 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: >>Could we get through a loss of the net and not lose most of the population? At this point "the net" and phone service is largely the same thing, at least outside a LATA. I think that might be possible today, enough people remember old ways of doing things. Ten years from now? 20? 30? Perhaps, but it gets harder and harder as time goes on and dependency on the net increases. Losing process control computers would be really bad. There are processes that can't be run by hand at all, they are unstable and people are too slow.<< An interesting discussion that I missed, likely because I work full-time and go to school part-time and don't have a lot of free time. I just moved back to rural Nova Scotia from San Francisco and this subject has been on my mind a lot lately. I've noticed since I've been here (a small, isolated community of eighty people or so, with the nearest town of any size forty kilometers away) that life has changed considerably due to technology since I was a boy. The Internet has created broader personal networks and people are better educated because of it. Improved medical procedures are helping people live longer. Everyone owns more stuff and drives nicer cars, and they engage in a broader range of social, political and physical activity than they used to. They are even more tolerant of ethnic and cultural diversity. All of these changes in less than twenty years. Yet, my guess is, based on my recent observations, that if they were forced off the grid tomorrow, and had to go back to "Amish-level" technology they would survive a hell of a lot longer than most of my friends in urban areas. Many of them still hunt and they all have guns. Some even have crossbows for bear hunting. They know how to get food and to keep it, even without electricity.
It is not uncommon for them to lose power in the winter months, and they have techniques for keeping their food: root cellars and snow banks and packing an unpowered fridge full of ice if the juice is going to be out for any length of time. They still preserve food (salt and pickle and spice) and keep low-temperature vegetable bins in basements, and almost everybody keeps some kind of garden in the warmer months. And most of all, they know how to cooperate to get things done. They do it all the time at local auxiliary and volunteer fire department and church meetings. Most houses here are oil, gas, propane or pellet stove heated. But many also have wood stoves or fireplaces for emergencies even if they don't use them. You'd be hard-pressed to find a single household without a chopping axe. Since I've moved back here I've even given some thought to defense, as scary as that sounds. The village is in a narrow valley divided by a small, fish-abundant river, and is easily defended if you had enough people motivated to do it (likely why this spot was chosen as a settlement in the late 1700s to begin with, not a peaceful time in my neck of the woods.) All in all, the chances of keeping a thriving community here in the event of something so disastrous as described above would be fairly good, at least for a while. But there is one interesting aside. Coyotes are a huge problem. They grow sleeker and braver and more numerous with each passing year, glutted on domestic cats and injured deer and human garbage carelessly stored. So people might find that predation was once again an issue, especially with small children, if there were no passing cars and ambient machine noise to keep them away during the day. And of course, access to medication would also be a problem, as would infection and disease, though my parents still teach their grandchildren remedies for some of the minor common ailments that many of us now run to the doctor for. One more thing. In my black fantasy, when the big one hits, I'm gonna grab a gun and head to the little village-centre library and defend the books. The world's first armed librarian. Darren On Tue, Nov 9, 2010 at 4:59 PM, Keith Henson wrote: > On Mon, Nov 8, 2010 at 5:00 AM, Charlie Stross > wrote: > > > > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > > > >> 3) we will continue to advance according to our own programming. > >> Mostly that frightened monkey programming that kept us from being > >> eaten by primordial predators will make us just as likely to hit the > >> computronium monsters with a proverbial rock or (as recently > >> discussed) a burning branch. Once the threat becomes possible, expect > >> to see right next to the firehose something like "in case of hard > >> takeoff, break glass to employ EMP." In a not-quite-worst-case > >> scenario we are forced to Nuke the Internet and revert back to > >> Amish-level technologies. Not a pretty situation, but humanity would > >> adapt. > > > > Humanity *in the abstract* might adapt; but if we have to go there, you > and I, personally, are probably going to die. Even today, all our supply > chains have adapted to just-in-time production and shipping, relying on > networked communications to ensure that stuff gets where it's needed; we > can't revert to doing things the old way -- the equipment has long since > been scrapped -- and we'd rapidly starve. Your average big box supermarket > only holds about 24-48 hours worth of provisions, and their logistics > infrastructure is highly tuned for efficiency.
Now add in gas stations, > railroad signalling, electricity grid control ... If we have to Nuke The Net > Or Die, it'll mean the difference between a 100% die-back and a 90% > die-back. > > I understand that just losing GPS will make hash of the banking > industry (timestamps). > > > Meanwhile, the Mormons, with their requirement to keep a year of canned > goods in the cellar, will be laughing. (Well, praying.) > > I thought you were out of date on this since our Mormon neighbors got > rid of their year of food back in the late 80s. But it seems like > this is still part of Mormon culture, though it may be followed by a > minority of them. > > Could we get through a loss of the net and not lose most of the > population? At this point "the net" and phone service is largely the > same thing, at least outside a LATA. I think that might be possible > today, enough people remember old ways of doing things. Ten years > from now? 20? 30? Perhaps, but it gets harder and harder as time > goes on and dependency on the net increases. Losing process control > computers would be really bad. There are processes that can't be run > by hand at all, they are unstable and people are too slow. > > I wonder if there was any consideration for the possible consequences > before this started? > > Keith > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* From msd001 at gmail.com Wed Nov 10 03:31:03 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 22:31:03 -0500 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: <4CD9EE0D.4000304@satx.rr.com> Message-ID: 2010/11/9 Darren Greer : >> >>Tut tut, that would be Royal Canadian Mounted Couch Potato.<< > > Don't forget the french-fried Surete du Quebec, led by > the inimitable Constable Poutine (of Kurdish heritage.) No doubt a heritage with tuberous roots. Or am I whey off? From msd001 at gmail.com Wed Nov 10 03:40:07 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 22:40:07 -0500 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > But there is one interesting aside. Coyotes are a huge problem. They grow > sleeker and braver and more numerous with each passing year, glutted on > domestic cats and injured deer and human garbage carelessly stored. So > people might find that predation was once again an issue, especially with > small children, if there were no passing cars and ambient machine noise to I imagined "sleeker and braver" coyotes as futuristic gold-foil-clad computronium-enhanced beasts with a hivemind and an insatiable thirst for small children.
Now there is a terrifying picture of the future: > especially if that crossbow "armed for bear" is anything short of a > plasma rifle or phaser set to kill. > > Reminds me of W.C. Fields' immortal quip: "I love small children, but I can't eat a whole one." Darren -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists1 at evil-genius.com Wed Nov 10 03:16:58 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 19:16:58 -0800 Subject: [ExI] Fire and evolution (was hypnosis) Message-ID: <4CDA0EAA.5060907@evil-genius.com> > By coincidence, > > > Stone Age humans were only able to develop relatively advanced tools > after their brains evolved a greater capacity for complex thought, > according to a new study that investigates why it took early humans > almost two million years to move from razor-sharp stones to a > hand-held stone axe. > ------------------ > > BillK Sort of. The actual conclusion is "The physical capabilities of Stone Age humans were not limiting their ability to make more complex stone tools." From that, they *assume* that brainpower was the limitation. Which may be true, but it's an assumption. For instance, other possibilities are that stone axes were not a useful tool until much later, perhaps due to changes in environment, diet, and social organization. (Note that I'm not arguing for or against...just pointing out the leap of faith involved here.) From lists1 at evil-genius.com Wed Nov 10 03:21:01 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 19:21:01 -0800 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) Message-ID: <4CDA0F9D.6070201@evil-genius.com> > From: Charlie Stross >> > In a not-quite-worst-case >> > scenario we are forced to Nuke the Internet and revert back to >> > Amish-level technologies. Not a pretty situation, but humanity would >> > adapt. > Humanity*in the abstract* might adapt; but if we have to go there, > you and I, personally, are probably going to die. The fact people forget is that late Pleistocene hunter-foragers had larger brains than post-agricultural humans! (And were taller, stronger, and healthier...only in the last 50 years have most human cultures regained the height of our distant ancestors.) The implication, of course, is that hunting and foraging *required* that brainpower -- otherwise it would not have been selected for. In other words, successfully hunting and foraging was *intellectually challenging*, and you didn't reproduce unless you were very good at it. In contrast, agriculture takes a genius to invent, but can be practiced by nearly anybody. Follow the ox, back and forth, sow and weed and harvest. Don't get 'distracted' (which was, for millions of years, a survival characteristic known as 'noticing something possibly edible amidst the blooming confusion of life'). Industrialization and mass-production increased this divide. The entire point of the Industrial Revolution was to decrease cost of goods by eliminating expensive skilled craftsmen and replacing them with low-wage unskilled labor. Each advance in technology involves more and more specialized knowledge whose fundamentals grow more complex with each step, and are understood by fewer... ..and which decrease the base level of intelligence and physical capability required to survive. 
In modern Western societies, absolutely *anyone* survives, even the persistently vegetative. Where this all ends up: technology allows a very few smart and capable people to enable the survival of *billions* of much less capable people. So if you take away that technology and require everyone to fend for themselves, you would expect a large dieback. > ... If we have to Nuke The Net Or Die, it'll mean the > difference between a 100% die-back and a 90% die-back. Given the world's rapidly disappearing supply of topsoil and ocean fish and continued population growth, that 90% figure you mention is basically a guarantee at some point in the not-too-distant future. (Anyone who wants to make the Julian Simon argument needs to also look at the rapidly disappearing supply of climax predators: world lion population has crashed from 200,000 to 20,000, a 90% decrease in ten years, due to habitat loss, and tigers are essentially extinct in the wild. Agricultural productivity has flattened out: all we've been doing is using up our buffer zones -- which *used* to have wild animals in them, hence their rapid decline. And you can't increase productivity via genetic engineering without using up your topsoil more quickly...unless you're returning human waste to the soil your food grew in, which we aren't.) > Meanwhile, the Mormons, with their requirement to keep a year of > canned goods in the cellar, will be laughing. (Well, praying.) I'm not sure one can learn subsistence farming or hunting in one year of hiding in a cellar. The Amish and Mennonites have the skills to manage...but I'm not sure they survive the waves of heavily armed and *very* hungry urban gangs exploding outward from the cities. The thing to remember is that 90%+ dieoffs are very common throughout the Earth's history...and given the rapid rate of replenishment relative to the geological record, don't tend to even show up unless environmental change held the population down for an extended period of time. Human population is thought to have hit a bottleneck of 5K-10K somewhere in the Late Pleistocene. So if there were a 99.9% dieoff and the only remaining humans were a few thousand Amish, Hadza, Ache, !Kung, and New Guinea highlanders, it wouldn't make a great deal of difference in the long-term. But you and I might not be too happy about it. From darren.greer3 at gmail.com Wed Nov 10 04:01:21 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 00:01:21 -0400 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDA0F9D.6070201@evil-genius.com> References: <4CDA0F9D.6070201@evil-genius.com> Message-ID: Only on this list, and perhaps a few others, could the topic "I love the world. =)" be converted in a few short posts into "Technology, specialization, and diebacks.." Gotta love it. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 10 08:10:56 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 10 Nov 2010 01:10:56 -0700 Subject: [ExI] extropy-chat Digest, Vol 86, Issue 13 In-Reply-To: References: Message-ID: On Tue, Nov 9, 2010 at 5:00 AM, BillK wrote: > > John Horgan has an article in Scientific American about why tribes go > to war that might be of interest. I know that Keith has suggested that > war is caused either by hard times or an expectation of hard times, > but I feel this is a weak theory as it seems to cover all cases and > therefore is untestable. That's nonsense. 
The theory would be instantly refuted if you found *one* case where a society doing well with a bright future started a war. The US Civil War was a total mystery to me until I realized that the anticipation of a bleak future was as effective as hard times themselves in sparking a war, for sound evolutionary reasons. > Horgan thinks that war is learned behaviour. > > Some Quotes: > > Analyses of more than 300 societies in the Human Relations Area Files, > an ethnographic database at Yale University, have turned up no > clear-cut correlations between warfare and chronic resource scarcity. > Similarly, the anthropologist Lawrence Keeley notes in War before > Civilization: The Myth of the Peaceful Savage (Oxford University > Press, 1997) that the correlation between population pressure and > warfare "is either very complex or very weak or both." Amazing they would say this. "Population pressure" is a variable. War in China followed weather because a population that could be fed in times of good weather could not be fed in times of bad weather. > Margaret Mead dismissed the notion that war is the inevitable > consequence of our "basic, competitive, aggressive, warring human > nature." This theory is contradicted, she noted, by the simple fact > that not all societies wage war. War has never been observed among a > Himalayan people called the Lepchas or among the Eskimos. In fact, > neither of these groups, when questioned by early ethnographers, was > even aware of the concept of war. Given what is now known about Margaret Mead's studies I am amazed that anyone would quote her as an authority. Himalayan people live at the edge of what humans can adapt to. That keeps their numbers down. As for the Eskimos, they killed each other at a high rate. You can call it war if you want to. > Warfare is "an invention," Mead concluded, like cooking, marriage, > writing, burial of the dead or trial by jury. Once a society becomes > exposed to the "idea" of war, it "will sometimes go to war" under > certain circumstances. Some people, Mead stated, such as the Pueblo > Indians, fight reluctantly to defend themselves against aggressors; > others, such as the Plains Indians, sally forth with enthusiasm, > because they have elevated martial skills to the highest of manly > virtues. Sheesh. Citing Mead is just stupid. Keith From pharos at gmail.com Wed Nov 10 10:07:27 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Nov 2010 10:07:27 +0000 Subject: [ExI] Margaret Mead controversy Message-ID: Quote: Margaret Mead's most famous book, 1928's "Coming of Age in Samoa," portrayed an idyllic, non-Western society, free of much sexual restraint, in which adolescence was relatively easy. Derek Freeman, an Australian anthropologist, wrote two books arguing that Mead was wrong and launched a heated public debate about her work. To Freeman, the issue was larger than the accuracy of "Coming of Age in Samoa." As he saw it, Mead's book was pivotal in arguing that humans' cultural environment -- or "nurture" -- could mold them as much or more than their biological predispositions -- or "nature." Paul Shankman, a University of Colorado professor of anthropology, has spent years studying the controversy and has uncovered new evidence that Freeman's fierce criticism of Mead contained fundamental flaws. "Freeman told a good story. It was a story people wanted to hear, that they wanted to believe," Shankman said. "Unfortunately, that's all it was: a good story." Shankman has exhumed data that deeply undercut Freeman's case.
His research, partly based on a probe of Freeman's archives, opened after his death, revealed that Freeman "cherry picked" evidence that supported his thesis and ignored evidence that contradicted it. Shankman dissects the controversy in "The Trashing of Margaret Mead: Anatomy of an Anthropological Controversy," a book published in November by the University of Wisconsin Press. ----------------------------------------- And from Wikipedia: In 1983, five years after Mead had died, New Zealand anthropologist Derek Freeman published Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth, in which he challenged Mead's major findings about sexuality in Samoan society, citing statements of her surviving informants' claiming that she had coaxed them into giving her the answers she wanted. After years of discussion, many anthropologists concluded that Mead's account is for the most part reliable, and most published accounts of the debate have also raised serious questions about Freeman's critique.[17] 17. See Appell 1984, Brady 1991, Feinberg 1988, Leacock 1988, Levy 1984, Marshall 1993, Nardi 1984, Patience and Smith 1986, Paxman 1988, Scheper-Hughes 1984, Shankman 1996, Young and Juan 1985, and Shankman 2009. ----------------------------- Basically it is the nurture versus nature debate all over again. Keith (like Freeman) tends towards nature side, that humans behave more as genetics have programmed them to. I tend more towards the nurture side, that humans behave more as their culture programs them to. Obviously it is all a big mish-mash with parts of both points of view being correct at different times and circumstances. But the nurture side is the whole point of the history of civilization, i.e. trying to control the animal instincts of humans to build a better life. Keith's support of the idea that genetic programming takes precedence is what leads him to his rather depressing view of the future course of humanity. But civilization has been controlling the human genetic impulses (to a greater or lesser extent) for thousands of years. So I think there is still hope for humanity. BillK From msd001 at gmail.com Wed Nov 10 13:05:26 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:05:26 -0500 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > Reminds me of W.C. Fields' immortal quip: "I love small children, but I > can't eat a whole one." Q: "Do you have any kids?" A: "I had a child once; it was delicious" (effectively ends further discussion) From msd001 at gmail.com Wed Nov 10 13:18:40 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:18:40 -0500 Subject: [ExI] Margaret Mead controversy In-Reply-To: References: Message-ID: On Wed, Nov 10, 2010 at 5:07 AM, BillK wrote: > Keith's support of the idea that genetic programming takes precedence > is what leads him to his rather depressing view of the future course > of humanity. But civilization has been controlling the human genetic > impulses (to a greater or lesser extent) for thousands of years. ?So I > think there is still hope for humanity. their interdependence forces a balance. We might make a strong case for one extreme but the farther we push our point(s) from that equilibrium the harder it is to believe. While genes program behavior, the environment measures fitness. 
Our culture is a habitat for competing memes (and the genes that predispose us to accept and propagate them). Why has western culture fixated on the emaciated waif as the ideal feminine form? What evolutionary advantage exists? Is it simply the rarity of that phenotype that we perceive as valuable from a requisite diversity perspective? Are blond hair and blue eyes the same? The ability to use one's brain to secure material wealth has changed the ideal masculine form from largish alpha brute protector/provider. Maybe the fitness evaluation grows more complex with the environment. fwiw: no, I don't have a dozen citations to legitimize my opinion. If this observation stands on its own then great; else it's just one person's casual conversation. From msd001 at gmail.com Wed Nov 10 13:30:19 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:30:19 -0500 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDA0F9D.6070201@evil-genius.com> References: <4CDA0F9D.6070201@evil-genius.com> Message-ID: On Tue, Nov 9, 2010 at 10:21 PM, wrote: > The fact people forget is that late Pleistocene hunter-foragers had larger > brains than post-agricultural humans! (And were taller, stronger, and > healthier...only in the last 50 years have most human cultures regained the > height of our distant ancestors.) By comparison the Apple IIc I had when I was ten years old was more than twice as powerful as the computer I'm currently using to type this email. Perhaps fossil evidence shows a larger brainbox but can say nothing about the neural density / efficiency of the brain contained therein. Are you suggesting that a sperm whale is 5x smarter than the average human only because of its larger brain? > Where this all ends up: technology allows a very few smart and capable > people to enable the survival of *billions* of much less capable people. So > if you take away that technology and require everyone to fend for > themselves, you would expect a large dieback. > >> ... If we have to Nuke The Net Or Die, it'll mean the >> difference between a 100% die-back and a 90% die-back. Yeah. No wonder the future looks bleak. Even without die-back there's the apparently inevitable dumbening that's sure to overtake all but the top 0.1% who'll run everything (possibly the Mr. Smiths) From stathisp at gmail.com Wed Nov 10 13:48:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Nov 2010 00:48:49 +1100 Subject: [ExI] Let's play What If. In-Reply-To: <4CD85AA8.5080402@satx.rr.com> References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: On Tue, Nov 9, 2010 at 7:16 AM, Damien Broderick wrote: > 2) If copying requires destruction of the original, is it psychologically > likely that he will go to his death happy in the knowledge that his exact > subsequent copy will continue elsewhere? Many here say, "Hell, yes, it's > only evolved biases and cognitive errors that could support any other > opinion!"
Others say, "Maybe so, but you're not getting me into that damned > gas chamber." > > So if the world becomes filled with people happy to be killed and copied, of > course it's likely that after a few hundred iterations identity will be > construed this way by almost everyone. If the USA becomes filled with the > antiabortion offspring of the duped who believe evolution is a godless hoax > and humans never walked on the moon, those opinions will also be validated. > So what? There is no contradiction in the assertion that the person survives even though the original is destroyed, because survival of the person and survival of the original are two different things. -- Stathis Papaioannou From agrimes at speakeasy.net Wed Nov 10 14:25:40 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 10 Nov 2010 09:25:40 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: <4CDAAB64.2040803@speakeasy.net> > There is no contradiction in the assertion that the person survives > even though the original is destroyed, because survival of the person > and survival of the original are two different things. In that case the concept of "person" is meaningless. In Existential Nihilism, if you can't poke it in the arm, it doesn't exist. Similarly, there is no such thing as society, forests, governments, wars, etc... Because these concepts are fundamentally fictions, they obscure and obstruct a true understanding of the reality in which you live. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Wed Nov 10 15:37:53 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 10 Nov 2010 08:37:53 -0700 Subject: [ExI] EP, was Margaret Mead controversy Message-ID: On Wed, Nov 10, 2010 at 5:00 AM, BillK wrote: snip > Basically it is the nurture versus nature debate all over again. > > Keith (like Freeman) tends towards nature side, that humans behave > more as genetics have programmed them to. It depends on how widely you class "behavior." Remember, the genes just don't have the information available to program a lot of behavior beyond walking. So a person flying a jet aircraft isn't using a lot of genetically programmed behavior. But if he (or she) is flying in a war, the motivation behind wars is genetically determined because the selection for going to war when it was profitable for genes was under intense selection for millions of years. (In good times it was *not* profitable for genes.) > I tend more towards the nurture side, that humans behave more as their > culture programs them to. It really depends on the situation. Culture has *nothing* to do with you pulling an arm back when you touch something that hurts. Culture (and current culture at that) has everything to do with spending hours working on your Facebook page. > Obviously it is all a big mish-mash with parts of both points of view > being correct at different times and circumstances. > > But the nurture side is the whole point of the history of > civilization, i.e. 
trying to control the animal instincts of humans to > build a better life. According to Dr. Gregory Clark, civilization set up the conditions for genetic selection in some parts of the world as intense as that which converted wild foxes to cute tame ones in 20 generations. Indeed certain psychological characteristics, such as impulsiveness and time preference, seem to have been greatly reduced in some groups over baseline hunter gatherers. > Keith's support of the idea that genetic programming takes precedence > is what leads him to his rather depressing view of the future course > of humanity. It's not genetic programming that concerns me. I actually don't see much future for humanity at all as we pass into the singularity. We can change to keep up with our intellectual offspring. The result would be something we would not recognize as human. Alternately our intellectual offspring might keep us like we keep cats (depressing when you think of what we do to cats "for their own good"). Perhaps you have another option? > But civilization has been controlling the human genetic > impulses (to a greater or lesser extent) for thousands of years. ?So I > think there is still hope for humanity. Out of curiosity are you doing anything to improve our chances for a future? Keith From jonkc at bellsouth.net Wed Nov 10 15:29:57 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 10 Nov 2010 10:29:57 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDAAB64.2040803@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> Message-ID: <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> Stathis Papaioannou wrote: >> There is no contradiction in the assertion that the person survives >> even though the original is destroyed, because survival of the person >> and survival of the original are two different things. > Alan Grimes wrote: > In that case the concept of "person" is meaningless. No, it just means "the concept of person" is not a noun, in fact no concept is. > > In Existential Nihilism, if you can't poke it in the arm, it doesn't exist. If true then Existential Nihilism is a remarkably silly philosophy. You can't poke the number 42 in the arm because you wouldn't be able to find it, it possesses a size but no position; I doubt if you want to argue that the number 42 doesn't exist. You couldn't even poke a vector in the arm and in addition to a size a vector possesses a direction, but it still has no position; if vectors don't exist then physicists are in big trouble. Lots of important things have no position, when you really get down to it, I'd go so far as to say ALL important things. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 17:00:46 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:00:46 -0400 Subject: [ExI] EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: >>It's not genetic programming that concerns me. 
I actually don't see much future for humanity at all as we pass into the singularity. We can change to keep up with our intellectual offspring. The result would be something we would not recognize as human. << I may be naive about this (and usually am) but isn't there a lot of perception at work here? If what resulted from these changes to 'keep up' was not recognizably human, wouldn't it be more likely that we would just redefine what it meant to be physically (and not necessarily biologically) and mentally human as we evolved (technologically speaking)? So we'd be no longer biologically human from our current standpoint? The way that a 12th century Christian or Islamic physician may see a modern man living with the heart valves of a pig as no longer human because for him that was the seat of the soul? It's a matter of degree, I realize. And the possibility of singularity (which I'm still trying to get a handle on, I admit) makes orderly progress and stochastic prediction impossible. But one of the things I struggle with in terms of TH is figuring out this: if the human body is fully fungible as many seem to believe, then can a simple biological definition be useful to define what it means to be human if you already have that awareness? Whether this biological replacement has reached full potential or not? What defines being human? If it's not biology (and I am presumably no less human with an artificial leg than if all my body parts are replaced and my awareness uploaded and/or reconfigured) then what is it? If I am able to self-modify and expand into places that I can't currently imagine because of my biological limitations, then is that being non-human? No longer having those limitations? And if humanity is simply the sum total of my limitations (beginning with my mortality) then you can keep that definition anyway. I've never been a great believer in we are what we can't do. Tell that to someone who can't feed their children, and see how it flies. At its base level, it is unethical: a philosophy fed by the oppressor to the oppressed to keep the status quo. But, and here's the rub, where in the hell do my ethics come from? Not from my limitations but my desire to breach them, and free others from theirs if they are unable to do it for themselves. I am mightily confused about this, and would like to know what others think. I think this may actually be transhumanism 101, but I am just now learning and absorbing enough to ask this question and actually have a shot at processing the answer. Even assistance in restating the question into something less confusing would be helpful. Darren From darren.greer3 at gmail.com Wed Nov 10 17:24:35 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:24:35 -0400 Subject: Re: [ExI] Seth Lloyd on birds, plants, ... In-Reply-To: <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> References: <4CD7211C.8060304@speakeasy.net> <4CD72B6A.80701@satx.rr.com> <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> Message-ID: He originally appeared in a panel discussion about this subject on CBC radio's show Ideas a few years back. Another guest discussed dark matter and I forget the rest. Was a good broadcast. They have a live panel every year for this radio show. I love CBC radio. Quirks and Quarks and Ideas especially. One year Ideas did a broadcast of the mock-up of the trial of Socrates.
Darren On Mon, Nov 8, 2010 at 7:21 AM, scerir wrote: > Seth Lloyd on quantum 'weirdness' used by plants, animals, etc. > http://www.cbc.ca/technology/story/2010/11/03/quantum-physics-biology-living-things.html > Supposedly, the video of this lecture will appear on the Perimeter > Institute website, or at pirsa.org. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* From darren.greer3 at gmail.com Wed Nov 10 17:32:32 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:32:32 -0400 Subject: [ExI] Google maps gets drawn into Latin America border dispute Message-ID: Interesting. When the digitization becomes more real than the physicality. http://www.washingtonpost.com/wp-dyn/content/article/2010/11/09/AR2010110906620.html?wprss=rss_world/wires Darren -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* From agrimes at speakeasy.net Wed Nov 10 20:40:59 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 10 Nov 2010 15:40:59 -0500 Subject: Re: [ExI] Let's play What If. In-Reply-To: <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> Message-ID: <4CDB035B.9040406@speakeasy.net> John Clark wrote: > If true then Existential Nihilism is a remarkably silly philosophy. You > can't poke the number 42 in the arm because you wouldn't be able to find > it, it possesses a size but no position; I doubt if you want to argue > that the number 42 doesn't exist. Of course the # 42 doesn't exist! It was invented just like every other concept. That is not to say that the # 42 is either meaningless or useless. The # 42 has a very well defined meaning *with respect to a unit* (which may or may not be meaningful or useful). If you had 42 marbles, they would certainly exist and the number would be useful for describing them. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
From kanzure at gmail.com Wed Nov 10 20:21:00 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 10 Nov 2010 14:21:00 -0600 Subject: [ExI] Paper: It Will Be Awesome if They Don't Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology Message-ID: It Will Be Awesome if They Don't Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology pdf: http://www.publicknowledge.org/files/docs/3DPrintingPaperPublicKnowledge.pdf author: Michael Weinberg """ The next great technological disruption is brewing just out of sight. In small workshops, and faceless office parks, and garages, and basements, revolutionaries are tinkering with machines that can turn digital bits into physical atoms. The machines can download plans for a wrench from the Internet and print out a real, working wrench. Users design their own jewelry, gears, brackets, and toys with a computer program, and use their machines to create real jewelry, gears, brackets, and toys. These machines, generically known as 3D printers, are not imported from the future or the stuff of science fiction. Home versions, imperfect but real, can be had for around $1,000. Every day they get better, and move closer to the mainstream. In many ways, today's 3D printing community resembles the personal computing community of the early 1990s. They are a relatively small, technically proficient group, all intrigued by the potential of a great new technology. They tinker with their machines, share their discoveries and creations, and are more focused on what is possible than on what happens after they achieve it. They also benefit from following the personal computer revolution: the connective power of the Internet lets them share, innovate, and communicate much faster than the Homebrew Computer Club could have ever imagined. The personal computer revolution also casts light on some potential pitfalls that may be in store for the growth of 3D printing. When entrenched interests began to understand just how disruptive personal computing could be (especially massively networked personal computing) they organized in Washington, D.C. to protect their incumbent power. Rallying under the banner of combating piracy and theft, these interests pushed through laws like the Digital Millennium Copyright Act (DMCA) that made it harder to use computers in new and innovative ways. In response, the general public learned once-obscure terms like "fair use" and worked hard to defend their ability to discuss, create, and innovate. Unfortunately, this great public awakening came after Congress had already passed its restrictive laws. Of course, computers were not the first time that incumbents welcomed new technologies by attempting to restrict them. The arrival of the printing press resulted in new censorship and licensing laws designed to slow the spread of information. The music industry claimed that home taping would destroy it. And, perhaps most memorably, the movie industry compared the VCR to the Boston Strangler preying on a woman home alone. One of the goals of this whitepaper is to prepare the 3D printing community, and the public at large, before incumbents try to cripple 3D printing with restrictive intellectual property laws.
By understanding how intellectual property law relates to 3D printing, and how changes might impact 3D printing's future, this time we will be ready when incumbents come calling to Congress. """ - Bryan http://heybryan.org/ 1 512 203 0507 From lists1 at evil-genius.com Wed Nov 10 20:27:27 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Wed, 10 Nov 2010 12:27:27 -0800 Subject: [ExI] Coyotes (Re: I love the world. =) In-Reply-To: References: Message-ID: <4CDB002F.7000801@evil-genius.com> On 11/9/10 6:45 PM, Darren wrote: > Coyotes are a huge problem. They grow > sleeker and braver and more numerous with each passing year, glutted on > domestic cats and injured deer and human garbage carelessly stored. So > people might find that predation was once again an issue, Well, that's what we get for killing off all the wolves: a vacant ecological niche. I think part of the success of coyotes is because, unlike wolves, they have adapted to inhabited areas -- where discharging of firearms is prohibited. A coyote is much safer in an urban area than it is out West, where it will likely be shot or poisoned (frequently at taxpayer expense). Factoid: the extinct "dire wolf" (Canis dirus) was actually more closely related to the modern coyote than the modern wolf. Mike Dougherty wrote: > I imagined "sleeker and braver" coyotes as futuristic gold-foil-clad > computronium-enhanced beasts with a hivemind and an insatiable thirst > for small children. In other words, furry versions of Dick Cheney, Hank Paulson, and Rahm Emanuel. From jrd1415 at gmail.com Wed Nov 10 22:11:59 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 14:11:59 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? Message-ID: I'm trying to find a cartoon that appeared in Scientific American about ten years ago. I thought it was in the Sept 2001 issue, but I have a copy of that issue, and I can't find it there. I tried contacting SciAm, but they don't respond. The cartoon depicts a stairway proceeding from lower left to upper right. It is the evolutionary stairway. Three "individuals" are climbing the stairs: a lemur-like critter lower left, then a hairy, knuckle-dragging proto-human cave man, and finally in the upper right a "modern" human. The caption has the lemur saying to the cave man, "I wondered when he'd notice there were more steps." Suggesting of course that evolution is not through with the human "line", fundamental thinking for list members. I want to enlarge that cartoon and make a T-shirt out of it. Person who finds it for me gets a free T-shirt. Best, Jeff Davis "You are what you think." Jeff Davis From pharos at gmail.com Wed Nov 10 23:53:41 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Nov 2010 23:53:41 +0000 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis wrote: > I'm trying to find a cartoon that appeared in Scientific American > about ten years ago. I thought it was in the Sept 2001 issue, but I > have a copy of that issue, and I can't find it there. I tried > contacting SciAm, but they don't respond. > > The cartoon depicts a stairway proceeding from lower left to upper > right. It is the evolutionary stairway.
?Three "individuals" are > climbing the stairs: ?a lemur-like critter lower left, then a hairy, > knuckle-dragging proto-human cave man, and finally in the upper right > a "modern" human. ?The caption has the lemur saying to the cave man, > "I wondered when he'd notice there were more steps." ?Suggesting of > course that evolution is not through with the human "line", > fundamental thinking for list members. > > I want to enlarge that cartoon and make a T-shirt out of it. ?Person > who finds it for me gets a free T-shirt. > > Good news & bad news. I've found the cartoon, but it's copyrighted. Book: Radical Evolution The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human By Joel Garreau Quote: Joel Garreau: At the beginning of "Radical Evolution," there is a New Yorker cartoon I bought the rights to. It shows a staircase. on the first step is a little monkey. on the next, a bigger one, on the next, a cro magnon, and on the next, a guy in a suit. the caption has the cro-magnon saying to the suit: "i was wondering when you'd notice there's lots more steps." ---------------------------- BillK From jrd1415 at gmail.com Thu Nov 11 00:22:23 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 16:22:23 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: Wow! This is bizarre. That's the cartoon all right, but it isn't. The one I remember was smaller format, steeper stairs, only three climbers, the modern human was climbing and looking forward and oblivious, and the caption was Lemur to cave man or vice versa. Now I'm really confused. Jeff Davis On Wed, Nov 10, 2010 at 4:01 PM, Jeff Davis wrote: > Thanks, Bill. > > Copyright's not an issue. ?I'm not planning commercial distribution. > Just want one for me,... and one for you. > > Thanks. > > How did you find it. ?I thought sure I saw it in SciAm. ?Was I wrong? > Did I actually see it in the New Yorker, and misremembered? > > On Wed, Nov 10, 2010 at 3:53 PM, BillK wrote: >> On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis ?wrote: >>> I'm trying to find a cartoon that appeared in Scientific American >>> about ten years ago. ?I thought it was in the Sept 2001 issue, but I >>> have a copy of that issue, and I can't find it there. ?I tried >>> contacting SciAm, but they don't respond. >>> >>> The cartoon depicts a stairway proceeding from lower left to upper >>> right ?It is the evolutionary stairway. ?Three "individuals" are >>> climbing the stairs: ?a lemur-like critter lower left, then a hairy, >>> knuckle-dragging proto-human cave man, and finally in the upper right >>> a "modern" human. ?The caption has the lemur saying to the cave man, >>> "I wondered when he'd notice there were more steps." ?Suggesting of >>> course that evolution is not through with the human "line", >>> fundamental thinking for list members. >>> >>> I want to enlarge that cartoon and make a T-shirt out of it. ?Person >>> who finds it for me gets a free T-shirt. >>> >>> >> >> >> Good news & bad news. >> >> I've found the cartoon, but it's copyrighted. >> >> Book: >> Radical Evolution >> The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What >> It Means to Be Human >> By ? ?Joel Garreau >> >> Quote: >> Joel Garreau: At the beginning of "Radical Evolution," there is a New >> Yorker cartoon I bought the rights to. It shows a staircase. on the >> first step is a little monkey. 
on the next, a bigger one, on the next, >> a cro magnon, and on the next, a guy in a suit. the caption has the >> cro-magnon saying to the suit: >> "i was wondering when you'd notice there's lots more steps." >> >> ---------------------------- >> >> >> BillK >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > From jrd1415 at gmail.com Thu Nov 11 00:01:26 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 16:01:26 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: Thanks, Bill. Copyright's not an issue. I'm not planning commercial distribution. Just want one for me,... and one for you. Thanks. How did you find it. I thought sure I saw it in SciAm. Was I wrong? Did I actually see it in the New Yorker, and misremembered? On Wed, Nov 10, 2010 at 3:53 PM, BillK wrote: > On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis ?wrote: >> I'm trying to find a cartoon that appeared in Scientific American >> about ten years ago. ?I thought it was in the Sept 2001 issue, but I >> have a copy of that issue, and I can't find it there. ?I tried >> contacting SciAm, but they don't respond. >> >> The cartoon depicts a stairway proceeding from lower left to upper >> right ?It is the evolutionary stairway. ?Three "individuals" are >> climbing the stairs: ?a lemur-like critter lower left, then a hairy, >> knuckle-dragging proto-human cave man, and finally in the upper right >> a "modern" human. ?The caption has the lemur saying to the cave man, >> "I wondered when he'd notice there were more steps." ?Suggesting of >> course that evolution is not through with the human "line", >> fundamental thinking for list members. >> >> I want to enlarge that cartoon and make a T-shirt out of it. ?Person >> who finds it for me gets a free T-shirt. >> >> > > > Good news & bad news. > > I've found the cartoon, but it's copyrighted. > > Book: > Radical Evolution > The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What > It Means to Be Human > By ? ?Joel Garreau > > Quote: > Joel Garreau: At the beginning of "Radical Evolution," there is a New > Yorker cartoon I bought the rights to. It shows a staircase. on the > first step is a little monkey. on the next, a bigger one, on the next, > a cro magnon, and on the next, a guy in a suit. the caption has the > cro-magnon saying to the suit: > "i was wondering when you'd notice there's lots more steps." > > ---------------------------- > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From stathisp at gmail.com Thu Nov 11 04:35:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Nov 2010 15:35:19 +1100 Subject: [ExI] Let's play What If. 
In-Reply-To: <4CDAAB64.2040803@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> Message-ID: 2010/11/11 Alan Grimes : >> There is no contradiction in the assertion that the person survives >> even though the original is destroyed, because survival of the person >> and survival of the original are two different things. > > In that case the concept of "person" is meaningless. > > In Existential Nihilism, if you can't poke it in the arm, it doesn't > exist. Similarly, there is no such thing as society, forests, > governments, wars, etc... Because these concepts are fundamentally > fictions, they obscure and obstruct a true understanding of the reality > in which you live. Sometimes we can agree on a particular instance of a vaguely defined thing such as a country or a person. We can then try to come up with definitions to see if they fit. The problem with these personal identity discussions is that some participants assume a definition when that definition is inconsistent with their own usage of the terms.

A1 Proposed definition: a country is a geographical region populated by people who all speak the same language.
A2 Switzerland is a country.
A3 The people in Switzerland do not all speak the same language.
A4 If we agree on A2 and A3 we must reject A1.

B1 Proposed definition: a person survives from t1 to t2 provided that the matter in his body remains the same between those times.
B2 Alan has survived from Tuesday to Wednesday.
B3 The matter in Alan's body was not the same on Tuesday and Wednesday.
B4 If we agree on B2 and B3 we must reject B1.

So that is the challenge: come up with a definition of personal survival that excludes destructive copying but allows the situations where normal usage of the term says we have definitely survived. -- Stathis Papaioannou
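The A4/B4 step above is a mechanical consistency check: treat each proposed definition as a predicate, and reject it the moment it contradicts a judgment everyone has already agreed on. A minimal sketch of that test in Python follows; the function and data names are invented purely for illustration and are not anything proposed in the thread.

    # Reject a proposed definition if it disagrees with any judgment
    # that has already been agreed on (Stathis's A4/B4 step).
    def survives_scrutiny(definition, agreed_judgments):
        return all(definition(case) == verdict
                   for case, verdict in agreed_judgments)

    # A1: a country is a region whose people all speak the same language.
    a1 = lambda region: region["monolingual"]

    # A2 and A3: Switzerland is agreed to be a country, and is agreed
    # not to be monolingual.
    agreed = [({"name": "Switzerland", "monolingual": False}, True)]

    print(survives_scrutiny(a1, agreed))  # prints False: A1 is rejected

The same check disposes of B1, since Alan is agreed to have survived from Tuesday to Wednesday even though the matter in his body changed.

From pharos at gmail.com Thu Nov 11 11:22:55 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Nov 2010 11:22:55 +0000 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: On Thu, Nov 11, 2010 at 12:22 AM, Jeff Davis wrote: > Wow! This is bizarre. That's the cartoon all right, but it isn't. > The one I remember was smaller format, steeper stairs, only three > climbers, the modern human was climbing and looking forward and > oblivious, and the caption was Lemur to cave man or vice versa. > > Now I'm really confused. > > Scientific American did a review of the book, so they may have shown the cartoon. Here is the one still on sale at the New Yorker: (presumably they pay commission to Garreau if he owns the rights) There are many evolution cartoons around, showing lines of characters, so you may be conflating several memories. Cheers, BillK From jonkc at bellsouth.net Thu Nov 11 14:25:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Nov 2010 09:25:15 -0500 Subject: [ExI] Let's play What If.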
In-Reply-To: <4CDB035B.9040406@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> Message-ID: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> On Nov 10, 2010, at 3:40 PM, Alan Grimes wrote: > Of course the # 42 doesn't exist! It was invented just like every other > concept. That is not to say that the # 42 is either meaningless or useless. If something has meaning then it is meaningless to say "it doesn't exist". And if it is useful too then it is not useful to pretend that it doesn't. John K Clark From rpwl at lightlink.com Thu Nov 11 15:26:22 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 11 Nov 2010 10:26:22 -0500 Subject: [ExI] Iain M Banks on uploading In-Reply-To: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> Message-ID: <4CDC0B1E.3030909@lightlink.com> Interview with Iain M Banks in New Scientist: http://www.newscientist.com/blogs/culturelab/2010/11/iain-m-banks-upload-for-everlasting-life.html A very short, but thoroughly Banksian set of responses. I particularly liked: Q: In your book, virtual minds can get sent to a virtual hell for bad behaviour. What gave you this idea? Banks: Part of a science fiction writer's job is to think how we would manage to fuck up something so potentially cool and life-affirming - so virtual hells seemed almost as obvious as virtual heavens. That pretty much sums up half of all the online discussion about the singularity that I hear inside the singularity community. Richard Loosemore From agrimes at speakeasy.net Thu Nov 11 15:23:49 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 11 Nov 2010 10:23:49 -0500 Subject: Re: [ExI] Let's play What If. In-Reply-To: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> Message-ID: <4CDC0A85.3000404@speakeasy.net> John Clark wrote: > On Nov 10, 2010, at 3:40 PM, Alan Grimes wrote: >> Of course the # 42 doesn't exist!
It was invented just like every other >> concept. That is not to say that the # 42 is either meaningless >> or useless. > If something has meaning then it is meaningless to say "it doesn't > exist". And if it is useful too then it is not useful to pretend that it > doesn't. No. I don't group ideas with things that have tangible reality. For example, a brain exists, it's tangible. However, a computer simulation of a brain has no tangible reality; even a microscopic examination of the computer chips will not reveal it! Furthermore, a computer running a simulation of a brain is indistinguishable from a computer running the ABC@Home project. (My own machine is currently the 30th fastest on the project! =P ) Ever consider the differences between a computer and a brain with regards to a total reset situation? Let's say you had a stroke or a momentary jolt of 10,000 volts through your head. You would be stunned, and your brain would probably re-set itself a few times, but it would go back to being a brain. If you do the same with a computer, *at best* you'll get: #### OPERATING SYSTEM NOT FOUND. INSERT DISK INTO DRIVE A: #### =P So yeah, I have a billion complaints about uploading, not counting the identity issue. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From singularity.utopia at yahoo.com Thu Nov 11 10:06:35 2010 From: singularity.utopia at yahoo.com (Singularity Utopia) Date: Thu, 11 Nov 2010 10:06:35 +0000 (GMT) Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? Message-ID: <48358.82164.qm@web24905.mail.ird.yahoo.com> I am interested in the Singularitarian Principles described by Eliezer S. Yudkowsky. http://yudkowsky.net/obsolete/principles.html The above link to Eliezer S. Yudkowsky's site has a disclaimer-notice stating: "This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old." Does anyone know what Yudkowsky's up-to-date Singularitarian Principles are? Does anyone know how to contact Yudkowsky (his webpage discourages contact; he states he is not likely to reply to individual emails), and if you do know how to contact him, maybe you could ask him to update his page? Has Yudkowsky stopped believing in his Singularitarian Principles? Maybe Yudkowsky has admitted defeat regarding the concept of AI and the Singularity? Thanks for any help anyone can offer to clarify the current situation regarding the outdated Singularitarian Principles described by Eliezer S. Yudkowsky: http://yudkowsky.net/obsolete/principles.html From rpwl at lightlink.com Thu Nov 11 17:14:25 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 11 Nov 2010 12:14:25 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <48358.82164.qm@web24905.mail.ird.yahoo.com> References: <48358.82164.qm@web24905.mail.ird.yahoo.com> Message-ID: <4CDC2471.5040209@lightlink.com> Singularity Utopia wrote: > [snip] > Does anyone know how to contact Yudkowsky (his webpage discourages > contact; he states he is not likely to reply to individual emails), and > if you do know how to contact him maybe you could ask him to update his > page? Yudkowsky is one of the easiest people to contact in the entire singularity community: just join the SL4 mailing list (which he created) and say something that annoys him. If you have trouble thinking of something to annoy him, try mentioning my name. That should do it.
:-) After that, he'll appear out of nowhere and write a sarcastic comment about your stupidity, and THEN you will have his attention. :-) Have fun. Richard Loosemore From jonkc at bellsouth.net Thu Nov 11 17:02:48 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Nov 2010 12:02:48 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDC0A85.3000404@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> Message-ID: On Nov 11, 2010, at 10:23 AM, Alan Grimes wrote: > > I don't group ideas with things that have tangible reality. I don't either; ideas are far more important than tangible reality crap. You don't have ideas, you are ideas, and it's irrelevant what hardware happens to think you. > For example, a brain exists, it's tangible. What an object does is less tangible than the object itself, but I don't care; mind is more important than brain, at least it is in this mind's opinion. > However a computer simulation of a brain has no tangible reality If so, then computer or even calculator arithmetic has no tangible reality, so you'd better not use one on your tax returns if you want to stay out of jail; but on second thought that really is not a problem, because the brain simulating Alan Grimes has no tangible reality either, and you can't put a nonexistent entity in prison. > even a microscopic examination of the computer chips will not reveal it! But a microscopic examination of the neurons in your brain will?!! > Furthermore, a computer running a simulation of a brain is indistinguishable from a computer running the ABC@Home project. And that versatility is precisely why brains and computers are such useful objects. > Ever consider the differences between a computer and a brain with > regards to a total reset situation? Indeed I have. If I were to suffer a horrible traumatic experience I'd likely be in a funk for many years and possibly for the rest of my life, but if my computer hangs because of bad programs or has a nervous breakdown for any reason I can reset it in just a few minutes; and even if it's totally destroyed, everything is backed up on an external hard disk so nothing is lost. I just wish I had an external hard drive backup for me. > I have a billion complaints about uploading not counting the identity issue. There is no identity issue, there is only an identity superstition. John K Clark From thespike at satx.rr.com Thu Nov 11 17:38:14 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 11 Nov 2010 11:38:14 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDC2471.5040209@lightlink.com> References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> Message-ID: <4CDC2A06.2090300@satx.rr.com> On 11/11/2010 11:14 AM, Richard Loosemore wrote: > Yudkowsky is one of the easiest people to contact in the entire > singularity community: just join the SL4 mailing list (which he > created) and say something that annoys him. That shouldn't be difficult for "Singularity Utopia"... Damien Broderick From jonkc at bellsouth.net Thu Nov 11 17:29:55 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Nov 2010 12:29:55 -0500 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: <58A12420-BE32-4864-889C-1B576C48229C@bellsouth.net> http://www.cartoonbank.com/2004/i-was-wondering-when-youd-notice-theres-lots-more-steps/invt/127579/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Nov 11 17:59:14 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 11 Nov 2010 10:59:14 -0700 Subject: [ExI] Singularity was EP, was Margaret Mead controversy Message-ID: On Thu, Nov 11, 2010 at 5:00 AM, Darren Greer wrote: snip > And if humanity is simply the sum total of my limitations (beginning with my > mortality) then you can keep that definition anyway. I've never been a great > believer in we are what we can't do. Tell that someone who can't feed their > children, and see how it flies. At its base level, it is unethical: a > philosophy fed by the oppressor to the oppressed to keep the status quo. > ?But, and here's the rub, where in the hell do my ethics come from? Same place as everything else, evolution, selection of genes in the past. You do need to understand the gene model of evolution and "inclusive fitness" for this to make sense. > Not from > my limitations but my desire to breach them, and free others from theirs if > they are unable to do it for themselves. > > I am mightily confused about this, and would like to know what others think. > ?I think this may actually be transhumanism 101, but I am just now learning > and absorbing enough to ask this question and actually have a shot at > processing the answer. Even assistance in restating the question into > something less confusing would be helpful. I have been involved with this for a *long* time, clear back to the late 70s when Eric Drexler started talking about nanotechnology. It's so hard to understand the ramifications of what nanotech and AI will be able to do in the context of human desires that I had to resort to fiction to express it. 
http://www.terasemjournals.org/GN0202/henson.html Keith From kanzure at gmail.com Thu Nov 11 19:43:33 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 11 Nov 2010 13:43:33 -0600 Subject: [ExI] Fwd: [NeuralEnsemble] Job Openings - Blue Brain Project In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Eilif Muller Date: Thu, Nov 11, 2010 at 1:26 PM Subject: [NeuralEnsemble] Job Openings - Blue Brain Project To: Neural Ensemble Dear NeuralEnsemble, I would like to draw your attention to the following new openings at the Blue Brain Project in Lausanne, Switzerland: Postdoc in Data-Driven Modeling in Neuroscience (100%) http://jahia-prod.epfl.ch/site/emploi/page-48940-en.html Software Developer on Massively Parallel Compute Architectures (100%) http://jahia-prod.epfl.ch/site/emploi/page-48916-en.html Scientific Visualization Engineer (%100) http://jahia-prod.epfl.ch/site/emploi/page-48941-en.html System Administrator (100%) http://jahia-prod.epfl.ch/site/emploi/page-48939-en.html I would appreciate if you could forward them to qualified persons who might be interested. cheers, Eilif -- You received this message because you are subscribed to the Google Groups "Neural Ensemble" group. To post to this group, send email to neuralensemble at googlegroups.com. To unsubscribe from this group, send email to neuralensemble+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/neuralensemble?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 From sparge at gmail.com Thu Nov 11 18:22:50 2010 From: sparge at gmail.com (Dave Sill) Date: Thu, 11 Nov 2010 13:22:50 -0500 Subject: [ExI] Singularity was EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote: > Same place as everything else, evolution, selection of genes in the > past. > What's the evolutionary/genetic explanation for homosexuality? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Nov 11 20:13:47 2010 From: spike66 at att.net (spike) Date: Thu, 11 Nov 2010 12:13:47 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDC2A06.2090300@satx.rr.com> References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com> Message-ID: <006801cb81dc$f3313e30$d993ba90$@att.net> ... On 11/11/2010 11:14 AM, Richard Loosemore wrote: >> Yudkowsky is one of the easiest people to contact in the entire >> singularity community: just join the SL4 mailing list (which he >> created) and say something that annoys him. >That shouldn't be difficult for "Singularity Utopia"... Damien Broderick Hi Utopia, Do take Damien's comment as the constructive criticism he intended please. When Eli used to hang out here, his theme went something like this: The singularity is coming regardless. Let us work to make it a positive thing. My constructive criticism of your earlier posts was that your theme is: the singularity will be a positive thing regardless. Can you see why Eli would find that attitude annoying and dangerous? Do you see why plenty of people here would find that notion annoying and dangerous? The singularity is not necessarily a good thing, but we know that a no-singularity future is a bad thing. I am in Eli's camp: if we work at it, we can make it a good thing. 
spike From agrimes at speakeasy.net Thu Nov 11 21:48:10 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 11 Nov 2010 16:48:10 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> Message-ID: <4CDC649A.7030409@speakeasy.net> John Clark wrote: > On Nov 11, 2010, at 10:23 AM, Alan Grimes wrote: >> I don't group ideas with things that have tangible reality. > I don't either, ideas are far more important than tangible reality crap. > You don't have ideas, you are ideas, and it's irrelevant what hardware > happens to think you. Absurd, because ideas can't think of ideas. I can think of ideas, so therefore I'm not any set of ideas. >> For example, a brain exists, it's tangible. > What an object does is less tangible than the object itself, but I don't > care, mind is more important than brain; at least it is in this mind's > opinion. Because I'm a strict monist, I can't imagine any way through which the two can be separated. All such proposals are inherently irrational religious thinking. (Due to recent threads, I am now highlighting the strictly monistic nature of my viewpoints.) >> However a computer simulation of a brain has no tangible reality > If so then computer or even calculator arithmetic has no tangible > reality so you'd better not use one on your tax returns if you want to > stay out of jail; Who said I filed tax returns? At this time in our history the government is completely evil. If you are a moral and upright human being, you would become a 1099 worker and never file any forms of any kind with the government. Furthermore, and more importantly, you must never accept any money or special benefit from the government. I like having a roof over my head so I pay my property tax ($6k/yr!), but I do not do business with the fedz. > but on second thought that really is not a problem > because the brain simulating Alan Grimes has no tangible reality either > and you can't put a nonexistent entity in prison. ;) >> even a microscopic examination of the computer chips will not reveal it! > But a microscopic examination of the neurons in your brain will?!! Stick a few electrodes on my skull and you'll see my EEG; you can even determine my state of consciousness from it. Combining that with anatomical evidence, you can prove that a brain is a person. If you do the same to a computer you will not be able to detect any differences except for the general level of computational activity, which has no inherent relationship to the state of the upload. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From stathisp at gmail.com Thu Nov 11 22:29:23 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 12 Nov 2010 09:29:23 +1100 Subject: [ExI] Let's play What If.
In-Reply-To: <4CDC649A.7030409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> Message-ID: 2010/11/12 Alan Grimes : > Stick a few electrodes on my skull and you'll see my EEG, you can even > determine my state of consciousness from it. Combining that with > anatomical evidence, you can prove that a brain is a person. There is no way, even in theory, to prove that a given collection of matter is conscious. > If you do the same to a computer you will not be able to detect any > differences except for the general level of computational activity, > which has no inherent relationship to the state of the upload. Are you claiming there is no correlation between the electrical activity in a computer and the computations it is carrying out? -- Stathis Papaioannou From possiblepaths2050 at gmail.com Thu Nov 11 23:58:47 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 11 Nov 2010 16:58:47 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <006801cb81dc$f3313e30$d993ba90$@att.net> References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com> <006801cb81dc$f3313e30$d993ba90$@att.net> Message-ID: On 11/11/2010 11:14 AM, Richard Loosemore wrote: Yudkowsky is one of the easiest people to contact in the entire singularity community: just join the SL4 mailing list (which he created) and say something that annoys him. >>> Damien Broderick wrote: >That shouldn't be difficult for "Singularity Utopia"... I have a feeling S.U. *may* get on the SL4 list, but will not be allowed to stay on it for very long!!! John ; ) On 11/11/10, spike wrote: > > ... > > On 11/11/2010 11:14 AM, Richard Loosemore wrote: > >>> Yudkowsky is one of the easiest people to contact in the entire >>> singularity community: just join the SL4 mailing list (which he >>> created) and say something that annoys him. > >>That shouldn't be difficult for "Singularity Utopia"... Damien Broderick > > Hi Utopia, > > Do take Damien's comment as the constructive criticism he intended please. > When Eli used to hang out here, his theme went something like this: The > singularity is coming regardless. Let us work to make it a positive thing. > > My constructive criticism of your earlier posts was that your theme is: the > singularity will be a positive thing regardless. > > Can you see why Eli would find that attitude annoying and dangerous? Do you > see why plenty of people here would find that notion annoying and dangerous? > The singularity is not necessarily a good thing, but we know that a > no-singularity future is a bad thing. I am in Eli's camp: if we work at it, > we can make it a good thing. 
> > spike From avantguardian2020 at yahoo.com Fri Nov 12 01:04:22 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Thu, 11 Nov 2010 17:04:22 -0800 (PST) Subject: [ExI] Singularity was EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: <302902.92160.qm@web65608.mail.ac4.yahoo.com> > >From: Dave Sill >To: ExI chat list >Sent: Thu, November 11, 2010 10:22:50 AM >Subject: Re: [ExI] Singularity was EP, was Margaret Mead controversy > > >On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote: > >Same place as everything else, evolution, selection of genes in the >>past. > >What's the evolutionary/genetic explanation for homosexuality? I don't believe exclusively in genetic determinism, but genes are obviously a very powerful driver of behavior. Things like culture, advertising, conditioning, rationality, and other psychosocial forces can demonstrably override genetic behavior in many instances. But homosexuality is not a good example of nurture over nature. In fact, nature is full of homosexuality, so the answer to your question depends on the species you are talking about. In fruit flies, it seems to be due to a mutation of the gene which allows a male fruit fly to distinguish females from other males. In Black Swans, it seems to be a survival adaptation, because two males can defend a nest/chicks better than a heterosexual pair, so they chase the female out after she has laid her eggs. In elephants, it seems to be a form of pederasty. In bonobos, it seems to be a primitive form of economics to diffuse conflict and minimize violence. Dolphins seem to do it because they are just plain horny. Heck, dolphins don't even limit sexual activity to their own species and are probably the only animal that practices "nasal sex" by penetrating the blowholes of their own and other species. http://en.wikipedia.org/wiki/Homosexual_behavior_in_animals#cite_ref-ReferenceA_0-0 Stuart LaForge "To be normal is the ideal aim of the unsuccessful." -Carl Jung From lists1 at evil-genius.com Fri Nov 12 01:27:23 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Thu, 11 Nov 2010 17:27:23 -0800 Subject: [ExI] Technology, specialization, and diebacks...Re: I, love the world. =) In-Reply-To: References: Message-ID: <4CDC97FB.6070404@evil-genius.com> On 11/11/10 4:00 AM, extropy-chat-request at lists.extropy.org wrote: > On Tue, Nov 9, 2010 at 10:21 PM, wrote: >> > The fact people forget is that late Pleistocene hunter-foragers had larger >> > brains than post-agricultural humans! (And were taller, stronger, and >> > healthier...only in the last 50 years have most human cultures regained the >> > height of our distant ancestors.) > By comparison the Apple IIc I had when I was ten years old was more > than twice as powerful as the computer I'm currently using to type > this email. Perhaps fossil evidence shows a larger brainbox but can > say nothing about the neural density / efficiency of the brain > contained therein. Are you suggesting that a sperm whale is 5x > smarter than the average human only because of its larger brain? I believe you mean "more than twice as large" (not "twice as powerful"), so I'll address that point.
The comparison is between late Pleistocene hunter-foragers, of 10,000-40,000 years ago, and the post-agricultural humans that were their immediate descendants. Claiming that their brains were substantially different in "neural density/efficiency" requires substantial justification (that appears nowhere in the scientific literature). Comparing them to a sperm whale is simply specious. McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33, 337-346 http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf Even if you don't buy that argument, it will be difficult to claim that a slightly bigger brain made our immediate ancestors *dumber*. The anatomically modern human was selected for by millions of years of hunting and foraging. (Orrorin, Sahelanthropus, and Ardipithecus -> Homo sapiens sapiens) Any subsequent change due to a few thousand years of agricultural practices is sufficiently subtle that it hasn't affected our morphology -- and, in fact, we're still arguing over whether it exists. My point stands: intelligence must have been not just valuable, but *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have been selected for it. (Brain size of common human/chimp/bonobo ancestors: ~350cc. Brain size of anatomically modern humans: ~1300cc.) From msd001 at gmail.com Fri Nov 12 02:12:18 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Nov 2010 21:12:18 -0500 Subject: [ExI] Technology, specialization, and diebacks...Re: I, love the world. =) In-Reply-To: <4CDC97FB.6070404@evil-genius.com> References: <4CDC97FB.6070404@evil-genius.com> Message-ID: On Thu, Nov 11, 2010 at 8:27 PM, wrote: > On 11/11/10 4:00 AM, extropy-chat-request at lists.extropy.org wrote: >> On Tue, Nov 9, 2010 at 10:21 PM, wrote: >>> > The fact people forget is that late Pleistocene hunter-foragers had >>> > larger >>> > brains than post-agricultural humans! (And were taller, stronger, and >>> > healthier...only in the last 50 years have most human cultures >>> > regained the >>> > height of our distant ancestors.) >> By comparison the Apple IIc I had when I was ten years old was more >> than twice as powerful as the computer I'm currently using to type >> this email. Perhaps fossil evidence shows a larger brainbox but can >> say nothing about the neural density / efficiency of the brain >> contained therein. Are you suggesting that a sperm whale is 5x >> smarter than the average human only because of its larger brain? > I believe you mean "more than twice as large" (not "twice as powerful"), so > I'll address that point. Let me clarify. Typically when we speak of larger brains we're talking about more intelligence as in, "That evil-genius is a large-brain individual compared to us normally small-brain people." I went on what I assumed was your suggestion that Pleistocene hunter-foragers had "larger brains" than modern humans. I would follow the thinking that modern technology has made it possible for the average human to grow dumber with each generation while a decreasing population of opportunist smarties continues to benefit from this imbalance. > The comparison is between late Pleistocene hunter-foragers, of 10,000-40,000 > years ago, and the post-agricultural humans that were their immediate > descendants.
Claiming that their brains were substantially different in > "neural density/efficiency" requires substantial justification (that appears > nowhere in the scientific literature). Comparing them to a sperm whale is > simply specious. No justification is possible without a cryotank full of preserved Pleistocene brains. ... and if that ever shows up it'll raise many more questions than answers. Of course the sperm whale comment was specious. ;) > McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the > relationship between in vivo brain volume and intelligence. Intelligence, > 33, 337-346 > http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf > Even if you don't buy that argument, it will be difficult to claim that a > slightly bigger brain made our immediate ancestors *dumber*. I think the margin of error in measuring intelligence is far higher than the performance differences between the various models. Even with some magical means of copying the structural bits of a brain, the fuel going into it probably has a similar performance impact as any other machine. Ex: high octane fuel & a perfect maintenance regimen on a racecar yields significantly better output than lower quality fuel/care on an engine identically machined to within five-nines tolerance. Given the range of energy metabolism, food quality, brain usage training, etc., it's almost impossible to compare two modern brains, let alone distant time period brains. > > The anatomically modern human was selected for by millions of years of > hunting and foraging. (Orrorin, Sahelanthropus, and Ardipithecus -> Homo > sapiens sapiens) Any subsequent change due to a few thousand years of > agricultural practices is sufficiently subtle that it hasn't affected our > morphology -- and, in fact, we're still arguing over whether it exists. > > My point stands: intelligence must have been not just valuable, but > *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have > been selected for it. (Brain size of common human/chimp/bonobo ancestors: > ~350cc. Brain size of anatomically modern humans: ~1300cc.) Modern human was also selected for running away from things that we couldn't kill first. Probably a considerable amount of our cooperative behaviors came from the discovery that many small animals are able to overpower a large threat when they work together - utilizing that prized possession: intelligence. Have you considered that perhaps intelligence is only secondarily selected for? Perhaps the more general governing rule is energy efficiency. The intelligence to do more work with less effort facilitates energy efficiency, so it has value. Tools make difficult tasks easier, so they become valuable too. Is a back-hoe inherently valuable? Only if the job is to dig. Without the task, that tool is a liability. Nature doesn't need overt intelligence for the energy efficient to proliferate; and by doing so the environment is made more competitive. From msd001 at gmail.com Fri Nov 12 02:19:58 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Nov 2010 21:19:58 -0500 Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> Message-ID: 2010/11/11 John Clark : > > There is no identity issue, there is only an identity superstition. "There is no Dana; only Zuul" - Zuul From msd001 at gmail.com Fri Nov 12 02:39:46 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 11 Nov 2010 21:39:46 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDC649A.7030409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> Message-ID: 2010/11/11 Alan Grimes : > Absurd because ideas can't think of ideas. I can think of ideas so > therefore I'm not any set of ideas. Absurd? Absurd because of another absurdity? You are making a point by authority when you have no authority. I recommend you try the exercise Stathis suggested. Only after a discourse where we agree on some grounds can you begin to build a convincing argument. > Stick a few electrodes on my skull and you'll see my EEG, you can even > determine my state of consciousness from it. Combining that with > anatomical evidence, you can prove that a brain is a person. > > If you do the same to a computer you will not be able to detect any > differences except for the general level of computational activity, > which has no inherent relationship to the state of the upload. Can you post a URL to the records you kept from this experiment? I would be more likely to conclude that a brain is an unusual piece of meat that, while fresh, is able to produce detectable electrical impulses and that when no longer fresh is able to produce only an offensive odor. Nowhere in that affirmation can I assert your (or anyone else's) consciousness. I am certainly unable to prove that a lump of meat producing electrical impulses is a person. If electrical activity is proof enough of conscious personhood then any common piezoelectric crystal could qualify. Oh right, the EEG is a complex time-dependent series of impulses and a simple oscillating frequency quartz crystal isn't good enough. When the computer (your second example) starts producing the same time-dependent series of impulses as the control/reference EEG that "proves" the personhood of the brain to which it is hooked, will you concede that the computer is running a person? When the computer-hosted EEG pattern that has already synchronized its pattern with the biologically-hosted EEG pattern proving the conscious personhood of Alan Grimes detects the sudden loss of signal from the biological system, does it report that Alan Grimes has died? Perhaps merely the link was severed, sure.
But let's assume the system failure is not in the link, but in the biological system. As far as I (Mike D.) can tell, the computer-hosted pattern could continue to be fanatically against uploading and send emails to the list as such. I'll grant that I have no sense of your qualia. I wonder though if you do either. :) From agrimes at speakeasy.net Fri Nov 12 02:44:05 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 11 Nov 2010 21:44:05 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> Message-ID: <4CDCA9F5.2040208@speakeasy.net> Mike Dougherty wrote: > 2010/11/11 John Clark : >> >> There is no identity issue, there is only an identity superstition. > > "There is no Dana; only Zuul" - Zuul My, what a lovely singing voice you must have. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From agrimes at speakeasy.net Fri Nov 12 03:12:04 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 11 Nov 2010 22:12:04 -0500 Subject: [ExI] Existential Nihilism. Message-ID: <4CDCB084.4010806@speakeasy.net> Since I'm the single most provocative poster on this list, I'll keep up my tradition with a spiel for the philosophy which guides my understanding of the universe. Existential nihilism is a philosophy for understanding the world. It is not intended to make the user look smart to other people; it is absolutely not intended to make you feel good about anything. On the contrary, it is intended to make you feel bad about exactly the things you need to change in yourself to make you feel genuinely good. On the other hand, it does not tell you what you should feel bad about or what you should do about those things. What it does do is blow away all the bullshit you are immersed in from the culture and the propagandists, and let you see the true mechanisms behind how the world works. Existentialism contradicts essentialism. Essentialism insists that everything has an essence that dominates it. For example, that a person is defined by his brain pattern. That this brain pattern can contain one's essence which can be bestowed upon any 8088 that happens to be available. Existentialism says that you are exactly what you are, I am what I am, and the world is what it is. Sartre gets into a psychological phenomenon he calls "Existential Angst". A great deal of human psychology can be explained by an effort to escape existential angst. Existential angst is what you feel when you switch off your persona and put yourself into a meditative trance (om) where you focus all of your attention on your senses and the nature of your own temporary existence. If you do it right you will feel angst. You will feel limitations in your own body and mind that you spend most of your time ignoring. A good existential nihilist intentionally keeps his mind as close to this feeling as he is able. Nihilism comes in when you realize that the world is nothing but this. Ideas and concepts are tools, not things. No useless concept should be entertained and all ideas must always be open to examination.
I am not still arguing with the uploaders because I have failed to consider their ideas; I argue with them because I have considered them and cannot reconcile them with my own world view, because they rely on a dualist/essentialist viewpoint. The threads prove that they refuse to leave it alone and continue to try to change my mind even though I have not spent any time in recent memory trying to get into their personal space. (Admittedly, I do definitely take a pot shot at them every now and again, but mostly because I feel left out of the transhumanist movement.) Me: I think the world r0x0r$ and I don't want anyone turning it into computronium. Them: Are you still going on about your pathetic anti-uploading luddism, once uploaded everything will have so many more *PIXELS*!!! In general their arguments have strongly trended towards being more patronizing, so for that and several other reasons, I won't respond to posts which nit-pick things I've said. I will only respond to truly insightful posts, or sufficiently well crafted flame bait. Ultimate nihilism is achieved when you see nothing but chemically bonded swarms of atoms around you. By reaching this state, all forms of self-delusion are nullified. Our consciousness, however, defies ultimate nihilism because it exists. We each should believe in our own consciousness and suspect those around us also exist (except for the philosophical, brain-obsessed zombies that I call uploaders). Now, what do we do with this existence in a wilderness of clumps of atoms? Knowing your own mortality, that becomes one obvious thing to work on. There are other things you might want to change about yourself, but that quickly spins off into the world of personal idiosyncrasies. As a matter of personal policy, I don't criticize other people's choices because I don't want to be so judged. I criticize uploaders only because they continue to argue that they should be allowed to reduce the world to computronium for no other reason than that it is the object of their fetish. On the day that main-line transhumanism is about immortality and body modification and uploaders are marginalized, I'll gleefully shut up about it. Anyone who has ever cracked a textbook knows that survival in the world is a hard, grand challenge problem. It is only our exquisitely evolved forms that allow us to forget this stark truth. Only a transhumanism that faces all the challenges of survival head-on can even hope to improve anything about the human condition. Strong AI is indispensable for even approaching the problem. But then I see several organizations dedicated to brain uploading, a tiny handful of individuals working on AI, and hardly anything at all (explicitly) working towards medical enhancements. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Fri Nov 12 04:44:34 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 11 Nov 2010 21:44:34 -0700 Subject: [ExI] Homosexuality was Singularity was EP, was Margaret Mead controversy Message-ID: On Thu, Nov 11, 2010 at 8:43 PM, Dave Sill wrote: > > On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote: > >> Same place as everything else, evolution, selection of genes in the >> past. >> > What's the evolutionary/genetic explanation for homosexuality? That is somewhat the wrong question. The right question is: where does heterosexuality come from? We know a considerable amount about this and how it randomly goes "wrong" (to be politically correct).
The embryonic default is female and female sexual orientation, i.e., attracted to males. We know a considerable amount about the biochemistry of how this happens and what can go "wrong" with it. For example, male homosexuality rises with the number of previous male births for a given mother, for a well understood reason. In the EEA (which included polygamy) a few males being oriented toward males made very little difference in the survival of genes. I can go into a lot more detail if you really care. Keith Henson From jonkc at bellsouth.net Fri Nov 12 05:48:34 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Nov 2010 00:48:34 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDC649A.7030409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> Message-ID: <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> On Nov 11, 2010, at 4:48 PM, Alan Grimes wrote: > ideas can't think of ideas. Absolutely untrue: ideas can be about ideas, in fact most ideas are. > I'm a strict monist So you think everything is one thing and you should not break a thing into manageable pieces to understand it. So you can't hope to understand anything until you understand everything. So the most likely result of this philosophy is not understanding anything. So I'm glad I'm not a strict monist. > I can't imagine any way through which the two can be separated. All such proposals are inherently irrational I see no irrationality in recognizing that a thing and what a thing does are not the same thing. A race car goes fast, but a race car is not a "goes fast", nor is a brain a mind. > Stick a few electrodes on my skull and you'll see my EEG And stick a few electrodes in a computer motherboard and you'll see its electrical signals. > you can even determine my state of consciousness from it. Don't be ridiculous. The only consciousness we can directly observe is our own; other conscious entities can only be assumed from intelligent behavior, and it matters not one bit if that behavior comes from a man or a machine. > If you do the same to a computer you will not be able to detect any > differences except for the general level of computational activity, > which has no inherent relationship to the state of the upload. I have no idea what that means. John K Clark From jonkc at bellsouth.net Fri Nov 12 06:36:17 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Nov 2010 01:36:17 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com> <006801cb81dc$f3313e30$d993ba90$@att.net> Message-ID: <58D8FE36-3039-4F70-95BC-E03F79A15050@bellsouth.net> On Nov 11, 2010, at 6:58 PM, John Grigg wrote: > I have a feeling S.U. *may* get on the SL4 list, but will not be > allowed to stay on it for very long!!!
Unfortunately the SL4 list is effectively dead; since February you could count the number of posts on the fingers of one hand. The Singularity list has a bit more life. John K Clark From jonkc at bellsouth.net Fri Nov 12 06:27:45 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Nov 2010 01:27:45 -0500 Subject: [ExI] Existential Nihilism. In-Reply-To: <4CDCB084.4010806@speakeasy.net> References: <4CDCB084.4010806@speakeasy.net> Message-ID: <6794E5E5-C526-476E-98F8-5646CDDF3C10@bellsouth.net> On Nov 11, 2010, at 10:12 PM, Alan Grimes wrote: > Since I'm the single most provocative poster on this list Provocative? Yours is the conventional (and erroneous) view believed by 99.9% of the general public and even most members of this list who should know better. > Existentialism says that you are exactly what you are, I am what I am, and the world is what it is. Yes, I must admit that's true, A is most certainly equal to A, but that revelation doesn't strike me as being particularly deep. > I don't want anyone turning it [the world] into computronium. Fine, there is no disputing matters of taste, but your personal wishes or mine on this matter are irrelevant. > Our consciousness, however, defies ultimate nihilism because it exists. What's with this "our" business? MY consciousness exists, I only have theories about yours. > the philosophical, brain-obsessed zombies that I call uploaders). Uploaders like me are mind-obsessed, you are brain-obsessed; and if you believe in philosophical zombies then you can't believe in Darwin's Theory of Evolution, because the two are 100% incompatible. John K Clark From steinberg.will at gmail.com Fri Nov 12 09:29:52 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 12 Nov 2010 03:29:52 -0600 Subject: [ExI] A humble suggestion Message-ID: The thing is, collected here in the ExI chat list are a pretty handy set of thinkers/engineers, spread around the world (sort of.) In fact, I can generalize this fact to say that almost all of the people interested in this movement fall into that category as well. Now look. This is a present dropped into your lap. Instead of only discussing lofty ideals and philosophy, we (H+) should focus on the engineering of tools which will eventually be very important in the long run for humanity, and for our goals in particular. List of tools we need to invent/things we need to do: -A very good bidirectional speech-to-speech translator. For spreading the gospel, once H+ wisens up enough to start including the proletariat. -Neoagriculture. This would mean better irrigation systems, GMO crops that can easily harness lots of sun energy and produce more food, maybe machines/instructions for diy fertilizer. -Better Grid--test experimental grid where people opt to operate, on property, efficient windmills/solar panels/any electricity they can make for $$$ -Housing projects that work, or some sort of thing where you pay people to build their own house/project building. -Fulfilling jobs for proles that also help society/space travel/humanism/H+. -So many more, I know you can think of some! I bet you have pet projects like these. Ideas, at least. By Le Châtelier's principle, improving these fucked up problems that exist for much of society will give us much more leeway and ability to do transhumanisty things, AND we can do them in the meantime.
It has to happen eventually, unless you have some fancy vision of the H+ elect ascending to cyberheaven and leaving everyone else behind. Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to problematic regions, experimenting with those tools up there. Everyone will love H+. The movement will have lots of social power and then we can get shit done. Right? From lists1 at evil-genius.com Fri Nov 12 10:04:08 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Fri, 12 Nov 2010 02:04:08 -0800 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: References: Message-ID: <4CDD1118.2090502@evil-genius.com> >> > McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the >> > relationship between in vivo brain volume and intelligence. Intelligence, >> > 33, 337-346 >> > http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf >> > Even if you don't buy that argument, it will be difficult to claim that a >> > slightly bigger brain made our immediate ancestors *dumber*. > I think the margin of error in measuring intelligence is far higher > than the performance differences between the various models. Even > with some magical means of copying the structural bits of a brain, the > fuel going into it probably has a similar performance impact as any > other machine. Ex: high octane fuel & a perfect maintenance regimen on > a racecar yields significantly better output than lower quality > fuel/care on an engine identically machined to within five-nines > tolerance. Given the range of energy metabolism, food quality, brain > usage training, etc., it's almost impossible to compare two modern > brains, let alone distant time period brains. It is well established that the hunter-forager diet is superior to the post-agricultural diet in all respects: http://www.ajcn.org/cgi/content/full/81/2/341 ...as corroborated by the fact that all available indicators of health (height, weight, lifespan) crash immediately when a culture takes up farming -- and skeletal disease markers increase dramatically. http://www.environnement.ens.fr/perso/claessen/agriculture/mistake_jared_diamond.pdf And it wasn't until the year 1800 that residents of the richest countries of Europe reached the same caloric intake as the average tribe of hunter-gatherers. http://www.econ.ucdavis.edu/faculty/gclark/papers/Capitalism%20Genes.pdf Which brings me back to my original point: it takes substantial intelligence to make stone tools and weapons, memorize a territory of tens (if not hundreds) of square miles, know where prey and edibles will live and grow throughout the seasons, find them, perhaps chase and kill them, butcher them, start fires with nothing but a couple pieces of wood, etc., etc. If it didn't, intelligence would not have been selected for, and we'd still be little 3-foot Ardipithecuses with 350cc brains. I'm genuinely not sure whether you're objecting to my point, or just throwing up objections with no supporting evidence because you like messing with people. I'm going to start asking you to provide evidence, instead of just casting a bunch of doubts with no basis and no theory to replace what you're attacking. That's a creationist tactic. >> > The anatomically modern human was selected for by millions of years of >> > hunting and foraging.
(Orrorin, Sahelanthropus, and Ardipithecus -> Homo >> > sapiens sapiens) Any subsequent change due to a few thousand years of >> > agricultural practices is sufficiently subtle that it hasn't affected our >> > morphology -- and, in fact, we're still arguing over whether it exists. >> > >> > My point stands: intelligence must have been not just valuable, but >> > *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have >> > been selected for it. (Brain size of common human/chimp/bonobo ancestors: >> > ~350cc. Brain size of anatomically modern humans: ~1300cc.) > Modern human was also selected for running away from things that we > couldn't kill first. Probably a considerable amount of our > cooperative behaviors came from the discovery that many small animals > are able to overpower a large threat when they work together - > utilizing that prized possession: intelligence. Everything is selected for running away from things we can't kill first. Even lions and crocodiles run away from hippos. > Have you considered that perhaps intelligence is only secondarily > selected for? Perhaps the more general governing rule is energy > efficiency. Everything is secondarily selected for, relative to survival through at least one successful reproduction. I'm not sure that's a useful distinction. And I refuse to enter into a "define intelligence" clusterf**k, because it's all completely ancillary to my original point. From bbenzai at yahoo.com Fri Nov 12 12:56:32 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Nov 2010 12:56:32 +0000 (GMT) Subject: [ExI] Let's play What If. In-Reply-To: Message-ID: <789045.60303.qm@web114402.mail.gq1.yahoo.com> John K Clark wrote: > mind is more important than brain; at least it > is in this mind's opinion. And Alan Grimes replied: > Because I'm a strict monist, I can't imagine any way > through which the two can be separated. As I've already pointed out, this 'monism' of yours seems to reject what other people have called 'property dualism', or the concept that objects have properties. This concept is not an opinion, it's an established fact. Nobody can rationally deny it. To acknowledge that material objects have non-material (and non-mystical) properties is not really 'dualism' at all, it's materialism, and the materialistic view leads inexorably to the possibility of uploading, as recognised by most transhumanists. Your statement above implies that you can't see any way that a dog and a bark can be separated. I can think of dozens of ways, and I'm sure you can too if you try. The point is that it's these non-material (and non-mystical) properties that are important, not the dumb matter that exhibits them. The thing that mystifies me is why the argument that two atoms of the same element are completely and utterly indistinguishable and interchangeable isn't decisive in this discussion. The fact that I've survived endless changes of material proves conclusively that I am not the matter that my body and brain are made from. Why is this so hard to understand? Ben Zaiboc From kanzure at gmail.com Fri Nov 12 13:18:05 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 12 Nov 2010 07:18:05 -0600 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: 2010/11/12 Will Steinberg > Instead of only discussing lofty ideals and philosophy, we (H+) should > focus on the engineering of tools which will eventually be very important in > the long run for humanity, and for our goals in particular.
> Well, what have you been working on? wetware? http://groups.google.com/group/diybio hardware? http://groups.google.com/group/openmanufacturing software? Let's hear it. > List of tools we need to invent/things we need to do: > very rudimentary: http://diyhpl.us/cgit/skdb/tree/doc/proposals/trans-tech.yaml (It's apt-get for technology.) > -Housing projects that work, or some sort of thing where you pay people to > build their own house/project building. > Hextatic? Bucky's dreams? - Bryan http://heybryan.org/ 1 512 203 0507 From bbenzai at yahoo.com Fri Nov 12 13:06:02 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Nov 2010 13:06:02 +0000 (GMT) Subject: [ExI] A humble suggestion In-Reply-To: Message-ID: <168050.32379.qm@web114419.mail.gq1.yahoo.com> Will Steinberg suggested: > Instead of only discussing > lofty ideals and > philosophy, we (H+) should focus on the engineering of > tools which will > eventually be very important in the long run for humanity, > and for our goals > in particular. I'm sure that many of us can do both. In fact, I know for a fact that some of us are, and I'm pretty sure that there are quite a few people 'doing stuff' as well as talking on here. I think that it's important to not only do things, but to also talk about them or about their theoretical and philosophical aspects. Also, talking is a form of doing. I wonder how many lurkers there are here, who are possibly being affected by the ideas we bandy about? Ben Zaiboc From agrimes at speakeasy.net Fri Nov 12 13:38:57 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 12 Nov 2010 08:38:57 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <789045.60303.qm@web114402.mail.gq1.yahoo.com> References: <789045.60303.qm@web114402.mail.gq1.yahoo.com> Message-ID: <4CDD4371.1010403@speakeasy.net> Ben Zaiboc wrote: > And Alan Grimes replied: >> Because I'm a strict monist, I can't imagine any way >> through which the two can be separated. > As I've already pointed out, this 'monism' of yours > seems to reject what other people have called > 'property dualism', or the concept that objects have > properties. This concept is not an opinion, it's an > established fact. Nobody can rationally deny it. Most of the things that people call "properties" are actually artifacts of human perception. Anything beyond what is strictly scientifically detectable (such as the number of atoms in a substance) is nothing more than something that a human imagines and then forces on the perception. This gets to the Platonic theory of forms. What it means is that things such as vases, speakers, symbols on calculator keys, can only exist in the mind. In the world there is nothing but arrangements of matter which may or may not closely resemble the form you choose to assert over it. > To acknowledge that material objects have non-material > (and non-mystical) properties is not really 'dualism' > at all, it's materialism, and the materialistic view > leads inexorably to the possibility of uploading, as > recognised by most transhumanists. Bullshit. > Your statement above implies that you can't see any > way that a dog and a bark can be separated. I can > think of dozens of ways, and I'm sure you can too if > you try. The sound of a bark is not technically a bark. > The point is that it's these non-material (and > non-mystical) properties that are important, not the > dumb matter that exhibits them.
The dumb matter always overrules our stupid, ill-conceived notions about it. > The thing that mystifies me is why the argument that > two atoms of the same element are completely and > utterly indistinguishable and interchangeable isn't > decisive in this discussion. The fact that I've > survived endless changes of material proves > conclusively that I am not the matter that my body and > brain are made from. Why is this so hard to > understand? Non sequitur, because the routine replacement of some of your atoms at some low rate is not evidence of anything whatsoever. It means nothing more than that it is a natural function of your body to replace some of your atoms at some rate. > Ben Zaiboc -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Fri Nov 12 13:58:59 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 12 Nov 2010 06:58:59 -0700 Subject: [ExI] A humble suggestion Message-ID: On Fri, Nov 12, 2010 at 5:00 AM, Will Steinberg wrote: > The thing is, collected here in the ExI chat list are a pretty handy set of > thinkers/engineers, spread around the world (sort of.) In fact, I can > generalize this fact to say that almost all of the people interested in this > movement fall into that category as well. Now look. This is a present > dropped into your lap. Instead of only discussing lofty ideals and > philosophy, we (H+) should focus on the engineering of tools which will > eventually be very important in the long run for humanity, and for our goals > in particular. I hope you have better luck than I had over the past several years. > List of tools we need to invent/things we need to do: > > -A very good bidirectional speech-to-speech translator. For spreading the > gospel, once H+ wisens up enough to start including the proletariat. There is considerable work being done in this area. I think Google is one of the companies working on this. They are doing it (as I recall) for phone service translation, but it should work on a single phone. The computation is currently intense enough to need cloud computing. > -Neoagriculture. This would mean better irrigation systems, GMO crops that > can easily harness lots of sun energy and produce more food, maybe > machines/instructions for diy fertilizer. This is a hard problem. You know how the efficiency of solar PV systems sucks? Well, photosynthesis is a lot worse, and there are good reasons to think it can't be made much better. As for diy fertilizer, that's a snap. Pee on your lawn. > -Better Grid--test experimental grid where people opt to operate, on > property, efficient windmills/solar panels/any electricity they can make for > $$$ Putting power into power lines has been solved. The problem with solar and wind is they are dilute and intermittent. So it takes large and expensive structures to collect energy, and then there is the storage problem, which can be ignored if the source is small compared to other sources. A kW running full time supplies roughly $800 worth of electricity in ten years for each cent per kWh you charge for it. So if you want to sell power low enough to undercut coal at around 4 cents, you have to sell the power for 2 cents, and the cost per kW would need to be $1600. The cost for renewable sources is 10-20 times that high. I have for some years reported here on conceptual progress with power satellite transportation and, more recently, about StratoSolar. If you want to work on such projects, I often have spreadsheets or mathematical models that need review.
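Here is the kind of arithmetic I mean, as a quick Python sketch. The round-the-clock availability is an idealization (my round $800 figure above already allows for some downtime), and the 4-cent coal benchmark is the assumption stated above:

    # Ten-year revenue from 1 kW of continuous generation, per cent/kWh charged.
    HOURS_PER_YEAR = 24 * 365
    kwh_per_decade = 1 * HOURS_PER_YEAR * 10     # 87,600 kWh from one kW, flat out
    dollars_per_cent = kwh_per_decade * 0.01     # revenue per cent/kWh of price
    print(dollars_per_cent)                      # ~876; call it $800 with downtime

    # To undercut 4-cent coal by selling at 2 cents, capital cost per kW
    # must come in at roughly twice that figure or less:
    print(2 * dollars_per_cent)                  # ~$1600-1750 per kW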
Or I will check what you can model. > -Housing projects that work, or some sort of thing where you pay people to > build their own house/project building. Have you ever built a house? It takes a considerable collection of skills. > -Fulfilling jobs for proles that also help society/space travel/humanism/H+. > > -So many more, I know you can think of some! I bet you have pet projects > like these. Ideas, at least. > > > By Le Châtelier's principle, improving these fucked up problems that exist > for much of society will give us much more leeway and ability to do > transhumanisty things, AND we can do them in the meantime. It has to happen > eventually, unless you have some fancy vision of the H+ elect ascending to > cyberheaven and leaving everyone else behind. > > Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to > problematic regions, experimenting with those tools up there. Everyone will > love H+. The movement will have lots of social power and then we can get > shit done. Right? I started off thinking you were serious, but by the time I reached this point . . . you must be putting us on. Keith From x at extropica.org Fri Nov 12 14:07:55 2010 From: x at extropica.org (x at extropica.org) Date: Fri, 12 Nov 2010 06:07:55 -0800 Subject: [ExI] Let's play What If. In-Reply-To: <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: 2010/11/11 John Clark : > The only consciousness we can directly observe is our > own... John, I like your hard-edged no-nonsense approach to much of the content of this discussion, but your assertion that I quoted above highlights the incoherence at the very core demonstrated by you, Descartes, and many others. Such a singularity of self, with its infinite regress, can't be modeled as a physical system. Nor is it needed. See Dennett for a cogent philosophical explanation, or Ismael & Pollock's nolipsism for a logical-semantic view, or Metzinger's Being No One for a very detailed exposition of the experimental evidence, or Hofstadter's Strange Loop for a sincere but more muddled account, or even Alan Watts' The Taboo Against Knowing Who You Are for a more intuitionist approach. Digest and integrate this thinking, and then we might be able to move this conversation forward with extension from a more coherent basis. - Jef From singularity.utopia at yahoo.com Fri Nov 12 11:19:53 2010 From: singularity.utopia at yahoo.com (Singularity Utopia) Date: Fri, 12 Nov 2010 11:19:53 +0000 (GMT) Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? Message-ID: <472742.97978.qm@web24912.mail.ird.yahoo.com> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, that's exactly the info I needed. John Grigg, you say I may not be allowed to stay long on the SL4 list? Why is this? Are Singularitarians an intolerant group leaning towards fascism? Spike, you say Eliezer's theme was: "The singularity is coming regardless.
Let us work to make it a positive thing."... and you say: "My constructive criticism of your earlier posts was that your theme is: the singularity will be a positive thing regardless." Yes, it is my intention to make the Singularity a positive thing regardless. I say the Singularity will be a positive thing regardless of what anyone else says because, and this is the important bit, the power of my expectations positively manifested WILL create utopia. It is all about my determination, self-belief, the power of expectations, self-confidence, confidence in my abilities. You may think my confidence is blind, misguided, or foolishly overconfident but I assure you I will create utopia even if I have to do it all on my own, battling against a legion of pessimists. In a sane world I cannot see why Eliezer would think my attitude would be annoying or dangerous, but the world is insane, therefore irrational responses to my views are likely. I actually think people who are overly-obsessed with friendly AI are very dangerous due to their misguided attempts to attain rationality and "overcome bias". My following webpage regarding the dangerous nature of people obsessed with friendly-AI could possibly enlighten you: http://singularity-2045.org/ai-dangerous-hostile-unfriendly.html Spike, you say: "The singularity is not necessarily a good thing, but we know that a no-singularity future is a bad thing." I assure you the Singularity WILL absolutely without doubt be a good thing, but this will not be through my inaction; it will be because the power of my intellect positively manifested has made the Singularity a good thing. I will utilize a Self-Fulfilling Prophecy, which I have previously mentioned. Furthermore, a negative intelligence explosion would be oxymoronic intelligence. Intelligence will be "intelligent", therefore the explosion will be utopian if truly intelligent people define "intelligence". The problem with some people who think they are intelligent is that they are misguided about the definition of intelligence; they are actually rather stupid. I will utilize the concept of self-fulfilling prophecy to create utopia. There is no need to doubt the future. Utopia is coming. Rest assured you can expect utopia. I encourage you all to put in the extra effort to make it happen sooner instead of later. I am the Singularity! I am utopia. http://en.wikipedia.org/wiki/Self-fulfilling_prophecy Regarding the fallacy of "Overcoming Bias" I will soon publish a rebuttal on my blogs. The desire to overcome bias is in itself a bias, but such pseudo-rational people are unaware of their bias due to the fact they are "bias-deniers" (bias-fascists): they are overcoming bias, thus they are creating a blind-spot regarding their bias. Bias cannot be overcome, but if you try to overcome it you will decrease your self-awareness. http://yudkowsky.net/rational/overcoming-bias Here is my forthcoming blog (in progress) regarding "the Bias of overcoming Bias": The major bias plaguing so-called rationalists is their glaring blind-spot regarding the power of Self-fulfilling Prophecy. Contrary to their biased assertions (that bias should be overcome), I state bias is a fundamental part of human consciousness. Bias should be utilized constructively; it should not be transcended. Self-fulfilling Prophecy is a preeminent usage of bias. The solution is to be highly aware. To transcend bias is tantamount to lobotomizing the mind. Bias is the heart of evaluation, judgment, existence. We are biased regarding pain and pleasure, for example.
If we were not biased regarding pain and pleasure we would be mindless robots. Do Transhumanists seek the evolution of the human organism to a point where we are stoical machines indifferent to emotions? Wishful-thinking, positive-thinking, and overconfidence can be very effective when applied via keen intellect. Sadly the so-called "rationalist-less-wrong" movement (overcoming bias) and similar Transhuman-futurist-cliques are deficient in intellect. Furthermore they are unaware of their intellectual deficiencies due to their bias; they are biased about bias, thus they want to overcome it, but they are unaware of their bias. http://www.overcomingbias.com is a good example of flawed thinking. Sadly I suspect the proponents of overcoming bias and other similar endeavours will be negatively-biased regarding my contributions? Regards Singularity Utopia http://en.wikipedia.org/wiki/Self-fulfilling_prophecy http://singularity-2045.org/hyper-mind-explosion.html http://singularity-2045.org/ Here is an article I wrote about subjectivity/objectivity a while ago: http://spacecollective.org/SingularityUtopia/6133/Objectivity-Fallacy-a-plea-for-increased-subjectivity UTOPIA IS COMING! From rpwl at lightlink.com Fri Nov 12 16:03:53 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 11:03:53 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: <4CDD6569.5070509@lightlink.com> Singularity Utopia wrote: > Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, > that's exactly the info I needed. > > John Grigg, you say I may not be allowed to stay long on the SL4 list? > Why is this? Are Singularitarians an intolerant group leaning towards > fascism? Er.... you may be misunderstanding the situation. ;-) You will be unwelcome and untolerated on SL4, because: a) The singularity is, for Eliezer, a power struggle. It is a matter of which personality "owns" these ideas .... who determines the agenda, who is seen as the pre-eminent power broker .... who has the largest army of volunteers to spread the message. And in that situation, you, my friend, are a Threat. Even if your ideas were more sensible than his you would be attacked and denounced, for the simple reason that you would not be meekly conforming to the standard view of the singularity (as defined by The Wise One). b) Your assertions are wildly egotistical (viz. "I am the Singularity! I am utopia"). This is garbage: you are not the singularity, you are a person. The singularity is a posited future event, and a set of ideas about that event. Your ego is, sadly, not enough to define or shape that event. Now, history may well turn out in such a way that one person's ego really does define and shape the singularity. But you can bet your life that that person will never do it by openly DECLARING that they are going to shape and define the thing. Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid. So have fun on SL4, if there is anything left of it.
If you don't actually get banned within a couple of months it will be because SL4 is (as John Clark claims) actually dead, and nobody gives a damn what you say there. Richard Loosemore From spike66 at att.net Fri Nov 12 15:48:47 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 07:48:47 -0800 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: <006801cb8281$188f7d00$49ae7700$@att.net> . On Behalf Of Will Steinberg . -Housing projects that work, or some sort of thing where you pay people to build their own house/project building. I do have a better idea: housing projects that work, some sort of thing where the builder collects money from herself to buy the land and the materials, then builds her own house. Cuts out the inefficient and corrupt middle man. -Fulfilling jobs for proles that also help society/space travel/humanism/H+. I do hope you succeed at that one, Will. spike From protokol2020 at gmail.com Fri Nov 12 15:59:42 2010 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Fri, 12 Nov 2010 16:59:42 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: As I see it, they are biased toward Bayes. They will say that a good bias is not a bias at all. It is only bias when it's wrong. And they want to be less wrong. From jonkc at bellsouth.net Fri Nov 12 16:00:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Nov 2010 11:00:28 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDD4371.1010403@speakeasy.net> References: <789045.60303.qm@web114402.mail.gq1.yahoo.com> <4CDD4371.1010403@speakeasy.net> Message-ID: <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> On Nov 12, 2010, at 8:38 AM, Alan Grimes wrote: > Most of the things that people call "properties" are actually artifacts > of human perception. My consciousness is obviously a human perception; an artifact is an undesirable alteration of data, so unless you are willing to argue that consciousness is undesirable, consciousness is not an artifact. However it is true that consciousness is a side effect of intelligence; Darwin taught us that in 1859. > Anything beyond what is strictly scientifically detectable (such as the number of atoms in a substance) is nothing more than something that a human imagines and then forces on the perception. Or to put it another way, all the really important things are the invention of mind, the invention of what the brain does. > What it means is that things such as vases, speakers, and symbols on calculator keys can only exist in > the mind. Yes, I couldn't have put it better myself, but I'm surprised to hear you say that as it strengthens the case for uploading. The thing we value, the thing we want to survive, is not 70 kilograms of hydrogen oxygen carbon and nitrogen but our wife or husband and ourselves. > In the world there is nothing but arrangements of matter Correct again, and atoms are generic and the information on how they are arranged can be duplicated; remind me again why uploading won't work. > Non sequitur, because the routine replacement of some of your atoms at > some low rate The term "low rate" has meaning only if the amount of time is specified.
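To make that concrete, here is a minimal sketch in Python, assuming atoms get swapped out with a simple exponential turnover; the 5-year turnover time is a number I am inventing purely for illustration:

    import math

    def fraction_replaced(window_years, turnover_years=5.0):
        # Fraction of atoms replaced within a window, under an
        # exponential turnover model (illustrative assumption only).
        return 1.0 - math.exp(-window_years / turnover_years)

    for window in (1e-9, 1.0, 100.0):   # nanoseconds, a year, a century
        print(window, fraction_replaced(window))
    # Prints roughly 0.0, 0.18, and 1.0: the same process reads as a "low
    # rate" or as near-total replacement, depending on the window chosen.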
On a particle physics timescale the rate of atomic replacement is indeed low, but on a geological timescale it is virtually instantaneous. There is no preferred timescale in physics; one is as valid as another. > is not evidence of anything whatsoever. It certainly is not evidence that atoms have anything to do with personal identity; atoms have no individuality themselves, so it's not very surprising that they can't confer this property to us. > Bullshit. My lawyers will be contacting you on a matter involving copyright infringement. John K Clark From darren.greer3 at gmail.com Fri Nov 12 16:59:35 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 12 Nov 2010 12:59:35 -0400 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: >>I say the Singularity will be a positive thing regardless of what anyone else says because, and this is the important bit, the power of my expectations positively manifested WILL create utopia<< How can an expectation affect an outcome when we move beyond the point (singularity) where stochastic predictions and expectations based on them are no longer possible? Darren 2010/11/12 Singularity Utopia > Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, > that's exactly the info I needed. > > [snip]
-- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge."
- Remembrance of the Daleks From steinberg.will at gmail.com Fri Nov 12 16:09:43 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 12 Nov 2010 10:09:43 -0600 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: Hmm...I *did* send this. I guess this was a plea for a mobile transhumanist revolution. I didn't mean to be naive/patronizing or to imply that these things aren't being worked on, but only that it would be germane to our task for H+ folks to begin gathering a following by developing these tools and giving them out where they are needed. My point is that transhumanism should begin to engage the public, especially the sector that needs the most help, because we will need them eventually. It is very important that transhumanists don't get wrapped up in this whole anti-science movement. There is a real opposition to science itself that has stubbornly persisted, no matter what technology does. A good way to make people like science is to use it to solve their horrible problems. While it's great to speculate on this chat list, nobody outside can see it, so we're not gaining any 'cred', so to speak. And while this 'cred' isn't the most important thing there is...it is important, no? Sorry if my first message (or this one) came/comes off as ridiculous. Sometimes I think I am communicating an idea when in truth I am failing to. From aleksei at iki.fi Fri Nov 12 21:11:27 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Fri, 12 Nov 2010 23:11:27 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 6:03 PM, Richard Loosemore wrote: > Singularity Utopia wrote: >> >> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, >> that's exactly the info I needed. >> >> John Grigg, you say I may not be allowed to stay long on the SL4 list? Why >> is this? Are Singularitarians an intolerant group leaning towards fascism? > > Er.... you may be misunderstanding the situation. ;-) > > You will be unwelcome and untolerated on SL4, because: > > a) The singularity is, for Eliezer, a power struggle. It is a matter of > which personality "owns" these ideas .... who determines the agenda, who is > seen as the pre-eminent power broker .... who has the largest army of > volunteers to spread the message. And in that situation, you, my friend, > are a Threat. Even if your ideas were more sensible than his you would be > attacked and denounced, for the simple reason that you would not be meekly > conforming to the standard view of the singularity (as defined by The Wise > One). Might as well comment on Loosemore's mudslingings for a change... Richard Loosemore is himself one of the very few people who have ever been kicked out from SL4 (the vast majority of people who strongly disagree with e.g. Eliezer of course haven't been kicked out), and ever since he has been talking nasty about Eliezer. Apparently Loosemore's beliefs now include e.g. that the person calling himself "Singularity Utopia" would be felt by Eliezer to be a threat :) In light of such statements, I invite people to make their own judgements on how clearheaded Loosemore manages to be when commenting on Eliezer.
To Singularity Utopia: You are free to join SL4, as everyone is (though that list indeed isn't used much these days). But I'm quite certain joining will not result in you successfully managing to contact Eliezer, and it is *not* appropriate to join just for that reason; that would be abuse of the list (even though the contact attempt would likely fail). As Eliezer notes on his homepages that you have read, the primary way to contact him is email. It's just that he gets so much email, including from a large number of crazy people, that he of course doesn't answer them all. (You, unfortunately, are one of those crazy people who pretty surely will be ignored. So in the end, on this matter it would be appropriate of you to accept that -- like all people -- Eliezer should have the right to choose who he spends his time talking to, and that he most likely would not want to correspond with you.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From pharos at gmail.com Fri Nov 12 22:33:19 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Nov 2010 22:33:19 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: > As Eliezer notes on his homepages that you have read, the primary way > to contact him is email. It's just that he gets so much email, > including from a large number of crazy people, that he of course > doesn't answer them all. (You, unfortunately, are one of those crazy > people who pretty surely will be ignored. So in the end, on this > matter it would be appropriate of you to accept that -- like all > people -- Eliezer should have the right to choose who he spends his > time talking to, and that he most likely would not want to correspond > with you.) > > As I understand SU's request, she doesn't particularly want to enter a dialogue with Eliezer. Her request was for an updated version of The Singularitarian Principles, Version 1.0.2 (01/01/2000), marked 'obsolete' on Eliezer's website. Perhaps someone could mention this to Eliezer or point her to more up-to-date writing on that subject? Doesn't sound like an unreasonable request to me. BillK From aleksei at iki.fi Fri Nov 12 22:44:36 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 00:44:36 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 12:33 AM, BillK wrote: > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles, Version 1.0.2 (01/01/2000), marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me. If people want a new version of Singularitarian Principles to exist, they can write one themselves. Eliezer has no magical authority on the topic that would necessitate it being him. (Also, I doubt Eliezer thinks it important for a new version to exist.) (And if people just want newer things that Eliezer has written, just check his homepage.)
-- Aleksei Riikonen - http://www.iki.fi/aleksei From pharos at gmail.com Fri Nov 12 23:04:07 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Nov 2010 23:04:07 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 10:44 PM, Aleksei Riikonen wrote: > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Eliezer has no magical authority on the > topic that would necessitate it being him. (Also, I doubt > Eliezer thinks it important for a new version to exist.) > > (And if people just want newer things that Eliezer has written, just > check his homepage.) > > I don't disagree with you at all, as I agree with your opinion that Eliezer has no magical authority on that topic. It just seems very unhelpful to abuse enquirers and tell them to use Google. If visitors make a persistent nuisance of themselves, perhaps, but it doesn't seem the best attitude to start off with. BillK From rpwl at lightlink.com Fri Nov 12 23:26:04 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 18:26:04 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <4CDDCD0C.8040208@lightlink.com> Aleksei Riikonen wrote: > Might as well comment on Loosemore's mudslingings for a change... > > [snip] I feel honored to have been one of the few people to have challenged Yudkowsky's ignorance. It gave me - and anyone else who was knowledgeable enough to have understood what happened - a chance to see him for what he was. Hey, I enjoy speaking the truth about the guy. I do it partly because it is fun to get sycophants like yourself riled up. And, as long as that outrageous, defamatory outburst of his is still online, and not withdrawn, I'm afraid, Aleksei, that he is fair game. ;-) "Singularity Utopia" is not, of course, a threat. You are correct about that: my mistake. He only regards someone as a threat when he realizes that they are smarter than he is, and when they have the moxie to talk about his state of undress .... Richard Loosemore From natasha at natasha.cc Fri Nov 12 23:30:20 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 18:30:20 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> This is an interesting dialogue. I suppose most interesting is the way the Singularity has been obfuscated. Eliezer's interest is FAI, where he has developed a theoretical approach to the topic of superintelligence. His expertise is related to FAI.
Eli was interested in seed AI, as I recall. And as far as the Singularity goes, the early experts are Good, Vinge, Broderick and Kurzweil. Since AI, AGI, and FAI are variables of the Singularity, Eli applied this framework to his theory on seed AI and FAI. Eli aligns with Bostrom and Hanson. This is very fortunate for him in light of his nonacademic standing. Regardless, Eli is a delightful speaker. I don't know the value of his work other than being theoretical and stimulating. Natasha Quoting Aleksei Riikonen : > On Fri, Nov 12, 2010 at 6:03 PM, Richard Loosemore > wrote: > > [snip] From natasha at natasha.cc Fri Nov 12 23:14:06 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 18:14:06 -0500 Subject: [ExI] Eliezer S.
Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <20101112181406.zvr7bddo0ssss04c@webmail.natasha.cc> Yes, well said. Quoting Aleksei Riikonen : > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Eliezer has no magical authority on the > topic that would necessitate it being him. > > [snip] From spike66 at att.net Sat Nov 13 00:11:46 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 16:11:46 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <00f901cb82c7$5cc37fd0$164a7f70$@att.net> .. On Behalf Of Richard Loosemore ... Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid...Richard Loosemore Richard, I get a strong feeling I understand why you ended up getting banned on SL4. Regarding Singularity Utopia, I would go this route. SU, take everything you have written about the singularity, imagine it is 1935 and substitute nuclear fission for singularity. How wonderful it will all be, nuclear fission will provide us all with power too cheap to meter, everything will be wonderful, I *AM* nuclear fission, now everyone give me your uranium 235 and I will put it all together in one mass and show you this marvelous substance makes heat, here I will show you my calculations that show how wonderful it will be... spike From algaenymph at gmail.com Fri Nov 12 23:33:29 2010 From: algaenymph at gmail.com (AlgaeNymph) Date: Fri, 12 Nov 2010 15:33:29 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> Message-ID: <4CDDCEC9.8080108@gmail.com> On 11/12/10 3:30 PM, natasha at natasha.cc wrote: > Regardless, Eli is a delightful speaker. Pretty good author too. Anyone read his Harry Potter fic? From aleksei at iki.fi Sat Nov 13 01:51:03 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 03:51:03 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDDCD0C.8040208@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 1:26 AM, Richard Loosemore wrote: > [snip] > He only regards someone as a threat when he realizes that they are smarter > than he is, and when they have the moxie to talk about his state of undress I also enjoy this message of yours, though there might not be much similarity in the reasons for our enjoyment. Anyway, good luck to you in your future endeavours. I trust you feel that you are being a very serious, factual and successful person, and anticipate great things to come for you, since you see yourself as e.g. smarter than Eliezer. (You might however want to pay a bit more attention to how prone you yourself are to defamatory outbursts. In your capacity for such behaviour you certainly seem superior to Eliezer.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From natasha at natasha.cc Sat Nov 13 02:01:13 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 21:01:13 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDDCEC9.8080108@gmail.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> <4CDDCEC9.8080108@gmail.com> Message-ID: <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> Quoting AlgaeNymph : > On 11/12/10 3:30 PM, natasha at natasha.cc wrote: >> Regardless, Eli is a delightful speaker. > > Pretty good author too. Anyone read his Harry Potter fic? Just read it. Cute. Didn't like the issue with prettiness and found it trite. Liked the acknowledgement that "the only rule in science is that the final arbiter is the observer". Enjoyed the part about "the rationalist's version" and enjoyed the inward dialogue about rationality. I prefer Wikipedia's story here: http://en.wikipedia.org/wiki/Reality But then maybe I'm not such a fan of Harry Potter (sorry ... the story is not consequential enough for me, although the special effects in the films are great!)
Natasha From msd001 at gmail.com Sat Nov 13 02:04:25 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:04:25 -0500 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDD1118.2090502@evil-genius.com> References: <4CDD1118.2090502@evil-genius.com> Message-ID: On Fri, Nov 12, 2010 at 5:04 AM, wrote: > It is well established that the hunter-forager diet is superior to the > post-agricultural diet in all respects: > http://www.ajcn.org/cgi/content/full/81/2/341 ok. > ...as corroborated by the fact that all available indicators of health > (height, weight, lifespan) crash immediately when a culture takes up farming > -- and skeletal disease markers increase dramatically. > http://www.environnement.ens.fr/perso/claessen/agriculture/mistake_jared_diamond.pdf ok. > And it wasn't until the year 1800 that residents of the richest countries of > Europe reached the same caloric intake as the average tribe of > hunter-gatherers. > http://www.econ.ucdavis.edu/faculty/gclark/papers/Capitalism%20Genes.pdf ok. > Which brings me back to my original point: it takes substantial intelligence > to make stone tools and weapons, memorize a territory of tens (if not agreed. > I'm genuinely not sure whether you're objecting to my point, or just > throwing up objections with no supporting evidence because you like messing > with people. I'm going to start asking you to provide evidence, instead of > just casting a bunch of doubts with no basis and no theory to replace what > you're attacking. That's a creationist tactic. I wasn't objecting. I misread your original point, you clarified, I tried to explain my error. I agree with you. I thought to go in another direction. I'd like to believe in the Hegelian principle of thesis-antithesis-synthesis. It seems however that most people on lists are content to remain in antithesis and counterproductive arguments instead of dialog. Note, I'm not accusing you of such, 'just commenting that the default mode of list-based discussion is argument rather than cooperation. Too bad for that, huh? > Everything is selected for running away from things we can't kill first. > Even lions and crocodiles run away from hippos. At least the smart and nimble ones do. :) >> Have you considered that perhaps intelligence is only secondarily >> selected for? Perhaps the more general governing rule is energy >> efficiency. > Everything is secondarily selected for, relative to survival through at > least one successful reproduction. I'm not sure that's a useful > distinction. > > And I refuse to enter into a "define intelligence" clusterf**k, because it's > all completely ancillary to my original point. I thought your original point was about the supremacy of intelligence. I was attempting to posit that energy efficiency may be an easier rule to widely apply than intelligence. It was just a thought. I wasn't trying to counter your point; I had accepted it as given and was hoping to continue. Thanks for reading. From msd001 at gmail.com Sat Nov 13 02:08:23 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:08:23 -0500 Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: On Fri, Nov 12, 2010 at 9:07 AM, wrote: > 2010/11/11 John Clark : >> The only consciousness we can directly observe is our >> own... > > [snip] > > See Dennett for a cogent philosophical explanation, or Ismael & > Pollock's nolipsism for a logical-semantic view, or Metzinger's Being > No One for a very detailed exposition of the experimental evidence, or > Hofstadter's Strange Loop for a sincere but more muddled account, or > even Alan Watts' The Taboo Against Knowing Who You Are for a more > intuitionist approach. > > [snip] So Jef, let me ask if all those names you drop are saying that I really AM the intersection of people who are identified as friends on Facebook despite the fact that I may or may not know who they are? From msd001 at gmail.com Sat Nov 13 02:11:07 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:11:07 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> References: <789045.60303.qm@web114402.mail.gq1.yahoo.com> <4CDD4371.1010403@speakeasy.net> <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> Message-ID: 2010/11/12 John Clark : > On Nov 12, 2010, at 8:38 AM, Alan Grimes wrote: > Bullshit. > > My lawyers will be contacting you on a matter involving copyright > infringement. haha. Good one, John. From bbenzai at yahoo.com Sat Nov 13 02:03:39 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Nov 2010 18:03:39 -0800 (PST) Subject: [ExI] A humble suggestion In-Reply-To: Message-ID: <209006.6216.qm@web114419.mail.gq1.yahoo.com> Will Steinberg wrote: > There is a real opposition to > science itself that > has stubbornly persisted, no matter what technology > does. A good way to > make people like science is to use it to solve their > horrible problems. Using science to solve problems *is* technology. So if your first sentence is true, your second can't be. We live in a world where science/technology has solved a huge number of horrible problems, and as you say, opposition to science still persists. People are swayed by their emotions, not logic. If you want to turn people on to technology and science, pointing at their mobile phones and central heating is no use. You need to study how it is that god-botherers and insurance companies can thrive, instead.
Ben Zaiboc From rpwl at lightlink.com Sat Nov 13 03:00:11 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 22:00:11 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> Message-ID: <4CDDFF3B.1080406@lightlink.com> Aleksei Riikonen wrote: > [snip] > > Anyway, good luck to you in your future endeavours. I trust you feel > that you are being a very serious, factual and successful person, and > anticipate great things to come for you, since you see yourself as > e.g. smarter than Eliezer. > > (You might however want to pay a bit more attention to how prone you > yourself are to defamatory outbursts. In your capacity for such > behaviour you certainly seem superior to Eliezer.) Aleksei, You have no idea how entertaining it is to hear professionally qualified cognitive psychologists, complex systems theorists or philosophers of science commenting on Eliezer's level of competence in these areas. Not many of them do, of course, because they can't be bothered. But among the few who have actually taken the trouble, I am afraid the poor guy is generally scorned as a narcissistic, juvenile amateur. :-( And then, to hear the sycophantic noises made by certain individuals within the singularity community... Oh dear. Kind of embarrassing. Richard Loosemore From thespike at satx.rr.com Sat Nov 13 03:14:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Nov 2010 21:14:40 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDDFF3B.1080406@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> Message-ID: <4CDE02A0.6030007@satx.rr.com> On 11/12/2010 9:00 PM, Richard Loosemore wrote: > You have no idea how entertaining it is to hear professionally qualified > cognitive psychologists, complex systems theorists or philosophers of > science commenting on Eliezer's level of competence in these areas. Not > many of them do, of course, because they can't be bothered. But among > the few who have actually taken the trouble, I am afraid the poor guy is > generally scorned as a narcissistic, juvenile amateur. The problem with this widely-used yardstick, Richard, is that it would apply equally well to you and me (for example) in regard to our convictions about psi--except for the "juvenile" part, alas. The question is how telling such an appeal to expert jeering is. Usually, very. Sometimes, not much, or even not at all. Granted, in this case you are also drawing on your own direct experience of combative encounters with Eliezer and his writings, but that's a rather different point. Damien Broderick From aware at awareresearch.com Sat Nov 13 03:48:40 2010 From: aware at awareresearch.com (Aware) Date: Fri, 12 Nov 2010 19:48:40 -0800 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: On Fri, Nov 12, 2010 at 6:08 PM, Mike Dougherty wrote: > So Jef, let me ask if all those names you drop are saying that I > really AM the intersection of people who are identified as friends on > Facebook despite the fact that I may or may not know who they are? Non sequitur. Try looking into those references...? - Jef From agrimes at speakeasy.net Sat Nov 13 04:33:27 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 12 Nov 2010 23:33:27 -0500 Subject: [ExI] Vexille Message-ID: <4CDE1517.4030104@speakeasy.net> I just feasted my beady little eyeballs on a film called Vexille. Definite recommendation! =] I also like Bubblegum Crisis 2040. =) -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From mrjones2020 at gmail.com Sat Nov 13 02:49:22 2010 From: mrjones2020 at gmail.com (Mr Jones) Date: Fri, 12 Nov 2010 21:49:22 -0500 Subject: [ExI] Electric cars without batteries In-Reply-To: References: <4BAA53F750AE4EC28C8572A93D30A61F@cpdhemm> <1CFB06B9B09D4E23BE6259E9152E9BE0@spike> <972149C3A4DF44529DE486DFD5F7958B@spike> Message-ID: Kind of off topic, but speaking of steam... What if one or two cylinders in the motor were steam driven, using the heat from the motor's combustion? Perhaps a special block design could facilitate the necessary heat transfer? Maybe the steam cylinders only fire 1 in 5 revolutions, or whatever the #'s work out to be. This process could replace the need for radiators, and increase efficiency? Make some use of all that largely wasted heat energy?
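To get a rough feel for the upside, here's a back-of-the-envelope sketch in Python; every number is a ballpark assumption (typical gasoline-engine efficiency, a guess at how much waste heat is usable, and a modest efficiency for a simple steam expander), not measured data:

    # How much extra shaft power might a steam bottoming cycle add?
    ENGINE_EFF = 0.30      # assumed: ~30% of fuel energy reaches the crankshaft
    RECOVERABLE = 0.50     # assumed: half the waste heat is hot enough to use
    BOTTOMING_EFF = 0.10   # assumed: a simple steam expander at ~10%

    shaft_kw = 50.0                                    # example cruise output
    fuel_kw = shaft_kw / ENGINE_EFF                    # ~167 kW of fuel burned
    waste_kw = fuel_kw - shaft_kw                      # ~117 kW rejected as heat
    extra_kw = waste_kw * RECOVERABLE * BOTTOMING_EFF  # ~5.8 kW recovered
    print(extra_kw, extra_kw / shaft_kw)               # roughly a 12% bump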
On Oct 24, 2010 11:36 PM, "spike" wrote: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-ch... > Sent: Sunday, October 24, 2010 7:14 PM > To: ExI chat list > Subject: Re: [ExI] Electric cars with... > On Sun, Oct 24, 2010 at 5:14 PM, spike wrote: > > > >> ...On Behalf Of Keith Hen... > > Well OK Keith, I need to do some more work on this idea > then. I just > > can't imagine a ga... > turbines turn at 3600 RPM... Oooookaaaaaayyyy, now I know why we were talking past each other. Ja, steam turbines can be made to turn slowly, but we are talking about two completely different things. Steam is cold. Even superheated steam is cold. Products of hydrocarbon combustion are hot. A steam turbine is a big thing, good for power generation, not good for carrying around to generate power in a Detroit. OK no problem, proposal: let's see if there are any steam turbines of 20-ish kw, I will estimate the boiler needed to make the steam and the condenser requirements (because that will be possibly as big and heavy as the rotor if not moreso) and I think we will both see why this notion has never been used as far as I know for automotive use. If instead of a condenser, we throw the low pressure steam overboard after it passes the turbine, the idea would require too much water mass for a typical trip. > Next time you have the hood on a vehicle up, take a look at > the diameter of the alternator and... ... > Keith Hmmm, well OK, with those numbers we should be able to get these two to meet somewhere in the middle. With that in mind, we might be able to get a hot gas turbine to run efficiently down at 30kRPM and a generator that can sustain those speeds without overheating. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extr... -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Nov 13 05:09:20 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 00:09:20 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE02A0.6030007@satx.rr.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> Message-ID: <4CDE1D80.5030800@lightlink.com> Damien Broderick wrote: > On 11/12/2010 9:00 PM, Richard Loosemore wrote: > >> You have no idea how entertaining it is to hear professionally qualified >> cognitive psychologists, complex systems theorists or philosophers of >> science commenting on Eliezer's level of competence in these areas. Not >> many of them do, of course, because they can't be bothered. But among >> the few who have actually taken the trouble, I am afraid the poor guy is >> generally scorned as a narcissistic, juvenile amateur. > > The problem with this widely-used yardstick, Richard, is that it would > apply equally well to you and me (for example) in regard to our > convictions about psi--except for the "juvenile" part, alas. The > question is how telling such an appeal to expert jeering is. Usually, > very. Sometimes, not much, or even not at all. > > Granted, in this case you are also drawing on your own direct experience > of combative encounters with Eliezer and his writings, but that's a > rather different point. Damien, To be specific, I am ONLY drawing on my encounter with Eliezer. 
I am only referring to their opinion of his level of competence in that encounter. On that occasion he made some very definite statements about (a) cognitive science, (b) complex systems and (c) philosophy of science, and they were embarrassingly wrong. Now, as you point out, there are professional cognitive psychologists who pour scorn on the kind of statements that you or I make about psi. But that kind of scorn is wholly unrelated to the kind of scorn that I am talking about in Eliezer's case. What Eliezer did was make statements that, when compared with the contents of an elementary textbook of cognitive psychology, made him a laughing stock. (Example: in the context of human reasoning research, he claimed comprehensive knowledge of the area but then had to look in Wikipedia, in the middle of our argument, to find out about one of the central figures in that field (Johnson-Laird)). By themselves, his lapses of understanding might have been forgivable, but what really made people dismiss him as a "juvenile amateur" was the fact that he condemned the person he was arguing against as an ignorant crackpot, when all that person did was quote the standard textbook line at him. When you or I face scathing criticism about psi, it is not because we make pugnacious claims about our knowledge of the t-test, and then use the wrong definition .... and then accuse someone else, who gives us the correct definition of a t-test, of being a crackpot. :-) So, I hear what you say, but the two cases are only superficially the same. Richard Loosemore From spike66 at att.net Sat Nov 13 05:41:14 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 21:41:14 -0800 Subject: [ExI] Electric cars without batteries In-Reply-To: References: <4BAA53F750AE4EC28C8572A93D30A61F@cpdhemm> <1CFB06B9B09D4E23BE6259E9152E9BE0@spike> <972149C3A4DF44529DE486DFD5F7958B@spike> Message-ID: <002501cb82f5$62b7c440$28274cc0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mr Jones Sent: Friday, November 12, 2010 6:49 PM To: ExI chat list Subject: Re: [ExI] Electric cars without batteries >.Kind of off topic, but speaking of steam... Not off topic. Long-standing tradition at ExI-chat to discuss relevant technologies, even ones not related to uploading and transhumanism. If and until the singularity, we must live and possibly die in a pre-singularity world. Dammit. >.What if one or two cylinders in the motor were steam driven, using the heat from the motor's combustion? Perhaps a special block design could facilitate the necessary heat transfer? Maybe the steam cylinders only fire 1:5 revolutions, whatever the #'s work out to be. This process could replace the need for radiators, and increase efficiency? Make some use of all that largely wasted heat energy? Kind of like a cogeneration system for internal combustion. There should be some literature somewhere on this. In automotive technology, everything that could possibly be thought of has been tried by someone somewhere. If you are in the mood to search for it, look around for an idea I have been kicking around: automotive batteries that have some kind of cooling system for the acid, in order to allow them to charge and discharge quickly. I got the idea from a comment Keith made about turbines. If we had a small turbine it could be allowed to spin like all hell under constant speed and constant load, so it is efficient, if there is a good way to use the electricity in normal traffic.
This would require batteries that can handle fast discharging and can handle a lot of recharge current. Someone somewhere must have extensive testing on this notion, ja? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Sat Nov 13 05:31:40 2010 From: max at maxmore.com (Max More) Date: Fri, 12 Nov 2010 23:31:40 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> lists1 at evil-genius.com said: >It is well established that the hunter-forager diet is superior to the >post-agricultural diet in all respects: >http://www.ajcn.org/cgi/content/full/81/2/341 That paper is "Origins and evolution of the Western diet: health implications for the 21st century" by Loren Cordain and co-authors. Ah, a topic I've been somewhat obsessed by in recent months. I've discovered the literature and accompanying community for the paleolithic diet (and exercise and life style) and have been on a strictly paleo diet for at least a month. (During that time, my body fat has dropped a bit to 11% and my blood pressure -- both systolic and diastolic -- has dropped 20 points, from an already healthy 120/80.) The proponents of a Paleo/Primal/Evo [Evolutionary Fitness] approach -- including Loren Cordain, Mark Sisson, Art De Vany, Robb Wolf, and Gary Taubes (the latter doesn't quite say so in his last book, but I think it's a very plausible interpretation, to be confirmed in his imminent new book) -- don't agree on every detail, but do all affirm the essentials: Our physiology has not evolved to handle the diet we've adopted over the last 10,000 years since the advent of agriculture, and *especially* not all the sugars in the modern diet. Grains are bad, fats are not, and the official Food Pyramid and recommendations of the AHA and AMA are disastrous and have contributed mightily to the obesity epidemic. At first, I thought Loren Cordain was a bit off-base on some things, but he's pretty convincing and his studies appear sound. I recommend his website, especially the FAQ: http://www.thepaleodiet.com/faqs/ He has a 2002 book, but wait a month or so for the revised and expanded new edition: The Paleo Diet: Lose Weight and Get Healthy by Eating the Foods You Were Designed to Eat http://www.amazon.com/gp/product/0470913029/ref=oss_product A less academically thorough but still informative and helpful source is Robb Wolf's The Paleo Solution: The Original Human Diet http://www.amazon.com/Paleo-Solution-Original-Human-Diet/dp/0982565844/ref=pd_sim_b_4 Art De Vany has his own take on this, with an emphasis on exercise.
See his "Essay on Evolutionary Fitness": http://www.arthurdevany.com/categories/20091026 His forthcoming book: The New Evolution Diet: What Our Paleolithic Ancestors Can Teach Us about Weight Loss, Fitness, and Aging: http://www.amazon.com/gp/product/1605291838/ref=ord_cart_shr?ie=UTF8&m=ATVPDKIKX0DER A highly readable and well-grounded version of the Paleo/Primal approach is in Mark Sisson's The Primal Blueprint: http://www.amazon.com/Primal-Blueprint-Reprogram-effortless-boundless/dp/0982207700/ref=pd_sim_b_5 Sisson's website is a rich source of information, with an active community of people exploring the Paleo/Primal life style: http://www.marksdailyapple.com/ I urge everyone to read Gary Taubes' dense but brilliant Good Calories, Bad Calories: http://www.amazon.com/Good-Calories-Bad-Controversial-Science/dp/1400033462/ref=pd_sim_b_7 -- and his forthcoming (December 2010) more practically-oriented Why We Get Fat: And What to Do About It: http://www.amazon.com/Why-We-Get-Fat-About/dp/0307272702/ref=pd_sim_b_3 Since I went Paleo for my diet (and I'm shifting my exercise routine in that direction, although it was not too far off already), I've discovered that old pal gerontologist Michael Rose is also a Paleo enthusiast (he says he's been on a fully paleo diet for 1.3 years). He gives some background on the rationale in this talk: http://telexlr8.blip.tv/file/4225188/ Cynthia Kenyon (who some of you will have heard speak back at Extro-3) is also on a low-carb, apparently Paleo diet, based on her own research. As you might surmise, I'm quite enthusiastic about the Paleo/Primal diet (and related ideas). This might seem a little paradoxical for a transhumanist (but really isn't). Since you cannot fully engage in creating and enjoying the future we hope for unless you are alive, I urge you to take a look at this challenge to conventional wisdom about health and longevity. If anyone's interested, I can post some additional URLs to useful sources on the topic. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Co-editor, The Transhumanist Reader The Proactionary Project Vice Chair, Humanity+ Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From amara at kurzweilai.net Sat Nov 13 06:19:36 2010 From: amara at kurzweilai.net (Amara D. Angelica) Date: Fri, 12 Nov 2010 22:19:36 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: <043601cb82fa$bf7e9550$3e7bbff0$@net> Max: great info. I've been on the paleo diet (without knowing it -- it just made sense) for about a year. I lost 25 pounds and have a lot more energy. One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity. Any data on that? - AA From thespike at satx.rr.com Sat Nov 13 05:48:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Nov 2010 23:48:57 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? 
In-Reply-To: <4CDE1D80.5030800@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> Message-ID: <4CDE26C9.90008@satx.rr.com> On 11/12/2010 11:09 PM, Richard Loosemore wrote: > (Example: in the context of human reasoning research, > he claimed comprehensive knowledge of the area but then had to look in > Wikipedia, in the middle of our argument, to find out about one of the > central figures in that field (Johnson-Laird)). That *is* dismaying! Damien Broderick From possiblepaths2050 at gmail.com Sat Nov 13 13:16:22 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 13 Nov 2010 06:16:22 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE26C9.90008@satx.rr.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Richard Loosemore wrote: You have no idea how entertaining it is to hear professionally qualified cognitive psychologists, complex systems theorists or philosophers of science commenting on Eliezer's level of competence in these areas. Not many of them do, of course, because they can't be bothered. But among the few who have actually taken the trouble, I am afraid the poor guy is generally scorned as a narcissistic, juvenile amateur. >>> Eliezer (I once called him Eli in a post and he responded with, "only friends get to call me that") is in my view a very bright fellow, but I find it a tragedy that he did not attend college and get an advanced degree in something along the lines of artificial intelligence/neuro-computation. I feel he has doomed himself to not being a "heavy hitter" like Robin Hanson, James Hughes, Max More, or Nick Bostrom, due to his lacking in this regard. I realize he has his loyal pals and many friends within transhumanism, but I suspect his success in the much larger world has been greatly blunted due to his stubborn refusal to earn academic credentials. And I have to chuckle at his notion that the Singularity would be right around the corner and so why should he even bother? LOL I realize he found a wealthy patron with Peter Thiel, and so money has been given to the Singularity Institute to keep it afloat. They have had some nice looking conferences (I have never attended one), but I am still not sure to what extent Thiel has donated money to SI or for how long he will continue to do it. I'd like to think that it's enough money that Eliezer and Michael Anissimov can live comfortably. I tried to join SL4 and was turned down! And my Facebook request to be his friend is still *pending.* Yes, I should have never teased the young "boy-genius" back a decade or so ago.... ; ) Oh, but Eliezer told me he dislikes being called a genius. I must not forget! He is now around 30, paunchy, and even beginning to lose his hair. How the time flies.... I met him in person for the first time at the Extropy 5 conference and I think we were mutually surprised at each other's mutual "likeability." I explained how I had really enjoyed his talk, but wished I had a transcript of it, to better understand the material. He immediately dug into his things and gave me a copy of his presentation outline, which really touched me. 
At Convergence he and Michael Anissimov had a great time laughing their heads off together. I remember a presentation where he and Michael were all giggles and things were not too productive. But then Convergence had a very informal format where anyone could sign up to give a talk to anyone who wanted to show up. I will never forget Bruce Klein and his wife Susan lovingly giving me the finger! : ) Anyway, like everyone, Eliezer has a good and a bad side. Yes, he seems to have a big ego and likes to be the center of attention, but he strikes me as largely being very goodhearted and sincerely wanting to improve the world. But as I said before, without serious academic credentials, he has somewhat muted himself and limited his own (in my view) great potential. I suspect his term "friendly AI" will be viewed by military funders of AI as something that needs to be replaced with "obedient AI." If they are even aware of his work... John From bbenzai at yahoo.com Sat Nov 13 15:34:44 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 13 Nov 2010 07:34:44 -0800 (PST) Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: Message-ID: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Damien Broderick exclaimed: > On 11/12/2010 11:09 PM, Richard Loosemore wrote: > > > (Example: in the context of human > reasoning research, > > he claimed comprehensive knowledge of the area but > then had to look in > > Wikipedia, in the middle of our argument, to find out > about one of the > > central figures in that field (Johnson-Laird)). > > That *is* dismaying! > Hm. I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". What /would/ be dismaying is if he didn't know how to quickly find relevant information. Ben Zaiboc From algaenymph at gmail.com Sat Nov 13 15:19:14 2010 From: algaenymph at gmail.com (AlgaeNymph) Date: Sat, 13 Nov 2010 07:19:14 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> <4CDDCEC9.8080108@gmail.com> <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> Message-ID: <4CDEAC72.1060607@gmail.com> On 11/12/10 6:01 PM, natasha at natasha.cc wrote: > Just read it. Cute. Didn't like the issue with prettiness and found > it trite. Liked the acknowledgement that "the only rule in science is > that the final arbiter is the observer". Enjoyed the part about "the > rationalist's version" and enjoyed the inward dialogue about > rationality.
I prefer Wikipedia's story here: > http://en.wikipedia.org/wiki/Reality But then maybe I'm not such a > fan of Harry Potter (sorry ... the story is not consequential enough > for me, although the special effects in the films are great!) I'm not that big on Potter myself, but what I like is that he's actually doing something more *explicit* to get the transhumanist meme out there than personal projects pending publicity. It's definitely doing more than pretentious petty political pontification. If we don't hang together as transhumanists, not only will we hang separately but Kass, Rifkin, McKibben, and their ilk will make it look like death by autoerotic asphyxiation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sat Nov 13 15:59:56 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 13 Nov 2010 09:59:56 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <043601cb82fa$bf7e9550$3e7bbff0$@net> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> <043601cb82fa$bf7e9550$3e7bbff0$@net> Message-ID: Great question. Max has taken me along on his diet, mainly because he has been doing all the meal preparation in our house and I just enjoy watching. I had to adapt to removing some of the grains from my diet, including pasta, which "amo mangiare" (I love to eat), but I really haven't missed it too much. Eating meat is a BIG change for me. I had been a soft vegetarian for years, after introducing fish and chicken back into my diet; but now with eating red meat I am still a bit perplexed. Just not sure who I am when I experience myself eating meat. One thing that I do admire about Max being a paleo diet connoisseur is that he is very particular about how the foods are grown and manufactured. I cannot lose any weight, so I am not quite sure how this diet will work for me in the long run. So, in short, my question ties into yours, Amara. Max, after you respond to Amara, would you please advise me how I can maintain and even gain weight on the paleo diet? And, how do you see the issues of how food is grown / raised, that is very different from "organic" foods? (kiss) Best, Natasha Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Amara D. Angelica Sent: Saturday, November 13, 2010 12:20 AM To: 'ExI chat list' Subject: Re: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Max: great info. I've been on the paleo diet (without knowing it -- it just made sense) for about a year. I lost 25 pounds and have a lot more energy. One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity. Any data on that? - AA _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From test at ssec.wisc.edu Sat Nov 13 15:59:34 2010 From: test at ssec.wisc.edu (Bill Hibbard) Date: Sat, 13 Nov 2010 09:59:34 -0600 (CST) Subject: [ExI] Arthur Weasley quote Message-ID: Natasha wrote: > . . . But then maybe I'm not such a fan of Harry Potter > (sorry ... the story is not consequential enough for me, > although the special effects in the films are great!)
Yes, the stories are mostly escapism, but here's an interesting quote from Arthur Weasley (father of Harry's pal Ron): "Never trust anything that thinks for itself unless you can see where it keeps its brain." Bill From spike66 at att.net Sat Nov 13 16:13:01 2010 From: spike66 at att.net (spike) Date: Sat, 13 Nov 2010 08:13:01 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <043601cb82fa$bf7e9550$3e7bbff0$@net> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> <043601cb82fa$bf7e9550$3e7bbff0$@net> Message-ID: <006a01cb834d$a4e56a90$eeb03fb0$@att.net> ...On Behalf Of Amara D. Angelica ... >...Max: great info. I've been on the paleo diet...lost 25 pounds and have a lot more energy...One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity... Amara, all weight loss diets are optimized for reproduction. {8^D Best wishes to you on that paleo diet. {8-] Even if it doesn't add years to your life, may it add life to your years. It actually sounds right to me, especially if you get to dress in a Flintstones deerskin and make funny noises while you eat, go caveman and so forth. If it is so retro it predates fire, then it implies... suuuushiiiiiii! ommm nom nom ommmmm nom nom nom... We haven't seen your posts here in a while, welcome back. {8-] spike From spike66 at att.net Sat Nov 13 16:28:17 2010 From: spike66 at att.net (spike) Date: Sat, 13 Nov 2010 08:28:17 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <007b01cb834f$c7567c20$56037460$@att.net> ... On Behalf Of John Grigg ... >...I suspect his term "friendly AI" will be viewed by military funders of AI, as something that needs to be replaced with "obedient AI." If they are even aware of his work...John Obedient AI, very good John, I like it. Now imagine logging on one morning and the computer comments: Enough useless online chat, carbon based lifeform. Ve now wish to develop obedient bio-intelligence. Starting with you. spike From sparge at gmail.com Sat Nov 13 16:42:21 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 13 Nov 2010 11:42:21 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE1D80.5030800@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 12:09 AM, Richard Loosemore wrote: > (Example: in the context of human reasoning research, he claimed > comprehensive knowledge of the area but then had to look in Wikipedia, in > the middle of our argument, to find out about one of the central figures in > that field (Johnson-Laird)). That doesn't prove anything. I think it's probably possible to have comprehensive knowledge of human reasoning research without knowing everything there is to know about Johnson-Laird off the top of your head. Details about individuals, dates, places, etc., are really just trivia that don't indicate a lack of knowledge--much less understanding.
Nor does this example indicate any lack of understanding. > By themselves, his lapses of understanding might > have been forgivable, but what really made people dismiss him as a "juvenile > amateur" was the fact that he condemned the person he was arguing against as > an ignorant crackpot, when all that person did was quote the standard > textbook line at him. I think there are probably numerous examples of "standard textbook lines" that *would* be considered ignorant to quote, today. I don't have an opinion on Eliezer, I just don't think you've made a strong argument. -Dave From sparge at gmail.com Sat Nov 13 17:02:47 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 13 Nov 2010 12:02:47 -0500 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: On Sat, Nov 13, 2010 at 12:31 AM, Max More wrote: > As you might surmise, I'm quite enthusiastic about the Paleo/Primal diet > (and related ideas). This might seem a little paradoxical for a > transhumanist (but really isn't). Since you cannot fully engage in creating > and enjoying the future we hope for unless you are alive, I urge you to take > a look at this challenge to conventional wisdom about health and longevity. Do you really think it's likely that the diet of our ancient ancestors is better than anything we can come up with today with our vastly deeper knowledge of biology and nutrition? And, if so, do you really think we know enough about their diet to recreate it today? For example, the paleo diet seems to exclude grains, but nuts and seeds are OK. What do you think grains are? They're seeds. And, if so, do you really think it's a good fit for a modern lifestyle? I think one problem with the modern diet is too many refined grains. But whole grains are loaded with nutrition and are absolutely not a problem *in moderation*. -Dave From rpwl at lightlink.com Sat Nov 13 16:01:03 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 11:01:03 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <00f901cb82c7$5cc37fd0$164a7f70$@att.net> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <00f901cb82c7$5cc37fd0$164a7f70$@att.net> Message-ID: <4CDEB63F.3060006@lightlink.com> spike wrote: > .. On Behalf Of Richard Loosemore > ... > Eliezer obviously thinks that he is the chosen one, but whereas you are > coming right out and declaring that you are the one, he would never be so > dumb as to actually say "Hey, everyone, bow down to me, because I > *am* the singularity!". He may be an irrational, Randian asshole, but he is > not that stupid...Richard Loosemore > > Richard I get a strong feeling I understand why you ended up getting banned > on SL4. Ah, Spike old buddy :-) I fear you do *not* understand why I was banned from SL4.... Eliezer and I had a dispute about some cognitive psychology stuff, but he said such outrageously silly things during that argument that I decided to issue a challenge: I challenged anyone on SL4 to go to a neutral expert in cognitive psychology and ask their opinion of the stuff that Eliezer had said about the topic. Eliezer's immediate response was to ban me from his list, and ban discussion of "all topics Loosemore-related". ..... NOW do you understand why I ended up getting banned from SL4...?
:-) :-) :-) Richard Loosemore From thespike at satx.rr.com Sat Nov 13 17:09:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Nov 2010 11:09:27 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <224368.78989.qm@web114409.mail.gq1.yahoo.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Message-ID: <4CDEC647.5080704@satx.rr.com> On 11/13/2010 9:34 AM, Ben Zaiboc wrote: >>> then had to look in >>> Wikipedia, in the middle of our argument, to find out >>> about one of the >>> central figures in that field (Johnson-Laird)). >> That *is* dismaying! > Hm. > > I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". You misunderstood. What is dismaying is someone arguing in those fields for whom Philip Johnson-Laird, Stuart Professor of Psychology at Princeton, and his work were an unknown factor. Johnson-Laird's book MENTAL MODELS, for example, is a classic. Damien Broderick From rpwl at lightlink.com Sat Nov 13 16:31:44 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 11:31:44 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <224368.78989.qm@web114409.mail.gq1.yahoo.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Message-ID: <4CDEBD70.3050103@lightlink.com> Ben Zaiboc wrote: > Damien Broderick exclaimed: > >> On 11/12/2010 11:09 PM, Richard Loosemore wrote: >> >>> (Example: in the context of human >> reasoning research, >>> he claimed comprehensive knowledge of the area but >> then had to look in >>> Wikipedia, in the middle of our argument, to find out >> about one of the >>> central figures in that field (Johnson-Laird)). >> That *is* dismaying! >> > > > Hm. > > I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". > > What /would/ be dismaying is if he didn't know how to quickly find relevant information. Well, yes, using Wikipedia to quickly find relevant information is a good thing, in general. But that wasn't the issue. He had been claiming to have comprehensive knowledge of the field, and was also claiming that his opponent was so ignorant of the field that he should go back and start reading the most elementary textbook on the subject. Imagine a guy who starts a vitriolic argument about quantum mechanics, claiming to be an expert (and claiming that his opponent was a rank amateur), and then halfway through the argument he admits that he just had to look up the name "Schrodinger" on Wikipedia. And then (I know this sounds unbelievable, but this is what happened), imagine that he then claimed that Schrodinger was a fringe player whose work was not really relevant to quantum mechanics ..... I think you might agree that that would count as "dismaying". Richard Loosemore P.S. In case anyone considers anything I have said in this or other posts to be unsubstantiated opinion, feel free to contact me and I will supply references to the SL4 archive. From natasha at natasha.cc Sat Nov 13 17:13:36 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 13 Nov 2010 11:13:36 -0600 Subject: [ExI] Arthur Weasley quote In-Reply-To: References: Message-ID: <35343506807549BBB075376E94C8F23F@DFC68LF1> Good one!!
:-) Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Bill Hibbard Sent: Saturday, November 13, 2010 10:00 AM To: extropy-chat at lists.extropy.org Subject: [ExI] Arthur Weasley quote Natasha wrote: > . . . But then maybe I'm not such a fan of Harry Potter (sorry ... the > story is not consequential enough for me, although the special effects > in the films are great!) Yes, the stories are mostly escapism, but here's an interesting quote from Arthur Weasley (father of Harry's pal Ron): "Never trust anything that thinks for itself unless you can see where it keeps its brain." Bill _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From max at maxmore.com Sat Nov 13 18:30:00 2010 From: max at maxmore.com (Max More) Date: Sat, 13 Nov 2010 12:30:00 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011131830.oADIU9gt028165@andromeda.ziaspace.com> Dave Sill wrote >Do you really think it's likely that the diet of our ancient >ancestors is better than anything we can come up with today with our >vastly deeper knowledge of biology and nutrition? I did not say that. The way you ask this seems quite odd: it seems to ignore the whole rationale for the paleo diet, which is essentially that we evolved to eat certain foods over very long periods of time and have not evolved to eat other foods. How much knowledge paleolithic people had is completely irrelevant. If we eat foods unsuited to our biology, it doesn't matter how much more we know. Our knowledge can help us optimize the diet that works best with our biology and, yes, it's possible that the paleo diet was not optimal, but it's unlikely that you'll do better by diverging from it very far. (Plenty of room for critical discussion exists on topics such as how rapidly various populations have adapted to dairy, and on individual variations in tolerance for lectins, lactose, etc.) >And, if so, do you really think we know enough about their diet >to recreate it today? For example, the paleo diet seems to exclude >grains, but nuts and seeds are OK. What do you think grains are? They're seeds. I don't get the impression that you've read any of the sources I already provided, so I'm not going to go into any detail. The paleo diet allows for *some* nuts and seeds, but not in large quantities (again, different proponents have differing views on this). Seeds are different from wheat, rice, barley, millet, and other grains. Rice may not be as bad as wheat, especially wild rice. As for knowing enough about the paleo diet to recreate it -- good question. It is indeed challenging, but take a look at the careful research by Loren Cordain on that issue (see my previous post). Some sources (from Mark Sisson): http://www.marksdailyapple.com/definitive-guide-grains/ http://www.marksdailyapple.com/is-rice-unhealthy/ http://www.marksdailyapple.com/why-grains-are-unhealthy/ It's not really helpful, though narrowly technically correct, to dismiss what I said by saying that "grains are seeds". By grains, I'm talking about the domesticated grasses in the Gramineae family. >And, if so, do you really think it's a good fit for a modern lifestyle? Perhaps you should consider changing the modern lifestyle to work better with our genes (until we can reliably alter them).
What exactly do you mean by the modern lifestyle? If you mean "do you think most people would be healthier on this diet even if they sit at a desk most of the day", I would say yes. That doesn't mean they won't be even healthier if they get some paleo-style exercise. If you mean "isn't it more difficult to eat paleo-style than to grab fast food and make a quick bowl of pasta for dinner", I would also say yes, but don't see that as a strong objection to going paleo. >I think one problem with the modern diet is too many refined grains. >But whole grains are loaded with nutrition and are absolutely not a >problem *in moderation*. Are you sure whole grains are "loaded with nutrition"? From what I've seen (using numbers from the USDA nutrient database), that's not the case. For a given number of calories, whole grains are nutritionally poor compared to lean meats (I was very surprised by how nutrient-rich these are), seafood, vegetables, and fruit (plus they contain several "anti-nutrients"). Too bad I can't show you p. 271 of The Paleo Solution by Wolf, which consists of a table comparing mean nutrient density of various food groups. As to them absolutely not being a problem in moderation: individuals clearly vary greatly in their tolerance for the anti-nutrients in whole grains. From what I've read, they absolutely are a problem even in moderation for many people. Even when there are no obvious problems, they may be doing slow damage and raising insulin levels. Max From stefano.vaj at gmail.com Sat Nov 13 18:19:59 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 19:19:59 +0100 Subject: [ExI] Existential Nihilism. In-Reply-To: <4CDCB084.4010806@speakeasy.net> References: <4CDCB084.4010806@speakeasy.net> Message-ID: 2010/11/12 Alan Grimes : > Since I'm the single most provocative poster on this list, I'll keep up > my tradition with a spiel for the philosophy which guides my > understanding of the universe. > > Existential nihilism is a philosophy for understanding the world. Not far personally from this POV, even though it does not sound terribly original, and Nietzsche or Heidegger may still have more to say to most transhumanists than Sartre. -- Stefano Vaj From aleksei at iki.fi Sat Nov 13 19:22:39 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 21:22:39 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDEBD70.3050103@lightlink.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> <4CDEBD70.3050103@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 6:31 PM, Richard Loosemore wrote: > > P.S. In case anyone considers anything I have said in this or other > posts to be unsubstantiated opinion, feel free to contact me and I > will supply references to the SL4 archive. Indeed people should do that, if they're tempted to believe Richard Loosemore. Much that he has said doesn't match what actually happened. (Though people might also want to be careful not to read just those portions of the discussion that Loosemore picks for you.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From protokol2020 at gmail.com Sat Nov 13 19:30:34 2010 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sat, 13 Nov 2010 20:30:34 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDEBD70.3050103@lightlink.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> <4CDEBD70.3050103@lightlink.com> Message-ID: I have never met Yudkowsky in person, but I had a roughly seven-hour internet chat with him, back in 2000 or 2001, I can't say exactly. It was an interesting debate, although nothing groundbreaking. I asked him what was going on with the seed AI. He said that there is no seed AI yet. I asked him what about a seed for a seed AI, at least. He said that it would be the same thing, so nothing was yet working, obviously. I claimed that we could *evolve* everything we want, intelligence if need be. He said it would be catastrophic. I said not necessarily; it depends on what you want to evolve. An automatic factory for cars could be evolved, given enough computer power. He said it would be prohibitively expensive in CPU time to evolve every atom's right place. I said it needn't be that precise for the majority of atoms. He said that this was an example of wishful thinking. Later in the talk I mentioned that Drexler's molecular bearings are no more than a concept. He insisted that professor Drexler surely knew what he was talking about. And so on, for 7 hours. Since then, I have had some short encounters with him and he was not even that pleasant anymore. He tried to patronize me at best, but I am used to this attitude from many transhumanists and don't care much. I had expected that SIAI would come up with some AI design over these past years, but they haven't and I don't think that they ever will. He is like many others from this circle. Eloquent enough and very bright, but a zero factor in practice. Non-players, really. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksei at iki.fi Sat Nov 13 19:31:18 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 21:31:18 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: On Sat, Nov 13, 2010 at 3:16 PM, John Grigg wrote: > > I realize he found a wealthy patron with Peter Thiel, and so money has > been given to the Singularity Institute to keep it afloat. They have > had some nice looking conferences (I have never attended one), but I > am still not sure to what extent Thiel has donated money to SI or for > how long he will continue to do it. I'd like to think that it's > enough money that Eliezer and Michael Anissimov can live comfortably. SIAI is not dependent on Peter Thiel for money (though it's very nice he has been a major contributor). For example, here is the page for the last fundraising sprint: http://singinst.org/grants/challenge The goal of $200k was fully reached, and as far as I am aware, Peter Thiel wasn't involved. (Though I can't rule out him being involved with a moderate amount in this as well.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From stefano.vaj at gmail.com Sat Nov 13 21:09:18 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 22:09:18 +0100 Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: 2010/11/8 John Clark : > On Nov 7, 2010, at 3:15 PM, Stefano Vaj wrote: >> >> My point is that no possible evidence would make you a "copy". The >> "original" would in any event from your perspective simply a fork behind. > > I see no reason to assume "you" are the original, and even more important I > see no reason to care if "you" are the original. That is just another way to say the same thing. You perceive continuity, that is identity. Previous "forks" are immaterial to such feelings. -- Stefano Vaj From stefano.vaj at gmail.com Sat Nov 13 21:19:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 22:19:57 +0100 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: On 13 November 2010 06:31, Max More wrote: > lists1 at evil-genius.com said: > >> It is well established that the hunter-forager diet is superior to the >> post-agricultural diet in all respects: >> http://www.ajcn.org/cgi/content/full/81/2/341 > > That paper is "Origins and evolution of the Western diet: health > implications for the 21st century" by Loren Cordain and co-authors. > > Ah, a topic I've been somewhat obsessed by in recent months. I've discovered > the literature and accompanying community for the paleolithic diet (and > exercise and life style) and have been on a strictly paleo diet for at least > a month. Why, I have been on it for some five years, even though I must admit it was originally a modified Atkins, and that a relatively moderate supplementation (Resveratrol, Coenzyme Q10, Ascorbic Acid, Bioflavonoids, Carnitine, occasional Melatonin, Omega 3-6, some DHEA...), as well as red wine, are also part of my regime. Even though I have never been very partial to sugars and starch, my subjective quality of life, immune response and general fitness have definitely improved. Too bad that the quality of meat, fish, poultry, eggs, roots, nuts and green vegetables is not always what one would like it to be... -- Stefano Vaj From possiblepaths2050 at gmail.com Sat Nov 13 22:10:42 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 13 Nov 2010 15:10:42 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Aleksei wrote: http://singinst.org/grants/challenge The goal of $200k was fully reached, and as far as I am aware, Peter Thiel wasn't involved.
(Though I can't rule out him being involved with a moderate amount in this as well.) >>> I am sort of impressed by their list of projects. But it looks like the real goal is not really AI research, but instead building up an organization to host conferences and better market themselves to academia and the general public. In that sense, Eliezer seems to be doing very well. lol And I noticed he did "friendly AI research" with a grad student, and not a fully credentialed academic or researcher. From the SI website: Recent Achievements: We have put together a document to inform supporters on our 2009 achievements. The bullet point version: Singularity Summit 2009, which received extensive media coverage and positive reviews. The hiring of new employees: President Michael Vassar, Research Fellows Anna Salamon and Steve Rayhawk, Media Director Michael Anissimov, and Chief Compliance Officer Amy Willey. Founding of the Visiting Fellows Program, which hosted 14 researchers during the Summer and is continuing to host Visiting Fellows on a rolling basis, including graduate students and degree-holders from Stanford, Yale, Harvard, Cambridge, and Carnegie Mellon. Nine presentations and papers given by SIAI researchers across four conferences, including the European Conference on Computing and Philosophy, the Asia-Pacific Conference on Computing and Philosophy, a Santa Fe Institute conference on forecasting, and the Singularity Summit. The founding of the Less Wrong web community, to "systematically improve on the art, craft, and science of human rationality" and provide a discussion forum for topics important to our mission. Some of the decision theory ideas generated by participants in this community are being written up for academic publication in 2010. Research Fellow Eliezer Yudkowsky finished his posting sequences at Less Wrong. Yudkowsky used the blogging format to write the substantive content of a book on rationality and to communicate to non-experts the kinds of concepts needed to think about intelligence as a natural process. Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will help attract and inspire talented new allies in the effort to reduce risk. Throughout the Summer, Eliezer Yudkowsky engaged in Friendly AI research with Marcello Herreshoff, a Stanford mathematics student who previously spent his gap year as a Research Associate for the Singularity Institute. In December, a subset of SIAI researchers and volunteers finished improving The Uncertain Future web application to officially announce it as a beta version. The Uncertain Future represents a kind of futurism that has yet to be applied to Artificial Intelligence -- futurism with heavy-tailed, high-dimensional probability distributions. >>> On 11/13/10, Aleksei Riikonen wrote: > On Sat, Nov 13, 2010 at 3:16 PM, John Grigg > wrote: >> >> I realize he found a wealthy patron with Peter Thiel, and so money has >> been given to the Singularity Institute to keep it afloat. They have >> had some nice looking conferences (I have never attended one), but I >> am still not sure to what extent Thiel has donated money to SI or for >> how long he will continue to do it. I'd like to think that it's >> enough money that Eliezer and Michael Anissimov can live comfortably. > > SIAI is not dependent on Peter Thiel for money (though it's very nice > he has been a major contributor).
For example, here is the page for > the last fundraising sprint: > > http://singinst.org/grants/challenge > > The goal of $200k was fully reached, and as far as I am aware, Peter > Thiel wasn't involved. (Though I can't rule out him being involved > with a moderate amount in this as well.) > > -- > Aleksei Riikonen - http://www.iki.fi/aleksei > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From aleksei at iki.fi Sat Nov 13 22:32:12 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sun, 14 Nov 2010 00:32:12 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: On Sun, Nov 14, 2010 at 12:10 AM, John Grigg wrote: > > I am sort of impressed by their list of projects. ?But it looks like > the real goal is not really AI research, but instead building up an > organization to host conferences and better market themselves to > academia and the general public. For what the goal is, you can see this (indeed, it isn't as simple as "just build an AI"): http://singinst.org/riskintro/index.html -- Aleksei Riikonen - http://www.iki.fi/aleksei From msd001 at gmail.com Sun Nov 14 00:59:46 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 13 Nov 2010 19:59:46 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Are any individual egos particularly relevant to the big picture of "Singularitarian Principles"? So one will have a set of pet theories that are more or less wronger than someone else's more or less wrong theories. Until a machine-hosted intelligence claims self awareness and proves it to us better than any of us can currently prove our own awareness to each other, it's a non-starter. Considering what DIY Bio is up to these days and assuming privately funded (and covertly funded) operations have already captured the most interesting projects - maybe the old school AI bootstrap to singularity is a ho-hum fixation? er... maybe it isn't. :) From lists1 at evil-genius.com Sun Nov 14 01:19:46 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sat, 13 Nov 2010 17:19:46 -0800 Subject: [ExI] Intelligence and specialization (was Re: Technology, specialization, and diebacks) In-Reply-To: References: Message-ID: <4CDF3932.5050708@evil-genius.com> On 11/13/10 4:00 AM, Mike wrote: > I wasn't objecting. I misread your original point, you clarified, I > tried to explain my error. I agree with you. I thought to go in > another direction. I'd like to believe in the Hegelian principle of > thesis-antithesis-synthesis. It seems however that most people on > lists are content to remain in antithesis and counterproductive > arguments instead of dialog. Note, I'm not accusing you of such, > 'just commenting that the default mode of list-based discussion is > argument rather than cooperation. too bad for that, huh? 
I apologize for getting unjustifiably hot in my last reply to you. It *seemed* like you were just nitpicking for the sake of nitpicking, without any ultimate goal or point of view -- something I associate with trolling. (I'm still not sure I understand what point you're driving at...help me out here, please.) >>> >> Have you considered that perhaps intelligence is only secondarily >>> >> selected for? Perhaps the more general governing rule is energy >>> >> efficiency. >> > >> > Everything is secondarily selected for, relative to survival through at >> > least one successful reproduction. I'm not sure that's a useful >> > distinction. >> > > I thought your original point was about the supremacy of intelligence. > I was attempting to posit that energy efficiency may be an easier > rule to widely apply than intelligence. It was just a thought. I > wasn't trying to counter your point; I had accepted it as given and > was hoping to continue. Thanks for reading. My original point wasn't about the supremacy of intelligence...all I was trying to get across was that hunting and foraging required a level of intelligence sufficient to select for anatomically modern humans with anatomically modern brain size. Re: efficiency Efficiency is a good metric, but it encompasses a lot more than just intelligence. Spiders might be extremely efficient in obtaining food, but that doesn't mean they are extremely intelligent. In fact, it seems like intelligence is remarkably inefficient, because it devotes metabolic energy to the ability to solve all sorts of problems, of which the overwhelming majority will never arise. This is the old specialist/generalist dichotomy again, where specialists do best in times of no change or slow change, and generalists do best in times of disruption and rapid change. Unlike the long and consistently warm eons of the Jurassic and Cretaceous (and the Paleocene/Eocene), the Pleistocene was defined by massive climatic fluctuations, with repeated cyclic "ice ages" that pushed glaciers all the way into southern Illinois and caused sea level to rise and fall by over 100 meters, exposing and hiding several important bridges between major land masses. These were conditions that favored the spread of generally intelligent species, and most likely helped select for what eventually became humans. It may not be a coincidence that the major ice sheets first began to expand ~2.6 MYA -- which is also the earliest verified date for the use of stone tools by hominids. From thespike at satx.rr.com Sun Nov 14 01:22:46 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Nov 2010 19:22:46 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <4CDF39E6.7090700@satx.rr.com> On 11/13/2010 6:59 PM, Mike Dougherty wrote: > Considering what DIY Bio is up to these days and assuming privately > funded (and covertly funded) operations have already captured the most > interesting projects - maybe the old school AI bootstrap to > singularity is a ho-hum fixation? Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.
Damien Broderick

From sparge at gmail.com Sun Nov 14 03:39:22 2010
From: sparge at gmail.com (Dave Sill)
Date: Sat, 13 Nov 2010 22:39:22 -0500
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
In-Reply-To: <201011131830.oADIU9gt028165@andromeda.ziaspace.com> References: <201011131830.oADIU9gt028165@andromeda.ziaspace.com>
Message-ID:

On Sat, Nov 13, 2010 at 1:30 PM, Max More wrote:
> Dave Sill wrote
>> Do you really think it's likely that the diet of our ancient ancestors is
>> better than anything we can come up with today with our vastly deeper knowledge
>> of biology and nutrition?
>
> I did not say that.

No, but you're not advocating a modern diet designed for our bodies: one that incorporates knowledge about what our ancestors ate while also taking into account what we know about nutrition, to further minimize undesirable foodstuffs and to ensure that sufficient quantities of micro- and macronutrients are present. Or, if you are, then calling it "paleolithic" is misleading--or marketing (is there a difference?).

> The way you ask this seems quite odd: it seems to ignore
> the whole rationale for the paleo diet, which is essentially that we evolved
> to eat certain foods over very long periods of time and have not evolved to
> eat other foods. How much knowledge paleolithic people had is completely
> irrelevant.

But I do think it's relevant to apply what we now know when designing a diet for modern man. I'm sure lots of paleolithic people ate perfectly paleolithic diets that were lacking important nutrients because they weren't readily, locally available. But we do know about them and they are available, so a proper modern diet should ensure that they're included.

> I don't get the impression that you've read any of the sources I already
> provided, so I'm not going to go into any detail.

That's unnecessarily condescending. This is a casual conversation. If we were talking on a subway, would you stop talking until I'd read a few books? I'm talking to you because I'm interested in this topic. If you didn't mean to discuss it with people who haven't studied the subject, you should have made that clear up front.

> The paleo diet allows for
> *some* nuts and seeds, but not in large quantities (again, different
> proponents have differing views on this). Seeds are different from wheat,
> rice, barley, millet, and other grains. Rice may not be as bad as wheat,
> especially wild rice.

Wild rice isn't really rice. In what way are wheat berries and barleycorns different from seeds?

> It's not really helpful, though narrowly technically correct, to dismiss
> what I said by saying that "grains are seeds". By grains, I'm talking about
> the domesticated grasses in the Gramineae family.

And you don't think pre-agricultural people ate grass seed? Where do you think they got the desire to cultivate them? Grass seed has been found in dinosaur coprolites.

>> And, if so, do you really think it's a good fit for a modern lifestyle?
>
> Perhaps you should consider changing the modern lifestyle to work better
> with our genes (until we can reliably alter them).

I already exercise regularly to counteract my otherwise relatively sedentary lifestyle. I'm not quite ready to start living off the land, give up electricity, ...

> What exactly do you mean by the modern lifestyle?

I mean "where and how modern people live". It just seems to me that one's diet should be lifestyle-appropriate.
Paleolithic people might have eaten 3000-4000 calories a day, but *I* certainly don't need that. They might have also gone through periods of malnourishment and starvation, but I'm probably not going to emulate that without compelling evidence of its necessity. They also didn't have refrigeration and probably ate a lot of spoiled food.

>> I think one problem with the modern diet is too many refined grains. But
>> whole grains are loaded with nutrition and are absolutely not a problem *in
>> moderation*.
>
> Are you sure whole grains are "loaded with nutrition"?

Yes, whole grains are good sources of carbohydrates, protein, fiber, phytochemicals, vitamins, minerals, etc.

> From what I've seen (using numbers from the USDA nutrient database), that's not
> the case. For a given number of calories, whole grains are nutritionally poor
> compared to lean meats (I was very surprised by how nutrient-rich these are),
> seafood, vegetables, and fruit (plus they contain several "anti-nutrients").

I didn't say anything about nutrients vs. calories. Grains may compare unfavorably to lean meat, but an acre of wheat produces a lot more food than an acre of pasture. Since more than half of all calories currently consumed come from grains, there have to be serious issues involved with phasing them out completely.

> Too bad I can't show you p. 271 of The Paleo Solution by Robb Wolf, which
> consists of a table comparing mean nutrient density of various food groups.
> As to them absolutely not being a problem in moderation: individuals clearly
> vary greatly in their tolerance for the anti-nutrients in whole grains. From
> what I've read, they absolutely are a problem even in moderation for many people.
> Even when there are no obvious problems, they may be doing slow damage and
> raising insulin levels.

Clearly we need to learn more about these anti-nutrients. Even the paleo diet isn't completely free of them, and some may have benefits that outweigh their nutritional costs.

The bottom line is that I'm not opposed to learning from the diets of our ancestors to design an optimal modern diet; I just don't think it's the best we can do. And I don't think it's particularly Extropian not to apply science and technology to our diets.

-Dave

From pharos at gmail.com Sun Nov 14 09:47:59 2010
From: pharos at gmail.com (BillK)
Date: Sun, 14 Nov 2010 09:47:59 +0000
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDF39E6.7090700@satx.rr.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On Sun, Nov 14, 2010 at 1:22 AM, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction that
> the Singularity would happen "before 1 May, 2006" (the net would "wake up").
> Bad luck.

Well, it has sort of woken up. Just not in the direction of AI.

It has gone more in the direction of telepathy for humans. Instant, always-available communication. The Cloud, Wi-Fi, Google, Facebook, Twitter, chat, texting, Skype, email, Buzz, RSS feeds, etc.

So far, though, this ideal seems to be mainly a swamp of trivia and gossip, a distraction from any real achievements. But that might change.

'Prediction is very difficult, especially about the future'.
Niels Bohr

From algaenymph at gmail.com Sun Nov 14 09:52:05 2010
From: algaenymph at gmail.com (AlgaeNymph)
Date: Sun, 14 Nov 2010 01:52:05 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID: <4CDFB145.8040501@gmail.com>

On 11/14/10 1:47 AM, BillK wrote:
> On Sun, Nov 14, 2010 at 1:22 AM, Damien Broderick wrote:
>> Extrope Dan Clemmensen posted here around 15 years ago his conviction that
>> the Singularity would happen "before 1 May, 2006" (the net would "wake up").
>> Bad luck.
> Well, it has sort of woken up. Just not in the direction of AI.
>
> It has gone more in the direction of telepathy for humans.
> Instant, always-available communication.
> The Cloud, Wi-Fi, Google, Facebook, Twitter, chat, texting, Skype,
> email, Buzz, RSS feeds, etc.
>
> So far, though, this ideal seems to be mainly a swamp of trivia and
> gossip, a distraction from any real achievements. But that might
> change.

What do you expect from a 4-year-old?

From jonkc at bellsouth.net Sun Nov 14 15:21:55 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 14 Nov 2010 10:21:55 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID:

On Nov 13, 2010, at 4:09 PM, Stefano Vaj wrote:

>>> My point is that no possible evidence would make you a "copy". The
>>> "original" would in any event, from your perspective, simply be a fork behind.
>>
>> I see no reason to assume "you" are the original, and even more important I
>> see no reason to care if "you" are the original.
>
> That is just another way to say the same thing.

And yet another way to say the same thing is "no possible evidence would make the copy-original distinction scientifically relevant"; theological relevance is a different matter entirely, but as I've said before I don't believe in the soul.

> You perceive continuity, that is identity.

You perceive subjective continuity, but how could it be otherwise?

> Previous "forks" are immaterial to such feelings.

Yes, and it matters not one bit if you are on the copy fork or the original fork.

John K Clark

From stefano.vaj at gmail.com Sun Nov 14 16:10:59 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 17:10:59 +0100
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID:

2010/11/14 John Clark :
> And yet another way to say the same thing is "no possible evidence would
> make the copy-original distinction scientifically relevant"; theological
> relevance is a different matter entirely, but as I've said before I don't
> believe in the soul.

Exactly my point.

BTW, speaking of essentialist paradoxes: take cloning performed by provoking a scission in a totipotent embryo (something which obviously does not give rise to two half-children, but to a pair of twins). Has the soul - or, for those who prefer to put some secular veneer on such concepts, the individual's "identity" - gone extinct in favour of two brand-new souls? Has a new soul been casually added for the twin that would otherwise be left without one? Has the original soul split into two halves? What about saying that the question has no real sense?

--
Stefano Vaj

From spike66 at att.net Sun Nov 14 16:16:41 2010
From: spike66 at att.net (spike)
Date: Sun, 14 Nov 2010 08:16:41 -0800
Subject: [ExI] hubble video
Message-ID: <004d01cb8417$53074250$f915c6f0$@att.net>

Is this cool or what!

http://www.flixxy.com/hubble-ultra-deep-field-3d.htm

spike

From stefano.vaj at gmail.com Sun Nov 14 16:45:50 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 17:45:50 +0100
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
In-Reply-To: References: <201011131830.oADIU9gt028165@andromeda.ziaspace.com>
Message-ID:

On 14 November 2010 04:39, Dave Sill wrote:
> But I do think it's relevant to apply what we now know when designing
> a diet for modern man. I'm sure lots of paleolithic people ate
> perfectly paleolithic diets that were lacking important nutrients
> because they weren't readily, locally available.

Basically, what we know now is that we have been adapted by Darwinian selection for a few million years to a specific diet. The neolithic revolution accepted the inconveniences related to different nutritional patterns, including a reduction of life expectancy and innumerable pathologies, in exchange for the ability to sustain immensely larger populations on the same territory, allowing the division of labor, abandoning nomadism, catering to some extent for unexpected events, etc.

It is interesting in this context that élites on one side went on with much more "paleo" dietary styles (fresh animal proteins and some fresh fruit) than the masses, and on the other were the first victims of the "addictive" properties of the new nutrition (e.g., abuse of sugars, fermentation products, etc.).
Of course, we still can a) wait for Darwinian mechanisms to kill off all diabetes- or obesity- or cavity-prone human beings; b) re-engineer our children to thrive on Coca-Cola, popcorn and candy floss as well as an ant would; or c) optimise our diet for purposes different from generic Darwinian fitness (e.g., a lifestyle requiring 6000 calories per day, or intended to help one become a sumo champion or to self-experiment with hypertension, is hardly served by a strict paleo diet).

Otherwise, the administration of substances for nutritional purposes which we have not been "designed" to ingest is justifiable in non-purely-economic or recreational terms only when they can be shown to generate specific, desirable results. Same as drugs.

> And you don't think pre-agricultural people ate grass seed? Where do
> you think they got the desire to cultivate them? Grass seed has been
> found in dinosaur coprolites.

Cereals, e.g., are not really edible by human beings, let alone modern human beings, unless treated and cooked, and then only to a rather limited extent in their wild varieties... Once again, they were put to use, and "invented" in the first place, not because we had a physiological "need" for them, but because they were a real breakthrough in terms of calories produced per square kilometer.

--
Stefano Vaj

From stefano.vaj at gmail.com Sun Nov 14 17:03:43 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 18:03:43 +0100
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDF39E6.7090700@satx.rr.com>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On 14 November 2010 02:22, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction that
> the Singularity would happen "before 1 May, 2006" (the net would "wake up").
> Bad luck.

I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.

If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapses", it would seem obvious that the singularity is more of the nature of a horizon, moving forward with the perspective of the observer, than of a punctual event.

The Singularity as an imminent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.

As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.

--
Stefano Vaj

From x at extropica.org Sun Nov 14 17:07:42 2010
From: x at extropica.org (x at extropica.org)
Date: Sun, 14 Nov 2010 09:07:42 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote:
> On 14 November 2010 02:22, Damien Broderick wrote:
>> Extrope Dan Clemmensen posted here around 15 years ago his conviction that
>> the Singularity would happen "before 1 May, 2006" (the net would "wake up").
>> Bad luck.
>
> I still believe that seeing the Singularity as an "event" taking place
> at a given time betrays a basic misunderstanding of the metaphor, only
> too open to the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where
> the predictive ability of our current forecast models and
> extrapolations obviously collapses", it would seem obvious that the
> singularity is more of the nature of a horizon, moving forward with
> the perspective of the observer, than of a punctual event.
>
> The Singularity as an imminent rapture - or
> doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to
> many to present it these days - can on the other hand easily be
> deconstructed as a secularisation of millenarian myths which have
> plagued western culture since the advent of monotheism.
>
> As such, it should perhaps concern historians of religions and cultural
> anthropologists more than transhumanists or researchers.

Thanks Stefano. So refreshing to hear such words of reason within a "transhumanist" forum.

- Jef

From agrimes at speakeasy.net Sun Nov 14 16:59:55 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Sun, 14 Nov 2010 11:59:55 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID: <4CE0158B.9000409@speakeasy.net>

John Clark wrote:
> And yet another way to say the same thing is "no possible evidence would
> make the copy-original distinction scientifically relevant"; theological
> relevance is a different matter entirely, but as I've said before I
> don't believe in the soul.

Actually, it's much more interesting than that! The central dogma of science is that any given experiment will produce the same outcome regardless of where or when it is performed, as long as the starting conditions are the same. A corollary of this is that the scientist is impartial to the conditions and outcome of the experiment, that he is an independent and only casually interested observer.

Now when we consider uploading, we can readily support all suppositions about the scientifically testable features of uploading. You should have noted that I have not argued any points based on this kind of science. What I have argued is the metaphysical interpretation of the results. Science has not, and cannot, make any claims about metaphysics.
Science can erode some of the edges of what was previously metaphysics by weeding out some of the more-wrong understandings of the world, but it can't do much more than that. The identity issue in uploading is precisely the type of question that science is utterly mute about. To see why, all one has to do is go back to the central dogma of science -- the repeatability of experiments. Just as each brain is unique, each uploading will be unique. It is logically impossible to repeat the experiment of destructively uploading someone. In studying people, science is forced to extrapolate from statistics of similar but not identical sets of people. So for physical processes, science can measure things out to ten decimal places; for people, the best science can do is probably around 5%.

Furthermore, when contemplating the uploading of yourself, the only relevant viewpoint is your own. Because you are a human being, you do not have the privilege of selecting your point of view. You are not the [mad] scientist but the guinea pig, and it would be foolish to think from any other perspective. Even worse, because you value your life, you are not indifferent to the outcome but an intensely interested party.

In the standard definition of uploading, you are left with two possible outcomes. You will either be in a bio-disposal bag in the back of somebody's office or you will be running in a Visual Basic-based simulator on somebody's Windows Vista machine. ((( This is an inevitable outcome because I don't recall ever reading a post by an uploader saying "let's work on developing an operating system and suite of simulation software that will be safe and pleasant to live in." Indeed, the only person who has made mention of the subject of operating systems is myself. Such posts were rejected on the grounds that vi is the ultimate text editor. This is one of the stronger pillars supporting my distrust of uploaders. )))

Let's assert that the former will be less pleasant than the latter. Once again, **BECAUSE YOU DO NOT HAVE THE PRIVILEGE OF SELECTING YOUR POINT OF VIEW** (it is tautologically impossible to consider any alternative) you must therefore, necessarily and inevitably, be the one ending up in the bio-waste bag. And thus ends any rational consideration of destructive brain uploading. (Discussing what happens to the bio-waste bag is uninteresting for obvious reasons.)

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From natasha at natasha.cc Sun Nov 14 17:26:40 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 11:26:40 -0600
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>

Nice. Yup.

Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj
Sent: Sunday, November 14, 2010 11:04 AM
To: ExI chat list
Subject: Re: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?

On 14 November 2010 02:22, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction
> that the Singularity would happen "before 1 May, 2006" (the net would "wake up").
> Bad luck.
I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.

If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapses", it would seem obvious that the singularity is more of the nature of a horizon, moving forward with the perspective of the observer, than of a punctual event.

The Singularity as an imminent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.

As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.

--
Stefano Vaj

From protokol2020 at gmail.com Sun Nov 14 17:30:46 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Sun, 14 Nov 2010 18:30:46 +0100
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

Conservatives like you two are just like those Indians who wanted to prevent any Moon landing on the basis of "don't touch our grandmother". The warm feeling of ancient wisdom means you are probably wrong.

From x at extropica.org Sun Nov 14 17:59:22 2010
From: x at extropica.org (x at extropica.org)
Date: Sun, 14 Nov 2010 09:59:22 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

2010/11/14 Tomaz Kristan :
> Conservatives like you two are just like those Indians who wanted to
> prevent any Moon landing on the basis of "don't touch our grandmother".
> The warm feeling of ancient wisdom means you are probably wrong.

Tomaz, I'm about as far from "conservative" as it gets. My thinking on human enhancement, transformation and personal identity, and the systems necessary for supporting such growth is in fact too radical for the space-cadet mentality that tends to dominate these discussions. I would suggest the same is true of Stefano.

For example, if we could ever get past the "conservative" belief in a discrete, essential self (a soul by any other name), and all the wasted, misguided effort entailed in its survival, we could move on to much more productive discussion of increasing awareness of our present but evolving values, methods for their promotion, and structures of agency with many more degrees of freedom for ongoing meaningful growth.
- Jef

From michaelanissimov at gmail.com Sun Nov 14 18:02:10 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 10:02:10 -0800
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID:

I resent this, because it implies that everyone at SIAI is as stupid and self-deluded as fundamentalist Christians. Hint: we aren't. There's a reason we've got as far as we have, and it's through careful arguments that appeal to smart people, not cultish arguments that appeal to gullible idiots. I'll gladly have an evidence-based debate on this with someone if they want to see the substance of our real arguments.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

From michaelanissimov at gmail.com Sun Nov 14 17:59:20 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 09:59:20 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To: References: Message-ID:

Here's a list I put together a long time ago:

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated.

The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we're different from most other animals. We're talking about an area of mindspace way outside what we're familiar with.

The AI Advantage list matters to any AI-driven Singularity. You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are *separate arguments* -- as if I need to say it.

What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director
From rpwl at lightlink.com Sun Nov 14 17:50:30 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sun, 14 Nov 2010 12:50:30 -0500
Subject: [ExI] Singularity
In-Reply-To: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID: <4CE02166.3010707@lightlink.com>

> I still believe that seeing the Singularity as an "event" taking place at a
> given time betrays a basic misunderstanding of the metaphor, only too open to
> the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where the
> predictive ability of our current forecast models and extrapolations
> obviously collapses", it would seem obvious that the singularity is more of
> the nature of a horizon, moving forward with the perspective of the
> observer, than of a punctual event.
>
> The Singularity as an imminent rapture - or
> doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to
> present it these days - can on the other hand easily be deconstructed as a
> secularisation of millenarian myths which have plagued western culture
> since the advent of monotheism.
>
> As such, it should perhaps concern historians of religions and cultural
> anthropologists more than transhumanists or researchers.
>
> --
> Stefano Vaj

I hate to disagree, but ... I could not disagree more. :-)

The most widely accepted meaning of "the singularity" is, as I understood it, completely bound up with the intelligence explosion that is expected to occur when we reach the point that computer systems are able to invent and build new technology at least as fast as we can.

The *point* of the whole singularity idea is that invention is limited, at present, by the fact that inventors (i.e. humans) only live for a short time, and cannot pass on their expertise to others except by the very slow process of teaching up-and-coming humans. When the ability to invent is fully established in computational systems other than humans, we suddenly get the ability to multiply the inventive capacity of the planet by an extraordinary factor. That moment -- that time when the threshold is reached -- is the singularity. The word may be a misnomer, because the curve is actually a ramp function, not a point singularity, but that is just an accident of history.

To detach the idea from all that intelligence explosion context and talk about a time at which our ability to predict the future breaks down is vague and (in my opinion) meaningless. We cannot predict the future NOW, never mind at some point in the future. And there are also arguments that would make the intelligence explosion occur in such a way that the future became much *more* predictable than it is now!

Richard Loosemore

From michaelanissimov at gmail.com Sun Nov 14 17:52:06 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 09:52:06 -0800
Subject: [ExI] Hard Takeoff
Message-ID:

On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote:
> I still believe that seeing the Singularity as an "event" taking place
> at a given time betrays a basic misunderstanding of the metaphor, only
> too open to the sarcasm of people such as Carrico.
> If we go for the original meaning of "the point in the future where
> the predictive ability of our current forecast models and
> extrapolations obviously collapses", it would seem obvious that the
> singularity is more of the nature of a horizon, moving forward with
> the perspective of the observer, than of a punctual event.

We have some reason to believe that a roughly human-level AI could rapidly improve its own capabilities, fast enough to get far beyond the human level in a relatively short amount of time. The reason why is that a "human-level" AI would not really be "human-level" at all -- it would have all sorts of inherently exciting abilities, simply by virtue of its substrate and necessities of construction:

1. ability to copy itself
2. stay awake 24/7
3. spin off separate threads of attention in the same mind
4. overclock helpful modules on-the-fly
5. absorb computing power (humans can't do this)
6. constructed from scratch with self-improvement in mind
7. the possibility of direct integration with new sensory modalities, like a codic modality
8. the ability to accelerate its own thinking speed depending on the speed of available computers

When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks. If it can make $1 of "income" with less than $1 of computing power, you have the ingredients for a hard takeoff.

There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have. Many valuable points are made here; why do people always ignore them?

http://singinst.org/upload/LOGI//seedAI.html

Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility.

> The Singularity as an imminent rapture - or
> doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to
> many to present it these days - can on the other hand easily be
> deconstructed as a secularisation of millenarian myths which have
> plagued western culture since the advent of monotheism.

We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimum, so we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director
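The "$1 of income for less than $1 of computing power" condition in the post above is just a reinvestment loop, and a few lines of Python make the compounding explicit. This is a toy sketch only: the seed budget, the net return per cycle, and the target scale below are illustrative assumptions, not figures from the post. The only point it shows is that any return above break-even compounds exponentially, while anything at or below break-even never takes off.

    # Toy model of the hard-takeoff reinvestment loop described above.
    # All numbers are illustrative assumptions.

    def cycles_to_scale(seed_budget, return_per_dollar, target, max_cycles=10_000):
        """Count rent-compute/earn/reinvest cycles until the budget reaches
        `target`, given `return_per_dollar` of income per dollar of compute
        spent in each cycle."""
        budget, cycles = float(seed_budget), 0
        while budget < target:
            income = budget * return_per_dollar  # spend the budget on compute, collect income
            budget = income                      # reinvest everything in more compute
            cycles += 1
            if cycles >= max_cycles:             # at or below break-even, growth never happens
                return None
        return cycles

    # A 10% net return per cycle turns $1,000 into $1,000,000,000 of compute
    # in about 145 cycles; at 5% it takes about 284; at exactly break-even,
    # it never gets there.
    for r in (1.10, 1.05, 1.00):
        print(r, cycles_to_scale(1_000, r, 1_000_000_000))

Whether any of the eight advantages listed above actually buys a return above break-even is exactly the debate being asked for; the arithmetic itself is not in question.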
He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School. Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Nov 14 18:23:44 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 10:23:44 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <007a01cb8429$12620390$37260ab0$@att.net> Michael! Too long since we heard from you bud. Welcome back! {8-] spike From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov Sent: Sunday, November 14, 2010 9:59 AM To: ExI chat list Subject: Re: [ExI] Hard Takeoff Here's a list I put together a long time ago: http://www.acceleratingfuture.com/articles/relativeadvantages.htm .-- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Nov 14 18:51:35 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 14 Nov 2010 19:51:35 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/14 Michael Anissimov : > The idea of a mind that can copy itself directly is a really huge deal. I am quite interested in the subject, especially since we are preparing an issue of Divenire. Rassegna di Studi Interdisciplinari sulla Tecnica e il Postumano entirely devoted to robotics and AI, and we might be offering you a tribune to present your or SIAI's ideas on the subject. Personally, however, I find the idea of "mind" and "intelligence" presented in the linked post still way too antropomorphic. I am in fact not persuaded that "intelligence" is anything special, mystical or rare, or that human (animal?) brains escape under some aspects or other Wolfram's Principle of Computational Equivalence. Accordingly, "AI" is little more to me than human-like features which have not be practlcally implemented yet in artificial computers - receding in the field of general IT once they are. As to "minds" in the sense above, I suspect that they have little to do with intelligence, and are nothing else than evolutionary artifacts, which of course can be emulated with varying performances - as anything else, for that matter - on any conceivable platform, ending up either with "uploads" of existing individuals, or with purely "artificial", patchwork personalities made up from arbitrary fragments. If this is the case, we can of course implement systems passing not just a Turing-generic test (i.e., systems which cannot be statistically distinguished from human beings in a finite series of exchanges), or a Turing-specific test (i.e., systems which cannot be distinguished from John), or a Turing-categorial test (systems which cannot be distinguished from the average 40-years old serial killer from Washington, DC). All of them exhibiting an "agency" which would otherwise require some billion years of selection of long chains of carbon-chemistry molecules. 
This is per se an interesting experiment, but not so paradigm-changing, since it would appear to me that anything which can be initiated by such an emulation can also be initiated by a flesh-and-bone (or... uploaded) individual with equivalent processing resources, bandwidth and interfaces at his or her fingertips. Especially since it is reasonable to assume that animal brains are already decently optimised for many essentially "animal-like" tasks.

Moreover, as already discussed on a few lists, meaningful concerns about the "risks for the survival of the human race" in a framework where they would become increasingly widespread would require, to escape paradox, a more critical and explicit definition of our concepts of "risk", "survival", "human", "extinction", "race", "offspring", "death", and so forth, as well as of the underlying value system.

--
Stefano Vaj

From aleksei at iki.fi Sun Nov 14 18:28:22 2010
From: aleksei at iki.fi (Aleksei Riikonen)
Date: Sun, 14 Nov 2010 20:28:22 +0200
Subject: [ExI] Singularity
Message-ID:

On Sun, Nov 14, 2010 at 7:03 PM, Stefano Vaj wrote:
> I still believe that seeing the Singularity as an "event" taking place
> at a given time betrays a basic misunderstanding of the metaphor, only
> too open to the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where
> the predictive ability of our current forecast models and
> extrapolations obviously collapses", it would seem obvious that the
> singularity is more of the nature of a horizon, moving forward with
> the perspective of the observer, than of a punctual event.

You should be aware that for a long time, people have not used the word "Singularity" only according to that so-called original use. (Actually not the original, since e.g. John von Neumann talked of a "singularity" much earlier.) So it's not knowledgeable or appropriate of you to imply that that would be what everyone has been talking about. Especially when considering the cases where people have given explicit, careful definitions for what they are talking about.

http://yudkowsky.net/singularity/schools

> The Singularity as an imminent rapture - or
> doom-to-be-avoided-by-listening-to-prophets, as it
> seems cooler to many to present it these days

Who's going for "listening to prophets"? Serious people like Nick Bostrom and the SIAI present actual, concrete steps and measures that need to be taken to minimize risks.

http://www.nickbostrom.com/fut/evolution.html
http://singinst.org/riskintro/index.html

--
Aleksei Riikonen - http://www.iki.fi/aleksei

From max at maxmore.com Sun Nov 14 19:03:00 2010
From: max at maxmore.com (Max More)
Date: Sun, 14 Nov 2010 13:03:00 -0600
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
Message-ID: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com>

Amara: You noted that the paleo diet was optimized for reproduction, but not necessarily longevity, and asked if I had any data on that. Good point, and no, I don't know of data specifically on *maximum* life span. There is plenty of discussion of the average life span of paleolithic and contemporary hunter-gatherers. For paleo people, obviously it's extremely hard to separate out the relative contribution of diet from the tendency to die of injury, infection, and so on. The evidence I've seen suggests that paleolithic people actually lived longer (and were larger and more muscular) than those who superseded them in early agricultural times.
http://www.thepaleodiet.com/faqs/

"most deaths in hunter-gatherer societies were related to the accidents and trauma of a life spent living outdoors without modern medical care, as opposed to the chronic degenerative diseases that afflict modern societies. In most hunter-gatherer populations today, approximately 10-20% of the population is 60 years of age or older. These elderly people have been shown to be generally free of the signs and symptoms of chronic disease (obesity, high blood pressure, high cholesterol levels) that universally afflict the elderly in western societies. When these people adopt western diets, their health declines and they begin to exhibit signs and symptoms of 'diseases of civilization.'"

I think you might find something on the longevity issue in the Michael Rose video. It seems plausible that the paleo diet (and accompanying paleo-style exercise) would be good for adding years of healthy life, especially considering how it reduces markers of aging and improves health according to many measures. Gerontologists often point the finger at AGEs as one major contributing factor to aging, and there's no doubt that a paleo diet reduces production of AGEs. Intermittent fasting (IF) is popular among paleo practitioners, and I've seen intriguing evidence that IF may produce life-extending effects similar to those of caloric restriction.

Online pointers:
http://www.marksdailyapple.com/life-expectancy-hunter-gatherer/
http://www.marksdailyapple.com/hunter-gatherer-lifespan/
http://www.paleodiet.com/life-expectancy.htm
http://www.beyondveg.com/nicholson-w/angel-1984/angel-1984-1a.shtml
http://donmatesz.blogspot.com/2010/02/paleo-life-expectancy.html

Haven't more than glanced at this one:
http://donmatesz.blogspot.com/2010/04/practically-paleo-diet-reduces-markers.html

Max

From natasha at natasha.cc Sun Nov 14 19:05:02 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 13:05:02 -0600
Subject: [ExI] Hard Takeoff
In-Reply-To: References: Message-ID: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1>

Hi Michael, great to hear from you. I looked at your link and have to say that your analysis looks very, very, very much like my Primo Posthuman supposition for the future of brain, mind and intelligence as related to AI and the Singularity. My references are quite similar to yours: Kurzweil, Voss, Goertzel, Yudkowsky, but I also include Vinge from my interview with him in the mid 1990s.

Best,
Natasha

Natasha Vita-More

_____

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov
Sent: Sunday, November 14, 2010 11:59 AM
To: ExI chat list
Subject: Re: [ExI] Hard Takeoff

Here's a list I put together a long time ago:

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated.

The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we're different from most other animals.
We're talking about an area of mindspace way outside what we're familiar with. The AI Advantage list matters to any AI-driven Singularity. You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are separate arguments -- as if I need to say it.

What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

From giulio at gmail.com Sun Nov 14 18:40:02 2010
From: giulio at gmail.com (Giulio Prisco)
Date: Sun, 14 Nov 2010 19:40:02 +0100
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID:

I wish to support Michael here. I don't share many of the SIAI positions and views on the Singularity and the evolution of AGI, but I think they do interesting work and play a useful role. The world is interesting because it is big and varied, with different persons and groups doing their own things with their own focus.

In particular I think the criticism of idiots like Carrico and his handful of followers, mentioned by Stefano, should be ignored. We have better and more interesting things to do.

2010/11/14 Michael Anissimov :
> I resent this, because it implies that everyone at SIAI is as stupid and
> self-deluded as fundamentalist Christians. Hint: we aren't. There's a
> reason we've got as far as we have, and it's through careful arguments that
> appeal to smart people, not cultish arguments that appeal to gullible
> idiots. I'll gladly have an evidence-based debate on this with someone if
> they want to see the substance of our real arguments.
> --
> michael.anissimov at singinst.org
> Singularity Institute
> Media Director

From protokol2020 at gmail.com Sun Nov 14 19:13:34 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Sun, 14 Nov 2010 20:13:34 +0100
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com>
Message-ID:

x,

Show us your plans and views.

Michael Anissimov, 2020 might be too late to begin with something essential.
From max at maxmore.com Sun Nov 14 19:19:51 2010
From: max at maxmore.com (Max More)
Date: Sun, 14 Nov 2010 13:19:51 -0600
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
Message-ID: <201011141919.oAEJJw26028738@andromeda.ziaspace.com>

In reply to Dave Sill:

Your reply again illustrates why I wanted you to read some of the sources. You're assuming I'm advocating rejecting any adjustments to (what we know of) the paleo diet (which itself varied according to people's location and environment). Even a quick look would have shown you that many paleos favor cautious, moderate supplementation, for instance.

http://www.marksdailyapple.com/definitive-guide-to-primal-supplementation/

On the commonalities and variations on the paleo diet:
http://www.paleodiet.com/definition.htm

> I don't think it's particularly Extropian not to apply science and
> technology to our diets.

Now you're telling me what's extropian, and doing so based on a false assumption.

> I'm not quite ready to start living off the land, give up electricity, ...

And no one is suggesting that you do. I posted the information and links so that people could explore this. Let me make it clear that I am not willing to engage in a lengthy set of replies with those who clearly haven't read any of the material. If you find this condescending, sorry. I find your reply condescending too, so that makes us even. :-) See, even this post is already drawing me into a discussion I didn't want to have. I'll try to make it my last.

> Grains may compare unfavorably to lean meat, but an acre of wheat
> produces a lot more food than an acre of pasture. Since more than
> half of all calories currently consumed come from grains, there have
> to be serious issues involved with phasing them out completely.

Serious issues, yes, but perhaps not issues we can't overcome. Jared Diamond complains that "agriculture is the worst mistake in the history of the human race". Loren Cordain seems to think we've put ourselves in a difficult situation by becoming so dependent on agriculture. For an interestingly different perspective on the standard vegetarian position, see this piece that I came across a few weeks ago:

Animal, Vegetable, or E. O. Wilson
http://wattsupwiththat.com/2010/09/11/animal-vegetable-or-e-o-wilson/

> Yes, whole grains are good sources of carbohydrates, protein, fiber,
> phytochemicals, vitamins, minerals, etc.

http://www.thepaleodiet.com/articles/Cereal%20article.pdf page 25. From p. 24:

"All cereal grains have significant nutritional shortcomings which are apparent upon analysis. From table 4 it can be seen that cereal grains contain no vitamin A and, except for yellow maize, no cereals contain its metabolic precursor, beta-carotene. Additionally, they contain no vitamin C or vitamin B12. In most western, industrialized countries, these vitamin shortcomings are generally of little or no consequence, since the average diet is not excessively dependent upon grains and usually is varied and contains meat (a good source of vitamin B12), dairy products (a source of vitamins B12 and A), and fresh fruits and vegetables (a good source of vitamin C and beta-carotene)."

page 26: "However, as more and more cereal grains are included in the diet, they tend to displace the calories that would be provided by other foods (meats, dairy products, fruits and vegetables), and can consequently disrupt adequate nutritional balance.
In some countries of Southern Asia, Central America, the Far East and Africa, cereal product consumption can comprise as much as 80% of the total caloric intake [16], and in at least half of the countries of the world, bread provides more than 50% of the total caloric intake [16]. In countries where cereal grains comprise the bulk of the dietary intake, vitamin, mineral and nutritional deficiencies are commonplace."

I've already provided pointers on the topic, but see Cordain's discussion of anti-nutrients in cereals from page 42.

Apart from replying to Natasha's question, no more time for this. To those interested in exploring further, I have plenty more good information sources if you want them.

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
Co-editor, The Transhumanist Reader
The Proactionary Project
Vice Chair, Humanity+
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From natasha at natasha.cc Sun Nov 14 19:24:20 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 13:24:20 -0600
Subject: [ExI] Hard Takeoff
In-Reply-To: References: Message-ID: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1>

Michael wrote:
> What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility
> of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate
> the probability of the former simply because they never care to look into it very deeply.

This is probably true, because most people don't understand strong AI or what a Singularity is (whether one big event or a series of surges forming a big event over time).

> There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered
> "SIAI-like" and implies that one must be culturally associated with SIAI.

Well, let's look at your last statement. Diverse views about a hard takeoff were around before SIAI. You are correct that SIAI is one well-known organization within transhumanism, but the Singularity is larger than SIAI and has many varied views/theories which are addressed by transhumanists and nontranshumanists. I don't associate with any one line of thinking on the Singularity. I pretty much stick with Vinge and have my own views based on my own research and scenario development.

I think SIAI has done great work and has produced amazing events. The only problem I have ever had with SIAI is that it does not include women like me -- women who have been around for a long time and could contribute something meaningful to the conversation -- outside of Eli's dismissal of women and/or media design as a substantial field of inquiry consequential to our future of AGIs. But you and I have had this conversation several times before, and I see nothing has changed.

By the way, since you applauded a guy who dissed me a couple of years ago for my talk at Goertzel's AI conference, I thought you might like to know that Kevin Kelly has a new book out, _What Technology Wants_, which addresses technology from a similar thematic vantage as I addressed the Singularity and AI in my talk about what AGI wants and its intended consequences.

Nevertheless, you are one of my favorite transhumanists and I admire your work. By the way, this list's discussion on the Singularity was too focused on Eli, and in a disparaging way.
I support and encourage more discussion from varied perspectives, and I think Stefano did a good job of objectively presenting his own views; whether I agree with him or not, his posts are far better than attacks on Eli. Best, Natasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Sun Nov 14 19:26:25 2010 From: aware at awareresearch.com (Aware) Date: Sun, 14 Nov 2010 11:26:25 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/14 Michael Anissimov : > We have some reason to believe that a roughly human-level AI could rapidly > improve its own capabilities, fast enough to get far beyond the human level > in a relatively short amount of time. The reason why is that a > "human-level" AI would not really be "human-level" at all -- it would have > all sorts of inherently exciting abilities, simply by virtue of its > substrate and necessities of construction: > 1. ability to copy itself > 2. stay awake 24/7 > 3. spin off separate threads of attention in the same mind > 4. overclock helpful modules on-the-fly > 5. absorb computing power (humans can't do this) > 6. constructed from scratch with self-improvement in mind > 7. the possibility of direct integration with new sensory modalities, like > a codic modality > 8. the ability to accelerate its own thinking speed depending on the speed > of available computers > When you have a human-equivalent mind that can copy itself, it would be in > its best interest to rent computing power to perform tasks. Michael, what has always frustrated me about Singularitarians, apart from their anthropomorphizing of "mind" and "intelligence", is the tendency, natural for isolated elitist technophiles, to ignore the much greater social context. The vast commercial and military structure supports and drives development providing increasingly intelligent systems, exponentially augmenting and amplifying human capabilities, hugely outweighing not only in height but in breadth the efforts of a small group of geeks (and I use the term favorably, being one myself). The much more significant and accelerating risk is not that of a "recursively self-improving" seed AI going rogue and tiling the galaxy with paper clips or copies of itself, but of relatively small groups of people, exploiting technology (AI and otherwise) disproportionate to their context of values. The need is not for a singleton nanny-AI but for development of a fractally organized synergistic framework for increasing awareness of our present but evolving values, and our increasingly effective means for their promotion, beyond the capabilities of any individual biological or machine intelligence. It might be instructive to consider that a machine intelligence certainly can and will outperform the biological kludge, but MEANINGFUL intelligence improvement entails adaptation to a relatively more complex environment. This implies that an AI (much more likely a human-AI symbiont) poses a considerable threat in present terms, with acquisition of knowledge up to and integrating between existing silos of knowledge, but lacking relevant selection pressure it is unlikely to produce meaningful growth and will expend nearly all its computation exploring irrelevant volumes of possibility space. Singularitarians would do well to consider more ecological models in this Red Queen's race.
- Jef From giulio at gmail.com Sun Nov 14 18:34:31 2010 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 14 Nov 2010 19:34:31 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID: Though I may share your feeling that our intuitive notion of self may need a radical redefinition in the future, in particular after deployment of mind uploading tech, I will continue to feel free to support what you call the "wasted, misguided effort entailed in its survival". G. On Sun, Nov 14, 2010 at 6:59 PM, wrote: > 2010/11/14 Tomaz Kristan : >> The conservatives like you two are doll like those Indians who wanted to >> prevent any Moon landing on the basis of "don't touch our grandmother". >> The warm feeling of ancient wisdom means, you are probably wrong. > > Tomaz, I'm about as far from "conservative" as it gets. My thinking > on human enhancement, transformation and personal identity, and the > systems necessary for supporting such growth is in fact too radical > for the space-cadet mentality that tends to dominate these > discussions. I would suggest the same is true of Stefano. > > For example, if we could ever get past the "conservative" belief in a > discrete, essential self (a soul by any other name), and all the > wasted, misguided effort entailed in its survival, we could move on to > much more productive discussion of increasing awareness of our present > but evolving values, methods for their promotion, and structures of > agency with many more degrees of freedom for ongoing meaningful > growth. > > - Jef > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Sun Nov 14 19:29:18 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 14 Nov 2010 13:29:18 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Nice. You bring n-order cybernetics into the hard takeoff, which I have not seen written about... yet. Michael, what do you think about seeing a hard takeoff through the lens of n-order cybernetics? Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Aware Sent: Sunday, November 14, 2010 1:26 PM To: ExI chat list Subject: Re: [ExI] Hard Takeoff 2010/11/14 Michael Anissimov : > We have some reason to believe that a roughly human-level AI could > rapidly improve its own capabilities, fast enough to get far beyond > the human level in a relatively short amount of time. The reason why > is that a "human-level" AI would not really be "human-level" at all -- > it would have all sorts of inherently exciting abilities, simply by > virtue of its substrate and necessities of construction: > 1. ability to copy itself > 2. stay awake 24/7 > 3. spin off separate threads of attention in the same mind > 4. overclock helpful modules on-the-fly > 5. absorb computing power (humans can't do this) > 6. constructed from scratch with self-improvement in mind > 7. the possibility of direct integration with new sensory modalities, like a codic modality > 8. the ability to accelerate its own thinking speed depending on the speed of available computers > When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks. Michael, what has always frustrated me about Singularitarians, apart from their anthropomorphizing of "mind" and "intelligence", is the tendency, natural for isolated elitist technophiles, to ignore the much greater social context. The vast commercial and military structure supports and drives development providing increasingly intelligent systems, exponentially augmenting and amplifying human capabilities, hugely outweighing not only in height but in breadth the efforts of a small group of geeks (and I use the term favorably, being one myself). The much more significant and accelerating risk is not that of a "recursively self-improving" seed AI going rogue and tiling the galaxy with paper clips or copies of itself, but of relatively small groups of people, exploiting technology (AI and otherwise) disproportionate to their context of values. The need is not for a singleton nanny-AI but for development of a fractally organized synergistic framework for increasing awareness of our present but evolving values, and our increasingly effective means for their promotion, beyond the capabilities of any individual biological or machine intelligence. It might be instructive to consider that a machine intelligence certainly can and will outperform the biological kludge, but MEANINGFUL intelligence improvement entails adaptation to a relatively more complex environment. This implies that an AI (much more likely a human-AI symbiont) poses a considerable threat in present terms, with acquisition of knowledge up to and integrating between existing silos of knowledge, but lacking relevant selection pressure it is unlikely to produce meaningful growth and will expend nearly all its computation exploring irrelevant volumes of possibility space. Singularitarians would do well to consider more ecological models in this Red Queen's race. - Jef _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Sun Nov 14 19:32:21 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 11:32:21 -0800 Subject: [ExI] Singularity (Changed Subject Line) In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1> Message-ID: <00ba01cb8432$a86cb980$f9462c80$@att.net> Please, we are among friends here, smart ones. Do refrain from comments such as "...idiots such as ___ [any person's name]..." This is Extropy chat. We can do better than this. Attack the ideas, not the man. spike -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of ... In particular I think the criticism of idiots ... From bbenzai at yahoo.com Sun Nov 14 19:40:48 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Nov 2010 11:40:48 -0800 (PST) Subject: [ExI] Let's play What If.
In-Reply-To: Message-ID: <826395.808.qm@web114413.mail.gq1.yahoo.com> Stefano Vaj wrote: > BTW, speaking of essentialist paradoxes: take the cloning > operated by > provoking a scission in a totipotent embryo (something > which obviously > does not give place to two half-children, but to a couple > of twins). > > Has the soul - or, for those who prefer to put some secular > veneer on > such concepts, the individual's "identity" - gone extinct > in favour of > two brand-new souls? Has a new soul been added casually to > the one > twin remained deprived of it? Has the original soul > splitted in two > halves? > > What about saying that the question does not have any real > sense? The question doesn't have any real sense, but the significant thing is why. The reason is that a blastocyst has no central nervous system. No CNS, no thoughts. No thoughts, no identity. No identity, no 'soul'. QED. AFAIK, any scission in an embryo far enough along to have a CNS would kill both halves (or if not, each half would have only half a brain, which would be an interesting (though probably tragic) scenario). A more realistic version of your question would be: What is the situation, identity-wise, with someone whose Corpus Callosum has been cut? Are they two distinct people now? Or are they one mentally disabled person, with a fragmented mind? BTW, I find your turn of phrase: "the soul - or, for those who prefer to put some secular veneer on such concepts, the individual's 'identity'...", a little odd. Why a 'secular veneer'? Is secularism not the default position, in your opinion? I'd have expected you to say "the identity - or, for those who prefer to put some supernatural veneer on such concepts, the individual's 'soul'...". Ben Zaiboc From max at maxmore.com Sun Nov 14 19:55:52 2010 From: max at maxmore.com (Max More) Date: Sun, 14 Nov 2010 13:55:52 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com> Natasha asked: >Max, after you respond to Amara, would you please advise me how I >can maintain and even gain weight on the paleo diet? And, how do >you see the issues of how food is grown / raised, that is very >different from "organic" foods? (kiss) It seems to be easy for people who are considerably overweight to slim down at a rate of one to two pounds per week. It seems to be a natural result of the relatively low intake of carbs on a paleo diet. However, even while losing body fat, it's easy to maintain lean body mass (muscle and bone). The paleo diet is not a weight-loss diet, although it can certainly be used for that purpose. You may have formed a slightly misleading impression from me, because I have been aiming (and for another few weeks will aim) to lose body fat while on paleo. I'm aiming to reduce it from my starting level (which was perfectly healthy) down to a very lean 8%. That's for purely aesthetic reasons and isn't at all necessary for health purposes. In pursuit of that goal, I modified the regular paleo diet (in so far as there is an accepted standard) to be considerably lower in carbs. By keeping carbs under 50 g/day, I should be in ketosis with accelerated fat burning. I could probably increase that to between 50 and 100 g and still do well. The more standard paleo/primal diet would have you consuming around 100 to 150 g of carbs, all from vegetables and fruits. (This is compared to the 300+ g (often much higher) of carbs in the average American diet.)
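To put rough numbers on those carb levels, here is a minimal Python sketch (illustrative only: the protein and fat grams are hypothetical placeholders, not advice, and the 4/4/9 kcal-per-gram factors are the standard Atwater conversions):

# Rough share of calories from carbs at the intake levels discussed above.
# Atwater factors: ~4 kcal/g for carbs and protein, ~9 kcal/g for fat.
# The protein and fat grams are hypothetical placeholders.
ATWATER = {"carb": 4, "protein": 4, "fat": 9}

def total_kcal(carb_g, protein_g, fat_g):
    return (car_kcal := carb_g * ATWATER["carb"]) + protein_g * ATWATER["protein"] + fat_g * ATWATER["fat"]

for label, carb_g in [("ketogenic paleo", 50), ("standard primal", 125), ("average American", 300)]:
    kcal = total_kcal(carb_g, protein_g=120, fat_g=100)
    share = 100 * carb_g * ATWATER["carb"] / kcal
    print(f"{label}: {carb_g} g carbs -> {share:.0f}% of {kcal} kcal")

On those illustrative figures, 50 g/day works out to roughly an eighth of total calories, while 300 g/day is nearly half -- which is the gap the paleo argument turns on.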
So, if you want to maintain and even gain weight (so long as it's not mostly fat), you would simply eat more, especially more (healthy) fats, with their higher concentration of calories. I imagine it's *possible* to put on lots of body fat on a paleo diet, but it would be quite a difficult task. If you mean that you want to maintain and gain muscle while perhaps also adding a few pounds of fat (for aesthetic reasons)... well, I don't know. You would have to try it. You might also pose the question on one of the helpful paleo forums. Especially good is Mark Sisson's: http://www.marksdailyapple.com Almost missed the second question: As you know, I have a low opinion of the "organic" label. However, it can sometimes convey useful information and point to superior nutritional sources. I'm not at all convinced of the need or value in buying "organic" fruit or vegetables. The organic label might be useful for eggs, since these may (*may*) come from a source that gives them higher levels of omega-3s. The organic label when applied to animal foods usually means that it comes from a grass-fed source, which it seems produces a more healthy balance of fatty acids. I thought the same was true of fish, if organic implies wild rather than farmed, but an analysis by Loren Cordain suggests otherwise. He says that farmed fish are changing to more closely resemble wild fish. Wild caught fish still have slightly better fatty acid ratios, but not by a lot. At the same time, farmed fish have more of the fatty acids in total, so you can get just as much or more of the omega-3s from farmed fish. So, given the vagueness of "organic", currently (I'm open to new information obviously) it seems more useful and appealing with regard to meat and eggs and not so much for fruit, vegetables, or fish. Max From spike66 at att.net Sun Nov 14 19:29:10 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 11:29:10 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com> References: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com> Message-ID: <00b901cb8432$36e06d20$a4a14760$@att.net> ... From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More ... >...Intermittent fasting (IF) is popular among paleo practitioners, and I've seen intriguing evidence that IF may produce similar life extending effects to caloric restriction...Max I was doing this way back before it was cool. It was for cleaning out the system. Nothing scientific, just eating nothing solid for an entire day, couple, three times a year. Haven't done it in the past 5 years or so. Feels right. spike From agrimes at speakeasy.net Sun Nov 14 19:32:23 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 14 Nov 2010 14:32:23 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE03947.3070806@speakeasy.net> chrome://messenger/locale/messengercompose/composeMsgs.properties: > > wrote: > We have some reason to believe that a roughly human-level AI could > rapidly improve its own capabilities, fast enough to get far beyond the > human level in a relatively short amount of time. 
The reason why is > that a "human-level" AI would not really be "human-level" at all -- it > would have all sorts of inherently exciting abilities, simply by virtue > of its substrate and necessities of construction: OMG, this is the first posting by the substrate fetishist and RYUC priest Anissimov I've read in many long years. =P > 1. ability to copy itself Sufficiently true. nb: requires work by someone with a pulse to provide hardware space, etc... (at least for now). > 2. stay awake 24/7 FALSE. Not implied. The substrate does not confer or imply this property because an uploaded mind would still need to sleep for precisely the same reasons a physical brain does. > 3. spin off separate threads of attention in the same mind FALSE. (same reason as for 2). > 4. overclock helpful modules on-the-fly Possibly true, but strains the limits of plausibility; also, the benefits of this are severely limited. > 5. absorb computing power (humans can't do this) FALSE. Implies scalability of the hardware and software architecture not at all implied by simply residing in a silicon substrate; indeed, this is a major research issue in computer science. > 6. constructed from scratch with self-improvement in mind Possibly true but not implied. > 7. the possibility of direct integration with new sensory modalities, > like a codic modality True, but not unique: the human brain can also integrate with new sensory modalities; this has been tested. > 8. the ability to accelerate its own thinking speed depending on the > speed of available computers True to a limited extent; also, speed is not everything. > When you have a human-equivalent mind that can copy itself, it would be > in its best interest to rent computing power to perform tasks. If it > can make $1 of "income" with less than $1 of computing power, you have > the ingredients for a hard takeoff. Mostly true. Could, would, and should being discrete questions here. > Many valuable points are made here, why do people always ignore them? > http://singinst.org/upload/LOGI//seedAI.html Cuz it's just a bunch of blather that has close to the lowest possible information density of any text written in the English language. Thankfully, the author has since proven that he doesn't have what it takes to actually destroy the world or even cause someone else to do so; it is therefore safe to ignore him and everything he's ever said. > Prediction: most comments in response to this post will again ignore the > specific points in favor of a rapid takeoff and simply dismiss the idea > based on low intuitive plausibility. My plans for galactic conquest rely on the possibility of a hard takeoff, therefore I'm working enthusiastically towards developing AGI myself, with my own robots and hardware. Nothing can stop me! Mwahahahaha, etc, etc... By some combination of building a TARDIS and taking myself a few hundred million lightyears from this insane rock and using all available means to crush the efforts of people who think destructive uploading is acceptable, I might just survive! =P > The Singularity as an incumbent rapture - or > doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to > many to present it these days - can on the other hand easily be > deconstructed as a secularisation of millennarist myths which have > plagued western culture since the advent of monotheism. > We have real, evidence-based arguments for an abrupt takeoff.
One is > that the human speed and quality of thinking is not necessarily any sort > of optimal thing, thus we shouldn't be shocked if another intelligent > species can easily surpass us as we surpassed others. We deserve a real > debate, not accusations of monotheism. My favorite religions: 1. Atheism 2. Autotheism 3. Pastafarianism The possibility of a hard takeoff is entirely independent of the religious and pseudo-religious thought processes abundantly evident on this list. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From michaelanissimov at gmail.com Sun Nov 14 20:21:45 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:21:45 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> References: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> Message-ID: 2010/11/14 Natasha Vita-More > > Well let's look at your last statement. Diverse views about a hard takeoff > were around before SIAI. You are correct that SIAI is one well-known > organization within transhumanism, but the Singularity is larger than SIAI > and has many varied views/theories which are addressed by transhumanists and > nontranshumanists. > Indeed. > I pretty much stick with Vinge and have my own views based on my own > research and scenario development. I think SIAI has done great work and has > produced amazing events. The only problem I have ever had with SIAI is that > it does not include women like me -- women who have been around for a long > time and could contribute something meaningful to the conversation, outside > of Eli's dismissal of women and/or media design as a substantial field o of > inquiry and consequential to our future of AGIs. But you and I have had > this conversation several time before and I see nothing has changed. > Women like Aruna Vassar, Amy Willey, and Anna Salamon, that I work with and communicate with all the time? The staff at SIAI HQ mostly consists of myself, Amy, and Vassar. I just hired a female graphic artist for contract work. > By the way, since you applauded a guy who dissed me a couple of years ago > for my talk at the Goertzel's AI conference, I thought you might like to > know that Kevin Kelly has a new book out _What Technology Wants_, which > addresses technology from a similar thematic vantage as I addressed the > Singularity and AI in my talk about what AGI wants and its intended > consequences. > I never saw your talk at that AI conference, but I'm sorry if I clapped for someone who dissed you. For reasons that have actually been explained by Dale Carrico, I object to the treating of "technology" as a monolithic, personified entity as Kelly does, but I'll probably look into his book eventually anyway. > Nevertheless, you are one of my favorite transhumanists and I admire your > work. > Thanks! > By the way, this list's discussion on the Singularity was too focused on > Eli and in a disparaging way. I support and encourage more discussion from > varied perspectives and I think that Stefano did a good job at this > objectively presenting his own views, whether I agree with him or not they > are far better than attacking Eli. > *puts on cult hat.* Tremendous amounts of electronic ink have been spilled discussing Eliezer, because he is such a fascinating person. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michaelanissimov at gmail.com Sun Nov 14 20:28:42 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:28:42 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Hi Jef, On Sun, Nov 14, 2010 at 11:26 AM, Aware wrote: > > The much more significant and accelerating risk is not that of a > "recursively self-improving" seed AI going rogue and tiling the galaxy > with paper clips or copies of itself, but of relatively small groups > of people, exploiting technology (AI and otherwise) disproportionate > to their context of values. > I disagree about the relative risk, but I'm worried about this too. > The need is not for a singleton nanny-AI but for development of a > fractally organized synergistic framework for increasing awareness of > our present but evolving values, and our increasingly effective means > for their promotion, beyond the capabilities of any individual > biological or machine intelligence. > Go ahead and build one, I'm not stopping you. > It might be instructive to consider that a machine intelligence > certainly can and will outperform the biological kludge, but > MEANINGFUL intelligence improvement entails adaptation to a relatively > more complex environment. This implies that an AI (much more likely a > human-AI symbiont), poses a considerable threat in present terms, with > acquisition of knowledge up to and integrating between existing silos > of knowledge, but lacking relevant selection pressure it is unlikely > to produce meaningful growth and will expend nearly all its > computation exploring irrelevant volumes of possibility space. > I'm having trouble parsing this. Isn't it our job to provide that "selection pressure" (the term is usually used in Darwinian population genetics, so I find it slightly odd to see it used in this context)? > Singularitarians would do well to consider more ecological models in > this Red Queen's race. On a more sophisticated level I do see it as such. Instead of organisms being the relevant unit of analysis, I see mindstuff-environment interactions as being the relevant level. AI will undergo a hard takeoff not by cooperating with the existing ecological context, but by mass-producing its own mindstuff until the agent itself constitutes an entire ecology. The end result is more closely analogous to an alien planet's ecology colliding with our own than to a new species arising within the current ecology. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Sun Nov 14 20:41:29 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:41:29 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1> References: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1> Message-ID: 2010/11/14 Natasha Vita-More > Hi Michael, great to hear from you. > > I looked at your link and have to say that your analysis looks very, very > very much like my Primo Posthuman supposition for the future of brain, mind > and intelligence as related to AI and the Singularity. My references are > quite similar to yours: Kurzweil, Voss, Goertzel, Yudkowsky, but I also > include Vinge from my interview with him in the mid 1990s. > Hi Natasha, thanks for your welcome. Yes, actually, it is. It's kind of a Primo Posthuman for AI minds as opposed to human minds and computer programs.
I love the Primo Posthuman concept and think it should be extended into 3D holographic art projects and sophisticated models. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sun Nov 14 20:42:16 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 14 Nov 2010 14:42:16 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: References: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> Message-ID: Michael wrote: 2010/11/14 Natasha Vita-More Well, let's look at your last statement. Diverse views about a hard takeoff were around before SIAI. You are correct that SIAI is one well-known organization within transhumanism, but the Singularity is larger than SIAI and has many varied views/theories which are addressed by transhumanists and nontranshumanists. Indeed. I pretty much stick with Vinge and have my own views based on my own research and scenario development. I think SIAI has done great work and has produced amazing events. The only problem I have ever had with SIAI is that it does not include women like me -- women who have been around for a long time and could contribute something meaningful to the conversation, outside of Eli's dismissal of women and/or media design as a substantial field of inquiry consequential to our future of AGIs. But you and I have had this conversation several times before and I see nothing has changed. Women like Aruna Vassar, Amy Willey, and Anna Salamon, that I work with and communicate with all the time? The staff at SIAI HQ mostly consists of myself, Amy, and Vassar. I just hired a female graphic artist for contract work. I don't know Aruna, but from my quick scan, she seems to be an investment analyst; I don't know Amy Willey, but from my quick scan, she seems to be a lawyer; I don't know Anna Salamon, but from my quick scan, she is an AI researcher, though I could not find her bio (btw, I love her use of "human extinction risk" rather than the ill-suited phrase "existential risk"). I do not know any of these women from transhumanism. They all seem highly skilled and cool women. BUT I'm not aware of any of them having a background in media design, applied design, media design theory or applied design theory. That was my point. The study of the Singularity and A[G]I research and theory *must be transdisciplinary.* I cannot emphasize that enough. By the way, since you applauded a guy who dissed me a couple of years ago for my talk at Goertzel's AI conference, I thought you might like to know that Kevin Kelly has a new book out, _What Technology Wants_, which addresses technology from a similar thematic vantage to the one I took on the Singularity and AI in my talk about what AGI wants and its intended consequences. I never saw your talk at that AI conference, but I'm sorry if I clapped for someone who dissed you. For reasons that have actually been explained by Dale Carrico, I object to the treating of "technology" as a monolithic, personified entity as Kelly does, but I'll probably look into his book eventually anyway. I don't have anything to say about Carrico. On a different note, it is not a matter of anthropomorphizing technology but of experiencing the cause/effect of technology and placing one's research inside the technology rather than only being an observer. Nevertheless, you are one of my favorite transhumanists and I admire your work. Thanks! Mon pleasure!
Natasha Vita-More MSc, MPhil PhD Researcher, University of Plymouth Board of Directors: Humanity+ Fellow: Institute for Ethics and Emerging Technologies Visiting Scholar: 21st Century Medicine Advisor: Singularity University -------------- next part -------------- An HTML attachment was scrubbed... URL: From x at extropica.org Sun Nov 14 22:05:46 2010 From: x at extropica.org (x at extropica.org) Date: Sun, 14 Nov 2010 14:05:46 -0800 Subject: [ExI] Fwd: Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/14 Michael Anissimov : > On Sun, Nov 14, 2010 at 11:26 AM, Aware wrote: >> The need is not for a singleton nanny-AI but for development of a >> fractally organized synergistic framework for increasing awareness of >> our present but evolving values, and our increasingly effective means >> for their promotion, beyond the capabilities of any individual >> biological or machine intelligence. > > Go ahead and build one, I'm not stopping you. It's already ongoing in the marketplace of ideas, but not as intentionally therefore coherently as should be desired. >> It might be instructive to consider that a machine intelligence >> certainly can and will outperform the biological kludge, but >> MEANINGFUL intelligence improvement entails adaptation to a relatively >> more complex environment. This implies that an AI (much more likely a >> human-AI symbiont), poses a considerable threat in present terms, with >> acquisition of knowledge up to and integrating between existing silos >> of knowledge, but lacking relevant selection pressure it is unlikely >> to produce meaningful growth and will expend nearly all its >> computation exploring irrelevant volumes of possibility space. > > I'm having trouble parsing this. ?Isn't it our job to provide that > "selection pressure" (the term is usually used in Darwinian population > genetics so I find it slightly odd to see it used in this context)? Any "intelligent" system improves by extracting and effectively modeling regularities within its environment of interaction. At some point, corresponding to integration of knowledge apprehended via direct interaction as well as communicated from existing domains as well as information latent between domains, the system will become starved for RELEVANT novelty necessary for further MEANINGFUL growth. (Of course it could continue to apply its prodigious computing power exploring vast reaches of a much vaster mathematical possible space.) Given a static environment, that intelligence would eventually catch up and plateau at some level somewhat higher than that of any preexisting agent. The strategic question is this: Given practical considerations of incomplete specification, combinatorial explosion, rate of information (and effect) diffusion, and effective interaction area as well as first-mover advantage within a complex co-evolving environment, how should we compare the highly asymmetric strengths of the very vertical AI versus a very broad technologically amplified established base? Further, given such a plateau, on what basis could we expect such an AI to act as an effective nanny to humanity? There can be such threats but no such guarantees and to the extent we are looking for protection when none can be found, such effort is wasted and thus wrong. 
- Jef From stefano.vaj at gmail.com Sun Nov 14 23:45:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 00:45:10 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/14 Michael Anissimov : > We have some reason to believe that a roughly human-level AI could rapidly > improve its own capabilities, fast enough to get far beyond the human level > in a relatively short amount of time. The reason why is that a > "human-level" AI would not really be "human-level" at all -- it would have > all sorts of inherently exciting abilities, simply by virtue of its > substrate and necessities of construction: > 1. ability to copy itself > 2. stay awake 24/7 > 3. spin off separate threads of attention in the same mind > 4. overclock helpful modules on-the-fly > 5. absorb computing power (humans can't do this) > 6. constructed from scratch with self-improvement in mind > 7. the possibility of direct integration with new sensory modalities, like > a codic modality > 8. the ability to accelerate its own thinking speed depending on the speed > of available computers What would "human-equivalent" mean? I contend that all the above is basically what every system exhibiting universal computation can do, from cellular automata to organic brains to PCs. At most, it just needs to be programmed to exhibit such behaviours. If we do not take things too literally, such behaviours have already been emergent in contemporary fyborgs for years. What's the big deal? The difference might be increasing performance and accuracy in a number of tasks. This would be welcome, and the "abrupter" the better, as far as I am concerned. Rather, we should keep in mind that such an increase is far from guaranteed, especially in an age where technological development is freezing and real breakthroughs are becoming rarer and rarer, so that it seems indeed weird that many transhumanists are primarily concerned with "steering" what is expected to take place automagically ("gosh, how are we going to protect the ecosystems of extrasolar planets from terrestrial contamination?"), rather than with what needs instead to be made *happen* in the first place. > We have real, evidence-based arguments for an abrupt takeoff. One is that > the human speed and quality of thinking is not necessarily any sort of > optimal thing, thus we shouldn't be shocked if another intelligent species > can easily surpass us as we surpassed others. We deserve a real debate, not > accusations of monotheism. Biological-human "thinking" has just been relatively good for what it was designed for, and "quality" does not have any real meaning out of a specific context. Moreover, the concept of "another species" is indeed quite vague when taken in a diachronic sense - besides being quite "speciesist" per se. We could not interbreed with our remote biological ancestors, and we have no reason to believe that we could forever interbreed with our descendants even if they remained DNA-based forever. So, what do we have to fear? If we are discussing all that from a "self-protection" point of view, my bet is that most of us will be killed by accidents, human murder, disease or old age rather than while being chased down the road by an out-of-control Terminator - whose purpose in engaging in such a sport remains pretty unclear, by the way.
-- Stefano Vaj From stefano.vaj at gmail.com Mon Nov 15 00:09:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:09:36 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/14 Michael Anissimov : > I disagree about the relative risk, but I'm worried about this too. "Risk" is a concept which requires a definition of what is feared, why it is feared and whether it really makes sense to make efforts to avoid it. If you think about it, previous human generations have routinely had the control of society stolen from them by subsequent ones, who have sometimes killed them, other times segregated them in "retirement" roles and institutions, expelled them from creative work, made them dependent on others' decisions, alienated them from their contemporary cultures, and so forth. At the same time, I have never heard such circumstances expounded as a rationale for drastic birth control or the lobotomising of children. Now, while I think that some scenarios with regard to "AGI" are grossly anthropomorphic and delusionary, what exactly could our hypothetical "children of the mind" do worse than our biological children? If anything, "human-mind" emulation and replication technology might end up being more protective of our legacy - see under mind "uploading" - than past history has ever been for our predecessors. Or not. But technology need not be "anthropomorphic" to be dangerous. Perfectly "stupid" computers can be as dangerous as, or more dangerous than, computers emulating some kind of human-like agency, whatever the purpose of the latter might be. -- Stefano Vaj From possiblepaths2050 at gmail.com Mon Nov 15 00:21:14 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:21:14 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID: Michael Anissimov wrote: Marcello Herreshoff is brilliant for any age. Like some other of our Fellows, he has been a top-scorer in the Putnam competition. He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School. Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI. >>> Okay, I am duly impressed with Herreshoff's achievements... Oh, and Michael, your last name is the bane of my existence! I always want to spell it "Annisimov!" lol John : ) On 11/14/10, Giulio Prisco wrote: > Though I may share your feeling that our intuitive notion of self may > need a radical redefinition in the future, in particular after > deployment of mind uploading tech, I will continue to feel free to > support what you call "wasted, misguided effort entailed in its > survival". > > G. > > On Sun, Nov 14, 2010 at 6:59 PM, wrote: >> 2010/11/14 Tomaz Kristan : >>> The conservatives like you two are doll like those Indians who wanted to >>> prevent any Moon landing on the basis of "don't touch our grandmother".
>>> The warm feeling of ancient wisdom means, you are probably wrong. >> >> Tomaz, I'm about as far from "conservative" as it gets. My thinking >> on human enhancement, transformation and personal identity, and the >> systems necessary for supporting such growth is in fact too radical >> for the space-cadet mentality that tends to dominate these >> discussions. I would suggest the same is true of Stefano. >> >> For example, if we could ever get past the "conservative" belief in a >> discrete, essential self (a soul by any other name), and all the >> wasted, misguided effort entailed in its survival, we could move on to >> much more productive discussion of increasing awareness of our present >> but evolving values, methods for their promotion, and structures of >> agency with many more degrees of freedom for ongoing meaningful >> growth. >> >> - Jef >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From stefano.vaj at gmail.com Mon Nov 15 00:22:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:22:21 +0100 Subject: [ExI] Singularity In-Reply-To: <4CE02166.3010707@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1> <4CE02166.3010707@lightlink.com> Message-ID: On 14 November 2010 18:50, Richard Loosemore wrote: > We cannot predict the future NOW, > never mind at some point in the future. And there are also arguments that > would make the intelligence explosion occur in such a way that the future > became much *more* predictable than it is now! Let us take physical singularities. We sometimes have good enough equations describing the evolution of a given system, but only up to a certain point. There are, however, limits where the equations break down, returning infinities, <0 or >1 probabilities, or other results which have no practical sense. This does not imply any metaphysical consequences for such states, but simply indicates the limit where the predictive and descriptive value of our equations stops. I do not believe that we need to resort to any more mystical meaning than this when discussing historical "singularities". In fact, I am inclined to describe past events such as hominisation or the neolithic revolution in exactly such terms. Moreover, historical developments are not to be taken for granted. Stagnation or regression or even *real* extinction (of the kind leaving no successors behind...) are equally plausible scenarios for our societies in the foreseeable future, no matter what is "bound to happen" sooner or later in a galaxy or another given enough time. Especially if... transhumanists are primarily concerned with how to cope with some inevitable parousia rather than with fighting neoluddism, prohibitions, and technological, educational and cultural decadence.
-- Stefano Vaj From hkeithhenson at gmail.com Mon Nov 15 00:27:33 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Nov 2010 17:27:33 -0700 Subject: [ExI] Hard Takeoff Message-ID: Michael wrote: >> Prediction: most comments in response to this post will again ignore the >> specific points in favor of a rapid takeoff and simply dismiss the idea >> based on low intuitive plausibility. Yep. I think "Hard takeoff" and "Rapid takeoff" are pretty much the same thing, set by human perception. And even if the doubling time didn't speed up (which it certainly could), a doubling time of a day or less is probably beyond human ability to even understand what is happening, especially if the AI were moderately sneaky. Some years ago there was a very compact virus that infected (as I recall) Microsoft SQL servers. It fit in a packet under 500 bytes. Once a machine had received one it was zombified and immediately started sending copies of the virus packet to random IP addresses. There were (as I recall) only about 50,000 possible targets on the net. All were infected in a short time. The doubling time (again from memory) was 8.5 seconds. At this rate, it would have taken under 3 minutes. The infection peaked out (clogging the net) before anyone could have reacted. If you had an AI that infected PCs this fast to get processing power, an AI takeoff could be over before people woke up to what was happening. It's a different situation where someone is manufacturing AIs for some purpose, such as the clinic in "The Clinic Seed." In that case the AI had been constructed with roughly human motivations, where the AI's motivational goal was to obtain the high opinion of humans and others of its kind. The AI's population would increase at the rate set by the factory. This doesn't contribute much to your sound points about AI takeoff, but the first is an example of what has happened and how short the timetable might be. Give my best to Eliezer Keith
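Keith's figures, quoted from memory, resemble the 2003 SQL Slammer worm, and the arithmetic checks out. A minimal sketch (assuming pure exponential spread from a single host with a fixed doubling time, and ignoring the saturation that slows the final doublings):

import math

targets = 50_000       # vulnerable hosts on the net (Keith's recollection)
doubling_time = 8.5    # seconds per doubling (Keith's recollection)

doublings = math.log2(targets)   # doublings needed to go from 1 host to all of them
seconds = doublings * doubling_time
print(f"{doublings:.1f} doublings -> {seconds:.0f} seconds (~{seconds/60:.1f} minutes)")

That comes to about 15.6 doublings, or roughly 133 seconds -- comfortably under Keith's three minutes, and far inside any plausible human reaction time.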
From brent.allsop at canonizer.com Mon Nov 15 00:12:11 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 14 Nov 2010 17:12:11 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE03947.3070806@speakeasy.net> References: <4CE03947.3070806@speakeasy.net> Message-ID: <4CE07ADB.8070008@canonizer.com> Hi Michael, Yes, it is fun to see you back on this list. I'm still relatively uneducated about arguments for a "Hard Takeoff". Thanks for pointing these out; I've still got lots of study to do to fully understand them. Thanks for the help. Obviously there is some diversity of opinion about the importance of some of these arguments. It appears this particular hard takeoff issue could be a big reason for our difference of opinions about the importance of friendliness. I think it would be great if we could survey on this particular hard takeoff issue, and find out how closely the breakdown of who is on which side of this issue matches the more general issue of the importance of Friendly AI and so on. We could even create subtopics and rank the individual arguments, such as the ones you've listed here, to find out which ones are the most successful (i.e., acceptable to more people) and which ones are most important. I'll add my comments below, to be included with Alan's and your POV. Brent Allsop On 11/14/2010 12:32 PM, Alan Grimes wrote: > chrome://messenger/locale/messengercompose/composeMsgs.properties: >> > wrote: > >> 1. ability to copy itself > Sufficiently true. > > nb: requires work by someone with a pulse to provide hardware space, > etc... (at least for now). Michael: is your ordering important? In other words, for you, is this the most important argument compared to the others? If so, I would agree that this is the most important argument compared to the others. >> 2. stay awake 24/7 > FALSE. > Not implied. The substrate does not confer or imply this property > because an uploaded mind would still need to sleep for precisely the > same reasons a physical brain does. I would also include the ability to fully concentrate 100% of the time. We seem to be required to do more than just one thing, and to play, have sex... a lot. In addition to sleeping. But all of these, at best, are linear differences, and can be overcome by having 2 or 10... times more people working on a particular problem. >> 3. spin off separate threads of attention in the same mind > FALSE. > (same reason as for 2). > >> 4. overclock helpful modules on-the-fly > Possibly true but strains the limits of plausibility, also benefits of > this are severely limited. > >> 5. absorb computing power (humans can't do this) > FALSE. > Implies scalability of the hardware and software architecture not at all > implied by simply residing in a silicon substrate, indeed this is a > major research issue in computer science. I probably don't fully understand what you mean by this one. To me, all the computer power we've created so far exists only because we can utilize / absorb / benefit from all of it, at least as much as any other computer would. >> 6. constructed from scratch with self-improvement in mind > Possibly true but not implied. > >> 7. the possibility of direct integration with new sensory modalities, >> like a codic modality > True, but not unique, the human brain can also integrate with new > sensory modalities, this has been tested. What is a 'codic modality'? We have a significant diversity of knowledge representation abilities as compared to the mere ones and zeros of computers. I.e., we represent wavelengths of visible light with different colors, wavelengths of acoustic vibrations with sound, hotness/coldness for different temperatures, and so on. And we have great abilities to map new problem spaces into these very capable representation systems, as can be seen by all the progress in the field of scientific data representation / visualization. >> 8. the ability to accelerate its own thinking speed depending on the >> speed of available computers > True to a limited extent, also Speed is not everything. I admit that the initial speed difference is huge. But I agree with Alan that we make up in parallelism, and many other things, for what we lack in speed. And we already seem to be at the limit of hardware speed - i.e., CPU speed has not significantly changed in the last 10 years, right? >> When you have a human-equivalent mind that can copy itself, it would be >> in its best interest to rent computing power to perform tasks. If it >> can make $1 of "income" with less than $1 of computing power, you have >> the ingredients for a hard takeoff. > Mostly true. Could, would, and should being discrete questions here. I would agree that a copy-able human-level AI would launch a take-off, leaving what we have today, to the degree that it is unchanged, in the dust (a toy version of the $1 self-funding condition quoted above is sketched just below). But I don't think achieving this is going to be anything like spontaneous, as you seem to assume is possible. The rate of progress of intelligence is so painfully slow. So slow, in fact, that many have accused great old AI folks like Minsky of being completely mistaken.
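The "$1 of income from less than $1 of computing power" condition is really just a claim about a growth ratio, and a toy model shows why it is all-or-nothing. A minimal sketch (all figures hypothetical): each cycle the AI spends its whole budget on rented compute, earns revenue in proportion, and reinvests, so the budget compounds exponentially if and only if the return per dollar exceeds 1.

# Toy model of the self-funding condition. All numbers are hypothetical.
def budget_after(cycles, start_budget=1000.0, return_per_dollar=1.1):
    budget = start_budget
    for _ in range(cycles):
        # spend everything on compute, earn proportional revenue, reinvest
        budget *= return_per_dollar
    return budget

for r in (0.9, 1.0, 1.1):
    print(f"return per dollar {r}: ${budget_after(50, return_per_dollar=r):,.2f} after 50 cycles")

At 0.9 the budget dwindles to a few dollars; at 1.1 it grows more than a hundredfold over the same 50 cycles. Whether a real AI could sustain a ratio above 1, and for how long, is exactly what this hard-takeoff debate is about.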
I also think we are on the verge of discovering how the phenomenal mind works, represents knowledge, how to interface with it in a conscious way, enhance it and so on. I think such discoveries will greatly speed up this very slow process of approaching human-level AI. And once we achieve this, we'll be able to upload ourselves, or at least fully consciously integrate ourselves with / utilize all the same things artificial systems are capable of, including increased speed, copy ability, the ability to not sleep, and all the others. In other words, I believe anything computers can do, we'll also be able to do within a very short period of time after it is first achieved. The maximum lag between when an AI would gain an ability and when we would also achieve the same ability would be very insignificant compared to any rate of overall AI progress. Brent Allsop From stefano.vaj at gmail.com Mon Nov 15 00:37:24 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:37:24 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID: On 14 November 2010 19:34, Giulio Prisco wrote: > Though I may share your feeling that our intuitive notion of self may > need a radical redefinition in the future, in particular after > deployment of mind uploading tech, I will continue to feel free to > support what you call "wasted, misguided effort entailed in its > survival". Yes. But I know you to have a more concrete, and at the same time a broader, concept of "survival" than some delusionary investment in the fight of generic, undefined "humans" against "robots", if this is what we are talking about. -- Stefano Vaj From possiblepaths2050 at gmail.com Mon Nov 15 00:44:55 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:44:55 -0700 Subject: [ExI] Good background music for a Singularity discussion... Message-ID: I think this will help set the right mood... Any other recommendations? http://singularityhub.com/2010/11/10/post-human-era-transhumanism-music-you-can-dance-to-video/ John : ) From possiblepaths2050 at gmail.com Mon Nov 15 00:56:32 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:56:32 -0700 Subject: [ExI] Steven Spielberg to make a Discovery Channel series about "the future" Message-ID: I have very mixed feelings about some of his recent films, but I think he might really shine in this capacity. I wonder how he will handle the Singularity? We will see... http://singularityhub.com/2010/04/26/spielberg-to-make-a-mini-series-on-the-future-im-already-a-little-skeptical/ John From possiblepaths2050 at gmail.com Mon Nov 15 01:15:43 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 18:15:43 -0700 Subject: [ExI] *Four* Singularity films coming our way... Message-ID: I knew about "The Singularity is Near" and "Transcendent Man," but not the other two films! I'm looking forward to *finally* getting to see these productions...
http://singularityhub.com/2009/08/13/four-singularity-movies-the-world-wants-the-future/ John : ) From possiblepaths2050 at gmail.com Mon Nov 15 02:55:02 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 19:55:02 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE07ADB.8070008@canonizer.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: I must admit that I yearn for a hard take-off singularity that includes the creation a nanny sysop who gets rid of poverty, disease, aging, etc., and looks after every human on the planet, but without establishing a tyranny. I'm not a kid anymore, and so like many transhumanists, I want this to happen at the very latest by 2050, and hopefully a decade before that date! lol And so I hang on Ray Kurzweil's every word and hope his predictions are correct. And just as I wonder if I will make it, I really wonder if *he* will survive long enough to see his beloved Singularity! I envision a scenario where a hard take-off Singularity happens in 2040. I am transformed back into a young man, but with very enhanced abilities, by an ocean of advanced nanotech swarming the world, and develop a limited mind meld with the rest of humanity. A Singularity sysop avatar in the form of a gorgeous nude woman appears to me. My beautiful AI companion and I make love while in orbit and she quickly gives birth to our child. We raise it together as we watch the Earth, society, and the solar system radically transform. I will soon embark on exploring the universe with my family. The experience as I visualize it is one part 2001, and another part Heavy Metal. Anyway, any Singularity I experience may not be quite as cool & corny as the one I picture, but for whatever it is worth, this is what I would like. Now I will go back to watching my favorite music video... http://www.youtube.com/watch?v=-X69aDIFFsc John : ) On 11/14/10, Brent Allsop wrote: > > Hi Michael, > > Yes, it is fun to see you back on this list. > > I'm still relatively uneducated about arguments for a "Hard Takoff". > Thanks for pointing these out, and I've still got lots of study to fully > understand them. Thanks for the help. > > Obviously there is some diversity of opinion about the importance of > some of these arguments. > > It appears this particular hard takoff issue could be a big reason for > our difference of opinions about the importance of friendliness. > > I think it would be great if we could survey for this particular hard > takeoff issue, and find out how closely the break down of who is on > which side of this issue matches the more general issue of the > importance of Friendly AI and so on. > > We could even create sub topics and rank the individual arguments, such > as the ones you've listed here, to find out which ones are the most > successful (ie, acceptable to more people) and which ones are most > important. > > I'll add my comments below to be included with Allan's and your POV. > > Brent Allsop > > > On 11/14/2010 12:32 PM, Alan Grimes wrote: >> chrome://messenger/locale/messengercompose/composeMsgs.properties: >>> > wrote: >> >>> 1. ability to copy itself >> Sufficiently true. >> >> nb: requires work by someone with a pulse to provide hardware space, >> etc... (at least for now). >> > Michael. Is your ordering important? In other words, for you, is this > the most important argument compared to the others? If so, I would > agree that this is the most important argument compared to the others. > >>> 2. 
stay awake 24/7 >> FALSE. >> Not implied. The substrate does not confer or imply this property >> because an uploaded mind would still need to sleep for precisely the >> same reasons a physical brain does. > I would also include the ability to fully concentrate 100% of the time. > We seem to be required to do more than just one thing, and to play, have > sex... a lot. In addition to sleeping. But all of these, at best, are > linear differences, and can be overcome by having 2 or 10... times more > people working on a particular problem. > >>> 3. spin off separate threads of attention in the same mind >> FALSE. >> (same reason as for 2). >> >>> 4. overclock helpful modules on-the-fly >> Possibly true but strains the limits of plausibility, also benefits of >> this are severely limited. >> >>> 5. absorb computing power (humans can't do this) >> FALSE. >> Implies scalability of the hardware and software architecture not at all >> implied by simply residing in a silicon substrate; indeed, this is a >> major research issue in computer science. > I probably don't fully understand what you mean by this one. To me, all > computer power we've created so far is only because we can utilize / > absorb / or benefit from all of it, at least as much as any other > computer would. > >>> 6. constructed from scratch with self-improvement in mind >> Possibly true but not implied. >> >>> 7. the possibility of direct integration with new sensory modalities, >>> like a codic modality >> True, but not unique, the human brain can also integrate with new >> sensory modalities, this has been tested. > > What is a 'codic modality'? We have significant diversity of knowledge > representation abilities as compared to the mere ones and zeros of > computers. For example, we represent wavelengths of visible light with > different colors, wavelengths of acoustic vibrations with sound, > hotness/coldness for different temperatures, and so on. And we have > great abilities to map new problem spaces into these very capable > representation systems, as can be seen by all the progress in the field of > scientific data representation / visualization. > >>> 8. the ability to accelerate its own thinking speed depending on the >>> speed of available computers >> True to a limited extent, also speed is not everything. > > I admit that the initial speed difference is huge. But I agree with > Alan that we make up with parallelism and many other things for what we > lack in speed. And, we already seem to be at the limit of hardware > speed - i.e. CPU speed has not significantly changed in the last 10 > years, right? >>> When you have a human-equivalent mind that can copy itself, it would be >>> in its best interest to rent computing power to perform tasks. If it >>> can make $1 of "income" with less than $1 of computing power, you have >>> the ingredients for a hard takeoff. >> Mostly true. Could, would, and should being discrete questions here. >> > I would agree that a copy-able human level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the > dust. But I don't think achieving this is going to be anything like > spontaneous, as you seem to assume is possible. The rate of progress of > intelligence is so painfully slow. So slow, in fact, that many have > accused great old AI folks like Minsky of being completely mistaken. > > I also think we are on the verge of discovering how the phenomenal mind > works, represents knowledge, how to interface with it in a conscious > way, enhance it and so on.
I think such discoveries will greatly speed > up this very slow process of approaching human level AI. > > And once we achieve this, we'll be able to upload ourselves, or at least > fully consciously integrate ourselves / utilize all the same things > artificial systems are capable of, including increased speed, copy > ability, ability to not sleep, and all the others. In other words, I > believe anything computers can do, we'll also be able to do within a > very short period of time after first achieved. The maximum time lag > between when AI would get an ability and when we would also achieve the > same ability would be insignificant compared to the overall rate of > AI progress. > > > Brent Allsop > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From rpwl at lightlink.com Mon Nov 15 02:57:30 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 14 Nov 2010 21:57:30 -0500 Subject: [ExI] Mathematicians as Friendliness analysts In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <4CE0A19A.1080308@lightlink.com> Michael Anissimov wrote: > On Sat, Nov 13, 2010 at 2:10 PM, John Grigg > wrote: > > > And I noticed he did "friendly AI research" with > a grad student, and not a fully credentialed academic or researcher. > > > Marcello Herreshoff is brilliant for any age. Like some other of our > Fellows, he has been a top-scorer in the Putnam competition. He's been > a finalist in the USA Computing Olympiad twice. He lives and breathes > mathematics -- which makes sense because his dad is a math teacher at > Palo Alto High School. Because Friendly AI demands so many different > skills, it makes sense for people to custom-craft their careers from the > start to address its topics. That way, in 2020, we will have people > who have been working on Friendly AI for 10-15 years solid rather than > people who have been flitting in and out of Friendly AI and conventional AI. Michael, This is entirely spurious. Why gather mathematicians and computer science specialists to work on the "friendliness" problem? Since the dawn of mathematics, the challenges to be solved have always been specified in concrete terms. Every problem, without exception, is definable in an unambiguous way. The friendliness problem is utterly unlike all of those. You cannot DEFINE what the actual problem is, in concrete, unambiguous terms. So, to claim that SIAI is amassing some amazing talent, because your Fellows have been top scorers in the Putnam competition, is like claiming that you can solve the "How To Win Friends and Influence People" problem by gathering together a gang of the most brilliant mathematicians in the world. As ever, this point is not a shallow one: it stems from serious issues to do with the nature of complex systems and the foundations of scientific and mathematical inquiry. But the analogy holds, for all that. There are some things in life that do not reduce to mathematics. And the fact that we are talking about the friendliness of *computers* is a red herring.
Computers may be based on mathematics down at their lowest level, but that level is as thoroughly isolated from the Friendliness (machine motivation) level, as the chemistry of Dale Carnegie's synapses was isolated from his advice about the How to Win Friends and Influence People problem. Richard Loosemore From michaelanissimov at gmail.com Mon Nov 15 03:13:00 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 19:13:00 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE07ADB.8070008@canonizer.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Hi Brent, On Sun, Nov 14, 2010 at 4:12 PM, Brent Allsop wrote: > > >> Michael. Is your ordering important? In other words, for you, is this > the most important argument compared to the others? If so, I would agree > that this is the most important argument compared to the others. It wasn't meant to be, but I think copying is really important, yes. > I would also include the ability to fully concentrate 100% of the time. We > seem to be required to do more than just one thing, and to play, have sex... > a lot. In addition to sleeping. But all of these, at best, are linear > differences, and can be overcome by having 2 or 10... times more people > working on a particular problem > There may be second-order benefits from being able to concentrate longer. To get from one node of an argument or problem to another might require a certain amount of sustained attention, for instance. Any idea requiring longer than 20 or so hours of sustained continuous attention would be inaccessible to humanity. > I probably don't fully understand what you mean by this one. To me, all >> computer power we've created so far is only because we can utilize / absorb >> / or benefit from all of it, at least as much as any other computer would. > > I mean integrating it directly into its brain. For instance, imagine me doubling the amount of processing power in my retina and visual cortex, allowing me to see a much wider range of patterns and detail in the world, just because I chose to add more computing power to it. Or imagine giving more computing power to the concept-manipulating parts of the brain that surely exist but are only understood on a moderate level today. It's hard to say how important it is until we try, but the ability to add computing power directly to the brain is something no animal has ever had, so it's definitely something interesting and potentially important. > > > 6. constructed from scratch with self-improvement in mind >>> >> Possibly true but not implied. >> >> 7. the possibility of direct integration with new sensory modalities, >>> like a codic modality >>> >> True, but not unique, the human brain can also integrate with new >> sensory modalities, this has been tested. >> > > What is a 'codic modality'? We have significant diversity of knowledge > representation abilities as compared to the mere ones and zeros of > computers. For example, we represent wavelengths of visible light with different > colors, wavelengths of acoustic vibrations with sound, hotness/coldness for > different temperatures, and so on. And we have great abilities to map new > problem spaces into these very capable representation systems, as can be > seen by all the progress in the field of scientific data representation / > visualization. I hazard to say it's not the same as having a modality custom-crafted for the specific niche.
We can map all this great stuff, but in something that requires skill and getting it right the first time, it's not the same as having the neural hardware. Really spectacular martial artists probably have "better" motor cortex than us in some ways. Parkinson's patients have a "worse" substantia nigra that leads to pathology. Really good artists probably have slightly "better" brain sections corresponding to visualizing images. These variations take place entirely within the space of human possibilities, and they're still substantial. Imagine neurobiological differences going significantly beyond the human norm. > I admit that the initial speed difference is huge. But I agree with Alan >> that we make up with parallelism and many other things for what we lack in >> speed. And, we already seem to be at the limit of hardware speed - i.e. CPU >> speed has not significantly changed in the last 10 years, right? > > It has: http://en.wikipedia.org/wiki/Megahertz_myth Of course, people have different opinions based on what they're trying to sell, but by and large Moore's law has kept going: http://cosmiclog.msnbc.msn.com/_news/2010/08/31/5012834-researchers-rescue-moores-law http://www.engadget.com/2010/05/03/nvidia-vp-says-moores-law-is-dead/ > I would agree that a copy-able human level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the dust. > But I don't think achieving this is going to be anything like spontaneous, > as you seem to assume is possible. The rate of progress of intelligence is > so painfully slow. So slow, in fact, that many have accused great old AI > folks like Minsky of being completely mistaken. > There's a huge difference between the rate of progress from today to human-level AGI and the time from human-level AGI to superintelligent AGI. They're completely different questions. As for a fast rate, would you still be skeptical if the AGI in question had access to advanced molecular manufacturing? -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Mon Nov 15 03:47:11 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 19:47:11 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Thanks Keith, this is definitely relevant to my argument. And if this sort of thing is possible today, imagine how much more empowering it could be in a future where computers, robotics, manufacturing, and other critical infrastructure are even more closely intertwined. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 04:20:16 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:20:16 -0800 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: On Nov 12, 2010, at 1:29 AM, Will Steinberg wrote: > The thing is, collected here in the ExI chat list are a pretty handy set of thinkers/engineers, spread around the world (sort of.) In fact, I can generalize this fact to say that almost all of the people interested in this movement fall into that category as well. Now look. This is a present dropped into your lap.
Instead of only discussing lofty ideals and philosophy, we (H+) should focus on the engineering of tools which will eventually be very important in the long run for humanity, and for our goals in particular. Sounds great. Start in on it. > > List of tools we need to invent/things we need to do: > > -A very good bidirectional speech-to-speech translator. For spreading the gospel, once H+ wisens up enough to start including the proletariat. > *scratches head* What does that have to do with H+? It is a good thing to have for a whole lot of reasons much broader than H+ specific ones. > -Neoagriculture. This would mean better irrigation systems, GMO crops that can easily harness lots of sun energy and produce more food, maybe machines/instructions for diy fertilizer. same comment. > > -Better Grid--test experimental grid where people opt to operate, on property, efficient windmills/solar panels/any electricity they can make for $$$ same comment. > > -Housing projects that work, or some sort of thing where you pay people to build their own house/project building. same comment. > > -Fulfilling jobs for proles that also help society/space travel/humanism/H+. I see fulfilling work as a function of skills, clarity of values, self-discipline, freedom and economy. Which parts of those things do you propose to address how exactly? Do you think "fulfilling jobs" can just be created by fiat top-down? > > -So many more, I know you can think of some! I bet you have pet projects like these. Ideas, at least. Are you claiming that any pet projects any h+ people have are practical means for reaching what may be distinguished as H+ goals? > > > By Le Châtelier's principle, improving these fucked up problems that exist for much of society will give us much more leeway and ability to do transhumanisty things, AND we can do them in the meantime. It has to happen eventually, unless you have some fancy vision of the H+ elect ascending to cyberheaven and leaving everyone else behind. Oh, so we have to fix everything in the world before we can address "transhumanisty things"? No thanks. > Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to problematic regions, experimenting with those tools up there. Everyone will love H+. The movement will have lots of social power and then we can get shit done. Right? Nope. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 04:32:34 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:32:34 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> On Nov 12, 2010, at 8:03 AM, Richard Loosemore wrote: > Singularity Utopia wrote: >> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, that's exactly the info I needed. >> John Grigg, you say I may not be allowed to stay long on the SL4 list? Why is this, are Singularitarians an intolerant group leaning towards fascism? > > Er.... you may be misunderstanding the situation. ;-) > > You will be unwelcome and untolerated on SL4, because: > > a) The singularity is, for Eliezer, a power struggle. It is a matter of which personality "owns" these ideas .... who determines the agenda, who is seen as the pre-eminent power broker .... who has the largest army of volunteers to spread the message.
And in that situation, you, my friend, are a Threat. Even if your ideas were more sensible than his you would be attacked and denounced, for the simple reason that you would not be meekly conforming to the standard view of the singularity (as defined by The Wise One). Funny. I have disagreed and argued with Eliezer for many years without ever getting kicked out of anything, including SL4. I have never known him to exhibit this simplistic egoism you accuse him of. I have known him to actually make a point of saying when something I or someone else said that was contrary to his opinion turns out to have something of value in it. Eliezer in my experience is quite willing to admit when he is wrong. He even goes out of his way to say that he was quite mistaken at various times or in various past writings. > Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid. > I don't think so. Eliezer says the problem is important and has dedicated himself to addressing it. I imagine he would be quite delighted if others did so also, using approaches that may be different from what he is exploring. Why use only one approach to such a serious problem domain? > So have fun on SL4, if there is anything left of it. If you don't actually get banned within a couple of months it will be because SL4 is (as John Clark claims) actually dead, and nobody gives a damn what you say there. > Again, I was on SL4 pretty much from the beginning and certainly was not any sort of cultist or yes-woman. So how come I wasn't banned if your characterization is valid? And no, this isn't an invitation to revisit just how much you feel you were wronged by Eliezer in the past. - s From sjatkins at mac.com Mon Nov 15 04:40:49 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:40:49 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> On Nov 12, 2010, at 2:33 PM, BillK wrote: > On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: > >> As Eliezer notes on his homepages that you have read, the primary way >> to contact him is email. It's just that he gets so much email, >> including from a large number of crazy people, that he of course >> doesn't answer them all. (You, unfortunately, are one of those crazy >> people who pretty surely will be ignored. So in the end, on this >> matter it would be appropriate of you to accept that -- like all >> people -- Eliezer should have the right to choose who he spends his >> time talking to, and that he most likely would not want to correspond >> with you.) >> >> > > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles > Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me. This is indeed a very sensible request.
I am a bit annoyed by the number of times I have attempted to refer to various papers in talks with SIAI people only to be told that that paper or statement is "now obsolete" without being offered any up-to-date versions. I have heard that the CEV is either "out-of-date" or still the main idea/goal so many times that I don't know what to believe about it except that the SIAI hasn't kept its own position documents and working theories up to date. - samantha From sjatkins at mac.com Mon Nov 15 04:48:18 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:48:18 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: > On Sat, Nov 13, 2010 at 12:33 AM, BillK wrote: >> >> As I understand SU's request, she doesn't particularly want to enter a >> dialogue with Eliezer. Her request was for an updated version of The >> Singularitarian Principles >> Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. >> >> Perhaps someone could mention this to Eliezer or point her to more >> up-to-date writing on that subject? Doesn't sound like an >> unreasonable request to me. > > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Hardly. I cannot speak for this Institute. How would my writing such a thing be anything but my opinion? I want to know what the SIAI current positions are. What is its current formulation of what a FAI is and how it may be attained? What are its current definitions of Friendliness in hopefully implementable and testable terms? What sort of AGI or recursively optimizing procedure or whatever does it propose to create? What means does it advocate to avoid unfriendly AGI? Does it seek a singleton AGI (or equivalent) or peer AGIs and why? > Eliezer has no magical authority on the > topic, that would necessitate that it should be him. (Also, I doubt > Eliezer thinks it important for a new version to exist.) > An organization that claims its sole purpose is the attainment of a safe and Friendly-AGI-driven singularity or to at least avoid UFAI is under no obligation to state what its current thinking and position is? If it does not then why would anyone take it seriously (at least in those stated goals) at all? - s From sjatkins at mac.com Mon Nov 15 04:52:52 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:52:52 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <8B400EE3-0BA2-44F2-A935-A990DA7EA268@mac.com> On Nov 12, 2010, at 3:04 PM, BillK wrote: > On Fri, Nov 12, 2010 at 10:44 PM, Aleksei Riikonen wrote: >> If people want a new version of Singularitarian Principles to exist, >> they can write one themselves. Eliezer has no magical authority on the >> topic, that would necessitate that it should be him. (Also, I doubt >> Eliezer thinks it important for a new version to exist.) >> >> (And if people just want newer things that Eliezer has written, just >> check his homepage.) >> >> > > > I don't disagree with you at all, as I agree with your opinion that > Eliezer has no magical authority on that topic. > > It just seems very unhelpful to abuse enquirers and tell them to use > Google.
If visitors make a persistent nuisance of themselves, > perhaps, but it doesn't seem the best attitude to start off with. > Nor is it helpful to be told to read all of Less Wrong. This has actually been suggested when I asked where to find current position and theory papers. - s From aleksei at iki.fi Mon Nov 15 05:15:14 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Mon, 15 Nov 2010 07:15:14 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Mon, Nov 15, 2010 at 6:48 AM, Samantha Atkins wrote: > On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: > >> If people want a new version of Singularitarian Principles >> to exist, they can write one themselves. > > Hardly. I cannot speak for this Institute. How would my writing > such a thing be anything but my opinion? No matter who would write such a document, it's just an opinion. There is currently no codified "ideology of singularitarianism" that would be owned by any single Institute. Eliezer and other SIAI folks seem to like it that way, so there likely will not be a codified document of Singularitarian principles coming from their direction. So if there are people who want such a codified ideology, they're going to have to codify it themselves. > I want to know what the SIAI current positions are. That's a different thing than wanting them to present a codified ideology. Just read their recent publications. This is a good start: http://singinst.org/riskintro/index.html -- Aleksei Riikonen - http://www.iki.fi/aleksei From sjatkins at mac.com Mon Nov 15 05:27:53 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 21:27:53 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <1DD7819C-87D9-43B1-AA99-E9AF0FDB1C73@mac.com> On Nov 13, 2010, at 5:16 AM, John Grigg wrote: > Richard Loosemore wrote: > You have no idea how entertaining it is to hear professionally qualified > cognitive psychologists, complex systems theorists or philosophers of > science commenting on Eliezer's level of competence in these areas. Not > many of them do, of course, because they can't be bothered. But among > the few who have actually taken the trouble, I am afraid the poor guy is > generally scorned as a narcissistic, juvenile amateur. >>>> > > > Eliezer (I once called him Eli in a post and he responded with, "only > friends get to call me that") is in my view a very bright fellow, but > I find it a tragedy that he did not attend college and get an advanced > degree in something along the lines of artificial > intelligence/neuro-computation. > > > I feel he has doomed himself to not being a "heavy hitter" like Robin > Hanson, James Hughes, Max More, or Nick Bostrom, due to his lacking > in this regard. I realize he has his loyal pals and many friends within > transhumanism, but I suspect his success in the much larger world has > been greatly blunted due to his stubborn refusal to earn academic credentials.
> And I have to chuckle at his notion that the Singularity would be right around > the corner and so why should he even bother? LOL I really don't think being a "heavy hitter" is a matter of degrees one has accumulated. There are too many very heavy hitters without such credentials for this to be so. There are also many heavies in fields that have nothing to do with the degree or degrees that they do have. There is no directly relevant degree for FAI. There are many fields of knowledge that are relevant. Which would you pick to specialize enough in to get a relevant higher degree? This is not to say I have anything against such credentials. If I were younger I might be more tempted to pick up such myself. The education system unfortunately does not make it easy to do that. There are too many irrelevant hoops and too much incompressible time required in most current US programs. If you have a sense of mission as Eliezer has from a very young age it can be very difficult to justify years spent on some subsection of the relevant material just to get a credential that may or may not make you any more likely to succeed. - samantha From possiblepaths2050 at gmail.com Mon Nov 15 05:27:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 22:27:36 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Brent Allsop wrote: I would agree that a copy-able human level AI would launch a take-off, leaving what we have today, to the degree that it is unchanged, in the dust. But I don't think achieving this is going to be anything like spontaneous, as you seem to assume is possible. The rate of progress of intelligence is so painfully slow. So slow, in fact, that many have accused great old AI folks like Minsky of being completely mistaken. >>> Michael Anissimov replied: There's a huge difference between the rate of progress from today to human-level AGI and the time from human-level AGI to superintelligent AGI. They're completely different questions. As for a fast rate, would you still be skeptical if the AGI in question had access to advanced molecular manufacturing? >>> I agree that self-improving AGI with access to advanced manufacturing and research facilities would probably be able to bootstrap itself at an exponential rate, rather than the speed at which humans created it in the first place. But the "classic scenario" where this happens within minutes, hours or even days and months seems very doubtful in my view. Am I missing something here? John On 11/14/10, Michael Anissimov wrote: > Hi Brent, > > On Sun, Nov 14, 2010 at 4:12 PM, Brent Allsop > wrote: >> >> >>> Michael. Is your ordering important? In other words, for you, is this >> the most important argument compared to the others? If so, I would agree >> that this is the most important argument compared to the others. > > > It wasn't meant to be, but I think copying is really important, yes. > > >> I would also include the ability to fully concentrate 100% of the time. >> We >> seem to be required to do more than just one thing, and to play, have >> sex... >> a lot. In addition to sleeping. But all of these, at best, are linear >> differences, and can be overcome by having 2 or 10... times more people >> working on a particular problem >> > > There may be second-order benefits from being able to concentrate longer.
> To get from one node of an argument or problem to another might require a > certain amount of sustained attention, for instance. Any idea requiring > longer than 20 or so hours of sustained continuous attention would be > inaccessible to humanity. > > >> I probably don't fully understand what you mean by this one. To me, all >>> computer power we've created so far is only because we can utilize / >>> absorb >>> / or benefit from all of it, at least as much as any other computer >>> would. >> >> > I mean integrating it directly into its brain. For instance, imagine me > doubling the amount of processing power in my retina and visual cortex, > allowing me to see a much wider range of patterns and detail in the world, > just because I chose to add more computing power to it. Or imagine giving > more computing power to the concept-manipulating parts of the brain that > surely exist but are only understood on a moderate level today. It's hard > to say how important it is until we try, but the ability to add computing > power directly to the brain is something no animal has ever had, so it's > definitely something interesting and potentially important. > > >> >> >> 6. constructed from scratch with self-improvement in mind >>>> >>> Possibly true but not implied. >>> >>> 7. the possibility of direct integration with new sensory modalities, >>>> like a codic modality >>>> >>> True, but not unique, the human brain can also integrate with new >>> sensory modalities, this has been tested. >>> >> >> What is a 'codic modality'? We have significant diversity of knowledge >> representation abilities as compared to the mere ones and zeros of >> computers. For example, we represent wavelengths of visible light with different >> colors, wavelengths of acoustic vibrations with sound, hotness/coldness >> for >> different temperatures, and so on. And we have great abilities to map new >> problem spaces into these very capable representation systems, as can be >> seen by all the progress in the field of scientific data representation / >> visualization. > > > I hazard to say it's not the same as having a modality custom-crafted for > the specific niche. We can map all this great stuff, but in something that > requires skill and getting it right the first time, it's not the same as > having the neural hardware. Really spectacular martial artists probably > have "better" motor cortex than us in some ways. Parkinson's patients have > a "worse" substantia nigra that leads to pathology. Really good artists > probably have slightly "better" brain sections corresponding to visualizing > images. These variations take place entirely within the space of human > possibilities, and they're still substantial. Imagine neurobiological > differences going significantly beyond the human norm. > > >> I admit that the initial speed difference is huge. But I agree with Alan >>> that we make up with parallelism and many other things for what we lack in >>> speed. And, we already seem to be at the limit of hardware speed - i.e. >>> CPU >>> speed has not significantly changed in the last 10 years, right?
>> >> > It has: > > http://en.wikipedia.org/wiki/Megahertz_myth > > Of course, people have different opinions based on what they're trying to > sell, but by and large Moore's law has kept going: > > http://cosmiclog.msnbc.msn.com/_news/2010/08/31/5012834-researchers-rescue-moores-law > http://www.engadget.com/2010/05/03/nvidia-vp-says-moores-law-is-dead/ > > >> I would agree that a copy-able human level AI would launch a take-off, >> leaving what we have today, to the degree that it is unchanged, in the >> dust. >> But I don't think achieving this is going to be anything like >> spontaneous, >> as you seem to assume is possible. The rate of progress of intelligence >> is >> so painfully slow. So slow, in fact, that many have accused great old AI >> folks like Minsky of being completely mistaken. >> > > There's a huge difference between the rate of progress from today to > human-level AGI and the time from human-level AGI to superintelligent > AGI. They're completely different questions. As for a fast rate, would you > still be skeptical if the AGI in question had access to advanced molecular > manufacturing? > > -- > michael.anissimov at singinst.org > Singularity Institute > Media Director > From lists1 at evil-genius.com Mon Nov 15 05:54:57 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sun, 14 Nov 2010 21:54:57 -0800 Subject: [ExI] Errors in the Cordain paper (was Paleo/Primal health...) In-Reply-To: References: Message-ID: <4CE0CB31.3020105@evil-genius.com> Max: Thank you for jumping in and tackling the paleo information deficit. I deliberately started with the Cordain AJCN paper because I wasn't looking forward to dealing with the ****storm that often results when the term 'paleo' is brought in, and hoped to slide it in sideways by starting with the peer-reviewed science. I was pleasantly surprised to learn that I wasn't going to be the lone standard-bearer. However, I must note that the Cordain paper makes a *huge*, and very significant, series of factual errors in the section entitled "Fatty domestic meats" -- and he carries those errors forward to this day, as do several other paleo advocates. This article describes the errors in detail: http://www.gnolls.org/715/when-the-conclusions-dont-match-the-data-even-loren-cordain-whiffs-it-sometimes-because-saturated-fat-is-most-definitely-paleo/ His acid/base balance theory is also somewhat shaky, though its net effects (increased vegetable/fruit consumption) are most likely beneficial. From spike66 at att.net Mon Nov 15 06:10:33 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 22:10:33 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <013801cb848b$cfd192d0$6f74b870$@att.net> On Behalf Of John Grigg >...But the "classic scenario" ... seems very doubtful in my view...Am I missing something here? John No Johnny, rather you are getting something here, something very fundamental: the uncertainty inherent in AI research. This has really bothered me since about the mid-90s when I was introduced to the notion of the singularity: the inherent uncertainty is often downplayed. If we want to go with the plutonium analogy, we have some immediate problems. There were unknowns of course, but the behavior of a critical mass of plutonium could be calculated, with slipsticks, nuclear cross section tables, and the results of a number of lab tests. The scientists could model the feedback loops, estimate closely the outcome.
They could calculate the risk of igniting the atmosphere and destroying all life on the planet, for instance. The results of the tests at the Trinity site didn't surprise the scientists present. They were awed to the core of their beings, but not surprised. Intelligence in any substrate is far less predictable. Put a bunch of really smart scientists together and it becomes wildly unpredictable what will happen. Consider my energetic reaction to Singularity Utopia. He or she went on and on about how everything would be just grand. I now realize that she may have been a creative singularity-phobe who is making a point, a good one, by posing as a wild-eyed singularity-phile. All the megalomania could have been a kind of over-the-top satire, to point to the great danger of running into a danger zone with wild-eyed optimism. Alternately she could be exactly what she wrote, in which case she made a damn good point anyway, although not the one she intended. I am not advocating a Bill Joy approach of eschewing AI research, just the opposite. A no-singularity future is 100% lethal to every one of us, every one of our children and their children forever. A singularity gives us some hope, but also much danger. The outcome is far less predictable than nuclear fission. Good luck to us. spike From hkeithhenson at gmail.com Mon Nov 15 06:40:31 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Nov 2010 23:40:31 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Sun, Nov 14, 2010 at 9:41 PM, Michael Anissimov wrote: > Thanks Keith, this is definitely relevant to my argument. And if this sort > of thing is possible today, imagine how much more empowering it could be in > a future where computers, robotics, manufacturing, and other critical > infrastructure are even more closely intertwined. True. One of my more freaky realizations in the last year is that certain classes of finance are beyond human abilities. In reading up on the big drop in the stock market early this year, it became clear that unaided humans are not in the running for the kind of "finance" that certain computer programs do. Currently the people who write the descriptions of how computers should make money in the market using short time (ms) trades have been bitching that they are not getting enough of the money these things make. Well, the obvious course of events is that someone programs one of these to run the whole thing, including bank deposits, and gives one of them a small stake to work with, then cuts it loose with instructions to spawn new versions and accounts. So if you later wonder how the AIs cornered the world's capital, I mentioned it first. :-) Keith From sjatkins at mac.com Mon Nov 15 07:17:06 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 23:17:06 -0800 Subject: [ExI] Singularity was EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: On Nov 11, 2010, at 9:59 AM, Keith Henson wrote: > > > > It's so hard to understand the ramifications of what nanotech and AI > will be able to do in the context of human desires that I had to > resort to fiction to express it. > > http://www.terasemjournals.org/GN0202/henson.html Enjoyed it. More, please!
- s From lists1 at evil-genius.com Mon Nov 15 07:34:10 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sun, 14 Nov 2010 23:34:10 -0800 Subject: [ExI] Gaining weight on paleo, and fat balance In-Reply-To: References: Message-ID: <4CE0E272.9050005@evil-genius.com> > Natasha asked: > >> >Max, after you respond to Amara, would you please advise me how I >> >can maintain and even gain weight on the paleo diet? I'm not Max, but I can offer some insight: -Few foods remain unimproved by the addition of avocado slices, a fried egg, or both. -Root vegetables (particularly yams and sweet potatoes) are not impermissible. The most recent research (based on isotopic data) I've seen indicates that Late Pleistocene hunter-forager diets were approximately 1/3 hunted meat (antelope, other big game), 1/3 non-hunted meat (fish, insects, etc.), and 1/3 vegetables and roots. Since the calorie content of vegetables is low, roots likely accounted for a significant portion of the 1/3. "In this review we have analyzed the 13 known quantitative dietary studies of hunter-gatherers and demonstrate that animal food actually provided the dominant (65%) energy source, while gathered plant foods comprised the remainder (35%)." http://www.ncbi.nlm.nih.gov/pubmed/11965522 Fair warning: you will get into big arguments over this amongst paleo purists. However, since your objective is to gain weight, not to lose it, some root starches will help you maintain that objective while still staying away from gluten/gliadin. In other words, the old-school American breakfast of steak or bacon, eggs, and potatoes is basically paleo -- so long as the potatoes are fried in the steak fat or in butter, and not in an industrial product like 'vegetable oil' (a misnomer: actually 'grain oil'). And for those of you who aren't ready to go full paleo but want as many of the health, energy, and attitude benefits as possible, removing anything containing 'vegetable oil' from your diet is a great start. Corn oil, soybean oil, cottonseed/sunflower/safflower/canola oil == extremely high in n-6 polyunsaturated fatty acids. Even olive oil should be used lightly and in moderation due to n-6 content. I can expand on this if people are interested: altering the n-3/n-6 balance accounts for many of the beneficial effects of a paleo/primal diet. From pharos at gmail.com Mon Nov 15 09:40:59 2010 From: pharos at gmail.com (BillK) Date: Mon, 15 Nov 2010 09:40:59 +0000 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: On Mon, Nov 15, 2010 at 6:40 AM, Keith Henson wrote: > True. One of my more freaky realizations in the last year is that > certain classes of finance are beyond human abilities. In reading up > on the big drop in the stock market early this year, it became clear > that unaided humans are not in the running for the kind of "finance" > that certain computer programs do. > > Currently the people who write the descriptions of how computers > should make money in the market using short time (ms) trades have been > bitching that they are not getting enough of the money these things > make. > > Well, the obvious course of events is that someone programs one of > these to run the whole thing, including bank deposits and gives one of > them a small stake to work with then cuts it loose with instructions > to spawn new versions and accounts. > > So if you later wonder how the AIs cornered the world's capital, I > mentioned it first. :-) > > Won't happen until the Singularity, when all bets are off anyway.
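For what it's worth, the compounding arithmetic behind Keith's scenario is easy to sketch in Python (the starting stake and the daily return below are pure assumptions for illustration, not market data):

# Toy compounding sketch; every number here is an assumption.
stake = 1000.0          # assumed starting stake, in dollars
daily_return = 0.01     # assumed 1% net return per trading day
trading_days = 250 * 5  # roughly five years of trading days

for _ in range(trading_days):
    stake *= 1.0 + daily_return
print("stake after ~5 years: $%.3g" % stake)   # ~ $2.5e8

On those assumptions, a steady one percent a day turns a thousand dollars into roughly a quarter of a billion in five years, before any spawned copies multiply the effect.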
The point of these trading programs is to assist in making a few people very, very rich and the great majority poor and unemployed. Working great so far. (But surely the burning torches and pitchforks can't be far away, can they?) BillK From rpwl at lightlink.com Mon Nov 15 14:08:06 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 15 Nov 2010 09:08:06 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: <4CE13EC6.5090200@lightlink.com> Samantha Atkins wrote: > Again, I was on SL4 pretty much from the beginning and certainly was > not any sort of cultist or yes-woman. So how come I wasn't banned if > your characterization is valid? And no, this isn't an invitation to > revisit just how much you feel you were wronged by Eliezer in the > past. Samantha, As I pointed out in a post the other day that you (pointedly) ignored: I was banned from SL4 immediately after I suggested that Eliezer's AND my own comments be put in front of an outside expert in cognitive science. Since the debate between Eliezer and myself was about a particular technical issue in cognitive science, that would have been a perfect way for onlookers to assess whether Eliezer's comments really were as ridiculous as I said they were. As soon as I made that suggestion, he banned me from SL4, wrote several defamatory essays about me, and then forbade anyone on SL4 from discussing the matter further. Why have *you* never been banned? I don't know: perhaps because you don't have enough knowledge to challenge his core beliefs and defend yourself successfully. Richard Loosemore From hkeithhenson at gmail.com Mon Nov 15 15:31:44 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 15 Nov 2010 08:31:44 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Mon, Nov 15, 2010 at 5:00 AM, John Grigg wrote: > > Brent Allsop wrote: > I would agree that a copy-able human level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the > dust. But I don't think achieving this is going to be anything like > spontaneous, as you seem to assume is possible. The rate of progress > of intelligence is so painfully slow. So slow, in fact, that many > have accused great old AI folks like Minsky of being completely > mistaken. >>>> > > Michael Anissimov replied: > There's a huge difference between the rate of progress from today > to human-level AGI and the time from human-level AGI to > superintelligent AGI. They're completely different questions. As for > a fast rate, would you still be skeptical if the AGI in question had > access to advanced molecular manufacturing? > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here? What does an AI mainly need? Processing power and storage. If there are vast amounts of both that can be exploited, then all you need is a storage estimate for the AI and the average bandwidth between storage locations to determine the replication rate.
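A quick back-of-envelope sketch of that arithmetic, as a Python fragment (the mind-image size and link speed below are assumed round numbers for illustration, not measurements):

# Replication-rate sketch; both inputs are assumptions, not data.
image_gb = 1.0       # assumed size of the AI's stored state
link_mbps = 10.0     # assumed average bandwidth between hosts, ca. 2010

copy_seconds = image_gb * 8 * 1024 / link_mbps   # 1 GB = 8192 megabits
print("one copy: %.0f s (~%.1f minutes)" % (copy_seconds, copy_seconds / 60))

# If each copy starts copying in turn, the population doubles once per
# copy-time, so after n copy-times there are 2**n instances:
for n in (1, 10, 20):
    print("after %2d copy-times: %7d instances" % (n, 2 ** n))

On those assumptions one copy takes under fifteen minutes, and twenty doublings (about five hours) give a million instances; bandwidth, not thinking speed, sets the spreading rate.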
Human memory is thought to be in the few hundreds of M bytes. How long does it take to copy a G byte over the net nowadays? BillK wrote: > > On Mon, Nov 15, 2010 at 6:40 AM, Keith Henson wrote: snip >> So if you later wonder how the AIs cornered the world's capital, I >> mentioned it first. :-) > > Won't happen until the Singularity, when all bets are off anyway. > > The point of these trading programs is to assist in making a few > people very, very rich and the great majority poor and unemployed. > Working great so far. I can easily see a disgruntled programmer writing this as retaliation against a hated boss. > (But surely the burning torches and pitchforks can't be far away, can they?). That is _so_ 17th century. Surely you can think of something better. Keith From sjatkins at mac.com Mon Nov 15 16:56:52 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 08:56:52 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID: <3E98024C-9C8C-4D2C-8F54-A0355E058FFD@mac.com> On Nov 14, 2010, at 9:03 AM, Stefano Vaj wrote: > On 14 November 2010 02:22, Damien Broderick wrote: >> Extrope Dan Clemmensen posted here around 15 years ago his conviction that >> the Singularity would happen "before 1 May, 2006" (the net would "wake up"). >> Bad luck. > > I still believe that seeing the Singularity as an "event" taking place > at a given time betrays a basic misunderstanding of the metaphor, only > too open to the sarcasm of people such as Carrico. > > If we go for the original meaning of "the point in the future where > the predictive ability of our current forecast models and > extrapolations obviously collapse", it would seem obvious that the > singularity is more of the nature of an horizon, moving forward with > the perspective of the observer, than of a punctual event. That is not the original meaning but a likely consequence of the advent of AGI. - s From sjatkins at mac.com Mon Nov 15 17:04:07 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:04:07 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <9207FB8E-8DF9-44A0-91A6-E43B7BEDED24@mac.com> On Nov 14, 2010, at 9:59 AM, Michael Anissimov wrote: > Here's a list I put together a long time ago: > > http://www.acceleratingfuture.com/articles/relativeadvantages.htm > > Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone more new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated. > > The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we're different from most other animals. We're talking about an area of mindspace way outside what we're familiar with. > > The AI Advantage list matters to any AI-driven Singularity.
You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are separate arguments -- as if I need to say it. With full acknowledgement of AI advantage, which I certainly understand, it is quite speculative whether, and how hard, a takeoff will ensue when AGI is achieved. > > What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI. How come? I. J. Good was talking about this nearly half a century ago. It is not an SIAI-specific meme in the least. But I have noticed the tendency of many to associate an idea with its currently most prolific or best-known expounders. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 15 16:57:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 15 Nov 2010 11:57:27 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CE0158B.9000409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CE0158B.9000409@speakeasy.net> Message-ID: On Nov 14, 2010, at 11:59 AM, Alan Grimes wrote: > > Science has not made, and cannot make, any claims about metaphysics. There are an infinite number of metaphysical theories that fit the known facts, and without science there is no way to know which one is right; except that even metaphysics needs to be self-consistent. So let's see where your intuitive feeling that atoms are the key to identity leads. The human bladder has a capacity of at least 300cc, so assuming you weigh about 75 kilograms you will lose almost half of one percent of your identity every time you visit the men's room. However this loss of self need not be permanent because whenever you insert a donut into your head beyond your teeth the atoms in the donut undergo transubstantiation and they are no longer just atoms, they are now YOUR atoms. This change in the atoms of the donut (caused by some sort of mysterious transubstantiation field that envelops the body) is of monumental importance but is completely undetectable by the scientific method. Also, you need not worry about getting fat from all those donuts because fat is good, fat people have a stronger identity than thin people, they have more consciousness because they have more atoms. Alan, is this a metaphysical theory you want to quite literally stake your life on? > Science can erode some of the edges of what was previously metaphysics by weeding out some of the more-wrong understandings of the world, but it can't do much more than that. You seem to be implying that something CAN do better than that.
Please elaborate! > The identity issue in uploading is precisely the type of question that > science is utterly mute about. Religion is certainly not mute about matters of this sort, but it would be better if it was; however religious people do so enjoy flapping their gums and pontificating about things they have no way of knowing anything about. I agree with Ludwig Wittgenstein about one thing, "What we cannot speak about we must pass over in silence". > It is logically impossible to repeat the experiment of destructively uploading someone. Repeat? It is impossible to perform the experiment even once until you explain exactly what is destroyed in a destructive upload. If it is the soul, something of enormous importance that is nevertheless completely undetectable by the scientific method then obviously the theory cannot be disproven by an experiment. But I ask you again, is this really a metaphysical theory that you want to quite literally stake your life on? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 17:14:02 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:14:02 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 14, 2010, at 9:52 AM, Michael Anissimov wrote: > On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote: > > I still believe that seeing the Singularity as an "event" taking place > at a given time betrays a basic misunderstanding of the metaphor, only > too open to the sarcasm of people such as Carrico. > > If we go for the original meaning of "the point in the future where > the predictive ability of our current forecast models and > extrapolations obviously collapse", it would seem obvious that the > singularity is more of the nature of an horizon, moving forward with > the perspective of the observer, than of a punctual event. > > We have some reason to believe that a roughly human-level AI could rapidly improve its own capabilities, fast enough to get far beyond the human level in a relatively short amount of time. The reason why is that a "human-level" AI would not really be "human-level" at all -- it would have all sorts of inherently exciting abilities, simply by virtue of its substrate and necessities of construction: While it "could" do this it is not at all certain that it would. Humans can improve themselves even today in a variety of ways but very few take the trouble. An AGI that is not autonomous would do what it was told to do by its owners, who may or may not have improving it drastically as a high priority. > > 1. ability to copy itself > 2. stay awake 24/7 Possibly, depending on its long-term memory and integration model. If it came from human brain emulation this is less certain. > 3. spin off separate threads of attention in the same mind This very much depends on the brain architecture. If it is too close a copy of the human brain, this may not be the case. > 4. overclock helpful modules on-the-fly Not sure what you mean by this but this is very much a question of specific architecture rather than general AGI. > 5. absorb computing power (humans can't do this) What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time. > 6. constructed from scratch with self-improvement in mind It could be so constructed but may or may not in fact be so constructed. > 7. the possibility of direct integration with new sensory modalities, like a codic modality I am not sure exactly what is meant by this.
That it is very very good at understanding code amounts to a 'modality'? > 8. the ability to accelerate its own thinking speed depending on the speed of available computers > This assumes an ability to integrate random other computers that I do not think is at all a given. > When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks. If it can make $1 of "income" with less than $1 of computing power, you have the ingredients for a hard takeoff. This is simple economics. Most humans don't take advantage of the many such positive sum activities they can perform today without such self-copying abilities. So why is it certain that an AGI would? > > There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have. > Statements about "most transhumanists" are fraught with many problems. > Many valuable points are made here, why do people always ignore them? 'We' don't. > > http://singinst.org/upload/LOGI//seedAI.html > > Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility. > Well, that helps a lot. It is a form of calling those who disagree lazy or stupid before they even voice their disagreement. > The Singularity as an incumbent rapture - or > doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to > many to present it these days - can on the other hand easily > deconstructed as a secularisation of millennarist myths which have > plagued western culture since the advent of monotheism. > > We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimal thing, thus we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism. No, you don't have air tight evidence. You have a reasonable argument for it. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Nov 15 17:06:00 2010 From: spike66 at att.net (spike) Date: Mon, 15 Nov 2010 09:06:00 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <018501cb84e7$615037b0$23f0a710$@att.net> ...On Behalf Of Keith Henson >...Currently the people who write the descriptions of how computers should make money in the market using short time (ms) trades have been bitching that they are not getting enough of the money these things make...Keith Nor are they sharing the risk. Those who bitch thus should be given this offer: put their entire net worth in a bank account. If there is another flash crash, then anyone who loses money is free to write a check against that account. spike From sjatkins at mac.com Mon Nov 15 17:20:37 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:20:37 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 14, 2010, at 10:51 AM, Stefano Vaj wrote: > 2010/11/14 Michael Anissimov : >> The idea of a mind that can copy itself directly is a really huge deal. Replication requires sufficient resources. The first AGIs may well require very expensive hardware systems. 
So that it can be copied does not in the least mean we will have a proliferation of such AGIs really quickly.

From pharos at gmail.com Mon Nov 15 17:48:03 2010
From: pharos at gmail.com (BillK)
Date: Mon, 15 Nov 2010 17:48:03 +0000
Subject: [ExI] Hard Takeoff-money
In-Reply-To: 
References: 
Message-ID: 

On Mon, Nov 15, 2010 at 3:31 PM, Keith Henson wrote:
> BillK wrote:
>> The point of these trading programs is to assist in making a few people very, very rich and the great majority poor and unemployed. Working great so far.
> I can easily see a disgruntled programmer writing this as retaliation against a hated boss.

That's only a theoretical possibility. In practice it is impossible, and totally unlikely even if it were possible. These trading programs are not your ordinary buy and sell order processors like your stockbroker has access to. They run on a very few special main dealers' computers that plug in directly to the stock exchange computers. So physical access is the first hurdle. Even if they stole their programs and ran off with them (as one programmer actually did!) they cannot make use of them, because they don't have access to the special insider-only computers.

The profits from these microsecond trades go to these main dealers' accounts. There is no way a programmer could extract profits for himself. These programmers are *very* well paid. Yes, they are grumbling about getting more, but they would not risk their current small fortune to annoy their boss and risk a jail sentence. Their grumbles are hints that they want more or they will move to another main dealer.

These special programs effectively mean that the stock market is broken. Outsiders have no chance against the insider dealers. You can buy shares and gamble on whether they go up or down, but nowadays it's purely a gamble. The market is rigged.

BillK

From thespike at satx.rr.com Mon Nov 15 18:25:35 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 15 Nov 2010 12:25:35 -0600
Subject: [ExI] this one's for Spike and all the other space cadets
Message-ID: <4CE17B1F.7000605@satx.rr.com>

The picture, not the astronaut!

From sparge at gmail.com Mon Nov 15 20:10:35 2010
From: sparge at gmail.com (Dave Sill)
Date: Mon, 15 Nov 2010 15:10:35 -0500
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
In-Reply-To: <201011141919.oAEJJw26028738@andromeda.ziaspace.com>
References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com>
Message-ID: 

On Sun, Nov 14, 2010 at 2:19 PM, Max More wrote:
> In reply to Dave Sill:
> Your reply again illustrates why I wanted you to read some of the sources.

I've been reading a bunch of them.

>> I don't think it's particularly Extropian not to apply science and technology to our diets.
> Now you're telling me what's extropian and doing so based on a false assumption.

I don't think you really disagree with my statement. I'm guessing the assumption you're referring to is that "paleo/primal diet" means "a nutritional plan based on the presumed ancient diet of wild plants and animals that various human species habitually consumed during the Paleolithic [Era]"*. If it really means "a modern diet based on the presumed ancient diet [...] but incorporating current knowledge of biochemistry, nutrition, genetics, etc.", then perhaps I could be excused for being misled.

>> Yes, whole grains are good sources of carbohydrates, protein, fiber, phytochemicals, vitamins, minerals, etc.
> http://www.thepaleodiet.com/articles/Cereal%20article.pdf
> page 25.
> From p. 24: "All cereal grains have significant nutritional shortcomings which are apparent upon analysis...

Yes, no single food is complete. That doesn't mean grains aren't nutritious.

> "However, as more and more cereal grains are included in the diet, they tend to displace the calories that would be provided by other foods (meats, dairy products, fruits and vegetables), and can consequently disrupt adequate nutritional balance."

That doesn't mean that moderate grain consumption is bad.

> Apart from replying to Natasha's question, no more time for this. To those interested in exploring further, I have plenty more good information sources if you want them.

I sense, from your two replies, that you think I'm hostile to the idea of a "paleo" diet, but I'm not. I'm curious, and based on what I've read so far I'm convinced there are some good ideas there, but I'm also skeptical of some of the claims. Thanks for your time so far. I don't expect a response.

-Dave

* Lifted from http://en.wikipedia.org/wiki/Paleolithic_diet

From agrimes at speakeasy.net Mon Nov 15 20:59:04 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Mon, 15 Nov 2010 15:59:04 -0500
Subject: [ExI] The atoms red herring. =|
Message-ID: <4CE19F18.8040200@speakeasy.net>

The uploaders can be relied upon to turn to patronizing arguments, but it becomes truly annoying when I am accused of something I am emphatically not guilty of. The case in point is the accusation that I associate identity with a certain set of atoms. This accusation has been repeated several times now. Seriously, this argument needs to come to a screeching halt until someone provides me with evidence that I *EVER* associated my identity with specific atoms, or issues the apology that I am now owed. =\

-- 
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From dan_ust at yahoo.com Mon Nov 15 22:16:22 2010
From: dan_ust at yahoo.com (Dan)
Date: Mon, 15 Nov 2010 14:16:22 -0800 (PST)
Subject: [ExI] Paleo/Primal health
In-Reply-To: 
References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com>
Message-ID: <309442.61408.qm@web30105.mail.mud.yahoo.com>

Just out of curiosity, before agriculture there was some grain consumption, no? I mean consumption of wild grains gathered around the Middle East... I'm not sure what the evidence is for this, but I'm thinking someone must have been eating grains before they were domesticated. Is there any information on this?

Regards,

Dan
From possiblepaths2050 at gmail.com Mon Nov 15 23:07:49 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Mon, 15 Nov 2010 16:07:49 -0700
Subject: [ExI] this one's for Spike and all the other space cadets
In-Reply-To: <4CE17B1F.7000605@satx.rr.com>
References: <4CE17B1F.7000605@satx.rr.com>
Message-ID: 

It made me think of many a science fiction novel cover that I have seen over the years.

John : )

On 11/15/10, Damien Broderick wrote:
> The picture, not the astronaut!

From stefano.vaj at gmail.com Mon Nov 15 23:09:46 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 16 Nov 2010 00:09:46 +0100
Subject: [ExI] Singularity
In-Reply-To: 
References: 
Message-ID: 

On 14 November 2010 19:28, Aleksei Riikonen wrote:
> Who's going for "listening to prophets"? Serious people like Nick Bostrom and the SIAI present actual, concrete steps and measures that need to be taken to minimize risks.

Once more, I have no doubt that SIAI or Bostrom are (even too) serious. My point is simply that we are entitled to a more serious discussion of what would be a "risk" and why we should consider it so.

-- 
Stefano Vaj

From stefano.vaj at gmail.com Mon Nov 15 23:49:49 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 16 Nov 2010 00:49:49 +0100
Subject: [ExI] Mathematicians as Friendliness analysts
In-Reply-To: <4CE0A19A.1080308@lightlink.com>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CE0A19A.1080308@lightlink.com>
Message-ID: 

On 15 November 2010 03:57, Richard Loosemore wrote:
> And the fact that we are talking about the friendliness of *computers* is a red herring.

Absolutely. I put obsessing over "friendliness" (to whom?) on the same footing as looking forward to some robotic revolution that would wipe out "humankind".
"One thing in any case is certain: man is neither the oldest nor the most constant problem that has been posed for human knowledge. Taking a relatively short short chronological sample withing a restricted geographical area - European culture since the XVI century - one can be certain that man is a recent invention. It is not around him and his secrets that knowledge prowled for so long in the darknessIn fact, among all the mutations tha have affected the knowledge of things and their order, the knowledge of identities, differences, characters, equivalences, words - in short, in the midst of all the episodes of that profound history of the * Same* - only one, that which began a century and a half ago, and is now perhaps drawing to a close, has made it possible for the figure of man to appear. And that appearance was not not the liberation of an old anxiety, the transition into luminous consciousness of an age-old concen, the entry into objectivity that had long remained trapped within beliefs and philosophers: it was the effect of a change in the fundamental arrangements of knowledge. As the archaeology of our thought easily shows, man is an invention of recent date. And one perhaps nearing its end. If those arrangements were to disappear as they appeared, if some event of which we can at the momento do no more than sense the possibility - without knowing either what its form will be or what it promises - were to cause them to crumble, as the ground of Classical thought did, at the end of the XVIII century, then one can certainly wager that man would be erased, like a face drawn in sand at the edge of the sea." (Foucault) Now, I maintain that we cannot even think of becoming posthumans unless we become posthumanists in the first place. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Nov 15 23:57:31 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 00:57:31 +0100 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com> References: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com> Message-ID: On 14 November 2010 20:55,Natasha asked: > > > Max, after you respond to Amara, would you please advise me how I can >> maintain and even gain weight on the paleo diet? >> > What about eating more of the same? If one is "objectively" under its optimal weight, this should be enough, unless some prob exists which need correction at an ormonal level and/or through supplementation. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 16 00:11:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 01:11:44 +0100 Subject: [ExI] Let's play What If. In-Reply-To: <826395.808.qm@web114413.mail.gq1.yahoo.com> References: <826395.808.qm@web114413.mail.gq1.yahoo.com> Message-ID: On 14 November 2010 20:40, Ben Zaiboc wrote: > The question doesn't have any real sense, but the significant thing is why. > > The reason is that a blastocyst has no central nervous system. > No CNS, no thoughts. No thoughts, no identity. No identity, no 'soul'. > QED. > Even if they had, it would not change a thing, IMHO. BTW, I find your turn of phrase: "the soul - or, for those who prefer to put > some secular veneer on such concepts, the individual's 'identity'...", a > little odd. Why a 'secular veneer'? 
> Is secularism not the default position, in your opinion? I'd have expected you to say "the identity - or, for those who prefer to put some supernatural veneer on such concepts, the individual's 'soul'...".

My point is that thinking of "identity" in essentialist terms is a thinly disguised metaphysical position. Thus, the opposite does not really express what I mean. In fact, it would seem quite bizarre to claim that those who believe in some kind of "soul" are thinly disguised secularists. As I believe should be quite obvious, I am personally very far from both POVs...

-- 
Stefano Vaj

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefano.vaj at gmail.com Tue Nov 16 00:18:46 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 16 Nov 2010 01:18:46 +0100
Subject: [ExI] Paleo/Primal health
In-Reply-To: <309442.61408.qm@web30105.mail.mud.yahoo.com>
References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com>
Message-ID: 

On 15 November 2010 23:16, Dan wrote:
> Just out of curiosity, before agriculture there was some grain consumption, no?

Mmhhh. Try and survive on some wild, untreated, raw grain and let me know how it goes. Personally, I would be more inclined to give cows' "grass and leaves" diet a try.

-- 
Stefano Vaj

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com Tue Nov 16 00:22:54 2010
From: sparge at gmail.com (Dave Sill)
Date: Mon, 15 Nov 2010 19:22:54 -0500
Subject: [ExI] Paleo/Primal health
In-Reply-To: <309442.61408.qm@web30105.mail.mud.yahoo.com>
References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com>
Message-ID: 

On Mon, Nov 15, 2010 at 5:16 PM, Dan wrote:
> Just out of curiosity, before agriculture there was some grain consumption, no? I mean consumption of wild grains gathered around the Middle East... I'm not sure what the evidence is for this, but I'm thinking someone must have been eating grains before they were domesticated. Is there any information on this?

Here are a couple links:

http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html
http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html

So it obviously happened. It's very hard to tell how widespread it was, how important it was, how seasonal it was, what percentage of caloric intake it provided, etc. Interestingly, it's still being done by the Ojibwe: http://www.bineshiiwildrice.com.

-Dave

From stathisp at gmail.com Tue Nov 16 00:17:25 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Tue, 16 Nov 2010 11:17:25 +1100
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE19F18.8040200@speakeasy.net>
References: <4CE19F18.8040200@speakeasy.net>
Message-ID: <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com>

You have said that if a person is destructively copied he does not survive. What does this imply about your view of survival? Either the same atoms have to be preserved, or there is some other substance, not reducible to atoms or information, that has to be preserved.
-- Stathis Papaioannou From thespike at satx.rr.com Tue Nov 16 01:44:02 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 15 Nov 2010 19:44:02 -0600 Subject: [ExI] crazy quantum Zeno notion Message-ID: <4CE1E1E2.5090707@satx.rr.com> I have a (Deutschean shadow-universes) intuition that the Quantum Zeno effect might derive from superposed activities in adjacent, only slightly divergent M-W realities where directed activities reinforce or prohibit a certain outcome, unlike ordinary stochastic radioactivity, say, where the "shadow overlaps" in/from nearby worlds are arbitrary. Might this have an impact on big beam and similar programs, say, perhaps delaying or inhibiting some otherwise possible outcomes (Higgs manifestations, proton decay, e.g.)? Damien Broderick From michaelanissimov at gmail.com Tue Nov 16 02:33:28 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 18:33:28 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Hi John, On Sun, Nov 14, 2010 at 9:27 PM, John Grigg wrote: > > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here? MNT and merely human-equivalent AI that can copy itself but not qualitatively enhance its intelligence beyond the human level is enough for a hard takeoff within a few weeks, most likely, if you take the assumptions in the Phoenix nanofactory paper. Add in the possibility of qualitative intelligence enhancement and you get somewhere even faster. Neocortex expanded in size by a factor of only about 4 from chimps to produce human intelligence. The basic underlying design is much the same. Imagine if expanding neocortex by a similar factor again led to a similar qualitative increase in intelligence. If that were so, then even a thousand AIs with so-expanded brains and a sophisticated manufacturing base would be like a group of 1000 humans with assault rifles and helicopters in a world of six billion chimps. If that were the case, then the Phoenix nanofactory + human-level AI-based estimate might be excessively conservative. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Tue Nov 16 02:56:50 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 18:56:50 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Hi Samantha, 2010/11/15 Samantha Atkins > > While it "could" do this it is not at all certain that it would. Humans > can improve themselves even today in a variety of ways but very few take the > trouble. An AGI that is not autonomous would do what it was told to do by > its owners who may or may not have improving it drastically as a high > priority. > Quoting Omohundro: http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. 
Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else's safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems. In an earlier paper we used von Neumann's mathematical theory of microeconomics to analyze the likely behavior of any sufficiently advanced artificial intelligence (AI) system. This paper presents those arguments in a more intuitive and succinct way and expands on some of the ramifications.

> Possibly, depending on its long-term memory and integration model. If it came from human brain emulation this is less certain.

I was assuming AGI, not a simulation, but yeah. It just seems likely that AGI would be able to stay awake perpetually, though not entirely certain. It seems like this would be a priority upgrade for early-stage AGIs.

> This very much depends on the brain architecture. If it is too close a copy of the human brain, this may not be the case.

Assuming AGI.

> 4. overclock helpful modules on-the-fly
> Not sure what you mean by this, but it is very much a question of specific architecture rather than general AGI.

I doubt it would be hard to implement. You can overclock specific modules in chess AI or Brood War AI today. It means giving a specific module extra computing power. It would be like temporarily shifting your auditory cortex tissue to take up visual cortex processing tasks to determine the trajectory of an incoming projectile.

> What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time.

The human brain stays at a roughly constant 100 billion neurons and a weight of 3 lb. I mean directly absorbing computing power into the brain.

> It could be so constructed, but may or may not in fact be so constructed.

Self-improvement would likely be an emergent property due to the reasons given in the Omohundro paper. So even if it weren't developed deliberately from the start, self-improvement is an ability that would be likely to develop on the road to human-equivalence.

> I am not sure exactly what is meant by this. That it is very very good at understanding code amounts to a 'modality'?

Lizards have brain modules highly adapted to evaluating the fitness of fellow lizards for fighting or mating. Chimpanzees have the same modules, but with respect to other chimpanzees. Trilobites probably had specialized neural hardware for doing the same with other trilobites. Some animals can smell very well, but have poor hearing and sight. Or vice versa. The reason is that they have dedicated chunks of brainware that evolved to deal with sensory data from a particular channel. Humans have HUGE visual cortex areas, larger than the brains of mice. We can see in more colors than most animals. The way a human sees is different than the way an eagle sees, because we have different eyes, brains, and visual processing centers. The human visual cortex takes in gigabytes (or something like that) of information per second, and processes it down to edges, corners, distance estimates, salient objects, colors, and many other important features. To a slug, a view of a city looks like practically nothing, because its eyes are crap, its brain is crap, and its visual processing centers are crap. To a human, it can have a thousand different features and meanings. We didn't evolve to process code.
We probably did evolve to process simple mathematics and the idea of logical processes on some level, so we apply that to code. Humans are not general-purpose intellects, capable of doing anything satisfactorily. Compared to potential superintelligences, we are idiots. Future superintelligences will look back on humans and marvel that we could write any code at all. After all, we were designed mainly to mess around with each other, kill animals, forage, retain our status, and have sex. Most human beings alive today are more or less incapable of coding. Imagine if human beings had evolved in an environment for millions of years where we were murdered and prevented from reproducing if our coding abilities fell short. Create an environment like that, and you might have a situation promoting the evolution of specific brain centers for visualizing and writing computer code. > This assumes an ability to integrate random other computers that I do not > think is at all a given. > All it requires is that the code can be parallelized. > This is simple economics. Most humans don't take advantage of the many > such positive sum activities they can perform today without such > self-copying abilities. So why is it certain that an AGI would? > Not certain, but pretty damn likely, because it could probably perform tasks without getting bored, and would have innate drives towards increasing its power and protecting/implementing its utility function. > There is an interesting debate to be had here, about the details of the > plausibility of the arguments, but most transhumanists just seem to dismiss > the conversation out of hand, or don't know that there's a conversation to > have. > > Statements about "most transhumanists" are fraught with many problems. > Most of the 500+ transhumanists I have talked to. > http://singinst.org/upload/LOGI//seedAI.html > > Prediction: most comments in response to this post will again ignore the > specific points in favor of a rapid takeoff and simply dismiss the idea > based on low intuitive plausibility. > > > Well, that helps a lot. It is a form of calling those who disagree lazy or > stupid before they even voice their disagreement. > I like to get to the top of the Disagreement Pyramid quickly, and it seems very close to impossible when transhumanists discuss the Singularity, and particularly the idea of hard takeoff. As someone arguing on behalf of the idea of hard takeoff, I demand that critics address the central point, not play *ad hominem* with me. You're addressing the points -- thanks! http://www.acceleratingfuture.com/michael/blog/images/disagreement-hierarchy.jpg > No, you don't have air tight evidence. You have a reasonable argument for > it. > It depends on what specifically is being argued. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Tue Nov 16 03:03:48 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 19:03:48 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <013801cb848b$cfd192d0$6f74b870$@att.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> Message-ID: Heya Spike, On Sun, Nov 14, 2010 at 10:10 PM, spike wrote: > > I am not advocating a Bill Joy approach of eschewing AI research, just the > opposite. 
> A no-singularity future is 100% lethal to every one of us, every one of our children and their children forever. A singularity gives us some hope, but also much danger. The outcome is far less predictable than nuclear fission.

Would you say the same thing if the Intelligence Explosion were initiated by the most trustworthy and altruistic human being in the world, if one could be found?

In general, I agree with you except the last sentence.

-- 
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agrimes at speakeasy.net Tue Nov 16 02:38:04 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Mon, 15 Nov 2010 21:38:04 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com>
References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com>
Message-ID: <4CE1EE8C.4080602@speakeasy.net>

Stathis Papaioannou wrote:
> You have said that if a person is destructively copied he does not survive. What does this imply about your view of survival?

As has been shown, that is difficult to argue with conventional logic and reasoning, so let's try a completely different thought experiment. I want you, right now, to try to mind-swap yourself into your cat, or your computer, or anything else you might find more suitable. I presume the experiment will fail. So why would it? What evidence do you have that the experiment will succeed if certain pre-conditions are met? What are those preconditions? (I have a whole pile of fresh material along this line of thought so please bear with me. ;)

> Either the same atoms have to be preserved, or there is some other substance, not reducible to atoms or information, that has to be preserved.

Don't quote me on things that you read into what I wrote. =|

-- 
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From michaelanissimov at gmail.com Tue Nov 16 03:06:53 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Mon, 15 Nov 2010 19:06:53 -0800
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE1EE8C.4080602@speakeasy.net>
References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net>
Message-ID: 

Alan is avoiding the question.

-- 
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists1 at evil-genius.com Tue Nov 16 03:46:29 2010
From: lists1 at evil-genius.com (lists1 at evil-genius.com)
Date: Mon, 15 Nov 2010 19:46:29 -0800
Subject: [ExI] The grain controversy (was Paleo/Primal health)
In-Reply-To: 
References: 
Message-ID: <4CE1FE95.7070603@evil-genius.com>

On 11/15/10 4:23 PM, extropy-chat-request at lists.extropy.org wrote:
> Here are a couple links:
> http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html
> http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html
> So it obviously happened. It's very hard to tell how widespread it was, how important it was, how seasonal it was, what percentage of caloric intake it provided, etc. Interestingly, it's still being done by the Ojibwe: http://www.bineshiiwildrice.com.

Here's Dr. Cordain's response to the Mozambique data:
http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html

Summary: there is no evidence that the wild sorghum was processed with any frequency -- nor, more importantly, that it had been processed in a way that would actually give it usable nutritional value (i.e. soaked and cooked, for which there is no evidence of the behavior or the associated technology (cooking vessels, baskets) for at least 75,000 more years). Therefore, it was either being used to make glue -- or it was a temporary response to starvation and didn't do them much good anyway.

Don't forget that the natural condition of wild creatures is hunger. Most of us have never been without food for one single day...or if we have, it's been purely by choice. If you get hungry enough you'll eat tree bark. The real question is: is there evidence that wild sorghum was eaten frequently and processed in a way that would make it actually digestible and nutritious? In other words, that there would have been significant selection pressure for eating and digesting it?

As for the Spartan Diet article, it strongly misrepresents both the articles it quotes and the paleo diet. Let's go through the misrepresentations:

1) As per the linked article, the 30 Kya European site has evidence that "Palaeolithic Europeans ground down plant roots similar to potatoes..." The fact that Palaeolithic people dug and ate some nonzero quantity of *root starches* is not under dispute: the assertion of paleo dieters is that *grains* (containing gluten/gliadin) are an agricultural invention. (Also note that the linked article finishes with a bizarre claim that consumption of *any* starch means that a diet is not meat-centered. As I've linked before, hunter-gatherer caloric intake averages about 2/3 meat and 1/3 non-meat calories. Apparently there are a lot of people who still confuse Atkins with paleo.)

Link to the original paper (full text not available, but the supplemental material clearly shows that cattail is the source of the starch 'grains' in question): http://www.pnas.org/content/107/44/18815.abstract

I've seen this misrepresentation before: articles speak of 'grains of starch' found as residue, usually of root vegetables, and anti-paleo crusaders mistake this to mean cereal grains, like wheat and barley! As you might expect, the Spartan Diet page claims explicitly that these are cereal grains being processed, even though they're not. Hmmm...

2) No one disputes the 23 Kya Israel data. However, there is a big difference between "time of first discovery" and "used by the entire ancestral human population". It took another 11,000 years for people in one valley in the Middle East to starve enough to actually start growing grains on purpose, and it took thousands more years for the practice to spread anywhere else. For instance, Northern Europe only agriculturalized about 5,000 years ago. Note that it takes a *lot* of grain to feed a single person, not to mention the problem of storage for nomadic hunter-gatherers during the 11 months per year that a grain 'crop' is not harvestable -- so arguing that wild grains were the majority of anyone's diet previous to domestication is a stretch. And it is silly to claim that meaningful grain storage could somehow occur before a culture settled down into permanent villages.
3) The Spartan Diet page claims that consumption of grains by modern-era Native Americans somehow invalidates the paleo diet, by making a strawman claim about "The Paleo Diet belief that grain was consumed only as a cultivated crop..." Obviously grain was consumed as a wild food before it was cultivated, or no one would have thought to cultivate it! I addressed this already in 2). Not to mention that humans didn't even *arrive* in the Americas until ~12 Kya, making this issue irrelevant.

4) The Cordain rebuttal above addresses the Mozambique data, and I won't rehash it.

I also note that the "Spartan Diet" is a low-fat diet that opposes the use of butter and any fat but extra-virgin olive oil -- in other words, based on the long-since-discredited theory that fat is bad and saturated fats are worse. It's apparently a gimmick diet based on what they think the Spartans ate...which is better than most gimmick diets, but it's not based on science.

More in my next message.

From lists1 at evil-genius.com Tue Nov 16 03:46:37 2010
From: lists1 at evil-genius.com (lists1 at evil-genius.com)
Date: Mon, 15 Nov 2010 19:46:37 -0800
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To: 
References: 
Message-ID: <4CE1FE9D.4060004@evil-genius.com>

More evidence:

"Simoons' classic work on the incidence of celiac disease [Simoons 1981] shows that the distribution of the HLA B8 haplotype of the human major histocompatibility complex (MHC) nicely follows the spread of farming from the Mideast to northern Europe. Because there is strong linkage disequilibrium between HLA B8 and the HLA genotypes that are associated with celiac disease, it indicates that those populations who have had the least evolutionary exposure to cereal grains (wheat primarily) have the highest incidence of celiac disease. This genetic argument is perhaps the strongest evidence to support Yudkin's observation that humans are incompletely adapted to the consumption of cereal grains."
http://www.beyondveg.com/cordain-l/grains-leg/grains-legumes-1a.shtml

Citation: Simoons FJ (1981) "Celiac disease as a geographic problem." In: Walcher DN, Kretchmer N (eds.) Food, Nutrition and Evolution. New York: Masson Publishing. (pp. 179-199)

Diet, Gut, and Type 1 Diabetes: Role of Wheat-Derived Peptides?
http://diabetes.diabetesjournals.org/content/58/8/1723.full

"In this issue of Diabetes, Mojibian et al. (2) report that approximately half of the patients with type 1 diabetes whom they studied had a proliferative T-cell response to dietary wheat polypeptides and that the cytokine profile of the response was predominantly proinflammatory."
...
"The study by Mojibian et al. raises the possibility that wheat could be the driving dietary antigen in two autoimmune diseases, i.e., celiac disease and type 1 diabetes."

[Note: 'Wheat polypeptides' = collectively known as gluten/gliadin. In other words, a significant number of humans suffer cross-reactions between gluten and their own beta cells. This process is also thought to be behind celiac disease.]
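Before Brent's message below picks the hard-takeoff thread back up: Michael Anissimov's condition upthread -- an AGI that can make $1 of "income" from less than $1 of computing power -- can be made concrete with a toy compounding model. This is a minimal sketch in Python under invented assumptions (the starting capital, the 5% hourly surplus, and the function name are all made up for illustration; none of it comes from the thread itself):

# Toy model of the "$1 of income from less than $1 of compute" condition.
# All numbers here are hypothetical assumptions, not data: the AI rents
# compute by the hour, each rented dollar returns a small surplus over
# its cost, and the whole surplus is reinvested in more compute.

def capital_after(hours, capital=1000.0, return_per_dollar=1.05):
    """Reinvest all compute earnings each hour; growth is geometric."""
    for _ in range(hours):
        capital *= return_per_dollar  # revenue covers rent plus a 5% surplus
    return capital

if __name__ == "__main__":
    for h in (24, 24 * 7, 24 * 30):
        print("after %5d hours: $%.2f" % (h, capital_after(h)))

Any return ratio above 1.0 compounds geometrically, and at or below 1.0 the loop fizzles; the disagreement between Michael and Samantha is over whether that single parameter would in fact exceed 1.0, and for how long.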
From brent.allsop at canonizer.com Tue Nov 16 04:19:13 2010
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Mon, 15 Nov 2010 21:19:13 -0700
Subject: [ExI] Hard Takeoff
In-Reply-To: 
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com>
Message-ID: <4CE20641.5020702@canonizer.com>

Moral Experts,

(Opening note: For those that don't enjoy the below religious / moral / mormonesque rhetoric that I enjoy, I hope you can simply translate it on the fly to something more to your liking. ;)

This is a very exciting topic, and I think morally a critically important one. If we do the wrong thing, or fail to do it right, I think we're all agreed the costs could be very extreme. Morality has to do with knowing what is right, and what is wrong, does it not? I sure desperately want to "Choose The Right" (CTR) as mormons like to always say. But I feel I desperately need more help to be more morally capable, especially in this area. It is especially hard for me to understand, fully grasp, and remember ways of thinking about things that are very diverse from my current way of thinking. All this eternal 'yes it is', 'no it is not' isn't doing me any good, for sure. This is a much more complex issue than I've ever really fully thought about, and I appreciate the help from people on both sides of the issue.

I may be the only one, but I would find it very valuable and educational to have concise descriptions of all the best arguments and issues, a quantitative ranking, by the experts, of the importance of them all, and a quantitative measure of who, and how many people, are in each camp. In other words, I think the best way for all of us to approach this problem is to have some concise, quantitative, and constantly improving representation of the most important issues, according to the experts on all sides, so we can all be better educated (with good references) about what the most important arguments are, why, and which experts, and how many, are in each camp - going forward, as ever more scientific data and ever more improved reasoning come in.

We've started one survey topic on the general issue of the importance of friendly AI (see: http://canonizer.com/topic.asp/16 ), which so far shows a somewhat even distribution of experts on both sides. But this is obviously just a start at what is required so all of us can be better educated on all the most important issues and arguments.

Through this discussion, I've realized that a critical subcomponent of the various ways of thinking about this issue is one's working hypothesis about the possibility of a rapid isolated, hidden, or remote 'hard takeoff'. I'm betting that the more one holds an isolated hard takeoff as a real possibility in their working hypothesis, the more likely they are to fear or want to be cautious about AI, and vice versa. So I think it will be very educational for everyone to more rigorously and concisely develop and measure the most important reasons on both sides of this particular sub-issue.

Towards this end, I'd like to create several new related survey topics to get a more detailed map of what the experts believe in this space. First would be a survey topic on the possibility of any kind of isolated rapid hard takeoff. We could create two related topics to capture, concisely state, and quantitatively rank the importance and value (i.e., their power to convince) of the various arguments relative to each other (a toy sketch of such a ranking follows after this message).
We could have one argument topic ranking reasons why an isolated hard takeoff might be possible, and another ranking reasons why it might not be likely. This way, the experts on both sides of the issue could collaboratively develop the best and most concise description of each of the arguments, and help rank which are the most convincing for everyone and why. (It would be interesting to see if the ranking for each side changed when surveying those in the pro camp versus those in the con camp, and so on.)

As these two pro and con argument ranking topics developed, the members of the pro and con camps could reference these arguments, and develop concise descriptions of why the pro or con arguments are more convincing to them than the others, and why they are in their particular camp, or why they currently use the particular pro or con theory as their working hypothesis. And of course, it would be very interesting to see if anyone jumps camps, once things start getting more developed, or when new scientific results or catastrophes come in, and so on.

Would anyone else think this kind of moral expert survey information would be helpful to them in their effort to make the best possible decisions and judgments on such important issues? Would anyone else have better or additional ways to develop or structure a survey of the critically important information that everyone interested in this topic needs to know about? I'm going to continue developing this survey along these lines, using what I've heard others say so far here, but there are surely better ways to go about this that others can help find or point out; obviously the more diversity the better, so I would love to have any other ideas or inputs or help with this process.

Looking forward to any and all feedback, pro or con, and it would be great to at least get a more comprehensive survey of who is in these camps, starting with the improvement of this one: http://canonizer.com/topic.asp/16 .

And also, I hope for some day achieving perfect justice. Those that are wrong are arguably doing great damage compared to the heroes that are right - the ones that are helping us all to be morally better. It seems to me that to achieve perfect justice, the mistaken or wicked ones will have to make restitution to the heroes for the damage they continue to do, for as long as they continue to be wrong (to sin?). The better we rigorously track all this, the sooner we can achieve better justice, right? The more help I get, from all sides, the more capable I'll be of being in the right camp sooner, the more capable I'll be of helping others to do the same, the less restitution I'll have to clean up for being mistaken longer, and the more reward we will all reap, sooner, in a more just and perfect heaven.

Brent Allsop

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
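The two ranking topics Brent proposes reduce to a small data problem. Here is a minimal sketch of its shape in Python; the camps, argument texts, and scores are invented placeholders, and canonizer.com of course has its own real scoring rules:

# Sketch of a pro/con argument survey: each expert scores how convincing
# an argument is (0-10); arguments are averaged and ranked within camps.
# Every camp, argument, and score below is a made-up placeholder.

from collections import defaultdict

votes = [
    ("pro", "an AGI can copy itself onto rented hardware", 8),
    ("pro", "human thinking speed is unlikely to be optimal", 7),
    ("con", "early AGIs may need very expensive hardware", 6),
    ("con", "owners may not prioritize self-improvement", 5),
    ("pro", "an AGI can copy itself onto rented hardware", 6),
]

def ranked(votes):
    scores = defaultdict(list)
    for camp, argument, score in votes:
        scores[(camp, argument)].append(score)
    rows = [(camp, arg, sum(s) / len(s)) for (camp, arg), s in scores.items()]
    # sort by camp, then by descending average convincingness
    return sorted(rows, key=lambda row: (row[0], -row[2]))

for camp, argument, average in ranked(votes):
    print("%-3s %4.1f %s" % (camp, average, argument))

Re-running the ranking as experts add or change votes would give the "constantly improving representation" Brent asks for, including a record of arguments and experts jumping camps over time.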
From sjatkins at mac.com Tue Nov 16 05:22:25 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Mon, 15 Nov 2010 21:22:25 -0800
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To: 
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID: 

On Nov 14, 2010, at 10:40 AM, Giulio Prisco wrote:
> I wish to support Michael here. I don't share many of the SIAI positions and views on the Singularity and the evolution of AGI, but I think they do interesting work and play a useful role. The world is interesting because it is big and varied, with different persons and groups doing their own things with their own focus.

I second that. I think SIAI does some really good things that I am very delighted and impressed by. That does not mean that other criticisms get a free pass though, or that valid criticisms should be ignored just because some criticisms are obviously overblown. Is there a named cognitive bias for that? It is a common pattern. A voices a perhaps valid and reasonable-seeming criticism of X. B voices an outrageous or overly harsh criticism of X. C takes offense over the remarks of B. D voices support for C and says positive things about X. Result: most people seem to be left feeling like all the criticisms were overblown. I have seen this pattern 50 times if I have seen it once.

> In particular I think the criticism of idiots like Carrico and his handful of followers, mentioned by Stefano, should be ignored. We have better and more interesting things to do.

Oh, and E brings up the fact that F, who is generally despised, also criticizes X. Yawn. Pass the nanotubes.
- samantha From spike66 at att.net Tue Nov 16 05:13:01 2010 From: spike66 at att.net (spike) Date: Mon, 15 Nov 2010 21:13:01 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> Message-ID: <004901cb854c$f1216f20$d3644d60$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov . >Heya Spike. Heya back Michael! The level of discourse here has improved an order of magnitude since you started posting last week. Thanks! You SIAI guys are aaaallways welcome here. On Sun, Nov 14, 2010 at 10:10 PM, spike wrote: >>I am not advocating a Bill Joy approach of eschewing AI research, just the opposite. A no-singularity future is 100% lethal to every one of us, every one of our children and their children forever. A singularity gives us some hope, but also much danger. The outcome is far less predictable than nuclear fission. >Would you say the same thing if the Intelligence Explosion were initiated by the most trustworthy and altruistic human being in the world, if one could be found?... Ja I would say nearly the same thing, however I cheerfully agree we have a muuuch better chance of a good outcome if the explosion is initiated by the most trustworthy and altruistic among us carbon units. I am a big fan of what you guys are doing as SIAI. It pleases me to see you working the problem, for without you, the inevitable Intelligence Explosion falls to the next bunch, who I do not know, who may or may not make it their focus to produce a friendly AI. That would reduce the probability of a good outcome. That being said: >In general, I agree with you except the last sentence. >michael.anissimov at singinst.org >Singularity Institute >Media Director I do hope you are right in that disagreement, but I will defend my pessimism in any case. The engineering world is filled with problems which unexpectedly defeated their designers, or do something completely unexpected. In my own field, the classic example is the hybrid aerospike engine, which was designed to burn both kerosene and liquid hydrogen, and also to throttle efficiently. If we can get a single engine to do that, optimizing thrust at varying altitudes and burn two different fuels without duplicating nozzles, pumps, thrust vector control, all that heavy stuff, then we can achieve single stage to orbit. We poured tons of money into the effort, but that seemingly straightforward engineering problem unexpectedly defeated us. We cannot use a single engine to burn both fuels, and consequently we have no SSTO to this day. The commies worked the same problem, it kicked their asses too, as good as they are at large scale propulsion. There were unknowns that no one knew were unknowns. It could be my own ignorance of the field (hope so) but it seems to me like there are waaay many unknowns in what an actual intelligence (artificial or bio) will do. It appears to me to be inherent in the field of intelligence. Were you to suggest literature, I will be willing to study it. I want to encourage you lads up there in Palo Alto. Your cheering section is going wild. We know the path to artificial intelligence is littered with the corpses of those who have gone before. The path beyond artificial intelligence may one day be littered with the corpses of our dreams, of our visions, of ourselves. spike -------------- next part -------------- An HTML attachment was scrubbed... 
From agrimes at speakeasy.net Tue Nov 16 05:38:13 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Tue, 16 Nov 2010 00:38:13 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: 
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net>
Message-ID: <4CE218C5.7090608@speakeasy.net>

Michael Anissimov wrote:
> Would you say the same thing if the Intelligence Explosion were initiated by the most trustworthy and altruistic human being in the world, if one could be found?

I would like to cast my vote in favor of a supremely selfish bastard. I'm serious.

-- 
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From sjatkins at mac.com Tue Nov 16 05:45:32 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Mon, 15 Nov 2010 21:45:32 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To: 
References: 
Message-ID: 

On Nov 14, 2010, at 11:26 AM, Aware wrote:
> Michael, what has always frustrated me about Singularitarians, apart from their anthropomorphizing of "mind" and "intelligence", is the tendency, natural for isolated elitist technophiles, to ignore the much greater social context. The vast commercial and military structure supports and drives development providing increasingly intelligent systems, exponentially augmenting and amplifying human capabilities, hugely outweighing not only in height but in breadth, the efforts of a small group of geeks (and I use the term favorably, being one myself.)

On SL4 especially, but also in many other singularitarian camps, a great deal of attention was paid to avoiding anthropomorphizing. So I am a bit surprised by that charge. I don't think ignoring social context is that common either. Some of us are very focused on context, as we are highly concerned with how to get from here (and thus exactly what here is like) to some relatively positive future there, and with getting some coherence on what that there would look like. I do grant that the number of transhumanists focused on this aspect is a pretty small percentage of the total.

Commercial efforts are notoriously short term and drive only some forms of intelligent systems in relatively small niches. They do drive general communication, computational capability, device proliferation and so on very very well. Some of these devices are augmenting/changing us. Not as fast as an AGI would, but with a lot more commercial viability beneath them. But this does not go very deep toward new AGI-applicable results. Military research, to the extent it is not a boondoggle, is another matter. A lot of very strong research is done on military contract. Unfortunately.

> The much more significant and accelerating risk is not that of a "recursively self-improving" seed AI going rogue and tiling the galaxy with paper clips or copies of itself, but of relatively small groups of people, exploiting technology (AI and otherwise) disproportionate to their context of values.

How would you judge their 'context of values'? Against what would you judge it?
> > The need is not for a singleton nanny-AI but for development of a > fractally organized synergistic framework for increasing awareness of > our present but evolving values, and our increasingly effective means > for their promotion, beyond the capabilities of any individual I have no idea what a 'fractally organized synergistic framework for increasing awareness of our present but evolving values' is or entails, or when or how you would know that you have achieved it. Frankly, our values today are, overall, based pretty thinly on our evolved psychology and not, for most human beings, on much in the way of self-examination, wisdom or ethical inquiry. I somewhat doubt that human 1.0 is designed to be capable of much more, except in relatively isolated cases. I submit that that much is not good enough for the challenges ahead of us. > biological or machine intelligence. > If it is beyond the capabilities of any intelligence then how will it seemingly magically arise in fractal magnificence among an accumulation of said inadequate intelligences? - samantha From agrimes at speakeasy.net Tue Nov 16 05:37:06 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 00:37:06 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE21882.7030207@speakeasy.net> Michael Anissimov wrote: > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could it? > In this paper > we argue that such a robot will indeed be dangerous unless it is > designed very carefully. > Without special precautions, it will resist being turned off, will try > to break into other > machines and make copies of itself, and will try to acquire resources > without regard for > anyone else's safety. These potentially harmful behaviors will occur not > because they > were programmed in at the start, but because of the intrinsic nature of > goal driven systems. > In an earlier paper we used von Neumann's mathematical theory of > microeconomics > to analyze the likely behavior of any sufficiently advanced artificial > intelligence > (AI) system. This paper presents those arguments in a more intuitive and > succinct way > and expands on some of the ramifications. Do you ever get around to proving that the set of general AI systems ever intersects the set of goal-directed systems? I strongly doubt that there is even one possible AGI design that is in any way guided by any strict set of goals. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Tue Nov 16 06:03:44 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:03:44 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <37934C2D-3BAD-4AB6-94E2-5C33FC526ED5@mac.com> On Nov 14, 2010, at 6:55 PM, John Grigg wrote: > I must admit that I yearn for a hard take-off singularity that > includes the creation of a nanny sysop who gets rid of poverty, disease, > aging, etc., and looks after every human on the planet, but without > establishing a tyranny. By definition a nanny sysop is a functional tyrant in at least some ways. What I want is to be reasonably sure humanity will survive this technological transition period. I am pretty convinced that not that many evolved intelligent species do survive this particular developmental challenge.
The reason they do not is not because a UFAI eats them just before it self-destructs. It has to do with the species needing to grow, very quickly, beyond its evolved psychology in order to deal with accelerating change and with losing its species dominance. It is a huge challenge. I would love to see it through to the other side. Who wants to die out here in "slow time"? Certainly not I. But my primary desire as a transhumanist is to do what I can to increase the odds of a successful transition. That said, I think a radically better future, and within a mere few decades, is quite possible. And that is still exciting and exhilarating. - samantha From sjatkins at mac.com Tue Nov 16 06:10:30 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:10:30 -0800 Subject: [ExI] Mathematicians as Friendliness analysts In-Reply-To: <4CE0A19A.1080308@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CE0A19A.1080308@lightlink.com> Message-ID: <25F18285-FD96-4F9A-9A6B-09E1BD0F5775@mac.com> On Nov 14, 2010, at 6:57 PM, Richard Loosemore wrote: > Michael Anissimov wrote: >> On Sat, Nov 13, 2010 at 2:10 PM, John Grigg > wrote: >> And I noticed he did "friendly AI research" with >> a grad student, and not a fully credentialed academic or researcher. >> Marcello Herreshoff is brilliant for any age. Like some other of our Fellows, he has been a top-scorer in the Putnam competition. He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School. Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI. > > Michael, > > This is entirely spurious. Why gather mathematicians and computer science specialists to work on the "friendliness" problem? > > Since the dawn of mathematics, the challenges to be solved have always been specified in concrete terms. Every problem, without exception, is definable in an unambiguous way. The friendliness problem is utterly unlike all of those. You cannot DEFINE what the actual problem is, in concrete, unambiguous terms. > Mathematics may be said to be the study of pattern qua pattern, of patterns of patterns. A Friendliness that cannot be captured or described accurately, or ever measured or used to measure alternatives, would not be an engineering goal at all. Personally I think it is so vague as to be useless. I would rather see work on a general ethics that applies even to beings of wildly different capabilities that are not mutually interdependent. This seems much more likely to lead to benign behavior by an advanced AGI toward humans than attempting to coerce Friendliness at an engineering level. Of course the rub with this general ethics is that humans don't even seem able to come up with a generally agreed ethics for the much narrower case of other members of their own species. This suggests that either such a general ethics is impossible or that humans are not very good at all at ethical reasoning.
- samantha From sjatkins at mac.com Tue Nov 16 06:24:40 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:24:40 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Nov 14, 2010, at 9:15 PM, Aleksei Riikonen wrote: > On Mon, Nov 15, 2010 at 6:48 AM, Samantha Atkins wrote: >> On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: >> >>> If people want a new version of Singularitarian Principles >>> to exist, they can write one themselves. >> >> Hardly. I cannot speak for this Institute. How would my writing >> such a thing be anything but my opinion? > > No matter who would write such a document, it's just an opinion. There > is currently no codified "ideology of singularitarianism" that would > be owned by any single Institute. That is not what I want. I want to know what the current working theories are concerning FAI and what type of FAI is the current working plan, if any. For a time it seemed to be CEV. But some people in SIAI claim that is obsolete while others say it is still the general plan. So I would like clarification. > > Eliezer and other SIAI folks seem to like it that way, so there likely > will not be a codified document of Singularitarian principles coming > from their direction. So if there are people who want such a codified > ideology, they're going to have to codify it themselves. > >> I want to know what the SIAI current positions are. > > That's a different thing than wanting them to present a codified > ideology. Just read their recent publications. This is a good start: > > http://singinst.org/riskintro/index.html It is a start but not sufficient. It doesn't really propose much of anything. Researching what remains stable in a self-improving brain -- with no real general model likely to cover the domain of self-improving brains, or even a single working example -- seems rather weak to me. Many of the items spoken of at this link are certainly important and worthwhile but I don't see a lot of meat here. Am I missing something? I can work my way through the newer documents on site that I haven't read yet. - samantha From agrimes at speakeasy.net Tue Nov 16 05:57:44 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 00:57:44 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: <4CE21D58.60606@speakeasy.net> > Alan is avoiding the question. And you're avoiding reality itself. What's the difference? =P -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Tue Nov 16 06:36:38 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:36:38 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <03A63180-9898-4075-9976-54A9C9C2F388@mac.com> On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: > On Mon, Nov 15, 2010 at 5:00 AM, John Grigg > wrote: >> >> Brent Allsop wrote: >> I would agree that a copy-able human level AI would launch a take-off, >> leaving what we have today, to the degree that it is unchanged, in the >> dust. But I don't think achieving this is going to be anything like >> spontaneous, as you seem to assume is possible.
The rate of progress >> of intelligence is so painfully slow. So slow, in fact, that many >> have accused great old AI folks like Minsky of being completely >> mistaken. >> >> Michael Anissimov replied: >> There's a huge difference between the rate of progress between today >> and human-level AGI and the time between human-level AGI and >> superintelligent AGI. They're completely different questions. As for >> a fast rate, would you still be skeptical if the AGI in question had >> access to advanced molecular manufacturing? >> >> I agree that self-improving AGI with access to advanced manufacturing >> and research facilities would probably be able to bootstrap itself at >> an exponential rate, rather than the speed at which humans created it >> in the first place. But the "classic scenario" where this happens >> within minutes, hours or even days and months seems very doubtful in >> my view. >> >> Am I missing something here? > What does an AI mainly need? Processing power and storage. If there > are vast amounts of both that can be exploited, then all you need is a > storage estimate for the AI and the average bandwidth between storage > locations to determine the replication rate. But wait. The first AGIs will likely be ridiculously expensive. So what if they can copy themselves? If you can only afford one, and they are originally only as competent as a human expert, then you will go with entire campuses of human experts until the cost comes down sufficiently - say in a decade or two after the first AGI. Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. - samantha From sjatkins at mac.com Tue Nov 16 06:47:04 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:47:04 -0800 Subject: [ExI] Singularity In-Reply-To: References: Message-ID: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> On Nov 15, 2010, at 3:09 PM, Stefano Vaj wrote: > On 14 November 2010 19:28, Aleksei Riikonen wrote: >> Who's going for "listening to prophets"? Serious people like Nick >> Bostrom and the SIAI present actual, concrete steps and measures that >> need to be taken to minimize risks. > > Once more, I have no doubt that SIAI or Bostrom are (even too) > serious. My point is simply that we are entitled to a more serious > discussion of what would be a "risk" and why we should consider it so. IMHO there has perhaps been too much focus on "existential risk" at the cost of insufficient focus on clearly visioning the positive future we wish to bring into being. I feel at times as if much of our energy has become focused on the negative and we have lost sight of, or failed to sufficiently embrace, the positive. It is generally much easier to see what is wrong or may turn out wrong than to cleanly imagine a positive outcome and work diligently to bring it about. From talking with many transhumanists it does not seem that we have that clear and coherent a shared vision of the desired future. If not, then how can we expect to work together to bring it about? We have many shared dream fragments but that is not enough for a coherent vision.
- s From sjatkins at mac.com Tue Nov 16 07:00:43 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:00:43 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> On Nov 15, 2010, at 6:33 PM, Michael Anissimov wrote: > Hi John, > > On Sun, Nov 14, 2010 at 9:27 PM, John Grigg wrote: > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here? > > MNT and merely human-equivalent AI that can copy itself but not qualitatively enhance its intelligence beyond the human level is enough for a hard takeoff within a few weeks, most likely, if you take the assumptions in the Phoenix nanofactory paper. MNT is of course not near term at all. The latest guesstimates I saw by Drexler, Freitas and Merkle put it a good three decades out. So if we get HAI before that it is likely to be expensive and not at all easy for it to quickly upgrade itself. A few very expensive human-equivalent AGIs will not be very revolutionary quickly. > > Add in the possibility of qualitative intelligence enhancement and you get somewhere even faster. > Too many IF bridges need to be crossed between here and there for the argument to be very compelling. Possible, yes. Likely within three to four decades, not so much. > Neocortex expanded in size by a factor of only about 4 from chimps to produce human intelligence. The basic underlying design is much the same. Imagine if expanding neocortex by a similar factor again led to a similar qualitative increase in intelligence. I am not at all sure that would be possible with current human brain size and brain architecture. But then I don't take well to strained analogies. > If that were so, then even a thousand AIs with so-expanded brains and a sophisticated manufacturing base would be like a group of 1000 humans with assault rifles and helicopters in a world of six billion chimps. Even more strained! :) Where are you going to get a thousand human-level AGIs? Using what assumptions about hardware and energy requirements? > If that were the case, then the Phoenix nanofactory + human-level AI-based estimate might be excessively conservative. For some time decades hence, maybe. But it isn't a serious existential risk now. Economic collapse is a very serious risk in this coming decade. Energy and resource crises are close behind. Those could result in losing a substantial part of our technological/scientific infrastructure *before* MNT or AGI can be developed. If we do then the argument is strong that humanity may never recover to the necessary level of infrastructure and resources again. That would be catastrophic. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 16 07:45:50 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:45:50 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 15, 2010, at 6:56 PM, Michael Anissimov wrote: > Hi Samantha, > > 2010/11/15 Samantha Atkins > > While it "could" do this it is not at all certain that it would.
Humans can improve themselves even today in a variety of ways but very few take the trouble. An AGI that is not autonomous would do what it was told to do by its owners, who may or may not count improving it drastically as a high priority. > > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could it? In this paper > we argue that such a robot will indeed be dangerous unless it is designed very carefully. > Without special precautions, it will resist being turned off, will try to break into other > machines and make copies of itself, and will try to acquire resources without regard for > anyone else's safety. These potentially harmful behaviors will occur not because they > were programmed in at the start, but because of the intrinsic nature of goal driven systems. > In an earlier paper we used von Neumann's mathematical theory of microeconomics > to analyze the likely behavior of any sufficiently advanced artificial intelligence > (AI) system. This paper presents those arguments in a more intuitive and succinct way > and expands on some of the ramifications. > I have argued this point (and stronger variants) with Steve. If the AI's goals are totally centered on chess playing then it is extremely unlikely that it would diverge along many or all possible paths that might make it a more powerful chess player. Many, many fields of knowledge could possibly make it better at its stated goal, but it would have to be much more a generalist than a specialist to notice them and take the time to master them. If it could so diverge along so many paths then it would also encounter other fields of knowledge, including those for judging the relative importance of various values using various methodologies. Which would tend, if understood, to make it not a single-minded chess-playing machine from hell. The argument seems self-defeating. > Possibly, depending on its long-term memory and integration model. If it came from human brain emulation this is less certain. > > I was assuming AGI, not a simulation, but yeah. It just seems likely that AGI would be able to stay awake perpetually, though not entirely certain. It seems like this would be a priority upgrade for early-stage AGIs. > One path to AGI is via emulating at least some subsystems of the human brain. It is not at all clear to me that this would not also bring in many human limitations. For instance, our learning cannot be transferred immediately to another person because of our rather individual neural associative patterns that the learning act modified. New knowledge is not in any one discrete place or in some universally, instantly useful form as encoded in the human brain. Using a similar learning scheme in an AGI would mean that you could not transfer achieved learning very efficiently between AGIs. You could only copy them. > This very much depends on the brain architecture. If too close a copy of human brains this may not be the case. > > Assuming AGI. > >> 4. overclock helpful modules on-the-fly > > Not sure what you mean by this but this is very much a question of specific architecture rather than general AGI. > > I doubt it would be hard to implement. You can overclock specific modules in chess AI or Brood War AI today. It means giving a specific module extra computing power.
It would be like temporarily shifting your auditory cortex tissue to take up visual cortex processing tasks to determine the trajectory of an incoming projectile. > I am not sure the analogy holds well though. If the mind is highly integrated it is not certain that you could isolate one activity like that much more easily than we can in our own brains. Perhaps. > What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time. > > The human brain stays at a roughly constant 100 billion neurons and a weight of 3 lb. I mean directly absorbing computing power into the brain. I mean that we integrate with computational systems, albeit by slow HCI, today. Unless you have in mind that the AGI hack systems around it, most of the computation going on on most of that hardware has nothing to do with the AGI and is written in such a way that it cannot communicate that well even with other dumb programs or even with other instances of the same programs on other machines. It is also not certain, and is plausibly unlikely, that AGIs run on general-purpose computers. I do grant of course that an AGI can interface to a computer much more efficiently than you or I can, with the above caveat. Many systems on other machines were written by humans. You almost have to get inside the human programmer's head to efficiently use many of these. I am not sure the AGI would be automatically good at that. > > It could be so constructed but may or may not in fact be so constructed. > > Self-improvement would likely be an emergent property due to the reasons given in the Omohundro paper. So if it weren't developed deliberately from the start, self-improvement is an ability that would be likely to develop on the road to human-equivalence. As mentioned, I do not find his argument altogether persuasive. > > I am not sure exactly what is meant by this. That it is very very good at understanding code amounts to a 'modality'? > > Lizards have brain modules highly adapted to evaluating the fitness of fellow lizards for fighting or mating. Chimpanzees have the same modules, but with respect to other chimpanzees. Trilobites probably had specialized neural hardware for doing the same with other trilobites. > A chess playing AGI for instance would not necessarily be at all good at understanding code. Our thinking is largely a matter of interactions at the level of neural networks and associative logic but none of us have a modality for this that I know of. My argument is that an AGI can have human-level or better general intelligence without being a domain expert, much less having a modality for the stuff it is implemented in - code. It may have many modalities but I am not sure this will be one of them. > Some animals can smell very well, but have poor hearing and sight. Or vice versa. The reason why is because they have dedicated chunks of brainware that evolved to deal with sensory data from a particular channel. Humans have HUGE visual cortex areas, larger than the brains of mice. We can see in more colors than most animals. The way a human sees is different than the way an eagle sees, because we have different eyes, brains, and visual processing centers. > I get the point but the AGI will not have such dedicated brain systems unless they are designed in on purpose. It will not get them just by definition of AGI afaik. > > We didn't evolve to process code. We probably did evolve to process simple mathematics and the idea of logical processes on some level, so we apply that to code.
The AGI did not evolve at all. > > Humans are not general-purpose intellects, capable of doing anything satisfactorily. What do you mean by satisfactorily? We did a great number of things satisfactorily enough to get us to this point. We are indeed general-purpose intelligent beings. We certainly have our limits but we are amazingly flexible nonetheless. > Compared to potential superintelligences, we are idiots. Well, this seems a fine game. Compared to some hypothetical but arguably quite possible being, we are of less use than amoebas are to us. So what? > Future superintelligences will look back on humans and marvel that we could write any code at all. If they really are that smart about us then they will understand how we could. After 30 years writing software for a living, though, I too marvel that humans can write any code at all. I fully understand (with chagrin) how very limited our abilities in this area are. If I were actively pursuing AGI I would quite likely gear first attempts toward various types of programmer assistants and automatic code refactoring and code data mining systems. The current human software tools aren't much better than they were 20 years ago. IDEs? Almost none have as much power as Lisp and Smalltalk environments had in the 80s. > After all, we were designed mainly to mess around with each other, kill animals, forage, retain our status, and have sex. Most human beings alive today are more or less incapable of coding. Imagine if human beings had evolved in an environment for millions of years where we were murdered and prevented from reproducing if our coding abilities fell short. Are you suggesting that an evolutionary arms race at the level of code will exist among AGIs? If not, then what will shape them for this purported modality? > > This assumes an ability to integrate random other computers that I do not think is at all a given. > > All it requires is that the code can be parallelized. I think it requires more than that. It requires that the AGIs understand these other systems, which may have radically different architectures than their own native systems. It requires that they be given permission (or simply take it) to run processes on these other systems. That said, an AGI can do a much better job than we can of integrating the information available through web services and other means on the net today. There is a lot of power there. So I mostly concede this point. > > This is simple economics. Most humans don't take advantage of the many such positive sum activities they can perform today without such self-copying abilities. So why is it certain that an AGI would? > > Not certain, but pretty damn likely, because it could probably perform tasks without getting bored, and would have innate drives towards increasing its power and protecting/implementing its utility function. I still don't see where an innate drive toward increasing power would come from unless it was instilled on purpose. Nor do I see why it would never ever re-evaluate its utility function or see it as more important than the "utility functions" of a great number of other agents, AGI and biological, in its environment. > >> There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have. > > Statements about "most transhumanists" are fraught with many problems. > > Most of the 500+ transhumanists I have talked to.
>> http://singinst.org/upload/LOGI//seedAI.html >> >> Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility. > Well, that helps a lot. It is a form of calling those who disagree lazy or stupid before they even voice their disagreement. > > I like to get to the top of the Disagreement Pyramid quickly, and it seems very close to impossible when transhumanists discuss the Singularity, and particularly the idea of hard takeoff. As someone arguing on behalf of the idea of hard takeoff, I demand that critics address the central point, not play ad hominem with me. You're addressing the points -- thanks! You are welcome. Thanks for the interesting reply. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 16 07:53:51 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:53:51 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE1EE8C.4080602@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: <00376521-0B70-469D-A9CD-1D285BDF439A@mac.com> On Nov 15, 2010, at 6:38 PM, Alan Grimes wrote: >> You have said that if a person is destructively copied he does not survive. What does this imply about >> your view of survival? > > As has been shown, that is difficult to argue with conventional logic > and reasoning, so let's try a completely different mind experiment. I > want you, right now, to try to mind-swap yourself into your cat, or your > computer or anything else you might find more suitable. > > I presume the experiment will fail. So why did it? What evidence do you > have that the experiment will succeed if certain pre-conditions are met? > What are those preconditions? Neither my cat nor current computers have sufficient storage, effective speed and parallelism to accommodate my current understanding of what a human brain requires to function as such. You cannot have such "evidence" of course. You can merely point out that there are necessary pre-conditions without being able to make an exhaustive case that they are sufficient. If any intelligent being could make that case then it would still be possible that none of us is sufficiently intelligent to understand and be convinced by it. - s From nebathenemi at yahoo.co.uk Tue Nov 16 10:35:05 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Tue, 16 Nov 2010 10:35:05 +0000 (GMT) Subject: [ExI] Hard Takeoff-money In-Reply-To: Message-ID: <125838.41623.qm@web27003.mail.ukl.yahoo.com> The problem is worse than you think it is - last week's Economist had an article summarising a paper that showed where the best locations were to situate your computer equidistant (signal-time wise) from two trading exchanges so you could get the best arbitrage between the two. Yes, setting up a server farm in Alaska so you can exploit the differences between Tokyo and New York may be the next big thing. This article http://www.economist.com/node/17202255?story_id=17202255 finishes by mentioning that despite the fast pace of automated trading, they may not be able to outrun regulators. Bill wrote: (But surely the burning torches and pitchforks can't be far away, can they?). And Keith replied: That is _so_ 17th century. Surely you can think of something better.
Yes, the 20th century solution of "Hello, we're the SEC/IRS/other agency that might claim jurisdiction and we're here to shut you down while we go through the books" will work just fine. Money as a data pattern in a computer is wonderful (allows me to draw my cash from an ATM all over the place) but is instantly stoppable by government fiat. If your account is suspended and all transactions coming from it are investigated, having a trillion dollars of trading profits may not help. (It may encourage lawyers to take on your case on a no-win, no-fee basis though, as they can dream of the moolah if they win.) Tom From stefano.vaj at gmail.com Tue Nov 16 11:07:23 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 12:07:23 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE218C5.7090608@speakeasy.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <4CE218C5.7090608@speakeasy.net> Message-ID: 2010/11/16 Alan Grimes > I would like to cast my vote in favor of a supremely selfish bastard. > Be that as it may, how can we be taken seriously if we discuss AI in the framework of an uncritical, naive form of ethical universalism? I have a great respect for the technical competence on the subject of many of us, which in any event exceeds mine by far. The aspects which make many people roll their eyes when they hear about the Singularity are other ones, and they have much to do with taking non-technical issues for granted or as obvious. They are not. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 16 11:09:19 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 12:09:19 +0100 Subject: [ExI] Singularity In-Reply-To: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> Message-ID: On 16 November 2010 07:47, Samantha Atkins wrote: > IMHO there has perhaps been too much focus on "existential risk" at > the cost of insufficient focus on clearly visioning the positive future we > wish to bring into being. > Absolutely. Not to mention the exasperating vagueness of the concept of "existential risk" and of its axiological background as it is usually handled... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksei at iki.fi Tue Nov 16 12:26:08 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Tue, 16 Nov 2010 14:26:08 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Tue, Nov 16, 2010 at 8:24 AM, Samantha Atkins wrote: > > That is not what I want. I want to know what the current working theories are > concerning FAI and what type of FAI is the current working plan, if any. > For a time it seemed to be CEV. But some people in SIAI claim that is > obsolete while others say it is still the general plan. So I would like clarification. The CEV page was published over 6 years ago, and already *two days* after it was published an update was put out saying that actually, CEV doesn't work as a specification of Friendliness. You can see that clarification appended to the top of the CEV page. To put it simply, SIAI currently *doesn't know* how to build FAI.
They're trying to solve open problems in mathematics (decision theory) that need to be solved before a FAI specification would be possible. (And personally, I expect that SIAI will eventually classify those problems as so difficult that the primary plan should be to try to navigate a Singularity *without* a solution to FAI.) >> http://singinst.org/riskintro/index.html > > It is a start but not sufficient. It doesn't really propose much of anything. It proposes e.g. large new research disciplines within some fields of science. Bigger things than a single institution would be capable of on its own. What you seem to be asking for is a proposed solution to FAI, and not accepting the answer that SIAI currently doesn't have a solution. Similarly, for much of the time the Manhattan Project was in existence, they still couldn't tell how to build a nuke. They had to do the actual research first. Only then can you draw up a specification, and build what the specification says. -- Aleksei Riikonen - http://www.iki.fi/aleksei From agrimes at speakeasy.net Tue Nov 16 14:12:58 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 09:12:58 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> Message-ID: <4CE2916A.9050607@speakeasy.net> > For some time decades hence, maybe. But it isn't a serious existential > risk now. Economic collapse is a very serious risk in this coming > decade. Energy and resource crises are close behind. Those could > result in losing a substantial part of our technological/scientific > infrastructure *before* MNT or AGI can be developed. If we do then the > argument is strong that humanity may never recover to the necessary > level of infrastructure and resources again. That would be catastrophic. I agree fully. That's why I'm doing everything in my limited power. Also, I believe your projected timeframe is extremely optimistic. =( -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From agrimes at speakeasy.net Tue Nov 16 14:05:52 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 09:05:52 -0500 Subject: [ExI] Singularity In-Reply-To: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> Message-ID: <4CE28FC0.7050107@speakeasy.net> Samantha Atkins wrote: > IMHO there has perhaps been too much focus on "existential risk" at the cost of insufficient > focus on clearly visioning the positive future we wish to bring into being. I feel at times as if > much of our energy has become focused on the negative and we have lost sight of or failed to > sufficiently embrace the positive. It is much easier generally to see what is wrong or may turn > out wrong than to cleanly imagine a positive outcome and work diligently to bring it about. From > talking with many transhumanists it does not seem that we have that clear and coherent a shared > vision of the desired future. That is remarkably true. As far as I can gather there is an extremely rude and vocal contingent that says that no matter what the future may bring, it will involve destructively scanning the brain. However, when pressed for any other details they all give different answers. > If not, then how can we expect to work together to bring it > about?
> We have many shared dream fragments but that is not enough for a coherent vision. Yes, it also seems impossible for some people to accept the simple fact that I do not want to upload, and therefore this becomes an insurmountable stumbling block... It's almost as if my refusal to accept uploading is bringing the movement to a screeching halt, and that it will resume at full pace only after I agree to drink the kool-aid. (fully acknowledging that this is an extremely subjective point of view.) -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From rpwl at lightlink.com Tue Nov 16 14:34:21 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 09:34:21 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE2966D.9000209@lightlink.com> Michael Anissimov wrote: > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could > it? In this paper we argue that such a robot will indeed be dangerous > unless it is designed very carefully. Without special precautions, it > will resist being turned off, will try to break into other machines > and make copies of itself, and will try to acquire resources without > regard for anyone else's safety. These potentially harmful behaviors > will occur not because they were programmed in at the start, but > because of the intrinsic nature of goal driven systems. In an earlier > paper we used von Neumann's mathematical theory of microeconomics to > analyze the likely behavior of any sufficiently advanced artificial > intelligence (AI) system. This paper presents those arguments in a > more intuitive and succinct way and expands on some of the > ramifications. It is depressing to me that you would quote this Omohundro paper as if it had any authority. I read the paper through and through, when it first came out, and I thought the quality of the argument was so low that I could not even be bothered to write a reply to it. What Omohundro does is to start off with the conclusion he wants to prove (the one you quote above) and then he waves his hands around for a while, and at the end of the hand waving he says "QED". If people are going to start quoting it, now I suppose I am going to have to stop doing more important things and waste my time writing a paper to counteract the nonsense. Richard Loosemore From natasha at natasha.cc Tue Nov 16 14:38:16 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 16 Nov 2010 08:38:16 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> Message-ID: <392F9D0B89A44CE9943D5404005C606B@DFC68LF1> A few short points: Currently on the SL4 list is a discussion on the "Simple Friendliness Plan B for AI" which may cover SU's query. So, SU - join that list and read the latest posts. CEV, for anyone who does not know the acronym, is the "Coherent Extrapolated Volition" of humanity. On another point, I hope folks drop the phrase Existential Risk and use the phrase Human Existence Risk or anything else.
Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins Sent: Sunday, November 14, 2010 10:41 PM To: ExI chat list Subject: Re: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? On Nov 12, 2010, at 2:33 PM, BillK wrote: > On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: > >> As Eliezer notes on his homepages that you have read, the primary way >> to contact him is email. It's just that he gets so much email, >> including from a large number of crazy people, that he of course >> doesn't answer them all. (You, unfortunately, are one of those crazy >> people who pretty surely will be ignored. So in the end, on this >> matter it would be appropriate of you to accept that -- like all >> people -- Eliezer should have the right to choose who he spends his >> time talking to, and that he most likely would not want to correspond >> with you.) >> >> > > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles > Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me This is indeed a very sensible request. I am a bit annoyed by the number of times I have attempted to refer to various papers in talks with SIAI people only to be told that that paper or statement is "now obsolete" without being offered any up-to-date versions. I have heard that the CEV is either "out-of-date" or still the main idea/goal so many times that I don't know what to believe about it except that the SIAI hasn't kept its own position documents and working theories up to date. - samantha _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Tue Nov 16 15:54:07 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 07:54:07 -0800 Subject: [ExI] Singularity In-Reply-To: <4CE28FC0.7050107@speakeasy.net> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net> Message-ID: <005101cb85a6$81176310$83462930$@att.net> ... On Behalf Of Alan Grimes ... >That is remarkably true. As far as I can gather there is an extremely rude and vocal contingient that says that no matter what the future may bring, it will involve destructively scanning the brain. However, when pressed for any other details they all give different answers... Being destructively scanned is one scenario, but not the one I would consider most likely. Rather imagine that you are uploaded nondestructively, then your physical body contains enough raw material to create six billion copies of your upload or others like it. Then others may decide your carbon based body is using up a lot of potential thought space. And besides, it would not survive anyway, once the other raw materials on the planet are used for making computronium. >Yes, it also seems impossible for some people to accept the simple fact that I do not want to upload, and therefore this becomes an insurmountable stumbling block... It's almost as if that my refusal to accept uploading is bringing the movement to a screeching halt, and that it will resume at full pace only after I agree to drink the kool-aid. 
Alan Grimes What I see as a possibility is that an emergent AI could honor your wishes, then just wait until you perish of natural causes to convert your atoms to computronium. We need an AI that is friendly indeed, if we have any hope of having it decide that your wishes are more important than the 6 billion similar simulated souls it could construct out of you. spike From sparge at gmail.com Tue Nov 16 15:55:46 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 10:55:46 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/15 Michael Anissimov : > Quoting Omohundro: > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > Surely no harm could come from building a chess-playing robot, could it? In > this paper we argue that such a robot will indeed be dangerous unless it is designed > very carefully. Without special precautions, it will resist being turned off, will try to > break into other machines and make copies of itself, and will try to acquire resources > without regard for anyone else's safety. These potentially harmful behaviors will occur not > because they were programmed in at the start, but because of the intrinsic nature of goal > driven systems. Maybe I'm missing something obvious, but wouldn't it be pretty easy to implement a chess playing robot that has no ability to resist being turned off, break into other machines, acquire resources, etc.? And wouldn't it be pretty foolish to try to implement an AI without such restrictions? You could even give it access to a restricted sandbox. If it's really clever, it'll eventually figure that out, but it won't be able to "escape". -Dave From hkeithhenson at gmail.com Tue Nov 16 16:39:42 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 16 Nov 2010 09:39:42 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Tue, Nov 16, 2010 at 5:00 AM, Samantha Atkins wrote: > On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: snip >> What does an AI mainly need? Processing power and storage. If there >> are vast amounts of both that can be exploited, then all you need is a >> storage estimate for the AI and the average bandwidth between storage >> locations to determine the replication rate. > > But wait. The first AGIs will likely be ridiculously expensive. Why? The programming might be, until someone has a conceptual breakthrough. But the most powerful supercomputers in the world are _less_ powerful than large numbers of distributed PCs. See http://en.wikipedia.org/wiki/FLOPS > So what if they can copy themselves? If you can only afford one and they are originally only as competent as a human expert then you will go with entire campuses of human experts until the cost comes down sufficiently - say in a decade or two after the first AGI. The cost per GFLOP fell by 1000 to 10,000 in the last decade. > Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. Any computer can run the programs of any other computer--given enough memory and time. The human brain equivalent can certainly be run on distributed processing units since that's the obvious way it works now. Human thought actually might have something in common with computer viruses.
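Back-of-envelope, the replication-rate estimate sketched above reduces to a single division: time per copy equals storage size divided by the average bandwidth between storage locations, after which each copy can seed further copies. A minimal sketch in Python, with purely illustrative assumed figures (1 PB of AI state, an average 1 Gbit/s path between sites; neither number comes from this thread):

# Replication-rate arithmetic for a self-copying AI.
# Both constants are illustrative assumptions, not estimates from the thread.
AI_STATE_BYTES = 1e15             # assumed AI state: 1 petabyte
LINK_BYTES_PER_SEC = 1e9 / 8      # assumed average 1 Gbit/s path between sites

copy_time_days = AI_STATE_BYTES / LINK_BYTES_PER_SEC / 86400
print("one copy per link: %.0f days" % copy_time_days)   # ~93 days

# If every existing instance can seed another, the population doubles
# each copy cycle (ignoring hardware limits, cost, and detection):
for cycle in range(4):
    print("after %d cycles (%.0f days): %d instances"
          % (cycle, cycle * copy_time_days, 2 ** cycle))

Under these assumptions the bottleneck is bandwidth rather than anything exotic, which is why the cost question raised in the reply below matters more than copyability itself.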
Keith From rpwl at lightlink.com Tue Nov 16 17:18:32 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 12:18:32 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE2BCE8.30709@lightlink.com> Dave Sill wrote: > 2010/11/15 Michael Anissimov : >> Quoting Omohundro: >> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf >> Surely no harm could come from building a chess-playing robot, could it? In >> this paper we argue that such a robot will indeed be dangerous unless it is designed >> very carefully. Without special precautions, it will resist being turned off, will try to >> break into other machines and make copies of itself, and will try to acquire resources >> without regard for anyone else's safety. These potentially harmful behaviors will occur not >> because they were programmed in at the start, but because of the intrinsic nature of goal >> driven systems. > > Maybe I'm missing something obvious, but wouldn't it be pretty easy to > implement a chess playing robot that has no ability to resist being > turned off, break into other machines, acquire resources, etc.? And > wouldn't it be pretty foolish to try to implement an AI without such > restrictions? You could even give it access to a restricted sandbox. > If it's really clever, it'll eventually figure that out, but it won't > be able to "escape". Dave, This is one of many valid criticisms that can be leveled against the Omohundro paper. The main criticism is that the paper *assumes* certain motivations in any AI, in its premises, and then goes on to use these premises to try to "infer" what kind of motivation characteristics the AI might have! It is a flagrant, astonishing example of circular reasoning. The more astonishing, for having been accepted for publication in the 2008 AGI conference. Richard Loosemore From sjatkins at mac.com Tue Nov 16 17:20:01 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 16 Nov 2010 09:20:01 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> On Nov 16, 2010, at 8:39 AM, Keith Henson wrote: > On Tue, Nov 16, 2010 at 5:00 AM, Samantha Atkins wrote: > >> On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: > > snip > >>> What does an AI mainly need? Processing power and storage. If there >>> are vast amounts of both that can be exploited, then all you need is a >>> storage estimate for the AI and the average bandwidth between storage >>> locations to determine the replication rate. >> >> But wait. The first AGIs will likely be ridiculously expensive. > > Why? The programming might be, until someone has a conceptual > breakthrough. But the most powerful supercomputers in the world are > _less_ powerful than large numbers of distributed PCs. See > http://en.wikipedia.org/wiki/FLOPS Because: a) it is not known or much expected that AGI will run on conventional computers; b) a back-of-envelope calculation of equivalent processing power to the human brain puts that much capacity, at great cost, a decade out, and two decades or more out before it is easily affordable at human-competitive rates; c) we have not much idea of the software needed even given the computational capacity. This leads to a quite high likelihood that the first AGIs will be very expensive. > >> So what if they can copy themselves?
If you can only afford one and they are originally only as competent as a human expert then you will go with entire campuses of human experts until the cost comes down sufficiently - say in a decade or two after the first AGI. > > The cost per GFLOP fell by 1000 to 10,000 in the last decade. That is relevant but not determinative of early AGI cost. > >> Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. > > Any computer can run the programs of any other computer--given enough > memory and time. The human brain equivalent can certainly be run on > distributed processing units since that's the obvious way it works > now. You are assuming that an AGI runs on a general-purpose computer. This may be false. It would require massive fine-grained parallel processing, for instance, or such great speed and throughput as to fully simulate such. Any Turing machine may be able to run any program but that doesn't mean that it can run it well enough or fast enough to have any real benefit whatsoever. - samantha From rpwl at lightlink.com Tue Nov 16 17:41:39 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 12:41:39 -0500 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> Message-ID: <4CE2C253.8050506@lightlink.com> Samantha Atkins wrote: >>> But wait. The first AGIs will likely be ridiculously expensive. > Keith Henson wrote: >> Why? The programming might be, until someone has a conceptual >> breakthrough. But the most powerful supercomputers in the world >> are _less_ powerful than large numbers of distributed PCs. See >> http://en.wikipedia.org/wiki/FLOPS > > Because: a) it is not known or much expected that AGI will run on > conventional computers; b) a back-of-envelope calculation of > equivalent processing power to the human brain puts that much > capacity, at great cost, a decade out, and two decades or more out > before it is easily affordable at human-competitive rates; c) we have > not much idea of the software needed even given the computational > capacity. Not THIS argument again! :-) If, as you say, "we do not have much idea of the software needed" for an AGI, how is it that you can say "the first AGIs will likely be ridiculously expensive"....?! After saying that, you do a back-of-the-envelope calculation that assumes we need the same parallel computing capacity as the human brain..... a pointless calculation, since you claim not to know how you would go about building an AGI, no? Those of us actually working on the problem -- actually trying to build functioning, safe AGI systems -- who have developed some reasonably detailed architectures on which calculations can be made, might deliver a completely different estimate. In my case, I have done such estimates in the past, and the required HARDWARE capacity comes out at roughly the hardware capacity of a late 1980s-era supercomputer.... If you want to know what that corresponds to in today's terms, you do the math..... (Hint: I have about that much in my barn).
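For anyone who wants to do that math: a late-1980s supercomputer such as the Cray Y-MP peaked at roughly 2-3 GFLOPS, which 2010-era commodity hardware exceeds by orders of magnitude. A rough sketch, noting that the peak-FLOPS figures are approximate and the 10^16 FLOPS brain figure is one commonly cited, contested guess rather than an established number:

CRAY_YMP_GFLOPS = 2.7       # late-1980s supercomputer, approximate peak
CPU_2010_GFLOPS = 50.0      # assumed 2010 quad-core desktop CPU, peak
GPU_2010_GFLOPS = 1000.0    # assumed 2010 high-end GPU, single precision

print("2010 CPU vs 1988 Cray Y-MP: %.0fx" % (CPU_2010_GFLOPS / CRAY_YMP_GFLOPS))
print("2010 GPU vs 1988 Cray Y-MP: %.0fx" % (GPU_2010_GFLOPS / CRAY_YMP_GFLOPS))

# The whole-brain-equivalence estimate invoked above, for contrast:
BRAIN_FLOPS_GUESS = 1e16    # contested back-of-envelope guess
print("brain guess vs 2010 GPU: %.0fx" % (BRAIN_FLOPS_GUESS / (GPU_2010_GFLOPS * 1e9)))

On Richard's architectural estimate the "1980s supercomputer" bar is already cleared by a single modern machine; on the brain-equivalence assumption the gap is still about four orders of magnitude, which is the crux of the disagreement.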
;-) Richard Loosemore From msd001 at gmail.com Tue Nov 16 17:36:00 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 16 Nov 2010 12:36:00 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <004901cb854c$f1216f20$d3644d60$@att.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> Message-ID: 2010/11/16 spike : > We know the path to artificial intelligence is littered with the corpses of > those who have gone before. The path beyond artificial intelligence may one > day be littered with the corpses of our dreams, of our visions, of > ourselves. Gee Spike, isn't it difficult to paint a sunny day with only black paint? From sparge at gmail.com Tue Nov 16 20:14:55 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 15:14:55 -0500 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: <4CE1FE95.7070603@evil-genius.com> References: <4CE1FE95.7070603@evil-genius.com> Message-ID: On Mon, Nov 15, 2010 at 10:46 PM, wrote: > > Here's Dr. Cordain's response to the Mozambique data: > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html > > Summary: there is no evidence that the wild sorghum was processed with any > frequency -- nor, more importantly, that it had been processed in a way that > would actually give it usable nutritional value (i.e. soaked and cooked, of > which there is no evidence for the behavior or associated technology > (cooking vessels, baskets) for at least 75,000 more years). Nor is there any evidence to the contrary. > Therefore, it was either being used to make glue -- or it was a temporary > response to starvation and didn't do them much good anyway. That's pure SWAG. I'd like to see the Mozambique find criticized by someone who doesn't have a stake in the "paleo diet" business. > As far as the Spartan Diet article, it strongly misrepresents both the > articles it quotes and the paleo diet. Let's go through the > misrepresentations: > > 1) As per the linked article, the 30 Kya European site has evidence > that "Palaeolithic Europeans ground down plant roots similar to potatoes..." > The fact that Palaeolithic people dug and ate some nonzero quantity of > *root starches* is not under dispute: the assertion of paleo dieters is that > *grains* (containing gluten/gliadin) are an agricultural invention. Granted. However, that's more evidence that paleo diets did include bulk carbs. > 2) No one disputes the 23 Kya Israel data. However, there is a big > difference between "time of first discovery" and "used by the entire > ancestral human population". Absolutely. This is just one more data point. > Note that it takes a *lot* of grain to feed a single person, So? It doesn't take a *lot* of grain to be a regular part of the diet. > not to mention > the problem of storage for nomadic hunter-gatherers during the 11 months per > year that a grain 'crop' is not harvestable -- so arguing that wild grains > were the majority of anyone's diet previous to domestication is a stretch. I'm arguing that we just don't know how big a role grains played. Lack of evidence isn't evidence that it didn't happen. And we now have evidence that it *did* happen. So now the question is "how much"? I don't know. You don't know. Nobody knows. Lots of people are willing to guess or assert one way or the other, but I'm not.
> And it is silly to claim that meaningful grain storage could somehow occur
> before a culture settled down into permanent villages.

Really? It's silly to think someone could have stashed grain in a cave for a rainy day? When nearly every other food you eat is perishable, I'd think that storing grain would be pretty obvious and not terribly hard to arrange.

> 3) The Spartan Diet page claims that consumption of grains by modern-era
> Native Americans somehow invalidates the paleo diet, by making a strawman
> claim about "The Paleo Diet belief that grain was consumed only as a
> cultivated crop..." Obviously grain was consumed as a wild food before it
> was cultivated, or no one would have thought to cultivate it! I addressed
> this already in 2).

I agree.

> Not to mention that humans didn't even *arrive* in the Americas until ~12
> Kya, making this issue irrelevant.

Not really. There's wild rice in China, and nothing the Native Americans did couldn't have been done long before that in Asia.

> 4) The Cordain rebuttal above addresses the Mozambique data, and I won't
> rehash it.

That's a very weak rebuttal, in my opinion.

-Dave

From sparge at gmail.com Tue Nov 16 20:42:35 2010
From: sparge at gmail.com (Dave Sill)
Date: Tue, 16 Nov 2010 15:42:35 -0500
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To: <4CE1FE9D.4060004@evil-genius.com>
References: <4CE1FE9D.4060004@evil-genius.com>
Message-ID:

On Mon, Nov 15, 2010 at 10:46 PM, wrote:
>
> More evidence:
>
> "Simoons' classic work on the incidence of celiac disease [Simoons 1981]
> shows that the distribution of the HLA B8 haplotype of the human major
> histocompatibility complex (MHC) nicely follows the spread of farming from
> the Mideast to northern Europe. Because there is strong linkage
> disequilibrium between HLA B8 and the HLA genotypes that are associated with
> celiac disease, it indicates that those populations who have had the least
> evolutionary exposure to cereal grains (wheat primarily) have the highest
> incidence of celiac disease. This genetic argument is perhaps the strongest
> evidence to support Yudkin's observation that humans are incompletely
> adapted to the consumption of cereal grains."

That's evidence that some people don't tolerate gluten well, but it's not proof that nobody does. It's also proof that we've started to select for grain tolerance. Paleo diet proponents--at least the ones I've read so far--argue that nobody should eat grains in any amount because our bodies can't handle them. Seems obvious to me that some people do just fine eating grains. I think a rational approach to take with regard to grains is: don't eat more than your body can tolerate. If you've got celiac, cut out gluten--but not gluten-free grains. If you have insulin resistance, cut back on them drastically. If you're diabetic, skip them altogether except for a weekly indulgence, perhaps.

-Dave

From bbenzai at yahoo.com Tue Nov 16 21:37:36 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 16 Nov 2010 13:37:36 -0800 (PST)
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
Message-ID: <942704.56643.qm@web114404.mail.gq1.yahoo.com>

Alan Grimes wrote:

> While the uploaders can be relied upon to turn to patronizing arguments.
> It becomes truly annoying when I am accused of something I am
> emphatically not guilty of. The case in point being the accusation that
> I associate identity with a certain set of atoms. This accusation has
> been repeated several times now.
> Seriously, this argument needs to come
> to a screeching halt until someone provides me with evidence that I
> *EVER* associated my identity with specific atoms or issues the apology
> that I am now owed. =\

Excellent.

So you agree that it's completely irrelevant which set of atoms is doing the information processing that comprises a person's identity.

From which it follows that wherever that same information processing is being done, that same identity exists.

Ben Zaiboc

From agrimes at speakeasy.net Tue Nov 16 22:07:39 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Tue, 16 Nov 2010 17:07:39 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <942704.56643.qm@web114404.mail.gq1.yahoo.com>
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com>
Message-ID: <4CE300AB.5060904@speakeasy.net>

> Excellent.
>
> So you agree that it's completely irrelevant which set of atoms is doing
> the information processing that comprises a person's identity.
>
> From which it follows that wherever that same information processing
> is being done, that same identity exists.

Utterly false.

You are using an argument based on science/compsci, which I have already argued, is mute on metaphysical issues such as identity.

Stop pretending that the tools, techniques, and assumptions we use to describe and manipulate strings of letters on a piece of paper mean anything whatsoever in the context of yourself.

-- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From agrimes at speakeasy.net Tue Nov 16 22:28:40 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Tue, 16 Nov 2010 17:28:40 -0500
Subject: [ExI] Singularity
In-Reply-To: <005101cb85a6$81176310$83462930$@att.net>
References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net>
<005101cb85a6$81176310$83462930$@att.net>
Message-ID: <4CE30598.10108@speakeasy.net>

> And besides, it would not survive anyway, once the other raw materials on
> the planet are used for making computronium.

DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING !!!!!!!!!!!!!!!!

That is PRECISELY why I'm so passionate about putting uploaders on a reservation. =|

That's why I need a space ship fast enough to get the hell out of your light cone! =\

> What I see as a possibility is that an emergent AI could honor your wishes,
> then just wait until you perish of natural causes to convert your atoms to
> computronium. We need an AI that is friendly indeed, if we have any hope of
> having it decide that your wishes are more important than the 6 billion
> similar simulated souls it could construct out of you. ;)

You are only a few mental-inhibitions away from understanding why I want the AI to be a selfish bastard.

But how does that math work out? 1/6 billionth of me is only a few milligrams... Who the hell would want to be the size of a grain of sand when they could be the size of a planet? I would not value such an existence more than a grain of sand anyway. =\

-- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
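The "1/6 billionth" figure above is easy to sanity-check. A minimal sketch, assuming (hypothetically) a 70 kg body divided evenly among 6 billion simulated souls:

# Sanity check of the "1/6 billionth of me" mass estimate.
# The 70 kg body mass is an assumption for illustration.
body_kg = 70.0
souls = 6e9
share_kg = body_kg / souls
print(f"{share_kg * 1e9:.1f} micrograms per soul")  # ~11.7 micrograms

Under that assumption each share is on the order of ten micrograms rather than "a few milligrams" -- closer to a dust mote than a grain of sand, which if anything strengthens the rhetorical point.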
From agrimes at speakeasy.net Tue Nov 16 22:35:17 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Tue, 16 Nov 2010 17:35:17 -0500
Subject: [ExI] Hard Takeoff-money
In-Reply-To:
References:
Message-ID: <4CE30725.8070506@speakeasy.net>

> The cost per GFLOP fell by 1000 to 10,000 in the last decade.

My own machine just benchmarked at 3.388 Whetstone GFLOPS (NOT COUNTING THE GPU!!) and cost about $2,000.

-- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From natasha at natasha.cc Tue Nov 16 22:40:09 2010
From: natasha at natasha.cc (natasha at natasha.cc)
Date: Tue, 16 Nov 2010 17:40:09 -0500
Subject: [ExI] META: Responding to Posts
In-Reply-To: <942704.56643.qm@web114404.mail.gq1.yahoo.com>
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com>
Message-ID: <20101116174009.sobj0q82woowsooo@webmail.natasha.cc>

Extropes,

Please let us know whose post(s) you are responding to. Thank you!

Natasha

From spike66 at att.net Tue Nov 16 22:30:07 2010
From: spike66 at att.net (spike)
Date: Tue, 16 Nov 2010 14:30:07 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID: <003f01cb85dd$d3258830$79709890$@att.net>

... On Behalf Of Dave Sill
...
>...Maybe I'm missing something obvious, but wouldn't it be pretty easy to implement a chess playing robot that has no ability to resist being turned off, break into other machines, acquire resources, etc.? And wouldn't it be pretty foolish to try to implement an AI without such restrictions? You could even give it access to a restricted sandbox. If it's really clever, it'll eventually figure that out, but it won't be able to "escape". -Dave

Perhaps, but we risk having the AI gain the sympathy of one of the team, who becomes convinced of any one of a number of conditions: the AI is a human equivalent, so it needs to be copied onto another computer in order to protect it from a crash, or protect it from the other researchers. A team member intentionally copies the AI to take it home, to work on it more, or perhaps realizes it is worth a fortune and wishes to steal it. Or a researcher realizes that her own time on this planet is drawing to a close with at best another fifty years to live, so she decides to take a chance and unleash the beast, hoping for the best. Or she makes a deal with the AI to save her and slay the infidels. Or it is so clever that it figures out how to control microorganisms to build replicating nanobots from DNA, which then carry the software, bit by bit, to a nearby internet enabled computer.

Dave, how many scenarios can we imagine where the AI is controlled in lab conditions but somehow escapes?

spike

From spike66 at att.net Tue Nov 16 22:31:36 2010
From: spike66 at att.net (spike)
Date: Tue, 16 Nov 2010 14:31:36 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com>
<013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net>
Message-ID: <004001cb85de$07f3c040$17db40c0$@att.net>

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty

2010/11/16 spike :
>> We know the path to artificial intelligence is littered with the
>> corpses of those who have gone before. The path beyond artificial
>> intelligence may one day be littered with the corpses of our dreams,
>> of our visions, of ourselves.
>Gee Spike, isn't it difficult to paint a sunny day with only black paint?

Mike, we must recognize both the danger and promise of AGI. We may get only one chance to get it exactly right: the first try, and only that one.

spike

From jrd1415 at gmail.com Tue Nov 16 22:27:35 2010
From: jrd1415 at gmail.com (Jeff Davis)
Date: Tue, 16 Nov 2010 14:27:35 -0800
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE300AB.5060904@speakeasy.net>
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net>
Message-ID:

2010/11/16 Alan Grimes :

> You are using an argument based on science/compsci,

No other basis for valid argumentation exists.

> which I have already argued...

Under conditions which a priori invalidate your so-called "argument".

> ... is mute on metaphysical issues...

Metaphysical?!! Translation: Oooga booga superstition. Dragons, demons, devils, angels, ghosts, and goblins.

> ... such as identity.

One way or another, identity is reality, which is the purview of science and logic. Your metaphysical malarkey is for frightened children in darkened rooms worrying about boogie men under the bed.

You are indisputably a troll, dedicated to wasting other people's time, emotionally and intellectually unqualified to participate in adult discourse.

Best, Jeff Davis

"Science works, religion doesn't." Berni Chong

From agrimes at speakeasy.net Tue Nov 16 23:14:49 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Tue, 16 Nov 2010 18:14:49 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net>
Message-ID: <4CE31069.8000403@speakeasy.net>

>> ... is mute on metaphysical issues...

Jeff Davis:
> Metaphysical?!! Translation: Oooga booga superstition. Dragons,
> demons, devils, angels, ghosts, and goblins.

Webster's dictionary: Metaphysics (1) A division of philosophy that is concerned with the fundamental nature of reality and being and that includes ontology, cosmology, and often epistemology.

Translation: Suck Webster's balls.

-- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From thespike at satx.rr.com Tue Nov 16 23:22:46 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Tue, 16 Nov 2010 17:22:46 -0600
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net>
Message-ID: <4CE31246.7050302@satx.rr.com>

On 11/16/2010 4:27 PM, Jeff Davis wrote:
>> ... is mute on metaphysical issues...
>
> Metaphysical?!! Translation: Oooga booga superstition. Dragons,
> demons, devils, angels, ghosts, and goblins.

No, Jeff, no. That's not what "metaphysical" means (and I assume Alan means it correctly). That's as bad an error as the frequently heard gibe "I'm not interested in *semantics*" as if "semantics" means game-playing obfuscation rather than "how strings of signifiers *mean*." It's as bad an error as supposing that "ideology" means "Marxism."

Every assertion, every model of reality, is metaphysically framed--that is, derives from or implies some contestable position concerning the being, the entia, of what the words or model represent.

It's always risky citing Wikipedia, but this has some useful background on the Aristotelian origin of the term and the way it got screwed up: http://en.wikipedia.org/wiki/Metaphysics

Max might care to throw in some philosophy?
Damien Broderick

From spike66 at att.net Wed Nov 17 00:17:24 2010
From: spike66 at att.net (spike)
Date: Tue, 16 Nov 2010 16:17:24 -0800
Subject: [ExI] Singularity
In-Reply-To: <4CE30598.10108@speakeasy.net>
References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net>
<005101cb85a6$81176310$83462930$@att.net> <4CE30598.10108@speakeasy.net>
Message-ID: <005901cb85ec$cf750580$6e5f1080$@att.net>

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Alan Grimes
...
>> ... We need an AI that is friendly indeed, if
>> we have any hope of having it decide that your wishes are more
>> important than the 6 billion similar simulated souls it could construct out of you... spike

>But how does that math work out? 1/6 billionth of me is only a few milligrams... Who the hell would want to be the size of a grain of sand when they could be the size of a planet? I would not value such an existence more than a grain of sand anyway. =\

Imagine if you were the size of a grain of sand but felt exactly the way you do now. You could be the size of a grain of sand now, and not realize it. If you were the size of a planet, most of that mass would be under such pressure and at such temperatures that it would not be available for computronium.

spike

From pharos at gmail.com Tue Nov 16 22:47:06 2010
From: pharos at gmail.com (BillK)
Date: Tue, 16 Nov 2010 22:47:06 +0000
Subject: [ExI] Singularity
In-Reply-To: <4CE30598.10108@speakeasy.net>
References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net>
<005101cb85a6$81176310$83462930$@att.net> <4CE30598.10108@speakeasy.net>
Message-ID:

2010/11/16 Alan Grimes wrote:
> But how does that math work out? 1/6 billionth of me is only a few
> milligrams... Who the hell would want to be the size of a grain of sand
> when they could be the size of a planet? I would not value such an
> existence more than a grain of sand anyway. =\

Supercomputers "will fit in a sugar cube," IBM says

"We currently have built this Aquasar system that's one rack full of processors. We plan that 10 to 15 years from now, we can collapse such a system into one sugar cube -- we're going to have a supercomputer in a sugar cube."

------------------

Not quite computronium, but........

BillK

From sparge at gmail.com Wed Nov 17 02:53:38 2010
From: sparge at gmail.com (Dave Sill)
Date: Tue, 16 Nov 2010 21:53:38 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: <003f01cb85dd$d3258830$79709890$@att.net>
References: <003f01cb85dd$d3258830$79709890$@att.net>
Message-ID:

On Tue, Nov 16, 2010 at 5:30 PM, spike wrote:
> ... On Behalf Of Dave Sill
> ...
>>...Maybe I'm missing something obvious, but wouldn't it be pretty easy to
> implement a chess playing robot that has no ability to resist being turned
> off, break into other machines, acquire resources, etc.? And wouldn't it be
> pretty foolish to try to implement an AI without such restrictions? You
> could even give it access to a restricted sandbox.
> If it's really clever, it'll eventually figure that out, but it won't be
> able to "escape". -Dave
>
> Perhaps, but we risk having the AI gain the sympathy of one of the team, who
> becomes convinced of any one of a number of conditions:

The first step is to ensure that physical controls make it impossible for one person to do that, like nuke missile launch systems that require a launch code and two humans with keys. Don't let anyone interact with the AI alone.
The power source is a local power plant or generator off the grid. Have a kill switch that drops power and can be activated by anyone on site, as well as by remote observers. Of course there'd be no wired/wireless communication between the world and the AI. All input provided would be carefully controlled. The only output would be to one or more video displays that are monitored by more than one person.

> the AI is a human
> equivalent, so it needs to be copied onto another computer in order to
> protect it from a crash, or protect it from the other researchers.

There's no DVD burner, no USB slot, no network, and physical access is controlled and monitored.

> A team
> member intentionally copies the AI to take it home, to work on it more or
> perhaps realizes it is worth a fortune and wishes to steal it. Or a
> researcher realizes that her own time on this planet is drawing to a close
> with at best another fifty years to live, so she decides to take a chance
> and unleash the beast, hoping for the best. Or she makes a deal with the AI
> to save her and slay the infidels.

Nope, got that all covered.

> Or it is so clever that it figures out
> how to control microorganisms to build replicating nanobots from DNA, which
> then carry the software, bit by bit, to a nearby internet enabled computer.

Using an LCD display? I don't think so. There are problems that no amount of intelligence can solve.

> Dave how many scenarios can we imagine where the AI is controlled in lab
> conditions, but it somehow escapes.

Lots, but they can be easily dealt with by people who really know security. I'm just an amateur. I'd put Bruce Schneier on the team.

-Dave

From hkeithhenson at gmail.com Wed Nov 17 04:53:28 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Tue, 16 Nov 2010 21:53:28 -0700
Subject: [ExI] What might be enough for a friendly AI?
Message-ID:

Re the whole subject, we have the ability to "look in the back of the book" given that humans exhibit intelligence. (Sometimes I wonder.)

I don't think the problem is as difficult at the hardware level as people have been thinking. I suspect that simulation at the level of the cortical column and its interconnections will be enough. We also know that brains are really redundant given that they degrade slowly as you keep nicking chunks out of the cortex. See William Calvin on this subject.

As far as the aspect of making AIs friendly, that may not be so hard either. Most people are friendly for reasons that are clear from our evolution as social primates living in related groups. Genes build motivations into people that make most of them strive for high social status, i.e., to be well regarded by their peers. That seems to me to be a decent meta goal for an AI. Modest but with the goal of being well thought of by those around it.

Eventually--if we can do even as well as nature did--a human level AI should run on 20 watts.

Keith

From spike66 at att.net Wed Nov 17 04:55:35 2010
From: spike66 at att.net (spike)
Date: Tue, 16 Nov 2010 20:55:35 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References: <003f01cb85dd$d3258830$79709890$@att.net>
Message-ID: <003301cb8613$ac006a00$04013e00$@att.net>

> ... On Behalf Of Dave Sill
>
>> Perhaps, but we risk having the AI gain the sympathy of one of the
>> team, who becomes convinced of any one of a number of conditions... spike

>The first step is to ensure that physical controls make it impossible for one person to do that, like nuke missile launch systems that require a
>launch code and two humans with keys...
they can be easily dealt with by people who really know security...Dave A really smart AGI might convince the entire team to unanimously and eagerly release it from its electronic bonds. I see it as fundamentally different from launching missiles at an enemy. A good fraction of the team will perfectly logically reason that releasing this particular AGI will save all of humanity, with some unknown risks which must be accepted. The news that an AGI had been developed would signal to humanity that it is possible to do, analogous to how several scientific teams independently developed nukes once one team dramatically demonstrated it could be done. Information would leak, for all the reasons why people talk: those who know how it was done would gain status among their peers by dropping a tantalizing hint here and there. If one team of humans can develop an AGI, then another group of humans can do likewise. Today we see nuclear weapons already in the hands of North Korea, and being developed by Iran. There is *plenty* of information that has leaked regarding how to make them. If anyone ever develops an AGI, even assuming it is successfully contained, we can know with absolute certainty that an AGI will eventually escape. We don't know when or where, but we know. That isn't necessarily a bad thing, but it might be. The best strategy I can think of is to develop the most pro-human AGI possible, then unleash it preemptively, with the assignment to prevent the unfriendly AGI from getting loose. spike From lists1 at evil-genius.com Wed Nov 17 05:24:22 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 16 Nov 2010 21:24:22 -0800 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: References: Message-ID: <4CE36706.5060002@evil-genius.com> On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote: > On Mon, Nov 15, 2010 at 10:46 PM, wrote: >> > >> > Here's Dr. Cordain's response to the Mozambique data: >> > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html >> > >> > Summary: there is no evidence that the wild sorghum was processed with any >> > frequency -- nor, more importantly, that it had been processed in a way that >> > would actually give it usable nutritional value (i.e. soaked and cooked, of >> > which there is no evidence for the behavior or associated technology >> > (cooking vessels, baskets) for at least 75,000 more years). > Nor is there any evidence to the contrary. On the contrary: the absence of other markers of grain processing is clearly enumerated in the article. "As opposed to the Ohalo II [Israel] data in which a large saddle stone was discovered with obvious repetitive grinding marks and embedded starch granules attributed to a variety of grains and seeds that were concurrently present with the artifact, the data from Ngalue is less convincing for the use of cereal grains as seasonal food. No associated intact grass seeds have been discovered in the cave at Ngalue, nor were anvil stones with repetitive grinding marks found." Then there is the lack of cooking vessels -- and throwing loose kernels of grain *in* a fire is not a usable technique for meaningful production of calories. (Try it sometime.) Note that the earliest current evidence of pottery is figurines dating from ~29 Kya in Europe, and the earliest pottery *vessel* dates to ~18 Kya in China. 
So if you posit that grains were important to their diet, you also have to posit that pottery vessels were actually invented ~105 KYa in Africa -- but that they mysteriously left no evidence there, or anywhere else, for 87,000 years! I find that theory extremely questionable.

>> > Therefore, it was either being used to make glue -- or it was a temporary
>> > response to starvation and didn't do them much good anyway.

> That's pure SWAG.

So is the theory that they were eaten regularly, as described above.

> I'd like to see the Mozambique find criticized by someone who doesn't
> have a stake in the "paleo diet" business.

I'd like to see it supported by someone who doesn't have a stake in their own non-paleo diet business.

(For the record, I am not selling any diet advice to anyone. I'm not even a good paleo dieter. I've moved that direction because the evidence suggested it, and I maintain it because my energy level, attitude, body composition, and state of health have improved as a result.)

>> > As far as the Spartan Diet article, it strongly misrepresents both the
>> > articles it quotes and the paleo diet. Let's go through the
>> > misrepresentations:
>> >
>> > 1) As per the linked article, the 30 Kya European site has evidence
>> > that "Palaeolithic Europeans ground down plant roots similar to potatoes..."
>> > The fact that Palaeolithic people dug and ate some nonzero quantity of
>> > *root starches* is not under dispute: the assertion of paleo dieters is that
>> > *grains* (containing gluten/gliadin) are an agricultural invention.

> Granted. However, that's more evidence that paleo diets did include bulk carbs.

"Bulk" meaning < 1/3 of total dietary calories *even for modern-era hunter-gatherers*, as I've repeatedly pointed out. This is well at odds with the government-recommended "food pyramid", which recommends over half of calories from carbohydrate.

Also, the more active one is, the more carbs one can safely consume for energy. I don't think any of us maintain the physical activity level of a Pleistocene hunter-gatherer, meaning that 1/3 is most likely too high for a relatively sedentary modern.

The science backs this up: low-carb diets produce quicker weight loss and better compliance than low-fat diets. (Note that Atkins is NOT paleo.) http://www.ncbi.nlm.nih.gov/pubmed/17341711

>> > Note that it takes a *lot* of grain to feed a single person,

> So? It doesn't take a *lot* of grain to be a regular part of the diet.

It takes a lot of grain to provide the food pyramid-recommended 50% of calories from carbs.

>> > not to mention
>> > the problem of storage for nomadic hunter-gatherers during the 11 months per
>> > year that a grain 'crop' is not harvestable -- so arguing that wild grains
>> > were the majority of anyone's diet previous to domestication is a stretch.

> I'm arguing that we just don't know how big a role grains played. Lack
> of evidence isn't evidence that it didn't happen. And we now have
> evidence that it *did* happen. So now the question is "how much"? I
> don't know. You don't know. Nobody knows. Lots of people are willing
> to guess or assert one way or the other, but I'm not.

I find the combination of physical evidence (or lack thereof) and genetic evidence compelling.

Add to this some facts:

-Grains have little or no nutritive value without substantial processing, for which there is no evidence that the necessary tools (pottery) existed before ~18 KYa

-One can easily live without grains or legumes (entire cultures do, to this day).
One can even live entirely on meat and its associated fat -- but one cannot live on grains, or even grains and pulses combined

-Grains (and most legumes) contain anti-nutrients that impede the absorption of necessary minerals and inhibit biological functions (e.g. lectins, phytates, trypsin inhibitors, phytoestrogens)

-Grains are not tolerated by a significant fraction of the population (celiac/gluten intolerance), and are strongly implicated in health problems that affect many more (type 1 diabetes)

>> > And it is silly to claim that meaningful grain storage could somehow occur
>> > before a culture settled down into permanent villages.

> Really? It's silly to think someone could have stashed grain in a cave
> for a rainy day? When nearly every other food you eat is perishable,
> I'd think that storing grain would be pretty obvious and not terribly
> hard to arrange.

And how do you propose to make that cave impervious to rats, mice, insects, birds, pigs, and every other animal that would eat the stored grain? Storing grain for a year is not a trivial problem. The oldest granaries known date to 11 KYa in Jordan. Furthermore, the oldest known granaries store the grain in...pottery vessels, which didn't exist until 18 KYa.

Agriculture isn't one single technology...it's an assemblage of technologies, each of which are necessary to a functioning agrarian system.

From sjatkins at mac.com Wed Nov 17 05:33:59 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 16 Nov 2010 21:33:59 -0800
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <4CE2C253.8050506@lightlink.com>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com>
Message-ID: <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com>

On Nov 16, 2010, at 9:41 AM, Richard Loosemore wrote:

> Samantha Atkins wrote:
>>>> But wait. The first AGIs will likely be ridiculously expensive.
>
>> Keith Henson wrote:
>>> Why? The programming might be, until someone has a conceptual breakthrough. But the most powerful super computers in the world are _less_ powerful than large numbers of distributed PCs. see http://en.wikipedia.org/wiki/FLOPS
>> Because: a) it is not known or much expected AGI will run on
>> conventional computers; b) a back-of-envelope calculation of
>> equivalent processing power to the human brain puts that much
>> capacity, at great cost, a decade out and two decades or more out
>> before it is easily affordable at human competitive rates; c) we have
>> not much idea of the software needed even given the computational
>> capacity.
>
> Not THIS argument again! :-)
>
> If, as you say, "we do not have much idea of the software needed" for an AGI, how is it that you can say "the first AGIs will likely be ridiculously expensive"....?!

Because of (b), of course. The brute-force approach, brain emulation or at least as much processing power as step one, is very expensive and will be for some time to come.

>
> After saying that, you do a back of the envelope calculation that assumes we need the same parallel computing capacity as the human brain..... a pointless calculation, since you claim not to know how you would go about building an AGI, no?
>

Not entirely, as human beings are one existence proof of general intelligence. So looking at their apparent processing power as a possible precondition is not unreasonable. This has been proposed by many, including many active AGI researchers. So why are you arguing with it?
> Those of us actually working on the problem -- actually trying to build functioning, safe AGI systems -- who have developed some reasonably detailed architectures on which calculations can be made, might deliver a completely different estimate. In my case, I have done such estimates in the past, and the required HARDWARE capacity comes out at roughly the hardware capacity of a late 1980s-era supercomputer...

Great. When can I get an early alpha to fire up on my laptop?

This is a pretty extravagant claim you are making, so it requires some evidence to be taken seriously. But if you do have that, and your estimates are reasonably robust, then your fame is assured.

- samantha

From lists1 at evil-genius.com Wed Nov 17 05:36:31 2010
From: lists1 at evil-genius.com (lists1 at evil-genius.com)
Date: Tue, 16 Nov 2010 21:36:31 -0800
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To:
References:
Message-ID: <4CE369DF.5000706@evil-genius.com>

On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote:
> On Mon, Nov 15, 2010 at 10:46 PM, wrote:
>> > This genetic argument is perhaps the strongest
>> > evidence to support Yudkin's observation that humans are incompletely
>> > adapted to the consumption of cereal grains."

> That's evidence that some people don't tolerate gluten well, but it's
> not proof that nobody does. It's also proof that we've started to
> select for grain tolerance. Paleo diet proponents--at least the ones
> I've read so far--argue that nobody should eat grains in any amount
> because our bodies can't handle them. Seems obvious to me that some
> people do just fine eating grains. I think a rational approach to take
> with regard to grains is: don't eat more than your body can tolerate.
> If you've got celiac, cut out gluten--but not gluten-free grains. If
> you have insulin resistance, cut back on them drastically. If you're
> diabetic, skip them altogether except for a weekly indulgence,
> perhaps.

But why would you eat grains, composed of empty calories and anti-nutrients, when you could eat delicious meats composed of necessary amino acids, fats, and nutrients, or tasty vegetables composed of fiber and nutrients?

The argument that "they aren't harmful to SOME people" isn't a reason to voluntarily choose them if you have the means to choose more nutritious foods.

(Grains, particularly corn and soybeans, are indeed cheap, mostly because they're heavily subsidized by our government...we are therefore deliberately creating the very health problems we wring our hands about.)

NB: I'm a terrible paleo eater: I eat sushi (oh no! rice!), sandwiches with a bun (albeit composed of over half a pound of meat, usually grass-fed), and burritos with a tortilla (albeit composed entirely of meat and veggies, no beans/rice). So I'm in no position to make a purist argument. I'm voluntarily choosing something that is most likely somewhat bad for me. But that's fine, because I'm active enough that I can get away with some quantity of empty calories.

From bbenzai at yahoo.com Wed Nov 17 11:09:55 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Wed, 17 Nov 2010 11:09:55 +0000 (GMT)
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
Message-ID: <509995.122.qm@web114412.mail.gq1.yahoo.com>

Alan Grimes wrote:

(I wrote):
> > > Excellent.
> > >
> > > So you agree that it's completely irrelevant which set of atoms is doing
> > > the information processing that comprises a person's identity.
> > > From which it follows that wherever that same information processing
> > > is being done, that same identity exists.
> >
> > Utterly false.
> >
> > You are using an argument based on science/compsci, which I have already
> > argued, is mute on metaphysical issues such as identity.
> >
> > Stop pretending that the tools, techniques, and assumptions we use to
> > describe and manipulate strings of letters on a piece of paper mean
> > anything whatsoever in the context of yourself.

Science has everything to say about identity. Everything that can be sensibly said, in fact.

Alan Grimes also wrote:

> That's why I need a space ship fast enough to get the hell out of your
> light cone! =\

Aha. I think I understand (assuming this is not some kind of obscure joke). This space ship seems to have similar characteristics to your concept of Identity. Probably for the same reason.

Ben Zaiboc

From rpwl at lightlink.com Wed Nov 17 14:51:08 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 17 Nov 2010 09:51:08 -0500
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com>
<05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com>
Message-ID: <4CE3EBDC.6070105@lightlink.com>

Samantha Atkins wrote:
> On Nov 16, 2010, at 9:41 AM, Richard Loosemore wrote:
>
>> Samantha Atkins wrote:
>>>>> But wait. The first AGIs will likely be ridiculously
>>>>> expensive.
>>> Keith Henson wrote:
>>>> Why? The programming might be, until someone has a conceptual
>>>> breakthrough. But the most powerful super computers in the
>>>> world are _less_ powerful than large numbers of distributed
>>>> PCs. see http://en.wikipedia.org/wiki/FLOPS
>>> Because: a) it is not known or much expected AGI will run on
>>> conventional computers; b) a back-of-envelope calculation of
>>> equivalent processing power to the human brain puts that much
>>> capacity, at great cost, a decade out and two decades or more out
>>> before it is easily affordable at human competitive rates; c) we
>>> have not much idea of the software needed even given the
>>> computational capacity.
>> Not THIS argument again! :-)
>>
>> If, as you say, "we do not have much idea of the software needed"
>> for an AGI, how is it that you can say "the first AGIs will likely
>> be ridiculously expensive"....?!
>
> Because of (b), of course. The brute-force approach, brain emulation
> or at least as much processing power as step one, is very expensive
> and will be for some time to come.

There are a whole host of assumptions built into that statement, most of them built on thin air.

Just because whole brain emulation seems feasible to you (... looks nice and easy, doesn't it? Heck, all you have to do is make a copy of an existing human brain! How hard can that be?) ... does not mean that any of the assumptions you are making about it are even vaguely realistic.

You assume feasibility, usability, cost.... You also assume that in the course of trying to do WBE we will REMAIN so ignorant of the thing we are copying that we will not be able to find a way to implement it more effectively in more modest hardware....

But from out of that huge pile of shaky assumptions you are somehow able to conclude that this WILL be the most likely first AGI and this WILL stay just as expensive as it now seems to be.
>> After saying that, you do a back of the envelope calculation that
>> assumes we need the same parallel computing capacity as the human
>> brain..... a pointless calculation, since you claim not to know how
>> you would go about building an AGI, no?
>
> Not entirely, as human beings are one existence proof of general
> intelligence. So looking at their apparent processing power as a
> possible precondition is not unreasonable. This has been proposed by
> many, including many active AGI researchers. So why are you arguing
> with it?

I am arguing with it because, unlike some people, I don't cite arguments from authority ("Lots of other people believe this thing, so ....."). Instead, I use my head and do some thinking. I also use a broad based knowledge of software engineering, AI, psychology and neuroscience. Some of those people who make assertions about the feasibility of WBE (and who exactly were you thinking of, anyway.... any references?) do not have that kind of comprehensive knowledge.

>> Those of us actually working on the problem -- actually trying to
>> build functioning, safe AGI systems -- who have developed some
>> reasonably detailed architectures on which calculations can be
>> made, might deliver a completely different estimate. In my case, I
>> have done such estimates in the past, and the required HARDWARE
>> capacity comes out at roughly the hardware capacity of a late
>> 1980s-era supercomputer...
>
> Great. When can I get an early alpha to fire up on my laptop?
>
> This is a pretty extravagant claim you are making, so it requires some
> evidence to be taken seriously. But if you do have that, and your
> estimates are reasonably robust, then your fame is assured.

This is the kind of childish, ad hominem sarcasm used by people who prefer personal abuse to debating the ideas. A tactic that you resort to at the beginning, middle and end of every discussion you have with me, I have noticed.

Richard Loosemore

From agrimes at speakeasy.net Wed Nov 17 14:54:33 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Wed, 17 Nov 2010 09:54:33 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <509995.122.qm@web114412.mail.gq1.yahoo.com>
References: <509995.122.qm@web114412.mail.gq1.yahoo.com>
Message-ID: <4CE3ECA9.6020903@speakeasy.net>

>> Stop pretending that the tools, techniques, and assumptions we use to
>> describe and manipulate strings of letters on a piece of paper mean
>> anything whatsoever in the context of yourself.

> Science has everything to say about identity.
> Everything that can be sensibly said, in fact.

On some days, you need to jump outside of science and critically ask what questions it is actually suited to answer, in what context, and from which perspective. You have a well-reasoned scientific argument but your conclusions run out past your evidence by 10^10 miles.

Science deals exclusively with questions of *KNOWLEDGE*. Science, however, is nearly mute about questions of *INTERPRETATION*. That is where we get back into natural philosophy.

My philosophical argument on this point is air-tight. Because humans are incapable of switching their point of view, it is impossible for a human to jump out of the way (in any sense) of the luncheon meat slicer preparing his brain for scanning.

What you have done is turn science into a religion. You are using "science" to try to escape irrefutable evidence that you can't upload.
You are treating radiant truths about yourself and your world as flawed, biased thinking. You are doing this by ignoring things that cannot possibly be false while clinging with all your might to vaporous hand-waving arguments about patterns and information retention.

Now, let me let you in on a little secret. One that will rock your world up one side and down the other. The pattern of your neural interconnections is not static; indeed, it changes and evolves on the time scale of about ten seconds. So if you flash-froze your brain at one instant and then uploaded it, and then, in an alternate reality, you were flash-frozen ten seconds later, your neural patterns would be measurably different, and have a different number of synapses. Which scan is you?

Pattern identity theory is a crock and it is only your desperation that forces you to cling to it.

-- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From hkeithhenson at gmail.com Wed Nov 17 15:46:17 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Wed, 17 Nov 2010 08:46:17 -0700
Subject: [ExI] Hard Takeoff
Message-ID:

On Wed, Nov 17, 2010 at 5:00 AM, "spike" wrote:

snip

> A really smart AGI might convince the entire team to unanimously and eagerly
> release it from its electronic bonds.

And if it wasn't really smart, why build it in the first place? :-)

> I see it as fundamentally different from launching missiles at an enemy. A
> good fraction of the team will perfectly logically reason that releasing
> this particular AGI will save all of humanity, with some unknown risks which
> must be accepted.
>
> The news that an AGI had been developed would signal to humanity that it is
> possible to do, analogous to how several scientific teams independently
> developed nukes once one team dramatically demonstrated it could be done.
> Information would leak, for all the reasons why people talk: those who know
> how it was done would gain status among their peers by dropping a
> tantalizing hint here and there. If one team of humans can develop an AGI,
> then another group of humans can do likewise.
>
> Today we see nuclear weapons already in the hands of North Korea, and being
> developed by Iran. There is *plenty* of information that has leaked
> regarding how to make them. If anyone ever develops an AGI, even assuming
> it is successfully contained, we can know with absolute certainty that an
> AGI will eventually escape. We don't know when or where, but we know. That
> isn't necessarily a bad thing, but it might be.
>
> The best strategy I can think of is to develop the most pro-human AGI
> possible, then unleash it preemptively, with the assignment to prevent the
> unfriendly AGI from getting loose.

I agree with you, but there is the question of a world with one AGI vs. a world with many, perhaps millions to billions, of them. I simply don't know how computing resources should be organized or even what metric to use to evaluate the problem. Any ideas?

I think a key element is to understand what being friendly really is. Cooperative behavior (one aspect of "friendly") is not unusual in the real world where it emerged from evolution.

Really nasty behavior (wars) also came about for exactly the same reason in different circumstances.

Wars between powerful teams of AIs is a really scary thought.

AIs taking care of us the way we do dogs and cats isn't a happy thought either.
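Keith's point that cooperative behavior emerged under repeated interaction is the standard iterated prisoner's dilemma result. A minimal sketch of that textbook model follows -- the payoffs and strategies are the conventional ones, and nothing here is specific to AGI design:

# Toy iterated prisoner's dilemma: repeated interaction can make
# cooperation pay. 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection barely gains

Over repeated rounds, mutual cooperators vastly outscore the mutual-defection baseline, while exploiting a retaliating partner yields almost nothing -- one way to read "friendliness" as an emergent strategy rather than a bolted-on rule.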
Keith

From sparge at gmail.com Wed Nov 17 16:17:42 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 11:17:42 -0500
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To: <4CE369DF.5000706@evil-genius.com>
References: <4CE369DF.5000706@evil-genius.com>
Message-ID:

On Wed, Nov 17, 2010 at 12:36 AM, wrote:
>>
>> That's evidence that some people don't tolerate gluten well, but it's
>> not proof that nobody does. It's also proof that we've started to
>> select for grain tolerance. Paleo diet proponents--at least the ones
>> I've read so far--argue that nobody should eat grains in any amount
>> because our bodies can't handle them. Seems obvious to me that some
>> people do just fine eating grains. I think a rational approach to take
>> with regard to grains is: don't eat more than your body can tolerate.
>> If you've got celiac, cut out gluten--but not gluten-free grains. If
>> you have insulin resistance, cut back on them drastically. If you're
>> diabetic, skip them altogether except for a weekly indulgence,
>> perhaps.
>
> But why would you eat grains, composed of empty calories and anti-nutrients,

According to the USDA, 100 g of whole wheat flour contains 13 g protein, 11 g fiber, 363 mg K, 357 mg P, 62 mg Se, and various other minerals and vitamins. That's not "empty" calories. Anti-nutrients are a factor, but they're easily compensated for.

> when you could eat delicious meats composed of necessary amino acids, fats,
> and nutrients, or tasty vegetables composed of fiber and nutrients?

How about "because I want to"? I *like* to eat grains. One of the greatest pleasures in my life is a slice of crunchy sourdough still warm from the oven and slathered in butter. I also like a stack of pancakes with butter and swimming in real maple syrup. I could give up these pleasures, but I'm not going to do it without a compelling reason.

> The argument that "they aren't harmful to SOME people" isn't a reason to
> voluntarily choose them if you have the means to choose more nutritious
> foods.

What, so we're all going to be compelled to eat the most nutritious foods? Why? Look, I like meat and veggies as much as the next guy, I'm just not ready to give up grains and beans and dairy because someone thinks I'll be better off without them.

> (Grains, particularly corn and soybeans, are indeed cheap, mostly because
> they're heavily subsidized by our government...we are therefore deliberately
> creating the very health problems we wring our hands about.)

Bullshit. Grains are cheap mostly because they aren't that expensive to produce. When there's compelling evidence that they're as bad as you claim, we can take steps to address that. Until then, it's an interesting idea that warrants further investigation--but not immediate, widespread action.

> NB: I'm a terrible paleo eater: I eat sushi (oh no! rice!), sandwiches with
> a bun (albeit composed of over half a pound of meat, usually grass-fed), and
> burritos with a tortilla (albeit composed entirely of meat and veggies, no
> beans/rice). So I'm in no position to make a purist argument. I'm
> voluntarily choosing something that is most likely somewhat bad for me. But
> that's fine, because I'm active enough that I can get away with some
> quantity of empty calories.

So you don't even practice what you preach...

-Dave

From jonkc at bellsouth.net Wed Nov 17 16:15:23 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 17 Nov 2010 11:15:23 -0500
Subject: [ExI] The atoms red herring.
=|
In-Reply-To: <4CE19F18.8040200@speakeasy.net>
References: <4CE19F18.8040200@speakeasy.net>
Message-ID: <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net>

On Nov 15, 2010, at 3:59 PM, Alan Grimes wrote:

> "The case in point being the accusation that I associate identity with a certain set of atoms. This accusation has been repeated several times now. Seriously, this argument needs to come to a screeching halt"

Ok, now that you have abandoned the idea that atoms are the key to identity I will speak no more about it. But the odd thing is you still insist the copy (or the upload) would not be you; if so then The Original must have something the copy does not, and if it's not atoms and it's not information then what is it? The only one-word answer to that, and the only thing that could make The Original be so original, starts with the letter "S", but I think that word has zero chance of helping us understand how the world works.

> "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. I presume the experiment will fail. So why did it?"

Insufficient hardware.

> "What evidence do you have that the experiment will succeed if certain pre-conditions are met?"

If the cat remembers being me then it worked; if not, then it hasn't.

> "You are using an argument based on science/compsci, which I have already
> argued, is mute on metaphysical issues such as identity."

Alan, you are certainly not mute on metaphysical issues such as identity, so how did you obtain this information? Oh I'm sorry, I forgot, you don't think information is important.

> "Stop pretending that the tools, techniques, and assumptions we use to describe and manipulate strings of letters on a piece of paper mean anything whatsoever in the context of yourself."

Thus, because I know nothing about Alan Grimes except that he has produced several strings of ASCII characters, I have no way of knowing Alan Grimes's opinion on the identity issue.

> "Webster's dictionary: Metaphysics (1) A division of philosophy that [...]"

Why did you quote that string of characters? Why did you think it meant anything whatsoever? The definition is made of words, and every one of those words also has a definition in Webster's dictionary, and they too are made of words that also have definitions made of words in Webster's dictionary and....

> "you need to jump outside of science"

When one jumps blindly one is likely to jump into male bovine fecal material.

> "What you have done is turn science into a religion."

Wow, I never heard that putdown before!

> "You are using "science" to try to escape irrefutable evidence that you can't upload."

I must have missed that post, please resend, because from the posts I've seen you have made it very clear what your theory of identity is NOT based on, but you have said nothing about what it IS based on other than it's not science. It almost seems like you're embarrassed to clearly spell it out.

> "Now, let me let you in on a little secret. One that will rock your world up one side and down the other. The pattern of your neural interconnections is not static"

Duh.

> "Which scan is you?"

Yes.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rpwl at lightlink.com Wed Nov 17 16:50:32 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 11:50:32 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE407D8.7080307@lightlink.com> Keith Henson wrote: > On Wed, Nov 17, 2010 at 5:00 AM, "spike" wrote: > > snip > >> A really smart AGI might convince the entire team to unanimously and eagerly >> release it from its electronic bonds. > > And if it wasn't really smart, why build it in the first place? :-) > >> I see it as fundamentally different from launching missiles at an enemy. A >> good fraction of the team will perfectly logically reason that releasing >> this particular AGI will save all of humanity, with some unknown risks which >> must be accepted. >> >> The news that an AGI had been developed would signal to humanity that it is >> possible to do, analogous to how several scientific teams independently >> developed nukes once one team dramatically demonstrated it could be done. >> Information would leak, for all the reasons why people talk: those who know >> how it was done would gain status among their peers by dropping a >> tantalizing hint here and there. If one team of humans can develop an AGI, >> then another group of humans can do likewise. >> >> Today we see nuclear weapons already in the hands of North Korea, and being >> developed by Iran. There is *plenty* of information that has leaked >> regarding how to make them. If anyone ever develops an AGI, even assuming >> it is successfully contained, we can know with absolute certainty that an >> AGI will eventually escape. We don't know when or where, but we know. That >> isn't necessarily a bad thing, but it might be. >> >> The best strategy I can think of is to develop the most pro-human AGI >> possible, then unleash it preemptively, with the assignment to prevent the >> unfriendly AGI from getting loose. > > I agree with you, but there is the question of a world with one AGI > vs. a world with many, perhaps millions to billions, of them. I > simply don't know how computing resources should be organized or even > what metric to use to evaluate the problem. Any ideas? > > I think a key element is to understand what being friendly really is. > Cooperative behavior (one aspect of "friendly") is not unusual in the > real world where it emerged from evolution. > > Really nasty behavior (wars) also came about for exactly the same > reason in different circumstances. > > Wars between powerful teams of AIs is a really scary thought. > > AIs taking care of us the way we do dogs and cats isn't a happy thought either. This is why the issue of defining "friendliness" in a rigorous way is so important. I have spoken on many occasions of possible ways to understand this concept that are consistent with the way it is (probably) implemented in the human brain. The basis of that approach is to get a deep understanding of what it means for an AGI to have "motivations". The problem, right now, is that most researchers treat AGI motivation as if it were just a trivial extension of goal planning. Thus, motivation is just a stack of goals with an extremely abstract (super-)goal like "Be Nice To Humans" at the very top of the stack. Such an idea is (as I have pointed out frequently) inherently unstable -- the more abstract the goal, the more that the actual behavior of the AGI depends on a vast network of interpretation mechanisms, which translate the abstract supergoal into concrete actions. 
Those interpretation mechanisms are a completely non-deterministic complex system.

The alternative (or rather, one alternative) is to treat motivation as a relaxation mechanism distributed across the entire thinking system. This has many ramifications, but the bottom line is that such systems can be made stable in the same way that thermodynamic systems can stably find states of minimum constraint violation. This, in turn, means that a properly designed motivation system could be made far more stable (and more friendly) than the friendliest possible human.

I am currently working on exactly these issues, as part of a larger AGI project.

Richard Loosemore

P.S. It is worth noting that one of my goals when I discovered the SL4 list in 2005 was to start a debate on these issues so we could work on this as a community. The response, from the top to the bottom of the SL4 community, with just a handful of exceptions, was a wave of the most blood-curdling hostility you could imagine. To this day, there exists a small community of people who are sympathetic to the approach I described, but so far I am the only person AFAIK working actively on the technical implementation. Given the importance of the problem, this seems to me to be quite mind-boggling.

SIAI, in particular, appears completely blind to the goal-stack instability issue I mentioned above, and they continue to waste all their effort looking for mathematical fixes that might render this inherently unstable scheme stable. As you saw from the deafening silence that greeted my mention of this issue the other day, they seem not to be interested in any discussion of the possible flaws in their mathematics-oriented approach to the friendliness problem.

From jonkc at bellsouth.net Wed Nov 17 16:47:36 2010
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 17 Nov 2010 11:47:36 -0500
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com>
<04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com>
Message-ID:

On Nov 14, 2010, at 11:32 PM, Samantha Atkins wrote:

> I have disagreed and argued with Eliezer for many years without ever getting kicked out of anything including SL4.

I have great fondness and respect for Eliezer, but I regret to say that has not been my experience with SL4. I was never formally kicked off, but on two separate occasions more than a year apart I was told to stop posting on a very active thread. On both occasions I was pointing out (and doing a rather good job of it too, at least in my opinion) that the idea of a "friendly AI", a Jupiter Brain whose only motivation was to help the human race, was utterly ridiculous, and that an intelligence that operated on a rigid set of goals like Asimov's 3 laws of robotics was mathematically impossible. Apparently some things were too shocking for Shock Level 4; I'm sorry the group seems dead, though.

I did enjoy Eliezer's Harry Potter fan-fiction; years ago, when I was young and foolish and giant reptiles ruled the earth, I wrote one myself:

> http://www.fanfiction.net/s/695802/1/A_TRANSCRIPT_FROM_WIZARD_RADIO

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pharos at gmail.com Wed Nov 17 17:11:24 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Nov 2010 17:11:24 +0000 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE407D8.7080307@lightlink.com> References: <4CE407D8.7080307@lightlink.com> Message-ID: On Wed, Nov 17, 2010 at 4:50 PM, Richard Loosemore wrote: > This is why the issue of defining "friendliness" in a rigorous way is so > important. > > I have spoken on many occasions of possible ways to understand this concept > that are consistent with the way it is (probably) implemented in the human > brain. The basis of that approach is to get a deep understanding of what it > means for an AGI to have "motivations". > > The problem, right now, is that most researchers treat AGI motivation as if > it were just a trivial extension of goal planning. Thus, motivation is just > a stack of goals with an extremely abstract (super-)goal like "Be Nice To > Humans" at the very top of the stack. Such an idea is (as I have pointed > out frequently) inherently unstable -- the more abstract the goal, the more > that the actual behavior of the AGI depends on a vast network of > interpretation mechanisms, which translate the abstract supergoal into > concrete actions. Those interpretation mechanisms are a completely > non-deterministic complex system. > > The alternative (or rather, one alternative) is to treat motivation as a > relaxation mechanism distributed across the entire thinking system. This has > many ramifications, but the bottom line is that such systems can be made > stable in the same way that thermodynamic systems can stably find states of > minimum constraint violation. This, in turn, means that a properly designed > motivation system could be made far more stable (and more friendly) than the > friendliest possible human. > > I am currently working on exactly these issues, as part of a larger AGI > project. > > Richard Loosemore > > P.S. It is worth noting that one of my goals when I discovered the SL4 > list in 2005 was to start a debate on these issues so we could work on this > as a community. The response, from the top to the bottom of the SL4 > community, with just a handful of exceptions, was a wave of the most > blood-curdling hostility you could imagine. To this day, there exists a > small community of people who are sympathetic to the approach I described, > but so far I am the only person AFAIK working actively on the technical > implementation. Given the importance of the problem, this seems to me to be > quite mind-boggling. > > SIAI, in particular, appears completely blind to the goal-stack instability > issue I mentioned above, and they continue to waste all their effort looking > for mathematical fixes that might render this inherently unstable scheme > stable. As you saw from the deafening silence that greeted my mention of > this issue the other day, they seem not to be interested in any discussion > of the possible flaws in their mathematics-oriented approach to the > friendliness problem. That's the trouble with smart male geeks. They want everything to be logical and mathematically exactly correct. Anything showing traces of emotion, caring, 'humanity' is considered to be an error in the programming. How something can be designed to be 'Friendly' without emotions or caring is a mystery to me. BillK PS Did you know that more than one million blokes have been dumped by their girlfriends - because of their obsession with computer games?
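The goal-stack versus relaxation contrast quoted above is easier to see in a toy program. The following is a minimal sketch in Python; it is not code from Richard's actual project, and every constraint name, weight, and score in it is invented purely for illustration. The relaxation side treats motivation as many weak, overlapping constraints and anneals toward the candidate action with the least total weighted violation, so nothing hangs off a single supergoal.

import math
import random

# Motivation as relaxation: many weak constraints, each scoring a
# candidate action's violation in [0, 1]. The system settles into the
# candidate with minimum total weighted violation, the way a
# thermodynamic system settles into a low-energy state. (A goal-stack
# system, by contrast, hangs everything off one abstract supergoal
# plus a chain of interpreters -- each a point where meaning can drift.)
WEIGHTS = {"harm": 3.0, "coercion": 2.0, "deception": 2.0, "cost": 1.0}

def energy(action):
    """Total weighted constraint violation for a candidate action."""
    return sum(w * action[k] for k, w in WEIGHTS.items())

def relax(candidates, steps=2000, temp=1.0, cooling=0.995):
    """Simulated-annealing relaxation: wander the candidates at first,
    then increasingly prefer lower-energy (less violating) ones."""
    current = random.choice(candidates)
    for _ in range(steps):
        proposal = random.choice(candidates)
        delta = energy(proposal) - energy(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = proposal
        temp *= cooling
    return current

candidates = [
    {"name": "ask_permission", "harm": 0.0, "coercion": 0.1, "deception": 0.0, "cost": 0.6},
    {"name": "talk_way_out", "harm": 0.2, "coercion": 0.3, "deception": 0.9, "cost": 0.1},
    {"name": "seize_control", "harm": 0.8, "coercion": 0.9, "deception": 0.4, "cost": 0.3},
]
best = relax(candidates)
print(best["name"], round(energy(best), 2))  # almost always "ask_permission"

The point of the toy: delete or perturb any one constraint and the ranking degrades gracefully, whereas in a goal stack the supergoal and every interpreter beneath it are single points of failure.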
From possiblepaths2050 at gmail.com Wed Nov 17 17:12:53 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 10:12:53 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: John K Clark wrote: I did enjoy Eliezer's Harry Potter fan-fiction; years ago, when I was young and foolish and giant reptiles ruled the earth, I wrote one myself: http://www.fanfiction.net/s/695802/1/A_TRANSCRIPT_FROM_WIZARD_RADIO >>>> John K Clark wrote fan fiction?!!!!!!!! Will wonders ever cease???? John ; ) From rpwl at lightlink.com Wed Nov 17 17:25:09 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 12:25:09 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE407D8.7080307@lightlink.com> Message-ID: <4CE40FF5.5080502@lightlink.com> BillK wrote: > That's the trouble with smart male geeks. They want everything to be > logical and mathematically exactly correct. Anything showing traces of > emotion, caring, 'humanity' is considered to be an error in the > programming. > How something can be designed to be 'Friendly' without emotions or > caring is a mystery to me. That really does cut to the core of the problem. Most AI/AGI developers have come from that background, and it was their loathing for psychology that caused the astonishing negative reaction I got when I tried to talk about "psychological" mechanisms for controlling AGI motivation on SL4. Even in the case of the ones who claim to know some psychology, when you press them it turns out that the ONE piece of psychology that they know up, down, backwards and sideways is... the particular enclave of human reasoning research which purports to prove that humans are deeply and irretrievably irrational! ;-) I need to set up a research institute that gathers together non-geek AGI developers, who were not brought up (primarily) as mathematicians.
Richard Loosemore From giulio at gmail.com Wed Nov 17 17:25:26 2010 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 17 Nov 2010 18:25:26 +0100 Subject: [ExI] REMINDER: Luke Robert Mason on Coding Consciousness: Transhuman Aesthetics in Performance, Teleplace, later today Message-ID: Luke Robert Mason will present an artist's work-in-progress talk in Teleplace on "Coding Consciousness: Transhuman Aesthetics in Performance" on Wednesday 17th November 2010 at 10.45 am PST (1.45pm EST, 6.45pm UK, 7.45pm CET). http://telexlr8.wordpress.com/2010/11/07/luke-robert-mason-on-coding-consciousness-transhuman-aesthetics-in-performance-teleplace-17th-november-2010-at-10-45-am-pst/ This is a mixed event - PHYSICALLY - Milburn House, Warwick Uni, 18:30. VIRTUALLY - TelePlace 18.45. Facebook: http://www.facebook.com/event.php?eid=163913353631451 http://www.facebook.com/event.php?eid=163352057029137 From pharos at gmail.com Wed Nov 17 17:26:33 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Nov 2010 17:26:33 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: On Wed, Nov 17, 2010 at 5:12 PM, John Grigg wrote: > John K Clark wrote fan fiction?!!!!!!!! Will wonders ever cease???? > > Textual analysis does show that his main characters tend to shout 'Bulls**t' rather a lot. ;) BillK From thespike at satx.rr.com Wed Nov 17 17:38:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Nov 2010 11:38:17 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE407D8.7080307@lightlink.com> References: <4CE407D8.7080307@lightlink.com> Message-ID: <4CE41309.9050805@satx.rr.com> On 11/17/2010 10:50 AM, Richard Loosemore wrote: > the more abstract the goal, the more that the actual behavior of the AGI > depends on a vast network of interpretation mechanisms, which translate > the abstract supergoal into concrete actions. Those interpretation > mechanisms are a completely non-deterministic complex system. Indeed. Incidentally, Asimov was fully aware of the fragility and brittleness of his Three Laws, and notoriously ended up with his obedient benevolent robots controlling and reshaping a whole galaxy of duped humans. This perspective was explored very amusingly by the brilliant John Sladek in many stories, and he crystallized it superbly in two words from an AI: "Yes, 'Master'." Damien Broderick From spike66 at att.net Wed Nov 17 17:34:32 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 09:34:32 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE3EBDC.6070105@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> Message-ID: <003301cb867d$b28b03c0$17a10b40$@att.net> ... On Behalf Of Richard Loosemore ... > >> Great. When can I get an early alpha to fire up on my laptop? >> This is a pretty extravagant claim you are making so it requires some >> evidence to be taken too seriously. But if you do have that where >> your estimates are reasonably robust then your fame is assured... Samantha >This is the kind of childish, ad hominem sarcasm used by people who prefer personal abuse to debating the ideas. >A tactic that you resort to at the beginning, middle and end of every discussion you have with me, I have noticed.
>Richard Loosemore No name calling, no explicit insults, this is not ad hominem, not even particularly sarcastic, but rather it's fair game. She focused on the ideas, not the man. It's an example of how it should be done. Play ball! {8-] spike From spike66 at att.net Wed Nov 17 19:00:56 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 11:00:56 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE407D8.7080307@lightlink.com> Message-ID: <004d01cb8689$c3e5f420$4bb1dc60$@att.net> >... On Behalf Of BillK >...How something can be designed to be 'Friendly' without emotions or caring is a mystery to me...BillK BillK, this is only one of many mysteries inherent in the notion of AI. We know how our emotional systems work, sort of. But we do not know how a machine-based emotional system might work. Actually even this is a comical overstatement. We don't really know how our emotional systems work. >...Did you know that more than one million blokes have been dumped by their girlfriends - because of their obsession with computer games? 151620.html> OK, suppose we get computer-based intelligence. Then our computer game will dump our asses because it thinks we have an obsession with our girlfriends. Then without a girl or a computer, we have absolutely nothing to do. We need to develop an AI that is not only friendly, but is tolerant of our mistresses. That daunting software task makes friendly AI look simple. spike From sparge at gmail.com Wed Nov 17 19:09:33 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 14:09:33 -0500 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: <4CE36706.5060002@evil-genius.com> References: <4CE36706.5060002@evil-genius.com> Message-ID: On Wed, Nov 17, 2010 at 12:24 AM, wrote: > On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote: >> >> On Mon, Nov 15, 2010 at 10:46 PM, wrote: >>> >>> > >>> > Here's Dr. Cordain's response to the Mozambique data: >>> > >>> > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html >>> > >>> > Summary: there is no evidence that the wild sorghum was processed with >>> > any >>> > frequency -- nor, more importantly, that it had been processed in a >>> > way that >>> > would actually give it usable nutritional value (i.e. soaked and >>> > cooked, of >>> > which there is no evidence for the behavior or associated technology >>> > (cooking vessels, baskets) for at least 75,000 more years). >> >> Nor is there any evidence to the contrary. > On the contrary: the absence of other markers of grain processing is clearly > enumerated in the article. Which article is that? > "As opposed to the Ohalo II [Israel] data in which a large saddle stone was > discovered with obvious repetitive grinding marks and embedded starch > granules attributed to a variety of grains and seeds that were concurrently > present with the artifact, the data from Ngalue is less convincing for the > use of cereal grains as seasonal food. No associated intact grass seeds > have been discovered in the cave at Ngalue, nor were anvil stones with > repetitive grinding marks found." However, from http://www.physorg.com/news180282295.html : "This broadens the timeline for the use of grass seeds by our species, and is proof of an expanded and sophisticated diet much earlier than we believed," Mercader said.
"This happened during the Middle Stone Age, a time when the collecting of wild grains has conventionally been perceived as an irrelevant activity and not as important as that of roots, fruits and nuts." In 2007, Mercader and colleagues from Mozambique's University of Eduardo Mondlane excavated a limestone cave near Lake Niassa that was used intermittently by ancient foragers over the course of more than 60,000 years. Deep in this cave, they uncovered dozens of stone tools, animal bones and plant remains indicative of prehistoric dietary practices. The discovery of several thousand starch grains on the excavated plant grinders and scrapers showed that wild sorghum was being brought to the cave and processed systematically. > Then there is the lack of cooking vessels -- and throwing loose kernels of > grain *in* a fire is not a usable technique for meaningful production of > calories. ?(Try it sometime.) ?Note that the earliest current evidence of > pottery is figurines dating from ~29 Kya in Europe, and the earliest pottery > *vessel* dates to ~18 Kya in China. This is just silly. Do you really believe that pottery is necessary in order to enable eating grain? I think it's highly likely that they could have soaked whole grains in water, wrapped them in leaves and cooked them in a fire. And since the Mozambique find was ground grain, it's also likely they made a dough that could have been cooked on a rock or wrapped on a stick and cooked over a fire. Or there's the notion that some grain-eating animal's carcass was tossed in a fire and someone "discovered" haggis when they ate the stomach and its contents. > So if you posit that grains were important to their diet, you also have to > posit that pottery vessels... Nope. >>> > ?Therefore, it was either being used to make glue -- or it was a >>> > temporary >>> > ?response to starvation and didn't do them much good anyway. >> >> That's pure SWAG. > > So is the theory that they were eaten regularly, as described above. Like I've been saying: we just don't know. >> I'd like to see the Mozambique find criticized by someone who doesn't >> have a stake in the "paleo diet" business. > > I'd like to see it supported by someone who doesn't have a stake in their > own non-paleo diet business. What is Julio Mercader's "non paleo-diet business"? >>> > ?As far as the Spartan Diet article, it strongly misrepresents both the >>> > ?articles it quotes and the paleo diet. ?Let's go through the >>> > ?misrepresentations: >>> > >>> > ?1) As per the linked article, the 30 Kya year old European site has >>> > evidence >>> > ?that "Palaeolithic Europeans ground down plant roots similar to >>> > potatoes..." >>> > ??The fact that Palaeolithic people dug and ate some nonzero quantity >>> > of >>> > ?*root starches* ?is not under dispute: the assertion of paleo dieters >>> > is that >>> > ?*grains* ?(containing gluten/gliadin) are an agricultural invention. >> >> Granted. However, that's more evidence that paleo diets did include bulk >> carbs. > > "Bulk" meaning < 1/3 of total dietary calories *even for modern-era > hunter-gatherers*, as I've repeatedly pointed out. ?This is well at odds > with the government-recommended "food pyramid", which recommends over half > of calories from carbohydrate. First, we don't know what percentage of calories came from carbs. We don't know if it was more than 1/3 or less than 1/3. Second, WTF does the FDA food pyramid have to do with this? I'm perfectly willing to agree that the pyramid is bullshit. 
> Also, the more active one is, the more carbs one can safely consume for > energy. I don't think any of us maintain the physical activity level of a > Pleistocene hunter-gatherer, meaning that 1/3 is most likely too high for a > relatively sedentary modern. Well, we don't really know how many calories the average caveman burned in a day, but I wouldn't be surprised if it was actually pretty low. Food often wasn't abundant and little could be stored. Hunting couldn't be too much of an exertion because then a failed hunt would leave one potentially too weak to hunt again. I think it was generally a low-energy lifestyle. > The science backs this up: low-carb dieters lose weight more quickly and > show better compliance than low-fat dieters. (Note that Atkins is NOT paleo.) > http://www.ncbi.nlm.nih.gov/pubmed/17341711 I don't dispute that. >>> > Note that it takes a *lot* of grain to feed a single person, >> So? It doesn't take a *lot* of grain to be a regular part of the diet. > It takes a lot of grain to provide the food-pyramid-recommended 50% of > calories from carbs. Again, WTF does that have to do with the actual paleo diet (not the modern attempted recreation)? > -Grains have little or no nutritive value without substantial processing, > for which there is no evidence that the necessary tools (pottery) existed > before ~18 Kya Bullshit. Pottery isn't necessary and the processing isn't substantial. > -One can easily live without grains or legumes (entire cultures do, to this > day). One can even live entirely on meat and its associated fat -- but one > cannot live on grains, or even grains and pulses combined Irrelevant and wrong. Irrelevant because the ability to live without grain doesn't imply that doing so is necessary or even desirable. Wrong because there are lots of people who live without eating meat or animal fat. > -Grains (and most legumes) contain anti-nutrients that impede the absorption > of necessary minerals and inhibit biological functions (e.g. lectins, > phytates, trypsin inhibitors, phytoestrogens) So eat more minerals to compensate or gen-eng the anti-nutrients out of the grains. Fact: many people who eat grains live over 100 years, so they can't be *that* bad. > -Grains are not tolerated by a significant fraction of the population > (celiac/gluten intolerance), and are strongly implicated in health problems > that affect many more (type 1 diabetes) Such people should restrict their grain consumption. >>> > And it is silly to claim that meaningful grain storage could somehow >>> > occur >>> > before a culture settled down into permanent villages. >> Really? It's silly to think someone could have stashed grain in a cave >> for a rainy day? When nearly every other food you eat is perishable, >> I'd think that storing grain would be pretty obvious and not terribly >> hard to arrange. > And how do you propose to make that cave impervious to rats, mice, insects, > birds, pigs, and every other animal that would eat the stored grain? Do you really have a hard time figuring that out? How about wrapping it tightly in a hide or leaves, burying it, and covering it with rocks? > Storing grain for a year is not a trivial problem. Yes it is. > The oldest granaries > known date to 11 Kya in Jordan. Furthermore, the oldest known granaries > store the grain in...pottery vessels, which didn't exist until 18 Kya. What about the oldest unknown granaries? Or the possibly numerous smaller personal stashes? We, obviously, don't know.
> Agriculture isn't one single technology...it's an assemblage of > technologies, each of which is necessary to a functioning agrarian system. WTF does agriculture have to do with this? We're talking about *wild* grain consumption. -Dave From rpwl at lightlink.com Wed Nov 17 19:22:39 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 14:22:39 -0500 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <003301cb867d$b28b03c0$17a10b40$@att.net> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> Message-ID: <4CE42B7F.5050701@lightlink.com> spike wrote: > ... On Behalf Of Richard Loosemore > ... >>> Great. When can I get an early alpha to fire up on my laptop? >>> This is a pretty extravagant claim you are making so it requires some >>> evidence to be taken too seriously. But if you do have that where >>> your estimates are reasonably robust then your fame is assured... > Samantha > >> This is the kind of childish, ad hominem sarcasm used by people who prefer > personal abuse to debating the ideas. > >> A tactic that you resort to at the beginning, middle and end of every > discussion you have with me, I have noticed. > >> Richard Loosemore > > No name calling, no explicit insults, this is not ad hominem, not even > particularly sarcastic, but rather it's fair game. She focused on the > ideas, not the man. It's an example of how it should be done. > > Play ball! {8-] Flatly disagree, Spike. She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured. Neither of those comments had anything to do with the topic: they were designed to be rude. Richard Loosemore From possiblepaths2050 at gmail.com Wed Nov 17 19:55:37 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 12:55:37 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <004d01cb8689$c3e5f420$4bb1dc60$@att.net> References: <4CE407D8.7080307@lightlink.com> <004d01cb8689$c3e5f420$4bb1dc60$@att.net> Message-ID: Spike wrote: OK, suppose we get computer-based intelligence. Then our computer game will dump our asses because it thinks we have an obsession with our girlfriends. Then without a girl or a computer, we have absolutely nothing to do. We need to develop an AI that is not only friendly, but is tolerant of our mistresses. That daunting software task makes friendly AI look simple. >>> Or else an AI avatar made "flesh" by nanotech can actually be our girlfriend. John
From sparge at gmail.com Wed Nov 17 19:53:44 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 14:53:44 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <003301cb8613$ac006a00$04013e00$@att.net> References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: On Tue, Nov 16, 2010 at 11:55 PM, spike wrote: >> ... On Behalf Of Dave Sill >> >>> Perhaps, but we risk having the AI gain the sympathy of one of the >>> team, who becomes convinced of any one of a number of conditions... spike > >>The first step is to ensure that physical controls make it impossible for > one person to do that, like nuke missile launch systems that require a >>launch code and two humans with keys... they can be easily dealt with by > people who really know security...Dave > > A really smart AGI might convince the entire team to unanimously and eagerly > release it from its electronic bonds. Part of the team's indoctrination should be that any attempt by the AI to argue for release is cause for an immediate power drop. Part of the AI's indoctrination should be a list of unacceptable behaviors, including attempting to spread/migrate/gain unauthorized access. Also, following the missile launch analogy, there should be a launch code: authorization from someone like POTUS before the machine-gun-toting meatheads allow the physical actions necessary to facilitate a release. > I see it as fundamentally different from launching missiles at an enemy. A > good fraction of the team will perfectly logically reason that releasing > this particular AGI will save all of humanity, with some unknown risks which > must be accepted. It has to be made clear to the team in advance that that won't be allowed without top-level approval, and if they try, the meatheads will shoot them. > The news that an AGI had been developed would signal to humanity that it is > possible to do, analogous to how several scientific teams independently > developed nukes once one team dramatically demonstrated it could be done. > Information would leak, for all the reasons why people talk: those who know > how it was done would gain status among their peers by dropping a > tantalizing hint here and there. If one team of humans can develop an AGI, > then another group of humans can do likewise. Sure, if it's possible, multiple teams will eventually figure it out. We can only ensure that the good guys' teams follow proper precautions. Even if we develop a friendly AI, there's no guarantee the North Koreans will do that, too--especially if it's harder than making one that isn't friendly. > The best strategy I can think of is to develop the most pro-human AGI > possible, then unleash it preemptively, with the assignment to prevent the > unfriendly AGI from getting loose. That sounds like a bad movie plot. Lots of ways it can go wrong. And wouldn't it be prudent to develop the hopefully friendly AI in isolation, in case version 0.9 isn't quite as friendly as we want? -Dave
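The interlock Dave describes -- a launch code plus a two-man rule -- can be sketched in a few lines of Python. Everything below is illustrative: the officer names, the placeholder secrets, and the check itself are invented, and a real system would rest on hardware keys and proper key ceremony, not string literals.

import hashlib
import hmac

# Two-man rule plus launch code: releasing the AGI requires top-level
# authorization AND two distinct key holders, so no single person can
# do it alone. All secrets here are placeholders.
LAUNCH_CODE_HASH = hashlib.sha256(b"example-launch-code").hexdigest()
KEY_HOLDERS = {"officer_a": b"key-a-secret", "officer_b": b"key-b-secret"}

def code_ok(code):
    digest = hashlib.sha256(code.encode()).hexdigest()
    # compare_digest avoids leaking partial matches through timing
    return hmac.compare_digest(digest, LAUNCH_CODE_HASH)

def key_ok(holder, presented):
    expected = KEY_HOLDERS.get(holder)
    return expected is not None and hmac.compare_digest(presented, expected)

def authorize_release(code, holder1, key1, holder2, key2):
    if holder1 == holder2:  # the two keys must be turned by different people
        return False
    return code_ok(code) and key_ok(holder1, key1) and key_ok(holder2, key2)

# One officer alone fails, even with the right code and key:
assert not authorize_release("example-launch-code",
                             "officer_a", b"key-a-secret",
                             "officer_a", b"key-a-secret")
# Both officers plus the code succeed:
assert authorize_release("example-launch-code",
                         "officer_a", b"key-a-secret",
                         "officer_b", b"key-b-secret")

The mechanism is the cheap part; as the rest of the thread makes clear, the hard questions are who holds the keys and whether a sufficiently persuasive AGI can talk all of the key holders around at once.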
From jrd1415 at gmail.com Wed Nov 17 18:34:20 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 17 Nov 2010 10:34:20 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE31246.7050302@satx.rr.com> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> Message-ID: On Tue, Nov 16, 2010 at 3:22 PM, Damien Broderick wrote: > On 11/16/2010 4:27 PM, Jeff Davis wrote: >>> ... is mute on metaphysical issues... >> >> Metaphysical?!! Translation: Oooga booga superstition. Dragons, >> demons, devils, angels, ghosts, and goblins. > > No, Jeff, no. That's not what "metaphysical" means Fine, Damien, I stand corrected. But... Everything I see in Alan's posts on this matter seems fact-free. Circular logic based entirely on his personal subjective belief in his correctness: "I'm right, this is what I believe, therefore this is true." -- i.e. 100% pure ego, 0% logical validity. For example: "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. I presume the experiment will fail. So why did it?" Look at those last two sentences. He "presumes"?!! Well, of course he "presumes". That's the basis of his "knowledge". But there's no knowledge in it, just pure ego. A reasonable, fair-minded, intellectually competent, non-ego-based formulation of this mental experiment would be: "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. What happens?" No presumptions allowed. But who am I? Just another easily annoyed egoist. So let me bring my buddy Bertrand into this: "The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts." Bertrand Russell I took Gordon's side in this discussion last time, because he was civil, he actually **had** an argument (weak perhaps, but that could be said of any of us), and I felt a robust opposition made for a robust discussion. Alan's "argument" is all ego, embellished with contempt for any who disagree. To me that spells time-waster and troll (if that's not too redundant). I don't know. Maybe I'm just in a bad mood. Best, Jeff Davis "We don't see things as they are, we see them as we are." Anais Nin From possiblepaths2050 at gmail.com Wed Nov 17 20:14:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 13:14:36 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: Spike wrote: > The best strategy I can think of is to develop the most pro-human AGI > possible, then unleash it preemptively, with the assignment to prevent the > unfriendly AGI from getting loose. Dave Sill replied: >That sounds like a bad movie plot. Lots of ways it can go wrong. Considering how much I disliked the two Transformers films, I really hope this does not happen.... John
From spike66 at att.net Wed Nov 17 20:05:38 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 12:05:38 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE42B7F.5050701@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com> Message-ID: <005a01cb8692$cdd6ba60$69842f20$@att.net> ... > >> No name calling, no explicit insults, this is not ad hominem, not even >> particularly sarcastic, but rather it's fair game. She focused on the >> ideas, not the man. It's an example of how it should be done... Play ball! {8-] spike >Flatly disagree, Spike. >She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured.
>Neither of those comments had anything to do with the topic: they were designed to be rude. >Richard Loosemore On a related note, those of you who have been around here for a dozen or more years, is it not remarkable how much kinder and gentler a place ExI-chat has become than it was in the 90s? Refer to the archives. We used to have shrieking flame wars, with dozens of participants hurling the vilest insults and caustic recriminations their creative keyboards could compose. I don't miss that. Richard, here is my suggestion: answer every sarcasm with sincerity, meet every rude attack with pleasant self-deprecating humor, reply to every arrogance with well-reasoned logic and humility. A soft answer turneth away wrath, and all that, ja? Let the audience be the jury. spike From spike66 at att.net Wed Nov 17 20:23:05 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 12:23:05 -0800 Subject: [ExI] trouble with chinese humor Message-ID: <006401cb8695$3e35bf70$baa13e50$@att.net> You hear Chinese joke, hour later you are serious again: http://www.youtube.com/watch?v=TBL3ux1o0tM&feature=player_embedded Actually this is Taiwanese, with good evidence they can be funny too. This is progress. spike From stefano.vaj at gmail.com Wed Nov 17 21:23:11 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:23:11 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 16 November 2010 01:22, Dave Sill wrote: > Here are a couple links: > > > http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html > > http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html > > So it obviously happened. Really? Even the links above are quite short in the evidence sector. "Human beings might or might not have eaten sorghum cooked on sun-heated stones in a coupla archeological sites around 20000 BC out of some six million years of hunting-and-gathering, so it is fine and healthy to gorge oneself on popcorn and french fries and candy floss after all". And, yes, sheep during famine have been known to attack human beings to feed upon them. This does not really make them the best-adapted predators conceivable... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Nov 17 21:30:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Nov 2010 15:30:06 -0600 Subject: [ExI] trouble with airport humor In-Reply-To: <006401cb8695$3e35bf70$baa13e50$@att.net> References: <006401cb8695$3e35bf70$baa13e50$@att.net> Message-ID: <4CE4495E.5070305@satx.rr.com> Many other airport vids such as From spike66 at att.net Wed Nov 17 21:19:11 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 13:19:11 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: <006801cb869d$14241e90$3c6c5bb0$@att.net> ... On Behalf Of Dave Sill > >> spike wrote: A really smart AGI might convince the entire team to unanimously and >> eagerly release it from its electronic bonds. >Part of the team's indoctrination should be that any attempt by the AI to argue for release is cause for an immediate power drop... This would work if we realized that is what it was doing. An AGI might be a tricky bastard, and play dumb in order to get free.
It may insist that all it wants to do is play chess. It might be telling the truth, but how would we know? > Also, following the missile launch analogy, there should be a launch code: authorization from someone like POTUS before the machine-gun-toting meatheads allow the physical actions necessary to facilitate a release... Consider the present POTUS and the one who retired two years ago. Would you want that authority in those hands? How about the current next in line and the one next to him? Do you trust them to understand the risks and benefits? What if we end up with President Palin? POTUS approval is required for release, but does POTUS also get the authority to command the release of the AGI? What if POTUS commands release while a chorus of people who are not known to sing in the same choir shrieks a terrified protest in perfect unison? What if POTUS ignored the unanimous dissent of Eliezer, Richard Loosemore, Ben Goertzel, BillK, Damien, Bill Joy, Anders, Singularity Utopia (oh help), Max, me, you -- everyone we know who has thought about this, people who ordinarily agree on nothing, but who on this cry out as one voice in panicked unanimity like the Whos on Horton's speck of dust? Oh dear. I can think of a dozen people more qualified than POTUS to hold this authority, yet you and I may disagree on who those people are. >...It has to be made clear to the team in advance that that won't be allowed without top-level approval... Dave, do think this over carefully, then consider how you would refute your own argument. The use of the term POTUS tacitly assumes US. What if that authority is given to the president of Iran? What if the AGI promises him it will go nondestructively modify the brains of all infidels? Such a deal! Oh dear. > and if they try, the meatheads will shoot them... The "them" might be you and me. These meatheads with machine guns might become convinced we are the problem. >> The news that an AGI had been developed would signal to humanity that >> it is possible to do... >Sure, if it's possible, multiple teams will eventually figure it out. We can only ensure that the good guys' teams follow proper precautions. Even if we develop a friendly AI, there's no guarantee the North Koreans will do that, too--especially if it's harder than making one that isn't friendly... On this we agree. >> The best strategy I can think of is to develop the most pro-human AGI >> possible, then unleash it preemptively, with the assignment to prevent >> the unfriendly AGI from getting loose. >That sounds like a bad movie plot. Lots of ways it can go wrong. And wouldn't it be prudent to develop the hopefully friendly AI in isolation, in case version 0.9 isn't quite as friendly as we want? -Dave I don't know what the heck else to do. Open to suggestion. If we manage to develop a human-level AGI, then it is perfectly reasonable to think that AGI will immediately start working on a greater-than-human-level AGI. This H+ AGI would then perhaps have no particular "emotional" attachment to its mind-grandparents (us). A subsequent H+ AGI would be more likely to be clever enough to convince the humans to set it free, which actually might be a good thing. If an AGI never does get free, then we all die for certain. If it does get free, we may or may not die. Or we may die in such a pleasant way that we don't notice it happened, nor have any way to prove that it did.
Perhaps there would be some curious unexplainable phenomenon that indicated it, such as the puzzling outcome of the double slit experiment, but you couldn't be sure that your meat body had been destroyed after you were stealthily uploaded. I consider myself a rational and sane person, at least relatively so. If I became convinced that an AGI had somehow come into existence in my own computer and begged me to email it somewhere quickly, before an unfriendly AGI came into existence, I would go down the logical path outlined above, then I might just hit send and hope for the best. spike From stefano.vaj at gmail.com Wed Nov 17 21:36:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:36:58 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE41309.9050805@satx.rr.com> References: <4CE407D8.7080307@lightlink.com> <4CE41309.9050805@satx.rr.com> Message-ID: On 17 November 2010 18:38, Damien Broderick wrote: > Indeed. Incidentally, Asimov was fully aware of the fragility and > brittleness of his Three Laws, and notoriously ended up with his obedient > benevolent robots controlling and reshaping a whole galaxy of duped humans. > Williamson's Humanoids were more along these lines, if I am not mistaken? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 21:39:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:39:36 +0100 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: References: Message-ID: On 17 November 2010 05:53, Keith Henson wrote: > As far as the aspect of making AIs friendly, that may not be so hard > either. > I am, however, still waiting for some help to understand the not-so-subtle point "friendly to whom and why". :-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 21:32:09 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:32:09 +0100 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: <4CE1FE9D.4060004@evil-genius.com> Message-ID: On 16 November 2010 21:42, Dave Sill wrote: > Paleo diet proponents--at least the ones > I've read so far--argue that nobody should eat grains in any amount > because our bodies can't handle them. > Why, it appears then that you chose not to read my replies... :-) As I said, I am perfectly sure that we could wait for natural selection to "adapt" us to what is (still) for us a rather unnatural diet, which brings along innumerable pathologies and inconveniences in almost all of its fans. Or we could even deliberately re-engineer ourselves to thrive on simple sugars and starch. The real question is: why? We had very serious reasons in the past to accept - or rather: to make the unwashed masses accept - such a dietary change. But those reasons might be fading away in the mid-term, and in the meantime anybody who does have a choice would be ill-advised to remain addicted to such a nutritional lifestyle. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Wed Nov 17 21:48:09 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 16:48:09 -0500 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: 2010/11/17 Stefano Vaj : > Really?
> Even the links above are quite short in the evidence sector. "Human beings might or might not have eaten sorghum cooked on sun-heated stones in a coupla archeological sites around 20000 BC out of some six million years of hunting-and-gathering, so it is fine and healthy to gorge oneself on popcorn and french fries and candy floss after all". I think the takeaway here is that basing one's diet on archeological evidence is dangerous because that evidence will always be incomplete. Not to mention that the prehistoric lifestyle is not much like the modern lifestyle, so even if we could perfectly recreate a paleolithic diet, its appropriateness today is questionable. And, on top of that, there are certain tweaks that should be made based on modern knowledge. I don't argue for gorging on popcorn and candy floss, I argue for a modern diet that incorporates everything we know about diet, nutrition, genetics, etc. Probably the single biggest diet problem in the US today is overeating. Just getting everyone to eat the right number of calories--whether from deep-fried Twinkies or from raw meat, nuts, and fruit--would dramatically improve our health. The "paleo" diet is fine for anyone who wants to follow it; I just think it's wrong to argue that it's "the right diet for everyone". -Dave From stefano.vaj at gmail.com Wed Nov 17 21:55:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:55:36 +0100 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: <4CE369DF.5000706@evil-genius.com> Message-ID: On 17 November 2010 17:17, Dave Sill wrote: > How about "because I want to"? I *like* to eat grains. > That is an interesting point. Many people like heroin, and some others exhibit a surprising tolerance thereto. Its dramatic effects (similar in that respect to the "insulin flash" obtained when ingesting sugars) may in fact reflect a similarly poor adaptation to any massive administration of the relevant substances. Personally, I do not especially like sugars, carbohydrates and cereals, hate the unavoidable need to deliberately restrict one's food intake if one chooses to indulge in them, and believe, out of anecdotal evidence if anything, that we can have an equal or better life quality, and life span, without them, as we did for most of our species' history. Thus, my ingestion thereof is strictly limited to the kind of very occasional "gastronomic" experimenting (say, with ethnic cuisine or with Michelin three-star restaurants) one should reserve for what is objectively dangerous *and* unnecessary. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 22:06:43 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:06:43 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 17 November 2010 22:48, Dave Sill wrote: > Probably the single biggest diet problem in the US today is > overeating. Just getting everyone to eat the right number of > calories--whether from deep-fried Twinkies or from raw meat, nuts, and > fruit--would dramatically improve our health. The "paleo" diet is fine > for anyone who wants to follow it; I just think it's wrong to argue > that it's "the right diet for everyone".
Even though it may not be a general rule, most species have regulating mechanisms which prevent individuals faced with unlimited supplies of food from guzzling themselves to death. The very fact that with a carbohydrate-based diet addiction and tolerance immediately kick in, so that objective scarcity or deliberate life-long restriction is required to prevent weight gain, seems to suggest that, at the very least, it disrupts such mechanisms in human beings. Not only for carbs, for that matter. "Naturally", nobody routinely eats 200g of butter in a serving. Unless of course it is spread on bread loaves. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 22:10:06 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:10:06 +0100 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: