From msd001 at gmail.com Tue Dec 1 02:52:15 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 30 Nov 2009 21:52:15 -0500 Subject: [ExI] Saving the data In-Reply-To: <4B143284.20502@aleph.se> References: <4B11BF60.2020707@rawbw.com> <4B143284.20502@aleph.se> Message-ID: <62c14240911301852n66d2b896o56740e00b61a9f65@mail.gmail.com> On Mon, Nov 30, 2009 at 4:00 PM, Anders Sandberg wrote: > This scheme would of course require funding, but also a very stable > long-term organisation that can move to new media. Perhaps allowing forking > would be one way (some cryptographic trickery here for the escrow) so that > even amateurs might be able to run their own version with all data smaller > than Y gigabytes. Sounds very much like something the Long Now Foundation > might have been considering for their 10,000 year library. > It also sounds like something googleBot is already doing. We joke about chats being "off the record" probably being stored and simply 'marked' as "off the record" until some to-be-determined statute of intellectual property runs out. Good news is that it will all be used for your digital resurrection; bad news is everyone you've ever said an unkind word to or about will also be digitally resurrected and have full access to your then completely transparent life. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Tue Dec 1 08:44:07 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 01 Dec 2009 00:44:07 -0800 Subject: [ExI] climategate again In-Reply-To: <455742C254BB4756AA5B09F094C25B33@spike> References: <4B11BF60.2020707@rawbw.com> <455742C254BB4756AA5B09F094C25B33@spike> Message-ID: <4B14D757.1060400@rawbw.com> Spike wrote > I conclude with a prediction that exactly nada > will come out of the upcoming Copenhagen meetings, > not one thing. Ah, Spike the eternal optimist. I conclude we'll have no such luck! Lee From alfio.puglisi at gmail.com Tue Dec 1 10:38:56 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 1 Dec 2009 11:38:56 +0100 Subject: [ExI] climategate again In-Reply-To: <455742C254BB4756AA5B09F094C25B33@spike> References: <4B11BF60.2020707@rawbw.com> <455742C254BB4756AA5B09F094C25B33@spike> Message-ID: <4902d9990912010238i57ee776cq7738ba29485e2816@mail.gmail.com> On Tue, Dec 1, 2009 at 12:52 AM, spike wrote: > > > ...On Behalf Of John Clark > Subject: Re: [ExI] climategate again > > > >Incredible, a century's worth of raw climate data destroyed! > >Ja, but I noticed something interesting. The media debate seems to be all > about recognizing the leaked info, but reinforcing that global warming is > still real. OK I buy that, I agree that the planet is warming. But the > critical part is this: we don't know by how much. > Folks, this is getting silly. Temperature series (including CRU, GISTEMP etc.) are mostly based on public data fom the GHCN (Global Historical Climate Network) and from the US Historical Climate Network. Link to ftp files: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2 ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/ More data sources: http://www.realclimate.org/index.php/data-sources/ That CRU decided to delete some of their files has little impact. They didn't measure those temperatures, they asked various national meteorological institutes for them, and obtained a copy. This data, should the need arise, is still with those institutes. But if you look at what's available on public ftp sites, it's just a drop in the sea. 
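For anyone who wants to look at those raw station records directly, here is a minimal sketch of fetching a file from the GHCN v2 FTP directory linked above. The host and path come from the post; the "v2.mean.Z" filename is an assumption, so list the directory first and adjust the name to whatever is actually there.

```python
# Minimal sketch: list the GHCN v2 FTP directory given above and fetch one file.
# Assumption: a compressed monthly-mean file such as "v2.mean.Z" is present;
# check the listing and change FILENAME if it is not.
from ftplib import FTP

HOST = "ftp.ncdc.noaa.gov"
PATH = "/pub/data/ghcn/v2"
FILENAME = "v2.mean.Z"   # hypothetical name; verify against the listing

ftp = FTP(HOST)
ftp.login()              # anonymous access
ftp.cwd(PATH)

print("\n".join(ftp.nlst()))   # see what files are actually available

with open(FILENAME, "wb") as out:
    ftp.retrbinary("RETR " + FILENAME, out.write)
ftp.quit()
print("saved", FILENAME)
```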
Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Tue Dec 1 11:03:42 2009 From: anders at aleph.se (Anders Sandberg) Date: Tue, 01 Dec 2009 12:03:42 +0100 Subject: [ExI] Saving the data Message-ID: <20091201110342.b76a8e43@secure.ericade.net> Mike Dougherty wrote: > > On Mon, Nov 30, 2009 at 4:00 PM, Anders Sandberg > > wrote: > > > > This scheme would of course require funding, but also a very > > stable long-term organisation that can move to new media. Perhaps > > allowing forking would be one way (some cryptographic trickery > > here for the escrow) so that even amateurs might be able to run > > their own version with all data smaller than Y gigabytes. Sounds > > very much like something the Long Now Foundation might have been > > considering for their 10,000 year library. > > > > > > It also sounds like something googleBot is already doing. We joke > > about chats being "off the record" probably being stored and simply > > 'marked' as "off the record" until some to-be-determined statute of > > intellectual property runs out. While Google is acting as the emergency backup system for the Internet, it is not really backing up the scientific data I'm talking about. Sure, you can leave your binary dataset in an open directory, but good luck searching for it (try figuring out what the search string for finding the raw data in Millikan's droplet experiment or the raw Voyager signals should be, assuming they were online). Even if it happens to be backed up by the Wayback Machine there are issues about curatorship, data integrity and lack of metadata. My point is that science (and civilization in general) lives not just on data, but also the metainformation that makes it understandable and the handling practices that allows us to trust it. Getting all scientific data to just have metadata is going to be tricky enough, but we should aim higher. Currently digital lab notebooks are being developed, but it will take a long while before they become really good. A friend working at the Karolinska Institute was told to cut and paste (paper) copies of all data into her lab notebook. She was running supercomputer simulations producing terabytes of data. She asked if it was OK to glue hard drives to it. However, I think Mike is right about the privacy of data. As scientists we cannot assume our data is "ours". Sometimes this can lead to problems, like the Gillberg affair in Sweden where a court-ordered FoI request collided with the promise of privacy to parents and children in a medical trial. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Tue Dec 1 11:11:14 2009 From: anders at aleph.se (Anders Sandberg) Date: Tue, 01 Dec 2009 12:11:14 +0100 Subject: [ExI] META: proper authentication Message-ID: <20091201111114.449f3387@secure.ericade.net> I have been having a problem ever since I returned to the list: the host reva.xtremeunix.com seems to like to bounce my emails, saying "Relaying denied. Proper authentication required." The reason is likely that I use a wide variety of computers and mailboxes, connecting to Oxford mail servers to send my email. No doubt that looks suspicious to the reva server. But exactly what is needed to get it to accept my email? ("Dear Extropians. I recently got a TransLife X655 body. 
But when I try to forward motor control to one of my forks some datafield firewalls my exoself, saying that it lacks proper credentials.") Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From pharos at gmail.com Tue Dec 1 12:01:28 2009 From: pharos at gmail.com (BillK) Date: Tue, 1 Dec 2009 12:01:28 +0000 Subject: [ExI] META: proper authentication In-Reply-To: <20091201111114.449f3387@secure.ericade.net> References: <20091201111114.449f3387@secure.ericade.net> Message-ID: On 12/1/09, Anders Sandberg wrote: > I have been having a problem ever since I returned to the list: the host > reva.xtremeunix.com seems to like to bounce my emails, saying > "Relaying denied. Proper authentication required." > > The reason is likely that I use a wide variety of computers and mailboxes, > connecting to Oxford mail servers to send my email. No doubt that looks > suspicious to the reva server. But exactly what is needed to get it to accept my > email? > > The problem isn't reva.xtrmeunix.com. - It's you. :) When you swap around networks, you have to change your outgoing mailserver to use the mailserver for the network that you're on. This is a common problem on laptops, for example. Connecting from office or home means using a different mailserver. How you do the change depends on what software you're using. Your tech department might be able to set up an automated solution for you, to save you tinkering with SMTP setup every time you move. Cheers, BillK From msd001 at gmail.com Tue Dec 1 12:52:50 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 1 Dec 2009 07:52:50 -0500 Subject: [ExI] Saving the data In-Reply-To: <20091201110342.b76a8e43@secure.ericade.net> References: <20091201110342.b76a8e43@secure.ericade.net> Message-ID: <62c14240912010452w7b528ac9i69b450a81d44a9b4@mail.gmail.com> On Tue, Dec 1, 2009 at 6:03 AM, Anders Sandberg wrote: > My point is that science (and civilization in general) lives not just on > data, but also the metainformation that makes it understandable and the > handling practices that allows us to trust it. Getting all scientific > data to just have metadata is going to be tricky enough, but we should > When viewed directly it does seem obvious that data (and knowledge that it can be ordered to produce) should be preserved. I wonder if the civilization in general you mentioned above fails to preserve data because our brains selectively discard so much detail of our daily life and by similar habit has little interest in storing volumes of information beyond a visceral capacity to manage. A box of old photos is a cherished keepsake, but the sum of the world's Flickr streams is meaningless to all but the most theoretical application of "value". Perhaps while you make a case for preservation of the store of universal knowledge, you could also push for better tools for mere mortals to manage the unwieldy beast. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Dec 1 13:40:57 2009 From: sparge at gmail.com (Dave Sill) Date: Tue, 1 Dec 2009 08:40:57 -0500 Subject: [ExI] META: proper authentication In-Reply-To: References: <20091201111114.449f3387@secure.ericade.net> Message-ID: On Tue, Dec 1, 2009 at 7:01 AM, BillK wrote: > > When you swap around networks, you have to change your outgoing > mailserver to use the mailserver for the network that you're on. This > is a common problem on laptops, for example. Connecting from office or > home means using a different mailserver. 
Or you can use gmail or some other web mail service. -Dave From eugen at leitl.org Tue Dec 1 13:49:28 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 1 Dec 2009 14:49:28 +0100 Subject: [ExI] META: proper authentication In-Reply-To: References: <20091201111114.449f3387@secure.ericade.net> Message-ID: <20091201134928.GI17686@leitl.org> On Tue, Dec 01, 2009 at 08:40:57AM -0500, Dave Sill wrote: > > When you swap around networks, you have to change your outgoing > > mailserver to use the mailserver for the network that you're on. This > > is a common problem on laptops, for example. Connecting from office or > > home means using a different mailserver. > > Or you can use gmail or some other web mail service. Careful about lock-in. With Gmail, you can use your own domain (if you can publish MX records) and use IMAP to synchronize in multiple locations. If things go awry, you can always revert your DNS change and move over to a different provider, losing only very little mail if any. In practice you should invest into your own (V)server, which you connect to via (Open)VPN, including ability to route your entire traffic through it. Your ISP is not your friend, and neither are free email providers. Caveat emptor, free can be too expensive. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From msd001 at gmail.com Tue Dec 1 14:04:35 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 1 Dec 2009 09:04:35 -0500 Subject: [ExI] META: proper authentication In-Reply-To: References: <20091201111114.449f3387@secure.ericade.net> Message-ID: <62c14240912010604x43efb08fr17a0f5b564bc68bb@mail.gmail.com> On Tue, Dec 1, 2009 at 7:01 AM, BillK wrote: > > The problem isn't reva.xtrmeunix.com. - It's you. :) > > When you swap around networks, you have to change your outgoing > mailserver to use the mailserver for the network that you're on. This > is a common problem on laptops, for example. Connecting from office or > home means using a different mailserver. > > How you do the change depends on what software you're using. > Your tech department might be able to set up an automated solution for > you, to save you tinkering with SMTP setup every time you move. > The tech department may already have provision for dealing with this situation. SMTP is normally available on port 25 of the email server. Many ISP either block or proxy this port. If your email administrator already added support for SMTP on another port (like 2525 or 8025, etc.) then your various ISP should ignore this non-standard-port traffic and allow you to send messages directly to your own mail server - where you will authenticate. There is also the possibility that reva.xtrmeunix.com doesn't accept mail from your server because your email administrator is lax on providing proper proof of its legitimacy and perhaps the admin at xtrmeunix.com is particularly aggressive. Sender policy framework (SPF) is a publically available text record in DNS that identifies what IP addresses are allowed to send mail for your domain. Due to a long adoption phase, most email is accepted when this information is unavailable - however it's been commonly adopted for a while now, so maybe xtrmeunix.com is rejecting your email because it thinks you are a spammer masquerading as yourself. -------------- next part -------------- An HTML attachment was scrubbed... 
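A quick way to see the SPF mechanism Mike describes is to pull a domain's policy straight out of DNS; it is just a TXT record beginning with "v=spf1". A minimal sketch, assuming the third-party dnspython package (2.x, for the resolve() call); gmail.com is only an example target:

```python
# Minimal sketch: fetch a domain's SPF policy, published as a DNS TXT record.
# Requires dnspython (pip install dnspython); uses the 2.x resolve() API.
import dns.resolver

def spf_record(domain):
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt          # e.g. "v=spf1 ip4:... include:... ~all"
    return None                 # no SPF policy published

print(spf_record("gmail.com"))  # example domain only
```

If the IP you are sending from is not covered by that record, a strict receiving host of the kind Mike mentions may well refuse the message.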
URL: From jonkc at bellsouth.net Tue Dec 1 14:16:32 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Dec 2009 09:16:32 -0500 Subject: [ExI] Hide The Decline - Climategate In-Reply-To: <455742C254BB4756AA5B09F094C25B33@spike> References: <4B11BF60.2020707@rawbw.com> <455742C254BB4756AA5B09F094C25B33@spike> Message-ID: <84545755-90CD-44D3-AFBD-1687CF9D40BD@bellsouth.net> http://www.youtube.com/watch?v=nEiLgbBGKVk -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Tue Dec 1 16:34:39 2009 From: max at maxmore.com (Max More) Date: Tue, 01 Dec 2009 10:34:39 -0600 Subject: [ExI] =?iso-8859-1?q?8_=91extinct=92_species_found_alive_and__kic?= =?iso-8859-1?q?king?= Message-ID: <200912011634.nB1GYmpm004997@andromeda.ziaspace.com> http://www.msnbc.msn.com/id/34152254/ns/technology_and_science-science/ Of course, as any sane, green person knows, millions upon millions of species are dying every day and humans will be gone by February 4th, 2010. Max From spike66 at att.net Tue Dec 1 16:41:03 2009 From: spike66 at att.net (spike) Date: Tue, 1 Dec 2009 08:41:03 -0800 Subject: [ExI] META: proper authentication In-Reply-To: <20091201111114.449f3387@secure.ericade.net> References: <20091201111114.449f3387@secure.ericade.net> Message-ID: <064A82DD216948F583C8668E2C1849F6@spike> > ...On Behalf Of Anders Sandberg > ... >Dear Extropians... Do you remember when in our misspent youth, perhaps in the 60s or 70s, we read the real hard-core sf? Often the author would start out on the first paragraph with a comment that was at once interesting, obscure, mysterious, compelling. It set up an alternate universe which needed exploring. We would dig into the story to try to figure out what the heck that first comment means. Anders' post brought back a flood of pleasant memories from those days: > ..."Dear Extropians. I recently got a TransLife X655 body. But > when I try to forward motor control to one of my forks some > datafield firewalls my exoself, saying that it lacks proper > credentials." Anders Sandberg If we want to write a collective sf story as Lee has proposed, Anders' comment could serve as the first line. I propose a slight variation on Lee's idea. Make a collective sf story, but write everything not futuristic, but rather nowistic. Write stuff that describes the real thing. Your SecondLife persona would be OK. The future is now. We are there. OK, I propose a second segment to Anders' story: 1. "Dear Extropians. I recently got a TransLife X655 body. But when I try to forward motor control to one of my forks some datafield firewalls my exoself, saying that it lacks proper credentials." (Anders Sandberg) 2. As Spike finished reading Anders' comment, he wondered how he might help the young man. Spike was always fond of Anders, such a fine young gentleman, a highly intelligent and kindhearted person, a combination of characteristics not often found in humans. But Spike was a satellite controls specialist rather than an internet protocol expert, and so was inadequate for the task at hand. Spike decided to propose solving Anders' Translife body firewall problem by proposing a collaborative science non-fiction story with the collective intelligence to which he belonged. It was unclear to him how to avoid having the collective non-fiction end up as a massively branching mess. 
He proposed that everyone in the collective wanting to contribute to the effort merely look up the latest post under the subject line "proper authentication" then append to that version. (spike) spike From max at maxmore.com Tue Dec 1 16:19:20 2009 From: max at maxmore.com (Max More) Date: Tue, 01 Dec 2009 10:19:20 -0600 Subject: [ExI] climategate again Message-ID: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Alfio: >Folks, this is getting silly. Temperature series (including CRU, >GISTEMP etc.) are mostly based on public data fom the GHCN (Global >Historical Climate Network) and from the US Historical Climate >Network. Link to ftp files: I've been wondering about that. I haven't resolved the issue to my satisfaction yet, but here's a different view from yours: http://wattsupwiththat.com/2009/11/30/pielke-senior-revkin-perpetuates-a-myth-about-surface-temperature-record/#more-13451 Excerpt: >On the weblog >Dot >Earth today, there is text from Michael Schlesinger, a climatologist >at the University of Illinois, that presents analyses of long term >surface temperature trends from NASA, NCDC and Japan as if these >are from independent sets of data from the analysis of CRU. Andy >Revkin is perpetuating this myth in this write-up by not presenting >the real fact that these analyses draw from the same original raw >data. While they may use only a subset of this raw data, the >overlap has been estimated as about 90-95%. > >The unresolved problems with this surface data (which, of course, >applies to all four locations) is reported in the peer reviewed paper Also: http://pielkeclimatesci.wordpress.com/2009/11/28/further-comment-on-the-surface-temperature-data-used-in-the-cru-giss-and-ncdc-analyses/ On top of this, I've yet to adequately verify or refute claims that 80% of the 0.6 C rise in global temps over the last century is actually due to highly dubious adjustments (a la CRU) to the raw data. For instance: not only ignoring the urban heat island effect, but adjusting it the opposite way from what makes sense. (That would be such a serious error that I have a hard time believing even the global catastrophists would do it.) I'm feeling pretty lonely on this issue. Just about everyone on all sides of the issue seem to be very certain of what's going on. Despite considerable reading of clashing sources (or because of it), I remain highly unsure. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From eugen at leitl.org Tue Dec 1 17:25:11 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 1 Dec 2009 18:25:11 +0100 Subject: [ExI] 8 ?extinct? species found alive and kicking In-Reply-To: <200912011634.nB1GYmpm004997@andromeda.ziaspace.com> References: <200912011634.nB1GYmpm004997@andromeda.ziaspace.com> Message-ID: <20091201172511.GQ17686@leitl.org> On Tue, Dec 01, 2009 at 10:34:39AM -0600, Max More wrote: > > http://www.msnbc.msn.com/id/34152254/ns/technology_and_science-science/ > > Of course, as any sane, green person knows, millions upon millions of > species are dying every day and humans will be gone by February 4th, 2010. Don't care about humans one bit, but please don't holocaust my bluefin tuna. It's hard enough to get decent sashimi as is. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Tue Dec 1 17:09:59 2009 From: spike66 at att.net (spike) Date: Tue, 1 Dec 2009 09:09:59 -0800 Subject: [ExI] climategate again In-Reply-To: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: > ...On Behalf Of Max More > Sent: Tuesday, December 01, 2009 8:19 AM > ... > > I'm feeling pretty lonely on this issue. Just about everyone > on all sides of the issue seem to be very certain of what's going on. > Despite considerable reading of clashing sources (or because > of it), I remain highly unsure. > > Max You said it Max, you and I are two lonely guys together. I think the planet is probably warming, but we don't know how much, because the people entrusted with this calculation have shown they were intentionally trying to force it one direction, and corrupted the peer review process. The warming-peers were apparently empowered by partisan fundng and memetically inbred. Now we have a huge task ahead of us: try to resurrect the original raw data, and gather it somehow, then utilize the main tool that the 1980s era researchers did not have: the internet, where everyone can get at that data, and collectively or individually try to extract signals from that enormous bowl of thin data soup. spike From alfio.puglisi at gmail.com Tue Dec 1 18:28:14 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 1 Dec 2009 19:28:14 +0100 Subject: [ExI] climategate again In-Reply-To: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> On Tue, Dec 1, 2009 at 5:19 PM, Max More wrote: > Alfio: > > Folks, this is getting silly. Temperature series (including CRU, GISTEMP >> etc.) are mostly based on public data fom the GHCN (Global Historical >> Climate Network) and from the US Historical Climate Network. Link to ftp >> files: >> > > I've been wondering about that. I haven't resolved the issue to my > satisfaction yet, but here's a different view from yours: > > > http://wattsupwiththat.com/2009/11/30/pielke-senior-revkin-perpetuates-a-myth-about-surface-temperature-record/#more-13451 > Excerpt: > > On the weblog < >> http://dotearth.blogs.nytimes.com/2009/11/30/more-on-the-climate-files-and-climate-trends/>Dot >> Earth today, there is text from Michael Schlesinger, a climatologist at the >> University of Illinois, that presents analyses of long term surface >> temperature trends from NASA, NCDC and Japan as if these are from >> independent sets of data from the analysis of CRU. Andy Revkin is >> perpetuating this myth in this write-up by not presenting the real fact that >> these analyses draw from the same original raw data. While they may use >> only a subset of this raw data, the overlap has been estimated as about >> 90-95%. >> >> The unresolved problems with this surface data (which, of course, applies >> to all four locations) is reported in the peer reviewed paper >> > > Also: > > http://pielkeclimatesci.wordpress.com/2009/11/28/further-comment-on-the-surface-temperature-data-used-in-the-cru-giss-and-ncdc-analyses/ > > Those pages are discussing the supposed independence of anlyses like CRU and GISTEMP. 
Well, it's obvious that they aren't totally independent, since they are using mostly the same input files. What they actually show is that they arrive to very similar conclusion after different methods of interpolation and correction. This tells us that the methods are robust. Also, it seems to me that those blogs want to have it both ways: on one hand, there is no raw data available because CRU deleted it. On the other hand, GISTEMP is not independent because it uses the same data (that you can download from ftp)! So is the raw data available or not? They need to make up their mind. > On top of this, I've yet to adequately verify or refute claims that 80% of > the 0.6 C rise in global temps over the last century is actually due to > highly dubious adjustments (a la CRU) to the raw data. For instance: not > only ignoring the urban heat island effect, but adjusting it the opposite > way from what makes sense. (That would be such a serious error that I have a > hard time believing even the global catastrophists would do it.) > > I'm feeling pretty lonely on this issue. Just about everyone on all sides > of the issue seem to be very certain of what's going on. Despite > considerable reading of clashing sources (or because of it), I remain highly > unsure. > I am no professional of the field, just have a basic understanding of physics. I base my position on several things: 1) temperature is not the only relevant data. We have widespread glacier retreat, sea level rise, arctic sea ice loss. Recent data point to ice mass loss in both Greenland and Antarctica. Agricoltural records in temperate climates show a lengthening of the growing season and a contraction of the winter phase. These trends are not local, but found all over the world. All this is consistent with global warming (whatever the cause), and with little else. 2) basic physics tells us that Earth's energy budget must balance. We can easily measure the input (solar), and verify that it's approximately constant. Since we are changing the properties of the output (greenhouse gases will redirect part of the outgoing radiation downward), internal temperature must rise to compensate. It's about as inevitable as putting a coat on, and feeling warmer. 3) Attribution: there is now 30% more CO2 than before industrial times. We know CO2 greenhouse gas properties. Any conjecture that rejects global warming must show where the extra energy trapped by CO2 went. And it's no easy task. 4) consideration of the opposite camp: leaving apart obvious cranks (like one should also dismiss useless alarmism), I see no coherent opposition. Some dispute the temperature record. Some others accept the records, but dispute anthropogenic causes. Others negate CO2 role as a greenhouse gas, or say that it's saturated, or reject feedbacks like water vapor, or point to supposedly warmer periods in the middle ages, or say that 1998 invalidates all warming trends. All those arguments can be shown false or highly dubious, but people still use them and change subjects continously. This lack of focus gives me the impression that skeptics (really unfortunate word, that. Skepticism is a basic feature of science) are just trying to find something, anything, to avoid confronting reality. Reading blogs like wattsupwiththat actually reinforced my impressions: posts like "it's snowing here so GW is false", obvious errors like comparing different time series without realizing that they use different baseline periods, talks of a "recent cool period"... 
When any criticism seems to be valid, it's about some minor detail that would change nothing of the general picture.

Alfio
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thespike at satx.rr.com Tue Dec 1 19:06:23 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 01 Dec 2009 13:06:23 -0600 Subject: [ExI] climategate again In-Reply-To: References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: <4B15692F.7090702@satx.rr.com>

Peter Watts on the topic (I found the link on Charlie Stross's blog):

http://www.rifters.com/crawl/?p=886

Because As We All Know, The Green Party Runs the World. ...

I like to reserve these pixels for cool stuff, cutting edges that may or may not pan out, findings of interest (and frequently, of contention). Anthropogenic Climate Change hasn't qualified for years; the science is settled, the effect is real, and the only uncertainty among the folks who actually know their shit is whether we're in for a bad ride or a downright catastrophic one. The "debate", such as it is, is political and entirely dishonest at its heart. Climate-change skeptics like to portray themselves as a feisty rebel alliance speaking truth to power, up against a colossal green propaganda machine calling all the shots -- a little like the way Glen Beck and Bill O'Reilly like to portray US Christians as an endangered species. Anyone familiar with the Bush administration's environmental censorship of NASA, the EPA, and its own military knows how ridiculous that is. I have better things to do than research every objection raised by (as Bruce Sterling calls them) shortsighted sociopathic morons who don't want to lose any money. (I would recommend How to Talk to a Climate Change Skeptic, however, to anyone who does want to fit a couple of denialists in between the Jehovah's Witnesses and Birthers lined up on their stoops. It addresses all the usual canards, from warming-stopped-in-1998 right out to global-warming-on-Pluto.) I also generally avoid going on about stuff that's already getting a lot of press elsewhere; if you saw it on slashdot, boingboing, or the NY Times I'll be giving it a pass unless it's really central to my current interests, simply because the blogosphere will already be writhing with opinions on the subject and mine has probably been better put by someone with better insight.

Now. In what can hardly be a coincidence, just a few weeks before the Copenhagen summit the Climatic Research Unit at the University of East Anglia got hacked. The sixty-odd megabytes of confidential e-mails that ended up littering the whole damn internet either a) blew the lid off a global conspiracy to fake the global warming crisis, or b) lay there in a big sludgy pile of boring communications about birthdays, conference meet-ups, and whether or not Poindexter over at Cal State was going to be allowed into the tree fort this year. Judging by the criteria I described at the top of the post, I should just stick my fingers in my ears and hum loudly until the current shitstorm abates. But I'm not going to. Not this time.

I haven't read all 62MB. I've read hardly any of it, in fact. I'm familiar with the money shots: the "Nature trick" used to "hide the decline" (and sorry folks, anybody who's ever run a residual analysis knows there's nothing nefarious about the word "trick" in this context. Besides, climatologists need hookers same as Republicans). I've read the e-mail-deletion thread, seen quotes that decry evil denialists and call for the censure of skeptic-friendly journal editors. The very conditions under which these e-mails were released makes it entirely plausible that some of them were forged; but at least some of the more controversial bits have been verified as legitimate by their authors. I don't have much to say about any of that; maybe it's all real, maybe it's been spiked, none of it compromises the overwhelming weight of evidence in favor of anthropogenic climate change. Whatever.

No, what I want to address here is the attitude of the scientists, and how that relates to the way science actually works. I keep running into recurring commentary on the snarkiness of the scientists behind these e-mails. They're really entrenched, people seem surprised to note. Got a real siege mentality going on, speak unkindly of the skeptics, take all kinds of cheap shots unbecoming of the lab coat. These people can be downright assholes.

No shit, Sherlock. I was a scientist myself for the longest time, and the people I'd gladly drop into a vat of nitric acid start with the Pope and go all the way down to anyone who voted for Stephen Harper's conservatives.

The apologists have stepped up, pointed out that these were private conversations and we shouldn't expect them to carry the same veneer of civility that one would expect in a public presentation. "Science doesn't work because we're all nice," remarked one widely-quoted NASA climatologist. "Newton may have been an ass, but the theory of gravity still works."

No. I don't think he's got it right. I don't think most of these people do. Science doesn't work despite scientists being asses. Science works, to at least some extent, because scientists are asses. Bickering and backstabbing are essential elements of the process. Haven't any of these guys ever heard of "peer review"?

There's this myth in wide circulation: rational, emotionless Vulcans in white coats, plumbing the secrets of the universe, their Scientific Methods unsullied by bias or emotionalism. Most people know it's a myth, of course; they subscribe to a more nuanced view in which scientists are as petty and vain and human as anyone (and as egotistical as any therapist or financier), people who use scientific methodology to tamp down their human imperfections and manage some approximation of objectivity. But that's a myth too. The fact is, we are all humans; and humans come with dogma as standard equipment. We can no more shake off our biases than Liz Cheney could pay a compliment to Barack Obama. The best we can do -- the best science can do -- is make sure that at least, we get to choose among competing biases.

That's how science works. It's not a hippie love-in; it's rugby. Every time you put out a paper, the guy you pissed off at last year's Houston conference is gonna be laying in wait. Every time you think you've made a breakthrough, that asshole supervisor who told you you needed more data will be standing ready to shoot it down. You want to know how the Human Genome Project finished so far ahead of schedule? Because it was the Human Genome projects, two competing teams locked in bitter rivalry, one led by J. Craig Venter, one by Francis Collins -- and from what I hear, those guys did not like each other at all.

This is how it works: you put your model out there in the coliseum, and a bunch of guys in white coats kick the shit out of it. If it's still alive when the dust clears, your brainchild receives conditional acceptance. It does not get rejected. This time.

Yes, there are mafias. There are those spared the kicking because they have connections. There are established cliques who decide what appears in Science, who gets to give a spoken presentation and who gets kicked down to the poster sessions with the kiddies. I know a couple of people who will probably never get credit for the work they've done, for the insights they've produced. But the insights themselves prevail. Even if the establishment shoots the messenger, so long as the message is valid it will work its way into the heart of the enemy's camp. First it will be ridiculed. Then it will be accepted as true, but irrelevant. Finally, it will be embraced as canon, and what's more everyone will know that it was always so embraced, and it was Our Glorious Leader who had the idea. The credit may not go to those who deserve it; but the field will have moved forward.

Science is so powerful that it drags us kicking and screaming towards the truth despite our best efforts to avoid it. And it does that at least partly fueled by our pettiness and our rivalries. Science is alchemy: it turns shit into gold. Keep that in mind the next time some blogger decries the ill manners of a bunch of climate scientists under continual siege by forces with vastly deeper pockets and much louder megaphones.

As for me, I'll follow the blogs with interest and see how this all shakes out. But even if someone, somewhere, proves that a handful of climatologists deliberately fudged their findings -- well, I'll be there with everyone else calling to have the bastards run out of town, but it won't matter much in terms of the overall weight of the data. I went running through Toronto the other day on a 17°C November afternoon. Canada's west coast is currently underwater. Sea level continues its 3mm/yr creep up the coasts of the world, the western Siberian permafrost turns to slush. Swathes of California and Australia are pretty much permanent firestorm zones these days. The glaciers retreat, the Arctic ice cap shrinks, a myriad migratory species still show up at their northern destinations weeks before they're supposed to. The pine beetle furthers its westward invasion, leaving dead forests in its wake -- the winters, you see, are no longer cold enough to hit that lethal reset button that once kept their numbers in check. I could go on, but you get my drift. And if the Climate-Change Hoax Machine is powerful enough to do all that, you know what? They deserve to win.

This entry was written by Peter Watts, posted on Sunday November 22 2009 at 08:11 pm, filed under climate, scilitics. [see link at top for comments]

From max at maxmore.com Tue Dec 1 19:06:46 2009 From: max at maxmore.com (Max More) Date: Tue, 01 Dec 2009 13:06:46 -0600 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] Message-ID: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com>

Alfio wrote: >Also, it seems to me that those blogs want to have it both ways: on >one hand, there is no raw data available because CRU deleted it. On >the other hand, GISTEMP is not independent because it uses the same >data (that you can download from ftp)! So is the raw data available or not? Perhaps they are using the same *adjusted* data. (Again, I'm not clear about this.) >I am no professional of the field, just have a basic understanding >of physics. I base my position on several things: > >1) temperature is not the only relevant data. We have widespread >glacier retreat, sea level rise, arctic sea ice loss.
Recent data >point to ice mass loss in both Greenland and Antarctica. >Agricoltural records in temperate climates show a lengthening of the >growing season and a contraction of the winter phase. These trends >are not local, but found all over the world. All this is consistent >with global warming (whatever the cause), and with little else. This is one point on which I'm fairly sure, and sure that you are mistaken. One example: arctic ice depends not just on temperature but on the ambient moisture level -- which depends on factors other than temperature. Also, note that glacier retreat may only tell us that the coldest places are becoming a bit less cold, not that it's getting hotter in most places. My understanding is that all, or almost all, the observed warming is due to less extreme cold and not to higher temperatures in the warmer places. That might have some beneficial consequences in addition to the costs required to adapt. Anyway, your first point supports only the point that some warming has occurred, which I'm not disputing. (Even so, why do we see even more reports of melting ice when there has been no significant warming for 12 years? That suggests that either the cause is other than warming, or that reports of ice melting etc. are highly selective... and selected. That certainly seems to be the case with regard to polar bears.) >2) basic physics tells us that Earth's energy budget must balance. >We can easily measure the input (solar), and verify that it's >approximately constant. Have you heard of the Early Faint Sun Paradox? Around 2.5 billion years ago, the Sun was 20% to 30% less bright than now. And yet the oceans were not frozen. This contradicts your assumption of extreme climate sensitivity. On this, see Lindzen's nicely written piece: The Climate Science Isn't Settled http://online.wsj.com/article/SB10001424052748703939404574567423917025400.html BTW, Prof Richard Lindzen is the Chairman of the Alfred P. Sloan Meteorology, Earth, Atmospheric, and Planetary Sciences Department at the Massachusetts Institute of Technology. He's a highly credible expert who is not a "denier" of warming, only of extreme catastrophism -- which must be why people like Mike Treder (MDT) hate him so much. (MDT = Most Dishonest Transhumanist.) >Since we are changing the properties of the output (greenhouse gases >will redirect part of the outgoing radiation downward), internal >temperature must rise to compensate. It's about as inevitable as >putting a coat on, and feeling warmer. Not at all. See the Lindzen piece. >4) consideration of the opposite camp See, this is exactly the kind of thing that bothers me. "The opposite camp". Opposite to what? There are multiple views, not two. Lindzen, for instance, does not deny that the planet has warmed modestly over the last 100 to 150 years, but he does have considerable doubts about the reliability of models and he disputes the extremity of the suggested responses. (He's about the closest to my own current views as I've seen.) >This lack of focus gives me the impression that skeptics (really >unfortunate word, that. Skepticism is a basic feature of science) >are just trying to find something, anything, to avoid confronting reality. Exactly the same can be said of the anti-skeptic views, so that doesn't help in the least. 
>Reading blogs like wattsupwiththat actually reinforced my >impressions: posts like "it's snowing here so GW is false", obvious >errors like comparing different time series without realizing that >they use different baseline periods, talks of a "recent cool >period"... When any criticism seems to be valid, it's about some >minor detail that would change nothing of the general picture. I'm sure there are plenty of posts there dealing with details, because details matter. The same is true of a relatively good pro-consensus site such as Realclimate. But wattsupwiththat also addresses the wider issues. Perhaps you haven't read much of the site. I've read a lot of both of those sites. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From thespike at satx.rr.com Tue Dec 1 19:10:14 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 01 Dec 2009 13:10:14 -0600 Subject: [ExI] climategate again In-Reply-To: <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> Message-ID: <4B156A16.8040207@satx.rr.com> On 12/1/2009 12:28 PM, Alfio Puglisi wrote: > it seems to me that those blogs want to have it both ways: on one hand, > there is no raw data available because CRU deleted it. On the other > hand, GISTEMP is not independent because it uses the same data (that you > can download from ftp)! So is the raw data available or not? This struck me immediately as the cried of dismay rose on every side. Are we missing something here? Did everyone else used the reduced not the raw data? Damien Broderick From p0stfuturist at yahoo.com Tue Dec 1 20:55:53 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 1 Dec 2009 12:55:53 -0800 (PST) Subject: [ExI] Newton's birthday season request Message-ID: <463245.70495.qm@web59903.mail.ac4.yahoo.com> May I submit a piece to be given the once-over by someone? If they wish, he or she can deconstruct it like a shark tearing apart a side of beef. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Tue Dec 1 22:16:30 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 1 Dec 2009 23:16:30 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> Message-ID: <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> After writing this email, I find it much longer than I expected. Maybe I got a bit carried away :-) If it is too long, feel free to skip over anything boring. On Tue, Dec 1, 2009 at 8:06 PM, Max More wrote: >One example: arctic ice depends not just on temperature but on the ambient moisture level -- which depends on factors other than temperature. I must admit I don't remember ambient moisture level discussed for arctic ice. But isn't this just one of the things I lamented further down? We know Arctic temperatures are going up - no urban heat effect there. Models predict arctic warming in excess of the global mean thanks to ice-albedo feedback. Arctic ice goes down and what's the reaction? maybe temps are going up and the albedo feedback is kicking in? No, there are other factors like ambient moisture... 
then we have to think about something else to explain the northward migration of ecosystems, and then... >This is one point on which I'm fairly sure, and sure that you are mistaken If the point was arctic ice, I would like to see more. If the point was the sheer number of secondary global warming effects, apart from temperature, that all point in the same general direction, I'm afraid it will take a massive amount of evidence. (btw, if you have references to papers on moisture influences on arctic ice, I'm interested in them. I think they will just confirm the general picture but, hey, you never know). > My understanding is that all, or almost all, the observed warming is due to > less extreme cold and not to higher temperatures in the warmer places. > You are correct. And that's exactly what climate models predict for greenhouse-caused warming. Cold places warm up more than hot places. Night temperatures go up more than day temperatures. If, for example, the warming was caused by increased solar output, we would see the opposite. > > Anyway, your first point supports only the point that some warming has > occurred, which I'm not disputing. (Even so, why do we see even more reports > of melting ice when there has been no significant warming for 12 years? That > suggests that either the cause is other than warming, or that reports of ice > melting etc. are highly selective... and selected. That certainly seems to > be the case with regard to polar bears.) > Glaciers are great integrators of climate. If temperature goes up, a glacier will go out of equilibrium and start to melt, but the response will not be instantaneus. Reaching a new equilibrium takes years. The current decade has been the warmest on record, and ice melts in response. If the last few decades had been colder than before, you would see glaciers growing even if the cooling trend stabilized for some years. About your suggestion that reports of ice mass balance are selected for the most melting ones... that's a very serious accusation. Have you got any proof of that kind of selection? Anything? Go to the world glacier monitoring service: http://www.wgms.ch See for example: http://www.wgms.ch/mbb/mbb10/sum07.html And you can't select in Arctic sea ice loss, or Greenland mass balance. There's only one of each. > > 2) basic physics tells us that Earth's energy budget must balance. We can >> easily measure the input (solar), and verify that it's approximately >> constant. >> > > Have you heard of the Early Faint Sun Paradox? Around 2.5 billion years > ago, the Sun was 20% to 30% less bright than now. And yet the oceans were > not frozen. This contradicts your assumption of extreme climate sensitivity. > "approximately constant" will do for any period less than many millions of years. The Sun output is still going up, and it's likely to turn the planet into a desert in a billion of years or so (and a badly burnt piece of rock at the end) but I'm not blaming it for the current global warming :-) I also heard about Snowball Earth about 600 million years ago, when all the planet freezed over. We are talking about periods when the continents were different, oceanic currents had a different pattern, the atmosphere was completely different (at the time you cite, oxygen would have been scarce!) and basically unknown: various greenhouse gases, from CO2 to carbonyl sulfide, have been proposed to solve the faint sun paradox. 
The very fact that different atmospheric composition are proposed means that we are not sure of what kind of atmosphere was present those days. In short, I think you can't derive any conclusion from that remote past to today's situation. It was basically a different planet. To estimate climate sensitivity, the ice age epoch is much better suited: close to our time, and with ideal cold-hot-cold step responses to study :-) On this, see Lindzen's nicely written piece: > The Climate Science Isn't Settled > > http://online.wsj.com/article/SB1000142405274870393940457456742391702us5400.html > > The piece you linked is a concentrate of spin, irrelevant points and outright errors, or falsehoods. And very easy to spot even for me. I'll quote some of them: "the globally averaged temperature anomaly (GATA), is always changing." Well, duh. Who says otherwise? "Sometimes it goes up, sometimes down" Look at any plot of the temperature record (1880-present) and tell me if "sometimes up, sometimes down" is an accurate description. And it's clear, from the next sentence, that he's talking about long periods. "and occasionally?such as for the last dozen years or so?it does little that can be discerned."" Misleading: isolate other dozen years periods in the record (the one that even Lindzen himself says is warming). Many times, the trend is not that clear. It's not just the last dozen, but many such dozens where "little can be discerned", which tells you the last dozen hasn't been much different than before. "Several of the emails from the University of East Anglia's Climate Research Unit (CRU) that have caused such a public ruckus dealt with how to do this so as to maximize apparent changes." False. CRU didn't "maximize" any temperature anomaly, and doesn't say so in any email (their series data comes out nearly identical to GISS, using publicly available data and code). Some CRU emails did talk equivocally about tree proxy data, which are used in temperature reconstructions. Lindzen is confusing the two. "That said, the main greenhouse substances in the earth's atmosphere are water vapor and high clouds. Let's refer to these as major greenhouse substances to distinguish them from the anthropogenic minor substances." Again misleading. Lindzen knows full well that water vapor is a feedback, and that even if it is carrying most of the natural greenhouse effect on its shoulders, it can't do anything to change Earth's temperature on its own. "Even a doubling of CO2 would only upset the original balance between incoming and outgoing radiation by about 2%" 2% is significant when your planet has an average temperature of 290K. And he didn't include the feedbacks (but didn't he talk about water vapor a few lines before? Why not now?) "The main statement publicized after the last IPCC Scientific Assessment two years ago was that it was likely that most of the warming since 1957 (a point of anomalous cold)" the IPCC talks of "50 years trends", not of 1957. This is like selecting 1998 in other contexts. "Yet articles from major modeling centers acknowledged that the failure of these models to anticipate the absence of warming for the past dozen years was due to the failure of these models to account for this natural internal variability." False. Many models show 10-year scale periods of stable and even cooling temperatures right in the middle of a longer-term warming trend. Only when you average a dozen of them a monotonous year-by-year warming appears. 
Models do show short-term variability, and El-Nino-like behaviours. They can't reproduce the exact El-Nino et al. pattern we have on this planet, and so can't model temperatures on small timescales. "They do so because in these models, the main greenhouse substances (water vapor and clouds) act to amplify anything that CO2 does." Ah, ok, so he talks about feedbacks eventually. "The notion that the earth's climate is dominated by positive feedbacks is intuitively implausible," Exaggerating. Whoever said that our climate is "dominated" by positive feedbacks? If that was the case, the first ice age would have been the end of life, and the first interglacial would have roasted the remains. Earth is not Venus. ////////////////////// Ok, enough. On a more constructive tone: All the talks Lindzen does about feedbacks is invalidated by ice age cores. Without feedbacks, you can't explain the ice-age / interglacial alternance. We know that orbital forcings cause ice ages (the timing is just too perfect), but we also know that they are too weak on their own. Rejecting something with great explanatory power (feedbacks) with, well, nothing, isn't going to fly for most scientists. > Since we are changing the properties of the output (greenhouse gases will >> redirect part of the outgoing radiation downward), internal temperature must >> rise to compensate. It's about as inevitable as putting a coat on, and >> feeling warmer. >> > > Not at all. See the Lindzen piece. > When I say "inevitable", I refer to conservation of energy. Radiation emitted downward *will* do something. Bodies with radiation imbalances *will* warm up. Anything me, you or Lindzen thinks is irrelevant. You may have noticed the low opinion I have of Lindzen. This is because, because of all the MIT titles you listed above, I really can't accuse him of ignorance. This leaves less palatable options. > 4) consideration of the opposite camp >> > > See, this is exactly the kind of thing that bothers me. "The opposite > camp". Opposite to what? To most of the science. That's where they have set, by their choice. Almost all of them don't publish, many actually actively refuse results of peer-review articles, and have nothing substantial to contribute. > There are multiple views, not two. My feeling is that there is one well-developed theory, and then a group of mutually inconsistent views with little to offer, except "this part is wrong". > Lindzen, for instance, does not deny that the planet has warmed modestly > over the last 100 to 150 years, but he does have considerable doubts about > the reliability of models and he disputes the extremity of the suggested > responses. (He's about the closest to my own current views as I've seen.) > As for the first part, he can't deny the warming, really. No one would take him seriously. I'll give him points for publishing, but sloppy articles like the wsj above don't help. > > This lack of focus gives me the impression that skeptics (really >> unfortunate word, that. Skepticism is a basic feature of science) are just >> trying to find something, anything, to avoid confronting reality. >> > > Exactly the same can be said of the anti-skeptic views, so that doesn't > help in the least. > It's not the same. (and what is an "anti-skeptic view"? :-) Global warming is a well-developed theory with multiple supporting lines of evidence: different kind of observations, and physics-based models. 
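To put rough numbers on the conservation-of-energy point above, here is the textbook zero-dimensional energy balance. This is a back-of-envelope sketch only: the constants are standard values, the 3.7 W/m2 figure is the commonly quoted forcing for doubled CO2, and no feedbacks are included.

```python
# Back-of-envelope, zero-dimensional energy balance for Earth.
# Absorbed solar power must equal thermal emission at the effective temperature.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2
ALBEDO = 0.3         # planetary albedo (fraction of sunlight reflected)

absorbed = (S0 / 4.0) * (1.0 - ALBEDO)        # ~238 W m^-2 averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25            # ~255 K effective emission temperature
print("effective temperature: %.0f K" % T_eff)
print("observed mean surface: ~288 K (the ~33 K gap is the natural greenhouse effect)")

# No-feedback response to an added forcing F: dT ~ F / (4 * sigma * T^3).
F_2xCO2 = 3.7                                 # commonly quoted forcing for doubled CO2, W m^-2
dT = F_2xCO2 / (4.0 * SIGMA * T_eff ** 3)     # ~1 K before any feedbacks
print("no-feedback warming for 2xCO2: %.1f K" % dT)
```

The disagreement upthread is not about this arithmetic but about how much the feedbacks multiply that ~1 K baseline.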
I don't see how one can take only one part (say, the observation of temperature warming) but not the rest (the explanation of that warming, and it's likely future consequences). Without an obvious falsification, one would need to produce an alternative explanation, and the proposed ones (there have been some: solar, cosmic rays, ice age rebound and surely some other I'm forgetting) didn't survive investigation. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From asyluman at gmail.com Tue Dec 1 22:48:03 2009 From: asyluman at gmail.com (Will Steinberg) Date: Tue, 1 Dec 2009 17:48:03 -0500 Subject: [ExI] more math and maybe some fermat Message-ID: Probably already a thing (as it always is) but Wikipedia and Wolfram aren't turning anything up so: Today I was thinking about squares as the sum of odd numbers, and saw that the reason was because to create the next square, you must add a row of n spots and then a row of n-1, summing to 2n-1, as shown: o o O *o o O* OOO and I saw it was easily extendable to any dimension (e.g. for cubes we add a face of x*( blocks, then one of x(x-1), and finally one of (x-1)(x-1); for quartics, x(x)(x) + x(x)(x-1) + (x)(x-1)(x-1) + (x-1)(x-1)(x-1). So I generalized it to the equation in the attached file. Neat. More interesting was this: You can see with geometric simplicity that a^2+b^2 can equal c^2 for 9+16=25: oeeeee oooeee oooooe ooooooo As we can see, the odd numbers mesh together to make a box of 4*6 +1: (n-1)*(n+1) + 1 = n^2 -1 + 1 = n^2 (n=5). What's so difficult is we can see a special reason for this occurring with the squares, but it is much harder to show how it does NOT occur in the higher powers. I think this could lead to an elementary proof of Fermat's Last Theorem. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: POWERZ.gif Type: image/gif Size: 1684 bytes Desc: not available URL: From asyluman at gmail.com Tue Dec 1 22:58:13 2009 From: asyluman at gmail.com (Will Steinberg) Date: Tue, 1 Dec 2009 17:58:13 -0500 Subject: [ExI] more math and maybe some fermat In-Reply-To: References: Message-ID: attached: more elegant sum I love math On Tue, Dec 1, 2009 at 5:48 PM, Will Steinberg wrote: > Probably already a thing (as it always is) but Wikipedia and Wolfram aren't > turning anything up so: > > Today I was thinking about squares as the sum of odd numbers, and saw that > the reason was because to create the next square, you must add a row of n > spots and then a row of n-1, summing to 2n-1, as shown: > > o o O > *o o O* > OOO > > and I saw it was easily extendable to any dimension (e.g. for cubes we add > a face of x*( blocks, then one of x(x-1), and finally one of (x-1)(x-1); for > quartics, x(x)(x) + x(x)(x-1) + (x)(x-1)(x-1) + (x-1)(x-1)(x-1). > > So I generalized it to the equation in the attached file. Neat. > > More interesting was this: > > You can see with geometric simplicity that a^2+b^2 can equal c^2 for > 9+16=25: > > oeeeee > oooeee > oooooe > ooooooo > > As we can see, the odd numbers mesh together to make a box of 4*6 +1: > (n-1)*(n+1) + 1 = n^2 -1 + 1 = n^2 (n=5). > > What's so difficult is we can see a special reason for this occurring with > the squares, but it is much harder to show how it does NOT occur in the > higher powers. I think this could lead to an elementary proof of Fermat's > Last Theorem. > -------------- next part -------------- An HTML attachment was scrubbed... 
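Will's attached equation was scrubbed from the archive, but the construction he describes reads like the standard difference-of-powers identity, x^n - (x-1)^n = sum over k from 0 to n-1 of x^k * (x-1)^(n-1-k), i.e. the a^n - b^n factorization with a - b = 1; for n = 2 it reduces to the familiar 2x - 1. A quick numerical check of that reading (a guess at the scrubbed attachment, not the original):

```python
# Check: x**n - (x-1)**n == sum of x**k * (x-1)**(n-1-k) for k = 0..n-1.
# For n = 2 this is the "add 2x-1 to reach the next square" observation above.
def shell(x, n):
    """Blocks added when growing an (x-1)^n hypercube into an x^n one."""
    return sum(x**k * (x - 1)**(n - 1 - k) for k in range(n))

for n in range(2, 6):
    for x in range(1, 20):
        assert x**n - (x - 1)**n == shell(x, n)
print("identity holds for n = 2..5, x = 1..19")

# Summing the shells telescopes back to a perfect power, mirroring
# "every square is the sum of consecutive odd numbers":
n, x = 3, 7
assert sum(shell(i, n) for i in range(1, x + 1)) == x**n
print(x, "**", n, "=", x**n, "recovered as a sum of", x, "shells")
```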
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: POWERZ.gif Type: image/gif Size: 1261 bytes Desc: not available URL: From hkeithhenson at gmail.com Tue Dec 1 23:25:21 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 1 Dec 2009 15:25:21 -0800 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: References: Message-ID: On Tue, Dec 1, 2009 at 2:16 PM, Max More wrote: snip > > I'm feeling pretty lonely on this issue. Just about everyone on all > sides of the issue seem to be very certain of what's going on. > Despite considerable reading of clashing sources (or because of it), > I remain highly unsure. I am irritated by the whole thing. What we have is people endlessly arguing about the ship rusting or not and if this will sink it in 50 to 100 years in the future. Meanwhile there is a torpedo in the water headed for the ship. Running out of cheap energy is a far more serious matter than climate change and will happen sooner. That has to be solved to avert famines and resource wars. There are no long term solutions to this problem that involves endlessly putting carbon in the air so any solution to the problem must involve displacing fossil fuel with some less expensive source of energy. Nuclear, SBSP, or some new method, however we solve the energy problem, we also solve the climate problem to whatever extent it is real and to whatever extent the problem is caused by humans. If CO2 is a problem in 30-40 years, we can pull it out of the atmosphere to any degree we want (300 TW years will take out 100 ppm). I.e., it doesn't matter a bit if the data has been fudged or not. Keith From emlynoregan at gmail.com Wed Dec 2 01:08:12 2009 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 2 Dec 2009 11:38:12 +1030 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: References: Message-ID: <710b78fc0912011708t5634dd69p13b70f0b57437ccd@mail.gmail.com> 2009/12/2 Keith Henson : > On Tue, Dec 1, 2009 at 2:16 PM, ?Max More wrote: > > snip >> >> I'm feeling pretty lonely on this issue. Just about everyone on all >> sides of the issue seem to be very certain of what's going on. >> Despite considerable reading of clashing sources (or because of it), >> I remain highly unsure. > > I am irritated by the whole thing. > > What we have is people endlessly arguing about the ship rusting or not > and if this will sink it in 50 to 100 years in the future. ?Meanwhile > there is a torpedo in the water headed for the ship. > > Running out of cheap energy is a far more serious matter than climate > change and will happen sooner. ?That has to be solved to avert famines > and resource wars. > > There are no long term solutions to this problem that involves > endlessly putting carbon in the air so any solution to the problem > must involve displacing fossil fuel with some less expensive source of > energy. > > Nuclear, SBSP, or some new method, however we solve the energy > problem, we also solve the climate problem to whatever extent it is > real and to whatever extent the problem is caused by humans. ?If CO2 > is a problem in 30-40 years, we can pull it out of the atmosphere to > any degree we want (300 TW years will take out 100 ppm). > > I.e., it doesn't matter a bit if the data has been fudged or not. > > Keith If this was facebook, I could press the "like" button. Consider it pressed, anyway. 
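As a rough sanity check of the "300 TW years will take out 100 ppm" figure quoted above, assuming a practical direct-air-capture cost of about 500 kJ per mole of CO2 (an assumed value; the ideal thermodynamic limit is roughly 20 kJ/mol and real systems vary widely):

    # Order-of-magnitude check only; the capture energy per mole is assumed.
    SECONDS_PER_YEAR = 3.15e7
    GT_CO2_PER_PPM = 7.8                 # ~7.8 Gt of CO2 per 1 ppm in the atmosphere
    MOLAR_MASS_CO2 = 44.0                # g/mol
    CAPTURE_ENERGY_J_PER_MOL = 500e3     # assumed practical direct-air-capture cost

    ppm_to_remove = 100
    grams = ppm_to_remove * GT_CO2_PER_PPM * 1e15    # Gt -> g
    moles = grams / MOLAR_MASS_CO2
    energy_joules = moles * CAPTURE_ENERGY_J_PER_MOL

    tw_years = energy_joules / (1e12 * SECONDS_PER_YEAR)
    print(f"~{tw_years:.0f} TW-years")   # roughly 280, the same ballpark as the figure above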
Also, from a positive angle, really interesting things happen if we get cheap renewable energy, stuff that extropians would like, and we wont get it from sticking with the current situation which appears to be on the decline. There's a giant fusion reactor *right there in the sky* out your window, and we don't really harness it. Properly cheap renewable energy (ie: catching a tiny sliver of the waste energy coming out of the sun) solves all water, food, and other resource problems. I think there are many more cool graphs that trend exponentially upward in our future, once we get serious about cheap renewable energy. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From emlynoregan at gmail.com Wed Dec 2 02:35:36 2009 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 2 Dec 2009 13:05:36 +1030 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> Message-ID: <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> How to Talk to a Climate Sceptic Coby Beck http://scienceblogs.com/illconsidered/2008/07/how_to_talk_to_a_sceptic.php "Below is a listing of all the articles to be found in the "How to Talk to a Climate Sceptic" guide, presented as a handy one-stop shop for all the material you should need to rebut the more common anti-global warming science arguments constantly echoed accross the internet. In what I hope is an improvement on the original categorization, they have been divided and subdivided along 4 seperate lines: Stages of Denial, Scientific Topics, Types of Argument, Levels of Sophistication. This should facilitate quick retrieval of specific entries. Individual articles will appear under multiple headings and may even appear in multiple subcategories in the same heading. Please feel free to quote from, paraphrase, link to and otherwise use any or all of them in the best way possible to fight the good fight against mis- and dis- information where ever it appears! Email suggestions for new topics or links to more current scientific information to "a(dot)few(dot)things(dot)illconsidered(at)gmail(dot)com" or leave them in the comments." -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From lcorbin at rawbw.com Wed Dec 2 03:24:46 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 01 Dec 2009 19:24:46 -0800 Subject: [ExI] climategate again In-Reply-To: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: <4B15DDFE.8030807@rawbw.com> Max says > I'm feeling pretty lonely on this issue. Just about everyone on all > sides of the issue seem to be very certain of what's going on. Despite > considerable reading of clashing sources (or because of it), I remain > highly unsure. Hmm, I'm not sure what's wrong with you either. Or me and the others who just aren't any too sure. Why can't we have nice healthy attitudes like "I was a scientist myself for the longest time, and the people I?d gladly drop into a vat of nitric acid start with the Pope and go all the way down to anyone who voted for Stephen Harper?s conservatives." 
---Peter Watts (credit: Damien) I think that to have (or to get) a definitive opinion on things like this, i.e. to stop being so wimpy, one perhaps has to join in the singing of "Hide the Decline" (credit John Clark), or engage in other soul-fulfilling rituals that leave no doubt about who're the good guys and who're the bad. Me, I just don't feel like calling other people names yet---not to mention dropping them in nitric acid. Yes, learning to sing along would doubtless help me too! Lee From kanzure at gmail.com Wed Dec 2 03:50:43 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 1 Dec 2009 21:50:43 -0600 Subject: [ExI] Fwd: [twister] Meta Conspiracy In-Reply-To: <20091202031300.GJ5352@sifter.org> References: <20091202031300.GJ5352@sifter.org> Message-ID: <55ad6af70912011950k7b6e7cd9ie5b3b51591dfd10a@mail.gmail.com> ---------- Forwarded message ---------- From: Brandyn Date: Tue, Dec 1, 2009 at 9:13 PM Subject: [twister] Meta Conspiracy To: Twister ? ? ? ?So, I have noticed that with high reliability certain people take predictable sides on most issues, even new and seemingly unrelated issues where the side is predictable by the nature of the argument as opposed to any particular bias of fact or topic. ? ? ? ?For instance, certain people are prone to prefer what I might call mystical interpretations even when the topic isn't mysticism (so it is some bias of the way they think, not just a learned belief in mysticism), some naturally latch on to conspiracy theories, some to mainstream science, and so on. ?As I mentioned before, all tend to defend their faction by digging a little further behind any disenting evidence until they find the next bit which discredits it, and then they stop, satisfied that they have swept aside what they knew was a bogus challenge before they even bothered to put the effort in to prove it. ?None of this is remarkable -- pretty well defines human thinking most of the time. ?What's interesting to me is to try to tease out the real underlying differences that establish the sides in the first place, particularly where they are not by topic but by manner... ? ? ? ?E.g., there is definitely a faction here who see themselves as scientific, rational, no-nonsense, obvious truth, keep it simple, don't be silly, and everybody knows (or, at least, everybody with a Phd knows). ?And then there is an often opposing faction who see themselves as scientific, rational, consider everything, the truth isn't always what it appears to be on the surface, and science makes progress one death at a time. ? ? ? ?If I could take a first stab at qualifying the difference between the two, the first seems universally more optimistic about human nature, in terms of objectivity, motive, and sometimes ability, and especially in power of the group mean or concensus to win over individual defects in these things. ?The latter sees man as a rationalizing animal, self- deluding, sometimes malicious, regularly deceptive whether by unwitting bias or conscious intent, and with group-think tending to exagerate rather than mediate these defects. ? ? ? ?So, I wonder where these biases come from, and whether one or the other tends to be more predictive of reality. ? ? ? 
?I am pretty well in the latter camp, and I can trace it back to my childhood of being an outsider, always "the new kid" and so had to do a lot of cold-reading of the social scene, and got to see a lot of variations thereof; close friends and family members variously involved in three letter agencies, government weapons programs, the police force, the maffia, various politicized scientific domains, and so on. ?Throughout my adult life, I encounter regular reenforcement of these "biases" through encounters with people involved in these things today, and so I always expect that the "opposition" must be coming over to my side by the year since surely they must hear the same stories... ? ? ? ?But of course that isn't true, and they are seeing the other side of the same biased coin, thinking surely I am going to clue in eventually... ? ? ? ?And sometimes I get a glimpse of how this happens, and the story is usually pretty close to this: ?One side thinks there are scorpions everywhere, and one should always wear shoes. ?The other side says there are none, and has never seen a scorpion their entire life. ?The difference, it turns out, is simply that one, expecting to find scorpions, lifts up rocks, and often finds them, and the other, certain there are none, never looks and never sees. But that's a biased analogy--favorable to the second faction--so I am still curious to distill it down to something more essential and central. ? ? ? ?Thoughts? ? ? ? ?-Brandyn -- ---------- brandyn at sifter.org ------- http://www.sifter.org/~brandyn ---------- ? ? ? The fatal tendency of mankind to leave off thinking about a thing ? ? ? which is no longer doubtful is the cause of half their errors. ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?-- John Stuart Mill ---------------------------------------------------------------------------- General options: http://sifter.org/al/ Unsubscribe from this thread, sub-thread, poster, or forum: http://sifter.org/al/?msg=emsg.5607&_from=bm.574&group=group.2 ---------------------------------------------------------------------------- -- - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Wed Dec 2 04:33:14 2009 From: spike66 at att.net (spike) Date: Tue, 1 Dec 2009 20:33:14 -0800 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategateagain] In-Reply-To: <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com><4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> Message-ID: <96C8847F12AE46B2B0FB09300FCBD7E5@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Emlyn > Sent: Tuesday, December 01, 2009 6:36 PM > To: ExI chat list > Subject: Re: [ExI] The Climate Science Isn't Settled [was: > Re: climategateagain] > > How to Talk to a Climate Sceptic > Coby Beck > > http://scienceblogs.com/illconsidered/2008/07/how_to_talk_to_a > _sceptic.php > > "Below is a listing of all the articles to be found in the > "How to Talk to a Climate Sceptic" guide... > Emlyn Thanks Emlyn. The problem I have with this approach is that it is about denial that climate change exists or doesn't. But the real question in light of the recent revelations about CRU is not philosophical, but rather: how much is climate changing? Is it really a tenth of a degree C per decade? Or a hundredth? 
Regarding the notion of resurrecting the raw temperature data from all those stations, it occurred to me that even if we manage to do it, that probably will make the picture even murkier. I had entertained the notion of getting the orginal data and examining the computer code that CRU used to reduce their data. But a thought occurred to me that makes me now doubt this approach. Weather stations everywhere that were set up a long time ago would naturally show an increase in temperature because of urbanization. Cities are warmer than rural areas, because there are heat sources everywhere. Practically every long-operational station would see that effect, so the science community must figure out a way to compensate for that. If we had just the raw data and no compensation model, we would vastly overestimate the warming of the planet. So if the CRU guys were either intentionally or mistakenly exaggerating the warming, it could be from undercompensating for urbanization. Without knowing how much the actual temperatures are changing, it doesn't look to me like we are ready for *any* action on an international scale. This problem will not be at all easy to untangle. spike From emlynoregan at gmail.com Wed Dec 2 05:10:35 2009 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 2 Dec 2009 15:40:35 +1030 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategateagain] In-Reply-To: <96C8847F12AE46B2B0FB09300FCBD7E5@spike> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <96C8847F12AE46B2B0FB09300FCBD7E5@spike> Message-ID: <710b78fc0912012110r4ca328fclca7c5fecd50f4834@mail.gmail.com> 2009/12/2 spike : >> How to Talk to a Climate Sceptic >> Coby Beck >> >> http://scienceblogs.com/illconsidered/2008/07/how_to_talk_to_a >> _sceptic.php >> >> "Below is a listing of all the articles to be found in the >> "How to Talk to a Climate Sceptic" guide... >> Emlyn > > Weather stations everywhere that were set up a long time ago would naturally > show an increase in temperature because of urbanization. ?Cities are warmer > than rural areas, because there are heat sources everywhere. ?Practically > every long-operational station would see that effect, so the science > community must figure out a way to compensate for that. ?If we had just the > raw data and no compensation model, we would vastly overestimate the warming > of the planet. ?So if the CRU guys were either intentionally or mistakenly > exaggerating the warming, it could be from undercompensating for > urbanization. > >From the link I posted: http://scienceblogs.com/illconsidered/2006/02/warming-due-to-urban-heat-island.php Objection: The apparent rise of global average temperatures is actually an illusion due to the urbanization of land around weather stations, the Urban Heat Island effect. Answer: Urban Heat Island Effect has been examined quite thoroughly and simply found to have a negligible effect on temperature trends. Real Climate has a detailed discussion of this here. What's more, NASA GISS takes explicit steps in their analysis to remove any such spurious signal by normalizing urban station data trends to the surrounding rural stations. It is a real phenomenon, but it is one climate scientists are well aware of and have taken any required steps to remove its influence from the raw data. 
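A minimal sketch of the kind of urban-to-rural trend normalization described above, for illustration only (this is not the actual GISTEMP homogenization code; station selection, distance weighting and break-point detection are all omitted):

    import numpy as np

    def trend(years, temps):
        slope, _ = np.polyfit(years, temps, 1)
        return slope

    def adjust_urban(years, urban, rural_series):
        # Force the urban station's long-term trend to match the mean trend
        # of nearby rural stations, keeping its short-term variability.
        rural_slope = np.mean([trend(years, r) for r in rural_series])
        excess = trend(years, urban) - rural_slope
        return urban - excess * (years - years[0])

    years = np.arange(1900, 2000)
    rng = np.random.default_rng(0)
    rural = [0.006 * (years - 1900) + rng.normal(0, 0.2, years.size) for _ in range(3)]
    urban = 0.012 * (years - 1900) + rng.normal(0, 0.2, years.size)  # extra UHI trend
    print(trend(years, urban), trend(years, adjust_urban(years, urban, rural)))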
But heavy duty data analysis and statistical processing aside, a little common sense and a couple of pertinent images should put this idea to bed. Here is an image, taken from Astronomy Picture of the Day (a wonderful site, by the way), of the surface of the earth. It is a composite of hundreds of satellite images all taken at night. (The large version is well worth the download time!) http://www.cobybeck.com/illconsidered/images/earthlights02_dmsp.jpg Aside from being very beautiful, it is a perfect indicator of urbanization on earth. As you can see, the greatest urbanization is over the continental United States, Europe, India, Japan, Eastern China and generally coastal South America. This next image was taken from NASA GISS. It is a global surface temperature anomaly map which shows warming (and infrequently, cooling) by region. http://www.cobybeck.com/illconsidered/images/global-anomalies.gif Look at North America, look at Europe, at Asia, Australia, Africa and the Poles and compare them to the urbanization in the image from APOD. There is quite simply no way to discern any correlation whatsoever between urbanization and warming. If the UHI effect were the cause of warming in the globally averaged record, we would see it in this map. The claim that Global Warming is an artefact of Urban Heat Island Effect is simply an artefact of the Urban Myth Effect. Addendum: Wikipedia has a very good article on this subject. Among all the interesting details it mentions a few papers that directly discuss efforts to identify and quantify UHI influences on the global temperature trend including this one which would be a good one to cite: A 2003 paper ("Assessment of urban versus rural in situ surface temperatures in the contiguous United States: No difference found"; J climate; Peterson; 2003) indicates that the effects of the urban heat island may have been overstated, finding that "Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures." This was done by using satellite-based night-light detection of urban areas, and more thorough homogenisation of the time series (with corrections, for example, for the tendency of surrounding rural stations to be slightly higher, and thus cooler, than urban areas). As the paper says, if its conclusion is accepted, then it is necessary to "unravel the mystery of how a global temperature time series created partly from urban in situ stations could show no contamination from urban warming." The main conclusion is that micro- and local-scale impacts dominate the meso-scale impact of the urban heat island: many sections of towns may be warmer than rural sites, but meteorological observations are likely to be made in park "cool islands." -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From rafal.smigrodzki at gmail.com Wed Dec 2 08:17:49 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 2 Dec 2009 03:17:49 -0500 Subject: [ExI] climategate again In-Reply-To: <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> Message-ID: <7641ddc60912020017m14eb4fe0ub77817ae5fe2c0a6@mail.gmail.com> 2009/12/1 Alfio Puglisi : > Those pages are discussing the supposed independence of anlyses like CRU and > GISTEMP. 
Well, it's obvious that they aren't totally independent, since they > are using mostly the same input files. What they actually show is that they > arrive to very similar conclusion after different methods of interpolation > and correction. This tells us that the methods are robust. ### Phil Jones, Gavin Schmidt and others insisted on many occasions that CRU and GISS are independent, so it is not "obvious" that they aren't, right? Obviously, Jones was indulging in propaganda, and got corrected by the bloggers. It's good then that you obviously agree with bloggers and obviously disagree with Jones. The high correlation between GISS, NCDC and CRU does not show robustness. It shows they apply exactly the same methodology to the data or they adjust their results to fit. "Robust" means that independent approaches come to the same conclusion, not three groups doing exactly the same procedure. You don't get a 0.98 correlation between the results of complex, non-trivial transformations of data (and we know from reading of the CRU program comments that the "value adding" is a hopelessly confused mess), unless the persons doing the work share their programs or adjust results to agree with each other. ------------------------ > > Also, it seems to me that those blogs want to have it both ways: on one > hand, there is no raw data available because CRU deleted it. On the other > hand, GISTEMP is not independent because it uses the same data (that you can > download from ftp)! So is the raw data available or not? They need to make > up their mind. > ### Here is a direct quote from a response to a FOIA request to CRU (April 27, 2007) "Additionally, even if we were able to create such a list we would not be able to link the sites with sources of data. The station database has evolved over time and the Climate Research Unit was not able to keep multiple versions of it as stations were added, amended and deleted. This was a consequence of a lack of data storage in the 1980s and early 1990s compared to what we have at our disposal currently. It is also likely that quite a few stations consist of a mixture of sources." CRU deleted the list of stations they selected to come up with HadCRUT3. At the same time they claim that the raw data from the stations are available from GHCN, and from NMSs - but without the list of stations you can't verify their methodology even if you could retrieve all data from MNSs (which is itself problematic). So whatever they did, it cannot be replicated. We only know that somehow the output they got was almost perfectly correlated with GISS but we do not know how this unusual effect was achieved. Certainly, it could not have been achieved by independent processing of available data. ------------------- > I am no professional of the field, just have a basic understanding of > physics. I base my position on several things: > > 1) temperature is not the only relevant data. We have widespread glacier > retreat, sea level rise, arctic sea ice loss. ### Glacier retreat started long before CO2 started going up. Arctic ice loss : see here http://wattsupwiththat.com/2009/02/03/arctic-sea-ice-increases-at-record-rate/, the sea level has been going up for 13 000 years until very recently (see http://en.wikipedia.org/wiki/File:Sea_level_temp_140ky.gif), then dropped, now increased 8 inches in a linear fashion since 1880. What do you think does all this have to do with CO2? ------------- Recent data point to ice mass > loss in both Greenland and Antarctica. 
Agricoltural records in temperate > climates show a lengthening of the growing season and a contraction of the > winter phase. These trends are not local, but found all over the world. All > this is consistent with global warming (whatever the cause), and with little > else. ### There is CO2 fertilization, no? And which global warming do you mean - the one in 1934? 1880? 1998? The planet has been warming and cooling all the time, out of step with CO2, which is the important issue here. ------------------------ > > 2) basic physics tells us that Earth's energy budget must balance. We can > easily measure the input (solar), and verify that it's approximately > constant. Since we are changing the properties of the output (greenhouse > gases will redirect part of the outgoing radiation downward), internal > temperature must rise to compensate. It's about as inevitable as putting a > coat on, and feeling warmer. ### No, Alfio, the Earth is not "basic physics", and it does not have to obey your notions. You need to read up on aerosols, water vapor, cloud cover and a lot of other things before you can say what the Earth "must" do. ----------------- > > 3) Attribution: there is now 30% more CO2 than before industrial times. We > know CO2 greenhouse gas properties. Any conjecture that rejects global > warming must show where the extra energy trapped by CO2 went. And it's no > easy task. > ### No, all we need to show is the poor correlation between recent CO2 rise and global temperatures, and this has been shown very clearly. Rafal From rafal.smigrodzki at gmail.com Wed Dec 2 09:21:45 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 2 Dec 2009 04:21:45 -0500 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> Message-ID: <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> 2009/12/1 Alfio Puglisi : > > Look at any plot of the temperature record (1880-present) and tell me if > "sometimes up, sometimes down" is an accurate description. And it's clear, > from the next sentence, that he's talking about long periods. ### Look at the plot 1000 AD - present (but the real one, not from Mann et. al) What do you see? ---------------- > False. CRU didn't "maximize" any temperature anomaly, and doesn't say so in > any email (their series data comes out nearly identical to GISS, using > publicly available data and code). Some CRU emails did talk equivocally > about tree proxy data, which are used in temperature reconstructions. > Lindzen is confusing the two. ### CRU brazenly manipulated proxy data for the pre-instrumental period, and appear to have fudged their analysis of the instrumental period as well. -------------- . Lindzen knows full well that water vapor is a feedback, > and that even if it is carrying most of the natural greenhouse effect on its > shoulders, it can't do anything to change Earth's temperature on its own. ### Do you understand what you wrote? I certainly don't. ------------ > > "Even a doubling of CO2 would only upset the original balance between > incoming and outgoing radiation by about 2%" > > 2% is significant when your planet has an average temperature of 290K. And > he didn't include the feedbacks (but didn't he talk about water vapor a few > lines before? Why not now?) 
### Because nobody knows the feedback but available data are consistent with absence of positive feedback. ----------------- > > False. Many models show 10-year scale periods of stable and even cooling > temperatures right in the middle of a longer-term warming trend. Only when > you average a dozen of them a monotonous year-by-year warming appears. > Models do show short-term variability, and El-Nino-like behaviours. They > can't reproduce the exact El-Nino et al. pattern we have on this planet, and > so can't model temperatures on small timescales. ### Trenberth seems to disagree with you. He thinks it's a "travesty" that their models cannot account for the present period of cooling. ------------------ > > Exaggerating. Whoever said that our climate is "dominated" by positive > feedbacks? ### Without strong positive CO2-related feedback there is no catastrophic climate change, so every climate hysteric out there is either explicitly or implicitly talking about near future dominated by such feedback. ----------------- > All the talks Lindzen does about feedbacks is invalidated by ice age cores. > Without feedbacks, you can't explain the ice-age / interglacial alternance. ### So why does CO2 start rising only about 800 000 years after the end of an ice age? What is driving the feedback? ----------------- > When I say "inevitable", I refer to conservation of energy. Radiation > emitted downward *will* do something. Bodies with radiation imbalances > *will* warm up. Anything me, you or Lindzen thinks is irrelevant. ### If what you think is irrelevant, why do you mention it? --------------------------- > > You may have noticed the low opinion I have of Lindzen. This is because, > because of all the MIT titles you listed above, I really can't accuse him of > ignorance. This leaves less palatable options. ### Sure. He's too credentialed to be dismissed as a "crank", or "whackaloon" (as climate hysterics like to refer to climate realists), so he must be a Satanist, or something. As you mentioned above, this is you writing in "constructive" mode. I'd hate to read you being snarky. --------------------- > > To most of the science. That's where they have set, by their choice. Almost > all of them don't publish, many actually actively refuse results of > peer-review articles, and have nothing substantial to contribute. > ### Since when are government bureaucrats like Hansen, Jones, and Karl "science"? Since when is peer-review defined as Mann at al, reviewing Mann at al? --------------------- > > It's not the same. (and what is an "anti-skeptic view"? :-)?? Global warming > is a well-developed theory with multiple supporting lines of evidence: > different kind of observations, and physics-based models. ### Anthropogenic CO2 driven catastrophic warming is a lunatic fringe theory supported by large government bureaucracies. All lines of evidence supporting the hockey stick (bristlecone, sediment, treering) have been discredited. The "physics-based models" don't even model cloud cover, much less impacts on ocean circulation, aerosols, methane, anthropogenic albedo changes, other albedo changes, all the stuff you need to know to tell the difference between high positive feedback to CO2 (the only important situation) and low or even negative feedback. But we do know that at no time in the past 600 million years did Earth exceed a temperature anomaly of +8C, despite CO2 levels much higher than today and this is enough to summarily reject the idea that CO2 is likely to cause catastrophic warming. 
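For reference, the zero-dimensional arithmetic behind the numbers being traded here, with standard textbook values taken as assumptions (the 3.7 W/m^2 doubling forcing and the illustrative feedback gains are not results from either side of this thread):

    SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
    T_EFFECTIVE = 255.0    # Earth's effective radiating temperature, K
    FORCING_2XCO2 = 3.7    # W/m^2, canonical forcing for a CO2 doubling

    # Planck (no-feedback) response: dF/dT = 4*sigma*T^3, about 3.8 W m^-2 K^-1
    planck = 4 * SIGMA * T_EFFECTIVE**3
    dT_no_feedback = FORCING_2XCO2 / planck          # ~1.0 K

    for gain in (0.0, 0.5, 0.65):                    # assumed net feedback gains
        print(f"gain={gain:.2f}: dT ~ {dT_no_feedback / (1 - gain):.1f} K")
    # The disagreement above is, in effect, over the value of 'gain'.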
------------------- I don't see how > one can take only one part (say, the observation of temperature warming) but > not the rest (the explanation of that warming, and it's likely future > consequences). ### Because the explanations do not make any sense in light of available data, and the future consequences are unknown, but extremely unlikely (see above) to be dire. ------------------ Without an obvious falsification, one would need to produce > an alternative explanation, and the proposed ones (there have been some: > solar, cosmic rays, ice age rebound and surely some other I'm forgetting) > didn't survive investigation. ### Read up on cosmic rays and aerosols. Rafal From rafal.smigrodzki at gmail.com Wed Dec 2 09:31:04 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Wed, 2 Dec 2009 04:31:04 -0500 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategateagain] In-Reply-To: <710b78fc0912012110r4ca328fclca7c5fecd50f4834@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <96C8847F12AE46B2B0FB09300FCBD7E5@spike> <710b78fc0912012110r4ca328fclca7c5fecd50f4834@mail.gmail.com> Message-ID: <7641ddc60912020131nbd59c9fre21917a3b49ac4ac@mail.gmail.com> How to talk to a climate alarmist about UHI: http://wattsupwiththat.com/2009/09/26/correcting-the-surface-temperature-record-for-uhi/#more-11182 From eugen at leitl.org Wed Dec 2 10:00:39 2009 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Dec 2009 11:00:39 +0100 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: References: Message-ID: <20091202100039.GR17686@leitl.org> On Tue, Dec 01, 2009 at 03:25:21PM -0800, Keith Henson wrote: > I am irritated by the whole thing. Not just you. I'm entirely unsurprised, because this is how people always address a crisis. See Jared Diamond's "Collapse" for plenty of examples. The only difference this time is that the crisis is global, not local. > What we have is people endlessly arguing about the ship rusting or not > and if this will sink it in 50 to 100 years in the future. Meanwhile > there is a torpedo in the water headed for the ship. Good metaphor. We need to initiate a immediate change of course, at full machine power. Anything less won't bring us out of harm's way. > Running out of cheap energy is a far more serious matter than climate > change and will happen sooner. That has to be solved to avert famines > and resource wars. We're arguably in resource war regime already. It's just small scale mostly (with the exception of the US), but if it reaches China, India, Arabia, Africa, North America it's going to get arbitrarily ugly. > There are no long term solutions to this problem that involves > endlessly putting carbon in the air so any solution to the problem > must involve displacing fossil fuel with some less expensive source of > energy. D'accord. 1000%. > Nuclear, SBSP, or some new method, however we solve the energy > problem, we also solve the climate problem to whatever extent it is > real and to whatever extent the problem is caused by humans. If CO2 > is a problem in 30-40 years, we can pull it out of the atmosphere to > any degree we want (300 TW years will take out 100 ppm). > > I.e., it doesn't matter a bit if the data has been fudged or not. But it is so convenient to point fingers instead having to deal with problems. You first deny that the problem exists, that somebody invented it. 
Then, when you no longer can deny it exists you start blaming somebody else causing it (it's never you, so much mutual finger pointing ensues). Then when everybody realizes you're in a zero-sum game everybody starts fighting about what's left, until you terminate enough parties so there's enough. Except, in this stage you use nuclear and biological weapons, so there are quite few indeed left, and they're preoccupied with other things. Like staying alive, for instance. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Dec 2 11:13:37 2009 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Dec 2009 12:13:37 +0100 Subject: [ExI] climategate again In-Reply-To: <4B15DDFE.8030807@rawbw.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> <4B15DDFE.8030807@rawbw.com> Message-ID: <20091202111337.GW17686@leitl.org> On Tue, Dec 01, 2009 at 07:24:46PM -0800, Lee Corbin wrote: > Why can't we have nice healthy attitudes like > > "I was a scientist myself for the longest time, and the people > I?d gladly drop into a vat of nitric acid start with the Pope > and go all the way down to anyone who voted for Stephen Harper?s > conservatives." ---Peter Watts (credit: Damien) What a horrible person. You have to use hot Caro's acid, not nitric acid. And slowly lower them down, starting with the toes. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Dec 2 11:15:33 2009 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Dec 2009 12:15:33 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> Message-ID: <20091202111533.GX17686@leitl.org> On Wed, Dec 02, 2009 at 01:05:36PM +1030, Emlyn wrote: > How to Talk to a Climate Sceptic Talking is not an effective way of dealing with a belief system. Any belief system. It's just not worth the time. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From anders at aleph.se Wed Dec 2 12:48:22 2009 From: anders at aleph.se (Anders Sandberg) Date: Wed, 02 Dec 2009 13:48:22 +0100 Subject: [ExI] Spirited molecules In-Reply-To: 20091202111337.GW17686@leitl.org Message-ID: <20091202124822.8c68a152@secure.ericade.net> Eugen Leitl wrote: > > What a horrible person. You have to use hot Caro's acid, > not nitric acid. And slowly lower them down, starting with the toes. > I wonder about the ignition/explosion hazard. It is not all substances that have a hazard sheet that proclaims them to be incompatible with acids, bases, metals and organic material. It reminds me of one of the funnier chemistry blog subjects, "Things I Won't Work With" http://pipeline.corante.com/archives/things_i_wont_work_with/ Some of them are pure poetry. 
Life must have a peculiar vividness when your job is to come in and see if triazadienyl fluoride does anything when you expose it to fluorine monoxide. - Derek Lowe Cyanogen azide is trouble right from its empirical formula: CN4, not one hydrogen atom to its name. A molecular weight of 68 means that you?re dealing with a small, lively compound, but when the stuff is 82 per cent nitrogen, you can be sure that it?s yearning to be smaller and livelier still. That?s a common theme in explosives, this longing to return to the gaseous state, and nitrogen-nitrogen bonds are especially known for that spiritual tendency. - Derek Lowe [On chlorine trifluoride] ?It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water-with which it reacts explosively. It can be kept in some of the ordinary structural metals-steel, copper, aluminium, etc.-because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminium keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.? - John Clark, Ignition! -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From painlord2k at libero.it Wed Dec 2 14:47:21 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 02 Dec 2009 15:47:21 +0100 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: <20091202100039.GR17686@leitl.org> References: <20091202100039.GR17686@leitl.org> Message-ID: <4B167DF9.4000305@libero.it> Il 02/12/2009 11.00, Eugen Leitl ha scritto: > On Tue, Dec 01, 2009 at 03:25:21PM -0800, Keith Henson wrote: > Not just you. I'm entirely unsurprised, because this is how > people always address a crisis. See Jared Diamond's > "Collapse" for plenty of examples. > The only difference this time is that the crisis is global, not > local. It will be felt differently in different parts of the globe, as somewhere water is abundant and somewhere it is scarce and somewhere Sun is abundant and somewhere it is scarce. >> What we have is people endlessly arguing about the ship rusting or not >> and if this will sink it in 50 to 100 years in the future. Meanwhile >> there is a torpedo in the water headed for the ship. > Good metaphor. We need to initiate a immediate change of course, > at full machine power. Anything less won't bring us out of > harm's way. The metaphor broke when you advise for "full machine power". There is no thinking that the ship could absorb the hit and repair the damage after. And there is no thinking that in the water there is more than a torpedo; there are rocks under the water and waves over it and if you change course in the wrong direction you sink the ship over them. Add that the ship could not be able to sustain the strain of a "full machine power change of course" without breaking. But the difference that broke the torpedo metaphor is that a torpedo hit is an instantaneous thing, where ending of cheap energy is a long (as in years or decades) thing. 
Enough time to react if we leave the market (the people) to react accordingly to their needs and wills. >> Running out of cheap energy is a far more serious matter than climate >> change and will happen sooner. That has to be solved to avert famines >> and resource wars. > We're arguably in resource war regime already. I don't see it. What I see is a commodity bubble caused by cheap money and quantitative easing all around. Given the uncertain about the value of the $ (and partially of ?) people with money (that don't give interests) buy hard stuff that last and keep a value whatever the Central Bankers do. > It's just small scale > mostly (with the exception of the US), but if it reaches China, India, > Arabia, Africa, North America it's going to get arbitrarily ugly. There is no something like an "arbitrary ugly" thing. >> There are no long term solutions to this problem that involves >> endlessly putting carbon in the air so any solution to the problem >> must involve displacing fossil fuel with some less expensive source of >> energy. > D'accord. 1000%. 90% only. Current CTL technologies (Coal to Liquid) are competitive with a oil at 50$. The problem for investors is: 1) The log term price of oil. 2) The Carbon Trading and Carbon Taxes schemes that would put them out of the market with increased costs Coal is cheap and abundant enough to supplant oil for a century and more. Anyway, I'm a supporter of any and all power/energy sources and technologies related that don't need subsides. Nuclear plant (very cheap if the legal framework don't cause them to need 20 years to be built instead of 5). > But it is so convenient to point fingers instead having to deal > with problems. It what skeptics said about warmists behavior. Finger pointing to CO2, but only to the man made one. And never to the agriculture CO2 (20%) but only to the car emitted CO2 (10%) or the industrial CO2. > You first deny that the problem exists, that somebody > invented it. Then, when you no longer can deny it exists you start > blaming somebody else causing it (it's never you, so much mutual > finger pointing ensues). Is this Mann made psychology? I think that many people have different aims than the stated ones (always the same for what I con understand) and move from a reason to another to obtain them. > Then when everybody realizes you're in > a zero-sum game everybody starts fighting about what's left, until > you terminate enough parties so there's enough. Except, in this > stage you use nuclear and biological weapons, so there are quite > few indeed left, and they're preoccupied with other things. Like > staying alive, for instance. I'm a huge fan of Mad Max, but the plot and the background is as credible as the plot of an Alien or Zombie comics (where people behave like moron continuously - how they were allowed out of an asylum and managed to stay alive is matter for a fantasy plot). Liberal, market oriented society don't go in war against each other, nor they go in war against other if not heavily provoked and menaced. Mobilizing an Army is too costly to be done for long. It divert productive young people from more useful jobs and clearly is a bad investment. I think humans are irrational creature too, but they all are not moronic irrational creatures. Often they use their rational brain to something useful, also. The way you are reasoning about humans is not far from justifying manipulating them to obtain some "good" outcome. Where "good" is of your choice. 
Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From bbenzai at yahoo.com Wed Dec 2 14:49:38 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 2 Dec 2009 06:49:38 -0800 (PST) Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: Message-ID: <354542.69938.qm@web32008.mail.mud.yahoo.com> Eugen Leitl argued: > Talking is not an effective way of dealing with a belief system. > Any belief system. It's just not worth the time. Well, I used to believe otherwise, but you've convinced me you're right. Oh, wait... Ben Zaiboc From eugen at leitl.org Wed Dec 2 15:29:28 2009 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Dec 2009 16:29:28 +0100 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <354542.69938.qm@web32008.mail.mud.yahoo.com> References: <354542.69938.qm@web32008.mail.mud.yahoo.com> Message-ID: <20091202152928.GB17686@leitl.org> On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: > Well, I used to believe otherwise, but you've convinced me you're right. > > Oh, wait... You are cute. Do you like unicorns? From msd001 at gmail.com Wed Dec 2 16:29:05 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 2 Dec 2009 11:29:05 -0500 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <20091202152928.GB17686@leitl.org> References: <354542.69938.qm@web32008.mail.mud.yahoo.com> <20091202152928.GB17686@leitl.org> Message-ID: <62c14240912020829x3e94018cpe9b2e810c11cd5b0@mail.gmail.com> On Wed, Dec 2, 2009 at 10:29 AM, Eugen Leitl wrote: > On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: > > > Well, I used to believe otherwise, but you've convinced me you're right. > > > > Oh, wait... > > You are cute. Do you like unicorns? > > I do. Especially grilled. Mmm... -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Wed Dec 2 19:03:24 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 02 Dec 2009 20:03:24 +0100 Subject: [ExI] Fwd: [twister] Meta Conspiracy In-Reply-To: <55ad6af70912011950k7b6e7cd9ie5b3b51591dfd10a@mail.gmail.com> References: <20091202031300.GJ5352@sifter.org> <55ad6af70912011950k7b6e7cd9ie5b3b51591dfd10a@mail.gmail.com> Message-ID: <4B16B9FC.8030307@libero.it> Il 02/12/2009 4.50, Bryan Bishop ha scritto: > E.g., there is definitely a faction here who see themselves as > scientific, rational, no-nonsense, obvious truth, keep it simple, > don't be silly, and everybody knows (or, at least, everybody with a > Phd knows). And then there is an often opposing faction who see > themselves as scientific, rational, consider everything, the truth > isn't always what it appears to be on the surface, and science makes > progress one death at a time. > If I could take a first stab at qualifying the difference between the > two, the first seems universally more optimistic about human nature, > in terms of objectivity, motive, and sometimes ability, and > especially in power of the group mean or consensus to win over > individual defects in these things. The latter sees man as a > rationalizing animal, self- deluding, sometimes malicious, regularly > deceptive whether by unwitting bias or conscious intent, and with > group-think tending to exaggerate rather than mediate these defects. 
This question could be related with the Premise Checker message: Meme 200: Frank Forman: Conservatism and Technology sent 9.10.14 The main point made was "Conservatism is the application of time to politics. By time, I mean that history is the cumulation of random or unrelated causes." I would modify a bit the axioms describing the first group and would change "universally more optimistic about human nature" with "universally more optimistic about their human nature". One could think the first group have a more mechanistic understanding of human behavior (like clock gears), where the second have a more agricultural vision of human behavior (like animals/plants). The first believe they can manipulate other humans to an aim, without unwanted effects, where the second think other humans must respected for what they are and no amount of tinkering will change their fundamental way of being. If we take the definition (axioms) of Brian for the first group and try to build over them, we could find that being more optimistic about human nature, abilities and objectivity the first group will not feel the need of the same level of checks and balances on the power of themselves, the groups [they belong], their majority and their leadership. The second group, instead, will feel the need of more checks and balances for themselves, the groups, the majority and the leadership. Another difference, counterintuitive at first, is that the first group, as optimist they are, when confronted with people not sharing their views on some topics [they consider important] and when unable to change their mind, will try to push out the opposition from the debate (in some less civilized part of the world, liquidate them outright). It is logical, as if humanity is good, eliminating the obviously not good people will leave only the good ones and, if humanity is good, not good people is not so human as they resemble. There is no need to provocation or real interference to cause this reaction; the simply fact that someone don't share their point of view is enough to marginalize the dissenter or worse. The other side will not resort to this (unless physical harm is feared) as they understand human being have flaws. They will not like to resort to exclusion and elimination of dissenters or opposer because this would legitimize others (the wrong side) to act in the same way. And given disagreements will not be eliminated this would cause endless problems for both people in the right (them) and in the wrong side. The individuals of the first group have evolved (with time) the ability to change their belief so it will match the belief dominant; this is done mainly without conscious thinking. This evolution must be expected as people in the losing party/parties have an advantage to flip to the other side and sticking with the losing party would be damaging in the long run. The fact that the winning party is wrong is often immaterial in the short (and not so short) run. Positively, being able to align faster with the [apparent] winner is useful against other rival groups and the ability to change opinion is useful when the other opinion is right. > And sometimes I get a glimpse of how this happens, and the story is > usually pretty close to this: One side thinks there are scorpions > everywhere, and one should always wear shoes. The other side says > there are none, and has never seen a scorpion their entire life. 
The > difference, it turns out, is simply that one, expecting to find > scorpions, lifts up rocks, and often finds them, and the other, > certain there are none, never looks and never sees. But that's a > biased analogy--favorable to the second faction--so I am still > curious to distill it down to something more essential and central. To add to the metaphor, I would say they never look into the shoes before putting them on and never look where they step after they have put them on, whereas the barefoot people usually look where they put their feet. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From alfio.puglisi at gmail.com Wed Dec 2 19:32:33 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 2 Dec 2009 20:32:33 +0100 Subject: [ExI] climategate again In-Reply-To: <7641ddc60912020017m14eb4fe0ub77817ae5fe2c0a6@mail.gmail.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> <4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com> <7641ddc60912020017m14eb4fe0ub77817ae5fe2c0a6@mail.gmail.com> Message-ID: <4902d9990912021132n7518ed2buf8d8b9307fcc4d18@mail.gmail.com> On Wed, Dec 2, 2009 at 9:17 AM, Rafal Smigrodzki wrote: > > You don't get a 0.98 correlation > between the results of complex, non-trivial transformations of data > > We only know that somehow the > output they got was almost perfectly correlated with GISS but we do > not know how this unusual effect was achieved. Certainly, it could not > have been achieved by independent processing of available data. > When you analyze data that is measuring the same thing (surface temperature), and mostly the same temperature stations, it's easy to get the same results. > ------------------ > > I am no professional of the field, just have a basic understanding of > > physics. I base my position on several things: > > > > 1) temperature is not the only relevant data. We have widespread glacier > > retreat, sea level rise, arctic sea ice loss. > > ### Glacier retreat started long before CO2 started going up. Arctic > ice loss : see here > > http://wattsupwiththat.com/2009/02/03/arctic-sea-ice-increases-at-record-rate/ > , > What's that? An analysis of re-freezing speed after sea ice minimums? What does it mean? The author doesn't come to any conclusion. This picture tells you what happened in the last 30 arctic summers: http://nsidc.org/images/arcticseaicenews/20091005_Figure3.png > the sea level has been going up for 13 000 years until very recently > (see http://en.wikipedia.org/wiki/File:Sea_level_temp_140ky.gif ), > then > dropped, now increased 8 inches in a linear fashion since 1880. What > do you think does all this have to do with CO2? > Well, what's causing the rise since 1880? Your "very recently" is actually 7,000 years ago. You can't see it clearly in your graph, because it's too compressed. Try these two for a better picture: http://en.wikipedia.org/wiki/File:Post-Glacial_Sea_Level.png http://commons.wikimedia.org/wiki/File:Holocene_Sea_Level.png So, what has thrown out a balance that has lasted for 7,000 years? ### And which global warming do you > mean - the one in 1934? 1880? 1998? The planet has been warming and > cooling all the time, out of step with CO2, which is the important > issue here. > Out of step?
This is not the case, and it's very easy to show: you can plot the two curves almost one over the other, like in the first graph of this page: http://www.skepticalscience.com/The-CO2-Temperature-correlation-over-the-20th-Century.html > ------------------------ > > > > 2) basic physics tells us that Earth's energy budget must balance. We can > > easily measure the input (solar), and verify that it's approximately > > constant. Since we are changing the properties of the output (greenhouse > > gases will redirect part of the outgoing radiation downward), internal > > temperature must rise to compensate. It's about as inevitable as putting > a > > coat on, and feeling warmer. > > ### No, Alfio, the Earth is not "basic physics", and it does not have > to obey your notions. You need to read up on aerosols, water vapor, > cloud cover and a lot of other things before you can say what the > Earth "must" do. > If you want to falsify conservation of energy, let me know when you make some progress. > > > > 3) Attribution: there is now 30% more CO2 than before industrial times. > We > > know CO2 greenhouse gas properties. Any conjecture that rejects global > > warming must show where the extra energy trapped by CO2 went. And it's no > > easy task. > > > ### No, all we need to show is the poor correlation between recent CO2 > rise and global temperatures, and this has been shown very clearly. > The graph I linked above from skepticalscience.com will show the "poor correlation" in the proper context. If you prefer numerical results, you will be pleased to know that correlation between CO2 and temperature is something like 0.87. Here is a simple demonstration using linear regression: http://bartonpaullevenson.com/Correlation.html > Rafal > Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Wed Dec 2 19:55:42 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 02 Dec 2009 20:55:42 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <20091202111533.GX17686@leitl.org> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> Message-ID: <4B16C63E.5070100@libero.it> Il 02/12/2009 12.15, Eugen Leitl ha scritto: > On Wed, Dec 02, 2009 at 01:05:36PM +1030, Emlyn wrote: >> How to Talk to a Climate Sceptic > Talking is not an effective way of dealing with a belief system. > Any belief system. It's just not worth the time. If talking is not effective, what is last? Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From alfio.puglisi at gmail.com Wed Dec 2 20:08:50 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 2 Dec 2009 21:08:50 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> Message-ID: <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> On Wed, Dec 2, 2009 at 10:21 AM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > 2009/12/1 Alfio Puglisi : > > > > Look at any plot of the temperature record (1880-present) [...] > > ### Look at the plot 1000 AD - present (but the real one, not from > Mann et. al) What do you see? > > ---------------- > > > False. CRU didn't "maximize" any temperature anomaly [...] > > ### CRU brazenly manipulated proxy data for the pre-instrumental > period, and appear to have fudged their analysis of the instrumental > period as well. > Lindzen was talking about global temperature anomaly, not temperature reconstructions. He's a MIT professor, so I assume he knows the difference. > -------------- > . Lindzen knows full well that water vapor is a feedback, > > and that even if it is carrying most of the natural greenhouse effect on > its > > shoulders, it can't do anything to change Earth's temperature on its own. > > ### Do you understand what you wrote? I certainly don't. > Sorry, English is my second language. What I meant is that you can't influence climate using water vapor, because its residence time in the atmosphere is something like two weeks, and any extra water vapor will soon become rain and fall to the ground. So even if water vapor is the main contributor the overall greenhouse effect, it is irrelevant to the issue of current global warming. Lindzen of course knows this. > > "Even a doubling of CO2 would only upset the original balance between > > incoming and outgoing radiation by about 2%" > > > > 2% is significant when your planet has an average temperature of 290K. > And > > he didn't include the feedbacks (but didn't he talk about water vapor a > few > > lines before? Why not now?) > > ### Because nobody knows the feedback but available data are > consistent with absence of positive feedback. > Unfortunately, ice age data can't be explained without positive feedbacks. Orbital forcings are too small. > > ### Trenberth seems to disagree with you. He thinks it's a "travesty" > that their models cannot account for the present period of cooling. > No, you are reading CRU emails out of context. Trenberth is lamenting the fact that we are missing precise measurement of short-term radiation imbalance and heat transfer, and so we cannot properly model short term variation. Read Trendberth's paper, cited in the same email: "An imperative for climate change planning: tracking Earth?s global energy" http://www.wired.com/images_blogs/threatlevel/2009/11/energydiagnostics09final.pdf Abstract: Planned adaptation to climate change requires information about what is happening and why. While a long-term trend is for global warming, short-term periods of cooling can occur and have physical causes associated with natural variability. 
However, such natural variability means that energy is rearranged or changed within the climate system, and should be traceable. An assessment is given of our ability to track changes in reservoirs and flows of energy within the climate system. Arguments are given that developing the ability to do this is important, as it affects interpretations of global and especially regional climate change, and prospects for the future. ----------------- > > > All the talks Lindzen does about feedbacks is invalidated by ice age > cores. > > Without feedbacks, you can't explain the ice-age / interglacial > alternance. > > ### So why does CO2 start rising only about 800 000 years after the > end of an ice age? What is driving the feedback? > You probably mean 800, which is the typical CO2 rise lag at the end of a glacial period. In that case, the periodical change in Earth's orbit change the solar flux. That's the forcing, CO2 in that context is a feedback. > > --------------------- > > > > It's not the same. (and what is an "anti-skeptic view"? :-) Global > warming > > is a well-developed theory with multiple supporting lines of evidence: > > different kind of observations, and physics-based models. > > ### Anthropogenic CO2 driven catastrophic warming is a lunatic fringe > theory supported by large government bureaucracies. I know your political views. I am disappointed that you can't separate them from scientific arguments. But we do know that at no time in the past 600 million years did Earth > exceed a temperature anomaly of +8C, despite CO2 levels much higher > than today and this is enough to summarily reject the idea that CO2 is > likely to cause catastrophic warming. > You think +8C would be a walk in the park? That's about the difference between an ice age and the present climate. That's 120 meters of sea level rise. > > ------------------ > > Without an obvious falsification, one would need to produce > > an alternative explanation, and the proposed ones (there have been some: > > solar, cosmic rays, ice age rebound and surely some other I'm forgetting) > > didn't survive investigation. > > ### Read up on cosmic rays and aerosols. > I did. Found little to change the overall picture. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Wed Dec 2 20:17:07 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 2 Dec 2009 21:17:07 +0100 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: <4B167DF9.4000305@libero.it> References: <20091202100039.GR17686@leitl.org> <4B167DF9.4000305@libero.it> Message-ID: <4902d9990912021217v490e753aodaf0abd8527a5893@mail.gmail.com> 2009/12/2 Mirco Romanato > > Liberal, market oriented society don't go in war against each other, nor > they go in war against other if not heavily provoked and menaced. > Mobilizing an Army is too costly to be done for long. It divert productive > young people from more useful jobs and clearly is a bad investment. > I am not so sure. The US is arguably the most free-market oriented nation in the world, and still it spends a huge amount of GDP on its military, as much or more than the rest of the world combined. Soviet russia in its days, somewhat the opposite politically, also spent large amounts of money to the same end. I don't see much correlation. Alfio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nanogirl at halcyon.com Wed Dec 2 20:19:00 2009 From: nanogirl at halcyon.com (Gina Miller) Date: Wed, 2 Dec 2009 13:19:00 -0700 Subject: [ExI] An extropian animation!!! Message-ID: My dear Extropians, I finally have an animation that I made with a song I wrote just for you! I know you have been thinking it has been a long time since I made the Rosie the Roboteer image and even longer since my Dermal Display animation and you have been thinking, where is my animation? Well this just might be it. One day while working on a completely different animation (which I still have to go back to!) I started singing to myself "I am a robot..." I decided to write the words down and wrote a little song. Then I decided to record them into my computer and I knew that at some point I would have to make an animation to go along with it so I registered the song with the copyright office. I tried to get back to my other word and put this on the back burner, but I couldn't! So here it is "My Heart is Von Neumann": http://nanogirl.com/museumfuture/myheartisvonneumann.html This was a very personal piece for me, my song, by bot, my hope for how advancing technologies could help us. This isn't a technical piece, it is more an artistic vision of what I am trying to express. I hope you enjoy it - please let me know if you do. Sincerely, your, Nanogirl Gina "Nanogirl" Miller Nanotechnology Industries http://www.nanoindustries.com Personal: http://www.nanogirl.com Animation Blog: http://maxanimation.blogspot.com/ Craft blog: http://nanogirlblog.blogspot.com/ Foresight Senior Associate http://www.foresight.org Nanotechnology Advisor Extropy Institute http://www.extropy.org Email: nanogirl at halcyon.com "Nanotechnology: Solutions for the future." -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Dec 2 20:49:40 2009 From: spike66 at att.net (spike) Date: Wed, 2 Dec 2009 12:49:40 -0800 Subject: [ExI] signal to noise again In-Reply-To: <4902d9990912021132n7518ed2buf8d8b9307fcc4d18@mail.gmail.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com><4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com><7641ddc60912020017m14eb4fe0ub77817ae5fe2c0a6@mail.gmail.com> <4902d9990912021132n7518ed2buf8d8b9307fcc4d18@mail.gmail.com> Message-ID: <1BF00C32B99542A7A98340533AAE8E20@spike> Do let me commend the ExI-chat list on the excellent signal to noise ratio lately. The biggies are posting again, Eugen, Max, Anders, Rafal, Lee, Keith, Damien, the Italians, the others who consistently post smart, interesting stuff (and you know who you are even if I didn't name you) thanks all! These days, reading ExI is well worth the time invested. spike From natasha at natasha.cc Wed Dec 2 21:03:25 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 02 Dec 2009 16:03:25 -0500 Subject: [ExI] signal to noise again In-Reply-To: <1BF00C32B99542A7A98340533AAE8E20@spike> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com><4902d9990912011028h6b93979cp4f2384723541440d@mail.gmail.com><7641ddc60912020017m14eb4fe0ub77817ae5fe2c0a6@mail.gmail.com> <4902d9990912021132n7518ed2buf8d8b9307fcc4d18@mail.gmail.com> <1BF00C32B99542A7A98340533AAE8E20@spike> Message-ID: <20091202160325.xfrguh4k61w804g4@webmail.natasha.cc> Great to have these big heads back on the list, although Damien was never missing in action nor was Keith, et al. Nevertheless, I'm very glad to have 'gene, Max and Anders back and talking. 
Natasha Quoting spike : > > Do let me commend the ExI-chat list on the excellent signal to noise ratio > lately. The biggies are posting again, Eugen, Max, Anders, Rafal, Lee, > Keith, Damien, the Italians, the others who consistently post smart, > interesting stuff (and you know who you are even if I didn't name you) > thanks all! These days, reading ExI is well worth the time invested. > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From painlord2k at libero.it Wed Dec 2 22:02:03 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 02 Dec 2009 23:02:03 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> Message-ID: <4B16E3DB.3060109@libero.it> Il 02/12/2009 21.08, Alfio Puglisi ha scritto: > -------------- . Lindzen knows full well that water vapor is a > feedback, >> and that even if it is carrying most of the natural greenhouse > effect on its >> shoulders, it can't do anything to change Earth's temperature on > its own. > > ### Do you understand what you wrote? I certainly don't. > Sorry, English is my second language. What I meant is that you can't > influence climate using water vapor, because its residence time in > the atmosphere is something like two weeks, and any extra water > vapor will soon become rain and fall to the ground. So even if water > vapor is the main contributor the overall greenhouse effect, it is > irrelevant to the issue of current global warming. Lindzen of course > knows this. I understood both, but you must make a rational and scientific claim that a greenhouse effect exist and what it really is. http://www.americanthinker.com/2009/11/politics_and_greenhouse_gasses.html >> Now, the IPCC "consensus" atmospheric physics model tying CO2 to >> global warming has been shown not only to be unverifiable, but to >> actually violate basic laws of physics. >> >> The analysis comes from an independent theoretical study detailed >> in a lengthy (115 pages), mathematically complex (144 equations, >> 13 data tables, and 32 figures or graphs), and well-sourced (205 >> references) paper prepared by two German physicists, Gerhard >> Gerlich and Ralf Tscheuschner, and published in several updated >> versions over the last couple of years. The latest version appears >> in the March 2009 edition of the International Journal of Modern >> Physics. In the paper, the two authors analyze the greenhouse gas >> model from its origin in the mid-19th century to the present IPCC >> application. http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1161v4.pdf >> Further, they go on to show that any mechanism whereby CO2 in the >> cooler upper atmosphere could exert any thermal enhancing or >> "forcing" effect on the warmer surface below violates both the >> First and Second Laws of Thermodynamics. Well, I suppose that the First and Second Law of Thermodynamics are some inconvenient truths. >> "Even a doubling of CO2 would only upset the original balance >> between incoming and outgoing radiation by about 2%" > 2% is significant when your planet has an average temperature of > 290K. 
And he didn't include the feedbacks (but didn't he talk about > water vapor a few lines before? Why not now?) I think the right number is 0.03% >> ### Because nobody knows the feedback but available data are >> consistent with absence of positive feedback. > Unfortunately, ice age data can't be explained without positive > feedbacks. Orbital forcings are too small. Changing solar energy output? Solar cycles are known. What we don't know is how and why they change. But we know they change. In the last few decades the solar cycles lasted 10-11 years. The current cycle is projected to last 15 years (it is posed to break the number of consecutive spotless days) and during the Maunder Minimum some cycle lasted for 25 years. If not, how do you explain the Little Ice Age? > You probably mean 800, which is the typical CO2 rise lag at the end > of a glacial period. In that case, the periodical change in Earth's > orbit change the solar flux. That's the forcing, CO2 in that context > is a feedback. But, if the CO2 have a forcing effect, the two must compound. Why didn't a "runaway effect" start in the past? Did CO2 changed its physical features in the past? >> It's not the same. (and what is an "anti-skeptic view"? :-) Global >> warming is a well-developed theory with multiple supporting lines >> of evidence: different kind of observations, and physics-based >> models. Physic-based models validated by who? Physics? Is it like the statistical models never validated by any statistician? But it is difficult to validate stuff nobody saw, apart the true prophets. Then there is the mathematics, that could be uncomputable with the current tools: http://www.americanthinker.com/2009/11/the_mathematics_of_global_warm.html > But we do know that at no time in the past 600 million years did > Earth exceed a temperature anomaly of +8C, despite CO2 levels much > higher than today and this is enough to summarily reject the idea > that CO2 is likely to cause catastrophic warming. > You think +8C would be a walk in the park? That's about the > difference between an ice age and the present climate. That's 120 > meters of sea level rise. Given a large part of the industries emitting CO2 are located near the seas, there would be a stopping of emissions after a sea raise of 1 m. So what are you worrying about? Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From painlord2k at libero.it Wed Dec 2 22:12:52 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 02 Dec 2009 23:12:52 +0100 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <62c14240912020829x3e94018cpe9b2e810c11cd5b0@mail.gmail.com> References: <354542.69938.qm@web32008.mail.mud.yahoo.com> <20091202152928.GB17686@leitl.org> <62c14240912020829x3e94018cpe9b2e810c11cd5b0@mail.gmail.com> Message-ID: <4B16E664.8020509@libero.it> Il 02/12/2009 17.29, Mike Dougherty ha scritto: > On Wed, Dec 2, 2009 at 10:29 AM, Eugen Leitl > wrote: > > On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: > > Well, I used to believe otherwise, but you've convinced me you're > > right. > > Oh, wait... > You are cute. Do you like unicorns? > I do. Especially grilled. Mmm... This is blasphemy!!! Unicorns must be roasted. Grilling is for Pegasus Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From msd001 at gmail.com Wed Dec 2 23:15:40 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 2 Dec 2009 18:15:40 -0500 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <4B16E664.8020509@libero.it> References: <354542.69938.qm@web32008.mail.mud.yahoo.com> <20091202152928.GB17686@leitl.org> <62c14240912020829x3e94018cpe9b2e810c11cd5b0@mail.gmail.com> <4B16E664.8020509@libero.it> Message-ID: <62c14240912021515n51297ce0scdb101bd3c3b2ea1@mail.gmail.com> 2009/12/2 Mirco Romanato > Il 02/12/2009 17.29, Mike Dougherty ha scritto: > >> On Wed, Dec 2, 2009 at 10:29 AM, Eugen Leitl > > wrote: >> >> On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: >> > Well, I used to believe otherwise, but you've convinced me you're >> > right. >> > Oh, wait... >> You are cute. Do you like unicorns? >> I do. Especially grilled. Mmm... >> > > This is blasphemy!!! > Unicorns must be roasted. > Grilling is for Pegasus > Oh yeah, they already have that convenient pointy part on one end for turning over the fire. My mis-steak. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 3 00:44:38 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 3 Dec 2009 11:14:38 +1030 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <20091202152928.GB17686@leitl.org> References: <354542.69938.qm@web32008.mail.mud.yahoo.com> <20091202152928.GB17686@leitl.org> Message-ID: <710b78fc0912021644l7124db1bhb946020890043bd1@mail.gmail.com> 2009/12/3 Eugen Leitl : > On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: > >> Well, I used to believe otherwise, but you've convinced me you're right. >> >> Oh, wait... > > You are cute. Do you like unicorns? That sounds like grooming talk from an anime board :-) -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From possiblepaths2050 at gmail.com Thu Dec 3 01:02:10 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 2 Dec 2009 18:02:10 -0700 Subject: [ExI] An extropian animation!!! In-Reply-To: References: Message-ID: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> Gina Miller wrote: My dear Extropians, I finally have an animation that I made with a song I wrote just for you!... This was a very personal piece for me, my song, by bot, my hope for how advancing technologies could help us. This isn't a technical piece, it is more an artistic vision of what I am trying to express. I hope you enjoy it - please let me know if you do. >>> Gina, I was deeply impressed by the visuals, lyrics and your ethereal voice. When I encounter a piece of futurist art in any form that leaves me feeling both giddy and yet somehow chilled, I know the artist has done their job! Please keep it coming, as time allows! : ) John -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Dec 3 01:13:53 2009 From: spike66 at att.net (spike) Date: Wed, 2 Dec 2009 17:13:53 -0800 Subject: [ExI] man controls robotic hand with thoughts Message-ID: Cool! http://www.msnbc.msn.com/id/34237583/ns/health-more_health_news/ spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nanogirl at halcyon.com Thu Dec 3 01:19:03 2009 From: nanogirl at halcyon.com (Gina Miller) Date: Wed, 2 Dec 2009 18:19:03 -0700 Subject: [ExI] An extropian animation!!! In-Reply-To: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> Message-ID: <904BEE227D2944BC848A04E1A162C8B9@3DBOXXW4850> Thank you, thank you, thank you! I've been waiting all day to hear some thoughts on my new animation. This was the first time I wrote my own song and sang for one of my animations so I was especially nervous, I am glad you liked it. And the way you describe your reaction, well it's making me feel the same way. Hearing feedback on my work and knowing I achieved the goal I was striving for is really my biggest reward - even if only for one person. Please, everyone if you get the chance to look at my new animation, let me know your thoughts as well. And thanks again John. Gina "Nanogirl" Miller http://www.nanogirl.com/museumfuture/myheartisvonneumann.html ----- Original Message ----- From: John Grigg To: ExI chat list Sent: Wednesday, December 02, 2009 6:02 PM Subject: Re: [ExI] An extropian animation!!! Gina Miller wrote: My dear Extropians, I finally have an animation that I made with a song I wrote just for you!... This was a very personal piece for me, my song, by bot, my hope for how advancing technologies could help us. This isn't a technical piece, it is more an artistic vision of what I am trying to express. I hope you enjoy it - please let me know if you do. >>> Gina, I was deeply impressed by the visuals, lyrics and your ethereal voice. When I encounter a piece of futurist art in any form that leaves me feeling both giddy and yet somehow chilled, I know the artist has done their job! Please keep it coming, as time allows! : ) John ------------------------------------------------------------------------------ _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Thu Dec 3 01:43:10 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 03 Dec 2009 02:43:10 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> Message-ID: <4B1717AE.4010109@libero.it> Il 01/12/2009 23.16, Alfio Puglisi ha scritto: > My understanding is that all, or almost all, the observed warming is > due to less extreme cold and not to higher temperatures in the > warmer places. > You are correct. And that's exactly what climate models predict for > greenhouse-caused warming. Cold places warm up more than hot places. > Night temperatures go up more than day temperatures. If, for example, > the warming was caused by increased solar output, we would see the opposite. > Anyway, your first point supports only the point that some warming > has occurred, which I'm not disputing. (Even so, why do we see even > more reports of melting ice when there has been no significant > warming for 12 years? That suggests that either the cause is other > than warming, or that reports of ice melting etc. are highly > selective... and selected. That certainly seems to be the case with > regard to polar bears.) 
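Whether a 12-year stretch of annual anomalies shows a trend that is distinguishable from zero is something anyone can check directly from the numbers. A minimal sketch in Python, using invented values rather than any real GISS or HadCRUT series, just to show the arithmetic of the slope and its uncertainty:

# Least-squares trend over a short window of annual anomalies.
# The values below are invented for illustration only; paste in a real
# series from GISS or HadCRUT before reading anything into the result.
import numpy as np

years = np.arange(1998, 2010)                    # 12 years
anoms = np.array([0.63, 0.42, 0.42, 0.54, 0.63,  # invented anomalies, deg C
                  0.62, 0.54, 0.68, 0.64, 0.66,
                  0.54, 0.64])

x = years - years.mean()                         # centred years
slope = np.sum(x * (anoms - anoms.mean())) / np.sum(x ** 2)
resid = anoms - (anoms.mean() + slope * x)       # residuals from the fit
s2 = np.sum(resid ** 2) / (len(years) - 2)       # residual variance
se = np.sqrt(s2 / np.sum(x ** 2))                # standard error of the slope

print("trend: %+.3f +/- %.3f deg C/year (2 sigma)" % (slope, 2 * se))

With only a dozen noisy points the 2-sigma band on the slope is wide, which is one reason the same decade gets read as "no warming" by one side and "warmest on record" by the other.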
> Glaciers are great integrators of climate. The local, for sure. The global not so much. > If temperature goes up, a > glacier will go out of equilibrium and start to melt, but the response > will not be instantaneus. The same is true for reduced moisture in the air, less snow or less rain. > Reaching a new equilibrium takes years. The > current decade has been the warmest on record, and ice melts in > response. If the last few decades had been colder than before, you would > see glaciers growing even if the cooling trend stabilized for some years. The current decade, as we know from actual temperature readings and the words written in the infamous emails, had no warming whatsoever over the measurement error. > About your suggestion that reports of ice mass balance are selected for > the most melting ones... that's a very serious accusation. Have you got > any proof of that kind of selection? Anything? The newspapers' reports for sure. As it confirm the narrative they choose. They are in the infotainment business not the fact reporting business. But, indeed, there are glaciers that grow and glaciers that shrink. http://wattsupwiththat.com/2008/11/27/glaciers-in-norway-alaska-growing-again/ > Go to the world glacier monitoring service: http://www.wgms.ch See for > example: http://www.wgms.ch/mbb/mbb10/sum07.html > And you can't select in Arctic sea ice loss, or Greenland mass balance. > There's only one of each. The data, I see end in the 2007. This is the "hide the decline" trick, again? http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm People can note that the 2009 line is over the 2008 line and the 208 line is over the 2007 line (higher = more ice). But the CO2 emission don't went down. CRU models didn't predict this nor explain this. > "approximately constant" will do for any period less than many millions > of years. The Sun output is still going up, and it's likely to turn the > planet into a desert in a billion of years or so (and a badly burnt > piece of rock at the end) but I'm not blaming it for the current global > warming :-) The Sun energy emission is variable with the time aka Solar Cycle. http://wattsupwiththat.com/2009/11/12/another-parallel-with-the-maunder-minimum/ []cit.]Tree rings from the Urals have more uses than just making hockey sticks. Due to the paucity of sunspots in the Maunder Minimum (1645 ? 1710), C14 data provides the evidence for the presence of solar cycles and their length. According to Makarov and Tlatov, solar cycles averaged 20 years long in the Maunder. In Figure 2 above, solar minima are associated with higher C14 content and are on the top side of the graphic. I have marked the solar minima with vertical blue lines. The blue figures along the x axis are the length of the solar cycles from minimum to minimum in years.[cit] Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.90/2540 - Data di rilascio: 12/02/09 08:33:00 From emlynoregan at gmail.com Thu Dec 3 02:16:21 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 3 Dec 2009 12:46:21 +1030 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <20091202111533.GX17686@leitl.org> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> Message-ID: <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> 2009/12/2 Eugen Leitl : > On Wed, Dec 02, 2009 at 01:05:36PM +1030, Emlyn wrote: > >> How to Talk to a Climate Sceptic > > Talking is not an effective way of dealing with a belief system. > Any belief system. It's just not worth the time. When you start hearing this from Germans, the shit is about to go down! -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From ddraig at gmail.com Thu Dec 3 02:41:57 2009 From: ddraig at gmail.com (ddraig) Date: Thu, 3 Dec 2009 13:41:57 +1100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> Message-ID: 2009/12/3 Emlyn : > 2009/12/2 Eugen Leitl : >> On Wed, Dec 02, 2009 at 01:05:36PM +1030, Emlyn wrote: >> >>> How to Talk to a Climate Sceptic >> >> Talking is not an effective way of dealing with a belief system. >> Any belief system. It's just not worth the time. > > When you start hearing this from Germans, the shit is about to go down! Don't mention the, errrr... thingy Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From spike66 at att.net Thu Dec 3 05:04:32 2009 From: spike66 at att.net (spike) Date: Wed, 2 Dec 2009 21:04:32 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> Message-ID: Oh this one is good. Have a most enjoyable 7 minutes. Keith this one is for you pal. {8-] http://www.youtube.com/watch?v=yjO4duhMRZk spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 3 09:23:59 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 3 Dec 2009 19:53:59 +1030 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> Message-ID: <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> +1 like! 2009/12/3 spike : > Oh this one is good.? Have a most enjoyable 7 minutes.? Keith this one is > for you pal.? 
{8-] > > http://www.youtube.com/watch?v=yjO4duhMRZk > > spike > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From eugen at leitl.org Thu Dec 3 09:30:04 2009 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 3 Dec 2009 10:30:04 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> Message-ID: <20091203093004.GM17686@leitl.org> On Thu, Dec 03, 2009 at 01:41:57PM +1100, ddraig wrote: > > When you start hearing this from Germans, the shit is about to go down! > > Don't mention the, errrr... thingy HITLER, HITLER, HITLER, HITLER! -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From painlord2k at libero.it Thu Dec 3 12:38:08 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 03 Dec 2009 13:38:08 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <20091203093004.GM17686@leitl.org> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> <20091203093004.GM17686@leitl.org> Message-ID: <4B17B130.1040402@libero.it> Il 03/12/2009 10.30, Eugen Leitl ha scritto: > On Thu, Dec 03, 2009 at 01:41:57PM +1100, ddraig wrote: >>> When you start hearing this from Germans, the shit is about to go >>> down! >> Don't mention the, errrr... thingy > HITLER, HITLER, HITLER, HITLER! Well, the same argument is done in Islam. Quran 002:006 As for the Disbelievers, Whether thou warn them or thou warn them not it is all one for them; they believe not. I suppose you already know the fate destined to them: http://www.wikiislam.com/wiki/Mischief No wonder Hitler admired Islam and was friend with the Jerusalem Mufti. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.91/2542 - Data di rilascio: 12/03/09 08:32:00 From painlord2k at libero.it Thu Dec 3 12:38:52 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 03 Dec 2009 13:38:52 +0100 Subject: [ExI] extropy-chat Digest, Vol 75, Issue 2 In-Reply-To: <4902d9990912021217v490e753aodaf0abd8527a5893@mail.gmail.com> References: <20091202100039.GR17686@leitl.org> <4B167DF9.4000305@libero.it> <4902d9990912021217v490e753aodaf0abd8527a5893@mail.gmail.com> Message-ID: <4B17B15C.5060807@libero.it> Il 02/12/2009 21.17, Alfio Puglisi ha scritto: > 2009/12/2 Mirco Romanato > > Liberal, market oriented society don't go in war against each other, > nor they go in war against other if not heavily provoked and menaced. > Mobilizing an Army is too costly to be done for long. 
It divert > productive young people from more useful jobs and clearly is a bad > investment. > I am not so sure. The US is arguably the most free-market oriented > nation in the world, and still it spends a huge amount of GDP on its > military, as much or more than the rest of the world combined. Soviet > Russia in its days, somewhat the opposite politically, also spent large > amounts of money to the same end. I don't see much correlation. The difference is the relative size of the economy. http://www.truthandpolitics.org/military-relative-size.php The % of the GDP spent on defense is shrinking over the time, as the GDP grow. It went over 10% only during the WW2 and a few years during the '50s (build up of nuclear arsenal?). Currently is around 4% (and the disponsable spending share is reducing - more is tied to salaries and fixed costs). USSR spending went, in peace time, from estimated 15% to 25% and maybe more (depend on how data is compounded - like the climate stuff - usually the soviet don't included the money spent to pay the soldiers and their upkeep). 15% or 25% of a smaller economy. Smaller economy than the U.S.S.R. like Ethiopia (Soviet backed) spent 30-50% and more in their military. In effect, these government were in a constant war against their populations, so it is understandable they needed so much money for theirs armies. I think it is smart to devote 5-10% of own earnings to security (depending on the security risks present or foreseeable). It is like an insurance against bad outcomes, where the insurance level help to reduce the bad outcomes frequencies and not to repay for the damages after. We also spend money in funding police (internal security), so why not spending it in external security also? The trick is to spend wisely and the needed (or a bit more) and not too much (that will cripple the economy) or not enough (that will be not useful to dissuade, repel them or preempt them). Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.91/2542 - Data di rilascio: 12/03/09 08:32:00 From bbenzai at yahoo.com Thu Dec 3 12:13:31 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 3 Dec 2009 04:13:31 -0800 (PST) Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: Message-ID: <391421.39510.qm@web32002.mail.mud.yahoo.com> From: Eugen Leitl creepily asked: > On Wed, Dec 02, 2009 at 06:49:38AM -0800, Ben Zaiboc wrote: >> Well, I used to believe otherwise, but you've convinced me you're right. >> >> Oh, wait... > You are cute. Do you like unicorns? Only the pink invisible ones. Wearing teapots. Full of Noodly Goodness. Created Last Thursday. On a (slightly) more serious note, though, there are known methods of changing people's minds. Apart from physical violence, I mean. Simply contradicting someone's beliefs never does any good, of course, but to say that talking never does any good is a simplification, I think. Don't you agree? ;> Ben Zaiboc From cetico.iconoclasta at gmail.com Thu Dec 3 13:29:30 2009 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado) Date: Thu, 3 Dec 2009 11:29:30 -0200 Subject: [ExI] pat condell's latest subtle rant References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> Message-ID: <8C3D968C82EC44CC9C63474817A10BA2@Notebook> Oh this one is good. Have a most enjoyable 7 minutes. Keith this one is for you pal. {8-] http://www.youtube.com/watch?v=yjO4duhMRZk Good one indeed. 
He said exactly what I think, much more eloquently than I ever could. From msd001 at gmail.com Thu Dec 3 13:59:18 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Dec 2009 08:59:18 -0500 Subject: [ExI] The Climate Science Isn't Settled In-Reply-To: <391421.39510.qm@web32002.mail.mud.yahoo.com> References: <391421.39510.qm@web32002.mail.mud.yahoo.com> Message-ID: <62c14240912030559y65147574g224e334be2aab49b@mail.gmail.com> On Thu, Dec 3, 2009 at 7:13 AM, Ben Zaiboc wrote: > On a (slightly) more serious note, though, there are known methods of > changing people's minds. Apart from physical violence, I mean. Simply > contradicting someone's beliefs never does any good, of course, but to say > that talking never does any good is a simplification, I think. > > Don't you agree? ;> > I agree that it is possible to change someone's mind. For someone firmly entrenched in their own ideas, it takes considerable work to first understand them well enough to deconstruct their position and introduce an alternate possibility. Email conversations rarely have the patience to do that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Thu Dec 3 14:41:09 2009 From: max at maxmore.com (Max More) Date: Thu, 03 Dec 2009 08:41:09 -0600 Subject: [ExI] Shale as a renewed source of energy Message-ID: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> I haven't looked into shale in years, so I'm posting this link with an invitation to comment. The piece makes it sound promising as a medium-term energy source. Is shale an answer to the energy question? http://www.msnbc.msn.com/id/34253199/ns/business-oil_and_energy/ Excerpt: "The United States is sitting on over 100 years of gas supply at the current rates of consumption," he said. Because natural gas emits half the greenhouse gases of coal, he added, that "provides the United States with a unique opportunity to address concerns about energy security and climate change." Recoverable U.S. gas reserves could now be bigger than the immense gas reserves of Russia, some experts say. ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From andrii.z at gmail.com Thu Dec 3 12:46:59 2009 From: andrii.z at gmail.com (Andrii Zvorygin) Date: Thu, 3 Dec 2009 07:46:59 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> Message-ID: he seems like an angry confused person. I hope he learns to love himself. Realizing that he has many beliefs. Like science and earth ethics... Oh well, I guess he's having a good time, making new friends, that relate to his vibration. it's all good. :) a little more patience, and calmness, would let him live longer. On Thu, Dec 3, 2009 at 4:23 AM, Emlyn wrote: > +1 like! > > 2009/12/3 spike : > > Oh this one is good. Have a most enjoyable 7 minutes. Keith this one is > > for you pal. 
{8-] > > > > http://www.youtube.com/watch?v=yjO4duhMRZk > > > > spike > > > > > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > > > > -- > Emlyn > > http://emlyntech.wordpress.com - coding related > http://point7.wordpress.com - ranting > http://emlynoregan.com - main site > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrii.z at gmail.com Thu Dec 3 13:24:16 2009 From: andrii.z at gmail.com (Andrii Zvorygin) Date: Thu, 3 Dec 2009 08:24:16 -0500 Subject: [ExI] Climate for Transhumanism Message-ID: What is the best climate for the development of transhumanism? In astrophysics, and they state it was hot in the beginning of the universe, and has been cooling down since. By inference, more advanced things are cooler. Liquid water host-bodies are more advanced than whirl wind host bodies, and are generally cooler. So technological host bodies, cooler than water host-bodies. It's easier to think when it's colder. Can be proved through contradiction, go to a really hot place, and try to do some thinking. One of the reasons I think the Transtopia idea of getting a place in the hot Caribbean, isn't going to lead to much innovation, except perhaps some air conditioning. In the Arctic or Antarctic, where people can think quite clearly, and have much motivation to use technology, from fulfilling everyday desires. To help make their lives easier, via artificial structures for warmth, wind power generators, heating technology, and indoor growing environments. Also if a robot is made, it can travel across a relatively flat surface, and have readily accessible wind power at all times of year, in most locations (other than caves), Also relatively safe from biota, which is spare on the tundra. Though there maybe the occasional curious polar bear, they typically avoid things that don't smell like meat. Best of all, the Arctic is free, much of it remains uninhabited, even by nomadic tribes. Quite open to trans-humanist research. I've developed a boat, designed and modeled, that allows for all year access to the arctic, by traveling on liquid and solid water (waves and ice), http://files.abovetopsecret.com/images/member/b118a1cc8c6a.jpg I have the designs available if someone is interested. http://lokiworld.org/hexagonal.txt designed for two people, it's 11ft long. regular sides, makes for easy building. sails operable from inside, (potentially by computer controls). already technological innovation is creeping in to this cooler lifestyle. So what are your thoughts on climate for transhumanism? -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrii.z at gmail.com Thu Dec 3 15:06:07 2009 From: andrii.z at gmail.com (Andrii Zvorygin) Date: Thu, 3 Dec 2009 10:06:07 -0500 Subject: [ExI] Shale as a renewed source of energy In-Reply-To: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> References: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> Message-ID: On Thu, Dec 3, 2009 at 9:41 AM, Max More wrote: > I haven't looked into shale in years, so I'm posting this link with an > invitation to comment. The piece makes it sound promising as a medium-term > energy source. > > Is shale an answer to the energy question? 
> > sure it's an answer. but more accurately it would be "natural gas". but so is oil, coal, wind, wood, waves. Greenhouse gases are beneficial to the environment. Increasing the amount of carbon available to turn into biomass. Much of the carbon that used to be in the atmosphere is now locked up in sedimentary rock. Burning coal is a great idea, to restore the carbon to the environment. So we could have bigger fruits and vegetables, and thereby biomass in general. Also if we manage to offset global cooling, we'll have more land available for longer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanogirl at halcyon.com Thu Dec 3 15:51:57 2009 From: nanogirl at halcyon.com (Gina Miller) Date: Thu, 3 Dec 2009 08:51:57 -0700 Subject: [ExI] An extropian animation!!! In-Reply-To: <904BEE227D2944BC848A04E1A162C8B9@3DBOXXW4850> References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> <904BEE227D2944BC848A04E1A162C8B9@3DBOXXW4850> Message-ID: "My Heart is Von Neumann" is now up at youtube: http://www.youtube.com/watch?v=AljV-cRyPAM and here are the Lyrics to the song: I am a robot made by a human I want to love you my heart is Von Neumann. You can see through me but I can be deeper I can help you I'll be your keeper help you live longer, repair your destruction I can be your friend programmed by function. We can live forever fix all the wrongs like they were never. We'll go on together. We will save ourselves sail into the universe where science delves we'll discover, that I am a robot made by a human I want to love you my heart is Von Neumann. You can see through me but I can be deeper I can help you I'll be your keeper. I'll be your keeper. I'll be your keeper. I'll be your keeper. -----------end------------ Gina "Nanogirl" Miller http://www.nanogirl.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Thu Dec 3 15:56:53 2009 From: sparge at gmail.com (Dave Sill) Date: Thu, 3 Dec 2009 10:56:53 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> Message-ID: 2009/12/3 Andrii Zvorygin : > he seems like an angry confused person. Angry? Sure. Confused? I don't see it. > I hope he learns to love himself. What makes you think he doesn't? > Realizing that he has many beliefs. I don't him denying having beliefs. -Dave From alfio.puglisi at gmail.com Thu Dec 3 17:07:09 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 3 Dec 2009 18:07:09 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4B16E3DB.3060109@libero.it> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> <4B16E3DB.3060109@libero.it> Message-ID: <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> 2009/12/2 Mirco Romanato > I understood both, but you must make a rational and scientific claim > that a greenhouse effect exist and what it really is. > > http://www.americanthinker.com/2009/11/politics_and_greenhouse_gasses.html > >> Now, the IPCC "consensus" atmospheric physics model tying CO2 to >>> global warming has been shown not only to be unverifiable, but to >>> actually violate basic laws of physics. 
>>> >>> The analysis comes from an independent theoretical study detailed >>> in a lengthy (115 pages), mathematically complex (144 equations, >>> 13 data tables, and 32 figures or graphs), and well-sourced (205 >>> references) paper prepared by two German physicists, Gerhard >>> Gerlich and Ralf Tscheuschner, and published in several updated >>> versions over the last couple of years. The latest version appears >>> in the March 2009 edition of the International Journal of Modern >>> Physics. In the paper, the two authors analyze the greenhouse gas >>> model from its origin in the mid-19th century to the present IPCC >>> application. >>> >> http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1161v4.pdf > > > Further, they go on to show that any mechanism whereby CO2 in the >>> cooler upper atmosphere could exert any thermal enhancing or >>> "forcing" effect on the warmer surface below violates both the >>> First and Second Laws of Thermodynamics. >>> >> > Well, I suppose that the First and Second Law of Thermodynamics are some > inconvenient truths. > >
Hello Mirco, I suggest that you apply a bit of skepticism to your sources: asserting that global warming is against laws of thermodynamics is, frankly, ridiculous. The Gerlich and Tscheuschner paper makes basic mistakes. A good place to start is the realclimate wiki page on the subject: http://www.realclimate.org/wiki/index.php?title=G._Gerlich_and_R._D._Tscheuschner
"Even a doubling of CO2 would only upset the original balance >> between incoming and outgoing radiation by about 2%" >> > 2% is significant when your planet has an average temperature of > 290K. And he didn't include the feedbacks (but didn't he talk about > water vapor a few lines before? Why not now?) > I think the right number is 0.03% > I'm not sure which units you are using. 0.03% of what? If it is in W/m^2, it seems way too small. > > ### Because nobody knows the feedback but available data are >>> consistent with absence of positive feedback. >>> >> Unfortunately, ice age data can't be explained without positive >> feedbacks. Orbital forcings are too small. >> > > Changing solar energy output? > To my knowledge, there is no stellar model that suggests 100 kilo-year cycles with abrupt transitions. But, if the CO2 have a forcing effect, the two must compound. > Why didn't a "runaway effect" start in the past? > Because evidently the positive feedbacks have limits, or other negative feedbacks kick in. For example, the ice-albedo feedback disappears after there is no or little ice during arctic summer. The ice age cores tell us that: 1) there are some positive feedbacks 2) they are not enough to trigger runaway effects under natural conditions. > Then there is the mathematics, that could be uncomputable with the current tools: > http://www.americanthinker.com/2009/11/the_mathematics_of_global_warm.html The opening sentence of the page you linked: "The forecasts of global warming are based on mathematical solutions for equations of weather models. But all of these solutions are inaccurate. Therefore, no valid scientific conclusions can be made concerning global warming." is nonsense. Just because your knowledge is not perfect, it doesn't mean that you can't draw valid conclusions. If that were the case, science would have made no progress since 1600. Alfio
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From alfio.puglisi at gmail.com Thu Dec 3 17:19:29 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 3 Dec 2009 18:19:29 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4B1717AE.4010109@libero.it> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <4B1717AE.4010109@libero.it> Message-ID: <4902d9990912030919q7bbe3c41xc6cc503f13fd115@mail.gmail.com>
2009/12/3 Mirco Romanato > The current decade, as we know from actual temperature readings and the > words written in the infamous emails, had no warming whatsoever over the > measurement error. >
From GISS data I calculate the following averages: 1971-1980: + 0.01 °C 1981-1990: + 0.28 °C 1991-2000: + 0.38 °C 2001-2008: + 0.64 °C Standard deviation less than 0.15 °C in all cases. Do you have different numbers?
> About your suggestion that reports of ice mass balance are selected for >> the most melting ones... that's a very serious accusation. Have you got >> any proof of that kind of selection? Anything? >> > > The newspapers' reports for sure. I don't get my info from newspapers. People can note that the 2009 line is over the 2008 line and the 2008 line is > over the 2007 line (higher = more ice). > > But the CO2 emission don't went down. > > CRU models didn't predict this nor explain this. No current climate model makes predictions on a period of 3 years. Try a minimum of 10-year averages; 30 years is better. Alfio
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From painlord2k at libero.it Thu Dec 3 22:16:46 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 03 Dec 2009 23:16:46 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> <4B16E3DB.3060109@libero.it> <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> Message-ID: <4B1838CE.8010706@libero.it>
Il 03/12/2009 18.07, Alfio Puglisi ha scritto: > http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1161v4.pdf > Well, I suppose that the First and Second Law of Thermodynamics are some > inconvenient truths. > Hello Mirco, > I suggest that you apply a bit of skepticism to your sources: asserting > that global warming is against laws of thermodynamics is, frankly, > ridiculous. The Gerlich and Tscheuschner paper makes basic mistakes. A > good place to start is the realclimate wiki page on the subject: > http://www.realclimate.org/wiki/index.php?title=G._Gerlich_and_R._D._Tscheuschner
The paper I linked replies to the criticism (they updated their paper to v4 to do so), pointing out the errors made by the author of the linked paper (at least the English one; I don't understand German). Also, the paper you linked doesn't show any error in the paper of G & T. It simply states that the greenhouse gas effect exists, approximating the Sun and the Earth as black bodies. I'm surely not an expert, but the G & T claim that it is an error to treat the Sun and the Earth as radiating black bodies seems correct to me, as they are not black bodies, as people can see.
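For reference, the black-body treatment being argued about is just the standard zero-order bookkeeping: take the Sun as a roughly 5800 K black body, dilute its flux out to 1 AU, and balance the absorbed sunlight against the Earth's own thermal emission. A rough sketch in Python with round textbook numbers (real bodies are only approximately black, which is exactly what is in dispute here):

# Zero-order radiative bookkeeping under the black-body approximation.
# Round textbook numbers; nothing here depends on any particular dataset.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5772.0       # effective solar temperature, K
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # mean Earth-Sun distance, m
ALBEDO = 0.3         # fraction of sunlight reflected by the Earth

# Flux arriving at the top of the atmosphere (the "solar constant")
S = SIGMA * T_SUN ** 4 * (R_SUN / AU) ** 2
# Equilibrium temperature of a uniformly emitting Earth with no atmosphere
T_eq = (S * (1.0 - ALBEDO) / (4.0 * SIGMA)) ** 0.25

print("solar constant ~ %.0f W/m^2" % S)          # about 1360, close to the measured value
print("equilibrium temperature ~ %.0f K" % T_eq)  # about 255 K, versus ~288 K observed

The roughly 33 K gap between that 255 K figure and the observed surface average is what the greenhouse literature tries to account for; whether the black-body idealization is good enough for that job is precisely the point being contested.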
Interesting the last sentence of the paper: "The authors express their hope that in the schools around the world the fundamentals of physics will be taught correctly and not by using award-winning \Al Gore" movies shocking every straight physicist by confusing absorption/emission with reflection, by confusing the tropopause with the ionosphere, and by confusing microwaves with shortwaves." > "Even a doubling of CO2 would only upset the original balance > between incoming and outgoing radiation by about 2%" > 2% is significant when your planet has an average temperature of > 290K. And he didn't include the feedbacks (but didn't he talk about > water vapor a few lines before? Why not now?) > I think the right number is 0.03% > I'm not sure which units you are using. 0.03% of what? If it is in > W/m^2, it seems way too small. Citing from the paper "It is obvious that a doubling of the concentration of the trace gas CO2, whose thermal conductivity is approximately one half than that of nitrogen and oxygen, does change the thermal conductivity at the most by 0.03% and the isochoric thermal diffusivity at the most by 0.07 %. These numbers lie within the range of the measuring inaccuracy and other uncertainties such as rounding errors and therefore have no signi ficance at all." > But, if the CO2 have a forcing effect, the two must compound. > Why didn't a "runaway effect" start in the past? > Because evidently the positive feedbacks have limits, or other negative > feedbacks kick in. For example, the ice-albedo feedback disappears after > there is no or little ice during arctic summer. The ice age cores tell > us that: 1) there are some positive feedbacks 2) they are not enough to > trigger runaway effects under natural conditions. The ice cores also say there was first an heating and after a CO2 increase (some decades after). > The opening sentence of the page you linked: > "The forecasts of global warming are based on mathematical solutions for > equations of weather models. But all of these solutions are inaccurate. > Therefore, no valid scientific conclusions can be made concerning global > warming." > is nonsense. Just because your knowledge is not perfect, it doesn't mean > that you can make valid conclusions. If that was the case, science would > have made no progress since 1600. We can differentiate the accuracy of conclusions in two groups: "good enough" and "not good enough" to be used to predict the future. Mirco P.S. Possible experiment: To falsify the theory of G & T it is only needed to build a greenhouse that is filled with air with varying concentration of CO2, but with a "ground" surface where the "sunlight" shine that have is temperature kept stable (at or under the air temperature). Varying the concentration of CO2, the temperature of the surface and his albedo, and the intensity of the light would produce results that prove or disprove their assumption. If a greenhouse-gas effect exist, keeping the ground surface temperature under the temperature of the air would stop any convection (cold gas near the surface would not raise, hotter gas higher would not lower). But the higher air would continue to trap energy and raise its temperature anyway depending on the concentration of CO2. Sound as something not to difficult to setup for physicists, engineers and tech savvy people. Reducing the measurement errors could be tricky. -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.91/2542 - Data di rilascio: 12/03/09 08:32:00 From stathisp at gmail.com Thu Dec 3 22:28:12 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 4 Dec 2009 09:28:12 +1100 Subject: [ExI] Shale as a renewed source of energy In-Reply-To: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> References: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> Message-ID: 2009/12/4 Max More : > I haven't looked into shale in years, so I'm posting this link with an > invitation to comment. The piece makes it sound promising as a medium-term > energy source. > > Is shale an answer to the energy question? > > http://www.msnbc.msn.com/id/34253199/ns/business-oil_and_energy/ > > Excerpt: > "The United States is sitting on over 100 years of gas supply at the current > rates of consumption," he said. Because natural gas emits half the > greenhouse gases of coal, he added, that "provides the United States with a > unique opportunity to address concerns about energy security and climate > change." > > Recoverable U.S. gas reserves could now be bigger than the immense gas > reserves of Russia, some experts say. In addition to shale gas there is coal seam gas and the possibility (economic at higher oil prices) of converting coal to gas. -- Stathis Papaioannou From rob4332000 at yahoo.com Thu Dec 3 22:17:46 2009 From: rob4332000 at yahoo.com (Robert Masters) Date: Thu, 3 Dec 2009 14:17:46 -0800 (PST) Subject: [ExI] Me Message-ID: <30255.90755.qm@web58305.mail.re3.yahoo.com> >From childhood's hour I have not been As others were; I have not seen As others saw; I could not bring My passions from a common spring. >From the same source I have not taken My sorrow; I could not awaken My heart to joy at the same tone; And all I loved, I loved alone. It's all mine. Rob Masters From rob4332000 at yahoo.com Thu Dec 3 22:20:24 2009 From: rob4332000 at yahoo.com (Robert Masters) Date: Thu, 3 Dec 2009 14:20:24 -0800 (PST) Subject: [ExI] Who is Ayn Rand? Message-ID: <462361.30206.qm@web58308.mail.re3.yahoo.com> The works of Alice Rosenbaum... DIRGE WE THE LIVING DEAD THE CLOGGED FOUNTAIN ATLAS SHIT Her culminating achievement was the novel ATLAS SHIT. It is the amazing story of how one speech changed the entire course of history. Alice Rosenbaum was really smart. She told her philosophy professor so. She said she was not yet famous in the history of philosophy--but she would be. Documentary on Alice Rosenbaum: AYN RAND: A SENSE OF DEATH Alice Rosenbaum's son and lover: BRANDEN In Hebrew, "Ben" means "Son of," as in "Ben-Gurion" and "Ben-Hur." Nathan Rosenthal chose to call himself "Branden"--which thus means "son of Rand." He and Rand performed sex rituals together on Rand's bed. The aim of these rituals was to turn her into the "Ayn" (sometimes spelled "Ain"), which in the inner tradition of Judiasm (the Kabbalah) is the supreme and highest source of existence. Alice really went a long way from that day she told her philosophy professor she would rank among Aristotle and Plato. She actually became AYN. But the crucial question is EXACTLY what went on in that incestuous Jewish ritual. Did it include anal penetration? Sub/dom? Rape? Pissing? Shitting? The public has a right to know. Nathaniel Branden has admitted that they had sex, but stopped short of a full confession. That won't do. After all, he was the one who chose to reveal the "affair" as a justification for his own actions. What were the DETAILS of the affair? 
Rob Masters From rob4332000 at yahoo.com Thu Dec 3 22:23:49 2009 From: rob4332000 at yahoo.com (Robert Masters) Date: Thu, 3 Dec 2009 14:23:49 -0800 (PST) Subject: [ExI] we die alone Message-ID: <182452.64981.qm@web58301.mail.re3.yahoo.com> 2 Dec 09 5:23 a.m. I almost died just now. I dreamed that I was drowning, and I remembered that one is supposed to shout at such times. So I started yelling at the top of my voice (in the dream, and then as I woke up) while jumping out of bed and running toward the front door of my apartment. For the first few seconds everything in the apartment looked weird, as if covered with a radiant film; then that effect subsided. Rob Masters robert.masters4 at comcast.net From thespike at satx.rr.com Thu Dec 3 22:57:32 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 03 Dec 2009 16:57:32 -0600 Subject: [ExI] Me In-Reply-To: <30255.90755.qm@web58305.mail.re3.yahoo.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com> Message-ID: <4B18425C.1060109@satx.rr.com> On 12/3/2009 4:17 PM, Robert Masters wrote: > > >> From childhood's hour I have not been > As others were; I have not seen > As others saw; I could not bring > My passions from a common spring. >> From the same source I have not taken > My sorrow; I could not awaken > My heart to joy at the same tone; > And all I loved, I loved alone. > > It's all mine. Report from the ExI readership: "Me too!" "Me too!" "The Cat who walks by himself." "Me too!" "Yo!!" "Me too!" "Me too!" "Oh you say it so beautifully i couldnt have put it so sweetly but yes i feel that way also" "Me too!" "Me too!" "Huh???" "I know what u mean!" "Me too!" "yes I feel exactlty that way, not like anyone else in the whole world." From painlord2k at libero.it Thu Dec 3 22:57:54 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Thu, 03 Dec 2009 23:57:54 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912030919q7bbe3c41xc6cc503f13fd115@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <4B1717AE.4010109@libero.it> <4902d9990912030919q7bbe3c41xc6cc503f13fd115@mail.gmail.com> Message-ID: <4B184272.3020206@libero.it> Il 03/12/2009 18.19, Alfio Puglisi ha scritto: > 2009/12/3 Mirco Romanato > > The current decade, as we know from actual temperature readings and > the words written in the infamous emails, had no warming whatsoever > over the measurement error. > From GISS data I calculate the following averages: > 1971-1980: + 0.01 ?C > 1981-1990: + 0.28 ?C > 1991-2000: + 0.38 ?C > 2001-2008: + 0.64 ?C > Standard deviation less than 0.15 ?C in all cases. Do you have different > numbers? What massaged data set? http://data.giss.nasa.gov/gistemp/station_data/ Here I have to choose: 1) After combining Sources at same location 2) Raw GCHN data + USHCN corrections 3) After homogeneity adjustement http://www.climateaudit.org/?p=1878 The adjustement feast. Do you used the set corrected for the 1934-1998 inversion? http://www.americanthinker.com/blog/2007/08/revised_temp_data_reduces_glob.html Scientist, real ones, record numbers or observations, then keep them and make them available. They don't substitute them with adjusted, better, improved version without clear explanation, citing sources and providing justifications. And always keep the originals. Try doing this with accountability and I think the IRS will not take it lightly. 
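For what it is worth, decadal averages like the ones quoted at the top of this exchange can be recomputed in a few lines from any annual anomaly series; a minimal sketch, assuming a plain two-column year/anomaly text file (degrees C) exported from the public GISTEMP tables, where the file name and layout are illustrative rather than the actual GISS distribution format:

from statistics import mean, stdev

# Load an annual global-mean anomaly series: one "year anomaly" pair per line.
anoms = {}
with open("gistemp_annual.txt") as f:   # hypothetical export, see note above
    for line in f:
        year, value = line.split()
        anoms[int(year)] = float(value)

# Means over the same intervals quoted earlier in the thread.
for start, stop in [(1971, 1980), (1981, 1990), (1991, 2000), (2001, 2008)]:
    vals = [anoms[y] for y in range(start, stop + 1) if y in anoms]
    print("%d-%d: %+.2f C (sd %.2f)" % (start, stop, mean(vals), stdev(vals)))

The averaging itself is trivial; the whole disagreement is about which series, raw or adjusted, goes into it.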
This is like old Muhammed stating that Allah substituted the old suras with better one (when he didn't remember the suras he "revelated" previously). > People can note that the 2009 line is over the 2008 line and the 208 > line is over the 2007 line (higher = more ice). > > But the CO2 emission don't went down. > > CRU models didn't predict this nor explain this. > No current climate model make predictions on a period of 3 years. Try a > minimum of 10 years averages, 30 years is better. Well. I would wait 30 years to verify the predictions, then I would trust the following predictions much more. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.91/2542 - Data di rilascio: 12/03/09 08:32:00 From alfio.puglisi at gmail.com Thu Dec 3 23:12:22 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Fri, 4 Dec 2009 00:12:22 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4B184272.3020206@libero.it> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <4B1717AE.4010109@libero.it> <4902d9990912030919q7bbe3c41xc6cc503f13fd115@mail.gmail.com> <4B184272.3020206@libero.it> Message-ID: <4902d9990912031512s2d38d996n78bb4cc6c06bdbc7@mail.gmail.com> 2009/12/3 Mirco Romanato > Scientist, real ones, record numbers or observations, then keep them and > make them available. > GISS uses publicly available data from GCHN and USHCN ftp sites. If you download the code, it includes a copy of part of the data, and tells you where to download the rest. > The current decade, as we know from actual temperature readings and > the words written in the infamous emails, had no warming whatsoever > over the measurement error. So, since you don't like GCHN and USHCN data, what are these temperature readings that you referring to? Please share. Well. > I would wait 30 years to verify the predictions, then I would trust the > following predictions much more. > Since predictions made in Hansen et al. 1998 are for now on track, you only have 10 more years to wait :-) Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanite1018 at gmail.com Thu Dec 3 23:14:38 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Thu, 3 Dec 2009 18:14:38 -0500 Subject: [ExI] Who is Ayn Rand? In-Reply-To: <462361.30206.qm@web58308.mail.re3.yahoo.com> References: <462361.30206.qm@web58308.mail.re3.yahoo.com> Message-ID: <1E888355-2574-4379-B99D-DE1AA9BCE673@GMAIL.COM> On Dec 3, 2009, at 5:20 PM, Robert Masters wrote: > The works of Alice Rosenbaum... > > > DIRGE > > WE THE LIVING DEAD > > THE CLOGGED FOUNTAIN > > ATLAS SHIT > > > Her culminating achievement was the novel ATLAS SHIT. It is the > amazing story of how one speech changed the entire course of history. I'm not really sure what to make of this. Ayn Rand was not perfect, often tending to be too quick to denounce others morally because they simply disagreed. But her philosophy is quite good in my estimation, and particularly the general outline of her morality and politics (any reason-based epistemology is enough, in my opinion, to serve as the foundation for them). Your post seems almost like a satire of criticisms, but I think it may very well be an actual criticism itself. One clue that it might simply be satirical of criticisms is that you got her name wrong, it was Alisa Rosenbaum, not Alice. 
Have you, by any chance, read any of the above novels, particularly "The Fountainhead" or "Atlas Shrugged"? If you had, it seems unlikely that you could characterize them so unfairly. If you have not, then I suggest that you do before you declare them effectively piles of shit. On a somewhat related note, I find Objectivism quite in tune with the Principles of Extropy. The Principles are individualist and have a strong emphasis on reason and an open economy and society, which is just the same aim as Objectivism. Extropianism, from what I've been able to tell from the articles available online, seems much like Objectivism plus transhuman technologies. Joshua Job nanite1018 at gmail.com From alfio.puglisi at gmail.com Thu Dec 3 23:17:21 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Fri, 4 Dec 2009 00:17:21 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4B1838CE.8010706@libero.it> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> <4B16E3DB.3060109@libero.it> <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> <4B1838CE.8010706@libero.it> Message-ID: <4902d9990912031517y7a86f1fdg45a4e1b5ffd4ff2@mail.gmail.com> 2009/12/3 Mirco Romanato > > I'm not sure which units you are using. 0.03% of what? If it is in >> W/m^2, it seems way too small. >> > > Citing from the paper > > "It is obvious that a doubling of the concentration of the trace gas CO2, > whose thermal conductivity is approximately one half than that of nitrogen > and oxygen, does change the thermal conductivity at the most by 0.03% and > the isochoric thermal diffusivity at the most by 0.07 %. These numbers lie > within the range of the measuring inaccuracy and other uncertainties such as > rounding errors and therefore have no signi ficance at all." Ah, now I understand. G &T derive their number from the thermal capacity of CO2, which of course is a very small number because it has a very small mass compared to the rest of the atmosphere. That percentage ignores the radiation part of the equation, which is the basis of the CO2 greenhouse effect. Just confirming that it's a really, really poor paper. The ice age cores tell > us that: 1) there are some positive feedbacks 2) they are not enough to > trigger runaway effects under natural conditions. > The ice cores also say there was first an heating and after a CO2 increase > (some decades after). Sure, that's clear from the time series. They basically say that, if you increase temperature, CO2 will rise after a while. In our current situation the order is reversed, because we started increasing CO2 first. > > The opening sentence of the page you linked: >> > > "The forecasts of global warming are based on mathematical solutions for >> equations of weather models. But all of these solutions are inaccurate. >> Therefore, no valid scientific conclusions can be made concerning global >> warming." >> > > is nonsense. Just because your knowledge is not perfect, it doesn't mean >> that you can make valid conclusions. If that was the case, science would >> have made no progress since 1600. >> > > We can differentiate the accuracy of conclusions in two groups: > "good enough" and "not good enough" to be used to predict the future. > Possibly, but it needs to be quantified. Actually I'm not sure where to start. 
Treat it as an insurance-like problem? Or like when engineers design dams for 100 or 500 years floods? Current understanding says that there is some X % of serious consequences, so it's acceptable to devote Y% of resources to avoid the problem? The Economist (not exactly a hotbed of environmentalism) has a special report out, that report estimates of the cost at 1% of global GDP per year to limit CO2 at 500ppm. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Dec 4 01:25:15 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 3 Dec 2009 18:25:15 -0700 Subject: [ExI] we die alone In-Reply-To: <182452.64981.qm@web58301.mail.re3.yahoo.com> References: <182452.64981.qm@web58301.mail.re3.yahoo.com> Message-ID: <2d6187670912031725h518525dev8d9971ec68a5d82b@mail.gmail.com> Rob Masters wrote: I almost died just now. I dreamed that I was drowning, and I remembered that one is supposed to shout at such times. So I started yelling at the top of my voice (in the dream, and then as I woke up) while jumping out of bed and running toward the front door of my apartment. For the first few seconds everything in the apartment looked weird, as if covered with a radiant film; then that effect subsided. >>> A strange experience. Has this ever happened to you before? Will you talk to your doctor about it? The fact everything looked radiant for a few seconds after you woke up is mildly alarming to me. I have had some disturbing dreams, but never with that side effect. Take care and know that people care. John -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Fri Dec 4 02:00:13 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 03 Dec 2009 18:00:13 -0800 Subject: [ExI] climategate again In-Reply-To: <20091202111337.GW17686@leitl.org> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> <4B15DDFE.8030807@rawbw.com> <20091202111337.GW17686@leitl.org> Message-ID: <4B186D2D.6070003@rawbw.com> Eugen Leitl wrote: > On Tue, Dec 01, 2009 at 07:24:46PM -0800, Lee Corbin wrote: > >> Why can't we have nice healthy attitudes like >> >> "I was a scientist myself for the longest time, and the people >> I?d gladly drop into a vat of nitric acid start with the Pope >> and go all the way down to anyone who voted for Stephen Harper?s >> conservatives." ---Peter Watts (credit: Damien) > > What a horrible person. You have to use hot Caro's acid, > not nitric acid. And slowly lower them down, starting with the toes. Quite. Thanks for updating the traditional "boiling in oil". And if that was good enough for our ancestors, it should be good enough for us. Lee From lcorbin at rawbw.com Fri Dec 4 02:08:51 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 03 Dec 2009 18:08:51 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> References: <2d6187670912021702y7abccfbtdd262ee20887002@mail.gmail.com> <710b78fc0912030123y1eb377c0iaa0939e7e1d6cdc3@mail.gmail.com> Message-ID: <4B186F33.4050401@rawbw.com> Now hold on, PC fans. Did you not hear the part where Pat Condell came out against the trinity? The trinity of faith, and, , and *righteousness*? He comes out against the righteous? Does he know what it means? On this side of Earth's ocean Atlantic it means right?eous ?? /?ra?t??s/ Show Spelled Pronunciation [rahy-chuhs] Show IPA ?adjective 1. 
characterized by uprightness or morality: a righteous observance of the law. 2. morally right or justifiable: righteous indignation. 3. acting in an upright, moral way; virtuous: a righteous and godly person. 4. Slang. absolutely genuine or wonderful: some righteous playing by a jazz great. (though no telling since PC's a brit) Or.... was he letting the Christians win the wor of wards with *their* debased notion of the word's meaning? See http://en.wikipedia.org/wiki/Righteousness: Actually, If I have pride in anything, it may be that I am more righteous now than I was as a child or teen. Let us not yield this word to those purveyors of darkness. Lee Emlyn wrote: > +1 like! > > 2009/12/3 spike : >> Oh this one is good. Have a most enjoyable 7 minutes. Keith this one is >> for you pal. {8-] >> >> http://www.youtube.com/watch?v=yjO4duhMRZk >> >> spike >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > From cetico.iconoclasta at gmail.com Fri Dec 4 02:15:36 2009 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado) Date: Fri, 4 Dec 2009 00:15:36 -0200 Subject: [ExI] we die alone References: <182452.64981.qm@web58301.mail.re3.yahoo.com> <2d6187670912031725h518525dev8d9971ec68a5d82b@mail.gmail.com> Message-ID: <7DDE0B42E26447EE9017DD8E1144ED05@Notebook> Rob Masters wrote: I almost died just now. I dreamed that I was drowning, and I remembered that one is supposed to shout at such times. So I started yelling at the top of my voice (in the dream, and then as I woke up) while jumping out of bed and running toward the front door of my apartment. For the first few seconds everything in the apartment looked weird, as if covered with a radiant film; then that effect subsided. >>> A strange experience. Has this ever happened to you before? Will you talk to your doctor about it? The fact everything looked radiant for a few seconds after you woke up is mildly alarming to me. I have had some disturbing dreams, but never with that side effect. That?s probably because his eyes were a bit dry. Once he blinked a bunch of times and the eyes became properly wet the radiant film effect went away. From lcorbin at rawbw.com Fri Dec 4 02:21:08 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 03 Dec 2009 18:21:08 -0800 Subject: [ExI] Me In-Reply-To: <4B18425C.1060109@satx.rr.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com> Message-ID: <4B187214.9090903@rawbw.com> Damien Broderick wrote: > On 12/3/2009 4:17 PM, Robert Masters wrote: >> >>> From childhood's hour I have not been >> As others were; I have not seen >> As others saw; I could not bring >> My passions from a common spring. >>> From the same source I have not taken >> My sorrow; I could not awaken >> My heart to joy at the same tone; >> And all I loved, I loved alone. >> >> It's all mine. > > Report from the ExI readership: > > "Me too!", "Me too!", "The Cat who walks by himself.", "Me too!", "Yo!!", "Me too!" > "Me too!", "Oh you say it so beautifully i couldnt have put it so sweetly but yes i > feel that way also", "Me too!", "Me too!", "Huh???", "I know what u mean!", "Me too!" > "yes I feel exactlty that way, not like anyone else in the whole world." Well, let's read that again---carefully this time: > From childhood's hour I have not been As others were; I have not seen As others saw; I could not bring My passions from a common spring. 
> From the same source I have not taken My sorrow; I could not awaken My heart to joy at the same tone; And all I loved, I loved alone. ExI members are like *that*? Hath not an Extropian lungs with which to laugh at the Three Stooges, lips with which to moan at bad puns (and the political opposition)? Hath not an Extropian eyes to see what others do indeed see? (Most often, that is.) Are our passions not aroused from movies and comedies too numerous to refer to? Hath not an Extropian pain and anguish, joining millions in sad recognition of bad trends? Join not Extropians in rooting for Obama or McCain? (Or just plain rutting?) Hath not an Extropian sympathy and infinite regret at our poor friend Hal's great torment? The writer is indeed very special, or has misspoke, or is deluded. All that we love, we love alone? Let anyone but the OP (or his source) say it's true! I dare you. Lee From rob4332000 at yahoo.com Fri Dec 4 02:28:54 2009 From: rob4332000 at yahoo.com (Robert Masters) Date: Thu, 3 Dec 2009 18:28:54 -0800 (PST) Subject: [ExI] Assassination tango Message-ID: <976799.19921.qm@web58305.mail.re3.yahoo.com> 3 Dec 09 6:21 p.m. Another attempt. Again I woke up disoriented, COUGHING. Hint: coughing helps. Rob Masters From spike66 at att.net Fri Dec 4 02:47:18 2009 From: spike66 at att.net (spike) Date: Thu, 3 Dec 2009 18:47:18 -0800 Subject: [ExI] we die alone while ogling the divers In-Reply-To: <182452.64981.qm@web58301.mail.re3.yahoo.com> References: <182452.64981.qm@web58301.mail.re3.yahoo.com> Message-ID: <00A4F8668B384C2584840D149DD22DCE@spike> > ...On Behalf Of Robert Masters ... > > 2 Dec 09 > 5:23 a.m. > > I almost died just now. I dreamed that I was drowning, and I > remembered that one is supposed to shout at such times. So I > started yelling at the top of my voice (in the dream, and > then as I woke up).....Rob Masters Wrong-o, so very wrong Rob me lad. If you have drowning dreams very often it might indicate sleep apnea, so you need to see the medics about that. But I have found a much better approach. Whenever I dream I are drowning, instead of shouting, I dream that I discover that I can breathe water. How cool would that be! You could become the greatest undersea wildlife observer in history. You could go into the Bond James Bond line of business. You play some excellent gags with that skill, scare the piss out of people by unexpectedly emerging from the swamp, that sorta thing. Your wife would ask why it is that you so often wake up laughing. But even better, you could swim along the bottom of the lake to position yourself underneath the diving board. No one would see you down there and you wouldn't make a bunch of bubbles as a scuba diver would, so you could quietly observe the diving ladies having the occasional wardrobe malfunction. THAT would be a SERIOUSLY cool dream. Now why would you want to shout and mess up that? spike From spike66 at att.net Fri Dec 4 02:54:34 2009 From: spike66 at att.net (spike) Date: Thu, 3 Dec 2009 18:54:34 -0800 Subject: [ExI] Who is Ayn Rand? In-Reply-To: <1E888355-2574-4379-B99D-DE1AA9BCE673@GMAIL.COM> References: <462361.30206.qm@web58308.mail.re3.yahoo.com> <1E888355-2574-4379-B99D-DE1AA9BCE673@GMAIL.COM> Message-ID: <9031C3E58CA24514AB0A16C17C000795@spike> > > ...Ayn Rand was not > perfect, often tending to be too quick to denounce others > morally because they simply disagreed... Joshua Job I disagree! Ayn Rand WAS perfect, you depraved and iniquitous philistine, you debauched barbarian! Well, maybe a little imperfect. 
{8^D spike From sockpuppet99 at hotmail.com Fri Dec 4 02:35:20 2009 From: sockpuppet99 at hotmail.com (Sockpuppet99@hotmail.com) Date: Thu, 3 Dec 2009 19:35:20 -0700 Subject: [ExI] Assassination tango In-Reply-To: <976799.19921.qm@web58305.mail.re3.yahoo.com> References: <976799.19921.qm@web58305.mail.re3.yahoo.com> Message-ID: I think you have a vagus nerve issue. Tom D Sent from my iPod On Dec 3, 2009, at 7:28 PM, Robert Masters wrote: > > > > 3 Dec 09 > 6:21 p.m. > > > Another attempt. Again I woke up disoriented, COUGHING. Hint: > coughing helps. > > > Rob Masters > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From msd001 at gmail.com Fri Dec 4 03:22:49 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Dec 2009 22:22:49 -0500 Subject: [ExI] we die alone In-Reply-To: <7DDE0B42E26447EE9017DD8E1144ED05@Notebook> References: <182452.64981.qm@web58301.mail.re3.yahoo.com> <2d6187670912031725h518525dev8d9971ec68a5d82b@mail.gmail.com> <7DDE0B42E26447EE9017DD8E1144ED05@Notebook> Message-ID: <62c14240912031922s3cd507b7k728fe83e00b29293@mail.gmail.com> On Thu, Dec 3, 2009 at 9:15 PM, Henrique Moraes Machado < cetico.iconoclasta at gmail.com> wrote: > Rob Masters wrote: > I almost died just now. I dreamed that I was drowning, and I remembered > that one is supposed to shout at such times. So I started yelling at the > top of my voice (in the dream, and then as I woke up) while jumping out of > bed and running toward the front door of my apartment. For the first few > seconds everything in the apartment looked weird, as if covered with a > radiant film; then that effect subsided. > >> >>>> > A strange experience. Has this ever happened to you before? Will > you talk to your doctor about it? The fact everything looked radiant for a > few seconds after you woke up is mildly alarming to me. I have had some > disturbing dreams, but never with that side effect. > > That?s probably because his eyes were a bit dry. Once he blinked a bunch of > times and the eyes became properly wet the radiant film effect went away. > > I almost died: i expected this to be followed with happened. I dreamed I was drowning: I expected that you had fallen asleep on a boat and that your dream was also real. I remembered...to shout: I guess that's so someone can help? though I'm not sure shouting is possible while drowning. jumping...and running: That sounds very lively, hardly like dieing at all. few seconds...everything...looked weird: Interesting judgment for one so nearly dead. that effect subsided: It still looks that way, you've just grown accustomed to it. :) ... Then what happened? I kept waiting to hear about how you almost died. Reminds me of the opening (and closing) line to the movie Fallen.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Dec 4 03:10:06 2009 From: spike66 at att.net (spike) Date: Thu, 3 Dec 2009 19:10:06 -0800 Subject: [ExI] Me In-Reply-To: <4B187214.9090903@rawbw.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com><4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com> Message-ID: <47E32132FC994EFCB13F116C898940AC@spike> > ...On Behalf Of Lee Corbin > ... > > Hath not an Extropian sympathy and infinite regret at our > poor friend Hal's great torment?... Lee Has anyone heard from Hal in the past few weeks? 
If you post to him offlist, do let him know his friends and fans over here worry and wish him the best under trying circs. spike From robert.bradbury at gmail.com Fri Dec 4 04:45:25 2009 From: robert.bradbury at gmail.com (Robert Bradbury) Date: Thu, 3 Dec 2009 23:45:25 -0500 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912031517y7a86f1fdg45a4e1b5ffd4ff2@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> <4B16E3DB.3060109@libero.it> <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> <4B1838CE.8010706@libero.it> <4902d9990912031517y7a86f1fdg45a4e1b5ffd4ff2@mail.gmail.com> Message-ID: Without meaning to be a nay-sayer -- because I believe global warming is real (lost glaciers and the loss of ice at the poles being the most obvious demonstrations) -- I hope you all do realize that the problem once molecular nanotechnology becomes available to the average citizen (i.e. 10 kg of nanorobots per person) the problem flips completely in the opposite direction). The problem is not excess CO2 in the atmosphere but a complete shortage of CO2 and the extinction of most plant life and microorganisms (at least those that depend on photosynthesis) on the planet. Or have you all forgotten (or overlooked) why I called it "Sapphire Mansions" rather than "Diamond Mansions"? (Note that the shift required from atmospheric carbon overabundance to atmospheric carbon shortages is much more significant than the current global warming debate because the costs for solving the first in terms of shifting from the archived carbon resources to sustainable resources are non-trivial (but essentialy standard of living questions). In contrast the shift from rampant harvesting of carbon from the atmosphere to abiding by ones "reasonable" resource limit when the the resources are effectively free -- i.e. one only has moral and/or legal persuasion -- and those may not be enough (one has to change human nature in a significant way in a very short period of time). So exerting effort in this area (debating whether global warming is real or fiction) is a complete waste of time (given that we can envision technological solutions for turning the problem completely upside down). Instead you should be designing molecular nanoparts or working towards the funding of their realization (while at the same time being as green as one can -- if only for the simple reason that one has to shift the economy and the framework in which humanity operates in the direction of millenia sustainability -- because "unsustainability" leads to bubbles and crashes and we would like to avoid more of these over the next century and longer). If I were to take a general survey, even of serious scientists, I don't think they would push "real" nanotechnology out beyond ~50 years, the most severe pressure to develop it around the time that current photolithographic methods start becoming very very hard to improve. And that implies, that unless Eric, Ralph and Robert are wrong -- global cooling rather than global warming is the real problem we face. The barrier is extremely low at this point -- Nanoengineer-1 from Nanorex is free to download. A good organic chemistry textbook might cost $50-$100 (or presumably many can be downloaded). 
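To put rough numbers on that carbon-shortage worry, a back-of-envelope sketch; every figure below is a round illustrative assumption (atmospheric inventory, population, and especially the per-capita harvest, which refers to hypothetical structural carbon and not to the 10 kg of nanorobots mentioned above):

CO2_PPM = 385.0      # approximate atmospheric CO2 concentration, by volume
ATM_MASS = 5.15e18   # total mass of the atmosphere, kg
# mole fraction -> CO2 mass -> carbon mass (molar masses 44, 28.97 and 12 g/mol)
carbon_stock = ATM_MASS * (CO2_PPM * 1e-6) * (44.0 / 28.97) * (12.0 / 44.0)

PEOPLE = 1.0e10            # ten billion people, for round numbers
KG_C_PER_PERSON = 1.0e5    # assume 100 tonnes of harvested structural carbon each
carbon_demand = PEOPLE * KG_C_PER_PERSON

print("carbon now in the atmosphere: %.0f GtC" % (carbon_stock / 1e12))       # roughly 800 GtC
print("hypothetical construction demand: %.0f GtC" % (carbon_demand / 1e12))  # 1000 GtC

With those assumptions the demand is the same order as the entire atmospheric stock, which is the sense in which unconstrained harvesting could flip the problem from too much CO2 to too little.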
So in scanning this thread I was left with the thought -- "Haven't you all got better things to do?" Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Fri Dec 4 04:51:18 2009 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 4 Dec 2009 15:21:18 +1030 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <20091203093004.GM17686@leitl.org> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <710b78fc0912011835u18559882ga8d6631238f6e13@mail.gmail.com> <20091202111533.GX17686@leitl.org> <710b78fc0912021816s28f09ff2x818b034c0f3de137@mail.gmail.com> <20091203093004.GM17686@leitl.org> Message-ID: <710b78fc0912032051o20ddc3cas101977cb60d83b0d@mail.gmail.com> 2009/12/3 Eugen Leitl : > On Thu, Dec 03, 2009 at 01:41:57PM +1100, ddraig wrote: > >> > When you start hearing this from Germans, the shit is about to go down! >> >> Don't mention the, errrr... thingy > > HITLER, HITLER, HITLER, HITLER! Best. Godwin. Ever. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From pharos at gmail.com Fri Dec 4 10:18:00 2009 From: pharos at gmail.com (BillK) Date: Fri, 4 Dec 2009 10:18:00 +0000 Subject: [ExI] we die alone In-Reply-To: <182452.64981.qm@web58301.mail.re3.yahoo.com> References: <182452.64981.qm@web58301.mail.re3.yahoo.com> Message-ID: On 12/3/09, Robert Masters wrote: > I almost died just now. I dreamed that I was drowning, Then I realized that my hot water bottle had burst during the night. Then I remembered that I don't have a hot water bottle................ BillK From mbb386 at main.nc.us Fri Dec 4 11:11:39 2009 From: mbb386 at main.nc.us (MB) Date: Fri, 4 Dec 2009 06:11:39 -0500 (EST) Subject: [ExI] Assassination tango In-Reply-To: <976799.19921.qm@web58305.mail.re3.yahoo.com> References: <976799.19921.qm@web58305.mail.re3.yahoo.com> Message-ID: <50300.12.77.168.224.1259925099.squirrel@www.main.nc.us> > > > > 3 Dec 09 > 6:21 p.m. > > > Another attempt. Again I woke up disoriented, COUGHING. Hint: coughing helps. > > Perhaps you're getting a cold. My neighbor has a cold. My boss just got over one. I'm hopeful, but that's all. A cold coming on would make your breathing odd and might offer up the knocking you heard, the dry eye radiance thing, and the coughing. Take care of yourself. Regards, MB From pharos at gmail.com Fri Dec 4 11:20:06 2009 From: pharos at gmail.com (BillK) Date: Fri, 4 Dec 2009 11:20:06 +0000 Subject: [ExI] Me In-Reply-To: <4B187214.9090903@rawbw.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com> Message-ID: On 12/4/09, Lee Corbin wrote: > The writer is indeed very special, or has misspoke, > or is deluded. > > All that we love, we love alone? Let anyone but > the OP (or his source) say it's true! I dare you. > > Indeed. The writer was a tortured soul. Although undated, it is thought to have been written when Poe was about 20 years old. In the middle of his teenage angst, gothic period. (Which pretty much continued for the rest of his short life). BillK From sparge at gmail.com Fri Dec 4 13:58:25 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 4 Dec 2009 08:58:25 -0500 Subject: [ExI] Climate Change Lectures Message-ID: Passing this on. Looks interesting, but haven't watched them all yet. 
"David Archer is a professor at the University of Chicago doing research on CO2/ climate change and has written a couple of good books on the topic. He teaches a class for non-science majors on the subject called "Global Warming - Understanding the Forecast" and during the Fall 2009 quarter he videotaped the lectures and has made them publicly available (he even makes reference to the current email hacking controversy in one lecture). Viewing them requires Quicktime... For those interested the link to the lectures: http://geoflop.uchicago.edu/forecast/docs/lectures.html " -Dave From painlord2k at libero.it Fri Dec 4 14:15:47 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 04 Dec 2009 15:15:47 +0100 Subject: [ExI] The Climate Science Isn't Settled [was: Re: climategate again] In-Reply-To: <4902d9990912031517y7a86f1fdg45a4e1b5ffd4ff2@mail.gmail.com> References: <200912011907.nB1J6x5P029237@andromeda.ziaspace.com> <4902d9990912011416p8693c44lc520c52ac5c3b1ea@mail.gmail.com> <7641ddc60912020121m45222eeapaec7b31a2d8ab598@mail.gmail.com> <4902d9990912021208v3b77eed1g81f47f16566c79c3@mail.gmail.com> <4B16E3DB.3060109@libero.it> <4902d9990912030907y6b49afa3i1172d7d33d6043a5@mail.gmail.com> <4B1838CE.8010706@libero.it> <4902d9990912031517y7a86f1fdg45a4e1b5ffd4ff2@mail.gmail.com> Message-ID: <4B191993.5050502@libero.it> Il 04/12/2009 0.17, Alfio Puglisi ha scritto: > > > 2009/12/3 Mirco Romanato > > > > I'm not sure which units you are using. 0.03% of what? If it is in > W/m^2, it seems way too small. > > > Citing from the paper > > "It is obvious that a doubling of the concentration of the trace gas > CO2, whose thermal conductivity is approximately one half than that > of nitrogen and oxygen, does change the thermal conductivity at the > most by 0.03% and the isochoric thermal diffusivity at the most by > 0.07 %. These numbers lie within the range of the measuring > inaccuracy and other uncertainties such as rounding errors and > therefore have no significance at all." > > > Ah, now I understand. G &T derive their number from the thermal > capacity of CO2, which of course is a very small number because it > has a very small mass compared to the rest of the atmosphere. That > percentage ignores the radiation part of the equation, which is the > basis of the CO2 greenhouse effect. Just confirming that it's a > really, really poor paper. You never red the paper, so judging it from a sentence is a bit prejudiced, I think. This is only in the first part of the paper, where they lay the foundation of their case. This is needed to establish that conduction and radiation are not how the heat is transmitted into the air. > The ice age cores tell us that: 1) there are some positive feedbacks > 2) they are not enough to trigger runaway effects under natural > conditions. The ice cores also say there was first an heating and > after a CO2 increase (some decades after). > Sure, that's clear from the time series. They basically say that, if > you increase temperature, CO2 will rise after a while. In our current > situation the order is reversed, because we started increasing CO2 > first. You say so, but a scientific explanation of the former and the latter is due. You can not reverse the cause and the effect and say the same mechanics worked in the reverse. The explanation of the first is raising temperature liberated CO2 from their sinks. CO2 having no real greenhouse effect did nothing. If CO2 would have a greenhouse effect so strong as you claim, would have caused supplemental heating. 
But, IIRC, the CO2 continued to climb up even after the temperatures started to go down, following them after a delay. So, or the greenhouse effect of the CO2 don't exist, or some natural phenomenon is much stronger than the CO2. The Sun, maybe? > Possibly, but it needs to be quantified. Actually I'm not sure where > to start. Treat it as an insurance-like problem? Wrong. You can not insure against a rainy day. Because all will feel the effect of the rain. There is no sharing of different individual risks. Only self-insurance would work, here. > Or like when engineers design dams for 100 or 500 years floods? I don't know many design that are done to last so much. Usually this is a byproduct of designing for safety. And, even the N.O. floods was more a byproduct of not enough maintenance than not designing for safety enough. Occur to me that the costs to build a dam system able to protect New Orleans from a Cat. 5 Hurricane would cost so much to bankrupt the city itself. So, wisely, it was not done. What could be done, but was not done, was to use the buses used to bus around schoolboys and schoolgirls to move out of the city, in safer places, the people, until the storm end. There are [not so] pretty sat photos (courtesy of Google Earth and the curious eyes of interested people) showing all these buses underwater, in their parking lots. The Major never gave the orders needed, and the Police preferred to join the looters or flee than doing their jobs. What is the point to give to corrupted people the money for gargantuan projects, when the best they will do is steal it and waste it. Already, in Denmark, UK and other places there are investigation about frauds and organize crime involved with carbon trading. > Current understanding says that there is some X % of serious > consequences, so it's acceptable to devote Y% of resources to avoid > the problem? Where and when any problem was avoided paying more taxes that would be diverted to unrelated spending? Mr. Gore will become billionaire with his investment if carbon trading. That, for what I can understand, is only a way to sell indulgences (as stated by a first hour, grandaddy, of the AGW theory. The problem is the real goal of the greens/leftist. They want destroy capitalism, because they don't like it, and any reason is good to justify this goal. And whatever the price others will pay is immaterial. For this, it is enough to see they dishonesty of their positions. They are against CO2 emission, but don't want nuclear energy. The talk much of wind power, but don't want it in their backyard or to save some pests. Not to talk about the fact they want others to subside their projects. If these projects are sustainable, what subsides are for? > The Economist (not exactly a hotbed of environmentalism) The Economist that tasked Tana de Zulueta to write about Italy politics? After many passages in TV show where she was presented only as an impartial Economist journalist, she would be candidate and elected for the post communist party (PDS, now PD). The same newspaper that back only the leftist governments of Italy and is fed practically only by leftist Italian journalists about the news and their interpretations? > has a special report out, that report estimates of the cost at 1% of > global GDP per year to limit CO2 at 500ppm. From 350 ppm? So we devote 1% to obtain an increase anyway in 90 years. If we devote the same money to adapt to the changes we would spend less. 
We could devote much less and seed the seas so there is more carbon capturing. Last time I checked, the Kyoto protocols would cause the globe to delay of three years over a century the same results. Killing the economies of the world in the meantime (with many billions starving). I keep the CO2 and any warming predicted and I'm sure I and the rest of planet population will live fine anyway. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.93/2544 - Data di rilascio: 12/04/09 08:32:00 From lcorbin at rawbw.com Fri Dec 4 15:24:18 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 04 Dec 2009 07:24:18 -0800 Subject: [ExI] Me In-Reply-To: References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com> Message-ID: <4B1929A2.7000602@rawbw.com> BillK wrote: > On 12/4/09, Lee Corbin wrote: > >> The writer is indeed very special, or has misspoke, >> or is deluded. >> >> All that we love, we love alone? Let anyone but >> the OP (or his source) say it's true! I dare you. > > Indeed. The writer was a tortured soul. > > Oh, for heaven's sake! The extra ">" at two places in > From childhood's hour I have not been As others were; I have not seen As others saw; I could not bring My passions from a common spring. > From the same source I have not taken My sorrow; I could not awaken My heart to joy at the same tone; And all I loved, I loved alone. did make me suspect that the OP was not the original source. Thanks so much, BillK, for doing the simple and obvious---and tracking it down. It was Poe. It figures. > Although undated, it is thought to have been written when Poe was > about 20 years old. In the middle of his teenage angst, gothic period. > (Which pretty much continued for the rest of his short life). Oh, I'll bet he had his ups and downs during the rest (January 19, 1809 ? October 7, 1849). But how bad one's thirties or forties are must vary hugely. Those of us who have mild to moderate, or moderate to severe depression from time to time, just can have no concept of how unrelentingly bad it can be are the intensely and chronically depressed. Some suffer from depression, or as they used to say, melancholia, their whole lives. It all takes us straight back to www.hedweb.org, and the straight facts that in our benighted era so little effort is focused on happy pills that would have very few or no damaging side effects. Lee From thespike at satx.rr.com Fri Dec 4 15:25:45 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 04 Dec 2009 09:25:45 -0600 Subject: [ExI] Me and Poe In-Reply-To: References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com> Message-ID: <4B1929F9.8050407@satx.rr.com> On 12/4/2009 5:20 AM, BillK wrote: > The writer was a tortured soul. > > > > Although undated, it is thought to have been written when Poe was > about 20 years old. In the middle of his teenage angst, gothic period. Good catch! The teen loneliness and misery comes through with great precision. And, pace Lee Corbin, I'd say quite a few people on a list such as this would have suffered such an agony in eight fits. It reminded me of a tragically hip note I jotted down in a diary on a teenaged birthday: "Now I am 19, and still the world is full of lies & tears." It still is, but you learn to cope, mostly. 
Damien Broderick From spike66 at att.net Fri Dec 4 17:42:36 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 09:42:36 -0800 Subject: [ExI] Me and Poe In-Reply-To: <4B1929F9.8050407@satx.rr.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com><4B187214.9090903@rawbw.com> <4B1929F9.8050407@satx.rr.com> Message-ID: > ...On Behalf Of Damien Broderick > ...And, pace Lee Corbin, I'd say quite a few > people on a list such as this would have suffered such an > agony... Damien Broderick Ja, and I would agree that often some of one's best thinking and writing is done during periods of this kind of agony. spike From nanogirl at halcyon.com Fri Dec 4 18:11:37 2009 From: nanogirl at halcyon.com (Gina Miller) Date: Fri, 4 Dec 2009 11:11:37 -0700 Subject: [ExI] Me and Poe In-Reply-To: References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com><4B187214.9090903@rawbw.com><4B1929F9.8050407@satx.rr.com> Message-ID: My version of Poe: http://www.youtube.com/watch?v=_ywkv1urqP0 Gina "Nanogirl" Miller www.nanogirl.com ----- Original Message ----- From: "spike" To: "'ExI chat list'" Sent: Friday, December 04, 2009 10:42 AM Subject: Re: [ExI] Me and Poe > > >> ...On Behalf Of Damien Broderick > >> ...And, pace Lee Corbin, I'd say quite a few >> people on a list such as this would have suffered such an >> agony... Damien Broderick > > Ja, and I would agree that often some of one's best thinking and writing > is > done during periods of this kind of agony. > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From cetico.iconoclasta at gmail.com Fri Dec 4 19:41:20 2009 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado) Date: Fri, 4 Dec 2009 17:41:20 -0200 Subject: [ExI] Is Mr. Henson somehow involved in this? Message-ID: Solar Plant in Space Gets Go-Ahead http://greeninc.blogs.nytimes.com/2009/12/03/solar-plant-in-space-gets-go-ahead/ From painlord2k at libero.it Fri Dec 4 19:44:16 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 04 Dec 2009 20:44:16 +0100 Subject: [ExI] Shale as a renewed source of energy In-Reply-To: References: <200912031441.nB3EfNrK007887@andromeda.ziaspace.com> Message-ID: <4B196690.4070705@libero.it> Il 03/12/2009 23.28, Stathis Papaioannou ha scritto: > 2009/12/4 Max More: >> I haven't looked into shale in years, so I'm posting this link with an >> invitation to comment. The piece makes it sound promising as a medium-term >> energy source. >> >> Is shale an answer to the energy question? >> >> http://www.msnbc.msn.com/id/34253199/ns/business-oil_and_energy/ >> >> Excerpt: >> "The United States is sitting on over 100 years of gas supply at the current >> rates of consumption," he said. Because natural gas emits half the >> greenhouse gases of coal, he added, that "provides the United States with a >> unique opportunity to address concerns about energy security and climate >> change." >> >> Recoverable U.S. gas reserves could now be bigger than the immense gas >> reserves of Russia, some experts say. > > In addition to shale gas there is coal seam gas and the possibility > (economic at higher oil prices) of converting coal to gas. CTL is economic at the current price level (>50-60$). The Air Force financed the development of a pilot plant. The question is how the price level will move. 
Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.93/2544 - Data di rilascio: 12/04/09 08:32:00 From robert.bradbury at gmail.com Fri Dec 4 19:55:24 2009 From: robert.bradbury at gmail.com (Robert Bradbury) Date: Fri, 4 Dec 2009 14:55:24 -0500 Subject: [ExI] Spirited molecules In-Reply-To: <20091202124822.8c68a152@secure.ericade.net> References: <20091202124822.8c68a152@secure.ericade.net> Message-ID: Well, I had some difficulty reading this post as I do not read the ExICh list frequently (due to its gestapo policies). But this topic attracted my attention. Thel unbonded "azide" molecule (with the formula CN4) does not exist (to the best of my knowledge to assay it). The best using Wikipedia that I have been able to find is possibly N3- and therefore molecules such as NaN3 (sodium azide). The statement by Derek with respect to a "Cyanogen azide" suggests a C2N2 bonded to a N4 molecule -- which I fail to understand (I can posit plausible explanations for the distribution of the electrons (around many molecules) -- but I cannot posit how it is created or its actual normal chemical makeup. Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Dec 4 20:16:17 2009 From: pharos at gmail.com (BillK) Date: Fri, 4 Dec 2009 20:16:17 +0000 Subject: [ExI] Is Mr. Henson somehow involved in this? In-Reply-To: References: Message-ID: On 12/4/09, Henrique Moraes Machado wrote: > Solar Plant in Space Gets Go-Ahead > http://greeninc.blogs.nytimes.com/2009/12/03/solar-plant-in-space-gets-go-ahead/ > > I very much doubt that Keith is involved. He's a very practical engineer type. Quote from the article: Still, Mr. Spirnak, who previously ran space shuttle flights for the United States Air Force, acknowledged that putting a solar power plant in space would cost a few billion dollars more than a terrestrial photovoltaic farm generating the equivalent amount of electricity. ---------- Terrestrial solar power farms are already running and more are being built - for a fraction of the cost of this plan. This is California politics snatching at anything that might help it meet its renewable energy mandates. It won't cost California anything if it fails. It will be the investors that lose their money. BillK From bbenzai at yahoo.com Fri Dec 4 19:50:43 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 4 Dec 2009 11:50:43 -0800 (PST) Subject: [ExI] pat condell's latest subtle rant In-Reply-To: Message-ID: <11077.38275.qm@web32006.mail.mud.yahoo.com> Andrii Zvorygin wrote: > he seems like an angry confused person. I very much doubt that he's confused. But angry? About the greatest evil that has ever existed? Yes, I should think so. Ben Zaiboc From possiblepaths2050 at gmail.com Fri Dec 4 21:20:11 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 4 Dec 2009 14:20:11 -0700 Subject: [ExI] Assassination tango In-Reply-To: <50300.12.77.168.224.1259925099.squirrel@www.main.nc.us> References: <976799.19921.qm@web58305.mail.re3.yahoo.com> <50300.12.77.168.224.1259925099.squirrel@www.main.nc.us> Message-ID: <2d6187670912041320w402c67ccj55174fa8a88f82e7@mail.gmail.com> "Is there a doctor in the house (errr..., the list!)?" LOL John : ) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From possiblepaths2050 at gmail.com Fri Dec 4 21:26:43 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 4 Dec 2009 14:26:43 -0700 Subject: [ExI] Me and Poe In-Reply-To: References: <30255.90755.qm@web58305.mail.re3.yahoo.com> <4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com> <4B1929F9.8050407@satx.rr.com> Message-ID: <2d6187670912041326j5e92607dw6523971c980f072@mail.gmail.com> Once again Gina has created an impressive work. Around the Phoenix, Arizona area we have at least two different really great Poe events every year. And so between the annual Poe Festival (various mini plays, songs, interpretive dance, you name it) and the annual Poe Halloween Reading (sort of like an old time radio show) I am kept in thrall to the "poe-etic!" : ) John -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Fri Dec 4 21:37:06 2009 From: anders at aleph.se (Anders Sandberg) Date: Fri, 04 Dec 2009 22:37:06 +0100 Subject: [ExI] Thinkers we know Message-ID: <20091204213706.0880bb71@secure.ericade.net> Foreign Policy recently announced their list of top 100 global thinkers. A lot of the usual suspects, but the list gets fun at #71 with Ray Kurzweil, followed by #72 Jamais Cascio, and #73 Nick Bostrom. http://www.foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30 The Swedish newspapers made it a big story that professor Rosling at #96 was the only swede - they completely missed Nick :-) Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From possiblepaths2050 at gmail.com Fri Dec 4 22:10:26 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 4 Dec 2009 15:10:26 -0700 Subject: [ExI] Who is Ayn Rand? In-Reply-To: <9031C3E58CA24514AB0A16C17C000795@spike> References: <462361.30206.qm@web58308.mail.re3.yahoo.com> <1E888355-2574-4379-B99D-DE1AA9BCE673@GMAIL.COM> <9031C3E58CA24514AB0A16C17C000795@spike> Message-ID: <2d6187670912041410k5a984fei6504dfaea136fd35@mail.gmail.com> Spike wrote: > I disagree! Ayn Rand WAS perfect, you depraved and iniquitous philistine, > you debauched barbarian! > > > > > > Well, maybe a little imperfect. > > > > {8^D > >>> Just imagine if L. Ron Hubbard and Ayn Rand had had a child together? John : 0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Dec 4 22:15:37 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 4 Dec 2009 15:15:37 -0700 Subject: [ExI] Top 5 biggest advances this year in anti-aging medicine... Message-ID: <2d6187670912041415k26735196ka31c49c9c1e58602@mail.gmail.com> The top 5 greatest advances this year in longevity medicine, according to the Methuselah Foundation & Dave Gobel. http://methuselahfoundation.org/new_newsletter/NovNL_Bestof09.html Would everyone agree with the five list items? John -------------- next part -------------- An HTML attachment was scrubbed... URL: From brentn at freeshell.org Fri Dec 4 22:02:16 2009 From: brentn at freeshell.org (Brent Neal) Date: Fri, 4 Dec 2009 17:02:16 -0500 Subject: [ExI] Spirited molecules In-Reply-To: References: <20091202124822.8c68a152@secure.ericade.net> Message-ID: <4C42AD33-3243-4812-B140-9EA55C8E1B6E@freeshell.org> On 4 Dec, 2009, at 14:55, Robert Bradbury wrote: > Well, I had some difficulty reading this post as I do not read the > ExICh list frequently (due to its gestapo policies). > > But this topic attracted my attention. 
> > Thel unbonded "azide" molecule (with the formula CN4) does not exist > (to the best of my knowledge to assay it). The best using Wikipedia > that I have been able to find is possibly N3- and therefore > molecules such as NaN3 (sodium azide). The statement by Derek with > respect to a "Cyanogen azide" suggests a C2N2 bonded to a N4 > molecule -- which I fail to understand (I can posit plausible > explanations for the distribution of the electrons (around many > molecules) -- but I cannot posit how it is created or its actual > normal chemical makeup. > > Robert Robert - I think you've misunderstood the formula from the name. Cyanogen azide has the empirical formula CN4. Structurally, its N?C-N=N=N. (That's a triple bond between the first N and the C, in case your mailreader isn't Unicode savvy.) The azide functional group is delta+ on the middle N, and delta- on the outer N. Its just not that terribly stable, alas, and as Derek points out, it wants to become mostly nitrogen gas in the worst kind of way. (NB: I have been known to use the sulfonyl azides to do insertions on polyolefins. B -- Brent Neal, Ph.D. http://brentn.freeshell.org From spike66 at att.net Fri Dec 4 22:17:23 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 14:17:23 -0800 Subject: [ExI] Is Mr. Henson somehow involved in this? In-Reply-To: References: Message-ID: <8A2153BAC52E478AA40B50E24EB8F197@spike> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Henrique Moraes Machado > Sent: Friday, December 04, 2009 11:41 AM > To: ExI chat list > Subject: [ExI] Is Mr. Henson somehow involved in this? > > Solar Plant in Space Gets Go-Ahead > http://greeninc.blogs.nytimes.com/2009/12/03/solar-plant-in-sp ace-gets-go-ahead/ > Henrique, it isn't clear to me why the California Public Utilities Commission needs to have any input into this. Any ideas? If it means the commish is putting up some of the money, well, don't count on that: Taxifornia is broke, and it isn't going to get any better any time soon. Hint: it will get worse sometime soon. If the Cal PUC is not putting up money, I don't see why they have any say in the matter. It would be between Solaren and Pacific Gas & Electric, ja? I see other things in this article that raise red flags. If they are using an inflatable mirror and concentrating solar energy on PVs, that looks to me right up front to be a loser: the PV life would surely be shortened by a concentrator, which is not what you want to do with anything you have paid to loft into space. I could imagine on the other hand using a huge inflatable mirror to create one hell of a Carnot cycle generator, and oh what fun we could have designing something like that: plenty of cold space to exhaust the waste heat, noooo restrictions at aaaallll on what we could use for working fluids, haaa that would be a fun project on which to be a design engineer. spike From thespike at satx.rr.com Fri Dec 4 22:48:45 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 04 Dec 2009 16:48:45 -0600 Subject: [ExI] Top 5 biggest advances this year in anti-aging medicine... In-Reply-To: <2d6187670912041415k26735196ka31c49c9c1e58602@mail.gmail.com> References: <2d6187670912041415k26735196ka31c49c9c1e58602@mail.gmail.com> Message-ID: <4B1991CD.5030800@satx.rr.com> If only there was a link to this, or even a name: <6. 
Tooth implants in one day - replacing surgery and long waiting and recovery times, new procedures allow dental surgeons to do virtual surgery to get an accurate picture of bone density and nerve position. The replacement tooth is made from the virtual plans allowing for a precise and permanent fit.> Damien Broderick From nanogirl at halcyon.com Fri Dec 4 22:58:24 2009 From: nanogirl at halcyon.com (Gina Miller) Date: Fri, 4 Dec 2009 15:58:24 -0700 Subject: [ExI] Me and Poe In-Reply-To: <2d6187670912041326j5e92607dw6523971c980f072@mail.gmail.com> References: <30255.90755.qm@web58305.mail.re3.yahoo.com><4B18425C.1060109@satx.rr.com> <4B187214.9090903@rawbw.com><4B1929F9.8050407@satx.rr.com> <2d6187670912041326j5e92607dw6523971c980f072@mail.gmail.com> Message-ID: <69BFE9AD258D49708E19D209EDBE0AF2@3DBOXXW4850> Thank you John. Sounds like there are some very interesting events in your area! Gina "Nanogirl" Miller www.nanogirl.com ----- Original Message ----- From: John Grigg To: ExI chat list Sent: Friday, December 04, 2009 2:26 PM Subject: Re: [ExI] Me and Poe Once again Gina has created an impressive work. Around the Phoenix, Arizona area we have at least two different really great Poe events every year. And so between the annual Poe Festival (various mini plays, songs, interpretive dance, you name it) and the annual Poe Halloween Reading (sort of like an old time radio show) I am kept in thrall to the "poe-etic!" : ) John ------------------------------------------------------------------------------ _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From cetico.iconoclasta at gmail.com Fri Dec 4 23:13:19 2009 From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado) Date: Fri, 4 Dec 2009 21:13:19 -0200 Subject: [ExI] Is Mr. Henson somehow involved in this? References: <8A2153BAC52E478AA40B50E24EB8F197@spike> Message-ID: <58D0CF6FD24B4B4C83259A74838AFF3D@Notebook> Spikey> Henrique, it isn't clear to me why the California Public Utilities > Commission needs to have any input into this. Any ideas? If it means the > commish is putting up some of the money, well, don't count on that: > Taxifornia is broke, and it isn't going to get any better any time soon. > Hint: it will get worse sometime soon. If the Cal PUC is not putting up > money, I don't see why they have any say in the matter. It would be > between > Solaren and Pacific Gas & Electric, ja? No ideas. Maybe it's some legislation issue but I don't live in the Taxifornia (or any part of the US of A) and have no idea on how their legislation works. The article doesn't say much either. They could be fishing for investors, you know -- Hey we have the blessing from Schwarzie, now give us some money -- Spike> I see other things in this article that raise red flags. If they are using > an inflatable mirror and concentrating solar energy on PVs, that looks to > me > right up front to be a loser: the PV life would surely be shortened by a > concentrator, which is not what you want to do with anything you have paid > to loft into space. Indeed. But being the first attempt, this would be more of a proof of concept thingy than a commercial one, right? Therefore they can experiment. 
Spike> I could imagine on the other hand using a huge inflatable mirror to create > one hell of a Carnot cycle generator, and oh what fun we could have > designing something like that: plenty of cold space to exhaust the waste > heat, noooo restrictions at aaaallll on what we could use for working > fluids, haaa that would be a fun project on which to be a design engineer. Not being an engineer myself, I had to go to wikipedia to try and have any idea of what you're talking about... Ok. It's a Stirling engine. Interesting idea. However, correct me if I'm wrong (and I probably am) but isn't heat dissipation a big problem in a vacuum? From spike66 at att.net Fri Dec 4 23:15:30 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 15:15:30 -0800 Subject: [ExI] Spirited molecules In-Reply-To: <4C42AD33-3243-4812-B140-9EA55C8E1B6E@freeshell.org> References: <20091202124822.8c68a152@secure.ericade.net> <4C42AD33-3243-4812-B140-9EA55C8E1B6E@freeshell.org> Message-ID: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of > Brent Neal > ... > > > Robert - > I think you've misunderstood the formula from the name. > Cyanogen azide > has the empirical formula CN4. Structurally, it's N≡C-N=N=N... Cool Brent thanks, I wondered about this, but hadn't looked it upwardly. Or looked up it. I would never have guessed this because one of the Ns has four bonds and one has only two. Didn't know they could do that. > ...it wants to become mostly > nitrogen gas in the worst kind of way... Brent Neal, Ph.D. Ja and it often becomes nitrogen gas in the worst kind of way. {8-] ...Which reminds me of a question I have wondered about for a long time, speaking of nitrogen compounds. Words that are often used to describe an explosion are, for instance, kaBOOM and kerBLOOEY and kaBANG and such. Please those who speak European languages, do those onomatopoetic terms have equivalents in your language? If so, does it have the ka? What is the ka? Why isn't it merely BOOM and BANG? Is there some actual compressible fluid effect that causes some kind of sensation of ka before the sound wave arrives? I have a notion of what that might be, but will only propose it if I know it isn't a meaningless Yankeeism. spike From spike66 at att.net Sat Dec 5 00:46:27 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 16:46:27 -0800 Subject: [ExI] is our friend somehow involved in this? In-Reply-To: <58D0CF6FD24B4B4C83259A74838AFF3D@Notebook> References: <8A2153BAC52E478AA40B50E24EB8F197@spike> <58D0CF6FD24B4B4C83259A74838AFF3D@Notebook> Message-ID: Hi friends, note the change in subject line. If someone puts their own name in a subject line, that is fine. For privacy reasons and established protocol, unless you have specific permission from that person, do eschew putting anyone's name in a subject line, thanks. > ...On Behalf Of Henrique Moraes Machado > ... > > Not being an engineer myself, I had to go to wikipedia to try > and have any idea of what you're talking about... Ok. It's a > Stirling engine... Well it can be a Stirling engine, but that isn't exactly what I have in mind. I was thinking something analogous to a steam turbine cycle, except instead of water, we might use a higher temperature working fluid such as mercury or some other metal. With that big a concentrator, we have really high temperatures at our disposal, and recall that we need to actually boil the stuff and recondense it.
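A rough back-of-the-envelope sketch in Python of the trade being described here: the Carnot limit set by the source and radiator temperatures, and the radiator sized by temperature-to-the-fourth thermal radiation. The mirror diameter, the temperatures and the emissivity below are illustrative assumptions, not figures from the thread.

import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_FLUX = 1361.0   # solar constant near Earth, W/m^2

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-engine efficiency between two reservoir temperatures."""
    return 1.0 - t_cold_k / t_hot_k

def radiator_area_m2(heat_rejected_w, t_radiator_k, emissivity=0.9):
    """Area needed to reject waste heat to cold space: P = emissivity * sigma * A * T^4."""
    return heat_rejected_w / (emissivity * SIGMA * t_radiator_k ** 4)

# Assumed design point: 1 km diameter concentrator, ~2000 K heat source, ~750 K radiator.
mirror_area = math.pi * (1000.0 / 2.0) ** 2   # m^2 of collecting area, ~785,000 m^2
collected = SOLAR_FLUX * mirror_area          # W of sunlight intercepted, ~1.1 GW
eta = carnot_efficiency(2000.0, 750.0)        # ideal-cycle limit, ~0.62
waste_heat = collected * (1.0 - eta)          # W that must be radiated away

print("Carnot limit: %.2f" % eta)
print("Radiator area at 750 K: %.0f m^2" % radiator_area_m2(waste_heat, 750.0))

On those assumed numbers the radiator comes out around 25,000 m^2 against roughly 785,000 m^2 of mirror, which is the point of rejecting the heat at high temperature: the T^4 scaling keeps the radiator small compared to the collector.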
Likely water wouldn't do for this application, because it cannot be condensed at temperatures above about 650K (if I recall correctly, somewhere in the 600s I am pretty sure). Mercury is up in the 1700s, so I could imagine the heat source at a couple thousand K and the radiator exit at about 700 to 800-ish, allowing a theoretical efficiency higher than you can get with PVs. > ...but isn't heat dissipation a big > problem in a vacuum? We get rid of the heat via radiation to cold space. It is a big problem, but there is a big solution to go with it. If we are talking about a km diameter mirror, then we are also talking about a biiig radiator. And we can run at high temperatures, as we likely would for this application. Heat is radiated out into space as a function of temperature to the fourth, so depending on how it is scaled, dumping heat in space is easier than it is down here on the deck. Down here of course we use the heat of vaporization of water with the big cooling towers. But in space, if sufficiently scaled, a radiative condenser would be great. It is a space engineer's playground! Of course none of this will generate energy as cheaply as burning coal down here -- I am not claiming it is -- unless it is scaled to a really remarkably big station. I don't think our friends at Solaren can do it either, but if they manage to pull it off, I am cheering wildly for them, and will gladly tell the whole world I was wrong. spike From lcorbin at rawbw.com Sat Dec 5 03:41:50 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 04 Dec 2009 19:41:50 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <11077.38275.qm@web32006.mail.mud.yahoo.com> References: <11077.38275.qm@web32006.mail.mud.yahoo.com> Message-ID: <4B19D67E.4010101@rawbw.com> Ben Zaiboc wrote: > Andrii Zvorygin wrote: > >> he seems like an angry confused person. > > I very much doubt that he's confused. I didn't see any signs of confusion either. > But angry? About the greatest evil that has ever existed? > Yes, I should think so. It doesn't take too much dwelling on what religion tries to do to modern minds (and succeeds very often) in order to make a thoughtful, skeptical person angry. "The greatest evil that has ever existed"? I'm at a loss here to think about what other candidates you may have in mind, and how religion finally beats them out in the evilness department. The noticeable thread to me that links all the recent atheist books, and claims like Pat Condell's, is this: we can hardly imagine history without religion. So we can hardly imagine the outcome of any controlled experiment. We know that the Aztecs and Mayas before them did terrible things pretty high up on the scale of evil. But it's highly significant, I think, that we *only* know these things because a literate civilization made contact with them and wrote it all down. It seems likely to me that atrocities scarcely thinkable to us were commonplace among *all* our ancestors if we go back far enough. Reading a sympathetic biography of Genghis Khan left me with the impression that the incredible holocausts would have happened anyway, even without the worship of the Tangri, the great blue sky. So, since atrocities and religion have always been with us, how is it that so many people always manage to suppose that the latter is truly responsible for the former? I'm not convinced. I do wish to say that I did not merely agree with almost all of Pat Condell's rant, but that it echoed many of my own thoughts and feelings over the years.
How this Eastern mystical cult came to dominate all of western civilization is a sad tale. I'd love to know about an alternate history (who among us would not) in which the Greeks and Romans---who had successfully fought off physical conquest---had managed also to fight off memetic conquest. Lee From msd001 at gmail.com Sat Dec 5 04:09:39 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 4 Dec 2009 23:09:39 -0500 Subject: [ExI] Spirited molecules In-Reply-To: References: <20091202124822.8c68a152@secure.ericade.net> <4C42AD33-3243-4812-B140-9EA55C8E1B6E@freeshell.org> Message-ID: <62c14240912042009k476798a1h1fcae4aa78ba652@mail.gmail.com> 2009/12/4 spike > ...Which reminds me of a question I have wondered about for a long time, > speaking of nitrogen compounds. Words that are often used to describe an > explosion are, for instance, kaBOOM and kerBLOOEY and kaBANG and such. > Please those who speak European languages, do those anamonapoetic terms > have > equivalents in your language? If so, does it have the ka? What is the ka? > Why isn't it merely BOOM and BANG? Is there some actual compressible fluid > effect that causes some kind of sensation of ka before the sound wave > arrives? I have a notion of what that might be, but will only propose it > if > I know it isn't a meaningless Yankeeism. > > Isn't the audible pop of a balloon caused by air collapsing inwards? In an explosion, it probably is the outward rush that generates the "ka" followed by the collapse that makes the "boom" I think the air rushing into the space recently vacated by an expanse of the charcoal lighter fluid 'going up' that makes the "woof" sound too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Sat Dec 5 04:20:13 2009 From: max at maxmore.com (Max More) Date: Fri, 04 Dec 2009 22:20:13 -0600 Subject: [ExI] pat condell's latest subtle rant Message-ID: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Regarding: Aggressive Atheism, by Pat Condell http://www.youtube.com/watch?v=yjO4duhMRZk I tried to watch this twice before, but stopped the video due to being put off by Condell's manner. Tonight, I finally watched the whole thing. It made me feel like I was 18 years old again. An aggressive atheist. A guy who went to classes wearing badges (US: buttons) saying things like "legalize heroin", "taxation is theft", and "God is dead". It reminded me of confidently -- nay, arrogantly -- telling the religious buffoons what's what. And you know what? Every thing Condell says is basically right. Yet, his attitude and approach, while refreshing, leaving me feeling that his message is purely and pointlessly a preaching-to-the-choir approach. Its value is completely one of entertainment. No, okay, it may also kick some atheists in the ass and inspire them to do something more active to combat the major problems that come with religious thinking. While Condell's aggressive approach definitely has a degree of wisdom (and a load of intellectual good sense), is it really appropriate to, or useful for, or humanistic in, dealing with all situations? For instance: My half-brother, who I just learned has been diagnosed with serious cancer, has asked me to read a novel that I see is extremely popular among the religious (Christian in particular): The Shack. Relevant background: This is a (considerably older) half-brother -- simply "brother" as far as I knew until a few years ago -- who, when I was in my teens and had recently lost his beliefs... 
or rather, had thrown off the shackles of... religion, insisted (at a Christmas family gathering), that I would certainly go to Hell forever because I didn't believe that Jesus was the son of God. A Pat Condell-style atheist might tell simply tell my brother that he is an idiot to believe this crap. I agreed to actually read this book and -- unless it really is *monumentally* stupid -- I intend to discuss it with my brother exploratively rather than explaining abruptly to him why his decades-long religious beliefs are moronic. Am I a just a weak fool to do this? Is Condell's attitude and approach always useful/appropriate/effective/wise? Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Sat Dec 5 05:15:31 2009 From: max at maxmore.com (Max More) Date: Fri, 04 Dec 2009 23:15:31 -0600 Subject: [ExI] HUMOR: How do you satisfy an economist chick (guy)? Message-ID: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> It's Friday night (CST). It's time for some puerile humor. If you're looking for serious thought... move along. There's nothing to see here. On Facebook, Roko Mijic asked: How do you hook up with an economist chick? His answer: ... ask her to internalize your externality. I didn't see any good responses in the first 20, so I suggested: Tell her that you want to smooth her demand curve. You want to satisfice her desires. But she has only limited resources to satisfy your unlimited wants. You can never reach equilibrium without injecting demand into her system. You can't "push on a string", but you can prime my pump! That's the Invisible Hand you feel. If I press my comparative advantage, how will that affect your yield curve? Any additions? (Feel free to change the "chick" to a male or multisexual or alien species or whatever). Come on extropians -- this is a vital issue for a Friday night! Max From nanite1018 at gmail.com Sat Dec 5 05:59:11 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sat, 5 Dec 2009 00:59:11 -0500 Subject: [ExI] HUMOR: How do you satisfy an economist chick (guy)? In-Reply-To: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> References: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> Message-ID: > Any additions? (Feel free to change the "chick" to a male or > multisexual or alien species or whatever). > > Come on extropians -- this is a vital issue for a Friday night! > > Max The best one I could come up with: "The only way to bring down my massive inflation is for you to issue me a flood of bonds." Eh? Eh? A little funny? Lol Joshua Job nanite1018 at gmail.com From spike66 at att.net Sat Dec 5 05:46:23 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 21:46:23 -0800 Subject: [ExI] HUMOR: How do you satisfy an economist chick (guy)? In-Reply-To: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> References: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> Message-ID: <2EB4FBE504F24C8AA5A68F72092208A0@spike> > ...On Behalf Of Max More ... > Subject: [ExI] HUMOR: How do you satisfy an economist chick (guy)? > ... > On Facebook, Roko Mijic asked: How do you hook up with an > economist chick? Max I would opine thus, "Keynesian economic theory is outdated, disproven and harmful." If that comment turns her on, she is the kind I would want to turn on. If not, she's not. 
But if it works, perhaps attempt the somewhat more suggestive, "Ups and downs are beneficial; they should never be suppressed." If that gets a sincere-sounding laugh, then go for something like, "Inflation is not to be feared, but rather facilitated often and fully utilized!" spike From nanite1018 at gmail.com Sat Dec 5 06:30:10 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sat, 5 Dec 2009 01:30:10 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: <22F78D00-15AE-4F80-A63A-52B0B424610A@GMAIL.COM> > Tonight, I finally watched the whole thing. It made me feel like I > was 18 years old again. An aggressive atheist. A guy who went to > classes wearing badges (US: buttons) saying things like "legalize > heroin", "taxation is theft", and "God is dead". It reminded me of > confidently -- nay, arrogantly -- telling the religious buffoons > what's what. And you know what? Every thing Condell says is > basically right. Well, I don't where badges/buttons but I do speak my mind a good deal (a short conversation on anything related to politics/religion will make it quite clear where I stand). I imagine that my confidence in my own ideas can at times come across as arrogance. Often, even, it isn't merely an appearance, I often am arrogant consciously. I generally prefer arrogance (of a certain kind) to being wishy-washy in the realm of ideas, so perhaps that is one of the reasons I love Condell. Older people (parents for example) often tell me I'll moderate as grow older. I sure hope not, that would be sorely disappointing. haha > Yet, his attitude and approach, while refreshing, leaving me feeling > that his message is purely and pointlessly a preaching-to-the-choir > approach. Its value is completely one of entertainment. No, okay, it > may also kick some atheists in the ass and inspire them to do > something more active to combat the major problems that come with > religious thinking. > > While Condell's aggressive approach definitely has a degree of > wisdom (and a load of intellectual good sense), is it really > appropriate to, or useful for, or humanistic in, dealing with all > situations? Depends on your aim. Other atheists/agnostics/non-religious folks I've talked to often criticize Dawkins for giving atheism a bad reputation, for being counter-productive, etc. I think he does an excellent job. I think really really religious people are sort of hopeless cases to an extent, and I think Dawkins/Condell style atheists generally can serve the function of helping moderates and quasi-religious folks to re-examine their ideas. And the good kick in the pants of course. > A Pat Condell-style atheist might tell simply tell my brother that > he is an idiot to believe this crap. I agreed to actually read this > book and -- unless it really is *monumentally* stupid -- I intend to > discuss it with my brother exploratively rather than explaining > abruptly to him why his decades-long religious beliefs are moronic. > > Am I a just a weak fool to do this? Is Condell's attitude and > approach always useful/appropriate/effective/wise? > > Max Well, since he's your brother, I would be a bit more diplomatic. If I were in your position, I would have been having this sort of conversation for a long time with my brother, and so I doubt that he would ask me to read such a book. 
And, honestly, if he did, I would refuse (unless it seemed to be genuinely interesting on non-religious grounds; I am interested in the Left Behind novels, but for the evil dictator/tyranny part, not the God parts). I actually had a similar situation with my ex-girlfriend who was pretty religious (in retrospect, big mistake, and I won't be repeating it). She asked me to watch "The Case for Christ" I think its called. It was terrible, and had massive flaws in logic and evidence. The journalist who made had his wife convert, which caused major marital and emotional strain, which to me is a much better explanation for his conversion than any "evidence-based" decision (the evidence wasn't good either, btw). I told her as much, and she kept trying, and eventually I said I would never believe in God, and certainly never one religion's version, because in order to do so I would have to be a completely different person from the core out. We broke up the next day. :p My point in relating that story is that I have basically given up on the idea that I can convince hard-core religious people that they're wrong, and so while I might talk with them as an exercise in hilarity, I wouldn't put any weight on it at all. So while Condell/Dawkins style atheism might not be diplomatic or bring people who are quite religious over to "the dark side", I don't see that as a drawback really. I think the radicals often accomplish more than the moderates in these sorts of things. Joshua Job nanite1018 at gmail.com From spike66 at att.net Sat Dec 5 06:38:10 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 22:38:10 -0800 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: References: Message-ID: Hi Robert, I want to answer this in greater detail later as family obligations allow. I too am a freedom of speech advocate, when it comes to governments. We promise to not arrest you. That is what the US government is granting us with that amendment, a promise to not arrest us for what we say or write. But on the other hand circumspection is always wise, and in some cases actual decorum. Freedom of speech is about what governments can do to its citizens. I don't see how it applies in this case. The ExI moderators are not governors, but rather more like advisors, with passwords. Back when I was moderating, I used to get three to five times more offlist complaints about under-moderating than about over-moderating. I thought it a good ratio and tried to maintain that ratio. Last spring when I was on the road a lot, I had to give it up because I was at the point where I wasn't really moderating at all, but rather dealing with plenty of offlist complaints about under-moderating. {8^D Later! spike _____ From: Robert Bradbury [mailto:robert.bradbury at gmail.com] Sent: Friday, December 04, 2009 9:56 PM To: Max More; spike Cc: ExICh Subject: Moderation on the ExiCh list Max has asked me to explain my recent comment comment comparing the ExiCh moderation policies to gestapo policies. First of all, let me suggest that was a little bit far even for me. But it is my nature to be very passionate (with respect to my convictions) -- and I will tend to stretch analogies in order to make a point. First of all my comment was related to the moderation of a very heated debate which I think took place circa 2006-2007 time-frame and it was the primary reason that I discontinued any significant contribution to the ExICh list. I believe the moderators at that time were Eugen and Spike (and Max was to my knowledge was not involved). 
And I do agree that the moderation then, as now, is very mild (bordering on not even present). But as a person of conviction, I happen to believe in the first article of the Bill of Rights, e.g. "Congress shall make no law respecting an an establishment of religion, or prohibiting the exercise thereof [1]; OR ABRIDGING THE FREEDOM OF SPEECH [2], .." should be taken very seriously. And as I happen to have grown up in Massachusetts and happen to have walked along the trail of the Minitueman and happen to have ancestors who came into MA circa 1634 -- and presumably some of my ancestors fought in the Revolutionary War.-- I happen to take the matters for which they fought, and perhaps died for, VERY SERIOUSLY. Now obviously an email list can adopt whatever policies it likes -- so in that respect it does not have the limits or guidelines placed on it that the U.S. Congress has (an email list, perhaps in contrast to a blog, is effectively a dictatorship).. At the other end of the spectrum one could go into posts to social networking, blogs or news commentary pages that allow anonymous posts that possibly allow defamatory and/or personally damaging posts (which given the fact that unsubstantiated claims propagate is presumably not a good thing). And the ExICh list is notable in that it allows "outside of the box" or "off the cuff" thinking. (For example, I recently gave up participating in the GRG list due to the fact that one or more individuals objected to the colorful language that I may have used in regard to one or more posts -- I believe with respect to something which was NOT worthy of consideration -- something those of you who know me know I have a very low tolerance of). And thus I became disinvolved with contributing to the GRG list. For similar reasons (with regard to unclear moderation policies) I became disinvolved with the ExICh list several years ago. The point of a moderated list would presumably be to minimize the exposure of the list to defamation lawsuits, and in some cases "accuracy" statements (say the Raelians were to start auto-cross-posting their fluff. [2]). But I am unaware of such a policy being explicitly cast in bronze. In other cases I should view any moderation policy needing to be clearly stated (so one clearly understands what restrictions one has that are less than the U.S. Congress.) And there should be a "board of appeals" -- in that if ones moderated post is rejected one can subject it to independent scrutiny (and where extremely necessary censorship). (This is common behavior in the film industry which IMO is not a good model for distribution restrictions but may be a good model for self-imposed censorship). But I do remain resolute in that undefined moderation policies border on gestapo policies (though the intent in the analogy is problematic) is not to detract from ExICh policies but to perhaps encourage their gradual solutions. For the first thing one wants to kill in a free society is Freedom of Speech -- precisely because that represents a threat (China, and to a lesser extent Iran, being the current primary examples). And the moderators [at least to my knowledge] have not published which specific censorship rules they use. And so what is viewed as a "threat" or even "defamatory" is unclear. There is of course "Common Informed Censorship" but unlike other authorities we do not know what this is (it may shift from individual to individual as the moderation shifts ( So I may (or may not) view the ExICh llist as little different from a CIA chat list. 
I would strongly suspect that a topic entitled "Strategies to assassinate the President" would be unacceptable to the moderators. And yet why? Is it not the exercise of Free Speech? The ExICh list really needs to break down in detail acceptable vs. unacceptable list policies (that presumably the moderators are attempting to mediate). (Because I know that there are list members who could come up with creative solutions to this question -- which means I should ask them in personal communications and not on the list). But merely stating this probably puts ExICh on the "watch list" even if it isn't already there. Max & Spike, I apologize for opening the debate but I believe you both know me well enough that I will not draw back from engaging discussion. And it prompts the topic of things which seriously need to be thought about. Robert 1. Which considering the extent to which religions go about in brain-washing individuals who are incapable of reasoned or informed thought (i.e. children) is unconscioncable. 2. Note that I do not consider to be the Raelians perspective of the colonization of Earth impossible. I consider it to be one of a number of probable realities which must be thought about. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Dec 5 06:11:34 2009 From: spike66 at att.net (spike) Date: Fri, 4 Dec 2009 22:11:34 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: <3282289873444740B3B0A4F77CBC3874@spike> >...On Behalf Of Max More > Subject: Re: [ExI] pat condell's latest subtle rant > > Regarding: > Aggressive Atheism, by Pat Condell > http://www.youtube.com/watch?v=yjO4duhMRZk > > ... > > For instance: My half-brother, who I just learned has been > diagnosed with serious cancer... Owww, so sorry to hear that Max, do pass along our best wishes for success to him in the struggle he has coming. > has asked me to read a novel > that I see is extremely popular among the religious > (Christian in particular): The Shack. ... > Am I a just a weak fool to do this?... No. Take care of your family first. That task is more important than being right, sir. Let your pride take the hit, read the book, discuss it with your brother, let him know you are cheering for him and hoping for his full recovery, do all you can to lend aid and comfort to him. No one will think less of you, and in the long run you will not think less of you. > Is Condell's attitude and > approach always useful/appropriate/effective/wise? Max Wrong question. Pat Condell is a comedian. Rush Limbaugh, Glenn Beck, Jon Stewart, Michael Moore, all these guys are comedians, all playing a role. They have a message, but their job is to entertain, and their act is to sort-of pretend to be serious, a little like our WWE rassling shows. If one doesn't find one brand of political humor funny, there are plenty of other clowns in this well-connected world, all across the political spectrum. If I am in the mood for it, I find Condell very funny, and Michael Moore's first movie "Roger and Me" is hilarious. Check out Time magazine running a quasi-serious cover story on Glenn Beck. Or were they playing along with his act? I wonder if the Time magazine people think rassling is real? 
spike From lcorbin at rawbw.com Sat Dec 5 06:41:06 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 04 Dec 2009 22:41:06 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: <4B1A0082.3060007@rawbw.com> Max ends up asking > Am I a just a weak fool to do this? Is Condell's attitude and approach > always useful/appropriate/effective/wise? Condell's message WAS to the choir; of course. Yes, it was just for us, and many of those things he said would be in extremely poor taste to deploy in a discussion with a believer. Not to mention ineffective. Not to mention the whole acerbic manner that you describe well. However, I believe that what he spoke was more than mere entertainment and more than (as you put it) giving some of us a kick in the pants. What he said he did believe to be the truth, and it gives every one of us (in the choir) a chance to calibrate our own beliefs against his, while listening to such a screed. For example, I got to think about what "righteous" meant (and to incidentally conclude that Pat hasn't looked up the meaning lately, or thinks that everyone is going to knee-jerk take the Christian/Jewish religious meaning). > I intend to discuss [the book recommended] > with my brother exploratively rather than > explaining abruptly to him why his decades- > long religious beliefs are moronic. Of course. Kindness, civility, and even honesty demand this approach. I don't have such a high opinion of people who enter into every discussion with an interlocutor 100% convinced that their own position is 100% true, and that the other's is necessarily nonsense or worse. Lee Max More wrote: > Regarding: > Aggressive Atheism, by Pat Condell > http://www.youtube.com/watch?v=yjO4duhMRZk > > I tried to watch this twice before, but stopped the video due to being > put off by Condell's manner. > > Tonight, I finally watched the whole thing. It made me feel like I was > 18 years old again. An aggressive atheist. A guy who went to classes > wearing badges (US: buttons) saying things like "legalize heroin", > "taxation is theft", and "God is dead". It reminded me of confidently -- > nay, arrogantly -- telling the religious buffoons what's what. And you > know what? Every thing Condell says is basically right. > > Yet, his attitude and approach, while refreshing, leaving me feeling > that his message is purely and pointlessly a preaching-to-the-choir > approach. Its value is completely one of entertainment. No, okay, it may > also kick some atheists in the ass and inspire them to do something more > active to combat the major problems that come with religious thinking. > > While Condell's aggressive approach definitely has a degree of wisdom > (and a load of intellectual good sense), is it really appropriate to, or > useful for, or humanistic in, dealing with all situations? > > For instance: My half-brother, who I just learned has been diagnosed > with serious cancer, has asked me to read a novel that I see is > extremely popular among the religious (Christian in particular): The Shack. > > Relevant background: This is a (considerably older) half-brother -- > simply "brother" as far as I knew until a few years ago -- who, when I > was in my teens and had recently lost his beliefs... or rather, had > thrown off the shackles of... 
religion, insisted (at a Christmas family > gathering), that I would certainly go to Hell forever because I didn't > believe that Jesus was the son of God. > > A Pat Condell-style atheist might tell simply tell my brother that he is > an idiot to believe this crap. I agreed to actually read this book and > -- unless it really is *monumentally* stupid -- I intend to discuss it > with my brother exploratively rather than explaining abruptly to him why > his decades-long religious beliefs are moronic. > > Am I a just a weak fool to do this? Is Condell's attitude and approach > always useful/appropriate/effective/wise? > > Max > > > ------------------------------------- > Max More, Ph.D. > Strategic Philosopher > Extropy Institute Founder > www.maxmore.com > max at maxmore.com > ------------------------------------- > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From robert.bradbury at gmail.com Sat Dec 5 05:55:33 2009 From: robert.bradbury at gmail.com (Robert Bradbury) Date: Sat, 5 Dec 2009 00:55:33 -0500 Subject: [ExI] Moderation on the ExiCh list Message-ID: Max has asked me to explain my recent comment comment comparing the ExiCh moderation policies to gestapo policies. First of all, let me suggest that was a little bit far even for me. But it is my nature to be very passionate (with respect to my convictions) -- and I will tend to stretch analogies in order to make a point. First of all my comment was related to the moderation of a very heated debate which I think took place circa 2006-2007 time-frame and it was the primary reason that I discontinued any significant contribution to the ExICh list. I believe the moderators at that time were Eugen and Spike (and Max was to my knowledge was not involved). And I do agree that the moderation then, as now, is very mild (bordering on not even present). But as a person of conviction, I happen to believe in the first article of the Bill of Rights, e.g. "Congress shall make no law respecting an an establishment of religion, or prohibiting the exercise thereof [1]; OR ABRIDGING THE FREEDOM OF SPEECH [2], .." should be taken very seriously. And as I happen to have grown up in Massachusetts and happen to have walked along the trail of the Minitueman and happen to have ancestors who came into MA circa 1634 -- and presumably some of my ancestors fought in the Revolutionary War.-- I happen to take the matters for which they fought, and perhaps died for, VERY SERIOUSLY. Now obviously an email list can adopt whatever policies it likes -- so in that respect it does not have the limits or guidelines placed on it that the U.S. Congress has (an email list, perhaps in contrast to a blog, is effectively a dictatorship).. At the other end of the spectrum one could go into posts to social networking, blogs or news commentary pages that allow anonymous posts that possibly allow defamatory and/or personally damaging posts (which given the fact that unsubstantiated claims propagate is presumably not a good thing). And the ExICh list is notable in that it allows "outside of the box" or "off the cuff" thinking. 
(For example, I recently gave up participating in the GRG list due to the fact that one or more individuals objected to the colorful language that I may have used in regard to one or more posts -- I believe with respect to something which was NOT worthy of consideration -- something those of you who know me know I have a very low tolerance of). And thus I became disinvolved with contributing to the GRG list. For similar reasons (with regard to unclear moderation policies) I became disinvolved with the ExICh list several years ago. The point of a moderated list would presumably be to minimize the exposure of the list to defamation lawsuits, and in some cases "accuracy" statements (say the Raelians were to start auto-cross-posting their fluff. [2]). But I am unaware of such a policy being explicitly cast in bronze. In other cases I should view any moderation policy needing to be clearly stated (so one clearly understands what restrictions one has that are less than the U.S. Congress.) And there should be a "board of appeals" -- in that if ones moderated post is rejected one can subject it to independent scrutiny (and where extremely necessary censorship). (This is common behavior in the film industry which IMO is not a good model for distribution restrictions but may be a good model for self-imposed censorship). But I do remain resolute in that undefined moderation policies border on gestapo policies (though the intent in the analogy is problematic) is not to detract from ExICh policies but to perhaps encourage their gradual solutions. For the first thing one wants to kill in a free society is Freedom of Speech -- precisely because that represents a threat (China, and to a lesser extent Iran, being the current primary examples). And the moderators [at least to my knowledge] have not published which specific censorship rules they use. And so what is viewed as a "threat" or even "defamatory" is unclear. There is of course "Common Informed Censorship" but unlike other authorities we do not know what this is (it may shift from individual to individual as the moderation shifts ( So I may (or may not) view the ExICh llist as little different from a CIA chat list. I would strongly suspect that a topic entitled "Strategies to assassinate the President" would be unacceptable to the moderators. And yet why? Is it not the exercise of Free Speech? The ExICh list really needs to break down in detail acceptable vs. unacceptable list policies (that presumably the moderators are attempting to mediate). (Because I know that there are list members who could come up with creative solutions to this question -- which means I should ask them in personal communications and not on the list). But merely stating this probably puts ExICh on the "watch list" even if it isn't already there. Max & Spike, I apologize for opening the debate but I believe you both know me well enough that I will not draw back from engaging discussion. And it prompts the topic of things which seriously need to be thought about. Robert 1. Which considering the extent to which religions go about in brain-washing individuals who are incapable of reasoned or informed thought (i.e. children) is unconscioncable. 2. Note that I do not consider to be the Raelians perspective of the colonization of Earth impossible. I consider it to be one of a number of probable realities which must be thought about. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Sat Dec 5 09:32:41 2009 From: pharos at gmail.com (BillK) Date: Sat, 5 Dec 2009 09:32:41 +0000 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: References: Message-ID: On 12/5/09, spike wrote: > > I too am a freedom of speech advocate, when it comes to governments. We > promise to not arrest you. That is what the US government is granting us > with that amendment, a promise to not arrest us for what we say or write. > But on the other hand circumspection is always wise, and in some cases > actual decorum. Freedom of speech is about what governments can do to its > citizens. I don't see how it applies in this case. > > The ExI moderators are not governors, but rather more like advisors, with > passwords. > > It is not possible to specify an exact list of what subjects should not be discussed on the Exi-chat list. Who knows what some new visitor might want to write about? There are dark places on the internet where literally anything can be discussed or images exchanged, some of which are against the law in some countries. If cannibalism rocks your boat then you can find a place to discuss it with fellow deviants. But Exi-chat does not have the primary objective of being a free-for-all anything-goes dungeon. The main objective of the public Exi-chat is to discuss transhumanism of the extropian variety. The test of a discussion is -- 1) - Does it involve transhumanism at all? 2) - Does it help the transhumanism movement to make progress either by interesting discussion of technology, etc. or by spreading knowledge and becoming more well-known and accepted by the general public. 3) - Does it help bonding within the tranhumanist group? As Spike says, you can't have a 99 bullet point list of forbidden subjects. You have to look at the intent and whether it is helpful to transhumanism as the case arises. Some trolls enjoy raising problems just to disrupt the list and when they appear, then they have to be dealt with. As Spike said, some people complained that his moderation was too light. (There were no public hangings or floggings, even). BillK From anders at aleph.se Sat Dec 5 11:05:25 2009 From: anders at aleph.se (Anders Sandberg) Date: Sat, 05 Dec 2009 12:05:25 +0100 Subject: [ExI] Spirited molecules In-Reply-To: E9E01D2EAE294FC6917ECC8C1B43EBBC@spike Message-ID: <20091205110525.636c550c@secure.ericade.net> Spike: > ...Which reminds me of a question I have wondered about for a long time, > speaking of nitrogen compounds. Words that are often used to describe an > explosion are, for instance, kaBOOM and kerBLOOEY and kaBANG and such. > Please those who speak European languages, do those anamonapoetic terms have > equivalents in your language? In Swedish, the typical cartoon explosion goes 'bang', 'pang', 'bom' or 'boom' - I think the 'k' prefix is language-specific (onomatopoeia tends to follow the language system of the host language). When a car crashes, the sound is however described as 'krasch'. Clearly a sound with sudden attack tends to be represented by a word with sudden attack, but there are many choices in each language. Some more in English: http://www.writtensound.com/explosions.htm Could the ka- in kaboom be a phonestheme? 
http://en.wikipedia.org/wiki/Phonestheme Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From mbb386 at main.nc.us Sat Dec 5 11:32:04 2009 From: mbb386 at main.nc.us (MB) Date: Sat, 5 Dec 2009 06:32:04 -0500 (EST) Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: <50763.12.77.168.244.1260012724.squirrel@www.main.nc.us> > For instance: My half-brother, who I just learned has been diagnosed > with serious cancer, has asked me to read a novel that I see is > extremely popular among the religious (Christian in particular): The Shack. > [snip] > > A Pat Condell-style atheist might tell simply tell my brother that he > is an idiot to believe this crap. I agreed to actually read this book > and -- unless it really is *monumentally* stupid -- I intend to > discuss it with my brother exploratively rather than explaining > abruptly to him why his decades-long religious beliefs are moronic. > > Am I a just a weak fool to do this? No, you are a kind, gentle man who will care for his brother in this time of great trial. Were you to refuse to even read the book, that would simply be a slap in the face... and you'd know that. Forever. A meanness. A shameful thing. IMHO. Best wishes for your brother and family. It can be a very rough road. Regards, MB From stefano.vaj at gmail.com Sat Dec 5 11:58:46 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 5 Dec 2009 12:58:46 +0100 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: References: Message-ID: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> 2009/12/5 Robert Bradbury : > But as a person of conviction, I happen to believe in the first article of > the Bill of Rights, e.g. "Congress shall make no law respecting an an > establishment of religion, or prohibiting the exercise thereof [1]; OR > ABRIDGING THE FREEDOM OF SPEECH [2], .." should be taken very seriously. Interesting. I am myself a moderator of a few lists, and in turn happened at least once to be the victim of the moderation (or rather the owner) of a list at that time managed by one of my couple of personal trolls/poltergeists, who interpreted such role as the ability to engage in, by no means "moderate", attacks and flames where the ability of the other party to reply is by definition restricted. Unsurprisingly, the list members eventually migrated elsewhere, including those who had not directly suffered such behaviours... I think that both the moderation when in doubt should err on the side of the freedom of speech (and be very vigilant with regard to its own personal and ideological biases) AND that a moderation should exist. This not only as a matter of ideological or aesthetical taste, but for very practical reasons which have to do with its continuing viability and success. > The point of a moderated list would presumably be to minimize the exposure > of the list to defamation lawsuits, and in some cases "accuracy" statements > (say the Raelians were to start auto-cross-posting their fluff. No. The real point of moderation and the real (and only) cardinal sin in mailing lists is IMHO Off-Topic. Flames are off-topic. Spam is off-topic. Ad hominem are off-topic. Tireless single-issue evangelism is off-topic. Bilateral chit-chat is off-topic. Off-topic is what reduce the signal-to-noise ratio, annoys and ultimately keep away other participants. 
Cut the OT, but only the OT, and there is nothing else that you need (or should) do. -- Stefano Vaj From dharris at livelib.com Sat Dec 5 11:48:38 2009 From: dharris at livelib.com (David C. Harris) Date: Sat, 05 Dec 2009 03:48:38 -0800 Subject: [ExI] HUMOR: How do you satisfy an economist chick (guy)? In-Reply-To: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> References: <200912050515.nB55FdgP014521@andromeda.ziaspace.com> Message-ID: <4B1A4896.2020601@livelib.com> Getting contemporary after too many drinks, "At the risk of reaching diminishing marginal returns, but in the interest of transparency, you should know, I'm too big to fail". Max More wrote: > It's Friday night (CST). It's time for some puerile humor. If you're > looking for serious thought... move along. There's nothing to see here. > > On Facebook, Roko Mijic asked: How do you hook up with an economist > chick? > ,,, > Come on extropians -- this is a vital issue for a Friday night! > > Max > From stefano.vaj at gmail.com Sat Dec 5 12:21:35 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 5 Dec 2009 13:21:35 +0100 Subject: [ExI] climategate again In-Reply-To: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: <580930c20912050421x5abcd37q310cbd7f72103f83@mail.gmail.com> 2009/12/1 Max More : > I'm feeling pretty lonely on this issue. Just about everyone on all sides of > the issue seem to be very certain of what's going on. Despite considerable > reading of clashing sources (or because of it), I remain highly unsure. Why, there are at least two of us. But while I remain very perplexed on the merits, I feel much better equipped to form opinions on the psychological, political, ideological and cultural angles and motives of the debate. Something which of course tells us nothing about the facts, but speaks volumes about the players... -- Stefano Vaj From dharris at livelib.com Sat Dec 5 12:09:28 2009 From: dharris at livelib.com (David C. Harris) Date: Sat, 05 Dec 2009 04:09:28 -0800 Subject: [ExI] we die alone while ogling the divers In-Reply-To: <00A4F8668B384C2584840D149DD22DCE@spike> References: <182452.64981.qm@web58301.mail.re3.yahoo.com> <00A4F8668B384C2584840D149DD22DCE@spike> Message-ID: <4B1A4D78.2080709@livelib.com> spike wrote: > ... > If you have drowning dreams very often it might indicate sleep apnea, so you > need to see the medics about that. > > For over a decade I was acting out dream material, like hitting the bedside telephone, kicking, or making distressed sounds. The worst was grabbing my girlfriend's neck as she slept -- a serious faux pas in dating etiquette! She was also bothered by my serious snoring and eventually noticed that I stopped breathing frequently. I got a medical referral to a sleep clinic where they diagnosed severe obstructive sleep apnea, going as low as 75% oxygen saturation. Apparently I wasn't taking enough time to establish deep sleep with the normal paralysis of the voluntary muscles during dreams. Now I have a CPAP (continuous positive airway pressure) machine that blows my throat open when I relax and I am catching up on long dreams. Life is better with adequate deep sleep.
:-) From deimtee at optusnet.com.au Sat Dec 5 12:57:49 2009 From: deimtee at optusnet.com.au (David) Date: Sat, 5 Dec 2009 23:57:49 +1100 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: References: Message-ID: <20091205235749.23c4986b@optusnet.com.au> On Sat, 5 Dec 2009 00:55:33 -0500 Robert Bradbury wrote: snip: everything. :) Hi All, I am a longtime reader, but rare poster to this list. In regards to this whole moderation debate, if making the moderation rules explicit is what it takes to keep Mr Bradbury on the exilist, then I am very much in favour of doing so. The recent return of the "heavyweight" thinkers and posters is, from my point of view, very welcome. I read this list because it is one of the few places where ideas are promulgated and discussed based on science and reality rather than politics and personality. Welcome back Robert, I hope you stay. -David From deimtee at optusnet.com.au Sat Dec 5 13:08:21 2009 From: deimtee at optusnet.com.au (David) Date: Sun, 6 Dec 2009 00:08:21 +1100 Subject: [ExI] Spirited molecules In-Reply-To: References: <20091202124822.8c68a152@secure.ericade.net> Message-ID: <20091206000821.48b07763@optusnet.com.au> On Fri, 4 Dec 2009 14:55:24 -0500 Robert Bradbury wrote: > Well, I had some difficulty reading this post as I do not read the > ExICh list frequently (due to its gestapo policies). > > But this topic attracted my attention. > > The unbonded "azide" molecule (with the formula CN4) does not exist > (to the best of my knowledge to assay it). The best using Wikipedia > that I have been able to find is possibly N3- and therefore molecules > such as NaN3 (sodium azide). The statement by Derek with respect to a > "Cyanogen azide" suggests a C2N2 bonded to a N4 molecule -- which I > fail to understand (I can posit plausible explanations for the > distribution of the electrons (around many molecules) -- but I cannot > posit how it is created or its actual normal chemical makeup. > > Robert How about a ring structure : N=C=N-N=N- (back to the first N) bonds add up, and it looks damn unstable to me. -David From anders at aleph.se Sat Dec 5 14:35:39 2009 From: anders at aleph.se (Anders Sandberg) Date: Sat, 05 Dec 2009 15:35:39 +0100 Subject: [ExI] Spirited molecules In-Reply-To: 20091206000821.48b07763@optusnet.com.au Message-ID: <20091205143539.2fa97247@secure.ericade.net> David: > How about a ring structure : > > N=C=N-N=N- (back to the first N) I doubt it, since the azide group is linear and the cyanide group is also linear. Only one angle that can bend. Ah, "Structurally, cyanogen azide is a V-shaped molecule and it was determined that the angle at the middle N atom is 120°" (D.C. Frost, H.W. Kroto, C.A. McDowell, N.P.C. Westwood, The helium (He I) photoelectron spectra of the isoelectronic molecules, cyanogen azide, NCN3, and cyanogen isocyanate, NCNCO, J. Electron Spectrosc. Relat. Phenom. 11 (2) (1977) 147-156.) If the molecule gets charged there is further buckling (Lemi Türker and Taner Atalar, Quantum chemical treatment of cyanogen azide and its univalent and divalent ionic forms, Journal of Hazardous Materials, Volume 153, Issue 3, 30 May 2008, Pages 966-974) Perhaps it can polymerize into chains. To steer this to transhuman issues: compounds like this clearly represent the outer envelope of what can be made using any process. With advanced mechanosynthesis we will likely be able to make some pretty bizarre chemicals, but there are limits on what will hold together.
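For anyone who wants to check the bookkeeping on the structures being argued about here, a minimal Python sketch using the open-source RDKit toolkit; RDKit itself and the SMILES encodings below are my own assumptions for illustration, not anything supplied in the thread.

from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# SMILES encodings of the two candidate structures (illustrative, hypothetical).
structures = {
    "open-chain cyanogen azide, N#C-N=N=N": "N#CN=[N+]=[N-]",
    "ring form with valences filled by H (1H-tetrazole)": "C1=NN=NN1",
}

for name, smiles in structures.items():
    mol = Chem.MolFromSmiles(smiles)                 # parse and sanitize; None if the valences do not work out
    formula = rdMolDescriptors.CalcMolFormula(mol)   # empirical formula from the connectivity
    rings = mol.GetRingInfo().NumRings()
    print("%-50s %-8s rings=%d" % (name, formula, rings))

On these encodings the open-chain molecule comes back as CN4 with no ring, matching the empirical formula given earlier in the thread, while the ring closed up with hydrogens comes back as CH2N4, the formula of 1H-tetrazole; of course the check says nothing about stability, which is the contested part.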
Maybe the great sport of nanochemists next century will be to push this extremely unstable envelope, trying to make absurdly unstable molecules. Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From brentn at freeshell.org Sat Dec 5 15:39:25 2009 From: brentn at freeshell.org (Brent Neal) Date: Sat, 5 Dec 2009 10:39:25 -0500 Subject: [ExI] Spirited molecules In-Reply-To: <20091205143539.2fa97247@secure.ericade.net> References: <20091205143539.2fa97247@secure.ericade.net> Message-ID: On 5 Dec, 2009, at 9:35, Anders Sandberg wrote: > David: >> How about a ring structure : >> >> N=C=N-N=N- (back to the first N) > > I doubt it, since the azide group is linear and the cyanide group is > also linear. Only one angle that can bend. > Actually, that molecule is ok. It's tetrazole. If you cyclize those 3 nitrogens, they no longer need to be linear (check the hybridization.) Not like those molecules are that stable, either. Most tetrazoles have commercial uses as chemical blowing agents - i.e., you compound them into plastics, and at a certain temperature that is higher than the polymer melt temperature, they decompose (NOT explosively), and the resulting nitrogen gases foam the plastic. My personal favorite, 5-phenyl-1H-tetrazole, has an aromatic ring hanging off the lone carbon in the tetrazole ring. That stabilizes the structure a bit, pushing the decomposition temperature up to around 230 C. Things like n-butyl tetrazoles decomp at much lower temperatures. Cheers, B -- Brent Neal, Ph.D. http://brentn.freeshell.org From ddraig at gmail.com Sat Dec 5 08:21:39 2009 From: ddraig at gmail.com (ddraig) Date: Sat, 5 Dec 2009 19:21:39 +1100 Subject: [ExI] Who is Ayn Rand? In-Reply-To: <462361.30206.qm@web58308.mail.re3.yahoo.com> References: <462361.30206.qm@web58308.mail.re3.yahoo.com> Message-ID: 2009/12/4 Robert Masters : > Alice really went a long way from that day she told her philosophy professor > she would rank among Aristotle and Plato. She actually became AYN. But > the crucial question is EXACTLY what went on in that incestuous Jewish ritual. > Did it include anal penetration? Sub/dom? Rape? Pissing? Shitting? > The public has a right to know. No it does not, this is just voyeurism, and it's disgusting. > Nathaniel Branden has admitted that they had sex, but stopped > short of a full confession. That won't do. After all, he was the one > who chose to reveal the "affair" as a justification for his own actions. > What were the DETAILS of the affair? Why do you care? How on earth is it relevant? Dwayne -- ddraig at pobox.com irc.deoxy.org #chat ...r.e.t.u.r.n....t.o....t.h.e....s.o.u.r.c.e... http://www.barrelfullofmonkeys.org/Data/3-death.jpg our aim is wakefulness, our enemy is dreamless sleep From spike66 at att.net Sat Dec 5 17:14:29 2009 From: spike66 at att.net (spike) Date: Sat, 5 Dec 2009 09:14:29 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <50763.12.77.168.244.1260012724.squirrel@www.main.nc.us> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <50763.12.77.168.244.1260012724.squirrel@www.main.nc.us> Message-ID: <93F5B83C0B034CF48AA8BB5E4C7C836E@spike> > ...On Behalf Of MB > ... > > > > Am I a just a weak fool to do this? Max > > No, you are a kind, gentle man who will care for his brother > in this time of great trial... MB Max do let me assure you sir, no one will EVER mistake you for a weak fool.
Anyone who reads the extropian principles http://www.maxmore.com/extprn3.htm will see that this was written by a kick-ass smart guy. Newer posters, do read over the extropian principles, thanks. spike From nebathenemi at yahoo.co.uk Sat Dec 5 17:17:40 2009 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sat, 5 Dec 2009 17:17:40 +0000 (GMT) Subject: [ExI] Moderation on the ExiCh list In-Reply-To: Message-ID: <310309.82472.qm@web27002.mail.ukl.yahoo.com> Robert wrote: "But I do remain resolute in that undefined moderation policies border on gestapo policies" I would beg to differ. In my experience, the people who cry out loud for lists of rules on accepted standards are either people who work in management for large corporations (and insist on issuing lists of soul-crushing instructions on how employees must not goof off in a lengthy "employee handbook") or lawyers who insist that the letter of the law is all that matters regardless of the spirit. Both of these types profoundly offend me. I much prefer places where the gentle hand of moderation seems to follow an agreement that by and large goes unspoken because there is no need to speak it. Maybe that's just me being terribly British about things, and believing the instruction "do not take the piss" covers 99% of all eventualities. (Thinking of unspoken agreements reminds me of a conversation I had with Anders Sandberg after an ExtroBritannia meeting, where he mentioned the trouble for a foreigner to understand British pub culture and how he could fit in and make small talk in this social environment. Perhaps it is too much to ask people to operate with minimal rules, but I still feel the effort is worth it). Tom From lcorbin at rawbw.com Sat Dec 5 18:00:52 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 05 Dec 2009 10:00:52 -0800 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> Message-ID: <4B1A9FD4.6090807@rawbw.com> BillK, Stefano, and David make some good points. BillK first: > It is not possible to specify an exact list of what subjects should > not be discussed on the Exi-chat list. Who knows what some new > visitor might want to write about? Yes, and obviously an exact list of what should not be discussed isn't possible (if we want to be reasonable), and likewise, I claim neither can there be a list of what reasonably *should* be discussed. Therefore I have a problem with BillK's suggestion that we try to limit what is said by the making of some list which might include > 1) - Does it involve transhumanism at all? > 2) - Does it help the transhumanism movement to make progress either > by interesting discussion of technology, etc. or by spreading > knowledge and becoming more well-known and accepted by the general > public. > 3) - Does it help bonding within the transhumanist group? Think of the countless times when weird things that seem to come out of nowhere are simply striking, or informative in some unexpected way. Should we never mention Scientology? Math? Stefano writes > I think that both the moderation when in doubt should err on the side > of the freedom of speech (and be very vigilant with regard to its own > personal and ideological biases) AND that a moderation should exist. I think that that's very wise. Each single one of Stefano's phrases right here deserves commendation.
> This not only as a matter of ideological or aesthetical taste, but for > very practical reasons which have to do with its continuing viability > and success. You said it!! Now Stefano then takes what is to me a very different tack when he tries by example what "off-topic" to him is: > The real point of moderation and the real (and only) cardinal sin > in mailing lists is IMHO Off-Topic. Flames are off-topic. Spam is > off-topic. Ad hominem are off-topic. Tireless single-issue evangelism > is off-topic. Bilateral chit-chat is off-topic. Right: those items (and doubtless more) make moderation necessary. But describing them as "off-topic" doesn't seem quite accurate! How are wanderings into atheism vs. religion exactly germane to supposed functioning of the list? How indeed are technical arguments about global warming germane? Some may think these "off-topic", and that's why I claim that the whole concept of *off-topic* ought to be avoided. The spam, flames, ad hominem, ceaseless evangelism, etc., are merely... unwanted. And the most important word there is "etc.". David, long time reader, writes > In regards to this whole moderation debate, if making the moderation > rules explicit is what it takes to keep Mr Bradbury on the exilist, > then I am very much in favour of doing so. But making moderation rules totally explicit seems impossible, (as I argued above). > The recent return of the "heavyweight" thinkers and posters is, > from my point of view very welcome. Absolutely! Lists die when you discourage your best contributors. As Stefano said, moderation should err on the side of freedom of speech. > I read this list because it is one of the few places where ideas are > promulgated and discussed based on science and reality rather than > politics and personality... Welcome back Robert, I hope you stay. And David is doubtless speaking for countless others! For all of you who are always sending Spike emails denouncing certain posters and wanting to suppress them, please inhale deeply and try to remember the often unseen and totally unanticipated benefits of liberty and freedom of speech. Why does it just kill you to hear someone say something totally outrageous once in a while?? Lee From stefano.vaj at gmail.com Sat Dec 5 18:14:54 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 5 Dec 2009 19:14:54 +0100 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <4B1A9FD4.6090807@rawbw.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> Message-ID: <580930c20912051014l31f84386r72c83a1bb0d9436@mail.gmail.com> 2009/12/5 Lee Corbin : > Now Stefano then takes what is to me a very different tack > when he tries by example what "off-topic" to him is: > >> The real point of moderation and the real (and only) cardinal sin >> in mailing lists is IMHO Off-Topic. Flames are off-topic. Spam is >> off-topic. Ad hominem are off-topic. Tireless single-issue evangelism >> is off-topic. Bilateral chit-chat is off-topic. > > Right: those items (and doubtless more) make moderation > necessary. But describing them as "off-topic" doesn't > seem quite accurate! How are wanderings into atheism vs. > religion exactly germane to supposed functioning of the > list? How indeed are technical arguments about global > warming germane? Some may think these "off-topic", and > that's why I claim that the whole concept of *off-topic* > ought to be avoided. 
Why, ExI-chat is very peculiar in that it does not really have a topic in any traditional sense, or rather the topic can probably be described as: "theoretical and other discussions on whatever which may be of interest of polite, intelligent transhumanists at the moment". Under such description, I think that the examples I made before would stand, as well as perhaps long single-issue agenda rants, but one would not include in an OT category the interesting, albeit sometimes perplexingly disparate or over-technical, discussions which keep popping up. ;-) -- Stefano Vaj From spike66 at att.net Sat Dec 5 18:16:15 2009 From: spike66 at att.net (spike) Date: Sat, 5 Dec 2009 10:16:15 -0800 Subject: [ExI] we die alone while ogling the divers In-Reply-To: <4B1A4D78.2080709@livelib.com> References: <182452.64981.qm@web58301.mail.re3.yahoo.com><00A4F8668B384C2584840D149DD22DCE@spike> <4B1A4D78.2080709@livelib.com> Message-ID: <48C0CC05520447BE819381E282CA279E@spike> > ...On Behalf Of David C. Harris ... > Subject: Re: [ExI] we die alone while ogling the divers > > spike wrote: > > ... > > If you have drowning dreams very often it might indicate > > sleep apnea, so you need to see the medics about that. > > > ...Now I have a CPAP (constant > pressure airway passage) machine that blows my throat open > when I relax... :-) Hey cool, I wonder if that machine can be modified.
spike From thespike at satx.rr.com Sat Dec 5 18:23:11 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 05 Dec 2009 12:23:11 -0600 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <4B1A9FD4.6090807@rawbw.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> Message-ID: <4B1AA50F.6030901@satx.rr.com> On 12/5/2009 12:00 PM, Lee Corbin wrote: > Why does it just kill you > to hear someone say something totally outrageous once in a while?? There might be a clue in that word "outrageous" and more exactly the "rage" part, since outrage can enrage, and bringing rage down upon one's head is not a great idea. Yes, the difficulty is that some people are all too easily enraged and eager to lose control, so arguably they are the ones who need (self)moderation. But to rehearse the obvious, this list is not a public square, yet anything posted here can be read by anyone on the planet now or in the future. Since the list owners and most other transhumanists don't wish to be associated with outrageous proposals to (say) nuke or poison all the Muslims in the world or even in a given country, or to forcibly banish blacks and other "non-white" people "back to their own countries", it is very reasonable to step in and remove or block posts making such suggestions--even as thought experiments. If only on the same grounds that one is well advised not to make bomb jokes while boarding a plane. Damien Broderick From thespike at satx.rr.com Sat Dec 5 18:25:08 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 05 Dec 2009 12:25:08 -0600 Subject: [ExI] we die alone while ogling the divers In-Reply-To: <48C0CC05520447BE819381E282CA279E@spike> References: <182452.64981.qm@web58301.mail.re3.yahoo.com><00A4F8668B384C2584840D149DD22DCE@spike> <4B1A4D78.2080709@livelib.com> <48C0CC05520447BE819381E282CA279E@spike> Message-ID: <4B1AA584.3050407@satx.rr.com> On 12/5/2009 12:16 PM, spike wrote: >> ...Now I have a CPAP (constant >> > pressure airway passage) machine that blows my throat open >> > when I relax... :-) > > Hey cool, I wonder if that machine can be modified. They have machines for that already, Spike. From lcorbin at rawbw.com Sat Dec 5 19:07:59 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 05 Dec 2009 11:07:59 -0800 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <4B1AA50F.6030901@satx.rr.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> <4B1AA50F.6030901@satx.rr.com> Message-ID: <4B1AAF8F.9090505@rawbw.com> Damien Broderick wrote: > Lee Corbin wrote: > >> Why does it just kill you >> to hear someone say something totally outrageous once in a while?? > > There might be a clue in that word "outrageous" and more exactly the > "rage" part, since outrage can enrage, and bringing rage down upon one's > head is not a great idea. Why so afraid? Surely no lawsuits. Calumny? Guilt by association? (Yes, that latter is it, I guess.) > Yes, the difficulty is that some people are all too easily > enraged and eager to lose control, so arguably they are > the ones who need (self)moderation. Yes, quite. Now the "being enraged" part I understand. The part I don't get is the tyrannical temperament of needing to denounce (usually secretly) to the authorities that power be used to turn off these "abhorrent" ideas. I myself have just *never* had the urge to contact the moderators offlist and demand the suppression of this or that. 
> But to rehearse the obvious, this list is not a public > square, yet anything posted here can be read by anyone > on the planet now or in the future. Since the list owners and > most other transhumanists don't wish to be associated with outrageous > proposals... How can the vast multitudes out there not see that what is said here is *never* official policy? "To be associated"?? I really would think that---ESPECIALLY FROM THE CONDEMNATIONS FROM THE OTHERS ON THE LIST---that everyone perusing the list could see what is going on: Some people believe things not widely believed by others. Do you really want to be afraid because it may become known that someone you may know is a Communist? Do we want to give into fear of things like that? "Oh my God. Damien B. once posted on the *same* list where someone once said terrible things!" "Oh my God. I started a list, and someone (a single lone voice) had the temerity to say that it would be a good thing for China to sink beneath the waves, and even though this was clearly not a popular view on my list, still someone somewhere will do me or my movement great harm..." > to (say) nuke or poison all the Muslims in the world or even > in a given country, or to forcibly banish blacks and other "non-white" > people "back to their own countries", it is very reasonable to step in > and remove or block posts making such suggestions--even as thought > experiments. If only on the same grounds that one is well advised not to > make bomb jokes while boarding a plane. One cannot make bomb-jokes while boarding a plane because of the hysterical temperament resulting from 9/11. So you think it follows that we here must imitate such mindless idiocy? If a lone poster says that all Muslims should be slowly lowered down into vats of hot Caro's acid, starting with the toes, and a half-dozen people jump on him and say he's wrong & he's crazy... what exactly are you afraid of will happen? That a fatwa against *all* Extropians will be announced? Lee From pharos at gmail.com Sat Dec 5 19:39:31 2009 From: pharos at gmail.com (BillK) Date: Sat, 5 Dec 2009 19:39:31 +0000 Subject: [ExI] climategate again In-Reply-To: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: On 12/1/09, Max More wrote: > I'm feeling pretty lonely on this issue. Just about everyone on all sides > of the issue seem to be very certain of what's going on. Despite > considerable reading of clashing sources (or because of it), I remain highly > unsure. > > That's the point of the PR campaign backed by Exxon, GM, etc. Make people unsure so that no legislation gets passed to control the industries despoiling the world. They don't have to prove anything - just create a cloud of confusion. Exactly the same as the tobacco industry did for years to avoid restrictive legislation. Book Review: ?This is a story of betrayal, a story of selfishness, greed, and irresponsibility on an epic scale.? That?s how James Hoggan opens his newly published book Climate Cover-Up: The Crusade to Deny Global Warming. Hoggan initially thought there was a fierce scientific controversy about climate change. Sensibly he did a lot of reading, only to find to his surprise that there was no such controversy. How did the public confusion arise? There was nothing accidental about it. As a public relations specialist, Hoggan observed with gathering horror a campaign at work. 
"To a trained eye the unsavoury public relations tactics and techniques and the strategic media manipulation became obvious. The more I thought about it, the more deeply offended I became." As far back as 1991 a group of coal-related organisations set out, in their own words, "to reposition global warming as a theory (not fact)" and "supply alternative facts to support the suggestion that global warming will be good." This was the pattern of the work done in succeeding years by a variety of corporations and industry associations who devoted considerable financial resources to influence the public conversation. They used slogans and messages they had tested for effectiveness but not accuracy. They hired scientists prepared to say in public things they could not get printed in the peer-reviewed scientific press. They took advantage of mainstream journalists' interest in featuring contrarian and controversial science stories. They planned "grassroots" groups to give the impression that they were not an industry-driven lobby. New Zealand's Climate "Science" Coalition and the International Coalition it helped to found fit this purpose nicely. He urges his readers not to take him at face value but to do some checking of his material and satisfy themselves that it is reliable. Nevertheless the activity he describes is rightly characterised as betrayal, selfishness, greed and irresponsibility. The people who have launched the highly successful campaign of denial and delay are not attending to the work of a body of outstanding scientists although that work is of utmost import for human life. They have turned what should have been a public policy dialogue driven by science into a theatre for a cynical public relations exercise of the most dishonest kind. Instead of looking at the seriousness of the warnings they have sensed a threat to their business profitability and made that their motivating factor. They have spread a false complacency and the result has been a twenty year delay in addressing an issue of high urgency. ------------------- BillK From pharos at gmail.com Sat Dec 5 20:07:04 2009 From: pharos at gmail.com (BillK) Date: Sat, 5 Dec 2009 20:07:04 +0000 Subject: [ExI] climategate again In-Reply-To: References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: Also: Book Review: "In Doubt Is Their Product, David Michaels gives a lively and convincing history of how clever public relations has blocked one public health protection after another. The techniques first used to reassure us about tobacco were adapted to reassure us about asbestos, lead, vinyl chloride-and risks to nuclear facilities workers, where Dr. Michaels' experience as the relevant Assistant Secretary of Energy gave him an inside view. And if you're worried about climate change, keep worrying, because the same program is underway there."--Donald Kennedy, Editor-in-Chief, Science "We live in an age of unprecedented disinformation, misinformation, and outright lying by those in power. This important book shows who profits by misleading the public-and who ultimately pays with their health."--Eric Schlosser, author of Fast Food Nation "Doubt is our product," a cigarette executive once observed, "since it is the best means of competing with the 'body of fact' that exists in the minds of the general public. It is also the means of establishing a controversy."
In this eye-opening expose, David Michaels reveals how the tobacco industry's duplicitous tactics spawned a multimillion dollar industry that is dismantling public health safeguards. Product defense consultants, he argues, have increasingly skewed the scientific literature, manufactured and magnified scientific uncertainty, and influenced policy decisions to the advantage of polluters and the manufacturers of dangerous products. To keep the public confused about the hazards posed by global warming, second-hand smoke, asbestos, lead, plastics, and many other toxic materials, industry executives have hired unscrupulous scientists and lobbyists to dispute scientific evidence about health risks. In doing so, they have not only delayed action on specific hazards, but they have constructed barriers to make it harder for lawmakers, government agencies, and courts to respond to future threats. The Orwellian strategy of dismissing research conducted by the scientific community as "junk science" and elevating science conducted by product defense specialists to "sound science" status also creates confusion about the very nature of scientific inquiry and undermines the public's confidence in science's ability to address public health and environmental concerns Such reckless practices have long existed, but Michaels argues that the Bush administration deepened the dysfunction by virtually handing over regulatory agencies to the very corporate powers whose products and behavior they are charged with overseeing. In Doubt Is Their Product Michaels proves, beyond a doubt, that our regulatory system has been broken. He offers concrete, workable suggestions for how it can be restored by taking the politics out of science and ensuring that concern for public safety, rather than private profits, guides our regulatory policy. -------------------------- BillK From asyluman at gmail.com Sat Dec 5 20:28:55 2009 From: asyluman at gmail.com (Will Steinberg) Date: Sat, 5 Dec 2009 15:28:55 -0500 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <4B1AAF8F.9090505@rawbw.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> <4B1AA50F.6030901@satx.rr.com> <4B1AAF8F.9090505@rawbw.com> Message-ID: An easy enough rule to remember: if it seems (and this really is pretty easy to ascertain) like others on the list will respond to your message with intelligent interest in the direction of a progressive conversation--which would be a conversation where it can be seen that, as the talking continues, the understanding of the topic rises and so do the complexities of the ideas within. for example: If I hypothesize on race relations and imagine a thought experiment that would be considered highly immoral or illegal, the topic can still progress towards real intelligent results. Many, many thought experiments include scenarios where someone is killed to illustrate consciousness, decision, and the like. People here are on the level to understand that these do not suggest the writer is in favor of murder; they should be able to judge when something is hypothesis and when something is personal opinion. but if your message will produce a hovering, worthless conversation--an argument, an intelligentsian circle-jerk, et cetera--you should probably not post it. On Sat, Dec 5, 2009 at 2:07 PM, Lee Corbin wrote: > Damien Broderick wrote: > > Lee Corbin wrote: >> >> Why does it just kill you >>> to hear someone say something totally outrageous once in a while?? 
>>> >> >> There might be a clue in that word "outrageous" and more exactly the >> "rage" part, since outrage can enrage, and bringing rage down upon one's >> head is not a great idea. >> > > Why so afraid? Surely no lawsuits. Calumny? Guilt by association? (Yes, > that latter is it, I guess.) > > > Yes, the difficulty is that some people are all too easily >> > > enraged and eager to lose control, so arguably they are > >> the ones who need (self)moderation. >> > > Yes, quite. Now the "being enraged" part I understand. > > The part I don't get is the tyrannical temperament of > needing to denounce (usually secretly) to the authorities > that power be used to turn off these "abhorrent" ideas. > I myself have just *never* had the urge to contact the > moderators offlist and demand the suppression of this or that. > > > But to rehearse the obvious, this list is not a public >> > > square, yet anything posted here can be read by anyone > > on the planet now or in the future. Since the list owners and > >> most other transhumanists don't wish to be associated with outrageous >> proposals... >> > > How can the vast multitudes out there not see that what > is said here is *never* official policy? "To be associated"?? > I really would think that---ESPECIALLY FROM THE CONDEMNATIONS > FROM THE OTHERS ON THE LIST---that everyone perusing the list > could see what is going on: Some people believe things not > widely believed by others. > > Do you really want to be afraid because it may become known > that someone you may know is a Communist? Do we want to give > into fear of things like that? > > "Oh my God. Damien B. once posted on the *same* list where > someone once said terrible things!" > > "Oh my God. I started a list, and someone (a single lone > voice) had the temerity to say that it would be a good > thing for China to sink beneath the waves, and even though > this was clearly not a popular view on my list, still > someone somewhere will do me or my movement great harm..." > > > to (say) nuke or poison all the Muslims in the world or even in a given >> country, or to forcibly banish blacks and other "non-white" people "back to >> their own countries", it is very reasonable to step in and remove or block >> posts making such suggestions--even as thought experiments. If only on the >> same grounds that one is well advised not to make bomb jokes while boarding >> a plane. >> > > One cannot make bomb-jokes while boarding a plane because > of the hysterical temperament resulting from 9/11. So you > think it follows that we here must imitate such mindless > idiocy? > > If a lone poster says that all Muslims should be slowly > lowered down into vats of hot Caro's acid, starting with > the toes, and a half-dozen people jump on him and say > he's wrong & he's crazy... what exactly are you afraid > of will happen? That a fatwa against *all* Extropians > will be announced? > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Dec 5 20:54:18 2009 From: spike66 at att.net (spike) Date: Sat, 5 Dec 2009 12:54:18 -0800 Subject: [ExI] climategate again In-Reply-To: References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: > ...On Behalf Of BillK > ... 
> Book Review: > "This is a story of betrayal, a story of selfishness, greed, > and irresponsibility on an epic scale." That's how James > Hoggan opens his newly published book Climate Cover-Up: The > Crusade to Deny Global Warming... BillK Ja, BillK, I agree partially, but this would have played much better a month ago than it will now. It was the scientists who have been apparently caught doing some dirty business, corrupting or influencing the peer review process, sloppy handling of data etc. That being said, I noticed three things: there is a huge population which believe that global warming is a bad thing, but that crowd did not rejoice at the possibility that the threat was exaggerated. Our own senator (Boxer) wants to direct the entire investigation at finding who leaked the incriminating email, and never mind the actual contents of that leak. Good luck with that irrelevant task. The AGW-is-bad crowd should be filled with hope, happy as hell, but they seem bitter and angry. Second, I notice the British are talking about this more than the Yanks. Scandals are supposed to come out of the US and Nigeria, not Britain. Third, the debate seems to be: in the light of these leaks, is AGW true or false? But a whole bunch of us realize AGW is probably true, but that isn't the critical question. The critical question is: how much? If we try to say the warming was not exaggerated at all by the CRU crowd, how do we know that for sure? And if we say it wasn't exaggerated, how do we know they didn't underestimate it? And if they did underestimate it, by how much? And if they suppressed dissenting papers by a corrupted peer review process, where are those authors now? Where are those papers? And if they didn't corrupt the peer review process, why did Prof. Jones make that comment about somehow keeping two papers out of the journal (apparently before he had seen them) and blackballing the Climate Science journal? BillK, I am not denying climate change, but the changers are carrying the burden of proof, and there is a definite suspicion that scientific misbehavior took place. We have some work to do before we are ready to draw conclusions. spike From nanite1018 at gmail.com Sat Dec 5 22:02:51 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sat, 5 Dec 2009 17:02:51 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <93F5B83C0B034CF48AA8BB5E4C7C836E@spike> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <50763.12.77.168.244.1260012724.squirrel@www.main.nc.us> <93F5B83C0B034CF48AA8BB5E4C7C836E@spike> Message-ID: >>> Am I a just a weak fool to do this? Max >> >> No, you are a kind, gentle man who will care for his brother >> in this time of great trial... MB > > Max do let me assure you sir, no one will EVER mistake you for a > weak fool. > Anyone who reads the extropian principles... will see that this was > written by an kick-ass smart guy. > > Newer posters, do read over the extropian principles, thanks. > > spike Agreed, in my opinion the extropian principles are a great distillation of a pro-reason, pro-science, pro-LIFE worldview, and that alone shows me that Max is anything but a fool. It has also occurred to me that I didn't express my sympathies with you, Max, and your brother. I apologize (I didn't really know if it was appropriate, I'm not good at that kind of thing). I hope everything turns out all right. 
Joshua Job nanite1018 at gmail.com From shannonvyff at yahoo.com Sat Dec 5 22:11:27 2009 From: shannonvyff at yahoo.com (Shannon) Date: Sat, 5 Dec 2009 14:11:27 -0800 (PST) Subject: [ExI] Aggressive Atheism, by Pat Condell In-Reply-To: References: Message-ID: <254885.29236.qm@web30807.mail.mud.yahoo.com> Max, Thank you for your illustration of empathy and the proper way to treat a religious friend or family member. One thing I've seen in the older kids I taught in Sunday School in Austin (the middle school aged kids) was that a few of them where very mad at religious kids in their schools, they wanted to debate them and basically show them how silly they were. I had a hard time explaining (or more-so having them agree) that we can respect other peoples' beliefs while still politely explaining ours. My son in particular is taken in by Pat Condell's type of sometimes spot on treatment of theology, but I cringe at the lack of respect, the lack of empathy. I avoid letting my highly verbal and proud atheist son watch the more rough proponents of evolution, including Dawkins (or we talk about them if he watches them, he did get to see Michael Shermer's debate in Austin this past Spring--and Shermer was respectful for the most part)-- My personal preference is for people like Kenneth Miller who work with religious theory (http://www.amazon.com/Only-Theory-Evolution-Battle-Americas/dp/B001KVZ6RU/ref=pd_sim_b_1). Of course, not all interactions with religious friends and family are simply about evolution, they often become about ones' soul-hell, heaven and eternity. More-so it also is about approval, or kinship between that friend or relative. My son has a grandfather who is quite religious and fears for his grand-children's souls, he's bought them Christian camps, sends Christian books, and even paid tuition at a Christian private school for my son. (That was the school that famously my son spent less than half a year at, as he often got in fights with other kids and ended up debating the existence of God with the principal, this was at age 4 when he was in first grade-he was reading at a third grade level). The point is, that I think some atheists feel their belief is not validated unless they get the religious person to drop their beliefs. I've seen this in my son-and try to show him that you can have a different understanding of how the universe works and know you are right-while respecting the other persons' take on life. It is hard right now-and I think your note really caught my attention Max because my son is in public school in England (as you know we moved from TX over the summer) and you are from England. Here the public school is quite religious, they pray to God, the Vicar of the Church of England Calverley (our village between Leeds and Bradford) comes and gives a weekly sermon to the school. I've told my son that he can opt out, there is a Jehovah's Witness who stays in class to read when the rest of the class goes out to do religious activities (actually one of the best friends of my youngest daughter, we've had the girl to our home a few times). The girl however sits out parties as well, as they are seen as part of a different religious tradition, and they have a philosophy of treating every day as special--not having days that are above other days. 
My son today told me he felt uncomfortable during the praying (this was after we attended a school hosted religious Christingle earlier today)--we discussed the God language, the "be a Christian language" and the "if you were baptized you are a saint" language. Hi is conflicted however on if he wants to go or sit out. He has decided that he wants to go to be with his friends, its a social thing. He had considered sitting out, but decided to just be polite and follow along. It sort of breaks my heart as I know he wants to argue, and I don't want him saying things that he doesn't believe in, at the same time I'm glad he's going along with it all and being polite. The kids at the school they attend are all quite religious, one of my son's friends got mad at him for saying Santa was not real (and they are all in 6th form). Its an ongoing thing with our friends and family, read the books they give us, take the knowledge from them-gently say how you disagree-but respect their belief and appreciate them in the end. (after you say that their higher power will allow cryonics to work if they have more work for them ;-) -- okthat's an aside, but we all do our own variations of how to fit transhumanist ideals into scripture ;-) ) Health, Happiness, Wisdom & Longevity :-) -- best wishes from --Shannon Vyff -- Alcor Area Readiness Team Coordinator, Venturist Director, ImmInst Chair and Methuselah Foundation 300 member, An author of "The Scientific Conquest of Death":http://www.amazon.com/Scientific-Conquest-Death-Immortality-Institute/dp/9875611352, Author of the children's transhumanist adventure book "21st Century Kids": http://www.amazon.com/21st-Century-Kids-Middle_english-Shannon/dp/1886057001 ------------------------------ Message: 26 Date: Fri, 04 Dec 2009 22:20:13 -0600 From: Max More To: Extropy-Chat Subject: Re: [ExI] pat condell's latest subtle rant Message-ID: <200912050420.nB54KLCl021144 at andromeda.ziaspace.com> Content-Type: text/plain; charset="us-ascii"; format=flowed Regarding: Aggressive Atheism, by Pat Condell http://www.youtube.com/watch?v=yjO4duhMRZk I tried to watch this twice before, but stopped the video due to being put off by Condell's manner. Tonight, I finally watched the whole thing. It made me feel like I was 18 years old again. An aggressive atheist. A guy who went to classes wearing badges (US: buttons) saying things like "legalize heroin", "taxation is theft", and "God is dead". It reminded me of confidently -- nay, arrogantly -- telling the religious buffoons what's what. And you know what? Every thing Condell says is basically right. Yet, his attitude and approach, while refreshing, leaving me feeling that his message is purely and pointlessly a preaching-to-the-choir approach. Its value is completely one of entertainment. No, okay, it may also kick some atheists in the ass and inspire them to do something more active to combat the major problems that come with religious thinking. While Condell's aggressive approach definitely has a degree of wisdom (and a load of intellectual good sense), is it really appropriate to, or useful for, or humanistic in, dealing with all situations? For instance: My half-brother, who I just learned has been diagnosed with serious cancer, has asked me to read a novel that I see is extremely popular among the religious (Christian in particular): The Shack. 
Relevant background: This is a (considerably older) half-brother -- simply "brother" as far as I knew until a few years ago -- who, when I was in my teens and had recently lost his beliefs... or rather, had thrown off the shackles of... religion, insisted (at a Christmas family gathering), that I would certainly go to Hell forever because I didn't believe that Jesus was the son of God. A Pat Condell-style atheist might tell simply tell my brother that he is an idiot to believe this crap. I agreed to actually read this book and -- unless it really is *monumentally* stupid -- I intend to discuss it with my brother exploratively rather than explaining abruptly to him why his decades-long religious beliefs are moronic. Am I a just a weak fool to do this? Is Condell's attitude and approach always useful/appropriate/effective/wise? Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- ------------------------------ _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat End of extropy-chat Digest, Vol 75, Issue 8 ******************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Dec 5 23:18:40 2009 From: pharos at gmail.com (BillK) Date: Sat, 5 Dec 2009 23:18:40 +0000 Subject: [ExI] climategate again In-Reply-To: References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: On 12/5/09, spike wrote: > BillK, I am not denying climate change, but the changers are carrying the > burden of proof, and there is a definite suspicion that scientific > misbehavior took place. > We have some work to do before we are ready to draw conclusions. > > As Keith (I think) pointed out, the work that the world should be doing to alleviate global warming is work that the world should be doing anyway. And the work should have started 20 years ago. Even if global warming wasn't happening, the world should be moving to renewable energy sources anyway. Solar power, wind power, fuel cells, nuclear power, geothermal power, etc. The entrenched energy industries of coal and oil are fighting tooth and nail to protect their industry and profits and delaying the transfer to alternative fuels as long as possible. One hopeful sign is that the oil industry appears to be diversifying as oil production starts dropping. So they will be supporting green energy once they become the new green energy industry and can profit from it. The trouble is that it might be too late by then for large areas of the world. BillK From thespike at satx.rr.com Sat Dec 5 23:29:57 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 05 Dec 2009 17:29:57 -0600 Subject: [ExI] HM Message-ID: <4B1AECF5.4060103@satx.rr.com> Brain of world's best-known amnesiac mapped By Elizabeth Landau, CNN December 3, 2009 7:21 p.m. EST (CNN) -- Henry Molaison, known as H.M. in scientific literature, was perhaps the most famous patient in all of brain science in the 20th century. "My daddy's family came from the South and moved North, they came from Thibodaux Louisiana, and moved north," Molaison would say. "My mother's family came from the North and moved South." Within 15 minutes he might repeat this exact statement twice more, unable to remember that he'd already said it. Scientists studied him for most of his adult life. 
This week, researchers are dissecting his brain to figure out exactly which structures contributed to his amnesia, which he suffered for more than 50 years. At the Brain Observatory at the University of California, San Diego, researchers began slicing H.M.'s brain Wednesday afternoon and streaming the procedure live to the world on their Web site. Watch it live "We're doing it, this sort of marathon through the brain," said Jacopo Annese, director of the Brain Observatory. By Thursday afternoon, the scientists were less than halfway through the brain, but the process was going "miraculously well," he said. A camera is taking a picture of each individual slice, and these pictures will also be made available on the Web. The goals are to map the human brain in new ways and correlate individual structures with specific functions such as memory. The exciting part comes Thursday night as scientists probe deeper into the part of the brain that had been removed more than 50 years ago, causing the patient's memory abnormalities, he said. The procedure will reveal more about Molaison's brain than a high-resolution MRI scan could, said Suzanne Corkin, professor of behavioral neuroscience at the Massachusetts Institute of Technology, who studied and worked with Molaison since 1962. Annese likened the exploration of Molaison's brain to the search for the formation of colors in an impressionistic painting. If you look at a very small section of the painting up close, you see that many different colors together form the pink streaks that are visible when you step back and look at the whole thing, he said. Molaison, born in 1926, had been suffering epileptic seizures since childhood, and underwent an operation in 1953 remove the part of the brain doctors believed were causing the seizures. They took out much of the hippocampus, a horseshoe-shaped structure that plays a major part in long-term memory. The result was that, after the surgery, the patient could not form new memories that lasted more than 20 or 30 seconds, Corkin said. The operation did, however, succeed in reducing his seizures, and "he paid a high price for that benefit," she said. Corkin first encountered Molaison in 1962, when she was a graduate student at the Montreal Neurological Institute at McGill University. As part of her thesis project, she studied him and two other patients who had had brain surgery to treat epilepsy, with no idea that Molaison would become so important in scientific research. "He's taught us a lot about how memories are formed in the brain," said Natalie Schenker, research scientist at the Brain Observatory. "Now that he has died and his brain can be looked at anatomically, we can make an even better association between which parts of the brain were responsible for memory formation." After the operation, he went home to live with his mother and father, Corkin said. He continued living with his mother after his father died until both mother and son went to live with a relative. "If you asked him how old he was, he always guessed younger, but he never said 27," which is how old he was at the time of the surgery, Corkin said. Even before the operation, Molaison enjoyed doing crossword puzzles and believed they helped his memory, Corkin said. He could retrieve any word he knew before the brain surgery but could not learn any words that came into his vocabulary afterward. He spent a lot of time at home doing these puzzles and watching television, she said. 
Molaison's last 28 years of his life were in a nursing home in Connecticut, where the woman who took care of him near the end called him Teddy, like a teddy bear, Corkin said. Molaison died at age 82 of respiratory pneumonia. He also suffered from dementia for reasons that did not stem from his 1953 surgery, Corkin said. It is still unknown whether he developed Alzheimer's disease or vascular dementia, a question that can also be examined with the dissection. By the time he passed away December 2 of last year, plans had already been set to study his brain. Corkin had long decided that it was imperative to examine it post-mortem, and the patient and his legal conservator agreed to sign a donation form in 1992. Then, in 2002, Corkin assembled a team of scientists to decide what they would do, minute by minute, upon his death. Researchers have spent the last year preparing for the process of slicing Molaison's brain. Their technology allows them to cut the brain at a width of 70 microns, and will yield about 2,600 slices total, Annese said. For the total dissection, the brain has been cooled to a temperature of 40 degrees below zero Celsius. The entire process, streamed live on the lab Web site, is expected to last about 30 hours, and will probably go into Friday night, Annese said. Although Annese said he's nervous when he's more than 10 minutes away from the brain -- there was a minor mishap with the cooling liquid Thursday morning -- generally everyone in the lab is calm and relaxed during the procedure, he said. For the past three months, the team has gone through "dress rehearsals" with other brains, he said. Thursday around noon, there were 17,000 people watching the live video of the brain cutting, he said. The Web site has had more than 3 million hits. From hkeithhenson at gmail.com Sun Dec 6 04:00:04 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 5 Dec 2009 20:00:04 -0800 Subject: [ExI] climategate again Message-ID: On Sat, Dec 5, 2009 at 2:38 PM, "spike" wrote: snip > Third, the debate seems to be: in the light of these leaks, is AGW true or > false? ?But a whole bunch of us realize AGW is probably true, but that isn't > the critical question. ?The critical question is: how much? No it is not. None of the climate change models show serious effects before several decades into the future, well beyond the point limited availability of low cost energy will be killing people in famines and resource wars at the 100 million a year rate. The entire debate is a distraction from the real problem, which is to come up with a new, low cost, primary source of energy. The real problem is getting serious consideration by various groups of people. It's a shame Extropians are not among them. Keith From dharris at livelib.com Sun Dec 6 08:51:17 2009 From: dharris at livelib.com (David C. Harris) Date: Sun, 06 Dec 2009 00:51:17 -0800 Subject: [ExI] we die alone while ogling the divers In-Reply-To: <4B1AA584.3050407@satx.rr.com> References: <182452.64981.qm@web58301.mail.re3.yahoo.com><00A4F8668B384C2584840D149DD22DCE@spike> <4B1A4D78.2080709@livelib.com> <48C0CC05520447BE819381E282CA279E@spike> <4B1AA584.3050407@satx.rr.com> Message-ID: <4B1B7085.60503@livelib.com> Damien Broderick wrote: > On 12/5/2009 12:16 PM, spike wrote: > >>> ...Now I have a CPAP (constant >>> > pressure airway passage) machine that blows my throat open >>> > when I relax... :-) >> >> Hey cool, I wonder if that machine can be modified. > > They have machines for that already, Spike. 
I could FEEL that there was something wrong about that word, and you found the alternate interpretation, Spike! Dang, I wish I'd trusted my instinct. "Proofread twice, Send once", I guess. From anders at aleph.se Sun Dec 6 12:07:49 2009 From: anders at aleph.se (Anders Sandberg) Date: Sun, 06 Dec 2009 13:07:49 +0100 Subject: [ExI] HM In-Reply-To: 4B1AECF5.4060103@satx.rr.com Message-ID: <20091206120749.4750e397@secure.ericade.net> Damien Broderick wrote: > Brain of world's best-known amnesiac mapped It was a rather odd voyeuristic feeling to watch the brain being sliced. This is a superstar brain, on par with Broca's brain in Musée de l'Homme. Webcasting it in real-time (with messages from the lab shown on colourful post-it notes placed near the cameras) is the descendant of the Trojan Room coffee pot of the early 90's. I fear that the first brain emulation scan will be even less interesting to watch. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From eugen at leitl.org Sun Dec 6 12:16:00 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 6 Dec 2009 13:16:00 +0100 Subject: [ExI] HM In-Reply-To: <20091206120749.4750e397@secure.ericade.net> References: <20091206120749.4750e397@secure.ericade.net> Message-ID: <20091206121600.GX17686@leitl.org> On Sun, Dec 06, 2009 at 01:07:49PM +0100, Anders Sandberg wrote: > I fear that the first brain emulation scan will be even less interesting to watch. How much would we need for a mouse right now, only a few years? There are a lot of cm^3 in a human brain, and scanners are expensive. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From bbenzai at yahoo.com Sun Dec 6 11:57:04 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 6 Dec 2009 03:57:04 -0800 (PST) Subject: [ExI] pat condell's latest subtle rant In-Reply-To: Message-ID: <79260.84482.qm@web32007.mail.mud.yahoo.com> From: Lee Corbin wrote:
> So, since atrocities and religion have always been with us, how is it that so many people always manage to suppose that the latter is truly responsible for the former? I'm not convinced. I agree that humans are capable of, and have done (and still do), pretty terrible things without needing religious encouragement, and religion is very good at playing on these tendencies, and exaggerating them. My main point is not about physical cruelty, even though that's terrible enough, but about the evil that religion does to people's minds. Never mind the forced ignorance (thankfully getting harder to enforce as technology spreads and advances), it's the poisoning of minds that I see as the most evil thing. The way curiosity is killed off and rational thought punished and ultimately destroyed. Yes, burning people alive is definitely a bad thing (and I suspect a lot more of this has been done in a religious context than a secular one), not to mention stoning and all kinds of other inventive horrors, but to deliberately blunt someone's mind to the point where it's almost useless for anything but doing the same to other people (especially children), that, to me, is evil with a capital E. And as far as I know, religions are far and away the biggest culprits in this kind of behaviour. Ben Zaiboc From eugen at leitl.org Sun Dec 6 12:27:56 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 6 Dec 2009 13:27:56 +0100 Subject: [ExI] climategate again In-Reply-To: References: <200912011647.nB1GlX1G025778@andromeda.ziaspace.com> Message-ID: <20091206122756.GA17686@leitl.org> On Sat, Dec 05, 2009 at 11:18:40PM +0000, BillK wrote: > As Keith (I think) pointed out, the work that the world should be > doing to alleviate global warming is work that the world should be > doing anyway. > And the work should have started 20 years ago. It's more like 40 years ago, but 20 years would have been plenty. > Even if global warming wasn't happening, the world should be moving to > renewable energy sources anyway. Solar power, wind power, fuel cells, > nuclear power, geothermal power, etc. Precisely. We should be thankful for this depression, since fossil costs are right now sufficiently high to pay producers for new infrastructure, and simultaneously keep demand down. This won't last, however. Considering how many terabucks are now down the drain which could have been used for infrastructure work... > The entrenched energy industries of coal and oil are fighting tooth > and nail to protect their industry and profits and delaying the > transfer to alternative fuels as long as possible. One hopeful sign is > that the oil industry appears to be diversifying as oil production > starts dropping. So they will be supporting green energy once they > become the new green energy industry and can profit from it. > > The trouble is that it might be too late by then for large areas of the world. We will not be able to bridge the gap with renewables. However, with reduced demand that should be possible. Not exactly a picnic. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sparge at gmail.com Sun Dec 6 14:37:47 2009 From: sparge at gmail.com (Dave Sill) Date: Sun, 6 Dec 2009 09:37:47 -0500 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <4B1AA50F.6030901@satx.rr.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> <4B1AA50F.6030901@satx.rr.com> Message-ID: On Sat, Dec 5, 2009 at 1:23 PM, Damien Broderick wrote: > > ... But to rehearse the obvious, this list is not a public > square, yet anything posted here can be read by anyone on the planet now or > in the future. Since the list owners and most other transhumanists don't > wish to be associated with outrageous proposals to (say) nuke or poison all > the Muslims in the world or even in a given country, or to forcibly banish > blacks and other "non-white" people "back to their own countries", it is > very reasonable to step in and remove or block posts making such > suggestions--even as thought experiments. If only on the same grounds that > one is well advised not to make bomb jokes while boarding a plane. Sorry, but I can't agree with that. The list owners and participants are free to state publicly that they disagree with or find offensive anything posted to the list. Removing messages from the archives is censorship and indicates weakness on behalf of the owners/moderators because it suggests that their pet ideas aren't viable on a level playing field. I'm OK with blocking users who continually disrupt the list by posting spam, gibberish, personal attacks, etc. But I really don't think merely "offensive" ideas should be discouraged. Anyone ever read A Modest Proposal? -Dave From anders at aleph.se Sun Dec 6 15:52:06 2009 From: anders at aleph.se (Anders Sandberg) Date: Sun, 06 Dec 2009 16:52:06 +0100 Subject: [ExI] HM In-Reply-To: 20091206121600.GX17686@leitl.org Message-ID: <20091206155206.e0152894@secure.ericade.net> Eugen: > How much would we need for a mouse right now, only a few years? > There are a lot of cm^3 in a human brain, and scanners are expensive. If the brain has volume V and you slice it in thickness t slices, you will get ~V^(1/3)/t slices, with a total area of V/t. In the case of doing 50 nm slices of a 450 mm^3 mouse brain, this means 153,000 slices with a total area of 9 m^2. The same case with a 1400 cm^3 human brain gives 2 million slices and total area 28,000 m^2. Todd and the KESM team have a 2 TB dataset of a mouse brain, but that is of course just optical resolution. Enough to see the neurons, but not enough to get connectivity. From a WBE perspective, the big problem is getting the scanning area up and the scanning time down. If we have N microscopes scanning an area A in time T, the total time will be VT/(tNA) - plus overheads due to tissue handling, presumably scaling as V^a t^-b N^c for some positive constants a, b and c (more stuff to handle, more slices to move, more destinations for slices). If A can be increased and/or T reduced, then the total time is reduced without extra overhead.
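To put rough numbers on this, here is a minimal Python sketch of the slice-count, area and scan-time estimates above. The brain volumes and the 50 nm slice thickness are the figures quoted in this thread; the microscope count and the per-microscope area/time are placeholder assumptions, not real instrument specs.

# Slice-count / area / scan-time estimates (volumes and 50 nm thickness
# from the post; N, A and T below are placeholder assumptions).

def slicing_stats(volume_m3, thickness_m):
    """Approximate slice count (~V^(1/3)/t) and total cut area (V/t)."""
    n_slices = volume_m3 ** (1.0 / 3.0) / thickness_m
    total_area_m2 = volume_m3 / thickness_m
    return n_slices, total_area_m2

def total_scan_time_s(volume_m3, thickness_m, n_scopes, area_m2, time_s):
    """Ignoring handling overheads: total time = V*T / (t*N*A)."""
    return volume_m3 * time_s / (thickness_m * n_scopes * area_m2)

t = 50e-9  # 50 nm slices
for name, vol in [("mouse", 450e-9), ("human", 1400e-6)]:  # volumes in m^3
    n, area = slicing_stats(vol, t)
    print(f"{name}: ~{n:,.0f} slices, ~{area:,.0f} m^2 of sections")

# Placeholder farm: 100 microscopes, each imaging 1 mm^2 per minute.
secs = total_scan_time_s(1400e-6, t, n_scopes=100, area_m2=1e-6, time_s=60)
print(f"human brain, full scan: ~{secs / (3600 * 24 * 365):.0f} years")

With those placeholder numbers the human case comes out in centuries, which is just another way of saying that the per-microscope A and T are the terms that have to improve.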
Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From eugen at leitl.org Sun Dec 6 16:43:24 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 6 Dec 2009 17:43:24 +0100 Subject: [ExI] HM In-Reply-To: <20091206155206.e0152894@secure.ericade.net> References: <20091206155206.e0152894@secure.ericade.net> Message-ID: <20091206164324.GC17686@leitl.org> On Sun, Dec 06, 2009 at 04:52:06PM +0100, Anders Sandberg wrote: > If the brain has volume V and you slice it into slices of thickness t, you will get ~V^(1/3)/t slices, with a total area of V/t. I don't think a single scanner stage can handle more than a cube of cm^3 or so. Even that might be a tad on the optimistic side. > In the case of doing 50 nm slices of a 450 mm^3 mouse brain, this means 153,000 slices with a total area of 9 m^2. The same case with a 1400 cm^3 human brain gives 2 million slices and a total area of 28,000 m^2. > > Todd and the KESM team have a 2 TB dataset of a mouse brain, but that is of course just optical resolution. Enough to see the neurons, but not enough to get connectivity. Right now we can handle about a PByte dataset comfortably. 10 PBytes less comfortably, and 100 PBytes with a lot of pain. It will be a while until we're in easy EByte country. Obviously we cannot hold even a mouse voxel dataset explicitly, so for tracing/segmentation you have to work with a sliding slice (anywhere in a 1 um-100 um depth). The result would be a lot more compact, and also quite compressible (very little difference between adjacent slices). > From a WBE perspective, the big problem is getting the scanning area up and the scanning time down. If we have N microscopes, each scanning an area A in time T, the total time will be VT/(tNA) - plus overheads due to tissue handling, presumably scaling as V^a t^-b N^c for some positive constants a, b and c (more stuff to handle, more slices to move, more destinations for slices). If A can be increased and/or T reduced, the total time goes down without extra overhead. The problem is that a dm^3 is 1000 cm^3, so these cm^3 scanner stages (at about ~nm resolution laterally) had better be cheap. This is something which will make even Paul Allen's people blanch. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From anders at aleph.se Sun Dec 6 17:49:24 2009 From: anders at aleph.se (Anders Sandberg) Date: Sun, 06 Dec 2009 18:49:24 +0100 Subject: [ExI] HM In-Reply-To: 20091206164324.GC17686@leitl.org Message-ID: <20091206174924.f87f211d@secure.ericade.net> Eugen: > I don't think a single scanner stage can handle more than a cube of cm^3 or > so. > Even that might be a tad on the optimistic side. Yes, I think the KESM is the only system right now that can handle that. For Kenneth Hayworth's ATLUM I think the volume is many orders of magnitude smaller. > > Todd and the KESM team have a 2 TB dataset of a mouse brain, but that is > of course just optical resolution. Enough to see the neurons, but not enough > to get connectivity. > > Right now we can handle about a PByte dataset comfortably. 10 PBytes less > comfortably, and 100 PBytes with a lot of pain. Even a terabyte dataset is a bit tricky to use, even if it fits nicely onto a single hard drive. For example, just doing a visualization or flythrough of that mouse brain seems to be a challenging software project.
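For context on the dataset sizes traded back and forth above (petabytes being painful, a full mouse voxel set not holdable explicitly), here is a minimal storage sketch; the 5 nm x 5 nm lateral voxel size, 50 nm section thickness and 1 byte per voxel are assumptions for illustration, not figures from the thread:

# Rough uncompressed-storage estimate for an EM-resolution voxel dataset.
# Assumed for illustration only: 5 nm x 5 nm lateral voxels, 50 nm sections, 1 byte/voxel.
def raw_dataset_petabytes(volume_m3, lateral_nm=5.0, axial_nm=50.0, bytes_per_voxel=1):
    voxel_m3 = (lateral_nm * 1e-9) ** 2 * (axial_nm * 1e-9)
    return volume_m3 / voxel_m3 * bytes_per_voxel / 1e15

for name, volume_m3 in [("mouse", 450e-9), ("human", 1400e-6)]:   # brain volumes in m^3
    print(f"{name}: ~{raw_dataset_petabytes(volume_m3):,.0f} PB uncompressed")

On those assumptions the raw data comes out around a few hundred petabytes for a mouse and around a million petabytes for a human, which is roughly why a sliding working window plus a compact, compressible segmented result looks like the only tractable route.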
> for tracing/segmentation > you have to work with a sliding slice (anywhere in a 1 um-100 > um depth). The result would be a lot more compact, and also quite > compressible (very little difference between adjacent slices). Yup. What I worry about is how much sliding we can do between different lateral pieces of tissue. If we cannot use the same stage for an entire slice we will have to split it between stages. This has to be done *really* carefully so we can match the edges for connectivity. I don't know if there has been much work on how this can be done well - freeze cracking, maybe? > The problem is that a dm^3 is 1000 cm^3, so these cm^3 scanner > stages (at about ~nm resolution laterally) be better cheap. This > is something which will make even Paul Allen's people blanch. The wonders of mass production. Right now microscope stages are sold one by one, and tend to be expensive (partially because they are generic for any preparation). Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From bbenzai at yahoo.com Sun Dec 6 17:32:52 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 6 Dec 2009 09:32:52 -0800 (PST) Subject: [ExI] HM In-Reply-To: Message-ID: <36014.13535.qm@web32003.mail.mud.yahoo.com> Damien Broderick wrote: Brain of world's best-known amnesiac mapped By Elizabeth Landau, CNN December 3, 2009 7:21 p.m. EST (CNN) -- Henry Molaison, known as H.M. in scientific literature, was perhaps the most famous patient in all of brain science in the 20th century. "My daddy's family came from the South and moved North, they came from Thibodaux Louisiana, and moved north," Molaison would say. "My mother's family came from the North and moved South." Within 15 minutes he might repeat this exact statement twice more, unable to remember that he'd already said it. Scientists studied him for most of his adult life. This week, researchers are dissecting his brain to figure out exactly which structures contributed to his amnesia, which he suffered for more than 50 years. At the Brain Observatory at the University of California, San Diego, researchers began slicing H.M.'s brain Wednesday afternoon and streaming the procedure live to the world on their Web site. Watch it live Wow, does this mean the first upload will be an amnesiac? Ben Zaiboc From bbenzai at yahoo.com Sun Dec 6 18:11:32 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 6 Dec 2009 10:11:32 -0800 (PST) Subject: [ExI] Rocketships! In-Reply-To: Message-ID: <597707.34757.qm@web32003.mail.mud.yahoo.com> Oops, I meant: http://www.projectrho.com/rocket/index.html (start at the beginning!) Ben Zaiboc From bbenzai at yahoo.com Sun Dec 6 18:10:00 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 6 Dec 2009 10:10:00 -0800 (PST) Subject: [ExI] Rocketships! In-Reply-To: Message-ID: <598799.26143.qm@web32004.mail.mud.yahoo.com> I haven't seen a mention of this on this list before, so here, for the delectation of at least some of you: Rocketships! 
http://www.projectrho.com/rocket/rocket3a.html Ben Zaiboc From lcorbin at rawbw.com Sun Dec 6 18:44:19 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 06 Dec 2009 10:44:19 -0800 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> <4B1A9FD4.6090807@rawbw.com> <4B1AA50F.6030901@satx.rr.com> Message-ID: <4B1BFB83.3040201@rawbw.com> Dave Sill writes > The list owners and participants are free to state publicly > that they disagree with or find offensive anything posted to > the list. Removing messages from the archives is censorship... I have *never* heard any thought before that anything of the kind has been done or would be done. It was literally unthinkable to me until you suggested it. Yes, in 1984 "he who controls the present controls the past", but erasing memories here and now? Surely not. I'm really sorry you brought that up. I think that I will choose to continue to regard this as unthinkable, and not think about it. > and indicates weakness on behalf of the owners/moderators > because it suggests that their pets ideas aren't viable on > a level playing field. Yeah, well quite apart from them (they do own the list), I'm most curious about what I am not understanding here regarding those who want heavy censorship of the list. We have (about six or seven months ago) been all through this before---but still, I would like to understand. Sincerely. Right now, all I have to go on is what Damien contributed, which you quoted: > On Sat, Dec 5, 2009 at 1:23 PM, Damien Broderick wrote: >> ... But to rehearse the obvious, this list is not a public >> square, yet anything posted here can be read by anyone on the planet now or >> in the future. Since the list owners and most other transhumanists don't >> wish to be associated with outrageous proposals to (say) nuke or poison all >> the Muslims in the world or even in a given country, or to forcibly banish >> blacks and other "non-white" people "back to their own countries", it is >> very reasonable to step in and remove or block posts making such >> suggestions--even as thought experiments. If only on the same grounds that >> one is well advised not to make bomb jokes while boarding a plane. > > Sorry, but I can't agree with that. The list owners and participants > are free to state publicly that they disagree with or find offensive > anything posted to the list.... Let's read the words carefully, Dave. "[folks] don't wish to be associated with outrageous proposals". The key here may be what is meant by "associated". I'm drawing a blank. There *has* to be more going on here than just being loosely associated with some reprehensible idea by having happened to be "on" a list when said idea is suggested. There are very deep waters here having to do with how we all think (the heart leads the mind). We may also have unreasonably high expectations about how rational we ourselves are. Perhaps the real fear is this: Unless vigorously stomped out, certain ideas could gain a following. (Even though surely 95% of the readers wouldn't abide those ideas.) For example, what if someone did propose that it would be better for humanity or better for "us" (whoever that is) to commit some drastic and extreme action? And moreover that we "know in our hearts" that this is a terrible, terrible proposal? Alas, then, we are no better than the Church Fathers who needed to do the same thing. But maybe it was arrogant of us to suppose that we ever were "better"? 
All these people, Dave, who evidently write often to the list moderator that this or that thread should be excised--I think are just as thoughtful and well-meaning as anyone else. So, no, something else is going on that I don't quite understand. (And no, I am *not* talking about the list owners or moderators---they indeed could have a public image to maintain or political goals. I understand that.) Please give them some credit. We may (or I may) simply have a blind spot here. I've had them before. What is really going on? Lee From kanzure at gmail.com Sun Dec 6 19:33:38 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 6 Dec 2009 13:33:38 -0600 Subject: [ExI] H+ Summit 2009 transcripts/discussions Message-ID: <55ad6af70912061133x25a8e525jce425be57fa04bdf@mail.gmail.com> Hey all, I'm over at the transhuman meetup. There are details at http://hplus.eventbrite.com/ and http://www.techzulu.com/live.html streaming some video at the moment. Here are some details for keeping in contact with these people. IRC channel: #hplusroadmap irc.freenode.net twitter hashtag: #hplus http://twitter.com/#search?q=%23hplus And here are some transcripts that I typed up (in real time, if you're still here I'm sitting somewhere in the back). Dylan Morris http://adl.serveftp.org/~bryan/hplus-summit-2009/dylan-morris.html Anselm Levskaya http://adl.serveftp.org/~bryan/hplus-summit-2009/ansyem.html Todd Huffman http://adl.serveftp.org/~bryan/hplus-summit-2009/todd-huffman.html Aubrey de Grey http://adl.serveftp.org/~bryan/hplus-summit-2009/aubrey-de-grey.html Greg Fahy http://adl.serveftp.org/~bryan/hplus-summit-2009/fahy.html Christine Peterson http://adl.serveftp.org/~bryan/hplus-summit-2009/christine-peterson.html Gregory Benford http://adl.serveftp.org/~bryan/hplus-summit-2009/greg-benford.html Andrew Hessel http://adl.serveftp.org/~bryan/hplus-summit-2009/andrew-hessel.html Alex Lightman http://adl.serveftp.org/~bryan/hplus-summit-2009/alex-lightman.html - Bryan http://heybryan.org/ 1 512 203 0507 From kanzure at gmail.com Sun Dec 6 19:45:11 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 6 Dec 2009 13:45:11 -0600 Subject: [ExI] H+ Summit 2009 transcripts/discussions In-Reply-To: <55ad6af70912061133x25a8e525jce425be57fa04bdf@mail.gmail.com> References: <55ad6af70912061133x25a8e525jce425be57fa04bdf@mail.gmail.com> Message-ID: <55ad6af70912061145y391b2824k5a65d4382b3b5149@mail.gmail.com> On Sun, Dec 6, 2009 at 1:33 PM, Bryan Bishop wrote: > And here are some transcripts that I typed up (in real time, if you're > still here I'm sitting somewhere in the back). Ack, I forgot Patri's talk on the Seasteading Institute. What an awesome guy to follow. http://adl.serveftp.org/~bryan/hplus-summit-2009/patri-freadman.html Ben Lipkowitz on SKDB and making open source hardware into easyware: http://adl.serveftp.org/~bryan/presentations/hplus-summit-2009/hplus-summit-2009-how-to-make.pdf - Bryan http://heybryan.org/ 1 512 203 0507 From stefano.vaj at gmail.com Sun Dec 6 20:41:01 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 6 Dec 2009 21:41:01 +0100 Subject: [ExI] Aggressive Atheism, by Pat Condell In-Reply-To: <254885.29236.qm@web30807.mail.mud.yahoo.com> References: <254885.29236.qm@web30807.mail.mud.yahoo.com> Message-ID: <580930c20912061241r6974f7d2k2192a184639271d4@mail.gmail.com> 2009/12/5 Shannon : > Thank you for your illustration of empathy and the proper way to treat a > religious friend or family member. 
One thing I've seen in the older kids I > taught in Sunday School in Austin (the middle school aged kids) was that a > few of them were very mad at religious kids in their schools, they wanted > to debate them and basically show them how silly they were. The modern Western culture is quite sold on the idea of "objective Truth" - which in turn has monotheistic roots, but easily extends to some kinds of atheism - and evades the idea of identities, worldviews and fundamental choices. This in turn makes for a status of "obviousness" of one's own ideas and values, and a radical lack of perspective. In this respect, I think that reading Nietzsche's Antichrist is a much more fascinating experience for a young christian than watching a film like Religulous, with its constant eye-rolling and politically correct scandalising and mock surprise at monotheistic beliefs, which only preaches to the choir (and probably bores even the choir to death). -- Stefano Vaj From stefano.vaj at gmail.com Sun Dec 6 21:42:05 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 6 Dec 2009 22:42:05 +0100 Subject: [ExI] Tolerance Message-ID: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> The threads on militant atheism and on ExI-chat moderation have made me think once more about tolerance and freedom of speech. Let me say that as a matter of personal taste I am strongly inclined towards tolerance and dialogue, and have an active dislike for easy outrage and ostracism. On the other hand, I realise that whatever vague high ground "tolerance" may claim over its enemies, *it immediately vanishes when it does not extend to alleged "intolerants".* Ultimately, in Saint-Just's famous saying "Pas de liberté pour les ennemis de la liberté", the "liberté" quickly risks becoming little more than a rhetorical definition of the political regime where Saint-Just's friends, rather than Louis XVI's friends, are in power. Both are certainly ready to recognise the freedom of their partisans (to support them), but neither is willing to extend the courtesy to their opponents. But things become more complicated when one is tempted to put higher demands on those who basically are supposed to share one's own camp. In this respect, many of us are ready to accept, or at least to tolerate, discourses and behaviours by, say, religious fundamentalists or bioluddites with which no compromise is conceivable and where perhaps a more clear-cut stance would be required; while radical or aggressive or debatable positions by transhumanists or atheists are often met with a much, much less understanding and/or respectful attitude, even though it is by no means obvious that they are entitled to anything less. Thus, e.g., if we must be indulgent with young christians and sectarians trying to preach their creed and to disparage the unfaithful, I would be reluctant not to extend the same treatment to atheists who feel like doing just the same. -- Stefano Vaj From stefano.vaj at gmail.com Sun Dec 6 22:34:32 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 6 Dec 2009 23:34:32 +0100 Subject: [ExI] " Space-Based Solar Power - SBSP" sent you a message... Message-ID: <580930c20912061434i24daefa0lbc3ee0c94134a092@mail.gmail.com> I have just received that, and thought it might be of interest to some of us...
<> -- Stefano Vaj From gts_2000 at yahoo.com Sun Dec 6 23:37:41 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 6 Dec 2009 15:37:41 -0800 (PST) Subject: [ExI] Wernicke's aphasia In-Reply-To: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> Message-ID: <368901.15134.qm@web36502.mail.mud.yahoo.com> Semantic processes in the brain seem closely associated with a location known as Wernicke's area, normally located behind the left ear. Lesions in this area can cause a condition known as Wernicke's aphasia. Those afflicted with this form of aphasia speak meaningless sentences with normal syntax. Wikipedia gives this example, presumably taken from an actual case: "I called my mother on the television and did not understand the door. It was too breakfast, but they came from far to near. My mother is not too old for me to be young." The person uttering those sentences does not merely get his words wrong; he understands neither the words that he might mean nor the words that he actually speaks. And yet he speaks nevertheless, often fluently. This can happen because while the brain processes semantics in Wernicke's area, it processes syntax and forms sentences in Broca's area. If Broca goes to work while Wernicke goes to lunch, the poor fellow will babble nonsense in good form. http://en.wikipedia.org/wiki/Wernicke%27s_aphasia I brought this factoid to post here on ExI because I noticed that a person afflicted with Wernicke's aphasia has much in common with the man in Searle's Chinese Room. Like the man in Searle's room, he follows the rules of syntax but knows not whereof he speaks. -gts From p0stfuturist at yahoo.com Mon Dec 7 00:12:30 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Sun, 6 Dec 2009 16:12:30 -0800 (PST) Subject: [ExI] Moderation on the ExiCh list In-Reply-To: Message-ID: <789453.31167.qm@web59906.mail.ac4.yahoo.com> This has probably been suggested so many times it will make you groan, but perhaps now is the time for an exi-politics list, as WTA has wta-politics. It would be more forthright to call it the "exi-wingnut list", but exi-politics would do. > ... But to rehearse the obvious, this list is not a public > square, yet anything posted here can be read by anyone on the planet now or > in the future. Since the list owners and most other transhumanists don't > wish to be associated with outrageous proposals to (say) nuke or poison all > the Muslims in the world or even in a given country, or to forcibly banish > blacks and other "non-white" people "back to their own countries", it is > very reasonable to step in and remove or block posts making such > suggestions--even as thought experiments. If only on the same grounds that > one is well advised not to make bomb jokes while boarding a plane. Sorry, but I can't agree with that. The list owners and participants are free to state publicly that they disagree with or find offensive anything posted to the list. Removing messages from the archives is censorship and indicates weakness on the part of the owners/moderators because it suggests that their pet ideas aren't viable on a level playing field. I'm OK with blocking users who continually disrupt the list by posting spam, gibberish, personal attacks, etc. But I really don't think merely "offensive" ideas should be discouraged. Anyone ever read A Modest Proposal?
-Dave _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From eschatoon at gmail.com Mon Dec 7 05:51:35 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Mon, 7 Dec 2009 06:51:35 +0100 Subject: [ExI] Tolerance In-Reply-To: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> Message-ID: <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> I agree, of course, on extending tolerance to everyone. This is, however, a private opt-in list, and spamming it with religious propaganda should only be tolerated up to a certain extent. I certainly tolerate fundamentalist atheists, in the sense that I affirm their right to think whatever they like to think. On the other hand, I think many of them have shed religion only to fall into another fundamentalist faith. Both theist and atheist fundamentalists have a right to speak their mind, of course, but I am not very interested in discussing with them in their terms. G. On Sun, Dec 6, 2009 at 10:42 PM, Stefano Vaj wrote: > The threads on militant atheism and on ExI-chat moderation have made > me think once more about tolerance and freedom of speech. > > Let me say that as a matter of personal taste I am strongly inclined > towards tolerance and dialogue, and have an active dislike for easy > outrages and obstracisms. On the other hand, I realise that whatever > vague high ground "tolerance" may claim over its enemies, *it > immediately vanishes when it does not extend to alleged > "intolerants".* > > Ultimately, in the famous Saint-Just's say "Pas de libert? pour les > ennemis de la libert?" the "libert?" does risk to become quickly > little more than a rhetoric definition of the political regime where > Saint Just's friends, rather than Louis XVI's friends, are in power. > Both being certainly ready to recognise the freedom of their partisans > (to support them), but neither willing to extend the courtesy to their > opponents. > > But things become more complicate when one is tempted to put higher > demands on those who basically are supposed to share one's own camp. > In this respect, many of us are ready to accept, or at least to > tolerate, discourses and behaviours by, say, religious fundamentalists > or bioluddites with which no compromise is conceivable and where > perhaps a more clear-cut stance would be required; while radical or > aggressive or debatable positions by transhumanists or atheists are > often met with a much, much less understanding and/or respectful > attitude even though it is by no means obvious that they are entitled > to anything less. > > Thus, e.g., if we must be indulgent with young christians and > sectarians trying to preach their creed and to disparage the > unfaithful, I would be reluctant not to extend the same treatment to > atheists who feel like doing just the same. 
> > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From eschatoon at gmail.com Mon Dec 7 06:00:07 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Mon, 7 Dec 2009 07:00:07 +0100 Subject: [ExI] Moderation on the ExiCh list In-Reply-To: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> References: <580930c20912050358s76382d20sded35f1eb407ae96@mail.gmail.com> Message-ID: <1fa8c3b90912062200w77422183q4079fed268196381@mail.gmail.com> I agree with Stefano. in particular on his list of sins. Moderation is one of the many tools that help maintaining a mailing list. It should be exercised with great care and not very frequently. On the lsts that I moderate, I use these simple rules: a) Spammers in the conventional sense are out at the first offense, without warning. b) Single issue trolls and those who always insult others receive one, maximum two polite warnings, and then they are out. G. On Sat, Dec 5, 2009 at 12:58 PM, Stefano Vaj wrote: > 2009/12/5 Robert Bradbury : >> But as a person of conviction, I happen to believe in the first article of >> the Bill of Rights, e.g. "Congress shall make no law respecting an an >> establishment of religion, or prohibiting the exercise thereof [1]; OR >> ABRIDGING THE FREEDOM OF SPEECH [2], .." should be taken very seriously. > > Interesting. I am myself a moderator of a few lists, and in turn > happened at least once to be the victim of the moderation (or rather > the owner) of a list at that time managed by one of my couple of > personal trolls/poltergeists, who interpreted such role as the ability > to engage in, by no means "moderate", attacks and flames where the > ability of the other party to reply is by definition restricted. > Unsurprisingly, the list members eventually migrated elsewhere, > including those who had not directly suffered such behaviours... > > I think that both the moderation when in doubt should err on the side > of the freedom of speech (and be very vigilant with regard to its own > personal and ideological biases) AND that a moderation should exist. > This not only as a matter of ideological or aesthetical taste, but for > very practical reasons which have to do with its continuing viability > and success. > >> The point of a moderated list would presumably be to minimize the exposure >> of the list to defamation lawsuits, and in some cases "accuracy" statements >> (say the Raelians were to start auto-cross-posting their fluff. > > No. The real point of moderation and the real (and only) cardinal sin > in mailing lists is IMHO Off-Topic. Flames are off-topic. Spam is > off-topic. Ad hominem are off-topic. Tireless single-issue evangelism > is off-topic. Bilateral chit-chat is off-topic. Off-topic is what > reduce the signal-to-noise ratio, annoys and ultimately keep away > other participants. Cut the OT, but only the OT, and you have nothing > else that you need (or should) do. 
> > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From thespike at satx.rr.com Mon Dec 7 05:59:55 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 06 Dec 2009 23:59:55 -0600 Subject: [ExI] Tolerance In-Reply-To: <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> Message-ID: <4B1C99DB.3040605@satx.rr.com> On 12/6/2009 11:51 PM, Giulio Prisco (2nd email) wrote: > Both theist and atheist fundamentalists > have a right to speak their mind, of course, but I am not very > interested in discussing with them in their terms. I take the term "fundamentalist" to apply to one who embraces the literal and unalterable truth of some written revelation from one or more deities. Since an atheist is one who declines to accept such revelations as the basis for knowledge claims, I'm puzzled by how you define an "atheist fundamentalist". Would this be one who holds that one or more gods has revealed the unalterable truth that no deity exists? Damien Broderick From stathisp at gmail.com Mon Dec 7 10:28:14 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 7 Dec 2009 21:28:14 +1100 Subject: [ExI] Wernicke's aphasia In-Reply-To: <368901.15134.qm@web36502.mail.mud.yahoo.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <368901.15134.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/7 Gordon Swobe : > Semantic processes in the brain seem closely associated with a location known as Wernicke's area, normally located behind the left ear. Lesions in this area can cause a condition known as Wernicke's aphasia. Those afflicted with this form of aphasia speak meaningless sentences with normal syntax. > > Wikipedia gives this example, presumably taken from an actual case: > > "I called my mother on the television and did not understand the door. It was too breakfast, but they came from far to near. My mother is not too old for me to be young." > > The person uttering those sentences does not merely get his wrongs wrong; he understands neither the words that he might mean nor the words that he actually speaks. And yet he speaks nevertheless, often fluently. > > This can happen because while the brain processes semantics in Wernicke's area, it processes syntax and forms sentences in Broca's area. If Broca goes to work while Wernicke goes to lunch, the poor fellow will babble nonsense in good form. > > http://en.wikipedia.org/wiki/Wernicke%27s_aphasia > > I brought this factoid to post here on ExI because I noticed that a person afflicted with Wernicke's aphasia has much in common with the man in Searle's Chinese Room. Like the man in Searle's room, he follows the rules of syntax but knows not whereof he speaks. But the man in the Chinese Room does not produce gibberish. He is more like a properly functioning neuron in the language centres of the brain, which has no idea of the greater significance of the enterprise in which it is an essential participant. In other words, the components don't know what they're doing, but the system does. 
-- Stathis Papaioannou From stathisp at gmail.com Mon Dec 7 11:13:55 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 7 Dec 2009 22:13:55 +1100 Subject: [ExI] Tolerance In-Reply-To: <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> Message-ID: 2009/12/7 Giulio Prisco (2nd email) : > I agree, of course, on extending tolerance to everyone. This is, > however, a private opt-in list, and spamming it with religious > propaganda should only be tolerated up to a certain extent. > > I certainly tolerate fundamentalist atheists, in the sense that I > affirm their right to think whatever they like to think. On the other > hand, I think many of them have shed religion only to fall into > another fundamentalist faith. Both theist and atheist fundamentalists > have a right to speak their mind, of course, but I am not very > interested in discussing with them in their terms. Is one who insists that ridiculous things are untrue as bad as one who insists that ridiculous things are true? -- Stathis Papaioannou From anders at aleph.se Mon Dec 7 11:08:21 2009 From: anders at aleph.se (Anders Sandberg) Date: Mon, 07 Dec 2009 12:08:21 +0100 Subject: [ExI] Wernicke's aphasia In-Reply-To: 368901.15134.qm@web36502.mail.mud.yahoo.com Message-ID: <20091207110821.ca747b8d@secure.ericade.net> Gordon Swobe: > I brought this factoid to post here on ExI because I noticed that a person > afflicted with Wernicke's aphasia has much in common with the man in > Searle's Chinese Room. Like the man in Searle's room, he follows the rules > of syntax but knows not whereof he speaks. I have neural networks (my language areas, in fact) that follow rules of syntax, yet cannot know what they speak about since that information is elsewhere in my brain. Searle's scenario seems to prime the intuition pump by using a human, who is usually aware and knowing, as a component of a system where the top-level understanding is the issue. He is biasing us to make a level mistake. It is worth noting that people with Wernicke's usually have unimpaired cognition. They might be unable to speak and understand speech, but they can still think and plan. You could imagine someone with aphasia working at a translation company, moving stacks of documents around between the offices. This person could be absolutely essential for the translation work of the company, yet contribute absolutely nothing to the translation/understanding on the human level. If the person was replaced by a delivery robot or a normal person doing the same job, nothing would change. Aphasias are really annoying. My grandmother got it in her last years, limiting her vocabulary to a few swear words. For a rather prim lady this was a bit of a problem, although one can communicate a surprising amount this way. My dad managed to get sensory aprosodia, becoming unable to recognize irony - a bit of a handicap in my family.
Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From stefano.vaj at gmail.com Mon Dec 7 12:02:01 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Dec 2009 13:02:01 +0100 Subject: [ExI] Tolerance In-Reply-To: <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> Message-ID: <580930c20912070402n7639e7egfc7024e7e3885253@mail.gmail.com> 2009/12/7 Giulio Prisco (2nd email) : > I agree, of course, on extending tolerance to everyone. This is, > however, a private opt-in list, and spamming it with religious > propaganda should only be tolerated up to a certain extent. Why, of course, but here we go back to my fundamental idea that moderating a mailing list, a forum, a debate may (should?) basically mean dealing with the excess of OT. In other words, while some piece or another of, e.g., religious propaganda or commercial information (or a criticism thereof) may sometimes be relevant to the debate at hand, the typical single-issue infiltrator is not there to participate in any debate, but just to profit from any opportunity to post long sermons or rants or perorations or advertisements that quickly exceed whatever may be of interest to the other participants. -- Stefano Vaj From stefano.vaj at gmail.com Mon Dec 7 12:17:08 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Dec 2009 13:17:08 +0100 Subject: [ExI] Tolerance In-Reply-To: <4B1C99DB.3040605@satx.rr.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <4B1C99DB.3040605@satx.rr.com> Message-ID: <580930c20912070417t7a2a55cakb8e43b0860e156cb@mail.gmail.com> 2009/12/7 Damien Broderick : > I take the term "fundamentalist" to apply to one who embraces the literal > and unalterable truth of some written revelation from one or more deities. > Since an atheist is one who declines to accept such revelations as the basis > for knowledge claims, I'm puzzled by how you define an "atheist > fundamentalist". Would this be one who holds that one or more gods has > revealed the unalterable truth that no deity exists? "Atheist fundamentalism" is a contradiction in terms only if you take atheism to mean exclusively "critical atheism". It is perfectly possible not to believe (any more) in a personal, ghost-like deity but at the same time to have one's holy scriptures, dogmas, ethical universalism, belief that "faith" is a moral duty, that if facts do not comply with doctrine then facts be damned, etc. Take some variants of marxism, or Scientology. I suspect however that whenever this is the case we are still facing the dear, old monotheistic concepts and mentality, irrespective of the fact that traditional middle-east derived religions might be vehemently opposed by this kind of atheism, under a thin "secular" veneer that can be easily deconstructed. In fact, we should always be vigilant, IMHO, to keep ourselves as clear as possible of such temptations.
-- Stefano Vaj From gts_2000 at yahoo.com Mon Dec 7 12:27:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 7 Dec 2009 04:27:32 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA Message-ID: <998024.79732.qm@web36503.mail.mud.yahoo.com> Hi Stathis, >> I brought this factoid to post here on ExI because I > noticed that a person afflicted with Wernicke's aphasia has > much in common with the man in Searle's Chinese Room. Like > the man in Searle's room, he follows the rules of syntax but > knows not whereof he speaks. > > But the man in the Chinese Room does not produce gibberish. Yes, I understand that. I have for several weeks engaged in a debate about Searle's Chinese Room Argument on another discussion list, one devoted to discussion of philosophy. My interlocutor there teaches philosophy and has great admiration for Professor Searle. I've taken the position that for the thought experiment portion of Searle's CRA to have any value -- that is, if we are to consider it anything more than mere philosophical hand-waving -- it must first qualify as a valid scientific experiment. To qualify as such, it must work in a context-independent manner; scientists anywhere in the universe should obtain the same results using the same man in the room. And for that to happen, I argue, the man in the room must lack knowledge not only of the meanings of Chinese symbols, but also of the words and symbols of every possible language in the universe. He must have no semantics whatsoever. Somewhat tongue in cheek, I continued my argument by stating the subject would need to undergo brain surgery prior to the experiment to remove the relevant parts of his brain. I then did a little research and learned we would need to remove Wernicke's area, and learned also of this interesting phenomenon of Wernicke's aphasia. One might consider the existence of Wernicke's aphasia as evidence supporting Searle's third premise in his CRA, that 'syntax is neither constitutive of nor sufficient for semantics'. People with this strange malady have an obvious grasp of syntax but also clearly have no idea what they're talking about! > In other words, the components don't know what they're doing, but the > system does. So goes the systems reply to the CRA, one of many that Searle fielded with varying degrees of success depending on who you ask. -gts From brentn at freeshell.org Mon Dec 7 14:23:54 2009 From: brentn at freeshell.org (Brent Neal) Date: Mon, 7 Dec 2009 09:23:54 -0500 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> Message-ID: <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> On 7 Dec, 2009, at 6:13, Stathis Papaioannou wrote: > Is one who insists that ridiculous things are untrue as bad as one who > insists that ridiculous things are true? It depends on why they insist. I find the so-called "new atheists" whose arguments against religion amount to little more than "Religion Sucks! Nyeah!" to be incredibly tiresome people. Compare and contrast the eloquent, rational atheism of Russell and Hitchens, to the emotionally-charged atheism of Dawkins ("Hur hur hur if you believe in God you're stupid, so we're going to call atheists 'Brights'! Get it? Huh? Hur hur hur.") You can hold a rational position for completely irrational reasons. "Rational" and "irrational" are better suited for describing the process used to attain a position than the position itself.
While I agree that the term "fundamentalist atheist" contains some connotative dissonance, the label is nonetheless appropriately evocative. B -- Brent Neal, Ph.D. http://brentn.freeshell.org From nanite1018 at gmail.com Mon Dec 7 14:46:59 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 7 Dec 2009 09:46:59 -0500 Subject: [ExI] Tolerance In-Reply-To: <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> Message-ID: <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> > It depends on why they insist. I find the so-called "new atheists" > whose arguments against religion amount to little more than > "Religion Sucks! Nyeah!" to be incredibly tiresome people. Compare > and contrast the eloquent, rational atheism of Russell and Hitchens, > to the emotionally-charged atheism of Dawkins ("Hur hur hur if you > believe in God you're stupid, so we're going to call atheists > 'Brights'! Get it? Huh? Hur hur hur.") > Brent Neal, Ph.D. Dawkins arguments aren't at all like what you suggest. Certainly he believes religion is terrible, both from the things it has directly caused over the years and by its very base tenet: faith. He makes clear arguments that faith is antithetical to science and our modern society. Reason is the basis of our society, and if you give up reason and accept something on faith, then you are empowering the radical fundamentalists by your assent to their core beliefs, and it makes it much more difficult to make an argument against them. He also makes a number of arguments against religion from purely logical/empirical/ scientific grounds as well. He's always seemed eloquent to me, even if he is passionate about it. Passion does not mean irrationality. It is rational to be angry when you judge something to be a tremendous evil or a huge weight on the world. Emotions and reason do not have to be opposed, and in Dr. Dawkins case, I do not believe they are. I thought the whole "brights" thing was dumb too, just fyi. But I understood what he was trying to do, even if he didn't pick the best name. Joshua Job nanite1018 at gmail.com From mbb386 at main.nc.us Mon Dec 7 15:05:09 2009 From: mbb386 at main.nc.us (MB) Date: Mon, 7 Dec 2009 10:05:09 -0500 (EST) Subject: [ExI] Tolerance In-Reply-To: <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> Message-ID: <52553.12.77.168.205.1260198309.squirrel@www.main.nc.us> >"Hur hur hur if you believe in > God you're stupid, so we're going to call atheists 'Brights'! Get it? > Huh? Hur hur hur." Arrgh. I was on an email list like this once. They were correct: keep religion out of science class. But reading the posts, rants really, about how stupid/evil religion and religious people were got very old. I miss the interesting material that came up on that list, but I sure don't miss the repeated vitriol and ranting. 
Regards, MB From stefano.vaj at gmail.com Mon Dec 7 16:06:51 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Dec 2009 17:06:51 +0100 Subject: [ExI] Tolerance In-Reply-To: <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> Message-ID: <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> 2009/12/7 JOSHUA JOB : >> It depends on why they insist. I find the so-called "new atheists" whose >> arguments against religion amount to little more than "Religion Sucks! >> Nyeah!" to be incredibly tiresome people. ?Compare and contrast the >> eloquent, rational atheism of Russell and Hitchens, to the >> emotionally-charged atheism of Dawkins ("Hur hur hur if you believe in God >> you're stupid, so we're going to call atheists 'Brights'! ?Get it? Huh? Hur >> hur hur.") >> Brent Neal, Ph.D. > > Dawkins arguments aren't at all like what you suggest. I am surprised that one may find Hitchens, with his heavy, moralistic rhetorics, more "rational" than Dawkins, who if anything makes for a much more pleasant reading... -- Stefano Vaj From lcorbin at rawbw.com Mon Dec 7 16:39:31 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 07 Dec 2009 08:39:31 -0800 Subject: [ExI] Tolerance In-Reply-To: <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> Message-ID: <4B1D2FC3.4030400@rawbw.com> Stefano writes > 2009/12/7 JOSHUA JOB : >>> It depends on why they insist. I find the so-called "new atheists" whose >>> arguments against religion amount to little more than "Religion Sucks! >>> Nyeah!" to be incredibly tiresome people. Compare and contrast the >>> eloquent, rational atheism of Russell and Hitchens, to the >>> emotionally-charged atheism of Dawkins ("Hur hur hur if you believe in God >>> you're stupid, so we're going to call atheists 'Brights'! Get it? Huh? Hur >>> hur hur.") >>> Brent Neal, Ph.D. >> Dawkins arguments aren't at all like what you suggest. > > I am surprised that one may find Hitchens, with his heavy, moralistic > rhetorics, more "rational" than Dawkins, who if anything makes for a > much more pleasant reading... Odd. My reaction is the reverse. I enjoyed Hitchens' book "God is not great" very much. It was extremely insightful at quite a number of points. His conclusion, reiterated again and again, that "religions poisons everything" of course cannot be taken too literally, but his examples are very impressive. But I could not even stand listening to Dawkins for more than a few minutes in a TED talk. There was just something so... so fanatical and almost dogmatic, that I had to stop. And this fits the picture of someone who'd coin that ridiculous concept of "brights", such a stupid and embarrassing fiasco. Alas, it seems to me that the Jacobin temperament is alive and well even among us atheists. So for me :-) it was the reverse! It's Hitchens who makes for much more pleasant reading (though to be fair I have not read Dawkins book, mostly for the reasons given above). 
I suspect that this is *not* just entirely a matter of taste, although it could turn out to be just that. Lee From jonkc at bellsouth.net Mon Dec 7 16:15:34 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 11:15:34 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: On Dec 4, 2009, at 11:20 PM, Max More wrote: > While Condell's aggressive approach definitely has a degree of wisdom (and a load of intellectual good sense), is it really appropriate to, or useful for, or humanistic in, dealing with all situations? Atheists have played the Mr. Nice-guy part for a very long time and the end result is that religious morons crash airliners into skyscrapers; and the very next day the USA organized a national day of prayer to give homage to the very mental cancer that caused the disaster. It's enough to make you scream. I think it's good that people like Condell and Dawkins are being a little more aggressive; at least it's different and might do some good, it certainly can't work worse than the Mr. Nice-guy approach. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 7 17:22:50 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 12:22:50 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <998024.79732.qm@web36503.mail.mud.yahoo.com> References: <998024.79732.qm@web36503.mail.mud.yahoo.com> Message-ID: <98DEC7E7-502D-4023-AFDF-571B7F312D8B@bellsouth.net> On Dec 7, 2009, at 7:27 AM, Gordon Swobe wrote: > I argue, the man in the room must lack knowledge not only of the meanings of Chinese symbols, but also the words and symbols of every possible language in the universe. He must have no semantics whatsoever. Or just get rid of the little man altogether and replace him with a 1950's punch card sorting machine, it would be slow but faster than the man. What makes Searle's Chinese Room such a stupid thought experiment is its conclusion: The little man doesn't understand anything therefore the entire Chinese Room doesn't understand anything. Pretty dumb. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From brentn at freeshell.org Mon Dec 7 18:51:28 2009 From: brentn at freeshell.org (Brent Neal) Date: Mon, 7 Dec 2009 13:51:28 -0500 Subject: [ExI] Tolerance In-Reply-To: <4B1D2FC3.4030400@rawbw.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> Message-ID: On 7 Dec, 2009, at 11:39, Lee Corbin wrote: > Stefano writes > >> 2009/12/7 JOSHUA JOB : >>>> It depends on why they insist. I find the so-called "new >>>> atheists" whose >>>> arguments against religion amount to little more than "Religion >>>> Sucks! >>>> Nyeah!" to be incredibly tiresome people. Compare and contrast the >>>> eloquent, rational atheism of Russell and Hitchens, to the >>>> emotionally-charged atheism of Dawkins ("Hur hur hur if you >>>> believe in God >>>> you're stupid, so we're going to call atheists 'Brights'! Get >>>> it? Huh? Hur >>>> hur hur.") >>>> Brent Neal, Ph.D. >>> Dawkins arguments aren't at all like what you suggest. 
>> I am surprised that one may find Hitchens, with his heavy, moralistic >> rhetorics, more "rational" than Dawkins, who if anything makes for a >> much more pleasant reading... > > Odd. My reaction is the reverse. I enjoyed Hitchens' book "God is > not great" very much. It was extremely insightful at quite a > number of points. His conclusion, reiterated again and again, > that "religions poisons everything" of course cannot be taken > too literally, but his examples are very impressive. Lee's comments on Hitchens vs. Dawkins are pretty much in line with my own views. Hitchens lacks the emotionally charged rhetoric that Dawkins employs on a regular basis. Hitchens sets out to condemn religion and religiosity with an a posteriori approach - i.e. "look at what religion has done, and judge them based on that." Very analytical. Dawkins is the quintessential spin doctor, outlining arguments that are at times specious and are certainly a priori as to why religion is bad, then using emotional rhetoric to distract the mind. Dawkins is more pleasant reading for most of us, simply because he has mastered the language that excites us and confirms (most of) our own views. Hitchens makes the same points, but in a tedious fashion that requires us to at least actively engage in his own "moralistic" judgements. Much more psychologically draining that way. At least, that was my opinion after reading them both. (The God Delusion and God is not Great) YMMV, and all that. If you don't like Hitchens as an example of rational atheism, substitute Robert Ingersoll, or more recently, Michael Shermer. B -- Brent Neal, Ph.D. http://brentn.freeshell.org From spike66 at att.net Mon Dec 7 19:03:38 2009 From: spike66 at att.net (spike) Date: Mon, 7 Dec 2009 11:03:38 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> Message-ID: <3418453789D74C7BA8BF5A165F936B84@spike> >On Behalf Of John Clark ... >Atheists have played the Mr. Nice-guy part for a very long time and the end result is that religious morons crash airliners into skyscrapers; and the very next day the USA organized a national day of prayer to give homage to the very mental cancer that caused the disaster... John K Clark John this approach lumps all religious memes together. I would counter-propose classifying religious thoughtspace into two broad categories: those which suggest killing unbelievers and those which do not. spike From nanite1018 at gmail.com Mon Dec 7 19:18:12 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 7 Dec 2009 14:18:12 -0500 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> Message-ID: <4675276D-9835-4965-A384-59363FF27738@GMAIL.COM> > Hitchens sets out to condemn religion and religiosity with an a > posteriori approach - i.e. "look at what religion has done, and > judge them based on that." Very analytical. Dawkins is the > quintessential spin doctor, outlining arguments that are at times > specious and are certainly a priori as to why religion is bad, then > using emotional rhetoric to distract the mind..... > Brent Neal, Ph.D. What is wrong with an a priori argument? 
If you take an essential, basic, necessary feature of something and show that it must logically lead to terrible (or at least bad) consequences, how is that not valid? I mean, historical arguments can be used to back it up. But honestly an argument that shows how bad religion is, based on historical consequences alone, implicitly admits that it might not be bad or might even be good if only they did it differently! It's the same argument that socialists and communists use to distance themselves from the USSR, China, and N. Korea. "It's good, we promise, they just didn't do it right/were evil people/weren't smart enough/........." The point, in my view, of an argument against religion (or socialism for that matter) is not to show that it has been bad in the past, but to show that it cannot, by its nature, be good. That it must, necessarily, lead to consequences that are worse than atheism (in the case of religion), ceteris paribus. This seems the much stronger argument. It also explains the passion of Dawkins -- he has come to the conclusion that faith is by its nature bad. This makes for a more passionate condemnation than a mere analysis of the historical consequences can create. I haven't read anything from Hitchens, or Misters Ingersoll or Shermer (beyond the occasional piece in Scientific American), so I do not have a basis for comparison for their strategies. I'm simply judging from the characterizations of what I've read on the list. Joshua Job nanite1018 at gmail.com From sparge at gmail.com Mon Dec 7 19:25:42 2009 From: sparge at gmail.com (Dave Sill) Date: Mon, 7 Dec 2009 14:25:42 -0500 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> Message-ID: On Mon, Dec 7, 2009 at 1:51 PM, Brent Neal wrote: > > Lee's comments on Hitchens vs. Dawkins are pretty much in line with my own > views. Hitchens lacks the emotionally charged rhetoric that Dawkins employs > on a regular basis. Hitchens sets out to condemn religion and religiosity > with an a posteriori approach - How can you "set out to condemn" something with an "a posteriori" approach? > i.e. "look at what religion has done, and > judge them based on that." And the selection of evidence isn't tilted toward the goal that he "set out" to achieve, it just turns out to prove his point? > Very analytical. Dawkins is the quintessential > spin doctor, outlining arguments that are at times specious and are > certainly a priori as to why religion is bad, then using emotional rhetoric > to distract the mind. So you found Hitchens, the author/journalist/activist/pundit, to be more scientific in his approach than Dawkins, the scientist? > At least, that was my opinion after reading them both. (The God Delusion and > God is not Great) YMMV, and all that. I read and enjoyed both. I didn't think either was perfect, but I did think Dawkins was more logical/scientific and Hitchens was more haphazard and anecdotal, though slightly more entertaining. -Dave From moulton at moulton.com Mon Dec 7 19:00:09 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 7 Dec 2009 19:00:09 -0000 Subject: [ExI] Tolerance Message-ID: <20091207190009.76990.qmail@moulton.com> In the interest of brevity I will combine a couple of my responses.
On the topic of "Brights", my memory and the information I have found online indicate that the term was not coined by Dawkins; rather it was coined by Paul Geisert, and Mynga Futrell worked on the definition. Daniel Dennett, Steven Pinker and Richard Dawkins all seem to be part of the group of people who have either written about "Brights" or have publicly assumed that label, but these activities are not the same as coining the term. Therefore I suggest that anyone who has claimed that Dawkins coined the term "Brights" either provide the evidence or retract the statement. On the topic of Dawkins allegedly being emotionally charged and saying ("Hur hur hur if you believe in God you're stupid, so we're going to call atheists 'Brights'! Get it? Huh? Hur hur hur."), I have a much different experience. I have read several books and essays by Dawkins and heard him speak in person three times as well as several times on video, and never have I read or seen Dawkins as alleged. So anyone making those allegations needs to provide a very specific citation in a form that can be easily checked or retract the allegation. Fred From max at maxmore.com Mon Dec 7 19:32:52 2009 From: max at maxmore.com (Max More) Date: Mon, 07 Dec 2009 13:32:52 -0600 Subject: [ExI] From X to Ex: How the Marvel Myth Parallels the Real Posthuman Future Message-ID: <200912071933.nB7JX0WG025733@andromeda.ziaspace.com> In 2006, I wrote a chapter for a SmartPop book on the X-Men. My piece is "From X to Ex: How the Marvel Myth Parallels the Real Posthuman Future". I just heard that the publisher is making selections from their books available for limited periods of time online. My chapter is available online from now until Friday, December 11, 2009: http://www.smartpopbooks.com/essay/full/279 I think many people here will find it a fun read, especially if you have any interest in the X-Men. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From gts_2000 at yahoo.com Mon Dec 7 20:20:02 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 7 Dec 2009 12:20:02 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <98DEC7E7-502D-4023-AFDF-571B7F312D8B@bellsouth.net> Message-ID: <910546.13745.qm@web36502.mail.mud.yahoo.com> --- On Mon, 12/7/09, John Clark wrote: > What makes Searle's Chinese Room such a stupid thought > experiment is its conclusion: The little man doesn't > understand anything therefore the entire Chinese Room > doesn't understand anything. Pretty dumb. Searle replies that the man could in principle memorize the syntactic rule-book, thus internalizing the formal program and the entire room/system. On Searle's view such a man would still lack understanding of the Chinese symbols. In my discussions on the philosophy list, I have nicknamed that man "Cram" (CRA man) and contrasted him with an ordinary bloke named Sam. Sam has intrinsic intentionality (philosophy-speak for "the about-ness of consciousness", or for our purposes here, "conscious understanding of the meanings of the symbols"). Cram, says Searle, does not.
-gts

From brentn at freeshell.org Mon Dec 7 20:38:52 2009
From: brentn at freeshell.org (Brent Neal)
Date: Mon, 7 Dec 2009 15:38:52 -0500
Subject: [ExI] Tolerance
In-Reply-To:
References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com>
Message-ID: <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org>

On 7 Dec, 2009, at 14:25, Dave Sill wrote: > And the selection of evidence isn't tilted toward the goal that he > "set out" to achieve, it just turns out to prove his point? > >> Very analytical. Dawkins is the quintessential >> spin doctor, outlining arguments that are at times specious and are >> certainly a priori as to why religion is bad, then using emotional >> rhetoric >> to distract the mind. > > So you found Hitchens, the author/journalist/activist/pundit, to be > more scientific in his approach than Dawkins, the scientist? if you're going to say "scientific," be VERY sure what you mean here. I consider neither of their approaches to be scientific, as neither makes argument by falsification. A scientific approach to this problem is not possible since it is not likely that we'll ever be able to approach religion with scientific inquiry. Yes, I do think that "God is Not Great" was less emotionally charged than "The God Delusion." In answer to some other poster, while Dawkins did not coin the term "Brights," the God Delusion, amongst his other writings -including several essays at least one of which was published in John Brockman's Edge essay series, does very much espouse the childish "hur hur" sort of argument. By Dawkins' arguments, the religious is a sign of moral and intellectual inferiority. I have very little patience for that sort of name calling. The utilitarian argument is much more compelling. If the thing produces good results, then the thing has merit. If it does not, then it is meritless. B -- Brent Neal, Ph.D. http://brentn.freeshell.org

From gts_2000 at yahoo.com Mon Dec 7 21:15:30 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 7 Dec 2009 13:15:30 -0800 (PST)
Subject: [ExI] Wernicke's aphasia
In-Reply-To: <20091207110821.ca747b8d@secure.ericade.net>
Message-ID: <648085.10437.qm@web36503.mail.mud.yahoo.com>

--- On Mon, 12/7/09, Anders Sandberg wrote: > It is worth noticing that people with Wernicke's usually > have unimpaired cognition. They might be unable to speak and > understand speech, but they can still think and plan. Actually they do speak, and from what I understand some of them speak very well. They simply don't make any sense, even to themselves. Wernicke's aphasia falls under the category of receptive aphasias as contrasted with expressive aphasias. Generally lesions in Broca's area cause the expressive sort and lesions in Wernicke's area cause the receptive sort. Simply stated, receptive aphasiacs don't know what they mean but still have plenty to say about it. Expressive aphasiacs know what they mean but cannot say anything about it. (At least that's what I think I mean to say about it.)
-gts From sparge at gmail.com Mon Dec 7 21:33:58 2009 From: sparge at gmail.com (Dave Sill) Date: Mon, 7 Dec 2009 16:33:58 -0500 Subject: [ExI] Tolerance In-Reply-To: <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> Message-ID: On Mon, Dec 7, 2009 at 3:38 PM, Brent Neal wrote: > > On 7 Dec, 2009, at 14:25, Dave Sill wrote: > >> So you found Hitchens, the author/journalist/activist/pundit, to be >> more scientific in his approach than Dawkins, the scientist? > > if you're going to say "scientific," be VERY sure what you mean here. OK, pretend I said "more analytical". > Yes, I do think that "God is Not Great" was less emotionally charged than > "The God Delusion." OK. I disagree. >?In answer to some other poster, while Dawkins did not > coin the term "Brights," the God Delusion, amongst his other writings > -including several essays at least one of which was published in John > Brockman's Edge essay series, does very much espouse the childish "hur hur" > sort of argument. Could you provide an example? I don't remember anything like that. > By Dawkins' arguments, the religious is a sign of moral > and intellectual inferiority. ?I have very little patience for that sort of > name calling. I don't know that it's name calling, really. Don't you have even the slightest problem respecting the intellectual abilities of people who believe unlikely (absurd, really) things without extraordinary evidence? A person can be a brilliant physicist, mathematician, chemist, etc., but if they honestly believe, for example, that the Bible is the word of God, aren't they also exhibiting a massive intellectual defect? > The utilitarian argument is much more compelling. If the thing produces good > results, then the thing has merit. If it does not, then it is meritless. Can't say I'm a fan of that one. -Dave From stefano.vaj at gmail.com Mon Dec 7 21:57:57 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Dec 2009 22:57:57 +0100 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> Message-ID: <580930c20912071357r188f19aaj6e01afac85c79175@mail.gmail.com> 2009/12/7 Dave Sill : > On Mon, Dec 7, 2009 at 3:38 PM, Brent Neal wrote: >> The utilitarian argument is much more compelling. If the thing produces good >> results, then the thing has merit. If it does not, then it is meritless. > > Can't say I'm a fan of that one. Neither can I. Especially when an absolute God is replaced with an equally absolute, albeit allegedly "secular", view of what is Good and what is Wrong. 
-- Stefano Vaj From jonkc at bellsouth.net Mon Dec 7 21:36:39 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 16:36:39 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <910546.13745.qm@web36502.mail.mud.yahoo.com> References: <910546.13745.qm@web36502.mail.mud.yahoo.com> Message-ID: <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> On Dec 7, 2009, at 3:20 PM, Gordon Swobe wrote: > > Searle replies that the man could in principle memorize the syntactic rule-book, thus internalizing the formal program and the entire the room/system. On Searle's view such a man would still lack understanding of the Chinese symbols. Then either the little man in the room is lying about his linguistic inadequacy or he is suffering from multiple personality disorder because it is clear that somebody or something in that room knows Chinese. By the way I assume Searle is a creationist because if he is right Evolution might have been able to produce intelligence but it could never have made consciousness, not in a billion years and not in a trillion, and yet there is at least one conscious being in the universe. Probably more but I can't prove it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Mon Dec 7 22:07:45 2009 From: sparge at gmail.com (Dave Sill) Date: Mon, 7 Dec 2009 17:07:45 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> Message-ID: 2009/12/7 John Clark : > > By the way I > assume Searle is a creationist because if he is right Evolution might have > been able to produce intelligence but it could never have made > consciousness, not in a billion years and not in a trillion, and yet there > is at least one conscious being in the universe. Probably more but I can't > prove it. But you can prove there's one? "I think, therefore I am."? -Dave From jonkc at bellsouth.net Mon Dec 7 22:12:33 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 17:12:33 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <3418453789D74C7BA8BF5A165F936B84@spike> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> Message-ID: <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> On Dec 7, 2009, at 2:03 PM, spike wrote: > John this approach lumps all religious memes together. Yes, and lumping can often be a useful tool. > I would counter-propose classifying religious thoughtspace into two broad > categories: those which suggest killing unbelievers and those which do not. Ah, the religious moderates, those sniveling cowards who give cover to maniacs. Look at me they say, I think that believing in nonsense is important too but I don't crash airliners into skyscrapers. What the 19 hijackers did on 911 was far more logical than anything religious moderates do, provided you accept their basic assumption and take what their holy book says at face value. And in a way I have more respect for creationists who just refuse to believe anything about evolution than I have for religious moderates who believe in a benevolent God and also believe in an inefficient and hideously cruel process like Evolution. At least the creationists are smart enough to know that the two things are not compatible and are in fact completely contradictory. 
If you're into classification I would propose putting people into two broad categories, those who think it's a virtue to believe in nonsense and those who don't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Dec 7 22:22:51 2009 From: pharos at gmail.com (BillK) Date: Mon, 7 Dec 2009 22:22:51 +0000 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> Message-ID: On 12/7/09, John Clark wrote: > If you're into classification I would propose putting people into two broad > categories, those who think it's a virtue to believe in nonsense and those > who don't. > > ?There are two groups of people in the world; those who believe that the world can be divided into two groups of people, and those who don't.? ?There are three kinds of people in the world; those who can count and those who can't.? BillK From moulton at moulton.com Mon Dec 7 22:26:15 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 7 Dec 2009 22:26:15 -0000 Subject: [ExI] Tolerance Message-ID: <20091207222615.48250.qmail@moulton.com> On Mon, 2009-12-07 at 15:38 -0500, Brent Neal wrote: In answer to some other poster, while > Dawkins did not coin the term "Brights," the God Delusion, amongst his > other writings -including several essays at least one of which was > published in John Brockman's Edge essay series, does very much espouse > the childish "hur hur" sort of argument. By Dawkins' arguments, the > religious is a sign of moral and intellectual inferiority. I have > very little patience for that sort of name calling. So now we have someone who has admitted that Dawkins did not coin the term "Brights". I also asked for a specific citation about what Dawkins is alleged to have said. However instead of a specific citation all we are given is something vague which borders on hand waving. The most substantial part of it refers to the Edge essay series so I went to that website and as far as I can tell there are about nine items on that site which are in whole or in part attributable to Dawkins. So please give the specific URL or essay title and quote the paragraph where Dawkins says: > > ("Hur hur hur if you believe in God you're stupid, so we're going > > to call atheists 'Brights'! Get it? Huh? Hur hur hur.") If a specific citation can be provided that Dawkins said the above then I am quite willing to admit it and if a citation can not be produced then those who make the allegation need to retract it. Fred From jonkc at bellsouth.net Mon Dec 7 22:23:40 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 17:23:40 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> Message-ID: <40433EC8-8F12-4163-811B-01831AD2E6B7@bellsouth.net> On Dec 7, 2009, at 5:07 PM, Dave Sill wrote: > But you can prove there's one [conscious being]? No of course I can't prove it, but one thing outranks even proof, direct experience. If I had a proof that you don't find it painful to put your hand in a fire I still think you'd pull it out at the first opportunity. Or you could say that I do have proof that I am conscious but unfortunately it is available only to me. 
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 7 22:31:03 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 17:31:03 -0500 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> Message-ID: <89C83BC0-3F7D-4C3A-A79C-76C16568FF09@bellsouth.net> On Dec 7, 2009, at 5:22 PM, BillK wrote: > ?There are three kinds of people in the world; those who can count and > those who can't.? 2+2=5 , for extremely large values of 2. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Dec 7 23:07:50 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 07 Dec 2009 17:07:50 -0600 Subject: [ExI] 2 + 2 In-Reply-To: <89C83BC0-3F7D-4C3A-A79C-76C16568FF09@bellsouth.net> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> <89C83BC0-3F7D-4C3A-A79C-76C16568FF09@bellsouth.net> Message-ID: <4B1D8AC6.30606@satx.rr.com> On 12/7/2009 4:31 PM, John Clark wrote: > 2+2=5 , for extremely large values of 2. No, 2 + 2 = 5 for moderately large values of two. For extremely large values, it = 6. Damien Broderick From jameschoate at austin.rr.com Mon Dec 7 23:13:59 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Mon, 7 Dec 2009 17:13:59 -0600 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <3418453789D74C7BA8BF5A165F936B84@spike> Message-ID: <20091207231359.U1XLF.382295.root@hrndva-web13-z02> ---- spike wrote: > John this approach lumps all religious memes together. I would > counter-propose classifying religious thoughtspace into two broad > categories: those which suggest killing unbelievers and those which do not. Useless distinction really, the question rests too much on political practicality and not ethical bedrock. A more practical and pragmatic distinction is whether the religion recognizes cosmological transcendence. -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jonkc at bellsouth.net Mon Dec 7 23:04:48 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 7 Dec 2009 18:04:48 -0500 Subject: [ExI] Tolerance In-Reply-To: <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> Message-ID: <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net> On Dec 7, 2009, at 9:23 AM, Brent Neal wrote: > I find the so-called "new atheists" whose arguments against religion amount to little more than "Religion Sucks! They are stating a truth, religion sucks. It sucks BIG TIME! > Compare and contrast the eloquent, rational atheism of Russell and Hitchens, to the emotionally-charged atheism of Dawkins ("Hur hur hur if you believe in God you're stupid, Well what can I say, if you believe in God then you are stupid, at least you are on that subject. 
And as much as I love Russell and Hitchens there is no way to ignore the fact that Dawkins is FAR more capable than either of the two in explaining exactly why it is stupid. > so we're going to call atheists 'Brights'! Get it? Huh? Hur hur hur.") There is no doubt that Brights are smarter than their religious counterparts, however I don't know if it was good public relations to give atheists that name. I don't know much about PR. John K Clark
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gts_2000 at yahoo.com Mon Dec 7 23:25:49 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 7 Dec 2009 15:25:49 -0800 (PST)
Subject: [ExI] Wernicke's aphasia and the CRA
Message-ID: <981805.30511.qm@web36501.mail.mud.yahoo.com>

--- On Mon, 12/7/09, John Clark wrote: > Searle replies that the man could in principle > memorize the syntactic rule-book, thus internalizing the > formal program and the entire room/system. On > Searle's view such a man would still lack understanding > of the Chinese symbols. > > Then either the little man in the room is lying > about his linguistic inadequacy or... What little man in the room, John? We now have only this real flesh and blood man who goes by the name Cram. His brain runs a formal program that uses syntactic rules to answer Chinese questions with Chinese answers. Some Chinese guy comes along and says "squiggle" so Cram dutifully looks it up in his mental look up table and replies "squaggle". The Chinese guy responds in Chinese "Thanks for the sage advice, Cram!" But Cram has no idea what "squiggle" or "squaggle" means. From his point of view, he's no different from the Wernicke aphasiac who speaks nonsense with proper syntax. -gts

From brentn at freeshell.org Tue Dec 8 00:14:54 2009
From: brentn at freeshell.org (Brent Neal)
Date: Mon, 7 Dec 2009 19:14:54 -0500
Subject: [ExI] Tolerance
In-Reply-To: <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net>
References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net>
Message-ID:

On 7 Dec, 2009, at 18:04, John Clark wrote: > Well what can I say, if you believe in God then you are stupid, at > least you are on that subject. And as much as I love Russell and > Hitchens there is no way to ignore the fact that Dawkins is FAR more > capable than either of the two in explaining exactly why it is stupid. > >> so we're going to call atheists 'Brights'! Get it? Huh? Hur hur >> hur.") > > There is no doubt that Brights are smarter than their religious > counterparts, however I don't know if it was good public relations > to give atheists that name. I don't know much about PR. > Well, there's nothing much more to say here then. I simply disagree with you, based on evidence. Being wrong doesn't mean being stupid. Einstein was certainly quite wrong about the probabilistic nature of quantum mechanics, but that didn't make him stupid. It merely made his position incorrect. I don't think we accomplish much by assuming people are stupid when they are, to our minds, mistaken. Of course, my assumption here is that you want to accomplish something. :) B -- Brent Neal, Ph.D.
http://brentn.freeshell.org From spike66 at att.net Mon Dec 7 23:51:17 2009 From: spike66 at att.net (spike) Date: Mon, 7 Dec 2009 15:51:17 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <20091207231359.U1XLF.382295.root@hrndva-web13-z02> References: <3418453789D74C7BA8BF5A165F936B84@spike> <20091207231359.U1XLF.382295.root@hrndva-web13-z02> Message-ID: > ...On Behalf Of jameschoate at austin.rr.com > Subject: Re: [ExI] pat condell's latest subtle rant > > ---- spike wrote: > > > John this approach lumps all religious memes together. I would > > counter-propose classifying religious thoughtspace into two broad > > categories: those which suggest killing unbelievers and > those which do not. > > Useless distinction really, the question rests too much on > political practicality and not ethical bedrock. A more > practical and pragmatic distinction is whether the religion > recognizes cosmological transcendence. James Choate Of course, but the reason I am interested in that particular distinction is that I wish to avoid those who want to kill me. If they do not wish to slay me, I care not if they recognize transcendance, or what they believe, or if they believe anything. I have but one life to live, and I do not wish to have it shortened by a McDawnstor*. spike *McDawnstor = Man-caused Disasterist, Associated With No Specific Theology Or Religion. From emlynoregan at gmail.com Tue Dec 8 00:24:58 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 8 Dec 2009 10:54:58 +1030 Subject: [ExI] Tolerance In-Reply-To: <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> Message-ID: <710b78fc0912071624w459e1bf9pcc4f7358897abeb6@mail.gmail.com> 2009/12/8 Stefano Vaj : > 2009/12/7 JOSHUA JOB : >>> It depends on why they insist. I find the so-called "new atheists" whose >>> arguments against religion amount to little more than "Religion Sucks! >>> Nyeah!" to be incredibly tiresome people. ?Compare and contrast the >>> eloquent, rational atheism of Russell and Hitchens, to the >>> emotionally-charged atheism of Dawkins ("Hur hur hur if you believe in God >>> you're stupid, so we're going to call atheists 'Brights'! ?Get it? Huh? Hur >>> hur hur.") >>> Brent Neal, Ph.D. >> >> Dawkins arguments aren't at all like what you suggest. > > I am surprised that one may find Hitchens, with his heavy, moralistic > rhetorics, more "rational" than Dawkins, who if anything makes for a > much more pleasant reading... > > -- > Stefano Vaj I agree; Richard Dawkins is nothing if not reasonable. Hitchens is eloquent, but he uses every trick in the rhetorical book to attack his opponents, he's certainly not a rational purist, *and* I sometimes get the impression that he opposes religion because he doesn't like the human religious hierarchies and power structures, rather than because he thinks it is fundamentally incorrect. Thinking about the "four horsemen" interview, with Christopher Hitchens, Daniel Dennett, Richard Dawkins, and Sam Harris, it seemed to me like a sesame street one-of-these-things-is-not-like-the-other-ones moment; he comes from a far more emotive position than the other three. 
-- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From thespike at satx.rr.com Tue Dec 8 00:39:40 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 07 Dec 2009 18:39:40 -0600 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net> Message-ID: <4B1DA04C.3000503@satx.rr.com> On 12/7/2009 6:14 PM, Brent Neal wrote: > I don't think we accomplish much by assuming people are stupid when they > are, to our minds, mistaken. The principle of tolerance might be based on the assumption that we don't accomplish much by assuming people are *mistaken* when they are, to our minds, mistaken--that is, tolerance is a negative epistemological thesis. "Repressive tolerance", as Herbert Marcuse dubbed it, is the ideologically motivated assertion that any proposition at all is as likely to be true as any other, as in the "equal time" notion of the msm exemplified by "balance" between (usually) two propositions, even when one is utterly grotesque or ludicrous ("Was the moon landing a hoax?--a balanced look at the controversy"). In practical terms, one must surely regard many mistaken ideas as entirely ridiculous (thetan infestation via Xenu bombing, say), and then one casts about for an explanation of how people could devote their lives and wealth to such preposterous ideas. It is understandable that one might conclude that such people, *in regard to that part of their thinking at any rate*, are indeed operationally stupid. But one doesn't get far in changing their opinions by baldly announcing this diagnosis--which is often wrong anyway. My dear wife tells me that Jesuits are obviously stupid (if they are not dissembling rogues), because they apparently believe such incredible bullshit. I try to convince her that, to the contrary, most Jesuits are smart as whips, and apply their keen minds to this bullshit with powerful intellects and expertise. John Clark is convinced that I believe extremely stupid propositions about the reality of psi phenomena, but he doesn't think I'm stupid. I don't think he's stupid for denying what the evidence insists is the case, just pigheaded and lazy for not looking at it. Damien Broderick From moulton at moulton.com Tue Dec 8 00:43:43 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 8 Dec 2009 00:43:43 -0000 Subject: [ExI] Tolerance Message-ID: <20091208004343.54141.qmail@moulton.com> On Mon, 2009-12-07 at 18:04 -0500, John Clark wrote: > There is no doubt that Brights are smarter than their religious > counterparts, however I don't know if it was good public relations > to give atheists that name. I don't know much about PR. I think there is a confusion here and I do not think "stupid" is a useful term to use. Have you actually taken a random sample of Brights and a random sample of their religious counterparts and given them some tests. If so what kind of tests? What were the results? What were the tests supposed to measure? How was "smart" being defined? And just so you know there are a wide variety people including clergy who call themselves Bright according to the Bright website. Note that the term religious does not necessarily imply supernaturalism. 
I suggest actually reading the Brights website; for example: http://www.the-brights.net/people/ I suggest that the quoted statement be retracted and replaced with a statement which is more precise and more accurate. Fred From gts_2000 at yahoo.com Tue Dec 8 00:39:24 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 7 Dec 2009 16:39:24 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <981805.30511.qm@web36501.mail.mud.yahoo.com> Message-ID: <584384.14217.qm@web36504.mail.mud.yahoo.com> > I assume Searle is a creationist because if he is right > Evolution might have been able to produce intelligence but > it could never have made consciousness, not in a billion > years and not in a trillion Searle would laugh and say that's like saying water can never freeze into a solid, not in a billion years and not in a trillion. On his view, the brain takes on the property of consciousness in a manner analogous to that by which water takes on the property of solidity. Like most of us here, he subscribes to and promotes a species of naturalism. He adamantly rejects both property and substance dualism. You won't find any mystical hocus-pocus in his philosophy. He's more our friend than our enemy, except that he sees logical problems in the computationalist theory of mind. And contrary to popular opinion, he allows for the possibility of Strong Artificial Intelligence. He just doesn't think it possible with formal programs running on hardware. Not hardware enough! -gts From emlynoregan at gmail.com Tue Dec 8 01:13:09 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 8 Dec 2009 11:43:09 +1030 Subject: [ExI] Tolerance In-Reply-To: <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> Message-ID: <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> 2009/12/8 Brent Neal : > The utilitarian argument is much more compelling. If the thing produces good > results, then the thing has merit. If it does not, then it is meritless. This is where many of us would disagree I think. For me, the consequentialist approach is not useful, because it can only ever be evaluated after the fact. You can't use it to predict the future or guide future action, because it's only a case by case description of the past; this thing turned out well, that thing turned out poorly. So for instance, if you were to look at religion this way, you'd come up with a catalogue of good outcomes / bad outcomes, but how could you use this to choose future action? I see it only as an approach useful in assigning blame, which is sometimes important, but largely an empty endeavour. If you were to use this catalogue to guide future action (let's assume an approach based on "do the thing that turned out best most often in the past"), then you'd be making an assumption that the past is the best guide to the future. Without extracting principles from your catalogue, this leaves you with a very narrow band of behaviours; you can use only approaches which have been tried before. As a guide to 21st century behaviour, that's pretty moribund. If you extract principles from the past, then you can start doing useful things. 
However, this is no longer the kind of act utilitarianism you have described, it's rule utilitarianism. Here you are looking for principles for behaviour, some set of self consistent rules which lead to good outcomes more of the time than anything else you can come up with. And here you are solidly in theoretical reasoning territory. You are trying to predict the future, so you need a theoretical model of the universe, of utility, of how people work, etc etc. That's exactly where you need a priori arguments about religion. We can say that we shouldn't use religion because, in the general case, we think it will lead to negative utility, based not only on evidence but on logical extrapolation of its definitional features (eg: focus on faith above truth). But I'm even suspicious of utilitarianism here. Utilitarianism appears to me to have a real weakness regarding relative power of actors. When you talk of maximizing utility, you're talking about something very fuzzy as if it were strongly defined. Are there utility points which each person has, which you can sum under various scenarios and find the greatest such? No. Instead, we kind of guess at what the utility overall is, based on intuition of what is good for other people, and invariably altered by the lens we look through, which is our POV. Inescapably, people's interests clash; if they didn't, we wouldn't need any systems for sorting this kind of stuff out in the first place. So any important decision about how to live, how to proceed into the future, is one where competing interests are being "balanced" (ie: some winners and losers are being chosen). But who is deciding how to do this balancing? Those with power in a given situation. Is their assessment of the best outcome the same as that of the losers? Almost definitely not, these are groups in conflict. I think the more you use case-by-case assessment of utility, the more prone you are to the (sometimes unconscious) bias toward the powerful by the simple fact that it is their utility functions being used in calculations. So I find myself more and more in favour of general, unalterable principles. The most important I can think of is the pre-eminence of truth. Truth is more important than anything else. Which is why I like Richard Dawkins, I think we share that as a value. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From brentn at freeshell.org Tue Dec 8 01:20:03 2009 From: brentn at freeshell.org (Brent Neal) Date: Mon, 7 Dec 2009 20:20:03 -0500 Subject: [ExI] Tolerance In-Reply-To: <4B1DA04C.3000503@satx.rr.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net> <4B1DA04C.3000503@satx.rr.com> Message-ID: <605F4023-5A54-4E3A-B0BA-18414D46080B@freeshell.org> On 7 Dec, 2009, at 19:39, Damien Broderick wrote: > In practical terms, one must surely regard many mistaken ideas as > entirely ridiculous (thetan infestation via Xenu bombing, say), and > then one casts about for an explanation of how people could devote > their lives and wealth to such preposterous ideas. It is > understandable that one might conclude that such people, *in regard > to that part of their thinking at any rate*, are indeed > operationally stupid. 
But one doesn't get far in changing their > opinions by baldly announcing this diagnosis--which is often wrong > anyway. My dear wife tells me that Jesuits are obviously stupid (if > they are not dissembling rogues), because they apparently believe > such incredible bullshit. I try to convince her that, to the > contrary, most Jesuits are smart as whips, and apply their keen > minds to this bullshit with powerful intellects and expertise. I think you're stating, with a lot more eloquence, the point I was trying to make. :) But I also think that while there are some ideas that are clearly false (the existence of thetans, or the Heavenly Host, e.g.), there are ideas that are not as obviously false. I fully expect that at some time in the far future, our descendants will be laughing at our "juvenile superstitions" (to quote Dawkins) about synthetic biology and string theory. :) B -- Brent Neal, Ph.D. http://brentn.freeshell.org

From thespike at satx.rr.com Tue Dec 8 01:21:05 2009
From: thespike at satx.rr.com (Damien Broderick)
Date: Mon, 07 Dec 2009 19:21:05 -0600
Subject: [ExI] Wernicke's aphasia and the CRA
In-Reply-To: <584384.14217.qm@web36504.mail.mud.yahoo.com>
References: <584384.14217.qm@web36504.mail.mud.yahoo.com>
Message-ID: <4B1DAA01.6080705@satx.rr.com>

On 12/7/2009 6:39 PM, Gordon Swobe wrote: > Searle would laugh and say that's like saying water can never freeze into a solid, not in a billion years and not in a trillion. On his view, the brain takes on the property of consciousness in a manner analogous to that by which water takes on the property of solidity. You summarize his position nicely. However, this ignores what actually happens with people learning a new skill, especially a new language. At first, the elements are laboriously memorized, then chunked, then the activity is practised, and at a certain point the process does indeed... crystallize. Your consciousness alters. You are no longer arduously performing an algorithm, you're *reading* or *speaking* the other language (or playing tennis, not just hitting the ball). You claimed on Searle's behalf: >Some Chinese guy comes along and says "squiggle" so Cram dutifully looks it up in his mental look up table and replies "squaggle". ... But Cram has no idea what "squiggle" or "squaggle" means. From his point of view, he's no different from the Wernicke aphasiac who speaks nonsense with proper syntax.< Leaving aside the absurd scale problems with this analogy (the guy equals a neuron with very slow synaptic connections to other neurons), if this could be instantiated in a memory-augmented brain I'd expect that Cram would finally have an epiphanic moment of illumination and find that he *did* understand Chinese. Would an equivalent nonhuman computer? Maybe not, unless it emulated an embodied brain with an in-built grammar menu, just like us at birth. And at that point we seem to rejoin Searle in his agreement that AI is possible, using the correct causal architecture and powers. Damien Broderick

From p0stfuturist at yahoo.com Tue Dec 8 00:21:45 2009
From: p0stfuturist at yahoo.com (Post Futurist)
Date: Mon, 7 Dec 2009 16:21:45 -0800 (PST)
Subject: [ExI] Tolerance
Message-ID: <973172.81999.qm@web59914.mail.ac4.yahoo.com>

What interests me about someone such as a Dawkins is not what he says but what he doesn't say, concerning his ambition. If there is no God, then what fills the vacuum left by God's absence? Dawkins or someone else fills the void.
So you replace one god with another; you replace a metaphysical god with a secular deity. Rather than place your money in an offering basket at church, you spend the funds on, say, 'The God Delusion', or 'The Greatest Show On Earth'; go to a lecture or watch a DVD by Dawkins or someone of his sort. And the secular deity stands on the podium like a priest standing in his pulpit. As the medium is the message, so too is the messenger the medium: the secular messiah's message is "God is dead, but I am alive; and I offer an enlightenment you might want in place of God's." >> Dawkins arguments aren't at all like what you suggest. But I could not even stand listening to Dawkins for more than a few minutes in a TED talk. There was just something so... so fanatical and almost dogmatic, that I had to stop. And this fits the picture of someone who'd coin that ridiculous concept of "brights", such a stupid and embarrassing fiasco. Alas, it seems to me that the Jacobin temperament is alive and well even among us atheists.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p0stfuturist at yahoo.com Tue Dec 8 00:34:34 2009
From: p0stfuturist at yahoo.com (Post Futurist)
Date: Mon, 7 Dec 2009 16:34:34 -0800 (PST)
Subject: [ExI] Tolerance
Message-ID: <475488.12108.qm@web59910.mail.ac4.yahoo.com>

putting aside the metaphysics involved, I can think of one practical reason right off the bat to respect-- but not necessarily like-- religion. Say there was no religious prohibition on adultery. Then more husbands than otherwise would cheat on their wives, more families would eventually break up, the wives and children get on welfare, and libertarians have more lazy people to complain about who don't take care of their children. We can't have that :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lcorbin at rawbw.com Tue Dec 8 01:48:44 2009
From: lcorbin at rawbw.com (Lee Corbin)
Date: Mon, 07 Dec 2009 17:48:44 -0800
Subject: [ExI] Tolerance
Message-ID: <4B1DB07C.1080802@rawbw.com>

Pat Condell and John Clark are not only very entertaining, but often put truths in a way that's a delight to hear--- so long as you already agree that God does not exist or religion has done at least as much harm as good (and probably more). In a more extensive way, of course, Dawkins and Hitchens document why it's reasonable not to believe, and what deleterious results follow from religious belief. And I agree wholeheartedly with Brent about *results* counting more than anything. But there is an important asymmetry that arises when we go beyond just considering our "internal" (though quite public) discussions among us atheists. Say that two persons A and B converse, and A lets it be known he's religious and B affirms that he's an atheist. The situation is not symmetrical: already B is implying that A is a dupe. It's the *added* shrill militancy of people like Dawkins that I find repellent. You don't have to read much history to see the same vicious certainty in revolutionary France or Russia, or even in Hitlerite Germany. The intolerance is palpable. Of course, throughout history, it is we nonbelievers and skeptics who have been on the short end of intolerance. But is it either wise or good to imply that when the tables are turned, if they ever are, that we will then be completely intolerant of the "stupidity" of religion? Brent wrote > I don't think we accomplish much by assuming people > are stupid when they are, to our minds, mistaken.
This must strike those of the Jacobin temperament as hopelessly wishy-washy, good-willed, and easy-going. Where is the fire and brimstone? NO! According to many on our side, the religious must be denounced in every possible way. Name-calling that would be prohibited on this list (or in any civilized discourse) is par for the course. Brent continued > Of course, my assumption here is that you want to > accomplish something. :) Well---that's exactly the right question. I totally agree that atheists need to speak up despite the lack of symmetry I pointed to above (in fact, I think, it's this asymmetry that most often keeps atheists quiet). But it's *how* we speak up that we need to reconsider. The only things that come of name-calling that I know of are rather reprehensible: 1. you can, by screaming abuse, scare people into being silent (we occasionally see that on this list) 2. you can poison the conversation, creating an extreme polarization that forces even the neutral to take sides Now Dawkins and the others behind a "brights" campaign are hardly dumb---and it frightens me that they must be perfectly well-aware of these two points. They very rightly, though, point out that atheists have often been the ones scared to speak up, and perfectly correct to say that this should stop. It's just sad that the word can't be spread without creating even more polarization. And when it comes to that, my friends, I'm afraid that those of us who lack the God gene will be the ones outnumbered and outgunned. Do you want that? Let's please confine ourselves to an evolutionary, not revolutionary, approach. Evolutionary persuasion ---not revolutionary confrontation. Lee From emlynoregan at gmail.com Tue Dec 8 01:48:43 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 8 Dec 2009 12:18:43 +1030 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <3418453789D74C7BA8BF5A165F936B84@spike> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> Message-ID: <710b78fc0912071748l2b33abfo4b7bbe28c31de4fa@mail.gmail.com> 2009/12/8 spike : > > >>On Behalf Of John Clark > ... > > ? ? ? ?>Atheists have played the Mr. Nice-guy part for a very long time and > the end result is that religious morons crash airliners into skyscrapers; > and the very next day the USA organized a national day of prayer to give > homage to the very mental cancer that caused the disaster... John K Clark > > > John this approach lumps all religious memes together. ?I would > counter-propose classifying religious thoughtspace into two broad > categories: those which suggest killing unbelievers and those which do not. > > spike I like this Spike. If we can combine religion and ideology and philosophy into the same group (maybe the space of meme packages), then it's a great principle for them all. There are plenty of examples of non-religious ideologies which have been just as dangerous. Liberal societies always have this problem of how to do tolerance, when those they are being tolerant of wont necessarily extend the same invitation back. I think for inclusive, tolerant societies, where values are allowed to be divergent, we need some concept of meta-values; values about values, which all participants are expected to honor. We might say that the major meta-value is that each person's interests have equal weight, and derive the various freedoms (such as speech, liberty, association) from this meta-value, in that if everyone doesn't respect these, then how is the value to be upheld? 
I'm sure someone famous has said this better than I do here. But, I think it's a good approach; a way of separating the really important core values that free society requires in order to function, from everything else, which by definition we want to allow to be divergent. Back to Spike's topic, any meme package which encourages killing outgroupers is anathema to the meta value, and deserves to be treated with some suspicion. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From thespike at satx.rr.com Tue Dec 8 01:56:28 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 07 Dec 2009 19:56:28 -0600 Subject: [ExI] Tolerance In-Reply-To: <973172.81999.qm@web59914.mail.ac4.yahoo.com> References: <973172.81999.qm@web59914.mail.ac4.yahoo.com> Message-ID: <4B1DB24C.1000505@satx.rr.com> On 12/7/2009 6:21 PM, Post Futurist wrote: > What interests me about someone such as a Dawkins is not what he says > but what he doesn't say, concerning his ambition. > If there is no God, then what fills the vacuum left by God's absence? If there is no god, who pulls up the next Kleenex? If there is no unicorn, what fills up the vacuum left by the unicorn's absence? Answer: there is no vacuum to fill, since there was no unicorn to start with. Is this really so difficult to grasp? Damien Broderick From brentn at freeshell.org Tue Dec 8 01:56:55 2009 From: brentn at freeshell.org (Brent Neal) Date: Mon, 7 Dec 2009 20:56:55 -0500 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> Message-ID: <74D51C9E-B9B6-4AA6-B75C-95FF460DA71C@freeshell.org> On 7 Dec, 2009, at 20:13, Emlyn wrote: > 2009/12/8 Brent Neal : >> The utilitarian argument is much more compelling. If the thing >> produces good >> results, then the thing has merit. If it does not, then it is >> meritless. > > This is where many of us would disagree I think. For me, the > consequentialist approach is not useful, because it can only ever be > evaluated after the fact. You can't use it to predict the future or > guide future action, because it's only a case by case description of > the past; this thing turned out well, that thing turned out poorly. So > for instance, if you were to look at religion this way, you'd come up > with a catalogue of good outcomes / bad outcomes, but how could you > use this to choose future action? I see it only as an approach useful > in assigning blame, which is sometimes important, but largely an empty > endeavour. I don't think that's necessarily true. We have the ability to predict outcomes of our actions and thus act with an expectation of a particular outcome. We therefore act in a way that we believe will create maximum utility. If we were only able to do this on the basis of past experiences, as you suggest, then we'd not have the concept of imagination or creativity. > > But I'm even suspicious of utilitarianism here. Utilitarianism appears > to me to have a real weakness regarding relative power of actors. 
When > you talk of maximizing utility, you're talking about something very > fuzzy as if it were strongly defined. Are there utility points which > each person has, which you can sum under various scenarios and find > the greatest such? No. Instead, we kind of guess at what the utility > overall is, based on intuition of what is good for other people, and > invariably altered by the lens we look through, which is our POV. You said that you more enamored of universal principles of action. Yet, you reject the concept of maximizing utility. This doesn't make sense to me in some way, since it seems obvious that personal preference and individual agency are not subject to falsification. Given that, you shouldn't expect there to be "universal" truth, but rather some metric of goodness that is dependent on individual experience and preference. The concept of utility, I grant you, isn't perfect, but meets those criteria for a metric on something that cannot be objectively measured as well as anything else. B -- Brent Neal, Ph.D. http://brentn.freeshell.org From lcorbin at rawbw.com Tue Dec 8 01:58:41 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 07 Dec 2009 17:58:41 -0800 Subject: [ExI] 2 + 2 In-Reply-To: <4B1D8AC6.30606@satx.rr.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> <89C83BC0-3F7D-4C3A-A79C-76C16568FF09@bellsouth.net> <4B1D8AC6.30606@satx.rr.com> Message-ID: <4B1DB2D1.20405@rawbw.com> >> 2+2=5 , for extremely large values of 2. > > No, 2 + 2 = 5 for moderately large values of two. For extremely large > values, it = 6. Unfortunately, all values are small. Now, if there were only finitely many positive integer values, then you guys might have a case. Lee From thespike at satx.rr.com Tue Dec 8 02:09:29 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 07 Dec 2009 20:09:29 -0600 Subject: [ExI] 2 + 2 In-Reply-To: <4B1DB2D1.20405@rawbw.com> References: <200912050420.nB54KLCl021144@andromeda.ziaspace.com> <3418453789D74C7BA8BF5A165F936B84@spike> <88A25264-0DE1-4E18-B0FD-B0C720D475E1@bellsouth.net> <89C83BC0-3F7D-4C3A-A79C-76C16568FF09@bellsouth.net> <4B1D8AC6.30606@satx.rr.com> <4B1DB2D1.20405@rawbw.com> Message-ID: <4B1DB559.6020902@satx.rr.com> On 12/7/2009 7:58 PM, Lee Corbin wrote: >>> 2+2=5 , for extremely large values of 2. >> No, 2 + 2 = 5 for moderately large values of two. For extremely large >> values, it = 6. > Unfortunately, all values are small. 2.45 (a moderately large value of 2), rounded, is treated as 2. 4.9, rounded, is treated as 5. 2.95 (an extremely large value of 2), rounded, is treated as 3. QED. (I know, I know.) From moulton at moulton.com Tue Dec 8 02:25:03 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 8 Dec 2009 02:25:03 -0000 Subject: [ExI] [Exi] Tolerance Message-ID: <20091208022503.60246.qmail@moulton.com> On Mon, 2009-12-07 at 16:21 -0800, Post Futurist wrote: > What interests me about someone such as a Dawkins is not what he says > but what he doesn't say, concerning his ambition . > If there is no God, then what fills the vacuum left by God's > absence? Dawkins or someone else fills the void. First there is an error in stating that the absence of something necessarily creates a vacuum. Second if you are writing about a "psychological void" then I challange you to prove it. Because all I see is assertions with little or no substance and a lack of evidence. 
> So you replace > one god with another; you replace a metaphysical god with a secular > deity. And exactly who is this "you"? If you are referring to me then you are wrong; completely wrong and making assertions about things of which you have no knowledge. > Rather than place your money in an offering basket at church, > you spend the funds on, say, 'The God Delusion', or 'The Greatest > Show On Earth'; go to a lecture or watch a DVD by Dawkins or someone > of his sort. And the secular deity stands on the podium like a priest > standing in his pulpit. > As the medium is the message, so too is the messenger the medium: > the secular messiah's message is "God is dead, but I am alive; and > I offer an enlightenment you might want in place of God's." I am not going to bother pointing out the remainder of the errors and logical fallacies; I have better things to do at the moment. Fred

From emlynoregan at gmail.com Tue Dec 8 02:47:57 2009
From: emlynoregan at gmail.com (Emlyn)
Date: Tue, 8 Dec 2009 13:17:57 +1030
Subject: [ExI] Tolerance
In-Reply-To: <475488.12108.qm@web59910.mail.ac4.yahoo.com>
References: <475488.12108.qm@web59910.mail.ac4.yahoo.com>
Message-ID: <710b78fc0912071847w70956625vc0e5f3c18ac407ab@mail.gmail.com>

2009/12/8 Post Futurist

> putting aside the metaphysics involved, I can think of one practical reason > right off the bat to respect-- but not necessarily like-- religion. > Say there was no religious prohibition on adultery. Then more husbands than otherwise > would cheat on their wives, more families would eventually break > up, the wives and children get on welfare, and libertarians have more lazy > people to complain about who don't take care of their children. We can't > have that *:)* > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >

That, in fact, is wrong.
http://atheism.about.com/od/atheistfamiliesmarriage/a/AtheistsDivorce.htm

From a 1999 Barna Research Group study (a religious group, I believe):

11% of all American adults are divorced
25% of all American adults have had at least one divorce
27% of born-again Christians have had at least one divorce
24% of all non-born-again Christians have been divorced
21% of atheists have been divorced
21% of Catholics and Lutherans have been divorced
24% of Mormons have been divorced
25% of mainstream Protestants have been divorced
29% of Baptists have been divorced
24% of nondenominational, independent Protestants have been divorced
27% of people in the South and Midwest have been divorced
26% of people in the West have been divorced
19% of people in the Northwest and Northeast have been divorced

-- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gts_2000 at yahoo.com Tue Dec 8 02:23:35 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 7 Dec 2009 18:23:35 -0800 (PST)
Subject: [ExI] Wernicke's aphasia and the CRA
In-Reply-To: <4B1DAA01.6080705@satx.rr.com>
Message-ID: <183035.84621.qm@web36507.mail.mud.yahoo.com>

Hiya Damien, > >Some Chinese guy comes along and says "squiggle" so > Cram dutifully looks it up in his mental look up table and > replies "squaggle". ... But Cram has no idea what "squiggle" > or "squaggle" mean.
From his point of view, he's no > different from the Wernicke aphasiac who speaks nonsense > with proper syntax.< > > Leaving aside the absurd scale problems with this analogy > (the guy equals a neuron with very slow synaptic connections > to other neurons), No, perhaps you missed the earlier messages. Some of Searle's critics argued that the man in the room represents a neuron or some other small part of a larger system and that his lack of understanding means nothing -- after all he's not the system and perhaps the system has understanding. Searle replied "Fine, let the man internalize the entire room." That's my man (android?) Cram. He runs a formal program in is brain as described in the paragraph of mine that you quoted above. No Chinese Room. No system. Just Cram. > if this could be instantiated in an memory-augmented brain I'd > expect that Cram would finally have an epiphanic moment of > illumination and find that he *did* understand Chinese. I'd like to expect it too, but I need an argument to justify that expectation of an "epiphanic moment of illumination". Short of an act of god, or some other mystical explanation, how exactly does such a marvelous thing happen? What awakens Pinocchio? -gts From kanzure at gmail.com Tue Dec 8 03:18:39 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 7 Dec 2009 21:18:39 -0600 Subject: [ExI] H+ Summit 2009 transcripts/discussions In-Reply-To: References: <55ad6af70912061133x25a8e525jce425be57fa04bdf@mail.gmail.com> Message-ID: <55ad6af70912071918t41d407f8q496c6648c238af6a@mail.gmail.com> On Mon, Dec 7, 2009 at 3:37 PM, JonathanCline wrote: > What software are you using to transcribe? Just vim. For Windows users, the closest analogy would be notepad, or your standard text editor. Nothing fancy.. - Bryan http://heybryan.org/ 1 512 203 0507 From nanite1018 at gmail.com Tue Dec 8 03:29:38 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 7 Dec 2009 22:29:38 -0500 Subject: [ExI] Tolerance In-Reply-To: <4B1DB07C.1080802@rawbw.com> References: <4B1DB07C.1080802@rawbw.com> Message-ID: <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> > But is it either wise or good to imply that when the > tables are turned, if they ever are, that we will then > be completely intolerant of the "stupidity" of religion? It depends on how you define intolerance. Banning them, flogging them, stoning them, torturing them, etc. is of course bad, but I very much doubt anyone here or any of the main "New Atheists" would propose that. If you mean say unequivocally they were wrong and refuse to listen to any of their nonsense, then that isn't a problem at all. Being intolerant of wrong ideas on a personal level (arguing against them, disassociating oneself from those ideas and weighing it in a decision to interact with someone who holds those ideas, etc.) is a rational strategy, it increases the costs of their mistake, and thereby increases the likelihood they will correct it, as well as simply limiting the contact in your own life between you and error. > And when it comes to that, my friends, I'm afraid > that those of us who lack the God gene will be the > ones outnumbered and outgunned. Do you want that? > > Let's please confine ourselves to an evolutionary, > not revolutionary, approach. Evolutionary persuasion > ---not revolutionary confrontation. > Lee I think we've been trying that for a number of decades now, and it hasn't had great success. In fact, the only success its really had in the public is that it opened the way for Dawkins and the rest. 
Now obviously I am not for declaring an intellectual war or anything. But I do think that atheists need to not be wishy-washy and somehow give the religious the idea we think their ideas about the nature of reality have merit. Because, honestly, they don't. Science has disproved them or logic has shown them sorely wanting and riddled with inconsistency. Don't go around telling people with crosses around their necks they're " 'tards" or something offensive like that. But if ever the subject comes up, atheists should make it clear where they stand, and if the religious pursue it further, ensure that they do their best to dissuade them of their error. Joshua Job nanite1018 at gmail.com From sparge at gmail.com Tue Dec 8 03:37:59 2009 From: sparge at gmail.com (Dave Sill) Date: Mon, 7 Dec 2009 22:37:59 -0500 Subject: [ExI] Tolerance In-Reply-To: <605F4023-5A54-4E3A-B0BA-18414D46080B@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net> <4B1DA04C.3000503@satx.rr.com> <605F4023-5A54-4E3A-B0BA-18414D46080B@freeshell.org> Message-ID: On Mon, Dec 7, 2009 at 8:20 PM, Brent Neal wrote: > > ... I fully expect that at sometime in the far future, > our descendants will be laughing at our "juvenlie superstitions" (to quote > Dawkins) about synthetic biology and string theory. :) It's one thing to be wrong about something based on incomplete evidence or lack of understanding and another to believe something that's contradictory to all evidence because it offers a comforting explanation. The person holding a wrong belief due to incomplete evidence or lack of understanding will automatically adjust their beliefs when new evidence is discovered, but the person believing a made up story about a supreme being will adjust their interpretation of contrary evidence to make it consistent with their beliefs. We find it amusing that people used to think the Sun revolved around the Earth, but those people believed that because that's the way it looked to them, and they had no evidence to the contrary. But people believing in religions do so in the face of overwhelming evidence that their beliefs are highly unlikely to be correct and with only the scantiest evidence that they're true. -Dave From nanite1018 at gmail.com Tue Dec 8 03:39:51 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Mon, 7 Dec 2009 22:39:51 -0500 Subject: [ExI] Tolerance In-Reply-To: <74D51C9E-B9B6-4AA6-B75C-95FF460DA71C@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> <74D51C9E-B9B6-4AA6-B75C-95FF460DA71C@freeshell.org> Message-ID: > You said that you more enamored of universal principles of action. > Yet, you reject the concept of maximizing utility. This doesn't make > sense to me in some way, since it seems obvious that personal > preference and individual agency are not subject to falsification. 
> Given that, you shouldn't expect there to be "universal" truth, but > rather some metric of goodness that is dependent on individual > experience and preference. The concept of utility, I grant you, > isn't perfect, but meets those criteria for a metric on something > that cannot be objectively measured as well as anything else. > Brent Neal, Ph.D. You can develop universal tenets for human behavior from the fact that humans are animals who survive solely based on their rational capacity, and the fact that the source of all value for an organism is that organism's life (it can't have values of any sort without that). As such, any action by anyone which interferes with the ability of a person to behave in the way they choose is immoral; it attacks the basic principle and root cause of all possible values. As a result, you have no right to initiate force against any other human being. I'm glossing over details, of course, but that gives a very basic outline of how you can arrive at certain universal principles of action based on principles and reason alone. As for what an individual should do outside of not initiating force, well, that is a more complex subject. But overall it is simply to serve his own individual life and pursue his own individual values, which he has determined based on reason. That is a universal principle which can apply to every single person on the face of the Earth (or anywhere else humans happen to be) without contradiction. Utility cannot be judged accurately at all. I know what is best for me most likely better than anyone; I cannot know what is best for you or society as a whole. That is totally personal and individual in nature. It seems odd to say that one should universalize a purely personal view of what values are, etc., and then apply them to everyone in order to determine what course of action to take. Why not just have them apply to yourself? That makes it much less prone to error. I am intrigued by your repeated stress on falsification. You cannot live your entire life based on falsification alone without any principles created inductively from experience and reason. So why place such stress on falsification, especially in a moral system, which is expressly about how to live your life? Joshua Job nanite1018 at gmail.com From brentn at freeshell.org Tue Dec 8 04:03:40 2009 From: brentn at freeshell.org (Brent Neal) Date: Mon, 7 Dec 2009 23:03:40 -0500 Subject: [ExI] Tolerance In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> <74D51C9E-B9B6-4AA6-B75C-95FF460DA71C@freeshell.org> Message-ID: <0CD7901A-BC7B-43E7-A27B-1006BB34E9CB@freeshell.org> On 7 Dec, 2009, at 22:39, JOSHUA JOB wrote: > I am intrigued by your repeated stress on falsification. You cannot > live your entire life based on falsification alone without any > principles created inductively from experience and reason. So why > place such stress on falsification, especially in a moral system, > which is expressly about how to live your life? You're reiterating exactly the point I was making. Morals and ethics are not scientific, in the Popperian sense or any other sense.
I conclude from that that any purported "universal" ethics are flawed. In the vernacular, only you have the ability to decide for yourself what is morally or ethically correct. It has been my observation that people adhere to social structures due to some utility that they provide. I choose a rationalist approach because it provides me with more utility than a superstitious approach. I rather like being able to figure out what's going on around me through testing hypotheses, and appreciate the value of having some guidelines on what can be objectively determined and what cannot. Some people, apparently, don't. Now, I could go all anthropological and argue that at some point in the distant past the structures we now call "religions" had some utility in society, but now the marginal utility of religion has been driven to the negative by insistence, particularly of the Abrahamic set, that faith comes before science; therefore atheism/humanism/etc. is on the rise as increasing numbers of people discover that these old traditions are truly not useful anymore. But I won't, since I'm a physicist and not an anthropologist, and as such I'm quite aware that this argument can fairly be considered crackpottish. :) It still doesn't change my original argument that calling people names based on your sense of superiority is profoundly self-defeating. Someone mentioned the Four Horsemen interview. I recall thinking that Dennett was the only one of the four of them that wasn't a total tool in the interview. :) While I tend to sympathize, as you will no doubt have noted, with Hitchens and Dawkins, that doesn't mean that I don't think they can act like asses at times. And I question the rationality of a worldview that provides such a black-and-white view of superiority and inferiority as the worldview of these so-called "Brights" does. Outside the realm of the scientific, I've learned to be profoundly distrustful of folks who offer me either-or choices: Believe in my god or suffer. Free markets mean zero regulation. You're either atheist or stupid. The world is only that simple to people who are too lazy to think in depth about the issue at hand or are too uneducated to have a complex opinion. B -- Brent Neal, Ph.D. http://brentn.freeshell.org From p0stfuturist at yahoo.com Tue Dec 8 03:13:53 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 7 Dec 2009 19:13:53 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <647812.20276.qm@web59916.mail.ac4.yahoo.com> One (not a "you", Mr. Moulton) might say that God is a necessary fiction to billions, but that a unicorn is only a droll legend such as Santa Claus; thus a God is not comparable to a unicorn. If all the statues and paintings of unicorns were destroyed, the world would remain the same, but if belief in God were to be destroyed the world would be different-- how different no one can say. Now, I don't say that God or even a Santa Clausian belief in God is necessary, but religion is necessary as it has existed for thousands of years, and cannot be dispensed with like that, any more than the family can be dispensed with like that, however outmoded they both may very well be. Dawkins isn't so strident today; he has toned down his rhetoric since 'The God Delusion' in the interest of better public relations. --- On Mon, 12/7/09, Damien Broderick wrote: If there is no unicorn, what fills up the vacuum left by the unicorn's absence? Answer: there is no vacuum to fill, since there was no unicorn to start with. Is this really so difficult to grasp?
Damien Broderick -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Tue Dec 8 03:51:21 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 7 Dec 2009 19:51:21 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <57076.78853.qm@web59908.mail.ac4.yahoo.com> ?Religion might be important for another 200 years for all anyone knows. This is what soured me on futurism: the far future can be so far in the future it isn't worth thinking about, let alone discussing. > ... I fully expect that at sometime in the far future, > our descendants will be laughing at our "juvenlie superstitions" (to quote > Dawkins) about synthetic biology and string theory. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Tue Dec 8 03:21:51 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 7 Dec 2009 19:21:51 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912071847w70956625vc0e5f3c18ac407ab@mail.gmail.com> Message-ID: <437222.72716.qm@web59903.mail.ac4.yahoo.com> Would these stats (aside from the 21 percent atheists) be higher if?adultery wasn't rejected by many or most religions? 11% of all American adults are divorced 25% of all American adults have had at least one divorce 27% of born-again Christians have had at least one divorce 24% of all non-born-again Christians have been divorced 21% of atheists have been divorced 21% of Catholics and Lutherans have been divorced 24% of Mormons have been divorced 25% of mainstream Protestants have been divorced 29% of Baptists have been divorced 24% of nondenominational, independent Protestants have been divorced 27% of people in the South and Midwest have been divorced 26% of people in the West have been divorced 19% of people in the Northwest and Northeast have been divorced -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Dec 8 04:56:59 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 07 Dec 2009 22:56:59 -0600 Subject: [ExI] Tolerance In-Reply-To: <0CD7901A-BC7B-43E7-A27B-1006BB34E9CB@freeshell.org> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <88C5E3C6-5BB3-4F7F-8CA9-8517CD489425@GMAIL.COM> <580930c20912070806i3e490aa0h92aeb72cdf854e76@mail.gmail.com> <4B1D2FC3.4030400@rawbw.com> <1BA962BE-CEAA-4F70-B64B-449630DB4039@freeshell.org> <710b78fc0912071713g718a1937h9b4231bdccfd9f0c@mail.gmail.com> <74D51C9E-B9B6-4AA6-B75C-95FF460DA71C@freeshell.org> <0CD7901A-BC7B-43E7-A27B-1006BB34E9CB@freeshell.org> Message-ID: <4B1DDC9B.90108@satx.rr.com> On 12/7/2009 10:03 PM, Brent Neal wrote: > I question the rationality of a worldview that provides such a > black-and-white view of superiority and inferiority as the worldview of > these so-called "Brights" does. Outside the realm of the scientific, > I've learned to be profoundly distrustful of folks who offer me > either-or choices: Believe in my god or suffer. Free markets mean zero > regulation. You're either atheist or stupid. The world is only that > simple to people who are too lazy to think in depth about the issue at > hand or are too uneducated to have a complex opinion. I have an essay titled "Beyond Faith and Opinion" in a rather interesting "New Atheists" book, 50 VOICES OF DISBELIEF: Why We Are Atheists, eds. 
Russell Blackford and Udo Schuklenk (Wiley-Blackwell). My view is closer to Brent's: Damien Broderick From jonkc at bellsouth.net Tue Dec 8 06:12:48 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 01:12:48 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <584384.14217.qm@web36504.mail.mud.yahoo.com> References: <584384.14217.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 7, 2009, at 7:39 PM, Gordon Swobe wrote: > On his view, the brain takes on the property of consciousness in a manner analogous to that by which water takes on the property of solidity. But for some reason this mysterious phase change only happens to 3 pounds of grey goo in our head and never happens in his Chinese Room. He never explains why. > Like most of us here, he subscribes to and promotes a species of naturalism. He [Searle] adamantly rejects both property and substance dualism. You won't find any mystical hocus-pocus in his philosophy. Bullshit. He thinks intelligent behavior is possible without consciousness so evolution could not have produced consciousness, no way no how. He has no other explanation how it came to be so to explain its existence he has no choice but to resort to mystical hocus-pocus. > he allows for the possibility of Strong Artificial Intelligence. He just doesn't think it possible with formal programs running on hardware. Not hardware enough! So if atoms are arranged in a way that produces a human brain those atoms can produce consciousness and if arranged as a computer they can too, provided the computer doesn't use hardware or software. Don't you find that idea just a little bit stupid? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Tue Dec 8 06:45:13 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 8 Dec 2009 06:45:13 -0000 Subject: [ExI] [Exi] Tolerance Message-ID: <20091208064513.3166.qmail@moulton.com> On Mon, 2009-12-07 at 23:03 -0500, Brent Neal wrote: > And I question the > rationality of a worldview that provides such a black-and-white view > of superiority and inferiority as the worldview of these so-called > "Brights" does. Please provide a citation from the Brights website or a publication from the Brights organization. I have looked around the Brights website and I can not find what you seem to referencing. Fred From jonkc at bellsouth.net Tue Dec 8 06:44:51 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 01:44:51 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <981805.30511.qm@web36501.mail.mud.yahoo.com> References: <981805.30511.qm@web36501.mail.mud.yahoo.com> Message-ID: <9BF2C078-1A26-49F7-9E77-BB3D6C425B2B@bellsouth.net> On Dec 7, 2009, at 6:25 PM, Gordon Swobe wrote: > Cram has no idea what "sguiggle" or "squaggle" means. There is not one scrap of evidence that is true and there is considerable evidence that it is false. If you don't believe me then just ask Cram, he will tell you exactly what sguiggle or squaggle means. Its like saying Einstein was really an idiot, he just wrote spoke and acted brilliantly. John -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 8 06:59:13 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 01:59:13 -0500 Subject: [ExI] Tolerance. 
In-Reply-To: References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <052B1759-B17F-431F-82F8-A361A5163F7C@freeshell.org> <97E007C8-0D32-4669-8CF8-F461D648DDDE@bellsouth.net> Message-ID: <959388F1-BD3F-41C7-B4C2-EB5D15B10BED@bellsouth.net> On Dec 7, 2009, at 7:14 PM, Brent Neal wrote: > Einstein was certainly quite wrong about the probabilistic nature of quantum mechanics, but that didn't make him stupid. It merely made his position incorrect. Agreed. > I don't think we accomplish much by assuming people are stupid when they are, to our minds, mistaken. So do you think the word "stupid" should be removed from the English language? And if you can't use that word in reference to religion when in the world can you use it? We are not talking about some esoteric point in physics, we are talking about people who want to crash airliners into skyscrapers and the moderates who give them cover by demanding respect for a particular brand of mind cancer. I make no apology in calling that stupid as well as evil. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Tue Dec 8 07:49:58 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 8 Dec 2009 07:49:58 -0000 Subject: [ExI] Tolerance Message-ID: <20091208074958.69748.qmail@moulton.com> On Mon, 2009-12-07 at 17:48 -0800, Lee Corbin wrote: > But there is an important asymmetry that arises when > we go beyond just considering our "internal" (though quite > public) discussions among us atheists. Say that two > persons A and B converse, and A lets it be known he's > religious and B affirms that he's an atheist. The > situation is not symmetrical: already B is implying > that A is a dupe. Slightly false since not all religions are theist. Now there are some religions which are theist. So instead of leaving this example let us make it specific. Let A be a Christian who believes in the literal truth of the entire Judeo-Christian scriptures (what is commonly referred to as a Fundamentalist). And let B be an atheist who says there is no God. Now B may feel that A is a dupe, or B may feel that A is going through a period of transition until A arrives at a higher level of understanding or whatever. The point is that we do not know exactly what B thinks about A. However we know what the Fundamentalist position is about B; the Fundamentalist position comes right out of the Bible and says that B is a fool. Yes it is in the Bible and to quote the entire verse: The fool says in his heart, "There is no God." They are corrupt, they do abominable deeds, there is none who does good Psalms 14:1 Now we know that not all religious are Fundamentalist. The term "religious" covers a variety of positions. So my point is that we need to avoid oversimplifying complex issues. > It's the *added* shrill militancy of people like Dawkins > that I find repellent. Dawkins is not a "shrill militant" by any reasonable usage of the phrase "shrill militant". I have been present on three different occasions where Dawkins spoke at length and he is not shrill. > You don't have to read much history > to see the same vicious certainty in revolutionary France > or Russia, or even in Hitlerite Germany. The intolerance > is palpable. This is totally both false and disgusting. 
To try to smear Dawkins by reference to the French or Russian revolutions or to "Hitlerite Germany" is intellectually dishonest and contemptible and anyone who does so should be ashamed. Fred From lcorbin at rawbw.com Tue Dec 8 08:28:46 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 08 Dec 2009 00:28:46 -0800 Subject: [ExI] Tolerance In-Reply-To: <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> Message-ID: <4B1E0E3E.5010708@rawbw.com> JOSHUA JOB wrote: >> But is it either wise or good to imply that when the >> tables are turned, if they ever are, that we will then >> be completely intolerant of the "stupidity" of religion? > > It depends on how you define intolerance. Banning them, flogging them, > stoning them, torturing them, etc. is of course bad, but I very much > doubt anyone here or any of the main "New Atheists" would propose that. Thank G..., er thank goodness that yes, indeed we have progressed beyond flogging and stoning. > If you mean say unequivocally they were wrong and refuse to listen to > any of their nonsense, then that isn't a problem at all. Agreed. One can listen to or not listen to whatever one wishes. >> And when it comes to that, my friends, I'm afraid >> that those of us who lack the God gene will be the >> ones outnumbered and outgunned. Do you want that? >> >> Let's please confine ourselves to an evolutionary, >> not revolutionary, approach. Evolutionary persuasion >> ---not revolutionary confrontation. > > I think we've been trying that for a number of decades now, and it > hasn't had great success. Au contraire. By every measure in the west, the percentage of atheists is rising. We'll "win"---just be patient. (At least we'll win until the demographic change starts the game over from square one.) But the strategy of alienating religious people yet further isn't going to help, except in ways I would not condone. They shouldn't be given any reason whatsoever to think that life under the atheists has been as bad as (at many times) life for us under the religious often was. Alas, this is surely all wishful thinking on my part. As soon as we are in the heavy majority, kids will hear in school from their PC teachers that, so far as religion goes, "there are the brights who don't believe, and then, there are the others, less bright, who do, and sadly some of you in this very class come from disadvantaged homes..." > obviously I am not for declaring an intellectual war or anything. But I > do think that atheists need to not be wishy-washy and somehow give the > religious the idea we think their ideas about the nature of reality have > merit. Again, I agree. > Because, honestly, they don't. Science has disproved them or > logic has shown them sorely wanting and riddled with inconsistency. > Don't go around telling people with crosses around their necks they're " > 'tards" or something offensive like that. Exactly. Nor try to define them as "un-bright". > But if ever the subject comes > up, atheists should make it clear where they stand, and if the religious > pursue it further, ensure that they do their best to dissuade them of > their error. Quite so. 
Lee From eugen at leitl.org Tue Dec 8 11:12:30 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 8 Dec 2009 12:12:30 +0100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <183035.84621.qm@web36507.mail.mud.yahoo.com> References: <4B1DAA01.6080705@satx.rr.com> <183035.84621.qm@web36507.mail.mud.yahoo.com> Message-ID: <20091208111230.GT17686@leitl.org> On Mon, Dec 07, 2009 at 06:23:35PM -0800, Gordon Swobe wrote: > No, perhaps you missed the earlier messages. > > Some of Searle's critics argued that the man in the room represents a neuron or some other small part of a larger system and that his lack of understanding means nothing -- after all he's not the system and perhaps the system has understanding. Searle replied "Fine, let the man internalize the entire room." Fine, let the cow jump over the Moon. I mean, it only takes a few megagee or so to achieve enough momentum to overcome hypersonic drag at ground level. Of course, the cow will get homogenized by the acceleration, and then turned into shocked plasma before its lunar flyby. > That's my man (android?) Cram. He runs a formal program in his brain as described in the paragraph of mine that you quoted above. So you want to simulate a Chinese-speaking person. So first you use paper, which doesn't work (the Sun would burn out before you can get the system to comprehend the first sentence, not that there are enough trees on the planet anyway). Not content with that impossibility, you want to represent at least 10^17 bits, never mind the nontrivial transition function over them, between some guy's ears who can't even handle seven thoughts without starting to drop them. Where do they grow such fine idiots like Searle? > No Chinese Room. No system. Just Cram. Just omit Cram. Works even better now. > > if this could be instantiated in a memory-augmented brain I'd > > expect that Cram would finally have an epiphanic moment of > > illumination and find that he *did* understand Chinese. > > I'd like to expect it too, but I need an argument to justify that expectation of an "epiphanic moment of illumination". Short of an act of god, or some other mystical explanation, how exactly does such a marvelous thing happen? What awakens Pinocchio? What makes you think the guy even knows what he's doing? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Dec 8 11:59:58 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 8 Dec 2009 22:59:58 +1100 Subject: [ExI] Tolerance In-Reply-To: <973172.81999.qm@web59914.mail.ac4.yahoo.com> References: <973172.81999.qm@web59914.mail.ac4.yahoo.com> Message-ID: 2009/12/8 Post Futurist > > What interests me about someone such as a Dawkins is not what he says but what he doesn't say, concerning his ambition. > If there is no God, then what fills the vacuum left by God's absence? Dawkins or someone else fills the void. So you replace one god with another; you replace a metaphysical god with a secular deity. Rather than place your money in an offering basket at church, you spend the funds on, say, 'The God Delusion', or 'The Greatest Show On Earth'; go to a lecture or watch a DVD by Dawkins or someone of his sort. And the secular deity stands on the podium like a priest standing in his pulpit.
> As the medium is the message, so too is the messenger the medium: the secular messiah's message is "God is dead, but I am alive; and I?offer an enlightenment you might want in place of God's." You probably wouldn't put up this argument in defence of other sorts of nonsense, such as a belief in Santa Claus, or astrology, or the power of crystals to heal disease, or any number of things which you would dismiss as just obviously crap. Why should religious nonsense get special consideration? -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Dec 8 12:04:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 8 Dec 2009 04:04:32 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <367275.93635.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/8/09, John Clark wrote: > But for some reason this mysterious phase change only > happens to 3 pounds of grey goo in our head and never > happens in his Chinese Room. He never explains why. He explains exactly why in his formal argument: Premise A1: Programs are formal (syntactic). Premise A2: Minds have mental contents (semantics). Premise A3: Syntax is neither constitutive of nor sufficient for semantics. Ergo, Conclusion C1: Programs are neither constitutive of nor sufficient for minds. So then Searle gives us at least four targets at which to aim (three premises and the opportunity to deny that his conclusion follows). He continues with more formal arguments to defend his philosophy of mind, what he calls biological naturalism, but if C1 doesn't hold then we needn't consider them. I came back to ExI after a long hiatus (I have 6000+ unread messages in my ExI mail folder) because I was struck by the fact that Wernicke's aphasia lends support to A3, normally considered the only controversial premise in his argument. -gts > Like most of us here, he subscribes to and > promotes a species of naturalism. He [Searle] adamantly > rejects both property and substance dualism. You won't > find any mystical hocus-pocus in his philosophy. > > Bullshit. He thinks intelligent behavior is possible > without consciousness so evolution could not have produced > consciousness, no way no how. He has no other explanation > how it came to be so to explain its existence he has no > choice but to resort to?mystical > hocus-pocus. > he allows for the possibility > of Strong Artificial Intelligence. He just doesn't think > it possible with formal programs running on hardware. Not > hardware enough! > > So if atoms are arranged in a way that produces a > human brain those atoms can produce consciousness and if > arranged as a computer they can too, provided the computer > doesn't use hardware or software. Don't you find > that idea just a little bit stupid? > ?John K Clark? 
> -----Inline Attachment Follows----- > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From stathisp at gmail.com Tue Dec 8 12:23:42 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 8 Dec 2009 23:23:42 +1100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <998024.79732.qm@web36503.mail.mud.yahoo.com> References: <998024.79732.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/7 Gordon Swobe : > I've taken the position that for the thought experiment portion of Searle's CRA to have any value -- that if we should consider it anything more than mere philosophical hand-waving -- then it must first qualify as a valid scientific experiment. To qualify as such, it must work in a context-independent manner; scientists anywhere in the universe should obtain the same results using the same man in the room. And for that to happen, I argue, the man in the room must lack knowledge not only of the meanings of Chinese symbols, but also the words and symbols of every possible language in the universe. He must have no semantics whatsoever. I don't see why you say that. He simply needs to carry out a purely mechanical process, like a factory labourer. > Somewhat tongue in cheek, I continued my argument by stating the subject would need to undergo brain surgery prior to the experiment to remove the relevant parts of his brain. I then did a little research and learned we would need to remove Wernicke's area, and learned also of this interesting phenomenon of Wernicke's aphasia. > > One might consider the existence of Wernicke's aphasia as evidence supporting Searle's third premise in his CRA, that 'syntax is neither constitutive of nor sufficient for semantics'. People with this strange malady have an obvious grasp of syntax but also clearly have no idea what they're talking about! It is probably true that syntax is not sufficient for semantics. We could study an alien language and, with enough examples, work out all the criteria for well-formed sentences, but still not have the faintest idea what even a single word in the language means. >> In other words,the components don't know what they're doing, but the >> system does. > > So goes the systems reply to the CRA, one of many that Searle fielded with varying degrees of success depending on who you ask. It seems that Searle just doesn't get the difference between a system and its components. We agree that the brain as a whole understands language, but that does not mean that the neurons understand language. Even if the neurons had their own separate intelligence and were telepathically linked, discussing when they were going to release certain neurotransmitters and so on, they need not have any understanding of the language the brain understands. -- Stathis Papaioannou From stathisp at gmail.com Tue Dec 8 12:34:25 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 8 Dec 2009 23:34:25 +1100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> Message-ID: 2009/12/8 John Clark : > On Dec 7, 2009, at 3:20 PM, Gordon Swobe wrote: > > Searle replies that the man could in principle memorize the syntactic > rule-book, thus internalizing the formal program and the entire the > room/system. 
On Searle's view such a man would still lack understanding of > the Chinese symbols. > > Then either the little man in the room is lying about his linguistic > inadequacy?or he is suffering from multiple personality disorder because it > is clear that somebody or something in that room knows Chinese. No, the man could still be completely ignorant of Chinese. Chinese is difficult, but you could probably get a patient idiot to carry out a simpler computation in his head following mechanical rules, but have no idea what he was doing. -- Stathis Papaioannou From stefano.vaj at gmail.com Tue Dec 8 12:37:11 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Dec 2009 13:37:11 +0100 Subject: [ExI] Tolerance In-Reply-To: <20091208074958.69748.qmail@moulton.com> References: <20091208074958.69748.qmail@moulton.com> Message-ID: <580930c20912080437p4739681dj24ccee07b1280396@mail.gmail.com> 2009/12/8 : > Slightly false since not all religions are theist. Yes. This raises an important point. In fact, I am slightly uneasy with the concept itself of "atheism", since it accepts in the first place that Jahv?, Allah and the Holy Trinity are in fact "gods", according to the bad Bible translation of the Seventies", and that the rejection thereof is a fundamental stance splitting the world in two camps. During the Roman empire, in fact, christianism was not even considered as a legitimate religion, but rather as a superstitio nova ac malefica (a new, malicious superstition). In fact, almost every other religion either comprises a set of beliefs which die away gently, or has no problem whatsoever with a technoscientific or a transhumanist worldview (in fact, it might even corroborate it); and in any event does not command "faith" in a number of statements of fact as a moral duty to be imposed on any human being. Say, Zen, pre-christian paganism, confucianism, shinto... -- Stefano Vaj From stathisp at gmail.com Tue Dec 8 12:43:25 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 8 Dec 2009 23:43:25 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <367275.93635.qm@web36502.mail.mud.yahoo.com> References: <367275.93635.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/8 Gordon Swobe : > Premise A1: Programs are formal (syntactic). > Premise A2: Minds have mental contents (semantics). > Premise A3: Syntax is neither constitutive of nor sufficient for semantics. > > Ergo, > > Conclusion C1: Programs are neither constitutive of nor sufficient for minds. > > So then Searle gives us at least four targets at which to aim (three premises and the opportunity to deny that his conclusion follows). > > He continues with more formal arguments to defend his philosophy of mind, what he calls biological naturalism, but if C1 doesn't hold then we needn't consider them. > > I came back to ExI after a long hiatus (I have 6000+ unread messages in my ExI mail folder) because I was struck by the fact that Wernicke's aphasia lends support to A3, normally considered the only controversial premise in his argument. The A1/A2 dichotomy is deceptive. A human child learns that if it utters a particular word it gets a particular response, and a computer program can learn the same thing. Why would you say the child "understands" the word but the program doesn't? 
-- Stathis Papaioannou From bbenzai at yahoo.com Tue Dec 8 12:47:20 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 8 Dec 2009 04:47:20 -0800 (PST) Subject: [ExI] pat condell's latest subtle rant In-Reply-To: Message-ID: <163726.13368.qm@web32003.mail.mud.yahoo.com> "spike" observed: >>On Behalf Of John Clark ... >> Atheists have played the Mr. Nice-guy part for a very long time and the end result is that religious morons crash airliners into skyscrapers; and the very next day the USA organized a national day of prayer to give homage to the very mental cancer that caused the disaster... John K Clark > John this approach lumps all religious memes together. I would counter-propose classifying religious thoughtspace into two broad categories: those which suggest killing unbelievers and those which do not. Hmm, that leaves precious few religions that do not. Christianity certainly isn't one of them. Before you object, consider that the only thing that restrains it is secular law, whereas Islam has no such restraint. Go back a few hundred years and find a religion that did not practice the killing of unbelievers. Even Buddhism isn't spotless in this regard. Ben Zaiboc From bbenzai at yahoo.com Tue Dec 8 12:49:00 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 8 Dec 2009 04:49:00 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <686569.71154.qm@web32007.mail.mud.yahoo.com> "MB" declared: >>"Hur hur hur if you believe in >> God you're stupid, so we're going to call atheists 'Brights'! Get it? >> Huh? Hur hur hur." > Arrgh. I was on an email list like this once. > They were correct: keep religion out of science class. But religion is part of human history and psychology. There are several sciences where a study of religion is appropriate. Surely you're not saying that religion shouldn't be studied in anthropology, psychology and psychiatry? Religion shouldn't be mistaken for a science, of course, but it is a fit subject for study. Ben Zaiboc From gts_2000 at yahoo.com Tue Dec 8 13:12:30 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 8 Dec 2009 05:12:30 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <475553.36640.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/8/09, Stathis Papaioannou wrote: >> Premise A1: Programs are formal (syntactic). > Premise A2: Minds have mental contents (semantics). > Premise A3: Syntax is neither constitutive of nor > sufficient for semantics. > The A1/A2 dichotomy is deceptive. A human child learns that > if it utters a particular word it gets a particular response, and > a computer program can learn the same thing. Why would you say the > child "understands" the word but the program doesn't? Presumably the child has mental contents, i.e., semantics, that accompany if not also cause its behaviors, whereas the machine intelligence has only syntactical rules (even if self-written) that define its behaviors. Keep in mind, and this is in reply also to one of your other messages, that Searle has no problem whatsoever with weak AI. He believes Software/Hardware systems will eventually mimic the behaviors of humans, pass the Turing test and in the eyes of behaviorists exceed the intelligence of humans. But will such S/H systems have semantic understanding? More generally, will any S/H system running a formal program have what philosophers call intentionality? Never, says Searle. The Turing test will give false positives. 
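To make "only syntactical rules" concrete, here is a minimal sketch of such a purely syntactic responder (a toy illustration in Python; the rule table and symbols are invented for the example and are not anything proposed in this thread):

# Toy rule-following responder: pure symbol manipulation, no semantics.
# The table is hypothetical; it simply pairs input symbols with replies,
# and nothing in the program represents what any symbol means.
RULES = {
    "squiggle": "squaggle",
    "squoggle": "squiggle squiggle",
}

def respond(symbol):
    # Look the symbol up; unknown symbols get a fixed default reply.
    return RULES.get(symbol, "squaggle squoggle")

print(respond("squiggle"))  # prints "squaggle"

Whether scaling such a table up, or replacing it with a program that learns its own rules, could ever amount to understanding is exactly the point in dispute.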
-gts From eugen at leitl.org Tue Dec 8 13:13:39 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 8 Dec 2009 14:13:39 +0100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> Message-ID: <20091208131339.GU17686@leitl.org> On Tue, Dec 08, 2009 at 11:34:25PM +1100, Stathis Papaioannou wrote: > No, the man could still be completely ignorant of Chinese. Chinese is > difficult, but you could probably get a patient idiot to carry out a Your idiot has to be able to simulate a neuron by hand. That means holding a lot of state, and doing billions of computations by hand more or less accurately. > simpler computation in his head following mechanical rules, but have Let's drop the "in his head" requirement, because It's Just Not Possible. > no idea what he was doing. It would take a genius to figure out what he's doing. It would be worse than an ant mapping New York. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From anders at aleph.se Tue Dec 8 13:13:22 2009 From: anders at aleph.se (Anders Sandberg) Date: Tue, 08 Dec 2009 13:13:22 +0000 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <20091208111230.GT17686@leitl.org> References: <4B1DAA01.6080705@satx.rr.com> <183035.84621.qm@web36507.mail.mud.yahoo.com> <20091208111230.GT17686@leitl.org> Message-ID: <4B1E50F2.5000507@aleph.se> Gordon Swobe wrote: > > Some of Searle's critics argued that the man in the room represents a neuron or some other small part of a larger system and that his lack of understanding means nothing -- after all he's not the system and perhaps the system has understanding. Searle replied "Fine, let the man internalize the entire room." > > > How does Searle take Clark and Chalmer's extended mind concept? (My guess: he doesn't believe in it) -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From mbb386 at main.nc.us Tue Dec 8 13:23:23 2009 From: mbb386 at main.nc.us (MB) Date: Tue, 8 Dec 2009 08:23:23 -0500 (EST) Subject: [ExI] Tolerance In-Reply-To: <686569.71154.qm@web32007.mail.mud.yahoo.com> References: <686569.71154.qm@web32007.mail.mud.yahoo.com> Message-ID: <53512.12.77.168.220.1260278603.squirrel@www.main.nc.us> > But religion is part of human history and psychology. > There are several sciences where a study of religion is appropriate. This is true. The theme of the list I was referring to was, "Keep *the teaching of* Intelligent Design and Creation Science out of the biology class." Regards, MB From bbenzai at yahoo.com Tue Dec 8 13:46:52 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 8 Dec 2009 05:46:52 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <294076.76666.qm@web32008.mail.mud.yahoo.com> Post Futurist asked: > Would these stats (aside from the 21 percent atheists) be higher if adultery wasn't rejected by many or most religions? 
> 11% of all American adults are divorced > 25% of all American adults have had at least one divorce > 27% of born-again Christians have had at least one divorce > 24% of all non-born-again Christians have been divorced > 21% of atheists have been divorced > 21% of Catholics and Lutherans have been divorced > 24% of Mormons have been divorced > 25% of mainstream Protestants have been divorced > 29% of Baptists have been divorced > 24% of nondenominational, independent Protestants have been divorced > 27% of people in the South and Midwest have been divorced > 26% of people in the West have been divorced > 19% of people in the Northwest and Northeast have been divorced I don't understand what significance this has, neither the stats nor the question. Could you explain? Ben Zaiboc From jonkc at bellsouth.net Tue Dec 8 15:35:30 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 10:35:30 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <367275.93635.qm@web36502.mail.mud.yahoo.com> References: <367275.93635.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 8, 2009, at 7:04 AM, Gordon Swobe wrote: > --- On Tue, 12/8/09, John Clark wrote: > >> But for some reason this mysterious phase change only >> happens to 3 pounds of grey goo in our head and never >> happens in his Chinese Room. He never explains why. > > He explains exactly why in his formal argument: > > Premise A1: Programs are formal (syntactic). > Premise A2: Minds have mental contents (semantics). > Premise A3: Syntax is neither constitutive of nor sufficient for semantics. > > Ergo, > > Conclusion C1: Programs are neither constitutive of nor sufficient for minds. > > So then Searle gives us at least four targets at which to aim (three premises and the opportunity to deny that his conclusion follows). > > He continues with more formal arguments to defend his philosophy of mind, what he calls biological naturalism, but if C1 doesn't hold then we needn't consider them. > > I came back to ExI after a long hiatus (I have 6000+ unread messages in my ExI mail folder) because I was struck by the fact that Wernicke's aphasia lends support to A3, normally considered the only controversial premise in his argument. > > -gts > > > > > > > > > > >> Like most of us here, he subscribes to and >> promotes a species of naturalism. He [Searle] adamantly >> rejects both property and substance dualism. You won't >> find any mystical hocus-pocus in his philosophy. >> >> Bullshit. He thinks intelligent behavior is possible >> without consciousness so evolution could not have produced >> consciousness, no way no how. He has no other explanation >> how it came to be so to explain its existence he has no >> choice but to resort to mystical >> hocus-pocus. >> he allows for the possibility >> of Strong Artificial Intelligence. He just doesn't think >> it possible with formal programs running on hardware. Not >> hardware enough! >> >> So if atoms are arranged in a way that produces a >> human brain those atoms can produce consciousness and if >> arranged as a computer they can too, provided the computer >> doesn't use hardware or software. Don't you find >> that idea just a little bit stupid? 
>> John K Clark >> -----Inline Attachment Follows----- >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 8 15:43:05 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 10:43:05 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <367275.93635.qm@web36502.mail.mud.yahoo.com> References: <367275.93635.qm@web36502.mail.mud.yahoo.com> Message-ID: <48F499D6-0600-4160-AD1E-F9BB44CFB4B4@bellsouth.net> On Dec 8, 2009, at 7:04 AM, Gordon Swobe wrote: > He explains exactly why in his formal argument: > > Premise A1: Programs are formal (syntactic). > Premise A2: Minds have mental contents (semantics). > Premise A3: Syntax is neither constitutive of nor sufficient for semantics. > Ergo, > Conclusion C1: Programs are neither constitutive of nor sufficient for minds. So he assumes that programs can't have minds and then triumphantly concludes that programs can't have minds. I don't think creationists like Searle are very bright. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 8 16:00:02 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 11:00:02 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> Message-ID: On Dec 8, 2009, at 7:34 AM, Stathis Papaioannou wrote: > No, the man could still be completely ignorant of Chinese. Chinese is > difficult, but you could probably get a patient idiot to carry out a > simpler computation in his head following mechanical rules, but have > no idea what he was doing. If that is possible then Darwin was wrong. I don't think Darwin was wrong. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 8 15:54:51 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 8 Dec 2009 10:54:51 -0500 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: References: <998024.79732.qm@web36503.mail.mud.yahoo.com> Message-ID: On Dec 8, 2009, at 7:23 AM, Stathis Papaioannou wrote: > It is probably true that syntax is not sufficient for semantics. We > could study an alien language and, with enough examples, work out all > the criteria for well-formed sentences, but still not have the > faintest idea what even a single word in the language means. I think we could understand what an alien book on pure mathematics means but never mind, a computer could figure out what words mean the same way that we do, by examples. I don't see why Searle thinks that straightforward process can only proceed in the 3 pounds of grey goo in our head and is inaccessible to silicon. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p0stfuturist at yahoo.com Tue Dec 8 18:12:01 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 8 Dec 2009 10:12:01 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <101682.31015.qm@web59911.mail.ac4.yahoo.com> Actually, I accept possible placebo effects of Santa, astrology, crystals, etc. In fact that might be the entire point. You probably wouldn't put up this argument in defence of other sorts of nonsense, such as a belief in Santa Claus, or astrology, or the power of crystals to heal disease, or any number of things which you would dismiss as just obviously crap. Why should religious nonsense get special consideration? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Tue Dec 8 18:27:35 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 8 Dec 2009 10:27:35 -0800 (PST) Subject: [ExI] tolerance Message-ID: <137321.58391.qm@web59905.mail.ac4.yahoo.com> There isn't any significance; it was purely academic. My collateral point is: intellectuals can't reduce, say, a family unit to symbols; the husband isn't h, the wife isn't w, the children are not x,y,z. My perhaps irrelevant (to this list) opinion is that hyper-intellectualism can reduce humans to mere abstractions, and perceive the cosmos as being almost VR. Which might smack of intolerance. From: Ben Zaiboc From stefano.vaj at gmail.com Tue Dec 8 20:47:46 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Dec 2009 21:47:46 +0100 Subject: [ExI] Tolerance In-Reply-To: References: <973172.81999.qm@web59914.mail.ac4.yahoo.com> Message-ID: <580930c20912081247teed175g8d26e7d17b8345a@mail.gmail.com> 2009/12/8 Stathis Papaioannou : >> If there is no God, then what fills the vacuum left by God's absence? Dawkins or someone else fills the void. Gods are not created equal. I certainly do not consider Mr. Dawkins as mine, but I can live very well with the fact that human beings are bound to have some kind of "religious" identity in the very broadest sense of the word. My problem is that I do not like it to be of a metaphysical, superstitious, universalist, repressive, eschatological, anti-scientific, anti-promethean, and, yes, ultimately anti-transhumanist nature as is the case for the Big Monotheistic Trio. Especially since even amongst "traditional" religions (let alone other equally plausibly satisfactory philosophical persuasions) this need not be the case. -- Stefano Vaj From bbenzai at yahoo.com Tue Dec 8 20:30:01 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 8 Dec 2009 12:30:01 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <617130.15122.qm@web32001.mail.mud.yahoo.com> From: Post Futurist wrote: > putting aside the metaphysics involved, I can think of one practical reason right off the bat to respect-- but not necessarily like-- religion. Say there was no religious prohibition on adultery. Then more husbands than otherwise would cheat on their wives, more families would eventually break up, the wives and children get on welfare, and libertarians have more lazy people to complain about who don't take care of their children. We can't have that :) Aha. I received a digest out of order, which is why I missed this. It was Emlyn, not you, who posted the stats about divorce. Anyway, this all depends on a bunch of very conventional assumptions: That not having a religious prohibition on 'adultery' would mean more husbands having extramarital sex.
That 'cheating' on your wife is necessarily bad, and without her consent, etc. That this would lead to the breakup of families. That families breaking up would lead to wives and children on welfare. That people on welfare are lazy. That lazy people on welfare complain about their children not being taken care of. A pretty tenuous chain of assumptions, I think. As most religions tend to reinforce these conventions, I'd say they have more of a negative than a positive effect. Don't you think people would be happier if their religion said it was fine to have extramarital sex providing everyone involved was fully aware of what was going on, and agreed to it? It's the abrahamic religions' attitude of fear and disgust towards sex that created these problems in the first place, and you're saying you respect them for this? There are a large (and growing) number of polyamorous people for whom these problems don't exist, because they take the trouble to communicate with each other, and are honest with each other. I've never actually seen what any of the majority religions' attitude on polyamory is, but I'd hazard a guess that it's not a supportive one. Ben Zaiboc From stefano.vaj at gmail.com Tue Dec 8 21:07:12 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Dec 2009 22:07:12 +0100 Subject: [ExI] Tolerance In-Reply-To: <617130.15122.qm@web32001.mail.mud.yahoo.com> References: <617130.15122.qm@web32001.mail.mud.yahoo.com> Message-ID: <580930c20912081307j5761a2f1p6e8d5c6dc21bd521@mail.gmail.com> 2009/12/8 Ben Zaiboc : > As most religions tend to reinforce these conventions, I'd say they have more of a negative than a positive effect. ?Don't you think people would be happier if their religion said it was fine to have extramarital sex providing everyone involved was fully aware of what was going on, and ?agreed to it? ?It's the abrahamic religions' attitude of fear and disgust towards sex that created these problems in the first place, and you're saying you respect them for this? Yep. I wanted to post something along this exact lines, but then I decided to drop the idea since I doubted to be able to explain such point in US English.... :-D -- Stefano Vaj From stefano.vaj at gmail.com Tue Dec 8 22:12:32 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Dec 2009 23:12:32 +0100 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <163726.13368.qm@web32003.mail.mud.yahoo.com> References: <163726.13368.qm@web32003.mail.mud.yahoo.com> Message-ID: <580930c20912081412l1c54e7b2mdf73536a533e215b@mail.gmail.com> 2009/12/8 Ben Zaiboc : > Hmm, that leaves precious few religions that do not. ?Christianity certainly isn't one of them. ?Before you object, consider that the only thing that restrains it is secular law, whereas Islam has no such restraint. ?Go back a few hundred years and find a religion that did not practice the killing of unbelievers. ?Even Buddhism isn't spotless in this regard. I beg to differ. With the exception of middle-east monotheisms, and unless and until confronted therewith, it seems to me that most world religions could not care less about "unbelief". Can you imagine, say, a druid trying to obtain the forced conversion of a war prisoner? Aboriginal Austrialians fighting against one another on the basis of theological dissents? A Zen practitioner attacking a shinto shrine? Venus partisans against Iuppiter partisans in ancient Rome? 
-- Stefano Vaj From p0stfuturist at yahoo.com Tue Dec 8 22:41:05 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 8 Dec 2009 14:41:05 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <883322.8210.qm@web59902.mail.ac4.yahoo.com> Having grown up in the '60s with wild characters, I saw it all by age 13; so it doesn't happen to bother me in the slightest. But I have no kids. You would have to ask unbiased statisticians as to whether or not permissive has an overall negative effect. It may have a negative effect in the short run, but in the long run a beneficial effect. Perhaps by mid-century people will be genuinely "free". Anyway, the only thing that seemed extremely unreasonable was thousands blaming Reagan for not caring about HIV and allegedly hoping AIDS would thin the gay herd. Reagan probably didn't pay attention and if he did he almost certainly thought it was none of his concern. He had other things on his mind. ? Don't you think people would be happier if their religion said it was fine to have extramarital sex providing everyone involved was fully aware of what was going on, and? agreed to it?? It's the abrahamic religions' attitude of fear and disgust towards sex that created these problems in the first place, and you're saying you respect them for this? There are a large (and growing) number of polyamorous people for whom these problems don't exist, because they take the trouble to communicate with each other, and are honest with each other.? I've never actually seen what any of the majority religions' attitude on polyamory is, but I'd hazard a guess that it's not a supportive one. Ben Zaiboc -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Dec 8 23:24:48 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 8 Dec 2009 15:24:48 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <4B1E50F2.5000507@aleph.se> Message-ID: <5734.42256.qm@web36502.mail.mud.yahoo.com> Anders, > How does Searle take Clark and Chalmer's extended mind concept? > > (My guess: he doesn't believe in it) Not sure he's ever considered it, but I agree he would not believe in it. Odd that you should mention it, by the way. I seem to recall bringing that obscure paper to the attention of people here about a million years ago in a discussion about qualia. I found it intriguing because it made sense of qualia without assaulting common sense. According to that preposterous theory, people say tomatoes are red because tomatoes are red. -gts From stathisp at gmail.com Wed Dec 9 00:51:28 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 9 Dec 2009 11:51:28 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <475553.36640.qm@web36502.mail.mud.yahoo.com> References: <475553.36640.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/9 Gordon Swobe : > --- On Tue, 12/8/09, Stathis Papaioannou wrote: > >>> Premise A1: Programs are formal (syntactic). >> Premise A2: Minds have mental contents (semantics). >> Premise A3: Syntax is neither constitutive of nor >> sufficient for semantics. > > >> The A1/A2 dichotomy is deceptive. A human child learns that >> if it utters a particular word it gets a particular response, and >> a computer program can learn the same thing. Why would you say the >> child "understands" the word but the program doesn't? 
> > Presumably the child has mental contents, i.e., semantics, that accompany if not also cause its behaviors, whereas the machine intelligence has only syntactical rules (even if self-written) that define its behaviors. The child learns to make a particular noise when it is hungry and that *becomes* semantics. The syntax is more difficult and comes later. > Keep in mind, and this is in reply also to one of your other messages, that Searle has no problem whatsoever with weak AI. He believes Software/Hardware systems will eventually mimic the behaviors of humans, pass the Turing test and in the eyes of behaviorists exceed the intelligence of humans. > > But will such S/H systems have semantic understanding? More generally, will any S/H system running a formal program have what philosophers call intentionality? Never, says Searle. The Turing test will give false positives. Searle's insistence that weak AI is possible but not strong AI is his other big failing. He has not addressed David Chalmer's argument in his 1995 paper (http://consc.net/papers/qualia.html) showing that IF it is possible to replicate the behaviour of neurons with electronic replacements THEN any subjective experiences associated with the original neurons will also be replicated. In fact, I've never seen an attempt at a rebuttal of this argument by anyone who has understood it. -- Stathis Papaioannou From stathisp at gmail.com Wed Dec 9 00:57:06 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 9 Dec 2009 11:57:06 +1100 Subject: [ExI] Tolerance In-Reply-To: <101682.31015.qm@web59911.mail.ac4.yahoo.com> References: <101682.31015.qm@web59911.mail.ac4.yahoo.com> Message-ID: 2009/12/9 Post Futurist > > Actually, I accept possible placebo effects of Santa, astrology, crystals, etc. > In fact that might be the entire point. > > > You probably wouldn't put up this argument in defence of other sorts > > of nonsense, such as a belief in Santa Claus, or astrology, or the > > power of crystals to heal disease, or any number of things which you > > would dismiss as just obviously crap. Why should religious nonsense > > get special consideration? Sometimes it is appropriate to tell or even believe lies, but in general the truth is better. -- Stathis Papaioannou From anders at aleph.se Wed Dec 9 01:05:09 2009 From: anders at aleph.se (Anders Sandberg) Date: Wed, 09 Dec 2009 02:05:09 +0100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: 5734.42256.qm@web36502.mail.mud.yahoo.com Message-ID: <20091209010509.626fa8a9@secure.ericade.net> Gordon Swobe: > Anders, > > How does Searle take Clark and Chalmer's extended mind concept? > > > > (My guess: he doesn't believe in it) > > Not sure he's ever considered it, but I agree he would not believe in it. > > Odd that you should mention it, by the way. I seem to recall bringing that > obscure paper to the attention of people here about a million years ago in a > discussion about qualia. I came across it tonight - in the context of time management! Heylighen, Francis and Vidal, Cl?ment (2007) Getting Things Done: The Science behind Stress-Free Productivity, http://cogprints.org/5904/ is a nice little overview of Allens GTD method for time management, which they try to link with situated and distributed cognition. Useful transhumanist reading. 
So not only does it solve qualia, it also helps time management :-) Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From stathisp at gmail.com Wed Dec 9 01:07:25 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 9 Dec 2009 12:07:25 +1100 Subject: [ExI] Wernicke's aphasia and the CRA In-Reply-To: <20091208131339.GU17686@leitl.org> References: <910546.13745.qm@web36502.mail.mud.yahoo.com> <870A3F6D-298C-4967-8B21-4D37BFCF4FDE@bellsouth.net> <20091208131339.GU17686@leitl.org> Message-ID: 2009/12/9 Eugen Leitl : > On Tue, Dec 08, 2009 at 11:34:25PM +1100, Stathis Papaioannou wrote: > >> No, the man could still be completely ignorant of Chinese. Chinese is >> difficult, but you could probably get a patient idiot to carry out a > > Your idiot has to be able to simulate a neuron by hand. That means > holding a lot of state, and doing billions of computations by hand > more or less accurately. > >> simpler computation in his head following mechanical rules, but have > > Let's drop the "in his head" requirement, because It's Just Not Possible. > >> no idea what he was doing. > > It would take a genius to figure out what he's doing. It would > be worse than an ant mapping New York. An idiot who can add and subtract and follow instructions could do more complex computations in which only these operations are required without understanding what it is he is doing. -- Stathis Papaioannou From kanzure at gmail.com Wed Dec 9 01:14:21 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 8 Dec 2009 19:14:21 -0600 Subject: [ExI] SKDB (apt-get for hardware) presentation from H+ Summit 2009 Message-ID: <55ad6af70912081714g2bc708afx2bd1f511a2909df1@mail.gmail.com> Hey all, I edited (split up) the videos and threw them up on youtube. You can see fenn and I talking about SKDB in a broader context (this is not the technical nitty-gritty). http://bit.ly/50Fi1g http://bit.ly/5jvyjG http://bit.ly/87ntrh slides: http://adl.serveftp.org/~bryan/presentations/hplus-summit-2009/hplus-summit-2009-how-to-make.pdf More details: http://openmanufacturing.org/ http://adl.serveftp.org/dokuwiki/skdb git clone http://adl.serveftp.org/skdb.git #hplusroadmap on irc.freenode.net There was also a brief mention of the open source hardware cooperative, and the feedback has been fantastic so far. Let's keep things rolling.. hope you enjoy the videos. :-) - Bryan http://heybryan.org/ 1 512 203 0507 From p0stfuturist at yahoo.com Wed Dec 9 01:26:23 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 8 Dec 2009 17:26:23 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <858729.23330.qm@web59912.mail.ac4.yahoo.com> "Truth"? now don't go using those arcane words on us! If only truth were objective. Sometimes it is appropriate to tell or even believe lies, but in general the truth is better. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Wed Dec 9 02:02:59 2009 From: max at maxmore.com (Max More) Date: Tue, 08 Dec 2009 20:02:59 -0600 Subject: [ExI] Wernicke's aphasia and the CRA Message-ID: <200912090203.nB9239uT013820@andromeda.ziaspace.com> >I came across it tonight - in the context of >time management! Heylighen, Francis and Vidal, >Cl?ment (2007) Getting Things Done: The Science >behind Stress-Free Productivity, >http://cogprints.org/5904/ is a nice little >overview of Allens GTD method for time >management, which they try to link with situated >and distributed cognition. 
Useful transhumanist reading. I like Allen's system too. I reviewed Getting Things Done: The Art of Stress-Free Productivity: http://www.manyworlds.com/exploreco.aspx?coid=CO67011150213 See also Allen's: Ready for Anything: 52 Productivity Principles for Work and Life http://www.manyworlds.com/exploreco.aspx?coid=CO1230415323614 Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From stathisp at gmail.com Wed Dec 9 04:02:32 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 9 Dec 2009 15:02:32 +1100 Subject: [ExI] Tolerance In-Reply-To: <858729.23330.qm@web59912.mail.ac4.yahoo.com> References: <858729.23330.qm@web59912.mail.ac4.yahoo.com> Message-ID: 2009/12/9 Post Futurist > > "Truth"? now don't go using those arcane words on us! > If only truth were objective. I'm sure you can think of plenty of examples where you would bet your life it was true or bet your life it was untrue, even though you can never be *absolutely* certain of any empirical fact. -- Stathis Papaioannou From p0stfuturist at yahoo.com Wed Dec 9 04:27:46 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 8 Dec 2009 20:27:46 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <416321.74195.qm@web59916.mail.ac4.yahoo.com> Naturally. But disingenuousness may be more social-cohesion promoting than we know. How long would a given couple stay together if they told each other the truth too often? people tell little 'white' lies all the time to smooth things over. But getting back to placebo: ?placebo may be more potent than we know. Sitting in church for two hours may offer as much (or more) health benefit as taking supplements & pharmaceuticals, or, say, listening to a two hour non-mystical lecture. I'm sure you can think of plenty of examples where you would bet your life it was true or bet your life it was untrue, even though you can never be *absolutely* certain of any empirical fact. -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Wed Dec 9 05:00:32 2009 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 8 Dec 2009 21:00:32 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <416321.74195.qm@web59916.mail.ac4.yahoo.com> Message-ID: <392539.30313.qm@web110410.mail.gq1.yahoo.com> --- On Tue, 12/8/09, Post Futurist wrote: > But dis ingenuousness may be more social-cohesion promoting than we know. How long would a given couple stay together if they told each other the truth too often? Well let's try to find out:) > people tell little 'white' lies all the time to smooth things over. there is a big difference between being polite and telling a lie. > But getting back to placebo: placebo may be more potent than we know. Sitting in church for two hours may offer as much (or more) health benefit as taking supplements & pharmaceuticals, or,say, listening to a two hour non-mystical lecture. Let's define placebo: a control group..damn that sounds pretty religious. >> I'm sure you can think of plenty of examples where you would bet your life it was true or bet your life it was untrue, even though you can never be *absolutely* certain of any empirical fact There have been moments. After careful calculations you can usually get the truth especially if you search for it:) Anna:) __________________________________________________________________ Yahoo! 
Canada Toolbar: Search from anywhere on the web, and bookmark your favourite sites. Download it now http://ca.toolbar.yahoo.com. From femmechakra at yahoo.ca Wed Dec 9 05:17:09 2009 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 8 Dec 2009 21:17:09 -0800 (PST) Subject: [ExI] Tolerance. In-Reply-To: <959388F1-BD3F-41C7-B4C2-EB5D15B10BED@bellsouth.net> Message-ID: <887413.26405.qm@web110404.mail.gq1.yahoo.com> --- On Tue, 12/8/09, John Clark wrote: > So do you think the word "stupid" should be removed from the English language? Actually I think "stupid" could at times be replaced by "ignorance". > And if you can't use that word in reference to religion when in the world can you use it? 2+2=5:) > We are not talking about some esoteric point in physics, we are talking about people who want to crash airliners into skyscrapers and the moderates who give them cover by demanding respect for a particular brand of mind cancer. I make no apology in calling that stupid as well as evil. There is a huge difference between stupid and evil. Stupid is as stupid does while evil lashes out to unwarranted events to make examples instead of giving alternative ideas. Just an idea:) Anna __________________________________________________________________ Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From spike66 at att.net Wed Dec 9 06:06:39 2009 From: spike66 at att.net (spike) Date: Tue, 8 Dec 2009 22:06:39 -0800 Subject: [ExI] pat condell's latest subtle rant In-Reply-To: <580930c20912081412l1c54e7b2mdf73536a533e215b@mail.gmail.com> References: <163726.13368.qm@web32003.mail.mud.yahoo.com> <580930c20912081412l1c54e7b2mdf73536a533e215b@mail.gmail.com> Message-ID: <1217FE34C4284AAEA6F1BFE768DACA2E@spike> > Subject: Re: [ExI] pat condell's latest subtle rant > > 2009/12/8 Ben Zaiboc : > > Hmm, that leaves precious few religions that do not. ? > Christianity certainly isn't one of them. ?Before you object, > consider that the only thing that restrains it is secular > law, whereas Islam has no such restraint. ?Go back a few > hundred years and find a religion that did not practice the > killing of unbelievers. ?Even Buddhism isn't spotless in this regard. There has really been only one general circumstance under which organized religions have supported and carried out the slaying of unbelievers. They have done this only when given the opportunity. Now we learn that the US is preventing the return of the hidden Madhi, and Ahmadinijad can prove it: http://www.foxnews.com/story/0,2933,579640,00.html?loomia_ow=t0:s0:a16:g4:r2 :c0.000000:b29258552:z10 The nerve of that country! Preventing the savior of mankind from returning! After reading all the goings on here about religion this and religion that, I am beginning to suspect it is one of you guys that is doing it. John, is it you? Or anyone here? Fess up, where are you hiding him? Hand him over forthwith, or NO VIRGINS for YOU! spike From estropico at gmail.com Wed Dec 9 09:04:56 2009 From: estropico at gmail.com (estropico) Date: Wed, 9 Dec 2009 09:04:56 +0000 Subject: [ExI] ExtroBritannia: The Way Ahead Message-ID: <4eaaa0d90912090104ua39ba48u5d28b2679329006a@mail.gmail.com> Venue: Room 538, Birkbeck College. Date: Saturday 19th December. Time: 1pm-3pm. PLEASE NOTE EARLIER START THAN FOR USUAL MEETINGS. Topic: Group discussion on Extrobritannia activities, 2009-2010. Attendees: The meeting is open to anyone interested in supporting Extrobritannia activities. 
Proposed agenda: 1.) Results of online survey. Discussion of points arising. 2.) Input to SWOT for Extrobritannia (list of Strengths, Weaknesses, Opportunities, and Threats) 3.) Proposal re membership scheme and group finances 4.) Plans for "Humanity+, UK 2010" 5.) Relations with other H+ organisations (world, EU...) 6.) Any proposals or reports on specific projects or activities (ideally circulated, or at least mentioned, in advance) 7.) (Optional) Retire to the nearby Marlborough Arms pub for refreshment and informal discussion. From jameschoate at austin.rr.com Wed Dec 9 15:08:52 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Wed, 9 Dec 2009 9:08:52 -0600 Subject: [ExI] SKDB (apt-get for hardware) presentation from H+ Summit 2009 In-Reply-To: <55ad6af70912081714g2bc708afx2bd1f511a2909df1@mail.gmail.com> Message-ID: <20091209150852.YKLK6.410576.root@hrndva-web28-z02> ---- Bryan Bishop wrote: > Hey all, > > I edited (split up) the videos and threw them up on youtube. You can > see fenn and I talking about SKDB in a broader context (this is not > the technical nitty-gritty). > > http://bit.ly/50Fi1g > http://bit.ly/5jvyjG > http://bit.ly/87ntrh These were not particularly enlightening as there was really not a lot that hasn't been hashed around since the late 70's (ever hear of Toffler?) and 80's. This is in many ways the same sort of stuff we were talking about at Discovery Hall and (for example) the first Cyberspace Conference that was held here in Austin in 1990 if memory serves. Free software didn't start with Richard and the FSF (which started in '84) it goes way back to DECUS and the original hackers at MIT. Then we talked about copyleft now we talk GPL. Same idea different coat. Finally, society isn't going digital. It's communications and storage infrastructure is. This is a subtle but major distinction to be made and understood. What this means is that the real value of material that is digitizable will become clear, it isn't worth much. Real economic value will come from the things that people need to interact with their environment. I have noticed a common thread in a lot of this transhumanism/free technology discussions over the last ten years, it's like anything before about mid-1980's doesn't exist. There seems to be a major disconnect between the younger generation of 'hackers' and the actual history of hacking technology. A recent example of this was the discussion on the Robot Group mailing list about the 8-cube several of the members made and then were looking for applications. Several of the comments made during that discussion referred back to material that was at best 80's and several important aspects from earlier times was unknown or seriously confused; hadn't heard of ONAG and there was some serious confusion about how the nomenclature of CAs came about. These cubes are also rather amusing as the common theme is they are somehow new and different, when in fact they were a pretty common project for hackers in the late 70's playing around with their 6502/6800/8080/z80 SBCs. There really needs to be more effort put into looking farther back as we're going to end up recreating the wheel, not only technologically but also philosophically. Just remember Santayana, Those who don't know history are doomed to repeat it. 
-- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From alfio.puglisi at gmail.com Wed Dec 9 16:10:17 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 9 Dec 2009 17:10:17 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <367275.93635.qm@web36502.mail.mud.yahoo.com> References: <367275.93635.qm@web36502.mail.mud.yahoo.com> Message-ID: <4902d9990912090810p28d0f6fkafa15286510f38c4@mail.gmail.com> On Tue, Dec 8, 2009 at 1:04 PM, Gordon Swobe wrote: > --- On Tue, 12/8/09, John Clark wrote: > > > But for some reason this mysterious phase change only > > happens to 3 pounds of grey goo in our head and never > > happens in his Chinese Room. He never explains why. > > He explains exactly why in his formal argument: > > Premise A1: Programs are formal (syntactic). > Premise A2: Minds have mental contents (semantics). > Premise A3: Syntax is neither constitutive of nor sufficient for semantics A question, how is semantics defined in this context? I have doubts that it can be defined narrowly enough to allow for semantics in human minds, but not in sufficiently complex computer programs. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 9 17:29:10 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 9 Dec 2009 12:29:10 -0500 Subject: [ExI] Tolerance. In-Reply-To: <887413.26405.qm@web110404.mail.gq1.yahoo.com> References: <887413.26405.qm@web110404.mail.gq1.yahoo.com> Message-ID: <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> On Dec 9, 2009, at Anna Taylor wrote: > Actually I think "stupid" could at times be replaced by "ignorance". And I think "stupid" is a perfectly respectable word that does an adequate job describing a certain type of brain function and I see no reason not to use it where appropriate. Such as, religion is stupid. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Dec 9 17:50:59 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 9 Dec 2009 18:50:59 +0100 Subject: [ExI] Tolerance. In-Reply-To: <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> References: <887413.26405.qm@web110404.mail.gq1.yahoo.com> <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> Message-ID: <580930c20912090950t37dbf0b0s55f4633eaa69296b@mail.gmail.com> 2009/12/9 John Clark : > And I think "stupid" is a perfectly respectable word that does an adequate > job describing a certain type of brain function and I see no reason not to > use it where appropriate. I think stupid may plausibly mean, depending the circumstances, both somebody believing in naive and crazy ideas *and* the contrary of astute and clever. Personally, I am inclined to consider the supporters of the religions of the Book as "stupid" in the first sense, but not necessarily nor always in the second. Far from it... -- Stefano Vaj From jonkc at bellsouth.net Wed Dec 9 18:17:53 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 9 Dec 2009 13:17:53 -0500 Subject: [ExI] Tolerance. 
In-Reply-To: <580930c20912090950t37dbf0b0s55f4633eaa69296b@mail.gmail.com> References: <887413.26405.qm@web110404.mail.gq1.yahoo.com> <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> <580930c20912090950t37dbf0b0s55f4633eaa69296b@mail.gmail.com> Message-ID: <190ADF20-92BF-42D5-BB67-1BDB55CAA150@bellsouth.net> On Dec 9, 2009, Stefano Vaj wrote: > I think stupid may plausibly mean, depending the circumstances, both > somebody believing in naive and crazy ideas *and* the contrary of > astute and clever. Crazy, not just odd but crazy ideas are pretty close to the contrary of astute and clever. And I think one can decide to be stupid, as in a surgeon who doesn't believe in the cornerstone of the biological sciences, Evolution; or a structural engineer who doesn't believe in Newton's theory of gravitation. Actually such things are possible provided you put your ideas in little airtight compartments and refuse to let them interact, but that's just a longwinded way of saying the word stupid. > I am inclined to consider the supporters of the religions > of the Book as "stupid" in the first sense, but not necessarily nor > always in the second. Far from it... Oh come now Stefano, we both know it's really not that far. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Wed Dec 9 20:23:50 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 9 Dec 2009 12:23:50 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <663216.30606.qm@web36503.mail.mud.yahoo.com> --- On Tue, 12/8/09, Stathis Papaioannou wrote: > The child learns to make a particular noise when it is > hungry and that *becomes* semantics. The syntax is more > difficult and comes later. No matter which comes first, semantics (broadly defined) involves conscious awareness of some object or idea, i.e., *intentionality*. Here we have a good working definition of intentionality: "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs." http://plato.stanford.edu/entries/intentionality/ I once tried to deny the existence of intentionality. I found the idea of the non-existence of intentionality pretty difficult to hold in mind, because to hold anything in mind is to have intentionality. And Searle says this beast called intentionality cannot live inside S/H systems. That's what his Chinese Room Argument is all about. > [Searle] has not addressed David Chalmers' argument in > his 1995 paper (http://consc.net/papers/qualia.html) showing that IF > it is possible to replicate the behaviour of neurons with > electronic replacements THEN any subjective experiences associated > with the original neurons will also be replicated. IF pigs had wings THEN pigs could fly, but this does not refute the arguments of those who don't believe pigs fly! :-) Seriously, as I've stated elsewhere, contrary to popular opinion Searle does not dismiss the possibility of strong AI. He argues this way: 1) Formal programs running on hardware cannot have semantics. 2) Because human brains have semantics, human brains must do something besides run formal programs. They must have some causal powers that science has yet to understand. Now then, IF we first come to understand those causal powers of brains and IF we then find a way to duplicate those powers in something other than brains, THEN we will create strong AI. On that day, pigs will fly.
-gts From scerir at libero.it Wed Dec 9 20:42:15 2009 From: scerir at libero.it (scerir) Date: Wed, 9 Dec 2009 21:42:15 +0100 (CET) Subject: [ExI] russian fireworks Message-ID: <21803962.1966481260391335455.JavaMail.root@wmail35> spiral blue light display hovers above Norway http://www.dailymail.co.uk/news/worldnews/article-1234430/Mystery-spiral-blue- light-display-hovers-Norway.html but another strange light appeared in connection with a Russian rocket flight ... http://www.youtube.com/watch?v=pV08q4SCaBQ From gts_2000 at yahoo.com Wed Dec 9 21:00:41 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 9 Dec 2009 13:00:41 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4902d9990912090810p28d0f6fkafa15286510f38c4@mail.gmail.com> Message-ID: <523297.37972.qm@web36501.mail.mud.yahoo.com> --- On Wed, 12/9/09, Alfio Puglisi wrote: >> Premise A1: Programs are formal (syntactic). >> Premise A2: Minds have mental contents (semantics). >> Premise A3: Syntax is neither constitutive of nor >> sufficient for semantics > A question, how is semantics defined in this context? By "having semantics" Searle means not only the ability to have conscious understanding of the meanings of words or symbols (as in the meanings of his Chinese symbols) but also the ability to have any conscious awareness of any thing or idea, real or imaginary. In this sense "semantics" equals "mental contents". -gts From stefano.vaj at gmail.com Wed Dec 9 21:10:29 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 9 Dec 2009 22:10:29 +0100 Subject: [ExI] Tolerance. In-Reply-To: <190ADF20-92BF-42D5-BB67-1BDB55CAA150@bellsouth.net> References: <887413.26405.qm@web110404.mail.gq1.yahoo.com> <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> <580930c20912090950t37dbf0b0s55f4633eaa69296b@mail.gmail.com> <190ADF20-92BF-42D5-BB67-1BDB55CAA150@bellsouth.net> Message-ID: <580930c20912091310x12bde748w9ba115c594120e1d@mail.gmail.com> 2009/12/9 John Clark : > On Dec 9, 2009, ?Stefano Vaj wrote: > I think stupid may plausibly mean, depending the circumstances, both > somebody believing in naive and crazy ideas *and* the contrary of > astute and clever. > > Crazy, not just odd but crazy ideas are pretty close to?the contrary > of?astute and clever. I think you are too optimistic about that. Many crazy, delirious and ultimately stupid ideas have found exceptionally astute and eloquent and devious and masterly and effective supporters in history. Think of the Jesuits' (in)famous tradition. Were it not the case, we would have rid ourselves of monotheism, or for that matter of bioluddites of all persuasions, a long time ago... :-) -- Stefano Vaj From pharos at gmail.com Wed Dec 9 22:23:37 2009 From: pharos at gmail.com (BillK) Date: Wed, 9 Dec 2009 22:23:37 +0000 Subject: [ExI] Tolerance In-Reply-To: <4B1E0E3E.5010708@rawbw.com> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> Message-ID: On 12/8/09, Lee Corbin wrote: > But the strategy of alienating religious people yet further > isn't going to help, except in ways I would not condone. > They shouldn't be given any reason whatsoever to think > that life under the atheists has been as bad as (at many > times) life for us under the religious often was. > > Alas, this is surely all wishful thinking on my part. 
> As soon as we are in the heavy majority, kids will > hear in school from their PC teachers that, so far > as religion goes, "there are the brights who don't > believe, and then, there are the others, less bright, > who do, and sadly some of you in this very class come > from disadvantaged homes..." > > > > obviously I am not for declaring an intellectual war or anything. But I do > think that atheists need to not be wishy-washy and somehow give the > religious the idea we think their ideas about the nature of reality have > merit. > > > > Again, I agree. > This new article seems relevant......... Dawkins offers delusion removal service for bereaved and dying "Face Reality Now", a new "delusion removal" service set up by Richard Dawkins, offers the soon to be bereaved an opportunity to confront their dying friends and relations with their mortality. "It's a scandal that people continue to die unaware that really is it," explained Dr Dawkins. "Now, for a small fee, my service will send trained atheists to reassure the dying that they can forget about any after life nonsense." The service also offers a comprehensive after death option in which the funeral cortege is followed by men with loudhailers shouting, "Stop comforting yourself with mumbo-jumbo. For God's sake, he's dead - deal with it!" But the Dawkins service has run into difficulties where some friends of the dying are believers, but others aren't. "At a recent deathbed scene," admitted Dr Dawkins, "my atheists turned up at the same time as a Catholic priest. Things got a bit out of hand with a fight breaking out around the dying man. Still, I'm pleased to say 'Face Reality Now' won." ------------- BillK (By the way, this is a satirical article) :) From stathisp at gmail.com Wed Dec 9 23:17:14 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 10 Dec 2009 10:17:14 +1100 Subject: Re: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <663216.30606.qm@web36503.mail.mud.yahoo.com> References: <663216.30606.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/10 Gordon Swobe : > --- On Tue, 12/8/09, Stathis Papaioannou wrote: > >> The child learns to make a particular noise when it is >> hungry and that *becomes* semantics. The syntax is more >> difficult and comes later. > > No matter which comes first, semantics (broadly defined) involves conscious awareness of some object or idea, i.e., *intentionality*. > > Here we have a good working definition of intentionality: > > "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs." > > http://plato.stanford.edu/entries/intentionality/ > > I once tried to deny the existence of intentionality. I found the idea of the non-existence of intentionality pretty difficult to hold in mind, because to hold anything in mind is to have intentionality. > > And Searle says this beast called intentionality cannot live inside S/H systems. That's what his Chinese Room Argument is all about. And the counterargument is that of course the Chinese Room would have semantics and intentionality and all the other good things that the brain has. Only if consciousness were a side-effect of intelligent behaviour would it have evolved.
>> [Sear;e] has not addressed David Chalmer's argument in >> his 1995 paper (http://consc.net/papers/qualia.html) showing that IF >> it is possible to replicate the behaviour of neurons with >> electronic replacements THEN any subjective experiences associated >> with the original neurons will also be replicated. > > IF pigs had wings THEN pigs could fly, but this does not refute the arguments of those who don't believe pigs fly! :-) Searle agrees that it would be possible to replicate the behaviour of neurons, but he thinks the resulting brain would be a zombie. This is what Chalmer's paper shows to be wrong. It's really worthwhile reading the cited paper. > Seriously, as I've stated elsewhere, contrary to popular opinion Searle does not dismiss the possibility of strong AI. > > He argues this way: > > 1) Formal programs running on hardware cannot have semantics. > > 2) Because human brains have semantics, human brains must do something besides run formal programs. They must have some causal powers that science has yet to understand. > > Now then, IF we first come to understand those causal powers of brains and IF we then find a way to duplicate those powers in something other than brains, THEN we will create strong AI. On that day, pigs will fly. IF we simulate the externally observable behaviour of brains THEN we will create strong AI. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Dec 9 23:34:46 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 9 Dec 2009 15:34:46 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <48F499D6-0600-4160-AD1E-F9BB44CFB4B4@bellsouth.net> Message-ID: <803291.69674.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/8/09, John Clark wrote: >> A1: Programs are formal (syntactic). >> A2: Minds have mental contents (semantics).? >> A3: Syntax is neither constitutive of nor >> sufficient for semantics.? >> Ergo,? >> C1: Programs are neither constitutive of nor >> sufficient for minds. > > So he assumes that programs can't have minds and > then triumphantly concludes that programs can't have > minds. I could hardly contain my excitement when I read your words. I thought, "Hey, maybe John has handed me a weapon to use in my debate with that Searlian philosophile over on that other discussion list! I wonder what that Searlian will say when he learns that his favorite professor made such an egregious error!" I looked at the argument hoping to see what you see. I didn't see it, so leaned closer to my monitor, rubbed my chin and squinted my eyes. I still didn't see it. Only A1 refers to programs, and I see nothing there about minds. In fact we could restate that premise without changing its meaning: "Programs are formal (syntactic) and may or may not be constitutive or sufficient for minds." -gts From olga.bourlin at gmail.com Wed Dec 9 23:28:14 2009 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Wed, 9 Dec 2009 15:28:14 -0800 Subject: [ExI] Tolerance In-Reply-To: References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> Message-ID: Yep, satire. But - and this is no satire - Dawkins is a wuss. ;)) He aligns himself with ?cultural Christians,? and even goes along with singing those treacly, annoying $$$mas carols. Gads, there?s just no accounting for some people?s taste. http://richarddawkins.net/articles/2034 *Pa Ra Pa* Pum Pum! 
Olga On Wed, Dec 9, 2009 at 2:23 PM, BillK wrote: > This new article seems relevant......... > < http://www.newsbiscuit.com/2009/12/09/dawkins-offers-delusion-removal-service-for-bereaved-and-dying/ > > Dawkins offers delusion removal service for bereaved and dying > ------------- > BillK > (By the way, this is a satirical article) :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 00:23:21 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 10:53:21 +1030 Subject: Re: [ExI] Tolerance In-Reply-To: References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> Message-ID: <710b78fc0912091623u3241e9a8jbaebd5a36f20de6e@mail.gmail.com> 2009/12/10 BillK : > > This new article seems relevant......... > > Dawkins offers delusion removal service for bereaved and dying Nice :-) I remember hearing someone speaking on radio I think, who was a palliative care person, about the issue of deathbed conversions.
Things got a bit out of hand with a fight breaking out around the dying man. Still, I?m pleased to say ?Face Reality Now? won.? ------------- BillK (By the way, this is a satirical article) ?:) _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -----Inline Attachment Follows----- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Thu Dec 10 00:30:09 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 16:30:09 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <639565.13465.qm@web59910.mail.ac4.yahoo.com> Polite is another way of saying diplomatic; 'diplomatic' means evasive; obfuscation is the diplomats tool. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 00:32:50 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 11:02:50 +1030 Subject: [ExI] Tolerance In-Reply-To: References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> Message-ID: <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> 2009/12/10 Olga Bourlin : > Yep, satire. > > But - and this is no satire -? Dawkins is a wuss. ;)) > > > > He aligns himself with ?cultural Christians,? and even goes along with > singing those treacly, annoying $$$mas carols.? Gads, there?s just no > accounting for some people?s taste. Me too, I guess! I think it's important to separate religion into culture + metaphysical beliefs; they're not as tightly bound together as it might seem. The metaphysical beliefs are the crazy stuff, culture is the tradition+ritual+values which live separately from the crazy beliefs, and are what underpins community. The conflating of these two is one of the reasons so many religious people are suspicious of atheists; they think we want to smash their traditions & communities. I don't think that's the case at all. I'm happy to do christmas & easter and all that stuff (although I'm not cool with consumerism, a separate issue). Just don't expect me to *believe* this stuff; it's myth, culture, stories. Also, once you separate these things out, you can also excise the odious stuff that some traditions include (eg: female genital mutilation), because you can no longer have special pleading for these things based on metaphysics; it's got to stand alone as a defensible human behaviour. Walking around in funny hats, it's all good. Mutilating people, not so much. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From spike66 at att.net Thu Dec 10 00:21:22 2009 From: spike66 at att.net (spike) Date: Wed, 9 Dec 2009 16:21:22 -0800 Subject: [ExI] Tolerance In-Reply-To: References: <4B1DB07C.1080802@rawbw.com><239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM><4B1E0E3E.5010708@rawbw.com> Message-ID: <24CB2747AB3F4E8EB01B045ACDF7A968@spike> ...On Behalf Of Olga Bourlin ... >He aligns himself with "cultural Christians," and even goes along with singing those treacly, annoying $$$mas carols. Gads, there's just no accounting for some people's taste. http://richarddawkins.net/articles/2034 >Pa Ra Pa Pum Pum! 
>Olga Does not that particular one drive ya nuts? Every year, rum pa pum pum, about a jillion times, until one is so mind-raped one cannot get the damn tune to go away. Every pop star has to have his or her own version of it, perhaps because they like the notion of a performance being a gift. But it isn't even accurate: drums don't go rum pa pum pum, they go rat a tat tat. Or if it's a rimshot, more like bluck a bluck bluck. Perhaps they were concerned with how to make that one rhyme. spike From spike66 at att.net Thu Dec 10 01:17:07 2009 From: spike66 at att.net (spike) Date: Wed, 9 Dec 2009 17:17:07 -0800 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912091623u3241e9a8jbaebd5a36f20de6e@mail.gmail.com> References: <4B1DB07C.1080802@rawbw.com><239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM><4B1E0E3E.5010708@rawbw.com> <710b78fc0912091623u3241e9a8jbaebd5a36f20de6e@mail.gmail.com> Message-ID: <1947BA9A1CB740F0A1EFEB5C458BD549@spike> ...On Behalf Of Emlyn ... > > -service-for-bereaved-and-dying/> > > > > Dawkins offers delusion removal service for bereaved and dying > > > > 'Face Reality Now', a new 'delusion removal' service set up > by Richard > > Dawkins, offers the soon to be bereaved an opportunity to confront > > their dying friends and relations with their mortality. > > Nice :-) Emlyn I think the whole thing was a parody, sort of like the Onion. > ...deathbed > conversions. He said that it happens, but that it was far > more common for people to lose their faith at the end... Emlyn My personal observations go as deep as exactly one data point, and it was a late realization that this particular fundamentalist religion was wrong. That particular conversion was most dramatic however. WW1 veteran gunner on a US Navy submarine, volunteer. He got religion while under the sea (Seventh Day Adventist) and decided he couldn't fire the guns if ordered to do so. So he realized he must turn himself in to his commander and end up in the brig, followed by a dishonorable discharge. But before he could do so, that mercifully short war came to an end. 60 years went by. He was a church leader. One day he was up front, made a most remarkable speech. Jesus was not coming. We had been wrong. He died five weeks later. spike From thespike at satx.rr.com Thu Dec 10 02:29:27 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 09 Dec 2009 20:29:27 -0600 Subject: [ExI] Tolerance In-Reply-To: <1947BA9A1CB740F0A1EFEB5C458BD549@spike> References: <4B1DB07C.1080802@rawbw.com><239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM><4B1E0E3E.5010708@rawbw.com> <710b78fc0912091623u3241e9a8jbaebd5a36f20de6e@mail.gmail.com> <1947BA9A1CB740F0A1EFEB5C458BD549@spike> Message-ID: <4B205D07.60301@satx.rr.com> On 12/9/2009 7:17 PM, spike wrote: > He was a church leader. One day he was up front, made a > most remarkable speech. Jesus was not coming. We had been wrong. He died > five weeks later. That'll teach the bastard. From p0stfuturist at yahoo.com Thu Dec 10 02:35:13 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 18:35:13 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> Message-ID: <138413.75832.qm@web59910.mail.ac4.yahoo.com> Surely you don't; however many men, very many, are predators and do want to, if not 'smash' their traditions & communities, then take advantage of them in some way. If you go by mens' behavior rather than what they say then you see they are capable of more than you would think at first. 
One might say though there isn't much of an interest in smashing their communities, there might be an interest in corrupting their traditions. they think we want to smash their traditions & communities. I don't think that's the case at all. I'm happy to do christmas & easter and all that stuff (although I'm not cool with consumerism, a separate issue). Just don't expect me to *believe* this stuff; it's myth, culture, stories. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 02:59:59 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 13:29:59 +1030 Subject: [ExI] Tolerance In-Reply-To: <138413.75832.qm@web59910.mail.ac4.yahoo.com> References: <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> <138413.75832.qm@web59910.mail.ac4.yahoo.com> Message-ID: <710b78fc0912091859h64fe0c6dg3394ccdfefad7cf0@mail.gmail.com> These sorts of people are far more likely to turn up in religious apparel, literal or figurative, imho. Emlyn 2009/12/10 Post Futurist > Surely you don't; however many men, very many, are predators and do want > to, if not 'smash' their traditions & communities, then take advantage of > them in some way. If you go by mens' behavior rather than what they say then > you see they are capable of more than you would think at first. One might > say though there isn't much of an interest in smashing their communities, > there might be an interest in corrupting their traditions. > > > they think we want to smash their > traditions & communities. I don't think that's the case at all. I'm > happy to do christmas & easter and all that stuff (although I'm not > cool with consumerism, a separate issue). Just don't expect me to > *believe* this stuff; it's myth, culture, stories. > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanite1018 at gmail.com Thu Dec 10 03:08:08 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 9 Dec 2009 22:08:08 -0500 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> Message-ID: <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> >> He aligns himself with ?cultural Christians,? and even goes along >> with >> singing those treacly, annoying $$$mas carols. Gads, there?s just no >> accounting for some people?s taste. > > Me too, I guess! > > I think it's important to separate religion into culture + > metaphysical beliefs; they're not as tightly bound together as it > might seem. The metaphysical beliefs are the crazy stuff, culture is > the tradition+ritual+values which live separately from the crazy > beliefs, and are what underpins community.... I'm happy to do > christmas & easter and all that stuff (although I'm not cool with > consumerism, a separate issue). Just don't expect me to *believe* > this stuff; it's myth, culture, stories... > Emlyn I don't really like Easter, because I see no real point in the tradition in and of itself (I don't want to celebrate fertility or spring or the harvest, I enjoy fall and winter far more, myself). But I do celebrate Christmas. 
Generally don't do most of the religious carols, but some, depending on what they say and so forth (I generally just equate God with goodness and that's basically the same). I celebrate Christmas because it serves as a special time to be with those you care about and show you care through the exchange of gifts. I happen to think its wonderful that it is consumeristic, because, provided you don't pile up debt in order to pay for your gifts, it is a display of how productive you've been. Its one big celebration of productivity and the trading of values (caring, gifts, etc.)! Just about as close to perfect a holiday as you can get, in my opinion. Joshua Job nanite1018 at gmail.com From p0stfuturist at yahoo.com Thu Dec 10 03:14:54 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 19:14:54 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912091859h64fe0c6dg3394ccdfefad7cf0@mail.gmail.com> Message-ID: <972135.83783.qm@web59901.mail.ac4.yahoo.com> You might be correct. But taking 'predator' in the general sense, when you lock your door at night you are not trying to protect yourself from Christians, are you? I find Christians to be silly at worst, harmless at best. These sorts of people are far more likely to turn up in religious apparel, literal or figurative, imho. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 03:21:17 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 13:51:17 +1030 Subject: [ExI] Tolerance In-Reply-To: <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> Message-ID: <710b78fc0912091921w40f49331wd9350f94f1c342a1@mail.gmail.com> 2009/12/10 JOSHUA JOB : >>> He aligns himself with ?cultural Christians,? and even goes along with >>> singing those treacly, annoying $$$mas carols. ?Gads, there?s just no >>> accounting for some people?s taste. >> >> Me too, I guess! >> >> I think it's important to separate religion into culture + >> metaphysical beliefs; they're not as tightly bound together as it >> might seem. The metaphysical beliefs are the crazy stuff, culture is >> the tradition+ritual+values which live separately from the crazy >> beliefs, and are what underpins community.... I'm happy to do christmas & >> easter and all that stuff (although I'm not cool with consumerism, a >> separate issue). Just don't expect me to *believe* this stuff; it's myth, >> culture, stories... >> Emlyn > > I don't really like Easter, because I see no real point in the tradition in > and of itself (I don't want to celebrate fertility or spring or the harvest, > I enjoy fall and winter far more, myself). I'm a fan of chocolate eggs. > But I do celebrate Christmas. > Generally don't do most of the religious carols, but some, depending on what > they say and so forth (I generally just equate God with goodness and that's > basically the same). Yes, same here, although there are some particularly God heavy, onerous carols which drive me up the wall. > I celebrate Christmas because it serves as a special time to be with those > you care about and show you care through the exchange of gifts. Exactly. There's the cultural aspect right there. 
> I happen to > think its wonderful that it is consumeristic, because, provided you don't > pile up debt in order to pay for your gifts, it is a display of how > productive you've been. LOL! I'll have to disagree with you here. Consumerism is almost by definition entirely detached from productivity. Really productive people don't tend to me all that consumeristic in my experience, and productivity and money definitely do not share a 1:1 relationship. But, you know, if you want to show off your bank balance, you could just email people a copy of your bank statement. > Its one big celebration of productivity and the > trading of values (caring, gifts, etc.)! Just about as close to perfect a > holiday as you can get, in my opinion. Shopping isn't productivity. Caring is something that doesn't have much of a monetary component. Gifts are nice though. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From p0stfuturist at yahoo.com Thu Dec 10 02:55:23 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 18:55:23 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <138413.75832.qm@web59910.mail.ac4.yahoo.com> Message-ID: <248100.62284.qm@web59901.mail.ac4.yahoo.com> Christmas appears silly, even infantile; worshiping a baby in a manger. But at one time it was more tradition, now it is more escapism than tradition. To digress a bit, the greatest irony to me reviewing the last 40 years is how commercialized the counterculture has become. The whole idea was that they were supposed to offer an alternative to hyper-commercialized culture-- instead they ended up being even more tasteless, yet just as commercial, as the world they rejected. So, anyway, Christmas is as harmless as you can get. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 03:24:05 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 13:54:05 +1030 Subject: [ExI] Tolerance In-Reply-To: <972135.83783.qm@web59901.mail.ac4.yahoo.com> References: <710b78fc0912091859h64fe0c6dg3394ccdfefad7cf0@mail.gmail.com> <972135.83783.qm@web59901.mail.ac4.yahoo.com> Message-ID: <710b78fc0912091924w5fdabcb1xb42967aa0e1dc0d7@mail.gmail.com> 2009/12/10 Post Futurist > You might be correct. But taking 'predator' in the general sense, when you > lock your door at night you are not trying to protect yourself from > Christians, are you? > Generally no, though I might be if I were a muslim in the wrong part of the world. But, equally I don't lock my door to protect myself from atheists! > I find Christians to be silly at worst, harmless at best. That's because nobody expects the Spanish Inquisition! http://en.wikipedia.org/wiki/Spanish_Inquisition -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Thu Dec 10 03:46:37 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 19:46:37 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <638739.78756.qm@web59908.mail.ac4.yahoo.com> Right, no reason to.? But somehow I'm sure more burglars are atheists than xians. A burglar might call himself an xian, but that's like Leon Kass calling himself a transhumanist. 
And since the Inquisition ended about 175 years ago, I'll sleep better-- with the d But, equally I don't lock my door to protect myself from atheists! ? I find Christians to be silly at worst, harmless at best. That's because nobody expects the Spanish Inquisition!. -------------- next part -------------- An HTML attachment was scrubbed... URL: From olga.bourlin at gmail.com Thu Dec 10 01:35:24 2009 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Wed, 9 Dec 2009 17:35:24 -0800 Subject: [ExI] Tolerance In-Reply-To: <830069.68211.qm@web59916.mail.ac4.yahoo.com> References: <830069.68211.qm@web59916.mail.ac4.yahoo.com> Message-ID: Post Futurist, at least porn is not divisive among cultural/religious lines. That in itself is an immediate improvement over $$$mas. Of course, there are societies which are negative and repressive about sex in general, and that's not a good thing. It would be better for them if they had no censorship regarding porn and sexually explicit materials. What's obscene to me are women who are required to wear tents and veils. In many cases, porn has practical purposes - but even if it didn't, I would much prefer to live in a society that did not have censorship regarding sexually explicit materials. 2009/12/9 Post Futurist > Christmas *is* quite silly, but most of its effluvia has more taste > than, say, porn; which is no longer shocking for its sexuality, but for its > general unimaginative tastelessness. As rote, stylized as Christmas. One > silliness is traded for another. > Secular culture isn't much of an improvement over the religious. People > want to be young at heart, however they end up being silly in the head > instead. > > > > --- On *Wed, 12/9/09, Olga Bourlin * wrote: > > > From: Olga Bourlin > Subject: Re: [ExI] Tolerance > To: "ExI chat list" > Date: Wednesday, December 9, 2009, 6:28 PM > > Yep, satire. > > > But - and this is no satire - Dawkins is a wuss. ;)) > > > > He aligns himself with ?cultural Christians,? and even goes along with > singing those treacly, annoying $$$mas carols. Gads, there?s just no > accounting for some people?s taste. > > > > http://richarddawkins.net/articles/2034 > > > *Pa Ra Pa* Pum Pum! > > Olga > > > > On Wed, Dec 9, 2009 at 2:23 PM, BillK > > wrote: > >> On 12/8/09, Lee Corbin wrote: >> >> > But the strategy of alienating religious people yet further >> > isn't going to help, except in ways I would not condone. >> > They shouldn't be given any reason whatsoever to think >> > that life under the atheists has been as bad as (at many >> > times) life for us under the religious often was. >> > >> > Alas, this is surely all wishful thinking on my part. >> > As soon as we are in the heavy majority, kids will >> > hear in school from their PC teachers that, so far >> > as religion goes, "there are the brights who don't >> > believe, and then, there are the others, less bright, >> > who do, and sadly some of you in this very class come >> > from disadvantaged homes..." >> > >> > >> > > obviously I am not for declaring an intellectual war or anything. But >> I do >> > think that atheists need to not be wishy-washy and somehow give the >> > religious the idea we think their ideas about the nature of reality have >> > merit. >> > > >> > >> > Again, I agree. >> > >> >> This new article seems relevant......... 
>> >> < >> http://www.newsbiscuit.com/2009/12/09/dawkins-offers-delusion-removal-service-for-bereaved-and-dying/ >> > >> >> Dawkins offers delusion removal service for bereaved and dying >> >> ?Face Reality Now?, a new ?delusion removal? service set up by Richard >> Dawkins, offers the soon to be bereaved an opportunity to confront >> their dying friends and relations with their mortality. >> >> ?It?s a scandal that people continue to die unaware that really is >> it,? explained Dr Dawkins. ?Now, for a small fee, my service will send >> trained atheists to reassure the dying that they can forget about any >> after life nonsense.? >> >> The service also offers a comprehensive after death option in which >> the funeral cortege is followed by men with loudhailers shouting, >> ?Stop comforting yourself with mumbo-jumbo. For God?s sake, he?s dead >> ? deal with it!? >> >> But the Dawkins service has run into difficulties where some friends >> of the dying are believers, but others aren?t. ?At a recent deathbed >> scene,? admitted Dr Dawkins, ?my atheists turned up at the same time >> as a Catholic priest. Things got a bit out of hand with a fight >> breaking out around the dying man. Still, I?m pleased to say ?Face >> Reality Now? won.? >> ------------- >> >> >> BillK >> (By the way, this is a satirical article) :) >> _______________________________________________ >> >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > -----Inline Attachment Follows----- > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Dec 10 04:14:35 2009 From: spike66 at att.net (spike) Date: Wed, 9 Dec 2009 20:14:35 -0800 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912091921w40f49331wd9350f94f1c342a1@mail.gmail.com> References: <4B1DB07C.1080802@rawbw.com><239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM><4B1E0E3E.5010708@rawbw.com><710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com><0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> <710b78fc0912091921w40f49331wd9350f94f1c342a1@mail.gmail.com> Message-ID: <13DE9FAAF5FD4DACAAB5C1F2FE9A8EC4@spike> > ...On Behalf Of Emlyn > ... > But, you know, if you want to show off your bank balance, you > could just email people a copy of your bank statement... Oh MAN why didn't we think of that sooner! That is a GREAT idea! I agree with your comments in this post about the disconnect between consumer spending and productivity. > > Shopping isn't productivity. Caring is something that doesn't > have much of a monetary component. Gifts are nice though. > > -- > Emlyn My wife's brothers and their wives started a little birthday tradition a bunch of years ago where everyone was getting the number of dollars to match the number of their birthday. The problem is that the youngest was continually on the losing end of that deal. Somewhere along the line we started failing to cash the checks. So these never-cashed birthday checks kited around in the mail for several years, but that made everyone's bank statements mismatch, so then someone came up with the idea of using Monopoly money. 
The play money has been crossing the country for about the past ten years or so. I want to now change that to using Zimbabwe billion dollar bills in the age-appropriate quantities. Lotsa fun, very little actual investment. spike From olga.bourlin at gmail.com Thu Dec 10 05:16:02 2009 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Wed, 9 Dec 2009 21:16:02 -0800 Subject: [ExI] Tolerance In-Reply-To: <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> Message-ID: On Wed, Dec 9, 2009 at 7:08 PM, JOSHUA JOB wrote: > But I do celebrate Christmas. Generally don't do most of the religious carols, but some, depending on what they say and so forth (I generally just equate God with goodness and that's basically the same). Of course you do. Because of the women, or because of the children, or because - even if you may have left your religious moorings - you still want to keep celebrating something you (I am guessing) were brought up to do. The way Muslims celebrate Ramadan. The way Jews celebrate Hanukkah. > I celebrate Christmas because it serves as a special time to be with those you care about and show you care through the exchange of gifts. And why can't you do this anytime you want? (Who's in charge here? You? Or Stepford Village?) >Just about as close to perfect a holiday as you can get, in my opinion. Perfect for you, maybe ... but, during the holly jolly Christian Heat Season, at the cost of excluding some other people. My last husband was culturally Jewish. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanite1018 at gmail.com Thu Dec 10 05:28:33 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Thu, 10 Dec 2009 00:28:33 -0500 Subject: [ExI] Tolerance In-Reply-To: References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> Message-ID: <9BB1F07D-EAFE-43B8-91D0-A466F36A2715@GMAIL.COM> On Dec 10, 2009, at 12:16 AM, Olga Bourlin wrote: > > I celebrate Christmas because it serves as a special time to be > with those you care about and show you care through the exchange of > gifts. > > And why can't you do this anytime you want? (Who's in charge here? > You? Or Stepford Village?) Well, having a culturally defined time of year to do this seems good. It's more fun if it's a holiday than if it's just random (condensing it seems to increase enjoyment). Of course you can do this any time, but you seem to be against the notion of holidays, period. Which seems silly. Celebrations are important, even if it isn't commemorating an event so much as an idea. > Perfect for you, maybe ... but, during the holly jolly Christian > Heat Season, at the cost of excluding some other people. > > My last husband was culturally Jewish.
In the early 1980s, I > remember how his Nieces - young girls of 7 or 8 or 9 at the time - > were asked by a couple of their Playmates: "Have you gotten your > Christmas tree yet?" > > Nieces, being Jewish, said, "No - we don't celebrate Christmas." > > The next day Nieces were told by their hitherto Playmates: "We > can't play with you anymore." > > Priceless. Well, what I am talking about is making Christmas a secular holiday, with no religious meaning whatsoever. I choose to do it on the 25th because that's when everyone else does it, and it is simply easier to do it then (it's the same reason Christmas is on the 25th in the first place-- it was the same time as an earlier pagan holiday). You don't have to call it Christmas, but it's the same sort of thing. My celebration of Christmas is totally divorced from religion. I don't praise Jesus, or talk about the manger thing, or any of that. I simply celebrate the people I value and all the values I've produced over the year, which allows me to share that part of myself (my property is a part of myself, since I devoted part of my life to it) with them as a token of my appreciation of them. It would be a lot harder to get people to be atheists if it meant they aren't allowed to celebrate anything around this time of year because it happens to be the same time of year as mainstream religious celebrations. That strategy is almost certainly going to cut off huge portions of the population, and set atheists apart as these weird, anti-happiness freaks who piss all over everyone's fun (actual fun, as opposed to their murder-people-and-destroy-lives sort of fun). Joshua Job nanite1018 at gmail.com From p0stfuturist at yahoo.com Thu Dec 10 05:04:26 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 9 Dec 2009 21:04:26 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <466950.41860.qm@web59909.mail.ac4.yahoo.com> No problem. You are preaching to the choir. But censorship on the web, say? Where, save for censorship directed at child porn? I'm not defending fundamentalists at all. But let's glance at the world: is China, with its 1.3 or so billion, ruled by religious fundamentalists? Russia? India? Today, fundamentalism might be a secondary or tertiary threat, but only because fundamentalists are on the defensive. And they aren't too bright, most of them; they are basically rubes worried about their families. In their position, I would worry too. at least porn is not divisive among cultural/religious lines. That in itself is an immediate improvement over $$$mas. Of course, there are societies which are negative and repressive about sex in general, and that's not a good thing. It would be better for them if they had no censorship regarding porn and sexually explicit materials. What's obscene to me are women who are required to wear tents and veils. In many cases, porn has practical purposes - but even if it didn't, I would much prefer to live in a society that did not have censorship regarding sexually explicit materials. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 06:04:05 2009 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 10 Dec 2009 16:34:05 +1030 Subject: [ExI] Tolerance In-Reply-To: <638739.78756.qm@web59908.mail.ac4.yahoo.com> References: <638739.78756.qm@web59908.mail.ac4.yahoo.com> Message-ID: <710b78fc0912092204j682f8b03r4b31ef3cdc8ec8ff@mail.gmail.com> 2009/12/10 Post Futurist > Right, no reason to. But somehow I'm sure more burglars are atheists than > xians.
A burglar might call himself an xian, but that's like Leon Kass > calling himself a transhumanist. > Atheism is a considered position. People with dodgy morals are more likely to be "don't know, haven't thought about it" in my opinion. In some cases they might mistakenly report that as Atheist, possibly. You clearly have a belief that atheists are less moral in some way. Can you explain that position in more detail? -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site -------------- next part -------------- An HTML attachment was scrubbed... URL: From eschatoon at gmail.com Thu Dec 10 06:59:20 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Thu, 10 Dec 2009 07:59:20 +0100 Subject: [ExI] Tolerance In-Reply-To: <4B1C99DB.3040605@satx.rr.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <4B1C99DB.3040605@satx.rr.com> Message-ID: <1fa8c3b90912092259v743837e4kf18a743612a216c9@mail.gmail.com> Fundamentalists, of any persuasions, are those intolerant bigots who cannot accept that others favor ideas and lifestyles different from their own. "Atheist fundamentalists" are people whose worldview and behavior are uniquely colored by their atheism, which they make a religion of. They focus on bashing believers even when believers leave them in peace and mind their own business. They want to ban crosses and other religious symbols anywhere, not only in public buildings and schools, but also in private homes. Like other fundamentalists they have no sense of humor and will start their self-righteous whining at the first mention of religion and spirituality. And they cannot tolerate difference. They are not better than religious fundamentalists. Actually, they are worse because they claim to act in the name of reason. On Mon, Dec 7, 2009 at 6:59 AM, Damien Broderick wrote: > On 12/6/2009 11:51 PM, Giulio Prisco (2nd email) wrote: >> >> Both theist and atheist fundamentalists >> have a right to speak their mind, of course, but I am not very >> interested in discussing with them in their terms. > > I take the term "fundamentalist" to apply to one who embraces the literal > and unalterable truth of some written revelation from one or more deities. > Since an atheist is one who declines to accept such revelations as the basis > for knowledge claims, I'm puzzled by how you define an "atheist > fundamentalist". Would this be one who holds that one or more gods has > revealed the unalterable truth that no deity exists? > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From eschatoon at gmail.com Thu Dec 10 07:19:46 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Thu, 10 Dec 2009 08:19:46 +0100 Subject: [ExI] Tolerance. 
In-Reply-To: <190ADF20-92BF-42D5-BB67-1BDB55CAA150@bellsouth.net> References: <887413.26405.qm@web110404.mail.gq1.yahoo.com> <31136E73-DA7D-48AA-8BF3-CFA09635D87D@bellsouth.net> <580930c20912090950t37dbf0b0s55f4633eaa69296b@mail.gmail.com> <190ADF20-92BF-42D5-BB67-1BDB55CAA150@bellsouth.net> Message-ID: <1fa8c3b90912092319g4f7405ffmb88d4dfd7012cd4e@mail.gmail.com> One can be a very good surgeon, operating strictly within the boundaries of the current scientific understanding of his area, in his professional life, and a believer in his private life. I will judge his professional expertise on the basis of the available evidence. Does he save lives? Does he recommend the best course of action to his patients? Does he make his best effort to maintain his expertise and stay current? I will not judge his belief in Allah, or Vishnu, or God, because it is not my business. And I can have a pleasant dinner with him, talking of things we are both interested in. I don't need to think of his belief, crazy as it may appear to me, as long as he doesn't try to convert me. I trust fundamentalist feminists will forgive my using "he"; it has one letter less to type and I am pressed for time. 2009/12/9 John Clark : > On Dec 9, 2009, Stefano Vaj wrote: > > I think stupid may plausibly mean, depending on the circumstances, both > somebody believing in naive and crazy ideas *and* the contrary of > astute and clever. > > Crazy, not just odd but crazy ideas are pretty close to the contrary > of astute and clever. And I think one can decide to be stupid, as in a > surgeon who doesn't believe in the cornerstone of the biological sciences, > Evolution; or a structural engineer who doesn't believe in Newton's theory > of gravitation. Actually such things are possible provided you put your > ideas in little airtight compartments and refuse to let them interact, but > that's just a longwinded way of saying the word stupid. > > I am inclined to consider the supporters of the religions > of the Book as "stupid" in the first sense, but not necessarily nor > always in the second. Far from it... > > Oh come now Stefano, we both know it's really not that far. > John K Clark > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From thespike at satx.rr.com Thu Dec 10 07:20:09 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 10 Dec 2009 01:20:09 -0600 Subject: [ExI] Tolerance In-Reply-To: <1fa8c3b90912092259v743837e4kf18a743612a216c9@mail.gmail.com> References: <580930c20912061342v182b517x487442771b6e066e@mail.gmail.com> <1fa8c3b90912062151o1bc1718cob09224fd57e0bff6@mail.gmail.com> <4B1C99DB.3040605@satx.rr.com> <1fa8c3b90912092259v743837e4kf18a743612a216c9@mail.gmail.com> Message-ID: <4B20A129.8080008@satx.rr.com> On 12/10/2009 12:59 AM, Giulio Prisco (2nd email) wrote: > Fundamentalists, of any persuasions, are those intolerant bigots who > cannot accept that others favor ideas and lifestyles different from > their own. But "fundamentalist" already has a customary denotation: http://en.wikipedia.org/wiki/Fundamentalism So you might be better off using the term "bigot" or even "intolerant bigot" which seems to have a less narrow flavor.
It's odd that you feel no discomfort in describing "their atheism, which they make a religion of," since that characterization in itself seems to be a disparagement of religion. For those people who approve of religion, this might sound like an endorsement rather than a condemnation--unless the religious person is a sectarian fundamentalist, of course. Damien Broderick From moulton at moulton.com Thu Dec 10 07:54:00 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 10 Dec 2009 07:54:00 -0000 Subject: [ExI] Tolerance Message-ID: <20091210075400.19351.qmail@moulton.com> On Wed, 2009-12-09 at 19:46 -0800, Post Futurist wrote: > Right, no reason to. But somehow I'm sure more burglars are > atheists than xians. Have you done any study on the issue? How about some evidence. Fred From moulton at moulton.com Thu Dec 10 08:32:04 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 10 Dec 2009 08:32:04 -0000 Subject: [ExI] Tolerance Message-ID: <20091210083204.62221.qmail@moulton.com> On Thu, 2009-12-10 at 07:59 +0100, Giulio Prisco (2nd email) wrote: > Fundamentalists, of any persuasions, are those intolerant bigots who > cannot accept that others favor ideas and lifestyles different from > their own. I see that Damien has already commented on the term Fundamentalism and how your use of the term Fundamentalist is not correct; so let us skip down to this: > They want to ban crosses and other religious symbols anywhere, not > only in public buildings and schools, but also in private homes. I have read many books and periodicals by and about Atheists and Atheism. I am now and have been for many years a member and/or participant in various Atheist and Freethought groups, email lists, meetups and organizations and have attended many events sponsored or organized by these groups. I have never met or read about an Atheist who wants to ban crosses or other religious symbols in private homes. Not One. Now it might be possible that you can find one or maybe even a few but I really doubt that you can find a significant fraction of Atheists who want to band crosses or religious symbols in private homes. So if you claim that there are Atheists who want to ban crosses or other religious symbols from private homes then provide the details of who, when, where and with citation information so it can be checked and verified. So either provide the information or retract your statement. Fred From sparge at gmail.com Thu Dec 10 12:04:38 2009 From: sparge at gmail.com (Dave Sill) Date: Thu, 10 Dec 2009 07:04:38 -0500 Subject: [ExI] Tolerance In-Reply-To: <9BB1F07D-EAFE-43B8-91D0-A466F36A2715@GMAIL.COM> References: <4B1DB07C.1080802@rawbw.com> <239EE9B1-9C33-4D30-8D97-4796B3F41CAE@GMAIL.COM> <4B1E0E3E.5010708@rawbw.com> <710b78fc0912091632hd38827awcc6275841b95377d@mail.gmail.com> <0EDCF996-CFF8-405E-92CC-2C79EF71EE42@GMAIL.COM> <9BB1F07D-EAFE-43B8-91D0-A466F36A2715@GMAIL.COM> Message-ID: On Thu, Dec 10, 2009 at 12:28 AM, JOSHUA JOB wrote: > > Well having a culturally define time of year to do this seems good. Its more > fun if its a holiday then if its just random (condensing it seems to > increase enjoyment). Of course you can do this any time, but you seem to be > against the notion of holidays period. Which seems silly. Celebrations are > important, even if it isn't commemorating an event so much as an idea Exactly. 
That's why I celebrate Festivus: http://en.wikipedia.org/wiki/Festivus -Dave From bbenzai at yahoo.com Thu Dec 10 14:47:34 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 10 Dec 2009 06:47:34 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <186947.82497.qm@web32003.mail.mud.yahoo.com> > From: JOSHUA JOB assumed: > > ... > It would be a lot harder to get people to be atheists if it > meant they? > aren't allowed to celebrate anything around this time of > year because?... ??? And what Atheist Authority, exactly, would or could forbid them from celebrating anything they damn well like? Ben Zaiboc Wishes people would stop thinking that atheism is just another religion. From nanite1018 at gmail.com Thu Dec 10 15:03:16 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Thu, 10 Dec 2009 10:03:16 -0500 Subject: [ExI] Tolerance In-Reply-To: <186947.82497.qm@web32003.mail.mud.yahoo.com> References: <186947.82497.qm@web32003.mail.mud.yahoo.com> Message-ID: <2175F822-9F7A-478F-83CD-8D695852EA5B@GMAIL.COM> > ??? > > And what Atheist Authority, exactly, would or could forbid them from > celebrating anything they damn well like? > > Ben Zaiboc > Wishes people would stop thinking that atheism is just another > religion. People seem to be arguing that celebrating Christmas leaves out people and somehow supports Christianity, even if you don't celebrate it as any sort of religious/mystical sort of holiday at all. My point is that this is nonsense, and that its perfectly fine to do so. I was arguing against this idea that atheists shouldn't celebrate Christmas, which I believe from the above is something you support. Atheism isn't a religion, it doesn't have special holidays, and I think Christmas is awesome. Joshua Job Says "Friendly fire is fairly humorous." haha From bbenzai at yahoo.com Thu Dec 10 14:39:48 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 10 Dec 2009 06:39:48 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <546901.79974.qm@web32002.mail.mud.yahoo.com> > From: Post Futurist opined: > > somehow I'm sure more burglars > are atheists than xians. Hmm. According to this (statistics for America in March 1997): http://www.holysmoke.org/icr-pri.htm there were about 50 times less atheists in prison than there were in the population at large. It would be interesting to see more up-to-date statistics on this. In the UK, I wouldn't be surprised if you're right, however, because most people here are atheists. Ben Zaiboc From jameschoate at austin.rr.com Thu Dec 10 15:44:15 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 10 Dec 2009 15:44:15 +0000 Subject: [ExI] Tolerance In-Reply-To: <186947.82497.qm@web32003.mail.mud.yahoo.com> Message-ID: <20091210154416.O37HN.430246.root@hrndva-web11-z02> ---- Ben Zaiboc wrote: > Wishes people would stop thinking that atheism is just another religion. Ben is seriously confused on this point, and he's not alone. 
-- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Thu Dec 10 15:50:12 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 10 Dec 2009 15:50:12 +0000 Subject: [ExI] Tolerance In-Reply-To: <2175F822-9F7A-478F-83CD-8D695852EA5B@GMAIL.COM> Message-ID: <20091210155013.3K8WV.430397.root@hrndva-web11-z02> I've a more fundamental question, how are you defining 'tolerance'? Definition of Society - The cornerstones of any society are - toleration - self-defense - A set of rules, codified or not, and expectations, expressed or not, which regulate both the individual and inter-personal activities of same - Societies may be radically different in content and yet share the same geography - The statics and dynamics of a society are governed by the physics of reality and the psychology of the individual (and it's absolute range) - The expectations of societies can be in direct opposition - Violence does not ensue from opposition but from lack of toleration of opposition - This applies to all levels of societies and seems to be psychology independent (in other words, all life seems to follow it) - As a result, stability can be looked upon as a measure of tolerance http://www.mail-archive.com/cypherpunks at minder.net/msg04459.html -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From thespike at satx.rr.com Thu Dec 10 15:56:55 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 10 Dec 2009 09:56:55 -0600 Subject: [ExI] Tolerance In-Reply-To: <20091210154416.O37HN.430246.root@hrndva-web11-z02> References: <20091210154416.O37HN.430246.root@hrndva-web11-z02> Message-ID: <4B211A47.7040808@satx.rr.com> On 12/10/2009 9:44 AM, jameschoate at austin.rr.com wrote: > ---- Ben Zaiboc: >> > Wishes people would stop thinking that atheism is just another religion. > Ben is seriously confused on this point, and he's not alone. Yes, because surely it's obvious that not believing in unicorns is just another way of believing in unicorns. Oh, wait. Damien Broderick From p0stfuturist at yahoo.com Thu Dec 10 16:02:03 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 10 Dec 2009 08:02:03 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <20091210075400.19351.qmail@moulton.com> Message-ID: <696354.82168.qm@web59909.mail.ac4.yahoo.com> No genuine Christian would burglarize a home, just as no real doctor would practice quack medicine. It does happen, but is anomalous. ? ? Have you done any study on the issue?? How about some evidence. Fred -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p0stfuturist at yahoo.com Thu Dec 10 16:52:44 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 10 Dec 2009 08:52:44 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <821320.4437.qm@web59914.mail.ac4.yahoo.com> No, the elastic situational ethics of many atheists might actually be higher than the ancient rigid morals of the traditional religious, but they aren't based on conventions; like when you celebrate the holidays you are complicit in celebrating traditional mores with family. Don't you think you might have one foot planted in religious tradition, and one foot in atheism or agnosticism? You clearly have a belief that atheists are less moral in some way. Can you explain that position in more detail? Emlyn -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Thu Dec 10 16:41:15 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 10 Dec 2009 10:41:15 -0600 Subject: [ExI] Tolerance In-Reply-To: <186947.82497.qm@web32003.mail.mud.yahoo.com> References: <186947.82497.qm@web32003.mail.mud.yahoo.com> Message-ID: <26C1095D00144590A711958C85ED4E6D@DFC68LF1> Ben Zaiboc "Wishes people would stop thinking that atheism is just another religion." Yes. That is why I don't categorize myself as an atheist. I simply say that I have values and ethics that are not located in the domain of religion. (Not sure this gets across, but at least it causes 'em to think a little.) Nlogo1.tif Natasha Vita-More From jonkc at bellsouth.net Thu Dec 10 17:18:22 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 10 Dec 2009 12:18:22 -0500 Subject: [ExI] Tolerance. In-Reply-To: <710b78fc0912091924w5fdabcb1xb42967aa0e1dc0d7@mail.gmail.com> References: <710b78fc0912091859h64fe0c6dg3394ccdfefad7cf0@mail.gmail.com> <972135.83783.qm@web59901.mail.ac4.yahoo.com> <710b78fc0912091924w5fdabcb1xb42967aa0e1dc0d7@mail.gmail.com> Message-ID: <8287C1FD-97D5-4D91-8E81-E571EEC5C1FF@bellsouth.net> On Dec 9, 2009, at 9:55 PM, Post Futurist wrote: > Christmas appears silly, even infantile; worshiping a baby in a manger. Fortunately for most people Christmas has more to do with Santa Claus than it does with that other fictional character, Jesus Christ. My only problem with Christmas is that it hasn't been commercialized enough. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Thu Dec 10 17:24:23 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 10 Dec 2009 11:24:23 -0600 Subject: [ExI] Tolerance In-Reply-To: <821320.4437.qm@web59914.mail.ac4.yahoo.com> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> Message-ID: <4B212EC7.7030702@satx.rr.com> On 12/10/2009 10:52 AM, Post Futurist wrote: > *Don't you think you might have one foot planted in religious tradition, > and one foot in atheism or agnosticism?* One huge problem with this rather disjointed thread is the confusion between "religion" and "belief in a god or gods". An atheist, just from the derivation of the word, is a person who has no belief in deity. But of course there are many godless religions. Animist religions seem to have no gods, per se, but see the world as suffused with and shaped by personified forces and passions.
The Australian aboriginal Dreaming is a vast ancient integral cosmology in which the seasonal landscape and its inhabitants are representations of volitional Ancestors; there's nothing remotely like an Abrahamic God--but it would seem absurd not to call this all-encompassing worldview "religious." In our dominant cultures the religious impulse happens to be hopelessly confounded with notions of a rewarding and punishing god, but it can probably be disentangled in such a way that an honest atheist can participate without compromise in religious celebrations, holidays, etc, or come up with some of our own. Not easily, though, if the conventionally religious insist that their theistic activities of worship and petition are mandatory. Damien Broderick From jonkc at bellsouth.net Thu Dec 10 17:44:12 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 10 Dec 2009 12:44:12 -0500 Subject: [ExI] Tolerance. In-Reply-To: <466950.41860.qm@web59909.mail.ac4.yahoo.com> References: <466950.41860.qm@web59909.mail.ac4.yahoo.com> Message-ID: On Dec 10, 2009, Post Futurist wrote: > is China, with its 1.3 or so billion ruled by religious fundamentalists? Russia? Both China and the old USSR were atheistic but that was incidental, none of the terrible things they did was done in the name of atheism. Can you think of any good thing done by a religious person that couldn't have been done by a goodhearted atheist? I can't. Can you think of any evil act done by a religious person that could only have been done by a religious person? Of course you can. With or without religion good people will do good things and bad people will do bad things, but for good people to do bad things you need religion. > India? The modern countries of both India and Pakistan were born in a sea of blood instigated by fanatical religious fundamentalists. And even today it is entirely possible that the first thermonuclear war will be a religious war between vegetarians and teetotalers. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 10 18:00:29 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 10 Dec 2009 13:00:29 -0500 Subject: [ExI] Tolerance. In-Reply-To: <972135.83783.qm@web59901.mail.ac4.yahoo.com> References: <972135.83783.qm@web59901.mail.ac4.yahoo.com> Message-ID: <774CFE37-FB84-4448-8A0C-CCC2E428DAF1@bellsouth.net> On Dec 9, 2009, at 10:14 PM, Post Futurist wrote: > when you lock your door at night you are not trying to protect yourself from Christians, are you? Yes I am. Of the inmates in American prisons about .2% are atheists, far less than outside the prison walls. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jameschoate at austin.rr.com Thu Dec 10 18:02:26 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 10 Dec 2009 18:02:26 +0000 Subject: [ExI] Tolerance In-Reply-To: <4B211A47.7040808@satx.rr.com> Message-ID: <20091210180227.7PGZX.450980.root@hrndva-web02-z02> ---- Damien Broderick wrote: > On 12/10/2009 9:44 AM, jameschoate at austin.rr.com wrote: > > > ---- Ben Zaiboc: > >> > Wishes people would stop thinking that atheism is just another religion. > > > Ben is seriously confused on this point, and he's not alone. > > Yes, because surely it's obvious that not believing in unicorns is just > another way of believing in unicorns. Oh, wait. You, and most others, confuse two different things here. The issue is 'faith' not unicorns. 
Whether the unicorns exist is irrelevant to the question of faith. http://www.waythingsare.com/news/what/religion/a-pantheist-s-manifesto.htm Atheism, like science, is a religion and a philosophy. The only difference is the axioms you put your faith in. -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Thu Dec 10 18:09:01 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 10 Dec 2009 12:09:01 -0600 Subject: [ExI] Tolerance In-Reply-To: <696354.82168.qm@web59909.mail.ac4.yahoo.com> Message-ID: <20091210180901.VPT26.451159.root@hrndva-web02-z02> ---- Post Futurist wrote: > No genuine Christian would burglarize a home, just as no real doctor would practice quack medicine. > It does happen, but is anomalous. Malarky. With regard to 'genuine Christian' what does that mean? Strictly one who follows the words of Christ in the 4 gospels? You're asking us to accept such a broad generalization without justification; the conclusion begs the question. With regard to doctors practicing quack medicine, define quack medicine? At some point just about every medical procedure and drug was quackery if for no other reason than that the generally accepted norms didn't include it. Consider longevity research (members of such a list as this should be familiar with it) where there are many, if not most, doctors who consider it quackery to take it seriously. Yet we have people gobbling down resveratrol and other drugs at an astonishing rate for something that has no clinical trials to actually back it up. Such is the power of faith. > Have you done any study on the issue? How about some evidence. Seems to me you're the one making the exceptional claims, therefore the exceptional evidence lies on your shoulders. -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Thu Dec 10 18:15:37 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 10 Dec 2009 18:15:37 +0000 Subject: [ExI] Tolerance. In-Reply-To: Message-ID: <20091210181537.A4G8S.451298.root@hrndva-web02-z02> ---- John Clark wrote: >I can't. Can you think of any evil act done by a religious person that could only have been done by a religious person? Of course you can. With or without religion good > people will do good things and bad people will do bad things, but for good people to do bad things you need religion. First, your use of good/bad is presumptive and has a host of implicit considerations, some of which directly contradict your assertion. With regard to needing religion, not hardly. What you need is fear. People do what they do because of two and only two classes of motives. They are convinced they will gain something, or they are convinced they will lose something.
-- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From possiblepaths2050 at gmail.com Thu Dec 10 21:22:02 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 10 Dec 2009 14:22:02 -0700 Subject: [ExI] 5 materials that will make the world as we know it obsolete Message-ID: <2d6187670912101322u1d1e977ct1c219134428a2be@mail.gmail.com> I realize much of this is already known here, but it's still an interesting and entertaining article. I wish the future would hurry up and get here! LOL http://www.cracked.com/article/212_5-materials-that-will-make-world-as-we-know-it-obsolete/ John -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Thu Dec 10 22:47:55 2009 From: emlynoregan at gmail.com (Emlyn) Date: Fri, 11 Dec 2009 09:17:55 +1030 Subject: [ExI] Tolerance In-Reply-To: <20091210180227.7PGZX.450980.root@hrndva-web02-z02> References: <4B211A47.7040808@satx.rr.com> <20091210180227.7PGZX.450980.root@hrndva-web02-z02> Message-ID: <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> 2009/12/11 : > ---- Damien Broderick wrote: >> On 12/10/2009 9:44 AM, jameschoate at austin.rr.com wrote: >> >> > ---- Ben Zaiboc: >> >> > Wishes people would stop thinking that atheism is just another religion. >> >> > Ben is seriously confused on this point, and he's not alone. >> >> Yes, because surely it's obvious that not believing in unicorns is just >> another way of believing in unicorns. Oh, wait. > > You, and most others, confuse two different things here. The issue is 'faith' not unicorns. Whether the unicorns exist is irrelevant to the question of faith. > > http://www.waythingsare.com/news/what/religion/a-pantheist-s-manifesto.htm > > Atheism, like science, is a religion and a philosophy. The only difference is the axioms you put your faith in. This idea that atheism is just another religion is beginning to shit me. It's wrong. Atheism has no shared culture, ritual, tradition. It doesn't even have metaphysical belief, it has a lack of metaphysical belief.
Unless you think that not believing in Santa Claus is a religion, you cannot also call not believing in supernatural entities a religion. Science is probably rightly called a world view or philosophy. Materialism, or a naturalistic world view, similar. But Atheism? There's just not enough to it to call it a philosophy, let alone a religion. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From p0stfuturist at yahoo.com Thu Dec 10 23:27:26 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 10 Dec 2009 15:27:26 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <852444.45072.qm@web59903.mail.ac4.yahoo.com> IMO any doctor who would violate their Hippocratic oath would be similar in spirit if not in letter to a "Christian" who would commit a felony, such as burglary. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Thu Dec 10 23:19:49 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 10 Dec 2009 15:19:49 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <155046.24158.qm@web59906.mail.ac4.yahoo.com> Fundamentalists in America today only have an unavoidable grip in the Deep South; in all other regions they can't have at you unless you make the mistake of communicating with them. The only inescapable fundamentalists I've met so far are the Campus Crusade For Christ volunteers, who travel around the nation inflicting mental torture on students. Fundamentalists were a grave threat in the past, but I am not aware that such is still the case. Aside from the Mideast and the Subcontinent, where are fundamentalists a primary threat? -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Fri Dec 11 01:02:57 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 11 Dec 2009 01:02:57 -0000 Subject: [ExI] Tolerance Message-ID: <20091211010257.17048.qmail@moulton.com> On Thu, 2009-12-10 at 12:09 -0600, jameschoate at austin.rr.com wrote: ---- Post Futurist wrote: > > No genuine Christian would burglarize a home, just as no real doctor > > would practice quack medicine. > > It does happen, but is anomalous. > > Malarky. > > With regard to 'genuine Christian' what does that mean? Strictly one > who follows the words of Christ in the 4 gospels? You're asking us > to accept such a broad generalization without justification; the > conclusion begs the question. I think I see the problem with what Post Futurist has written. While the term "Christian" has a variety of definitions, the common and traditional meaning is that a Christian is one who accepts Jesus as their personal savior. Note this is not dependent on future behavior; thus if a person who is a Christian breaks a commandment the person does not stop being a Christian. Actually this possible eventuality is built into the standard Christian theology; if a Christian breaks a commandment such as burglary then the Christian needs to repent, pray, confess, ask forgiveness and try to improve their behavior. Note that during the entire process from the burglary through repentance, prayer and all of the rest the person is still a Christian.
However note that individual Christians may vary from person to person in whether they resist committing burglary and thus if there is a population of Christians and some of them commit burglary then we can work on developing a statistical model. We can also work on a statistical model for Hindus and for Buddhists and for Taoists and so on. Now we go to a formatting problem. Actually I (Fred) wrote the following line not Post Futurist. > > Have you done any study on the issue? How about some evidence. The problem is that messages from Post Futurist tend not to follow any quoting standard with which I am familiar. Thus the confusion. Actually I was trying to ask Post Futurist for some evidence just as you (James) are asking. The absence of any evidence is conspicuous. Basically Post Futurist does not appear to know what he is talking about. > Seems to me you're the one making the exception claims, therefore the > exceptional evidence lays on your shoulders. From p0stfuturist at yahoo.com Fri Dec 11 01:23:18 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 10 Dec 2009 17:23:18 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <20091211010257.17048.qmail@moulton.com> Message-ID: <404677.66886.qm@web59905.mail.ac4.yahoo.com> Here we are all in agreement for once :) My question is: isn't there an undeniable distinction made in Christianity and most orthodox faiths between a petty transgression not dealt with in a court of law, and an outright felony such as burglary? But what I wonder most is: can most or even possibly virtually all xians be convinced that eternal life can be obtained through non-mystical means as well as by faith? Given enough time to convince them (a few decades) it does appear the answer is yes. ?The absence of any evidence is conspicuous.? Basically Post Futurist does not appear to know what he is talking about. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Fri Dec 11 01:57:40 2009 From: mbb386 at main.nc.us (MB) Date: Thu, 10 Dec 2009 20:57:40 -0500 (EST) Subject: [ExI] Tolerance In-Reply-To: <20091211010257.17048.qmail@moulton.com> References: <20091211010257.17048.qmail@moulton.com> Message-ID: <34575.12.77.169.43.1260496660.squirrel@www.main.nc.us> > > The problem is that messages from Post Futurist tend not to follow any quoting > standard with which I am familiar. Thank you! I thought it was something my software was not rendering correctly. :) Regards, MB From gts_2000 at yahoo.com Fri Dec 11 01:40:45 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 10 Dec 2009 17:40:45 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <370107.70551.qm@web36504.mail.mud.yahoo.com> -- On Wed, 12/9/09, Stathis Papaioannou wrote: >> And Searle says this beast called intentionality > cannot live inside S/H systems. That's what his Chinese Room > Argument is all about. > > And the counterargument is that of course the Chinese Room > would have semantics and intentionality and all the other > good things that the brain has. If you formulate a good counter-argument to support that counter-thesis then I hope you will post it here! > Only if consciousness were a side-effect of intelligent behaviour > would it have evolved. I don't understand your meaning. Lots of non-adaptive traits have evolved as side-effects of adaptive traits. Do you count consciousness as such a non-adaptive trait, one that evolved alongside the adaptive trait of intelligence? 
Or do you mean to say that consciousness increases or aids intelligence, an adaptive trait? In any case Searle rejects epiphenomenalism -- the view that subjective mental events act only as "side-effects" and do not cause physical events. Searle thinks they do; that if you consciously will to raise your arm, and it rises, your conscious willing had something to do with the fact that it rose. (In this example the philosophical concept of intentionality corresponds with the ordinary meaning.) >> Now then, IF we first come to understand those causal > powers of brains and IF we then find a way to duplicate > those powers in something other than brains, THEN we will > create strong AI. On that day, pigs will fly. > > IF we simulate the externally observable behaviour of > brains THEN we will create strong AI. Do you mean to say that if something behaves exactly as if it has human intelligence, it must have strong AI? If so then we mean different things by strong AI. -gts From lcorbin at rawbw.com Fri Dec 11 03:38:10 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 10 Dec 2009 19:38:10 -0800 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> References: <4B211A47.7040808@satx.rr.com> <20091210180227.7PGZX.450980.root@hrndva-web02-z02> <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> Message-ID: <4B21BEA2.9020209@rawbw.com> Emlyn wrote: > This idea that atheism is just another religion [is] wrong. > > Atheism has no shared culture, ritual, tradition. > It doesn't even have metaphysical belief, it has a lack of > metaphysical belief. Unless you think that not believing in Santa > Claus is a religion, you cannot also call not believing in > supernatural entities a religion. > > Science is probably rightly called a world view or philosophy. > Materialism, or a naturalistic world view, similar. But Atheism? Well, don't spell it with a capital letter, for God's sake. > There's just not enough to it, to call it a philosophy, let alone a > religion. Absolutely. Well said, in all parts. Now here's a good analogy, one that won't even dismay the losing side: Atheism is like darkness, and religion is like light. Atheism is not just another color of light, it's the absence of light. You may ask for the frequencies of light, but not for that of darkness. If there is no light---and in this case there just isn't---one ought to prefer seeing nothing. Lee From moulton at moulton.com Fri Dec 11 03:46:08 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 11 Dec 2009 03:46:08 -0000 Subject: [ExI] Faith, Religion, Science and PCR [was Re: Tolerance]] Message-ID: <20091211034608.97168.qmail@moulton.com> I have changed the Subject since we are drifting from the original subject. On Thu, 2009-12-10 at 18:02 +0000, jameschoate at austin.rr.com wrote: ---- Damien Broderick wrote: > > > > Yes, because surely it's obvious that not believing in unicorns is > > just another way of believing in unicorns. Oh, wait. > > You, and most others, confuse two different things here. The issue > is 'faith' not unicorns. Whether the unicorns exist is irrelevant to > the question of faith. > > http://www.waythingsare.com/news/what/religion/a-pantheist-s-manifesto.htm > > Atheism, like science, is a religion and a philosophy. The only > difference is the axioms you put your faith in. First thanks for giving the URL; it helps put your comments in perspective.
However I still disagree but based on reading the webpage I think I know why there is a disagreement. The difficulty stems from the usages of the word "faith"; so let me contrast two different usages: 1. Faith as being confident and having (at least implicitly) doubt and acknowledging the possibility of error 2. Faith as being without doubt usually or often coupled with denial of the possibility of error My experience that "faith" in the second usage is often (but not always) associated with religion; particularly (but not exclusively) Monotheism. Thus for example a Christian (particularly a Fundamentalist Protestant) might declare faith in the second sense by saying their Faith in the Trinity is absolute and there is no doubt about Christianity. For that Christian that is the meaning of faith, there is no doubt or "let us put all this to a critial examination". Now compare that to the usage of a scientist commenting (to use your example) if two hydrogen atoms had been exchanged. Would the scientist be more clear if she said "I have faith that the two hydrogen atoms have not been exchanged" or if she said "I have confidence that the two hydrogen atoms not not been exchanged; of course this confidence is subject to further testing and attempts at falsification since there is the possibility of error and we need to maintain the appropriate doubt and skepticism". Particularly if the scientist was aware of PCR Pan Critical Rationalism. Well actually the scientist would most likely say "I have confidence that the two hydrogen atoms have not been exchanged" and leave off the rest because it is implicit. My point is that using the term "faith" in relation to science often leads people to get confused and can cause a real impasse in communications. There is no need to use the term "faith" in relation to science; it does not add anything to the understanding of science and it can cause major problems. Now at this point someone will usually say "But you need to have ultimate faith in the scientic method or testing or falsifiability or something". And the response is "No, an uncritical faith in some ultimate something is not necessary; Bartley has shown us a set useful tools". Many on this list will be aware of Pan Critical Rationalism and will have noticed how I used certain phrases and sentence structures to make my argument lead to this recommendation of the book The Retreat to Commitment written by W. W. Bartley. I highly recommend the book although it is rather dense and contains a lot of detail on the historical conflicts between science and religion. If you do not have time to read the book then you might want to read the essay on PCR by Max More which I also highly recommend: http://www.maxmore.com/pcr.htm To summarize: Science (and by science I include all scholarly inquiry) must contain doubt and continual testing to refute error. To use a term such as "faith" which has as one of its common usages "the absence of doubt" seems to me to be counter-productive. In discussing science one can say "confidence with continual testing and criticism" and be more accurate and less confusing than saying "faith". That is why personally I try to only use the word "faith" when discussing religion; I have found doing so eliminates a lot of misunderstanding. Fred From stathisp at gmail.com Fri Dec 11 05:10:43 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 11 Dec 2009 16:10:43 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. 
In-Reply-To: <370107.70551.qm@web36504.mail.mud.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/11 Gordon Swobe : >> And the counterargument is that of course the Chinese Room >> would have semantics and intentionality and all the other >> good things that the brain has. > > If you formulate a good counter-argument to support that counter-thesis then I hope you will post it here! Perhaps you could do the work for me and prove that *you* have semantics and intentionality and aren't just a zombie computer program. >> Only if consciousness were a side-effect of intelligent behaviour >> would it have evolved. > > I don't understand your meaning. Lots of non-adaptive traits have evolved as side-effects of adaptive traits. Do you count consciousness as such a non-adaptive trait, one that evolved alongside the adaptive trait of intelligence? Or do you mean to say that consciousness increases or aids intelligence, an adaptive trait? I suppose it's possible that nature could have given rise to zombies that behave like humans, but it seems unlikely. > In any case Searle rejects epiphenomenonalism -- the view that subjective mental events act only as "side-effects" and do not cause physical events. Searle thinks they do; that if you consciously will to raise your arm, and it rises, your conscious willing had something to do with the fact that it rose. (In this example the philosophical concept of intentionality corresponds with the ordinary meaning.) I find the whole idea of epiphenomenalism muddled and unhelpful. Why don't we discuss whether intelligence is an epiphenomenon rather than consciousness? It's not my intelligence that makes me writes this, it is motor impulses to my hands, intelligence being a mere side-effect of this sort of neural activity with no causal role of its own. >>> Now then, IF we first come to understand those causal >> powers of brains and IF we then find a way to duplicate >> those powers in something other than brains, THEN we will >> create strong AI. On that day, pigs will fly. >> >> IF we simulate the externally observable behaviour of >> brains THEN we will create strong AI. > > Do you mean to say that if something behaves exactly as if it has human intelligence, it must have strong AI? If so then we mean different things by strong AI. No, I mean that if you replace the brain a neuron at a time by electronic analogues that function the same, i.e. same output for same input so that the neurons yet to be replaced respond in the same way, then the resulting brain will not only display the same behaviour but will also have the same consciousness. Searle considers the neural replacement scenario and declares that the brain will behave the same outwardly but will have a different consciousness. The aforementioned paper by Chalmers shows why this is impossible. -- Stathis Papaioannou From moulton at moulton.com Fri Dec 11 06:17:41 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 11 Dec 2009 06:17:41 -0000 Subject: [ExI] Tolerance Message-ID: <20091211061741.83045.qmail@moulton.com> On Thu, 2009-12-10 at 19:38 -0800, Lee Corbin wrote: > Now here's a good analogy, one that even won't even dismay > the losing side: > > Atheism is like darkness, and religion is like light. Atheism > is not just another color of light, it's the absence of light. > You may ask for the frequencies of light, but not for that of > darkness. > > If there is no light---and in this case there just isn't---one > ought to prefer seeing nothing. 
Very interesting analogy; particularly since the Light versus Dark dualism that has historical roots in many areas from India to the Mediterranean area and is seen in many religious traditions. Fred From jameschoate at austin.rr.com Fri Dec 11 07:41:10 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Fri, 11 Dec 2009 7:41:10 +0000 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> Message-ID: <20091211074111.JH70D.448202.root@hrndva-web11-z02> ---- Emlyn wrote: > This idea that atheism is just another religion is beginning to shit > me. It's wrong. Atheism is nothing more than faith, that makes it a religion. This is irrespective of how the practitioners may wish to distinguish themselves from all others. This desire of separation is a weakness, not a strength. > Atheism has no shared culture, ritual, tradition. > It doesn't even have metaphysical belief, it has a lack of > metaphysical belief. Unless you think that not believing in Santa > Claus is a religion, you cannot also call not believing in > supernatural entities a religion. Irrelevant, religion has nothing to do with some conceptual metric of distance in a religious phase space. Whether one believes in Santa Clause is irrelevant, you are confusing the the with the name for the thing. A serious conceptual error. > Science is probably rightly called a world view or philosophy. > Materialism, or a naturalistic world view, similar. But Atheism? > There's just not enough to it, to call it a philosophy, let alone a > religion. That's probably the silliest conceptual construct I've seen in a while. You're basically saying atheism isn't a religion because it consist of a set of ideals that are asymptotic to nothing. How do you measure this 'something' you're using to compare? -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From stathisp at gmail.com Fri Dec 11 08:45:16 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 11 Dec 2009 19:45:16 +1100 Subject: [ExI] Tolerance In-Reply-To: <20091211074111.JH70D.448202.root@hrndva-web11-z02> References: <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> <20091211074111.JH70D.448202.root@hrndva-web11-z02> Message-ID: On 11/12/2009, jameschoate at austin.rr.com wrote: > ---- Emlyn wrote: > > Atheism has no shared culture, ritual, tradition. > > It doesn't even have metaphysical belief, it has a lack of > > metaphysical belief. Unless you think that not believing in Santa > > Claus is a religion, you cannot also call not believing in > > supernatural entities a religion. > > Irrelevant, religion has nothing to do with some conceptual metric of distance in a religious phase space. Whether one believes in Santa Clause is irrelevant, you are confusing the the with the name for the thing. A serious conceptual error. Do you believe in Santa Claus? Santa Claus is a minor deity in the Christian pantheon, so if you don't believe in him, you are a Santa Claus atheist. Is that a religious belief? If so, then your definition of religious belief is very broad, and perhaps atheists should not take offense when you apply it to them. 
-- Stathis Papaioannou From eugen at leitl.org Fri Dec 11 10:38:38 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 11 Dec 2009 11:38:38 +0100 Subject: [ExI] Tolerance In-Reply-To: <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> References: <4B211A47.7040808@satx.rr.com> <20091210180227.7PGZX.450980.root@hrndva-web02-z02> <710b78fc0912101447q13eb84ddx664a8d875c427d65@mail.gmail.com> Message-ID: <20091211103838.GX17686@leitl.org> On Fri, Dec 11, 2009 at 09:17:55AM +1030, Emlyn wrote: > > Atheism, like science, are religions and philosophies. The only difference is the axioms you put your faith in. > > This idea that atheism is just another religion is beginning to shit > me. It's wrong. The whole thread stinks. All threads about religion do. Nothing good will come out of it. I suggest we kill it. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From dharris at livelib.com Fri Dec 11 10:29:14 2009 From: dharris at livelib.com (David C. Harris) Date: Fri, 11 Dec 2009 02:29:14 -0800 Subject: [ExI] Tolerance In-Reply-To: <20091211061741.83045.qmail@moulton.com> References: <20091211061741.83045.qmail@moulton.com> Message-ID: <4B221EFA.5070204@livelib.com> moulton at moulton.com wrote: > On Thu, 2009-12-10 at 19:38 -0800, Lee Corbin wrote: > > >> Now here's a good analogy, one that even won't even dismay >> the losing side: >> >> Atheism is like darkness, and religion is like light. Atheism >> is not just another color of light, it's the absence of light. >> You may ask for the frequencies of light, but not for that of >> darkness. >> >> If there is no light---and in this case there just isn't---one >> ought to prefer seeing nothing. >> > > Very interesting analogy; particularly since the Light versus Dark > dualism that has historical roots in many areas from India to the > Mediterranean area and is seen in many religious traditions. > > Fred > Fred, do you know of other sources of that dualism (and resurrection of bodies and a Final Judgment), beside the Zoroastrian religion being promoted by the Persian empire? I'm intrigued by and want to evaluate a claim that the Persians spread that religion to encourage loyalty in the surrounding colonial territories, such as Israel after the Israelis were freed from Babylon and the Persians financed building of the 2nd Temple. - David Harris, Palo Alto From mbb386 at main.nc.us Fri Dec 11 11:24:50 2009 From: mbb386 at main.nc.us (MB) Date: Fri, 11 Dec 2009 06:24:50 -0500 (EST) Subject: [ExI] Faith, Religion, Science and PCR [was Re: Tolerance]] In-Reply-To: <20091211034608.97168.qmail@moulton.com> References: <20091211034608.97168.qmail@moulton.com> Message-ID: <34596.12.77.169.78.1260530690.squirrel@www.main.nc.us> Fred Moulton writes: > > To summarize: Science (and by science I include all scholarly inquiry) must contain > doubt and continual testing to refute error. To use a term such as "faith" which > has as one of its common usages "the absence of doubt" seems to me to be > counter-productive. In discussing science one can say "confidence with continual > testing and criticism" and be more accurate and less confusing than saying "faith". > That is why personally I try to only use the word "faith" when discussing religion; > I have found doing so eliminates a lot of misunderstanding. The word "believe" has many similar connotative drawbacks for me. 
I frequently see it used where "think" might be more appropriate. Not to be confused with the meaningless uses such as "I believe in this town" as a political comment.... Regards, MB From stefano.vaj at gmail.com Fri Dec 11 12:15:17 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 13:15:17 +0100 Subject: [ExI] Tolerance In-Reply-To: <4B212EC7.7030702@satx.rr.com> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> Message-ID: <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> 2009/12/10 Damien Broderick > One huge problem with this rather disjointed thread is the confusion > between "religion" and "belief in a god or gods". An atheist, just from the > derivation of the word, is a person who has no belief in deity. But of > course there are many godless religions. Animist religions seem to have no > gods, per se, but see the world as suffused with and shaped by personified > forces and passions. The Australian aboriginal Dreaming is a vast ancient > integral cosmology in which the seasonal landscape and its inhabitants are > representations of volitional Ancestors; there's nothing remotely like an > Abrahamic God--but it would seem absurd not to call this all-encompassing > worldview "religious." > Yes, this is fundamental point. And I suspect that even our understanding of pre-christian or non-European gods are nowadays strongly influenced by monotheistic views (including for some of their followers). For instance, ancient Greeks used not to see any especially dramatic contradictions in the fact that very different and incompatible versions of the same myth were widespread. Chronology thereof was also quite vague. It has been persuasively contended that this shows that they did not consider statements concerning everyday life ("there is a stone in this basket") on the same basis as statements concerning mythical facts ("Pallas Athena was wounded during the Troy siege"). -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Dec 11 13:27:19 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 11 Dec 2009 05:27:19 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <802912.95517.qm@web36505.mail.mud.yahoo.com> --- On Fri, 12/11/09, Stathis Papaioannou wrote: > Perhaps you could do the work for me and prove that *you* > have semantics and intentionality and aren't just a zombie > computer program. That tactic of the sceptic leads to a sort of solipsism, but I agree we could go down that rabbit hole if we really want to... Say there, Stathis. I notice that I have intentionality even down here in the sceptic's rabbit hole. I find it hard to hold in mind the idea that I don't have it because to hold anything whatsoever in mind is to have it. How about you? Do you have anything whatsoever in mind? :-) > Why don't we discuss whether intelligence is an epiphenomenon > rather than consciousness? It's not my intelligence that makes me > writes this, it is motor impulses to my hands, intelligence being > a mere side-effect of this sort of neural activity with no causal > role of its own. Well, from where I sit it sure seems that your hands write intelligent emails in the physical world and that something exhibiting intelligence must account for that fact. I don't mind if you choose not to call it your own intelligence. Call it whatever you please, but whatever you do choose to call it, I cannot consider it epiphenomenal. 
Epiphenomenal things cannot affect the physical world. > No, I mean that if you replace the brain a neuron at a time > by electronic analogues that function the same, i.e. same > output for same input so that the neurons yet to be replaced > respond in the same way, then the resulting brain will not only > display the same behaviour but will also have the same consciousness. How will you know this? > Searle considers the neural replacement scenario and declares that > the brain will behave the same outwardly but will have a different > consciousness. The aforementioned > paper by Chalmers shows why this is impossible. Chalmers is a functionalist (or at least he sometimes wears that hat), and yes, Searle disagrees with functionalism and its close relative behaviorism. In a nutshell, we might speculate and hope that a functional analogue of the brain will have consciousness, but until we understand why biological brains have it, we will never know if anything else has it. Without that knowledge of the brain, functionalism has some serious problems: some philosophers have shown, for example, that we could construct a functional analogue of the brain out of beer cans and toilet paper. Pretty hard to imagine that contraption having anything like semantics, but in principle that contraption acts no differently from the one Chalmers has in mind. No matter how you construct that brain-like contraption, you won't find anything inside it to explain semantics/intentionality. On the inside it will look just like any other contraption. Actually Leibniz first figured this out hundreds of years ago. -gts From painlord2k at libero.it Fri Dec 11 14:35:59 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 11 Dec 2009 15:35:59 +0100 Subject: [ExI] Tolerance In-Reply-To: <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> Message-ID: <4B2258CF.9000308@libero.it> On 11/12/2009 13.15, Stefano Vaj wrote: > Yes, this is fundamental point. And I suspect that even our > understanding of pre-christian or non-European gods are nowadays > strongly influenced by monotheistic views (including for some of their > followers). > > For instance, ancient Greeks used not to see any especially dramatic > contradictions in the fact that very different and incompatible versions > of the same myth were widespread. Chronology thereof was also quite vague. > > It has been persuasively contended that this shows that they did not > consider statements concerning everyday life ("there is a stone in this > basket") on the same basis as statements concerning mythical facts > ("Pallas Athena was wounded during the Troy siege"). Agreed. This, in turn, limited their ability to think in terms that we now take for granted. For example, the belief that unchanging natural laws exist and can be discovered was not theirs. That came with Christianity, where God is described as a creator who follows his own laws. Believing in many litigious and capricious deities does not help one believe that there are universal rules for all things. The same thing can be seen with Islam, where Allah is a capricious god, so the idea of discovering unchangeable laws is seen as blasphemy, since it amounts to affirming that Allah has limits imposed on him. And blasphemy is met with death. It is of interest that this idea of unfixed natural laws and "post-normal" science is appealing to some atheists.
Not many, for now. But what would prevent people from accepting this way of thinking more and more, if the greater majority does not believe, or is not influenced to believe, in a single creator with a single set of rules always binding for all? The existence of a god is not the important thing; it is the idea of a god that wants to be known, that loves his creation, that gives fixed rules for all of creation, rules that cannot be changed and that can be discovered and understood. What could an atheist use as an anchor for continuing to believe in a universe with fixed rules that can be known and understood, and in the idea that it is good to know and understand them? Because without some anchor, beliefs will change with time, as they always do when there is nothing to anchor them. Mirco From saefir at yahoo.com Fri Dec 11 08:41:23 2009 From: saefir at yahoo.com (flemming) Date: Fri, 11 Dec 2009 00:41:23 -0800 (PST) Subject: [ExI] atheism Message-ID: <287486.21264.qm@web56804.mail.re3.yahoo.com> To be an atheist is exactly just another religion. You can believe that there is no god, but you can not prove it. Therefore atheism is a religion based on belief. If you want to distance yourself from religion the right ism is agnosticism. To be an agnostic is to declare that there is not enough data to settle the question of whether there is or is not a god. Most natural scientists, if not religious, are agnostics; the late Carl Sagan is one example. Flemming From bbenzai at yahoo.com Fri Dec 11 15:24:07 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 11 Dec 2009 07:24:07 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <499872.95108.qm@web32004.mail.mud.yahoo.com> > http://www.waythingsare.com/news/what/religion/a-pantheist-s-manifesto.htm I'll see your pantheism and raise you infinity: http://in.groups.yahoo.com/group/Omnitheism Ben Zaiboc From bbenzai at yahoo.com Fri Dec 11 15:55:16 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 11 Dec 2009 07:55:16 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: Message-ID: <432003.90571.qm@web32002.mail.mud.yahoo.com> declaimed: > > Atheism is nothing more than faith, that makes it a > religion. This is irrespective of how the practitioners may > wish to distinguish themselves from all others. The whole point of atheism is that it is emphatically *not* a faith. It's certainly not a 'faith in there being no gods', if that's what you're thinking. It's a position of taking nothing on faith. "I don't believe there are any gods" is a different thing to "I believe there are no gods". In the first, you are not stating a faith in something, in the second, you are. The first is atheism, and is simply a result of there being no evidence for gods. The second is a faith-based stance, a belief (I don't know what you'd call it), and could well persist in the face of any evidence to the contrary. You might want to call this 'belief in no gods' a religion. I wouldn't object. But it's not atheism.
Ben Zaiboc From bbenzai at yahoo.com Fri Dec 11 16:01:20 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 11 Dec 2009 08:01:20 -0800 (PST) Subject: [ExI] 5 materials that will make the world as we know it obsolete In-Reply-To: Message-ID: <391651.16919.qm@web32003.mail.mud.yahoo.com> John Grigg > I realize much of this is already known here, but it's > still an interesting > and entertaining article.? I wish the future would > hurry up and get here! > LOL > > http://www.cracked.com/article/212_5-materials-that-will-make-world-as-we-know-it-obsolete/ > Holy Shit! With What? :D Ben Zaiboc From max at maxmore.com Fri Dec 11 16:06:40 2009 From: max at maxmore.com (Max More) Date: Fri, 11 Dec 2009 10:06:40 -0600 Subject: [ExI] atheism Message-ID: <200912111606.nBBG6sWu022978@andromeda.ziaspace.com> Flemming >To be an atheist is exactly just another religion. You can believe >that there is no god, but you can not prove it. Therefore atheism is >a religion based on belief. Atheism is a-theism -- an absence of belief in a god. The absence of belief *cannot* be a religion. In addition, as others have pointed out, atheism has none of the rituals or other marks of religions. It's absurd to call atheism a religion. I cannot prove that the Tooth Fairy doesn't exist, but I don't believe in a Tooth Fairy. Not only do I lack belief in the TF, I would pretty confidently say there is no TF. Does that make me part of the A-TF religion? If you answer "yes", then you are committed to saying I am part of an infinite number of religions, since I lack belief in an infinite number of other claims. > If you want to distance yourself from religion the right ism is > agnostisism. To be an agnostisc is to decclare that there is not > enough data to settle the question if there is or is not a god. That is one form of agnosticism -- the weak version. In that sense, you can be an agnostic atheist. It's also possible to be an agnostic theist if you choose to believe despite acknowledging that you don't really know. (This might seem weird, but many religious people's brains probably are doing something like this at a level below the conscious... just my speculation.) The other, strong, form of agnosticism says that you *cannot* know whether or not there is a god. That's an equally legitimate form of agnosticism: it is a-gnosticism -- a lack of knowledge. You can lack knowledge because (a) you don't have sufficient information or haven't given it sufficient thought, or (b) you believe that you cannot know -- gods are not possible objects of knowledge. Atheism and agnosticism are not -- or need not -- be distinct alternatives. The former is simply a statement about a lack of belief; the latter may be a statement about what you think it's possible to believe, or what you think you have *reason* to believe. Unfortunately, not everyone bases their beliefs on what they have reason to believe. But, whichever way you take the meaning of "agnostic" or "atheist", atheism is clearly *not* a religion. You can't have religion without a set of beliefs (not lack-of-beliefs) and some accompanying markers (typically rituals and the like). Flemming and James Choate do seem to be seriously confused on this issue. I second Fred's recommendation to study pancritical rationalism. It might help. This should be 101 on the Extropy-Chat list. We have plenty of genuinely controversial and difficult issues to discuss. Can we now get back to them? Max ------------------------------------- Max More, Ph.D. 
Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From max at maxmore.com Fri Dec 11 16:10:33 2009 From: max at maxmore.com (Max More) Date: Fri, 11 Dec 2009 10:10:33 -0600 Subject: [ExI] Does caloric restriction work for humans? Message-ID: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> I've been skeptical about drastic CR for humans for years. One reason, mentioned in the following article, is that we don't live in cages in a lab. Lacking any spare muscle or fat makes us highly vulnerable to traumas of various kinds. (Many of us know of -- or are -- people who have lost 30 pounds or more in hospital due to illness). Recently, Aubrey has given a specific reason (also mentioned in the article) why the life extension from even severe caloric restriction is likely to be very small. So, here's the article. I would like to hear your thoughts on it, pro and con. If CR advocates have directly addressed all the points, I'd appreciate a pointer. Calorie restrictive eating for longer life? The story we didn't hear in the news http://junkfoodscience.blogspot.com/2009/07/calorie-restrictive-eating-for-longer.html Onward! Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From stefano.vaj at gmail.com Fri Dec 11 17:05:07 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 18:05:07 +0100 Subject: [ExI] Tolerance In-Reply-To: <4B2258CF.9000308@libero.it> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> <4B2258CF.9000308@libero.it> Message-ID: <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> 2009/12/11 Mirco Romanato > This, in change, limited their ability think in terms that we give for > granted. > For example, the belief that unchanging natural laws exist and can be > discovered was not their. This came with Christianity, where God is > described as a creator that follows its own laws. > Why, things obviously happen to exhibit a perverse consistency, because as a good atheist/neopagan/idealist/skeptic/whatever, I am inclined on the contrary to believe that "natural laws" have nothing to do with immutable decrees of an entity (be it God, or even "Mother Nature") establishing how things must go, in more or less the same fashion the human legislators try and regulate social affairs, but simply with our way of understanding and describing how they actually do... So, while I think that adopting one view or the other is more of a philosophical stance than a matter of fact, I am needless to say much more at ease with the Greek (say, Eraklit or Democritus) than with the biblical worldview (say, the Genesis or Saint Thomas). And I suspect that modern science and epistemology, especially since the quantum mechanics revolution, have an easier and more elegant coexistence with the former. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.vaj at gmail.com Fri Dec 11 17:11:08 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 18:11:08 +0100 Subject: [ExI] Tolerance In-Reply-To: <432003.90571.qm@web32002.mail.mud.yahoo.com> References: <432003.90571.qm@web32002.mail.mud.yahoo.com> Message-ID: <580930c20912110911p26c601a3h1ff7d1ca733d374a@mail.gmail.com> 2009/12/11 Ben Zaiboc > It's a position of taking nothing on faith. "I don't believe there are any > gods" is a different thing to "I believe there are no gods". In the first, > you are not stating a faith in something, in the second, you are. > If you are a transhumanist, you should anyway rephrase it as "I believe there are no gods (yet)". ;-) There again, and trying to steer the thread back on subject, something which risks making transhumanism unpresentable in some quarters (e.g., academic posthumanism) is the idea that the future coming of gods is something to be taken for granted or inscribed in some cosmological necessity, rather than a possibility - and probably a possibility that would have to be actively pursued if it ever were to take place... -- Stefano Vaj From max at maxmore.com Fri Dec 11 17:58:10 2009 From: max at maxmore.com (Max More) Date: Fri, 11 Dec 2009 11:58:10 -0600 Subject: [ExI] Forecasting experts' simple model leaves expensive climate models cold Message-ID: <200912111758.nBBHwM3O017606@andromeda.ziaspace.com> Another interesting piece from Armstrong and Green: Forecasting experts' simple model leaves expensive climate models cold A simple model was found to produce forecasts that are over seven times more accurate than forecasts from the procedures used by the United Nations Intergovernmental Panel on Climate Change (IPCC). This important finding is reported in an article titled "Validity of climate change forecasting for public policy decision making" (http://kestencgreen.com/gas-2009-validity.pdf) in the latest issue of the International Journal of Forecasting. It is the result of a collaboration among forecasters J. Scott Armstrong of the Wharton School, Kesten C. Green of Monash University, and climate scientist Willie Soon of the Harvard-Smithsonian Center for Astrophysics. In an earlier paper (http://www.forecastingprinciples.com/files/WarmAudit31.pdf), Armstrong and Green found that the IPCC's approach to forecasting climate violated 72 principles of forecasting. To put this in context, would you put your children on a trans-Atlantic flight if you knew that the plane had failed engineering checks for 72 out of 127 relevant items on the checklist? The IPCC violations of forecasting principles were partly due to their use of models that were too complex for the situation. Contrary to everyday thinking, complex models provide forecasts that are less accurate than forecasts from simple models when the situation is complex and uncertain. Confident that a forecasting model that followed scientific forecasting principles would provide forecasts that were more accurate than those provided by the IPCC, Green, Armstrong and Soon used a model that was more consistent with forecasting principles and knowledge about climate. The forecasting model was the so-called "naïve" model. It assumes that things will remain the same. It is such a simple model that people are generally not aware of its power.
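To make the naïve benchmark concrete, here is a minimal sketch, in Python, of a persistence forecast scored against a trend-extrapolation forecast with a rolling-origin test. This is not code from the Armstrong, Green and Soon paper: the temperature series below is synthetic, and the trend model is only a stand-in for a more elaborate procedure, so the numbers it prints are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual "global temperature anomalies", 1850-2007 (illustrative only):
# a small linear trend plus year-to-year noise.
years = np.arange(1850, 2008)
temps = 0.005 * (years - 1850) + rng.normal(0.0, 0.15, size=years.size)

def naive_forecast(history, horizon):
    # Persistence ("naive") model: every future year equals the last observed year.
    return np.full(horizon, history[-1])

def trend_forecast(history, horizon):
    # Stand-in comparison model: extrapolate a fitted linear trend.
    x = np.arange(history.size)
    slope, intercept = np.polyfit(x, history, 1)
    return intercept + slope * np.arange(history.size, history.size + horizon)

def rolling_mae(forecaster, series, horizon):
    # Rolling-origin evaluation: from each origin year, forecast `horizon` years
    # ahead and compare against the value actually observed in the target year.
    errors = []
    for origin in range(30, series.size - horizon):
        prediction = forecaster(series[:origin], horizon)
        errors.append(abs(prediction[-1] - series[origin + horizon - 1]))
    return float(np.mean(errors))

for h in (10, 50):
    print(h, "years ahead: naive MAE =", round(rolling_mae(naive_forecast, temps, h), 3),
          " trend MAE =", round(rolling_mae(trend_forecast, temps, h), 3))

The point of the exercise is only to show how cheaply such a benchmark can be run; the published test used the actual IPCC-cited temperature series and the IPCC projections themselves as the comparison.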
In contrast to the IPCC's central forecast that global mean temperatures will rise by 3°C over a century, the naïve model simply forecasts that temperatures next year and for each of 100 years into the future would remain the same as the last year's. The naïve model approach is confusing to non-forecasters who are aware that temperatures have always varied. Moreover, much has been made of the observation that the temperature series that the IPCC uses shows a broadly upward trend since 1850 and that this is coincident with increasing industrialization and associated increases in manmade carbon dioxide gas emissions. In order to test the naïve model, annual forecasts were made from one to 100 years into the future, starting with 1850's global average temperature as the forecast for the years 1851 to 1950. This process was repeated by updating for each year up through 2007. This produced 10,750 annual average temperature forecasts for all horizons. It was the first time that the IPCC's forecasting procedures had been subjected to a large-scale test of the accuracy of the forecasts that they produce. Over all the forecasts, the IPCC error was 7.7 times larger than the error from the naïve model. While the superiority of the naïve model was modest for one- to ten-year-ahead forecasts (where the IPCC error was 1.5 times larger), its superiority was enormous for the 91- to 100-year-ahead forecasts, where the IPCC error was 12.6 times larger. Is it proper to conduct validation tests? In many cases, such as the climate change situation, people claim that "Things have changed! We cannot use the past to forecast." While they may think that their situation is unique, there is no logic to this argument. The only way to forecast the future is by learning from the past. In fact, the warmers' claims are also based on their analyses of the past. Could one improve upon the naïve model? The naïve model violates some principles. For example, it violates the principle that one should use as long a time series as possible, because it bases all forecasts simply on the global average temperature for the single year just prior to making the forecasts. It also fails to combine forecasts from different reasonable methods. The authors planned to start simple with this self-funded project and then to obtain funding to undertake a more ambitious forecasting effort to ensure that all principles were followed. This would no doubt improve accuracy. However, the forecasts from the naïve model were very accurate. For example, the mean absolute error for the 108 fifty-year-ahead forecasts was only 0.24°C. It is difficult to see any economic value in reducing such a small forecast error. For further information contact J. Scott Armstrong (http://jscottarmstrong.com) or Kesten C. Green (http://kestencgreen.com/). From jonkc at bellsouth.net Fri Dec 11 17:38:24 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 11 Dec 2009 12:38:24 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <370107.70551.qm@web36504.mail.mud.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 10, 2009, Gordon Swobe wrote: > Searle rejects epiphenomenonalism -- the view that subjective mental events act only as "side-effects" and do not cause physical events. Searle thinks they do; that if you consciously will to raise your arm, and it rises, your conscious willing had something to do with the fact that it rose. This is the sort of thing that gives philosophy a bad name.
It's a completely empty argument, it's like debating whether pressure caused the balloon to pop or whether it popped because too many air molecules were hitting the inside of the balloon. > Do you mean to say that if something behaves exactly as if it has human intelligence, it must have strong AI? Certainly. > If so then we mean different things by strong AI. I don't use the term "strong AI" myself because if it has any meaning at all it means programming a soul. I don't believe in the soul, it's a useless concept. John K Clark From painlord2k at libero.it Fri Dec 11 18:17:33 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 11 Dec 2009 19:17:33 +0100 Subject: [ExI] Tolerance In-Reply-To: <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> <4B2258CF.9000308@libero.it> <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> Message-ID: <4B228CBD.5020208@libero.it> On 11/12/2009 18.05, Stefano Vaj wrote: > 2009/12/11 Mirco Romanato Why, things obviously happen to exhibit a perverse consistency, The "obvious" is the problem. It is obvious to you because you are trained to think so, and probably nearly all of your education (like mine) supports this, and experience (filtered by education) confirms it. For other fellow humans it is not so obvious. They believe in one or many capricious gods or spirits that must be pleased or bribed. Do major religions like Buddhism or Hinduism believe there is a set of immutable laws governing humans, gods, spirits and nature? > because > as a good atheist/neopagan/idealist/skeptic/whatever, I am inclined on > the contrary to believe that "natural laws" have nothing to do with > immutable decrees of an entity (be it God, or even "Mother Nature") > establishing how things must go, in more or less the same fashion the > human legislators try and regulate social affairs, but simply with our > way of understanding and describing how they actually do... The existence of God is not the problem; we could agree, for the sake of the discussion, that he doesn't exist. What I was trying to discuss is the effect of believing in a specific type of God: a god that sets laws, never changes them, and made them so they are understandable. In this, he sets a high standard for human legislators, as human laws always change and are not always understandable by other humans (and often not by the legislators themselves). It occurs to me that many legislators don't know the laws they enact and are surprised and outraged when someone asks them if they have read the text of a law before enacting it (as recent episodes in the US show - but surely Italy and other places are not so different). > So, while I think that adopting one view or the other is more of a > philosophical stance than a matter of fact, I am needless to say much > more at ease with the Greek (say, Eraklit or Democritus) than with the > biblical worldview (say, the Genesis or Saint Thomas). And I suspect > that modern science and epistemology, especially since the quantum > mechanics revolution, have an easier and more elegant coexistence with > the former. From my very limited understanding of QM, I think it is entirely compatible with Christian views.
Although I cannot know which atomic nucleus will be the next to decay naturally, I know the probability of this happening in a given time period, and that probability will not change. It could be interesting to ask some Odinist what their religion says about this. Mirco From p0stfuturist at yahoo.com Fri Dec 11 17:53:13 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 11 Dec 2009 09:53:13 -0800 (PST) Subject: [ExI] Atheism Message-ID: <96941.80966.qm@web59904.mail.ac4.yahoo.com> Atheism can be termed a de facto religion/faith. It does, for instance, include academic rituals. Those of you at colleges & universities know that nonscience academia is filled with rituals and tens-- or is it hundreds-- of thousands of intellectual-priests disguised as professors. But that is not to reject what atheists/agnostics here are writing. Religion is nothing more than escapism; albeit necessary, because the great masses are so dumb & dumber they couldn't raise families without faith and even churches; they are rebelling against the Darwinist food-chain by trading it for religious hierarchy. However, having written that, IMO commodified religious culture is no worse than commodified secular trash culture. Why is two hours in a dopey church worse than two hours watching a dopey film? Entertainment, religion (and religion is entertainment) are harmless now, dime-a-dozen. Porn isn't threatening anymore -- it is merely millions of bored apes rutting in cheap sets. And the really good news is that since no one ever goes broke underestimating taste the economy will pick up nicely. You all got it made in the shade. From jonkc at bellsouth.net Fri Dec 11 18:25:06 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 11 Dec 2009 13:25:06 -0500 Subject: [ExI] atheism In-Reply-To: <287486.21264.qm@web56804.mail.re3.yahoo.com> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> Message-ID: On Dec 11, 2009, at 3:41 AM, flemming wrote: > You can believe that there is no god, but you can not prove it. Therefore atheism is a religion based on belief. If you want to distance yourself from religion the right ism is agnosticism. I'll bet you are an atheist regarding Zeus and Thor and the Flying Spaghetti Monster; as Dawkins says, he just goes one god further. Agnostics make the logical error of assuming that if there is no evidence that something exists and no evidence that it does not then there is a 50% chance it's real. John K Clark From jonkc at bellsouth.net Fri Dec 11 18:29:21 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 11 Dec 2009 13:29:21 -0500 Subject: [ExI] Atheism In-Reply-To: <96941.80966.qm@web59904.mail.ac4.yahoo.com> References: <96941.80966.qm@web59904.mail.ac4.yahoo.com> Message-ID: On Dec 11, 2009, Post Futurist wrote: > Atheism can be termed a de facto religion/faith. It does, for instance, include academic rituals. For a word to be useful you need contrast; if everything has the Klognee property then it's not a useful concept. Can you please tell me something that is NOT a religion? John K Clark
From stefano.vaj at gmail.com Fri Dec 11 18:34:44 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 19:34:44 +0100 Subject: [ExI] Does caloric restriction work for humans? In-Reply-To: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> References: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> Message-ID: <580930c20912111034k53bd57c5x96bc087c20a72f7c@mail.gmail.com> 2009/12/11 Max More : > So, here's the article. I would like to hear your thoughts on it, pro and > con. If CR advocates have directly addressed all the points, I'd appreciate > a pointer. I have very anecdotal evidence in my family history of extreme longevity coupled with a chosen (or dictated) caloric restriction regime. I suspect however that, in very rough terms, the (limited) extension of one's life span so achievable may depend, at least in part, i) on the lifestyle caloric restriction imposes; ii) on the fact that one is living longer because one is living slower (in both a metaphoric and a metabolic sense). All in all, the question that immediately arises is: what's the point? Especially taking into account that while a few people adapt pretty easily (my grandmother, for instance), for many others what Dr. Atkins used to say may apply (quoting from memory): "I do not know whether caloric restriction substantially increases your chances of living 100+ years, but I can assure you that it will feel much longer...". Not in any positive sense, obviously. -- Stefano Vaj From jonkc at bellsouth.net Fri Dec 11 18:09:22 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 11 Dec 2009 13:09:22 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <802912.95517.qm@web36505.mail.mud.yahoo.com> References: <802912.95517.qm@web36505.mail.mud.yahoo.com> Message-ID: <3647C7BA-B503-4AB4-B86C-B9343C513A5B@bellsouth.net> On Dec 11, 2009, at 8:27 AM, Gordon Swobe wrote: > In a nutshell, we might speculate and hope that a functional analogue of the brain will have consciousness, but until we understand why biological brains have it, we will never know if anything else has it. You don't know how biological brains work and yet you think human beings are conscious, or at least you do when they are not asleep or dead. You make this distinction by observing their behavior. And I think it would be useful if philosophers took a freshman course in biology, because if consciousness is not a byproduct of intelligence, if it is not the feeling data has when it is being processed, then there is no way evolution could have produced it, and I know for a fact that it has at least once. > some philosophers have shown, for example, that we could construct a functional analogue of the brain out of beer cans and toilet paper. So what? > Pretty hard to imagine that contraption having anything like semantics But it's easy to imagine 3 pounds of grey goo having semantics? > No matter how you construct that brain-like contraption, you won't find anything inside it to explain semantics/intentionality. As I said before, in the history of the world the study of the concept of the soul has never produced one useful insight. > On the inside it will look just like any other contraption. In other words you will be unable to find a soul, not even if you look with an electron microscope. > Actually Leibniz first figured this out hundreds of years ago. The Identity of Indiscernibles supports my ideas, not yours; it says that if I exchange you with an exact copy of you NOTHING has changed.
John K Clark From jonkc at bellsouth.net Fri Dec 11 18:41:10 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 11 Dec 2009 13:41:10 -0500 Subject: [ExI] Tolerance. In-Reply-To: <20091211061741.83045.qmail@moulton.com> References: <20091211061741.83045.qmail@moulton.com> Message-ID: <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> On Dec 11, 2009, at 8:27 AM, Gordon Swobe wrote: > In a nutshell, we might speculate and hope that a functional analogue of the brain will have consciousness, but until we understand why biological brains have it, we will never know if anything else has it. You don't know how biological brains work and yet you think human beings are conscious, or at least you do when they are not asleep or dead. You make this distinction by observing their behavior. And I think it would be useful if philosophers took a freshman course in biology, because if consciousness is not a byproduct of intelligence, if it is not the feeling data has when it is being processed, then there is no way evolution could have produced it, and I know for a fact that it has at least once. > some philosophers have shown, for example, that we could construct a functional analogue of the brain out of beer cans and toilet paper. So what? > Pretty hard to imagine that contraption having anything like semantics But it's easy to imagine 3 pounds of grey goo having semantics? > No matter how you construct that brain-like contraption, you won't find anything inside it to explain semantics/intentionality. As I said before, in the history of the world the study of the concept of the soul has never produced one useful insight. > On the inside it will look just like any other contraption. In other words you will be unable to find a soul, not even if you look with an electron microscope. > Actually Leibniz first figured this out hundreds of years ago. The Identity of Indiscernibles supports my ideas, not yours; it says that if I exchange you with an exact copy of you NOTHING has changed. John K Clark From painlord2k at libero.it Fri Dec 11 18:42:23 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 11 Dec 2009 19:42:23 +0100 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> Message-ID: <4B22928F.2010000@libero.it> On 11/12/2009 19.25, John Clark wrote: > On Dec 11, 2009, at 3:41 AM, flemming wrote: > >> You can believe that there is no god, but you can not prove it. >> Therefore atheism is a religion based on belief. If you want to >> distance yourself from religion the right ism is agnosticism. >> > > I'll bet you are an atheist regarding Zeus and Thor and the Flying > Spaghetti Monster, as Dawkins says he just goes one god further. > Agnostics make the logical error of assuming that if there is no > evidence that something exists and no evidence that it does not then > there is a 50% chance it's real. Even if the chance of it being real is only 0.00000000000000000000000001%, that doesn't make it unreal, only improbable. Atheists make the error of believing that something very improbable is impossible. Given that they are not able to prove their claim, their claim is based on faith. I never see an atheist claim that the existence of god is improbable. I don't remember anyone arguing that it is impossible. They simply claim it doesn't exist. Without proof. So theirs is faith. Mirco p.s.
We could also talk about the problem that a god which is possible with only an infinitesimal probability would, given enough time, come to exist. From sjatkins at mac.com Fri Dec 11 18:43:46 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Dec 2009 10:43:46 -0800 Subject: [ExI] atheism In-Reply-To: <287486.21264.qm@web56804.mail.re3.yahoo.com> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> Message-ID: On Dec 11, 2009, at 12:41 AM, flemming wrote: > > To be an atheist is exactly just another religion. You can believe that there is no god, but you can not prove it. > > > Therefore atheism is a religion based on belief. If you want to distance yourself from religion the right ism is agnosticism. To be an agnostic is to declare that there is not enough data to settle the question if there is or is not a god. > Most natural scientists, if not religious, are agnostics, the late Carl Sagan is one example. > Flemming This is dumb. Someone asserts there are invisible pink unicorns all around us and that it is right and proper to worship them. I assert that I have no evidence of any such and thus do not believe in them and thus think such worship is bollocks. And you come along and say both parties are equally religious! Clearly you have no firm grasp on the meaning of the word "belief" or "religion". They are not the same, btw, else belief in anything, say gravity, would be "religion". Clearly stating disbelief that X is true is not the same as belief that X is true. Saying "well I can't really say whether there are invisible pink unicorns or not" is a cop-out, at best technically true since they are defined as being impossible to prove or disprove. But hopefully we have all grown beyond such sophomore BS rhetorical games. - samantha From painlord2k at libero.it Fri Dec 11 18:46:24 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 11 Dec 2009 19:46:24 +0100 Subject: [ExI] Nobel homeschooled until 9th grade Message-ID: <4B229380.5000502@libero.it> I tripped over this: http://homeschooling-network.com/NewsArticles/Default.aspx http://www.science.ca/scientists/scientistprofile.php?pID=129&pg=3 > Homeschooled Physics Nobel Prize Winner Without Dr. Boyle, there > would be no digital photography > > Homeschooling Swede The 2009 Nobel Prize Winners were announced > recently, and includes a homeschooled Physics Nobel Prize Winner: Dr. > Willard S. Boyle. He received the prize principally for the invention > of the Charged Coupled Device (CCD) that captures digital images used > in digital cameras and cellular/handheld phones, as well as numerous > other applications such as the Hubble Telescope. > > Dr. Boyle was born in 1924 in Nova Scotia, Canada, where he was > homeschooled by his mother up to ninth grade. He studied at Lower > Canada College in Montreal, and graduated at McGill University with > his Bachelor's degree. He credits his success to his stated number > one mentor: his mother who homeschooled him. After his Bachelor's > degree he received: an MS, and a PhD in Physics.
He was later the > executive director of Communications Sciences Division, Bell Labs in > New Jersey. He is described as: "Adventurous, clever, curious" on the > candian science website: www.science.ca > > He received the Nobel prize jointly with Dr. George E. Smith, > principally for inventing the Charged Coupled Device (CCD). These are > used today to take digital photographs, and can be found in hand-held > phones, digital cameras, as well as in telescopes such as the Hubble > Telescope. The MegaPixel number we know about on digital cameras and > phones, is the number of how many million pixels are on the CCD that > Dr. Boyle jointly invented. Without the CCD, digitial phootograhy > would not exist. > > He is famously quoted as saying: "Know how to judge when to persevere > and when to quit. If you?re going to do something, do it well. You > don?t have to be better than everyone else, but you ought to do your > personal best.? > > Dr. Boyle shows the major contribution that homeschooled people can > make in the world, even though we are few in number. Why not tell > your friends that their digital camera wouldn't haven't been invented > without the work of a homeschooled Physicist! Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.709 / Database dei virus: 270.14.103/2558 - Data di rilascio: 12/11/09 11:06:00 From stefano.vaj at gmail.com Fri Dec 11 18:50:35 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 19:50:35 +0100 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> Message-ID: <580930c20912111050w1ea97edwfcfad349018b41b@mail.gmail.com> 2009/12/11 John Clark > I'll bet you are a atheist regarding Zeus and Thor and the Flying Spaghetti > Monster, as Dawkins says he just goes one god further. Agnostics make the > logical error of assuming that if there is no evidence that something exists > and no evidence that it does not then there is a 50% chance its real. > Yes. I also think that a difference exists between belief and faith. Do I believe that my cat is sleeping right now? Yes, and to form such an opinion I need not a incontrovertible demonstration that the opposite is not true. This is just an assumption I make. Monotheistic relgions are about the idea that not only does God empirically exists in the same sense of you and me, and has certain features, but that you have a moral duty to believe it, and/or that all this can be demonstrated or otherwise recognised as "evident". Now, I think that on the contrary I may be entitled to believe that the christian God does not exist simply because I see no reasons to believe otherwise and because the concept has been innumerable times deconstructed as a cultural artifact rather than as a philosophical necessity. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjatkins at mac.com Fri Dec 11 18:51:02 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Dec 2009 10:51:02 -0800 Subject: [ExI] Tolerance In-Reply-To: <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> <4B2258CF.9000308@libero.it> <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> Message-ID: On Dec 11, 2009, at 9:05 AM, Stefano Vaj wrote: > 2009/12/11 Mirco Romanato > This, in change, limited their ability think in terms that we give for granted. > For example, the belief that unchanging natural laws exist and can be discovered was not their. This came with Christianity, where God is described as a creator that follows its own laws. > > Why, things obviously happen to exhibit a perverse consistency, because as a good atheist/neopagan/idealist/skeptic/whatever, I am inclined on the contrary to believe that "natural laws" have nothing to do with immutable decrees of an entity (be it God, or even "Mother Nature") establishing how things must go, in more or less the same fashion the human legislators try and regulate social affairs, but simply with our way of understanding and describing how they actually do... It is much more than an "inclination". There is no evidence whatsoever of this "Law Giver". To believe despite this lack is intellectually perverse and shows a quite corrupt epistemological structure. > > So, while I think that adopting one view or the other is more of a philosophical stance than a matter of fact, If you understand what it is "to know" then it is a lot more than a mere "philosophical stance" as that is usually construed. > I am needless to say much more at ease with the Greek (say, Eraklit or Democritus) than with the biblical worldview (say, the Genesis or Saint Thomas). And I suspect that modern science and epistemology, especially since the quantum mechanics revolution, have an easier and more elegant coexistence with the former. Saying it is a merely a matter of preference is giving much to much power to every demon haunted notion that ever arose in the human mind. It is a capitulation that is quite harmful. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Fri Dec 11 18:54:23 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Dec 2009 10:54:23 -0800 Subject: [ExI] Tolerance In-Reply-To: <580930c20912110911p26c601a3h1ff7d1ca733d374a@mail.gmail.com> References: <432003.90571.qm@web32002.mail.mud.yahoo.com> <580930c20912110911p26c601a3h1ff7d1ca733d374a@mail.gmail.com> Message-ID: <7F25143D-9404-402A-A62E-3498E10727E2@mac.com> On Dec 11, 2009, at 9:11 AM, Stefano Vaj wrote: > 2009/12/11 Ben Zaiboc > It's a position of taking nothing on faith. "I don't believe there are any gods" is a different thing to "I believe there are no gods". In the first, you are not stating a faith in something, in the second, you are. > > If you are a transhumanist, you should anyway rephrase it in "I believe there are no gods (yet)". ;-) What does that mean exactly? If you mean a Mind so powerful it can create an entire universe as we know it within itself I have no idea if such exists now or not. I have no evidence to believe that one does. I am fairly certain that such a Mind is possible however. 
> > There again, and trying to steer the thread more on subject, something which risks to make transhumanism impresentable in some quarters (e.g., academic posthumanism) is the idea that the future coming of gods is something to be taken for granted or inscribed in some cosmological necessity, rather than a possibility - and probably a possibility that would have to be actively pursued if it ever were to take place... I agree there are *much* easier ways to sell transhumanism than some high-falutin Cosmic Imperative. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Fri Dec 11 18:55:43 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 19:55:43 +0100 Subject: [ExI] Tolerance. In-Reply-To: <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> References: <20091211061741.83045.qmail@moulton.com> <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> Message-ID: <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> 2009/12/11 John Clark : > You don't know how biological brains work and yet you think human beings are > conscious, or at least you do when they are not asleep or dead. Yes. We should really abandon the idea that "conscience" may be anything that a phenomenical reality based on a pure projection of our own psychological statuses... And yet the essentialist view, which brings us into the territory of inescapable paradoxes as far as uploading, teleport, resurrection, copying, etc., are concerned keeps re-emerging and re-emerging even in our ranks. -- Stefano Vaj From p0stfuturist at yahoo.com Fri Dec 11 18:59:19 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 11 Dec 2009 10:59:19 -0800 (PST) Subject: [ExI] Atheism In-Reply-To: Message-ID: <893501.32272.qm@web59905.mail.ac4.yahoo.com> Extropy-chat is not a religion because in 2 decades it has stood the test of non-mysticism. Thank God for small favors. the future belongs to the strong-- of stomach --- On Fri, 12/11/09, John Clark wrote: From: John Clark Subject: Re: [ExI] Atheism To: "ExI chat list" Date: Friday, December 11, 2009, 1:29 PM On Dec 11, 2009, ?Post Futurist wrote: Atheism?can?be termed a de facto religion/faith. It does, for instance,?include academic rituals.? For a word to be useful you need contrast, if everything has the Klognee property then it's not a useful concept. Can you pleas tell me something that is NOT a religion? ?John K Clark -----Inline Attachment Follows----- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Fri Dec 11 19:00:17 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Dec 2009 11:00:17 -0800 Subject: [ExI] Tolerance. In-Reply-To: <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> References: <20091211061741.83045.qmail@moulton.com> <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> Message-ID: <43C01D4B-811D-4218-B6A6-54B782AF4540@mac.com> On Dec 11, 2009, at 10:55 AM, Stefano Vaj wrote: > 2009/12/11 John Clark : >> You don't know how biological brains work and yet you think human beings are >> conscious, or at least you do when they are not asleep or dead. > > Yes. 
We should really abandon the idea that "conscience" may be > anything that a phenomenical reality based on a pure projection of our > own psychological statuses... LOL. How does an unconscious being have "psychological statuses"? Oh, was that the joke? :) From stefano.vaj at gmail.com Fri Dec 11 19:04:06 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 11 Dec 2009 20:04:06 +0100 Subject: [ExI] Tolerance In-Reply-To: <4B228CBD.5020208@libero.it> References: <821320.4437.qm@web59914.mail.ac4.yahoo.com> <4B212EC7.7030702@satx.rr.com> <580930c20912110415pb4d90feuab4bcefe0e4b9449@mail.gmail.com> <4B2258CF.9000308@libero.it> <580930c20912110905t752bd92ek312bf714ce341db@mail.gmail.com> <4B228CBD.5020208@libero.it> Message-ID: <580930c20912111104x2bd38ce8p3af38cd42dab68fd@mail.gmail.com> 2009/12/11 Mirco Romanato : > Il 11/12/2009 18.05, Stefano Vaj ha scritto: >> Why, things obviously happen to exhibit a perverse consistency, > > The "obvious" is the problem. No, sorry, I explained myself poorly. What I actually meant is: as I am a good atheist/neopagan/idealist/skeptic/whatever I also have some problems with "natural laws". In other words: you may be right that monotheism introduced the concept, but my views remain consistent from your POV, because I fully agree myself that by rejecting the former I also call into discussion the latter, and do both things. And of course one can do good science even irrespective of the fact that he considers "natural laws" to be the fruit of divine decrees or a bad, albeit time-honoured, metaphor of something entirely different from any kind of "laws", which really refers more than anything to our own way to perceive the world. -- Stefano Vaj From sparge at gmail.com Fri Dec 11 19:07:56 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 11 Dec 2009 14:07:56 -0500 Subject: [ExI] Atheism In-Reply-To: <893501.32272.qm@web59905.mail.ac4.yahoo.com> References: <893501.32272.qm@web59905.mail.ac4.yahoo.com> Message-ID: 2009/12/11 Post Futurist > > Extropy-chat is not a religion because in 2 decades it has stood the test of non-mysticism. > Thank God for small favors. But atheism, which is nothing if not a rejection of mysticism, is a religion? :rolleyes: -Dave From p0stfuturist at yahoo.com Fri Dec 11 19:19:43 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 11 Dec 2009 11:19:43 -0800 (PST) Subject: [ExI] Atheism In-Reply-To: Message-ID: <268484.51957.qm@web59904.mail.ac4.yahoo.com> Why on God's green earth are religious schools worse than public skools? No, you probably cannot teach students to think, but you can teach them not to think. And that is it exactly. the future belongs to the strong-- of stomach -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Fri Dec 11 19:30:55 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Fri, 11 Dec 2009 20:30:55 +0100 Subject: [ExI] =?windows-1252?q?Forecasting_experts=92_simple_model_leaves?= =?windows-1252?q?_expensive_climate_models_cold?= In-Reply-To: <200912111758.nBBHwM3O017606@andromeda.ziaspace.com> References: <200912111758.nBBHwM3O017606@andromeda.ziaspace.com> Message-ID: <4902d9990912111130o673ce645qb54f7d58fdd1f523@mail.gmail.com> On Fri, Dec 11, 2009 at 6:58 PM, Max More wrote: > Another interesting piece from Armstrong and Green: > > Forecasting experts? 
simple model leaves expensive climate models cold

> A simple model was found to produce forecasts that are over seven times
> more accurate than forecasts from the procedures used by the United Nations
> Intergovernmental Panel on Climate Change (IPCC).
>
> This important finding is reported in an article titled "Validity of
> climate change forecasting for public policy decision making"
> (http://kestencgreen.com/gas-2009-validity.pdf) in the latest issue of the
> International Journal of Forecasting. It is the result of a collaboration
> among forecasters J. Scott Armstrong of the Wharton School, Kesten C. Green
> of Monash University, and climate scientist Willie Soon of the
> Harvard-Smithsonian Center for Astrophysics.

I don't understand this paper. They first develop a simple "temperature constant" model and compare it with observations. Well, they show right in the second graph, and also in the text, that their model has an average error of 0.4C when "forecasting" from 1850 to a time horizon of one hundred years, and, surprise, the error grows larger with the time horizon. That means that a temperature trend is at work. Eyeballing the GISS temperature graph, I derived an observed rate of 0.6 degrees from 1900 to 2000, so it's in the ballpark. The observed warming is thus compatible with the error in their model at the right timescales. In other words, temperature is not constant.

They go on comparing a hypothetical IPCC-like prediction (linear warming of 0.03C/year) from 1850 on, but changing the metric (which is sufficiently obscure that I didn't understand it in the few minutes I dedicated to the subject). They find little agreement between prediction and observations, conveniently forgetting that IPCC predictions are for the next century, when CO2 forcing will be substantially higher than in the 1800s.

And look at their conclusion: "The benchmark forecast is that the global mean temperature for each year for the rest of this century will be within 0.5 °C of the 2008 figure." I could say so just by looking at the GISTEMP graph and extrapolating a line for the next century! And this is already a more complex model than theirs, which is a flat line from 2008 on. What they show is simply that the last century's worth of global warming was on the order of 0.5C. Well, we already know that. So I don't understand what they are trying to demonstrate.

Alfio
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p0stfuturist at yahoo.com Fri Dec 11 19:06:55 2009
From: p0stfuturist at yahoo.com (Post Futurist)
Date: Fri, 11 Dec 2009 11:06:55 -0800 (PST)
Subject: Re: [ExI] Tolerance
In-Reply-To: <7F25143D-9404-402A-A62E-3498E10727E2@mac.com>
Message-ID: <102640.19486.qm@web59911.mail.ac4.yahoo.com>

Easy? Who said it will be easy? Easy is prayer and meditation ;) I am very cynical, but cynical doesn't always mean wrong. Perhaps philosophy is basically gobbledygook. Is economics a science? It doesn't appear to be. I'm going to concentrate on 'culture', however much a construct it is. Frankly, the worst religion today seems better than any politics-- only thing worse than a politician is an attorney.

--- On Fri, 12/11/09, Samantha Atkins wrote:

From: Samantha Atkins
Subject: Re: [ExI] Tolerance
To: "ExI chat list"
Date: Friday, December 11, 2009, 1:54 PM

On Dec 11, 2009, at 9:11 AM, Stefano Vaj wrote:

2009/12/11 Ben Zaiboc

It's a position of taking nothing on faith. "I don't believe there are any gods" is a different thing to "I believe there are no gods".
?In the first, you are not stating a faith in something, in the second, you are. If you are a transhumanist, you should anyway rephrase it in "I believe there are no gods (yet)". ;-) What does that mean exactly? ?If you mean a Mind so powerful it can create an entire universe as we know it within itself I have no idea if such exists now or not. ? I have no evidence to believe that one does. ? ?I am fairly certain that such a Mind is possible however. There again, and trying to steer the thread more on subject, something which risks to make transhumanism impresentable in some quarters (e.g., academic posthumanism) is the idea that the future coming of gods is something to be taken for granted or inscribed in some cosmological necessity, rather than a possibility - and probably a possibility that would have to be actively pursued if it ever were to take place... I agree there are *much* easier ways to sell transhumanism than some high-falutin Cosmic Imperative. - samantha -----Inline Attachment Follows----- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Fri Dec 11 20:21:16 2009 From: max at maxmore.com (Max More) Date: Fri, 11 Dec 2009 14:21:16 -0600 Subject: [ExI] Structured analogies analysis of climate alarmism Message-ID: <200912112021.nBBKLNel013184@andromeda.ziaspace.com> Another Armstrong & Green paper: History shows manmade global warming alarm to be false ? and that harmful policies will persist http://kestencgreen.com/green&armstrong-agw-analogies.pdf From nanite1018 at gmail.com Fri Dec 11 20:33:26 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Fri, 11 Dec 2009 15:33:26 -0500 Subject: [ExI] Does caloric restriction work for humans? In-Reply-To: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> References: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> Message-ID: <441A73BF-5381-4E5E-950B-089DC4F8E8F5@GMAIL.COM> > Calorie restrictive eating for longer life? The story we didn't hear > in the news > http://junkfoodscience.blogspot.com/2009/07/calorie-restrictive-eating-for-longer.html This was a very interesting article. I had been considering trying out CR, but given this news, I don't see the point. Being healthy, not calorie restricted, is a much more important goal in order to extend life. Not that I think it will be a problem for me anyway, since I'm 19. I think I'm young enough I'll probably get an indefinite lifespan barring accidents. Joshua Job nanite1018 at gmail.com From nanite1018 at gmail.com Fri Dec 11 20:51:46 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Fri, 11 Dec 2009 15:51:46 -0500 Subject: [ExI] Structured analogies analysis of climate alarmism In-Reply-To: <200912112021.nBBKLNel013184@andromeda.ziaspace.com> References: <200912112021.nBBKLNel013184@andromeda.ziaspace.com> Message-ID: <4F24E86C-6A6F-4BBA-8CAE-D4D242E8C5DB@GMAIL.COM> > Another Armstrong & Green paper: > > History shows manmade global warming alarm to be false ? and that > harmful policies will persist > http://kestencgreen.com/green&armstrong-agw-analogies.pdf Very interesting article. I am not convinced that man-made global warming does not exist, but I acknowledge it is not proven. More importantly, I have no idea whether the global warming predicted by the IPCC would be bad. 
Obviously it might be (if snow melts become far scarcer because of it, then there may be water shortages and resource conflicts in Asia, for instance), but longer growing seasons might be beneficial. If people feel it will be a problem, then they should fix it. Buy "carbon credits" or whatever, and invest in companies along the lines of the fictional Earth, Inc. from Stephen Baxter's novel "Transcendent" (they use genetically engineered bacteria to clean up oil spills and dump sites, use gen-en algae to absorb CO2 and build coral reefs, etc. all for a profit). Such companies exist now, and I see no reason why there should be government action when private action will do just as well. I enjoyed that they brought up the case of DDT. Malaria was a disease which we were in the process of wiping off the face of the Earth in the 60s, and which now kills slews of people in the developing world. The costs of banning it clearly outweigh what small "benefits" might have been gained. My favorite part of this is that the very same people who cried out about DDT now cry about the spread of malaria from global warming! Haha... *cough*, oh I'm sorry, I'm choking on irony. Joshua Job nanite1018 at gmail.com From p0stfuturist at yahoo.com Fri Dec 11 21:46:09 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 11 Dec 2009 13:46:09 -0800 (PST) Subject: [ExI] Atheism Message-ID: <552909.63393.qm@web59907.mail.ac4.yahoo.com> Someone else wrote that atheism is a religion. Atheism/agnosticism is rational-- while religion is not. But since most men have acted like apes, religion/faith was needed to prevent them from being even more apelike than they would have been if religion hadn't been so prevalent. Since religion has existed for thousands of years it is a long time before rationality on earth-- only Marxists now think otherwise. But if you could colonize the Moon with atheists it would be a good deal. In what way is the slop ladled out in churches worse today than on TV, in skools, etc? >But atheism, which is nothing if not a rejection of mysticism, is a religion? rolleyes: Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Dec 11 22:18:15 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 11 Dec 2009 14:18:15 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <3647C7BA-B503-4AB4-B86C-B9343C513A5B@bellsouth.net> Message-ID: <970258.26185.qm@web36505.mail.mud.yahoo.com> --- On Fri, 12/11/09, John Clark wrote: > You don't know how biological brains work and yet > you think human beings are conscious, or at least you do > when they are not asleep or dead. You make this distinction > by observing their behavior. No, I simply notice that my brain has this feature called consciousness. My consciousness seems very much part of the physical world; it goes away temporarily if I get whacked in the head by a baseball bat, or fall asleep and don't dream, or if any number of other things happen. I notice that other people have brains too. I can only infer those other brains have consciousness, but if I don't make that inference then I have fallen into solipsism. 
> And I think it would be useful if philosophers took a freshman course > in biology because if consciousness is not a byproduct of intelligence, > if it is not the feeling data has when it is being processed I don't know about "byproduct of intelligence" (depends on what you mean by the word byproduct -- see my conversation with Stathis) but clearly consciousness does indeed have something to do with "the feeling data has when it is being processed". So I don't know with whom you think you have a disagreement. Certainly not Searle or me. > As I said before in the history of the world the > study of the concept of the soul has never produced one > useful insight. What soul? If you hope to refute Searle's position then you need first to understand him. Like most naturalists, he respects science and the scientific method. He makes no mystical claims about consciousness except in your misinformed imagination. > The Identity of Indiscernibles supports my ideas not > yours, it says that if I exchange you with an exact copy of > you NOTHING has changed. Nobody here has claimed that a clone of a human brain does not have the same properties as the original. -gts From gts_2000 at yahoo.com Fri Dec 11 23:30:34 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 11 Dec 2009 15:30:34 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <862990.85250.qm@web36504.mail.mud.yahoo.com> --- On Fri, 12/11/09, John Clark wrote: > I don't use the term "strong AI" myself because if it has any meaning > at all it means programing a soul. If you had in mind any beliefs about strong AI, programming and souls as you wrote that sentence then you had at that moment what I mean by strong AI. If you have something else in mind now as you read this sentence then you still have it. -gts From stathisp at gmail.com Sat Dec 12 00:31:45 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 12 Dec 2009 11:31:45 +1100 Subject: [ExI] Nobel homeschooled until 9th grade In-Reply-To: <4B229380.5000502@libero.it> References: <4B229380.5000502@libero.it> Message-ID: But the mother of the person who wrote the article for homeschooling-network.com didn't teach him or her how to spell "bachelor". 2009/12/12 Mirco Romanato : > I tripped over this: > http://homeschooling-network.com/NewsArticles/Default.aspx > http://www.science.ca/scientists/scientistprofile.php?pID=129&pg=3 > >> Homeschooled Physics Nobel Prize Winner Without Dr. Boyle, there >> would be no digital photography >> >> Homeschooling Swede The 2009 Nobel Prize Winners were announced >> recently, and includes a homeschooled Physics Nobel Prize Winner: Dr. >> Willard S. Boyle. He received the prize principally for the invention >> of the Charged Coupled Device (CCD) that captures digital images used >> in digital cameras and cellular/handheld phones, as well as numerous >> other applications such as the Hubble Telescope. >> >> Dr. Boyle was born in 1924 In Nova Scotia Canada, where he was >> homescholed by his mother up to ninth grade. He studied at Lower >> Canada College in Montreal, and graduated at McGill University with >> his Batchelor's degree. He credits his success to his stated number >> one mentor: his mother who homeschooled him. After his Batchelor's >> degree he received: an MS, and a PhD in Physics. He was later the >> executive director of Communications Sciences Division, Bell Labs in >> New Jersey. 
He is described as: "Adventurous, clever, curious" on the >> candian science website: www.science.ca >> >> He received the Nobel prize jointly with Dr. George E. Smith, >> principally for inventing the Charged Coupled Device (CCD). These are >> used today to take digital photographs, and can be found in hand-held >> phones, digital cameras, as well as in telescopes such as the Hubble >> Telescope. The MegaPixel number we know about on digital cameras and >> phones, is the number of how many million pixels are on the CCD that >> Dr. Boyle jointly invented. Without the CCD, digitial phootograhy >> would not exist. >> >> He is famously quoted as saying: "Know how to judge when to persevere >> and when to quit. If you?re going to do something, do it well. You >> don?t have to be better than everyone else, but you ought to do your >> personal best.? >> >> Dr. Boyle shows the major contribution that homeschooled people can >> make in the world, even though we are few in number. Why not tell >> your friends that their digital camera wouldn't haven't been invented >> without the work of a homeschooled Physicist! > > Mirco > > Nessun virus nel messaggio in uscita. > Controllato da AVG - www.avg.com > Versione: 9.0.709 / Database dei virus: 270.14.103/2558 - ?Data di rilascio: > 12/11/09 11:06:00 > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Stathis Papaioannou From thespike at satx.rr.com Sat Dec 12 00:38:28 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 11 Dec 2009 18:38:28 -0600 Subject: [ExI] Nobel homeschooled until 9th grade In-Reply-To: References: <4B229380.5000502@libero.it> Message-ID: <4B22E604.5080902@satx.rr.com> On 12/11/2009 6:31 PM, Stathis Papaioannou wrote: > But the mother of the person who wrote the article for > homeschooling-network.com didn't teach him or her how to spell > "bachelor". "homescholed" is fun too... From stathisp at gmail.com Sat Dec 12 02:33:48 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 12 Dec 2009 13:33:48 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <802912.95517.qm@web36505.mail.mud.yahoo.com> References: <802912.95517.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/12 Gordon Swobe : >> No, I mean that if you replace the brain a neuron at a time >> by electronic analogues that function the same, i.e. same >> output for same input so that the neurons yet to be replaced >> respond in the same way, then the resulting brain will not only >> display the same behaviour but will also have the same consciousness. > > How will you know this? Searle believes that consciousness is a special property of brains, and that although it may be possible (if technically difficult) to simulate the behaviour of a neuron electronically, the artificial neuron will lack qualia. Suppose we replace some of the neurons in your visual cortex. The artificial neurons will behave just the same as the original neurons, sending the appropriate signals to their neighbours, which in turn send signals to their neighbours so that all of the biological parts of your brain behave the same as if the replacement had not been made. The experimenters confirm this by taking readings from neurons in different parts of your brain before and after the replacement, and finding that they are unchanged. 
Your motor cortex will therefore send signals to your vocal cords and you will declare that the page of writing put in front of you looks exactly the same as it did before, and to prove it you correctly read out what it says. However, if Searle is right all is not well, because you have just gone blind! Your vision has undergone zombification: you behave as if you can see, but you lack visual qualia. So either you are blind without noticing that you are blind, which makes a mockery of the idea of consciousness, or you do notice that you are blind but can't do anything about it, locked into behaving normally while struggling in vain to communicate your terror. The latter nightmarish scenario would mean that you are doing your thinking with your immaterial soul, since your brain activity would be the same as if nothing unusual had happened. -- Stathis Papaioannou From msd001 at gmail.com Sat Dec 12 04:28:43 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 11 Dec 2009 23:28:43 -0500 Subject: [ExI] atheism In-Reply-To: <200912111606.nBBG6sWu022978@andromeda.ziaspace.com> References: <200912111606.nBBG6sWu022978@andromeda.ziaspace.com> Message-ID: <62c14240912112028t3467ca99sdb2702c1c28e77f4@mail.gmail.com> On Fri, Dec 11, 2009 at 11:06 AM, Max More wrote: > > But, whichever way you take the meaning of "agnostic" or "atheist", atheism > is clearly *not* a religion. You can't have religion without a set of > beliefs (not lack-of-beliefs) and some accompanying markers (typically > rituals and the like). > > This should be 101 on the Extropy-Chat list. We have plenty of genuinely > controversial and difficult issues to discuss. Can we now get back to them? > If we're attacking the meaning of words and their use, can I point out that religion and spirituality are different ideas too? Inasmuch as religion is a group behavior of people professing to feel the same kind of spirituality, I would claim that devout atheists (an observation of the group) seem compelled to defend the inherently anti-religious nature of their core principle. I find it to be a boring topic. I understand Max to be making a different point, but arriving at the same conclusion: it really doesn't matter. For the only commonality of a population of people that they do not share a single idea that is believed those outside their group do share is not really much of an identity. As an indistinct label with almost no expressive power to convey a specific import, why the obsession with professing "atheism" as anything at all? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Sat Dec 12 04:59:56 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 11 Dec 2009 23:59:56 -0500 Subject: [ExI] Atheism In-Reply-To: <552909.63393.qm@web59907.mail.ac4.yahoo.com> References: <552909.63393.qm@web59907.mail.ac4.yahoo.com> Message-ID: 2009/12/11 Post Futurist > > In what way is the slop ladled out in churches worse today than on TV, in skools, etc? What's wrong with church teachings is that they're based on mysticism, faith, dogma, fear, and obedience, not critical thinking, open mindedness, and rationality. TV is entertainment, education, religion, ... everything anyone does. Some of it is good and healthy, some of it is just fun, some is a waste of time, and some of it is actively unhealthy. Schools--at least those not run by religious entities--are generally good, focusing on the rational. 
Raising children to believe in a religion, starting the indoctrination at an age at which they haven't developed the ability to evaluate what they're being taught, amounts to child abuse. The physical and intellectual effort that has been wasted on teaching, learning, and observing religious practices could have been put to much better use. I'll grant that many great works of art have been inspired by religion, but I think most of those talents would have been expressed equally well on non-religious subjects. But the great minds that were wasted pursuing idiotic theological problems could have solved real, major problems. I think that's tragic. -Dave From pharos at gmail.com Sat Dec 12 08:25:54 2009 From: pharos at gmail.com (BillK) Date: Sat, 12 Dec 2009 08:25:54 +0000 Subject: [ExI] atheism In-Reply-To: <62c14240912112028t3467ca99sdb2702c1c28e77f4@mail.gmail.com> References: <200912111606.nBBG6sWu022978@andromeda.ziaspace.com> <62c14240912112028t3467ca99sdb2702c1c28e77f4@mail.gmail.com> Message-ID: On 12/12/09, Mike Dougherty wrote: > For the only commonality of a population of people that they do not > share a single idea that is believed those outside their group do share is > not really much of an identity. As an indistinct label with almost no > expressive power to convey a specific import, why the obsession with > professing "atheism" as anything at all? > > Because it is important to tell people that you are part of the group that doesn't follow a soccer team, have no interest in soccer and consider it to be a complete waste of time. This may help those people who are soccer fanatics to reconsider the time and money they spend, if they realize that there is a life outside of soccer. BillK From femmechakra at yahoo.ca Sat Dec 12 10:14:01 2009 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sat, 12 Dec 2009 02:14:01 -0800 (PST) Subject: [ExI] Tolerance In-Reply-To: <102640.19486.qm@web59911.mail.ac4.yahoo.com> Message-ID: <675540.43766.qm@web110405.mail.gq1.yahoo.com> This is tolerance..lol http://www.youtube.com/watch?v=C-u5WLJ9Yk4 --- On Fri, 12/11/09, Post Futurist wrote: > From: Post Futurist > Subject: Re: [ExI] Tolerance > To: "ExI chat list" > Received: Friday, December 11, 2009, 2:06 PM > Easy? Who said it will be > easy? Easy is prayer nd meditation ;) > I am very cynical, but cynical doesn't always mean > wrong. Perhaps philosophy is basically. gobbledygook. Is > economics a science? doesn't appear to be. I'm going > to concentrate on 'culture', however a construct it > is. > Frankly, the worst religion today seems better than > any politics-- only thing worse than a politician is an > attorney. > ? > ? > > --- On Fri, 12/11/09, Samantha Atkins > wrote: > > > From: Samantha Atkins > Subject: Re: [ExI] Tolerance > To: "ExI chat list" > > Date: Friday, December 11, 2009, 1:54 PM > > > > > > On Dec 11, 2009, at 9:11 AM, Stefano Vaj > wrote: > > > 2009/12/11 Ben Zaiboc > > It's a position of taking nothing on > faith. ?"I don't believe there are any > gods" is a different thing to "I believe there are > no gods". ?In the first, you are not stating a > faith in something, in the second, you are. > > If you are a transhumanist, you should anyway rephrase it > in "I believe there are no gods (yet)". ;-) > > > What does that mean exactly? ?If you mean a Mind > so powerful it can create an entire universe as we know it > within itself I have no idea if such exists now or not. > ? I have no evidence to believe that one does. ? 
> ?I am fairly certain that such a Mind is possible > however. > > > > There again, and trying to steer the thread more on > subject, something which risks to make transhumanism > impresentable in some quarters (e.g., academic posthumanism) > is the idea that the future coming of gods is something to > be taken for granted or inscribed in some cosmological > necessity, rather than a possibility - and probably a > possibility that would have to be actively pursued if it > ever were to take place... > > > I agree there are *much* easier ways to sell > transhumanism than some high-falutin Cosmic > Imperative. > > > - samantha > > > -----Inline Attachment Follows----- > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > -----Inline Attachment Follows----- > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > __________________________________________________________________ Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From eugen at leitl.org Sat Dec 12 11:06:53 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 12 Dec 2009 12:06:53 +0100 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> Message-ID: <20091212110653.GH17686@leitl.org> On Fri, Dec 11, 2009 at 10:43:46AM -0800, Samantha Atkins wrote: > Saying "well I can't really say whether there are invisible pink unicorns > or not" is a cop-out, at best technically true since they are defined as > being impossible to prove or disprove. But hopefully we have all grown > beyond such sophomore BS rhetorical games. Some Sphex wasps drop a paralyzed insect near the opening of the nest. Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. During the wasp's inspection of the nest an experimenter can move the prey a few inches away from the opening of the nest. When the Sphex emerges from the nest ready to drag in the prey, it finds the prey missing. The Sphex quickly locates the moved prey, but now its behavioral "program" has been reset. After dragging the prey back to the opening of the nest, once again the Sphex is compelled to inspect the nest, so the prey is again dropped and left outside during another stereotypical inspection of the nest. This iteration can be repeated again and again, with the Sphex never seeming to notice what is going on, never able to escape from its programmed sequence of behaviors. Dennett's argument quotes an account of Sphex behavior from Wooldridge's Machinery of the Brain (1963). Douglas Hofstadter and Daniel Dennett have used this mechanistic behavior as an example of how seemingly thoughtful behavior can actually be quite mindless, the opposite of free will (or, as Hofstadter described it, antisphexishness). From eugen at leitl.org Sat Dec 12 11:19:45 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 12 Dec 2009 12:19:45 +0100 Subject: [ExI] Does caloric restriction work for humans? 
In-Reply-To: <441A73BF-5381-4E5E-950B-089DC4F8E8F5@GMAIL.COM>
References: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> <441A73BF-5381-4E5E-950B-089DC4F8E8F5@GMAIL.COM>
Message-ID: <20091212111945.GI17686@leitl.org>

On Fri, Dec 11, 2009 at 03:33:26PM -0500, JOSHUA JOB wrote:

> Not that I think it will be a problem for me anyway, since I'm 19. I
> think I'm young enough I'll probably get an indefinite lifespan
> barring accidents.

If you're really really lucky you will get a decent cryopreservation. I would not count on it.

-- Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From eugen at leitl.org Sat Dec 12 11:47:31 2009
From: eugen at leitl.org (Eugen Leitl)
Date: Sat, 12 Dec 2009 12:47:31 +0100
Subject: Re: [ExI] Nobel homeschooled until 9th grade
In-Reply-To: <4B22E604.5080902@satx.rr.com>
References: <4B229380.5000502@libero.it> <4B22E604.5080902@satx.rr.com>
Message-ID: <20091212114731.GQ17686@leitl.org>

On Fri, Dec 11, 2009 at 06:38:28PM -0600, Damien Broderick wrote:

> "homescholed" is fun too...

They probably meant homeshoaled.

-- Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From avantguardian2020 at yahoo.com Sat Dec 12 12:52:04 2009
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Sat, 12 Dec 2009 04:52:04 -0800 (PST)
Subject: Re: [ExI] Wernicke's aphasia and the CRA.
In-Reply-To:
References: <370107.70551.qm@web36504.mail.mud.yahoo.com>
Message-ID: <242821.40742.qm@web65614.mail.ac4.yahoo.com>

----- Original Message ----
> From: Stathis Papaioannou
> To: gordon.swobe at yahoo.com; ExI chat list
> Sent: Thu, December 10, 2009 9:10:43 PM
> Subject: Re: [ExI] Wernicke's aphasia and the CRA.

> No, I mean that if you replace the brain a neuron at a time by
> electronic analogues that function the same, i.e. same output for same
> input so that the neurons yet to be replaced respond in the same way,
> then the resulting brain will not only display the same behaviour but
> will also have the same consciousness. Searle considers the neural
> replacement scenario and declares that the brain will behave the same
> outwardly but will have a different consciousness. The aforementioned
> paper by Chalmers shows why this is impossible.

I don't think we understand the functioning of neurons enough to buy either Searle's or Chalmers' argument. Your neuron by neuron brain replacement assumes that neurons are functionally degenerate. That one neuron is equivalent to any other. By the logic of this thought experiment, if you were to replace your neurons one by one with Gordon's neurons, at the end you would still be you. But you could just as easily become Gordon or at least Gordon-esque. At least that's what I take from the neuroscience experiment described in this Time article:

http://www.time.com/time/magazine/article/0,9171,986057,00.html

Of course how much of Stathis, or Gordon for that matter, is a learned trait as opposed to a hardwired one is a matter for debate. But still, it gives you food for thought.

Stuart LaForge

"Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten."
- Neil Armstrong From jonkc at bellsouth.net Sat Dec 12 13:27:13 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 08:27:13 -0500 Subject: [ExI] atheism In-Reply-To: <4B22928F.2010000@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> Message-ID: <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> On Dec 11, 2009, at 1:42 PM, Mirco Romanato wrote: > Even if the chance to be real is only 0.00000000000000000000000001% > It don't make it unreal, only improbable. Atheists make the error to believe that something very improbable is impossible. Given they are not able to prove their claim, their claim is based on faith. I never see an atheist claim that the existence of god is improbable. You can't, or at least shouldn't, be absolutely certain about anything (I think) therefore if it is inappropriate to use the word "atheist" it is also inappropriate to utter any simple declarative sentence about anything because there is a chance, however unlikely, that you could be wrong. > I never see an atheist claim that the existence of god is improbable. Then you lead a very sheltered life. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 13:33:37 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 08:33:37 -0500 Subject: [ExI] Tolerance. In-Reply-To: <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> References: <20091211061741.83045.qmail@moulton.com> <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> Message-ID: <61EF07F3-9858-4C39-83A0-6D209AC258CA@bellsouth.net> On Dec 11, 2009, Stefano Vaj wrote: > And yet the essentialist view, which brings us into the territory of > inescapable paradoxes as far as uploading, teleport, resurrection, > copying, etc., are concerned keeps re-emerging and re-emerging even in > our ranks. I am unaware of any such paradoxes, it all seems logical to me. If you keep running into paradoxes then you basic axioms must be wrong. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Sat Dec 12 14:13:44 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sat, 12 Dec 2009 15:13:44 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <242821.40742.qm@web65614.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> Message-ID: <4902d9990912120613s66e6b11bq547f6f5722e54790@mail.gmail.com> On Sat, Dec 12, 2009 at 1:52 PM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > ----- Original Message ---- > > From: Stathis Papaioannou > > To: gordon.swobe at yahoo.com; ExI chat list < > extropy-chat at lists.extropy.org> > > Sent: Thu, December 10, 2009 9:10:43 PM > > Subject: Re: [ExI] Wernicke's aphasia and the CRA. > > > No, I mean that if you replace the brain a neuron at a time by > > electronic analogues that function the same, i.e. same output for same > > input so that the neurons yet to be replaced respond in the same way, > > then the resulting brain will not only display the same behaviour but > > will also have the same consciousness. Searle considers the neural > > replacement scenario and declares that the brain will behave the same > > outwardly but will have a different consciousness. The aforementioned > > paper by Chalmers shows why this is impossible. 
> > I don't think we understand the functioning of neurons enough to buy either > Searle or Chalmer's argument. Your neuron by neuron brain replacement > assumes that neurons are functionally degenerate. That one neuron > is equivalent to any other. I interpret the replacement as using a different electronic equivalent for each neuron, so that their specific functions (if any) will be preserved. Understanding the neurons' inner working is not needed if you can exactly replace their input/output functions (not an easy feat anyway...) Whether consciousness resides inside single neurons is another matter. In that case, inner workings will need to be replicated too. Searle's arguments remind me of good old-fashioned dualism: there is something in our brain cells that can't be replicated in a mechanical or electronic equivalent. But without knowing what this "something" is, that's just an article of faith. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 14:16:28 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 09:16:28 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <970258.26185.qm@web36505.mail.mud.yahoo.com> References: <970258.26185.qm@web36505.mail.mud.yahoo.com> Message-ID: On Dec 11, 2009, at 5:18 PM, Gordon Swobe wrote: >> You don't know how biological brains work and yet >> you think human beings are conscious, or at least you do >> when they are not asleep or dead. You make this distinction >> by observing their behavior. > > No, I simply notice that my brain has this feature called consciousness. I'm not talking about your brain, how did you notice that MY brain has this feature called consciousness? Children know nothing about brains but they still think other people are conscious except when they are asleep or dead or otherwise ACT like they aren't. We do not think other people are conscious because of our knowledge of biology, the very idea that brains are important is a modern consept, but the idea that other people are often conscious is not. The Egyptians carefully preserved every part of the body EXCEPT the brain. Aristotle thought the brain was a minor organ that just cooled the blood and the heart was the heart of our being. But in spite of all this, solipsism has never been very popular in any age. > So I don't know with whom you think you have a disagreement. Certainly not Searle or me. Searle in his Chinese room thinks he has demonstrated that intelligence without consciousness is possible, but I am conscious and you probably are too. How did I come to be? It can't be evolution because it only sees intelligence, it only sees actions and Searle thinks intelligent actions doesn't need consciousness. So either Darwin was wrong or Searle was a fool. I don't think Darwin was wrong. > He makes no mystical claims about consciousness except in your misinformed imagination. Its like the air, Searle's mysticism is so ubiquitous that he doesn't even notice it. He assumes his Chinese room is not conscious and expects us to make the same assumption. > Nobody here has claimed that a clone of a human brain does not have the same properties as the original. Bullshit. Almost everyone around here thinks an exact copy of you would have vastly different properties, one would be you and one would not. I cannot think of any property more important. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gts_2000 at yahoo.com Sat Dec 12 14:32:35 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 06:32:35 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <151924.66474.qm@web36501.mail.mud.yahoo.com> --- On Sat, 12/12/09, John Clark wrote: > I'm not talking about your brain, how did > you notice that MY brain has this feature called > consciousness?? I answered that question about other minds, John, and apparently you ignored my answer. Once again: I have a choice either to 1) infer that because my brain has consciousness, others also have it, or 2) consign myself to a dead end solipsistic philosophy in which you and everyone I know have the mental life of vegetables. I choose 1) by reductio ad absurdum. > The Egyptians carefully preserved every part of the body EXCEPT the > brain. Well then perhaps you should take your argument to the Egyptians. -gts From gts_2000 at yahoo.com Sat Dec 12 14:16:57 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 06:16:57 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. Message-ID: <213414.18992.qm@web36507.mail.mud.yahoo.com> --- On Fri, 12/11/09, Stathis Papaioannou wrote: > Suppose we replace some of the neurons in your visual > cortex. The artificial neurons will behave just the same as the > original neurons, sending the appropriate signals to their neighbours, > which in turn send signals to their neighbours so that all of the > biological parts of your brain behave the same as if the replacement > had not been made. So goes the functionalist theory, just one of many in the philosophy of mind, but one which you seem here to take for granted. On that theory it matters only that the artificial brain and its parts "act" like a real brain -- that the artificial neurons, as you say, "behave" like the originals. But you'll notice first that functionalism amounts to behaviorism at the level of the neuron -- it ignores the subjective first person ontology of mental states; and second, it relies on the shaky assumption that substance does not matter -- only function matters. If only function matters then we could, as others have pointed out, construct a giant artificial brain out of beer cans and toilet paper. Do you think such a monstrosity would have semantics? -gts From stathisp at gmail.com Sat Dec 12 14:46:53 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 01:46:53 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <242821.40742.qm@web65614.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> Message-ID: 2009/12/12 The Avantguardian : >> No, I mean that if you replace the brain a neuron at a time by >> electronic analogues that function the same, i.e. same output for same >> input so that the neurons yet to be replaced respond in the same way, >> then the resulting brain will not only display the same behaviour but >> will also have the same consciousness. Searle considers the neural >> replacement scenario and declares that the brain will behave the same >> outwardly but will have a different consciousness. The aforementioned >> paper by Chalmers shows why this is impossible. > > I don't think we understand the functioning of neurons enough to buy either Searle or Chalmer's argument. Your neuron by neuron brain replacement assumes that neurons are functionally degenerate. That one neuron is?equivalent to?any other. 
By the logic of this thought experiment, if you were to replace your neurons one by one with Gordon's neurons, at the end you would still be you. But you could just as easily become Gordon or at least Gordon-esque. At least that's what I take from the neuroscience experiment described in this Time article: > > http://www.time.com/time/magazine/article/0,9171,986057,00.html > > Of course how much of Stathis,?or Gordon for that matter,?is a learned trait?as opposed to a hardwired one?is a matter for debate. But still, it gives you food for thought. The replacement would have to involve artificial neurons that are *functionally equivalent*. Quail neurons are apparently not functionally equivalent replacements for chicken neurons, going on the evidence in the article you cited, and it wouldn't be surprising if one human's neurons are not functionally equivalent to another's either. I've never accepted simplistic notions of mind uploading that hold that all the information needed is a map of the neural connections. To properly model a brain you may need go down all the way to the molecular level, which would of course require extremely fine scanning techniques and a fantastic amount of computing power. Nevertheless, unless there is something fundamentally non-computable in the brain, a computer model should be possible, and this is sufficient to make the case for functionalism. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Dec 12 15:03:36 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 07:03:36 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <633777.83284.qm@web36501.mail.mud.yahoo.com> --- On Sat, 12/12/09, Stathis Papaioannou wrote: > The replacement would have to involve artificial neurons > that are *functionally equivalent*. I.e., functionalism. > To properly model a brain you may need go down all the way to > the molecular level, which would of course require... Now here you and Searle may almost agree. To create an artificial brain modeled "down all the way to the molecular level", we would need essentially to create a real brain. -gts From jonkc at bellsouth.net Sat Dec 12 15:12:32 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 10:12:32 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <213414.18992.qm@web36507.mail.mud.yahoo.com> References: <213414.18992.qm@web36507.mail.mud.yahoo.com> Message-ID: <30E7D0EB-A544-484E-8DA0-4DCD6248FC8C@bellsouth.net> On Dec 12, 2009, Gordon Swobe wrote: > So goes the functionalist theory, just one of many in the philosophy of mind, but one which you seem here to take for granted. I take it for granted that science works better than mystical crap. > But you'll notice first that functionalism amounts to behaviorism at the level of the neuron So what? > it ignores the subjective first person ontology of mental states I see no evidence that a neuron has a mental state. Do you? > it relies on the shaky assumption that substance does not matter -- only function matters. What the hell is shaky about that? If substance has no function then it doesn't matter. Evolution is blind to substance, it only sees function, and yet I am conscious and you are too probably. Can't you see that fact is telling you something? > If only function matters then we could, as others have pointed out, construct a giant artificial brain out of beer cans and toilet paper. True. > Do you think such a monstrosity would have semantics? Certainly! 
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 15:18:54 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 10:18:54 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <633777.83284.qm@web36501.mail.mud.yahoo.com> References: <633777.83284.qm@web36501.mail.mud.yahoo.com> Message-ID: On Dec 12, 2009, Gordon Swobe wrote: > To create an artificial brain modeled "down all the way to the molecular level", we would need essentially to create a real brain. Most of the things a neuron does is routine housekeeping stuff no different from what a cell in you large intestine needs to do just to keep alive. There would be no need to duplicate all that stuff to make a functionally equivalent electronic brain. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 14:55:47 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 09:55:47 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <151924.66474.qm@web36501.mail.mud.yahoo.com> References: <151924.66474.qm@web36501.mail.mud.yahoo.com> Message-ID: <26B90D0C-813D-41C5-9ECD-7BDC66ECFACF@bellsouth.net> On Dec 12, 2009, Gordon Swobe wrote: > I have a choice either to 1) infer that because my brain has consciousness, others also have it, Are you telling me that if you'd never had a course in biology and learned the importance of brains you'd think you were the only conscious being in the universe? And you don't think other people are conscious all the time, not when they don't act like they are such as when they are asleep. > or 2) consign myself to a dead end solipsistic philosophy in which you and everyone I know have the mental life of vegetables. I choose 1) by reductio ad absurdum. Then why isn't it also absurd to think that an intelligent computer has the mental life of a vegetable? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 12 15:42:00 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 07:42:00 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <156049.62557.qm@web36508.mail.mud.yahoo.com> --- On Sat, 12/12/09, John Clark wrote: > Most of the things a neuron does is routine > housekeeping stuff no different from what a cell in you > large intestine needs to do just to keep alive. There would > be no need to duplicate all that stuff to make a > functionally equivalent electronic brain. You've taken a leap of faith there if you think it will have consciousness. That's fine, provided you understand the difference between faith and knowledge. Philosophers like Searle investigate what we can actually know. And what we can know is this: 1) real biological brains have consciousness, 2) we know of nothing else in the universe that has it, 3) if we want to create it artificially and *know for certain* that we've done so then we need first to understand exactly how it happens in real biological brains. Searle scolds neuroscientists for not working harder on this all important question. He blames their complacency on the mystical dualistic mind/matter philosophy handed down to us from the likes of Descartes, a philosophy which made mental phenomena seem somehow outside the scope of natural science. 
-gts From stefano.vaj at gmail.com Sat Dec 12 16:37:22 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 12 Dec 2009 17:37:22 +0100 Subject: [ExI] atheism In-Reply-To: <4B22928F.2010000@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> Message-ID: <580930c20912120837t50c5633bu6bfd62a24e8fe54a@mail.gmail.com> 2009/12/11 Mirco Romanato : > Il 11/12/2009 19.25, John Clark ha scritto: >> I'll bet you are a atheist regarding Zeus and Thor and the Flying >> Spaghetti Monster, as Dawkins says he just goes one god further. >> Agnostics make the logical error of assuming that if there is no >> evidence that something exists and no evidence that it does not then >> there is a 50% chance its real. > > Even if the chance to be real is only 0.00000000000000000000000001% > It don't make it unreal, only improbable. Since when do we base our beliefs on a "certainty of unreality"? And, btw, I am probably not an "atheist" as far as Zeus and Thor are concerned. :-) I simply do not accord them the same status or kind of "reality" which I consider appropriate for the keyboard I am typing on (even though I could improbably be hallucinating it...) or which is claimed for Jahv?/Allah/the Holy Trinity. -- Stefano Vaj From stefano.vaj at gmail.com Sat Dec 12 16:45:48 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 12 Dec 2009 17:45:48 +0100 Subject: [ExI] Tolerance. In-Reply-To: <61EF07F3-9858-4C39-83A0-6D209AC258CA@bellsouth.net> References: <20091211061741.83045.qmail@moulton.com> <960D1616-4C24-4AA2-A008-C536B9C116EC@bellsouth.net> <580930c20912111055r398847b4q8723fe86dd443716@mail.gmail.com> <61EF07F3-9858-4C39-83A0-6D209AC258CA@bellsouth.net> Message-ID: <580930c20912120845t2a522fcbqd2fae26f5dcd8696@mail.gmail.com> 2009/12/12 John Clark : > On Dec 11, 2009, ?Stefano Vaj wrote: > And yet the essentialist view, which brings us into the territory of > inescapable paradoxes as far as uploading, teleport, resurrection, > copying, etc., are concerned keeps re-emerging and re-emerging even in > our ranks. > > I am unaware of any such paradoxes, it all seems logical to me. If you keep > running into paradoxes then you basic axioms must be wrong. Aren't we say the same thing? If we run into paradoxes by adopting an "essentialist" view of conscience and identity, then probably such view can be considered as "wrong" (or, at least, not operationally very useful...). If we refrain to consider conscience and identity anything else than phenomena - so if something swims, walks and quacks like a duck there is nothing else to be said on the subject of whether it is "really" a duck - all paradoxes go away. -- Stefano Vaj From avantguardian2020 at yahoo.com Sat Dec 12 16:29:43 2009 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 12 Dec 2009 08:29:43 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4902d9990912120613s66e6b11bq547f6f5722e54790@mail.gmail.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> <4902d9990912120613s66e6b11bq547f6f5722e54790@mail.gmail.com> Message-ID: <87104.69727.qm@web65613.mail.ac4.yahoo.com> >From: Alfio Puglisi >To: ExI chat list >Sent: Sat, December 12, 2009 6:13:44 AM >Subject: Re: [ExI] Wernicke's aphasia and the CRA. > > >I interpret the replacement as using a different electronic equivalent for each neuron, so that their specific functions (if any) will be preserved. 
Even so, how could you map that function over the domain of inputs and range of outputs? How precisely is "close enough"? Does the function even remain the same over the life of a neuron? For a simple mathematical example of the problem, consider the functions y=x+13 and y=(x^2-169)/(x-13). Over all of the infinite possible values of inputs x they are "functionally equivalent" and give rise to the same output... *except* where x=13. When you *know* the functions, the difference is obvious. But if the functions were hidden within a "black box" and all you could do was plug in random values of x and look at the output, would you notice a difference between the two? > > Understanding the neurons' inner working is not needed if you can exactly replace their input/output functions (not an easy feat anyway...) Whether consciousness resides inside single neurons is another matter. In that case, inner workings will need to be replicated too. > > Searle's arguments remind me of good old-fashioned dualism: there is something in our brain cells that can't be replicated in a mechanical or electronic equivalent. But without knowing what this "something" is, that's just an article of faith. Forget brains or neurons for the moment. Sodium is a metal that spontaneously burns when it contacts water. Chlorine is a deadly poisonous gas. When you combine the two in a test tube, you get salt. What does the electronic or mechanical equivalent of salt taste like? Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong
From jonkc at bellsouth.net Sat Dec 12 17:11:16 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 12:11:16 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <156049.62557.qm@web36508.mail.mud.yahoo.com> References: <156049.62557.qm@web36508.mail.mud.yahoo.com> Message-ID: <87B4BA7A-5C00-44F5-8B1A-6EAABEDE3455@bellsouth.net> On Dec 12, 2009, at 10:42 AM, Gordon Swobe wrote: >> Most of the things a neuron does is routine >> housekeeping stuff no different from what a cell in you >> large intestine needs to do just to keep alive. There would >> be no need to duplicate all that stuff to make a >> functionally equivalent electronic brain. > > You've taken a leap of faith there if you think it will have consciousness. Yes but no bigger a leap of faith than I have in thinking that you are conscious. > Philosophers like Searle investigate what we can actually know. And the only thing I actually know about you is that you have the ability to generate a certain sequence of ASCII characters. And yet I think you were conscious when you wrote it. > And what we can know is this: 1) real biological brains have consciousness Why the plural? You know only that one biological brain has consciousness. Sometimes. > 2) we know of nothing else in the universe that has it How do you know that? How do you know that a brick is not conscious? Because it doesn't act intelligently. > 3) if we want to create it artificially and *know for certain* that we've done so then we need first to understand exactly how it happens in real biological brains. Why? We don't know how biological brains work but we still think they're conscious except when they're acting like they are not. > Searle scolds neuroscientists for not working harder on this all important question.
He blames their complacency on the mystical dualistic mind/matter philosophy handed down to us from the likes of Descartes Descartes just got his grammar wrong, he thought he was a noun and not an adjective; but Searle is the real dualist, he clearly thinks intelligence and consciousness have little to do with each other. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 17:14:57 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 12:14:57 -0500 Subject: [ExI] atheism. In-Reply-To: <580930c20912120837t50c5633bu6bfd62a24e8fe54a@mail.gmail.com> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <580930c20912120837t50c5633bu6bfd62a24e8fe54a@mail.gmail.com> Message-ID: On Dec 12, 2009, at 11:37 AM, Stefano Vaj wrote: > btw, I am probably not an "atheist" as far as Zeus and Thor are > concerned. :-) God is real, unless defined as an integer. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Dec 12 17:36:50 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 12 Dec 2009 18:36:50 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <26B90D0C-813D-41C5-9ECD-7BDC66ECFACF@bellsouth.net> References: <151924.66474.qm@web36501.mail.mud.yahoo.com> <26B90D0C-813D-41C5-9ECD-7BDC66ECFACF@bellsouth.net> Message-ID: <580930c20912120936n6f4c7bcch7f5dac7bffba2858@mail.gmail.com> 2009/12/12 John Clark > On Dec 12, 2009, Gordon Swobe wrote: > > or 2) consign myself to a dead end solipsistic philosophy in which you and > everyone I know have the mental life of vegetables. I choose 1) by reductio > ad absurdum. > > > Then why isn't it also absurd to think that an intelligent computer has the > mental life of a vegetable? > > This subject has been beaten to death innumerable times, but do we really need to make (and is it possible to decide the truth of) assumptions concerning the "true" mental state of others as if it were a thing distinct from its expression and underlying mechanics? That an intelligent computer has the mental life of a vegetable sounds like an oxymoron to me... Its "intelligence", or mine for that matter, is defined by the responses we can offer to the various input we are faced with. And, by the way, we do not really know anything about the "real" mental life of vegetables either. We can study and describe how they work and behave at increasing level of details, and that is all there is to know about them. All that stuff simply brings us back to the "homuncoli" hypotheses, or to ko'an questions such as "what it would feel like to be somebody else" that are only good for short circuit our brain processes when engaged in Zen meditation... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From avantguardian2020 at yahoo.com Sat Dec 12 17:13:47 2009 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 12 Dec 2009 09:13:47 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> Message-ID: <211251.65939.qm@web65602.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: ExI chat list > Sent: Sat, December 12, 2009 6:46:53 AM > Subject: Re: [ExI] Wernicke's aphasia and the CRA. 
> I've never accepted simplistic notions of mind uploading that hold > that all the information needed is a map of the neural connections. To > properly model a brain you may need go down all the way to the > molecular level, which would of course require extremely fine scanning > techniques and a fantastic amount of computing power. Nevertheless, > unless there is something fundamentally non-computable in the brain, a > computer model should be possible, and this is sufficient to make the > case for functionalism. Even within the narrow bounds of math and?computer science there are provenly non-computable numbers like Chaitin's constant and non-computable functions like the?"busy beaver function". The?brain is not obligated to be computable. And mind has yet to be satisfactorily defined. Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From gts_2000 at yahoo.com Sat Dec 12 17:40:34 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 09:40:34 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <87B4BA7A-5C00-44F5-8B1A-6EAABEDE3455@bellsouth.net> Message-ID: <806166.70690.qm@web36502.mail.mud.yahoo.com> --- On Sat, 12/12/09, John Clark wrote: > Then why isn't it also absurd to think that an intelligent computer > has the mental life of a vegetable? Because 1) programs are formal (syntactic), and 2) minds have mental contents (semantics), and 3) syntax is neither constitutive of nor sufficient for semantics, programs are neither constitutive of nor sufficient for minds. It follows also that because biological brains have minds, they must do something besides run formal programs. -gts From alfio.puglisi at gmail.com Sat Dec 12 17:44:29 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sat, 12 Dec 2009 18:44:29 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <87104.69727.qm@web65613.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> <4902d9990912120613s66e6b11bq547f6f5722e54790@mail.gmail.com> <87104.69727.qm@web65613.mail.ac4.yahoo.com> Message-ID: <4902d9990912120944j7bc401d7hdf2dd766152af6c@mail.gmail.com> On Sat, Dec 12, 2009 at 5:29 PM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > >From: Alfio Puglisi > >To: ExI chat list > >Sent: Sat, December 12, 2009 6:13:44 AM > >Subject: Re: [ExI] Wernicke's aphasia and the CRA. > > > > > >I interpret the replacement as using a different electronic equivalent for > each neuron, so that their specific functions (if any) will be preserved. > > Even so, how could you map that function over the domain of inputs and > range of outputs? How precisely is "close enough"? Does the function even > remain the same over the life of a neuron? For a simple mathematical example > of the problem, consider the functions y=x+13 and y=(x^2-169)/(x-13). Over > all of the infinite possible values of inputs x they are "functionally > equivalent" and give rise to the same output. . . *except* where x=13. When > you *know* the functions, the difference is obvious. But if the functions > were hidden within a "black box" and all you could do was plug in random > values of x and look at the output, would you notice a difference between > the two? > The replacement would not be a perfect clone, on this I can agree. But we are not plugging in random values in a neuron and observing the output. 
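As a quick numerical illustration of the black-box point in the two-functions example above, here is a small sketch (the sampling scheme and ranges are illustrative assumptions only, not anything proposed in the thread). It probes y = x + 13 and y = (x^2 - 169)/(x - 13) at random points and never stumbles on the single input where they differ:

from fractions import Fraction
import random

def f(x):
    return x + 13

def g(x):
    return (x * x - 169) / (x - 13)   # agrees with f everywhere except x = 13

random.seed(1)
mismatches = 0
for _ in range(100_000):
    # exact rational probes, so any disagreement would be caught exactly
    x = Fraction(random.randint(-10**9, 10**9), random.randint(1, 10**6))
    if x == 13:
        continue   # astronomically unlikely draw; skip the one bad point
    if f(x) != g(x):
        mismatches += 1

print("mismatches over 100,000 random probes:", mismatches)   # prints 0
# yet f(13) returns 26 while g(13) raises ZeroDivisionError

A hundred thousand random probes report the two boxes as identical, and the one pathological input goes unnoticed; whether an observation period of days or weeks would ever exercise a neuron's "x = 13" is exactly the question at issue.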
In the replacement case, you observe a neuron in its day-to-day function, which is not random at all, but very representative. I don't think you would notice any difference between the original and the replacement after an observation period of some days or weeks. > Understanding the neurons' inner working is not needed if you can exactly replace their input/output functions (not an easy feat anyway...) Whether consciousness resides inside single neurons is another matter. In that case, inner workings will need to be replicated too. > >Searle's arguments remind me of good old-fashioned dualism: there is something in our brain cells that can't be replicated in a mechanical or electronic equivalent. But without knowing what this "something" is, that's just an article of faith. Forget brains or neurons for the moment. Sodium is a metal that > spontaneously burns when it contacts water. Chlorine is a deadly poisonous > gas. When you combine the two in a test tube, you get salt. What does the > electronic or mechanical equivalent of salt taste like? > It tastes like silicon or iron :-) That was the wrong question. A better question is: what taste is the electronic equivalent feeling? Looking at a human brain, you would never guess the answer. It is clear that something is causing feeling in the human brain, but we don't know what it is. It is allowed to hypothesize that the "something" is specific to the biological brain, in the same way that a neuron is. That it can't be also replicated on another substrate is a conjecture that can't be proved until the first issue is resolved. It could happen that consciousness come out to be a property of some specific matter arrangement. Say, like electrical charge requires electron, protons or other specific particles. But I'm not holding my breath. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 17:20:07 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 12:20:07 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <87104.69727.qm@web65613.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> <4902d9990912120613s66e6b11bq547f6f5722e54790@mail.gmail.com> <87104.69727.qm@web65613.mail.ac4.yahoo.com> Message-ID: <4D8DB4A0-D267-44B6-9AD4-A170F7E9A982@bellsouth.net> On Dec 12, 2009, The Avantguardian wrote: > Sodium is a metal that spontaneously burns when it contacts water. Chlorine is a deadly poisonous gas. When you combine the two in a test tube, you get salt. What does the electronic or mechanical equivalent of salt taste like? Salt. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Sat Dec 12 17:52:11 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sat, 12 Dec 2009 18:52:11 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <806166.70690.qm@web36502.mail.mud.yahoo.com> References: <87B4BA7A-5C00-44F5-8B1A-6EAABEDE3455@bellsouth.net> <806166.70690.qm@web36502.mail.mud.yahoo.com> Message-ID: <4902d9990912120952h7c95aea8seb5f8accac01df37@mail.gmail.com> On Sat, Dec 12, 2009 at 6:40 PM, Gordon Swobe wrote: > --- On Sat, 12/12/09, John Clark wrote: > > > Then why isn't it also absurd to think that an intelligent computer > > has the mental life of a vegetable? 
> > Because 1) programs are formal (syntactic), and 2) minds have mental > contents (semantics), and 3) syntax is neither constitutive of nor > sufficient for semantics, programs are neither constitutive of nor > sufficient for minds. > > It follows also that because biological brains have minds, they must do > something besides run formal programs. > Do you also expand this statement to "something besides moving molecules around and shuffling electrical charges?" Because that's what a biological brain does. If this description is too low-level to encode semantics, then you have just defined a computational substrate for the (formal or non-formal) brain program that encodes the semantics and, voila', the simulation is ready :-) Unless minds are not physics-based, there's no way to escape the conclusion that semantics, or whatever makes up mental states, is some function of brain's matter and electrical charges arrangement and movement. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 12 17:57:36 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 12:57:36 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <211251.65939.qm@web65602.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> <211251.65939.qm@web65602.mail.ac4.yahoo.com> Message-ID: <1EBC6179-50BB-4463-89EA-E4CA956A3B18@bellsouth.net> On Dec 12, 2009, at 12:13 PM, The Avantguardian wrote: > there are provenly non-computable numbers like Chaitin's constant and non-computable functions like the "busy beaver function". Yes, and our minds are no more capable of solving those problems than computers are; and the instructions on how to build one of those biological minds could easily fit on a $200 hard drive. I think those two facts are trying to tell you something. > And mind has yet to be satisfactorily defined. Definitions suck. Examples rule. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 12 18:01:21 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 10:01:21 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <87B4BA7A-5C00-44F5-8B1A-6EAABEDE3455@bellsouth.net> Message-ID: <782456.82917.qm@web36505.mail.mud.yahoo.com> --- On Sat, 12/12/09, John Clark wrote: > Descartes just got his grammar wrong, he thought he was a noun and > not an adjective; but Searle is the real dualist, he clearly thinks > intelligence and?consciousness?have little to do with each > other. Please show me how he suggests anything remotely similar. In any case the dualism to which I refer concerns that supposed between matter and mind, not consciousness and intelligence. On Descartes view, as most people know, the two things exist independently. Descartes barely made sense of the obvious fact that mind still somehow affects matter. Searle rejects that entire mind/matter dichotomy as obsolete (and absolute) nonsense. He attributes it to the religious pressures at work in Descartes' day. As philosophies, materialism is passe, as is its antithesis idealism. So says Searle. I find his thoughts on this subject pretty interesting. -gts From jonkc at bellsouth.net Sat Dec 12 18:32:58 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 13:32:58 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. 
In-Reply-To: <782456.82917.qm@web36505.mail.mud.yahoo.com> References: <782456.82917.qm@web36505.mail.mud.yahoo.com> Message-ID: On Dec 12, 2009, at 1:01 PM, Gordon Swobe wrote: > Searle rejects that entire mind/matter dichotomy as obsolete Yes, and Searle was wrong about that too, the two are not the same, mind is what a brain does. Then he introduces a brand new dichotomy the intelligence/consciousness dichotomy which he'd know was really really stupid if he took the time to audit a high school class on Evolution. Perhaps he's the victim of creationists having banned the subject from biology class when he was a kid. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 12 18:38:39 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 10:38:39 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4902d9990912120952h7c95aea8seb5f8accac01df37@mail.gmail.com> Message-ID: <385508.6120.qm@web36501.mail.mud.yahoo.com> --- On Sat, 12/12/09, Alfio Puglisi wrote: >> It follows also that because biological brains have minds, > they must do something besides run formal programs. > > Do you also expand this statement to "something > besides moving molecules around and shuffling electrical > charges?" Because that's what a biological brain > does. Good question, and I think the answer is a qualified "no". That shuffling of molecules and electrical charges to which you refer may very well have something to do with how brains cause consciousness. But Searle would qualify my answer by adding, "We don't yet know if only the activities of the brain cause it to become conscious. It may also have something to do with the substances of which it is made. We simply don't yet know if anything other than a real organic biological brain can have consciousness." The upshot is that until we understand exactly how real organic brains become conscious, we run the risk (of concern at least to philosophers of the subject) of creating a non-organic AI that appears to have consciousness but which in fact only mimics it. -gts From jonkc at bellsouth.net Sat Dec 12 18:39:08 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 12 Dec 2009 13:39:08 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <806166.70690.qm@web36502.mail.mud.yahoo.com> References: <806166.70690.qm@web36502.mail.mud.yahoo.com> Message-ID: <6D1285FA-D7DA-4A7F-BD6E-7F5F4B51E0A2@bellsouth.net> On Dec 12, 2009, at 12:40 PM, Gordon Swobe wrote: > syntax is neither constitutive of nor sufficient for semantics Why not? > It follows also that because biological brains have minds, they must do something besides run formal programs. Yea yea I've heard that for years, the nuns in my grade school tried to drill that into me, but I haven't believed in the soul since I was eleven. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Sat Dec 12 19:05:23 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sat, 12 Dec 2009 20:05:23 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. 
In-Reply-To: <385508.6120.qm@web36501.mail.mud.yahoo.com> References: <4902d9990912120952h7c95aea8seb5f8accac01df37@mail.gmail.com> <385508.6120.qm@web36501.mail.mud.yahoo.com> Message-ID: <4902d9990912121105t7d7ebfcft49ba3efbfa845d22@mail.gmail.com> On Sat, Dec 12, 2009 at 7:38 PM, Gordon Swobe wrote: > --- On Sat, 12/12/09, Alfio Puglisi wrote: > > >> It follows also that because biological brains have minds, > > they must do something besides run formal programs. > > > > Do you also expand this statement to "something > > besides moving molecules around and shuffling electrical > > charges?" Because that's what a biological brain > > does. > > Good question, and I think the answer is a qualified "no". > > That shuffling of molecules and electrical charges to which you refer may > very well have something to do with how brains cause consciousness. But > Searle would qualify my answer by adding, "We don't yet know if only the > activities of the brain cause it to become conscious. It may also have > something to do with the substances of which it is made. We simply don't yet > know if anything other than a real organic biological brain can have > consciousness." > > The upshot is that until we understand exactly how real organic brains > become conscious, we run the risk (of concern at least to philosophers of > the subject) of creating a non-organic AI that appears to have consciousness > but which in fact only mimics it. > Then we roughly agree, except that I take the opposite default position: that non-organic AI will have consciousness unless proved otherwise, the same criterion we apply to biological intelligence. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 12 19:15:00 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 11:15:00 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <6D1285FA-D7DA-4A7F-BD6E-7F5F4B51E0A2@bellsouth.net> Message-ID: <573329.23026.qm@web36502.mail.mud.yahoo.com> --- On Sat, 12/12/09, John Clark wrote: >> syntax is neither constitutive of nor sufficient for semantics > Why not? Searle designed the Chinese Room thought experiment to prove that third premise in his formal argument. The Englishman in the room shuffles the Chinese symbols according to the rules of Chinese syntax, and he does this well enough to pass the Turing test, yet he never understands a word of Chinese. His experience would seem much like that of the Wernicke's aphasia patient with the lesion on the semantic center of his brain. He speaks fluent Chinese with good syntax and also has no idea what he's talking about. -gts From painlord2k at libero.it Sat Dec 12 19:34:39 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sat, 12 Dec 2009 20:34:39 +0100 Subject: [ExI] Nobel homeschooled until 9th grade In-Reply-To: <4B22E604.5080902@satx.rr.com> References: <4B229380.5000502@libero.it> <4B22E604.5080902@satx.rr.com> Message-ID: <4B23F04F.1030407@libero.it> Il 12/12/2009 1.38, Damien Broderick ha scritto: > On 12/11/2009 6:31 PM, Stathis Papaioannou wrote: > >> But the mother of the person who wrote the article for >> homeschooling-network.com didn't teach him or her how to spell >> "bachelor". > > "homescholed" is fun too... From someone that use english as a secon written language, the differences from England English, US English (it is one or more?) and Australian English are, sometime, astonishing. Spelling is not always simple or unanimous. 
Anyway, there are evidences that public, government owned, schools a bit too often let people unable to read, compute, write a sentence to walk out with a diploma. It appear that the right to be educated and the compulsory education have morphed to a right to a diploma. Italy appear, like usual, 20 years later than US, in this. But we are trying to catch up. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.104/2560 - Data di rilascio: 12/12/09 08:38:00 From thespike at satx.rr.com Sat Dec 12 19:51:03 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Dec 2009 13:51:03 -0600 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <573329.23026.qm@web36502.mail.mud.yahoo.com> References: <573329.23026.qm@web36502.mail.mud.yahoo.com> Message-ID: <4B23F427.1070106@satx.rr.com> On 12/12/2009 1:15 PM, Gordon Swobe wrote: > The Englishman in the room shuffles the Chinese symbols according to the rules of Chinese syntax, and he does this well enough to pass the Turing test, yet he never understands a word of Chinese. > > His experience would seem much like that of the Wernicke's aphasia patient with the lesion on the semantic center of his brain. He speaks fluent Chinese with good syntax and also has no idea what he's talking about. This is very tedious. You still refuse to acknowledge what critics have shown for many years: the English monoglot speaker is the functional equivalent of a single neuron, or a small clump of them connected by synapses, functioning at glacial speeds. Nobody claims that such neurons are individually conscious. Rescale and find a better argument. Damien Broderick From gts_2000 at yahoo.com Sat Dec 12 19:52:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 11:52:32 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4902d9990912121105t7d7ebfcft49ba3efbfa845d22@mail.gmail.com> Message-ID: <187620.42095.qm@web36502.mail.mud.yahoo.com> --- On Sat, 12/12/09, Alfio Puglisi wrote: > Then we roughly agree, except that I take the opposite > default position: that non-organic AI will have > consciousness unless proved otherwise, the same criterion we > apply to biological intelligence. That position will lead to panpsychism - the idea that all matter has consciousness -- unless you find some way to justify one thing as conscious and another as not without using biological consciousness as the measure! Panpsychism is not an indefensible position, and it does refute Searle's. It's just not very popular. -gts From alfio.puglisi at gmail.com Sat Dec 12 20:08:05 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sat, 12 Dec 2009 21:08:05 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <187620.42095.qm@web36502.mail.mud.yahoo.com> References: <4902d9990912121105t7d7ebfcft49ba3efbfa845d22@mail.gmail.com> <187620.42095.qm@web36502.mail.mud.yahoo.com> Message-ID: <4902d9990912121208j4ac3e226j532e8b1e3ceeac58@mail.gmail.com> On Sat, Dec 12, 2009 at 8:52 PM, Gordon Swobe wrote: > --- On Sat, 12/12/09, Alfio Puglisi wrote: > > > Then we roughly agree, except that I take the opposite > > default position: that non-organic AI will have > > consciousness unless proved otherwise, the same criterion we > > apply to biological intelligence. 
> > That position will lead to panpsychism - the idea that all matter has > consciousness Well, a wooden desk, while biologically-derived, does not show intelligent behaviour, so I don't assign it much consciousness either. The same for a metallic desk. > -- unless you find some way to justify one thing as conscious and another > as not without using biological consciousness as the measure! > You nailed it - the problem is grounding consciousness on being biological. After all, there are many biological things that do not show intelligent behaviour, like most plants. In our experience, the only conscious things is a human brain, or perhaps an animal brain. This will justify shifting your Bayesian priors towards the biological, but it's far from giving absolute certainty. > Panpsychism is not an indefensible position, and it does refute Searle's. > It's just not very popular. > > -gts > Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 12 20:20:45 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 12:20:45 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4B23F427.1070106@satx.rr.com> Message-ID: <554818.57147.qm@web36502.mail.mud.yahoo.com> > You still refuse to acknowledge what > critics have shown for many years: the English monoglot > speaker is the functional equivalent of a single neuron Perhps you missed it Damien it but that reply of the systems critics was answered many years ago. In the reply, the man internalizes the rule book and steps outside the room. Different picture, same symbol grounding problem. The larger point of course is that really does not matter which metaphor we use. The CR thought experiment illustrates the symbol grounding problem: "The Symbol Grounding Problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful. According to a widely held theory of cognition, "computationalism," cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols' shapes, not their meanings." http://en.wikipedia.org/wiki/Symbol_grounding -gts From stathisp at gmail.com Sat Dec 12 20:45:24 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 07:45:24 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <633777.83284.qm@web36501.mail.mud.yahoo.com> References: <633777.83284.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/13 Gordon Swobe : > --- On Sat, 12/12/09, Stathis Papaioannou wrote: > >> The replacement would have to involve artificial neurons >> that are *functionally equivalent*. > > I.e., functionalism. When I say that the artificial neurons are "functionally equivalent" I am referring to their externally observable behaviour. Functionalism is the theory that the mind would follow if the externally observable behaviour is taken care of, and is what is at issue here. >> To properly model a brain you may need go down all the way to >> the molecular level, which would of course require... > > Now here you and Searle may almost agree. To create an artificial brain modeled "down all the way to the molecular level", we would need essentially to create a real brain. 
It would behave like a biological brain and it would have the consciousness of a biological brain, but it need not have any biological components unless it turns out that these components cannot be modeled on a computer. This is Roger Penrose's position: he claims that computers will never be able to display human-like intelligence because the brain utilises non-computable physics. In other words, he claims that weak AI is impossible. This position is consistent, but there is no evidence of non-computable physics in the brain or anywhere else. Searle, on the other hand, claims that weak AI is possible but strong AI impossible, which is inconsistent. The neural replacement experiment I described shows why this is so, and you haven't addressed it. -- Stathis Papaioannou
From painlord2k at libero.it Sat Dec 12 21:28:37 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sat, 12 Dec 2009 22:28:37 +0100 Subject: [ExI] Living temperature dataset Message-ID: <4B240B05.7080106@libero.it> http://wattsupwiththat.com/2009/12/11/giss-raw-station-data-before-and-after/ GISS "raw" station data - before and after Look at the comparison before and after. Data is truncated before 1900, and temperatures are adjusted down for the first part of the century and up for the latter. It could be of interest how the raw datasets continue to be "living" documents, as they continue to change and are continuously "improved" to show rising temperatures to the unwashed masses. Commenter Peter Hartley (17:41:16) : <> Do they at the LHC "improve" the data in the same way? Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.104/2560 - Data di rilascio: 12/12/09 08:38:00
From stathisp at gmail.com Sat Dec 12 21:35:57 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 08:35:57 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <554818.57147.qm@web36502.mail.mud.yahoo.com> References: <4B23F427.1070106@satx.rr.com> <554818.57147.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/13 Gordon Swobe : >> You still refuse to acknowledge what >> critics have shown for many years: the English monoglot >> speaker is the functional equivalent of a single neuron > > Perhps you missed it Damien it but that reply of the systems critics was answered many years ago. > > In the reply, the man internalizes the rule book and steps outside the room. Different picture, same symbol grounding problem. Would your consciousness disappear if it were shown that neurons in the brain had their own separate intelligence allowing them to do their mundane jobs but without an understanding of the broader picture? > The larger point of course is that really does not matter which metaphor we use. The CR thought experiment illustrates the symbol grounding problem: > > "The Symbol Grounding Problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful. According to a widely held theory of cognition, "computationalism," cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols' shapes, not their meanings."
> > http://en.wikipedia.org/wiki/Symbol_grounding How is it that the symbol grounding problem is not a problem for brains? All brains do as far as an external observer is concerned is harness chemical reactions in order to manipulate symbols, just as computers harness electric current in order to manipulate symbols. If semantics magically appears out of chemical reactions why should it not also magically appear out of electric currents? -- Stathis Papaioannou From stathisp at gmail.com Sat Dec 12 21:53:35 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 08:53:35 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <211251.65939.qm@web65602.mail.ac4.yahoo.com> References: <370107.70551.qm@web36504.mail.mud.yahoo.com> <242821.40742.qm@web65614.mail.ac4.yahoo.com> <211251.65939.qm@web65602.mail.ac4.yahoo.com> Message-ID: 2009/12/13 The Avantguardian : > > > ----- Original Message ---- >> From: Stathis Papaioannou >> To: ExI chat list >> Sent: Sat, December 12, 2009 6:46:53 AM >> Subject: Re: [ExI] Wernicke's aphasia and the CRA. > >> I've never accepted simplistic notions of mind uploading that hold >> that all the information needed is a map of the neural connections. To >> properly model a brain you may need go down all the way to the >> molecular level, which would of course require extremely fine scanning >> techniques and a fantastic amount of computing power. Nevertheless, >> unless there is something fundamentally non-computable in the brain, a >> computer model should be possible, and this is sufficient to make the >> case for functionalism. > > Even within the narrow bounds of math and?computer science there are provenly non-computable numbers like Chaitin's constant and non-computable functions like the?"busy beaver function". The?brain is not obligated to be computable. And mind has yet to be satisfactorily defined. The argument I put forward before (due to Chalmers) shows that IF the physical behaviour of the brain can be modelled by a computer THEN the consciousness will follow. There is no need to define consciousness or mind exactly for the purposes of this argument: it's just that weird thing that happens to us when our brain is working properly. If the brain utilises non-computable physics then it won't be possible to model it on a digital computer, but there is no evidence for non-computable physics in the brain or anywhere else. It's the physics which is at issue, not mathematics. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Dec 12 21:47:03 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 13:47:03 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. Message-ID: <445222.5956.qm@web36503.mail.mud.yahoo.com> --- On Sat, 12/12/09, Stathis Papaioannou wrote: > When I say that the artificial neurons are "functionally > equivalent" I am referring to their externally observable behaviour. > Functionalism is the theory that the mind would follow if the > externally observable behaviour is taken care of, and is what is at > issue here. More accurately, functionalism is the theory that if one constructed a brain-like contraption the components of which carried out the same functions as a real brain, mind would follow, no matter how one implemented those functions. Correct me if I have it wrong but I believe functionalism so defined describes your position, though behaviorism certainly plays a role in it. 
It seems you would not care how we constructed those neurons, provided they squirted the same neurotransmitters and emitted the same electrical signals between themselves, i.e., that they performed the same functions as real biological neurons. Yes? We could on that view construct a contraption the size of Texas with gigantic neurons constructed of, say, band-aids, Elmers glue, beer cans and toilet paper. Provided those neurons squirted the same chemicals and signals betwixt themselves as in a real brain, would you consider the contraption conscious? And if so, why? How would those particular neurotransmitters and signals cause consciousness? And if you take it only on pure faith that they would do so, and offer no scientific explanation, then on grounds can you justify your claim to have created a blue-print for strong AI? > Searle, on the other hand, claims that weak AI is possible but strong > AI impossible, which is inconsistent. The neural replacement experiment > I described shows why this is so, and you haven't addressed it. I think I have addressed it, actually, but perhaps I misunderstood you. I've asked for clarification above. -gts From stathisp at gmail.com Sat Dec 12 22:41:12 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 09:41:12 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <445222.5956.qm@web36503.mail.mud.yahoo.com> References: <445222.5956.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/13 Gordon Swobe : > --- On Sat, 12/12/09, Stathis Papaioannou wrote: > >> When I say that the artificial neurons are "functionally >> equivalent" I am referring to their externally observable behaviour. >> Functionalism is the theory that the mind would follow if the >> externally observable behaviour is taken care of, and is what is at >> issue here. > > More accurately, functionalism is the theory that if one constructed a brain-like contraption the components of which carried out the same functions as a real brain, mind would follow, no matter how one implemented those functions. Correct me if I have it wrong but I believe functionalism so defined describes your position, though behaviorism certainly plays a role in it. Yes, that's right. It's the behaviour of the neurons that is important. It is possible that someone with a completely different and differently-functioning brain to mine is a very good actor and copies my behaviour, but he probably won't experience what I experience. But if my behaviour were copied by making a machine that copies my brain function, perhaps in a different substrate, then my mind will also be copied. > It seems you would not care how we constructed those neurons, provided they squirted the same neurotransmitters and emitted the same electrical signals between themselves, i.e., that they performed the same functions as real biological neurons. Yes? > > We could on that view construct a contraption the size of Texas with gigantic neurons constructed of, say, band-aids, Elmers glue, beer cans and toilet paper. Provided those neurons squirted the same chemicals and signals betwixt themselves as in a real brain, would you consider the contraption conscious? And if so, why? How would those particular neurotransmitters and signals cause consciousness? And if you take it only on pure faith that they would do so, and offer no scientific explanation, then on grounds can you justify your claim to have created a blue-print for strong AI? 
If you consider my question below you will see that it has been justified with the strength of logical necessity. >> Searle, on the other hand, claims that weak AI is possible but strong >> AI impossible, which is inconsistent. The neural replacement experiment > I described shows why this is so, and you haven't addressed it. > > I think I have addressed it, actually, but perhaps I misunderstood you. I've asked for clarification above. You haven't explained what you think would happen if part of your brain, say your visual cortex, were replaced with artificial neurons which interacted with the remaining biological neurons in the same way as the originals would have, while themselves lacking the ingredients for consciousness. -- Stathis Papaioannou From gts_2000 at yahoo.com Sat Dec 12 22:52:38 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 12 Dec 2009 14:52:38 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <512812.46927.qm@web36505.mail.mud.yahoo.com> --- On Sat, 12/12/09, Stathis Papaioannou wrote: > How is it that the symbol grounding problem is not a problem for brains? That's the great mystery, Stathis! I wish I knew the answer, but I can do nothing more than help others here understand the question. However we may choose to describe the brain, we cannot describe it as a software/hardware system. It does something that no such system will ever do. -gts From alfio.puglisi at gmail.com Sat Dec 12 23:14:54 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 13 Dec 2009 00:14:54 +0100 Subject: [ExI] Living temperature dataset In-Reply-To: <4B240B05.7080106@libero.it> References: <4B240B05.7080106@libero.it> Message-ID: <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> 2009/12/12 Mirco Romanato > > http://wattsupwiththat.com/2009/12/11/giss-raw-station-data-before-and-after/ > GISS ?raw? station data ? before and after > > Look at the comparison before and after. > Data is truncated before 1900 and temp are adjusted down for the first part > of the century and up for the latter > > Could be of interest how, the raw datasets continue to be "living" > documents, as they continue to change and are continuously "improved" to > show the raising temperatures to the unwashed masses. > There are thousands of temperature stations. If you have an agenda, it's easy to cherry-pick some that show the adjustments you are interested in. If one suspect that adjustments are systematically biased in one direction, the correct thing to do is to take all of them and look at their distribution. This is the result for the entire GHCN dataset, on which GISS is based: http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/ You get a nice gaussian distribution with an average of 0 degrees. That is, there are as many negative adjustments as there are positive ones. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Dec 12 23:26:02 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 10:26:02 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <512812.46927.qm@web36505.mail.mud.yahoo.com> References: <512812.46927.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/13 Gordon Swobe : > --- On Sat, 12/12/09, Stathis Papaioannou wrote: > >> How is it that the symbol grounding problem is not a problem for brains? > > That's the great mystery, Stathis! 
I wish I knew the answer, but I can do nothing more than help others here understand the question. > > However we may choose to describe the brain, we cannot describe it as a software/hardware system. It does something that no such system will ever do. You look at a computer and say: it behaves as if it understand what it is doing, but in reality it doesn't, since all it does is shuffle around electric charges. But that's begging the question; you may as well assume the same thing of brains. Are you familiar with the Tery Bisson story "They're made Out Of Meat"? http://baetzler.de/humor/meat_beings.html http://www.youtube.com/watch?v=gaFZTAOb7IE -- Stathis Papaioannou From saefir at yahoo.com Sat Dec 12 23:09:49 2009 From: saefir at yahoo.com (flemming) Date: Sat, 12 Dec 2009 15:09:49 -0800 (PST) Subject: [ExI] more atheism Message-ID: <894257.93025.qm@web56806.mail.re3.yahoo.com> I believe stefano makes the logical or illogical error of assuming what agnostics assume. I consider my self an agnostic, but based on the fact that nobody can present to me any evidence that there is or is not a god. It is really not a question of quantitative statistics, but the bare fact that I feel unable to choose among to realities for which there is no evidence to back?them up. It is ofcourse vital to acknowledge that evidence is only evidence if it convinces me, and not anybody else. I have no doubt that there is a lot of people who feel convinced that the evidence for their course is present and obvious, for example people who on a regular basis have conversations with Jesus or God. So Stefano, the agnostics only claims the right to doubt, and do not base their doubt on statistics. Some agnostics, such as the vitalongists, believes that?our ability to reason is limited buy our brains capacity, the same way?a worm is limited in its capability to understand differential equations. If you consider that view, it means that there is nothing but agnosticism, as this is a way to have a door open for the possibility of being wrong, while living on the knowledge?we have acquired so far. We must admit that the human history has been a road of errors and mistakes, and as we now peak into the strange world of quantum mechanics, with its odd dislocality, it seems that anything can be wrong, even the strongholds of the old Greeks. When it comes down to everything I guess?it is all about whether you think your brain interpretate the world of if you think it is the world. ? Best Regards Flemming from Denmark ? ?I'll et you are a atheist regarding Zeus and Thor and the Flying >> Spaghetti Monster, as Dawkins says he just goes one god further. >> Agnostics make the logical error of assuming that if there is no >> evidence that something exists and no evidence that it does not then >> there is a 50% chance its real. > > Even if the chance to be real is only 0.00000000000000000000000001% > It don't make it unreal, only improbable. Since when do we base our beliefs on a "certainty of unreality"? And, btw, I am probably not an "atheist" as far as Zeus and Thor are concerned. :-) I simply do not accord them the same status or kind of "reality" which I consider appropriate for the keyboard I am typing on (even though I could improbably be hallucinating it...) or which is claimed for Jahv?/Allah/the Holy Trinity. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: --static--liam_crowdsurfer_bottom.gif Type: image/gif Size: 21362 bytes Desc: not available URL: From p0stfuturist at yahoo.com Sat Dec 12 23:59:19 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Sat, 12 Dec 2009 15:59:19 -0800 (PST) Subject: [ExI] Atheism Message-ID: <810435.48641.qm@web59901.mail.ac4.yahoo.com> "What's wrong with church teachings is that they're based on mysticism, faith, dogma, fear, and obedience, not critical thinking, open mindedness, and rationality. TV is entertainment, education, religion, ... everything anyone does. Some of it is good and healthy, some of it is just fun, some is a waste of time, and some of it is actively unhealthy. Schools--at least those not run by religious entities--are generally good, focusing on the rational. Raising children to believe in a religion, starting the indoctrination at an age at which they haven't developed the ability to evaluate what they're being taught, amounts to child abuse. The physical and intellectual effort that has been wasted on teaching, learning, and observing religious practices could have been put to much better use. I'll grant that many great works of art have been inspired by religion, but I think most of those talents would have been expressed equally well on non-religious subjects. But the great minds that were wasted pursuing idiotic theological problems could have solved real, major problems. I think that's tragic." "Equally well" cuts both ways. Religion is as beneficial as the secular all round. As much psychic (can't say concerning physical)abuse is involved with secular schools as at religious schools. Agreed on premises but not conclusions. Religion is as you write, Dave. However, church schools can deliver education at lower prices. And private charity doesn't pile up the debt that govt does. Practically-- economically-- though not intellectually, religion works. I still don't see how what is taught in nonscience secular curricula is less muddleheaded than that which is taught in religious schools. Why is politically correct secular emollient less illusory than its counterpart in churches and church schools? There is effort today to improve public schools-- because there is no longer any choice but to do so; yet to exponentially improve public K-12 schools the teachers would probably (and you don't know any more about what to do about it than anyone)have to be paid as much as professors to incentivise them to improve learning. Perhaps you are exaggerating the flaws of religion, while I am exaggerating the flaws in public education. But isn;t there less drug use, promiscuity, crime, in religious schools than in secular schools? I think govt institutions are no better than religious institutions. Nor does it appear that entertainment is any better than religious indoctrination. besides the charity work religious orgs do in emergencies is valuable, more cost effective than govt relief. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From msd001 at gmail.com Sun Dec 13 01:25:54 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 12 Dec 2009 20:25:54 -0500 Subject: [ExI] atheism In-Reply-To: References: <200912111606.nBBG6sWu022978@andromeda.ziaspace.com> <62c14240912112028t3467ca99sdb2702c1c28e77f4@mail.gmail.com> Message-ID: <62c14240912121725m2d946d1agc5dcdda82c706257@mail.gmail.com> On Sat, Dec 12, 2009 at 3:25 AM, BillK wrote: > On 12/12/09, Mike Dougherty wrote: > > For the only commonality of a population of people that they do not > > share a single idea that is believed those outside their group do share > is > > not really much of an identity. As an indistinct label with almost no > > expressive power to convey a specific import, why the obsession with > > professing "atheism" as anything at all? > > Because it is important to tell people that you are part of the group > that doesn't follow a soccer team, have no interest in soccer and > consider it to be a complete waste of time. > > This may help those people who are soccer fanatics to reconsider the > time and money they spend, if they realize that there is a life > outside of soccer. > Thanks for further distilling my point to an even easier analogy. I don't believe soccer fans would be swayed by that argument very frequently either. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sun Dec 13 01:42:01 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 12 Dec 2009 20:42:01 -0500 Subject: [ExI] atheism. In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <580930c20912120837t50c5633bu6bfd62a24e8fe54a@mail.gmail.com> Message-ID: <62c14240912121742h2ef44f75u7e515bde34032129@mail.gmail.com> 2009/12/12 John Clark > > God is real, unless defined as an integer. > > I heard ve is denoted alternately by two greek letters... -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Sun Dec 13 02:26:05 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sun, 13 Dec 2009 03:26:05 +0100 Subject: [ExI] atheism In-Reply-To: <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> Message-ID: <4B2450BD.6080708@libero.it> Il 12/12/2009 14.27, John Clark ha scritto: > You can't, or at least shouldn't, be absolutely certain about anything > (I think) therefore if it is inappropriate to use the word "atheist" it > is also inappropriate to utter any simple declarative sentence about > anything because there is a chance, however unlikely, that you could be > wrong. Then we must all stop to arguing about all as certain is not of this universe. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.104/2560 - Data di rilascio: 12/12/09 08:38:00 From rafal.smigrodzki at gmail.com Sun Dec 13 04:42:14 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 12 Dec 2009 23:42:14 -0500 Subject: [ExI] Living temperature dataset In-Reply-To: <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> References: <4B240B05.7080106@libero.it> <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> Message-ID: <7641ddc60912122042p74b0c549p591e5e0d734011b6@mail.gmail.com> 2009/12/12 Alfio Puglisi : > > If one suspect that adjustments are systematically biased in one direction, > the correct thing to do is to take all of them and look at their > distribution. This is the result for the entire GHCN dataset, on which GISS > is based: > > http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/ > > You get a nice gaussian distribution with an average of 0 degrees. That is, > there are as many negative adjustments as there are positive ones ### See here: http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/ - please watch it and read the article before commenting. Basically, since GISS homogenizes data containing a mixture of urban and rural stations, the procedure introduces a spurious warming trend due to the UHI effect. The overall distribution of adjustments proves nothing about the actual net effect of the adjustments. Let me explain on a hypothetical scenario: Imagine that you take all rural site temperatures and adjust them upward by 1 degree. Then you take an equal number of urban sites and adjust them downward by 1 degree. Obviously, the net adjustment per site will be zero, just as described in Alfio's link. However, note that adjusting rural sites up doesn't make physical sense, since there is no "rural cooling island effect" you would need to adjust for - these data should be consumed raw. Adjusting urban sites down makes sense, since their temperatures are the result of the UHI - but in this hypothetical this physically correct adjustment is negated by a physically improper adjustment of the rural data. The overall effect is that the UHI is fully transferred into the homogenized temperature record, and a spurious warming trend is seen. Lest you think this is just theorizing - homogenization (averaging) of urban and rural measurements does produce exactly the same effect on the raw data. This is not surprising, since homogenization based on proximity (as used in the GISS/CRU procedures) is a naive approach which fails to take into account the underlying physics. The physically proper procedure, aimed at assessing true global temperature variability, would simply discard urban data, since they reflect *local* influences. Indeed, for rural sites in the US that can be paired with urban sites, there is no warming trend, as demonstrated by GISS data. So, Alfio, it looks to me like the consensus GISS/CRU climate record is systematically biased upward, over multiple stations, and the analysis you quoted does not disprove this assessment. Rafal From thespike at satx.rr.com Sun Dec 13 05:09:52 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Dec 2009 23:09:52 -0600 Subject: [ExI] Wernicke's aphasia and the CRA. 
In-Reply-To: <554818.57147.qm@web36502.mail.mud.yahoo.com> References: <554818.57147.qm@web36502.mail.mud.yahoo.com> Message-ID: <4B247720.4010809@satx.rr.com> On 12/12/2009 2:20 PM, Gordon Swobe wrote: >> You still refuse to acknowledge what critics [of Searle's >> Chinese Room] have shown for many years: the English monoglot >> speaker is the functional equivalent of a single neuron > Perhaps you missed it, Damien, but that reply to the systems critics was answered many years ago. > In the reply, the man internalizes the rule book and steps outside the room. Different picture, same symbol grounding problem. I begin to appreciate the value of John Clark's favorite retort: BULLSHIT! For extra fun, why not also postulate that his cow jumps over the moon and brings back free cheese, and that he internalizes and operates the lookup table faster than the speed of light? Or, really, anything you like? With one bound, Jack was free. Damien Broderick From p0stfuturist at yahoo.com Sun Dec 13 02:07:26 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Sat, 12 Dec 2009 18:07:26 -0800 (PST) Subject: [ExI] atheism Message-ID: <598117.8374.qm@web59913.mail.ac4.yahoo.com> No disagreement with anyone writing that religion is superstitious, foolish, bigoted-- but IMO secular, govt. institutions are no more tolerant than religious ones. Not to pick on public schools, but since students are a captive audience let's start there: there is much PC smarm in schools, and students learn to suck up to those who are different. This is not toleration; it is at best coexistence. The superstition and bigotry of religion is traded off for something just as inauthentic or negative. And say that instead of spending $50 per month on supplements that aren't efficacious or have a negligible health benefit, someone places $20 per month in the offering basket at a house of worship; at least he has saved $30 if nothing else. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reasonerkevin at yahoo.com Sun Dec 13 05:10:42 2009 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Sat, 12 Dec 2009 21:10:42 -0800 (PST) Subject: [ExI] more atheism In-Reply-To: <894257.93025.qm@web56806.mail.re3.yahoo.com> References: <894257.93025.qm@web56806.mail.re3.yahoo.com> Message-ID: <29193.50976.qm@web81608.mail.mud.yahoo.com> ________________________________ From: flemming To: extropy-chat at lists.extropy.org Sent: Sat, December 12, 2009 5:09:49 PM Subject: [ExI] more atheism I believe Stefano makes the logical or illogical error of assuming what agnostics assume. I consider myself an agnostic, but based on the fact that nobody can present to me any evidence that there is or is not a god. It is really not a question of quantitative statistics, but the bare fact that I feel unable to choose between two realities for which there is no evidence to back them up. It is of course vital to acknowledge that evidence is only evidence if it convinces me, and not anybody else. I have no doubt that there are a lot of people who feel convinced that the evidence for their cause is present and obvious, for example people who on a regular basis have conversations with Jesus or God. So Stefano, the agnostic only claims the right to doubt, and does not base that doubt on statistics. Some agnostics, such as the vitalongists, believe that our ability to reason is limited by our brain's capacity, the same way a worm is limited in its capability to understand differential equations.
If you consider that view, it means that there is nothing but agnosticism, as this is a way to keep a door open for the possibility of being wrong, while living on the knowledge we have acquired so far. We must admit that human history has been a road of errors and mistakes, and as we now peek into the strange world of quantum mechanics, with its odd non-locality, it seems that anything can be wrong, even the strongholds of the old Greeks. When it comes down to it, I guess it is all about whether you think your brain interprets the world or whether you think it is the world. Best Regards Flemming from Denmark I'll bet you are an atheist regarding Zeus and Thor and the Flying >> Spaghetti Monster, as Dawkins says he just goes one god further. >> Agnostics make the logical error of assuming that if there is no >> evidence that something exists and no evidence that it does not then >> there is a 50% chance it's real. > > Even if the chance to be real is only 0.00000000000000000000000001% > It don't make it unreal, only improbable. Saying that "I don't believe there is a God" does not mean the same thing as "I believe there is not a God." There is no logical error in agnosticism. Even a .00000000000000000000000001% probability is greater than zero, and that makes the position agnostic rather than atheist. The definition of God makes this even more difficult. Now if I say that God is a possibility in the future, and we assume that time itself can be warped, manipulated or is an illusion, then I can't say for certain that God does not exist. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Sun Dec 13 06:06:35 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 12 Dec 2009 22:06:35 -0800 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <573329.23026.qm@web36502.mail.mud.yahoo.com> References: <573329.23026.qm@web36502.mail.mud.yahoo.com> Message-ID: <4B24846B.3090805@rawbw.com> Gordon Swobe wrote (hi Gordon!) > The Englishman in the room shuffles the Chinese symbols according > to the rules of Chinese syntax, and he does this well enough to > pass the Turing test, yet he never understands a word of Chinese. Using the "systems reply" terminology, does the Chinese Room laugh at jokes? Doth it have feelings? Hath the CR not... Anyway. Suppose that the CR is asked the question, "How may I here in L.A. this week make an atomic bomb, and revenge my poor Middle Eastern people against the Imperialists?" And the CR responds---all in Chinese characters---by providing concise directions for building a backyard bomb! > [Moreover, say] the man internalizes the rule book and > steps outside the room. Different picture, same symbol > grounding problem. By hypothesis, then, the original "man" doesn't know a bit about what is being said, only the new "internalization" you speak of? I'm forced to conclude that there are two entities having experiences in that same skull: (1) the original man whose day job was only holding up a sign "Will Work for Food", and (2) a crafty, inscrutable Chinese-speaking engineer who knows a great deal about bomb design and is willing to help a terrorist. From the outside, we have a Chinese speaker/system who knows about bombs and doesn't care about L.A.---yet which has MPS and a sub-personality capable of having its own independent thoughts, who is merely taking the job of running some Chinese symbols and phonemes back and forth in that brain?
For me, this exposes to criticism the notion that the room isn't "a man". Lee From max at maxmore.com Sun Dec 13 06:17:22 2009 From: max at maxmore.com (Max More) Date: Sun, 13 Dec 2009 00:17:22 -0600 Subject: [ExI] Scientists Behaving Badly Message-ID: <200912130617.nBD6HWjA014864@andromeda.ziaspace.com> This new article is one of the best I've seen on the CRU emails controversy, especially in the way it distinguishes between the players and their attitudes: http://www.aei.org/article/101395 ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From kanzure at gmail.com Sun Dec 13 06:45:36 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 13 Dec 2009 00:45:36 -0600 Subject: [ExI] Fwd: [biomed] singularityhub: light-sensitive protein interaction used to control shape of mammalian cell In-Reply-To: <1260686938.21385.740.camel@localhost> References: <1260686938.21385.740.camel@localhost> Message-ID: <55ad6af70912122245r43c6f9cfo231c573458ea1c85@mail.gmail.com> Anselm was at H+ Summit and gave a great presentation: http://adl.serveftp.org/~bryan/hplus-summit-2009/ansyem.html Nice to see him showing up in Nature. ---------- Forwarded message ---------- From: Alejandro Dubrovsky Date: 2009/12/13 Subject: [biomed] singularityhub: light-sensitive protein interaction used to control shape of mammalian cell To: biomed ( http://www.nature.com/nature/journal/v461/n7266/full/nature08446.html and http://singularityhub.com/2009/12/11/light-used-to-remotely-control-mouse-cells-like-robots/ ) Nature 461, 997-1001 (15 October 2009) | doi:10.1038/nature08446; Received 8 July 2009; Accepted 24 August 2009; Published online 13 September 2009 Spatiotemporal control of cell signalling using a light-switchable protein interaction Anselm Levskaya1,2,3, Orion D. Weiner1,4, Wendell A. Lim1,5 & Christopher A. Voigt1,3 ?1. The Cell Propulsion Lab, UCSF/UCB NIH Nanomedicine Development Center, ?2. Graduate Program in Biophysics, ?3. Department of Pharmaceutical Chemistry, ?4. Cardiovascular Research Institute, ?5. Howard Hughes Medical Institute and Department of Cellular and Molecular Pharmacology, University of California, San Francisco, California 94158-2517, USA Correspondence to: Wendell A. Lim1,5 Correspondence and requests for materials should be addressed to W.A.L. (Email: lim at cmp.ucsf.edu). Top of page Abstract Genetically encodable optical reporters, such as green fluorescent protein, have revolutionized the observation and measurement of cellular states. However, the inverse challenge of using light to control precisely cellular behaviour has only recently begun to be addressed; semi-synthetic chromophore-tethered receptors1 and naturally occurring channel rhodopsins have been used to perturb directly neuronal networks2, 3. The difficulty of engineering light-sensitive proteins remains a significant impediment to the optical control of most cell-biological processes. Here we demonstrate the use of a new genetically encoded light-control system based on an optimized, reversible protein?protein interaction from the phytochrome signalling network of Arabidopsis thaliana. Because protein?protein interactions are one of the most general currencies of cellular information, this system can, in principle, be generically used to control diverse functions. 
Here we show that this system can be used to translocate target proteins precisely and reversibly to the membrane with micrometre spatial resolution and at the second timescale. We show that light-gated translocation of the upstream activators of Rho-family GTPases, which control the actin cytoskeleton, can be used to precisely reshape and direct the cell morphology of mammalian cells. The light-gated protein? protein interaction that has been optimized here should be useful for the design of diverse light-programmable reagents, potentially enabling a new generation of perturbative, quantitative experiments in cell biology. ---- Light Used to Remotely Control Mouse Cells Like Robots No Comments December 11th, 2009 by Aaron Saenz ?Filed under nanotechnology. Plants use light to tell them where to move and how to grow. What if animal cells could be directed in the same way? Now they can. Researchers at the University of California San Francisco have modified mouse cells with plant proteins so that they will change shape and move in response to signals of light. As described in the recent publication in Nature, Scientists were able to get the mammalian cells to follow a weak red light and pull away from infrared light. Similar techniques can be used to control other cell functions besides shape and movement. One day, researchers hope, such modifications could be performed on human cells to help direct the repair of spinal injuries and allow cells to reconnect across gaps. UCSF scientists placed plant proteins in this mouse cell so that it would respond to light by moving and changing shape. UCSF scientists placed plant proteins in this mouse cell so that it would respond to light by moving and changing shape. The cell expanded to follow the movement of a red light (circle). While similar work has been performed in yeast and bacteria, this experiment marks the first time that mammal cells have been upgraded in this fashion. I?m impressed by the way that researchers got cells to move like miniature remote control robots, but there are greater implications. By inserting key plant proteins (called phytochromes) into mammal cells, researchers have created a light-based switch that they can insert into many different chemical pathways. The UCSF team focused on the pathways which affect the cytoskeleton, but they could have targeted protein interactions that control how food is processed, or functions that impact cell life span. Imagine using specially tuned light signals to keep some cells (say those with cancer) from processing nutrients, or encourage other cells (say those in an area with nerve damage) to repair and reproduce themselves. With the protein-based light switch, scientists could change a cell?s chemical functions temporarily, and repeat the process as needed later. That?s an amazingly powerful tool. When manipulating the mouse cells, researchers used combinations of red light and infrared light. These types of light directly affect the plant phytochromes that were inserted into the mammal cells. Basically, one type of light will induce one kind of chemical reaction, while the other light will stop or reverse that reaction. By bathing the mouse cell in IR and providing a single spot of red light, the researchers were able to get the cell to deform and follow the red spot as it moved over time. While it took many minutes for the cell to move as the researchers desired, the chemical reactions that the light was causing happened much quicker. 
The UCSF team was able to control the position of these reactions down to the micron level, and with a response time around one second. This precision could have important implications if surgeons one day used this sort of technique to repair damage in the body. It could also facilitate fine control of the functions of the cell if and when researchers try to control chemical pathways unrelated to cell movement. I've always been impressed with how many technological advancements in biology can be traced to a scientist taking the parts of one living thing and sticking them inside of another. Putting plant proteins in mammal (or some day, human) cells gives us the means to interact with those cells via light. But why stop there? We could have skin cells that produce chameleon pigments or blood cells with the antifreeze from Arctic bacteria. Most of this research would seem to be leading towards very controlled forms of transhumanism. Humans have always shaped their bodies to match their needs, but with tools like these we may gain access to changes that are both profound and reversible. [image credit: Wendell Lim et al, Nature] _______________________________________________ biomed mailing list biomed at postbiota.org http://postbiota.org/mailman/listinfo/biomed -- - Bryan http://heybryan.org/ 1 512 203 0507 From nanite1018 at gmail.com Sun Dec 13 07:01:06 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Sun, 13 Dec 2009 02:01:06 -0500 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <200912130617.nBD6HWjA014864@andromeda.ziaspace.com> References: <200912130617.nBD6HWjA014864@andromeda.ziaspace.com> Message-ID: <5143D6E7-4369-4EF5-B924-45A88CE8944F@GMAIL.COM> > This new article is one of the best I've seen on the CRU emails > controversy, especially in the way it distinguishes between the > players and their attitudes: > > http://www.aei.org/article/101395 I had been having trouble getting a good idea of everything that had happened with this whole Climategate thing, so thank you for sending out this great summary! Honestly, it is a travesty that these "scientists" will end up doing so much harm to what is an actually important area of science (if not in the short-to-medium term). The three, Mann and the two others that were abundantly political, need to be driven out of the field by the rest with hockey sticks, haha. And, seeing this, it makes me far more skeptical about man-made global warming/climate change than I was before all this. I believed, honestly, that scientists do not do this, that we (and I say we because I am a physics major and intend to be a physicist) were better. And that the ones that behave badly couldn't go very far, because the mechanisms of the profession would correct them out of existence. But apparently not, at least in some cases. I didn't realize that natural scientists were "at least" liberal (in the American sense). I realized many of them were (I was a socialist for a long time because I wanted the economy to be planned and organized like machines and instruments, oh how foolish I was). But I didn't realize it was nearly universal. Interesting. Well, at least I won't be part of that "group-think"! Joshua Job nanite1018 at gmail.com From stathisp at gmail.com Sun Dec 13 08:06:43 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 13 Dec 2009 19:06:43 +1100 Subject: [ExI] Wernicke's aphasia and the CRA.
In-Reply-To: <4B24846B.3090805@rawbw.com> References: <573329.23026.qm@web36502.mail.mud.yahoo.com> <4B24846B.3090805@rawbw.com> Message-ID: 2009/12/13 Lee Corbin : > By hypothesis, then the original "man" doesn't know a bit about > what is being said, only the new "internalization" you speak of? > I'm forced to conclude that there are two entities having experiences > in that same skull: (1) the original man whose day job was only > holding up a sign "Will Work for Food", ?and (2) a crafty inscrutable > Chinese speaking engineer that knows a great deal about bomb design > and that is willing to help a terrorist. That's how it is. In a normal brain a thinking entity supervenes on the behaviour of non-thinking entities, the neurons. In the CR a thinking entity supervenes on the behaviour of another thinking entity, the man in the room. There's no reason why *adding* consciousness at the lower level should eliminate it at the higher level. -- Stathis Papaioannou From pharos at gmail.com Sun Dec 13 10:37:18 2009 From: pharos at gmail.com (BillK) Date: Sun, 13 Dec 2009 10:37:18 +0000 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <200912130617.nBD6HWjA014864@andromeda.ziaspace.com> References: <200912130617.nBD6HWjA014864@andromeda.ziaspace.com> Message-ID: On 12/13/09, Max More wrote: > This new article is one of the best I've seen on the CRU emails controversy, > especially in the way it distinguishes between the players and their > attitudes: > > http://www.aei.org/article/101395 > > Oh Dog! :( Why are you still reading this neocon propaganda crap? The American Enterprise Institute is funded by big business, including ExxonMobil. They supported the policies of Bush and are now working against the Obama policies. Where's the peer review? This is just propaganda for political purposes. BillK From eugen at leitl.org Sun Dec 13 11:39:45 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 13 Dec 2009 12:39:45 +0100 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <512812.46927.qm@web36505.mail.mud.yahoo.com> References: <512812.46927.qm@web36505.mail.mud.yahoo.com> Message-ID: <20091213113945.GX17686@leitl.org> On Sat, Dec 12, 2009 at 02:52:38PM -0800, Gordon Swobe wrote: > --- On Sat, 12/12/09, Stathis Papaioannou wrote: > > > How is it that the symbol grounding problem is not a problem for brains? > > That's the great mystery, Stathis! I wish I knew the answer, but I can do nothing more than help others here understand the question. How very arrogant of you. Who has no clue about the problem space. > However we may choose to describe the brain, we cannot describe it as a software/hardware system. It does something that no such system will ever do. What is you hangup about that particular cathegory? No, it has no wires and doesn't do OSI layers either. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From painlord2k at libero.it Sun Dec 13 13:02:23 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sun, 13 Dec 2009 14:02:23 +0100 Subject: [ExI] atheism In-Reply-To: <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> Message-ID: <4B24E5DF.4050205@libero.it> Il 12/12/2009 14.27, John Clark ha scritto: > On Dec 11, 2009, at 1:42 PM, Mirco Romanato wrote: > >> Even if the chance to be real is only 0.00000000000000000000000001% >> It don't make it unreal, only improbable. Atheists make the error to >> believe that something very improbable is impossible. Given they are >> not able to prove their claim, their claim is based on faith. I never >> see an atheist claim that the existence of god is improbable. > > You can't, or at least shouldn't, be absolutely certain about anything > (I think) therefore if it is inappropriate to use the word "atheist" it > is also inappropriate to utter any simple declarative sentence about > anything because there is a chance, however unlikely, that you could be > wrong. > >> I never see an atheist claim that the existence of god is improbable. > > Then you lead a very sheltered life. Last time I checked, an "Atheist" was someone who negates the existence of any god, not someone who argues that god could exist. Agnostics argue about not knowing whether god exists (or that it is not possible to know whether god exists or not). If someone claims that god could exist, he cannot be an Atheist, unless he also claims to know for sure that the possibility of god existing is/was not realized. And that is a harder claim to make. Now, if your definitions of Atheism and Agnosticism are different, we could talk about them, if you spell them out. Mirco P.S. Anyway, I had made a remark about how the BELIEF in a particular type of personal god is useful to "anchor" the BELIEF in unchanging and knowable/understandable laws of nature. Without the belief that nature's laws are fixed forever and can be discovered, I think modern science cannot exist. The discussion about God's existence is/appears to be a knee-jerk reflex of atheists, or a way to avoid a difficult topic (or both). -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.105/2562 - Data di rilascio: 12/13/09 08:39:00 From alfio.puglisi at gmail.com Sun Dec 13 13:46:32 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 13 Dec 2009 14:46:32 +0100 Subject: [ExI] Living temperature dataset In-Reply-To: <7641ddc60912122042p74b0c549p591e5e0d734011b6@mail.gmail.com> References: <4B240B05.7080106@libero.it> <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> <7641ddc60912122042p74b0c549p591e5e0d734011b6@mail.gmail.com> Message-ID: <4902d9990912130546q788105b7u5e591be5deba699d@mail.gmail.com> On Sun, Dec 13, 2009 at 5:42 AM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > 2009/12/12 Alfio Puglisi : > > > > > If one suspect that adjustments are systematically biased in one > direction, > > the correct thing to do is to take all of them and look at their > > distribution.
This is the result for the entire GHCN dataset, on which > GISS > > is based: > > > > > http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/ > > > > You get a nice gaussian distribution with an average of 0 degrees. That > is, > > there are as many negative adjustments as there are positive ones > > ### See here: > > > http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/ > > - please watch it and read the article before commenting. > Wow. A video with a 6th grader and his dad, who say that UHI exists. And I have to watch it, otherwise I'm not qualified to comment! You think I'm going to take you seriously after this? I can play this game too: the following article: http://www.realclimate.org/index.php/archives/2004/12/the-surface-temperature-record-and-the-urban-heat-island/ references two papers: one in Journal of Climate and one in Nature. Please read them before commenting. If you prefer, you can have a 6th grader read them and make a video. > Let me explain on a hypothetical scenario: > > Imagine that you take all rural site temperatures and adjust them > upward by 1 degree. Then you take an equal number of urban sites and > adjust them downward by 1 degree. Obviously, the net adjustment per > site will be zero, just as described in Alfio's link. However, note > that adjusting rural sites up doesn't make physical sense, since there > is no "rural cooling island effect" you would need to adjust for - > these data should be consumed raw. Adjusting urban sites down makes > sense, since their temperatures are the result of the UHI - but in > this hypothetical this physically correct adjustment is negated by a > physically improper adjustment of the rural data. The overall effect > is that the UHI is fully transferred into the homogenized temperature > record, and a spurious warming trend is seen. > This is a valid concern, but observations show the opposite: the warming is higher where there is no UHI effect to correct for, like in the Arctic. See for example http://scienceblogs.com/illconsidered/2006/02/warming-due-to-urban-heat-island.php Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Dec 13 14:20:07 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 13 Dec 2009 15:20:07 +0100 Subject: [ExI] Atheism In-Reply-To: <810435.48641.qm@web59901.mail.ac4.yahoo.com> References: <810435.48641.qm@web59901.mail.ac4.yahoo.com> Message-ID: <580930c20912130620w2868a63fkd782acbb85fb67a6@mail.gmail.com> 2009/12/13 Post Futurist > >> I'll grant that many great works of art have been inspired >> by religion, but I think most of those talents would >> have been >> expressed equally well on non-religious subjects. But the great minds that were >> wasted pursuing idiotic theological problems could have solved real, >> major problems. I think that's tragic." > > "Equally well" cuts both ways. Religion is as beneficial as the secular all round. The real point IMHO is: if "religion" must be, why do we need a metaphysical, faith-based, anti-scientific one? 
Besides the experience of purely secular "religious" creeds such as Marxism - which have their own problems but abundantly showed that new, successful religions can command widespread support without the need to invoke Pink Invisible Unicorns - Confucianism, hinduism, paganism, Zen, Shinto work equally well, and have few if any of the theoretical and practical defects that make monotheism unacceptable to those who adhere to an opposite, say promethean, value system... -- Stefano Vaj From gts_2000 at yahoo.com Sun Dec 13 14:21:55 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 13 Dec 2009 06:21:55 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <4B24846B.3090805@rawbw.com> Message-ID: <529711.84913.qm@web36504.mail.mud.yahoo.com> Hi Lee, good to see you again. > Using the "systems reply" terminology, does the Chinese > Room laugh at jokes?? Doth it have feelings?? Hath the > CR not... Laugh at jokes, yes, as we should require that much to give it a passing score on the Turing test. But have [subjective] feelings? These sorts of internal mental states are exactly what are at issue. Lest we mangle Searle's metaphor, the "man" who internalized the rule book and stepped outside the room (in Searle's reply to his systems critics) would have only those feelings that relate to his puzzlement about the meanings of Chinese words, and we should consider those feelings irrelevant. Remember the man exists only as a sort of literary device to help people understand the symbol grounding problem. Searle wants us to see that formal programs do not understand the symbols they manipulate any more than does a shopping list understand the words "bread" and "milk". > Anyway. Suppose that the CR is asked the question, "How may > I here in L.A. this week make an atomic bomb, and revenge my > poor Middle Eastern people against the Imperialists?" > > And the CR responds---all in Chinese characters---by > providing concise directions for building a backyard bomb! Then I think we should have it arrested to protect the public and let the philosophers continue their debate about a chinese room that sits now in the jail cell. (Not sure of your point.) >> [Moreover, say] the man internalizes the rule book >> and steps outside the room. Different picture, same >> symbol grounding problem. > > By hypothesis, then the original "man" doesn't know a bit > about what is being said, only the new "internalization" you > speak of? There exists no "original" man in the second thought experiment. Neither the man in the room in the original experiment nor the man alone in the second (and different) thought experiment understand Chinese symbols. Both thought experiments illustrate the symbol grounding problem. > For me, this exposes to criticism the notion that the room > isn't "a man". Right. Behind Searle's figure of speech you will find a Universal Turing Machine that passes the Turing test without overcoming the symbol grounding problem. The UTM only appears to understand the symbols it manipulates, giving false positives on the TT. Because brains overcome the symbol grounding problem and software/hardware systems do not, it appears we cannot describe the brain as a software/hardware system. Whatever the brain does, it does something besides run formal programs. 
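To make concrete what "formal program" means in this discussion, here is a minimal sketch of a rule-book responder; the rule table and the transliterated strings are hypothetical placeholders, not Searle's actual example. The point it illustrates is that such a program maps input symbols to output symbols and relates symbols only to other symbols; nothing in it reaches outside the table.

    # Minimal sketch of a "rule book" responder (hypothetical toy rules).
    # A pure symbol-to-symbol mapping: no step depends on what any string means.
    RULE_BOOK = {
        "ni hao ma": "hen hao, xie xie",
        "ni hui shuo zhongwen ma": "hui yi dian",
    }

    def respond(symbols):
        # Look the input up and emit the paired output; anything unknown
        # gets a stock "please say it again" string.
        return RULE_BOOK.get(symbols, "qing zai shuo yi bian")

    print(respond("ni hao ma"))  # a well-formed reply, produced mindlessly

Whether scaling such a table up to Turing-test performance would ever amount to understanding is exactly what the two sides of this thread dispute; the sketch only shows the kind of mechanism under discussion.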
-gts From stefano.vaj at gmail.com Sun Dec 13 14:43:58 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 13 Dec 2009 15:43:58 +0100 Subject: Re: [ExI] more atheism In-Reply-To: <894257.93025.qm@web56806.mail.re3.yahoo.com> References: <894257.93025.qm@web56806.mail.re3.yahoo.com> Message-ID: <580930c20912130643r2eeabc3crb5fce195eb9cb0ee@mail.gmail.com> 2009/12/13 flemming > I believe Stefano makes the logical or illogical error of assuming what agnostics assume. I consider myself an agnostic, but based on the fact that nobody can present to me any evidence that there is or is not a god. I probably did not explain myself clearly enough. I claim the right to be "atheist" also in the sense of believing that something with the features of Jahvè/Allah does not exist, not because I think it might be ultimately demonstrated that they don't exist (even though in my perspective some problems exist with their concept itself...), but simply because this is what we do in our everyday life for any number of logical or physical possibilities, such as six-footed ducks, or the guilt of conceivably possible perpetrators of a crime when the relevant burden of proof is not met. OTOH, I do believe that a character called Pallas Athena is included in the narrative of the Iliad and in the very fabric of my culture, and that character and what she represents, as mythical as she may be, has played a larger role in history than many forgettable (but "real" in the sense that Jahvè is claimed to be) individuals. -- Stefano Vaj From gts_2000 at yahoo.com Sun Dec 13 15:15:21 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 13 Dec 2009 07:15:21 -0800 (PST) Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: Message-ID: <920757.76799.qm@web36508.mail.mud.yahoo.com> Stathis (and Lee), > In the CR a thinking entity supervenes on the behaviour of another > thinking entity, the man in the room No. The "internalization" in the rejoinder to the systems reply to the CRA refers only to the internalization by the man of the program and the I/O ports. Instead of a physical rule-book on a bookshelf, he has a memorized lookup table. Instead of a slot in the door through which he receives inputs and sends outputs, he has his own ears and mouth. The internalization thus does not refer to the internalization of any other "thinking entity". Searle's systems reply critics had argued (with perfect logic, even if they missed the point) that while the man inside lacks understanding, it does not follow that the room also lacks understanding. After all, the man inside is not the subject that passes the Turing test. The room is! So Searle replied, "Well then forget about the Chinese room and all its trappings (I only put that stuff there to help you visualize what's going on) and let the Englishman inside memorize the program. He then steps outside the room and takes the TT in Chinese. He passes it, yet he still does not understand Chinese." -gts --- On Sun, 12/13/09, Stathis Papaioannou wrote: > From: Stathis Papaioannou > Subject: Re: [ExI] Wernicke's aphasia and the CRA. > To: "ExI chat list" > Date: Sunday, December 13, 2009, 3:06 AM > 2009/12/13 Lee Corbin : > > > By hypothesis, then the original "man" doesn't know a > bit about > > what is being said, only the new "internalization" you > speak of?
> > I'm forced to conclude that there are two entities > having experiences > > in that same skull: (1) the original man whose day job > was only > > holding up a sign "Will Work for Food", ?and (2) a > crafty inscrutable > > Chinese speaking engineer that knows a great deal > about bomb design > > and that is willing to help a terrorist. > > That's how it is. In a normal brain a thinking entity > supervenes on > the behaviour of non-thinking entities, the neurons. In the > CR a > thinking entity supervenes on the behaviour of another > thinking > entity, the man in the room. There's no reason why > *adding* > consciousness at the lower level should eliminate it at the > higher > level. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From jonkc at bellsouth.net Sun Dec 13 15:45:30 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 10:45:30 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <554818.57147.qm@web36502.mail.mud.yahoo.com> References: <554818.57147.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 12, 2009, Gordon Swobe wrote: > Perhps you missed it Damien it but that reply of the systems critics was answered many years ago. > In the reply, the man internalizes the rule book and steps outside the room. You are assuming the little man has qualities that are clearly superhuman, but when you ask "do you know Chinese?" we are supposed to believe that the person answering the question is just a regular normal person. So either there are 2 personalities in the little man, a superman and a normal man, or the little man is lying about his language abilities, or the entire thought experiment is dumb. And I still think Searle's philosophical work would improve enormously if he took a remedial adult education biology course, there must be a junior college near him that offers it. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Sun Dec 13 16:01:42 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 13 Dec 2009 16:01:42 -0000 Subject: [ExI] atheism Message-ID: <20091213160142.52325.qmail@moulton.com> On Sun, 2009-12-13 at 14:02 +0100, Mirco Romanato wrote: > Last time I checked "Atheist" was someone that negate the existance of > any god. Exactly when and where was the last time you checked because if you had checked this email list you would have found that there are two common definitions of the term atheist. In fact in the email you quoted was this: < deities do not exist.[2] In the broadest sense, it is the absence of > belief in the existence of deities.[3]>> It should be pointed out that the definition of atheism as the absence of belief in the existence of deities is not new. Consider the statement of Baron d'Holbach who is reported to have said in 1772 that "All children are born Atheists; they have no idea of God." And in more recent times my friend George H. Smith has written extensively on this topic in various books and essays. Just google George H. Smith and atheism and you will get the references. The definition of atheism as "absence of belief" is in my opinion and the opinion of majority of atheists that I know personally the best type of definition to use. Consider this organization of which I am a member: American Atheists. What do they say? 
Atheism is a lack of belief in gods, from the original Greek meaning of "without gods." That is it. There is nothing more to it. If someone wrote a book titled "Atheism Defined," it would only be one sentence long. That quote is from their webpage: http://www.atheists.org/atheism/About_Atheism which I strongly urge people read before they continue to post message on the topic of atheism. The webpage discusses the definition of atheism as well as explains why atheism is not a religion. Now people can disagree and have other definitions but they need to know that these other definitions have serious philosophical problems and are thus falling out of usage. Fred From max at maxmore.com Sun Dec 13 16:03:01 2009 From: max at maxmore.com (Max More) Date: Sun, 13 Dec 2009 10:03:01 -0600 Subject: [ExI] Scientists Behaving Badly Message-ID: <200912131603.nBDG3D4j005106@andromeda.ziaspace.com> >Why are you still reading this neocon propaganda crap? Oh, sorry, Bill. From now on, I'll only read the material you approve of. >Where's the peer review? This is just propaganda for political purposes. Of course. We now all know how perfectly peer review works. (By the way, not every worthwhile piece must be peer reviewed.) >The American Enterprise Institute is funded by big business, >including ExxonMobil. They supported the policies of Bush and are >now working against the Obama policies. And the climate alarmists and those who are doing their best to shut up anyone with questions are massively funded by government and other institutional sources. As the CRU emails showed, even big oil companies are not averse to funding the "consensus" people. If you're going to make the funding argument (which is simply a way of avoiding dealing with the real arguments), I offer this: The US government has spent over $79 billion since 1989 on policies related to climate change, including science and technology research, administration, education campaigns, foreign aid, and tax breaks. Despite the billions: "audits" of the science are left to unpaid volunteers. A dedicated but largely uncoordinated grassroots movement of scientists has sprung up around the globe to test the integrity of the theory and compete with a well funded highly organized climate monopoly. They have exposed major errors. http://scienceandpublicpolicy.org/images/stories/papers/originals/climate_money.pdf ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From jonkc at bellsouth.net Sun Dec 13 16:03:01 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 11:03:01 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <573329.23026.qm@web36502.mail.mud.yahoo.com> References: <573329.23026.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 12, 2009, Gordon Swobe wrote: > Searle designed the Chinese Room thought experiment to prove that third premise in his formal argument. And made a fool of himself when he failed. I want to tell you about Clark's Chinese Room. You are a professor of Chinese Literature and are in a room with me and the great Chinese Philosopher and Poet Laotse. Laotse writes something in his native language on a paper and hands it to me. I walk 10 feet and give it to you. You read the paper and are impressed with the wisdom of the message and the beauty of its language. Now I tell you that I don't know a word of Chinese, can you find any deep implications from that fact? 
I believe Clark's Chinese Room is just as profound as Searle's Chinese Room. Not very. > The Englishman in the room shuffles the Chinese symbols according to the rules of Chinese syntax, and he does this well enough to pass the Turing test, yet he never understands a word of Chinese. At this very instant the neurotransmitter acetylcholine is shuffling around electrons in your brain in response to this English sentence. And yet acetylcholine does not know one word of English. Do you find that fact profound too? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Dec 13 16:15:25 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 11:15:25 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <187620.42095.qm@web36502.mail.mud.yahoo.com> References: <187620.42095.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 12, 2009, at 2:52 PM, Gordon Swobe wrote: > That position will lead to panpsychism - the idea that all matter has consciousness -- unless you find some way to justify one thing as conscious and another as not Use the exact thing that you use right now to determine that some people are smart and others stupid and to determine that some things are conscious and some things are not, behavior. And don't you find the idea that carbon can produce consciousness but silicon can't a little parochial? > without using biological consciousness as the measure! You can't use consciousness, biological or otherwise, as the measure because with one exception it is completely undetectable. You'll have to settle for intelligence, it's the best you can do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Dec 13 16:33:15 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 11:33:15 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <445222.5956.qm@web36503.mail.mud.yahoo.com> References: <445222.5956.qm@web36503.mail.mud.yahoo.com> Message-ID: <85672D59-DF69-4436-9D2F-30FDEB492830@bellsouth.net> On Dec 12, 2009, at 4:47 PM, Gordon Swobe wrote: > > It seems you would not care how we constructed those neurons, provided they squirted the same neurotransmitters and emitted the same electrical signals between themselves, i.e., that they performed the same functions as real biological neurons. I will care about those internal workings just as soon as you show me that if I multiply 6 times 5 on a vacuum tube computer I will get a different answer than if I multiply 6 times 5 on a solid state computer that works by semiconductors. > We could on that view construct a contraption the size of Texas with gigantic neurons constructed of, say, band-aids, Elmers glue, beer cans and toilet paper. Provided those neurons squirted the same chemicals and signals betwixt themselves as in a real brain, would you consider the contraption conscious? I don't understand why you keep asking that, if it acts intelligently then it's conscious and I don't care if its made of dog turds. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Dec 13 16:49:20 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 13 Dec 2009 08:49:20 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <236279.62526.qm@web36504.mail.mud.yahoo.com> --- On Sun, 12/13/09, John Clark wrote: > You are assuming the little man has qualities... 
I think you've confused the parable with the symbol grounding problem it illustrates. I sometimes do it myself, so I've changed the subject line to point to the meaning of the parable. Does a piece of paper understand the words written on it? Does a shopping list understand the meaning of "bread" and "milk"? If you think it does not -- if you think the understanding of symbols (semantics) takes place only in conscious minds -- then you agree with Searle and most people. If Searle has it right then formal programs have no more consciousness than shopping lists and so will never overcome the symbol grounding problem. No matter how advanced software/hardware systems may become, they will never understand the meanings of the symbols they manipulate. The challenge for us then is not to show technical problems in a silly story about a man in a Chinese room; it is rather to show that formal programs differ in some important way from shopping lists, some important way that allows programs to overcome the symbol grounding problem. -gts From jonkc at bellsouth.net Sun Dec 13 17:08:11 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 12:08:11 -0500 Subject: [ExI] Wernicke's aphasia and the CRA. In-Reply-To: <920757.76799.qm@web36508.mail.mud.yahoo.com> References: <920757.76799.qm@web36508.mail.mud.yahoo.com> Message-ID: On Dec 13, 2009, at 10:15 AM, Gordon Swobe wrote: > Searle's systems reply critics had argued (with perfect logic, even if they missed the point) that while the man inside lacks understanding, it does not follow that the room also lacks understanding. After all the man inside is not the subject that passes the Turing test. The room is! > > So Searle replied, "Well then forget about the Chinese room and all its trappings (I only put that stuff there to help you visualize what's going on) and let the Englishman inside memorize the program. Searle is like a stage magician who makes a great flourish with his right hand to make sure we're looking at it while his left hand is subtly hiding the card. We are accustomed to people being conscious we are not accustomed to rooms being so, thus if we want to check up on the consciousness of Chinese or anything else we are accustomed to ask the man. The man replies that he doesn't know Chinese and voila Searle has magically made consciousness disappear. Pay no attention to the room full of reference books that is larger than the observable universe, Searle says, pay no attention that the Chinese questions are answered a hundred thousand million billion trillion times slower than a native Chinese speaker would answer them even if the man moved at 99% the speed of light. If somebody sees through that deception Searle tries another ruse, memorize those reference books. We are accustomed to a person having one identity not two so if we ask him if he knows Chinese and he says no we are accustomed to thinking that ends the matter. Searle says pay no attention to the fact that this normal average everyday man has just accomplished a mental feat that would make a Jupiter Brain green with envy. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Sun Dec 13 17:24:54 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 12:24:54 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <236279.62526.qm@web36504.mail.mud.yahoo.com> References: <236279.62526.qm@web36504.mail.mud.yahoo.com> Message-ID: <5CFC99EF-C101-4A94-BB78-D41EBFFA9283@bellsouth.net> On Dec 13, 2009, at 11:49 AM, Gordon Swobe wrote: > Does a piece of paper understand the words written on it? Does a shopping list understand the meaning of "bread" and "milk"? If you think it does not -- if you think the understanding of symbols (semantics) takes place only in conscious minds -- then you agree with Searle and most people. I don't know how to read the holes in obsolete punch cards, they are completely meaningless to me and you too probably, but they have meaning to a vacuum tube based 1950's punch card reading machine; you cannot deny it can put those cards in a meaningful order. Incidentally why do you suppose Searle didn't replace the little man with one of those punch card reading machines? It could certainly do a better job than a real flesh and blood human, so why not use it? I'll tell you why, because then his deception would be less effective and his magic trick of making consciousness disappear would not have worked. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Dec 13 17:31:32 2009 From: pharos at gmail.com (BillK) Date: Sun, 13 Dec 2009 17:31:32 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <236279.62526.qm@web36504.mail.mud.yahoo.com> References: <236279.62526.qm@web36504.mail.mud.yahoo.com> Message-ID: On 12/13/09, Gordon Swobe wrote: > Does a piece of paper understand the words written on it? Does a shopping > list understand the meaning of "bread" and "milk"? If you think it does not -- > if you think the understanding of symbols (semantics) takes place only in > conscious minds -- then you agree with Searle and most people. > > If Searle has it right then formal programs have no more consciousness than > shopping lists and so will never overcome the symbol grounding problem. > No matter how advanced software/hardware systems may become, they will > never understand the meanings of the symbols they manipulate. > > The challenge for us then is not to show technical problems in a silly story > about a man in a Chinese room; it is rather to show that formal programs differ > in some important way from shopping lists, some important way that allows > programs to overcome the symbol grounding problem. > > The object of strong AI (human-equivalent or greater) is not to process symbols. The language translation programs already do that, with some degree of success. And everyone agrees that they are not conscious. Strong AI programs will indeed process symbols, but they also have the objective of achieving results in the real world. If AI asks for milk and you give it water, saying 'Here is milk' it has to be able to recognize the error (symbol grounding). i.e. If the AI is unable to operate in the outside world then it is not strong AI and your symbol manipulation argument fails. Now if you extend your argument a bit.... If a strong AI has human sense equivalents, like vision, hearing, taste, touch, etc. plus symbol manipulation, all to such a level that it can operate successfully in the world, then you have a processor which could pass for human. 
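As a minimal sketch of what "recognize the error" could amount to in code (the symbol table, the observed_substance argument, and its values below are hypothetical stand-ins for a real perception step, not any actual robot API), the difference is whether the reply is ever checked against the world at all:

    # Hypothetical sketch: an ungrounded reply comes straight from a symbol
    # table; a grounded check compares the request against what a stand-in
    # perception step reports was actually handed over.
    SYMBOL_TABLE = {"milk": "Here is milk"}

    def ungrounded_reply(request):
        # Symbol-to-symbol only: a mislabelled object cannot be noticed here.
        return SYMBOL_TABLE.get(request, "I do not have that")

    def grounded_check(request, observed_substance):
        # observed_substance stands in for a sensor or classifier result.
        return observed_substance == request

    print(ungrounded_reply("milk"))          # always "Here is milk"
    print(grounded_check("milk", "water"))   # False: the swap is detectable

The sketch assumes the perception step already works; the hard part of grounding is, of course, getting from raw senses to labels like "water" in the first place.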
You can then try asking it if it is conscious and see what answer you get...... BillK From jonkc at bellsouth.net Sun Dec 13 17:44:31 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 13 Dec 2009 12:44:31 -0500 Subject: [ExI] atheism In-Reply-To: <4B24E5DF.4050205@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> Message-ID: On Dec 13, 2009, Mirco Romanato wrote: >>> I never see an atheist claim that the existence of god is improbable. >> Then you lead a very sheltered life. > Last time I checked "Atheist" was someone that negate the existance of any god. Then obviously you are not checking in the right place. Richard Dawkins is certainly an Atheist and yet he said that on a scale of 1 to 10, 1 being certain God exists and 10 being certain he does not, would place himself at about 9.99. An Atheist is someone who thinks the existence of God is such a remote possibility that it's silly for the idea to play any part in your life. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Sun Dec 13 17:56:16 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 13 Dec 2009 17:56:16 -0000 Subject: [ExI] Scientists Behaving Badly Message-ID: <20091213175616.61346.qmail@moulton.com> On Sun, 2009-12-13 at 10:37 +0000, BillK wrote: On 12/13/09, Max More wrote: > > This new article is one of the best I've seen on the CRU emails controversy, > > especially in the way it distinguishes between the players and their > > attitudes: > > > > http://www.aei.org/article/101395 ... Let us consider the issue of funding. > The American Enterprise Institute is funded by big business, including > ExxonMobil. Are you really sure? According to the most recent annual report on the AEI website the sources of revenue were: 36% Individuals 27% Conferences, Book Sales and other revenues 21% Corporations 16% Foundations I think the above values are from their 2007 report since I could not find their 2008 report. So if 2007 is a typical year then corporate donations appear to be 21% of their revenue. However it is possible that some conference attendees were employees of corporations and had their conferences fees paid or reimbursed by their employers. Thus the revenue from corporations might be more than 21% but how much more is difficult to tell based on the information I have found. I am not an AEI supporter or defender but I do think if we criticize AEI then the criticism should be based on presenting the information and not on vague (and in this case) probably false characterizations. And I do think AEI should be criticized; just like I think the ExxonMobil, the IPCC, the UN, the local knitting club and every other organization should be criticized. No sacred cows and no free rides. Now to the broader issue of funding and research. It is often implied indirectly or said explicitly that individuals and groups will bias their research and reporting based on their funding. Given what we know of humans this would not surprise me. However I suggest that we need to avoid automatically discrediting something just based on funding since it is possible for accurate research to be funded by a source with a vested interest just as it is possible for inaccurate research. I am not saying the outcomes are equally likely; I am just saying both are possible. 
I would also caution people who continue using funding sources as a basis of criticism that this can boomerang. Consider the various governments, companies, foundations and other sources who claim that global warming is a serious, imminent, human-caused threat. If the amount that they put into funding exceeds the amount put in by ExxonMobil and similar companies then the funding argument can backfire. I mention all of this because I really think we need to de-politicize the entire discussion and have an open and transparent debate with all of the raw data, the research methods, the assumptions, everything placed for all to easily and freely see and evaluate. So for example how about reading the article that Max referenced and criticizing it based on its content, not on the website on which it is published. I have read the article. Most of what I read in the article are things I had seen elsewhere, although the part about improving the IPCC and improving climate research might be interesting; however, more in-depth analysis is needed for those proposals.

Fred

From gts_2000 at yahoo.com Sun Dec 13 17:56:59 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 13 Dec 2009 09:56:59 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <5CFC99EF-C101-4A94-BB78-D41EBFFA9283@bellsouth.net>
Message-ID: <597657.53369.qm@web36505.mail.mud.yahoo.com>

--- On Sun, 12/13/09, John Clark wrote:

> Incidentally why do you suppose Searle didn't replace the little man with one of those punch card reading machines? It could certainly do a better job than a real flesh and blood human, so why not use it?

Such an argument would not address the question of strong AI, where a strong AI is defined as one that has mindful understanding of its own words and does not merely speak mindlessly. Searle considers that the difference between weak and strong AI, and on this point I agree with him.

You've mentioned that you don't care about the difference between weak and strong AI. That's fine with me, but in that case neither Searle nor I have anything interesting to say to you.

Some people do care about the difference between strong and weak. I happen to count myself among them. To people like me Searle has something very interesting to say.

-gts

From jonkc at bellsouth.net Sun Dec 13 18:25:27 2009
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 13 Dec 2009 13:25:27 -0500
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <597657.53369.qm@web36505.mail.mud.yahoo.com>
References: <597657.53369.qm@web36505.mail.mud.yahoo.com>
Message-ID: <442ED923-E5A1-40BC-8076-5989BDEF6875@bellsouth.net>

On Dec 13, 2009, at 12:56 PM, Gordon Swobe wrote:

>> Incidentally why do you suppose Searle didn't replace the little man with one of those punch card reading machines? It could certainly do a better job than a real flesh and blood human, so why not use it?
> Such an argument would not address the question of strong AI, where a strong AI is defined as one that has mindful understanding of its own words

In other words you mean the ability to read symbols and use that information to accomplish a task, like arranging a large set of cards in a particular order and printing new information in that symbolic language on the cards. Come to think of it I don't believe those old punch card machines even used vacuum tubes, they were purely mechanical.

> and does not merely speak mindlessly.
If you think something can behave intelligently without a mind then that word has no meaning for you, it is, dare I say it, mindless. John K Clark > Searle considers that the difference between weak and strong AI, and on this point I agree with him. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Sun Dec 13 18:35:21 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 13 Dec 2009 19:35:21 +0100 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <20091213175616.61346.qmail@moulton.com> References: <20091213175616.61346.qmail@moulton.com> Message-ID: <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> On Sun, Dec 13, 2009 at 6:56 PM, wrote: > > > So for example how about reading the article that Max referenced and > criticizing it based on its content not on the website on which it is > published. Sure, some samples of the general tone: "...the air has been going out of the global warming balloon." "As tempting as it is to indulge in Schadenfreude over the richly deserved travails of a gang that has heaped endless calumny on dissenting scientists...." If this is "one of the best", I wonder what the worst ones may say. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Sun Dec 13 19:21:30 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 13 Dec 2009 11:21:30 -0800 (PST) Subject: [ExI] extropy-chat Digest, Vol 75, Issue 22 In-Reply-To: Message-ID: <790400.17728.qm@web32008.mail.mud.yahoo.com> > From: Stefano Vaj > To: ExI chat list > Subject: Re: [ExI] Tolerance > Message-ID: > ??? <580930c20912110415pb4d90feuab4bcefe0e4b9449 at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > 2009/12/10 Damien Broderick > > > One huge problem with this rather disjointed thread is > the confusion > > between "religion" and "belief in a god or gods". An > atheist, just from the > > derivation of the word, is a person who has no belief > in deity. But of > > course there are many godless religions. Animist > religions seem to have no > > gods, per se, but see the world as suffused with and > shaped by personified > > forces and passions. The Australian aboriginal > Dreaming is a vast ancient > > integral cosmology in which the seasonal landscape and > its inhabitants are > > representations of volitional Ancestors; there's > nothing remotely like an > > Abrahamic God--but it would seem absurd not to call > this all-encompassing > > worldview "religious." > > > > Yes, this is fundamental point. And I suspect that even our > understanding of > pre-christian or non-European gods are nowadays strongly > influenced by > monotheistic views (including for some of their > followers). > > For instance, ancient Greeks used not to see any especially > dramatic > contradictions in the fact that very different and > incompatible versions of > the same myth were widespread. Chronology thereof was also > quite vague. > > It has been persuasively contended that this shows that > they did not > consider statements concerning everyday life ("there is a > stone in this > basket") on the same basis as statements concerning > mythical facts ("Pallas > Athena was wounded during the Troy siege"). Is this not also true of your average modern holy roller? Anyone who reads the bible can't fail to notice the different and incompatible myths therein, not to mention the vague chronology. Yet most of them don't seem able to separate real life from myths. 
Ben Zaiboc

From bbenzai at yahoo.com Sun Dec 13 20:10:45 2009
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Sun, 13 Dec 2009 12:10:45 -0800 (PST)
Subject: [ExI] atheism
In-Reply-To:
Message-ID: <806692.44426.qm@web32008.mail.mud.yahoo.com>

Stefano Vaj said:

> And, btw, I am probably not an "atheist" as far as Zeus and Thor are concerned. :-)
>
> I simply do not accord them the same status or kind of "reality" which I consider appropriate for the keyboard I am typing on (even though I could improbably be hallucinating it...) or which is claimed for Jahvè/Allah/the Holy Trinity.

Wait, you say you are not a Zeus atheist (you think he is real), and do not accord him the same status as Yahweh (so you think Yahweh is not real)? You have three categories of reality, one in which Zeus belongs (real), one for your keyboard (somehow differently real), and one for Yahweh (presumably not real)? I don't understand why Yahweh and Zeus aren't grouped together.

Ben Zaiboc

From moulton at moulton.com Sun Dec 13 20:14:04 2009
From: moulton at moulton.com (moulton at moulton.com)
Date: 13 Dec 2009 20:14:04 -0000
Subject: [ExI] Scientists Behaving Badly
Message-ID: <20091213201404.86406.qmail@moulton.com>

Alfio

Thank you for referring to something in the article. I assume you read it all and found the lines you quote as most worthy of discussion. See my comments below.

On Sun, 2009-12-13 at 19:35 +0100, Alfio Puglisi wrote:
On Sun, Dec 13, 2009 at 6:56 PM, wrote:
> So for example how about reading the article that Max referenced and criticizing it based on its content not on the website on which it is published.
>
> Sure, some samples of the general tone:
>
> "...the air has been going out of the global warming balloon."

In the article that phrase appears to refer both to how well global warming is doing as a hypothesis and to the state of general public perception. On the first point, that is what is being debated currently and I think it is too soon to tell. We will likely know more after a lot of people go back and double-check the data sets and research methods. On the second point, about public perception, my reading of various media sources is that the author needs to delve deeper. This question of public perception needs a more nuanced discussion than I think the author gives, since it needs to be differentiated into perception of global warming as an isolated item and perception of global warming relative to other items. On this see for example the mention of how global warming has fallen to third place as discussed in Eurobarometer 313 http://ec.europa.eu/public_opinion/archives/ebs/ebs_313_en.pdf While certainly this is just one study and no one is claiming that it is definitive, I think it represents part of the relevant information and is an interesting example of some trends worth watching over the coming months. This is not to say that the respondents to the poll are correct in their evaluation; perhaps they should have kept global warming as number 1.

> "As tempting as it is to indulge in Schadenfreude over the richly deserved travails of a gang that has heaped endless calumny on dissenting scientists...."
How about we look at the entire paragraph so that we get an idea about the author was getting at: "As tempting as it is to indulge in Schadenfreude over the richly deserved travails of a gang that has heaped endless calumny on dissenting scientists (NASA's James Hansen, for instance, compared MIT's Richard Lindzen to a tobacco-industry scientist, and Al Gore and countless others liken skeptics to "Holocaust deniers"), the meaning of the CRU documents should not be misconstrued. The emails do not in and of themselves reveal that catastrophic climate change scenarios are a hoax or without any foundation. What they reveal is something problematic for the scientific community as a whole, namely, the tendency of scientists to cross the line from being disinterested investigators after the truth to advocates for a preconceived conclusion about the issues at hand. In the understatement of the year, CRU's Phil Jones, one of the principal figures in the controversy, admitted the emails "do not read well." Jones is the author of the most widely cited leaked emissive, telling colleagues in 1999 that he had used "Mike's Nature [magazine] trick" to "hide the decline" that inconveniently shows up after 1960 in one set of temperature records. But he insists that the full context of CRU's work shows this to have been just a misleading figure of speech. Reading through the entire archive of emails, however, provides no such reassurance; to the contrary, dozens of other messages, while less blatant than "hide the decline," expose scandalously unprofessional behavior. There were ongoing efforts to rig and manipulate the peer-review process that is critical to vetting manuscripts submitted for publication in scientific journals. Data that should have been made available for inspection by other scientists and outside critics were released only grudgingly, if at all. Perhaps more significant, the email archive also reveals that even inside this small circle of climate scientists--otherwise allied in an effort to whip up a frenzy of international political action to combat global warming--there was considerable disagreement, confusion, doubt, and at times acrimony over the results of their work. In other words, there is far less unanimity or consensus among climate insiders than we have been led to believe." Given the above quote I think it reinforces my point that we need to depoliticize and open up this entire climate debate and strive for more transparency. I hope you agree. Fred From max at maxmore.com Sun Dec 13 20:57:39 2009 From: max at maxmore.com (Max More) Date: Sun, 13 Dec 2009 14:57:39 -0600 Subject: [ExI] Alan Harrington's headstone Message-ID: <200912132057.nBDKvm2B007485@andromeda.ziaspace.com> Most of you know of Alan Harrington, author of The Immortalist. I was curious about how and when he died. In finding the answer, I came across a passage that I found grimly funny: How strange it is to think of the Immortalist in the ground. His headstone tells it all: *Get me out of here*. http://weeklywire.com/ww/07-08-97/tw_feat.html I was led to this after reading Jason Silva's lovely article: http://www.hplusmagazine.com/articles/forever-young/immortalism-ernest-becker-and-alan-harrington-overcoming-biological-limitatio My one reservation is that Harrington was way too hard on discos... ------------------------------------- Max More, Ph.D. 
Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From bbenzai at yahoo.com Sun Dec 13 20:55:04 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 13 Dec 2009 12:55:04 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <147659.50785.qm@web32005.mail.mud.yahoo.com> Gordon Swobe > The challenge ... is ... > to show that formal programs differ in some important way > from shopping lists, some important way that allows programs > to overcome the symbol grounding problem. I've just been following this thread peripherally, but this caught my attention. Are you *seriously* saying that you think shopping lists don't differ from programs? If so, your shopping lists must be wonderful things indeed. I've never had the need to include variables, control structures, interfaces, etc., in my shopping lists. Secondly, if you don't think a program can solve the mysteriously difficult 'symbol grounding problem', how can a brain do it? Are you saying that a system that processes and transmits signals as ion potential differences can do things that a system that processes and transmits signals as voltages can't? What about electron spins? photon wavelengths? polarisation angles? quantum states? magnetic polarities? etc., etc. Is there something special that ion potential waves have over these other ways of representing and processing information? If so, what? Ben Zaiboc From thespike at satx.rr.com Sun Dec 13 21:41:01 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 13 Dec 2009 15:41:01 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <147659.50785.qm@web32005.mail.mud.yahoo.com> References: <147659.50785.qm@web32005.mail.mud.yahoo.com> Message-ID: <4B255F6D.5000200@satx.rr.com> On 12/13/2009 2:55 PM, Ben Zaiboc wrote: > if you don't think a program can solve the mysteriously difficult 'symbol grounding problem', how can a brain do it? Are you saying that a system that processes and transmits signals as ion potential differences can do things that a system that processes and transmits signals as voltages can't? What about electron spins? photon wavelengths? polarisation angles? quantum states? magnetic polarities? etc., etc. > > Is there something special that ion potential waves have over these other ways of representing and processing information? > > If so, what? I have a lot of sympathy with Gordon's general point, although I think the Chinese Room completely messes it up. The case is that a linear, iterative, algorithmic process is the wrong kind of thing to instantiate what happens in a brain during consciousness (and the rest of the time, for that matter). It's some years since I looked into this problematic closely, but as I recall the line of thinking developed by Hopfield and Freeman etc still looked promising: basins of attraction, allowing multiple inputs to coalesce and mutually transform synaptic maps, vastly parallel. Maybe a linear process could emulate this eventually, but I imagine one might run into the same kinds of computational and memory space explosions that afflict an internalized Chinese Room guy. Anders surely has something timely to say about this. Damien Broderick From stathisp at gmail.com Sun Dec 13 22:52:52 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 14 Dec 2009 09:52:52 +1100 Subject: [ExI] Wernicke's aphasia and the CRA. 
In-Reply-To: <920757.76799.qm@web36508.mail.mud.yahoo.com> References: <920757.76799.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/14 Gordon Swobe : > Stathis (and Lee), > >> In the CR a thinking entity supervenes on the behaviour of another >> thinking entity, the man in the room > > No. The "internalization" in the rejoinder to the systems reply to the CRA refers only to the internalization by the man of the program and I/O ports. Instead of a physical rule-book on a bookshelf, he has a memorized look up table. Instead of slot in the door through which he receives inputs and sends outputs, he has his own ears and mouth. The internalization thus does not refer to the internalization of any other "thinking entity". > > Searle's systems reply critics had argued (with perfect logic, even if they missed the point) that while the man inside lacks understanding, it does not follow that the room also lacks understanding. After all the man inside is not the subject that passes the Turing test. The room is! > > So Searle replied, "Well then forget about the Chinese room and all its trappings (I only put that stuff there to help you visualize what's going on) and let the Englishman inside memorize the program. He then steps outside the room and takes the TT in Chinese. He passes it, yet he still does not understand Chinese." The brain is comprised of dumb components which act together to create a mind. The CR is comprised of smart and dumb components which act together to create a mind distinct from the mind of the smart component. The CR without the room is comprised of a smart component which acts to create a mind distinct from the mind of the smart component. It doesn't make any difference to the final result if the information processing is done by smart or dumb components. -- Stathis Papaioannou From stathisp at gmail.com Sun Dec 13 23:08:02 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 14 Dec 2009 10:08:02 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <597657.53369.qm@web36505.mail.mud.yahoo.com> References: <5CFC99EF-C101-4A94-BB78-D41EBFFA9283@bellsouth.net> <597657.53369.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/14 Gordon Swobe : > --- On Sun, 12/13/09, John Clark wrote: > >> Incidentally why do you suppose Searle >> didn't replace the little man with one of >> those?punch card reading machines? It could certainly >> do a better job than a real flesh and blood human, so why >> not use it? > > Such an argument would not address the question of strong AI, where a strong AI is defined as one that has mindful understanding of its own words and does not merely speak mindlessly. Searle considers that the difference between weak and strong AI, and on this point I agree with him. Changing from a man to a punch card reading machine does not make a different to the argument insofar as Searle would still say the room has no understanding and his opponents would still say that it does. > You've mentioned that you don't care about the difference between weak and strong AI. That's fine with me, but in that case neither Searle nor I have anything interesting to say to you. > > Some people do care about the difference between strong and weak. I happen to count myself among them. To people like me Searle has something very interesting to say. 
To address the strong AI / weak AI distinction I put to you a question you haven't yet answered: what do you think would happen if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the remaining biological neurons, but lacked the essential ingredient for consciousness?

-- Stathis Papaioannou

From p0stfuturist at yahoo.com Mon Dec 14 04:04:38 2009
From: p0stfuturist at yahoo.com (Post Futurist)
Date: Sun, 13 Dec 2009 20:04:38 -0800 (PST)
Subject: [ExI] Atheism
Message-ID: <644757.22145.qm@web59912.mail.ac4.yahoo.com>

>Stefano Vaj wrote:
>The real point IMHO is: if "religion" must be, why do we need a metaphysical, faith-based, anti-scientific one?

We here don't; virtually no one who thought so would subscribe to exi-chat long-term. Modern reactive (and 'reactive' means profoundly reactive) religion is, like Luddism, a reaction to the misapplication of technologies since the Industrial Revolution. The counterculture (which at its height from '66 to '73 was very mystical) was a reaction to technological misapplication. 20th cum 21st century religion is a reaction to technological misapplications and the totalitarian age, 1917 to 1989. Unfortunately, reactive memes are extremely hard to wash out; Procter & Gamble doesn't market a bleach that does that. Faith (or at least religion) has only a marginal benefit, but so does life extension at this time. So until effective, that is also to say cost-effective, life extension is available, say midcentury, organized (sadly overcommercialized) religion makes sense to the maddening herd of sheeple. Marijuana now works well as soma; porn is another diversion, yet they are not as powerful as religion -- for one thing they are not as cost-effective as religion. The offering basket at a house of worship requires far less funds than medical marijuana -- plus designer drugs -- and porn. My untested hypothesis is that religion/faith at this time is the superego + the collective unconscious, which is why it is so potent. Religion is an Internet of the mind in my reckoning.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gts_2000 at yahoo.com Mon Dec 14 13:11:49 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 14 Dec 2009 05:11:49 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <147659.50785.qm@web32005.mail.mud.yahoo.com>
Message-ID: <183323.69123.qm@web36501.mail.mud.yahoo.com>

--- On Sun, 12/13/09, Ben Zaiboc wrote:

>> The challenge ... is ...to show that formal programs differ in some important way from shopping lists, some important way that allows programs to overcome the symbol grounding problem.
>
> I've just been following this thread peripherally, but this caught my attention. Are you *seriously* saying that you think shopping lists don't differ from programs?

I mean that if we want to refute the position of this philosopher who goes by the name of Searle then we need to show exactly how programs overcome the symbol grounding problem.

I think everyone will agree that a piece of paper has no conscious understanding of the symbols it holds, i.e., that a piece of paper cannot overcome the symbol grounding problem. If a program differs from a piece of paper such that it can have conscious understanding of the symbols it holds, as in strong AI on a software/hardware system, then how does that happen?
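To make the contrast being drawn here concrete, a deliberately trivial Python sketch of pure symbol manipulation: a lookup table mapping input strings to output strings. It is nobody's proposed AI, only an illustration; the phrases are invented, and nothing in the table connects the tokens to anything outside the program.

# A trivial "formal program": input symbols map to output symbols by rule.
# The rules are invented for illustration; nothing ties the tokens to
# greetings, gratitude, or anything else in the world. Syntax only.
RULES = {
    "ni hao": "ni hao ma?",
    "xie xie": "bu ke qi",
}

def reply(tokens):
    # Pure symbol shuffling: match the input string, emit the paired string.
    return RULES.get(tokens, "qing zai shuo yi bian")

print(reply("ni hao"))    # -> "ni hao ma?"
print(reply("zai jian"))  # -> falls back to the stock phrase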
> Secondly, if you don't think a program can solve the > mysteriously difficult 'symbol grounding problem', how can a > brain do it?? Philosophers and cognitive scientists have some theories about how *minds* do it, but nobody really knows for certain how the physical brain does it in any sense we might duplicate. If it has no logical flaws, Searle's formal argument shows that however brains do it, they don't do it by running programs. -gts From eugen at leitl.org Mon Dec 14 13:25:19 2009 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 14 Dec 2009 14:25:19 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <183323.69123.qm@web36501.mail.mud.yahoo.com> References: <147659.50785.qm@web32005.mail.mud.yahoo.com> <183323.69123.qm@web36501.mail.mud.yahoo.com> Message-ID: <20091214132519.GG17686@leitl.org> On Mon, Dec 14, 2009 at 05:11:49AM -0800, Gordon Swobe wrote: > I mean that if we want to refute the position of this philosopher who goes by the name of Searle then we need to show exactly how programs overcome the symbol grounding problem. > > I think everyone will agree that a piece of paper has no conscious understanding of the symbols it holds, i.e., that a piece of paper cannot overcome the symbol grounding problem. If a program differs from a piece of paper such that it can have conscious understanding the symbols it holds, as in strong AI on a software/hardware system, then how does that happen? > > Philosophers and cognitive scientists have some theories about how *minds* do it, but nobody really knows for certain how the physical brain does it in any sense we might duplicate. > > If it has no logical flaws, Searle's formal argument shows that however brains do it, they don't do it by running programs. *plonk* From stefano.vaj at gmail.com Mon Dec 14 13:25:38 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 14 Dec 2009 14:25:38 +0100 Subject: [ExI] Transhumanism and Not All Religions Were Created Equal Message-ID: <580930c20912140525h72b420d3s80389b0396e44053@mail.gmail.com> 2009/12/14 Post Futurist > ?>Stefano Vaj wrote: > >The real point IMHO is: if "religion" must? be, why do we need a > metaphysical, faith-based, anti-scientific one? > > We here don't;?virtually no one?who thought so would subscribe to exl-chat long-term. Modern reactive (and 'reactive' means profoundly reactive) religion is like Luddism a reaction to the misapplication of technologies since the Industrial Revolution. The truth however is that *a few* religions have always been metaphysical, faith-based and implicitely anti-scientific, much before than an industrial revolution ever took place. In fact, it has been argued that their coming substantially *delayed* the development of modern technoscience (let alone pushed back actual technology by centuries...), of which some promising signs were emerging. According to other theories, their coming actually helped or caused the revolution started with the modern age inasmuch as the latter was also a strong reaction to centuries of obscurantism and decadence ("what does not kill us, make us stronger"...). 
-- Stefano Vaj

From gts_2000 at yahoo.com Mon Dec 14 13:45:53 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 14 Dec 2009 05:45:53 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To:
Message-ID: <785147.87825.qm@web36501.mail.mud.yahoo.com>

--- On Sun, 12/13/09, Stathis Papaioannou wrote:

> Changing from a man to a punch card reading machine does not make a different to the argument insofar as Searle would still say the room has no understanding and his opponents would still say that it does.

The question comes back to semantics. Short of espousing the far-fetched theory of panpsychism, no serious philosopher would argue that a punch card reading machine can have semantics/intentionality, i.e., mindful understanding of the meanings of words.

People can obviously have it, however, and so Searle put a person into his experiment to investigate whether he would have it. He concluded that such a person would not have it.

I should point out here however that his formal argument does not depend on the thought experiment for its veracity. Searle just threw the thought experiment out there to help illustrate his point, then later formalized it into a proper philosophical argument sans silly pictures of men in Chinese rooms.

> To address the strong AI / weak AI distinction I put to you a question you haven't yet answered: what do you think would happen if part of your brain, say your visual cortex, were replaced with components that behaved normally in their interaction with the remaining biological neurons, but lacked the essential ingredient for consciousness?

You need to show that the squirting of neurotransmitters between giant artificial neurons made of beer cans and toilet paper will result in a mind that understands anything. :-) How do those squirts cause consciousness? If you have no scientific theory to explain it, then, well, we're back to Searle's default position: that as far as we know, only real biological brains have it.

-gts

From bbenzai at yahoo.com Mon Dec 14 14:09:57 2009
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Mon, 14 Dec 2009 06:09:57 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To:
Message-ID: <641075.28751.qm@web32003.mail.mud.yahoo.com>

> From: Damien Broderick wrote:
> On 12/13/2009 2:55 PM, Ben Zaiboc wrote:
>
> > if you don't think a program can solve the mysteriously difficult 'symbol grounding problem', how can a brain do it? Are you saying that a system that processes and transmits signals as ion potential differences can do things that a system that processes and transmits signals as voltages can't? What about electron spins? photon wavelengths? polarisation angles? quantum states? magnetic polarities? etc., etc.
> >
> > Is there something special that ion potential waves have over these other ways of representing and processing information?
> >
> > If so, what?
>
> I have a lot of sympathy with Gordon's general point, although I think the Chinese Room completely messes it up. The case is that a linear, iterative, algorithmic process is the wrong kind of thing to instantiate what happens in a brain during consciousness (and the rest of the time, for that matter).
It's some years since I looked into this > problematic > closely, but as I recall the line of thinking developed by > Hopfield and > Freeman etc still looked promising: basins of attraction, > allowing > multiple inputs to coalesce and mutually transform synaptic > maps, vastly > parallel. Maybe a linear process could emulate this > eventually, but I > imagine one might run into the same kinds of computational > and memory > space explosions that afflict? an internalized Chinese > Room guy. Anders > surely has something timely to say about this. If I understand you right, this boils down to parallel vs. linear programming? There are two answers to this, one is that there's no reason we can't build massively parallel computer systems (we don't do much of this at present because we don't really need to), and the second is, as you say, "a linear process could emulate this". In fact we have many examples of just this. Just about every neural network program does it. I'd expect a realistic software mind would exploit both methods, but even if you take the extreme case and say linear programming could *never* successfully emulate a parallel system of sufficient complexity to embody a mind, so what? We just use physically parallel hardware components, just like the brain does. A big job, yes, and beyond our current capabilities, yes, but not for long. And when that time comes, you have two computers, one synthetic, one biological. Given similar programming (whether that be in the form of physical wiring arrangements, chemical sequences, or software-controlled logic units), what reason is there to think one can do something the other can't? I can recommend Steve Grand's book "Life and how to make it" for a good insight into how information processes can ascend through levels of abstraction, resulting in something completely different from the original process. This helped me see how ions going through holes in a membrane can result in me writing this email, and that makes it much easier to understand that electrons in logic gates can have the same effects, through a cascade of layers of abstraction. Asking "how can a computer program possibly give rise to consciousness?" is a bit like asking how can hydrogen bonding possibly give rise to El Ni?o. Ben Zaiboc From gts_2000 at yahoo.com Mon Dec 14 14:10:26 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 14 Dec 2009 06:10:26 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <785147.87825.qm@web36501.mail.mud.yahoo.com> Message-ID: <424154.35472.qm@web36504.mail.mud.yahoo.com> Re-reading your last paragraph, Stathis, it seems you want to know what I think about replacing neurons in the visual cortex with artificial neurons that do *not* have the essential ingredient for consciousness. I would not dare speculate on that question, because I have no idea if conscious vision requires that essential ingredient in those neurons, much less what that essential ingredient might be. I agree with your general supposition, however, that we're missing some important ingredient to explain consciousness. We cannot explain it by pointing only to the means by which neurons relate to other neurons, i.e., by Chalmer's functionalist theory, at least not at this time in history. Functionalism seems a very reasonable religion, and reason for hope, but I don't see it as any more than that. 
-gts

From gts_2000 at yahoo.com Mon Dec 14 15:03:13 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Mon, 14 Dec 2009 07:03:13 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <641075.28751.qm@web32003.mail.mud.yahoo.com>
Message-ID: <362556.37142.qm@web36501.mail.mud.yahoo.com>

Stathis,

> The brain is comprised of dumb components which act together to create a mind.

So it seems.

> The CR is comprised of smart and dumb components which act together to create a mind distinct from the mind of the smart component.

According to the systems reply to the CRA, yes. As they have it (or had it) there existed two minds: the smart mind belonging to the room and dumb mind belonging to the man. But they missed Searle's point, so Searle re-illustrated the same symbol grounding problem in terms they would understand.

> The CR without the room is comprised of a smart component which acts to create a mind distinct from the mind of the smart component.

"The CR without the room" is just a man whose brain does nothing more than run a formal program. If formal programs cause or have semantics then the man should understand the Chinese symbols that his mental program manipulates. But he doesn't understand Chinese even while passing the TT.
Ergo, the brain does not overcome the symbol grounding problem with formal programs. Even if the brain does run formal programs, (per the computationalist theory of mind) it must do something else besides. The computationalist theory of mind is then at best incomplete and at worst completely false. Says Searle. -gts From alito at organicrobot.com Mon Dec 14 14:52:25 2009 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Tue, 15 Dec 2009 01:52:25 +1100 Subject: [ExI] Does caloric restriction work for humans? In-Reply-To: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> References: <200912111610.nBBGAhnZ024026@andromeda.ziaspace.com> Message-ID: <1260802345.21385.803.camel@localhost> On Fri, 2009-12-11 at 10:10 -0600, Max More wrote: > I've been skeptical about drastic CR for humans for years. One > reason, mentioned in the following article, is that we don't live in > cages in a lab. Lacking any spare muscle or fat makes us highly > vulnerable to traumas of various kinds. (Many of us know of -- or are > -- people who have lost 30 pounds or more in hospital due to > illness). Recently, Aubrey has given a specific reason (also > mentioned in the article) why the life extension from even severe > caloric restriction is likely to be very small. > > So, here's the article. I would like to hear your thoughts on it, pro > and con. If CR advocates have directly addressed all the points, I'd > appreciate a pointer. > > Calorie restrictive eating for longer life? The story we didn't hear > in the news > http://junkfoodscience.blogspot.com/2009/07/calorie-restrictive-eating-for-longer.html > While total mortality didn't reach statistical significance at p 0.05, the numbers look highly sugestive to me. There's been 21 deaths in the control group and 14 in the CR group. ie total mortality was a third lower in CR but because the sample is tiny, it's hard to reach 0.05. Those non-aging deaths were 7 in control vs 9 in CR (included in totals above). 4 cases of cancer in CR v 8 in control. Later onset age of disease. Researchers claim the CR monkeys look in much better shape. Seems like a pretty decent effect to me, and you'd have to be a strict frequentist to call this a null study. If you were expecting CR to be completely useless, I'd think this should point at it being of some effect. If you thought that it'd extend lifespan by 30% like it does in some mice breeds, then this should probably point you in the other direction. I expect most people here expect it to be somewhere in between, and this seems like confirming those expectations. From jonkc at bellsouth.net Mon Dec 14 15:36:04 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 14 Dec 2009 10:36:04 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <183323.69123.qm@web36501.mail.mud.yahoo.com> References: <183323.69123.qm@web36501.mail.mud.yahoo.com> Message-ID: <54AF7508-AD55-4A88-8B9A-E23CF521A3FC@bellsouth.net> On Dec 14, 2009, at 8:11 AM, Gordon Swobe wrote: > I think everyone will agree that a piece of paper has no conscious understanding of the symbols it holds, All I know is that the pieces of paper that I've seen don't act intelligently, or at least not very intelligently. They behave rather like a human being does when they are asleep or dead. As for their consciousness I can only speculate. > that a piece of paper cannot overcome the symbol grounding problem. 
Old fashioned punch cards were made of paper and they overcame the symbol grounding "problem" in 1890, back then they called them Hollerith cards. Mr. Hollerith went on to start a little company which became IBM. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 14 15:46:47 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 14 Dec 2009 10:46:47 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <424154.35472.qm@web36504.mail.mud.yahoo.com> References: <424154.35472.qm@web36504.mail.mud.yahoo.com> Message-ID: <4D08BED2-1CD4-4FE8-B81C-4381B228130A@bellsouth.net> On Dec 14, 2009, Gordon Swobe wrote: > that we're missing some important ingredient to explain consciousness No, we're missing some important ingredient to explain intelligence. Consciousness is easy to explain and that's the problem, absolutely any theory will do because there is no data they need to explain. One consciousness theory is as good as another. Intelligence theories are an entirely different matter, they are devilishly hard to come up with and there is a universe of data they need to explain. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jameschoate at austin.rr.com Mon Dec 14 17:42:19 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Mon, 14 Dec 2009 17:42:19 +0000 Subject: [ExI] Tolerance In-Reply-To: <852444.45072.qm@web59903.mail.ac4.yahoo.com> Message-ID: <20091214174219.AB091.519939.root@hrndva-web09-z02> ---- Post Futurist wrote: > ?IMO any doctor who would violate their Hippocratic oath would be similar in spirit if not letter to a "Christian" who would commit a felony, such as burglary. The reason this is fallow ground is it presupposes two requirements in order to commit such a transgression, two requirements that are not actually required. Premeditation and malice. Also note that 'violate' is not equivalent to 'commit' even in your thesis. -- -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From alfio.puglisi at gmail.com Mon Dec 14 19:50:03 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Mon, 14 Dec 2009 20:50:03 +0100 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <20091213201404.86406.qmail@moulton.com> References: <20091213201404.86406.qmail@moulton.com> Message-ID: <4902d9990912141150w52847d28ifeb71828b6e27eea@mail.gmail.com> On Sun, Dec 13, 2009 at 9:14 PM, wrote: > > Alfio > > Thank you for referring to something in the article. I assume you > read it all and found the lines you quote as most worthy of discussion. > Actually, I didn't read it all when I sent the previous email, just skimmed the first paragraphs and found those lines. Now that I have read all of if I see that it is full of factual errors, irrelevant or misleading statements, and unfounded accusations like "Global temperatures stopped rising a few years ago" (two errors in one sentence), or "utterly politicized scientists such as Jones, Mann, and NASA's James Hansen" (politicized? care to prove it?) 
, or "There have been rumors for years about political pressure being brought to bear on the process to deliver scarier numbers" (wrong, pressure was in the opposite direction, at least in the US), or "according to one of Jones's emails, actually destroying the raw data in the face of a successful FOIA requisition." (Jones suggested to do that in an email, but there's no proof of destroying anything in response to FOIAs requests), repeating the "travesty" argument when it has been explained to death that it means nothing of the sort, and so on. When not dwelling in such propaganda, it focuses on the medieval warm period and its reconstructions, giving its own interpretation of the famous emails. Nowhere it says that these emails are a small subset of the total, released by someone with an explicit agenda stated right at the beginning of the archive, and so any interpretation of these email is suspect and anyway expected to be one-sided. More importanly, I see no discussion of the simple topic that, even conceding some of the worst interpretations of the emails, nothing would change in the global warming picture: CO2 would still be a greenhouse gas, its increase would still be of anthropogenic, Arctic ice and glaciers would still be melting, Greenland would still be losing mass, plants would still blossom earlier, temperatures would be still going up, etc. To resume, in my opinion this article adds nothing to our understanding of the situation, but instead actively tries to instill in the reader a distrust for science with misleading statements and attacking irrelevant details, while missing the big picture entirely. Alfio > See my comments below. > > On Sun, 2009-12-13 at 19:35 +0100, Alfio Puglisi wrote: > On Sun, Dec 13, 2009 at 6:56 PM, wrote: > > So for example how about reading the article that Max referenced > > and criticizing it based on its content not on the website on which it > > is published. > > > > Sure, some samples of the general tone: > > > > "...the air has been going out of the global warming balloon." > > In the article that phrase appears to refer to both how well global > warming is doing as a hypothesis and how the state of the general public > perception. On the first point that is what is being debated currently > and I think it is too soon to tell. We will likely know more after a lot > of people go back and double check the data sets and research methods. > On the second point about public perception my reading of various > media sources is that the author needs to delve deeper. This question > of public perception needs a more nuanced discussion than I think the > author gives since it needs to be differentiated into perception of > global warming as an isolated item and perception of global warming > relative to other items. On this see for example mention of how global > warming has fallen to third place as discussed in Eurobarometet 313 > http://ec.europa.eu/public_opinion/archives/ebs/ebs_313_en.pdf > While certainly this is just one study and no one is claiming that it is > definitive but I think it represents part of the relevant information > and is an interesting example of some trends worth watching over the > coming months. This is not to say that the respondents to the poll > are correct in their evaluation; perhaps they should have keep global > warming as number 1. > > > "As tempting as it is to indulge in Schadenfreude over the richly > > deserved travails of a gang that has heaped endless calumny on > > dissenting scientists...." 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pharos at gmail.com Mon Dec 14 20:10:48 2009 From: pharos at gmail.com (BillK) Date: Mon, 14 Dec 2009 20:10:48 +0000 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <4902d9990912141150w52847d28ifeb71828b6e27eea@mail.gmail.com> References: <20091213201404.86406.qmail@moulton.com> <4902d9990912141150w52847d28ifeb71828b6e27eea@mail.gmail.com> Message-ID: On 12/14/09, Alfio Puglisi wrote: > More importanly, I see no discussion of the simple topic that, even > conceding some of the worst interpretations of the emails, nothing would > change in the global warming picture: CO2 would still be a greenhouse gas, > its increase would still be of anthropogenic, Arctic ice and glaciers would > still be melting, Greenland would still be losing mass, plants would still > blossom earlier, temperatures would be still going up, etc. To resume, in > my opinion this article adds nothing to our understanding of the situation, > but instead actively tries to instill in the reader a distrust for science > with misleading statements and attacking irrelevant details, while missing > the big picture entirely. > > That is exactly the aim of the anti-global warming propaganda machine supported by the big polluter industries like ExxonMobil. They fund dozens, even hundreds of different groups to give the impression that there is widespread opposition to the global warming findings. They don't attempt to win any scientific disputes in the peer-reviewed press. Their intention is to cause confusion in the minds of the general public and to postpone as long as possible any political actions to restrain the polluting industries. Repeating false claims and misinterpretations in the general press or blogs often enough gives them a semblance of credibility and that is all that they want to achieve. BillK From stathisp at gmail.com Mon Dec 14 22:08:03 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 15 Dec 2009 09:08:03 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <424154.35472.qm@web36504.mail.mud.yahoo.com> References: <785147.87825.qm@web36501.mail.mud.yahoo.com> <424154.35472.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/15 Gordon Swobe : > Re-reading your last paragraph, Stathis, it seems you want to know what I think about replacing neurons in the visual cortex with artificial neurons that do *not* have the essential ingredient for consciousness. I would not dare speculate on that question, because I have no idea if conscious vision requires that essential ingredient in those neurons, much less what that essential ingredient might be. > > I agree with your general supposition, however, that we're missing some important ingredient to explain consciousness. We cannot explain it by pointing only to the means by which neurons relate to other neurons, i.e., by Chalmer's functionalist theory, at least not at this time in history. > > Functionalism seems a very reasonable religion, and reason for hope, but I don't see it as any more than that. It is generally accepted that visual perception occurs in the visual cortex; without it, some reflexes remain, such as the pupillary response to light, but you don't experience seeing anything. In any case, the thought experiment could be done with any part of the brain. 
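As a rough illustration of the functional requirement in this thought experiment (the replacement only has to reproduce a neuron's input/output behaviour toward its neighbours), here is a minimal leaky integrate-and-fire sketch in Python; every constant is invented, and no claim is made that a real replacement device would look like this.

# Minimal sketch of a functional stand-in neuron: what matters for the
# thought experiment is only its input/output behaviour toward neighbours.
# A toy leaky integrate-and-fire model; all constants are invented.
class ArtificialNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, weighted_inputs):
        # Integrate incoming signals and report whether the cell "fires".
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after a spike
            return True            # downstream neurons see a spike
        return False               # downstream neurons see silence

neuron = ArtificialNeuron()
print([neuron.step([0.4]) for _ in range(5)])  # [False, False, True, False, False]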
Advanced nanoprocessor controlled devices which behave just like neurons but, being machines, lack the special ingredient for consciousness that neurons have, are installed in place of part of your brain, the visual cortex being good for illustration purposes. You are then asked if you notice anything different. What will you say? Before answering, consider carefully the implications of the fact that the essential feature of the artificial neurons is that they behave just like biological neurons in their interactions with their neighbours. -- Stathis Papaioannou From jrd1415 at gmail.com Mon Dec 14 23:43:56 2009 From: jrd1415 at gmail.com (Jeff Davis) Date: Mon, 14 Dec 2009 16:43:56 -0700 Subject: [ExI] Jack London on primeval feelings In-Reply-To: <298914.94231.qm@web58301.mail.re3.yahoo.com> References: <298914.94231.qm@web58301.mail.re3.yahoo.com> Message-ID: I only just noticed this post today, as I was cleaning out my inbox. A previous submission of the "literary" sort had prompted me to flag Robert as a "person of interest". And now this. I think it was 1982. I'd pulled substitute teacher duties for an English class at Oceana HS in Pacifica. The assignment was to read JL's "Love of Life". Each of the kids would read a paragraph or two and then pass it on. There wasn't time enough to read the whole thing, so two classes running, we got about two thirds of the way through. The next day, I went to the library at SF State, where I was a student in ME and/or Physics, checked out an armful of JL, and retired to read it all, starting with the last third of "LoL". Understand. I abandoned my "legitimate academic pursuits". Dumped them completely, jettisoned utterly, never looked back -- "...renounced my baptism, all seals and symbols of redeemed sin..." -- and began a new "career" as a curiosity-led, self-indulgent denizen of libraries, wastrel, and wise-ass. Dame fortune is fickle. Not impressed with the Puritans. Otherwise how can you explain my lush retirement, my summer home in BC, my winter home in Baja? For each plaintive cry that life's unfair, some undeserving scoundrel somewhere is smiling, taking up the slack, enjoying an extra helping,... of good luck. When this grasshopper, having spent his summer singing and dancing, stood in the doorway of the industrious ants, shivering in the winter chill, was he turned away, as the fable requires? Nope. Sorry. No schadenfreude for you today. Rather, I was beckoned toward the warmth of her chamber by the queen ant in a fetching neglige, a snifter of wine in one hand, fragrant lotion in the other. Go figure. Better to be lucky than smart. And it started with Jack London. Best, Jeff Davis "There is only one basic human right, the right to do as you damn well please. And with it comes the only basic human duty, the duty to take the consequences." P.J. O'Rourke On Mon, Nov 23, 2009 at 9:47 PM, Robert Masters wrote: > > > > >From Jack London's THE CALL OF THE WILD > (arranged as verse): > > > With the aurora borealis > flaming coldly overhead, > or the stars leaping in the frost dance, > and the land numb and frozen > under its pall of snow, > this song of the huskies > might have been the defiance of life, > only it was pitched in minor key, > with long-drawn wailings > and half-sobs, > and was more the pleading of life, > the articulate travail of existence. > > It was an old song, > old as the breed itself-- > one of the first songs of a younger world > in a day when songs were sad. 
> > It was invested > with the woe of unnumbered generations > this plaint > by which Buck > was so strangely stirred. > > When he moaned and sobbed, > it was with the pain of living > that was of old > the pain of his wild fathers, > and the fear and mystery > of the cold and dark > that was to them fear and mystery. > > And that he should be stirred by it > marked the completeness > with which he harked back > through the ages of fire and roof > to the raw beginnings of life > in the howling ages. > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From emlynoregan at gmail.com Tue Dec 15 01:28:41 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 15 Dec 2009 11:58:41 +1030 Subject: [ExI] Scientists Behaving Badly In-Reply-To: References: <20091213201404.86406.qmail@moulton.com> <4902d9990912141150w52847d28ifeb71828b6e27eea@mail.gmail.com> Message-ID: <710b78fc0912141728n3e2e4a1fl7713053e99043048@mail.gmail.com> 2009/12/15 BillK : > On 12/14/09, Alfio Puglisi wrote: > >> More importanly, I see no discussion of the simple topic that, even >> conceding some of the worst interpretations of the emails, nothing would >> change in the global warming picture: CO2 would still be a greenhouse gas, >> its increase would still be of anthropogenic, Arctic ice and glaciers would >> still be melting, Greenland would still be losing mass, plants would still >> blossom earlier, temperatures would be still going up, etc. ? To resume, in >> my opinion this article adds nothing to our understanding of the situation, >> but instead actively tries to instill in the reader a distrust for science >> with misleading statements and attacking irrelevant details, while missing >> the big picture entirely. >> >> > > That is exactly the aim of the anti-global warming propaganda machine > supported by the big polluter industries like ExxonMobil. ?They fund > dozens, even hundreds of different groups to give the impression that > there is widespread opposition to the global warming findings. > > They don't attempt to win any scientific disputes in the peer-reviewed press. > Their intention is to cause confusion in the minds of the general > public and to postpone as long as possible any political actions to > restrain the polluting industries. Repeating false claims and > misinterpretations in the general press or blogs often enough gives > them a semblance of credibility and that is all that they want to > achieve. > > BillK Obviously this source is biased, and I haven't read it through, but still looks like there's interesting stuff there: http://www.exxposeexxon.com/ "ExxonMobil is the only oil giant directly funding junk science and groups that deny the science of global warming." -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From gts_2000 at yahoo.com Tue Dec 15 02:10:13 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 14 Dec 2009 18:10:13 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4D08BED2-1CD4-4FE8-B81C-4381B228130A@bellsouth.net> Message-ID: <512225.67135.qm@web36506.mail.mud.yahoo.com> --- On Mon, 12/14/09, John Clark wrote: > Consciousness is easy to explain and that's the problem Easy to explain? Muhammad Ali knocked George Foreman out in the 8th round. 
If consciousness is easy to explain then perhaps you will kindly explain exactly what happened between Foreman's ears that made him lose consciousness, and exactly what happened a few moments later that enabled him to regain it. A Nobel prize awaits. -gts From emlynoregan at gmail.com Tue Dec 15 04:04:39 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 15 Dec 2009 14:34:39 +1030 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <236279.62526.qm@web36504.mail.mud.yahoo.com> Message-ID: <710b78fc0912142004l6297351dkfdec8dfe1c3c68bf@mail.gmail.com> 2009/12/14 BillK : > If a strong AI has human sense equivalents, like vision, hearing, > taste, touch, etc. plus symbol manipulation, all to such a level that > it can operate successfully in the world, then you have a processor > which could pass for human. > > You can then try asking it if it is conscious and see what answer you get...... > > > BillK This is the real answer to the "consciousness" problem, imo. You will know if AI is conscious because you'll just ask it if it is, and you'll be able to observe its behaviour and see if is influenced by its own sense of consciousness. The problem of whether it is telling the truth is identical to the problem of whether people lie about this now; you can't know and it doesn't matter. Most likely, an AI which is not an emulation of evolved biology will experience something entirely unlike what we experience. It should be pretty damned interesting, and illuminating for humanity, to interact with such alien critters! -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From moulton at moulton.com Tue Dec 15 06:02:28 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 15 Dec 2009 06:02:28 -0000 Subject: [ExI] Scientists Behaving Badly Message-ID: <20091215060228.12420.qmail@moulton.com> On Mon, 2009-12-14 at 20:50 +0100, Alfio Puglisi wrote: > Actually, I didn't read it all when I sent the previous email, > just skimmed the first paragraphs and found those lines. Thanks for reading the entire article and responding in more detail. That is what I hope we can have more of in this forum. Fred From jonkc at bellsouth.net Tue Dec 15 05:48:07 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Dec 2009 00:48:07 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <512225.67135.qm@web36506.mail.mud.yahoo.com> References: <512225.67135.qm@web36506.mail.mud.yahoo.com> Message-ID: <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> On Dec 14, 2009, at 9:10 PM, Gordon Swobe wrote: >> Consciousness is easy to explain and that's the problem > > Easy to explain? Yep, very easy to explain. Only one thing can produce consciousness, a size 12 foot. By the way I happen to ware size 12 shoes. It's just as good as any other consciousness theory. > Muhammad Ali knocked George Foreman out in the 8th round. If consciousness is easy to explain then perhaps you will kindly explain exactly what happened between Foreman's ears that made him lose consciousness, and exactly what happened a few moments later that enabled him to regain it. I don't have one scrap of information that Mr. Foreman was conscious either before or after that blow, all I know is that his behavior became much less interesting after Mr. Ali gave him that rather vigorous tap on the head. 
In that instant Mr.Foreman became much less intelligent and I make no claim of having an intelligence theory because unlike consciousness intelligence theories are damn hard to come by. > A Nobel prize awaits. I've already got my airline tickets to Stockholm. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Dec 15 09:17:35 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Dec 2009 10:17:35 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> References: <512225.67135.qm@web36506.mail.mud.yahoo.com> <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> Message-ID: <20091215091735.GQ17686@leitl.org> On Tue, Dec 15, 2009 at 12:48:07AM -0500, John Clark wrote: > I don't have one scrap of information that Mr. Foreman was conscious either before or after that blow, all I know is that his behavior became much less interesting after Mr. Ali gave him that rather vigorous tap on the head. In that instant Mr.Foreman became much less intelligent and I make no claim of having an intelligence theory because unlike consciousness intelligence theories are damn hard to come by. > > > A Nobel prize awaits. > > I've already got my airline tickets to Stockholm. John, your behaviour loop detector seems to be broken. I would have it serviced ere the Turing police tickets you. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Dec 15 09:45:49 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Dec 2009 10:45:49 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <710b78fc0912142004l6297351dkfdec8dfe1c3c68bf@mail.gmail.com> References: <236279.62526.qm@web36504.mail.mud.yahoo.com> <710b78fc0912142004l6297351dkfdec8dfe1c3c68bf@mail.gmail.com> Message-ID: <20091215094549.GS17686@leitl.org> On Tue, Dec 15, 2009 at 02:34:39PM +1030, Emlyn wrote: > This is the real answer to the "consciousness" problem, imo. You will > know if AI is conscious because you'll just ask it if it is, and I don't know what "conscious" even means, but you know AI has achieved full human equivalence across the board once everyone is out of job. > you'll be able to observe its behaviour and see if is influenced by > its own sense of consciousness. The problem of whether it is telling > the truth is identical to the problem of whether people lie about this > now; you can't know and it doesn't matter. > > Most likely, an AI which is not an emulation of evolved biology will I would not be holding my breath for that one. > experience something entirely unlike what we experience. It should be > pretty damned interesting, and illuminating for humanity, to interact > with such alien critters! The first generations are human demand-driven, and hence perfectly boring. The other kind is incomprehensible and/or lethal, so I'm not sure I would want to talk to them unless I'm them. Then you don't want to talk to me. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Tue Dec 15 10:06:30 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Dec 2009 05:06:30 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091215091735.GQ17686@leitl.org> References: <512225.67135.qm@web36506.mail.mud.yahoo.com> <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> <20091215091735.GQ17686@leitl.org> Message-ID: <43C548CD-123A-4630-BF32-8E51979D929C@bellsouth.net> On Dec 15, 2009, at 4:17 AM, Eugen Leitl wrote: > John, your behaviour loop detector seems to be broken. > I would have it serviced ere the Turing police tickets you. Yes Eugen you are correct, and it wouldn't be the first time that has happened. It's just that when somebody says something that is really really spectacularly stupid I have an equally strong urge to correct them. It's a silly compulsion but bear with me, I have a twelve step program to overcome this irrational feeling. I just hope its in time before the Turing police don't get me. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Tue Dec 15 11:05:46 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 15 Dec 2009 21:35:46 +1030 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091215094549.GS17686@leitl.org> References: <236279.62526.qm@web36504.mail.mud.yahoo.com> <710b78fc0912142004l6297351dkfdec8dfe1c3c68bf@mail.gmail.com> <20091215094549.GS17686@leitl.org> Message-ID: <710b78fc0912150305p1952dd06n2bc5add119d47529@mail.gmail.com> 2009/12/15 Eugen Leitl : > On Tue, Dec 15, 2009 at 02:34:39PM +1030, Emlyn wrote: > >> This is the real answer to the "consciousness" problem, imo. You will >> know if AI is conscious because you'll just ask it if it is, and > > I don't know what "conscious" even means, but you know AI has achieved > full human equivalence across the board once everyone is out of job. Only equivalence in the most banal sense. My feeling is that machine intelligence will replace us well before it's "general intelligence", whatever that means. We're not even general intelligences. > >> you'll be able to observe its behaviour and see if is influenced by >> its own sense of consciousness. The problem of whether it is telling >> the truth is identical to the problem of whether people lie about this >> now; you can't know and it doesn't matter. >> >> Most likely, an AI which is not an emulation of evolved biology will > > I would not be holding my breath for that one. True. (exhales) > >> experience something entirely unlike what we experience. It should be >> pretty damned interesting, and illuminating for humanity, to interact >> with such alien critters! > > The first generations are human demand-driven, and hence perfectly > boring. Well, in the it-can't-kill-me-yawn kind of boring. I'm sure they'll be interesting, just like a quicksort is interesting. > The other kind is incomprehensible and/or lethal, so I'm > not sure I would want to talk to them unless I'm them. Then you > don't want to talk to me. I always want to talk to you Eugen. But then you can never be them, can you? None of us can, by definition. But incomprehensible, not so! 
It's very likely that if we can build an AGI without basing it on biology, we'll be able to understand it in principle far better than we can understand our own workings (which do seem to be a little bit convoluted). That's not to say that you'd want your daughter marrying one... -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From pharos at gmail.com Tue Dec 15 11:07:18 2009 From: pharos at gmail.com (BillK) Date: Tue, 15 Dec 2009 11:07:18 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <43C548CD-123A-4630-BF32-8E51979D929C@bellsouth.net> References: <512225.67135.qm@web36506.mail.mud.yahoo.com> <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> <20091215091735.GQ17686@leitl.org> <43C548CD-123A-4630-BF32-8E51979D929C@bellsouth.net> Message-ID: On 12/15/09, John Clark wrote: > Yes Eugen you are correct, and it wouldn't be the first time that has > happened. It's just that when somebody says something that is really really > spectacularly stupid I have an equally strong urge to correct them. > > And then there's the rest of the internet to tackle........... BillK From eugen at leitl.org Tue Dec 15 12:00:04 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Dec 2009 13:00:04 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <512225.67135.qm@web36506.mail.mud.yahoo.com> <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> <20091215091735.GQ17686@leitl.org> <43C548CD-123A-4630-BF32-8E51979D929C@bellsouth.net> Message-ID: <20091215120004.GA17686@leitl.org> On Tue, Dec 15, 2009 at 11:07:18AM +0000, BillK wrote: > And then there's the rest of the internet to tackle........... http://imgs.xkcd.com/comics/duty_calls.png -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From gts_2000 at yahoo.com Tue Dec 15 12:06:25 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 15 Dec 2009 04:06:25 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> Message-ID: <37585.5915.qm@web36502.mail.mud.yahoo.com> > I don't have one scrap of information that Mr. Foreman was > conscious either before or after that blow You've fallen into that solipsistic rabbit hole that I mentioned. Like most people I think Foreman had consciousness. Then the dancing butterfly stung like a bee, indirectly causing him to have no consciousness. Seems to me that something interesting to neuroscience happened at that moment. Crick (1994) proposed tentatively that the neuronal correlates of consciousness may be found in neuronal firings in the 40hz range in networks of the thalamocortical system, specifically in connections between the thalamus and layers four and six of the cortex. Searle applauds this sort of research program (he references Crick's hypothesis in his own paper on consciousness) because on his view we need to understand how the brain does the consciousness trick before we can understand how it does the symbol grounding trick. 
-gts From stefano.vaj at gmail.com Tue Dec 15 12:38:29 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 15 Dec 2009 13:38:29 +0100 Subject: [ExI] atheism In-Reply-To: <806692.44426.qm@web32008.mail.mud.yahoo.com> References: <806692.44426.qm@web32008.mail.mud.yahoo.com> Message-ID: <580930c20912150438v7a5e3554v17fc80341f6d3adf@mail.gmail.com> 2009/12/13 Ben Zaiboc > Wait, you say you are not a Zeus atheist (you think he is real), and do not > accord him the same status as Yahweh (so you think yahweh is not real)? > > You have three categories of reality, one in which Zeus belongs (real), one > for your keyboard (somehow differently real), and one for Yahweh (presumably > not real)? > > I don't understand why Yahweh and Zeus aren't grouped together. > > Because I have issues with any entity whose "existence" would be implicit in (and necessitated by) its "essence", and who would exists and still not be part of the world (the world being obviously defined in my mind as the set of all the things that exist). Now, all that is applicable, AFAIK, to the very concept of Yahweh, Allah, or the Holy Trinity; but not to Zeus - nor for that matter to Spiderman or the Great Gatsby. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Dec 15 12:54:36 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Dec 2009 13:54:36 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <710b78fc0912150305p1952dd06n2bc5add119d47529@mail.gmail.com> References: <236279.62526.qm@web36504.mail.mud.yahoo.com> <710b78fc0912142004l6297351dkfdec8dfe1c3c68bf@mail.gmail.com> <20091215094549.GS17686@leitl.org> <710b78fc0912150305p1952dd06n2bc5add119d47529@mail.gmail.com> Message-ID: <20091215125436.GD17686@leitl.org> On Tue, Dec 15, 2009 at 09:35:46PM +1030, Emlyn wrote: > > I don't know what "conscious" even means, but you know AI has achieved > > full human equivalence across the board once everyone is out of job. > > Only equivalence in the most banal sense. My feeling is that machine Do not underestimate activities people will pay money for. Many of them tackle people to their very limit. No longer competitive across the board is a pretty taxing benchmark. > intelligence will replace us well before it's "general intelligence", > whatever that means. We're not even general intelligences. We're as good a yardstick as anything. You need to define an origin somewhere. > Well, in the it-can't-kill-me-yawn kind of boring. I'm sure they'll be > interesting, just like a quicksort is interesting. Watching this beats most TV programming, I guess. > I always want to talk to you Eugen. But then you can never be them, > can you? None of us can, by definition. Biology-derived systems are also capable of self-enhancement runaways. Technically you is a moving target, and there's a continuous trajectory all along the way, but enough quantity will turn into quality. You don't need a lot of delta to be completely incomprehensible. You can easily see it in some people already. > But incomprehensible, not so! It's very likely that if we can build an > AGI without basing it on biology, we'll be able to understand it in If you do it by a darwinian design (nobody so far seems to get a handle on how to do it any other way) it's just as opaque. There's modularity for functional compartment in the body, but just not a lot of it in the brain. Best make modularity part of the fitness function. 
> principle far better than we can understand our own workings (which do > seem to be a little bit convoluted). That's not to say that you'd want > your daughter marrying one... -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Dec 15 13:26:09 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 16 Dec 2009 00:26:09 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <37585.5915.qm@web36502.mail.mud.yahoo.com> References: <554158F6-1E13-4D62-BAB4-7CA6B872C2C9@bellsouth.net> <37585.5915.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/15 Gordon Swobe : > Crick (1994) proposed tentatively that the neuronal correlates of consciousness may be found in neuronal firings in the 40hz range in networks of the thalamocortical system, specifically in connections between the thalamus and layers four and six of the cortex. Searle applauds this sort of research program (he references Crick's hypothesis in his own paper on consciousness) because on his view we need to understand how the brain does the consciousness trick before we can understand how it does the symbol grounding trick. The technical details of how the brain produces consciousness are of course important, but they are not relevant to the philosophical argument. Searle admits that the brain can be simulated by a computer, but he doesn't think this simulation would give rise to consciousness: http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html So, Searle allows that the behaviour of a neuron could be copied by a computer program, but that this artificial neuron would lack the essential ingredient for consciousness. This claim can be refuted with a purely analytic argument, valid independently of any empirical fact about the brain. The argument consists in considering what you would experience if part of your brain were replaced with artificial neurons that are functionally equivalent but (for the purpose of the reductio) lacking in the the essential ingredient of consciousness. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Dec 15 13:28:12 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 15 Dec 2009 05:28:12 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <697215.21184.qm@web36504.mail.mud.yahoo.com> --- On Mon, 12/14/09, Stathis Papaioannou wrote: > In any case, the thought experiment could be done with any part of > the brain. Advanced nanoprocessor controlled devices which behave just > like neurons but, being machines, lack the special ingredient > for consciousness that neurons have... I don't believe artificial neurons would lack the special ingredient for consciousness merely by virtue of their "being machines"! On the contrary, I think we can and ought describe real neurons as machines. >.. are installed in place of part of your brain, the visual cortex being > good for illustration purposes. You are then asked if you notice > anything different. What will you say? Before answering, consider > carefully the implications of the fact that the essential feature of the > artificial neurons is that they behave just like biological neurons in > their interactions with their neighbours. 
What I will say will depend on what I experience, and until the experiment happens I will have no idea what that experience might look like. However I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. So then I consider your theory about nano-neurons an interesting and plausible conjecture, one that any extrope worth his salt should take seriously, but I certainly don't consider it a logical necessity! Now, if your artificial neurons not only interact identically with their neighbors as do real neurons, but also contain all the same electrical and chemical activities as real neurons, and contain any other activities science may not yet have discovered that take place in and about real neurons, then I agree (now by logical necessity) that my experience will seem identical to that caused by real neurons. However in that case we've started manufacturing real neurons, so it hardly seems surprising that they cause the same experience as those produced in nature. -gts From jonkc at bellsouth.net Tue Dec 15 16:17:02 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Dec 2009 11:17:02 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <697215.21184.qm@web36504.mail.mud.yahoo.com> References: <697215.21184.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 15, 2009, at 8:28 AM, Gordon Swobe wrote: > I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. So you think those neighboring neurons will respond differently even if the stimulus they receive is identical. And it all depends on the inner workings of neurons not on how they communicate their output to the outside world. In other words you believe in a soul. I don't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Dec 15 16:47:41 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 16 Dec 2009 03:47:41 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <697215.21184.qm@web36504.mail.mud.yahoo.com> References: <697215.21184.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/16 Gordon Swobe : >>.. are installed in place of part of your brain, the visual cortex being >> good for illustration purposes. You are then asked if you notice >> anything different. What will you say? Before answering, consider >> carefully the implications of the fact that the essential feature of the >> artificial neurons is that they behave just like biological neurons in >> their interactions with their neighbours. > > What I will say will depend on what I experience, and until the experiment happens I will have no idea what that experience might look like. 
However I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. As John Clark pointed out, the neighbouring neurons *must* respond in the same way with the artificial neurons in place as with the original neurons. Therefore, your motor neurons *must* make you behave in the same way: you declare that everything looks normal, and you correctly tell me how many fingers I am holding up. It's impossible that something else happens. So the point is, if you reproduce the behaviour of the neurons, you reproduce the behaviour of the brain and the whole person. The further question then is, does this also reproduce the consciousness? If it does not, then that would mean either that you go blind but don't notice, or that you go blind but feel yourself smiling and declaring that everything is fine despite frantic efforts to call out and end the nightmarish experiment. The former possibility makes a mockery of the concept of perception (how do you know you are perceiving anything now if you can be mistaken in this way?) while the latter implies that you are doing your thinking independently of your brain. These possibilities both seem absurd. The simple explanation is that if your brain behaves the same way, you must have the same consciousness. -- Stathis Papaioannou From jonkc at bellsouth.net Tue Dec 15 17:14:45 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Dec 2009 12:14:45 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <37585.5915.qm@web36502.mail.mud.yahoo.com> References: <37585.5915.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 15, 2009, Gordon Swobe wrote: > Crick (1994) proposed tentatively that the neuronal correlates of consciousness may be found in neuronal firings in the 40hz range in networks of the thalamocortical system How is that theory better than my theory that consciousness is caused by a size 12 foot? > Like most people I think Foreman had consciousness. I think Foreman had consciousness too, but not because of his neuronal firings in the 40hz range in the thalamocortical system. To tell the truth I don't give a hoot in hell about neuronal firings in the 40hz range in thalamocortical systems. I think Mr. Foreman was conscious because he acted intelligently. I believe Mr. Ali would agree with me and I certainly don't imagine he knew much about neuronal firings in the 40hz range in the thalamocortical systems. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
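The replacement argument in this thread is easy to put into code. A minimal sketch in Python, assuming a toy threshold neuron invented purely for illustration (the classes, thresholds and lookup-table trick below are not anyone's actual proposal about real neurons): if the substitute unit maps every input to the same output as the unit it replaces, the downstream units receive identical inputs and so cannot behave differently, whatever the substitute is made of.

```python
# Toy illustration of the functional-replacement argument (all numbers invented).
# A "neuron" here is just a thresholded sum of its inputs; the only point being
# illustrated is that identical input->output mappings leave the rest of the
# chain unchanged.

class BioNeuron:
    def __init__(self, threshold):
        self.threshold = threshold

    def fire(self, inputs):
        # Fires (returns 1) when the summed input reaches the threshold.
        return 1 if sum(inputs) >= self.threshold else 0

class ArtificialNeuron:
    """Different innards (a cached lookup table), same input->output behaviour."""
    def __init__(self, bio_model):
        self.table = {}
        self.bio_model = bio_model

    def fire(self, inputs):
        key = tuple(inputs)
        if key not in self.table:
            # Compute what the biological neuron would have done and remember it.
            self.table[key] = self.bio_model.fire(inputs)
        return self.table[key]

def run_chain(first, rest, stimulus):
    # Feed the stimulus through a simple chain: each neuron's output is the
    # next neuron's only input.
    signal = first.fire(stimulus)
    for n in rest:
        signal = n.fire([signal])
    return signal

downstream = [BioNeuron(threshold=1) for _ in range(4)]
original = BioNeuron(threshold=2)
replacement = ArtificialNeuron(BioNeuron(threshold=2))

for stimulus in ([1, 1], [1, 0], [0, 0]):
    assert run_chain(original, downstream, stimulus) == run_chain(replacement, downstream, stimulus)
print("Downstream behaviour identical for every stimulus tested.")
```

Whether the same trick also carries the experience along is exactly what is in dispute; the sketch only shows that the behavioural side of the bet is forced.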
URL: From kanzure at gmail.com Tue Dec 15 18:27:28 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 15 Dec 2009 12:27:28 -0600 Subject: [ExI] Fwd: Nature Biotechnology gives a downbeat review of DIYbio In-Reply-To: <1bb39ac50912151016s22f4f6b4t51467cfc4cdb1b8e@mail.gmail.com> References: <296f92fe-ef74-41de-baad-bf29368e7d53@u25g2000prh.googlegroups.com> <55ad6af70912141905l76f9a890i37cd0d541b17443d@mail.gmail.com> <1bb39ac50912151016s22f4f6b4t51467cfc4cdb1b8e@mail.gmail.com> Message-ID: <55ad6af70912151027w6c272e22ne4041a8c08b1b813@mail.gmail.com> ---------- Forwarded message ---------- From: Christopher Kelty Date: Tue, Dec 15, 2009 at 12:16 PM Subject: Re: Nature Biotechnology gives a downbeat review of DIYbio To: diybio at googlegroups.com On Mon, Dec 14, 2009 at 8:22 PM, Mackenzie Cowell wrote: > Yeah, long live the dozen! Insert Margaret Mead quote here; small, > thoughtful, committed groups change the world, etc etc. And despite the > negative tone of the coverage, I have to say I am glad Nature Biotech > de-hyped the diybio a bit. Maybe it will help us manage public perception > and expectations. I don't know about Margaret Mead, but this article http://www.nature.com/nbt/journal/v27/n12/full/nbt1209-1109.html is by some anthropologists (whom I know well) from UC Berkeley, who I am sure lurk on this list but never say anything (hi guys!) Their article is worth reading. It argues that there is no simple technical fix for any potential safety or security issues that might arise, and that the polarization of the (non)-debate causes more harm than good. The relentless attempts to figure synthetic biology and DIY Biology as either a threat to the existence of humanity, or humanity's last hope for true innovation, do a disservice to both the possible advantages and dangers that it possesses. What's more, the article essentially lumps DIY Bio in with synthetic biology, bioengineering and big bio generally. It paints DIYBio-ers as essentially the Lackeys of Institutional Biology and its cutting-edge lapdog, synthetic biology; and they accuse the movement (and bioengineering generally) of "moral arrogance." What the article does not say is that DIYBio could be read *instead* as a critique of big bio (e.g. why are people using such expensive equipment when they could hack together a good-enough solution far cheaper; why not teach people who don't go to MIT to do bioengineering, etc.). I find it curious that DIYBio-as-critique is not a story people tell. Indeed, many on this list seem terrified of being identified as critics ("Just leave us alone and let us tinker" is pure disingenuousness). Perhaps it's because it would be necessary to critique synthetic biology and IGEM as well... On the other hand, Bio Art is usually understood *only* as critique (viz. Steve Kurtz), and not as design or engineering (the artscience team from Bangalore at IGEM notwithstanding). If there was ever anything to the comparison with Free Software (and I mean Free Software in this instance, not Open Source), then it was the role of a critical reconfiguration of engineering practice outside of mainstream biology. ck -- You received this message because you are subscribed to the Google Groups "DIYbio" group. To post to this group, send email to diybio at googlegroups.com. To unsubscribe from this group, send email to diybio+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/diybio?hl=en.
-- - Bryan http://heybryan.org/ 1 512 203 0507 From joedalton at consultant.com Tue Dec 15 18:44:42 2009 From: joedalton at consultant.com (joedalton at consultant.com) Date: Tue, 15 Dec 2009 13:44:42 -0500 Subject: [ExI] Testing Message-ID: <8CC4BCDCC608FC1-147C-31ED@web-mmc-d02.sysops.aol.com> Can't seem to get a message through... JD -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Mon Dec 14 18:43:59 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 14 Dec 2009 10:43:59 -0800 (PST) Subject: [ExI] Transhumanism and Not All Religions Were Created Equal Message-ID: <715450.6193.qm@web59904.mail.ac4.yahoo.com> "According to other theories, their coming actually helped or caused the revolution started with the modern age inasmuch as the latter was also a strong reaction to centuries of obscurantism and decadence ('what does not kill us, make us stronger'...)." Stefano Vaj" That, too; the pent up frustration. The Industrial Revolution didn't cause the profound anti-technological bias, the reaction to the Industrial Revolution was the cause-- it was at least one of the major causes. And all that dialectical dicky-do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Mon Dec 14 18:59:01 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 14 Dec 2009 10:59:01 -0800 (PST) Subject: [ExI] Tolerance Message-ID: <649533.17258.qm@web59908.mail.ac4.yahoo.com> > IMO any doctor who would violate their Hippocratic oath would be similar in spirit if not letter to a "Christian" who would commit a felony, such as burglary. >>The reason this is fallow ground is it presupposes two requirements in order to commit such a transgression, two requirements that are not actually required. Premeditation and malice. Also note that 'violate' is not equivalent to 'commit' even in your thesis. >James Choate Not even malice? one would think malice would be a given in doing something contrary to an oath; say if one breaks the oath to tell 'the truth the whole truth...' in a court case by committing perjuring, isn't there a bit of malice involved-- even if it is spontaneous? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Dec 15 18:59:13 2009 From: sparge at gmail.com (Dave Sill) Date: Tue, 15 Dec 2009 13:59:13 -0500 Subject: [ExI] Testing In-Reply-To: <8CC4BCDCC608FC1-147C-31ED@web-mmc-d02.sysops.aol.com> References: <8CC4BCDCC608FC1-147C-31ED@web-mmc-d02.sysops.aol.com> Message-ID: 2009/12/15 : > Can't seem to get a message through... Success. -Dave From bbenzai at yahoo.com Tue Dec 15 19:50:32 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 15 Dec 2009 11:50:32 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <445615.22823.qm@web32008.mail.mud.yahoo.com> Gordon Swobe wrote: > --- On Sun, 12/13/09, Ben Zaiboc > wrote: > > >> The challenge ... is ...to show that formal > programs differ in > >> some important way from shopping lists, some > important way that > >> allows programs to overcome the symbol grounding > problem. > > > > I've just been following this thread peripherally, but > this > > caught my attention.? Are you *seriously* saying that > > you think shopping lists don't differ from programs?? 
> > I mean that if we want to refute the position of this > philosopher who goes by the name of Searle then we need to > show exactly how programs overcome the symbol grounding > problem. > > I think everyone will agree that a piece of paper has no > conscious understanding of the symbols it holds, i.e., that > a piece of paper cannot overcome the symbol grounding > problem. If a program differs from a piece of paper such > that it can have conscious understanding the symbols it > holds, as in strong AI on a software/hardware system, then > how does that happen? > > > Secondly, if you don't think a program can solve the > > mysteriously difficult 'symbol grounding problem', how > can a > > brain do it?? > > Philosophers and cognitive scientists have some theories > about how *minds* do it, but nobody really knows for certain > how the physical brain does it in any sense we might > duplicate. This is like saying we have a theory about how a clock tick attracts a certain insect, but we have no idea how the clock attracts the insect. Mind is a function of Brain. When I say "how can a brain do it?" I'm saying "how does a mind experience doing it?". It's the same thing. > > If it has no logical flaws, Searle's formal argument shows > that however brains do it, they don't do it by running > programs. Another "Whaaaaaa?!" moment. Correct me if I'm wrong, but you are saying there are phenomena that cannot be represented by any program? A symphony orchestra doesn't 'run programs' but that doesn't mean that we can't reproduce Rachmininov's 2nd concerto in exact detail by means of a computer program. To say that "brains don't run programs" is both untrue in a sense, and irrelevant. The question is: do brains process information? (and, I suppose, if you really must: "Do programs process information?") Unless you want to seriously claim that there are things that a lump of biological jelly can do that are theoretically beyond the capacity of any other information-processing system to do, your argument makes no sense. And if you *are* seriously making this claim, then... well, as Eugen said: "*plonk*". There is nothing further to discuss. Ben Zaiboc From hkeithhenson at gmail.com Tue Dec 15 20:25:46 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 15 Dec 2009 12:25:46 -0800 Subject: [ExI] Name for carbon project Message-ID: A few of you have been following my work on solving the energy and carbon problems. Regardless of how you feel about carbon, energy really is a problem, one that if not solved could really make a mess of world civilization. Given reasonable projections, it looks like the singularity might arrive in the middle of famines and resource wars. It is going to be hard enough for unstressed humans to make rational decisions related to AI/nanotech without war stress. Unfortunately attention is focused on carbon and relatively little on the energy problem even though the two are deeply connected. Here is one: http://www.virgin.com/subsites/virginearth/ To have any chance of competing for the prize, the focus must be on sequestering carbon. That's relatively easy and painless if we produce 15 TW of power satellites beyond human energy needs and use it to make synthetic oil for storage in empty oil fields. In any case, I need a name for the project, and the name should include "carbon." Any ideas? 
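For a sense of scale, a back-of-envelope sketch of what 15 TW devoted to fuel synthesis could lock away each year. Every constant below (the electricity-to-fuel conversion efficiency, the energy density and carbon fraction of synthetic oil, the comparison emissions figure) is an assumed round number for illustration only, not a figure from the proposal itself.

```python
# Back-of-envelope only: all constants are assumed round numbers, not project figures.
TW_DEDICATED = 15                   # assumed surplus power-satellite output, terawatts
SECONDS_PER_YEAR = 3.156e7
ELECTRICITY_TO_FUEL_EFF = 0.5       # assumed conversion efficiency, electricity -> hydrocarbons
OIL_ENERGY_DENSITY_J_PER_KG = 42e6  # roughly 42 MJ/kg for liquid hydrocarbons
CARBON_MASS_FRACTION = 0.85         # rough carbon content of oil by mass
CO2_PER_C = 44.0 / 12.0             # mass ratio of CO2 to carbon

electrical_joules = TW_DEDICATED * 1e12 * SECONDS_PER_YEAR
fuel_joules = electrical_joules * ELECTRICITY_TO_FUEL_EFF
oil_kg = fuel_joules / OIL_ENERGY_DENSITY_J_PER_KG
carbon_gt = oil_kg * CARBON_MASS_FRACTION / 1e12   # gigatonnes of carbon locked up per year
co2_gt = carbon_gt * CO2_PER_C                     # expressed as CO2 kept out of the air

print(f"Synthetic oil produced: {oil_kg / 1e12:.1f} Gt/yr")
print(f"Carbon sequestered:     {carbon_gt:.1f} Gt C/yr (~{co2_gt:.0f} Gt CO2/yr)")
# For comparison, fossil-fuel emissions in 2009 were on the order of 30 Gt CO2/yr.
```

Even with generous losses the result lands within a factor of a few of current annual emissions, which is presumably why "carbon" belongs in the name at all.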
Keith From nanite1018 at gmail.com Tue Dec 15 20:30:33 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Tue, 15 Dec 2009 15:30:33 -0500 Subject: [ExI] Fwd: Nature Biotechnology gives a downbeat review of DIYbio In-Reply-To: <55ad6af70912151027w6c272e22ne4041a8c08b1b813@mail.gmail.com> References: <296f92fe-ef74-41de-baad-bf29368e7d53@u25g2000prh.googlegroups.com> <55ad6af70912141905l76f9a890i37cd0d541b17443d@mail.gmail.com> <1bb39ac50912151016s22f4f6b4t51467cfc4cdb1b8e@mail.gmail.com> <55ad6af70912151027w6c272e22ne4041a8c08b1b813@mail.gmail.com> Message-ID: > http://www.nature.com/nbt/journal/v27/n12/full/nbt1209-1109.html > ... > What's more the article essentially lumps DIY Bio in with synthetic > biology, bioengineering and big bio generally. It paints DIYBio-ers > as essentially the Lackeys of Institutional Biology... I didn't get the impression of DIYbio-ers as Big Bio at all. In my reading, synthetic biology and DIYbio are intertwined currents within biology as a whole with similar goals (if different methods). > What the article does not say is that DIYBio could be read *instead* > as a critique of big bio (e.g. why are people using such expensive > equipment when they could hack together a good-enough solution far > cheaper; why not teach people who don't go to MIT to do > bioengineering, etc.). This seems correct, although I think that certain things (synthetic biology for example) can only be done, currently, with Big Bio's resources. This may not be the case in 5-10 years, but so far as I know, it would be impossible to do what Venter and others in the field are doing in your garage for a few tens of thousands of dollars. At least not on any sort of competitive timescale. From the article: > Synthetic biology, activists say, is just like giant agribusiness. > It's really all about ownership of nature, destruction of > biodiversity and devastation of marginalized farming communities. > Or, maybe it's Frankenstein that should worry us. Garage biologists > will create designer organisms, fashioned to the maker's will. The > implications are familiar: violated nature will reap its own revenge. While I disagree with the seemingly parodied view of DIYbio and synthetic biology enthusiasts (it has risks), the above quote describing the activists view is even more ridiculous. Ownership of nature is a good thing, and destruction of biodiversity or the pushing out of small farms from the market are morally neutral (good, if they support human life in the long run). I am willing to admit there are risks associated with DIYbio, certainly. But they are not anything our present system can't handle. You can't kill other people, or intentionally infect them with a disease, or pollute their property, etc. The law has, in place, systems to deal with such events, and so DIYbio is at best a change of the degree of risk that the government must be ready to respond to. Careful monitoring of disease patterns and the like would be enough to keep track of the possible consequences of DIYbio. And here is something that the critics of it do not address: DIYbio dramatically increases the flexibility and resources of those fighting the "bad guys." Right now you might have a dozen labs working on curing some new disease. With DIYbio, you may have thousands working to cure a new disease, manufactured by someone else. This will allow for quicker and more effective responses to the increased risk. 
This, coupled with government monitoring of diseases and environmental changes (to plants, animals, etc.), will likely counterbalance the risks created by DIYbio. And, you get the benefits of it as well. Joshua Job nanite1018 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Dec 15 21:07:04 2009 From: pharos at gmail.com (BillK) Date: Tue, 15 Dec 2009 21:07:04 +0000 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: On 12/15/09, Keith Henson wrote: > To have any chance of competing for the prize, the focus must be on > sequestering carbon. That's relatively easy and painless if we > produce 15 TW of power satellites beyond human energy needs and use it > to make synthetic oil for storage in empty oil fields. > > In any case, I need a name for the project, and the name should > include "carbon." > > That sounds like you are looking for a 4 or 5 word title for the project. Though the end product could have a snappy name like 'Stargas'. A project title might be 'Powersat carbon thermochemical fuel'. Shuffling similar words around should produce something reasonable. BillK From jonkc at bellsouth.net Tue Dec 15 20:42:41 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Dec 2009 15:42:41 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <697215.21184.qm@web36504.mail.mud.yahoo.com> References: <697215.21184.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 15, 2009, Gordon Swobe wrote: > I do take issue with your assumption that your artificial neurons will (by "logical necessity", as you put it in another message) produce exactly the same experience as real neurons merely by virtue of their having the same "interactions with their neighbours" as real neurons, especially in the realm of consciousness. We simply don't know if that's true. Of course we know that's true!. The only way a neuron knows what its neighbor is doing is by examining that neurons output. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Dec 15 22:25:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 15 Dec 2009 14:25:32 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <569235.96962.qm@web36505.mail.mud.yahoo.com> --- On Tue, 12/15/09, Stathis Papaioannou wrote: > ... the neighbouring neurons *must* > respond in the same way with the artificial neurons in place as > with the original neurons. Not so. If you want to make an argument along those lines then I will point out that an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly matches those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons. Another way to look at this problem of functionalism (the real issue here, I think)... Consider this highly simplified diagram of the brain: 0-0-0-0-0-0 The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. 
Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes. Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons. But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting yours: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons. And neither functionalist theory explains how brains become conscious! -gts From gts_2000 at yahoo.com Tue Dec 15 23:06:59 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 15 Dec 2009 15:06:59 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <489201.89832.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/15/09, John Clark wrote: > I think Mr. Foreman was conscious because he acted intelligently. My intelligent watch shows the correct time but doesn't know what time it is. -gts From painlord2k at libero.it Tue Dec 15 23:29:33 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Wed, 16 Dec 2009 00:29:33 +0100 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> Message-ID: <4B281BDD.7000601@libero.it> Il 13/12/2009 18.44, John Clark ha scritto: > On Dec 13, 2009, Mirco Romanato wrote: > >>>> I never see an atheist claim that the existence of god is improbable. >>> Then you lead a very sheltered life. >> Last time I checked "Atheist" was someone that negate the existance of >> any god. > > Then obviously you are not checking in the right place. Richard Dawkins > is certainly an Atheist and yet he said that on a scale of 1 to 10, 1 > being certain God exists and 10 being certain he does not, would place > himself at about 9.99. An Atheist is someone who thinks the existence of > God is such a remote possibility that it's silly for the idea to play > any part in your life. Interesting. So do the fact that an asteroid collision with Earth is "such a remote possibility" make it silly to think that this could play any part in our life? Ths existance of God (any god) would make an important part of our existance (call it a "black swan" if you like) that maybe a bit of play could be useful. Then, no one have replied to me about the fact that an "improbable" god given enough time will become a "sure" god. Maybe someone of this list could be the entity that will change an "improbable" possibility to a "realized" possibility. Who know? Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.108/2566 - Data di rilascio: 12/15/09 08:52:00 From possiblepaths2050 at gmail.com Tue Dec 15 23:47:19 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 15 Dec 2009 16:47:19 -0700 Subject: [ExI] atheism In-Reply-To: <4B281BDD.7000601@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> Message-ID: <2d6187670912151547q656a2962tcfa4bf5b64bc946d@mail.gmail.com> Have any of you read the Isaac Asimov story about a man who is an avowed atheist and upon his death learns there actually is a God and an afterlife? The man rails against God for the ills of earth life and even declares that he will dedicate his endless afterlife existance to finding a way to defeat him! I wish I could remember the title of the story. I think some of you might act similarly if you discovered (upon death) that there actually was an afterlife and a God who held sway over things. But maybe not... I look forward to the MTA conferences held "on the other side." John -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Dec 16 00:02:57 2009 From: pharos at gmail.com (BillK) Date: Wed, 16 Dec 2009 00:02:57 +0000 Subject: [ExI] atheism In-Reply-To: <2d6187670912151547q656a2962tcfa4bf5b64bc946d@mail.gmail.com> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> <2d6187670912151547q656a2962tcfa4bf5b64bc946d@mail.gmail.com> Message-ID: On 12/15/09, John Grigg wrote: > Have any of you read the Isaac Asimov story about a man who is an avowed > atheist and upon his death learns there actually is a God and an afterlife? > The man rails against God for the ills of earth life and even declares that > he will dedicate his endless afterlife existance to finding a way to defeat > him! I wish I could remember the title of the story. > > "The Last Answer" (1986) but it is not really about our traditional god and afterlife. BillK From thespike at satx.rr.com Wed Dec 16 00:32:49 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 15 Dec 2009 18:32:49 -0600 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> <2d6187670912151547q656a2962tcfa4bf5b64bc946d@mail.gmail.com> Message-ID: <4B282AB1.2050703@satx.rr.com> On 12/15/2009 6:02 PM, BillK wrote: > On 12/15/09, John Grigg wrote: >> Have any of you read the Isaac Asimov story about a man who is an avowed >> atheist and upon his death learns there actually is a God and an afterlife? >> The man rails against God for the ills of earth life and even declares that >> he will dedicate his endless afterlife existance to finding a way to defeat >> him! I wish I could remember the title of the story. > "The Last Answer" (1986) ...The Voice said, ?I do not wish to constrain you directly. I will not need to. Since you can do nothing but think, you will think. You do not know how not to think.? ?Then I will give myself a goal. I will invent a purpose.? The Voice said tolerantly, ?That you can certainly do.? ?I have already found a purpose.? ?May I know what it is?? 
?You know already. I know we are not speaking in the ordinary fashion. You adjust my nexus is such a way that I believe I hear you and I believe I speak, but you transfer thoughts to me and from me directly. And when my nexus changes with my thoughts you are at once aware of them and do not need my voluntary transmission.? The Voice said, ?You are surprisingly correct. I am pleased. - But it also pleases me to have you tell me your thoughts voluntarily.? ?Then I will tell you. The purpose of my thinking will be to discover a way to disrupt this nexus of me that you have created. I do not want to think for no purpose but to amuse you. I do not want to think forever to amuse you. I do not want to exist forever to amuse you. All my thinking will be directed toward ending the nexus. That would amuse me.? The Voice said, ?I have no objection to that. Even concentrated thought on ending your own existence may, in spite of you, come up with something new and interesting. And, of course, if you succeed in this suicide attempt you will have accomplished nothing, for I would instantly reconstruct you and in such a way as to make your method of suicide impossible. And if you found another and still more subtle fashion of disrupting yourself, I would reconstruct you with that possibility eliminated, and so on. It could be an interesting game, but you will nevertheless exist eternally. It is my will.? Murray felt a quaver but the words came out with a perfect calm. ?Am I in Hell then, after all? You have implied there is none, but if this were Hell you would lie to us as part of the game of Hell.? [etc] From msd001 at gmail.com Wed Dec 16 01:23:16 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 15 Dec 2009 20:23:16 -0500 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: <62c14240912151723q176c3bd8q6532b2a8d607f8ac@mail.gmail.com> On Tue, Dec 15, 2009 at 3:25 PM, Keith Henson wrote: > > In any case, I need a name for the project, and the name should > include "carbon." > > NRG & CO2 (pronounced: Energy -n- CO-too) or if we really must have the literal word "carbon": Carbon-based Energy Storage Carbon Energy ReCapture Future Fossil-Fuels: Solar to Carbon Natural Fusion to Clean Carbon Solar-sourced Restored Carbon Liquid Sunshine: Solar Hydrocarbons maybe my brainstorming needs better constraints? thoughts/comments? -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Dec 16 01:26:28 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 16 Dec 2009 12:26:28 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <569235.96962.qm@web36505.mail.mud.yahoo.com> References: <569235.96962.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/16 Gordon Swobe : > --- On Tue, 12/15/09, Stathis Papaioannou wrote: > >> ... the neighbouring neurons *must* >> respond in the same way with the artificial neurons in place as >> with the original neurons. > > Not so. If you want to make an argument along those lines then I will point out that an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly matches those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons. 
What is required is that the artificial neuron have appropriate I/O devices to interact with the environment and, internally, that it be able to compute what a biological neuron would do so that it can put out the appropriate outputs at the appropriate times. Moreover, if you consider a volume of artificial neurons only those near the surface of that volume need have I/O devices, such as stores of neurotransmitters to squirt into synapses, since only they will be interfacing will the biological neurons. So the question is: is it possible to simulate the physical processes inside a neuron on a computer? Searle agrees that it is possible, and says so explicitly in the passage I quoted before: > Another way to look at this problem of functionalism (the real issue here, I think)... > > Consider this highly simplified diagram of the brain: > > 0-0-0-0-0-0 > > The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". > > It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes. > > Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons. > > But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting yours: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons. It is not my theory, it is standard functionalism. The thought experiment shows that if you replicate the function of the brain, you must also replicate the consciousness. In your simplified brain above suppose the two leftmost neurons are sensory neurons in the visual cortex and the rest are neurons in the association cortex and motor cortex. The sensory neurons receive input from the retina, process this information and send output to association and motor cortex neurons, including neurons in Wernicke's and Broca's area which end up moving the muscles that produce speech. We then replace the sensory neurons 0 with artificial neurons X, giving: X-X-0-0-0-0 Now, the brain receives visual input from the retina. This is processed by the X neurons, which send output to the 0 neurons. As far as the 0 neurons are concerned, nothing has changed: they receive the same inputs as if the change had not been made, so they behave the same way as they would have originally, and the brain's owner produces speech correctly describing what he sees and declaring that it all looks just the same as before. It's trivially obvious to me that this is what *must* happen. Can you explain how it could possibly be otherwise? 
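A toy numerical illustration of the functional-equivalence point above may help; it assumes nothing about real neurons beyond input/output behaviour, and the threshold-gate "units", the function names and the sample stimulus below are all invented for the example:

# Toy illustration only: the "neurons" here are bare threshold gates.
# If a replacement unit computes the same input/output function as the
# original, the downstream units receive identical inputs and so produce
# identical outputs, whatever the replacement is made of internally.

def biological_unit(inputs):
    # fires (returns 1) when at least two inputs are active
    return 1 if sum(inputs) >= 2 else 0

def artificial_unit(inputs):
    # different internals (a lookup table), same input/output behaviour
    table = {n: int(n >= 2) for n in range(10)}
    return table[sum(inputs)]

def run_chain(front_unit, stimulus):
    # two "sensory" units feed a fixed downstream pair of threshold gates
    a = front_unit(stimulus[:3])
    b = front_unit(stimulus[3:])
    c = biological_unit([a, b, 1])
    d = biological_unit([c, a, 0])
    return (c, d)

stimulus = [1, 0, 1, 1, 1, 0]
print(run_chain(biological_unit, stimulus))   # original network
print(run_chain(artificial_unit, stimulus))   # sensory units replaced
# Both calls print the same pair: the downstream gates cannot tell the
# difference, which is all the replacement argument asks of them.

Whether identical downstream behaviour also settles the further question about experience is, of course, exactly what the rest of this thread disputes.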
-- Stathis Papaioannou From thespike at satx.rr.com Wed Dec 16 02:10:40 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 15 Dec 2009 20:10:40 -0600 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> References: <20091213175616.61346.qmail@moulton.com> <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> Message-ID: <4B2841A0.2060509@satx.rr.com> NYT: December 15, 2009 Op-Ed Contributor Four Sides to Every Story By STEWART BRAND San Francisco CLIMATE talks have been going on in Copenhagen for a week now, and it appears to be a two-sided debate between alarmists and skeptics. But there are actually four different views of global warming. A taxonomy of the four: DENIALISTS They are loud, sure and political. Their view is that climatologists and their fellow travelers are engaged in a vast conspiracy to panic the public into following an agenda that is political and pernicious. Senator James Inhofe of Oklahoma and the columnist George Will wave the banner for the hoax-callers. ?The claim that global warming is caused by manmade emissions is simply untrue and not based on sound science,? Mr. Inhofe declared in a 2003 speech to the Senate about the Kyoto accord that remains emblematic of his position. ?CO2 does not cause catastrophic disasters ? actually it would be beneficial to our environment and our economy .... The motives for Kyoto are economic, not environmental ? that is, proponents favor handicapping the American economy through carbon taxes and more regulations.? SKEPTICS This group is most interested in the limitations of climate science so far: they like to examine in detail the contradictions and shortcomings in climate data and models, and they are wary about any ?consensus? in science. To the skeptics? discomfort, their arguments are frequently quoted by the denialists. In this mode, Roger Pielke, a climate scientist at the University of Colorado, argues that the scenarios presented by the United Nations Intergovernmental Panel on Climate Change are overstated and underpredictive. Another prominent skeptic is the physicist Freeman Dyson, who wrote in 2007: ?I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models .... I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests.? WARNERS These are the climatologists who see the trends in climate headed toward planetary disaster, and they blame human production of greenhouse gases as the primary culprit. Leaders in this category are the scientists James Hansen, Stephen Schneider and James Lovelock. (This is the group that most persuades me and whose views I promote.) ?If humanity wishes to preserve a planet similar to that on which civilization developed and to which life on earth is adapted,? Mr. Hansen wrote as the lead author of an influential 2008 paper, then the concentration of carbon dioxide in the atmosphere would have to be reduced from 395 parts per million to ?at most 350 p.p.m.? CALAMATISTS There are many environmentalists who believe that industrial civilization has committed crimes against nature, and retribution is coming. 
They quote the warners in apocalyptic terms, and they view denialists as deeply evil. The technology critic Jeremy Rifkin speaks in this manner, and the writer-turned-activist Bill McKibben is a (fairly gentle) leader in this category. In his 2006 introduction for ?The End of Nature,? his famed 1989 book, Mr. McKibben wrote of climate change in religious terms: ?We are no longer able to think of ourselves as a species tossed about by larger forces ? now we are those larger forces. Hurricanes and thunderstorms and tornadoes become not acts of God but acts of man. That was what I meant by the ?end of nature.?? The calamatists and denialists are primarily political figures, with firm ideological loyalties, whereas the warners and skeptics are primarily scientists, guided by ever-changing evidence. That distinction between ideology and science not only helps clarify the strengths and weaknesses of the four stances, it can also be used to predict how they might respond to future climate developments. If climate change were to suddenly reverse itself (because of some yet undiscovered mechanism of balance in our climate system), my guess is that the denialists would be triumphant, the skeptics would be skeptical this time of the apparent good news, the warners would be relieved, and the calamatists would seek out some other doom to proclaim. If climate change keeps getting worse then I would expect denialists to grasp at stranger straws, many skeptics to become warners, the warners to start pushing geoengineering schemes like sulfur dust in the stratosphere, and the calamatists to push liberal political agendas ? just as the denialists said they would. From brent.allsop at canonizer.com Wed Dec 16 03:18:32 2009 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 15 Dec 2009 20:18:32 -0700 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <569235.96962.qm@web36505.mail.mud.yahoo.com> References: <569235.96962.qm@web36505.mail.mud.yahoo.com> Message-ID: <4B285188.5030806@canonizer.com> Hi Gordon, Very interesting argument, this 0-0-0-0 one you make. I've never heard it before. You're getting very close to what is important with this. You are flopping what is important between the - and the 0, and pointing out that it is still a problem either way. Perhaps we should canonize this argument? And, just FYI, Stathis is in the camp argued for by Chalmers (Functional Equivalence see: http://canonizer.com/topic.asp/88/8), and if this tentative survey at canonizer.com is an early indicator, cleearely this Chalmers' camp has more expert consensus than any other camp at this level. All these people clearely do think it is a "logical necessity" that they be right. Also, as you point out, they also recognize the conundrum with their 'logical necessity'. This is why they all call it a 'hard problem'. However, there is another camp in a strong second place consensus position which disagrees with this position, for which there is no conundrum or 'hard problem'. This is the 'nature has phenomenal properties' camp here: http://canonizer.com/topic.asp/88/7 This camp asserts that all these people are making a logical error in their argument that such is a 'logical necessity'. This 'fallacy' is being described in the transmigration fallacy camp here: http://canonizer.com/topic.asp/79/2 Brent Allsop Gordon Swobe wrote: > --- On Tue, 12/15/09, Stathis Papaioannou wrote: > > >> ... 
the neighbouring neurons *must* >> respond in the same way with the artificial neurons in place as >> with the original neurons. >> > > Not so. If you want to make an argument along those lines then I will point out that an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly matches those of the natural neuron. In other words, we can know for certain only that natural neurons (or their exact clones) will behave exactly like natural neurons. > > Another way to look at this problem of functionalism (the real issue here, I think)... > > Consider this highly simplified diagram of the brain: > > 0-0-0-0-0-0 > > The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". > > It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes. > > Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons. > > But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting yours: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons. > > And neither functionalist theory explains how brains become conscious! > > -gts > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From moulton at moulton.com Wed Dec 16 04:26:58 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 16 Dec 2009 04:26:58 -0000 Subject: [ExI] Scientists Behaving Badly Message-ID: <20091216042658.54164.qmail@moulton.com> An interesting article. Based on what I observe day to day there are at least two other groups in society at large. One group is made up of people who are curious and looking at various arguments but are not committed to one of the four groups that Brand lists. The other group is comprised of those who (even though access to information is readily available) are not interested in the topic either because of information overload, disgust at they see as boorish behavior and rhetoric from some of the Denialists and some of the Calamatists (and perhaps some Skeptics and Warners) or simply that they have things to worry about which they think are more important or complacency or other reasons. 
Fred From jonkc at bellsouth.net Wed Dec 16 05:30:40 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Dec 2009 00:30:40 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <489201.89832.qm@web36502.mail.mud.yahoo.com> References: <489201.89832.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 15, 2009, Gordon Swobe wrote: > --- On Tue, 12/15/09, John Clark wrote: > >> I think Mr. Foreman was conscious because he acted intelligently. > > My intelligent watch shows the correct time but doesn't know what time it is. I've never seen an intelligent watch but if you really do have such a thing then it certainly knows what time it is?. But let me ask you, if it isn't behavior why exactly do you think Mr. Foreman was conscious and do you think he always is or only when he's not sleeping? John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 16 05:49:24 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Dec 2009 00:49:24 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <569235.96962.qm@web36505.mail.mud.yahoo.com> References: <569235.96962.qm@web36505.mail.mud.yahoo.com> Message-ID: <97CDE44A-5C2C-4DC0-ADBB-8A989CCBA870@bellsouth.net> On Dec 15, 2009, at 5:25 PM, Gordon Swobe wrote: > an artificial neuron must behave in exactly the same way to external stimuli as does a natural neuron if and only if the internal processes of that artificial neuron exactly matches those of the natural neuron. Now that's just silly, a neuron has no way of knowing what internal process a neighboring neuron undergoes, it treats it as a black box. It's only interested in what it does, not how it does it. > To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". I'll be damned if I know why you were astonished, and I'll be damned to understand how it could be anything other than a logical necessity. And I don't understand the point you are trying to make, what's wrong with beer cans and toilet paper? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 16 06:31:46 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Dec 2009 01:31:46 -0500 Subject: [ExI] atheism In-Reply-To: <4B281BDD.7000601@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> Message-ID: <3BA05DF7-C644-445E-8424-F1388DBFABE6@bellsouth.net> On Dec 15, 2009, Mirco Romanato wrote: > So do the fact that an asteroid collision with Earth is "such a remote possibility" make it silly to think that this could play any part in our life? The chance that I will be killed by an asteroid is about the same as the chance of me being killed in an airliner crash, remote but worth thinking about. On the other hand I would regard the probability of God existing about equal to that of all the air molecules in my room through random diffusion just happen to end up on the other side of the room and I die of asphyxia in a vacuum. I really don't think that worry deserves further thought. 
And there is something else, even if I wanted the possibility of God's existence to play a part in my life I don't have a clue how to go about it and neither does anybody else. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From deimtee at optusnet.com.au Wed Dec 16 07:42:06 2009 From: deimtee at optusnet.com.au (David) Date: Wed, 16 Dec 2009 18:42:06 +1100 Subject: [ExI] atheism In-Reply-To: <4B281BDD.7000601@libero.it> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> Message-ID: <20091216184206.1bcf12dd@optusnet.com.au> On Wed, 16 Dec 2009 00:29:33 +0100 Mirco Romanato wrote: > Il 13/12/2009 18.44, John Clark ha scritto: > > > > Then obviously you are not checking in the right place. Richard > > Dawkins is certainly an Atheist and yet he said that on a scale of > > 1 to 10, 1 being certain God exists and 10 being certain he does > > not, would place himself at about 9.99. An Atheist is someone who > > thinks the existence of God is such a remote possibility that it's > > silly for the idea to play any part in your life. > > Interesting. > So do the fact that an asteroid collision with Earth is "such a > remote possibility" make it silly to think that this could play any > part in our life? > > Ths existance of God (any god) would make an important part of our > existance (call it a "black swan" if you like) that maybe a bit of > play could be useful. > > Then, no one have replied to me about the fact that an "improbable" > god given enough time will become a "sure" god. > Maybe someone of this list could be the entity that will change an > "improbable" possibility to a "realized" possibility. Who know? > > Mirco You are conflating two different probabilities here. There is a low probability of an asteroid impact per time unit. This probability increases as the time considered gets longer. The probability that God exists (one in a thousand in Dawkin's argument) is not time dependent. Either he exists or he does not, and the duration of the observing period makes no difference. -David From stathisp at gmail.com Wed Dec 16 10:31:08 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 16 Dec 2009 21:31:08 +1100 Subject: [ExI] atheism In-Reply-To: <20091216184206.1bcf12dd@optusnet.com.au> References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> <20091216184206.1bcf12dd@optusnet.com.au> Message-ID: 2009/12/16 David : > You are conflating two different probabilities here. There is a low > probability of an asteroid impact per time unit. ?This probability > increases as the time considered gets longer. > The probability that God exists (one in a thousand in Dawkin's > argument) is not time dependent. Either he exists or he does not, and > the duration of the observing period makes no difference. One in a thousand chance of God existing seems awfully generous! If the probability is the same for each one of all possible gods then that would mean it is almost certain that at least one god exists. 
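The arithmetic behind both points can be spelled out with made-up numbers; the per-year impact figure and the count of candidate gods below are assumptions chosen only to make the contrast visible:

# A hazard quoted per unit time accumulates as the window grows:
p_per_year = 1e-5                       # assumed annual impact probability
for years in (1, 100, 10_000, 1_000_000):
    p = 1 - (1 - p_per_year) ** years
    print(f"impact within {years:>9} years: {p:.4f}")

# A time-independent proposition does not accumulate; waiting longer
# changes nothing about it:
p_god = 1e-3                            # the "one in a thousand" figure
print(f"probability that God exists, however long we wait: {p_god}")

# The quip about many gods: if each of N candidate gods independently
# had probability 1/1000, at least one would almost surely exist.
N = 10_000                              # assumed number of candidate gods
print(f"at least one of {N} gods: {1 - (1 - p_god) ** N:.6f}")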
-- Stathis Papaioannou From deimtee at optusnet.com.au Wed Dec 16 11:05:16 2009 From: deimtee at optusnet.com.au (David) Date: Wed, 16 Dec 2009 22:05:16 +1100 Subject: [ExI] atheism In-Reply-To: References: <287486.21264.qm@web56804.mail.re3.yahoo.com> <4B22928F.2010000@libero.it> <8F4B49F3-ECE3-4CD2-A855-E9BC79644573@bellsouth.net> <4B24E5DF.4050205@libero.it> <4B281BDD.7000601@libero.it> <20091216184206.1bcf12dd@optusnet.com.au> Message-ID: <20091216220516.72b97a0a@optusnet.com.au> On Wed, 16 Dec 2009 21:31:08 +1100 Stathis Papaioannou wrote: > 2009/12/16 David : > > > You are conflating two different probabilities here. There is a low > > probability of an asteroid impact per time unit. ?This probability > > increases as the time considered gets longer. > > The probability that God exists (one in a thousand in Dawkin's > > argument) is not time dependent. Either he exists or he does not, > > and the duration of the observing period makes no difference. > > One in a thousand chance of God existing seems awfully generous! If > the probability is the same for each one of all possible gods then > that would mean it is almost certain that at least one god exists. > > Actually, I was using John Clark's quote: > Then obviously you are not checking in the right place. Richard > Dawkins is certainly an Atheist and yet he said that on a scale of 1 > to 10, 1 being certain God exists and 10 being certain he does not, > would place himself at about 9.99. An Atheist is someone who thinks > the existence of God is such a remote possibility that it's silly for > the idea to play any part in your life. I have no idea what odds Dawkins would really give, or whether that referred to any god at all. I agree that p = 0.001 is a ridiculously high figure, but the point was that unlike the probability of an asteroid impact, it doesn't change over time. -David. From bbenzai at yahoo.com Wed Dec 16 13:12:34 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 16 Dec 2009 05:12:34 -0800 (PST) Subject: [ExI] atheism In-Reply-To: Message-ID: <89031.64690.qm@web32003.mail.mud.yahoo.com> > From: Stefano Vaj wrote: > 2009/12/13 Ben Zaiboc > > > Wait, you say you are not a Zeus atheist (you think he > is real), and do not > > accord him the same status as Yahweh (so you think > yahweh is not real)? > > > > You have three categories of reality, one in which > Zeus belongs (real), one > > for your keyboard (somehow differently real), and one > for Yahweh (presumably > > not real)? > > > > I don't understand why Yahweh and Zeus aren't grouped > together. > > > > > Because I have issues with any entity whose "existence" > would be implicit in > (and necessitated by) its "essence", and who would exists > and still not be > part of the world (the world being obviously defined in my > mind as the set > of all the things that exist). > > Now, all that is applicable, AFAIK, to the very concept of > Yahweh, Allah, or > the Holy Trinity; but not to Zeus - nor for that matter to > Spiderman or the > Great Gatsby. Sorry, I don't get it. Surely all these things are in the same category: products of human imagination? What various different people imagine these imaginary things' different properties are, is irrelevant. None of them are real. 
Ben Zaiboc From gts_2000 at yahoo.com Wed Dec 16 13:15:47 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 16 Dec 2009 05:15:47 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <97CDE44A-5C2C-4DC0-ADBB-8A989CCBA870@bellsouth.net> Message-ID: <670835.19976.qm@web36508.mail.mud.yahoo.com> --- On Wed, 12/16/09, John Clark wrote: >> To test whether you really believed this, I asked if it would > matter if we constructed the zeros out of beer cans and > toilet paper. Somewhat to my astonishment, you replied that > such a brain would still have consciousness by "logical > necessity".? > > I'll be damned if I know why you were astonished, > and I'll be damned to understand how it could be > anything other than a logical necessity. If you or Stathis can show me a coherent scientific theory that explains how a mountain of empty beer cans squirting neurotransmitters into the spaces between themselves will constitute a mind capable of overcoming the symbol grounding problem then I'll promote the idea to something better than "highly speculative hypothesis". I don't think it would work even with full beer cans. Not even if they were Heinekens. -gts From gts_2000 at yahoo.com Wed Dec 16 14:10:09 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 16 Dec 2009 06:10:09 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <366563.47826.qm@web36508.mail.mud.yahoo.com> --- On Tue, 12/15/09, Stathis Papaioannou wrote: > http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html ... > So, Searle allows that the behaviour of a neuron could be > copied by a computer program, but that this artificial neuron > would lack the essential ingredient for consciousness. This claim > can be refuted with a purely analytic argument, valid independently > of any empirical fact about the brain. The argument consists in > considering what you would experience if part of your brain were > replaced with artificial neurons that are functionally equivalent > but (for the purpose of the reductio) lacking in the the essential > ingredient of consciousness. Glad to see you read that article. I don't understand why you say you refuted anything with a purely analytic argument that does not depend on any empirical fact, when your argument consists of imagining an empirical fact! But that's besides the point... It looks like you want to refute Searle's claim that although a computer simulation of a brain is possible, such a simulation will not have intentionality/semantics. It won't on Searle's view have any more semantics than does a computer simulation of anything have anything. A simulation is, umm, a simulation. I once wrote a gaming application in C++ that contained an imaginary character. Because the character interacted in complex ways with the human player in spoken language (it used voice recognition) I found it handy to create an object called "brain" in my code to represent the character's thought processes. Had I had the knowledge and the time, I could have created a complete computer simulation of a real brain. Assume I had done so. Did my character have understanding of the words it manipulated? Did the program itself have such understanding? In other words, did either the character or the program overcome the symbol grounding problem? No and No and No. I merely created a computer simulation in which an imaginary character with an imaginary brain pretended to overcome the symbol grounding problem. 
I did nothing more interesting than does a cartoonist who writes cartoons for your local newspaper. -gts From bbenzai at yahoo.com Wed Dec 16 13:54:18 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 16 Dec 2009 05:54:18 -0800 (PST) Subject: [ExI] The symbol grounding problem In-Reply-To: Message-ID: <367343.43210.qm@web32007.mail.mud.yahoo.com> > "The operations of the brain can be simulated on a digital computer in the same sense in which weather systems, the behavior of the New York stock market or the pattern of airline flights over Latin America can. So our question is not, "Is the mind a program?" The answer to that is, "No". Nor is it, "Can the brain be simulated?" The answer to that is, "Yes". The question is, "Is the brain a digital computer?" And for purposes of this discussion I am taking that question as equivalent to: "Are brain processes computational?" Groan. In no way can the question "is X a digital computer?" be regarded as equivalent to "Are the processes of X computational?" A mechanical clock is not a digital computer, Babbage's Difference Engine is not a digital computer, the WHOLE FREAKIN' UNIVERSE is not a digital computer (probably). My opinion of Searle wasn't very high before, but since reading that, he's taken a nosedive and made a deep crater. If the operations of the brain can be simulated on a digital computer, it necessarily follows (if it wasn't blindingly obvious already) that 'brain processes are computational'. Why does this even need saying? Information is one of the basic properties. Any physical or energetic process involves information being processed. Information processing = computation. Ben Zaiboc From bbenzai at yahoo.com Wed Dec 16 14:08:39 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 16 Dec 2009 06:08:39 -0800 (PST) Subject: [ExI] atheism In-Reply-To: Message-ID: <690889.42637.qm@web32002.mail.mud.yahoo.com> John Grigg asked: > > Have any of you read the Isaac Asimov story about a man who > is an avowed > atheist and upon his death learns there actually is a God > and an afterlife? > The man rails against God for the ills of earth life and > even declares that > he will dedicate his endless afterlife existance to finding > a way to defeat > him!? I wish I could remember the title of the story. > > I think some of you might act similarly if you discovered > (upon death) that > there actually was an afterlife and a God who held sway > over things.? But > maybe not...? I look forward to the MTA conferences > held "on the other > side." Wow. Absolutely! I've never read the story, but as a teenager, I used to tell people that if there was a god, he should be bloody ashamed of himself. Ben Zaiboc From thespike at satx.rr.com Wed Dec 16 15:07:07 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 16 Dec 2009 09:07:07 -0600 Subject: [ExI] Steorn back from the grave? Message-ID: <4B28F79B.7020307@satx.rr.com> How hilarious if this turned out to be game-changing after all! As the bloke who send me a headsup on this commented, "You gotta give these guys credit for tenacity." Streaming video of the alleged device now on public display in Dublin, and advert at http://www.steorn.com/ What the *hell* are these guys up to? 
Damien Broderick From max at maxmore.com Wed Dec 16 15:30:09 2009 From: max at maxmore.com (Max More) Date: Wed, 16 Dec 2009 09:30:09 -0600 Subject: [ExI] The Genome Generation: The case for having your genes sequenced Message-ID: <200912161530.nBGFUHBt000158@andromeda.ziaspace.com> Nothing new here, really, but a couple of nice excerpts: The Genome Generation The case for having your genes sequenced >The success of the Human Genome Project led people to speculate that >someday every person would have his or her genome sequenced. At a >cost of billions per genome, of course, it was an impossible dream. >But beginning in 2004, a wave of next-generation sequencing >technologies emerged, and costs began to drop 10-fold each year. >Today we can sequence a million individuals' genomes for what it >would have cost to sequence one person's genome five years ago. At a >current cost of $5,000, it's become so inexpensive that some >business models project that personal genome sequencing could be >provided to individuals for free by third parties (insurers, >employers, governments) who might be able to use the information >from the sequencing as a way to reduce health-care costs. No matter >who pays for it, as technology improves, the cost will continue to >go down, likely to $100 per genome, and lower. >The message is not "Here's your destiny. Get used to it!" Instead, >it's "Here's your destiny, and you can do something about it!" http://www.newsweek.com/id/226963 ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From sparge at gmail.com Wed Dec 16 16:04:01 2009 From: sparge at gmail.com (Dave Sill) Date: Wed, 16 Dec 2009 11:04:01 -0500 Subject: [ExI] Masterworks in Petri dishes Message-ID: This is pretty cool. Fractal fans should appreciate it. http://www.newscientist.com/gallery/microbe-art -Dave From max at maxmore.com Wed Dec 16 16:09:06 2009 From: max at maxmore.com (Max More) Date: Wed, 16 Dec 2009 10:09:06 -0600 Subject: [ExI] Lomborg article Message-ID: <200912161609.nBGG9HW5019201@andromeda.ziaspace.com> In Brand's terms, Bjorn Lomborg is neither a Denier nor even a Skeptic. He doesn't dispute that warming is happening, nor that humans are the cause. (I disagree with him in that I don't think we yet know the extent to which humans are contributing to possibly entirely or largely natural warming.) So, even BillK and Alfio presumably have no reason to reject his thoughts. I've read parts of his book, Cool It, and intend to review it once I've read the whole thing. For now, here's his latest article: http://online.wsj.com/article/SB10001424052748704517504574589952331068322.html?mod=rss_Today%27s_Most_Popular ------------------------------------- Max More, Ph.D. Strategic Philosopher Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From alfio.puglisi at gmail.com Wed Dec 16 17:32:27 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 16 Dec 2009 18:32:27 +0100 Subject: [ExI] Lomborg article In-Reply-To: <200912161609.nBGG9HW5019201@andromeda.ziaspace.com> References: <200912161609.nBGG9HW5019201@andromeda.ziaspace.com> Message-ID: <4902d9990912160932q12a268c4w50781afdf9579d54@mail.gmail.com> On Wed, Dec 16, 2009 at 5:09 PM, Max More wrote: > In Brand's terms, Bjorn Lomborg is neither a Denier nor even a Skeptic. He > doesn't dispute that warming is happening, nor that humans are the cause. 
(I > disagree with him in that I don't think we yet know the extent to which > humans are contributing to possibly entirely or largely natural warming.) > So, even BillK and Alfio presumably have no reason to reject his thoughts. > > I've read parts of his book, Cool It, and intend to review it once I've > read the whole thing. For now, here's his latest article: > > > http://online.wsj.com/article/SB10001424052748704517504574589952331068322.html?mod=rss_Today%27s_Most_Popular > I'm perfectly OK with this article, which discusses policy. Arguing that money spent on global warming could be better spent elsewhere can be a valid position if supported by numbers. Lomborg's arguments are, in my opinion, the kind of arguments that people concerned with the political consequences of global warming can and should make: propose better solutions than the ones currently discussed, or demonstrate that leaving the thing going on is the lesser of the various possible evils (just for clarity, this doesn't mean messing up with the evidence to somewhat conclude that there are no consequences. Rather, that taking action would bring in even direr consequences). I'm a skeptical on the latter position, but that doesn't mean that it could turn out to be true. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 16 17:16:38 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Dec 2009 12:16:38 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <670835.19976.qm@web36508.mail.mud.yahoo.com> References: <670835.19976.qm@web36508.mail.mud.yahoo.com> Message-ID: <7ABEF292-BE18-4A56-8CE2-D4446E5261C1@bellsouth.net> On Dec 16, 2009, Gordon Swobe wrote: >> If you or Stathis can show me a coherent scientific theory that explains how a mountain of empty beer cans squirting neurotransmitters into the spaces between themselves will constitute a mind capable of overcoming the symbol grounding problem then I'll promote the idea to something better than "highly speculative hypothesis". Hey, you forgot the toilet paper! Obviously even in theory you couldn't make an intelligence out of beer cans alone because.... because.... because..., well just because. But toilet paper is another matter entirely, after all, punch cards are also made of paper and they overcome the symbol grounding "problem". John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 16 18:25:36 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Dec 2009 13:25:36 -0500 Subject: [ExI] Steorn back from the grave? In-Reply-To: <4B28F79B.7020307@satx.rr.com> References: <4B28F79B.7020307@satx.rr.com> Message-ID: <3559970E-0402-4811-9334-C11863D74D53@bellsouth.net> On Dec 16, 2009, at 10:07 AM, Damien Broderick wrote: > What the *hell* are these guys up to? They say it produces 3 times as much energy as it consumes, and they have the output connected up to the input, so I don't understand why it doesn't spin faster and faster until it explodes. I think all they have done is created a device called an electric motor. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pharos at gmail.com Wed Dec 16 18:33:42 2009 From: pharos at gmail.com (BillK) Date: Wed, 16 Dec 2009 18:33:42 +0000 Subject: [ExI] Lomborg article In-Reply-To: <200912161609.nBGG9HW5019201@andromeda.ziaspace.com> References: <200912161609.nBGG9HW5019201@andromeda.ziaspace.com> Message-ID: On 12/16/09, Max More wrote: > In Brand's terms, Bjorn Lomborg is neither a Denier nor even a Skeptic. He > doesn't dispute that warming is happening, nor that humans are the cause. (I > disagree with him in that I don't think we yet know the extent to which > humans are contributing to possibly entirely or largely natural warming.) > So, even BillK and Alfio presumably have no reason to reject his thoughts. > I don't see too much to grumble about in this particular article. :) He advocates for more investment in developing green energy, which I agree with. Though I am dubious about his logic that nations shouldn't spend any money on coping with global warming until after problems like AIDS and malaria have been solved. (Like never!). But the main problem and the reason that Exxon likes him is what he doesn't say. Exxon read him as saying that it's OK for the polluters to carry on as normal. No worries, wait and see. There are more pressing problems to be concerned about. That's what ExxonMobil want - to be allowed to carry on polluting for profit, and they will use any excuse and any person to achieve that end. BillK From thespike at satx.rr.com Wed Dec 16 20:48:59 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 16 Dec 2009 14:48:59 -0600 Subject: [ExI] Steorn back from the grave? In-Reply-To: <3559970E-0402-4811-9334-C11863D74D53@bellsouth.net> References: <4B28F79B.7020307@satx.rr.com> <3559970E-0402-4811-9334-C11863D74D53@bellsouth.net> Message-ID: <4B2947BB.3010405@satx.rr.com> On 12/16/2009 12:25 PM, John Clark wrote: > They say it produces 3 times as much energy as it consumes, and they > have the output connected up to the input, so I don't understand why it > doesn't spin faster and faster until it explodes. Because, according to the video, it dissipates a lot of kinetic energy as heat. A sort of very slow, controlled explosion, if you like. If there's the remotest chance that they've stumbled on something real, it would have to be "tap" not "create". Damien Broderick From natasha at natasha.cc Wed Dec 16 23:59:14 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 16 Dec 2009 18:59:14 -0500 Subject: [ExI] Life Extension Relationships Message-ID: <20091216185914.coo3l5r34k4osssk@webmail.natasha.cc> I was interviewed by the New York Times Magazine recently and the journalist is now asking if I know anyone who would be willing to be interviewed about relationship issues whereby one partner is committed to life extension and the other partner is opposed to it. If you are in a relationship (or know of someone) dealing with this situation and would like to appear in the NYT magazine, please let me know. Many thanks, Natasha From stathisp at gmail.com Thu Dec 17 00:36:27 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 17 Dec 2009 11:36:27 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <366563.47826.qm@web36508.mail.mud.yahoo.com> References: <366563.47826.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/17 Gordon Swobe : > --- On Tue, 12/15/09, Stathis Papaioannou wrote: > >> http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html > ... 
> >> So, Searle allows that the behaviour of a neuron could be >> copied by a computer program, but that this artificial neuron >> would lack the essential ingredient for consciousness. This claim >> can be refuted with a purely analytic argument, valid independently >> of any empirical fact about the brain. The argument consists in >> considering what you would experience if part of your brain were >> replaced with artificial neurons that are functionally equivalent >> but (for the purpose of the reductio) lacking in the essential >> ingredient of consciousness. > > Glad to see you read that article. > > I don't understand why you say you refuted anything with a purely analytic argument that does not depend on any empirical fact, when your argument consists of imagining an empirical fact! But that's beside the point... The form of the argument is such that it is true if the premises are true: that is, IF it is possible to simulate the behaviour of a neuron with a computer program THEN it is also possible to simulate consciousness. Return to your simplified brain X-X-0-0-0-0, where X are the artificial neurons in the visual cortex and 0 are the biological neurons in the association, language and motor cortex. The X neurons' job is to behave in such a way that the 0 neurons can't tell that they aren't 0 neurons. According to Searle, this masquerade should be possible. As a result, the subject with the cyborgised brain will tell me correctly how many fingers I am holding up, declare that everything looks normal and that he feels just the same as he did before the operation. This is what *must* happen. It's true in all possible worlds, true such that even an omnipotent God couldn't make it not true. Please explain if you disagree! Now, it is logically possible that although the subject will behave exactly the same as if no change to his brain had been made, his consciousness would be different. That is, he might be blind and not notice that he is blind, or he might notice that he is blind but smile and say that everything is just fine while attempting in vain to communicate his terror. The first possibility would make the notion of consciousness meaningless, for if nothing else, we understand that having a perception means that we realise that we have the perception. The second possibility would mean that the subject is thinking without his brain, since his brain is constrained to behave normally. Both these scenarios seem quite implausible, if logically possible. Much easier to simply say that the subject would be normally conscious. > It looks like you want to refute Searle's claim that although a computer simulation of a brain is possible, such a simulation will not have intentionality/semantics. It won't on Searle's view have any more semantics than does a computer simulation of anything have anything. A simulation is, umm, a simulation. While a simulation of a thunderstorm is not wet, a simulation of a brain is conscious. That's the difference between brains and thunderstorms. > I once wrote a gaming application in C++ that contained an imaginary character. Because the character interacted in complex ways with the human player in spoken language (it used voice recognition) I found it handy to create an object called "brain" in my code to represent the character's thought processes. Had I had the knowledge and the time, I could have created a complete computer simulation of a real brain. > > Assume I had done so. Did my character have understanding of the words it manipulated?
Did the program itself have such understanding? In other words, did either the character or the program overcome the symbol grounding problem? > > No and No and No. I merely created a computer simulation in which an imaginary character with an imaginary brain pretended to overcome the symbol grounding problem. I did nothing more interesting than does a cartoonist who writes cartoons for your local newspaper. A complex enough game character probably would be conscious. There are gradations of consciousness: bacterium, ant, lizard, mouse, dog, human, superhuman AI. -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Dec 17 01:35:31 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 16 Dec 2009 17:35:31 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <8736EC3A-E08A-45E9-9F15-DFEF1945E6DF@bellsouth.net> Message-ID: <806831.38063.qm@web36503.mail.mud.yahoo.com> --- On Wed, 12/16/09, John Clark wrote: >> I take a huge flying leap of faith and assume that John > Clark's brain can think too. > The problem is that you're willing to make that > huge leap of faith for me but not for a computer, you'll > do it for meat but not for silicon. I would like to do it for my computer, John, but you will need first to show me a flaw in Searle's formal argument. I offer it yet again in answer to your words above: Because 1) Programs are formal (syntactic) and because 2) Minds have mental contents (semantics) and because 3) Syntax is neither constitutive of nor sufficient for semantics It follows that 4) Programs are neither constitutive of nor sufficient for minds. If you think you see a logical problem then show it to me. -gts From gts_2000 at yahoo.com Thu Dec 17 02:21:35 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 16 Dec 2009 18:21:35 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <323108.38906.qm@web36508.mail.mud.yahoo.com> Stathis, You wrote this earlier: > So, Searle allows that the behaviour of a neuron could be > copied by a computer program, but that this artificial neuron > would lack the essential ingredient for consciousness. You then tried to refute that position that you attributed to Searle. But did you understand that the "this artificial neuron" to which you referred exists only as a computer simulation? I.e., only as some lines of code, only as some zeros and ones, only some 'on' and 'offs', only as some stuff going on in RAM? And do you really hold the position that contrary to Searle's claim, this artificial neuron that I've described has consciousness? I need some clarification here because we've discussed manufactured artificial neurons also. let's stipulate for clarity: Simulated = in a program Artificial = manufactured -gts --- On Wed, 12/16/09, Stathis Papaioannou wrote: > From: Stathis Papaioannou > Subject: Re: [ExI] The symbol grounding problem in strong AI > To: gordon.swobe at yahoo.com, "ExI chat list" > Date: Wednesday, December 16, 2009, 7:36 PM > 2009/12/17 Gordon Swobe : > > --- On Tue, 12/15/09, Stathis Papaioannou > wrote: > > > >> http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html > > ... > > > >> So, Searle allows that the behaviour of a neuron > could be > >> copied by a computer program, but that this > artificial neuron > >> would lack the essential ingredient for > consciousness. This claim > >> can be refuted with a purely analytic argument, > valid independently > >> of any empirical fact about the brain. 
The > argument consists in > >> considering what you would experience if part of > your brain were > >> replaced with artificial neurons that are > functionally equivalent > >> but (for the purpose of the reductio) lacking in > the the essential > >> ingredient of consciousness. > > > > Glad to see you read that article. > > > > I don't understand why you say you refuted anything > with a purely analytic argument that does not depend on any > empirical fact, when your argument consists of imagining an > empirical fact! But that's besides the point... > > The form of the argument is such that it is true if the > premises are > true: that is, IF it is possible to simulate the behaviour > of a neuron > with a computer program THEN it is also possible to > simulate > consciousness. > > Return to your simplified brain X-X-0-0-0-0, where X are > the > artificial neurons in the visual cortex and 0 are the > biological > neurons in the association, language and motor cortex. The > X neurons' > job is to behave in such a way that the 0 neurons can't > tell that they > aren't 0 neurons. According to Searle, this masquerade > should be > possible. As a result, the subject with the cyborgised > brain will tell > me correctly how many fingers I am holding up, declare that > everything > looks normal and that he feels just the same as he did > before the > operation. This is what *must* happen. It's true in all > possible > words, true such that even an omnipotent God couldn't make > it not > true. Please explain if you disagree! > > Now, it is logically possible that although the subject > will behave > exactly the same as if no change to his brain had been > made, his > consciousness would be different. That is, he might be > blind and not > notice that he is blind, or he might notice that he is > blind smiles > and says everything is just fine while attempting in vain > to > communicate his terror. The first possibility would make > the notion of > consciousness meaningless, for if nothing else, we > understand that > having a perception means that we realise that we have the > perception. > The second possibility would mean that the subject is > thinking without > his brain, since his brain is constrained to behave > normally. Both > these scenarios seem quite implausible, if logically > possible. Much > easier to simply say that the subject would be normally > conscious. > > > It looks like you want to refute Searle's claim that > although a computer simulation of a brain is possible, such > a simulation will not have intentionality/semantics. It > won't on Searle's view have any more semantics than does a > computer simulation of anything have anything. A simulation > is, umm, a simulation. > > While a simulation of a thunderstorm is not wet, a > simulation of a > brain is conscious. That's the difference between brains > and > thunderstorms. > > > I once wrote a gaming application in C++ that > contained an imaginary character. Because the character > interacted in complex ways with the human player in spoken > language (it used voice recognition) I found it handy to > create an object called "brain" in my code to represent the > character's thought processes. Had I had the knowledge and > the time, I could have created a complete computer > simulation of a real brain. > > > > Assume I had done so. Did my character have > understanding of the words it manipulated? Did the program > itself have such understanding? 
In other words, did either > the character or the program overcome the symbol grounding > problem? > > > > No and No and No. I merely created a computer > simulation in which an imaginary character with an imaginary > brain pretended to overcome the symbol grounding problem. I > did nothing more interesting than does a cartoonist who > writes cartoons for your local newspaper. > > A complex enough game character probably would be > conscious. There are > gradations of consciousness: bacterium, ant, lizard, mouse, > dog, > human, superhuman AI. > > > -- > Stathis Papaioannou > From stathisp at gmail.com Thu Dec 17 03:03:22 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 17 Dec 2009 14:03:22 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <323108.38906.qm@web36508.mail.mud.yahoo.com> References: <323108.38906.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/17 Gordon Swobe : > Stathis, > > You wrote this earlier: > >> So, Searle allows that the behaviour of a neuron could be >> copied by a computer program, but that this artificial neuron >> would lack the essential ingredient for consciousness. > > You then tried to refute that position that you attributed to Searle. > > But did you understand that the "this artificial neuron" to which you referred exists only as a computer simulation? I.e., only as some lines of code, only as some zeros and ones, only some 'on' and 'offs', only as some stuff going on in RAM? > > And do you really hold the position that contrary to Searle's claim, this artificial neuron that I've described has consciousness? > > I need some clarification here because we've discussed manufactured artificial neurons also. > > let's stipulate for clarity: > > Simulated = in a program > Artificial = manufactured What I have been considering is an artificial neuron. The artificial neuron consists of (1) a computer, (2) a computer program which simulates the chemical processes that take place in a biological neuron, and (3) I/O devices which allow interaction with a biological neuron. The I/O devices might include neurotransmitters, chemoreceptors, electrodes to measure electrical potentials or directly stimulate neurons, and so on. If there is a volume of neurons that has been replaced only those near the surface of the volume which will interact with the biological neurons need have I/O devices; or equivalently, all of the artificial neurons may be consolidated into a single device simulating the behaviour of the network of biological neurons originally in place. By extension of the process where a few of the neurons are replaced leaving behaviour and consciousness unchanged, the whole brain is replaced. Sense organs can then also be replaced with electronic equivalents (we already have this technology in a crude form, eg. bionic ears), the end result being a robot that behaves like a human and has the consciousness of a human. If the robot's sensory input is replaced by a computer generating a virtual environment then the body can be dispensed with altogether. Voil?: a mind running entirely as a program on a computer. 
-- Stathis Papaioannou From rafal.smigrodzki at gmail.com Thu Dec 17 07:16:24 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Thu, 17 Dec 2009 02:16:24 -0500 Subject: [ExI] Living temperature dataset In-Reply-To: <4902d9990912130546q788105b7u5e591be5deba699d@mail.gmail.com> References: <4B240B05.7080106@libero.it> <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> <7641ddc60912122042p74b0c549p591e5e0d734011b6@mail.gmail.com> <4902d9990912130546q788105b7u5e591be5deba699d@mail.gmail.com> Message-ID: <7641ddc60912162316m61801a87qb347470de3710e45@mail.gmail.com> On Sun, Dec 13, 2009 at 8:46 AM, Alfio Puglisi wrote: >> >> http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/ >> >> - please watch it and read the article before commenting. > > Wow. A video with a 6th grader and his dad, who say that UHI exists. And I > have to watch it, otherwise I'm not qualified to comment! You think I'm > going to take you seriously after this? ### So, you didn't watch it. -------------- > > I can play this game too: the following article: > http://www.realclimate.org/index.php/archives/2004/12/the-surface-temperature-record-and-the-urban-heat-island/ > > references two papers: one in Journal of Climate and one in Nature. Please > read them before commenting. ### Yeah, I read the post, and the Nature article. Interestingly, the embedded link in the post is broken. This tells me you didn't read the Nature "paper" (which in fact is a so called "brief communication"). Furthermore, the post and the linked articles do not address the issues raised by our 6th grader. If you read both of them, you would have noticed it. ----------------------- > > This is a valid concern, but observations show the opposite: the warming is > higher where there is no UHI effect to correct for, like in the Arctic. See > for example > http://scienceblogs.com/illconsidered/2006/02/warming-due-to-urban-heat-island.php > ### Alfio, the blog post you reference does not address the issue raised by our 6th grader. If you read both of them, you would have noticed it. OK, to summarize: 1) I pointed to a communication raising a specific technical issue with raw continental US climate record. 2) You responded by inundating me with irrelevant links (some of which you didn't even read yourself). 3) The posts you link to are generic brush-off that have been used since 2004 or 2006 in response to the UHI question. The 2004 realclimate post is using funny assumptions to analyze a corrupt database (how corrupt see http://www.surfacestations.org/ - the graph shows the fraction of weather stations which fail government guidelines regarding siting of weather stations). The 2006 scienceblogs post is based on an analysis of a corrupt database (for how corrupt, see for example http://wattsupwiththat.com/2009/12/13/frigid-folly-uhi-siting-issues-and-adjustments-in-antarctic-ghcn-data/ - a single thermometer is the basis of the Antarctic "warming" referenced, among others, in the post). Based on my reading of communications from both sides of this scientific controversy, and based on my technical understanding of the issues involved, my considered opinion is that both of your references are old, irrelevant garbage that doesn't address the issue I raised. 
4) There appears to be a pattern here and in our previous exchange of you responding to my technical questions by inundating me with poorly relevant links of low technical merit (and in contrast to you, I actually read both sides of the story). 5) You also allowed yourself to lecture me on the law of conservation of energy, insisting that it is relevant to the questions I raised, and thereby insinuating I might be insufficiently familiar with basic physics to be competent to judge the issues at hand. 6) Do not lecture me on the law of conservation of energy. 7) Do not waste my time. Rafal From stathisp at gmail.com Thu Dec 17 09:59:13 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 17 Dec 2009 20:59:13 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <806831.38063.qm@web36503.mail.mud.yahoo.com> References: <8736EC3A-E08A-45E9-9F15-DFEF1945E6DF@bellsouth.net> <806831.38063.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/17 Gordon Swobe : > Because > > 1) Programs are formal (syntactic) > > and because > > 2) Minds have mental contents (semantics) > > and because > > 3) Syntax is neither constitutive of nor sufficient for semantics > > It follows that > > 4) Programs are neither constitutive of nor sufficient for minds. > > If you think you see a logical problem then show it to me. The formal problem with the argument is that 4) is assumed in 3). If programs are syntactic and programs running on computers can have semantics, then syntax is sufficient for semantics. Moreover, if programs running on computers are syntactic then so are brains. A computer running a program is at bottom just a collection of physical parts interacting according to the laws of physics, and so is a brain. Without assuming the answer to begin with what reason is there to assume the matter jiggling around in a brain has semantics while that in a computer exhibiting similar intelligent behaviour does not? -- Stathis Papaioannou From alfio.puglisi at gmail.com Thu Dec 17 10:52:06 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 17 Dec 2009 11:52:06 +0100 Subject: [ExI] Living temperature dataset In-Reply-To: <7641ddc60912162316m61801a87qb347470de3710e45@mail.gmail.com> References: <4B240B05.7080106@libero.it> <4902d9990912121514h6bec6ce6m562ad1d550f5876a@mail.gmail.com> <7641ddc60912122042p74b0c549p591e5e0d734011b6@mail.gmail.com> <4902d9990912130546q788105b7u5e591be5deba699d@mail.gmail.com> <7641ddc60912162316m61801a87qb347470de3710e45@mail.gmail.com> Message-ID: <4902d9990912170252i45cef568racdfb6f4db5958d5@mail.gmail.com> On Thu, Dec 17, 2009 at 8:16 AM, Rafal Smigrodzki < rafal.smigrodzki at gmail.com> wrote: > On Sun, Dec 13, 2009 at 8:46 AM, Alfio Puglisi > wrote: > > >> > >> > http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/ > >> > >> - please watch it and read the article before commenting. > > > > Wow. A video with a 6th grader and his dad, who say that UHI exists. And > I > > have to watch it, otherwise I'm not qualified to comment! You think I'm > > going to take you seriously after this? > > ### So, you didn't watch it I did. The sentence after the "wow" resumes the content of the video with all the detail it deserves. 7) Do not waste my time. > This feeling is reciprocal. Alfio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbenzai at yahoo.com Thu Dec 17 13:15:55 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 17 Dec 2009 05:15:55 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <128589.74085.qm@web32001.mail.mud.yahoo.com> > From: Gordon Swobe declared: > It looks like you want to refute Searle's claim that > although a computer simulation of a brain is possible, such > a simulation will not have intentionality/semantics. It > won't on Searle's view have any more semantics than does a > computer simulation of anything have anything. A simulation > is, umm, a simulation. > > I once wrote a gaming application in C++ that contained an > imaginary character. Because the character interacted in > complex ways with the human player in spoken language (it > used voice recognition) I found it handy to create an object > called "brain" in my code to represent the character's > thought processes. Had I had the knowledge and the time, I > could have created a complete computer simulation of a real > brain. > > Assume I had done so. Did my character have understanding > of the words it manipulated? Did the program itself have > such understanding? In other words, did either the character > or the program overcome the symbol grounding problem? > > No and No and No. I merely created a computer simulation in > which an imaginary character with an imaginary brain > pretended to overcome the symbol grounding problem. I did > nothing more interesting than does a cartoonist who writes > cartoons for your local newspaper. Why do you say No No and No? It's "Yes", "Yes", and "What symbol grounding problem?" If your character had a brain, and it was a complete simulation of a biological brain, then how could it not have understanding? How could it fail to have every single functional property of a biological brain? This: "A simulation is, umm, a simulation." is the giveaway, I think. Correct me if I'm wrong, but it seems that you think there is some magical functional property of a physical object that a model of it, *no matter how detailed*, cannot possess? There's a name for this kind of thinking. Ben Zaiboc From gts_2000 at yahoo.com Thu Dec 17 14:10:22 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 17 Dec 2009 06:10:22 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <456927.82152.qm@web36502.mail.mud.yahoo.com> --- On Thu, 12/17/09, Stathis Papaioannou wrote: > > Because > > > > 1) Programs are formal (syntactic) > > > > and because > > > > 2) Minds have mental contents (semantics) > > > > and because > > > > 3) Syntax is neither constitutive of nor sufficient > for semantics > > > > It follows that > > > > 4) Programs are neither constitutive of nor sufficient > for minds. > > > > If you think you see a logical problem then show it to > me. > > The formal problem with the argument is that 4) is assumed > in 3). Premise 3 (P3) says nothing whatsoever about programs or minds. You argue.. > If programs are syntactic and programs running on computers > can have semantics, then syntax is sufficient for semantics. That's a valid argument but not necessarily a true one. You've simply put the conclusion you want to see (that programs can glean semantics from syntax) into the premises. In other words your argument is not about Searle begging the question. If programs are syntactic and can also glean semantics from syntax then Searle's premise 3 is simply false. 
You just need to how P3 is false for programs or for people. The thought experiment illustrates how P3 is true. The man in the room follows the rules of Chinese syntax, yet he has no idea what his words mean. -gts From gts_2000 at yahoo.com Thu Dec 17 15:38:41 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 17 Dec 2009 07:38:41 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <128589.74085.qm@web32001.mail.mud.yahoo.com> Message-ID: <78873.85088.qm@web36503.mail.mud.yahoo.com> --- On Thu, 12/17/09, Ben Zaiboc wrote: >> Assume I had done so. Did my character have >> understanding of the words it manipulated? Did the program >> itself have such understanding? In other words, did either the >> character or the program overcome the symbol grounding problem? >> >> No and No and No. I merely created a computer simulation in >> which an imaginary character with an imaginary brain >> pretended to overcome the symbol grounding problem. I >> did nothing more interesting than does a cartoonist who >> writes cartoons for your local newspaper. > It's "Yes", "Yes", and "What symbol grounding problem?" You'll understand the symbol grounding problem if and when you understand my last sentence, that I did nothing more interesting than does a cartoonist. > If your character had a brain, and it was a complete > simulation of a biological brain, then how could it not have > understanding? Because it's just a program and programs don't have semantics. > This: "A simulation is, umm, a simulation." is the > giveaway, I think.? The point is that computer simulations are just that: simulations. > Correct me if I'm wrong, but it seems that you think there > is some magical functional property of a physical object > that a model of it, *no matter how detailed*, cannot > possess? I don't claim that physical objects do anything magical. I do however claim that computer simulations of physical objects do not. -gts From p0stfuturist at yahoo.com Thu Dec 17 05:56:35 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 16 Dec 2009 21:56:35 -0800 (PST) Subject: [ExI] atheism Message-ID: <117734.6080.qm@web59915.mail.ac4.yahoo.com> > Have any of you read the Isaac Asimov story about a man who > is an avowed > atheist and upon his death learns there actually is a God > and an afterlife? > The man rails against God for the ills of earth life and > even declares that > he will dedicate his endless afterlife existance to finding > a way to defeat > him!? This opens possibilities in SF: 1) Theist dies, to find out all his atheist family & friends are in Hell; he is alone forevermore in Paradise with no companionship, nothing to do, so he dies yet again-- of a broken heart-- and goes to be with his kin & friends Down There. 2) An atheist is rewarded by going to Heaven (with plenty of atheist company there) because he is sincere, while his theist people burn in Hell for their insincere fake-faith, so he dies of a broken heart-- to be reunited with his people. Or... ("that's enough, get a life fer chrissakes") -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 17 17:22:08 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Dec 2009 12:22:08 -0500 Subject: [ExI] The symbol grounding problem in strong AI. 
In-Reply-To: <806831.38063.qm@web36503.mail.mud.yahoo.com> References: <806831.38063.qm@web36503.mail.mud.yahoo.com> Message-ID: On Dec 16, 2009, Gordon Swobe wrote: > you will need first to show me a flaw in Searle's formal argument. What formal argument? All Searle did was invent a very silly thought experiment. > > Programs are formal (syntactic) Ok > > and because Minds have mental contents (semantics) So just like Searle you assert that other minds have mental contents and then claim that assertion proves that other minds have mental contents. Provided of course the associated brain in question is made of meat and not silicon . Pretty silly don't you think. > Syntax is neither constitutive of Good God almighty, you think syntax is not even *constitutive* of mind!! > nor sufficient for semantics So just like Searle you assert that syntax is neither constitutive of nor sufficient for semantics and then claim that assertion proves that syntax is neither constitutive of nor sufficient for semantics. If A=B then A=B. Pretty silly don't you think. In Goedel's famous proof he found a way for a formal system to make statements about itself, and that tells me that the entire syntax/semantics divide that some philosophers who have never taken a high school biology course think is so fundamental is in reality an entirely manmade distinction with no clear boundary between the two. > Programs are neither constitutive of nor sufficient for minds. I can't emphasize enough that the above statement places you squarely in the anti Darwin camp because there is no way evolution could have produced consciousness. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 17 18:00:24 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Dec 2009 13:00:24 -0500 Subject: [ExI] Steorn back from the grave? In-Reply-To: <4B2947BB.3010405@satx.rr.com> References: <4B28F79B.7020307@satx.rr.com> <3559970E-0402-4811-9334-C11863D74D53@bellsouth.net> <4B2947BB.3010405@satx.rr.com> Message-ID: <59C7D3AB-5EBC-46D0-985C-13C97380E7C2@bellsouth.net> On Dec 16, 2009, at 3:48 PM, Damien Broderick wrote: >> They say it produces 3 times as much energy as it consumes, and they >> have the output connected up to the input, so I don't understand why it >> doesn't spin faster and faster until it explodes. > > Because, according to the video, it dissipates a lot of kinetic energy as heat. A sort of very slow, controlled explosion, if you like. They must be using some VERY cheap bearings! If it produces 3 times as mush energy as it uses and the output is connected to the input it should melt down into slag. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 17 17:39:51 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Dec 2009 12:39:51 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <323108.38906.qm@web36508.mail.mud.yahoo.com> References: <323108.38906.qm@web36508.mail.mud.yahoo.com> Message-ID: <746F4FB0-275F-44B9-9CAE-9AA77538EDEC@bellsouth.net> On Dec 16, 2009, Gordon Swobe wrote: > But did you understand that the "this artificial neuron" to which you referred exists only as a computer simulation? I.e., only as some lines of code, only as some zeros and ones, only some 'on' and 'offs', only as some stuff going on in RAM? 
There is a natural neuron in your head right now that exists only as a collection of atoms that move around by gaining or losing electrons. RAM works by gaining and losing electrons too. > And do you really hold the position that contrary to Searle's claim, this artificial neuron that I've described has consciousness? I don't think that one neuron, artificial or otherwise, has consciousness but apparently you do. You said that it's the internal state of the neuron that's important for consciousness not how it communicates to other neurons, even though that obviously is the only thing that can determine the large-scale behavior of the being. If you're right then one neuron would be sufficient for consciousness. I don't think you're right. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Thu Dec 17 18:07:23 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 17 Dec 2009 19:07:23 +0100 Subject: [ExI] atheism In-Reply-To: <89031.64690.qm@web32003.mail.mud.yahoo.com> References: <89031.64690.qm@web32003.mail.mud.yahoo.com> Message-ID: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> 2009/12/16 Ben Zaiboc : >> Now, all that is applicable, AFAIK, to the very concept of >> Yahweh, Allah, or >> the Holy Trinity; but not to Zeus - nor for that matter to >> Spiderman or the >> Great Gatsby. > > Sorry, I don't get it. > > Surely all these things are in the same category: products of human imagination? > > What various different people imagine these imaginary things' different properties are, is irrelevant. ?None of them are real. No, it's not. Because an imaginary - or rather, mythical - Jahv? is a contradiction in terms, based on the onthological argument. Moreover, somebody insists here that we do not have a 100% certainty that the Flying Spaghetti Monster does not exist. True. But the Flying Spaghetti Monster is not claimed to "exist out of the world", so it may well be flying around somewhere. An entity which exists but is not part of everything that exists (being the cause of, out of, and chronologically prior to all that) is a second contradiction in terms. -- Stefano Vaj From thespike at satx.rr.com Thu Dec 17 18:36:41 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 17 Dec 2009 12:36:41 -0600 Subject: [ExI] Steorn back from the grave? In-Reply-To: <59C7D3AB-5EBC-46D0-985C-13C97380E7C2@bellsouth.net> References: <4B28F79B.7020307@satx.rr.com> <3559970E-0402-4811-9334-C11863D74D53@bellsouth.net> <4B2947BB.3010405@satx.rr.com> <59C7D3AB-5EBC-46D0-985C-13C97380E7C2@bellsouth.net> Message-ID: <4B2A7A39.6000906@satx.rr.com> On 12/17/2009 12:00 PM, John Clark wrote: >>> They say it produces 3 times as much energy as it consumes, and they >>> have the output connected up to the input, so I don't understand why it >>> doesn't spin faster and faster until it explodes. >> >> Because, according to the video, it dissipates a lot of kinetic energy >> as heat. A sort of very slow, controlled explosion, if you like. > > They must be using some VERY cheap bearings! If it produces 3 times as > mush energy as it uses and the output is connected to the input it > should melt down into slag. No doubt you're right. The thing has to be bogus. The question is: what the hell are they up to? What's going on? 
My guess: they started with an apparent anomaly, got wildly excited, figured they were destined to be world-changers, got some research funding from investors, looked seriously and honestly into it for a couple of years, kept getting nothing repeatable, the lower ranks stayed on it because hey it's a job, the top guys like Sean perhaps remained delusionally committed (protect sunk costs etc, keep the faith), and so it continues until finally they have to do *something* visible to the public. My guess: by now it has become a deliberate scam even if it didn't start that way, or their pride stops them admitting they made a silly mistake, sorry folks, nothing to see here. Luckily there's at least no theological reward as with "faith" scams, except for the ideological driver of "the giant oil/nuclear/fusion research interests are covering this up" variety. But maybe there's some other, more subtle explanation, neither deliberate scam nor real effect. I can't think of one, though. Damien Broderick From jameschoate at austin.rr.com Thu Dec 17 18:55:57 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 17 Dec 2009 12:55:57 -0600 Subject: [ExI] atheism In-Reply-To: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> Message-ID: <20091217185557.LCBPY.40286.root@hrndva-web17-z01> ---- Stefano Vaj wrote: > But the Flying Spaghetti Monster is not claimed to "exist out of the > world", so it may well be flying around somewhere. An entity which > exists but is not part of everything that exists (being the cause of, > out of, and chronologically prior to all that) is a second > contradiction in terms. No, it's the definition if transcendence. It simply means that the concepts your using are insufficient to model some event or relationship, Godel comes into play here. You're fundamental error is in assuming your axioms are complete and provable, an impossibility. -- -- -- -- Venimus, Vidimus, Dolavimus James Choate jameschoate at austin.rr.com james.choate at twcable.com 512-657-1279 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From natasha at natasha.cc Thu Dec 17 22:42:01 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 17 Dec 2009 17:42:01 -0500 Subject: [ExI] Sick of Cyberspace? Message-ID: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> Are we totally locked into cybernetics for evolution? I thought this next era was to be about chemistry rather than machines. Does anyone have thoughts on how chemistry might be another venue for personal existence/communication/transportation, etc? (I don't mean through mind altering psychedelics.) Natasha From possiblepaths2050 at gmail.com Thu Dec 17 23:46:31 2009 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 17 Dec 2009 16:46:31 -0700 Subject: [ExI] atheism In-Reply-To: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> References: <89031.64690.qm@web32003.mail.mud.yahoo.com> <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> Message-ID: <2d6187670912171546x673c6900lcb0fb01ccfc7584f@mail.gmail.com> Stefano Vaj wrote: But the Flying Spaghetti Monster is not claimed to "exist out of the world", so it may well be flying around somewhere. An entity which exists but is not part of everything that exists (being the cause of, out of, and chronologically prior to all that) is a second contradiction in terms. 
>>>> Stefano, you obviously do not know your Flying Spaghetti Monster theology, because it is taught that the huge upswell in the number of pirates and pirate parties & conventions proves the existance of the Flying Spaghetti Monster! But I'm a skeptic regarding the Flying Spaghetti Monster because I just don't see a strong connection between pirates and divine Italian food (though I did once see several pirates eating at an Olive Garden restaurant...). John : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Thu Dec 17 23:53:46 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 10:53:46 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <456927.82152.qm@web36502.mail.mud.yahoo.com> References: <456927.82152.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/18 Gordon Swobe : >> If programs are syntactic and programs running on computers >> can have semantics, then syntax is sufficient for semantics. > > That's a valid argument but not necessarily a true one. You've simply put the conclusion you want to see (that programs can glean semantics from syntax) into the premises. And you and Searle have assumed the opposite, when it is the thing under dispute. > In other words your argument is not about Searle begging the question. If programs are syntactic and can also glean semantics from syntax then Searle's premise 3 is simply false. You just need to how P3 is false for programs or for people. It is false for people, since people are manifestly conscious. It is also false for computers if it shown that a computer can simulate the behaviour of a brain and simulating the behaviour of a brain gives rise to consciousness, as I have been arguing. > The thought experiment illustrates how P3 is true. The man in the room follows the rules of Chinese syntax, yet he has no idea what his words mean. To recap the CRA: You say the man in the room has no understanding. We say that neurons have no understanding either, but the system of neurons has understanding. You say but the man has no understanding even if he internalises all the other components of the CR. Presumably by this you mean that by internalising everything the man then *is* the system, but still lacks understanding. I say (because at this point the others are getting tired of arguing) that the neurons would still have no understanding if they had a rudimentary intelligence sufficient for them to know when it was time to fire. The intelligence of the system is superimposed on the intelligence (or lack of it) of its parts. You haven't said anything directly in answer to this. -- Stathis Papaioannou From gts_2000 at yahoo.com Fri Dec 18 00:09:41 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 17 Dec 2009 16:09:41 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <235730.52445.qm@web36503.mail.mud.yahoo.com> --- On Wed, 12/16/09, Stathis Papaioannou wrote: > What I have been considering is an artificial neuron. The > artificial neuron consists of (1) a computer, (2) a computer program > which simulates the chemical processes that take place in a > biological neuron, and (3) I/O devices which allow interaction with a > biological neuron. The I/O devices might include neurotransmitters, > chemoreceptors, electrodes to measure electrical potentials > or directly stimulate neurons, and so on. Let's go inside that neuron and look around. What do we see? 
I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account your brain runs billions of these mindless programs, and together they comprise the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness. It doesn't matter to me if some of those neurons exist in the periphery, as integral parts of sense perception. We want to know how minds happen. It seems to me that you can object by stating that each of the billions of programs really do have a mind, or that the larger program in which those programs exist only as modules has a mind, but then we've only rediscovered Searle's formal argument. So here we sit now inside one of your artificial neurons discussing the same subject that we've discussed in other messages: Searle's formal argument that programs are neither constitutive of nor sufficient for minds. -gts From gts_2000 at yahoo.com Fri Dec 18 00:27:07 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 17 Dec 2009 16:27:07 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <863424.11585.qm@web36504.mail.mud.yahoo.com> --- On Thu, 12/17/09, Stathis Papaioannou wrote: >> If programs are syntactic and programs running on >> computers can have semantics, then syntax is sufficient for >> semantics. > > That's a valid argument but not necessarily a true > one. You've simply put the conclusion you want to see (that > programs can glean semantics from syntax) into the > premises. > > And you and Searle have assumed the opposite, when it is > the thing under dispute. No, Searle only assumes exactly what he states he assumes: P1) Programs are formal (syntactic) [which is NOT to say they have no semantics or that they cannot cause or have minds] P2) Minds have mental contents (semantics) P3) Syntax is neither constitutive nor sufficient for semantics. That's all he assumes, Stathis. Nothing more, nothing less. To prove him wrong we need either show one of his premises as false or show that his conclusion (that programs don't cause minds) doesn't follow. >> In other words your argument is not about Searle > begging the question. If programs are syntactic and can also > glean semantics from syntax then Searle's premise 3 is > simply false. You just need to show how P3 is false for programs > or for people. > > It is false for people, since people are manifestly > conscious. P3 is about syntax and semantics in a program or in a conscious person or in a book. Doesn't matter. And on Searle's view not even a conscious person can get semantics from syntax. If you're serious about this subject then it's very important that you look closely at his words and not read anything into them. -gts From msd001 at gmail.com Fri Dec 18 00:52:04 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 17 Dec 2009 19:52:04 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> Message-ID: <62c14240912171652k10d94085j1e100c0b409526c4@mail.gmail.com> On Thu, Dec 17, 2009 at 5:42 PM, wrote: > Are we totally locked into cybernetics for evolution? I thought this next > era was to be about chemistry rather than machines. 
> > Does anyone have thoughts on how chemistry might be another venue for > personal existence/communication/transportation, etc? (I don't mean through > mind altering psychedelics.) > You mean like prescription pharmaceuticals for cognitive enhancement? Good luck. The powers that be don't profit from enlightened masses. It's easier to herd sheep. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Fri Dec 18 01:22:45 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Thu, 17 Dec 2009 20:22:45 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <62c14240912171652k10d94085j1e100c0b409526c4@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <62c14240912171652k10d94085j1e100c0b409526c4@mail.gmail.com> Message-ID: <20091217202245.mwrph070mcos0ocg@webmail.natasha.cc> No, not prescription pharmaceuticals. And this has little to do with herding sheep. Quoting Mike Dougherty : > On Thu, Dec 17, 2009 at 5:42 PM, wrote: > >> Are we totally locked into cybernetics for evolution? I thought this next >> era was to be about chemistry rather than machines. >> >> Does anyone have thoughts on how chemistry might be another venue for >> personal existence/communication/transportation, etc? (I don't mean through >> mind altering psychedelics.) >> > > You mean like prescription pharmaceuticals for cognitive enhancement? > > Good luck. The powers that be don't profit from enlightened masses. It's > easier to herd sheep. > From gts_2000 at yahoo.com Fri Dec 18 01:04:02 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 17 Dec 2009 17:04:02 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <144147.66676.qm@web36507.mail.mud.yahoo.com> --- On Thu, 12/17/09, Stathis Papaioannou wrote: > To recap the CRA: > > You say the man in the room has no understanding. No understanding of Chinese from following Chinese syntax. Right. And yet he still passes the Turing test in Chinese. > We say that neurons have no understanding either but the > system of neurons has understanding. I don't have any reason to disagree with that, but frankly I don't know how understanding works. I only know (or find myself persuaded by Searle's argument) that understanding doesn't happen as a consequence of the brain running formal programs. The brain does it by some other means. > You say but the man has no understanding even if he > internalises all the other components of the CR. Presumably > by this you mean that by internalising everything the man then *is* > the system, but still lacks understanding. Yes. > I say (because at this point the others are getting tired > of arguing)... I'm glad you find this subject interesting. But for you, I would be arguing with the philosophers over on that other list. :) > ... that the neurons would still have no understanding if they > had a rudimentary intelligence sufficient for them to know when > it was time to fire. I can agree with that, but perhaps not in the way you mean. As I've written to John, I consider even my watch to have intelligence. But does it have intentionality/semantics/understanding? No sir. My watch tells me the time intelligently but it doesn't know the time. If it had intentionality, as in strong AI, it would not only tell the time; it would also know the time. > The intelligence of the system is superimposed on > the intelligence (or lack of it) of its parts. See above. 
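To make the distinction concrete, here is a toy rule-follower in the spirit of the Chinese room. The rule table is invented purely for illustration. It produces sensible-looking answers by matching symbols against rules, and that is all it does; nothing in it knows what any of the symbols mean.

# A miniature rule book: pure symbol matching, no semantics anywhere.
# The table is made up for illustration; a real program would have vastly
# more rules, but the principle does not change.
import time

RULE_BOOK = {
    "what time is it?": "it is {time}",
    "how are you?": "i am well, thank you",
    "do you understand me?": "of course i understand you",
}

def rule_follower(squiggle):
    """Look the incoming squiggle up in the rule book and emit whatever
    squoggle the matching rule dictates. No meanings involved."""
    rule = RULE_BOOK.get(squiggle.lower().strip(), "please rephrase that")
    return rule.format(time=time.strftime("%H:%M"))

print(rule_follower("What time is it?"))        # tells the time intelligently
print(rule_follower("Do you understand me?"))   # says yes; nothing inside knows

The program above behaves intelligently in my weak sense, just as my watch does, and it plainly has no intentionality. The question of strong AI is whether piling up more and better rules of this kind ever changes that.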
Let's first distinguish intelligence from semantics/intentionality, because until we do we're not talking the same language. It's the difference between weak AI and strong AI. > You haven't said anything directly in answer to this. I hope we're getting closer now to the crux of the matter. -gts From stathisp at gmail.com Fri Dec 18 03:32:17 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 14:32:17 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <235730.52445.qm@web36503.mail.mud.yahoo.com> References: <235730.52445.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/18 Gordon Swobe : > --- On Wed, 12/16/09, Stathis Papaioannou wrote: > >> What I have been considering is an artificial neuron. The >> artificial neuron consists of (1) a computer, (2) a computer program >> which simulates the chemical processes that take place in a >> biological neuron, and (3) I/O devices which allow interaction with a >> biological neuron. The I/O devices might include neurotransmitters, >> chemoreceptors, electrodes to measure electrical potentials >> or directly stimulate neurons, and so on. > > Let's go inside that neuron and look around. What do we see? > > I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account your brain runs billions of these mindless programs, and together they comprise the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness. The carbon, hydrogen, oxygen, nitrogen etc. atoms in the brain don't have either consciousness or intelligence, but when they jostle each other according to the laws of physics, intelligence and consciousness emerge. What sort of explanation as to how this happens (over and above the observation that it does happen) could possibly satisfy you? This is what Chalmers calls the "hard problem" of consciousness, in contrast to the nuts-and-bolts "easy problem" that neuroscience attempts to answer. I prefer to avoid it as a pseudo-problem. > It doesn't matter to me if some of those neurons exist in the periphery, as integral parts of sense perception. We want to know how minds happen. > > It seems to me that you can object by stating that each of the billions of programs really do have a mind, or that the larger program in which those programs exist only as modules has a mind, but then we've only rediscovered Searle's formal argument. > > So here we sit now inside one of your artificial neurons discussing the same subject that we've discussed in other messages: Searle's formal argument that programs are neither constitutive of nor sufficient for minds. Except that there is no reason to believe this unless you assume it to begin with. You may as well assert that atoms are neither constitutive of nor sufficient for minds. But given that (a) atoms can give rise to mind, (b) the behaviour of atoms can be modelled by a computer program, and (c) replacing the atoms with the model of the atoms gives rise to the same mind, it follows that computer programs can give rise to minds. You don't agree with (c) but you haven't answered what you think would happen if part of your brain was replaced with a network of artificial neurons controlled by a computer model. 
Either you would say that everything feels the same or you would say that something feels different. Have a guess, which would it would be? -- Stathis Papaioannou From p0stfuturist at yahoo.com Fri Dec 18 03:04:20 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 17 Dec 2009 19:04:20 -0800 (PST) Subject: [ExI] ultimate horror SF Message-ID: <993245.99516.qm@web59913.mail.ac4.yahoo.com> There was a SF story, based on fact, wherein the protagonist was slated to go to Hell for posting too many off-topic messages at extropy chat. At the last nanosecond before departure he was granted an appeal, being sent to a panel of judges in Purgatory. After centuries of deliberating his case he was acquitted and sent back to Earth. But he kept?sending off-topic messages to extropy, so?after?apprehension he was immediately sent to the very lowest depths of Hell, to be?bitten?by starving?arachnids for all eternity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Dec 18 06:51:20 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 17:51:20 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <863424.11585.qm@web36504.mail.mud.yahoo.com> References: <863424.11585.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/18 Gordon Swobe : > No, Searle only assumes exactly what he states he assumes: > > P1) Programs are formal (syntactic) [which is NOT to say they have no semantics or that they cannot cause or have minds] > P2) Minds have mental contents (semantics) > P3) Syntax is neither constitutive nor sufficient for semantics. > > That's all he assumes, Stathis. Nothing more, nothing less. > > To prove him wrong we need either show one of his premises as false or show that his conclusion (that programs don't cause minds) doesn't follow. > >>> In other words your argument is not about Searle >> begging the question. If programs are syntactic and can also >> glean semantics from syntax then Searle's premise 3 is >> simply false. You just need to show how P3 is false for programs >> or for people. >> >> It is false for people, since people are manifestly >> conscious. > > P3 is about syntax and semantics in a program or in a conscious person or in a book. Doesn't matter. And on Searle's view not even a conscious person can get semantics from syntax. > > If you're serious about this subject then it's very important that you look closely at his words and not read anything into them. If programs are syntactic and programs can have semantics then either programs have something in addition to syntax (an immaterial soul?) or syntax can give rise to semantics as an emergent property. Analogously: If brains consist of dumb matter and brains can have consciousness then either brains have something in addition to dumb matter (an immaterial soul?) or dumb matter can give rise to consciousness as an emergent property. The dualists don't believe that dumb matter can give rise to consciousness and Searle doesn't believe that programs can give rise to consciousness. But these positions are just prejudice. Moreover, it has been shown that if dumb matter can give rise to consciousness and the physics of dumb matter is computable, then computation can give rise to consciousness. On the other hand, if an immaterial soul is responsible for our consciousness or the physics of the matter in the brain is not computable then computation cannot give rise to consciousness. 
Searle does not believe in a soul and accepts that the physics of the brain is computable. Roger Penrose does not believe in a soul but believes that the physics of the brain is not computable. Both Searle and Penrose deny that computers can think, but Searle is inconsistent, while Penrose is at least consistent, though probably wrong. -- Stathis Papaioannou From stathisp at gmail.com Fri Dec 18 07:05:24 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 18:05:24 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <144147.66676.qm@web36507.mail.mud.yahoo.com> References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/18 Gordon Swobe : > --- On Thu, 12/17/09, Stathis Papaioannou wrote: > >> To recap the CRA: >> >> You say the man in the room has no understanding. > > No understanding of Chinese from following Chinese syntax. Right. And yet he still passes the Turing test in Chinese. > >> We say that neurons have no understanding either but the >> system of neurons has understanding. > > I don't have any reason to disagree with that, but frankly I don't know how understanding works. I only know (or find myself persuaded by Searle's argument) that understanding doesn't happen as a consequence of the brain running formal programs. The brain does it by some other means. Can you say what these other means might possibly be? For example, could the understanding derive from some physical structure such as the carbon-nitrogen bonds of amino acids, or from some process such as the passage of water through cell membranes? >> You say but the man has no understanding even if he >> internalises all the other components of the CR. Presumably >> by this you mean that by internalising everything the man then *is* >> the system, but still lacks understanding. > > Yes. > >> I say (because at this point the others are getting tired >> of arguing)... > > I'm glad you find this subject interesting. But for you, I would be arguing with the philosophers over on that other list. :) > >> ... that the neurons would still have no understanding if they >> had a rudimentary intelligence sufficient for them to know when >> it was time to fire. > > I can agree with that, but perhaps not in the way you mean. > > As I've written to John, I consider even my watch to have intelligence. But does it have intentionality/semantics/understanding? No sir. My watch tells me the time intelligently but it doesn't know the time. If it had intentionality, as in strong AI, it would not only tell the time; it would also know the time. So where does the understanding of the brain come from if the neurons are stupid? >> The intelligence of the system is superimposed on >> the intelligence (or lack of it) of its parts. > > See above. Let's first distinguish intelligence from semantics/intentionality, because until we do we're not talking the same language. It's the difference between weak AI and strong AI. > >> You haven't said anything directly in answer to this. > > I hope we're getting closer now to the crux of the matter. You seem to accept that dumb matter which itself does not have understanding can give rise to understanding, but not that an appropriately programmed computer can pull off the same miracle. Why not? -- Stathis Papaioannou From eugen at leitl.org Fri Dec 18 07:43:47 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Dec 2009 08:43:47 +0100 Subject: [ExI] Sick of Cyberspace? 
In-Reply-To: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> Message-ID: <20091218074347.GM17686@leitl.org> On Thu, Dec 17, 2009 at 05:42:01PM -0500, natasha at natasha.cc wrote: > Are we totally locked into cybernetics for evolution? I thought this > next era was to be about chemistry rather than machines. The both are not necessarily disparate. Enzymes are molecular machines doing chemistry, and machine-phase is basically numerically controlled chemistry. Or nanoscale 3d rapid prototyping. It is really just about human classification, there is no fundamental boundary where chemistry stops and physics starts. > Does anyone have thoughts on how chemistry might be another venue for > personal existence/communication/transportation, etc? (I don't mean > through mind altering psychedelics.) Nothing wrong with these. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Fri Dec 18 09:58:34 2009 From: pharos at gmail.com (BillK) Date: Fri, 18 Dec 2009 09:58:34 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: On 12/18/09, Stathis Papaioannou wrote: > You seem to accept that dumb matter which itself does not have > understanding can give rise to understanding, but not that an > appropriately programmed computer can pull off the same miracle. Why > not? > > Because it hasn't been done yet. And I think that our present computing systems will probably have great difficulty in achieving it. (Though they might achieve reasonable simulations of it). Once systems with thousands of processors and multi-streamed logic become common, we're in a different ball-game. But it will take new programming techniques and new software to fully utilize these systems. This problem is one of those that will be overtaken by circumstances. Once it has been done, everyone will say it is obvious and a non-problem. BillK From eugen at leitl.org Fri Dec 18 10:15:22 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Dec 2009 11:15:22 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: <20091218101522.GP17686@leitl.org> On Fri, Dec 18, 2009 at 09:58:34AM +0000, BillK wrote: > Because it hasn't been done yet. Nothing is done until it's done. > And I think that our present computing systems will probably have Our present capabilities are quite impressive. A 300 or 400 mm wafer is a lot of real estate. Especially if you approach ~nm device geometries. What you do with that potential is of course up to you. > great difficulty in achieving it. (Though they might achieve > reasonable simulations of it). So chess computers don't play chess, they only simulate playing chess. > Once systems with thousands of processors and multi-streamed logic > become common, we're in a different ball-game. But it will take new They are common. They are called clusters. We're in meganode country now. > programming techniques and new software to fully utilize these > systems. You don't need anything beyond MPI, a lot of cores on a 3d lattice signalling mesh and a decent morphogenetic darwinian system. I could have said that 20 years ago. Wait, I did. 
> This problem is one of those that will be overtaken by circumstances. > Once it has been done, everyone will say it is obvious and a > non-problem. Positive feedback enhancement runaway by *real* personal computers is obvious and a non-problem? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Dec 18 10:29:29 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 21:29:29 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/18 BillK : > On 12/18/09, Stathis Papaioannou wrote: > >> You seem to accept that dumb matter which itself does not have >> ?understanding can give rise to understanding, but not that an >> ?appropriately programmed computer can pull off the same miracle. Why >> ?not? >> >> > > > Because it hasn't been done yet. Sure, but Gordon and Searle believe that computer consciousness is *impossible*, not merely difficult. -- Stathis Papaioannou From pharos at gmail.com Fri Dec 18 10:41:37 2009 From: pharos at gmail.com (BillK) Date: Fri, 18 Dec 2009 10:41:37 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091218101522.GP17686@leitl.org> References: <144147.66676.qm@web36507.mail.mud.yahoo.com> <20091218101522.GP17686@leitl.org> Message-ID: On 12/18/09, Eugen Leitl wrote: > They are common. They are called clusters. We're in meganode country > now. > > You don't need anything beyond MPI, a lot of cores on a 3d lattice > signalling mesh and a decent morphogenetic darwinian system. I could > have said that 20 years ago. Wait, I did. > > Yes, but..... See: December 2, 2009 Intel hopes 48-core chip will solve new challenges "The machine will be capable of understanding the world around them much as humans do," Rattner said. "They will see and hear and probably speak and do a number of other things that resemble human-like capabilities, and will demand as a result very (powerful) computing capability." The Tera-scale project doesn't fundamentally address one of the big challenges in today's computing industry, though: getting multicore chips to run today's computing jobs that are often designed to run as a single thread of instructions rather than independent tasks running in parallel. ------------------ Tera-scale Computing Research Program > > Positive feedback enhancement runaway by *real* personal computers > is obvious and a non-problem? > Different problem. *That* problem happens after computers achieve self awareness and the ability to self=improve. BillK From eugen at leitl.org Fri Dec 18 10:46:17 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Dec 2009 11:46:17 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: <20091218104617.GQ17686@leitl.org> On Fri, Dec 18, 2009 at 09:29:29PM +1100, Stathis Papaioannou wrote: > > Because it hasn't been done yet. > > Sure, but Gordon and Searle believe that computer consciousness is > *impossible*, not merely difficult. Ignore them. They can't even define "consciousness". A tale full of sound and fury. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Dec 18 10:58:36 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 18 Dec 2009 21:58:36 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091218104617.GQ17686@leitl.org> References: <144147.66676.qm@web36507.mail.mud.yahoo.com> <20091218104617.GQ17686@leitl.org> Message-ID: 2009/12/18 Eugen Leitl : > On Fri, Dec 18, 2009 at 09:29:29PM +1100, Stathis Papaioannou wrote: > >> > Because it hasn't been done yet. >> >> Sure, but Gordon and Searle believe that computer consciousness is >> *impossible*, not merely difficult. > > Ignore them. They can't even define "consciousness". A tale full of sound and fury. I'm afraid that arguing with people when most others would have long ago given up is a character flaw I have. -- Stathis Papaioannou From eugen at leitl.org Fri Dec 18 10:59:46 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Dec 2009 11:59:46 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <144147.66676.qm@web36507.mail.mud.yahoo.com> <20091218101522.GP17686@leitl.org> Message-ID: <20091218105946.GR17686@leitl.org> On Fri, Dec 18, 2009 at 10:41:37AM +0000, BillK wrote: > Yes, but..... > See: Seen. Tilera actually ships better product. > > December 2, 2009 > Intel hopes 48-core chip will solve new challenges > > "The machine will be capable of understanding the world around them > much as humans do," Rattner said. "They will see and hear and probably > speak and do a number of other things that resemble human-like > capabilities, and will demand as a result very (powerful) computing > capability." To say with JK or P&T: BULLLLLSHIT. It's not a problem of a particular chipset. Architecture, yes, but with enough scale you can kill everything with COTS stuff or just ASICs. The question is how much money you want to spend. Electricity is quite expensive. > The Tera-scale project doesn't fundamentally address one of the big > challenges in today's computing industry, though: getting multicore > chips to run today's computing jobs that are often designed to run as Can't be done. Multithreaded is a dead end. Shared-nothing asynchronous message passing is the only thing that this universe allows. It's not just a good idea, it's The Law. > a single thread of instructions rather than independent tasks running > in parallel. MPI goes back three decades. Most scientific code today is exactly "independent tasks running in parallel". In order to make TOP500 you have to be able to. > ------------------ > > Tera-scale Computing Research Program > > > > > > Positive feedback enhancement runaway by *real* personal computers > > is obvious and a non-problem? > > > > > Different problem. *That* problem happens after computers achieve > self awareness and the ability to self=improve. It isn't AI if it isn't human-equivalent across the board. "Self awareness" is meaningless, every embedded today could host a self model, and a meta self model. Insects do it. Nothing to it. You don't have AI until the thing can do everything you do. Tomorrow, it will do better. And half a day after better still. Even today's paltry analog/digital hybrids can do a day's worth of dynamics in a wall clock second. 
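For the record, shared-nothing is about as simple as parallel code gets. Toy mpi4py sketch below (a 1d ring instead of a 3d lattice, all names and numbers made up; save as e.g. ring.py and run under mpirun -n 4 python ring.py):

# Shared-nothing asynchronous message passing: each rank owns its state,
# nothing is shared, neighbours exchange boundary data every step.
# 1d ring for brevity; a 3d lattice is the same idea with six neighbours.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

left = (rank - 1) % size
right = (rank + 1) % size
state = float(rank)          # strictly private state, never shared

for step in range(10):
    # post non-blocking sends to both neighbours...
    reqs = [comm.isend(state, dest=left, tag=0),
            comm.isend(state, dest=right, tag=0)]
    # ...receive whatever the neighbours sent this step...
    from_left = comm.recv(source=left, tag=0)
    from_right = comm.recv(source=right, tag=0)
    for r in reqs:
        r.wait()
    # ...then update purely local state (trivial relaxation rule here).
    state = (state + from_left + from_right) / 3.0

print("rank", rank, "settled at", state)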
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Fri Dec 18 11:19:37 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 18 Dec 2009 12:19:37 +0100 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> Message-ID: <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> 2009/12/17 : > Are we totally locked into cybernetics for evolution? I thought this next > era was to be about chemistry rather than machines. I come myself from "wet transhumanism" (bio/cogno), and while I got in touch with the movement exactly out of curiosity to learn more about the "hard", "cyber/cyborg" side of things, I am persuased the next era is still about chemistry, and, that when it will stops being there will be little difference between the two. In other words, if we are becoming machines, machines are becoming "chemical" and "organic" at an even faster pace (carbon rather than steel and silicon, biochips, nano...). -- Stefano Vaj From stefano.vaj at gmail.com Fri Dec 18 11:25:41 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 18 Dec 2009 12:25:41 +0100 Subject: [ExI] atheism In-Reply-To: <20091217185557.LCBPY.40286.root@hrndva-web17-z01> References: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> <20091217185557.LCBPY.40286.root@hrndva-web17-z01> Message-ID: <580930c20912180325m60c74785he7058963d0560a3e@mail.gmail.com> 2009/12/17 : > No, it's the definition if transcendence. It simply means that the concepts your using are insufficient to model some event or relationship Sorry, I cannot parse the sentence. If what? My using of what? -- Stefano Vaj From bbenzai at yahoo.com Fri Dec 18 11:09:28 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 18 Dec 2009 03:09:28 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <297005.31845.qm@web113616.mail.gq1.yahoo.com> Gordon Swobe wrote: > --- On Thu, 12/17/09, Ben Zaiboc > wrote: ... > > It's "Yes", "Yes", and "What symbol grounding > problem?" > > You'll understand the symbol grounding problem if and when > you understand my last sentence, that I did nothing more > interesting than does a cartoonist. LOL. I didn't mean that I don't understand what the 'symbol grounding problem' is, I meant that there is no such problem. This seems to be a pretty fundamental sticking point, so I'll explain my thinking. We do not know what 'reality' is. There is nothing in our brains that can directly comprehend reality (if that even means anything). What we do is collect sensory data via our eyes, ears, etc., and sift it, sort it, combine it, distort it with preconceptions and past memories, and create 'sensory maps' which are then used to feed the more abstract parts of our minds, to create 'the World according to You'. We use this constantly changing internal 'world representation' to make models about our environment, other people, imaginary things, etc., and most of the time it works well enough that we habitually think of this as 'reality'. *But it's not*. The 'real reality' is forever unknowable. OK, so given that, what does 'symbol grounding' mean? It means that the meaning of a mental symbol is built up from internal representations that derive from this 'World according to You'. 
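In toy form (every feature name and number below is invented, just to show the shape of the idea), a symbol's 'meaning' is nothing over and above a summary of the sensory episodes it was learned from:

# Toy symbol grounding: a symbol's "meaning" is just a summary of the
# sensory episodes in which it occurred. Features and values are invented
# for illustration only.
from collections import defaultdict

episodes = defaultdict(list)     # symbol -> list of sensory feature vectors

def experience(symbol, sensory_vector):
    """Store one sensory episode under the symbol that accompanied it."""
    episodes[symbol].append(sensory_vector)

def meaning(symbol):
    """The grounded 'meaning': an average over the stored episodes."""
    vecs = episodes[symbol]
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

# features: [height in metres, has a back, you can sit on it]
experience("chair", [0.45, 1.0, 1.0])
experience("chair", [0.50, 1.0, 1.0])
experience("stool", [0.60, 0.0, 1.0])

print(meaning("chair"))          # -> roughly [0.475, 1.0, 1.0]

A real brain grounds its symbols in millions of such episodes, across many senses at once, but nothing extra has to be added to the picture.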
There's nothing mysterious or difficult about it, and it doesn't really even deserve the description 'problem'. There is no problem. There is just another set of relationships in the mind between memories, sensory data, and the models and abstractions we build from them. The 'chairness' of a chair has absolutely nothing to do with some platonic realm that we need to have some mystical access to. It's something we create in our own minds from a lot of complex and not currently understood, but inhererently understandable and mechanistic processes in our brains. The symbol of 'sitting' is grounded in memories and sensory data from thousands of experiences of putting our bodies in a particular set of positions and experiencing a variety of sensations that result. That's all there is to it. Nothing difficult at all, even though it is very complex. It's certainly not *mysterious*. > > > If your character had a brain, and it was a complete > > simulation of a biological brain, then how could it > not have > > understanding? > > Because it's just a program and programs don't have > semantics. You keep saying this, but it's not true. Complex enough programs *can* have semantics. This should be evident from my description of internal world-building above. The brain isn't doing anything that a (big) set of interacting data-processing modules in a program, (or more likely a large set of interacting programs) can't also do. Semantics isn't something that can exist outside of a mind. Meaning is an internally-generated thing. > > > This: "A simulation is, umm, a simulation." is the > > giveaway, I think.? > > The point is that computer simulations are just that: > simulations. There seems to be an implication that a simulation is somehow 'inferior' to the 'real thing'. I remember simulating my father's method of tying shoelaces when I was small. I'm sure that my shoelace-tying now is just as good as his ever was. I've heard the idea that a computer model of a thunderstorm will never be wet. But that's not actually true. It's a confusion between levels. A computer simulation of a thunderstorm, if accurate enough, will contain the same sensory effects on a person who is simulated using the same methods. In other words, it's wet on it's own level. Anything else would be absurd. > > > Correct me if I'm wrong, but it seems that you think > there > > is some magical functional property of a physical > object > > that a model of it, *no matter how detailed*, cannot > > possess? > > I don't claim that physical objects do anything magical. I > do however claim that computer simulations of physical > objects do not. > Of course not. They don't need to, as physical objects don't. Simulations of physical processes that replicate every functional property of them will necessarily produce every behaviour of the original processes. Whether or not we can accurately create such simulations is another matter, but that's just a problem of getting better at it, not a fundamental theoretical roadblock. Ben Zaiboc From eugen at leitl.org Fri Dec 18 11:46:41 2009 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Dec 2009 12:46:41 +0100 Subject: [ExI] Sick of Cyberspace? 
In-Reply-To: <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> Message-ID: <20091218114641.GT17686@leitl.org> On Fri, Dec 18, 2009 at 12:19:37PM +0100, Stefano Vaj wrote: > I come myself from "wet transhumanism" (bio/cogno), and while I got in I figured out in vivo patching wasn't going to be feasible within natural lifetimes when I was around 17. From what has gone so far (almost 30 years) it looks like I was correct. Another 30-40 years and I'll be distinctly past caring. Very little left to patch, if any. > touch with the movement exactly out of curiosity to learn more about > the "hard", "cyber/cyborg" side of things, I am persuaded the next era Cyborg belongs in the in vivo patching category. Doesn't work either, at least where implants for life extension and capabilities amplification are concerned. Wearable stuff is fine. Implanted stuff is no good. You'll notice we're not even in the decent wearable category. I would have bet good money a decade ago we would have normal people using HMDs and HUDs out in the streets by now. > is still about chemistry, and that when it stops being so, there > will be little difference between the two. When we're talking about convergence, it's mostly convergence towards the nanoscale. The dry/stiff versus solvated/floppy isn't going to converge at all. There doesn't seem to be a lot of need for volatiles, apart from cooling and power supply maybe. > In other words, if we are becoming machines, machines are becoming > "chemical" and "organic" at an even faster pace (carbon rather than > steel and silicon, biochips, nano...). Organic is one thing, biology another. It's a safe bet there will be zero proteins, DNA, lipid bilayers or water in the result after convergence. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From gts_2000 at yahoo.com Fri Dec 18 12:17:58 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 04:17:58 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <873042.45735.qm@web36506.mail.mud.yahoo.com> --- On Fri, 12/18/09, Stathis Papaioannou wrote: > You seem to accept that dumb matter which itself does not > have understanding can give rise to understanding, but not > that an appropriately programmed computer can pull off the > same miracle. Why not? Biological brains do something we don't yet understand. Call it X. Whatever X may be, it causes the brain to have the capacity for intentionality. We don't yet know the details of X but if we cannot refute Searle then we must say this about it: X != the running of formal syntactical programs. X = some biological process that takes place in the brain in addition to, or instead of, running programs. By the way, ignore those who say we can't define consciousness. If it has subjective understanding of anything whatsoever -- in the common parlance, if it can hold anything whatsoever in mind -- then it has consciousness. I prefer the word intentionality for our purposes here, defined roughly as the holding of anything whatsoever in mind.
I would use the word intentionality in place of the more nebulous word consciousness more often except that this use of the word makes sense mainly to philosophers (one can confuse it easily with the ordinary meaning of intentionality, which has to do with goal-oriented thinking). Another sign of consciousness: things that have it can overcome the symbol grounding problem. We find all these things in any good philosophical definition of consciousness: subjective understanding, subjective experience, semantics, intentionality, the capacity to overcome the symbol grounding problem. If a thing has any one of them then it has the rest of them and it has consciousness. Someone mentioned computer chess programs. As I have it, chess programs have intelligence but not intentionality. They play chess, and they do it intelligently, but they don't *know* how to play chess. They have unconscious machine intelligence and nothing more. A chess application with strong AI would have intentionality. Not only would it play chess well, it would also have chess strategy consciously "in mind" just as human players do. Because the problem of strong AI so defined seems intractable (in part because of Searle's work, but also because even AGI seems almost impossible, which needn't even be strong) many people have simply forgotten the problem of strong AI, or swept it under the rug or otherwise just scoffed and gone into denial. It seems we have some of those deniers right here on this list, the last place in the world that one should expect to find them. -gts From gts_2000 at yahoo.com Fri Dec 18 13:12:14 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 05:12:14 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <297005.31845.qm@web113616.mail.gq1.yahoo.com> Message-ID: <432371.73935.qm@web36508.mail.mud.yahoo.com> --- On Fri, 12/18/09, Ben Zaiboc wrote: > We do not know what 'reality' is. There is nothing in > our brains that can directly comprehend reality (if that > even means anything). What we do is collect sensory > data via our eyes, ears, etc., and sift it, sort it, combine > it, distort it with preconceptions and past memories, and > create 'sensory maps' which are then used to feed the more > abstract parts of our minds, to create 'the World according > to You'. Ok. > We use this constantly changing internal 'world > representation' to make models about our environment, other > people, imaginary things, etc., and most of the time it > works well enough that we habitually think of this as > 'reality'. *But it's not*. The 'real reality' is > forever unknowable. Ok, if you say so. > OK, so given that, what does 'symbol grounding' mean? > It means that the meaning of a mental symbol is built up > from internal representations that derive from this 'World > according to You'. There's nothing mysterious or > difficult about it, and it doesn't really even deserve the > description 'problem'. It's a problem for simulations of people, Ben. Not a problem for real people. > There seems to be an implication that a simulation is > somehow 'inferior' to the 'real thing'. > > I remember simulating my father's method of tying shoelaces > when I was small. I'm sure that my shoelace-tying now > is just as good as his ever was. You didn't simulate your father. You imitated him. If you took a video of your father tying his shoelaces and watched that video, you would watch a simulation. Is that really your father tying his shoelaces in the video, Ben?
Or it just pixels on the screen? I.e., just a simulation? And if you ever watched a video of your father taken while he read and understood a newspaper, you watched a simulation of your father overcoming the symbol grounding problem. You watched a cartoon. Perhaps you confused the cartoon with reality, and thought you saw your real father understanding something, but in that case you weren't paying attention. -gts From stathisp at gmail.com Fri Dec 18 13:48:06 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 00:48:06 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <873042.45735.qm@web36506.mail.mud.yahoo.com> References: <873042.45735.qm@web36506.mail.mud.yahoo.com> Message-ID: 2009/12/18 Gordon Swobe : >> You seem to accept that dumb matter which itself does not >> have understanding can give rise to understanding, but not >> that an appropriately programmed computer can pull off the >> same miracle. Why not? > > Biological brains do something we don't yet understand. Call it X. Whatever X may be, it causes the brain to have the capacity for intentionality. We don't yet know the details of X but if we cannot refute Searle then we must say this about it: > > X != the running of formal syntactical programs. > > X = some biological process that takes place in the brain in addition to, or instead of, running programs. The level of description which you call a computer program is, in the final analysis, just a set of rules to help you figure out exactly how you should arrange a collection of matter so that it exhibits a desired behaviour, with no separate causal role of its own. That you can describe the chemical reactions in the brain algorithmically should not detract from the brain's consciousness, so why should an algorithmic description of a computer in action detract from the computer's consciousness? -- Stathis Papaioannou From painlord2k at libero.it Fri Dec 18 14:35:56 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 18 Dec 2009 15:35:56 +0100 Subject: [ExI] atheism In-Reply-To: <580930c20912180325m60c74785he7058963d0560a3e@mail.gmail.com> References: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> <20091217185557.LCBPY.40286.root@hrndva-web17-z01> <580930c20912180325m60c74785he7058963d0560a3e@mail.gmail.com> Message-ID: <4B2B934C.2020901@libero.it> Il 18/12/2009 12.25, Stefano Vaj ha scritto: > 2009/12/17: >> No, it's the definition if transcendence. It simply means that the concepts your using are insufficient to model some event or relationship > > Sorry, I cannot parse the sentence. If what? My using of what? I bet "No, it's the definition if transcendence." is "No, it's the definition of transcendence." The "i" and the "o" are side by side on the keyboard. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. 
Controllato da AVG - www.avg.com Versione: 9.0.716 / Database dei virus: 270.14.113/2573 - Data di rilascio: 12/18/09 08:35:00 From stefano.vaj at gmail.com Fri Dec 18 15:19:34 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 18 Dec 2009 16:19:34 +0100 Subject: [ExI] atheism In-Reply-To: <4B2B934C.2020901@libero.it> References: <580930c20912171007v4cb8cec5rdd39b16da79feb59@mail.gmail.com> <20091217185557.LCBPY.40286.root@hrndva-web17-z01> <580930c20912180325m60c74785he7058963d0560a3e@mail.gmail.com> <4B2B934C.2020901@libero.it> Message-ID: <580930c20912180719o5dc62654n144df6dc6fc9f627@mail.gmail.com> 2009/12/18 Mirco Romanato > Il 18/12/2009 12.25, Stefano Vaj ha scritto: > > 2009/12/17: >> >>> No, it's the definition if transcendence. It simply means that the >>> concepts your using are insufficient to model some event or relationship >>> >> >> Sorry, I cannot parse the sentence. If what? My using of what? >> > > I bet "No, it's the definition if transcendence." is "No, it's the > definition of transcendence." > > The "i" and the "o" are side by side on the keyboard. > Ah, OK, right. My case then becomes: there is a non-zero probability that Silver Surfer, the Flying Spaghetti Monster, etc. actually exist, and exist not in a mythical but "empirical" sense, as something you can touch. While all kind of existence seems barred for everyone whose "existence" takes place out of time and space. As a matter of definition, that is... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Dec 18 15:36:43 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 07:36:43 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <541679.45178.qm@web36508.mail.mud.yahoo.com> --- On Fri, 12/18/09, Stathis Papaioannou wrote: > The level of description which you call a computer program > is, in the final analysis, just a set of rules to help you figure > out exactly how you should arrange a collection of matter so that it > exhibits a desired behaviour Our task here involves more than mimicking intelligent human behavior (weak AI). Strong AI is not about behavior of neurons or brains or computers. It's about *mindfulness*. I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI? If programs drive your artificial neurons (and they do) then Searle rightfully challenges you to show how those programs that drive behavior can in some way constitute a mind, i.e., he challenges you to show that you have not merely invented weak AI, which he does not contest. > That you can describe the chemical reactions in the brain > algorithmically should not detract from the brain's consciousness, True. > so why should an algorithmic description of a computer in action > detract from the computer's consciousness? Programs that run algorithms do not and cannot have semantics. They do useful things but have no understanding of the things they do. Unless of course Searle's formal argument has flaws, and that is what is at issue here. -gts From natasha at natasha.cc Fri Dec 18 15:41:18 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 18 Dec 2009 09:41:18 -0600 Subject: [ExI] Sick of Cyberspace? 
In-Reply-To: <20091218074347.GM17686@leitl.org> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <20091218074347.GM17686@leitl.org> Message-ID: <192C79390DA1400A97CBD8CF07A8D870@DFC68LF1> Yes, understood but (cells are bio-machines); nevertheless, you are coming closer to what I had in mind. Psychedelics do not increase chronological life extension (visions yes, but I am looking for more than altered states of existence located within and fixed to the human brain). Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Eugen Leitl Sent: Friday, December 18, 2009 1:44 AM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Sick of Cyberspace? On Thu, Dec 17, 2009 at 05:42:01PM -0500, natasha at natasha.cc wrote: > Are we totally locked into cybernetics for evolution? I thought this > next era was to be about chemistry rather than machines. The both are not necessarily disparate. Enzymes are molecular machines doing chemistry, and machine-phase is basically numerically controlled chemistry. Or nanoscale 3d rapid prototyping. It is really just about human classification, there is no fundamental boundary where chemistry stops and physics starts. > Does anyone have thoughts on how chemistry might be another venue for > personal existence/communication/transportation, etc? (I don't mean > through mind altering psychedelics.) Nothing wrong with these. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natasha.cc Fri Dec 18 15:52:24 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 18 Dec 2009 09:52:24 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> Message-ID: Yes, of course. Hello folks! Now, getting back to the issue - the fact that machines are becoming chemical is the area. The organic machine has been used metaphorically for quite some time. How do you see a machine being injected with biochemistry? You are correct with biochips, which uses bits of DNA as an outreach agent. The microchip is made from macromolecules instead of semiconductor, but doesn't it also use silicon? Which is it? But this is more of what I was looking for. Thanks. Long way from the cybernetic connectivity of cyberspace, but I suppose if the brain's matter which houses personal identity could be secreted onto a microchip ... And then this gets into Anders' area, and we are back to whole brain emulation but from a different set of media. Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj >In other words, if we are becoming machines, machines are becoming "chemical" and "organic" at an even >faster pace (carbon rather than steel and silicon, biochips, nano...). 
-- Stefano Vaj _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From sparge at gmail.com Fri Dec 18 15:52:28 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 18 Dec 2009 10:52:28 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <541679.45178.qm@web36508.mail.mud.yahoo.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> Message-ID: On Fri, Dec 18, 2009 at 10:36 AM, Gordon Swobe wrote: > Programs that run algorithms do not and cannot have semantics. Programs are really nothing but semantics encoded in a rigorous syntax understandable to a processor. If programs were just meaningless syntax, they wouldn't be capable of doing anything useful. >They do useful things but have no understanding of the things they do. They aren't self aware, yet, but that doesn't mean they have no meaning/semantics. -Dave From jonkc at bellsouth.net Fri Dec 18 16:29:20 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Dec 2009 11:29:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <235730.52445.qm@web36503.mail.mud.yahoo.com> References: <235730.52445.qm@web36503.mail.mud.yahoo.com> Message-ID: On Dec 17, 2009, at 7:09 PM, Gordon Swobe wrote: > Let's go inside that neuron and look around. What do we see? > I see a computer running a formal program, a program no different in principle from those running on the computer in front of me right now. That program has no understanding of the symbols it manipulates, yet it drives all the behavior of the neuron. On your account your brain runs billions of these mindless programs, and together they comprise the greater program that causes your thoughts and behaviors. But I see nothing in your scenario that explains how billions of mindless neurons come together to create mindfulness. You want an explanation for mind and that is a very natural thing to want, but what does "explanation" mean? In general an explanation means breaking down a large complex and mysterious phenomenon until you find something that is understandable, it can mean nothing else. Science has done that with mind but you object that there must be more to it than that because the basic building block science has found is so mundane. Well of course it's mundane and simple, if it wasn't and that small part of the phenomena was still complex and mysterious then you haven't explained anything. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Dec 18 16:42:09 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Dec 2009 11:42:09 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <863424.11585.qm@web36504.mail.mud.yahoo.com> References: <863424.11585.qm@web36504.mail.mud.yahoo.com> Message-ID: <3E8C07B2-4CDC-49B4-A2D0-0EE5B976641B@bellsouth.net> On Dec 17, 2009, at 7:27 PM, Gordon Swobe wrote: > Searle only assumes exactly what he states he assumes: > P1) Programs are formal (syntactic) [which is NOT to say they have no semantics or that they cannot cause or have minds] > P2) Minds have mental contents (semantics) > P3) Syntax is neither constitutive nor sufficient for semantics. > That's all he assumes, Stathis. Nothing more, nothing less. 
He assumes that syntax is neither constitutive nor sufficient for semantics and then announces with great fanfare that by using these assumptions he has proven that syntax is neither constitutive nor sufficient for semantics. If a machine can not have a mind then a machine can not have a mind; well it's not exactly brilliant but at least it's correct. But to say it's not even constitutive of semantics is just dumb. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Dec 18 17:27:29 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Dec 2009 12:27:29 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <873042.45735.qm@web36506.mail.mud.yahoo.com> References: <873042.45735.qm@web36506.mail.mud.yahoo.com> Message-ID: <0155E897-CE78-41F2-9DCC-B4913953BC3A@bellsouth.net> On Dec 18, 2009, Gordon Swobe wrote: > Biological brains do something we don't yet understand. Call it X. Ok let's do that, X it is. I'm assuming you're not talking about anything supernatural that we can never understand even in principle, I'm assuming you're talking about a perfectly rational principle that we just haven't discovered yet. I can't deny that there is a lot we don't know, and the human brain is the most complex object in the observable universe, we've only been studying it for a short time so we may be in for some major surprises. But if Process X is rational, that means we can use our minds to examine what sort of thing it might turn out to be. It seems pretty clear that information processing can produce something that's starting to look like intelligence, but we'll assume that Process X can do this too, but in addition Process X can generate consciousness and a feeling of self, something mere information processing alone cannot do. What Process X does is certainly not simple, so it's very hard to avoid concluding that Process X itself is not simple. If it's complex then it can't be made of only one thing, it must be made of parts. If Process X is not to act in a random, incoherent way some order must exist between the parts. A part must have some knowledge of what the other parts are doing and the only way to do that is with information. But maybe communication among the parts is of only secondary importance and that the major work is done by the parts themselves. However then the parts must be very complex and be made of sub parts. The simplest possible sub part is one that can change in only one way, say, on to off. It's getting extremely difficult to tell the difference between Process X and information processing. The only way to avoid this conclusion is if there is some ethereal substance that is all of one thing and has no parts thus is very simple, yet acts in a complex, intelligent way; and produces feeling and consciousness while its at it. If you accept that, then I think the most honest thing to do would be to throw in the towel, call it a soul, and join the religious camp. I'm not ready to surrender to the forces of irrationality. > By the way ignore those who say we can't define consciousness. If it has subjective understanding of anything whatsoever -- in the common parlance if it can hold in anything whatsoever in mind -- then it has consciousness. > So something is conscious if it has subjective understanding and something has subjective understanding if it is conscious. > > Another sign of consciousness: things that have it can overcome the symbol grounding problem. 
So something has consciousness if it can overcome the symbol grounding problem and something can overcome the symbol grounding problem if it has consciousness. And round and round we go. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Dec 18 17:54:20 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Dec 2009 12:54:20 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <541679.45178.qm@web36508.mail.mud.yahoo.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> Message-ID: <3C5AAAF4-549F-4045-A8D6-D2E09F19C471@bellsouth.net> On Dec 18, 2009, Gordon Swobe wrote: > Our task here involves more than mimicking intelligent human behavior (weak AI). From a human point of view that's all that's important. Conscious or unconscious, if a Jupiter Brain is a billion times smarter than we are then we're toast. > I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI? If this distinction between strong and weak AI is real as Searle thinks then the backbone of all the biological sciences, Evolution, is wrong. Are you really ready to side with the creationists? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Dec 18 17:37:46 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Dec 2009 12:37:46 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <144147.66676.qm@web36507.mail.mud.yahoo.com> References: <144147.66676.qm@web36507.mail.mud.yahoo.com> Message-ID: <1EC0FCA4-7754-432F-B2C2-40481B6A9F05@bellsouth.net> On Dec 17, 2009, at 8:04 PM, Gordon Swobe wrote: > No understanding of Chinese from following Chinese syntax. Right. Wrong. There is not a scrap of information to indicate that a deep understanding of Chinese was not involved. > And yet he still passes the Turing test in Chinese. He? In English rooms don't have a gender. And nobody said Turing found the perfect test for consciousness, it's just that it's all we have so it will have to do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Fri Dec 18 16:02:49 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 18 Dec 2009 08:02:49 -0800 (PST) Subject: [ExI] atheism Message-ID: <311055.56333.qm@web59907.mail.ac4.yahoo.com> Agreed on almost everything. Religion/faith is a necessary fiction to fill empty heads that would otherwise be filled with something else; perhaps something worse; that is what I meant before about 'filling a vacuum', only in the empty vessel of some redneck's beer-sloshed head. Religion is IMO necessary for some families for the same reason; say, a family of yahoos ceases churchgoing, then they might trade off religion/faith for eating more to fill the void in their psyches; drink more spirits, become sex addicts. They might dissemble more to each other than they would if they didn't have the pressure from religious guilt. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sparge at gmail.com Fri Dec 18 18:32:10 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 18 Dec 2009 13:32:10 -0500 Subject: [ExI] atheism In-Reply-To: <311055.56333.qm@web59907.mail.ac4.yahoo.com> References: <311055.56333.qm@web59907.mail.ac4.yahoo.com> Message-ID: 2009/12/18 Post Futurist > > Religion?is IMO necessary for some families for the same reason; say, a family of yahoos ceases churchgoing, > then they might trade off religion/faith for eating more to fill the void in their psyches; drink more spirits, become > sex addicts. So you think food, drink, and sex are inherently "bad", at least when they're partaken for recreational purposes? And you think it's a net win for society to replace them with fear of a mystical busybody--or at least hope that that will be the result? *boggle* > They might dissemble more to each other than they would if they didn't have the pressure from > religious guilt. Huh? -Dave From thespike at satx.rr.com Fri Dec 18 20:14:04 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 18 Dec 2009 14:14:04 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <541679.45178.qm@web36508.mail.mud.yahoo.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> Message-ID: <4B2BE28C.60006@satx.rr.com> On 12/18/2009 9:36 AM, Gordon Swobe wrote: > If programs drive your artificial neurons (and they do) then Searle rightfully challenges you to show how those programs that drive behavior can in some way constitute a mind, i.e., he challenges you to show that you have not merely invented weak AI, which he does not contest. I see that Gordon ignored my previous post drawing attention to the Hopfield/Walter Freeman paradigms. I'll add this comment anyway: it is not at all clear to me that neurons and other organs and organelles are computational (especially in concert), even if their functions might be emulable by algorithms. Does a landslide calculate its path as it falls under gravity into a valley? Does the atmosphere perform a calculation as it help create the climate of the planet? I feel it's a serious error to think so, even though the reigning metaphors among physical scientists and programmers make it inevitable that this kind of metaphor or simile (it's not really a model) will be mistaken for an homology. I suspect that this is the key to whatever it is that puzzles Searle and his acolytes, which I agree is a real puzzle. I don't think the Chinese Room helps clarify it, however. I haven't read much Humberto Maturana and the Santiago theory of cognition but that might be one place to look for some handy hints. Damien Broderick From pharos at gmail.com Fri Dec 18 20:26:11 2009 From: pharos at gmail.com (BillK) Date: Fri, 18 Dec 2009 20:26:11 +0000 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: On 12/15/09, Keith Henson wrote: > A few of you have been following my work on solving the energy and > carbon problems. Regardless of how you feel about carbon, energy > really is a problem, one that if not solved could really make a mess > of world civilization. > > Unfortunately attention is focused on carbon and relatively little on > the energy problem even though the two are deeply connected. Here is > one: > http://www.virgin.com/subsites/virginearth/ > > To have any chance of competing for the prize, the focus must be on > sequestering carbon. 
That's relatively easy and painless if we > produce 15 TW of power satellites beyond human energy needs and use it > to make synthetic oil for storage in empty oil fields. > > Looks like Sandia National Labs already have a solution. New Reactor Uses Sunlight to Turn Water and Carbon Dioxide Into Fuel By Clay Dillow Posted 11.23.2009 Talk about a Eureka moment. Scientists at Sandia National Labs, seeking a means to create cheap and abundant hydrogen to power a hydrogen economy, realized they could use the same technology to "reverse-combust" CO2 back into fuel. Researchers still have to improve the efficiency of the system, but they recently demonstrated a working prototype of their "Sunshine to Petrol" machine that converts waste CO2 to carbon monoxide, and then syngas, consuming nothing but solar energy. ------------- BillK From stathisp at gmail.com Fri Dec 18 22:50:06 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 09:50:06 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <541679.45178.qm@web36508.mail.mud.yahoo.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/19 Gordon Swobe : > --- On Fri, 12/18/09, Stathis Papaioannou wrote: > >> The level of description which you call a computer program >> is, in the final analysis, just a set of rules to help you figure >> out exactly how you should arrange a collection of matter so that it >> exhibits a desired behaviour > > Our task here involves more than mimicking intelligent human behavior (weak AI). Strong AI is not about behavior of neurons or brains or computers. It's about *mindfulness*. > > I don't disagree (nor would Searle) that artificial neurons such as those you describe might produce intelligent human-like behavior. Such a machine might seem very human. But would it have intentionality as in strong AI, or merely seem to have it as in weak AI? > > If programs drive your artificial neurons (and they do) then Searle rightfully challenges you to show how those programs that drive behavior can in some way constitute a mind, i.e., he challenges you to show that you have not merely invented weak AI, which he does not contest. Could you say what you think you would experience and how you would behave if these artificial neurons were swapped for some of your biological neurons? I have asked this several times and you have avoided answering. >> That you can describe the chemical reactions in the brain >> algorithmically should not detract from the brain's consciousness, > > True. > >> so why should an algorithmic description of a computer in action >> detract from the computer's consciousness? > > Programs that run algorithms do not and cannot have semantics. They do useful things but have no understanding of the things they do. Unless of course Searle's formal argument has flaws, and that is what is at issue here. Suppose we encounter a race of intelligent aliens. Their brains are nothing like either our brains or our computers, using a combination of chemical reactions, electric circuits, and mechanical nanomachinery to do whatever it is they do. We would dearly like to kill these aliens and take their technology and resources, but in order to do this without feeling guilty we need to know if they are conscious. They behave as if they are conscious and they insist they are conscious, but of course unconscious beings may do that as well. 
Neither does evidence that they evolved naturally convince us, since there is nothing to stop nature from giving rise to weak AI machines. So, how do we determine if the activity in the alien brains is some fantastically complex program running on fantastically complex architecture; and if we decide that it is, does that mean that the aliens are not conscious? -- Stathis Papaioannou From asyluman at gmail.com Fri Dec 18 23:14:10 2009 From: asyluman at gmail.com (Will Steinberg) Date: Fri, 18 Dec 2009 18:14:10 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <541679.45178.qm@web36508.mail.mud.yahoo.com> Message-ID: Searle, deliberately or ignorantly, fails to take the experiment to its logical ends. Right now we have a man who processes and produces syntactical Chinese inside a box. Arguing that he is not conscious is like arguing that the language center of the brain is (correctly) not conscious. The part of consciousness which is left out of the man resides in the book, in the rules of response. An algorithm made to perfectly emulate a human must be able to draw on a changing set of information. Imagine, one day, a person wearing tells the machine his name is Barry and that he is going to Staten Island. The machine uses on the template "Hello, [stated name]" to respond, and Barry leaves. The next day, Barry's mother asks the room where Barry went; he informed her that he was going to see the machine. To *accurately and verifiably** *produce human results, the machine must have a memory, i.e. a symbol grounding area. Yet this is obviously computational--certain patterns in the brain are causally linked to recieved sensory input; it is not hard to imagine the brain producing a random keystring upon the sight of "ice cream" that is retroactively made to associate with the sound of an ice cream truck. This is programmable. An interesting thing to imagine is the experiment extended to completion. The man uses meta-algorithms based on a changing storage bank (on paper, of course) of symbols to derive the algorithms used for speech, as well as all other functions associated with a human. We transmit all the box's output to an android which performs the man's commands. The man knows the inputs and outputs as the machine, the memory within, and the rules for manipulation. How is this different from a human? A human does not know the rules of manipulation. Think of placing a window on our linguistic machinations, if we were able to see our brains at work producing speech. Now we are AWARE of the process, and the manipulations in which we are engaging. We become the (extended) Chinese man. I don't see how people can talk for so long about a limited, flawed thought experiment with an easily deducible answer when minds of this caliber should perhaps be more interested in HOW things like qualia and thought are constructed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stathisp at gmail.com Fri Dec 18 23:33:12 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 10:33:12 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B2BE28C.60006@satx.rr.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> <4B2BE28C.60006@satx.rr.com> Message-ID: 2009/12/19 Damien Broderick : > On 12/18/2009 9:36 AM, Gordon Swobe wrote: > >> If programs drive your artificial neurons (and they do) then Searle >> rightfully challenges you to show how those programs that drive behavior can >> in some way constitute a mind, i.e., he challenges you to show that you have >> not merely invented weak AI, which he does not contest. > > I see that Gordon ignored my previous post drawing attention to the > Hopfield/Walter Freeman paradigms. I'll add this comment anyway: it is not > at all clear to me that neurons and other organs and organelles are > computational (especially in concert), even if their functions might be > emulable by algorithms. Does a landslide calculate its path as it falls > under gravity into a valley? Does the atmosphere perform a calculation as it > help create the climate of the planet? I feel it's a serious error to think > so, even though the reigning metaphors among physical scientists and > programmers make it inevitable that this kind of metaphor or simile (it's > not really a model) will be mistaken for an homology. I suspect that this is > the key to whatever it is that puzzles Searle and his acolytes, which I > agree is a real puzzle. I don't think the Chinese Room helps clarify it, > however. I haven't read much Humberto Maturana and the Santiago theory of > cognition but that might be one place to look for some handy hints. Is a computer any more computational than a landslide? Both involve a collection of physical parts transitioning from one configuration to the next according to exactly the same laws of physics. The computer program is just a mental aid, like a mnemonic or a map, which allows the programmer to set up his collection of physical parts in such a way that the series of configuration changes reliably leads to some desired result. The symbol manipulation has no separate physical existence but is something that resides in the Platonic realm, like a mathematical theorem. -- Stathis Papaioannou From thespike at satx.rr.com Sat Dec 19 00:11:41 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 18 Dec 2009 18:11:41 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <541679.45178.qm@web36508.mail.mud.yahoo.com> <4B2BE28C.60006@satx.rr.com> Message-ID: <4B2C1A3D.3020606@satx.rr.com> On 12/18/2009 5:33 PM, Stathis Papaioannou wrote: > Is a computer any more computational than a landslide? Is gravity plus friction plus and the jostling of innumerable fragments tumbling in a chaotic manner functionally identical to a carefully noise-minimized algorithm-governed computation? 
Damien Broderick From gts_2000 at yahoo.com Sat Dec 19 00:21:00 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 16:21:00 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <805084.88597.qm@web36504.mail.mud.yahoo.com> --- On Fri, 12/18/09, Stathis Papaioannou wrote: >> If programs drive your artificial neurons (and they > do) then Searle rightfully challenges you to show how those > programs that drive behavior can in some way constitute a > mind, i.e., he challenges you to show that you have not > merely invented weak AI, which he does not contest. > > Could you say what you think you would experience and how > you would behave if these artificial neurons were swapped > for some of your biological neurons? I have asked this several > times and you have avoided answering. Because I represent Searle here (even as I criticize him on another discussion list) I will say that I think my consciousness might very well fade away in proportion to the number of neurons you replaced that had relevance to it. This could happen even as I continued to behave in a manner consistent with intelligence. In other words, it seems to me that I would change gradually from a living example of strong AI to a living example of weak AI. >> Programs that run algorithms do not and cannot have > semantics. They do useful things but have no understanding > of the things they do. Unless of course Searle's formal > argument has flaws, and that is what is at issue here. > > Suppose we encounter a race of intelligent aliens. > Their brains are nothing like either our brains or our computers, > using a combination of chemical reactions, electric circuits, > and mechanical nanomachinery to do whatever it is they do. We > would dearly like to kill these aliens and take their technology > and resources, but in order to do this without feeling guilty we > need to know if they are conscious. They behave as if they are > conscious and they insist they are conscious, but of course > unconscious beings may do that as well. Neither does evidence that > they evolved naturally convince us, since > there is nothing to stop nature from giving rise to weak AI > machines. So, how do we determine if the activity in the alien > brains is some fantastically complex program running on fantastically > complex architecture I notice first that we need to ask ourselves the same question, (as many here no doubt already have): how do we know for certain that the human brain does not do anything more than run some fantastically complex program on some fantastically complex architecture? I think that if my brain runs any programs then it must do something else too. I understand the symbols that my mind processes and having studied Searle's arguments carefully I simply I do not see how a mere program can do the same, no matter how complex. As for how we would know about the aliens... I think we would need to present them with the same arguments and information and ask them to use reason and logic to decide for themselves, just as I ask you to use reason and logic to decide for yourself. >... and if we decide that it is, does that mean that the > aliens are not conscious? Yes. 
-gts From p0stfuturist at yahoo.com Fri Dec 18 19:47:21 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 18 Dec 2009 11:47:21 -0800 (PST) Subject: [ExI] atheism Message-ID: <503025.63317.qm@web59904.mail.ac4.yahoo.com> >So you think food, drink, and sex are inherently "bad", at least when they're partaken for recreational purposes? And you think it's a net win for society to replace them with fear of a mystical busybody--or at least hope that that will be the result? >Dave Sill ? Alrighty, but is faith/religion any sillier than celebrity culture, sports, politics, gossip? ?Who wrote anything about a net good for 'society'? who here has written that religion is good? It's merely marginally less unappetizing IMO than bad politics, bad porn, bad gossip, etc. I don't see anything as 'good', just less pernicious. Busybodies are everywhere, you don't need to go to a?house of worship?to find them. Right now, in your office or lab, are religious people pushing you in any way? or is it the govt? Does a?religious org?pressure you to pay taxes?by April 15th? Are?religious charities?anywhere near as inefficient as govt? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 19 01:05:45 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 17:05:45 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B2BE28C.60006@satx.rr.com> Message-ID: <55003.34376.qm@web36506.mail.mud.yahoo.com> --- On Fri, 12/18/09, Damien Broderick wrote: > > If programs drive your artificial neurons (and they > do) then Searle rightfully challenges you to show how those > programs that drive behavior can in some way constitute a > mind, i.e., he challenges you to show that you have not > merely invented weak AI, which he does not contest. > > I see that Gordon ignored my previous post drawing > attention to the Hopfield/Walter Freeman paradigms. I didn't ignore it, Damien. I just have very little time and lots of posts to respond to, not only here but on other discussion lists. In your post before your last, you wrote something along the lines of "BULLSHIT". (No, I take that back. That's exactly what you wrote.) I don't mind a little profanity, and I didn't take offense, but as a general rule I tend to give priority to posts of those who seem most interested in what I have to say. > I'll add this comment anyway: it is not at all clear to me that > neurons and other organs and organelles are computational > (especially in concert), even if their functions might be > emulable by algorithms. Does a landslide calculate its path > as it falls under gravity into a valley? Does the atmosphere > perform a calculation as it help create the climate of the > planet? I feel it's a serious error to think so, even though > the reigning metaphors among physical scientists and > programmers make it inevitable that this kind of metaphor or > simile (it's not really a model) will be mistaken for an > homology. I suspect that this is the key to whatever it is > that puzzles Searle and his acolytes, which I agree is a > real puzzle. Well, I don't see that as very relevant, but then I base my opinion only on what you've written above. Searle considers it trivially true that we could in principle create a perfectly accurate computer simulation of the brain. 
I don't see that he would think it any less trivial if we could not, though it would certainly put a damper on the strong AI research program that he already considers a waste of time. Good to see that you agree a "real puzzle" exists. That tells me you understand I'm not just bullshitting. -gts From stathisp at gmail.com Sat Dec 19 01:13:30 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 12:13:30 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B2C1A3D.3020606@satx.rr.com> References: <541679.45178.qm@web36508.mail.mud.yahoo.com> <4B2BE28C.60006@satx.rr.com> <4B2C1A3D.3020606@satx.rr.com> Message-ID: 2009/12/19 Damien Broderick : > On 12/18/2009 5:33 PM, Stathis Papaioannou wrote: > >> Is a computer any more computational than a landslide? > > Is gravity plus friction plus and the jostling of innumerable fragments > tumbling in a chaotic manner functionally identical to a carefully > noise-minimized algorithm-governed computation? They are different processes, of course, but the behaviour of an actual, physical computer is governed by exactly the same physical laws as that of the landslide. Since there is no fundamental physical difference between a computer implementing a program and any other physical process it is not possible, in general, to examine an intelligently behaving alien machine and decide whether it is a computer or a non-computational brain. -- Stathis Papaioannou From stathisp at gmail.com Sat Dec 19 01:52:40 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 12:52:40 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <805084.88597.qm@web36504.mail.mud.yahoo.com> References: <805084.88597.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/19 Gordon Swobe : > Because I represent Searle here (even as I criticize him on another discussion list) I will say that I think my consciousness might very well fade away in proportion to the number of neurons you replaced that had relevance to it. > > This could happen even as I continued to behave in a manner consistent with intelligence. In other words, it seems to me that I would change gradually from a living example of strong AI to a living example of weak AI. So you might lose your visual perception but to an external observer you would behave just as if you had normal vision and, more to the point, you would believe you had normal vision. You would look at a person's face, recognise them, experience all the emotional responses associated with that person, describe their features vividly, but in actual fact you would be seeing nothing. How do you know you don't have this kind of zombie vision right now? Would you pay to have your normal vision restored, knowing that it could make no possible subjective or objective difference to you? By the way, I can't find the reference, but Searle claims that you *would* notice that you were going blind with this sort of neural replacement experiment, but be unable to do anything about it. You would struggle to scream out that something had gone terribly wrong, but your body would not obey you, instead smiling and declaring that everything was just fine. >>> Programs that run algorithms do not and cannot have >> semantics. They do useful things but have no understanding >> of the things they do. Unless of course Searle's formal >> argument has flaws, and that is what is at issue here. >> >> Suppose we encounter a race of intelligent aliens. 
>> Their brains are nothing like either our brains or our computers, >> using a combination of chemical reactions, electric circuits, >> and mechanical nanomachinery to do whatever it is they do. We >> would dearly like to kill these aliens and take their technology >> and resources, but in order to do this without feeling guilty we >> need to know if they are conscious. They behave as if they are >> conscious and they insist they are conscious, but of course >> unconscious beings may do that as well. Neither does evidence that >> they evolved naturally convince us, since >> there is nothing to stop nature from giving rise to weak AI >> machines. So, how do we determine if the activity in the alien >> brains is some fantastically complex program running on fantastically >> complex architecture > > I notice first that we need to ask ourselves the same question, (as many here no doubt already have): how do we know for certain that the human brain does not do anything more than run some fantastically complex program on some fantastically complex architecture? > > I think that if my brain runs any programs then it must do something else too. I understand the symbols that my mind processes and having studied Searle's arguments carefully I simply I do not see how a mere program can do the same, no matter how complex. Well how about this theory: it's not the program that has consciousness, since a program is just an abstraction. It's the physical processes the machine undergoes while running the program that causes the consciousness. Whether these processes can be interpreted as a program or not doesn't change their consciousness. -- Stathis Papaioannou From sparge at gmail.com Sat Dec 19 03:30:10 2009 From: sparge at gmail.com (Dave Sill) Date: Fri, 18 Dec 2009 22:30:10 -0500 Subject: [ExI] atheism In-Reply-To: <503025.63317.qm@web59904.mail.ac4.yahoo.com> References: <503025.63317.qm@web59904.mail.ac4.yahoo.com> Message-ID: 2009/12/18 Post Futurist > > Alrighty, but is faith/religion any sillier than celebrity culture, sports, politics, gossip? OK, so minimizing silliness is your goal, and you think religion is less silly than other pastimes? > ?Who wrote anything about a net good for 'society'? I did, but I was trying to paraphrase you, so I phrased it as a question so you could confirm or deny that it was accurate. I'm not trying to trick you, just trying to understand you. > who here has written that religion is good? I thought you said it was more desirable than alternatives like sex, overeating, and boozing. To me that's roughly synonymous to "better" and not that far from "good". > It's merely marginally less unappetizing IMO than bad politics, bad porn, bad gossip, etc. Above you were talking about silliness. Now it's "appetizingness". You may personally find porn uninteresting, "bad", or maybe just a bigger waste of time/effort/resources than religion, but that's just your opinion. Not everyone agrees with that. I'd rather see children watching videos that treat sex as a natural, fun activity in which normal people engage without shame than some "good", "wholesome" religious propaganda that depicts it as deviant behaviour unless it's being done, reluctantly, solely for the purpose of reproduction. > I don't see anything as 'good', just less pernicious. OK, so now the target is perniciousness? > Busybodies are everywhere, you don't need to go to a?house of worship?to find them. Wait, now the target is privacy? I can't keep up with you... 
> Right now, in your office or lab, are religious people pushing you in any way? New target detected: pushiness. Look, I think religion is worse than overeating, watching sports on TV, celebrity worship, gossip, politics, ..., all the bugaboos you've listed added together. You're not going to convince me otherwise, nor am I going to convince you that I'm right. > or is it the govt? Does a?religious org?pressure you to pay taxes?by April 15th? > Are?religious charities?anywhere near as inefficient as govt? Yeah, there are lots of things I'd abolish if I were the king. Religion would be first and big government would be second. -Dave From msd001 at gmail.com Sat Dec 19 04:22:45 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 18 Dec 2009 23:22:45 -0500 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: <62c14240912182022k6afd72daxb712b5b5ebe3f201@mail.gmail.com> On Fri, Dec 18, 2009 at 3:26 PM, BillK quoted: > Talk about a Eureka moment. Scientists at Sandia National Labs, > seeking a means to create cheap and abundant hydrogen to power a > hydrogen economy, realized they could use the same technology to > "reverse-combust" CO2 back into fuel. Researchers still have to > improve the efficiency of the system, but they recently demonstrated a > working prototype of their "Sunshine to Petrol" machine that converts > waste CO2 to carbon monoxide, and then syngas, consuming nothing but > solar energy. > "Sunshine to Petrol" is pretty good, but Americans generally don't use the term 'petrol.' I was pretty close with my suggestion "Liquid Sunshine..." :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Dec 19 04:36:16 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 18 Dec 2009 22:36:16 -0600 Subject: [ExI] Name for carbon project In-Reply-To: <62c14240912182022k6afd72daxb712b5b5ebe3f201@mail.gmail.com> References: <62c14240912182022k6afd72daxb712b5b5ebe3f201@mail.gmail.com> Message-ID: <4B2C5840.9010100@satx.rr.com> On 12/18/2009 10:22 PM, Mike Dougherty wrote: > waste CO2 to carbon monoxide, and then syngas, consuming nothing but > solar energy. > "Sunshine to Petrol" is pretty good, but Americans generally don't use > the term 'petrol.' > > I was pretty close with my suggestion "Liquid Sunshine..." :) Sungas? Sun Gas? (Close to BillK's excellent Stargas, but how many people know the sun is a star?) And has the sounds-like link to syngas. From gts_2000 at yahoo.com Sat Dec 19 04:26:27 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 20:26:27 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <399339.94610.qm@web36501.mail.mud.yahoo.com> --- On Fri, 12/18/09, Stathis Papaioannou wrote: After a complete replacement of my brain with your nano-neuron brain... > So you might lose your visual perception but to an external > observer you would behave just as if you had normal vision and, Yes. By the way, though not exactly the same thing as we mean here, the phenomenon of blindsight exists. People with this condition detect objects in their line of sight but cannot see them. Or, rather, they cannot see that they see them. > more to the point, you would believe you had normal vision. I would have no awareness of any beliefs I might have. > You would look at a person's face, recognise them, I would not know that I recognized them, but I would act as if I did. 
> experience all the emotional responses associated with that person, Bodily responses, but I would have no awareness of them. > describe their features vividly, but in actual fact you would be seeing > nothing. I would see but not know it. > How do you know you don't have this kind of zombie vision right now? Because I know I can see. > Would you pay to have your normal vision restored, knowing that it > could make no possible subjective or objective difference to you? No, but I wouldn't know that I didn't. In all the above except the second to last, I lack intentionality. > Well how about this theory: it's not the program that has > consciousness, since a program is just an abstraction. It's > the physical processes the machine undergoes while running the > program that causes the consciousness. Whether these processes can > be interpreted as a program or not doesn't change their > consciousness. I don't think S/H systems have minds but I do think you've pointed in the right direction. I think matter matters. More on this another time. -gts From msd001 at gmail.com Sat Dec 19 04:54:56 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 18 Dec 2009 23:54:56 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> Message-ID: <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> On Fri, Dec 18, 2009 at 10:52 AM, Natasha Vita-More wrote: > Yes, of course. Hello folks! > > Now, getting back to the issue - the fact that machines are becoming > chemical is the area. The organic machine has been used metaphorically for > quite some time. How do you see a machine being injected with > biochemistry? > You are correct with biochips, which uses bits of DNA as an outreach agent. > The microchip is made from macromolecules instead of semiconductor, but > doesn't it also use silicon? Which is it? But this is more of what I was > looking for. Thanks. Long way from the cybernetic connectivity of > cyberspace, but I suppose if the brain's matter which houses personal > identity could be secreted onto a microchip ... > And then this gets into Anders' area, and we are back to whole brain > emulation but from a different set of media. > OLED are now available in consumer HD TVs. That's a considerable step to putting organics to use in a domain that had previously only been done with LCD (chemistry?) and Plasma/CRT (physics?). I would also like to see more articles on DNA origami: http://www.blog.speculist.com/archives/001864.html A Sierpinski gasket made of DNA? "That's Crazy!" (crazy awesome) http://en.wikipedia.org/wiki/DNA_nanotechnology -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Dec 19 05:03:24 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 19 Dec 2009 00:03:24 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <235730.52445.qm@web36503.mail.mud.yahoo.com> Message-ID: <62c14240912182103s6678e6f5s29731abe950a072a@mail.gmail.com> 2009/12/18 John Clark > You want an explanation for mind and that is a very natural thing to want, > but what does "explanation" mean? In general an explanation means breaking > down a large complex and mysterious phenomenon until you find something that > is understandable, it can mean nothing else. 
Science has done that with mind > but you object that there must be more to it than that because the basic > building block science has found is so mundane. Well of course it's mundane > and simple, if it wasn't and that small part of the phenomena was still > complex and mysterious then you haven't explained anything. > And the simple/mundane mechanisms seem generally more reliable than complex/fancy mechanisms :: be glad our brains work on relatively cheap neurons, else they might not work at all. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sat Dec 19 05:13:06 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Fri, 18 Dec 2009 21:13:06 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <666456.12354.qm@web36502.mail.mud.yahoo.com> --- On Fri, 12/18/09, Will Steinberg wrote: > Right now we have a man who processes and produces > syntactical Chinese inside a box.? Arguing that he is not > conscious is like arguing that the language center of the > brain is (correctly) not conscious.? Nobody has ever argued that the man has no consciousness. You must have some other Chinese guy in mind. > To accurately and verifiably produce human results, the machine must > have a memory, i.e. a symbol grounding area.? Symbol grounding involves comprehension of the meanings of words and symbols, not their mere storage in memory. Simply stated, programs manipulate symbols but they have no way to know what the symbols mean, unless programs somehow have or cause minds. And that's what the brouhaha is all about. > I don't see how people can talk for so long about a > limited, flawed thought experiment I happen to agree the experiment has a flaw, though not for the reason you think. Fortunately for Searle, his argument does not depend on the thought experiment. -gts From stathisp at gmail.com Sat Dec 19 06:49:21 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 19 Dec 2009 17:49:21 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <399339.94610.qm@web36501.mail.mud.yahoo.com> References: <399339.94610.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/19 Gordon Swobe : > --- On Fri, 12/18/09, Stathis Papaioannou wrote: > > After a complete replacement of my brain with your nano-neuron brain... It's important that you consider first the case of *partial* replacement, eg. all of your visual cortex but the rest of the brain left intact. >> So you might lose your visual perception but to an external >> observer you would behave just as if you had normal vision and, > > Yes. By the way, though not exactly the same thing as we mean here, the phenomenon of blindsight exists. People with this condition detect objects in their line of sight but cannot see them. Or, rather, they cannot see that they see them. Patients with blindsight can to an extent detect objects put in front of their eyes but do not have visual perception of the objects. Patients with Anton's syndrome have the opposite phenomenon: they are blind and stagger about walking into things but claim that they can see, and confabulate when asked to describe what they see. >> more to the point, you would believe you had normal vision. > > I would have no awareness of any beliefs I might have. Yes you would, if only your visual cortex was replaced and the rest of your brain unchanged. >> You would look at a person's face, recognise them, > > I would not know that I recognized them, but I would act as if I did. 
Yes you would recognise them. >> experience all the emotional responses associated with that person, > > Bodily responses, but I would have no awareness of them. Yes you would, even without any visual cortex. >> describe their features vividly, but in actual fact you would be seeing >> nothing. > > I would see but not know it. You would not see but you *would* know that you are seeing, as surely as you know that you are seeing now. >> How do you know you don't have this kind of zombie vision right now? > > Because I know I can see. And you would have the same kind of knowledge with a zombified visual cortex, since the rest of your brain would be unchanged. >> Would you pay to have your normal vision restored, knowing that it >> could make no possible subjective or objective difference to you? > > No, but I wouldn't know that I didn't. You would know (because you have been reliably informed) that you had a zombified visual cortex, but try as you might you can't tell any difference compared to before the operation. This is because the artificial neurons are sending the same signals to the rest of your brain. So you would be postulating that you are blind but that the blindness makes absolutely no difference to you, since you have all the thoughts and feelings and behaviours associated with normal vision. Is it possible that you could lose all visual experience like this but not even notice? If it is, then experience is something very different to what we all intuitively know it to be. You're going very far in postulating this strange theory of partial zombiehood (which could be afflicting us all at this very moment) just so that you can maintain that the artificial neurons lack consciousness. The alternative simpler and more plausible theory is that if the artificial neurons reproduce the behaviour of biological neurons, then they also reproduce the consciousness of biological neurons. > In all the above except the second to last, I lack intentionality. > > >> Well how about this theory: it's not the program that has >> consciousness, since a program is just an abstraction. It's >> the physical processes the machine undergoes while running the >> program that causes the consciousness. Whether these processes can >> be interpreted as a program or not doesn't change their >> consciousness. > > I don't think S/H systems have minds but I do think you've pointed in the right direction. I think matter matters. More on this another time. But you also think that if the matter behaves in such a way that it can be interpreted as implementing a computer program it lacks consciousness. The CR lacks understanding because the man in the room, who can be seen as implementing a program, lacks understanding; whereas a different system which produces similar behaviour but with dumb components the interactions of which can't be recognised as algorithmic has understanding. You are penalising the CR because it has something extra in the way of pattern and intelligence. -- Stathis Papaioannou From eschatoon at gmail.com Sat Dec 19 08:04:07 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sat, 19 Dec 2009 09:04:07 +0100 Subject: [ExI] Sick of Cyberspace? 
In-Reply-To: <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> Message-ID: <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> In the long term I see humans merging with AI subsystems and becoming purely computational beings with movable identities based on some or some other kind of physical hardware. I don't think there is any other viable long term choice, not if we want to leave all limits behind and increase our options without bonds. But this will take long. In the meantime there are many other stepping stones to go through, based on improving our biology and gradually merging it with our technology. On Fri, Dec 18, 2009 at 12:19 PM, Stefano Vaj wrote: > 2009/12/17 ?: >> Are we totally locked into cybernetics for evolution? I thought this next >> era was to be about chemistry rather than machines. > > I come myself from "wet transhumanism" (bio/cogno), and while I got in > touch with the movement exactly out of curiosity to learn more about > the "hard", "cyber/cyborg" side of things, I am persuased the next era > is still about chemistry, and, that when it will stops being there > will be little difference between the two. > > In other words, if we are becoming machines, machines are becoming > "chemical" and "organic" at an even faster pace (carbon rather than > steel and silicon, biochips, nano...). > > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From bbenzai at yahoo.com Sat Dec 19 10:56:11 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 19 Dec 2009 02:56:11 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <564198.9615.qm@web113603.mail.gq1.yahoo.com> Gordon Swobe wrote: > --- On Fri, 12/18/09, Ben Zaiboc > wrote: ... > > OK, so given that, what does 'symbol grounding' > mean?? > > It means that the meaning of a mental symbol is built > up > > from internal representations that derive from this > 'World > > according to You'.? There's nothing mysterious or > > difficult about it, and it doesn't really even deserve > the > > description 'problem'.? > > It's a problem for simulations of people, Ben. Not a > problem for real people. I'm talking about simulations. I'm talking about why, if the simulation is good enough, there is no functional difference. You start by accepting what I'm saying, which is based entirely on information-processing, then deny that this can apply to a non-biological information processing system, even though you agree it does apply to a biological one. You haven't commented on the other part of my post, where I say: ------------------------ > > If your character had a brain, and it was a complete > > simulation of a biological brain, then how could it > > not have understanding? > > Because it's just a program and programs don't have > semantics. You keep saying this, but it's not true. Complex enough programs *can* have semantics. This should be evident from my description of internal world-building above. The brain isn't doing anything that a (big) set of interacting data-processing modules in a program, (or more likely a large set of interacting programs) can't also do. 
Semantics isn't something that can exist outside of a mind. Meaning is an internally-generated thing. ------------------------- This is an important point, which is why I'm going back to it. Do you agree or disagree that Meaning (semantics) is an internally-generated phenomenon in a sufficently complex, and suitably organised, information processing system, with sensory inputs, motor outputs and memory storage? > > There seems to be an implication that a simulation is > > somehow 'inferior' to the 'real thing'. > > > > I remember simulating my father's method of tying > shoelaces > > when I was small.? I'm sure that my shoelace-tying > now > > is just as good as his ever was. > > You didn't simulate you father. You imitated him. I didn't say that. I said I simulated his shoelace-tying method. Which I did. If I had simulated *him* (accurately enough) I would *be* him. And yes, I imitated that part of his behaviour. I simulated it by imitating him doing it, over several repetitions, because that was sufficient to create a good simulation of that behaviour. > If you took a video of your father tying his shoelaces and > watched that video, you would watch a simulation. No, I'd be watching an audio-visual recording. That doesn't contain enough information to call it a simulation. If it was a recording that captured his muscle movements, his language patterns, and his belief systems about tying shoelaces, over many repetitions, then it would be a simulation. If it recorded every detail of his biochemical interactions, then it would be a good simulation. > Is that really your father tying his shoelaces in the > video, Ben? Or it just pixels on the screen? I.e., just a > simulation? Answered above. > And if you ever watched a video of your father taken while > he read and understood a newspaper, you watched a simulation > of your father overcoming the symbol grounding problem. You > watched a cartoon. Perhaps you confused the cartoon with > reality, and thought you saw your real father understanding > something, but in that case you weren't paying attention. A simulation is not something which captures any old information about a process (such as what it looks like from a distance of x metres in light between 400 - 700 nm wavelength). It's a set of relevant information that captures the functioning of the process at whatever level of detail you need for the purposes of the simulation. In the case of a book, it's usually the information represented by the words that is relevant, so a digital copy, a scan, or audio recording of those words being read is sufficient. If you're interested in the physical structure of the book, you'd need to capture a lot more information. In the case of a mind, you need to capture the information-processing mechanisms, along with the exact data that is being processed or held in storage. If you have that, you have that mind. As John K Clark says, it's all about information. If you don't accept that, you must posit some other 'thing' that is important. I only know of two other 'things': matter and energy. Do you know of another? I have a question: Suppose someone built a brain by taking one cell at a time, and was somehow able to attach them together in exactly the same configuration, with exactly the same synaptic strengths, same myelination, same tight junctions, etc., etc., cell for cell, as an existing biological brain, would the result be a conscious individual, the same as the natural one? (assuming it was put in a suitable body, all connected up properly, etc.). 
Ben Zaiboc From eugen at leitl.org Sat Dec 19 12:24:46 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Dec 2009 13:24:46 +0100 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> Message-ID: <20091219122446.GG17686@leitl.org> On Fri, Dec 18, 2009 at 11:54:56PM -0500, Mike Dougherty wrote: > OLED are now available in consumer HD TVs. That's a considerable step to It's mostly AMOLED, which still use Si (poly-Si or a-Si) in TFT backplanes. I like the AMOLED displays a lot, since they actually make reading off screen feasible. TFT LCD, including nicer kinds (LED)-backlit (S-PVA, e-IPS) can't quite make the cut. Probably never will, though reflective liquid crystals are not nearly done yet. > putting organics to use in a domain that had previously only been done with > LCD (chemistry?) and Plasma/CRT (physics?). The particular substrate is not important, yet. The underlying technology is much more interesting. The choice of substrate happens for many, more or less mundane reasons, and also changes over time. (I can expand, if needed, but I doubt many would find it interesting talking about oxide layers, monocrystals, III-V semiconductors and wilder blends, band gaps, stoichiometry, thermal dependency, immersion litho and similar). Larger changes in technology are more rare, and several technologies can be used in the same substrate (or substrates, since every computer today uses some 50 elements intricately patterned on top of silicon, and the palette is further expanding rapidly). We're still mostly using electronics (CMOS (mostly MOSFET)), though we're increasingly seeing photonics sitting along with spintronics, especially single-electron quantum-effect devices which are technically still electronics, but not as we know it, Jim. And we have to use them, or Moore will run into a wall, and even quite soon. When we start approaching the limits of computational physics, and degree of utilization of matter in the local system, then the choices will be a lot less limited. The details of that remote (though not necessarily in terms of wall clock) epoch we do not know of course. What is the energy/matter tradeoff in transmutation -- should you do some energetically expensive alchemy in order to obtain better EROEI or ops/J on the long run or just use whatever stellar excreta throw your way? We can't quite tell, yet. > I would also like to see more articles on DNA origami: > http://www.blog.speculist.com/archives/001864.html > > A Sierpinski gasket made of DNA? "That's Crazy!" (crazy awesome) > http://en.wikipedia.org/wiki/DNA_nanotechnology It is always a good idea to utilize the only kind of molecular technology we know and can control (life) in the bootstrap. Proteins and DNA are great for self-assembly, as long as you can't do your own self-assembly from scratch, or use machine-phase which allows you to maximize functionality concentration/volume down to the theoretical limit. This means we'll lose biological components along the way, first as sacrificial scaffolding, and then completely. No harsh feelings, really. So in general don't latch onto the organic/inorganic thing. The reality is more complicated in practice anyway. 
The distinction is artificial in practice, and if we're, say, using graphene/diamond spintronics as a optimal substrate it isn't for some magic reasons. It's just carbon is special in that it makes nicely stable chains and cages which no other element can as well. From that property stem other remarkable thermal and electronic properties. Of course it won't be pure carbon, whether it's nitrogen vacances, SiC islets or transition metal groups in active machine-phase or synthetic enzyme centers. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Sat Dec 19 14:16:45 2009 From: pharos at gmail.com (BillK) Date: Sat, 19 Dec 2009 14:16:45 +0000 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <4B2841A0.2060509@satx.rr.com> References: <20091213175616.61346.qmail@moulton.com> <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> <4B2841A0.2060509@satx.rr.com> Message-ID: On 12/16/09, Damien Broderick wrote: > If climate change keeps getting worse then I would expect denialists to > grasp at stranger straws, many skeptics to become warners, the warners to > start pushing geoengineering schemes like sulfur dust in the stratosphere, > and the calamatists to push liberal political agendas ? just as the > denialists said they would. > > Cartoon: 'What if it's a big hoax and we create a better world for nothing'? BillK From gts_2000 at yahoo.com Sat Dec 19 14:48:26 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 06:48:26 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <644488.92139.qm@web36508.mail.mud.yahoo.com> --- On Sat, 12/19/09, Stathis Papaioannou wrote: > > After a complete replacement of my brain with your > nano-neuron brain... > > It's important that you consider first the case of > *partial* replacement, eg. all of your visual cortex but the rest of > the brain left intact. I based all my replies, each with which you disagreed, on a complete replacement because the partial just seems too speculative to me. (The complete replacement is extremely speculative as it is!) I simply don't know (nor do you or Searle) what role the neurons in the visual cortex play in conscious awareness. Do they only play a functional role as I think you suppose, as mere conduits of visual information to consciousness, or do they also play a role in the conscious experience? I don't know and I don't think anyone does. Perhaps Searle ventured to guess, but I don't think we can spear him for being a good sport and playing along with the game. > You're going very far in postulating this strange theory of > partial zombiehood I think I postulated a highly speculative theory of complete zombiehood, built on top of your already highly speculative design for computerized artificial neurons that behave identical to natural neurons, and on top of your highly speculative theory that you could use them to replace my natural neurons without killing me in the process. Lots of speculation going on there, and your name appears on a lot of it. :-) > The alternative simpler and more plausible theory is > that if the artificial neurons reproduce the behaviour of > biological neurons, then they also reproduce the consciousness > of biological neurons. I disagree completely. 
We simply cannot logically deduce consciousness from considering behavior alone, no matter whether we consider the behavior of neurons or of brains or of persons or of doorknobs. In fact the behaviorist school of psychology came along for exactly that reason. The idea also infiltrated the philosophy of mind. (John Clark has tried to hang with me with that rope, by the way, but in the process he denied his own intentionality.) Likewise, we cannot deduce from evidence of the behavior of a neuron that it has what the brain needs to produce a mind. At best we can hope it has what it needs to produce the correct behavior of the organism. I can hold that assumption in mind only long enough to play the zombie game. -gts > > > In all the above except the second to last, I lack > intentionality. > > > > > >> Well how about this theory: it's not the program > that has > >> consciousness, since a program is just an > abstraction. It's > >> the physical processes the machine undergoes while > running the > >> program that causes the consciousness. Whether > these processes can > >> be interpreted as a program or not doesn't change > their > >> consciousness. > > > > I don't think S/H systems have minds but I do think > you've pointed in the right direction. I think matter > matters. More on this another time. > > But you also think that if the matter behaves in such a way > that it > can be interpreted as implementing a computer program it > lacks > consciousness. The CR lacks understanding because the man > in the room, > who can be seen as implementing a program, lacks > understanding; > whereas a different system which produces similar behaviour > but with > dumb components the interactions of which can't be > recognised as > algorithmic has understanding. You are penalising the CR > because it > has something extra in the way of pattern and > intelligence. > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From gts_2000 at yahoo.com Sat Dec 19 15:24:08 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 07:24:08 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <351662.66076.qm@web36505.mail.mud.yahoo.com> --- On Sat, 12/19/09, Stathis Papaioannou wrote: > The CR lacks understanding because the man in the room, who can be > seen as implementing a program, lacks understanding; Yes. > whereas a different system which produces similar > behaviour but with dumb components the interactions of which can't be > recognised as algorithmic has understanding. What different system? If you mean the natural brain, (the only different system known to have understanding), then it doesn't matter whether we can recognize its processes as algorithmic. Any computer running those possible algorithms would have no understanding. More generally, computer simulations of things do not equal the things they simulate. -gts From gts_2000 at yahoo.com Sat Dec 19 16:12:58 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 08:12:58 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <564198.9615.qm@web113603.mail.gq1.yahoo.com> Message-ID: <848281.47384.qm@web36507.mail.mud.yahoo.com> --- On Sat, 12/19/09, Ben Zaiboc wrote: > You haven't commented on the other part of my post, where I > say: We need first to get this business about simulations straight... 
It seems you don't understand even that a video of your father qualifies as a simulation of your father. >> If you took a video of your father tying his shoelaces >> and watched that video, you would watch a simulation. > > No, I'd be watching an audio-visual recording.? That > doesn't contain enough information to call it a > simulation.? Sorry, it's a simulation of your father. > If it was a recording that captured his > muscle movements, his language patterns, and his belief > systems about tying shoelaces, over many repetitions, then > it would be a simulation.? If it recorded every detail > of his biochemical interactions, then it would be a good > simulation. Now you've created a much better simulation, and good for you. However your original video also counts as a simulation. It makes no difference how many details you include in your simulation; it will never become more than a simulation. Even if you record computer simulations of every atom in your father's body, you will still have recorded only a simulation of your father. When you later observe that recorded computer simulation of your father, you will watch a computer simulation of your father. It's a simulation, a cartoon. If you think you see somebody real in the cartoon who ties real shoelaces and who really understands words then you've simply deceived yourself. -gts From msd001 at gmail.com Sat Dec 19 16:21:34 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 19 Dec 2009 11:21:34 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <20091219122446.GG17686@leitl.org> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> <20091219122446.GG17686@leitl.org> Message-ID: <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> On Sat, Dec 19, 2009 at 7:24 AM, Eugen Leitl wrote: > The particular substrate is not important, yet. The underlying technology > is much more interesting. The choice of substrate happens for many, more > Natasha asked about chemistry, conversation ensued and we got to biology. The substrate was my point about OLED, not that it was important - only that it is available right now. > It is always a good idea to utilize the only kind of molecular technology > we know and can control (life) in the bootstrap. Proteins and DNA are > great for self-assembly, as long as you can't do your own self-assembly > from scratch, or use machine-phase which allows you to maximize > functionality > concentration/volume down to the theoretical limit. This means we'll > lose biological components along the way, first as sacrificial > scaffolding, and then completely. No harsh feelings, really. > > So in general don't latch onto the organic/inorganic thing. > Again, DNA is available and being used right now (albeit primitively). Machine-phase nano-magic is still only being talked about... Unless y'all have been withholding links to interesting science? > The distinction is artificial in practice, and if we're, say, using > graphene/diamond spintronics as a optimal substrate it isn't for > some magic reasons. It's just carbon is special in that it makes nicely > stable chains and cages which no other element can as well. From > that property stem other remarkable thermal and electronic properties. > Of course it won't be pure carbon, whether it's nitrogen vacances, > SiC islets or transition metal groups in active machine-phase > or synthetic enzyme centers. > Oh.. well then.. 
I guess... uh... I'll just take a nap until they get it all worked out. :) -------------- next part -------------- An HTML attachment was scrubbed... URL:
From hkeithhenson at gmail.com Sat Dec 19 16:30:41 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Dec 2009 08:30:41 -0800 Subject: [ExI] Name for carbon project Message-ID: On Fri, Dec 18, 2009 at 3:33 PM, BillK wrote: > On 12/15/09, Keith Henson wrote: >> A few of you have been following my work on solving the energy and >> carbon problems. Regardless of how you feel about carbon, energy >> really is a problem, one that if not solved could really make a mess >> of world civilization. >> >> Unfortunately attention is focused on carbon and relatively little on >> the energy problem even though the two are deeply connected. Here is >> one: >> http://www.virgin.com/subsites/virginearth/ >> >> To have any chance of competing for the prize, the focus must be on >> sequestering carbon. That's relatively easy and painless if we >> produce 15 TW of power satellites beyond human energy needs and use it >> to make synthetic oil for storage in empty oil fields. > > Looks like Sandia National Labs already have a solution. > > > > New Reactor Uses Sunlight to Turn Water and Carbon Dioxide Into Fuel > By Clay Dillow Posted 11.23.2009 > > Talk about a Eureka moment. Scientists at Sandia National Labs, > seeking a means to create cheap and abundant hydrogen to power a > hydrogen economy, realized they could use the same technology to > "reverse-combust" CO2 back into fuel. Researchers still have to > improve the efficiency of the system, but they recently demonstrated a > working prototype of their "Sunshine to Petrol" machine that converts > waste CO2 to carbon monoxide, and then syngas, consuming nothing but > solar energy. > ------------- > > BillK Bill, the problem is not the high school chemistry, but the energy source to make the hydrogen. The US uses about 20 million bbls of oil a day. To make that much synthetic fuel would take a dedicated 2 TW. Nothing wrong with "pop sci" but you need to apply better tests than the reporter or, for that matter, the people who funded this work. Someone suggested Carbon+ in the mode of H+. But more carbon isn't the concept. So another suggestion is Carbon- (Carbon minus.) Comments? Keith
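A rough back-of-the-envelope check of the 2 TW figure above, offered as a minimal sketch in Python rather than anything taken from Keith's post or the Sandia work: the per-barrel energy content (about 6.1 GJ) and the overall synthesis efficiency (about 65%) are assumed round numbers, and all the variable names are mine.

# Rough check: primary power needed to synthesize 20 million barrels of fuel per day.
# Assumed values (not from the thread): ~6.1e9 J per barrel, ~65% synthesis efficiency.
BARRELS_PER_DAY = 20e6
JOULES_PER_BARREL = 6.1e9       # roughly 5.8 million BTU per barrel of oil equivalent
SECONDS_PER_DAY = 86400
SYNTHESIS_EFFICIENCY = 0.65     # fraction of input energy that ends up stored in the fuel

# Power represented by the fuel itself, then the primary power needed to make it.
fuel_power_w = BARRELS_PER_DAY * JOULES_PER_BARREL / SECONDS_PER_DAY
input_power_w = fuel_power_w / SYNTHESIS_EFFICIENCY

print(f"Chemical energy in the fuel: {fuel_power_w / 1e12:.1f} TW")   # about 1.4 TW
print(f"Primary power required:      {input_power_w / 1e12:.1f} TW")  # about 2.2 TW

With those assumptions the requirement comes out near 2 TW of dedicated primary power; a different efficiency guess shifts the number somewhat, but not its order of magnitude.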
From natasha at natasha.cc Sat Dec 19 16:34:26 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Dec 2009 10:34:26 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc><580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> Message-ID: This is what I have thought as well, for 20 years, but I am thinking that it has become just a bit dogmatic. This could be because it has now gone so mainstream, even folks at TED are discussing it and now there is a university to promote a watered-down version of it. BUT, that does not change my view that it is wise to avoid sticking so firmly to an absolute and to always question our premises and consider alternatives as transdisciplinary ideas and new insights. **The chemistry of communication has been crucial for human evolution. I simply wonder what its future will be. Best, Natasha Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio Prisco (2nd email) Sent: Saturday, December 19, 2009 2:04 AM To: ExI chat list Subject: Re: [ExI] Sick of Cyberspace? In the long term I see humans merging with AI subsystems and becoming purely computational beings with movable identities based on some or some other kind of physical hardware. I don't think there is any other viable long term choice, not if we want to leave all limits behind and increase our options without bonds. But this will take long. In the meantime there are many other stepping stones to go through, based on improving our biology and gradually merging it with our technology. On Fri, Dec 18, 2009 at 12:19 PM, Stefano Vaj wrote: > 2009/12/17 ?: >> Are we totally locked into cybernetics for evolution? I thought this >> next era was to be about chemistry rather than machines. > > I come myself from "wet transhumanism" (bio/cogno), and while I got in > touch with the movement exactly out of curiosity to learn more about > the "hard", "cyber/cyborg" side of things, I am persuased the next era > is still about chemistry, and, that when it will stops being there > will be little difference between the two. > > In other words, if we are becoming machines, machines are becoming > "chemical" and "organic" at an even faster pace (carbon rather than > steel and silicon, biochips, nano...). > > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From natasha at natasha.cc Sat Dec 19 16:35:03 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Dec 2009 10:35:03 -0600 Subject: [ExI] Recall: Sick of Cyberspace? Message-ID: Natasha Vita-More would like to recall the message, "[ExI] Sick of Cyberspace?". -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 1137 bytes Desc: not available URL:
From msd001 at gmail.com Sat Dec 19 16:49:41 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 19 Dec 2009 11:49:41 -0500 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: <62c14240912190849m118ec63ep82915bbe47b3838@mail.gmail.com> On Sat, Dec 19, 2009 at 11:30 AM, Keith Henson wrote: > > Someone suggested Carbon+ in the mode of H+. But more carbon isn't > the concept. So another suggestion is Carbon- (Carbon minus.) > > maybe use TV marketing concepts: Free Gas* (* just pay for processing sunlight into useful hydrocarbons) How about calling them SolarCarbons? Anyone with knowledge of high-school chem would understand there is no such thing and (maybe) ask what makes them / how they exist. Those less knowledgeable (or inclined to ask) would eventually get on the bandwagon that we need more of them since smart people think they're a good idea. SolarCarbon: Any technologically produced carbon-based storage of solar energy.
(This definition rules out things like trees and algae, since nature probably already has a patent on them) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 19 16:28:34 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 19 Dec 2009 11:28:34 -0500 Subject: [ExI] atheism In-Reply-To: <503025.63317.qm@web59904.mail.ac4.yahoo.com> References: <503025.63317.qm@web59904.mail.ac4.yahoo.com> Message-ID: On Dec 18, 2009, Post Futurist wrote: > is faith/religion any sillier than celebrity culture, sports, politics, gossip? Yes because of the utter seriousness and weight that religious adherents give to their subject that is rivaled in no other area. However I will admit that the sillier the joke the more seriously it must be told. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Dec 19 17:07:55 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 19 Dec 2009 12:07:55 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <644488.92139.qm@web36508.mail.mud.yahoo.com> References: <644488.92139.qm@web36508.mail.mud.yahoo.com> Message-ID: <076111E9-C56F-41D4-8934-429D36276218@bellsouth.net> On Dec 19, 2009, Gordon Swobe wrote: > We simply cannot logically deduce consciousness from considering behavior alone You are entirely wrong. We most certainly can deduce consciousness from considering behavior alone because there is a HUGE amount of evidence in support of that idea, there is a HUGE amount of evidence that evolution is true. There is a certain breed of philosopher who acts as if there has been no advance in science since Aristotle and just by pondering in his armchair can come to a conclusion that he insists is correct even though there is a mountain of experimental evidence showing that he cannot be. This is the sort of thing that gives philosophy a bad name. Either Searle was wrong or Darwin was, it's as simple as that. There is not the smallest particle of hard evidence in support of Searle but there is an enormous amount in favor of evolution. Searle doesn't dispute this, he just ignores Darwin and what is probably the single best idea that any human being ever had, as do you. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eschatoon at gmail.com Sat Dec 19 17:47:27 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Sat, 19 Dec 2009 18:47:27 +0100 Subject: [ExI] Sick of Cyberspace? In-Reply-To: References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> Message-ID: <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> The mainstream is certainly more open to the concept of post-biological life than it was, say, 20 years ago, and this is a good outcome in which our combined efforts played a part. I see the _possibility_of post-biological life as compatible with the current scientific paradigm, so I am confident (not certain, but confident) it will be achieved someday. Perhaps not as soon as some predict, but someday. And I think it is not only doable but also good. However, we are going to remain stuck with biology for many decades at least, probably some centuries, and of course we should try making the best of it. G. 
On Sat, Dec 19, 2009 at 5:34 PM, Natasha Vita-More wrote: > This is what I have thought as well, for 20 years, but I am thinking that it > is has become just a bit dogmatic. ?This could be because it has now gone so > mainstream, even folks at TED are discussing it and now there is a > university to pomote a watered-down version of it. ?BUT, that does not > change my view that it is wise to avoid sticking so firmly to an absolute > and to always question our premises and consider alternatives as > transdiciplinary ideas and new insights. > > **The chemistry of communication has been crucial for human evolution. ?I > simply wonder what its future will be. > > Best, > Natasha > > > Nlogo1.tif Natasha Vita-More > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio Prisco > (2nd email) > Sent: Saturday, December 19, 2009 2:04 AM > To: ExI chat list > Subject: Re: [ExI] Sick of Cyberspace? > > In the long term I see humans merging with AI subsystems and becoming purely > computational beings with movable identities based on some or some other > kind of physical hardware. I don't think there is any other viable long term > choice, not if we want to leave all limits behind and increase our options > without bonds. > > But this will take long. In the meantime there are many other stepping > stones to go through, based on improving our biology and gradually merging > it with our technology. > > On Fri, Dec 18, 2009 at 12:19 PM, Stefano Vaj wrote: >> 2009/12/17 ?: >>> Are we totally locked into cybernetics for evolution? I thought this >>> next era was to be about chemistry rather than machines. >> >> I come myself from "wet transhumanism" (bio/cogno), and while I got in >> touch with the movement exactly out of curiosity to learn more about >> the "hard", "cyber/cyborg" side of things, I am persuased the next era >> is still about chemistry, and, that when it will stops being there >> will be little difference between the two. >> >> In other words, if we are becoming machines, machines are becoming >> "chemical" and "organic" at an even faster pace (carbon rather than >> steel and silicon, biochips, nano...). >> >> -- >> Stefano Vaj >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > -- > Giulio Prisco > http://cosmeng.org/index.php/Giulio_Prisco > aka Eschatoon Magic > http://cosmeng.org/index.php/Eschatoon > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From p0stfuturist at yahoo.com Sat Dec 19 04:26:24 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Fri, 18 Dec 2009 20:26:24 -0800 (PST) Subject: [ExI] atheism In-Reply-To: Message-ID: <475024.13856.qm@web59907.mail.ac4.yahoo.com> "Look, I think religion is worse than overeating, watching sports on TV, celebrity worship, gossip, politics, ..., all the bugaboos you've listed added together. You're not going to convince me otherwise, nor am I going to convince you that I'm right. 
>David Sill" No, you wont convince me, I wont convince you, because neither of us have anything to back our positions up with, so your paragraph above is a nonstarter. My posts on this topic also have no traction except to write that you can't prove religion is more futile than all other effluvia. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Dec 19 18:12:39 2009 From: pharos at gmail.com (BillK) Date: Sat, 19 Dec 2009 18:12:39 +0000 Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: On 12/19/09, Keith Henson wrote: > Bill, the problem is not the high school chemistry, but the energy > source to make the hydrogen. > The US uses about 20 million bbls of oil a day. To make that much > synthetic fuel would take a dedicated 2 TW. > > Nothing wrong with "pop sci" but you need to apply better tests than > the reporter or for that matter, the people who funded this work. > > Hmmm. Do you think the Sandia scientists would be pleased to know that they are just doing 'high school chemistry'? Sandia press release here: They don't intend to replace *all* the oil production with this device. Their idea is that in say, 15 years, every coal power station could have one of these devices attached, taking all the CO2 thrown out by the power station and using sunlight to produce fuel. Every 'green power' device extends the lifetime of the remaining oil supplies. BillK From jonkc at bellsouth.net Sat Dec 19 17:48:53 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 19 Dec 2009 12:48:53 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <805084.88597.qm@web36504.mail.mud.yahoo.com> References: <805084.88597.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 18, 2009, at 7:21 PM, Gordon Swobe wrote: > programs manipulate symbols but they have no way to know what the symbols mean A mechanical punch card reader from half a century ago knows that particular hole is a symbol that means "put this card in the third column from the left". How do we know this, because the machine put the card in the third column from the left. > Any computer running those possible algorithms would have no understanding. If understanding has no effect on behavior then who needs it, what's the point of it? I don't understand the meaning of understanding, or perhaps it's the meaning of meaning that confuses me. > I simply don't know (nor do you or Searle) what role the neurons in the visual cortex play in conscious awareness. Both of you are in luck because I do know. People who suffer damage in that part of the cortex are not aware of their visual environment. That's why it's called visual cortex. > computer simulations of things do not equal the things they simulate Does a computer do real arithmetic or simulated arithmetic? > I think my consciousness might very well fade away in proportion to the number of neurons you replaced that had relevance to it. You are talking about replacing atoms with atoms the scientific method cannot distinguish between that nevertheless produces a huge change. In short you are talking about a soul. John K Clark > > > This could happen even as I continued to behave in a manner consistent with intelligence. > > I notice first that we need to ask ourselves the same question, (as many here no doubt already have): how do we know for certain that the human brain does not do anything more than run some fantastically complex program on some fantastically complex architecture? 
> > I think that if my brain runs any programs then it must do something else too. I understand the symbols that my mind processes and having studied Searle's arguments carefully I simply do not see how a mere program can do the same, no matter how complex. > > -gts > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From sparge at gmail.com Sat Dec 19 18:16:38 2009 From: sparge at gmail.com (Dave Sill) Date: Sat, 19 Dec 2009 13:16:38 -0500 Subject: [ExI] atheism In-Reply-To: <475024.13856.qm@web59907.mail.ac4.yahoo.com> References: <475024.13856.qm@web59907.mail.ac4.yahoo.com> Message-ID: 2009/12/18 Post Futurist > > No, you wont convince me, I wont convince you, because neither of us have anything to back > our positions up with, so your paragraph above is a nonstarter. We don't have numbers or a controlled experiment, we do have plenty of history. > My posts on this topic also have no traction except to write that you can't prove religion is > more futile than all other effluvia. I don't consider religion merely futile, I think it's actively harmful to humankind. I guess that pretty succinctly highlights where we differ. -Dave
From gts_2000 at yahoo.com Sat Dec 19 18:28:58 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 10:28:58 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <076111E9-C56F-41D4-8934-429D36276218@bellsouth.net> Message-ID: <274291.79812.qm@web36501.mail.mud.yahoo.com> --- On Sat, 12/19/09, John Clark wrote: > You are entirely wrong. We most certainly > can deduce consciousness from considering behavior > alone because there is a HUGE amount of evidence in support > of that idea Does an amoeba have it? How about a virus? >, there is a HUGE amount of > evidence that evolution is true. Yes, and so? This question has no bearing on the evolution vs creationism debate, a debate in which I certainly take your side as does Searle and everyone else whose intellect I admire. > There is not the smallest particle of hard evidence in support of > Searle but there is an enormous amount in favor of evolution. Searle > doesn't dispute this, he just ignores Darwin So you say. You state above that you can deduce consciousness from behavior. You must think then that consciousness plays a role in human behavior (and reject epiphenomenalism as does Searle), else you could not deduce its existence from behavior. (I think you mean infer, not deduce, but anyway..) You agree with Searle that consciousness exists and that it plays a role in human behavior. So what's your beef with him regarding evolution? -gts
From eugen at leitl.org Sat Dec 19 19:35:12 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Dec 2009 20:35:12 +0100 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> <20091219122446.GG17686@leitl.org> <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> Message-ID: <20091219193512.GK17686@leitl.org> On Sat, Dec 19, 2009 at 11:21:34AM -0500, Mike Dougherty wrote: > Again, DNA is available and being used right now (albeit primitively). Again, I am aware, and I kept plugging using biology for bootstrap in machine-phase circles in early 1990s. Everybody thought it wasn't needed, and we'd have self rep by now. Well, we don't.
In fact, less than 10 people are still working on machine-phase, and it's mostly computational work. We still can't solve the inverse protein folding problem worth spit, and even DNA for scaffolding is in its infancy. > Machine-phase nano-magic is still only being talked about... Unless y'all > have been withholding links to interesting science? An interesting paper in regards to machine-phase is hitting mainstream at least once a year. Unfortunately most interesting things are hidden behind a paywall. Feel free to subscribe and to contribute to http://postbiota.org/pipermail/nano/ > > The distinction is artificial in practice, and if we're, say, using > > graphene/diamond spintronics as a optimal substrate it isn't for > > some magic reasons. It's just carbon is special in that it makes nicely > > stable chains and cages which no other element can as well. From > > that property stem other remarkable thermal and electronic properties. > > Of course it won't be pure carbon, whether it's nitrogen vacances, > > SiC islets or transition metal groups in active machine-phase > > or synthetic enzyme centers. > > > > Oh.. well then.. I guess... uh... I'll just take a nap until all they get it > all worked out. :) I am describing science as it develops. None of the above is invented. If you want to see it all worked out to full conclusion, your nap will have to be of the cryogenic variety, I'm afraid. Unless we pull a quick AI runaway things will continue to develop at human time scale, which is too slow for comfort. Assuming I'm at all alive 40-50 years from now I'm unlikely to give a damn. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From gts_2000 at yahoo.com Sat Dec 19 19:41:26 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 11:41:26 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <3929.64427.qm@web36503.mail.mud.yahoo.com> --- On Sat, 12/19/09, John Clark wrote: > A mechanical punch card reader from half a century ago knows that > particular hole is a symbol that means "put this card in the third > column from the left". How do we know this, because the > machine?put the card in the third column from the left. In other words you want to think that if something causes X to do Y then we can assume X actually knows how to do Y. That idea entails panpsychism; the theory that everything has mind. As I mentioned to someone here a week or so ago, panpsychism does refute my position here that only brains have minds, and it does so coherently. But most people find panpsychism implausible if not outrageous. -gts From brent.allsop at canonizer.com Sat Dec 19 20:16:42 2009 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 19 Dec 2009 13:16:42 -0700 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> Message-ID: <4B2D34AA.2010002@canonizer.com> Hi Natasha, Yes! Thanks for asking this. I am sooo sick of 'cyberspace'. 
Virtual simulated 'realities' are not consciously real, and not really worth much, until they are represented by our brain in our conscious awareness. Everyone seems to be talking about mechanical vs chemical vs biological... but still all anyone is talking about with all this is just cause and effect behavioral properties of such.

When light reflects off of the surface of a strawberry (or anything, whether it is mechanical, chemical, biological...), that light is behaving in a way that can be mapped back to the causal behavior of the surface of the strawberry. In other words, the light is an abstracted representation of this causal property of the strawberry. Though the light is an abstract representation, it is not fundamentally, and especially not phenomenally, anything like the surface of that strawberry or whatever was the original cause of the perception process. Cause and effect detection and observation (cyberspace is still limited to this kind of communication) is blind to any properties except causal properties of matter and abstract representations of such. This ever further abstracting cause and effect chain of perception includes the light entering our eyes, the detection of such by the retina, and the processing of such by our optic nerve and pre-cortical neural structures. The final result of this perception and brain processing is our conscious knowledge of the strawberry in the cortex of our brain.

The 'causal red' on the surface of the strawberry is very different from the 'phenomenal red' which is what our conscious knowledge of such is made of. 'Causal red' is a causal property of the surface of the strawberry and is the initial cause of the perception process, and phenomenal red is a categorically different ineffable property of something in our brain. Phenomenal red is the final result of the perception process. Though 'phenomenal red' is surely a property of something in our brain that we already know causally and chemically everything about, its phenomenal nature is blind to our cause and effect observation. Though the reflected light is detecting the causal properties of the surface of the strawberry, it is completely blind to any phenomenal properties such may or may not have. This fact is commonly referred to as the 'veil of perception', and why we refer to such properties as ineffable.

If you have some virtual reality or cyberspace abstract simulation of a strawberry abstractly representing only the cause and effect behavioral properties, it will forever be lacking this phenomenal red, until a brain like ours perceives it as such in a unified conscious, phenomenal world of knowledge. Surely, whatever it is in our brain that has these invisible phenomenal properties that we are consciously aware of, that our brain uses to represent our conscious knowledge with, has a lot to do with chemistry. All we know about chemistry today is what is causally detectable. But surely there is much more to it than just these causes and effects.

You also brilliantly asked about 'communication', and that is another critical part that ignorant people always ignore when they think about virtual realities, cyberspace, and so on. If the theory described in the consciousness is representational and real camp (see: _http://canonizer.com/topic.asp/88/6_) turns out to be THE ONE, we will soon be able to communicate or 'eff' these ineffable properties.
This theory predicts and describes how the conscious worlds of awareness in our brains will be able to be merged and shared and how effing will work. When I hug someone, today, I only experience half of what is phenomenally happening, and I am blind to the rest of the phenomenal knowledge. In the future, I'll be able to merge my world of conscious awarenss, with the person I am hugging, and both of us will be able to comunicate, share, eff and experience 100% of the phenomenal representations, not just half. Cyberspace, virtual reality, and everything is, and will forever be, nothing of much interest, without that. I don't want to be uploaded into some phenomenally blind 'cyberspace', I look forward to when my phenomenal 'spirit' (unlike the most of the rest of my phenomenal knowledge, does not have a referent in reality) is able to peirce this phenomenal veil of perception, and is finally able to escape from this mortal prison wall that is my skull. I look forward to breaking out into an immortal shared phenomenal world where we will finally know not only much more about nature than causal properties, not only will we finally have disproved solipsism, solved the problem of other minds, and so on and so fourth, but we will finally also be able to share what everyone else is phenomenally like and experiencing. Fuck cyberspace, and all the primitive idiots still completely blind to anything more, I want effing phenomenal worlds. Giulio Prisco (2nd email) wrote: > The mainstream is certainly more open to the concept of > post-biological life than it was, say, 20 years ago, and this is a > good outcome in which our combined efforts played a part. > > I see the _possibility_of post-biological life as compatible with the > current scientific paradigm, so I am confident (not certain, but > confident) it will be achieved someday. Perhaps not as soon as some > predict, but someday. And I think it is not only doable but also good. > > However, we are going to remain stuck with biology for many decades at > least, probably some centuries, and of course we should try making the > best of it. > > G. > > On Sat, Dec 19, 2009 at 5:34 PM, Natasha Vita-More wrote: > >> This is what I have thought as well, for 20 years, but I am thinking that it >> is has become just a bit dogmatic. This could be because it has now gone so >> mainstream, even folks at TED are discussing it and now there is a >> university to pomote a watered-down version of it. BUT, that does not >> change my view that it is wise to avoid sticking so firmly to an absolute >> and to always question our premises and consider alternatives as >> transdiciplinary ideas and new insights. >> >> **The chemistry of communication has been crucial for human evolution. I >> simply wonder what its future will be. >> >> Best, >> Natasha >> >> >> Nlogo1.tif Natasha Vita-More >> >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio Prisco >> (2nd email) >> Sent: Saturday, December 19, 2009 2:04 AM >> To: ExI chat list >> Subject: Re: [ExI] Sick of Cyberspace? >> >> In the long term I see humans merging with AI subsystems and becoming purely >> computational beings with movable identities based on some or some other >> kind of physical hardware. I don't think there is any other viable long term >> choice, not if we want to leave all limits behind and increase our options >> without bonds. >> >> But this will take long. 
In the meantime there are many other stepping >> stones to go through, based on improving our biology and gradually merging >> it with our technology. >> >> On Fri, Dec 18, 2009 at 12:19 PM, Stefano Vaj wrote: >> >>> 2009/12/17 : >>> >>>> Are we totally locked into cybernetics for evolution? I thought this >>>> next era was to be about chemistry rather than machines. >>>> >>> I come myself from "wet transhumanism" (bio/cogno), and while I got in >>> touch with the movement exactly out of curiosity to learn more about >>> the "hard", "cyber/cyborg" side of things, I am persuased the next era >>> is still about chemistry, and, that when it will stops being there >>> will be little difference between the two. >>> >>> In other words, if we are becoming machines, machines are becoming >>> "chemical" and "organic" at an even faster pace (carbon rather than >>> steel and silicon, biochips, nano...). >>> >>> -- >>> Stefano Vaj >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >>> >> >> -- >> Giulio Prisco >> http://cosmeng.org/index.php/Giulio_Prisco >> aka Eschatoon Magic >> http://cosmeng.org/index.php/Eschatoon >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > > From lcorbin at rawbw.com Sat Dec 19 20:47:30 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 19 Dec 2009 12:47:30 -0800 Subject: [ExI] Scientists Behaving Badly In-Reply-To: References: <20091213175616.61346.qmail@moulton.com> <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> <4B2841A0.2060509@satx.rr.com> Message-ID: <4B2D3BE2.5030808@rawbw.com> BillK wrote: > > Cartoon: > 'What if it's a big hoax and we create a better world for nothing'? > > That's quite a nice list of benefits from fighting global warming: 1 energy independence 2 preserve rainforests 3 sustainability 4 green jobs 5 liviable cities (they're not quite livable, today, it seems) 6 renewables 7 clean water, air 8 healthy children, etc., etc. Yes, what if it's a big hoax and we create a better world FOR NOTHING. "For nothing"?? Well, not exactly for nothing. One thing really ought to be added to that list: 9 economic collapse Lee From msd001 at gmail.com Sat Dec 19 20:51:33 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 19 Dec 2009 15:51:33 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <20091219193512.GK17686@leitl.org> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> <20091219122446.GG17686@leitl.org> <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> <20091219193512.GK17686@leitl.org> Message-ID: <62c14240912191251h40bbb66du38eaf9bf095fd35f@mail.gmail.com> On Sat, Dec 19, 2009 at 2:35 PM, Eugen Leitl wrote: > > Oh.. well then.. I guess... uh... I'll just take a nap until all they get > it > > all worked out. :) > > I am describing science as it develops. None of the above is invented. > If you want to see it all worked out to full conclusion, your nap will > have to be of the cryogenic variety, I'm afraid. 
Unless we pull a quick > AI runaway things will continue to develop at human time scale, which is > too slow for comfort. Assuming I'm at all alive 40-50 years from now > I'm unlikely to give a damn. > > haha, yeah that's the nap I was talking about. If there is there a H+ equivalent of the "dirt nap" I guess it'd be a "tank nap." I'd use the term "dewar" but that always makes me think of the scotch, and that's an entirely different kind of nap. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Dec 19 21:03:30 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 19 Dec 2009 15:03:30 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <62c14240912191251h40bbb66du38eaf9bf095fd35f@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> <20091219122446.GG17686@leitl.org> <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> <20091219193512.GK17686@leitl.org> <62c14240912191251h40bbb66du38eaf9bf095fd35f@mail.gmail.com> Message-ID: <4B2D3FA2.6040904@satx.rr.com> On 12/19/2009 2:51 PM, Mike Dougherty wrote: > I'd use the term "dewar" but that always makes me think of the scotch, > and that's an entirely different kind of nap. :) Isn't that a nip? Or am I thinking of sake? From pharos at gmail.com Sat Dec 19 21:08:50 2009 From: pharos at gmail.com (BillK) Date: Sat, 19 Dec 2009 21:08:50 +0000 Subject: [ExI] Scientists Behaving Badly In-Reply-To: <4B2D3BE2.5030808@rawbw.com> References: <20091213175616.61346.qmail@moulton.com> <4902d9990912131035t594f06a4k2c3dad93584f5b20@mail.gmail.com> <4B2841A0.2060509@satx.rr.com> <4B2D3BE2.5030808@rawbw.com> Message-ID: On 12/19/09, Lee Corbin wrote: > That's quite a nice list of benefits from fighting global warming: > 1 energy independence > 2 preserve rainforests > 3 sustainability > 4 green jobs > 5 liviable cities (they're not quite livable, today, it seems) > 6 renewables > 7 clean water, air > 8 healthy children, etc., etc. > > Yes, what if it's a big hoax and we create a better > world FOR NOTHING. "For nothing"?? > > Well, not exactly for nothing. One thing really ought > to be added to that list: > > 9 economic collapse > > We've got that coming anyway. Just wait.......... BillK From msd001 at gmail.com Sat Dec 19 21:24:08 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 19 Dec 2009 16:24:08 -0500 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <4B2D3FA2.6040904@satx.rr.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com> <20091219122446.GG17686@leitl.org> <62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com> <20091219193512.GK17686@leitl.org> <62c14240912191251h40bbb66du38eaf9bf095fd35f@mail.gmail.com> <4B2D3FA2.6040904@satx.rr.com> Message-ID: <62c14240912191324y2580f0acn1daff352ce48c1f6@mail.gmail.com> On Sat, Dec 19, 2009 at 4:03 PM, Damien Broderick wrote: > On 12/19/2009 2:51 PM, Mike Dougherty wrote: > > I'd use the term "dewar" but that always makes me think of the scotch, >> and that's an entirely different kind of nap. :) >> > > Isn't that a nip? > > Or am I thinking of sake? > > No, that's just a Freudian 'slip' -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hkeithhenson at gmail.com Sat Dec 19 22:46:10 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Dec 2009 14:46:10 -0800 Subject: [ExI] Sandia and energy Message-ID: On Sat, Dec 19, 2009 at 12:17 PM, BillK wrote: > On 12/19/09, Keith Henson wrote: >> ?Bill, the problem is not the high school chemistry, but the energy >> ?source to make the hydrogen. >> ?The US uses about 20 million bbls of oil a day. ?To make that much >> ?synthetic fuel would take a dedicated 2 TW. >> >> ?Nothing wrong with "pop sci" but you need to apply better tests than >> ?the reporter or for that matter, the people who funded this work. > > Hmmm. Do you think the Sandia scientists would be pleased to know that > they are just doing 'high school chemistry'? It would surprise me if they thought otherwise. > Sandia press release here: > > > They don't intend to replace *all* the oil production with this > device. Their idea is that in say, 15 years, every coal power station > could have one of these devices attached, taking all the CO2 thrown > out by the power station and using sunlight to produce fuel. Bill, for basic science reasons that's nonsense. It would take more energy than a power station is making to convert the CO2 it puts out into liquid fuel. So a coal plant would have to be surrounded by square miles of solar collectors. It's only when you have solved and oversolved the energy source problem that making liquid fuels from CO2 makes sense. The energy source is what's important, not the well understood chemistry of making liquid fuels. Keith From bbenzai at yahoo.com Sat Dec 19 22:22:43 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 19 Dec 2009 14:22:43 -0800 (PST) Subject: [ExI] (no subject) In-Reply-To: Message-ID: <533145.41148.qm@web113604.mail.gq1.yahoo.com> Damien Broderick suggested: > On 12/18/2009 10:22 PM, Mike Dougherty wrote: > >? ???waste CO2 to carbon monoxide, > and then syngas, consuming nothing but > >? ???solar energy. > > > "Sunshine to Petrol" is pretty good, but Americans > generally don't use > > the term 'petrol.' > > > > I was pretty close with my suggestion "Liquid > Sunshine..."? :) > > Sungas? Sun Gas? (Close to BillK's excellent Stargas, but > how many > people know the sun is a star?) And has the sounds-like > link to syngas. Sungas/Stargas says Hydrogen, to me. But then, I'm British, and have a scientific education, so it would. SolarFuel? SunFuel? Ben Zaiboc From natasha at natasha.cc Sat Dec 19 22:50:22 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Dec 2009 16:50:22 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <62c14240912191324y2580f0acn1daff352ce48c1f6@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc><580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com><62c14240912182054m564638d8na28f60d30d59f094@mail.gmail.com><20091219122446.GG17686@leitl.org><62c14240912190821q61a5fe70m1805b0efacffefd2@mail.gmail.com><20091219193512.GK17686@leitl.org><62c14240912191251h40bbb66du38eaf9bf095fd35f@mail.gmail.com><4B2D3FA2.6040904@satx.rr.com> <62c14240912191324y2580f0acn1daff352ce48c1f6@mail.gmail.com> Message-ID: <44CDB427D47A44458FB71C2E6DB21469@DFC68LF1> Or a transhumanist blip. Nlogo1.tif Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty Sent: Saturday, December 19, 2009 3:24 PM To: ExI chat list Subject: Re: [ExI] Sick of Cyberspace? 
On Sat, Dec 19, 2009 at 4:03 PM, Damien Broderick wrote: On 12/19/2009 2:51 PM, Mike Dougherty wrote: I'd use the term "dewar" but that always makes me think of the scotch, and that's an entirely different kind of nap. :) Isn't that a nip? Or am I thinking of sake? No, that's just a Freudian 'slip' -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From bbenzai at yahoo.com Sat Dec 19 22:27:09 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 19 Dec 2009 14:27:09 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <824356.86196.qm@web113618.mail.gq1.yahoo.com> > From: Gordon Swobe prophesied: > I think matter matters. Then you're doomed. Doooomed! Protons decay, you know. Ben Zaiboc From natasha at natasha.cc Sat Dec 19 23:01:50 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Dec 2009 17:01:50 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc><580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com><1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> Message-ID: <2A3889EE1A754B3EBC0AC7B0063F8402@DFC68LF1> This thread was meant to start from a transhumanist point of view. In that regard, and because it is a given that we are aware of evolving onto non-biological platforms, "Sick of Cyberspace?" was more of elbow in the ribs of our cyborg history in my desiring conversation about ways in which chemistry plays a valuable part in our presumed transformative evolution. Slurp. Nlogo1.tif Natasha Vita-More From pharos at gmail.com Sat Dec 19 23:43:38 2009 From: pharos at gmail.com (BillK) Date: Sat, 19 Dec 2009 23:43:38 +0000 Subject: [ExI] Sandia and energy In-Reply-To: References: Message-ID: On 12/19/09, Keith Henson wrote: > Bill, for basic science reasons that's nonsense. > > It would take more energy than a power station is making to convert > the CO2 it puts out into liquid fuel. So a coal plant would have to > be surrounded by square miles of solar collectors. > > It's only when you have solved and oversolved the energy source > problem that making liquid fuels from CO2 makes sense. The energy > source is what's important, not the well understood chemistry of > making liquid fuels. > > You didn't read their press release, did you? They are not using miles of solar collectors. They intend to use parabolic dishes to make a solar furnace to get the thermal energy required. But this is only a proof of concept plant at present. That's why they estimate 10-15 years to improve efficiency and for the price of oil to increase and greater need to extract CO2 from the atmosphere. BillK From stathisp at gmail.com Sat Dec 19 23:47:13 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 20 Dec 2009 10:47:13 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <644488.92139.qm@web36508.mail.mud.yahoo.com> References: <644488.92139.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/20 Gordon Swobe : >> It's important that you consider first the case of >> *partial* replacement, eg. all of your visual cortex but the rest of >> the brain left intact. 
> > I based all my replies, each of which you disagreed with, on a complete replacement because the partial just seems too speculative to me. (The complete replacement is extremely speculative as it is!) It's a thought experiment, so you can do anything as long as no physical laws are broken. > I simply don't know (nor do you or Searle) what role the neurons in the visual cortex play in conscious awareness. Do they only play a functional role as I think you suppose, as mere conduits of visual information to consciousness, or do they also play a role in the conscious experience? I don't know and I don't think anyone does. If you don't believe in a soul then you believe that at least some of the neurons in your brain are actually involved in producing the visual experience. It is these neurons I propose replacing with artificial ones that interact normally with their neighbours but lack the putative extra ingredient for consciousness. The aim of the exercise is to show that this extra ingredient cannot exist, since otherwise it would lead to one of two absurd situations: (a) you would be blind but you would not notice you were blind; or (b) you would notice you were blind but you would lose control of your body, which would smile and say everything was fine. Here is a list of the possible outcomes of this thought experiment: (a) as above; (b) as above; (c) you would have normal visual experiences (implying there is no special ingredient for consciousness); (d) there is something about the behaviour of neurons which is not computable, which means even weak AI is impossible and this thought experiment is impossible. I'm pretty sure that is an exhaustive list, and one of (a) - (d) has to be the case. I favour (c). I think (a) is absurd, since if nothing else, having an experience means you are aware of having the experience. I think (b) is very unlikely because it would imply that you are doing your thinking with an immaterial soul, since all your neurons would be constrained to behave normally. I think (d) is possible, but unlikely, and Searle agrees. There is nothing so far in physics that has been proved to be uncomputable, and no reason to think that it should be hiding inside neurons. -- Stathis Papaioannou From bbenzai at yahoo.com Sun Dec 20 00:04:28 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 19 Dec 2009 16:04:28 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <63782.10327.qm@web113603.mail.gq1.yahoo.com> gordon.swobe at yahoo.com declared: > > I think that if my brain runs any programs then it > must do something else too. Assuming that by 'running programs' you mean processing information, I'm not with you here. Can't think of anything else it could possibly do (apart from generate heat, perhaps). > I understand the symbols that my mind processes Impressive. I can't claim the same. I *use* the symbols, but I don't really *understand* them. > and having studied Searle's arguments carefully I simply do > not see how a mere program can do the same, no matter how complex. I reckon that's because you are making a careful examination of the wrong thing. Study neuroscience and programming instead of Searle's arguments, and you'll see that a 'mere' program can indeed do the same. In fact, when it comes to symbol processing, a computer program can potentially do this *much* better (and faster, of course) than any natural brain. 
Ben Zaiboc From bbenzai at yahoo.com Sat Dec 19 23:40:20 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 19 Dec 2009 15:40:20 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <245343.29477.qm@web113617.mail.gq1.yahoo.com> > From: Gordon Swobe Persisted: > --- On Sat, 12/19/09, Ben Zaiboc > wrote: > > > You haven't commented on the other part of my post, > where I > > say: > > We need first to get this business about simulations > straight... > > It seems you don't understand even that a video of your > father qualifies as a simulation of your father. > > >> If you took a video of your father tying his > shoelaces > >> and watched that video, you would watch a > simulation. > > > > No, I'd be watching an audio-visual recording.? That > > doesn't contain enough information to call it a > > simulation.? > > Sorry, it's a simulation of your father. Would you please try not to say things like "it seems you don't understand that X", and "Sorry, but it's X", when giving your point of view? It comes across as arrogant, and I'm sure you don't mean to be. I do understand that you think that a video of my father qualifies as a simulation of him. I just disagree with this point of view. But let's not use that word. I'm talking about "things-that-reproduce-functional-properties-of-other-things", as distinct from recordings of incidental properties, such as reflectance, colour, etc., and/or abstract representations of these recordings. Can you agree that these are two different things? A piece of paper with the words "Dad tying his shoelaces" is an abstract representation. It might be a label for a video. Neither of those things are going to recreate my dad's shoelace-tying behaviour though. A model (to avoid the "S" word) that exactly reproduces all the functional properties of the relevant shoelace-tying behaviour, however, is a different thing. I made such a model in my mind when I was a child, and was successful in reproducing this behaviour. Still am, in fact, and it works very well. I'm satisfied that it's not fake or Zombie shoelace-tying, it's the Real Deal. Now, could you please reply to my other questions?: 1) Do you agree or disagree that Meaning (semantics) is an internally-generated phenomenon in a sufficently complex, and suitably organised, information processing system, with sensory inputs, motor outputs and memory storage? 2) Suppose someone built a brain by taking one cell at a time, and was somehow able to attach them together in exactly the same configuration, with exactly the same synaptic strengths, same myelination, same tight junctions, etc., etc., cell for cell, as an existing biological brain, would the result be a conscious individual, the same as the natural one? (assuming it was put in a suitable body, all connected up properly, etc.). Thanks, Ben Zaiboc From gts_2000 at yahoo.com Sun Dec 20 00:19:10 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 16:19:10 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <63782.10327.qm@web113603.mail.gq1.yahoo.com> Message-ID: <79763.25825.qm@web36505.mail.mud.yahoo.com> --- On Sat, 12/19/09, Ben Zaiboc wrote: >> I understand the symbols that my mind processes > > Impressive.? I can't claim the same.? I *use* the > symbols, but I don't really *understand* them. No argument here! 
-gts From gts_2000 at yahoo.com Sun Dec 20 00:57:02 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 19 Dec 2009 16:57:02 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <245343.29477.qm@web113617.mail.gq1.yahoo.com> Message-ID: <573079.867.qm@web36503.mail.mud.yahoo.com> --- On Sat, 12/19/09, Ben Zaiboc wrote: > Now, could you please reply to my other questions?: > > 1) Do you agree or disagree that Meaning (semantics) is an > internally-generated phenomenon in a sufficently complex, > and suitably organised, information processing system, with > sensory inputs, motor outputs and memory storage? Not if it depends on formal programs to generate the semantics. I've already explained how I once created a computer simulation of a brain by creating an object in my code to represent one. The brain object generated seemingly meaningful answers to real human voice commands, but the semantics came entirely from the human. The simulation had no idea what it meant except in the mind of the human player, and then only if the human took a voluntary vacation from reality. My primitive simulation took only a couple of hundred lines of code, but I have no reason to think it would have worked differently with a couple of hundred billion lines of code. > 2) Suppose someone built a brain by taking one cell at a > time, and was somehow able to attach them together in > exactly the same configuration, with exactly the same > synaptic strengths, same myelination, same tight junctions, > etc., etc., cell for cell, as an existing biological brain, > would the result be a conscious individual, the same as the > natural one? (assuming it was put in a suitable body, all > connected up properly, etc.). If you transplanted those neurons very carefully from one brain to create the other then probably so. If you manufactured them and if programs drive them I don't think so. See my dialogue with Stathis. -gts From stathisp at gmail.com Sun Dec 20 05:37:54 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 20 Dec 2009 16:37:54 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <351662.66076.qm@web36505.mail.mud.yahoo.com> References: <351662.66076.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/20 Gordon Swobe : > --- On Sat, 12/19/09, Stathis Papaioannou wrote: > >> The CR lacks understanding because the man in the room, who can be >> seen as implementing a program, lacks understanding; > > Yes. > >> whereas a different system which produces similar >> behaviour but with dumb components the interactions of which can't be >> recognised as algorithmic has understanding. > > What different system? > > If you mean the natural brain, (the only different system known to have understanding), then it doesn't matter whether we can recognize its processes as algorithmic. Any computer running those possible algorithms would have no understanding. > > More generally, computer simulations of things do not equal the things they simulate. But it seems that you and Searle are saying that the CR lacks understanding *because* the man lacks understanding of Chinese, whereas the brain, with completely dumb components, has understanding. So you are penalising the CR because it has smart components and because what it does has an algorithmic pattern. 
By this reasoning, if neurons had their own separate rudimentary intelligence and if someone could see a pattern in the brain's functioning to which the term "algorithmic" could be applied, then the brain would lack understanding also. -- Stathis Papaioannou From stathisp at gmail.com Sun Dec 20 05:45:44 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 20 Dec 2009 16:45:44 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <3929.64427.qm@web36503.mail.mud.yahoo.com> References: <3929.64427.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/20 Gordon Swobe : > --- On Sat, 12/19/09, John Clark wrote: > >> A mechanical punch card reader from half a century ago knows that >> particular hole is a symbol that means "put this card in the third >> column from the left". How do we know this, because the >> machine?put the card in the third column from the left. > > In other words you want to think that if something causes X to do Y then we can assume X actually knows how to do Y. > > That idea entails panpsychism; the theory that everything has mind. As I mentioned to someone here a week or so ago, panpsychism does refute my position here that only brains have minds, and it does so coherently. But most people find panpsychism implausible if not outrageous. Not everything has a mind, just information-processing things. Mind is not a binary quality: even in biology there is a gradation between bacteria and humans. The richer and more complex the information processing, the richer and more complex the mind. -- Stathis Papaioannou From jonkc at bellsouth.net Sun Dec 20 06:02:03 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 20 Dec 2009 01:02:03 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <274291.79812.qm@web36501.mail.mud.yahoo.com> References: <274291.79812.qm@web36501.mail.mud.yahoo.com> Message-ID: <8D24EF87-B67C-469D-8C89-8C07596125E7@bellsouth.net> On Dec 19, 2009, at 1:28 PM, Gordon Swobe wrote: > You state above that you can deduce consciousness from behavior. You must think then that consciousness plays a role in human behavior > Consciousness is a high level description as is pressure, but it is not the only valid description. It is true that pressure made the balloon expand but it is also true that air molecules hitting the inside of the balloon made it expand and molecules know nothing about pressure. It is true that I scratched my nose because I wanted too but its also true that it happened because an electrochemical signal was sent from my brain to the nerves in my hand. > and reject epiphenomenalism as does Searle No I do not, to say that the mind can move the body is an entirely reasonable way to describe whats going on but not the only way. > else you could not deduce its existence from behavior. I can deduce it but you and Searle cannot. > I think you mean infer, not deduce, but anyway.. > No, its a deduction not a inference. I deduce that evolution is true, I deduce that evolution is blind to consciousness and can only see behavior, I know from direct experience (which outranks both deduction and induction) that evolution has produced consciousness at least once and probably many more times, therefore I conclude the consciousness MUST be a byproduct of intelligent behavior and you can't have intelligence without consciousness. > You agree with Searle that consciousness exists Of course it does. 
Some people claim not to believe in consciousness but I don't believe them and think they are just trying to be provocative. Everybody believes in consciousness. > and that it plays a role in human behavior. So what's your beef with him regarding evolution? Searle says intelligent behavior is possible without consciousness, Darwin says it is not. I'm putting my money on Darwin. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From avantguardian2020 at yahoo.com Sun Dec 20 06:39:10 2009 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 19 Dec 2009 22:39:10 -0800 (PST) Subject: [ExI] Effing and Privacy Message-ID: <962714.35379.qm@web65602.mail.ac4.yahoo.com> > Cyberspace, virtual reality, and everything is, and will forever be, nothing of > much interest, without that.? I don't want to be uploaded into some phenomenally > blind 'cyberspace', I look forward to when my phenomenal 'spirit' (unlike the > most of the rest of my phenomenal knowledge, does not have a referent in > reality) is able to peirce this phenomenal veil of perception, and is finally > able to escape from this mortal prison wall that is my skull. The same skull also keeps others out. But they are figuring out ways around that. http://arstechnica.com/old/content/2008/12/mindreading-101-identifying-images-by-watching-the-brain.ars http://www.navysbirprogram.com/NavySearch/Summary/summary.aspx?pk=F5B07D68-1B19-4235-B140-950CE2E19D08 > I look forward to breaking out into an immortal shared phenomenal world where we > will finally know not only much more about nature than causal properties, not > only will we finally have disproved solipsism, solved the problem of other > minds, and so on and so fourth, but we will finally also be able to share what > everyone else is phenomenally like and experiencing. > > Fuck cyberspace, and all the primitive idiots still completely blind to anything > more, I want effing phenomenal worlds. You could just as easily get the?thought police,?psi-rape, memory theft,?and?mind control?before you get anywhere near an "immortal?shared phenomological world" Stuart LaForge ? "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From eugen at leitl.org Sun Dec 20 09:12:32 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 20 Dec 2009 10:12:32 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <351662.66076.qm@web36505.mail.mud.yahoo.com> Message-ID: <20091220091232.GP17686@leitl.org> On Sun, Dec 20, 2009 at 04:37:54PM +1100, Stathis Papaioannou wrote: > > What different system? > > > > If you mean the natural brain, (the only different system known to have understanding), then it doesn't matter whether we can recognize its processes as algorithmic. Any computer running those possible algorithms would have no understanding. > > > > More generally, computer simulations of things do not equal the things they simulate. Still having fun? Still think you're having an argument? In reality you don't. After much back and forth everybody's positions will be exactly where they've been before. So save the wear on your fingertips and on our retinas. > But it seems that you and Searle are saying that the CR lacks > understanding *because* the man lacks understanding of Chinese, > whereas the brain, with completely dumb components, has understanding. 
> So you are penalising the CR because it has smart components and > because what it does has an algorithmic pattern. By this reasoning, if > neurons had their own separate rudimentary intelligence and if someone > could see a pattern in the brain's functioning to which the term > "algorithmic" could be applied, then the brain would lack > understanding also. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Sun Dec 20 09:17:09 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 20 Dec 2009 10:17:09 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <245343.29477.qm@web113617.mail.gq1.yahoo.com> References: <245343.29477.qm@web113617.mail.gq1.yahoo.com> Message-ID: <20091220091709.GQ17686@leitl.org> On Sat, Dec 19, 2009 at 03:40:20PM -0800, Ben Zaiboc wrote: > Would you please try not to say things like "it seems you don't understand that X", and "Sorry, but it's X", when giving your point of view? It comes across as arrogant, and I'm sure you don't mean to be. Yes, Ben, he is so arrogant as he's clueless. I think you can stop giving him the benefit of doubt. He's a prime illustration for utility of moderation on mailing lists. Technically he might be not a troll, but he's indistinguishable from one. And one thing you don't do with trolls is feeding them. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Sun Dec 20 10:08:27 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 20 Dec 2009 11:08:27 +0100 Subject: [ExI] Sandia and energy In-Reply-To: References: Message-ID: <20091220100827.GS17686@leitl.org> On Sat, Dec 19, 2009 at 02:46:10PM -0800, Keith Henson wrote: > It's only when you have solved and oversolved the energy source > problem that making liquid fuels from CO2 makes sense. The energy > source is what's important, not the well understood chemistry of > making liquid fuels. Energy is a given, but you need good scrubbers, mild conditions, preferrably electrochemistry or photochemistry to drive the water+carbon dioxide reaction, catalysts and such. And of course if you make, say methanol, you need stable, cheap catalysts for the inverse reaction. There are certainly encouraging noises being heard in that direction, but it's all pure research for now. The problem with practical processes and plants out there doing them is plenty of time and money. We don't really have the time anymore, and we pissed away the money as well. Nevermind the skills, good chemistry people are extremely scarce now as the discipline has gone out of fashion. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From bbenzai at yahoo.com Sun Dec 20 16:16:51 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 20 Dec 2009 08:16:51 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <841886.34930.qm@web113607.mail.gq1.yahoo.com> Eugen Leitl sagely advised: > > On Sat, Dec 19, 2009 at 03:40:20PM -0800, Ben Zaiboc > wrote: > > > Would you please try not to say things like "it seems > you don't understand that X", and "Sorry, but it's X", when > giving your point of view? It comes across as > arrogant, and I'm sure you don't mean to be. > > Yes, Ben, he is so arrogant as he's clueless. I think > you can > stop giving him the benefit of doubt. He's a prime > illustration > for utility of moderation on mailing lists. > > Technically he might be not a troll, but he's > indistinguishable > from one. And one thing you don't do with trolls is feeding > them. I defer to Eugen. When someone ignores simple logic, and resorts to inventing their own terminology in order to desperately hang on to their argument, it's probably time to say "Yeah, if you like" and go on to more productive things. You will hear no more from me on this, and I'll do my best to just ignore the words "Chinese Room" or "John Searle" in future. Ben Zaiboc From natasha at natasha.cc Sun Dec 20 16:54:24 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 20 Dec 2009 10:54:24 -0600 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <4B2D34AA.2010002@canonizer.com> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com> <1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com> <4B2D34AA.2010002@canonizer.com> Message-ID: Hi Brent, Stuart responded under a different subject line, but I think he speaks to something of import: http://www.independent.co.uk/news/science/scientists-able-to-read-peoples-minds-1643968.html http://www.guardian.co.uk/science/2007/feb/09/neuroscience.ethicsofscience http://www.cnn.com/2009/TECH/09/25/brain.scans.wired/index.html On another point, I have not read much of Merleau-Ponty, but I am familiar with his works. Perception is the key to aesthetics; without it there would be no aesthetics - and what would a world be like without the induction of physiological senses for conceptualizing aesthetics? Last night, under the glow of our ambient library, Max and I discussed human senses in comparison to our dog's ability to distinguish between molecules (he becomes what seems like forever lost in his own sniffing world), and his immeasurable auditory capabilities in comparison to our cat's somewhat mysterious sense capabilities. Considering the limit of human senses, and therefore perceptions, we then talked about our escapades into the mountains to watch the sky, and missing that, and the unknown factors of the universe's gravitational push and pull of the building blocks of life. Nevertheless, after cyberneticistic manifestations of uploading and the subsequent diversity of personal existences, perceptual expansions hinge on new methods for perceiving the universe, and on what these might be in light of space being comprised of complex carbon and amino acids. 
Natasha Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop Sent: Saturday, December 19, 2009 2:17 PM To: ExI chat list Subject: Re: [ExI] Sick of Cyberspace? Hi Natasha, Yes! Thanks for asking this. I am sooo sick of 'cyberspace'. [snip - Brent's message, and the Prisco/Vita-More/Vaj exchange it quotes, appear in full earlier in this archive] _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Sun Dec 20 16:42:54 2009 From: spike66 at att.net (spike) Date: Sun, 20 Dec 2009 08:42:54 -0800 Subject: [ExI] ...is coming to town... In-Reply-To: References: <3929.64427.qm@web36503.mail.mud.yahoo.com> Message-ID: Greetings friends, Apologies I dropped the ball on this, being as I have been in Oregon since the 10th dealing with a family crisis (dear old uncle, cancer, it's bad) and I might need to run back up there as soon as the last gift bow goes in the recycle bag. Our own Dr. Amara Graps will be in town (San Jose area) around Newtonmass for a few days. If the locals wish to have a gathering at a restaurant or visit at a home, someone do step up and volunteer to organize it. I have grandparents visiting and may not make the scene this time, but I envy those who do have a chance to visit with this delightful young lady and her baby. I suggest moving this discussion over to the ExI-Bay list. I used to get ExI-Bay on my office computer, back when I had an office. For privacy reasons, please do not put Amara's name in the subject line (unless she does it herself or says it is OK) nor the exact days of the visit in the subject line please. If you pull together a schmooze, do take some pictures too, and one of you web gurus (of which our gang is so richly blessed) post them for those of us unfortunates who have other obligations, thanks! 
spike From jonkc at bellsouth.net Sun Dec 20 16:47:19 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 20 Dec 2009 11:47:19 -0500 Subject: Re: [ExI] The symbol grounding problem in strong AI In-Reply-To: <3929.64427.qm@web36503.mail.mud.yahoo.com> References: <3929.64427.qm@web36503.mail.mud.yahoo.com> Message-ID: On Dec 19, 2009, Gordon Swobe wrote: > In other words you want to think that if something causes X to do Y then we can assume X actually knows how to do Y. To a limited extent yes, of course the more impressive Y is the more powerful a mind we can expect behind it, and putting a punch card in a column isn't very impressive. Still, it is a specific task carried out by reading and understanding the meaning of a symbol. You think there is a sharp divide between mind and no mind; I believe that, like most things in life, there is no sharpness to be found, there is only a blob. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sun Dec 20 18:46:49 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 20 Dec 2009 10:46:49 -0800 Subject: [ExI] Sandia and energy Message-ID: On Sun, Dec 20, 2009 at 4:00 AM, BillK wrote: > On 12/19/09, Keith Henson wrote: >> Bill, for basic science reasons that's nonsense. >> >> It would take more energy than a power station is making to convert >> the CO2 it puts out into liquid fuel. So a coal plant would have to >> be surrounded by square miles of solar collectors. >> >> It's only when you have solved and oversolved the energy source >> problem that making liquid fuels from CO2 makes sense. The energy >> source is what's important, not the well understood chemistry of >> making liquid fuels. >> > > You didn't read their press release, did you? I did, and further I understood it, unlike the funding agency. > They are not using miles of solar collectors. They intend to use > parabolic dishes to make a solar furnace to get the thermal energy > required. Bill, parabolic mirror concentrators will make a small area very hot, but they do not increase the collected energy. They are not magic. If you need a GW of power to drive the chemical reaction, then (at 100 W/m^2 time average) you need 10E9 W / 10E2 W/m^2 = 10,000,000 m^2, or ten square km. > But this is only a proof of concept plant at present. No kidding. > That's why they estimate 10-15 years to improve efficiency and for the > price of oil to increase and greater need to extract CO2 from the > atmosphere. They are *not* going to increase the efficiency beyond the theoretical limit for the energy it takes to break chemical bonds. Or rather, if they do, we will be living in a magical world where the laws of thermodynamics have been repealed. And it is not hard to pull CO2 out of the atmosphere, http://people.ucalgary.ca/~keith/AirCapture.html 100 kWh/ton will do it. That's a small part (2%) of the energy you need to make the CO2 into hydrocarbons. > From: Eugen Leitl > > Energy is a given, but you need good scrubbers, mild conditions, > preferrably electrochemistry or photochemistry to drive the > water+carbon dioxide reaction, catalysts and such. And of course > if you make, say methanol, you need stable, cheap catalysts for > the inverse reaction. There are certainly encouraging noises > being heard in that direction, but it's all pure research for now. Hardly. Sasol has a 34,000 bbl/day plant in Qatar and there are a number of other such plants around the world. The plant in Qatar cost a billion dollars. It is being fed with partially oxidized natural gas, but would run just the same on hydrogen and CO2 from the air. 
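For anyone who wants to check the arithmetic in this thread, here it is as a small Python sketch. The per-barrel energy content, the 70% and 60% conversion efficiencies, and the 43 MJ/kg fuel value are round numbers assumed for the estimate, not figures from Sandia or from the press release:

# Back-of-envelope checks for the synfuel figures in this thread.
# All constants below are assumed round numbers, not Sandia data.

J_PER_KWH = 3.6e6            # joules in one kilowatt-hour
J_PER_BBL = 6.1e9            # assumed energy content of a barrel of oil, joules
SECONDS_PER_DAY = 86400.0

# 1) US oil use expressed as continuous power (the "dedicated 2 TW" figure).
us_bbl_per_day = 20e6
fuel_power_w = us_bbl_per_day * J_PER_BBL / SECONDS_PER_DAY
print(f"fuel energy flow: {fuel_power_w / 1e12:.1f} TW")                  # ~1.4 TW
print(f"primary input at 70% synthesis efficiency: "
      f"{fuel_power_w / 0.7 / 1e12:.1f} TW")                              # ~2 TW

# 2) Collector area for 1 GW of process power at a time-averaged
#    collection of 100 W per square metre.
area_m2 = 1e9 / 100.0
print(f"collector area: {area_m2:.0e} m^2 = {area_m2 / 1e6:.0f} km^2")    # 10 km^2

# 3) Air capture (100 kWh per tonne of CO2) versus the energy needed to turn
#    that tonne of CO2 back into hydrocarbon fuel (CH2 units at ~43 MJ/kg).
capture_j_per_t = 100 * J_PER_KWH                  # 0.36 GJ per tonne CO2
mol_c_per_t_co2 = 1e6 / 44.0                       # grams of CO2 per tonne / 44 g/mol
fuel_j_per_t_co2 = mol_c_per_t_co2 * 0.014 * 43e6  # 14 g of CH2 per carbon atom
synthesis_input_j = fuel_j_per_t_co2 / 0.6         # assume ~60% synthesis efficiency
print(f"capture energy as share of synthesis input: "
      f"{100 * capture_j_per_t / synthesis_input_j:.1f}%")                # ~1.6%

It comes out to roughly 1.4 TW of fuel energy flow (call it 2 TW of primary input), ten square km of collector per GW of process power, and air capture costing on the order of a couple of percent of the synthesis energy, which is consistent with the numbers above.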
> The problem with practical processes and plants out there doing > them is plenty of time and money. We don't really have the time > anymore, and we pissed away the money as well. Nevermind the skills, > good chemistry people are extremely scarce now as the discipline > has gone out of fashion. The particular chemistry needed has been around since the Germans were making liquid fuels out of coal in WWI. http://en.wikipedia.org/wiki/Fischer%E2%80%93Tropsch_process Keith From gts_2000 at yahoo.com Sun Dec 20 18:49:22 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 10:49:22 -0800 (PST) Subject: Re: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <329614.21019.qm@web36502.mail.mud.yahoo.com> --- On Sun, 12/20/09, Stathis Papaioannou wrote: >> That idea entails panpsychism; the theory that >> everything has mind. As I mentioned to someone here a week >> or so ago, panpsychism does refute my position here that >> only brains have minds, and it does so coherently. But most >> people find panpsychism implausible if not outrageous. > > Not everything has a mind, just information-processing > things. Mind is not a binary quality: even in biology there > is a gradation between bacteria and humans. The richer and more > complex the information processing, the richer and more complex the mind. One can argue that everything "processes information" at some level. Again, it entails panpsychism, in which even the lowly punch card somehow has a mind capable of "understanding". -gts From gts_2000 at yahoo.com Sun Dec 20 18:54:13 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 10:54:13 -0800 (PST) Subject: Re: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <414735.36422.qm@web36505.mail.mud.yahoo.com> Stathis, Searle responds to the brain simulation reply to his CRA: III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?" Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. 
The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a mono lingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties. -gts From gts_2000 at yahoo.com Sun Dec 20 19:12:19 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 11:12:19 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <8D24EF87-B67C-469D-8C89-8C07596125E7@bellsouth.net> Message-ID: <579032.89182.qm@web36503.mail.mud.yahoo.com> --- On Sun, 12/20/09, John Clark wrote: >> So what's your beef with him regarding evolution?? > Searle says?intelligent behavior is possible without > consciousness, Darwin says it is not. I'm putting my > money on Darwin. In other words, you would rather adopt the position that anything capable of intelligent behavior has a mind capable of holding thoughts than adopt Searle's much more conventional position that only advanced organisms with brains can do so. Panpsychism is as fine a religion as any I suppose. PS. My watch says to say hello. 
:)

-gts

From gts_2000 at yahoo.com  Sun Dec 20 19:29:01 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 20 Dec 2009 11:29:01 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: 
Message-ID: <89454.51497.qm@web36508.mail.mud.yahoo.com>

--- On Sun, 12/20/09, Stathis Papaioannou wrote:

> But it seems that you and Searle are saying that the CR
> lacks understanding *because* the man lacks understanding of
> Chinese, whereas the brain, with completely dumb components, has
> understanding.

The brain has understanding, yes, but Searle makes no claim about the dumbness or lack thereof of its components. You added that to his argument.

He starts with the self-evident axiom that brains have understanding and then asks if Software/Hardware systems can ever have it too. He concludes they cannot based on his logical argument, which I've posted here several times.

> So you are penalising the CR because it has smart
> components and because what it does has an algorithmic pattern.

He penalizes the CR only because it runs a formal program, and nobody has shown how programs can have minds capable of understanding the symbols they manipulate. In other words, nobody has shown his formal argument false. If somebody has seen it proved false then point me to it.

I see people here like Eugen who scoff but who offer no evidence that Searle's logic fails. Is it just an article of religious faith on ExI that programs have minds? And if it is, and if we cannot explain how it happens, then should we adopt the mystical philosophy that everything has mind merely to protect the notion that programs do or will?

> By this reasoning, if neurons had their own separate rudimentary
> intelligence and if someone could see a pattern in the brain's
> functioning to which the term "algorithmic" could be applied, then
> the brain would lack understanding also.

No, Searle argues that even if we can describe brain processes algorithmically, those algorithms running on a S/H system would not result in understanding; that it's not enough merely to simulate a brain in software running on a computer.

S/H systems are not hardware *enough*.

-gts

From pharos at gmail.com  Sun Dec 20 19:52:03 2009
From: pharos at gmail.com (BillK)
Date: Sun, 20 Dec 2009 19:52:03 +0000
Subject: [ExI] Sandia and energy
In-Reply-To: 
References: 
Message-ID: 

On 12/20/09, Keith Henson wrote:
> Bill, parabolic mirror concentrators will make a small area very hot,
> but they do not increase the collected energy.  They are not magic.
> If you need a GW of power to drive the chemical reaction, then (at 100
> W/m^2 time average) you need 10E9 W / 10E2 W/m^2 = 10,000,000 m^2, or ten
> square km.
>
> They are *not* going to increase the efficiency beyond the theoretical
> limit for the energy it takes to break chemical bonds.  Or rather, if
> they do, we will be living in a magical world where the laws of
> thermodynamics have been repealed.
>
> And it is not hard to pull CO2 out of the atmosphere,
> http://people.ucalgary.ca/~keith/AirCapture.html  100 kWh/ton will do
> it.  That's a small part (2%) of the energy you need to make the CO2
> into hydrocarbons.
>

Well, it looks as though either you or Sandia have misunderstood something about this project. Sandia have a reputation for technical excellence and employing some of the best scientists in the country. This project has been going for years and will soon be in test production status. Darpa has also contributed some funding.

Let's watch for results.

BillK
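[A quick back-of-the-envelope check of the figures Keith quotes above. This is only a restatement of his numbers in Python, not an independent estimate; in particular the ~5,000 kWh/ton line is simply what his "small part (2%)" remark implies.]

# Collector area needed for 1 GW of process power at the time-averaged flux used above.
reaction_power_w = 1e9            # 1 GW assumed for the chemical plant
avg_flux_w_m2 = 100               # time-averaged solar flux (W/m^2), Keith's figure
area_m2 = reaction_power_w / avg_flux_w_m2
print(area_m2, "m^2 =", area_m2 / 1e6, "km^2")   # 1e7 m^2 = 10 km^2

# CO2 capture energy vs. total synthesis energy, per the figures cited above.
capture_kwh_per_ton = 100         # air-capture cost he cites
capture_share = 0.02              # his "small part (2%)" claim
synthesis_kwh_per_ton = capture_kwh_per_ton / capture_share
print(synthesis_kwh_per_ton, "kWh/ton implied for CO2-to-hydrocarbon conversion")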
From brent.allsop at canonizer.com  Sun Dec 20 20:43:43 2009
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Sun, 20 Dec 2009 13:43:43 -0700
Subject: [ExI] Effing and Privacy
In-Reply-To: <962714.35379.qm@web65602.mail.ac4.yahoo.com>
References: <962714.35379.qm@web65602.mail.ac4.yahoo.com>
Message-ID: <4B2E8C7F.3020706@canonizer.com>

Hi Stuart,

Absolutely. We've already experienced something very similar to all this when computers first started communicating on the internet. We very quickly, after most all of our computers got terribly infected, had to come up with security measures to have control over things. Surely similar problems and solutions will develop once we start developing abilities to merge phenomenal spaces, getting ever better at reading others' brains, and so on. The experience with computers the first time, and cautions being provided by people like you, will surely help us do a much better job the second, much more important time around.

I'll hug most everyone, but surely most would be more selective as to who they would share and eff what they're feeling with during such hugs, and so on.

But what does everyone imagine the ultimate end situation or ultimate goal as being? Would it be everyone merged into the same phenomenal awareness space with everyone experiencing, effing, and knowing everything? Or how much would we have to isolate, hide, or how much of our sharing would have to be cut out and destroyed, before things were perfect?

I'm in the "perfection would be when we don't destroy any communications or don't hide much of anything from anyone" camp.

Brent Allsop


The Avantguardian wrote:
>> Cyberspace, virtual reality, and everything is, and will forever be, nothing of
>> much interest, without that. I don't want to be uploaded into some phenomenally
>> blind 'cyberspace', I look forward to when my phenomenal 'spirit' (unlike the
>> most of the rest of my phenomenal knowledge, does not have a referent in
>> reality) is able to peirce this phenomenal veil of perception, and is finally
>> able to escape from this mortal prison wall that is my skull.
>>
>
> The same skull also keeps others out. But they are figuring out ways around that.
>
> http://arstechnica.com/old/content/2008/12/mindreading-101-identifying-images-by-watching-the-brain.ars
>
> http://www.navysbirprogram.com/NavySearch/Summary/summary.aspx?pk=F5B07D68-1B19-4235-B140-950CE2E19D08
>
>
>> I look forward to breaking out into an immortal shared phenomenal world where we
>> will finally know not only much more about nature than causal properties, not
>> only will we finally have disproved solipsism, solved the problem of other
>> minds, and so on and so fourth, but we will finally also be able to share what
>> everyone else is phenomenally like and experiencing.
>>
>> Fuck cyberspace, and all the primitive idiots still completely blind to anything
>> more, I want effing phenomenal worlds.
>>
>
> You could just as easily get the thought police, psi-rape, memory theft, and mind control before you get anywhere near an "immortal shared phenomological world"
>
> Stuart LaForge
>
>
>
> "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten."
- Neil Armstrong
>
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

From brent.allsop at canonizer.com  Sun Dec 20 20:58:25 2009
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Sun, 20 Dec 2009 13:58:25 -0700
Subject: [ExI] Sick of Cyberspace?
In-Reply-To: 
References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc>
	<580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com>
	<1fa8c3b90912190004k27e38da7ma69724be2171f827@mail.gmail.com>
	<1fa8c3b90912190947j24f3813al55a4350acda84a@mail.gmail.com>
	<4B2D34AA.2010002@canonizer.com>
Message-ID: <4B2E8FF1.2040508@canonizer.com>

Hi Natasha,

Thanks for the great, very true comments.

I certainly look forward to when we can eff aesthetics. Sometimes, people are obviously enjoying a piece of artwork that is just putrid to me. And you had a great example of how we think some of the smells dogs obviously enjoy are similarly putrid. I so look forward to finally sharing what all it is they are experiencing, upon which I'll probably say something like "No wonder that is so enjoyable to you, I had no idea."

Even beyond that, I look forward to finally being able to freely choose and artistically design what is and is not aesthetic to me, and how I feel or what phenomenal emotions I represent things with. Only when we can choose what we want, will we be truly free. As great as most of it is, I'm getting tired of being just so hard wired by my creator and so limited to having much of what is beauty for me, hard wired to the female form. I want to be truly free, and be able to rewire these puppet strings given to me by evolution, and be able to enjoy what I want to enjoy, the way I want to enjoy it, when I want to enjoy it, and be able to do all such infinitely phenomenally more.

Brent Allsop


Natasha Vita-More wrote:
> Hi Brent,
>
> Stuart responded under a different subject line, but I think he speaks to
> something of import:
> http://www.independent.co.uk/news/science/scientists-able-to-read-peoples-minds-1643968.html
> http://www.guardian.co.uk/science/2007/feb/09/neuroscience.ethicsofscience
> http://www.cnn.com/2009/TECH/09/25/brain.scans.wired/index.html
>
> On another point, I have not read much of Merleau-Ponty, but I am familiar
> with his works. Perception is the key to aesthetics, without it there would
> be no aesthetics - and what would a world be like with out the induction of
> physiological senses for conceptualizing aesthetics.
>
> Last night, under the glow of our ambient library, Max and I discussed human
> senses in comparison to our dog's ability to distinguish between molecules
> (and becomes what seems like forever lost in his own sniffing world), and
> his immeasurable auditory capabilities in comparison to our cat's somewhat
> mysterious sense capabilities. Considering the limit of human senses, and
> therefore perceptions, we then talked about our escapades into the mountains
> to watch the sky, and missing that, and the unknown factors of the
> universe's gravitational push and pull of the building blocks of life.
> Nevertheless, after cyberneticistic manifestations of uploading and the
> subsequent diversity of personal existences, perceptual expansions hedge on
> new methods for perceiving the universe, what these might be in light of
> space being comprised of complex carbon and amino acids.
> > Natasha > > > > > > > Nlogo1.tif Natasha Vita-More > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop > Sent: Saturday, December 19, 2009 2:17 PM > To: ExI chat list > Subject: Re: [ExI] Sick of Cyberspace? > > > Hi Natasha, > > Yes! Thanks for asking this. I am sooo sick of 'cyberspace'. Virtual > simulated 'realities' are not consciously real, and not really worth much, > until they are represented by our brain in our conscious awareness. > Everyone seems to be talking about mechanical vs chemichal vs bioligical... > but still all everyone is talking about with all this is just cause and > effect behavioral properties of such. > > When light reflects off of the surface of a strawberry (or anything, whether > it is mechanical, chemical, biological...), that light is behaving in a way > that can be mapped back to the causal behavior of the surface of the > strawberry. In other words, the light is an abstracted representation of > this causal property of the strawberry. > > Though the light is an abstract representation, it is not fundamentally, and > especially not phenomenally anything like the surface of that strawberry or > whatever was the original cause of the perception process. Cause and effect > detection and observation (cyberspace is still limited to this kind of > communication) is blind to any properties except causal properties of matter > and abstract representations of such. > > This ever further abstracting cause and effect chain of perception includes > the light entering our eyes, the detection of such by the retina, and the > processing of such by our optic nerve and pre cortical neural structures. > The final result of this perception and brain processing is our conscious > knowledge of the strawberry in the cortex of our brain. The 'causal red' on > the surface of the strawberry is very different from the 'phenomenal red' > which is what our conscious knowledge of such is made of. 'Causal red' is a > causal property of the surface of the strawberry and is the initial cause of > the perception process, and phenomenal red is a categorically different > ineffable property of something in our brain. Phenomenal red is the final > result of the perception process. Though 'phenomenal red' is surely a > property of something in our brain that we already know causally and > chemically everything about, its phenomenal nature is blind to our cause and > effect observation. Though the reflected light is detecting the causal > properties of the surface of the strawberry, it is completely blind to any > phenomenal properties such may or may not have. This fact is comonly > refereed to as the 'veil of perception', and why we refer to such properties > as ineffable. > > If you have some virtual reality or cyberspace abstract simulation of a > strawberry abstractly representing only the cause and effect behavioral > properties, it will forever be lacking this phenomenal red, until a brain > like ours perceives it as such in a unified conscious, phenomenal world of > knowledge. > > Surely, whatever it is in our brain that has these invisible phenomenal > properties that we are consciousnley aware of, that our brain uses to > represent our conscious knowledge with, has a lot to do with chemistry. > All we know about chemistry today, is what is causally detectable. But > surely there is much more to just these causes and effects. 
> > You also brilliantly asked about 'communication', and that is another > critical part that ignorant people always ignore when they think about > virtual realities, cyberspace, and so on. If the theory described in the > consciousness is representational and real camp (see: > _http://canonizer.com/topic.asp/88/6_) turns out to be THE ONE, we will soon > be able to communicate or 'eff' these ineffable properties. This theory > predicts and describes how the conscious worlds of awareness in our brains > will be able to be merged and shared and how effing will work. > > When I hug someone, today, I only experience half of what is phenomenally > happening, and I am blind to the rest of the phenomenal knowledge. In the > future, I'll be able to merge my world of conscious awarenss, with the > person I am hugging, and both of us will be able to comunicate, share, eff > and experience 100% of the phenomenal representations, not just half. > > Cyberspace, virtual reality, and everything is, and will forever be, > nothing of much interest, without that. I don't want to be uploaded > into some phenomenally blind 'cyberspace', I look forward to when my > phenomenal 'spirit' (unlike the most of the rest of my phenomenal knowledge, > does not have a referent in reality) is able to peirce this phenomenal veil > of perception, and is finally able to escape from this mortal prison wall > that is my skull. > > I look forward to breaking out into an immortal shared phenomenal world > where we will finally know not only much more about nature than causal > properties, not only will we finally have disproved solipsism, solved the > problem of other minds, and so on and so fourth, but we will finally also be > able to share what everyone else is phenomenally like and experiencing. > > Fuck cyberspace, and all the primitive idiots still completely blind to > anything more, I want effing phenomenal worlds. > > > > Giulio Prisco (2nd email) wrote: > >> The mainstream is certainly more open to the concept of >> post-biological life than it was, say, 20 years ago, and this is a >> good outcome in which our combined efforts played a part. >> >> I see the _possibility_of post-biological life as compatible with the >> current scientific paradigm, so I am confident (not certain, but >> confident) it will be achieved someday. Perhaps not as soon as some >> predict, but someday. And I think it is not only doable but also good. >> >> However, we are going to remain stuck with biology for many decades at >> least, probably some centuries, and of course we should try making the >> best of it. >> >> G. >> >> On Sat, Dec 19, 2009 at 5:34 PM, Natasha Vita-More >> > wrote: > >> >> >>> This is what I have thought as well, for 20 years, but I am thinking that >>> > it > >>> is has become just a bit dogmatic. This could be because it has now gone >>> > so > >>> mainstream, even folks at TED are discussing it and now there is a >>> university to pomote a watered-down version of it. BUT, that does not >>> change my view that it is wise to avoid sticking so firmly to an absolute >>> and to always question our premises and consider alternatives as >>> transdiciplinary ideas and new insights. >>> >>> **The chemistry of communication has been crucial for human evolution. I >>> simply wonder what its future will be. 
>>> >>> Best, >>> Natasha >>> >>> >>> Nlogo1.tif Natasha Vita-More >>> >>> -----Original Message----- >>> From: extropy-chat-bounces at lists.extropy.org >>> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio >>> > Prisco > >>> (2nd email) >>> Sent: Saturday, December 19, 2009 2:04 AM >>> To: ExI chat list >>> Subject: Re: [ExI] Sick of Cyberspace? >>> >>> In the long term I see humans merging with AI subsystems and becoming >>> > purely > >>> computational beings with movable identities based on some or some other >>> kind of physical hardware. I don't think there is any other viable long >>> > term > >>> choice, not if we want to leave all limits behind and increase our >>> > options > >>> without bonds. >>> >>> But this will take long. In the meantime there are many other stepping >>> stones to go through, based on improving our biology and gradually >>> > merging > >>> it with our technology. >>> >>> On Fri, Dec 18, 2009 at 12:19 PM, Stefano Vaj >>> > wrote: > >>> >>> >>>> 2009/12/17 : >>>> >>>> >>>>> Are we totally locked into cybernetics for evolution? I thought this >>>>> next era was to be about chemistry rather than machines. >>>>> >>>>> >>>> I come myself from "wet transhumanism" (bio/cogno), and while I got in >>>> touch with the movement exactly out of curiosity to learn more about >>>> the "hard", "cyber/cyborg" side of things, I am persuased the next era >>>> is still about chemistry, and, that when it will stops being there >>>> will be little difference between the two. >>>> >>>> In other words, if we are becoming machines, machines are becoming >>>> "chemical" and "organic" at an even faster pace (carbon rather than >>>> steel and silicon, biochips, nano...). >>>> >>>> -- >>>> Stefano Vaj >>>> _______________________________________________ >>>> extropy-chat mailing list >>>> extropy-chat at lists.extropy.org >>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>>> >>>> >>>> >>> -- >>> Giulio Prisco >>> http://cosmeng.org/index.php/Giulio_Prisco >>> aka Eschatoon Magic >>> http://cosmeng.org/index.php/Eschatoon >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >>> _______________________________________________ >>> extropy-chat mailing list >>> extropy-chat at lists.extropy.org >>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >>> >>> >>> >> >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From aware at awareresearch.com Sun Dec 20 19:55:09 2009 From: aware at awareresearch.com (Aware) Date: Sun, 20 Dec 2009 11:55:09 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <89454.51497.qm@web36508.mail.mud.yahoo.com> References: <89454.51497.qm@web36508.mail.mud.yahoo.com> Message-ID: On Sun, Dec 20, 2009 at 11:29 AM, Gordon Swobe wrote: > I see people here like Eugen who scoff but who offer no evidence that > Searle's logic fails. Is it just an article of religious faith on ExI that > programs have minds? 
And if it is, and if we cannot explain how it > happens, then should we adopt the mystical philosophy that everything > has mind merely to protect the notion that programs do or will? It's not a problem with Searle's logic, but with his premises, and those of most who argue against him in defense of a functionalist account of consciousness which needs no defending. These perennial problems of qualia, consciousness and personal identity all revolve around an assumption of an *essential* self-awareness that, however seductive and deeply reinforced by personal observation, language and culture, is entirely lacking in empirical support. There is no essential consciousness to be explained, but there is the very real phenomenon of self-awareness, rife with gaps, distortions, delays and confabulation, displayed by many adapted organisms, conferring obvious evolutionary advantages in terms of the agent modeling its /self/ within its environment of interaction. Such silly philosophical questions are /unasked/ when one realizes that the system need not have an essential experiencer to report experiences. Okay, carry on... - Jef From gts_2000 at yahoo.com Sun Dec 20 21:06:14 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 13:06:14 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <841886.34930.qm@web113607.mail.gq1.yahoo.com> Message-ID: <188114.19446.qm@web36507.mail.mud.yahoo.com> > I'll do my best to just ignore the words "Chinese Room" or "John Searle" > in future. In other words, Ben, you think we should stick our heads in the sand and pretend that Searle never presented these same arguments that I present here. He made these arguments about 30 years ago. So far as I can tell, nobody has refuted them. And to Eugen: I'm no troll. Not even close. I joined this group approximately 10 years ago and have never unsubscribed. I participate from time to time when I think I have something of interest to discuss and when I think my fellow extropians might have some valuable input. I appreciate people like Stathis and Ben and Damien and Brent and others who, unlike you Eugen, have actually tried to offer something constructive. For whatever it's worth, on another list devoted to philosophy I am currently criticizing some of the finer points of Searle's philosophy. Nothing would please me more than to see someone here on Exi come up with a cogent and complete counterargument to Searle's argument against strong AI. -gts From eugen at leitl.org Sun Dec 20 21:09:16 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 20 Dec 2009 22:09:16 +0100 Subject: [ExI] Effing and Privacy In-Reply-To: <4B2E8C7F.3020706@canonizer.com> References: <962714.35379.qm@web65602.mail.ac4.yahoo.com> <4B2E8C7F.3020706@canonizer.com> Message-ID: <20091220210916.GE17686@leitl.org> On Sun, Dec 20, 2009 at 01:43:43PM -0700, Brent Allsop wrote: > Absolutely. We've already experienced something very similar to all > this when computers first started communicating on the internet. We > very quickly, after most all of our computers got terribly infected, had That is not a very good way to describe what happened. The first serious incident was the Morris worm. It happened because there was a transition from a small trusted network to a large untrusted one. We have immune systems now. The fitness function currently does not penalize parasite-infested hosts very much, though. > to come up with security measures to have control over things. 
Surely > similar problems and solutions will develop once we start developing > abilities to merge phenomenal spaces, getting ever better at reading > others brains, and so on. The experience with computers the first time, There will always remain distinct individuals. Different sizes and complexities, but distinct individuals. We're superorganisms as far as individual cells are concerned, but there are still single-cell organisms out there. The bulk of the biomass is not complex critters as us. There's no reason why a power law distribution won't still hold. The domain is not that different, if you look at fundamental constraints of this universe. > and cautions being provided by people like you, will surely help us do a > much better job the second, much more important time around. > > I'll hug most everyone, but surely most would be more selective as to > who they would share and eff what they're feeling during such hugs, and > so on. There's not a lot of choice. It's less what you want to do, it's more what you have to do in order to stay around. This is no different from now. > But what does everyone imagine the ultimate end situation or ultimate > goal as being? Would it be everyone merged into the same phenomenal Same as before. What is our ultimate goal right now? There isn't a an explicitly defined one. > awareness space with everyone experiencing, effing, and knowing > everything? Or how much would we half to isolated, hide, or how much of You cannot know anything larger than you can contain. Your invididual cells have no idea you even wrote that message. > our sharing would have to be cut out and destroyed, before things were > perfect? What does perfect even mean? > I'm in the pefection would be when we don't destroy any communications > or don't hide much of anything from anyone camp. Try to not be disappointed too much. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From suomichris at gmail.com Sun Dec 20 20:36:48 2009 From: suomichris at gmail.com (Christopher Doty) Date: Sun, 20 Dec 2009 12:36:48 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <89454.51497.qm@web36508.mail.mud.yahoo.com> References: <89454.51497.qm@web36508.mail.mud.yahoo.com> Message-ID: I just joined this list, and I'm kind of bummed that the first discussion I see is one about the dreaded Chinese Room.* Nonetheless, my two cents: The biggest issue I've seen in these emails seems to be the (implicit) assumption that language should be our one and only way of determining if a computer system is conscious/intelligent. Is a program that only produces, algorithmically, correct responses to language input conscious? I think not; it's a translation program. But it does not then follow that ANY computer which correctly outputs speech is also non-conscious/non-intelligent. To say, e.g., that a complete and accurate model of a human brain running on a computer would not be conscious, based on Searle's argument, is a non sequitur. Further, Searle's argument is pretty worthless, as it ignores the fact that human beings process speech algorithmically. Some words are easy to define and clearly have a meaning (dog, run, sit, etc.) but all languages have tons of words which native speakers wouldn't be able to define or accurately describe the use of (the, a, which, etc.). 
Nonetheless, native speakers know what they mean when they hear or use these words. Are we to say, based on Searle, that their lack of understanding of the uses of these words, coupled with the fact that they use them correctly, means that they don't actually speak the language? I hope not!

The real test of consciousness, I think, is not simply that correct outputs are given, but that outputs demonstrate that the inputs have been incorporated into a general model about the world. This would show that the system actually does understand language (as demonstrated by the fact that it correctly incorporates inputs into a model), and that it is capable of independent thought (by providing outputs which, while being based on the inputs, demonstrate a unique insight or perspective).

Chris

* Because, as a linguist, I despise thought experiments about language. Every one that I have ever seen takes some completely silly premise, runs it to its end, and then applies its conclusion back to actual language. They seem to miss the fact that, by starting with a completely arbitrary (and wrong) understanding of how language works, the conclusions they draw aren't actually about real language--they're about the silly idea of language that they made up. It's masturbation, basically: it's fun, but it doesn't tell you much about sex.

From steinberg.will at gmail.com  Sun Dec 20 21:23:48 2009
From: steinberg.will at gmail.com (Will Steinberg)
Date: Sun, 20 Dec 2009 16:23:48 -0500
Subject: [ExI] Symbol Grounding: Sets, Searle, and the Seat of the mind
Message-ID: <4e3a29500912201323w73cb2d2ak7fdcedbb7f049fe9@mail.gmail.com>

Thinking about all this symbol manipulation and Searle stuff.

Imagine there are two general and distinct sets of information in the human brain--nominal and sensory objects. Nominal objects are what we think of as true objects like an ice cream cone or a block of wood. Sensory objects are descriptive objects we use to describe nominal objects--milky, white, sweet.

When experiencing something for the first time, we only have sensory objects. Our sense organs provide information to the brain, and if this information does not mathematically translate into a sobject we already have, it uses something (be it an algorithm or an RNG) to create a string representing it. This also necessitates the creation of a new nobject, if this sobject is found to not be associated with any nobjects (which it is not, because the sense is a new one). Therefore we have established rules already: to physically identify and confirm the newness of a nobject, we first must be sure the sobject is new. If it is not, some prior nobject has produced it.

In this way the relatedness of nobjects can be approximated by sobjects in common. Milk is associated with sobjects {white, liquid, milkflavor, milksmell, usually cold} as well as many society-produced sobjects that are closer to abstract notions than senses. Ice cream, having sobjects {white, milkflavor, milksmell, sweet, solid that melts, always cold}, can be mapped to milk using a few strong ties, notably the powerfulness and uniqueness of the milk-something sobjects (showing that uniqueness of the sobject is in direct correlation to ease of identification) and the use of symbol-manipulating guidelines (i.e. a cold solid often will produce a liquid because it is a frozen form). We learn this early in life. In fact, we even must learn the guideline that sharing of sobjects can be a measure of relatedness.
These guidelines, since they are learned, constitute a new sort of processive object--adhering to our nomenclature, a probject.

Coming to things like Wernicke's and Broca's areas: a failure in Wernicke's area produces semantic failure, meaning the algorithm to correctly select symbols has been compromised. It would be interesting to note whether substituted symbols are produced by a semi-predictable pattern or whether they are a more general shuffling of symbols; this could offer a clue as to how the strings for each symbol are arranged. A failure in Broca's area means the algorithm to correctly arrange symbols has been compromised, perhaps leading to the conclusion that each object has a syntactical tail of information responsible for placement. It is almost like looking at portions of DNA responsible for sorting.

This is the reason why Searle's CRA fails. To approximate a human, the man needs more than to internalize the rules; he needs to have a base object manipulation method that can combine a changing bank of these three objects.

So a conversation with a human unfolds as such:

A: Hello. [Conversation starter a la methionine]
B: Hello. [Recognition of conversation]
A: How is your wife? [Analyzed--"How is" is a probject mapping nobject wife to sobjects happy, sick, pregnant]
B: She is well. ["She is" is the language produced in response to probject "How is" + sobject "feminine" associated with nobject "wife"; "well" is sobject currently associated with "wife"]
A: That is good. ["That is {sobject}" is part of a societally mediated probject used in order to communicate empathy. A series of probjects, given inputs "wife" and "well", will make person A produce a socially acceptable response.]
B: *Checks watch.* Oh! I've got to be going! [simple time check algorithm, checks against sobject "meeting at ten" associated with nobject "meeting", uses probjects to realize that sobject "my watch says 9:50" means person B must leave now in order to get to his meeting]
A: Alright. Goodbye. [societal probject maps sobject "heard: 'I've got to be going'", parses it into understandable sobjects, and produces a societal response.]
B: Goodbye. [Recognition again]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gts_2000 at yahoo.com  Sun Dec 20 22:15:57 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Sun, 20 Dec 2009 14:15:57 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: 
Message-ID: <68100.19270.qm@web36502.mail.mud.yahoo.com>

--- On Sun, 12/20/09, Aware wrote:

> There is no essential consciousness to be explained, but there is the
> very real phenomenon of self-awareness, rife with gaps, distortions,
> delays and confabulation, displayed by many adapted organisms,
> conferring obvious evolutionary advantages in terms of the agent
> modeling its /self/ within its environment of interaction.

More to the point, we have this phenomenon to which I referred in the title of the thread: symbol grounding.

Frankly for all I really care consciousness does not exist. But symbol grounding does seem to happen by some means. The notion of consciousness seems to help explain it but it doesn't matter. If we cannot duplicate symbol grounding in programs then it seems we can't have strong AI in S/H systems.
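[As a concrete reference point for what "manipulating symbols by their forms alone" means in this thread, here is a minimal Python sketch of a purely syntactic responder of the kind Searle's argument targets. Everything in it, the rule table and the phrases, is invented for illustration; nothing in it maps the tokens onto days, spouses, or anything else outside the table, so whatever meaning the exchange has is supplied by the humans who wrote and read it.]

# A deliberately ungrounded responder: it matches the *form* of the input
# against stored patterns and emits a stored reply. The token "wife" is just
# a byte string to it; there is no world model, no sensors, no clock.
RULES = {
    "hello.": "Hello.",
    "how is your wife?": "She is well.",
    "what day of the week is it?": "It is Tuesday.",   # fixed string, never looked up anywhere
}

def respond(utterance):
    key = utterance.strip().lower()
    # Pure form-matching: no parsing of meaning takes place here.
    return RULES.get(key, "I do not understand.")

print(respond("How is your wife?"))   # -> She is well.

[Whether a vastly richer rule-follower, say a whole-brain simulation, is in the same position as this toy is exactly what the thread is disputing; the sketch only pins down the uncontroversial end of the spectrum.]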
-gts From nebathenemi at yahoo.co.uk Sun Dec 20 22:17:23 2009 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 20 Dec 2009 22:17:23 +0000 (GMT) Subject: [ExI] Proton decay (was Re: The symbol grounding problem in strong AI) In-Reply-To: Message-ID: <192386.59121.qm@web27005.mail.ukl.yahoo.com> Ben wrote:"protons decay, you know." Do they? Experiments have not detected a proton decaying, establishing that there is a lower bound for proton decay. Some grand unification theories require proton decay, others don't. So, someday when a really good GUT wins the intellectual undisputed heavyweight of the world title fight and parades around its metaphorical prize belt like Rocky, or if one of the big proton decay experiments shows results, then we can be sure of proton decay. For now, I wouldn't be so sure. Tom (enjoying the mental image of a theory showing off its trophies) From aware at awareresearch.com Sun Dec 20 23:29:17 2009 From: aware at awareresearch.com (Aware) Date: Sun, 20 Dec 2009 15:29:17 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <68100.19270.qm@web36502.mail.mud.yahoo.com> References: <68100.19270.qm@web36502.mail.mud.yahoo.com> Message-ID: On Sun, Dec 20, 2009 at 2:15 PM, Gordon Swobe wrote: > --- On Sun, 12/20/09, Aware wrote: > >> There is no essential consciousness to be explained, but there is the >> very real phenomenon of self-awareness, rife with gaps, distortions, >> delays and confabulation, displayed by many adapted organisms, >> conferring obvious evolutionary advantages in terms of the agent >> modeling its /self/ within its environment of interaction. > > More to the point, we have this phenomenon to which I referred in the title of the thread: symbol grounding. > > Frankly for all I really care consciousness does not exist. But symbol grounding does seem to happen by some means. The notion of consciousness seems to help explain it but it doesn't matter. If we cannot duplicate symbol grounding in programs then it seems we can't have strong AI in S/H systems. "Symbol grounding" is a non-issue when you understand, as I tried to indicate earlier, that meaning (semantics) is not "in the mind" but in the *observed effect* due to a particular stimulus. There is no "true, grounded meaning" of the stimulus, nor is there any local need for interpretation or an interpreter. Our evolved nature is frugal; there is stimulus and the system's response, and any "meaning" is that reported by an observer, whether that observer is another person, or even the same person associated with that mind. We act according to our nature within context. Awareness of self, and meaning, are useful addons, which, as is to be expected of their function of discriminating self from other, will, if asked, always refer to that agent as their self. - Jef From mbb386 at main.nc.us Mon Dec 21 00:29:21 2009 From: mbb386 at main.nc.us (MB) Date: Sun, 20 Dec 2009 19:29:21 -0500 (EST) Subject: [ExI] Name for carbon project In-Reply-To: References: Message-ID: <32782.12.77.169.15.1261355361.squirrel@www.main.nc.us> Sunoco From stathisp at gmail.com Mon Dec 21 00:35:51 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 21 Dec 2009 11:35:51 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <329614.21019.qm@web36502.mail.mud.yahoo.com> References: <329614.21019.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/21 Gordon Swobe : >> Not everything has a mind, just information-processing >> things. 
Mind is not a binary quality: even in biology there >> is a gradation between bacteria and humans. The richer and more >> complex the information processing, the richer and more complex the mind. > > One can argue that everything "processes information" at some level. Again, it entails panpsychism, in which even the lowly punch card somehow has a mind capable of "understanding". Where would you draw the line in the animal kingdom? -- Stathis Papaioannou From gts_2000 at yahoo.com Mon Dec 21 00:44:00 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 16:44:00 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <636244.44626.qm@web36503.mail.mud.yahoo.com> --- On Sun, 12/20/09, Aware wrote: I think I've seen this sort of psychedelic word salad before, but under a less ambiguous moniker. Hello again jef. Long time. > "Symbol grounding" is a non-issue when you understand, as I > tried to indicate earlier, that meaning (semantics) is not "in the > mind" but in the *observed effect* due to a particular stimulus. I won't argue that it does not appear as an observed effect due to a stimulus, but if the word "mind" has any meaning then when I understand the meaning of anything else, I understand it there in my mind. > There is no "true, grounded meaning" of the stimulus That's fine. Meaning != truth. > nor is there any local need for interpretation or an interpreter. Somebody sits here in my chair. He wants to interpret the meanings of your funny words. He sits here locally. Really. > Our evolved nature is frugal; there is stimulus and the system's > response, and any "meaning" is that reported by an observer, whether > that observer is another person, or even the same person associated with > that mind. Good that you at least you allow the existence of minds. That's a start. Now then when that observer reports the meaning of a word to or in his own mind, whence comes the understanding of the meaning? More importantly, how do we get that in software? How can a program get semantics? > We act according to our nature within context. I've seen that "within context" qualifier many times before also from the from the same jef I remember. :) -gts From gts_2000 at yahoo.com Mon Dec 21 01:13:59 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 20 Dec 2009 17:13:59 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <629410.20105.qm@web36505.mail.mud.yahoo.com> --- On Sun, 12/20/09, Stathis Papaioannou wrote: > Where would you draw the line in the animal kingdom? I can only speculate, but to me it seems reasonable to assume that other primates and higher mammals and other critters with reasonably well developed brains have minds capable of at least primitive symbol grounding. Someday we'll understand the mystery of Wernicke's area and of other brain structures related to thought, meaning and consciousness. I think then we'll have an idea what to look for in other animals. -gts From aware at awareresearch.com Mon Dec 21 01:48:25 2009 From: aware at awareresearch.com (Aware) Date: Sun, 20 Dec 2009 17:48:25 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <636244.44626.qm@web36503.mail.mud.yahoo.com> References: <636244.44626.qm@web36503.mail.mud.yahoo.com> Message-ID: On Sun, Dec 20, 2009 at 4:44 PM, Gordon Swobe wrote: > --- On Sun, 12/20/09, Aware wrote: > > I think I've seen this sort of psychedelic word salad before, but under a less ambiguous moniker. 
Hello again jef. Long time. "Psychedelic word salad" could be taken as an insult, but I accept that my writing is abstruse, although far from haphazard. You might try parsing it section by section, and ask for clarification or expansion where necessary. > >> "Symbol grounding" is a non-issue when you understand, as I >> tried to indicate earlier, that meaning (semantics) is not "in the >> mind" but in the *observed effect* due to a particular stimulus. > > I won't argue that it does not appear as an observed effect due to a stimulus, but if the word "mind" has any meaning then when I understand the meaning of anything else, I understand it there in my mind. I scare-quoted "in the mind" for a reason, and followed up with clarification intended to distinguish the "mind" which acts, from the "mind" of the observer who reports meaning EVEN IF THEY ARE PARTS OF THE SAME BRAIN. It seems significant at a meta-level that you've already contested my use of the word "mind" isolated from its context. >> There is no "true, grounded meaning" of the stimulus > > That's fine. Meaning != truth. Non sequitur. I didn't equate the two, but used "true" as a a qualifying adjective. Again you argue with disregard for context. >> nor is there any local need for interpretation or an interpreter. > > Somebody sits here in my chair. He wants to interpret the meanings of your funny words. He sits here locally. Really. I used the word "local" to emphasize *local to the mind that acts* as opposed to the logically separate observer. >> Our evolved nature is frugal; there is stimulus and the system's >> response, and any "meaning" is that reported by an observer, whether >> that observer is another person, or even the same person associated with >> that mind. > > Good that you at least you allow the existence of minds. That's a start. Now then when that observer reports the meaning of a word to or in his own mind, whence comes the understanding of the meaning? That's the point, there is no meaning in the system except in terms of an observer (which observer could very well be a function of the same brain implementing that which is observed.) > More importantly, how do we get that in software? How can a program get semantics? You don't program-in semantics. You *observe* semantics in the perception of "meaningful" behavior. Meaningful behavior is achieved via methods adapting the agent system to its environment. >> We act according to our nature within context. > > I've seen that "within context" qualifier many times before also from the from the same jef I remember. :) Yes, I often emphasize it because it's a frequent and characteristic blind spot of many of my highly analytical, but reductionism-prone friends. - Jef From spike66 at att.net Mon Dec 21 04:38:27 2009 From: spike66 at att.net (spike) Date: Sun, 20 Dec 2009 20:38:27 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <89454.51497.qm@web36508.mail.mud.yahoo.com> Message-ID: <7FE7CF2DACF343C0B3C1E0F13EE1CE99@spike> > ...On Behalf Of Christopher Doty > Subject: Re: [ExI] The symbol grounding problem in strong AI > > I just joined this list, and I'm kind of bummed that the > first discussion I see is one about the dreaded Chinese > Room... Chris Welcome Chris. Do not be bummed. Introduce your favorite meme. We like bundles of new memes. {8-] Where are you from? Are you a professor? Do tell us about you. 
spike From stathisp at gmail.com Mon Dec 21 06:51:09 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 21 Dec 2009 17:51:09 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <89454.51497.qm@web36508.mail.mud.yahoo.com> References: <89454.51497.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/21 Gordon Swobe : > --- On Sun, 12/20/09, Stathis Papaioannou wrote: > >> But it seems that you and Searle are saying that the CR >> lacks understanding *because* the man lacks understanding of >> Chinese, whereas the brain, with completely dumb components, has >> understanding. > > The brain has understanding, yes, but Searle makes no claim about the dumbness or lack thereof of its components. ?You added that to his argument. > > He starts with the self-evident axiom that brains have understanding and then asks if Software/Hardware systems can ever have it too. He concludes they cannot based on his logical argument, which I've posted here several times. > >> So you are penalising the CR because it has smart >> components and because what it does has an algorithmic pattern. > > He penalizes the CR only because it runs a formal program, and nobody has shown how programs can have minds capable of understanding the symbols they manipulate. In other words, nobody has shown his formal argument false. If somebody has seen it proved false then point me to it. > > I see people here like Eugen who scoff but who offer no evidence that Searle's logic fails. Is it just an article of religious faith on ExI that programs have minds? And if it is, and if we cannot explain how it happens, then should we adopt the mystical philosophy that everything has mind merely to protect the notion that programs do or will? > >> By this reasoning, if neurons had their own separate rudimentary >> intelligence and if someone could see a pattern in the brain's >> functioning to which the term "algorithmic" could be applied, then >> the brain would lack understanding also. > > No, Searle argues that even if we can describe brain processes algorithmically, those algorithms running on a S/H system would not result in understanding; that it's not enough merely to simulate a brain in software running on a computer. > > S/H systems are not hardware *enough*. But a S/H system is a physical system, like a brain. You claim that the computer lacks something the brain has: that it is only syntactic, and syntax does not entail semantics. But even if it is true that syntax does not entail semantics, how can you be sure that the brain has the extra ingredient for semantics and the computer does not, and how does the CR argument show this? You've admitted that it isn't because the the parts of the CR have components with independent intelligence and you've admitted that it isn't because the operation of the CR has an algorithmic description and that of the brain does not. What other differences between brains computers are there which are illustrated by the CRA? (Don't say that the brain has understanding while the computer or CR does not: that is the thing in dispute). Although the CRA does not show that computers can't be conscious, it would still seem possible that there is some substrate-specific special ingredient which a computer behaving like a brain lacks, as a result of which the computer would be unconscious or at least differently conscious. But Chalmer's "fading qualia" argument constitutes a decisive refutation of such an idea. I cut and paste from my previous post. 
Searle favours alternative (b); you suggested that (a) would be the case, but then seemed to backtrack: If you don't believe in a soul then you believe that at least some of the neurons in your brain are actually involved in producing the visual experience. It is these neurons I propose replacing with artificial ones that interact normally with their neighbours but lack the putative extra ingredient for consciousness. The aim of the exercise is to show that this extra ingredient cannot exist, since otherwise it would lead to one of two absurd situations: (a) you would be blind but you would not notice you were blind; or (b) you would notice you were blind but you would lose control of your body, which would smile and say everything was fine. Here is a list of the possible outcomes of this thought experiment: (a) as above; (b) as above; (c) you would have normal visual experiences (implying there is no special ingredient for consciousness); (d) there is something about the behaviour of neurons which is not computable, which means even weak AI is impossible and this thought experiment is impossible. I'm pretty sure that is an exhaustive list, and one of (a) - (d) has to be the case. I favour (c). I think (a) is absurd, since if nothing else, having an experience means you are aware of having the experience. I think (a) is very unlikely because it would imply that you are doing your thinking with an immaterial soul, since all your neurons would be constrained to behave normally. I think (d) is possible, but unlikely, and Searle agrees. There is nothing so far in physics that has been proved to be uncomputable, and no reason to think that it should be hiding inside neurons. -- Stathis Papaioannou From stathisp at gmail.com Mon Dec 21 10:45:47 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 21 Dec 2009 21:45:47 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <89454.51497.qm@web36508.mail.mud.yahoo.com> Message-ID: A mistake in my previous post and the post I quoted from: original-- I favour (c). I think (a) is absurd, since if nothing else, having an experience means you are aware of having the experience. I think (a) is very unlikely because it would imply that you are doing your thinking with an immaterial soul, since all your neurons would be constrained to behave normally. should have been-- I favour (c). I think (a) is absurd, since if nothing else, having an experience means you are aware of having the experience. I think (b) is very unlikely because it would imply that you are doing your thinking with an immaterial soul, since all your neurons would be constrained to behave normally. -- It seems clear that you are convinced that the CRA is correct. The counterarguments we have presented seem equally obvious to the rest of us to be clear refutations of the CRA, but you simply restate the CRA and claim that it remains unrefuted. You also haven't really responded adequately to the "fading qualia" argument, which purports to prove that computers of a certain design not only can but *must* have minds. So, it seems that we are at an impasse. To us it seems that you're being stubborn, to you it probably seems that we're being stubborn. 
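[A toy way to see the premise doing the work in this argument: if each swapped-in part reproduces the original part's input/output mapping exactly, then by construction the system's outward behaviour cannot change, and the only remaining question is whether anything experiential could differ anyway. Python sketch; the "stages" are invented stand-ins, not a model of real neurons.]

def biological_stage(x):
    return (3 * x + 1) % 17          # stands in for one neuron's input/output mapping

def artificial_stage(x):
    return ((x + x + x) + 1) % 17    # different implementation, identical mapping

def pipeline(stage, inputs):
    # pass each input through the stage twice, standing in for a network of such parts
    return [stage(stage(x)) for x in inputs]

inputs = list(range(100))
assert pipeline(biological_stage, inputs) == pipeline(artificial_stage, inputs)
# Identical outputs, by construction: any putative difference would have to be one
# that never shows up in behaviour, which is what options (a) and (b) above require.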
-- Stathis Papaioannou From gts_2000 at yahoo.com Mon Dec 21 14:07:15 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 21 Dec 2009 06:07:15 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <267079.86892.qm@web36507.mail.mud.yahoo.com> --- On Mon, 12/21/09, Stathis Papaioannou wrote: > But a S/H system is a physical system, like a brain. You > claim that the computer lacks something the brain has: that it is > only syntactic, and syntax does not entail semantics. Right. > But even if it > is true that syntax does not entail semantics, how can you be sure that > the brain has the extra ingredient for semantics and the computer > does not, and how does the CR argument show this? You've admitted that > it isn't because the the parts of the CR have > components with independent intelligence and you've admitted that it > isn't because the operation of the CR has an algorithmic description > and that of the brain does not. What other differences between brains > computers are there which are illustrated by the CRA? (Don't say that > the brain has understanding while the computer or CR does not: that is > the thing in dispute). I can't heed the first part of your prohibition at the end. You know your brain has understanding as surely as you can understand the words in this sentence. If you understand anything whatsoever, you have semantics. And you can reasonably locate that capacity in your brain because when your brain loses consciousness, you no longer have it. The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese even if the syntactic rules of the program enable him to speak it fluently. The same thing happens to be true in English too, and even of natural brains that know English. It's not so easy to see, but you cannot understand English sentences merely from knowing their syntactic structure, or merely from following syntactic rules. Syntactic rules are form based, not semantics based. Programs manipulate symbols according to their forms. A program takes an input like for example "What day of week is it?" It looks at the *forms* of the words in the question to determine the operation it must perform to generate a proper output. It does not look at or know the *meanings* of the words. The meaning of the output comes from the human who reads it or hears t. If we want to say that the program has semantics then we must say it has what philosophers of the subject call "derived semantics", meaning that the program derives its semantics from the human operator. > Although the CRA does not show that computers can't be > conscious, It shows that even if computers *did* have consciousness, they still would have no understanding the meanings of the symbols contained in their programs. The conscious Englishman in the room represents a program operating on Chinese symbols. He cannot understand Chinese no matter how well he performs those operations. I'll try again to answer your partial brain replacement scenario again later. Sorry not putting you off... 
out of time -gts From stathisp at gmail.com Mon Dec 21 15:29:57 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 22 Dec 2009 02:29:57 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <267079.86892.qm@web36507.mail.mud.yahoo.com> References: <267079.86892.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/22 Gordon Swobe : > --- On Mon, 12/21/09, Stathis Papaioannou wrote: > >> But a S/H system is a physical system, like a brain. You >> claim that the computer lacks something the brain has: that it is >> only syntactic, and syntax does not entail semantics. > > Right. > >> But even if it >> is true that syntax does not entail semantics, how can you be sure that >> the brain has the extra ingredient for semantics and the computer >> does not, and how does the CR argument show this? You've admitted that >> it isn't because the the parts of the CR have >> components with independent intelligence and you've admitted that it >> isn't because the operation of the CR has an algorithmic description >> and that of the brain does not. What other differences between brains >> computers are there which are illustrated by the CRA? (Don't say that >> the brain has understanding while the computer or CR does not: that is >> the thing in dispute). > > I can't heed the first part of your prohibition at the end. You know your brain has understanding as surely as you can understand the words in this sentence. If you understand anything whatsoever, you have semantics. And you can reasonably locate that capacity in your brain because when your brain loses consciousness, you no longer have it. I know my brain has understanding but it is at least provisionally an open question whether computers, or systems with only syntax, do. You can't assume that they don't as part of your argument to prove that they don't. > The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese even if the syntactic rules of the program enable him to speak it fluently. Forget any prejudices you may have. You are an alien scientist and you observe a Chinese speaker and a CR, both of which seem to speak fluent Chinese, a language which you have managed to learn from radio transmissions. You are not sure if either of them actually understands Chinese, and if so which one. The man in the CR freely admits to you in English that he does not speak Chinese. What do you conclude? That the man in the CR does not speak Chinese does not bias you against the CR in your assessment of its understanding, since the cells of the brain are obviously too stupid to understand anything at all, let alone Chinese. So if either the brain or the man understands Chinese it is an emergent, or high level property supervening on the low level behaviour of their components, not a simple property of the components themselves. It could be due to the action potentials in the neurons of the left temporal lobe, or to the flurry of card-sorting activity by man in the CR, particularly involving the thumb and index finger of the right hand, since this is what is observed when the Chinese speaking is most active. With very careful observation you can pick out more specific patterns: a consistent sequence of neuronal firings or card-shuffling whenever the Chinese word for "dog" is heard, for example. 
After long observation you come to these conclusions: (1) you can't be absolutely sure that either of the subjects actually understands what they are saying, and (2) there is no basis for saying chemical reactions are more likely to yield understanding that card-sorting is, or vice-versa. > The same thing happens to be true in English too, and even of natural brains that know English. It's not so easy to see, but you cannot understand English sentences merely from knowing their syntactic structure, or merely from following syntactic rules. Syntactic rules are form based, not semantics based. > > Programs manipulate symbols according to their forms. A program takes an input like for example "What day of week is it?" It looks at the *forms* of the words in the question to determine the operation it must perform to generate a proper output. It does not look at or know the *meanings* of the words. The meaning of the output comes from the human who reads it or hears t. If we want to say that the program has semantics then we must say it has what philosophers of the subject call "derived semantics", meaning that the program derives its semantics from the human operator. Brains also just respond in a deterministic way, taking an input and producing an output. In so doing they sometimes derive meaning from the input. Why cannot the physical activity in computers or the CR also derive meaning, if the physical activity in brains can? >> Although the CRA does not show that computers can't be >> conscious, > > It shows that even if computers *did* have consciousness, they still would have no understanding the meanings of the symbols contained in their programs. The conscious Englishman in the room represents a program operating on Chinese symbols. He cannot understand Chinese no matter how well he performs those operations. And if the neurons in the brain had a separate consciousness, even linked in a swarm mind (so that the conscious entity comprises the entire system), they wouldn't necessarily understand anything beyond their low level operations either. That is trivially obvious; you don't need the CRA to demonstrate it. -- Stathis Papaioannou From jrd1415 at gmail.com Mon Dec 21 20:02:08 2009 From: jrd1415 at gmail.com (Jeff Davis) Date: Mon, 21 Dec 2009 13:02:08 -0700 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <636244.44626.qm@web36503.mail.mud.yahoo.com> References: <636244.44626.qm@web36503.mail.mud.yahoo.com> Message-ID: On Sun, Dec 20, 2009 at 5:44 PM, Gordon Swobe wrote: >... How can a program get semantics? Earlier in this thread, Gordon enumerated the following points,: 1) Programs are formal (syntactic) 2) Minds have mental contents (semantics) 3) Syntax is neither constitutive of nor sufficient for semantics 4) Programs are neither constitutive of nor sufficient for minds. and then challenged interested parties thusly: "If you think you see a logical problem then show it to me." Let me give it a shot. With one caveat: Searle may have had a good deal more to say beyond the above four points, and as I am not well versed in that additional context, there's ample room for error on my part. First let me restate the four points somewhat more bluntly: 1) Programs are about rules, not meanings, 2) Minds have contents that have meanings, 3) Rules won't generate meanings 4) So programs can't generate minds. Okay. Programs don't exist in a vacuum, they RUN on a suitable substrate. That substrate has information in both its structure and storage devices. 
When a program runs, it generates output, which modifies and increases its information content. Eventually, the inherent functionality of the program, combined with the initial information (time zero data set) and ongoing external inputs, generates the necessary semantics/meanings. I offer as an example the human sperm and egg. They appear to lack mind and meaning, but they have information, structure, and functionality. Combine the sperm and the egg, and they can produce a human infant. In regard to mind and meaning, that infant starts out a blank -- setting aside the probable substantial inventory of genetically-encoded meanings in the form of instinctive mental behaviors. That syntax-equipped-yet-semantically-blank proto-mind will then proceed to absorb additional inputs and generate the full complement of screwed-up semantics that we see in the typical mature human. Searle's argument seems little more than another attempt -- born of ooga-booga spirituality -- to deny the basic truth of materialism: Life, persona, mind, and consciousness are the entirely unspecial result of the "bubble, bubble, toil, and trouble" of stardust in the galactic cauldron. When life, persona, mind, consciousness are eventually deconstructed, they will be seen to be as mundane as the dirt from which they sprang. Some may find this a bad thing. I consider it, already, immensely liberating. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From jonkc at bellsouth.net Mon Dec 21 21:49:04 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 21 Dec 2009 16:49:04 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <414735.36422.qm@web36505.mail.mud.yahoo.com> References: <414735.36422.qm@web36505.mail.mud.yahoo.com> Message-ID: Searle Wrote: > > "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. [...] Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native > Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?" > Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. I don't think we need to know how the human mind works (although it would certainly be helpful) to make an AI, but you do. So to convince you of the error of your ways we use a thought experiment that at the logical level slavishly works just like a human brain. And assuming your objection is valid you still haven't explained why it doesn't also prove that the native Chinese man also doesn't understand Chinese. You state that native Chinese speakers understand Chinese but if your objection is valid they can't comprehend it any more than the Chinese Room does. > However, even getting this close to the operation of the brain is still not sufficient to produce understanding. I believe one of us does not understand understanding. 
> To see this, imagine that instead of a mono lingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system? That question is meaningless. Nouns have a position, understanding is not a noun, understanding has no position. > It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes So you decree Mr. Searle, but some evidence of that would be nice, a proof would be even better. > and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands You don't find it absurd that 3 pounds of grey goo in our head can have understanding even though not one of the 100 billion neurons that make it up has understanding. You don't find it absurd because you are accustomed to the idea. You do think its absurd for a room shuffling symbols to have understanding and it is true that not one of those symbols has understanding, but that's not who you find it absurd. You find it absurd because you are not accustomed to it; there can't be any other reason because logically the grey goo and the room are identical. > remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. And after the human being does something superhuman, after a human does something that even a Jupiter Brain would be far too small to accomplish you just decree that no part of the human understands Chinese, you offer no proof or even one scrap of evidence to indicate that is indeed true, you just decree it and then claim to have proven something profound. I said "no part of the man" because I think a mind of that astronomical size would be a multitude. For that matter, I think there is a part of your mind and mine that doesn't understand English and yet we can both write screeds in English on the Internet. > The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. In the above you are demanding that somebody simulate the soul, and I don't believe any rational theory of mind would satisfy you as you would ALWAYS have one of two objections no matter what the theory: 1) Your theory of mind has reduced it to a huge number of parts called Part Z interacting with each other. But Part Z is very simple and very dull, and even the interactions it has with other parts is mundane. There must be more to a grand and mysterious thing like mind than that! 2) Your theory of mind has reduced it to Part Z, but part Z is still complex and mysterious so we still don't understand mind. It's hopeless, nothing could satisfy you. As I said before one of us doesn't understand understanding. 
> formal properties are not sufficient for the causal properties is shown by the water pipe example Mr. Searle I've never heard you mention the name so I'm really really curious, have you ever heard of a fellow by the name of Charles Darwin? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 21 21:55:22 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 21 Dec 2009 16:55:22 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <55003.34376.qm@web36506.mail.mud.yahoo.com> References: <55003.34376.qm@web36506.mail.mud.yahoo.com> Message-ID: <80D8A2DD-33D3-46E1-849A-D5D18C947E9B@bellsouth.net> On Dec 18, 2009 Gordon Swobe wrote: > I didn't ignore it, Damien. I just have very little time and lots of posts to respond to, not only here but on other discussion lists. In your post before your last, you wrote something along the lines of "BULLSHIT". (No, I take that back. That's exactly what you wrote.) I don't mind a little profanity, and I didn't take offense, but as a general rule I tend to give priority to posts of those who seem most interested in what I have to say. Yes Damien I was shocked, SHOCKED I tell you, that you would use such a foul word. I fear you have been hanging around with the wrong sort of people. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 21 22:09:50 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 21 Dec 2009 17:09:50 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <579032.89182.qm@web36503.mail.mud.yahoo.com> References: <579032.89182.qm@web36503.mail.mud.yahoo.com> Message-ID: <80BD8F00-CBEE-4F6F-9EA5-7F5F8582A91F@bellsouth.net> On Dec 20, 2009 Gordon Swobe wrote: > In other words, you would rather adopt the position that anything capable of intelligent behavior has a mind capable of holding thoughts than adopt Searle's much more conventional position that only advanced organisms with brains can do so. Not a bad paraphrase of my words, my only quibble is with the "much more conventional position". I would say that Searle's ideas are conventional in the same way that creationists and their 6000 year old Earth is conventional. Searle is certainly in their camp and it astounds me you want to be associated with such scum. > Panpsychism is as fine a religion as any I suppose. Tell the atheist that atheism is a religion, wow, I never heard that one before! John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Dec 21 22:38:05 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 21 Dec 2009 16:38:05 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <80D8A2DD-33D3-46E1-849A-D5D18C947E9B@bellsouth.net> References: <55003.34376.qm@web36506.mail.mud.yahoo.com> <80D8A2DD-33D3-46E1-849A-D5D18C947E9B@bellsouth.net> Message-ID: <4B2FF8CD.30809@satx.rr.com> On 12/21/2009 3:55 PM, John Clark wrote: > On Dec 18, 2009 Gordon Swobe wrote: >> I didn't ignore it, Damien. I just have very little time and lots of >> posts to respond to, not only here but on other discussion lists. In >> your post before your last, you wrote something along the lines of >> "BULLSHIT". (No, I take that back. That's exactly what you wrote.) 
I >> don't mind a little profanity, and I didn't take offense, but as a >> general rule I tend to give priority to posts of those who seem most >> interested in what I have to say. > Yes Damien I was shocked, SHOCKED I tell you, that you would use such a > foul word. I fear you have been hanging around with the wrong sort of > people. I was shockingly shocked that Gordon's miraculous internal semantic engine didn't notice that I was *quoting* the wrong sort of people on the list, and that this frightful word had often been applied, by those very people, to posts by... me. :) Damien Broderick From spike66 at att.net Mon Dec 21 22:31:23 2009 From: spike66 at att.net (spike) Date: Mon, 21 Dec 2009 14:31:23 -0800 Subject: [ExI] time to devour the dog Message-ID: <52C75A09949548E6947D00AEF57FD05B@spike> Who knew? A dog is worse than an SUV: http://news.yahoo.com/s/afp/20091220/sc_afp/lifestyleclimatewarminganimalsfo od;_ylt=AujXbeP6jnj13L53lJiX7iGs0NUE;_ylu=X3oDMTQ4YXZnZDQyBGFzc2V0A2FmcC8yMD A5MTIyMC9saWZlc3R5bGVjbGltYXRld2FybWluZ2FuaW1hbHNmb29kBGNjb2RlA21vc3Rwb3B1bG FyBGNwb3MDMTAEcG9zAzcEcHQDaG9tZV9jb2tlBHNlYwN5bl9oZWFkbGluZV9saXN0BHNsawNwb2 xsdXRpbmdwZXQ spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Mon Dec 21 23:32:18 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Mon, 21 Dec 2009 18:32:18 -0500 Subject: [ExI] time to devour the dog In-Reply-To: <52C75A09949548E6947D00AEF57FD05B@spike> References: <52C75A09949548E6947D00AEF57FD05B@spike> Message-ID: <20091221183218.yg4wyqrps0ws0008@webmail.natasha.cc> I'm keeping our doodle Oscar. My defense is that we planted 29 trees in our yard the past 2 years, and my rose garden also has value. I don't party in Austin, so I get a few points for not expanding my carbon recklessly around town. I am constipated from time to time, so that has gotta give me some leverage. And I'll practice breathing less often. Natasha Quoting spike : > > Who knew? A dog is worse than an SUV: > > http://news.yahoo.com/s/afp/20091220/sc_afp/lifestyleclimatewarminganimalsfo > od;_ylt=AujXbeP6jnj13L53lJiX7iGs0NUE;_ylu=X3oDMTQ4YXZnZDQyBGFzc2V0A2FmcC8yMD > A5MTIyMC9saWZlc3R5bGVjbGltYXRld2FybWluZ2FuaW1hbHNmb29kBGNjb2RlA21vc3Rwb3B1bG > FyBGNwb3MDMTAEcG9zAzcEcHQDaG9tZV9jb2tlBHNlYwN5bl9oZWFkbGluZV9saXN0BHNsawNwb2 > xsdXRpbmdwZXQ > > spike > From jonkc at bellsouth.net Mon Dec 21 23:38:18 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 21 Dec 2009 18:38:18 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <267079.86892.qm@web36507.mail.mud.yahoo.com> References: <267079.86892.qm@web36507.mail.mud.yahoo.com> Message-ID: <5B428DD9-6E6D-46A1-B60D-50910927BFDD@bellsouth.net> On Dec 21, 2009, at 9:07 AM, Gordon Swobe wrote: > > The experiment in the CRA shows that programs don't have it because the man representing the program can't grok Chinese The man represents the program? What utter crap. The in the idiotic Chinese room world the silly man doesn't even represent something important like a if then statement, at best the man represents a very specific and thus dull thing like let let k =3. > even if computers *did* have consciousness, they still would have no understanding the meanings of the symbols contained in their programs. There may be stupider statements than the one that can be seen above, but I am unable to come up with an example of one, at least right at this instant off the top of my head. 
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From suomichris at gmail.com Mon Dec 21 23:48:10 2009 From: suomichris at gmail.com (Christopher Doty) Date: Mon, 21 Dec 2009 15:48:10 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <5B428DD9-6E6D-46A1-B60D-50910927BFDD@bellsouth.net> References: <267079.86892.qm@web36507.mail.mud.yahoo.com> <5B428DD9-6E6D-46A1-B60D-50910927BFDD@bellsouth.net> Message-ID: 2009/12/21 John Clark : >> even if computers *did* have consciousness, they still would have no >> understanding the meanings of the symbols contained in their programs. > > There may be stupider statements than the one that can be seen above, but I > am unable to come up with an example of one, at least right at this instant > off the top of my head. The *entire* statement is not stupid. It is certainly possible that a conscious computer could correctly respond to questions about, e.g., colors, even though all it knew of them were definitions about wavelengths and hadn't ever "seen" or processed any images. This might also go for human emotions. To take a cheesy example, "love" might be understood by a computer as a sort of motivating principle in human society, how it arose via evolutionary processes, etc., but without knowing, in some sense, what it means to "love." Nonetheless, I'm hard-pressed to see how a computer to come to consciousness without having any understanding of any of the symbols in its programming...... Chris From max at maxmore.com Tue Dec 22 00:00:39 2009 From: max at maxmore.com (Max More) Date: Mon, 21 Dec 2009 18:00:39 -0600 Subject: [ExI] time to devour the dog Message-ID: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> I suspect that this study is actually bullshit. I know I've seen some other info about this... I'll try to find it. One quote that immediately makes me doubt the report: >calculated that a medium-sized dog eats around 164 kilos (360 >pounds) of meat and 95 kilos of cereal a year. Natasha's and my dog Oscar is a large dog (around 80 to 85 lbs) but he certainly does not consume a pound of meat per day. A can of his Pedigree Healthy Digestion has a net weight of 375 g or 13.2 oz, i.e., less than a pound. This is far from all-meat, it's ingredients including brown rice, brewer's rice, and other non-meat ingredients. A can last him... how long, Natasha? At least three days, I'd say. Seems to me, the study authors have picked an unrealistically exaggerated number for amount of meat consumed by a medium dog per day. Even if the number was accurate, so what? Should we apologize for living and having our beloved dog put down? It's just another case of greens trying to make all humans feel guilty for living. I'm really sick of this culture. Max From max at maxmore.com Tue Dec 22 00:36:25 2009 From: max at maxmore.com (Max More) Date: Mon, 21 Dec 2009 18:36:25 -0600 Subject: [ExI] time to devour the dog Message-ID: <200912220036.nBM0aexE029193@andromeda.ziaspace.com> Damn! I hate it when I don't proofread a post. Here it is again, with corrections to paragraphs four, five, and six. The second mistake made it sound like we had put our beloved dog down. (In fact, he's alive and well, and snoring as I type.) ----------------------- I suspect that this study is actually bullshit. I know I've seen some other info about this... I'll try to find it. 
One quote that immediately makes me doubt the report: >calculated that a medium-sized dog eats around 164 kilos (360 >pounds) of meat and 95 kilos of cereal a year. Natasha's and my dog Oscar is a large dog (around 80 to 85 lbs) but he certainly does not consume a pound of meat per day. A can of his Pedigree Healthy Digestion has a net weight of 375 g or 13.2 oz, i.e., less than a pound. This is far from all-meat, its ingredients including brown rice, brewer's rice, and other non-meat ingredients. A can lasts him... how long, Natasha? At least three days, I'd say. Seems to me, the study authors have picked an unrealistically exaggerated number for amount of meat consumed by a medium dog per day. Even if the number was accurate, so what? Should we apologize for living and then have our beloved dog put down? It's just another case of Greens trying to make all humans feel guilty for living. I'm really sick of this culture. Max From emlynoregan at gmail.com Tue Dec 22 01:29:55 2009 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 22 Dec 2009 11:59:55 +1030 Subject: [ExI] time to devour the dog In-Reply-To: <52C75A09949548E6947D00AEF57FD05B@spike> References: <52C75A09949548E6947D00AEF57FD05B@spike> Message-ID: <710b78fc0912211729s6f90a6ebv46043125d9bd2b93@mail.gmail.com> 2009/12/22 spike : > > Who knew?? A dog is worse than an SUV: > > http://news.yahoo.com/s/afp/20091220/sc_afp/lifestyleclimatewarminganimalsfood;_ylt=AujXbeP6jnj13L53lJiX7iGs0NUE;_ylu=X3oDMTQ4YXZnZDQyBGFzc2V0A2FmcC8yMDA5MTIyMC9saWZlc3R5bGVjbGltYXRld2FybWluZ2FuaW1hbHNmb29kBGNjb2RlA21vc3Rwb3B1bGFyBGNwb3MDMTAEcG9zAzcEcHQDaG9tZV9jb2tlBHNlYwN5bl9oZWFkbGluZV9saXN0BHNsawNwb2xsdXRpbmdwZXQ > > spike Stories like this put the cart before the horse (or the SUV before the dog). The same reasoning would show that humans are the worst thing of all, and we need to reduce population. I think anytime your philosophy leads you to the idea of a purge, active or passive, it's a sign you've lost track of what you were doing. To my mind, the purpose of worrying about the environment, is that we need a long term viable planet *for people*. If in solving a sub problem of "how can we protect the future for people", you get the answer "kill a lot of people", you've hit reductio ad absurdum; some of your premises are wrong, *and*, if you don't recognise that, then you've lost track of the problem you were trying to solve in the first place. As to dogs, well, given that they are sentients who can suffer, I think they deserve some level of similar consideration of interests to humans. Also, to the extent that they have a social life intrinsically linked to human social life, they should be including in the set of "people" who's future you are trying to secure. ie: You are also trying to save the dogs, so killing your dog for that purpose is not right. As to SUVs, they are a tool. They are not a social sentient. If reducing SUVs helps secure the future of humans, that's great. -- Emlyn http://emlyntech.wordpress.com - coding related http://point7.wordpress.com - ranting http://emlynoregan.com - main site From natasha at natasha.cc Tue Dec 22 02:30:10 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Mon, 21 Dec 2009 21:30:10 -0500 Subject: [ExI] Natasha Vita-More & Hugo de Garis on China's Today - Now! 
Message-ID: <20091221213010.hugkqtopmoogo8kg@webmail.natasha.cc> http://english.cri.cn/08webcast/today.htm From msd001 at gmail.com Tue Dec 22 02:31:47 2009 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 21 Dec 2009 21:31:47 -0500 Subject: [ExI] time to devour the dog In-Reply-To: <710b78fc0912211729s6f90a6ebv46043125d9bd2b93@mail.gmail.com> References: <52C75A09949548E6947D00AEF57FD05B@spike> <710b78fc0912211729s6f90a6ebv46043125d9bd2b93@mail.gmail.com> Message-ID: <62c14240912211831g4399d2b4sa19ed50921bef695@mail.gmail.com> On Mon, Dec 21, 2009 at 8:29 PM, Emlyn wrote: > Stories like this put the cart before the horse (or the SUV before the > dog). The same reasoning would show that humans are the worst thing of > all, and we need to reduce population. I think anytime your philosophy > leads you to the idea of a purge, active or passive, it's a sign > you've lost track of what you were doing. > Maybe we should purge the climatologists? No, that's wrong. Maybe we should take away their toys until they learn how to use them properly? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Dec 22 03:04:50 2009 From: spike66 at att.net (spike) Date: Mon, 21 Dec 2009 19:04:50 -0800 Subject: [ExI] time to devour the dog In-Reply-To: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> Message-ID: <1A6373438E534E53A5D908A32D7F38D9@spike> > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More > ... > It's just another case of greens trying to make all humans > feel guilty for living. I'm really sick of this culture... Max Speaking of this culture, I was up to my eyeballs in a family situation last week, still actually, but as I recall there was supposed to be a big climate hootnanny in Copenhagen, didn't follow it. Is that over now? I don't really want to read a bunch of long-winded URLs about it, but would far prefer anyone here offering a very few word summary, perhaps a single sentence, that will tell me all I really need to know about how that went or what happened. Last spring we were anticipating putting in a few thousand trees, but held off hoping to make a cubic buttload of money selling carbon credits. I get the feeling that cubic buttload of money will not be made by me. spike From max at maxmore.com Tue Dec 22 04:59:14 2009 From: max at maxmore.com (Max More) Date: Mon, 21 Dec 2009 22:59:14 -0600 Subject: [ExI] time to devour the dog Message-ID: <200912220459.nBM4xUQ6016889@andromeda.ziaspace.com> >Speaking of this culture, I was up to my >eyeballs in a family situation last week, still >actually, but as I recall there was supposed to >be a big climate hootnanny in Copenhagen, didn't >follow it. Is that over now? I don't really >want to read a bunch of long-winded URLs about >it, but would far prefer anyone here offering a >very few word summary, perhaps a single >sentence, that will tell me all I really need to >know about how that went or what happened. spike: Here's a summary from the stimulating and often incisive yet rather rabid Viscount Monckton: http://sppiblog.org/news/parturient-montes-nascetur-ridiculus-mus#more-314 The mountains shall labor, and what will be born? A stupid little mouse. 
Thanks to hundreds of thousands of US citizens who contacted their elected representatives to protest about the unelected, communistic world government with near-infinite powers of taxation, regulation and intervention that was proposed in early drafts of the Copenhagen Treaty, there is no Copenhagen Treaty. There is not even a Copenhagen Agreement. There is a "Copenhagen Accord".

The White House spinmeisters spun, and their official press release proclaimed, with more than usual fatuity, that President Obama had "salvaged" a deal at Copenhagen in bilateral talks with China, India, Brazil, and South Africa, which had established a negotiating bloc. The plainly-declared common position of these four developing nations had been the one beacon of clarity and common sense at the foggy fortnight of posturing and gibbering in the ghastly Copenhagen conference center.

This is what the Forthright Four asked for:

Point 1. No compulsory limits on carbon emissions.
Point 2. No emissions reductions at all unless the West paid for them.
Point 3. No international monitoring of any emissions reductions not paid for by the West.
Point 4. No use of "global warming" as an excuse to impose protectionist trade restrictions on countries that did not cut their carbon emissions.

After President Obama's dramatic intervention to save the deal, this is what the Forthright Four got:

Point 1. No compulsory limits on carbon emissions.
Point 2. No emissions reductions at all unless the West paid for them.
Point 3. No international monitoring of any emissions reductions not paid for by the West.
Point 4. No use of "global warming" as an excuse to impose protectionist trade restrictions on countries that did not cut their carbon emissions.

Here, in a nutshell - for fortunately nothing larger is needed - are the main points of the "Copenhagen Accord":

Main points: In the Copenhagen Accord, which is operational immediately, the parties "underline that climate change is one of the greatest challenges of our time"; emphasize their "strong political will to urgently combat climate change"; recognize "the scientific view that the increase in global temperature should be below 2 C" and perhaps below 1.5 C; aspire to "cooperate in achieving the peaking of global and national emissions as soon as possible"; acknowledge that eradicating poverty is the "overriding priority of developing countries"; and accept the need to help vulnerable countries - especially the least developed nations, small-island states, and Africa - to adapt to climate change.

Self-imposed emissions targets: All parties will set for themselves, and comply with, emissions targets for 2020, to be submitted to the secretariat by 31 January 2010. Where developing countries are paid to cut their emissions, their compliance will be monitored. Developed countries will financially support less-developed countries to prevent deforestation. Carbon trading may be used.

New bureaucracies and funding: Under the supervision of a "High-Level Panel", developed countries will give up to $30 billion for 2010-12, aiming for $100 billion by 2020, in "scaled up, new and additional, predictable and adequate funding" to developing countries via a "Copenhagen Green Fund". A "Technology Mechanism" will "accelerate technology development and transfer" to developing countries.
From moulton at moulton.com Tue Dec 22 07:32:50 2009 From: moulton at moulton.com (moulton at moulton.com) Date: 22 Dec 2009 07:32:50 -0000 Subject: [ExI] time to devour the dog Message-ID: <20091222073250.56900.qmail@moulton.com> On Mon, 2009-12-21 at 22:59 -0600, Max More wrote: > spike: Here's a summary from the stimulating and > often incisive yet rather rabid Viscount Monckton: > > http://sppiblog.org/news/parturient-montes-nascetur-ridiculus-mus#more-314 I do not find Monckton to be stylistically pleasant for either reading or listening. But I am sure there are those who have different tastes from me and like his style; perhaps a person has to have grown up in the UK to appreciate the Monckton style. As to the question of substance and accuracy in what Monckton publishes I am a bit confused because I sometimes find it difficult to tell when he is being serious as opposed to just being sarcastic. But I read the URL supplied and did not find the comments useful. So I looked at the actual Copenhagen Accord which is really quite short and can be read fairly quickly. I found a couple of different URLs for the accord. The difference appears to be that the first URL listed is an advance unedited version: http://unfccc.int/files/meetings/cop_15/application/pdf/cop15_cph_auv.pdf The second URL is for a version which appears to be about the same text and is easier to read. http://unfccc.int/resource/docs/2009/cop15/eng/l07.pdf It is a quick read. I suggest everyone read it and make up their own mind. Fred From rafal.smigrodzki at gmail.com Tue Dec 22 08:52:27 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 22 Dec 2009 03:52:27 -0500 Subject: [ExI] time to devour the dog In-Reply-To: <1A6373438E534E53A5D908A32D7F38D9@spike> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <1A6373438E534E53A5D908A32D7F38D9@spike> Message-ID: <7641ddc60912220052s3291f0c8ra66cb876112f86ab@mail.gmail.com> On Mon, Dec 21, 2009 at 10:04 PM, spike wrote: > >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More >> ... >> It's just another case of greens trying to make all humans >> feel guilty for living. I'm really sick of this culture... Max > > > Speaking of this culture, I was up to my eyeballs in a family situation last > week, still actually, but as I recall there was supposed to be a big climate > hootnanny in Copenhagen, didn't follow it. ?Is that over now? ?I don't > really want to read a bunch of long-winded URLs about it, but would far > prefer anyone here offering a very few word summary, perhaps a single > sentence, that will tell me all I really need to know about how that went or > what happened. > ### What happened? The Gore effect happened - Copenhagen got slammed with unusually severe snowfall, thousands of envirowhackos froze their asses off waiting (unsuccessfully) for the UN to issue IDs. In other news, limos had to be driven from 500 miles away to accommodate the envirocelebrities, private jets swarmed so thick they had to be parked in airports in Sweden, an orgy of hysterical alarmist proclamations and hypocritical moral posturing erupted, Hugo Chavez was feted for putting the boot in on the US, hundreds of avowedly communist demonstrators broke a lot of other people's stuff, Lord Monckton was roughed up by the police, Obama had to leave early because the most severe snowstorm in many years was closing in on Washington DC, and Gore lied. Your taxes at work, politics as usual. 
Rafal From pharos at gmail.com Tue Dec 22 09:08:43 2009 From: pharos at gmail.com (BillK) Date: Tue, 22 Dec 2009 09:08:43 +0000 Subject: [ExI] time to devour the dog In-Reply-To: <7641ddc60912220052s3291f0c8ra66cb876112f86ab@mail.gmail.com> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <1A6373438E534E53A5D908A32D7F38D9@spike> <7641ddc60912220052s3291f0c8ra66cb876112f86ab@mail.gmail.com> Message-ID: On 12/22/09, Rafal Smigrodzki wrote: > Your taxes at work, politics as usual. > > It's worse than that. They spent all our taxes ages ago. They're spending monopoly money now, and increasing. BillK From gts_2000 at yahoo.com Tue Dec 22 12:38:21 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 22 Dec 2009 04:38:21 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <844704.43367.qm@web36507.mail.mud.yahoo.com> --- On Mon, 12/21/09, Stathis Papaioannou wrote: > you suggested that (a) would be the case, but then seemed to backtrack: I suggested (a) would be the case if we replaced all neurons with your programmatic neurons. > If you don't believe in a soul then you believe that at > least some of the neurons in your brain are actually involved in > producing the visual experience. It is these neurons I propose replacing > with > artificial ones that interact normally with their > neighbours but lack > the putative extra ingredient for consciousness. The aim of > the > exercise is to show that this extra ingredient cannot > exist, since > otherwise it would lead to one of two absurd situations: > (a) you would > be blind but you would not notice you were blind; or (b) > you would > notice you were blind but you would lose control of your > body, which > would smile and say everything was fine. I suppose (b) makes sense for the partial replacement scenario you want me to consider. If it seems bizarre, well then so too does the thought experiment! And how does it in any way speak to the issue at hand? As in the title of the thread, our concern here is the symbol grounding problem in strong AI, or more generally "understanding" in S/H systems. To target Searle's argument (as you want to and which I appreciate) we need to use your nano-neuron thought experiments to somehow undermine his position that programs do not have semantics. -gts From alfio.puglisi at gmail.com Tue Dec 22 13:12:46 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 22 Dec 2009 14:12:46 +0100 Subject: [ExI] time to devour the dog In-Reply-To: <62c14240912211831g4399d2b4sa19ed50921bef695@mail.gmail.com> References: <52C75A09949548E6947D00AEF57FD05B@spike> <710b78fc0912211729s6f90a6ebv46043125d9bd2b93@mail.gmail.com> <62c14240912211831g4399d2b4sa19ed50921bef695@mail.gmail.com> Message-ID: <4902d9990912220512t349265e6gc71beb6a4ac974b3@mail.gmail.com> 2009/12/22 Mike Dougherty > On Mon, Dec 21, 2009 at 8:29 PM, Emlyn wrote: > >> Stories like this put the cart before the horse (or the SUV before the >> dog). The same reasoning would show that humans are the worst thing of >> all, and we need to reduce population. I think anytime your philosophy >> leads you to the idea of a purge, active or passive, it's a sign >> you've lost track of what you were doing. >> > > Maybe we should purge the climatologists? No, that's wrong. Maybe we > should take away their toys until they learn how to use them properly? > The two are described as "specialists in sustainable living". Doesn't sound to me like climatologists... 
Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfio.puglisi at gmail.com Tue Dec 22 13:15:47 2009 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 22 Dec 2009 14:15:47 +0100 Subject: [ExI] time to devour the dog In-Reply-To: <1A6373438E534E53A5D908A32D7F38D9@spike> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <1A6373438E534E53A5D908A32D7F38D9@spike> Message-ID: <4902d9990912220515k2c5ce439ube6f07815b0bf518@mail.gmail.com> On Tue, Dec 22, 2009 at 4:04 AM, spike wrote: > Speaking of this culture, I was up to my eyeballs in a family situation > last > week, still actually, but as I recall there was supposed to be a big > climate > hootnanny in Copenhagen, didn't follow it. Is that over now? I don't > really want to read a bunch of long-winded URLs about it, but would far > prefer anyone here offering a very few word summary, perhaps a single > sentence, that will tell me all I really need to know about how that went > or > what happened. > That's easy :-) . Apart from producing lots of newspaper articles and blog posts pro and con, nothing happened and nothing was agreed on. Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Dec 22 13:17:39 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 22 Dec 2009 05:17:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <526603.59939.qm@web36507.mail.mud.yahoo.com> This post is about Searle's logic. --- On Mon, 12/21/09, Stathis Papaioannou wrote: > ...it is at least provisionally an open question whether computers, or > systems with only syntax, do. You can't assume that they don't as part > of your argument to prove that they don't. You and others keep suggesting that I (or Searle) has begged the question, so in this post I will address that and only that issue. (Again! :-) Searle assumes these three propositions as premises (he calls them axioms. I prefer to call them premises because premises seem more open to criticism): P1) Programs are formal (syntactic). P2) Minds have mental contents (semantics). P3) Syntax is neither constitutive of nor sufficient for semantics. >From there he draws his conclusion that programs don't cause or have minds, but here we concern ourselves with his premises. Does Searle, as is charged, include his conclusion in his premises? Let's take a look at each premise, one at a time! Premise P1) states that programs are formal (syntactic). Nothing more, nothing less. What does P1) mean? It means that, at the very least, programs do syntactical form-based operations on symbols, something with which any programmer will agree. Notice what P3) does not state. It does not state this: P1) Programs are formal (syntactic) and cannot have semantics. Nor does it state this: P1) Programs are formal (syntactic) and cannot have minds. Nor does it state this: P1) Programs are merely and only formal (syntactic). If P3 stated those things or similar then Searle would be guilty as charged; in that case he would have only proved what he assumed. And instead of being a tenured professor of philosophy at UC Berkeley, he would be the laughing stock of the academic community and we mostly likely would never have heard of him. As for P2 and P3, they say nothing about programs or minds! Here I will show why it might *seem* that he begged the question. The argument again: P1) Programs are formal (syntactic). P2) Minds have mental contents (semantics). 
P3) Syntax is neither constitutive of nor sufficient for semantics. C1) Programs are neither constitituve nor sufficient for minds. If you look carefully, you'll see that the argument does not preclude the possibility that something other than syntax gives programs semantics or minds. I suggest that it's because the argument does not preclude some other possibility that your intuitions tell you that he has assumed otherwise. Consider this alternative argument, which would refute Searle's if true: P1) Programs are formal (syntactic). P2) Minds have mental contents (semantics). P3) Syntax is neither constitutive of nor sufficient for semantics. P4) Zeus so loved Extropians that he gave programs semantics. C1) Programs are therefore constitutive or sufficient for minds. If we want to refute Searle's formal argument, and if we cannot refute his three premises or that those premises lead to his conclusion, then we need to find a suitable replacement for P4). -gts From stathisp at gmail.com Tue Dec 22 13:53:46 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 23 Dec 2009 00:53:46 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <844704.43367.qm@web36507.mail.mud.yahoo.com> References: <844704.43367.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/22 Gordon Swobe : > --- On Mon, 12/21/09, Stathis Papaioannou wrote: > >> you suggested that (a) would be the case, but then seemed to backtrack: > > I suggested (a) would be the case if we replaced all neurons with your programmatic neurons. > >> If you don't believe in a soul then you believe that at >> least some of the neurons in your brain are actually involved in >> producing the visual experience. It is these neurons I propose replacing >> with >> artificial ones that interact normally with their >> neighbours but lack >> the putative extra ingredient for consciousness. The aim of >> the >> exercise is to show that this extra ingredient cannot >> exist, since >> otherwise it would lead to one of two absurd situations: >> (a) you would >> be blind but you would not notice you were blind; or (b) >> you would >> notice you were blind but you would lose control of your >> body, which >> would smile and say everything was fine. > > I suppose (b) makes sense for the partial replacement scenario you want me to consider. If it seems bizarre, well then so too does the thought experiment! The experiment involves replacing biological neurons with artificial neurons. It's certainly no more bizarre that the CR, which is probably physically impossible as a normal human could never do the information processing fast enough or accurately enough to pass as a Chinese speaker. Even today there is talk of developing neural prostheses for people with brain lesions - look up "artificial hippocampus". I guess the team behind that project has not so far had spectacular success or we have heard about it, but extrapolate the technology a few decades hence and it doesn't seem wildly implausible that the technical problems will be overcome. The question will then be, will the cyborgised brain have the same consciousness, feelings, semantics etc. that a normal brain has? If you believe that with partial brain replacement you would feel different but behave normally, then you are proposing that it is possible for you to think with something other than your brain. This is because your remaining biological brain is constrained to go through exactly the same sequence of neural firings after the replacement as before. 
It's not *impossible* that your cognition is dependent on an immaterial soul but I don't think you want to go down this line of argument; and even Descartes thought that the soul and the brain were always perfectly synchronised. > And how does it in any way speak to the issue at hand? As in the title of the thread, our concern here is the symbol grounding problem in strong AI, or more generally "understanding" in S/H systems. To target Searle's argument (as you want to and which I appreciate) we need to use your nano-neuron thought experiments to somehow undermine his position that programs do not have semantics. You claim that a S/H brain analogue would lack understanding. This thought experiment shows that it would have understanding. If you think the visual cortex example is missing the point then consider repalcement of the neurons in Wernicke's area. You would claim to feel exactly the same, you would believe that you understood language the same as before, and you would use language appropriately as far as anyone else could tell. If someone asks you what you had for dinner last night you feel that you understand what he is asking, you recall an image of last night's meal, perhaps also its taste and aroma, and you describe all this in clear and appropriate English. And yet, you would say (I think) that because the artificial neurons just follow an algorithm, and syntax is not sufficient for meaning, you don't *really* understand either the question or your answer; you just have have the delusional belief that you understand it. But if it is possible to be deluded about such a thing, how do you know that you aren't deluded right now? -- Stathis Papaioannou From stefano.vaj at gmail.com Tue Dec 22 14:09:05 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Dec 2009 15:09:05 +0100 Subject: [ExI] atheism In-Reply-To: <503025.63317.qm@web59904.mail.ac4.yahoo.com> References: <503025.63317.qm@web59904.mail.ac4.yahoo.com> Message-ID: <580930c20912220609ha57735br67dee97811bc8519@mail.gmail.com> 2009/12/18 Post Futurist > Alrighty, but is faith/religion any sillier than celebrity culture, sports, politics, gossip? There again, it depends on the religion. If it is one that postulates the existence of an entity "out of time and space", I would say that it definitely is. -- Stefano Vaj From stefano.vaj at gmail.com Tue Dec 22 14:11:32 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Dec 2009 15:11:32 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <873042.45735.qm@web36506.mail.mud.yahoo.com> References: <873042.45735.qm@web36506.mail.mud.yahoo.com> Message-ID: <580930c20912220611y31afd5afte1b1b18b144e0e3e@mail.gmail.com> 2009/12/18 Gordon Swobe : > Biological brains do something we don't yet understand. Call it X. Whatever X may be, it causes the brain to have the capacity for intentionality. Really? How would you define it? -- Stefano Vaj From stathisp at gmail.com Tue Dec 22 14:17:26 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 23 Dec 2009 01:17:26 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <526603.59939.qm@web36507.mail.mud.yahoo.com> References: <526603.59939.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/23 Gordon Swobe : > This post is about Searle's logic. > > --- On Mon, 12/21/09, Stathis Papaioannou wrote: > >> ...it is at least provisionally an open question whether computers, or >> systems with only syntax, do. 
You can't assume that they don't as part >> of your argument to prove that they don't. > > You and others keep suggesting that I (or Searle) has begged the question, so in this post I will address that and only that issue. (Again! :-) > > Searle assumes these three propositions as premises (he calls them axioms. I prefer to call them premises because premises seem more open to criticism): > > P1) Programs are formal (syntactic). > P2) Minds have mental contents (semantics). > P3) Syntax is neither constitutive of nor sufficient for semantics. > > >From there he draws his conclusion that programs don't cause or have minds, but here we concern ourselves with his premises. Does Searle, as is charged, include his conclusion in his premises? Let's take a look at each premise, one at a time! > > Premise P1) states that programs are formal (syntactic). Nothing more, nothing less. What does P1) mean? It means that, at the very least, programs do syntactical form-based operations on symbols, something with which any programmer will agree. > > Notice what P3) does not state. It does not state this: > > P1) Programs are formal (syntactic) and cannot have semantics. > > Nor does it state this: > > P1) Programs are formal (syntactic) and cannot have minds. > > Nor does it state this: > > P1) Programs are merely and only formal (syntactic). > > If P3 stated those things or similar then Searle would be guilty as charged; in that case he would have only proved what he assumed. And instead of being a tenured professor of philosophy at UC Berkeley, he would be the laughing stock of the academic community and we mostly likely would never have heard of him. > > As for P2 and P3, they say nothing about programs or minds! > > Here I will show why it might *seem* that he begged the question. > > The argument again: > > P1) Programs are formal (syntactic). > P2) Minds have mental contents (semantics). > P3) Syntax is neither constitutive of nor sufficient for semantics. > C1) Programs are neither constitituve nor sufficient for minds. > > If you look carefully, you'll see that the argument does not preclude the possibility that something other than syntax gives programs semantics or minds. I suggest that it's because the argument does not preclude some other possibility that your intuitions tell you that he has assumed otherwise. > > Consider this alternative argument, which would refute Searle's if true: > > P1) Programs are formal (syntactic). > P2) Minds have mental contents (semantics). > P3) Syntax is neither constitutive of nor sufficient for semantics. > P4) Zeus so loved Extropians that he gave programs semantics. > C1) Programs are therefore constitutive or sufficient for minds. It is also possible that programs are *only* formal but programs can have minds because P3 is false, and syntax actually is constitutive and sufficient for semantics. I base this on the fact that all my brain does is manipulate information, and yet I feel that I understand things. Searle of course disagrees because he takes it as axiomatic that symbol-manipulation can't give rise to understanding; but it also used to be taken as axiomatic that matter could not give rise to understanding. 
-- Stathis Papaioannou From stefano.vaj at gmail.com Tue Dec 22 14:18:58 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Dec 2009 15:18:58 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <3929.64427.qm@web36503.mail.mud.yahoo.com> References: <3929.64427.qm@web36503.mail.mud.yahoo.com> Message-ID: <580930c20912220618t7a0366ebkb0886027c45cf9a5@mail.gmail.com> 2009/12/19 Gordon Swobe : > In other words you want to think that if something causes X to do Y then we can assume X actually knows how to do Y. > > That idea entails panpsychism; Or the opposite. Meaning that "psychism" is simply a high-level description which becomes useful when some processes are complicate enough, but does not involve any underlying mystical phenomenon of a different standing of what ordinarily happens in nature. You might be interested in glancing at the principle of computational equivalence in Wolphram's A New Kind of Science... What remains is simply a matter of different performances in the execution of some software, our brain being obviously rather well optimised (within the constraints dictated by the need to develop it only through evolutionary methods) to do what it does. -- Stefano Vaj From stefano.vaj at gmail.com Tue Dec 22 14:42:18 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Dec 2009 15:42:18 +0100 Subject: [ExI] Sick of Cyberspace? In-Reply-To: <20091218114641.GT17686@leitl.org> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <20091218114641.GT17686@leitl.org> Message-ID: <580930c20912220642o5fa4c0a6w56c6860449bcf22b@mail.gmail.com> 2009/12/18 Eugen Leitl : > I figured out in vivo patching wasn't going to be feasible within > natural lifetimes when I was around 17. From what has gone so far > (almost 30 years) it looks like I was correct. Another 30-40 > years and I'll be distinctly past caring. Very little left to > patch, if any. Yes, I agree. But there again my initial interest in transhumanism was more about reprogenetic techs, biotech, and (limited) enhancement than about radical life extension. > Cyborg belongs into in vivo patching cathegory. Doesn't work either, > at least if implants in life extension and capabilities amplification > are concerned. Wearable stuff is fine. Implanted stuff is no good. > You'll notice we're not even in decent wearable cathegory. I would > have bet good money a decade ago we would have normal people using HMDs > and HUDs out in the streets by now. Absolutely. Even though I was intrigued in principle by the idea that transplants, implants and organ replacements had not precise boundaries on the way to a full upload, and so... >> In other words, if we are becoming machines, machines are becoming >> "chemical" and "organic" at an even faster pace (carbon rather than >> steel and silicon, biochips, nano...). > > Organic is one thing, biology another. It's a safe bet there will > be zero proteins, DNA, lipid bilayers or water in the result after > convergence. Yes. What I mean is that be it as it may I do not really expect to see artificial or "human" individuals in the shape of sci-fi movies of the fifties (pulls, levers, thermoionic valves, etc.), ? la Forbidden Planet, at any time along such convergence. 
--
Stefano Vaj

From stefano.vaj at gmail.com  Tue Dec 22 14:46:07 2009
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Tue, 22 Dec 2009 15:46:07 +0100
Subject: [ExI] time to devour the dog
In-Reply-To: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com>
References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com>
Message-ID: <580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com>

2009/12/22 Max More :
> It's just another case of greens trying to make all humans feel guilty for
> living. I'm really sick of this culture.

Well said.

--
Stefano Vaj

From max at maxmore.com  Tue Dec 22 15:45:13 2009
From: max at maxmore.com (Max More)
Date: Tue, 22 Dec 2009 09:45:13 -0600
Subject: [ExI] time to devour the dog
Message-ID: <200912221545.nBMFjRef022583@andromeda.ziaspace.com>

Thanks, Stefano, but I see that it's said much better and in more detail here:

http://www.spiked-online.com/index.php/debates/copenhagen_article/7860/

>2009/12/22 Max More :
> > It's just another case of greens trying to make all humans feel guilty
> > for living. I'm really sick of this culture.
>
>Well said.
>
>--
>Stefano Vaj

From jonkc at bellsouth.net  Tue Dec 22 15:23:42 2009
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 22 Dec 2009 10:23:42 -0500
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To: <89454.51497.qm@web36508.mail.mud.yahoo.com>
References: <89454.51497.qm@web36508.mail.mud.yahoo.com>
Message-ID: <775922A3-B60C-4C1C-8B16-CFD1AD578A3C@bellsouth.net>

On Dec 20, 2009, Gordon Swobe wrote:

> The brain has understanding, yes, but Searle makes no claim about the dumbness or lack thereof of its components. You added that to his argument.

I think Stathis was trying to be generous, but if Searle thinks an individual neuron is the source of our understanding then the man is an even bigger fool than I thought he was.

> He starts with the self-evident axiom that brains have understanding

Why the plural? The only self-evident axiom is that one brain has understanding.

> nobody has shown his formal argument false. If somebody has seen it proved false then point me to it.

Good God almighty! I've shown in devastating and unanswered detail that his formal argument is not just false but positively vapid in a post this very day, and yesterday too, and the day before that, and the day before that, and the day before that. But I can't brag, I'm a latecomer to all this. Charles Darwin proved that ideas such as Searle's and yours were idiotic in 1859.

John K Clark
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kanzure at gmail.com  Tue Dec 22 16:48:13 2009
From: kanzure at gmail.com (Bryan Bishop)
Date: Tue, 22 Dec 2009 10:48:13 -0600
Subject: [ExI] Exploring perceived acceleration of time as we age
Message-ID: <55ad6af70912220848m6d8d5683m9cf53deb19c30224@mail.gmail.com>

I found this insightful (quoted below):
http://everything2.com/user/Professor+Pi/writeups/Why+time+appears+to+speed+up+with+age

Please note that it's from the Journal of Irreproducible Results before you invest much thought into it. I was wondering if the kind fellows in the Gerontology Research Group could mention whether or not supercentenarians report a perception of acceleration of time at age, say, 115, greater than at age 100 or age 80, etc.

"""
In a groundbreaking article, T. L. Freeman discusses the relationship between actual age and effective age [1]. His conclusion is that the passing of the years goes faster as we grow older.
This makes sense; for instance when you are 10 years of age, a year represents 10% of your life, and seems like a very long time. However, when you are 50 years old, one year has reduced to only 2% of your life, and hence seems only one-fifth as long. Summarizing this work, Freeman comes to the conclusion that the actual age (AA) needs to be corrected for the apparent length of a year (AY). The apparent length of a year is inversely proportional to one person's actual age:

AY = κ/AA

The constant of proportionality κ is rather loosely defined by Freeman as the age at which a year really seems to last a year, and it was arbitrarily set at 20 years (κ = 20). Now Freeman determines the concept of Effective Age, which is simply the integral over time of the Apparent Year from age 1 to the actual age (AA) of interest:

EA = ∫[1,AA] AY d(AA) = ∫[1,AA] 20/AA d(AA) = 20 ln(AA)

Although this formula results in some interesting conclusions, there are several flaws with this concept. As mentioned above, the choice of the proportionality constant is rather arbitrary. There is no rational justification for the choice of this age, but it was solely chosen based on Freeman's own perception of (the passing of) time. Next, the evaluation of the integral seems incorrect, since its lower limit was set at 1, and not at 0. Obviously, the choice of zero as lower integration boundary yields an expression that can not be evaluated, due to the logarithmic term.

Because of the obvious problems with Freeman's concept of time perception, it is necessary to redefine the Effective Age on a sounder basis. In the traditional concept of time perception, one person's Actual Age is proportional to the passing of time (t):

AA = αt + β

Note the occurrence of two parameters α and β that are traditionally set to one and zero, respectively. However, each has a clear (though usually underappreciated) function in time perception. The α-parameter describes the rate at which one person ages; some persons remain annoying little crybabies during their life, while others become boring old farts at 20. The β-parameter describes the origin of one person's time perception. Did you ever meet those proud parents boasting about their little one who is only x months old, and already walks, writes obfuscated C, or recently sold his first dot.com? No, these youngsters aren't bright for their age; they simply have a high β-factor. It is clear that with this definition, one person's Actual Age may already be non-synchronous with time. However, analogous to Freeman's work, the apparent length of a year (AY) is not constant:

AY = κ/AA = κ/(αt + β)

We can remove one of the parameters by defining two new parameters γ and δ:

AY = κ/(αt + β) = (κ/α)/(t + β/α) = γ/(t + δ)

The actual values of γ and δ will become clear from the boundary conditions. In order to obtain the Effective Age, the integral of AY is evaluated. Note that the integral is evaluated over time, and not over Actual Age, since AA is a function of time:

EA = ∫[0,t] AY d(t) = ∫[0,t] γ/(t + δ) d(t)

EA = γ ln(t + δ) - γ ln(δ)

The lower boundary condition (t=0) should yield an Effective Age of zero years (EA=0). Therefore δ = 1. The upper boundary is less apparent. It should be chosen so that at t=tmax, EA = t. At death, the Effective Age and real time are again equal. However, no person knows for sure his or her personal life expectancy. This is clearly an issue for molecular biologists to address. However, if we assume for a person a life expectancy of 80 years (t=80, EA=80), we obtain:

γ = 80/ln(81)

EA = 80 ln(t + 1) / ln(81)

This formula can now be used to calculate the Effective Age (and the Effective percentage Completion of Life) as a function of time. This is shown in the following table:

time (yrs.)   EA (yrs.)   Life%
     0           0.0         0
     1          12.6        16
     2          20.0        25
     3          25.2        32
     4          29.3        37
     5          32.6        41
    10          43.7        55
    15          50.5        63
    20          55.4        69
    30          62.5        78
    40          67.6        85
    50          71.6        89
    60          74.8        94
    70          77.6        97
    80          80.0       100

And thus, the bold statement in the title is justified. Life is half over at age ten, and three quarters over at age thirty. Note the rapid increase at very young ages: in the initial stages of life, life itself makes big strides forward. For instance, consider the concepts of speech, eating and walking; skills that are learned at a young age and are carried on throughout a person's life.

Another interesting observation that we can make is the age at which one year really seems to last one year. This can be calculated quite easily from the derivation above. For a life expectancy of 80 years, it is equal to 80/ln(81) - 1 = 17.2 years. Quite close to Freeman's original assumption of 20 years.

Consequences: The concept of Effective Age has far-reaching implications. Some of these I have summarized below:

* "Summer vacations lasted almost forever when I was in grammar school": True, they did. In fact, when you were six years old, an Apparent Year would be close to three years. That would make a three week summer vacation feel like almost nine weeks!

* "Now that I am older, I can communicate better with my parents": Right. As you can see, you're catching up with them! Closing the "generation gap", so to speak.

* "Life starts after 65": The credo of many people close to their pension age. Wrong: at 65, you only have about 5% of your Effective Age left. Choose your time wisely; start working late, and retire early.

* "Old people are slow": That is such an insensitive comment. Old people aren't slow at all, they simply have a different time perception.

* "Those annoying birthdays seem to roll around faster every year": True, they do. Better start celebrating your Effective Age.

[1] T. L. Freeman, Why it's later than you think, J. Irr. Res., 1983.
"""

Now, what happens when you throw in longevity escape velocity? The perception of the duration between SENS-like treatments goes down. According to Freeman's model, or Professor Pi's model, the rate at which your "effective age" (EA) increases slows down considerably once you hit 80. At 1k years, your EA is 120, and at 10k years, your EA is 160, and at 100k years your EA is just at 210. Perceiving time like you're (at most) 200 years old for 100k years is a pretty neat deal, if you don't account for (1) any benefits that SENS-like treatments give to perception of time (i.e. maybe it's accumulative damage on a molecular scale in the brain that causes the peculiar perception of time), or (2) any sort of neurological intervention. I suspect though that the actual results will not follow this Irreproducible Result/model, but it's still interesting as an extrapolation and something to bounce questions off of. Of course, different people likely have different considerations of what they consider to be a normal set point for time perception. So far, in my life, I am just now hitting 20 years in January, and I already feel ancient - half of my life has been spent on the internet, for instance, and according to that Irreproducible Result, this has been 70% of my perception of my lifespan if I was to live to 80. I'll have to play with this some more. 
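For anyone who wants to check the numbers, here is a minimal Python sketch of the model above. It is not from the original article; it just assumes the same 80-year life expectancy used in the derivation, and the variable and function names are mine:

import math

LIFE_EXPECTANCY = 80.0                   # assumed life expectancy, as in the derivation above
GAMMA = LIFE_EXPECTANCY / math.log(81)   # gamma = 80 / ln(81)

def effective_age(t):
    # Effective Age after t actual years, per EA = gamma * ln(t + 1)
    return GAMMA * math.log(t + 1)

# Reproduce the table from the article
for t in (0, 1, 2, 3, 4, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80):
    ea = effective_age(t)
    print("%6d yrs   EA = %5.1f   Life%% = %3.0f" % (t, ea, 100.0 * ea / LIFE_EXPECTANCY))

# Long-horizon extrapolation, well outside the range the boundary condition was fitted to
for t in (1000, 10000, 100000):
    print("%6d yrs   EA = %5.1f" % (t, effective_age(t)))

Running it reproduces the table and gives roughly 126, 168 and 210 effective years at 1k, 10k and 100k actual years, which is the same ballpark as the round numbers above.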
Just some fun, don't read too much into this. - Bryan http://heybryan.org/ 1 512 203 0507 From jonkc at bellsouth.net Tue Dec 22 17:12:03 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 22 Dec 2009 12:12:03 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <526603.59939.qm@web36507.mail.mud.yahoo.com> References: <526603.59939.qm@web36507.mail.mud.yahoo.com> Message-ID: <7A26F54C-13CA-4C35-906E-12FC538A4DF3@bellsouth.net> On Dec 22, 2009, at Gordon Swobe wrote: > Searle assumes these three propositions as premises (he calls them axioms. I prefer to call them premises because premises seem more open to criticism): > > P1) Programs are formal (syntactic). > P2) Minds have mental contents (semantics). > P3) Syntax is neither constitutive of nor sufficient for semantics. From P3 Searle assumes that syntactics and semantics have absolutely nothing to do with each other and that is obviously false. And Searle makes great use of an axiom not stated above in his infamous Chinese Room. let's call it P4: P4) Semantics is a very simple thing with only 2 values, understanding and non-understanding; you either have semantics or you don't . P4 is even sillier than P3, and that's pretty silly. Understanding is the most important part of mind and mind is the most complex thing in the known universe, and Searle thinks it has only 2 values. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Tue Dec 22 17:26:19 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 22 Dec 2009 10:26:19 -0700 Subject: [ExI] Carbon Message-ID: Twenty two years ago I wrote: ". . . the real carbon dioxide crisis will be when there is too little from people taking carbon (the strongest engineering material) out of the air to build houses, roads, tunnels through the mantle, industrial works, and spacecraft in large numbers." Carbon is for many purposes the best engineering material. Think of the compressive strength of diamond and the tensile strength of nanotubes. It is freely available to mine from the air and takes (in this context) modest energy to convert it to elemental form. Post or even slightly pre singularity I don't see how we can avoid a crash in the CO2 content of the air. As I put it: "Some civic minded types (the Autaban Society? Serria Club?) might burn coal fields to bring the level back up so plant productivity wouldn't be seriously hurt." People who are worried about the climate of the year 2100 don't seem to be aware of the larger picture. Extropians should be. Keith From scerir at libero.it Tue Dec 22 18:22:09 2009 From: scerir at libero.it (scerir) Date: Tue, 22 Dec 2009 19:22:09 +0100 (CET) Subject: [ExI] Exploring perceived acceleration of time as we age Message-ID: <18385260.366621261506129525.JavaMail.defaultUser@defaultHost> > I'll have to play with this some more. Just some fun, don't read too > much into this. > - Bryan There are intriguing pictures by Gilbert Garcin http://www.gilbert-garcin. com/ about all that (more or less): http://www.gilbert-garcin.com/chrono/photos/photo_2004_254.php http://www.gilbert-garcin.com/chrono/photos/photo_1997_6.php http://www.gilbert-garcin.com/chrono/photos/photo_2006_334.php From spike66 at att.net Tue Dec 22 19:29:08 2009 From: spike66 at att.net (spike) Date: Tue, 22 Dec 2009 11:29:08 -0800 Subject: [ExI] Carbon In-Reply-To: References: Message-ID: ... On Behalf Of Keith Henson > Subject: [ExI] Carbon > > Twenty two years ago I wrote: > > ". . . 
the real carbon dioxide crisis will be when there is > too little from people taking carbon (the strongest > engineering material) out of the air... "Some civic minded types (the Autaban Society? Serria Club?) > might burn coal fields to bring the level back up so plant > productivity wouldn't be seriously hurt." Keith Ja, but we wouldn't need to haul down the carbon out of the air. Rather we would deliver it directly to the homes and factories in the form of coal, the old fashioned way. Also, the quantities of carbon I expect we will use is miniscule, since all the really cool stuff I can imagine we would build would be tiny. Reason: we don't have the space to build really big stuff. We already build our homes mostly of carbon. If we made them from better-organized carbon, it would take far less than we currently use. spike From spike66 at att.net Tue Dec 22 20:05:57 2009 From: spike66 at att.net (spike) Date: Tue, 22 Dec 2009 12:05:57 -0800 Subject: [ExI] time to devour the dog In-Reply-To: <20091222073250.56900.qmail@moulton.com> References: <20091222073250.56900.qmail@moulton.com> Message-ID: > ...On Behalf Of moulton at moulton.com > ... > So I looked at the actual Copenhagen Accord which is really > quite short and can be read fairly quickly... > http://unfccc.int/resource/docs/2009/cop15/eng/l07.pdf > > It is a quick read. I suggest everyone read it and make up > their own mind. Fred Thanks Fred. In the rocket science biz, when a specification is finalized and signed, the next task is to create a requirements verification matrix (RVM), which unambiguously defines how the subcontracting entity or responsible organization fulfills every requirement to achieve earned value management milestones in order to get paid. The engineer first goes thru the document and finds all the "shalls" for shalls define requirements, which are then followed by a set of criteria to determine how the requirement fulfillment is to be verified. Only shalls are relevant in this step; the term "should" is an opinion, "must" is a prerequisite for bidding, "will" is a prediction, but "shall" defines a binding contract requirement. If I try to create the RVM, it is a short document indeed, and lacks some critical information. The first column or my RVM would have something like: RVM 1. ...we shall...enhance cooperative action... RVM 2. ...developed countries shall...give money to developing countries... RVM 3. ...actions (or progress) shall be communicated every two years... RVM 4. ...money and improved access shall be provided to developing countries... RVM 5. ...Copenhagen Green Climate Fund shall be established... The document doesn't say which are the developed countries and which are the developing countries, but clearly the latter benefits from the nebulous generosity of the former. The document doesn't say how a country can get from the former list to the latter, or what are the criteria for defining each, or if there is a third category which is a transitioning country, and if transitioning, which direction they are going. RVM 1, enhancing cooperative action carries no units to determine or measure if cooperative action has been enhanced. Since this is about governments working together, which is often frustrating, the unit I would suggest would be ergs. RVM 2, how much money, when, to do what, etc. RVM 3, the least ambiguous requirement, write a report every two years telling what, if anything, has been accomplished. 
RVM 3 doesn't specifically require that anything actually be accomplished, only that a report is written about what has or has not been accomplished. I have seen these kinds of requirements in real-world RVMs. They are always the first and most likely to be fulfilled. RVM 4, almost a repeat of 2, same questions. RVM 5, requires only that a fund be established, with no specific definition of who must contribute what. If I received this document as a request for proposal, I would no-bid this job on the basis that the requirements are insufficiently defined, responsible parties are undefined and the criteria for successful completion are missing. Finally, the logic escapes me for calling this kind of meeting in the dead of winter. Do they really believe that the planet is on the verge of meltdown if urgent action is not taken in the next few months? The whole notion would sell much better in a sweltering European July than in a blizzardy December. My dog is safe. spike From stathisp at gmail.com Tue Dec 22 21:04:20 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 23 Dec 2009 08:04:20 +1100 Subject: [ExI] time to devour the dog In-Reply-To: <580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com> Message-ID: <7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com> On 23/12/2009, at 1:46 AM, Stefano Vaj wrote: > 2009/12/22 Max More : >> It's just another case of greens trying to make all humans feel >> guilty for >> living. I'm really sick of this culture. > > Well said. Another way to look at it is that the Green agenda is very human- centric. They want to maintain a pretty garden planet able to sustain us and the species we favour forever. If they really had just as much respect for weeds, bacteria and insects then climate change wouldn't be a problem. The biosphere has suffered far more catastrophic insults and life in some form has always continued. -- Stathis Papaioannou From kanzure at gmail.com Tue Dec 22 21:07:39 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 22 Dec 2009 15:07:39 -0600 Subject: [ExI] [GRG] Exploring perceived acceleration of time as we age In-Reply-To: <20091222200914.HKVMB.148816.root@web11-winn.ispmail.private.ntl.com> References: <55ad6af70912220848m6d8d5683m9cf53deb19c30224@mail.gmail.com> <20091222200914.HKVMB.148816.root@web11-winn.ispmail.private.ntl.com> Message-ID: <55ad6af70912221307v71464a7dm5d6ba1a8a1219bf1@mail.gmail.com> On Tue, Dec 22, 2009 at 2:09 PM, Michael C Price wrote: > Is that extropian *Tim* Freeman? While trying to figure it out (and failing), I did find this: http://www.acm.org/ubiquity/volume_9/v9i39_yaffe.html from: http://news.ycombinator.com/item?id=713339 Anyway, nice to see the ycombinator people commenting on the issue. I still don't know who Tim is. 
- Bryan http://heybryan.org/ 1 512 203 0507 From sjatkins at mac.com Tue Dec 22 21:50:16 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Dec 2009 13:50:16 -0800 Subject: [ExI] atheism In-Reply-To: References: <311055.56333.qm@web59907.mail.ac4.yahoo.com> Message-ID: <27396BA3-97FD-4882-9B93-EAE63E83DA0B@mac.com> On Dec 18, 2009, at 10:32 AM, Dave Sill wrote: > 2009/12/18 Post Futurist >> >> Religion is IMO necessary for some families for the same reason; say, a family of yahoos ceases churchgoing, >> then they might trade off religion/faith for eating more to fill the void in their psyches; drink more spirits, become >> sex addicts. > It has been my experience that the biggest drinkers, overeaters and sex maniacs are much more often religious than not. I have no idea where you think you are going with such a line of argument. I grew up being hauled to church 4 times a week (Southern Baptist) and have observed other religious folks since. Except for a handful of serious meditators and another handful of more or less cult folks I haven't noticed any general improvement in character or habits from most of what passes for religion. Indeed, most (not all) religion is such a sloppy unthinking pile of myth and superstition that one would expect those of low standards in general, not just intellectually, to be affirming it. - samantha From sjatkins at mac.com Tue Dec 22 21:54:59 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Dec 2009 13:54:59 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <297005.31845.qm@web113616.mail.gq1.yahoo.com> References: <297005.31845.qm@web113616.mail.gq1.yahoo.com> Message-ID: <3A6051B5-A611-4F4D-AD97-B02F19FDB47B@mac.com> On Dec 18, 2009, at 3:09 AM, Ben Zaiboc wrote: > Gordon Swobe wrote: > >> --- On Thu, 12/17/09, Ben Zaiboc >> wrote: > ... > >>> It's "Yes", "Yes", and "What symbol grounding >> problem?" >> >> You'll understand the symbol grounding problem if and when >> you understand my last sentence, that I did nothing more >> interesting than does a cartoonist. > > LOL. I didn't mean that I don't understand what the 'symbol grounding problem' is, I meant that there is no such problem. This seems to be a pretty fundamental sticking point, so I'll explain my thinking. > > We do not know what 'reality' is. There is nothing in our brains that can directly comprehend reality (if that even means anything). What we do is collect sensory data via our eyes, ears, etc., and sift it, sort it, combine it, distort it with preconceptions and past memories, and create 'sensory maps' which are then used to feed the more abstract parts of our minds, to create 'the World according to You'. Your argument is wanting. What is our sensory experience of if not reality? In what do our senses and mind exist if not in reality? What would "direct comprehension" be, some mystical meandering down fantasy lane? Please explain how any material (i.e., existing or possibly existing) being could apprehend reality *except* through some type of senses and brain creating a map of what is "out there" from sense data. To condemn the only possible form of knowing reality that there can possibly be as actually not knowing reality at all is a bizarre argument. - samantha From sjatkins at mac.com Tue Dec 22 22:05:51 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Dec 2009 14:05:51 -0800 Subject: [ExI] Sick of Cyberspace? 
In-Reply-To: <20091218114641.GT17686@leitl.org> References: <20091217174201.e3r17nvw0oc4wk0o@webmail.natasha.cc> <580930c20912180319k51b1bdedle5056ee9e6595dd5@mail.gmail.com> <20091218114641.GT17686@leitl.org> Message-ID: <2092CC64-4234-4AF5-8E56-52E795BD5F42@mac.com> On Dec 18, 2009, at 3:46 AM, Eugen Leitl wrote: > On Fri, Dec 18, 2009 at 12:19:37PM +0100, Stefano Vaj wrote: > >> I come myself from "wet transhumanism" (bio/cogno), and while I got in > > I figured out in vivo patching wasn't going to be feasible within > natural lifetimes when I was around 17. From what has gone so far > (almost 30 years) it looks like I was correct. Another 30-40 > years and I'll be distinctly past caring. Very little left to > patch, if any. Please show your argument. I don't care much when you decided that this was so. I rather expect to spend time in a dewar before the "natural" decay of aging is licked but I doubt I will do so before there is significant enhancement to the biological. > >> touch with the movement exactly out of curiosity to learn more about >> the "hard", "cyber/cyborg" side of things, I am persuased the next era > > Cyborg belongs into in vivo patching cathegory. Doesn't work either, > at least if implants in life extension and capabilities amplification > are concerned. Wearable stuff is fine. Implanted stuff is no good. Why do you say this? We are making more types of implants every so often now. Many are crude but the work in signaling and reading individual neurons gives me some hope. Also I believe I read of work growing nerve/chip connects some time back. So I don't see the basis for categorically denying there is any possible good that can come of this area of R&D. > > You'll notice we're not even in decent wearable cathegory. I would > have bet good money a decade ago we would have normal people using HMDs > and HUDs out in the streets by now. HMDs and HUDs are still pricey. I have some hope for the contact lens HUD however. Also notice what people are doing. They are carrying a more powerful computer than they could buy not long ago in the palm of their hand. Whether the tech is inside your body or not does not make you significantly less symbiotic with technology. > >> is still about chemistry, and, that when it will stops being there >> will be little difference between the two. > > When we're talking about convergence, it's mostly convergence towards > the nanoscale. The dry/stiff versus solvated/floppy isn't going to > converge at all. There doesn't seem a lot of need for volatiles, apart > from cooling and power supply maybe. I am not at all sure we will ever solve many aspects of intelligences without rather sloppy/floppy circuit equivalents. > >> In other words, if we are becoming machines, machines are becoming >> "chemical" and "organic" at an even faster pace (carbon rather than >> steel and silicon, biochips, nano...). > > Organic is one thing, biology another. It's a safe bet there will > be zero proteins, DNA, lipid bilayers or water in the result after > convergence. Is it really? I am not so sure. 
- samantha From natasha at natasha.cc Tue Dec 22 23:19:01 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 22 Dec 2009 18:19:01 -0500 Subject: [ExI] time to devour the dog In-Reply-To: <7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com> <7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com> Message-ID: <20091222181901.6pggahj6m8c4owgo@webmail.natasha.cc> Btw, Does a paw have to touch the earth to leave a carbon footprint? We purchased socks from our local Walmart and put them on our dog so that when he takes walks around the block, he has less tactile contact with the Earth. Natasha From natasha at natasha.cc Tue Dec 22 23:39:19 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 22 Dec 2009 18:39:19 -0500 Subject: [ExI] time to devour the dog In-Reply-To: <20091222181901.6pggahj6m8c4owgo@webmail.natasha.cc> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com> <580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com> <7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com> <20091222181901.6pggahj6m8c4owgo@webmail.natasha.cc> Message-ID: <20091222183919.usw7htkhwk8s88ck@webmail.natasha.cc> If a brain's carbon level rises, does its linguistic acuity dim? Quoting natasha at natasha.cc: > Btw, > > Does a paw have to touch the earth to leave a carbon footprint? > > We purchased socks from our local Walmart and put them on our dog so > that when he takes walks around the block, he has less tactile contact > with the Earth. > > Natasha > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From hkeithhenson at gmail.com Wed Dec 23 01:12:19 2009 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 22 Dec 2009 18:12:19 -0700 Subject: [ExI] Carbon Message-ID: On Tue, Dec 22, 2009 at 3:05 PM, "spike" wrote: >> >> Twenty two years ago I wrote: >> >> ". . . the real carbon dioxide crisis will be when there is >> too little from people taking carbon (the strongest >> engineering material) out of the air... "Some civic minded types (the > Autaban Society? Serria Club?) >> might burn coal fields to bring the level back up so plant >> productivity wouldn't be seriously hurt." Keith > > Ja, but we wouldn't need to haul down the carbon out of the air. Rather we > would deliver it directly to the homes and factories in the form of coal, > the old fashioned way. By the time carbon is coming out of the air more than it goes in, we will have solved the energy problem. Let's say you have a 100 amp service at 240 volts. That's 24 kW, 576 kWh/day. 17,280 kWh per month. To get a ton of carbon out of the air takes 360 kWh. To make it into hydrocarbons (or carbon) takes ~50 times that much, about 18,000 kWh, which is close enough to a month of power feed to your house. 18,000 kWh at a penny per kWh is $180. That's a bit more than coal costs delivered to power plants, but probably around the same price as retail delivery of coal. Besides, CO2 out of the air has got to be a cleaner starting source. > Also, the quantities of carbon I expect we will use > is miniscule, since all the really cool stuff I can imagine we would build > would be tiny. Think in terms of a fractal floating beach, 100 meters for every person on the planet anchored in the Pacific. Keith >Reason: we don't have the space to build really big stuff. 
> We already build our homes mostly of carbon. ?If we made them from > better-organized carbon, it would take far less than we currently use. > > spike From spike66 at att.net Wed Dec 23 01:03:30 2009 From: spike66 at att.net (spike) Date: Tue, 22 Dec 2009 17:03:30 -0800 Subject: [ExI] time to devour the dog In-Reply-To: <20091222183919.usw7htkhwk8s88ck@webmail.natasha.cc> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com><580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com><7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com><20091222181901.6pggahj6m8c4owgo@webmail.natasha.cc> <20091222183919.usw7htkhwk8s88ck@webmail.natasha.cc> Message-ID: <60C2B5A4BFDF4382823D477EF07FFC8A@spike> ...On Behalf Of natasha at natasha.cc ... > Subject: Re: [ExI] time to devour the dog > > If a brain's carbon level rises, does its linguistic acuity dim? ... > Natasha That would depend entirely on the form of this carbon. {8^D When a pair of these carbon atoms are attached to one oxygen and five hydrogens in my brain, I have been known to utter the eloquent linguistic acuities. You may have read some of my posts here which were clearly inspired by rising carbon levels in this form, such as the sex lamas post from a few Newtonmasses ago. spike From gts_2000 at yahoo.com Wed Dec 23 02:31:38 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 22 Dec 2009 18:31:38 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <580930c20912220611y31afd5afte1b1b18b144e0e3e@mail.gmail.com> Message-ID: <395894.67074.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/22/09, Stefano Vaj wrote: > > Biological brains do something we don't yet > understand. Call it X. Whatever X may be, it causes the > brain to have the capacity for intentionality. > > Really? How would you define it? I like this basic definition from the Stanford encyclopedia of philosophy online: "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs." http://plato.stanford.edu/entries/intentionality/ We sometimes get caught up in discussions of the related word "consciousness" but I can understand how that word makes some people uncomfortable. To have intentionality is just simply to have something in mind. You have it just as surely as you have this sentence in mind. -gts From gts_2000 at yahoo.com Wed Dec 23 02:57:56 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 22 Dec 2009 18:57:56 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <874907.77882.qm@web36504.mail.mud.yahoo.com> --- On Tue, 12/22/09, Stathis Papaioannou wrote: >> The argument again: >> >> P1) Programs are formal (syntactic). >> P2) Minds have mental contents (semantics). >> P3) Syntax is neither constitutive of nor sufficient >> for semantics. >> C1) Programs are neither constitituve nor sufficient >> for minds. > It is also possible that programs are *only* formal but > programs can have minds because P3 is false, and syntax actually is > constitutive and sufficient for semantics. Sure, we just need to show P3 false. > I base this on the fact that all my brain does is manipulate > information, and yet I feel that I understand > things. Searle of course disagrees because he takes it as > axiomatic that symbol-manipulation can't give rise to understanding; > but it also used to be taken as axiomatic that matter could not give > rise to understanding. Well P3 is certainly open to debate. 
Can you show how syntax gives rise to semantics? Can you show how the man in the room who does nothing more than shuffle Chinese symbols according to syntactic rules can come to know the meanings of those symbols? If so then you've cooked Searle's goose. -gts From p0stfuturist at yahoo.com Tue Dec 22 00:47:15 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Mon, 21 Dec 2009 16:47:15 -0800 (PST) Subject: [ExI] atheism Message-ID: <802158.41458.qm@web59916.mail.ac4.yahoo.com> >I don't consider religion merely futile, I think it's actively harmful to humankind. I guess that pretty succinctly highlights where we differ. --John Clark ? No doubt religion is actively harmful. My point is you offer no evidence religion is more actively destructive than politics, entertainment, etc. Because you cannot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Tue Dec 22 14:46:59 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 22 Dec 2009 06:46:59 -0800 (PST) Subject: [ExI] atheism Message-ID: <235370.43610.qm@web59913.mail.ac4.yahoo.com> >There again, it depends on the religion. If it is one that postulates the existence of an entity "out of time and space", I would say that it definitely is. Stefano Vaj ? >But more destructive? -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Wed Dec 23 04:10:02 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Tue, 22 Dec 2009 20:10:02 -0800 (PST) Subject: [ExI] atheism Message-ID: <746121.54668.qm@web59910.mail.ac4.yahoo.com> >It has been my experience that the biggest drinkers, overeaters and sex maniacs are much more often religious than not. I have no idea where you think you are going with such a line of argument. I grew up being hauled to church 4 times a week (Southern Baptist) and have observed other religious folks since. Except for a handful of serious meditators and another handful of more or less cult folks I haven't noticed any general improvement in character or habits from most of what passes for religion. Indeed, most (not all) religion is such a sloppy unthinking pile of myth and superstition that one would expect those of low standards in general, not just intellectually, to be affirming it. samantha For starters, how are religious schools worse than public schools? They are not. Though yours' is anecdotal evidence (your experience with Southern Baptism several decades ago is outdated anecdote) I agree with it as far as individuals go. But families who go to houses of worship are usually harmless to themselves and to others. Where am I going with this? Again, it is to write that none of you has evidence-- nor will stats furnish you any evidence-- that bad religion is more destructive than bad politics, bad entertainment, and so forth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 23 05:18:29 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 23 Dec 2009 00:18:29 -0500 Subject: [ExI] D Wave back from the grave? (was: Steorn back from the grave?) In-Reply-To: <802158.41458.qm@web59916.mail.ac4.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> Message-ID: <144653D7-BE2F-42B8-93A4-978AD9560035@bellsouth.net> I was very skeptical of D waves claim to have made a working Quantum Computer, it's still probably Bullshit but I'm no longer quite as certain. 
It seems that Google has taken an interest in D Wave through their head of image recognition, Hartmut Neven. Neven started a image recognition company called Neven Vision and about 3 years ago Google bought the company. Either Neven and Google are not as smart as I thought they were or D Wave is not a total scam after all. http://www.newscientist.com/article/dn18272-google-demonstrates-quantum-computer-image-search.html http://74.125.47.132/search?q=cache:mXGa0R64U-EJ:www.searchenginejournal.com/google-neven-vision-image-recognition/3728/+%22Hartmut+Neven%22&cd=2&hl=en&ct=clnk&gl=us John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 23 05:31:09 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 23 Dec 2009 00:31:09 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <395894.67074.qm@web36502.mail.mud.yahoo.com> References: <395894.67074.qm@web36502.mail.mud.yahoo.com> Message-ID: <19714471-4C1A-4854-A395-05C10FA8B6BF@bellsouth.net> On Dec 22, 2009, Gordon Swobe wrote: > I like this basic definition from the Stanford encyclopedia of philosophy online: "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs." And the power of minds to be about, to represent, or to stand for, things, properties and states of affairs means they have intentionality. This confirms what I have always thought, you will never in your life, not even once, receive philosophical enlightenment from reading a definition from an encyclopedia, and a dictionary is even worse. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Dec 23 05:35:12 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 23 Dec 2009 16:35:12 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <874907.77882.qm@web36504.mail.mud.yahoo.com> References: <874907.77882.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/23 Gordon Swobe : > Well P3 is certainly open to debate. Can you show how syntax gives rise to semantics? Can you show how the man in the room who does nothing more than shuffle Chinese symbols according to syntactic rules can come to know the meanings of those symbols? If so then you've cooked Searle's goose. The man doesn't know the meaning of the symbols, all he knows is how to manipulate them. Neither do the neurons know the meaning of the symbols they manipulate. However, amazingly, complex enough symbol manipulation by neurons, electronic circuits or even men in Chinese rooms gives rise to a system that understands the symbols. I know this because I have a brain which at the basic level only "knows" how to follow the laws of physics, but in so doing it gives rise to a mind which has understanding. At first glance it seems that this may possibly be due to some property of the substrate, but the neural replacement experiment I keep going on about shows that duplicating brain behaviour with a completely different substrate will also duplicate the understanding, and this implies that it is actually the function rather than the substance of the brain that is important. At a more basic level, it seems clear to me that all semantics must at bottom reduce to syntax. A child learns to associate one set of inputs - the sound or shape of the word "dog" - with another set of inputs - a hairy four-legged beast that barks. 
Everything you know is a variant on this theme, and it's all symbol manipulation. -- Stathis Papaioannou From eschatoon at gmail.com Wed Dec 23 06:02:53 2009 From: eschatoon at gmail.com (Giulio Prisco (2nd email)) Date: Wed, 23 Dec 2009 07:02:53 +0100 Subject: [ExI] time to devour the dog In-Reply-To: <52C75A09949548E6947D00AEF57FD05B@spike> References: <52C75A09949548E6947D00AEF57FD05B@spike> Message-ID: <1fa8c3b90912222202y781d0556n5bda552fdf7c745e@mail.gmail.com> I recommend that the authors estimate their own carbon footprint, compare it to that of a dog or a rabbit, then apply the logic of the article to themselves and draw the appropriate conclusions. G. 2009/12/21 spike : > > Who knew?? A dog is worse than an SUV: > > http://news.yahoo.com/s/afp/20091220/sc_afp/lifestyleclimatewarminganimalsfood;_ylt=AujXbeP6jnj13L53lJiX7iGs0NUE;_ylu=X3oDMTQ4YXZnZDQyBGFzc2V0A2FmcC8yMDA5MTIyMC9saWZlc3R5bGVjbGltYXRld2FybWluZ2FuaW1hbHNmb29kBGNjb2RlA21vc3Rwb3B1bGFyBGNwb3MDMTAEcG9zAzcEcHQDaG9tZV9jb2tlBHNlYwN5bl9oZWFkbGluZV9saXN0BHNsawNwb2xsdXRpbmdwZXQ > > spike > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- Giulio Prisco http://cosmeng.org/index.php/Giulio_Prisco aka Eschatoon Magic http://cosmeng.org/index.php/Eschatoon From protokol2020 at gmail.com Wed Dec 23 08:24:14 2009 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Wed, 23 Dec 2009 09:24:14 +0100 Subject: [ExI] Carbon In-Reply-To: References: Message-ID: A few months ago, AGW was still a bit popular on this list. Not anymore, I am glad to see that. Now, it is time to enrich te air with some more CO2, for a greener planet! I am NOT joking. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Dec 23 10:13:42 2009 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Dec 2009 11:13:42 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <636244.44626.qm@web36503.mail.mud.yahoo.com> Message-ID: <20091223101342.GP17686@leitl.org> On Mon, Dec 21, 2009 at 01:02:08PM -0700, Jeff Davis wrote: > Searle's argument seems little more than another attempt -- born of > ooga-booga spirituality -- to deny the basic truth of materialism: > Life, persona, mind, and consciousness are the entirely unspecial > result of the "bubble, bubble, toil, and trouble" of stardust in the > galactic cauldron. When life, persona, mind, consciousness are > eventually deconstructed, they will be seen to be as mundane as the > dirt from which they sprang. Some may find this a bad thing. I > consider it, already, immensely liberating. I find it really difficult to understand what drives reasonably smart people today to bog down in a rerun of the vis vitalis debate. Which was empirically put to death on 22th February of 1828 or so (though I presume some die-hards will only be convinced by a tour de force of synthetic biology). Be careful, or you'll soon start battling the swobes on merits of phlogiston. He looks quite the type. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From gts_2000 at yahoo.com Wed Dec 23 11:32:25 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 03:32:25 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <545413.48194.qm@web36508.mail.mud.yahoo.com> --- On Wed, 12/23/09, Stathis Papaioannou wrote: > However, amazingly, complex enough symbol manipulation by neurons, > electronic circuits or even men in Chinese rooms gives rise to a system > that understands the symbols. Or perhaps nothing "amazing" happens. Instead of believing in magic, I find easier to accept that the computationalist theory of mind is simply incoherent. It does not explain the facts. > I know this because I have a brain which at the basic level only > "knows" how to follow the laws of physics, but in so doing it gives > rise to a mind which has understanding. Nobody has suggested we need violate any laws of physics to obtain understanding. The suggestion is that the brain must do something in addition to or instead of running formal programs. Searle's work brings us one step closer to understanding what's really going on in the brain. > At a more basic level, it seems clear to me that all > semantics must at bottom reduce to syntax. > A child learns to associate one set of inputs > - the sound or shape of the word "dog" - with another set > of inputs - a hairy four-legged beast that barks. Everything you > know is a variant on this theme, and it's all symbol manipulation. The child learns the meaning of the sound or shape "dog", whereas the program merely learns to associate the form of the word "dog" with an image of a dog. While their behaviors might match, the former has semantics accompanying its behavior, the latter does not (or if it does then we need to explain how). -gts From gts_2000 at yahoo.com Wed Dec 23 11:43:38 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 03:43:38 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <462364.55721.qm@web36502.mail.mud.yahoo.com> --- On Mon, 12/21/09, Christopher Doty wrote: > 2009/12/21 John Clark : > >> even if computers *did* have consciousness, they > still would have no > >> understanding the meanings of the symbols > contained in their programs. > > > > There may be stupider statements than the one that can > be seen above, but I > > am unable to come up with an example of one, at least > right at this instant > > off the top of my head. > > The *entire* statement is not stupid.? Thank you Chris. Some people understand the symbol grounding problem in formal programs. Others don't really care to understand it. > Nonetheless, I'm hard-pressed to see how a computer to come > to consciousness without having any understanding of any of > the symbols in its programming...... It would not by virtue of its syntactic processing of symbols come to an understanding of them, i.e., syntax is not enough to give semantics. However I would not go so far as to say that conscious computers might not find some other way to get semantics, just as humans do. The point is that it would involve something other than running programs. 
-gts From bbenzai at yahoo.com Wed Dec 23 11:32:58 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 23 Dec 2009 03:32:58 -0800 (PST) Subject: [ExI] Sensory Reality (was: The symbol grounding problem in strong AI) In-Reply-To: Message-ID: <946952.15931.qm@web113609.mail.gq1.yahoo.com> Samantha Atkins wrote: > On Dec 18, 2009, at 3:09 AM, Ben Zaiboc wrote: > > We do not know what 'reality' is.? There is > nothing in our brains that can directly comprehend reality > (if that even means anything).? What we do is collect > sensory data via our eyes, ears, etc., and sift it, sort it, > combine it, distort it with preconceptions and past > memories, and create 'sensory maps' which are then used to > feed the more abstract parts of our minds, to create 'the > World according to You'.? > > Your argument is wanting.? What is our sensory > experience of if not reality?? In what do our senses > and mind exist if not in reality?? What would "direct > comprehension" be, some mystical meandering down fantasy > lane?? Please explain how any material (i.e., existing > or possibly existing) being could apprehend reality *except* > through some type of senses and brain creating a map of what > is "out there" from sense data.???To condemn > the only possible form of knowing? reality that there > can possibly be as actually not knowing reality at all is a > bizarre argument. (I've changed the subject line, as I don't want it to get confused with that other, fruitless, argument) You're quite right. I probably put it poorly. What I'm saying is just that we create our own reality within our heads, from the sensory feeds we get. I'm not advocating solipsism. Those sensory feeds obvously come from the real world, but as you say, we have no way of directly apprehending it, we have to build a representation that is consistent with our sensory inputs. This explains why different people seem to 'live in different worlds' (their internal representations are different in some way. To Degas, a river would be a completely different thing to the same river experienced by Tiger Woods, for example), and also why there is no 'symbol grounding problem', because the 'things' that our mental symbols represent are other mental 'things', built from scraps of sensory information that our eyes, ears etc., glean from the environment. It would be interesting to create an exhibition of the actual data that we are capable of directly getting from the environment (with unenhanced sense organs, I mean). I think most people would be surprised by it, not to mention confused. Ben Zaiboc From gts_2000 at yahoo.com Wed Dec 23 11:59:58 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 03:59:58 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091223101342.GP17686@leitl.org> Message-ID: <477951.24225.qm@web36504.mail.mud.yahoo.com> Jeff, > Searle's argument seems little more than another > attempt -- born of ooga-booga spirituality -- to deny the basic truth > of materialism: It's precisely the opposite of that. Searle wants to know what possesses some intelligent people to attribute "mind" to mere programs running on computers, programs which in the final analysis do nothing more interesting that kitchen can-openers. He's not the mystic. They are. 
-gts From gts_2000 at yahoo.com Wed Dec 23 12:27:23 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 04:27:23 -0800 (PST) Subject: [ExI] Sensory Reality In-Reply-To: <946952.15931.qm@web113609.mail.gq1.yahoo.com> Message-ID: <977163.57141.qm@web36503.mail.mud.yahoo.com> --- On Wed, 12/23/09, Ben Zaiboc wrote: > What > I'm saying is just that we create our own reality within our > heads, from the sensory feeds we get.? I'm not > advocating solipsism.? Those sensory feeds obvously > come from the real world, but as you say, we have no way of > directly apprehending it, we have to build a representation > that is consistent with our sensory inputs.? It seems you have embarked on the project of re-creating the metaphysical idealism of Immanuel Kant. He had a name for the thing that we have "no way of directly apprehending". He called it the "thing-in-itself". In general he called that external unknowable reality of things-in-themselves "the noumena" as distinct from world given to us by sense perception, which he called "the phenomena". Kant revolutionized philosophy with that idea and others related to it. Too bad he's not here to pat you on the back. :) -gts From stathisp at gmail.com Wed Dec 23 13:18:10 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 24 Dec 2009 00:18:10 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <545413.48194.qm@web36508.mail.mud.yahoo.com> References: <545413.48194.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/23 Gordon Swobe : > --- On Wed, 12/23/09, Stathis Papaioannou wrote: > >> However, amazingly, complex enough symbol manipulation by neurons, >> electronic circuits or even men in Chinese rooms gives rise to a system >> that understands the symbols. > > Or perhaps nothing "amazing" happens. Instead of believing in magic, I find easier to accept that the computationalist theory of mind is simply incoherent. It does not explain the facts. So you find the idea that in some unknown way chemical reactions cause mind not particularly amazing, while the same happening with electric circuits is obviously incredible? >> I know this because I have a brain which at the basic level only >> "knows" how to follow the laws of physics, but in so doing it gives >> rise to a mind which has understanding. > > Nobody has suggested we need violate any laws of physics to obtain understanding. The suggestion is that the brain must do something in addition to or instead of running formal programs. Searle's work brings us one step closer to understanding what's really going on in the brain. A computer only runs a formal program in the mind of the programmer. A computer undergoes internal movements according to the laws of physics, which movements can (incidentally) be described algorithmically. This is the most basic level of description. The programmer comes along and gives a chunkier, higher level description which he calls a program, and the end user, blind to either the electronics or the program, describes the computer at a higher level still. But how you describe it does not change what the computer actually does or how it does it. The program is like a plan to help the programmer figure out where to place the various parts of the computer in relation to each other so that they will do a particular job. Both the computer and the brain go clickety-clack, clickety-clack and produce similar intelligent behaviour. 
The computer's parts were deliberately arranged by the programmer in order to bring this result about, whereas the brain's parts were arranged in a spontaneous and somewhat haphazard way by nature, making it more difficult to see the algorithmic pattern (although it must be there, at least at the level of basic physics). In the final analysis, it is this difference between them that convinces you the computer doesn't understand what it's doing and the brain does. >> At a more basic level, it seems clear to me that all >> semantics must at bottom reduce to syntax. >> A child learns to associate one set of inputs >> - the sound or shape of the word "dog" - with another set >> of inputs - a hairy four-legged beast that barks. Everything you >> know is a variant on this theme, and it's all symbol manipulation. > > The child learns the meaning of the sound or shape "dog", whereas the program merely learns to associate the form of the word "dog" with an image of a dog. While their behaviors might match, the former has semantics accompanying its behavior, the latter does not (or if it does then we need to explain how). What is it to learn the meaning of the word "dog" if not to associate its sound or shape with an image of a dog? Anyway, despite the above, and without any help from Searle, it might still seem reasonable to entertain the possibility that there is something substrate-specific about consciousness, and fear that if you agree to upload your brain the result would be a mindless zombie. That is where the partial brain replacement (eg. of the visual cortex or Wernicke's area) thought experiment comes into play, proving that if you duplicate the behaviour of neurons, you must also duplicate the consciousness/qualia/intentionality/understanding. If you disagree that it proves this, please explain why you disagree, and what you think would actually happen if such a replacement were made. Perhaps you could also ask the people on the other discussion group you have mentioned. -- Stathis Papaioannou From stathisp at gmail.com Wed Dec 23 13:26:34 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 24 Dec 2009 00:26:34 +1100 Subject: [ExI] Sensory Reality (was: The symbol grounding problem in strong AI) In-Reply-To: <946952.15931.qm@web113609.mail.gq1.yahoo.com> References: <946952.15931.qm@web113609.mail.gq1.yahoo.com> Message-ID: 2009/12/23 Ben Zaiboc : > (I've changed the subject line, as I don't want it to get confused with that other, fruitless, argument) > > You're quite right. ?I probably put it poorly. What I'm saying is just that we create our own reality within our heads, from the sensory feeds we get. ?I'm not advocating solipsism. ?Those sensory feeds obvously come from the real world, but as you say, we have no way of directly apprehending it, we have to build a representation that is consistent with our sensory inputs. > > This explains why different people seem to 'live in different worlds' (their internal representations are different in some way. To Degas, a river would be a completely different thing to the same river experienced by Tiger Woods, for example), and also why there is no 'symbol grounding problem', because the 'things' that our mental symbols represent are other mental 'things', built from scraps of sensory information that our eyes, ears etc., glean from the environment. Yes, there is no special knowledge of the world created in your brain, there is just a complex network of relationships between sensory inputs. 
-- Stathis Papaioannou From sparge at gmail.com Wed Dec 23 13:56:25 2009 From: sparge at gmail.com (Dave Sill) Date: Wed, 23 Dec 2009 08:56:25 -0500 Subject: [ExI] atheism In-Reply-To: <802158.41458.qm@web59916.mail.ac4.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> Message-ID: 2009/12/21 Post Futurist > > >I don't consider religion merely futile, I think it's actively harmful > to humankind. I guess that > pretty succinctly highlights where we differ. > --John Clark That was me, not John. > No doubt religion is actively harmful. My point is you offer no evidence religion is more actively > destructive than politics, entertainment, etc. > Because you cannot. Well, that's pretty much what I said in the previous sentence in that message, which you snipped: "We don't have numbers or a controlled experiment, we do have plenty of history." So I could provide lots of evidence from history, the Inquisition, the Crusades, and other acts of repression and persecution, but it's really just anecdotal. I can't prove it, but I still believe it. -Dave From ankara at tbaytel.net Wed Dec 23 14:54:07 2009 From: ankara at tbaytel.net (ankara at tbaytel.net) Date: Wed, 23 Dec 2009 09:54:07 -0500 Subject: [ExI] atheism Message-ID: <9AAAE3E2-71DC-480E-83BA-FF1B21FA0BBF@tbaytel.net> Did you just say there's was no evidence of harm ? Guess you've never been a female human ordered to procreate, denied dominion over your own body then. Nothing like a shoe full of kids to make a woman happy, eh? > From: Post Futurist > > No doubt religion is actively harmful. My point is you offer no > evidence religion is more actively destructive than politics, > entertainment, etc. > Because you cannot. From kanzure at gmail.com Wed Dec 23 15:02:29 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Dec 2009 09:02:29 -0600 Subject: [ExI] Fwd: [Space Projects] New Space Exploration Unconference Announced In-Reply-To: References: Message-ID: <55ad6af70912230702n61bc7de4sa5530a23ff6bb11@mail.gmail.com> ---------- Forwarded message ---------- From: scifiguysandiego Date: Wed, Dec 23, 2009 at 8:18 AM Subject: [Space Projects] New Space Exploration Unconference Announced To: spaceprojects at yahoogroups.com Thought this might be of interest to the group ... WORLD'S FIRST SPACE EXPLORATION UNCONFERENCE ANNOUNCED "SpaceUp" to be held in San Diego San Diego, CA ? It's an exciting time for space exploration: SpaceX is launching the first low-cost rockets, Virgin Galactic recently unveiled SpaceShipTwo, and Bigelow Aerospace is building private habitats in orbit. These forward-thinking companies?and dozens of others just like them?are all within a day's drive of San Diego. Which is why the San Diego Space Society is today announcing SpaceUp, the world's first space exploration unconference, to be held in late February/early March, 2010. More information is available at SpaceUp.org. "The tech unconferences I've been part of are downright electrifying. It's high time we start one for the space community," says Chris Radcliff, San Diego Space Society board member and SpaceUp unconference chairman. "There's so much excitement about this idea already, we want to let people participate even before we have a definite date." The unconference format, which has been popularized in the tech community via such events as BarCamp and FooCamp, offers a unique forum where the participants decide the topics, the schedule, even the structure of the event. 
Everyone who attends the unconference is encouraged to give a talk, moderate a panel or start a discussion. SpaceUp applies this non-profit model to the space industry for the first time. Both industry professionals and ardent outer-space enthusiasts are invited to participate, with attendance capped at 200 people to ensure meaningful interaction among all participants. Registration details for the unconference are still being set, but individuals can indicate their interest and reserve a spot via http://tr.im/kickitup. Further information on SpaceUp is available at SpaceUp.org, via the SpaceUp Twitter feed, www.twitter.com/spaceupconf, or on the SpaceUp Facebook page, http://www.facebook.com/pages/SpaceUp/97724374485. # # # -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Wed Dec 23 16:11:39 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Dec 2009 10:11:39 -0600 Subject: [ExI] Fwd: Exploring perceived acceleration of time as we age In-Reply-To: References: <55ad6af70912220848m6d8d5683m9cf53deb19c30224@mail.gmail.com> <55ad6af70912220859k55a7d41dn71fe3298ff7cafe4@mail.gmail.com> Message-ID: <55ad6af70912230811g174eb4e9i8c61748ebeaa27e2@mail.gmail.com> On Wed, Dec 23, 2009 at 12:40 AM, Jeff L Jones wrote: > On Tue, Dec 22, 2009 at 8:59 AM, Bryan Bishop wrote: >> Another interesting observation that we can make is the age at which >> one year really seems to last one year. > > I don't understand this part. What does it mean to think a year lasts > a year, if a year seems to last a different length for each age? It > doesn't seem like there is any meaningful question to be asked here. Not sure what the original author was trying to say there. But I didn't go into enough detail on what I was thinking. Suppose that "effective age" has a different value than your actual age, and suppose we hit LEV, and in other words, you're going to live for a very, very long time. At what "effective age" would you prefer to be set at? I suppose most people would answer, whatever the global average is for what people's "effective age" is. I would prefer to be set to where 1 year feels like 3 years, or even greater. - Bryan http://heybryan.org/ 1 512 203 0507 From max at maxmore.com Wed Dec 23 16:17:18 2009 From: max at maxmore.com (Max More) Date: Wed, 23 Dec 2009 10:17:18 -0600 Subject: [ExI] 50 years of discoveries in science, technology, engineering, medicine and mathematics Message-ID: <200912231617.nBNGHRif009398@andromeda.ziaspace.com> 50 Science Sagas for 50 Years How do you summarize the past 50 years of discoveries in science, technology, engineering, medicine and mathematics?
http://www.casw.org/casw/article/50-science-sagas-50-years From nanite1018 at gmail.com Wed Dec 23 16:22:09 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 23 Dec 2009 11:22:09 -0500 Subject: [ExI] Sensory Reality In-Reply-To: <977163.57141.qm@web36503.mail.mud.yahoo.com> References: <977163.57141.qm@web36503.mail.mud.yahoo.com> Message-ID: <85684B9F-8D45-4F7B-B633-7F4D07E509B3@GMAIL.COM> >> What >> I'm saying is just that we create our own reality within our >> heads, from the sensory feeds we get. I'm not >> advocating solipsism. Those sensory feeds obvously >> come from the real world, but as you say, we have no way of >> directly apprehending it, we have to build a representation >> that is consistent with our sensory inputs. > > It seems you have embarked on the project of re-creating the > metaphysical idealism of Immanuel Kant. He had a name for the thing > that we have "no way of directly apprehending". He called it the > "thing-in-itself". > > In general he called that external unknowable reality of things-in- > themselves "the noumena" as distinct from world given to us by sense > perception, which he called "the phenomena". > > Kant revolutionized philosophy with that idea and others related to > it. Too bad he's not here to pat you on the back. :) > > -gts What importance does it have that you cannot magically "experience" "things-in-themselves"? Outside of sort-of solving that symbol grounding problem, it doesn't effect anything at all. We create a model of the world based on our sense-perceptions of reality. That model is based, necessarily on those sense-perceptions. We revise our model continually based on all of our sense-perceptions, and over time it gets better and better, closer and closer to reality as it is "in- itself." We use this model to plan and act in the world in order to live. We are not divorced from reality because we are not gods. We simply have to work in order to understand it. We may be wrong, but if so, we correct our model. I do not see any meaningful consequences in acknowledging the fact that we aren't omniscient gods. Reason is still valid, and science is the way to understanding the natural world. It isn't limiting to acknowledge we are not infallible. In fact, that very admission is what is necessary for us to begin understanding the world. Oh, and Ben, your experiment, where you experience the world directly as sense inputs, is something that is easy to at least see: look at babies. Babies have no mental representations of the world, and are just beginning to try to form one. And I think everyone knows how much help babies need to survive. From natasha at natasha.cc Wed Dec 23 16:31:25 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 23 Dec 2009 10:31:25 -0600 Subject: [ExI] time to devour the dog In-Reply-To: <60C2B5A4BFDF4382823D477EF07FFC8A@spike> References: <200912220027.nBM0RYS7003522@andromeda.ziaspace.com><580930c20912220646i3e10c850tcbab22b3e911c23f@mail.gmail.com><7C6A7861-9FEF-4E3D-9031-F87FA29CC177@gmail.com><20091222181901.6pggahj6m8c4owgo@webmail.natasha.cc><20091222183919.usw7htkhwk8s88ck@webmail.natasha.cc> <60C2B5A4BFDF4382823D477EF07FFC8A@spike> Message-ID: Glad to see someone has my sense of humor! 
Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Tuesday, December 22, 2009 7:04 PM To: 'ExI chat list' Subject: Re: [ExI] time to devour the dog ...On Behalf Of natasha at natasha.cc ... > Subject: Re: [ExI] time to devour the dog > > If a brain's carbon level rises, does its linguistic acuity dim? ... > Natasha That would depend entirely on the form of this carbon. {8^D When a pair of these carbon atoms are attached to one oxygen and five hydrogens in my brain, I have been known to utter the eloquent linguistic acuities. You may have read some of my posts here which were clearly inspired by rising carbon levels in this form, such as the sex lamas post from a few Newtonmasses ago. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natasha.cc Wed Dec 23 16:35:16 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 23 Dec 2009 10:35:16 -0600 Subject: [ExI] atheism In-Reply-To: <9AAAE3E2-71DC-480E-83BA-FF1B21FA0BBF@tbaytel.net> References: <9AAAE3E2-71DC-480E-83BA-FF1B21FA0BBF@tbaytel.net> Message-ID: <55E5CF4DD3B648E5AC767F516A3B3663@DFC68LF1> Not to mention having your sex sewn up or cut off. Nlogo1.tif Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of ankara at tbaytel.net Sent: Wednesday, December 23, 2009 8:54 AM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] atheism Did you just say there's was no evidence of harm ? Guess you've never been a female human ordered to procreate, denied dominion over your own body then. Nothing like a shoe full of kids to make a woman happy, eh? > From: Post Futurist > > No doubt religion is actively harmful. My point is you offer no > evidence religion is more actively destructive than politics, > entertainment, etc. > Because you cannot. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jonkc at bellsouth.net Wed Dec 23 17:42:37 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 23 Dec 2009 12:42:37 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <477951.24225.qm@web36504.mail.mud.yahoo.com> References: <477951.24225.qm@web36504.mail.mud.yahoo.com> Message-ID: <047783E3-4940-4EBD-9D2E-9479E4748C28@bellsouth.net> On Dec 23, 2009, at 6:59 AM, Gordon Swobe wrote: > > Searle wants to know what possesses some intelligent people to attribute "mind" to mere programs running on computers, programs which in the final analysis do nothing more interesting that kitchen can-openers. Searle claims he is receptive to a rational explanation of mind but the above shows that he is not and neither are you. Any explanation, assuming its any good, is going to reduce it to a "mere" something. That's just in the nature of explanations, it's what they do. It's like a man who was greatly impressed by a spectacular magic trick but when he learned the mundane secret of its performance is unsatisfied with the explanation because it was not mystical or supernatural. > Can you show how syntax gives rise to semantics? 
I can just as soon as you show me that syntax and semantics have absolutely nothing to do with each other and that semantics can have only 2 values, understanding and non understanding. > Can you show how the man in the room who does nothing more than shuffle Chinese symbols according to syntactic rules can come to know the meanings of those symbols? What would be the point of telling you that again? Myself and others have explained that over and over but you don't rebut what we say you just repeat the same tired old question yet again. > He's not the mystic. They are. Searle is the one who doesn't believe in Evolution not us. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Wed Dec 23 18:00:32 2009 From: aware at awareresearch.com (Aware) Date: Wed, 23 Dec 2009 10:00:32 -0800 Subject: [ExI] Sensory Reality In-Reply-To: <85684B9F-8D45-4F7B-B633-7F4D07E509B3@GMAIL.COM> References: <977163.57141.qm@web36503.mail.mud.yahoo.com> <85684B9F-8D45-4F7B-B633-7F4D07E509B3@GMAIL.COM> Message-ID: On Wed, Dec 23, 2009 at 8:22 AM, JOSHUA JOB wrote: > What importance does it have that you cannot magically "experience" > "things-in-themselves"? Outside of sort-of solving that symbol grounding > problem, it doesn't effect anything at all. We create a model of the world > based on our sense-perceptions of reality. That view exemplifies the "scientific enlightenment" of one who has learned enough to get one's head above the superstitions and folk ontologies of the masses, and while pausing to enjoy the improved view and look down upon those others, fails to notice an epistemological layer yet to be surmounted. An unfortunate side-effect of that approximate level of sophistication, common to the scientifically literate denizens of this discussion list, is the tendency to conflate "that which does not yet make sense" with "that which doesn't make sense." The difference is one of context, and the test to distinguish which is which is whether one can coherently explain the basis for the other's belief. A child in a dispute with an adult may honestly believe the adult doesn't understand (and the child might be right) but it's more often the adult who can explain the child's point of view, not vice versa. "We" don't create a model of the world based on our sense-perceptions of reality. For the most part, "we" are not in that functional loop. The organism models *its world* (its umwelt) to some limited extent, in terms of particular features relevant to its nature within its environment of interaction. And the organism (in the case of humans and some others) expresses additional functionality providing limited awareness of its self within its environment, providing benefits of more abstract prediction, planning and control, with the property that when it refers to itself, it indicates the organism (not merely the layer expressing self-awareness.) This is directly applicable to the "symbol-grounding problem" because it illustrates how at no point in the process is Truth or Reality ever modeled the organism, adapted to its environment of interaction, including its functionality for limited self-awareness, gets by all the same. And if you were to interrogate the organism, it would necessarily express the meaningfulness and intentionality (semantics) of its actions. No matter how many levels you want to take it, the semantics are a function of an observer/reporter. 
Most of us are not familiar or comfortable with recursion (the topic is near the top of my list for _A Child's Garden of Conceptual Archetypes), so even though we may have a schematically complete description of the above, we nevertheless feel that (based on our undeniable experience) somewhere in the system there must be such essentials as "qualia", "meaning", "free will" and "personal identity." Never had it, don't need it. Derivative, yes; essential, no. > That model is based, necessarily > on those sense-perceptions. We revise our model continually based on all of > our sense-perceptions, and over time it gets better and better, closer and > closer to reality as it is "in-itself." While this is the popular position among the scientifically literate (including most of my friends), similar to the presumption that evolution continues to "perfect" its designs, with humans being "the most evolved" at present, it's easy enough to show that our scientific progress may be leading us in a present direction quite different from the direction we may be heading later. If 50 years from now we commonly agree that we're living within a digital simulation, would you then say Newton or Einstein was closer to the absolute Truth of gravity? The best we can aim for is not increasing "Truth", but increasing coherence over increasing context of observation. - Jef From jonkc at bellsouth.net Wed Dec 23 18:25:18 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 23 Dec 2009 13:25:18 -0500 Subject: [ExI] Sensory Reality In-Reply-To: <977163.57141.qm@web36503.mail.mud.yahoo.com> References: <977163.57141.qm@web36503.mail.mud.yahoo.com> Message-ID: <13249BE6-FDB5-49BC-B86B-16FF8560BA31@bellsouth.net> On Dec 23, 2009, Gordon Swobe wrote: > Kant revolutionized philosophy with that idea and others related to it. Charles Darwin was not a philosopher but he gave the world far deeper philosophical insights than Kant did. In fact none of the really big philosophical advances have come from professional philosophers. The sad thing is that philosophers don't even realize that fact and that's why very silly people like Searle continue to play in his Chinese Room and pretend that Darwin never existed. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanite1018 at gmail.com Wed Dec 23 18:43:41 2009 From: nanite1018 at gmail.com (JOSHUA JOB) Date: Wed, 23 Dec 2009 13:43:41 -0500 Subject: [ExI] Sensory Reality In-Reply-To: References: <977163.57141.qm@web36503.mail.mud.yahoo.com> <85684B9F-8D45-4F7B-B633-7F4D07E509B3@GMAIL.COM> Message-ID: <209E6482-D7DA-4D3A-AF10-2989E07B75D9@GMAIL.COM> On Dec 23, 2009, at 1:00 PM, Aware wrote: > On Wed, Dec 23, 2009 at 8:22 AM, JOSHUA JOB > wrote: >> What importance does it have that you cannot magically "experience" >> "things-in-themselves"? Outside of sort-of solving that symbol >> grounding >> problem, it doesn't effect anything at all. We create a model of >> the world >> based on our sense-perceptions of reality. > "We" don't create a model of the world based on our sense-perceptions > of reality. For the most part, "we" are not in that functional loop. We most certainly are. Haven't you ever thought about something and changed your mind? Or noticed something new and figured out how to account for it? Someone brings up some fact in a debate, its new, and you revise your opinions. I am quite certain that it is YOU who are doing that, because you are aware of the process of revising your opinions. 
Sure, much of this process is automatized, and babies are largely unaware of the process, but as an adult human being, you are involved in the process of refining your model > Most of us are not familiar or comfortable with recursion (the topic > is near the top of my list for _A Child's Garden of Conceptual > Archetypes), so even though we may have a schematically complete > description of the above, we nevertheless feel that (based on our > undeniable experience) somewhere in the system there must be such > essentials as "qualia", "meaning", "free will" and "personal > identity." Never had it, don't need it. Derivative, yes; essential, > no. Meaning is a relation between mental contents (which are representations of relations in the world, in most cases). Free will is simply the fact that you make decisions, i.e. the entity that is you does things, and it is you who chooses to do them. Personal identity is simply the representation you have of yourself. So they exist. And they are a necessary consequence of a being that is self- aware. >> That model is based, necessarily >> on those sense-perceptions. We revise our model continually based >> on all of >> our sense-perceptions, and over time it gets better and better, >> closer and >> closer to reality as it is "in-itself." > While this is the popular position among the scientifically literate > (including most of my friends), similar to the presumption that > evolution continues to "perfect" its designs, with humans being "the > most evolved" at present, it's easy enough to show that our scientific > progress may be leading us in a present direction quite different from > the direction we may be heading later. If 50 years from now we > commonly agree that we're living within a digital simulation, would > you then say Newton or Einstein was closer to the absolute Truth of > gravity? The best we can aim for is not increasing "Truth", but > increasing coherence over increasing context of observation. Well evolution perfects its designs for the given context. Humans are just as evolved as alligators in that sense. Just because our current theories are likely incorrect does not mean we aren't moving toward truth. Our work on our present theories is in the worst case necessary for us to rule them out is incorrect in some way, thereby eliminating another possibility (and likely in the process a whole class of possibilities along with it), thereby moving a bit closer toward a correct representation. The more things your model can predict (not explain, predict in advance) without getting some things wrong, the better it is. By saying that the most you can hope for is increased coherence and increased context of observation, you are saying that the most you can hope for is getting closer to the truth. After all, if you take the limit of increasing coherence and context of observation, what do you get? A true representation of reality. So you are moving in that direction, even if for some reason you (and many others in philosophy, particularly philosophy of science) do not think so for whatever reason. 
From aware at awareresearch.com Wed Dec 23 20:19:57 2009 From: aware at awareresearch.com (Aware) Date: Wed, 23 Dec 2009 12:19:57 -0800 Subject: [ExI] Sensory Reality In-Reply-To: <209E6482-D7DA-4D3A-AF10-2989E07B75D9@GMAIL.COM> References: <977163.57141.qm@web36503.mail.mud.yahoo.com> <85684B9F-8D45-4F7B-B633-7F4D07E509B3@GMAIL.COM> <209E6482-D7DA-4D3A-AF10-2989E07B75D9@GMAIL.COM> Message-ID: On Wed, Dec 23, 2009 at 10:43 AM, JOSHUA JOB wrote: >> of reality. For the most part, "we" are not in that functional loop. > > We most certainly are... Huh. Well, given your certainty, what do you suppose might have made Jef say such a thing? >> ...we nevertheless feel that (based on our >> undeniable experience) somewhere in the system there must be such >> essentials as "qualia", "meaning", "free will" and "personal >> identity." ?Never had it, don't need it. ?Derivative, yes; essential, no. > > Meaning is a relation... > Free will is simply... > Personal identity is simply... I AM suggesting that your view is a little too simple... >>> We revise our model continually based on all of >>> our sense-perceptions, and over time it gets better and better, closer >>> and closer to reality as it is "in-itself." >> >> While this is the popular position among the scientifically literate >> (including most of my friends), similar to the presumption that >> evolution continues to "perfect" its designs, with humans being "the >> most evolved" at present, it's easy enough to show that our scientific >> progress may be leading us in a present direction quite different from >> the direction we may be heading later. ?If 50 years from now we >> commonly agree that we're living within a digital simulation, would >> you then say Newton or Einstein was closer to the absolute Truth of >> gravity? ?The best we can aim for is not increasing "Truth", but >> increasing coherence over increasing context of observation. > > Well evolution perfects its designs for the given context. Not "perfects", but satisfices, within a particular (eventually changing) environment. > Humans are just > as evolved as alligators in that sense. Huh? I could argue that alligators are more adapted to their usual environment, or I could argue that humans are more adapted to a more complex environment, but "just as"? > Just because our current theories are likely incorrect does not mean we > aren't moving toward truth. It doesn't mean they necessarily /are/, at any particular moment, either. > Our work on our present theories is in the worst > case necessary for us to rule them out is incorrect in some way, thereby > eliminating another possibility (and likely in the process a whole class of > possibilities along with it), thereby moving a bit closer toward a correct > representation. This idea of successive approximation to absolute truth, which is popular accepted and even taught in schools, is the misconception I was trying to highlight for you. > The more things your model can predict (not explain, predict in advance) > without getting some things wrong, the better it is. Yes, at any given moment. Consider the information content of an account of THE TRUTH of our world 1000 years ago. How does that compare with the information content of a true account of our world, EFFECTIVE FOR PREDICTION today? What about THE TRUTH, EFFECTIVE FOR PREDICTION, 50 years from now? Can you really argue, on the basis of predictive effectiveness, that we keep getting closer to knowing THE TRUTH? 
>From a practical point of view, how might it effect your long term risk management and social policy if you knew that, rather than society getting closer and closer to THE TRUTH, we are actually getting increasing instrumental effectiveness within an environment of even greater increasing uncertainty? > By saying > that the most you can hope for is increased coherence and increased context > of observation, you are saying that the most you can hope for is getting > closer to the truth. After all, if you take the limit of increasing > coherence and context of observation, what do you get? A true representation > of reality. The key point you seem to missing is that at any given moment, you have no way of knowing, no frame of reference, telling you whether you're moving toward or away from the "Truth" you'll find yourself at after 10, 50, 100 years more observation, nor do you know how long you've been moving in that direction (since you can't know what it is), nor do you know how close "to the limit" (1%, 10%, 90%) you are. So in a practical and moral sense, the best you can do is strive for increasing coherence over increasing context of observation. There is nothing more, there never was. So much for ultimate grounding, ultimate Truth. > So you are moving in that direction... Which direction? If you were to say "outward" then as an extropian I suppose I would agree... > ...even if for some reason you (and many others in philosophy, > particularly philosophy of science) do not think so Want another practical example of this "philosophy"? Well, consider two agents (tribes, for example) in conflict, each with their own "truth." How should they approach agreement for their mutual benefit? I've already given you the answer. > for whatever reason. Your "for whatever reason" seems significant, especially since you deleted without comment the part of my post where I said the following: >> The difference is one of context, and the test to distinguish which is >> which is whether one can coherently explain the basis for the other's belief. - Jef From gts_2000 at yahoo.com Thu Dec 24 02:37:27 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 18:37:27 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <822991.33211.qm@web36507.mail.mud.yahoo.com> --- On Wed, 12/23/09, Stathis Papaioannou wrote: >> Or perhaps nothing "amazing" happens. Instead of >> believing in magic, I find easier to accept that the >> computationalist theory of mind is simply incoherent. It >> does not explain the facts. > > So you find the idea that in some unknown way chemical > reactions cause mind not particularly amazing, while the same happening > with electric circuits is obviously incredible? Not exactly, but close. Brains contain something like electric circuits but I still find it incredible that a mind that runs only on programs can have everything biological minds have. Again, I find the computationalist theory of mind incredible. > A computer only runs a formal program in the mind of the > programmer. Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :) > A computer undergoes internal movements according to the laws > of physics, which movements can (incidentally) be described > algorithmically. This is the most basic level of > description. Yes. 
> The programmer comes along and gives a chunkier, higher level > The program is like a plan to help the programmer figure out where to > place the various parts of the computer in relation to each other so > that they will do a particular job. Both the computer and the brain > go clickety-clack, clickety-clack and produce similar intelligent > behaviour. The computer's parts were deliberately arranged by the > programmer in order to bring this result about, whereas the brain's > parts were arranged in a spontaneous and somewhat haphazard way by > nature, making it more difficult to see the algorithmic pattern > (although it must be there, at least at the level of basic physics). In > the final analysis, it is this difference between them that convinces > you the computer doesn't understand what it's doing and the brain does. No, in the final analysis nothing can get meaning (semantics) from form-based rules (syntax). It makes not one wit of difference what sort of entity happens to perform the syntactical operations. Neither computers nor biological brains can get semantics from syntax. > What is it to learn the meaning of the word "dog" if not to > associate its sound or shape with an image of a dog? Both you and the computer make that association, and both of you act accordingly. But only you know about it, i.e, only you know the meaning. > Anyway, despite the above, and without any help from > Searle, it might still seem reasonable to entertain the possibility > that there is something substrate-specific about consciousness, and > fear that if you agree to upload your brain the result would be a > mindless zombie. I would not use the word "substrate-specific" but I do like your mention of "chemical reactions" in your first paragraph above. > That is where the partial brain replacement (eg. of the visual > cortex or Wernicke's area) thought experiment comes into play, I've given this idea some thought today, by the way. We can take your experiment deeper, and instead of creating a program driven nano-neuron to substitute for the natural neuron, we keep everything about the natural neuron and replace only the nucleus. This neuron will appear even more natural than yours. Now we take it another step and keep the nucleus. We create artificial program-driven DNA (whatever that might look like) to replace the DNA inside the nucleus. And so on. In the limit we will have manufactured natural program-less neurons. I don't know if Searle (or anyone) has considered the ramifications of this sort of progression that I describe in terms of Searle's philosophy, but it seems to me that on Searle's view the person's intentionality would become increasingly apparent to him as his brain became driven less by abstract formal programs and more by natural material processes. This also leaves open the possibility that your more basic nano-neurons, those you've already supposed, would not deprive the subject completely of intentionality. Perhaps your subject would become somewhat dim but not completely lose his grip on reality. -gts
From gts_2000 at yahoo.com Thu Dec 24 02:52:19 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 23 Dec 2009 18:52:19 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <12894.34060.qm@web36507.mail.mud.yahoo.com> Message-ID: <848409.1046.qm@web36503.mail.mud.yahoo.com> Sorry for those dupe messages! It seems this web-based yahoo email application from which I send messages can do only form-based syntactical operations on symbols. It cannot get semantics from syntax, and so it never has any idea what it's doing! -gts From thespike at satx.rr.com Thu Dec 24 03:13:33 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 23 Dec 2009 21:13:33 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <848409.1046.qm@web36503.mail.mud.yahoo.com> References: <848409.1046.qm@web36503.mail.mud.yahoo.com> Message-ID: <4B32DC5D.3030501@satx.rr.com> On 12/23/2009 8:52 PM, Gordon Swobe wrote: > Sorry for those dupe messages! > > It seems this web-based yahoo email application from which I send messages can do only form-based syntactical operations on symbols. It cannot get semantics from syntax, and so it never has any idea what it's doing! Yeah, but the entertaining thing is that, in effect, you're doing exactly the same thing with this thread. Damien Broderick From sparge at gmail.com Thu Dec 24 04:20:36 2009 From: sparge at gmail.com (Dave Sill) Date: Wed, 23 Dec 2009 23:20:36 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <100204.32863.qm@web36507.mail.mud.yahoo.com> References: <100204.32863.qm@web36507.mail.mud.yahoo.com> Message-ID: On Wed, Dec 23, 2009 at 9:37 PM, Gordon Swobe wrote: > > Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :) Really? Not in the CPU? Huh... I though RAM was for storage. -Dave From kanzure at gmail.com Thu Dec 24 05:33:37 2009 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Dec 2009 23:33:37 -0600 Subject: [ExI] Fwd: [GRG] Building a Search Engine of HM's Brain, Slice by Slice In-Reply-To: References: Message-ID: <55ad6af70912232133l674e9867je44e645c22aa2ea3@mail.gmail.com> "Search me" takes on new meanings. ---------- Forwarded message ---------- From: L. Stephen Coles, M.D., Ph.D. Date: Wed, Dec 23, 2009 at 11:26 PM Subject: [GRG] Building a Search Engine of HM's Brain, Slice by Slice To: Gerontology Research Group Cc: "Harry Vinters, M.D." To Members and Friends of the Los Angeles Gerontology Research Group: The brain of an anterograde amnesia patient, Henry Molaison (HM), has opened the door to a much more ambitious brain project at UCSD... -- Steve Coles ______________ "Building a Search Engine of the Brain, Slice by Slice" by Benedict Carey [Image: Diego Mariscal/The Brain Observatory, University of California, San Diego. DISSECTION: A project with Henry Molaison's brain, shown in a mold of gelatin, aims to create the first entirely reconstructed, whole-brain atlas available to anyone.] December 21, 2009; San Diego, CA (*The New York Times*, pp. D1, 6) -- On a gray Wednesday afternoon here in early December, scientists huddled around what appeared to be a two-gallon carton of frozen yogurt, its exposed top swirling with dry-ice fumes.
[Image: Diego Mariscal/The Brain Observatory, University of California, San Diego. LAYERS: "I feel like the world is watching over my shoulder," said Jacopo Annese, who dissected the brain of Henry Molaison, known as HM.] As the square container, fixed to a moving platform, inched toward a steel blade mounted level with its surface, the group held its collective breath. The blade peeled off the top layer, rolling it up in slow motion like a slice of pale prosciutto. "Almost there," someone said. Off came another layer, another, and another. And then there it was: a pink spot at first, now a smudge, now growing with every slice like spilled rosé on a cream carpet -- a human brain. Not just any brain, either, but the one that had belonged to Henry Molaison, known worldwide as H. M., an amnesic who collaborated on hundreds of studies of memory and died last year at age 82. (Mr. Molaison agreed to donate his brain years ago, in consultation with a relative.) "You can see why everyone's so nervous," said Jacopo Annese, an Assistant Professor of Radiology at the University of California, San Diego, as he delicately removed a slice with an artist's paintbrush and placed it in a labeled tray of saline solution. "I feel like the world is watching over my shoulder." And so it was: thousands logged on to view the procedure via live Webcast. The dissection marked a culmination, for one thing, of H. M.'s remarkable life, which was documented by Suzanne Corkin, a memory researcher at MIT who had worked with Mr. Molaison for the last five decades of his life. But it was also a beginning of something much larger, Dr. Annese and many other scientists hope. "The advent of brain imaging opened up so much," said Sandra Witelson, a neuroscientist with the Michael G. DeGroote School of Medicine at McMaster University in Canada, who manages a bank of 125 brains, including Albert Einstein's. "But I think in all the excitement people have forgotten how important the anatomical study of brain tissue still is, and this is the sort of project that could really restart interest in this area." The Brain Observatory at U.C. San Diego, set up to accept many donated brains, is an effort to bridge past and future. Brain dissection is a craft that goes back centuries and has helped scientists to understand where functions like language processing and vision are clustered, to compare gray and white matter and cell concentrations across different populations and to understand the damage done in ailments like Alzheimer's disease and stroke. Yet there is no single standard for cutting up a brain. Some researchers slice from the crown of the head down, parallel to the plane that runs through the nose and ears; others cut the organ into several chunks, and proceed to section areas of interest. No method is perfect, and any cutting can make it difficult, if not impossible, to reconstruct circuits that connect cells in disparate areas of the brain and somehow create a thinking, feeling mind. To create as complete a picture as possible, Dr. Annese cuts very thin slices -- 70 microns each, paper-thin -- from the whole brain, roughly parallel with the plane of the forehead, moving from front to back. Perhaps the best-known pioneer of such whole-brain sectioning is Dr. Paul Ivan Yakovlev, who built a collection of slices from hundreds of brains now kept at a facility in Washington. But Dr. Annese has something Dr. Yakovlev did not: advanced computer technology that tracks and digitally reproduces each slice.
An entire brain produces some 2,500 slices, and the amount of information in each one, once microscopic detail is added, will fill about a terabyte of computer storage. Computers at UCSD are now fitting all those pieces together for Mr. Molaison's brain, to create what Dr. Annese calls a "Google Earth-like search engine," the first entirely reconstructed, whole-brain atlas available to anyone who wants to log on. "We're going to get the kind of resolution, all the way down to the level of single cells, that we have not had widely available before," said Donna Simmons, a Visiting Scholar at the Brain Architecture Center at the University of Southern California. The thin whole-brain slicing "will allow much better opportunities to study the connection between cells, the circuits themselves, which we have so much more to learn about." Experts estimate that there are about 50 brain banks in the world, many with organs from medical patients with neurological or psychiatric problems, and some with a stock donated by people without disorders. "Ideally, anyone with the technology could do the same with their own specimens," Dr. Corkin said. The technical challenges, however, are not trivial. To prepare a brain for dissection, Dr. Annese first freezes it in a formaldehyde and sucrose solution, to about -40 degrees Celsius. The freezing in the case of HM was done over four hours, a few degrees at a time: the brain, like most things, becomes more brittle when frozen. It can crack. Mr. Molaison lost his ability to form new memories after an operation that removed a slug-size chunk of tissue from deep in each hemisphere of his brain, making it more delicate than most. "A crack would have been a disaster," Dr. Annese said. It did not happen. With the help of David Malmberg, a mechanical engineer at UCSD who had designed equipment for use in the Antarctic, the laboratory fashioned a metal collar to keep the suspended brain at just the right temperature. A few degrees too cold and the blade would chatter instead of cutting cleanly; too warm, and the blade wants to dip into the tissue. Mr. Malmberg held the temperature steady by pumping ethanol through the collars continually, at minus 40 degrees. He suspended the hoses using surfboard leashes picked up days before the dissection. After the slicing and storing, a process that took some 53 hours, Dr. Annese's laboratory will soon begin the equally painstaking process of mounting each slice in a glass slide. The lab will stain slides at regular intervals, to illustrate the features of the reconstructed organ. And it plans to provide slides for study. Outside researchers can request samples and use their own methods to stain and analyze the composition of specific high-interest areas. "For the work I do, looking at which genes are preferentially expressed in different areas of the brain, this will be an enormous resource," Dr. Simmons said. If all goes as planned, and the Brain Observatory catalogs a diverse collection of normal and abnormal brains -- and if, crucially, other laboratories apply similar techniques to their own collections -- brain scientists will have data that will keep them busy for generations.
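A rough consistency check on the slicing and storage figures quoted above: the 2,500-slice count, the 70-micron slice thickness mentioned earlier in the article, and the roughly one-terabyte-per-slice estimate are the article's numbers, while the short Python sketch below (including its variable names) is only an illustrative back-of-envelope calculation, not anything from the Brain Observatory's actual pipeline.

    # Back-of-envelope check of the slicing and storage figures quoted above.
    # The three inputs come from the article; the calculation is illustrative only.
    num_slices = 2500            # whole-brain slices reported for H.M.
    slice_thickness_um = 70.0    # microns per slice
    data_per_slice_tb = 1.0      # approx. storage per fully digitized slice

    total_length_cm = num_slices * slice_thickness_um / 10000.0   # 10,000 microns per cm
    total_data_pb = num_slices * data_per_slice_tb / 1000.0       # 1,000 TB per PB (decimal)

    print("Sectioned tissue, front to back: ~%.1f cm" % total_length_cm)
    print("Fully digitized whole-brain atlas: ~%.1f PB" % total_data_pb)

That works out to about 17.5 cm of sectioned tissue, front to back, and on the order of 2.5 petabytes of image data for a single fully digitized brain -- figures consistent with the front-to-back extent of a human brain and with the article's point that each slice alone fills about a terabyte.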
"With more of this kind of data," Dr. Witelson said, "we'll be able to look at all sorts of comparisons, for example, comparing the brain of people who are superb at math with those who are not so good."

"You could take someone like Wayne Gretzky, for example," she added, "who could know not only where the puck was but where it was going to be -- who was apparently seeing a fourth dimension, time -- and see whether he had any special anatomical features." (For the time being, Mr. Gretzky is still using his brain.)

So it is that Mr. Molaison, who kicked off the modern study of memory by cooperating in studies in the middle of the 20th century, may help inaugurate a new era in the 21st century. That is, as soon as Dr. Annese and his lab team finish sorting the slices they have collected. "It's very exciting work to talk about," Dr. Annese said. "But to see it being done, it's like watching the grass grow."

__________________________________

*"Dissection Begins on Famous Brain"* by Benedict Carey, December 2, 2009; San Diego, CA (*NY Times*) -- The man who could not remember has left scientists a gift that will provide insights for generations to come: his brain, now being dissected and digitally mapped in exquisite detail.

The man, Henry Molaison -- known during his lifetime only as H.M., to protect his privacy -- lost the ability to form new memories after a brain operation in 1953, and over the next half century he became the most studied patient in brain science. He consented years ago to donate his brain for study, and last February Dr. Jacopo Annese, an assistant professor of radiology at the University of California, San Diego, traveled across the country and flew back with the brain seated next to him on Jet Blue.

Just after noon on Wednesday, on the first anniversary of Mr. Molaison's death at 82 from pulmonary complications, Dr. Annese and fellow neuroscientists began painstakingly slicing their field's most famous organ. The two-day process will produce about 2,500 tissue samples for analysis. A computer recording each sample will produce a searchable Google Earth-like map of the brain with which scientists expect to clarify the mystery of how and where memories are created -- and how they are retrieved.

"Ah ha ha!" Dr. Annese said, as he watched a computer-guided blade scrape the first shaving of gray matter from Mr. Molaison's frozen brain. "One down, 2,499 more to go." Dr. Annese carefully dropped the shaving into fluid. The procedure is being shown live on-line: thebrainobservatory.ucsd.edu/hm_live.php.

"It's just amazing that this one patient -- this one person -- would contribute so much historically to the early study of memory," said Dr. Susumu Tonegawa, a Professor of Neuroscience at the Picower Institute for Learning and Memory at M.I.T. "And now his brain will be available" for future study.

Good fortune and very bad luck conspired to make Mr. Molaison one of science's most valuable resources and most productive collaborators. Growing up in and around Hartford, he began to suffer seizures as a boy. The seizures grew worse after he was knocked to the ground by a bicycle rider, and by the time he was 26 they were so severe he consented to an experimental brain operation to relieve them. His doctor, the prominent brain surgeon William Beecher Scoville, suctioned out two slug-sized slivers of tissue, one from each side of the brain. The operation controlled the seizures, but it soon became clear that the patient could not form new memories.
"He loved to converse, for example, but within 15 minutes he would tell you the same story three times, with same words and intonation, without remembering that he'd just told it," said Suzanne Corkin, a neuroscientist at the Massachusetts Institute of Technology who studied and followed Mr. Molaison in the last five decades of his life. Each time he met a new acquaintance, each time he visited the corner store, each time he strolled around the block, it was as if for the first time.

Before H.M., scientists thought that memory was widely distributed throughout the brain, not dependent on any one area. But by testing Mr. Molaison, researchers in Montreal and Hartford soon established that the areas that were removed -- in the medial temporal lobe, about an inch deep in the brain level with the ear -- are critical to forming new memories. One organ, the *hippocampus*, is especially crucial and is now the object of intense study.

In a series of studies, Mr. Molaison soon altered forever the understanding of learning by demonstrating that a part of his memory was fully intact. A 1962 paper by Dr. Brenda Milner of the Montreal Neurological Institute described a landmark study in which she had Mr. Molaison try to trace a line between two five-point stars, one inside the other. Each time he tried the experiment, it seemed to him an entirely new experience. Yet he gradually became more proficient -- showing that there are at least two systems in the brain for memory, one for events and facts and another for implicit or motor learning, for things like playing a guitar or riding a bicycle.

In the new brain-mapping project here, set to catalog many donated brains, scientists will have the ability to study areas of Mr. Molaison's brain at a level of detail that imaging cannot reveal, to solve lingering mysteries about the man and the brain. Mr. Molaison stunned researchers several times, for instance, by demonstrating that he could hold onto some new memories. He could reproduce exactly the floor map of his house on Crescent Drive in East Hartford, where he lived for years after his operation with his parents. These and other scattered surprises suggest that Mr. Molaison's brain, deprived of its central memory hub, recruited other nearby areas to try to compensate, scientists say. Now researchers can begin to study these poorly understood areas more closely. One region, called the parahippocampal cortex, appears to support "familiarity" memory, the sensation that we have seen or heard something before, though we cannot place it.

"We've learned a lot about memory, we're getting close to the fire, and H.M.'s brain will really help us clarify the division of labor in this area for making memories," said Dr. Lila Davachi, a neuroscientist at New York University.

The dissection, a novel whole-brain technique, is part of a project known as the Brain Observatory and the culmination of a year of frantic preparation. Dr. Corkin arranged for Mr. Molaison's brain to be preserved and imaged; up until Sunday the laboratory was tweaking its equipment, buying surfboard leashes at the last minute to support some of its freezing hoses. "We hope that this project as it grows will catalyze cooperation across many disciplines" to study disorders like amnesia, tremors, and dementias, Dr. Annese said. "But we wanted to kick it off with the most famous brain of them all."

_____________________________

At 06:22 PM 12/22/2009, you wrote: Dr. George Martin, Thanks for this article.
Note that the preservation was with formaldehyde ("To prepare a brain for dissection, Dr. Annese first freezes it in a formaldehyde and sucrose solution, to about -40 degrees Celsius.") which would pose a problem for analyzing gene expression. Maybe, if we combined freezing with vitrification and then slicing like they did in the study of Henry Molaison's brain, we could do even better. That would be the Rolls-Royce version! -- Stan

*From:* gmmartin at u.washington.edu *Sent:* Tuesday, December 22, 2009 8:32 PM *To:* Stanley_Primmer at hotmail.com *Subject:* NYTimes.com: Building a Search Engine of the Brain, Slice by Slice Message from sender: Would they try this out for three or four of our Supercentenarians (two males & two females)? Costs? -- George Martin L. Stephen Coles, M.D., Ph.D., Co-Founder Los Angeles Gerontology Research Group *URL:* http://www.grg.org *E-mail:* scoles at grg.org *E-mail:* scoles at ucla.edu _______________________________________________ GRG mailing list GRG at lists.ucla.edu http://lists.ucla.edu/cgi-bin/mailman/listinfo/grg -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From stathisp at gmail.com Thu Dec 24 09:18:03 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 24 Dec 2009 20:18:03 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <100204.32863.qm@web36507.mail.mud.yahoo.com> References: <100204.32863.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/24 Gordon Swobe : > Not exactly, but close. Brains contain something like electric circuits but I still find it incredible that a mind that runs only on programs can have everything biological minds have. Again, I find the computationalist theory of mind incredible. > >> A computer only runs a formal program in the mind of the >> programmer. > > Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :) What is it about the makeup of your computer that marks it as implementing formal programs? Because you built it you can see certain patterns in it which represent the programs, but this is just you superimposing an interpretation on it. It is no more a physical fact about the computer than interpreting constellations as looking like animals is a physical fact about stars. You believe that programs can't give rise to minds, but the right kind of physical activity can. Would you then object to the theory that it isn't the program that gives rise to the computer's mind, but the physical activity that takes place during the program's implementation? >> What is it to learn the meaning of the word "dog" if not to >> associate its sound or shape with an image of a dog? > > Both you and the computer make that association, and both of you act accordingly. But only you know about it, i.e., only you know the meaning. If I don't know the meaning of a symbol that is because I don't know what object to associate it with. Once I make the association, I know the meaning. I don't see how I could coherently claim that I can correctly and consciously make the association but not know the meaning. > We can take your experiment deeper, and instead of creating a program-driven nano-neuron to substitute for the natural neuron, we keep everything about the natural neuron and replace only the nucleus. This neuron will appear even more natural than yours. Now we take it another step and keep the nucleus.
We create artificial program-driven DNA (whatever that might look like) to replace the DNA inside the nucleus. And so on. In the limit we will have manufactured natural program-less neurons. > > I don't know if Searle (or anyone) has considered the ramifications of this sort of progression that I describe in terms of Searle's philosophy, but it seems to me that on Searle's view the person's intentionality would become increasingly apparent to him as his brain became driven less by abstract formal programs and more by natural material processes. > > This also leaves open the possibility that your more basic nano-neurons, those you've already supposed, would not deprive the subject completely of intentionality. Perhaps your subject would become somewhat dim but not completely lose his grip on reality. I think you've missed the main point of the thought experiment, which is to consider the behaviour of the normal neurons in the brain. We replace Wernicke's area with an artificial analogue that is as unnatural, robotlike and (it is provisionally assumed) mindless as we can possibly make it. The only requirement is that it masquerade as normal for the benefit of the neural tissue with which it interfaces. This subject should have no understanding of language, but not only will he behave as if he has understanding, he will also believe that he has understanding and all his thoughts (except those originating in Wernicke's area) will be exactly the same as if he really did have understanding. He will thus be able to read a sentence, comment on it, have an appropriate emotional response to it, paint a picture or write a poem about it, and everything else exactly the same as if he had real understanding. These will not just be the behaviours of a mindless zombie, but based on genuine subjective experiences. Is it possible that the subject lacks such an important component of his consciousness as language or, in my previous example, vision, but doesn't even realise? If so, how do you know that you aren't aphasic or blind without realising it now? And what advantage would having true language or vision bring if it makes no objective or subjective difference to the subject or those with whom he comes into contact? The conclusion from the absurdity of the alternatives is that if it is possible to duplicate the behaviour of neural tissue, then all the subjective experiences associated with the neural tissue will also be duplicated. -- Stathis Papaioannou From gts_2000 at yahoo.com Thu Dec 24 12:33:05 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 24 Dec 2009 04:33:05 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <966854.97496.qm@web36503.mail.mud.yahoo.com> --- On Thu, 12/24/09, Stathis Papaioannou wrote: > What is it about the makeup of your computer that marks it > as implementing formal programs? Because you built it you can > see certain patterns in it which represent the programs, but this is > just you superimposing an interpretation on it. With all due respect, I simply don't buy into that sort of post-modernist nonsense! Although we interpret reality, reality does not consist only of our interpretations of it. There really does exist a reality separate from our interpretations of it (we don't merely imagine it) and some interpretations of computers better match the facts about computers than others. 
However, notwithstanding my joking around in my previous post, I will agree with you there exists a sense in which we might say that formal programs exist only in the minds of humans and not "in computers". But it supports Searle's position that programs exist only as abstractions. When we replace real things with abstraction descriptions of those real things, we lose the thing and have instead a description of the thing. This happens in your program-driven neurons, for example. To whatever degree we replace the real machinery of the brain with abstract descriptions of that machinery, to that same degree we lose the real machine that thinks. Computer simulations of things describe but do not equal the things they simulate, and I can't emphasize this enough. > You believe that programs can't give rise to minds, but the > right kind of physical activity can. Would you then object to the > theory that it isn't the program that gives rise to the computer's mind, > but the physical activity that takes place during the program's > implementation? I object not because of the physical activity (I like that part of your argument) but rather because that physical activity represents only the implementation of purely syntactic operations in a formal program. As I mentioned in my last, syntax cannot give semantics no matter what entity does those operations. >> This also leaves open the possibility that your more > basic nano-neurons, those you've already supposed, would not > deprive the subject completely of intentionality. Perhaps > your subject would become somewhat dim but not completely > lose his grip on reality. > > I think you've missed the main point of the thought > experiment, I don't think I've missed your point, and I also don't want you to think I have finished discussing it. I just leap-frogged ahead of it to ponder your thought experiment in other ways that to my way of thinking really address the subject of this thread. I found a sense in which I can agree with you. A man with a complete replacement with your artificial neurons might not completely lose all sense of meaning; he might not completely lose his capacity to ground symbols as I had originally supposed. But as I wrote above, to the extent that we replaced his biological machinery with abstract descriptions of it, to that extent he lost some of that capacity. At one extreme we can replace his entire brain with a program, resulting in mindlessness. At the other extreme we can replace only the finest details of his brain with programs, leaving the material of his brain almost completely intact. In that case he keeps most of his capacity for intentionality, proportional to the extent that we left his brain alone and did not replace it with abstract formal descriptions of what *used to be* his brain. I'll keep coming back to your experiments. As I said, not done with it. I just have serious time constraints. I appreciate both your thoughtfulness and your courtesy, Stathis. Enjoying this discussion. -gts From gts_2000 at yahoo.com Thu Dec 24 13:30:49 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 24 Dec 2009 05:30:49 -0800 (PST) Subject: [ExI] Sensory Reality In-Reply-To: <13249BE6-FDB5-49BC-B86B-16FF8560BA31@bellsouth.net> Message-ID: <485073.32913.qm@web36504.mail.mud.yahoo.com> --- On Wed, 12/23/09, John Clark wrote: >> Kant revolutionized philosophy with that idea and others related >> to it. > Charles Darwin was not a philosopher but he gave > the world far deeper philosophical insights than Kant did. 
> In fact none of the really big philosophical advances have > come from professional philosophers. The sad thing is that > philosophers don't even realize that fact and that's > why very silly people like Searle continue to play in > his Chinese Room and pretend that Darwin never > existed. Ben created this thread to discuss his philosophy after someone criticized it. I just stopped by here quickly yesterday to say that one of the most significant philosophers ever to live reached the same conclusions as Ben (not that I agree with either Ben or Kant -- just to say that Ben's not all wet here). Not sure why you stopped by, John, except to try to harass me about something unrelated. -gts From eugen at leitl.org Thu Dec 24 13:41:06 2009 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 24 Dec 2009 14:41:06 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <100204.32863.qm@web36507.mail.mud.yahoo.com> Message-ID: <20091224134106.GX17686@leitl.org> On Wed, Dec 23, 2009 at 11:20:36PM -0500, Dave Sill wrote: > On Wed, Dec 23, 2009 at 9:37 PM, Gordon Swobe wrote: > > > > Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :) > > Really? Not in the CPU? Huh... I thought RAM was for storage. That memory and computation in current systems are separated is largely due to historical reasons. If you want to make systems quicker, both need to move closer and closer, and eventually fuse. The word "formal" is completely meaningless in this context. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Thu Dec 24 14:04:18 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 25 Dec 2009 01:04:18 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <966854.97496.qm@web36503.mail.mud.yahoo.com> References: <966854.97496.qm@web36503.mail.mud.yahoo.com> Message-ID: 2009/12/24 Gordon Swobe : >> You believe that programs can't give rise to minds, but the >> right kind of physical activity can. Would you then object to the >> theory that it isn't the program that gives rise to the computer's mind, >> but the physical activity that takes place during the program's >> implementation? > > I object not because of the physical activity (I like that part of your argument) but rather because that physical activity represents only the implementation of purely syntactic operations in a formal program. As I mentioned in my last, syntax cannot give semantics no matter what entity does those operations. It's as if you believe that some physical activity is not "purely syntactic", and therefore can potentially give rise to mind; but as soon as it is organised in a complex enough way that it can be interpreted as implementing a program, this potential is destroyed! You would also have to have a test to distinguish between the "purely syntactic" (and therefore mentally impotent) and the syntactic that could also be viewed as non-syntactic, and therefore can give rise to mind by means of its non-syntactic components. The brain, for example, could be seen as following an algorithm if viewed at its most basic physical level, and a higher level algorithm if viewed at a higher level, insofar as you could possibly come up with the rules that determine neuronal firing.
But, presumably, the brain is saved by its non-algorithmic component. How do you recognise this component and how do you know a computer doesn't also have it? -- Stathis Papaioannou From eugen at leitl.org Thu Dec 24 15:51:42 2009 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 24 Dec 2009 16:51:42 +0100 Subject: [ExI] Carbon In-Reply-To: References: Message-ID: <20091224155142.GE17686@leitl.org> On Tue, Dec 22, 2009 at 11:29:08AM -0800, spike wrote: > Ja, but we wouldn't need to haul down the carbon out of the air. Rather we Energetically, with modern processes it doesn't matter much whether your scrubbing flue gas or air. As to running out of carbon, quite a lot of Earth's crust is carbonates along with alumosilicates. > would deliver it directly to the homes and factories in the form of coal, > the old fashioned way. Also, the quantities of carbon I expect we will use Pipelines for gases or liquids are better. > is miniscule, since all the really cool stuff I can imagine we would build > would be tiny. Reason: we don't have the space to build really big stuff. The solar system is pretty big though. > We already build our homes mostly of carbon. If we made them from US maybe, in other locations people like nonorganics. Geopolymers in form of mineral foams can be quite cheap and low-carbon. > better-organized carbon, it would take far less than we currently use. If we can organize matter at molecular scale better, we don't really need houses anymore. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jameschoate at austin.rr.com Thu Dec 24 16:36:40 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 24 Dec 2009 16:36:40 +0000 Subject: [ExI] Suggested Reading: The Fourth Paradigm: Data-Intensive Scientific Discovery Message-ID: <20091224163640.76H56.132687.root@hrndva-web28-z02> You can get it in either pdf or paper ($$$) format, if you build or develop with computers it may be worth your while... http://research.microsoft.com/en-us/collaboration/fourthparadigm/default.aspx -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From p0stfuturist at yahoo.com Wed Dec 23 17:01:01 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 23 Dec 2009 09:01:01 -0800 (PST) Subject: [ExI] atheism Message-ID: <512170.35499.qm@web59908.mail.ac4.yahoo.com> >ankara at tbaytel.net wrote: >Did you just say there's was no evidence of harm ? Guess you've never been a female human ordered to procreate, denied dominion over your own body then. Nothing like a shoe full of kids to make a woman happy, eh? Sure, harm in a relative sense, and personally I despise fundamentalists; yet are most faiths fundamentalist in first-world nations? Not anymore. This list consists of people living primarily in America and the UK. Are most religious orgs, save for in the Deep South, fascistic today? no. And what number on this list have been denied the opportunity to have abortions? not many. Some extropians in Old Blighty can tell us about the Church of England. 
If anything, the Anglican Church is a harmless, silly tax-exempt business. In America there is a sizable minority of would-be fascists, but no more than there are social fascist govt. employees, Commies, Cosa Nostra-types, etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p0stfuturist at yahoo.com Wed Dec 23 17:18:42 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Wed, 23 Dec 2009 09:18:42 -0800 (PST) Subject: [ExI] atheism Message-ID: <705374.55784.qm@web59902.mail.ac4.yahoo.com> >Not to mention having your sex sewn up or cut off. Natasha In third world cesspools, sure. Now, you take govts in the Mideast... Please. The govts are as bad in their own way as the feudalist 'societies' themselves. People turn to religion if they can't trust their govts-- so right off you can see the appeal of faith. I'm souring on the future altogether. We can live longer, but govts will almost certainly worsen. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 24 17:08:45 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 24 Dec 2009 12:08:45 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <822991.33211.qm@web36507.mail.mud.yahoo.com> References: <822991.33211.qm@web36507.mail.mud.yahoo.com> Message-ID: <61436BC1-A0B5-4F40-8F18-A76F36E9AA43@bellsouth.net> On Dec 23, 2009, Gordon Swobe wrote: > I still find it incredible that a mind that runs only on programs can have everything biological minds have. Again, I find the computationalist theory of mind incredible. OK, let's accept that Gordon Swobe finds it incredible, let's accept that Gordon Swobe cannot think of any way that intelligence alone can produce consciousness without an unknown something extra thrown into the mix. And yet if Gordon Swobe is a rational man he would know that he can't let personal incredulity prevent him from accepting something when there is a mountain of physical evidence supporting it; he would say: I, Gordon Swobe, cannot think of a way for intelligence to produce consciousness, however that inability is my fault because IT MUST BE TRUE. Either that or Charles Darwin didn't know what he was talking about. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 24 17:26:36 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 24 Dec 2009 12:26:36 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: References: <100204.32863.qm@web36507.mail.mud.yahoo.com> Message-ID: On Dec 23, 2009, Dave Sill wrote: > On Wed, Dec 23, Gordon Swobe wrote: >> >> Where did you buy your computer? I built mine, and I can tell you it runs formal programs in RAM. :) > > Really? Not in the CPU? Huh... I thought RAM was for storage. But Dave, programs don't even run on the CPU; really all programs do is open and close semiconductor switches. Opening and closing a switch is a pretty mundane thing to do, therefore following Mr. Swobe's logic everything any program can do, any possible output a computer can produce, is mundane. Well, that's certainly true when it comes to reading Mr. Swobe's posts on my computer. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 24 17:46:49 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 24 Dec 2009 12:46:49 -0500 Subject: [ExI] The symbol grounding problem in strong AI.
In-Reply-To: <12894.34060.qm@web36507.mail.mud.yahoo.com> References: <12894.34060.qm@web36507.mail.mud.yahoo.com> Message-ID: <6B524BF4-BAB5-4D1B-975A-2C54A387C6D7@bellsouth.net> On Dec 23, 2009, Gordon Swobe wrote: > Neither computers nor biological brains can get semantics from syntax. What an extraordinary thing to say! I've never met you but I think its safe to assume you either have a biological brain or are a computer; therefore from the above I can only conclude that never in your life have you read something that you understood. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Dec 24 18:40:15 2009 From: spike66 at att.net (spike) Date: Thu, 24 Dec 2009 10:40:15 -0800 Subject: [ExI] Carbon In-Reply-To: <20091224155142.GE17686@leitl.org> References: <20091224155142.GE17686@leitl.org> Message-ID: > ...On Behalf Of Eugen Leitl > ... > > > We already build our homes mostly of carbon... > > US maybe, in other locations people like nonorganics. > Geopolymers in form of mineral foams can be quite cheap and > low-carbon... Well, how ungreen of those other locations. By building American homes of carbon, we prevent that carbon from going into the atmosphere in the form of CO2. For a single digit BOTEC, I would estimate the amount of CO2 we exhale and defecate every day on the order of about a kilo, then estimate the size of the woodpile needed to build an American prole's home, oh about 10 or more cubic meters, close enough to 10,000 kg of wood, so the typical American prole ties up 30 years worth of her exhalations and defecations merely by virtue of living in an American house. Eskimos on the other hand build their homes of ice and snow (the igloo thing) which contributes exactly nothing to sequestering carbon in a non-greenhouse-promoting form. How white of those non-greenist eskimos. On the other hand, the eskimos' solid waste is less likely to be converted to CO2 by aquatic microbeasts at the waste treatment plant, so they contribute to sequestering carbon in the form of solidly frozen fecal matter. Furthermore, they occasionally slay whales, which carry an enormous carbon finprint. But I digress. By building American homes of carbon, we also create and nurture a market for wood products, encouraging the diversion of water out of rivers to otherwise dry and fallow ground, to grow commercial forests, further greenifying the planet. > > better-organized carbon, it would take far less than we > currently use. Eugen* Leitl ... But we want to *use* more carbon, if we use it in that form. spike From gts_2000 at yahoo.com Thu Dec 24 20:31:39 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 24 Dec 2009 12:31:39 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <752820.31575.qm@web36505.mail.mud.yahoo.com> --- On Thu, 12/24/09, Stathis Papaioannou wrote: > It's as if you believe that some physical activity is not > "purely syntactic", and therefore can potentially give rise to > mind; but as soon as it is organised in a complex enough way that it can > be interpreted as implementing a program, this potential is > destroyed! Real or hypothetical examples help to illustrate concepts, so let's try to use them when possible. I offer one: Consider an actual program that takes a simple input asking for information about the day of the week and reports "Thursday". You and I of course understand the meaning of "Thursday". 
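For concreteness, here is a minimal sketch of the kind of program being described -- purely illustrative and hypothetical, not anything actually posted to the list:

# A purely syntactic "what day is it?" responder, for illustration only.
# It maps an input string to an output string by character matching and a
# table lookup; nothing here ties the token "Thursday" to anything in the world.
import datetime

DAY_NAMES = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]

def answer(query: str) -> str:
    # "Recognition" of the question is just substring matching, not comprehension.
    if "day" in query.lower():
        return DAY_NAMES[datetime.date.today().weekday()]
    return "I don't know."

print(answer("What day of the week is it?"))  # prints e.g. "Thursday"

Whether anything in such a lookup amounts to understanding is, of course, exactly what is in dispute in the exchange that follows.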
We agree (for the moment) that the program did not understand the meaning because it did only syntactic operations and syntax does not give semantics. Now you ask what about the hardware? You want to know if the hardware (RAM, CPU and so on) that implemented those syntactic operations at the very lowest level (in 1's and 0's or ons and offs) knew the meaning of "Thursday" even while the higher program level did not. Odd question to ask, I think. Unlike the higher program level (which at least appears to have understanding!) at the machine level computers cannot even recognize or spell "Thursday". How then could the machine level understand the meaning of it? Now you might object and point out that you never actually agreed that the higher program level lacked understanding of "Thursday". Understandable - after all, if any understanding exists then we should expect to find it at the higher levels - but now we find ourselves asking the same question about if and how programs can get semantics from syntax. -gts From jameschoate at austin.rr.com Thu Dec 24 21:14:38 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Thu, 24 Dec 2009 21:14:38 +0000 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) Message-ID: <20091224211438.060YE.40918.root@hrndva-web23-z02> http://www.uwgb.edu/dutchs/PSEUDOSC/WhyAntiInt.htm1 "Unsatisfied curiosity is nagging, and there is a sense of comfort and relief when it's satisfied. Carl Sagan related how dissatisfied people were when he answered that he did not know whether there were extraterrestrial civilizations. People kept pressing him "But what do you think?" The ability to accept uncertainty requires extraordinary intellectual discipline. Medieval maps were full of spurious details simply because their makers couldn't tolerate blank spaces. There is abundant evidence that most people prefer the appearance of immediate certainty to the existence of uncertainty, even if uncertainty carries with it the certainty of getting closer to the truth later. Many people prefer religions that promise theological certainty, even if based on demonstrably spurious reasoning, rather than a religion that reasons soundly but accepts uncertainty or ambiguity. Having acquired a feeling of certainty, people naturally resist any attempt to re-open inquiry, because it will require effort and because it will subject them anew to that nagging feeling of uncertainty." -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From painlord2k at libero.it Thu Dec 24 23:24:05 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Fri, 25 Dec 2009 00:24:05 +0100 Subject: [ExI] atheism In-Reply-To: <705374.55784.qm@web59902.mail.ac4.yahoo.com> References: <705374.55784.qm@web59902.mail.ac4.yahoo.com> Message-ID: <4B33F815.8030404@libero.it> On 23/12/2009 18.18, Post Futurist wrote: >>Not to mention having your sex sewn up or cut off. > Natasha > > In third world cesspools, sure. Dearborn, Michigan? Londonistan? Sweden? Italy? Finland? Western societies imported a few third-world cesspools to enrich their multicultural society.
Apparently, the more one is for open borders and free immigration from third-world places, the less willing one is to crack down on this behavior. Mirco -------------- next part -------------- No viruses in the outgoing message. Checked by AVG - www.avg.com Version: 9.0.722 / Virus database: 270.14.119/2585 - Release date: 12/24/09 09:11:00 From lcorbin at rawbw.com Fri Dec 25 00:02:17 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 24 Dec 2009 16:02:17 -0800 Subject: [ExI] Why is there Anti-Intellectualism? Message-ID: <4B340109.60709@rawbw.com> jameschoate at austin.rr.com wrote: > http://www.uwgb.edu/dutchs/PSEUDOSC/WhyAntiInt.htm1 > people prefer the appearance of immediate certainty > to the existence of uncertainty...Many people prefer > religions that promise theological certainty...> It gives them a working hypothesis, so that they can get on with what they regard as more important. Few people actually like to think, yet that's what you have to do if there are unresolved important questions. This explains why there is so much anti-intellectualism. Most people simply find that they have far better things to do with their time than think, while on the other hand, many of us enjoy it, even to the exclusion of practically everything else. Lee From spike66 at att.net Fri Dec 25 01:12:16 2009 From: spike66 at att.net (spike) Date: Thu, 24 Dec 2009 17:12:16 -0800 Subject: [ExI] rethinking ai In-Reply-To: <4B340109.60709@rawbw.com> References: <4B340109.60709@rawbw.com> Message-ID: <1FFF908A39CE449197AA6E875961B665@spike> http://news.bbc.co.uk/2/hi/technology/8401349.stm Science goes back to basics on AI Robots are widely used but few are considered intelligent The Massachusetts Institute of Technology has begun a project to re-think artificial intelligence research. The Mind Machine Project will return to the basics of AI research to re-examine what lies behind human intelligence. Spanning five years and funded by a $5m (£3.1m) grant, it will bring together scientists who have had success in distinct fields of AI. By uniting researchers, MIT hopes to produce robotic companions smart enough to aid those suffering from dementia. "Essentially, we want to rewind to 30 years ago and revisit some ideas that had gotten frozen," said Neil Gershenfeld, one of the scientists leading the MMP and director of the MIT Center for Bits and Atoms. Mental help The MMP will bring together more than 20 senior AI scientists in a loose coalition to conduct research. Dr Gershenfeld said that although AI research was more than 50 years old, many scientists involved with the field were frustrated by the piecemeal progress that had been made. The MMP will go back to re-visit some of the basic assumptions made when AI research got underway. Dr Gershenfeld said AI research had got stuck on three separate areas that the MMP would tackle: mind, body and memory. On the mind, the research will look at ways to model thought, produce problem-solving systems and understand the social context in which human intelligence is played out. In re-thinking memory, the researchers are interested in making machines that can handle the inconsistencies and messiness of human knowledge. Finally, the team aims to end the division of mind and body to produce systems whose intelligence derives from what they can do.
The ultimate aim for the five-year project is not to produce an artificial human but to create a physical system that is smart enough to read a child's story book, understand the context surrounding that narrative and explain what happened. This could lead, said MIT, to the creation of a "brain co-processor" initially intended for those with Alzheimer's to give them a better quality of life. Such mental prostheses could also be used by anyone needing help to co-ordinate their lives. From stathisp at gmail.com Fri Dec 25 02:31:30 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 25 Dec 2009 13:31:30 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <752820.31575.qm@web36505.mail.mud.yahoo.com> References: <752820.31575.qm@web36505.mail.mud.yahoo.com> Message-ID: 2009/12/25 Gordon Swobe : > --- On Thu, 12/24/09, Stathis Papaioannou wrote: > >> It's as if you believe that some physical activity is not >> "purely syntactic", and therefore can potentially give rise to >> mind; but as soon as it is organised in a complex enough way that it can >> be interpreted as implementing a program, this potential is >> destroyed! > > Real or hypothetical examples help to illustrate concepts, so let's try to use them when possible. I offer one: > > Consider an actual program that takes a simple input asking for information about the day of the week and reports "Thursday". You and I of course understand the meaning of "Thursday". We agree (for the moment) that the program did not understand the meaning because it did only syntactic operations and syntax does not give semantics. Now you ask what about the hardware? You want to know if the hardware (RAM, CPU and so on) that implemented those syntactic operations at the very lowest level (in 1's and 0's or ons and offs) knew the meaning of "Thursday" even while the higher program level did not. Odd question to ask, I think. Unlike the higher program level (which at least appears to have understanding!) at the machine level computers cannot even recognize or spell "Thursday". How then could the machine level understand the meaning of it? There is no real distinction between program level, machine level or atomic level. These are levels of description, for the benefit of the observer, and a description of something has no causal efficacy. > Now you might object and point out that you never actually agreed that the higher program level lacked understanding of "Thursday". Understandable - after all if any understanding exists then we should expect to find it at the higher levels - but now we find ourselves asking the same question about if and how programs can get semantics from syntax. Understanding is something that is associated with understanding-like, or intelligent, behaviour. A program is just a plan in your mind or on a piece of paper to help you arrange matter in such a way as to give rise to this intelligent behaviour. There was no plan behind the brain, but post hoc analysis can reveal patterns which have an algorithmic description (provided that the physics in the brain is computable). Now, if such patterns in the brain do not detract from its understanding, why should similar patterns detract from the understanding of a computer? In both cases you can claim that the understanding comes from the actual physical structure and behaviour, not from the description of that physical structure and behaviour. 
-- Stathis Papaioannou From gts_2000 at yahoo.com Fri Dec 25 02:15:01 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Thu, 24 Dec 2009 18:15:01 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <783488.67415.qm@web36501.mail.mud.yahoo.com> --- On Thu, 12/24/09, Stathis Papaioannou wrote: > We replace Wernicke's area with an artificial analogue > that is as unnatural, robotlike and (it is provisionally assumed) > mindless as we can possibly make it. I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically. I do however assume that natural neurons do not run formal programs like those running now on your computer. (If they do then I must wonder who wrote them.) > The only requirement is that it masquerade as normal for the benefit > of the neural tissue with which it interfaces. You have not shown that the effects that concern us here do not emanate in some way from the interior behaviors and structures of neurons. As I recall the electrical activity of neurons takes place inside them, not outside them, and it seems very possible to me that this internal electrical activity has an extremely important role to play. > This subject should have no understanding of language... I don't jump so easily to conclusions. -gts From thespike at satx.rr.com Fri Dec 25 03:01:25 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 24 Dec 2009 21:01:25 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <783488.67415.qm@web36501.mail.mud.yahoo.com> References: <783488.67415.qm@web36501.mail.mud.yahoo.com> Message-ID: <4B342B05.8000308@satx.rr.com> On 12/24/2009 8:15 PM, Gordon Swobe wrote: > I do however assume that natural neurons do not run formal programs like those running now on your computer. (If they do then I must wonder who wrote them.) O my god. Literally. So the Argument from Design comes back, if only by spurious default. From protokol2020 at gmail.com Fri Dec 25 07:47:17 2009 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Fri, 25 Dec 2009 08:47:17 +0100 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <6B524BF4-BAB5-4D1B-975A-2C54A387C6D7@bellsouth.net> References: <12894.34060.qm@web36507.mail.mud.yahoo.com> <6B524BF4-BAB5-4D1B-975A-2C54A387C6D7@bellsouth.net> Message-ID: It is good that John K Clark writes here, so I don't have to. His style is much better than mine, but I would usually say approximately the same as he does. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Dec 25 08:42:39 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 25 Dec 2009 19:42:39 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <783488.67415.qm@web36501.mail.mud.yahoo.com> References: <783488.67415.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/25 Gordon Swobe : > --- On Thu, 12/24/09, Stathis Papaioannou wrote: > >> We replace Wernicke's area with an artificial analogue >> that is as unnatural, robotlike and (it is provisionally assumed) >> mindless as we can possibly make it. > > I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically. > > I do however assume that natural neurons do not run formal programs like those running now on your computer. (If they do then I must wonder who wrote them.)
Natural neurons do not run human programming languages but they do run algorithms, insofar as their behaviour can be described algorithmically. At the lowest level there is a small set of rules, the laws of physics, which rigidly determine the future state and output of the neuron from the present state and input. That the computer was engineered and the neuron evolved should make no difference: if running a program destroys consciousness then it should do so in both cases. On the other hand, if the abstract program cannot give rise to consciousness then in either the computer or the neuron you can attribute the consciousness to the physical activity associated with running the program. >> The only requirement is that it masquerade as normal for the benefit >> of the neural tissue with which it interfaces. > > You have not shown that the effects that concern us here do not emanate in some way from the interior behaviors and structures of neurons. As I recall the electrical activities of neurons takes place inside them, not outside them, and it seems very possible to me that this internal electrical activity has an extremely important role to play. The electrical activity consists in a potential difference across the neuron's cell membrane due to ion gradients. However, to be sure you have correctly modelled the behaviour of the neuron you have to model all of its internal processes. For example, when exposed to a certain pattern of inputs a neuron may decide to upregulate the number of a particular type of receptor on its surface, which involves complex coordination of activity in the nucleus, ribosomes, mitochondria, in fact most of the organelles and subsystems of the cell. So, in order to successfully masquerade as a biological neuron, the artificial neuron must be able to compute exactly what the biological neuron would have done with its receptors, and alter its output and response to input accordingly. Such a molecular level model would be beyond the capability of modern computers, and the field of computational neuroscience in large part involves creating simplified models which computers can cope with. However, there is no guarantee that the simplified model won't deviate from the natural behaviour, in which case the subject with the neural prosthesis might well both experience a subjective change and behave differently. >> This subject should have no understanding of language... > > I don't jump so easily to conclusions. I am sure that if Wernicke's area were replaced with artificial neurons close enough in behaviour to the biological neurons then the subject would understand language normally. You, on the other hand, have been stating all along that if the artificial neurons pulls off the masquerade by means of running a computer program then despite the external appearance of understanding there will be no actual understanding. But this state of affairs would create a very strange situation: the subject would think he understands, give appropriate responses to questions, and feel that nothing at all has changed as a result of the experiment, while in fact he understands nothing at all. So what is the difference between pseudo-understanding and real understanding, and how can you be sure you aren't now reading this with pseudo-understanding? 
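An aside on the "simplified models" mentioned above: below is a minimal sketch of one such textbook-style model, a leaky integrate-and-fire neuron. It is an assumed illustration, not anything from this thread, and the parameter names and values are hypothetical.

# Leaky integrate-and-fire neuron: a deliberately simplified stand-in for the
# molecular-level detail a real neuron would need. Parameters are illustrative.
def simulate_lif(input_drive, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a list of input samples (mV-equivalent drive)."""
    v = v_rest
    spike_times = []
    for step, drive in enumerate(input_drive):
        # Euler step of dv/dt = (-(v - v_rest) + drive) / tau
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_threshold:            # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset                 # membrane potential resets after firing
    return spike_times

# 100 ms of constant drive produces a regular spike train:
print(simulate_lif([20.0] * 1000))

Real neurons also regulate receptors, gene expression and much else that a model like this simply ignores, which is the point made above about simplified models possibly deviating from natural behaviour.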
-- Stathis Papaioannou From bbenzai at yahoo.com Fri Dec 25 09:58:43 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 25 Dec 2009 01:58:43 -0800 (PST) Subject: [ExI] Sensory Reality In-Reply-To: Message-ID: <322399.49564.qm@web113606.mail.gq1.yahoo.com> JOSHUA JOB revealed: > What importance does it have that you cannot magically > "experience" > "things-in-themselves"? Outside of sort-of solving that > symbol > grounding problem, it doesn't affect anything at all. We > create a > model of the world based on our sense-perceptions of > reality. That > model is based, necessarily, on those sense-perceptions. We > revise our > model continually based on all of our sense-perceptions, > and over time > it gets better and better, closer and closer to reality as > it is "in- > itself." We use this model to plan and act in the world in > order to > live. We are not divorced from reality because we are not > gods. We > simply have to work in order to understand it. We may be > wrong, but if > so, we correct our model. I do not see any meaningful > consequences in > acknowledging the fact that we aren't omniscient gods. > Reason is still > valid, and science is the way to understanding the natural > world. It > isn't limiting to acknowledge we are not infallible. In > fact, that > very admission is what is necessary for us to begin > understanding the > world. > > Oh, and Ben, your experiment, where you experience the > world directly > as sense inputs, is something that is easy to at least see: > look at > babies. Babies have no mental representations of the world, > and are > just beginning to try to form one. And I think everyone > knows how much > help babies need to survive. The only importance I ascribe to this idea is that it refutes the 'symbol-grounding problem' (i.e. there is no problem), and shows that there is no reason that (non-biological) machines can't experience just as much 'meaning' as biological ones. As for 'truth' (or 'reality'), I tried to avoid mentioning the "Ding-an-Sich" ding, precisely because it leads to people talking about metaphysics, which in my world is a dirty word. I don't think we necessarily get closer and closer to the truth, or reality, I think we refine our models to work better. There's a difference. Whether a model that works better is equivalent to getting closer to 'reality' is pretty much irrelevant. What's important is what works. (I should say "what works within a given context", to avoid the Wrath of Jef ;>) 'Truth' is a side-effect, and less important. Everyone is (or should be) aware that there are 'useful lies'. As long as you don't *believe* them, that's ok. You can use them, test them, keep them if they give you good results, and discard them if they don't. Newtonian gravity is a good example. It's a 'useful lie' that works pretty well, until you need to explain the orbit of Mercury. The Standard Model of particle physics is almost certainly another one. Many people's idea that the guy they call 'dad' is actually their father is another. So is the concept of solid objects. Etc.
Ben Zaiboc From bbenzai at yahoo.com Fri Dec 25 10:17:31 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 25 Dec 2009 02:17:31 -0800 (PST) Subject: [ExI] Atheism In-Reply-To: Message-ID: <653343.73227.qm@web113605.mail.gq1.yahoo.com> Yesterday I spotted a "Christianity for Dummies" book, and while mentally projecting the word "is" into the title - I'm easily amused - I had a quick look at it (I had a lot of time to kill waiting for a flight), and spotted a section about atheism. The author actually said that atheism was a faith-based position because atheists "Believe there is no god". No wonder people get confused about atheism, when they read misrepresentations like this in books. I wasn't sure whether it was a deliberate lie, or whether the author genuinely didn't get it. Is it really so difficult to understand the distinction between "I believe X is untrue" and "I don't have any belief about X"? Ben Zaiboc From stefano.vaj at gmail.com Fri Dec 25 12:29:51 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 25 Dec 2009 13:29:51 +0100 Subject: [ExI] atheism In-Reply-To: <746121.54668.qm@web59910.mail.ac4.yahoo.com> References: <746121.54668.qm@web59910.mail.ac4.yahoo.com> Message-ID: <580930c20912250429g39f9d2f3sa030e65cec0aba2@mail.gmail.com> 2009/12/23 Post Futurist > For starters, how are religious schools worse than public schools? They are not. In Italy, religious schools are often much better - and, surprise surprise, way more expensive (even though a few exist which are still expensive but are rather specialised in difficult, as opposed to gifted and affluent, students). This is why many atheist patents do not think twice about opting for them, especially Jesuit-managed ones. Those who are more on the "militant atheist" side often rationalise such choice as "know thy enemy", etc. In fact, I studied law myself in the Catholic University of Milan, and went - I must say, with some curiosity - through three theology exams. Speaking of one's children, and the choice Italian parents have on whether they should have one hour per week of "religious education", my general idea is that since you cannot shield them anyway from monotheistic propaganda, it is probably best that you have them vaccinated as soon as possible... ;-) -- Stefano Vaj From stefano.vaj at gmail.com Fri Dec 25 12:38:52 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 25 Dec 2009 13:38:52 +0100 Subject: [ExI] atheism In-Reply-To: <235370.43610.qm@web59913.mail.ac4.yahoo.com> References: <235370.43610.qm@web59913.mail.ac4.yahoo.com> Message-ID: <580930c20912250438v6a5cb78ax6c9e71e85211e10e@mail.gmail.com> 2009/12/22 Post Futurist > *>There again, it depends on the religion. If it is one that postulates > the existence of an entity "out of time and space", I would say that > it definitely is.* > *Stefano Vaj* > > >But more destructive? > > Define "destructive"... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Fri Dec 25 13:30:38 2009 From: mbb386 at main.nc.us (MB) Date: Fri, 25 Dec 2009 08:30:38 -0500 (EST) Subject: [ExI] Atheism In-Reply-To: <653343.73227.qm@web113605.mail.gq1.yahoo.com> References: <653343.73227.qm@web113605.mail.gq1.yahoo.com> Message-ID: <36373.12.77.168.202.1261747838.squirrel@www.main.nc.us> > > The author actually said that atheism was a faith-based position because atheists > "Believe there is no god". > Arrgh! What is wrong with the word "think"??? Have we forgotten how to use that word? 
We (the culture) "believe" (or even worse, "believe in") any and everything. It's frustrating. :( Regards, MB From scerir at libero.it Fri Dec 25 14:42:18 2009 From: scerir at libero.it (scerir) Date: Fri, 25 Dec 2009 15:42:18 +0100 (CET) Subject: [ExI] Myrrh again Message-ID: <27579291.118791261752138753.JavaMail.defaultUser@defaultHost> It seems that Nadia Saleh Al-Amoudi (International Journal of Food Safety, Nutrition and Public Health (IJFSNPH) Volume 2 - Issue 2 - 2009) http://tinyurl. com/yfggyl4 has found that myrrh helps lower cholesterol levels. It seems that Ayurvedic medicine has found that Indian myrrh (guggul) contains compounds which lower blood lipids (Complement Ther Med. 2009 Jan;17(1):16-22. Epub 2008 Aug 15). It also seems that a study by Rutgers University in New Jersey has found http://ur.rutgers.edu/medrel/viewArticle.html?ArticleID=1927 that myrrh could be used to fight prostate and breast cancers. What can I say about that ... myrrhabilia! From jonkc at bellsouth.net Fri Dec 25 16:20:26 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 25 Dec 2009 11:20:26 -0500 Subject: [ExI] Why is there Anti-Intellectualism? In-Reply-To: <4B340109.60709@rawbw.com> References: <4B340109.60709@rawbw.com> Message-ID: <28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> The link Lee provided appears to be broken but I think there are 3 reasons for Anti-Intellectualism: 1)Thinking is harder than accepting and nature often follows the path of least action. 2)Logic does not always give the answer that people want to hear. 3)Many believe that being certain is more important than being correct. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Dec 25 16:00:39 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 25 Dec 2009 11:00:39 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <783488.67415.qm@web36501.mail.mud.yahoo.com> References: <783488.67415.qm@web36501.mail.mud.yahoo.com> Message-ID: <1C3BBD11-D073-4DC4-8AD3-B72F348062D3@bellsouth.net> On Dec 24, 2009, at 9:15 PM, Gordon Swobe wrote: > I have no concerns about how "robotlike" you might make your artificial neurons. I don't assume that natural neurons do not also behave robotically. I do however assume that natural neurons do not run formal programs like those running now on your computer. Then I have no idea what you mean by "robotically" and would be willing to bet money that you don't either. > If they do then I must wonder who wrote them. Well naturally you'd wonder about who wrote those programs, because like Searle you are ignorant of things that any good High School biology student knows, and can pretend, or perhaps doesn't even know, that a book explaining exactly how those programs came to be was written 150 years ago. Searle sits in his armchair, a man who has never once dirtied his hands performing an actual experiment and concludes that X cannot be true despite a huge amount of evidence gathered over the centuries indicating that it MUST be true. He say's "I personally don't understand how X could be and the only possible explanation for my lack of understanding is that X is in fact not true, and to hell with that titanic pile of empirical confirmation. I am smarter than the evidence, if I can't find the answer then I know the answer does not exist". 
As I said before this is the sort of thing that gives philosophy a bad name and is the reason that no great philosophical breakthrough has ever come from philosophers. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Dec 25 16:47:51 2009 From: spike66 at att.net (spike) Date: Fri, 25 Dec 2009 08:47:51 -0800 Subject: [ExI] christmas carols for the psychologically challenged Message-ID: <5427A1AC22E84BF194B0138980AAAC65@spike> Christmas Carols for the Psychologically Challenged 1. Schizophrenia --- Do You Hear What I Hear? 2. Multiple Personality Disorder --- We Three Queens Disoriented Are 3. Amnesia --- I Don't Know if I'll be Home for Christmas 4. Narcissistic --- Hark the Herald Angels Sing About Me 5. Manic --- Deck the Halls and Walls and House and Lawn and Streets and Stores and Office and Town and Cars and Buses and Trucks and Trees and Fire Hydrants and ... 6. Paranoid --- Santa Claus is Coming to Get Me 7. Borderline Personality Disorder --- Thoughts of Roasting on an Open Fire 8 . Full Personality Disorder-- You Better Watch Out, I'm Gonna Cry, I'm Gonna Pout, Maybe I'll tell You Why 9. Obsessive Compulsive Disorder ---Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells .. 10. Agoraphobia --- I Heard the Bells on Christmas Day But Wouldn't Leave My House 11. Senile Dementia --- Walking in a Winter Wonderland Miles From My House in My Slippers and Robe 12. Oppositional Defiant Disorder --- I Saw Mommy Kissing Santa Claus So I Burned Down the House -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Fri Dec 25 17:28:45 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 25 Dec 2009 11:28:45 -0600 Subject: [ExI] christmas carols for the psychologically challenged In-Reply-To: <5427A1AC22E84BF194B0138980AAAC65@spike> References: <5427A1AC22E84BF194B0138980AAAC65@spike> Message-ID: LOL Nlogo1.tif Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Friday, December 25, 2009 10:48 AM To: 'ExI chat list' Subject: [ExI] christmas carols for the psychologically challenged Christmas Carols for the Psychologically Challenged 1. Schizophrenia --- Do You Hear What I Hear? 2. Multiple Personality Disorder --- We Three Queens Disoriented Are 3. Amnesia --- I Don't Know if I'll be Home for Christmas 4. Narcissistic --- Hark the Herald Angels Sing About Me 5. Manic --- Deck the Halls and Walls and House and Lawn and Streets and Stores and Office and Town and Cars and Buses and Trucks and Trees and Fire Hydrants and ... 6. Paranoid --- Santa Claus is Coming to Get Me 7. Borderline Personality Disorder --- Thoughts of Roasting on an Open Fire 8 . Full Personality Disorder-- You Better Watch Out, I'm Gonna Cry, I'm Gonna Pout, Maybe I'll tell You Why 9. Obsessive Compulsive Disorder ---Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells, Jingle Bells .. 10. Agoraphobia --- I Heard the Bells on Christmas Day But Wouldn't Leave My House 11. Senile Dementia --- Walking in a Winter Wonderland Miles >From My House in My Slippers and Robe 12. Oppositional Defiant Disorder --- I Saw Mommy Kissing Santa Claus So I Burned Down the House -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From jrd1415 at gmail.com Fri Dec 25 18:38:43 2009 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 25 Dec 2009 11:38:43 -0700 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <20091223101342.GP17686@leitl.org> References: <636244.44626.qm@web36503.mail.mud.yahoo.com> <20091223101342.GP17686@leitl.org> Message-ID: On Wed, Dec 23, 2009 at 3:13 AM, Eugen Leitl wrote: > I find it really difficult to understand what drives reasonably smart > people today to bog down in a rerun of the vis vitalis debate. Which was > empirically put to death on 22th February of 1828... Okay, Gene, I'll bite. All I could find re 22 Feb 1828 was this: ...In 1828, Karl Ernst Ritter von Baer, having examined the fetal anatomy of numerous species, published the view that all animals have three germ layers and that that the ontogeny of embryos proceeds from initial homogeneity to heterogeneity by stages similar to other young animals, but not by the recapitulation of the adult forms of lower animals. Is this what you had in mind? Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From thespike at satx.rr.com Fri Dec 25 19:13:48 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 25 Dec 2009 13:13:48 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <636244.44626.qm@web36503.mail.mud.yahoo.com> <20091223101342.GP17686@leitl.org> Message-ID: <4B350EEC.6030105@satx.rr.com> On 12/25/2009 12:38 PM, Jeff Davis wrote: > On Wed, Dec 23, 2009 at 3:13 AM, Eugen Leitl wrote: > >> > I find it really difficult to understand what drives reasonably smart >> > people today to bog down in a rerun of the vis vitalis debate. Which was >> > empirically put to death on 22th February of 1828... > > Okay, Gene, I'll bite. > > All I could find re 22 Feb 1828 was this Nope. "organic chemistry has begun in 1828 when F. Wöhler synthesized urea"... "...Until 1828, it was believed that organic substances could only be formed under the influence of the vital force in the bodies of animals and plants. Wöhler proved by the artificial preparation of urea from inorganic materials that this view was false." From jrd1415 at gmail.com Fri Dec 25 19:30:41 2009 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 25 Dec 2009 12:30:41 -0700 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <477951.24225.qm@web36504.mail.mud.yahoo.com> References: <20091223101342.GP17686@leitl.org> <477951.24225.qm@web36504.mail.mud.yahoo.com> Message-ID: On Wed, Dec 23, 2009 at 4:59 AM, Gordon Swobe wrote: >... Searle wants to know what possesses some intelligent people to attribute "mind" to mere programs running on computers, programs which in the final analysis do nothing more interesting than kitchen can-openers. This is where you lose me. You've got the faulty theoretical confused with the indisputably empirical: a theory,... no, not even a theory,.. a funky-ass notion of mind, thoroughly corrupted by a persistent pre-conscious legacy of spiritualism, versus the blunt, mundane, un-hyped FACT ...... of mind . Minds arise from dirt. This is self-evident. This is empirical. Consequently the default assumption should be that they "...do nothing more interesting than kitchen can openers..." You are such a mind... and you're understandably impressed. 
But your notion that minds are -- ergo -- "interesting" -- more interesting than kitchen can-openers,... well sorry bro, but that's not logic/science, it's egoism. Humans, with their minds are no more interesting than mosquitoes with their lower order of mind. Such is the pernicious penetration of spiritualism, that even this is faulty. By using a mosquito -- a form of life -- as a comparator, I have engaged the unspoken spiritualist assumption that life is "interesting". To make the break completely, I should write "Humans, with their minds are no more "interesting" than kitchen can-openers." Wrap your mind around that and you'll come to understand what I meant when I described this epiphany as liberating. It allows you to cast off the entire legacy of ooga booga(superstitious nonsense). Sweep away the legacy blindfold of ignorance, and start over, clear and clean. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From lcorbin at rawbw.com Fri Dec 25 20:13:10 2009 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 25 Dec 2009 12:13:10 -0800 Subject: [ExI] Why is there Anti-Intellectualism? In-Reply-To: <28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> References: <4B340109.60709@rawbw.com> <28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> Message-ID: <4B351CD6.6070506@rawbw.com> John Clark wrote: > The link Lee provided appears to be broken James had originally supplied the correct link: http://www.uwgb.edu/DutchS/PSEUDOSC/WhyAntiInt.htm > but I think there are 3 reasons for Anti-Intellectualism: > > 1) Thinking is harder than accepting and nature often follows the path of > least action. > 2) Logic does not always give the answer that people want to hear. > 3) Many believe that being certain is more important than being correct. adding to my 4) [Accepting a dogma] gives people a working hypothesis 5) Few people enjoy thinking, and find they have better things to do. All these are properly elaborated by crucial sociobiological (evolutionary psychology) explanation. 1. "Thinking is harder", yes, and more expensive. Therefore often dangerous and always costly. 2. People want to hear that which is consistent with what they already believe (I do!). Otherwise you pay penalties for indecisiveness and delay. Nobody here, for example, would actually *enjoy* reading even the strongest and most reliable new study showing that evolution was wrong. 3. In many cases, unfortunately, being certain *is* more important than being correct. While usually true in leadership issues and time-critical decision making, most of us, hopefully, enjoy those times when we have the luxury of unhurriedly seeking the truth. 4. Accepting, according to PCR, many things as provisionally true furnishes a basis for further exploration and progress, which is obviously beneficial. But this is essentially the same as 2). 5. Thinking just for the fun of it has to be fairly new on the EP scene, and today's culture is really a throwback to a much, much more primitive time when thinking reduced biological fitness. Indeed, having many people who like to think is no longer an ESS for any extant population---just look at who is having lots of children. Also, you have to admit that many people's disdain for thinking and for intellectuals has a lot going for it, given how much thinking Rousseau, Marx, Lenin, Hitler, and Mao engaged in. 
Probably the key difference between those bad guys who brought us so much distress, and those whose thinking has brought such wonderful benefit--- scientists, entrepreneurs, artists, and engineers ---is that the latter think locally and act locally, whereas the former always have grand schemes, be it nationalized health care or warding off global warming, just so long as it requires trillions of OPM. Lee From spike66 at att.net Fri Dec 25 20:36:23 2009 From: spike66 at att.net (spike) Date: Fri, 25 Dec 2009 12:36:23 -0800 Subject: [ExI] ho^3 = evil? In-Reply-To: <4B351CD6.6070506@rawbw.com> References: <4B340109.60709@rawbw.com><28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> <4B351CD6.6070506@rawbw.com> Message-ID: <95DC576F31F54BBFBA70A65C770AFDDE@spike> While participating in the usual season's practice of exchanging gifts, I noticed one of the wrappings had the comment "HoHoHo" which caused me to ponder the phenomenon universal in and unique to humans, that of laughter. I further wondered if the different styles of laughter would carry a universal meaning. For instance, imagine jolly old Santa Claus and a randomly selected mad scientist. Both have a unique laugh, but imagine switching the two. Then they would utter the comments "...then I, Simon Bar Sinister, shall RULE THE WORLD! HO HO HO..." and "...Meeerrry Christmaaaass MuuWAHHaaahaahaaaa..." If so, would we then switch the meanings, universally associating the Santa-like Ho^3 with evil insanity, and Jabba-the-Huttesque MuWaahahaha with generosity and kindness? Or would history have been written differently, such as "That St. Nicolas was a kind and generous soul who led a life to be imitated in every way, but he often uttered an evil cackle that caused the children to flee in terror." And "Dr. Bar Sinister was bent on destroying humanity, but he had a most amusing jocular laugh." Just wondering. spike From max at maxmore.com Fri Dec 25 22:18:13 2009 From: max at maxmore.com (Max More) Date: Fri, 25 Dec 2009 16:18:13 -0600 Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <200912252218.nBPMILBV003537@andromeda.ziaspace.com> >On 12/25/2009 12:38 PM, Jeff Davis wrote: > > On Wed, Dec 23, 2009 at 3:13 AM, Eugen Leitl wrote: > > > >> > I find it really difficult to understand what drives reasonably > >> > smart people today to bog down in a rerun of the vis vitalis > >> > debate. Which was empirically put to death on 22th February of > >> > 1828... I hope you'll note that I have carefully avoided this thread, despite being well schooled in Searle. > > > > Okay, Gene, I'll bite. > > > > All I could find re 22 Feb 1828 was this > >Nope. > >"organic chemistry has begun in 1828 when F. Wöhler synthesized urea"... > >"...Until 1828, it was believed that organic substances could only be >formed under the influence of the vital force in the bodies of animals >and plants. Wöhler proved by the artificial preparation of urea from >inorganic materials that this view was false." In other words: "I urinate on your vitalism" (and I fart in your general direction). Which reminds me on this silly day of the year... for those who appreciate both Monty Python *and* classic Doctor Who!: http://www.youtube.com/watch?v=l9sOiE5cTK4&NR=1 Max From jameschoate at austin.rr.com Sat Dec 26 00:37:05 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Fri, 25 Dec 2009 18:37:05 -0600 Subject: [ExI] Why is there Anti-Intellectualism? 
In-Reply-To: <4B340109.60709@rawbw.com> Message-ID: <20091226003705.EOD0B.102245.root@hrndva-web07-z01> ---- Lee Corbin wrote: > It gives them a working hypothesis, so that they > can get on with what they regard as more important. This doesn't explain anything however because you're asserting they're making a specific reasoned decision about a minor point so they can get on with more important issues. Just ignore it and get on with it. No, people consider this an important point, many would say it is the point. > Few people actually like to think, yet that's what > you have to do if there are unresolved important > questions. Everyone thinks; you ignore the question of how they do it. > This explains why there is so much anti-intellectualism. No, it doesn't explain anything. If anything it is itself anti-intellectual. It's a pat answer, practically begs the question. > Most people simply find that they have far better > things to do with their time than think, while on > the other hand, many of us enjoy it, even to the > exclusion of practically everything else. That's a rather self-serving perspective. I'd say that it rests on the same sort of arrogance that many religious people use to justify their own belief structure. -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Sat Dec 26 00:44:58 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Sat, 26 Dec 2009 0:44:58 +0000 Subject: [ExI] Why is there Anti-Intellectualism? In-Reply-To: <28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> Message-ID: <20091226004458.Z71VK.102274.root@hrndva-web07-z01> ---- John Clark wrote: > 1)Thinking is harder than accepting and nature often follows the path of least action. Malarky. Two people can see the exact same information and come to distinctly different conclusions. In fact their conclusions may be diametric. Everyone thinks. To believe that a religious person is somehow stupid or lazy is itself anti-intellectual. The question is what is the qualitative difference? I would contend that from an intellectual perspective a theist and an atheist are both anti-intellectual. They're both dogmatic and absolutist in their perspectives. They both deny any potential for being wrong fundamentally. > 2)Logic does not always give the answer that people want to hear. Logic doesn't always give the right answer. See paraconsistent logic as well as Godel. A reliance on logic for giving the right answer is no different than relying on some other book for the right answer. > 3)Many believe that being certain is more important than being correct. This implies some absolutism to reality. I take it you are a Platonist then? -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jameschoate at austin.rr.com Sat Dec 26 00:59:41 2009 From: jameschoate at austin.rr.com (jameschoate at austin.rr.com) Date: Sat, 26 Dec 2009 0:59:41 +0000 Subject: [ExI] Why is there Anti-Intellectualism? 
In-Reply-To: <4B351CD6.6070506@rawbw.com> Message-ID: <20091226005941.RA9IW.102307.root@hrndva-web07-z01> ---- Lee Corbin wrote: > adding to my > > 4) [Accepting a dogma] gives people a working hypothesis And what is the difference between a dogma and a working hypothesis in some other context? > 5) Few people enjoy thinking, and find they have better things to do. No, people think. They just do it in different ways. > All these are properly elaborated by crucial sociobiological > (evolutionary psychology) explanation. How so? Many of the things we discuss are beyond evolutionary forces in that they don't affect our procreation success. Evolution is concerned with reproduction rates at the individual and group levels as compared to other individuals/groups. Things that don't affect that can't be strictly or completely explained by it. It's comparative to epigenetics and how it dooms Dawkins's biological perspective (see Margulis for example). > 1. "Thinking is harder", yes, and more expensive. Therefore > often dangerous and always costly. How so? More expensive in what way? > 2. People want to hear that which is consistent with > what they already believe (I do!). Otherwise you > pay penalties for indecisiveness and delay. Nobody > here, for example, would actually *enjoy* reading > even the strongest and most reliable new study > showing that evolution was wrong. That evolution is wrong or that evolution doesn't work the way we thought? Those are distinctly different issues, it's not about right and wrong really. > 3. In many cases, unfortunately, being certain *is* > more important than being correct. While usually > true in leadership issues and time-critical decision > making, most of us, hopefully, enjoy those times > when we have the luxury of unhurriedly seeking > the truth. Those are cases where one has to do something; it's not really right/wrong in the strictest sense, it's a game-theoretic pay-off matrix. > 4. Accepting, according to PCR, many things as provisionally > true furnishes a basis for further exploration and progress, > which is obviously beneficial. But this is essentially the > same as 2). I think the 'provisional' consideration is important, it is not in fact covered in #2. I'd say you're getting closer to the point, which is why I submitted the question. > 5. Thinking just for the fun of it has to be fairly > new on the EP scene, and today's culture is really > a throwback to a much, much more primitive time when > thinking reduced biological fitness. Indeed, having > many people who like to think is no longer an ESS > for any extant population---just look at who is > having lots of children. Why does it have to be fairly new? Evidence? We've got at least three other species of hominids besides our own making jewelry, painting, etc. That covers a couple million years of hominid evolution, doesn't sound very new to me. > Also, you have to admit that many people's disdain for > thinking and for intellectuals has a lot going for it, > given how much thinking Rousseau, Marx, Lenin, Hitler, > and Mao engaged in. No, actually I don't. I don't think your assertion here has any real evidence to support it. > Probably the key difference between those bad guys > who brought us so much distress, and those whose > thinking has brought such wonderful benefit--- > scientists, entrepreneurs, artists, and engineers I'll call bullshit on that one. Without all these same guys the 'bad guys' as you class them wouldn't have gotten off the ground. 
You are drawing a false distinction here. -- -- -- -- -- Venimus, Vidimus, Dolavimus jameschoate at austin.rr.com james.choate at g.austincc.edu james.choate at twcable.com h: 512-657-1279 w: 512-845-8989 www.ssz.com http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center Adapt, Adopt, Improvise -- -- -- -- From jonkc at bellsouth.net Sat Dec 26 04:22:34 2009 From: jonkc at bellsouth.net (John Clark) Date: Fri, 25 Dec 2009 23:22:34 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <200912252218.nBPMILBV003537@andromeda.ziaspace.com> References: <200912252218.nBPMILBV003537@andromeda.ziaspace.com> Message-ID: On Dec 25, 2009, at Max More wrote: > In other words: "I urinate on your vitalism" (and I fart in your general direction). Damn I wish I'd said that! Oh well, undoubtably I will. As Picasso said, good artists copy, great artists steal. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Sat Dec 26 13:07:19 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 26 Dec 2009 05:07:19 -0800 (PST) Subject: [ExI] Dr. Searle and the Daleks In-Reply-To: Message-ID: <702291.91095.qm@web113601.mail.gq1.yahoo.com> Max More wrote: > I hope you'll note that I have carefully avoided > this thread, despite being well schooled in Searle. Extremely well done, Max. After bowing out from the discussion myself, I understand how difficult it is to just bite your tongue and say nothing. > for those who appreciate both Monty Python *and* > classic Doctor Who!: http://www.youtube.com/watch?v=l9sOiE5cTK4&NR=1 Brilliant! Thanks for that. Had me in stitches. Ben Zaiboc From bbenzai at yahoo.com Sat Dec 26 14:03:32 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 26 Dec 2009 06:03:32 -0800 (PST) Subject: [ExI] Why is there Anti-Intellectualism? In-Reply-To: Message-ID: <68322.43490.qm@web113602.mail.gq1.yahoo.com> wrote: > > ---- John Clark > wrote: > > > 1)Thinking is harder than accepting and nature often > follows the path of least action. > > Malarky. Two people can see the exact same information and > come to distinctly different conclusions. In fact their > conclusions may be diametric. This follows naturally from my earlier contention that everyone creates their own internal worlds, and tries to make the best sense they can of their sensory inputs, combined with their memories of past events. This doesn't mean that John's comment is malarkey, though. Critical thinking *is* harder than just accepting (I have personal evidence of this with my ongoing struggle with Maths), and it's obvious that nature often follows the path of least action (no, not quite obvious. It's more like an actual law of physics). > > Everyone thinks. To believe that a religious person is > somehow stupid or lazy is itself anti-intellectual. Not if you've arrived at that conclusion through thinking about it critically, rather than it just being a gut-reaction. Everyone does indeed think, but that thinking takes place on a spectrum from critical reasoning to unexamined emotional responses. Guess which end most people do most of their thinking at? > The > question is what is the qualitative difference? I would > contend that from an intellectual perspective a theist and > an atheist are both anti-intellectual. They're both dogmatic > and absolutist in their perspectives. They both deny any > potential for being wrong fundamentally. Hardly. 
Atheism is evidence-based, religion is faith-based. I can't stress enough that atheism is *not a Belief*. If you wish to posit a Belief system that states as a matter of faith that god/s do not exist, please do so, but *don't* call it atheism. Atheism is a *lack* of Belief, and is anything but dogmatic and absolutist. In fact, it's atheists' hate of dogmatism and absolutism that leads so many of them to become 'militant' (which is a completely incorrect term, btw. Atheists don't blow things up or kill people in the name of atheism), and 'strident' anti-religionists. (I'm capitalising "Belief" to disinguish it from the everyday usage of the word, such as "I believe it's time for lunch", which is a stomach-based, rather than a faith-based position). > > > 2)Logic does not always give the answer that people > want to hear. > > Logic doesn't always give the write answer. See > paraconsistent logic as well as Godel. A reliance on logic > for giving the right answer is no different than relying on > some other book for the right answer. Not quite. At least logic is self-consistent. > > 3)Many believe that being certain is more important > than being correct. > > This implies some absolutism to reality. I take it you are > a Platonist then? John may or may not be a Platonist, he can speak for himself on that. I don't see any connection between your remark and his, though. One is a statement about human psychology, the other is about philosophy. Ben Zaiboc From gts_2000 at yahoo.com Sat Dec 26 15:05:45 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 26 Dec 2009 07:05:45 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <703564.97308.qm@web36506.mail.mud.yahoo.com> --- On Thu, 12/24/09, Stathis Papaioannou wrote: > There is no real distinction between program level, machine > level or atomic level. These are levels of description, for the benefit > of the observer, and a description of something has no causal efficacy. If there exists "no real distinction" then why do you think what applies at the program level does not also apply at the hardware level or of the system in its entirety? In any case the formal nature of programs becomes most obvious at the machine level, where the machine states represented by 1's and 0's have no semantic content either real or imagined. These two symbols and the states they represent differ in *form only* and this is what is meant by "formal program." On these meaningless differences in form turn the gears of computer logic. Out of these meaningless differences in form we have created high level languages and with them programs that create the illusion of semantics and intrinsic intentionality, i.e., understanding. I see this remarkable achievement this as evidence of human ingenuity -- not as evidence that computers really have minds. > Understanding is something that is associated with > understanding-like, or intelligent, behaviour. Understanding is associated with intelligent behavior, yes, but the two things do not equal one another. > A program is just a > plan in your mind or on a piece of paper to help you arrange matter > in such a way as to give rise to this intelligent behaviour. Exactly. Programs act as blueprints for intelligent *behavior*. > There was no plan behind the brain, but post hoc analysis can reveal > patterns which have an algorithmic description (provided that the > physics in the brain is computable). 
Now, if such patterns in the brain > do not detract from its understanding, why should similar patterns > detract from the understanding of a computer? If computers had understanding then those patterns we might find and write down would not detract from their understanding any more than do patterns of brain behavior detract from the brain's understanding. But how can computers that run formal programs have understanding? > In both cases you can claim that the understanding comes from the actual > physical structure and behaviour, not from the description of that > physical structure and behaviour. I don't claim computers have understanding. They act as if they have it (as in weak AI) but they do not actually have it (as in strong AI). Let us say machine X has strong AI, and that we abstract from it a formal program that exactly describes and determines its intelligent behavior. We then run that abstracted formal program on a software/hardware system called computer Y. Computer Y will act exactly like machine X but it will have only weak AI. (If you get that then you've gotten what there is to get! :-) Formal programs exist as abstract simulations. They do not equal the things they simulate. They contain the forms of things but not the substance of things. -gts From ankara at tbaytel.net Sat Dec 26 15:52:58 2009 From: ankara at tbaytel.net (ankara at tbaytel.net) Date: Sat, 26 Dec 2009 10:52:58 -0500 Subject: [ExI] atheism msg 7 Dec 23 Message-ID: <94C6F716-F179-4A09-974F-EB659A7CF646@tbaytel.net> In Message: 7, Post Futurist writes: > Sure, harm in a relative sense, and personally I despise > fundamentalists; yet are most faiths fundamentalist in first-world > nations? Not anymore. > There are plenty of robust religionists (read - male supremacists) residing in North America, see for example: http://en.wikipedia.org/wiki/Quiverfull "Quiverfull: Inside the Christi? " by Kathryn Joyce > This list consists of people living primarily in America and the > UK. Are most religious orgs, save for in the Deep South, fascistic > today? no. And what number on this list have been denied the > opportunity to have abortions? not many. > Abortion may be legal but that doesn't make it accessible. Child protection agencies freely traffic in children and harvest newborns from vulnerable (socially or economically oppressed) girls. See for example: http://www.acf.hhs.gov/programs/cb/programs_fund/discretionary/iaatp.htm http://adoptionsbygladney.com/ http://www.marieclaire.com/world-reports/news/international/surrogate- mothers-india http://www.originsnsw.com/nswinquiry2/id12.html Sexual assault / sexploitation are crimes yet access to justice, assistance and visibility are almost impossible. The victims are blamed. The perps are not pathologized. See for example: http://www.rainn.org/index.php > Some extropians in Old Blighty can tell us about the Church of > England. If anything, the Anglican Church is a harmless, silly tax > exempt business. > The Anglican & United Church and others are responsible for human suffering and degradation, sexual exploitation... to mention only the tip of a corrupt and dirty iceberg. Of course these crimes against humanity may pale in comparison to the Catholic Church's dirty laundry. See: http://en.wikipedia.org/wiki/Magdalene_Asylum > In America there is a sizable minority of would-be fascists, but no > more than there are social fascist govt. employees, Commies, Cosa > Nostra-types, etc. > WTF? We were discussing harm - direct harm initiated and inflicted via male supremacy. 
In order to Identify the perps, we need only look for its beneficiaries. From gts_2000 at yahoo.com Sat Dec 26 16:12:11 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 26 Dec 2009 08:12:11 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <231739.15596.qm@web36502.mail.mud.yahoo.com> --- On Fri, 12/25/09, Jeff Davis wrote: >>... Searle wants to know what possesses some >> intelligent people to attribute "mind" to mere programs >> running on computers, programs which in the final analysis >> do nothing more interesting than kitchen can-openers. > > This is where you lose me.? You've got the faulty > theoretical confuse with the indisputably empirical: a theory,... no, > not even a theory,.. a funky-ass notion of mind, thoroughly corrupted > by a persistent pre-conscious legacy of spiritualism, versus the blunt, > mundane, un-hyped FACT ...... of mind . People have argued along lines similar to yours that Searle subscribes to or espouses some sort of spiritualistic/mystical dualism of mind/matter. I once thought so myself until recently, when I did the research and learned otherwise. I would describe him as a "tough-minded realist". As in my paragraph that you quoted, he really does wonder what possesses intelligent people to attribute "mind" to computers. If computers have minds then so do kitchen can-openers and the word loses all meaning. Does the abacus have a mind merely because humans use it to do maths? People have since the beginning of recorded history shown a propensity to anthropomorphize. Humans imagine gods with human-like minds, trees with human-like minds, mountains and weather patterns with human-like minds, and so on. In the last century humans started assigning these same sorts of human-like mental properties to these kitchen appliances we call computers. Searle puts the kibosh on it, and then people like you call *him* the spiritualist! In any case if you have a philosophical bent and a sincere interest in the subject, (and don't want merely to join the crowd in throwing spitwads at Searle while shooting me as the messenger), then I invite you to read this paper: Why I Am Not a Property Dualist http://www.imprint.co.uk/pdf/searle-final.pdf In this paper Searle explains why charges such as those you have levied miss his point completely. -gts From jonkc at bellsouth.net Sat Dec 26 16:53:47 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 26 Dec 2009 11:53:47 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <703564.97308.qm@web36506.mail.mud.yahoo.com> References: <703564.97308.qm@web36506.mail.mud.yahoo.com> Message-ID: <7C21F64A-2257-4821-8A6F-96BFD044EEC8@bellsouth.net> On Dec 26, 2009, Gordon Swobe wrote: > the formal nature of programs becomes most obvious at the machine level, where the machine states represented by 1's and 0's have no semantic content either real or imagined. A series of electrical impulses representing 1's and 0's came down a wire and into my computer, the machine then translated those 1's and 0's into the above words. You are insisting that those words have no meaning real or imagined. You may have a point. > Understanding is associated with intelligent behavior, yes, but the two things do not equal one another. You keep saying that over and over again but repetition doesn't make it true. If Darwin was right those two things MUST be true. 
You claim to be a rational man but ignore a mountain of evidence that shows your understanding of how the world works must be WRONG. If you are really rational then even if you don't understand how those two things could equal each other you must conclude that they DO equal each other. The evidence allows for no other conclusion. > I don't claim computers have understanding. They act as if they have it (as in weak AI) but they do not actually have it (as in strong AI). And that's why I don't use the terms strong or weak AI, it's a distinction that quite literally cannot be made, not between computers and not between our fellow human beings. Perhaps some people have suffered a mutation that renders them no more conscious than a rock, however intelligent and sociable they appear to be; in fact that mutation would likely find its way into future populations as it would be no more detrimental than the mutation to lack eyes or pigmentation have for animals who have lived for thousands of generations in dark caves. The mutation would even be beneficial as the individual wouldn't be making something useless from nature's point of view and those resources could be used for something important that Evolution could actually see. I believe my brain cells have better things to do than ponder this possibility very deeply because even if true it is the "weak" intelligent people's (or computer's) problem not mine. > how can computers that run formal programs have understanding? I take it that this is to be regarded as a Zen koan, like "What is the sound of one hand clapping?" or "Why does a man ask a question if he is determined to ignore the answer?" John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Sat Dec 26 17:08:50 2009 From: steinberg.will at gmail.com (Will Steinberg) Date: Sat, 26 Dec 2009 12:08:50 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <7C21F64A-2257-4821-8A6F-96BFD044EEC8@bellsouth.net> References: <703564.97308.qm@web36506.mail.mud.yahoo.com> <7C21F64A-2257-4821-8A6F-96BFD044EEC8@bellsouth.net> Message-ID: <4e3a29500912260908wd31cf10y44ccdca5ea920c0a@mail.gmail.com> There was once a rat who chased his tail but it seemed his quest would be to no avail this thought occured after he made the realization that he was going nowhere (just like this conversation) I know conversion is the work of a saint and Clark (et al.), I'm not sayin' you ain't but is it really worth all this messin' just to teach Swobe a lesson? I think ExI, perhaps, has had enough of this here tautological clusterfuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Dec 26 17:10:19 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 26 Dec 2009 11:10:19 -0600 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <231739.15596.qm@web36502.mail.mud.yahoo.com> References: <231739.15596.qm@web36502.mail.mud.yahoo.com> Message-ID: <4B36437B.6050508@satx.rr.com> On 12/26/2009 10:12 AM, Gordon Swobe wrote: > he really does wonder what possesses intelligent people >to attribute "mind" to computers. ... Does the abacus have >a mind merely because humans use it to do maths? Who seriously attributes mind (or consciousness or intentionality) to their laptop, or even the world's best existing supercomputer? This is a strawman. 
The interesting question is what sort of advanced computer hardware+software might develop these experiences and capacities for independent decision. However-- >If computers have minds then so do kitchen can-openers and the word loses all meaning. That's as ridiculous, in the context of considering a hypothetical human-grade AGI machine, as saying "If human brains have minds then so do bedbugs and individual neurons." Damien Broderick From jonkc at bellsouth.net Sat Dec 26 17:10:04 2009 From: jonkc at bellsouth.net (John Clark) Date: Sat, 26 Dec 2009 12:10:04 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <231739.15596.qm@web36502.mail.mud.yahoo.com> References: <231739.15596.qm@web36502.mail.mud.yahoo.com> Message-ID: On Dec 26, 2009, Gordon Swobe wrote: > People have since the beginning of recorded history shown a propensity to anthropomorphize. That is true and the reason that mode of thought has survived for so long is that anthropomorphism is often valid and has enormous survival value. From Evolution's point of view probably the most important task a mind can set itself is figuring out what that animal is going to do next and even more important what is that fellow human being going to do next; both tasks can benefit by thinking what would I do next. I anthropomorphize all the time and make no apology for doing so. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Dec 26 17:18:47 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 27 Dec 2009 04:18:47 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <703564.97308.qm@web36506.mail.mud.yahoo.com> References: <703564.97308.qm@web36506.mail.mud.yahoo.com> Message-ID: 2009/12/27 Gordon Swobe : >> There was no plan behind the brain, but post hoc analysis can reveal >> patterns which have an algorithmic description (provided that the >> physics in the brain is computable). Now, if such patterns in the brain >> do not detract from its understanding, why should similar patterns >> detract from the understanding of a computer? > > If computers had understanding then those patterns we might find and write down would not detract from their understanding any more than do patterns of brain behavior detract from the brain's understanding. But how can computers that run formal programs have understanding? Because the program does not *prevent* the computer from having understanding even if it is conceded (for the sake of argument) that the program cannot *by itself* give rise to understanding. The matter in the computer and the matter in the brain both follow absolutely rigid and mindless rules - at the lowest level of description, exactly the same rigid and mindless rules - which at the highest level of description leads to intelligent behaviour. It so happens that at intermediate levels the patterns in the computer are recognisable as programs, because that was an easy way for the engineer to figure out how to put the matter together to do his bidding. Similarly, at intermediate levels in the brain patterns appear which can be mapped onto a computer program, such as a neural network. But if you can say of the brain that it's something other than the symbol manipulation that leads to understanding, what impediment is there to saying the same of the computer? 
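As a toy illustration of the claim just made -- that a neuron's input/output behaviour can be captured by a small formal rule, and that a differently built unit obeying the same rule is indistinguishable to everything downstream -- here is a minimal Python sketch. The leaky integrate-and-fire model, its parameters, and the little two-unit chain are assumptions introduced purely for this example; nothing in the thread specifies them, and the sketch makes no claim either way about understanding.

class LeakyIntegrateFireNeuron:
    """Toy unit: accumulate weighted input, leak a little each step, fire at a threshold."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold   # potential at which the unit emits a spike
        self.leak = leak             # fraction of potential carried over to the next step
        self.potential = 0.0

    def step(self, weighted_input):
        """Advance one time step; return 1 if the unit spikes, else 0."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return 1
        return 0

def make_artificial_unit(threshold=1.0, leak=0.9):
    """A differently built replacement (a closure rather than a class) obeying the same rule."""
    state = {"v": 0.0}
    def step(weighted_input):
        state["v"] = state["v"] * leak + weighted_input
        if state["v"] >= threshold:
            state["v"] = 0.0
            return 1
        return 0
    return step

def run_chain(front_step, inputs):
    """Feed inputs through a front unit; a fixed downstream unit sees only its spikes."""
    downstream = LeakyIntegrateFireNeuron(threshold=1.5, leak=0.8)
    return [downstream.step(2.0 * front_step(x)) for x in inputs]

if __name__ == "__main__":
    stimulus = [0.3, 0.4, 0.6, 0.1, 0.9, 0.2, 0.8, 0.7]
    biological = LeakyIntegrateFireNeuron()
    artificial = make_artificial_unit()
    # The downstream unit cannot tell the two front units apart: same rule, same spike train.
    assert run_chain(biological.step, stimulus) == run_chain(artificial, stimulus)

The point of the sketch is purely the functional one under discussion: the downstream unit receives the same spike train whichever implementation sits in front of it, which is all the replacement argument below relies on.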
>> In both cases you can claim that the understanding comes from the actual >> physical structure and behaviour, not from the description of that >> physical structure and behaviour. > > I don't claim computers have understanding. They act as if they have it (as in weak AI) but they do not actually have it (as in strong AI). > > Let us say machine X has strong AI, and that we abstract from it a formal program that exactly describes and determines its intelligent behavior. We then run that abstracted formal program on a software/hardware system called computer Y. Computer Y will act exactly like machine X but it will have only weak AI. (If you get that then you've gotten what there is to get! :-) > > Formal programs exist as abstract simulations. They do not equal the things they simulate. They contain the forms of things but not the substance of things. As I have explained, even if it is accepted that formal programs lack understanding it does not mean that a machine running such a program lacks understanding, since it may get the understanding from something else, such as the overall intelligent behaviour of the system or a specific physical process. The latter would allow for the possibility (but not necessity) that computers lack understanding because they lack a special quality that neurons have. However, the partial brain replacement thought experiment shows that that would lead to absurdity, as previously discussed and not rebutted. Therefore, the conclusion is that provided that the behaviour of the brain can be reproduced in a different substrate, whether semiconductors or beer cans and toilet paper, the consciousness/experience/qualia/intentionality/understanding/feelings will also be reproduced. -- Stathis Papaioannou From painlord2k at libero.it Sat Dec 26 21:28:43 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sat, 26 Dec 2009 22:28:43 +0100 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) In-Reply-To: <20091224211438.060YE.40918.root@hrndva-web23-z02> References: <20091224211438.060YE.40918.root@hrndva-web23-z02> Message-ID: <4B36800B.2040002@libero.it> Il 24/12/2009 22.14, jameschoate at austin.rr.com ha scritto: > http://www.uwgb.edu/dutchs/PSEUDOSC/WhyAntiInt.htm1 > > "Unsatisfied curiosity is nagging, and there is a sense of comfort > and relief when it's satisfied. Carl Sagan related how dissatisfied > people were when he answered that he did not know whether there were > extraterrestrial civilizations. People kept pressing him "But what do > you think?" The ability to accept uncertainty requires extraordinary > intellectual discipline. Medieval maps were full of spurious details > simply because their makers couldn't tolerate blank spaces. There is > abundant evidence that most people prefer the appearance of immediate > certainty to the existence of uncertainty, even if uncertainty > carries with it the certainty of getting closer to the truth later. > Many people prefer religions that promise theological certainty, even > if based on demonstrably spurious reasoning, rather than a religion > that reasons soundly but accepts uncertainty or ambiguity. Having > acquired a feeling of certainty, people naturally resist any attempt > to re-open inquiry, because it will require effort and because it > will subject them anew to that nagging feeling of uncertainty." Anti-intellectualism is, here, regarded as something inherently wrong. 
It is described as the will to not think about some unsatisfied or unsatisfiable curiosity. The description appears as "If they don't think like I like, they are stupid and anti-intellectual". From what appears, anti-intellectualism is more pronounced in the social domain. The unwashed masses don't like social, moral, ethical, political and often economic intellectualism, whereas I find very few people who have a problem with hard-science and technological intellectualism. The reason common people distrust intellectualism and intellectuals in social matters is their propensity for believing silly, apparently novel ideas and their eagerness to experiment with them using the common people's hides. This is a new phenomenon: until a few decades ago, intellectuals were held in high regard by all strata of society. Below is a possible reason for this. Clever Sillies - Why the high IQ lack common sense http://medicalhypotheses.blogspot.com/2009/11/clever-sillies-why-high-iq-lack-common.html After many decades of intellectualism, the common people have developed (evolved?) a healthy distrust of the intellectuals, mainly of the social intellectuals. In the future this distrust will probably grow stronger, as we will reap the fruits of many silly ideas implemented as policies. Mirco -------------- next part -------------- No virus in the outgoing message. Checked by AVG - www.avg.com Version: 9.0.722 / Virus database: 270.14.120/2587 - Release date: 12/26/09 09:27:00 From stefano.vaj at gmail.com Sat Dec 26 21:51:20 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 26 Dec 2009 22:51:20 +0100 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) In-Reply-To: <4B36800B.2040002@libero.it> References: <20091224211438.060YE.40918.root@hrndva-web23-z02> <4B36800B.2040002@libero.it> Message-ID: <580930c20912261351i302316eby54fa317d88ffb1c5@mail.gmail.com> 2009/12/26 Mirco Romanato > After many decades of intellectualism, the common people have developed > (evolved?) a healthy distrust of the intellectuals, mainly of the social > intellectuals. In the future this distrust will probably grow stronger, as > we will reap the fruits of many silly ideas implemented as policies. > At the very least, I would not underestimate "anti-intellectualism" as a reaction against some kind of secular clericalism where a caste of self-referential grand priests believe themselves to be in a position to snub what the instincts may dictate to those who are faced with somewhat more concrete problems... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtomek at ceti.pl Sat Dec 26 23:02:37 2009 From: rtomek at ceti.pl (Tomasz Rola) Date: Sun, 27 Dec 2009 00:02:37 +0100 (CET) Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) Message-ID: On Sat, 26 Dec 2009, Mirco Romanato wrote: [...] > After many decades of intellectualism, the common people have developed > (evolved?) a healthy distrust of the intellectuals, mainly of the social > intellectuals. In the future this distrust will probably grow stronger, as we > will reap the fruits of many silly ideas implemented as policies. Uhum... You know, I am not challenging you, but if you (or anybody) could give me a list of intellectualists (names, specialty area, you got the idea), whom I am to blame... And if you don't mind, for what exactly? 
The longer such a list, the more interesting from my point of view. Give me those names, I beg you. Otherwise, it is encouraging my suspicion, that this "blame the stupid intellectuals/scientists/hackers/all-the-wiser-folk" story is just pure manipulation by someone who is trying to cover his/their trails. Anybody? You can't imagine how happy I will be. Really. You can (kind of) make my day brighter. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From thespike at satx.rr.com Sat Dec 26 23:42:37 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 26 Dec 2009 17:42:37 -0600 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) In-Reply-To: References: Message-ID: <4B369F6D.8010701@satx.rr.com> On 12/26/2009 5:02 PM, Tomasz Rola wrote: >> After many decades of intellectualism, the common people have developed >> > (evolved?) a healthy distrust of the intellectuals, mainly of the social >> > intellectuals. In the future this distrust will probably grow stronger, as we >> > will reap the fruits of many silly ideas implemented as policies. > > Uhum... You know, I am not challenging you, but if you (or anybody) could > give me a list of intellectualists (names, specialty area, you got the > idea), whom I am to blame... And if you don't mind, for what exactly? > > The longer such a list, the more interesting from my point of view. Give > me those names, I beg you. In US: the neocons are intellectuals, of a sort. They happily provided the Iraq war. Dr. Leon Kass is clearly an intellectual, and he helped to ban embryonic stem cell research. The doctrines of both these geniuses, on the other hand, seem to have been welcomed by a large proportion of the common people. On a more highbrow level, if Heidegger wasn't an intellectual, nobody is. He would be despised by the common people as someone who wrote incomprehensible horseshit for a living, but for all that he was a great supporter of the Nazi Volk so who knows how their muddled minds would have responded. There seems some confusion in this thread between non- or unintellectual and anti-intellectual. Most humans are not intellectuals, but they don't necessarily condemn those who are, although some of the more thuggish might well shove their "pointy heads" down the toilet bowl pour le sport. Damien Broderick From gts_2000 at yahoo.com Sun Dec 27 00:27:35 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 26 Dec 2009 16:27:35 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <648290.92183.qm@web36501.mail.mud.yahoo.com> --- On Sat, 12/26/09, Stathis Papaioannou wrote: >> If computers had understanding then those patterns we >> might find and write down would not detract from their >> understanding any more than do patterns of brain behavior >> detract from the brain's understanding. But how can >> computers that run formal programs have understanding? > > Because the program does not *prevent* the computer from > having understanding even if it is conceded (for the sake of > argument) that the program cannot *by itself* give rise to understanding. 
You skipped over my points about the formal nature of machine level programming and the machine states represented by the 0's and 1's that have no semantic content, real OR imagined. That's what we're talking about here at the most basic hardware level to which you want now to appeal: "On" vs "Off"; "Open" vs "Closed". They mean nothing even to you and me, except that they differ in form one from the other. If we must say they mean something to the hardware then they each mean exactly the same thing: "this form, not that form". And from these meaningless differences in form computers and their programmers create the *appearance* of understanding. If you want to believe that computers have intrinsic understanding of the symbols their programs input and output, and argue provisionally as you do above that they can have it because their mindless programs don't prevent them from having it, but you can't show me how the hardware allows them to have it even if the programs don't, then I can only shrug my shoulders. After all people can believe anything they wish. :) -gts From gts_2000 at yahoo.com Sun Dec 27 00:40:41 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 26 Dec 2009 16:40:41 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <4B36437B.6050508@satx.rr.com> Message-ID: <18706.48961.qm@web36505.mail.mud.yahoo.com> --- On Sat, 12/26/09, Damien Broderick wrote: > Who seriously attributes mind (or consciousness or > intentionality) to their laptop, or even the world's best > existing supercomputer? Some people do, but thanks for bringing the subject back to strong vs weak. >> If computers have minds then so do kitchen can-openers >> and the word loses all meaning. > > That's as ridiculous, in the context of considering a > hypothetical human-grade AGI machine, as saying "If human > brains have minds then so do bedbugs and individual > neurons." Just by way of clarification: AGI does not require intentionality, at least not as I use the term AGI. Strong AI of the sort that Searle refutes does. -gts From gts_2000 at yahoo.com Sun Dec 27 01:16:40 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sat, 26 Dec 2009 17:16:40 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <500588.16983.qm@web36501.mail.mud.yahoo.com> --- On Fri, 12/25/09, Stathis Papaioannou wrote: >> I do however assume that natural neurons do not run > formal programs like those running now on your computer. (If > they do then I must wonder who wrote them.) > > Natural neurons do not run human programming languages but > they do run algorithms, insofar as their behaviour can be described > algorithmically. We cannot assume that merely because we can describe a given natural process algorithmically that the process must then happen as a result of the supposed algorithm actually running somewhere programmatically! > At the lowest level there is a small set of rules, the laws of physics, > which rigidly determine the future state and output of the neuron from > the present state and input. Looks like you want to liken these supposed lowest level laws of physics to a program. Where does that supposed program run? > That the computer was engineered and the neuron evolved should make > no difference: if running a program destroys consciousness > then it should do so in both cases. 
Well if you read my post from the other day (you never replied to the relevant portion of it) I allowed that if the programs replace only a negligible part of the material brain processes they simulate, they would negate the subject's intentionality/consciousness to a similarly negligible degree. >> You have not shown that the effects that concern us > here do not emanate in some way from the interior behaviors > and structures of neurons. As I recall the electrical > activities of neurons takes place inside them, not outside > them, and it seems very possible to me that this internal > electrical activity has an extremely important role to > play. > > The electrical activity consists in a potential difference > across the neuron's cell membrane due to ion gradients. However, to > be sure you have correctly modelled the behaviour of the neuron... I will in the next day or so if time allows write a separate post for the sole purpose of explaining what I see as the logical fallacy in your behaviorist/functionalist arguments. I wrote one already (the post with "0-0-0-0" diagram) but I see it didn't leave any lasting impression on you even if you never offered any counter-arguments. So I'll try putting another one together. -gts From stathisp at gmail.com Sun Dec 27 03:43:03 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 27 Dec 2009 14:43:03 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <648290.92183.qm@web36501.mail.mud.yahoo.com> References: <648290.92183.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/27 Gordon Swobe : > --- On Sat, 12/26/09, Stathis Papaioannou wrote: > >>> If computers had understanding then those patterns we >>> might find and write down would not detract from their >>> understanding any more than do patterns of brain behavior >>> detract from the brain's understanding. But how can >>> computers that run formal programs have understanding? >> >> Because the program does not *prevent* the computer from >> having understanding even if it is conceded (for the sake of >> argument) that the program cannot *by itself* give rise to understanding. > > You skipped over my points about the formal nature of machine level programming and the machine states represented by the 0's and 1's that have no symanctic content real OR imagined. That's what we're talking about here at the most basic hardware level to which you want now to appeal: "On" vs "Off"; "Open" vs "Closed". They mean nothing even to you and me, except that they differ in form one from the other. If we must say they mean something to the hardware then they each mean exactly the same thing: "this form, not that form". ?And from these meaningless differences in form computers and their programmers create the *appearance* of understanding. I agree with you that the symbols a computer program uses have no absolute meaning, which is why I have been asking you to pretend that you are an alien scientist examining a computer and a brain side by side. What you see is switches going on and off in the computer and neurons going on and off in the brain. You can work out some simple rules determining what the switches or neurons will do depending on what the neighbours they are connected to do, and you can work out patterns of behaviour, predicting that a certain input will consistently give rise to a particular output. 
If you're very clever you may be able to come up with a mathematical model of the neurons or the computer circuitry, allowing you to predict more complex behaviours. You understand that the symbolic representations of the brain and computer you have used in your model are completely arbitrary, and you don't know if the designers of the brain or computer used a similar symbolic representation, a different symbolic representation, or if there were no designers at all and the brain or computer just evolved naturally. So what reason do you have at this point to conclude that the computer, the brain, both or neither has understanding? > If you want to believe that computers have intrinsic understanding of the symbols their programs input and output, and argue provisionally as you do above that they can have it because their mindless programs don't prevent them from having it, but you can't show me how the hardware allows them to have it even if the programs don't, then I can only shrug my shoulders. After all people can believe anything they wish. :) You can't show me how the hardware in your head has understanding either. However, given that it does, and given that its behaviour can be simulated by a computer, then that computer *must* also have understanding. I've explained this several times, and you have not challenged either the premises or the reasoning leading to the conclusion. -- Stathis Papaioannou From stathisp at gmail.com Sun Dec 27 04:05:55 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 27 Dec 2009 15:05:55 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <500588.16983.qm@web36501.mail.mud.yahoo.com> References: <500588.16983.qm@web36501.mail.mud.yahoo.com> Message-ID: 2009/12/27 Gordon Swobe : > Well if you read my post from the other day (you never replied to the relevant portion of it) I allowed that if the programs replace only a negligible part of the material brain processes they simulate, they would negate the subject's intentionality/consciousness to a similarly negligible degree. > >>> You have not shown that the effects that concern us >> here do not emanate in some way from the interior behaviors >> and structures of neurons. As I recall the electrical >> activities of neurons takes place inside them, not outside >> them, and it seems very possible to me that this internal >> electrical activity has an extremely important role to >> play. >> >> The electrical activity consists in a potential difference >> across the neuron's cell membrane due to ion gradients. However, to >> be sure you have correctly modelled the behaviour of the neuron... My reply was that the internal processes of the neuron need to be taken into consideration in order to properly simulate it. It doesn't matter if part of the neuron, the whole neuron, or a large chunk of the brain are artificial: as long as the simulation is adequate, there will be no change in consciousness. > I will in the next day or so if time allows write a separate post for the sole purpose of explaining what I see as the logical fallacy in your behaviorist/functionalist arguments. I wrote one already (the post with "0-0-0-0" diagram) but I see it didn't leave any lasting impression on you even if you never offered any counter-arguments. So I'll try putting another one together. Perhaps you missed this post: > Another way to look at this problem of functionalism (the real issue here, I think)... 
> > Consider this highly simplified diagram of the brain: > > 0-0-0-0-0-0 > > The zeros represent the neurons, the dashes represent the relations between neurons, presumably the activities in the synapses. You contend that provided the dashes exactly match the dashes in a real brain, it will make no difference how we construct the zeros. To test whether you really believed this, I asked if it would matter if we constructed the zeros out of beer cans and toilet paper. Somewhat to my astonishment, you replied that such a brain would still have consciousness by "logical necessity". > > It seems very clear then that in your view the zeros merely play a functional role in supporting the seat of consciousness, which you see in the dashes. > > Your theory may seem plausible, and it does allow for the tantalizing extropian idea of nano-neurons replacing natural neurons. > > But before we become so excited that we forget the difference between a highly speculative hypothesis and something we must consider true by "logical necessity", consider a theory similar to yours but contradicting yours: in that competing theory the neurons act as the seat of consciousness while the dashes merely play the functional role. That functionalist theory of mind seems no less plausible than yours, yet it does not allow for the possibility of artificial neurons. It is not my theory, it is standard functionalism. The thought experiment shows that if you replicate the function of the brain, you must also replicate the consciousness. In your simplified brain above suppose the two leftmost neurons are sensory neurons in the visual cortex and the rest are neurons in the association cortex and motor cortex. The sensory neurons receive input from the retina, process this information and send output to association and motor cortex neurons, including neurons in Wernicke's and Broca's area which end up moving the muscles that produce speech. We then replace the sensory neurons 0 with artificial neurons X, giving: X-X-0-0-0-0 Now, the brain receives visual input from the retina. This is processed by the X neurons, which send output to the 0 neurons. As far as the 0 neurons are concerned, nothing has changed: they receive the same inputs as if the change had not been made, so they behave the same way as they would have originally, and the brain's owner produces speech correctly describing what he sees and declaring that it all looks just the same as before. It's trivially obvious to me that this is what *must* happen. Can you explain how it could possibly be otherwise? -- Stathis Papaioannou From rafal.smigrodzki at gmail.com Sun Dec 27 16:21:51 2009 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sun, 27 Dec 2009 11:21:51 -0500 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) In-Reply-To: References: Message-ID: <7641ddc60912270821h10f801aey2761ab3d8829bbf0@mail.gmail.com> On Sat, Dec 26, 2009 at 6:02 PM, Tomasz Rola wrote: > > On Sat, 26 Dec 2009, Mirco Romanato wrote: > > [...] >> After many decades of intellectualism, the common people have developed >> (evolved?) a healthy distrust of the intellectuals, mainly of the social >> intellectuals. In the future this distrust will probably grow stronger, as we >> will reap the fruits of many silly ideas implemented as policies. > > Uhum... You know, I am not challenging you, but if you (or anybody) could > give me a list of intellectualists (names, specialty area, you got the > idea), whom I am to blame... 
And if you don't mind, for what exactly? > > The longer such a list, the more interesting from my point of view. Give > me those names, I beg you. ### Maynard Keynes, John Kenneth Galbraith, Karl Marx, Friedrich Engels, Noam Chomsky, almost any random sociologist since Emile Durkheim, Upton Sinclair, Paul Krugman, Joseph Lincoln Steffens, Albert Einstein, Jeremy Rifkin - collectively contributing to the enactment of a staggering number of stupid policies, starting with meat packing regulations and genetic engineering limits all the way to affirmative action, social security, and the Fed. Rafal From bbenzai at yahoo.com Sun Dec 27 16:32:59 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 27 Dec 2009 08:32:59 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: Message-ID: <134368.61811.qm@web113609.mail.gq1.yahoo.com> I was led to think that Searle believes that conscious AI is impossible (due to certain people saying things like "Strong AI of the sort that Searle refutes"), but in "Why I Am Not a Property Dualist", he says: "Maybe someday we will be able to create conscious artifacts, in which case subjective states of consciousness will be ?physical? features of those artifacts" and "Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems" Thinking that we will maybe be able to create conscious artifacts someday is what I call affirmation of the idea, not refutation. He even hints at how this might be possible, by comparing it to photosynthesis, which we are now very close to being able to reproduce in non-biological systems. Ben Zaiboc From gts_2000 at yahoo.com Sun Dec 27 17:19:02 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 09:19:02 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: <134368.61811.qm@web113609.mail.gq1.yahoo.com> Message-ID: <485245.94678.qm@web36504.mail.mud.yahoo.com> --- On Sun, 12/27/09, Ben Zaiboc wrote: > I was led to think that Searle > believes that conscious AI is impossible (due to certain > people saying things like "Strong AI of the sort that Searle > refutes"), but in "Why I Am Not a Property Dualist", he > says: > > "Maybe someday we will be able to create conscious > artifacts, in which case subjective states of consciousness > will be ?physical? features of those artifacts" > > and > > "Consciousness is thus an ordinary feature of certain > biological systems, in the same way that photosynthesis, > digestion, and lactation are ordinary features of biological > systems" > > Thinking that we will maybe be able to create conscious > artifacts someday is what I call affirmation of the idea, > not refutation. > > He even hints at how this might be possible, by comparing > it to photosynthesis, which we are now very close to being > able to reproduce in non-biological systems. Searle believes we will never create strong AI on software/hardware systems, not that strong AI is impossible. He believes you have strong AI, Ben, and he considers you a biological machine. We may someday create other strong AI machines a lot like you. But they won't run formal programs on hardware any more than you do (even if you think you do). 
-gts From thespike at satx.rr.com Sun Dec 27 17:19:25 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Dec 2009 11:19:25 -0600 Subject: [ExI] Searle and AI In-Reply-To: <134368.61811.qm@web113609.mail.gq1.yahoo.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> Message-ID: <4B37971D.4070406@satx.rr.com> On 12/27/2009 10:32 AM, Ben Zaiboc wrote: > I was led to think that Searle believes that conscious AI is impossible (due to certain people saying things like "Strong AI of the sort that Searle refutes"), but in "Why I Am Not a Property Dualist", he says: > > "Maybe someday we will be able to create conscious artifacts, in which case subjective states of consciousness will be ?physical? features of those artifacts" > > and > > "Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems" Yes, and this is what his more careless disciples (and foes) seem to overlook. It's why John Clark's repeated wailing about Darwin misses the point. Searle knows perfectly well that consciousness is a feature of evolved systems (and so far only of them); he is arguing that current computational designs lack some critical feature of evolved intentional systems. We don't know that this is wrong. The wonderfully named Dr. Johnjoe McFadden, professor of molecular genetics at the University of Surrey and author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that certain quantum fields and interactions are crucial to the function of mind. If that turns out to be right, it's possible that only entirely novel kinds of AIs will experience initiative and qualia, etc. And if that is the case, the standard reply to the Chinese Room asserting that the room as a whole has consciousness will be falsified, since such an arrangement would lack the requisite entanglements, etc, that have been installed in human embodied brains by... yes, Mr. Darwin's friend, evolution by natural selection of gene variants. Damien Broderick From jonkc at bellsouth.net Sun Dec 27 16:53:42 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 27 Dec 2009 11:53:42 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <648290.92183.qm@web36501.mail.mud.yahoo.com> References: <648290.92183.qm@web36501.mail.mud.yahoo.com> Message-ID: <32BCF6C4-6BEC-4A7B-BB28-8512068E8635@bellsouth.net> On Dec 26, 2009, at 7:27 PM, Gordon Swobe wrote: > > You skipped over my points about the formal nature of machine level programming and the machine states represented by the 0's and 1's that have no symanctic content real OR imagined. So you're surprised that Stathis didn't comment on your observation that if you take something very big, even as big as mind, and keep dividing it, eventually you will get to something that is not big. I suspect he didn't reply to your little homily because he already knew that. > That's what we're talking about here at the most basic hardware level to which you want now to appeal: "On" vs "Off"; "Open" vs "Closed". They mean nothing even to you and me, except that they differ in form one from the other. One thing differing from another is the atom of meaning and from that small bit a universe can be made. You seem to find it so astounding as to be unbelievable that a very small part of a system can have properties that are different from the entire very large system. I don't. 
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Sun Dec 27 17:00:50 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 09:00:50 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <43379.73473.qm@web36504.mail.mud.yahoo.com> --- On Sat, 12/26/09, Stathis Papaioannou wrote: >> I will in the next day or so if time allows write a >> separate post for the sole purpose of explaining what I see >> as the logical fallacy in your behaviorist/functionalist >> arguments. I wrote one already (the post with "0-0-0-0" >> diagram) but I see it didn't leave any lasting impression on >> you even if you never offered any counter-arguments. So I'll >> try putting another one together. > > Perhaps you missed this post: I didn't miss it. You did not actually address my argument that another person who starts from the same sorts of premises as you can create a competing argument that negates yours. It seems that you take it on faith that science will one day find the neurological correlates of consciousness (NCC) in the biological activities taking place between neurons and not in the activities taking place inside them. Your competing functionalist claims the opposite as you: he says that what happens inside the neurons matters to consciousness - that we will find the NCC there - and that what happens outside neurons only serves to drive the internal behaviors of natural neurons. His argument seems to me just as reasonable as yours, but it negates yours. > It is not my theory, it is standard functionalism. The > thought experiment shows that if you replicate the function of the > brain, you must also replicate the consciousness. Even if I believed that replicating the function of the brain would replicate consciousness in the way you hope, you did not actually replicate the function of the brain. You merely replicated the externally observable inputs and outputs of neurons, on the unspoken and as far as I can tell unjustified assumption that the internal functions of neurons do not and cannot matter. It looks to me like you just drew an arbitrary wall at the cellular membrane. Nobody has yet elucidated a coherent theory of consciousness. Nobody knows what happened in George Foreman's brain to cause him lose consciousness when Muhammad Ali punched him, or what happened in his brain a few moments later to cause him to regain it. Because any theory then seems about as good as any another, I offer one that I made up two minutes ago that illustrates the problem that I see with your neuron replacement arguments: I call it the mitochondrial theory of consciousness. On this theory, which I just made up, consciousness appears as an effect of chemical reactions taking place inside the mitochondria located inside neurons. When those reactions get disturbed the subject loses consciousness. When the reactions begin again normally, the subject regains consciousness. Your challenge is to show that replacing natural neurons with your mitochondria-less nano-neurons that only behave externally like real neurons will still result in consciousness, given that science has now (hypothetically) discovered that chemical reactions in mitochondria act as the NCC. 
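To pin down what "behave externally like real neurons" is being taken to mean here -- and only that; the sketch settles nothing about consciousness either way -- consider two toy units in Python whose internals and stored state differ but whose terminal behaviour is identical. All names and numbers are invented for illustration:

class BiologicalNeuron:
    """Stand-in for the real thing; the placeholder marks everything inside."""
    def __init__(self):
        self.v = -65.0                 # membrane potential, mV
    def receive(self, current):
        self._internal_chemistry()     # mitochondria, ion pumps, ... (not modelled)
        self.v += current * 0.5
        if self.v >= -50.0:
            self.v = -70.0
            return 1                   # spike out
        return 0
    def _internal_chemistry(self):
        pass

class NanoNeuron:
    """Different internals and a different state variable, same terminals."""
    def __init__(self):
        self.q = 0.0                   # arbitrary internal units, no chemistry
    def receive(self, current):
        self.q += current * 0.5
        if self.q >= 15.0:
            self.q = -5.0
            return 1
        return 0

inputs = [0, 20, 20, 20, 0, 20]
a, b = BiologicalNeuron(), NanoNeuron()
print([a.receive(i) for i in inputs] == [b.receive(i) for i in inputs])   # -> True

Whether anything besides the spike train is preserved is, of course, exactly what is in dispute.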
I think you will agree that you cannot show it, and I note that my mitochondrial theory of consciousness represents just one of a very large and possibly infinite number of possible theories of consciousness that relate to the interiors of natural neurons, any one of which may represent the truth and all of which would render your nano-neurons ineffective. -gts From eugen at leitl.org Sun Dec 27 17:41:41 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 27 Dec 2009 18:41:41 +0100 Subject: [ExI] Searle and AI In-Reply-To: <134368.61811.qm@web113609.mail.gq1.yahoo.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> Message-ID: <20091227174141.GF17686@leitl.org> On Sun, Dec 27, 2009 at 08:32:59AM -0800, Ben Zaiboc wrote: > I was led to think that Searle believes that conscious AI is impossible > (due to certain people saying things like "Strong AI of the sort that Searle > refutes"), but in "Why I Am Not a Property Dualist", he says: You have to give the man his dues: as a troll he's really good. But even that is mostly our fault, not his accomplishment. > "Maybe someday we will be able to create conscious artifacts, in which case > subjective states of consciousness will be ?physical? features of those artifacts" Physical, as opposed to what? Ether flux? Dark matter bunnies? How does he think computation works in this universe? > and > > "Consciousness is thus an ordinary feature of certain biological systems, > in the same way that photosynthesis, digestion, and lactation are ordinary > features of biological systems" Well, what else can you expect from a mythical substratist. > Thinking that we will maybe be able to create conscious artifacts > someday is what I call affirmation of the idea, not refutation. Thoughts are cheap, deeds are hard. You only get credit for the latter. > He even hints at how this might be possible, by comparing it to Why are you looking so hard for merit that just isn't there? There's plenty more usable material in an old MAD magazine, if you used the same amount of attention you grant Searle for no damn reason. > photosynthesis, which we are now very close to being able > to reproduce in non-biological systems. The only way how achieving practical artificial photosynthesis would help is because it would represent a certain milestone in our capabilities of designing and manufacturing at molecular scale, which would also immediately profit molecular electronics, which will allow extracting huge number of crunch from a given number of atoms and Joules, which would be enough to bootstrap AI. With a sufficiently large hammer, few things are impossible. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From painlord2k at libero.it Sun Dec 27 18:07:33 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Sun, 27 Dec 2009 19:07:33 +0100 Subject: [ExI] Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions) In-Reply-To: References: Message-ID: <4B37A265.7050403@libero.it> Il 27/12/2009 0.02, Tomasz Rola ha scritto: > > On Sat, 26 Dec 2009, Mirco Romanato wrote: > > [...] >> After many decades of intellectualism, the common people have developed >> (evolved?) a healthy distrust of the intellectuals, mainly of the social >> intellectuals. 
In the future this distrust will probably grow stronger, as we >> will reap the fruits of many silly ideas implemented as policies. > Uhum... You know, I am not challenging you, but if you (or anybody) could > give me a list of intellectualists (names, specialty area, you got the > idea), whom I am to blame... And if you don't mind, for what exactly? For example, Greenpeace is against the use of chlorine to disinfect water. Could we consider the policy-maker of Greenpeace some intellectuals? Why I Left Greenpeace http://www.waterandhealth.org/drinkingwater/greenpeace.html The return of Killer Chlorine http://www.theregister.co.uk/2008/07/24/numberwatch_chlorine/ Dewey? Supporters of Whole Word reading versus Phonics http://www.improve-education.org/id58.html The opposition to OGM by so many it is too long to list them (left, center and right) The prohibitionist of drugs? The laws, in California for example, that force ex-sexual parters males named by a woman as father of her child to pay even if they are not the real father. No-fault divorce with alimony [for the woman]? http://laws.justsickshit.com/california-stupid-laws/ > Otherwise, it is encouraging my suspicion, that this "blame the stupid > intellectuals/scientists/hackers/all-the-wiser-folk" story is just pure > manipulation by someone who is trying to cover his/their trails. It is evident you don't read the page I linked. They are not stupid. They are smart. Very intelligent. But they substitute the use of an expert system evolved and selected in mammals to manage emotions and social relations with general intelligence that is not able to manage and elaborate so much informations. If you use a hammer instead of a screwdriver it is not strange that the screws don't work as intended. > Anybody? You can't imagine how happy I will be. Really. You can (kind of) > make my day brighter. Do you ever asked yourself why people in leading positions in academia and politics (and sometimes also in industry and commerce) is so often plagued with silly ideas? Can they really be all so stupid? Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.722 / Database dei virus: 270.14.121/2589 - Data di rilascio: 12/27/09 10:18:00 From pharos at gmail.com Sun Dec 27 18:22:26 2009 From: pharos at gmail.com (BillK) Date: Sun, 27 Dec 2009 18:22:26 +0000 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <43379.73473.qm@web36504.mail.mud.yahoo.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> Message-ID: On 12/27/09, Gordon Swobe wrote: > Your challenge is to show that replacing natural neurons with your > mitochondria-less nano-neurons that only behave externally like real > neurons will still result in consciousness, given that science has now > (hypothetically) discovered that chemical reactions in mitochondria > act as the NCC. > > I think you will agree that you cannot show it, and I note that my > mitochondrial theory of consciousness represents just one of a very > large and possibly infinite number of possible theories of consciousness > that relate to the interiors of natural neurons, any one of which may > represent the truth and all of which would render your nano-neurons > ineffective. > > No. Your point of contention is only of interest to armchair philosophers who have no practical interest in building AI systems. The Wikipedia article on the Chinese Room points out the irrelevancy of your philosophical contortions: Strong AI v. 
AI research Searle's argument does not limit the intelligence with which machines can behave or act; indeed, it fails to address this issue directly, leaving open the possibility that a machine could be built that acts intelligently but does not have a mind or intentionality in the same way that brains do. Since the primary mission of artificial intelligence research is only to create useful systems that act intelligently, Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis?as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence." Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists, who use the term to describe machine intelligence that rivals human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this, as long as it understood that it is merely a simulation and not the real thing. ---------------------------- And that's the important point for the future of humanity. We don't care whether the AGI is 'really' intelligent or just 'simulating' intelligence. It is the practical results that matter. BillK From aware at awareresearch.com Sun Dec 27 18:08:15 2009 From: aware at awareresearch.com (Aware) Date: Sun, 27 Dec 2009 10:08:15 -0800 Subject: [ExI] Searle and AI In-Reply-To: <4B37971D.4070406@satx.rr.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: On Sun, Dec 27, 2009 at 9:19 AM, Damien Broderick wrote: > > Yes, and this is what his more careless disciples (and foes) seem to > overlook. It's why John Clark's repeated wailing about Darwin misses the > point. This is why I said they argue in terms of functionalism which needs no defense. > Searle knows perfectly well that consciousness is a feature of > evolved systems (and so far only of them); he is arguing that current > computational designs lack some critical feature of evolved intentional > systems. While "feature" isn't wrong, it may convey connotations of inherency. A better term might be "attribute" with its emphasis on the role of the observer. > We don't know that this is wrong. Not that it's wrong, but that it's unnecessary. It's like adding another layer of support for the *possibility* of the existence of God, when the relevant observations (e.g. professed belief in God) are already explained in terms coherent with a broad base of understanding (e.g. evolutionary psychology, sociology.) > The wonderfully named Dr. Johnjoe > McFadden, professor of molecular genetics at the University of Surrey and > author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that > certain quantum fields and interactions are crucial to the function of mind. > If that turns out to be right, it's possible that only entirely novel kinds > of AIs will experience initiative and qualia, etc. And if that is the case, > the standard reply to the Chinese Room asserting that the room as a whole > has consciousness will be falsified, since such an arrangement would lack > the requisite entanglements, etc, that have been installed in human embodied > brains by... yes, Mr. Darwin's friend, evolution by natural selection of > gene variants. 
To paraphrase our friend Eliezer (as irksome as that can sometimes be), mysterious questions don't require mysterious answers. Observations of "consciousness", "qualia", and "meaning", are adequately and coherently explained in terms of the relationship of the observer to the observed, even when the subject is identified with the observer, given that any observer, adapted to its environment of interaction, cannot but attribute "meaning" in terms of its nature. In other words, a hypothetical advanced thermostat, with the necessary functionality to observe and report (even to itself) its values ("good" temperature set point, acceptable operating range, ...) within its environment will necessarily report "meaningfully" about its "self." - Jef From gts_2000 at yahoo.com Sun Dec 27 19:32:19 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 11:32:19 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <344453.58620.qm@web36504.mail.mud.yahoo.com> --- On Sun, 12/27/09, BillK wrote: > And that's the important point for the future of humanity. > We don't care whether the AGI is 'really' intelligent or just > 'simulating' intelligence. It is the practical results that matter. As I wrote near the outset of this discussion (to John Clark as I recall) some people care about the difference between strong and weak AI, some people don't. To those like me who care, Searle has something interesting to say. -gts From gts_2000 at yahoo.com Sun Dec 27 19:56:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 11:56:32 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: Message-ID: <530298.70261.qm@web36504.mail.mud.yahoo.com> Damien, > [Searle] is arguing that current computational designs lack some > critical feature of evolved intentional systems. It's not so much about any mysterious "critical features" in nature or evolution so much as it is about the formality of programs running on top of hardware. Even if Zeus handed us a concrete example of an artificially constructed machine with strong AI, we could not abstract from careful study of that machine a formal program to run on a software/hardware system that would enable that s/h system to also have strong AI. We would need instead to recreate that machine. -gts From stefano.vaj at gmail.com Sun Dec 27 21:09:44 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 27 Dec 2009 22:09:44 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <648290.92183.qm@web36501.mail.mud.yahoo.com> References: <648290.92183.qm@web36501.mail.mud.yahoo.com> Message-ID: <580930c20912271309k27c581abk24e0001ef0fdc75d@mail.gmail.com> 2009/12/27 Gordon Swobe > If you want to believe that computers have intrinsic understanding of the > symbols their programs input and output, and argue provisionally as you do > above that they can have it because their mindless programs don't prevent > them from having it, but you can't show me how the hardware allows them to > have it even if the programs don't, then I can only shrug my shoulders. > After all people can believe anything they wish. :) > Yes, but usually they have some kind of reason to believe something which can be communicated. Now, why would you want to believe, or rather, what would you mean by saying, that organic brains have an "intrinsic understanding of the symbols their programs input and output"? :-/ -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefano.vaj at gmail.com Sun Dec 27 21:14:42 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 27 Dec 2009 22:14:42 +0100 Subject: [ExI] Searle and AI In-Reply-To: <485245.94678.qm@web36504.mail.mud.yahoo.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <485245.94678.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c20912271314k49eb4c3ds4f19b9a0fcf8eaf2@mail.gmail.com> 2009/12/27 Gordon Swobe > Searle believes we will never create strong AI on software/hardware > systems, not that strong AI is impossible. > OTOH, every kind of features can be emulated on any system at all which exhibits the so-called "computational universality", from cellular automata to Chinese rooms, from Turing machines to PCs to organic brains. What changes (dramatically) are the performances in the execution of a given program. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Dec 27 21:24:47 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 27 Dec 2009 22:24:47 +0100 Subject: [ExI] Searle and AI In-Reply-To: <530298.70261.qm@web36504.mail.mud.yahoo.com> References: <530298.70261.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c20912271324q10ae9551i9974ef56405e1403@mail.gmail.com> 2009/12/27 Gordon Swobe > Even if Zeus handed us a concrete example of an artificially constructed > machine with strong AI, we could not abstract from careful study of that > machine a formal program to run on a software/hardware system that would > enable that s/h system to also have strong AI. We would need instead to > recreate that machine. > I am perhaps not following this thread closely enough to decide whether I agree with that statement, but I suspect I do. In fact, either one drops an exagerately anthropomorphic view of "intelligence" (with projections such as "conscience", "agency", etc., which are already quite problematic to extend to the other organic brains); or I believe that the only way to deliver what he or she considers as "generally intelligent" would be to create a relatively close emulation of a human being. Accordingly, it might be marginally easier to emulate at increasing levels of fidelity a *given* human being (thus producing what for all practical matters would end up being considered soon or later as un "upload" or a "mental clone" of the original) rather than artificially recreating one from scratch (i.e., probably by patching together arbitrary pieces of different and/or "generic" individuals). -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Dec 27 21:45:34 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 27 Dec 2009 16:45:34 -0500 Subject: [ExI] Searle and AI In-Reply-To: <134368.61811.qm@web113609.mail.gq1.yahoo.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> Message-ID: Searle: > "Maybe someday we will be able to create conscious artifacts, in which case subjective states of consciousness will be ?physical? features of those artifacts" I believe Mr. Searle said that to try to convince us and perhaps even himself that he's not talking about a soul, but I'm not buying it. If Mr. 
Future Searle claims to have invented a conscious machine I could use his same lame arguments against him: If I look at smaller and smaller parts of your wonderful machine eventually I will come to a part that is not so wonderful, and if that part is not wonderful then the entire machine can't be wonderful. Consciousness is wonderful so I'm sorry to tell you Mr. Future Searle that your machine is not conscious, it's just intelligent. If he protests and says He has a theory of consciousness and according to his theory the machine is conscious I could say you have no way to test the theory to see if it is correct. More Searle: > > "Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems" This illustrates the embarrassing ignorance Searle has for an idea that forms the bedrock of all the biological sciences. Photosynthesis, digestion, and lactation are all processes that help an organism's genes get into the next generation. Intelligence does the same thing, BUT CONSCIOUSNESS DOES NOT. Only results matter to Evolution, how they were generated is of no importance. If consciousness were not an inevitable consequence of intelligence it would not exist. If Searle were educated and intellectually honest he would say I don't understand how intelligence produces consciousness but there is no doubt that it does. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Dec 27 21:49:37 2009 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 27 Dec 2009 22:49:37 +0100 Subject: [ExI] Searle and AI In-Reply-To: <580930c20912271314k49eb4c3ds4f19b9a0fcf8eaf2@mail.gmail.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <485245.94678.qm@web36504.mail.mud.yahoo.com> <580930c20912271314k49eb4c3ds4f19b9a0fcf8eaf2@mail.gmail.com> Message-ID: <20091227214937.GT17686@leitl.org> On Sun, Dec 27, 2009 at 10:14:42PM +0100, Stefano Vaj wrote: > What changes (dramatically) are the performances in the execution of a given > program. This is a very large understatement. And also my primary source for contempt for philosophical armchair gedanken experiments. Which are mostly about taking human intuition to places it wasn't meant to go. Proofs by apparent absurdity are frequently quite absurd themselves. Ha ha, so clever. Not. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Sun Dec 27 21:56:26 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 27 Dec 2009 16:56:26 -0500 Subject: [ExI] Searle and AI In-Reply-To: <485245.94678.qm@web36504.mail.mud.yahoo.com> References: <485245.94678.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 27, 2009, Gordon Swobe wrote: > Searle believes we will never create strong AI on software/hardware systems, not that strong AI is impossible. So Searle thinks a conscious machine is possible, just don't use information (software) or matter (hardware) when you make the machine. Obey that very minor restriction and making a conscious AI should be easy. And to think some foolish people thought Searle was talking about a soul, I can't imagine where they got that idea. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gts_2000 at yahoo.com Sun Dec 27 22:24:59 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 14:24:59 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: <580930c20912271324q10ae9551i9974ef56405e1403@mail.gmail.com> Message-ID: <380128.48297.qm@web36504.mail.mud.yahoo.com> --- On Sun, 12/27/09, Stefano Vaj wrote: >> Even if Zeus handed us a concrete example of an >> artificially constructed machine with strong AI, we could >> not abstract from careful study of that machine a formal >> program to run on a software/hardware system that would >> enable that s/h system to also have strong AI. We would need >> instead to recreate that machine. > > > I am perhaps not following this thread closely enough to > decide whether I agree with that statement, but I suspect I > do. If you both understand and agree then great, you can count yourself as the first! :) -gts From jonkc at bellsouth.net Sun Dec 27 22:07:01 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 27 Dec 2009 17:07:01 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <344453.58620.qm@web36504.mail.mud.yahoo.com> References: <344453.58620.qm@web36504.mail.mud.yahoo.com> Message-ID: On Dec 27, 2009, Gordon Swobe wrote: > > As I wrote near the outset of this discussion (to John Clark as I recall) some people care about the difference between strong and weak AI, some people don't. I would care passionately about the difference if there were a difference, if weak AI actually could exist. It can't, or at least Evolution says it can't. > To those like me who care, Searle has something interesting to say. How can a man who would flunk a 8'th grade pop quiz on biology have something interesting to say? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sun Dec 27 23:11:22 2009 From: jonkc at bellsouth.net (John Clark) Date: Sun, 27 Dec 2009 18:11:22 -0500 Subject: [ExI] Searle and AI In-Reply-To: <4B37971D.4070406@satx.rr.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: <735B7862-40B5-4964-AF86-B61CF5D8B9F6@bellsouth.net> On Dec 27, 2009, Damien Broderick wrote: > John Clark's repeated wailing about Darwin misses the point. Searle knows perfectly well that consciousness is a feature of evolved systems That is just untrue. Granted Searle may say "I know perfectly well that consciousness is a feature of evolved systems" but when you read his stuff further it is bloody obvious that Searle does not know perfectly well that consciousness is a feature of evolved systems. To Searle those words are just a sound he likes to make with his mouth, or to put it in a way that he would understand, it's all syntax with no semantics. If Searle understood biology he would know that intelligent behavior is the coin of the realm not consciousness, but if I said that to the man I think he would look at me with a blank stare. > The wonderfully named Dr. Johnjoe McFadden, professor of molecular genetics at the University of Surrey and author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that certain quantum fields and interactions are crucial to the function of mind. And he has just as much evidence to support his theory as I do to support my theory that consciousness is caused by my left foot. > If that turns out to be right, it's possible that only entirely novel kinds of AIs will experience initiative and qualia, etc. 
Even if it was true there is absolutely no way we could ever know it is true. So why waste valuable brain cells considering the matter? > And if that is the case, the standard reply to the Chinese Room asserting that the room as a whole has consciousness will be falsified, since such an arrangement would lack the requisite entanglements, etc, that have been installed in human embodied brains by... yes, Mr. Darwin's friend, evolution by natural selection of gene variants. Even if all my other objections were not valid there is absolutely no doubt that Evolution managed to come up with consciousness, that means making a unconscious intelligence would be HARDER than making a conscious one. So if something is intelligent its a damn good bet its conscious too. And even if you're right and there is a consciousness gene why do we still have it? Why haven't we lost it through genetic drift as if has zero survival value? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Dec 28 00:40:36 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Sun, 27 Dec 2009 16:40:36 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: <735B7862-40B5-4964-AF86-B61CF5D8B9F6@bellsouth.net> Message-ID: <10874.3972.qm@web36505.mail.mud.yahoo.com> --- On Sun, 12/27/09, John Clark wrote: > 2009, ?Damien Broderick wrote: >> John Clark's repeated wailing about Darwin misses the point. >> Searle knows perfectly well that consciousness is a feature >> of evolved systems > That is just untrue. Damien has your number, John. Searle cites evolution in his arguments against competing theories of mind, specifically the theory that you want so desperately and wrongly to attribute to him. I would explain further but I've tried once or twice already and your sarcastic and abrasive posts on this subject give me with very little motivation. -gts From sjatkins at mac.com Mon Dec 28 00:56:38 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 27 Dec 2009 16:56:38 -0800 Subject: [ExI] atheism In-Reply-To: <802158.41458.qm@web59916.mail.ac4.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> Message-ID: There is ample evidence that belief regardless of evidence or argument is harmful. There is also ample evidence that such "faith" regardless of evidence often leads to severe oppression and persecution of dissenters. This makes it quite obvious it is more destructive than any pursuit that does not have these faith based characteristics. - samantha On Dec 21, 2009, at 4:47 PM, Post Futurist wrote: > >I don't consider religion merely futile, I think it's actively harmful > to humankind. I guess that > pretty succinctly highlights where we differ. > --John Clark > > No doubt religion is actively harmful. My point is you offer no evidence religion is more actively destructive than politics, entertainment, etc. > Because you cannot. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjatkins at mac.com Mon Dec 28 01:05:49 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 27 Dec 2009 17:05:49 -0800 Subject: [ExI] Searle and AI In-Reply-To: <4B37971D.4070406@satx.rr.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: On Dec 27, 2009, at 9:19 AM, Damien Broderick wrote: > On 12/27/2009 10:32 AM, Ben Zaiboc wrote: > >> I was led to think that Searle believes that conscious AI is impossible (due to certain people saying things like "Strong AI of the sort that Searle refutes"), but in "Why I Am Not a Property Dualist", he says: >> >> "Maybe someday we will be able to create conscious artifacts, in which case subjective states of consciousness will be ?physical? features of those artifacts" >> >> and >> >> "Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems" > > Yes, and this is what his more careless disciples (and foes) seem to overlook. It's why John Clark's repeated wailing about Darwin misses the point. Searle knows perfectly well that consciousness is a feature of evolved systems (and so far only of them); he is arguing that current computational designs lack some critical feature of evolved intentional systems. Fair enough. The Turing machine notion and the von Neumann architecture are indeed bottlenecks in computing. So are the limits of the human mind which manifest in limitations of the type of computing architectures we can actually think dependably about, design and debug. Our skills as programmers are woefully inefficient to understand and debug the design of the human brain/mind. We understand more and more aspects of course and may emulate them but not in standard computer architectures or programming languages. So I agree that other approaches are essential to AGI and even to the further advancement of software generally. The human programmer must come out of the loop or interact with the process in a much less obtrusive way. > We don't know that this is wrong. The wonderfully named Dr. Johnjoe McFadden, professor of molecular genetics at the University of Surrey and author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that certain quantum fields and interactions are crucial to the function of mind. I think that line is not fruitful. It is tautologically true that quantum interactions are present at some level but it (so far) does not appear true that quantum effects are essential to any major part of human cognition or that there is some kind of likely quantum computer hiding in the micro-tubels or wherever. - samantha From sjatkins at mac.com Mon Dec 28 01:16:07 2009 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 27 Dec 2009 17:16:07 -0800 Subject: [ExI] Searle and AI In-Reply-To: <580930c20912271324q10ae9551i9974ef56405e1403@mail.gmail.com> References: <530298.70261.qm@web36504.mail.mud.yahoo.com> <580930c20912271324q10ae9551i9974ef56405e1403@mail.gmail.com> Message-ID: <468D09E3-8A24-4CAC-A7D4-D258A3DEF755@mac.com> On Dec 27, 2009, at 1:24 PM, Stefano Vaj wrote: > 2009/12/27 Gordon Swobe > Even if Zeus handed us a concrete example of an artificially constructed machine with strong AI, we could not abstract from careful study of that machine a formal program to run on a software/hardware system that would enable that s/h system to also have strong AI. We would need instead to recreate that machine. 
> > I am perhaps not following this thread closely enough to decide whether I agree with that statement, but I suspect I do. > > In fact, either one drops an exagerately anthropomorphic view of "intelligence" (with projections such as "conscience", "agency", etc., which are already quite problematic to extend to the other organic brains); or I believe that the only way to deliver what he or she considers as "generally intelligent" would be to create a relatively close emulation of a human being. I disagree with both of those statements. A system with a self-reflective model of its own actions and state as part of the state considered for decision making will exhibit what we think of as conscience, agency and so on given enough power/complexity and time. It will by design have "strange loops". This is of course my intuition but I will happily bet that I am correct if you can create a valid test of whether this is true or not. :) > > Accordingly, it might be marginally easier to emulate at increasing levels of fidelity a *given* human being (thus producing what for all practical matters would end up being considered soon or later as un "upload" or a "mental clone" of the original) rather than artificially recreating one from scratch (i.e., probably by patching together arbitrary pieces of different and/or "generic" individuals). > A human being is a biological machine exhibiting all those aspects which Searle and his ilk claim seem to claim no machine can exhibit. Or at any rate that standard programming cannot exhibit. I may agree with the latter but not with the stronger statement. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Dec 28 01:31:51 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Dec 2009 19:31:51 -0600 Subject: [ExI] Searle and AI In-Reply-To: References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: <4B380A87.4020203@satx.rr.com> On 12/27/2009 12:08 PM, Jef wrote: > To paraphrase our friend Eliezer (as irksome as that can sometimes > be), mysterious questions don't require mysterious answers. Sometimes, though, they require previously unsuspected answers. When your estimate of the sun's age can't be reconciled with all the energy sources known to science, including gravity, sometimes you have to go to that mysterious newfangled quantum theory (so new it hasn't been invented yet) and invoke radioactivity. It's true that this looks mysterious to anyone who's unfamiliar with radioactivity and the quantum formalism that explains it. When the universe starts looking much queerer than anyone supposed, sometimes you have to just make up dark matter and dark energy (consistent with what's already known, naturally) and hope for the best. Damien Broderick From stathisp at gmail.com Mon Dec 28 02:02:43 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 28 Dec 2009 13:02:43 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <43379.73473.qm@web36504.mail.mud.yahoo.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/28 Gordon Swobe : > Your challenge is to show that replacing natural neurons with your mitochondria-less nano-neurons that only behave externally like real neurons will still result in consciousness, given that science has now (hypothetically) discovered that chemical reactions in mitochondria act as the NCC. 
> > I think you will agree that you cannot show it, and I note that my mitochondrial theory of consciousness represents just one of a very large and possibly infinite number of possible theories of consciousness that relate to the interiors of natural neurons, any one of which may represent the truth and all of which would render your nano-neurons ineffective. Let's assume the seat of consciousness is in the mitochondria. You need to simulate the activity in mitochondria because otherwise the artificial neurons won't behave normally: there might be some chemical reaction in the mitochondria which would have made the biological neuron fire earlier than the artificial neuron, giving the game away. You install these artificial neurons in the subject's head replacing a part of the brain that has some easily identifiable role in consciousness and understanding, such as Wernicke's area. What will happen? If the replacement neurons behave normally in their interactions with the remaining brain, then the subject *must* behave normally. But what will he experience? It has to be one of the following: (a) His experience will be different, but he won't realise it. He will think he understands what people say when they speak to him, be amused when he hears something funny, write poetry and engage in philosophical debate, but in fact he will understand nothing. (b) His experience will be different and he will realise it, but he will be unable to change his behaviour. That is, he will realise that he can't understand anything and may make an attempt to run screaming out of the room but his body will not obey: it sits calmly chatting with the experimenter. (c) His experience will be normal. Reproducing the function of neurons also reproduces consciousness. If (a) is the case it would imply a very weird notion of consciousness and understanding. If you think you understand something and you behave as if you understand it, then you do understand it; if not, then what is the difference between real understanding and pseudo-understanding, and how can you be sure you have real understanding now? If (b) is the case that would mean the subject is doing his thinking with something other than his brain, since the part of the brain that has not been replaced is constrained to behave normally. So (a) is incoherent and (b) implies the existence of an immaterial soul that does your thinking in concert with the brain until you mess with it by putting in artificial neurons. That leaves (c) as the only plausible alternative. -- Stathis Papaioannou From stathisp at gmail.com Mon Dec 28 02:46:24 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 28 Dec 2009 13:46:24 +1100 Subject: [ExI] Searle and AI In-Reply-To: <4B37971D.4070406@satx.rr.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: 2009/12/28 Damien Broderick : > On 12/27/2009 10:32 AM, Ben Zaiboc wrote: > >> I was led to think that Searle believes that conscious AI is impossible >> (due to certain people saying things like "Strong AI of the sort that Searle >> refutes"), but in "Why I Am Not a Property Dualist", he says: >> >> "Maybe someday we will be able to create conscious artifacts, in which >> case subjective states of consciousness will be ?physical? 
features of those >> artifacts" >> >> and >> >> "Consciousness is thus an ordinary feature of certain biological systems, >> in the same way that photosynthesis, digestion, and lactation are ordinary >> features of biological systems" > > Yes, and this is what his more careless disciples (and foes) seem to > overlook. It's why John Clark's repeated wailing about Darwin misses the > point. Searle knows perfectly well that consciousness is a feature of > evolved systems (and so far only of them); he is arguing that current > computational designs lack some critical feature of evolved intentional > systems. We don't know that this is wrong. The wonderfully named Dr. Johnjoe > McFadden, professor of molecular genetics at the University of Surrey and > author of Quantum Evolution, argues ( http://www.surrey.ac.uk/qe/ ) that > certain quantum fields and interactions are crucial to the function of mind. > If that turns out to be right, it's possible that only entirely novel kinds > of AIs will experience initiative and qualia, etc. And if that is the case, > the standard reply to the Chinese Room asserting that the room as a whole > has consciousness will be falsified, since such an arrangement would lack > the requisite entanglements, etc, that have been installed in human embodied > brains by... yes, Mr. Darwin's friend, evolution by natural selection of > gene variants. Searle's error is to agree that the function of the brain is Turing emulable but claims that consciousness is not; in other words, that computers are capable of weak AI but not strong AI. The argument due to David Chalmers that I have been putting to Gordon (http://consc.net/papers/qualia.html) shows that this position is absurd. IF the brain is computable THEN so is consciousness; IF weak AI on a computer is possible THEN strong AI is also possible. On the other hand, if there is some crucial aspect of brain physics is not computable, then neither weak nor strong AI will be possible. Quantum mechanics is computable and a quantum computer can't do anything a classical computer can't do (they just do it faster), but it is possible that neurons depend on effects described by some as yet undiscovered physical theory which is not computable, such as Roger Penrose has proposed. Penrose does not believe either weak AI or strong AI is possible on a computer, and is therefore consistent, though probably wrong (hardly any other scientists agree with him). Searle, on the other hand, is inconsistent. -- Stathis Papaioannou From stathisp at gmail.com Mon Dec 28 03:09:14 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 28 Dec 2009 14:09:14 +1100 Subject: [ExI] Searle and AI In-Reply-To: <530298.70261.qm@web36504.mail.mud.yahoo.com> References: <530298.70261.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/28 Gordon Swobe : > Damien, > >> [Searle] is arguing that current computational designs lack some >> critical feature of evolved intentional systems. > > It's not so much about any mysterious "critical features" in nature or evolution so much as it is about the formality of programs running on top of hardware. > > Even if Zeus handed us a concrete example of an artificially constructed machine with strong AI, we could not abstract from careful study of that machine a formal program to run on a software/hardware system that would enable that s/h system to also have strong AI. We would need instead to recreate that machine. 
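Granting for the moment that we would "recreate that machine", here is a toy example (invented for this message, nothing more) of what it means for two unrelated mechanisms to realize the same input/output behaviour -- any test that only looks at the terminals calls them the same machine:

def xor_table(a, b):
    # mechanism 1: a stored truth table
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_arith(a, b):
    # mechanism 2: arithmetic, no table anywhere
    return (a + b) % 2

assert all(xor_table(a, b) == xor_arith(a, b) for a in (0, 1) for b in (0, 1))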
What if in recreating the machine we made some changes which have no obvious effect on function, such as replacing copper wiring with silver, or replacing BJTs with the equivalent MOSFET circuit? -- Stathis Papaioannou From jonkc at bellsouth.net Mon Dec 28 06:20:05 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 01:20:05 -0500 Subject: [ExI] Searle and AI In-Reply-To: References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> Message-ID: On Dec 27, 2009, Stathis Papaioannou wrote: > Penrose does not believe either weak AI or strong AI is possible on a computer, and is therefore consistent, though probably wrong (hardly any other scientists agree with him). Searle, on the other hand, is inconsistent. That is a very good point. I think Penrose is probably wrong and Searle is certainly stupid. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 28 06:01:05 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 01:01:05 -0500 Subject: [ExI] Searle and AI In-Reply-To: <10874.3972.qm@web36505.mail.mud.yahoo.com> References: <10874.3972.qm@web36505.mail.mud.yahoo.com> Message-ID: <2370F588-A316-48A2-A8C2-E8E5DF44B532@bellsouth.net> On Dec 27, 2009, Gordon Swobe wrote: >> > Damien has your number, John. Damien is a very smart man, but he's not smarter than me. > Searle cites evolution in his arguments against competing theories of mind, specifically the theory that you want so desperately and wrongly to attribute to him. BULLSHIT! > > I would explain further but I've tried once or twice already BULLSHIT! > and your sarcastic and abrasive posts on this subject give me with very little motivation. So lets see, you could easily show me the error of my ways, in fact you could make a fool out of me, you could turn me into a complete laughingstock and the only reason you don't do that is because you dislike me so intensely. Well, that makes about as much sense as anything else you wrote. The true answer is you have no answer to any of my criticisms, you just shut your eyes to the fact that Emperor Searle is wearing no clothes. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 28 07:07:18 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 02:07:18 -0500 Subject: [ExI] Searle and AI In-Reply-To: <530298.70261.qm@web36504.mail.mud.yahoo.com> References: <530298.70261.qm@web36504.mail.mud.yahoo.com> Message-ID: <5DE2ED78-D711-42BB-A956-9989BC835224@bellsouth.net> On Dec 27, 2009, Gordon Swobe wrote: > Even if Zeus handed us a concrete example of an artificially constructed machine with strong AI, we could not abstract from careful study of that machine a formal program to run on a software/hardware system that would enable that s/h system to also have strong AI. We would need instead to recreate that machine. Well I agree that if you duplicated a conscious machine the copy would be conscious. Of course making a perfect copy of such a complex thing wouldn't be easy; to do so you would need a very long list of instructions specifying which of the 80 elements with a stable isotope you're dealing with, and information on where to obtain such such an atom, and the coordinates of where to move that atom to. There is a name for a list of instructions of that sort, it's called a, it's called a,..., oh damn, it's right on the tip of my tongue, give me a second, ... 
ah yes now I have it, it's called a PROGRAM. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 28 07:52:37 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 02:52:37 -0500 Subject: [ExI] A paranormal prediction for the next year In-Reply-To: <4B380A87.4020203@satx.rr.com> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <4B37971D.4070406@satx.rr.com> <4B380A87.4020203@satx.rr.com> Message-ID: <21E37E45-F81B-4434-A6D0-8483644A2A53@bellsouth.net> One year ago I sent the following post to the list, I did not change one word. One year from now I intend to send this same message yet again. ================ One year ago I sent the following post to the list, I did not change one word. One year from now I intend to send this same message yet again. ================= One year ago I sent the following post to the list, I did not change one word. One year from now I intend to send this same message yet again. ================= One year ago I sent the following post to the list, I did not change one word. One year from now I intend to send this same message yet again. ================= One year ago I sent the following post to the list, I did not change one word. One year from now I intend to send this same message yet again. ================= Happy New Year all. I predict that a paper reporting positive psi results will NOT appear in Nature or Science in the next year. This may seem an outrageous prediction, after all psi is hardly a rare phenomena, millions of people with no training have managed to observe it, or claim they have. And I am sure the good people at Nature and Science would want to say something about this very important and obvious part of our natural world if they could, but I predict they will be unable to find anything interesting to say about it. You might think my prediction is crazy, like saying a waitress with an eight's grade education in Duluth Minnesota can regularly observe the Higgs boson with no difficulty but the highly trained Physicists at CERN in Switzerland cannot. Nevertheless I am confident my prediction is true because my ghostly spirit guide Mohammad Duntoldme spoke to me about it in a dream. PS: I am also confident I can make this very same prediction one year from today. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Dec 28 11:18:34 2009 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Dec 2009 12:18:34 +0100 Subject: [ExI] Carbon In-Reply-To: References: <20091224155142.GE17686@leitl.org> Message-ID: <20091228111834.GY17686@leitl.org> On Thu, Dec 24, 2009 at 10:40:15AM -0800, spike wrote: > Well, how ungreen of those other locations. By building American homes of > carbon, we prevent that carbon from going into the atmosphere in the form of > CO2. Not sustainably. A much better approach to immobilize carbon in the soil long-term is biochar, which is easy enough to do even on small scale with improvised retorts and dry destillation. It also improves the soil, and you can combine it with outdoor cooking (barbecue) without creating too much pollution. http://www.holon.se/folke/carbon/simplechar/simplechar.shtml Notice this also helps against soil demineralization and denitrification. 
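A rough cross-check of the back-of-the-envelope numbers traded below, as a minimal sketch; every input is an illustrative assumption (about 1 kg of CO2 exhaled per person per day, softwood at roughly 500 kg/m3 and half carbon by dry mass, and the 20 t/year per-capita figure cited below):

    # Back-of-the-envelope carbon bookkeeping; all inputs are rough assumptions.
    CO2_EXHALED_KG_PER_DAY = 1.0       # per person
    WOOD_DENSITY_KG_PER_M3 = 500.0     # softwood, air dry
    WOOD_CARBON_FRACTION = 0.5         # dry wood is roughly half carbon by mass
    CO2_PER_KG_CARBON = 44.0 / 12.0    # molar mass ratio of CO2 to C
    US_FOOTPRINT_T_PER_YEAR = 20.0     # per-capita figure quoted below

    wood_kg = 10.0 * WOOD_DENSITY_KG_PER_M3        # ~10 m3 of framing lumber
    co2_locked_kg = wood_kg * WOOD_CARBON_FRACTION * CO2_PER_KG_CARBON

    print(round(co2_locked_kg / 1000.0, 1), "t of CO2 locked in the frame")
    print(round(co2_locked_kg / (CO2_EXHALED_KG_PER_DAY * 365), 1), "years of exhaled CO2")
    print(round(co2_locked_kg / 1000.0 / US_FOOTPRINT_T_PER_YEAR, 2), "years of total footprint")

On these assumptions the frame holds roughly nine tonnes of CO2, which is decades of breathing but well under half a year of the total per-capita footprint.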
> For a single digit BOTEC, I would estimate the amount of CO2 we exhale and > defecate every day on the order of about a kilo, then estimate the size of > the woodpile needed to build an American prole's home, oh about 10 or more > cubic meters, close enough to 10,000 kg of wood, so the typical American > prole ties up 30 years worth of her exhalations and defecations merely by > virtue of living in an American house. Given that the average USian carbon footprint is some 20 t/year (against global average of 4 t/year) it's a drop in the bucket. > By building American homes of carbon, we also create and nurture a market > for wood products, encouraging the diversion of water out of rivers to Using irrigation to grow biomass in dry climates does not strike me as particularly sustainable, and fraught with lots of nasty side effects (see Oz and Taxifornia). > otherwise dry and fallow ground, to grow commercial forests, further > greenifying the planet. > > > > better-organized carbon, it would take far less than we > > currently use. Eugen* Leitl ... > > But we want to *use* more carbon, if we use it in that form. Once we start fixating atmospheric carbon dioxide on a large scale using renewable energy sources for more than synfuels which are immediately returning it we *will* start running into atmospheric depletion, which could result in a runaway glaciation event and/or have negative impact on plant bioproductivity and hence the ecosystem. You can bake out the carbonates, though, and and bury them. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Mon Dec 28 12:36:00 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Dec 2009 13:36:00 +0100 Subject: [ExI] Searle and AI In-Reply-To: <20091227214937.GT17686@leitl.org> References: <134368.61811.qm@web113609.mail.gq1.yahoo.com> <485245.94678.qm@web36504.mail.mud.yahoo.com> <580930c20912271314k49eb4c3ds4f19b9a0fcf8eaf2@mail.gmail.com> <20091227214937.GT17686@leitl.org> Message-ID: <580930c20912280436y4c016c44x6999c3594f46e331@mail.gmail.com> 2009/12/27 Eugen Leitl > On Sun, Dec 27, 2009 at 10:14:42PM +0100, Stefano Vaj wrote: > > > What changes (dramatically) are the performances in the execution of a > given > > program. > > This is a very large understatement. And also my primary source for > contempt for philosophical armchair gedanken experiments. Which > are mostly about taking human intuition to places it wasn't meant to go. > Yet, the rather counterintuitive implications of the recognition that there is in principle nothing else beyond the leap to "universal computation", and that the latter is pretty common owing to the very low complexity threshold required to achieve it, are huge. In particular, I find A New Kind of Science pretty persuasive in this respect, and I daresay it influenced a lot my view of AGI. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Dec 28 12:47:32 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 04:47:32 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <845939.46868.qm@web36506.mail.mud.yahoo.com> --- On Sun, 12/27/09, Stathis Papaioannou wrote: > Let's assume the seat of consciousness is in the > mitochondria. 
You need to simulate the activity in mitochondria > because otherwise the artificial neurons won't behave normally: Your second sentence creates a logical contradiction. If real biological processes in the mitochondria act as the seat of consciousness then because conscious experience plays a role in behavior including the behavior of neurons, we cannot on Searle's view simulate those real processes with abstract formal programs (compromising the subject's consciousness) and then also expect those neurons (and therefore the organism) to behave "normally". > If the replacement neurons behave normally in their > interactions with the remaining brain, then the subject *must* > behave normally. But your replacement neurons *won't* behave normally, and so your possible conclusions don't follow. You've short-circuited the feedback loop between experience and behavior. Your thought experiment might make more sense if we were testing the theories of an epiphenomenalist, who believes conscious experience plays no role in behavior, but Searle adamantly rejects epiphenomenalism for the same reasons most people do. Getting back to my original point, science has almost no idea at present how to define the so-called "seat of consciousness" (what I prefer to call the neurological correlates of consciousness or NCC). In real terms, we simply don't know what happened in George Foreman's brain that caused him to lose consciousness when Ali delivered the KO punch. For that reason artificial neurons such as those you have in mind remain extremely speculative for use in thought experiments or otherwise. It seems to me that we cannot prove anything whatsoever with them. -gts From stefano.vaj at gmail.com Mon Dec 28 13:03:56 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Dec 2009 14:03:56 +0100 Subject: [ExI] atheism In-Reply-To: References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> Message-ID: <580930c20912280503t63ff4969j20435f64e8eb7254@mail.gmail.com> 2009/12/28 Samantha Atkins > There is ample evidence that belief regardless of evidence or argument is > harmful. > Mmhhh. I would qualify that as an opinion of a moral duty to swear on the truth of unproved or disproved facts. This has something to do with the theist objection that positively believing that Allah does not "exist" would be a "faith" on an equally basis as their own. Now, I may well be persuaded that my cat is sleeping in the other room even though no final evidence of the truth of my opinion thereupon is (still) there, and to form thousand of such provisional or ungrounded - and often wrong - beliefs is probably inevitable. But would I claim that such circumstances are a philosophical necessity or of ethical relevance? Obviously not... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Dec 28 13:19:16 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Dec 2009 14:19:16 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <43379.73473.qm@web36504.mail.mud.yahoo.com> Message-ID: <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> 2009/12/28 Stathis Papaioannou > So (a) is incoherent and (b) implies the existence of an immaterial > soul that does your thinking in concert with the brain until you mess > with it by putting in artificial neurons. That leaves (c) as the only > plausible alternative. > It sounds plausible enough to me. 
But, once more, isn't the whole issue pretty close to ko'an questions such as "what kind of noise makes a falling tree when nobody hears its falling?". What obfuscates the AGI debate is IMHO an abuse of poorly defined terms such as "conscience", etc., in an implicitely metaphysical and essentialist, rather than phenomenical, sense. This does not bring one very far either in the discussion of "artificial" brains nor in the understanding of organic ones. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Dec 28 13:31:17 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 00:31:17 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <845939.46868.qm@web36506.mail.mud.yahoo.com> References: <845939.46868.qm@web36506.mail.mud.yahoo.com> Message-ID: 2009/12/28 Gordon Swobe : > --- On Sun, 12/27/09, Stathis Papaioannou wrote: > >> Let's assume the seat of consciousness is in the >> mitochondria. You need to simulate the activity in mitochondria >> because otherwise the artificial neurons won't behave normally: > > Your second sentence creates a logical contradiction. If real biological processes in the mitochondria act as the seat of consciousness then because conscious experience plays a role in behavior including the behavior of neurons, we cannot on Searle's view simulate those real processes with abstract formal programs (compromising the subject's consciousness) and then also expect those neurons (and therefore the organism) to behave "normally". > >> If the replacement neurons behave normally in their >> interactions with the remaining brain, then the subject *must* >> behave normally. > > But your replacement neurons *won't* behave normally, and so your possible conclusions don't follow. You've short-circuited the feedback loop between experience and behavior. > > Your thought experiment might make more sense if we were testing the theories of an epiphenomenalist, who believes conscious experience plays no role in behavior, but Searle adamantly rejects epiphenomenalism for the same reasons most people do. > > Getting back to my original point, science has almost no idea at present how to define the so-called "seat of consciousness" (what I prefer to call the neurological correlates of consciousness or NCC). In real terms, we simply don't know what happened in George Foreman's brain that caused him to lose consciousness when Ali delivered the KO punch. For that reason artificial neurons such as those you have in mind remain extremely speculative for use in thought experiments or otherwise. It seems to me that we cannot prove anything whatsoever with them. Well, I think you've finally understood the problem. If indeed there is something in the physics of neurons that is not computable, then we won't be able to make artificial neurons based on computation that behave like biological neurons. That would mean neither weak AI nor strong AI is possible. But Searle claims that weak AI *is* possible. He even alludes to Church's thesis to support this: quote-- The answer to 3. seems to me equally obviously "Yes", at least on a natural interpretation. That is, naturally interpreted, the question means: Is there some description of the brain such that under that description you could do a computational simulation of the operations of the brain. 
But since according to Church's thesis, anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer. The operations of the brain can be simulated on a digital computer in the same sense in which weather systems, the behavior of the New York stock market or the pattern of airline flights over Latin America can. So our question is not, "Is the mind a program?" The answer to that is, "No". Nor is it, "Can the brain be simulated?" The answer to that is, "Yes". The question is, "Is the brain a digital computer?" And for purposes of this discussion I am taking that question as equivalent to: "Are brain processes computational?" --endquote (from http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html) However, Searle thinks that although the behaviour of the brain can be replicated by a computer, the conscious cannot. But that position leads to absurd conclusions, as you perhaps are now realising. It still remains a possibility that the brain does in fact utilise uncomputable physics. This is the position of Roger Penrose, who believes neither strong AI nor weak AI is possible, and speculates that an as yet undiscovered theory of quantum gravity plays an important role in subcellular processes and will turn out to be uncomputable. The problem with this idea is that there is no evidence for it, and most scientists dismiss it out of hand; but at least it has the merit of consistency. A final point is that even if it turns out the brain is uncomputable, that would be a fatal blow for computationalism but not for functionalism. If we were able to incorporate a hypercomputer (perhaps based on the exotic physics) able to do the relevant calculations into the artificial neurons so that they behave like biological neurons, then consciousness would follow. -- Stathis Papaioannou From stathisp at gmail.com Mon Dec 28 13:40:33 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 00:40:33 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> Message-ID: 2009/12/29 Stefano Vaj : > 2009/12/28 Stathis Papaioannou >> >> So (a) is incoherent and (b) implies the existence of an immaterial >> soul that does your thinking in concert with the brain until you mess >> with it by putting in artificial neurons. That leaves (c) as the only >> plausible alternative. > > It sounds plausible enough to me. > > But, once more, isn't the whole issue pretty close to ko'an questions such > as "what kind of noise makes a falling tree when nobody hears its falling?". > > What obfuscates the AGI debate is IMHO an abuse of? poorly defined terms > such as "conscience", etc., in an implicitely metaphysical and essentialist, > rather than phenomenical, sense. This does not bring one very far either in > the discussion of "artificial" brains nor in the understanding of organic > ones. We don't need to define or explain consciousness in order to use the term. I could say, I don't know what consciousness is, but I know it when it's happening to me. So my question is, Will I still have consciousness in this sense if my brain is replaced with an electronic one that results in the same behaviour? And the answer is, Yes. That's what the thought experiment I've described demonstrates. 
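To make the "weak AI" sense of simulation in the passage quoted above concrete: simulating a neuron's behaviour means numerically integrating some model of its dynamics, in the same spirit as simulating the weather. A minimal sketch with a leaky integrate-and-fire model, where the model and every parameter are deliberately crude illustrative stand-ins rather than anyone's actual proposal:

    # Toy leaky integrate-and-fire neuron: "simulating the operations" of one cell.
    def simulate_lif(input_current_nA, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
        """Return spike times in ms for a list of input current samples."""
        v, spikes = v_rest, []
        for step, i_in in enumerate(input_current_nA):
            v += (-(v - v_rest) + r_m * i_in) * dt / tau   # leak plus injected drive
            if v >= v_thresh:                              # threshold crossed: spike, reset
                spikes.append(round(step * dt, 1))
                v = v_reset
        return spikes

    # A constant 2 nA drive for 100 ms yields a regular spike train.
    print(simulate_lif([2.0] * 1000))

Nothing in this toy settles the dispute about consciousness; it only illustrates what "the operations of the brain can be simulated on a digital computer" amounts to in practice.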
-- Stathis Papaioannou From p0stfuturist at yahoo.com Fri Dec 25 04:19:58 2009 From: p0stfuturist at yahoo.com (Post Futurist) Date: Thu, 24 Dec 2009 20:19:58 -0800 (PST) Subject: [ExI] atheism Message-ID: <462432.73463.qm@web59916.mail.ac4.yahoo.com> Derborn, Michigan? Londonistan? Sweden? Italy? Finland? >Mirco Romanato ? Genital mutilation is not common in the nations you mention above, it is anomalous. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Dec 28 16:28:25 2009 From: spike66 at att.net (spike) Date: Mon, 28 Dec 2009 08:28:25 -0800 Subject: [ExI] Carbon In-Reply-To: <20091228111834.GY17686@leitl.org> References: <20091224155142.GE17686@leitl.org> <20091228111834.GY17686@leitl.org> Message-ID: > ...On Behalf Of Eugen Leitl >... > > Once we start fixating atmospheric carbon dioxide on a large > scale using renewable energy sources for more than synfuels > which are immediately returning it we *will* start running > into atmospheric depletion, which could result in a runaway > glaciation event and/or have negative impact on plant > bioproductivity and hence the ecosystem. > You can bake out the carbonates, though, and and bury them. > -- > Eugen* Leitl leitl This is a major difficulty with even claiming that humans can impact climate. We are hearing of blizzards slaying proles this week in the US. If our technology can cool the climate, then we humans become liable for those deaths. I predict no changes in climate legislation in the immediately foreseeable. spike From jonkc at bellsouth.net Mon Dec 28 17:00:06 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 12:00:06 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <43379.73473.qm@web36504.mail.mud.yahoo.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> Message-ID: <9A997E17-583E-4512-B2B9-D4F77DA4081A@bellsouth.net> On Dec 27, 2009, Gordon Swobe wrote: > It seems that you take it on faith that science will one day find the neurological correlates of consciousness (NCC) in the biological activities taking place between neurons and not in the activities taking place inside them. [...]It looks to me like you just drew an arbitrary wall at the cellular membrane. The reason activity between neurons is important is that 100 billion neurons can do things that one neuron can't, but if its something inside the neuron that causes consciousness with no signal about this sent to other neurons then its all on it's own; then one neuron is all you need for consciousness. If you want to run with that idea you're a braver man that me because even giving it a grand name like "neurological correlates of consciousness" doesn't make the idea any less dumb or evolutionary tenable. In fact the name is contradictory, you say consciousness is internal to the cell with no signals about it sent to other neurons, but "neurological correlates" means a connection between neurons. > Nobody has yet elucidated a coherent theory of consciousness. Nobody has yet elucidated a coherent theory of intelligence, consciousness theories are a dime a dozen. > any theory then seems about as good as any another, I offer one that I made up two minutes ago that illustrates the problem that I see with your neuron replacement arguments: > I call it the mitochondrial theory of consciousness. On this theory, which I just made up, consciousness appears as an effect of chemical reactions taking place inside the mitochondria located inside neurons. 
Like I say, a dime a dozen. > Your challenge is to show that replacing natural neurons with your mitochondria-less nano-neurons that only behave externally like real neurons will still result in consciousness I can't disprove your consciousness theory any more than you can disprove my theory that consciousness is caused by my left foot. > I note that my mitochondrial theory of consciousness represents just one of a very large and possibly infinite number of possible theories of consciousness That is true and no observation in the real world depends on the truth or falsehood of a single one of those infinite number of theories. That's why consciousness theories are crap. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Dec 28 17:10:39 2009 From: jonkc at bellsouth.net (John Clark) Date: Mon, 28 Dec 2009 12:10:39 -0500 Subject: [ExI] ho^3 = evil? In-Reply-To: <95DC576F31F54BBFBA70A65C770AFDDE@spike> References: <4B340109.60709@rawbw.com><28386343-5065-4BCE-B85E-B31CED24F8E8@bellsouth.net> <4B351CD6.6070506@rawbw.com> <95DC576F31F54BBFBA70A65C770AFDDE@spike> Message-ID: On Dec 25, 2009, spike wrote: > While participating in the usual season's practice of exchanging gifts, I > noticed one of the wrappings had the comment "HoHoHo" which caused me to > ponder the phenomenon universal in and unique to humans, that of laughter. I read somewhere that laughter might have evolved as a sort of all clear signal. Somebody issues a danger signal to the group only to discover that its a false alarm and there is nothing dangerous after all. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Mon Dec 28 17:28:31 2009 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Dec 2009 18:28:31 +0100 Subject: [ExI] Carbon In-Reply-To: References: <20091228111834.GY17686@leitl.org> Message-ID: <20091228172831.GI17686@leitl.org> On Mon, Dec 28, 2009 at 08:28:25AM -0800, spike wrote: > I predict no changes in climate legislation in the immediately foreseeable. Legislation is completely teethless, if unenforced, and we don't have even global consensus in legislation yet. This one looks like we have to ride it out -- zero surprise there. Obviously our best chances lie with technology that is cost-effective even without subsidies. People will only move when their wallet is hurting. Hopefully, energy won't remain too cheap for too long. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Mon Dec 28 17:49:23 2009 From: spike66 at att.net (spike) Date: Mon, 28 Dec 2009 09:49:23 -0800 Subject: [ExI] Carbon In-Reply-To: <20091228172831.GI17686@leitl.org> References: <20091228111834.GY17686@leitl.org> <20091228172831.GI17686@leitl.org> Message-ID: <74777D607FD543FBBF5A0BF89E1A751C@spike> > ...On Behalf Of Eugen Leitl > ... > > Obviously our best chances lie with technology that is > cost-effective even without subsidies. People will only move > when their wallet is hurting... Eugen* Leitl Thanks Gene, I couldn't have said it better. Once again, capitalism rides in to save humanity. 
spike From stefano.vaj at gmail.com Mon Dec 28 18:30:34 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Dec 2009 19:30:34 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <43379.73473.qm@web36504.mail.mud.yahoo.com> <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> Message-ID: <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> 2009/12/28 Stathis Papaioannou > So my question is, Will I still have > consciousness in this sense if my brain is replaced with an electronic > one that results in the same behaviour? And the answer is, Yes. That's > what the thought experiment I've described demonstrates. > > Yes. Or you might already be a philosophical zombie, in which case neither you (by definition, not being "conscious") nor I (because we are restricted to dealing with *phenomena*, any hypothetical Ding-an-sich being unattainable anyway) would know anything about that. This is another way to say that philosophical zombies, either "natural" or electronic, cannot be part of anybody's reality. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From painlord2k at libero.it Mon Dec 28 18:33:25 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Mon, 28 Dec 2009 19:33:25 +0100 Subject: [ExI] atheism In-Reply-To: References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> Message-ID: <4B38F9F5.9050601@libero.it> Il 28/12/2009 1.56, Samantha Atkins ha scritto: > There is ample evidence that belief regardless of evidence or argument > is harmful. Is it so only for religion or for anything? For example, is the belief of AGW without raw data and methods used to allow replication harmful? For example, is the belief of the possibility/utility to redeem convict felons without evidence is harmful? For example, is the belief that all humans have the same moral rights without proof harmful? For example, is the belief that all humans are equals without proof harmful? For example, the belief that the government can do better than private enterprises without proof (well, with large proof of the opposite) is harmful? I think that some belief unsupported by proof are useful for the people believing them. The problem is that so many people use general intelligence and rationality with problems too complex to be solved by simple rationality. It is like trying to compute the trajectory of a stone you throw with your hand and hoping to hit the target, then marveling why an uneducated is better at it. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.722 / Database dei virus: 270.14.122/2590 - Data di rilascio: 12/28/09 08:16:00 From stathisp at gmail.com Mon Dec 28 21:15:21 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 08:15:21 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> Message-ID: <3DE97FC7-CFF4-41D0-8620-0E496CE9C333@gmail.com> On 29/12/2009, at 5:30 AM, Stefano Vaj wrote: > 2009/12/28 Stathis Papaioannou > So my question is, Will I still have > consciousness in this sense if my brain is replaced with an electronic > one that results in the same behaviour? And the answer is, Yes. 
That's > what the thought experiment I've described demonstrates. > > > Yes. > > Or you might already be a philosophical zombie, in which case > neither you (by definition, not being "conscious") nor I (because we > are restricted to dealing with *phenomena*, any hypothetical Ding-an- > sich being unattainable anyway) would know anything about that. > > This is another way to say that philosophical zombies, either > "natural" or electronic, cannot be part of anybody's reality. I wouldn't know it if I were or suddenly became a zombie, but I would know it if, not currently being a zombie, part of my brain were replaced with zombie components. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Mon Dec 28 22:39:03 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 14:39:03 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: Message-ID: <393486.79420.qm@web36508.mail.mud.yahoo.com> --- On Sun, 12/27/09, Stathis Papaioannou wrote: > The argument due to David Chalmers that I have been putting to Gordon > (http://consc.net/papers/qualia.html) shows that this position is > absurd. IF the brain is computable THEN so is consciousness I think you misunderstand "computable consciousness" but we'll discuss this in the other thread. I read Chalmer's "hard problem" book a few years ago, by the way (don't recall the exact title offhand). In fact I discussed it with others here. Chalmer's critics have it right; Chalmer leaves us with more questions than we had to start with! -gts From gts_2000 at yahoo.com Mon Dec 28 23:35:10 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 15:35:10 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <41597.7066.qm@web36502.mail.mud.yahoo.com> --- On Mon, 12/28/09, Stathis Papaioannou wrote: > Well, I think you've finally understood the problem. If > indeed there is something in the physics of neurons that is not > computable, then we won't be able to make artificial neurons based on > computation that behave like biological neurons. It seems you still don't understand the position that you really want (or should want) to refute here. :) 1) I do not believe anything about the physics of neurons makes them impossible to compute. We can in principle make exact blueprints of real neurons with and on a computer. 2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons. 3) I do *not* however believe that any neurons you might manufacture that contain computer simulations (i.e., formal programs) in place of the natural processes that correlate with consciousness will act like natural neurons. The reason for this is simple: computer simulations of things do not equal the things they simulate. They contain the forms of things but not the substance of things. > But Searle claims that weak AI *is* possible. > He even alludes to Church's thesis to support this: Yes, and so you think Searle is hoist with his own petard! > However, Searle thinks that although the behaviour of the > brain can be replicated by a computer, the conscious cannot. Consciousness can be replicated on a computer in much the same way as a cartoonist replicates it. The most simple kind of cartoonist puts a little cloud over his character's head and type words into it to "replicate" consciousness. 
A more advanced cartoonist will add a time dimension by adding several frames to his cartoon. An even more advanced cartoonist will make a computer animation and add audio to replace the clouds (or simply show the character thinking). A yet even more advanced cartoonist will make his cartoon into a 3-D hologram. At the most sophisticated level the cartoonist will create a perfect computer model of a real brain and insert that baby into his hologram, creating weak AI. But no matter sophisticated the cartoon gets, it remains just a cartoon. It never magically turns into the real McCoy. -gts From stathisp at gmail.com Tue Dec 29 00:44:33 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 11:44:33 +1100 Subject: [ExI] Searle and AI In-Reply-To: <393486.79420.qm@web36508.mail.mud.yahoo.com> References: <393486.79420.qm@web36508.mail.mud.yahoo.com> Message-ID: 2009/12/29 Gordon Swobe : > --- On Sun, 12/27/09, Stathis Papaioannou wrote: > >> The argument due to David Chalmers that I have been putting to Gordon >> (http://consc.net/papers/qualia.html) shows that this position is >> absurd. IF the brain is computable THEN so is consciousness > > I think you misunderstand "computable consciousness" but we'll discuss this in the other thread. > > I read Chalmer's "hard problem" book a few years ago, by the way (don't recall the exact title offhand). In fact I discussed it with others here. Chalmer's critics have it right; Chalmer leaves us with more questions than we had to start with! But that specific paper purporting to prove that computers can be conscious is one of the most significant in philosophy of mind in recent years. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Dec 29 01:05:33 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 17:05:33 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> Message-ID: <448387.50674.qm@web36507.mail.mud.yahoo.com> --- On Mon, 12/28/09, Stefano Vaj wrote: > What obfuscates the AGI debate is IMHO an abuse of? poorly > defined terms such as "conscience", etc., I agree. For many people the word consciousness has a sort of mystical new-age connotation that throws them off the track. For that reason I prefer the word intentionality. You have it if you have something in mind. Simple and straightforward. -gts From gts_2000 at yahoo.com Tue Dec 29 01:35:17 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 17:35:17 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: <5DE2ED78-D711-42BB-A956-9989BC835224@bellsouth.net> Message-ID: <846517.62688.qm@web36502.mail.mud.yahoo.com> --- On Mon, 12/28/09, John Clark wrote: >> Even if Zeus handed us a concrete example of an artificially >> constructed machine with strong AI, we could not abstract >> from careful study of that machine a formal program to run >> on a software/hardware system that would enable that s/h >> system to also have strong AI. We would need instead to >> recreate that machine. > Well I agree that if you duplicated a conscious machine the copy would > be conscious. Glad we can agree on something. > Of course making a perfect copy of such a complex thing wouldn't be easy; > to do so you would need a very long list of instructions > specifying which of the?80 elements?with a stable > isotope?you're dealing with, and information on > where to obtain such such an atom,?and the coordinates > of where to move that atom to. 
There is a name for a list of > instructions of that sort, it's called a, it's > called a,..., oh damn, it's right on the tip of my > tongue, give me a second, ... ah yes now I have it, it's > called a PROGRAM. Sure, we would most likely need a program like yours to guide the assembly. We might even build robots that run your program and put them to work on assembly lines for the purpose of building strong AI machines. But the program-driven robots would not have strong AI, at least not by virtue of your program. However, in phase II of the project, the strong AI machines might take the place of the robots. (Unless they're lazy like me and would rather let the robots do it.) -gts From stathisp at gmail.com Tue Dec 29 01:35:34 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 12:35:34 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <41597.7066.qm@web36502.mail.mud.yahoo.com> References: <41597.7066.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/29 Gordon Swobe : > --- On Mon, 12/28/09, Stathis Papaioannou wrote: > >> Well, I think you've finally understood the problem. If >> indeed there is something in the physics of neurons that is not >> computable, then we won't be able to make artificial neurons based on >> computation that behave like biological neurons. > > It seems you still don't understand the position that you really want (or should want) to refute here. :) > > 1) I do not believe anything about the physics of neurons makes them impossible to compute. We can in principle make exact blueprints of real neurons with and on a computer. > > 2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons. > > 3) I do *not* however believe that any neurons you might manufacture that contain computer simulations (i.e., formal programs) in place of the natural processes that correlate with consciousness will act like natural neurons. The reason for this is simple: computer simulations of things do not equal the things they simulate. They contain the forms of things but not the substance of things. > >> But Searle claims that weak AI *is* possible. >> He even alludes to Church's thesis to support this: > > Yes, and so you think Searle is hoist with his own petard! > >> However, Searle thinks that although the behaviour of the >> brain can be replicated by a computer, the conscious cannot. > > Consciousness can be replicated on a computer in much the same way as a cartoonist replicates it. > > The most simple kind of cartoonist puts a little cloud over his character's head and type words into it to "replicate" consciousness. A more advanced cartoonist will add a time dimension by adding several frames to his cartoon. An even more advanced cartoonist will make a computer animation and add audio to replace the clouds (or simply show the character thinking). A yet even more advanced cartoonist will make his cartoon into a 3-D hologram. At the most sophisticated level the cartoonist will create a perfect computer model of a real brain and insert that baby into his hologram, creating weak AI. > > But no matter sophisticated the cartoon gets, it remains just a cartoon. It never magically turns into the real McCoy. 
Before proceeding, I would like you to say what you think you would experience if some of your neurons were replaced with artificial neurons that behave externally like biological neurons but, being tainted with programming, lack understanding. I assumed from your previous post that you were saying you would experience something different and would behave differently, because it's not possible to make artificial neurons that behave normally, but this post says otherwise, leaving me confused as to your position. -- Stathis Papaioannou From gts_2000 at yahoo.com Tue Dec 29 02:26:23 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Mon, 28 Dec 2009 18:26:23 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <518629.35006.qm@web36504.mail.mud.yahoo.com> --- On Mon, 12/28/09, Stathis Papaioannou wrote: > Before proceeding, I would like you to say what you think > you would experience if some of your neurons were replaced with > artificial neurons that behave externally like biological neurons but, > being tainted with programming, lack understanding. Again it looks as if you've asked me to predict the outcome of a logical impossibility. Similar to your last cyborg-like experiment, you have tampered with or completely short-circuited the feedback loop between the subject's behavior, including the behavior of his neurons, and his understanding. And so contrary to the wording in your question, the program-driven neurons will not "behave externally like biological neurons". What will "I" experience in the midst of this highly dubious and seemingly impossible state of affairs, you ask? I dare not even guess. Unclear that "I" should even exist, and then only because you included the word "some". -gts From stathisp at gmail.com Tue Dec 29 02:40:56 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 29 Dec 2009 13:40:56 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <518629.35006.qm@web36504.mail.mud.yahoo.com> References: <518629.35006.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/29 Gordon Swobe : > --- On Mon, 12/28/09, Stathis Papaioannou wrote: > >> Before proceeding, I would like you to say what you think >> you would experience if some of your neurons were replaced with >> artificial neurons that behave externally like biological neurons but, >> being tainted with programming, lack understanding. > > Again it looks as if you've asked me to predict the outcome of a logical impossibility. > > Similar to your last cyborg-like experiment, you have tampered with or completely short-circuited the feedback loop between the subject's behavior, including the behavior of his neurons, and his understanding. And so contrary to the wording in your question, the program-driven neurons will not "behave externally like biological neurons". > > What will "I" experience in the midst of this highly dubious and seemingly impossible state of affairs, you ask? I dare not even guess. Unclear that "I" should even exist, and then only because you included the word "some". You claim both that the physics of neurons is computable AND that it is impossible to make program-driven neurons that behave like natural neurons, which is a contradiction. Even Searle agrees that you can make artificial neurons that behave like natural neurons, in the passage I quoted earlier: that's what weak AI is! 
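A toy version of the "program-driven part that behaves like the original" at issue here: two internally different mechanisms with identical external input-output behaviour, one running crude threshold dynamics and one merely replaying recorded responses. Everything below is invented for illustration and says nothing about how a real replacement neuron would be built:

    # Mechanism A: crude threshold-and-reset dynamics, standing in for "the real thing".
    def mechanism_a(inputs):
        v, out = 0.0, []
        for x in inputs:
            v = 0.9 * v + x            # leaky accumulation of input
            if v > 5.0:
                out.append(1)          # "spike"
                v = 0.0
            else:
                out.append(0)
        return out

    # Mechanism B: no dynamics at all, only responses recorded in advance
    # (a lookup table standing in for "a formal program").
    recorded = {}
    def mechanism_b(inputs):
        key = tuple(inputs)
        if key not in recorded:
            recorded[key] = mechanism_a(inputs)   # the "recording" step
        return list(recorded[key])

    stimulus = [1.0, 2.0, 0.5, 3.0] * 5
    assert mechanism_a(stimulus) == mechanism_b(stimulus)
    print("externally indistinguishable on this stimulus")

The point of the sketch is only the logical one under debate: external behaviour can be matched without the internals being the same.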
-- Stathis Papaioannou From reasonerkevin at yahoo.com Tue Dec 29 04:34:34 2009 From: reasonerkevin at yahoo.com (Kevin Freels) Date: Mon, 28 Dec 2009 20:34:34 -0800 (PST) Subject: [ExI] atheism In-Reply-To: <4B38F9F5.9050601@libero.it> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> Message-ID: <379052.14440.qm@web81607.mail.mud.yahoo.com> ________________________________ From: Mirco Romanato To: ExI chat list Sent: Mon, December 28, 2009 12:33:25 PM Subject: Re: [ExI] atheism Il 28/12/2009 1.56, Samantha Atkins ha scritto: > There is ample evidence that belief regardless of evidence or argument > is harmful. Is it so only for religion or for anything? For example, is the belief of AGW without raw data and methods used to allow replication harmful? For example, is the belief of the possibility/utility to redeem convict felons without evidence is harmful? For example, is the belief that all humans have the same moral rights without proof harmful? For example, is the belief that all humans are equals without proof harmful? For example, the belief that the government can do better than private enterprises without proof (well, with large proof of the opposite) is harmful? I think that some belief unsupported by proof are useful for the people believing them. The problem is that so many people use general intelligence and rationality with problems too complex to be solved by simple rationality. It is like trying to compute the trajectory of a stone you throw with your hand and hoping to hit the target, then marveling why an uneducated is better at it. Mirco Samantha, I believe that killing you would be wrong, yet I have no proof............. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 29 05:13:57 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 29 Dec 2009 00:13:57 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <41597.7066.qm@web36502.mail.mud.yahoo.com> References: <41597.7066.qm@web36502.mail.mud.yahoo.com> Message-ID: <07219FFC-E155-4959-83E0-FDC1E60E24FF@bellsouth.net> On Dec 28, 2009, Gordon Swobe wrote: > computer simulations of things do not equal the things they simulate. Sometimes simulations are exactly equal to the things they simulate and the more abstract something is the more likely that is to happen. Computer arithmetic is real arithmetic, digital music is real music. And when a computer makes an action it is a real action and there is nothing simulated about it. A computer can duplicate many adjectives and verbs and even a few nouns; so there is one question you need to ask yourself, is consciousness more like a symphony or more like a brick? There are 2 other points I'd like to make: 1) Even if you're correct there is absolutely no way you will ever know you're correct. 2) Even if you're correct that fact will never have any effect on the Human Race, it would be the AI's problem not ours. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Tue Dec 29 05:21:39 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 29 Dec 2009 00:21:39 -0500 Subject: [ExI] Searle and AI In-Reply-To: <846517.62688.qm@web36502.mail.mud.yahoo.com> References: <846517.62688.qm@web36502.mail.mud.yahoo.com> Message-ID: <1D42F91B-B9D9-4190-9381-7DE479A2DDCD@bellsouth.net> On Dec 28, 2009, Gordon Swobe wrote: > We might even build robots that run your program and put them to work on assembly lines for the purpose of building strong AI machines. Am I to take it that you are conceding the debate? You say a program could make a conscious machine, so the program is making consciousness. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Tue Dec 29 12:15:08 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 04:15:08 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <902030.79191.qm@web36502.mail.mud.yahoo.com> --- On Mon, 12/28/09, Stathis Papaioannou wrote: > You claim both that the physics of neurons is computable Yes. > AND that it is impossible to make program-driven neurons that behave > like natural neurons, which is a contradiction. No you misunderstood me, and I should have made myself more clear. I meant that your artificial neurons in your experiment would not act as would have the natural neurons that they replaced -- not that they would act in a manner uncharacteristic of neurons. > Even Searle agrees that you can make artificial neurons that behave > like natural neurons, As do I. That was the #2 point in my post to you yesterday. I quote myself here: "2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons." To my way of thinking, your cyborg-like thought experiment takes a single snapshot of a process. You want to focus on that single snapshot but I look at the entire process. In that process you have me changing into a computer simulation. While the circumstance pictured in that single snapshot seems odd to me subjectively, to you as the objective observer everything would seem quite normal. At the end of that process I no longer exist as an intentional entity. Although that simulation of me exhibits all the objective characteristics of a person with intentionality, my simulated consciousness no longer has a first-person ontology. I once believed as do many here that a computer simulation of me would equal "me". I now understand that such a simulation would no more equal me than would a photograph of me. The simulation of me differs from the photograph of me only in the accuracy of the caricature. Rather silly of me to have thought otherwise! -gts From gts_2000 at yahoo.com Tue Dec 29 12:27:08 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 04:27:08 -0800 (PST) Subject: [ExI] Searle and AI In-Reply-To: <1D42F91B-B9D9-4190-9381-7DE479A2DDCD@bellsouth.net> Message-ID: <436332.97988.qm@web36507.mail.mud.yahoo.com> --- On Tue, 12/29/09, John Clark wrote: > Am I to take it that you are conceding the debate? You say a program > could make a conscious machine, so the program is making consciousness.? Not conceding the debate at all, and I think you make a good point. A properly programmed robot could construct an intentional entity inside itself. Nature did it by natural selection. Why not robots? 
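The blueprint-versus-assembled-thing distinction both sides keep appealing to fits in a few lines; the instruction format, elements and coordinates below are all made up for illustration:

    # A "blueprint" is just data: an ordered list of assembly instructions.
    blueprint = [
        ("place", "carbon", (0.0, 0.0, 0.0)),
        ("place", "nitrogen", (1.4, 0.0, 0.0)),
        ("bond", 0, 1),
    ]

    class Assembly:
        """The thing that actually gets built, distinct from the instructions."""
        def __init__(self):
            self.atoms, self.bonds = [], []

    def assemble(instructions):
        built = Assembly()
        for op, *args in instructions:
            if op == "place":
                built.atoms.append(args)       # (element, coordinates)
            elif op == "bond":
                built.bonds.append(tuple(args))
        return built

    thing = assemble(blueprint)
    print(len(thing.atoms), "atoms,", len(thing.bonds), "bond")

Whether the constructed thing, the instruction list, or neither is the locus of any mental property is exactly what the thread is arguing about; the sketch only shows that the two are different kinds of object.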
-gts From gts_2000 at yahoo.com Tue Dec 29 12:50:15 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 04:50:15 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <07219FFC-E155-4959-83E0-FDC1E60E24FF@bellsouth.net> Message-ID: <81067.97750.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/29/09, John Clark wrote: >> computer simulations of things do not equal the things they >> simulate.? > Sometimes simulations are exactly equal to the > things they simulate?and the more abstract something is > the more likely that is to happen. Computer arithmetic is > real arithmetic, digital music is real music. To borrow a phrase popularized by a philosopher by the name of Thomas Nagel, who famously wrote an essay titled _What is it like to be a bat?_, there exists something "it is like" to mentally solve or understand a mathematical equation. Computers do math well, but you can't show me how they could possibly know what it's like. Digital music? Same story. -gts From gts_2000 at yahoo.com Tue Dec 29 13:47:44 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 05:47:44 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> Message-ID: <467359.61586.qm@web36504.mail.mud.yahoo.com> Stathis, > So my question is, Will I still have consciousness in this sense if my > brain is replaced with an electronic one that results in the same > behaviour? And the answer is, Yes. That's what the thought experiment > I've described demonstrates. If that's what it demonstrates then it misses the point of Searle's argument, which has nothing to do with replacing a natural brain with an electronic one. Perhaps we will find a way to create intentionality by electronic means. Assume we have and that the engineer who accomplished that remarkable feat puts his blueprint into a program to run on hardware. The resulting software/hardware system will then only *simulate* the object described in the blueprint. -gts From stathisp at gmail.com Tue Dec 29 14:01:17 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 01:01:17 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <902030.79191.qm@web36502.mail.mud.yahoo.com> References: <902030.79191.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/29 Gordon Swobe : > --- On Mon, 12/28/09, Stathis Papaioannou wrote: > >> You claim both that the physics of neurons is computable > > Yes. > >> AND that it is impossible to make program-driven neurons that behave >> like natural neurons, which is a contradiction. > > No you misunderstood me, and I should have made myself more clear. I meant that your artificial neurons in your experiment would not act as would have the natural neurons that they replaced -- not that they would act in a manner uncharacteristic of neurons. > >> Even Searle agrees that you can ?make artificial neurons that behave >> like natural neurons, > > As do I. That was the #2 point in my post to you yesterday. I quote myself here: > > "2) I believe we can in principle create neurons "based on" those computer blueprints, just as we can make anything from blueprints, and that those manufactured neurons will behave exactly like natural neurons." You leave me confused. Would the artificial neuron behave like a natural neuron or would it not? It seems to me that you have to agree that it would. 
After all, you agree with Searle that a computer could in theory fool a human into thinking that it was a fellow human, and just fooling the adjacent neurons into thinking it's one of them should be a vastly easier task. So what do you mean by saying the artificial neurons "would not act as would have the natural neurons that they replaced"? > To my way of thinking, your cyborg-like thought experiment takes a single snapshot of a process. You want to focus on that single snapshot but I look at the entire process. In that process you have me changing into a computer simulation. While the circumstance pictured in that single snapshot seems odd to me subjectively, to you as the objective observer everything would seem quite normal. > > At the end of that process I no longer exist as an intentional entity. Although that simulation of me exhibits all the objective characteristics of a person with intentionality, my simulated consciousness no longer has a first-person ontology. I operate on your brain and install my artificial neurons in place of a volume of tissue involved in some important aspect of cognition, such as visual perception or language. Your brain and hence you would have to behave normally if my artificial neurons behave normally, by definition. But although I can make my artificial neurons behave normally, you claim that I can't imbue them with intentionality. So, will you feel normal or won't you? It seems you are suggesting that you would not feel normal, but would instead feel that something weird was happening. Can you explain how you would feel this - where the acknowledgment of the weird feeling will physically occur in your brain - given that all your neurons are forced to behave as they would have if no change had been made? -- Stathis Papaioannou From stathisp at gmail.com Tue Dec 29 14:09:12 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 01:09:12 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <467359.61586.qm@web36504.mail.mud.yahoo.com> References: <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> <467359.61586.qm@web36504.mail.mud.yahoo.com> Message-ID: 2009/12/30 Gordon Swobe : > Stathis, > >> So my question is, Will I still have consciousness in this sense if my >> brain is replaced with an electronic one that results in the same >> behaviour? And the answer is, Yes. That's what the thought experiment >> I've described demonstrates. > > If that's what it demonstrates then it misses the point of Searle's argument, which has nothing to do with replacing a natural brain with an electronic one. It proves that Searle's argument is wrong, since Searle's argument is that a software based system can't have a mind. That's what I meant by electronic brain. > Perhaps we will find a way to create intentionality by electronic means. Assume we have and that the engineer who accomplished that remarkable feat puts his blueprint into a program to run on hardware. The resulting software/hardware system will then only *simulate* the object described in the blueprint. Are you now claiming that a brain based on analogue but not digital circuits can have a mind? 
-- Stathis Papaioannou From gts_2000 at yahoo.com Tue Dec 29 15:12:12 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 07:12:12 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <262949.67576.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/29/09, Stathis Papaioannou wrote: > Would the artificial neuron behave like a natural neuron or would it > not? In your partial replacement scenario, we would not look at it and say "natural neurons never act in such a way" so in this respect they work just like natural neurons. But still I'm inclined to say they would behave in a slightly different way than those naturals that would have existed contrafactually, at least momentarily during the process. But then on the other hand who's to say what could have been? The point is that the subject begins to lose his first person perspective. If we freeze the picture at that very moment then admittedly I find myself left with a conundrum, one not unlike, say, Zeno's paradox. I resolve it by moving forward or backward in time. When we move forward to the end and complete the experiment, simulating the entire person and his environs, we find ourselves creating only objects in computer code, mere blueprints of real things. If we move backward to the beginning then we find ourselves with the original intentional person in the real world. In between the beginning and the end we can play all sorts of fun and interesting games that challenge and stretch our imaginations, but that's all they seem to me. Contrary to the rumor going around, reality really does exist. :) -gts From stathisp at gmail.com Tue Dec 29 15:34:38 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 02:34:38 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <262949.67576.qm@web36502.mail.mud.yahoo.com> References: <262949.67576.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/30 Gordon Swobe : > --- On Tue, 12/29/09, Stathis Papaioannou wrote: > >> Would the artificial neuron behave like a natural neuron or would it >> not? > > In your partial replacement scenario, we would not look at it and say "natural neurons never act in such a way" so in this respect they work just like natural neurons. But still I'm inclined to say they would behave in a slightly different way than those naturals that would have existed contrafactually, at least momentarily during the process. But then on the other hand who's to say what could have been? You're inclined to say they would behave in a slightly different way? You may as well say, God will intervene because he's so offended by the idea that computers can think. > The point is that the subject begins to lose his first person perspective. If we freeze the picture at that very moment then admittedly I find myself left with a conundrum, one not unlike, say, Zeno's paradox. I resolve it by moving forward or backward in time. When we move forward to the end and complete the experiment, simulating the entire person and his environs, we find ourselves creating only objects in computer code, mere blueprints of real things. If we move backward to the beginning then we find ourselves with the original intentional person in the real world. In between the beginning and the end we can play all sorts of fun and interesting games that challenge and stretch our imaginations, but that's all they seem to me. > > Contrary to the rumor going around, reality really does exist. 
:) Up until this point it seemed there was a chance you might follow the argument to wherever it rationally led you. -- Stathis Papaioannou From stefano.vaj at gmail.com Tue Dec 29 16:12:01 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 17:12:01 +0100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <3DE97FC7-CFF4-41D0-8620-0E496CE9C333@gmail.com> References: <43379.73473.qm@web36504.mail.mud.yahoo.com> <580930c20912280519q1d9179c9y7c1d3a76f940e2c@mail.gmail.com> <580930c20912281030i4df4a2f6u2427d29422840047@mail.gmail.com> <3DE97FC7-CFF4-41D0-8620-0E496CE9C333@gmail.com> Message-ID: <580930c20912290812h52932b07vca80dacb735471a2@mail.gmail.com> 2009/12/28 Stathis Papaioannou : > I wouldn't know it if I were or suddenly became a zombie, but I would know > it if, not currently being a zombie, part of my brain were replaced with > zombie components. The issue is of course pure non-sense for me, and I suspect we are more or less on the same side anyway, but just for the sake of idle discussion, how would you know? It is unclear to me whether any difference exists in the "internal" (?) status of philosophical zombies, the same being part after all of their "behaviour", but if they have an illusion to be conscious you would of course conserve it as well, if they do not you could not be aware of the moment you stopped being... Such awareness would require a homunculus trapped in the zombie and screaming about the zombification of... whom, if he has not become one himself? -- Stefano Vaj From jonkc at bellsouth.net Tue Dec 29 16:28:29 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 29 Dec 2009 11:28:29 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <81067.97750.qm@web36502.mail.mud.yahoo.com> References: <81067.97750.qm@web36502.mail.mud.yahoo.com> Message-ID: <3F7C0FFA-41F7-412F-88CB-8000F02C7F07@bellsouth.net> On Dec 29, 2009, Gordon Swobe wrote: > To borrow a phrase popularized by a philosopher by the name of Thomas Nagel, who famously wrote an essay titled _What is it like to be a bat?_, I've read that essay and I think Nagel was totally confused and his confusion has nothing to do with the mysteries of consciousness, it has to do with logic. He makes it clear that he doesn't want to know what it would be like for Thomas Nagel to be a bat, he wants to know what's it is like for a bat to be a bat. The only way to do that is to turn the man into a bat, but then Thomas Nagel still wouldn't know because he'd no longer be Thomas Nagel, he'd be a bat. Only a bat would know what it's like to be a bat because like it or not consciousness is a private experience. > there exists something "it is like" to mentally solve or understand a mathematical equation. Computers do math well, but you can't show me how they could possibly know what it's like. Do you know what it's like? When you multiply 243 by 613 do you "KNOW" the answer is 147959, does your intuition insist that it can be no other number, do you feel it in your bones or did you just follow a purely mechanical procedure that you learned in the third grade to come up with that figure? There is no need to translate computer arithmetic into real arithmetic, it's already real. > Digital music? Same story. You said computer simulations are not real but I must insist that digital music is real music. John K Clark PS: If you do feel it in your bones then you need your bones checked because the true answer is 148959. 
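For anyone who wants the long-hand check, here it is as a two-line Python sketch, partial products and then a sum, the same mechanical procedure described above:

partial_products = [243 * 600, 243 * 10, 243 * 3]   # 145800, 2430, 729
print(sum(partial_products))                         # prints 148959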
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Dec 29 16:51:21 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 29 Dec 2009 11:51:21 -0500 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <448387.50674.qm@web36507.mail.mud.yahoo.com> References: <448387.50674.qm@web36507.mail.mud.yahoo.com> Message-ID: <2558C0DE-4F5B-4015-83C7-487B19EF0B43@bellsouth.net> On Dec 28, 2009, Gordon Swobe wrote: > the word consciousness has a sort of mystical new-age connotation that throws them off the track. > For that reason I prefer the word intentionality. Intentionality means doing something with an intention and that means doing something with an aim or a plan. A program is a plan. But perhaps you just mean doing things for a reason. Well, I maintain it is beyond dispute that you, me, computers, everything in the universe does things because of cause and effect OR they don't do things because of cause and effect; and if they don't then those things are by definition random. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From aware at awareresearch.com Tue Dec 29 16:55:00 2009 From: aware at awareresearch.com (Aware) Date: Tue, 29 Dec 2009 08:55:00 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <262949.67576.qm@web36502.mail.mud.yahoo.com> References: <262949.67576.qm@web36502.mail.mud.yahoo.com> Message-ID: On Tue, Dec 29, 2009 at 7:12 AM, Gordon Swobe wrote: > > Contrary to the rumor going around, reality really does exist. :) I'm calling foul. I've been watching pretty closely the last several days and I haven't seen anyone make that assertion. I HAVE seen more sophisticated statements that it's meaningless to refer to "reality" as if independent of context. - Jef From painlord2k at libero.it Tue Dec 29 17:21:10 2009 From: painlord2k at libero.it (Mirco Romanato) Date: Tue, 29 Dec 2009 18:21:10 +0100 Subject: [ExI] atheism In-Reply-To: <462432.73463.qm@web59916.mail.ac4.yahoo.com> References: <462432.73463.qm@web59916.mail.ac4.yahoo.com> Message-ID: <4B3A3A86.60306@libero.it> Il 25/12/2009 5.19, Post Futurist ha scritto: > Derborn, Michigan? Londonistan? Sweden? Italy? Finland? >> Mirco Romanato > Genital mutilation is not common in the nations you mention above, it > is anomalous. Well, in some groups is not so anomalous. http://islamineurope.blogspot.com/2009/12/uk-girls-face-fgm-during-christmas.html UK: Girls face FGM during Christmas holidays > Hundreds of British schoolgirls are facing the terrifying prospect of > female genital mutilation (FGM) over the Christmas holidays as > experts warn the practice continues to flourish across the country. > Parents typically take their daughters back to their country of > origin for FGM during school holidays, but The Independent on Sunday > has been told that "cutters" are being flown to the UK to carry out > the mutilation at "parties" involving up to 20 girls to save money. > > > The police face growing criticism for failing to prosecute a single > person for carrying out FGM in 25 years; new legislation from 2003 > which prohibits taking a girl overseas for FGM has also failed to > secure a conviction. > > > Experts say the lack of convictions, combined with the Government's > failure to invest enough money in education and prevention > strategies, mean the practice continues to thrive. 
Knowledge of the > health risks and of the legislation remains patchy among practising > communities, while beliefs about the supposed benefits for girls > remain firm, according to research by the Foundation for Women's > Health, Research and Development (Forward). > > > As a result, specialist doctors and midwives are struggling to cope > with increasing numbers of women suffering from long-term health > problems, including complications during pregnancy and childbirth. > > > Campaigners are urging ministers to take co-ordinated steps to work > with communities here and overseas to change deep-seated cultural > attitudes and stamp out this extreme form of violence against women. > > > (...) > > > The Somali model Waris Dirie was mutilated at the age of five. She > set up the Waris Dirie Foundation in 2002 to help eradicate FGM. She > said: "I am worried about the situation in Europe and the US, as FGM > seems to be on the rise in these places. In the 21st century, a crime > this cruel should not be accepted in a society as developed as > England. No one can undo the trauma that is caused by this horrible > crime; it stays in your head for ever. So what we should focus on is > that there won't be another victim." > > > Jackie Mathers, a nurse from the Bristol Safeguarding Children Board, > said: "These families do not do this out of spite or hatred; they > believe this will give their daughters the best opportunities in > life. We would like a conviction, not against the parents, but > against a cutter, someone who makes a living from this. We have > anecdotal information that the credit crunch means people can't go > home, so they're getting cutters over for 'FGM parties'. It is hard > for people to speak out because they are from communities that are > already vilified as asylum seekers, so to stand up against their > communities is to risk being ostracised. But we have to empower girls > and women to address this, along with teachers, school nurses and > social workers. We can't ignore it; it is mutilation." > > > A Home Office spokesman said: "We have appointed an FGM co-ordinator > to drive forward a co-ordinated government response to this appalling > crime and make recommendations for future work." Some girls get a breast augmentation as a gift from their parents. Someone else get a not requested clitoridectomy. I hope we are able to make this more anomalous than now. Mirco -------------- next part -------------- Nessun virus nel messaggio in uscita. Controllato da AVG - www.avg.com Versione: 9.0.722 / Database dei virus: 270.14.123/2592 - Data di rilascio: 12/29/09 08:47:00 From scerir at libero.it Tue Dec 29 17:36:51 2009 From: scerir at libero.it (scerir) Date: Tue, 29 Dec 2009 18:36:51 +0100 (CET) Subject: [ExI] The symbol grounding problem in strong AI Message-ID: <31469183.54211262108211380.JavaMail.defaultUser@defaultHost> [-gts] To borrow a phrase popularized by a philosopher by the name of Thomas Nagel, who famously wrote an essay titled _What is it like to be a bat?_, there exists something "it is like" to mentally solve or understand a mathematical equation. Computers do math well, but you can't show me how they could possibly know what it's like. # There are robot-scientists http://www.wired.com/wiredscience/2009/12/download- robot-scientist/ and smart softwares. I do not know if they are conscious or intentional. I'm not expert in "semantics", but it seems to me that every meaning is contextual, or inten*s*ional. 
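A toy way to see that in code (an invented three-symbol "language" in Python, nothing standard): the expression itself is just syntax, and a meaning only appears once an observer supplies an interpretation map.

expression = ("PLUS", "a", "b")              # bare symbols, no intrinsic meaning

def interpret_as_arithmetic(expr, env):
    op, left, right = expr
    if op == "PLUS":
        return env[left] + env[right]        # PLUS read as numeric addition

def interpret_as_text(expr, env):
    op, left, right = expr
    if op == "PLUS":
        return env[left] + " " + env[right]  # the same PLUS read as concatenation

print(interpret_as_arithmetic(expression, {"a": 2, "b": 3}))        # 5
print(interpret_as_text(expression, {"a": "red", "b": "balloon"}))  # red balloon

The symbols never change; only the map applied to them does.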
For "semantics", in the context of programming languages, see here http://tinyurl.com/yjdpkry . Also, there are several examples (i.e. quantum mechanics, its principles, its rules) showing that scientists cannot get any idea, any mental representation of what they write. They do understand their equations, but they do not understand the meaning. From stefano.vaj at gmail.com Tue Dec 29 17:55:01 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 18:55:01 +0100 Subject: [ExI] atheism In-Reply-To: <462432.73463.qm@web59916.mail.ac4.yahoo.com> References: <462432.73463.qm@web59916.mail.ac4.yahoo.com> Message-ID: <580930c20912290955w25c2aca6rbb32b8ed169931dd@mail.gmail.com> 2009/12/25 Post Futurist > Genital mutilation is not common in the nations you mention above, it is anomalous. One would be tempted to say "thank God", but in this event it really would not be accurate, would it? :-D -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Dec 29 17:57:38 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 18:57:38 +0100 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <07219FFC-E155-4959-83E0-FDC1E60E24FF@bellsouth.net> References: <41597.7066.qm@web36502.mail.mud.yahoo.com> <07219FFC-E155-4959-83E0-FDC1E60E24FF@bellsouth.net> Message-ID: <580930c20912290957h606d8374kbc4d8f8fcc9205d@mail.gmail.com> 2009/12/29 John Clark > There are 2 other points I'd like to make: > > 1) Even if you're correct there is absolutely no way you will ever know > you're correct. > 2) Even if you're correct that fact will never have any effect on the Human > Race, it would be the AI's problem not ours. > Not even that, in fact. Because, how could an "unconscious" entity ever have a problem? ;-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Dec 29 18:04:53 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 19:04:53 +0100 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <3F7C0FFA-41F7-412F-88CB-8000F02C7F07@bellsouth.net> References: <81067.97750.qm@web36502.mail.mud.yahoo.com> <3F7C0FFA-41F7-412F-88CB-8000F02C7F07@bellsouth.net> Message-ID: <580930c20912291004t5a2c1b47ledc69c5a351e9cfa@mail.gmail.com> 2009/12/29 John Clark > I've read that essay and I think Nagel was totally confused and his > confusion has nothing to do with the mysteries of consciousness, it has to > do with logic. He makes it clear that he doesn't want to know what it would > be like for Thomas Nagel to be a bat, he wants to know what's it is like for > a bat to be a bat. The only way to do that is to turn the man into a bat, > but then Thomas Nagel still wouldn't know because he'd no longer be Thomas > Nagel, he'd be a bat. Only a bat would know what it's like to be a bat > because like it or not consciousness is a private experience. > Impeccably said. How would you know what's like to be somebody else? You wouldn't, because it is impossible for somebody to be somebody else. Organic, electronic, ethereal... Identity is social construct. Everybody feels veself whomever ve may be, if he feels anything at all, and if he is Mr. Jones or Mr. Brown or his Zombie Replacement by Alien Abductors or a califlower is for others to say. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aware at awareresearch.com Tue Dec 29 18:09:49 2009 From: aware at awareresearch.com (Aware) Date: Tue, 29 Dec 2009 10:09:49 -0800 Subject: [ExI] The symbol grounding problem in strong AI. In-Reply-To: <580930c20912291004t5a2c1b47ledc69c5a351e9cfa@mail.gmail.com> References: <81067.97750.qm@web36502.mail.mud.yahoo.com> <3F7C0FFA-41F7-412F-88CB-8000F02C7F07@bellsouth.net> <580930c20912291004t5a2c1b47ledc69c5a351e9cfa@mail.gmail.com> Message-ID: 2009/12/29 Stefano Vaj : > 2009/12/29 John Clark >> >> I've read that essay and I think Nagel was totally confused and his >> confusion has nothing to do with the mysteries of consciousness, it has to >> do with logic.?He makes it clear that he doesn't want to know what it would >> be like for Thomas Nagel to be a bat, he wants to know what's it is like for >> a bat to be a bat. The only way to do that is to turn the man into a bat, >> but then?Thomas Nagel still wouldn't know because he'd no longer be?Thomas >> Nagel, he'd be a bat. Only a bat would know what it's like to be a bat >> because like it or not consciousness is a private experience. > > Impeccably said. > > How would you know what's like to be somebody else? > > You wouldn't, because it is impossible for somebody to be somebody else. > > Organic, electronic, ethereal... Identity is social construct. Everybody > feels veself whomever ve may be, if he feels anything at all, and if he is > Mr. Jones or Mr. Brown or his Zombie Replacement by Alien Abductors or a > califlower is for others to say. > Right! So that pretty well wraps it up for now, shall we say? ;-) -Jef From stefano.vaj at gmail.com Tue Dec 29 18:18:00 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 19:18:00 +0100 Subject: [ExI] atheism In-Reply-To: <379052.14440.qm@web81607.mail.mud.yahoo.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> <379052.14440.qm@web81607.mail.mud.yahoo.com> Message-ID: <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> 2009/12/29 Kevin Freels > Samantha, I believe that killing you would be wrong, yet I have no proof............. Come on, killing Cosmic Engineers when I am around is always wrong... ;-) Seriously, wrong as in "morally wrong" has to do with Sollen, not with Sein (what is the expression in English?), and here it is easier to accept, for those who are not ethical objectivists, that you may adopt values without any need for "proof": But let us say that you wrote "I believe that killing you would be useless to reduce Global Warming, yet I have no proof". I take it this would not be a matter of faith either, because I assume that you would accept proof to the contrary, and that there is some reasonable, albeit inconclusive, reasoning behind such assumption. -- Stefano Vaj From stefano.vaj at gmail.com Tue Dec 29 18:46:03 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 29 Dec 2009 19:46:03 +0100 Subject: [ExI] Some new angle about AI Message-ID: <580930c20912291046n232bf153ybfddf7c368602f7f@mail.gmail.com> 2009/12/28 Stathis Papaioannou : > It still remains a possibility that the brain does in fact utilise > uncomputable physics. This is the position of Roger Penrose, who > believes neither strong AI nor weak AI is possible, and speculates > that an as yet undiscovered theory of quantum gravity plays an > important role in subcellular processes and will turn out to be > uncomputable. 
The problem with this idea is that there is no evidence > for it, and most scientists dismiss it out of hand; but at least it > has the merit of consistency. One wonders, since there is no obvious hint that quantum mechanics or other low-level physical effects play any role with regard to the working of liver cells, and I do not see why this would be any different with regard to the brain of an ant - and thus to that of a human being. But, there again, quantum computing fully remains in the field of computability, does it not? And the existence of "organic computers" implementing such principles would be proof that such computers can be built. In fact, I would suspect that "quantum computation", in a Wolframian sense", would be all around us, also in other, non-organic, systems. There again, the theoretical issue would be simply that of executing a program emulating what we execute ourselves closely enough to qualify as "human-like" for arbitrary purposes, and find ways to implement it in manner not making us await its responses for multiples of the duration of the Universe... ;-) -- Stefano Vaj From jonkc at bellsouth.net Tue Dec 29 20:22:07 2009 From: jonkc at bellsouth.net (John Clark) Date: Tue, 29 Dec 2009 15:22:07 -0500 Subject: [ExI] D Wave back from the grave? (was: Steorn back from the grave?) In-Reply-To: <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> <379052.14440.qm@web81607.mail.mud.yahoo.com> <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> Message-ID: <65DFA3F9-0053-4003-93E4-570EF3F87F6B@bellsouth.net> I was very skeptical of D-waves claim to have made a working Quantum Computer, it's still probably Bullshit but I'm no longer quite as certain. It seems that Google has taken an interest in D-Wave through their head of image recognition, Hartmut Neven. Neven started a image recognition company called Neven Vision and about 3 years ago Google bought the company. Either Neven and Google are not as smart as I thought they were or D Wave is not a total scam after all. http://www.theregister.co.uk/2009/12/15/google_quantum_computing_research/ John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Dec 29 21:08:45 2009 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 29 Dec 2009 22:08:45 +0100 Subject: [ExI] Searle and AI In-Reply-To: <1D42F91B-B9D9-4190-9381-7DE479A2DDCD@bellsouth.net> References: <846517.62688.qm@web36502.mail.mud.yahoo.com> <1D42F91B-B9D9-4190-9381-7DE479A2DDCD@bellsouth.net> Message-ID: <20091229210845.GL17686@leitl.org> On Tue, Dec 29, 2009 at 12:21:39AM -0500, John Clark wrote: > Am I to take it that you are conceding the debate? Debates assume progress in opponents' point of view. None is so far apparent. Give up on that person already. He's not worth your time. To say it with Heinlein: Never try to teach a pig to sing; it wastes your time and it annoys the pig. It definitely annoys the innocent bystanders also. From thespike at satx.rr.com Tue Dec 29 21:13:16 2009 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 29 Dec 2009 15:13:16 -0600 Subject: [ExI] D Wave back from the grave? 
In-Reply-To: <65DFA3F9-0053-4003-93E4-570EF3F87F6B@bellsouth.net> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> <379052.14440.qm@web81607.mail.mud.yahoo.com> <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> <65DFA3F9-0053-4003-93E4-570EF3F87F6B@bellsouth.net> Message-ID: <4B3A70EC.1080301@satx.rr.com> On 12/29/2009 2:22 PM, John Clark wrote: > I was very skeptical of D-waves claim to have made a working Quantum > Computer, it's still probably Bullshit but I'm no longer quite as certain. Hey, they feature in my ASIMOV'S magazine sf story "The Qualia Engine" so it must be true. Damien Broderick From scerir at libero.it Tue Dec 29 22:18:44 2009 From: scerir at libero.it (scerir) Date: Tue, 29 Dec 2009 23:18:44 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> [Stathis] It still remains a possibility that the brain does in fact utilise uncomputable physics. [...] [Stefano] One wonders, since there is no obvious hint that quantum mechanics or other low-level physical effects play any role with regard to the [...] # We can find "uncomputability" in quantum physics (i.e. essential randomness, contextuality, etc.) but also in classical physics. So it is possible that the brain does in fact utilise, in a technical meaning, uncomputable physics. Speaking of elaboration of information by the brain (here supposed to be a complex quantum system), according to a speculation the mind/brain might encode "propositions" in quantum states, and then might "measure" these quantum states to test the "truth values" of the propositions. Random outcomes given by measurements associated with the propositions should mean that the "truth values" are uncertain, that is to say that the propositions are undecidable (within the context of that mind/brain, and its stored information). But it is just a speculation. From gts_2000 at yahoo.com Tue Dec 29 23:06:10 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Tue, 29 Dec 2009 15:06:10 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <125388.6769.qm@web36507.mail.mud.yahoo.com> --- On Tue, 12/29/09, Aware wrote: >> Contrary to the rumor going around, reality really >> does exist. :) > > I'm calling foul.? Well, perhaps you missed the smile emoticon. But seriously I see that we have two categories of things and objects 1) the real kind and 2) the computer simulated kind. I make a clear distinction between those two categories of things. Computer simulations of real things do not equal those real things they simulate, and some "simulate" nothing real in the first place. -gts From avantguardian2020 at yahoo.com Tue Dec 29 23:13:46 2009 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Tue, 29 Dec 2009 15:13:46 -0800 (PST) Subject: [ExI] Some new angle about AI Message-ID: <85434.91068.qm@web65601.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stefano Vaj > To: ExI chat list > Sent: Tue, December 29, 2009 10:46:03 AM > Subject: [ExI] Some new angle about AI > One wonders, since there is no obvious hint that quantum mechanics or > other low-level physical effects play any role with regard to the > working of liver cells, and I do not see why this would be any > different with regard to the brain of an ant - and thus to that of a > human being. Well some hints are more obvious than others. 
;-) http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology http://www.ks.uiuc.edu/Research/quantum_biology/ http://cosmos.asu.edu/publications/papers/'Does%20quantum%20mechanics%20play%20a%20non%20trivial%20role%20in%20life'%20BioSystems%20paper.pdf There's lots more info out there but you need a subscription or an amazon purchase. ? > But, there again, quantum computing fully remains in the field of > computability, does it not? And the existence of "organic computers" > implementing such principles would be proof that such computers can be > built. In fact, I would suspect that "quantum computation", in a > Wolframian sense", would be all around us, also in other, non-organic, > systems. I have no proof but I suspect that many biological processes are indeed quantum computations. Quantum tunneling of information backwards through time could, for example, explain life's remarkable ability to anticipate things. > There again, the theoretical issue would be simply that of executing a > program emulating what we execute ourselves closely enough to qualify > as "human-like" for arbitrary purposes, and find ways to implement it > in manner not making us await its responses for multiples of the > duration of the Universe... ;-) In order to do so, it would have to consider a superposition of?every possible response and collapse?the ouput?"wavefunction" on the most appropriate response. Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From aware at awareresearch.com Tue Dec 29 23:37:04 2009 From: aware at awareresearch.com (Aware) Date: Tue, 29 Dec 2009 15:37:04 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <125388.6769.qm@web36507.mail.mud.yahoo.com> References: <125388.6769.qm@web36507.mail.mud.yahoo.com> Message-ID: On Tue, Dec 29, 2009 at 3:06 PM, Gordon Swobe wrote: > > But seriously I see that we have two categories of things and objects 1) the real kind and 2) the computer simulated kind. > > I make a clear distinction between those two categories of things. Computer simulations of real things do not equal those real things they simulate, and some "simulate" nothing real in the first place. To use a thought-experiment familiar to this list, how would you know (experientially) whether or not you're being run as a simulation right now? As an embedded observer of whatever local environment of interaction you inhabit, you fundamentally LACK THE CONTEXT that would make any difference. It seems that through this protracted thread you consistently and simply beg the question, as does Searle. Functionalism aside (as I've said twice now, it's not the issue and needs no defense) you spin around the flawed premise that "consciousness" is indisputably (according to all the 1st-person evidence you might ever as for) instantiated in, at least, the human brain, but not within the workings of any formally described system. Well, It's not "in" either system (formal or evolved.) As I've said before, the semantics/intentionality/meaning is a function of the observer, EVEN when that observer happens to be a functional expression of the same brain/body of that which it observes. It simply expresses its evolved nature, "meaningful" as a result of adaptive selection that rejected all manner of behavior at all scales that was not "meaningful" in the evolutionary environment of adaptation. Of course it refers to itself as "I". 
Of course it perceives its experience as true and complete--IT LACKS THE CONTEXT to know (experientially) otherwise. Of course the illusion is convincingly seductive; it's the result of thousands of generations of selection based on survival and reproduction. And making it harder for you, even though you're a kind of programmer by trade, it's clear you're not comfortable with recursion. - Jef From stathisp at gmail.com Wed Dec 30 00:01:54 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 11:01:54 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> References: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> Message-ID: 2009/12/30 scerir : > [Stathis] > It still remains a possibility that the brain does in fact utilise > uncomputable physics. [...] > > [Stefano] > One wonders, since there is no obvious hint that quantum mechanics or other > low-level physical effects play any role with regard to the [...] > > # > > We can find "uncomputability" ?in quantum physics (i.e. essential randomness, > contextuality, etc.) but also in classical physics. So it is possible that the > brain does in fact utilise, in a technical meaning, uncomputable physics. > Speaking of elaboration of information by the brain (here supposed to be a > complex quantum system), according to a speculation the mind/brain might encode > "propositions" in quantum states, and then might "measure" these quantum states > to test the "truth values" of the propositions. Random outcomes given by > measurements associated with the propositions should mean that the "truth > values" are uncertain, that is to say that the propositions are undecidable > (within the context of that mind/brain, and its stored information). But it is > just a speculation. We can compute probabilistic answers, often with high certainty, where true randomness is evolved (eg. I predict that I won't quantum tunnel to the other side of the Earth), or we can use pseudorandom number generators. I don't think anyone has shown a situation where true random can be distinguished from pseudorandom, but even if that should be a stumbling block in simulating a brain, it would be possible to bypass it by including a true random source, such as radioactive decay, in the machine. By uncomputable I was thinking not of randomness but of solving undecidable problems, such as the halting problem. Quantum computers can't do that. -- Stathis Papaioannou From aware at awareresearch.com Wed Dec 30 00:03:11 2009 From: aware at awareresearch.com (Aware) Date: Tue, 29 Dec 2009 16:03:11 -0800 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: References: <125388.6769.qm@web36507.mail.mud.yahoo.com> Message-ID: [Sorry, resending due to a small oversight which I wouldn't want to derail the message.] On Tue, Dec 29, 2009 at 3:37 PM, Aware wrote: > On Tue, Dec 29, 2009 at 3:06 PM, Gordon Swobe wrote: >> >> But seriously I see that we have two categories of things and objects 1) the real kind and 2) the computer simulated kind. >> >> I make a clear distinction between those two categories of things. Computer simulations of real things do not equal those real things they simulate, and some "simulate" nothing real in the first place. > > To use a thought-experiment familiar to this list, how would you know > (experientially) whether or not you're being run as a simulation right > now? 
?As an embedded observer of whatever local environment of > interaction you inhabit, you fundamentally LACK THE CONTEXT that would > make any difference. > > It seems that through this protracted thread you consistently and > simply beg the question, as does Searle. ?Functionalism aside (as I've > said twice now, it's not the issue and needs no defense) you spin > around the flawed premise that "consciousness" is indisputably > (according to all the 1st-person evidence you might ever ask for) > instantiated in, at least, the human brain, but not within the > workings of any formally described system. > > Well, it's not "in" either system (formal or evolved.) ?As I've said > before, the semantics/intentionality/meaning is a function of the > observer, EVEN when that observer happens to be a functional > expression of the same brain/body of that which it observes. ?It > simply expresses its nature, "meaningful" as a result of > evolutionary and developmental process that rejected all manner > of behavior that was not pragmatic and reinforced behavior > that was. > > Of course it refers to itself as "I". ?Of course it perceives its > experience as true and complete--IT LACKS THE CONTEXT to know > (experientially) otherwise. ?Of course the illusion is convincingly > seductive; it's the result of thousands of generations of selection > based on survival and reproduction. > > And making it harder for you, even though you're a kind of programmer > by trade, it's clear you're not comfortable with recursion. > > - Jef From avantguardian2020 at yahoo.com Wed Dec 30 02:21:27 2009 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Tue, 29 Dec 2009 18:21:27 -0800 (PST) Subject: [ExI] Some new angle about AI Message-ID: <945049.54403.qm@web65608.mail.ac4.yahoo.com> ----- Original Message ---- > From: Stathis Papaioannou > To: scerir ; ExI chat list > Sent: Tue, December 29, 2009 4:01:54 PM > Subject: Re: [ExI] Some new angle about AI > > We can find "uncomputability" ?in quantum physics (i.e. essential randomness, > > contextuality, etc.) but also in classical physics. So it is possible that the > > brain does in fact utilise, in a technical meaning, uncomputable physics. > > Speaking of elaboration of information by the brain (here supposed to be a > > complex quantum system), according to a speculation the mind/brain might > encode > > "propositions" in quantum states, and then might "measure" these quantum > states > > to test the "truth values" of the propositions. Random outcomes given by > > measurements associated with the propositions should mean that the "truth > > values" are uncertain, that is to say that the propositions are undecidable > > (within the context of that mind/brain, and its stored information). But it is > > just a speculation. > > We can compute probabilistic answers, often with high certainty, where > true randomness is evolved (eg. I predict that I won't quantum tunnel > to the other side of the Earth), or we can use pseudorandom number > generators. I don't think anyone has shown a situation where true > random can be distinguished from pseudorandom, but even if that should > be a stumbling block in simulating a brain, it would be possible to > bypass it by including a true random source, such as radioactive > decay, in the machine. > By uncomputable I was thinking not of randomness but of solving > undecidable problems, such as the halting problem. Quantum computers > can't do that. 
I appreciate that you accept that possibility that the brain may be uncomputable so I won't belabor that point. But randomness is not as cut and dry as you seem to think. The only measurable difference between randomness and predetermined chaos seems to be a priori knowledge of the system dynamics. Furthermore chaos theory, essentially an uncomputable?branch of classical physics as suggested by Serafino,?blurs the distinction?between randomness and undecidability by recursion. ? For example Conway's "Game of Life" is undecidable at high iterations despite using a very simple set of rules which?could be considered to be a canon of physical law in the greatly simplified microcosm of the game. In other words, based on an initial configuation of pixels the only way to determine whether the resultant?pattern will stabilize?as an infinite loop or?go "extinct"?is to actually run the program and see what happens. ? Similarly despite the fact that fluid turbulence is known to be governed by the Navier-Stokes equation, turbulent flow is similarly uncomputable.. This is the main reason why long-term weather prediction is not currently possible and why one cant predict the shapes of clouds or the dispersion patterns of fallen autumn leaves. These are all *deterministic*?systems yet the mind is so boggled by the recursive complexity and sensitivity to initial conditions thereof that it hides its ignorance under the security blanket of *randomness*.????? Stuart LaForge "Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten." - Neil Armstrong From natasha at natasha.cc Wed Dec 30 02:21:05 2009 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 29 Dec 2009 21:21:05 -0500 Subject: [ExI] Searle and AI In-Reply-To: <20091229210845.GL17686@leitl.org> References: <846517.62688.qm@web36502.mail.mud.yahoo.com> <1D42F91B-B9D9-4190-9381-7DE479A2DDCD@bellsouth.net> <20091229210845.GL17686@leitl.org> Message-ID: <20091229212105.v9k695uwolwsgogk@webmail.natasha.cc> Well said 'gene. Quoting Eugen Leitl : > On Tue, Dec 29, 2009 at 12:21:39AM -0500, John Clark wrote: > >> Am I to take it that you are conceding the debate? > > Debates assume progress in opponents' point of view. None is so > far apparent. > > Give up on that person already. He's not worth your time. To say it with > Heinlein: Never try to teach a pig to sing; it wastes your time and > it annoys the pig. > > It definitely annoys the innocent bystanders also. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From jonkc at bellsouth.net Wed Dec 30 05:17:56 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 30 Dec 2009 00:17:56 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <125388.6769.qm@web36507.mail.mud.yahoo.com> References: <125388.6769.qm@web36507.mail.mud.yahoo.com> Message-ID: <1E1CBA61-165F-4DF8-B2E8-87381C564295@bellsouth.net> On Dec 29, 2009, Gordon Swobe wrote: > > I see that we have two categories of things and objects 1) the real kind and 2) the computer simulated kind. I make a clear distinction between those two categories of things. Computer simulations of real things do not equal those real things they simulate, and some "simulate" nothing real in the first place. As I've said before a simulated flame is a perfectly real and objective phenomenon, but care must be taken not to confuse levels. 
A simulated flame won't burn your computer but it will burn a simulated object. A real flame won't burn the laws of chemistry but it will burn your finger. And the real world is seldom interested in categories, and making a clear distinction between them is even rarer. Putting things in categories is the sort of thing people do with their mind, it's the sort of thing people do when they try to simulate the real objective world and put it into their subjective universe. The mental simulation is not exact but one does the best one can. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Dec 30 06:13:00 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 17:13:00 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <125388.6769.qm@web36507.mail.mud.yahoo.com> References: <125388.6769.qm@web36507.mail.mud.yahoo.com> Message-ID: 2009/12/30 Gordon Swobe : > But seriously I see that we have two categories of things and objects 1) the real kind and 2) the computer simulated kind. > > I make a clear distinction between those two categories of things. Computer simulations of real things do not equal those real things they simulate, and some "simulate" nothing real in the first place. A simulated thunderstorm won't be wet except in its simulated world. A simulated mind, however, will be a mind everywhere. That is because (a) the mind is its own observer, and (b) it takes just as much intelligence to solve a simulated as a real problem. -- Stathis Papaioannou From scerir at libero.it Wed Dec 30 07:17:21 2009 From: scerir at libero.it (scerir) Date: Wed, 30 Dec 2009 08:17:21 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <31292399.76111262157441165.JavaMail.defaultUser@defaultHost> [Stuart] Well some hints are more obvious than others. ;-) http://www.hplusmagazine.com/articles/bio/spooky-world-quantum-biology http://www.ks.uiuc.edu/Research/quantum_biology/ http://cosmos.asu.edu/publications/papers/'Does%20quantum%20mechanics%20play% 20a%20non%20trivial%20role%20in%20life'%20BioSystems%20paper.pdf There's lots more info out there but you need a subscription or an amazon purchase. # There is something more, specifically about the mind/brain, like ... http://www.theswartzfoundation.org/papers/caltech/koch-hepp-07-final.pdf The relation between quantum mechanics and higher brain functions: Lessons from quantum computation and neurobiology -Christof Koch and Klaus Hepp www.klab.caltech.edu/news/koch-hepp-06.pd Quantum mechanics in the brain. Does the enormous computing power of neurons mean consciousness can be explained within a purely neurobiological framework, or is there scope for quantum computation in the brain? -Christof Koch and Klaus Hepp www-physics.lbl.gov/~stapp/koch-hepp.doc Quantum Mechanics in the Brain -Henry P. Stapp http://cogsci.uwaterloo.ca/Articles/quantum.pdf Is the Brain a Quantum Computer? -Abninder Litta, Chris Eliasmith, Frederick W. Kroona, Steven Weinstein, Paul Thagarda From scerir at libero.it Wed Dec 30 07:23:49 2009 From: scerir at libero.it (scerir) Date: Wed, 30 Dec 2009 08:23:49 +0100 (CET) Subject: [ExI] R: Re: Some new angle about AI Message-ID: <6514987.73091262157829703.JavaMail.defaultUser@defaultHost> >www.klab.caltech.edu/news/koch-hepp-06.pd >Quantum mechanics in the brain. 
Does the enormous computing power of neurons >mean consciousness can be explained within a purely neurobiological framework, >or is there scope for quantum computation in the brain? >-Christof Koch and Klaus Hepp should be www.klab.caltech.edu/news/koch-hepp-06.pdf Quantum mechanics in the brain. Does the enormous computing power of neurons mean consciousness can be explained within a purely neurobiological framework, or is there scope for quantum computation in the brain? -Christof Koch and Klaus Hepp From stefano.vaj at gmail.com Wed Dec 30 10:39:21 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 30 Dec 2009 11:39:21 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> References: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> Message-ID: <580930c20912300239x4daed5c5qa9df74102d94cc34@mail.gmail.com> 2009/12/29 scerir : > We can find "uncomputability" ?in quantum physics (i.e. essential randomness, > contextuality, etc.) but also in classical physics. So it is possible that the > brain does in fact utilise, in a technical meaning, uncomputable physics. Sure. But as long as it computes information, rather than, say, burning glucides or secreting something, the process itself would seem obviously, well, computable, this being exactly what the brain itself does. Or would it not? It remains of course theoretically possible that the emulation of such process might require, at least for practical purposes, a quantum computer (which we would know in such event to be certainly feasible). OTOH, such requirement would seem pretty unlikely, since the human brain, let alone the brain of an ant or a worm, does not seem to exhibit any of the features that are normally related to quantum, as opposed to mere "universal", computing. -- Stefano Vaj From stefano.vaj at gmail.com Wed Dec 30 10:45:01 2009 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 30 Dec 2009 11:45:01 +0100 Subject: [ExI] Some new angle about AI In-Reply-To: References: <29719886.66751262125124734.JavaMail.defaultUser@defaultHost> Message-ID: <580930c20912300245g365d02fcrdd09e7e5faacbc4@mail.gmail.com> 2009/12/30 Stathis Papaioannou : > By uncomputable I was thinking not of randomness but of solving > undecidable problems, such as the halting problem. Quantum computers > can't do that. But I never saw an organic brain actually solving undecidable problems... And, btw, as repeatedly suggested, there are very simple organic brains, which really do not appear to do anything special, and the emulation of which seems pretty accessible, albeit perhaps not very interesting, including at relatively low level. How would that be ever possible, if they profited *in their information processing features* from physics which could not be emulated on any universal computer? -- Stefano Vaj From stathisp at gmail.com Wed Dec 30 11:24:46 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 22:24:46 +1100 Subject: [ExI] Some new angle about AI In-Reply-To: <945049.54403.qm@web65608.mail.ac4.yahoo.com> References: <945049.54403.qm@web65608.mail.ac4.yahoo.com> Message-ID: 2009/12/30 The Avantguardian : > I appreciate that you accept that possibility that the brain may be uncomputable so I won't belabor that point. But randomness is not as cut and dry as you seem to think. The only measurable difference between randomness and predetermined chaos seems to be a priori knowledge of the system dynamics. 
Yes, that's what I meant by saying it isn't possible to tell true randomness apart from pseudorandomness. Incidentally, it is possible to generate true randomness on a computer by simulating something like the MWI of QM with a branching algorithm. The true randomness exists relative to an observer embedded in the program, while for an external observer (i.e. for the multiverse as a whole) the program is completely deterministic. > Furthermore chaos theory, essentially an uncomputable branch of classical physics as suggested by Serafino, blurs the distinction between randomness and undecidability by recursion. > > For example Conway's "Game of Life" is undecidable at high iterations despite using a very simple set of rules which could be considered to be a canon of physical law in the greatly simplified microcosm of the game. In other words, based on an initial configuation of pixels the only way to determine whether the resultant pattern will stabilize as an infinite loop or go "extinct" is to actually run the program and see what happens. If the brain is computable it does not necessarily mean there will be computational shortcuts in predicting human behaviour. You may just have to simulate the human and let the program run to see what happens. > Similarly despite the fact that fluid turbulence is known to be governed by the Navier-Stokes equation, turbulent flow is similarly uncomputable.. This is the main reason why long-term weather prediction is not currently possible and why one cant predict the shapes of clouds or the dispersion patterns of fallen autumn leaves. These are all *deterministic* systems yet the mind is so boggled by the recursive complexity and sensitivity to initial conditions thereof that it hides its ignorance under the security blanket of *randomness*. These chaotic systems *are* computable, strictly speaking, since given initial parameters and the laws of physics you can predict exactly what they will do. The problem is that the systems are so sensitive to initial parameters that no matter how accurately you measured these the model would diverge from the original after only a short time. However, the model would also diverge from the original if you attempted not a computer simulation but an atom for atom replica, so the problem is not with computability per se. -- Stathis Papaioannou From bbenzai at yahoo.com Wed Dec 30 11:22:36 2009 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 30 Dec 2009 03:22:36 -0800 (PST) Subject: [ExI] Zombie In-Reply-To: Message-ID: <155397.90774.qm@web113617.mail.gq1.yahoo.com> John, Stathis, Stefano, et. al: Your patience and tenacity is admirable. I think you may have discovered a zombie among us that's capable of arguing a point without any understanding of its meaning, and who will continue to meaninglessly argue in a repeating loop until entropy overcomes them. As this list is named for the opposite of entropy, I'd question the utility of prolonging this circular argument any further. It's starting to remind me of "Dracula: Dead and loving it" where Count Dracula and van Helsing are each determined to have the last word. At least it has shown that maybe philosophical zombies are possible after all. Or rather that people can have zombie thoughts: syntax without semantics (or is that Post-Modernism?). Who'd 'a thunk it? 
Ben Zaiboc From stathisp at gmail.com Wed Dec 30 12:10:43 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 30 Dec 2009 23:10:43 +1100 Subject: [ExI] Zombie In-Reply-To: <155397.90774.qm@web113617.mail.gq1.yahoo.com> References: <155397.90774.qm@web113617.mail.gq1.yahoo.com> Message-ID: 2009/12/30 Ben Zaiboc : > John, Stathis, Stefano, et. al: ?Your patience and tenacity is admirable. ?I think you may have discovered a zombie among us that's capable of arguing a point without any understanding of its meaning, and who will continue to meaninglessly argue in a repeating loop until entropy overcomes them. > > As this list is named for the opposite of entropy, I'd question the utility of prolonging this circular argument any further. ?It's starting to remind me of "Dracula: Dead and loving it" where Count Dracula and van Helsing are each determined to have the last word. > > At least it has shown that maybe philosophical zombies are possible after all. Or rather that people can have zombie thoughts: syntax without semantics (or is that Post-Modernism?). > > Who'd 'a thunk it? > > Ben Zaiboc I don't regret the debate and despite several onlist and offlist protests, I hope it was interesting for at least some list members. This subject would have to rank, after all, as one of the most important for anyone interested in transhumanism. However, I finally have to concede that you are right and no amount of reasoning will change Gordon's mind. -- Stathis Papaioannou From gts_2000 at yahoo.com Wed Dec 30 14:42:52 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 30 Dec 2009 06:42:52 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <1E1CBA61-165F-4DF8-B2E8-87381C564295@bellsouth.net> Message-ID: <799040.51329.qm@web36504.mail.mud.yahoo.com> --- On Wed, 12/30/09, John Clark wrote: > As I've said before a simulated flame is a perfectly real and objective > phenomenon, but care?must be taken not to confuse levels. A simulated > flame won't burn your?computer but it will burn a simulated object. > A real flame won't burn?the laws of chemistry but it will burn your > finger. Computer simulations of flames do not actually burn anything, even simulated things, except that some real person chooses to imagine so. Similarly, computer simulations of people do not have understanding of words except that some real person chooses to suspend his grip on reality and imagine so. I'll answer a few more posts tonight or tomorrow if I find time but I can see from the reactions that I get here that my views on this subject amount to something akin to religious blasphemy. -gts From natasha at natasha.cc Wed Dec 30 15:21:28 2009 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 30 Dec 2009 09:21:28 -0600 Subject: [ExI] BUS: Memory Venture Message-ID: <1438FCD922B447AB9451517E74F433A3@DFC68LF1> Friends, this past year (or so) I recall someone mentioning a new business or venture which is focused on developing memory portfolios for people to have for their loved ones after their death. Cryonicists are also interested in this type of venture for providing a historical account of one's life if and when reanimated. Does anyone know what this business/venture is and/or have any knowledge about what methods and technologies would be used to develop such a project? Many thanks and Happy New Year! Nlogo1.tif Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 731 bytes Desc: not available URL: From pharos at gmail.com Wed Dec 30 17:06:32 2009 From: pharos at gmail.com (BillK) Date: Wed, 30 Dec 2009 17:06:32 +0000 Subject: [ExI] BUS: Memory Venture In-Reply-To: <1438FCD922B447AB9451517E74F433A3@DFC68LF1> References: <1438FCD922B447AB9451517E74F433A3@DFC68LF1> Message-ID: On 12/30/09, Natasha Vita-More wrote: > Friends,? this past year (or so) I recall someone mentioning a new > business or venture which is focused on developing memory > portfolios for people to have for their loved ones after their death. > Cryonicists are also interested in this type of venture for providing > a historical account of one's life if and when reanimated. > > Does anyone know what this business/venture is and/or have any > knowledge about what methods and technologies would be used > to develop such a project? > > Hey, I thought somebody promised we were all going to live forever??? ;) The service you may be thinking of is called Legacy Locker. See: and Of course, once you get one site, 50 others follow soon after, so you would have to search the market. There are similar sites where you can leave emails to be sent after your death, (and you could include details there of your online presence, with account numbers, passwords, etc.). Nowadays, even when making an old-fashioned paper will it is a good idea to include a sheet with all your online bank accounts, mail lists, websites, with all the relevant passwords. That sheet has to be updated every year, of course, at your annual will review, where changes are considered. Facebook sites tend to become memorial sites after someone dies, but you should leave a record of your password so that cleanup jobs can be done. I think Second Life is also developing something where you can leave a memorial, with recordings and play-back scenarios. There will probably be a lot of development in this area soon. (I noticed a comedy sketch where a teen died and sent text messages back via an Ouija board full of GR8 and LOL and ROTFL). BillK From scerir at libero.it Wed Dec 30 20:04:11 2009 From: scerir at libero.it (scerir) Date: Wed, 30 Dec 2009 21:04:11 +0100 (CET) Subject: [ExI] Some new angle about AI Message-ID: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost> [Stathis] We can compute probabilistic answers, often with high certainty, where true randomness is evolved (eg. I predict that I won't quantum tunnel to the other side of the Earth), or we can use pseudorandom number generators. I don't think anyone has shown a situation where true random can be distinguished from pseudorandom, but even if that should be a stumbling block in simulating a brain, it would be possible to bypass it by including a true random source, such as radioactive decay, in the machine. # To my knowledge there are: -pseudo-randomness, which is computable and deterministic; specific softwares are the sources. -quantum randomness, which is uncomputable (not by definition, but because of theorems; no Turing machine can enumerate an infinity of correct bits of the sequence produced by a quantum device); there are several sources (radioactive decays; arrival times; beam splitters; metastable states decay; etc.) -algorithmic randomness, which is uncomputable (I would say by definition). To my knowledge nobody knows if quantum randomness is also algorithmic randomness. 
But there are problems here, because there is more than a single quantum randomness, depending on the specific quantum source used to create the string. So, it is possible there is a class of quantum randomness, and a class of algorithmic randomness. It is possible to say that whether quantum randomness satisfies the requirements of algorithmic randomness is uncertain, at least. Comparing (using Borel normality test or other tests) finite (but very long) strings produced by pseudo-randomness sources and finite (but very long) strings produced by quantum devices, it seems there are no differences FAPP. In theory there should be differences in case of infinite strings. From mlatorra at gmail.com Wed Dec 30 20:09:11 2009 From: mlatorra at gmail.com (Michael LaTorra) Date: Wed, 30 Dec 2009 13:09:11 -0700 Subject: [ExI] D Wave back from the grave? In-Reply-To: <4B3A70EC.1080301@satx.rr.com> References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> <379052.14440.qm@web81607.mail.mud.yahoo.com> <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> <65DFA3F9-0053-4003-93E4-570EF3F87F6B@bellsouth.net> <4B3A70EC.1080301@satx.rr.com> Message-ID: <9ff585550912301209r5c4feaedge2b9c20c3a323d17@mail.gmail.com> Hey, for anyone here who has failed to notice, Damien has been getting a lot of his fiction published in ASIMOV'S science fiction magazine in recent months. In my opinion, publishing his work has been the best move that editor Sheila Williams has made since she took over from Gardner Dozois. Regards, Mike LaTorra On Tue, Dec 29, 2009 at 2:13 PM, Damien Broderick wrote: > On 12/29/2009 2:22 PM, John Clark wrote: > > I was very skeptical of D-waves claim to have made a working Quantum >> Computer, it's still probably Bullshit but I'm no longer quite as certain. >> > > Hey, they feature in my ASIMOV'S magazine sf story "The Qualia Engine" so > it must be true. > > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Dec 30 20:24:59 2009 From: jonkc at bellsouth.net (John Clark) Date: Wed, 30 Dec 2009 15:24:59 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <799040.51329.qm@web36504.mail.mud.yahoo.com> References: <799040.51329.qm@web36504.mail.mud.yahoo.com> Message-ID: <8F016B54-FB1A-4748-ACCE-0A168F9C85AA@bellsouth.net> On Dec 30, 2009, Gordon Swobe wrote: > Computer simulations of flames do not actually burn anything, even simulated things, Nonsense, if simulated objects didn't effect other simulated objects there would be no point in making computer simulations at all. > computer simulations of people do not have understanding of words So you say but you can offer no reason why we should think you are correct and can't even give a coherent reason why you don't use the same procedure for computers that you use on your fellow human beings to determine if they understand something or not. > I can see from the reactions that I get here that my views on this subject amount to something akin to religious blasphemy. Well, somebody around here is certainly asking us to believe something on faith. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gts_2000 at yahoo.com Wed Dec 30 23:48:23 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 30 Dec 2009 15:48:23 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <8F016B54-FB1A-4748-ACCE-0A168F9C85AA@bellsouth.net> Message-ID: <611033.725.qm@web36503.mail.mud.yahoo.com> --- On Wed, 12/30/09, John Clark wrote: >> Computer simulations of flames do not actually burn anything, even >> simulated things, > Nonsense, if simulated objects didn't effect other simulated objects Simulated flames don't *burn* simulated objects, John, except in a child's overly-vivid imagination. Perhaps that child hasn't learned the difference between the worlds depicted in his video games and actual world in which he lives. Perhaps he'll understand when he grows up. Let's hope. -gts From gts_2000 at yahoo.com Thu Dec 31 00:38:11 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 30 Dec 2009 16:38:11 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <754450.44588.qm@web36502.mail.mud.yahoo.com> --- On Tue, 12/29/09, Stathis Papaioannou wrote: Sorry I fell behind in my postings to you. > You're inclined to say they would behave in a slightly > different way? You may as well say, God will intervene because he's > so offended by the idea that computers can think. Perhaps different from how they might have behaved otherwise, yes, but not unnaturally. Perhaps the person turned left when he might otherwise have turned right. Doesn't prove anything for either of us. >> Contrary to the rumor going around, reality really > does exist. :) > > Up until this point it seemed there was a chance you might > follow the argument to wherever it rationally led you. Up until what point? My assertion of reality as distinct from simulations of it? Do you agree with John's statement (on Tuesday, I think) that simulations of flames can burn simulated objects? -gts From gts_2000 at yahoo.com Thu Dec 31 01:02:37 2009 From: gts_2000 at yahoo.com (Gordon Swobe) Date: Wed, 30 Dec 2009 17:02:37 -0800 (PST) Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: Message-ID: <795898.54775.qm@web36502.mail.mud.yahoo.com> --- On Wed, 12/30/09, Stathis Papaioannou wrote: >> I make a clear distinction between those two > categories of things. Computer simulations of real things do > not equal those real things they simulate, and some > "simulate" nothing real in the first place. > > A simulated thunderstorm won't be wet except in its > simulated world. Really? Seems to me that not even wetness exists in that supposed world. Your supposed world exists only in some person's imagination, helped along by some well-written code resident in some computer's RAM that helps to make that supposed world easy to imagine. Not only does no wetness exist there, there's no there there. And simulated minds can't observe themselves any more than can the simulated goldfish that appear to swim on some screen-savers. -gts From stathisp at gmail.com Thu Dec 31 02:07:42 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 31 Dec 2009 13:07:42 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <754450.44588.qm@web36502.mail.mud.yahoo.com> References: <754450.44588.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/31 Gordon Swobe : > --- On Tue, 12/29/09, Stathis Papaioannou wrote: > > Sorry I fell behind in my postings to you. > >> You're inclined to say they would behave in a slightly >> different way? 
You may as well say, God will intervene because he's >> so offended by the idea that computers can think. > > Perhaps different from how they might have behaved otherwise, yes, but not unnaturally. Perhaps the person turned left when he might otherwise have turned right. Doesn't prove anything for either of us. > >>> Contrary to the rumor going around, reality really >> does exist. :) >> >> Up until this point it seemed there was a chance you might >> follow the argument to wherever it rationally led you. > > Up until what point? My assertion of reality as distinct from simulations of it? No, I was referring to your assertion that the brain would behave differently with the artificial neurons in place, with no reason given for it. The whole point of the argument was to show that the idea that neuronal function can be separated from consciousness leads to absurdity, but you could have saved time by explaining at the start that any argument which shows such a thing must be wrong even if you can't point out how, because your position on this can't possibly be wrong. -- Stathis Papaioannou From rtomek at ceti.pl Thu Dec 31 04:38:34 2009 From: rtomek at ceti.pl (Tomasz Rola) Date: Thu, 31 Dec 2009 05:38:34 +0100 (CET) Subject: [ExI] I accuse intellectuals... or else (was Re: Why is there Anti-Intellectualism? (re: Atheism as somehow different than other religions)) In-Reply-To: <4B369F6D.8010701@satx.rr.com> References: <4B369F6D.8010701@satx.rr.com> Message-ID: Hello and Happy New Year everybody, First, thanks for answers. It have taken me some time to munch them. I should also write, maybe too late a little, but I think I was expecting/looking for something else. Actually, I wanted to make a list of intellectual Ghenghis Khans. My reason was this: since there really seems to be a resentment towards intellectuals and no resentment should be without a cause, so what exactly is that cause? Who lost what because of what? So far, not many names found. Among those found there are dubious cases (I was not able to find enough against them). By civilised standards, maybe two (maybe four) could be added to my list, making the total of maybe two, maybe four. Those in fact need more research for which I have no time at the moment, so it is possible I will erase them later. My second thought is this. A very rough (and taken from the ceiling) estimate of a number of intellectuals is, I dare to say, some 0.1% of population living in cities above 100,000. It is probably smaller than this in less populated areas. Anyway, this gives some 100,000-200,000 people or so for a US-like country. Now, the list of names is really short, even if I could accept all of them. Does this make anti-intellectualism similar to, say, racial prejudices? My guess is yes, it does. Damien, Rafal and Mirco - your answers are below. On Sat, 26 Dec 2009, Damien Broderick wrote: > On 12/26/2009 5:02 PM, Tomasz Rola wrote: > > > After many decades of intellectualism, the common people have developed > > > > (evolved?) a healthy distrust of the intellectuals, mainly of the > > > > social > > > > intellectuals. In the future this distrust will probably grow stronger, > > > > as we > > > > will reap the fruits of many silly ideas implemented as policies. > > > > Uhum... You know, I am not challenging you, but if you (or anybody) could > > give me a list of intellectualists (names, specialty area, you got the > > idea), whom I am to blame... And if you don't mind, for what exactly? 
> > > > The longer such a list, the more interesting from my point of view. Give > > me those names, I beg you. > > In US: the neocons are intellectuals, of a sort. They happily provided the > Iraq war. Dr. Leon Kass is clearly an intellectual, and he helped to ban > embryonic stem cell research. The doctrines of both these geniuses, on the > other hand, seem to have been welcomed by a large proportion of the common > people. On a more highbrow level, if Heidegger wasn't an intellectual, nobody > is. He would be despised by the common people as someone who wrote > incomprehensible horseshit for a living, but for all that he was a great > supporter of the Nazi Volk so who knows how their muddled minds would have > responded. Martin Heidegger, a philosopher. His guilt: giving active support for Nazism. I can accept this case into my list, but it seems to me, he was "simply" charmed by Nazism just as milions of people around the world and this had not much to do with him being an intellectual. He could have as well be electrician or physician. It does not make him innocent but unless he invented gas chambers I would have to let him go. Leon Kass - a physician and bioethician, phd in biochemistry. His case is much different. I have read about him in wikipedia and from what I have read I could support at least some of his decisions or opinions. For example, I am not a big euthanasia supporter (I understand and accept the fact that "there are cases" but I am strongly against using such cases as an excuse for generalisation - which is, coincidentally, so good from economic point of view, just cut them off and save loads of money). I don't consider myself neither religious nor ethical (I guess explaining is beyond the scope of this thread and list). So my reasons may be different from his, yet I cannot find him guilty. Also, I was feeling a bit uncomfortably about utilising embryos as a source of stem cells. I don't care much about their eventual sancticity but I want to minimize my katzenjammer. But look what happened - there is another way, we can reverse adult cells back into their stem state. And I am very happy now. So much for Leon Kass - in my eyes, he does not qualify and he can go. Yes, even if I could speculate about his postponing some research. Frankly, I don't think we should do whatever we want. But we should think about what we do want or will want and what this really means in the longterm. A lot of people seem to think this is some kind of races, that we should do whatever we need and by whatever means that happen to be here. Maybe it is so indeed. But unfortunately we don't know rules of this race. Maybe it's not the fastest horse that wins? Anyway, the first thing they taught me during my driving course was "when in doubt, slow down". Anonymous neocons - I grok the term is for "neoconservative". Being from Europe, it doesn't ring any bell besides being, perhaps, an oxymoron. You sound unsure about their intelligence level. Do they have any kind of web-based publication list, so I can try to assess it by myself? > There seems some confusion in this thread between non- or unintellectual and > anti-intellectual. Most humans are not intellectuals, but they don't > necessarily condemn those who are, although some of the more thuggish might > well shove their "pointy heads" down the toilet bowl pour le sport. Oh, thugs are so sweet. They dream of things that possibly turn against them in the long run. 
Making everybody "normal" and without individual opinions (beyond the "normal" boundaries). Or making slaves of eggheads and counting on them magically replenish just like any other cattle. Either thugs will pay or their children. Just a matter of time. Pity, however, that they take so many bystanders down the tarpit. On Sun, 27 Dec 2009, Rafal Smigrodzki wrote: > ### Maynard Keynes, John Kenneth Galbraith, Karl Marx, Friedrich > Engels, Noam Chomsky, almost any random sociologist since Emile > Durkheim, Upton Sinclair, Paul Krugman, Joseph Lincoln Steffens, > Albert Einstein, Jeremy Rifkin - collectively contributing to the > enactment of a staggering number of stupid policies, starting with > meat packing regulations and genetic engineering limits all the way to > affirmative action, social security, and the Fed. > > Rafal Some of those names are more or less familiar, some not (Keynes, Galbraith, Durkheim, Steffens and Rifkin - I will read about them one day, I hope). Could you be more specific? Because when I know something about the name I still cannot imagine what kind of wrong is connected with them? Anyway, I can try and make an effort. Karl Marx - philosopher, political economist, historian etc etc. Guilt: founder of communism, I guess. Friedrich Engels - social scientist, political theorist, philosopher. Guilt: founder of communism, right? I have heard about them from time to time. However, as much as they were mentioned sometimes (on the 1st May for example), I cannot recall anybody giving me explanation why exactly they were so cool. I think I can temporarily include them into my list, however before I leave them there forever I need to learn what exactly were their postulates (find this "Capital" behemoth and have a look). I have no time to do this right now. Upton Sinclair - a journalist, writer, Pulitzer winner. I guess he can be named as an inspirator of creating the Food and Drug Administration. Is it his guilt? Or maybe his socialist bent? I have just read throu first three chapters of "The Jungle". http://en.wikisource.org/wiki/The_Jungle http://en.wikipedia.org/wiki/Meat_packing_industry http://en.wikipedia.org/wiki/The_Jungle What I have found there was consistent with what I met in other authors' writings on similar topics. Exposing such things is very much welcomed. Personally I think it were guys like Sinclair (and those who did not write books, they too), delivering constant kicking in the young capitalism' lower back, so that it had to get up and move on to something better (which is supposed to be a today's state of things). Judging what I have learnt about those times, nobody would like to live in them unless born into a rich and lucky family. I doubt human nature changed much - I have heard some nasty stories about young Polish capitalism as it was creating itself (some of them call for new Upton Sinclair, oh my) and I believe those stories could have been worser but nowadays standards are better a bit. Paul Krugman - economist, journalist. Professor at Princeton Uni and hmmm... a Nobel Prize laureate in Economy. I think I have read some of his articles available somewhere on the net. Noam Chomsky - linguist, philosopher, cognitive scientist, political activist. Professor emeritus at MIT. I have read some of his non-scientific works. Krugman & Chomsky - I have always thought those two were rather harmless? What did they do? Albert Einstein - a physicist, Nobel in Physics. Common, man. What have Albert done wrong? Does it have something to do with the A-bomb? 
What exactly? Overally, thanks to your mail I have finally started reading "The Jungle". Besides this, I need some more explanations from you. What those guys did so badly that I should put them on my list? On Sun, 27 Dec 2009, Mirco Romanato wrote: > Il 27/12/2009 0.02, Tomasz Rola ha scritto: > > > > give me a list of intellectualists (names, specialty area, you got the > > idea), whom I am to blame... And if you don't mind, for what exactly? > > For example, Greenpeace is against the use of chlorine to disinfect water. > Could we consider the policy-maker of Greenpeace some intellectuals? > > Why I Left Greenpeace > http://www.waterandhealth.org/drinkingwater/greenpeace.html > The return of Killer Chlorine > http://www.theregister.co.uk/2008/07/24/numberwatch_chlorine/ Not sure of this... Did they propose anything in exchange of chlorine? Did they conduct studies all around the world checking substance levels etc? Moneywise, I've read making water by chlorine for one person for a year costs only $0.50. How are the alternatives doing? While Greepeace sometimes does not look all that bad, overally the eco movement makes me feel uneasy, like a histeric mob yelling their truths into my face. This noise only makes me go around them... in a big circle. Besides, aggressive attitude does not promise much intellect inside the circle, and since I use to connect intellectual with reasonable mind effort and seeking truth... If they have this kind of guys I would rather suspect they are kept in a cellar and have not much to say other than from time to time invent some catchy idea. Sure, I may be wrong. If only one could prove the link between some intellectuals inside Greenpeace and cholera outbreak, the guys go on my list. But they have to actually exist (and have names). On the other hand, cholera spread from Peru to other countries in which there had been no changes in chlorination policy, I guess? After reading this page, I have some doubts that GP is the only culprit. Me thinks, holes in public funds did a lot to cholera spreading. If there was no chlorination at the time, it could have been because of money shortage and not necesarrily because of GP's bad advice. http://www.colorado.edu/geography/gcraft/warmup/cholera/cholera_f.html > Dewey? > Supporters of Whole Word reading versus Phonics > http://www.improve-education.org/id58.html John Dewey - philosopher, psychologist, educational reformer. PhD from Johns Hopkins University. A number of honoris causa doctorates from around the world. The idea of "whole-word" reading does not sound cool. I'm thankful they taught me by phonics. However, in wikipedia articles on Dewey, whole-word and phonics I could not spot anything about him screwing pupils' minds so badly. I have read these articles (as well as your link and the "phooey on Dewey") and they suggest (indirectly) that whole-word is still used and picture Dewey as some kind of malevolent socialisto-communistic propagator of uniformity: http://www.americanchronicle.com/articles/view/20392 http://www.improve-education.org/id42.html The article on phonics claims this is actually the approach being used nowadays: http://en.wikipedia.org/wiki/Phonics And his biographical note says he was an anti-stalinist and defended freedom and democratic ideals: http://en.wikipedia.org/wiki/John_Dewey The only opposer to phonics worth calling him by name was http://en.wikipedia.org/wiki/Horace_Mann I guess, if Dewey was in the same camp, they would have issued a word or two? 
So, IMHO John Dewey does not qualify, due to the lack of hard evidence... > The opposition to OGM by so many it is too long to list them (left, center and > right) What is OGM - if not something connected to ogg (music codec and container)? > The prohibitionist of drugs? Well? > The laws, in California for example, that force ex-sexual parters males named > by a woman as father of her child to pay even if they are not the real father. > No-fault divorce with alimony [for the woman]? > > http://laws.justsickshit.com/california-stupid-laws/ Sounds scary :-). So, I may visit California one day but then my d!ck will stay in Texas, armed with a colt. But, seriously, this is _The_Fact_: women are amongst the most successful predators on our planet. The men can be divided into those, who acknowledge _The_Fact_ and those who become a "husbie" before they say "o mamma mia" (they can say anything else, knowledge of Italian is not a required qualification). So, a law or not a law, they are dangerous beasts. And beautiful, sometimes. Case dismissed... :-) > > Otherwise, it is encouraging my suspicion, that this "blame the stupid > > intellectuals/scientists/hackers/all-the-wiser-folk" story is just pure > > manipulation by someone who is trying to cover his/their trails. > > It is evident you don't read the page I linked. Mmm, no I did not. I have skimmed through it and decided it had not much to do with my questions... I know, I posted them as responce to your mail. You have simply fallen victim to my chronic inability to follow h2h communication standards. After reading it, however, it is now obvious this article is yet another voice that should have interested me. So, let's try to mend the damages :-). The first half is description of interesting phenomenon. I have nothing to oppose here, however what could be discussed is whether labelling the high-IQ people is correct. For example, I'd rather seek for something interesting even if not quite new. It can be really old if only I can have some reflection upon it. Besides, I've never thought about myself in terms of "leftism", "rightism" or anything like this. I simply try to gather facts and learn from them. Cannot recall of political party describing itself as "rational" in the first place (now I would be wary of it as it could be packed with clever sillies). Yes, I've made an unspoken assumption that I am one of high-IQ herd. I don't have any problem with this. ;-) The second half concentrates on uncorrectness of political correctness. Myself I am not a fan of this idea (i.e. PC). So again, not much to oppose. But I would like to know, who introduced this concept into media and our lifes. You know, a success that has no fathers, no mothers? Very surprising. And if this is not a success, than why we are being forced into using it? My only ideal about correctness is that the truth is always correct. Sometimes it needs to be told in a way that does not hurt other people. But that's all. Those who were unable to deal with the truth are gone. Expect same in the future. As of adding politics to anything, this is the way to blow it up. > They are not stupid. They are smart. Very intelligent. > But they substitute the use of an expert system evolved and selected in > mammals to manage emotions and social relations with general intelligence that > is not able to manage and elaborate so much informations. > If you use a hammer instead of a screwdriver it is not strange that the screws > don't work as intended. 
Me is interested in how their brains worked so they only used the hammer. Maybe this can be cured by retraining their neural nets? Just a little, of course. Only brain bath, no brainwashing. > > Anybody? You can't imagine how happy I will be. Really. You can (kind of) > > make my day brighter. > > Do you ever asked yourself why people in leading positions in academia and > politics (and sometimes also in industry and commerce) is so often plagued > with silly ideas? > Can they really be all so stupid? Frankly, I think it depends whom are we talking about. In case of politicians, I guess they know quite well what they are doing (well, they should or else we should be totally screwed by now). If they play stupid afterward it's like in those stories - husband comes back, finds wife and the other guy in bed, then the other guy starts crying and asking to let him go. What a heartbreaker. In academia - it's probably a different story. Trying to find grants, five minute long attention of the crowd, inside games... Complicated issue. Also, I have observed that silly ideas sometimes come from people who know very well there is no chance to realise them. And still sometimes, wrong input = wrong output. Yea, they make mistakes on occasion. In commerce, promoters of silly ideas self-eliminate. That's it. All seems to be a game-theoretical real life challenge. Regards, Tomasz Rola -- ** A C programmer asked whether computer had Buddha's nature. ** ** As the answer, master did "rm -rif" on the programmer's home ** ** directory. And then the C programmer became enlightened... ** ** ** ** Tomasz Rola mailto:tomasz_rola at bigfoot.com ** From jonkc at bellsouth.net Thu Dec 31 05:13:38 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 31 Dec 2009 00:13:38 -0500 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <611033.725.qm@web36503.mail.mud.yahoo.com> References: <611033.725.qm@web36503.mail.mud.yahoo.com> Message-ID: On Dec 30, 2009, Gordon Swobe wrote: > Simulated flames don't *burn* simulated objects, John, I assume you believe the words "simulated flames" has a meaning otherwise you wouldn't be talking about them, so I would think such a thing growing would also be understandable to you, but apparently not. I know what I mean by the term but I can't imagine what you do. And if simulated objects couldn't effect each other why in the world would scientists spend so much time making such computer programs? And real flames don't *burn* either, they just obey the laws of chemistry. > simulated minds can't observe themselves So you have decreed many times. You ask us to ignore a century and a half of hard evidence on how Evolution works simply on your authority, and then you accuse us of being slaves to religious doctrine. There is plenty of evidence that Charles Darwin was right and there is none that you are. You both can't be right. > any more than can the simulated goldfish that appear to swim on some screen-savers. That's probably because most screen-savers I've seen don't have minds. I refuse to use the term "simulated minds" because it is as nonsensical as "simulated arithmetic". John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Dec 31 06:03:40 2009 From: jonkc at bellsouth.net (John Clark) Date: Thu, 31 Dec 2009 01:03:40 -0500 Subject: [ExI] I accuse intellectuals... 
or else In-Reply-To: References: <4B369F6D.8010701@satx.rr.com> Message-ID: On Dec 30, 2009, Tomasz Rola wrote: > I wanted to make a list of intellectual Ghenghis Khans. I don't say all of these were as evil as Genghis Khan, that's asking for rather a lot, but off the top of my head here are some intellectuals that the world would probably have been better off if they'd never been born: Paul of Tarsus Augustine of Hippo Martin Luther Jean-Paul Marat Vladimir Lenin Philipp Lenard John Searle Oops, forget that last one, I was thinking of a different thread. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Dec 31 05:48:27 2009 From: spike66 at att.net (spike) Date: Wed, 30 Dec 2009 21:48:27 -0800 Subject: [ExI] Carbon In-Reply-To: <20091228172831.GI17686@leitl.org> References: <20091228111834.GY17686@leitl.org> <20091228172831.GI17686@leitl.org> Message-ID: <81A857D01B6347909402B5F36ECCEF0A@spike> I had an idea today regarding sequestering of carbon. We create an enormous hole in the desert, perhaps a km deep and 100 meters in diameter with a cover, build a pipe from the large population centers, and fill the hole with solid human waste. At our current sewage processing plants, bacteria break down this substance forming carbon dioxide, but note that the volume of a container increases as the cube of the linear dimension whereas the heat-losing surface area increases as the square. So given a large enough hole, the heat from the metabolism of the micro-organisms that break down the solids increases the temperature sufficiently to actually slay the bacteria and stop the decay process itself. So we could imagine a hole large enough to keep the solid waste sufficiently hot to prevent its breakdown, thus maintaining it in its mostly carbon form while boiling away the water, leaving the hot shit buried safely below ground. As a side benefit, those who spend time pondering the greenest way to dispose of themselves after they perish could arrange to have their remains hurled into the shit pit. spike Well what am I supposed to call it? Open to suggestion. From stathisp at gmail.com Thu Dec 31 08:13:23 2009 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 31 Dec 2009 19:13:23 +1100 Subject: [ExI] The symbol grounding problem in strong AI In-Reply-To: <795898.54775.qm@web36502.mail.mud.yahoo.com> References: <795898.54775.qm@web36502.mail.mud.yahoo.com> Message-ID: 2009/12/31 Gordon Swobe : > --- On Wed, 12/30/09, Stathis Papaioannou wrote: > >>> I make a clear distinction between those two >> categories of things. Computer simulations of real things do >> not equal those real things they simulate, and some >> "simulate" nothing real in the first place. >> >> A simulated thunderstorm won't be wet except in its >> simulated world. > > Really? Seems to me that not even wetness exists in that supposed world. Your supposed world exists only in some person's imagination, helped along by some well-written code resident in some computer's RAM that helps to make that supposed world easy to imagine. Not only does no wetness exist there, there's no there there. > > And simulated minds can't observe themselves any more than can the simulated goldfish that appear to swim on some screen-savers. If a mind is like any other thing at all it is like an abstract mathematical entity, not like a physical object. 
The brain or computer is the physical object instantiating the mind, like a sphere made of stone is a physical instantiation of an abstract sphere. You can destroy a physical sphere but you can't destroy the abstract sphere, since you can instantiate it again multiple times on multiple substrates, provided that you know how to do so. Similarly, you can destroy a brain but you can instantiate the mind it represents again multiple times on multiple substrates, provided that you retain the information enabling you to do so. So against your intuition that mere abstraction cannot give rise to mind I put my intuition that abstraction is necessary for mind, and the only purpose of the real object is to instantiate the abstraction. Not a rigorous argument, but you've more or less admitted that you trust your intuitions more than you trust rigorous argument.

-- Stathis Papaioannou

From pharos at gmail.com  Thu Dec 31 16:43:48 2009
From: pharos at gmail.com (BillK)
Date: Thu, 31 Dec 2009 16:43:48 +0000
Subject: [ExI] D Wave back from the grave?
In-Reply-To: <9ff585550912301209r5c4feaedge2b9c20c3a323d17@mail.gmail.com>
References: <802158.41458.qm@web59916.mail.ac4.yahoo.com> <4B38F9F5.9050601@libero.it> <379052.14440.qm@web81607.mail.mud.yahoo.com> <580930c20912291018v1738a493ta1bb77315ef3c9d5@mail.gmail.com> <65DFA3F9-0053-4003-93E4-570EF3F87F6B@bellsouth.net> <4B3A70EC.1080301@satx.rr.com> <9ff585550912301209r5c4feaedge2b9c20c3a323d17@mail.gmail.com>
Message-ID:

On 12/30/09, Michael LaTorra wrote:
> Hey, for anyone here who has failed to notice, Damien has been getting a lot
> of his fiction published in ASIMOV'S science fiction magazine in recent
> months.
>
> In my opinion, publishing his work has been the best move that editor Sheila
> Williams has made since she took over from Gardner Dozois.
>

You mean..........

Current Issue Feb 2010
Dead Air by Damien Broderick
A Phil-Dickian meditation of claustrophobic urban sprawl and the recently deceased visiting from beyond the grave and through your television screen.

October/November 2009
Damien Broderick channels the inventive spirit of the classic SF works of Roger Zelazny in his tale "Flowers of Asphodel"

August 2009
Damien Broderick, swiftly becoming one of the most prolific and dynamic writers in Asimov's (with more to come), contributes a fine novelette, "The Qualia Engine," in which a group of terribly intelligent Children of Wonder must not only advance the state of their van Vogtian super-science, but also deal with the more complex problem of puberty;

June 2009
This Wind Blowing, and This Tide
Damien Broderick's fine "This Wind Blowing, and This Tide" explores a mysterious alien shipwreck on Saturn's moon Titan with a decidedly unusual science team!

January 2009
Uncle Bones by Damien Broderick
a funny, and somewhat disconcerting, tale of a young man's troubled relationship with his uncle. The trouble is that his uncle's, well, dead, though not exactly . . . and that's only part of the problem!

From jonkc at bellsouth.net  Thu Dec 31 17:17:29 2009
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 31 Dec 2009 12:17:29 -0500
Subject: [ExI] Some new angle about AI.
In-Reply-To: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost>
References: <5240992.130651262203451684.JavaMail.defaultUser@defaultHost>
Message-ID:

On Dec 30, 2009, at 3:04 PM, scerir wrote:

> no Turing machine can enumerate an infinity of correct bits of the
> sequence produced by a quantum device

It's worse than that, there are numbers (almost all numbers in fact) that a Turing machine can't even come arbitrarily close to evaluating. A Quantum Computer probably couldn't do that either but it hasn't been proven.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scerir at libero.it  Thu Dec 31 17:43:09 2009
From: scerir at libero.it (scerir)
Date: Thu, 31 Dec 2009 18:43:09 +0100 (CET)
Subject: [ExI] Some new angle about AI
Message-ID: <1818292.177611262281389204.JavaMail.defaultUser@defaultHost>

> no Turing machine can enumerate an infinity of correct bits of the sequence produced by a quantum device.

It's worse than that, there are numbers (almost all numbers in fact) that a Turing machine can't even come arbitrarily close to evaluating. A Quantum Computer probably couldn't do that either but it hasn't been proven. John K Clark

It is difficult stuff for me. Difficult. But there is some literature. For example:

Quantum randomness and value indefiniteness
http://arxiv.org/abs/quant-ph/0611029v2
-Cristian S. Calude, Karl Svozil
Abstract: As computability implies value definiteness, certain sequences of quantum outcomes cannot be computable.

How Random Is Quantum Randomness? An Experimental Approach
http://arxiv.org/abs/0912.4379v1
-Cristian S. Calude, Michael J. Dinneen, Monica Dumitrescu, Karl Svozil
Our aim is to experimentally study the possibility of distinguishing between quantum sources of randomness--recently proved to be theoretically incomputable--and some well-known computable sources of pseudo-randomness. Incomputability is a necessary, but not sufficient "symptom" of "true randomness". We base our experimental approach on algorithmic information theory which provides characterizations of algorithmic random sequences in terms of the degrees of incompressibility of their finite prefixes. Algorithmic random sequences are incomputable, but the converse implication is false. We have performed tests of randomness on pseudo-random strings (finite sequences) of length 2^{32} generated with software (Mathematica, Maple), which are cyclic (so, strongly computable), the bits of pi, which is computable, but not cyclic, and strings produced by quantum measurements (with the commercial device Quantis and by the Vienna IQOQI group). Our empirical tests indicate quantitative differences, some statistically significant, between computable and incomputable sources of "randomness".

Happy new year!
s.

John von Neumann: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." From J. von Neumann, "Various Techniques Used in Connection With Random Digits," National Bureau of Standards Applied Math Series 12, 36-38 (1951), reprinted in John von Neumann, Collected Works, (Vol. V), A. H. Traub, editor, MacMillan, New York, 1963, p. 768-770.

From spike66 at att.net  Thu Dec 31 18:44:09 2009
From: spike66 at att.net (spike)
Date: Thu, 31 Dec 2009 10:44:09 -0800
Subject: [ExI] there's another one! from symphony of science
Message-ID: <7A63333EFD2C443BB41860E3DE74ED0A@spike>

Remember that Sagan video synthesis that went around a few weeks ago? There is another one!
It's from Symphony of Science, and it is even better than the first. It has not only Sagan, but Robert Jastrow, Michio Kaku and Richard Dawkins.

Oh my this is wiiicked cool, and gives me hope as we go into a new year. Check it out:

http://www.youtube.com/watch?v=vioZf4TjoUI&feature=player_embedded

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike66 at att.net  Thu Dec 31 19:58:22 2009
From: spike66 at att.net (spike)
Date: Thu, 31 Dec 2009 11:58:22 -0800
Subject: [ExI] Carbon
Message-ID: <328DDA8B20DA4DBA9B96B27D38844376@spike>

...Spike, your current outside-the-box thinking remains worthy of you...

You are too kind sir.

...In fact, at the time I knew several ladies that I would have gladly ushered off onto a very long space voyage...

Yes and I knew several ladies who would have gladly ushered us off onto a long one-way space voyage. My notion regarding small female astronauts was primarily motivated by the observation that the weight of a pressure vessel scales as the cube of the linear dimension. This is a classic weights engineer notion: if we can find a three-foot tall astronaut to replace the six footer, I see no reason why nearly everything would not scale down to half scale, which means a pressure vessel one-eighth the mass. As you pointed out at the time, they *almost did* choose the Mercury astronauts based on their size: Ike decided that fighter pilots would be the right choice. These were almost all short light guys, for they needed to fit into the tight confines of the jet cockpits of the day, limiting their height to 5'9", which clearly would have excluded both of us, and most of the men we know.

Not everything in a space mission scales as the cube of the linear dimension of course: the pressure vessel does, but the vessel's surface area only scales as the square of the linear dimension, and the computers and communications gear do not scale with the size of the human cargo at all. An SAWE paper I always wanted to write and never did is to ask the question: how does the mass of an interplanetary mission scale with the human aboard? As near as I can estimate, that exponent is somewhere in the 2.2 to 2.4 range. But I digress.

...I can think of numerous workable variations to your current idea... Your idea - in effect, a process that uses a large, closed, pressurized container - would result in a particular family of products...

Yes but of course when you say large closed pressurized container, I had in mind maintaining atmospheric pressure inside the vessel. I would not wish to risk forcing the material into fissures which could find its way into a local unmapped aquifer. That being said, your memetic contribution points out that there are useable vapor phase products that would likely be formed; the metabolic processes in the micro-organisms are not completely and immediately stopped by heat in the newly introduced material near the top surface and along the cooler sides of the pit. Right when the new material is introduced, some breakdown would occur from temporarily surviving organisms within the waste matter. This would result in carbon dioxide, methane, sulfates and traces of hydrogen and phosphates for instance. Behind the CO2 and H2O, methane would be the next largest fraction. This could be recovered and used as fuel. The sulfates and phosphates could be processed into fertilizers and plastics.

...CO2 could easily be captured and used for different purposes, as fed into CO2-enriched vegetation-growing facilities.
At the moment this valuable raw material is, well, simply flushed away. Keep thinking! Bob

It has been proposed to bury carbon in the form of ground-up charcoal, however the idea of burying excrement is compelling, for it is already in a form that can be relatively easily transported. In fact the infrastructure already exists to a large extent: tanker trucks specifically designed for hauling that particular material.

spike

---------------------------------

Bob, I had an idea today regarding sequestering of carbon. We create an enormous hole in the desert, perhaps a km deep and 100 meters in diameter with a cover, build a pipe from the large population centers, and fill the hole with human excrement. At our current sewage processing plants, bacteria break down this substance forming carbon dioxide, but note that the volume of a container increases as the cube of the linear dimension whereas the surface area (through which heat is lost) increases as the square. It scales such that given a large enough hole, the heat from the metabolism of the micro-organisms that break down the waste increases the temperature sufficiently to actually slay the bacteria and stop the decay process itself. So we could imagine a hole large enough to keep the solid waste sufficiently hot to prevent its breakdown, thus maintaining it in its mostly carbon form while boiling away the water, leaving the rest buried safely below ground.

It is easy to estimate: waste is produced at perhaps a kilogram a day, so a city the size of San Jose, population of about a megaperson, would produce about a thousand tons of excreta per day, at approximately the same density as water (some float, some sink), so the volume would be about 1000 cubic meters. The hole I described would have a volume of about 8 million cubic meters, so San Jose would fill the shit pit in about 8000 days, or about 22 years.

As a side benefit, it would provide the greenest way I know for those who wish to dispose of themselves after they pass from this earthly existence: arrange for their remains to be flushed. We could even imagine it as a terrifying form of execution that would discourage even the most cruel murderers and rapists and the most zealous Presbyterian terrorists.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.jpg
Type: image/jpeg
Size: 46471 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image004.jpg
Type: image/jpeg
Size: 38443 bytes
Desc: not available
URL:

From jonkc at bellsouth.net  Thu Dec 31 20:59:05 2009
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 31 Dec 2009 15:59:05 -0500
Subject: [ExI] there's another one! from symphony of science
In-Reply-To: <7A63333EFD2C443BB41860E3DE74ED0A@spike>
References: <7A63333EFD2C443BB41860E3DE74ED0A@spike>
Message-ID: <393739AE-1FDC-447C-9DD5-773E512E1159@bellsouth.net>

On Dec 31, 2009, at 1:44 PM, spike wrote:

> Remember that Sagan video synthesis that went around a few weeks ago? There is another one! It's from Symphony of Science, and it is even better than the first. It has not only Sagan, but Robert Jastrow, Michio Kaku and Richard Dawkins.
>
> Oh my this is wiiicked cool, and gives me hope as we go into a new year. Check it out:
>
> http://www.youtube.com/watch?v=vioZf4TjoUI&feature=player_embedded

First rate stuff, thanks Spike.
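Returning to the Carbon thread above: the pit-sizing estimate is easy to check with a back-of-the-envelope sketch (Python assumed; the inputs are the assumptions stated in that post, not measured data):

    import math

    depth_m    = 1000.0   # "perhaps a km deep"
    diameter_m = 100.0    # "100 meters in diameter"
    population = 1.0e6    # San Jose, about a megaperson
    kg_per_day = 1.0      # assumed solid waste per person per day
    density    = 1000.0   # kg/m^3, roughly the density of water

    pit_volume_m3 = math.pi * (diameter_m / 2) ** 2 * depth_m   # ~7.9 million m^3
    inflow_m3_day = population * kg_per_day / density           # ~1000 m^3 per day

    days = pit_volume_m3 / inflow_m3_day
    print(f"pit volume ~ {pit_volume_m3 / 1e6:.1f} million cubic meters")
    print(f"fill time  ~ {days:.0f} days (~{days / 365:.1f} years)")

This comes out at roughly 7.9 million cubic meters and about 7,850 days (close to 21.5 years), which agrees with the "about 8 million cubic meters" and "about 8000 days, or about 22 years" quoted above.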
Take a look at: http://www.youtube.com/watch?v=XGK84Poeynk&NR=1

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gts_2000 at yahoo.com  Thu Dec 31 22:43:51 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 31 Dec 2009 14:43:51 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To:
Message-ID: <776662.76806.qm@web36506.mail.mud.yahoo.com>

--- On Thu, 12/31/09, Stathis Papaioannou wrote:

>> And simulated minds can't observe themselves any more
> than can the simulated goldfish that appear to swim on some
> screen-savers.
>
> If a mind is like any other thing at all it is like an
> abstract mathematical entity, not like a physical object.

So said René Descartes and other mystics who believed in your supposed duality of mind/matter. Unfortunately the strong AI research program came out of that same false paradigm: the mind as abstract software, the brain as physical hardware.

> The brain or computer is the physical object instantiating the mind,
> like a sphere made of stone is a physical instantiation of an abstract
> sphere. You can destroy a physical sphere but you can't destroy the
> abstract sphere

It seems then that you suppose yourself as possessing or equaling this "abstract sphere of mind" that your brain instantiated, and that you suppose further that this abstract sphere of yours will continue to exist after your body dies. Correct me if I'm wrong.

-gts

From gts_2000 at yahoo.com  Thu Dec 31 23:15:02 2009
From: gts_2000 at yahoo.com (Gordon Swobe)
Date: Thu, 31 Dec 2009 15:15:02 -0800 (PST)
Subject: [ExI] The symbol grounding problem in strong AI
In-Reply-To:
Message-ID: <853659.71219.qm@web36508.mail.mud.yahoo.com>

--- On Thu, 12/31/09, John Clark wrote:

> And if simulated objects couldn't effect each other why in the world
> would scientists spend so much time making such computer programs?

Simulated objects affect each other in the sense that mathematical abstractions affect each other, and we can make pragmatic use of those abstractions in computer modeling. But those objects cannot, as you claimed in a previous message, "burn" each other, nor can they, as Stathis claimed, have the property of "wetness". Simulated fire doesn't burn things and simulated waterfalls are not "wet".

> You ask us to ignore a century and a half of hard evidence on
> how Evolution works simply on your authority, and then you
> accuse us of being slaves to religious doctrine.

It looks like religion to me when people here confuse computer simulations of things with real things, especially when those simulations happen to represent intentional entities, e.g., real people.

-gts

From jameschoate at austin.rr.com  Thu Dec 31 15:00:10 2009
From: jameschoate at austin.rr.com (jameschoate at austin.rr.com)
Date: Thu, 31 Dec 2009 15:00:10 +0000
Subject: [ExI] Update: Confusion Research Center - An Austin Hackspace
Message-ID: <20091231150010.VMQHI.152199.root@hrndva-web12-z02>

Here is some updated information about the status and near-term goals of Confusion Research Center.

http://hackerspaces.org/wiki/Confusion_Research_Center

-- -- -- -- --
Venimus, Vidimus, Dolavimus

jameschoate at austin.rr.com
james.choate at g.austincc.edu
james.choate at twcable.com
h: 512-657-1279
w: 512-845-8989
www.ssz.com
http://www.twine.com/twine/1128gqhxn-dwr/solar-soyuz-zaibatsu
http://www.twine.com/twine/1178v3j0v-76w/confusion-research-center

Adapt, Adopt, Improvise
-- -- -- --