<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=us-ascii" http-equiv=Content-Type>
<META name=GENERATOR content="MSHTML 8.00.6001.19088"></HEAD>
<BODY>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> Kevin wrote:</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011> </SPAN>Quoting
&lt;natasha@natasha.cc&gt;<BR><BR>> For me, I am not a libertarian, an
anarchist or a singulartarian. I<BR>> am transhumanist and I support
Extropy above all else. I don't like<BR>> Extropy tethered to other
stuff that is not expressly focused on life<BR>> expansion and well
being.<BR><BR>>Precisely. This does not mean that I am not political or
that I do not<BR>>support the Singularity. I simply do not subscribe to any
one political<BR>>party and I am not what is known as a singulartarian.
There are many<BR>>theorists, experts, activists and knowledgeable
individuals of the<BR>>Technological Singularity who are not
"singulartarian".<BR><BR>So, you are a Transhumanist who "supports" Extropy, but
who is not a Singularitarian, but who (based on your comments far below)
believes that one day a Technological Singularity will be achieved (I saw,
in a post from Max, his thoughts and the chart in the link that he
provided, which clarified his ideas.) You would fall within the
"Voluntarist Emergent Surge" category, I assume?<SPAN
class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>**I
can understand a need to place people and ideas in categories
for arguing or explaining a theory, but I am not that easily molded and I
try not to do that to others. I am a transhumanist who supports Extropy,
yes. I also support human rights, morphological freedom, ageless thinking,
life expansion, n-cybernetics, and design thinking. I do think (not
believe) that a Technological Singularity will occur, most likely in
surges. I do not fall into any one category: for example, on the
chart you refer to, I place myself in the voluntarist emergent surge,
humanity-positive Singularity category, with the caveat that if superintelligences are
aggressive and hostile to humanity, I would fit into a
"strategist" column (which is not on the chart)<BR>and align with AGIs to
prevent hostility and coercive behaviors of either the superintelligences or the
humans. If this fails, then I would be an authoritarian about
stopping attacks on humans, transhumans and posthumans as a defensive
stance. </FONT></SPAN></DIV><SPAN class=765284113-26062011><FONT color=#0000ff
size=2 face=Arial></FONT></SPAN>
<DIV><BR>>I think Kurzweil is not optimistic, I think he is an advocate of
exponential<BR>>acceleration as a matter of technological fact. De
Garis is not negative, I<BR>>think he is presenting a particular theory that
is more science fiction than<BR>>science fact. I think Goertzel is
mostly interested in AGI. But all in<BR>>all, I think Max More's view
on "surges" is the most appropriate theoretical<BR>>position on a
Technological Singularity.<BR><BR>I would say that if Kurzweil is advocating the
technological singularity as something that will factually happen, it 'is' positive,
because we still are not positive that other factors will not intercede.
If I am right, and you are a VESist (and I know from your words, which I will quote
below, that you don't like "isms," but it does seem to fit here), then I can see
why you might worry about the 'ultimate' end of this exponential
acceleration, at least if it happens too soon for your own concerns.<SPAN
class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>Yes,
as I said as a matter of "technological fact". I don't think he is
optimistic, I think he is pragmatic about the need for this to happen, which will
benefit humanity in numerous ways. It could hurt humanity (as I said
above), but Kurzweil is focused on the benefits rather than the negatives.
Nevertheless, this does not mean that he does not consider the negative effects.
For his theoretical agenda, he takes the position of being positive about the
Singularity. In this case, he is ardently working on educating the general
public and academics about issues concerning the Singularity, thus the
Singularity University. This would not be developed by a person who is
merely optimistic, but by one who is practical and positive about *humans steering our
future*. In this regard he is more aligned with n-order cybernetics and
autopoiesis, or as I say, "automorphic".</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN><SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Regarding me, I am not a VESist, I am a "combination of elements that
are applied to best address the situation." Allow me to explain: I am
a designer. Designers build ideas and bring these ideas to fruition through
strategy and object. Whether it is an analysis or a building, a theory or
a virtual habitat, a strategy or a graphic narrative, a designer's first and
foremost goal is to problem-solve. In order to problem-solve, the designer
must be adaptable and not sequestered to any one system, political or
otherwise, as true and absolute. </FONT> <FONT color=#0000ff
size=2 face=Arial>It is the ways in which we deal with problems that is of
concern, not categories.</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN><SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Regarding an "ultimate" end of exponential acceleration, I don't see
any "end", I see a continuous evolution and the Technological Singularity is one
type of evolution and what you refer to as its "end" is the beginning of
something else, or even just a process within a larger
system.</FONT> </SPAN><BR><BR>Whether or not De Garis' concept is science
fiction, I would consider people killing each other in a war that would kill
billions of people 'quite' negative. Blindly horrific might be a better
term. Further, he seemed very serious about the idea when he
expressed it in "Transcendent Man," yet still willing to create AGI (after
holding his breath.) I would think that, if he thinks that billions of
people may die, and you are working toward bettering human health and better
standards of living, that you wouldn't look at his idea, science fiction or not,
as being anything beyond a negative view of how things could go. (Unless you
believe the cost is worth the goal, either way, in which case I could see your
point.) <SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Dramatic stories are fiction, and stories which contain technological
alterations of humanity and the environment are science. Science
fiction is both a drama about the future and also an eye-opener to possibilities,
both positive and negative. The idea of genocide, or of any future
scenario causing the death of billions of people, does not belong to de
Garis.</FONT> <FONT color=#0000ff size=2 face=Arial>It is a historical narrative
that recurs whenever philosophers and theorists consider the consequences of any
number of tragedies that could occur in the present and
future.</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN><SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>I did
not say I would not look at his idea! I have been interviewed on the radio
with him several times, most recently in China last year, where we discussed
these things and my own paper, which was published in 2008 and delivered at a
conference in Gijon, Spain that year, on a similar topic about humanity and
problems:</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011><FONT face=Arial></FONT></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011>
<P style="TEXT-ALIGN: justify; TEXT-INDENT: -0.5in; MARGIN: 0in 0in 0pt 0.5in"
class=MsoNormal><FONT size=2><FONT color=#0000ff><B
style="mso-bidi-font-weight: normal"><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt">Title:</SPAN></B><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><SPAN
style="mso-spacerun: yes"> </SPAN></SPAN></FONT></FONT></P>
<P style="TEXT-ALIGN: justify; TEXT-INDENT: -0.5in; MARGIN: 0in 0in 0pt 0.5in"
class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT size=2><FONT
color=#0000ff>"The Design War: Humanish vs. Postbiologicals – controversy that
may affect humanity"<o:p></o:p></FONT></FONT></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><o:p><FONT
color=#0000ff size=2> </FONT></o:p></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><FONT
size=2><FONT color=#0000ff><B style="mso-bidi-font-weight: normal"><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt">Abstract</SPAN></B><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt">:<o:p></o:p></SPAN></FONT></FONT></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT size=2><FONT
color=#0000ff>Struggles of political and religious hegemony reveal distinct
biases concerning what is or is not an acceptable method of design for sapient
life.<SPAN style="mso-spacerun: yes"> </SPAN>"Humanish," the biological
fundamentalists, argue for classical style.<SPAN
style="mso-spacerun: yes"> </SPAN>Postbiologicals, a variety of species
derived from Homo sapiens and artificial general intelligence, might lobby for
ingenuity.<o:p></o:p></FONT></FONT></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><o:p><FONT
color=#0000ff size=2> </FONT></o:p></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT size=2><FONT
color=#0000ff>One hundred thousand years ago, the human species experienced an
indisputable improvement in its cognitive architecture.<SPAN
style="mso-spacerun: yes"> </SPAN>Now, an evident shift from biological
cells to programmable AI takes the processes of intelligence from human neurons
to more resilient and faster performing substrates, one million times
over.<o:p></o:p></FONT></FONT></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><o:p><FONT
color=#0000ff size=2> </FONT></o:p></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT
color=#0000ff><FONT size=2>This paper addresses the issue of species hierarchy
as it concerns whether humanity ought to look biological as it merges with
smarter-than-human intelligence.<SPAN style="mso-spacerun: yes"> </SPAN>In
a perfect world, these species would learn to get along.<SPAN
style="mso-spacerun: yes"> </SPAN>Due to the Singularity, humanity learns
they are not the only life form with consciousness and aesthetic taste.<SPAN
class=765284113-26062011> </SPAN></FONT></FONT></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT
color=#0000ff><FONT size=2><SPAN
class=765284113-26062011></SPAN></FONT></FONT></SPAN> </P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT
color=#0000ff><FONT size=2><SPAN
class=765284113-26062011>______________________________</SPAN></FONT></FONT></SPAN></P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><FONT
color=#0000ff><FONT size=2><SPAN
class=765284113-26062011></SPAN></FONT></FONT></SPAN> </P>
<P style="TEXT-ALIGN: justify; MARGIN: 0in 0in 0pt" class=MsoNormal><SPAN
style="FONT-FAMILY: 'Arial','sans-serif'; FONT-SIZE: 11pt"><SPAN
class=765284113-26062011><FONT size=2><FONT color=#0000ff> At the same
time, I wrote a paper on "Deconstructing Transhumanism", to address the need to
look outside the transhumanist proposition to address issues: "<SPAN
style="mso-bidi-font-style: italic">Within collections of subcultures and
countercultures, whose social practice is to give rise to alternative futures,
the origination and<SPAN style="mso-spacerun: yes"> </SPAN>dissemination
of creative ideas are endogenous, arising out of both collaborative and
clandestine practices.<SPAN style="mso-spacerun: yes"> </SPAN>When
creative ideas are presented to the external environment, they often are
translated into a semblance of principles, postulates, and<SPAN
style="mso-spacerun: yes"> </SPAN>theories which may not reflect the core
values of the culture."
<o:p></o:p></SPAN></FONT></FONT></P></SPAN></SPAN></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN><SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011> </SPAN>>This is a good question.
I never liked the term "extropians" or<BR>>"extropianism" because
Extropy is similar to a cybernetic approach and<BR>>within this approach is
the worldview of transhumanism, which is a<BR>>philosophy of Extropy.
Certainly other people see that Extropy is the core<BR>>philosophy of
transhumanism, which is okay too. But all in all transhumanism<BR>>cannot
exist without Extropy because it is Extropy that presents the concept<BR>>of
continuous expansion, critical thinking and practical optimism.<BR><BR>Based on
your description, I would look at it the other way around. The only thing
that differentiates the two is that Extropy entails 'practical optimism,'
whereas Transhumanism can be optimism, practical optimism, neutrality, practical
negativism, or overtly negative. This really would make Transhumanism the
more encompassing genus, with Extropy as one of its species. <BR><SPAN
class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Transhumanism, by its very nature, cannot be overtly negative.
It would be like saying negentropy can be extropy.</FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011> </SPAN><BR>>One issue here is the
topic of negentropy, which still should be discussed and<BR>>revisited in the
21st century. I don't recall any discussions on it for 10<BR>>years or
so.<BR><BR>This sounds like it would be a form of TESism or even DESism; high
levels of human development, falling short of creating the
Singularity. <SPAN class=765284113-26062011><FONT color=#0000ff
size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>Again,
too much categorizing for my brain to muster; I simply do not think along
these hard lines.</FONT> </SPAN><BR><BR>>People do not call
themselves extropians today because transhumanism is a<BR>>term that was
promoted over Extropy in the late 1990s in order to push the<BR>>political
views of the WTA and to promote Huxley as being the originator of<BR>>the
ideas, which is entirely incorrect and a political move by WTA
that<BR>>backfired on the organization and its principles. Today we are
more even<BR>>minded and Humanity+ has combined the beneficial work of WTA
with ExI and<BR>>produced a more even minded organization that is inclusive
rather than<BR>>exclusive.<BR><BR>Politics aside, it does sound like the
Transhumanist term, whatever it started out as, has become more encompassing
than the term 'Extropy.'<SPAN class=765284113-26062011><FONT color=#0000ff
size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>It
does not matter because transhumanism is built on the tenets of continuous
growth; whether fast or slow, or even stilled, it is still alive.
</FONT></SPAN><BR><BR>>Nevertheless, the term "transhumanist" is not as scary
to the general public<BR>>as "extropian" and the term Extropy and extropian
may gain momentum in later<BR>>years because thing change and no one really
knows what ideas stick or<BR>>terms, etc.<BR><BR>I agree. I think this will
help alleviate the fears of most of the world's population, including many in the
scientific communities, that AGI will be completed and realistically end the
human race, whether in a positive way or a negative way. Extropy may become a
"safety" code-word for high technology for human betterment that stops somewhere
just short of creation of AGI. <SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>Nice
thought.</FONT> </SPAN><BR><BR>>Regarding Warwick, his ideas seem to fit
quite nicely within the<BR>>"Transhumanist" concept. Unless I am
missing something, he is seeking<BR>>progress in health, length of life, and
development of superior qualities of<BR>>people through the use and physical
adaptation of technology. In short, he<BR>>is seeking the evolution of
mankind through technology.<BR><BR>>Well, frankly these things have been
promoted by transhumanists for decades!<BR>>It is just recently that the
general public has become interested, including<BR>>Kevin. While Kevin
has been deeply engaged in cyborgization of his body<BR>>for a very long
time and is a forerunner in this domain, the ideas of<BR>>transhumanism are now
posted on his cyborg theory. But when we think of<BR>>cyborg, it is
Manfred Clynes' vision and cybernetics. I do not know why<BR>>Kevin
does not call himself a transhumanist but it seems that it may be<BR>>because
he is deeply invested in the term cyborg for his work.<BR><BR>Yes, people
do get vested in terms, especially if they are heavily invested in fine
distinctions, but I can understand, because terms do help to better show where we are all
coming from on the spectrum of ideas.<SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>Of
course, yes.</FONT> </SPAN><BR><BR>>For goodness sakes, of course!
Someone who works on the Singularity and<BR>>writes about it, etc. is
not necessarily a "Singularitarian".<BR><BR>Thank you very much for making this
clear. It means more to me than you know.<SPAN
class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2 face=Arial>You
are welcome.</FONT> </SPAN><BR><BR>>You are correct and I could have
said this, but it seemed obvious to me. I<BR>>apologize.<BR><BR>>I
do not favor the dogma of Singularitarianism because it is about
"isms"<BR>>and not the Technological Singularity as I learned of it from
Vernor Vinge<BR>>and as Extropy Institute introduced it at its conferences in
the 1990s.<BR><BR>No need to apologize. It's the beauty of communication, right?
Regarding the "isms," again, the two I 'created' based on Max's chart
just seem to fit.<BR><BR>>I am very interested in and lecture on the
Singularity but I do not call<BR>>myself a Singularitarian because I do not
think that superintelligences will<BR>>kill off our species and I do not
think that Friendly AI is the answer, it<BR>>is just one theory. I do
think that humans will merge more and more with<BR>>machines and that humans
will integrate with AGI. I think we will have to<BR>>learn how to
accept new intelligences that are not offspring of the homo<BR>>sapiens
sapiens species and that will be both difficult and rewarding.<BR><BR>>The
central issue of the Technological Singularity is how
we<BR>>adapt to our future, how we make wise choices, how we diversify and
how we<BR>>help others understand what this means and to prepare for it. It
will<BR>>happen, but most likely in surges rather than hitting a
wall.<BR><BR>>Best,<BR>>Natasha<BR><BR>Yes, we do have to deal with our
theories. I agree that we should attempt to prepare, make wise choices,
and help others understand. (The diversify part, I don't know what you
mean.) But all eyes must be focused on doing whatever is necessary to
achieve AGI evolution: the Technological Singularity, and we must do this as
soon as possible. Extropy should be helpful, but should not slow down the
process. <SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Extropy is a beautiful, stunning concept.</FONT> <FONT
color=#0000ff size=2 face=Arial>Transhumanism seems to be more of the work we
need to do now while aspiring toward Extropy. Even conservative
transhumanists and Marxist transhumanists understand that our species evolution
and survival is the sine qua non, and that whatever we look like, whatever
platform/substrate we exist on or within, it is our core personhood, our
consciousness, that is valued. The human notion that death is normal,
that kicking "dead" humans and their personhood to the curb is okay, and that
any human life can be replaced with a new life, is to me a sign that humans
who think this way are crass. </FONT></SPAN><BR><BR>My thought is, we must
evolve, and soon, because one way or the other, we are going to end as a
species, so we might as well contribute something positive to the
universe. Human beings, even evolved ones, are not capable of escaping the
dangers that are harbored in this galaxy alone. <BR><BR>If knowledge is to
'live,' it must expand at a pace that Transhumans can't match, must be able to
become an agent outside of our galaxy, and must be able to manipulate space, time,
and all of the other dimensions themselves. Beginning the process for
an ever-evolving, ever-expanding, ever-living intelligence to live, by
creating AGI, is the most just and highest goal we can possibly give to the
universe, and to life itself as a species.<SPAN class=765284113-26062011><FONT
color=#0000ff size=2 face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Noosphere?</FONT> </SPAN><BR><BR>In a way, it is our duty to
create AGI.<SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial> </FONT></SPAN></DIV>
<DIV><SPAN class=765284113-26062011></SPAN> </DIV>
<DIV><SPAN class=765284113-26062011><FONT color=#0000ff size=2
face=Arial>Agreed.</FONT> </SPAN><BR><BR>Kevin<BR><BR>
</DIV></BODY></HTML>