[ExI] A better way of saying "transhumanism" (re: twobits.net)

Gina Miller nanogirl at halcyon.com
Sun Aug 3 21:41:53 UTC 2008


I got it okay, see if you can read it below:
Gina www.nanogirl.com


BEGIN_______________________________________
I tried to say in http://heybryan.org/transhumanism_def.html what is
achieved far more successfully in http://twobits.net/discuss/chapter2:

> My favorite transhumanist is Eugen Leitl (who is, in fact, an
> authentic transhumanist and has been vice-chair of the World
> Transhumanist Association). Eugen is Russian-born, lives in Munich,
> and once worked in a cryobiology research lab. He is well versed in
> chemistry, nanotechnology, artificial-intelligence (AI) research,
> computational- and network-complexity research, artificial organs,
> cryobiology, materials engineering, and science fiction. He writes,
> for example,
>
> If you consider AI handcoded by humans, yes. However, given
> considerable computational resources (~cubic meter of computronium),
> and using suitable start population, you can coevolve machine
> intelligence on a time scale of much less than a year. After it
> achieves about a human level, it is potentially capable of entering
> an autofeedback loop. Given that even autoassembly-grade computronium
> is capable of running a human-grade intellect in a volume ranging
> from a sugar cube to an orange at a speed ranging from 10^4 . . .
> 10^6 it is easy to see that the autofeedback loop has explosive
> dynamics. (I hope above is intelligible, I've been exposed to weird
> memes for far too long).30
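
(A rough back-of-envelope sketch of the numbers above, in Python: the
10^4..10^6 speedup range is Eugen's, but the gain per redesign cycle and
the cycle length are invented purely for illustration.)

# Back-of-envelope arithmetic for the quoted claim.  The 1e4..1e6
# speedup figures come from Eugen's text; the gain per redesign cycle
# and the cycle length are illustrative assumptions, nothing more.

def subjective_years(wall_clock_days, speedup):
    """Subjective thinking time for an intellect running `speedup`
    times faster than a biological human."""
    return wall_clock_days / 365.0 * speedup

for speedup in (1e4, 1e6):
    print(f"speedup {speedup:.0e}: one wall-clock day ~ "
          f"{subjective_years(1, speedup):,.0f} subjective years")

# Toy autofeedback loop: capability improves by a fixed factor each
# subjective redesign cycle, and a more capable system also runs its
# next cycle faster (that is the feedback).
capability = 1.0              # human level = 1.0, arbitrary units
gain_per_cycle = 1.1          # assumed 10% improvement per cycle
cycle_subjective_days = 180   # assumed half a subjective year per cycle
base_speedup = 1e4            # Eugen's lower bound

wall_clock_days = 0.0
for cycle in range(1, 101):
    wall_clock_days += cycle_subjective_days / (base_speedup * capability)
    capability *= gain_per_cycle
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}: capability x{capability:9.1f}, "
              f"{wall_clock_days:.3f} wall-clock days elapsed")

With those made-up constants the whole run fits inside a fraction of a
wall-clock day while capability runs away, which is the shape of the
argument whatever the real numbers turn out to be.
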
>
> Eugen is also a polymath (and an autodidact to boot), but in the
> conventional sense. Eugen's polymathy is an avocational necessity:
> transhumanists need to keep up with all advances in technology and
> science in order to better assess what kinds of human-augmenting or
> human-obsolescing technologies are out there. It is not for work in
> this world that the transhumanist expands his or her knowledge, nor
> quite for the next, but for a "this world" yet to arrive.
>
> Eugen and I were introduced during the Napster debates of 2001, which
> seemed at the time to be a knock-down, drag-out conflagration, but
> Eugen has been involved in so many online flame wars that he probably
> experienced it as a mere blip in an otherwise constant struggle with
> less-evolved intelligences like mine. Nonetheless, it was
> one of the more clarifying examples of how geeks think, and think
> differently, about technology, infrastructure, networks, and
> software. Transhumanism has no truck with old-fashioned humanism.
>
> >>From: Ramu Narayan . . .
> >>I don't like the
> >>notion of technology as an unstoppable force with a will of its own
> >> that has nothing to do with the needs of real people.
>
> [Eugen Leitl:] Emergent large-scale behaviour is nothing new. How do
> you intend to control individual behaviour of a large population of
> only partially rational agents? They don't come with too many
> convenient behaviour-modifying hooks (pheromones as in social
> insects, but notice menarche-synch in females sharing quarters), and
> for a good reason. The few hooks we have (mob, war, politics,
> religion) have been notoriously abused, already. Analogous to
> apoptosis, metaindividuals may function using processes
> deletorious[sic] to its components (us).31
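
(Eugen's emergence point is easy to make concrete. Here is a minimal
Schelling-style sketch in Python; the grid size, vacancy rate, tolerance
and number of rounds are all invented for illustration. Agents with only
a mild preference about their neighbours produce a strongly sorted global
pattern that no individual chose and no central hook controls.)

# Minimal Schelling-style neighbourhood model.  All parameters are
# illustrative assumptions; the point is only that mild individual
# preferences add up to a strong global pattern nobody chose.
import random

random.seed(0)
SIZE = 30
TOLERANCE = 0.34          # each agent wants >= 34% of neighbours alike
grid = [[random.choice("AB.") for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                yield grid[(x + dx) % SIZE][(y + dy) % SIZE]

def similarity(x, y):
    """Fraction of occupied neighbouring cells that match (x, y)."""
    me = grid[x][y]
    same = sum(n == me for n in neighbours(x, y))
    occupied = sum(n != "." for n in neighbours(x, y))
    return same / occupied if occupied else 1.0

def mean_similarity():
    cells = [(x, y) for x in range(SIZE) for y in range(SIZE)
             if grid[x][y] != "."]
    return sum(similarity(x, y) for x, y in cells) / len(cells)

print(f"before: mean local similarity {mean_similarity():.2f}")
for _ in range(50):
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] == "."]
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] != "." and similarity(x, y) < TOLERANCE]
    random.shuffle(movers)
    for x, y in movers:
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], "."
        empties.append((x, y))
print(f"after:  mean local similarity {mean_similarity():.2f}")

In a typical run the mean local similarity climbs well past the 34%
anyone actually asked for, which is the sense in which large-scale
behaviour emerges without any convenient hook to steer it.
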
>
> Eugen's understanding of what "technological progress" means is
> sufficiently complex to confound most of his interlocutors. For one
> surprising thing, it is not exactly inevitable. The manner in which
> Leitl argues with people is usually a kind of machine-gun prattle of
> coevolutionary, game-theoretic, cryptographic sorites. Eugen piles on
> the scientific and transhumanist reasoning, and his interlocutors
> slowly peel away from the discussion. But it isn't craziness, hype,
> or half-digested popular science -- Eugen generally knows his stuff -- it
> just fits together in a way that almost no one else can quite grasp.
> Eugen sees the large-scale adoption and proliferation of technologies
> (particularly self-replicating molecular devices and evolutionary
> software algorithms) as a danger that transcends all possibility of
> control at the individual or state level. Billions of individual
> decisions do not "average" into one will, but instead produce complex
> dynamics and hang perilously on initial conditions. In discussing the
> possibility of the singularity, Eugen suggests, "It could literally
> be a science-fair project [that causes the singularity]." If Francis
> Bacon's understanding of the relation between Man and Nature was that
> of master and possessor, Eugen's is its radicalization: Man is a
> powerful but ultimately arbitrary force in the progress of
> Life-Intelligence. Man is fully incorporated into Nature in this
> story, so much so that he dissolves into it. Eugen writes,
> when "life crosses over into this petri dish which is getting
> readied, things will become a lot more lively. . . . I hope we'll
> make it."
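
(The "initial conditions" phrase is doing real work there. A short
logistic-map run in Python makes it concrete; r = 4 is a deliberately
chaotic parameter value, chosen only for illustration.)

# Sensitive dependence on initial conditions: two logistic-map
# trajectories x -> r*x*(1-x) started 1e-10 apart.
r = 4.0
a, b = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 15 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")

Within a few dozen steps the two trajectories are effectively unrelated,
which is the sense in which billions of decisions do not "average" into
one will but hang perilously on where they started.
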
>
> For Eugen, the arguments about technology that the polymaths involve
> themselves in couldn't be more parochial. They are important only
> insofar as they will set the "initial conditions" for the grand
> coevolutionary adventure of technology ahead of us. For the
> transhumanist, technology does not dissolve. Instead, it is the
> solution within which humans are dissolved. Suffering, allocation,
> decision making -- all these are inessential to the ultimate outcome of
> technological progress; they are worldly affairs, even if they
> concern life and death, and as such, they can be either denounced or
> supported, but only with respect to fine-tuning the acceleration
> toward the singularity. For the transhumanist, one can't fight the
> inevitability of technical evolution, but one certainly can
> contribute to it. Technical progress is thus both law-like and
> subject to intelligent manipulation; technical progress is
> inevitable, but only because of the power of massively parallel human
> curiosity.
>
> Considered as one of the modes of thought present in this-worldly
> political discussion, the transhumanist (like the polymath) turns
> technology into a rhetorical argument. Technology is the more
> powerful political argument because "it works." It is pointless to
> argue "about" technology, but not pointless to argue through and with
> it. It is pointless to talk about whether stopping technology is good
> or bad, because someone will simply build a technology that will
> invalidate your argument.
>
> There is still a role for technical invention, but it is strongly
> distinguished from political, legal, cultural, or social
> interventions. For most transhumanists, there is no rhetoric here, no
> sophistry, just the pure truth of "it works": the pure, undeniable,
> unstoppable, and undeconstructable reality of technology. For the
> transhumanist attitude, the reality of "working code" has a reality
> that other assertions about the world do not. Extreme transhumanism
> replaces the life-world with the world of the computer, where bad
> (ethically bad) ideas won't compile. Less-staunch versions of
> transhumanism simply allow the confusion to operate
> opportunistically: the progress of technology is unquestionable
> (omniscient), and only its effects on humans are worth investigating.
>
> The pure transhumanist, then, is a countermodern. The transhumanist
> despises the present for its intolerably slow descent into the
> future of immortality and superhuman self-improvement, and fears
> destruction because of too much turbulent (and ignorant) human
> resistance. One need have no individual conception of the present, no
> reflection on or synthetic understanding of it. One only need
> contribute to it correctly. One might even go so far as to suggest
> that forms of reflection on the present that do not contribute to
> technical progress endanger the very future of life-intelligence.
> Curiosity and technical innovation are not historical features of
> Western science, but natural features of a human animal that has
> created its own conditions for development. Thus, the transhumanists'
> historical consciousness consists largely of a timeline that makes
> ordered sense of our place on the progress toward the Singularity.
>
> The moral of the story is not just that technology determines
> history, however. Transhumanism is a radically antihumanist position
> in which human agency or will -- if it even exists -- is not ontologically
> distinct from the agency of machines and animals and life itself.
> Even if it is necessary to organize, do things, make choices,
> participate, build, hack, innovate, this does not amount to a belief
> in the ability of humans to control their destiny, individually or
> collectively. In the end, the transhumanist cannot quite pinpoint
> exactly what part of this story is inevitable -- except perhaps the
> story itself. Technology does not develop without millions of
> distributed humans contributing to it; humans cannot evolve without
> the explicit human adoption of life-altering and identity-altering
> technologies; evolution cannot become inevitable without the
> manipulation of environments and struggles for fitness. As in the
> dilemma of Calvinism (wherein one cannot know if one is saved by
> one's good works), the transhumanist must still create technology
> according to the particular and parochial demands of the day, but
> this by no means determines the eventual outcome of technological
> progress. It is a sentiment well articulated by Adam Ferguson and
> highlighted repeatedly by Friedrich Hayek with respect to human
> society: "the result of human action, but not the execution of any
> human design."32

- Bryan

END________________________________________


----- Original Message ----- 
From: "Damien Broderick" <thespike at satx.rr.com>
To: "ExI chat list" <extropy-chat at lists.extropy.org>
Sent: Sunday, August 03, 2008 1:02 PM
Subject: Re: [ExI] A better way of saying "transhumanism" (re: twobits.net)


> At 02:49 PM 8/3/2008 -0500, you wrote:
>
> Testing.
>
> Maybe I'm going to have to read this stuff on the archive.