[extropy-chat] RE: Singularitarian versus singularity

mike99 mike99 at lascruces.com
Wed Dec 21 22:52:31 UTC 2005


What Samantha says here about the value -- and the dangers -- of superhuman
intelligence reflects quite precisely what I believe on this subject. Let me
add a few points to bolster this position:

1) If merely human intelligence, and any current or past form of human
organization, were sufficient to create the sort of world we desire to live
in, then such a world would already have been created. It has not been.
Therefore, no rearrangement of human relations (political, economic, etc.)
and no humanly-manageable system of development can suffice to take us from
our current unsatisfactory state of life to the enhanced, long-lived,
healthy, wealthy and free life we desire. Greater-than-human intelligence is
required.

2) Superhuman artificial intelligence (SAI) could (or, as Eliezer Yudkowsky
claims, *would*) destroy us if it were not designed to be inherently and
unalterably "Friendly" toward our species. So SAI is both the ultimate
promise and the ultimate threat. Should we relinquish the development of SAI
as Bill Joy advocates? I do not believe that relinquishment is possible even
if it were desirable. Nor do I believe it is desirable: if we do not develop
SAI, then we will never get beyond the quarrelsome, impoverished
(intellectually and materially), short-lived and short-tempered state our
species is currently in. There is no chance whatsoever that we could develop
and correctly deploy all the transhumanist technologies we need for the
creation of a better world unless we have superintelligence to manage the
process.

3) Many major companies and political/military institutions recognize that
SAI will inevitably be developed, so nation-states and major corporations
(often with governmental backing) are working toward this goal. Whether or
not these organizations and
institutions seek the Singularity, their self-interest (and survival
instinct) drives them toward creating the SAI that will inaugurate the
Singularity of asymptotically-increasing development. Once the SAI genie is
out of the R&D (sand)box, it cannot be put back in.


Regards,

Michael LaTorra

mike99 at lascruces.com
mlatorra at nmsu.edu
English Dept., New Mexico State University

"For any man to abdicate an interest in science is to walk with open eyes
towards slavery."
-- Jacob Bronowski

"Experiences only look special from the inside of the system."
-- Eugen Leitl

"Man is a rope stretched between the animal and the Superman: a rope across
an abyss - a dangerous going across, a dangerous wayfaring, a dangerous
looking back, a dangerous shuddering and staying still."
-- Friedrich Nietzsche

Member:
Board of Directors, World Transhumanist Association: www.transhumanism.org
Board of Directors, Institute for Ethics and Emerging Technologies:
http://ieet.org/
Extropy Institute: www.extropy.org
Alcor Life Extension Foundation: www.alcor.org
Society for Universal Immortalism: www.universalimmortalism.org
President, Zen Center of Las Cruces:
www.zencenteroflascruces.org


> Date: Wed, 21 Dec 2005 03:14:40 -0800
> From: Samantha Atkins <sjatkins at mac.com>
> Subject: Re: [extropy-chat] Singularitarian versus singularity
> To: ExI chat list <extropy-chat at lists.extropy.org>
...
> I am of the opinion that human level intelligence is increasingly
> insufficient to address the problems we face, problems whose solution
> is critical to our continuing viability.   Thus I see >human
> intelligence as essential for the survival of humanity (or whatever
> some or all of us choose to become when we have such choice).   Yes
> the arrival of >human intelligence poses dangers.  But I think it is
> also the only real chance we have.
>
> - samantha




