[ExI] Hard Takeoff
Stefano Vaj
stefano.vaj at gmail.com
Sun Nov 14 18:51:35 UTC 2010
2010/11/14 Michael Anissimov <michaelanissimov at gmail.com>:
> The idea of a mind that can copy itself directly is a really huge deal.
I am quite interested in the subject, especially since we are
preparing an issue of Divenire. Rassegna di Studi Interdisciplinari
sulla Tecnica e il Postumano entirely devoted to robotics and AI, and
we might be offering you a tribune to present your or SIAI's ideas on
the subject.
Personally, however, I find the idea of "mind" and "intelligence"
presented in the linked post still way too anthropomorphic.
I am in fact not persuaded that "intelligence" is anything special,
mystical or rare, or that human (animal?) brains escape, in one
respect or another, Wolfram's Principle of Computational Equivalence.
Accordingly, "AI" is little more to me than human-like features which
have not yet been practically implemented in artificial computers -
receding into the field of general IT once they are.
As to "minds" in the sense above, I suspect that they have little to
do with intelligence, and are nothing else than evolutionary
artifacts, which of course can be emulated with varying performances -
as anything else, for that matter - on any conceivable platform,
ending up either with "uploads" of existing individuals, or with
purely "artificial", patchwork personalities made up from arbitrary
fragments.
If this is the case, we can of course implement systems passing not
just a Turing-generic test (i.e., systems which cannot be
statistically distinguished from human beings in a finite series of
exchanges), but also a Turing-specific test (i.e., systems which
cannot be distinguished from John) or a Turing-categorial test
(systems which cannot be distinguished from the average 40-year-old
serial killer from Washington, DC). All of them would exhibit an
"agency" which would otherwise require some billion years of selection
of long chains of carbon-chemistry molecules.
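For concreteness, "cannot be statistically distinguished in a finite
series of exchanges" can be read as a simple hypothesis test: if
judges asked to pick out the machine do no better than chance, the
system passes. Here is a minimal Python sketch of that reading; the
function name, the 0.5 chance baseline and the significance threshold
are my own illustrative assumptions, not part of any standard
protocol:

import math

def passes_turing_test(correct_calls, trials, alpha=0.05):
    """Illustrative sketch: the system "passes" if the judges' success
    rate at identifying it is not statistically distinguishable from
    chance (p = 0.5) over a finite series of exchanges."""
    p0 = 0.5                                # null hypothesis: judges guess
    p_hat = correct_calls / trials          # observed identification rate
    se = math.sqrt(p0 * (1 - p0) / trials)  # standard error under the null
    z = (p_hat - p0) / se
    # One-sided p-value via the normal approximation to the binomial.
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return p_value >= alpha                 # indistinguishable => passes

# Example: judges identify the machine in 27 of 50 exchanges.
print(passes_turing_test(27, 50))  # True: 54% is consistent with guessing

The Turing-specific and Turing-categorial variants would differ only
in the reference class the judges compare against (John, or the
average member of a category), not in the statistics.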
This is per se an interesting experiment, but not such a
paradigm-changing one, since it would appear to me that anything which
can be initiated by such an emulation can also be initiated by a
flesh-and-bone (or... uploaded) individual with equivalent processing
resources, bandwidth and interfaces at his or her fingertips. This is
especially true since it is reasonable to assume that animal brains
are already decently optimised for many essentially "animal-like"
tasks.
Moreover, as already discussed on a few lists, meaningful concerns
about the "risks for the survival of the human race" in a framework
where such systems would become increasingly widespread would require,
to escape paradox, a more critical and explicit definition of our
concepts of "risk", "survival", "human", "extinction", "race",
"offspring", "death", and so forth, as well as of the underlying value
system.
--
Stefano Vaj