[extropy-chat] Reliable tech
Adrian Tymes
wingcat at pacbell.net
Tue Aug 30 05:52:30 UTC 2005
--- Amara Graps <amara at amara.com> wrote:
> A group that advocates technology needs to put on a face that
> their own technology works, I think.
This brings up one of the less easily countered objections I commonly
hear from Luddites: "Even assuming this tech you're advocating does
great things when it works, what do we do when we rely on it and it
fails?"
In practice, this is rarely an issue for truly critical systems.
Computers responsible for critical public safety are usually (though
not always) far better protected than Hollywood would have people
believe. Cars can break down, but there's an established procedure for
what to do when that happens, and an established safety net for
affected drivers. Cell phones drop out of range all the time without
causing heart attacks (except for certain individuals who'd be in need
of stress management therapy anyway). And our food system is robust
enough that even the slightest outbreak of poisoning or significant
shortage is a major news item (which, ironically, is why we hear about
them).
In theory, though...safety systems can be among the last pieces
developed for any new technology. What do we do when, not if, GMO
crops cross-fertilize normal crops? (Answer: the exact same thing we
do when different varieties of "ordinary" crops cross-fertilize.
Indeed, this was one pre-modern-era method of creating GMOs, so even
introducing novel types of genes seems unlikely to cause harm.) What
do we do if some computer virus manages to infect and wreck all of a
city's 3D plotters and other nanotech manufacturing equipment, bringing
its manufacturing base to a halt? (Answer: import from other cities,
and/or fall back to the old ways if necessary until replacement systems
can be imported. And fire whoever thought it was a good idea to run
these systems by remote control without even minimal on-site
verification - the decision that gave the excuse to connect them to the
Internet unprotected in the first place.) What do we do if we create a
friendly (or even Friendly) AI, entrust our economic systems to it as
it becomes (apparently) more intelligent than we can comprehend, and it
then simply stops? (Answer: never trust a single point of failure, even
a super-bright post-Singularity AI. Any AI intelligent enough to run
the world, and which actually wanted to do so for the good of humanity,
would recognize this possibility and create nominally competing
backups.)
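The "nominally competing backups" answer amounts to a failover pattern.
A minimal sketch (all names here - run_with_backups, primary, backup -
are hypothetical, purely for illustration): try each independent system
in turn, so one of them simply stopping does not halt the whole task.

```python
def run_with_backups(systems, task):
    # "Never trust a single point of failure": try each nominally
    # competing implementation in turn, so one system simply stopping
    # (modeled here as a raised exception) does not halt everything.
    failures = []
    for name, system in systems:
        try:
            return system(task)
        except Exception as exc:
            failures.append(f"{name}: {exc}")
    raise RuntimeError("all systems failed: " + "; ".join(failures))

def primary(task):
    # The super-bright AI that goes silent.
    raise RuntimeError("simply stopped")

def backup(task):
    # The less impressive but still-running alternative.
    return f"handled {task!r} the old-fashioned way"

result = run_with_backups([("primary", primary), ("backup", backup)],
                          "payroll")
print(result)
```

The point of "nominally competing" is that the backups should be
independent implementations, not copies - copies share the same failure
modes as the original.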
Moreover, in practice, a lot of new tech isn't that reliable - then
again, if it were, it would likely already be in use by the critical
systems. For instance, neural networks and genetic algorithms can
produce code that performs well under a certain set of
conditions...but change the conditions significantly (and real life
always supplies unexpected parameters), and you have to retrain them.
Require 99.999% reliability, and the training takes a very long time -
each time. Not to mention that they tend to ferret out unstated
assumptions about how things should or should not be done, which must
then be made explicit (once discovered), which in turn requires
retraining (though usually not entirely from scratch).
More mundanely, power fails. Communications get interrupted and/or
garbled. User errors happen. And so forth. Solve any one of those
problems, and the economic opportunities would be enormous - even
without considering how the solutions could contribute to
transhumanist ends. These failure modes are usually perceived,
incorrectly, to have no parallel in the "old" ways...but whether or
not the perception holds, I wonder - might there be a way to make some
technology 100% reliable?
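Garbled communications, at least, have a standard partial answer:
detect the corruption and retry, rather than trust the channel. A
small sketch of the idea (the names and the simulated channel are
hypothetical, for illustration only), using a digest to catch garbling:

```python
import hashlib

def receive(payload, digest, retransmit, max_retries=3):
    # Verify integrity; on a digest mismatch, ask for retransmission a
    # bounded number of times instead of silently accepting bad data.
    for _ in range(max_retries):
        if hashlib.sha256(payload).hexdigest() == digest:
            return payload
        payload, digest = retransmit()
    raise IOError("retries exhausted: channel too unreliable")

# Simulate a channel that garbles the first transmission only.
message = b"meet at noon"
attempts = iter([b"meXt at noon", message])

def channel():
    # Each call: the next transmission, plus the sender's true digest.
    return next(attempts), hashlib.sha256(message).hexdigest()

garbled, digest = channel()
recovered = receive(garbled, digest, channel)
print(recovered)
```

Note this only detects garbling; it does not make the channel 100%
reliable - if every retry is also corrupted, the call still fails,
which is rather the point of the question above.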