[extropy-chat] Reliable tech

Eugen Leitl eugen at leitl.org
Tue Aug 30 10:32:48 UTC 2005


On Mon, Aug 29, 2005 at 10:52:30PM -0700, Adrian Tymes wrote:
> --- Amara Graps <amara at amara.com> wrote:
> > A group that advocates technology needs to put on a
> > face
> > that their own technology works, I think.

High availability is something with a hefty price tag, currently.
Not a problem for a business, but ExI is not such a business.
We're operating on David's hospitality, on aged hardware badly
in need of retirement, and on his gracious free administration.
Major kudos to David for that.

I've promised an HA system to migrate ExI services to, but unfortunately
things are delayed on my end. The technology is complicated, the setup
is expensive, and I don't have a properly configured, idling spare
to serve as a stopgap, even in a non-HA configuration. I'm sorry for this,
but some external circumstances changed.

So we will have to infringe on David's hospitality for a brief while
longer, unless we have a total hardware failure, or somebody else steps
in with a hardware or monetary donation to rectify the situation.
 
> This brings up one of the less easily countered common type of
> objections I keep hearing from Luddites: "Even assuming this tech
> you're advocating does great things when it works, what do we do when
> we rely on tech and it fails?"

Which technology? Some of our technologies are more reliable than others,
and some are more mission-critical than others. Reliability is a mature
engineering discipline, even for brittle IT systems. The procedures for
selecting the best available technology are frequently defective, though.
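
The standard lever is redundancy: assuming independent failures (a big
assumption), parallel replicas multiply failure probabilities, so
mediocre parts can yield a reliable whole. A back-of-the-envelope
check in Python:

# Availability of n independent replicas in parallel:
#   A = 1 - (1 - a)**n  (independence is the big assumption)
def parallel_availability(a, n):
    return 1 - (1 - a) ** n

for n in (1, 2, 3):
    print(n, "replicas of a 99% box:", parallel_availability(0.99, n))
# 1 -> 0.99, 2 -> ~0.9999, 3 -> ~0.999999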

That overall failure is not technology's fault; it's a human failure.
 
> In practice, this is rarely an issue for truly critical systems.
> Computers responsible for critical public safety are usually (though
> not always) far better protected than Hollywood would have people

What is the basis for this statement? If you read RISKS, the average
situation is not nearly that good.

> believe.  Cars can break down, but there's an established procedure for
> what to do when that happens, and an established safety net for
> affected drivers.  Cell phones drop out of range all the time without

Car accidents and pollution are a major source of mortality. We've just
gotten used to what is essentially an unacceptable situation.

> causing heart attacks (except for certain individuals who'd be in need
> of stress management therapy anyway).  And our food system is robust
> enough that even the slightest outbreak of poisoning or significant
> shortage is a major news item (which, ironically, is why we hear about
> them).

The food infrastructure is quite vulnerable to attack. The reason
this hasn't happened is not because it's intrinsically safe.
We've actually seen some quite spectacular failures in recent years,
even without an attack.
 
> In theory, though...safety systems can be one of the last bits
> developed, for any new technology.  What do we do when, not if, GMO
> crops cross-fertilize normal crops?  (Answer: the exact same thing we

There are events which are irreversible at the current level of technology.
If there's an organism or a gene at large in the global ecosystem,
there's no way to recall it, save finding and operating upon every
single instance of it.

We obviously don't have such technology yet. Such a powerful
technology would obviously have its own risks as well.

> do when different varieties of "ordinary" crops cross-fertilize.
> Indeed, this was one pre-modern-era method of creating GMOs, so even
> introducing novel types of genes seems unlikely to cause harm.)  What

I am unconvinced by this breezy assertion. I would agree that catastrophic
events are rare (we haven't had any yet), but our abilities to
manipulate and engineer biology are still quite primitive, and there are
at least some safety precautions in place.

> do we do if some computer virus manages to infect and wreck all of a
> city's 3D plotters and other nanotech manufacturing equipment, bringing
> its manufacturing base to a halt?  (Answer: import from other cities,

Answer: do not build systems which are susceptible to such an attack.
Building them is a good recipe for becoming a stratum in the fossil record.

> and/or fall back to the old ways if necessary until replacement systems
> can be imported.  And fire the guy who thought it was a good idea to
> remotely control these systems without some minimal on-site
> verification, thus giving an excuse to connect these systems to the
> Internet without protection in the first place.)  What do we do if we

Firing the guy is perhaps a bit late. He's dead, and so are a few million
others, or a few billion, if we have been particularly foolish. Nothing is
as ridiculous as some guy on TV saying "I'll be personally responsible
in case of a failure". Haha, so funny. So he's the guy who blew up
the Bhopal factory. What do we do now?

> create a friendly (or even Friendly) AI, entrust our economic systems
> to it as it becomes (apparently) more intelligent than we can
> comprehend, then simply stops?  (Answer: never trust a single point of

Optimists are lousy failure modelers. You need to sample all failure
modes exhaustively, including all worst cases. 
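
A toy illustration of what exhaustive sampling means in practice; the
component names and the survival rule are invented:

# Enumerate every combination of component states and test the system
# predicate against each one -- worst cases included by construction.
from itertools import product

components = ["power", "net", "disk"]
states = ["ok", "degraded", "failed"]

def system_survives(state):
    # hypothetical rule: survives anything short of a power failure
    return state["power"] != "failed"

for combo in product(states, repeat=len(components)):
    state = dict(zip(components, combo))
    if not system_survives(state):
        print("failure mode:", state)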

> failure, even a super-bright post-Singularity AI.  Any AI intelligent
> enough to run the world, which actually wanted to do so for the good of
> humanity, would realize this possibility and create nominally competing
> backups.)

Superintelligence is a great recipe for human primate extinction. 
 
> Moreover, in practice, a lot of new tech isn't that reliable - then
> again, if it was, it'd likely already be used by the critical systems.
> For instance, neural networks and genetic algorithms can produce code
> that kind of favors a certain condition...but change the conditions

Evolutionary and neuronal systems typically share one property: they
fail and degrade gracefully, with no single point of failure, by
evolutionary design.

> significantly (with unexpected parameters that are always there in real
> life), and you have to retrain them.  Require 99.999% reliability, and
> the training takes a very long time - each time.  Not to mention that

Aargh. You're confusing reliability and determinism. It's probably
not a good idea to do numerics or cryptography with evolved systems.
They really kick ass as a control paradigm, though.
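
A minimal sketch of the point, with a made-up toy plant: an evolved
controller gain that is slightly off is slightly worse, not broken,
and when the plant changes you retune from the old solution rather
than from scratch:

# (1+1)-evolution strategy tuning a single controller gain.
import random

def cost(gain, plant):
    # accumulated squared tracking error of a toy first-order plant
    x, err = 0.0, 0.0
    for _ in range(50):
        x += gain * (1.0 - x) / plant
        err += (1.0 - x) ** 2
    return err

def evolve(gain, plant, steps=200, sigma=0.1):
    # keep a Gaussian mutant whenever it scores better
    for _ in range(steps):
        trial = gain + random.gauss(0.0, sigma)
        if cost(trial, plant) < cost(gain, plant):
            gain = trial
    return gain

g = evolve(1.0, plant=2.0)  # initial training
g = evolve(g, plant=3.0)    # conditions changed: retrain incrementally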

> they tend to ferret out unstated assumptions about how things should or
> should not be done, thus requiring them to become stated (after
> discovery), which then requires retraining (though usually not entirely
> from scratch).
> 
> More mundanely, power fails.  Communications get interrupted and/or

Power failure is a human design failure. Communications don't get
interrupted if you've installed redundant routes with failover, and don't
get garbled, because the communication protocol deals with that. Bugs in
redundancy failover and quorum sensing are also the designer's fault.
Choosing an underengineered system is the head honcho's or the
financier's fault.

If it doesn't work, it's a human failure. Failure to factor in
human failure is a human failure, too.   
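
A caricature in Python, endpoints invented, just to show that route
failover is an explicit design decision rather than magic:

# Try redundant routes in order; a real protocol layer would also
# retry and checksum, so interrupted or garbled links are survivable
# by design. Endpoints are hypothetical.
import socket

ROUTES = [("gw1.example.org", 443), ("gw2.example.org", 443)]

def send_with_failover(payload):
    for host, port in ROUTES:
        try:
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(payload)
                return True      # delivered via this route
        except OSError:
            continue             # route down: fail over to the next
    return False                 # every redundant route failed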

> garbled.  User errors happen.  And so forth.  Solve any one of those
> problems, and the economic opportunities would be extreme - even
> without consideration of how they could contribute to transhumanistic
> ends.  These things are perceived, usually incorrectly, to have no
> parallel in the "old" ways...but even if the perception is incorrect, I
> wonder - might there be a way to make some technology 100% reliable?

Nothing ever is 100%. But 99.999% is a pretty good approximation.
Which specific technologies and failure modes do you have in mind?
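
For scale, five nines buys you roughly five minutes of downtime a
year. The arithmetic:

# Allowed downtime per year for n nines of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(1, 6):
    a = 1 - 10 ** -nines
    print("%.5f%% up -> %.1f min down/year" % (a * 100, (1 - a) * MINUTES_PER_YEAR))
# five nines -> ~5.3 minutes per year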

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

