[RISKS] Risks Digest 23.74

From: RISKS List Owner (risko@private)
Date: Wed Feb 23 2005 - 11:11:46 PST


RISKS-LIST: Risks-Forum Digest  Weds 23 February 2005  Volume 23 : Issue 74

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/23.74.html>
The current issue can be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
Mobile phone virus infiltrates U.S. (NewsScan)
Re: Component Architecture (Jim Horning, Rick Russell, Kurt Fredriksson,
  Fred Cohen, Jay R. Ashworth, Ray Blaak, Richard Karpinski,
  Geoff Kuenning, Dimitri Maziuk, Stephen Bull, George Jansen)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Tue, 22 Feb 2005 10:02:43 -0700
From: "NewsScan" <newsscan@private>
Subject: Mobile phone virus infiltrates U.S.

The world's first mobile phone virus "in the wild," dubbed Cabir, has
migrated to the U.S. from the Philippines, where it originated eight months
ago, infecting phones in a dozen countries along the way.  Experts say the
mobile-phone virus threat will increase as virus-writers become more
sophisticated and phones standardize on technologies that will make it
easier for viruses to spread not just across devices but across the whole
industry.  Up until now, disparate technical standards have worked against
fast-moving virus infiltration, but Cabir has now been found in countries
ranging from China to the U.K., spread via Bluetooth wireless technology.
The biggest impact of the relatively innocuous virus is that it is designed
to drain mobile phone batteries, says Finnish computer security expert Mikko
Hypponen.  Last November, another virus known as "Skulls" was distributed to
security firms as a so-called "proof-of-concept" alert, but was not targeted
at consumers.  [Reuters/New York Times 21 Feb 2005; NewsScan Daily, 22 Feb
2005]
  http://www.nytimes.com/reuters/technology/tech-tech-security.html

------------------------------

Date: Sun, 20 Feb 2005 22:15:47 -0800
From: "Jim Horning" <home@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

Excellent points.  Made elegantly by Doug McIlroy at the 1968 NATO
Conference on Software Engineering and published in its proceedings
(http://homepages.cs.ncl.ac.uk/brian.randell/NATO/) under the title
"'Mass Produced' Software Components."

Nothing new under the sun, as the prophet said.
  Jim H.  http://virtualbumperstickers.blogspot.com/

  Software components (routines), to be widely applicable to different
  machines and users, should be available in families arranged according to
  precision, robustness, generality and timespace performance. Existing
  sources of components - manufacturers, software houses, users' groups and
  algorithm collections - lack the breadth of interest or coherence of
  purpose to assemble more than one or two members of such families, yet
  software production in the large would be enormously helped by the
  availability of spectra of high quality routines, quite as mechanical
  design is abetted by the existence of families of structural shapes,
  screws or resistors. The talk will examine the kinds of variability
  necessary in software components, ways of producing useful inventories,
  types of components that are ripe for such standardization, and methods of
  instituting pilot production.  The Software Industry is Not
  Industrialized.  We undoubtedly produce software by backward
  techniques. We undoubtedly get the short end of the stick in
  confrontations with hardware people because they are the industrialists
  and we are the crofters.  Software production today appears in the
  scale of industrialization somewhere below the more backward construction
  industries. I think its proper place is considerably higher, and would
  like to investigate the prospects for mass-production techniques in
  software.  In the phrase 'mass production techniques,' my emphasis is on
  'techniques' and not on mass production plain. Of course mass production,
  in the sense of limitless replication of a prototype, is trivial for
  software. But certain ideas from industrial technique I claim are
  relevant. The idea of subassemblies carries over directly and is well
  exploited.  The idea of interchangeable parts corresponds roughly to our
  term 'modularity,' and is fitfully respected. The idea of machine tools
  has an analogue in assembly programs and compilers.  Yet this fragile
  analogy is belied when we seek for analogues of other tangible symbols of
  mass production.  There do not exist manufacturers of standard parts, much
  less catalogues of standard parts. One may not order parts to individual
  specifications of size, ruggedness, speed, capacity, precision or
  character set...   [MDM]

[We received a plethora of comments on this, many of which follow.  Although
there is some repetitiveness, it perhaps amplifies points that need to be
made, REPEATEDLY.  PGN]

------------------------------

Date: Sun, 20 Feb 2005 15:15:54 -0600
From: Rick Russell <rickr@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

Although there are many benefits to software components, there are many
risks as well. Many exploitable failures result from using the same
component in many different applications.  Consider the buffer overflow in
a commonly used JPEG decoding library, for example, or the innumerable
operating systems that have been compromised because of a bug in a popular
off-the-shelf component such as SSH, Samba, or a JPEG decoder.

The problem -- and the place where Dr. Robinson's analogy falls down, I
think -- is that software is a lot more complicated, in an algorithmic
sense, than a pine board or a copper pipe or a steel spring. Even a
widely-used software component that is believed to be reliable can have
unidentified problems, and when you expand the definition of software
components to large-scale systems with tens of thousands of lines of code
(database managers are the most glaring example), you end up with lots of
otherwise well-written software depending on buggy components.

------------------------------

Date: Mon, 21 Feb 2005 15:06:49 +0100
From: Kurt Fredriksson <kurt.fredriksson@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

Paul Robinson points to an interesting phenomenon.

I was, a long time ago, working for a large corporation.  I was a member of
a working group set up to establish how we should document software properly.

Some background: Every product was assigned a product number of the form
"ABC 12345".  Every document describing the product was assigned a
standardised decimal number; for example, "1551" denoted the description of
the product.  Thus the description of "ABC 12345" was labelled
"1551-ABC 12345".

What we discovered for software was that the product itself was never
visible.  For hardware you could produce X items, label them, and put them
on a shelf.  That was impossible for software, as the only visible items
were a printout of the code, which had its own decimal number, or the code
on paper tape (I told you this was a long time ago), which also had its own
decimal number.

Thus software was much more intangible than hardware.  This makes it hard
to describe a software component in a way that makes it easy to find when
you need it, especially for real-time applications where timing is
critical.

I also had an interesting discussion with a project manager who wanted to
hurry along the introduction of reusable software.  I asked him what he
meant by reusable code, and I wasn't surprised that he couldn't define what
he was asking for.  This was the time when OO started to come into fashion.
For a critical real-time application you have to know the response time,
and the black-box philosophy of OO doesn't work there.

Paul Robinson mentions the development from machine code, through
assembler, to higher-level languages.  He then suggests that OO was the
next step in that development, because that is when the hyperbole of
reusability came into fashion.

If we are to use an engineering approach to reusability, it is called
experience.  An experienced engineer reuses his/her knowledge to create a
new product.

What OO has done to the development of software engineering is devastating.
Instead of continuing to develop more advanced languages, we got stuck with
half-assembler languages like C and its followers.  A compiler for a
high-level language (re-)uses code templates.  A compiler for a more
advanced language could reuse even larger chunks of code, without any need
for a programmer to try to find the code in a catalog.

To my disappointment, I have seen very little progress during the last two
decades in the field of software development.  The ever-increasing speed of
processors and the low price of memory have encouraged fast hacking more
than systematic development based on sound engineering principles.

------------------------------

Date: Sun, 20 Feb 2005 21:22:59 -0800 (PST)
From: Fred Cohen
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

PGN and others have been working on this for some time.  The problems are
substantial - in particular, specifying the interfaces and properties of
components, and developing the mathematics required to derive the properties
of composites from the properties of their components and the topology of
their composition.  Unfortunately, this work is inadequately funded and too
few researchers are pursuing it.  The few agencies that fund that sort of
thing are also funding few real innovators.  But as I said - Peter is
working on it... so there is always hope.

Fred Cohen: http://all.net/ fred.cohen at all.net tel/fax: 925-454-0171

  [Fred, Many thanks for the kind words.  See
     http://www.csl.sri.com/neumann/chats4.pdf
     http://www.csl.sri.com/neumann/chats4.html
  But that report is just a beginning of an ongoing struggle, seemingly
  tilting at the same windmills for a long time.  PGN]

------------------------------

Date: Mon, 21 Feb 2005 13:00:48 -0500
From: "Jay R. Ashworth" <jra@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

Aren't reusable software components great?  Why doesn't anyone use them?
Well, Paul: they're much more common than you think.

I build a lot of open source and otherwise free software programs.  *Lots*
of them make use of reusable code components, most notably PHP's PEAR
repository, and the older code repository from which it is still learning
lessons: perl.org's CPAN -- the Comprehensive Perl Archive Network.

And comprehensive it is: CPAN has modules for damn near everything you might
need to do in writing a Perl program.

And the code is pretty decent, too.

But here's the rub: it's not free (as in labor).

Even if you depend on code that someone else wrote to do a job, *someone
still had to write that code*.  And maintain it.  And, most importantly,
*know how to write reusable modules well*.  It's not all that easy.

If you choose poorly among the modules written by other people upon which
you decide to depend, you (or your users) may find yourselves in Dependency
Hell: either you end up on the pointy end of conflicting version constraints
among three or more of your subordinate modules (A and B both depend on C,
but A requires C v2.1 or newer, and B won't work if C is newer than v1.9),
or (the degenerate case of that) C gets upgraded before A, and A won't work
with the new version -- because the writer of C wasn't careful enough about
backward compatibility.
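
The version-conflict flavor of Dependency Hell is easy to see in miniature.
Here is a rough sketch in Python (not any real packaging tool; the modules
A, B, and C are hypothetical): the problem reduces to an empty intersection
of acceptable version ranges.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        name: str        # the module being depended upon
        minimum: tuple   # lowest acceptable version, inclusive
        maximum: tuple   # highest acceptable version, inclusive

    def satisfiable(requirements):
        """Return the version range acceptable to all requirements, or None."""
        lo = max(r.minimum for r in requirements)
        hi = min(r.maximum for r in requirements)
        return (lo, hi) if lo <= hi else None

    # A needs C v2.1 or newer; B won't work if C is newer than v1.9.
    a_needs_c = Requirement("C", minimum=(2, 1), maximum=(99, 0))
    b_needs_c = Requirement("C", minimum=(0, 0), maximum=(1, 9))

    print(satisfiable([a_needs_c, b_needs_c]))   # None: no version of C fits both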

So, using reusable components doesn't necessarily reduce the amount of work
you're going to put into a project -- sometimes it merely redistributes that
work from coders to librarians and testers.

The RISK?  Not looking deeply enough to find all the RISKS.

As it happens, CPAN, in particular, meets many of the requirements Paul
places on reusable components: it has a *very* nice test-harnessing system
built into it (though tracking down why tests *break* may be problematic for
end-users).  And, of course, Perl is an interpreted language at heart, so
it's end-sites that have to *do* that tracking.  But still, there are many
good lessons to be learned from looking at where CPAN is and how it got
there; I wouldn't want anyone to think I was putting it down.

Jay R. Ashworth, Ashworth & Associates, St Petersburg FL USA
http://baylink.pitas.com   +1 727 647 1274  jra@private

------------------------------

Date: Tue, 22 Feb 2005 05:35:33 GMT
From: Ray Blaak <rAYblaaK@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

> If you have small components that you know are right, and you then combine
> those components to manipulate each other according to their published
> interface specifications, the results should be consistently correct. The
> results will be predictable, the usage will be consistent every time.

This is false.  The results will not necessarily be correct at all.  Knowing
that components "are right" is not possible except in very specific and
controlled contexts.

When components are used in new situations, any existing assumptions cannot
be relied on at all, without tedious and careful work to reestablish them.
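
A toy illustration in Python (made-up data, standard library only): each
component below is correct against its own specification, yet the composite
is wrong because of an unstated assumption about *which* ordering "sorted"
means.

    import bisect

    words = ["apple", "Banana", "cherry"]

    # Component 1: sort case-insensitively -- correct per its interface.
    shelf = sorted(words, key=str.lower)       # ['apple', 'Banana', 'cherry']

    # Component 2: binary search -- correct, *provided* the input is sorted
    # under the default ordering, an assumption nobody re-established here.
    i = bisect.bisect_left(shelf, "Banana")

    print(i, shelf[i])   # 0 apple -- the search misses, though each part is "right"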

Software components are not physical components. They do not scale the same
way.

That the software industry does not offer the same reliability and quality
as physically engineered products is not because software practitioners are
pulling a fast one (although they often are, but for different reasons).

They don't offer the same guarantees, not because they don't want to, but
because they cannot. Getting software right is hard. Very hard. So hard that
even really really smart people are not willing to be on the hook for it.

Customers have to tolerate software with mistakes, because that is the only
way they can get affordable software at all. If they insisted on the same
guarantees, they wouldn't be able to pay for it.

Yes, we need to make better software. We need to try. Things can be
improved. They must be. The current situation is not acceptable.

But it is not easy. Humans don't seem to be good at it.

------------------------------

Date: Mon, 21 Feb 2005 12:32:07 -0800
From: Richard Karpinski <dick@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

It's actually worse than you say.

Nobody delivers programs without serious defects built in, bugs as we say.

All programs come with modes which confuse their users and cause them to
commit errors.

All operating systems which support applications do so by providing
mechanisms to switch between those applications. The applications are
separate and again constitute major modes where gestures are interpreted
differently. This means that users must learn each application from the
beginning.  Even text entry, which virtually every application uses, may
differ between applications in many ways, causing more confusion and
frustration.

But help is on the way.

There is exactly one university level text which addresses these matters in
a systematic way. The book is "The Humane Interface" by Jef Raskin. He is,
incidentally, the creator of the Macintosh project at Apple. He also led the
creation of the Canon Cat almost two decades ago. In 1987, the Cat provided
a consistent system which had only two modes: programming and using. Most
folks used only the latter. Twenty thousand of these machines were sold and
NO USER EVER REPORTED A BUG!

Be that as it may, why should anyone care? Because he's doing it again.  In
a little while, you will be able to download Archy from the Raskin Center
for Humane Interfaces. Archy has bugs now and will still have some when the
Alpha release occurs, soon. The Raskin Center will try very hard to have
none left by the time the general release occurs, many months later. You all
are invited to help accomplish that difficult but desirable goal.

Archy provides a thorough text handling facility which does not require use
of a mouse and does not require documents to be named or filed in
folders. Instead of separate applications, Archy accepts collections of
commands. Once accepted, every command is available all the time. Such
clumsy devices as modal dialog boxes are rigorously avoided in favor of more
humane means to accomplish those ends.

Archy is the pronunciation of RCHI and you can find more information about
the system and its philosophy at RaskinCenter.org at your leisure.

------------------------------

Date: 21 Feb 2005 09:33:57 +0100
From: Geoff Kuenning <geoff@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

Paul Robinson wrote a lengthy essay asking that we stop building custom
software and start assembling it from components.

This is nothing new, of course.  People have been calling for
component-based software for decades.  The usual analogy isn't a
peanut-butter sandwich, but rather the electrical components, especially
ICs, used to build the computers themselves.

Of course, the problem with this suggestion is that (as far as I know)
nobody has been able to tell us exactly what components are needed or how we
are going to interconnect them.

Unix made a pretty good start with the pipe model; I have built some very
complex applications by combining its basic utilities.  Unfortunately, we
haven't succeeded in moving beyond the model; in fact, I've observed that
many younger Windows-trained programmers have begun to forget the lessons
and have stopped writing "pipeable" utilities.
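
What a "pipeable" utility amounts to, at minimum, is a filter that reads
standard input, writes its results to standard output, and keeps any
diagnostics off standard output, so that it can sit anywhere in a pipeline.
A minimal sketch, in Python here though any language will do (the file name
upcase.py below is just for the usage example):

    #!/usr/bin/env python3
    # Toy filter: upcase each line of stdin and write it to stdout.
    import sys

    def main():
        for line in sys.stdin:
            sys.stdout.write(line.rstrip("\n").upper() + "\n")

    if __name__ == "__main__":
        main()

Saved as upcase.py, it drops into a pipeline like any other filter, e.g.,
ls -l | python3 upcase.py | grep '^D' to shout out just the directories.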

To be fair, we actually have made quite a bit of progress.  I don't write in
assembly language any more, and I use extensive libraries to build my
software.  Some of those libraries, though complex, provide very powerful
components (e.g., text-entry windows with scrollbars).  But compared to the
complexity of what we are building, even these components are akin to
7400-series TTL.  (And if you look at the systems that were built with early
7400 gates, I think you'll find much the same "build everything from
scratch" approach--30 years ago were still creating flip-flops by
cross-connecting two NAND gates.)

The other factor is that the deceptively low materials cost of software
development--anybody with a computer can get into the field--misleads us
into thinking we are qualified to build a large system.  Building the first
breadboard of a large electronic system would cost thousands or even tens of
thousands in parts and labor; that encouraged caution and kept costs down.
Building the first prototype of a software system requires only the time of
the coder(s), encouraging much larger designs.

In other words, the problem isn't a lack of components, it's that we're
building much larger systems in relation to the power of those components.
The software equivalent of a PB&J sandwich is easy and much more powerful:

        ls -l | grep -v '^d' | awk '{total+=$5}END{print total}'

Geoff Kuenning   geoff@private   http://www.cs.hmc.edu/~geoff/

------------------------------

Date: Mon, 21 Feb 2005 19:20:07 -0600
From: dmaziuk@private (Dimitri Maziuk)
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

The problem with analogies is that they never work.

An architect makes a few assumptions: that the owner will not move the house
and put it down on a bed of quicksand, that surrounding houses will not swell
up and lean on his work from all sides.  (Some of his assumptions may later
prove unwarranted, like that the builder won't use substandard materials or
that the owner won't try to turn the attic into an indoor pool, but that's
another story.)

Software developers cannot even assume that a user's Pentium will return 4 for
8/2. Not to mention "unforeseen interactions between components in a complex
system" -- since they are "unforeseen" we can't foresee them, and if we can't
foresee them, all bets are off.

There was an article in an earlier RISKS issue that proposed exactly that:
"all bets are off", or "fault-oblivious" programming -- that's basically
what happens if I make the same kind of assumptions in my work as an
architect does in his.  Attractive as it sounds, in real life the result is
hordes of angry users demanding that I fix my stuff YESTERDAY! (or else I
don't get paid).

Did I mention users?  People who don't read the fine manuals?  A screwdriver
is only guaranteed for life if you don't use it as a cold chisel.  Otherwise
the handle *will* break, and they most likely *won't* replace it under
warranty.  Unfortunately, software is not nearly as simple as a screwdriver,
and to find out what it can and cannot do you need to read a few dozen pages
of (usually) mind-numbingly boring manual -- in spite of the Apple and
Microsoft advertisements that have been telling you "all you need to do is
just point and click" for decades.

> But in general, this is not how we are designing software.

Because in real life we aren't.  Because in real life partitioning into
smaller and smaller components comes with efficiency overhead that nobody
wants.  Because other people's components will only work for you if those
people's domain model is sufficiently close to yours -- otherwise they will
be too generic to be of any use to anybody, and all they are is overhead.
And so on and so forth.  Sad, but true.

------------------------------

Date: Tue, 22 Feb 2005 08:20:43 +0000
From: Stephen.Bull@private
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

I agree with Paul Robinson that the software industry could do better.  You
only have to look at recent spectacular failures in computer based systems
to confirm this (e.g. Ariane V and the (UK) Child Support Agency).

However, I don't think his article tells the whole story.  As any reader of
RISKS knows, software is very different from any other product - so any
comparisons should be treated with extreme caution.  Software is not
constrained by the laws of nature (until or unless it comes to controlling a
real system) - consider the recent Matrix trilogy of films.  Thus while
traditional manufacture is bounded by well-established physical parameters
which lend themselves to repeatable solutions, requirements for software
systems are not so bounded.  This tends to mean that the requirements for
each system are unique.  And because of the perception that software can do
anything, the requirements tend to be complex too: arguably excessively so.
Working this down into the details of implementation, this means that the
components needed tend to be unique for each system - thus limiting the
possibilities of reuse.

The problem is not helped by the perception that software is "easy to
change".  (I suspect this is because software production does not involve
making physical artifacts in the normal way.)  As a result, late changes to
system requirements tend to get implemented in software, where the far more
reliable solution may be to make a small adjustment to the hardware.  This
usually means "tweaks" throughout the system, bypassing the usual design
processes.  This can mean that the resulting product is not amenable to
reuse - either because the developer did not have time to make the change
"properly" or because the associated documentation was not updated in line
with the software itself.

Reuse is not always a good thing, either - witness Ariane V, for example.
However, I don't think it is fair to claim (nearly) zero reuse within the
software industry.  On several projects which I am aware of (in the safety
critical field) the developers have bent over backwards to reuse legacy code
in order to reduce development costs - usually with much more success than
Ariane V.

Another facet of the problem is the requirements: problems with requirements
have been cited as the major cause of failures in many software systems.
Often the software will meet its requirements - or, at least, the designers'
interpretation of them - but the problem lies in the requirements themselves
being inadequate.

All in all, I agree that software reuse could be improved.  However, it is
not fair to level all the criticism at software development. Good software
(well designed, and trouble free) has been produced: some of this software
even has a significant amount of reused code.  Where reuse is low, the lack of
reuse is partly due to the different nature of software; it is also partly
due to lack of appreciation of the software lifecycle.  Several disciplines
need to work together if software reuse is to be improved successfully.

Stephen Bull CEng MBCS MIEE AMIRSE  Systems Safety Engineer
Westinghouse Rail Systems  T: +44 (0)1249 442593  E: stephen.bull@private

------------------------------

Date: Tue, 22 Feb 2005 07:52:57 -0500
From: "George Jansen" <GJANSEN@private>
Subject: Re: Component Architecture (Robinson, RISKS-23.73)

I question Mr. Robinson's take, both as a programmer and sysadmin and as one
who has had a kitchen remodeled.

For the first case, I think that the depiction of the state of
do-it-yourselfness borders on caricature, at least for the business as it
has been in the last dozen years. The trucking companies may suffer pain
implementing Great Plains or Deltek or an ERP system, but they aren't
rolling their own accounts receivables systems. An awful lot of small
systems are built in IDEs with lots of dragging & dropping of icons, and
little or no hand-coding.  Oracle's JDeveloper, for example, will allow you
to generate quite elaborate JSP systems, whether or not you can tell
"implements" from "extends" or recall what "private static" adds to
"private".

And as this use of components has been increasing, have security and
stability increased with it?  The obvious example of Microsoft's COM
suggests: Not always. Security alerts do appear for the Java VM. I'd be
curious to know the consensus of the Risks contributors on this one.

As for kitchens, on some flat earth, in some perfectly plumb and squared
room, the component-based model works perfectly every time.  Since 1989 I
have lived in two houses of very different look and age, and been through
renovations of both. Neither house, the one from the 1980s nor the one from
the 1930s, was perfectly plumb or square. Neither had everything--electrical
outlets, for example--where it should be. The quality of the renovation
depended on the care, experience, and common sense of the renovator. Even
the house of prefabricated components relies on the existence of
subcontractors who know not to saw through joists to run ducts. Christopher
Alexander may be a better guide to this world than Eli Whitney.

Finally, building technologies have their lurking flaws. For a simple
example, all rowhouses built in the Eastern US from about 1975 to about 1990
used fire-retardant plywood to obviate the need for firewalls protruding
beyond the roof. Then it was found that this plywood crumbled away not just
at fire temperatures but, more slowly, at normal summer attic
temperatures. If you think that the house owners were made whole by the
courts, you don't know enough about accountants and lawyers.

Years ago I quoted to an architect Weinberg's line that "If architects built
buildings as programmers build programs, the first woodpecker to come along
would destroy civilization." "Oh," she said, "but that's just how they do
build them."

------------------------------

Date: 29 Dec 2004 (LAST-MODIFIED)
From: RISKS-request@private
Subject: Abridged info on RISKS (comp.risks)

 The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.   Mailman can let you subscribe directly:
   http://lists.csl.sri.com/mailman/listinfo/risks
 Alternatively, to subscribe or unsubscribe via e-mail to mailman your
 FROM: address, send a message to
   risks-request@private
 containing only the one-word text subscribe or unsubscribe.  You may
 also specify a different receiving address: subscribe address= ... .
 You may short-circuit that process by sending directly to either
   risks-subscribe@private or risks-unsubscribe@private
 depending on which action is to be taken.

 Subscription and unsubscription requests require that you reply to a
 confirmation message sent to the subscribing mail address.  Instructions
 are included in the confirmation message.  Each issue of RISKS that you
 receive contains information on how to post, unsubscribe, etc.

   INFO     [for unabridged version of RISKS information]
 .UK users should contact <Lindsay.Marshall@private>.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you NEVER send mail!
=> The INFO file (submissions, default disclaimers, archive sites,
 copyright policy, PRIVACY digests, etc.) is also obtainable from
   <http://www.CSL.sri.com/risksinfo.html>
 The full info file may appear now and then in future issues.  *** All
 contributors are assumed to have read the full info file for guidelines. ***
=> SUBMISSIONS: to risks@private with meaningful SUBJECT: line.
 *** NOTE: Including the string "notsp" at the beginning or end of the subject
 *** line will be very helpful in separating real contributions from spam.
 *** This attention-string may change, so watch this space now and then.
=> ARCHIVES: ftp://ftp.sri.com/risks [subdirectory i for earlier volume i]
 <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive
 http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
   Lindsay has also added to the Newcastle catless site a palmtop version
   of the most recent RISKS issue and a WAP version that works for many but
   not all telephones: http://catless.ncl.ac.uk/w/r
 <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing

------------------------------

End of RISKS-FORUM Digest 23.74
************************


