RISKS-LIST: Risks-Forum Digest  Sat 28 September 2002  Volume 22 : Issue 27

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <URL:http://catless.ncl.ac.uk/Risks/22.27.html>
and by anonymous ftp at ftp.sri.com, cd risks .

  Contents:
Risky Auckland harbour bridge lane signals (Nickee Sanders)
Dewie the Turtle comes out for computer security (NewsScan)
Re: Real risks of cyberterrorism? (Ralf Bendrath)
Probability Risk Assessments/Homeland Insecurity (Peter B. Ladkin)
Paper ballots, no panacea (Andy Neff)
Leeches for Sale (Rebecca Mercuri)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Tue, 24 Sep 2002 11:47:37 +1200
From: Nickee Sanders <njsat_private>
Subject: Risky Auckland harbour bridge lane signals

The Auckland harbour bridge is an arched, 8-lane structure, whose inner 4
lanes are employed in a so-called "tidal" system to cope with changing
traffic demands.

For decades, control was achieved by a simple system of lane signals
above each lane, every 200m or so: a green arrow if the lane was open to
traffic, a red cross if it was closed, and a diagonal arrow if the lane was
closing ahead.

Now some bright spark has obviously decided it's much simpler to indicate
that a lane is open by having NO SIGNAL AT ALL above it.  Shall we open a
RISKS sweepstake on how long it'll be before a power outage causes an
accident?
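
The failure mode is easy to state in code.  A minimal sketch in Python
(hypothetical state names and a toy controller, not the actual bridge
equipment) of why "no signal means open" is fail-deadly rather than
fail-safe:

  # Toy model of the two signalling conventions (hypothetical names, not
  # the real bridge controller).  Under the old convention a dead signal
  # head is visibly anomalous; under the new one it is indistinguishable
  # from "lane open".
  OLD_ASPECTS = {"open": "green arrow", "closed": "red cross",
                 "closing": "diagonal arrow"}
  NEW_ASPECTS = {"open": None,  # no signal at all means "open"
                 "closed": "red cross", "closing": "diagonal arrow"}

  def displayed(aspects, lane_state, power_on):
      """What a driver actually sees above the lane."""
      return aspects[lane_state] if power_on else None

  # A power failure over a lane that is actually CLOSED:
  for name, aspects in (("old", OLD_ASPECTS), ("new", NEW_ASPECTS)):
      seen = displayed(aspects, "closed", power_on=False)
      print(name, "scheme: driver sees", repr(seen),
            "- reads as open?", seen == aspects["open"])
  # old scheme: a dark head is not the green arrow, so it reads as a fault.
  # new scheme: a dark head IS the "open" indication -- fail-deadly.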

[Fortunately, head-on crashes are prevented by the use of a movable barrier.]

Nickee Sanders, Software Engineer, Auckland, New Zealand

------------------------------

Date: Thu, 26 Sep 2002 10:34:11 -0700
From: "NewsScan" <newsscanat_private>
Subject: Dewie the Turtle comes out for computer security

In the tradition of Smokey the Bear's campaign for fire safety, the new
cartoon figure Dewie the Turtle is being promoted by the Federal Trade
Commission to teach kids and their parents about the importance of computer and
network security (http://www.ftc.gov/infosecurity). Dewie urges the
selection of hard-to-guess passwords, the use of antivirus software and
computer firewalls, and other security practices. Do as Dewie says or you'll
be sorry. (*San Jose Mercury News*, 25 Sep 2002; NewsScan Daily, 26
September 2002)
  http://www.siliconvalley.com/mld/siliconvalley/4151919.htm

    [So, Do We Do as Dewie Says?  OK, but that is nowhere near enough.
    But that's just what the recent draft of the President's Critical
    Infrastructure Protection Board (CIPB) said *each user* should do.
    Unfortunately, the CIPB's recommended 60 measures totally ignore the
    reality that most computer systems are so lame that those user
    measures are still seriously inadequate.  Are you Dewie-eyed?  Not me.
    The Dewie I'd root for would move faster than a turtle.  PGN]

------------------------------

Date: Mon, 23 Sep 2002 22:07:59 +0200
From: Ralf Bendrath <bendrathat_private-berlin.de>
Subject: Re: Real risks of cyberterrorism? (Norloff, RISKS-22.22)

> ... study by the Gartner Group ... not referenced 

The non-publication of the Gartner/NWC study is a problem, I agree. At
least, the audio recording of the conference discussing the outcomes
afterwards is available:
  http://www3.gartner.com/2_events/audioconferences/dph/dph.html.

But let's talk about what we know from open sources. The outcome of the
study was, in my understanding, that the assumption "give me ten hackers,
and I'll bring this nation to its knees" is plainly wrong. The U.S. military
(including NSA) probably has more experience in offensive computer network
attacks (CNA) than any other government body in the world. CNA have been
part of the doctrine of "information operations" since 1998 (see Joint
Pub. 3-13, Joint Doctrine for Information Operations,
http://www.dtic.mil/doctrine/jel/new_pubs/jp3_13.pdf) and have been used in
Kosovo and on other occasions. The "after action reviews" and the people
from these units I talked to all concluded that it turned out to be much
more difficult than expected. It takes an immense effort in net
intelligence (NETINT), technology, human expertise, and manpower to do any
serious damage.

Therefore, government cyber threat estimates in recent months (after some
post-9/11 hysteria about "cyberterrorism") have been scaled back to a more
sober assessment. Though I am normally not in line with him, I totally
agree with the conclusion Richard Clarke, the White House's cyber security
czar, drew after the Gartner/NWC exercise: "There are terrorist groups that
are interested. We now know that al Qaeda was interested. But the real
major threat is from the information-warfare brigade or squadron of five or
six countries." (quoted in Ariana Eunjung Cha and Jonathan Krim, "White
House Officials Debating Rules for Cyberwarfare", *The Washington Post*, 22
August 2002).

If you look at the latest National Strategy to Secure Cyberspace, released
on September 18 (http://www.whitehouse.gov/pcipb), the chapter on threats
and vulnerabilities contains one scenario built from a number of
cyber-security incidents that have already happened:

"Consider the Following Scenario... A terrorist organization announces one
morning that they will shut down the Pacific Northwest electrical grid for
six hours starting at 4:00PM; they then do so. (...) Other threats follow,
and are successfully executed, demonstrating the adversary's capability to
attack our critical infrastructure. (...)  Imagine the ensuing public panic
and chaos." (p. 4)

It certainly looks impressive, but many of these incidents occurred not on
purpose but through plain technical failure. That is not something any
cyber attacker can rely on. And the main examples of cyber vulnerabilities
and risks in the Strategy are the Nimda and Code Red worms. These kinds of
"weapons" cannot really be used for any directed attack, and to my
knowledge they are not at all capable of spreading to SCADA systems that do
not rely on MS Outlook. ;-)

I have just finished a review of the changes in the U.S. cyber threat
discourse before and after 9/11, and one conclusion for me was: 

"The threat perception can change when the criteria for a threat are
changed. The problem here is: There still are no clear criteria even within
government organizations for deciding what is an attack and what is not, and
some security agencies tend to overstate the real incidents.  Until 1998 the
Pentagon counted every attempt to establish a telnet connection (which can
be compared with a knock on a closed door) as an electronic attack.  Another
example shows even better how arbitrary some estimates are. When asked by
the Department of Justice about the number of computer security cases in
2000, the Air Force Office of Special Investigations (AFOSI) staff counted
14 for the whole Air Force. The Department of Defense's overall count for
all services, to the surprise of the AFOSI staff, later summed to some
30,000. The explanation: the other services had counted non-dangerous
events like unidentified pings as hacker attacks, while AFOSI had counted
only serious cases.  On the vulnerability side of the problem as well,
there are still no standard procedures for identifying and estimating the
vulnerability of critical infrastructures. These have been under
development since June 2000 in the Critical Infrastructure Protection
Office's project "Matrix". Slowly, a discussion seems to be emerging on the
validity of statistics about the numbers, dangers, and damages of computer
insecurities. Even Richard Power of the Computer Security Institute, which
conducts the annual Computer Crime Survey with the FBI, has been quoted
making some self-critical remarks on this problem."

(I can send a copy of the full article to anyone interested. It will be
published this fall as: The American Cyber-Angst and the Real World -
Any Link?, in: Robert Latham (ed.), Bytes, Bombs, and Bandwidth, New
York: New Press, 2002)
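
The counting-criteria problem is easy to demonstrate.  A minimal sketch in
Python (invented log records and thresholds, purely illustrative) of how
one and the same event log can yield either 14 "attacks" or some 30,000,
depending on the classifier:

  # Illustrative only: the same event log classified under two different
  # notions of "attack".  All records and thresholds here are invented.
  events = (
      [{"kind": "ping", "severity": 0}] * 25000            # unidentified pings
      + [{"kind": "telnet_attempt", "severity": 1}] * 5000 # knocks on the door
      + [{"kind": "intrusion", "severity": 9}] * 14        # serious compromises
  )

  def liberal_count(log):
      """Every unexplained probe is an 'attack' (the pre-1998 style)."""
      return sum(1 for e in log if e["severity"] >= 0)

  def strict_count(log):
      """Only serious, confirmed incidents count (the AFOSI style)."""
      return sum(1 for e in log if e["severity"] >= 9)

  print(liberal_count(events))  # 30014 -- the headline number
  print(strict_count(events))   # 14    -- the sober number

Same data, different criteria, three orders of magnitude apart - which is
exactly the AFOSI/DoD discrepancy described above.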

Talking about "cyberterrorism": My problem with many of the publications and
fears about it is the total focus on vulnerabilities. While you can see tons
of quotes from "security professionals" or IT lobbyists on this, you never
find any expert on real-world terrorism being asked about it by the
media. If you try to think from this angle, the threat becomes much smaller:
Terrorists are not used to hacking, and hackers and terrorists belong to
totally different milieus and cultures. Terrorists don't need to hack,
because low-tech approaches work perfectly well (I just say "boxcutters").
But even more important: terrorism is a form of political communication.
The terrorist act itself is not the goal; the goal is the message it
transports and its psychological impact. For this, computer attacks are
just not "sexy" enough - you don't get those "great" TV pictures by
bringing down a telephone network or a computer in a satellite control
center. So, IMHO, terrorists will use the nets more and more for
organisational and communication purposes, but not for attacks.

So I guess my main point is: be aware of the risks related to computer
networking, but do not join in the fearmongering that parts of the media
and some interested parties on Capitol Hill are engaged in.

------------------------------

Date: Wed, 25 Sep 2002 10:02:16 +0200
From: "Peter B. Ladkin" <ladkinat_private-bielefeld.de>
Subject: Probability Risk Assessments/Homeland Insecurity (RISKS-22.21 to 23)

I'm glad that Stephen Fairfax in RISKS-22.23 considers as a "classic
example" my rejection in RISKS-22.22 of his claim in RISKS-22,21 that a
probabilistic risk assessment (PRA) finds "overwhelming evidence" that
arming commercial pilots is an overall plus. I thank him for that
characterisation. I myself didn't rate my note so highly. I hope to do
better here.

Fairfax doesn't buy my criticism of his reasoning, not by a long margin. It seems
worth understanding the issue in detail, for two reasons. First, while the
topic of arming commercial pilots is only marginally relevant to Risks (in
that computerised control systems may be more vulnerable to bullets than
hydromechanical systems), the subject of the appropriate application of PRAs
is central. It was discussed in Risks eleven years ago inter alia by Hoffman
(RISKS-12.16), Agre (RISKS-12.21, 12.24), Gardner, Seidel (RISKS-12.22), and
Kerns (RISKS-12.24). Second, I have seen the type of invalid reasoning,
exemplified by (**) below, more than once in discussions of PRAs for
particular phenomena.  It seems useful to put a refutation in the public
record.

To the argument.

Fairfax correctly notes that I focus on just one assertion of his, namely
that (B): there is "overwhelming evidence" that (A): arming commercial
pilots would ameliorate hijacking situations. He wishes us to believe (A)
with him on the basis of (B). Indeed, were (B) to be true, we would be
irrational not to believe (A). How does he wish us to believe (B)? On the
basis (C) of assessing the "probabilities of success and failure"; in short,
a PRA.

Let us look at the form of the argument. First, we have the 
indisputable premise that
  (A) follows from (B). 

Fairfax's argument then continues ostensibly with the form: 
  (*) (C), therefore (B), therefore (A). 

But in fact it doesn't have this form, as his reply in RISKS-22.23 makes
clear. His argument actually has the form:

  (**)  If one were to perform (C), one would find (B). Therefore (A)

That Fairfax hasn't actually performed a PRA (C) is made clear by his
comments in RISKS-22.23 about how one would go about doing it.  Not: how one
actually did it; but, rather: how one would go about doing it were one to do
so.
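
Put schematically, reading the counterfactual in (**) simply as an
unestablished conditional, the contrast is (a minimal propositional
sketch, in LaTeX):

  (*)\quad  C,\; C \to B,\; B \to A \;\vdash\; A   % valid: modus ponens, twice
  (**)\quad C \to B,\; B \to A \;\vdash\; A        % invalid: C is asserted
                                                   % nowhere, so B is never
                                                   % detached and A does not
                                                   % follow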

It would be convenient were (**) to be valid under the supposition (§).
For then we could achieve our desired results, not by actually doing
things to achieve them, but simply by imagining the outcomes were we to do
so. Making wine, bringing up children, winning the Olympics, and proving
Fermat's Last Theorem would all be so much easier than we had thought. But
unfortunately (**) is not valid.

Fairfax wishes us to believe (A). The reasoning he proposes is (*).  He
himself believes (A) on other grounds, though, for he does not have the
components of (*); he has at most (**). So the grounds he actually has for
believing (A) are not the grounds he is proposing that there are for
believing (A).  C.S. Peirce called this "sham reasoning" [1]. I called it
bogus. Reader's choice.

So much for the general point. I also doubted that the chain of reasoning
(C, therefore B) could be established, even were one to attempt it. I said
Fairfax had no data. He disputes that. We have a different classification of
data. I think that to perform any kind of probabilistic assessment of the
consequences of arming commercial pilots, he needs at least some cases in
which commercial pilots have been armed, and as far as we know there aren't
any. He claims that all he needs are cases of attempted hijacking. OK, let's
take that at face value and see what we get.

He does note that the data are "sparse". Let me indicate how sparse.
Aviation Safety Network lists just 16 occasions in the 50 years before
September 11, 2001 on which aircraft have been lost to hijacking incidents
[2]. These are the most damaging hijacking incidents, those in which
larger numbers of lives were lost. Others were more or less successfully
concluded. The list
is not complete. It omits, for example, one hijacking-to-destruction of a US
domestic flight (PSA, a BAe 146 near San Luis Obispo, CA on 12 December 1987
by a passenger with a gun). It also omits three suicide/murder incidents by
pilots (one Air Maroc, whose date I do not recall, and two recent ones to a
Silk Air Boeing 737 in Indonesia and an Egyptair Boeing 767 off the East
Coast of the US. Note that the first is supposed, not proven, and the two
latter are so considered by the NTSB but not necessarily by other parties
to the investigation). So let's double the number to
30. Can these, *probabilistically*, tell us that arming US domestic pilots
will help or hinder?  Of course not.  There are more potential confounding
factors than there are incidents, so it is impossible to control for them,
except in the one obvious case of the 4 incidents due to Al Qaeda
operations.
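
The degrees-of-freedom point can be made concrete.  A toy sketch in Python
(random, fabricated data, emphatically NOT hijacking statistics): with
fewer incidents than candidate factors, the design matrix cannot have full
column rank, so the factors' effects cannot be separated no matter what
outcomes are recorded:

  # Toy illustration (fabricated data): more candidate confounding
  # factors than observed incidents means the factor effects are not
  # identifiable, whatever the outcomes were.
  import numpy as np

  rng = np.random.default_rng(0)
  n_incidents, n_factors = 30, 40          # more factors than cases
  X = rng.integers(0, 2, size=(n_incidents, n_factors))

  print(np.linalg.matrix_rank(X))          # at most 30, always < 40:
  # at least 10 directions in factor space are indistinguishable on this
  # sample, so "controlling for" the confounders is impossible.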

That virtually nothing probabilistically follows from these incidents does
not mean that they cannot be analysed. One could go through on a
case-by-case basis and propose counterfactuals: what do we think would have
happened, had the cockpit crew been armed? Indeed, Fairfax proposes
something like this. Additional incidents may become appropriate for such an
analysis, say the Air Algerie incident which Fairfax notes. But this is not
any kind of probabilistic evaluation, let alone a PRA, as proposed in
(C). It is a counterfactual case analysis, the typical analysis used in
accident investigation of all sorts, and does not have a role to play in an
argument of form (*).

Fairfax regrets that I didn't consider his "additional layer of safety"
argument. OK, I'll bite. First of all, it is a metaphor.  Second, I think it
is an inappropriate metaphor to describe what is being proposed. The policy
of the FAA and US domestic airlines up to now has been "clean
aircraft". That is, no anti-personnel weaponry on board (with certain -
unloaded - exceptions). The justification is that, if there is none on
board, then none can be used. Arming pilots violates this policy. Far from
adding an "additional layer of safety", it peels one off and replaces it
with another. Besides, third, I don't think evaluating metaphors, mine above
included, is an appropriate way to reason in safety cases. Fourth, what
about cases in which pilots themselves are the problem (there have been
three, at least, as above, fully ten per cent of what I take to be the total
if one is impressed by such arguments from tiny numbers)? Even the deployment
of weaponry on board by trained enforcement agents has had problems which
would not occur were the weaponry not to be present [3].

Finally, readers please note that I have neither said nor implied what my
considered position on (A) actually is. As I said above, I don't consider it
a theme appropriate to the Risks Forum.

Footnotes:

[1] Peirce used the phrase to refer to reasoning to a conclusion
to which the proponent is already committed for other reasons. See
Haack, Manifesto of a Passionate Moderate, Chicago U.P., 1998, p8ff.
I am using it here to characterise a situation in which the reasons
one gives for a conclusion are not the reasons one really has, which 
is the same thing in other words. Haack was more concerned with the
case in which a proponent was committed to a conclusion and would 
not give it up no matter what. I am not suggesting in any way that
this is the case here.

[2] http://aviation-safety.net/events/seh.shtml

[3] See Bob Herbert's frightening NYT account of what happened to 
Dr. Bob Rajcoomar, a retired army major and physician, published in 
The International Herald Tribune on 24 September, 2002, at
http://www.iht.com/articles/71537.htm  [Also made *The NY Times*.  PGN]

Peter B. Ladkin, University of Bielefeld, http://www.rvs.uni-bielefeld.de

------------------------------

Date: Wed, 18 Sep 2002 17:25:36 -0700
From: Andy Neff <aneffat_private>
Subject: Paper ballots, no panacea

In analyzing the recent election failures in Florida, it is important to
avoid jumping to erroneous conclusions about the role that machines can play
in election systems of the highest quality. There are significant
differences between information-based election systems and the simplistic
electronic-based systems (often called Direct Recording Electronics or DREs)
generally offered in the market today. Research on information-based voting
systems has been conducted since the 1980s.  Little, if any, of this
research has been incorporated into the electronic voting systems widely
used today.

First of all, the vast majority of objections to electronic systems are not
directed at fraud, which is actually the biggest weakness of simple DREs.
Rather, objections are often directed at issues of reliability and
performance.  These issues are certainly important to the voting process;
however, they can be resolved through proper certification, testing, and
training.  Such flaws are avoidable and are not problems uniquely associated
with voting systems.

Remember the butterfly ballot in Palm Beach County, Florida in the 2000
Presidential Election? This example clearly illustrates that even certified
paper-based systems are subject to reliability and performance problems.
Justifiable indignation, then, should be focused on the absurdly outdated
and ineffective election standards and certification process.  Ultimately,
it is the job of an unbiased standards organization to enforce minimum
reliability and performance policies for election systems.

An unfortunate consequence of belaboring performance issues is that the
thorniest election issues are not examined carefully enough. Those against
electronic solutions have concluded, without appropriate supporting
evidence, that election systems that use countable paper ballots are most
trustworthy.

The fallacy of this conclusion is demonstrated both by the facts that are
often given to support the paper ballot solution and by those that are
conveniently omitted:

1) As most who witnessed the 2000 US Presidential Election agree, paper
ballots created problems. Paper ballots, be they optical scan or punch card,
still have to be counted by machines in an election of any reasonable
size. This means that the opportunity for election fraud is not eliminated
by the use of paper, but only shifted to a different point in the election
process.

2) It is often suggested that electronic voting systems get retrofitted with
some form of paper ballot output. I call this the $2500 #2 pencil solution.
Doesn't an electronic machine retrofitted this way remain just as vulnerable
to "catastrophic failure," "malfunction," and "usability problems"?

3) While most people intuitively understand how a collection of voted paper
ballots could be supervised procedurally, in reality the process is always
far from perfect.  Even in what was arguably the most scrutinized election
in history -- the 2000 US Presidential Election -- ballots were lost,
damaged, and/or destroyed.  We don't know, and never will know, the extent
of the damage; nor will we know how much damage was due to accident and how
much was due to malice. But it is clear that many voters were
disenfranchised.

The truth is that paper-based voting systems are "voter verifiable" in that
they can help each voter check that his/her choices are recorded properly.
But they are not "publicly verifiable" in that they cannot ensure that the
final count is an accurate tally of all the voters' choices.  Simple DRE
electronic voting systems are neither voter verifiable nor publicly
verifiable.  Our goal should be to create a system that is both, and modern
information technology gives us this opportunity.
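
To make the two properties concrete: a deliberately naive sketch in Python
(invented names; it publishes ballots in the clear and so has no voter
privacy at all; real schemes use cryptographic commitments and verifiable
mixing to provide the same checks without revealing votes):

  # Deliberately naive sketch of the two verifiability properties.
  # NO privacy: a real system must hide the voter/ballot link.
  import hashlib
  from collections import Counter

  bulletin_board = []          # public: anyone may read and recount it

  def cast(choice):
      """Record a ballot; hand the voter a receipt."""
      bulletin_board.append(choice)
      return hashlib.sha256(choice.encode()).hexdigest()

  def voter_verify(receipt, choice):
      """Voter-verifiable: my choice was recorded as I intended."""
      return (hashlib.sha256(choice.encode()).hexdigest() == receipt
              and choice in bulletin_board)

  def public_verify(announced):
      """Publicly verifiable: anyone can recount the whole board."""
      return Counter(bulletin_board) == announced

  r = cast("alice"); cast("bob"); cast("alice")
  print(voter_verify(r, "alice"))                         # True
  print(public_verify(Counter({"alice": 2, "bob": 1})))   # True

A simple DRE offers neither check; a paper system offers roughly the first
but not the second, which is the distinction drawn above.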

Another common objection raised is the use of "proprietary systems."  I
wholeheartedly support this objection. One of the basic tenets of a
trustworthy election system is that nothing should be secret about the
election process except the link between an individual voter and any one
specific voted ballot. Actually, I support something stronger than "open
source," namely "open protocol," which publishes the underlying voting
technology in addition to the software source code.

As Rebecca Mercuri recently said on this forum, "democracy is at stake."  I
agree.  But I also fear the recommended paper-based solution.  Doctors once
prescribed leeches for deathly ill patients. Sometimes the patients got
better; sometimes they died.  In any case, the state of medical science was
not well served by the common wisdom of the time.

C. Andrew Neff, Ph.D., Chief Scientist, VoteHere, Inc.
Copyright (c) C. Andrew Neff, 2002. All rights reserved.  

------------------------------

Date: Tue, 24 Sep 2002 00:24:21 -0400
From: "Rebecca Mercuri" <mercuriat_private>
Subject: Leeches for Sale (Re: Neff, RISKS-22.27)

Dr. Neff makes some interesting points but MISSES the point of the paper
ballot solution.  Here are the facts. DREs fail because of reliability,
performance, and security issues, but these can NOT be resolved ENTIRELY
through standards and testing.  It is a fact of computer science that no
manner of testing or code examination can assure software or system
integrity. This was explained by Ken Thompson in his classic speech/paper
"Reflections on Trusting Trust" (available in its entirety -- at
http://www.acm.org/classics/sep95 -- it's a must read, especially if you
believe Open Source is a viable solution to the voting problem).
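
For readers who have not seen Thompson's paper, the heart of the argument
can be sketched in a few lines (a toy schematic in Python of the idea
only, emphatically not Thompson's code):

  # Toy schematic of the "Trusting Trust" attack -- illustration only.
  # The trap lives in the compiler BINARY, so inspecting the (clean)
  # source code, or testing it, reveals nothing.
  BACKDOOR = "# backdoor: also accept the master password\n"

  def trojaned_compile(source):
      """Stand-in for a compromised compiler binary."""
      if "def login" in source:             # (a) subvert the login program
          return source + BACKDOOR
      if "def trojaned_compile" in source:  # (b) subvert future compilers,
          # so recompiling pristine compiler source keeps the trap alive
          return source + "# re-inserts (a) and (b) when self-compiled\n"
      return source                         # honest for everything else

  clean_login_source = "def login(user, pw): ...\n"
  print(trojaned_compile(clean_login_source))
  # The login SOURCE was clean; only the compiled output is subverted.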

Neff appears to entirely misunderstand my paper ballot concept.  First of
all, I have NEVER said that people should go out and spend millions of
dollars on expensive paper printers; rather, I have been recommending for
years that communities buy simple optical scanning voting systems if they
feel they must unload their coffers of the tax dollars they have collected.
But DREs (WITH PRINTERS) can do a better job of preparing the paper
ballots: there's no need for blanks prepared in advance, and overvotes and
undervotes can be flagged and brought to the attention of the voters.  Where
I see the computers being used with paper is to provide an ENHANCED voting
system.

For example, Dr. David Chaum has worked out an amazing system, using
cryptography, where the voter can VISUALLY VERIFY that their ballot was
cast, the ballot is produced in a form that cannot reveal its contents
(except through a verifiable process that does not identify the voter), AND
the voter can anonymously verify AFTER the election that their ballot was
indeed cast as intended.  A human-readable physical ballot is ESSENTIAL,
not only in Chaum's system but for any electronic ballot casting and
tabulating device, because it is the ONLY WAY the voter can be assured that
their ballot is entered into the count correctly (no manner of recording of
electronic data will suffice). In Chaum's scheme the "paper" is laminated
plastic, but it is still a physical audit trail.  Once the vendors become
willing to admit that this is not possible without something humans can
actually SEE, they might finally start implementing viable systems that are
truly auditable.  BTW, you can read all
about Chaum's and my theories in this week's issue of The Economist.

Dr. Neff is wrong on two more counts. As it turns out, leeches ARE still
used in medicine.  They emit an anticoagulant substance that can be helpful in
certain cases. And actually blood-letting (in modest degrees) also turns out
to be an effective treatment for some ailments. (There were some articles on
this a few years back, either in Science News or Smithsonian, I forget
which, but well documented.)  But I think the analogy he made is quite
apropos to this discussion -- it illustrates a mode of erroneous thinking
where older technologies (like paper and leeches) are characterized as
inherently bad, in favor of new-fangled (and occasionally widely off-base)
solutions.

This is consistent with other VoteHere technology choices -- only a few
years ago, their president, Jim Adler, was pushing Internet voting.  Of a
debate sponsored by George Washington University in January 2001, the GWU
report (available at www.democracyonline.org) states that Adler's team at
Votehere.net "includes scientists who claim they have already solved many of
the hardest problems associated with Internet voting, namely the security,
privacy and auditing challenges.  For example, addressing the question about
audit trails, Mr. Adler said that Votehere.net has designed a system where
votes are 'burned onto a cd-rom'."  Now that's real security for you.
Thankfully, the NSF decided that Internet voting isn't a good idea, or the
VoteHere scientists might have sold some of their secure systems to
Florida. Even Bruce Schneier thinks Internet voting is implausible, and he
does know a thing or two about crypto.

I could go on further, but my thoughts are embodied in papers on my website
(at www.notablesoftware.com/evote.html).  I commend Dr. Neff on his
initiative in engaging in this debate.  I hope that he might also re-examine
the immutable facts of computer science and perhaps he can eventually
convince his team of scientists to develop voting systems that are truly
verifiable, auditable, and secure.  In the meanwhile, I have a few leeches
for sale.

Rebecca T. Mercuri, Ph.D., Professor of Computer Science, Bryn Mawr College 

------------------------------

Date: 29 Mar 2002 (LAST-MODIFIED)
From: RISKS-requestat_private
Subject: Abridged info on RISKS (comp.risks)

 The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.  Alternatively, via majordomo,
 send e-mail requests to <risks-requestat_private> with one-line body
   subscribe [OR unsubscribe]
 which requires your ANSWERing confirmation to majordomoat_private .
 If Majordomo balks when you send your accept, please forward to risks.
 [If E-mail address differs from FROM:  subscribe "other-address <x@y>" ;
 this requires PGN's intervention -- but hinders spamming subscriptions, etc.]
 Lower-case only in address may get around a confirmation match glitch.
   INFO     [for unabridged version of RISKS information]
 There seems to be an occasional glitch in the confirmation process, in which
 case send mail to RISKS with a suitable SUBJECT and we'll do it manually.
   .MIL users should contact <risks-requestat_private> (Dennis Rears).
   .UK users should contact <Lindsay.Marshallat_private>.
=> The INFO file (submissions, default disclaimers, archive sites,
 copyright policy, PRIVACY digests, etc.) is also obtainable from
 http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
 The full info file will appear now and then in future issues.  *** All
 contributors are assumed to have read the full info file for guidelines. ***
=> SUBMISSIONS: to risksat_private with meaningful SUBJECT: line.
=> ARCHIVES are available: ftp://ftp.sri.com/risks or
 ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
   [volume-summary issues are in risks-*.00]
   [back volumes have their own subdirectories, e.g., "cd 21" for volume 21]
 http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue].
   Lindsay Marshall has also added to the Newcastle catless site a
   palmtop version of the most recent RISKS issue and a WAP version that
   works for many but not all telephones: http://catless.ncl.ac.uk/w/r
 http://the.wiretapped.net/security/info/textfiles/risks-digest/ .
 http://www.planetmirror.com/pub/risks/ ftp://ftp.planetmirror.com/pub/risks/
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    http://www.csl.sri.com/illustrative.html for browsing,
    http://www.csl.sri.com/illustrative.pdf or .ps for printing

------------------------------

End of RISKS-FORUM Digest 22.27
************************


