[risks] Risks Digest 23.08

From: RISKS List Owner (risko@private)
Date: Mon Dec 22 2003 - 16:25:13 PST


    RISKS-LIST: Risks-Forum Digest  Monday 22 December 2003  Volume 23 : Issue 08
    
       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
       ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
    
    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at http://www.risks.org as
      http://catless.ncl.ac.uk/Risks/23.08.html
    The current issue can be found at
      http://www.csl.sri.com/users/risko/risks.txt
    
      Contents:
    Railroad accident results from deactivated crossing gates (PGN)
    Chats led to Acxiom hacker bust (Kevin Poulsen via Monty Solomon)
    Moderation and Immoderation (PGN)
    Re: Tragedy of the Commons (Douglas W. Jones)
    Re: Proper Understanding of the Human Factor (Peter B. Ladkin)
    Poor writing is the problem, not PowerPoint (Simson L. Garfinkel)
    Why have electronic voting machines at all? (Finn Poschmann, 
      Sander Tekelenburg)
    CFP: CyberCrime and Digital Law Enforcement Conference, Mar 2004 
      (Michel E. Kabay)
    Abridged info on RISKS (comp.risks)
    
    ----------------------------------------------------------------------
    
    Date: Mon, 22 Dec 2003 14:56:11 PST
    From: "Peter G. Neumann" <neumann@private>
    Subject: Railroad accident results from deactivated crossing gates
    
    An upgrade to the Caltrain guarded-crossing system, designed by SRI many
    years ago, has been very effective at diminishing road rage by
    re-opening the forward gates when trains are stopped in the station.  For
    the past few years, the Caltrain folks have been upgrading the tracks,
    adding sidings to enable high-speed trains to pass.  To do so, they have
    shut down passenger service altogether on weekends, turning off the crossing
    controls.  The rails have nevertheless been used to move the needed construction
    materials (roadbed, ties, rails, etc.), with flagmen posted as needed.
    However, at 11:30pm on Sunday, 14 Dec 2003, just a few blocks from SRI in
    Menlo Park, a truck struck a slow-moving train already 3/4 of the way
    through the crossing.  Why this happened was not known.
      [Source: *San Jose Mercury News* (Peninsula Edition), 16 Dec 2003, page 3B.]
    
    ------------------------------
    
    Date: Mon, 22 Dec 2003 00:26:43 -0500
    From: Monty Solomon <monty@private>
    Subject: Chats led to Acxiom hacker bust
    
    Kevin Poulsen, SecurityFocus, 19 Dec 2003
    
    A Cincinnati man who pleaded guilty Thursday to cracking and cloning giant
    consumer databases was caught only because he helped out a friend in the
    hacker community.  Daniel Baas, 25, pleaded guilty on 18 Dec 2003 to a single
    federal felony count of "exceeding authorized access" to a protected
    computer for using a cracked password to penetrate the systems of
    Arkansas-based Acxiom Corporation -- a company known among privacy advocates
    for its massive collection and sale of consumer data. The company also
    analyzes in-house consumer databases for a variety of companies.
    
    From October 2000 until June 2003, Baas worked as the system administrator
    at the Market Intelligence Group, a Cincinnati data mining company that was
    performing work for Acxiom. As part of his job, he had legitimate access to
    an Acxiom FTP server. At some point, while poking around on that server, he
    found an unprotected file containing encrypted passwords.
    
    Some of those passwords proved vulnerable to a run-of-the-mill password
    cracking program, and one of them, "packers," gave Baas access to all of the
    accounts used by Acxiom customers -- credit card companies, banks, phone
    companies, and other enterprises -- to access or manage consumer data stored
    by Acxiom. He began copying the databases in bulk, and burning them onto
    CDs.  ...  
      http://www.securityfocus.com/news/7697
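
    For readers unfamiliar with how such an attack proceeds, here is a minimal
    sketch of dictionary cracking against a file of hashed passwords.  It is an
    illustration only: the file names, wordlist, and crypt()-style hashing are
    assumptions made for the example, not details from the Acxiom case.

      # Minimal sketch of dictionary password cracking -- illustration only.
      # Assumes a leaked file of "user:hash" lines, each hash produced by
      # Unix crypt(); the file names and the wordlist are hypothetical.
      import crypt

      def crack(shadow_path="leaked_passwords.txt", wordlist_path="words.txt"):
          words = [w.strip() for w in open(wordlist_path)]
          recovered = {}
          for line in open(shadow_path):
              user, stored = line.strip().split(":", 1)
              salt = stored[:2]                 # classic crypt() uses a two-character salt
              for guess in words:
                  if crypt.crypt(guess, salt) == stored:
                      recovered[user] = guess   # weak choices like "packers" fall quickly
                      break
          return recovered

    The point of the sketch is that an *unprotected* file of hashed passwords is
    only as strong as its weakest entries; any guessable password yields access
    under that account's privileges.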
    
    ------------------------------
    
    Date: Mon, 22 Dec 2003 14:44:13 PST
    From: "Peter G. Neumann" <neumann@private>
    Subject: Moderation and Immoderation
    
    Your RISKS moderator is absolutely mortified.  After my silly OMITTED MINUS
    ONE gaffe in RISKS-23.06 in the Mersenne prime item, I compounded it in
    RISKS-23.07.  (Thanks to all of you who responded.)  I started out having
    typed P>=1 and did not like how it looked, and meant to change it to P>0.
    Somehow I forgot to do so.  In trying to keep many balls in the air at once,
    I unfortunately sometimes have to squeeze RISKS moderation in between
    handling the other balls.  Having a ball sometimes becomes Halving a ball.
    
    The "notsp" Subject line experiment has been a tremendous help in allowing
    me to separate the wheat from the chaff.  Thanks to those of you who picked
    up on it.  (I continue to get over 1000 spams a day that are caught by
    SpamAssassin, and many more that are not.)  Nevertheless, I regret that I
    cannot put out more issues and include more of your would-be postings.  On
    the other hand, if we had more RISKS issues, I would have to deal with even
    more responses, and you all would have even more e-mail to read as well, so
    perhaps you should be happy I cannot devote more time to moderating.  So
    moderation in moderating may be a good thing after all.
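
    (For the curious, an attention-string filter can be trivial.  The following
    is a hypothetical sketch in Python, not the actual RISKS setup; the mailbox
    path and the sorting policy are invented for illustration.)

      # Hypothetical sketch: sort an mbox of incoming mail by whether the
      # Subject line carries the attention-string "notsp".
      import mailbox

      ATTENTION_STRING = "notsp"    # the current string; it may change over time

      def sort_submissions(mbox_path="incoming.mbox"):
          likely_real, needs_scrutiny = [], []
          for msg in mailbox.mbox(mbox_path):
              subject = (msg["Subject"] or "").lower()
              (likely_real if ATTENTION_STRING in subject else needs_scrutiny).append(msg)
          return likely_real, needs_scrutiny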
    
    Incidentally, for those of you who have stumbled onto some of the annoying
    Majordomo glitches, I anticipate that RISKS will eventually be cutting over
    to Mailman -- which my lab is already using experimentally on other lists.
    
    Let me take this opportunity to wish you all a risk-free holiday season.  PGN
    
    ------------------------------
    
    Date: Thu, 18 Dec 2003 19:03:48 -0600
    From: "Douglas W. Jones" <jones@private>
    Subject: Re: Tragedy of the Commons (Norman, RISKS-23.07)
    
    *Science Magazine*, 12 Dec. 2003, Vol 302, No 5652, has a set of articles on
    the Tragedy of the Commons, one of which is very relevant to us.
    
       Tales from a Troubled Marriage:
       Science and Law in Environmental Policy
       by Oliver Houck, Pages 1926 to 1929
    
    The section of this article most relevant to us is entitled "Four
    Cautionary Tales".  There, Houck talks about how science has come to be used
    and abused in public-policy debates surrounding environmental issues; we are
    involved in a different public-policy debate, and science is being used and
    abused here too.  The four examples are:
    
    "The lure of a return to scientific management" should be viewed with
    suspicion.  There are attractive and rational arguments that favor
    iterative, impact-based, and localized management strategies instead of
    "unrealistic" one-size-fits-all policies.  Several people spoke in these
    terms at the NIST meeting on voting systems Dec 10 and 11, urging
    incrementalism and arguing against top-down approaches that attempted to
    look at the big picture and overall system architecture.
    
    "Good science" and "peer review" are sometimes invoked to set extremely high
    standards for the admissibility of scientific arguments that favor any
    change in current policy, but it is unusual to find such standards applied
    to the arguments favoring retention of the status quo.  The head of the CS
    department at Kennesaw State cornered me recently with this argument
    against the Hopkins report on security flaws in Diebold's voting systems,
    even though the SAIC report had already come out confirming most of the
    flaws the Hopkins team first reported; as far as he was concerned, the fact
    that the report had not been subject to prepublication peer review was
    grounds for censure.
    
    "The lure of money" has biased science.  There are good studies of this in
    the health care field as well as the environmental field.  Researchers with
    industrial funds are less likely to publish results that reflect negatively
    on their source of funds.  Who is supporting the different scientists who
    have engaged in the voting systems debate?  We ought to be very open about
    this.  The conflict of interest stories that popped up after the release of
    the Hopkins report touch on this issue, illustrating the extent to which
    bogus conflicts can be as important here as real ones.
    
    "The lure of the safe life" has led researchers to avoid drawing
    conclusions.  We can do good science, confining ourselves to the technical
    and avoiding drawing conclusions that would engage us in public policy
    debate.  Many of those on this list have elected to forgo this option, but
    many of our colleagues may be more reluctant to participate.  This is
    unfortunate and I think we need regular reminders.  When outrageous claims
    are made for what computer science can do, or when utterly incompetent
    security audits are brought forward into the public debate, those who have
    technical qualifications should not stand by idly.
    
    ------------------------------
    
    Date: Sat, 20 Dec 2003 09:37:34 +0100
    From: "Peter B. Ladkin" <ladkin@private-bielefeld.de>
    Subject: Re: Proper Understanding of the Human Factor (Norman, RISKS-23.07)
    
    In his argument for the view of Mike Smith (RISKS-23.04) and against that
    suggested by Dave Brunberg (RISKS-23.06) on Vicente's book *The Human Factor*,
    Don Norman invites us to consider two points of view on systems failure in
    which human operators are involved.
    
    He suggests that 75% of accidents with such systems are blamed on operator
    error (in aviation, the generally-aired figure says accident reports
    attribute probable cause to pilot error in 70% of cases), and that the cause
    should be taken to lie rather in the system design which affords those kinds
    of errors. He points out that this view has been around for some half a
    century.
    
    The other view is that of Brunberg, who gives hypothetical examples of the
    "Bubba factor", according to which operators engage in typically human but,
    in terms of their professional skills and requirements, inappropriate
    behavior when operating a system.
    
    Norman prefers the first view.  For example, it is a part of critical system
    design that hazards (defined as situations in which certain unwanted events,
    including accidents, are particularly afforded in some way or another) must
    be identified, and avoided or mitigated as far as possible.
    
    The classic statement of the "Bubba factor" position is a comment made in
    1949 by Edward A. Murphy, an engineer on the USAF project MX981: Human
    Deceleration Tests, after observing some incorrect wiring that had led to
    failure of measuring equipment.  If there was a way for one of the
    technicians to make a mistake, observed Murphy, that would be the way things
    would be done.  Murphy's Law, as its successor has come to be known, is also
    half a century old [2].
    
    So, Norman or Bubba?
    
    I believe with Norman that more attention could be paid to the system
    affordances that encourage inappropriate operator actions or inactions. I
    also suspect that the operator's cognitive state is systematically
    underemphasised in most accident investigations, and consider proof of this
    claim to be a significant research project.  Some progress has been
    made. Let me give four examples, based on a particular conception of human
    cognitive capabilities.
    
    There are ways of defining an operator's "rational cognitive state" which do
    not depend on reconstructing his/her mental state, namely by looking at the
    information presented to the operator by the system and closing under simple
    inferences. This idea derives from (and may even be identical with) the
    "information theoretic" view proposed by Norman himself. One may consider
    such a state to be that of an ideal operator, and thereby somewhat
    unrealistic, but it suffices to highlight, in some significant cases, how a
    system afforded operator error.
    
    Consider the "Oops" series of aircraft-simulator runs, in which researchers
    at NASA Ames Research Center set up scenarios for pilots of an MD-80-series
    flight simulator. The pilots were led to "bust" (fly through) their cleared
    altitude on climb. John Rushby has published what I consider to be a seminal
    paper, in which he used the Mur(phi) model checker to demonstrate that the
    pilot's "mental model" (what I called above the rational cognitive state)
    did not match the system state at a crucial point in the proceedings [1]. In
    other words, crucial information about the system was not presented to its
    operator. This is therefore a case in which the only prophylaxis is to
    design out the hazard situation. It amounts to a canonical example of
    Norman's contention.
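
    (The underlying idea can be illustrated on a toy scale, nothing like
    Rushby's Mur(phi) model: represent the actual system and the operator's
    "rational cognitive state" as two small state machines driven by the same
    event trace, and flag the first point at which they diverge.  The states
    and events below are invented for illustration.)

      # Toy sketch of mode-confusion detection: run the system model and the
      # operator's mental model on the same event trace and report divergence.
      # States and events are invented examples, loosely in the spirit of an
      # altitude-capture scenario.
      SYSTEM_MODEL = {
          ("VERT_SPEED", "capture_altitude"): "ALT_HOLD",
          ("ALT_HOLD",   "pilot_adjusts_vs"): "VERT_SPEED",   # hold silently dropped
      }
      MENTAL_MODEL = {
          ("VERT_SPEED", "capture_altitude"): "ALT_HOLD",
          ("ALT_HOLD",   "pilot_adjusts_vs"): "ALT_HOLD",     # pilot assumes hold persists
      }

      def find_divergence(trace, start="VERT_SPEED"):
          sys_state = mental_state = start
          for i, event in enumerate(trace):
              sys_state = SYSTEM_MODEL.get((sys_state, event), sys_state)
              mental_state = MENTAL_MODEL.get((mental_state, event), mental_state)
              if sys_state != mental_state:
                  return i, event, sys_state, mental_state
          return None

      # find_divergence(["capture_altitude", "pilot_adjusts_vs"]) reports that the
      # system has reverted to VERT_SPEED while the operator still assumes ALT_HOLD:
      # crucial information was never presented, so the error is afforded by design.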
    
    Sidney Dekker gave the Tuesday Luncheon talk at the 21st International
    System Safety Conference in Ottawa in August 2003, in which he showed a
    series of still photographs of the views available to the pilots of a
    Singapore Airlines B747-400, which attempted to take off from a closed
    runway in Taipei and collided with construction equipment.  The accident was
    widely discussed in commercial aviation circles, particularly with respect
    to the ground guidance technology at the airport and the judicial treatment
    of the crew.  Sidney's sequence of photographs gave me the impression that,
    in those circumstances, I would have made decisions similar to those which led
    to the accident (I am a pilot, though not a professional).  This view had
    been promoted by some discussants since the accident, and I believe it is to
    be credited with keeping the crew out of jail.
    
    A similar case of "seeing what the operator saw" is made by the series of
    photographs shown by Marcus Mandelartz of the signalling en route to the
    train derailment at Brühl in the Rhineland in Germany, in which a driver of
    an intercity train went through a set of switching points at something over three
    times the appropriate speed [3].
    
    Finally, I have argued that the decision by the Russian pilot of one of the
    aircraft that collided over Lake Constance in July 2002 to descend in
    contravention of his ACAS "climb" advisory could well have been rational,
    given his "rational cognitive state" as defined above [4]. I also pointed
    out that all participants in that unfortunate affair, the two crews and the
    air traffic controller, had distinct "rational cognitive states", a
    situation engendered by a cognitive slip by ATC. I believe this situation
    has been woefully incompletely analysed from the point of view of the ACAS
    system. To me, the situation represents a hazard that must be designed out
    or mitigated, as with any such system hazard. This view contrasts with that
    of, say, Eurocontrol, which advises that ACAS "resolution advisories" (RA)
    should be followed by pilots without exception, also a view propounded by
    many pilots. A more cautious view is expressed by the International Civil
    Aviation Organisation (ICAO), which advises that pilots should not manoeuvre
    against an RA, and an even more cautious view has been expressed by the UK
    Civil Aviation Authority, which advises that pilots should not manoeuvre against
    an RA without overwhelming reason. I believe the crew of the Russian
    aircraft had such reason, as shown by considering the "rational cognitive
    state" (I emphasise that the "rational cognitive state" is not to be
    identified with the actual mental state of the pilots, which we can no
    longer know). If so, only the UK CAA view is consistent with focusing on the
    system, and not the operator. This appears to me also to be a canonical
    example to which Norman's view applies.
    
    All this argues for Norman's view. What is there to argue for Brunberg's?
    
    Consider the following crude but general argument for the Bubba phenomenon.
    Operators have responsibilities. They are intended to perceive certain
    partial system states and to devise actions which depend on those partial
    states. These actions are stipulated by procedures. In the case of some
    systems, pilots flying airplanes for example, some of these actions and
    their consequences are unavoidably safety-critical. Human beings may freely
    choose their actions, and it is open to an operator of even the most
    carefully designed system, in such a situation, to choose an action which
    will lead to an unwanted event such as an accident.
    
    One may counter such an argument in commercial aviation only by
    advocating pilotless commercial aircraft, a prospect that fills not only
    some passengers but also some systems people like myself with dread.
    
    To illustrate the situation which the argument highlights, consider an
    accident in November 2002 to a Luxair Fokker 50 turboprop on approach to
    Luxembourg Findel airport. The aircraft was on final approach using the
    ILS. The crew selected "ground fine-pitch" on the propellors while still
    airborne. This "low-speed fine-pitch regime [is] normally only usable on the
    ground" [5]. Control was lost, the aircraft crashed on approach, and most on
    board died.
    
    An interlock prevents ground fine-pitch mode from being selected while the
    aircraft is airborne: power lever movement into this regime is
    inhibited. However, there was a known interlock failure mode in which the
    interlock does not function for some 16 seconds after the landing gear has
    locked down. A Notice to Operators concerning this phenomenon had been
    issued, and a system fix for this problem was available but had not been
    incorporated on the accident aircraft [5].
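
    (To make the hazard window concrete, here is a hypothetical sketch of such
    an interlock; the interface and the use of a fixed 16-second figure are
    invented for illustration and are not taken from the Fokker 50 design
    documents.)

      # Hypothetical sketch of a flawed gear-down interlock -- not the actual
      # Fokker 50 logic.  Ground fine-pitch should be inhibited while airborne,
      # but in the failure mode described above the inhibition is absent for
      # roughly 16 seconds after the landing gear locks down.
      INHIBIT_DELAY_S = 16.0

      def ground_fine_pitch_selectable(on_ground: bool,
                                       seconds_since_gear_locked_down: float) -> bool:
          if on_ground:
              return True               # intended case: aircraft on the ground
          if seconds_since_gear_locked_down < INHIBIT_DELAY_S:
              return True               # failure mode: interlock not yet active
          return False                  # normal airborne case: selection inhibited

      # The hazard: on final approach, for the first ~16 seconds after gear
      # lock-down, the power levers can be moved into the beta range even
      # though the aircraft is still airborne.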
    
    Activating ground fine-pitch while airborne is obviously a big no-no.
    The big question is why this regime may have been selected. The report
    has recently been issued [6]. It criticises the crew. "The captain put
    the power levers into the beta range while trying to regain the glidepath
    from above after beginning a go-around due to poor visibility, and then
    reversing his decision - all without communicating with the first officer.
    He had earlier begun what should have been a Category II [ILS] approach
    without informing his colleague" [6]. The accident report says: "All
    applicable procedures as laid down in the operations manual were violated
    at some stage of the approach" [6]. All this raises red flags to just about
    everyone involved with flight.
    
    The report "extensively questions the airline's hiring and training
    practices" as well as noting that the "design [of the aircraft] did not
    prevent the crew from selecting ground-idle while in flight - the final
    error in a chain that led to the crash" [6].
    
    The question "Norman or Bubba?" is ill-posed.  Both Brunberg and Norman
    overstate their cases. As Norman says, people are still too ready to fault
    operators, even after 50 years. But operators must be allowed their free
    will, otherwise one doesn't need an operator. It is open to operators
    to freely choose wrongly, even catastrophically. And it happens.
    
    PBL
    
    References
    
    [1] John Rushby, Using Model Checking to Help Discover Mode Confusions and
    Other Automation Surprises, in Reliability Engineering and System Safety
    75(2):167-77, February 2002, also available from
    http://www.csl.sri.com/users/rushby/
    
    [2] Robert A.J. Matthews, The Science of Murphy's Law, in Peter Day, ed.,
    Killers in the Brain, Oxford U.P. 1999.
    
    [3] Marcus A. Mandelartz, Das Zugunglück in Brühl aus der
    Lokführerperspektive (The Train Accident in Brühl from the Perspective
    of the Driver), in German, http://www.online-club.de/~feba/br0.htm
    
    [4] Peter B. Ladkin, ACAS and the South German Midair, Technical Note
    RVS-Occ-02-02, available from http://www.rvs.uni-bielefeld.de
    
    [5] David Learmount, Propellors yield Luxair crash clue, Flight
    International, 26 November - 2 December 2002, p8.
    
    [6] Kieran Daly, Luxair crew slammed in crash report, Flight International,
    16-22 December 2003, p6.
    
    Peter B. Ladkin  University of Bielefeld,  http://www.rvs.uni-bielefeld.de
    
    ------------------------------
    
    Date: Fri, 19 Dec 2003 11:47:02 -0500
    From: "Simson L. Garfinkel" <slg@private>
    Subject: Poor writing is the problem, not PowerPoint (Re: RISKS-23.07)
    
    Re: Over-reliance on PowerPoint leads to simplistic thinking
    
    Having read about this in the report and some coverage in eWeek and
    ComputerWorld, I need to argue that the real problem is not PowerPoint (as
    much as I dislike the program) --- the problem is that many engineers are
    simply poor verbal communicators.
    
    > Because only about 40 words fit on each slide, a viewer can zip through a
    > series of slides quickly, spending barely 8 seconds on each one.
    
    This seems like poor rationalization. Here's what you can do with 40 words:
    
    	FALLING FOAM COULD DESTROY A SHUTTLE!
    
    (hm; that's just six words.)
    
    	* Falling foam has been clocked at faster than 500 mph
    	* Impact with wing could destroy fragile ceramic tiles on launch
    	* Repair not possible in space; shuttle would burn-up on re-entry
    
    (that's another 30 words; total word count is 36)
    
    I just finished a semester of paper grading in a class at MIT. Many of the
    students were really angry that I took off points for poor writing, improper
    citations, etc.  "This is a class in computer security, not writing," one
    student told me (paraphrased).
    
    I wrote a long e-mail back to that student explaining that, without the
    ability to write clearly, their security skills would be of little use.
    
    ------------------------------
    
    Date: Fri, 19 Dec 2003 15:42:20 -0500
    From: "Finn Poschmann" <finn@private>
    Subject: Why have electronic voting machines at all? (RISKS-23.06,07)
    
    Russell Cooper (RISKS-23.07) says that to raise voter turnout when people
    are broadly distributed, reducing the need for travel to a polling station
    is "highly desired" and a compelling reason for e-voting, and that this
    desired benefit is being neglected in common discussion.
    
    In fact, in the e-government world, which is populated by hordes of
    promoters of e-democracy and e-everything else, there is much attention paid
    to the benefits of making voting easier.  (Too much?  I might note
    parenthetically that we should probably ask ourselves whether we really want
    uninterested people to vote more often, but that would be a distraction.)
    In the endless rounds of worldwide conferences and discussion papers on
    e-governance and the "democracy deficit," what there is not enough of is
    attention to risks and costs.
    
    We have difficulty in practice getting close to a verifiably accurate
    polling station implementation of e-voting, though as Rebecca Mercuri will
    tell you it is surely possible to do so. The risks and costs multiply when
    we contemplate e-voting from home.
    
    Of course we can get *close enough* to an acceptably accurate and verifiable
    home-based system; after all, we use similar systems in financial
    transactions representing many billions of dollars daily. Encryption and
    tunnelling protocols can be powerful tools.
    
    Observations: 1) *Close enough* is nonetheless a long way off, owing to
    technical requirements and the concomitant need to raise voters' comfort and
    skill levels. 2) It will be expensive owing to equipment needs on both ends,
    where that equipment would not otherwise be necessary. 3) It will be
    intrusive. I should think we would want to know, while you hold your eye to
    the scanner and your finger on the pad sensor, that your true voting wish is
    being expressed. And what to do about the possibility that someone is paying
    you and watching your vote, or holding a gun to your head? I don't know the
    answer to that one, just as I don't know now why some US states have so
    enthusiastically adopted the mail-in ballot.
    
    In any case, the costs of achieving a reasonably fair and verifiable
    e-vote-from-home are certainly large. What were the benefits again?
    
      [Remember, as other contributors have, that Internet voting and other
      remote voting schemes all suffer from the ability to sell your vote --
      along with all of the other problems of whom and what you can trust.  PGN]
    
    ------------------------------
    
    Date: Fri, 19 Dec 2003 08:41:53 +0100
    From: Sander Tekelenburg <tekelenb@private>
    Subject: Re: Why have electronic voting machines at all? (Cooper, RISKS-23.06)
    
    [I may have missed a step in this thread, but the original subject seems to
    have been electronic voting machines vs paper voting. Somehow it moved to
    voting from the comfort of the home, which I think should be treated as a
    different subject.]
    
    Wed, 10 Dec 2003 05:09:05 -0500, "Russ" <Russ.Cooper@private> wrote:
    
    > Maybe I missed the comment, but it seems to me that one of the most
    > compelling reasons for e-voting, getting more people out to vote, is being
    > missed in these threads. Maybe voter turnout in the States is always >50%,
    > it isn't here (Canada).
    
    Technological security issues aside, it would mean giving up on secret
    voting, which is not something to take lightly.  Voting from the privacy of
    your home would make it even easier for people to force each other to vote
    for candidate X than the 'regular' abuse within the sanctity of the home
    that is already happening on a grand scale.  A public voting station, with
    secret voting, avoids that RISK.
    
    While discussing the problems with electronic voting machines, and the
    suggestion that a paper trail would fix most of them, I ran into the
    following.  Some people seem to present that paper trail as a receipt: the
    voter gets to take it with her.  That would mean people could force each
    other to prove they voted for the candidate they were told to vote for.
    Dangerous.  A paper trail is necessary (thus, indeed: why electronic voting
    at all?), but it should not break secret voting.
    
      [Almost all of the sensible proposals for voter-verified paper trails
      retain the paper within the system.  Voters do not take them home.
      However, David Chaum's proposal is somewhat different, allowing you to
      take a part of the audit trail with you from which you can verify your
      vote was correctly recorded.]
    
    > I fail to see how anything else could be as likely to increase voter
    > participation.
    
    If voters can't be bothered to go to a voting station, maybe it's healthier
    for society to leave it at that. You don't want utterly uninformed voters to
    vote, just for the sake of voting. You'll just get more votes for whoever
    happens to have the most likeable TV-face of the day... (No doubt some
    politicians see that too and are therefore in favour of e-voting...)
    
    It would be nice to see more people participate. But I'm not sure what would
    be the way to make that happen. No doubt the causes and solutions will differ
    per country. In some countries better and more easily accessible education
    might help. But in countries that already have that you see many people still
    not voting. Sometimes as a (misguided) way of protest, sometimes because they
    think their one vote won't make a difference, sometimes because they feel
    things are fine as they are.
    
    > [...] in a country such as ours where people are broadly distributed,
    > reducing the need for people to go to a polling station is highly desired.
    
    Yes, different countries may need different solutions. In the (compared to
    Canada ;)) utterly overcrowded Netherlands a stroll to a voting station
    usually takes no more than 5 minutes. If that's too much work, then don't
    vote - and lose your right to complain about the government.
    
    (In Dutch national elections turnout is around 70% on average I think. For
    EU elections it is something like 40% or even just 30%.)
    
    Sander Tekelenburg, <http://www.euronet.nl/~tekelenb/>
    
    ------------------------------
    
    Date: Thu, 18 Dec 2003 14:57:15 -0500
    From: "Michel E. Kabay" <mkabay@private>
    Subject: CFP:  CyberCrime and Digital Law Enforcement Conference, Mar 2004
    
    Yale Law School's Information Society Project (ISP) invites you to the
    CyberCrime and Digital Law Enforcement conference, taking place on March
    26-28, 2004 at Yale Law School.
    
    Registration and further information are available at:
      http://islandia.law.yale.edu/isp/digital_cops.htm
    
    Nimrod Kozlovski, Fellow, Information Society Project, Yale Law School
    
    ------------------------------
    
    Date: 7 Oct 2003 (LAST-MODIFIED)
    From: RISKS-request@private
    Subject: Abridged info on RISKS (comp.risks)
    
     The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
    => SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
     if possible and convenient for you.  Alternatively, via majordomo,
     send e-mail requests to <risks-request@private> with one-line body
       subscribe [OR unsubscribe]
     which requires your ANSWERing confirmation to majordomo@private .
     If Majordomo balks when you send your accept, please forward to risks.
     [If E-mail address differs from FROM:  subscribe "other-address <x@y>" ;
     this requires PGN's intervention -- but hinders spamming subscriptions, etc.]
     Lower-case only in address may get around a confirmation match glitch.
       INFO     [for unabridged version of RISKS information]
     There seems to be an occasional glitch in the confirmation process, in which
     case send mail to RISKS with a suitable SUBJECT and we'll do it manually.
       .UK users should contact <Lindsay.Marshall@private>.
    => SPAM challenge-responses will not be honored.  Instead, use an alternative 
     address from which you NEVER send mail!
    => The INFO file (submissions, default disclaimers, archive sites,
     copyright policy, PRIVACY digests, etc.) is also obtainable from
     http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
     The full info file will appear now and then in future issues.  *** All
     contributors are assumed to have read the full info file for guidelines. ***
    => SUBMISSIONS: to risks@private with meaningful SUBJECT: line.
     *** NEW: Including the string "notsp" at the beginning or end of the subject
     *** line will be very helpful in separating real contributions from spam.
     *** This attention-string may change, so watch this space now and then.
    => ARCHIVES: http://www.sri.com/risks
     http://www.risks.org redirects you to Lindsay Marshall's Newcastle archive
     http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue]
       Lindsay has also added to the Newcastle catless site a palmtop version 
       of the most recent RISKS issue and a WAP version that works for many but 
       not all telephones: http://catless.ncl.ac.uk/w/r
     http://the.wiretapped.net/security/info/textfiles/risks-digest/ .
     http://www.planetmirror.com/pub/risks/ ftp://ftp.planetmirror.com/pub/risks/
    ==> PGN's comprehensive historical Illustrative Risks summary of one liners:
        http://www.csl.sri.com/illustrative.html for browsing,
        http://www.csl.sri.com/illustrative.pdf or .ps for printing
    
    ------------------------------
    
    End of RISKS-FORUM Digest 23.08
    ************************
    


