[risks] Risks Digest 21.84

From: RISKS List Owner (riskoat_private)
Date: Sat Jan 05 2002 - 12:24:43 PST


    RISKS-LIST: Risks-Forum Digest  Saturday 5 January 2002  Volume 21 : Issue 84
       ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
    ***** See last item for further information, disclaimers, caveats, etc. *****
    This issue is archived at <URL:http://catless.ncl.ac.uk/Risks/21.84.html>
    and by anonymous ftp at ftp.sri.com, cd risks .
    Peak time for Eurorisks (Paul van Keep)
    More Euro blues (Paul van Keep)
    ING bank debits wrong sum from accounts (Paul van Keep)
    Euro bank notes to embed RFID chips by 2005 (Ben Rosengart)
    TruTime's Happy New Year, 2022? (William Colburn)
    Airplane takes off without pilot (Steve Klein)
    Harvard admissions e-mail bounced by AOL's spam filters (Daniel P.B. Smith)
    Risk of rejecting change (Edward Reid)
    Security problems in Microsoft and Oracle software (NewsScan)
    "Buffer Overflow" security problems (Henry Baker, PGN)
    Sometimes high-tech isn't better... (Laura S. Tinnel)
    When a "secure site" isn't (Jeffrey Mogul)
    Abridged info on RISKS (comp.risks)
    Date: Fri, 28 Dec 2001 14:38:29 +0100
    From: Paul van Keep <paulat_private>
    Subject: Peak time for Eurorisks
There was a veritable plethora of euro-related gaffes just before the final
changeover.  A short overview of what has happened:
* A housing society in Eindhoven (The Netherlands) has sent out bills for
next month's rent in euros, but the amount on each bill is the same as last
month's rent, which was in guilders: roughly a 2.2x increase.  The society
has declared that anyone who paid the incorrect amount will be fully
refunded and will get a corrected bill later.
* A branch of ING bank in Blerick inadvertently inserted the wrong cassette
into an ATM, which began to give out euro bills instead of guilders.  This
was illegal before 1 January 2002.  The bank has traced the bills, and all
but 10 euros have already been returned.
* Rabobank made an error in its daily processing of 300,000 automatic
transfers.  Instead of transferring guilders, the transfers were made in
euros, again 2.2x what should have been transferred.  The bank hopes to have
corrected the mistake before 6pm tonight.  (Due to the euro changeover, no
banking activity will take place in any of the euro countries on Monday.)
    * (Just somewhat related:) Two companies thought they were being really
    smart by putting Eurobills in Christmas gifts for their employees. They have
    been ordered by the Dutch central bank to recover all those bills or face
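The guilder-to-euro mixups above (the housing society's bills and Rabobank's transfers) share one root cause: an amount changed currency label without changing value. A minimal sketch of the defense, using the official irrevocable rate of 2.20371 guilders per euro; the function and variable names are illustrative, not any bank's actual code.

```python
# Sketch: convert explicitly at the fixed official rate instead of
# relabeling a guilder amount as euros.  2.20371 NLG/EUR is the official
# irrevocable conversion rate; names here are illustrative.
from decimal import Decimal, ROUND_HALF_UP

NLG_PER_EUR = Decimal("2.20371")

def nlg_to_eur(amount_nlg: Decimal) -> Decimal:
    """Convert a guilder amount to euros, rounded to whole cents."""
    return (amount_nlg / NLG_PER_EUR).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

# A 1000-guilder rent converts to about 453.78 euros; billing
# "1000 euros" instead is exactly the ~2.2x overcharge described above.
rent_eur = nlg_to_eur(Decimal("1000.00"))
```

Keeping the currency in the type or variable name, and converting only through a function like this, makes the "same number, new currency" bug impossible to write silently.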
    Date: Thu, 3 Jan 2002 22:44:37 +0100
    From: Paul van Keep <paulat_private>
    Subject: More Euro blues
    The Euro troubles keep coming in. Even though banks have had four to five
    years to prepare for the introduction of the Euro, things still go wrong.
    Yesterday and today over 200 smaller Dutch post offices have had to remain
    closed because of computer problems relating to the Euro changeover. It is
    still unclear whether the situation will finally be resolved tomorrow.
    Date: Fri, 4 Jan 2002 09:45:07 +0100
    From: Paul van Keep <paulat_private>
    Subject: ING bank debits wrong sum from accounts
    About 51,000 customers who withdrew money from their ING Bank account on 1 &
    2 Jan 2002 (through an ATM) have had the wrong amount debited from their
    account.  The bank hasn't yet given an explanation for the error other than
    to suspect that it was related to the high stress their systems were under
during the first few days of the new year.  The amounts debited from customer
accounts were a hundred times what the customers withdrew from the ATMs.  This
got some people into trouble when their balances went negative and they could
no longer use their bank PIN cards to pay in shops.  ING Bank corrected the
</gr-replace lines>
    error yesterday.
    On a related note, my wife withdrew Euros from an ATM yesterday and the
    printed receipt came up blank. My guess is that the ink ribbon on the
    embedded matrix printer ran out. Bank personnel are working like crazy to
    feed the machines with Euro bills and simply forget to check the printer. If
    her bank makes a mistake similar to the one ING made, she would have a hard
    time proving that she didn't withdraw 10000 Euro.
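ING has not explained the error, but a factor of exactly 100 is the classic symptom of confusing integer cents with whole currency units (an assumption on my part, not the bank's diagnosis). The usual defense is to keep every internal amount in cents and convert only at the display boundary; the sketch below uses illustrative names.

```python
# Sketch (assumption: a 100x error suggests a cents-vs-units mixup).
# All internal amounts are integer euro cents; formatting to whole
# euros happens only at the display edge.

def debit(balance_cents: int, withdrawal_cents: int) -> int:
    """Debit an account; both arguments are integer euro cents."""
    if withdrawal_cents < 0:
        raise ValueError("withdrawal must be non-negative")
    return balance_cents - withdrawal_cents

def format_eur(cents: int) -> str:
    """Render integer cents as a euro amount, only for display."""
    return f"EUR {cents // 100}.{cents % 100:02d}"

# Withdraw EUR 100.00 from EUR 500.00: one unambiguous unit throughout.
balance = debit(50_000, 10_000)
```

If a withdrawal recorded in euros were ever passed where cents are expected (or vice versa), the result would be off by exactly the factor of 100 the customers saw.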
    Date: Thu, 27 Dec 2001 15:27:20 -0500
    From: Ben Rosengart <benat_private>
    Subject: Euro bank notes to embed RFID chips by 2005
    The European Central Bank is working with technology partners on a hush-hush
    project to embed radio frequency identification tags into the very fibers of
    euro bank notes by 2005.  Intended to foil counterfeiters, the project is
    developing as Europe prepares for a massive changeover to the euro, and
    would create an instant mass market for RFID chips, which have long sought
    profitable application.  http://www.eetimes.com/story/OEG20011219S0016
    I hardly know where to begin even thinking about the RISKS involved.
      [Those who are Europeein' may need a Eurologist to detect bogus chips.  PGN]
    Date: Wed, 2 Jan 2002 12:22:14 -0700
    From: "Schlake (William Colburn)" <schlakeat_private>
    Subject: TruTime's Happy New Year, 2022?
    Apparently (from the postings on comp.protocols.time.ntp) some TruTime
    GPS units suddenly jumped 1024 weeks into future sometime around New
    Year's Day 2002.
      [GPS units have jumped 1024 weeks into the past before (R-18.24, R-20.07).
      Back to the future is a nice twist.  PGN]
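The 1024-week figure is no accident: the legacy GPS navigation message carries only a 10-bit week number (0-1023), which wraps every 1024 weeks (about 19.6 years). A receiver must disambiguate the broadcast week against a reference date; choosing the wrong 1024-week epoch produces exactly the forward or backward jumps described above. A sketch of the disambiguation, as illustrative code rather than any vendor's firmware:

```python
# Sketch: resolve the 10-bit GPS week number to the full week nearest a
# reference date.  A receiver with a bad pivot lands 1024 weeks off.
from datetime import datetime

GPS_EPOCH = datetime(1980, 1, 6)   # start of GPS week 0
ROLLOVER = 1024                    # 10-bit week field wraps here

def resolve_week(week_10bit: int, reference: datetime) -> int:
    """Return the full GPS week number closest to the reference date."""
    ref_week = (reference - GPS_EPOCH).days // 7
    # How many whole rollovers to add so the result lands nearest ref_week.
    n = (ref_week - week_10bit + ROLLOVER // 2) // ROLLOVER
    return week_10bit + n * ROLLOVER

# Around New Year 2002 the true week was 1147, broadcast as 1147 % 1024 = 123.
full_week = resolve_week(123, datetime(2002, 1, 1))
```

A unit whose reference assumption drifts by even one epoch resolves the same broadcast value to 123 + 2048 = 2171, i.e., 1024 weeks into the future, which is consistent with the behavior reported here.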
    Date: Fri, 28 Dec 2001 09:01:56 -0500
    From: Steve Klein <gpmguyat_private>
    Subject: Airplane takes off without pilot
    An empty plane took off by itself and flew 20 miles before crashing in
California's rural Napa County.  The two-seater Aeronca Champion had a
    starter system requiring the pilot to get out and hand-crank the engine.
    [*Wall Street Journal*, 28 Dec 2001]
    Date: Tue, 01 Jan 2002 06:42:26 -0500
    From: "Daniel P.B. Smith" <dpbsmithat_private>
    Subject: Harvard admissions e-mail bounced by AOL's spam filters
    According to today's Globe, AOL's spam filters rejected e-mail sent by
    Harvard's admissions department to anxious applicants.  The interesting
    thing is that "AOL officials could not explain" why their servers identified
    these e-mail messages as spam.  No explanation, no responsibility,
    apparently no indication of anything that Harvard could do to avoid the
    problem in the future.  Just one of those things, like the weather.
Despite the jokes, USPS "snail mail" is highly reliable.  Those of us who
have used e-mail for years know that it is much less reliable; for example,
Verizon DSL's mail server slowed to a crawl for several months last year,
and during that period fewer than half of the e-mail messages I sent from
another account to test it were actually received.  Antispam filters
decrease this reliability further.
    The facile name "e-mail" was helpful in the eighties as a characterization
    of a form of electronic communication.  However, there is a risk that the
    name may mislead people into assuming that it is comparable in reliability
    to postal mail.
Let us hope that organizations do not begin using e-mail for communications
more important than university admissions letters, in the name of "security"
(and cost reduction).
      [The AOL Harvard problem was noted by quite a few RISKS readers.  TNX]
    Date: Fri, 28 Dec 2001 10:26:42 -0500
    From: Edward Reid <edwardat_private>
    Subject: Risk of rejecting change (Re: Sampson, RISKS-21.83)
    > Mind you, that "drivers ignoring signals" is another example of RISKy
    > behaviour.
    Perhaps we need a parallel forum: FORUM ON RISKS TO THE PUBLIC FROM THE 
    USE OF HUMANS IN TECHNOLOGY. While research on human safety factors is 
    widespread, comments in this forum often treat human-based systems as 
    the baseline and assume that automation can only create risks. Comments 
    often fail to consider improvements in safety from automation, much 
    less more widely ramified benefits such as improved health resulting 
    from our ability to transfer resources to health care.
    What would happen in a forum devoted to asking the opposite question? 
    That is, "what are the risks or benefits of introducing human actors 
    into a given system?".
    At least, considering the question can provide some needed balance. 
    It's a way of considering the RISK of rejecting change.
    Edward Reid
      [Not ANOTHER forum!  From the very beginning, RISKS has discussed risks
      due to people as well as risks to people due to technology (which
      typically are due to people anyway!).  There's nothing new there.  PGN]
    Date: Fri, 21 Dec 2001 08:47:58 -0700
    From: "NewsScan" <newsscanat_private>
    Subject: Security problems in Microsoft and Oracle software
Two top companies have issued new statements acknowledging security flaws in
their products: Microsoft (Windows XP) and Oracle (the 9i application
server, which the company had insisted was "unbreakable").  Resulting from a
vulnerability known as a "buffer overflow," both problems could have allowed
network vandals to take over a user's computer from a remote location.
    Microsoft and Oracle have released software patches to close the security
    holes, and a Microsoft executive says: "Although we've made significant
    strides in the quality of the software, the software is still being written
    by people and it's imperfect. There are mistakes. This is a mistake." (San
    Jose Mercury News 21 Dec 2001; NewsScan Daily, 21 December 2001)
    Date: Wed, 26 Dec 2001 21:19:22 -0800
    From: Henry Baker <hbaker1at_private>
    Subject: "Buffer Overflow" security problems
    I'm no fan of lawyers or litigation, but it's high time that someone defined
    "buffer overflow" as being equal to "gross criminal negligence".
    Unlike many other software problems, this problem has had a known cure since
    at least PL/I in the 1960's, where it was called an "array bounds
    exception".  In my early programming days, I spent quite a number of unpaid
    overtime nights debugging "array bounds exceptions" from "core dumps" to
avoid the even worse problems that would result from not checking the array
bounds.  I then spent several years of my life inventing "real-time garbage
    collection", so that no software -- including embedded systems software --
    would ever again have to be without such basic software error checks.
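The PL/I-style "array bounds exception" Baker describes is simple to state: validate the index on every access and fail loudly when it is out of range, instead of silently overwriting whatever lies beyond the buffer as an unchecked C write can. A sketch in Python, where the explicit check mirrors what a bounds-checked language does implicitly; the Buffer class is illustrative.

```python
# Sketch of an always-checked buffer: an out-of-range index raises an
# exception at the faulting access rather than corrupting memory.

class Buffer:
    def __init__(self, size: int) -> None:
        self._data = bytearray(size)   # fixed-size, zero-filled storage

    def write(self, index: int, value: int) -> None:
        if not 0 <= index < len(self._data):
            # The bounds check Baker is asking for.
            raise IndexError(
                f"index {index} out of range for size {len(self._data)}")
        self._data[index] = value

buf = Buffer(16)
buf.write(0, 0x41)       # in range: fine
# buf.write(16, 0x41)    # would raise IndexError, not overwrite a neighbor
```

The runtime cost at issue is essentially one comparison per access, which is the check that gets "turned off" in the anecdote that follows.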
    During the subsequent 25 years I have seen the incredible havoc wreaked upon
    the world by "buffer overflows" and their cousins, and continue to be amazed
    by the complete idiots who run the world's largest software organizations,
    and who hire the bulk of the computer science Ph.D.'s.  These people _know_
    better, but they don't care!
    I asked the CEO of a high-tech company whose products are used by a large
    fraction of you about this issue and why no one was willing to spend any
    money or effort to fix these problems, and his response was that "the
    records of our customer service department show very few complaints about
    software crashes due to buffer overflows and the like".  Of course not, you
    idiot!  The software developers turned off all the checks so they wouldn't
    be bugged by the customer service department!
    The C language (invented by Bell Labs -- the people who were supposed to be
    building products with five 9's of reliability -- 99.999%) then taught two
    entire generations of programmers to ignore buffer overflows, and nearly
every other exceptional condition, as well.  A famous paper in the
Communications of the ACM (by Miller et al., 1990) found that nearly every
Unix command (all written in C) could be made to fail (sometimes in
spectacular ways) if given random characters ("line noise") as input.  And
this after Unix became the de facto
    standard for workstations and had been in extensive commercial use for at
    least 10 years.  The lauded "Microsoft programming tests" of the 1980's were
    designed to weed out anyone who was careful enough to check for buffer
    overflows, because they obviously didn't understand and appreciate the
    intricacies of the C language.
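The CACM study Baker cites worked by feeding programs random bytes and counting crashes, a technique now called fuzz testing. A toy sketch of the method; the fragile parser here is an illustrative stand-in, not an actual Unix utility.

```python
# Toy fuzzer: feed random "line noise" to a target and count how often
# it dies on input it did not expect.
import random

def fragile_parse(data: bytes) -> int:
    """Stand-in for a tool that assumes newline-terminated input."""
    return data.index(b"\n")   # raises ValueError when no newline exists

def fuzz(target, trials: int = 100, seed: int = 0) -> int:
    """Run `target` on random inputs; return how many raised exceptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        noise = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            target(noise)
        except Exception:
            failures += 1
    return failures

crash_count = fuzz(fragile_parse)   # most random inputs lack a newline
```

Even this crude generator finds the unchecked assumption immediately, which is the study's point: the failures were cheap to provoke and had been shipping for a decade.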
    I'm sorry to be politically incorrect, but for the ACM to then laud "C" and
    its inventors as a major advance in computer science has to rank right up
    there with Chamberlain's appeasement of Hitler.
    If I remove a stop sign and someone is killed in a car accident at that
    intersection, I can be sued and perhaps go to jail for contributing to that
    accident.  If I lock an exit door in a crowded theater or restaurant that
    subsequently burns, I face lawsuits and jail time.  If I remove or disable
    the fire extinguishers in a public building, I again face lawsuits and jail
    time.  If I remove the shrouding from a gear train or a belt in a factory, I
    (and my company) face huge OSHA fines and lawsuits.  If I remove array
    bounds checks from my software, I will get a raise and additional stock
    options due to the improved "performance" and decreased number of calls from
    customer service.  I will also be promoted, so I can then make sure that
    none of my reports will check array bounds, either.
    The most basic safeguards found in "professional engineering" are cavalierly
    and routinely ignored in the software field.  Software people would never
    drive to the office if building engineers and automotive engineers were as
cavalier about buildings and autos as the software "engineer" is about his
software.
I have been told that one of the reasons for the longevity of the Roman
    bridges is that their designers had to stand under them when they were first
    used.  It may be time to put a similar discipline into the software field.
    If buffer overflows are ever controlled, it won't be due to mere crashes,
    but due to their making systems vulnerable to hackers.  Software crashes due
    to mere incompetence apparently don't raise any eyebrows, because no one
    wants to fault the incompetent programmer (and his incompetent boss).  So we
    have to conjure up "bad guys" as "boogie men" in (hopefully) far-distant
    lands who "hack our systems", rather than noticing that in pointing one
    finger at the hacker, we still have three fingers pointed at ourselves.
    I know that it is my fate to be killed in a (real) crash due to a buffer
    overflow software bug.  I feel like some of the NASA engineers before the
Challenger disaster.  I'm tired of being right.  Let's stop the madness and
fix the problem -- it's far worse, and has caused far more damage, than any
Y2K bug, and yet the solution is far easier.
    Cassandra, aka Henry Baker <hbaker1at_private>
    Date: Wed, 26 Dec 2001 21:19:22 -0800
    From: Peter G Neumann <Neumannat_private>
    Subject: "Buffer Overflow" security problems (Re: Baker, RISKS-21.84)
    Henry, Please remember that an expressive programming language that prevents
    you from doing bad things would with very high probability be misused even
    by very good programmers and especially by programmers who eschew
    discipline; and use of a badly designed programming language can result in
    excellent programs if done wisely and carefully.  Besides, buffer overflows
are just one symptom.  There are still lots of lessons to be learned from an
historical examination of Fortran, Pascal, Euclid, Ada, PL/I, C, C++, Java,
and others.
Perhaps in defense of Ken Thompson and Dennis Ritchie, C (and Unix, for that
    matter) was created not for masses of incompetent programmers, but for Ken
    and Dennis and a few immediate colleagues.  That it is being used by so many
    people is not the fault of Ken and Dennis.  So, as usual in RISKS cases,
blame needs to be much more widely distributed than it first appears.  And
pursuing Henry's name-the-blame game, whom should we blame for Microsoft
    systems used unwisely in life- and mission-critical applications?  OS
    developers?  Application programmers?  Programming language developers?
    Users?  The U.S. Navy?  Remember the unchecked divide-by-zero in an
    application that left the U.S.S. Yorktown missile cruiser dead in the water
    for 2.75 hours (RISKS-19.88 to 94).  The shrinkwrap might disclaim liability
    for critical uses, but that does not stop fools from rushing in.
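The Yorktown outage PGN cites began with an unchecked divide-by-zero cascading through an application. The missing check is trivially cheap to express; the function name and fallback policy below are illustrative, not the actual shipboard code.

```python
# Sketch: treat a zero divisor as a handled condition instead of letting
# the exception propagate into unrelated subsystems.

def safe_ratio(numerator: float, denominator: float,
               default: float = 0.0) -> float:
    """Return numerator/denominator, substituting `default` when the
    divisor is zero."""
    if denominator == 0.0:
        return default
    return numerator / denominator
```

Whether to substitute a default, raise a typed error, or log and skip is a design decision; the failure mode to avoid is the one on the Yorktown, where the uncaught exception took the whole system down with it.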
    Nothing in the foregoing to the contrary notwithstanding, it would be very
    helpful if designers of modern programming languages, operating systems, and
    application software would more judiciously observe the principles that we
    have known and loved lo these many years (and that some of us have even
    practiced!).  Take a look at my most recent report, on principles and their
    potential misapplication, for DARPA's Composable High-Assurance Trustworthy
    Systems (CHATS) program, now on my Web site:
    Date: Sat, 29 Dec 2001 17:19:30 -0500
    From: "Laura S. Tinnel" <ltinnelat_private>
    Subject: Sometimes high-tech isn't better
    We're all aware that many companies have buried their heads in the sand on 
    the security issues involved with moving to high-tech solutions in the name 
    of convenience, among other things. When we're talking about on-line sales, 
    educational applications, news media, and the like, the repercussions of 
    such are usually not critical to human life, and therefore the trade-off is 
    made. However, I've just encountered something that is, well, disconcerting 
    at best.
    Earlier today as I sat unattended in an examination room for a half hour
    waiting on the doctor to show up, I carefully studied the new computer
    systems they had installed in each patient room. Computers that access ALL
    patient records on a centralized server located elsewhere in the building,
    all hooked up using a Windows 2000 domain on an ethernet based LAN.
    Computers that contained accessible CD and floppy drives and that could be
rebooted at the will of the patient. Computers hooked up to a hot LAN jack
(oh, for my trusty laptop instead of that Time magazine...).  Big mistake #1:
the classic insider problem.
    Once the doctor arrived and we got comfy, I started asking him about the
    computer system. (I just can't keep my big mouth shut.) Oh he was SO proud
    of their new fangled system. So I asked the obvious question - what would
    prevent me from accessing someone else's records while I sat here unattended
    for a half hour waiting for you to show up? With a big grin on his face, he
    said "Lots of people ask that question. We have security here; let me show
    you." Big mistake #2 - social engineering. Then he proceeded to show me that
    the system is locked until a password is entered. Of course, he said, if
    someone stole the password, then they could get in, but passwords are
    changed every 3 months. And, he continued, that's as secure as you can get
    unless you use retinal scans. (HUH?) I know all about this stuff, for you
    see "my dear", I have a masters degree in medical information technology,
    and I'm in charge of the computer systems at XXXX hospital. OK. Time to fess
up. Doc, I do this for a living, and you've got a real problem here. 1. Have
    you thought about the fact that you have a machine physically in this room
    that anyone could reboot and install trojan software on? A: Well that's an
    issue. 2. Have you thought about the fact that there's a live network
    connection in this room and anyone could plug in and have instant access to
    your network? A: You can really do that???  There's a guy that brings his
    laptop in here all the time. 3. I assume you are using NTFS (yes), have you
    locked down the file system and set the security policies properly? You do
    understand that it is wide open out of the box. A: I don't know what was
    done when the computers were set up. 4. Have you thought beyond just the
    patient privacy issue to the issue of unauthorized modification of patient
    records? What are you doing to prevent this? What could someone do if they
    modified someone else's records? Make them very ill? Possibly kill them? A:
    That's a big concern. (well, duh?)  Then there was a big discussion about
    access to their prescription fax system that could allow people to illegally
    obtain medication. I didn't bother to ask whether or not they were in any
    way connected to the Internet. They either have that or modems to fax out
    the prescriptions. At least he said he'd talk to his vendor to see how they
    have addressed the other issues. Perhaps they have addressed some of these
    things and the doctor I was chatting with simply didn't know.
    I'm not trying to come down on these doctors as I'm sure they have very good
    intentions. I personally think having the medical records on-line is a good
    idea in the long term as it can speed access to records and enable remote
    and collaborative diagnoses, potentially saving lives. But I'm not convinced
    that today we can properly secure these systems to protect the lives they
    are intended to help save. (Other opinions are welcome.) And with the state
    of medical malpractice lawsuits and insurance, what could a breach in a
computer system that affects patient health do to the medical industry if it
becomes reliant on computer systems for storage/retrieval of all patient
records?
A couple of things. First, I'm not up on the state of cyber security in
    medical applications. I was wondering if anyone out there is up on these
    things or if anyone else has seen stuff like this.
    Second, if a breach in the computer system was made and someone was
    mistreated as a result, who could be held liable? The doctors for sure.
    What about the vendor that sold and/or set up the system for them? Does "due
    diligence" enter in? If so, what is "due diligence" in cyber security for
    medical applications?
    Third, does anyone know if the use of computers for these purposes in a
    physician's office changes the cost of malpractice insurance? Is this just
    too new and not yet addressed by the insurance industry? Is there any set of
    criteria for "certification" of the system for medical insurance purposes,
possibly similar to that required by the FDIC for the banking industry? If
so, are the criteria really of any value?
      [This is reproduced here from an internal e-mail group, with Laura's
      permission.  A subsequent response noted the relative benignness of past
      incidents and the lack of vendor interest in good security -- grounds that
      we have been over many times here.  However, Laura seemed hopeful that the
      possibility of unauthorized modification of patient data by anyone at all
      might stimulate some greater concerns.  PGN]
    Date: Fri, 28 Dec 2001 17:22:33 -0800
    From: Jeffrey Mogul <JeffMogulat_private>
    Subject: When a "secure site" isn't
    Most security-aware Web users are familiar with the use of SSL to provide
    privacy (through encryption) and authentication (through a system of
    signatures and certificate authorities).  We have also learned to check for
    the use of SSL, when doing sensitive operations such as sending a credit
    card number, by looking for the "locked-lock icon" in the lower margin of
    the browser window.  (Netscape and Mozilla display an "unlocked lock" icon
    for non-SSL connections; IE appears to display a lock icon only for SSL
    connections.)  Notwithstanding the risks (cf. Jeremy Epstein in RISKS 21.83)
    of assuming that the use of SSL means "totally secure," its use is at least
    a prerequisite for secure Web commerce.
    A few months ago, I was using the Web site of an established catalog
    merchant (predating the Internet) to order merchandise with my credit card.
    This merchant (whose name I have changed to "MostlyInnocent.com") displays a
    "VeriSign Secure Site - Click to verify" logo on almost every page, and I
    had no reason to distrust their security.  I had just typed the card number
    into their order form when I realized that the browser's icon was in the
    unlocked state -- fortunately, before I submitted the page.
Aha! An allegedly "Secure Site" that wasn't using SSL.  Something was fishy.
    I verified that my credit card would have been sent in cleartext by running
    "tcpdump", entering a made-up credit card number, and finding that number in
    the tcpdump trace.
    At this point, I did click on the "VeriSign Secure Site - Click to verify"
    logo, which popped up a window proving the validity of the site's SSL
    authentication.  This window did indeed use SSL (evidenced by the locked
    icon).  Moreover, the merchant's "Privacy Policy" page says "sensitive
    information [...] is protected both online and off-line by state-of-the-art,
    industry-standard technology [...] what we believe to be the best encryption
    software in the industry - SSL."
    I immediately complained to the site's maintainers (this was made somewhat
    trickier because the phrase "If you have any questions about the security at
    the Site, you can send an e-mail to:" wasn't followed by an e-mail address!).
    Their first response was
      I checked with our Web master and was assured that our new site is secure.
      Located on the right-hand side of our home page is the VeriSign logo, and
      a customer can verify that our site is indeed "valid" and secure.
    I replied, pointing out the bug in this statement.  Within 3 hours, they
    responded again that they had fixed the problem, and I was able to
    successfully place an order using SSL.  I suspect that MostlyInnocent.com
    had simply forgotten to use "https" instead of "http" in a URL somewhere in
    the order-entry path, which shows evidence of negligence, but nothing worse.
    (I have no idea how long their site was afflicted by this bug.)
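If the bug really was a lone http:// URL in the order-entry path, a crude audit would have caught it: parse each page and flag any form that would submit over plain HTTP. A sketch using only the standard library; the page content below is a made-up stand-in for the merchant's site.

```python
# Sketch: flag <form> tags whose action would send data unencrypted.
from html.parser import HTMLParser

class FormAuditor(HTMLParser):
    """Collect form actions that submit over plain HTTP."""
    def __init__(self) -> None:
        super().__init__()
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            action = dict(attrs).get("action") or ""
            if action.startswith("http://"):
                self.insecure_actions.append(action)

auditor = FormAuditor()
auditor.feed(
    '<form action="http://mostlyinnocent.example/order" '
    'method="post"></form>')
```

Run over every page in the checkout flow, this one check finds the "forgot the s in https" class of bug before a customer's tcpdump does. (A relative action inherits the page's own scheme, so those pages need the page URL checked instead.)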
    However, the larger problem remains: the site design (and, at first, the
    site maintainers) relies heavily on the implication that the "Secure Site"
    logo proves the security of the site.  Clearly, it does not prove very much.
    True network security is an end-to-end property. Saltzer et al., in
    "End-to-end arguments in system design," relate that this has been known
    since at least 1973 (see Dennis K. Branstad. Security aspects of computer
    networks. AIAA Paper No. 73-427, AIAA Computer Network Systems Conf,
    Huntsville, AL, April, 1973).  The best place to verify that a site is using
    SSL properly is therefore in the browser software.  Modern browsers do a
    fairly good job of alerting the user to the use of SSL (via the locked icon)
    and to questionable SSL certificates (through various popups).
    This implies that we should continue educating users to check for the locked
    icon when sending sensitive information (and, to be fair, the Privacy page
    at MostlyInnocent.com does say "While on a secure page, such as our order
    form, the lock icon on the bottom of Web browsers [...] becomes locked.")
    The use of the "VeriSign Secure Site" logo actively subverts this training,
    because it is much more prominent on the screen, yet proves very little
    about the site's security.
    I complained to VeriSign about this problem.  After a terse (and somewhat
    irrelevant) response, they have not replied to several repeated e-mail
    messages.  My guess is that VeriSign profits from the proliferation of this
    logo (by charging merchants for its use), and therefore has little interest
    in promoting the end-to-end (locked-icon) model.
    I don't see any efficient way for VeriSign to verify that sites using the
    logo are properly using SSL.  Perhaps they could audit these sites on a
    regular basis, but that would require significant investments (in software,
    client systems for running audits, and network bandwidth) and it still would
    not be as reliable as the end-to-end model.
    Browser implementors might be able to provide some protection, on the other
    hand, by flagging the user's attempt to enter what appears to be a
    credit-card number on a non-SSL connection.  (My browser could store a
    one-way hash function of all of my credit-card numbers, thus facilitating
    this check without risking the security of my numbers.)  I'm not sure
    whether most browser vendors have any incentive to do that; most have much
    deeper security problems to solve.
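Mogul's browser-side idea can be sketched directly: the browser keeps only a one-way hash of each card number, so it can recognize one being typed into a form on a non-SSL connection without retaining the number itself. Salting and real input canonicalization are omitted, and all names are illustrative (the number used is the well-known Visa test number).

```python
# Sketch of the one-way-hash check: recognize a protected card number
# without storing it.
import hashlib

def card_fingerprint(card_number: str) -> str:
    """Hash the card's digits, ignoring spaces and dashes."""
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return hashlib.sha256(digits.encode()).hexdigest()

# Fingerprints the user chose to protect.
stored = {card_fingerprint("4111 1111 1111 1111")}

def warn_if_known_card(typed: str, connection_is_ssl: bool) -> bool:
    """True when a protected card number is about to leave over plain HTTP."""
    return (not connection_is_ssl) and card_fingerprint(typed) in stored
```

A real implementation would salt the hashes and rate-limit lookups, since an unsalted digest of a 16-digit number is brute-forceable; the point here is only that the check needs no cleartext storage.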
    Date: 12 Feb 2001 (LAST-MODIFIED)
    From: RISKS-requestat_private
    Subject: Abridged info on RISKS (comp.risks)
     The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
    => SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
     if possible and convenient for you.  Alternatively, via majordomo,
     send e-mail requests to <risks-requestat_private> with one-line body
      subscribe [OR unsubscribe]
     which requires your ANSWERing confirmation to majordomoat_private .
     [If E-mail address differs from FROM:  subscribe "other-address <x@y>" ;
     this requires PGN's intervention -- but hinders spamming subscriptions, etc.]
     Lower-case only in address may get around a confirmation match glitch.
      INFO     [for unabridged version of RISKS information]
     There seems to be an occasional glitch in the confirmation process, in which
     case send mail to RISKS with a suitable SUBJECT and we'll do it manually.
      .MIL users should contact <risks-requestat_private> (Dennis Rears).
      .UK users should contact <Lindsay.Marshallat_private>.
    => The INFO file (submissions, default disclaimers, archive sites,
     copyright policy, PRIVACY digests, etc.) is also obtainable from
     http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
     The full info file will appear now and then in future issues.  *** All
     contributors are assumed to have read the full info file for guidelines. ***
    => SUBMISSIONS: to risksat_private with meaningful SUBJECT: line.
    => ARCHIVES are available: ftp://ftp.sri.com/risks or
     ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
      [volume-summary issues are in risks-*.00]
      [back volumes have their own subdirectories, e.g., "cd 20" for volume 20]
     http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue].
      Lindsay Marshall has also added to the Newcastle catless site a
      palmtop version of the most recent RISKS issue and a WAP version that
      works for many but not all telephones: http://catless.ncl.ac.uk/w/r
     http://the.wiretapped.net/security/info/textfiles/risks-digest/ .
     http://www.planetmirror.com/pub/risks/ ftp://ftp.planetmirror.com/pub/risks/
    ==> PGN's comprehensive historical Illustrative Risks summary of one liners:
     http://www.csl.sri.com/illustrative.html for browsing,
     http://www.csl.sri.com/illustrative.pdf or .ps for printing
    End of RISKS-FORUM Digest 21.84

    This archive was generated by hypermail 2b30 : Sat Jan 05 2002 - 13:47:11 PST