RISKS-LIST: Risks-Forum Digest  Wednesday 19 July 2006  Volume 24 : Issue 34

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/24.34.html>
The current issue can be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
Computer closes Berlin tunnel again (Debora Weber-Wulff)
B747 freighter crash (Peter B. Ladkin)
Y2038 bug strikes early (Conrad Heiney)
One fewer risk (R.A. Whitfield)
Yet another example of accidental disclosure of redacted info (Aaron Emigh)
More university data exposures (PGN)
Deceiving a computer is now a crime (Vassilis Prevelakis)
Risks of increasingly complex hardware/software in rescue gear
  (Fernando Pereira)
Unexpected electromagnetic interference (Ken Winters)
Companies still unclear on authentic e-mail transmission (Steve Summit)
Re: Sunken Ferry Crew didn't know how to use ECS display software
  (Joseph A. Dellinger)
Re: Microsoft Patches crash IBM Midrange Consoles (Henry Baker, Al Mac)
REVIEW: "How to Break Web Software", Mike Andrews/James A. Whittaker
  (Rob Slade)
REVIEW (sorta): "Dictionary of Information Security", Robert Slade (Rob Slade)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Tue, 4 Jul 2006 20:27:31 +0200 (CEST)
From: "Debora Weber-Wulff" <D.Weber-Wulff@fhtw-berlin.de>
Subject: Computer closes Berlin tunnel again

The Berlin newspaper Der Tagesspiegel reports on 4 Jul 2006
[http://archiv.tagesspiegel.de/archiv/04.07.2006/2638585.asp] on another
computer-caused tunnel closing (a previous episode is in RISKS-24.09:
http://catless.ncl.ac.uk/Risks/24.09.html#subj1.1).

At about 1 a.m., it seems, the central computer lost contact with the
traffic system in the tunnel.  A technician was roused, but he pointed out
that the city, in order to save money, had not signed a support agreement,
so he was not on call at night.

An accident then occurred in the tunnel, with a car flipping over.  The
sensors reported the problem, but because the central computer could not
communicate with the system in the tunnel, the tunnel could not be closed.
The car caught fire, and the smoke set off more sensors, which were
programmed to close the tunnel automatically (with the accident victim
inside).  Since one of the gates would not close (it had been demolished but
not repaired), the out-of-control system went into fail-safe mode and turned
all of the traffic lights red.

Even in the middle of the night Berlin never sleeps (especially during a
soccer World Cup), so traffic piled up, with no one able to get anywhere
near the tunnel.  Police were called to direct traffic and to get the
accident car and its driver (unhurt, if ruffled) out by about 5.30 a.m.,
shortly before rush hour begins (7.30 is a normal working-day start in the
eastern parts of Berlin).

The computer refused to budge from the fail-safe mode.  They called the
technician again (who was now awake anyway).  He agreed to come in, but he
could not get the system to restart either, until he cut through the cabling
to force a cold start of the traffic lights on the major streets.  It took
another few hours to get everything working again.

MTBPF (mean time between published failures): 7 months.
Prof. Dr. Debora Weber-Wulff, FHTW Berlin, FB 4, Treskowallee 8,
10313 Berlin GERMANY  +49-30-5019-2320
http://www.f4.fhtw-berlin.de/people/weberwu/

------------------------------

Date: Tue, 04 Jul 2006 13:17:36 +0200
From: "Peter B. Ladkin" <ladkin@private-bielefeld.de>
Subject: B747 freighter crash

The Canadian TSB have issued the report on the 14 October 2004 crash of a
Boeing B747 freighter on takeoff at Halifax airport, Nova Scotia.

According to a Flight International report by David Kaminski-Morrow
(4-10 July 2006, p4), the TSB "says that the crew's misunderstanding of a
laptop computer tool for calculating take-off performance led to the
accident.  It concludes that the crew unwittingly transferred and used
weight data from the aircraft's previous flight while calculating
performance criteria for the next take-off.  The obsolete data misled the
crew to derive incorrect thrust settings and critical speeds for take-off."

The aircraft failed to lift off after rotation and overran the end of the
runway by 250 meters, briefly lifting off but then striking an earth berm,
severing the tail section and bringing the aircraft to earth again.  All
seven crew were killed.

This is the second laptop-involved accident to be reported (but the first to
have occurred).  The Southwest Airlines accident in 2005, also a runway
overrun but on landing, was discussed in RISKS-24.15 (Thompson), 24.16
(Ladkin, dwikstrom), 24.17 (Norman) and subsequently (24.18, 24.19).

Peter B. Ladkin, Causalis Limited and University of Bielefeld
www.causalis.com  www.rvs.uni-bielefeld.de

------------------------------

Date: Thu, 29 Jun 2006 13:38:25 -0700
From: Conrad Heiney <conrad@private>
Subject: Y2038 bug strikes early

Starting on May 12, 2006, many installations of the AOLServer web server
failed.  Not all versions or all configurations failed, but the ones that
did became unusable.  On start, the server would eat virtual memory and then
terminate with a memory allocation error.

Discussion on the mailing list revealed the starting date of the problem,
indicating that some part of the software had a clock issue.  On careful
inspection it was discovered that database threads were a common factor.  It
was then noted by a perceptive person that the servers all failed at, or
just before, the point exactly one billion seconds before the end of the
Unix epoch in 2038.  Many installations had very long database timeouts,
which caused the software to look ahead and see the End of Time.  Adjusting
the timeouts stopped the crashes.

This known clock bug striking 32 years early indicates that there may be
other "pre-problems" lurking in software that will show up long before the
date we have comfortably set as the deadline.

The thread discussing the problem and its resolution is here:
  http://www.mail-archive.com/aolserver@private/msg09812.html
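
To make the arithmetic concrete, here is a minimal sketch (mine, not from
the AOLServer thread; the timestamp and timeout values are assumed) of how a
look-ahead deadline computed in signed 32-bit time wraps into the past long
before 2038 once the timeout is large enough:

  # Emulate signed 32-bit time_t arithmetic, the way a C program sees it.
  INT32_MAX = 2**31 - 1

  def to_int32(n):
      """Wrap an arbitrary integer into signed 32-bit two's-complement range."""
      n &= 0xFFFFFFFF
      return n - 2**32 if n > INT32_MAX else n

  now = 1_147_483_650      # roughly 13 May 2006, in Unix seconds (assumed)
  timeout = 1_000_000_000  # a very long database timeout: about 31.7 years

  deadline = to_int32(now + timeout)
  print(deadline)          # negative: the 32-bit "deadline" has wrapped into the past
  assert deadline < now    # code comparing times now sees an already-expired deadline

The same timeout held in a 64-bit time value (or kept as a relative
interval) stays sane; the failure needs both a large look-ahead and a signed
32-bit representation.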

------------------------------

Date: Sun, 25 Jun 2006 05:47:48 -0400
From: "R. A. Whitfield" <inquiry@quality-control.us>
Subject: One fewer risk

Abatement of a risk posed by computing is only occasionally noted in RISKS.
So it is a pleasure to announce a risk that has been eliminated altogether.

Robert Siegel, the co-host of National Public Radio's "All Things
Considered" program, interviewed Stuart Levey, the Under Secretary for
Terrorism and Financial Intelligence at the Department of the Treasury, on
the June 23, 2006 program.  The two discussed the recent revelation of a
surveillance program of banking transactions conducted by the Treasury
Department in association with the Society for Worldwide Interbank Financial
Telecommunication (SWIFT) in Brussels.  An audio recording of the interview
can be found at
  http://www.npr.org/templates/story/story.php?storyId=5507148

Mr. Siegel expressed concern that the surveillance of a tremendous amount of
financial data might be viewed as a "fishing expedition."  He was reassured
by Mr. Levey as follows:

"Well, actually I think there's a little bit of misconception there...  We
have a set of data that is provided to us by SWIFT.  But in fact we cannot
access it just to do whatever we want - to browse through it, for example,
or to do broad searches.  Instead, we can only access it if our analyst
first explains exactly who or what they want to do a search on and then
articulates how that person or entity is connected to an ongoing terrorism
investigation."

Q. (by Mr. Siegel) "To whom is that person explaining and articulating those
conditions?"

A. (by Mr. Levey) "Well, the way it works is the analyst has to type that
into the database and cannot access the database until that has been
accomplished.  And there are two sets of auditors that are monitoring what
is going on.  One, in a very, I think, creative and unusual arrangement that
we set up with the company, with SWIFT...  SWIFT itself is on site and has
real-time access to all the searches that are being done and at any time
their representatives can stop the search and say, 'Wait a minute.  I have a
question about that.  I'm not sure why that's connected to terrorism or the
connection hasn't been articulated sufficiently.'  And then, after the fact,
it can be audited both by SWIFT and by outside auditors that we've engaged
to do just that."

The risk eliminated by Mr. Levey's explanation is, of course, the risk that
anyone will believe an official statement from the U.S. Treasury Department
about the Department's surveillance activities.

R.A. Whitfield  inquiry@quality-control.us  www.quality-control.us

  [Reminder: RISKS has *always* had a policy of eagerly welcoming items
  relating to the avoidance of risks through good practices, or even
  accidentally!  Unfortunately, they are rarely submitted (and this one is
  of course not an exception).  By the way, my regrets for the long hiatus
  between the previous issue and this one.  This year's seasonal slowdown
  has been slower than usual.  PGN]

------------------------------

Date: Thu, 22 Jun 2006 14:56:39 -0700
From: "Aaron Emigh" <aaron-risks@private>
Subject: Yet another example of accidental disclosure of redacted info

This article reports on yet another case in which an electronic document
with redactions actually contains all of the redacted data:
  http://www.nytimes.com/2006/06/22/washington/22cnd-leak.htm

After many such incidents, it seems a reasonable conclusion that complex
document formats that permit overlays, such as Word and PDF, are too prone
to misuse when information is intended to be redacted.  In a closely related
issue, buffers in many document formats can inadvertently contain sensitive
information that the author intended to delete.

It seems clear that only simple electronic document formats - preferably
just imaging formats such as TIF/GIF/PNG - are suitable for use in cases
where sensitive information is intended to be excluded from an edited
document.  Even then, the inclusion of a step that removes the possibility
of contamination beyond the visible content - such as faxing the document to
a fax machine and scanning the fax back in - may be advisable.

Aaron Emigh, Radix Labs  1-415-297-1305
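
One crude sanity check along these lines (my sketch, not the contributor's;
the filename and strings are placeholders): before releasing a "redacted"
file, search its raw bytes for the material that was supposed to be removed.
This catches the naive overlay case where the text survives under a black
box; a clean result proves nothing about subtler leakage (compressed
streams, metadata, revision buffers).

  # Crude pre-release check: does the "redacted" file still contain the
  # sensitive strings anywhere in its raw bytes?  Absence of a hit is NOT
  # proof of safety.
  redacted_terms = [b"Jane Doe", b"555-0100"]       # hypothetical examples

  with open("released-document.pdf", "rb") as f:    # hypothetical filename
      raw = f.read()

  for term in redacted_terms:
      if term in raw:
          print("WARNING: supposedly redacted text still present:", term)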

------------------------------

Date: Wed, 21 Jun 2006 12:06:29 PDT
From: "Peter G. Neumann" <neumann@private>
Subject: More university data exposures

Two universities last week reported incidents in which outsiders may have
gained access to personal information, including Social Security numbers,
of a total of up to nearly a quarter-million students.  [On 13 Jun,] Western
Illinois University announced that a hacker may have copied Social Security
or credit-card numbers of between 200,000 and 240,000 current or former
students.  The credit cards had been used to purchase textbooks online or
for stays in a university hotel.  [Source: Vincent Kiernan, Incidents at 2
Universities Put More Than 200,000 Students at Risk of Data Theft, The
Chronicle of Higher Education, 19 Jun 2006]

------------------------------

Date: Sun, 16 Jul 2006 05:00:35 -0400 (EDT)
From: Vassilis PREVELAKIS <vp@private>
Subject: Deceiving a computer is now a crime

Earlier this year the UK Attorney General introduced an amendment to the
Fraud Bill to make "deceiving a computer" a criminal offense.  While the
intention (preventing people from hacking chip-and-PIN devices) is
worthwhile, the attempt to attribute human characteristics to machines is
bound to cause problems.

I was going to write something on the subject when I discovered that this
has already been discussed in this forum 18 years ago!

Risks Volume 7: Issue 69, Thursday 3 November 1988

dan@private wrote:

> Tue, 01 Nov 88 16:39:47 -0500
>
> Re the Confederation of British Industry's proposal to change the law on
> defrauding to include deception of computers as well as people:
>
> To state the obvious, computer programs are so limited in their ability to
> understand what someone might be trying to do, and what information is
> necessary for that purpose, that it's often necessary to "deceive" them
> just to get them to do the right thing.  It's much like the problem of
> figuring out what to put on a complex form, like tax forms: every
> individual situation is different, and the form either provides no way at
> all to say what your situation is, or provides several equally plausible
> ways to express it.  But at least forms have margins, and you can attach
> additional pieces of paper to them.  Computer-based "forms" have neither.
>
> Here's an example: in the process of trying to provide some service, a
> computer asks for my telephone number.  I don't believe it has any right to
> that number for this purpose, so I refuse to answer.  But it won't go on to
> the next query until I answer that one.  I find someone in charge: "I don't
> want to give my phone number out.  Is that OK?"  "Sure.  Just give it a fake
> number and go on."  The computer is now "deceived".  It's ridiculous to
> think that both I and the computer's owner could now be charged with
> fraud!
>
> Taken literally, such a law would also preclude thorough testing of
> computer software.  In testing, you're almost always "deceiving" the
> computer in order to see whether it will handle some case correctly,
> particularly if you're checking error handling.  Are testers going to have
> to insert special routines that print out "It's OK, I know this is a test"
> before giving any answers, to avoid prosecution?
>
> There are also serious theoretical problems with the notion of "deceiving"
> a computer.  In theory, deception occurs when an individual is deliberately
> led to believe X when not-X is true.  But what does "belief" mean when
> applied to a computer system?  If I have a file on a computer system that
> says I'm 3 years old, does that mean the computer "believes" I'm three
> years old?  Of course not, you say.  What if it's in a database?  Is it
> deception then?
>
> I think it's all the fault of some AI people who would like us to think
> that all it takes to be able to say that a computer system believes a fact
> is that it's in a Lisp-based inference system that includes a "believes"
> predicate!
>
> Dan Franklin

They never give up, do they?

------------------------------

Date: Fri, 30 Jun 2006 16:31:19 -0400
From: Fernando Pereira <pereira@private>
Subject: Risks of increasingly complex hardware/software in rescue gear

Avalanche beacons have been undergoing rapid development as low-power DSP
technology improves and pursuits like backcountry skiing grow in popularity.
For those readers who don't know about these beacons, here's a description:
  <http://www.telemarkski.com/html/how_beacon_select.html>
A beacon is carried strapped to the user's upper body.  In its default "on"
state, it transmits a regular signal at 457 kHz.  If the user is buried by
an avalanche, other members of the group switch their beacons to receive
mode.  In receive mode, a beacon can be used to follow flux lines from the
transmitting beacon to locate the buried beacon and the user to which it is
strapped.  Ease of operation by rescuers is critical, since chances of
survival for the buried victim decrease rapidly after 20 minutes under the
snow.  Therefore, modern beacons have gained increasingly sophisticated DSP
features that facilitate tracking a transmitting beacon, and also
distinguishing among multiple signals in a multiple-burial situation.

Not so long ago, a newly released and highly touted beacon model had to
receive a firmware update because of concerns with its multiple-burial
detection software.  Now, one of the leading vendors has introduced a beacon
that claims to signal to like beacons whether the buried victim is still
alive, allowing rescuers to move on and dig out those victims who are still
alive:
  <http://powdermag.com/features/news/mammut-pulse/>

This is causing some concern among experienced backcountry skiers.  What if
rescuers rely on this feature, but a transmitting beacon fails to detect its
user's vital signs?  Conversely, what if a beacon hallucinates vital signs
that are not there?  What are the responsibilities of rescuers relative to
possible beacon-generated misinformation?  What are their responsibilities
in dealing with a multiple burial where some of the beacons do not have the
feature?  Does the additional (mis?)information add to the cognitive and
emotional overload that is well known to affect decision-making among
rescuers?  We have limited capacities; should part of those limited
capacities be devoted to adjudicating these difficult issues in a
life-or-death situation?

------------------------------

Date: Wed, 28 Jun 2006 20:45:52 -0400
From: Ken Winters <k27winters1@private>
Subject: Unexpected electromagnetic interference

Shortly after reading the item about the laptop seizing up when a cell phone
was put next to it, I used my kitchen scale (digital, but hardly a complex
piece of electronic equipment) while I was using my cordless phone.  The
scale wouldn't keep a stable "zero" setting.
With some trial and error, I found that accessing the stored information
(previous calls and directory) would cause the scale reading to briefly
change by as much as 34 grams.  In 5 to 10 seconds it returned to normal.

Hardly serious, at least in this case, but it reminds us: "Expect the
Unexpected"!

------------------------------

Date: Sun, 09 Jul 2006 20:44:06 -0400
From: Steve Summit <scs@private>
Subject: Companies still unclear on authentic e-mail transmission

I received e-mail from PayPal the other day -- *real* e-mail, not one of the
15-20 PayPal-branded phishing scams I get each day.  I was pleasantly
surprised that it made it past my spam filter, and I was curious to see what
sorts of things PayPal is doing these days to assure recipients that their
missives *are* genuine.  Answer: not much, and in fact the first Received
line -- which as we know is about all you can trust in an ordinary e-mail --
indicated that it came from "protege.postdirect.com", i.e., some third
party.  Sigh.

------------------------------

Date: Thu, 22 Jun 2006 23:00:44 -0500
From: "Joseph A. Dellinger" <geojoe@private>
Subject: Re: Sunken Ferry Crew didn't know how to use ECS display software
  (Manning, RISKS-24.33)

At the George Observatory near Houston, Texas, we also need to "dim down the
display" so we can work in near darkness while imaging 20th-magnitude
asteroids.  We used to have a monitor with an analog wheel that could be
used to control the display brightness.  Alas, it seems that all the modern
monitors adjust their brightness with button controls.  Changing the
brightness up and down is a pain to do, and the display that comes up while
changing the brightness is itself uncontrollably bright.  You can use
Windows to change the "theme" to something red and dim, but then some
important control menus in programs we need come up black on black, and so
are unusable.

Our solution was a red piece of plastic the size of the monitor, held on
with Velcro.  It pops on (for dim) and off (for bright), and never causes
any software incompatibility issues.  :-)

The risk?  Insisting on a software solution for a software problem.
Although the folks on the ferry did come up with a hardware solution
(turning the monitor off), it wasn't a very good one!

------------------------------

Date: Tue, 20 Jun 2006 13:36:28 -0700
From: Henry Baker <hbaker1@private>
Subject: Re: Microsoft Patches crash IBM Midrange Consoles (Macintyre, R-24.33)

This is not a new problem for IBM.  My first job in high school was looking
after a large (at that time!) 7040 system which utilized an (IBM, of
course!) Selectric typewriter as the system console.  Guess which I/O device
on the 7040 caused the most down-time?  (Hint: not the tape drives, the
printer, the front-end 1401, or the card reader/punch...)

------------------------------

Date: Thu, 22 Jun 2006 11:05:44 -0500
From: Al Mac <macwheel99@private>
Subject: IBM Patch Troubles

While there are a FEW people in IBM Customer Land who apply patches without
reading the instructions, or researching the gotchas, I imagine there are a
LOT of people in Microsoft Customer Land in that boat.  The difference is
the rate of patches, the likelihood of problems, and the number of occupants
of Customer Land who consider what is being patched to be mission critical.

http://www.itjungle.com/tfh/tfh061906-story02.html

------------------------------

Date: Mon, 26 Jun 2006 11:46:59 -0800
From: "Rob, grandpa of Ryan, Trevor, Devon & Hannah" <rMslade@private>
Subject: REVIEW: "How to Break Web Software", Mike Andrews/James A.
  Whittaker

BKHTBWSW.RVW   20060520

"How to Break Web Software", Mike Andrews/James A. Whittaker, 2006,
0-321-36944-0, U$34.99/C$46.99
%A   Mike Andrews Mike.Andrews@private
%A   James A. Whittaker jw@private
%C   P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario M3C 2T8
%D   2006
%G   0-321-36944-0
%I   Addison-Wesley Publishing Co.
%O   U$34.99/C$46.99 416-447-5101 800-822-6339 bkexpress@private
%O   http://www.amazon.com/exec/obidos/ASIN/0321369440/robsladesinterne
     http://www.amazon.co.uk/exec/obidos/ASIN/0321369440/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/0321369440/robsladesin03-20
%O   Audience i+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P   219 p. + CD-ROM
%T   "How to Break Web Software"

The preface stresses that this book is neither about how to attack a Web
site, nor how to develop one, but, rather, how to test.  Chapter one points
out that the Web is a different environment, in terms of software security,
because we have desktop machines, not centrally administered, talking to
everyone (with much of the traffic being commercial in nature).  The authors
even point out that issues of error-handling, performance, and ease-of-use
all contribute to increased levels of vulnerability.

Various attacks designed to obtain information about Web applications,
structure, and functions are described in chapter two.  For client-side
scripting, chapter three notes, any validation done on the client should be
untrusted and re-validated on the host, since it may be altered on the
client, or data manually entered as if it came from the client.  Chapter
four explains the danger of using client-side data (cookies or code) for
state information.  Chapter five examines user-supplied data, and delves
into cross-site scripting (XSS, the explanation of which is not well done),
SQL (Structured Query Language) injection, and directory traversal.
Language-based attacks, in chapter six, involve buffer overflows (which are
not explained terribly well), canonicalization (HTML and Unicode encoding
and parsing), and null string attacks.  The server, with utilities and the
underlying operating system, can be reached via stored procedures (excessive
functionality), fingerprinted for other attempts, or subject to denial of
service (in limited ways), as chapter seven notes.  "Authentication," in
chapter eight, is really more about encryption: the various false forms
(encryption via obscurity?), brute force attacks against verification
systems, and forcing a system to use weak encryption.  Privacy, and related
Web technologies (of which cookies are only one), is reviewed in chapter
nine.  Chapter ten looks at Web services, and the vulnerabilities associated
with some of these systems.

The CD-ROM included with the book contains a number of interesting and
useful tools for trying out the various attacks and tests mentioned in the
text.

This book is a valuable addition to the software security literature.  The
attacks listed in the work are known, but often by name only.  This text
collects and explains a wide variety of Web application attacks and
weaknesses, providing developers with a better understanding of how their
programs may be assailed.  Some of the items mentioned are defined or
explained weakly, but these are usually items that do have good coverage in
other security works.

copyright Robert M. Slade, 2006   BKHTBWSW.RVW   20060520
rslade@private  slade@private  rslade@private
http://victoria.tc.ca/techrev/rms.htm
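
To make the chapter-three and chapter-five points above concrete, here is a
hedged sketch (mine, not an example from the book; the table and field names
are hypothetical): whatever the browser-side script checked, the server
re-validates the input and binds it as a parameter rather than splicing it
into the SQL text.

  import re
  import sqlite3

  def lookup_user(conn: sqlite3.Connection, username: str):
      # Server-side re-validation: never trust checks done on the client,
      # since the client can be altered or bypassed entirely.
      if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", username):
          raise ValueError("invalid username")
      # Parameterized query: the value is bound, not concatenated, so input
      # like  ' OR '1'='1  cannot change the structure of the statement.
      cur = conn.execute("SELECT id, email FROM users WHERE name = ?",
                         (username,))
      return cur.fetchone()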

------------------------------

Date: Fri, 07 Jul 2006 17:31:47 -0800
From: "Rob, grandpa of Ryan, Trevor, Devon & Hannah" <rMslade@private>
Subject: REVIEW (sorta): "Dictionary of Information Security", Robert Slade

BKDCINSC.RVW   20060528

"Dictionary of Information Security", Robert Slade, 2006, 1-59749-115-2,
U$29.95/C$38.95
%A   Robert Slade rslade@private
%C   800 Hingham Street, Rockland, MA 02370
%D   2006
%G   1-59749-115-2
%I   Syngress
%O   U$29.95/C$38.95 781-681-5151 fax: 781-681-3585 amy@private
%O   http://www.amazon.com/exec/obidos/ASIN/1597491152/robsladesinterne
     http://www.amazon.co.uk/exec/obidos/ASIN/1597491152/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1597491152/robsladesin03-20
%O   http://www.syngress.com/catalog/?pid=4150
%O   Audience n+ Tech 3 Writing 3 (well, I would, wouldn't I?)
%P   256 p.
%T   "Dictionary of Information Security"

Their our lots of wurds in this book.  Sum of the werds are big.  They're
are no pitchers in this book.  If ewe like big wirds and no pitchers you
will like this book.  [Nut meny mispelingz in the book, though.  PGN]

The courier driver showed up at noon, today, with the box of author copies.
So I can, with assurance (p. 13), state that the volume now actually exists
in hardcopy.  After four years of maintaining it mostly as a resource for
those studying for the CISSP exam, it's now going to be available in
bookstores for everyone.

It's been interesting, working with Syngress.  Having worked with more
traditional publishers, I was rather expecting the usual 2-3 months of
contract negotiations, 2-3 months to get out the final manuscript (the book
had, after all, already been basically finished: I'd been using it on the
Website for some time), and the usual 4-6 months in copy editing and galley
proofing.  The contract negotiations took about a month and a half.  I got
the final contract May 18th.  They wanted the manuscript on the 26th.  I got
the galley proofs on June 1st, and had them back to Syngress on June 4th.
(Then there seems to have been some kind of hiccup with the printer: it's
been "due" every day now for about three weeks.)

So now, I suppose, I'd better get a move on.  I've already replaced the
glossary page (http://victoria.tc.ca/techrev/secgloss.htm) with an errata
page, and I've got about 60 entries that need to be added or corrected.  So
I hope you'll all actually buy the book, and Syngress will be moved to put
out a new edition fairly soon.  (And regularly, after that.)

copyright Robert M. Slade, 2006   BKDCINSC.RVW   20060528
rslade@private  slade@private  rslade@private
http://victoria.tc.ca/techrev/rms.htm

------------------------------

Date: 2 Oct 2005 (LAST-MODIFIED)
From: RISKS-request@private
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest, with Usenet equivalent comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.  The mailman web interface can
 be used directly to subscribe and unsubscribe:
   http://lists.csl.sri.com/mailman/listinfo/risks
 Alternatively, to subscribe or unsubscribe via e-mail to mailman
 your FROM: address, send a message to risks-request@private
 containing only the one-word text subscribe or unsubscribe.  You may
 also specify a different receiving address: subscribe address= ... .
 You may short-circuit that process by sending directly to either
   risks-subscribe@private or risks-unsubscribe@private
 depending on which action is to be taken.  Subscription and unsubscription
 requests require that you reply to a confirmation message sent to the
 subscribing mail address.  Instructions are included in the confirmation
 message.  Each issue of RISKS that you receive contains information on how
 to post, unsubscribe, etc.
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
 The full info file may appear now and then in RISKS issues.
 *** Contributors are assumed to have read the full info file for guidelines.
=> .UK users should contact <Lindsay.Marshall@private>.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you NEVER send mail!
=> SUBMISSIONS: to risks@private with meaningful SUBJECT: line.
 *** NOTE: Including the string "notsp" at the beginning or end of the subject
 *** line will be very helpful in separating real contributions from spam.
 *** This attention-string may change, so watch this space now and then.
=> ARCHIVES: ftp://ftp.sri.com/risks [subdirectory i for earlier volume i]
 <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive
 http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
 Lindsay has also added to the Newcastle catless site a palmtop version of
 the most recent RISKS issue and a WAP version that works for many but not
 all telephones: http://catless.ncl.ac.uk/w/r
 <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing

------------------------------

End of RISKS-FORUM Digest 24.34
************************