RISKS-LIST: Risks-Forum Digest Tuesday 4 April 2006 Volume 24 : Issue 23 ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy ***** See last item for further information, disclaimers, caveats, etc. ***** This issue is archived at <http://www.risks.org> as <http://catless.ncl.ac.uk/Risks/24.23.html> The current issue can be found at <http://www.csl.sri.com/users/risko/risks.txt> Contents: Three days of San Francisco BART upgrade crashes (PGN) Nashville airport X-ray baggage screeners offline: "software glitch" (Carl G. Alphonce) IT Corruption in the UK (Jerome Ravetz) "Invisible fences" pose risks for dogs from coyotes (Philipp Hanes) Computer problems with voting system (Danya Hooker via Dana A. Freiburger) eFax/J2 opens door to expensive Joe-jobbing (Dallman Ross) Fake E-Mail Topples Japan Opposition Party (Hans Greimel excerpt) phishing@private (Al Macintyre) Maplin gives "How To..." advice on Wireless Networks (Chris Leeson) Rootkit: erosion of terms? (Rob Slade) Error bounds on estimated probabilities (Jacob Palme) Re: Excel garbles microarray experiment data (Przemek Klosowski) Re: It's now a crime to delete files (Crispin Cowan) Re: The Spider of Doom (Steve Summit) Re: The 2005 Helios B737 Crash - A test for Don Norman... (Martyn Thomas, Tom Watson, noone, Eric T. Ferguson) Re: Man is charged $4,334.33 for four burgers (Mark Feit) Abridged info on RISKS (comp.risks) ---------------------------------------------------------------------- Date: Fri, 31 Mar 2006 07:12:11 PST From: "Peter G. Neumann" <neumann@private> Subject: Three days of San Francisco BART upgrade crashes BART has been attempting a software upgrade to modernize the software controlling its rapid transit system. Unfortunately, computer system problems were responsible for a combination of system-wide slowdowns and shutdowns on three consecutive days (Monday/Tuesday/Wednesday), including during rush hours. The first two days' problems resulted from observed potential safety failures of the new software. The third day's problems resulted from an attempt to revert to a backup system -- which apparently overloaded a network switch, which crashed the computer system. The new supposedly self-correcting software had passed all of its tests on the previous Sunday, but evidently the testing was incomplete. (This upgrade is only part of what is thought to be a carefully phased multiyear modernization that is expected to take at least another five months.) [Sources: PGN-ed from an item in the *San Francisco Chronicle*, 30 Mar 2006, the *San Jose Mercury*, 30 Mar 2006 http://www.mercurynews.com/mld/mercurynews/14223072.htm and Computerworld, 31 Mar 2006.] ------------------------------ Date: Fri, 31 Mar 2006 21:22:23 -0500 From: "Carl G. Alphonce" <alphonce@private> Subject: Nashville airport X-ray baggage screeners offline: "software glitch" As reported at www.cnn.com/2006/TRAVEL/03/31/airport.security.ap/index.html: A software glitch knocked out the computerized X-ray machines at Nashville International Airport for five hours Friday, causing long lines and flight delays. No indication of what the glitch might be. The article goes on to state: David Beecroft, who oversees security operations at Nashville for the federal Transportation Security Administration, said all U.S. 
international airports were alerted because the company that supplies the software for the Smiths Heimann X-ray detectors also serves several other airports. But TSA spokeswoman Laura Uselding later said other airports were not notified because the situation in Nashville was an isolated event. The discrepancy could not be immediately resolved. There are two discrepancies as I see it. Was this an isolated event or one that affected screening machines at several airports? Were several airports alerted or not? Also some questions come to mind. If there was a problem, why would only *international* airports be alerted (shouldn't domestic baggage be screened with correctly functioning equipment)? Were these machines perhaps only used at intl airports (this isn't clear from the story)? Carl Alphonce, Dept of Computer Science and Engineering, University at Buffalo Buffalo, NY 14260-2000 www.cse.buffalo.edu/faculty/alphonce 716-645-3180 x115 ------------------------------ Date: Sun, 2 Apr 2006 11:59:43 +0100 From: Jerome Ravetz <jerome-ravetz@private> Subject: IT Corruption in the UK The 2 Apr 2006 issue of the Sunday Times has an article by the distinguished journalist Simon Jenkins, 'Desperate Dispatches from the banana republic of Great Britain'. There he lists a number of multi-million pound scams. I have told him by way of consolation that in the U.S.A. they run to multi-billions. Here was one item that he missed! In a recent *Private Eye* (#1154, 17 March 2005, p.4) we have the following item, starting: ``How appropriate that Mapeley, the company that does most of its business through secretive tax havens -- hotbeds of money laundering, terrorist financing and tax dodging -- should win the contract to manage the 70-odd `authentication by interview' centres at which the Passport Service will vet and biometrically test new applicants in the interests of national security.'' *The Eye* goes on to remind readers of Mapeley's questionable record as financial manipulators. For readers of RISKS, it is more interesting that the UK government has chosen this firm -- which at best has no experience whatever in the field -- to manage the introduction of a controversial, untried, rapidly developing and highly sensitive technology. In that anyone with the most elementary prudence would recognise that the ID card scheme will immediately attract hackers, criminals and terrorists to come in on the ground floor, this contract is evidence for the genuineness of the Blair government's commitment to security. Jerry Ravetz, 111 Victoria Road, Oxford OX2 7QG James Martin Institute for Science and Civilization, Oxford www.jerryravetz.co.uk +44 [0]1865 512247 ------------------------------ Date: Tue, 04 Apr 2006 17:03:50 +0000 From: "Philipp Hanes" <philipphanes@private> Subject: "Invisible fences" pose risks for dogs from coyotes The cited article concerns dogs that are given electrically activated collars to keep them in their virtually enclosed yards. Unfortunately, coyotes -- who obviously don't have the collars -- can easily enter the dogs' yards, and have reportedly been killing dogs. [PGN-ed; This situation is another variant on attempting to solve one problem without considering others, in this case considering the confinement problem without remembering the intrusion problem, which is sometimes seen in simple-minded computer security approaches -- as is its converse.]
http://www.pioneerlocal.com/cgi-bin/ppo-story/localnews/current/gl/03-30-06-876429.html [Moral: Bilateral perimeter (de)fences are more effective than border collars for border collies? PGN] ------------------------------ Date: Fri, 31 Mar 2006 07:09:52 -0600 From: "Dana A. Freiburger" <dafreiburger@private> Subject: Computer problems with voting system (Danya Hooker) Computer problems caused the University of Wisconsin-Madison Student Council to throw out online votes cast this week for campus offices, but retained votes cast for two referendums on the same ballot. The cause of the problem may have been a "little-used, multiple-name tool [that] has worked in prior elections but may have been corrupted by a database upgrade several months ago." The main risk appears to be the lack of testing of the voting system prior to the vote (along with no testing after a major software upgrade). The parallels with the world of voting machines are obvious: the voting system needs to be tested and certified BEFORE voting occurs. Six thousand votes for Student Council seats will be tossed out, but votes cast for two referendums will be counted, under a plan approved by the Associated Students of Madison on Wednesday night. [...] [Source: Students plan to toss council votes after glitch, Danya Hooker, dhooker@private, *Wisconsin State Journal*, 31 Mar 2006] http://www.madison.com/wsj/home/local/index.php?ntid=78393 ------------------------------ Date: Thu, 30 Mar 2006 02:24:02 +0200 From: "Dallman Ross" <dman@private> Subject: eFax/J2 opens door to expensive Joe-jobbing eFax, which is owned by j2.com (a.k.a. "jFax"), recently sent me a member e-mail containing the following text: > eFax Tip for Easier Faxing > How to send a fax by e-mail > 1. Open a new e-mail. > 2. Add fax number (including country code) to > "@efaxsend.com". > 3. Attach the document you wish to fax. > (supported file types > <http://mx3.efax.com/redir3/zYEGTw_CD!https://www.efax.com/en/ > efax/twa/page/supportedFileTypesPopup> ) > 4. Send e-mail. We'll convert the attachment and fax it for you. > > You'll receive a confirmation e-mail once the fax has been delivered. > > Example > 1. You want to send a fax to London (UK's country code: 44) > 2. Fax number is (0) 20 7555 1234 > 3. You would type: 442075551234@private Sounds neat, you say? I thought so too, for a second or two. Then it dawned on me: how will they know whom to bill? The answer seems to be that they bill member accounts based simply on the From-address! That is, if Mr. Joe-Jobber with a nit to pick against you knows that you have an eFax or j2 account and knows or guesses the e-mail address you use with that service, he can send a spate of bogus (or real) faxes using your address and clear your account or bank balance! One acquaintance of mine tested this with his j2 account. No prior registration for this service was required. He simply e-mailed as above, and his account was debited 10 cents and the fax was sent. There does not seem to be any way to disable e-mail addresses from this service, for anyone with an eFax or j2 account. One can, of course, change the registered e-mail address used with the service, however. What an accident waiting to happen this is, all in the name of "convenience"! Dallman Ross http://vsnag.spamless.us/ - plug-in for procmail ------------------------------ Date: Sun, 2 Apr 2006 12:02:49 PDT From: "Peter G.
Neumann" <neumann@private> Subject: Fake E-Mail Topples Japan Opposition Party Japan's opposition party suffered a fresh humiliation Friday when its leadership resigned en masse over a fake e-mail scandal, handing Prime Minister Junichiro Koizumi an uncontested grip on power in his last six months in office. ... Party leader Seiji Maehara and his lieutenants stepped down after the party's credibility was torpedoed by one of its own lawmakers, who used a fraudulent e-mail in an apparent attempt to discredit Koizumi's ruling Liberal Democratic Party. [Source: Hans Greimel, Associated Press, 31 Mar 2006; PGN-excerpted. TNX to Lauren Weinstein for noting this one.] http://www.newsday.com/news/nationworld/wire/sns-ap-japan-politics,0,191417.story?coll=sns-ap-nationworld-headlines ------------------------------ Date: Tue, 28 Mar 2006 01:30:24 -0600 From: Al Macintyre <macwheel99@private> Subject: phishing@private phishing@private is an e-mail address at the IRS to which we can forward e-mails that we think fraudulently claim to come from the IRS. Example scams include claims that a tax refund is owed to you. http://www.fcw.com/article92749-03-27-06-Web http://en.wikipedia.org/wiki/User:AlMac http://www.ryze.com/go/Al9Mac ------------------------------ Date: Mon, 3 Apr 2006 14:20:04 +0100 From: "LEESON, Chris" <chris.leeson@private> Subject: Maplin gives "How To..." advice on Wireless Networks I dropped in to one of my local Maplin (www.maplin.co.uk) stores today. While standing in the queue, I noticed a leaflet entitled "How to...Create a wireless network". It was full of useful information (the 5 most important advantages of a wireless network are, apparently, no messy cables; no need to drill holes; simple to expand for more users; the ultimate freedom - Internet anywhere in your house or garden; no need to open up your PC to install hardware). Alas, absolutely nothing about the risks of a wireless network, and nothing on how to secure one. Bear in mind that this is supposedly a "how to...", not an advert. They did, however, offer a link (www.maplin.co.uk/wireless) with more information. When I got back to the office I tried the link, hoping for a more complete set of instructions dealing with the issues - after all, there is a limit to what can be put on an A4 sheet. The further information consisted of the phrase "DETAILS TO FOLLOW...". I waited for a few minutes just in case there was a flash animation loading or a page redirect being especially slow. Then I checked the page source. No flash, no redirect - they just hadn't uploaded the page. Risks? 1. Pushing technology without due regard to security (a common topic in RISKS, alas). 2. Publishing literature with web links in it, but not having them ready when the literature is released, does not reflect well on a company. ------------------------------ Date: Tue, 04 Apr 2006 12:57:53 -0800 From: Rob Slade <rMslade@private> Subject: Rootkit: erosion of terms? Wearing my "glossary guy" hat, one of the things I've noticed is how difficult it is to come to complete agreement on the precise definition of many terms that are used in infosec. There are, for example, three quite distinct meanings for the term "tar pit." (And that's in terms of networking alone.) (It is highly unlikely that we will ever be able to reduce the number of tar pit definitions to one: all the definitions came at about the same time, and all are important and equally valid.)
However, what really irks me is when defined and agreed upon terms start being misused, sometimes to the point where the original term becomes useless. There is, of course, "hacker." (And I've given Hal a diatribe about "zero day" which will probably be coming out in the next ISMH.) The latest endangered term seems to be "rootkit." A rootkit has been defined as programming that allows escalation of privilege or the option to re-enter the compromised system with greater ease in the future. Often rootkits also contain functions that prevent detection of, or recovery from, the compromise. Starting with the recent Sony "digital rights management" debacle, the general media now seems to be using "rootkit" to refer to any programming that hides any form of information on a system, and specifically any functions that impede the detection of malware. The latest reports are that Bagle and other malware/virus families now contain "rootkits." Antidetection features in viruses are nothing new: there was a form of tunneling stealth implemented in the Brain virus 20 years ago. Therefore, to use the term rootkit to refer to this activity can only degrade the value of the term. It has been difficult to ensure that infosec specialists can at least talk to each other and exchange useful information. However, this may not last much longer if our "precious verbal essences" become contaminated. rslade@private slade@private rslade@private ------------------------------ Date: Sun, 2 Apr 2006 11:17:40 +0200 From: Jacob Palme <jpalme@private> Subject: Error bounds on estimated probabilities Martyn Thomas (RISKS-24.19) asks about the use of error bounds on estimated probabilities. There is actually one researcher, Love Ekenberg, who has built a theory of error bounds on probability estimates, and also developed software to help in evaluating different alternatives where such error bounds are used. His software is described at http://www.preference.nu/site_en/index.php Jacob Palme <jpalme@private> (Stockholm University and KTH) URL: http://www.dsv.su.se/jpalme/ ------------------------------ Date: Fri, 24 Mar 2006 22:57:58 -0500 From: Przemek Klosowski <przemek@private> Subject: Re: Excel garbles microarray experiment data (Malcolm, RISKS-24.21) While working on a joint UK / German product development we discovered that the 'standard' separator employed in many German CSV files is the semi-colon ';' - I do not know why. Probably because Germans use 'decimal comma' instead of 'decimal point' between the integer and fractional parts of a floating point number, thus interfering with the use of comma in CSV files. The period is used for grouping of digits, i.e. every three digits. [Also noted by George M. Sigut. This is a very old problem that has been noted in RISKS on numerous occasions. It keeps recurring. PGN] ------------------------------ Date: Wed, 29 Mar 2006 11:47:19 -0800 From: Crispin Cowan <crispin@private> Subject: Re: It's now a crime to delete files (Peterson, RISKS-24.20) > The 7th Circuit made two remarkable leaps. First, the judges said that > deleting files from a laptop counts as ``damage''. Second, they ruled that > Citrin's implicit ``authorization'' evaporated when he (again, allegedly) > chose to go into business for himself and violate his employment contract. This actually makes perfect sense to me, on both counts. File deletion is damage, and both the laptop and the data seem to have been the property of IAC at the time that he chose to destroy the data. 
Imagine sacking a developer, who then deletes all the source code he has written during his employment before leaving the building. Such data vandalism is justifiable only if you also plan to return all wages paid during employment, and even then the employer should have the choice. Moreover, depending on the terms of the employment contract, Citrin may not even have had a right to a copy of the data for himself. Crispin Cowan, Ph.D. http://crispincowan.com/~crispin/ ------------------------------ Date: Sat, 01 Apr 2006 17:24:22 -0500 From: Steve Summit <scs@private> Subject: Re: The Spider of Doom (RISKS 24.22) "one particularly troublesome external IP had gone in and deleted *all* of the content on the system [...] googlebot.com, Google's very own web crawling spider. ... the CMS authentication subsystem didn't take into account the sophisticated hacking techniques of Google's spider." I can see Joe Loughry's tongue in his cheek pretty clearly from here, but it might not be obvious to a casual reader that this was manifestly *not* a "hacking" attempt by Google. That a simple and naive traversal of some hyperlinks could cause content to be deleted makes it pretty obvious that something was badly wrong with the site's editing and access-control model. Needless to say (or, it *ought* to be needless, but is actually pretty needful), security that assumes that visitors *will* have cookies and JavaScript enabled, and that can be compromised if these features are disabled, is no security at all. That content could have been inadvertently deleted by any visitor to the vulnerable website; Google's spider just happened to get to it all first. ------------------------------ Date: Sat, 1 Apr 2006 15:25:08 +0100 From: "Martyn Thomas" <martyn@thomas-associates.co.uk> Subject: RE: The 2005 Helios B737 Crash ... (RISKS-24.22) Why is it necessary for the cabin pressurisation switch to have the property that it is possible to leave it set to manual, accidentally, on takeoff? Wouldn't a thorough hazard analysis reveal the risk, and normal systems engineering (reducing risk ALARP) eliminate it? Surely it would be safer if all settings defaulted to normal on start-up, and required explicit setting to hazardous positions. Are there other such known traps on commercial aircraft? ------------------------------ Date: Mon, 3 Apr 2006 14:42:26 -0700 From: "Tom Watson" <t_wtom@private> Subject: Re: The 2005 Helios B737 Crash - A test for Don Norman... (R-24:22) While we all speculate on the cause of the error, if the pilots were in communication with the engineering department of the airline, it seems that the ground people might want to say to the flight deck crew something like "Oxygen Now!!" and then diagnose the problem. If the flight deck people had oxygen (they were by recollection communicating with the ground people), this might have brought them to their senses so they could solve the problem. I guess it stems from the "If you are up to your ass in Alligators, you forget that the original objective was to drain the swamp!" syndrome. Sometimes you drain the swamp before dispatching the alligators, and suffer the consequences.
-- Tom Watson Generic Signature t_wtom@private (I'm at work now) ------------------------------ Date: Mon, 03 Apr 2006 15:53:03 -0400 From: noone <noone@private> Subject: The 2005 Helios B737 Crash (Re: RISKS-24.22) One aspect of the Helios B737 crash not discussed by either Peter Ladkin or Don Norman (RISKS-24.22) is the relatively short interval available for the crew to discern the problem and take corrective action. According to a published accident report (http://aviation-safety.net/database/record.php?id=20050814-0) the airplane climbed from Larnaca airport, at sea level, to 34,000 feet in approximately 19 minutes. That means an average rate of climb of nearly 1,800 feet per minute (FPM). Normally jets climb rapidly from takeoff to 10,000 feet in order to maintain a speed of 250 knots or less, then enter a cruise climb with higher airspeed and better fuel economy, although air traffic control can require deviations. I was unable to find any more detailed information about the climb profile of this particular flight. From the time the cabin altitude alert went off at 12,000 feet (http://www.ntsb.gov/ntsb/brief2.asp?ev_id=20050825X01309&ntsbno=DCA05RA092&akey=1) it would take just 6 minutes and 40 seconds to reach 24,000 feet if the climb rate was 1,800 FPM (a rough sketch of this timeline arithmetic appears below). The Time of Useful Consciousness (TUC) at 24,000 feet is at most 3 minutes (http://www.smartcockpit.com/operations/Surviving%20Cabin%20Decompression.pdf, http://www.aviationmedicine.co.za/AM_S_Hypoxia.php). Victims of hypoxia may be conscious after the TUC but are incapable of taking proper corrective and protective action, even when instructed or coached. The TUC is further reduced by the rate of change of altitude, by increased mental activity, such as pilot workload in an emergency, and by exercise, such as struggling out of a cramped cockpit seat to check circuit breakers. Apparently the captain and co-pilot did not don their oxygen masks when the cabin altitude alarm sounded. The "Surviving Cabin Decompression" document discusses incidents where pilots alerted by an explosive or rapid decompression lost consciousness after brief delays in donning their masks. These events are announced by unmistakable cues such as a loud bang and/or mist forming in the cabin. The slower loss of pressure as the accident airplane climbed appears to have allowed hypoxia to develop without these cues. It would probably take a few minutes after an initial radio call to base to get qualified engineering assistance to the microphone. The fact that neither flight crew member responded to the Helios engineering department instruction/question regarding the status of the pressurization panel indicates that hypoxia was probably far advanced by the time the instruction was given. In the US, Federal Aviation Regulations require that whenever the airplane is above 25,000 feet, if one pilot leaves the controls, the other must don an oxygen mask. The rules may be different in Greek airspace. There are reports (e.g., http://www.airlinesafety.com/editorials/737CrashInGreece.htm) that this regulation is not always strictly observed. Had the Helios co-pilot followed this rule when the captain left his seat, this accident would have been prevented, regardless of the crew's apparent misunderstanding of the cabin altitude alarm. Even if he was so trained, his hypoxia may have prevented him from performing normally. Most depressurization incidents are resolved without fatal crashes.
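For readers who want the timeline arithmetic above spelled out, here is a minimal illustrative sketch (in Python; not taken from any of the cited reports -- the constant average climb rate and the 3-minute TUC bound are simply the figures quoted above, used as simplifying assumptions):

  # Rough timeline from the cabin-altitude alert to the altitude quoted for TUC.
  # Figures are the ones cited above; a constant average climb rate is a
  # simplifying assumption -- real climb profiles vary with phase of flight.
  alert_altitude_ft = 12_000      # cabin altitude alert sounded (NTSB brief)
  reference_altitude_ft = 24_000  # altitude for the quoted TUC figure
  avg_climb_rate_fpm = 1_800      # ~34,000 ft in ~19 minutes, per the accident report
  tuc_at_24000_ft_min = 3.0       # "at most 3 minutes", per the decompression guide

  minutes_to_reference = (reference_altitude_ft - alert_altitude_ft) / avg_climb_rate_fpm
  print(f"Alert to 24,000 ft: about {minutes_to_reference:.2f} min")  # ~6.67 min = 6 min 40 s
  print(f"Useful consciousness once there: at most {tuc_at_24000_ft_min:.0f} min, "
        "less under workload or exertion")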
This and other depressurization accidents demonstrate how little time is available for even fully trained, alert, and competent flight crew to don their oxygen masks and prevent a tragedy. The uniqueness of the warning signal may or may not have played a role in this case; the short interval between the first indication of trouble and complete incapacitation of the crew certainly did so. ------------------------------ Date: Sun, 02 Apr 2006 09:42:11 +0200 From: "Eric Ferguson" <e.ferguson@private> Subject: Re: Helios B737 Crash (RISKS-24.22) I am astonished at the underlying safety concept. It is obvious that climbing with no pressurization and no (crew and passenger) oxygen is fatal. Then why just issue a "warning"? I would propose this basic safety concept: before the system will allow itself to move into dangerous situations, the pilot must confirm that he is aware of the specific danger involved. Implementation for this case: well before attaining a dangerously low cabin pressure, the autopilot refuses to allow further climbing (even manual) until the pilots override this barrier by confirming explicitly "we have donned oxygen". The same system would -- in case of high altitude depressurization -- initiate an automatic rapid descent until the pilots override it with the same confirmation. Dr.ir. Eric T. Ferguson, Consultant for Energy and Development, van Reenenweg 3, 3702 SB ZEIST Netherlands +31 30-2673638 e.ferguson@private ------------------------------ Date: Sat, 1 Apr 2006 09:13:00 -0500 From: Mark Feit <mfeit@private> Subject: Re: Man is charged $4,334.33 for four burgers (RISKS-24.23) Credit cards are relatively new things for fast food restaurants. Just about every one I've set foot in recently hasn't upgraded its point-of-sale systems to integrate them beyond adding a "paid by credit" button so there won't be cash expected in the till. Card transactions are being handled separately by VeriFone or similar countertop terminals which have no idea whether you're selling French fries or Ferraris. Transactions at countertop terminals do have a bounds check, but it happens at the wrong point in the transaction. The customer receipt and store copy are printed *after* the charge has been committed to the clearing house, leaving the cardholder with no way to approve the amount. (Even restaurants, which have an extra step where you add a gratuity, have this problem, because the final figure is still un-verified by the customer.) Even if the customer refuses to consummate the transaction by signing, it's still a done deal and the only recourse for correcting it is to take it up with the bank. I suspect that's what happened in this case, and it's a very good reason to use a real credit card instead of a debit card. ------------------------------ Date: 2 Oct 2005 (LAST-MODIFIED) From: RISKS-request@private Subject: Abridged info on RISKS (comp.risks) The ACM RISKS Forum is a MODERATED digest, with Usenet equivalent comp.risks. => SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) if possible and convenient for you. The mailman web interface can be used directly to subscribe and unsubscribe: http://lists.csl.sri.com/mailman/listinfo/risks Alternatively, to subscribe or unsubscribe via e-mail to mailman your FROM: address, send a message to risks-request@private containing only the one-word text subscribe or unsubscribe. You may also specify a different receiving address: subscribe address= ... .
You may short-circuit that process by sending directly to either risks-subscribe@private or risks-unsubscribe@private depending on which action is to be taken. Subscription and unsubscription requests require that you reply to a confirmation message sent to the subscribing mail address. Instructions are included in the confirmation message. Each issue of RISKS that you receive contains information on how to post, unsubscribe, etc. => The complete INFO file (submissions, default disclaimers, archive sites, copyright policy, etc.) is online. <http://www.CSL.sri.com/risksinfo.html> The full info file may appear now and then in RISKS issues. *** Contributors are assumed to have read the full info file for guidelines. => .UK users should contact <Lindsay.Marshall@private>. => SPAM challenge-responses will not be honored. Instead, use an alternative address from which you NEVER send mail! => SUBMISSIONS: to risks@private with meaningful SUBJECT: line. *** NOTE: Including the string "notsp" at the beginning or end of the subject *** line will be very helpful in separating real contributions from spam. *** This attention-string may change, so watch this space now and then. => ARCHIVES: ftp://ftp.sri.com/risks [subdirectory i for earlier volume i] <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue. Lindsay has also added to the Newcastle catless site a palmtop version of the most recent RISKS issue and a WAP version that works for many but not all telephones: http://catless.ncl.ac.uk/w/r <http://the.wiretapped.net/security/info/textfiles/risks-digest/> . ==> PGN's comprehensive historical Illustrative Risks summary of one liners: <http://www.csl.sri.com/illustrative.html> for browsing, <http://www.csl.sri.com/illustrative.pdf> or .ps for printing ------------------------------ End of RISKS-FORUM Digest 24.23 ************************