[RISKS] Risks Digest 25.31

From: RISKS List Owner <risko_at_private>
Date: Wed, 10 Sep 2008 9:35:33 PDT
RISKS-LIST: Risks-Forum Digest Wednesday 10 September 2008 Volume 25 : Issue 31

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/25.31.html>
The current issue can be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
FAA redundancy -- or the lack thereof (Tessler and Robertson via PGN)
Corrupt File Brought Down Flight Planning System (Gabe Goldberg)
UK software upgrade issues (John Sawyer)
JPMorgan Chase: The Bank Account That Sprang a Leak (Monty Solomon)
Software problems affect the bottom line at J. Crew (Steven M. Bellovin)
Google ads and language (Erling Kristiansen)
Worditudinality (Rob Slade)
Control-C vs. Bourne-Again SHell (jidanni)
Control-C Control-C vs. gnus (jidanni)
Risks of better security and "smarter" users (Ron Garret)
BNY Mellon Data Breach Potentially Massive (George Hulme via Monty Solomon)
Student hacker exposes Carleton U cash, ID card security holes
  (Sergei Patchkovski)
Whit Diffie and Susan Landau: Internet Eavesdropping (Randall Webmail)
US .gov website asks for personal info without https protection
  (Jonathan Thornburg)
Re: Germany's New Unified Tax Identification Codes (Kevin Pfeiffer)
Re: Firefox 3's Step Backwards ... (Dimitri Maziuk)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sat, 30 Aug 2008 15:12:08 PDT
From: "Peter G. Neumann" <neumann_at_private>
Subject: FAA redundancy -- or the lack thereof

The FAA computer problem on 26 Aug 2008 (RISKS-25.30) "served as a reminder
that the U.S. flight system is waiting for a modernizing overhaul."  The FAA
is apparently using "computing practices that would be considered poor in
credit card networks or power plant operators" -- relying on only the Atlanta
and Salt Lake City centers for flight planning.  For example, power and
water utilities can be fined a million dollars a day if they are willfully
negligent.  [Source: Joelle Tessler and Jordan Robertson, FAA outage reveals
odd computing practices, AP item, 29 Aug 2008; PGN-ed]
http://www.washingtonpost.com/wp-dyn/content/article/2008/08/29/AR2008082902088.html?hpid=sec-tech

------------------------------

Date: Fri, 29 Aug 2008 14:17:58 -0400
From: Gabe Goldberg <gabe_at_private>
Subject: Corrupt File Brought Down Flight Planning System

Strike 1: A corrupt file wasn't caught by validation?

Strike 2: It took 2 1/2 hours to restart after the failure?

Strike 3: The "backup" computer couldn't handle the failover load?

Strike 4: The restored-to-service computer couldn't clear the accumulated
backlog until new transactions were suppressed?

 - - - - -

Corrupt File Brought Down Flight Planning System

A corrupt file contained in a normal software upload brought down the FAA's
main flight planning computer on Tuesday, delaying hundreds of flights and
prompting questions about the inevitability of it happening again.  FAA
spokesman Paul Takemoto told eWeek the corrupt file stopped flight plans from
being filed at the FAA's Hampton, Ga. facility, which is the principal flight
planning computer.  "Basically, all the flight plans that were in the system
were kicked out," Takemoto said.  "For aircraft already in the air, or had
just been pushed back from the gate, they had no problems.  But for all other
aircraft, it meant delays."

The system switched to the FAA's backup flight planning computer in Salt
Lake City, which was quickly overwhelmed by airlines trying in vain to enter
flight plans. "They just kept hitting the 'Enter' button. So the queues
immediately became huge," Takemoto said. "On top of that, it happened right
during a peak time as traffic was building. Salt Lake City just couldn't
keep up." The Georgia computer was fixed in two-and-a-half hours but it
wasn't until the FAA asked airlines to stop filing flight plans that the
backlogs started to clear. All was reported normal on Wednesday but eWeek is
openly wondering how much longer "a creaky old IT system" can
continue.  The system is more than 20 years old and the company that built
it has been out of business most of that time, eWeek reported.
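
Strike 1 asks why the corrupt file was not caught by validation.  A minimal
sketch of the kind of integrity gate that could precede such an upload (the
file and command names here are hypothetical, not the FAA's):

  $ sha256sum flight_plan_update.dat > flight_plan_update.dat.sha256
    # checksum published alongside the upload by the sender
  $ sha256sum -c flight_plan_update.dat.sha256 && apply_update flight_plan_update.dat
    # apply_update runs only if the checksum verifies

A checksum only catches corruption in transit or storage; data that is
corrupt at the source still needs semantic validation before it is loaded.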

------------------------------

Date: Sat, 30 Aug 2008 01:33:07 +0100
From: John Sawyer <jpgsawyer_at_private>
Subject: UK software upgrade issues

Yet another example of the UK government not testing software properly
before upgrading!

http://news.bbc.co.uk/1/hi/england/cambridgeshire/7588551.stm

They want us to trust them with more information?

Dr John Sawyer, Wiltshire England.

------------------------------

Date: Sun, 31 Aug 2008 10:04:21 -0400
From: Monty Solomon <monty_at_private>
Subject: JPMorgan Chase: The Bank Account That Sprang a Leak

Surely customers of the elite private banking operation at JPMorgan Chase,
serving only the bank's wealthiest clients, are safe from such problems,
right?  Wrong, says Guy Wyser-Pratte, an activist investor on Wall Street
for more than 40 years who uses his hedge fund's war chest of roughly $500
million to wage takeover fights and proxy battles in the United States and
Europe.  In May, he learned that someone had siphoned nearly $300,000 from
his personal account at the private bank through many small electronic
transfers over a 15-month period.  Then he was told by the bank that he
could stop the theft only by closing his account and opening a new one.  And
then JPMorgan Chase told him that the bank would cover only $50,000 of his
losses. ...  [Source: Diana B. Henriques, *The New York Times*, 30 Aug 2008;
PGN-ed]

http://www.nytimes.com/2008/08/30/business/yourmoney/30theft.html?partner=rssuserland&emc=rss&pagewanted=all

------------------------------

Date: Sat, 30 Aug 2008 10:31:22 -0400
From: "Steven M. Bellovin" <smb_at_private>
Subject: Software problems affect the bottom line at J. Crew

According to the Wall Street Journal's Business Technology blog, software
problems from a "botched system upgrade" caused earnings for the third
quarter to drop by 12% from a year earlier.  Problems included outages,
performance, botched orders, return problems, call center issues, and more.
http://blogs.wsj.com/biztech/2008/08/27/j-crew-blames-software-for-its-bad-quarter/

Steve Bellovin, http://www.cs.columbia.edu/~smb

------------------------------

Date: Wed, 03 Sep 2008 20:09:26 +0200
From: Erling Kristiansen <erling.kristiansen_at_private>
Subject: Google ads and language

Googling something in one language while accessing from another language
environment can give rather amusing ads.

I looked for "SOA SWIM" (SOA = Service oriented architecture; SWIM = System
wide information management. Both terms are related to network
architecture).  The first half-dozen hits were spot on, and gave me what I
was looking for.

But the ads somehow caught my eye: Cures for chlamydia.
Explanation: SOA is the Dutch abbreviation for sexually transmitted disease.

So it seems that, while the search engine tries to match as many search
terms as possible, the ads go for single words.

At first, I had a good laugh. But, thinking a bit more about what happened,
what if somebody was trying to make a profile of me based on single words?

Some time ago (actually, quite a long time ago), I was googling for "EROS
data centre", a professional repository of Earth resource imagery.

Put EROS and SOA together, and you might get the wrong idea of what I had
been up to, and of what the consequences might be.

------------------------------

Date: Thu, 4 Sep 2008 18:25:35 -0800
From: Rob Slade <rMslade_at_private>
Subject: Worditudinality

Go look up the term rootkit on Wikipedia.  (Go ahead, I'll wait.)  Lovely
entry, isn't it?  Lots of information.  Trouble is, there's lots of
misinformation, too.

A rootkit is *not* "a program ... designed to take fundamental [or] ...
`root' access" for a system.  It's designed to *keep* that access, once you
broken into the system and grabbed it.  (And rootkits were around before
1990, etc, but we'll let that go for the moment.)

Or, at least, it used to be defined that way.  Recently, all kinds of people
have been redefining what rootkit means, to the point that it may no longer
mean anything.

Wikipedia is a wonderful tool, and the English encyclopedia made with it is
a wonderful resource.  For the most part.  But when you get to the real
specialty areas you start running into problems.  As John Lawton has pointed
out, the irony of the information age is that it has given new
respectability to uninformed opinion.  And Wikipedia is susceptible to that
problem.

Now the Wikipedia people are aware of the problem, and have provided ways to
address it.  There is the fact that anyone can correct errors, when errors
have been made.  There are technical controls in terms of limits on changes.
There are administrative controls in the granting of elevated privileges to
editors.  But occasionally you get a breakdown, such as the fact that an
editor can be, him or herself, in error.  And then you get entries like the
one for rootkit.

But Wikipedia is not what I really want to talk about.  I want to talk about
words.  Specifically, the jargon that we use, and create, in technical
fields, and in the field of information security in particular.  Because
language is kind of like a giant Wikipedia, where anyone at all can make an
entry.  And anyone at all can try and modify that entry.

Lots of people like to talk about computer security.  It's quite likely that
more people like to talk about security than actually *do* anything about
security.  So it's not hard to see that a lot of the people who are talking,
and writing, about security often talk about things that, well, they are not
quite certain about.

If I say that Alan Turing was a homosexual, I might be right, or I might be
wrong.  But it would be fairly easy to check whether I was right or wrong.
However, if I say that a Turing Machine is a universal computer because it
can be implemented on any computer, I am making a different kind of
assertion, and one that is harder to check.  Someone who hears me say that,
and knows that I'm wrong, might not challenge it immediately, because it's
partly right, and the error I've made may not be important to the point that
I'm making.  But the people who hear me make that statement, and who do not
know why the statement is in error, are probably going to assume and
generate various kinds of mistaken ideas about Turing machines.  And if I
make the statement frequently enough, and in enough different places, it
starts being taken as true.  And eventually we'll have people saying that a
universal computer is any entity that can be implemented on any platform.
Which had nothing at all to do with what Turing was doing and proving.

So it is with a number of the specialized terms that we have been using in
infosec.  A lot of people are getting hold of them, and using them in sloppy
ways.  Now, a great many people say that language is living, and you have to
make allowances for that growth.  Fair enough: much of the vocabulary that
we use every day in computer security didn't even exist fifty years ago, so
it would be hard to argue the point.  However, if the terms can be changed
by anyone, at any time, then they lose meaning.  If I use the word virus to
mean one thing, and you use it to mean something quite different, then we
aren't going to come to any agreement.  We can't communicate.  And, in all
of these rapidly changing technical fields, communication is vitally
important.

So, in the blort, I just want to regrify you to smetnicate all forms of
antrifact.

Yelth you for your fesculiant.

victoria.tc.ca/techrev/rms.htm blogs.securiteam.com/index.php/archives/author/p1/

------------------------------

Date: Mon, 01 Sep 2008 02:04:01 +0800
From: jidanni_at_private
Subject: Control-C vs. Bourne-Again SHell

Naturally I only use orthodox Free Software, like bash, the GNU
Bourne-Again SHell, to control my household projects.
$ sleep 55; launch_rocket
The problem is that if one discovers a missing O-ring etc., then a
Control-C interrupt will not cancel the whole launch as it does in
other leading brand SHells, but instead just cancels the countdown --
VROOM.  Next time use an && operator instead of a ;.
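
A minimal sketch of the difference (launch_rocket is of course the author's
hypothetical command, not a real program):

  $ sleep 55; launch_rocket
    # Control-C interrupts only the sleep; bash then runs launch_rocket anyway
  $ sleep 55 && launch_rocket
    # the interrupted sleep exits non-zero (130), so && never runs launch_rocket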

------------------------------

Date: Mon, 01 Sep 2008 02:52:22 +0800
From: jidanni_at_private
Subject: Control-C Control-C vs. gnus

We continue our Control-C adventures with gnus.
As you know I only use the highest pedigree Free Software,
Stallman->emacs->gnus, wherein lies
  C-c C-c runs the command message-send-and-exit
I told them 13 times this was too easy to hit by accident,
and rigged up my own Child Safety Cap macro.
Well, just the other day I was attempting to hit
  C-c C-f C-c runs the command message-goto-cc
to merely add somebody important to the CC: header,
http://news.gmane.org/group/gmane.emacs.gnus.general/thread=67308
but guess which key I pressed too lightly? Imagine me sending half
baked messages out the door before they are complet

------------------------------

Date: Tue, 19 Aug 2008 11:28:19 -0700
From: Ron Garret <ron_at_private>
Subject: Risks of better security and "smarter" users

The other day I visited the site of a well known issuer of SSL certificates
to look up some information in my account.  I was shocked to realize that I
was able to access this information without going through a login procedure.
All I did was click the "login" button, and my account information came up.
This behavior persisted across a browser restart, and even a machine reboot.

I was shocked.  Here's a company whose business is security, but their site
(apparently) issues login cookies that don't expire!  Worse, there didn't
seem to be a "logout" button!

(Spoiler alert: it makes an interesting exercise to see if you can figure
out how this happened, other than that the web site designers were idiots.
Here's a hint: I use a Mac.)

Most "secure" (I've been reading RISKS far too long not to put that in scare
quotes) web sites follow a common motif: there's a login page where you type
your user name and password into an HTML form.  That information gets sent
to the server, which verifies your credentials and issues a session cookie.
After you've done your business you log out, which either removes the
session cookie from your browser or invalidates it at the server.  Usually
the session cookie expires after some period of inactivity.

But there is another method of authentication on the Web: HTTP
authentication.  This is the kind of authentication that makes a browser
dialog pop up to ask for your user name and password rather than having you
type them into an HTML form.  There are different kinds of HTTP
authentication.  The most common one is "basic" authentication, because it's
the easiest to set up.  It is also fairly insecure because it sends passwords
in the clear (usually with an accompanying warning in the browser dialog).
Because of this, HTTP authentication is generally frowned upon for "serious"
security,
despite the fact that there are variants that are more secure than "basic"
authentication.
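
For illustration, the difference between the two is easy to see on the wire
with curl (the host and credentials here are made up):

  $ curl -v --basic -u alice:secret https://example.com/account
    > Authorization: Basic YWxpY2U6c2VjcmV0
      (just base64 of "alice:secret", trivially reversible)
  $ curl -v --digest -u alice:secret https://example.com/account
    > Authorization: Digest username="alice", realm="...", nonce="...", response="..."
      (a hash computed from the password and a server-supplied nonce; the
       password itself never crosses the wire)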

The site in question was using one of these more advanced HTTP
authentication schemes.  The first time I ever logged into the site, the
login dialog popped up and, without really thinking about it, I marked the
checkbox next to "remember this username and password in my keychain."

Now, this alone should not have produced the behavior that I saw because
normally in order to access the OS X keychain an application has to ask
permission, and my browser wasn't asking.  But it turns out that the OS X
keychain has a handy-dandy convenience "feature" that allows you to
permanently grant access to a particular keychain item to a particular
application, and Safari had "helpfully" added itself to this list when it
created the keychain item.

So here we have a security risk that is a confluence of three circumstances,
two of which are the result, arguably, of too much knowledge.  They are:

1. The web site used a secure authentication scheme that behaves almost
   identically to a less secure scheme

2. I am familiar with the more common design of secure sites and

3. OS X and Safari conspire to subvert the security of HTTP authentication
   in a very subtle way in order to make things more convenient for the user

I find myself at a loss to suggest how this particular risk might have been
avoided.

------------------------------

Date: Sat, 30 Aug 2008 01:24:54 -0400
From: Monty Solomon <monty_at_private>
Subject: BNY Mellon Data Breach Potentially Massive (George Hulme)

http://www.informationweek.com/blog/main/archives/2008/08/bny_mellon_data.html

BNY Mellon Data Breach Potentially Massive

Posted by George Hulme, Aug 29, 2008 10:09 PM

It was in May when we noted an investigation launched by the authorities in
the state of Connecticut into a backup tape lost by the Bank of New York
Mellon. The results of that investigation are in, and they don't look good.

First, some background (which is available in my earlier post, here).  A set
of 10 unencrypted backup tapes with millions of customers' information had
gone missing on Feb. 27, and the Connecticut authorities wanted to know
more, as there were up to a half-million Connecticut residents whose private
information was placed at risk.

Here's what those (unencrypted) tapes contained, according to Attorney
General Blumenthal's letter:

BNY representatives informed my office that the information on the tapes
contained, at a minimum, Social Security numbers, names and addresses, and
possibly bank account numbers and balances.

That's just great, isn't it.

At first, we thought there were 4 million people whose private financial
information was on those tapes; turns out now that there could be up to 10
million. Here's what Connecticut Gov. M. Jodi Rell has to say in a statement
released yesterday:

"It is simply outrageous that this mountain of information was not better
protected and it is equally outrageous that we are hearing about a possible
six million additional individuals and businesses six months after the
fact," Governor Rell said. "We fear a substantial number of Connecticut
residents are among this latest group."

I couldn't agree more. There is absolutely no acceptable excuse as to why
this information was not encrypted on these tapes. None.
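
For what it's worth, encrypting a backup stream before it ever reaches a
tape was already a one-line job in 2008.  A hedged sketch with made-up
paths, not BNY Mellon's actual tooling:

  $ tar cz /srv/customer-data | gpg --symmetric --cipher-algo AES256 \
      -o /backup/customers.tar.gz.gpg
  $ # to restore:  gpg -d /backup/customers.tar.gz.gpg | tar xz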

BNY Mellon has set up a Web site for those who may have been affected
by this incident.

------------------------------

Date: Tue, 09 Sep 2008 12:09:33 +0000
From: Sergei Patchkovski <serguei.patchkovskii_at_private>
Subject: Student hacker exposes Carleton U cash, ID card security holes

On 9 Sep 2008, CBC News carried a report of a security breach of the
Carleton University student ID cards. The Ottawa-based university issues the
barcode and magnetic stripe-equipped cards to the students. The cards can be
used to access on-campus buildings (including some of the residences), pay
for services on-campus, and access university e-mail systems. According to
the news report, a student (it is not clear from the report whether this was
a student at Carleton, or at the Ottawa U - the other large university in
the area) has compromised the security of the card by writing a piece of
software "in a few hours" and installing it on a computer lab terminal. The
attacker was able to collect the e-mail login credentials of at least 32
Carleton students. He/she then proceeded to report the breach to the victims
and the university authorities, under an alias "Kasper Holmberg". In the
report, he/she suggested that the system in its present form lacks the most
basic safeguards against misuse, and should be suspended. University
authorities have issued new ID cards to the affected students, and assured
campus ID card users that "the campus e-mail system and campus card network
are safe".  The university is further considering calling in the police and
charging "Kasper Holmberg" criminally for taking "a very odd way to draw
attention to the security of the system", according to the university
spokesperson, Christopher Walters.

The complete news story can be found at:
http://www.cbc.ca/canada/ottawa/story/2008/09/08/ot-security-080908.html

------------------------------

Date: August 30, 2008 1:32:46 AM EDT
From: Randall Webmail <rvh40_at_private>
Subject: Whit Diffie and Susan Landau: Internet Eavesdropping

  [From several other groups that I see, including Dave Farber's IP.  PGN]

http://www.sciam.com/article.cfm?id=internet-eavesdropping

As telephone conversations have moved to the Internet, so have those who
want to listen in. But the technology needed to do so would entail a
dangerous expansion of the government's surveillance powers

By Whitfield Diffie and Susan Landau

As long as people have engaged in private conversations, eavesdroppers have
tried to listen in. When important matters were discussed in parlors, people
slipped in under the eaves -- literally within the `eaves drop' -- to hear
what was being said. When conversations moved to telephones, the wires were
tapped. And now that so much human activity takes place in cyberspace, spies
have infiltrated that realm as well.

Unlike earlier, physical frontiers, cyberspace is a human construct.  The
rules, designs and investments we make in cyberspace will shape the ways
espionage, privacy and security will interact. Today there is a clear
movement to give intelligence activities a privileged position, building in
the capacity of authorities to intercept cyberspace communications. The
advantages of this trend for fighting crime and terrorism are obvious.

The drawbacks may be less obvious. For one thing, adding such intercept
infrastructure would undermine the nimble, bottom-up structure of the
Internet that has been so congenial to business innovation: its costs would
drive many small U.S. Internet service providers (ISPs) out of business,
and the top-down control it would require would threaten the nation's role
as a leader and innovator in communications.

Furthermore, by putting too much emphasis on the capacity to intercept
Internet communications, we may be undermining civil liberties. We may also
damage the security of cyberspace and ultimately the security of the
nation. If the U.S. builds extensive wiretapping into our communications
system, how do we guarantee that the facilities we build will not be
misused? Our police and intelligence agencies, through corruption or merely
excessive zeal, may use them to spy on Americans in violation of the
U.S. Constitution. And, with any intercept capability, there is a risk that
it could fall into the wrong hands. Criminals, terrorists and foreign
intelligence services may gain access to our surveillance facilities and use
them against us. The architectures needed to protect against these two
threats are different.

Such issues are important enough to merit a broad national debate.
Unfortunately, though, the public's ability to participate in the discussion
is impeded by the fog of secrecy that surrounds all intelligence,
particularly message interception (`signals intelligence').  [...]

http://tinyurl.com/6oolcn
IP Archives: https://www.listbox.com/member/archive/247/=now

  [Beware of the Adamant Eaves Drop.  PGN]

------------------------------

Date: Fri, 29 Aug 2008 08:34:40 +0100 (BST)
From: Jonathan Thornburg <J.Thornburg_at_private>
Subject: US .gov website asks for personal info without https protection

I recently used the US Immigration and Customs Enforcement Agency's
SEVIS (Student and Exchange Visitor Information System) to pay the fee
for a US visa.  The online version of this lives at
  http://www.ice.gov/sevis/i901/index.htm
This process requires typing a variety of personal information into web
forms linked from the ice.gov site, including full name, place/date of
birth, and passport number.  If one wants to pay online, credit card
information (including CVV2) is also required.

The index page for this system links to https urls for the actual form (as
javascript-activated popups), but disables browser titlebars on the popup
windows, so for most users there's little evidence of https security.  And
indeed, those https urls aren't even under a .gov domain, but rather
(outsourced?) under fmjfee.com.  I also saw no warnings about the dangers of
typing such sensitive information on a public computer.

The risks of identity theft, or even "just" credit-card fraud, seem very
large.

------------------------------

Date: Fri, 05 Sep 2008 11:15:10 +0200
From: Kevin Pfeiffer <pfeiffer_at_private>
Subject: Re: Germany's New Unified Tax Identification Codes (Fritzsch, R-25.29)

> It seems definite that obviously white spaces in the original data were
> misinterpreted during data transfer. Technical reasons remain until now
> unknown.

Empty fields, not "white space"

Ralf Fritzsch repeats the error from the original posting: the
German-language newspaper source quoted wrote "empty [data] fields", not
"white spaces". (Shades of the children's game "Stille Post" --
"Telephone".)

------------------------------

Date: Wed, 20 Aug 2008 12:52:47 -0500
From: Dimitri Maziuk <dmaziuk_at_private>
Subject: Re: Firefox 3's Step Backwards ... (Barrett, RISKS-25.29)

> ... I believe that the criticism of Firefox 3.0 was simply misguided and
> ill-informed. This is not helpful.

Side note #1: the obvious self-contradiction.  State that the argument was
strictly about encryption, then list a bunch of things that have nothing to
do with encryption, then conclude that the argument is misguided and
ill-informed.

Side note #2: from day one, SSL has been criticized for mixing two different
things, encryption and authentication, in one protocol.  This is precisely
why.  It's a design problem, and most of the time you can't fix those in an
implementation.

Aside from those, three problems with this argument are:

1. Why would I ever trust a certificate signed by someone called GoDaddy?
   Especially over the one I generated and signed myself?

2. Nobody expects browser developers to come up with a solution for a design
   flaw in the underlying protocol (see side note #2) that works well for
   every user.  Yet,

> The general logic is that most users should never be presented with ... a
> choice and the browser should make the decision for them.  [sic]

3. The problem is not that firefox complains, it's that it previously
complained that *signature cannot be verified*.  Now it complains about
*invalid certificate*.  Technically a properly self-signed cert -- "criminal"
or not -- is "invalid" only because firefox developers say so.  And thou shalt
trust their judgment because they, like GoDaddy, Know Better(tm).
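
For point 3, a "properly self-signed cert" is nothing exotic; the usual
openssl one-liner (the hostname is purely illustrative) is:

  $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server.key -out server.crt -subj "/CN=www.example.org"

The resulting certificate's signature verifies fine against its own key;
calling it "invalid" is a trust-policy decision, which is exactly the point
above.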

Side note #3: another annoying new feature is that you can't type or paste
into the file upload field anymore.

Dimitri Maziuk, BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

------------------------------

Date: Thu, 29 May 2008 07:53:46 -0900
From: RISKS-request_at_private
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest, with Usenet equivalent comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.   The mailman web interface can
 be used directly to subscribe and unsubscribe:
   http://lists.csl.sri.com/mailman/listinfo/risks
 Alternatively, to subscribe or unsubscribe via e-mail to mailman
 your FROM: address, send a message to
   risks-request_at_private
 containing only the one-word text subscribe or unsubscribe.  You may
 also specify a different receiving address: subscribe address= ... .
 You may short-circuit that process by sending directly to either
   risks-subscribe_at_private or risks-unsubscribe_at_private
 depending on which action is to be taken.

 Subscription and unsubscription requests require that you reply to a
 confirmation message sent to the subscribing mail address.  Instructions
 are included in the confirmation message.  Each issue of RISKS that you
 receive contains information on how to post, unsubscribe, etc.

=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
 The full info file may appear now and then in RISKS issues.
 *** Contributors are assumed to have read the full info file for guidelines.

=> .UK users should contact <Lindsay.Marshall_at_private>.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you NEVER send mail!
=> SUBMISSIONS: to risks_at_private with meaningful SUBJECT: line.
 *** NOTE: Including the string "notsp" at the beginning or end of the subject
 *** line will be very helpful in separating real contributions from spam.
 *** This attention-string may change, so watch this space now and then.
=> ARCHIVES: ftp://ftp.sri.com/risks for current volume
     or ftp://ftp.sri.com/VL/risks for previous VoLume
 <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive
 http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
   Lindsay has also added to the Newcastle catless site a palmtop version
   of the most recent RISKS issue and a WAP version that works for many but
   not all telephones: http://catless.ncl.ac.uk/w/r
 <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 25.31
************************