RISKS-LIST: Risks-Forum Digest  Wednesday 12 April 2006  Volume 24 : Issue 24

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/24.24.html>
The current issue can be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
Casino can reprogram slot machines in seconds (PGN)
Deleting May Be Easy, but Your Hard Drive Still Tells All (Eric Taub via
  Monty Solomon)
Man Gets $218 Trillion Phone Bill (Les Hatton)
Borders with Customs computers (David Magda)
Australian police inadvertently reveal e-mail addresses/passwords
  (Mike Martin)
The risks of scaling incompetence to big numbers (Poul-Henning Kamp)
Secure colocation in the North Sea (Dan Jacobson)
Classified military documents exposed through file sharing
  (Diomidis Spinellis)
Unexpected Internet Explorer behaviour when copy/pasting
  (Pierre Pierre Blais)
Re: Three days of San Francisco BART upgrade crashes (Martyn Thomas)
Re: Rootkit: erosion of terms? (Steven M. Bellovin)
Washington voting hijacked by computer mischief (Peter Gregory)
Computer problems with U.Wisconsin voting system (Dana A. Freiburger)
Risks of email-to-fax services (Jim Youll)
Re: Man is charged $4,334.33 for four burgers (Martin Ward)
Helios B737 Crash (Michael Loftis, David Alexander)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Wed, 12 Apr 2006 11:10:27 PDT
From: "Peter G. Neumann" <neumann@private>
Subject: Casino can reprogram slot machines in seconds

As an enormous operational improvement, the 1,790 slot machines in Las
Vegas's Treasure Island Casino can now be reprogrammed in about 20 seconds
from the back-office computer.  Previously this was an expensive manual
operation that required replacing the chip and the glass display in each
machine.  Now it is even possible to have different displays for different
customers, e.g., changing between "older players and regulars" during the
day and a different crowd at night ("younger tourists and people with bigger
budgets".  (Slot machines generate more than $7B revenue annually in
Nevada.)  Casinos are also experimenting with chips having digital tags that
can be used to profile bettors, and wireless devices that would enable
players to gamble while gamboling (e.g., in swimming pools!).  [Source:
Article by Matt Richtel, Prefer Oranges to Cherries?  Done!  *The New York
Times*, 12 Apr 2006, C1,C4; PGN-ed]

There are various risks of interest to RISKS.  Regulators are concerned that
machines might be "invaded by outsiders", while bettors are concerned that
casinos could be intentionally manipulating the odds -- for example, giving
preferential treatment to high rollers.  Internal and external manipulation
are clearly potential issues, which of course could be exacerbated by
compromisible wireless security.  By Nevada law, odds cannot be manipulated
while someone is playing, although, with four-minute timeouts before and
after a change, machines may be reprogrammed on the fly.
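
The four-minute rule suggests an obvious software interlock.  Here is a
minimal sketch of what such a check might look like, in Python; the class and
method names are hypothetical, not taken from any real casino back-office
system:

  import time

  IDLE_BEFORE = 4 * 60   # machine must have been idle this long before a change
  LOCK_AFTER = 4 * 60    # and stays out of play this long afterward

  class SlotMachine:
      """Toy model of the reprogramming lockout described above."""
      def __init__(self):
          self.last_play = 0.0
          self.locked_until = 0.0

      def record_play(self):
          """Called whenever a bettor plays the machine."""
          self.last_play = time.time()

      def can_play(self):
          """The machine stays out of play for a while after a change."""
          return time.time() >= self.locked_until

      def reprogram(self, new_config):
          """Refuse to push a new configuration if anyone played recently."""
          now = time.time()
          if now - self.last_play < IDLE_BEFORE:
              return False                 # too soon after play: refuse
          self._apply(new_config)          # push the new game/odds settings
          self.locked_until = now + LOCK_AFTER
          return True

      def _apply(self, new_config):
          pass                             # placeholder for the actual download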

If it were still April Fools' Day, I might suggest that the slot machines
could be reprogrammable for use as voting machines on election day.  That
way you could have instant payoff if you vote the right way.

------------------------------

Date: Mon, 10 Apr 2006 08:55:00 -0500
From: Monty Solomon <monty@private>
Subject: Deleting May Be Easy, but Your Hard Drive Still Tells All (Taub)

Scott Cooper, a computer forensics expert, discovered that a "1" digit had
been deleted from a 20-page digital contract in Microsoft Word.  His work
established when the document had been changed and by whom, and resulted in
his client receiving the originally contracted 15% share of his sold company
rather than the altered 5% share, that is, $96M instead of $32M.
[Source: Eric A. Taub, *The New York Times*, 5 Apr 2006; PGN-ed]
  http://www.nytimes.com/2006/04/05/technology/techspecial4/05forensic.html

------------------------------

Date: Wed, 12 Apr 2006 16:25:07 +0000
From: Les Hatton <L.Hatton@private>
Subject: Man Gets $218 Trillion Phone Bill

A Malaysian man said he nearly fainted when he received a $218 trillion
phone bill and was ordered to pay up within 10 days or face prosecution.
Yahaya Wahab said he disconnected his late father's phone line in January
after he died and settled the 84 ringgit ($23) bill, the *New Straits Times*
reported.  But Telekom Malaysia later sent him a 806,400,000,000,000.01
ringgit ($218 trillion) bill for recent telephone calls ... [more].
[Source: Associated Press, 10 Apr 2006]

  An interesting one, this.  Unless the figure got misprinted somewhere, they
  must have gone to 64-bit arithmetic to issue bills this big.  If they
  implemented it as fixed-point arithmetic and used up about 7 bits for the
  fraction, that would leave about 56 bits of signed arithmetic to play with,
  which, according to my trusty Linux version of bc, would allow them to
  issue a bill of up to:

  72,057,594,037,927,936 ringgit, or nearly $20 quadrillion.
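
  For readers who want to check the headroom, here is the same calculation as
  a quick Python sketch; the 7-bit fraction and the roughly 3.7 ringgit/dollar
  rate are assumptions taken from the figures above:

    FRACTION_BITS = 7                     # assumed bits reserved for the cents
    TOTAL_BITS = 64                       # signed 64-bit fixed point
    max_units = 2 ** (TOTAL_BITS - 1 - FRACTION_BITS)    # 2**56 whole ringgit
    RINGGIT_PER_USD = 806_400_000_000_000.01 / 218e12    # about 3.7
    print(f"max bill: {max_units:,} ringgit "
          f"(about ${max_units / RINGGIT_PER_USD / 1e15:.1f} quadrillion)")
    # -> max bill: 72,057,594,037,927,936 ringgit (about $19.5 quadrillion)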

  Of course they could have gone to arbitrary-precision arithmetic in the
  hope of making a fast googolplex or two.

  The guy is actually lucky, because at least this bill is obviously stupid.
  It could equally well have been an erroneous number that was vaguely
  plausible but expensive, and because the computer says it, it must, as we
  all know, be right.

------------------------------

Date: Wed, 12 Apr 2006 08:02:27 -0400
From: David Magda <dmagda@private>
Subject: Borders with Customs computers

In August 2005, the computer systems used by US Customs failed for about
five hours (RISKS-24.02).

Documents obtained by *WiReD* through a freedom-of-information request
actually point to a virus as the culprit.  The main issue is that a security
patch had deliberately not been deployed; once the virus threat was found,
the patch was pushed out to the systems.

One sentence in the story [1] jumped out at me, though:

> Publicly, officials initially attributed the failure to a virus, but later
> reversed themselves and claimed the incident was a routine system failure.

I'm curious to know why "system failure" is considered "routine".  While it
is prudent to plan for things breaking (redundancy, backups, etc.), and it
will inevitably happen in many cases (especially in physical systems),
should it ever be considered "routine"?

[1] http://www.wired.com/news/technology/0,70642-0.html

------------------------------

Date: Wed, 5 Apr 2006 18:43:20 +1000
From: "mike martin" <mke.martn@private>
Subject: Australian police inadvertently reveal e-mail addresses/passwords

A blunder by New South Wales police has led to a database of e-mail
passwords being available on the Internet for as many as 800 people,
including those of the anti-terrorism chief and hundreds of journalists.
The database appears to have been taken offline within the past month, but
is still accessible [e.g., mirrored elsewhere] through Google.  [Source:
*Sydney Morning Herald*, 5 Apr 2006; PGN-ed]

http://www.smh.com.au/news/technology/police-secret-password-blunder/2006/04/05/1143916566038.html

  It is not clear why a police server would hold passwords of police and
  journalists simply so they can receive police news releases.  And if it
  does hold passwords, are they the same passwords that people use to
  access their own e-mail accounts?  (Human nature being what it is, some
  surely do.)  Mike Martin, Sydney, coriaria.arborea@private
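
  One standard mitigation, if credentials must be stored at all, is to keep
  only salted one-way hashes, so that a leaked database does not expose
  reusable passwords.  A minimal sketch using Python's standard library (the
  parameters are illustrative, not a recommendation for any particular site):

    import hashlib, hmac, os

    def hash_password(password: str):
        """Return (salt, digest); the password itself is never stored."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)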

------------------------------

Date: Sat, 08 Apr 2006 08:54:53 +0200
From: Poul-Henning Kamp <phk@private>
Subject: The risks of scaling incompetence to big numbers

A swarm of D-Link products prod my NTP server despite the fact that they
have never gotten an answer from it.  I have spent nearly half a year trying
to get D-Link to act responsibly and cover my costs but so far to no avail.
You can read my side of the story here:

	http://people.freebsd.org/~phk/dlink/

A feature of modern fast-cycle product development and manufacturing is that
a million defective products can be spread all over the market before
anybody can get a chance to point out the defects.
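
For the record, the particular defect is easy to avoid in firmware: a
well-behaved (S)NTP client backs off when a server never answers, instead of
retrying forever at a short fixed interval.  A minimal sketch, assuming plain
SNTP over UDP; the function names are mine, not D-Link's:

  import socket, struct, time

  def sntp_query(server, timeout=2.0):
      """Send one SNTP (RFC 4330) client request; return Unix time or None."""
      packet = b'\x23' + 47 * b'\x00'      # LI=0, VN=4, Mode=3 (client)
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
          s.settimeout(timeout)
          try:
              s.sendto(packet, (server, 123))
              data, _ = s.recvfrom(48)
          except OSError:
              return None                  # no answer (or no route): give up
      if len(data) < 44:
          return None
      secs = struct.unpack('!I', data[40:44])[0]   # transmit timestamp, seconds
      return secs - 2208988800                     # 1900 epoch -> Unix epoch

  def polite_sync(server, max_interval=86400):
      """Retry with exponential backoff rather than hammering a silent server."""
      interval = 64                        # the usual minimum poll interval
      while True:
          t = sntp_query(server)
          if t is not None:
              print('server time:', time.ctime(t))
              interval = 64                # reset after a good answer
          else:
              interval = min(interval * 2, max_interval)   # back off on failure
          time.sleep(interval)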

In this case, the failure is relatively benign, and if D-Link covers the
costs it has caused me, no serious harm will have come of it.  But considering
the lousy quality of software in these low-end devices, it is a safe bet
that at least one or two of these products can be subverted as agents for a
DoS attack.

In fact, only a few years ago, the NTP client component of NetGear devices
did act as a DoS attack on the University of Wisconsin, as some of you probably
remember:

	http://www.cs.wisc.edu/~plonka/netgear-sntp/

If risk to life and limb is involved, product recalls seem to happen
automatically because the manufacturer fears litigation: the auto industry,
Intel's P5 divide instruction, hot and exploding lithium batteries, hot or
flaming switch-mode power supplies.  The list goes on and on.

But unless a legal risk of significant magnitude is present, the vendor, as
in this case D-Link, will not even reply to the complaint.

Here in Denmark, buildings in which many people may be present (sports
arenas, theaters, and the like) must meet a higher standard in the building
code than a regular house.

To my naïve mind, it would make a lot of sense if there were a legal
requirement for a higher standard of product review and testing for high
volume products in general, and legal liability should scale with at least
log(number_of_units_sold).

Poul-Henning Kamp  phk@private  FreeBSD committer
BSD since 4.3-tahoe  TCP/IP since RFC 956  UNIX since Zilog Zeus 3.20

------------------------------

Date: Thu, 30 Mar 2006 11:36:33 +0800
From: Dan Jacobson <jidanni@private>
Subject: Secure colocation in the North Sea

Hmmm, http://www.havenco.com/: "The Principality of Sealand is a former
World War II anti-aircraft military fortress in the North Sea.  Only
authorized persons directly involved in the HavenCo project are permitted to
land on the island.  The Sealand Government is ideal for Web business, as
there are no direct reporting or registration requirements."

"Tamper-resistant computing hardware, designed to protect customer
transactions from all possible attackers, including HavenCo and its staff
... unmatched security, including 12" thick concrete walls, 24x7 armed
security, and miles of empty sea between you and any threat."

  Dan says: Probably hard to get spare parts to there during a storm though.

    [PGN wonders whether there is remote access for maintenance purposes?]

------------------------------

Date: Wed, 05 Apr 2006 19:23:59 +0300
From: Diomidis Spinellis <dds@private>
Subject: Classified military documents exposed through file sharing

The Greek newspaper *Eleftherotypia*, in an article on 5 Apr 2006 [1],
describes an interesting incident in which classified Greek military
documents became available on the Internet.

According to the article, an unnamed individual found on the Internet a
number of military documents containing names of military units, details of
mobilization procedures, and names and phone numbers of military officers.
He notified the special forces chief of staff, and apparently thereafter all
units that had active Internet connections were instructed to disconnect
their machines from the network.  Yet the individual could still access the
files for hours, until he shut down his Internet connection.

Military sources explained that the incident occurred when an armed forces
technician, while fixing a military unit's computer, copied the files to his
laptop in order to burn them to a CD for backup purposes.  He then forgot to
remove them from his laptop's hard disk, and the files became exposed when
he connected his laptop to the Internet through a private non-firewalled
connection.  The article's terminology doesn't clarify whether the files
were shared on the Internet through Windows file shares or through a
peer-to-peer file sharing program.

I would classify this story as plain inept security management (what was a
private laptop doing in an IT installation with classified documents?) were
it not for the fact that the technician may conceivably have been trying to
do his job while battling other security measures.  I can well imagine that
the damaged computer lacked a CD-ROM burner and a network connection as a
(half-baked) security precaution.

[1] http://www.enet.gr/online/online_text/c=110,id=20584664 (in Greek)

Diomidis Spinellis - http://www.spinellis.gr/

------------------------------

Date: Thu, 6 Apr 2006 09:10:57 -0400 (EDT)
From: Pierre Pierre Blais <ppblais@private>
Subject: Unexpected Internet Explorer behaviour when copy/pasting

It's interesting that at the same time I was reading the recent postings
about Excel's non-obvious behaviour, I ran into an unexpected Internet
Explorer behaviour when copy/pasting.

I was visiting a Web page that has text only. It provides a list of on-line
or webcast courses that one might be interested in taking. I needed to make
a list of the courses I had taken.

Given that I had taken most of the courses, I highlighted the whole page and
copy/pasted it into an Outlook e-mail I was composing, figuring all I needed
to do was to delete the entries for the courses I had not taken.

I was quite surprised to see that more text was pasted than I thought I had
copied.  Some of the text was just repetition of what was already there; I
blamed that on the copy process picking up both the link destinations (HTML
href attributes) and the link text itself.

However, I also noticed that the set of courses was much longer than what I
could see on the page. I quickly ran a "view source" on the page to see that
the list is indeed much longer than what is visible, with some entries
marked not to be displayed:

   <tr height=0 style='display:none'>

So IE actually copies all the text (presumably so that it can paste the HTML
elsewhere), and since I pasted into a text-only document, it converted the
copied HTML to text, with the result that I did not get what I was seeing on
the Web page.  A non-intuitive result.
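
For what it's worth, the difference between "all text in the HTML" and "text
the reader actually sees" is easy to demonstrate outside the browser.  A
minimal sketch with Python's standard html.parser that skips display:none
elements (a real page would need a proper CSS-aware renderer, so this is
illustrative only):

  from html.parser import HTMLParser

  class VisibleText(HTMLParser):
      """Collect text, skipping any element styled with display:none."""
      VOID = {'br', 'img', 'hr', 'input', 'meta', 'link', 'col', 'area'}

      def __init__(self):
          super().__init__()
          self.hidden_depth = 0
          self.chunks = []

      def handle_starttag(self, tag, attrs):
          if tag in self.VOID:
              return                       # void elements never get an end tag
          style = dict(attrs).get('style') or ''
          if self.hidden_depth or 'display:none' in style.replace(' ', ''):
              self.hidden_depth += 1       # everything nested inside stays hidden

      def handle_endtag(self, tag):
          if self.hidden_depth:
              self.hidden_depth -= 1

      def handle_data(self, data):
          if not self.hidden_depth and data.strip():
              self.chunks.append(data.strip())

  page = ("<table><tr><td>Visible course</td></tr>"
          "<tr height=0 style='display:none'><td>Hidden course</td></tr></table>")
  parser = VisibleText()
  parser.feed(page)
  print(parser.chunks)    # ['Visible course'] -- the hidden row is dropped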

Presumably, if I had pasted into a location that was not text-only, I would
have ended up with the HTML...

I wonder how many sites use this technique to hide some critical information
temporarily...

------------------------------

Date: Wed, 5 Apr 2006 15:59:52 +0100
From: "Martyn Thomas" <martyn@thomas-associates.co.uk>
Subject: Re: Three days of San Francisco BART upgrade crashes (RISKS-24.23)

PGN: "The new supposedly self-correcting software had passed all of its
tests on the previous Sunday, but evidently the testing was incomplete. "

What would _complete_ testing look like?

  [Martyn, Many thanks for your good sense of humo(u)r.  Knowing that
  testing is NEVER complete in the larger sense, this was clearly a cynical
  comment on my part, leaving the reader to ponder whether
    * the test requirements were incomplete (undoubtedly)
    * the testing against those requirements was incomplete (most likely)
    * the testing methodology was inherently incomplete (certainly)
    * and so on.
  PGN]

------------------------------

Date: Wed, 5 Apr 2006 21:17:39 -0400
From: "Steven M. Bellovin" <smb@private>
Subject: Re: Rootkit: erosion of terms? (Slade, RISKS-24.23)

Rob Slade complains that the word "rootkit" is being misused to describe
cloaking software.  I believe that that usage is, in fact, historically
correct, as counter-intuitive as that may be.  Certainly, it had that
meaning 5 years ago; see CERT Advisory CA-2001-05
(http://www.cert.org/advisories/CA-2001-05.html).  Wikipedia's description
of the origin of the word agrees, but that's a very large can of worms I
don't feel like opening now...  Asking Google 'define rootkit' yields both
meanings, as does the Jargon File.  But the last word may be in an article
on a Symantec effort to standardize the definition
(http://www.computerpartner.nl/article.php?news=int&id=2353):

  But while efforts like the one Symantec is proposing may help
  professionals in the field, they will do nothing to alter popular usage,
  said Alan Paller, director of research with the SANS Institute, a training
  organization for computer security professionals.

  "I don't think you can stop the public and the marketing people from using
  words any way they choose," he said. "So even if there were a standard
  definition of a rootkit, it wouldn't change the use of the term."

Steven M. Bellovin, http://www.cs.columbia.edu/~smb

  [And so it goes with many other terms:
    * "Virus" is used generically somewhat like "Kleenex" and "Xerox".
    * "Intrusion detection" typically applies to insiders and network
      denials of service that require no intrusion.
    * ...
  PGN]

------------------------------

Date: Wed, 12 Apr 2006 11:11:01 -0700
From: "Peter Gregory" <Peter.Gregory@private>
Subject: Washington voting hijacked by computer mischief

An online poll asking Washingtonians to pick their favorite design for the
state's quarter coin was suspended, after the balloting was hijacked by
computer programs whose automated scripts pushed the tally past 1 million
votes over the weekend.  [Source: Associated Press item, seen in *The
Seattle Times*, 12 Apr 2006; PGN-ed]

http://seattletimes.nwsource.com/html/localnews/2002923164_webquarter10.html
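
Scripted ballot-stuffing of this sort is usually slowed, though never fully
stopped, by throttling repeated submissions from the same source.  A minimal
sketch of a per-address throttle in Python; purely illustrative, since an
attacker with many addresses (or a botnet) walks straight past it:

  import time
  from collections import defaultdict

  WINDOW = 3600          # seconds
  MAX_VOTES = 1          # votes allowed per address per window (a policy choice)
  _recent = defaultdict(list)

  def accept_vote(addr, now=None):
      """Return True if this address is still within its allowance."""
      now = time.time() if now is None else now
      hits = [t for t in _recent[addr] if now - t < WINDOW]
      if len(hits) >= MAX_VOTES:
          _recent[addr] = hits
          return False                     # likely a script replaying votes
      hits.append(now)
      _recent[addr] = hits
      return True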

Peter H Gregory, Concur Technologies http://www.concur.com 1-425-702-8808

------------------------------

Date: Sat, 08 Apr 2006 13:41:40 -0500
From: "Dana A. Freiburger" <dafreiburger@private>
Subject: Computer problems with U.Wisconsin voting system (Re: RISKS-24.23)

An attempt to hold a campus election for the student council at the
University of Wisconsin failed *again* due to "significant software errors",
according to the University's Division of Information Technology (DoIT)
group.  According to their news release, "DoIT detected a disparity between
the number of student votes cast and the number of votes confirmed in the
online election database."  No root cause was indicated in the news release,
and plans are now being made to run a paper-based election.

While the problem-struck online election system will "not be used again,"
there is concern that the next attempt will suffer low turnout because of
these computer snafus.  I also noticed that the local newspaper (the
*Wisconsin State Journal*) did not run an article on this event, unlike the
first failure the previous week.  Given that the newspaper is already bored
with the matter, voters can't be far behind.

The risks?  Loss of respect for computer-based voting systems, reduced
voter turnout due to these repeated problems, and continued delays in
electing the next student council.

News from the University of Wisconsin's Division of Information
Technology: "DoIT Information on ASM Election Issues"

<http://www.doit.wisc.edu/news/story.asp?filename=649>

------------------------------

Date: Wed, 5 Apr 2006 09:33:13 -0400
From: Jim Youll <jim@private>
Subject: Risks of email-to-fax services (Re: Ross, RISKS-24.23)

Dallman Ross (RISKS-24.23) wrote about the possibility of "Joe-jobbing"
someone via the email-to-fax services that only authenticate the e-mail
"from" address when sending (expensive) faxes.

The risks /appear to be/ mitigated such that real financial damage to a
target is impractical, but the devil is in the details as I've just
confirmed in examination of a large fax/voicemail service:

* This service (and JFax as well) once offered concerned customers (me) the
  option to place a text password inline at the top of the e-mail body, e.g.,
  <password="SendMyFax007">.  However, I noticed the password string
  sometimes leaked into the sent message, and its absence didn't always
  prevent a message going out.  This "feature" doesn't seem to be publicly
  documented and was never user-configurable.  I don't know if it's still
  available.  (A sketch of this kind of inline check appears after the list.)

* The service under study this morning seems to update its authentications
  after a huge delay, if at all. I removed all references to an account's
  formerly authorized email address via the web page at 8:14am and replaced
  it with another. At 9:17am the service is still sending faxes received
  from the deleted e-mail address. So, even removing a compromised address
  doesn't stop the attack immediately. Inexplicably, it's referencing a
  "free trial account" now (the account was started as a free trial years
  ago). But it's charging the faxes against a real account, and logging them
  there.

* These services top up a debit balance, then run it down before charging
  the credit card again.  If you keep a low refill amount, that throttles an
  attack, but the victim remains dependent on the company to "do the right
  thing" and reimburse.

* There is no way to stop faxes going out, and no way to remove stored
  credit card data or to stop the auto-charging of same.  Attempts to erase
  credit card details yield a "you have entered an invalid credit card
  number" error. The service's contract requires that it be allowed to store
  credit cards and auto-charge both fixed monthly fees and per-use fees.

* The company cannot be easily reached by telephone, even in an emergency.

* The service allows account holders to disable notification of sent
  faxes. Presumably large account holders (those topping up with $100 or
  $250 per occurrence) thus wouldn't learn about an attack quickly.  These
  accounts would presumably be the most in-demand.

* The service allows broadcast faxing on approved accounts, the fax
  equivalent of a spam relay.
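
As promised in the first bullet, here is a minimal sketch of that kind of
inline-token check, in Python.  The account table, the token format, and the
function name are hypothetical illustrations, not any vendor's actual
interface:

  import hmac, re
  from email import message_from_bytes
  from email.utils import parseaddr

  # Hypothetical per-account shared secrets, held server-side so that knowing
  # the "From:" address alone is not enough to send a fax.
  ACCOUNT_SECRETS = {'alice@example.com': b'SendMyFax007'}

  TOKEN_RE = re.compile(r'<password="([^"]+)">')

  def authorize_fax(raw_message: bytes) -> bool:
      """Accept a fax job only if the body carries the sender's inline token."""
      msg = message_from_bytes(raw_message)
      sender = parseaddr(msg.get('From', ''))[1].lower()
      secret = ACCOUNT_SECRETS.get(sender)
      if secret is None:
          return False                     # unknown account: reject
      body = msg.get_payload(decode=True) or b''
      match = TOKEN_RE.search(body.decode(errors='replace'))
      if match is None:
          return False                     # no inline token: reject
      # Constant-time comparison, so the token cannot be probed byte by byte.
      return hmac.compare_digest(match.group(1).encode(), secret)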

I discussed these risks in 2002 with an architect of JFax, who is also a
principal at another fax service. His (anonymized) comments below shed some
light on their reasoning. He, and JFax before, considered this design
necessary and reasonable given the limitations of both technology and
customers. He's troublingly confident about the utility of "tracing an email
back to where it came from" as a means of solving the problem.

 "Yes, we've been through this one about a thousand times in the
  past.  When we started (the service) back in 1996, we used to make the
  sender place their customer ID and password in the subject line of the
  email. We lost a lot of business because most folks could never figure out
  how to send a fax.
  We do send a confirmation to your email address every time a fax is sent
  on your behalf, so if someone is scamming your account, you should know
  fairly quickly. Please inform us immediately and we'll credit your account
  and trace the mail trail back to find out where the email came from.
  This is a small risk that we have to face in order to do business in our
  market. Fortunately it hasn't been too big a problem (stolen credit cards
  seems to be a much more real issue for us to deal with). In my dealings
  with J2 (JFax)... I learned that they really hadn't had any issues with
  this type of issue either. We'll keep our eyes open though."

------------------------------

Date: Thu, 6 Apr 2006 10:23:44 +0100
From: Martin Ward <martin@private>
Subject: Re: Man is charged $4,334.33 for four burgers (Feit, RISKS-24.23)

> I suspect that's what happened in this case, and it's a very good reason to
> use a real credit card instead of a debit card.

When you use a credit card, the bank takes a cut of the transaction, most of
which goes straight to its bottom line.  When you use a debit card, its cut
is much smaller.  So it is in the bank's financial interest for things to be
arranged so that debit-card transactions are risky, so that people continue
to give (valid) advice such as the above.  After all, it is the customer's
choice whether to use a credit card or a debit card, and it doesn't cost the
*customer* anything to use a credit card.  The bank's gain is the merchant's
loss, but the merchant can't afford not to accept credit cards.

I'm not suggesting a great conspiracy on the bank's part: just a slight
disinclination to fix issues (such as the above) which are financially
beneficial to the bank.  In other words: a definite conflict of interest!

martin@private http://www.cse.dmu.ac.uk/~mward/

------------------------------

Date: Tue, 04 Apr 2006 20:23:00 -0600
From: Michael Loftis <mloftis@private>
Subject: Helios B737 Crash (RISKS-24.23, Ferguson)

What Eric Ferguson has completely forgotten about is these huge looming
things we call mountains out here in the midwestern US.  They're pretty
solid, and descending into, or failing to ascend over, one of these is
almost always fatal.

I would completely refuse to be on an airplane with such an unsafe system in
place.  If it were to falsely believe there was a depressurization event
while climbing out of, say, Missoula, MT, you'd certainly die.  Lots of
mountains to crash into.

A better solution would be some clearer warning signs as well as better
training.  It might not be a bad idea to have some form of mandatory hypoxia
training, though I have no idea how that could be done.

ANY system that significantly impedes the pilot's ability to control the
airplane for the sake of what the system designer thinks to be 'safety' will
quite likely be far less safe than the original failure mode.  Humans are
usually far smarter than these systems.

------------------------------

Date: Wed, 05 Apr 2006 09:20:36 +0100
From: David Alexander <dave_ale@private>
Subject: The 2005 Helios B737 Crash (Re: RISKS-24.22 & 24.23)

I can attest to the accuracy of the comments made about Time of Useful
Consciousness. I have experienced hypoxia first-hand.

I trained as a pilot in the (UK) Royal Air Force. It may have changed in
the last 25 years, but back then one of the first things we did in training
was to sit in a chamber with an instructor to experience:
  1) an explosive decompression from 12000 ft to (I think) 24000 ft
  2) hypoxia

The idea is that you 'know your enemy' and can react properly if it happens
for real.

I can tell you that hypoxia is very insidious and the effects are a lot like
being very drunk, but it happens very quickly.  You sit in the chamber as a
group after the explosive decompression, wearing an oxygen mask.  One at a
time they make you take your mask off and do exercises with pen and paper.
You think you're doing fine and the effects haven't started yet, then the
instructor puts the mask back on and you look at the complete garbage you
have scrawled on the paper.  The first third of the page is OK, then it gets
worse and worse - first in accuracy, then the handwriting looks like
something a three-year-old would do, then there is a line off the edge of
the page where you lost it completely (which is when they put your mask back
on).

You experience it yourself and you get to see 9 other people go through it
too.  It's a very valuable lesson and one that ought to be taught to all
pilots who fly planes that can exceed 12,000 ft amsl.

  [We have already received over a dozen messages on the Helios situation,
  from which this and the preceding one have been sampled.  PGN]

------------------------------

Date: 2 Oct 2005 (LAST-MODIFIED)
From: RISKS-request@private
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest, with Usenet equivalent comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.   The mailman web interface can
 be used directly to subscribe and unsubscribe:
   http://lists.csl.sri.com/mailman/listinfo/risks
 Alternatively, to subscribe or unsubscribe via e-mail to mailman your
 FROM: address, send a message to
   risks-request@private
 containing only the one-word text subscribe or unsubscribe.  You may
 also specify a different receiving address: subscribe address= ... .
 You may short-circuit that process by sending directly to either
   risks-subscribe@private or risks-unsubscribe@private
 depending on which action is to be taken.

 Subscription and unsubscription requests require that you reply to a
 confirmation message sent to the subscribing mail address.  Instructions
 are included in the confirmation message.  Each issue of RISKS that you
 receive contains information on how to post, unsubscribe, etc.

=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
 The full info file may appear now and then in RISKS issues.
 *** Contributors are assumed to have read the full info file for guidelines.

=> .UK users should contact <Lindsay.Marshall@private>.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you NEVER send mail!
=> SUBMISSIONS: to risks@private with meaningful SUBJECT: line.
 *** NOTE: Including the string "notsp" at the beginning or end of the subject
 *** line will be very helpful in separating real contributions from spam.
 *** This attention-string may change, so watch this space now and then.
=> ARCHIVES: ftp://ftp.sri.com/risks [subdirectory i for earlier volume i]
 <http://www.risks.org> redirects you to Lindsay Marshall's Newcastle archive
 http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
   Lindsay has also added to the Newcastle catless site a palmtop version
   of the most recent RISKS issue and a WAP version that works for many but
   not all telephones: http://catless.ncl.ac.uk/w/r
 <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing

------------------------------

End of RISKS-FORUM Digest 24.24
************************