[ISN] CRYPTO-GRAM, January 15, 2002 (fwd)

From: InfoSec News (isnat_private)
Date: Tue Jan 15 2002 - 07:59:25 PST


                      CRYPTO-GRAM
    
                    January 15, 2002
    
                   by Bruce Schneier
                    Founder and CTO
           Counterpane Internet Security, Inc.
                schneierat_private
              <http://www.counterpane.com>
    
    
    A free monthly newsletter providing summaries, analyses, insights, and 
    commentaries on computer security and cryptography.
    
    Back issues are available at 
    <http://www.counterpane.com/crypto-gram.html>.  To subscribe, visit 
    <http://www.counterpane.com/crypto-gram.html> or send a blank message to 
    crypto-gram-subscribeat_private
    
    Copyright (c) 2002 by Counterpane Internet Security, Inc.
    
    
    ** *** ***** ******* *********** *************
    
    In this issue:
          Windows UPnP Vulnerability
          Crypto-Gram Reprints
          News
          Counterpane News
          Password Safe 2.0
          The Doghouse:  AGS Encryptions
          Comments from Readers
    
    
    ** *** ***** ******* *********** *************
    
               Windows UPnP Vulnerability
    
    
    
    The big news of late December was a security flaw in Microsoft's Universal 
    Plug and Play system, a feature in a variety of Windows flavors.  On the 
    one hand, this is a big deal: the vulnerability can allow anyone to take 
    over a target computer.  On the other hand, this is just one of many 
    similar vulnerabilities in all sorts of software -- Microsoft and 
    non-Microsoft -- and one for which there is no rapidly spreading exploit.
    
    There are several lessons from all of this.
    
    One, the amount of press coverage is not indicative of the level of 
    severity, yet the press is the only way to get the news out to the 
    public.  This thing got Nimda-like press, but there was no exploit.  While 
    it is a critical patch to install, it's not severe enough to trigger the 
    "wake up, drive to work, and install this patch now!" 
    reflex.  Unfortunately, the public will have patience for only so many of 
    these stories before their eyes glaze over.  The rate of patch installation 
    is decreasing, as people simply stop paying attention.
    
    Two, Microsoft still sacrifices accuracy for public relations 
    value.  Here's a quote from Scott Culp, manager of Microsoft's security 
    response center: "This is the first network-based, remote compromise that 
    I'm aware of for Windows desktop systems."  I was all set to write a 
    longish rant, calling the statement a lie and listing other network-based 
    remote Windows compromises -- Back Orifice, Nimda, etc., etc., etc. -- but 
    Richard Forno beat me to it.  Read his excellent commentary on Microsoft 
    and security.
    
    To combat this, open and public discussion is important.  In the first days 
    of the vulnerability, there was a lot of debate in the press: which systems 
    were vulnerable by default, how best to fix the problem, etc.  Even the FBI 
    got into the act, albeit with incorrect information it later corrected.  What 
    matters here is a multitude of voices and a multitude of views, 
    something that secrecy won't provide.  As Greg Guerin commented, when 
    there's a fire in a theater, you want as many audience members as possible 
    to shout "Fire!" rather than sitting around waiting for the theater manager 
    to say it.  The theater manager is going to put his own spin on the news, 
    and it's not likely to be an unbiased one.
    
    Three, bug secrecy hurts us all.  According to reports, eEye Digital 
    Security told Microsoft about this vulnerability nearly two months before 
    Microsoft released its patch.  What's with the two-month delay?  It's a 
    simple buffer overflow, and could have been patched within days.  Delays just 
    increase the likelihood that someone will exploit the vulnerability.  (To 
    think, some time ago I criticized eEye for not waiting long enough before 
    releasing a vulnerability.  Shows how hard it is to get the balance right.)
    
    Four, Microsoft still pays lip service to security.  This vulnerability is 
    a buffer overflow, the easy-to-use low-hanging-fruit automatic-tools-to-fix 
    kind of security vulnerability.  It's not new or subtle; buffer overflows 
    have been causing serious security problems for decades.  It's an obvious, 
    stupid-ass programming mistake that ANY reasonably implemented security 
    program should have caught.  Remember Microsoft's big PR fuss about their 
    Secure Windows Initiative?  If it can't catch this simple stuff, how can it 
    secure software against complex attacks and vulnerabilities?  This is a 
    software quality problem, pure and simple.  And the real solution is better 
    software design, implementation, and quality procedures, not more patches 
    and alerts and press releases.
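
    To be concrete about the bug class (a generic sketch, not Microsoft's 
    actual code; the function and buffer names here are invented), the 
    mistake is a fixed-size buffer receiving attacker-controlled input with 
    no length check:

        #include <stdio.h>
        #include <string.h>

        #define LOCATION_MAX 128  /* fixed-size buffer for a header value */

        /* Vulnerable: blindly copies an attacker-supplied string.  Anything
         * longer than LOCATION_MAX overruns the stack buffer, and a crafted
         * payload overwrites the return address and runs attacker code. */
        void parse_location_bad(const char *value)
        {
            char buf[LOCATION_MAX];
            strcpy(buf, value);                /* no bounds check */
            printf("Location: %s\n", buf);
        }

        /* Fixed: check the length before copying; reject oversized input. */
        void parse_location_good(const char *value)
        {
            char buf[LOCATION_MAX];
            if (strlen(value) >= sizeof buf) {
                fprintf(stderr, "header too long; dropped\n");
                return;
            }
            strcpy(buf, value);                /* length verified above */
            printf("Location: %s\n", buf);
        }

        int main(void)
        {
            parse_location_good("http://192.168.0.5/desc.xml");  /* fine */
            /* parse_location_bad() with a multi-kilobyte string would
             * smash the stack. */
            return 0;
        }

    This is exactly the pattern that automated source-scanning tools -- and 
    any disciplined code review -- are good at flagging, which is why a real 
    security program has no excuse for missing it.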
    
    And five, complexity equals insecurity.  UPnP is a complex set of protocols 
    to support ad hoc peer-to-peer networking.  Even though no one uses it, 
    it's installed in a bunch of Microsoft OSs.  Even though no one needs it 
    turned on, sometimes it's turned on by default.  This kind of "feature 
    feature feature" mentality, without regard to security, means this kind of 
    thing is going to happen again and again.  Until software companies are 
    held liable for the code they produce, they will continue to pack their 
    software with needless features and neglect to consider their associated 
    security ramifications.
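
    To make "ad hoc peer-to-peer networking" concrete: a UPnP device 
    announces itself by multicasting an unauthenticated SSDP NOTIFY datagram 
    (an HTTP-like message on UDP port 1900) that every listening machine 
    parses.  The general shape, with made-up addresses, is as follows; the 
    advisories above reportedly trace the overflow to the code that handles 
    these announcements:

        NOTIFY * HTTP/1.1
        HOST: 239.255.255.250:1900
        CACHE-CONTROL: max-age=1800
        LOCATION: http://192.168.0.5:80/device-description.xml
        NT: upnp:rootdevice
        NTS: ssdp:alive
        USN: uuid:device-UUID::upnp:rootdevice

    Nothing in the message authenticates the sender; any machine that can 
    reach the multicast group -- or, on some configurations, any machine on 
    the Internet -- can send one.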
    
    This vulnerability also illustrates why Microsoft is so keen on bug 
    secrecy.  The industry analysts at Gartner issued a warning, urging 
    companies to delay upgrading to Windows XP for "three to six months," lest 
    more of these kinds of vulnerabilities surface.  If Microsoft had learned of 
    this vulnerability in secret, and fixed it in secret, Gartner would not 
    make any such statements.  No one would be the wiser.  (But, of course, if 
    Microsoft learned of this vulnerability in secret, what impetus would they 
    have to fix it quickly?  Wouldn't it be easier on everyone if they just 
    rolled it into the next product update?)
    
    Honestly, security experts don't pick on Microsoft because we have some 
    fundamental dislike for the company.  Indeed, Microsoft's poor products are 
    one of the reasons we're in business.  We pick on them because they've done 
    more to harm Internet security than anyone else, because they repeatedly 
    lie to the public about their products' security, and because they do 
    everything they can to convince people that the problems lie anywhere but 
    inside Microsoft.   Microsoft treats security vulnerabilities as public 
    relations problems.  Until that changes, expect more of this kind of 
    nonsense from Microsoft and its products.  (Note to Gartner: The 
    vulnerabilities will come, a couple of them a week, for years and 
    years...until people stop looking for them.  Waiting six months isn't going 
    to make this OS safer.)
    
    
    News
    <http://www.zdnet.com/zdnn/stories/news/0,4586,5100941,00.html>
    <http://theregister.co.uk/content/55/23480.html>
    <http://www.wired.com/news/business/0,1367,49301,00.html>
    <http://www.msnbc.com/news/675850.asp?0dm=B13QT>
    
    News article with the Culp quote:
    <http://www.washingtonpost.com/wp-dyn/articles/A7050-2001Dec20.html>
    
    Advisories:
    <http://www.eeye.com/html/Research/Advisories/AD20011220.html>
    <http://www.nipc.gov/warnings/advisories/2001/01-030-2.htm>
    <http://www.cert.org/advisories/CA-2001-37.html>
    <http://www.microsoft.com/technet/security/bulletin/MS01-059.asp>
    
    FBI's statement:
    <http://www.computerworld.com/storyba/0,4125,NAV47_STO66939,00.html>
    FBI's retraction:
    <http://www.nandotimes.com/technology/story/210127p-2028258c.html>
    <http://www.computerworld.com/storyba/0,4125,NAV47_STO67069,00.html>
    
    Gartner commentary:
    <http://news.cnet.com/news/0-1003-201-8254545-0.html?tag=prntfr>
    
    Forno's commentary:
    <http://www.infowarrior.org/articles/2001-15.html>
    
    Gibson's commentary:
    <http://grc.com/UnPnP/UnPnP.htm>
    
    Other analysis:
    <http://www.securityfocus.com/columnists/50>
    <http://www.internetnews.com/dev-news/article/0,,10_945371,00.html>
    
    
    ** *** ***** ******* *********** *************
    
                Crypto-Gram Reprints
    
    
    
    A cyber Underwriters Laboratories?
    <http://www.counterpane.com/crypto-gram-0101.html#1>
    
    Code signing:
    <http://www.counterpane.com/crypto-gram-0101.html#10>
    
    Publicity attacks:
    <http://www.counterpane.com/crypto-gram-0001.html#KeyFindingAttacksandPublicityAttacks>
    
    Block and stream ciphers:
    <http://www.counterpane.com/crypto-gram-0001.html#BlockandStreamCiphers>
    
    
    ** *** ***** ******* *********** *************
    
                         News
    
    
    
    The year in vulnerabilities.  This document contains a list (with 
    references) of all the operating systems and applications with 
    vulnerabilities (not necessarily known or public exploits).
    <http://www.nipc.gov/cybernotes/2001/cyberissue2001-26.pdf>
    
    Software and liabilities:
    <http://www.securityfocus.com/columnists/47>
    
    Dmitry Sklyarov reached a plea bargain with U.S. attorneys.  He's free.
    <http://news.cnet.com/news/0-1005-200-8324114.html>
    The agreement:
    <http://www.usdoj.gov/usao/can/press/assets/applets/2001_12_13_sklyarov.pdf>
    <http://www.wired.com/news/politics/0,1283,49272,00.html>
    And here's Sklyarov's side.  It turns out that the Justice Department 
    misrepresented several facts:
    <http://www.oreillynet.com/cs/weblog/view/wlg/983>
    
    IDSs and false alarms:
    <http://www.theregister.co.uk/content/55/23420.html>
    
    Viruses and Trojans will be commonplace on cellphones before long.  Lots of 
    them will accept executable code over the air.
    <http://news.bbc.co.uk/hi/english/in_depth/sci_tech/2000/dot_life/newsid_1709000/1709107.stm>
    Why not spam a Trojan to a million cellphones?
    
    Excellent essay by Mike Godwin on digital rights management and the battle 
    between computer companies and entertainment companies:
    <http://cryptome.org/mpaa-v-net-mg.htm>
    
    Two more good essays on the inherent dangers in keeping security 
    vulnerabilities secret, and Microsoft's threat to security:
    <http://www.theregister.co.uk/content/4/23418.html>
    <http://www.infowarrior.org/articles/2001-11.html>
    
    Judge upholds government in Scarfo case.  The evidence surreptitiously 
    gathered from the defendant's computer can be admitted in court.
    <http://www.nj.com/news/ledger/index.ssf?/jersey/ledger/1568cd5.html>
    <http://www.wired.com/news/privacy/0,1848,49455,00.html>
    Text of the decision:
    <http://lawlibrary.rutgers.edu/fed/html/scarfo2.html-1.html>
    
    CCBill hacked:
    <http://www.zdnet.com/zdnn/stories/news/0,4586,5100990,00.html>
    <http://www.theregister.co.uk/content/6/23482.html>
    <http://www.computerworld.com/storyba/0,4125,NAV47_STO66920,00.html>
    Even worse, CCBill knew about their security problem months ago.  They made 
    a quick fix of part of the problem, and then hoped that no one would find 
    out about the rest of the insecurities.  This is a good illustration of the 
    problems that ensue when there is no pressure on a company to fix its 
    security problems.  If researchers are prohibited from making their 
    discoveries public, expect a lot more of this kind of thing.
    <http://www.theregister.co.uk/content/55/23493.html>
    
    Good essay on the flaws inherent in large databases, and how they can 
    affect any national ID system:
    <http://www.newhousenews.com/archive/story1a122001.html>
    
    GAO Security Auditing Guide.  Interesting reading.
    <http://www.gao.gov/special.pubs/mgmtpln.pdf>
    
    Yet another report saying the obvious: many companies are vulnerable to 
    cyber attacks.
    <http://www.cnn.com/2002/TECH/industry/01/08/security.reut/index.html>
    And attacks are on the rise:
    <http://www.pcworld.com/news/article/0,aid,79303,00.asp>
    
    New virus that infects Shockwave Flash files:
    <http://www.theregister.co.uk/content/56/23594.html>
    
    Microsoft caught rigging online poll:
    <http://news.zdnet.co.uk/story/0,,t269-s2102244,00.html>
    This has interesting security implications.  On the one hand, these polls 
    are not to be trusted, and anyone who knows anything about statistics knows 
    this.  On the other hand, almost no one knows anything about statistics, 
    and companies rely on these polls to make business decisions.  This 
    incident illustrates both how little security there is in on-line polling 
    and the lengths Microsoft will go to in order to rewrite the 
    truth.  But the real moral is about PR vs. reality.  Any time that PR 
    dominates the information stream, you can't trust the information.
    
    The author of DeCSS has been indicted in Norway:
    <http://www.securityfocus.com/news/306>
    <http://news.cnet.com/news/0-1005-200-8434181.html>
    
    Announcing a Workshop on Economics and Information Security.
    <http://www.cl.cam.ac.uk/users/rja14/econws.html>
    
    Interesting research on full-disclosure and security.  This research tracks 
    all the CERT vulnerabilities of 2001 and who discovered them.  Only 22 of 
    the 200 vulnerabilities looked at were discovered by vendors.
    <http://www.software.umn.edu/~mcarney/>
    
    Interview with Ed Felten on the futility of digital copy prevention:
    <http://www.businessweek.com/bwdaily/dnflash/jan2002/nf2002019_7170.htm>
    
    Two-part article on social engineering:
    <http://www.securityfocus.com/infocus/1527>
    <http://www.securityfocus.com/infocus/1533>
    
    
    ** *** ***** ******* *********** *************
    
                   Counterpane News
    
    
    
    Network World Magazine named Bruce Schneier a Power Executive for 2002.  (I 
    promise to use my power only for good.)
    <http://www.nwfusion.com/power01/50most/executives.html>
    <http://www.counterpane.com/pr-power.html>
    
    CEO Tom Rowley was named a Technology Pioneer by the World Economic Forum:
    <http://www.counterpane.com/pr-pioneers.html>
    
    Counterpane has its first Asian reseller:
    <http://www.counterpane.com/pr-infinitum.html>
    
    Schneier is speaking about Counterpane and its monitoring services in Los 
    Angeles, Seattle, Portland, San Jose, Detroit, and DC.  If you are 
    interested in attending, please contact Robin Caputo at 
    <caputoat_private>.
    
    Schneier is speaking at the International Security Management Association 
    winter meeting in San Diego on 1/21.
    
    Schneier is speaking at the RSA Conference.  He will chair the 
    Cryptographers' Panel on 2/19, and give a solo talk on 2/20 at (ick) 8:00 AM.
    <http://www.rsaconference.com>
    
    
    ** *** ***** ******* *********** *************
    
                   Password Safe 2.0
    
    
    
    Better.  Cleaner.  Tighter.  Multi-platform support.  Open source.  And 
    it's under development.  You can watch the progress on SourceForge.  I hope 
    to have a beta, with most if not all of the high-priority fixes and 
    enhancements, available for download by the end of the month.
    
    Soon I will be looking for someone to port the code to Linux, Macintosh, 
    and PalmOS.
    
    <http://passwordsafe.sourceforge.net>
    
    
    ** *** ***** ******* *********** *************
    
            The Doghouse:  AGS Encryptions
    
    
    
    This company believes they can get the perfect security of a one-time pad, 
    but without the annoyance of long keys.
    
    How will they accomplish this magic?  They specify keys with no preset 
    length.  Interesting idea, but in practice users will almost always select 
    a short key, because long keys are too hard to manage.  However, they argue 
    that the cryptanalyst has to keep in mind the possibility that the user 
    might have used a long key.  Presumably the intention is that the 
    cryptanalyst won't be able to rule out any possible plaintext, because 
    there's always some key (no matter how unlikely) that could yield this 
    plaintext.  That's the argument, anyway.  To make things sound even more 
    secure, they get to use fancy words like "key equivocation."
    
    Pay no attention to the bogosity of this reasoning.  AGS ought to know 
    better.  There's no such thing as a free lunch, and if you want the 
    unconditional security of a one-time pad, you have to pay for it with long 
    keys.  Shannon proved this (a mathematical proof, not open for debate) 
    fifty years ago.  You'd think people would have caught on by now.
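
    For reference, Shannon's bound is short enough to state here (standard 
    notation, not anything from the AGS papers).  Perfect secrecy means the 
    ciphertext is statistically independent of the plaintext, and that 
    independence forces the key entropy to cover the message entropy:

        \[
          H(M) = H(M \mid C) \le H(M, K \mid C)
               = H(K \mid C) + H(M \mid K, C)
               = H(K \mid C) \le H(K)
        \]

    The first equality is perfect secrecy; H(M|K,C) = 0 is just the fact 
    that key plus ciphertext determine the plaintext; and H(K|C) is the 
    "key equivocation" they like to invoke.  However you dress up "no 
    preset length," the key a user actually picks has some entropy H(K), 
    and if that is less than H(M), what you have is not a one-time pad.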
    
    The AGS Web site has various hard-to-read white papers on it:
    <http://www.agsencryptions.com/>
    
    And they somehow even got an academic paper published at INDOCRYPT 
    2001.  (Don't ask me how; the program committee has some sharp people on 
    it, and the paper is an embarrassment.)
    <http://link.springer.de/link/service/series/0558/bibs/2247/22470330.htm>
    
    
    ** *** ***** ******* *********** *************
    
                 Comments from Readers
    
    
    
    From: Greg Guerin <glguerinat_private>
    Subject:  Thoughts on Software Liability
    
    The purpose of liability is to hold someone accountable.  Liability is a 
    means to an end, not an end in itself.  The end is accountability, and 
    there may be other ways to achieve it.  Liability is just one way.
    
    Liability is a tangible cost for making mistakes.  Being tangible, it can 
    also be predicted or forecast with some degree of accuracy.  Good 
    predictions of the cost rely on experience and analysis.  Good decisions 
    come from a history of good predictions.  Bad predictions broaden the 
    experience of what contributes to the cost of mistakes, and improve the 
    analysis process.  Errors in assessing liability are not just inevitable, 
    but necessary.
    
    Liability can only work when there is a specific "someone" who can be held 
    accountable.  That someone could also be a small group.  It can't be too 
    large a group, though, or liability fails.  "Holding society accountable" 
    is a nice phrase, but it's completely impractical when it comes to 
    liability.  Political action and legislation, not liability, are the means 
    to hold an industry, a government, or a society accountable.
    
    Liability also fails if there's no one that can be found and held 
    accountable, or if the consequences of being held accountable are 
    insignificant.  People you can't find or hold can't be made accountable, 
    even under criminal law.  People without tangible assets don't care if they 
    are held accountable, because they can't lose anything 
    significant.  "Nothing to lose" breaks accountability and 
    liability.  That's why tougher laws sometimes have no effect.  If you have 
    nothing to lose, then losing more of nothing is still nothing, so what's 
    the difference?
    
    Liability also requires an authority accepted and recognized by both 
    parties.  If one party or the other is not governed by the same authority, 
    or simply refuses governance, then liability is just an exercise in public 
    relations.  The harmed party can't possibly recover any damages, or even 
    affect the liable party in any way (no consequences).  So liability can't 
    work without a mutually accepted authority.  For that authority to be 
    effective, though, it needs both laws and the power of enforceable 
    action.  Without the laws, the authority is a lawless tyrant.  Without 
    enforceable actions, the authority is a paper tiger.
    
    If liability is a cost, then it can be insured against.  Insurance is a way 
    to spread risk over a group by partitioning it into small amounts that each 
    group member pays for upfront.  That's what an insurance premium is: your 
    fraction of the anticipated overall cost.  But insurance only works with a 
    large enough population, or a predictable enough event, or a small enough 
    per-event cost.  Small groups, utterly unpredictable events, or high 
    per-event costs mean no insurance.  No insurance means you take the entire 
    risk on yourself.  Risking everything is, well, risking everything.  Lose 
    and you invariably lose big.  Win and you may or may not win big.
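
    To put numbers on that fraction (made-up numbers, purely for 
    illustration): if each of 100,000 policyholders independently faces a 
    one percent annual chance of a $10,000 loss, the anticipated overall 
    cost is $10 million a year, and each member's share -- the actuarially 
    fair premium, before overhead -- is $100.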
    
    To get insurance, an insurance company wants to know how reliable something 
    is before insuring against its failure.  If reliability is unknown or 
    unknowable, then they just charge a high premium and take a gamble, hoping 
    to spread a loss to other less-risky areas.  When we talk about software 
    security, we're really talking about software reliability.  Secure software 
    is reliable.  Unreliable software is insecure.  So software liability 
    insurance is essentially software reliability insurance.  Producing 
    unreliable software is simply shifting the costs from initial development 
    into deployment.  Producing reliable software costs more in development, 
    but less in deployment and maintenance.
    
    Insurance costs are directly related to reliability.  If you show that your 
    software is reliable before you release it, then your liability exposure is 
    diminished, and so is your liability insurance cost.  If you can't show 
    reliability, or your software fails an expert analysis and assessment of 
    its predicted reliability, then your insurance costs are higher.  To lower 
    your insurance costs, you then have to demonstrate in the marketplace that 
    your software is reliable in day-to-day practice in the real world.  If it 
    works well enough and long enough, then your insurance costs go down.  If 
    it doesn't, then your costs go up.  If the costs exceed the revenues from 
    the software, you're losing money keeping the product on the market.  So 
    even if the development costs were lower for producing unreliable software, 
    you can still end up losing money if the liability costs are too high.
    
    Liability also acts as a barrier to market entry.  If you can't show that 
    your new software is reliable, or that you have a history of producing 
    reliable software, then it costs you more to enter the software 
    marketplace.  If you can't pay those entry costs, you can't release the 
    software.  It's just like states with mandatory automobile liability 
    insurance, where you can't drive without coverage.  Driving without 
    insurance is breaking the law, just as releasing software without insurance 
    would be if liability were the law.  Or you have to take on higher risks to 
    enter the market, or you have to get someone reliable (trustworthy) to 
    vouch for your software, or you get a reliable intermediary to distribute 
    your software and provide the insurance, or you limit your liability 
    exposure in other ways.
    
    Entry barriers, however, are basically filtering or censoring mechanisms, 
    and they have costs.  One cost is that they keep out anyone who can't 
    afford reliability insurance, or who isn't willing to risk everything they 
    own on a single piece of software.  That situation would pretty much kill 
    Free Software and Open Source as software providers.  If liability is the 
    law, then disclaiming liability is a lawless act.  Then you'd be risking 
    criminal penalties as well as civil ones.
    
    One way to lower the barrier is to limit liability.  For example, to no 
    more than the cost of the product.  That's not a perfect solution, however, 
    since there's no way to prohibit dangerously unreliable software that's 
    given away or sold for a tiny amount.  The problem seems insurmountable 
    until we simply accept the situation as a balance of forces: accountability 
    by liability vs. barriers to entry.  We don't have to get it perfect, just 
    reasonably well balanced.  We don't have to get it exactly balanced at the 
    outset, either, just set up a system that can evolve with time and experience.
    
    Bad software also has another pressure to contend with: reputation.  If 
    it's really bad, people will talk about it.  This is true for freely 
    available software just as much as for commercial software.  But this 
    speech can only occur if there are no constraints on the public discussion 
    of software quality.  Today, some software licenses prohibit certain kinds 
    of public discussion, such as publishing bug reports or benchmark results 
    involving the 
    licensed software.  This approach may have short-term value to the vendor, 
    but it eventually erodes the reputation of the product or vendor 
    itself.  When we can't fairly decide whether to trust something or not, we 
    call it "risky" or even "untrustworthy".  A censored discussion is an 
    untrustworthy discussion -- you don't know what's being left out.
    
    Reputations matter.  Reputation is cumulative trustworthiness, and must be 
    earned.  It can disappear much more rapidly than it was 
    acquired.  Reputations are built from trustworthy statements about you made 
    by others. Of course, the reputations of the others matter, too.  You don't 
    believe everything some random person tells you, and you don't do 
    everything some random person tells you to do.  Well, not unless you're a 
    computer on the Internet.
    
    Reputation is not an absolute, though.  If a piece of software is the only 
    game in town, then you have to live with it no matter how bad it is or how 
    awful the vendor acts.  But that, too, is a pressure, because some software 
    developer somewhere is eventually going to say "I can do better than this," 
    and then do it.  So competition is a way of making a reputation have 
    consequences.  In a sense, competition is one of the liabilities of 
    reputation.  But only if the entry barriers aren't too high.  Competition 
    between elites is too easily turned into collusion between plunderers.
    
    Laws are another pressure on reputation.  Really serious harm becomes 
    criminal at some point, so the criminal laws come into effect.  Sometimes 
    just a possibility or threat of harm is enough to make a crime, if that 
    threat is severe enough and plausible enough.  Or if a hazard to the public 
    is left in an openly dangerous state.  In some cases, the laws even 
    prescribe the actions of public officials if the owner of the hazard can't 
    be coerced into action.  For example, food and health inspections, fire 
    codes, building condemnation, and so on.
    
    In the end, liability does not stand alone.  It's just one part of a larger 
    picture, which is the overall landscape of trustworthiness, reliability, 
    meaningful information, and informed action.  Liability isn't a panacea, 
    either.  It comes with costs.  There are ways to balance the costs against 
    the benefits, but that takes a certain amount of time, experience, and 
    genuine wisdom in order to really work.  There's no simple and easy 
    answer.  If there were, we almost certainly would have thought of it by now.
    
    
    
    From: Markus Kuhn <Markus.Kuhnat_private>
    Subject: Re: National ID Cards
    
    Your essay provides a sensible and knowledgeable overview of the pitfalls 
    in the design of national identity verification infrastructures, of which 
    photo ID cards are just one aspect. In particular, it avoids the common 
    mistake of basing the entire discussion of national identity 
    infrastructures solely around the question of whether it could have 
    prevented the September 11 plane crashes, a naive and narrow-minded 
    approach taken unfortunately by numerous other commentators.
    
    Nevertheless, I'd like to add a few remarks and offer a few very different 
    conclusions:
    
    Firstly, the discussion on counterfeiting is somewhat misleading as it 
    misses an essential point. While it is true that there are many fake 
    security documents in circulation, the overwhelming majority of these are 
    quite easily spotted by a professional verifier who is properly trained for 
    the document type in question. It might be impossible to design security 
    documents, seals, or even banknotes that any untrained offline user can 
    verify reliably. On the other hand, it requires quite enormous resources to 
    produce independently a convincing replica of a state-of-the-art security 
    document that will pass the inspection of a professional verifier, such as 
    for example a police officer or (hopefully) a bank clerk. Such people would 
    of course have to attend repeated training courses to learn how to verify 
    the security features.
    
    Such courses usually train and test verification skills using 
    collections of many different types of fake documents that have to be 
    spotted by participants among genuine ones. The smaller the number of 
    different types of ID documents there are to be verified, the easier and 
    more effective such verification training will become, which is *the* 
    crucial advantage of having one single standard national ID card.
    
    In high-security environments, officers would of course have to be 
    confronted deliberately with high-end sample fake documents during routine 
    work, in order to ensure their continued alertness for dubious 
    reproductions of security features. What is indeed more difficult to spot 
    offline are genuine security documents that were wrongfully issued by the 
    regular producers of these documents, for example with the help of bribed 
    employees in the relevant agencies. But experienced officers know a large 
    number of tricks for plausibility testing of an identity claim (starting 
    with now widely known tricks such as asking for both age and 
    date-of-birth), which would of course not be obsoleted by ID cards. Also, 
    good security management techniques will have to be used to guard issuing 
    facilities, and I hope and trust very much that a nation that manages vast 
    numbers of weapons of mass destruction can handle security management at 
    that scale adequately.  If you can't even protect a national identity card 
    issuing facility properly....
    
    Visitors from Continental Europe, where national ID card systems have been 
    used very successfully and have been a part of daily life for many decades, 
    are frequently amused about the bizarre identification rituals and at the 
    same time extremely sloppy document security practice they encounter in 
    those parts of the English speaking world that are so proud of their lack 
    of ID cards. As an anecdotal example, video rental shops in Cambridge 
    didn't accept my EU passport when I applied to become a member, but a 
    utility bill or recent bank statement would do (both merely laserprinted, 
    no standardized form or security features, without any biometric, not 
    routinely carried with me, and orders of magnitude easier to fake). 
    Similarly, I found that many places in the US ask for two forms of photo 
    ID, but then accept completely random laminated pieces of plastic such as 
    photo tickets.
    
    Call it a cultural bias, but I *do* find it personally worrying that anyone 
    who stole a recent bank statement and knows my mother's maiden name and the 
    age of my account will easily be able to acquire full access to it. In 
    countries with national ID cards, any access to my account without both PIN 
    and banking card usually requires the verification of identity via a 
    national ID card. Similarly, national ID cards have to be presented to and 
    are verified by university examiners in many European countries to avoid 
    the for-hire exam sitting practice not uncommon in the US.
    
    How useful a national ID card is depends mostly on a list of which 
    activities make proper identification mandatory, because only in these 
    areas will individuals be effectively protected by the system against 
    impersonation.  This aspect is often forgotten in discussions; in the UK, 
    for example, the idea of a national ID card is still tightly coupled to 
    the unrealistic assumption, routinely quoted by its opponents, that anyone 
    walking down the street would have to carry one.
    
    There exist decades of solid experience and many detailed lessons to be 
    learned from places such as Germany or Scandinavia, where national identity 
    systems have generally been found to be a useful and reassuring security 
    infrastructure that can easily be made compatible with far more paranoid 
    data-protection expectations than those common in the US.
    
    Claims about the uselessness or cost of national identity verification 
    systems should be based first of all on the practical experience and 
    impersonation-fraud statistics of countries that have already used such 
    facilities for a long time, not on mere speculation.
    
    National ID cards obviously will not prevent suicide terrorism. This is 
    simply because it is in the nature of that particular activity that known 
    repeat offenders are rare and post-mortem identification is not even in 
    their interest. Hardly *any* imaginable security mechanism will noticeably 
    deter creative suicide terrorists and all we can hope is that their overall 
    numbers are rather limited.
    
    But an infrastructure for the convenient and reliable verification of 
    identity remains a useful and important element of fraud prevention in 
    daily activities ranging from proof-of-age to university examinations to 
    banking. A single standardized security document with anti-counterfeit 
    features superior to those of high-value banknotes will tremendously 
    increase the effectiveness of card verifier training. It will make it far 
    more difficult to use fake identities than an environment with a 
    bewildering range of identity documents, such as the many different state 
    driver licenses and banking cards used as de facto ID cards in the US 
    today, in which effective verifier training becomes a logistic nightmare.
    
    
    
    From: Malcolm Melville <malcolmat_private>
    Subject: "Judges Punish Bad Security"
    
    If only those who take measures to prevent their connected systems from 
    being used to launch attacks on others are to be allowed to connect to the 
    Internet, how is this to be policed, and how is it to be administered in a 
    manner that does not prevent legitimate users from being connected?  The 
    law seems to be a very blunt instrument -- which largely acts after the 
    event currently -- and were it used up front, the hurdles put in the way of 
    small companies or individuals could be onerous to the point of a denial of 
    freedom of movement within the netWorld.
    
    I understand where your brief article is coming from and agree with what 
    you wrote, but the implications maybe need further expansion.  For example, 
    should telcos and service providers be obliged to provide the first line of 
    defence in order to prevent their subscribers' machines from being abused, 
    rather than requiring each connected entity to take that 
    responsibility?  The relationship may then not be symmetrical, since as a 
    subscriber, I would want to put in place my own defences to protect myself 
    and my data, whereas the responsibility for preventing my machines from 
    being used for attacking others is that of the service provider.  These may 
    well be different security issues, with different costs, borne by different 
    parties.
    
    Was there not in the past a way in which Apple Macs could be used to launch 
    distributed DoS attacks without the user even being aware of it?  Whose 
    responsibility is it then to prevent -- as far as is possible -- such an 
    occurrence?
    
    Similarly, when mail relay was open by default on sendmail, whose 
    responsibility was it if the daemon was used for a mass relay of stuff 
    leading to a mail DoS?  The software supplier, the subscriber running the 
    daemon, or the service provider for not stopping the daemon from being used in 
    that way?
    
    The way your article concluded, we could end up with restrictions on who is 
    allowed to be connected simply because of a risk profile that says they may 
    not be able to correctly configure a service.  Maybe we need something like 
    a driver's license that grants you the right to use the electronic highway, 
    and can be endorsed or removed if you use the vehicle in the wrong manner?
    
    
    
    From: Alexander Gantman <agantmanat_private>
    Subject: Bad Computer Laws
    
    Have you seen the PA law on computer crime?
    <http://hercules.legis.state.pa.us/WU01/LI/BI/ALL/1999/0/SB1077.HTM>
    
    My favorite part is the definition of "computer":
    
    "Computer.  An electronic, magnetic, optical, hydraulic, organic or other 
    high speed data processing device or system which performs logic, 
    arithmetic or memory functions and includes all input, output, processing, 
    storage, software or communication facilities which are connected or 
    related to the device in a system or network."
    
    Hmm, an organic system which performs logic or memory functions....  It 
    seems obvious that animals are computers, but I wonder if plants are as 
    well.  Can one claim that plants are performing memory functions by storing 
    information in their DNA?
    
    
    
    From: Joergen Ramskov <joergenat_private>
    Subject: Windows Update
    
    I think it's funny how Windows Update forces you to lower your 
    security.  If you run IE6 with high security, you get this message:
    
    "To view and download updates for your computer, your Internet Explorer 
    security settings must meet the following requirements:
    - Security must be set to medium or lower
    - Active scripting must be set to enabled
    - The download and initialization of ActiveX Controls must be set to enabled"
    
    WindowsUpdate needs to download several ActiveX controls.  To keep your PC 
    updated with the latest security fixes, you have no choice but to add 
    Microsoft to the Trusted Sites zone -- nice.
    
    
    ** *** ***** ******* *********** *************
    
    
    CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
    insights, and commentaries on computer security and cryptography.  Back 
    issues are available on <http://www.counterpane.com/crypto-gram.html>.
    
    To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a 
    blank message to crypto-gram-subscribeat_private  To unsubscribe, 
    visit <http://www.counterpane.com/unsubform.html>.
    
    Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
    find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
    it is reprinted in its entirety.
    
    CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
    Counterpane Internet Security Inc., the author of "Secrets and Lies" and 
    "Applied Cryptography," and an inventor of the Blowfish, Twofish, and 
    Yarrow algorithms.  He is a member of the Advisory Board of the Electronic 
    Privacy Information Center (EPIC).  He is a frequent writer and lecturer on 
    computer security and cryptography.
    
    Counterpane Internet Security, Inc. is the world leader in Managed Security 
    Monitoring.  Counterpane's expert security analysts protect networks for 
    Fortune 1000 companies world-wide.
    
    <http://www.counterpane.com/>
    
    Copyright (c) 2002 by Counterpane Internet Security, Inc.
    
    
    
    


