[RRE]special issue of Crypto-Gram

From: Phil Agre (pagreat_private)
Date: Mon Oct 01 2001 - 17:28:56 PDT

    [Reformatted to 70 columns.]
    
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    This message was forwarded through the Red Rock Eater News Service (RRE).
    You are welcome to send the message along to others but please do not use
    the "redirect" option.  For information about RRE, including instructions
    for (un)subscribing, see http://dlis.gseis.ucla.edu/people/pagre/rre.html
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    
    Date: Sun, 30 Sep 2001
    From: Bruce Schneier <schneierat_private>
    
    http://www.counterpane.com/crypto-gram-0109a.html
    
    
    
                     CRYPTO-GRAM
    
                  September 30, 2001
    
                  by Bruce Schneier
                   Founder and CTO
          Counterpane Internet Security, Inc.
               schneierat_private
             <http://www.counterpane.com>
    
    
    A free monthly newsletter providing summaries, analyses, insights, and
    commentaries on computer and network security.
    
    Back issues are available at <http://www.counterpane.com/crypto-gram.html>.
    To subscribe, visit <http://www.counterpane.com/crypto-gram.html>
    or send a blank message to crypto-gram-subscribeat_private
    
    Copyright (c) 2001 by Counterpane Internet Security, Inc.
    
    
    ** *** ***** ******* *********** *************
    
    This is a special issue of Crypto-Gram, devoted to the September 11
    terrorist attacks and their aftermath.
    
    Please distribute this issue widely.
    
    In this issue:
         The Attacks
         Airline Security Regulations
         Biometrics in Airports
         Diagnosing Intelligence Failures
         Regulating Cryptography
         Terrorists and Steganography
         News
         Protecting Privacy and Liberty
         How to Help
    
    
    ** *** ***** ******* *********** *************
    
                    The Attacks
    
    
    
    Watching the television on September 11, my primary reaction was
    amazement.
    
    The attacks were amazing in their diabolicalness and audacity: to
    hijack fuel-laden commercial airliners and fly them into buildings,
    killing thousands of innocent civilians.  We'll probably never know if
    the attackers realized that the heat from the jet fuel would melt the
    steel supports and collapse the World Trade Center.  It seems probable
    that they placed advantageous trades on the world's stock markets just
    before the attack.  No one planned for an attack like this.  We like
    to think that human beings don't make plans like this.
    
    I was impressed when al-Qaeda simultaneously bombed two American
    embassies in Africa.  I was more impressed when they blew a 40-foot
    hole in an American warship.  This attack makes those look like minor
    operations.
    
    The attacks were amazing in their complexity.  Estimates are that
    the plan required about 50 people, at least 19 of them willing to die.
    It required training.  It required logistical support.  It required
    coordination.  The sheer scope of the attack seems beyond the
    capability of a terrorist organization.
    
    The attacks rewrote the hijacking rule book.  Responses to hijackings
    are built around this premise: get the plane on the ground so
    negotiations can begin.  That's obsolete now.
    
    They rewrote the terrorism book, too.  Al-Qaeda invented a new
    type of attacker.  Historically, suicide bombers are young, single,
    fanatical, and have nothing to lose.  These people were older and more
    experienced.  They had marketable job skills.  They lived in the U.S.:
    watched television, ate fast food, drank in bars.  One left a wife and
    four children.
    
    It was also a new type of attack.  One of the most difficult things
    about a terrorist operation is getting away.  This attack neatly
    solved that problem.  It also solved the technological problem.
    The United States spends billions of dollars on remote-controlled
    precision-guided munitions; al-Qaeda just finds morons willing to fly
    planes into skyscrapers.
    
    Finally, the attacks were amazing in their success.  They weren't
    perfect.  We know that 100% of the attempted hijackings were
    successful, and 75% of the hijacked planes successfully hit their
    targets.  We don't know how many planned hijackings were aborted for
    one reason or another.  What's most amazing is that the plan wasn't
    leaked.  No one successfully defected.  No one slipped up and gave the
    plan away.  Al-Qaeda had assets in the U.S. for months, and managed to
    keep the plan secret.  Often law enforcement has been lucky here; in
    this case we weren't.
    
    Rarely do you see an attack that changes the world's conception of
    attack, as these terrorist attacks changed the world's conception of
    what a terrorist attack can do.  Nothing they did was novel, yet the
    attack was completely new.  And our conception of defense must change
    as well.
    
    
    ** *** ***** ******* *********** *************
    
           Airline Security Regulations
    
    
    
    Computer security experts have a lot of expertise that can be applied
    to the real world.  First and foremost, we have well-developed senses
    of what security looks like.  We can tell the difference between real
    security and snake oil.  And the new airport security rules, put in
    place after September 11, look and smell a whole lot like snake oil.
    
    All the warning signs are there: new and unproven security measures,
    no real threat analysis, unsubstantiated security claims.  The ban on
    cutting instruments is a perfect example.  It's a knee-jerk reaction:
    the terrorists used small knives and box cutters, so we must ban
    them.  And nail clippers, nail files, cigarette lighters, scissors
    (even small ones), tweezers, etc.  But why isn't anyone asking the real
    questions: what is the threat, and how does turning an airplane into a
    kindergarten classroom reduce the threat?  If the threat is hijacking,
    then the countermeasure doesn't protect against the myriad ways
    people can subdue the pilot and crew.  Hasn't anyone heard of karate?
    Or broken bottles?  Think about hiding small blades inside luggage.
    Or composite knives that don't show up on metal detectors.
    
    Parked cars now must be 300 feet from airport gates.  Why?  What
    security problem does this solve?  Why doesn't the same problem imply
    that passenger drop-off and pick-up should also be that far away?
    Curbside check-in has been eliminated.  What's the threat that this
    security measure has solved?  Why, if the new threat is hijacking, are
    we suddenly worried about bombs?
    
    The rule limiting concourse access to ticketed passengers is another
    one that confuses me.  What exactly is the threat here?  Hijackers
    have to be on the planes they're trying to hijack to carry out
    their attack, so they have to have tickets.  And anyone can call
    Priceline.com and "name their own price" for concourse access.
    
    Increased inspections -- of luggage, airplanes, airports -- seem
    like a good idea, although they're far from perfect.  The biggest problem
    here is that the inspectors are poorly paid and, for the most part,
    poorly educated and trained.  Other problems include the myriad ways
    to bypass the checkpoints -- numerous studies have found all sorts
    of violations -- and the impossibility of effectively inspecting
    everybody while maintaining the required throughput.  Unidentified
    armed guards on select flights are another mildly effective idea: they're
    a small deterrent, because you never know if one is on the flight you
    want to hijack.
    
    Positive bag matching -- ensuring that a piece of luggage does not get
    loaded on the plane unless its owner boards the plane -- is actually a
    good security measure, but assumes that bombers have self-preservation
    as a guiding force.  It is completely useless against suicide bombers.
    
    The worst security measure of them all is the photo ID requirement.
    This solves no security problem I can think of.  It doesn't even
    identify people; any high school student can tell you how to get a
    fake ID.  The requirement for this invasive and ineffective security
    measure is secret; the FAA won't send you the written regulations if
    you ask.  Airlines are actually more stringent about this than the FAA
    requires, because the "security" measure solves a business problem for
    them.
    
    The real point of photo ID requirements is to prevent people from
    reselling tickets.  Nonrefundable tickets used to be regularly
    advertised in the newspaper classifieds.  Ads would read something
    like "Round trip, Boston to Chicago, 11/22 - 11/30, female, $50."
    Since the airlines didn't check ID but could notice gender, any
    female could buy the ticket and fly the route.  Now this doesn't work.
    The airlines love this; they solved a problem of theirs, and got to
    blame the solution on FAA security requirements.
    
    Airline security measures are primarily designed to give the
    appearance of good security rather than the actuality.  This makes
    sense, once you realize that the airlines' goal isn't so much to make
    the planes hard to hijack, as to make the passengers willing to fly.
    Of course airlines would prefer it if all their flights were perfectly
    safe, but actual hijackings and bombings are rare events and they know
    it.
    
    This is not to say that all airport security is useless, and that we'd
    be better off doing nothing.  All security measures have benefits, and
    all have costs: money, inconvenience, etc.  I would like to see some
    rational analysis of the costs and benefits, so we can get the most
    security for the resources we have.
    
    One basic snake-oil warning sign is the use of self-invented
    security measures, instead of expert-analyzed and time-tested ones.
    The closest the airlines have to experienced and expert analysis is
    El Al.  Since 1948 they have been operating in and out of the most
    terrorism-prone areas of the planet, with phenomenal success.
    They implement some pretty heavy security measures.  One thing they
    do is have reinforced, locked doors between their airplanes' cockpit
    and the passenger section.  (Notice that this security measure is
    1) expensive, and 2) not immediately perceptible to the passenger.)
    Another thing they do is place all cargo in decompression chambers
    before takeoff, to trigger bombs set to sense altitude.  (Again, this
    is 1) expensive, and 2) imperceptible, so unattractive to American
    airlines.)  Some of the things El Al does are so intrusive as to be
    unconstitutional in the U.S., but they let you take your pocketknife
    on board with you.
    
    Airline security:
    <http://www.time.com/time/covers/1101010924/bsecurity.html>
    <http://www.accessatlanta.com/ajc/terrorism/atlanta/0925gun.html>
    
    FAA on new security rules:
    <http://www.faa.gov/apa/faq/pr_faq.htm>
    
    A report on the rules' effectiveness:
    <http://www.boston.com/dailyglobe2/266/nation/Passengers_say_banned_items_have_eluded_airport_monitors+.shtml>
    
    El Al's security measures:
    <http://news.excite.com/news/ap/010912/18/israel-safe-aviation>
    <http://news.excite.com/news/r/010914/07/international-attack-israel-elal-dc>
    
    More thoughts on this topic:
    <http://slate.msn.com/HeyWait/01-09-17/HeyWait.asp>
    <http://www.tnr.com/100101/easterbrook100101.html>
    <http://www.tisc2001.com/newsletters/317.html>
    
    Two secret FAA documents on photo ID requirement, in text and GIF:
    <http://www.cs.berkeley.edu/~daw/faa/guid/guid.txt>
    <http://www.cs.berkeley.edu/~daw/faa/guid/guid.html>
    <http://www.cs.berkeley.edu/~daw/faa/id/id.txt>
    <http://www.cs.berkeley.edu/~daw/faa/id/id.html>
    
    Passenger profiling:
    <http://www.latimes.com/news/nationworld/nation/la-091501profile.story>
    
    A CATO Institute report: "The Cost of Antiterrorist Rhetoric," written well before September 11:
    <http://www.cato.org/pubs/regulation/reg19n4e.html>
    
    I don't know if this is a good idea, but at least someone is thinking
    about the problem:
    <http://www.zdnet.com/anchordesk/stories/story/0,10738,2812283,00.html>
    
    
    ** *** ***** ******* *********** *************
    
                Biometrics in Airports
    
    
    
    You have to admit, it sounds like a good idea.  Put cameras throughout
    airports and other public congregation areas, and have automatic
    face-recognition software continuously scan the crowd for suspected
    terrorists.  When the software finds one, it alerts the authorities,
    who swoop down and arrest the bastards.  Voila, we're safe once again.
    
    Reality is a lot more complicated; it always is.  Biometrics is
    an effective authentication tool, and I've written about it before.
    There are three basic kinds of authentication: something you know
    (password, PIN code, secret handshake), something you have (door
    key, physical ticket into a concert, signet ring), and something
    you are (biometrics).  Good security uses at least two different
    authentication types: an ATM card and a PIN code, computer access
    using both a password and a fingerprint reader, a security badge
    that includes a picture that a guard looks at.  Implemented properly,
    biometrics can be an effective part of an access control system.
    
    I think it would be a great addition to airport security: identifying
    airline and airport personnel such as pilots, maintenance workers,
    etc.  That's a problem biometrics can help solve.  Using biometrics to
    pick terrorists out of crowds is a different kettle of fish.
    
    In the first case (employee identification), the biometric system has
    a straightforward problem: does this biometric belong to the person
    it claims to belong to?  In the latter case (picking terrorists out
    of crowds), the system needs to solve a much harder problem: does
    this biometric belong to anyone in this large database of people?
    This one-to-many matching is much more complex, and it produces far
    more identification failures.
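
    The scaling problem can be sketched in a few lines of Python.  Assume
    each one-to-one comparison falsely matches with some small probability
    (the rate below is illustrative, not a figure from any real system):

```python
# Sketch: why one-to-many identification is harder than one-to-one
# verification.  Assume each single comparison wrongly matches with
# probability p (an illustrative figure, not from the text).
p = 0.0001            # per-comparison false-match rate

def false_match_prob(db_size: int) -> float:
    """Chance an innocent person matches *someone* in the database."""
    return 1 - (1 - p) ** db_size

print(false_match_prob(1))       # verification: one reference template
print(false_match_prob(10_000))  # identification against a watch list
```

    With one template the false-match chance stays at 0.01 percent; against
    a 10,000-entry database it climbs to roughly 63 percent, even though
    each individual comparison is unchanged.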
    
    Setting up the system is different for the two applications.  In the
    first case, you can unambiguously know the reference biometric belongs
    to the correct person.  In the latter case, you need to continually
    worry about the integrity of the biometric database.  What happens if
    someone is wrongfully included in the database?  What kind of right of
    appeal does he have?
    
    Getting reference biometrics is different, too.  In the first
    case, you can initialize the system with a known, good biometric.
    If the biometric is face recognition, you can take good pictures
    of new employees when they are hired and enter them into the system.
    Terrorists are unlikely to pose for photo shoots.  You might have a
    grainy picture of a terrorist, taken five years ago from 1000 yards
    away when he had a beard.  Not nearly as useful.
    
    But even if all these technical problems were magically solved, it's
    still very difficult to make this kind of system work.  The hardest
    problem is the false alarms.  To explain why, I'm going to have to
    digress into statistics and explain the base rate fallacy.
    
    Suppose this magically effective face-recognition software is 99.99
    percent accurate.  That is, if someone is a terrorist, there is a
    99.99 percent chance that the software indicates "terrorist," and if
    someone is not a terrorist, there is a 99.99 percent chance that the
    software indicates "non-terrorist."  Assume that one in ten million
    flyers, on average, is a terrorist.  Is the software any good?
    
    No.  The software will generate 1000 false alarms for every one real
    terrorist.  And every false alarm still means that all the security
    people go through all of their security procedures.  Because the
    population of non-terrorists is so much larger than the number of
    terrorists, the test is useless.  This result is counterintuitive
    and surprising, but it is correct.  The false alarms in this kind
    of system render it mostly useless.  It's "The Boy Who Cried Wolf"
    increased 1000-fold.
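
    The arithmetic above can be checked directly.  Here is the same
    calculation in Python, using the hypothetical figures from the text
    (99.99 percent accuracy, one terrorist per ten million flyers):

```python
# Base-rate sketch: a 99.99%-accurate face scanner screening
# 10 million flyers, of whom exactly 1 is a terrorist.
flyers = 10_000_000
terrorists = 1
accuracy = 0.9999        # both true-positive and true-negative rate

# Innocents wrongly flagged, and terrorists correctly flagged:
false_alarms = (flyers - terrorists) * (1 - accuracy)
true_alarms = terrorists * accuracy

print(round(false_alarms))                        # ~1000 false alarms
print(true_alarms / (true_alarms + false_alarms)) # chance an alarm is real
```

    About 1000 false alarms per real terrorist, so an alarm is genuine
    only about 0.1 percent of the time.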
    
    I say mostly useless, because it would have some positive effect.
    Once in a while, the system would correctly finger a frequent-flyer
    terrorist.  But it's a system that has enormous costs: money to
    install, manpower to run, inconvenience to the millions of people
    incorrectly identified, successful lawsuits by some of those people,
    and a continued erosion of our civil liberties.  And all the false
    alarms will inevitably lead those managing the system to distrust
    its results, leading to sloppiness and potentially costly mistakes.
    Ubiquitous harvesting of biometrics might sound like a good idea,
    but I just don't think it's worth it.
    
    Phil Agre on face-recognition biometrics:
    <http://dlis.gseis.ucla.edu/people/pagre/bar-code.html>
    
    My original essay on biometrics:
    <http://www.counterpane.com/crypto-gram-9808.html#biometrics>
    
    Face recognition useless in airports:
    <http://www.theregister.co.uk/content/4/21916.html>
    According to a DARPA study, to detect 90 per cent of terrorists we'd
    need to raise an alarm for one in every three people passing through
    the airport.
    
    A company that is pushing this idea:
    <http://www.theregister.co.uk/content/6/21882.html>
    
    A version of this article was published here:
    <http://www.extremetech.com/article/0,3396,s%253D1024%2526a%253D15070,00.asp>
    
    
    ** *** ***** ******* *********** *************
    
           Diagnosing Intelligence Failures
    
    
    
    It's clear that U.S. intelligence failed to provide adequate warning
    of the September 11 terrorist attacks, and that the FBI failed to
    prevent the attacks.  It's also clear that there were all sorts of
    indications that the attacks were going to happen, and that there were
    all sorts of things that we could have noticed but didn't.  Some have
    claimed that this was a massive intelligence failure, and that we
    should have known about and prevented the attacks.  I am not convinced.
    
    There's a world of difference between intelligence data and
    intelligence information.  In what I am sure is the mother of all
    investigations, the CIA, NSA, and FBI have uncovered all sorts of
    data from their files, data that clearly indicates that an attack
    was being planned.  Maybe it even clearly indicates the nature
    of the attack, or the date.  I'm sure lots of information is there,
    in files, intercepts, computer memory.
    
    Armed with the clarity of hindsight, it's easy to look at all the data
    and point to what's important and relevant.  It's even easy to take
    all that important and relevant data and turn it into information.
    And it's real easy to take that information and construct a picture of
    what's going on.
    
    It's a lot harder to do before the fact.  Most data is irrelevant,
    and most leads are false ones.  How does anyone know which is the
    important one, that effort should be spent on this specific threat and
    not the thousands of others?
    
    So much data is collected -- the NSA sucks up an almost unimaginable
    quantity of electronic communications, the FBI gets innumerable
    leads and tips, and our allies pass all sorts of information to us --
    that we can't possibly analyze it all.  Imagine terrorists are hiding
    plans for attacks in the text of books in a large university library;
    you have no idea how many plans there are or where they are, and the
    library expands faster than you can possibly read it.  Deciding what
    to look at is an impossible task, so a lot of good intelligence goes
    unlearned.
    
    We also don't have any context to judge the intelligence effort.
    How many terrorist attempts have been thwarted in the past year?
    How many groups are being tracked?  If the CIA, NSA, and FBI succeed,
    no one ever knows.  It's only in failure that they get any recognition.
    
    And it was a failure.  Over the past couple of decades, the U.S. has
    relied more and more on high-tech electronic eavesdropping (SIGINT
    and COMINT) and less and less on old fashioned human intelligence
    (HUMINT).  This only makes the analysis problem worse: too much
    data to look at, and not enough real-world context.  Look at the
    intelligence failures of the past few years: failing to predict
    India's nuclear test, or the attack on the USS Cole, or the bombing of
    the two American embassies in Africa; concentrating on Wen Ho Lee to
    the exclusion of the real spies, like Robert Hanssen.
    
    But whatever the reason, we failed to prevent this terrorist attack.
    In the post mortem, I'm sure there will be changes in the way we
    collect and (most importantly) analyze anti-terrorist data.  But
    calling this a massive intelligence failure is a disservice to those
    who are working to keep our country secure.
    
    Intelligence failure is an overreliance on eavesdropping and not
    enough on human intelligence:
    <http://www.sunspot.net/bal-te.intelligence13sep13.story>
    <http://www.newscientist.com/news/news.jsp?id=ns99991297>
    
    Another view:
    <http://www.wired.com/news/politics/0,1283,46746,00.html>
    
    Too much electronic eavesdropping only makes things harder:
    <http://www.wired.com/news/business/0,1367,46817,00.html>
    
    Israel alerted the U.S. about attacks:
    <http://www.latimes.com/news/nationworld/nation/la-092001probe.story>
    Mostly retracted:
    <http://www.latimes.com/news/nationworld/nation/la-092101mossad.story>
    
    
    ** *** ***** ******* *********** *************
    
                Regulating Cryptography
    
    
    
    In the wake of the devastating attacks on New York's World Trade
    Center and the Pentagon, Senator Judd Gregg and other high-ranking
    government officials quickly seized on the opportunity to resurrect
    limits on strong encryption and key escrow systems that ensure
    government access to encrypted messages.
    
    I think this is a bad move.  It will do little to thwart terrorist
    activities, while at the same time significantly reducing the
    security of our own critical infrastructure.  We've been through
    these arguments before, but legislators seem to have short memories.
    Here's why trying to limit cryptography is bad for Internet security.
    
    One, you can't limit the spread of cryptography.  Cryptography is
    mathematics, and you can't ban mathematics.  All you can ban is a
    set of products that use that mathematics, but that is something
    quite different.  Years ago, during the cryptography debates, an
    international crypto survey was completed; it listed almost a thousand
    products with strong cryptography from over a hundred countries.
    You might be able to control cryptography products in a handful of
    industrial countries, but that won't prevent criminals from importing
    them.  You'd have to ban them in every country, and even then it won't
    be enough.  Any terrorist organization with a modicum of skill can
    write its own cryptography software.  And besides, what terrorist is
    going to pay attention to a legal ban?
    
    Two, any controls on the spread of cryptography hurt more than they
    help.  Cryptography is one of the best security tools we have to
    protect our electronic world from harm: eavesdropping, unauthorized
    access, meddling, denial of service.  Sure, by controlling the spread
    of cryptography you might be able to prevent some terrorist groups
    from using cryptography, but you'll also prevent bankers, hospitals,
    and air-traffic controllers from using it.  (And, remember, the
    terrorists can always get the stuff elsewhere: see my first point.)
    We've got a lot of electronic infrastructure to protect, and we need
    all the cryptography we can get our hands on.  If anything, we need to
    make strong cryptography more prevalent if companies continue to put
    our planet's critical infrastructure online.
    
    Three, key escrow doesn't work.  Short refresher: this is the notion
    that companies should be forced to implement back doors in crypto
    products such that law enforcement, and only law enforcement, can
    peek in and eavesdrop on encrypted messages.  Terrorists and criminals
    won't use it.  (Again, see my first point.)
    
    Key escrow also makes it harder for the good guys to secure the
    important stuff.  All key-escrow systems require the existence of
    a highly sensitive and highly available secret key or collection
    of keys that must be maintained in a secure manner over an extended
    time period.  These systems must make decryption information quickly
    accessible to law enforcement agencies without notice to the key
    owners.  Does anyone really think that we can build this kind
    of system securely?  It would be a security engineering task of
    unbelievable magnitude, and I don't think we have a prayer of getting
    it right.  We can't build a secure operating system, let alone a
    secure computer and secure network.
    
    Stockpiling keys in one place is a huge risk just waiting for
    attack or abuse.  Whose digital security do you trust absolutely and
    without question, to protect every major secret of the nation?  Which
    operating system would you use?  Which firewall?  Which applications?
    As attractive as it may sound, building a workable key-escrow system
    is beyond the current capabilities of computer engineering.
    
    Years ago, a group of colleagues and I wrote a paper outlining why
    key escrow is a bad idea.  The arguments in the paper still stand, and
    I urge everyone to read it.  It's not a particularly technical paper,
    but it lays out all the problems with building a secure, effective,
    scalable key-escrow infrastructure.
    
    The events of September 11 have convinced a lot of people that we
    live in dangerous times, and that we need more security than ever
    before.  They're right; security has been dangerously lax in many
    areas of our society, including cyberspace.  As more and more of our
    nation's critical infrastructure goes digital, we need to recognize
    cryptography as part of the solution and not as part of the problem.
    
    My old "Risks of Key Recovery" paper:
    <http://www.counterpane.com/key-escrow.html>
    
    Articles on this topic:
    <http://cgi.zdnet.com/slink?140437:8469234>
    <http://www.wired.com/news/politics/0,1283,46816,00.html>
    <http://www.pcworld.com/news/article/0,aid,62267,00.asp>
    <http://www.newscientist.com/news/news.jsp?id=ns99991309>
    <http://www.zdnet.com/zdnn/stories/news/0,4586,2814833,00.html>
    
    Al-Qaeda did not use encryption to plan these attacks:
    <http://dailynews.yahoo.com/h/nm/20010918/ts/attack_investigation_dc_23.html>
    
    Poll indicates that 72 percent of Americans believe that anti-
    encryption laws would be "somewhat" or "very" helpful in preventing
    a repeat of last week's terrorist attacks on New York's World Trade
    Center and the Pentagon in Washington, D.C.  No indication of what
    percentage actually understood the question.
    <http://news.cnet.com/news/0-1005-200-7215723.html?tag=mn_hd>
    
    
    ** *** ***** ******* *********** *************
    
            Terrorists and Steganography
    
    
    
    Guess what?  Al-Qaeda may use steganography.  According to nameless
    "U.S. officials and experts" and "U.S. and foreign officials,"
    terrorist groups are "hiding maps and photographs of terrorist targets
    and posting instructions for terrorist activities on sports chat
    rooms, pornographic bulletin boards and other Web sites."
    
    I've written about steganography in the past, and I don't want to
    spend much time retracing old ground.  Simply, steganography is the
    science of hiding messages in messages.  Typically, a message (either
    plaintext or, more cleverly, ciphertext) is encoded as tiny changes to
    the color of the pixels of a digital photograph.  Or in imperceptible
    noise in an audio file.  To the uninitiated observer, it's just a
    picture.  But to the sender and receiver, there's a message hiding in
    there.
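
    As a rough illustration of the pixel trick described above, here is a
    minimal least-significant-bit scheme in Python.  It operates on a bare
    list of byte values standing in for pixel data rather than a real image
    format, and real tools would encrypt the message first:

```python
# Minimal sketch of least-significant-bit steganography: hide a short
# message in the low-order bits of "pixel" byte values.

def embed(pixels: list[int], message: bytes) -> list[int]:
    # Flatten the message into bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(pixels: list[int], length: int) -> bytes:
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = list(range(64))       # stand-in for image pixel data
stego = embed(cover, b"attack")
print(extract(stego, 6))      # b'attack'
```

    Each pixel value changes by at most one, which is why the result is
    imperceptible to a casual observer.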
    
    It doesn't surprise me that terrorists are using this trick.  The very
    aspects of steganography that make it unsuitable for normal corporate
    use make it ideally suited for terrorist use.  Most importantly, it
    can be used in an electronic dead drop.
    
    If you read the FBI affidavit against Robert Hanssen, you learn how
    Hanssen communicated with his Russian handlers.  They never met, but
    would leave messages, money, and documents for one another in plastic
    bags under a bridge.  Hanssen's handler would leave a signal in a
    public place -- a chalk mark on a mailbox -- to indicate a waiting
    package.  Hanssen would later collect the package.
    
    That's a dead drop.  It has many advantages over a face-to-face
    meeting.  One, the two parties are never seen together.  Two, the
    two parties don't have to coordinate a rendezvous.  Three, and most
    importantly, one party doesn't even have to know who the other one is
    (a definite advantage if one of them is arrested).  Dead drops can be
    used to facilitate completely anonymous, asynchronous communications.
    
    Using steganography to embed a message in a pornographic image and
    posting it to a Usenet newsgroup is the cyberspace equivalent of
    a dead drop.  To everyone else, it's just a picture.  But to the
    receiver, there's a message in there waiting to be extracted.
    
    To make it work in practice, the terrorists would need to set up some
    sort of code.  Just as Hanssen knew to collect his package when he
    saw the chalk mark, a virtual terrorist will need to know to look for
    his message. (He can't be expected to search every picture.)  There
    are lots of ways to communicate a signal: timestamp on the message,
    an uncommon word in the subject line, etc.  Use your imagination here;
    the possibilities are limitless.
    
    The effect is that the sender can transmit a message without ever
    communicating directly with the receiver.  There is no e-mail between
    them, no remote logins, no instant messages.  All that exists is
    a picture posted to a public forum, and then downloaded by anyone
    sufficiently enticed by the subject line (both third parties and the
    intended receiver of the secret message).
    
    So, what's a counter-espionage agency to do?  There are the standard
    ways of finding steganographic messages, most of which involve looking
    for changes in traffic patterns.  If Bin Laden is using pornographic
    images to embed his secret messages, it is unlikely these pictures
    are being taken in Afghanistan.  They're probably downloaded from
    the Web.  If the NSA can keep a database of images (wouldn't that
    be something?), then they can find ones with subtle changes in the
    low-order bits.  If Bin Laden uses the same image to transmit multiple
    messages, the NSA could notice that.  Otherwise, there's probably
    nothing the NSA can do.  Dead drops, both real and virtual, can't be
    prevented.
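
    The comparison itself is trivial once you have a reference copy.
    As an illustrative sketch -- the threshold is my invention, and no
    such NSA system is public -- count the low-order bits that differ
    between the database copy and the suspect copy:

```python
# Compare a suspect image against a known-pristine reference copy,
# looking only at the low-order bit of each pixel byte.

def lsb_differences(original: bytes, suspect: bytes) -> int:
    """Count pixel bytes whose low-order bit differs between copies."""
    if len(original) != len(suspect):
        raise ValueError("images differ in size; compare like with like")
    return sum((a ^ b) & 1 for a, b in zip(original, suspect))

def looks_tampered(original: bytes, suspect: bytes,
                   threshold: float = 0.01) -> bool:
    """Flag the suspect copy if more than 'threshold' of low bits changed."""
    diffs = lsb_differences(original, suspect)
    return diffs / max(len(original), 1) > threshold
```

    The catch, of course, is getting the reference copy in the first
    place; without one, analysts are reduced to statistical tests on
    the suspect image alone.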
    
    Why can't businesses use this?  The primary reason is that legitimate
    businesses don't need dead drops.  I remember hearing one company
    describe embedding a steganographic message to its salespeople in
    a photo on the corporate Web page.  Why not just send
    an encrypted e-mail?  Because someone might notice the e-mail and know
    that the salespeople all got an encrypted message.  So send a message
    every day: a real message when you need to, and a dummy message
    otherwise.  This is a traffic analysis problem, and there are other
    techniques to solve it.  Steganography just doesn't apply here.
    
    Steganography is a good way for terrorist cells to communicate,
    allowing communication without either party knowing the identity
    of the other.
    There are other ways to build a dead drop in cyberspace.  A spy can
    sign up for a free, anonymous e-mail account, for example.  Bin Laden
    probably uses those too.
    
    News articles:
    <http://www.wired.com/news/print/0,1294,41658,00.html>
    <http://www.usatoday.com/life/cyber/tech/2001-02-05-binladen.htm>
    <http://www.sfgate.com/cgi-bin/article.cgi?file=/gate/archive/2001/09/20/sigintell.DTL>
    <http://www.cnn.com/2001/US/09/20/inv.terrorist.search/>
    <http://www.washingtonpost.com/wp-dyn/articles/A52687-2001Sep18.html>
    
    My old essay on steganography:
    <http://www.counterpane.com/crypto-gram-9810.html#steganography>
    
    Study claims no steganography on eBay:
    <http://www.theregister.co.uk/content/4/21829.html>
    
    Detecting steganography on the Internet:
    <http://www.citi.umich.edu/techreports/reports/citi-tr-01-11.pdf>
    
    A version of this essay appeared on ZDnet:
    <http://www.zdnet.com/zdnn/stories/comment/0,5859,2814256,00.html>
    <http://www.msnbc.com/news/633709.asp?0dm=B12MT>
    
    
    ** *** ***** ******* *********** *************
    
                         News
    
    
    
    I am not opposed to using force against the terrorists.  I am not
    opposed to going to war -- for retribution, deterrence, and the
    restoration of the social contract -- assuming a suitable enemy can
    be identified.  Occasionally, peace is something you have to fight for.
    But I think the use of force is far more complicated than most people
    realize.  Our actions are important; messing this up will only make
    things worse.
    
    Written before September 11: A former CIA operative explains why the
    terrorist Usama bin Laden has little to fear from American intelligence.
    <http://www.theatlantic.com/issues/2001/07/gerecht.htm>
    And a Russian soldier discusses why war in Afghanistan will be a nightmare.
    <http://www.latimes.com/news/printedition/asection/la-000075191sep19.story>
    A British soldier explains the same:
    <http://www.sunday-times.co.uk/news/pages/sti/2001/09/23/stiusausa02023.html?>
    Lessons from Britain on fighting terrorism:
    <http://www.salon.com/news/feature/2001/09/19/fighting_terror/index.html>
    1998 Esquire interview with Bin Laden:
    <http://www.esquire.com/features/articles/2001/010913_mfe_binladen_1.html>
    
    Phil Agre's comments on these issues:
    <http://commons.somewhere.com/rre/2001/RRE.War.in.a.World.Witho.html>
    <http://commons.somewhere.com/rre/2001/RRE.Imagining.the.Next.W.html>
    
    Why technology can't save us:
    <http://www.osopinion.com/perl/story/13535.html>
    
    Hacktivism exacts revenge for terrorist attacks:
    <http://news.cnet.com/news/0-1003-201-7214703-0.html?tag=owv>
    FBI reminds everyone that it's illegal:
    <http://www.nipc.gov/warnings/advisories/2001/01-020.htm>
    <http://www.ananova.com/news/story/sm_400565.html?menu=>
    
    Hackers face life imprisonment under anti-terrorism act:
    <http://www.securityfocus.com/news/257>
    Especially scary are the "advice or assistance" components.  A
    security consultant could face life imprisonment, without parole, if
    he discovered and publicized a security hole that was later exploited
    by someone else.  After all, without his "advice" about what the hole
    was, the attacker never would have accomplished his hack.
    
    Companies fear cyberterrorism:
    <http://cgi.zdnet.com/slink?140433:8469234>
    <http://computerworld.com/nlt/1%2C3590%2CNAV65-663_STO63965_NLTSEC%2C00.html>
    They're investing in security:
    <http://www.washtech.com/news/software/12514-1.html>
    <http://www.theregister.co.uk/content/55/21814.html>
    
    Upgrading government computers to fight terrorism:
    <http://www.zdnet.com/zdnn/stories/news/0,4586,5096868,00.html>
    
    Risks of cyberterrorism attacks against our electronic infrastructure:
    <http://www.businessweek.com/bwdaily/dnflash/sep2001/nf20010918_8931.htm?&_ref=1732900718>
    <http://cgi.zdnet.com/slink?143569:8469234>
    
    Now the complaint is that Bin Laden is NOT using high-tech
    communications:
    <http://www.theregister.co.uk/content/57/21790.html>
    
    Larry Ellison is willing to give away the software to implement a
    national ID card.
    <http://www.siliconvalley.com/docs/news/svfront/ellsn092301.htm>
    Security problems include: inaccurate information, insiders issuing
    fake cards (this happens with state drivers' licenses), vulnerability
    of the large database, potential privacy abuses, etc.  And, of course,
    no trans-national terrorists would be listed in such a system, because
    they wouldn't be U.S. citizens.  What do you expect from a company
    whose origins are intertwined with the CIA?
    
    
    ** *** ***** ******* *********** *************
    
            Protecting Privacy and Liberty
    
    
    
    Appalled by the recent hijackings, many Americans have declared
    themselves willing to give up civil liberties in the name of security.
    They've declared it so loudly that this trade-off seems to be a fait
    accompli.  Article after article talks about the balance between
    privacy and security, discussing whether various increases of security
    are worth the privacy and civil-liberty losses.  Rarely do I see a
    discussion about whether this linkage is a valid one.
    
    Security and privacy are not two sides of a teeter-totter.  This
    association is simplistic and largely fallacious.  It's easy and
    fast, but less effective, to increase security by taking away liberty.
    However, the best ways to increase security are not at the expense of
    privacy and liberty.
    
    It's easy to refute the notion that all security comes at the expense
    of liberty.  Arming pilots, reinforcing cockpit doors, and teaching
    flight attendants karate are all examples of security measures that
    have no effect on individual privacy or liberties.  So are better
    authentication of airport maintenance workers, or dead-man switches
    that force planes to automatically land at the closest airport, or
    armed air marshals traveling on flights.
    
    Liberty-depriving security measures are most often found when system
    designers failed to take security into account from the beginning.
    They're Band-aids, and evidence of bad security planning.  When
    security is designed into a system, it can work without forcing people
    to give up their freedoms.
    
    Here's an example: securing a room.  Option one: convert the room into
    an impregnable vault.  Option two: put locks on the door, bars on the
    windows, and alarm everything.  Option three: don't bother securing
    the room; instead, post a guard in the room who records the ID of
    everyone entering and makes sure they should be allowed in.
    
    Option one is the best, but is unrealistic.  Impregnable vaults just
    don't exist, getting close is prohibitively expensive, and turning a
    room into a vault greatly lessens its usefulness as a room.  Option
    two is the realistic best; combine the strengths of prevention,
    detection, and response to achieve resilient security.  Option three
    is the worst.  It's far more expensive than option two, and the most
    invasive and easiest to defeat of all three options.  It's also a sure
    sign of bad planning; designers built the room, and only then realized
    that they needed security.  Rather than spend the effort installing
    door locks and alarms, they took the easy way out and invaded people's
    privacy.
    
    A more complex example is Internet security.  Preventive
    countermeasures help significantly against script kiddies, but fail
    against smart attackers.  For a couple of years I have advocated
    detection and response to provide security on the Internet.  This
    works; my company catches attackers -- both outside hackers and
    insiders -- all the time.  We do it by monitoring the audit logs of
    network products: firewalls, IDSs, routers, servers, and applications.
    We don't eavesdrop on legitimate users or read traffic.  We don't
    invade privacy.  We monitor data about data, and find abuse that
    way.  No civil liberties are violated.  It's not perfect, but nothing
    is.  Still, combined with preventive security products it is more
    effective, and more cost-effective, than anything else.
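
    As a sketch of what monitoring "data about data" can look like --
    the field names and threshold are illustrative assumptions of mine,
    not a description of Counterpane's actual system -- abuse can be
    flagged from log metadata alone, without reading any message
    content:

```python
# Flag abusive sources using only log metadata (source address and
# event type); the contents of users' traffic are never examined.
from collections import Counter

def flag_suspicious(records, threshold=5):
    """Return sources with more than 'threshold' failed-login events."""
    failures = Counter(
        r["source"] for r in records if r.get("event") == "login_failure"
    )
    return sorted(src for src, n in failures.items() if n > threshold)
```

    A source that trips the threshold gets investigated; everyone
    else's activity stays unread.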
    
    The parallels between Internet security and global security are
    strong.  All criminal investigation looks at surveillance records.
    The lowest-tech version of this is questioning witnesses.  In this
    current investigation, the FBI is looking at airport videotapes,
    airline passenger records, flight school class records, financial
    records, etc.  And the better job they can do examining these records,
    the more effective their investigation will be.
    
    There are copycat criminals and terrorists, who do what they've seen
    done before.  To a large extent, this is what the hastily implemented
    security measures have tried to prevent.  And there are the clever
    attackers, who invent new ways to attack people.  This is what we saw
    on September 11.  It's expensive, but we can build security to protect
    against yesterday's attacks.  But we can't guarantee protection
    against tomorrow's attacks: the hacker attack that hasn't been
    invented, or the terrorist attack yet to be conceived.
    
    Demands for even more surveillance miss the point.  The problem is not
    obtaining data, it's deciding which data is worth analyzing and then
    interpreting it.  We all leave wide audit trails as we go through
    life, and law enforcement can already access those records
    with search warrants.  The FBI quickly pieced together the terrorists'
    identities and the last few months of their lives, once they knew
    where to look.  If they had thrown up their hands and said that they
    couldn't figure out who did it or how, they might have a case for
    needing more surveillance data.  But they didn't, and they don't.
    
    More data can even be counterproductive.  The NSA and the CIA have
    been criticized for relying too much on signals intelligence, and not
    enough on human intelligence.  The East German police collected data
    on four million East Germans, roughly a quarter of their population.
    Yet they did not foresee the peaceful overthrow of the Communist
    government because they invested heavily in data collection instead
    of data interpretation.  We need more intelligence agents squatting
    on the ground in the Middle East arguing the Koran, not sitting in
    Washington arguing about wiretapping laws.
    
    People are willing to give up liberties for vague promises of security
    because they think they have no choice.  What they're not being told
    is that they can have both.  It would require people to say no to the
    FBI's power grab.  It would require us to discard the easy answers in
    favor of thoughtful answers.  It would require structuring incentives
    to improve overall security rather than simply decreasing its costs.
    Designing security into systems from the beginning, instead of tacking
    it on at the end, would give us the security we need, while preserving
    the civil liberties we hold dear.
    
    Some broad surveillance, in limited circumstances, might be warranted
    as a temporary measure.  But we need to be careful that it remain
    temporary, and that we do not design surveillance into our electronic
    infrastructure.  Thomas Jefferson once said: "Eternal vigilance
    is the price of liberty."  Historically, liberties have always
    been a casualty of war, but a temporary casualty.  This war -- a war
    without a clear enemy or end condition -- has the potential to turn
    into a permanent state of society.  We need to design our security
    accordingly.
    
    The events of September 11th demonstrated the need for America to
    redesign its public infrastructures for security.  Ignoring this need
    would be an additional tragedy.
    
    Quotes from U.S. government officials on the need to preserve liberty
    during this crisis:
    <http://www.epic.org/alert/EPIC_Alert_8.17.html>
    
    Quotes from editorial pages on the same need:
    <http://www.epic.org/alert/EPIC_Alert_8.18.html>
    
    Selected editorials:
    <http://www.nytimes.com/2001/09/16/weekinreview/16GREE.html>
    <http://www.nytimes.com/2001/09/23/opinion/23SUN1.html>
    <http://www.freedomforum.org/templates/document.asp?documentID=14924>
    
    Schneier's comments in the UK:
    <http://www.theregister.co.uk/content/55/21892.html>
    
    War and liberties:
    <http://www.salon.com/tech/feature/2001/09/22/end_of_liberty/index.html>
    <http://www.washingtonpost.com/wp-dyn/articles/A21207-2001Sep12.html>
    <http://www.wired.com/news/politics/0,1283,47051,00.html>
    
    More on Ashcroft's anti-privacy initiatives:
    <http://www.theregister.co.uk/content/6/21854.html>
    
    Editorial cartoon:
    <http://www.claybennett.com/pages/latest_08.html>
    
    Terrorists leave a broad electronic trail:
    <http://www.wired.com/news/politics/0,1283,46991,00.html>
    
    National Review article from 1998: "Know nothings: U.S. intelligence
    failures stem from too much information, not enough understanding"
    <http://www.findarticles.com/m1282/n14_v50/21102283/p1/article.jhtml>
    
    A previous version of this essay appeared in the San Jose Mercury News:
    <http://www0.mercurycenter.com/premium/opinion/columns/security27.htm>
    
    
    ** *** ***** ******* *********** *************
    
                        How to Help
    
    
    
    How can you help?  Speak about the issues.  Write to your elected
    officials.  Contribute to organizations working on these issues.
    
    This week the United States Congress will act on the most sweeping
    proposal to extend the surveillance authority of the government since
    the end of the Cold War. If you value privacy, there are three steps
    you should take before you open your next email message:
    
    1. Urge your representatives in Congress to protect privacy.
    - Call the Capitol switchboard at 202-224-3121.
    - Ask to be connected to the office of your Congressional representative.
    - When you are put through, say "May I please speak to the staff
    member who is working on the anti-terrorism legislation?" If that
    person is not available to speak with you, say  "May I please leave a
    message?"
    - Briefly explain that you appreciate the efforts of your
    representative to address the challenges brought about by the
    September 11th tragedy, but it is your view that it would be a mistake
    to make any changes in the federal wiretap statute that do not respond
    to "the immediate threat of investigating or preventing terrorist
    acts."
    
    2. Go to the In Defense of Freedom web site and endorse the statement:
    <http://www.indefenseoffreedom.org>
    
    3. Forward this message to at least five other people.
    
    We have less than 100 hours before Congress acts on legislation that
    will (a) significantly expand the use of Carnivore, (b) make computer
    hacking a form of terrorism, (c) expand electronic surveillance in
    routine criminal investigations, and (d) reduce government
    accountability.
    
    Please act now.
    
    More generally, I expect to see many pieces of legislation that will
    address these matters.  Visit the following Web sites for up-to-date
    information on what is happening and what you can do to help.
    
    The Electronic Privacy Information Center:
    <http://www.epic.org>
    
    The Center for Democracy and Technology:
    <http://www.cdt.org>
    
    The American Civil Liberties Union:
    <http://www.aclu.org>
    
    
    ** *** ***** ******* *********** *************
    
    CRYPTO-GRAM is a free monthly newsletter providing summaries,
    analyses, insights, and commentaries on computer security and
    cryptography.  Back issues are available on
    <http://www.counterpane.com/crypto-gram.html>.
    
    To subscribe, visit <http://www.counterpane.com/crypto-gram.html>
    or send a blank message to crypto-gram-subscribeat_private
    To unsubscribe, visit <http://www.counterpane.com/unsubform.html>.
    
    Please feel free to forward CRYPTO-GRAM to colleagues and friends who
    will find it valuable.  Permission is granted to reprint CRYPTO-GRAM,
    as long as it is reprinted in its entirety.
    
    CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and
    CTO of Counterpane Internet Security Inc., the author of "Secrets
    and Lies" and "Applied Cryptography," and an inventor of the Blowfish,
    Twofish, and Yarrow algorithms.  He is a member of the Advisory Board
    of the Electronic Privacy Information Center (EPIC).  He is a frequent
    writer and lecturer on computer security and cryptography.
    
    Counterpane Internet Security, Inc. is the world leader in Managed
    Security Monitoring.  Counterpane's expert security analysts protect
    networks for Fortune 1000 companies world-wide.
    
    <http://www.counterpane.com/>
    
    Copyright (c) 2001 by Counterpane Internet Security, Inc.
    



    This archive was generated by hypermail 2b30 : Tue Oct 02 2001 - 09:29:18 PDT