[ISN] The Fallacy of Cracking Contests (fwd)

From: mea culpa (jerichoat_private)
Date: Tue Dec 15 1998 - 23:35:01 PST


    Forwarded From: William Knowles <erehwonat_private>
    Originally From: Bruce Schneier <schneierat_private>
    
    
                  The Fallacy of Cracking Contests
                           Bruce Schneier
    
    
    You see them all the time: "Company X offers $1,000,000 to anyone who can
    break through their firewall/crack their algorithm/make a fraudulent
    transaction using their protocol/do whatever."  These are cracking
    contests, and they're supposed to show how strong and secure the target of
    the contest is.  The logic goes something like this:  We offered a prize
    to break the target, and no one did.  This means that the target is secure.
    
    It doesn't.
    
    Contests are a terrible way to demonstrate security.  A
    product/system/protocol/algorithm that has survived a contest unbroken is
    not obviously more trustworthy than one that has not been the subject of a
    contest.  The best products/systems/protocols/algorithms available today
    have not been the subjects of any contests, and probably never will be.
    Contests generally don't produce useful data.  There are three basic
    reasons why this is so.
    
    1.  The contests are generally unfair.  
    
    Cryptanalysis assumes that the attacker knows everything except the secret.
    He has access to the algorithms and protocols, the source code,
    everything.  He knows the ciphertext and the plaintext.  He may even know
    something about the key.
    
    And a cryptanalytic result can be anything.  It can be a complete break: a
    result that breaks the security in a reasonable amount of time.  It can be
    a theoretical break: a result that doesn't work "operationally," but still
    shows that the security isn't as good as advertised.  It can be anything in
    between.
    
    Most cryptanalysis contests have arbitrary rules.  They define what the
    attacker has to work with, and what constitutes a successful break.  Jaws
    Technologies provided a ciphertext file and, without explaining how their
    algorithm worked, offered a prize to anyone who could recover the
    plaintext.  This isn't how real cryptanalysis works; if no one wins the
    contest, it means nothing.
    
    Most contests don't disclose the algorithm.  And since most cryptanalysts
    don't have the skills for reverse-engineering (I find it tedious and
    boring), they never bother analyzing the systems.  This is why COMP128,
    CMEA, ORYX, the Firewire cipher, the DVD cipher, and the Netscape PRNG were
    all broken within months of their disclosure (despite the fact that some of
    them have been widely deployed for many years); once the algorithm is
    revealed, it's easy to see the flaw, but it might take years before someone
    bothers to reverse-engineer the algorithm and publish it.  Contests don't
    help.
    
    (Of course, the above paragraph does not hold true for the military.  There
    are countless examples of successful reverse-engineering--VENONA, PURPLE--in
    the "real" world.  But the academic world doesn't work that way,
    fortunately or unfortunately.)
    
    Unfair contests aren't new.  Back in the mid-1980s, the authors of an
    encryption algorithm called FEAL issued a contest.  They provided a
    ciphertext file, and offered a prize to the first person to recover the
    plaintext.  The algorithm has been repeatedly broken by cryptographers,
    through differential and then linear cryptanalysis and by other statistical
    attacks.  Everyone agrees that the algorithm was badly flawed.  Still, no
    one won the contest.
    
    2.  The analysis is not controlled.
    
    Contests are random tests.  Do ten people, each working 100 hours to win
    the contest, count as 1000 hours of analysis?  Or did they all try the same
    things?  Are they even competent analysts, or are they just random people
    who heard about the contest and wanted to try their luck?  Just because no
    one wins a contest doesn't mean the target is secure...it just means that
    no one won.
    
    3.  Contest prizes are rarely good incentives.  
    
    Cryptanalysis of an algorithm, protocol, or system can be a lot of work.
    People who are good at it are going to do the work for a variety of
    reasons--money, prestige, boredom--but trying to win a contest is rarely
    one of them.  Contests are viewed in the community with skepticism: most
    companies that sponsor contests are not well known, and people don't believe
    that they will judge the results fairly.  And trying to win a contest is no
    sure thing: someone could beat you, leaving you nothing to show for your
    efforts.  Cryptanalysts are much better off analyzing systems where they
    are being paid for their analysis work, or systems for which they can
    publish a paper explaining their results.
    
    Just look at the economics.  At a conservative $125 an hour for a
    competent cryptanalyst, a $10K prize pays for two weeks of work, not enough
    time to even dig through the code.  A $100K prize might be worth a look,
    but reverse-engineering the product is boring and that's still not enough
    time to do a thorough job.  A prize of $1M starts to become interesting,
    but most companies can't afford to offer that.  And the cryptanalyst has no
    guarantee of getting paid: he may not find anything, he may get beaten to
    the attack and lose out to someone else, or the company might not even pay.
    
    Why should a cryptanalyst donate his time (and good name) to the company's
    publicity campaign?
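
    Those numbers are easy to check.  Below is a rough back-of-the-envelope
    sketch in Python (my own illustration, not part of the original essay):
    the $125-an-hour rate and the prize amounts are the figures quoted above,
    and the 40-hour week is an assumption.

        # Back-of-the-envelope check of the contest economics quoted above.
        # The $125/hour rate and prize amounts come from the essay; the
        # 40-hour work week is an assumed figure for full-time analysis.

        HOURLY_RATE = 125      # dollars per hour for a competent cryptanalyst
        HOURS_PER_WEEK = 40    # assumed full-time analysis schedule

        def weeks_of_work(prize_dollars):
            """Weeks of full-time analysis a given prize would pay for."""
            return prize_dollars / (HOURLY_RATE * HOURS_PER_WEEK)

        for prize in (10_000, 100_000, 1_000_000):
            print(f"${prize:>9,} prize -> {weeks_of_work(prize):5.1f} weeks of paid analysis")

        # Output:
        #   $   10,000 prize ->   2.0 weeks of paid analysis
        #   $  100,000 prize ->  20.0 weeks of paid analysis
        #   $1,000,000 prize -> 200.0 weeks of paid analysis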
    
    Cryptanalysis contests are generally nothing more than a publicity tool.
    Sponsoring a contest, even a fair one, is no guarantee that people will
    analyze the target.  Surviving a contest is no guarantee that there are no
    flaws in the target.
    
    The true measure of trustworthiness is how much analysis has been done, not
    whether there was a contest.  And analysis is a slow and painful process.
    People trust cryptographic algorithms (DES, RSA), protocols (Kerberos), and
    systems (PGP, IPSec) not because of contests, but because all have been
    subjected to years (decades, even) of peer review and analysis.  And they
    have been analyzed not because of some elusive prize, but because they were
    either interesting or widely deployed.  The analysis of the fifteen AES
    candidates is going to take several years.  There isn't a prize in the
    world that's going to make the best cryptanalysts drop what they're doing
    and examine the offerings of Meganet Corporation or RPK Security Inc., two
    companies that recently offered cracking prizes.  It's much more
    interesting to find flaws in Java, or Windows NT, or cellular telephone
    security.
    
    The above three reasons are generalizations.  There are exceptions, but
    they are few and far between.  The RSA challenges, both their factoring
    challenges and their symmetric brute-force challenges, are fair and good
    contests.  These contests are successful not because the prize money is an
    incentive to factor numbers or build brute-force cracking machines, but
    because researchers are already interested in factoring and brute-force
    cracking.  The contests simply provide a spotlight for what was already an
    interesting endeavor.  The AES contest, although more a competition than a
    cryptanalysis contest, is also fair.
    
    Our Twofish cryptanalysis contest offers a $10K prize for the best negative
    comments on Twofish that aren't written by the authors.  There are no
    arbitrary definitions of what a winning analysis is.  There is no
    ciphertext to break or keys to recover.  We are simply rewarding the most
    successful cryptanalysis research result, whatever it may be and however
    successful it is (or is not).  Again, the contest is fair because 1) the
    algorithm is completely specified, 2) there is no arbitrary definition of
    what winning means, and 3) the algorithm is public domain.
    
    Contests, if implemented correctly, can provide useful information and
    reward particular areas of research.  But they are not useful metrics to
    judge security.  I can offer $10K to the first person who successfully
    breaks into my home and steals a book off my shelf.  If no one does so
    before the contest ends, that doesn't mean my home is secure.  Maybe no one
    with any burgling ability heard about my contest.  Maybe they were too busy
    doing other things.  Maybe they weren't able to break into my home, but
    they figured out how to forge the real-estate title to put the property in
    their name.  Maybe they did break into my home, but took a look around and
    decided to come back when there was something more valuable than a $10,000
    prize at stake.  The contest proved nothing.
    
    Gene Spafford wrote against hacking contests.
    http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/old-issues/issue9602
    
    Matt Blaze has too, but I can't find a good URL.
    
    **********************************************************************
    Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
    101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
               Free crypto newsletter.  See:  http://www.counterpane.com
    
    


