Re: Can we afford full disclosure of security holes?

From: Randy Taylor (rtaylorat_private)
Date: Fri Aug 10 2001 - 13:06:43 PDT

    Replies inline below...
    
    At 02:39 PM 8/10/2001 -0400, Richard M. Smith wrote:
    >Hello,
    >
    >The research company Computer Economics is calling Code Red
    >the most expensive computer virus in the history of the Internet.
    >They put the estimated clean-up bill so far at $2 billion.
    >I happen to think the $2 billion figure is total hype,
    >but clearly a lot of time and money has been spent cleaning up after
    >Code Red.
    >
    >For the sake of argument, let's say that Computer Economics
    >is off by a factor of one hundred.  That still puts the
    >clean-up costs at $20 million.
    >
    >This $20 million figure begs the question: was it really
    >necessary for eEye Digital Security to release full details
    >of the IIS buffer overflow that made the Code Red I and II worms
    >possible?  I think the answer is clearly no.
    
    eEye disclosed the details, but the exploit was already known
    by a "close circle" - the persons or persons who originally
    discovered the vulnerability and authored the exploit code. While
    the vulnerability remained in that pre-disclosure state, it presented
    more of a danger to the community than it does now.
    
    As for eEye's method of disclosure, I don't think it differs too much
    from the current standard - and that has at least some of its origins
    in the methods used to dissect and discuss the Morris Worm of 1988.
    The point being that full disclosure has been around for a long
    time, and it has been invaluable to those of us in the security community.
    
    Having said all that, I do feel your pain. Back in the early-mid 90's I
    questioned full disclosure, too. I often felt I did not have adequate time
    to get the systems I was responsible for patched fast enough to escape
    the onslaught of the "newest exploit" of the day. It was frustrating to say
    the least. But when it was all said and done, I came to the conclusion
    that knowing was much better than not knowing - full disclosure is better
    than no disclosure or limited disclosure.
    
    
    
    >Wouldn't it have been much better for eEye to give the details
    >of the buffer overflow only to Microsoft?  They could have still
    >issued a security advisory saying that they found a problem in IIS
    >and where to get the Microsoft patch.  I realize that a partial
    >disclosure policy isn't as sexy as a full disclosure policy, but
    >I believe that a less revealing eEye advisory would have saved a lot
    >of companies a lot of money and grief.
    
    In a word, no. Dan Farmer often argued (and I am liberally paraphrasing
    posts I read _years_ ago) that full disclosure "forced the hand" of software
    vendors to fix immediately what they would otherwise have waited until the
    next release to patch. Although back then I disagreed with that view, I've
    long since changed my mind and support full disclosure. It's not pretty - but
    it is very necessary. Imagine the fray Code Red would have caused if only
    Microsoft and eEye had known about the underlying vulnerability (not to
    mention the original developers - _that_ circle would have expanded
    quickly). *shiver*
    
    Further, I'd suggest that a "limited disclosure" policy would become
    full disclosure by brute force of public opinion, or at least by brute force
    reverse engineering. In other words, if any part of the cat is out of the bag,
    it won't be long before the entire beast becomes visible, claws and all. That
    much is attributable to human nature.
    
    
    >Unlike the eEye advisory, the Microsoft advisory on the IIS
    >security hole shows the right balance.  It gives IIS customers
    >enough information about the buffer overflow without giving a recipe
    >to virus writers of how to exploit it.
    
    One thing I think you might not be taking into account is the wealth
    of knowledge that already exists about discovering vulnerabilities. Code
    Red didn't fall that far from the tree, capabilities-wise, and is a logical
    extension of current trends. Finally, factor in an old military maxim,
    "Defense always lags offense" - I'll always support shortening that
    lag to the minimum amount of time possible. Full disclosure does that.
    If I'm going to be hit by something, I'd like to know as much as I can
    about it rather than get blindsided by it. It's been my experience
    that people usually want to know _why_ they are getting stomped on
    as well as how they can make it stop.
    
    
    
    >Thanks,
    >Richard M. Smith
    >CTO, Privacy Foundation
    >http://www.privacyfoundation.org
    
    That's just my opinion, Richard. I could be wrong. ;)
    
    Best regards,
    
    Randy
    


