RE: CRIME REMINDER: Free Seminar on Computer Security tomorrow!

From: Andrew Plato (aplato@private)
Date: Thu Sep 05 2002 - 20:06:20 PDT


    > Yes, it does. Once one person has discovered the hole, others can
    > exploit it. Regardless of what it says about who discovered it and who
    > wrote the exploit, it says unequivocally that the product is vulnerable,
    > a synonym for "insecure."
    
    I think you and I have different definitions of the word "insecure." To me,
    insecure means something that is easily exploited, where there is a
    significant probability that somebody actually will exploit the hole.
    
    For example, an unpatched Windows NT 4.0 server running IIS 4.0 on the Internet
    is an extremely insecure system for hosting a web site. The chance of it being 0wned
    is almost 100%. 
    
    But if that system were upgraded, patched, moved behind a firewall, hardened
    against attack, and managed and maintained by a skilled staff, its chance of
    being hacked decreases significantly, and as such it is not an insecure solution.
    It's maybe not the MOST secure solution (a Wirex box would be best, of course :-) ),
    but I would not deem it "insecure." Even in a hardened state it may still be
    vulnerable to some more esoteric attacks, but since the probability of those
    attacks has been significantly mitigated thanks to firewalls (and, hopefully,
    an IDS), the platform can be deemed reasonably secure.
    
    Therefore, even though said technology has said vulnerabilities, it can be used in 
    a (more) secure manner. 
    
    > 1. Not *every* product has holes if you pound hard enough.
    
    Sure every product does. Again, I think you and I have different definitions
    of "holes." To me, a hole is anything that could render a system unusable or
    allow unauthorized access. Whacking a computer with a sledgehammer is a
    security vulnerability; it is easily mitigated by placing systems in secure
    locations. A security hole is not merely a hole in the programming. This
    should be obvious, since the biometric hole the Counterpane article points
    out is a non-programmatic hole. It's a social-engineering attack.
    
    Perhaps the most troublesome security hole is the one nobody can patch:
    
    How do you stop an unauthorized person armed with legitimate credentials? 
    
    In a sense, that is the same hole the biometric mouse has. And essentially
    every system ever built - including Wirex systems - has that same hole.
    If a person obtains credentials through some off-line mechanism (such as
    social engineering) and uses those credentials in a non-obvious manner,
    then virtually nothing can stop them. I mean, will your servers catch and
    stop a user who logs on with a legitimate account and does nothing more
    out of the ordinary than look at files or print something?
    
    That is what I mean when I say everything has holes. Therefore, the question
    is how to reduce or eliminate as many holes as possible. Some holes -
    like a hacker armed with stolen credentials or a gummy thumb - can only
    be mitigated through a general increase in security measures. You will
    never completely eliminate these holes, because there will always be
    human errors and mistakes.
    
    > 2. The "insecure" attribute is transient; the product is insecure
    >    until it is patched. Some problems cannot be patched, rendering
    >    the product insecure until it is re-worked.
    
    I'll agree with that in concept. 
    
    > On the contrary, if the hole is in widely deployed software, it almost
    > certainly does. See "How to Own the Internet in Your Spare Time",
    > Staniford, Paxson, and Weaver, USENIX Security 2002.
    > http://www.icir.org/vern/papers/cdc-usenix-sec02/ That Code Red and
    > Nimda are *still* rampaging across the Internet is largely a result of
    > lazy admins failing to patch known vulnerabilities.
    
    Okay, you proved my point right here. Code Red is rampaging across
    the Internet still because of poor maintenance on servers. Not because
    the servers are fundamentally flawed and must be discarded. 
    
    Hence my point: a vulnerability does not render a technology insecure.
    It may be temporarily insecure, or insecure due to poor maintenance
    or use, but the technology itself can be secured.
    
    > Caveat: there has been a lot of research done on computing the
    > probability of attack, and the results have been fruitless. It turns out
    > to be very difficult to compute such a probability with any degree of
    > accuracy.
    
    Yes, this is true. That's why we don't talk about specific probability
    values but about the act of *reducing* probability. Some security measures
    have the effect of reducing the probability of a successful attack. IDSs,
    for example, have the power to help reduce that probability.
    
    Hence the goal in any security endeavor is to (in order of importance):
    
    1. Eliminate as many holes as possible (harden systems, apply patches).
    2. Reduce the probability of attacks (control access, monitor and manage systems).
    3. Detect and analyze any attacks that do happen (IDS, IDS, IDS!).
    4. Mitigate the effect an attack has on the systems/network (modularize systems).
    5. Provide recovery capabilities if an attack does happen (incident response, backups, etc.).
    
    This is why ANY form of two-factor authentication is better than single-factor
    authentication. Although either factor on its own may be easy to break, the two
    factors put together reduce the overall probability of a compromise. You have,
    in a sense, put TWO barriers in front of our would-be hacker instead of one.
    
    Even if our would-be hacker manages to build gummy thumbs, he/she still has to
    crack the password on the system. Likewise, even if he sniffs the password from
    the wire, he still has to gain access to your office, lift your fingerprint,
    go make a gummy print of it, then gain access to your office AGAIN and use both.
    See - those two factors put together would likely run off all but the most
    committed hacker scum.
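    The arithmetic behind that argument can be sketched in a few lines. The
    per-factor numbers below are made up purely for illustration, and the sketch
    assumes the two factors fail independently of each other:

    ```python
    # Sketch of why two factors beat one, assuming the factors are
    # compromised independently. The probabilities are illustrative
    # placeholders, not measured values.

    p_password_cracked = 0.10  # chance the attacker breaks the password alone
    p_gummy_thumb = 0.05       # chance the attacker fakes the fingerprint alone

    # Single factor: the attacker needs only one break.
    p_single_factor = p_password_cracked

    # Two factors: the attacker must break BOTH, so the
    # probabilities multiply (given independence).
    p_two_factor = p_password_cracked * p_gummy_thumb

    print(f"single factor: {p_single_factor:.3f}")  # 0.100
    print(f"two factors:   {p_two_factor:.4f}")     # 0.0050
    ```

    If the factors are not independent (say, the same social-engineering call
    yields both the password and the fingerprint), the benefit shrinks - which
    is exactly the stolen-credentials hole described above.
    
    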
    
    I agree that security tokens are better for two-factor authentication. But their
    expense and complexity render them an impractical solution. Now, this thing Seth
    talked about looks promising. Got to see about that.
    
    This isn't some crazy Andrew opinion. It's standard fare in any security training.
    I distinctly remember it from the CISSP material.
    
    > In other cases, such as the bio-mouse, the vulnerabilities are so
    > ingrained to the design that they cannot be mitigated. THEN the product
    > needs to be thrown out, because it cannot be repaired.
    
    But they can be mitigated. So, good. We agree. 
    
    ------------------------------------
    Andrew Plato, CISSP
    President / Principal Consultant
    Anitian Corporation
    
    (503) 644-5656 office
    (503) 201-0821 cell
    http://www.anitian.com
    ------------------------------------
    
     
    
    



    This archive was generated by hypermail 2b30 : Thu Sep 05 2002 - 21:11:53 PDT