Re: CRIME REMINDER: Free Seminar on Computer Security tomorrow!

From: Crispin Cowan (crispin@private)
Date: Fri Sep 06 2002 - 18:28:33 PDT


    Andrew Plato wrote:
    
    >> Yes, it does. Once one person has discovered the hole, others can
    >> exploit it. Regardless of what it says about who discovered it and who
    >> wrote the exploit, it says unequivocally that the product is vulnerable,
    >> a synonym for "insecure."
    >
    > I think you and I have different definitions of the word "insecure." To me,
    > insecure means something that is easily exploited, where there is a
    > significant probability that somebody will actually exploit such a hole.
    >
    > For example, an unpatched Windows NT 4.0 server running IIS 4.0 on the
    > Internet is an extremely insecure system for hosting a web site. The chance
    > of it being 0wned is almost 100%.
    >
    I'd agree with that.
    
    Notice that this definition is highly context-sensitive: it depends 
    critically on whether there are known vulnerabilities, and on who knows 
    about them. The probability of being attacked with a given vulnerability 
    changes over time, as documented by Browne et al.:
    http://www.cs.umd.edu/~waa/pubs/CS-TR-4200.pdf
    
    > But, if that system was upgraded, patched, moved behind a firewall, and
    > hardened from attack, as well as managed and maintained by a skilled
    > staff - its chance of being hacked significantly decreases and as such is
    > not an insecure solution.
    >
    Patched means that the vulnerability has been fixed, and the system is 
    once again secure.
    
    Moved behind a firewall means that it is still insecure, but is no 
    longer exposed to wide attack. This is different from "secure", as 
    evidenced by incidents such as Code Red running rampant across 
    *internal* web servers at IBM, which has literally tens of thousands 
    of web servers on its internal net behind the firewalls.
    
    A system that is merely monitored by security staff is still insecure: 
    the staff just provide rapid response once they notice something.
    
    >> 1. Not *every* product has holes if you pound hard enough.
    >
    > Sure it does. Again, I think you and I have a different definition of
    > "holes." To me a hole is anything that could render a system unusable or
    > allow unauthorized access. Whacking a computer with a sledge hammer is a
    > security vulnerability. This is easily mitigated by placing systems in
    > secure locations. A security hole is not merely a hole in the programming.
    > This should be obvious, since the biometric hole the Counterpane article
    > points out is a non-programmatic hole. It's a social-engineering attack.
    >
    What you're describing is formally known as a "threat model". Server 
    security (WireX's business) mostly involves a network threat model. 
    Biometric authentication necessarily includes a physical threat model. 
    It is important to understand the threat model that a security feature 
    purports to defend against.
    
    > Perhaps the most troublesome security hole is the one nobody can patch:
    >
    > How do you stop an unauthorized person armed with legitimate credentials?
    >
    > In a sense that is the same hole the biometric mouse has.
    >
    Definitely not. As you correctly observe, all authentication schemes are 
    subject to extortion attacks against authenticated people. But 
    compromising biometric authentication can be done completely silently, 
    so that the authorized person doesn't even know they have been compromised.
    
    > And essentially every system ever built - including Wirex systems - has
    > that same hole. If a person obtains credentials through some off-line
    > mechanism (such as social engineering) and uses those credentials in a
    > non-obvious manner, then virtually nothing can stop them.
    >
    While that's true, it is beside the point:
    
        * With strong authentication systems, the credentials are
          cryptographically strong secrets, stored in secure devices (smart
          cards, secure computers, etc.).
        * With biometric authentication, the credentials are left lying
          around in public, and all you need to do is collect and spoof
          them. Fingerprints, voice prints, retina/iris scans, etc. can all
          be obtained from public sources and replicated.
    
    A credential needs to be a well-kept secret. Biometrics are very poorly 
    kept secrets.
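
    To make the difference concrete, here is a minimal sketch (Python, purely
    illustrative; the names are mine, not anything WireX ships) contrasting a
    challenge-response proof of a well-kept secret with a static biometric
    template match:

        import hmac, hashlib, os

        # Strong credential: a cryptographically strong secret that stays
        # inside a secure device (smart card, secure computer, etc.).
        secret_key = os.urandom(32)

        def token_response(challenge):
            # The token proves possession of the secret by answering a
            # fresh, random challenge; the secret itself never leaves it.
            return hmac.new(secret_key, challenge, hashlib.sha256).digest()

        def server_verify(challenge, response):
            expected = hmac.new(secret_key, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        # Biometric "credential": a static template. Anyone who lifts the
        # fingerprint off a coffee cup can present the same bytes forever,
        # and the victim cannot revoke their own finger.
        enrolled_template = b"fingerprint-minutiae-bytes"   # placeholder data

        def biometric_verify(presented_template):
            return presented_template == enrolled_template

    The particular primitives don't matter; the point is that the secret above
    can be kept secret, rotated, and revoked, while the template cannot.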
    
    > Hence my point, a vulnerability does not render a technology insecure.
    > It may be temporarily insecure or insecure based on poor maintenance
    > or use, but the technology itself can be secured.
    >
    Agreed, give or take some quibbling over terminology.
    
    > Hence the goal in any security endeavor is to (in order of importance):
    >
    > 1. Eliminate as many holes as possible (harden systems, apply patches)
    > 2. Reduce the probability of attacks (control access, monitor and
    >    manage systems)
    > 3. Detect and analyze any attacks that do happen (IDS, IDS, IDS!)
    > 4. Mitigate the effect an attack has on the systems/network
    >    (modularize systems).
    > 5. Provide recovery capabilities if an attack does happen (incident
    >    response, backups, etc.)
    >
    I'd re-order that a little, again moving the IDS to the bottom. There is 
    no point spending money on IDS if you don't have rapid recovery in 
    place, and mitigation is more cost-effective than detection and 
    recovery. So my version of this list is:
    
       1. Eliminate as many holes as possible (harden systems, apply patches)
       2. Reduce the probability of attacks (control access, monitor and
          manage systems)
       3. Mitigate the effect an attack has on the systems/network
          (modularize systems)
       4. Provide intrusion detection and recovery capabilities
    
    
    > This is why ANY form of two-factor authentication is better than
    > single-factor authentication. Although either factor may be very easy to
    > break - the two factors put together reduce the overall probability of a
    > compromise. You have, in a sense, put TWO barriers in front of our
    > would-be hacker instead of one.
    >
    Agreed, but again with a caveat: the biometric factor is a hazard because:
    
        * it has a very low actual security value; it is not much of a
          barrier to overcome
        * it has a very high *perceived* security value among non-technical
          users; they *think* they are highly protected by the thumb scanner
        * thus it may lead users to be careless in use of passwords, because
          they think the other factor will save them
    
    Which is where I get the idea that single-factor authentication (well-managed 
    passwords, sent strictly through crypto tunnels) can be more secure than 
    two-factor authentication where both factors are weak (biometrics, which 
    always suck, combined with badly managed passwords). In that sense, 
    biometrics may have actual NEGATIVE security value, making things worse, 
    not better.
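
    To put rough numbers on both points (the probabilities below are invented
    purely for illustration, not measured from any real product), assume each
    factor fails independently:

        # Invented, illustrative odds of an attacker defeating each factor.
        p_password_well_managed = 0.01   # strong password over a crypto tunnel
        p_password_sloppy       = 0.30   # carelessly chosen/handled password
        p_biometric_spoofable   = 0.50   # thumb scanner fooled by a lifted print

        # With independent factors, the attacker must defeat both:
        two_weak_factors  = p_password_sloppy * p_biometric_spoofable   # 0.15
        one_strong_factor = p_password_well_managed                     # 0.01

    Two weak factors do multiply down to better odds than either weak factor
    alone, which is Andrew's point; but the product can still be far worse than
    a single well-managed factor, which is mine.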
    
    >> In other cases, such as the bio-mouse, the vulnerabilities are so
    >> ingrained in the design that they cannot be mitigated. THEN the product
    >> needs to be thrown out, because it cannot be repaired.
    >
    > But they can be mitigated. So, good. We agree.
    >
    But I have yet to see any way in which the weakness of biometrics can be 
    mitigated. They just flat out suck.
    
    For busy admins not willing to follow all the intricate arguments, the 
    bottom line is very simple: NEVER use biometrics. There is always 
    something more cost-effective to deploy.
    
    This is much simpler than the IDS question, where my guidance is more 
    like "How much were you planning on spending?" and "Have you already 
    done all the other stuff?"
    
    Crispin
    
    -- 
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX                      http://wirex.com/~crispin/
    Security Hardened Linux Distribution:       http://immunix.org
    Available for purchase: http://wirex.com/Products/Immunix/purchase.html
    


