Re: Behavior analysis vs. Integrity analysis [was: Binary Bruteforcing]

From: Michal Zalewski (lcamtufat_private)
Date: Thu Mar 28 2002 - 08:42:50 PST

    On Thu, 28 Mar 2002, auto12012 auto12012 wrote:
    
    > However, what does that have to do with pinpointing vulnerabilities?
    > Vulnerabilities are not about how a process is to behave, but based on
    > what information - and its integrity - it is to behave. Predictable
    > behavior, in the context of security analysis, only gives you the
    > ability to increase granularity. It is not a requirement for any
    > security analysis.
    
    To tell how a process is to behave under certain conditions, you
    have to either predict that behavior, or actually run / step through
    the program and see what happens. And you have to know it for all
    possible input parameters. Neither approach, without significant
    sacrifices, is feasible for a typical real-life project (say,
    Sendmail), where it is difficult even to enumerate the input set,
    and where very minor problems can propagate and surface much later.
    Having a good way to predict program behavior in reasonable time
    would let us build pretty good general models and compare them
    against functional specifications. Unfortunately, we do not know
    how to predict program behavior in any feasible manner.
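
    A minimal C sketch of that blow-up; parse_flags() and its input are
    hypothetical, not code from any real project:

        /* Each input-dependent branch doubles the number of
         * execution paths to examine; real code adds loops whose
         * bounds depend on input, too. */
        int parse_flags(const unsigned char *in)
        {
            int a = 0, b = 0, c = 0;
            if (in[0] & 1) a = 1;          /* 2 paths        */
            if (in[1] & 2) b = 1;          /* 4 paths        */
            if (in[2] & 4) c = 1;          /* 8 paths        */
            return a + b + c;              /* 2^n in general */
        }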
    
    You can't trace the flow of untrusted information through the code
    in all possible cases unless you do one of the above. And if you
    don't, you can't tell whether functions that are not supposed to
    take untrusted data accidentally receive it anyway, because it was
    retained at some point in some variable or the like...
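
    A minimal sketch of that retention problem; remember(), log_event()
    and last_arg are hypothetical names, not real code:

        #include <stdio.h>
        #include <string.h>

        static char last_arg[128];    /* untrusted bytes linger here */

        void remember(const char *untrusted)
        {
            strncpy(last_arg, untrusted, sizeof(last_arg) - 1);
            last_arg[sizeof(last_arg) - 1] = '\0';
        }

        void log_event(void)
        {
            char msg[64];
            /* Written as if last_arg were trusted -- but it still
             * holds attacker-controlled data, and up to 127 bytes
             * of it do not fit into msg[64]. */
            sprintf(msg, "last: %s", last_arg);
        }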
    
    > I do not see, then, how vulnerabilities are linked to execution
    > paths. An application does not become vulnerable simply because
    > it took execution path A instead of path B. It was already
    > vulnerable in the first place because it based its decision on
    > untrustworthy information.
    
    Huh? Say there's a sprintf() call in the code of some function. Is
    it a vulnerability? Well, perhaps. It all depends on how this code
    is reached. One execution path can deliver nicely trimmed buffers,
    another can deliver unchecked data. But does this second path exist
    at all? Maybe not; should we raise an alarm and report a false
    positive anyway? Maybe. What if doing so results in 10,000 false
    positives, dramatically increasing the chances of overlooking the
    one actual problem? And this is just a trivial example; one might
    argue that "sprintf is bad, period", or that "every function should
    check its input, period". But we are talking about existing code,
    not lab design. And think about bugs such as signal delivery races:
    they are caused solely by program execution taking an unexpected
    turn, so that certain data gets modified while the program relies
    on it being intact.
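
    A minimal sketch of the two-path case; format_name() and its two
    callers are hypothetical:

        #include <stdio.h>
        #include <string.h>

        /* The same sprintf() is safe or exploitable depending
         * purely on which path the data took to reach it. */
        static void format_name(char *out, const char *name)
        {
            sprintf(out, "user=%s", name);   /* no length check */
        }

        void path_a(const char *input)     /* delivers trimmed data */
        {
            char buf[70], trimmed[64];
            strncpy(trimmed, input, sizeof(trimmed) - 1);
            trimmed[sizeof(trimmed) - 1] = '\0';
            format_name(buf, trimmed);     /* at most 69 bytes: fits */
        }

        void path_b(const char *input)     /* delivers raw data */
        {
            char buf[70];
            format_name(buf, input);       /* unchecked: can overflow */
        }

    A scanner that flags the sprintf() itself is right only if path_b
    is actually reachable; if it is not, that is one more false
    positive.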
    
    Another example: there is a function that calls snprintf() and
    accepts a buffer and a buffer size. Everything looks clean, but
    because of some design flaw, the size passed in from some other
    variable is, at some point, a bit too big for the actual buffer.
    Sure, it is trivial to detect execution-path-independent vulns; we
    have 'grep' for that. But unfortunately, most recent bugs were
    very, very execution path dependent, and very often you can't tell
    things will go wrong before they do.
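
    A minimal sketch of that size mismatch; render() and caller() are
    hypothetical:

        #include <stdio.h>

        /* snprintf() is bounded -- but only by the size it is told,
         * which here does not match the real buffer. */
        static void render(char *buf, size_t buflen, const char *s)
        {
            snprintf(buf, buflen, "msg: %s", s);
        }

        void caller(const char *s)
        {
            char small[32];
            size_t claimed = 64;         /* design flaw: size comes
                                          * from an unrelated variable */
            render(small, claimed, s);   /* overflows small[] even
                                          * though snprintf() is used */
        }

    No grep pattern over the snprintf() call itself catches this; the
    bug lives in the path the size value traveled.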
    
    -- 
    _____________________________________________________
    Michal Zalewski [lcamtufat_private] [security]
    [http://lcamtuf.coredump.cx] <=-=> bash$ :(){ :|:&};:
    =-=> Did you know that clones never use mirrors? <=-=
              http://lcamtuf.coredump.cx/photo/
    


