Re: Behavior analysis vs. Integrity analysis [was: Binary Bruteforcing]

From: auto12012 auto12012 (auto12012at_private)
Date: Thu Mar 28 2002 - 11:28:50 PST


    >
    >On Thu, 28 Mar 2002, auto12012 auto12012 wrote:
    >
    > > However, what does that have to do with pinpointing vulnerabilities?
    > > Vulnerabilities are not about how a process is to behave, but based on
    > > what information - and its integrity - it is to behave. Predictable
    > > behavior, in the context of security analysis, only gives you the
    > > ability to increase granularity. It is not a requirement for any
    > > security analysis.
    >
    >To tell how the process is to behave in certain conditions, you have to be
    >able to predict this behavior, or actually run / go through the program and
    >see what happens. And you have to know it for all possible input
    >parameters. Both approaches, without making significant sacrifices, are
    >not very feasible for a typical real-life project (say, Sendmail), where
    >it is even difficult to imagine the input set, and some very minor
    >problems can propagate and pop up in the future. Having a good way to
    >predict program behavior in reasonable time would enable us to build
    >pretty good general models and compare them with functional
    >specifications. Unfortunately, we do not know how to predict program
    >behavior in a feasible manner.
    
    Given that this is your reply to my previous statement, you are obviously
    mistaking 'understanding logic' (or 'understanding on what basis the
    process is to behave') for 'understanding behavior in certain conditions'.
    The difference is as important as the difference between a compiler and a
    machine. Saying that the behavior of a process cannot be predicted under
    all circumstances is true, but saying that the logic of a process cannot
    be deduced is wrong.
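
    To make the distinction concrete, here is a rough C sketch. The function
    and the names are mine, invented for illustration, not taken from any
    real program:

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical fragment, for illustration only. */
        static int decide(const char *request)
        {
            /* Behavior: which branch runs depends on the concrete input;
             * there are far too many possible inputs to enumerate.       */
            if (strncmp(request, "admin", 5) == 0)
                return 1;
            return 0;
        }

        int main(void)
        {
            char buf[64];

            /* Logic: fgets() delivers untrusted data and decide() bases
             * its decision on it. That fact is deducible from the source
             * alone, on every execution path, without predicting which
             * path any single run will actually take.                   */
            if (fgets(buf, sizeof buf, stdin) != NULL)
                printf("%d\n", decide(buf));
            return 0;
        }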
    
    >
    >You can't tell the flow of untrusted information in the code in all
    >possible cases if you don't do one of above. And if you don't do it, you
    >can't tell whether functions that are not supposed to take untrusted data
    >don't accidentally receive it because it was retained at some point in
    >some variable or such...
    >
    
    What you track is integrity, not the data itself. The concept is much
    more abstract. If you fail to see it, check out
    http://pag.lcs.mit.edu/6.893/readings/denning-cacm76.pdf, section (3),
    Enforcement of Security. If you already knew about the principles of
    security requirements for implicit information flow, then I fail to
    understand why you are stuck on such a simple problem, which was
    basically solved in 1976. Later research demonstrated other failures of
    the principle in very specific circumstances, but none were related to
    the conditions you state.
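
    For those who would rather see it than read it, here is a minimal C
    sketch of what 'tracking integrity, not data' means. The two-level
    lattice and all the names are mine, a deliberate simplification for
    illustration, not Denning's own notation:

        #include <stdio.h>

        /* Illustrative only: a two-level integrity lattice, HIGH above LOW. */
        typedef enum { LOW = 0, HIGH = 1 } integrity_t;

        typedef struct {
            int         value;
            integrity_t label;
        } tagged_int;

        /* A derived value gets the weaker of its inputs' labels: combining
         * trusted and untrusted data yields untrusted data.                */
        static tagged_int add(tagged_int a, tagged_int b)
        {
            tagged_int r;
            r.value = a.value + b.value;
            r.label = (a.label < b.label) ? a.label : b.label;
            return r;
        }

        /* A sink that must act only on trusted data checks the label, never
         * the value: the decision is about integrity, not content.         */
        static int trusted_sink(tagged_int x)
        {
            if (x.label != HIGH)
                return -1;          /* reject: tainted by a LOW source */
            /* ... act on x.value ... */
            return 0;
        }

        int main(void)
        {
            tagged_int config = { 42, HIGH };  /* trusted constant       */
            tagged_int input  = {  7, LOW  };  /* e.g. read from network */

            printf("%d\n", trusted_sink(config));             /* prints  0 */
            printf("%d\n", trusted_sink(add(config, input))); /* prints -1 */
            return 0;
        }

    Note that the sink rejects the derived value without ever examining its
    content; only its provenance matters.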
    
    > > I do not see, then, how vulnerabilities are linked to execution paths.
    > > An application does not become vulnerable simply because it took
    > > execution path A instead of path B. It was already vulnerable in the first
    > > place because it based its decision on untrustworthy information.
    >
    >Huh? Say there's a sprintf() call in the code of some function. Is it a
    >vulnerability?  Well, perhaps. It all depends on how this code is reached.
    >Some execution path can lead to delivering nicely trimmed buffers, another
    >to delivering unchecked data. But is there this second path at all? Maybe
    >not, should we raise an alarm and return false positive? Maybe. What if
    >this will result in 10,000 false positives, dramatically increasing the
    >chances of overlooking one actual problem? And this is just a trivial
    >example, one might argue that "sprintf is bad, period". Or that "every
    >function should check input, period". But we are talking about existing
    >code, not lab design. And think about bugs such as signal delivery races.
    >They are caused solely by the fact that program execution takes an
    >unexpected turn and certain data is modified when the program relies on
    >having it intact.
    >
    >Another example: there is a function that calls snprintf and accepts
    >buffer and buffer size. Everything looks clean, but because of some design
    >flaw, buffer size passed from some other variable to this function at some
    >point is a bit too big. Hey, it is trivial to detect execution path
    >independent vulns. We have 'grep' to do it. But unfortunately, most
    >recent bugs were very, very execution path dependent, and very often you
    >can't tell it will go wrong before it does.
    
    Your referring to 'grep' for finding vulnerabilities demonstrates your
    lack of understanding of what I meant by execution path independence.
    Read the material I pointed you to first; then I hope you will make the
    respective distinctions between 'vulnerability regardless of data being
    delivered', 'vulnerability dependent on data being delivered', and
    'vulnerability dependent on integrity of data being delivered'.
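
    To make the three categories concrete, here is a hypothetical C
    fragment. The function, the names and the inputs are mine, for
    illustration only:

        #include <stdio.h>
        #include <string.h>

        /* Assume both 'input' and 'msg' originate outside the trust
         * boundary.                                                 */
        static void handle(const char *input, const char *msg)
        {
            char buf[64];

            /* (1) Regardless of data delivered: gets(buf) would be unsafe
             * on every path, whatever arrives. 'grep' does find these.   */

            /* (2) Dependent on data delivered: overflows only on paths
             * where input happens to exceed 63 bytes.                    */
            sprintf(buf, "%s", input);

            /* (3) Dependent on integrity of data delivered: the call
             * itself is bounded, but using msg as a format string trusts
             * untrusted information. The flaw is the trust decision, not
             * the particular bytes that arrive.                          */
            snprintf(buf, sizeof buf, msg);
            puts(buf);
        }

        int main(void)
        {
            handle("hello", "all quiet");   /* benign inputs for the demo */
            return 0;
        }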
    
    >
    >--
    >_____________________________________________________
    >Michal Zalewski [lcamtufat_private] [security]
    >[http://lcamtuf.coredump.cx] <=-=> bash$ :(){ :|:&};:
    >=-=> Did you know that clones never use mirrors? <=-=
    >           http://lcamtuf.coredump.cx/photo/
    >
    >
    
    
    


