Re: [logs] Log archival

From: Paul D. Robertson (probertsat_private)
Date: Thu Dec 12 2002 - 20:22:42 PST


    On Thu, 12 Dec 2002 erinat_private wrote:
    
    > My advocacy for open code review is based on notions of transparency, 
    > repeatability, falsifiability, and error rates - which are standards that 
    > courts use to judge the reliability of technical and scientific 
    > evidence.   The key assumption underlying your comment is: "...as long as 
    
    But bugs/kloc doesn't tell us *what* sort of error it is- whether it's 
    "doesn't log $foo" or "misinterprets IP address."  Just as there are 
    errors that affect security and those which don't, there are errors that 
    affect integrity and those which don't.  I think it's a slippery slope to 
    go too far down the error path- errors which definitely affect what's 
    logged are one thing, but overall trends are probably a bad basis, so 
    getting repeatability or validation or something akin to that in as a 
    primary metric would be more important than general error rates, 
    bugs/kloc or whatever.  I guess I'm saying that error rates are generally 
    a false positive, but I agree with your other metrics.
    
    > the scope of the expected behavior is tested."  That said, would you agree 
    > that black box testing of complex software, say IDS, given the inability 
    > (at least as our methods currently allow us) to test all the possible 
    > input/output variables in a network environment, poses a challenge to 
    > validating the function ... as compared to verifying with open code?
    
    Well, we kinda have an IDS testing and certification program too- and so 
    long as what you're testing is detection of attacks, it's not all that 
    difficult[2] to black box test, and it's certainly *way* easier than 
    trying to read code and figure out whether the code applies a signature 
    in a specific way given packets that look like...  In fact, it's easier 
    to automate the sending of packets and the testing of an IDS than to 
    automate the reading of code ;)
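
    To make that concrete, here's a rough sketch of the sort of automation I 
    mean- the target host, port, probe, alert-log path, and expected 
    signature string are all made up for illustration, so substitute 
    whatever your lab setup actually uses:

        import socket
        import time

        # Hypothetical test parameters - adjust for your own lab setup.
        TARGET_HOST = "192.0.2.10"       # host sitting behind the IDS sensor
        TARGET_PORT = 80
        ATTACK_PROBE = b"GET /../../etc/passwd HTTP/1.0\r\n\r\n"  # known-signature traffic
        ALERT_LOG = "/var/log/ids/alerts.log"     # wherever the IDS writes alerts
        EXPECTED_MARKER = "WEB-MISC /etc/passwd"  # string the signature should emit

        def send_probe():
            """Send one crafted request so the sensor sees the attack traffic."""
            s = socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5)
            try:
                s.sendall(ATTACK_PROBE)
            finally:
                s.close()

        def ids_alerted():
            """Black-box check: did the expected alert show up in the log?"""
            try:
                with open(ALERT_LOG, "r", errors="replace") as f:
                    return any(EXPECTED_MARKER in line for line in f)
            except OSError:
                return False

        if __name__ == "__main__":
            send_probe()
            time.sleep(2)      # give the sensor a moment to flush its alert
            print("PASS" if ids_alerted() else "FAIL")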
    
    > As for testability, can one determine if there are portions of code that 
    > have not been executed (thus, potentially affecting the evidentiary 
    > outcome) in the case of black box software?
    
    Yep, you sure could do that, but you could black box verify that input X 
    produced output Y too- which in my mind gives a better validation of 
    functionality than code paths.
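
    For example (just a sketch- "logfilter" and its flag are made-up 
    stand-ins for whatever log-processing tool you'd actually be validating), 
    the black-box version of "input X produced output Y" is nothing more 
    than:

        import subprocess

        # Hypothetical known-good pair: feed record X, expect normalized Y.
        INPUT_RECORD = ("Dec 12 12:00:00 host sshd[123]: "
                        "Accepted password for root from 127.0.0.2\n")
        EXPECTED_OUTPUT = "127.0.0.2 ssh-login root\n"

        result = subprocess.run(
            ["logfilter", "--normalize"],   # made-up command and flag
            input=INPUT_RECORD,
            capture_output=True,
            text=True,
        )

        # The black-box assertion: given X, the tool must emit exactly Y.
        assert result.stdout == EXPECTED_OUTPUT, \
            "validation failed: got %r" % result.stdout
        print("input X produced output Y")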
    
    > >Surely though, there's a threshold of normalcy that, absent other
    > >evidence, should stand as true.  For instance, I know that the creation
    > >and last accessed times on my disk drive are changeable, but absent
    > >evidence that they have been, each instance of a file timestamp shouldn't
    > >be a mini-challenge waiting to happen.  Now, certainly part of this may be
    > >that the analysis and more importantly, standards of analysis might need
    > >to be professionally done or created.
    > 
    > That 'threshold of normalcy' goes to the issue of presumptions that are the 
    > default settings, if you will, and subject to rebuttal by the opponent to 
    > the evidence in question.  And, yes, I agree that we can't split hairs on 
    > every issue raised by the mutability of digital evidence just because there 
    > is a "possibility" that timestamps may have been altered, for instance.  If 
    > we were to go down that road, the "unknown 3rd party" defense of "isn't it 
    > POSSIBLE that, because my client's computer was connected to the Internet, 
    > a hacker got on his system and planted everything" would reign supreme and 
    > prosecutors/plaintiffs would be forced down a road of having to prove a 
    > negative.
    
    That seems to be the current trend in defense ;)
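
    (For what it's worth, the "timestamps are changeable" point above is 
    trivially demonstrable.  A minimal sketch- the filename is made up, and 
    no special privileges are involved:

        import os
        import time

        PATH = "evidence.txt"        # hypothetical file of interest

        # Create the file and note its real modification time.
        with open(PATH, "w") as f:
            f.write("some record\n")
        print("real mtime:    ", time.ctime(os.stat(PATH).st_mtime))

        # Anyone with write access can back-date it to an arbitrary moment,
        # here Jan 1 2000, with a single library call.
        fake = time.mktime((2000, 1, 1, 0, 0, 0, 0, 0, -1))
        os.utime(PATH, (fake, fake))
        print("back-dated to: ", time.ctime(os.stat(PATH).st_mtime))

    Which is exactly why "could it have been altered" can't be the test by 
    itself; you need some affirmative indication that it was.)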
    
    > Your point is well taken, and it boils down to the notion of defining 
    > reasonableness in a context that many judges/attorneys/laypersons have 
    > little basis upon which to judge as compared to familiar, physical world 
    > settings.
    
    So, now that we've got two good lawyers[1] and a bunch of logging geeks 
    talking, is there anything we can do to help with that?  It seems to me 
    that the right amount of work up front in (a) education and (b) logging 
    software could do much to improve the situation.
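
    As a strawman for (b): one thing the logging software itself could do is 
    hash-chain its records, so that later tampering at least becomes 
    detectable.  A minimal sketch, with the file name and record format 
    invented for illustration:

        import hashlib

        LOG_PATH = "chained.log"     # hypothetical log file

        def append_record(message, prev_digest):
            """Append a record whose digest covers the previous record's
            digest, so any later edit, insertion, or deletion of a line
            breaks the chain from that point on."""
            digest = hashlib.sha256(
                prev_digest.encode() + message.encode()).hexdigest()
            with open(LOG_PATH, "a") as f:
                f.write("%s %s\n" % (digest, message))
            return digest

        # Seed the chain with a fixed value; each record depends on the last.
        prev = "0" * 64
        for msg in ("sshd: accepted 127.0.0.2", "sshd: accepted 127.0.0.3"):
            prev = append_record(msg, prev)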
    
    I guess what I'm getting at is: how hard and fast are the FRE?  Are 
    better rules possible to implement, and should that be a goal that a 
    bunch of us computer folks could slog away at with the right input from 
    the other sides of the table?
    
    > >If there's a complete and comprehensive explanation- does that provide
    > >enough protection?  It seems to me that such an explanation doesn't
    > >affect the integrity of the process, which is the thing that *should* be
    > >the issue, so perhaps you can shed some light on the rationale from a
    > >lawyer's perspective of why this is necessary- I certainly don't need to
    > >produce a complete and comprehensive explanation of an internal
    > >combustion engine in a hit-and-run accident (IOW, is this an artifact of
    > >the court and the defense just needing to understand the technology, or is
    > >there a fundamental need of the prosecution to understand why a particular
    > >record was generated?)
    > 
    > Absolutely, multiple streams of corroborating evidence are the means by 
    > which integrity is inferred in many cases.   And yes, there is a degree of 
    > education and becoming comfortable with the technology that is evolving 
    > ...... from a social standpoint, our digital gauges are still forming, so 
    > to speak.
    
    Darnit!  You completely sidestepped my understanding vs. integrity of 
    process trap.  Seriously- I think we need to start to evolve a way of 
    talking about the integrity of digital processes, and the indicators that 
    change our confidence in such.  Is that too far out for the law?  Are we 
    doomed to try to paint in black and white to get evidence admitted, or can 
    we give good shades of gray confidences?  I've heard of folks leaving 
    trojans behind on the PC they do the break-in from to try to paint a 
    picture of "dog ate my homework" in defense.  I'd rather not give them a 
    get out of jail free card, but neither do I want one of them painting 
    someone else into jail with falsified logs.  With that in mind, are rules 
    vs. analysis the right way to even go?  Is there room in this scheme for 
    a 3rd party that "tests" the evidence in some way and determines how 
    likely it is to be good to go?
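
    Continuing the hash-chain strawman from above: a third party handed such 
    a log could at least say where the chain first breaks- records before 
    that point verify, records after it are suspect- which is a shade of 
    gray rather than a flat yes/no.  Again just a sketch against the made-up 
    format above:

        import hashlib

        LOG_PATH = "chained.log"     # same hypothetical file as before

        def first_break(path):
            """Return the line number where the chain first fails, or None."""
            prev = "0" * 64
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    digest, _, message = line.rstrip("\n").partition(" ")
                    expected = hashlib.sha256(
                        prev.encode() + message.encode()).hexdigest()
                    if digest != expected:
                        return lineno
                    prev = digest
            return None

        broken_at = first_break(LOG_PATH)
        if broken_at is None:
            print("chain verifies end to end")
        else:
            print("records from line %d onward are suspect" % broken_at)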
    
    > >It's my guess that a lot of precedent was set before we had fairly
    > >widespread computer evidence technicians- can you perhaps enlighten us on
    > >where that puts the courts in terms of admissibility and veracity?
    > 
    > There's actually a very interesting issue brewing along the lines of 
    > testimony by forensic examiners and those employing forensic analysis 
    > software.  That is to say, should these persons be allowed to take the 
    > stand as experts or fact witnesses/technicians?  The dangerous slope we're 
    
    I have a friend who's just done both in a rather high-profile case.  He 
    did the imaging and testified to that; the defense (in hindsight 
    erroneously) stipulated to him being an expert during that phase, and 
    lost the ability to challenge his expertise later in the trial when he 
    was made the prosecution's expert.  I'm not sure that it isn't entirely 
    appropriate to have a bit of technician and a bit of analyst in the 
    people doing the job- it certainly takes us away from the situation you 
    describe next...
    
    > heading toward is what I call "automating experts", and it is partially a 
    > result of software being engineered to be usable by 
    > less-than-computer-literate folks.  I'm all for improving human-computer 
    > interface and increasing functionality, but don't call a point-and-clicker 
    > an expert, especially if the opinion that contributes to whether someone 
    > gets sent down the river or has to pay out $20M is a regurgitation of what 
    > a software program told him/her, exclusively.
    > 
    
    [snip]
    
    > 
    > >I'm not sure this is always true- having done a fair amount of
    > >investigation in cases where the logs were most of what was there, but the
    > >suspect's admission of performing the act corroborated them, omission and
    > >decision points aren't all that important compared to actual data points-
    > >the fact that 127.0.0.2 connected to the SSH port at 12:00 EST is a good
    > >indicator even if the IDENT record wasn't captured by a remote log
    > >server because of the SYN flood the attacker started in conjunction with
    > >the attack.
    > >
    > >In other words, while it's sometimes possible that the record saying
    > >"127.0.0.3" connected next was dropped, I think we shouldn't then
    > >automatically dismiss the logged evidence as hearsay and not admit it, but
    > >we should rely on expert analysis of the logs to help separate the
    > >possible, the probable, and the factual.
    > 
    > I think you made my point nicely, as it pertains to the importance of 
    > context and establishing multiple streams of corroborating 
    > evidence.  Digital evidence must be taken in context, which includes other 
    > digital and non-digital sources of proof.
    
    Given the volatility of evidence under usual circumstances (which is why 
    we were discussing write-once media just a few messages ago) it's possible 
    that you don't have a large amount of corroborating evidence.  Given that 
    the virtual accelerant can be poured over the disk and lit, we need to be 
    able to go with what we have as much as possible- assuming there isn't a 
    high probability it's been tampered with.
    
    > And definitely, the expert opinion is crucial to translating this binary 
    > data within the context from which it came into conclusions regarding an 
    > ultimate issue.... as I said before, we can do a lot with IT, but we have 
    > yet to be able to automate expertise.
    > 
    > 
    > >I'm sure I'll get clubbed down if we're veering too far from the
    > >technical, but I'd appreciate more of your insight, as these issues are
    > >near and dear to one of my roles :)
    > >
    > >I think it'd be nice to see the FRE have a "computer record" section,
    > >rather than trying to lump them all into the buckets we all started out
    > >with.  There's a fine balance between falsified evidence and criminals
    > >escaping justice that needs to be struck.  If we can address even
    > >some of that in the logging technology, we'll all be better off.
    > 
    > Agree..... this is an issue worthy of paying attention to.
    > 
    > If you get clubbed for veering too far from the technical, I may not have 
    > much time left based on my enclosed short response.......... in any case, 
    > keep it coming.
    
    I'm sure Tina will cut me off when I get too boorish (the advantages of 
    moderating a mailing list of technoweenies can never be underestimated...)
    
    Thanks,
    
    Paul
    [1] Sheesh, that's the second time I've had to say that.
    [2] For values of "not too difficult" that aren't trivial, but look easy 
    when you're the guy helping to pick the attacks, not the guy stuck 
    implementing the test ;)
    -----------------------------------------------------------------------------
    Paul D. Robertson      "My statements in this message are personal opinions
    probertsat_private      which may have no basis whatsoever in fact."
    probertsonat_private Director of Risk Assessment TruSecure Corporation
    
    _______________________________________________
    LogAnalysis mailing list
    LogAnalysisat_private
    http://lists.shmoo.com/mailman/listinfo/loganalysis
    


