Re: [cftt] (unknown)

From: Mark St.Laurent (nmstlauat_private)
Date: Wed Dec 18 2002 - 12:45:19 PST


    Interesting. Thinking off the top of my head, I can
    see a Beowulf option coming into play if a facility
    (for example) performs forensics on network
    intrusions and there are terabytes of data to go
    through.
    
    Let's say you had 60 40 GB drives from a case. You dd
    each drive to an image, store each image on a server
    (or servers), mount each drive image via the loop
    device, and share it out (NFS). Share it out over
    Samba if you want to logically view it with external
    Windows tools (EnCase, FTK, etc.) in conjunction with
    the Beowulf process. Keep in mind that with Linux one
    could mount heterogeneous filesystems via the loop
    device (Sun, HP-UX, IRIX, Windows, etc.) if the case
    had a lot of different host types.
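    
    A rough sketch of that imaging and sharing step (the
    device name, the paths, and the export line are all
    made up for illustration, and each image is assumed
    to hold a single mountable filesystem):
    
        # Image a suspect drive to a file (assumes the drive is /dev/sdb)
        dd if=/dev/sdb of=/images/case01/drive01.dd bs=64k conv=noerror,sync
    
        # Mount the image read-only via the loop device
        mkdir -p /mnt/images/drive01
        mount -o loop,ro /images/case01/drive01.dd /mnt/images/drive01
    
        # Export the tree to the cluster nodes (line added to /etc/exports):
        #   /mnt/images  node*(ro)
        exportfs -a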
    
    A Beowulf cluster could parallelize a super-grep
    command (i.e., find /mnt/images -type f | xargs grep
    -if filename), where filename has a list of key words
    to go through, like 100 IP addresses or something. Of
    course the command above could be decked out a lot
    more, and would most likely be written in C with a
    PVM wrapper. The command might even map the search
    hits somehow to draw a picture or timeline. The
    Beowulf would make this time-intensive process
    quicker, especially with a lot of data to dig through.
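    
    A shell approximation of the fan-out idea (node names,
    paths, and the eight-node count are assumptions; the
    real thing would be the C/PVM program described above,
    and filenames with whitespace would need find -print0
    with xargs -0):
    
        # Build the list of files to search and split it into one
        # chunk per node, on the share every node can see
        find /mnt/images -type f > /mnt/share/filelist
        nfiles=$(wc -l < /mnt/share/filelist)
        split -l $(( nfiles / 8 + 1 )) /mnt/share/filelist /mnt/share/chunk.
    
        # Fan the chunks out; "keywords" holds the search terms
        n=1
        for chunk in /mnt/share/chunk.*; do
            ssh "node$n" "xargs grep -if /mnt/share/keywords < $chunk" \
                > /mnt/share/hits.$n &
            n=$((n+1))
        done
        wait                          # gather once every node finishes
        cat /mnt/share/hits.* > /mnt/share/all_hits.txt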
    
    One could parallelize a command that does negative
    hashing: hash every file and get rid of any knowns
    that exist on a big data set like in our example
    above, reducing the data with the NIST hash set
    (NSRL), for example.
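    
    The single-node version of that reduction might look
    like this (both file names are hypothetical; the
    known-hash list is assumed to be one MD5 per line in
    the first field):
    
        # Hash every file in the mounted images
        find /mnt/images -type f -exec md5sum {} + > /mnt/share/case_hashes.txt
    
        # Keep only files whose hash is NOT in the known set
        awk 'NR==FNR { known[$1]=1; next }
             !($1 in known) { print $2 }' \
            /mnt/share/nsrl_hashes.txt /mnt/share/case_hashes.txt \
            > /mnt/share/unknown_files.txt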
    
    One might even parallelize a command that goes
    through the headers of each file and copies all the
    hits to a staging location by file type (e.g., a
    magic-header check with the file command) for other
    analysis. One set of Beowulf nodes could do
    executables and images, another Office docs and
    audio, another ASCII and HTML, another misc, etc. Or
    let all the nodes break out the files one at a time.
    Spread across the cluster, even a huge list would be
    quick to work through, which would make categorizing
    large data sets fast.
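    
    One node's share of that sort might look like the
    following (staging paths and the type patterns are
    assumptions; name collisions in the staging
    directories are ignored for brevity):
    
        # Route each file to a staging directory by its magic type
        mkdir -p /stage/exe /stage/image /stage/office /stage/text /stage/misc
    
        find /mnt/images -type f | while read -r f; do
            case "$(file -b "$f")" in
                *executable*)        cp "$f" /stage/exe/ ;;
                *image*)             cp "$f" /stage/image/ ;;
                *"Microsoft"*|*"Composite Document"*)
                                     cp "$f" /stage/office/ ;;
                *text*|*HTML*)       cp "$f" /stage/text/ ;;
                *)                   cp "$f" /stage/misc/ ;;
            esac
        done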
    
    Just some ideas; I am sure there are a lot more out
    there. Interesting topic.
    
    Norman Mark St. Laurent
    Computer Sciences Corporation
    
    --- Jason Fuller <eforensicsat_private> wrote:
    > I am interested in a question posted early in one of our
    > forensics emailing lists. I want to implement a beowulf
    > designed network into the computer forensics examination
    > process. Does anyone currently use a beowulf for
    > examinations and if so, how are you incorporating it into
    > the process. For example, if you are using a windows based
    > app such as Access Data or Guidance Software how are you
    > "merging" the examinations into the beowulf network. Or on
    > the other hand, if you are using @Stake's software (some
    > linux-based app), how could this be implemented.
    >
    > I believe the earlier post was presented from a college
    > student researching this possibility for a research paper.
    > 
    > Thanks,
    > 
    > Jason Fuller
    > ABI
    
    
    
    


