RE: Identifying Win2K/XP Encrypted Files

From: George M. Garner Jr. (gmgarnerat_private)
Date: Wed Feb 12 2003 - 06:53:17 PST


    Brian,
    
    Good to hear from you.  First, permit me a clarification.  When I
    referred to the "write cache" in my previous post, I was thinking
    primarily of the cache manager file cache.
    http://msdn.microsoft.com/library/en-us/fileio/base/file_caching.asp.
    Most modern hard drives also maintain rather large on-disk write caches;
    and it appears to me that you are thinking primarily of the on-disk
    write cache in your comments.  Note that some applications, e.g. Active
    Directory, disable the on-disk write cache because of the risk of data
    corruption due to a sudden loss of power.  It also is possible to bypass
    both the cache manager and on-disk write caches programmatically. 
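
    For illustration only, here is a minimal Win32 sketch of what I mean
    by bypassing both caches programmatically:  a handle opened with
    FILE_FLAG_NO_BUFFERING (bypasses the cache manager file cache) and
    FILE_FLAG_WRITE_THROUGH (asks for writes to go through any
    intermediate cache to the media).  The path and sector size below are
    assumptions, not part of the original discussion.

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* NO_BUFFERING bypasses the cache manager file cache;
               WRITE_THROUGH requests that writes go through to the media. */
            HANDLE h = CreateFile("C:\\evidence\\output.bin",   /* hypothetical path */
                                  GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                  FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                                  NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            /* Unbuffered I/O must be sector-aligned; VirtualAlloc returns
               page-aligned memory, which satisfies the requirement. */
            DWORD  sector  = 512;              /* assumed sector size */
            BYTE  *buf     = (BYTE *)VirtualAlloc(NULL, sector, MEM_COMMIT, PAGE_READWRITE);
            DWORD  written = 0;

            if (buf && WriteFile(h, buf, sector, &written, NULL))
                printf("Wrote %lu bytes without touching the file cache.\n", written);

            if (buf) VirtualFree(buf, 0, MEM_RELEASE);
            CloseHandle(h);
            return 0;
        }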
    
    >> Pulling the plug gives you the state at one instant.<<
    
    Pulling the plug gives you the state *on the oxide* at one instant.  The
    file system of a mounted volume on a running computer system includes
    memory resident components, some of which may never be flushed to disk.
    See http://msdn.microsoft.com/library/en-us/fileio/base/file_caching.asp
    ("Lazy writers do not flush temporary files, because the assumption is
    that they will be deleted by the system.")  Your statement that pulling
    the plug is "better" than live acquisition makes a number of assumptions
    that need to be stated.  Better for what?  What is it that you are
    trying to prove, or disprove?  What do you suspect?  What will the
    standard of evidence be?  What methods will be used to analyze and
    interpret the evidence?  
    
    In many cases, pulling the plug will be better than live acquisition.
    In other cases, it will not be better, particularly if pulling the plug
    results in the loss of critical (volatile or encrypted) evidence.
    Fortunately, it's not necessarily an either/or proposition:  It is
    possible to do "some limited poking around," as another commentator on
    the list calls it, or even a live image acquisition prior to pulling the
    plug.   
    
>> On the other hand, a live acquisition will require the usage of the
    >> cache and the I/O will force the caches to be flushed more
    >> frequently. <<
    
    A live acquisition should not *directly* result in information being
    written to the drive that is being imaged.  Direct reads to a drive
    (\\.\PhysicalDrivex) or logical volume (\\.\C:)  bypass the cache
    manager cache.  I do not see how reading from a drive or logical volume
    will affect the rate at which the on-drive *write* cache is flushed to
    disk.
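
    As a sketch of what such a direct read looks like (the device name and
    sector size are assumptions), the imaging handle is opened on the
    device object itself, so reads never pass through the cache manager's
    file cache:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* "\\.\PhysicalDrive0" for the whole drive, or "\\.\C:" for a
               logical volume.  The sharing flags leave the mounted volume alone. */
            HANDLE h = CreateFile("\\\\.\\PhysicalDrive0", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            /* Raw device reads must be sector-aligned in offset, length and
               buffer address; VirtualAlloc gives page alignment. */
            DWORD  sector = 512;               /* assumed sector size */
            BYTE  *buf    = (BYTE *)VirtualAlloc(NULL, sector, MEM_COMMIT, PAGE_READWRITE);
            DWORD  read   = 0;

            if (buf && ReadFile(h, buf, sector, &read, NULL))
                printf("Read %lu bytes directly from the device.\n", read);

            if (buf) VirtualFree(buf, 0, MEM_RELEASE);
            CloseHandle(h);
            return 0;
        }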
    
    A live acquisition may *indirectly* result in information being written
    to the system volume (or other volume on which a pagefile is located)
    due to the effects which the forensic application has on the host
    operating system, e.g. if the OS exceeds its commit limit and pages
    information to the system pagefile(s) or the host OS logs events to the
    Windows Eventlog.  If the write cache has not been disabled on a drive,
    system paging and event logging may affect the rate at which the
    on-drive write cache is flushed.  Event logging additionally may affect
    the rate at which the cache manager cache for a logical volume is
    flushed.  
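
    Whether the on-drive write cache is even enabled can be checked before
    acquisition.  A rough sketch (the device name is an assumption) using
    IOCTL_DISK_GET_CACHE_INFORMATION:

        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE h = CreateFile("\\\\.\\PhysicalDrive0", GENERIC_READ,
                                  FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                  OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            DISK_CACHE_INFORMATION dci;
            DWORD got = 0;

            /* Reports the drive's read/write cache settings. */
            if (DeviceIoControl(h, IOCTL_DISK_GET_CACHE_INFORMATION, NULL, 0,
                                &dci, sizeof(dci), &got, NULL))
                printf("On-drive write cache is %s.\n",
                       dci.WriteCacheEnabled ? "enabled" : "disabled");
            else
                printf("IOCTL_DISK_GET_CACHE_INFORMATION failed: %lu\n",
                       GetLastError());

            CloseHandle(h);
            return 0;
        }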
    
    A well written forensic application should not result in significant
    amounts of information being written to any drive on the subject system
    (assuming that you do not run the forensic application from the drive
    that is being acquired :-)).  A greater risk is that the suspect still
    may be active on the subject system and will begin to destroy evidence.
    For this reason it is important to properly isolate the subject system
    in accordance with established incident handling protocols before
    attempting to acquire volatile digital evidence.  Periods of routine or
    scheduled system maintenance activities such as automatic
    defragmentation or system restore activity also should be avoided.
    
>> ...(for example, all clusters that were allocated for files after the
    >> MFT was imaged would not be seen using normal tools... <<
    
    NTFS is a "recoverable" file system.  A recoverable file system uses
    journaling to ensure that volume metadata may be restored to a
    consistent state in the event of a sudden loss of power.  
    
    A recoverable file system does not guarantee the consistency of a
mounted volume at any moment in time.  More likely, a mounted volume
    will not be in a fully consistent state.  The only way to acquire a
    consistent image from a mounted NTFS volume is to flush the file system
    to disk prior to imaging, either programmatically or by shutting the
    system down normally.  Flushing a mounted volume will, of course,
    destroy any data on the oxide that is overwritten by cached file
    contents or metadata. 
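
    Programmatically, the flush can be done by passing a volume handle to
    FlushFileBuffers, which writes the cached data and metadata for that
    volume to disk.  A minimal sketch (the target volume is an assumption;
    this requires administrative rights and, as noted, it overwrites
    whatever was on the oxide underneath):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Write access to the volume is required in order to flush it. */
            HANDLE hVol = CreateFile("\\\\.\\C:", GENERIC_WRITE,
                                     FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                     OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            /* Passing a volume handle flushes all cached data for open files
               on that volume, plus the volume's metadata. */
            if (FlushFileBuffers(hVol))
                printf("Volume flushed to disk.\n");
            else
                printf("FlushFileBuffers failed: %lu\n", GetLastError());

            CloseHandle(hVol);
            return 0;
        }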
    
    A recoverable file system also does not guarantee the recoverability of
    user data.  Both pulling the plug and live acquisition are likely to
    result in some user data loss.  But pulling the plug is likely to result
    in more user data loss.  As you noted in your previous post, live
acquisition takes longer, so more user data will be captured during the
    imaging process.  
    
    When faced with a decision between capturing more consistent file system
    metadata and more user file data, it is not immediately clear to me
    which is the "better" choice.  Usable information may be acquired even
    from badly damaged file systems.  Also, metadata inconsistencies
    themselves might provide important clues as to what is happening on the
    subject system at the time the evidence was acquired.  
    
    Of course, one could always re-image the MFT at the end of a live
    acquisition.  A careful comparison of the MFT before and after a live
    acquisition might provide valuable insight into the nature of the change
    that is underway on the subject system.
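
    As a rough sketch of how the start of the MFT might be captured on a
    live volume (the target volume is an assumption, and only the first
    MFT extent is read here):  FSCTL_GET_NTFS_VOLUME_DATA reports where
    the MFT begins, and the file record segments are then read raw from
    the volume handle, once before and once after the acquisition, for
    comparison.

        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE hVol = CreateFile("\\\\.\\C:", GENERIC_READ,
                                     FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                     OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            NTFS_VOLUME_DATA_BUFFER vdb;
            DWORD got = 0;

            /* Reports, among other things, the MFT's starting cluster. */
            if (!DeviceIoControl(hVol, FSCTL_GET_NTFS_VOLUME_DATA, NULL, 0,
                                 &vdb, sizeof(vdb), &got, NULL)) {
                printf("FSCTL_GET_NTFS_VOLUME_DATA failed: %lu\n", GetLastError());
                CloseHandle(hVol);
                return 1;
            }

            LARGE_INTEGER offset;
            offset.QuadPart = vdb.MftStartLcn.QuadPart * vdb.BytesPerCluster;
            SetFilePointerEx(hVol, offset, NULL, FILE_BEGIN);

            /* Read one cluster's worth of file record segments from the start
               of the MFT; fragmented MFT runs further into the disk are not
               captured by this first-extent read. */
            DWORD  len  = vdb.BytesPerCluster;
            BYTE  *buf  = (BYTE *)VirtualAlloc(NULL, len, MEM_COMMIT, PAGE_READWRITE);
            DWORD  read = 0;

            if (buf && ReadFile(hVol, buf, len, &read, NULL))
                printf("Read %lu bytes of the MFT starting at LCN %I64d.\n",
                       read, vdb.MftStartLcn.QuadPart);

            if (buf) VirtualFree(buf, 0, MEM_RELEASE);
            CloseHandle(hVol);
            return 0;
        }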
    
    >> ...(except for the fragmented MFT entries that are further in the 
    >> disk)). <<
    
These changes also might be visible in the MFT mirror ($MftMirr) that
    usually is located towards the middle of the volume, or in the log file
    ($LogFile) or in the USN Journal ($Extend\$UsnJrnl).  A thorough
    investigation usually ends up looking at unallocated space anyway
    because information is often hidden there.
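
    The USN Journal, for example, can be queried directly on the live
    volume.  A minimal sketch (the volume name is an assumption) using
    FSCTL_QUERY_USN_JOURNAL to see how far the journal has advanced:

        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE hVol = CreateFile("\\\\.\\C:", GENERIC_READ,
                                     FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                     OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) {
                printf("CreateFile failed: %lu\n", GetLastError());
                return 1;
            }

            USN_JOURNAL_DATA ujd;
            DWORD got = 0;

            /* Reports the journal ID and the range of valid USN records;
               comparing NextUsn before and after an acquisition shows how
               much file system activity occurred in between. */
            if (DeviceIoControl(hVol, FSCTL_QUERY_USN_JOURNAL, NULL, 0,
                                &ujd, sizeof(ujd), &got, NULL))
                printf("USN journal 0x%I64x: FirstUsn %I64d, NextUsn %I64d\n",
                       ujd.UsnJournalID, ujd.FirstUsn, ujd.NextUsn);
            else
                printf("FSCTL_QUERY_USN_JOURNAL failed: %lu\n", GetLastError());

            CloseHandle(hVol);
            return 0;
        }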
    
    My purpose here is not to argue in favor of live acquisition.  Pulling
    the plug or shutting the system down normally will be better methods in
    many (if not the majority) of cases.  Rather, it is to call attention to
    the nature of digital evidence gathered from a *running* computer
    system.  A running computer system is not inert.  It is by nature
    dynamic, interactive and interconnected.  It should come as no surprise
    that evidence gathered from a running computer system reflects the
    artifacts of this change.  The artifacts of change are viewed as
    "defects" only because of the manner in which we are accustomed to
    interpret the evidence.  Interpreted in a different light, the artifacts
    of change may provide an answer to what often is the most important
question concerning a running computer system:  What is it *doing*?
    
    Regards,
    
    George.
    
    
    -----------------------------------------------------------------
    This list is provided by the SecurityFocus ARIS analyzer service.
    For more information on this free incident handling, management 
    and tracking system please see: http://aris.securityfocus.com
    


