Re: Verifying file data integrity using L5

From: Jim Dennis (jimdat_private)
Date: Thu Dec 24 1998 - 17:21:37 PST

    > In article <75jlfu$lk3$1at_private> you wrote:
    >> This seems to be something that a good backup/archive system[1] ought
    >> to be recording whilst it backs up files, so that one could simply run
    >> a check of all files on a system against those in the database.
    >
    > tar has a compare option, comparing filestamps, size and md5.
    
            That comparison is of actual file contents, *NOT* checksums.
    
            That would be the GNU tar 'd' directive, as in:
    
                    tar df /dev/st0
    
            (issued from the same directory the archive was originally
            created from, so that the relative paths match up).
    
            The GNU tar 'd' (differences) option will compare size,
            permissions, ownership, and actual file contents.
    
            If you're going to use this as an integrity test
            tool --- make a baseline tape before the system is
            ever exposed to any threat (but after the rest of its
            configuration is done) *and write protect it*.  Naturally
            you could also create your baseline and cut it to a CDR.
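
            As a rough sketch (the tape device and directory list
            here are only examples --- adjust them for your own
            system), the baseline and the later check might look
            like:

                    # create the baseline from the filesystem root
                    cd / && tar cf /dev/st0 etc bin sbin lib usr

                    # later: from the same directory, list any differences
                    cd / && tar df /dev/st0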
    
    
    > rpm has a verify option
    
            The poor man's "Red Hat tripwire"; you issue a command like:
    
                    rpm -Va
    
            ... to verify (-V) the contents of all (-a) installed
            packages against the online database.
    
            You can also selectively compare these installations
            against the contents of package files (on your installation
            media/CD's, for example) by using commands of the form:
    
                    rpm -Vp foo-1.2-3.i386.rpm
    
            The difference between these invocations is that the first
            will trust the rpm databases stored in /var/lib/rpm/ while
            the other will compare the installation against the contents
            of a package file.
    
            In either case the output looks like:
    
    .M..L...   /opt
    .....U..   /tmp
    .M....G.   /usr/lib/adabas
    S.5....T   /bin/tar
    missing    /usr/doc/packages/ytree/README
    missing    /usr/doc/packages/kbd/README
    missing    /usr/doc/packages/lilo/README
    missing    /usr/doc/packages/umsprogs/README
    missing    /usr/doc/packages/wg15-locale/README
    S.5....T c /etc/login.defs
    
            Where the eight character string in the first field of each
            line is a series of flags --- . means "matches" and letters
            refer to comparison failures like "size" (S), "mode" (M),
            MD5 checksum (5), owner/group (U and G), timestamp (T),
            device node (D), and symlink (L).  There is an optional
            second field which might have a "d" or "c" in it --- and is
            otherwise filled with a space (yuck! --- ugly for shell
            and awk script post-processing) --- these mark
            "documentation" and "configuration" files.
    
    > cpio has a CRC option.
    
            While it has that option, it does nothing to report
            on ownership and permissions changes.  I'd like to see
            the FSF add that (or write a separate cmpcpio or cpiodiff
            utility).
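
            For what it's worth, here's a rough sketch of the crc
            format with GNU cpio.  Note that --only-verify-crc only
            checks the archive against its own stored checksums, not
            against the live filesystem --- which is exactly why a
            cpiodiff would be nice:

                    # write an archive in the SVR4 "crc" format
                    find /etc -depth -print | cpio -o -H crc > /tmp/etc.cpio

                    # later: check the stored checksums without extracting
                    cpio -i --only-verify-crc < /tmp/etc.cpio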
    
    > tar can be used with Amanda backup.
    
            It seems like it would be ugly to use these to generate
            "differences", though, since you have to use 'dd' to seek
            past amanda's headers to get to the embedded tar image.
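
            If I remember right, amanda writes a 32KB header in front
            of each dump image (the header text itself tells you the
            exact restore pipeline), so something along these lines
            might work --- block size, tape positioning, and any
            compression step will vary:

                    # position the tape at the desired dump file first (mt fsf ...)
                    # then skip the 32k amanda header and diff the embedded tar
                    dd if=/dev/nst0 bs=32k skip=1 | tar df -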
    
     All of these integrity test techniques beg the question.  You have
     to be able to trust the integrity of the tools being used for the
     audit.  Thus a comprehensive audit must start from "known good"
     (preferably vendor supplied, read-only) bootable media.
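
     On a Red Hat-ish box, one way to do that (a sketch only --- the
     device name is just an example, and it still trusts the rpm
     database on the suspect disk) is to boot a trusted rescue disk,
     mount the suspect root read-only, and run the rescue media's own
     rpm against it:

             # from a trusted, read-only rescue boot
             mount -o ro /dev/hda1 /mnt/suspect
             rpm --root /mnt/suspect -Va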
    
     (Assume the case where 'root' has been compromised on your target
     --- the attacker could replace your kernel and libc with a version
     that will detect the execution of your auditing tools (tripwire,
     l5, binaudit, whatever) and replace the output with seemingly valid
     results.  Unlikely?  Yes.  Impossible?  Prove it.)
    
     One thing that bugs me, with the advent of writable "flash BIOS"
     and with things like Intel's "encrypted upgradable microcode"
     (embedded in their Pentium II, MMX, and later processors) and
     with the designs of some SCSI host adapters and drives (some
     drive electronics allegedly read their operating code from their
     "diagnostic cylinder"), is the danger that someone will embed
     hostile code at a very low level.  These "features" should be
     implemented with absolute, hard-wired, physical write protection.
     (Make me open the case and put a jumper on a set of pins before
     any of these "updates" can be written to my electronics, DAMNIT!)
    
    
    --
    Jim Dennis  (800) 938-4078              consultingat_private
    Proprietor, Starshine Technical Services:  http://www.starshine.org
    


