RE: [logs] Seeking suggestions on a secure central syslog setup..

From: Desai, Ashish (Ashish.Desaiat_private)
Date: Mon Nov 12 2001 - 10:16:40 PST

  • Next message: Nick Vargish: "RE: [logs] Syslog client alternatives for NT"

    >     I know I've specified a bit of a wishlist beyond just basic syslog
    > log viewing/reporting, but I supposed I ought to aim high and see what
    > comes out of it.   I'm toying with the idea of writing such a system
    > such as above, but, I don't know if I have the time to dedicate to it;
    > so I'm hoping that at least something will come somewhere close.
    > 
    >     So, this is where I turn it over to you -- can anyone give me any
    > examples of how you may have managed a situation like this?   Or
    > specifically of any software available to do as I described?
    > 
    >     Any comments, examples, or pointers to resources would be greatly
    > appreciated.
    
    One of the things we did was to write a script/program
    [in the language of your choice; we chose Perl] for every data source we
    have. All the scripts support at least the following arguments:
    	start date
    	end date
    	keyword regex (optional)
    Example:
    	pdata -s 12/15/2001 -e 12/20/2001 -k "128.*"
    	# this spits out to stdout all the data between those dates that
    	# matches the pattern
    
    This satisfied our power/intermediate users, as they could continue to
    use the standard Unix tools (grep, pipes, etc.) to play with the results.
    Note that we output the data in tab-delimited format, so the standard IFS
    handling works fine with the output.
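A minimal sketch of one such per-source script, in Python rather than Perl purely for illustration; the file layout (one tab-delimited file per day named YYYY-MM-DD.log), the date format, and the `pdata`-style option names are all assumptions:

```python
#!/usr/bin/env python3
"""Sketch of a per-data-source query script (pdata-style interface)."""
import argparse
import re
import sys
from datetime import datetime, timedelta
from pathlib import Path

# Assumption: one tab-delimited log file per day, named YYYY-MM-DD.log,
# under a per-source directory. Adjust to your actual layout.
LOG_DIR = Path("/var/log/pdata")

def parse_date(s):
    """Dates on the command line look like 12/15/2001."""
    return datetime.strptime(s, "%m/%d/%Y").date()

def query_logs(start, end, keyword=None, log_dir=LOG_DIR):
    """Yield every tab-delimited line between start and end (inclusive),
    optionally filtered by a regex, exactly as stored on disk."""
    pattern = re.compile(keyword) if keyword else None
    day = start
    while day <= end:
        path = log_dir / f"{day.isoformat()}.log"
        if path.exists():
            with path.open() as fh:
                for line in fh:
                    if pattern is None or pattern.search(line):
                        yield line
        day += timedelta(days=1)

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("-s", "--start", required=True, type=parse_date)
    ap.add_argument("-e", "--end", required=True, type=parse_date)
    ap.add_argument("-k", "--keyword", help="optional filter regex")
    args = ap.parse_args()
    for line in query_logs(args.start, args.end, args.keyword):
        sys.stdout.write(line)

# add the usual `if __name__ == "__main__": main()` guard when saving
# this as a standalone script
```

Since the output goes straight to stdout, it composes with grep, cut, and the rest of the Unix toolchain exactly as described above.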
    
    Next we wrote small CGIs that call these scripts and save the output to a
    file. We then send the requester email with a URL to the results
    (compressed with gzip). This allows naive users to request data and fetch
    it via a browser. Some of these users put the data in Excel for analysis,
    while others just use a text editor. The advantage of this is that once
    you implement the scripts, you can trivially reuse them for the web
    interface.
    Also, some of our data sets are large; it can take up to 48 hours to
    search through them. So the web interface just dumps the request into a
    file, and a shell script queues up the request. This way the requester
    does not have to wait at the web interface.
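The spool-and-queue idea above can be sketched as follows (again in Python for illustration; the spool directory, file naming, and JSON request format are all invented, and the actual search/email steps are left out):

```python
#!/usr/bin/env python3
"""Sketch: the CGI drops a request file into a spool directory and returns
immediately; a separate worker (run from cron or a loop) claims requests
one at a time, runs the search, and mails the requester a results URL."""
import json
import uuid
from pathlib import Path

SPOOL = Path("/var/spool/logquery")   # assumed location

def enqueue(start, end, keyword, email):
    """Called by the CGI: record the request and return at once."""
    SPOOL.mkdir(parents=True, exist_ok=True)
    req = {"start": start, "end": end, "keyword": keyword, "email": email}
    # write-then-rename so the worker never sees a half-written file
    tmp = SPOOL / f".{uuid.uuid4().hex}.tmp"
    tmp.write_text(json.dumps(req))
    final = SPOOL / f"{uuid.uuid4().hex}.req"
    tmp.rename(final)
    return final.name

def next_request():
    """Called by the worker: claim the oldest pending request, if any."""
    pending = sorted(SPOOL.glob("*.req"), key=lambda p: p.stat().st_mtime)
    if not pending:
        return None
    req_file = pending[0]
    claimed = req_file.with_suffix(".run")
    req_file.rename(claimed)          # rename acts as a cheap lock
    return json.loads(claimed.read_text())
```

The rename-based claim keeps the worker simple: a crashed run leaves a `.run` file behind for inspection instead of silently losing the request.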
    
    The other BIG advantage of using scripts with a well-defined interface is
    that you can NFS-mount your data on multiple clients and all your code
    will work just fine. In fact, in our group we SMB-mount the filesystem,
    and all our code runs as-is on NT (you just have to deal with the drive
    letter mapping). This allows our real power users to use their own
    machines' CPU cycles.
    You can also use the Grid Engine software from Sun to do load
    distribution.
    
    Let me know if you have any questions
    
    Ashish
    
    ---------------------------------------------------------------------
    To unsubscribe, e-mail: loganalysis-unsubscribeat_private
    For additional commands, e-mail: loganalysis-helpat_private
    



    This archive was generated by hypermail 2b30 : Mon Nov 12 2001 - 11:41:56 PST