[Apologies for the long response] Well -- not to be too difficult, but I'd argue that "abnormal" is largely in the eye of the beholder. :) The key, I believe, is to develop a log analysis methodology such that you can effectively monitor issues in your specific environment. The tools that implement the methodology have always been a secondary consideration in my mind; too many people pick the technology and then try to fit a methodology within its constraints. I've focused quite heavily on application and OS log analysis, rather than NIDS (Snort does me just fine).

The methodology I've used successfully includes the following components (in summary):

1) Get a list of the hardware and software vendors used in your network.

   If you don't know what's in your own network [a common problem], nmap and SNMP probes become valuable. Host assessment clients should be installed on all capable systems so they can respond to a central inventory request with information such as: interface list, architecture, OS, list of running processes [process inventory], etc.

2) Solicit from your vendors a list of the log messages each component can produce.

   A comprehensive list is critical for identifying which events are worth monitoring from a security perspective, and for correlation analysis between rules. This should be part of any vendor acceptance process. I've only encountered a few vendors who don't have such information, or who have it but won't release it.

3) Build a monitoring policy from each message.

   Analyze each message to determine what its monitoring metrics are going to be. This also lets you ensure that indirect security events are monitored. For example:

   a) You may develop several policy rules for "su root failures":

      "If you see 3 su root failures from a single user on a single system in 5 minutes, do X..." (problem with a user)
      "If you see 10 su root failures across all users on a single system in 10 minutes, do Y..." (problem with a system)
      "If you see 30 su root failures from any user on any system in 15 minutes, do Z..." (problem in the network)

      This policy analysis also lets you take events that aren't "directly" security related and turn them into policy:

      "If you see 20 'interface down/interface up' messages from Cisco A in 5 minutes, do A..."
      "If you see 100 'interface down/interface up' messages from ALL Ciscos in 15 minutes, do B..."

   b) It also lets you correlate rules together. For example, you may end up collecting data you wouldn't otherwise analyze, purely for the purposes of correlation.

4) Once you know ALL of the events that can be generated, and have ALL of the policies built, you also know what data you DON'T want to see. Your technology pick should support the ability to:

   a) Feed ALL data from a client to a collector (preferably a combination of TCP and UDP feeds, depending on the 'importance' of the data).
   b) Have the collector PURGE all data that it knows you DON'T want to see.
   c) Run the rest of the data through the policy.
   d) Route data that matches neither your PURGE list nor your POLICY list into an UNMATCH, or NEW, bucket.

   This lets you find events you weren't aware existed (new upgrades, new capabilities, forgetful vendors, apps not on your list, etc.).

5) The last key is the ability for your IDS solution to interact with your installed clients in the field.

   This lets you implement a "false positive" action from your IDS server to verify the severity of specific types of alarms. For example: your policy triggers because you've seen an SU ROOT failure alarm from system A. Your action could be "INVESTIGATE", where your IDS server connects to the client and analyzes the current security state of the system:

   = collect where the user logged in from
   = check how the user logged in (lsof analysis, password, no password, ssh, telnet, etc.;
     compare with how they 'usually' log in)
   = look for Trojan or backdoor installs
   = analyze the user's history for "evil" commands (gcc, every command ending in ';', vi passwd, etc.)
   = lsof analysis
   = is the user a "ROOT" user?
   = md5 checksum analysis of key binaries (a hack to TCT does this quite well)

The above checks can be used for almost all IDS policy rules, with some minor exceptions (for anyone interested, I can provide some insight into the "INVESTIGATE" rules we built to reduce false positives). The script that collects the data should hand it to the IDS server, which should analyze the output and give back a "yes/no". If "no" (i.e., everything is OK), then a legitimate user just forgot their root password. If "yes", then a problem of some nature exists, and you already have almost all the data you need to begin your analysis and determine whether it's an authorized user trying to get root access (common in engineering environments) or an unauthorized user you need to worry about.

Your selection of technology should be based on your methodology architecture; if none exists, then you have a platform for driving requirements -- ALWAYS a good thing from the "vendors need to suck less" perspective.

Hope that starts to answer your question.

Dale

======================================
"SUCCESS THROUGH TEAMWORK"
Dale Drew
Director, Global Security/AAA Engineering & Architecture
Level(3) Communications, LLC
720-888-2963 | dale.drewat_private

-----Original Message-----
From: Marcus J. Ranum [mailto:mjrat_private]
Sent: Tuesday, October 29, 2002 9:04 PM
To: Drew, Dale; loganalysisat_private
Subject: RE: [logs] what is normal ?

Dale.Drewat_private wrote:
>You need to be able to look for
>"abnormal" patterns in log data

I'd like to know how to do this. Any pointers?

mjr.
---
Marcus J. Ranum                 http://www.ranum.com
Computer and Communications Security    mjrat_private

_______________________________________________
LogAnalysis mailing list
LogAnalysisat_private
http://lists.shmoo.com/mailman/listinfo/loganalysis
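[Archive note] For readers who want to experiment with the threshold-style policy rules described in step 3 above ("3 su root failures from a single user in 5 minutes...", and so on), they can be sketched as sliding-window counters in a few lines of Python. This is an illustration only, not the tooling discussed in the message; the event format, field names, and thresholds are assumptions.

```python
# Sketch of the step-3 threshold rules: count "su root failure" events in
# sliding time windows at three scopes (user+host, host, whole network).
# Event dicts with "time", "user", and "host" fields are an assumed format.
from collections import defaultdict, deque

class ThresholdRule:
    def __init__(self, key_fn, limit, window_secs, action):
        self.key_fn = key_fn        # maps an event to a counting scope
        self.limit = limit          # how many events trip the rule
        self.window = window_secs   # sliding-window size in seconds
        self.action = action        # label to emit when the rule trips
        self.history = defaultdict(deque)

    def feed(self, event):
        """Record one event; return the action label if the rule trips."""
        q = self.history[self.key_fn(event)]
        q.append(event["time"])
        # Drop timestamps that have aged out of the window.
        while q and event["time"] - q[0] > self.window:
            q.popleft()
        return self.action if len(q) >= self.limit else None

# The three scopes from the message: user, system, network.
rules = [
    ThresholdRule(lambda e: (e["user"], e["host"]), 3, 300, "X: problem with a user"),
    ThresholdRule(lambda e: e["host"], 10, 600, "Y: problem with a system"),
    ThresholdRule(lambda e: "network", 30, 900, "Z: problem in the network"),
]

def process(event):
    """Feed one su-root-failure event to every rule; collect tripped actions."""
    return [a for a in (r.feed(event) for r in rules) if a]
```

Feeding three failures from the same user on the same host within five minutes trips only the per-user rule; the per-system and network rules need larger counts before they fire.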
This archive was generated by hypermail 2b30 : Wed Oct 30 2002 - 08:34:18 PST
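[Archive note] The collector flow in step 4 of the message (PURGE known noise, run the remainder through POLICY, route anything unrecognized to an UNMATCH/NEW bucket) can likewise be sketched as a simple classifier. The regex patterns below are made-up examples, not rules from the message.

```python
# Sketch of the step-4 collector flow: PURGE data you know you don't want,
# match the rest against POLICY patterns, and route anything unrecognized
# to an UNMATCH bucket for review. All patterns here are illustrative.
import re

PURGE = [re.compile(p) for p in (
    r"cron.*session opened",          # routine noise we have chosen to drop
    r"sshd.*Connection closed",
)]

POLICY = [re.compile(p) for p in (
    r"su: .*authentication failure",  # would feed the su-root threshold rules
    r"%LINK-3-UPDOWN",                # would feed the interface-flap rules
)]

def classify(line):
    """Return 'purge', 'policy', or 'unmatch' for one raw log line."""
    if any(p.search(line) for p in PURGE):
        return "purge"
    if any(p.search(line) for p in POLICY):
        return "policy"
    # New/unknown events: upgrades, new capabilities, forgetful vendors, etc.
    return "unmatch"
```

The UNMATCH bucket is the interesting one: anything landing there is an event you did not know your environment could produce, which is exactly what step 4 is designed to surface.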