loganalysis list, i got this question ages ago and figured you might all be interested in the discussion. i'm answering this for sort of archival and FAQ reasons at this point -- myles, how did your panel go? did you make notes on >your< answers to these questions? this will probably end up on loganalysis.org at some point, or bits and pieces anyhow. please feel free to contribute to the conversation.

On Wed, 17 Sep 2003, myles wrote:

> * Has the state of open source syslog parsing progressed beyond swatch
> & logdaemon? I've checked freshmeat & log_analysis was about the only
> thing that looked interesting. Tina's Counterpane list has dozens of
> choices, but which ones don't suck? Is it time to do a comparative
> analysis?

first off, note that the live location of my loganalysis site is http://www.loganalysis.org -- i quickly decided i didn't want to give c-pane the benefit of my research etc. once i'd left their employment, and what with one thing and another they've never given me a redirect from their page.

i would >love< to do a comparative analysis of the generic parsing tools. swatch is by far the most common choice i see for alerting, although i have the impression that people who advance beyond the "what's a log" stage are far happier with logsurfer, since it allows far more sophisticated responses and includes the ability to modify its monitoring behavior based on context.

direct link to parsing tools:
http://www.loganalysis.org/sections/parsing/generic-log-parsers/index.html

> * To set the stage for a list of the things not to do, I plan on
> handing out a set of colorized logs with as many flaws as possible.
> Below is what I've thought of, please kibitz at will. Which OS has
> the worst default syslog conf? Debian's is actually quite good.

hmm. that's a good question, and i'm not sure i have a strong opinion yet. probably solaris. solaris logging really sucks.
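for anyone who hasn't played with these tools yet, the pattern-match-and-alert approach that swatch takes boils down to something like the sketch below. it's in python rather than the perl swatch is written in, it assumes classic BSD syslog line format, and the alert patterns, hostnames and sample lines are all invented for illustration:

```python
import re

# Minimal sketch of swatch-style alerting: parse each syslog line,
# then flag the ones matching patterns we care about.
# Assumes "Mmm dd hh:mm:ss host tag[pid]: message" line format.
SYSLOG_RE = re.compile(
    r'^(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) '
    r'(?P<host>\S+) (?P<tag>[^:\[]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$'
)

# Illustrative "interesting event" patterns -- not a recommended list.
ALERT_PATTERNS = [
    re.compile(r'Failed password'),
    re.compile(r'BAD SU'),
]

def parse_line(line):
    """Split a syslog line into its fields, or return None if it doesn't match."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None

def alerts(lines):
    """Yield parsed records whose message matches an alert pattern."""
    for line in lines:
        rec = parse_line(line)
        if rec and any(p.search(rec['message']) for p in ALERT_PATTERNS):
            yield rec

# Made-up sample data for the demo.
sample = [
    "Sep 17 10:01:02 web1 sshd[1234]: Failed password for root from 10.0.0.5 port 2222 ssh2",
    "Sep 17 10:01:05 web1 cron[999]: (root) CMD (run-parts /etc/cron.hourly)",
]
for rec in alerts(sample):
    print(rec['host'], rec['message'])
```

logsurfer's edge over this model is that its rules can create, modify and delete other rules at runtime based on what it's seen, which is what makes the context-sensitive responses mentioned above possible.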
and of course, if you go a bit past unix, the default windows audit config is "all auditing disabled" -- i think that's because the default size of all event logs is 512kb, which is just silly in this day of cheap disk.

i'm starting a section in the library, probably this week, of default logging configurations for a variety of systems, along with events that you might expect to see that you won't actually see with that default (i'm thinking of the lack of failed login logging on cisco ios, but i'm sure i'll come up with others). i'll be asking the mailing list to send me defaults with their pet peeves, so stay tuned.

> Empty logs
> - everything logged to console
> - logfile rotating programs that rotate too fast (and also logs
>   truncated at 2G)
> - localhost firewalled off
>
> Everything crammed into one big log

i think this is a matter of style rather than a true "flaw", in the sense that running with a single destination isn't necessarily likely to >lose< you data, whereas many of the issues you've listed above will. on networks where log parsing is done entirely offline, having a single big lump of data is often quite useful.

> redundant logs
> Port 135 being blocked every 20 seconds on the firewall
> NT web scanning on your unix box. (if I have time, I'll whip up some
> perl to grab a class of Nessus scan attacks & put them in a log parser
> blacklist)

nit-picking: i'd call this uninteresting rather than redundant. the data's not being duplicated, it's just dull. although lots of old c-pane customers liked to get weekly reports on this sort of thing, so they could demonstrate to their management that they were getting "value" (for some really depressing and dull definition of "value") out of their security infrastructure.

> Unsecured syslog
> Someone hosing the system with AAAAs.
> cross site scripting in the logs.
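on the "unsecured syslog" point: classic syslog over UDP has no authentication or handshake at all, so injecting a forged record is a one-line sendto. a hedged demo, using a throwaway local socket to stand in for the collector -- the hostname, pid and message are invented, and you'd point this only at a test listener, never real infrastructure:

```python
import socket

# Stand-in "collector": a throwaway local UDP socket instead of a real
# syslogd on port 514.
collector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector.bind(("127.0.0.1", 0))      # kernel picks a free port
addr = collector.getsockname()

# <38> = facility auth (4) * 8 + severity info (6), classic BSD framing.
# Every field here -- hostname, program, pid, message -- is the sender's
# unverified claim.
forged = b"<38>Sep 17 10:01:02 dbserver sshd[31337]: Accepted password for root from 10.9.9.9"

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(forged, addr)           # no auth, no handshake
data, _ = collector.recvfrom(2048)
print(data.decode())                  # the collector happily records the lie
sender.close()
collector.close()
```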
despite the ease of data spoofing in standard syslog, it's a very rare attack -- at least in the networks whose data i've seen, and of course under the assumption that one could tell what data was forged. and again, it's not clear how much of a risk this is, in the sense that forging data is not necessarily going to disrupt the reception of >other< data, at least in an environment that's designed for bursty logging. but it is certainly a risk.

> Logs from multiple machines
> - with different timestamps.
> - With the same priorities & facilities
> - with routers, firewalls & hosts in the same file.

again, i'll repeat my comment from above: logging everything into one file is a question of style rather than a genuine "it's not going to work if i do this" issue. so few people rely solely on priority and facility for incident response or data parsing that i doubt that's a huge risk -- hell, most people wouldn't >notice< it because they don't record them in their logs in the first place!

my votes for "most common logging mistakes":

1) focusing on logs from perimeter devices (firewalls, and IDS in particular) and not capturing data from the systems targeted by attacks (web servers, DBs, etc) -- perimeter devices will see the first wave of an attack, but in most cases will not see the victim machine's response or subsequent actions by the attacker.

2) not checking to see whether the events they're really interested in (config changes, failed logins, etc) are being recorded with the logging config they've got, and figuring it out only when the apocalypse is upon them. or in other words, assuming that the logging the device does "by default" gives them the information they need to keep the thing running securely, and never bothering to check that assumption.

> thanks for getting this far.

thanks tons for the opportunity to ponder my favorite subject, and again i'm sorry i took such a long time to answer. blame microsoft, again.
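the antidote to mistake #2 can even be automated: keep a list of the event types you expect to see and periodically check that each one actually shows up in your logs. a minimal sketch -- the event names, patterns and sample lines below are illustrative, not a standard checklist:

```python
import re

# Illustrative "events i expect my logging config to capture" list.
# Real patterns depend entirely on your devices and syslog setup.
EXPECTED_EVENTS = {
    "failed login":  re.compile(r"Failed password|authentication failure"),
    "config change": re.compile(r"Configured from|CONFIG_I"),
    "su to root":    re.compile(r"su(?:\[\d+\])?: .*root"),
}

def audit_coverage(log_lines):
    """Return the expected event types that never appear in the given lines."""
    missing = set(EXPECTED_EVENTS)
    for line in log_lines:
        for name, pattern in EXPECTED_EVENTS.items():
            if name in missing and pattern.search(line):
                missing.discard(name)
    return sorted(missing)

# Made-up sample: a failed ssh login and a cisco config-change event,
# but no su activity recorded anywhere.
sample = [
    "Sep 17 10:01:02 web1 sshd[1234]: Failed password for root from 10.0.0.5",
    "Sep 17 10:03:10 rtr1 cisco: %SYS-5-CONFIG_I: Configured from console by vty0",
]
print(audit_coverage(sample))   # -> ['su to root']
```

anything in the returned list is an event you're assuming gets logged but that your current config has never actually produced -- exactly the gap you want to find before the apocalypse rather than during it.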
tbird

_______________________________________________
LogAnalysis mailing list
LogAnalysis@private
http://lists.shmoo.com/mailman/listinfo/loganalysis
This archive was generated by hypermail 2b30 : Wed Oct 15 2003 - 04:13:02 PDT