In an enterprise setting, where you want to forward critical messages to a suitable collection of group-management servers and log data is trawled to generate realtime alarms, a recurring problem shows up when a client goes berserk and floods its logserver. Sometimes you can solve the problem with simple stateless filtering that drops messages which aren't necessary for alert generation, but that doesn't always do the trick; I've seen hardware errors provoke hideous floods of syslogged I/O errors, for example. You want enough of the flood to trigger an alarm, but you want to shut it off fairly quickly; a second or two of it is more than plenty.

I'm thinking about ways to solve this. So far, the cleanest approach I've come up with is to use SEC or something like it to do the data reduction, using its "no more than N msgs that match a given pattern within T seconds" style of limiting, having it write to a named pipe, and then forwarding whatever shows up there to a syslogd on another machine. Since syslog-ng can happily write data for SEC to analyze, and SEC's output can then be shoved into a TCP connection to another syslog-ng, I think these pieces will fit together without any hacking.[1] (A rough sketch of both pieces is at the end of this message.)

Has anyone else done this sort of realtime data throttling? What tools have you used?

-Bennett

[1] Don't get me wrong, I _love_ to hack, but in a corporate setting it's cheaper to avoid having to maintain in-house forks.
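For concreteness, here's roughly what I have in mind. All the names below (pipe paths, the I/O-error pattern, the loghost) are placeholders, and I haven't actually wired this up yet, so treat it as a sketch rather than a working config.

A minimal SEC ruleset that passes the first matching I/O error in any 2-second window and drops the rest, while letting everything else through untouched:

    # throttle.conf -- run as:
    #   sec -conf=/etc/sec/throttle.conf -input=/var/run/syslog-to-sec.pipe
    #
    # Let one I/O-error message through, then ignore matching messages
    # for the next 2 seconds (as I understand it, the suppressed events
    # are consumed here and never reach the catch-all rule below).
    type=SingleWithSuppress
    ptype=RegExp
    pattern=kernel: .*I/O error
    desc=I/O error flood
    action=write /var/run/sec-to-syslog.pipe $0
    window=2

    # Everything else passes straight through to the output pipe.
    type=Single
    ptype=RegExp
    pattern=.*
    desc=passthrough
    action=write /var/run/sec-to-syslog.pipe $0

(For the more general "no more than N in T seconds" case I think you'd need contexts or one of SEC's threshold rule types, but the simple suppress rule covers the flood scenario I care about.)

And the syslog-ng glue on either side of SEC, feeding it through one named pipe and picking up its filtered output from another before shipping it off over TCP:

    # feed local syslog traffic into SEC
    destination d_to_sec  { pipe("/var/run/syslog-to-sec.pipe"); };
    log { source(s_local); destination(d_to_sec); };

    # read SEC's throttled output and forward it to the central loghost
    source s_from_sec     { pipe("/var/run/sec-to-syslog.pipe"); };
    destination d_loghost { tcp("loghost.example.com" port(514)); };
    log { source(s_from_sec); destination(d_loghost); };

Here s_local stands for whatever local source is already defined in the existing config, and both FIFOs would need to be created with mkfifo ahead of time.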