On 2002-01-30 12:58:51 +1000, Greg Black wrote:

> "Marcus J. Ranum" wrote:
>
> | My client took 98 seconds to send a million records, which is to say
> | 10,000+ records/second.  So we know that syslog (on my box, with
> | my O/S) can't handle 10,000+ records/second.
>
> [and loses over 50% of the data]
>
> | Conclusions:
> |   syslogd is decidedly not OK for loads in the 10,000 messages/second
> |   range on my machine
>
> Somehow, I don't think syslogd is suitable for high loads at
> all; certainly UDP-based logging is a waste of time if you want
> to get all the log data.

Actually, UDP is a good choice for some types of logging, provided you
build an ack mechanism on top of it.  If you're logging from hundreds or
thousands of hosts, then with TCP you will either consume a lot of
server-side resources keeping all those sessions open, or send lots of
extra packets in three-way handshakes for short-lived connections.  Of
course, if you are sending a large volume of log messages from a small
number of hosts, then TCP is the way to go.

> Clearly, 45,000 to 75,000+ records/second with full reliability
> is much better than the syslog figures we have been given.  It's
> time to build better network logging tools.  In the meantime,
> I'll keep using netcat and multilog for network logging and
> fifos or pipes with multilog for local logging.

Syslog is old, and AFAIK was never designed for huge volumes of logging
(I think if you had told a syslog designer in 1984 that the application
would be expected to handle 10,000 records/second, they would have
laughed at you).  There are other logging tools, e.g. syslog-ng, but I
can't vouch for whether they can handle this load.

--
Shane

Carpe Diem

---------------------------------------------------------------------
To unsubscribe, e-mail: loganalysis-unsubscribeat_private
For additional commands, e-mail: loganalysis-helpat_private
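[Editor's note: the "ack mechanism on top of UDP" idea above can be sketched as a stop-and-wait protocol — each record carries a sequence number, the server acks it, and the sender retransmits until the ack arrives.  This is a minimal illustrative Python sketch, not code from the thread; the message framing, port choice, and timeout are assumptions, and a real logger would use windowed acks rather than one ack per record.]

```python
import socket
import threading

def log_server(sock, count, received):
    """Receive `count` distinct records; ack every packet by sequence
    number, and drop duplicates caused by retransmission."""
    seen = set()
    while len(seen) < count:
        data, addr = sock.recvfrom(65535)
        seq, _, msg = data.partition(b"|")
        if seq not in seen:
            seen.add(seq)
            received.append(msg.decode())
        # Ack unconditionally: a duplicate means our earlier ack was lost.
        sock.sendto(seq, addr)

def send_logged(sock, server_addr, records, timeout=0.2):
    """Send each record and retransmit until its ack comes back."""
    sock.settimeout(timeout)
    for seq, msg in enumerate(records):
        packet = b"%d|%s" % (seq, msg.encode())
        while True:
            sock.sendto(packet, server_addr)
            try:
                ack, _ = sock.recvfrom(64)
            except socket.timeout:
                continue  # record or ack was lost: retransmit
            if ack == b"%d" % seq:
                break  # acknowledged, move to the next record

# Demo on localhost with an OS-assigned port (illustrative setup).
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

records = ["record %d" % i for i in range(100)]
received = []
t = threading.Thread(target=log_server, args=(srv, len(records), received))
t.start()
send_logged(cli, srv.getsockname(), records)
t.join()
```

Stop-and-wait caps throughput at one record per round trip, which is why batching many records per datagram (or per ack window) matters at the 45,000-75,000 records/second rates mentioned above.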
This archive was generated by hypermail 2b30 : Wed Jan 30 2002 - 07:04:13 PST