I couldn't resist chiming in, and apologize for the blatant plug.

<blatant plug>
High-volume log data management/analysis is what we do. We routinely load
20,000 - 40,000 records/second and more on a small cluster of PCs. We also
provide a SQL interface for querying the data. If you're interested in
learning more, please let me know.
</blatant plug>

Thanks,

Dan Barahona
415.420.8425
www.addamark.com

-----Original Message-----
From: Marcus J. Ranum [mailto:mjrat_private]
Sent: Tuesday, January 29, 2002 2:36 PM
To: William D. Colburn (aka Schlake)
Cc: loganalysisat_private
Subject: Re: [logs] Re: syslogd / some analysis

Marcus J. Ranum wrote:
>Conclusions:
>syslogd is decidedly not OK for loads in the 10,000 messages/second range on
> my machine

I didn't add the obvious: "which is ridiculous"

A machine capable of servicing thousands of web hits/second should be
capable of logging them without dropping them on the floor. :)

To aggregate large amounts of logs, 10,000 messages/second might not be an
extraordinary number, assuming some buffering and sane handling of I/O on
the host and the network. I don't think I'd want to try to insert 10,000
records/second into a SQL database, even using bulk inserts. ;)

So one implication is that for really big-ass (yes, that's the technical
term...) logging servers, prefiltering and coalescing will be a requirement.
Of course, coalescing _fast_ is a real challenge... ;)

mjr.
---
Marcus J. Ranum
Chief Technology Officer, NFR Security, Inc.
Work: http://www.nfr.com
Personal: http://www.ranum.com

---------------------------------------------------------------------
To unsubscribe, e-mail: loganalysis-unsubscribeat_private
For additional commands, e-mail: loganalysis-helpat_private
---------------------------------------------------------------------
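
For what it's worth, a minimal sketch of the kind of coalescing Marcus
describes: buffer incoming messages, collapse repeats into a single record
with a count, and flush in batches rather than handling each message (or SQL
insert) one at a time. The message format, batch size, and flush interval
below are illustrative assumptions, not anything taken from the thread.

    # Sketch only: coalesce repeated log lines and flush them in batches.
    # Batch size, flush interval, and the plain-text input format are
    # assumptions made up for this example.

    import sys
    import time
    from collections import OrderedDict

    BATCH_SIZE = 5000        # flush once this many distinct messages accumulate
    FLUSH_INTERVAL = 1.0     # ...or once a second, whichever comes first

    def flush(batch):
        # Stand-in for a bulk write: one output line (or one row in a bulk
        # INSERT) per distinct message, with a repeat count, instead of one
        # write per message.
        for msg, count in batch.items():
            sys.stdout.write("%6d  %s\n" % (count, msg))
        batch.clear()

    def coalesce(lines):
        batch = OrderedDict()
        last_flush = time.time()
        for line in lines:
            msg = line.rstrip("\n")
            batch[msg] = batch.get(msg, 0) + 1
            now = time.time()
            if len(batch) >= BATCH_SIZE or now - last_flush >= FLUSH_INTERVAL:
                flush(batch)
                last_flush = now
        flush(batch)

    if __name__ == "__main__":
        # e.g.  tail -f /var/log/messages | python coalesce.py
        coalesce(sys.stdin)

When traffic is repetitive, collapsing duplicates before they ever reach the
database can reduce 10,000 messages/second to far fewer distinct rows per
flush, which is the point of prefiltering and coalescing on a big logging
server; doing it fast enough to keep up is, as Marcus says, the hard part.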