Re: [logs] High Network Load

From: Paul Robertson (probertsat_private)
Date: Fri Sep 19 2003 - 06:32:14 PDT

    On Fri, 19 Sep 2003, Philip Webster wrote:
    > I'm implementing a central log server over a large class B network, and 
    > have chosen syslog-ng as the server.  One of syslog-ng's features is 
    > that it can report the number of messages dropped internally, usually 
    > through either the receive or write buffers not being large enough. 
    > These values can be tweaked, and at least you know when there is 
    > something going wrong.
    > But what if the OS kernel drops the message?  Does anyone here have any 
    > experience with the OS losing messages before they get to the syslogd 
    > process?  How can this be monitored and overcome?
    With the right volume, the OS won't even get the message, it'll be dropped 
    at the router if its buffers get full...
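    To the original question of spotting kernel-level drops: on Linux the
    kernel's own counters are the place to look.  A minimal sketch, assuming
    Linux's /proc/net/snmp layout (the RcvbufErrors field only exists on newer
    kernels), that reads those counters and also enlarges a listener's receive
    buffer to help overcome drops:

    ```python
    import socket

    def read_udp_counters(path="/proc/net/snmp"):
        """Return the kernel's UDP counters as a dict of name -> int."""
        with open(path) as f:
            lines = [line.split() for line in f if line.startswith("Udp:")]
        # The first "Udp:" line holds the field names, the second the values.
        names, values = lines[0][1:], lines[1][1:]
        return dict(zip(names, map(int, values)))

    def dropped_since(before, after):
        """Messages the kernel discarded between two counter snapshots.

        RcvbufErrors (socket buffer overflows) may be missing on older
        kernels, so fall back to 0 for absent fields.
        """
        keys = ("InErrors", "RcvbufErrors")
        return sum(after.get(k, 0) - before.get(k, 0) for k in keys)

    def make_syslog_socket(bufsize=1 << 20):
        """UDP socket with an enlarged receive buffer.  The kernel caps the
        effective size at net.core.rmem_max, so verify with getsockopt."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
        return s
    ```

    Poll read_udp_counters() periodically and alert when dropped_since()
    grows; a rising count means the kernel is discarding datagrams before
    syslogd ever sees them, whatever the daemon's own drop counters say.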
    > The server (Red Hat Advanced Server) will be accepting logs over both 
    > UDP and TCP (and possibly via SSH port forwarding and/or stunnel), 
    > sitting on a 100Mb connection, and may potentially have hundreds of 
    > machines logging to it, as well as routers, switches, and several very 
    > high volume proxy servers.
    > Any thoughts?
    Don't put all your logs in one basket.
    I can't imagine what design criteria fed into "Log everything over the 
    network to a single server," but you should re-evaluate it fairly 
    critically.  Disk is slow; everything going to one logging daemon, logging 
    to one filesystem (probably through one route), is going to be a bottleneck.
    I don't know if syslog-ng is the paragon of syslog performance, but I'd 
    doubt it.  I don't know if someone on this list has put their really fast 
    UDP over the network syslog daemon up publicly yet, but if they have 
    (*cough*) it'll probably run circles around syslog-ng.  It'd still be a 
    bad architectural design in most cases to send hundreds of machines' logs 
    to a single server.  You can mitigate some of it with multiple addresses, 
    multiple routes, multiple buses..., but all it takes is for that single 
    server to spend time handling, say, lots of big ICMP packets, and it's no 
    fun for your logs (comparing the Linux stack's performance for basically 
    empty ICMP echo requests versus ones with large data portions is left as 
    an exercise for the reader).
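    One way to keep the eggs out of a single basket, sketched here with 
    made-up collector addresses (md5 is used only as a cheap stable hash, 
    not for security): shard source hosts deterministically across several 
    collectors so each box sees roughly 1/N of the traffic.

    ```python
    import hashlib

    # Hypothetical collector addresses; substitute your own log servers.
    COLLECTORS = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]

    def collector_for(hostname):
        """Map a source host to one collector, consistently across reboots,
        so configuration stays static and load spreads evenly."""
        digest = hashlib.md5(hostname.encode()).hexdigest()
        return COLLECTORS[int(digest, 16) % len(COLLECTORS)]
    ```

    Each client then points its syslog destination at collector_for(its own 
    hostname); losing one collector costs you one shard of hosts, not every 
    log on the network.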
    You might get away with it under ideal conditions, but the logs are going 
    to be important under less than ideal conditions, and that's where the 
    posterior-chomping architectural design monster is going to come a-calling.
    Paul D. Robertson      "My statements in this message are personal opinions
    probertsat_private      which may have no basis whatsoever in fact."
    probertsonat_private Director of Risk Assessment TruSecure Corporation
    LogAnalysis mailing list

    This archive was generated by hypermail 2b30 : Fri Sep 19 2003 - 09:53:57 PDT