Re: UDP packet handling weird behaviour of various operating systems

From: Sean Hunter (seanat_private)
Date: Fri Jul 27 2001 - 00:56:15 PDT


    I was entirely unable to duplicate the problem on my linux-2.4.7 box, and this
    is why:
    
    Firstly, I apply some rate throttling on incoming and outgoing packets thusly...
    
    #jump to the connection throttling chain
    $IPTABLES -t filter -A INPUT -j in-throttle
    
    echo -n "    rate throttling: "
    #syn flood limit
    $IPTABLES -t filter -A in-throttle -j RETURN        -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN     -m limit --limit 10/sec
    $IPTABLES -t filter -A in-throttle -j rate-logdrop  -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN
    #rst flood limit
    $IPTABLES -t filter -A in-throttle -j RETURN        -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK RST -m limit --limit 10/sec
    $IPTABLES -t filter -A in-throttle -j rate-logdrop  -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK RST
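    #remaining tcp has passed the flood checks; hand it back to INPUT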
    $IPTABLES -t filter -A in-throttle -j RETURN        -p tcp
    #icmp and udp rate limits
    $IPTABLES -t filter -A in-throttle -j RETURN        -p icmp                                       -m limit --limit 25/sec
    $IPTABLES -t filter -A in-throttle -j RETURN        -p udp                                        -m limit --limit 25/sec
    #log the flood
    $IPTABLES -t filter -A in-throttle -j LOG -m limit --limit 10/min --log-prefix "FW-IN-THROTTLE: "
    
    #enable this to debug inbound flood throttling
    #$IPTABLES -t filter -A in-throttle -j RETURN
    #normally this is the policy
    $IPTABLES -t filter -A in-throttle -j DROP
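
    The in-throttle and rate-logdrop chains need to be created with "$IPTABLES -N"
    before the rules above can reference them.  The rate-logdrop chain itself isn't
    shown in this snippet; a minimal sketch of it (the log prefix here is purely
    illustrative) would be:

    $IPTABLES -t filter -N rate-logdrop
    $IPTABLES -t filter -A rate-logdrop -j LOG -m limit --limit 10/min --log-prefix "FW-RATE-DROP: "
    $IPTABLES -t filter -A rate-logdrop -j DROP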
    
    ...I do the same sort of thing outgoing.  YMMV, I am not a qualified doctor etc
    etc etc.  You may have to tweak the limits to make them suitable for your site,
    but they work for me.
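
    Roughly, the outgoing version looks like this (the out-throttle chain name and
    the log prefix are just illustrative; tweak to taste):

    $IPTABLES -t filter -N out-throttle
    $IPTABLES -t filter -A OUTPUT -j out-throttle
    #outbound syn flood limit
    $IPTABLES -t filter -A out-throttle -j RETURN        -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN     -m limit --limit 10/sec
    $IPTABLES -t filter -A out-throttle -j rate-logdrop  -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN
    $IPTABLES -t filter -A out-throttle -j RETURN        -p tcp
    #outbound icmp and udp rate limits
    $IPTABLES -t filter -A out-throttle -j RETURN        -p icmp                                       -m limit --limit 25/sec
    $IPTABLES -t filter -A out-throttle -j RETURN        -p udp                                        -m limit --limit 25/sec
    #log and drop the rest
    $IPTABLES -t filter -A out-throttle -j LOG -m limit --limit 10/min --log-prefix "FW-OUT-THROTTLE: "
    $IPTABLES -t filter -A out-throttle -j DROP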
    
    The next reason is that I rate limit icmp traffic via /proc, thusly:
    
    net.ipv4.icmp_destunreach_rate = 50
    net.ipv4.icmp_echoreply_rate = 50
    net.ipv4.icmp_paramprob_rate = 50
    net.ipv4.icmp_timeexceed_rate = 50
    
    net.ipv4.icmp_ignore_bogus_error_responses = 1
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    
    On most modern Linux systems you can add these lines to /etc/sysctl.conf and run
    "sysctl -p" to install them.
    
    On others you have to tweak /proc by hand, doing:
    
    echo 50 > /proc/sys/net/ipv4/icmp_destunreach_rate
    
    etc etc.
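
    A little shell loop covers all four rates (plus the two booleans) in one go:

    for t in destunreach echoreply paramprob timeexceed; do
        echo 50 > /proc/sys/net/ipv4/icmp_${t}_rate
    done
    echo 1 > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
    echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts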
    
    Finally, you should really firewall all traffic (including, but not limited to,
    biff) that you don't really, really have to serve[1], and that goes for UDP and
    TCP.  And don't ignore the loopback interface, or your local users may bite you.
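
    A rough sketch of that (the allowed service and the log prefix below are only
    examples; open whatever you actually serve, and note the explicit loopback
    accept):

    #default-deny inbound: loopback, established traffic and named services only
    $IPTABLES -t filter -P INPUT DROP
    $IPTABLES -t filter -A INPUT -i lo -j ACCEPT
    $IPTABLES -t filter -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPTABLES -t filter -A INPUT -p tcp --dport 22 -j ACCEPT    #ssh, for example
    $IPTABLES -t filter -A INPUT -j LOG -m limit --limit 10/min --log-prefix "FW-IN-DENY: "
    $IPTABLES -t filter -A INPUT -j DROP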
    
    Hope this helps
    
    Sean
    
    [1] Strangely, I need to open the biff port on the loopback interface or I get
    packet logs.  I haven't had the chance to track that down yet.
    
    On Thu, Jul 26, 2001 at 01:59:59AM +0300, Stefan Laudat wrote:
    > > Most UDP packets should be firewalled from the Internet.
    > 
    > Agree.
    > 
    > > This is only really useful if someone has access to the local network. Is
    > > Linux/UP actually *locking* or just temporarily unresponsive? Also, it is
    > > invalid to compare Windows ME running on $3000 hardware with Linux/*BSD
    > > running on an old Pentium. Are you running all of this on the same
    > > hardware? Obviously faster hardware is going to be affected less by a UDP
    > > flood. How about the network cards?
    > 
    > Identical network cards for Win2k, Linux SMP and OpenBSD (Intel
    > Pro 100). Linux was run on a dual P3/1GHz (SMP), a P2/400MHz and a P3/800MHz
    > (UP). Windows 2000 was run on a P3/1GHz UP. I've made tests with the same
    > results against Linux UP boxes running on a Celeron/600 with 3Com Vortex and
    > Realtek 8139 NICs. I've also noted that the result is the same whether you hit
    > via 1Gbit or 100Mbit.
    > 
    > > I am suspicious that you are just comparing hardware, given that different
    > > versions of W2K perform much differently in your analysis. (You said the
    > > load was server: 35%, professional: 60%) I somehow doubt that MS tuned the
    > > network stack so much on the ``server'' version & wouldn't do the same on
    > > the ``professional'' version.
    > 
    > Some of the Linux servers have exactly the same configuration as the w2k
    > servers. The reaction IS different. That's what amazes me. Also, WinME was
    > run on a cheap P2/350 box with an old Intel NIC. No slowdown at all :(
    > 
    > > I bet a Sun E10K with lots of NICs could flood the Sun UE3500 with lots of
    > > NICs, but that probably doesn't mean that the Solaris 8 network stack is
    > > better than the Solaris 8 network stack; it's because the E10K is faster.
    > 
    > Well, then someone will have to clear all this up for me.
    > 
    > -- 
    > Stefan Laudat
    > CCNA,CCAI
    > Senior Network Engineer
    > Allianz-Tiriac SA
    > 
    > "Let's call it an accidental feature."
    >         -- Larry Wall
    
    
    


