As an author of a free NAT/firewalling tool for Unix (IP Filter), I know there are DoS possibilities due to size limitations of tables, etc. When I coded it I knew they were there - the design is for it to work, and work well, not to withstand a situation that will always overwhelm it - and very few have reached those limits in real use.

One problem here is that, unless you have *very* aggressive timeouts, there's not much you can do in defence unless you work with an IDS that decides "too much traffic of type X from Y", resulting in you blocking more packets. Of course, with aggressive timeouts come performance problems over connections with large RTTs and dropped packets (not to mention idle connections). Or you open up holes (as FW-1 does) to allow arbitrary packets (ACKs) through regardless of rules.

_Maybe_ a firewall can decide for itself when "too much is enough", but where do you draw the line? Start imposing more arbitrary limits such as "no more than 50 concurrent connections to the mail server"? (Tempting, mind you :)

To give you some idea of the maximum size a table could reach: to support NAT'ing to a /24 address space, you're looking at a maximum of about 16M entries. In real life, I just can't see that happening. The only real advantage of making source available, in this instance, is that people can recompile the code with bigger/smaller tables, taking up more/less kernel memory.

In reality, these DoS `revelations' are not new, except in how they affect one particular product (FW-1). The Internet at large is still an easy target for someone willing to deploy a DoS attack - stories of hosts being blackholed using routing to circumvent large (200Mb/s) attacks, where routers `melt' when filtering is enabled, are not unheard of.

It's all very well for people to go and find these problems, but having made the obvious more obvious, how about some suggestions on finding real solutions? Tip: this isn't quite as easy as fixing root exploits.

Darren
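[The size-cap/timeout trade-off described above can be sketched with a toy state table. This is a hypothetical illustration, not IP Filter's actual data structure or algorithm: a hard entry cap bounds memory, an idle timeout is the only way to reclaim slots, and a flood of unique flows fills the table until the timeout expires.]

```python
import time
from collections import OrderedDict

class StateTable:
    """Toy fixed-size connection-state table (hypothetical sketch,
    not IP Filter's real implementation).  A hard size cap bounds
    memory; an idle timeout reclaims slots.  Set the timeout
    aggressively and slow/idle connections lose their state; set it
    loosely and a flood of unique flows exhausts the table."""

    def __init__(self, max_entries, idle_timeout):
        self.max_entries = max_entries
        self.idle_timeout = idle_timeout
        self.entries = OrderedDict()   # flow key -> last-seen timestamp

    def lookup_or_add(self, key, now=None):
        now = time.time() if now is None else now
        # Expire idle entries first; the oldest-seen entry is at the
        # front because updates move keys to the end.
        while self.entries:
            oldest_key, seen = next(iter(self.entries.items()))
            if now - seen < self.idle_timeout:
                break
            del self.entries[oldest_key]
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh last-seen order
            self.entries[key] = now
            return True
        if len(self.entries) >= self.max_entries:
            return False                    # table full: the DoS case
        self.entries[key] = now
        return True

table = StateTable(max_entries=2, idle_timeout=10)
print(table.lookup_or_add("flow-a", now=0))   # True  (added)
print(table.lookup_or_add("flow-b", now=1))   # True  (added)
print(table.lookup_or_add("flow-c", now=2))   # False (table full)
print(table.lookup_or_add("flow-c", now=12))  # True  (a and b expired)
```

[The last two calls show both sides of the trade-off: at now=2 a new flow is refused outright, and only once the idle timeout has expired older entries does capacity return.]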
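[A back-of-the-envelope check of the ~16M figure quoted above: a /24 contains 2^(32-24) = 256 addresses, and each address has 2^16 = 65536 possible port numbers, so a NAT table keeping one entry per (address, port) pair tops out around 16.7 million entries.]

```python
# Maximum NAT table size for a /24, one entry per (address, port) pair.
addresses = 2 ** (32 - 24)   # 256 hosts in a /24
ports = 2 ** 16              # 65536 port numbers per address
max_entries = addresses * ports
print(max_entries)           # 16777216, i.e. ~16M
```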
This archive was generated by hypermail 2b30 : Fri Apr 13 2001 - 14:54:58 PDT