The larger the upstream bandwidth, the easier the attacker's job, within limits. The question is really "What links become saturated, and where?" If the attacker's machine(s) are directly connected to high-speed network(s), and the attack paths do not converge until "close" to the victim, the attack is most effective. Even fairly powerful attack engines lose effectiveness if there are weak links between themselves and the backbone. Inet2 helps to ensure that once packets reach the backbone, they are unlikely to cause congestion (and be dropped) until they reach a lower-capacity link near the victim.

The issues of firewalls and filters are interesting. There is simply not time to do much at the router backplane level, a problem that becomes worse as segments become faster in proportion to memory and CPU cycle times. A good figure of merit for evaluating techniques is the number of memory cycles available per packet. My recent paper in the International Journal of Information Security (Springer, Vol. 1, No. 1) has a chart that expresses this for IDS pattern matching, but the problem is the same for filtering. There is some hope for designing specialized filters for very limited purposes, such as egress filtering, and realizing them in hardware, but more elaborate filters are probably problematic. Disjoint sets of rules that can be evaluated in parallel may help.

<PLUG PLUG> Sven Dietrich and I will be giving a tutorial on DDOS attacks at ACSAC in December: http://www.acsac.org/2001/tutorials.html#tut2 </PLUG PLUG>

John McHugh
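[Editor's illustration of the memory-cycles-per-packet figure of merit discussed above: a back-of-envelope sketch, not from the original post. The frame sizes, 20-byte wire overhead, and 5 ns memory cycle time are illustrative assumptions, not figures from the cited paper.]

```python
def cycles_per_packet(link_gbps, mem_cycle_ns, frame_bytes=64, overhead_bytes=20):
    """Memory cycles available per packet when a link runs at line rate.

    Assumes minimum-size frames (worst case for per-packet work) plus a
    fixed per-frame wire overhead (preamble + interframe gap), and a
    single memory with a fixed cycle time -- all illustrative assumptions.
    """
    wire_bits = (frame_bytes + overhead_bytes) * 8
    packet_time_ns = wire_bits / link_gbps   # ns between packet arrivals
    return packet_time_ns / mem_cycle_ns

# As segments get faster while memory cycle times stay roughly flat,
# the per-packet budget shrinks in proportion:
for gbps in (1, 10, 40):
    print(f"{gbps:>3} Gb/s: {cycles_per_packet(gbps, mem_cycle_ns=5):6.1f} cycles/packet")
```

With these assumed numbers, a tenfold increase in link speed cuts the per-packet memory budget tenfold, which is the squeeze the post describes at the router backplane.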
This archive was generated by hypermail 2b30 : Sun May 26 2002 - 11:29:27 PDT