I think Andrew misses the point of the Geer et al. paper. The fact that many sites can limit the damage caused by an exploit through third-party solutions and/or good hygiene does not really solve the problem. As long as there are enough vulnerable targets, even the minority that protect themselves will suffer various forms of collateral damage. We have seen the huge volume of traffic generated by several worms do transient damage to the routing infrastructure, and we have seen overall performance slowdowns due to excessive worm and virus traffic. Even though clueful people can limit damage to their own sites, there is evidence that many sites do not, and the fact that those sites largely represent a monoculture makes them an attractive target for worm and virus writers.

For a possible approach that preserves the M**t functionality we all love (or hate), you might take a look at the current BAA 03-44 (research announcement) from DARPA. Among other things, it seeks ways to create large numbers of variants of functionally equivalent programs. Suppose there were 1000 different versions of, say, IIS, each requiring a different buffer overflow exploit but appearing identical in function and performance to the user. The developer of a new exploit must now develop all 1000 variations and launch them simultaneously, and each variant faces a propagation task roughly 1000 times harder, since only about one target in a thousand will run the matching version. As an admitted side effect, an exploit that fails to match its target may nonetheless cause the target to crash, but the reduced propagation potential may lower the overall damage the exploit can do.

John McHugh
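To make that back-of-the-envelope propagation argument concrete, here is a toy random-scanning worm simulation in Python. The simulate_spread function, host counts, and scan rates are made up purely for illustration; they are not from the BAA or the Geer paper.

import random

def simulate_spread(total_hosts, variants, scans_per_step, steps, seed=0):
    # Toy random-scanning worm: the exploit only works on hosts that run
    # the same software variant as the one it was written for.
    random.seed(seed)
    build = [random.randrange(variants) for _ in range(total_hosts)]
    target_build = build[0]      # the variant the exploit targets
    infected = {0}               # one initially compromised host
    for _ in range(steps):
        new_hits = set()
        for _ in range(len(infected) * scans_per_step):
            victim = random.randrange(total_hosts)
            # A scan succeeds only against an uninfected host running the
            # matching variant; mismatched scans do nothing in this toy model.
            if build[victim] == target_build and victim not in infected:
                new_hits.add(victim)
        infected |= new_hits
    return len(infected)

# Monoculture vs. a 1000-variant population of the same size (illustrative numbers).
print("monoculture  :", simulate_spread(100000, 1, 5, 10))
print("1000 variants:", simulate_spread(100000, 1000, 5, 10))

In the monoculture run the worm saturates the population within a few steps; in the diversified run it typically never gets past its first victim, which is the propagation-limiting effect described above.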