On Tue, 2003-10-07 at 09:26, John McHugh wrote:
> Among other things, this seeks ways to create large numbers of variants
> of functionally equivalent programs. Suppose that there were 1000
> different versions of, say, IIS, each requiring a different buffer
> overflow exploit, but appearing identical in function and performance
> to the user. Now, the developer of a new exploit must develop the 1000
> variations and launch them simultaneously. In addition, each variant
> will have a 1000 times more difficult task in propagating.

I have often wondered about this approach. It is sort of like the canary
from StackGuard (CFIAPM) combined with the "roll your own" philosophy of
some Linux folks. By placing randomness throughout the build process you
might mitigate the impact of the overflows, though I am not sure how
extensive the randomness would have to be. While shifting code around
would disturb the pattern, it is unlikely to remove the overflow itself.

Not having read the paper in detail: did they mention the Intel chipset
as part of the monoculture? Having very different processors could also
mitigate this.

> As an
> admitted side effect, it is possible that an exploit that fails to
> match its target may nonetheless cause the target to crash, but the
> limitation of propagation potential may reduce the overall damage
> potential of the exploit.
>
> John McHugh

--
Zot O'Connor

http://www.ZotConsulting.com
http://www.WhiteKnightHackers.com
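P.S. To make the StackGuard comparison concrete, the canary idea boils down
to a check like the sketch below. This is only an illustration: real
StackGuard inserts the check in the compiler's function prologue/epilogue,
and the hand-placed layout and names here (a guard variable next to a fixed
buffer) are made up for the example and not something C guarantees.

    /* Illustrative sketch of a StackGuard-style canary: a random value
     * sits (conceptually) between a fixed-size buffer and the saved
     * return state, and is checked before the function returns. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static unsigned long canary_value;  /* chosen at startup, unknown to an attacker */

    static void copy_input(const char *input)
    {
        unsigned long canary = canary_value;  /* local guard near the buffer */
        char buf[64];

        /* strcpy with no length check: the classic overflow.  An overflow
         * long enough to reach the return address should first clobber
         * the guard value. */
        strcpy(buf, input);

        if (canary != canary_value) {
            fprintf(stderr, "stack smashing detected, aborting\n");
            abort();
        }
        printf("copied %zu bytes\n", strlen(buf));
    }

    int main(int argc, char **argv)
    {
        srand((unsigned)time(NULL));
        canary_value = ((unsigned long)rand() << 16) ^ (unsigned long)rand();

        copy_input(argc > 1 ? argv[1] : "hello");
        return 0;
    }

Tying it back to the diversity argument: each build (or each host) picking
its own random canary value is a tiny version of the 1000-variant idea --
an overflow tuned to one instance trips the check on another, which may
still crash the target but should stop it from being cleanly exploited.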