On Mon, 07 Oct 2002 17:01:10 PDT, Crispin Cowan said:

> What makes the LKML position reasonable is "threat model." For any
> security enhancement proposal, one needs to construct a threat model in
> which the feature will prevent some class of attackers from doing some
> class of bad things.

I fully understand that, and I don't disagree with it. The problem is
that from where I sit, there seems to be a bit of inconsistency in the
application of that position.

> A security feature that is completely bypassable
> needs to either be matched up with additional features to close the
> bypass(es), or the proposal should be dropped. Otherwise you have built
> a fence half-way around your target.

As an example of "inconsistent": by this logic, since the setrlimit
code, per-user process limits, and other stock features are insufficient
to prevent a certain class of attacks, they should either be matched up
with additional features (like the proposed setaffinity hooks) or be
dropped... ;)

OK, that's taking it to the extreme, but the point still stands: the
fact that you demonstrably can't close *all* the holes doesn't mean we
shouldn't take action. Yes, there are probably other ways to totally
bugger up the system even with a setaffinity patch in place - but at
least attackers will hopefully need something more complicated than a
"hello-world" program that sets affinity and then loops....

(And yes, in 2 decades I've seen a *lot* of ways to DoS a system, some
of which were even intentional. I'm still amazed that I've worked on not
one, not two, but *three* very different operating systems that all
managed to make the same mistake of making the "handle disk error" code
a pageable kernel module, and then die a REALLY horrid death when an I/O
error happens on the swap file ;)

> So the affinity hooks will be of use only if you can demonstrate to
> LKML's satisfaction that all other ways of DoS'ing a local CPU are also
> closed.
Ahh... now the tricky question here - does this mean "mathematically
proven closed" or "made sufficiently difficult to exploit"?

There are large areas of DoS that are basically impossible to prevent -
for instance, I've never seen a virtual memory manager that supported
overcommitment of main storage and couldn't be made to thrash under the
proper pathological page reference pattern. However, it's not TOO hard
to make one that is almost totally immune to abuse by a single process,
so that it takes 2 or more cooperating processes to cause a problem...

I guess the question is how high the standard is...

/Valdis
This archive was generated by hypermail 2b30 : Mon Oct 07 2002 - 21:22:52 PDT