On Mon, 26 Aug 2002 10:03:51 PDT, Greg KH said:
> On Mon, Aug 26, 2002 at 09:41:43AM -0700, Crispin Cowan wrote:
> > On the other hand, it is this kind of thinking that gave the Linux
> > kernel a scheduler that linearly scanned the process ready list, and had
> > to be upgraded when Linux went into the big time.
>
> Ok, that's not even relevant to this argument. When people realized
> that the current scheduler had problems, based on larger loads, and
> bigger boxes, they fixed it.

Hmm... So what you're saying is that since previous experience with
algorithms that scale poorly on big iron has shown that we end up ripping
them out and doing them over, we should do so AGAIN when faced with
something likely to scale poorly? Remember - it's not my laptop that's
going to hit these issues, it's the big 16/32-CPU boxes doing tons of
stuff that will hit them.

> The policies for OWLSM are already able to be turned on and off at
> config time.

And the next thing you know, somebody will want it on a per-process
basis. ;)

> > Now suppose that you want to do this at run-time, i.e. when running this
> > application (e.g. Courier mail server) disable that policy (e.g.
> > hard-link following). Now we have OWLSM wanting to use sys_security, and
> > share it with the SELinux/LIDS/SubDomain big-ass policy engine.

Hey... there's that per-process basis. ;)

> But what kind of speed is needed for the call to disable a specific
> option for a specific program? Once at startup for that program.
> Again, don't try to optimize something that you haven't even measured :)

I think the complaint is that currently there's not enough infrastructure
in place to implement enough to measure. I suspect that if there was
enough there so Crispin *could* measure it, he'd be posting numbers. ;)
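For concreteness, the sort of thing being argued over looks roughly like
the sketch below - a quick user-space mock-up, compilable as-is, of one
multiplexed sys_security-style entry point demuxed on a module id, with a
per-process policy mask. Every name in it (POLICY_HARDLINK_FOLLOW,
MOD_OWLSM, sys_security_sketch, and so on) is made up for illustration;
this is NOT the actual sys_security or LSM interface.

    /* Hypothetical sketch: one shared entry point, demuxed on
     * (module, call), toggling bits in a per-process policy mask. */
    #include <stdio.h>
    #include <errno.h>

    #define POLICY_HARDLINK_FOLLOW  0x01u   /* hypothetical policy bit */

    static unsigned int disabled_mask;      /* stands in for a per-task field */

    enum { MOD_OWLSM = 1 };                 /* hypothetical module id */
    enum { CALL_DISABLE = 0, CALL_ENABLE = 1 };

    /* The multiplexed call: each module claims an id, so OWLSM and a
     * big policy engine can share the one syscall slot. */
    static int sys_security_sketch(int module, int call, unsigned int arg)
    {
            if (module != MOD_OWLSM)
                    return -ENOSYS;         /* some other module's business */

            switch (call) {
            case CALL_DISABLE:
                    disabled_mask |= arg;   /* done once at program startup */
                    return 0;
            case CALL_ENABLE:
                    disabled_mask &= ~arg;
                    return 0;
            default:
                    return -EINVAL;
            }
    }

    /* What the hook consults on the hot path: a single mask test. */
    static int hardlink_follow_allowed(void)
    {
            return !(disabled_mask & POLICY_HARDLINK_FOLLOW);
    }

    int main(void)
    {
            /* e.g. Courier disabling hard-link following for itself */
            sys_security_sketch(MOD_OWLSM, CALL_DISABLE, POLICY_HARDLINK_FOLLOW);
            printf("hard-link following allowed: %d\n",
                   hardlink_follow_allowed());
            return 0;
    }

Note that the disable call itself runs once at startup, exactly as Greg
says - the part worth measuring is the per-hook mask test, which is what
the missing infrastructure keeps Crispin from timing.

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech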