"Anil B. Somayaji" wrote: > > Karim Yaghmour <karymat_private> writes: > > > But you'd be interested to know that adding the hooks within > > the kernel yields at most 1% overhead over very heavy load. > > With the case of a kernel compile, for example, the overhead > > is around 0.25%. > > These results are quite good; however, I was wondering - was this for > micro-benchmarks, or only macro ones? In my own work, I've noticed > that doubling the time a fork and exec takes only results in a few > percent slowdown for kernel builds. > > Have you run something like LMBench or HBench-OS on an LTT-enabled > kernel? I'd be very curious about the results. It seems like system > call latency actually doesn't matter too much under normal workloads - > after all, most interesting system calls relate to I/O, which is > almost always slow. But if we are to get anything incorporated into > the main kernel tree, we have to show that our modifications have > minimal impact on system call and interrupt latency. The impact is minimal as can be. You can check the code to that effect. That being said, I haven't run micro-benchmarks because I usually view them with caution (a fact that some reviewers of the Usenix paper actually agreed on). I tend to think that micro-benchmarks reflect fringe behavior that _may_ have little to do with reality, hence the use of macro-benchmarks. More precisely, the following tests were run: -Compile (Kernel compile): 0.25% -Archive (Tar on 800M): 0.17% -Compress (bzip2 on 50M): 0.14% -Desktop (X/netscape/acroread/startoffice/gnuplot): 0.97% The last case was the slowest for obvious reasons (most X apps intensively use the system to communicate the X server ... and most of the apps used were system-intensive by nature, which doesn't help). The last case, in my opinion, as was used in the test bench (netscape loaded 14 different pages, acroread opened 7 different documents, staroffice opened 8 different documents and gnuplot drew 4 functions 14 times) represents an extreme case of system usage which doesn't represent typical system usage, contrary to the 3 other tests. > My hunch is that the LTT represents a rough lower-bound for the > performance of a flexible security module interface. If the LTT has > minimal impact in micro-benchmarks, then we have a shot at getting > Linus to accept a general interface; if not, we're going to have to > make do with something more specialized. > > I hope we can have a general interface (that would be very good for > me, definitely), but I bring this up because I'm not that optimistic. The interface is general and flexible enough and I do think that performance-wise this is minimal when the kernel is compiled with the hooks as part of the code (remember that there is no code generated if the kernel is compiled without the hooks and hence, the hooks bare no impact on the kernel). As I think, I think that the current hooks are all ready to be used by any security module that needs to be notified of all important activities within the kernel. Any further enhancements to reactivity to hook return values can be made at little cost. Cheers, Karim =================================================== Karim Yaghmour karymat_private Embedded and Real-Time Linux Expert =================================================== _______________________________________________ linux-security-module mailing list linux-security-moduleat_private http://mail.wirex.com/mailman/listinfo/linux-security-module