"D. J. Bernstein" <djbat_private> wrote:
To: BUGTRAQat_private
Subject: Re: EMERGENCY: new remote root exploit in UW imapd

> Here's an example of the Dijkstra phenomenon.
>
> <snip>
>
> der Mouse writes:
> > modular code usually ends up being slower
>
> There are three misconceptions here.
>
> Misconception #1 is that modularization means moving common code into
> subroutines. In fact, modularization need not have any effect on the
> compiled program, thanks to macros, inline code, etc. Subroutines are
> convenient but not required.
>
> Misconception #2 is that moving common code into subroutines imposes a
> speed penalty. In fact, procedure-call overhead is wiped out by cache
> effects in any subroutine that does more than a little bit of work.
>
> Misconception #3 is that speed is something programmers should consider
> along with security, verifiability, etc. In fact, the computer spends
> almost all of its time executing an amazingly small amount of code. For
> most programmers, speed simply doesn't matter.

Actually, modular code (even with subroutine calls) is often much
faster, because most compilers and most hardware architectures take
advantage of locality of reference. An optimizing compiler can only
eliminate common sub-expressions that can be shown not to change between
uses, and a modular subroutine is less likely to contain intervening
code that defeats that analysis.

I once taught a computer science course where the question of speed as a
design criterion arose. When the class discussed it, we came to the
conclusion that a program that gives a wrong answer is never fast.
Correctness, maintainability, security, and flexibility are all more
important than flat-out speed.

Speed is important when it is part of correctness: that is, when a
response is needed fast enough to avoid time-outs, to keep from delaying
the humans using the computer, and so on. But this is a consequence of
the other criteria, not a criterion in itself.
This archive was generated by hypermail 2b30 : Fri Apr 13 2001 - 14:10:02 PDT