Re: EMERGENCY: new remote root exploit in UW imapd

From: Kragen (kragenat_private)
Date: Wed Jul 29 1998 - 06:31:52 PDT

    On Tue, 28 Jul 1998, der Mouse wrote:
    > > Beware of the Dijkstra phenomenon.
    >
    > > The phenomenon is that immodular code seems more ``productive'' than
    > > heavily modularized code.  You can read or write many more lines per
    > > hour of malloc(), strcpy(), free() than of unfamiliar high-level
    > > routines.
    >
    > This is actually "you can read or write many more lines per hour of
    > familiar calls than unfamiliar calls"; if you're not familiar with
    > malloc(), strcpy(), and free(), you can't churn them out very fast
    > either - at least not with even the minimal correctness most programs
    > exhibit.
    
    If you modularize your program, you're going to create brand-new
    high-level routines, which will be unfamiliar -- because they didn't
    exist before.
    
    > > Of course, the modular code ends up being _much_ smaller.
    >
    > This is far too dogmatic a statement.  It is true in many
    > circumstances, but it is also false in plenty of other circumstances.
    
    Modularity generally makes things smaller, and using stronger
    materials generally makes things stronger.  Nevertheless, it's
    possible to add stronger materials in a way that doesn't make things
    stronger (or even makes them weaker), and it's possible to
    modularize in a way that makes things bigger.
    
    > getdtablesize, execle.)  I don't for a moment believe using "modular
    > code" would have made this program "_much_ smaller".
    
    It doesn't sound like it could get much smaller, modular or no, unless
    you rewrote it in a more compact language.
    
    > The modular code usually ends up being slower, too.  Not enough slower
    > to matter, in many circumstances, but it's something to keep in mind.
    
    Modularity overhead is often insignificant.  Modularizing can make
    things faster, too, by increasing cache hit rates, by enabling
    optimizations that would have been impractical without modularity,
    and by making it possible to change a module's representation
    without changing its interface.
    
    > > It also lets you independently check the correctness of each module;
    > > this scales to arbitrarily large systems if the modules remain small.
    >
    > It also means that a missed bug affects an arbitrarily large proportion
    > of the system as a whole, of course.
    
    This depends on aspects of your design other than modularity.  A
    missed buffer-overflow bug affects an "arbitrarily large proportion
    of the system as a whole" regardless of how big your functions are.
    Using stralloc (a module) for string handling centralizes all
    possible buffer overflows in one place.  If stralloc has a bug, you
    may have buffer overflows *everywhere* -- but the probability of
    stralloc having a bug is much, much lower than the probability of a
    bug lurking somewhere in ad-hoc bounds checking spread throughout
    your program.
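
    For anyone who hasn't seen stralloc, here is a rough sketch of the
    idea -- my own toy code, not djb's, with my own names like
    sa_readyplus.  Every append funnels through one routine, so the one
    length check there covers every string operation in the program:

        /* A stralloc-like dynamic string, sketched from memory -- not
         * djb's actual implementation.  Every append goes through
         * sa_readyplus(), so the single capacity check there covers
         * every caller. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct {
            char *s;        /* bytes, not necessarily NUL-terminated */
            size_t len;     /* bytes in use */
            size_t a;       /* bytes allocated */
        } sa;

        /* Make room for n more bytes; the only place capacity is
         * ever checked. */
        static int sa_readyplus(sa *x, size_t n)
        {
            if (x->len + n > x->a) {
                size_t want = x->len + n + 30;   /* a little slack */
                char *p = realloc(x->s, want);
                if (!p) return 0;
                x->s = p;
                x->a = want;
            }
            return 1;
        }

        static int sa_catb(sa *x, const char *buf, size_t n)
        {
            if (!sa_readyplus(x, n)) return 0;
            memcpy(x->s + x->len, buf, n);
            x->len += n;
            return 1;
        }

        static int sa_cats(sa *x, const char *str)
        {
            return sa_catb(x, str, strlen(str));
        }

        int main(void)
        {
            sa x = { 0, 0, 0 };
            if (!sa_cats(&x, "hello, ") || !sa_cats(&x, "world") ||
                !sa_catb(&x, "", 1))             /* NUL-terminate */
                return 1;
            printf("%s (%lu bytes used)\n", x.s, (unsigned long)x.len);
            free(x.s);
            return 0;
        }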
    
    If you want to avoid having a missed bug affect an arbitrarily large
    proportion of the system, the answer is compartmentalization (more
    modularity), with effective enforcement of the boundaries between
    compartments.  qmail's strategy of splitting its work among several
    mutually distrusting processes, each running with minimal privilege,
    is an example of this.  Orange Book mandatory access controls are
    another.  These mechanisms actually can prevent a missed bug from
    affecting an arbitrarily large proportion of the system.
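
    A toy sketch of the process-compartmentalization idea (my own
    illustration, not qmail's actual layout; the UID and the messages
    are placeholders): the code that touches untrusted input runs in a
    child that has dropped privileges, and only a short result ever
    crosses the pipe back to the privileged parent, so a bug in the
    child is confined to a process that can't do much.

        /* Minimal privilege-separation sketch -- an illustration, not
         * qmail's design.  The untrusted work happens in a child under
         * an unprivileged UID; the parent only reads a short summary. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        #define UNPRIV_UID 65534   /* assumption: a "nobody"-style UID */

        int main(void)
        {
            int fd[2];
            pid_t pid;
            char result[64];
            ssize_t n;

            if (pipe(fd) == -1) { perror("pipe"); return 1; }

            pid = fork();
            if (pid == -1) { perror("fork"); return 1; }

            if (pid == 0) {                 /* child: untrusted work */
                close(fd[0]);
                /* Drop privileges if we have them; skip when not root. */
                if (getuid() == 0 && setuid(UNPRIV_UID) == -1) _exit(1);
                /* Pretend to parse hostile input; send a tiny summary. */
                const char *summary = "parsed-ok";
                if (write(fd[1], summary, strlen(summary)) < 0) _exit(1);
                close(fd[1]);
                _exit(0);
            }

            /* parent: privileged side, only ever sees the summary */
            close(fd[1]);
            n = read(fd[0], result, sizeof result - 1);
            if (n < 0) n = 0;
            result[n] = '\0';
            close(fd[0]);
            waitpid(pid, NULL, 0);
            printf("child reported: %s\n", result);
            return 0;
        }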
    
    Kragen
    


