Re: Writing Secure code

From: Crispin Cowan (crispinat_private)
Date: Sat Dec 28 2002 - 11:04:12 PST

    Glynn Clements wrote:
    
    >Valdis.Kletnieksat_private wrote:
    >  
    >
    >>>And one more thing...<this one might be intresting ;-)>  Is it possible 
    >>>to write code that is completely secure and not exploitable? 
    >>>      
    >>>
    >>This is just a specific case of the question "Is it possible to write
    >>totally bug-free code"?
    >>    
    >>
    >
    >Maybe, maybe not; it depends upon how you define such terms. I can
    >think of reasonable definitions of "bug" and "vulnerability" such that
    >the former isn't a superset of the latter. To provide a (rather crude)
    >example:
    >
    > Bug: the program doesn't do something which it is meant to do.
    > Vulnerability: the program does something which it isn't meant to do.
    >
    That is incorrect: bugs can make programs do unexpected things, and so
    your definition of vulnerabilities encompasses bugs.
    
    We had this discussion right here in secprog 
    <http://online.securityfocus.com/archive/98/142495/2000-10-29/2000-11-04/2> 
    back in 2000, and the definition that I like is Iván Arce's:
    
        * Reliable: something that does everything it is specified to do.
        * Secure: something that does everything it is specified to do ...
          and nothing else.
    
    Vulnerabilities are a strict subset of bugs: they are the bugs that an
    attacker can leverage, while all other bugs result in simple self-DoS.
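
    To make the distinction concrete, here is a small, purely illustrative
    C sketch (not taken from any real program): the first function contains
    a classic unbounded strcpy(), which an attacker who controls the input
    can leverage to overwrite the stack, so it is a vulnerability; the
    second merely crashes on malformed input, which is a bug but only a
    self-DoS.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Vulnerability: attacker-controlled input overflows a fixed
         * buffer; the attacker can smash the stack and take control. */
        void greet(const char *name)
        {
            char buf[16];
            strcpy(buf, name);            /* no bounds check */
            printf("hello, %s\n", buf);
        }

        /* Bug, but only self-DoS: bad input crashes this process and
         * gives the attacker nothing else to work with. */
        void count_first_field(const char *line)
        {
            char *copy = strdup(line);
            char *field = strtok(copy, ",");
            /* forgot to check for NULL: empty input dereferences NULL */
            printf("first field length: %zu\n", strlen(field));
            free(copy);
        }

        int main(void)
        {
            greet("world");               /* fine; a 100-byte name is not */
            count_first_field("a,b,c");   /* fine; "" would crash us      */
            return 0;
        }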
    
    >Is it a vulnerability if a program fails to wipe data from memory, or
    >allows it to be written to swap, or slack space, or "unused" parts of
    >files? Or if a network client can deduce which parts of certain files
    >are memory-resident by sending carefully chosen requests and analysing
    >the timing of the server's responses?
    >
    That depends on the threat model. If the threat model explicitly
    includes local attackers that have access to swap, then these are
    bugs/vulnerabilities. If the threat model includes only remote
    attackers, then these are just security design weaknesses that would
    become vulnerabilities if the greater system allowed remote attackers
    to gain local access and deploy a staged attack.
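
    If that local-attacker threat does matter, the usual mitigations are
    straightforward to sketch. The fragment below is a minimal,
    assumed-POSIX illustration (not code from any particular program): it
    locks the page holding a secret so it cannot be written to swap, and
    scrubs the secret through a volatile function pointer so the compiler
    cannot optimize the final memset() away; on systems that provide it,
    explicit_bzero(3) serves the same purpose.

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        /* Call memset through a volatile pointer so the "dead" store
         * that wipes the buffer is not optimized away. */
        static void *(*const volatile wipe)(void *, int, size_t) = memset;

        int main(void)
        {
            char passphrase[128];

            /* Keep the page holding the secret out of swap. */
            if (mlock(passphrase, sizeof(passphrase)) != 0)
                perror("mlock");   /* may need privilege/RLIMIT_MEMLOCK */

            if (fgets(passphrase, sizeof(passphrase), stdin) == NULL)
                return 1;

            /* ... derive a key, authenticate, and so on ... */

            wipe(passphrase, 0, sizeof(passphrase));  /* scrub it */
            munlock(passphrase, sizeof(passphrase));
            return 0;
        }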
    
    Gary McGraw (of "Building Secure Software" 
    <http://www.buildingsecuresoftware.com/>) likes to talk about the focus 
    on bugs & vulnerabilities as the building of high-quality bricks, and 
    wishes we could get on to secure architecture. I am less passionate 
    about it, but I agree we should do it. Secure design, then, is an 
    architecture that tends to make bugs non-exploitable, and therefore not 
    vulnerabilities. An example would be the priv-sep work recently done 
    to OpenSSH (even if the release was fumbled :) which makes buffer 
    overflows in the non-privileged code either less exploitable or 
    non-exploitable, depending on where the attacker is sitting.
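
    For readers who have not seen the pattern, here is a deliberately tiny
    sketch of the privilege-separation idea (my own illustration, not
    OpenSSH's actual code, and it assumes the program is started as root so
    the privilege drop succeeds): the parent keeps its privileges, never
    touches network input, and answers only a narrow set of requests over a
    socketpair; the child drops to an unprivileged uid before parsing
    anything an attacker controls, so a buffer overflow there yields only
    an unprivileged process.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <grp.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/wait.h>

        #define UNPRIV_UID 65534  /* "nobody"; a real daemon has its own user */
        #define UNPRIV_GID 65534

        int main(void)
        {
            int sv[2];
            pid_t pid;

            /* Private channel between privileged parent and worker child. */
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
                perror("socketpair");
                return 1;
            }

            if ((pid = fork()) < 0) {
                perror("fork");
                return 1;
            }

            if (pid == 0) {
                /* Child: shed privileges *before* touching untrusted input. */
                close(sv[0]);
                if (setgroups(0, NULL) < 0 || setgid(UNPRIV_GID) < 0 ||
                    setuid(UNPRIV_UID) < 0) {
                    perror("drop privileges");
                    _exit(1);
                }
                /* ... parse the network protocol here; a compromise now owns
                 * only an unprivileged process, and can ask the parent for
                 * nothing beyond the few operations it exports ... */
                const char request[] = "sign-with-host-key";
                write(sv[1], request, sizeof(request));
                _exit(0);
            }

            /* Parent: stays privileged, never parses network input, and
             * services only a small, well-defined set of child requests. */
            close(sv[1]);
            char buf[256];
            ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("privileged monitor got request: %s\n", buf);
            }
            waitpid(pid, NULL, 0);
            return 0;
        }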
    
    >From a reliability (no bugs) perspective, one can specify that e.g. 
    >input timings don't affect the output data and are therefore
    >irrelevant, and it's pretty straightforward to make it so.
    >
    From a reliability (no bugs) perspective, a program should do what it 
    is supposed to do, i.e. be functional.
    
    >From a security (no vulnerabilities) perspective, ensuring that the
    >input data don't affect the output timings is (at a guess) somewhere
    >between insanely difficult and downright impossible; and simply
    >deeming them to be irrelevant doesn't help at all.
    >
    From a security (no vulnerabilities/bugs) perspective, a program should 
    not do anything else.
    
    >So, while it might be possible to produce a "completely reliable"
    >program (modulo the requirement that all other components behave
    >exactly as specified; IOW, not necessarily fault-tolerant), I'm not
    >sure that "completely secure" is a meaningful concept.
    >
    It is possible to write a completely secure (no vulnerabilities WRT the 
    program's specification) program: just write perfect (bug-free) code. 
    The problem is that it becomes exponentially expensive to asymptotically 
    approach perfection, and with current technology, this becomes totally 
    infeasible somewhere around 10,000 to 100,000 lines of code (depending 
    on the complexity of the application and the size of your bag of money).
    
    And since it hasn't been brought up yet, anyone who hasn't read it 
    should go read the classic Saltzer and Schroeder paper 
    <http://web.mit.edu/Saltzer/www/publications/protection/index.html>. 
    This paper defined eight principles of secure software design. I quote:
    
       1. Economy of mechanism: Keep the design as simple and small as
          possible. This well-known principle applies to any aspect of a
          system, but it deserves emphasis for protection mechanisms for
          this reason: design and implementation errors that result in
          unwanted access paths will not be noticed during normal use (since
          normal use usually does not include attempts to exercise improper
          access paths). As a result, techniques such as line-by-line
          inspection of software and physical examination of hardware that
          implements protection mechanisms are necessary. For such
          techniques to be successful, a small and simple design is essential.

       2. Fail-safe defaults: Base access decisions on permission rather
          than exclusion. This principle, suggested by E. Glaser in 1965,^8
          <http://web.mit.edu/Saltzer/www/publications/protection/notes.html#8>
          means that the default situation is lack of access, and the
          protection scheme identifies conditions under which access is
          permitted. The alternative, in which mechanisms attempt to
          identify conditions under which access should be refused, presents
          the wrong psychological base for secure system design. A
          conservative design must be based on arguments why objects should
          be accessible, rather than why they should not. In a large system
          some objects will be inadequately considered, so a default of lack
          of permission is safer. A design or implementation mistake in a
          mechanism that gives explicit permission tends to fail by refusing
          permission, a safe situation, since it will be quickly detected.
          On the other hand, a design or implementation mistake in a
          mechanism that explicitly excludes access tends to fail by
          allowing access, a failure which may go unnoticed in normal use.
          This principle applies both to the outward appearance of the
          protection mechanism and to its underlying implementation.
    
       3. Complete mediation: Every access to every object must be checked
          for authority. This principle, when systematically applied, is the
          primary underpinning of the protection system. It forces a
          system-wide view of access control, which in addition to normal
          operation includes initialization, recovery, shutdown, and
          maintenance. It implies that a foolproof method of identifying the
          source of every request must be devised. It also requires that
          proposals to gain performance by remembering the result of an
          authority check be examined skeptically. If a change in authority
          occurs, such remembered results must be systematically updated.
    
       4. Open design: The design should not be secret [27]. The mechanisms
          should not depend on the ignorance of potential attackers, but
          rather on the possession of specific, more easily protected, keys
          or passwords. This decoupling of protection mechanisms from
          protection keys permits the mechanisms to be examined by many
          reviewers without concern that the review may itself compromise
          the safeguards. In addition, any skeptical user may be allowed to
          convince himself that the system he is about to use is adequate
          for his purpose.^9
          <http://web.mit.edu/Saltzer/www/publications/protection/notes.html#9>
          Finally, it is simply not realistic to attempt to maintain secrecy
          for any system which receives wide distribution.
    
       5. Separation of privilege: Where feasible, a protection mechanism
          that requires two keys to unlock it is more robust and flexible
          than one that allows access to the presenter of only a single key.
          The relevance of this observation to computer systems was pointed
          out by R. Needham in 1973. The reason is that, once the mechanism
          is locked, the two keys can be physically separated and distinct
          programs, organizations, or individuals made responsible for them.
          From then on, no single accident, deception, or breach of trust is
          sufficient to compromise the protected information. This principle
          is often used in bank safe-deposit boxes. It is also at work in
          the defense system that fires a nuclear weapon only if two
          different people both give the correct command. In a computer
          system, separated keys apply to any situation in which two or more
          conditions must be met before access should be permitted. For
          example, systems providing user-extendible protected data types
          usually depend on separation of privilege for their implementation.
    
       6. Least privilege: Every program and every user of the system should
          operate using the least set of privileges necessary to complete
          the job. Primarily, this principle limits the damage that can
          result from an accident or error. It also reduces the number of
          potential interactions among privileged programs to the minimum
          for correct operation, so that unintentional, unwanted, or
          improper uses of privilege are less likely to occur. Thus, if a
          question arises related to misuse of a privilege, the number of
          programs that must be audited is minimized. Put another way, if a
          mechanism can provide "firewalls," the principle of least
          privilege provides a rationale for where to install the firewalls.
          The military security rule of "need-to-know" is an example of this
          principle.
    
       7. Least common mechanism: Minimize the amount of mechanism common to
          more than one user and depended on by all users [28]. Every shared
          mechanism (especially one involving shared variables) represents a
          potential information path between users and must be designed with
          great care to be sure it does not unintentionally compromise
          security. Further, any mechanism serving all users must be
          certified to the satisfaction of every user, a job presumably
          harder than satisfying only one or a few users. For example, given
          the choice of implementing a new function as a supervisor
          procedure shared by all users or as a library procedure that can
          be handled as though it were the user's own, choose the latter
          course. Then, if one or a few users are not satisfied with the
          level of certification of the function, they can provide a
          substitute or not use it at all. Either way, they can avoid being
          harmed by a mistake in it.
    
       8. Psychological acceptability: It is essential that the human
          interface be designed for ease of use, so that users routinely and
          automatically apply the protection mechanisms correctly. Also, to
          the extent that the user's mental image of his protection goals
          matches the mechanisms he must use, mistakes will be minimized. If
          he must translate his image of his protection needs into a
          radically different specification language, he will make errors.
    
    These principles are not a cookbook for secure code, but following them 
    greatly enhances the likelihood of building secure software.
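
    As a small illustration of how a couple of these principles look in
    ordinary code, here is a hypothetical access-check routine (the names
    and rule table are invented for the example) written in the
    fail-safe-defaults style: every request funnels through one check
    (complete mediation), and anything not explicitly allowed by a rule
    falls through to "deny".

        #include <stdio.h>
        #include <string.h>

        /* Invented rule table: only what is explicitly listed is allowed. */
        struct rule { const char *user; const char *operation; };

        static const struct rule allowed[] = {
            { "backup",   "read"  },
            { "operator", "read"  },
            { "operator", "write" },
        };

        /* Complete mediation: every request goes through this one check.
         * Fail-safe default: anything not matched by a rule is denied. */
        static int access_permitted(const char *user, const char *operation)
        {
            size_t i;
            for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
                if (strcmp(allowed[i].user, user) == 0 &&
                    strcmp(allowed[i].operation, operation) == 0)
                    return 1;
            return 0;   /* default: no access */
        }

        int main(void)
        {
            printf("operator write: %s\n",
                   access_permitted("operator", "write") ? "allowed" : "denied");
            printf("guest write:    %s\n",
                   access_permitted("guest", "write") ? "allowed" : "denied");
            return 0;
        }

    The contrast is with a deny-list, where a forgotten entry fails open;
    here a forgotten entry fails closed, which is exactly what the second
    principle asks for.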
    
    Crispin
    
    -- 
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX                      http://wirex.com/~crispin/
    Security Hardened Linux Distribution:       http://immunix.org
    Available for purchase: http://wirex.com/Products/Immunix/purchase.html
    			    Just say ".Nyet"
    
    
    
    


