RE: Anyone looked at the canary stack protection in Win2k3?

From: Jason Coombs (jasoncat_private)
Date: Wed Aug 06 2003 - 15:45:11 PDT

    I wrote up a simple analysis of Microsoft's /GS compiler option for Visual C++
    in my book "IIS Security and Programming Countermeasures" -- includes
    disassembly-mode debugger screen shots before and after using a simple exploit
    proof of concept.
    
    http://www.forensics.org/jasonc/iisforensics.zip
    
    See Chapter 1, Figure 1-4, for the unguarded scenario and Chapter 10, Figures
    10-5 to 10-8 for the /GS scenario.
    
    This is introductory-level material, but important for anyone who has never
    seen a stack buffer overflow occur in a debugger. Chapter 10's text is
    reproduced below. Take a look at the figures, which you'll find in the
    above-referenced ZIP file.
    
    Jason Coombs
    jasoncat_private
    
    --
    
    Compiler Security Optimizations
    
    Nearly every software vendor of the last twenty years has faithfully repeated
    the same information security mistakes made by every software vendor before
    them. Standard operating procedure for software vendors throughout the
    twentieth century was build it, ship it, fix it. In the twenty-first century
    software vendors are expected to behave differently, and the toolset used by
    programmers today reflects this change. Security optimizations in runtime
    libraries and compilers are part of the new and improved software industry. Do
    they prevent security problems? No, but they’re a lot of fun. And they do more
    good than harm, hopefully. One such compiler security optimization that has
    had a big impact on IIS version 6 is the new Visual C++ /GS compiler option.
    Most of IIS version 6 was written in Visual C++, and unlike previous releases
    of IIS, its source code is now compiled with the new Guard Stack (/GS) option
    in the Visual C++ compiler.
    
    In Chapter 1 you saw the simplest possible buffer overflow, where an array
    index reference exceeds the dimensional boundary of a stack-allocated buffer
    and thereby modifies the function return address that is popped off the stack
    when the microprocessor encounters a return instruction. Most buffer overflow
    vulnerabilities exploitable in the wild involve less explicit means of
    stuffing malicious bytes into stack memory, so that every byte of memory
    beginning with the starting address of the buffer is overwritten with
    malicious bytes that have just the right length and structure to drop a
    replacement value onto the authentic return address. This messy slaughter of
    everything between the memory buffer and the function return address leaves
    evidence of a buffer overflow condition. If we could just get all attackers to
    give us predictable malicious bytes in their exploit byte stuffing code, then
    we could detect such conditions at runtime. Or we could do as coal miners did
    before technical safety innovations improved gas fume detection: bring a
    canary with us everywhere we go. The coal miner’s canary died quickly in the
    presence of odorless toxic fumes, and when the canary died the miner knew to
    leave the mine until the fumes dissipated. Our electronic canary equivalent is
    a cookie, a token of unpredictable value selected at runtime that we can use
    to confirm that the authentic return address has not been modified before we
    allow the microprocessor to trust it and pop it off the stack. The /GS
    compiler option in Visual C++ 7.0 places just such a canary on the stack and
    checks to see that it is still alive when a vulnerable stack frame returns. A
    modified version of the Hello World! stack buffer overflow sample from Chapter
    1 appears below.
    
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char* argv[]) {
     /* out-of-bounds reads: given this build's stack frame layout, p[3] and
        p[4] index past the two-element array into the saved frame base
        address and the authentic return address */
     void * p[2] = {(void *)p[3],(void *)p[4]};
     char unchecked[13];
     p[3] = (void *)&p[0];      /* overwrite the saved frame base address */
     p[4] = (void *)0x00411DB6; /* overwrite the authentic return address */
     /* strcpy writes 14 bytes (13 characters plus the terminating null)
        into the 13-byte buffer */
     printf(strcpy(unchecked,"Hello World!\n"));
     return 0; }
    
    This code, when compiled with /GS enabled in Visual C++ 7.0, results in
    precisely the same endless loop demonstration as illustrated in Chapter 1
    (where the compiler used was Visual C++ 6.0). The difference is that setting
    p[4] to 0x00411DB6, the address of the original call to the main function (to
    set up the infinite recursion while reusing the current stack frame base
    address), results in the death of the canary, and program execution
    terminates abruptly after the first iteration. Figure 10-5 shows the security
    cookie being retrieved into the EAX register. This cookie value was generated
    dynamically by the C runtime prolog code and stored in memory location
    0x00425B40, where the compiled code expects to find it at runtime. The
    security cookie is not the canary; it is the pseudorandom encryption key used
    to produce the canary through the very next instruction, an exclusive or
    (xor).
    
    F10XX05.PNG
    Figure 10-5: The Guard Stack Security Cookie Retrieved into EAX
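    The relationship between the cookie, the return address, and the canary can
    be sketched in a few lines of C. This is an illustrative model only: the
    function names are invented, and the cookie value used below is simply
    derived from the canary (42FC11E7) and return address (00411DBB) shown in
    Figure 10-6, not a value any real build would generate.

```c
#include <stdint.h>

/* The canary is the xor of the process-wide security cookie and the
 * authentic return address saved in the frame. */
uint32_t make_canary(uint32_t security_cookie, uint32_t return_address)
{
    return security_cookie ^ return_address;
}

/* Reversing the xor with the return address currently on the stack
 * recovers the cookie -- but only if the return address is unmodified. */
uint32_t recover_cookie(uint32_t canary, uint32_t stacked_return_address)
{
    return canary ^ stacked_return_address;
}
```

    With the figure's values, 0x42BD0C5C xor 0x00411DBB yields the canary
    0x42FC11E7; xor the canary with any other return address and the cookie does
    not come back out.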
    
    What happens next is the birth of the canary, or its placement into the cage
    if you prefer to think of it in those terms. The canary is the bitwise xor
    combination of the security cookie and the authentic return address to which
    the current call stack expects program execution to return when the subroutine
    completes. Figure 10-6 shows the canary, the value 42FC11E7, stored in the
    four bytes just below the previous stack frame base address that was pushed
    onto the stack by the very first instruction at the beginning of the main
    function (push ebp). The four-byte canary is placed in this location because
    it is the last four-byte region of the new stack frame. Any buffer overflow
    exploit that impacts the stack frame will have to write through the canary to
    get at the return address, which is stored in the four bytes of memory
    beginning four bytes above the base address of the new stack frame. Between
    the canary and the return address are the four bytes containing the previous
    stack frame base address, 0012FFC0. The authentic return address shown in
    Figure 10-6 is 00411DBB, which you can see on the line addressed beginning at
    0x0012FEDC, which happens to be the current value of EBP, the current stack
    frame base address.
    
    F10XX06.PNG
    Figure 10-6: An Electronic Canary 42FC11E7 on The Stack
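    The layout just described can be pinned down as a C struct. The struct and
    its field names are hypothetical, used only to make the offsets concrete:
    the canary occupies the four bytes at ebp-4, the saved previous frame base
    sits at ebp, and the return address at ebp+4.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the top of a /GS-guarded stack frame as described
 * above; field names are illustrative, not Microsoft's. Offsets are
 * relative to the canary at ebp-4. */
struct guarded_frame_top {
    uint32_t canary;         /* ebp-4: last four bytes of the new frame  */
    uint32_t saved_ebp;      /* ebp:   previous stack frame base address */
    uint32_t return_address; /* ebp+4: authentic return address          */
};
```

    An overflow that runs upward from a local buffer toward the return address
    must cross the canary field first, which is the whole point of the placement.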
    
    Like the sample shown in Chapter 1, this sample’s mission is to capture the
    previous stack frame base address into the first element of void pointer array
    p and then take control of the return address to which the present stack frame
    will jump when the next return instruction is executed. Figure 10-7 shows
    these steps being carried out as planned. The hard-coded address 0x00411DB6
    is the address that the sample exploit prefers to the authentic return
    address, and you can see the mov instruction at the top of the disassembly
    shown in Figure 10-7 that forces this new value in place of the authentic
    original. The authentic address of the previous stack frame base has also
    been overwritten by this time. Both of these malicious replacement values
    appear on the line addressed beginning at 0x0012FEDC. The next instruction to
    be executed, marked by the arrow and the address shown in EIP, moves the
    canary into the ECX register where it can be examined.
    
    F10XX07.PNG
    Figure 10-7: Our Sample Exploit Code Hard at Work
    
    You can see in Figure 10-8 that the canary is still alive in its cage located
    at the very top of the stack frame where it was first placed as shown in
    Figure 10-6. Or is it? The canary value hasn’t changed, but the return
    pointer has. Another quick xor, using the new return address and the same
    security cookie as before, lets us take the pulse of the canary to see if
    it’s really alive. Figure 10-8 shows the result. The ECX register contains a
    value other than the security cookie, and the canary is shown to have died.
    Of old age, perhaps, since it’s now outdated and doesn’t confirm the
    authenticity of the return address that program execution is about to be
    handed over to when the return instruction is encountered. The
    __security_check_cookie runtime library function calls ExitProcess when it
    detects the security compromise.
    
    F10XX08.PNG
    Figure 10-8: The Canary Looks Fine Until It Fails The Security Check
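    The pulse-taking step can be sketched in C as well. This is a model of the
    check as the chapter describes it, not the actual C runtime code:
    canary_alive and check_canary are invented names, and in a real /GS build
    the comparison and termination are performed by __security_check_cookie.

```c
#include <stdint.h>
#include <stdlib.h>

/* Model of the epilog check described above: xor the canary with the
 * return address now on the stack; only an unmodified return address
 * reproduces the process-wide security cookie. */
int canary_alive(uint32_t canary, uint32_t stacked_return_address,
                 uint32_t security_cookie)
{
    return (canary ^ stacked_return_address) == security_cookie;
}

/* The runtime's reaction to a dead canary, as the text describes it:
 * terminate the process rather than trust the return address. */
void check_canary(uint32_t canary, uint32_t stacked_return_address,
                  uint32_t security_cookie)
{
    if (!canary_alive(canary, stacked_return_address, security_cookie))
        exit(EXIT_FAILURE);
}
```

    With the values from the figures, the authentic return address 00411DBB
    passes the check while the exploit's replacement 00411DB6 fails it.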
    
    The compiler used to build object code from C++ source may have some security
    optimizations available, but it can’t change the basic fact that low-level
    control over code safety is placed in the hands of the C++ programmer. Many
    of the problems caused by unsafe code are indefensible so long as the unsafe
    code is present. Without a way to predict the type of problems that
    impact data security you can’t add defensive layers around code that might
    benefit from such layers. Guarding against predictable problems is important,
    but if you already knew what all of the problems were and where those problems
    lived in code you would remove the dangerous code completely or place
    protective wrappers around every bit of it. This makes the unknown more
    dangerous than the known, and code known to be unsafe is often allowed to
    execute in spite of its attendant risks because the risks are known.
    



    This archive was generated by hypermail 2b30 : Fri Aug 08 2003 - 14:56:48 PDT