An update on MS private key (in)security issues

From: Aleph One (aleph1at_private)
Date: Fri Feb 06 1998 - 16:39:11 PST


    ---------- Forwarded message ----------
    Date: Wed, 4 Feb 1998 01:36:20 (NZDT)
    From: Peter Gutmann <pgut001at_private>
    To: cryptographyat_private
    Subject: An update on MS private key (in)security issues
    
    A fortnight ago I posted a message exposing a number of weaknesses in the way
    various Microsoft security products handle users' private keys.  A few days
    before I was due to vanish for a security conference (where it was very
    difficult to contact me), the article started getting a bit of attention.  This
    is a general response which clarifies several issues relating to the original
    message.
    
    First, Russ Cooper (moderator of the NT Bugtraq mailing list) made some wildly
    inaccurate claims about the article.  I've had a bit of feedback which
    indicated that it wasn't even worth dignifying this stuff with a response but
    I've written one anyway, at least for the original NT Bugtraq messages he
    posted (the stuff he put on a web page is just more of the same).  You can find
    it at http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms2.txt (even if you
    don't want to plough through the whole thing, you might want to read the last
    two paragraphs for a giggle).
    
    After this, Microsoft's spin doctors leaped into action and posted a response to
    the article.  I've replied to this one in a bit more detail, since it raises
    several important issues.
    
    Comments on Microsoft's Response
    --------------------------------
    
    >Microsoft's Response to Recent Questions on the Recovery of Private Keys from
    >various Microsoft Products
    
    >[...]
    
    >With the exception of the details about a possible PKCS-12 attack, all the
    >issues raised in the recent discussions are specifically aimed at Microsoft's
    >base CSPs, not at CryptoAPI.  The base CSPs are the royalty-free,
    >software-based CSPs provided with Windows NT 4.0, Windows 95, and Internet
    >Explorer. These attacks do not apply in general to other CryptoAPI CSPs and do
    >not indicate any fundamental concerns with the security of the CryptoAPI
    >architecture or applications built upon it.
    
    This statement is mere obfuscation.  As Microsoft have said, every single
    (recent) copy of Windows NT, Windows'95, and MSIE (and, presumably, other
    products like IIS and Outlook, which rely on them) ships with these CSPs,
    so every recent system comes with this security hole installed by
    default.
    
    In addition, the CryptExportKey() function is a standard CryptoAPI function,
    which Microsoft describes with the text "The CryptExportKey function is
    used to export cryptographic keys out of a cryptographic service provider in a
    secure manner" (obviously some new use of the term "secure" with which I wasn't
    previously familiar).  There's nothing in there that says "Only the Microsoft
    CSPs support this" (some may not support it, but by Microsoft's own admission
    the default used on any recent system does indeed exhibit this flaw).
    
    >Microsoft products do not "store" private key material using PKCS-12, contrary
    >to recent descriptions on the Internet.
    
    I was unfortunately somewhat unclear in my article, so I've fixed up the online
    version at http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms.txt.  To
    summarise, if you want to get at an exported key stored on disk, you can use
    the pre-4.0 key file/PKCS #12 key file flaws.  If you want to get at the
    current user's key, you can use the CryptExportKey() flaw.
    
    >The example of breaking a PKCS-12 data blob, which was given in discussion on
    >the Internet, is not an attack on the "weak" cryptography of PKCS-12. Rather,
    >it is simply a dictionary attack (long known and well understood in
    >cryptography),
    
    This statement is accurate.  As Microsoft have said, dictionary attacks have
    been known to the computer industry for a long, long time (several decades).
    For example, the Unix password encryption uses 25 iterations of DES to protect
    users' passwords from dictionary attacks (this is rather inadequate now, but was
    appropriate more than two decades ago when it was introduced).
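    To make the mechanism concrete, here's a minimal sketch in Python of a
    dictionary attack against an iterated password hash.  This is illustrative
    only: it uses SHA-256 rather than crypt(3)'s modified 25-round DES, and the
    salt, iteration count, and wordlist are made up for the example.

```python
import hashlib

def derive_key(password: bytes, salt: bytes, iterations: int = 25) -> bytes:
    # Iterate the hash so each password guess costs `iterations` operations,
    # the same idea as crypt(3)'s 25 DES iterations (SHA-256 is used here
    # purely for illustration, not because crypt(3) uses it).
    state = salt + password
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def dictionary_attack(stored: bytes, salt: bytes, wordlist):
    # Try each candidate password in turn.  Iteration makes every guess
    # slower, which is the only protection left once an attacker has a copy
    # of the stored value.
    for candidate in wordlist:
        if derive_key(candidate, salt) == stored:
            return candidate
    return None

salt = b"ab"
stored = derive_key(b"secret", salt)
recovered = dictionary_attack(stored, salt, [b"password", b"letmein", b"secret"])
```

    The point is simply that iteration multiplies the attacker's per-guess
    cost; it does nothing at all for a weak password that appears in the
    wordlist.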
    
    If this is a well-known flaw which the entire industry has known about for
    decades, why are Microsoft's very latest "security" products so extremely
    vulnerable to it?  As I mentioned in my writeup, I published an attack on the
    exact format used in older MS products more than 1 1/2 years ago, but Microsoft
    made no attempt to fix this problem at the time.  It wasn't until I published
    the current writeup that they finally addressed it.
    
    >One statement by the original poster argues that Microsoft managed to design a
    >security format that actually makes it easier to break into the protected
    >data. As stated above, a number of well respected companies were involved
    >extensively in the design and review of PKCS-12 before it became a standard.
    
    The problem is that Microsoft ignored the other companies' recommendations for
    increasing security.  I'll refer people back to the original writeup for
    mention of the various security features which were added to PKCS #12 and
    subsequently ignored by Microsoft.  PKCS #12 is just a basic modification of
    PFX, which was a purely Microsoft invention, and Microsoft's implementation of
    PKCS #12 does indeed make it very easy to perform the attack I described (you'd
    have to read my writeup for the full technical details, in which I describe each
    individual flaw which, as each is exploited in turn, makes it progressively
    easier and easier to recover a user's private key).
    
    In addition (as I mentioned in my original message), Microsoft use RC2/40 to
    encrypt the private keys, which means that no matter how good a password you
    choose, the default PKCS #12 format private key produced by MSIE can be
    recovered in a few days with fairly modest computing power.  There already
    exists a Windows screen saver which will recover RC2/40 keys for S/MIME
    messages, and the prize of an RSA private key is a much higher motivating
    factor than the "Hello, how are you?" typically found in an S/MIME message. As
    Microsoft point out, Netscape can indeed handle RC2/40-encrypted files;
    however, they usually use triple DES and not RC2/40 (they encrypt the private
    key components - the really valuable part of the PKCS #12 payload - with triple
    DES and the other odds and ends with RC2/40).  Since Netscape also iterate the
    password and MAC processing, they aren't vulnerable to the attacks which the
    Microsoft software is vulnerable to, even though both are based on the same
    PKCS #12 standard.
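    As a back-of-the-envelope sketch of why this matters: the trial rates and
    iteration count below are assumed figures chosen for illustration, not
    benchmarks of any actual implementation.

```python
# Rough cost estimates for the attacks discussed above (assumed rates).

keyspace = 2 ** 40                 # RC2/40: 2^40 possible keys
trials_per_second = 5_000_000      # assumed trial rate for the attacker

# Worst-case exhaustive search of the 40-bit key space, in days.  No matter
# how good the password is, the key itself can be found this way.
days_to_search = keyspace / trials_per_second / 86_400

# Iterating the password and MAC processing (as Netscape does) multiplies
# the cost of every password guess by the iteration count.
iterations = 1_000                 # assumed iteration count
base_guesses_per_second = 100_000  # assumed rate with a single iteration
slowed_rate = base_guesses_per_second / iterations

print(f"~{days_to_search:.1f} days to exhaust RC2/40 at {trials_per_second:,} trials/s")
print(f"password guesses drop from {base_guesses_per_second:,}/s to {slowed_rate:,.0f}/s")
```

    With these assumed numbers the 40-bit key space falls in a few days of
    searching, consistent with the "few days with fairly modest computing
    power" above, while iteration cuts an attacker's guessing rate by three
    orders of magnitude.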
    
    >Exploiting Internet Explorer
    >
    >One of the fundamental requirements to perform any possible cryptographic
    >attack discussed in the recent postings is the assumption that a malicious
    >entity could somehow access to data on a user's computer system, without the
    >permission or knowledge of the user.  This is a large leap of faith.
    >
    >Users that are using Public Key Certificates today are generally sophisticated
    >and savvy users,
    
    Bwahahahahaha!
    
    Sorry, I guess I should explain that in more precise terms.  Now I'm not trying
    to claim that the average Win95 user isn't as sophisticated and savvy as
    Microsoft seem to think, or that relying entirely on the sophistication of the
    average Win95 user for security is a dangerous move.  However, these statements
    do seem to completely ignore the reality of the situation.  Let me go into the
    background of the weaknesses a bit further.
    
    When I tested the weaknesses, I asked some "guinea pig" users to send me their
    exported keys, with the promise that I'd destroy the keys and not keep any
    record of where the keys came from and so on.  Because of this I can't give
    exact figures, however here are a few data points:
    
    - More than half the keys (I can't remember the exact figure) were in the
      older, pre-4.x format.  This indicates that the majority of users (or at
      least of the crypto-using guinea pigs) are still running old versions of MSIE
      which contain a number of widely-publicised problems, including precisely the
      weaknesses required to run arbitrary code on the machine or read files off
      the machine.  One of the respondents commented that the key was "the key I
      use with Outlook"; I'm not sure what that says about the origins of the key.
      Another key was one used with IIS for a web site which runs an online
      commerce service that processes credit-card based orders.  This must have
      been an older version of IIS since the key was in the pre-4.x format.
    
    - When I asked two of the people why they were still using an old version, the
      responses were "It came installed with the machine when we bought it" and "It
      does what I want, so I haven't upgraded".  I expect most users of the older
      version would have a similar reason for using it.
    
      In a company of 20 people (some of whom participated in this test), only two
      were running 4.x.  These people are all highly competent software developers
      (presumably this includes them in the group of "sophisticated and savvy
      users" which MS were referring to), however none of the remaining 18 wanted
      to risk installing MSIE 4 because of the extent to which it messed with
      their system, possibly jeopardising their development work.  Therefore most
      of these "sophisticated and savvy users" were still running old, insecure
      versions of MSIE, and weren't going to upgrade any time soon.
    
    - The two remaining 4.x users in the company are both still using straight,
      unpatched 4.0 because they considered it painful enough to download
      all of 4.0 and they didn't want to bother with an upgrade just yet.  This
      makes them vulnerable to the bug pointed out in the l0pht advisory.
    
    - When I informed one of the guinea pigs of the recovered password, his
      response was "Now I'll have to change my login password" (the others
      generally restricted themselves to some variation of "Oops").  This comment
      confirms that users do indeed sometimes use their login password to protect
      their private keys, so that a successful attack recovers not only their
      private keys but their login password as well.
    
    - One respondent commented that most of the code they downloaded from the net
      was unsigned, but they ran it anyway.  This is probably typical of the
      average user.
    
    Although I've only been back for two days, I haven't yet managed to find anyone
    who has applied all the latest patches and kludges which would make their
    system immune to these problems.  This includes at least one person who has a
    code signing key which could be used to sign malicious ActiveX controls.
    Therefore all of the guinea pigs could in theory have their private keys
    stolen.
    
    >Attacks against Authenticode signing keys
    >
    >[...]
    >
    >However, it is extremely unlikely anyone could be successful with a simplistic
    >strategy for stealing Authenticode signing keys and then using them.
    
    See my comments above about this.  I could, right now, obtain at least one
    Authenticode signing key with which I could create a malicious ActiveX control.
    I'm sure that if I were serious about this I could obtain a number of others.
    
    >Second, anyone signing code using Authenticode technology is extremely
    >unlikely to leave their key material sitting on an end user machine routinely
    >used for browsing the Internet.
    
    I see no basis for this claim.  Every developer I know uses the same machine
    for both code development and web browsing (in fact a number of users keep a
    copy of MSIE or Netscape permanently running on their machine so that they'll
    have something to do between compiles).  Microsoft's statement seems to imply
    that users will be running two machines, one dedicated entirely to web browsing
    and the other to code signing.  I find this extremely unlikely.
    
    >Attacks on Encrypted Key material
    >
    >There is some confusion over the algorithms and methods that Microsoft uses to
    >provide protection of encrypted key material in Internet Explorer when using
    >the standard Microsoft base CSPs. There were changes between Internet Explorer
    >3.0x and Internet Explorer 4.x specifically to address any possible concern.
    
    And the astounding thing was that, despite a complete rewrite of the code, the
    newer version offers no extra security at all, as my article points out.  Both
    are equally weak; you just need to attack them in slightly different ways.
    
    >Key export attacks
    >
    >The original Internet posting raises concern about the CryptoAPI interface
    >CryptExportKey().  This function is fully documented and does indeed export
    >private key material in an encrypted format.
    
    The statement from Microsoft omits one important point here.  Another security
    researcher and I have been trying to tell Microsoft for more than
    four months (probably a lot longer, I didn't log the earlier mail) that this is
    a very serious security flaw.  In all cases Microsoft's response was some
    variation of "We don't see this as a flaw".  It wasn't until my warning was
    published that they finally addressed the problem, and even in this case (the
    program to set the CRYPT_USER_PROTECTED flag) it's only a quick hack to plaster
    over the cracks and allow them to declare the problem fixed.  In fact it
    doesn't fix the problem at all.  I'll post details in a fortnight or so when
    I've had time to test things, please don't ask me about this until then.
    
    >The presence of the CryptExportKey() function is to support functionality such
    >as migrating key material between machines for a user or creating a backup
    >copy of a key.  It should be noted however, that many CSPs, including most
    >hardware based CSPs, do not allow exportable private keys and will return an
    >error in response to a CryptExportKey() request.
    
    However the CSPs installed by default on every recent system do allow the
    export.  The fact that there may exist, somewhere, an implementation which
    doesn't exhibit this flaw really doesn't help the majority of users.
    
    >The posting also asserts that an ActiveX control could be downloaded from a
    >web page, simply ask for the current users key, ship the key off for
    >collection by an unscrupulous person, and then delete itself without a trace.
    >
    >If users run unsigned code, or code from an unknown origin, a number of
    >unpleasant things can happen.  This could indeed occur if the key has been
    >marked exportable and the CRYPT_USER_PROTECTED flag is not set.
    
    Which was, until my warning, the automatic default action for Verisign, and is
    still the automatic default for many other CAs.  This means that keys
    generated right up until a few days ago (and in some cases keys being generated
    right now) have, by default, no protection whatsoever.  In addition I've just
    been sent mail to say that CRYPT_USER_PROTECTED still isn't the default with
    Verisign, you have to click a box asking for extra security or you get the
    usual default of no protection.
    
    To add to the confusion, a lot of documentation (including the Microsoft
    Developer Network (MSDN) online documentation on Microsoft's web site) describes
    the CRYPT_USER_PROTECTED flag with "The Microsoft RSA Base Provider ignores
    this flag", which can cause programmers to leave it out when it is *essential*
    that it always be included.  Incidentally, MSDN also claims that Microsoft are
    "supporting" PFX, which is in direct contrast to the claims in their press
    release.
    
    >There was also discussion of 16 dialog boxes appearing to the user for their
    >password if the CRYPT_USER_PROTECTED flag is set. Certainly asking the user
    >too many times would be better than too few times, however in our tests, under
    >extreme (and uncommon cases), a user might be asked for their password at most
    >four times when a key is used with the Microsoft base CSPs. Perhaps the claim
    >came from an early beta release.
    
    The 16 dialog boxes problem was present in 4.0, and it's been confirmed as
    still being present in 4.01.  This was apparently verified on Microsoft's own
    CryptoAPI mailing list (the claimed "beta" version was actually MSIE 4.0
    final).
    
    >Microsoft is constantly working to improve the security of our products.
    
    Another person and I had been trying to convince Microsoft for more than
    four months that things like CryptExportKey() are a major security hole.  Their
    response each time has been some variation of "We don't see this as a problem".
    It simply wasn't possible to get them to acknowledge these flaws, therefore my
    posting of the details wasn't "irresponsible" (as some have claimed) but a
    necessity in order to get them fixed.  When I pointed out several months ago
    that Microsoft were using, in their pre-4.x products, an exported key format
    which was identical to the one which I broke more than 1 1/2 years ago, their
    response was "What's the problem with this?".  I would have liked to have
    included URLs for the CryptoAPI mailing list archives to prove that Microsoft
    were warned of these problems some time ago, but the CryptoAPI archive seems to
    be temporarily offline.
    
    Risks of a Private Key Compromise
    ---------------------------------
    
    One or two respondents have tried to play down the seriousness of a private key
    compromise, saying that I exaggerated the dangers in my original message.  What
    I wanted to point out was how extremely valuable a user's private key is, and
    how disappointingly little concern Microsoft seem to have for protecting this
    valuable resource.  For example, Garfinkel and Spafford's "Web Security and
    Commerce" (O'Reilly, 1997) contains the warning:
    
      "Today's PC's are no better at storing private keys once they have been
      generated.  Even though both Navigator and Internet Explorer can store keys
      encrypted, they have to be decrypted to be used.  All an attacker has to do
      is write a program that manages to get itself run on the user's computer (for
      example by using [...] Microsoft's ActiveX technology), waits for the key to
      be decrypted, and then sends the key out over the network" (p.120).
    
    Including a function like CryptExportKey() (which hands over a user's private
    key) makes this very attack possible.  The potential damage which can be caused
    with a misappropriated private key is enormous.  Consider the relative risks in
    the compromise of a logon password and a private key.  With a logon password,
    the worst an attacker can do (apart from stealing possibly valuable data) is to
    trash the victim's hard drive, which means reinstalling the operating system
    (an action with which many users are intimately familiar).  In contrast, an
    attacker who obtains a private key can potentially drain the victim's credit
    card, clean out their bank account (if the victim uses one of the emerging
    online banking services), sign documents in their name, and so on.
    
    The implications of that last point can be quite serious.  Take for example the
    Utah digital signature act, which was used as a model by a number of other
    states which have implemented or are implementing digital signature legislation.
    Under the Utah act, digitally signed documents are given the same evidentiary
    weight as notarised documents, and someone trying to overcome this has to
    provide "clear and convincing evidence" that the document is fraudulent, which
    is difficult since it bears a valid signature from the user's key (this fact has
    been used in the past to criticise digital signature laws based on this act).
    In addition, under the Utah act and derived acts, anyone who loses their
    private key bears unlimited liability for the loss (in contrast, consumer
    liability for credit card loss is limited to $50).  This leads to the spectre
    of a malicious attacker who has the ability to issue notarised documents in
    your name for which you carry unlimited liability.  This is a lot more
    important than someone reformatting your hard drive, or stealing last month's
    sales figures.
    
    Peter.
    



    This archive was generated by hypermail 2b30 : Fri Apr 13 2001 - 13:42:04 PDT