The Large-Scale Threat of Bad Data in DNS

From: FORENSICS.ORG Security Coordinator (secalertat_private)
Date: Fri Aug 09 2002 - 19:37:31 PDT


    The Large-Scale Threat of Bad Data in DNS
    
    Since DNS was first created, it has been commonly known that "bad data" in
    DNS is a potential security problem. DNSSEC, a new standard now under
    development, is designed to plug one of the security holes caused by "bad
    data" in DNS -- the fact that any DNS server can be filled with incorrect
    information by a malicious third party through a variety of techniques,
    none of which requires the third party to possess passwords that grant
    access to the computer that runs the DNS server. An attacker who fills a
    DNS server with malicious information gains full control over where on the
    network client computers direct the information they send to servers
    within the impacted DNS domains. This is commonly known as "DNS
    Hijacking", "DNS Spoofing", or "DNS Poisoning". In and of itself, DNS
    hijacking is a significant threat, but one that is generally ignored
    because everyone knows that DNS is insecure (at least until DNSSEC gets
    deployed) and everyone lives with the risk. More to the point, everyone
    who needs security understands that DNS doesn't provide it, so they build
    additional security mechanisms, such as Secure Sockets Layer (SSL), on top
    of DNS.
    
    It is a commonly held belief today, even among information security
    professionals, that it doesn't matter if DNS hijacking occurs, because it
    doesn't impact the security of SSL -- if a client is hijacked through "bad
    data" in DNS and redirected to a malicious attacker instead of the
    authentic SSL-secured server, the client will be informed of this fact,
    because the SSL protocol reliably authenticates the identity of the server
    through the use of PKI certificates. There is no way for an attacker to
    spoof the SSL identity of the authentic server without stealing that
    server's secret key, the key that corresponds to the public key contained
    within the server's PKI certificate as issued by a Certificate Authority
    (CA) such as VeriSign. It is generally believed that secret keys are hard
    to steal and easy to keep secret, so nobody should be especially worried
    about a DNS hijacker and "bad data" in DNS as long as they always use SSL
    or similar encryption and authentication software and protocols to protect
    sensitive communications on the Internet.
    
    Two recent software bugs, discovered by researchers and reported on
    BugTraq, tear apart the belief that "bad data" in DNS is not a severe
    threat because SSL and other standards give everyone the security that DNS
    cannot. Both bugs impact Microsoft Internet Explorer as well as certain
    other software. (A definitive list won't exist for some time: there are
    many variations of the attack techniques, and even the researchers
    themselves acknowledge that they have explored only a few of the
    variations and have not yet looked into whether other software may also be
    vulnerable.)
    
    Mike Benham [moxieat_private] and Slav Pidgorny [pidgornsat_private]
    reported separate but related discoveries about the fact that commonly
    used SSL certificate creation software can be used to forge PKI
    certificates that are trusted by certain software, such as Internet
    Explorer, because authentic certificates issued by VeriSign, Thawte, and
    other CA's are improperly authorized (automatically) to function as CA
    certificates for the purpose of issuing and certifying malicious forged
    SSL certificates. Mike Benham demonstrated this week a forged SSL
    certificate for www.amazon.com that Internet Explorer trusts automatically
    as the authentic SSL certificate for Amazon's SSL-secured Web server. The
    technique and technical steps necessary to forge SSL certificates in this
    way are trivial, requiring access only to an authentic certificate issued
    by a legitimate CA and the corresponding private key/public key pair that
    anyone must generate when applying to a legitimate CA to purchase an SSL
    certificate. (See references [1] and [2] below.)
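
    The root cause reported in reference [1] is a failure to check the X.509
    Basic Constraints extension while walking the certificate chain, so an
    ordinary server certificate is honored as if it were a CA certificate. A
    minimal sketch of the missing check, written in Python with the modern
    "cryptography" library (the library choice and function name are our
    illustrative assumptions, not code from the original reports):

        from cryptography import x509

        def may_act_as_ca(pem_data: bytes) -> bool:
            # A certificate may sign other certificates only if its Basic
            # Constraints extension is present and marked CA=TRUE; this is
            # the check that vulnerable validators reportedly skip.
            cert = x509.load_pem_x509_certificate(pem_data)
            try:
                bc = cert.extensions.get_extension_for_class(
                    x509.BasicConstraints)
            except x509.ExtensionNotFound:
                return False  # no Basic Constraints: refuse to treat as CA
            return bc.value.ca

    A validator that applies this check to every issuer in the chain rejects
    a forged certificate of the kind Benham demonstrated, because the server
    certificate used to sign it is not marked CA=TRUE.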
    
    Adam Megacz [adamat_private] of the XWT Foundation reported a
    vulnerability in Internet Explorer's implementation of JavaScript that
    allows an attacker to hijack a user's browser and force it to attack Web
    servers located behind the user's firewall. The attack takes advantage of
    the fact that anyone who registers a DNS domain through any registrar can
    put "bad data" in their own DNS zone entries (called resource records):
    addresses found only on internal (intranet) private networks of the type
    that commonly exist behind firewalls. RFC 1918 (see reference [3] below)
    documents the private address ranges, such as 10.x.x.x or 192.168.x.x,
    that have been reserved for private networks. The RFC makes it clear that
    addresses referencing private networks must not be exposed outside the
    private network, where their meaning is ambiguous; this was supposed to
    include restrictions on DNS servers to prevent them from serving, and
    perhaps restrictions on DNS resolvers to prevent them from accepting, such
    RFC 1918 "bad data" in DNS zone resource records outside of private
    networks. There are known buffer overflow vulnerabilities in Web servers
    installed behind firewalls, and certain of these vulnerabilities (in
    Microsoft Internet Information Server in particular) can be exploited by a
    simple URL whose name/value pair parameters trigger the buffer overflow
    and deliver malicious executable code to the compromised server. Such
    servers often remain unpatched precisely because they weren't supposed to
    be exposed to attacks originating from the Internet, thanks to the
    protection allegedly provided by firewalls. The idea of combining Adam
    Megacz's JavaScript exploit with buffer overflow URL exploits to attack
    intranet Web servers through RFC 1918 "bad data" addresses in DNS is now
    obvious to anyone who would undertake such attacks in the first place.
    (See reference [4] below.)
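
    The first line of defense this attack suggests is simple to sketch:
    before software acts on a DNS answer obtained for an external name,
    reject any address that falls inside RFC 1918 space or loopback. A
    minimal illustration in Python (the function name and structure are our
    assumptions, offered only as a sketch of the idea):

        import ipaddress
        import socket

        def resolve_external(hostname):
            # Resolve a hostname, then refuse any answer that points into
            # RFC 1918 private space or at the loopback address; no public
            # name should legitimately carry such "bad data".
            addresses = {info[4][0]
                         for info in socket.getaddrinfo(hostname, None)}
            for address in addresses:
                ip = ipaddress.ip_address(address)
                if ip.is_private or ip.is_loopback:
                    raise ValueError("refusing private/loopback answer "
                                     "%s for %s" % (address, hostname))
            return addresses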
    
    There ARE protections available to everyone impacted by "bad data" in DNS.
    
    The first protection is knowledge.

    1) SSL can no longer be presumed to provide automatic server identity
    authentication, especially not when Internet Explorer is used.

    2) Any domain name can resolve to an address that points at computers
    behind your firewall if your firewall doesn't have the ability to filter
    out this type of RFC 1918 "bad data" in DNS lookups; as a result, any URL
    can cause any Web browser to attack computers behind the firewall,
    INCLUDING your own computer. 127.0.0.1 is the loopback address present in
    every TCP/IP-enabled computer, which causes the computer to refer to
    itself without knowledge of its own name or address, and this address can
    ALSO be configured as "bad data" in any DNS zone resource record.

    3) Sending code, including script, over the network is a very risky
    practice that should be discouraged. Even when digital signatures are used
    to authenticate the trustworthiness of such code, these signatures do not
    provide a guarantee of safety, and software bugs (or key theft) can render
    them useless. Automatic trust should never be granted to code that travels
    over the network, and better tools for making trust determinations should
    be placed in the hands of human users wherever possible.
    
    The second protection is software patches.

    1) Web browsers and other software vulnerable right now to the various
    "bad data" DNS threats need to be fixed or uninstalled.

    2) DNS resolvers should properly implement RFC 1918 and reject private
    network addresses received from any DNS server not located on the same
    private network.

    3) Firewalls need to be upgraded with the ability to properly implement
    RFC 1918 AND filter out 127.0.0.1 RR's (not mentioned explicitly in
    RFC 1918).

    4) ISP's need to update the resolvers they have deployed in the DNS
    servers they operate on behalf of customers so that they will never relay
    "bad data". A sketch of such a resolver-side filter appears after this
    list. Ideally, this update would also include item 5:

    5) DEPLOY DNSSEC! (See reference [5] below.)
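
    As an illustration of the resolver-side filter described in item 4),
    here is a minimal sketch using the dnspython library (our assumption,
    purely illustrative; the upstream address 192.0.2.53 is a documentation
    placeholder, not a real resolver):

        import ipaddress

        import dns.message
        import dns.query
        import dns.rdatatype

        def filtered_a_lookup(name, upstream="192.0.2.53"):
            # Forward an A query upstream, then drop any answer carrying an
            # RFC 1918 or loopback address before relaying the result to
            # clients, as items 2) through 4) recommend.
            query = dns.message.make_query(name, dns.rdatatype.A)
            response = dns.query.udp(query, upstream, timeout=5)
            addresses = []
            for rrset in response.answer:
                if rrset.rdtype != dns.rdatatype.A:
                    continue
                for rdata in rrset:
                    ip = ipaddress.ip_address(rdata.address)
                    if ip.is_private or ip.is_loopback:
                        continue  # "bad data": never relay it
                    addresses.append(rdata.address)
            return addresses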
    
    The third protection is enhanced security policy for Internet
    infrastructure operators.

    1) Enforce a new common-sense policy for DNS and DNSSEC (which is not
    currently designed to prevent "bad data" when that "bad data" is entered
    on purpose by an attacker who controls a DNS zone, whether legitimately or
    through key or credential theft): IP addresses must not be misappropriated
    without permission by anyone who registers a domain name. Instead,
    everyone who wishes to map a domain name to an IP address should be
    required to obtain permission first from the authority that has been
    assigned control of (and responsibility for) the IP address block that
    contains the IP address.

    2) Require, as a condition of participating in the DNS (or DNSSEC), that
    legitimate, verifiable contact information be registered with the
    appropriate authority (IANA, ARIN, RIPE, etc.) for the legal entity that
    has been assigned, and has accepted, responsibility (and possibly legal
    liability) for each address block.

    3) Exclude from participation in DNS (or DNSSEC) any IP address that
    belongs to an address block that is under the control of anonymous or
    unverifiable entities.
    
    Comments or suggestions on these issues are welcome. We seek to facilitate
    a properly secured Internet that takes into consideration the reality of
    software bugs and implements minimal security policy to provide
    common-sense safeguards for all users. With respect to DNS, and its
    eventual successor DNSSEC, Internet users can no longer tolerate a DNS
    that makes no attempt to restrict the spread of "bad data". Existing RFC's
    need to be implemented properly, new operational procedures need to be
    developed, and bad software that contains DNS-based security bugs needs to
    be patched promptly by all vendors.
    
    On a related subject, everyone involved in the process of computer security
    vulnerability discovery, disclosure, and software bug fixes should take a
    moment to familiarize themselves with the internet draft of the Responsible
    Vulnerability Disclosure Process, and in particular note the important role
    of a third-party "coordinator" in cases where any party involved in the
    process needs help communicating with any other party to ensure proper
    handling and comprehensive understanding of complex technical materials:
    
    http://www.ietf.org/internet-drafts/draft-christey-wysopal-vuln-disclosure-00.txt
    
    Most vulnerability disclosures occur today without comprehensive
    cross-vendor research facilitated by a coordinator. Our group of forensic
    experts makes its members available to function as Security Coordinators to
    any party who needs this type of technical assistance.
    
    Thank you for your time and attention to these important matters.
    
    Jason Coombs [jasoncat_private]
    FORENSICS.ORG Security Coordinator
    secalertat_private
    
    REFERENCES
    
    [1] http://online.securityfocus.com/archive/1/286290
    
    [2] http://online.securityfocus.com/archive/1/273101
    
    [3] http://www.ietf.org/rfc/rfc1918.txt
    
    [4] http://www.xwt.org/sop.txt
    
    [5] http://www.ietf.org/html.charters/dnsext-charter.html
    


