More Comments: Security Scanners.

From: Craig H. Rowland (crowlandat_private)
Date: Fri Feb 12 1999 - 23:35:57 PST


    As others have done, I am posting from my home domain as an individual and
    all opinions should be considered my own.
    
    I work for Cisco Systems, and worked for WheelGroup before that. I'm a lead
    developer on our auditing tool NetSonar, and my primary responsibility is
    coding active network attack tools.
    
    Please bear with me as I share some personal thoughts on this issue:
    
    
    1) Don't underestimate the frailty of your network.
    
    This subject has been touched on here, but I think a lot of people don't
    realize how frail many of the network resources in common use are. It has
    been my experience (and each new release of nmap continues to show the
    readership) that many well-known devices/OSes/daemons have a hard time
    handling a simple port scan, let alone a full-blown sequence of active
    attack probes. Never mind the fact that these probes are often
    intentionally ramming illegal data at the daemon in hopes that it coughs
    up useful information.
    
    So when people say "The scanner should do attack XYZ just like a hacker,"
    you really need to consider the implications when the tool is being used
    to scan a large number of hosts. An active probe is a reliable way to
    check for a problem, but you need to thoroughly test what the
    ramifications will be across a wide variety of hosts when you run it.
    
    I had a personal experience with this when I wrote an attack library for
    telnet. The library used a common set of default accounts. Not a big
    deal, but take a look for yourself and see if you can spot the issue:
    
    root
    uucp
    sys
    nuucp
    adm
    lp
    bin
    toor
    halt
    listen
    nobody
    smtp
    oracle
    etc.
    
    Everything is going great, you're hacking accounts like a fiend, and then
    suddenly the system you're probing stops responding. You can't figure out
    what happened until you see that the system has been halted remotely.
    Looking back, you see that in a list of 50+ default usernames I had
    accidentally included the "halt" username. On most hosts this account is
    inactive or non-existent, but on many (especially older) platforms it will
    run the /bin/halt command as soon as you log in! This was a silly error
    that can cause a significant problem on many networks during testing.
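
    In hindsight the fix is trivial: screen the account list against logins
    known to be destructive before the library ever touches a host. A minimal
    sketch in Python of that idea (mine, for illustration only; the
    destructive-account set is an assumption you would tune per platform):

    # Screen a default-account list before probing; the account names are
    # from the list above, and the probe itself is left as a stub.

    # Accounts whose login shells run destructive commands on some platforms.
    DESTRUCTIVE = {"halt", "shutdown", "sync", "reboot"}

    DEFAULT_ACCOUNTS = [
        "root", "uucp", "sys", "nuucp", "adm", "lp", "bin",
        "toor", "halt", "listen", "nobody", "smtp", "oracle",
    ]

    def safe_accounts(accounts):
        """Drop accounts that may halt or reboot the target when logged into."""
        return [a for a in accounts if a not in DESTRUCTIVE]

    for account in safe_accounts(DEFAULT_ACCOUNTS):
        pass  # attempt the telnet login here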
    
    This is just one example; what about the remote buffer overflow check that
    crashes the parent process? Now you've just shut down that service for all
    users. Sure, you've found the problem with 100% certainty, but now you may
    have dozens (hundreds??) of hosts across your network sitting in a state
    of limbo while you frantically try to restore order.
    
    2) Checking for banners is useful.
    
    Virtually all operating systems and many network daemons willingly give
    you a banner when you connect to them. It seems wasteful not to at least
    *consider* this information as part of your security audit scan. This
    information is being provided for a reason (I don't agree with the reason,
    but that's because I'm a paranoid security type), and while it can be
    altered by the user in some instances, I've generally found that to be
    exceptionally rare.
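
    Collecting a banner costs almost nothing. A quick sketch in Python (my
    own illustration, not product code) that connects and returns whatever
    greeting the daemon volunteers:

    import socket

    def grab_banner(host, port, timeout=5.0):
        """Connect to a TCP service and return any greeting it offers."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                return s.recv(1024).decode("ascii", errors="replace").strip()
        except OSError:  # refused, unreachable, or timed out
            return None

    # e.g. grab_banner("192.168.1.10", 21) might return something like
    # "220 host FTP server (Version wu-2.4.2) ready." (invented output)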
    
    3) Things aren't always what they seem.
    
    As any of the authors of these tools will attest, writing active probes
    is time-consuming and requires quite a bit of testing before they can be
    accepted into a release product. It's not simply a matter of downloading
    some scripts from rootshell.com, pasting them into a wrapper, and checking
    it into the source tree. There are regression test procedures, stability
    checks, OS impact tests, and of course tests of how reliably the probe
    detects the stated problem.
    
    Because a human is coding the attack, and they can't possibly know every
    configuration of every system or how systems in the field may or may not
    react, it is entirely possible that an active probe may in fact not catch
    what it is designed to. Many factors can affect this, such as:
    
    - Custom user configurations on a system.
    - The way a daemon was compiled.
    - A remote system error such as a full filesystem.
    - etc.
    
    In these cases an automated probe may miss a bug that a human
    investigating the problem by hand could work around and exploit
    successfully.
    
    A banner check used alongside an active probe can report to the user:
    "This system appears potentially vulnerable, but the active check can't
    confirm it." This is certainly a suspicious situation *I'd* want to be
    aware of.
    
    
    
    With all of that said I would like to add a few things that we do in
    NetSonar to differentiate how we detect problems in a host.
    
    Scanning Process.
    
    NetSonar has six phases of scanning:
    
    1) Ping sweep.
    2) TCP/UDP Port sweep/banner collection.
    3) Banner and port rules analysis.
    4) Active attack probes.
    5) Active attack probe rule analysis.
    6) Present data.
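
    In rough Python-flavored pseudocode, the flow looks something like this
    (a sketch of the general approach only, not NetSonar source; every
    function here is a stub):

    # Hypothetical skeleton of the six phases; each stub stands in for
    # the real implementation.

    def ping_sweep(network):       return []   # 1) find live hosts
    def port_sweep(hosts):         return {}   # 2) ports + banner collection
    def banner_rules(banners):     return []   # 3) rule analysis -> Vp list
    def active_probes(hosts, vps): return []   # 4) launch the real attacks
    def probe_rules(results):      return []   # 5) rule analysis -> Vc list

    def scan(network):
        hosts   = ping_sweep(network)
        banners = port_sweep(hosts)
        vp_list = banner_rules(banners)
        results = active_probes(hosts, vp_list)
        vc_list = probe_rules(results)
        return {"potential": vp_list, "confirmed": vc_list}  # 6) present data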
    
    This doesn't mean too much to the average user except for how we handle
    the classification of checks, which is as follows:
    
    Potential Vulnerability (Vp) - These vulnerabilities are derived from
    banner checks and active "nudges" against the target host. A potential
    vulnerability is a problem that the system indicates may exist, but that
    an active probe has not confirmed.
    
    Confirmed Vulnerability (Vc) - These vulnerabilities have been actively
    checked for and almost certainly exist on the target.
    
    
    How we get a Potential Vulnerability (Vp)
    
    Potential vulnerability checks are done primarily through the collection
    of banners and the use of non-intrusive "nudges." A nudge is used on
    daemons that don't respond with a banner but will respond to a simple
    command. An example of this might be an rpc dump, or sending a "HEAD"
    command to get a version from an HTTP server.
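
    A nudge can be as small as a single request. For example, this sketch (my
    illustration, not product code) sends a HEAD request to a web server and
    pulls out the Server: header it replies with:

    import socket

    def http_nudge(host, port=80, timeout=5.0):
        """Send a HEAD request and return the Server: header, if any."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
                reply = s.recv(4096).decode("ascii", errors="replace")
        except OSError:
            return None
        for line in reply.splitlines():
            if line.lower().startswith("server:"):
                return line.split(":", 1)[1].strip()
        return None

    # e.g. http_nudge("192.168.1.20") might return "Apache/1.3.4 (Unix)"
    # (invented output)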
    
    Once this information is obtained we use a rules engine and a set of rules
    to determine if a problem exists. The rules are based on simple logic
    statements, and the user can add rules as they see fit. Examples (some of
    these are off the top of my head, so excuse any errors; others are from
    the user.rules file included with the product):
    
    # Service report section: The system is running netstat
    port 15 using protocol tcp => Service:Info-Status:netstat
    
    # Service report section: report the rexd service running on target
    scanfor "100017" on port 111 => Service:Remote-Exec:rpc-rexd
    
    # OS Type report section: The system is Ueber Elite OS 1.x
    (scanfor "Ueber Elite OS 1.(\d)" on port 21,23) || (scanfor "UEOS
    [Ss]mail" on port 25) => OS:workstation:unix:ueber:elite:1
    
    # Vp Report section: The system has a known buggy FTPD version
    Service:Data-Transfer:ftp && scanfor "BuggyFTPD 0.01" on port 21 =>
    VUL:3:Access:FTP.BuggyFTPD-Overflow:Vp:888
    
    # User Custom Rule: Someone has a rogue IRC server running.
    has port 6667 => VUL:1:Policy-Violation:IRC-Server-Active:Vp:100002
    
    # User Custom Rule: SuperApp 1.0 running on host.
    (scanfor "SuperApp 1.0" on port 1234) || (scanfor "SuperApp 1.0 Ready"
    on port 1235) => VUL:1:Old-Software:Super-App-Ancient:Vp:100003
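
    Whatever the exact syntax, a rules engine like this boils down to pattern
    matches over the collected banners. A toy version in Python (my
    illustration of the idea, not the product's engine), reusing the
    BuggyFTPD and IRC rules above with invented banner data:

    import re

    RULES = [
        # (port, pattern to scanfor, finding reported when it matches)
        (21,   r"BuggyFTPD 0\.01", "VUL:3:Access:FTP.BuggyFTPD-Overflow:Vp:888"),
        (6667, r".*",              "VUL:1:Policy-Violation:IRC-Server-Active:Vp:100002"),
    ]

    def apply_rules(banners):
        """banners maps each open port to the banner string collected."""
        findings = []
        for port, pattern, finding in RULES:
            banner = banners.get(port)
            if banner is not None and re.search(pattern, banner):
                findings.append(finding)
        return findings

    print(apply_rules({21: "220 BuggyFTPD 0.01 ready", 23: "login:"}))
    # -> ['VUL:3:Access:FTP.BuggyFTPD-Overflow:Vp:888']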
    
    
    This of course becomes very useful to an admin who reads about the latest
    vulnerability on BugTraq and decides to write up a quick rule and scan
    their network. It is also a useful feature for sites that run custom
    applications and wish to track versioning/usage across a network, or for
    when you find out that the intruder you bought this software to help get
    rid of likes to put a backdoor on all compromised machines at port 447,
    etc.
    
    
    Why this matters.
    
    The above process is important because it is largely non-intrusive and
    non-destructive to most systems. Because of this you can safely run this
    type of scan and know you *probably* are not going to impact your network.
    Indeed, this type of scan is perfect for scheduled unattended use, because
    you know you won't crash critical systems at 3:00 AM when your scan
    starts.
    
    Why it might not work.
    
    Overall, the majority of system daemons that return banner information
    are in fact running the version of the software they claim to be. It is
    possible that an admin has enough time to go around changing version
    numbers on everything; then again, that time would probably be better
    spent applying the relevant patches to the problem they are trying to
    cover up.
    
    In the end, though, many vulnerabilities are simply not going to reveal
    themselves unless you poke at them a little. For these cases the confirmed
    vulnerability checks take over, and actual attacks are launched against
    the suspect process. In some cases this may produce overlap of a Vp and a
    Vc, but this is an acceptable situation because now you have informed the
    administrator that:
    
    a) This system *may* have this problem.
    b) This system definitely has this problem.
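
    Reconciling the two is straightforward: a confirmed finding supersedes the
    corresponding potential one, and any unconfirmed Vp is still reported as a
    "may have this problem." A sketch in Python (my illustration, keyed on the
    trailing ID field of the VUL tags shown earlier):

    def merge_findings(potential, confirmed):
        """A Vc supersedes the Vp for the same check ID; leftover Vp
        entries are still reported as suspected problems."""
        confirmed_ids = {c.rsplit(":", 2)[-1] for c in confirmed}
        suspected = [p for p in potential
                     if p.rsplit(":", 2)[-1] not in confirmed_ids]
        return suspected, confirmed

    vp = ["VUL:3:Access:FTP.BuggyFTPD-Overflow:Vp:888"]
    vc = ["VUL:3:Access:FTP.BuggyFTPD-Overflow:Vc:888"]
    print(merge_findings(vp, vc))   # the Vp drops out; the Vc is reported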
    
    I believe this is a nice approach to the issue because you get the best
    that each mechanism has to offer: banner checks report the facts as
    offered by the system upon initial inspection, and active checks disregard
    banner information and check for themselves directly.
    
    In either case the admin is left with several options:
    
    1) Run the scan with potential vulnerability checks only. This gives good
    coverage of the systems on your network. This method is by far the least
    disruptive option and also the fastest.
    
    2) Run the scan with both potential vulnerability and confirmed
    vulnerability checks. Here you get the coverage of the rules plus active
    probing of suspected problems. This method may cause system disruption
    during the attacks, but no deliberate DoS attacks are run without letting
    the user know.
    
    3) Run the scan with your custom rules to find a specific problem.
    
    4) All of the above.
    
    
    This is just another approach to the problem that a network vulnerability
    scanner faces. The variables of a modern network are anything but simple,
    and it should be remembered that a scanner is just one piece of good
    network security practice. All the products on the market attempt to
    produce the most accurate and complete results possible. Just like virus
    scanning utilities, however, there is a margin of error in whatever method
    is chosen.
    
    -- Craig
    
    Speaking for myself.
    


