How scanners actually work

From: David LeBlanc (dleblancat_private)
Date: Wed Feb 10 1999 - 07:21:38 PST

    Instead of replying to every post, I'm going to try to summarize various
    points.  First of all, I am an ISS employee, but I post to mailing lists
    from my home account (hence the mindspring address), and on my own time.
    Nothing I post here should be construed as an official ISS statement.  I
    did most of the work porting the scanner from UNIX to NT, so I'm very
    familiar with all the issues raised and with how most of our
    vulnerability-checking code works.  Some of these points also apply to
    every other piece of scanning software available, and so are general
    issues with using this type of software.
    
    1) Does the scanner report hosts as 'clean'?
    
    No.  It reports the hosts it finds to be vulnerable.  This may sound
    like semantics, but it is an important distinction to keep in mind when
    interpreting the results of a scan.  If a scanner does not find a
    problem, that does not mean there isn't one - it just means the scanner
    didn't find it.  Think of it along the same lines as finding software
    bugs - you can never prove that a complex piece of software is bug-free,
    but you can prove it has bugs by finding them.
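
    As a minimal sketch of that reporting model (all names here are
    hypothetical, not ISS code): each check returns True only on positive
    evidence, and only positives ever reach the report.

        def scan_report(hosts, checks):
            # Only positive evidence produces a line of output.  A check
            # returning False means "no finding", not "no vulnerability".
            for host in hosts:
                for check in checks:
                    if check(host):
                        print(f"{host}: {check.__name__} found")

        def example_check(host):
            return False        # inconclusive probe: stays silent

        scan_report(["10.0.0.5"], [example_check])   # prints nothing -
                                                     # which is not "clean"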
    
    2) How could a vulnerability escape detection?
    
    This could be due to something as simple as tcpwrappers - the telnet
    daemon might be vulnerable to the environment-linker bug, but it simply
    won't take connections from the IP address we're scanning from.  You can
    run into much more complex examples - rsh spoofing is pretty
    complicated, and depends on a number of preconditions, such as figuring
    out who a trusted host might be.  Except for really trivial cases (like
    the OOB crash), vulnerability checks are prone to all sorts of errors.
    The huge number of different implementations of the same service gives
    us all sorts of fits, since sometimes you just can't easily tell what's
    going on.  I've seen NT systems _appear_ vulnerable when they weren't -
    and over the wire you can't tell the difference between those and a
    system that actually is vulnerable.
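
    To make the ambiguity concrete, here is a hedged sketch (the banner
    string is made up, and this is not an actual ISS check) of a
    banner-grab test, showing how many of its outcomes are inconclusive
    rather than a clean yes or no:

        import socket

        def telnet_banner_check(host, port=23, timeout=5.0):
            try:
                with socket.create_connection((host, port),
                                              timeout=timeout) as s:
                    s.settimeout(timeout)
                    banner = s.recv(1024)
            except OSError:
                # Refused, filtered, or host down - we can't tell which,
                # so there is nothing to report.
                return "not found"
            if not banner:
                # Accepted and then closed at once - classic tcpwrappers
                # behavior.  The daemon behind it may still be vulnerable.
                return "not found"
            if b"HYPOTHETICAL-VULNERABLE-VERSION" in banner:
                # Positive evidence, though vendors sometimes backport
                # fixes without changing the banner, so even this can
                # be a false positive.
                return "vulnerable"
            return "not found"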
    
    3) How do you decide what to use to find a given bug?
    
    I consider speed, accuracy, non-disruptive behavior, and ease of
    implementation all desirable.  If I can tell whether something is
    vulnerable to a denial-of-service attack _without_ crashing it, I'll do
    that.  Given time, I also prefer to use more than one mechanism to find
    a bug, and we do that in some cases - though this can get complex, since
    a lot of dependencies arise: for example, we'd like one piece of code
    that parses sysDescr strings, and separate code that actually checks for
    each problem.  This gets really complicated once you start checking for
    very many problems.  We also have a limited amount of time to add
    functionality to the product, and if we were to do everything that _I_
    would like to do, we'd need about 3x more programmers working on it.  So
    if I can find something accurately at least some of the time, and I can
    get it implemented quickly, then I'll go ahead and ship it - and try to
    come back and enhance it later.
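
    A hedged sketch of that non-disruptive preference (the version regex
    and the 4.2 cutoff are invented for illustration): infer a
    denial-of-service bug from an SNMP sysDescr string instead of firing
    the crash trigger at a production box.

        import re

        def parse_sysdescr_version(sysdescr):
            # Shared parser: one piece of code pulls the version out of a
            # sysDescr string, so each per-vuln check only has to compare.
            m = re.search(r"Version (\d+)\.(\d+)", sysdescr)
            return (int(m.group(1)), int(m.group(2))) if m else None

        def check_dos_bug(sysdescr):
            version = parse_sysdescr_version(sysdescr)
            if version is None:
                # No usable version info: stay silent rather than crash
                # the target just to find out.
                return "not found"
            return "vulnerable" if version < (4, 2) else "not found"

        print(check_dos_bug("Acme Router, Version 3.9"))   # vulnerable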
    
    4) What constitutes 'checking' for something?
    
    IMNSHO, if you have something that can find a problem at least some of
    the time, you _are_ checking for it.  For example, suppose you ask me
    whether a friend is home.  I check by calling them; they don't answer
    the phone, so I tell you they are not home.  You then go knock on the
    door, and they answer.  Don't call me a liar just because what I tried
    didn't work.  I might be wrong, or need to use other methods, but I did
    try to do what I said I'd do.  If there is more than one method, or a
    better method, and it will improve either your accuracy or your ability
    to find a particular problem, then it ought to be implemented - assuming
    you have time to get it into the product.
    
    5) Wouldn't it be better to report a test as either positive, negative, or
    not run?
    
    Sure.  That would be great.  I'd love it if the scanner could do that.
    HOWEVER, this would take a LOT of time, and add a lot of complexity.
    You should _see_ what the decision tree for some of these checks looks
    like - especially the NetBIOS checks.  To do this right, I'd have to
    have all the toggles mapped to all the vulns (this may sound easy, but
    is really non-trivial), and then maintain state.  There may be ways
    around this, but it would still mean a LOT of work.  It is something
    we've talked about having for at least the last two years, but haven't
    found time to add.  Until then, just remember that neither the GUI nor
    a report lets you distinguish a negative result from a check that was
    never run.  As I pointed out, there is a lot of great info in the logs,
    so if you're technical enough and interested in a particular bug, read
    them.  The problem with putting a lot of very verbose output in front
    of regular users is that I don't think our average user is quite as
    technical as the average BUGTRAQ poster, and we already put out a HUGE
    amount of info - I've seen scan results where we found 100,000+ vulns.
    That's an awful lot to wade through.
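
    For what it's worth, that tri-state reporting might look something like
    this minimal sketch (hypothetical, not a description of the product):

        from enum import Enum

        class CheckResult(Enum):
            POSITIVE = "positive"   # found the problem
            NEGATIVE = "negative"   # check ran to completion, found nothing
            NOT_RUN  = "not run"    # toggled off, or a prerequisite failed

        def summarize(results):
            # With three states, "we looked and it isn't there" is finally
            # distinguishable from "we never looked".
            for host, check, result in results:
                print(f"{host:15} {check:20} {result.value}")

        summarize([("10.0.0.5", "netbios_null_sess", CheckResult.NOT_RUN)])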
    
    6) The scanner should say just how it is checking for something, so that
    the user can have a better idea of what we did.
    
    This is true.  It is probably the most important thing we need to learn
    from this whole episode, and something _I_ think ought to be in the
    documentation.  However, it will be a lot of work, and I personally
    don't control when it will get done.  Just for perspective, the
    documentation has already been an awful lot of work - a more recent
    project has been getting the fix information up to date and verbose
    enough for ordinary admins.
    
    As a final point, I come from a scientific background, and one of the
    things that got hammered into my head over and over was that I always need
    to understand the limitations and behavior of my tools and methods.  All
    tools have limitations, and a scanner is a very, very complex tool.
    
    I'd imagine Aleph and the rest of the list are getting tired of this
    thread, so please direct replies just to me unless you've really got
    something new to add.
    
    
    David LeBlanc
    dleblancat_private
    