Neil wrote about Edward Salm's EICAR.COM comments:

> The reason I mentioned the other eicar.com is I noticed NAV on my
> test machine wouldn't detect your version of eicar.com unless
> bloodhound was activated! When I turned bloodhound heuristics off
> (even though autoprotect was still running), I could put your
> eicar.com anywhere on the drive!
>
> Doesn't that give you a good pointer. Bloodhound seems pretty
> important. It's also possible that bloodhound ignores the default
> exclusions. I have a contact at SARC whom I'll ask about this and
> let you all know the response. Oh, and in case you're wondering,
> there was only a difference of one byte between our copies of
> EICAR.COM. Mine terminated in an <LF>, Ed's in a <CR><LF>.

As Elias has already commented, the lowdown on the EICAR test string is
at http://www.eicar.org/, although it is not as immediately obvious
there as it once was...

FWIW, here is an extract from the original article/white paper/whatever
that explained the raison d'etre for the existence of EICAR.COM:

   Any anti-virus product which supports the EICAR test file should
   "detect" it in any file which starts with the following 68
   characters:

   X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

[Source: \EICAR\TESTING\testfile.htm from EICAR'99 conference CD]

Note, all you who got a virus warning from your Email scanners that
this message is "infected" with the EICAR test string/virus/whatever:
according to the people who made it and its definition, your product
has just false alarmed. (In fact, it may suggest that your scanner is
doing a lot of brain-dead grunt scanning, which means it will be slow
and poor at detecting viruses in general, but that is a separate
discussion we won't start here.)

Note the spec is very clear -- the string above *is* the EICAR test
string. It does not matter what follows it.
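In other words, per the spec, detecting the EICAR test file is a plain prefix match on the first 68 bytes. A minimal sketch of that rule (the function name and all code here are mine for illustration, not taken from any product):

```python
# The 68-byte EICAR test string, exactly as quoted above.
# (Raw bytes literal so the backslash is taken literally.)
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def is_eicar(data: bytes) -> bool:
    """True if the buffer begins with the 68-byte EICAR test string.

    Per the EICAR definition, it does not matter what follows the
    68 characters -- only the prefix counts.
    """
    return data.startswith(EICAR)

# All of these qualify under the definition quoted above:
assert is_eicar(EICAR)                   # the bare 68 bytes
assert is_eicar(EICAR + b"\r\n")         # plus CR LF
assert is_eicar(EICAR + b"\n")           # plus LF
assert is_eicar(EICAR + b"anything")     # plus an arbitrary trailer
```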
Thus, files consisting of just those 68 bytes, of them plus CR LF, of
them plus LF, of them plus CR, or of them plus <any arbitrary string of
characters> should *all* be detected as the EICAR test string (the last
example may be problematic in some cases, as some scanners may complain
about invalid COM format files, based on the size and the fact that the
file is not internally an EXE, if too large a string is appended to the
EICAR string).

> Here's an idea. The statement by McAfee that they can't go looking
> for XORed files because it's not feasible got me thinking. It seems
> to me that it's not feasible because it would take too long. People

Well, it is unfeasible *eventually*. Note that very similar issues are
faced by scanner developers now with run-time decompressors. How these
are handled by (some) extant scanners is that commonly used RTDs are
detected from the constant-ish stub and/or header structure the
compressor attaches to the compressed EXE. If such an EXE is found, it
is decompressed and then scanned. If a common run-time EXE encryptor
evolved (why would it, other than as a lame proof of concept that
"known virus scanners" can only detect known forms of malware?) then a
similar approach would be adopted -- detect the run-time decryption
code, decrypt, then scan...

> would be annoyed at 2 second waits for their files to open and
> whatnot. Now, I'm no AV expert and some may even work like this, but
> here's what I came up with. An AV checker could do a real hard look
> at a file, doing whatever it needed to be really thorough with the
> file (I understand that breaking XOR programmatically is pretty
> straightforward). It would then store an MD5 hash for that file
> in an index. Whenever it needed to scan a file, it would just
> compare hashes (which is quick), and only re-scan the files if they
> had been changed.

In fact, as you suspect, some scanners do implement schemes along
these lines.
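The hash-index idea above can be sketched in a few lines. This is a rough illustration of the scheme as described, not the design of any real scanner; the class and method names are mine:

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 digest of a file's contents, used as a cheap change detector."""
    return hashlib.md5(data).hexdigest()

class ScanCache:
    """Index of files already given a thorough scan, keyed by path."""

    def __init__(self):
        self._seen = {}  # path -> MD5 of contents at last thorough scan

    def needs_scan(self, path: str, data: bytes) -> bool:
        """True if the file is new to us or its contents have changed."""
        return self._seen.get(path) != md5_of(data)

    def record(self, path: str, data: bytes) -> None:
        """Call after a thorough scan of the file (e.g. during idle time)."""
        self._seen[path] = md5_of(data)

cache = ScanCache()
body = b"MZ...some program..."
assert cache.needs_scan("C:/APP.EXE", body)         # first sight: slow scan
cache.record("C:/APP.EXE", body)
assert not cache.needs_scan("C:/APP.EXE", body)     # unchanged: quick skip
assert cache.needs_scan("C:/APP.EXE", body + b"!")  # modified: re-scan
```

Note this only avoids *re-scanning* unchanged files; as the next exchange points out, deciding which changes matter (data edits vs. embedded code) is the hard part.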
> Special handling would probably be needed for data files as they get
> changed all the time, but overall it seems reasonable to me.

Where it gets harder is with "data" files that *can* also harbour code
-- embedded macros in MS Office, MS Project, Visio, AutoCAD, etc.
files, and embedded scripts in HTML, HLP, I presume CHM, and so on.

> I'd also think that AV scanners could do more advanced scans in the
> background with CPU idle cycles. There are a LOT of spare cycles on
> the average desktop.

I have been advocating for some time that the scanner developers
should move to a "code integrity management" model, or at least
provide that as an option for their corporate customers. They already
have tested technology to intercept very low-level file system
processes to inspect file accesses for change vs. straight reading,
complex file format decomposition (to check for and identify macros in
Word documents, etc), and most of the other building blocks for doing
what is needed. This should be layered with an access control system
allowing the sys-admin to specify who can run what *code*. Because of
the failure of OS, browser and productivity app developers to keep
data and code separate, this is *not* the same thing as file-level
permissions, and it is the necessary improvement over what MS seems to
think is the panacea -- code signing.

Such tools could actually *prevent* infection by (virtually) all
future (i.e. currently unknown) viruses. (The weakness is the arrival
of new compound "data and code together" file formats that typically
require intensive reverse engineering before their code resources can
be reliably extracted and identified, but that is an equal shortcoming
in today's "known virus scanning" technology.
We could partly get around that as a professional group by simply
refusing en masse to use products whose developers do not provide
adequate and timely file format details to our security product
developers -- if no-one is using a product there cannot be a compelling
business reason to do so, so ill-informed managerial pressure to adopt
the product would be weakened. Cue the open source chorus...)

--
Nick FitzGerald
Computer Virus Consulting Ltd. (NZ)
Ph/FAX: +64 3 3529854
This archive was generated by hypermail 2b30 : Fri Apr 13 2001 - 15:33:28 PDT