First of all, we'd like to thank everybody who expressed an interest in the challenge and those who contributed. We'd also like to acknowledge those who expressed concerns about how this would affect the future of this list. For those who voiced concerns about the extra traffic: we had no idea what sort of response to expect when we put this together, so we were fairly lenient in approving posts so as not to discourage anybody from participating. We decided it was best to handle it this way so that we could establish guidelines appropriate to the types of posts actually being submitted, rather than moderating based on preconceived expectations. Now that we know what type of response to expect, these will be the basic guidelines for how we conduct the challenges in the future:

a) Since the challenges are really for the list's benefit, solutions and other discussion should be posted directly to the list. If we were to assess people's solutions and observations in private, it would detract from everybody's learning experience. Those who want to should feel free to drop hints instead of providing outright spoilers. We also encourage in-depth analysis of the program or of any proof-of-concept code that arises from the challenge.

b) A number of posts discussed the robustness or correctness of the example program. While these issues are relevant when auditing software, we're more concerned with the security aspects of any bugs in the program. We didn't design the initial program to be syntactically perfect or to demonstrate good programming habits, since it is intentionally buggy code. It is understandable that people responded with these types of observations, since we didn't say much about the example program or the exact objectives of the challenge. In the future, though, we'll be rejecting most of these posts unless they mention the possible security implications of the errors.
In this way it will be no different from how a possible real-world vulnerability is reported: we want to keep the focus on the security problems themselves.

c) The eventual goal of each challenge will be to generate a proof of concept demonstrating that the problem is exploitable (if indeed it is). Each challenge will take a theoretical problem and either prove or disprove that it could be a vulnerability under certain circumstances (e.g., if the program were setuid). Not all problems will be exploitable across all architectures and platforms, but we'd rather people experimented and figured this out on their own than have it spoiled for everyone. This could also generate some interesting discussion about why something would be a vulnerability in one environment and not in another.

d) Many posts independently confirmed someone else's observations. Much of the normal vuln-dev traffic is also of this nature. We will apply the same guidelines to challenge posts as we do to other vuln-dev traffic of this type, which is normally to choose the post with the most details over the others we have received at the time of moderation. This is done to avoid repetition.

I would also like to say that we were impressed with the overwhelmingly positive response to this idea. One of the main reasons for doing this is to help people learn more of the theory behind security vulnerabilities, which can be applied to real-world issues. We think that a good learning resource will benefit the list in other ways as well, since more people who use vuln-dev will have a stronger theoretical basis when discussing real-world vulnerabilities. Any questions or comments about these guidelines can be directed to me or to Aaron Adams <aadamsat_private>. These guidelines may be subject to additions or changes down the road, depending on how other challenges play out and the consensus of the list.
As a side note, we'll also be posting a summary of each challenge upon its conclusion.

Dave McKinney
Symantec
keyID: BF919DD7
key fingerprint = 494D 6B7D 4611 7A7A 5DBB 3B29 4D89 3A70 BF91 9DD7
This archive was generated by hypermail 2b30 : Fri May 16 2003 - 03:47:25 PDT