When I asserted that:

>>>... But if you really want secure code, the MOST important thing is to get
>>>developers trained in how to write secure programs. The basic problem isn't
>>>that we need better books or guidance. The problem is that developers don't
>>>grok _ANY_ of the books. In short, you only need one meta-practice: if you're
>>>a developer, you MUST sit down and learn how to write secure code. Period.

Peter Gutmann replied:

>>Yup, that's the single "methodology" which works for writing secure code: Get
>>it written by a skilled programmer who has the self-discipline to very
>>carefully check every part of their work to make sure there are no problems.
...
>>The whole
>>point of the CC and everything like it is to (try to) emulate the
>>functionality of the skilled security programmer using unskilled labour. It
>>works about as well as handing a random kid a sheet of music and a fingering
>>guide and expecting to hear Yehudi Menuhin. You can't fake this, you need
>>actual *talent* to make it happen (although you can produce a lot of paperwork
>>claiming it should be OK if that's all you're after).
>>
>>There are some downsides to this approach. Marv Schaefer (I think... well it
>>sounds like the sort of thing he would have said) once observed that "To get a
>>truly secure system, you must ensure that it's designed and built by geniuses.
>>Unfortunately, geniuses are in short supply". Still, I'm much more confident
>>that something like Postfix or Qmail is secure than "Unbreakable Oracle", no
>>matter how many security certificates and full-page ads Oracle have for it.

I agree that trying to use "unskilled labor" is hopeless. But I think there's
a middle ground that applies to most products: train the developers so that
they're no longer unskilled. Developers should be trained until they're at
least basically skilled, and until they know when they need the experts.

Most of the vulnerabilities currently being found in software aren't from some
exotic attack or approach. They're well-known attacks, based on common
mistakes made by developers, and there are already-known approaches that
completely counter them. The problem is that the DEVELOPERS don't know about
them. If developers avoided these mistakes, we'd eliminate the "easy" attacks,
making their software much harder to attack. That wouldn't make the software
"invincible", but it would raise the bar significantly.

Not everyone is willing to pay to develop software that can, on its own,
defend against a sustained attack by a foreign government. But it's reasonable
to expect developers to produce software that resists unskilled 13-year-olds
with a little spare time. I think that's the goal we should set: _more_ secure
software, not perfectly secure software. Although geniuses are in short
supply, we have a greater supply of people who can follow guidance... once
they know that guidance. Once you do this, other evaluation techniques become
more effective too.

We'll still need geniuses. But we shouldn't need geniuses to write a basic
word processor. At the least, a developer should know "if I introduce a macro
language, I create a major potential problem, and I need to call in the
geniuses if I do." I strongly emphasize "don't create your own encryption
algorithm", for example - yet we still have developers who don't know that
it's a specialized area.

--- David A. Wheeler dwheelerat_private
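
As a concrete illustration of the "well-known mistake, already-known counter"
point above, here is a minimal sketch in C; the function names are
illustrative only and not from the original message:

    #include <stdio.h>
    #include <string.h>

    /* The common mistake: copy attacker-controlled data into a
     * fixed-size buffer with no length check. */
    void greet_bad(const char *name) {
        char buf[32];
        strcpy(buf, name);              /* overflows buf if name is >= 32 bytes */
        printf("Hello, %s\n", buf);
    }

    /* The already-known counter: bound every copy to the buffer size. */
    void greet_ok(const char *name) {
        char buf[32];
        snprintf(buf, sizeof buf, "%s", name);  /* truncates; always NUL-terminated */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_ok("world");
        return 0;
    }

The counter requires no genius, only that the developer knows the bounded-copy
idiom exists and has the discipline to use it every time.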