Jackie's message struck a chord with me, since these are issues I've dealt with many times. Toward the end he was talking about specifications, but I think you need to take a step further back and deal with requirements. Over the years I've become much more proactive (pushy?) about getting development involved with eliciting and refining requirements. Here are some random thoughts on the theme.

* I've never made a visit to a customer site without learning something new about their goals and requirements. This was true as a developer, and even more true when I was eliciting requirements for new projects. Of course, if you are always visiting customers you will never get any work done :-)

* The best projects have information flowing both ways. Usually the customer does not know precisely what they want, and the marketing folks don't either. Development, at all levels, has a responsibility to find out the true requirements and make sure the implementation matches them. This is especially critical in security designs, because of the weakest-link principle.

* I can't emphasize enough the importance of not mixing the requirements with the proposed implementations. In rare cases I've found a customer who says "I want X, Y, and Z" where it turns out that it really is best for them. More often I get customers who say something like "I need a PKI" because that is the latest silver bullet.

* I'm sure a lot of my attitude comes from the fact that security in general is still not well understood. I've always had to "pioneer" these concepts (explain them, relate them). Of course my specialty is obscure even within the security industry (logical security for trusted hardware).

* I fully agree on the importance of peer review for security products. Over the years I've seen (and perhaps even made a few) boneheaded mistakes. I've also seen some very subtle problems discovered by reviews.

* It is difficult, yet almost mandatory, to preserve organizational knowledge. The single biggest problem with outsourcing is the loss of knowledge that inevitably comes about. One of my former security companies had a number of "general principles" used to review designs. They even printed them up on a card in the 70's and distributed them to customers as part of a pioneering education effort.

* One of the last times I made a subtle design error, it was caught by peer review when someone noticed I had violated a general principle. I thought I had compensated for my violation, but further analysis revealed attacks I had not thought about. These types of rules are one way of preserving and using hard-won knowledge.

Regards,

Michael McKay
mmckayat_private

-----Original Message-----
From: Jackie Chan [mailto:blue0neat_private]
Sent: Monday, May 21, 2001 4:48 AM
To: David Wheeler
Cc: secprogat_private
Subject: Re: Security != Reliability - need flexible responses.

David Wheeler used his freedom of speech to express:

> We've had a "reliability vs. security" definition debate before.
> I guess what I'd contribute is that software developers need to
> examine the intended environment & determine the best trade-off BEFORE
> the software is developed. If there's no single answer for the intended
> market/user base, the code needs to be configurable so that
> the user/administrator can select the best response to the circumstance.

You have just pointed out a truly problematic trend, not only in security software companies, but in all software companies.
You state that "software developers need to examine the intended environment"; however, the reality is that software developers get their marching orders, for the most part, from marketing and the sales force. These two organizations traditionally communicate to development what will sell and what needs to be there. Obviously the CTO weighs in on this, but he/she too has to keep the quicker buck in mind and balance that against the technological requirements. But most times, if a customer comes in and says they want XYZ and they will pay $ABC million for it... profit wins out.

I have seen that most developers of security products have never, and will never, use their software in the "real world"; therefore they rely on those who write the MRDs (Marketing Requirement Documents) to truly understand the customer's needs. Yet most marketers or sales folks have never been in an operational security position either. This makes for a slow evolutionary cycle of commercial software, with still a decent life-span for those products that will fall to natural selection.

The natural, albeit indirect, predator of the company that can't see beyond its next customer requirement is the open source community, as well as what may be termed the purists and practitioners in our midst. When these two forces represent themselves in the marketplace, they act indirectly to impact companies that "don't get it". Those companies that do "get it" will beg, borrow, and steal from the free information that has gained acceptance with early adopters.

So, as for an answer to the problem of kludgey security software, the only way for a software company, security or otherwise, to clean house while at the same time producing robust tools is to remember the first phase of the software design cycle: "Specification". In this phase, you are supposed to not only write your specification with your customer, but also share it with your colleagues to see where the flaws might be. That means colleagues who understand the problem being solved. Our developers need more insight into how the software is used, and why, as well as an experiential understanding of what it's like to be the end-user.

-blue0ne