> A proposal:
>
> Since a lot of the discussion on this thread (including my own
> contributions) has focused on semantic issues such as defining "secure
> code", why not take a stab at a working definition for secure code so we
> can get down to brass tacks?

I agree. I think this discussion has lost sight of some important key
issues. Some people skimmed them, but I don't think they were nailed
down. For instance, take this pseudo-discussion:

A: In order for me to deem your project secure, your software must be
   stable. Is your software 100% stable?
B: Yes.
A: So if I take a hammer to your hard drive, your software will continue
   to run?
B: No... but physical security is not the responsibility of software
   developers!
A: Sorry... your software is not secure! There is a flaw!

In order to define security, you have to define the environment it will
be judged in. The exit(0) call is only flawed on certain systems with
certain libraries. If I have a system with flawed FTP libraries but it is
NOT networked to any other computer, is it insecure?

In addition, security from the business point of view is about risk
management and, if you are lucky, risk elimination in certain cases.
Look at car development. An air bag and ABS brakes make your car more
secure, but only in certain situations. As long as you are on the road,
there will be risk. The goal in car development is to make the car as
safe as possible while keeping it usable.

You can define a program to be "secure" in a specific environment under
specific conditions. It is the responsibility of everyone (not just
developers) to ensure that the software remains secure: guards prevent
the hard-drive attack, sysadmins prevent the network attacks, and code
developers must ensure that inputs are checked, and so on.
*It is the responsibility of all the users of the software to notify
everyone involved when the environment changes, so that each part of the
team adjusts to ensure the software's security.* If un-networked software
suddenly becomes networked, then the environment has changed, and the
software is not secure until it is audited for the new environment. It is
common for developers to say, "Well, I never thought someone would use it
for that!"

Having said that, you can get into a more specific discussion of the
security attacks that software programmers are responsible for and how
they can prevent them. If a library the programmer uses is flawed, is it
his software's flaw or the library's flaw? Is the programmer responsible
for all the libraries his code includes? These types of specific
definitions determine whether software is "secure". Security is a
relative measurement.

-Peleus
This archive was generated by hypermail 2b30 : Thu Jan 02 2003 - 18:45:41 PST