> From: "mshines" <mshinesat_private>
> To: <secprogat_private>
> Subject: FW: Repost
> Date: Mon, 14 May 2001 13:00:10 -0500
> Message-ID: <008301c0dc9f$b8302e50$5565d280at_private>
>
> We don't practice what we preach? :)
>
> New software = failed software...
>
> Mike Hines
> mshinesat_private
>
> RELIABILITY would be one criteria for secure programming... it would seem
> to me.

Well, your statement appears to be mostly tongue-in-cheek, but there's
actually a valid issue hiding here. I don't agree with your statement.
Security requirements vary, depending on the use of the system, and
might include:

* confidentiality/secrecy ("an attacker can't read my stuff"),
* integrity ("an attacker can't change my stuff"), and/or
* availability/defense from denial-of-service ("an attacker can't keep
  me from using my stuff").

Not all users require all of these to the same degree. Indeed, one can
trade between them. For example, a system that tries to detect
"suspicious" situations, and prevents you from reading your data in
such cases, may be very unreliable... but have great confidentiality.
Whether or not that's okay depends on how the system is being used, but
in some contexts this would be a _more_ secure system.

We've had a "reliability vs. security" definition debate before. I
guess what I'd contribute is that software developers need to examine
the intended environment & determine the best trade-off BEFORE the
software is developed. If there's no single answer for the intended
market/user base, the code needs to be configurable so that the
user/administrator can select the best response to the circumstance.
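To make the "configurable trade-off" point concrete, here is a minimal,
hypothetical sketch (the names `Policy` and `read_data` are illustrative,
not from any real system) of letting an administrator choose between
favoring availability (fail-open) and favoring confidentiality
(fail-closed) when a "suspicious" situation is detected:

```python
from enum import Enum
from typing import Optional

class Policy(Enum):
    # Favor availability: serve the data even when suspicion is raised.
    FAIL_OPEN = "fail-open"
    # Favor confidentiality: deny access, even to the legitimate user.
    FAIL_CLOSED = "fail-closed"

def read_data(data: str, suspicious: bool, policy: Policy) -> Optional[str]:
    """Return the data, or None if the configured policy denies access."""
    if suspicious and policy is Policy.FAIL_CLOSED:
        return None  # the "very unreliable but very confidential" behavior
    return data

# The same suspicious event yields different outcomes under each policy:
assert read_data("secret", suspicious=True, policy=Policy.FAIL_OPEN) == "secret"
assert read_data("secret", suspicious=True, policy=Policy.FAIL_CLOSED) is None
```

Which default is "more secure" depends entirely on the deployment
context, which is exactly why the choice belongs in configuration
rather than hard-coded into the software.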
This archive was generated by hypermail 2b30 : Sun May 20 2001 - 23:22:31 PDT