Valdis.Kletnieksat_private wrote:

> > And one more thing... <this one might be interesting ;-)> Is it possible
> > to write code that is completely secure and not exploitable?
>
> This is just a specific case of the question "Is it possible to write
> totally bug-free code"?

Maybe, maybe not; it depends upon how you define such terms.

I can think of reasonable definitions of "bug" and "vulnerability" such that the former isn't a superset of the latter. To provide a (rather crude) example:

Bug: the program doesn't do something which it is meant to do.

Vulnerability: the program does something which it isn't meant to do.

However, a program typically has only finitely many requirements but infinitely many non-requirements.

Is it a vulnerability if a program fails to wipe data from memory, or allows it to be written to swap, or slack space, or "unused" parts of files? Or if a network client can deduce which parts of certain files are memory-resident by sending carefully chosen requests and analysing the timing of the server's responses? For a program to be "secure", exactly *what* must it *not* do?

From a reliability (no bugs) perspective, one can specify that e.g. input timings don't affect the output data and are therefore irrelevant, and it's pretty straightforward to make it so. From a security (no vulnerabilities) perspective, ensuring that the input data don't affect the output timings is (at a guess) somewhere between insanely difficult and downright impossible; and simply deeming them to be irrelevant doesn't help at all.

So, while it might be possible to produce a "completely reliable" program (modulo the requirement that all other components behave exactly as specified; IOW, not necessarily fault-tolerant), I'm not sure that "completely secure" is a meaningful concept.

-- 
Glynn Clements <glynn.clementsat_private>
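As a rough illustration of the "input data affecting output timings" point above (this sketch is not from the original post, and the function names are made up for the example): a comparison routine with a data-dependent early exit returns sooner the earlier the first mismatch occurs, so its running time leaks how much of a guess is correct; a "constant-time" variant inspects every byte regardless of the data.

#include <stdio.h>
#include <stddef.h>

/* Leaky: returns as soon as a byte differs, so the running time
   depends on the length of the matching prefix. */
int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (a[i] != b[i])
            return 0;          /* early exit: timing depends on the data */
    }
    return 1;
}

/* Constant-time in intent: accumulates differences over all n bytes,
   so the running time does not depend on where a mismatch occurs. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* always touches every byte */
    return diff == 0;
}

int main(void)
{
    const unsigned char secret[] = "hunter2";   /* illustrative values only */
    const unsigned char guess[]  = "huntex!";
    printf("leaky: %d  constant-time: %d\n",
           leaky_compare(secret, guess, 7),
           ct_compare(secret, guess, 7));
    return 0;
}

Even the second version only closes one narrow channel; caches, branch prediction and compiler optimisations can reintroduce data-dependent timing, which is in line with the point above that making input data not affect output timings is far harder than the reverse.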