[logs] Re: on database logging

From: Marcus J. Ranum (mjr@private)
Date: Thu Mar 22 2007 - 07:00:49 PST


Fenwick, Wynn wrote:
>The root cause seems to be that few are practicing business-level risk
>management on IT assets and safeguards because the risks are difficult to
>quantify and express in dollar units as they are in the physical realm. 

I agree with you 100%. But the root cause is that the notion of
"risk management" is a fantasy, and I think that realization
has begun to sink in where it matters.

And that's because, exactly as you say, "the risks are difficult to
quantify." Which is a bit of an understatement, really. The problem
is that:
- Risk is hard to quantify
- Expected loss is hard to quantify
- Effectiveness of protective measures is hard to quantify
In fact, "hard" is generous - "impossible" might be a better word.

To get all Rumsfeld on you, we're dealing with "unknown unknowns"
and a very small handful of "known unknowns." The whole premise
of risk management - that you can somehow perform a cost/benefit
analysis based on a risk probability - is false. You don't
have to have passed Statistics 101 to realize that cost/benefit
analysis is complete science fiction if you've got a single
unknown (known or unknown) in your equation.
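The point about a single unknown can be made concrete with a little interval arithmetic. A minimal sketch - all the numbers below are hypothetical, chosen only to show how honest uncertainty bounds behave - of an expected-loss calculation where the incident probability and per-incident cost are each known only as a range:

```python
# Sketch of why one unknown wrecks cost/benefit analysis.
# All figures here are hypothetical illustrations, not data.

def expected_loss_bounds(p_lo, p_hi, cost_lo, cost_hi):
    """Bounds on expected annual loss when both the incident
    probability and the per-incident cost are only known as
    intervals: best case uses both low ends, worst case both highs."""
    return p_lo * cost_lo, p_hi * cost_hi

# A "known unknown": annual breach probability somewhere in 1%..30%,
# breach cost somewhere in $50k..$5M - typical of the honest ranges
# anyone actually has.
lo, hi = expected_loss_bounds(0.01, 0.30, 50_000, 5_000_000)
print(f"expected loss: ${lo:,.0f} .. ${hi:,.0f}")

# The bounds differ by a factor of 3,000, so any question of the form
# "is a $200k safeguard worth it?" is undecidable from these inputs.
```

With bounds that span more than three orders of magnitude, the cost/benefit comparison is not merely imprecise - it carries no information at all, which is the "science fiction" being described above.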

The "quants" understand this, too. If you were to go to an
insurance company and try to get them to write a policy
based on no useful actuarial data, no reliable threat
data, and unbounded costs, they'd either laugh at you
or ask, "what's the value of the thing you want us to
insure? Because that's the annual premium."

> Add a certain
>amount of eye-of-newt and ear-of-bat, and you could come up with an ALE
>that might point out that some increased vigilance, increased precision
>of monitoring could result in lower operational cost (from the risk
>component). If the risk component is not too large, risk acceptance and
>monitoring of the risk level is an equally good practice. 

This is the problem, in a nutshell, with the risk management
philosophy. I'm not slamming you, personally, here, but the
statement above is a perfect example of why security remains
a backwater. Basically, what you're saying is:
"since we don't really have anything real, let's make up some
plausible-sounding bullsh*t and try to sell it to executive management." 

That summarizes the last 15 years of computer security
quite nicely!! We have relied way too much on selling management
plausible-sounding bullsh*t to the point where it's made us
sound absolutely stupid. Because we are. After all, if we're
asking management for millions and millions of dollars to
improve the computer security situation, it ought not to be
getting WORSE, right?

>My issue is that most organizations accept unquantifiable risk without
>even attempting to measure it. This is not good risk management.

There is no good risk management.

As you say, it all depends on being able to measure risk. And, as
you say earlier - all we can offer is "a certain amount of eye-of-newt
and ear-of-bat." That's pseudoscience. When you talk about there
being "good risk management" what you're really saying is:
"If risk management weren't complete bullsh*t it wouldn't be
complete bullsh*t."

That's true. The question is whether we can get from where we
are (100% unadulterated bullsh*t - or "unknown unknowns") to
at least being able to deal with the "known unknowns." Personally,
I do not think that is possible because of site variances in
security implementation, site variances in targets and their
value, and site variances in practices. You can compute
long-term cigarette-related cancer mortality rates in humans
because there is a large (but dwindling!) sample of tobacco
smokers, but that works because cigarette smoking affects
all smokers more or less the same way in large samples.

In computer security, about the only thing we can say with any
degree of certainty is that there are large populations of
Windows users - but as soon as you start thinking about
threat models by patch level, administrative practices, and
local network topologies, it all falls apart quickly. If you start
trying to quantify things like "dollar value at risk if this
server fails," you discover that it's all hand-waving, because
nobody knows (and I mean "nobody") what systems really
depend on each other anymore. I was at one site where they
had multiple layers of redundancy for all their stuff, and the
software layer would all have gone kerblooie if their local
DNS server had failed. Anyhow - you get the point.
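That DNS anecdote is easy to sketch. The dependency map below is hypothetical (real sites rarely have one written down, which is the whole point), but it shows how a transitive-closure walk over even a toy map exposes a shared dependency that "fully redundant" stacks have in common:

```python
# Sketch: finding the hidden shared dependency among "redundant"
# systems. The service names and edges are hypothetical examples.

deps = {                       # service -> what it depends on
    "web-a": ["app-a"],
    "web-b": ["app-b"],
    "app-a": ["db-a", "dns"],
    "app-b": ["db-b", "dns"],
    "db-a":  ["dns"],
    "db-b":  ["dns"],
    "dns":   [],
}

def closure(node):
    """Everything `node` transitively depends on (depth-first walk)."""
    seen = set()
    stack = [node]
    while stack:
        for d in deps[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Two supposedly independent stacks still converge on one service:
shared = closure("web-a") & closure("web-b")
print(shared)  # {'dns'} - both stacks go kerblooie together
```

The computation is trivial; what nobody has is an accurate `deps` table, so the "dollar value at risk if this server fails" number is hand-waving all the way down.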

And to muddy things up you have "security legends"
like "80% of attacks come from the inside" that people
repeat as if they are gospel, but, in fact, that's a number
that someone pulled out of his butt one day at a conference
and everyone has been repeating it ever since as if it
had been handed down by the Archangel Gabriel on
a stone tablet. :(

> Others
>may have sized the potential $risk by eyeball, and are afraid to
>actually put the $numbers to slide decks because of the $safeguards
>required to demonstrate $effective risk management.

What you mean there is "many have pulled numbers out of their
butts and have been afraid to stand by them because they know
they are bullsh*t."

>Government departments in Canada are working toward compliance of a
>different sort. Communications Security Establishment has laid out
>control objectives in "a series of Baseline Security Requirements" that
>are prescribed by Canadian Government Security Policy. These control
>objectives are prescriptive, if unrealistic, for zones of the network.
>ie: "Continuous" monitoring is prescribed for most network zones. A
>database often resides in a zone that requires continuous monitoring.

Yup. And the problem is that a lot of those concepts come from,
well, people like us. We're cutting our own throats. You know how
it goes - someone at VISA asks, "what should we require for PCI
compliance?" and some security practitioner thinks, "...Well, a bit
of pen-testing wouldn't hurt. Besides, I make my living doing
pen-testing."   "Pen testing!"  These prescriptive recommendations
are largely nonsensical because they are not tied to a meaningful
design. And that's the problem. None of this stuff makes any
sense unless it's tied to a design but none of what we have was
ever designed - it just "happened" one night. So we're trying to
come up with ubiquitous retro-design patches for flawed
architectures. Yeah... THAT is gonna work... :(

>The next step is to define the attributes of an effective
>monitoring system. What is the point of an exception monitoring and
>handling system if it does not generate and handle exceptions? We often
>talk of monitoring but that's only part of the process. Business types
>are too literal. Why monitor without identifying exceptions, determining
>if they pose sufficient risk to contain and, if necessary, eradicate? 

Bingo. Now you're asking design questions. But these questions
cannot be meaningfully answered except on a case-by-case
basis.

>Too often, management are afraid of the extra unscoped costs that
>investigating the exceptions a monitoring system will inevitably
>generate. 

Managers are HORRIFIED by the unscoped cost labelled
"computer security." And rightly so. We keep asking for more
and more money, as if throwing money at the next little
facet of the problem is going to be the one little band-aid
that stops the gushing blood.

I'll stop there.

mjr. 

_______________________________________________
LogAnalysis mailing list
LogAnalysis@private
http://lists.shmoo.com/mailman/listinfo/loganalysis



This archive was generated by hypermail 2.1.3 : Thu Mar 22 2007 - 09:21:32 PST