[ISN] Interview with Marcus Ranum

From: InfoSec News (isn@private)
Date: Thu Jun 23 2005 - 02:09:02 PDT


http://www.securityfocus.com/columnists/334

Federico Biancuzzi
2005-06-21

Could you introduce yourself?

I am Marcus Ranum, Chief Security Officer of Tenable Network Security,
Inc., the producers of the Nessus vulnerability scanner and a suite of
security vulnerability management tools. I've been working in the
computer security arena for about 20 years, now, and was the designer
and implementor of a variety of security solutions in the past,
including firewalls, VPNs, and intrusion detection systems. I like to
think I've been around long enough and done a wide enough variety of
things that I've achieved a pretty good perspective on the trade-offs
inherent in security technology.

I was the designer and implementor of the first commercial firewall
product, the DEC SEAL, in 1990, and was the "inventor" of the proxy
firewall concept. In 1992 I wrote the TIS Firewall Toolkit and
Gauntlet firewall, and set up and managed The President's email server
(whitehouse.gov) during its first year of operation. I was founder and
CEO of Network Flight Recorder, an early innovator in the IDS market,
as well.


IPv6 should be the future. Do you see a more secure future, then?

No, IPv6 isn't going to solve anything.

IPv6 is just another network protocol, and if you look at where the
problems are occurring in computer security, they're largely up in
application space. From a security standpoint IPv6 adds very little
that could offer an improvement: in return for the addition of some
encryption and machine-to-machine authentication, we get a great deal
of additional complexity. The additional complexity of the IPv6 stack
will certainly prove to be the home of all kinds of fascinating new
bugs and denial-of-service attacks. Also, don't forget that the
current version of IP has encryption and authentication built in
already - and that hasn't helped solve any problems at all.


Do you think that the problem is that we can't develop a secure
protocol, or that people who define standards underestimate security
threats?

That's a profound question.

There are a lot of factors that combine to defeat security in up-front
design. For example, there's basic human nature: the guys who are
defining standards can't resist the urge to leave their personal stamp
on the future - which results in standards that generally have been
assembled based on a process of negotiation by committee. That doesn't
really work. That's what gives us these insanely complex
multi-optioned heavily layered standards that nobody really
understands: every person on the committee had to lobby to get his or
her favorite feature included. I don't think that process in any way
helps bring about useful security standards. A case in point would be
the IETF's terrible fruitless attempts to establish a standard on
IPSEC (IP crypto). It only took something like 9 years. Those of us in
the commercial world who needed solutions just went ahead and solved
the problem for ourselves while the IETF kept arguing. If I recall
correctly, when we added IP crypto to our Gauntlet firewall in 1993,
it took my engineer on that feature about two months to come up with a
complete proprietary implementation.

I don't think that the standards committees underestimate security
threats; I just think they're too busy doing things that are more
important to them -- like holding meetings and writing minutes, or
whatever it is that they do all the time. The standards I've seen that
try to address security all seem to be over-engineered and too late,
while the standards that ignore security are usually rapidly adopted
and full of security problems. It's a no-win situation either way.


Do you have any idea how to improve the way RFCs get created ?

I think the whole RFC process is obsolete.

In fact, it never would have worked at all, if not for the fact that
in the early days, nobody cared about the Internet. So the IETF could
have their meetings and write their RFCs in a vacuum that was free of
commercial interest. Once the Internet became a commercial phenomenon,
you can see that the IETF's productivity basically went to zero
because the vendors were all trying to pack the working groups with
their people to make sure that their existing implementations got
selected as the standard. That's pretty much what happened with IPSEC,
for example. IETF nearly converged on an IPSEC standard several times
until Cisco and other large vendors began making rumblings about "we
won't support this" and "we hold patents on that" to try to keep the
market divided.

How would I improve it? I think if you look at what standards
committees have become today, they're really little more than
ratification bodies that rubber-stamp the de facto standard. Usually
they tweak it a little bit to salve their pride but that is about it.

I think we could do away with the whole standards thing very easily if
a few customers just exercised their economic power a little bit
intelligently. Big customers have huge power, but they seem to have
forgotten that. If the CTOs of 10 FORTUNE 500 firms announced that
they were deferring further purchases of VPN products until they saw
proof of interoperability, and open published specifications that
weren't encumbered by patents or licenses, the whole market would
standardize practically overnight. Because the truth is nobody cares
about standards - everyone cares about what you can do with
interoperable systems. If customers just openly refused to do business
with vendors that produce non-interoperable systems, the whole thing
would clear up really fast.

The RFC idea could be brought into the present day if it came from
customers, not vendors and dilettantes. How about if the CTO of AT&T
announced "We're going to standardize on XYZ's implementation of
online telephony" and the CTOs of GE, Verizon, Ford Motor [Company],
and Citibank announced "we're doing that, too." Game over. Big
customers need to drive standards by not tolerating market-dividing
games from vendors. Sitting back and waiting for vendors to come up
with standards means that they can divide the market while they're
waiting to see who becomes the dominant player. Then everyone has to
standardize on the dominant player anyhow. Right now, the whole way we
do standards is 100% backwards. Just flip it around and it might work
a whole lot better.


If a standard protocol is broken or insecure, what is the best
solution? Maybe supporting only some features or adding a crypto
layer?

If it's broken, adding crypto just makes it broken and hidden.

If a standard protocol is broken, the best solution is to deprecate
the standard and use something else. Just fix it and move on. It's not
like standards are some kind of holy writ; nobody is going to be
punished for ignoring bad standards, right? Remember the ISO
networking protocols? Too late, too complicated, and everyone said "no
thanks." We can do the same, and we should.

Big customers should feel empowered to tell vendors (or standards
committees, for that matter), "Nope. That sucks. No money for you,
until you fix it." The customer is always right.


Have you ever chosen to avoid a protocol because you considered it
completely broken by design?

All the time. I avoid 90% of the current Internet protocols. It's a
hard fight, though. When I was CTO of one company I kept having to
fight to keep our sales team from using those stupid, "remote control
your PC to give a customer demo" technologies. What kind of customer
would give a vendor's sales rep control of their desktop? But people
kept asking for it. Eventually, these problems migrate from being
technical problems to political problems, and then security goes out
the window.


What about WiFi?

I waited for 8 years until the technology was fairly sorted-out before
I spent any of my money on it. So, unlike all the "early adopters" who
bought wireless access points with buggy crypto and huge security
holes, I got something fairly decent for under $100, and it supports
WPA which, by all accounts, is pretty good.

Sometimes, patience is a terrific strategy. Wait and see what happens
to the early adopters. If they're all getting hacked to pieces or
spending tons of money on patches and upgrades and fixes to the stuff
they bought - then it's not ready, yet. This seems obvious to me, but
a lot of very senior IT managers don't appear to understand it. The
longer you wait the more desperate the vendors will get, and, if you
can articulate your requirements clearly, the more likely they'll
listen to you.


Do you see any new, interesting, or promising path for network
security?

Nope! I see very little that's new and even less that's interesting.  
The truth is that most of the problems in network security were fairly
well-understood by the late 1980's. What's happening is that the same
ideas keep cropping up over and over again in different forms. For
example, how many times are we going to re-invent the idea of
signature-based detection? Anti-virus, intrusion detection, intrusion
prevention, deep packet inspection - they all do the same thing: try
to enumerate all the bad things that can happen to a computer. It
makes more sense to try to enumerate the good things that a computer
should be allowed to do.
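
To make the contrast concrete, here is a minimal Python sketch of the
two approaches; the signature and program names are invented purely for
illustration:

    # Illustrative contrast between "enumerate badness" and "enumerate goodness".
    # The signature strings and program names are invented for this example.
    BAD_SIGNATURES = {"worm.exe", "keylogger.dll"}      # deny-list: never complete
    ALLOWED_PROGRAMS = {"winword.exe", "outlook.exe"}   # allow-list: what's needed

    def deny_list_allows(program):
        # Signature-style check: anything not yet catalogued as bad gets through.
        return program not in BAD_SIGNATURES

    def allow_list_allows(program):
        # Default-deny check: only explicitly permitted programs may run.
        return program in ALLOWED_PROGRAMS

    print(deny_list_allows("brand_new_malware.exe"))    # True  -> slips through
    print(allow_list_allows("brand_new_malware.exe"))   # False -> blocked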

I believe we're making zero progress in computer security, and have
been making zero progress for quite some time. Consider this: it's
2005 and people still get viruses. How much progress are we making,
really? If we can't get a handle on relatively simple problems such as
controlled execution and filesystem/kernel permissions, how much
progress are we going to make on the really hard problems of security,
such as dealing with transitive trust? It's 2005, and IT managers
still don't seem to know how to build networks that don't collapse
when a worm gets loose on them. Security thinkers realized back in the
early 80's that networks were a good medium for attack propagation and
that networks would need to be broken into separate security domains
with gateways between them. None of this is rocket science - I think
that what we're seeing today is the result of this massive exuberance
in the late 1990's in which everyone rushed to put all their mission
critical assets onto these poorly protected networks that they then
hooked to the Internet. That was a dumb idea, and that fact just
hasn't sunk in, yet.


Do you like the approach of De-Perimeterisation (moving the firewall
from a centralized position to each host) ?

I've heard of this concept under a variety of names before; it's been
around for a long time. The problem is that, by itself, it won't work.

Why push security down to the individual host level? Well, the obvious
reason is that the network is not trustworthy. But, if the network is
not trustworthy, how can any 2 hosts communicate safely? Most of the
application protocols in use are still insecure and unencrypted. So,
you set up little VPNs between each host, and you tunnel some
applications over SSH or SSL. But that still doesn't work because
you've now got a problem of transitive trust. If host A talks to host
B and host B talks to host C, then a vulnerability in host B leaves
host A open to attack from host C. Transitive trust is the "secret
killer" of computer security but most of the time we never bump up
against it in practice because it's easier for hackers to get in via
simpler methods.

We recently saw a case where a hacker made significant penetrations
into some very secure systems using an attack against the trust
relationships between the different systems in a large research
community. The hacker compromised one researcher's account at a
university and trapdoored the researcher's SSH client. When the
researcher logged into a system at another research facility, the
hacker now had the researcher's SSH password and was able to penetrate
the next facility, set up a trapdoored SSH client there, and
eventually he got the root account as the administrator SSH'd into a
local server. The hacker had several months' worth of fun, and by the
time it was all over, he had compromised several hundred systems and
gained administrative privileges in 5 different research facilities
across the Internet. Having per-desktop firewalls would not have
helped at all in this type of scenario, unfortunately, since once the
hacker was into the first system, they were operating entirely at an
application level.
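
As a rough sketch of how that kind of transitive-trust cascade works,
consider the following Python model; the host names and login
relationships are invented for illustration and are not taken from the
actual incident:

    # Illustrative model of transitive trust. An edge (X, Y) means "users on X
    # log into Y", so a compromise of X can eventually yield credentials for Y.
    from collections import defaultdict

    logs_into = defaultdict(set)
    for src, dst in [("university-A", "facility-B"),
                     ("facility-B", "facility-C"),
                     ("facility-C", "facility-D")]:
        logs_into[src].add(dst)

    def exposed_by_compromise(start):
        # Every host ultimately reachable once 'start' is compromised.
        seen, stack = set(), [start]
        while stack:
            host = stack.pop()
            for nxt in logs_into[host]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(exposed_by_compromise("university-A"))
    # {'facility-B', 'facility-C', 'facility-D'} -- one foothold cascades down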

To really secure systems, everything needs to be done 100% right at
application layer, kernel layer, network layer, and at the boundary of
the network. That's a huge undertaking and nobody has made any effort
to tackle it directly because the resulting system would probably be
unusable. The guys who wrote the rainbow series in the 1980's
understood this and tried to get security practitioners to think about
the problem, but solutions like that simply aren't commercially
viable. So the security industry and many security users have been
bouncing back and forth between, "let's secure the networks with
firewalls and forget about host security," and, "let's secure the
hosts and forget about the networks." Neither by itself will really
work.

I've seen some practitioners (coincidentally, the ones who sell file
encryption products) saying "let's just secure the data! forget
firewalls and network security! forget host security!" but that's an
even worse idea. If you just secure the data, the first person who
installs a keyboard sniffer has your password and it's all over.

Whenever someone tells you that there's a novel, easy, solution to
security, it's either because they don't understand security or
they're trying to sell you something that isn't going to work.


What about buying a switch that includes a packet filter? This
solution should provide a trustworthy network with the added bonus of
isolating and filtering each host.

It's not a technology problem, it's a management problem. There are
plenty of tools that can be used to control inter-host trust, but they
are generally not used because they're "too hard" or "inconvenient" or
whatever. For example, the big Cisco switches all have the ability to
process ACLs at high speed. Isolating and filtering each host is very
possible and would be very effective using existing technology.

Let's imagine a simple scenario: suppose I have a subnet consisting of
150 hosts that all access a local departmental server with file
service and print service, etc. Further, let's imagine that the hosts
on that subnet need Internet browsing access and access to an
enterprise Email server (IMAP + SMTP) that sits someplace else on my
corporate LAN. And, perhaps, some of my users need access to the
mainframe for SQL, while others don't. So, I could put ACLs in the
switch to, "allow all/all to the local subnet server," "allow IMAP,
SMTP to the off-network mail server," "allow all, port 80, to the web
caching proxy off-network," "allow {list} SQL to the mainframe,"  
"default: deny all." That's not very hard, is it? Does Bob's
workstation need to talk directly to Jane's? No? Then don't allow it.
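
Expressed as a sketch in Python, that ruleset might look something like
the following; the host groups, server names, and port numbers are
illustrative, and a real deployment would express the same policy as
ACLs on the switch itself:

    # Sketch of the default-deny ruleset described above (names/ports invented).
    RULES = [
        # (source group,  destination,    allowed ports) -- first match wins
        ("subnet-hosts", "dept-server",  "any"),      # local file/print server
        ("subnet-hosts", "mail-server",  {143, 25}),  # IMAP + SMTP
        ("subnet-hosts", "web-proxy",    {80}),       # browsing via caching proxy
        ("sql-users",    "mainframe",    {1433}),     # only the hosts needing SQL
    ]

    def allowed(src_group, dst, port):
        # True only if a rule explicitly permits the flow: "default: deny all."
        for rule_src, rule_dst, ports in RULES:
            if src_group == rule_src and dst == rule_dst:
                return ports == "any" or port in ports
        return False

    print(allowed("subnet-hosts", "mail-server", 25))        # True  (SMTP)
    print(allowed("subnet-hosts", "mainframe", 1433))        # False (not listed)
    print(allowed("subnet-hosts", "peer-workstation", 445))  # False (Bob -> Jane)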

And a network like that is going to be extremely resistant to worms or
active penetration. Of course nobody does that kind of thing: they
just plug it all together, make it work, and then ignore it and hope
it doesn't get hacked.

In order to build really secure systems you need to understand the
trust relationships between your systems and then build your systems
to enhance and support your mission based on those trust
relationships. But that's hard work that very few people have the
courage and patience to undertake. So instead, they want to just throw
technology at the problem - which won't work - because there is no
amount of technology that can effectively build your trust
relationships for you if you don't understand them yourself.

The computer security industry is trapped in this backwards mindset in
which its practitioners keep trying to "list and deny all the things
that are bad" rather than "list and permit all the things that are
necessary and good." The former may have worked for a while, back when there
were only a handful of attack techniques being used, but nowadays
there are far more attack techniques than there are legitimate forms
of traffic. Security system designers who focus on permitting only
what is known to be good will always build systems that are more
reliable, durable, and hack-proof.


Do you see a growing gap between common hosts on the Internet and
hosts managed by security people?

Not really! Security practitioners these days have very little power
to encourage other IT professionals to actually secure their systems.  
In fact, I'm pretty convinced that a lot of security practitioners
really don't know how to secure systems at all. It's always a surprise
to me when I talk to a security practitioner and they say something
like, "I recommended against running [pick your favorite stupid online
chat program] through our firewall but was overruled by one of our VPs
who wanted to use it." Most of the firewalls that I've seen are
configured with rulesets that are ridiculously loose. And the results
show: 80% of corporate desktops are infected with spyware, and 15% of them
are infected with keystroke loggers. Is that better than the common
home user's system? Maybe a bit, but hardly enough to make a
difference.


If we consider the Internet as a big local network, we will see that
some of our neighbours keep getting exploited by spyware, viruses, and
so on. Who should we blame? OS producers? Or our neighbours who chose
that particular software and then run it without a proper security
setup?

There's enough blame for everyone.

Blame the users who don't secure their systems and applications.

Blame the vendors who write and distribute insecure shovel-ware.

Blame the sleazebags who make their living infecting innocent people
with spyware, or sending spam.

Blame Microsoft for producing an operating system that is bloated and
has an ineffective permissions model and poor default configurations.

Blame the IT managers who overrule their security practitioners'
advice and put their systems at risk in the interest of convenience.  
Etc.

Truly, the only people who deserve a complete helping of blame are the
hackers. Let's not forget that they're the ones doing this to us.  
They're the ones who are annoying an entire planet. They're the ones
who are costing us billions of dollars a year to secure our systems
against them. They're the ones who place their desire for fun ahead of
everyone on earth's desire for peace and [the] right to privacy.

 
Copyright 2005, SecurityFocus


