http://www.computerworld.com/securitytopics/security/story/0,,95656,00.html

Opinion by Mudge, Intrusic Inc.
SEPTEMBER 09, 2004
COMPUTERWORLD

In the sciences, there are general principles that can apply to all environments. The principles of physics (i.e., the general laws) are ubiquitous across disciplines. Why should the information security field be any different? It turns out that it isn't. In my experience, the following general principles have proved beneficial. Companies can apply them with existing internal resources.

1. Map security around business functions

In few areas is the relationship of security to business functions more obvious than in comparing electrical utilities with industrial refineries. Both business models use a segmentation structure around Supervisory Control and Data Acquisition (SCADA) and/or distributed control systems. While both electrical utilities and refineries have these environments, the refineries, in general, have a much more secure implementation of this model.

Was this due to particular security requirements? No. When technical experts from both industries were queried, the rationale became clear: One field had to be much more competitive in the business realm than the other. Industrial refineries had to compete in the marketplace, while utilities were subsidized and regulated by the government. If one company operated even a fraction of a percent more efficiently and cost-effectively than a competitor did, that business had an edge in the public markets. Tremendous effort was spent designing networks and systems that met core technical requirements as efficiently and in as organized a way as possible. These efforts resulted in networks with a relatively high security baseline. More important, they provided a solid foundation for any security components that might be desired in the future.

Without the economic driver of competition, the optimization of the electrical utilities' underlying business architectures didn't receive the same attention. As various utility markets are deregulated, many players find themselves in the position of having to make a profit. However, the underlying infrastructure lacks a foundation solid enough to confidently run critical business tasks, let alone withstand hostile attacks.

2. Define information and data labeling and handling guidelines

Although an arduous initial task, implementing data classification, labeling and handling guidelines will pay huge dividends in the long run. Many companies invest substantial capital in vulnerability assessments, network intrusion-detection systems and security best-practice guidelines. Unfortunately, few of these companies ever embrace information labeling and classification guidelines.

If an engineer comes across a business memo he doesn't understand, what are the odds that this information will be handled in a secure fashion commensurate with the memo's value? Conversely, if a secretary receives an e-mail carrying a source-code attachment, will the secretary automatically know whether it's permissible to forward it to a recipient outside of the corporate network? No matter how perfect the technical security within an organization might be, not understanding what's valuable or sensitive and how to handle it appropriately will negate those technical defenses.

While I was working with the U.S. government on the problem of vulnerabilities in critical infrastructure, data labeling and handling guidelines surfaced as one of the most glaring problems. More than 80% of the time, there was no need to break into an organization that was a key player in one of the critical-infrastructure segments to demonstrate key vulnerabilities. Simply engaging in intelligence gathering would invariably yield the information needed to circumvent corporate security or gain direct access to back-end networks responsible for the command and control of utility, financial, transportation and communications networks.
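To make the handling side of such guidelines concrete, here is a minimal sketch of a label-aware forwarding check, assuming a simple four-level classification scheme and a hypothetical corporate domain. The labels, the policy threshold and the domain are illustrative assumptions, not taken from any particular standard.

# Minimal sketch of a label-aware forwarding check.
# The classification levels, the threshold and the corporate domain
# below are assumptions for illustration only.
from enum import IntEnum

class Label(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Assumed policy: anything above INTERNAL may not leave the corporate domain.
MAX_EXTERNAL_LABEL = Label.INTERNAL
CORPORATE_DOMAIN = "example.com"   # hypothetical domain

def may_forward(label: Label, recipient: str) -> bool:
    """Return True if material with this label may go to this recipient."""
    is_internal = recipient.lower().endswith("@" + CORPORATE_DOMAIN)
    return is_internal or label <= MAX_EXTERNAL_LABEL

if __name__ == "__main__":
    # The secretary's dilemma from the text: source code labeled CONFIDENTIAL here.
    print(may_forward(Label.CONFIDENTIAL, "partner@outside.org"))   # False
    print(may_forward(Label.CONFIDENTIAL, "dev@example.com"))       # True
    print(may_forward(Label.PUBLIC, "press@outside.org"))           # True

The point is not the particular scheme but that, once labels exist, the "is this permissible?" question becomes mechanical rather than a guess left to whoever happens to hold the document.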
3. Learn how your network actually works

Many companies lack internal network diagrams altogether, let alone up-to-date ones. While this might not be surprising, the following point very well might be: Of all the "up-to-date" internal network diagrams I have seen, only a small fraction accurately represent what really transpires on the underlying networks.

The divergence of actual network routing and flow from the paper mappings put together by internal network operations groups is easy to understand. The introduction or removal of network devices (primarily routers, switches and hubs) without documentation or the knowledge of IT is an obvious culprit, whether accidental or intentional. While this does happen, it's usually not the greatest contributor to inaccurate network maps. The larger contributor comes in two parts. First is the use of dynamic protocols in an inherently static environment. Second is the tendency to forget that most network infrastructure devices intentionally "fail open."

Few internal networks are set up with multiple entry and exit points. There is usually a single router per network or subnet that connects each leg to form the corporate network. Yet it's tremendously common to find internal routers running dynamic routing and discovery protocols. Because of this, normal maintenance of infrastructure components or reconfiguration of individual elements can result in cascading modifications to routes and paths. There are many suboptimal ways of switching and/or routing traffic that will continue to provide base functionality, albeit at a cost in performance and complexity.

Why do so many organizations have infrastructure devices configured to use dynamic routing and/or discovery protocols? The answer is simple: Vendors ship them that way by default. Manufacturers of infrastructure devices have to choose how their equipment will act under unusual or unknown circumstances. Should the expensive switch stop working entirely, or should it revert to broadcast mode, where it acts more like a repeater or hub? The choice is obvious. Put yourself in their situation and guess which option would be less disruptive to the customer's environment. Unfortunately, the customer is usually unaware that a switch has failed open.

The general rule of thumb for both business optimization and security is, "Keep it simple." By configuring infrastructure equipment to be static if it's deployed in a static environment, and by periodically sampling network traffic promiscuously at various locations, you'll maintain an accurate understanding of your network. At the very least, you'll be more aware of when things change.
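As one way to act on the periodic-sampling suggestion, the sketch below captures a short window of traffic and flags any internal address that doesn't appear on the documented network map. It assumes the third-party Scapy packet library, capture privileges, and an illustrative interface name and subnet list; a real deployment would feed results into whatever change-tracking process the operations group already uses.

# Minimal sketch of periodic promiscuous sampling: sniff briefly and flag
# traffic whose source or destination falls outside the subnets on the
# documented network map. Assumes Scapy (pip install scapy), capture
# privileges, and illustrative interface/subnet values.
import ipaddress
from scapy.all import sniff, IP

# Subnets the paper network map says should exist (assumed values).
DOCUMENTED_SUBNETS = [ipaddress.ip_network(n) for n in (
    "10.1.0.0/16",      # office LAN
    "10.2.0.0/16",      # server farm
    "192.168.50.0/24",  # lab segment
)]

def on_map(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DOCUMENTED_SUBNETS)

def check(pkt) -> None:
    if IP not in pkt:
        return
    src, dst = pkt[IP].src, pkt[IP].dst
    for addr in (src, dst):
        # Only internal (private) addresses are checked against the map.
        if ipaddress.ip_address(addr).is_private and not on_map(addr):
            print(f"not on the map: {src} -> {dst}")

if __name__ == "__main__":
    # Sample one minute on an assumed interface; run periodically (e.g., from cron).
    sniff(iface="eth0", prn=check, store=False, timeout=60)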
4. Understand the components in your environment and how they relate to business

By following the above recommendations, this final project is much easier. This step allows an organization to detect and defend against external entities that have gained access to the network, as well as against internal personnel with ulterior motives. Standard intrusion-detection systems won't identify these threats because they're already inside. They don't attack a system, because access is implicitly granted. Their activities won't be detected or thwarted by patching vulnerabilities discovered through network vulnerability assessments. One must engage in more classical counterintelligence practices to effectively combat this threat.

Let's say that these steps are in place: Business functions have been made as efficient as possible and are realistically mapped through their corresponding optimized network flows; corporatewide information and data classification and handling guidelines are in place; and network maps are known beyond any doubt to accurately represent how packets and information actually flow. Now it's possible to identify information-gathering and reconnaissance activities, data removal and passive control of systems by covert adversaries.

Adversaries have the most to gain by maintaining access to internal networks for as long as possible without being discovered. However, to move data outside of this constrained environment, they must engage in activities that bend, if not flat-out violate, the general economic and information-theory principles of soundly designed and run businesses. Soundly run businesses by necessity require soundly understood internal networks and data items.

Peiter "Mudge" Zatko is founding scientist of Waltham, Mass.-based Intrusic Inc. and a division scientist at Cambridge, Mass.-based BBN Technologies.
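One simple way to put the mapped business flows to work is to treat them as a baseline and audit exported flow records against it. The sketch below is illustrative only: the baseline triples, the flow tuples and the byte-count heuristic are assumptions standing in for the NetFlow-style data and the business mapping an organization would actually have after steps 1 through 3.

# Minimal sketch of auditing flow records against a baseline of known
# business flows. Baseline entries and sample flows are assumed values.
import ipaddress
from collections import Counter

# (client subnet, server, port) triples the business mapping says should exist.
BASELINE = {
    ("10.1.0.0/16", "10.2.0.10", 443),   # office -> intranet web
    ("10.1.0.0/16", "10.2.0.25", 25),    # office -> mail relay
    ("10.2.0.0/16", "10.2.0.40", 5432),  # app tier -> database
}

def matches_baseline(src: str, dst: str, port: int) -> bool:
    s = ipaddress.ip_address(src)
    return any(
        s in ipaddress.ip_network(net) and dst == server and port == p
        for net, server, p in BASELINE
    )

def audit(flows) -> None:
    """Tally bytes moved over flows that fall outside the documented business flows."""
    suspects = Counter()
    for src, dst, port, byte_count in flows:
        if not matches_baseline(src, dst, port):
            suspects[(src, dst, port)] += byte_count
    # Large volumes to undocumented destinations are the classic exfiltration signature.
    for key, total in suspects.most_common(10):
        print(f"off-baseline flow {key}: {total} bytes")

if __name__ == "__main__":
    audit([
        ("10.1.3.7", "10.2.0.10", 443, 120_000),       # normal intranet use
        ("10.1.3.7", "203.0.113.9", 22, 48_000_000),   # large push to an outside host
    ])

The value of such a check comes entirely from the earlier steps: without trustworthy maps and labeled data, there is no baseline against which the adversary's bending of normal business behavior stands out.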