Crispin - I actually have been giving this a lot of thought, research, and discussion with other folks, and while I agree that more often than not you will be dealing with a victimized party, that doesn't end the analysis. As a legal matter, the "innocence defense" - or "inattentive and clueless defense" - at some point turns into *negligence liability*, which might justify intervention. When and under what conditions that should happen is not at all clear - that's part of what I'm trying to figure out - but from a strictly legal POV, the fact that your machine is the instrumentality of harm to my systems means either (1) intentional activity on the part of the machine's owner, or (2) potentially actionable negligence on the part of the machine's owner, if securing it according to the applicable standard of care would have prevented a third party from using it for the attack.

In either situation there is caselaw emerging which would support getting a court order to stop the activity - see Intel v. Hamidi (California, computer trespass by intentional mass email) and CI Host v. Devx (Texas, negligence in allowing defendant's systems to be taken over to attack plaintiff's network). The question then is: if I could get a court order to stop harmful activity, can I stop it myself without one? There is precedent in other areas of the law indicating that self-help may be justifiable to prevent the use of another's property to harm your own, under conditions which might fairly be analogized to this kind of situation (whether the analogy actually works, and what its limitations and misleading assumptions might be, is another part of what I'm trying to figure out).
If I did hack back I might face criminal prosecution and/or a civil suit from the party whose machine I at least partially deactivated, so I'd better have very strong facts - strong enough to convince a prosecutor that my activity was sufficiently justified that prosecution isn't worth pursuing, and a judge (and possibly a jury) that the other guy's own fault and the risk of harm to me were great enough that I shouldn't be liable for civil damages. And in order to do so I'd need sufficient financial and personnel time reserves to withstand the burden of responding to investigations and potential litigation, and lots of persuasive, credible expertise on my side. Being on the bleeding edge of the law is probably even riskier and more expensive than being on the bleeding edge of technology - but consider the technology environment we are in.

I know of at least one credible report of a hack back (which was apparently effective and had no legal fallout), and have heard tell of others. Somebody, somewhere (somebodies, more likely) will take the law into his/her own hands - maybe to stop some kind of attack, maybe to protect some (real or purported) intellectual property - without thinking through the consequences or justification. Or an organization which really does have critical infrastructure protection or homeland security responsibilities may have good reason to believe key systems are under some kind of genuine threat which cannot otherwise be reasonably managed in the time and under the operating conditions which apply, and will need to know what might happen if this option were exercised.

In the former case we need a reasonably well-articulated theory which might help educate the wannabe vigilante - "just say no" probably being a less effective strategy, though it's a given that some people may be beyond any education - and which will give prosecutors and judges a principled way to decide how to handle the matter.
Federal prosecutors and judges in particular may want this, since they may be concerned to articulate decisions which both define why the case at hand calls for prosecution or punishment - if it does - while preserving the theoretical right of national defense/homeland security agencies at least to undertake such actions.

Which gets us to the latter case, in which having some sense of the potential "rules of engagement" which might allow justified action would at least let the organization address the question and move on, and might permit an action which would in fact be beneficial and preserve valuable assets which would otherwise be lost. (I'm trying to avoid overly alarmist scenarios - for example, if the question is "should we put SCADA systems on the Internet" I would definitely "just say no" - but there are many other valuable assets and systems which cannot reasonably be disconnected, etc.)

The underlying problem is that there is no effective method for prompt law enforcement intervention, and other legal recourse is generally slow and cannot in itself generate protections. Under these conditions there is probably an inevitable drift toward vigilantism - that's why we have the RIAA and MPAA and Sen. Hatch taking these positions. So in lieu of "sheriffs" with effective enforcement powers - which is what Sen. Hatch would apparently de facto permit copyright holders to act as - it is up to the community to figure out how to manage these problems. That's what I'm trying to work out, as are a number of other folks.

I agree, risk analysis is going to show that hack backs are *almost* always the wrong thing to do. But we've got to know how to handle it when somebody does the wrong thing, without unnecessarily foreclosing it in the rare (but by definition important) situations where it might be the right thing to do.

John R. Christiansen
Preston | Gates | Ellis LLP
Direct: 206.370.8118 | Cell: 206.683.9125

Reader Advisory Notice: Internet email is inherently insecure. Message content may be subject to alteration, and email addresses may incorrectly identify the sender. If you wish to confirm the content of this message and/or the identity of the sender, please contact me at one of the phone numbers given above. Secure messaging is available upon request and recommended for confidential or other sensitive communications.

-----Original Message-----
From: Crispin Cowan [mailto:crispin@private]
Sent: Wednesday, June 18, 2003 11:43 PM
To: Christiansen, John (SEA)
Cc: 'Dorning, Kevin E - DI-3'; crime@private
Subject: Re: CRIME Senator Hatch - Destroy file swappers' computers

Christiansen, John (SEA) wrote:

>I don't think this is funny at all. I have actually been doing some
>theoretical work on active defense (or "hack back") as a potentially
>legitimate response to some kinds of network-based threats. While I am not
>convinced it is necessarily proper (and am also not convinced it is
>necessarily improper, either), it is very clear it would need to be
>undertaken carefully, with a high degree of reliability in target
>identification and proportionality of response to risk, where other recourse
>is not reasonably possible. This kind of statement at best reflects a lack
>of thought about or insight into the issues, and at worst may be taken by
>irresponsible intellectual property claimants (or wannabes) as a license to
>do what they want.

Uh, oookaaayyy .... sounds to me like you haven't thought about this very much. Attacks are almost *always* launched from a computer belonging to an innocent 3rd party, who just happened to have been cracked before you were. So if you "hack back", you almost certainly are committing an offense against an innocent party who has already been victimized by the attacker. To be fair, John did say "with a high degree of reliability in target identification."
But that's problematic: with an attack coming from a remote machine, where you have no access, and the legitimate owner is very likely both inattentive and clueless, just how is it that you might reliably establish identity? So if you do the risk analysis, "hack back" is almost *always* the wrong thing to do.

Crispin

--
Crispin Cowan, Ph.D.        http://immunix.com/~crispin/
Chief Scientist, Immunix    http://immunix.com
                            http://www.immunix.com/shop/
This archive was generated by hypermail 2b30 : Thu Jun 19 2003 - 10:31:49 PDT