Forwarded from: "Deus, Attonbitus" <Thorat_private>
Cc: jasoncat_private

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

At 08:35 PM 1/15/2003, Jason Coombs wrote:
>Aloha, Tim.

Hey Jason-

>Rights in and liability for abandoned property is a complex subject of law.
>Nobody would argue that you don't have the right to perform remote system
>administration on abandoned property that happens to still be connected to a
>power source and the Internet, but a server that has been owned by a worm or
>an anonymous third-party attacker is not clearly abandoned. As physical
>property it still belongs to its legal owner. If we allow anonymous remote
>system administration that is allegedly benign or even beneficial to
>information security, why shouldn't we also encourage the coding of
>self-replicating concept worms and viruses that exploit security
>vulnerabilities for the sole purpose of demonstrating that such
>vulnerabilities exist?

A self-replicating worm, even if its purpose is to inoculate against some
other parent worm, still carries with it the issues of bandwidth and process
utilization. It also requires "scan-based" propagation, where it must first
connect to a host to determine whether that host is infected with the target
worm(s). The same initial group of systems would end up hosting the new
process; the end result is an event that is no less a worm than the worm it
set out to destroy in the first place.

The neutralizing technology we are talking about acts singularly against a
specific attacking host, as a defensive measure. The intent is to affect only
the attacking process, not to perform "remote administration." I know that
there are many hypothetical scenarios one may present that illustrate where
this type of action will fail, but for the most part, I think we have done a
satisfactory job of addressing most of the "real-world" issues.
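To make the "scan-based" objection concrete, here is a minimal sketch of the
probe an anti-worm would have to fire at every candidate host before acting.
It assumes the target worm leaves an HTTP-detectable artifact (Code Red II,
for example, left the system shell reachable as /scripts/root.exe on infected
IIS boxes); the function names and the exact probe path are illustrative, not
from any real tool:

```python
import socket

# Hypothetical infection signature: Code Red II exposed cmd.exe as
# /scripts/root.exe on compromised IIS hosts. A scan-based anti-worm must
# send a probe like this to every host it considers -- which is exactly
# the bandwidth and process load described above.
PROBE_PATH = "/scripts/root.exe?/c+dir"

def build_probe(host: str) -> bytes:
    """Build the HTTP request used to test a host for infection."""
    return (f"GET {PROBE_PATH} HTTP/1.0\r\n"
            f"Host: {host}\r\n\r\n").encode("ascii")

def looks_infected(response: bytes) -> bool:
    """A 200 status on the backdoor probe suggests the host is compromised."""
    status_line = response.split(b"\r\n", 1)[0]
    return b" 200 " in status_line

def scan_host(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Connect, probe, and classify a single host (the network side of the scan)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_probe(host))
        return looks_infected(s.recv(4096))
```

Multiply `scan_host` across entire netblocks and you reproduce the very scan
traffic the original worm generated, which is the point being made above.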
I'd really like it if you would check out the whitepaper (as well as anyone
else on the list who is interested) and give me some feedback:
http://www.hammerofgod.com/strikeback.txt

>Code Red was a concept worm. It did no real harm. Its spread could have
>educated IIS administrators as to the threat of their unpatched boxes, but
>it didn't. Code Red II DID result in widespread awareness of the security
>risk of unpatched IIS boxes because it did cause widespread harm. It's not
>difficult to see the slippery slope that begins with your good intentions
>and ends with the logical conclusion that in order to cause real security
>for the good of your nation and the world, you have to write malicious code
>that self-replicates and causes global electronic paralysis. Otherwise
>nobody will listen, nobody will acknowledge the threat even though you see
>it clearly, and nobody will act to prevent more severe penetrations before
>they occur.

Again, we are not suggesting self-replicating code, so I'm not clear on how
one could arrive at that as the resultant logical conclusion. This is not a
long-term goal- it is a short-term solution; a means to an end. Ultimately, I
would like to see a level of responsibility and accountability assigned to
system owners for the state of their assets when they choose to participate
in a public network. Ideally, people would practice due diligence in system
security- the current state of the Internet shows that this is simply not
being done.

To me, if you have something that needs to be done, but those who should be
responsible can't or won't do it, then the only logical conclusion I can come
to is that someone else has to do it for them. That doesn't mean that it gets
to be me or you, but it does mean that we need to take a look at other
options to ensure that the integrity of the systems and services we have come
to depend on is maintained. This is one of those options that I think should
be explored.
From a legal standpoint (as I understand it from many conversations with
counsel), one cannot prove a "duty to perform" due diligence in system
security unless one can prove that such a duty would be expected by a general
portion of the populace. From a security standpoint, the ratio of security
competence to installations simply has not permeated the computing community
to the degree that would be necessary to reasonably expect such a duty. A
general understanding of "if you don't secure your system you may be held
responsible" just might raise that level.

>When electronic trespassing is permitted in violation of other people's
>reasonable legal rights under the condition that the trespasser must be
>attempting to do something beneficial to the security of the property in
>which she trespasses, the entire notion of illegal electronic trespassing
>disappears, to be replaced with forensic arguments made by expert witnesses
>in front of juries.

I obviously don't consider it trespassing. This is an ACK to a SYN- a
defensive ACK to an attacking SYN. There are certainly grounds for
disagreement here, and it is something that must be discussed- particularly
since issues pertaining to the degree to which cyber-events are related to
the physical continuum are just going to become more complex. I must say
thank you, however, for not bringing up the "if a burglar breaks into your
neighbor's house" analogy!

>You do not want a jury of your peers to decide whether
>or not the prosecution's interpretation of the computer evidence is accurate
>or whether your defense expert witness is correct in her forensic
>counter-analysis that proves your innocence. This is a losing situation for
>you, the accused, because law enforcement will always appear to be the more
>credible witness.
>
>"Ladies and gentlemen of the jury, who are you going to believe?
>THORat_private or the FBI?"

That is another question altogether...
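The "defensive ACK to an attacking SYN" distinction can be sketched in a few
lines: the defender never scans anyone; it only classifies traffic that an
attacker has already sent to it, and any response is scoped to that one source
address. The signature pattern and function names below are hypothetical
illustrations (the regex matches a Code Red-style GET /default.ida request),
not code from the whitepaper:

```python
import re

# Hypothetical worm signature: Code Red's exploit request took the form
# GET /default.ida?NNNN... (a long run of filler characters). We only ever
# inspect payloads that were sent TO us -- we initiate nothing.
ATTACK_SIGNATURE = re.compile(rb"GET /default\.ida\?[NX]{20,}")

def classify_request(payload: bytes, source_ip: str):
    """Return the single host a defensive response may target, or None.

    Any neutralizing action is limited to exactly source_ip -- the host
    that initiated the attack -- never to third parties found by scanning.
    """
    if ATTACK_SIGNATURE.search(payload):
        return source_ip
    return None
```

The design point is that the attacker self-selects by connecting first, which
is what distinguishes this from the scan-based anti-worm discussed earlier.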
But that is the main reason I am looking for, at the least, some sort of
legal backing in case one should find himself in a position of having to
educate a jury as to the difference between physical trespass and digital
self-defense.

>Advocating white hat hacker penetrations of other people's property for the
>purpose of remote system administration also fails the common sense test.
>Common sense tells us that we can defend ourselves adequately from all of
>our badly-configured and compromised peers simply by unplugging our
>computers from the network that connects us to them.

I don't think that defensive measure has any common sense to it at all.
Though I am loath to bring another physical analogy into the discussion, that
is like saying that if I don't want to receive telemarketing calls, common
sense dictates that I just unplug my phone.

>If the only viable
>solution to the world's information security problems includes automated
>legalized trespassing, then the world needs brand new computer products
>designed from the ground up with infosec in mind. The fact that we will soon
>see the first generation of these systems enter the marketplace may be proof
>of the fundamental insecurity of existing programmable computers; though the
>jury is still out deliberating this point.

Well, I would say that the world needs new computer products with security
built into the manufacturing and development models anyway- but we are a ways
off from that. We have worm problems now, and we need to find a solution now.
Will this work? Maybe, maybe not. But I would rather spend some time and
energy looking for a solution, even an imperfect one, than just sit around
and take it.

I very much appreciate the comments, Jason- please give the whitepaper a
once-over and let me know what you think. All comments appreciated.
ttyl

Tim

-----BEGIN PGP SIGNATURE-----
Version: PGP 7.1

iQA/AwUBPiciYIhsmyD15h5gEQIgTACg2EkFmnM1XDQq/fgtCSkAMbJH5EkAoP8Z
5aP+xjvBbfDUorU6q5GZjb6S
=jaBJ
-----END PGP SIGNATURE-----

- ISN is currently hosted by Attrition.org

To unsubscribe email majordomoat_private with 'unsubscribe isn'
in the BODY of the mail.
This archive was generated by hypermail 2b30 : Fri Jan 17 2003 - 01:16:26 PST