[ISN] The Chilling Effect

From: InfoSec News (alerts@private)
Date: Tue Jan 16 2007 - 22:32:42 PST


http://www.csoonline.com/read/010107/fea_vuln.html

By Scott Berinato
January 2007 
CSO Magazine

Last February at Purdue University, a student taking "cs390s Secure 
Computing" told his professor, Dr. Pascal Meunier, that a Web 
application he used for his physics class seemed to contain a serious 
vulnerability that made the app highly insecure. Such a discovery didn't 
surprise Meunier. "It's a secure computing class; naturally students 
want to discover vulnerabilities."

They probably want to impress their prof, too, who's a fixture in the 
vulnerability discovery and disclosure world. Dr. Meunier has created 
software that interfaces with vulnerability databases. He created 
ReAssure, a kind of vulnerability playground, a safe computing space to 
test exploits and perform what Meunier calls "logically destructive 
experiments." He sits on the board of editors for the Common 
Vulnerabilities and Exposures (CVE) service, the definitive dictionary 
of all confirmed software bugs. And he has managed the Vulnerabilities 
Database and Incident Response Database projects at Purdue's Center for 
Education and Research in Information Assurance and Security, or Cerias, 
an acronym pronounced like the adjective that means "no joke."

When the undergraduate approached Meunier, the professor sensed an 
educational opportunity and didn't hesitate to get involved. "We wanted 
to be good citizens and help prevent the exploit from being used," he 
says. In the context of vulnerable software, it would be the last time 
Meunier decided to be a good citizen. Meunier notified the authors of 
the physics department application that one of his students (he didn't say 
which one) had found a suspected flaw, "and their response was beautiful," 
says Meunier. They found, verified and fixed the bug right away, no 
questions asked.

But two months later, in April, the same physics department website was 
hacked. A detective approached Meunier, whose name was mentioned by the 
staff of the vulnerable website during questioning. The detective asked 
Meunier for the name of the student who had discovered the February 
vulnerability. The self-described "stubborn idealist" Meunier refused to 
name the student. He didn't believe it was in that student's character 
to hack the site and, furthermore, he didn't believe the vulnerability 
the student had discovered, which had been fixed, was even connected to 
the April hack.

The detective pushed him. Meunier recalls in his blog: "I was quickly 
threatened with the possibility of court orders, and the number of 
felony counts in the incident was brandished as justification for 
revealing the name of the student." Meunier's stomach knotted when some 
of his superiors sided with the detective and asked him to turn over the 
student. Meunier asked himself: "Was this worth losing my job? Was this 
worth the hassle of responding to court orders, subpoenas, and possibly 
having my computers (work and personal) seized?" Later, Meunier recast 
the downward spiral of emotions: "I was miffed, uneasy, disillusioned."

This is not good news for vulnerability research, the game of 
discovering and disclosing software flaws. True, discovery and 
disclosure always have been contentious topics in the information 
security ranks. For many years, no calculus existed for when and how to 
ethically disclose software vulnerabilities. Opinions varied on who 
should disclose them, too. Disclosure was a philosophical problem with 
no single answer but, rather, schools of thought. Public shaming adherents 
advised security researchers, amateurs and professionals alike to go 
public with software flaws early and often and shame vendors into fixing 
their flawed code. Back-channel disciples believed in a strong but 
limited expert community of researchers working with vendors behind the 
scenes. Many others' disclosure tenets fell in between.

Still, in recent years, with shrink-wrapped software, the community has 
managed to develop a workable disclosure process. Standard operating 
procedures for discovering bugs have been accepted and guidelines for 
disclosing them to the vendor and the public have fallen into place, and 
they have seemed to work. Economists have even proved a correlation 
between what they call "responsible disclosure" and improved software 
security.

But then, right when security researchers were getting good at the 
disclosure game, the game changed. The most critical code moved to the 
Internet, where it was highly customized and constantly interacting with 
other highly customized code. And all this Web code changed often, too, 
sometimes daily. Vulnerabilities multiplied quickly. Exploits followed.

But researchers had no counterpart methodology for disclosing Web 
vulnerabilities that mirrored the system for vulnerability disclosure in 
off-the-shelf software. It's not even clear what constitutes a 
vulnerability on the Web. Finally, and most serious, legal experts can't 
yet say whether it's even legal to discover and disclose vulnerabilities 
on Web applications like the one that Meunier's student found.

To Meunier's relief, the student volunteered himself to the detective 
and was quickly cleared. But the effects of the episode are lasting. If 
it had come to it, Meunier says, he would have named the student to 
preserve his job, and he hated being put in that position. "Even if 
there turn out to be zero legal consequences" for disclosing Web 
vulnerabilities, Meunier says, "the inconvenience, the threat of being 
harassed is already a disincentive. So essentially now my research is 
restricted."

He ceased using disclosure as a teaching opportunity as well. Meunier 
wrote a five-point don't-ask-don't-tell plan he intended to give to 
cs390s students at the beginning of each semester. If they found a Web 
vulnerability, no matter how serious or threatening, Meunier wrote, he 
didn't want to hear about it. Furthermore, he said students should 
"delete any evidence you knew about this problem...go on with your 
life," although he later amended this advice to say students should 
report vulnerabilities to CERT/CC.

A gray pall, a palpable chilling effect has settled over the security 
research community. Many, like Meunier, have decided that the discovery 
and disclosure game is not worth the risk. The net effect of this is 
fewer people with good intentions willing to cast a necessary critical 
eye on software vulnerabilities. That leaves the malicious ones, 
unconcerned by the legal or social implications of what they do, as the 
dominant demographic still looking for Web vulnerabilities.

The Rise of Responsible Disclosure

In the same way that light baffles physicists because it behaves 
simultaneously like a wave and a particle, software baffles economists 
because it behaves simultaneously like a manufactured good and a 
creative expression. It's both product and speech. It carries the 
properties of a car and a novel at the same time. With cars, 
manufacturers determine quality largely before they're released and the 
quality can be proven, quantified. Either it has air bags or it doesn't. 
With novels (the words, not the paper stock and binding), quality 
depends on what consumers get versus what they want. It is subjective 
and determined after the book has been released. Moby-Dick is a 
high-quality creative venture to some and poor quality to others. At any 
rate, this creates a paradox. If software is both scientifically 
engineered and creatively conjured, its quality is determined both 
before and after it's released and is both provable and unprovable.

In fact, says economist Ashish Arora at Carnegie Mellon University, it 
is precisely this paradox that leads to a world full of vulnerable 
software. "I'm an economist so I ask myself, Why don't vendors make 
higher quality software?" After all, in a free market, all other things 
being equal, a better engineered product should win over a lesser one 
with rational consumers. But with software, lesser-quality products, 
requiring massive amounts of repair post-release, dominate. "The truth 
is, as a manufactured good, it's extraordinarily expensive [and] 
time-consuming [to make it high quality]." At the same time, as a 
creative expression, making "quality" software is as indeterminate as 
the next best-seller. "People use software in so many ways, it's very 
difficult to anticipate what they want."

"It's terrible to say," Arora concedes, "but in some ways, from an 
economic perspective, it's more efficient to let the market tell you the 
flaws once the software is out in the public." The same consumers who 
complain about flawed software, Arora argues, would neither wait to buy 
the better software nor pay the price premium for it if more-flawed, 
less-expensive software were available sooner or at the same time. True, 
code can be engineered to be more secure. But as long as publishing 
vulnerable software remains legal, vulnerable software will rule because 
it's a significantly more efficient market than the alternative, 
high-security, low-flaw market.

The price consumers pay for supporting cheaper, buggy software is they 
become an ad hoc quality control department. They suffer the 
consequences when software fails. But vendors pay a price, too. By 
letting the market sort out the bugs, vendors have ceded control over 
who looks for flaws in their software and how flaws are disclosed to the 
public. Vendors can't control how, when or why a bug is disclosed by a 
public full of people with manifold motivations and ethics. Some want 
notoriety. Some use disclosure for corporate marketing. Some do it for a 
fee. Some have collegial intentions, hoping to improve software quality 
through community efforts. Some want to shame the vendor into patching 
through bad publicity. And still others exploit the vulnerabilities to 
make money illicitly or cause damage.

"Disclosure is one of the main ethical debates in computer security," 
says researcher Steve Christey. "There are so many perspectives, so many 
competing interests, that it can be exhausting to try and get some 
movement forward."

What this system created was a kind of free-for-all in the disclosure 
bazaar. Discovery and disclosure took place without any controls. 
Hackers traded information on flaws without informing the vendors. 
Security vendors built up entire teams of researchers whose job was to 
dig up flaws and disclose them via press release. Some told the vendors 
before going public. Others did not. Freelance consultants looked for 
major flaws to make a name for themselves and drum up business. 
Sometimes these flaws were so esoteric that they posed minimal 
real-world risk, but the researcher might not mention that. Sometimes 
the flaws were indeed serious, but the vendor would try to downplay 
them. Still other researchers and amateur hackers tried to do the right 
thing and quietly inform vendors when they found holes in code. 
Sometimes the vendors chose to ignore them and hope security by 
obscurity would protect them. Sometimes, Arora alleges, vendors paid 
mercenaries and politely asked them to keep it quiet while they worked 
on a fix.

Vulnerability disclosure came to be thought of as a messy, ugly, 
necessary evil. The madness crested, famously, at the Black Hat hacker 
conference in Las Vegas in 2005, when a researcher named Michael Lynn 
prepared to disclose to a room full of hackers and security researchers 
serious flaws in Cisco's IOS software, the code that controls many of 
the routers on the Internet. His employer, ISS (now owned by IBM), warned 
him not to disclose the vulnerabilities. So he quit his job. Cisco in 
turn threatened legal action and ordered workers to tear out pages from 
the conference program and destroy conference CDs that contained Lynn's 
presentation. Hackers accused Cisco of spin and censorship. Vendors 
accused hackers of unethical and dangerous speech. In the end, Lynn gave 
his presentation. Cisco sued. Lynn settled and agreed not to talk about 
it anymore.

The confounding part of all the grandstanding, though, was how 
unnecessary it was. In fact, as early as 2000, a hacker known as Rain 
Forest Puppy had written a draft proposal for how responsible disclosure 
could work. In 2002, researchers Chris Wysopal and Christey picked up on 
this work and created a far more detailed proposal. Broadly, it calls 
for a week to establish contact between the researcher finding a 
vulnerability and a vendor's predetermined liaison on vulnerabilities. 
Then it gives the vendor, as a general guideline, 30 days to develop a 
fix and report it to the world through proper channels. It's a 
head-start program: full disclosure, delayed. It posits that a 
vulnerability will inevitably become public, so here's an opportunity to 
create a fix before that happens, since the moment it does become public 
the risk of exploit increases. Wysopal and Christey submitted the draft 
to the IETF (Internet Engineering Task Force), where it was 
well received but not adopted because it focused on social standards 
rather than technical ones.
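
As a rough, hypothetical sketch of that timeline (the seven-day contact 
window and the 30-day fix window come from the description above and are 
guidelines, not hard rules), the sequence can be made concrete in a few 
lines of C:

    #include <stdio.h>
    #include <time.h>

    /* Sketch of the disclosure timeline described above: roughly a week
     * to establish contact with the vendor, then about 30 days for the
     * vendor to produce a fix before the finding goes public. */
    int main(void)
    {
        time_t discovered = time(NULL);              /* day the flaw is found */
        const time_t day = 24 * 60 * 60;

        time_t contact_by  = discovered + 7 * day;   /* vendor contact window */
        time_t disclose_on = contact_by + 30 * day;  /* target public release */

        char buf[32];
        strftime(buf, sizeof(buf), "%Y-%m-%d", localtime(&contact_by));
        printf("establish vendor contact by: %s\n", buf);
        strftime(buf, sizeof(buf), "%Y-%m-%d", localtime(&disclose_on));
        printf("target public disclosure:    %s\n", buf);
        return 0;
    }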

Still, its effects were lasting, and by 2004, many of its definitions 
and tenets had been folded into the accepted disclosure practices for 
shrink-wrapped software. By the time Lynn finally took the stage and 
disclosed Cisco's vulnerabilities, US-CERT, Mitre's CVE dictionary 
(Christey is editor), and Department of Homeland Security guidelines all 
used large swaths of Wysopal's and Christey's work.

Recently, economist Arora conducted several detailed economic and 
mathematical studies on disclosure, one of which seemed to prove that 
vendors patch software faster when bugs are reported through this 
system. For packaged software, responsible disclosure works.

From Buffer Overflows to Cross-Site Scripting

Three vulnerabilities that followed the responsible disclosure process 
recently are CVE-2006-3873, a buffer overflow in an Internet Explorer 
DLL file; CVE-2006-3961, a buffer overflow in an ActiveX control in a 
McAfee product; and CVE-2006-4565, a buffer overflow in the Firefox 
browser and Thunderbird e-mail program. It's not surprising that all 
three are buffer overflows. With shrink-wrapped software, buffer 
overflows have been for years the predominant vulnerability discovered 
and exploited.
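
For readers who have never seen one, a buffer overflow boils down to 
copying attacker-supplied input into a fixed-size buffer with no length 
check. The following is a minimal, hypothetical illustration of the 
class of bug, not the code behind any of the CVEs above:

    #include <stdio.h>
    #include <string.h>

    /* Classic stack buffer overflow: the fixed-size buffer is filled from
     * attacker-controlled input with no length check, so a long argument
     * overwrites adjacent stack memory (saved registers, return address). */
    void greet(const char *name)
    {
        char buf[32];
        strcpy(buf, name);                        /* no bounds check: the bug */
        printf("Hello, %s\n", buf);
    }

    /* Bounded copy: the same logic without the overflow. */
    void greet_safely(const char *name)
    {
        char buf[32];
        snprintf(buf, sizeof(buf), "%s", name);   /* truncates long input */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet_safely(argv[1]); /* greet(argv[1]) would be the exploitable path */
        return 0;
    }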

But shrink-wrapped, distributable software, while still proliferating 
and still being exploited, is a less desirable target for exploiters 
than it once was. This isn't because shrink-wrapped software is harder 
to hack than it used to be; the number of buffer overflows discovered has 
remained steady for half a decade, according to the CVE. Rather, it's 
because websites have even more vulnerabilities than packaged software, 
Web vulnerabilities are just as easy to discover and exploit, and, more 
and more, that's where hacking is most profitable. In military parlance, 
webpages provide a target-rich environment.

The speed with which Web vulnerabilities have risen to dominate the 
vulnerability discussion is startling. Between 2004 and 2006, buffer 
overflows dropped from the number-one reported class of vulnerability to 
number four. Counter to that, Web vulnerabilities shot past buffer 
overflows to take the top three spots. The number-one reported 
vulnerability, cross-site scripting (XSS), accounted for one in five of all 
CVE-reported bugs in 2006.

To understand XSS is to understand why, from a technical perspective, it 
will be so hard to apply responsible disclosure principles to Web 
vulnerabilities.

Cross-site scripting (which is something of a misnomer) uses 
vulnerabilities in webpages to insert code, or scripts. The code is 
injected into the vulnerable site unwittingly by the victim, who usually 
clicks on a link that has HTML and JavaScript embedded in it. (Another 
variety, less common and more serious, doesn't require a click). The 
link might promise a free iPod or simply seem so innocuous (a link to a 
news story, say) that the user won't deem it dangerous. Once clicked, 
though, the embedded script is reflected by the targeted website and runs 
in the victim's browser, in the security context of that site. The scripts 
will usually have a malicious intent, from simply defacing the website to 
stealing cookies or passwords, or redirecting the user to a fake webpage 
embedded in a legitimate site, a high-end phishing scheme that affected 
PayPal last year. A buffer overflow targets an 
application. But XSS is, as researcher Jeremiah Grossman (founder of 
WhiteHat Security) puts it, "an attack on the user, not the system." It 
requires the user to visit the vulnerable site and participate in 
executing the code.
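
To make the mechanism concrete, here is a minimal, hypothetical sketch 
of a reflected XSS hole (the CGI program and the example.test URL are 
invented for illustration, not drawn from any site discussed in this 
article). The page echoes part of the URL back into its HTML without 
escaping it, so a crafted link such as 
http://example.test/cgi-bin/search?<script>alert('xss')</script> makes 
the victim's browser run the attacker's script in the site's context:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical CGI search page with a reflected XSS flaw: the query
     * string from the URL is attacker-controlled, yet it is printed into
     * the HTML response verbatim. Escaping <, >, &, and quotes before
     * echoing (or rejecting markup outright) would close the hole. */
    int main(void)
    {
        const char *query = getenv("QUERY_STRING");      /* attacker-controlled */
        if (query == NULL)
            query = "";

        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body>\n");
        printf("<p>You searched for: %s</p>\n", query);  /* the flaw: raw echo */
        printf("</body></html>\n");
        return 0;
    }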

This is the first reason it's harder to disclose Web vulnerabilities. 
What exactly is the vulnerability in this XSS scenario? Is it the design 
of the page? Yes, in part. But often, it's also the social engineering 
performed on the user and his browser. A hacker who calls himself RSnake 
and who's regarded in the research community as an expert on XSS goes 
even further, saying, "[XSS is] a gateway. All it means is I can pull 
some code in from somewhere." In some sense it is like the door to a 
house. Is a door a vulnerability? Or is it when it's left unlocked that 
it becomes a vulnerability? When do you report a door as a weakness: when 
it's just there, when it's left unlocked, or when someone illegally or 
unwittingly walks through it? In the same way, it's possible to argue 
that careless users are as much to blame for XSS as software flaws. For 
the moment, let's treat XSS, the ability to inject code, as a technical 
vulnerability.

Problem number two with disclosure of XSS is its prevalence. Grossman, 
who founded his own research company, WhiteHat, claims XSS 
vulnerabilities can be found in 70 percent of websites. RSnake goes 
further. "I know Jeremiah says seven of 10. I'd say there's only one in 
30 I come across where the XSS isn't totally obvious. I don't know of a 
company I couldn't break into [using XSS]."

If you apply Grossman's number to a recent Netcraft survey, which 
estimated that there are close to 100 million websites, you've got 70 
million sites with XSS vulnerabilities. Repairing them one-off, two-off, 
200,000-off is spitting in the proverbial ocean. Even if you've 
disclosed, you've done very little to reduce the overall risk of 
exploit. "Logistically, there's no way to disclose this stuff to all the 
interested parties," Grossman says. "I used to think it was my moral 
professional duty to report every vulnerability, but it would take up my 
whole day."

What's more, new XSS vulnerabilities are created all the time, first 
because many programming languages have been made so easy to use that 
amateurs can rapidly build highly insecure webpages. And second because, 
in those slick, dynamic pages commonly marketed as "Web 2.0," code is 
both highly customized and constantly changing, says Wysopal, who is now 
CTO of Veracode. "For example, look at IIS [Microsoft's shrink-wrapped 
Web server software]," he says. "For about two years people were 
hammering on that and disclosing all kinds of flaws. But in the last 
couple of years, there have been almost no new vulnerabilities with IIS. 
It went from being a dog to one of the highest security products out 
there. But it was one code base and lots of give-and-take between 
researchers and the vendor, over and over.

"On the Web, you don't have that give and take," he says. You can't 
continually improve a webpage's code because "Web code is highly 
customized. You won't see the same code on two different banking sites, 
and the code changes all the time."

That means, in the case of Web vulnerabilities, says Christey, "every 
input and every button you can press is a potential place to attack. And 
because so much data is moving you can lose complete control. Many of 
these vulnerabilities work by mixing code where you expect to mix it. It 
creates flexibility but it also creates an opportunity for hacking."

There are in fact so many variables in a Web session (how the site is 
configured and updated, how the browser visiting the site is configured 
to interact with it) that vulnerabilities to some extent become a 
function of complexity. They may affect only some subset of users, people 
who use one browser over another, say. When it's difficult even to 
recreate the set of variables that comprise a vulnerability, it's hard to 
responsibly disclose that vulnerability.

"In some ways," RSnake says, "there is no hope. I'm not comfortable 
telling companies that I know how to protect them from this."

A Wake-Up Call for Websites

Around breakfast one day late last August, RSnake started a thread on 
his discussion board, Sla.ckers.org, a site frequented by hackers and 
researchers looking for interesting new exploits and trends in Web 
vulnerabilities. RSnake's first post was titled "So it begins." All that 
followed were two links, www.alexa.com and www.altavista.com, and a 
short note: "These have been out there for a while but are still 
unfixed." Clicking on the links exploited XSS vulnerabilities with a 
reasonably harmless, proof-of-concept script. RSnake had disclosed 
vulnerabilities.

He did this because he felt the research community and, more to the 
point, the public at large, neither understood nor respected the 
seriousness and prevalence of XSS. It was time, he says, to do some 
guerilla vulnerability disclosure. "I want them to understand this isn't 
Joe Shmoe finding a little hole and building a phishing site," RSnake 
says. "This is one of the pieces of the puzzle that could be used as a 
nasty tool."

If that first post didn't serve as a wake-up call, what followed it 
should have. Hundreds of XSS vulnerabilities were disclosed by the regular 
klatch of hackers at the site. Most exploited well-known, highly 
trafficked sites. Usually the posts included a link containing a 
proof-of-concept exploit. An XSS hole in www.gm.com, for example, simply 
delivered a pop-up dialog box with an exclamation mark in the box. By 
early October, anonymous lurkers were contributing long lists of 
XSS-vulnerable sites. In one set of these, exploit links connected to a 
defaced page with Sylvester Stallone's picture on it and the message 
"This page has been hacked! You got Stallown3d!1" The sites this hacker 
contributed included the websites of USA Today, The New York Times, The 
Boston Globe, ABC, CBS, Warner Bros., Petco, Nike, and Linens 'n Things. 
"What can I say?" RSnake wrote. "We have some kick-ass lurkers here."

Some of the XSS holes were closed up shortly after appearing on the 
site. Others remain vulnerable. At least one person tried to get the 
discussion board shut down, RSnake says, and a couple of others "didn't 
react in a way that I thought was responsible." Contacts from a few of 
the victim sites (Google and Mozilla, among others) called to tell RSnake 
they'd fixed the problem and "to say thanks through gritted teeth." Most 
haven't contacted him, and he suspects most know about neither the 
discussion thread nor their XSS vulnerabilities.

By early November last year, the number of vulnerable sites posted 
reached 1,000, many discovered by RSnake himself. His signature on his 
posts reads "RSnakeGotta love it." It connotes an aloofness that 
permeates the discussion thread, as if finding XSS vulnerabilities were 
too easy. It's fun but hardly professionally interesting, like Tom Brady 
playing flag football.

Clearly, this is not responsible disclosure by the standards against 
which shrink-wrapped software has come to be judged, but RSnake doesn't think 
responsible disclosure, even if it were somehow developed for Web 
vulnerabilities (and we've already seen how hard that will be, 
technically), can work. For one, he says, he'd be spending all day 
filling out vulnerability reports. But more to the point, "If I went out 
of my way to tell them they're vulnerable, they may or may not fix it, 
and, most importantly, the public doesn't get that this is a big 
problem." Discovery Is (Not?) a Crime

RSnake is not alone in his skepticism over proper channels being used 
for something like XSS vulnerabilities. Wysopal himself says that 
responsible disclosure guidelines, ones he helped develop, "don't apply 
at all with Web vulnerabilities." Implicit in his and Christey's process 
was the idea that the person disclosing the vulnerabilities was entitled 
to discover them in the first place, that the software was theirs to 
inspect. (Even on your own software, the end user license agreement, or 
EULA, and the Digital Millennium Copyright Act, or DMCA, limit what you 
can do with/to it.) The seemingly endless string of websites RSnake and 
the small band of hackers had outed were not theirs to audit.

Disclosing the XSS vulnerabilities on those websites was implicitly 
confessing to having discovered them. Posting the exploit 
code, no matter how innocuous, was definitive proof of discovery. That, it 
turns out, might be illegal.

No one knows for sure yet if it is, but how the law develops will 
determine whether vulnerability research will get back on track or 
devolve into the unorganized bazaar that it once was and that RSnake's 
discussion board hints it could be.

The case law in this space is sparse, but one of the few recent cases 
that address vulnerability discovery is not encouraging. A man named 
Eric McCarty, after allegedly being denied admission to the University 
of Southern California, hacked the online admission system, copied seven 
records from the database and mailed the information under a pseudonym 
to a security news website. The website notified the university and 
subsequently published information about the vulnerability. McCarty made 
little attempt to cover his tracks and even blogged about the hack. Soon 
enough, he was charged with a crime. The case is somewhat addled, says 
Jennifer Granick, a prominent lawyer in the vulnerability disclosure 
field and executive director at Stanford's Center for Internet and 
Society. "The prosecutor argued that it's because he copied the data and 
sent it to an unauthorized person that he's being charged," says 
Granick, "but copying data isn't illegal. So you're prosecuting for 
unauthorized testing of the system" (what any Web vulnerability discoverer 
is doing) "but you're motivated by what they did with the information. 
It's kind of scary."

Two cases in a similar vein preceded McCarty's. In one, the defendant was acquitted in 
less than half an hour, Granick says; in the other, prosecutors managed 
to convict the hacker, but, in a strange twist, they dropped the 
conviction on appeal (Granick represented the defendant on the appeal). 
In the USC case, though, McCarty pleaded guilty to unauthorized access. 
Granick calls this "terrible and detrimental."

"Law says you can't access computers without permission," she explains. 
"Permission on a website is implied. So far, we've relied on that. The 
Internet couldn't work if you had to get permission every time you 
wanted to access something. But what if you're using a website in a way 
that's possible but that the owner didn't intend? The question is 
whether the law prohibits you from exploring all the ways a website 
works," including through vulnerabilities.

Granick would like to see a rule established that states it's not 
illegal to report truthful information about a website vulnerability, 
when that information is gleaned from taking the steps necessary to find 
the vulnerability; in other words, from benevolently exploiting it. 
"Reporting how a website works has to be different than attacking a 
website," she says. "Without it, you encourage bad disclosure, or people 
won't do it at all because they're afraid of the consequences." Already 
many researchers, including Meunier at Purdue, have come to view a 
request for a researcher's proof-of-concept exploit code as a 
potentially aggressive tactic. Handing it over, Meunier says, is a bad 
idea because it's proof that you've explored the website in a way the 
person you're giving the code to did not intend. The victim you're 
trying to help could submit that as Exhibit A in a criminal trial 
against you.

RSnake says he thought about these issues before he started his 
discussion thread. "I went back and forth personally," he says. 
"Frankly, I don't think it's really illegal. I have no interest in 
exploiting the Web." As for others on the discussion board "everyone on 
my board, I believe, is nonmalicious." But he acknowledges that the 
specter of illegality and the uncertainty surrounding Web vulnerability 
disclosure are driving some researchers away and driving others, just as 
Granick predicted, to try to disclose anonymously or through back 
channels, which he says is unfortunate. "We're like a security lab. 
Trying to shut us down is the exact wrong response. It doesn't make the 
problem go away. If anything, it makes it worse. What we're doing is not 
meant to hurt companies. It's meant to make them protect themselves. I'm 
a consumer advocate." A Limited Pool of Bravery

What happens next depends, largely, on those who publish vulnerable 
software on the Web. Will those with vulnerable websites, instead of 
attacking the messenger, work with the research community to develop 
some kind of responsible disclosure process for Web vulnerabilities, as 
complex and uncertain a prospect as that is? Christey remains 
optimistic. "Just as with shrink-wrapped software five years ago, there 
are no security contacts and response teams for Web vulnerabilities. In 
some ways, it's the same thing over again. If the dynamic Web follows 
the same pattern, it will get worse before it gets better, but at least 
we're not at square one." Christey says his hope rests in part on an 
efficacious public that demands better software and a more secure 
Internet, something he says hasn't materialized yet.

Or will they start suing, threatening, harassing those who discover and 
disclose their Web vulnerabilities regardless of the researchers' 
intention, confidently cutting the current with the winds of McCarty's 
guilty plea filling their sails? Certainly this prospect concerns legal 
scholars and researchers, even ones who are pressing forward and 
discovering and disclosing Web vulnerabilities despite the current 
uncertainty and risk. Noble as his intentions may be, RSnake is not in 
the business of martyrdom. He says, "If the FBI came to my door [asking 
for information on people posting to the discussion board], I'd say 
'Here's their IP address.' I do not protect them. They know that."

He sounds much as Meunier did when he conceded that he'd have turned 
over his student if it had come to that. In the fifth and final point of 
the plan he gives students, telling them he wants no part of their 
vulnerability discovery and disclosure, he writes: "I've exhausted my 
limited pool of bravery. Despite the possible benefits to the university 
and society at large, I'm intimidated by the possible consequences to my 
career, bank account and sanity. I agree with [noted security 
researcher] H.D. Moore, as far as production websites are concerned: 
'There is no way to report a vulnerability safely.'"

© 2002-2007 CXO Media Inc. All rights reserved.


_____________________________
Subscribe to InfoSec News
http://www.infosecnews.org/mailman/listinfo/isn
 


