[ISN] Brief analysis of "Analyzing Websites for User-Visible Security Design Flaws"

From: InfoSec News <alerts_at_private>
Date: Mon, 28 Jul 2008 02:46:29 -0500 (CDT)
Forwarded from: security curmudgeon <jericho (at) attrition.org>

After being provided a link to the original paper and reading additional 
comments, I wanted to follow up on my original post [1] with more 
thoughts. If you want the slightly more technical review, search down to 
"methodology review". The paper in question is "Analyzing Websites for 
User-Visible Security Design Flaws" by Laura Falk, Atul Prakash and 
Kevin Borders [2]. I strongly encourage more security professionals to 
provide peer scrutiny to security research coming from universities.

As was pointed out, the research was done in 2006 (testing in Nov/Dec) 
but the results are only now being published. Three people working on a 
study of 214 web sites should not take that long to publish. Waiting so 
long to publish research on a topic like this calls into question 
whether it is responsible or, more to the point, still relevant. In the 
world of high-end custom banking applications, my experience consulting 
for such companies tells me that many commission periodic third-party 
audits and that these sites receive continuous improvements and changes 
every week. One of the web sites I use for personal banking has changed 
dramatically in the last 12 months, making huge changes to its 
functionality and presumably its architecture, security and design. The 
results of a 2006 audit of that site are probably mostly irrelevant.

As with most research papers, the lack of a publication date in the 
header is annoying. The abstract does not mention the 2006 to 2008 gap 
between research and publication either. This time difference shows 
almost immediately in the citation of Schechter et al. regarding people 
"disregarding SSL indicators". The current releases of several 
browsers, most notably IE7 and Firefox 3, make pretty big shifts in how 
the browser handles and warns about SSL indicators. Each is 
considerably more paranoid and will throw a warning over discrepancies 
that previous versions would have ignored.

On page 1, Prakash et al. list the criteria for the categories of 
"design flaws" they examined. As expected, and as mentioned in my 
previous post, the design flaws they examined are not necessarily 
vulnerabilities; often they do not put customer data at risk, or they 
require additional prerequisites to be exploitable. To look at one of 
their design flaws as an example, consider the following:

  Presenting secure login options on insecure pages: Some sites present 
  login forms that forward to a secure page but do not come from a 
  secure page. This is problematic because an attacker could modify the 
  insecure page to submit login credentials to an insecure destination.

This summary of a design flaw is problematic in that it makes several 
assumptions and/or does not fully qualify the attack vector. First, to 
"modify" an insecure page being served from the bank to the user's 
client (browser), the attacker would have to compromise the server 
(making this attack moot) or conduct a Man-in-The-Middle (MiTM) attack. 
I assume the latter is meant since the implication is that an attacker 
could not effectively MiTM attack a page wrapped in encryption (SSL).

It is interesting that the lack of SSL encryption is chosen as a design 
flaw with the notion that manipulation of an insecure page is the 
preferred attack vector. Such an attack is considerably more difficult 
to conduct than other threats (e.g. SQL injection, privilege 
escalation) and would essentially target a single customer at a time. 
Many large applications serving hundreds of thousands of users make 
this trade-off of mixed-security pages for performance reasons, as the 
overhead of encrypting all traffic can be costly.

Later in the paper when the team attempts to better define this design 
weakness, they say:

  Consider the case where the customer service contact information for 
  resetting passwords is provided on an insecure page. To compromise the 
  system, an attacker only needs to spoof or modify the page, replacing 
  the customer service phone numbers with bogus numbers.

Web pages can be spoofed regardless of the transport, so the presence 
of SSL encryption means little to nothing there. If the team is 
implying that an attacker "only needs to [..] modify the page", that 
would require compromising the server or performing a MiTM attack. 
Again, this is not a trivial attack by any means and, in the latter 
case, would affect one customer at a time.

While this is only one of five design flaws Prakash's team looked for, 
consider the third example, which is the exact same design weakness:

  Contact information/security advice on insecure pages: Some sites host 
  their security recommendations, contact information, and various other 
  sensitive information about their site and company on insecure pages. 
  This is dangerous because an attacker could forge the insecure page 
  and present different recommendations and contact information.

This is the exact same issue as #2 in the list; it merely specifies the 
type of content on the page. Factor in that issue #1 will be more 
prevalent in large organizations but a non-issue in smaller ones, and 
the list of five design weaknesses is effectively cut down to four, 
with one that may not be seen at all on some of the sites tested.

The paper quickly summarizes the findings before going into detail, 
concluding: "Overall, only 24% of the sites were completely free of 
these design flaws, indicating that some of the flaws we identified 
are not widely understood, even among institutions where security is 
critical." This assumption and conclusion is dangerous and 
irresponsible. The implication that the presence of one or more of 
these flaws shows the site's operators do not understand the threat is 
presumptuous. Given the example above about the high overhead of 
encrypting all content, some of the "design flaws" may be business 
decisions and accepted risk.

Prakash et al. begin to demonstrate their lack of understanding of 
client-server relationships and the transport mechanisms for different 
protocols. The following paragraph from page 2 immediately calls the 
team's technical competence into question:

  One of the most interesting design flaws we discovered is the 
  presentation of FAQs and contact information on insecure pages. In the 
  past, FAQs and contact information were usually sent through the mail 
  to the customer. It is not generally recognized that this information 
  should be protected. However, when this information is presented 
  online, the user becomes vulnerable to social engineering and offline 
  attacks as a result of the information being displayed on an insecure 
  page.

Prakash's contention that unencrypted content delivered from a web 
server to a browser is somehow different from unencrypted content 
delivered from a mail server to a mail reader is silly. If an attacker 
has the ability to MiTM attack a person, the attack isn't going to be 
limited to HTTP. Sending that contact information via mail will result 
in a user deleting it or maybe filing it in a folder. The first time 
the person needs to contact the bank, they will check the web page for 
the contact information. If said information is not available there, it 
further burdens the bank, as the customer may call a generic number and 
get transferred around several times. This adds to customer frustration 
and causes bank employees to spend extra time dealing with a customer 
who could have called the correct number to begin with.

Prakash's team goes on to make more assumptions and to show an 
incomplete understanding of how web clients behave. Without getting 
into a full discussion of the philosophy of e-commerce sites adding 
mechanisms to mitigate client-side vulnerabilities, the general notion 
that it should be done if feasible seems reasonable. In this context, 
feasible means that it doesn't overly burden the bank's web site, does 
not impact performance and is generally transparent to the end user. 
One example of this is a cross-frame spoofing issue that made it 
trivial for an attacker to use a phishing attack to MiTM MSIE 6 users 
[3]. Web sites can add a small bit of JavaScript to help ensure that 
their pages are not loaded inside another site's frame, which 
essentially mitigates this risk; a minimal sketch follows.
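A minimal sketch of that kind of frame-busting script (illustrative 
only, not taken from any particular bank; this variant simply forces 
the page back out of any enclosing frame):

    // Frame-busting sketch (TypeScript/JavaScript, runs in the browser).
    // If this page has been loaded inside another site's frame, break
    // out of it so a phishing page cannot wrap the real banking page
    // inside its own, attacker-controlled frame.
    if (window.top && window.top !== window.self) {
      window.top.location.href = window.self.location.href;
    }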
This is a good example of how many banks were helping to protect 
customers, even though the vulnerability was in the customer's 
software, not the bank's web site. Prakash's team claims:

  Our work is similar in that some of the flaws that we consider impair 
  a user's ability to make correct security decisions. However, our work 
  differs in that the cause is not poor or confusing client-side 
  interfaces. Instead, the flaws originate in poor design or policy 
  choices at the server that prevent or make it difficult for users to 
  make correct choices from the perspective of securing their 
  transactions.

While a mismatched SSL certificate used to be virtually ignored in some 
cases, new versions of popular browsers now behave differently in how 
they alert users, giving them the ability to more easily make correct 
choices. Claiming that this research is not impacted by "poor or 
confusing client-side interfaces" is misleading. While the older 
browsers were not necessarily confusing, they handled some situations 
regarding establishing trust poorly.

The next area of technology Prakash's team doesn't seem to fully 
understand is vulnerability scanners. In the paper, the team says:

  Network scanners, such as Nessus [11], and application-level website 
  scanners, such as AppScan [17], can be used to analyze for many 
  configuration and implementation bugs, such as use of unpatched 
  services and vulnerability to cross-site scripting or SQL injection 
  attacks. As far as we are aware, the design flaws that we examine are 
  currently not identified by these scanners.

Both Nessus and AppScan will identify several vulnerabilities that 
directly relate to the design flaws outlined. Both will warn about 
invalid or expired SSL certificates, AppScan will warn about mixed-mode 
security pages, and neither performs tests for some of the design flaws 
listed (#3, #4, #5) simply because no scanner in the world can do so.

Methodology Review:

On page 8 (of 10), the team gives very brief descriptions of their 
testing methodology. The lack of detail in the testing methodology 
undermines significant portions of the research. For "Break in the 
Chain of Trust", the paper says "Under no circumstance should an 
insecure page make a transition to a security-sensitive website hosted 
on another domain, regardless of whether the destination site uses 
SSL." This is an arbitrary 'rule' that is not widely accepted by 
anyone, including the banking industry. Many web pages are designed to 
act as portals that link to additional features. The 'rule' as quoted 
from the paper would force large banking organizations to consolidate 
all web resources on a single domain. While that may be nice, it simply 
isn't feasible for many businesses, especially large organizations that 
comprise multiple companies. Linking from http://bigbank.com/ to 
https://regionalbank.com/ is perfectly acceptable and should use proper 
SSL certificates and technology controls to help ensure the user ends 
up on the correct page, loaded directly by the browser.
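
As best I can tell from the description, the automated portion of this 
check amounts to flagging any link on an insecure page whose hostname 
differs from the page's own. A rough sketch of that kind of test 
follows; the paper never says how it decides that two hosts belong to 
the same organization, so the naive hostname comparison here is my own 
assumption:

    // Rough sketch of the "break in the chain of trust" check as I
    // read it (Node 18+ TypeScript). The hostname comparison is an
    // assumption on my part; the paper does not specify one.
    function crossDomainTransitions(pageUrl: string, html: string): string[] {
      const base = new URL(pageUrl);
      const flagged: string[] = [];
      // Pull href values out of anchor tags; deliberately simplistic.
      for (const m of html.matchAll(/<a[^>]+href\s*=\s*["']([^"']+)["']/gi)) {
        try {
          const target = new URL(m[1], base);
          if (!/^https?:$/.test(target.protocol)) continue; // skip mailto: etc.
          if (base.protocol === "http:" && target.hostname !== base.hostname) {
            flagged.push(target.href); // insecure page linking off-domain
          }
        } catch {
          // Ignore href values that are not parseable as URLs at all.
        }
      }
      return flagged;
    }

A check like this has no way to tell the legitimate bigbank.com to 
regionalbank.com portal case apart from a genuinely suspicious 
transition, which is exactly why the reported numbers needed so much 
manual cleanup later in the paper.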

The second design weakness studied was "Presenting Secure Login Options 
on Insecure Pages". The paper explains the methodology as ".. searched 
each web page for the string "login". If the string was found, we 
searched the same page for the strings "username" or "user id" or 
"password". If the string "login" and "username" or "user id" or 
"password" were found on the same page, we then verified whether the 
page was displayed using the HTTP protocol. If this was the case, we 
assumed this site contained the design flaw." The key word here is 
'assumed'. There are scenarios where the above methodology could easily 
generate false positives. Even back in 2006, there were trivial ways to 
determine with far more certainty whether a login form was actually 
being served over HTTPS.
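
For what it is worth, a more reliable check is straightforward even 
with that era's tooling: fetch the page, confirm an actual password 
field exists, and look at the scheme the page was ultimately served 
over rather than grepping for the word "login". A rough sketch of that 
idea (Node 18+ TypeScript using the built-in fetch; the function name 
is mine and the regex is deliberately simplistic):

    // Sketch: does this page serve a login form over plain HTTP?
    async function loginFormOnInsecurePage(pageUrl: string): Promise<boolean> {
      const res = await fetch(pageUrl, { redirect: "follow" });
      const html = await res.text();
      const finalUrl = new URL(res.url); // scheme after any redirects

      // Only pages that actually contain a password input are interesting.
      if (!/<input[^>]+type\s*=\s*["']?password/i.test(html)) return false;

      // If the page itself was served over HTTPS, this flaw does not apply.
      if (finalUrl.protocol === "https:") return false;

      // Page is plain HTTP and carries a login form: the design flaw the
      // paper describes, regardless of where the form's action posts.
      return true;
    }

Even this misses forms generated by JavaScript, but it avoids flagging 
a page that merely mentions the word "login" somewhere in its text.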

The third design weakness studied was "Contact Information/Security 
Advice on Insecure Pages", and its testing method is perhaps the most 
technically lacking one could perform:

 We searched each web page for the string "contact", "information", or 
 "FAQ". If those strings where found, we checked whether the page was 
 protected with SSL. If not, then we considered it to contain the design 
 flaw.

The mere presence of these words on a page does not mean they appear in 
the context of listing bank contact information. While 'contact' will 
frequently link to a 'contact us' page, looking for 'information' or 
'FAQ' is absurd.
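
To illustrate how weak that signal is, any plain-HTTP page that happens 
to contain one of those words gets counted, whether or not it says 
anything about contacting the bank. A contrived example (the HTML below 
is made up):

    // Made-up example of how the keyword test described in the paper
    // flags pages that have nothing to do with contact details.
    const sampleHtml = `
      <p>For more information about our CD rates, see the rates page.</p>
      <p>Read our FAQ on overdraft fees.</p>
    `;
    const keywords = /contact|information|FAQ/i;
    const servedOverHttp = true; // assume a plain-HTTP marketing page

    // Under the paper's stated method this page "contains the design flaw".
    console.log(keywords.test(sampleHtml) && servedOverHttp); // true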

In the fourth weakness, "Inadequate Policies for User IDs and 
Passwords", the team openly admits that their methodology may produce 
"optimistic" results and that they had no way to verify their results 
"without generating an account on the website". Heaven forbid they find 
a couple hundred students at the university to participate by logging 
into their personal banks and checking this in more detail. That extra 
effort would have made this portion the only positive and accurate test.
From the paper:

  Our count could be optimistic; some sites may require strong passwords 
  without stating an explicit policy. We had no obvious means of 
  verifying this without generating an account on the website. Our count 
  could also be conservative for sites that have poor policies resulting 
  in weak passwords. Thus, our results for this design flaw should only 
  be taken as a rough estimate of the extent of this particular problem.

As before, the fifth design weakness was extrapolated using a glorified 
'grep' of the web page, analyzing proximity of a few keywords and then 
verifying the hits above an 85% threshold. And as before, this testing 
methodology makes huge assumptions about the wording on the page, does 
not positively account for HTML formatting that would impact the 
'distance' between words (especially in pages with frames) and does not 
begin to test the functionality (see page 9, section 4.5).
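
To see why proximity measured over raw HTML is a poor signal, compare 
the character distance between two keywords in the markup and in the 
stripped text. The snippet below is invented purely to illustrate the 
point:

    // Invented example: two words that are adjacent in the rendered text
    // sit far apart in the raw HTML once markup and attributes count.
    const rawHtml =
      '<td class="label" style="padding:4px 12px">User ID</td>' +
      '<td class="field"><input type="text" name="uid"></td>' +
      '<td class="label" style="padding:4px 12px">Password</td>';

    const rendered = rawHtml.replace(/<[^>]+>/g, " "); // crude tag stripping

    const distance = (s: string) =>
      s.toLowerCase().indexOf("password") - s.toLowerCase().indexOf("user id");

    console.log(distance(rawHtml));  // large: markup inflates the gap
    console.log(distance(rendered)); // small: the words are effectively adjacent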

Finally, the paper attempts to interpret the results of this poorly 
conceived and improperly tested study. Table 1 on page 10 says that 30% 
of the sites tested were affected by "Break in the chain of trust", but 
this is contradicted (or at best clarified) in the first paragraph of 
the results:

  With automated tools, such as the one used in our study, false 
  positives are possible. To the extent feasible, we manually examined 
  the results to eliminate false positives from the reported data. Our 
  break-in-chain-of-trust data had a significant number false positives. 
  Our automated tool reported about 30% of the websites to potentially 
  use third-party sites in an unsafe way, but only 17% were found to do 
  so without giving some sort of notification to the user about that 
  transition.

Having such an admittedly large margin of error, based on their own 
methodology, should be an eye-opener regarding the integrity and 
accuracy of the results. Despite this revelation, the next paragraph 
immediately cites the table with the 30% number and begins to draw 
conclusions from the bogus figure. They go on to further explain that 
their primary tool for gathering information (wget) may not have 
retrieved all of the information needed to properly assess a site.

Despite the weak methodology, a 44% error rate on at least one test and 
other admitted errors, Prakash et al. go on to say "We found that 76% 
of sites have at least one design flaw." Such statements are certainly 
not factual, or even statistically sound, based on the research 
presented.

- jericho (security curmudgeon)


[1] http://attrition.org/pipermail/dataloss/2008-July/002565.html 
[2] http://cups.cs.cmu.edu/soups/2008/proceedings/p117Falk.pdf 
[3] http://osvdb.org/4078

