Securosis Research

Pragmatic WAF Management: The Trouble with WAF

We kicked off the Pragmatic WAF series by setting the stage in the last post, highlighting the quandary WAFs represent to most enterprises. On one hand, compliance mandates have made WAF the path of least resistance for application security. Plenty of folks have devoted a ton of effort to making WAF work, and they are now looking for even more value, above and beyond the compliance checkbox. On the other hand, there is general dissatisfaction with the technology, even from folks who use WAFs extensively. Before we get into an operational process for getting the most out of your WAF investment, it’s important to understand why security folks often view WAF with a jaundiced eye. The opposing viewpoints of security, app developers, operations, and business managers help pinpoint the issues with WAF deployments. These issues must be addressed before the technology can reach the adoption level of other security technologies (such as firewalls and IPS). The main arguments against WAF are:

Pen-tester Abuse: Pen testers don’t like WAFs. There is no reason to beat around the bush. First, the technology makes a pen tester’s job more difficult because a WAF blocks (or should block) the kind of tactics they use to attack clients via their applications. That forces them to find a way around the WAF, which they usually manage to do. They reach the customer’s environment despite the WAF, so the WAF must suck, right? More often the WAF simply is not set up to block or conceal the information pen testers are looking for. Information about the site, details about the application, configuration data, and even details on the WAF itself leak out, and are put to good use by pen testers. Far too many WAF deployments are just about getting that compliance checkbox – not stopping hackers or pen testers. So the conclusion is that the technology sucks – rather than blame landing on the implementation.
WAFs Break Apps: The security policies – essentially the rules that tell a WAF what to block and what to pass through to the application – can (and do) block legitimate traffic at times. Web application developers are used to churning code – pushing changes and new functionality to web applications several times per week, if not more often. Unless the ‘whitelist’ of approved application requests gets updated with every application change, the WAF will break the app by blocking legitimate requests. The developers get blamed, they point at operations, and nobody is happy.

Compliance, Not Security: A favorite refrain of many security professionals – at least the ones who know what they’re talking about – is, “You can be compliant and still not be secure.” Regulatory and industry compliance initiatives are designed to “raise a very low bar” on security controls, but compliance mandates inevitably leave loopholes – particularly in light of how infrequently they can realistically be updated. Loopholes attackers can exploit. Even worse, the goal of many security programs becomes passing compliance audits – not actually protecting critical corporate data. The perception of WAF as a quick fix for achieving PCI-DSS compliance – often at the expense of security – leaves many security personnel with a negative impression of the technology. WAF is not a ‘set-and-forget’ product, but for compliance it is often used that way – resulting in mediocre protection. Until WAF proves its usefulness in blocking real threats or slowing down attackers, many remain unconvinced of its overall value.

Skills Gaps: Application security is a non-trivial endeavor. Understanding spoofing, fraud, non-repudiation, denial of service attacks, and application misuse is a set of skills rarely all possessed by any one individual – but an effective WAF administrator needs all of them. We once heard of a WAF admin who ran the WAF in learning mode while a pen test was underway – so the WAF learned to treat bad behavior as legitimate! Far too many folks get dumped into the deep waters of trying to make a WAF work without a fundamental understanding of the application stack, business processes, or security controls. The end result is that rules running on the WAF miss something – perhaps not accounting for current security threats, not adapted to changes in the environment, or not reflecting the current state of the application. All too often the platform lacks the granularity to detect every variant of a particular threat, or essential details are not coded into policies, leaving an opening to be exploited. But is this an indictment of the technology, or of how it is used?

Perception and Reality: Like all security products, WAFs have undergone steady evolution over the last 10 years, but their reputation still suffers because the original WAFs were themselves subject to many of the attacks they were supposed to defend against (WAF management is through a web application, after all). Early devices also had high false positive rates and, at best, ham-fisted threat detection. Some WAFs bogged down under the weight of additional policies, and no one ever wanted to remove policies for fear of allowing an attacker to compromise the site. There were serious growing pains with WAF, but most of the current products are mature, full-featured, and reliable – despite the persistent perception. When you look at these complaints critically, much of the dissatisfaction with WAFs comes down to poor operational management. Our research shows that WAF failures are far more often a result of operational failure than of fundamental product failure. Make no mistake – WAFs are not a silver bullet – but a correctly deployed WAF makes it much harder to attack the app or to completely avoid detection.
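The “WAFs break apps” complaint comes down to a positive security model drifting out of sync with a fast-changing application. A minimal sketch of that model follows – the endpoints and parameter names are invented for illustration, and real WAFs use their own rule languages and far richer matching:

```python
# Toy positive-security ("whitelist") model, the approach at issue above.
# Endpoints and parameters here are hypothetical, not any real WAF's syntax.

ALLOWED_REQUESTS = {
    # (method, path) -> set of approved request parameters
    ("GET", "/catalog"): {"page", "sort"},
    ("POST", "/checkout"): {"cart_id", "coupon"},
}

def waf_allows(method, path, params):
    """Pass a request only if it matches the approved profile."""
    approved = ALLOWED_REQUESTS.get((method, path))
    if approved is None:
        return False                  # unknown endpoint: block
    return set(params) <= approved    # unexpected parameter: block

# The dev team ships a new gift_wrap option. Until someone updates the
# whitelist, the WAF blocks this perfectly legitimate request:
print(waf_allows("POST", "/checkout", ["cart_id", "gift_wrap"]))  # False
```

The sketch makes the operational point concrete: every application release that adds an endpoint or parameter requires a matching policy update, or legitimate traffic gets dropped.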
The effectiveness of a WAF is directly related to the quality of the people and processes keeping it current. The most serious problems with WAF are not technological – they are managerial. So that’s what we will present: a pragmatic process for managing Web Application Firewalls that overcomes the management and perception issues which plague this technology. As usual we will start at


Incite 8/1/2012: Media Angst

Obviously bad news sells. If you have any doubt about that, watch your local news. Wherever you are. The first three stories are inevitably bad news. Fires, murders, stupid political fiascos. Then maybe you’ll see a human interest story. Maybe. Then some sports and the weather, and that’s it. Let’s just say I haven’t watched a newscast in a long time. But this focus on negativity has permeated every aspect of the media, and it’s nauseating.

Take the Olympics, for example. What a great opportunity to tell great stories about athletes overcoming incredible odds to perform on a world stage. The broadcasts (at least NBC’s in the US) do go into the backstories of the athletes a bit, and those stories are inspiring. But what the hell is going on with the interviews of the athletes, especially right after competition? Could these reporters be more offensive? Asking question after question about why an athlete didn’t do this or failed to do that.

Take Monday night’s interview with Michael Phelps. This guy will end these Olympics as the most decorated athlete in history. He lost a race on Sunday that he didn’t specifically train for, coming in fourth. After he qualified for the finals in the 200m Butterfly, the obtuse reporter asked him, “which Michael Phelps will we see at the finals?” Really? Phelps didn’t take the bait, but she kept pressing him. Finally he said, “I let my swimming do the talking.” Zing! But every interview was like that. I know reporters want to get the raw emotion, but earning a silver medal is not a bad thing. Sure, every athlete with the drive to make the Olympics wants to win Gold. But the media should be celebrating these athletes, not poking the open wound when they don’t win or medal. Does anyone think gymnast Jordyn Wieber doesn’t feel terrible that she, the reigning world champion, didn’t qualify for the all-around?
As if these athletes’ accomplishments weren’t already impressive enough, their ability to deal with these media idiots is even more impressive. But I guess that’s the world we live in. Bad news sells, and good news ends up on the back page of those papers no one buys anymore. Folks are more interested in who Kobe Bryant is partying with than the 10,000 hours these athletes spend training for a 1-minute race. On days like this, I’m truly thankful our DVR allows us to fast-forward through the interviews. And that the mute button enables me to muzzle the commentators. –Mike

Photo credits: STFU originally uploaded by Glenn

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

  • Endpoint Security Management Buyer’s Guide – The Business Impact of Managing Endpoints
  • Pragmatic WAF Management – New Series: Pragmatic WAF Management

Incite 4 U

Awareness of security awareness (training): You have to hand it to Dave Aitel – he knows how to stir the pot, poking at the entire security awareness training business. He basically calls it an ineffective waste of money which would be better invested in technical controls. Every security admin tasked with wiping the machines of the same folks over and over again (really, it wasn’t pr0n) nodded in agreement. And every trainer took offense and pointed both barrels at Dave. Let me highlight one of the better responses, from Rob Cheyne, who makes some good points. As usual, the truth is somewhere in the middle. I believe high-quality security training can help, but it cannot prevent everybody from clicking stuff they shouldn’t. The goal needs to be reducing the number of folks who click unwisely. We need to balance the cost of training against the reduction in time and money spent cleaning up after the screwups.
In some organizations this is a good investment. In others, not so much. But there are no absolutes here – there rarely are. – MR

RESTful poop flinger: A college prof once told me that when he tested his applications, he would take a stack of punch cards out of the trash can and feed them in as inputs. When I used to test database scalability features, I would randomly disconnect one of the databases to ensure proper failover to the other servers. But I never wrote a Chaos Monkey to randomly kick my apps over so I could continually verify application ‘survivability’. Netflix announced this concept some time back, but now the source code is available to the public. Which is awesome. Just as no battle plan survives contact with the enemy, failover systems die on contact with reality. This is a great idea for validating code – sort of like an ongoing proof of concept. When universities hold coding competitions, this is how they should test. – AL

Budget jitsu: Great post here by Rob Graham about the nonsensical approach most security folks take to fighting for more budget, using the “coffee fund” analogy. Doing the sales/funding dance is something I tackled in the Pragmatic CSO, and Rob takes a different approach: presenting everything in terms of tradeoffs. Don’t ask for more money – ask to redistribute money to deal with different and emerging threats – which is very good advice. But Rob’s money quote, “Therefore, it must be a dishonest belief in one’s own worth. Cybersecurity have this in spades. They’ve raised their profession into some sort of quasi-religion,” shows a lot of folks need an attitude adjustment in order to sell their priorities. There is (painful) truth in that. – MR

Watch me pull a rabbit from my hat: The press folks at Black Hat were frenetic. At one session I proctored, a member of the press literally walked onto the stage as I was set to announce the presentation, and several more repeatedly


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model; we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.