Maximizing WAF Value: Deployment
By Adrian Lane
Now we will dig into the myriad ways to deploy a Web Application Firewall (WAF), including where to position it and the pros & cons of on-premise devices versus WAF services. A key part of the deployment process is training the WAF for specific applications and setting up the initial rulesets. We will also highlight effective practices for moving from visibility (getting alerts) to control (blocking attacks). Finally we will present a Quick Wins scenario because it’s critical for any security technology to get a ‘win’ early in deployment to prove its value.
The first major challenge for anyone using a WAF is getting it set up and effectively protecting applications. Your process will start with deciding where you want the WAF to work: on-premise, cloud-hosted, or a combination. On-premise means installing multiple appliances or virtual instances to balance incoming traffic and ensure they don’t degrade the user experience. With cloud services you have the option of scaling up or down with traffic as needed. We’ll go into benefits and tradeoffs of each later in this series.
Next you will need to determine how you want the WAF to work. You may choose either inline or out-of-band. Inline entails installing the WAF “in front of” a web app so all traffic from and to the app runs through it. This blocks attacks directly as they come in, and in some cases before content is returned to users. Both on-premise WAF devices and cloud WAF services provide this option. Alternatively, some vendors offer an out-of-band option to assess application traffic via a network tap or spanning port. They use indirect methods (TCP resets, network device integration, etc.) to shut down attack sessions. This approach has no side-effects on application operation, because traffic still flows directly to the app.
Obviously there are both advantages and disadvantages to having a WAF inline, and we don’t judge folks who opt for out-of-band rather than risking the application impact of inline deployment. But out-of-band enforcement can be evaded: tactics like command injection, SQL injection, and stored cross-site scripting (XSS) don’t require a response from the application, so the damage is done before the session can be reset. Another issue with out-of-band deployment is that attacks can make it through to applications, which puts them at risk. It’s not always a clear-cut choice, but balancing risks is why you get paid the big bucks, right?
When possible we recommend inline deployment, because this model gives you flexibility to enforce as many or as few blocking rules as you want. You need to carefully avoid blocking legitimate traffic to your applications. Out-of-band deployment offers few reliable blocking options.
Once the device is deployed you need to figure out what rules you’ll run on it. Rules embody what you choose to block, and what you let pass through to applications. Creating and maintaining these rules is where you will spend the vast majority of your time, so we will cover it in depth. The first step in rule creation is understanding how rules are built and employed. The two major categories are negative and positive security rules: the former are geared toward blocking known attacks, and the latter toward listing acceptable actions for each application. Let’s go into why each is important.
“Negative Security” policies essentially block known attacks. The model works by detecting patterns of known malicious behavior, or ‘signatures’. Things like content scraping, injection attacks, XML attacks, cross-site request forgeries, suspected botnets, Tor nodes, and even blog spam, are universal application attacks that affect all sites. Most negative policies come “out of the box” from vendors’ internal teams, who research and develop signatures for customers.
Each signature explicitly describes one attack or several variants; rules of this type typically detect attacks such as SQL injection and buffer overflows. The downside of this method is its fragility: any unrecognized variation will fail to match the signature, and the attack will bypass the WAF. If you think “this sounds like traditional endpoint AV” you’re right. So signatures are only suitable when you can reliably and deterministically describe an attack, and when you don’t expect it to be trivially altered to evade detection.
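To see why exact-match signatures are fragile, consider this toy negative-security rule. The pattern below is a deliberately naive illustration of a single SQL injection variant, not a real WAF signature; real rule sets are far more extensive, but the same evasion problem applies.

```python
import re

# Hypothetical negative-security signature for one SQL injection
# variant ("' OR 1=1"). Illustrative only -- not a real WAF rule.
SQLI_SIGNATURE = re.compile(r"'\s*or\s*1\s*=\s*1", re.IGNORECASE)

def is_blocked(request_param: str) -> bool:
    """Return True if the parameter matches the known-attack signature."""
    return bool(SQLI_SIGNATURE.search(request_param))

# The signature catches the exact variant it was written for...
print(is_blocked("username=' OR 1=1 --"))
# ...but a trivially altered variant slips past the WAF entirely.
print(is_blocked("username=' OR 2=2 --"))
```

The second request is logically identical to the first, yet the signature never fires, which is exactly the fragility described above.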
WAFs usually provide a myriad of other detection options to compensate for the limitations of static signatures: heuristics, reputation scoring, detection of evasion techniques, and proprietary methods for qualitatively detecting attacks. Each method has its own strengths and weaknesses, and use cases for which it is better or worse suited. These techniques can be combined to produce a risk score for incoming requests, with flexible blocking options based on the severity of the attack or your confidence level. This is similar to the “spam cocktail” approach used by email security gateways for years. But the devil is in the details: there are thousands of attack variations, and figuring out how to apply policies to detect and stop attacks is difficult.
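The “cocktail” idea can be sketched as a weighted risk score. The detection methods, weights, and blocking threshold below are illustrative assumptions, not any vendor’s actual defaults.

```python
# Several weak detection methods each contribute to a risk score;
# the blocking decision depends on a confidence threshold.
# Weights and threshold are illustrative assumptions.
def risk_score(request: dict) -> float:
    score = 0.0
    if request.get("ip_reputation") == "bad":
        score += 0.4                      # reputation service verdict
    if "../" in request.get("path", ""):
        score += 0.3                      # path traversal heuristic
    if request.get("encoding_anomaly"):
        score += 0.3                      # possible evasion technique
    return score

def decide(request: dict, block_threshold: float = 0.6) -> str:
    score = risk_score(request)
    if score >= block_threshold:
        return "block"
    return "alert" if score > 0.0 else "allow"

print(decide({"ip_reputation": "bad", "path": "/app/../etc/passwd"}))
print(decide({"path": "/app/index.html"}))
```

A single weak indicator only raises an alert, while several together cross the blocking threshold — which is the point of combining methods rather than relying on any one.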
Finally there are rules you’ll need specifically to protect your web applications from a class of attacks designed to find flaws in the way application developers code, targeting gaps in how they enforce process and transaction state. These include rules to detect fraud, business logic attacks, content scraping, and data leakage, which cannot be detected using generic signatures or heuristics. Examples of these kinds of attacks include issuing order and cancellation requests in rapid succession to confuse the web server or database into revealing or altering shopping cart information, replaying legitimate transactions, and changing the order of events to attack transaction integrity.
These application-specific rules are constructed using the same analytic techniques, but rather than focusing on the structure and use of HTTP and XML grammars, a fraud detection policy examines user behavior as it relates to the type of transaction being performed. These policies require a detailed understanding of both how attacks work and how your web applications work.
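As an illustration of an application-specific rule, here is a minimal sketch that flags the rapid order-and-cancel pattern described above. The time window, threshold, and event shape are all assumptions for the example.

```python
from collections import defaultdict, deque

# Flag users who issue order/cancel actions in rapid succession,
# one of the business-logic attacks described above.
# Window and threshold are illustrative assumptions.
WINDOW_SECONDS = 10
MAX_ORDER_CANCEL_PAIRS = 3

events = defaultdict(deque)  # user -> timestamps of order/cancel actions

def record(user: str, action: str, now: float) -> bool:
    """Record an action; return True if the user should be flagged."""
    if action not in ("order", "cancel"):
        return False
    q = events[user]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                      # drop actions outside the window
    # more than N order/cancel pairs inside the window looks like abuse
    return len(q) > 2 * MAX_ORDER_CANCEL_PAIRS
```

Note that nothing here inspects HTTP syntax; the rule only makes sense because we know how this particular application’s checkout flow is supposed to behave.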
The other side of this coin is the positive security model, called ‘whitelisting.’ Positive security only allows known and authorized web requests, and blocks all others. Old-school network security professionals recall the term “default deny”. This is the web application analogue. It works by observing and cataloging legitimate application traffic, establishing ‘good’ requests as a baseline for acceptable usage, and blocking everything else. You’ll need to ensure you do not include any attacks in your ‘clean’ baseline, and to set up policies to block anything not on your list of valid behaviors.
The good news is that this approach is very effective at catching malicious requests you have never seen before (0-day attacks) without having to explicitly code a signature for each potential attack. You understand the folly of trying to manage a rule set to detect every possible attack. This is also an excellent way to pare down the universe of all threats described above into a smaller and more manageable subset of attacks to include in a blacklist. For example positive policies can restrict HTTP requests to known valid actions, leaving negative rules to handle only what remains.
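In code, a positive-security check reduces to a default-deny lookup. The whitelist entries below are hypothetical examples of learned “good” requests.

```python
# Default deny: only requests whose method and path appear in the
# learned whitelist pass; everything else is blocked.
# Entries are hypothetical examples of a learned baseline.
WHITELIST = {
    ("GET", "/"),
    ("GET", "/products"),
    ("POST", "/cart/add"),
}

def allow(method: str, path: str) -> bool:
    return (method, path) in WHITELIST

print(allow("GET", "/products"))        # known-good request
print(allow("POST", "/admin/export"))   # never seen in the baseline
```

An unknown 0-day request is blocked simply because it was never observed as legitimate — no signature for it needs to exist.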
The bad news is that applications are dynamic and change regularly, so unless you update your whitelist with every application update, your WAF will effectively disable new application features or crash applications. Yet for those willing to do the work positive security is a huge win. Understand that this approach is becoming more complicated as continuous deployment, DevOps, and code trickery such as ‘feature tagging’ all ratchet up the cadence of WAF rule updates. Some organizations have moved testing of WAF rules inside their development pipelines to ensure WAF doesn’t break new functionality.
You will use both positive and negative approaches in tandem because neither approach alone can adequately protect applications.
Once the WAF is installed it’s time to get basic rules in place. You will start with the WAF in monitor-only mode (also known as alert mode) until your rules are set up and vetted. This involves three steps:
- Detect attacks: First turn on any built-in (negative) rules to detect known bad behavior. They should be part of the vendor’s basic bundle.
- Learning mode: Next the WAF automatically learns web traffic to help build policies, saving you time and effort. Most WAF platforms include this as basic functionality.
- Tuning: You will need to operate in this mode from a few days to weeks, depending upon applications and traffic levels, to generate a decent known-good baseline.
Let’s dig into learning mode. You start with discovery. By looking at traffic logs to see what the pre-packaged rules would have blocked if they were enabled, you learn what your applications actually do. This provides you a proverbial smorgasbord of application activities, from which you identify what needs to be secured, and which positive rules are appropriate to permit. This initial discovery process is essential to ensure your initial ruleset covers what each application really does, not just what you think it does.
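The discovery step above can be sketched as simple log analysis: count observed request patterns and propose whitelist entries only for patterns seen repeatedly, leaving rare (possibly malicious) requests for manual review. The log format and observation threshold are assumptions.

```python
from collections import Counter

# Walk traffic logs and propose whitelist entries for anything seen at
# least MIN_OBSERVATIONS times. One-off requests are left for manual
# review so attacks in the log don't end up in the 'clean' baseline.
# Threshold and log schema are illustrative assumptions.
MIN_OBSERVATIONS = 2

def propose_whitelist(log_entries):
    counts = Counter((e["method"], e["path"]) for e in log_entries)
    return {pair for pair, n in counts.items() if n >= MIN_OBSERVATIONS}

log = [
    {"method": "GET",  "path": "/products"},
    {"method": "GET",  "path": "/products"},
    {"method": "POST", "path": "/cart/add"},
    {"method": "POST", "path": "/cart/add"},
    {"method": "GET",  "path": "/etc/passwd"},   # one-off, likely a probe
]
print(propose_whitelist(log))
```

Real learning modes consider far more than method and path (parameters, value lengths, content types), but the principle is the same: the baseline is built from what the application actually does.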
After this initial learning process you are ready to go through your first round of tuning, to get rid of some false positives and (if you were testing using actual attacks) false negatives. You will need to tweak WAF rules, and possibly add new rules, to ensure your security policies comprehensively protect your applications. Once you understand your application’s functions and have a good idea of what attacks to expect, you need to determine how you will counter them. For example if your application has a known defect which provides a security vulnerability you cannot address in a timely fashion through code changes, WAF provides several options:
- Block the request: You can remove the offending function from your whitelist, stopping the threat if your deployment supports blocking. The downside is that removing an app function or service this way may break part of the application.
- Create a signature: You can write a specific signature to detect attacks on that defect so you can detect attempts to exploit it. This will stop known attacks, but you must account for all possible variations and evasion techniques.
- Use heuristics: You can use one or more heuristics as clues to abnormal application use or an attack on a vulnerability. Heuristics include malformed requests, odd customer geolocations, customer IP reputation, use of known weak or insecure application areas, requests for sensitive data, and various other attack indicators.
With your traffic baseline you can enable whitelisting to provide positive security. With your rules tuned and whitelisting enabled – and confidence in your results so far – it’s time to enable blocking. You will need to be present and alert when you do this, because you are certain to miss something, so expect another small round of tuning here. We will explore day-to-day WAF management in our next post.
Deploying a WAF for the first time is a very difficult job, more art than science. It’s also high-profile work, because the applications it protects usually are as well, so you will face scrutiny from the CISO, developers, and the IT team. Improved security is much harder to demonstrate or observe than a broken application or bad performance, which are visible to everyone including customers. We have a few tips to get you up and running quickly and safely, so you can demonstrate positive momentum to your boss.
- Start with learning mode: As mentioned above, learning mode can dramatically accelerate building your initial ruleset. It used to be very crude, but over the years this capability has matured. In some cases we have seen valid whitelists created in under 24 hours.
- Leverage threat intel: It’s a big world out there, and odds are the attack you saw last week hit someone else before that. Global threat intelligence, threat feeds, IP reputation services, and the like can all help you identify attacks – even when you haven’t seen them before. And in many cases threat intelligence can be automatically integrated into your existing ruleset, to improve protection with minimal effort. We have seen great success with threat intel because it’s easy to employ and quickly blocks basic DDoS attacks, bots, and malware.
- Feed your other security systems: WAFs output event data in several formats, including syslog, which can be ingested by just about any SIEM, log management tool, or analytics repository. These feeds provide another source of security data to supplement your security monitoring efforts, and help compliance teams substantiate the controls in place to meet compliance mandates.
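As a sketch of such a feed, the snippet below formats WAF events as key=value pairs and shows how Python’s standard SysLogHandler could ship them to a collector. The hostname, port, and message schema are assumptions — use whatever address and format (e.g. CEF) your SIEM expects.

```python
import logging
import logging.handlers

# Hypothetical collector address and message schema; adjust for your SIEM.
def make_waf_logger(host: str, port: int = 514) -> logging.Logger:
    logger = logging.getLogger("waf")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.SysLogHandler(address=(host, port)))
    return logger

def format_event(action: str, rule_id: str, src_ip: str, path: str) -> str:
    # key=value pairs are easy for most SIEM parsers to ingest
    return f"waf action={action} rule={rule_id} src={src_ip} path={path}"

msg = format_event("block", "sqli-001", "203.0.113.9", "/login")
print(msg)
# make_waf_logger("siem.example.com").info(msg)  # hypothetical collector
```

The logger call is commented out because it would target a nonexistent host; in practice you would point it at your SIEM’s syslog listener.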
- Use vendor services where appropriate: Given the lack of sufficient security talent most firms have trouble finding people to manage their WAF. Most WAF admins, like other security folks, have other responsibilities, but WAF requires considerable specialization to operate effectively. We recommend leaning on your WAF vendor for services early in the process, especially during Proof of Concept (PoC) testing before purchase. They are far more familiar with their products than you are, and can help steer you past common pitfalls. Some even offer monitoring services to watch your WAF in action, especially those which offer cloud-based WAF services. They essentially help you tune your protection by detecting false positives and negatives. You should try to bundle in additional services when negotiating fees or purchasing, as expert help can go a long way toward getting your WAF up and useful quickly.
Our next post will talk about daily WAF operation. We’ll discuss rule management, being as agile as application teams, and using threat feeds effectively; we’ll also offer some perspective on machine learning and advanced WAF functions.