As described in the last post, deploying a WAF requires knowledge of both application security and your specific application(s). Management requires an ongoing effort to keep the WAF current with emerging attacks and frequent application changes. Your organization likely adds new applications and changes network architectures at least a couple of times a year. We see more and more organizations embracing continuous deployment for their applications, which means application functions and usage are constantly changing as well. So you need to adjust your defenses regularly to keep pace.

Test & Tune

The deployment process is about putting rules in place to protect applications. Managing the WAF involves monitoring those rules to figure out how well they actually work, which requires spending a bunch of time examining logs to learn what is working and what's not. Tuning policies for both protection and performance is not easy. As we have mentioned, you need someone who understands the rule 'grammars' and how web protocols work. That person must also understand the applications, the types of data customers should have access to within them, what constitutes bad behavior or application misuse, and the risks the web applications pose to the business. An application security professional needs to balance security, application, and business skills, because the WAF policies they write and manage touch all three disciplines.

The tuning process involves a lot of trial and error: figuring out which changes have unintended consequences, such as adding attack surface or breaking application functionality, and which are useful for protecting applications from emerging attacks. You need dual tuning efforts: one for positive rules, which must be updated when new application functionality is introduced, and another for negative rules, which protect applications against emerging attacks.

By the time a WAF is deployed, customers should be comfortable creating whitelists for applications, having gained a decent handle on application functionality and leveraged the WAF's automated learning capabilities. It's fairly easy to observe these policies in monitor-only mode, but there is still a bit of nail-biting as new capabilities are rolled out. You'll be waiting for users to exercise a function before you know whether it really works, after which reviewing positive rules gets considerably easier.

Tuning and keeping negative security policies current still relies heavily on WAF vendor and third-party assistance. Most enterprises don't have research groups studying emerging attack vectors every day. These knowledge gaps, regarding how attackers work and cutting-edge attack techniques, make it hard to write specific blacklist policies. So you will depend on your vendor for as long as you use the WAF, which is why we stress finding a vendor who acts as a partner, and building support into your contract.

As difficult as WAF management is, there is hope on the horizon, as firms embrace continuous deployment and DevOps, and accept daily updates and reconfiguration. These security teams have no choice but to build & test WAF policies as part of their delivery processes, as illustrated in the sketch below. WAF policies must be generated in tandem with new application features, which requires security and development teams to work shoulder to shoulder, integrating security into release management.
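To make "build & test WAF policies as part of the delivery process" concrete, here is a minimal sketch of a pipeline check that could run against a gated staging deployment. It is illustrative only: the staging host, endpoint, and payloads are hypothetical, and it assumes the WAF answers blocked requests with HTTP 403 – none of this describes a specific product.

```python
"""Illustrative pre-release WAF smoke test (all endpoints/payloads hypothetical)."""
import requests

STAGING = "https://staging.example.com"  # assumed in-house, gated staging host

# Known-bad probes the negative (blacklist) rules should block.
BAD_PARAMS = [
    {"q": "' OR 1=1 --"},                # SQL injection probe
    {"q": "<script>alert(1)</script>"},  # reflected XSS probe
]

# Legitimate queries the positive (whitelist) rules must keep working.
GOOD_PARAMS = [
    {"q": "order status 12345"},
]

def test_negative_rules_block_attacks():
    for params in BAD_PARAMS:
        resp = requests.get(f"{STAGING}/search", params=params, timeout=10)
        # Assumes the WAF returns HTTP 403 for blocked requests.
        assert resp.status_code == 403, f"WAF failed to block: {params}"

def test_positive_rules_allow_legit_traffic():
    for params in GOOD_PARAMS:
        resp = requests.get(f"{STAGING}/search", params=params, timeout=10)
        assert resp.status_code == 200, f"WAF false positive on: {params}"
```

Checks like these run alongside the functional test suite (for example via pytest), so a rule change that breaks the application, or an application change that slips past the rules, surfaces before public exposure.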
New application code goes through several layers of functional testing, and WAF rules get tested as code moves into a production environment, but before exposure to the general public. This integrated release process is called Blue-Green deployment testing. In this model both the current (Blue) and new (Green) application code run on their own servers, in parallel, in a fully functional production environment, ensuring applications run as intended in their 'real' environment. The new code is gated at the perimeter firewall or routers, limiting access to in-house testers. This way in-house security and application teams can verify that both the application and the WAF rules function effectively and efficiently. If either fails, the Green deployment is rolled back and Blue continues on. If Green works, it becomes the new public production copy, and Blue is retired. It's early days for DevOps, but this approach enables daily WAF rule tuning, with immediate feedback on iterative changes. More importantly, there are no surprises when updated code goes into production behind the WAF.

WAF management is an ongoing process – especially in light of the dynamic attack space blacklists address, false-positive alerts which require tuning your ruleset, and application changes driving your whitelist. Your WAF management process needs to continually learn and catalog user and application behaviors, collecting metrics as part of the process. Which metrics are meaningful, and which activities you need to monitor closely, differ between customers. The only consistency is that you cannot measure success without logs and performance metrics. Reviewing what has happened over time, and integrating that knowledge into your policies, is key to success.

Machine Learning

At this point we need to bring "machine learning" into the discussion. The topic generates confusion, so let's first discuss what it means to us. In its simplest form, machine learning is the analysis of application usage metrics to predict bad behavior. These algorithms examine data including stateful user sessions, user behavior, application attack heuristics, function misuse, and high error rates. Additional data sources include geolocation, IP address, and known attacker device fingerprints (IoC). The goal is to detect subtler forms of application misuse, and to catch attacks quickly and accurately. Think of it as a form of 0-day detection: you want to spot behavior you know is bad, even if you haven't seen that particular kind of badness before. A rough sketch of this approach appears at the end of this post.

Machine learning is a useful technique. Detecting attacks as they occur is the ideal we strive for, and automation is critical because you cannot manually review all application activity to figure out what's an attack. So you'll need some level of automation, both to scale scarce resources and to fit into new continuous deployment models. But it is still early days for this technology – this class of protection has a ways to go to reach maturity and effectiveness. We see varied success: some types of attacks are spotted, but false positive rates can be high. And we are not fans of the term "machine learning" for this functionality, because it's too generic. We've seen some vendors
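Here is the rough sketch promised above: behavioral detection in the spirit of what this section describes, using an unsupervised anomaly detector trained on normal per-session traffic features. Everything in it is an assumption for illustration; the feature set, the scikit-learn IsolationForest model, and the tiny training sample are our choices, not a description of any vendor's implementation.

```python
"""Toy behavioral anomaly detection over per-session web traffic features."""
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: requests/minute, distinct URLs hit,
# HTTP error rate, and average parameter length.
normal_sessions = np.array([
    [12, 5, 0.02, 18],
    [ 8, 4, 0.01, 22],
    [15, 6, 0.03, 20],
    [10, 5, 0.02, 19],
    [11, 7, 0.00, 21],
])

# Train only on known-good traffic; 'contamination' caps expected outliers.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# A burst of requests with a high error rate and very long parameters: the
# shape of scripted function misuse, not a known attack signature.
suspect_session = np.array([[240, 60, 0.45, 900]])
print(model.predict(suspect_session))  # -1 flags an anomaly, 1 means normal
```

The appeal is exactly what we described: the suspect session gets flagged for how it behaves rather than for matching a known signature. The false-positive risk is equally visible, since any unusual-but-legitimate session would score the same way.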