Akamai announced that they are adding Web Application Firewall (WAF) capabilities to their distributed EdgePlatform network. I usually quote from the articles I reference, but there is simply too much posturing and fluffy marketing-ese about value propositions for me to extract an insightful fragment on what they are doing and why it is important, so I will paraphrase. In a nutshell, they have ported ModSecurity onto the Akamai Edge Server and are using the Core Rule Set as the basis of their policy set. As content is pulled from the Akamai cache servers, each request is examined for XSS, SQL injection, response splitting, and other injection attacks, as well as some error conditions indicative of tampering.
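To make that concrete, a blacklist-style rule in the spirit of the Core Rule Set looks roughly like the ModSecurity directive below. This is a minimal illustrative sketch, not Akamai's actual policy; the rule id, status code, and message are placeholders.

```apache
# Illustrative blacklist (negative security) rule, roughly in the CRS spirit:
# inspect every request argument for a common SQL injection pattern and block it.
# The rule id, status code, and message text are arbitrary placeholders.
SecRuleEngine On
SecRule ARGS "(?i:union[\s/*]+select)" \
    "id:900001,phase:2,t:urlDecodeUni,t:htmlEntityDecode,deny,status:403,log,\
    msg:'SQL injection pattern detected in request argument'"
```

A rule like this fires on the payload itself, regardless of which page or parameter it arrives in, which is what lets a generic rule set protect cached content it knows nothing about.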
Do I think this is a huge advancement in security? Not really. At least not at the outset. But I think it’s a good idea in the long run. Akamai edge servers are widely used by large commercial vendors and content providers, who are prime targets for XSS and other injection attacks. In essence you are distributing Web Application Firewall rules and enforcing them as requests are made for the distributed/cached content. The ModSecurity policy set has been around for a long time and will provide basic protections, but it leaves quite a gap in meaningful coverage. Don’t get me wrong: the rule set covers many of the common attacks and has proven effective. However, the value of a WAF is in the quality of the rule set, and how appropriate those rules are to the specific web application. Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.
I think the announcement is important, though, because I believe it marks the beginning of a trend. We hear far too many complaints about WAFs hindering applications, as well as about the expense of rule set development and maintenance. The capability is valuable, but the coverage needs to get better, management needs to be easier, and the costs need to come down. I believe this is a model we will see more of because:
- Security is embedded into the service. With so many ‘Cloud’ and SaaS offerings on the market, most with nebulous benefits, it’s clear that those who use Akamai are covered against the basic attacks, and the analysis is done on the Akamai network, so your servers remain largely unburdened. Just as with outsourcing the processing overhead of anti-spam into the cloud, you are letting the cloud absorb the overhead of SQL injection detection. And as with anti-virus, it’s only going to catch a subset of the attacks.
- Commoditization of the WAF service. Let’s face it: SaaS and cloud models are more efficient because you commoditize a resource and then leverage the capability across a much larger number of customers. WAF rules are hard to set up, so if I can leverage attack knowledge across hundreds or thousands of sites, the cost goes down. We are not quite there yet, but the possibility of relieving your organization of the need for these skills in-house is very attractive for the SME segment. The SME segment is not really using Akamai EdgeServers, so what I am talking about is generic WAF in the cloud, but the model fits really well with outsourced and managed service models. Specific, tailored WAF rules will be the add-on service for those who choose not to build defenses into the web application or maintain their own WAF.
- The knowledge Akamai can gather and share with WAF and web security vendors provides invaluable insight into emerging attacks. The statistics, trend data, and metrics they have access to offer security researchers a wealth of information, which can be leveraged to thwart specific attacks and augment firewall rules.
So this first baby step is not all that exciting, but I think it’s a logical progression for WAF service in the cloud, and one we will see a lot more of.
Reader interactions
10 Replies to “Akamai Implements WAF”
Seriously sleazy post Alexander.
That is correct Tim … And you are working for?
Disclosure of your involvement in Art of Defence would be appreciated. You are their CTO, am I correct?
I would not call this the first WAF in the cloud. I would not even call it a WAF. All it is is a blacklist filter for requests. The technology was invented many years ago as a plug-in for the Apache Web Server and has not made any significant advances since then.
Today’s WAF technology looks very different. Black, white, and gray listing is considered basic functionality. Proactive features like session protection, form field virtualization, learning, and assisted security policy refinement are a must. Exchanging information with related web application security products, such as vulnerability scanners or static code analysis tools, is also a must-have.
Since RSA 2009 there has been a WAF SaaS service from art of defence, which offers a fully fledged dWAF to its customers.
At the beginning of November a ‘dWAF as a service’ was launched on the Amazon AWS cloud (aws.artofdefence.com), which will be a fully functional WAF if customers need it.
The technology behind this is going to be implemented at various other cloud service providers in the near future, so they can offer a true dWAF (at least) in their clouds.
The value proposition of WAF in-the-cloud is obvious: filter the overtly malicious traffic at the edge so that you, as the CDN customer, don’t get charged for the bandwidth used to proxy this bad traffic to origin. It is a tangible cost savings, an ROI metric that doesn’t normally exist in traditional hardware-based WAF deployments at the customer’s site.
This addresses the advantage of the *location* of the WAF detection/prevention layer; the next step, however, is to tackle the actual detection logic involved. Reference the Detection section of the WASC Web Application Firewall Evaluation Criteria (WAFEC) document: http://projects.webappsec.org/f/wasc-wafec-v1.0.html#N10309. Is the WAF going to use a negative or positive security model, or perhaps both, with automated learning and profiling of traffic?

From a security perspective, it is better to have automated learning develop a profile of the expected, normal traffic, so you can enforce proper input validation and detect other anomalies. This is further enhanced by correlating negative security rules/signatures, both to help prevent the learning engine from *learning* attack traffic and factoring it into its profile, and to help further categorize events when displaying information to analysts.

On the flip side, negative security is easier to implement and does not impact performance as much, because you don’t need to do all of the heavy lifting of the advanced learning logic and custom profiles for each customer/app. This makes negative security the optimal solution for a cloud-based WAF, as it can scale across the customer base. The ModSecurity Core Rule Set (CRS) was ideal for this scenario, as one of our goals was not to focus solely on public vulnerabilities but rather to use *attack payload detection*. The point is that the rules look for bad input generically and are not tied to known attack vectors (URL + parameter injection point). The advantage of this approach is that it also provides protection for custom-coded applications and, as Jeremiah points out, does not need to be updated with each new code change.
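As a purely illustrative example of the positive-security side, a rule a learning engine might produce for a single observed parameter could look roughly like this in ModSecurity syntax; the URL, parameter name, and rule id below are hypothetical, not output from any real profiling engine:

```apache
# Illustrative positive-security (whitelist) rule for one parameter of one page.
# A learned profile would emit something similar for each field it observes.
# The location, parameter name, and rule id are hypothetical placeholders.
<Location /account/view>
    SecRule ARGS:account_id "!@rx ^\d{1,10}$" \
        "id:900101,phase:2,deny,status:403,log,\
        msg:'account_id value outside of learned profile'"
</Location>
```

Every new page or parameter needs a rule like this, which is exactly why the positive model is expensive to maintain and why generic, payload-focused rules scale better across a multi-tenant service.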
Ultimately, it ends up being the 80/20 rule of security: negative security can weed out most of the generic bad traffic, while positive security tackles the subtle, more advanced attacks. What this means in practice, in relation to the Akamai WAF service, is that it is a great first layer of defense, but it does not preclude the need for an origin-based WAF. We just touched on the first reason why (advanced attacks), but the other reason is that it may be possible for attackers to bypass the CDN altogether (by accessing the direct IP address of the web site). This is why Akamai offers the SiteShield service, so that origin web apps can be configured with ACLs to *only* accept traffic from the Edge servers. The other issue is that of insider threat. If you are implementing WAFs, you need to make sure they are located directly in front of the protected web apps, so there are no other back-channel paths that lead to the app. Side note: this is one of my big gripes with the PCI wording focusing only on “Externally facing web applications…”.
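For what that origin-side lockdown looks like in practice, here is a rough sketch of an Apache 2.2-style ACL; the address ranges are placeholders, since the real edge ranges would come from the CDN provider:

```apache
# Illustrative origin ACL: only accept HTTP traffic that arrives via the CDN edge.
# The CIDR ranges below are placeholders; a real deployment would use the
# published edge ranges supplied by the provider.
<Directory "/var/www/html">
    Order Deny,Allow
    Deny from all
    Allow from 192.0.2.0/24 198.51.100.0/24
</Directory>
```

An ACL like this only controls which addresses can reach the origin web server; it does not inspect the traffic itself, which is why an origin-based WAF is still needed for the advanced attacks mentioned above.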
I hope this info helps to clarify things a bit.
As you note, this is impressive in terms of performance and product validation. I can’t imagine my former employer deploying a WAF in front of their content engines. I’d love to see performance numbers. 🙂
Very good to see a large website operator proposing a WAF. There is a WAF company in France deploying WAFs at website operators under a program called Pro Web Serenity. The goal is to provide the WAF for free and have customers pay, as a membership, only for what they use of the product. They also added an insurance contract to this offer.
It’s basically default blacklist rules with a session tracking system on the most commonly used cookies. It’s true that using a WAF in this kind of environment is very difficult, because people don’t have the knowledge or time for it, and a WAF remains a time-consuming device to maintain. Even so, this default blacklist is able to block a lot of worms and basic attacks without generating false positives, which covers a large portion of hosted websites using generic CMSes, forums, or other open source scripts.
I also think it’s a model we will see more of. WAFs are currently sold to a market of very big customers; opening it up to cloud and SaaS opens up the very, very large market of hosted websites.
Nice post … I wonder about exceptions, though. Do customers get to edit/create/relax rules for specific pages, URLs, parameters, etc.? If not, a one-size-fits-all policy probably just won’t work for the majority.
Would this offer “virtualized” per-customer site policies, rule sets, logs, reporting, etc., so that customers get to edit/see just their own stuff?
Also, without protection for newer threats like CSRF, or features like cookie encryption, some of which require response modification/insertion, I am not sure you can technically call this a WAF in the cloud.
Adrian, good post, some bits to consider…
One major reason I found this announcement very important is that many large website operators who push massive bandwidth simply cannot deploy WAFs, for performance/manageability reasons. This is why WAFs are rarely found guarding major traffic points. Akamai is known specifically for its performance capabilities, so it may be able to scale up WAFs where the industry so far has not.
Secondly, WAF rules will always leave some vulnerability gaps, hopefully fewer in the future, but complete coverage isn’t necessarily a must. The vast majority of vulnerabilities (by raw numbers) are syntax in nature (i.e. SQLi, XSS, etc.). By mitigating these (at least temporarily), organizations can prioritize the business logic flaws, the gaps in the WAF, for code fixes. This approach makes getting down to zero remotely exploitable bugs MUCH easier. We’ve experienced as much in our customer base.
“Rule sets are really hard to get right, and must be updated with the same frequency as your web site content. As you add new pages or functions, you are adding and updating rules.”
This implies the WAF is deployed in white list mode, which to my understanding is not how Akamai is going to go. The ModSecurity Core Rules are black list style, so they would not require updates when content is changed. To be fair, the rules would have to be changed as attacks evolve, which may or may not be as fast as website/content code changes.
@ Jeremiah –
Great point on the white list vs. black list approach … thanks for raising the contradiction in the post. I am making the assumption that Akamai relieves its customers of specific ‘black list’ threats and some of the burden on web site WAFs, but does not relieve them of the need to build their own ‘white list’ of policies.