We looked at application denial of service in terms of attacking the application server and the application stack, so now let’s turn our attention to attacking the application itself. Clearly every application contains weaknesses that can be exploited, especially when the goal is simply to knock the application offline rather than something more complicated, such as stealing credentials or gaining access to the data. That lower bar of taking the application offline means more places to attack.

If we bust out the kill chain to illuminate attack progression, let’s first focus on the beginning: reconnaissance. That’s where the process starts for application denial of service attacks, as for any other attack. The attackers need to find the weak points in the application, so they assess it to figure out which pages consume the most resources, what kind of field-level validation each form performs, and which query string attributes are supported.

For instance, if a form performs a ton of field-level validation, or a page needs to make multiple database calls to multiple sites to render, that page would be a good target to blast. Serving dynamic content requires a bunch of database calls to populate the page, and each call consumes resources. The point is to consume as many resources as possible to impact the application’s ability to serve legitimate traffic.

Flooding the Application

In our Defending Against Denial of Service Attacks paper, we talked about how network-based attacks flood the pipes. Targeting resource-intensive pages with either GET or POST requests (or both) is the equivalent application flooding attack, exhausting the server’s session and memory capacity. Attackers flood a number of different parts of web applications, including:

  • Top-level index page: This one is straightforward and usually has the fewest protections because it’s open to everyone. When blasted by tens of thousands of clients simultaneously, the server can become overwhelmed.
  • Query string “proxy busting”: Attackers can send requests crafted to bypass any proxy or cache, forcing the application to generate and send new information, and eliminating the benefit of a CDN or other cache in front of the application (see the sketch after this list). The impact can be particularly acute when requesting large PDFs or other files repeatedly, consuming excessive bandwidth and server resources.
  • Random session cookies/tokens: By establishing thousands of sessions with the application, attackers can overload session tables on the server and impact its ability to serve legitimate traffic.
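To make the proxy-busting tactic concrete, here is a minimal sketch (in Python, against a hypothetical URL) of how appending a random query string makes every request look unique, so caches and CDNs pass each one straight through to the origin:

```python
import random
import string

def cache_busting_url(base_url):
    """Append a random query parameter so caches and CDNs treat
    every request as unique and forward it to the origin server."""
    token = ''.join(random.choices(string.ascii_lowercase + string.digits, k=12))
    separator = '&' if '?' in base_url else '?'
    return f"{base_url}{separator}cb={token}"

# Each call yields a distinct URL, so a cache can never serve it:
print(cache_busting_url("https://www.example.com/reports/annual.pdf"))
print(cache_busting_url("https://www.example.com/reports/annual.pdf"))
```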

Flood attacks can be detected rather easily (unlike the slow attacks described in Attacking the Server), providing an opportunity to rate-limit the attack while allowing legitimate traffic through. Of course this approach puts a premium on accuracy, as false positives slow down or discard legitimate traffic, while false negatives allow attack traffic to consume server resources.
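To illustrate the rate-limiting side, here is a minimal per-client token bucket sketch in Python; the rate and burst values are assumptions you would tune from your own traffic baseline, and that tuning is exactly where the accuracy tradeoff shows up:

```python
import time

class TokenBucket:
    """Per-client bucket: refills at `rate` tokens per second up to
    `burst`; a request is allowed only if a token is available."""
    def __init__(self, rate=5.0, burst=20):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: delay or drop this request

buckets = {}  # one bucket per client IP

def check(client_ip):
    return buckets.setdefault(client_ip, TokenBucket()).allow()
```

Set the rate too low and you discard legitimate traffic (false positives); set it too high and attack traffic slips through (false negatives).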

To accurately detect application floods you need a detailed baseline of legitimate traffic, tracking details such as URL distribution, request frequency, maximum requests per device, and outbound traffic rates. With this data a legitimate application behavior profile can be developed. You can then compare incoming traffic (usually on a WAF or application DoS device) against the profile to identify bad traffic, and then limit or block it.
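As a rough sketch of that comparison, assuming you have already learned each URL’s share of legitimate traffic (the baseline numbers and threshold below are illustrative, not any particular product’s API):

```python
from collections import Counter

# Baseline: expected share of requests per URL, learned from legitimate traffic.
baseline = {"/": 0.40, "/search": 0.25, "/products": 0.20, "/checkout": 0.15}

def flag_anomalies(requests, tolerance=3.0):
    """Compare the observed URL distribution against the baseline and
    flag any URL receiving more than `tolerance` times its expected share."""
    if not requests:
        return []
    observed = Counter(requests)
    total = len(requests)
    flagged = []
    for url, count in observed.items():
        expected = baseline.get(url, 0.01)  # unknown URLs get a tiny share
        if count / total > tolerance * expected:
            flagged.append(url)
    return flagged

# A GET flood against /search pushes its share far above 25% and gets flagged.
```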

Another tactic to mitigate application floods is input validation on all form fields, to ensure requests neither overflow application buffers nor misuse application resources. If you are using a CDN to front-end your application, make sure it can handle random query string attacks and that you are actually benefitting from the caching service. Given the ability of some attackers to bypass a CDN, you will also want to ensure your input validation ignores random query strings.
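One way to make input validation ignore random query strings is to normalize each request against a per-page parameter allowlist; a minimal sketch, with hypothetical page paths and parameter names:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Hypothetical allowlist: the only query parameters each page accepts.
ALLOWED_PARAMS = {"/search": {"q", "page"}, "/products": {"id", "category"}}

def normalize(url):
    """Drop any query parameter not on the page's allowlist, so random
    cache-busting strings collapse back into a cacheable, valid request."""
    parts = urlparse(url)
    allowed = ALLOWED_PARAMS.get(parts.path, set())
    params = [(k, v) for k, v in parse_qsl(parts.query) if k in allowed]
    return urlunparse(parts._replace(query=urlencode(params)))

# "?q=shoes&zx9f3k2=1" collapses to "?q=shoes" -- the junk parameter is gone.
print(normalize("https://www.example.com/search?q=shoes&zx9f3k2=1"))
```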

You can also leverage IP reputation services to identify bot traffic and limit or block it. That requires coordination between the application and network-based defenses, but it is effective for detecting and limiting floods.
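A sketch of how that coordination might look; the `reputation_score` lookup below stands in for a real reputation feed, not an actual service API:

```python
# Stand-in reputation lookup: returns 0.0 (clean) through 1.0 (known bot).
KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # example feed entries

def reputation_score(ip):
    return 1.0 if ip in KNOWN_BAD else 0.0

def handle_request(client_ip):
    score = reputation_score(client_ip)
    if score > 0.9:
        return "block"       # known botnet member: drop at the network edge
    if score > 0.5:
        return "rate-limit"  # suspicious: throttle rather than block outright
    return "allow"
```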

Pagination

A pagination attack involves asking the web application to return an unreasonable number of results by inflating the PageSize query parameter to circumvent limits. This can return tens of thousands or even millions of records. Obviously this consumes significant database resources, especially when the application is servicing multiple requests at the same time. These attacks are typically launched against the search page. Another tactic for overwhelming applications is to use a web scraper to capture information from dynamic content areas such as store locators and product catalogs. If the scraper is not throttled it can overwhelm the application by scraping over and over again.

Mitigation for most pagination attacks must be built into the application. For example, regardless of the PageSize parameter, the application should limit the number of records returned. Likewise, you will want to limit the number of search requests the site will process simultaneously. You can also leverage a Content Delivery Network or web protection service to cache static information and limit search activity. Alternatively, embedding complicated JavaScript on the search pages can deter bots.
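A minimal sketch of that server-side clamp, assuming a ceiling of 100 records per page (the numbers are illustrative):

```python
MAX_PAGE_SIZE = 100  # hard ceiling, regardless of what the client requests

def clamp_page_size(raw_value, default=25):
    """Parse the client-supplied PageSize and clamp it to a safe range,
    so '?PageSize=1000000' still returns at most MAX_PAGE_SIZE records."""
    try:
        requested = int(raw_value)
    except (TypeError, ValueError):
        return default
    return max(1, min(requested, MAX_PAGE_SIZE))

print(clamp_page_size("1000000"))  # -> 100
print(clamp_page_size("abc"))      # -> 25
```

The key design point is that the ceiling lives on the server, so no PageSize value the client supplies can circumvent it.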

Gaming the Shopping Cart

Another frequently exploited legitimate function is the shopping cart. An attacker might put a few items in a cart and then abandon it for a few hours. At some point they can come back and refresh the cart, which causes the session to be maintained and the database to reload the cart. If the attacker has put tens of thousands of products into the cart, this consumes significant resources.

Shopping cart mitigations include limiting the number of items that can be added to a cart and periodically clearing out carts with too many items. You will also want to periodically terminate sufficiently old carts to reclaim session space and flush abandoned carts.
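A sketch of those cart controls, with illustrative limits (200 items per cart, a two-hour idle timeout):

```python
import time

MAX_ITEMS = 200              # cap on items per cart
IDLE_TIMEOUT = 2 * 60 * 60   # seconds before an abandoned cart is flushed

carts = {}  # session_id -> {"items": [...], "last_seen": timestamp}

def add_item(session_id, item):
    cart = carts.setdefault(session_id, {"items": [], "last_seen": time.time()})
    if len(cart["items"]) >= MAX_ITEMS:
        return False  # refuse oversized carts instead of reloading them
    cart["items"].append(item)
    cart["last_seen"] = time.time()
    return True

def flush_stale_carts():
    """Run periodically: drop carts idle past the timeout to reclaim
    session space instead of letting refreshes keep them alive forever."""
    cutoff = time.time() - IDLE_TIMEOUT
    for sid in [s for s, c in carts.items() if c["last_seen"] < cutoff]:
        del carts[sid]
```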

Combination Platter

Attackers are smart. They have figured out that they can combine many of these attacks with devastating results. For instance, an attacker could launch a volume-based network attack on a site, then start a GET flood on legitimate pages, limited to avoid looking like a network attack. They could follow up with a slow HTTP attack, so any traffic that does make it through consumes application resources. Finally they might attack the shopping cart or store locator, which looks like legitimate activity.

It is hard enough to defend against one of these attack vectors. Combining a number of different attacks dramatically reduces most organizations’ ability to defend themselves. Mitigation requires the defenses we have been discussing throughout this series to come together as a coordinated whole. First your scrubbing center needs to blunt the impact of the network attack. Then your on-premises application protection (either a WAF or purpose-built anti-AppDoS device) needs to accurately profile and limit the GET flood.

Then your web server needs to handle the slow HTTP attack by terminating connections that last too long. Finally, application logic must clamp abusive PageSize requests and close shopping carts left open too long. If any of these defenses fail the application might go down. It’s a tightrope act, all day every day, for applications targeted by DoS.

Our next post will wrap up with a process for building defenses into both application infrastructure and the applications themselves. Then we will propose a testing approach to ensure applications are sufficiently stress tested before they go live, to be sure they won’t fall down in the face of attack.
