Securosis

Research

Defending Against Application Denial of Service Attacks [New Series]

As we discussed last year in Defending Against Denial of Service Attacks, attackers increasingly leverage availability-impacting attacks both to cause downtime (which costs site owners money) and to mask other kinds of attacks. These availability-impacting attacks are better known as Denial of Service (DoS) attacks. Our research identified a number of adversaries who increasingly use DoS attacks, including:

  • Protection Racketeers: These criminals use DoS threats to demand ransom money. Attackers hold a site hostage by threatening to knock it down, and sometimes follow through. They get paid. They move on to the next target. The only thing missing is the GoodFellas theme music.
  • Hacktivists: DoS has become a favored approach of hacktivists seeking to make a statement and shine a spotlight on their cause, whatever it may be. Hacktivists care less about the target than about their cause. The target is usually collateral damage, though they are happy to kill two birds with one stone by attacking an organization that opposes their cause when they can. You cannot negotiate with these folks, and starting public discourse is one of their goals.
  • ‘CyberWar’: We don’t like the term – no one has been killed by browsing online (yet) – but we can expect to see online attacks as a precursor to warplanes, ships, bombing, and ground forces. By knocking out power grids, defense radar, major media, and other critical technology infrastructure, the impact of an attack can be magnified.
  • Exfiltrators: These folks use DoS to divert attention from the real attack: stealing data they can monetize. This could be intellectual property theft or a financial attack such as stealing credit cards. Either way, they figure that if they blow in your front door you will be too distracted to notice your TV scooting out through the garage. They are generally right.
  • Competitors: They say all’s fair in love and business.
Some folks take that a bit too far, and actively knock down competitor sites for an advantage. Maybe it’s during the holiday season. Maybe it happens after a competitor resists an acquisition or merger offer. It could be locking folks out from bidding on an auction. Your competition might screen scrape your online store to make sure they beat your pricing, causing a flood of traffic on a very regular and predictable basis. A competitor might try to ruin your hard-earned (and expensive) search rankings. Regardless of the reason, don’t assume an attacker is a nameless, faceless crime syndicate in a remote nation. It could be the dude down the street trying to get any advantage he can – legalities be damned.

Given the varied adversaries, it is essential to understand that two totally different types of attacks are commonly lumped under the generic ‘DoS’ label. The first involves the network: blasting a site with enough traffic (sometimes over 300 Gbps) to flood the pipes and overwhelm security and network devices, as well as the application infrastructure. This volumetric attack is basically the ‘cyber’ version of hitting something a billion times with a rock. Such brute force typically demands a scrubbing service and/or CDN (Content Delivery Network) to deal with the onslaught of traffic and keep sites available.

The second type of DoS attack targets weaknesses in applications. In Defending Against DoS we described an application attack as follows:

Application-based attacks are different – they target weaknesses in web application components to consume all the resources of a web, application, or database server to effectively disable it. These attacks can target either vulnerabilities or ‘features’ of an application stack to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions. These attacks require knowledge of the application and how to break or game it.
They can be far more efficient than just blasting traffic at a network, and in many cases they take advantage of legitimate features of the application, making defense all the harder.

We are pleased to launch the next chapter in our Denial of Service research, entitled “Defending Against Application Denial of Service Attacks” (yep, we are thinking way out of the box for titles). In this series we will dig far more deeply into application DoS attacks, providing both an overview of the tactics and possible mitigations. Here is a preliminary list of what we intend to cover:

  • Application Server Attacks: The first group of AppDoS attacks targets the server and infrastructure stack. We will profile attacks such as Slowloris, Slow HTTP POST, RUDY, Slow Read, and XerXes, discussing mitigations for each attack. We will also talk about brute force attacks on SSL (overwhelming servers with SSL handshake requests) and loading common pages – such as login, password reset, and store locators – millions of times.
  • Attacking the Stack: Targeting Databases and Programming Languages: In this post we will talk about the next layers in the application stack, including the database and the languages used to build the application. Regarding database DoS, we will highlight some of our recent research in Dealing with Database Denial of Service.
  • Abusing Application Logic: As we continue to climb the application stack, we will talk about how applications are targeted directly with GET floods and variants. By profiling applications and learning which pages are most resource intensive, attackers can focus their efforts on the most demanding pages. To mitigate these attacks, we will discuss the roles of rate controls and input validation, as well as WAF and CDN based approaches to filter out attack requests before the application has to deal with them.
  • Billions of Results Served: We will profile common attacks which overwhelm applications by overflowing memory with billions of results from either searches or shopping carts. We will touch on unfriendly scrapers, including search engines and other catalog aggregators that perform ‘legitimate’ searching but can be gamed by attackers. These attacks can only be remediated within the application, so we will discuss mechanisms for doing that (without alienating the developers).
  • Building DoS Protections in: We will wrap up the series by talking about how to implement a productive process for working with developers to build AppDoS protections in.
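The rate controls mentioned in the outline above can be sketched simply. Below is a minimal sliding-window rate limiter for resource-intensive pages, in Python; the window, limit, and function names are illustrative assumptions for this sketch, not recommendations or any particular product's API:

```python
import time
from collections import defaultdict

# Illustrative values only -- tune per page and per deployment.
WINDOW = 60.0   # seconds
LIMIT = 30      # max requests per client per window

_requests = defaultdict(list)  # client_ip -> list of request timestamps

def allow_request(client_ip, now=None):
    """Return True if this client is still under the rate limit."""
    now = time.time() if now is None else now
    # Drop timestamps that have aged out of the window.
    recent = [t for t in _requests[client_ip] if now - t < WINDOW]
    _requests[client_ip] = recent
    if len(recent) >= LIMIT:
        return False   # throttle: looks like a flood from this client
    recent.append(now)
    return True
```

In practice this kind of counter usually lives in a WAF, CDN, or shared cache rather than in-process, so state survives across application servers; the point here is only that the per-client bookkeeping is cheap relative to the pages it protects.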


Incite 9/18/2013: Got No Game

On Monday night I did a guest lecture for some students in Kennesaw State’s information security program. It is always a lot of fun to get in front of the “next generation” of practitioners (see what I did there?). I focused on innovation in endpoint protection and network security, discussing the research I have been doing into threat intelligence. The kids (a few looked as old as me) seemed to enjoy hearing about the latest and greatest in the security space. It also gave me a forum to talk about what it’s really like in the security trenches, which many students don’t learn until they are knee-deep in the muck. I didn’t shy away from the lack of obvious and demonstrable success, or how difficult it is to get general business folks to understand what’s involved in protecting information. The professor had a term that makes a lot of sense: security folks are basically digital janitors, cleaning up the mess the general population makes.

When I started talking about the coming perimeter re-architecture (driven by NGFW, et al), I mentioned how much time they will be able to save by dealing with a single policy, rather than having to manage the firewall, IPS, web filter, and malware detection boxes separately. I told them that would leave plenty of time to play Tetris. Yup, that garnered an awkward silence. I started spinning and asked if any of them knew what Tetris was. Of course they did, but a kind student gently informed me that no one has played that game in approximately 10 years. Okay, how about Gears of War? Not so much – evidently that trilogy is over. I was going to mention Angry Birds, but evidently Angry Birds was so 12 months ago. I quit before I lost all credibility.

There it was, stark as day – I have no game. Well, no video game anyway. Once I got over my initial embarrassment, I realized my lack of prowess is kind of intentional. I have a fairly addictive personality, so anything that can be borderline addictive (such as video games) is a problem for me.
It’s hard to pay my bills if I’m playing Strategic Conquest for 40 hours straight, which I did back in the early ’90s. I have found through the years that if I just don’t start, I don’t have to worry about when (or if) I will stop. I see the same tendencies in the Boy. He’s all into “Clash of Clans” right now. Part of me is happy to see him get his Braveheart on, attacking other villages Game of Thrones style. He seems pretty good at analyzing an adversary’s defenses and finding a way around them, leading his clan to victory. But it’s frustrating when I have to grab the Touch just to have a conversation with him. Although at least I know where he gets it from.

Some folks can practice moderation. You know, those annoying people who can take a little break for 15 minutes and play a few games, and then be disciplined enough to stop and get back on task. I’m not one of those people. When I start something, I start something. And that means the safest thing for me is often to not start. It’s all about learning my own idiosyncrasies and not putting myself in situations where I will get into trouble. So no video games for me!

–Mike

Photo credit: “when it’s no longer a game” originally uploaded by istolethetv

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Firewall Management Essentials
  • Optimizing Rules
  • Change Management
  • Introduction

Continuous Security Monitoring
  • Migrating to CSM
  • The Compliance Use Case
  • The Change Control Use Case
  • The Attack Use Case
  • Classification
  • Defining CSM
  • Why. Continuous. Security. Monitoring?
API Gateways
  • Implementation
  • Key Management
  • Developer Tools

Newly Published Papers
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption

Incite 4 U

Good guys always get DoS’ed: Django learned the hard way that if you give hackers an inch they’ll take a mile – and your site too. Last week they suffered a denial of service attack when users submitted multi-megabyte passwords – the “computational complexity” of generating strong hashes for just a few requests was enough to DoS the site. Awesome. The mitigation for this kind of attack is input validation. Sure, as a security expert I still get pissed when sites limit me to 8-character passwords, but it’s unreasonable to accept the Encyclopedia Britannica as valid field input. I am sorry to be smiling as I write this – I feel bad for the Django folks – but it’s funny how no good security intentions go unpunished. Thanks for patching, guys! – AL

DHS gets monitoring religion (to the tune of $6B): Not sure how I missed the award of a $6 billion DHS contract to implement continuous detection and mitigation technology. Evidently this is the new term for continuous monitoring, and it seems every beltway bandit, and every scanning and SIEM vendor, is involved. So basically nothing will get done – I guess that’s the American way. But this move, which started with NIST’s push to continuous monitoring and continues with DHS’s rebranded CDM initiative, is going in the right direction. Will they ever get to “real time” monitoring? Does it matter? They can’t actually respond in real time, so I don’t think so. If any of these gold-plated toilet seats provides the ability to see a vulnerability within a few days (rather than it showing up on a quarterly report, and being ignored), it’s an improvement. As they said in Contact, “baby steps.” – MR

FUD filled vacuum: When working with clients I am often still surprised at how often even mature organizations underestimate the eventual misinterpretations of their
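The Django fix boils down to validating input length before doing any expensive hashing. A hedged sketch in Python: the 4096-byte cap matches the limit Django shipped in its patch, but the PBKDF2 parameters, the fixed salt, and the function names here are illustrative stand-ins, not a real password store:

```python
import hashlib

# Reject oversized passwords before hashing -- the expensive KDF is the
# resource an attacker is trying to burn.
MAX_PASSWORD_LENGTH = 4096  # bytes; Django's patch used this value

def hash_password(candidate: str) -> str:
    """Stand-in for a real KDF call; salt and iteration count are illustrative."""
    return hashlib.pbkdf2_hmac(
        "sha256", candidate.encode("utf-8"), b"per-user-salt", 100_000
    ).hex()

def check_password(candidate: str, stored_hash: str) -> bool:
    if len(candidate.encode("utf-8")) > MAX_PASSWORD_LENGTH:
        return False  # refuse to hash megabyte inputs at all
    return hash_password(candidate) == stored_hash
```

The design point: the length check costs nanoseconds, so an attacker submitting multi-megabyte "passwords" never reaches the code path whose cost they wanted to exploit.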


Firewall Management Essentials: Managing Access Risk

We have discussed two of the three legs of comprehensive firewall management: a change management process and optimizing the rules. Now let’s work through managing risk using the firewall. Obviously we need to define risk, because depending on your industry and philosophy, risk can mean many different things. For firewall management we are talking about the risk of unauthorized parties accessing sensitive resources. Obviously if a device with critical data is inaccessible to internal and/or external attackers, the risk it presents is lower.

This “access risk management” function involves understanding, first and foremost, the network’s topology and security controls. The ability to view attack paths provides a visual depiction of how an attacker could gain access to a device. With this information you can see which devices need remediation and/or network workarounds, and prioritize fixes. Another benefit of visualizing attack paths is understanding when changes to network or security devices unintentionally expose additional attack surface.

So what does this have to do with your firewall? That’s a logical question, but a key firewall function is access control. You configure the firewall and its rule set to ensure that only authorized ports, protocols, applications, users, etc. have access to critical devices and applications within your network. A misconfigured firewall can have significant and severe consequences, as discussed in the last post.

For example, years ago when supporting a set of email security devices, we got a call about an avalanche of spam hitting the mailboxes of key employees. The customer was not pleased, but the deployed email security gateway appeared to be working perfectly. Initially perplexed, one of our engineers checked the backup email server, and discovered it was open to Internet traffic due to a faulty firewall rule.
So attackers were able to use the backup server as a mail relay, and blast all the mailboxes in the enterprise. With some knowledge of the network topology and the paths between external networks and internal devices, this issue could have been identified and remediated before any employees were troubled.

Key Access Risk Management Features

When examining the network and keeping track of attack paths, look for a few key features:

  • Topology monitoring: Topology can be determined actively, passively, or both. For active mapping you will want your firewall management tool to pull configurations from firewalls and other access control devices. You also need to account for routing tables, network interfaces, and address translation rules. Interoperating with passive monitoring tools (network behavioral analysis, etc.) can provide more continuous monitoring. You need the ability to determine whether and how any specific device can be accessed, and from where – both internal and external.
  • Analysis horsepower: Accounting for all the possible paths through a network requires an n*(n-1) analysis, and n gets rather large for an enterprise network. The ability to re-analyze millions of paths on every topology change is critical for providing an accurate view.
  • What if?: You will want to assess each possible change before it is made, to understand its impact on the network and attack surface. This enables the organization to detect additional risks posed by a change before committing it. In the example above, if that customer had a tool to help them understand that a firewall rule change would make their backup email server a sitting duck for attackers, they would have reconsidered.
  • Alternative rules: It is not always possible to remediate a specific device, due to operational issues. So to control risk you would like a firewall management tool to suggest appropriate rule changes or alternate network routes to isolate the vulnerable device and protect the network.
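At its core, the attack path analysis described above is reachability over a graph of permitted traffic. A toy sketch in Python: the node names and rules are hypothetical, loosely modeled on the mail-relay incident above, and a real tool would derive the graph from firewall configurations, routing tables, and NAT rules rather than a hand-written dictionary:

```python
from collections import deque

# Directed edges mean "traffic permitted from A to B" (hypothetical topology).
PERMITTED = {
    "internet":     ["dmz_fw"],
    "dmz_fw":       ["mail_gateway", "backup_mx"],  # faulty rule exposes backup_mx
    "mail_gateway": ["internal_mail"],
    "backup_mx":    ["internal_mail"],
}

def attack_paths(src, dst, graph):
    """Enumerate all simple paths from src to dst via breadth-first search."""
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # simple paths only: no revisiting nodes
                queue.append(path + [nxt])
    return paths
```

Running `attack_paths("internet", "internal_mail", PERMITTED)` surfaces the unintended route through `backup_mx` alongside the sanctioned one through `mail_gateway`; re-running against a copy of the graph with the faulty edge removed is exactly the "what if?" analysis, done before the change is committed. The enumeration is also why the analysis horsepower point matters: path counts grow combinatorially with n.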
At this point it should be clear that all these firewall management functions depend on each other. Optimizing rules is part of the change management process, and access risk management comes into play for every change – and vice versa. So although we have discussed these functions as distinct requirements of firewall management, in reality you need them all to work together for operational excellence.

In this series’ last post we will focus on getting a quick win with firewall management technology. We will discuss deployment architectures and integration with enterprise systems, and work through a deployment scenario to make many of these concepts a bit more tangible.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

“Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.”

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.