Defending Against Application Denial of Service: Building Protections in

As we have discussed throughout this series, many types of attacks can impact the availability of your applications. To reiterate a number of points we made in Defending Against Denial of Service Attacks, your defenses need to be coordinated at multiple levels: at the network layer, in front of your application, within the application stack, and finally within the application itself. We understand this is a significant undertaking – security folks have been trying for years to get developers on board with building security into applications, with little effect to date. That said, you shouldn't stop pushing, especially given the relative ease of knocking over an application that lacks proper internal defenses. We have found the best way to get everyone on board is by implementing a structured web application security program that looks at each application in its entirety, and can be extended to add protections against denial of service attacks.

Web Application Security Process

Revisiting the process described in Building a Web Application Security Program, web applications need to be protected across the entire lifecycle:

  • Secure Development: You start the process by building security into the software development lifecycle (SDLC). This includes training for the people who deliver web applications, and improved processes to guide their activity. Security awareness training for developers is managed through education and supportive process modifications, as a precursor to making security a functional application requirement. This phase of the process leverages tools to automate portions of the effort: static analysis to help engineering identify vulnerable code, and dynamic analysis to detect anomalous application behavior.
  • Secure Deployment: At the point where an application is code complete, and ready for more rigorous testing and validation, it is time to confirm that it doesn't suffer from serious known security flaws (vulnerabilities) and is configured so it is not subject to any known compromises. This is where you use vulnerability assessments and penetration testing – along with solid operational approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking.
  • Secure Operations: The last phase of the process moves from preventative tools and processes to detecting and reacting to events from production applications. Here you deploy technologies and/or services to front-end the application, including web application firewalls and web protection services. Some technologies can protect applications from unwanted uses; others only monitor requests for inappropriate activity.

To look at the specific aspects of what's required to deal with AppDoS attacks, let's walk through each step in the process.

Secure Development

In this phase we look to build the protections we have been discussing into the application(s). This involves making sure the application stack in use is insulated against HashDoS attacks, and that no database calls present an opportunity for misuse via excessive queries. The most impactful protection is input validation on form fields, to mitigate buffer overflow, code injection, and other attacks that can break application logic. Understand that heavy input validation impacts application performance at scale, especially when under attack with a GET/POST flood or a similar attack. You should prioritize validating the fields that require the least computational resources, and check them as early as possible.
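To illustrate that ordering, here is a minimal sketch in Python – the field names, limits, and db_lookup callable are hypothetical, not part of any particular framework – showing cheap structural checks running before anything that touches the database, so most flood traffic is rejected before it costs you much:

    import re

    # Cheapest checks first: reject obviously bad requests before touching
    # the database or doing any expensive computation.
    MAX_FIELD_LEN = 256
    USERNAME_RE = re.compile(r'^[A-Za-z0-9_.-]{3,32}$')

    def validate_request(form, db_lookup):
        """Return an error string on the first failed check, or None if valid.

        `form` is a dict of submitted fields; `db_lookup` is a callable that
        performs the (expensive) database-backed validation last.
        """
        # 1. Cheap structural checks (no I/O, constant time per field).
        for name, value in form.items():
            if len(value) > MAX_FIELD_LEN:
                return f"field '{name}' too long"
        # 2. Cheap pattern checks on high-risk fields.
        if not USERNAME_RE.match(form.get('username', '')):
            return "invalid username"
        # 3. Only now pay for the expensive check (database call, etc.).
        if not db_lookup(form['username']):
            return "unknown user"
        return None

The point is the ordering rather than the specific checks: each bogus request should fail as cheaply as possible.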
Extensive validation may exacerbate a flood attack and take down the application sooner, so you need to balance protection against performance when stress-testing the application prior to deployment. Also ensure your application security testing (static and dynamic) checks the application's robustness against denial of service attacks, including shopping cart and pagination attacks.

Secure Deployment

When deploying the application, make sure the stack has protections against the common web server DoS attacks, including Slowloris, Slow HTTP, and Apache Killer. You can check for these vulnerabilities using an application scanner or during a penetration test. Keep in mind that you will likely need some tuning to find the optimal timeout for session termination.

Secure Operations

Once the application goes into production the fun begins – you will be working with live ammunition. You can deploy an anti-DoS appliance or service, or a WAF (either product or service), to rate-limit slow HTTP attacks. This is also where a CDN or web protection service comes into play, to absorb high-bandwidth attacks and intelligently cache static content to blunt the impact of random query string attacks. Finally, during the operational phase you will want to monitor the performance and responsiveness of the application, and track inbound traffic, to detect emerging DoS attacks as early as possible. You developed profiles of normal application behavior earlier – now you can use them to identify attack traffic before you have an outage.

Finding the Right Mix

As we have described, you have a number of options to defend your applications against denial of service attacks, so how can you determine the right mix of cloud-based, server-based, and application-based protections? Think about each in terms of the effort and agility required to deploy at that level. Building protections into applications doesn't happen overnight – it is likely to require development process changes and a development cycle or three to implement proper security controls against this threat. The application may also require significant re-architecture – especially if the database-driven aspects of the application haven't been optimized. Keep in mind that new attacks and newly discovered vulnerabilities require you to revisit application security on an ongoing basis. As with other security disciplines, you never really finish securing your application. Somewhat less disruptive is hardening the application stack, including the web server, APIs, and database. This tends to be an operational responsibility, so you will need to collaborate with the ops team to ensure the right protections, patches, and configurations are deployed on the servers. Finally, the quickest path to protection is to front-end your application with an anti-DoS device and/or a cloud-based CDN/website protection service to deal with flood attacks and simple application attacks. As we have mentioned, these defenses are not a panacea – you still need to harden the stack and protect the application as well. But


Friday Summary: November 15, 2013

There is lots I want to talk about this week, so I decided to resort to some three-dot blogging. A few years ago at the security bloggers meet-up, Jeremiah Grossman, Rich Mogull, and Robert Hansen were talking about browser security. After I rudely butted into the conversation they asked me whether "the market" would be interested in a secure browser – one that was not compromised to allow marketing and advertising concerns to trump security. I felt no one would pay for it, but that the security community and financial services types would certainly be interested in such a browser. So I was totally jazzed when WhiteHat finally announced Aviator a couple weeks back. And work being what it has been, I finally got a chance to download it today and use it for a few hours. So far I miss nothing from Firefox, Safari, or Chrome. It's fast, navigation is straightforward, it easily imported all my Firefox settings, and preferences are simple – somewhat the opposite of Chrome, IMO. And I like being able to switch users as I switch between different ISPs/locations (i.e., tunnels to different cloud providers). I am not giving up my Fluid browsers dedicated to specific sites, but Fluid has been breaking for unknown reasons on some sites. And the Aviator and Little Snitch combination is pretty powerful for filtering and blocking outbound traffic. I recommend WhiteHat's post on the key differences between Aviator and Chrome. If you are looking for a browser that does not hemorrhage personal information to any and every website, download a copy of Aviator and try it out.

* * *

I also want to comment on the MongoHQ breach a couple weeks back. Typically, it was discovered by one of their tenant clients: Buffer. Now that some of the hype has died away, a couple facets of the breach should be clarified. First, MongoHQ is a Platform-as-a-Service (PaaS) provider, running on top of Amazon AWS, and specializing in in-memory Mongo databases. It is important to understand that this was a breach of a small cloud service provider, rather than a database hack, as the press has incorrectly portrayed it. Second, many people assume that access tokens are inherently secure. They are not. Certain types of identity tokens, if stolen, can be used to impersonate you. Third, the real root cause was a customer support application that provided MongoHQ personnel "an 'impersonate' feature that enables MongoHQ employees to access our primary web UI as if they were a logged in customer". Yeah, that is as bad as it sounds, and not a feature you want accessible from just any external location. While the CEO stated "If access tokens were encrypted (which they are now) then this would have been avoided", that is just one way to prevent this issue. Amazon provides pretty good security recommendations, and this sort of attack is not possible if management applications are locked down with good security zone settings and administrative access is restricted to AWS certificates. Again, this is not a "big data hack" – it is a cloud service provider that was sloppy with its deployment.

* * *

It has been a strange year – I am normally "Totally Transparent" about what I am working on, but this year has involved several projects I can't talk about. Now that things have cleared up, I am moving back to a normal research schedule, and I have a heck of a lot to talk about.
I expect that during the next couple weeks I will begin work on:

  • Risk-based Authentication: Simple questions like "who are you" and "what can you do" no longer have simple binary answers in this age of mobile computing. The answers are subjective and tinged with shades of gray. Businesses still need to make access control decisions, but simple access control lists are no longer adequate – they need to consider risk and behavior when making these decisions. Gunnar and I will explore this trend, and talk about the different techniques in use and the value they can realistically provide.
  • Securing Big Data 2.0: The market has changed significantly over the past 14 months – since I last wrote about how to secure big data clusters – so I will refresh that research, add sections on identity management, and take a closer look at application layer security, where a number of the known threats and issues persist.
  • Two-factor Authentication: It is often discussed as the ultimate in security: a second authentication factor to make doubly sure you are who you claim to be. Many vendors are talking about it, both for and against, because of the hype. Our executive summary will look at usage, the threats it can help address, and integration into existing systems.
  • Understanding Mobile Identity Management: This will be a big one – a full-on research project in mobile identity management. We will publish a full outline in the coming weeks.
  • Security Analytics with Big Data: I will release a series of targeted summaries of how big data works for security analytics, and how to start a security analytics program.

If you have questions on any of these, or if there are other topics you think we should be covering, shoot us an email. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on Trustwave's acquisition of Application Security.

Favorite Securosis Posts

  • Mike Rothman: How to Detect Cloudwashing by Your Vendors. Love how Adrian and Gunnar put a pin in the marketing hyperbole around cloud. And brace yourself – we will see a lot more over the next year.
  • Adrian Lane: The CISO's Guide to Cloud: How Cloud is Different for Security. This is good old-fashioned Securosis research. Focused. A bit ahead of the curve. Pragmatic. Enjoying this series.

Other Securosis Posts

  • Incite 11/13/2013: Bully.
  • New Series: What CISOs Need to Know about Cloud Computing.
  • How to Edit Our Research on GitHub.
  • Trustwave Acquires Application Security Inc.
  • Security Awareness Training Evolution [New Paper].
  • Blowing Your


Incite 11/13/2013: Bully

When you really see the underbelly of something, it is rarely pretty. The NFL is no different. Grown men are paid millions of dollars a year to display unbridled aggression, toughness, and competitiveness. That sounds like a pretty Darwinian environment, where the strong prey on the weak. And it is, given what we have seen over the last few weeks as behavior in the Miami Dolphins locker room comes to light. It is counterintuitive to think of a 320-pound offensive lineman being bullied by anyone. You hear about fights on the field and in the locker room as these alpha males all look to establish position within the pride. But how are the bullies in the Dolphins locker room any different from the petty mean girls and boys you had to deal with in high school? They aren't. If you take a step back, a bully is always compensating for some kind of self-perceived inadequacy that forces him or her to act out. Small people (even if they weigh 300+ pounds) make themselves feel bigger by making others feel smaller. So the first question is whether the behavior is acceptable. I think everyone can agree racial epithets have no place in today's society. But what about the other tactics, such as mind games and intentionally excluding a fellow player from activities? I'm not sure that kind of hazing would normally be a huge deal, but combined with an environment of racial insensitivity, it probably crosses the line as well. What's more surprising is that no one stepped up and said that behavior was no bueno. Bullies prey on folks because the folks who aren't directly targeted don't stand up and make clear what is acceptable and what isn't. But that has happened since the beginning of time. No one wants to stand up for what's right, so folks just watch catastrophic events happen. Maybe this will be a catalyst to change the culture. There is nothing the NFL hates more than bad publicity, so things will change. Every other team in the NFL made statements about how their work environments are not like that. No one wants to be singled out as a bully or a bigot – not when they have potential endorsement deals riding on their public image. Like most other changes, some old timers will resist. Others will adapt because they need to. And with the real-time nature of today's media, and rampant leaks within every organization, it is hard to see this kind of behavior happening again. I guess I can't understand why players who call themselves brothers would treat each other so badly. Of course you beat up your little brother(s) when you are 10. But if you are still treating your siblings shabbily as an adult, you need some help. Maybe I am getting a bit judgmental, especially considering that I have never worked in an NFL locker room, so I can't even pretend to understand the mindset. But I do know a bit about dealing with people. One of the key tenets of a functional and successful organization is to manage people as individuals. A guy may be 320 pounds, an athletic freak, and capable of serious violence when the ball is snapped, but that doesn't mean he wants to get called names or fight a teammate to prove his worth. I learned the importance of managing people individually early in my career, mostly because it worked. This management philosophy is masterfully explained in First, Break All the Rules, which shows how much corporate performance depends on happy employees who do what they love every day with people they care about. Clearly someone in Miami didn't get the memo.
And you have to wonder what kind of player Jonathan Martin could be if he worked in a place where he didn't feel singled out and persecuted, so he could focus on the task at hand: his blocking assignment for each play – not whether he was going to get jumped in the parking lot. Maybe he'll even get a chance to find out, but it's hard to see that happening in Miami. –Mike

Photo credit: "Bully Advance Screening Hosted by First Lady Katie O'Malley" originally uploaded by Maryland GovPics

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • What CISOs Need to Know about Cloud Computing: Introduction
  • Defending Against Application Denial of Service: Attacking the Application Stack; Attacking the Application Server; Introduction

Newly Published Papers

  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

What is it that you do? I have to admit that I really did not understand analysts or the entire analyst industry prior to joining Securosis. Analysts were the people on our briefing calendar who were more knowledgeable – and far more arrogant – than the press. But they did not seem to have a clear role, nor was their technical prowess close to what they thought it was. I was assured by our marketing team that they were important, but I could not see how. Now I do, but the explanation needs to be repeated every so often. The aneelism blog has a nice primer on technology analyst 101 for startups. Long story short, some analysts speak with customers as independent advisors, which means two things for small security vendors: we are told things customers will never tell you directly, and we see a breadth of industry issues and trends you won't, because you are focused on your own stuff and try to wedge


The CISO’s Guide to the Cloud: How the Cloud Is Different for Security

This is part two of a series. You can read part one here or track the project on GitHub.

How the Cloud Is Different for Security

In the early days of cloud computing, even some very well-respected security professionals claimed it was little more than a different kind of outsourcing, or equivalent to the multitenancy of a mainframe. But the differences run far deeper, and we will show how they require different cloud security controls. We know how to manage the risks of outsourcing or multi-user environments; cloud computing security builds on this foundation and adds new twists. These differences boil down to abstraction and automation, which separate cloud computing from basic virtualization and other well-understood technologies.

Abstraction

Abstraction is the extensive use of multiple virtualization technologies to separate compute, network, storage, information, and application resources from the underlying physical infrastructure. In cloud computing we use this to convert physical infrastructure into a resource pool that is sliced, diced, provisioned, deprovisioned, and configured on demand, using the automation we will talk about next. It really is a bit like the Matrix. Individual servers run little more than a hypervisor with connectivity software to link them into the cloud, and the rest is managed by the cloud controller. Virtual networks overlay the physical network, with dynamic configuration of routing at all levels. Storage hardware is similarly pooled, virtualized, and then managed by the cloud control layers. The entire physical infrastructure, less some dedicated management components, becomes a collection of resource pools. Servers, applications, and everything else run on top of the virtualized environment. Abstraction impacts security significantly in four ways:

  • Resource pools are managed using standard, web-based (REST) Application Programming Interfaces (APIs). The infrastructure is managed with network-enabled software at a fundamental level.
  • Security can lose visibility into the infrastructure. On the network we can't rely on physical routing for traffic inspection or management, and we don't necessarily know which hard drives hold which data.
  • Everything is virtualized and portable. Entire servers can migrate to new physical systems with a few API calls or a click on a web page.
  • We gain greater pervasive visibility into the infrastructure configuration itself. If the cloud controller doesn't know about a server, that server cannot function. We can map the complete environment with those API calls.

We have focused on Infrastructure as a Service, but the same issues apply to Platform and Software as a Service, except they often offer even less visibility.

Automation

Virtualization has existed for a long time. The real power cloud computing adds is automation. In basic virtualization and virtual data centers we still rely on administrators to manually provision and manage our virtual machines, networks, and storage. Cloud computing turns these tasks over to the cloud controller, which coordinates all these pieces (and more) using orchestration. Users ask for resources via web page or API call – such as a new server with 1 TB of storage on a particular subnet – and the cloud determines how best to provision it from the resource pool; then it handles installation, configuration, and coordination of all the networking, storage, and compute resources needed to pull everything together into a functional and accessible server. No human administrator required.
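To make the API point concrete, here is a minimal sketch in Python using the boto3 AWS library, purely as an illustration – the image, subnet, and region identifiers are placeholders, and any cloud controller's API would do. It provisions a server with a single call and then enumerates every instance the controller knows about, which is exactly the visibility gain described above:

    import boto3  # assumes AWS credentials are configured in the environment

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Provision a server entirely through the API -- no administrator involved.
    ec2.run_instances(
        ImageId='ami-12345678',      # hypothetical image ID
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        SubnetId='subnet-12345678',  # hypothetical subnet
    )

    # Because the cloud controller must know about every resource it manages,
    # the same API lets us map the complete environment.
    for reservation in ec2.describe_instances()['Reservations']:
        for instance in reservation['Instances']:
            print(instance['InstanceId'], instance['State']['Name'],
                  instance.get('PrivateIpAddress'))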
Or the cloud can monitor demand on a cluster and add and remove fully load-balanced and configured systems based on rules, such as average system utilization over a specified threshold. Need more resources? Add virtual servers. Systems underutilized? Drop them back into the resource pool. In public cloud computing this keeps costs down, because you expand and contract based on what you need. In private clouds it frees resources for other projects and requirements, but you still need a shared resource pool to handle overall demand. Either way, you are no longer stuck with under-utilized physical boxes in one corner of your data center and inadequate capacity in another. The same applies to platforms (including databases and application servers) and software: you can expand and contract database storage, application server capacity, and storage as needed – without additional capital investment.

In the real world it isn't always so clean. Heavy use of public cloud may exceed the costs of owning your own infrastructure. Managing your own private cloud is no small task, and is rife with pitfalls. And abstraction does reduce performance at certain levels, at least for now. But with the right planning, and as the technology continues to evolve, the business advantages are undeniable.

The NIST model of cloud computing is the best framework for understanding the cloud. It consists of five Essential Characteristics, three Service Models (IaaS, PaaS, and SaaS), and four Deployment Models (public, private, hybrid, and community). Our characteristic of abstraction generally maps to resource pooling and broad network access, while automation maps to on-demand self-service, measured service, and rapid elasticity. We aren't proposing a different model, just overlaying the NIST model to better describe things in terms of security.

Thanks to this automation and orchestration of resource pools, clouds are incredibly elastic, dynamic, agile, and resilient. But even more transformative is the capability for applications to manage their own infrastructure, because everything is now programmable. The lines between development and operations blur, offering incredible levels of agility and resilience, which is one of the concepts underpinning the DevOps movement. But of course, done improperly, it can be disastrous.

Cloud, DevOps, and Security in Practice: Examples

Here are a few examples that highlight the impact of abstraction and automation on security. We will address the security issues later in this paper.

  • Autoscaling: As mentioned above, many IaaS providers support autoscaling. A monitoring tool watches server load and other variables. When the average load of the virtual machines exceeds a configurable threshold, new instances are launched from the same base image, with advanced initialization scripts. These scripts can automatically configure all aspects of the server, pulling metadata from the cloud or a configuration management server. Advanced tools can configure entire application stacks. But these servers may only exist for a short period, perhaps never during a vulnerability


Defending Against Application Denial of Service: Abusing Application Logic

We looked at application denial of service in terms of attacking the application server and the application stack, so now let's turn our attention to attacking the application itself. Clearly every application contains weaknesses that can be exploited, especially when the goal is simply to knock the application offline rather than something more complicated, such as stealing credentials or gaining access to the data. That lower bar of taking the application offline means more places to attack. If we bust out the kill chain to illuminate attack progression, let's first focus on the beginning: reconnaissance. That's where the process starts for application denial of service attacks as well. The attackers need to find the weak points in the application, so they assess it to figure out which pages consume the most resources, what kind of field-level validation the forms use, and which attributes the query strings support. For instance, if rendering a page requires a ton of field-level validation or multiple database calls to multiple sites, that page is a good target to blast. Serving dynamic content requires a bunch of database calls to populate the page, and each call consumes resources. The point is to consume as many resources as possible, to impair the application's ability to serve legitimate traffic.

Flooding the Application

In our Defending Against Denial of Service Attacks paper, we talked about how network-based attacks flood the pipes. Targeting resource-intensive pages with GET or POST requests (or both) is the equivalent application flooding attack, exhausting the server's session and memory capacity. Attackers flood a number of different parts of web applications, including:

  • Top-level index page: This one is straightforward, and usually has the fewest protections because it is open to everyone. When blasted by tens of thousands of clients simultaneously, the server can become overwhelmed.
  • Query string "proxy busting": Attackers can send requests crafted to bypass any proxy or cache, forcing the application to generate and send new information and eliminating the benefit of a CDN or other cache in front of the application. The impact can be particularly acute when large PDFs or other files are requested repeatedly, consuming excessive bandwidth and server resources.
  • Random session cookies/tokens: By establishing thousands of sessions with the application, attackers can overload session tables on the server and impact its ability to serve legitimate traffic.

Flood attacks can be detected rather easily (unlike the slow attacks described in Attacking the Server), providing an opportunity to rate-limit the attack while allowing legitimate traffic through. Of course this approach puts a premium on accuracy: false positives slow down or discard legitimate traffic, while false negatives allow attacks to consume server resources. To accurately detect application floods you need a detailed baseline of legitimate traffic, tracking details such as URL distribution, request frequency, maximum requests per device, and outbound traffic rates. With this data you can develop a profile of legitimate application behavior, then compare incoming traffic (usually on a WAF or application DoS device) against the profile to identify bad traffic, and limit or block it. Another tactic to mitigate application floods is input validation on all form fields, to ensure requests neither overflow application buffers nor misuse application resources.
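As a simple illustration of applying a baseline, the sketch below (Python; the per-client ceiling is a hypothetical number you would derive from your own traffic profile, and real deployments do this on a WAF or anti-DoS device rather than in the application) tracks requests per client over a sliding window and flags anything above the baseline for limiting or blocking:

    import time
    from collections import defaultdict, deque

    # Hypothetical per-client ceiling derived from the legitimate-traffic baseline.
    MAX_REQUESTS_PER_MINUTE = 120

    _history = defaultdict(deque)  # client id -> timestamps of recent requests

    def allow_request(client_id, now=None):
        """Return False when a client exceeds the baseline request rate."""
        now = time.time() if now is None else now
        window = _history[client_id]
        # Drop timestamps that have aged out of the 60-second window.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_MINUTE:
            return False        # candidate for rate limiting or blocking
        window.append(now)
        return True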
If you are using a CDN to front-end your application, make sure it can handle random query string attacks and that you actually benefit from the caching service. Given the ability of some attackers to bypass a CDN (assuming you have one), you will also want to ensure your input validation ignores random query strings. You can also leverage IP reputation services to identify bot traffic and limit or block it. That requires coordination between the application and the network-based defenses, but it is effective for detecting and limiting floods.

Pagination

A pagination attack involves asking the web application to return an unreasonable number of results by expanding the PageSize query parameter to circumvent limits. This can return tens of thousands or even millions of records, which obviously consumes significant database resources – especially when the application is servicing multiple such requests at the same time. These attacks are typically launched against the search page. Another tactic for overwhelming applications is to use a web scraper to capture information from dynamic content areas such as store locators and product catalogs. If the scraper is not throttled it can overwhelm the application by scraping over and over again. Mitigation for most pagination attacks must be built into the application. For example, regardless of the PageSize parameter, the application should limit the number of records returned. Likewise, you will want to limit the number of search requests the site will process simultaneously. You can also leverage a Content Delivery Network or web protection service to cache static information and limit search activity. Alternatively, embedding complicated JavaScript on the search pages can deter bots.

Gaming the Shopping Cart

Another frequently exploited legitimate function is the shopping cart. An attacker might put a few items in a cart and then abandon it for a few hours. At some point they come back and refresh the cart, which causes the session to be maintained and the database to reload the cart. If the attacker has put tens of thousands of products into the cart, this consumes significant resources. Shopping cart mitigations include limiting the number of items that can be added to a cart and periodically clearing out carts with too many items. You will also want to periodically terminate sufficiently old carts to reclaim session space and flush abandoned carts.

Combination Platter

Attackers are smart. They have figured out that they can combine many of these attacks with devastating results. For instance, an attacker could launch a volume-based network attack on a site, then start a GET flood on legitimate pages, limited to avoid looking like a network attack, then follow up with a slow HTTP attack so any traffic that does make it through consumes application resources. Finally they might attack the shopping cart or store locator, which looks like legitimate activity.
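Several of the application-side mitigations above boil down to hard limits enforced in code. Here is a minimal sketch (Python, with hypothetical ceilings you would tune to your own application) of the PageSize clamp and the cart size limit described earlier:

    MAX_PAGE_SIZE = 100       # hypothetical application limit
    MAX_CART_ITEMS = 50       # hypothetical application limit

    def safe_page_size(requested):
        """Clamp a client-supplied PageSize so no single request can ask the
        database for tens of thousands of records."""
        try:
            size = int(requested)
        except (TypeError, ValueError):
            return MAX_PAGE_SIZE
        return max(1, min(size, MAX_PAGE_SIZE))

    def add_to_cart(cart, item_id):
        """Refuse additions once a cart exceeds a sane item count."""
        if len(cart) >= MAX_CART_ITEMS:
            raise ValueError("cart item limit reached")
        cart.append(item_id)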


How to Detect Cloudwashing by Your Vendors

"There is nothing more deceptive than an obvious fact" – Sherlock Holmes

It's cloud. It's cloud-ready. It's cloud insert-name-here. As analysts we have been running into a lot of vendors labeling traditional products as 'cloud'. Two years ago we expected the practice to die out once customers understood cloud services. We were wrong – vendors are still doing it rather than actually building the technology. Call it cloudwashing, cloudification, or just plain BS. As an enterprise buyer, how can you tell whether the system you are thinking of purchasing is a cloud application or not? It should be easy – just look at the products branded 'cloud', right? But dig deeper and you see it's not so simple. Sherlock Holmes made a science of detection, and being an enterprise buyer today can feel like being a detective in a complex investigation. Vendors have anticipated your questions and have answers ready. What to do? Start by drilling down: what is behind the labels? Is it cloud, or just good old enterprise bloatware? Or is it MSO with a thin veneer of cloud? We pause here to state that there is nothing inherently wrong with enterprise software or MSO. There is also no reason cloud is necessarily better for you. Our goal here is to orient your thinking beyond labels and give you some tips so you can be an educated consumer. We have seen a grab bag of cloudwashes. We offer the following questions to help you figure out what's real:

  • Does it run at a third-party provider (not on-premise or 'private' cloud)?
  • Is the service self-service – i.e., can you use it without other user interactions and without downloading anything, rather than having it installed 'on the edge' of your IT network?
  • Is the service metered? If you stopped using it tomorrow would the bills stop?
  • Can you buy it with a credit card?
  • Can your brother-in-law sign up for the same service?
  • Do you need to buy a software license?
  • Does it have an API?
  • Does it autoscale?
  • Did the vendor start from scratch or rewrite its product?
  • Is the product standalone (i.e., not a proxy-type interface on top of an existing stack)?
  • Can you deploy it without assistance, or does it require professional services to design, deploy, and operate?

The more of these questions that get a 'No' answer, the more likely your vendor is marketing 'cloud' instead of selling cloud services. Why does that matter? Because real cloud environments offer specific advantages in elasticity, flexibility, scalability, self-service, pay-as-you-go, and various other areas, which are not present in many non-cloud solutions. What cloudwashing exercises have you seen? Please share in the comments below.


New Series: What CISOs Need to Know about Cloud Computing

This is the first post in a new series detailing the key differences between cloud computing and traditional security. I feel pretty strongly that, although many people are talking about the cloud, nobody has yet done a good job of explaining why and how security needs to adapt at a fundamental level. It is more than outsourcing, more than multitenancy, and definitely more than simple virtualization. This is my best stab at it, and I hope you like it. The entire paper, as I write it, is also posted and updated at GitHub for those of you who want to track changes, submit feedback, or even submit edits. Special thanks to CloudPassage for agreeing to license the paper (as always, we are following our Totally Transparent Research process, so they do not have any more influence than you do, and can back out of licensing the paper if, in the end, they don't like it). And here we go…

What CISOs Need to Know about Cloud Computing

Introduction

One of a CISO's most difficult challenges is sorting the valuable wheat from the overhyped chaff, and then figuring out what it means in terms of risk to your organization. There is no shortage of technology and threat trends, and CISOs need to determine not only which ones matter, but how they impact security. The rise of cloud computing is one of the truly transformative evolutions that fundamentally change core security practices. Far more than an outsourcing model, cloud computing alters the very fabric of our infrastructure, technology consumption, and delivery models. In the long run, cloud and mobile computing are likely to mark a larger shift than the Internet itself. This series details the critical differences between cloud computing and traditional infrastructure for security professionals, as well as where to focus security efforts. We will show that the cloud doesn't necessarily increase risks – it shifts them, and provides new opportunities for significant security improvement.

Different, But Not the Way You Think

Cloud computing is a radically different technology model – not just the latest flavor of outsourcing. It uses a combination of abstraction and automation to achieve previously impossible levels of efficiency and elasticity. But in the end cloud computing still relies on traditional infrastructure as its foundation. It doesn't eliminate physical servers, networks, or storage, but it allows organizations to use them in different ways, with substantial benefits. Sometimes this means building your own cloud in your own datacenter; other times it means renting infrastructure, platforms, and applications from public providers over the Internet. Most organizations will use a combination of both. Public cloud services eliminate most capital expenses and shift them to on-demand operational costs. Private clouds allow more efficient use of capital, tend to reduce operational costs, and increase the responsiveness of technology to internal needs. Between the business benefits and current adoption rates, we expect cloud computing to become the dominant technology model over the next ten to fifteen years. As we make this transition, it is the technologies that create clouds – rather than the increased use of shared infrastructure – that really matter for security. Multitenancy is more an emergent property of cloud computing than a defining characteristic.

Security Is Evolving for the Cloud

As you will see, cloud computing isn't more or less secure than traditional infrastructure – it is different.
Some risks are greater, some are new, some are reduced, and some are eliminated. The primary goal of this series is to provide an overview of where these changes occur, what you need to do about them, and when. Cloud security focuses on managing the different risks associated with abstraction and automation. Multitenancy tends to be more a compliance issue than a security problem, and we will cover both aspects. Infrastructure and applications are opened up to network-based management via Internet APIs. Everything from core network routing to creating and destroying entire application stacks is now possible using command lines and web interfaces. The early security focus has been on managing the risks introduced by highly dynamic virtualized environments – such as autoscaled servers – and broad network access, including a major focus on compartmentalizing cloud management. Over time the focus is gradually shifting to hardening the cloud infrastructure, platforms, and applications, and then adapting security to use the cloud to improve security itself. For example, the need for data encryption increases over time as you migrate more sensitive data into the cloud. But the complexities of internal network compartmentalization and server patching are dramatically reduced as you leverage cloud infrastructure. We expect to eventually see more security teams hook into the cloud fabric itself – bridging the existing gaps between security tools, infrastructure, and applications with Software Defined Security. The same APIs and programming techniques that power cloud computing can provide highly integrated, dynamic, and responsive security controls – this is already happening. This series will lay out the key differences, with suggestions for where security professionals should focus. Hopefully, by the end, you will look at the cloud and cloud security in a new light, and agree that the cloud isn't just the latest type of outsourcing.


Defending Against Application Denial of Service: Attacking the Stack

In our last post, we started digging into ways attackers target standard web servers, protocols, and common pages to impact application availability. Those attacks are surface-level, low-hanging fruit, because they can be executed with widely available tools wielded by unsophisticated attackers. If you think of a web application as an onion, there always seems to be another layer you can peel back to expose additional attack surface. The next layer we will evaluate is the underlying application stack used to build the application. One of the great things about web applications is the availability of fully assembled technology stacks, which make it trivial to roll out the infrastructure to support a wide variety of applications. But anything widely available inevitably becomes an attack target. The best example of this, in the context of an availability attack, is how hash tables can be exploited to crush web servers.

Hash Collision Attacks

We won't get into advanced programming, but you need some context to understand this attack. A hash table maps keys to values by assigning each key to a specific slot in an array, which provides a very fast way to search for things. On the downside, multiple keys may end up in the same slot, creating a hash collision that the application needs to deal with, requiring significant additional processing. Hash collisions are normally minimized, so the speed trade-off is usually worthwhile. But if an attacker understands the hashing function used by the application, they can cause excessive hash collisions, forcing the application to compensate and consume extra resources managing the hash table. If enough hash collisions occur… you guessed it: the application can't handle the workload and goes down. This attack was weaponized as HashDoS, an attack tool that leverages the fact that most web application stacks use the same hashing algorithm for their dictionary tables. With knowledge of this hashing algorithm, the attacker can send a POST request with a huge number of variables to create hash table chaos and render the application useless. Mitigation requires the ability to discard messages with too many variables – typically implemented in a WAF (web application firewall) – or randomizing the hash function using application-layer logic. A good explanation of this attack using cats explains HashDoS in layperson's terms. Remember that any capability within the application stack can be exploited, and given the open source nature of these stacks, probably will be. So diligence in selecting a stack, ensuring secure implementation, and tracking security notices and implementing patches are all critical to application security and availability.
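As a rough illustration of the first mitigation – discarding requests with too many variables – here is a Python sketch; the ceiling is a hypothetical value you would tune, and in practice this check lives in the framework or WAF rather than in hand-rolled parsing code:

    MAX_POST_VARS = 1000   # hypothetical ceiling; tune for your application

    def parse_post_body(body):
        """Parse an application/x-www-form-urlencoded body, but refuse to build
        a dictionary from a request with an absurd number of variables."""
        pairs = body.split('&')
        if len(pairs) > MAX_POST_VARS:
            raise ValueError("too many form variables; possible HashDoS attempt")
        form = {}
        for pair in pairs:
            key, _, value = pair.partition('=')
            form[key] = value
        return form

Modern language runtimes also support randomized hash seeds (Python's PYTHONHASHSEED is one example), which addresses the second mitigation path.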
Targeting the Database

As part of the application stack, databases tend to get overlooked as a denial of service target. Many attackers want to extract the data in the database and then exfiltrate it, so knocking down the database would be counter-productive. But when the mission is to impact application availability, or to use a DoS as cover for exfiltration, the database can be a soft target – because in some way, shape, or form, the web application depends on the database. If you recall our Defending Against Denial of Service Attacks paper, we broke DoS attacks into network-based volumetric attacks and application-layer attacks. The issue with databases is that they can be attacked with both tactics. Application servers connect to the database over some kind of network, so a volumetric attack on that network segment can impact database availability. If the database itself is exploited, the application is also likely to go down. Either way the application is out of business.

Database DoS Attacks

If we dig a little deeper into the attacks, we find that one path is to crush databases with deliberately wasteful queries. Other attacks target simple vulnerabilities that have never been patched – mostly because the need for continuous database uptime interferes with patching, so it happens sporadically or not at all. Again, you don't need to be a brain surgeon to knock a web application offline. Here are some of the attack categories:

  • Abuse of Functions: This type of attack is similar to the slow HTTP attacks mentioned in the last post – attackers use database functionality against you. For example, if you restrict failed logins, they may blast your database with bad password requests to lock legitimate users (or applications) out. Another example involves attackers taking advantage of database autoscaling, blasting the database with requests until so many instances are running that it falls over.
  • Complex Queries: If the attacker gives the database too much work to do, it will fall over. There are many techniques, including nested queries and recursion, Cartesian joins, and the IN operator, which can overwhelm the database. The attacker needs to be able to inject a SQL query directly into the database, or indirectly through the application, for this to work – so blocking injection blocks these attacks. We will talk about defenses below.
  • Bugs and Defects: In these cases the attacker targets a known database vulnerability, including queries of death and buffer overflows, to take down the database. With new functionality being introduced constantly, database attack surface continues to grow. And even if the database vendor identifies the issue and produces a patch (not a sure thing), finding a maintenance window to patch remains challenging in many operational environments.
  • Application Usage: Finally, the way the application uses the database can be gamed to cause an outage. The best example is SQL injection, but that attack is rarely used to knock over databases. Also consider the login and store locator page attacks we mentioned in the last post, as well as shopping cart and search engine attacks (to be covered later), as additional examples of application misuse that can impact availability.

Database DoS Defenses

The tactics used to defend against database denial of service attacks really reflect good database security practices. Go figure. Configuration: Strong


Trustwave Acquires Application Security Inc.

It has been a while since we had an acquisition in the database security space, but today Trustwave announced it has acquired Application Security Inc. – commonly called "AppSec" by many who know the company. About 10 years ago, during my employment with IPLocks, I wrote my first competitive analysis paper on our principal competitor: another little-known database security company called Application Security, Inc. Every quarter for four years, I updated those competitive analysis sheets to keep pace with AppSec's product enhancements and competitive tactics in sales engagements. Little did I know I would continue to examine AppSec's capabilities on a quarterly basis after joining Securosis – but rather than looking solely at competitive positioning, I have been gearing my analysis toward how features map to customer inquiries, and tracking customer experiences during proof-of-concept engagements. Of all the products I have tracked, I have been following AppSec the longest.

It feels odd to be writing this for a general audience, but this deal is pretty straightforward, and it needed to happen. Application Security was one of the first database security vendors, and while they were considered a leader in the 2004 timeframe, their products have not been competitive for several years. AppSec still has one of the best database assessment products on the market (AppDetectivePRO), and one of the better – possibly the best – database vulnerability research teams backing it. But Database Activity Monitoring (DAM) is now the key driver in that space, and AppSec's DAM product (DbProtect) has not kept pace with customer demand in terms of performance, integration, ease of use, or out-of-the-box functionality. A "blinders on" focus can be both admirable and necessary for very small start-ups delivering innovative technologies to markets that don't yet understand the new technology or value proposition, but as markets mature vendors must respond to customers and competitors. In AppSec's early days, very few people understood why database security was important. But while the rest of the industry matured and worked to build enterprise-worthy solutions, AppSec turned a deaf ear to criticism from would-be customers and analysts. Today the platform has reasonable quality, but is not much more than an also-ran in a very competitive field.

That said, I think this is a very good purchase for Trustwave. It means several things for Trustwave customers:

  • Trustwave has filled a compliance gap in its product portfolio – specifically for PCI. Trustwave is focused on PCI-DSS, and data and database security are central to PCI compliance. Web and network security have been part of their product suite, but database security has not. Keep in mind that DAM and assessment are not specifically prescribed for PCI compliance the way WAF is, but the vast majority of customers I speak with use DAM to audit activity, discovery to show which data stores are in use, and assessment to prove that security controls are in place. Trustwave should have acquired this technology a while ago.
  • The acquisition fits Trustwave's model of buying decent technology companies at low prices, then selling a subset of their technology to existing customers where they already know demand exists. That could explain why they waited so long – balancing customer requirements against their ability to negotiate a bargain price.
  • Trustwave knows what their customers need to pass PCI better than anyone else, so they will succeed with this technology in ways AppSec never could. This puts Trustwave on a more even footing with customers who care more about security than just checking a compliance box, and gives Trustwave a partial response to Imperva's monitoring and WAF capabilities.
  • I think Trustwave is correct that AppSec's platform can help with their managed services offering – Monitoring and Assessment as a Service appeals to smaller enterprises and mid-market firms which don't want to own or manage database security platforms.

What does this mean for AppSec customers? It is difficult to say – I have not spoken with anyone from Trustwave about this acquisition, so I am unable to judge their commitment to putting engineering effort behind the AppSec products. And I cannot tell whether they intend to keep the research team which has been keeping the assessment component current. Trustwave tweeted during the official announcement that ".@Trustwave will continue to develop and support @AppSecInc products, DbProtect and AppDetectivePRO", but that could be limited to the features compliance buyers demand, without closing the performance and data collection gaps that are problematic for DAM customers. I will blog more on this as I get more information, but expect them to provide what's required to meet compliance and no more.

And lastly, for those keeping score at home, AppSec is the 7th Database Activity Monitoring acquisition – after Lumigent (BeyondTrust), IPLocks (Fortinet), Guardium (IBM), Secerno (Oracle), Tizor (IBM via Netezza), and Sentrigo (McAfee) – leaving Imperva and GreenSQL as the last independent DAM vendors.


Security Awareness Training Evolution [New Paper]

Everyone has an opinion about security awareness training, and most of them are negative. Waste of time! Ineffective! Boring! We have heard them all. And the criticism isn't wrong – much of the content driving security awareness training is lame. Which is probably the kindest thing we can say about it. But it doesn't need to be that way. Actually, it cannot remain this way – there is too much at stake. Users remain the lowest-hanging fruit for attackers, and as long as that is the case attackers will continue to target them. Educating users about security is not a panacea, but it can and does help. It's not like a focus on security awareness training is the flavor of the day for us. We have been talking about the importance of training users for years, as unpopular as it remains. The main argument against security training is that it doesn't work. That's just not true. But it doesn't work for everyone. Like security in general, there is no 100%. Some employees will never get it – mostly because they just don't care – but they do bring enough value to the organization that no matter what they do (short of a felony) they are sticking around. Then there is everyone else. Maybe it's 50% of your folks, or perhaps 90%. Regardless of the number of employees who can be influenced by better security training content, wouldn't it make your life easier if you didn't have to clean up after them? We have seen training reduce the amount of time spent cleaning up easily avoidable mistakes.

We are pleased to announce the availability of our Security Awareness Training Evolution paper. It discusses how training needs to evolve, and presents a path to improve training content and ensure the right support and incentives are in place for training to succeed. We would like to thank our friends at PhishMe for licensing this paper. Remember, it is through the generosity of our licensees that you get to read our stuff for this nifty price. Here is another quote from the paper to sum things up:

As we have said throughout this paper, employees are clearly the weakest link in your security defenses, so without a plan to actively prepare them for battle you have a low chance of success. It is not about making every employee a security ninja – instead focus on preventing most of them from falling for simplistic attacks. You will still be exploited, but make it harder for attackers so you suffer less frequent compromise. Security-aware employees protect your data more effectively – it's as simple as that, regardless of what you hear from naysayers.

Check out the page in our Research Library, or download the Security Awareness Training Evolution (PDF) paper directly.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.