Incite 11/13/2013: Bully

  When you really see the underbelly of something, it is rarely pretty. The NFL is no different. Grown men are paid millions of dollars a year to display unbridled aggression, toughness, and competitiveness. That sounds like a pretty Darwinian environment, where the strong prey on the weak. And it is, given what we have seen over the last few weeks as behavior in the Miami Dolphins locker room comes to light. It is counterintuitive to think of a 320-pound offensive lineman being bullied by anyone. You hear about fights on the field and in the locker room as these alpha males all look to establish position within the pride. But how are the bullies in the Dolphins locker room any different from the petty mean girls and boys you had to deal with in high school? They aren’t. If you take a step back, a bully is always compensating for some kind of self-perceived inadequacy that forces him or her to act out. Small people (even if they weigh 300+ pounds) make themselves feel bigger by making others feel smaller.

  So the first question is whether the behavior is acceptable. I think everyone can agree racial epithets have no place in today’s society. But what about the other tactics, such as mind games and intentionally excluding a fellow player from activities? I’m not sure that kind of hazing would normally be a huge deal, but combined with an environment of racial insensitivity, it probably crosses the line as well. What’s more surprising is that no one stepped up and said that behavior was no bueno. Bullies prey on folks because the people who aren’t directly targeted don’t stand up and make clear what is acceptable and what isn’t. But that has happened since the beginning of time. No one wants to stand up for what’s right, so folks just watch catastrophic events happen.

  Maybe this will be a catalyst to change the culture. There is nothing the NFL hates more than bad publicity, so things will change. Every other team in the NFL made statements about how their work environments are not like that. No one wants to be singled out as a bully or a bigot, not when they have potential endorsement deals riding on their public image. Like most other changes, some old timers will resist. Others will adapt because they need to. And with the real-time nature of today’s media, and rampant leaks within every organization, it is hard to see this kind of behavior happening again.

  I guess I can’t understand why players who call themselves brothers would treat each other so badly. Of course you beat up your little brother(s) when you are 10. But if you are still treating your siblings shabbily as adults, you need some help. Maybe I am getting a bit judgmental, especially considering that I have never worked in an NFL locker room, so I can’t even pretend to understand the mindset. But I do know a bit about dealing with people. One of the key tenets of a functional and successful organization is to manage people as individuals. A guy may be 320 pounds, an athletic freak, and capable of serious violence when the ball is snapped, but that doesn’t mean he wants to get called names or fight a teammate to prove his worth. I learned the importance of managing people individually early in my career, mostly because it worked. This management philosophy is masterfully explained in First, Break All the Rules, which shows how much happy employees, who do what they love every day with people they care about, contribute to corporate performance. Clearly someone in Miami didn’t get the memo.
  And you have to wonder what kind of player Jonathan Martin could be if he worked in a place where he didn’t feel singled out and persecuted, so he could focus on the task at hand: his blocking assignment for each play, not whether he was going to get jumped in the parking lot. Maybe he’ll even get a chance to find out, but it’s hard to see that happening in Miami. –Mike

Photo credit: “Bully Advance Screening Hosted by First Lady Katie O’Malley” originally uploaded by Maryland GovPics

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

What CISOs Need to Know about Cloud Computing
  • Introduction

Defending Against Application Denial of Service
  • Attacking the Application Stack
  • Attacking the Application Server
  • Introduction

Newly Published Papers
  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers

Incite 4 U

What is it that you do? I have to admit that I really did not understand analysts or the analyst industry prior to joining Securosis. Analysts were the people on our briefing calendar who were more knowledgeable – and far more arrogant – than the press. But they did not seem to have a clear role, nor was their technical prowess close to what they thought it was. I was assured by our marketing team that they were important, but I could not see how. Now I do, but the explanation needs to be repeated every so often. The aneelism blog has a nice primer on technology analyst 101 for startups. Long story short, some analysts speak with customers as independent advisors, which means two things for small security vendors: we are told things customers will never tell you directly, and we see a breadth of industry issues and trends you won’t, because you are focused on your own stuff and try to wedge…


The CISO’s Guide to the Cloud: How the Cloud Is Different for Security

This is part two of a series. You can read part one here or track the project on GitHub.

How the Cloud Is Different for Security

In the early days of cloud computing, even some very well-respected security professionals claimed it was little more than a different kind of outsourcing, or equivalent to the multitenancy of a mainframe. But the differences run far deeper, and we will show how they require different cloud security controls. We know how to manage the risks of outsourcing or multi-user environments; cloud computing security builds on this foundation and adds new twists. These differences boil down to abstraction and automation, which separate cloud computing from basic virtualization and other well-understood technologies.

Abstraction

Abstraction is the extensive use of multiple virtualization technologies to separate compute, network, storage, information, and application resources from the underlying physical infrastructure. In cloud computing we use this to convert physical infrastructure into a resource pool that is sliced, diced, provisioned, deprovisioned, and configured on demand, using the automation we will talk about next. It really is a bit like the Matrix. Individual servers run little more than a hypervisor with connectivity software to link them into the cloud, and the rest is managed by the cloud controller. Virtual networks overlay the physical network, with dynamic configuration of routing at all levels. Storage hardware is similarly pooled, virtualized, and then managed by the cloud control layers. The entire physical infrastructure, less some dedicated management components, becomes a collection of resource pools. Servers, applications, and everything else run on top of the virtualized environment.

Abstraction impacts security significantly in four ways:

  • Resource pools are managed using standard, web-based (REST) Application Programming Interfaces (APIs). The infrastructure is managed with network-enabled software at a fundamental level.
  • Security can lose visibility into the infrastructure. On the network we can’t rely on physical routing for traffic inspection or management, and we don’t necessarily know which hard drives hold which data.
  • Everything is virtualized and portable. Entire servers can migrate to new physical systems with a few API calls or a click on a web page.
  • We gain greater pervasive visibility into the infrastructure configuration itself. If the cloud controller doesn’t know about a server, that server cannot function. We can map the complete environment with those API calls.

We have focused on Infrastructure as a Service, but the same issues apply to Platform and Software as a Service, although they often offer even less visibility.

Automation

Virtualization has existed for a long time. The real power cloud computing adds is automation. In basic virtualization and virtual data centers we still rely on administrators to manually provision and manage our virtual machines, networks, and storage. Cloud computing turns these tasks over to the cloud controller, which coordinates all these pieces (and more) using orchestration. Users ask for resources via web page or API call, such as a new server with 1 TB of storage on a particular subnet, and the cloud determines how best to provision it from the resource pool; then it handles installation, configuration, and coordination of all the networking, storage, and compute resources to pull everything together into a functional and accessible server. No human administrator required.
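To make that concrete, here is a minimal sketch of what such an API call can look like, using Amazon’s boto3 library purely as an illustration; the AMI, subnet, instance type, and sizing values are hypothetical placeholders we chose for the example, not details from this series.

```python
# Illustrative only: provisioning "a new server with 1 TB of storage on a
# particular subnet" as a single API call, using AWS EC2 via boto3.
# All identifiers below (AMI, subnet) are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical base image
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # the requested subnet
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 1024},      # 1 TB of storage, in GiB
    }],
)
print(response["Instances"][0]["InstanceId"])
```

The same API surface that provisions the server is what lets you map the complete environment: a follow-up call such as `ec2.describe_instances()` enumerates everything the cloud controller knows about.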
Or the cloud can monitor demand on a cluster and add and remove fully load-balanced and configured systems based on rules, such as average system utilization over a specified threshold. Need more resources? Add virtual servers. Systems underutilized? Drop them back into the resource pool. In public cloud computing this keeps costs down, as you expand and contract based on what you need. In private clouds it frees resources for other projects and requirements, although you still need a shared resource pool to handle overall demand. But you are no longer stuck with underutilized physical boxes in one corner of your data center and inadequate capacity in another. The same applies to platforms (including databases or application servers) and software; you can expand and contract database storage, application server capacity, and storage as needed – without additional capital investment.

In the real world it isn’t always so clean. Heavy use of public cloud may exceed the costs of owning your own infrastructure. Managing your own private cloud is no small task, and is rife with pitfalls. And abstraction does reduce performance at certain levels, at least for now. But with the right planning, and as the technology continues to evolve, the business advantages are undeniable.

The NIST model of cloud computing is the best framework for understanding the cloud. It consists of five Essential Characteristics, three Service Models (IaaS, PaaS, and SaaS), and four Deployment Models (public, private, hybrid, and community). Our characteristic of abstraction generally maps to resource pooling and broad network access, while automation maps to on-demand self-service, measured service, and rapid elasticity. We aren’t proposing a different model, just overlaying the NIST model to better describe things in terms of security.

Thanks to this automation and orchestration of resource pools, clouds are incredibly elastic, dynamic, agile, and resilient. But even more transformative is the capability for applications to manage their own infrastructure, because everything is now programmable. The lines between development and operations blur, offering incredible levels of agility and resilience, which is one of the concepts underpinning the DevOps movement. But of course, done improperly, it can be disastrous.

Cloud, DevOps, and Security in Practice: Examples

Here are a few examples that highlight the impact of abstraction and automation on security. We will address the security issues later in this paper.

Autoscaling: As mentioned above, many IaaS providers support autoscaling. A monitoring tool watches server load and other variables. When the average load of virtual machines exceeds a configurable threshold, new instances are launched from the same base image with advanced initialization scripts. These scripts can automatically configure all aspects of the server, pulling metadata from the cloud or a configuration management server. Advanced tools can configure entire application stacks. But these servers may only exist for a short period, perhaps never during a vulnerability assessment window.
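As an illustration of how such a rule can be wired up, here is a hedged sketch using AWS Auto Scaling and CloudWatch via boto3; the group name, one-instance adjustment, and 75% threshold are assumptions for the example, not values from this paper.

```python
# Illustrative sketch: scale out when average CPU across an autoscaling
# group exceeds a threshold. Group name and threshold are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Policy: add one instance to the group each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",        # hypothetical group name
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm: average CPU above 75% for two 5-minute periods triggers the policy.
cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A matching scale-in policy and alarm would drop instances back into the resource pool when utilization falls, completing the expand-and-contract cycle described above.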


Defending Against Application Denial of Service: Abusing Application Logic

We looked at application denial of service in terms of attacking the application server and the application stack, so now let’s turn our attention to attacking the application itself. Clearly every application contains weaknesses that can be exploited, especially when the goal is simply to knock the application offline rather than something more complicated, such as stealing credentials or gaining access to the data. That lower bar of taking the application offline means more places to attack.

If we bust out the kill chain to illuminate attack progression, let’s first focus on the beginning: reconnaissance. That’s where the process starts for application denial of service attacks as well. The attackers need to find the weak points in the application, so they assess it to figure out which pages consume the most resources, the kinds of field-level validation on forms, and the supported attributes on query strings. For instance, if a form field does a ton of field-level validation or needs to make multiple database calls to multiple sites to render the page, that page would be a good target to blast. Serving dynamic content requires a bunch of database calls to populate the page, and each call consumes resources. The point is to consume as many resources as possible, to impact the application’s ability to serve legitimate traffic.

Flooding the Application

In our Defending Against Denial of Service Attacks paper, we talked about how network-based attacks flood the pipes. Targeting resource-intensive pages with GET or POST requests (or both) provides an equivalent application flooding attack, exhausting the server’s session and memory capacity. Attackers flood a number of different parts of web applications, including:

  • Top-level index page: This one is straightforward and usually has the fewest protections, because it’s open to everyone. When blasted by tens of thousands of clients simultaneously, the server can become overwhelmed.
  • Query string “proxy busting”: Attackers can send a request that bypasses any proxy or cache, forcing the application to generate and send new information, and eliminating the benefit of a CDN or other cache in front of the application. The impact can be particularly acute when requesting large PDFs or other files repeatedly, consuming excessive bandwidth and server resources.
  • Random session cookies/tokens: By establishing thousands of sessions with the application, attackers can overload session tables on the server and impact its ability to serve legitimate traffic.

Flood attacks can be detected rather easily (unlike the slow attacks described in Attacking the Server), providing an opportunity to rate-limit the attack while allowing legitimate traffic through. Of course this approach puts a premium on accuracy: false positives slow down or discard legitimate traffic, while false negatives allow attacks to consume server resources. To accurately detect application floods you need a detailed baseline of legitimate traffic, tracking details such as URL distribution, request frequency, maximum requests per device, and outbound traffic rates. With this data a profile of legitimate application behavior can be developed. You can then compare incoming traffic (usually on a WAF or application DoS device) against the profile to identify bad traffic, and then limit or block it. Another tactic to mitigate application floods is input validation on all form fields, to ensure requests neither overflow application buffers nor misuse application resources.
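To ground the baseline idea above, here is a minimal sketch of per-client rate limiting against a request-frequency threshold derived from a baseline; the sliding window, budget values, and structure are our own illustrative assumptions, not a description of any particular WAF or DoS product.

```python
# Illustrative sliding-window rate limiter: flag clients whose request
# rate exceeds a baseline-derived budget. All thresholds are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120   # derived from a baseline of normal traffic

requests = defaultdict(deque)   # client id -> timestamps of recent requests

def allow_request(client_id: str) -> bool:
    """Return False when a client exceeds its per-window request budget."""
    now = time.monotonic()
    window = requests[client_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False                # rate-limit (or block) this client
    window.append(now)
    return True
```

In practice this logic lives on a WAF or anti-DoS device and folds in the other baseline dimensions mentioned above, such as URL distribution and outbound traffic rates, rather than request frequency alone.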
If you are using a CDN to front-end your application, make sure it can handle random query string attacks and that you are benefiting from the caching service. Given the ability of some attackers to bypass a CDN (assuming you have one), you will want to ensure your input validation ignores random query strings. You can also leverage IP reputation services to identify bot traffic and limit or block it. That requires coordination between the application and network-based defenses, but it is effective for detecting and limiting floods.

Pagination

A pagination attack involves asking the web application to return an unreasonable number of results by expanding the PageSize query parameter to circumvent limits. This can return tens of thousands or even millions of records, which obviously consumes significant database resources, especially when servicing multiple requests at the same time. These attacks are typically launched against the search page. Another tactic for overwhelming applications is to use a web scraper to capture information from dynamic content areas such as store locators and product catalogs. If the scraper is not throttled it can overwhelm the application by scraping over and over again.

Mitigation for most pagination attacks must be built into the application. For example, regardless of the PageSize parameter, the application should limit the number of records returned (see the sketch at the end of this post). Likewise, you will want to limit the number of search requests the site will process simultaneously. You can also leverage a Content Delivery Network or web protection service to cache static information and limit search activity. Alternatively, embedding complicated JavaScript on the search pages can deter bots.

Gaming the Shopping Cart

Another frequently exploited legitimate function is the shopping cart. An attacker might put a few items in a cart and then abandon it for a few hours. At some point they come back and refresh the cart, which causes the session to be maintained and the database to reload the cart. If the attacker has put tens of thousands of products into the cart, this consumes significant resources. Shopping cart mitigations include limiting the number of items that can be added to a cart and periodically clearing out carts with too many items. You will also want to periodically terminate sufficiently old carts to reclaim session space and flush abandoned carts.

Combination Platter

Attackers are smart. They have figured out that they can combine many of these attacks with devastating results. For instance, an attacker could launch a volume-based network attack on a site, then start a GET flood on legitimate pages, limited to avoid looking like a network attack. They follow up with a slow HTTP attack, so any traffic that does make it through consumes application resources. Finally they might attack the shopping cart or store locator, which looks like legitimate activity.
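As promised in the pagination section above, here is a minimal sketch of server-side PageSize clamping, written against Flask purely for illustration; the route, default, and MAX_PAGE_SIZE cap are hypothetical values chosen for the example.

```python
# Illustrative only: clamp PageSize server-side so an attacker cannot
# request millions of records. Route names and limits are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

DEFAULT_PAGE_SIZE = 25
MAX_PAGE_SIZE = 100   # hard cap, regardless of what the client requests

@app.route("/search")
def search():
    try:
        requested = int(request.args.get("PageSize", DEFAULT_PAGE_SIZE))
    except ValueError:
        requested = DEFAULT_PAGE_SIZE
    # Never trust the client-supplied value: clamp into [1, MAX_PAGE_SIZE].
    page_size = max(1, min(requested, MAX_PAGE_SIZE))
    results = run_search(request.args.get("q", ""), limit=page_size)
    return jsonify(results)

def run_search(query: str, limit: int) -> list:
    """Placeholder for the real, limit-enforcing database query."""
    return []
```

The key design point is that the cap is enforced in the application, not in the client or the query string, so no parameter manipulation can expand the result set.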


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.