Incite 11/13/2013: Bully

When you really see the underbelly of something, it is rarely pretty. The NFL is no different. Grown men are paid millions of dollars a year to display unbridled aggression, toughness, and competitiveness. That sounds like a pretty Darwinian environment, where the strong prey on the weak. And it is, given what we have seen over the last few weeks as behavior in the Miami Dolphins locker room comes to light. It is counterintuitive to think of a 320-pound offensive lineman being bullied by anyone. You hear about fights on the field and in the locker room as these alpha males all look to establish position within the pride. But how are the bullies in the Dolphins locker room any different from the petty mean girls and boys you had to deal with in high school? They aren't. If you take a step back, a bully is always compensating for some kind of self-perceived inadequacy that forces him or her to act out. Small people (even if they weigh 300+ pounds) make themselves feel bigger by making others feel smaller.

So the first question is whether the behavior is acceptable. I think everyone can agree racial epithets have no place in today's society. But what about the other tactics, such as mind games and intentionally excluding a fellow player from activities? I'm not sure that kind of hazing would normally be a huge deal, but combined with an environment of racial insensitivity, it probably crosses the line as well. What's more surprising is that no one stepped up and said that behavior was no bueno. Bullies prey on folks because the people who aren't directly targeted don't stand up and make clear what is acceptable and what isn't. But that has happened since the beginning of time. No one wants to stand up for what's right, so folks just watch catastrophic events happen.

Maybe this will be a catalyst to change the culture. There is nothing the NFL hates more than bad publicity, so things will change. Every other team in the NFL made statements about how their work environments are not like that. No one wants to be singled out as a bully or a bigot. Not when they have potential endorsement deals riding on their public image. Like most other changes, some old timers will resist. Others will adapt because they need to. And with the real-time nature of today's media, and rampant leaks within every organization, it is hard to see this kind of behavior happening again.

I guess I can't understand why players who call themselves brothers would treat each other so badly. Of course you beat up your little brother(s) when you are 10. But if you are still treating your siblings shabbily as an adult, you need some help. Maybe I am getting a bit judgmental, especially considering that I have never worked in an NFL locker room, so I can't even pretend to understand the mindset. But I do know a bit about dealing with people. One of the key tenets of a functional and successful organization is to manage people as individuals. A guy may be 320 pounds, an athletic freak, and capable of serious violence when the ball is snapped, but that doesn't mean he wants to get called names or fight a teammate to prove his worth. I learned the importance of managing people individually early in my career, mostly because it worked. This management philosophy is masterfully explained in First, Break All the Rules, which shows how much corporate performance depends on keeping employees happy, doing what they love every day with people they care about. Clearly someone in Miami didn't get the memo.
And you have to wonder what kind of player Jonathan Martin could be if he worked in a place where he didn't feel singled out and persecuted, so he could focus on the task at hand: his blocking assignment for each play, not whether he was going to get jumped in the parking lot. Maybe he'll even get a chance to find out, but it's hard to see that happening in Miami. –Mike

Photo credit: "Bully Advance Screening Hosted by First Lady Katie O'Malley" originally uploaded by Maryland GovPics

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

What CISOs Need to Know about Cloud Computing

  • Introduction

Defending Against Application Denial of Service

  • Attacking the Application Stack
  • Attacking the Application Server
  • Introduction

Newly Published Papers

  • Security Awareness Training Evolution
  • Firewall Management Essentials
  • Continuous Security Monitoring
  • API Gateways
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer's Guide
  • The CISO's Guide to Advanced Attackers

Incite 4 U

What is it that you do? I have to admit that I really did not understand analysts or the entire analyst industry prior to joining Securosis. Analysts were the people on our briefing calendar who were more knowledgeable – and far more arrogant – than the press. But they did not seem to have a clear role, nor was their technical prowess close to what they thought it was. I was assured by our marketing team that they were important, but I could not see how. Now I do, but the explanation needs to be repeated every so often. The aneelism blog has a nice primer on technology analyst 101 for startups. Long story short, some analysts speak with customers as an independent advisor, which means two things for small security vendors: we are told things customers will never tell you directly, and we see a breadth of industry issues & trends you won't because you are focused on your own stuff and try to wedge


The CISO’s Guide to the Cloud: How the Cloud Is Different for Security

This is part two of a series. You can read part one here or track the project on GitHub.

How the Cloud Is Different for Security

In the early days of cloud computing, even some very well-respected security professionals claimed it was little more than a different kind of outsourcing, or equivalent to the multitenancy of a mainframe. But the differences run far deeper, and we will show how they require different cloud security controls. We know how to manage the risks of outsourcing or multi-user environments; cloud computing security builds on this foundation and adds new twists. These differences boil down to abstraction and automation, which separate cloud computing from basic virtualization and other well-understood technologies.

Abstraction

Abstraction is the extensive use of multiple virtualization technologies to separate compute, network, storage, information, and application resources from the underlying physical infrastructure. In cloud computing we use this to convert physical infrastructure into a resource pool that is sliced, diced, provisioned, deprovisioned, and configured on demand, using the automation we will talk about next. It really is a bit like the Matrix. Individual servers run little more than a hypervisor with connectivity software to link them into the cloud, and the rest is managed by the cloud controller. Virtual networks overlay the physical network, with dynamic configuration of routing at all levels. Storage hardware is similarly pooled, virtualized, and then managed by the cloud control layers. The entire physical infrastructure, less some dedicated management components, becomes a collection of resource pools. Servers, applications, and everything else runs on top of the virtualized environment.

Abstraction impacts security significantly in four ways:

  • Resource pools are managed using standard, web-based (REST) Application Programming Interfaces (APIs). The infrastructure is managed with network-enabled software at a fundamental level.
  • Security can lose visibility into the infrastructure. On the network we can't rely on physical routing for traffic inspection or management. We don't necessarily know which hard drives hold which data.
  • Everything is virtualized and portable. Entire servers can migrate to new physical systems with a few API calls or a click on a web page.
  • We gain greater pervasive visibility into the infrastructure configuration itself. If the cloud controller doesn't know about a server it cannot function. We can map the complete environment with those API calls.

We have focused on Infrastructure as a Service, but the same issues apply to Platform and Software as a Service, except they often offer even less visibility.

Automation

Virtualization has existed for a long time. The real power cloud computing adds is automation. In basic virtualization and virtual data centers we still rely on administrators to manually provision and manage our virtual machines, networks, and storage. Cloud computing turns these tasks over to the cloud controller to coordinate all these pieces (and more) using orchestration. Users ask for resources via web page or API call, such as a new server with 1 TB of storage on a particular subnet, and the cloud determines how best to provision it from the resource pool; then it handles installation, configuration, and coordinating all the networking, storage, and compute resources to pull everything together into a functional and accessible server. No human administrator required.
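Here is a hedged sketch of what that API-driven provisioning looks like through one provider. It assumes AWS and the boto3 SDK; the image and subnet identifiers are placeholders, not real values. The second half shows the security payoff of abstraction: the controller must know about every instance, so a couple of API calls map the complete environment.

```python
# Minimal sketch: provision a server with ~1 TB of storage on a specific
# subnet via API calls alone -- no human administrator. Assumes AWS and
# the boto3 SDK; the AMI and subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder base image
    InstanceType="m3.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-xxxxxxxx",    # placeholder subnet
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 1024, "VolumeType": "gp2"},  # ~1 TB volume
    }],
)
print("launched", response["Instances"][0]["InstanceId"])

# The same API surface gives security pervasive visibility: if the cloud
# controller doesn't know about a server, the server can't run, so we can
# enumerate the entire environment with one paginated call.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("SubnetId"))
```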
Or the cloud can monitor demand on a cluster and add and remove fully load-balanced and configured systems based on rules, such as average system utilization over a specified threshold. Need more resources? Add virtual servers. Systems underutilized? Drop them back into the resource pool. In public cloud computing this keeps costs down as you expand and contract based on what you need. In private clouds it frees resources for other projects and requirements, but you still need a shared resource pool to handle overall demand. But you are no longer stuck with under-utilized physical boxes in one corner of your data center and inadequate capacity in another. The same applies to platforms (including databases or application servers) and software; you can expand and contract database storage, application server capacity, and storage as needed – without additional capital investment.

In the real world it isn't always so clean. Heavy use of public cloud may exceed the costs of owning your own infrastructure. Managing your own private cloud is no small task, and is rife with pitfalls. And abstraction does reduce performance at certain levels, at least for now. But with the right planning, and as the technology continues to evolve, the business advantages are undeniable.

The NIST model of cloud computing is the best framework for understanding the cloud. It consists of five Essential Characteristics, three Service Models (IaaS, PaaS, and SaaS), and four Deployment Models (public, private, hybrid, and community). Our characteristic of abstraction generally maps to resource pooling and broad network access, while automation maps to on-demand self-service, measured service, and rapid elasticity. We aren't proposing a different model, just overlaying the NIST model to better describe things in terms of security.

Thanks to this automation and orchestration of resource pools, clouds are incredibly elastic, dynamic, agile, and resilient. But even more transformative is the capability for applications to manage their own infrastructure, because everything is now programmable. The lines between development and operations blur, offering incredible levels of agility and resilience, which is one of the concepts underpinning the DevOps movement. But of course done improperly it can be disastrous.

Cloud, DevOps, and Security in Practice: Examples

Here are a few examples that highlight the impact of abstraction and automation on security. We will address the security issues later in this paper.

Autoscaling: As mentioned above, many IaaS providers support autoscaling. A monitoring tool watches server load and other variables. When the average load of virtual machines exceeds a configurable threshold, new instances are launched from the same base image with advanced initialization scripts. These scripts can automatically configure all aspects of the server, pulling metadata from the cloud or a configuration management server. Advanced tools can configure entire application stacks. But these servers may only exist for a short period, perhaps never during a vulnerability


Defending Against Application Denial of Service: Abusing Application Logic

We looked at application denial of service in terms of attacking the application server and the application stack, so now let's turn our attention to attacking the application itself. Clearly every application contains weaknesses that can be exploited, especially when the goal is simply to knock the application offline rather than something more complicated, such as stealing credentials or gaining access to the data. That lower bar of taking the application offline means more places to attack.

If we bust out the kill chain to illuminate attack progression, let's first focus on the beginning: reconnaissance. That's where the process starts for application denial of service attacks as well. The attackers need to find the weak points in the application, so they assess it to figure out which pages consume the most resources, the kinds of field-level validation on forms, and the supported attributes on query strings. For instance, if a form field does a ton of field-level validation, or a page needs to make multiple database calls to multiple sites to render, that page would be a good target to blast. Serving dynamic content requires a bunch of database calls to populate the page, and each call consumes resources. The point is to consume as many resources as possible to impact the application's ability to serve legitimate traffic.

Flooding the Application

In our Defending Against Denial of Service Attacks paper, we talked about how network-based attacks flood the pipes. Targeting resource-intensive pages with either GET or POST requests (or both) provides an equivalent application flooding attack, exhausting the server's session and memory capacity. Attackers flood a number of different parts of web applications, including:

  • Top-level index page: This one is straightforward and usually has the fewest protections because it's open to everyone. When blasted by tens of thousands of clients simultaneously, the server can become overwhelmed.
  • Query string "proxy busting": Attackers can send a request to bypass any proxy or cache, forcing the application to generate and send new information, and eliminating the benefit of a CDN or other cache in front of the application. The impact can be particularly acute when requesting large PDFs or other files repeatedly, consuming excessive bandwidth and server resources.
  • Random session cookies/tokens: By establishing thousands of sessions with the application, attackers can overload session tables on the server and impact its ability to serve legitimate traffic.

Flood attacks can be detected rather easily (unlike the slow attacks described in Attacking the Server), providing an opportunity to rate-limit attacks while allowing legitimate traffic through. Of course this approach puts a premium on accuracy, as false positives slow down or discard legitimate traffic, and false negatives allow attacks to consume server resources. To accurately detect application floods you need a detailed baseline of legitimate traffic, tracking details such as URL distribution, request frequency, maximum requests per device, and outbound traffic rates. With this data a legitimate application behavior profile can be developed. You can then compare incoming traffic (usually on a WAF or application DoS device) against the profile to identify bad traffic, and then limit or block it, as sketched below. Another tactic to mitigate application floods is input validation on all form fields, to ensure requests neither overflow application buffers nor misuse application resources.
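To make the profile-comparison idea concrete, here is a minimal sketch of one dimension of such a baseline – maximum requests per device over a sliding window. Real WAFs and DoS appliances track many more variables (URL distribution, outbound rates); the window and threshold here are assumed values for illustration only.

```python
# Minimal sketch: compare per-client request rates against a baseline
# profile, flagging clients for rate-limiting when they exceed it.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
BASELINE_MAX_REQUESTS = 120  # assumed per-device ceiling from baselining

_requests = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip, now=None):
    """Return False when a client exceeds the legitimate traffic profile."""
    now = time.time() if now is None else now
    window = _requests[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # expire entries that aged out of the window
    window.append(now)
    return len(window) <= BASELINE_MAX_REQUESTS
```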
If you are using a CDN to front-end your application, make sure it can handle random query string attacks and that you are benefiting from the caching service. Given the ability of some attackers to bypass a CDN (assuming you have one), you will want to ensure your input validation ignores random query strings. You can also leverage IP reputation services to identify bot traffic and limit or block it. That requires coordination between the application and network-based defenses, but it is effective for detecting and limiting floods.

Pagination

A pagination attack involves asking the web application to return an unreasonable number of results by expanding the PageSize query parameter to circumvent limits. This can return tens of thousands or even millions of records, which obviously consumes significant database resources, especially when servicing multiple requests at the same time. These attacks are typically launched against the search page. Another tactic for overwhelming applications is to use a web scraper to capture information from dynamic content areas such as store locators and product catalogs. If the scraper is not throttled it can overwhelm the application by scraping over and over again. Mitigation for most pagination attacks must be built into the application. For example, regardless of the PageSize parameter, the application should limit the number of records returned (see the sketch at the end of this post). Likewise, you will want to limit the number of search requests the site will process simultaneously. You can also leverage a Content Delivery Network or web protection service to cache static information and limit search activity. Alternatively, embedding complicated JavaScript on the search pages can deter bots.

Gaming the Shopping Cart

Another frequently exploited legitimate function is the shopping cart. An attacker might put a few items in a cart and then abandon it for a few hours. At some point they come back and refresh the cart, which causes the session to be maintained and the database to reload the cart. If the attacker has put tens of thousands of products into the cart, this consumes significant resources. Shopping cart mitigations include limiting the number of items that can be added to a cart and periodically clearing out carts with too many items. You will also want to periodically terminate sufficiently old carts to reclaim session space and flush abandoned carts.

Combination Platter

Attackers are smart. They have figured out that they can combine many of these attacks with devastating results. For instance an attacker could launch a volume-based network attack on a site, then start a GET flood on legitimate pages, limited to avoid looking like a network attack. They might follow up with a slow HTTP attack, so any traffic that does make it through consumes application resources. Finally they might attack the shopping cart or store locator, which looks like legitimate activity.
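As promised above, here is a minimal sketch of the server-side pagination limit. It is plain, framework-agnostic Python; MAX_PAGE_SIZE and the parameter names are assumptions for illustration, not recommendations.

```python
# Minimal sketch: clamp client-supplied pagination values so a tampered
# PageSize parameter can never force an oversized result set.
MAX_PAGE_SIZE = 50  # assumed application policy

def sanitize_page_params(raw_page, raw_page_size):
    """Return (page, page_size) clamped to policy, whatever the client sent."""
    try:
        page = max(int(raw_page), 1)
    except (TypeError, ValueError):
        page = 1
    try:
        page_size = int(raw_page_size)
    except (TypeError, ValueError):
        page_size = MAX_PAGE_SIZE
    page_size = min(max(page_size, 1), MAX_PAGE_SIZE)
    return page, page_size

# A search handler would then build its query from the clamped values:
#   page, size = sanitize_page_params(params.get("Page"), params.get("PageSize"))
#   rows = query.limit(size).offset((page - 1) * size)
```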


How to Detect Cloudwashing by Your Vendors

“There is nothing more deceptive than an obvious fact” – Sherlock Holmes

It’s cloud. It’s cloud-ready. It’s cloud insert-name-here. As analysts we have been running into a lot of vendors labeling traditional products as ‘cloud’. Two years ago we expected the practice to die out once customers understood cloud services. We were wrong – vendors are still doing it rather than actually building the technology. Call it cloudwashing, cloudification, or just plain BS. As an enterprise buyer, how can you tell whether the system you are thinking of purchasing is a cloud application or not? It should be easy – just look at the products branded ‘cloud’, right? But dig deeper and you see it’s not so simple. Sherlock Holmes made a science of detection, and being an enterprise buyer today can feel like being a detective in a complex investigation. Vendors have anticipated your questions and have answers ready. What to do? Start by drilling down: what is behind the labels? Is it cloud, or just good old enterprise bloatware? Or is it MSO with a thin veneer of cloud?

We pause here to state that there is nothing inherently wrong with enterprise software or MSO. There is also no reason cloud is necessarily better for you. Our goal here is to orient your thinking beyond labels and give you some tips so you can be an educated consumer. We have seen a grab bag of cloudwashes. We offer the following questions to help you figure out what’s real:

  • Does it run at a third-party provider? (not on-premises or ‘private’ cloud)
  • Is the service self-service? (i.e., you can use it without other user interactions or without downloading – not installed ‘on the edge’ of your IT network)
  • Is the service metered? If you stopped using it tomorrow, would the bills stop?
  • Can you buy it with a credit card? Can your brother-in-law sign up for the same service?
  • Do you need to buy a software license?
  • Does it have an API?
  • Does it autoscale?
  • Did the vendor start from scratch or rewrite its product?
  • Is the product standalone? (i.e., not a proxy-type interface on top of an existing stack)
  • Can you deploy without assistance, or does it require professional services to design, deploy, and operate?

The more of these questions that get a ‘No’ answer, the more likely your vendor is marketing ‘cloud’ instead of selling cloud services. Why does that matter? Because real cloud environments offer specific advantages in elasticity, flexibility, scalability, self-service, pay-as-you-go, and various other areas, which are not present in many non-cloud solutions. What cloudwashing exercises have you seen? Please share in the comments below.


New Series: What CISOs Need to Know about Cloud Computing

This is the first post in a new series detailing the key differences between cloud computing and traditional security. I feel pretty strongly that, although many people are talking about the cloud, nobody has yet done a good job of explaining why and how security needs to adapt at a fundamental level. It is more than outsourcing, more than multitenancy, and definitely more than simple virtualization. This is my best stab at it, and I hope you like it. The entire paper, as I write it, is also posted and updated at GitHub for those of you who want to track changes, submit feedback, or even submit edits. Special thanks to CloudPassage for agreeing to license the paper (as always, we are following our Totally Transparent Research process; they do not have any more influence than you do, and can back out of licensing the paper if, in the end, they don’t like it). And here we go…

What CISOs Need to Know about Cloud Computing

Introduction

One of a CISO’s most difficult challenges is sorting the valuable wheat from the overhyped chaff, and then figuring out what it means in terms of risk to your organization. There is no shortage of technology and threat trends, and CISOs need to determine not only which matter, but also how they impact security. The rise of cloud computing is one of the truly transformative evolutions that fundamentally change core security practices. Far more than an outsourcing model, cloud computing alters the very fabric of our infrastructure, technology consumption, and delivery models. In the long run, the cloud and mobile computing are likely to mark a larger shift than the Internet itself. This series details the critical differences between cloud computing and traditional infrastructure for security professionals, as well as where to focus security efforts. We will show that the cloud doesn’t necessarily increase risks – it shifts them, and provides new opportunities for significant security improvement.

Different, But Not the Way You Think

Cloud computing is a radically different technology model – not just the latest flavor of outsourcing. It uses a combination of abstraction and automation to achieve previously impossible levels of efficiency and elasticity. But in the end cloud computing still relies on traditional infrastructure as its foundation. It doesn’t eliminate physical servers, networks, or storage, but allows organizations to use them in different ways, with substantial benefits. Sometimes this means building your own cloud in your own datacenter; other times it means renting infrastructure, platforms, and applications from public providers over the Internet. Most organizations will use a combination of both. Public cloud services eliminate most capital expenses and shift them to on-demand operational costs. Private clouds allow more efficient use of capital, tend to reduce operational costs, and increase the responsiveness of technology to internal needs. Between the business benefits and current adoption rates, we expect cloud computing to become the dominant technology model over the next ten to fifteen years. As we make this transition, it is the technology that creates clouds, rather than the increased use of shared infrastructure, that really matters for security. Multitenancy is more an emergent property of cloud computing than a defining characteristic.

Security Is Evolving for the Cloud

As you will see, cloud computing isn’t more or less secure than traditional infrastructure – it is different.
Some risks are greater, some are new, some are reduced, and some are eliminated. The primary goal of this series is to provide an overview of where these changes occur, what you need to do about them, and when.

Cloud security focuses on managing the different risks associated with abstraction and automation. Multitenancy tends to be more a compliance issue than a security problem, and we will cover both aspects. Infrastructure and applications are opened up to network-based management via Internet APIs. Everything from core network routing to creating and destroying entire application stacks is now possible using command lines and web interfaces. The early security focus has been on managing risks introduced by highly dynamic virtualized environments, such as autoscaled servers and broad network access, including a major focus on compartmentalizing cloud management. Over time the focus is gradually shifting to hardening the cloud infrastructure, platforms, and applications, and then adapting security to use the cloud to improve security. For example, the need for data encryption increases over time as you migrate more sensitive data into the cloud. But the complexities of internal network compartmentalization and server patching are dramatically reduced as you leverage cloud infrastructure.

We expect to eventually see more security teams hook into the cloud fabric itself – bridging existing gaps between security tools, infrastructure, and applications with Software Defined Security. The same APIs and programming techniques that power cloud computing can provide highly integrated, dynamic, and responsive security controls – this is already happening (a simple sketch follows at the end of this post). This series will lay out the key differences, with suggestions for where security professionals should focus. Hopefully, by the end, you will look at the cloud and cloud security in a new light, and agree that the cloud isn’t just the latest type of outsourcing.
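To give a taste of what that looks like, here is a hedged sketch of a Software Defined Security control. It assumes AWS and the boto3 SDK, and the policy (no SSH open to the entire Internet) is just an example. The point is that the same APIs that build the infrastructure can monitor and fix it.

```python
# Minimal Software Defined Security sketch: use the cloud's own APIs to
# find and revoke security group rules that expose SSH to the world.
# Assumes AWS and boto3; the policy is an illustrative example only.
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in group.get("IpPermissions", []):
        if perm.get("FromPort") == 22 and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        ):
            # Responsive control: revoke the offending rule automatically.
            ec2.revoke_security_group_ingress(
                GroupId=group["GroupId"], IpPermissions=[perm]
            )
            print("revoked world-open SSH on", group["GroupId"])
```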


Defending Against Application Denial of Service: Attacking the Stack

In our last post, we started digging into ways attackers target standard web servers, protocols, and common pages to impact application availability. Those attacks are surface-level, low-hanging fruit, because they can be executed via widely available tools wielded by unsophisticated attackers. If you think of a web application as an onion, there always seems to be another layer you can peel back to expose additional attack surface. The next layer we will evaluate is the underlying application stack used to build the application. One of the great things about web applications is the availability of fully assembled technology stacks, making it trivial to roll out the infrastructure to support a wide variety of applications. But anything widely available inevitably becomes an attack target. The best example of this, in the context of an availability attack, is how hash tables can be exploited to crush web servers.

Hash Collision Attacks

We won’t get into advanced programming, but you need some context to understand this attack. A hash table is used to map specific keys to values by assigning the value of the key to a specific slot in an array. This provides a very fast way to search for things. On the downside, multiple values may end up in the same slot, which creates a hash collision that needs to be dealt with by the application, requiring significant additional processing. Hash collisions are normally minimized, so the speed trade-off is usually worthwhile. But if an attacker understands the hashing function used by the application, they can cause excessive hash collisions, forcing the application to compensate and consume extra resources to manage the hashing function. If enough hash collisions occur… you guessed it: the application can’t handle the workload and goes down. This attack was weaponized as HashDoS, an attack tool that leverages the fact that most web application stacks use the same hashing algorithm within their dictionary tables. With knowledge of this hashing algorithm, the attacker can send a POST request with many variables to create hash table chaos and render the application useless.

Mitigation for this attack requires the ability to discard messages with too many variables – typically implemented within a WAF (web application firewall), as sketched below – or to randomize the hash function using application-layer logic. A good explanation of this attack using cats explains HashDoS in layperson’s terms. Remember that any capabilities within the application stack can be exploited, and given the open source nature of these stacks they probably will be. So diligence in selecting a stack, ensuring secure implementation, and tracking security notices and implementing patches are all critical to ensure application security and availability.
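Here is a minimal sketch of that "too many variables" mitigation, written as WSGI middleware. The 1,000-field cap is an assumed policy value; randomized hash seeds (now standard in most language runtimes) are the complementary application-layer fix mentioned above.

```python
# Minimal sketch: reject POST requests carrying an absurd number of form
# variables before they ever reach the application's hash tables.
from io import BytesIO
from urllib.parse import parse_qsl

MAX_FORM_FIELDS = 1000  # assumed policy value

class PostVariableLimit:
    def __init__(self, app, max_fields=MAX_FORM_FIELDS):
        self.app = app
        self.max_fields = max_fields

    def __call__(self, environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            size = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(size)
            # parse_qsl returns a list of (name, value) pairs, so counting
            # fields never inserts attacker-chosen keys into a hash table.
            fields = parse_qsl(body.decode("utf-8", "replace"))
            if len(fields) > self.max_fields:
                start_response("413 Request Entity Too Large",
                               [("Content-Type", "text/plain")])
                return [b"too many form variables"]
            environ["wsgi.input"] = BytesIO(body)  # replay body downstream
        return self.app(environ, start_response)
```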
Targeting the Database

As part of the application stack, databases tend to get overlooked as a denial of service attack target. Many attackers try to extract the data in the database and then exfiltrate it, so knocking down the database would be counterproductive. But when the mission is to impact application availability, or to use a DoS as cover for exfiltration, the database can be a soft target – because in some way, shape, or form, the web application depends on the database. If you recall our Defending Against Denial of Service Attacks paper, we broke DoS attacks into network-based volumetric attacks and application-layer attacks. The issue with databases is that they can be attacked using both tactics. Application servers connect to the database using some kind of network, so a volume attack on that network segment can impact database availability. If the database itself is exploited, the application is also likely to go down. Either way the application is out of business.

Database DoS Attacks

If we dig a little deeper into the attacks, we find that one path is to crush databases using deliberately wasteful queries. Other attacks target simple vulnerabilities that have never been patched, mostly because the need for continuous database uptime interferes with patching, so it happens sporadically or not at all. Again, you don’t need to be a brain surgeon to knock a web application offline. Here are some of the attack categories:

  • Abuse of Functions: This type of attack is similar to the slow HTTP attacks mentioned in the last post – attackers use database functionality against you. For example, if you restrict failed logins, they may blast your database with bad password requests to lock legitimate users (or applications) out. Another example involves attackers taking advantage of database autoscaling, blasting the database with requests until so many instances are running that it falls over.
  • Complex Queries: If the attacker gives the database too much work to do, it will fall over. There are many techniques, including nested queries and recursion, Cartesian joins, and the IN operator, which can overwhelm the database. The attacker needs to be able to inject a SQL query into the database directly, or from within the application, for this to work – which also suggests where to block these attacks. We will talk about defenses below.
  • Bugs and Defects: In these cases the attacker targets a known database vulnerability, including queries of death and buffer overflows, to take down the database. With new functionality being introduced constantly, database attack surface continues to grow. And even if the database vendor identifies the issue and produces a patch (not a sure thing), finding a maintenance window to patch remains challenging in many operational environments.
  • Application Usage: Finally, the way the application uses the database can be gamed to cause an outage. The best example of this is SQL injection, but that attack is rarely used to knock over databases. Also consider the login and store locator page attacks we mentioned in the last post, as well as shopping cart and search engine attacks (to be covered later), as additional examples of application misuse that can impact availability.

Database DoS Defenses

The tactics used to defend against database denial of service attacks really reflect good database security practices. Go figure. Configuration: Strong


Trustwave Acquires Application Security Inc.

It has been a while since we had an acquisition in the database security space, but today Trustwave announced it acquired Application Security Inc. – commonly called “AppSec” by many who know the company. About 10 years ago, during my employment with IPLocks, I wrote my first competitive analysis paper on our principal competitor: another little-known database security company called Application Security, Inc. Every quarter for four years, I updated those competitive analysis sheets to keep pace with AppSec’s product enhancements and competitive tactics in sales engagements. Little did I know I would continue to examine AppSec’s capabilities on a quarterly basis after joining Securosis – but rather than solely looking at competitive positioning, I have been gearing my analysis toward how features map to customer inquiries, and tracking customer experiences during proof-of-concept engagements. Of all the products I have tracked, I have been following AppSec the longest.

It feels odd to be writing this for a general audience, but this deal is pretty straightforward, and it needed to happen. Application Security was one of the first database security vendors, and while they were considered a leader in the 2004 timeframe, their products have not been competitive for several years. AppSec still has one of the best database assessment products on the market (AppDetectivePRO), and one of the better – possibly the best – database vulnerability research teams backing it. But Database Activity Monitoring (DAM) is now the key driver in that space, and AppSec’s DAM product (dbProtect) has not kept pace with customer demand in terms of performance, integration, ease-of-use, or out-of-the-box functionality. A “blinders on” focus can be both admirable and necessary for very small start-ups to deliver innovative technologies to markets that don’t understand their new technology or value proposition, but as markets mature vendors must respond to customers and competitors. In AppSec’s early days, very few people understood why database security was important. But while the rest of the industry matured and worked to build enterprise-worthy solutions, AppSec turned a deaf ear to criticism from would-be customers and analysts. Today the platform has reasonable quality, but is not much more than an ‘also-ran’ in a very competitive field.

That said, I think this is a very good purchase for Trustwave. It means several things for Trustwave customers:

  • Trustwave has filled a compliance gap in its product portfolio – specifically for PCI. Trustwave is focused on PCI-DSS, and data and database security are central to PCI compliance. Web and network security have been part of their product suite, but database security has not. Keep in mind that DAM and assessment are not specifically prescribed for PCI compliance like WAF is; but the vast majority of customers I speak with use DAM to audit activity, discovery to show what data stores are being used, and assessment to prove that security controls are in place. Trustwave should have acquired this technology a while ago.
  • The acquisition fits Trustwave’s model of buying decent technology companies at low prices, then selling a subset of their technology to existing customers where they already know demand exists. That could explain why they waited so long – balancing customer requirements against their ability to negotiate a bargain price.
  • Trustwave knows what their customers need to pass PCI better than anyone else, so they will succeed with this technology in ways AppSec never could. This puts Trustwave on a more even footing for customers who care more about security and don’t just need to check a compliance box, and gives Trustwave a partial response to Imperva’s monitoring and WAF capabilities.
  • I think Trustwave is correct that AppSec’s platform can help with their managed services offering – Monitoring and Assessment as a Service appeals to smaller enterprises and mid-market firms who don’t want to own or manage database security platforms.

What does this mean for AppSec customers? It is difficult to say – I have not spoken with anyone from Trustwave about this acquisition, and I am unable to judge their commitment to putting engineering effort behind the AppSec products. Nor can I tell whether they intend to keep the research team which has been keeping the assessment component current. Trustwave tweeted during the official announcement that “.@Trustwave will continue to develop and support @AppSecInc products, DbProtect and AppDetectivePRO”, but that could be limited to features compliance buyers demand, without closing the performance and data collection gaps that are problematic for DAM customers. I will blog more on this as I get more information, but expect them to provide what’s required to meet compliance and no more.

And lastly, for those keeping score at home, AppSec is the 7th Database Activity Monitoring acquisition – after Lumigent (BeyondTrust), IPLocks (Fortinet), Guardium (IBM), Secerno (Oracle), Tizor (IBM via Netezza), and Sentrigo (McAfee) – leaving Imperva and GreenSQL as the last independent DAM vendors.


Security Awareness Training Evolution [New Paper]

Everyone has an opinion about security awareness training, and most of them are negative. Waste of time! Ineffective! Boring! We have heard them all. And the criticism isn’t wrong – much of the content driving security awareness training is lame. Which is probably the kindest thing we can say about it. But it doesn’t need to be that way. Actually, it cannot remain this way – there is too much at stake. Users remain the lowest-hanging fruit for attackers, and as long as that is the case attackers will continue to target them. Educating users about security is not a panacea, but it can and does help.

It’s not like a focus on security awareness training is the flavor of the day for us. We have been talking about the importance of training users for years, as unpopular as it remains. The main argument against security training is that it doesn’t work. That’s just not true. But it doesn’t work for everyone. Like security in general, there is no 100%. Some employees will never get it – mostly because they just don’t care – but they bring enough value to the organization that no matter what they do (short of a felony) they are sticking around. Then there is everyone else. Maybe it’s 50% of your folks, or perhaps 90%. Regardless of the number of employees who can be influenced by better security training content, wouldn’t it make your life easier if you didn’t have to clean up after them? We have seen training reduce the amount of time spent cleaning up easily avoidable mistakes.

We are pleased to announce the availability of our Security Awareness Training Evolution paper. It discusses how training needs to evolve, and presents a path to improve training content and ensure the right support and incentives are in place for training to succeed. We would like to thank our friends at PhishMe for licensing this paper. Remember, it is through the generosity of our licensees that you get to read our stuff for this nifty price. Here is another quote from the paper to sum things up:

As we have said throughout this paper, employees are clearly the weakest link in your security defenses, so without a plan to actively prepare them for battle you have a low chance of success. It is not about making every employee a security ninja – instead focus on preventing most of them from falling for simplistic attacks. You will still be exploited, but make it harder for attackers so you suffer less frequent compromise. Security-aware employees protect your data more effectively – it’s as simple as that, regardless of what you hear from naysayers.

Check out the page in our Research Library, or download the Security Awareness Training Evolution (PDF) paper directly.


How to Edit Our Research on GitHub

I am still experimenting with posting research, from drafts through the editing process, on GitHub. No promises that we will keep doing this – it depends on the reaction we get. From a workflow standpoint it isn’t much more effort for us, but I like the radical transparency it enables. I just posted a second paper, which is still very much incomplete, so I want to offer some instructions on how to edit or propose changes. This is just quick and dirty, and you should review the GitHub Help to really understand the process. GitHub is meant for code but works for any text files. Unlike any other option we found, GitHub offers an open, transparent way to not only collect feedback, but also to solicit and manage direct edits. Once you set it up the first time it is pretty easy – you subscribe, pull down a copy of the research, make your own edits, then send us a request to incorporate your changes into our master copy. Another nice feature is that GitHub tracks the entire editing process, including our internal edits. For transparency that’s sweet. I don’t expect many people to take advantage of this. I am currently the only Securosis analyst doing it, and based on your feedback we will decide whether we should continue. Even if you don’t push changes or comments, let us know what you think. Here’s how:

  • We will post all our research at https://github.com/Securosis. Right now I still need to move the Pragmatic Network Security Management project over there, because I was still learning the process when I posted that one. For now you can find the research split between those two places.
  • If you only want to leave a comment, you can do so here on the blog post, or as an ‘Issue’ on GitHub. Blog comments can be anonymous, but GitHub requires an account to create or edit an issue. Click ‘Issues’ and then simply add yours.
  • If you want to actually make edits, go for it! To do this you need to both create a GitHub account and install the software. For you non-command-line types, you can download official GUI versions here. If you are running Linux, git is probably already installed. If you try to use the git command under OS X 10.9 Mavericks, the system should install the software if necessary.
  • Next, fork a copy of our repository. Go to https://github.com/Securosis, click the Fork button, and follow the instructions.
  • That fork isn’t on your computer for editing yet, so synchronize your repository. This pulls the key files down to your system. On the web page click “Clone to Desktop”; it will launch your client, and you can choose where to save the fork.
  • Edit away locally. This doesn’t affect our canonical version – just your fork of it. When you are done, commit your changes in your desktop GUI by clicking Changes, then Commit and Sync. Don’t forget to comment on your changes so we know why you submitted them. (Rough command-line equivalents are sketched at the end of this post.)
  • Then submit a pull request. This notifies us that you made changes. We will run through them and accept or decline. It is our research, after all.

This is all new to us, so we need your feedback on whether it is worth continuing. We know many of you might be interested in tracking the research but not participating, and that’s fine, but if you don’t email or send us comments we won’t know you like it.
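For command-line types, here is roughly what that workflow looks like with plain git. The username, repository name, branch name, and commit message are all placeholders – substitute your own:

```
# Rough command-line equivalent of the GUI steps above.
# YOUR-USERNAME and research-paper are placeholders.
git clone https://github.com/YOUR-USERNAME/research-paper.git
cd research-paper
git checkout -b my-edits            # optional: work on a topic branch
# ... edit the paper's text files ...
git commit -am "Tighten the abstraction section"
git push origin my-edits
# Finally, on GitHub, open a pull request from your branch so we can
# review and accept (or decline) the proposed changes.
```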


Defending Against Application Denial of Service: Attacking the Application Server

It has been a while, but it is time to jump back into the Application Denial of Service series with both feet. As we described in the introduction, application denial of service can be harder to deal with than volume-based network DDoS because it is not always obvious what’s an attack and what’s legitimate traffic. Unless you are running all your traffic through a scrubbing center, your applications will remain targets for attacks that exploit the architecture, application stacks, business logic, and even legitimate functionality of the application. As we start digging into specific AppDoS tactics, we will start with attacks that target the server and infrastructure of your application. Given the popularity and proliferation of common application stacks, attackers can attack millions of sites with a standard technique, most of which have been in use for years. But not enough web sites have proper mitigations in place. Go figure. Server and infrastructure attacks are the low-hanging fruit of application denial of service, and will remain so as long as they continue to work. So let’s examine the various types of application infrastructure attacks and some basic mitigations to blunt them.

Exploiting the Server

Most attacks that directly exploit web servers capitalize on features of the underlying standards and/or protocols that run the web, such as HTTP. This makes many of these attacks very hard to detect because they look like legitimate requests – by the time you figure out it’s an attack, your application is down. Here are a few representative attack types:

  • Slowloris: This attack, originally built by Robert ‘RSnake’ Hansen, knocks down servers by slowly delivering request headers, forcing the web server to keep connections open without ever completing the requests. This rapidly exhausts the server’s connection pool.
  • Slow HTTP Post: Similar to Slowloris, Slow HTTP Post delivers the message body slowly. This serves the same purpose of exhausting resources on the web server. Both Slowloris and Slow HTTP Post are difficult to detect because their requests look legitimate – they just never complete. The R-U-Dead-Yet attack tool automates launching a Slow HTTP Post attack via an automated user interface. To make things easier (for your adversaries), RUDY is included in many penetration testing tool packages, making it simple to knock down vulnerable web servers.
  • Slow Read: Yet another variation on the Slowloris approach, Slow HTTP Read involves shrinking the response window on the client side. This forces the server to send data to the client slowly to stay within the response window. The server must keep connections open to ensure the data is sent, which means it can be quickly overwhelmed with connections.

As with RUDY, these techniques are already weaponized and available for easy download and usage. You can expect innovative attackers to combine and automate these tactics into weapons of website destruction (as XerXeS has been portrayed). Regardless of packaging, these tactics are real and need to be defended against. Mitigating these server attacks typically requires a combination of web server configuration with network-based and application-based defenses. Keep in mind that ultimately you can’t really defend the application from these kinds of attacks, because they just take advantage of web server protocols and architecture. But you can blunt their impact with appropriate controls. For example, Slowloris and Slow HTTP Post require tuning the web server to increase the maximum number of connections, prevent excessive connections from the same IP address, and allow a backlog of connection requests to be stored – to avoid losing legitimate application traffic.
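Here is a minimal sketch of the underlying principle – refusing to hold a connection open for a client that won’t finish its request. It uses Python’s asyncio purely for illustration; production web servers implement the same idea through configuration, and the deadline value is an assumption:

```python
# Minimal sketch: drop connections that fail to complete their HTTP
# headers within a deadline, so slow-request attacks (Slowloris and kin)
# cannot pin the connection pool open indefinitely.
import asyncio

HEADER_DEADLINE = 10  # seconds to deliver complete headers (assumed policy)

async def handle(reader, writer):
    try:
        # readuntil() waits for the blank line that ends the headers;
        # wait_for() enforces the deadline a Slowloris client will miss.
        await asyncio.wait_for(
            reader.readuntil(b"\r\n\r\n"), timeout=HEADER_DEADLINE
        )
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
    except (asyncio.TimeoutError, asyncio.IncompleteReadError):
        pass  # slow or broken client: reclaim the connection immediately
    finally:
        writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```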
Network-based defenses on WAFs and IPSes can be tuned to look for certain web connection patterns and block offending traffic before the server becomes overwhelmed. The best approach is actually all of the above. Don’t just tune the web server or install network-based protection in front of the application – also build web applications to limit header and body sizes, and to close connections within a reasonable timeframe to ensure the connection pool is not exhausted. We will talk about building AppDoS protections into applications later in this series. An attack like Slow HTTP Read, which games the client side of the connection, requires similar mitigations. But instead of looking for ingress patterns of slow activity (on either the web server or other network devices), you need to look for this kind of activity on the egress side of the application. Likewise, fronting the web application with a CDN (content delivery network) service can alleviate some of these attacks, as your web application server is a step removed from the clients, and insulated from slow reads. For more information on these services, consult our Quick Wins with Website Protection Services paper.

Brute Force

Another tactic is to overwhelm the application server – not with network traffic, but by overloading application features. We will cover an aspect of this later, when we discuss search engine and shopping cart shenanigans. For now let’s look at more basic features of pretty much every website, such as SSL handshakes and serving common web pages like the login screen, password reset, and store locator. These attacks are so effective at overwhelming application servers because functions like SSL handshakes, and pages that require database calls, are very compute intensive. Loading a static page is easy, but checking login credentials against the hashed database of passwords is a different animal. First let’s consider the challenges of scaling SSL. On some pages, such as the login page, you need to encrypt traffic to protect user credentials in motion; SSL is a requirement for such pages. So why is scaling SSL handshaking such an issue? As described succinctly by Microsoft in this Tech Bulletin, there are 9 distinct steps in establishing an SSL handshake, many of which require cryptographic and key generation operations. If an attacker uses a rented botnet to establish a million or so SSL sessions at the same time, guess what happens? It is not a bandwidth issue – it is a compute problem – and the application becomes unavailable because no more SSL


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.