
Thank You

As you may have noticed, I haven’t been blogging much the past month or so. 2013 has been an… interesting… year, filled with personal and professional highs and lows. Our third child was born, and we were back in the thick of things with 3 kids aged four and under. Don’t even get me started on the near nonstop string of minor illnesses. There’s nothing like stomach flu twice in a month. Once on a travel day – thus the last month of minimal blogging. On the other hand my son took his first steps just shy of 7 months old, I had one of my best business years yet (yep, Securosis grew once again), and I experienced a career highlight by presenting my first technical Black Hat session on 90 minutes’ notice (including a technical demo).

Work-wise I am having more fun than I have in many years – largely due to spending more time hands-on with programming and the cloud. This is the kind of defensive technical security research I have always wanted to drive forward. And the bonus? My daughters love superheroes, robots, and science (plus princesses and the color pink – I’m not trying to turn them into boys or anything).

All this is possible because of you reading the site. Every day thousands of you read our content (or point your bots at us), and a fair few hire us to help you with research and analysis. We aren’t a billion-dollar company with a big name – just a couple handfuls of full- and part-time security obsessives who get the opportunity to do what we love, and enjoy an amazing lifestyle with our families in the process. We never, not for an instant, forget who we serve and who makes this all possible. We only hope we meet our end of the bargain by providing you something useful.

We have a lot of new things lined up for 2014. The nature of our work is pulling us in new and exciting directions, and we try to adapt every year to the ever-changing landscape of social media and online tools we use to engage and deliver content. Some of it works and some doesn’t – that’s life.

As 2013 comes to a close, I just wanted to personally thank you all. (You know, with a heartfelt generic blog post.) And I would also like to thank our contributors… some of whom get paid, and others who keep us on our toes with their insights. Mort, Pepper, Gattaca, Myrcurial, Gunnar, and Gal. Plus Mike and Adrian – the best two partners I could ask for. Or not ask for – they actually sort of just showed up, uninvited.


Security Management 2.5: Platform Evolution

This post discusses evolutionary changes in SIEM, focusing on how underlying platform capabilities have evolved to meet the requirements discussed in the last post. To give you a sneak peek, it is all about doing more with more data. The change we have seen in these platforms over the past few years has been mostly under the covers. It’s not sexy, but this architectural evolution was necessary to make sure the platforms scaled and could perform the needed analysis moving forward. The problem is that most folks cannot appreciate the boatload of R&D required to give many platforms a proverbial brain transplant. We will start with the major advancements.

Architectural Evolution

To be honest, we downplayed the importance of SIEM’s under-the-hood changes in our previous paper. The “brain transplant” was the significant change that enabled a select few vendors to address the performance and scalability issues plaguing the first generation of platforms, which were built on RDBMS. For simplicity’s sake we skipped over the technical details of how and why. Now it’s time to explore that evolution.

The fundamental change is that SIEM platforms are no longer based on a single massive centralized service. By leveraging a distributed approach – a cooperative cluster of many servers independently collecting, digesting, and processing events – policies are distributed across multiple systems to handle the load more effectively and efficiently. If you need to support more locations or pump in a bunch more data, just add nodes to the cluster. If this sounds like big data, that’s because it essentially is – several platforms leverage big data technologies under the hood. The net result is parallel event processing resources deployed ‘closer’ to event sources, faster event collection, and systems designed to scale without massive reconfiguration. This architecture enables different deployment models; it also better accommodates distributed IT systems, cloud providers, and virtual environments – which increasingly constitute the fabric of modern technology infrastructure.

The secret sauce making it all possible is distributed system management. It is easy to say “big data”, but much harder to do heavy-duty security analysis at scale. Later, when we discuss proof-of-concept testing and final decision-making, we will explore substantiating these claims. The important parts, though, are the architectural changes to enable scaling and performance, and support for more data sources. Without this shift nothing else matters.

Serving Multiple Use Cases

The future of security management is not just about detecting advanced threats and malware, although that is the highest-profile use case. We still need to get work done today, which means adding value to the operations team, as well as to compliance and security functions. This typically involves analyzing vulnerability assessment information so security teams can ensure basic security measures are in place. You can analyze patch and configuration data similarly to help operations teams keep pace within dynamic – and increasingly virtual – environments. We have even seen cases where operations teams detected application DoS attacks through infrastructure event data. This kind of derivative security analysis is the precursor to allowing risk and business analytics teams to make better business decisions – to redeploy resources, take applications offline, etc. – by leveraging data collected from the SIEM.
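To make the scale-by-adding-nodes idea a bit more concrete, here is a minimal, hypothetical Python sketch of spreading event sources across a cluster of collector nodes using a consistent-hash ring. The node names, the routing scheme, and the SHA-1 hashing are illustrative assumptions – no specific SIEM product works exactly this way.

```python
import hashlib
from bisect import bisect_right

class CollectorRing:
    """Toy consistent-hash ring that assigns event sources to SIEM collector
    nodes, so adding a node only remaps a fraction of the sources.
    Illustrative only -- node names and routing scheme are assumptions."""

    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []          # sorted list of (hash, node) tuples
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def node_for(self, event_source):
        h = self._hash(event_source)
        idx = bisect_right([k for k, _ in self.ring], h) % len(self.ring)
        return self.ring[idx][1]

ring = CollectorRing(["collector-1", "collector-2", "collector-3"])
print(ring.node_for("fw-dmz-01"))      # route this firewall's events here
ring.add_node("collector-4")           # scale out: only a fraction of sources move
print(ring.node_for("fw-dmz-01"))
```

The point of using a ring (rather than a simple modulo) is that adding a collector only remaps a fraction of the event sources, which is the property that lets these architectures grow without massive reconfiguration.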
Enhanced Visibility

Attackers continually shift their strategies to evade detection, increase efficiency, and maximize the impact of their attacks. Historically one of SIEM’s core value propositions has been an end-to-end view, enabled by collecting all sorts of different log files from devices all around the enterprise. Unfortunately that turned out not to be enough – log files and NetFlow records rarely contain enough information to detect or fully investigate an attack. We needed better visibility into what is actually happening within the environment, rather than expecting analysts to wade through zillions of event records to figure out when they are under attack. We have seen three technical advances which, taken together, provide the evolutionary benefit of much better visibility into the event stream. In no particular order they are more (and better) data, better analysis techniques, and better visualization capabilities.

  • More and Better Data: Collect application events, full packet capture – not just metadata – and other sources that taxed older SIEM systems. In many cases the volume or format of the data was incompatible with the underlying data management engine.
  • Better Analysis: These new data sources enable more detailed analysis, longer retention, and broader coverage; together those improved capabilities provide better depth and context for our analyses.
  • Better Visualization: Enhanced analysis, combined with advanced programmatic interfaces and better visualization tools, substantially improves the experience of interrogating the SIEM. Old-style dashboards, with simplistic pie charts and bar graphs, have given way to complex data representations that much better illuminate trends and highlight anomalous activity.

These might look like simple incremental improvements to existing capabilities, but combined they enable a major improvement in visibility.

Decreased Time to Value

The most common need voiced by SIEM buyers is for their platforms to provide value without requiring major customization and professional services. Customers are tired of buying SIEM toolkits, and then needing to invest time and money to build a custom SIEM system tailored to their particular environment. As we mentioned in our previous post, collecting an order of magnitude more data requires a similar jump in analysis capabilities – the alternative is to be drowned in a sea of alerts. The same math applies to deployment and management – monitoring many more types of devices and analyzing data in new ways means platforms need to be easier to deploy and manage simply to maintain the old level of manageability. The good news is that SIEM platform vendors have made significant investments to support more devices and offer better installation and integration, which combined make deployment less resource intensive. As these platforms integrate the new data sources and enhanced visibility described above, the competitiveness of a platform can be determined by the simplicity and intuitiveness of its management interface, and the availability of out-of-the-box policies and


Security Management 2.5: Changing Needs

Today’s post discusses the changing needs and requirements organizations have for security management, which is just a fancy way of saying “Here’s why customers are unhappy.” The following items are the main discussion points when we speak with end users, and the big-picture reasons motivating SIEM users to consider alternatives.

The Changing Needs of Security Management

  • Malware/Threat Detection: Malware is by far the biggest security issue enterprises face today. It is driving many of the changes rippling through the security industry, including SIEM and security analytics. SIEM is designed to detect security events, but malware is designed to be stealthy and evade detection. You may be looking for malware, but you don’t always know what it looks like. Basically you are hunting for anomalies that kinda-sorta could be an attack, or just odd stuff that may look like an infection. The days of simple file-based detection are gone, or at least anything resembling the simple malware signature of a few years ago. You need to detect new and novel forms of advanced malware, which requires adding different data sources to the analysis and observing patterns across different event types. We also need to leverage emerging security analytics capabilities to examine data in new and novel ways. Even if we do all this, it might still not be enough. This is why feeding third-party threat intelligence into the SIEM is becoming increasingly common – allowing organizations to look for attacks happening to others (a minimal matching sketch appears after this list).
  • Cloud & Mobile: As firms move critical data into cloud environments and offer mobile applications to employees and customers, the definition of ‘system’ now encompasses use cases outside the classical corporate perimeter, changing the definition and scope of the infrastructure to monitor. Compounding the issue is the difficulty of monitoring mobile devices – many of which you do not fully control – made harder by the lack of effective tools to gather telemetry and metrics from them. Even more daunting is the lack of visibility (basically log and event data) into what’s happening within your cloud service providers. Some cloud providers cannot provide infrastructure logs to customers, because their event streams combine events from all their customers. In some cases they cannot provide logs because there is simply no ‘network’ to tap – it’s virtual, and existing data collectors are useless. In other cases the cloud provider is simply not willing to share the full picture of events, and you may be contractually prohibited from capturing events. The net result is that you need to tackle security monitoring and event analysis in a fundamentally different fashion. This typically involves collecting the events you can gather (application, server, identity and access logs) and massaging them into your SIEM. For Infrastructure as a Service (IaaS) environments, you should look at adding your own cloud-friendly collectors in the flow of application traffic.
  • General Analytics: If you collect a mountain of data from all IT systems, much more information is available than just security events. This is a net positive, but it cuts both ways – some event analysis platforms are set up for IT operations first, and both security and business operations teams piggyback off that investment. In this case analysis, reporting, and visualization tools must be not just accessible to a wider audience (like security), but also optimized to do true correlation and analysis.
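As a rough illustration of the threat intelligence point above, here is a minimal Python sketch that flags events whose IP addresses appear in a third-party reputation feed. The feed contents, the event schema (src_ip/dst_ip/time fields), and the matching logic are all assumptions for illustration – real SIEM integrations normalize far more indicator types than bare IPs.

```python
from datetime import datetime, timezone

# Hypothetical third-party IP reputation feed (normally pulled over an API).
THREAT_FEED = {
    "203.0.113.45": "known C&C node",
    "198.51.100.7": "credential-stuffing source",
}

def enrich_events(events, feed=THREAT_FEED):
    """Flag any event whose source or destination IP appears in the feed.
    The event schema used here is an assumption for illustration."""
    alerts = []
    for evt in events:
        for field in ("src_ip", "dst_ip"):
            ip = evt.get(field)
            if ip in feed:
                alerts.append({
                    "time": evt.get("time", datetime.now(timezone.utc).isoformat()),
                    "matched_ip": ip,
                    "reason": feed[ip],
                    "event": evt,
                })
    return alerts

sample = [{"time": "2013-12-20T10:02:11Z", "src_ip": "10.1.2.3",
           "dst_ip": "203.0.113.45", "action": "allowed"}]
for alert in enrich_events(sample):
    print(alert["matched_ip"], "->", alert["reason"])
```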
What Customers Really Want

These examples are what we normally call “use cases”, which we use to reflect the business drivers creating the motivation to take action. These situations are significant enough (and customers are unhappy enough) for them to consider jettisoning current solutions and going through the pain of re-evaluating their requirements and current deficiencies. Those bullet points do represent the high-level motivations, but they fail to tell the whole story. They are the business reasons firms are looking, but they fail to capture why many of the current platforms fail to meet expectations. For that we need to take a slightly more technical look at the requirements.

Deeper Analysis Requires More Data

To address the use cases described above, especially malware analysis, more data is required. This necessarily means more event volume – such as capturing and storing full packet streams, even if only for a short period. It also means more types of data – such as human-readable data mixed in with machine logs and telemetry from networks and other devices. It includes gathering and storing complex data types, such as binary or image files, which are not easy to parse, store, or even categorize.

The Need for Information Requires Better and More Flexible Analysis

Simple correlation of events – who, what, and when – is insufficient for the kind of security analysis required today. This is not only because those attributes are insufficient to distinguish bad from good, but also because data analysis approaches are fundamentally evolving. Most customers we speak with want to profile normal traffic and usage; this profile helps them understand how systems are being used, and also helps detect anomalies likely to indicate misuse. There is some fragmentation in how customers use analysis – some choose to leverage SIEM for more real-time alerting and analysis; others want big-picture visibility, created by combining many different views for an overall sense of activity. Some customers want fully automated threat detection, while others want more interactive ad hoc and forensic analysis. To make things even harder for the vendors, today’s hot analysis methods could very well be irrelevant a year or two down the road. Many customers want to make sure they can update analytics as requirements develop – optimized hard-wired analytics are now a liability rather than an advantage for these products.

The Velocity of Attacks Requires Threat Intelligence

When we talk about threat intelligence we do not limit the discussion to things like IP reputation or ‘fingerprint’ hashes of malware binaries – those features are certainly in wide use, but the field of threat intelligence includes far more. Some threat intelligence feeds look at


Advanced Endpoint and Server Protection [New Series]

Endpoint protection has become the punching bag of security. Every successful attack seems to be blamed on a failure of endpoint protection. Not that this is totally unjustified – most endpoint protection solutions have failed to keep pace with attackers. In our 2014 Endpoint Security Buyers Guide, we discussed many of the issues around endpoint hygiene and mobility. We also explored the human element underlying many of these attacks, and how to prepare your employees for social engineering attacks, in Security Awareness Training Evolution. But realistically, hygiene and awareness won’t deter an advanced attacker for long. We frequently say advanced attackers are only as advanced as they need to be – they take the path of least resistance. But the converse is also true: when this class of adversaries needs advanced techniques they use them. Traditional malware defenses such as antivirus don’t stand much chance against a zero-day attack.

So our new series, Advanced Endpoint and Server Protection, will dig into protecting devices against advanced attackers. We will highlight a number of new alternatives for preventing and detecting advanced malware, and examine new techniques and tools to investigate attacks and search for indicators of compromise within your environment (a minimal file-hash matching sketch appears below). But first let’s provide some context for what has been happening with traditional endpoint protection, because you need to understand the current state of AV technology for perspective on how these advanced alternatives help.

AV Evolution

Signature-based AV no longer works. Everyone has known that for years. It is not just that blocking a file you know is bad isn’t enough any more – there are simply too many bad files, and new ones crop up too quickly, to compare every file against a list of known-bad files. The signature-based AV algorithm still works as well as it ever did, but it is no longer even remotely adequate, nor is it comprehensive enough to catch the varying types of attacks in the wild today. So the industry adapted, broadening the suite of endpoint protection technologies to include host intrusion prevention, which blocks known-bad actions at the kernel level. The industry also started sharing information across its broad customer base to identify IP addresses known to do bad things, and files which contain embedded malware. That shared information is known as threat intelligence, and it can help you learn from attacks targeting other organizations. Endpoint security providers also keep adding modules to their increasingly broad and heavy endpoint protection suites – things like server host intrusion prevention, patch/configuration management, and even full application whitelisting – all attempting to ensure no unauthorized executables run on protected devices.

To be fair, the big AV vendors have not been standing still. They are adapting and working to broaden their protection to keep pace with attackers. But even with all their tools packaged together, it cannot be enough. It’s software, and it will never be perfect or defect-free. Their tools will always be vulnerable and under attack. We need to rethink how we do threat management as an industry, in light of these attacks and the cold hard reality that not all of them can be stopped. We have been thinking about what the threat management process will come to look like. We presented some ideas in the CISO’s Guide to Advanced Attackers, but that was focused on what needs to happen to respond to an advanced attack.
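As a narrow illustration of what “searching for indicators of compromise” can mean in practice, here is a minimal Python sketch that sweeps a directory tree for files matching known-bad SHA-256 hashes. The indicator list and the file-hash-only approach are simplifying assumptions – real IOC hunting also covers registry keys, mutexes, processes, and network indicators.

```python
import hashlib
import os

# Placeholder indicator list -- real IOC feeds are far richer than file hashes.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # empty file, as a demo
}

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def scan_tree(root):
    """Yield (path, digest) for every file whose hash matches an indicator."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                digest = sha256_of(path)
            except OSError:
                continue  # unreadable file; skip rather than abort the sweep
            if digest in KNOWN_BAD_SHA256:
                yield path, digest

for hit in scan_tree("/tmp"):
    print("possible compromise indicator:", hit)
```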
Now we want to document a broader threat management process, which we will refine through 2014.

Threat Management Reimagined

Threat management is a hard concept to get your arms around. Where does it start? Where does it end? Isn’t threat management really just another way of describing security? Those are hard questions without absolute answers. For the purposes of this research, threat management is about dealing with an attack. It’s not about compliance, even though most mandates are responses to attacks that happened 5 years ago. It’s not really about hygiene – keeping your devices properly configured and patched is good operational practice, not tied to a specific attack. It’s not about finding resources to actually execute on these plans, nor is it an issue of communicating the value of the security team. Those are all responsibilities of the broader security program. Threat management is a subset of the larger security program – typically the most highly visible capability. So let’s explain how we think about threat management (for the moment, anyway) and let you pick it apart.

  • Assessment: You cannot protect what you don’t know about – that hasn’t changed. So the first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of anything you need to protect.
  • Prevention: Next you try to stop an attack from being successful. This is where most of the effort in security has gone for the past decade, with mixed (okay, lousy) results. A number of new tactics and techniques are modestly increasing effectiveness, but the simple fact is that you cannot prevent every attack. It has become a question of reducing your attack surface as much as practical. If you can stop the simplistic attacks, you can focus on the more advanced ones.
  • Detection: You cannot prevent every attack, so you need a way to detect attacks after they get through your defenses. There are a number of different options for detection – most based on watching for patterns that indicate a compromised device. The key is to shorten the time between when the device is compromised and when you discover it has been compromised.
  • Investigation: Once you detect an attack you need to verify the compromise and understand what it actually did. This typically involves a formal investigation, including a structured process to gather forensic data from devices, triage to determine the root cause of the attack, and searching to determine how broadly the attack has spread within your environment.
  • Remediation: Once you understand what happened you can put a


New Paper: Defending Against Denial of Service Attacks

Just in case you had nothing else to do during the holiday season, you can check out our latest research on Application Denial of Service Attacks. This paper continues our research into Denial of Service attacks after last year’s Defending Against Denial of Service Attacks research. As we stated back then, DoS encompasses a number of different tactics, all aimed at impacting the availability of your applications or infrastructure. In this paper we dig much deeper into application DoS attacks. For good reason – as the paper says:

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, requiring far fewer attack nodes and less bandwidth. A side benefit of requiring fewer nodes is simplified command and control, allowing more agility in adding new application attacks. Moreover, the attackers often take advantage of legitimate application features, making defense considerably harder.

We expect a continued focus on application DoS attacks over time, so we offer both an overview of the common types of attacks you will see and possible mitigations for each one. After reading this paper you should have a clear understanding of how your application availability will be attacked – and more importantly, what you can do about it. We would like to thank our friends at Akamai for licensing this content. Without the support of our clients our open research model wouldn’t be possible.

To sum up, here are some thoughts on defense: Defending against AppDoS requires a multi-faceted approach that typically starts with a mechanism to filter attack traffic, either via a web protection service running in the cloud or an on-premise anti-DoS device. The next layer of defense includes operational measures to ensure the application stack is hardened, including timely patching and secure configuration of components. Finally, developers must play their part by optimizing database queries and providing sufficient input validation to make sure the application itself cannot be overwhelmed through legitimate capabilities. Keeping applications up and running requires significant collaboration between development, operations, and security. This ensures not only that sufficient defenses are in place, but also that a well-orchestrated response maintains and/or restores service as quickly as possible. It is not a matter of if but when you are targeted by an Application Denial of Service attack.

Check out the landing page for the paper, or you can download the Defending Against Application Denial of Service Attacks PDF directly.
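To give one concrete (and hedged) example of the “filter attack traffic” layer described above, here is a minimal Python sketch of per-client token-bucket rate limiting, one common way to shed abusive request volumes before they reach expensive application logic. The thresholds and client identifier are illustrative assumptions, not recommendations from the paper.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each client may make `rate` requests per second,
    with bursts up to `burst`. Thresholds here are purely illustrative."""

    def __init__(self, rate=5.0, burst=20):
        self.rate, self.burst = rate, burst
        self.state = defaultdict(lambda: {"tokens": burst, "ts": time.monotonic()})

    def allow(self, client_id):
        bucket = self.state[client_id]
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        bucket["tokens"] = min(self.burst,
                               bucket["tokens"] + (now - bucket["ts"]) * self.rate)
        bucket["ts"] = now
        if bucket["tokens"] >= 1:
            bucket["tokens"] -= 1
            return True
        return False  # shed or challenge this request instead of serving it

limiter = TokenBucket(rate=2.0, burst=5)
for i in range(8):
    print(i, limiter.allow("192.0.2.10"))  # the first 5 pass, then throttled
```

In practice this sort of limiting usually lives in the cloud scrubbing service, anti-DoS appliance, or reverse proxy rather than in the application itself.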


Security Management 2.5: Replacing Your SIEM Yet? [New Series]

Security Information and Event Management (SIEM) systems create a lot of controversy with security folks; they are one of the cornerstones on which the security program is built within every enterprise, yet SIEM simultaneously generates the most complaints and general angst. Two years ago Mike and I completed a research project, “SIEM 2.0: Time to Replace your SIEM?”, based upon a series of conversations with organizations who wanted more from their investment. Specifically they wanted more scalability, easier deployment, the ability to ‘monitor up the stack’ in the context of business applications, and better integration with enterprise systems (like identity).

Over the past two years the pace of customer demands, and of platform evolution to meet those demands, has accelerated. What we thought was the tail end of a trend – second-generation SIEMs improving scalability using purpose-built data stores – turned out to be the tip of the iceberg. Enterprises wanted to analyze more types of data, from more sources, with more (read: better) analysis capabilities, to derive better information and keep pace with advanced attackers. Despite solid platform upgrades from a number of SIEM vendors, these requirements have blossomed faster than vendors could respond. And sadly, some security vendors marketed “advanced capabilities” that were really the same old pig in a new suit, causing further chagrin and disappointment amongst their customers. Whatever the reason, here we are two years later, listening to the same tale from customers looking to replace their SIEM (again) in light of these new requirements.

You may feel like Bill Murray in Groundhog Day, reliving the past over and over again, but this time is different. The requirements have changed! Actually they have. The original architects of the early SIEM platforms could not have envisioned the kind of analysis required to detect attacks designed to evade SIEM tools. The attackers are thinking differently, which means defenders who want to keep pace need to rip up their old playbook – and very likely critically evaluate their old tools as well. Malware is now the major driver, but since you can’t really detect advanced attacks based on a file signature anymore, you have to mine data for security information in a whole new way. Cloud computing and mobile devices are disrupting the technology infrastructure. And the collection and analysis of these and many other data streams (like network packet capture) are bursting the seams of SIEM.

It doesn’t stop at security alerting either. Other organizations, from IT operations to risk to business analytics, also want to mine the collected security information, looking for new ways to streamline operations, maintain availability, and optimize the environment. Moving forward, you’ll need to heavily leverage your investments in security monitoring and analysis technologies. If that resource can’t be leveraged, enterprises will move on and find something more in line with their requirements. Given the rapid evolution we’ve seen in SIEM/Log Management over the past 4-5 years, product obsolescence is a genuine issue. The negative impact of a product that has not kept pace with technical evolution and customer requirements cannot be trivialized. This pain becomes more acute in the event of a missed security incident – because the SIEM did not collect the requisite information, or worse, could not detect the threat.
Customers spend significant resources (both time and money) on the care and feeding of their SIEM. If they don’t feel the value is in alignment with the investment, again they’ll move on and search for better, easier, and faster products. It’s realistic, if not expected, that these customers start questioning whether the incumbent offering makes sense for their organization moving forward. Additionally, firms are increasingly considering managed services and 3rd party security operations providers to address skills and resource shortages within internal groups – firms simply don’t have the internal expertise to look for advanced threats. This skills gap also promises to reshape the landscape of security management, so we’ll kick off the series by discussing these factors, setting the stage to update our guide to selecting a SIEM. Specifically, we will cover the following topics:

  • The Changing Needs of Security Management: As firms branch into cloud environments and offer mobile applications to their employees and customers, the definition of ‘system’ now encompasses use cases outside what has long been considered the corporate perimeter, changing the view of the “infrastructure” that needs to be monitored. Simultaneously, advanced malware attacks now require more types of data, threat intelligence, and policies to adequately detect. Additionally, firms are increasingly considering managed services and 3rd party security operations to address skills and resource shortages within internal groups. All of these factors are once again reshaping the landscape of security management, so we’ll kick off the series discussing these factors to set the stage for re-evaluating the security management platform.
  • Evolution of the SIEM Platform (and Technology): Next we’ll discuss the evolutionary changes in SIEM from the standpoint of platform capabilities. It’s still all about doing more with more data. We’ll cover architectural evolution, integration, and ongoing care and feeding of the environment to meet scaling requirements. We will also discuss how SIEM increasingly leverages other data sources – such as virtual servers, mobile events, big data analytics, threat feeds, and other human- and machine-generated data. But all this data does nothing if you don’t have the capabilities to do something with it, so we will discuss new analysis techniques, and updates to older approaches, that yield better results faster. Doing more with more means that, under the covers, scale and performance are achieved by virtualizing lower-cost commodity hardware and leveraging new data storage and data management architectures. SIEM remains the aggregation point for operations and security data, but the demand on the platform to ‘do more with more data’ is pushing the technical definition of SIEM forward and spawning necessary hybrid models to meet the requirements.
  • Revisiting Your Requirements: Given the evolution of both the technology and the attacks, it’s time to revisit your specific requirements and


Security Assurance & Testing: Quick Wins

We started this Security Assurance and Testing (SA&T) series with the need for testing and which tactics make sense within an SA&T program. But it is always helpful to see how the concepts apply to more tangible situations. So we will now show how the SA&T program can provide a quick win for the security team, with two (admittedly contrived) scenarios that show how SA&T can be used – both at the front end of a project, and on an ongoing basis – to ensure the organization is well aware of its security posture.

Infrastructure Upgrade

For this first scenario let’s consider an organization’s move to a private cloud environment to support a critical application. This is a common situation these days. The business driver is better utilization of data center resources and more agility for deploying hardware resources to meet organizational needs. Obviously this is a major departure from the historical rack and provision approach. This is attractive to organizations because it enables better operational orchestration, allowing new devices (‘instances’ in cloud land) to be spun up and taken down automatically according to the application’s scalability requirements. The private cloud architecture folks aren’t totally deaf to security, so some virtualized security tools are implemented to enforce network segmentation within the data center and block some attacks from insider threats.

Without an SA&T program you would probably sign off on the architecture (which does provide some security) and move on to the next thing on your list. There wouldn’t be a way to figure out whether the environment is really secure until it goes live – and then attackers will let you know quickly enough. Using SA&T techniques you can potentially identify issues at the beginning of implementation, saving everyone a bunch of heartburn. Let’s enumerate some of the tests to get a feel for what you may find:

  • Infrastructure scalability: You can capture network traffic to the application, and then replay it to test the scalability of the environment. After increasing traffic into the application, you might find that the cloud’s auto-scaling capability is inadequate. Or it might scale a bit too well, spinning up new instances too quickly, or failing to take down instances quickly enough. All these issues impact the ability and value of the private cloud to the organization, and handling them properly can save a lot of heartburn for Ops.
  • Security scalability: Another infrastructure aspect you can test is its security – especially the virtualized security tools. By blasting the environment with a ton of traffic, you might discover your virtual security tools crumble rather than scale – perhaps because VMs lack custom silicon – and fall over. Such a failure normally either “fails open”, allowing attacks through, or “fails closed”, impacting availability. You may need to change your network architecture to expose your security tools only to the amount of traffic they can handle. Either way, it is better to identify a potential bottleneck before it impairs either availability or security. A quick win for sure.
  • Security evasion: You can also test security tools to see how they deal with evasion. If the new tools don’t use the same policy as the perimeter, which has been tuned to effectively deal with evasion, the new virtual device may require substantial tuning to ensure security within the private cloud.
  • Network hopping: Another feature of private clouds is their ability to define network traffic flows and segmentation – “Software Defined Networks”. But if the virtual network isn’t configured correctly, it is possible to jump across logical segments to access protected information.
  • Vulnerability testing of new instances: One of the really cool (and disruptive) aspects of cloud computing is elimination of the need for changing/tuning configurations and patching. Just spin up a new instance, fully patched and configured correctly, move the workload over, and take down the old one. But if new instances spin up with vulnerabilities or poor configurations, auto-scaling is not your friend. Test new instances on an ongoing basis to ensure proper security. Again, a win if something was amiss.

As you can see, many things can go wrong with any kind of infrastructure upgrade. A strong process to find breaking points in the infrastructure before going live can mitigate much of the deployment risk – especially if you are dealing with new equipment. Given the dynamic nature of technology you will want to test the environment on an ongoing basis as well, ensuring that change doesn’t add unnecessary attack surface. This scenario points out where many issues can be found. What happens if you can’t find any issues? Does that impact the value of the SA&T program? Actually, if anything, it enhances its value – by providing peace of mind that the infrastructure is ready for production.

New Application Capabilities

To dig into another scenario, let’s move up the stack a bit and discuss how SA&T applies to adding new capabilities to an application serving a large user community – in this case enabling commerce on a web site. Business folks like to sell stuff, so they like these kinds of new capabilities. This initiative involves providing access to a critical data store previously inaccessible directly from an Internet-facing application, which is an area of concern. The development team has run some scans against the application to identify application layer issues such as XSS, and fixed them before deployment by front-ending the application with a WAF. So a lot of the low-hanging fruit of application testing is gone. But that shouldn’t be the end of testing. Let’s look into some other areas which could uncover issues, by focusing on realistic attack patterns and tactics:

  • Attack the stack: You could use a slow HTTP attack to see if the application can defend against availability attacks on the stack. These attacks are very hard to detect at the network layer, so you need to make sure the underlying stack is configured to deal with them. (A minimal probe sketch appears after this list.)
  • Shopping cart attack: Another type of availability attack uses the application’s legitimate functionality against it. It’s a bit like
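Referring back to the “attack the stack” item above, here is a minimal Python sketch of a slowloris-style probe that opens a few connections and trickles header data, to check whether your own staging stack enforces header timeouts. Host names, connection counts, and intervals are illustrative assumptions – run anything like this only against systems you own and are authorized to test.

```python
import socket
import time

def slow_http_probe(host, port=80, connections=10, interval=10, rounds=6):
    """Open a handful of connections and trickle header lines to see whether
    the target (YOUR OWN staging stack) enforces header read timeouts.
    A hardened server should drop these sockets quickly."""
    socks = []
    for _ in range(connections):
        s = socket.create_connection((host, port), timeout=5)
        s.send(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n")
        socks.append(s)
    for r in range(rounds):
        time.sleep(interval)
        alive = 0
        for s in socks:
            try:
                s.send(b"X-Keep-Waiting: %d\r\n" % r)  # never finish the headers
                alive += 1
            except OSError:
                pass  # server gave up on this connection -- good sign
        print(f"round {r}: {alive}/{connections} connections still open")

# slow_http_probe("staging.example.com")  # hypothetical host; authorized testing only
```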


Security Assurance & Testing: Tactics and Programs

As we discussed in the introduction to this Security Assurance & Testing (SA&T) series, it is increasingly hard to adequately test infrastructure and applications before they go into production. But adversaries have the benefit of being able to target the weakest part of your environment – whatever it may be. So the key to SA&T is to ensure you are covering the entire stack. Does that make the process a lot more detailed and complex? Absolutely, but you can’t be sure what will happen when facing real attackers without a comprehensive test. To discuss tactics we will consider how you would test your network and then your applications. We will also discuss testing exfiltration, because preventing critical data from leaving your environment disrupts the Data Breach Triangle.

Testing Network Security

Reading the security trade publications you would get the impression that attackers only target applications nowadays, and don’t go after weaknesses in network or security equipment. Au contraire – attackers find the path of least resistance, whatever it is. So if you have a weak firewall ruleset or an easy-to-evade WAF and/or IPS, that’s where they go. Advanced attackers are only as advanced as they need to be. If they can get access to your network via the firewall, evade the IPS, and then move laterally by jumping across VLANs… they will. And they can leave those 0-days on the shelf until they really need them. So you need to test any device that sees the flow of data. That includes network switches, firewalls, IDS/IPS, web application firewalls, network-based malware detection gear, web filters, email security gateways, SSL VPN devices, etc. If it sees traffic it can be attacked, it probably will be, and you need to be ready. So what should you actually test for on network and security devices?

  • Scalability: Spec sheets may be, uh, inaccurate. Even if there is a shred of truth in the spec sheet, it may not be applicable to your configuration or application traffic. Make sure the devices will stand up to real traffic at the peak volumes you will see. Additionally, with denial of service attacks increasingly common, ensuring your infrastructure can withstand a volumetric attack is integral to maintaining availability.
  • Evasion: Similarly, if a network security device can be evaded, it doesn’t matter how scalable or effective it is at blocking the attacks it catches. So you’ll want to make sure you are testing for standard evasion tactics.
  • Reconfiguration: Finally, if the device can be reconfigured and unauthorized policy changes accepted, your finely-tuned and secure policy isn’t worth much. So make sure your devices cannot be accessed except by an authorized party using acceptable policy management.

Application Layer

Once you are confident the network and security devices will hold up, move on to testing the application layer. Here are the highlights:

  • Profile inbound application traffic: You do this to understand your normal volumes, protocols, and destinations. Then you can build scenarios representing abnormal use of the application, to test edge cases. Be sure to capture actual application traffic so you can hide the attack within it, the way real attackers will.
  • Application protection: You will also want to stress test the WAF and other application protections using standard application attack techniques – including buffer overflows, application fuzzing, cross-site scripting, slow HTTP, and other denial of service tactics that target applications.

Again, the idea is to identify the breaking points of the application before your adversaries do. The key aspect of this assurance and testing process is to make sure you test as much of the application as you can. Similar to a Quality Assurance testing harness used by developers, you want to exercise as much of the code as possible to ensure it will hold up. Keep in mind that adversaries usually have time, so they will search every nook and cranny of your application to find its weak spot. Thus the need for comprehensive testing.

Exfiltration

The last aspect of your SA&T process is to see whether you can actually get data out. Unless the data can be exfiltrated, it’s not really a breach, per se. Here you want to test your content filtering capabilities, including DLP, web filters, email security, and any other security controls that inspect content on the way out. Similar to the full code coverage approach discussed above, you want to make sure you are trying to exfiltrate through as many applications and protocols as possible. That means all the major social networks (and some not-so-major ones) and other logical applications like webmail. Also ensure the traffic is sent out encrypted, because attackers increasingly use multiple layers of encryption to exfiltrate data. Finally, test the feasibility of establishing connections with command and control (C&C) networks. These ‘callbacks’ identify compromised devices, so you will want to make sure you can detect this traffic before data is exfiltrated. This can involve sending traffic to known bad C&C nodes, as well as using traffic patterns that indicate domain generation algorithms and other automated means of finding bot controllers (a toy domain generation sketch appears at the end of this post).

The SA&T Program

Just as we made the case for continuous security monitoring – a way to understand how ongoing (and never-ending) changes in your environment must be monitored to assess their impact on security posture – you need to think about SA&T from an ongoing rather than a one-time perspective. To really understand how effective your controls will be, you need to implement an SA&T program.

Frequency

The first set of decisions for establishing your program concerns testing frequency. The underlying network/security equipment and computing infrastructure tends not to change that often, so you can likely get away with testing these components less frequently – perhaps quarterly. But if your environment sees constant infrastructure changes, or you don’t control your infrastructure (outsourced data center, etc.), you may want to test more often. Another aspect of testing frequency is planning for ad hoc tests. These involve defining a set of catalysts that
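As a toy illustration of the domain generation point above, here is a Python sketch that produces deterministic pseudo-random lookups you can use to exercise your own DNS monitoring. The algorithm, seed, and reserved .example suffix are illustrative assumptions and do not correspond to any real malware family.

```python
import datetime
import hashlib
import socket

def toy_dga(seed_date, count=10, tld=".example"):
    """Generate deterministic pseudo-random domains from a date seed, the way
    many DGAs do. Purely to exercise your own DNS monitoring -- the algorithm
    and the reserved .example TLD are illustrative, not any real malware family."""
    domains = []
    state = seed_date.isoformat().encode()
    for _ in range(count):
        state = hashlib.md5(state).digest()
        name = "".join(chr(ord('a') + b % 26) for b in state[:12])
        domains.append(name + tld)
    return domains

for d in toy_dga(datetime.date(2013, 12, 20)):
    try:
        socket.gethostbyname(d)   # lookups should fail, but they generate the
    except socket.gaierror:       # DNS query pattern your tools should flag
        pass
    print("queried", d)
```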


Friday Summary: December 20, 2013 – Year End Edition

I have not done a Friday Summary in a couple weeks, which is a blog post we have rarely missed over the last 6 years, so bad on me for being a flake. Sorry about that, but that does not mean I don’t have a few things I want to talk about before year’s end.

Noise. Lots of Bitcoin noise in the press, but little substance. Blogs like Forbes are speculating on Bitcoin investment potential, currency manipulation, and hoarding, tying in a celebrity whenever possible. Governments around the globe leverage the Gattaca extension of Godwin’s Law when they say “YOU ARE EITHER WITH US OR IN FAVOR OF ILLEGAL DRUGS AND CHILD PORNOGRAPHY” – basing their arguments on unreasoning fear. This was the card played by the FBI and DHS this week, when they painted Bitcoin as a haven for money-launderers and child pornographers. But new and disruptive technologies always cause problems – in this case Bitcoin is fundamentally disruptive for governments and fiat currencies. Governments want to tax it, track it, control exchange rates, and lots of other stuff in their own interest. And unless they can do that they will label it evil. But lost in the noise are the simple questions like “What is Bitcoin?” and “How does it work?” These are very important, and Bitcoin is the first virtual currency with a real shot at being a legitimate platform, so I want to delve into them today.

Bitcoin is a virtual currency system, as you probably already knew. The key challenges of digital currency systems are not assigning uniqueness in the digital domain – where we can create an infinite number of digital copies – nor assignment of ownership of digital property, but instead stopping fraud and counterfeiting. This is conceptually no different than traditional currency systems, but the implementation is of course totally different. When I started writing this post a couple weeks ago, I ran across a blog from Michael Nielsen that did a better job of explaining how the Bitcoin system works than mine did, so I will just point you there. Michael covers the basic components of any digital currency system, which are simple applications of public-key cryptography and digital signatures/hashes, along with the validation processes that deter fraud and keep the system working. Don’t be scared off by the word ‘cryptography’ – Michael uses understandable prose – so grab yourself a cup of coffee and give yourself a half hour to run through it. It’s worth your time to understand how the system is set up, because you may be using it – or a variant of it – at some point in the future.

But ultimately what I find most unique about Bitcoin is that the community validates transactions, unlike most other systems, which use a central bank or designated escrow authorities to approve money transfers. This avoids a single government or entity taking control. And having personally built a virtual currency system way back when, before the market was ready for such things, I always root for projects like Bitcoin. Independent and anonymous currency systems are a wonderful thing for the average person. In this day and age, where we use virtual environments – think video games and social media – virtual currency systems provide application developers an easy abstraction for money. And that’s a big deal when you’re not ready to tackle money or exchanges or ownership while building an application. When you build a virtual system it should be the game or the social interaction that counts.
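For readers who want a feel for the mechanics without reading Nielsen’s full post, here is a toy Python sketch of a hash-chained ledger with a simple proof-of-work. It is emphatically not the Bitcoin protocol – real Bitcoin uses Merkle trees, compact block headers, and adjustable difficulty targets – but it shows how hashing links blocks together and why rewriting history is expensive.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash a canonical (sorted-keys) JSON rendering of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash, transactions, difficulty=4):
    """Toy proof-of-work: find a nonce so the block hash starts with
    `difficulty` zero hex digits. The real protocol differs in many details;
    this only shows why tampering with history costs real effort."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "tx": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

genesis, g_hash = mine("0" * 64, ["alice pays bob 5"])
start = time.time()
block2, b_hash = mine(g_hash, ["bob pays carol 2"])
print(f"mined {b_hash[:16]}... in {time.time() - start:.2f}s")
# Changing any earlier transaction changes its hash, which invalidates every
# later block's `prev` field -- that chain of hashes is what the community of
# validators collectively protects.
```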
Being able to buy and trade in the context of an application, without having a Visa logo in your face or dealing with someone trying to regulate – or even tax – the hours spent playing, is a genuine consumer benefit. And it allows any arbitrary currency to be created, which can be tuned to the digital experience you are trying to create. More reading if you are interested: Bitcoin, not NFC, is the future of payments, and Mastercoin (Thanks Roy!).

Ironically, this Tuesday I wrote an Incite on the idiocy of PoS security and the lack of point-to-point encryption, just before the breach at Target stores which Brian Krebs blogged about. If merchants don’t use P2P encryption, from card swipe to payment clearing, they must rely on ‘endpoint’ security of the Point of Sale terminals. Actually, in a woulda/coulda/shoulda sense, there are many strategies Target could have adopted. For the sake of argument let’s assume a merchant wants to secure their existing PoS and card swipe systems – which is a bit harder than securing desktop computers in an enterprise, and that is already a losing battle.

The good news is that both the merchant and the card brands know exactly which cards have been used – this means both that they know the scope of their risk and that they can ratchet up fraud analytics on these specific cards. Or, even better, cancel and reissue. But that’s where the bad news comes in: no way will the card brands cancel credit cards during the holiday season – it would be a PR nightmare if holiday shoppers couldn’t buy stuff for friends and families. Besides, the card brands don’t want pissed-off customers because a merchant got hacked – this should be the merchant’s problem, not theirs. I think this is David Rice’s point in Geekonomics: that people won’t act against their own short-term best interests, even if that hurts them in the long run. Of course the attackers know this, which is exactly why they strike during the holiday season: many transactions that don’t fit normal card usage profiles make fraud harder to detect, and their stolen cards are less likely to be canceled en masse. Consumers get collateral poop-spray from the hacked merchant, so it’s prudent for you to look for and dispute any charges you did not make. And, since the card brands have tried to tie debit and credit cards together, there are


Datacard Acquires Entrust

Datacard Group, a firm that produces smart card printers and associated products, has announced its acquisition of Entrust. For those of you who are not familiar with Entrust, they were front and center in the PKI movement in the 1990s. Back then the idea was to issue a public/private key pair to uniquely identify every person and device in the universe. Ultimately that failed to scale and became unmanageable, with many firms complaining “I just spent millions of dollars so I can send encrypted email to the guy sitting next to me.” So for you old-time security people out there saying to yourselves “Hey, wait, isn’t PKI dead?”, the answer is “Yeah, kinda.” Still others are saying “I thought Entrust was already acquired?”, to which the answer is “Yes” – by investment firm/holding company Thoma Bravo in 2009. Entrust, just like all the other surviving PKI vendors, has taken its core technologies and fashioned them into other security products and services. In fact, if you believe the financial numbers in the press releases under Thoma Bravo, Entrust has been steadily growing. Still, for most of you, a smart card hardware vendor buying a PKI vendor makes no sense. But in terms of where the smart card market is heading in response to disruptive mobile and cloud computing technologies, the acquisition makes sense. Here are some major points to consider.

What does this mean for Datacard?

  • One Stop Shop: The smart card market is an interesting case of ‘coopetition’, as each major vendor in the field ends up partnering on some customer deals, then competing head to head on others. “Cobbling together solutions” probably sounds overly critical, but the fact is that most card solutions are pieced together from different providers’ hardware, software, and services. Customer requirements for specific processes, card customization, adjudication, and specific regions tend to force smart card producers to partner in order to fill the gaps. By pulling in a couple key pieces from Entrust – specifically around certificate production, cloud, and PKI services – DCG comes very close to an end-to-end solution. When I read the press release from Datacard this morning, they used an almost meaningless marketing phrase: “reduce complexity while strengthening trust.” I think they mean that a single vendor means fewer moving parts and fewer providers to worry about. That’s possible, provided Datacard can stitch these pieces together so the customer (or service provider) does not need to.
  • EMV Hedge: If you read this blog on a regular basis, you will have noticed that every month I say EMV is not happening in the US – at least not the way the card brands envision it. While I hate to bet against Visa’s ability to force change in the payment space, consumers really don’t see the wisdom in carrying around more credit cards for shopping from their computer or mobile device. Those of you who no longer print out airline boarding passes understand the appeal of carrying one object for all these simple day-to-day tasks. Entrust’s infrastructure for mobile certificates gives Datacard the potential to offer either a physical card or a mobile platform solution for identity and payment. Should the market shift away from physical cards for payment or personal identification, they will be ready to react accordingly.
  • Dipping a Toe into the Cloud: Smart card production technology is decidedly old school. Dropping a Windows-based PC on-site to do user registration and adjudication seems so 1999, but this remains the dominant model for drivers’ licenses, access cards, passports, national ID, and so on. Cloud services are a genuine advance, and offer many advantages for scale, data management, software management, and linking all the phases of card production together. While Entrust does not appear to be on the cutting edge of cloud services, they certainly have infrastructure and experience which Datacard lacks. From this standpoint, the acquisition is a major step in the right direction, toward a managed service/cloud offering for smart card services. Honestly I am surprised we haven’t seen more competitors do this yet, and expect them to buy or build comparable offerings over time.

What does this mean for Entrust customers?

  • Is PKI Dead or Not? We have heard infamous analyst quotes to the effect that “PKI is dead.” The problem is that PKI infrastructure is often erroneously confused with PKI technologies. Most enterprises who jumped on the PKI infrastructure bandwagon in the 1990s soon realized that this identity approach was unmanageable and unscalable. That said, the underlying technologies of public key cryptography and X.509 certificates are not just alive and well, but critical for network security. And getting this technology right is not a simple endeavor. These tools are used in every national ID, passport, and “High Assurance” identity card, so getting them right is critical. This is likely Datacard’s motivation for the acquisition, and it makes sense for them to leverage this technology across all their customer engagements, so existing Entrust PKI customers should not need to worry about product atrophy.
  • SSL: SSL certificates are more prevalent now than ever, because most enterprises, regardless of market, want secure network communications. Or at least they are compelled by some compliance mandate to secure network communications to ensure privacy and message integrity. For web and mobile services this means buying SSL certificates, a market which has been growing steadily for the last 5 years. While Entrust is not dominant in this field, they are one of the first and more trusted providers. That does not mean this acquisition is without risks. Can Datacard run an SSL business? The SSL certificate business is fickle, and there is little friction when switching from one vendor to another. We have been hearing complaints about one of the major vendors in this field having aggressive sales tactics and poor service, resulting in several small enterprises switching certificate vendors. There are also risks for a hardware company digesting a software business, with inevitable cultural and technical issues. And there are genuine threats to any certificate authority


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.