Incite 2/5/2014: Super Dud

I'm sure long-time Incite readers know I am a huge football fan. I have infected the rest of my family, and we have an annual Super Bowl party with 90+ people to celebrate the end of each football season. I have laughed (when Baltimore almost blew a 20 point lead last year), cried (when the NY Giants won in 2011), and always managed to have a good time. Even after I stopped eating chicken wings cold turkey (no pun intended), I still figure out a way to pollute my body with pizza, chips, and Guinness. Of course, lots of Guinness. It's not like I need to drive home or anything.

This year I was very excited for the game. The sentimental favorite, Peyton Manning, was looking to solidify his legacy. The upstart Seahawks with the coach who builds his players up rather than tearing them down. The second-year QB who everyone said was too short. The refugee wide receiver from the Pats, with an opportunity to make up for the drop that gave the Giants the ring a few years ago. So many story lines. Such a seemingly evenly matched game. #1 offense vs. #1 defense. Let's get it on! I was really looking forward to hanging on the edge of my seat as the game came down to the final moments, like the fantastic games of the last few years.

And then the first snap of the game flew over Peyton's head. Safety for the Seahawks. 2-0 after 12 seconds. It went downhill from there. Way downhill. The wives and kids usually take off at halftime because it's a school night. But many of the hubbies stick around to watch the game, drink some brew, and mop up whatever desserts were left by the vultures of the next generation. But not this year. The place cleared out during halftime, and I'm pretty sure it wasn't in protest at the chili peppers parading around with no shirts. The game was terrible. Those sticking around for the second half seemed to figure Peyton would make a run. It took 12 seconds to dispel that myth, as Percy Harvin took the second half kick-off to the house. It was over. I mean really over. But it's the last football game of the year, so I watched until the end. Maybe Richard Sherman would do something to make the game memorable. But that wasn't to be, either. He was nothing but gracious in the interviews. WTF?

Overall it was a forgettable Super Bowl. The party was great. My stomach and liver hated me the next day, as is always the case. And we had to deal with Rich being cranky because his adopted Broncos got smoked. But it's not all bad. Now comes the craziness leading up to the draft, free agency, and soon enough training camp. It makes me happy that although football is gone, it's not for long.

–Mike

Photo credit: "Mountain Dew flavoured Lip Balm and Milk Duds!!!" originally uploaded by Jamie Moore

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
• The Future of Information Security: What it means (Part 3); Six Trends Changing the Face of Security; A Disruptive Collision; Introduction
• Leveraging Threat Intelligence in Security Monitoring: The Threat Intelligence + Security Monitoring Process; Revisiting Security Monitoring; Benefiting from the Misfortune of Others
• Reducing Attack Surface with Application Control: Use Cases and Selection Criteria; The Double Edged Sword
• Advanced Endpoint and Server Protection: Assessment; Introduction

Newly Published Papers

• Eliminating Surprises with Security Assurance and Testing
• What CISOs Need to Know about Cloud Computing
• Defending Against Application Denial of Service
• Security Awareness Training Evolution
• Firewall Management Essentials
• Continuous Security Monitoring
• API Gateways
• Threat Intelligence for Ecosystem Risk Management
• Dealing with Database Denial of Service
• Identity and Access Management for Cloud Services

Incite 4 U

Scumbag Pen Testers: Check out the Chief Monkey's dispatch detailing pen testing chicanery. These shysters cut and pasted from another report and used the findings as a means to try to extort additional consulting and services from the client. Oh, man. The Chief has some good tips about how to make sure you aren't suckered by these kinds of scumbags either. I know a bunch of this stuff should be pretty obvious, but clearly an experienced and good CISO got taken by these folks. And make sure you pay the minimum amount up front, and then on results. – MR

Scumbags develop apps too: We seem to be on a scumbag theme today, so this is a great story from Barracuda's SignNow business about how they found a black hat app developer trying to confuse the market and piggyback on SignNow's brand and capabilities. Basically copy an app, release a crappy version of it, confuse buyers by ripping off the competitor's positioning and copy, and then profit. SignNow sent them a cease and desist letter (gotta love those lawyers) and the bad guys did change the name of the app. But who knows how much money they made in the meantime. Sounds a lot like a tale as old as time… – MR

He was asking for it: As predicted and with total consistency, the PCI Security Standards Council has once again blamed the victim, defended the PCI standard, and assured the public that nothing is wrong here. In an article at bankinfosecurity.com, Bob Russo of the SSC says: "As the most recent industry forensic reports indicate, the majority of the breaches happening are a result of some kind of breakdown in security basics – poor implementation, poor maintenance of controls. And the PCI standards [already] cover these security controls". Well, it's all good, right? Except nobody is capable of meeting the standard consistently, and all these breaches are against PCI Certified organizations. But nothing wrong with the standard – it's the victim's fault. You

TISM: The Threat Intelligence + Security Monitoring Process

As we discussed in Revisiting Security Monitoring, there has been significant change on the security monitoring (SM) side, including the need to analyze far more data sources at a much higher scale than before. One of the emerging data sources is threat intelligence (TI), as detailed in Benefiting from the Misfortune of Others. Now we need to put these two concepts together and detail how to integrate threat intelligence into your security monitoring process. This integration can yield far better and more actionable alerts from your security monitoring platform, because the alerts are based on what is actually happening in the wild.

Developing Threat Intelligence

Before you can leverage TI in SM, you need to gather and aggregate the intelligence in a way that can be cleanly integrated into the SM platform. We have already mentioned four different TI sources, so let's go through them and how to gather information.

Compromised Devices: When you talk about actionable information, a clear indication of a compromised device is the most valuable intelligence – a proverbial smoking gun. There are a bunch of ways to conclude that a device is compromised. The first is by monitoring network traffic and looking for clear indicators of command and control traffic originating from the device, such as the frequency and content of DNS requests that might show a domain generation algorithm (DGA) being used to connect to botnet controllers. Monitoring traffic from the device can also show files or other sensitive data, indicating exfiltration or (via traffic dynamics) a remote access trojan. Another approach, which does not require on-premise monitoring, involves penetrating the major bot networks to monitor botnet traffic, in order to identify member devices – another smoking gun.

Malware Indicators: As we described in Malware Analysis Quant, you can build a lab and do both static and dynamic analysis of malware samples to identify specific indicators of how the malware compromises devices. This is obviously not for the faint of heart; thorough and useful analysis requires significant investment, resources, and expertise.

Reputation: IP reputation data (usually delivered as a list of known bad IP addresses) can trigger alerts, and may even be used to block outbound traffic headed for bad networks. You can also alert and monitor on the reputations of other resources – including URLs, files, domains, and even specific devices. Of course reputation scoring requires a large amount of traffic – a significant chunk of the Internet – to observe useful patterns in emerging attacks.

Given the demands of gathering sufficient information to analyze, and the challenge of detecting and codifying appropriate patterns, most organizations look for a commercial provider to develop and provide this threat intelligence as a feed that can be directly integrated into security monitoring platforms. This enables internal security folks to spend their time figuring out the context of the TI to make alerts and reports more actionable. Internal security folks also need to validate TI on an ongoing basis because it ages quickly. For example C&C nodes typically stay active for hours rather than days, so TI must be similarly fresh to be valuable.

Evolving the Monitoring Process

Now armed with a variety of threat intelligence sources, you need to take a critical look at your security monitoring process to figure out how it needs to change to accommodate these new data sources.
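Before digging into the process changes, here is a minimal sketch (ours, not from the research itself) of the simplest form of this integration: matching outbound connection logs against an IP reputation feed. The feed URL, log file, and field names are hypothetical placeholders; a real deployment would use the SIEM's native TI connectors rather than a standalone script.

```python
# Minimal sketch: flag outbound connections to IPs on a reputation feed.
# The feed URL and log format are hypothetical placeholders.
import csv
import urllib.request

FEED_URL = "https://ti.example.com/bad-ips.txt"   # hypothetical TI feed
CONN_LOG = "outbound_connections.csv"             # columns: timestamp,src_ip,dst_ip,bytes

def load_bad_ips(url):
    """Pull the feed and return a set of known-bad IPs (one per line, '#' comments)."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

def match_connections(log_path, bad_ips):
    """Return log rows whose destination appears in the reputation feed."""
    alerts = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dst_ip"] in bad_ips:
                alerts.append(row)
    return alerts

if __name__ == "__main__":
    hits = match_connections(CONN_LOG, load_bad_ips(FEED_URL))
    for hit in hits:
        print(f"ALERT: {hit['src_ip']} contacted known-bad {hit['dst_ip']} at {hit['timestamp']}")
```

Because C&C infrastructure churns in hours, a real integration would re-pull the feed on a short interval and expire stale indicators rather than matching against a static list.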
First let's turn back the clock to revisit the early days of SIEM. A traditional SIEM product is driven by a defined ruleset to trigger alerts, but that requires you to know what to look for – before it arrives. Advanced attacks cannot really be profiled ahead of time, so you cannot afford to count on knowing what to look for. Moving forward, you need to think differently about how to monitor. We continue to recommend identifying normal patterns on your network with a baseline, and then looking for anomalous deviation. To supplement baselines, watch for emerging indicators identified by TI.

But don't minimize the amount of work required to keep everything current. Baselines are constantly changing, and your definition of 'normal' needs ongoing revision. Threat intelligence is a dynamic data source by definition. So you need to look for new indicators and network traffic patterns in near real time, for any hope of keeping up with hourly changes of C&C nodes and malware distribution sites. Significant automation is required to ensure your monitoring environment keeps pace with attackers, and successfully leverages available resources to detect attacks.

The New Security Monitoring Process Model

At this point it is time to revisit the security monitoring process model developed for our Network Security Operations Quant research. By adding a process for gathering threat intelligence and integrating TI into the monitoring process, you can more effectively handle the rapidly changing attack surface and improve your monitoring results.

Gather Threat Intelligence

The new addition to the process model is gathering threat intelligence. As described above, there are a number of different sources you can (and should) integrate into the monitoring environment. Here are brief descriptions of the steps:

Profile Adversary: As we covered in the CISO's Guide to Advanced Attackers, it is critical to understand who is most likely to be attacking you, which enables you to develop a profile of their tactics and methods.

Gather Samples: The next step in developing threat intelligence is to gather a ton of data that can be analyzed to define the specific indicators that comprise the TI feed (IP addresses, malware indicators, device changes, executables, etc.).

Analyze Data and Distill Threat Intelligence: Once the data is aggregated you can mine the repository to identify suspicious activity and distill it down into information pertinent to detecting the attack. This involves ongoing validation and testing of the TI to ensure it remains accurate and timely.

Aggregate Security Data

The steps involved in aggregating security data are largely unchanged in the updated model. You still need to enumerate which devices to monitor in your environment, scope the kinds of data you will get from them, and define collection policies and correlation rules. Then you can move on to the active step of

Security’s Future: Implications for Security Vendors

This is the fourth post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. See the first post, second post, and third post.

Implications for Security Vendors and Providers

These shifts are likely to dramatically affect existing security products and services. We already see cloud and mobile adoption and innovation outpacing many security tools and services. They are not yet materially affecting the profits of these companies, but the financial risks of failing to adapt in time are serious. Many vendors have chosen to 'cloudwash' existing offerings – they simply convert their product to a virtual appliance or make other minor tweaks – but for technical and operational reasons we do not see this as a viable option over the long term. Tools need to fit the job, and we have shown that cloud and mobile aren't merely virtual tweaks of existing architectures, but fundamentally alter things at a deep level. The application architectures and operations models we see in leading web properties today are quite different from traditional web application stacks, and likely to become the dominant models over time because they fit the capabilities of cloud and mobile.

The security trends we identified also assume shifting priorities and spending. For example hypersegregated cloud networks and greater reliance on automatically configured servers (required for autoscaling, a fundamental cloud function) reduce the need for traditional patch management and antivirus. When it is trivial to replace a compromised server with a new one within minutes, traffic between servers is highly restricted at a per-server level, and detection and incident response are much improved, then AV, IDS, and patch management may not be essential security controls. Security tools need to be as agile and elastic as the infrastructure, endpoints, and services they protect; and they need to fit the new workflow and operational models emerging to take advantage of these advances – such as DevOps.

The implications for security vendors and providers fall into two buckets:

• Fundamental architectural and operational differences require dramatic changes to many security tools and services to operate in the new environment.
• Shifting priorities make customers shift security spending, impacting security market opportunities.

Preparing for the Future

It is impossible to include every possible recommendation for every security tool and service on the market, but some guiding principles can prepare security companies to compete in these markets today, and as they become more dominant in the future:

Support consumption and delivery of APIs: Adding the ability to integrate with infrastructure, applications, and services directly using APIs increases security agility, supports Software Defined Security, and embeds security management more directly into platforms and services. For example network security tools should integrate directly with Software Defined Networking and cloud platforms so users can manage network security in one place. Customers complain today that they cannot normalize firewall settings between classical infrastructure and cloud providers, and need to manage each separately.
Security tools also need to provide APIs so they can integrate into cloud automation, and to avoid becoming a rate limiter – and later inevitably getting kicked to the curb. Software Development Kits and robust APIs will likely become competitive differentiators because they help integrate security directly into operations, rather than interfering with workflows that provide strong business benefits.

Don't rely on controlling or accessing all network traffic: A large number of security tools today, from web filtering and DLP to IPS, rely on completely controlling network traffic and adding additional bumps in the wire for analysis and action. The more we move into cloud computing and extensive mobility, the fewer opportunities we have to capture connections and manage security in the network. Everything is simply too distributed, with enterprises routing less and less traffic through core networks. Where possible, integrate directly with platforms and services over APIs, or embed security into host agents designed for highly agile cloud environments. You cannot assume the enterprise will route all traffic from mobile workers through fixed control points, so services need to rely on Mobile Device Management APIs and provide more granular protection at the app and service level.

Provide extensive logs and feeds: Security tools shouldn't be black holes of data – receiving but never providing. The Security Operations Center of the future will rely more on aggregating and correlating data using big data techniques, so it will need access to raw data feeds to be most effective. Expect demand to be more extensive than from existing SIEMs (a minimal sketch of such a feed appears at the end of this post).

Assume insanely high rates of change: Today, especially in audit and assessment, we rely on managing relatively static infrastructure. But when cloud applications are designed to rely on servers that run for less than an hour, even daily vulnerability scans are instantly out of date. Products should be as stateless as possible – rely on continually connecting and assessing the environment rather than assuming things change slowly.

Companies that support APIs, rely less on network hardware for control, provide extensive data feeds, and assume rapid change are in much better positions to accommodate expanding use of cloud and mobile devices. It is a serious challenge, as we need to provide protection to a large volume of distributed services and users, without anything like the central control we are used to. We work extensively with security vendors. It is hard to overstate how few we see preparing for these shifts.
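To illustrate the "provide APIs and feeds" recommendation, here is a minimal sketch (ours, not from the post) of a security tool exposing its alert stream as a JSON feed over a REST endpoint using Flask. The endpoint path, alert fields, and in-memory store are hypothetical; a real product would add authentication, pagination, and a streaming or webhook option.

```python
# Minimal sketch: expose a security tool's alerts as a machine-readable feed.
# Endpoint path and alert fields are hypothetical; real products need auth and paging.
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the product's internal alert store.
ALERTS = [
    {"id": 1, "severity": "high", "indicator": "203.0.113.7",
     "ts": "2014-02-05T14:03:00Z", "summary": "Outbound C&C traffic detected"},
    {"id": 2, "severity": "medium", "indicator": "malware.example.net",
     "ts": "2014-02-05T14:10:00Z", "summary": "DNS request matched TI domain"},
]

@app.route("/api/v1/alerts")
def list_alerts():
    """Return alerts, optionally filtered by severity, newest first."""
    severity = request.args.get("severity")
    rows = [a for a in ALERTS if severity is None or a["severity"] == severity]
    rows.sort(key=lambda a: a["ts"], reverse=True)
    return jsonify({"alerts": rows, "generated_at": datetime.now(timezone.utc).isoformat()})

if __name__ == "__main__":
    app.run(port=8080)  # e.g. curl "http://localhost:8080/api/v1/alerts?severity=high"
```

The point is not the specific framework, but that downstream SOC tooling can pull the raw feed and correlate it with other sources instead of being locked into the product's console.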

Firestarter: Inevitable Doom

Okay, let's just ignore the first part of this Firestarter where we talk about the Denver Broncos, okay? We recorded it on the Friday before the game and, well, enough said. Then we turned to some recent tech and company ideas we have seen, and why they are doomed to fail. Kind of like you-know-who. Sigh.

Security’s Future: What it Means (Part 3)

This is the third post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. See the first post and the second post.

What it Means

The disruptions and trends we have described don't encompass all advances in the worlds of technology and security, but they represent the ones which will most fundamentally transform the practice of security over the next decade. For example we haven't directly addressed Software Defined Networks (although aspects show up in our cloud, hypersegregation, and Software Defined Security descriptions), malware ecosystems, or the increasing drive toward pervasive encryption (driven, in no small part, by government spying). Our focus is on the changes that most fundamentally alter the practice of security, and the resulting outcomes.

The changes come in fits and spurts – distributed unevenly, based on technology adoption rates, economics, and even social factors. But aggregated together, they paint a picture we can use to guide decisions today – for both organizations and professionals. All these changes are currently in process, with plenty of real-world examples. This report focuses on the implications for three groups: security professionals, security vendors and providers, and cloud and infrastructure providers – the people tasked with implementing security, the folks who create the tools and services they use, and the public and private IT departments managing our platforms and services. Let's start with some high-level principles for understanding how security controls will evolve, then dig into the implications for our three audiences.

Security Controls Evolution

There is no way to predict exactly how the future will turn out or how security controls will evolve as these trends unfold. But one key question, with a few logical follow-ups, can quickly help identify how security controls will likely adapt (or at least need to) in the face of change:

How does this enable my security strategy? What does the provider or technology give me? What does it do? What do I need to do?

The purpose of this question is to examine how the lines of responsibility and control will shift. For example, when choosing a new cloud provider, what security controls do they provide? Which can you manage? Where are the gaps? What security controls can you put in place to address those gaps? Does moving to this provider give you new security capabilities you otherwise lacked? Or, for a new security tool like active defense: Does this obviate our need for IPS? Does it really improve our ability to detect attackers? What kind of attackers and attacks? How can and will we adjust our response strategy?

Here are two interrelated examples: iOS 7 includes mobile device management hooks to restrict data migration on the device to only enterprise-approved accounts and apps, all strongly encrypted and protected by stringent sandboxing. While this could significantly improve data security over standard computers, it also means giving up any possibility of Data Loss Prevention monitoring, and needing to implement a particular flavor of mobile device management. However… Cloud storage and collaboration providers keep track of every version of every file they hold for customers.
Some even track all device and user access on a per-file basis. Use one of these with your mobile apps, and you might be able to replace DLP monitoring with in-depth real-time auditing of all file activity at the cloud level – including every device that accesses the files. The combination provides a security and audit capability that is effectively impossible with 'traditional' device management and storage, but requires you to change how you implement a series of security controls.

Focus on your security strategy. Determine what you can do, what your provider or tool will do, who is responsible, and the technology capabilities and limitations – rather than how to migrate a specific, existing control to the new operating environment.

Implications for Security Practitioners

Security practitioners in the future will rely on a different core skill set than many professionals possess today. Priorities shift as some risks decline, others increase, and operational practices change. The end result is a fundamental alteration of the day-to-day practice of security. Some of this is due to the disruptions of the cloud and mobility, but much of it is due to the continued advancement of our approaches to security (partially driven by our six trends, and also influenced by attackers). We covered cloud computing in depth in our paper What CISOs Need to Know about Cloud Computing. Let's look at the different skills and priorities we expect to be emphasized by the combination of cloud, mobile, and our six inherent security trends.

New Skills

As with any transition, old jobs won't be eliminated immediately, but the best opportunities will go to those with knowledge and expertise best aligned to new needs. These roles are also most likely to command a salary premium until the bulk of the labor market catches up, so even if you don't think demand for current skills will decline, you still have a vested interest in gaining the new skills. All these roles and skills exist today, but we expect them to move into the core of the security profession.

Incident Response is already seeing tremendous growth in demand, as more organizations shift from trying only to keep attackers out (which never works) to more rapid detection, containment, and remediation of successful attacks. This requires extensive security expertise and cannot be handed off to Operations.

Secure Programming includes assisting with adding security functions to other applications, evaluating code for security issues (although most of that will be automated), and programming Software Defined Security functions to orchestrate and automate security across tools. It requires both programming and security domain expertise to be truly effective. Some practitioners will find themselves more on the secure application development side (integrating security into applications),

Security’s Future: Six Trends Changing the Face of Security

This is the second post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we are running the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available. The first post is available.

The cloud and mobile computing are upending the foundational technological principles of delivery and consumption, and at the same time we see six key trends within security itself which promise to completely transform its practice over time. These aren't disruptive innovations so much as disruptive responses and incremental advances that better align us with where the world is heading. When we align these trends with advances in and adoption of cloud and mobile computing, we can picture how security will look over the next seven to ten years.

Hypersegregation

We have always known the dramatic security benefits of effective compartmentalization, but implementation was typically costly and often negatively impacted other business needs. This is changing on multiple fronts as we gain the ability to heavily segregate, by default, with minimal negative impact. Flat networks and operating systems will soon be not only an artifact of the past, but difficult to even implement. Hypersegregation makes it much more difficult for an attacker to extend their footprint once they gain access to a network or system, and increases the likelihood of detection.

Most major cloud computing platforms provide cloud-layer software firewalls, by default, around every running virtual machine. In cloud infrastructure, every single server is firewalled off from every other one by default. The equivalent in a traditional environment would be either a) host-based firewalls on every host, of every system type, with easily and immediately managed policies across all devices, or b) putting a physical firewall in front of every host on the network, which travels with the host if and when it moves. These basic firewalls are managed via APIs, and by default they segregate every server from every other server – even on the same subnet. There is no such thing as a flat network when you deploy onto Infrastructure as a Service, unless you work hard to reproduce the less secure architecture. This segregation has the potential to expand into non-cloud networks thanks to Software Defined Networking, making hypersegregation the default in any new infrastructure.

We also see hypersegregation working extremely effectively in operating systems. Apple's iOS sandboxes every application by default, creating another kind of 'firewall' inside the operating system. This is a major contributor to iOS's complete lack of widespread malware, going back to the iPhone debut seven years ago. Apple now extends similar protection to desktop and laptop computers by sandboxing all apps in the Mac App Store. Google sandboxes all tabs and plugins in the Chrome web browser. Microsoft sandboxes much of Internet Explorer and supports application-level sandboxes. Third-party tools extend sandboxing in operating systems through virtualization technology. Even application architectures themselves are migrating toward further segregating and isolating application functions to improve resiliency and address security. There are practical examples today of task and process level segregation, enforcing security policy on actions by whitelisting.
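As an illustration of the API-managed, per-server firewalls described above, here is a minimal sketch (ours, not from the post) using the AWS boto3 SDK to create a security group that starts with no inbound access and then opens a single port only to another group. The VPC and group IDs are hypothetical placeholders.

```python
# Minimal sketch: per-tier cloud firewall managed entirely through an API.
# VPC and security group IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A new group starts with no inbound rules - effectively deny-all inbound.
resp = ec2.create_security_group(
    GroupName="app-tier-sg",
    Description="App tier: only reachable from the web tier on 443",
    VpcId="vpc-0123456789abcdef0",
)
app_sg = resp["GroupId"]

# Allow exactly one path in: HTTPS from members of the web tier's group.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # web tier SG
    }],
)

print(f"Created {app_sg}: instances in this group accept 443 from the web tier only")
```

Every instance launched into this group gets the policy automatically, and rebuilding or moving the instance carries the rules with it – the behavior the traditional-network analogies above struggle to reproduce.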
The end result is networks, platforms, and applications that are more resistant to attack, and that limit the damage of attackers even when they succeed. This dramatically raises the overall cost of attacks while reducing the need to address every vulnerability immediately or face exploitation.

Operationalization of Security

Security, even today, still performs many rote tasks that don't actually require security expertise. For cost and operational efficiency reasons, we see organizations beginning to hand off these tasks to Operations to allow security professionals to focus on what they are best at. This is augmented by increasing automation capabilities – not that we can ever eliminate the need for humans. We already see patch and antivirus management being handled by non-security teams. Some organizations now extend this to firewall management and even low-level incident management. Concurrently we see the rise of security automation to handle more rote-level tasks and even some higher-order functions – especially in assessment and configuration management. We expect Security to divest itself of many responsibilities for network security and monitoring, manual assessment, identity and access management, application security, and more. This, in turn, frees up security professionals for tasks that require more security expertise – such as incident response, security architecture, security analytics, and audit/assessment. Security professionals will play a greater role as subject matter experts, as most repetitive security tasks become embedded into day-to-day operations, rather than remaining a non-operations function.

Incident Response

One of the benefits of the increasing operationalization of security is freeing up resources for incident response. Attackers continue to improve as technology further embeds itself into our lives and economies. Security professionals have largely recognized and accepted that it is impossible to completely stop attacks, so we need greater focus on detecting and responding to incidents. This is beginning to shift security spending toward IR tools and teams, especially as we adopt the cloud and platforms that reduce our need for certain traditional infrastructure security tools. Leading organizations today are already shifting more and more resources to incident detection and response – to react faster and better, as we say here. This means not simply having an incident response plan, or even tools, but conceptually re-prioritizing and re-architecting entire security programs – to focus as much or more on detection and response as on pure defense. We will finally use all those big screens hanging in the SOC to do more than impress prospects and visitors. A focus on incident response, on more rapidly detecting and responding to attacker-driven incidents, will outperform our current security model – which is overly focused on checklists and vulnerabilities – affecting everything from technology decisions to budgeting and staffing.

Software Defined Security

Today security largely consists of boxes and agents distinct from the infrastructure we protect. They won't go away, but the cloud and increasingly available APIs

TISM: Revisiting Security Monitoring

In our first post on Leveraging Threat Intelligence in Security Monitoring (TISM), Benefiting from the Misfortune of Others, we discussed threat intelligence as a key information source for shortening the window between compromise and detection. Now we need to look at the security monitoring side – basically how monitoring processes need to adapt to take advantage of threat intelligence. We will start with the general monitoring process first documented in our Network Security Operations Quant research. This is a good starting point – it covers all the gory details involved in monitoring things. Of course its focus is firewalls and IPS devices, but expanding it to include the other key devices which require monitoring isn't a huge deal.

Network Security Monitoring

Plan

In this phase we define the depth and breadth of our monitoring activities. These are not one-time tasks but processes to revisit every quarter, as well as after incidents that trigger policy review.

Enumerate: Find all the security, network, and server devices which are relevant to the security of the environment.

Scope: Decide which devices are within scope for monitoring. This involves identifying the asset owner; profiling the device to understand data, compliance, and policy requirements; and assessing the feasibility of collecting data from it.

Develop Policies: Determine the depth and breadth of the monitoring process. This consists of two parts: organizational policies (which devices will be monitored and why) and device & alerting policies (what data will be collected from each device type – which may include any network, security, computing, application, or data capture/forensics device).

Policies

For device types in scope, device and alerting policies are developed to detect potential incidents which require investigation and validation. Defining these policies involves a QA process to test the effectiveness of alerts. A tuning process must be built into alerting policy definitions – over time alert policies need to evolve as the targets to defend change, along with adversaries' tactics. Finally, monitoring is part of a larger security operations process, so policies are required for workflow and incident response. They define how monitoring information is leveraged by other operational teams and how potential incidents are identified, validated, and investigated.

Monitor

In this phase monitoring policies are put to use, gathering data and analyzing it to identify areas for validation and investigation. All collected data is stored for compliance, trending, and reporting as well.

Collect: Collect alerts and log records based on the policies defined under Plan. This can be performed within a single-element manager or abstracted into a broader Security Information and Event Management (SIEM) system for multiple devices and device types.

Store: Collected data must be stored for future access, for both compliance and forensics.

Analyze: The collected data is analyzed to identify potential incidents based on alerting policies defined in the Plan phase. This may involve numerous techniques, including simple rule matching (availability, usage, attack traffic policy violations, time-based rules, etc.) and/or multi-factor correlation based on multiple device types.

Action

When an alert fires in the Analyze step, this phase kicks in to investigate and determine whether further action is necessary.

Validate/Investigate: If and when an alert is generated, it must be investigated to validate the attack. Is it a false positive?
Is it a real issue that requires further action? If the latter, move to the Action phase. If this was not a 'good' alert, do policies need to be tuned?

Action/Escalate: Take action to remediate the issue. This may involve a hand-off or escalation to Operations.

After a few alert validations it is time to determine whether policies must be changed or tuned. This must be a recurring feedback loop rather than a one-time activity – networks and attacks are both dynamic, and require ongoing diligence to ensure monitoring and alerting policies remain relevant and sufficient.

What Has Changed

Security monitoring has undergone significant change over the past few years. We have detailed many of these changes in our Security Management 2.5 series, but we will highlight a few of the more significant aspects. The first is having to analyze much more data from many more sources – we will go into detail later in this post. Next, the kind of analysis performed on the collected data is different. Setting up rules for a security monitoring environment was traditionally a static process – you would build a threat model and then define rules to look for that kind of attack. This approach requires you to know what to look for. For reasonably static attacks this approach can work. Nowadays planning around static attacks will get you killed. Tactics change frequently and malware changes daily. Sure, there are always patterns of activity to indicate a likely attack, but attackers have gotten proficient at evading traditional SIEMs. Security practitioners need to adapt detection techniques accordingly. So you need to rely much more on detecting activity patterns, and looking for variations from normal patterns to trigger alerts and investigation. But how can you do that kind of analysis on what could be dozens of disparate data sources? Big data, of course. Kidding aside, that is actually the answer, and it is not overstating to say that big data technologies will fundamentally change how security monitoring is done – over time.

Broadening Data Sources

In Security Management 2.5: Platform Evolution, we explained that to keep pace with advanced attackers, security monitoring platforms must do more with more data. Having more data opens up very interesting possibilities. You can integrate data from identity stores to trace behavior back to users. You can pull information from applications to look for application misuse, or gaming of legitimate application functionality, including search and shopping carts. You can pull telemetry from server and endpoint devices to search for specific indicators of compromise – which might represent a smoking gun and point out a successful attack. We have always advocated collecting more data, and monitoring platforms are beginning to develop capabilities to take advantage of additional data for analytics. As we mentioned, security monitoring platforms are increasingly leveraging advanced data stores, supporting much different (and more advanced) analytics to find patterns among many different data sources.
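To make the "baseline plus deviation" idea above concrete, here is a minimal sketch (ours, not from the series) that baselines per-host outbound event counts and flags hosts that stray from their own history. The field names, window, and 3-sigma threshold are illustrative assumptions; real platforms do this across far more dimensions and data sources.

```python
# Minimal sketch: baseline per-host outbound event counts and flag outliers.
# Field names, window sizes, and the 3-sigma threshold are illustrative choices.
import statistics

# history[host] = hourly outbound-connection counts from prior weeks
history = {
    "10.0.1.5": [110, 95, 102, 99, 120, 105, 98],
    "10.0.1.9": [12, 15, 9, 11, 14, 10, 13],
}

# counts observed in the current hour (e.g. aggregated from flow logs)
current = {"10.0.1.5": 118, "10.0.1.9": 240}

def anomalies(history, current, sigmas=3.0, min_floor=5.0):
    """Return hosts whose current count deviates from their own baseline."""
    flagged = []
    for host, past in history.items():
        mean = statistics.mean(past)
        stdev = max(statistics.pstdev(past), min_floor)  # avoid near-zero stdev
        if abs(current.get(host, 0) - mean) > sigmas * stdev:
            flagged.append((host, current.get(host, 0), round(mean, 1)))
    return flagged

for host, observed, baseline in anomalies(history, current):
    print(f"ANOMALY: {host} made {observed} outbound connections (baseline ~{baseline})")
```

Swap the in-memory dictionaries for your log store and layer TI indicator matching on top, and you have the rough shape of the analysis the monitoring platform needs to automate.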

Security’s Future: a Disruptive Collision

This is the first post in a series on the future of information security, which will be the basis for a white paper. You can leave feedback here as a blog comment, or even submit edits directly over at GitHub, where we run the entire editing process in public. This is the initial draft, and I expect to trim the content by about 20%. The entire outline is available.

A Disruptive Collision

At the best of times, the practice of information security is defined by disruption. We need to respond to business and technology innovations – not only from those we defend, but also from their attackers. Security is never really in control of its own destiny – we are tasked with managing the risks of decisions made by others, in the face of entire industries (and economies) dedicated to discovering new ways of stealing or hurting them and us. We are reactive because those we protect and those who attack are never fully predictable – not because of an inherent failing of security. But the better we predict these disruptions, and the better we prepare our response, the more effective we are.

As analysts, we at Securosis focus most of our research on the here and now – on how best to tackle the security challenges faced by CISOs and security professionals when they show up to work in the morning. Occasionally as part of this research we note trends with the potential to dramatically affect the security industry and our profession. We currently see what appears to be the largest combination (collision) of disruptive forces since the initial adoption of the Internet – with implications for security far beyond our first tentative steps onto the global network. Additionally, we have identified six key trends which are currently altering the practice of security. This combination of external and internal change is fundamentally transforming the practice of security.

This paper starts with a description of the disruptive forces and the native security trends, but its real objective is to lay out their long-term implications for the practice of security – and how we expect security to evolve for security professionals, security vendors, and cloud and other infrastructure providers. Throughout the report we will back up our analysis with real-world examples that show this transformation isn't a vague possibility in a distant future, but is already well under way. But although these changes are inevitable, they are far from evenly distributed. As you will see, this provides plenty of time and incentive for professionals and organizations to prepare.

Two Disruptive Innovations

Clayton Christensen first coined the term "disruptive technology" in 1995 (he later changed it to "disruptive innovation") to describe new business and technology practices that fundamentally alter, and eventually supersede, existing ones. Innovation always causes change, but disruptive innovation mandates change: it creates new opportunities and disrupts old ones. The technology world is experiencing two disruptive innovations simultaneously colliding and reinforcing each other. Cloud computing alters the consumption and delivery models for technology at both economic and technical levels. Advances in mobile technology are changing our access and consumption models, and reinforcing demand for the cloud – particularly at scale.

Cloud Computing

Cloud computing is a radically different technology model – it is not simply the latest flavor of outsourcing.
It uses a combination of abstraction and automation to achieve previously impossible levels of efficiency and elasticity. This, in turn, creates new business models and alters the economics of technology delivery and consumption. Sometimes this means building your own cloud in your own datacenter; other times it means renting infrastructure, platforms, and applications from public providers over the Internet. Public cloud services eliminate most capital expenses, shifting them to on-demand operational costs instead. Private clouds allow more efficient use of capital, may reduce operational costs, and make technology more responsive to internal needs.

Cloud computing fundamentally disrupts traditional infrastructure because it is more responsive, more efficient, and potentially more resilient and cost effective than our old ways of doing things. These are the same drivers that pushed us toward application service providers and virtualization. Public cloud computing is even more disruptive because it enables organizations to consume only what they need without maintaining overhead, while still rapidly responding to changing needs at effectively infinite scale (assuming an adequate checkbook). Every major enterprise we talk with today uses cloud services, and even some of the most sensitive industries, such as financial services, are exploring more extensive use of public cloud computing. We see no technical, economic, or even regulatory issues slowing this shift.

Many security professionals focus on the multitenancy risks introduced by the cloud, but abstraction and automation are more significant than shared infrastructure or services. Many security controls today rely on knowing and managing the physical resources that underpin our technology services. Abstraction breaks this model by virtualizing resources (including entire applications) into resource pools managed over the network. We give up physical control and shift management functions to standard network interfaces, creating a new management plane. This separation and remote management challenge or destroy traditional security controls.

Abstraction is central to virtualization, and we are at least nominally familiar with its issues. Automation, however, is specific to the cloud, and adds an orchestration layer to efficiently utilize resource pools. It enables extreme agility, such as servers that exist for only hours or minutes – automatically provisioned, configured, and destroyed without human interaction. Application developers can check in a piece of code, which then runs through a dozen automated checks and is pushed into production on a self-configuring platform that scales to meet demand. Security that relies on controlling the rate of change, or that mandates human checks, simply cannot keep up.

Virtualization is the core enabling technology of abstraction, and Application Programming Interfaces (APIs) are the core enabler of automation. The elasticity and agility they together provide enable new operational models such as DevOps, which consolidate historically segregated management functions to improve efficiency and responsiveness. Combined with greater reliance on public cloud computing, the Internet itself becomes the interconnected platform for our applications and workloads.

Defining DevOps

Friday Summary: January 31, 2014

During my total and complete laptop fail for this week's Firestarter, I was trying to make the point that large software projects have a considerably higher probability of failure. It is no surprise that many government IT projects are 'failures' – they are normally managed as ginormous projects with many competing requirements. It worked for the Apollo missions, so governments doggedly cling to that validated model. But in the commercial environment Agile is having a huge and positive impact on software development. Coincidentally, this week Jim Bird discussed the findings of the 2013 Chaos Report. In a nutshell the topline was "More projects are succeeding (39% in 2012, up from 29% in 2004), mostly because projects are getting smaller". But Jim points out that you cannot conjure up an Agile development program like the Wonder Twins activate their superhero powers – Agile development processes are one aspect, but program management across multiple Agile efforts is another thing entirely. A lot of thought and work has gone into this over the last few years, and things like the Scaled Agile Framework can help. Still, most government projects I have seen employ no Agile techniques. There is a huge body of knowledge on how to get these things done, and industry leads the public sector by a wide margin.

I used to get a lot of spam with hot stock tips. I was assured a penny stock was about to shoot through the roof because a patent was approved, and got plenty of dire warnings about pharmaceutical firms failing clinical trials. Of course the info was bogus, but Mr. Market, the psycho he is, actually reacted. Anonymous bloggers could manipulate the market simply by leaving comments on blogs and message boards, offering no evidence but generating huge reactions. If you are a day trader this can pretty much ensure you will make money. This whole RSA deal, where they allegedly took $10M from the NSA to compromise security products, has the same feel – it sounds believable, but we are seeing a huge backlash without any sort of evidence. It feels like market manipulation. Could RSA have been bribed? Absolutely. Would the NSA conduct this business without leaving a paper trail? Probably. But would I buy or sell stocks based on spam, anonymous blog posts, or my barber's recommendation? No. That is not an appropriate response. Nor will I grandstand in the media or start a new security conference, trying to hurt RSA, because of what their software division did or did not do years ago. That would also be inappropriate. Pulling the ECC routines in question? Providing a competing solution? Providing my firm some "disaster recovery" options in case of compromised crypto/PRNG routines? Those are all more appropriate responses.

For those of you who asked about my upcoming research calendar, I am excited about some projects that will commence in a couple weeks and complete in Q2. First up will be an update to the Big Data Security paper from mid-2012. SOOOO much has happened in the last 6-9 months that a lot of it is obsolete, so I will be updating it. Gunnar and I are working on a project we call "Rebel Federation", which is how we describe the assembly of an identity management solution from best-of-breed components, rather than a single suite / single vendor stack. We will go through motivations, how to assemble, and how to mitigate some of the risks. And given the burst of tokenization inquiries over the past 60 days, I will be writing about that as well.
If you have questions, please keep them coming – I have not yet decided on an outline. And finally, before RSA, I promise to launch the Security Analytics with Big Data paper. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Adrian quoted on Database Denial of Service.
• David Mortman and Adrian Lane will be presenting at Secure360.
• Mike and JJ podcast about the Neuro-Hacking talk at RSA.

Favorite Securosis Posts

• Mike Rothman: The Future of Information Security. Rich is our big thinker (when he gets enough sleep, at least) and I am fired up to read this series about how we need to start thinking about information security moving forward. The technology foundation under us is changing dramatically, and that won't leave much of current security standing in the end. Either get ahead of it now, or clean up the rubble of your security program.
• Adrian Lane: Southern Snowpocalypse. It snowed here in Phoenix last year, but nothing like it did in ATL yesterday. It does not matter where snow hits – if it is at the wrong time and a city is unprepared, it's crippling.

Other Securosis Posts

• Firestarter: Government Influence.
• Leveraging Threat Intelligence in Security Monitoring: Benefiting from the Misfortune of Others.
• Summary: Mmm. Beer.

Favorite Outside Posts

• Jamie Arlen: James at ShmooCon 2014. Totally self-serving, I know, but awesome none the less.
• Gunnar: NFC and BLE are friends.
• Adrian Lane: Pharmaceutical IT chief melds five cloud security companies to bolt down resource access. This is my first NetworkWorld fave – usually I ridicule their stuff – but this is a good description of a trend we have been seeing as well. And you need some guts to walk this path.
• Mike Rothman: Volunteer at HacKid! If you're on the west coast and have kids, you should be at HacKid, April 19-20 in San Jose. Plenty of opportunities to volunteer. I'll be there (with my 10 year old twins), and I think Rich is planning to attend as well. See you there!

Research Reports and Presentations

• Eliminate Surprises with Security Assurance and Testing.
• What CISOs Need to Know about Cloud Computing.
• Defending Against Application Denial of Service Attacks.
• Executive Guide to Pragmatic Network Security Management.
• Security Awareness Training Evolution.
• Firewall Management Essentials.
• A Practical Example of Software Defined Security.
• Continuous Security Monitoring.
• API Gateways: Where Security Enables Innovation.
• Identity and Access Management for Cloud Services.

Top News and Posts

• Software [in]security and scaling automated code review.
• Just Let Me Fling Birds at Pigs Already!

Incite 1/29/2014: Southern Snowpocalypse

I grew up in the northeast. My memories of snow weren't really good. I didn't ski, so all that I knew about snow was that I had to shovel it and it's hard to drive in. It is not inherently hard to drive in snow, but too many folks have no idea what they are doing, which makes it hard.

To be clear, this situation is on me. I had an opportunity to go home earlier today. But I wanted my coffee and the comfort of working in a familiar Starbucks, rather than my familiar basement office. Not my brightest decision. I figured most folks would clear out early, so it would be fine later in the day. Wrong. Wrong. Wrong. Evidently there are an infinite number of people in the northern Atlanta suburbs trying to get home. And they are all on the road at the same time. A few of them have rear wheel drive cars, which get stuck on the mildest of inclines. No one can seem to get anywhere. I depend on the Waze app for navigation. Its crowdsourced traffic info has been invaluable. Not today. It has routed me in a circle, and 90 minutes later I am basically where I started. Although I can't blame Waze – you can't really pinpoint where a car gets stuck and causes gridlock until someone passes by. In case it wasn't clear, no one is going anywhere.

So I wait. I read my email. I caught up on my twitter feed. I checked Facebook, where I saw that most of my friends in ATL were similarly stuck in traffic. It's awesome. My kids have already gone out and played in the snow. I hope the boss took pictures. I missed it. Oh well. Nothing I can do now. Except smile. And breathe. And smile again. At some point I will get home. I will be grateful. Oh yeah, and next time I will stay home when it threatens to snow. Duh.

–Mike

UPDATE: It took me about 4 1/2 hours to get home. Yes, to travel 6 miles. I could have walked home faster. But it was 20 degrees, so that wouldn't really have worked well either. Some kids in XX1's middle school didn't get home until 10 PM. It was a total nightmare. My family and friends are safe, and that's all that matters. Now get these kids out of my hair. I have work to do…

Photo credit: This is an actual picture of sitting in traffic yesterday. What you see was my view for about an hour inching along. And I don't normally play on the phone when I'm driving, but at that point I wasn't really driving…

Heavy Research

We're back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

• The Future of Information Security: Introduction
• Leveraging Threat Intelligence in Security Monitoring: Benefiting from the Misfortune of Others
• Reducing Attack Surface with Application Control: Use Cases and Selection Criteria; The Double Edged Sword
• Advanced Endpoint and Server Protection: Assessment; Introduction

Newly Published Papers

• Eliminating Surprises with Security Assurance and Testing
• What CISOs Need to Know about Cloud Computing
• Defending Against Application Denial of Service
• Security Awareness Training Evolution
• Firewall Management Essentials
• Continuous Security Monitoring
• API Gateways
• Threat Intelligence for Ecosystem Risk Management
• Dealing with Database Denial of Service
• Identity and Access Management for Cloud Services

Incite 4 U

CISOs don't focus on technology, not for long anyway: Seems like the roundtable Dan Raywood covered in "CISOs have 'too much focus on technology'" is about 5 years behind the times.
I spend a bunch of time with CISOs, and for the most part they aren't consumed by technology – more likely they are just looking for products to make the hackers go away. They have been focused on staffing and communicating the value of their security program. Yes, they still worry about malware and mobile devices and this cloud thing. But that doesn't consume them anymore. And any CISO who is consumed by technology and believes any set of controls can make hackers go away should have a current resume – s/he will need it. – MR

You don't want to know: Sri Karnam writes about the 8 things your boss wants you to know about 'Big Data Security' on the HP blog – to which I respond 'Not!' The three things your boss wants to know, in a security context, are: 1) What sensitive data do we have in there? 2) What is being done to secure it? 3) Is that good enough? The key missing ingredient from Sri's post is that your boss wants this information off the record. Bosses know not to go looking for trouble, and just want to know how to respond when their boss asks. If you formally tell them what's going on, they have knowledge, and can no longer rely on plausible deniability to blame you when something blows up. Sure, that's an ethical copout, but it's also a career-saver. – AL

Pure vs. applied research: Interesting post on Andrew Hay's blog about why security vendors need a research group. It seems every security vendor already has a research group (even if it's a guy paying someone to do a survey), so he's preaching to the choir a bit. But I like his breakdown of pure vs. applied research, where he posits vendors should be doing 70% of their research in areas that directly address customer problems. I couldn't agree more. If you're talking about a huge IT company, then they can afford to have Ph.D.s running around doing science projects. But folks who have to keep the lights on each quarter should be focused on doing research to help their customers solve problems. Because most customers can't think about pure research while they are trying to survive each day. –


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

It goes beyond Open Source Research, and is a far cry from the traditional syndicated research model – we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input are factored into the research where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.