Incident Response Fundamentals: Contain, Investigate, and Mitigate

In our last post, we covered the first steps of incident response – the trigger, escalation, and size up. Today we’re going to move on to the next three steps – containment, investigation, and mitigation.

Now that I’m thinking bigger picture, incident response really breaks down into three large phases. The first phase covers your initial response – the trigger, escalation, size up, and containment. It’s the part when the incident starts, you get your resources assembled and responding, and you take a stab at minimizing the damage from the incident. The next phase is the active management of the incident, where we investigate the situation and actively mitigate the problem. The final phase is the cleanup, where we make sure we really stopped the incident, recover from the after effects, and try to figure out why this happened and how we can prevent it in the future. This includes the mop up (cleaning the remnants of the incident and making sure there aren’t any surprises left behind), your full investigation and root cause analysis, and QA (quality assurance) of your response process.

Since we’re writing this as we go, I technically should have included containment in the previous post, but didn’t think of it at the time. I’ll make sure it’s all fixed before we hit the whitepaper.

Contain

Containing an ongoing incident is one of the most challenging tasks in incident response. You lack a complete picture of what’s going on, yet you have to take proactive actions to minimize damage and potential incident growth. And you have to do it fast. Adding to the difficulty is the fact that in some cases your instinct to stop the problem may actually exacerbate the situation. This is where training, knowledge, and experience are absolutely essential. Specific plans for certain major incident categories are also important. For example:

  • For “standard” virus infections and attacks, your policy might be to isolate those systems on the network so the infection doesn’t spread. This might include full isolation of a laptop, or blocking any email attachments on a mail server.
  • For those of you dealing with a well-funded persistent attacker (yeah, APT), the last thing you want to do is start taking known infected systems offline. This usually leads the attacker to trigger a series of deeper exploits, and you might end up with 5 compromised systems for every one you clean. In this case your containment may be to stop putting new sensitive data in any location accessed by those compromised systems (this is just an example – responding to these kinds of attackers is most definitely a complex art in and of itself).
  • For employee data theft, you first get HR, legal, and physical security involved. They may direct you to instantly lock the employee out, or perhaps just monitor their device and/or limit access to sensitive information while they build a case.
  • For compromise of a financial system (like credit card processing), you may decide to suspend processing and/or migrate to an alternative platform until you can determine the cause later in your response.

These are just a few quick examples, but the goal is clear – make sure things do not get worse. But you have to temper this defensive instinct with any needs for later investigation/enforcement, the possibility that your actions might make the situation worse, and the potential business impact.
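To make this concrete, here’s a rough sketch of how those pre-planned responses might be encoded so responders aren’t improvising under pressure. This is purely illustrative – the category names and actions are hypothetical examples, not our formal methodology or any particular product:

```python
# containment_playbook.py -- purely illustrative sketch of a containment
# playbook lookup. Category names and actions are hypothetical examples.

CONTAINMENT_PLAYBOOK = {
    "malware_standard": [
        "Isolate infected hosts on a quarantine VLAN",
        "Block email attachments at the mail gateway",
    ],
    "apt_persistent": [
        "Do NOT take known-infected systems offline",
        "Stop placing new sensitive data where compromised systems can reach it",
    ],
    "insider_data_theft": [
        "Engage HR, legal, and physical security before acting",
        "Lock out or quietly monitor the employee, per their direction",
    ],
    "financial_system_compromise": [
        "Suspend processing and/or migrate to an alternate platform",
    ],
}

def containment_actions(category: str) -> list[str]:
    """Return the pre-approved containment steps for an incident category."""
    if category not in CONTAINMENT_PLAYBOOK:
        raise ValueError(f"No containment plan for category: {category!r}")
    return CONTAINMENT_PLAYBOOK[category]

if __name__ == "__main__":
    for step in containment_actions("apt_persistent"):
        print("-", step)
```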
And although it’s not possible to build scenarios for every possible incident, you want to map out your intended responses for the top dozen or so, to make sure everyone knows what they should be doing to contain the damage.

Investigate

At this point you have a general idea of what’s going on and have hopefully limited the damage. Now it’s time to really dig in and figure out exactly what you are facing. Remember – at this point you are in the middle of an active incident; your focus is to gather just as much information as you need to mitigate the problem (stop the bad guys, since this series is security-focused), and to collect it in a way that doesn’t preclude subsequent legal (or other) action. Now isn’t the time to jump down the rabbit hole and determine every detail of what occurred, since that may draw valuable resources away from the actual mitigation of the problem.

The nature of the incident will define what tools and data you need for your investigation, and there’s no way we can cover them all in this series. But here are some of the major options, some of which we’ll discuss in more detail when we get to deeper investigation and root cause analysis later in the process:

  • Network security monitoring tools: This includes a range of network security tools such as network forensics, DLP, IDS/IPS, application control, and next-generation firewalls. The key is that the more useful tools not only collect a ton of information, but also include analysis and/or correlation engines that help you quickly sift through massive volumes of data.
  • Log Management and SIEM: These tools collect a lot of data from heterogeneous sources you can use to support investigations. Log Management and SIEM are converging, which is why we include both of them here. You can check out our report on this technology to learn more.
  • System forensics: A good forensics tool is one of your best friends in an investigation. While you might not use it to its complete capabilities until later in the process, it allows you to collect forensically valid images of systems to support later investigations, while providing valuable immediate information.
  • Endpoint OS and EPP logs: Operating systems collect a fair bit of log information that may be useful to pinpoint issues, as does your endpoint protection platform (most of the EPP data is likely synced to its server). Access logs, if available, may be particularly useful in any incident involving potential data loss.
  • Application and database logs: Including data from security tools like Database Activity Monitoring and Web Application Firewalls.
  • Identity, directory, and DHCP logs: To determine who was on the network, and to tie the IP addresses and accounts you see in other logs back to specific users and machines.
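That last item deserves a concrete example. Here’s a minimal, hypothetical sketch of using DHCP lease logs to tie an IP address from your other logs back to a specific machine. The log format here is an assumption (a simple CSV of timestamp, IP, MAC, and hostname) – real DHCP servers each log differently, so adjust for yours:

```python
# dhcp_lookup.py -- hypothetical sketch: map an IP address seen during an
# incident back to a MAC address and hostname using DHCP lease records.
# Assumes a simplified CSV format (timestamp,ip,mac,hostname); real DHCP
# servers (ISC dhcpd, Windows DHCP, etc.) each use their own log formats.

import csv
from datetime import datetime

def leases_for_ip(log_path: str, target_ip: str):
    """Yield (timestamp, mac, hostname) for every lease of target_ip."""
    with open(log_path, newline="") as f:
        for timestamp, ip, mac, hostname in csv.reader(f):
            if ip == target_ip:
                yield datetime.fromisoformat(timestamp), mac, hostname

if __name__ == "__main__":
    for ts, mac, host in leases_for_ip("dhcp_leases.csv", "10.1.2.34"):
        print(f"{ts}: 10.1.2.34 leased to {mac} ({host})")
```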


PCI 2.0: the Quicken of Security Standards

A long time ago I tried to be one of those Quicken folks who track all their income and spending. I loved all the pretty spreadsheets, but given my income at the time it was more depressing than useful. I don’t need a bar graph to tell me that I’m out of beer money. The even more depressing thing about Quicken was (and still is) the useless annual updates. I’m not sure I’ve ever seen a piece of software that offered so few changes for so much money every year. Except maybe antivirus.

Two weeks ago the PCI Security Standards Council released version 2.0 of everyone’s favorite standard to hate (and the PA-DSS, the beloved guidance for anyone making payment apps/hardware). After many months of “something’s going to change, but we won’t tell you yet” press releases and briefings, it was nice to finally see the meat. But like Quicken, PCI 2.0 is really more of a minor dot release (1.3) than a major full version release. There aren’t any major new requirements, but there are a ton of clarifications and tweaks. Most of these won’t have any immediate material impact on how people comply with PCI, but there are a couple early signs that some of these minor tweaks could have major impact – especially around content discovery.

There are many changes to “tighten the screws” and plug common holes many organizations were taking advantage of (deliberately or due to ignorance), which reduced their security. For example, 2.2.2 now requires you to use secure communications services (SFTP vs. FTP), test a sample of them, and document any use of insecure services – with the business reason and the security controls used to make them secure. Walter Conway has a good article covering some of the larger changes at StoreFrontBackTalk.

In terms of impact, the biggest changes I see are in scope. You now have to explicitly identify every place you have and use cardholder data, and this includes any place outside your defined transaction environment it might have leaked into. Here’s the specific wording:

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

  • The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
  • Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
  • The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
  • The entity retains documentation that shows how PCI DSS scope was confirmed and the results, for assessor review and/or for reference during the next annual PCI SCC scope confirmation activity.

Maybe I should change the title of the post, because this alone could merit a full revision designation. You now must scan your environment for cardholder data. Technically you can do it manually, and I suspect various QSAs will allow this for a while, but realistically no one except the smallest organizations can possibly meet this requirement without a content discovery tool. I guess I should have taken a job with a DLP vendor.

The virtualization scope also expanded, as covered in detail by Chris Hoff. Keep in mind that anything related to PCI and virtualization is highly controversial, as various vendors try their darndest to water down any requirement that could force physical segregation of cardholder data in virtual environments. Make your life easier, folks – don’t allow cardholder data on a virtual server or service that also includes less-secure operations, or where you can’t control the multi-tenancy.

Of course, none of the changes addresses the fact that every card brand treats PCI differently, or the conflicts of interest in the system (the people performing your assessment can also sell you ‘security’; put another way, decisions are made by parties with obvious conflicts of interest, which could never pass muster in a financial audit), or shopping for QSAs, or the fact that the card brands don’t want to change the system, but prefer to push costs onto vendors and service providers. But I digress.

There is one last way PCI is like Quicken: it can be really beneficial if you use it properly, and really dangerous if you don’t. And most people don’t.
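As a footnote on the discovery point: even the most basic automated check for cardholder data boils down to pattern matching plus a Luhn checksum – something like the deliberately naive sketch below, which is a far cry from what a real content discovery tool does with encodings, archives, and databases:

```python
# pan_scan.py -- deliberately naive sketch of cardholder data discovery.
# A real DLP/content discovery tool handles file formats, encodings,
# archives, and databases, and does far better false-positive reduction.

import re
import sys

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str) -> None:
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for match in PAN_CANDIDATE.finditer(line):
                digits = re.sub(r"[ -]", "", match.group())
                if luhn_valid(digits):
                    # Mask the middle digits when reporting.
                    print(f"{path}:{lineno}: possible PAN "
                          f"{digits[:6]}...{digits[-4:]}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan_file(p)
```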


MS Atlanta: Protection Is Not Security

Microsoft has announced the beta release of something called Microsoft Codename “Atlanta”, which is being described as a “Cloud-Based SQL Server Monitoring tool”. Atlanta is deployed as an agent that embeds into SQL Server 2008 databases and sends telemetry information back to the Microsoft ‘cloud’ on your behalf. This data is analyzed and compared against a set of configuration policies, generating alerts when Microsoft discovers database misconfiguration.

How does it do this? It looks at configuration data and some runtime system statistics. The policies seem geared toward helping DBAs with advanced SQL features such as mirroring, clustering, and virtual deployments. It looks at version and patch information, and it collects some telemetry data to assist with root cause analysis for performance issues and failures. And finally, the service gets information into Microsoft’s hands faster, in an automated fashion, so support can respond more quickly to requests. The model is a little different than most cloud offerings, as it’s not the infrastructure that’s being pushed to the cloud, but rather the management features. Analysis does not appear to occur in real time, but this limitation may be lifted in the production product.

If you are like me, you might have gotten excited for a minute, thinking that Microsoft had finally released a vulnerability assessment tool for SQL Server databases, but alas, “Atlanta” does not appear to be a vulnerability assessment tool at all. In fact, it does not appear to have general configuration policies for security either. Like most System Center products, “Data Protection” for SQL Server actually means integrity and reliability, not privacy and security. If you have ever read the “How to protect Microsoft SQL Server” white paper, you know exactly what I mean. So if you were thinking you could get protection and configuration management for security and compliance, you will have to look elsewhere.

The good news is I don’t see any serious downside or imminent security concern with Atlanta. The data sent to the cloud does not present a privacy or security risk, and the agent does not appear to provide any command and control interface, so it’s less likely to be exploitable. Small IT teams could benefit from automated tips on how the database should be set up, so that’s a good thing. As the feature set grows you will need to pay close attention to changes in agent functionality and to what data is being transferred. If this evolves and starts pushing database contents around like the Data Protection Manager, a serious security review is warranted.
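For the curious, here’s a rough sketch of the kind of configuration snapshot an agent like this might collect. This is speculative – Atlanta’s actual agent internals and data formats aren’t public – and it assumes the pyodbc package plus an ODBC DSN named “sqlserver”:

```python
# sqlconfig_snapshot.py -- speculative sketch of the kind of configuration
# telemetry a monitoring agent might pull from SQL Server 2008. Atlanta's
# real agent and wire format are not public; this is illustrative only.
# Assumes the pyodbc package and an ODBC DSN named "sqlserver".

import json
import pyodbc

def collect_snapshot(dsn: str = "sqlserver") -> dict:
    conn = pyodbc.connect(f"DSN={dsn}")
    cur = conn.cursor()
    # Server version and patch level.
    version = cur.execute("SELECT @@VERSION").fetchone()[0]
    # Instance-wide settings -- what a policy engine would compare.
    cur.execute("SELECT name, value_in_use FROM sys.configurations")
    settings = {name: int(value) for name, value in cur.fetchall()}
    conn.close()
    return {"version": version, "settings": settings}

if __name__ == "__main__":
    # A real agent would ship this snapshot to the monitoring service
    # for comparison against configuration policies.
    print(json.dumps(collect_snapshot(), indent=2))
```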


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.