
Datum Entanglement

I’m hanging out in the Red Carpet Club at the Orlando airport, waiting to head home from the Cloud Security Alliance Congress. Yesterday Chris Hoff and I presented a three-part series – first our joint presentation on disruptive innovation and cloud computing (WINnovation), then his awesome presentation on cloud computing infrastructure security issues (and more: Cloudinomicon), and finally Quantum Datum, my session on information-centric security for cloud computing. It was one of the most complex presentations I’ve ever put together in terms of content and delivery, and the feedback was pretty positive, with a few things I need to fix. Weirdly enough I was asked for more of the esoteric content and less of the practical, which is sorta backwards. I enjoy the esoteric, but try not to do too much of it because we analyst types already have a reputation for forgetting about the real world.

While I don’t intend to blog the entire presentation, and the slides don’t make sense without the talk, I’m going to break out some of the interesting bits as separate posts. As you can imagine from the title, the ‘theme’ was quantum mechanics, which provides some great metaphors for certain information-centric security issues.

One of the most fascinating bits about quantum mechanics is the concept of quantum entanglement, sometimes called “spooky action at a distance”. Assuming you trust a history major to talk quantum physics, quantum entanglement is a phenomenon that emerges from the wave-like nature of subatomic particles. Things like electrons don’t behave like marbles, but more like a cross between a marble and a wave – they exhibit characteristics of both particles and waves. One consequence is that you can split certain particles into smaller particles, each of which represents a different part of the parent wave function. For example, after the split you end up with one piece with an ‘up’ spin and another with a ‘down’ spin, but never two ups or two downs. You can then separate these particles over a distance, and measuring the state of one instantly collapses the wave function and determines the state of the other. Thus you can instantly affect state across arbitrary distances – but it doesn’t violate the speed of light because technically no information is transferred.

This is an interesting metaphor for data loss. If I have a given datum (the singular of ‘data’), the security state of any copy of that datum is affected by the state of all other copies of that datum. Well, sort of. Unlike quantum entanglement, this is a one-way function: the security state of any copy can only decrease the security of all the rest, never increase it. This is why data loss is such an intractable problem. The more copies of a given datum (which could be a single number, or a 2-hour-long movie), the greater the probability of a security failure (assuming distribution), and the weaker the overall relative security becomes. If one copy leaks, considering the interconnectivity of the Internet, that single copy is now potentially available, and thus the security of all the other copies is reduced. This is really a stupidly complex way of saying that the overall security of a given datum is no greater than the weakest security of any copy.

Now think in practical terms. It doesn’t matter how secure your database server is if someone can run a query, extract the data, dump it into an Excel spreadsheet, and email it. I believe the scientific term for this is ‘bummer’.
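To make the weakest-link point concrete, here is a minimal sketch of the arithmetic (mine, not part of the talk, with made-up numbers): the effective security of a datum is bounded by its least-secure copy, and the chance that at least one copy leaks grows with every copy you distribute, assuming independent failures.

    # Illustrative only: hypothetical per-copy security scores and leak probabilities.
    def effective_security(copy_scores):
        """The datum is no more secure than its least-secure copy."""
        return min(copy_scores)

    def leak_probability(per_copy_leak_chances):
        """Chance that at least one copy leaks, assuming independent failures."""
        p_no_leak = 1.0
        for p in per_copy_leak_chances:
            p_no_leak *= (1.0 - p)
        return 1.0 - p_no_leak

    copies = {"database": 0.95, "excel_export": 0.40, "emailed_copy": 0.10}
    print(effective_security(copies.values()))             # 0.1 -- the emailed copy sets the bar
    print(round(leak_probability([0.01, 0.05, 0.20]), 2))  # 0.25 -- three copies, three chances to leak

Nothing profound, but it shows why adding copies only ever moves the numbers in one direction.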


Incident Response Fundamentals: Mop up, Analyze, and QA

You did well. You followed your incident response plan and the fire is out. Too bad that was the easy part, and you now get to start the long journey from ending a crisis all the way back to normal. If we get back to our before, during, and after segmentation, this is the ‘after’ part. In the vast majority of incidents the real work begins after the immediate incident is over, when you’re faced with the task of returning operations to status quo ante, finding the root cause of the problem, and putting controls in place to ensure it doesn’t happen again. The after part of the process consists of three phases (Mop up, Analyze, and QA), two of which overlap and can be performed concurrently.

And remember – we are describing a full incident response process and tend to use major situations in our examples, but everything we are talking about scales down for smaller incidents too, which might be managed by a single person in a matter of minutes or hours. The process should scale both up and down, depending on the severity and complexity of an incident, but even dealing with what seems to be the simplest incident requires a structured process. That way you won’t miss anything.

Mop up

We steal the term “mop up” from the world of firefighting – where cleaning up after yourself may literally involve a mop. Hopefully we won’t need to break out the mops in an IT incident (though stranger things have happened), but the concept is the same – clean up after yourself, and do what’s required to restore normal operations. This usually occurs concurrently with your full investigation and root cause analysis. There are two aspects to mopping up, each performed by a different team:

  • Cleaning up incident response changes: During a response we may take actions that disrupt normal business operations, such as shutting down certain kinds of traffic, filtering email attachments, and locking down storage access. During the mop up we carefully return to our pre-incident state, but only as we determine it’s safe to do so, and some controls implemented during the response may remain in place. For example, during an incident you might have blocked all traffic on a certain port to disable the command and control network of a malware infection. During the mop up you might reopen the port, or open it and filter certain egress destinations. Mop up is complete when you have restored all changes to where you were before the incident, or have accepted specific changes as a permanent part of your standards/configurations. Some changes – such as updating patch levels – will clearly stay, while others – including temporary workarounds – need to be backed out as a permanent solution goes into place.
  • Restoring operations: While the incident responders focus on investigation and cleaning out temporary controls they put in place during the incident, IT operations handles updating software and restoring normal operations. This could mean updating patch levels on all systems, checking for and cleaning malware, restoring systems from backup and bringing them back up to date, and so on. The incident response team defines the plan to safely return to operations and cleans up the remnants of its actions, while IT operations teams face the tougher task of getting all the systems and networks where they need to be on a ‘permanent’ basis (not that anything in IT is permanent, but you know what we mean).
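One way to keep the first half of mop up honest is simply to track every temporary control applied during the response, and refuse to call mop up complete until each one is either backed out or formally accepted into the standard configuration. A minimal sketch (hypothetical changes and statuses, not part of the original series):

    from dataclasses import dataclass, field

    @dataclass
    class ResponseChange:
        description: str        # e.g. "blocked outbound TCP/6667 at the perimeter"
        status: str = "active"  # active -> backed_out | made_permanent

    @dataclass
    class MopUpLog:
        changes: list = field(default_factory=list)

        def add(self, description):
            self.changes.append(ResponseChange(description))

        def complete(self):
            # Mop up is done only when nothing temporary is still in effect.
            return all(c.status != "active" for c in self.changes)

    log = MopUpLog()
    log.add("blocked outbound TCP/6667 at the perimeter")
    log.add("disabled email attachments on the mail gateway")
    log.changes[0].status = "made_permanent"  # the egress block joins the standard config
    log.changes[1].status = "backed_out"      # attachment filtering removed once systems are patched
    print(log.complete())                     # True -- nothing left dangling

Whether this lives in a script, a ticket system, or a spreadsheet matters far less than the discipline of recording every change as you make it.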
Investigation and Analysis

The initial incident is under control, and operations are being restored to normal as a result of the mop up. Now is when you start the in-depth investigation of the incident to determine its root cause and figure out what you need to do to prevent a similar incident from happening in the future. Since you’ve handled the immediate problem, you should already have a good idea of what happened, but that’s a far cry from a full investigation. To use a medical analogy, think of it as switching from treating the symptoms to treating the source of the infection. To go back to our malware example, you can often manage the immediate incident even without knowing how the initial infection took place. Or in the case of a major malicious data leak, you switch from containing the leak and taking immediate action against the employee to building the forensic evidence required for legal action, and ensuring the leak remains an isolated incident rather than a systematic loss of data.

In the investigation we piece together all the information we collected as part of the incident response with as much additional data as we can find, to produce an accurate timeline of what happened and why. This is a key reason we push heavy monitoring so strongly, as a core process throughout your organization – modern incidents and attacks can easily slip through the gaps of ‘point’ tools and basic logs. Extensive monitoring of all aspects of your environment (both the infrastructure and up the stack), often using a variety of technologies, provides more complete information for investigation and analysis. We have already talked about various data sources throughout this series, so instead of rehashing them, here are a few key areas that tend to provide the most useful nuggets of information:

  • Beyond events: Although IDS/IPS, SIEM, and firewall logs are great for managing an ongoing incident, they may provide an incomplete picture during your deeper investigation. They tend to record information only when they detect a problem, which doesn’t help much if you don’t have the right signature or trigger in place. That’s where a network forensics (full network packet capture) solution comes in – by recording everything going on within the network, these devices allow you to look for the trails you would otherwise miss, and piece together exactly what happened using real data.
  • System forensics: Some of the most valuable tools for analyzing servers and endpoints are system forensics tools. OS and application logs are all too easy to fudge during an attack. These tools are also
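The timeline the investigation works toward can be sketched very simply: collect timestamped events from every source you have, normalize the timestamps, and sort. Here is a minimal illustration (hypothetical records and hosts, not from the series):

    from datetime import datetime

    # Hypothetical records from three different sources: IDS, web proxy, endpoint agent.
    records = [
        ("2010-11-08 14:02:11", "ids",      "outbound beacon to 203.0.113.7"),
        ("2010-11-08 13:55:40", "proxy",    "download of stage2.bin from 203.0.113.7"),
        ("2010-11-08 13:51:02", "endpoint", "new service installed on host FIN-WS-22"),
    ]

    def build_timeline(recs):
        parsed = [(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), source, event)
                  for ts, source, event in recs]
        return sorted(parsed)  # chronological order, regardless of which tool logged it

    for ts, source, event in build_timeline(records):
        print(f"{ts}  [{source:8}] {event}")

Real investigations involve clock skew, time zones, and far messier formats, which is exactly why the heavier tools above earn their keep.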


What You Need to Know about DLP for PCI 2.0

As I mentioned in my PCI 2.0 post, one of the new version’s most significant changes is that organizations now must not only confirm that they know where all their cardholder data is, but document how they know this and keep it up to date between assessments. You can do this manually, for now, but I suspect that won’t work except in the most basic environments. The rest of you will probably be looking at using Data Loss Prevention for content discovery. Why DLP? Because it’s the only technology I know of that can accurately and effectively gather the information you need. For more details (much more detail) check out my big DLP guide.

For those of you looking at DLP or an alternate technology to help with PCI 2.0, here are some things to look for:

  • A content analysis engine able to accurately detect PAN data. A good regular expression is a start, although without some additional tweaking that will probably result in a lot of false positives. Potentially a ton…
  • The ability to scan a variety of storage types – file shares, document management systems, and whatever else you use. For large repositories, you’ll probably want a local agent rather than pure network scanning for performance reasons. It really depends on the volume of storage and the network bandwidth. Worst case, drop another NIC into the server (whatever is directly connected to the storage) and connect it via a subnet/private network to your scanning tool.
  • Whatever you get, make sure it can examine common file types like Office documents. A text scanner without a file cracker can’t do this.
  • Don’t forget about endpoints – if there’s any chance they touch cardholder data, you’ll probably be told to either scan a sample or scan them all. An endpoint DLP agent is your best bet – even if you only run it occasionally.
  • Few DLP solutions can scan databases. Either get one that can, or prepare yourself to manually extract to text files any database that might possibly come into scope. And pray your assessor doesn’t want them all checked.
  • Good reporting – to save you time during the assessment process.

DLP offers a lot more, but if all you care about is handling the PCI scope requirement, these are the core pieces and features you’ll need. Another option is to look at a service, which might be something SaaS based, or a consultant with DLP on a laptop. I’m pretty sure there won’t be any shortage of people willing to come in and help you with your PCI problems… for a price.
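To illustrate the first bullet, here is a minimal sketch (mine, not from any particular DLP product) of why a bare regex over-matches and what “additional tweaking” can look like: a loose PAN pattern plus a Luhn checksum that weeds out random 16-digit strings.

    import re

    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose: 13-16 digits, optional separators

    def luhn_ok(digits: str) -> bool:
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_pans(text: str):
        for match in PAN_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_ok(digits):
                yield digits

    print(list(find_pans("order 4111 1111 1111 1111 shipped; invoice 1234567812345678")))
    # ['4111111111111111'] -- the invoice number fails the Luhn check

Real products layer more on top (issuer prefixes, proximity to terms like “expiration”, file cracking), but even this small checksum kills most of the noise a bare regex produces.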


Friday Summary: November 11, 2010

When we came up with the Friday Summary, the idea was we’d share something personal that was either humorous or relevant to security, then highlight our content from the week, the best things we read on other sites, and any major industry news. The question is always where to draw the line on the personal stuff. I mean, it isn’t like this is Twitter. Hopefully this next story doesn’t cross the line. It’s not too personal, but especially for those of you with kids, it might bring a smile.

This morning I was getting my 20-month-old ready for daycare when I may have let loose a little toot. I’ve always known that is one of those things I’ll have to… put a cap on… once she got older and knows what it is. But I’m practically a vegetarian, and that comes with certain consequences. Anyway, it went like this:

Me: [toot]
Daughter (looking me in the eye): “Daddy pooped!”
Me: Er.

Anyway, yet one more thing I can’t do in the comfort of my own home. Nope. This has nothing to do with security. Live with it.

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich is speaking at the Cloud Security Alliance Congress next week. I’m co-presenting with Hoff again, and premiering my new Quantum Datum pitch on information-centric security for cloud computing. Haven’t been this excited to present new content in a long time.
  • Adrian’s Dark Reading post on NoSQL.

Favorite Securosis Posts

  • Rich: Baa Baa Blacksheep. Lather. Rinse. Get pwned. Repeat.
  • Mike Rothman: MS Atlanta: Protection Is Not Security. It’s always hard to wade through the hyperbole and marketing rhetoric, especially with a fairly technical topic. You are lucky Adrian is there to explain things.
  • Adrian: Baa Baa Blacksheep. Zscaler totally freakin’ missed the point.

Other Securosis Posts

  • LinkedIn Password Reset FAIL.
  • Incite 11/10/2010: Hallowreck (My Diet).
  • PCI 2.0: the Quicken of Security Standards.
  • React Faster and Better: Contain, Investigate, and Mitigate.
  • React Faster and Better: Trigger, Escalate, and Size up.
  • Security Metrics: Do Something.

Favorite Outside Posts

  • Rich: Verizon launches VERIS site to anonymously share incident data. I’m on the advisory board (unpaid) and a bit biased, but I think this is a great initiative.
  • Mike Rothman: Indiana AG sues WellPoint for $300K. $300K * 10-15 states could add up to some real money. This is just a reminder that getting your act together on disclosure remains important, unless you like contributing a couple hundred large to your state’s treasury (and everybody else’s, eventually).
  • Adrian Lane: All In One Skimmers. And yes, it’s really that easy. On a positive note, this may be the only piece of electronic gear not made in China.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Operations Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.
  • Security + Agile = FAIL Presentation.

Top News and Posts

  • New Android Bug Allows for Silent Malicious App Installation.
  • A Database Administrator Disconnect Over Security Duties.
  • PGP Disk Encryption Bricks Upgraded Macs.
  • It’s time to get very serious about Java updates. Java is a friggin’ mess. You’ll hear more about it in the coming years… trust us.
  • Body Armor for Bad Web Sites.
  • Danger to IE users climbs as hacker kit adds exploit.
  • The Great Cyberheist. A great, in-depth article on Albert Gonzales (the TJX/Heartland/etc. hacker).
  • Chrome, Pitted.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Asa, in response to Baa Baa Blacksheep.

Firesheep is not the attack; it’s the messenger.


Incident Response Fundamentals: Contain, Investigate, and Mitigate

In our last post, we covered the first steps of incident response – the trigger, escalation, and size up. Today we’re going to move on to the next three steps – containment, investigation, and mitigation.

Now that I’m thinking bigger picture, incident response really breaks down into three large phases. The first phase covers your initial response – the trigger, escalation, size up, and containment. It’s the part when the incident starts, you get your resources assembled and responding, and you take a stab at minimizing the damage from the incident. The next phase is the active management of the incident, where we investigate the situation and actively mitigate the problem. The final phase is the clean up, where we make sure we really stopped the incident, recover from the after effects, and try to figure out why this happened and how we can prevent it in the future. This includes the mop up (cleaning the remnants of the incident and making sure there aren’t any surprises left behind), your full investigation and root cause analysis, and QA (quality assurance) of your response process. Since we’re writing this as we go, I technically should have included containment in the previous post, but didn’t think of it at the time. I’ll make sure it’s all fixed before we hit the whitepaper.

Contain

Containing an ongoing incident is one of the most challenging tasks in incident response. You lack a complete picture of what’s going on, yet you have to take proactive actions to minimize damage and potential incident growth. And you have to do it fast. Adding to the difficulty is the fact that in some cases your instincts to stop the problem may actually exacerbate the situation. This is where training, knowledge, and experience are absolutely essential. Specific plans for certain major incident categories are also important. For example:

  • For “standard” virus infections and attacks your policy might be to isolate those systems on the network so the infection doesn’t spread. This might include full isolation of a laptop, or blocking any email attachments on a mail server.
  • For those of you dealing with a well-funded persistent attacker (yeah, APT), the last thing you want to do is start taking known infected systems offline. This usually leads the attacker to trigger a series of deeper exploits, and you might end up with 5 compromised systems for every one you clean. In this case your containment may be to stop putting new sensitive data in any location accessed by those compromised systems (this is just an example – responding to these kinds of attackers is most definitely a complex art in and of itself).
  • For employee data theft, you first get HR, legal, and physical security involved. They may direct you to instantly lock the employee out, or perhaps just monitor their device and/or limit access to sensitive information while they build a case.
  • For compromise of a financial system (like credit card processing), you may decide to suspend processing and/or migrate to an alternative platform until you can determine the cause later in your response.

These are just a few quick examples, but the goal is clear – make sure things do not get worse. But you have to temper this defensive instinct with any needs for later investigation/enforcement, the possibility that your actions might make the situation worse, and the potential business impact.
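Those example categories lend themselves to a simple pre-planned lookup, so responders are not improvising containment under pressure. A minimal sketch (hypothetical categories and actions, far less nuanced than a real plan):

    CONTAINMENT_PLAYBOOK = {
        "commodity_malware": [
            "isolate infected hosts from the network",
            "block email attachments on the mail gateway",
        ],
        "persistent_attacker": [
            "do NOT start taking known-infected systems offline",
            "stop placing new sensitive data where those systems can reach it",
        ],
        "insider_data_theft": [
            "engage HR, legal, and physical security first",
            "monitor or restrict the employee's access as they direct",
        ],
        "payment_system_compromise": [
            "suspend processing or fail over to an alternate platform",
        ],
    }

    def containment_options(category):
        return CONTAINMENT_PLAYBOOK.get(
            category, ["escalate: no pre-planned containment for this category"])

    for step in containment_options("persistent_attacker"):
        print("-", step)

The point is not the code; it is that the decision was made calmly, ahead of time, by people who understand the trade-offs.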
And although it’s not possible to build scenarios for every possible incident, you want to map out your intended responses for the top dozen or so, to make sure everyone knows what they should be doing to contain the damage.

Investigate

At this point you have a general idea of what’s going on and have hopefully limited the damage. Now it’s time to really dig in and figure out exactly what you are facing. Remember – at this point you are in the middle of an active incident; your focus is to gather just as much information as you need to mitigate the problem (stop the bad guys, since this series is security-focused) and to collect it in a way that doesn’t preclude subsequent legal (or other) action. Now isn’t the time to jump down the rabbit hole and determine every detail of what occurred, since that may draw valuable resources away from the actual mitigation of the problem. The nature of the incident will define what tools and data you need for your investigation, and there’s no way we can cover them all in this series. But here are some of the major options, some of which we’ll discuss in more detail when we cover deeper investigation and root cause analysis later in the process:

  • Network security monitoring tools: This includes a range of network security tools such as network forensics, DLP, IDS/IPS, application control, and next-generation firewalls. The key is that the more useful tools not only collect a ton of information, but also include analysis and/or correlation engines that help you sift through massive volumes of information quickly.
  • Log Management and SIEM: These tools collect a lot of data from heterogeneous sources you can use to support investigations. Log Management and SIEM are converging, which is why we include both of them here. You can check out our report on this technology to learn more.
  • System Forensics: A good forensics tool is one of your best friends in an investigation. While you might not use it to its complete capabilities until later in the process, the forensics tool allows you to collect forensically valid images of systems to support later investigations while providing valuable immediate information.
  • Endpoint OS and EPP logs: Operating systems collect a fair bit of log information that may be useful to pinpoint issues, as does your endpoint protection platform (most of the EPP data is likely synced to its server). Access logs, if available, may be particularly useful in any incident involving potential data loss.
  • Application and Database Logs: Including data from security tools like Database Activity Monitoring and Web Application Firewalls.
  • Identity, Directory and DHCP logs: To determine who


PCI 2.0: the Quicken of Security Standards

A long time ago I tried to be one of those Quicken folks who track all their income and spending. I loved all the pretty spreadsheets, but given my income at the time it was more depressing than useful. I don’t need a bar graph to tell me that I’m out of beer money. The even more depressing thing about Quicken was (and still is) the useless annual updates. I’m not sure I’ve ever seen a piece of software that offered so few changes for so much money every year. Except maybe antivirus.

Two weeks ago the PCI Security Standards Council released version 2.0 of everyone’s favorite standard to hate (and the PA-DSS, the beloved guidance for anyone making payment apps/hardware). After many months of “something’s going to change, but we won’t tell you yet” press releases and briefings, it was nice to finally see the meat. But like Quicken, PCI 2.0 is really more of a minor dot release (1.3) than a major full version release. There aren’t any major new requirements, but there are a ton of clarifications and tweaks. Most of these won’t have any immediate material impact on how people comply with PCI, but there are a couple early signs that some of these minor tweaks could have major impact – especially around content discovery.

There are many changes to “tighten the screws” and plug common holes many organizations were taking advantage of (deliberately or due to ignorance), which reduced their security. For example, 2.2.2 now requires you to use secure communications services (SFTP vs. FTP), test a sample of them, and document any use of insecure services – with the business reason and the security controls used to make them secure. Walter Conway has a good article covering some of the larger changes at StoreFrontBackTalk.

In terms of impact, the biggest changes I see are in scope. You now have to explicitly identify every place you have and use cardholder data, and this includes any place outside your defined transaction environment it might have leaked into. Here’s the specific wording:

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

  • The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
  • Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
  • The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
  • The entity retains documentation that shows how PCI DSS scope was confirmed and the results, for assessor review and/or for reference during the next annual PCI SCC scope confirmation activity.

Maybe I should change the title of the post, because this alone could merit a full revision designation. You now must scan your environment for cardholder data. Technically you can do it manually, and I suspect various QSAs will allow this for a while, but realistically no one except the smallest organizations can possibly meet this requirement without a content discovery tool. I guess I should have taken a job with a DLP vendor.

The virtualization scope also expanded, as covered in detail by Chris Hoff. Keep in mind that anything related to PCI and virtualization is highly controversial, as various vendors try their darndest to water down any requirement that could force physical segregation of cardholder data in virtual environments. Make your life easier, folks – don’t allow cardholder data on a virtual server or service that also includes less-secure operations, or where you can’t control the multi-tenancy.

Of course, none of the changes addresses the fact that every card brand treats PCI differently, or the conflicts of interest in the system (the people performing your assessment can also sell you ‘security’; put another way, decisions are made by parties with obvious conflicts of interest which could never pass muster in a financial audit), or shopping for QSAs, or the fact that card brands don’t want to change the system, but prefer to push costs onto vendors and service providers. But I digress.

There is one last way PCI is like Quicken. It can be really beneficial if you use it properly, and really dangerous if you don’t. And most people don’t.
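For the new scope-confirmation requirement, even a crude sweep shows the shape of the work: walk a storage location, flag files containing candidate card numbers, and emit an inventory you can hand to an assessor. A minimal sketch (hypothetical share path, not a compliance tool):

    import csv
    import os
    import re

    PAN_CANDIDATE = re.compile(rb"\b(?:\d[ -]?){13,16}\b")  # candidate card numbers in raw bytes

    def scan_share(root, report_path="cardholder_data_inventory.csv"):
        with open(report_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["path", "candidate_matches"])
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as fh:
                            hits = len(PAN_CANDIDATE.findall(fh.read()))
                    except OSError:
                        continue  # unreadable file; a real tool would record the failure
                    if hits:
                        writer.writerow([path, hits])

    scan_share(r"\\fileserver\finance")  # hypothetical file share

It reads whole files into memory, produces false positives, and cannot open Office documents or databases, which is exactly why the content discovery tools mentioned above exist.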


Incident Response Fundamentals: Trigger, Escalate, and Size up

Okay, your incident response process is in place, you have a team, and you are hanging out in the security operations center, watching for Bad Things to happen. Then, surprise surprise, an alert triggers: what’s next?

Trigger and Escalate

The first thing you need to do is determine the basic parameters of the incident, and assign resources (people) to investigate and manage it. This is merely a quick and dirty step to get the incident response process kicked off, and the basic information you gather will vary based on what triggered the alert.

Not all alerts require a full incident response – much of what you already deal with on a day to day basis is handled by your existing security processes. Incident response is for situations that cannot be adequately handled by your standard processes. Most IDS, DLP, or other alerts/help desk calls don’t warrant any special response – this series is about incidents that fall outside the range of your normal background noise. Where do you draw the line? That depends entirely on your organization. In a small business a single system infected with malware might lead to a response, while the same infection in a larger company could be handled as a matter of course. Technically these smaller issues (in smaller companies) are “incidents” and follow the full response process, but that entire process would be managed by a single individual with a few clicks. Regardless of where the line is drawn, communication is still critical. All parties must be clear on the specifics of which situations require a full incident investigation and which do not.

For any incident, you will need a few key pieces of information early to guide next steps. These include:

  • What triggered the alert? If someone was involved or reported it, who are they?
  • What is the reported nature of the incident?
  • What is the reported scope of the incident? This is basically the number and nature of systems/data/people involved.
  • Are any critical assets involved?
  • When did the incident occur, and is it ongoing?
  • Are there any known precipitating events for the incident? In other words, is there a clear cause?

All this should be collected in a matter of seconds or minutes through the alerting process, and provides your initial picture of what’s going on. When an incident does look more serious, it’s time to escalate. We suggest you have guidelines for initiating this escalation, such as:

  • Involvement of designated critical data or systems.
  • Malware infecting a certain number of systems.
  • Sensitive data detected leaving the organization.
  • Unusual traffic/behavior that could indicate an external compromise.

Once you escalate it’s time to assign an appropriate resource, request additional resources (if needed), and begin the response. Remember that per our incident response principles, whoever first detects and evaluates the incident is in charge of it until they hand it off to someone else of equal or greater authority.

Size up

The term size up comes from the world of emergency services. It refers to the initial impressions of the responder as they roll up to the scene. They may be estimating the size of a cloud of black smoke coming out of a house, or a pile of cars in the middle of a highway. The goal here is to take the initial information provided and expand on it as quickly as possible to determine the true nature of the incident. For an IT response, this involves determining specific criteria – some of which you might already know:

  • Scope: Which systems/networks/data are involved?
    While the full scope of an IT incident may take some time to determine, right now we need to go beyond the initial information provided and learn as much about the extent of the incident as possible. This includes systems, networks, and data. Don’t worry about getting all the details of each of them yet – the goal is merely to get a handle on how big a problem you might be facing.
  • Nature: What kind of incident are you dealing with? If it’s on the network, look at packets or reports from your tools. For an endpoint, start digging into the logs or whatever triggered the alert. If it involves data loss, what data might be involved? Be careful not to assume it’s only what you detected going out, or what you think was inappropriately accessed.
  • People: If this is some sort of external attack, you probably aren’t going to spend much time figuring out the home address of the people involved at this stage. But for internal incidents it’s important to put names to IP addresses, for both suspected perpetrators and victims. You also want to figure out which business units are involved. All of this affects investigation and response.

Yes, I could have just said, “who, what, when, where, and how”. We aren’t performing more than the most cursory examination at this point, so you’ll need to limit your analysis to basics such as security tool alerts, and system and application logs. The point here is to get a quick analysis of the situation, and that means relying on tools and data you already have.

The OODA Loop

Many incident responders are familiar with the OODA Loop originally developed by Colonel Boyd of the US Air Force. The concept is that in an incident, or any decision-making process, we follow a recurring cycle of Observe, Orient, Decide, and Act. These cycles, some of which are nearly instantaneous and practically subconscious, describe the process of continually collecting, evaluating, and acting on information. The OODA Loop maps well to our Incident Response Fundamentals process. While we technically follow multiple OODA cycles in each phase of incident response, at a macro level trigger and escalate form a full OODA loop (gathering basic information and deciding to escalate), while size up maps to a somewhat larger loop that increases the scope of our observations, and closes with the action of requesting additional resources or moving on to the next response phase.

Once you have collected and reviewed the basics, you should have a reasonable idea of what you’re dealing with. At
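The trigger and escalate questions map naturally onto a simple incident record plus a handful of escalation guidelines. A minimal sketch (hypothetical fields and thresholds, not prescriptive):

    from dataclasses import dataclass

    @dataclass
    class IncidentAlert:
        trigger: str              # what fired the alert, or who reported it
        nature: str               # reported nature of the incident
        systems_involved: int     # rough reported scope
        critical_assets: bool     # designated critical data or systems involved?
        data_leaving: bool        # sensitive data detected leaving the organization?
        ongoing: bool

    def should_escalate(alert, malware_threshold=5):
        """Escalate to a full response when any guideline is met."""
        return (alert.critical_assets
                or alert.data_leaving
                or (alert.nature == "malware" and alert.systems_involved >= malware_threshold))

    alert = IncidentAlert(trigger="IDS alert on egress traffic", nature="malware",
                          systems_involved=7, critical_assets=False,
                          data_leaving=False, ongoing=True)
    print(should_escalate(alert))  # True -- infection count is over the threshold

Where you set the thresholds is, as noted above, entirely a function of your organization.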


Please Read: Major Change to the Securosis Feeds

For those of you who don’t want to read the full post, we’re changing our feeds. Click here to subscribe to the new feed with all the content you are used to. Our existing blog feed will include ‘highlights’ only as of next week.

Back when I started this blog, it was nothing more than my own personal site to rant and rave about the security industry, cleaning litter boxes, and hippies (they suck). Since then we have added a bunch of people and a ton of content. But more isn’t always better, despite what those Enzyte commercials say. At least not for everyone. A couple months ago I was looking at our feed and realized we might be overloading everyone with all the content. Especially when we are running multiple deep research projects as series of long posts. We asked a few people (you know – Twitter), and the general conclusion was that some people preferred only seeing our lighter posts, while others enjoyed the insight into all our major research. We try to please everyone, so we decided to make some changes to the site, what we write, and our feeds:

  • We realized we weren’t posting as much on the latest news as we used to, because that was landing in the Incite or the Friday Summary. We are going back to the way we used to do things, and will return to daily news/events analysis. We’ll be putting most of the vendor/market oriented snark into the Incite, with what we call “drive-by posts” focusing more on what security practitioners might be interested in.
  • We have split the site into two views – the Highlights view will contain the Firestarter, Incite, Friday Summary, and general posts and analysis. The Complete view adds Project Quant and all our heavy/deep research. Most of our big multipart series are moving into the Complete feed (for example, the current React Faster and Better series on monitoring and incident response). To be honest, we hope most of you stick with the complete content, because we really appreciate public review of all our research. We will still highlight important parts of these projects in the Highlights view, just not every post.
  • We made the same split in our RSS feeds. The current feed will become the Highlights feed next week. If you want to switch to the highlights, you don’t need to change anything. For everything, subscribe to the Complete feed (available immediately).

That’s it. We’re making a big effort to ramp our daily analysis back up while still producing the deep research we’ve been more focused on lately. With the view and feed splits, we hope to meet your needs better. As always, please send us any feedback. And since I made the code changes myself, odds are high that it’s all broken now anyway.

The Research Library feed still exists, for all our substantive completed content, organized by topic so you don’t have to search for it.


Download the Securosis 2010 Data Security Survey Report (and Raw Data!)

Guess what? Back in September we promised to release both the full Data Security Survey results and the raw data, and today is the day. This report is chock full of data security goodness. As mentioned in our original post, here are some highlights:

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on ‘traditional’ security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity when incidents occur, and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many required to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the last 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting an increase.
  • Over the next 12 months, organizations are most likely to deploy USB/portable media encryption and device control or Data Loss Prevention.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Unlike… well, pretty much anyone else, we prefer to release an anonymized version of our raw data to keep ourselves honest. The only things missing from the data are anything that could identify a respondent. This research was performed completely independently, and special thanks to Imperva for licensing the report.

Visit the permanent landing page for the report and data, or use the direct links:

  • Report: The Securosis 2010 Data Security Survey report (PDF)
  • Anonymized Survey Data: Zipped CSV
  • Anonymized Survey Data: Zipped .xlsx


Incident Response Fundamentals: Response Infrastructure and Preparatory Steps

In our last post we covered organizational structure options for incident response. Aside from the right org structure and incident response process, it’s important to have a few infrastructure pieces (tools) in place, and to take some preparatory steps ahead of time. As with all our recommendations in this series, remember that one size doesn’t fit all, and those of you in smaller companies will probably skip some of the tools or not need some of the prep steps.

Incident Response Support Tools

The following tools are extremely helpful (sometimes essential) for managing incidents. This isn’t a comprehensive list, but an overview of the major categories we see most often used by successful organizations:

  • Multiple secure communications channels: It’s bad to run all your incident response communications over a pwned email server, or to lose the ability to communicate if a cell tower is out. Your incident response team should have multiple communications options – landlines, mobile phones (on multiple carriers if possible), encrypted online tools (via secure systems), and possibly even portable mobile radios. For smaller organizations this might be as simple as GPG or S/MIME for encrypted email (and a list of backup email accounts on multiple providers), or a collaboration Web site and some backup cell phones.
  • Incident management system: Many organizations use their trouble ticket systems to manage incidents, or handle them manually. There are also purpose-built tools with improved security and communications options. As long as you have some central and secure place to keep track of an incident, and a backup option or two, you should be covered.
  • Analysis and forensics tools: As we will discuss later in the series, one of the most critical elements of incident response is the investigation. You need a mix of forensics tools to figure out what’s going on – including tools for analyzing network packet captures, endpoints and mobile devices, and various logs (everything from databases to network devices). This is a very broad category that depends on your skill set, the kinds of incidents you are involved with, and budget.
  • Secure test environment/lab: This is clearly more often seen in larger organizations and those with larger budgets and higher incident rates, but even in a smaller organization it is extremely helpful to have some test systems and a network or two – especially for analysis of compromised endpoints and servers.
  • Network taps and capture/analysis laptops: At some point during an investigation, you’ll likely need to physically go to a location and plug in an inline or passive network tap to analyze part of a network – sometimes even the communications from a single system. This kind of monitoring may not be possible on your existing network – not all routers let you plug in and capture local traffic (heck, you might simply be out of ports), so we recommend you have a small tap and spare laptop (with a large hard drive, possibly external) available. These are very cost-effective and useful even for smaller organizations.
  • Data collection and monitoring infrastructure: As previously discussed.

Preparatory Steps

Hopefully the idea that tools are only a small part of every security process is starting to set in. Once again, tools are a means to an end. The following steps help set up your infrastructure to support the response process and make the best use of your investment in tools.
They cost little aside from time, but will determine the success or failure of your response efforts:

  • Define a communications plan: As we mentioned above, it’s important to have multiple communications methods. It’s even more important to have a calling list with all the various numbers, emails, and other addresses you need. Don’t forget to include key contacts outside your team – such as management, key business units, and outside resources like local law enforcement contacts (even for federal agencies) or an outside incident response firm in case something exceeds your own capabilities.
  • Establish a point of contact, promote it, and staff it: It is truly surprising how many organizations fail to provide contact options for users or other IT staff for when something goes wrong. Set up a few options, including phone/email, make sure someone is always there to respond, and promote them. Many organizations route everything through the help desk, in which case you need to educate the help desk on how to identify a potential incident, when to escalate, and how to contact you if something looks big enough that adding it to your ticket queue might be a tad too passive.
  • Liaise with key business units: Lay the groundwork for working with different business units before an incident occurs. Let them know who you are, that you are there to help, and what their responsibilities will be if they get involved in an incident (either because it’s affecting their unit, or because they are an outside resource).
  • Liaise with outside resources: If you are on a large dedicated incident response team this might mean meeting with local federal law enforcement representatives, auditors, and legal advisors. For a smaller organization it might mean researching and interviewing some response and investigation firms in case something exceeds your internal response capability, and getting to know the folks so they’ll call you back when you need them. You don’t want to be calling someone cold in the middle of an incident.
  • Document your incident response plan: Even if your plan is a single page with a bullet list of steps, have it in writing ahead of time. Make sure all the folks with responsibility to act in an incident understand exactly what they need to do. In any incident, checklists are your friends.
  • Train and practice: Ideally run some full scenarios on top of training on tools and techniques. Even if you are a single part-time responder, create some practice scenarios for yourself. And practice. And practice. And practice again. It’s a bad time to find a hole in your process while you are responding to a real incident.

Again, we could write an entire paper on building your incident response infrastructure, but these key elements will get you on the


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.