2011 Research Agenda: the Practical Bits

I always find it a bit of a challenge to fully plan out my research agenda for the coming year. Partly it’s due to being easily distracted, and partly my recognition that there are a lot of moving cogs I know will draw me in different directions over the coming year. This is best illustrated by the detritus of some blog series that never quite made it over the finish line. But you can’t research without a plan, and the following themes encompass the areas I’m focusing on now and plan to continue through the year. I know I won’t be able to cover everything in the depth I’d like, so I could use feedback on what you folks find interesting. This list is as much about the areas I find compelling from a pure research standpoint as what I might write about. This post is about the more pragmatic focus areas, and the next post will delve into more forward-looking research.

Information-Centric (Data) Security for the Cloud

I’m spending a lot more time on cloud computing security than I ever imagined. I’ve always been focused on information-centric (data) security, and the combination of cloud computing adoption, APT-style threats, the consumerization of IT, and compliance is finally driving real interest in and adoption of data security. Data security consistently rates as a top concern – security or otherwise – when adopting cloud computing. This is driven in large part by the natural fear of giving up physical control of information assets, even if the data ends up being more secure than it was internally. As you’ll see at the end of this post, I plan on splitting my coverage into two pieces: what you can do today, and what to watch for in the future. For this agenda item I’ll focus on practical architectures and techniques for securing data in various cloud models using existing tools and technologies. I’m considering writing two papers in the first half of the year, and it looks like I will be co-branding them with the Cloud Security Alliance:

  • Assessing Data Risk for the Cloud: A cloud- and data-specific risk management framework and worksheet.
  • Data Security for Cloud Computing: A dive into specific architectures and technologies.

I will also continue my work with the CSA, and am thinking about writing something up on cloud computing security for SMBs, because we see pretty high adoption there.

Pragmatic Data Security

I’ve been writing about data security, and specifically pragmatic data security, since I started Securosis. This year I plan to compile everything I’ve learned into a paper and framework, plus issue a bunch of additional research delving into the nuts and bolts of what you need to do. For example, it’s time to finally write up my DLP implementation and management recommendations, to go with Understanding and Selecting. The odds are high I will also write up File Activity Monitoring, because I believe it’s at an early stage and could bring some impressive benefits – especially for larger organizations. (FAM is coming out both stand-alone and combined with DLP.) It’s also time to cover Enterprise DRM, although I may handle that more through articles (I have one coming up with Information Security Magazine) and posts. I also plan to run year two of the Data Security Survey so we can start comparing year-over-year results. Finally, I’d like to complete a couple more Quick Wins papers, again sticking with the simple and practical side of what you can do with all the shiny toys that never quite work out like you hoped.
Small Is Sexy

Despite all the time we spend talking about enterprise security needs, the reality is that the vast majority of people responsible for implementing infosec in the world work in small and mid-sized organizations. Odds are it’s a part-time responsibility – or at most 1 to 2 people who spend a ton of time dealing with compliance. More often than not this is what I see even in organizations of 4,000-5,000 employees. A security person (who may not even be a full-time security professional) operating in these environments needs far different information than large enterprise folks. As an analyst it’s very difficult to provide definitive answers in written form to the big company folks, when I know I can never account for their operational complexities in a generic, mass-market report. Aside from the Super Sekret Squirrel project for S


NSA Assumes Security Is Compromised

I saw an interesting news item: the NSA has changed their mindset and approach to data security. Their new(?) posture is that Security Has Always Been Compromised. Debora Plunkett of the NSA’s “Information Assurance Directorate” stated:

“There’s no such thing as ‘secure’ any more. The most sophisticated adversaries are going to go unnoticed on our networks. We have to build our systems on the assumption that adversaries will get in. We have to, again, assume that all the components of our system are not safe, and make sure we’re adjusting accordingly.”

I started thinking about how I would handle this problem and it became mind-boggling. I assume compartmentalization and recovery is the strategy, but the details are of course the issue. Just the thought of going through the planning and reorganization of a data processing facility the size of what the NSA (must) have in place sent chills down my spine. What a horrifically involved process that must be! Just the network and security technology deployment would be huge; the disaster recovery planning and compartmentalization – especially what to do in the face of incomplete forensic evidence – would be even more complex.

How would you handle it? Better forensics? How would you scope the damage? How do you handle source code control systems if they are compromised? Are you confident you could identify altered code? How much does network segmentation buy you if you are not sure of the extent of a breach?

To my mind this is what Mike has been covering with his ‘Vaults’ concept of segmentation, part of the Incident Response Fundamentals. But the sheer scope and complexity casts those recommendations in a whole new light. I applaud the NSA for the effort: it’s the right approach. The implementation, given the scale and importance of the organization, must be downright scary.
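
As a small illustration of the ‘could you identify altered code’ question: because git stores content-addressed objects, comparing the current commit hash (plus a full object check) against a value recorded out of band is one cheap tamper check. This is a minimal sketch, assuming git is installed and the baseline hash lives somewhere the attacker can’t reach – a starting point, not a forensic answer. The repository path and baseline value are hypothetical.

```python
import subprocess

def head_hash(repo: str) -> str:
    """Return the current HEAD commit hash of a local git repository."""
    out = subprocess.run(
        ["git", "-C", repo, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def check_integrity(repo: str, trusted_hash: str) -> None:
    # 'git fsck --full' verifies object checksums and connectivity;
    # because git is content-addressed, altered history changes the hashes.
    subprocess.run(["git", "-C", repo, "fsck", "--full"], check=True)
    current = head_hash(repo)
    if current != trusted_hash:
        print(f"ALERT: HEAD is {current}, expected {trusted_hash}")
    else:
        print("HEAD matches the out-of-band baseline")

if __name__ == "__main__":
    # Hypothetical values -- the trusted hash must come from offline storage
    # recorded before any suspected compromise.
    check_integrity("/srv/repos/project", "0123456789abcdef0123456789abcdef01234567")
```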


React Faster and Better: New Data for New Attacks, Part 1

As we discussed in our last post on Critical Incident Response Gaps, we tend to gather too much of the wrong kinds of information, too early in the process. To clarify that a little bit, we are still fans of collecting as much data as you can, because once you miss the opportunity to collect something you’ll never get another chance. Our point is that there is a tendency to try to boil the ocean with analysis of all sorts of data. That causes failure, and has plagued technologies like SIEM, because customers try to do too much too soon. Remember, the objective from an operational standpoint is to react faster, which means discovering as quickly as possible that you have an issue, and then engaging your incident response process. But merely responding quickly isn’t useful if your response is inefficient or ineffective, which is why the next objective is to react better.

Collecting the Right Data at the Right Time

Balancing all the data collection sources available today is like walking a high wire, in a stiff breeze, after knocking a few back at the local bar. We definitely don’t lack for potential information sources, but many organizations find themselves either overloaded with data or missing key information when it’s time for investigation. The trick is to realize that you need three kinds of data:

  • Data to support continuous monitoring and incident alerts/triggers. This is the stuff you look at on a daily basis to figure out when to trigger an incident.
  • Data to support your initial response process. Once an incident triggers, these are the first data sources you consult to figure out what’s going on. This is a subset of all your data sources. Keep in mind that not all incidents will tie directly to one of these sources, so sometimes you’ll still need to dive into the ocean of lower-priority data.
  • Data to support post-incident investigation and root cause analysis. This is a much larger volume of data, some of it archived, used for the full in-depth investigation.

One of the Network Security Fundamentals I wrote about early in the year was called Monitor Everything, because I fundamentally believe in data collection and driving issue identification from the data. Adrian pushed back pretty hard, pointing out that monitoring everything may not be practical, and focus should be on monitoring the right stuff. Yes, there is a point in the middle. How about collect (almost) everything and analyze the right stuff? That seems to make the most sense.

Collection is fairly simple. You can generate a tremendous amount of data, but with the log management tools available today scale is generally not an issue. Analysis of that data, on the other hand, is still very problematic; when we mention too much of the wrong kinds of information, that’s what we are talking about. To address this issue, we advocate segmenting your network into vaults and analyzing traffic and events within the critical vaults at a deep level.

So basically it’s about collecting all you can within the limits of reason and practicality, then analyzing the right information sources for early indications of problems, so you can then engage the incident response process. You start with a set of sources to support your continuous monitoring and analysis, followed by a set of prioritized data to support initial incident management, and close with a massive archive of different data sources, again based on priorities.
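
To make the three tiers a bit more concrete, here’s a rough Python sketch of tagging collected events by analysis tier. The event fields, sources, and tier rules are made up for illustration – in practice this logic lives in your log management or SIEM pipeline, not a standalone script – but it shows the key point: everything gets collected, and the tier only decides how much analysis an event gets.

```python
import json
import time

# Hypothetical tier rules -- illustrative only. Tier 1 feeds continuous
# monitoring and alerting, tier 2 is the first-look data for a triggered
# incident, and tier 3 is the bulk archive for post-incident forensics.
ALERT_SOURCES = {"perimeter_fw", "domain_controller", "dam"}      # tier 1
TRIAGE_SOURCES = {"netflow", "proxy", "vpn", "file_integrity"}    # tier 2

def route_event(event: dict) -> str:
    """Tag an event with the analysis tier it belongs to."""
    source = event.get("source", "unknown")
    if source in ALERT_SOURCES:
        return "tier1_monitor"   # analyzed continuously; can trigger incidents
    if source in TRIAGE_SOURCES:
        return "tier2_triage"    # consulted first when an incident triggers
    return "tier3_archive"       # collected anyway -- you can't go back in time

def ingest(raw_line: str, archive: list) -> None:
    event = json.loads(raw_line)
    event["tier"] = route_event(event)
    event["collected_at"] = time.time()
    # Everything lands in the archive; the tier governs analysis, not retention.
    archive.append(event)

if __name__ == "__main__":
    store = []
    ingest('{"source": "perimeter_fw", "msg": "denied outbound 10.1.2.3:6667"}', store)
    ingest('{"source": "printer", "msg": "toner low"}', store)
    for e in store:
        print(e["tier"], "-", e["msg"])
```
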
Continuous Monitoring

We have done a lot of research into SIEM and Log Management, as well as advanced monitoring (Monitoring up the Stack). That’s the kind of information to use in your ongoing operational analysis. For those vaults (trust zones) you deem critical, you want to monitor and analyze:

  • Perimeter networks and devices: Yes, the bad guys tend to be out there, so they need to cross the perimeter to get to the good stuff. So we want to look for issues on those devices.
  • Identity: Who is as important as what, so analyze access to specific resources – especially within a privileged user context.
  • Servers: We are big fans of anomaly detection and whitelisting on critical servers such as domain controllers and app servers, so you can be alerted to funky stuff happening at the server level – which usually indicates something that warrants investigation.
  • Database: Likewise, correlating database anomalies against other types of traffic (such as reconnaissance and network exfiltration) can indicate a breach in progress. Better to know that early, before your credit card brand notifies you.
  • File Integrity: Most attacks involve some change to key system files, so by monitoring their integrity you can pinpoint when an attacker is trying to make changes (a minimal sketch appears at the end of this post). You can even block these attacks using technology like HIPS, but that’s another story for another day.
  • Application: Finally, you should be able to profile normal transactions and user interactions for your key applications (those accessing protected data) and watch for non-standard activities. Again, these don’t always indicate a problem, but they do allow you to prioritize investigation.

We recommend focusing on your most important zones, but keep in mind that you need some baseline monitoring of everything. The two most common sources we see for baselines are network monitoring and endpoint & server logs (or whatever security tools you have on those systems).

Full Packet Capture Sandwich

One emerging advanced monitoring capability – the most interesting to us – is full packet capture. Rich wrote about this earlier this year. Basically these devices capture all the traffic on a given network segment. Why? In a nutshell, it’s the only way you can really piece together exactly what happened, because this way you have the actual traffic. In a forensic investigation this is absolutely crucial, and will provide detail you cannot get from log records. Going back to our Data Breach Triangle, you need some kind of exfiltration for a real breach. So we advocate heavy perimeter egress filtering and monitoring, to (hopefully) prevent valuable data from escaping.
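
To illustrate that egress monitoring point, here’s a toy Python sketch that flags outbound flows headed to destinations outside an approved set. The flow records and allowlist are hypothetical – real enforcement belongs on your perimeter devices – but it shows the basic idea of watching what leaves the network, not just what enters.

```python
import ipaddress

# Hypothetical egress policy: internal hosts may only talk outbound to these
# networks and ports. Real policies come from your firewall/proxy config.
ALLOWED_NETS = [ipaddress.ip_network("203.0.113.0/24")]   # e.g., a partner range
ALLOWED_PORTS = {80, 443, 53}

def is_suspicious(flow: dict) -> bool:
    """Flag outbound flows that escape the approved egress policy."""
    dst = ipaddress.ip_address(flow["dst_ip"])
    in_allowed_net = any(dst in net for net in ALLOWED_NETS)
    return not (in_allowed_net or flow["dst_port"] in ALLOWED_PORTS)

# Toy flow records, e.g., exported from a NetFlow collector.
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "dst_port": 22, "bytes": 400},
    {"src_ip": "10.0.0.7", "dst_ip": "198.51.100.2", "dst_port": 6667, "bytes": 9_000_000},
]

for f in flows:
    if is_suspicious(f):
        print(f"EGRESS ALERT: {f['src_ip']} -> {f['dst_ip']}:{f['dst_port']} ({f['bytes']} bytes)")
```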

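And circling back to the file integrity bullet in the monitoring list above, here is the minimal baseline-and-compare sketch we mentioned. The watched paths are placeholders; commercial FIM products add kernel-level hooks, change attribution, and tamper-resistant baseline storage.

```python
import hashlib
import json
import sys
from pathlib import Path

# Placeholder list of key system files to watch -- adjust per platform.
WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]
BASELINE = Path("fim_baseline.json")

def digest(path: str) -> str:
    """SHA-256 a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline() -> None:
    # Record known-good hashes for every watched file that exists.
    BASELINE.write_text(json.dumps({p: digest(p) for p in WATCHED if Path(p).exists()}))

def verify() -> None:
    known = json.loads(BASELINE.read_text())
    for path, old in known.items():
        try:
            new = digest(path)
        except FileNotFoundError:
            print(f"ALERT: {path} deleted")
            continue
        if new != old:
            print(f"ALERT: {path} changed ({old[:12]} -> {new[:12]})")

if __name__ == "__main__":
    baseline() if "--baseline" in sys.argv else verify()
```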

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.