Cloud Security Q&A from the Field: Questions and Answers from the DC CCSK Class

One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues in active projects or in initiatives at various stages of planning. We don't want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I'll skip general impressions and trends today to focus on some specific questions people in last week's class in Washington, DC, were grappling with:

We currently use the XXX Database Activity Monitoring appliance – is there any way to keep using it in Amazon EC2?

This is a tough one because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider's virtual machine hypervisor. In other words: yes, you can do it, but I can't promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I'd be careful with this approach and don't generally recommend it. Finally, there are more options for internal/private cloud, where you can even route the traffic back to a dedicated appliance if necessary – but watch performance if you do.

How can we monitor users connecting to cloud services over SSL?

This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, the gateway essentially performs a man-in-the-middle attack against your users. To make this work, you install the gateway appliance's certificate as a trusted root on all your endpoints. This doesn't work for remote users who aren't going through your gateway. This is a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.

Can I use identity management to keep users out of my cloud services if they aren't on the corporate network?

Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be issued when the user logs into the local network, so they can't ever log in from another location. You can screw up this configuration by allowing persistent assertions (I'm sure Gunnar will correct my probably-wrong IAM vernacular). This approach will also work for VPN access (don't forget to disable split tunnels if you want to monitor activity).
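To make the assertion-lifetime point a bit more concrete, here is a minimal sketch of the kind of check a service (or a test harness) could run against an incoming SAML assertion. This isn't from the class and isn't any particular product's implementation – the 5-minute window, the helper names, and the use of Python's standard XML parser are my own illustrative assumptions, and a real deployment would also verify the assertion's signature with a proper SAML library.

```python
# Illustrative sketch only: the 5-minute limit, helper names, and the omission of
# signature verification are assumptions for this example, not any product's behavior.
from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
MAX_WINDOW = timedelta(minutes=5)  # reject long-lived ("persistent") assertions


def _parse_time(value: str) -> datetime:
    # SAML timestamps are UTC, e.g. 2011-08-01T12:00:00Z
    return datetime.fromisoformat(value.replace("Z", "+00:00"))


def assertion_window_ok(assertion_xml: str) -> bool:
    """Accept the assertion only if its validity window is short and current."""
    root = ET.fromstring(assertion_xml)
    conditions = root.find(".//saml:Conditions", SAML_NS)
    if conditions is None:
        return False
    not_before = conditions.get("NotBefore")
    not_on_or_after = conditions.get("NotOnOrAfter")
    if not (not_before and not_on_or_after):
        return False
    start, end = _parse_time(not_before), _parse_time(not_on_or_after)
    now = datetime.now(timezone.utc)
    return start <= now < end and (end - start) <= MAX_WINDOW
```

The point is simply that the identity provider's assertion lifetime – not just the authentication event itself – is what keeps someone from replaying a login from outside the network.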
What's the CSA STAR project?

STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program in which cloud providers perform and submit self-assessments of their security practices.

How can we encrypt big data sets without changing our applications?

This isn't a cloud-specific problem, but it does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are starting to appear.
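As a rough illustration of what volume encryption for IaaS can look like in practice – this is my own sketch, assuming an AWS account, the boto3 library, a default KMS key, and placeholder IDs, not something covered in the class – you can provision the block storage itself as encrypted, so neither the database nor the application has to change:

```python
# Hypothetical example: create and attach an encrypted EBS data volume.
# The region, size, volume type, device name, and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # must match the instance's availability zone
    Size=100,                        # GiB
    VolumeType="gp3",
    Encrypted=True,                  # uses the account's default KMS key unless KmsKeyId is set
)

# Wait until the volume is ready, then attach it; the guest OS just sees a block device.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```

The same idea applies to other IaaS providers and to host-based volume encryption: the cryptography sits below the application and database layers, which is why it tends to be the lowest-friction option.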

That's it from this class… we had a ton of other questions, but these stood out. As we teach more we'll keep posting, and I should get input from other instructors as they start teaching their own classes.

Security Management 2.0: Platform Evolution

Our motivation for launching the Security Management 2.0 research project lies in the general dissatisfaction with SIEM implementations, which in some cases have not delivered the expected value. The issues typically result from failure to scale, poor ease of use, excessive effort for care and feeding, or simply customer execution failure. Granted, some of the discontent is clearly navel-gazing – parsing and analyzing log files as part of your daily job is boring, mundane, and error-prone work you'd rather not do. But dissatisfaction with SIEM is largely legitimate, and it has gotten worse as system load has grown and systems have been subjected to additional security requirements driven by new and creative attack vectors. This spotlights the fragility and poor architectural choices of some SIEM and Log Management platforms, especially early movers. Given that companies need to collect more – not less – data, review and management just get harder. Exponentially harder.

This post is not meant to catalog user complaints – that doesn't help solve problems. Instead let's focus on the changes in SIEM platforms driving users to revisit their platform decisions. There are over 20 SIEM and Log Management vendors in the market, most of which have been at it for 5-10 years. Each vendor has evolved its products (and services) to meet customer requirements and to provide some degree of differentiation against the competition. We have seen new system architectures to maximize performance, increase scalability, leverage hybrid deployments, and broaden collection via CEF and other universal collection formats. Usability enhancements include capabilities for data manipulation, addition of contextual data via log/event enrichment, and more powerful tools for management, reporting, and visualization. Data analysis enhancements include expanded support for dozens of data types – change control, configuration, application, and threat data – for monitoring, correlation/alerting, and reporting, as well as content analysis (a poor man's DLP) and user activity monitoring.

With literally hundreds of new features to comb through, it's important to recognize that not all innovation is valuable to you, and you should keep irrelevant features out of your analysis of the benefits of moving to a new platform. Just because the shiny new object has lots of bells and whistles doesn't mean they are relevant to your decision. Our research shows the most impactful enhancements have been in scalability, along with reduced storage and management costs. Specific examples include mesh deployment models – where each device provides full logging and SIEM functionality – which move real-time processing closer to the event sources. As we described in Understanding and Selecting SIEM/Log Management, the right architecture can deliver the trifecta of fast analysis; comprehensive collection, normalization, and correlation of events; and single-point administration – but this requires a significant overhaul of early SIEM architectures. Every vendor meets the basic collection and management requirements, but only a few platforms do well at modern scale and scope. These architectural changes to enhance scalability and extend data types are seriously disruptive for vendors – they typically require a proverbial "brain transplant": an extensive rebuild of the underlying data model and architecture.
But the cost in time, manpower, and disrupted reliability was too high for some early market leaders – as a result, some opted instead to innovate with sexy new bells and whistles that were easier and faster to develop and show off, but left them behind the rest of the market on real functionality. This is why we all too often see a web console, some additional data sources (such as identity and database activity data), and a plethora of quasi-useful feature enhancements tacked onto a limited-scalability centralized server: that option costs the vendor less and carries less risk. It sounds trite, but it is easy to be distracted from the most important SIEM advancements – those that deliver on the core values of analysis and management at scale.

Speaking of scalability: coinciding with the increased acceptance (and adoption) of managed security services, we are seeing many organizations look at outsourcing their SIEM. Given the increased scaling requirements of today's security management platforms, making compute and storage the service provider's problem is very attractive to some organizations. Combined with the commoditization of simple network security event analysis, this has made outsourcing SIEM all the more attractive. Moving to a managed SIEM service also allows customers to save face by addressing the shortcomings of their current product without needing to acknowledge a failed investment. In this model, the customer defines the reports and security controls, and the service provider deploys and manages the SIEM functions. Of course, there are limitations to some managed SIEM offerings, so it all gets back to what problem you are trying to solve with your SIEM and/or Log Management deployment.

To make things even more complicated, we also see hybrid architectures in early use, where a service provider handles the fairly straightforward network (and server) event log analysis, correlation, and reporting, while an in-house security management platform handles higher-level analysis (identity, database, application logs, etc.) and deeper forensics. We'll discuss these architectures in more depth later in this series.

But this Security Management 2.0 process must start with the problem(s) you need to solve. Next we'll talk about how to revisit your security management requirements, ensuring you take a fresh look at the issues to make the right decision for your organization moving forward.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.