IaaS Encryption: External Key Manager Deployment and Feature Options

Deployment and topology options

The first thing to consider is how you want to deploy external key management. There are four options:

  • An HSM or other hardware key management appliance. This provides the highest level of physical security, but the appliance needs to be deployed outside the cloud. When using a public cloud this means running the key manager internally, relying on a virtual private cloud, and connecting the two with a VPN. In private clouds you run it somewhere on the network near your cloud, which is much easier.
  • A key management virtual appliance. Your vendor provides a pre-configured virtual appliance (instance) for you to run in your private cloud. We do not recommend running this in a public cloud because – even if the instance is encrypted – there is significantly more exposure to live memory exploitation and loss of keys. If you decide to go this route anyway, use a vendor that takes exceptional memory protection precautions. A virtual appliance doesn’t offer the same physical security as a physical server, but it comes hardened and supports more flexible deployment options – you can run it within your cloud.
  • Key management software, which can run either on a dedicated server or within the cloud on an instance. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a pre-configured and hardened image. Otherwise it carries the same risks and benefits as a virtual appliance, assuming you harden the server (instance) equally well.
  • Key management Software as a Service (SaaS). Multiple vendors now offer key management as a service, specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds. There are a few different deployment topologies, which we will discuss in a moment.

When deploying a key manager in a cloud there are a few wrinkles to consider. The first is that if you have hardware security requirements, your only option is to deploy an HSM or encryption/key management appliance compatible with the demands of cloud computing – where you may have many more dynamic network connections than in a traditional network (note that raw key operations per second is rarely the limiting factor). This can be on-premises with your private cloud, or remote with a VPN connection to the virtual private cloud. It could also be provided by your cloud provider in their data center, offered as a service, with native cloud API support for management. Another option is to store the root key on your own hardware, but deploy a bastion provisioning and management server as a cloud instance. This server handles communications with encryption clients/agents and orchestrates key exchanges, but the root key database is maintained outside the cloud on secure hardware. If you don’t have hardware security requirements, a number of additional options open up. Hardware is often required for compliance reasons, but isn’t always necessary. Virtual appliances and software servers are fairly self-explanatory. The key issue (no pun intended) is that you are likely to need additional synchronization and orchestration to handle multiple virtual appliances in different zones and clouds. We will talk about this more in a moment, when we get to features.
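To make the bastion pattern concrete, here is a minimal sketch of an in-cloud provisioning server relaying key requests to an on-premises key store over a private channel such as a VPN. This is purely an illustration, not any vendor's implementation; the endpoint URL, certificate file names, and request format are all hypothetical.

    # Sketch: cloud agents call this in-cloud bastion, which relays key
    # requests over mutually authenticated TLS to an on-premises key store.
    # The root key database never leaves your hardware.
    import json
    import ssl
    import urllib.request

    # Hypothetical endpoint, reachable only across the VPN to your data center.
    ONPREM_KEY_STORE = "https://keys.internal.example.com/v1/volume-keys"

    def provision_volume_key(instance_id: str, volume_id: str) -> bytes:
        """Fetch a volume key for an agent; key material is generated and
        stored on the hardware behind ONPREM_KEY_STORE."""
        req = urllib.request.Request(
            ONPREM_KEY_STORE,
            data=json.dumps({"instance": instance_id, "volume": volume_id}).encode(),
            headers={"Content-Type": "application/json"},
        )
        ctx = ssl.create_default_context()
        # The bastion authenticates itself with a client certificate.
        ctx.load_cert_chain("bastion-cert.pem", "bastion-key.pem")
        with urllib.request.urlopen(req, context=ctx) as resp:
            return bytes.fromhex(json.load(resp)["key_hex"])

The point of the split is containment: compromising the bastion exposes individual keys in flight, not the root key database behind it.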
Like deploying a hardware appliance, some key management service providers also deploy a local instance to assist with key provisioning (this is provider dependent and not always needed). In other cases the agents communicate directly with the key management service over the Internet. A final option is for the security provider to partner with the cloud provider and install some components within the cloud to improve performance, enhance resilience, and/or reduce Internet traffic – which cloud providers charge for.

To choose an appropriate topology, answer the following questions:

  • Do you need hardware-level key security?
  • How many instances and key operations will you need to support?
  • What is the topology of your cloud deployment? Public or private? Zones?
  • What degree of separation of duties and keys do you need?
  • Are you willing to work with a key management service provider?

Cloud features

For a full overview of key management servers, see our paper Understanding and Selecting a Key Management Solution. Rather than copying and pasting an 18-page paper, we will focus on a few cloud-specific requirements we haven’t otherwise covered yet. If you use any kind of key management service, pay particular attention to how keys are segregated and isolated between cloud consumers and from service administrators. Different providers have different architectures and technologies to manage this, and you should map your security requirements against how they manage keys. In some cases you might be okay with a provider having the technical ability to get your keys, but this is often completely unacceptable. Ask for technical details of how they manage key isolation and the root of trust. Even if you deploy your own encryption system you will need granular isolation and segregation of keys to support cloud automation. For example, if a business unit or development team spins instances up and down dynamically, you will likely want to let them manage some of their own keys without exposing the rest of the organization’s. Cloud infrastructure is more dynamic than traditional infrastructure, and relies more on Application Programming Interfaces (APIs) and network connectivity – you are likely to have more network connections from a greater number of instances (virtual machines). Any cloud encryption tool should support APIs and a high number of concurrent network connections for key provisioning. For volume encryption, look for native clients/agents designed to work with your specific cloud platform. These are often able to provide information above and beyond standard encryption agents, to ensure only acceptable instances access keys. For example they might provide instance identifiers, location information, and other indicators which do not exist on a non-cloud encryption agent. When they are available you might use them to only allow an instance to access keys when those indicators match your policy (a sketch of this kind of check follows below).
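As an illustration of how those cloud-specific attributes can drive key release decisions, here is a minimal policy-check sketch. The attribute names, zones, and rules are invented for the example and do not reflect any particular product's schema.

    # Sketch: gate key release on cloud-native instance attributes
    # (identity, owning account, reported zone). All names hypothetical.
    from dataclasses import dataclass

    @dataclass
    class KeyRequest:
        instance_id: str   # cloud-assigned instance identifier
        account: str       # cloud consumer the instance belongs to
        zone: str          # location reported by the native agent

    ALLOWED_ZONES = {"us-east-1a", "us-east-1b"}

    def may_release_key(req: KeyRequest, key_owner_account: str) -> bool:
        """Release keys only to instances in the owning account and in an
        approved zone; refuse everything else."""
        if req.account != key_owner_account:
            return False   # key isolation between cloud consumers
        if req.zone not in ALLOWED_ZONES:
            return False   # location-based restriction
        return True

    # Example: an instance from another tenant's account is refused.
    print(may_release_key(KeyRequest("i-12ab", "dev-team", "us-east-1a"), "dev-team"))   # True
    print(may_release_key(KeyRequest("i-99xy", "other-org", "us-east-1a"), "dev-team"))  # False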


Security Analytics with Big Data: Use Cases

Why do we use big data for security analytics? Aside from big data hype in the press, what motivates customers to look for new solutions? On the other side of the coin, why are vendors altering their products to use – or at least integrate with – big data? In our discussions with customers they cite performance and scalability, particularly for security event analysis. In fact, this research project was originally outlined as a broad examination of the potential of big data for security analytics. But the customers we speak with don’t care about generalities – they need to solve existing problems, specifically around installed SIEM and log management systems. So we refocused this research on their need to scale beyond what they have today and get more from existing investments; big data is a means to that end. Today’s post focuses on the customer use cases, and delves into why SIEM, log management, and other event-centric monitoring systems struggle under evolving requirements.

Data velocity and clustered data management are new terms in IT, but they define two core characteristics of big data. This is no coincidence – as IT practitioners learn more about the promise of big data, they apply its capabilities to the problems of existing SIEM solutions. The inherent strengths of big data overlap beautifully with SIEM deficiencies in the areas of scalability, analysis speed, and rapid data insertion. And given the potential for greater analysis capabilities, big data is viewed as a way to both keep pace with exploding volumes of event data and do more with it. Specific use cases drive interest in big data. Big data analytics are expanding, and complement SIEM. But the reason it is such a major trend is that big data addresses important issues in existing platforms. To serve prospective buyers we need to understand the issues that drive them to investigate new products and solutions. The basic issues above are the ones that always seem to plague SIEM – scaling, efficiency, and detection of threats – but those are generic placeholders for more specific demands.

Use Cases

More (Types of) Data – The problem we heard most often was “We need to analyze more types of data to get better analysis”. The point of including more data types, beyond traditional netflow and syslog event streams, is to derive actionable information from the sea of data. Threat intelligence is not a simple signature, and detection is more complex than reviewing a single event. Communications data such as Twitter streams, blog comments, voice, and other rich data sources are unstructured, and require different parsing algorithms to interpret. Netflow and syslog data are highly structured, with each element defined by its location within a record. Blog comments, phishing emails, botnet C&C, or malicious files? Not so much (the sketch after this paragraph illustrates the gap). The problems with accommodating more types of data are scalability and usability. First, adding data types means handling more data, and existing systems often can’t handle any more. Adding capacity to already taxed systems often requires costly add-ons. Rolling out additional data collectors and servers to process their output takes months, and the cost in IT time can be prohibitive as well. That all assumes the SIEM architecture can scale up to greater volumes of data coming in faster. Second, many of these systems cannot handle alternative data types – either they normalize the data in a way that strips much of its value, or the system lacks suitable tools for analyzing alternate (raw) data types.
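To illustrate the structured/unstructured distinction, here is a minimal sketch: a syslog-style record can be pulled apart with a single positional pattern, while a blog comment or tweet cannot. The regex and sample lines are our own illustrative inventions.

    # Sketch: structured syslog parses by position; unstructured text does not.
    import re

    # A syslog-style line: every field sits in a known position, so one
    # pattern recovers timestamp, host, process, pid, and message.
    SYSLOG = re.compile(r"^(\w{3}\s+\d+ [\d:]+) (\S+) (\S+)\[(\d+)\]: (.*)$")
    line = "May  1 10:15:03 web01 sshd[2412]: Failed password for root from 203.0.113.9"
    timestamp, host, process, pid, message = SYSLOG.match(line).groups()
    print(host, process, message)

    # An unstructured comment has no fixed record layout, so there is no
    # equivalent one-line parse; interpreting it takes different algorithms
    # (tokenization, reputation lookups, language analysis).
    comment = "great post!! also check my totally legit invoice at badsite.example/x.exe"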
Most systems have evolved to include configuration management and identity information, but they don’t handle Twitter feeds or diverse threat intelligence. Given evolving attack profiles, the flexibility to capture and dig into any data type is now a key requirement.

Anti-Drill-Down – We have seen steady advances in aggregation, correlation, dashboards, and data enrichment to help security folks identify security threats faster. But these iterative advancements have not kept pace with the volume of security data that needs to be parsed, nor with the diversity of attack signatures. Overall situational awareness has not improved, and the signal-to-noise ratio has gotten worse instead of better. The entire process – the entire mindset – has been called into question. Today the typical process is as follows: a) An event or combination of events that looks interesting is captured. b) The SIEM correlates and enriches data to provide better context, analyzes the data against its rules, and generates an alert if it detects an anomaly. c) To verify that a suspicious event is indeed a threat, a human generally must “drill down” into a combination of machine-readable and human-readable data to make sense of it. The security practitioner must cross-reference multiple data sources. Enrichment is handy, but too much manual analysis is still required to weed through false positives. In many cases the analyst extracts data to run other scripts or tools to produce the final analysis – we have even seen exports to MS Excel to find outliers and detect fraud. We need better analytics tools with more options than simple SQL queries and pattern matching. The types of analysis SIEMs can perform are limited, and most SIEM solutions lack programmatic extensions to enable more complex analysis. “The net result is we always get a blob of stuff we have to sift through, then verify, investigate, validate, and often adjust the policy to filter out more detritus.” The anti-drill-down use case calls for more automated checking, using more powerful analytics and data mining tools than simple scripts and SQL queries (a sketch of such an automated check follows below).

Architectural Limitations – Some customers attribute their performance issues – especially lagging threat analysis – to SIEM architecture and process. It takes time to gather data, move it to a central location, normalize it, correlate, and then enrich. This generally makes near-real-time analysis a fantasy. Queries run on centralized event servers and often take minutes to complete, while compliance reports generally take hours. Some users report that the volume of data stresses their systems, and queries on relational servers take too long to complete. Centralized computation limits the speed and timeliness of analysis and reporting. The current …
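As a taste of the automated checking the anti-drill-down use case implies, here is a minimal sketch that replaces the spreadsheet outlier hunt with a statistical check. The host names, event counts, and two-sigma threshold are invented for illustration.

    # Sketch: flag hosts whose hourly event volume deviates sharply from
    # the mean, instead of exporting counts to a spreadsheet to eyeball.
    from statistics import mean, stdev

    events_per_host = {
        "web01": 4210, "web02": 3980, "web03": 4105,
        "db01": 1020, "db02": 990, "db03": 1044,
        "jump01": 26500,   # the needle we want surfaced automatically
    }

    counts = list(events_per_host.values())
    mu, sigma = mean(counts), stdev(counts)

    # Anything more than two standard deviations from the mean gets
    # surfaced; a production system would run this continuously over the
    # event stream rather than on a static snapshot.
    outliers = {h: c for h, c in events_per_host.items()
                if abs(c - mu) > 2 * sigma}
    print(outliers)   # {'jump01': 26500}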


Incite 5/1/2013: Trailblazing Equality

I recently took the Boy to see “42,” which I highly recommend for everyone. It’s truly a great (though presumably dramatized) story about Jackie Robinson and Branch Rickey as they tore down the color line in major league baseball. My stepfather knew Jackie Robinson pretty well and always says great things about him. It seems the movie downplayed the abuse he took, alone, as he worked to overcome stereotypes, bigotry, and intolerance to move toward the ideal of the US founding fathers that “all men are created equal”. But importantly, the movie successfully conveyed the significance of his actions and the courage of the main players.

As unlikely as it seemed in 1945 that we would have a black man playing in the major leagues, it must have felt similarly unlikely that we would have an openly gay man playing in the NBA (or any major league sport). Except that it’s not. Jason Collins emerged from his self-imposed dungeon after 12 years in the NBA and became the first NBA player to acknowledge that he’s gay. It turns out men of all creeds, colors, nationalities, and sexual orientations play professional sports. Who knew? This was a watershed moment in the drive toward equal rights. NFL writer Mike Freeman tweeted that it was a great day in his life: “(I) get to see a true civil rights moment unfold instead of reading about it in a book.” Those interested in equality are ecstatic. Those wanting to maintain the status quo, not so much.

I tend not to discuss my personal views on politics, religion, or any other hot topic publicly. The reality is that I believe what I believe, and you believe what you believe. We can have a good, civil discussion about those views, but I’m unlikely to change my mind and you are unlikely to change yours. Most such discussions are a complete waste of time. I accept your right to believe what you want, and I hope you accept mine. Unfortunately the world isn’t like that. There was a tremendous amount of support for Jason Collins from basketball players, other athletes, and even the President of the US. There was also a lot of bigotry, ignorance, and hatred spewed in his direction. But when he stepped out of the closet he knew that would be the case. He was ready. And he is laying the groundwork for other gay athletes to emerge from their darkness. As Jackie Robinson blazed the trail for athletes like Roy Campanella, Larry Doby, and Satchel Paige to play in the majors, Jason Collins will be the first of many professional athletes to embrace who they are and stop hiding. I think it’s great. Hats off to Jason Collins and all the other courageous gay athletes who will become known in the months and years to come. You may disagree, which is cool. You are entitled to your own opinions. But to be clear, you can’t stop it. This genie is out of the bottle, and it’s not going back in.

–Mike

Photo credits: Sports Illustrated cover – May 6, 2013

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Defending Cloud Data/IaaS Encryption
  • Encrypting Entire Volumes
  • Protecting Volume Storage
  • Understanding Encryption Systems
  • How IaaS Storage Works
  • IaaS Encryption

Security Analytics with Big Data
  • Introduction

The CISO’s Guide to Advanced Attackers
  • Verify the Alert
  • Mining for Indicators
  • Intelligence, the Crystal Ball of Security
  • Sizing up the Adversary

Newly Published Papers
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

  • The worst press release of the year: It kills me to do this, but this week I need to slam an “article” on Dark Reading that claims users don’t care about security. This is clearly a press release posted as if it were a news article, which deliberately confuses readers. As an occasional writer for DR (and a huge supporter of the team there), it hurts to see such drivel intermingled with good content. Unfortunately many online publications now post press releases as articles in the ongoing battle to collect page views, which is a horrific practice that should be destroyed. Back to the press release, which has more hyperbole than the Encyclopedia of Hyperbole. It claims that users don’t care about security since they reuse passwords and don’t track the latest threats. That’s stupid. They reuse passwords because the alternatives don’t work for most average users. They don’t track threats or obsess about security because it isn’t their job. At least most FUD press releases make minor nods to reality – this one doesn’t even pretend. It reeks of desperation. Pathetic. – RM
  • Stepping into the AV time machine: I know this OPSWAT post, Varied Antivirus Engine Detection of Two Malware Outbreaks, is dated April 13, 2013, but it feels like 2003. It talks about the need to use multiple detection engines because anti-virus vendors add signatures for new attacks at different times. Wait. What? Evidently no one told these guys that blacklists are dead. But this seems to be a recurring theme – I recently got into it with another MSS, who told me how great it is that they can scan traffic with two different AV engines to catch advanced malware. I tried to delicately tell him that they wouldn’t catch much advanced malware with 15 AV engines, but they can certainly crush their throughput. I guess I shouldn’t be surprised – AV remains the primary control to fight attacks, even though it’s not good enough. Sigh. – MR
  • Always the last to know: Wendy Nather had exactly the same thought I did on the latest Verizon Data Breach Report, and hit the nail on the …


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.